Transforming Health Systems with AI: From Lab to Last Mile

20 Feb 2026 17:00h - 18:00h


Session at a glance

Summary

This discussion focused on the responsible development and implementation of AI in healthcare, featuring a demonstration of an end-to-end AI healthcare solution and a panel of global health experts and funders. Vikalp Sahni from EkaCare demonstrated how AI can address three key healthcare challenges: fragmentation of information, difficulty in collecting patient history, and reducing doctors’ administrative burden. The demonstration showed a 65-year-old diabetic patient named Neeti using AI to summarize her health records, communicate symptoms in her local language, book appointments, and receive care through an AI-enhanced electronic medical records system that could detect drug allergies and generate prescriptions.


The panel discussion brought together regulators, funders, and health experts to address the balance between accelerating innovation and ensuring safety. Dr. Richard Rukwata, a medical regulator from Zimbabwe, discussed how AI could help streamline regulatory processes while maintaining accountability. The panelists emphasized that technology represents only 10% of successful AI implementation, with the remaining 90% involving people and ecosystems. They stressed the critical importance of keeping humans in the loop and conducting rigorous real-world evaluations of AI systems.


A major announcement was made regarding a collaborative funding initiative between major health foundations including Wellcome Trust, Gates Foundation, and Novo Nordisk Foundation. This partnership aims to generate real-world evidence on AI integration in healthcare systems, particularly in low- and middle-income countries. The initiative will focus on rigorous evaluations of AI systems integrated into clinical decision-making, examining costs, effectiveness, and unexpected challenges. The discussion concluded with hopes for next year’s summit to feature honest conversations about what works and what doesn’t in AI healthcare implementation, while maintaining the essential human element in medical care.


Key points

Major Discussion Points:

AI-powered healthcare solutions demonstration: Vikalp Sahni presented EkaCare’s end-to-end AI system that addresses healthcare fragmentation, from patient symptom collection through multilingual AI assistants to automated medical record generation and prescription management with safety alerts.


Regulatory challenges in the AI era: Discussion of how regulators must balance accelerating innovation with maintaining safety standards, including the potential for AI tools to help regulators themselves process applications more efficiently while maintaining accountability.


Funding collaborative for real-world AI evidence: Announcement of a major joint funding initiative by global health foundations (Wellcome Trust, Gates Foundation, Novo Nordisk Foundation) to generate rigorous real-world evidence on AI integration in healthcare systems, particularly in low- and middle-income countries.


Human-in-the-loop approaches and safety considerations: Emphasis on the critical importance of maintaining human oversight in AI healthcare applications, including multi-agent architectures, continuous medical team involvement, and transparent decision-making processes, especially in high-anxiety situations like maternal care.


Data privacy and ethical implementation: Discussion of technical and policy approaches to protecting sensitive health data while enabling AI innovation, including federated learning, encryption, and adherence to regulatory frameworks like HIPAA and India’s DPDP Act.


Overall Purpose:

The discussion aimed to explore responsible AI development and implementation in healthcare, bringing together technology developers, regulators, and major global health funders to address how AI can improve healthcare delivery while maintaining safety, privacy, and human-centered care. The session focused on moving beyond hype to practical, evidence-based approaches for integrating AI into health systems.


Overall Tone:

The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s potential in healthcare but was tempered by acknowledgment of serious challenges and risks. The tone became increasingly focused on practical solutions and partnerships, with speakers emphasizing the need for rigorous evaluation, human oversight, and responsible development. The session concluded on a hopeful note about future collaboration while maintaining realistic expectations about the complexities involved in healthcare AI implementation.


Speakers

Speakers from the provided list:


Vikalp Sahni: Works at EkaCare, involved in building end-to-end healthcare solutions using AI technology


Sindura Ganapathi: Conference moderator/host, has a veterinary background, works with regulatory agencies, was involved in G20 meetings from the India side


Charlotte Watts: Executive Director of Solutions at Wellcome Trust, extensive career in healthcare, HIV, gender-based violence, epidemiology, mathematics, former UK government official who participated in G20 meetings


Trevor Mundel: Dr. Trevor Mundel, holds both a medical degree and a Ph.D. in mathematics, Rhodes Scholar, extensive experience in the pharmaceutical industry and global health, works as a funder of innovation


Richard Rukwata: Dr. Richard Rukwata, Director General of Medicines Control Authority of Zimbabwe, Chief Regulator, involved in regulatory harmonization work in Africa


Monika Sharma: Dr. Monica Sharma, lead for the Novo Nordisk Foundation (India), background in the biomedical field and science innovation, extensive experience in funding programs including the Newton Fund, IRTG (Germany’s International Research Training Groups), and India’s BioPharma Mission Program


Participant: Multiple unidentified audience members who asked questions during the session


Additional speakers:


None identified beyond the provided speakers list.


Full session report

This comprehensive discussion explored the responsible development and implementation of artificial intelligence in healthcare, featuring both a practical demonstration of AI healthcare solutions and a strategic panel discussion among global health experts, regulators, and major funding organisations. The session, moderated by Sindura Ganapathi, addressed critical questions about balancing innovation acceleration with safety requirements whilst maintaining human-centred approaches to healthcare delivery.


The session began with personal reflections from panelists about their feelings regarding AI development. Sindura referenced the Anthropic CEO blog and shared her own mixed emotions about AI advancement, while Monica Sharma from the Novo Nordisk Foundation shared an anecdote about her 6.5-year-old child asking whether AI robots would be good or bad, highlighting the human concerns surrounding AI development.


AI Healthcare Solution Demonstration

Vikalp Sahni from EkaCare presented a compelling end-to-end AI healthcare solution designed to address three fundamental challenges in modern healthcare delivery. The demonstration centred on a case study of Neeti, a 65-year-old diabetic patient with a February 14th appointment, illustrating how AI can transform the entire healthcare journey from initial symptom assessment to final prescription delivery.


The first challenge addressed was the fragmentation of healthcare information and delivery systems. Traditional healthcare often involves disconnected processes from appointment booking to vital sign collection, creating inefficiencies and potential gaps in care. EkaCare’s solution provides seamless integration across all touchpoints, enabling patients to navigate the healthcare system more effectively.


The second challenge focused on the difficulty patients face in communicating their medical history. Rather than fumbling through physical files and struggling to recall complex medical information, the AI system allows patients to photograph medical records, which are then digitally processed and summarised. The system incorporates ABHA, the digital identity that the Indian government provides, enabling comprehensive patient health record management.


The third challenge addressed the critical issue of doctors spending excessive time on administrative tasks rather than patient interaction. The demonstration showed how AI-powered medical scribes (EkaScribe) can capture doctor-patient conversations, convert them into structured medical notes, and automatically populate electronic medical records, freeing physicians to focus on patient care and counselling.


The technical demonstration revealed capabilities allowing patients to communicate symptoms in their local language whilst receiving contextually appropriate prompts to guide the conversation. The system demonstrated advanced safety features, including drug allergy detection that prevented potentially harmful prescriptions, automatically alerting the physician when prescribed medications conflicted with patient medical history. In the demonstration, the system changed a prescription from amoxicillin to clindamycin based on the patient’s allergy profile.


Sahni emphasised the importance of multiple AI agents working together rather than single-agent systems, particularly for complex healthcare workflows. This approach involves multiple AI agents working collaboratively, including grounding agents whose role is ensuring other agents remain within appropriate boundaries. This technical architecture, combined with continuous human oversight through a dedicated medical team of 10 members (which is growing), represents a sophisticated approach to maintaining safety whilst leveraging AI capabilities.
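As a rough illustration of this grounding pattern (an assumed design, not EkaCare's actual architecture), a grounding agent can be thought of as a validator that inspects another agent's draft output before it reaches the patient. Every name below is hypothetical:

```python
# Minimal sketch of a multi-agent flow with a grounding agent.
# Assumed design for illustration only; all names are hypothetical.

ALLOWED_TOPICS = {"symptoms", "appointments", "medications", "records"}

def triage_agent(user_message: str) -> dict:
    # Stand-in for a drafting agent: in a real system this would call an
    # LLM; here it returns a fixed draft tagged with the topic it covers.
    return {"topic": "symptoms", "reply": "Where exactly is the wound?"}

def grounding_agent(draft: dict) -> dict:
    # Keeps the drafting agent within appropriate boundaries: drafts that
    # stray outside the allowed scope are replaced with a safe handoff.
    if draft["topic"] not in ALLOWED_TOPICS:
        return {"topic": "handoff",
                "reply": "Let me connect you to the medical team."}
    return draft

result = grounding_agent(triage_agent("I have a fever and a wound on my foot"))
```

The point of the pattern is that no single agent's output goes out unchecked, which complements the human review layer the medical team provides.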


Regulatory Perspectives and Challenges

Dr Richard Rukwata, Director General of Zimbabwe’s Medicines Control Authority, provided crucial insights into the regulatory challenges facing AI implementation in healthcare. He articulated the fundamental tension regulators face: intense pressure from industry to accelerate approval processes whilst maintaining ultimate responsibility when things go wrong. This dual pressure creates a particularly challenging environment in the rapidly evolving AI landscape.


Rukwata highlighted how AI could potentially serve as a solution to regulatory bottlenecks rather than merely creating new challenges. His organisation is currently working with a Gates Foundation grant to develop AI applications for screening marketing authorisation applications. The vision involves creating neutral AI tools that serve both regulators and industry, helping both parties reach common positions more efficiently without favouring either side.


The regulatory perspective revealed an interesting paradox: whilst AI creates new complexities requiring oversight, it simultaneously offers tools to make regulatory processes more efficient and consistent. Rukwata noted that AI systems don’t have emotional biases or preferences, potentially making them valuable for creating more objective evaluation processes.


However, the discussion also acknowledged that regulatory jobs may be among the last to be replaced by AI, as society requires human accountability when things go wrong. Rukwata jokingly referenced the “Moonshot podcast” in noting that regulatory positions might offer job security in an AI-dominated future, underscoring the fundamental need for human responsibility in AI governance.


Global Health Funding Collaboration

A major announcement during the session revealed a groundbreaking collaborative funding initiative between three organisations: Wellcome Trust, Gates Foundation, and Novo Nordisk Foundation. This partnership represents a significant shift towards coordinated approaches in AI healthcare funding, addressing the fragmentation that has historically characterised global health innovation support.


Charlotte Watts from Wellcome Trust explained that the initiative specifically targets the critical evidence gap between promising AI efficacy studies and rigorous real-world evaluations. Whilst numerous studies demonstrate AI’s potential in controlled environments, there’s a significant shortage of randomised controlled trials assessing AI interventions when actually implemented in healthcare systems.


The funding call focuses particularly on low- and middle-income countries, recognising that these settings often face the greatest healthcare challenges whilst having the least resources for implementing and evaluating new technologies. The initiative will support rigorous evaluations examining not just clinical outcomes but also system integration challenges, cost-effectiveness, and unexpected implementation barriers.


Trevor Mundel from Gates Foundation emphasised that global health has been constrained by over-reliance on modelling and simulation due to lack of primary data. This collaborative funding approach aims to generate the real-world evidence necessary to move beyond theoretical models to practical implementation guidance.


Monica Sharma from the Novo Nordisk Foundation highlighted how the coordinated approach reduces fragmentation and creates shared standards, eliminating the burden on researchers and developers who previously faced different criteria, timelines, and expectations from multiple funders. This alignment represents a commitment to shared standards and recognition that real-world evaluation is foundational rather than optional.


Human-Centred AI Development and Implementation Challenges

A recurring theme throughout the discussion was the critical importance of maintaining human involvement in AI healthcare systems. Trevor Mundel made a particularly insightful observation that whilst people frequently acknowledge that technology represents only 10% of AI applications, with the remaining 90% involving people and ecosystems, discussions invariably return to focusing on technology.


The discussion explored various models for human involvement, from technical architectures with medical team oversight to research standards requiring ethical clearance and anonymity protections. The human-centred approach extends beyond technical implementation to address emotional and psychological aspects of healthcare delivery. A participant raised important questions about building AI agents that are not only intelligent but reassuring in high-anxiety environments like maternal and infant care, highlighting the need for AI systems that can provide emotional support.


Vikalp Sahni identified key technical challenges including building systems that work across multiple languages and generating verifiable data for large-scale model training. He also raised the important question of who evaluates AI capabilities being built in healthcare, highlighting the need for standardised evaluation frameworks.


A particularly insightful contribution came from a participant working on geospatial AI models for tuberculosis case finding, who distinguished between clinical decision support and operational decision support. This participant highlighted that whilst patients entering healthcare systems generally receive care, there are “silent patients” in communities who remain undetected and underserved, expanding the discussion beyond optimising existing healthcare delivery to addressing fundamental equity and access issues.


Data Privacy and Balancing Speed with Safety

The discussion addressed critical concerns about data privacy and ethical implementation of AI in healthcare. Vikalp Sahni outlined EkaCare’s approach to data privacy, emphasising adherence to established frameworks including HIPAA for healthcare data and India’s Digital Personal Data Protection (DPDP) Act, noting that customers increasingly require continuous certification from privacy authorities.


The technical discussion explored emerging approaches like federated learning, which allows local data to remain private whilst contributing to model improvement. Trevor Mundel noted that whilst this approach shows promise, regulatory frameworks haven’t fully addressed whether such data contribution constitutes disclosure under current policies.
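The core idea of federated learning can be sketched in a few lines: each site updates a shared model on its own private data, and only the updated weights, never the raw patient records, are sent back to be averaged. This is a generic FedAvg-style illustration, not the specific system discussed in the session:

```python
# Generic federated-averaging sketch: raw data stays at each site;
# only model weights leave. Illustrative only, not any deployed system.

def local_update(weights: list[float], site_gradient: list[float],
                 lr: float = 0.1) -> list[float]:
    # Each hospital adjusts the shared model using gradients computed
    # on its own private data (the data itself never moves).
    return [w - lr * g for w, g in zip(weights, site_gradient)]

def federated_average(site_weights: list[list[float]]) -> list[float]:
    # The coordinator averages the per-site models into a new global model.
    n = len(site_weights)
    return [sum(ws) / n for ws in zip(*site_weights)]

global_model = [0.5, -0.2]
site_gradients = [[0.1, 0.0], [0.3, -0.2]]  # one gradient vector per site
sites = [local_update(global_model, g) for g in site_gradients]
new_global = federated_average(sites)
```

As Mundel noted, whether sharing such weight updates counts as a "disclosure" of patient data is exactly the kind of question current regulatory frameworks have not yet settled.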


A central tension throughout the discussion involved balancing the urgency of addressing healthcare challenges with the need for careful, safe AI implementation. Trevor Mundel articulated this paradox, suggesting that “completely focusing on fast might be slow” because premature deployment could create setbacks that ultimately delay beneficial applications. He drew parallels to self-driving vehicle development, where single fatal accidents can derail entire programmes despite potentially superior safety records.


Interactive Elements and Future Directions

The session included interactive elements, with Sindura mentioning QR codes for audience engagement and sharing her background as a veterinarian, suggesting potential applications in pet care. The discussion concluded with participants sharing aspirations for next year’s AI Summit in Geneva.


Trevor Mundel expressed hope for seeing the next iteration of patient-facing AI agents that would be completely transparent, allowing users to understand decision-making processes whilst maintaining confidence in critical areas like drug contraindications. Charlotte Watts, who mentioned previous interactions with Sindura during G20 meetings, hoped to see funded partners presenting operational results rather than funders discussing plans, representing a shift from theoretical discussions to practical implementation experiences.


Richard Rukwata called for increased collaboration between industry and regulators, recognising that both parties ultimately want safe, effective healthcare solutions. Monica Sharma concluded with a powerful reminder that regardless of AI advancement, human doctors should retain final decision-making authority, emphasising the importance of maintaining the human element in healthcare.


The session concluded with Sindura presenting a souvenir from India, maintaining the collaborative and international spirit of the discussion.


Conclusion

This comprehensive discussion represented a mature approach to AI in healthcare that acknowledges both transformative potential and significant challenges. The session successfully moved beyond typical technology demonstrations towards substantive examination of implementation challenges, regulatory requirements, and human-centred design principles.


The collaborative funding announcement represents a significant step towards coordinated, evidence-based approaches to AI healthcare development. The emphasis on human oversight, real-world evidence generation, and careful implementation reflects a field that’s learning from other technology sectors whilst recognising the unique sensitivities of healthcare applications. The discussion demonstrated that successful AI implementation requires not just technological advancement but sophisticated understanding of regulatory frameworks, funding mechanisms, human psychology, and system integration challenges, all whilst preserving human agency and maintaining safety standards.


Session transcript

Vikalp Sahni

All of us here, we would have visited doctors at some point in time or have been sick. Anyone who has never visited a doctor, please raise your hand. So practically everyone. So let’s imagine: how was your experience when you visit a doctor? How do you express your symptoms? How does the doctor interact with you, and how does the interaction happen with the medical systems where EMRs come in? What we are trying to show today, and what we’ve built at EkaCare, is an end-to-end solution that solves three key challenges that we face today. One is the fragmentation of information and care delivery, be it right from taking an appointment or taking vitals. The second is how easily and comfortably you can tell about your history rather than fumbling through lots of files, and how easily it can be collected and collated.

And last but not the least, we would want doctors to spend time with us and not with machines writing prescriptions; rather talking to us, counseling us, connecting with us. So the solution that we have built solves for all these three challenges. Obviously, thanks to the advancement in AI, we have been able to do a lot of this due to the capabilities that we have built in-house. So I’m going to narrate a story. This story is of Neeti. She’s a 65-year-old female, has diabetes, and she wants to now see how she can do the whole end-to-end care delivery. To start off, Neeti is quite digital savvy. She actually has created her ABHA address.

ABHA is the digital identity that the Indian government provides. This digital identity allowed her to collect a lot of her medical records into the app, which is her PHR or patient health record app. She has also taken many photographs so that the AI can read through these photographs and collect her medical history in a digital format so that it can be summarized. Now what happens is Neeti wants to talk to an AI, which is a med assist or an assistant for Neeti. She goes ahead, she just picks up a prompt, says summarize my health. What is happening now is all of Neeti’s health is getting summarized. Neeti would know these are the kind of things that have come up from the medical records.

Also, there is a prompt that Neeti would get, which is very, very relevant to the kind of things that Neeti is supposed to talk about. But today Neeti came for a very different purpose. And now, in a local language, she’s talking to the bot. She’s expressing that she has a fever and there is a wound in her foot. What the AI would start doing now is try to understand more about this specific condition. Where is the wound? Is it swollen? Is there any kind of smell coming in? And all of this is happening in the local language that Neeti understands. More importantly, it is not limiting Neeti to only typing or talking.

There are these prompts that are coming in that will ease off the interaction for a 65-year-old female. After collecting more information, such as the mobile number, the AI would identify that this is an important case and this needs a doctor’s intervention. But which doctor’s intervention? At which clinic? On which day? All of this information will now get collected. This will be displayed. So in this case, Neeti is being told that there is an availability of these two doctors on the 14th of February. But she can always say that, okay, I want to do it on a different day. Pick the doctor. As soon as she picks the doctor, the appointment gets created. Neeti can actually do all of this by typing or by acting on the prompts as well.

So this is how all the information that Neeti wanted to share with the doctor gets collected, gets summarized, and now the appointment is created. The next story goes to when Neeti visits the doctor’s clinic. And when Neeti visits the doctor’s clinic, this is the doctor’s view, where a doctor is looking at a classical EMR screen. But how this EMR screen is fitted with these AI utilities that can help a doctor get a better outcome is what we want to demonstrate. If you see, the current EMR and the current prescription for Neeti are completely empty. There is nothing there. The doctor is looking at the past history of Neeti as well as the current ailments and current issues that have been listed.

AI also ensured that it not only figures out the important information for the patient, but here a doctor is also able to understand and get to know more about Neeti: that there is an uncontrolled diabetes. So this is the kind of person that he’s dealing with. But more importantly, it would be very hard for a doctor to start filling in all of this information. During the consultation, the doctor just starts the audio-based EkaScribe, which is now recording the interaction between the doctor and the patient. These interactions get converted into medical notes, and these medical notes are verifiable medical notes that doctors would see. Again, this entire thing has come out just from the interaction between the doctor and the patient.

The doctor has to just do copy to EMR pad. As soon as the copy to EMR pad happens, this entire information gets filled in: whatever has been discussed, all the medication that the doctor wanted to prescribe. But here we go and see that during the consultation, the doctor prescribed amoxicillin. But the patient’s medical history said that he or she is allergic to amoxicillin. The capable AI-based EMR is now alerting that the patient is allergic to amoxicillin. Without actually going deeper, a doctor can very easily go ahead now and change this medication to provide for a better outcome as well as to reduce medical errors.

So it’s changed from amoxicillin to clindamycin. As it changed, the prompt also changed. If you look at the information, all filled, the PDF view for the patient will have the entire medications, everything created in the local language. There is a translation of all the remarks, advice, everything in the language that the patient understands. And at the click of a button, this information goes and sits in the patient’s PHR app, creating another node in her medical record that can be used for further consultations and any other ailments. So that is the power of AI and the utilities that we are seeing today: the care process going from being fragmented to being consolidated, understanding the patient’s entire medical history, and making sure that the doctor’s time is saved while he’s seeing more patients and more medical data is captured.

Today, all of that is possible. But yes, there are challenges. How to build these things at scale for multiple languages. How to generate the data so that your models are verifiable at that large scale. Who is evaluating these capabilities that are being built? All of these are challenges that we as developers face. And I’m looking forward to building more and working more in this domain.

Sindura Ganapathi

I’ll ask you to take a seat. When you said, is there anyone who has not visited a doctor, instinctively I was asking, does a veterinary doctor count? Because I’m a veterinarian by background. And it’s only a half joke, actually. In the pet care industry, there is real value and business to be made there. So just a thought. And on a more serious note, you could change the name of the lady and adjust the age, et cetera, and that could be my mother. I deal with this personally as a caregiver: she has all these conditions, and I deal with so many papers. And every interface you mentioned is a leaf out of my personal life. So thank you for thinking about building a solution here.

I will invite my panelists one by one; please join us on the stage. First, Dr. Richard Rukwata. He is the chief regulator, the Director General of the Medicines Control Authority of Zimbabwe. I have very high regard for regulators because I have been working on our regulatory agency and its streamlining, and I can see how difficult a job that is. And the fact that you have seen this through for ML3 recognition, that’s a wonderful accomplishment. Congratulations. Not an easy job. And also, you are involved in the regulatory harmonization work of Africa, and there are a lot of interesting thoughts you will hopefully be able to share. Next, I would like to invite Professor Charlotte Watts.

Last we saw each other was at the G20. Hopefully, it brings back memories. Yes. Happy ones. I’d like to keep it that way. She has had an extensive career in healthcare, HIV, gender-based violence, epidemiology, mathematics, and has deep experience working in the UK government, which was the capacity in which she came for the G20 meetings, which I was involved in from the India side. So it’s a pleasure to have you back, Charlotte. And now she’s working at Wellcome Trust as Executive Director of Solutions. I would love to hear more about how you are thinking about these things. And next I would like to invite Dr. Monica Sharma. I happened to meet her just now, and she is the lead for the Novo Nordisk Foundation in India.

And welcome. Her background is in both the biomedical field and the science innovation field, and she also has extensive experience putting together funding programs, whether it is the Newton Fund, IRTG, Germany’s International Research Training Groups, or India’s BioPharma Mission Program. So all of these, I’m sure, will come in very handy in your current role, and I would love to hear from you on thoughts related to the topic today. And last but not least, my dear friend and mentor, Dr. Trevor Mundel. I should say Dr. Dr. Trevor Mundel. He has an unusual background. People who work with him smile when I say unusual. Trevor did a medical degree and then he figured he wanted a Ph.D.

in mathematics. He is a Rhodes Scholar and has extensive experience in the pharmaceutical industry, from early research to development, and a decade-plus of experience in global health. With that, we will get started. To begin with, I think you all have mics. For me personally, coming here after having read the blog that went out very famously by the CEO of Anthropic, I came in with a very bleak feeling, to be very honest. It’s kind of depressing: what are we creating? But I have to say the last two, three days have been energizing, seeing all the chaos in terms of interactions, people talking to each other, hustle, just hustle, and people excited about the product they are building. It brought back memories of the vegetable market where I grew up, where people are full of life, right? People are trying to sell something, people are trying to buy something, people are talking. And the reason I talk about that as a happy thing is that it’s nice to see so many human beings. That’s what came to my mind in the backdrop of that blog. So I’d just love to hear from you: what was your feeling as human beings? Is there anything that you want to particularly share from the last two, three days? You have been here.

You saw all of this. What did that make you feel? Because I think going forward, this feeling of human beings, I think, will have a currency of its own. Anybody want to volunteer and say something? An open-ended question.

Charlotte Watts

Yes, I’m happy to jump in. So I just got here yesterday, so I actually missed, I think, the early start of the week, which I heard was fantastic because you had the youth here, as well as, you know, older people who’ve been in global health, or the global sort of sphere, or in the AI world for longer. So that mix and the drive of that kind of energy, I think, is what I was hearing people tell me about the start of the week. But now I’ve just been here, sort of, last night and today. And for me, what I feel quite reassured about, I wonder if it’s because, you know, the change

is so profound. And so, I suppose, I was sort of wary, because there’s so much hype, and clearly the risks are being articulated. But what I feel reassured about, in going to a number of sessions, is that actually we’re starting to have the more meaningful conversations about what this really means. That is getting beyond either the hype-sell or the hype-fear to: how do we navigate this space? And also, how do we navigate this as a global community? Because this is not something that’s one country’s problem to fix. So actually I’m feeling that, you know, this is a really important conference, and we’re starting to get into the nitty-gritty of how on earth we move forward in the best way.

Sindura Ganapathi

Does anybody else want to share? Trevor, and then Monika.

Trevor Mundel

Well, Sindura, what I’ve heard frequently in this meeting, and what I hear quite often in the AI application space, is that technology is just 10% of the exercise in applying AI; the rest is really about people and ecosystems. And as soon as people say that, they go back to talking about technology. So I am interested in how we do more than just pay lip service to the notion that we need to think about the ecosystem and the people involved, probably more than the technology itself. And defining the actual role for humans in the loop is going to be, I think, as important as any of the technological advances.

Monika Sharma

So, Sindura, I don’t have an experience from the summit as such, because I’ve just arrived. But I want to share a very relevant experience from this morning. While I was getting ready to come here, my six-and-a-half-year-old saw AI on my computer and asked where I was going. I said I had a meeting to attend. He said, “AI?” and I realized he could see it. I asked, “So what is this?” He said, “It’s artificial intelligence.” I asked what else he knew about it, and he said, “Soon there are going to be robots doing everything for us.” I said, “No, you would still need me,” and I thought, oh my god, that’s not a good start to a conversation; everybody is influenced by this. So thank you so much for bringing the human back to this summit. That’s what I thought I’d add, a conversation from my household this morning. Thank you.

Sindura Ganapathi

Yeah, Charlotte, I hope you’re right that there is a lot of hype; I’m now praying it is hype, after reading it. How many of you have read the blog I referred to, by Dario Amodei, Anthropic’s CEO? Okay, a few hands. I am not even sure whether I want to urge you to read it, because it really makes you think. And there were some people in the field who said, “I am choosing not to read it, because I don’t want to know.” So it’s a good thing to hear this about the human in the loop and the way we develop responsibly, because that’s the theme we want to explore, especially in the context of health.

That, I think, is a good segue. Dr. Richard, I want to start with you. The job of a regulator, as I said, is hard. I have experienced this firsthand, working very closely with our regulatory system, where you have two extreme pressures on a regulator. On one side, it needs to move fast: everybody wants lighter regulation, you want to speed up innovation, every day counts, and you are held to that metric. That’s one extreme. The other extreme is: if anything goes wrong, who is the first person asked? Who approved this? Who allowed it to come out? These are two extremes, and usually, in a slower cycle, you have some time.

So, how are you thinking about reconciling these two extreme demands put on a regulator, both in the age of AI and in general?

Richard Rukwata

Yes, thank you for that insight. I have to think on my feet here, but you’re quite right. It’s a matter of industry wanting more results from the regulator for its investment, and also wanting the regulator to retain responsibility when things go wrong. I remember watching a very interesting podcast, I think it was called Moonshot, and in this episode they were saying that if all the jobs are taken by AI, regulatory jobs will be the last to remain, because people should always have somebody to blame, right? We can’t say, “Oh, nobody is responsible; AI did it.” No, that would never work. So, worst case scenario, I’ll be the last person there, so that they can hang me when something goes wrong.

At least I have that job security to think about. But really, with respect to industry’s expectations, we see a lot of potential in AI. We’re currently working, with a grant from the Gates Foundation, on an application for screening applications for marketing authorizations. Those in our industry, the pharma industry, know that this is the biggest source of angst among industrialists: that regulators take too long, and that we are seen as an impediment to progress. We also blame industry, saying, “Well, you submit incomplete applications and then blame it on us.” So we’re hoping that, with technology, we’ll soon have applications that work for both sides of the fence.

Neutral applications that don’t favour one side, but enable all of us to reach a common position very quickly. This is the beautiful thing about computers, right? They don’t feel any type of way about you. They don’t like you; they don’t dislike you. So we’re hoping that as we work towards the development of these tools, we’ll see more traction from industry, so that we become a more efficient part of the supply chain from development to market, and are not seen as the barrier to entry in this field.

Thank you.

Sindura Ganapathi

That’s very helpful. There is a challenge for a regulator when AI speeds up the cycle of innovation and brings new complexities, but AI is also itself a very good tool, whether for summarizing a complex application or for building models that allow a few people to have the same capability as a well-developed pharma company, so that everyone is on the same page. There are lots of interesting possibilities here, which in India we’re also thinking about along those lines. All three of you share one commonality, which is funding innovation. And as funders of innovation, you too, in not too dissimilar a way, are trying to balance promoting innovation with upholding safety and minimizing risk. So I would like to hear from each of you, because each of you is a different kind of funder, how you are thinking about balancing these two in your funding programs, while scouting for innovation and speeding it up. You can go in any order; you can thumb-wrestle.

Charlotte Watts

Trevor’s pointing to me, but I went first last time, so now I can go. I mean, we fund a range of innovations with the ambition of improving and saving lives.

Increasingly, we are funding innovation…

Trevor Mundel

Well, on the acceleration front, we look at it this way: every month we don’t have the next-generation malaria vaccine, we are seeing hundreds of thousands of deaths in young children each year. Every year we don’t have enhanced personal coaching in education, we see a generation losing opportunities. So we feel tremendous pressure on the funder side in terms of how we speed up the availability of, and access to, a tool that looks like it might be a solution to some of those vexing problems. But I think it really behoves us here to consider that focusing completely on fast might be slow.

And we have to have this moment of reflection, because what could derail the good application of AI? Think about it in the health area, which is so sensitive: as on the regulatory front, relatively few errors could occur, an unfortunate outcome for a patient attributable to a system that was probably misused by the people operating it, but that will nevertheless be attributed to AI. And that leads to tremendous deceleration and things not moving ahead. Take the lesson of self-driving vehicles: they may be incredibly good drivers, better than the average human, but one fatal accident puts the whole enterprise at risk.

So I think, from the funder’s perspective, taking a slightly more reflective and slower approach might actually be fast.

Sindura Ganapathi

Monika?

Monika Sharma

I represent the Novo Nordisk Foundation, and we support both health and the planet, people and planet. Sitting here with global funders like yourselves sends a strong message about how important AI is at the moment with respect to health. While as funders we are each trying to address different parts of the health ecosystem, having AI bring evidence to this really matters. So I really feel that a joint approach strengthens the whole AI ecosystem.

Sindura Ganapathi

The QR code lets you look at it, and all the details, I believe, are there; I have not tried it. I would very quickly like to hear from any of you, or all of you: what are you hoping for from this? And after that, I would like to make it more interactive. I usually find panels very boring, by the way, both as a person sitting in the audience and as a person sitting up here trying to dispense gyan, a little wisdom, in two minutes. So get your questions ready; there is still time. Right after this, I hope to see you interacting, sharing your thoughts and your questions.

I’ll be coming to you. So, who wants to speak to their hopes for this call?

Charlotte Watts

Yeah, we’re really excited about this announcement today. You’ll see it is the big health research and innovation foundations coming together to jointly support a major initiative. Essentially, what we want to do is ask: how do we generate real-world evidence on what this really means, and are we really seeing real-world health impacts once we start to integrate AI into different health systems? We have lots of exciting results showing the efficacy of particular applications, but what this call wants to support is rigorous evaluations of AI systems integrated into clinical decision-making. Our focus is on low- and middle-income countries, and we are interested in asking a range of questions.

What does it mean for the health system? Are these new initiatives actually operable? Can they be integrated into what is often quite a big bureaucracy of a health system? What are the costs associated with that? Are these interventions actually cost-effective? In the end, ministries of health have to make decisions based on affordability, so how do we learn more about the costs of this transition? And what are the things we didn’t expect? If we look at the evidence base, we have a lot of exciting evidence of interventions that show promise, but only a relative handful of rigorous randomized controlled trials actually assessing interventions when they are implemented.

So there is a massive gap there. And in different contexts, we are also now starting to see anecdotal evidence of AI being integrated but butting up against the system, so that the opportunity isn’t being realized: it is easier said than done. Basically, this investment is to try to address that evidence gap. And I just want to call out that Jay Powell is here, and APHRC, who are key partners on this, supporting the implementation and, in APHRC’s case, the contextualization of the work that we hope to support in Africa.

Sindura Ganapathi

Wonderful. Thank you. Anything else, Trevor, you want to add?

Trevor Mundel

Well, I just want to say thanks to our partners, who welcomed the Novo Nordisk Foundation on this initial effort; I hope it is the start of even more in the future. The global health world has been plagued by a lack of primary data. We and others have funded a lot of modeling and simulation around global health problems, but at the end of the day you cannot transcend the lack of primary data. And AI is too important for that to be the constraint that impedes implementation.

Monika Sharma

I thought it would be good to also add how, as we fund this together, we envision it as a commitment to shared standards. By working together as part of this call, we are saying that real-world evaluation is not optional; it is the foundation. By aligning, we are defining what good looks like, so that we reduce the burden on countries and developers who would otherwise face a patchwork of expectations. And secondly, by joining hands we are reducing fragmentation in a rapidly evolving field. Because we are coordinated, we avoid the risk of duplication while upholding the quality we want to see in the applications and products.

And we make sure that the investments we make get into the real world and create an impact, because of the coordination built into this whole process. Sitting together also adds seriousness to the ecosystem: what we are doing is not a side experiment; we are creating infrastructure for the long term, something governments have been asking for. And the best part, speaking as a researcher, is that researchers don’t have to navigate three different timelines or three different sets of criteria; there is one agreed, aligned set of criteria.

No three different deadlines, no three different timelines: it really makes life easy as a researcher. I hope we see some interesting proposals come from it.

Sindura Ganapathi

So, do we have questions? Is there a mic going around? I hope there is; if not, I’ll give you mine, so I don’t have to answer your questions. You can direct your question to anyone, including Vikalp. There is one hand up there. Let’s start with the gentleman at the back, and then you’re up next.

Participant

Thank you, folks. Very interesting. My question is around data privacy and data privacy by design. The lady mentioned three different parameters. Could you elaborate on how data privacy can be incorporated, at least at a policy level?

Sindura Ganapathi

Does anyone want to take that question, at least in the context of this call, or in general? How are you handling this?

Vikalp Sahni

Health data is quite sensitive, among the more sensitive kinds of data, whether it concerns a country, an individual, or even institutions such as the police or the military. So it’s a very valid question. One thing we try to follow as an organization is the general guidelines provided by the competent authorities, be it HIPAA for healthcare data, or the DPDP Act, India’s data privacy law. More importantly, data exchanges such as the NHA in India have also created clear guidelines. Following those guidelines, and getting yourself tested against them, is fundamentally important.

It has become so sensitive that today many of our customers ask us whether we hold continuously applicable certifications from these privacy authorities and privacy frameworks. So that’s how we solve for it, and I think it’s a good thing; in health, it is fundamentally critical. And as the technology grows, there are multiple other approaches as well, such as end-to-end encryption and so on, that we can use to keep things private.
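As an editorial aside, one privacy-by-design technique in the spirit of the guidelines Vikalp mentions is pseudonymising patient identifiers with a keyed hash before records leave the clinic, so the mapping cannot be reversed without the secret key. This is an illustrative sketch, not EkaCare’s actual implementation; the field names, key, and the `ABHA-1234` value are all hypothetical.

```python
# Sketch: pseudonymise direct identifiers with a keyed hash (HMAC-SHA256)
# while letting clinical fields pass through untouched. Illustrative only.
import hmac
import hashlib

SECRET_KEY = b"rotate-me-regularly"  # held by the data controller, never shared

def pseudonymise(record: dict) -> dict:
    """Replace identifying fields with short, deterministic tokens."""
    out = dict(record)
    for field in ("patient_id", "phone"):
        token = hmac.new(SECRET_KEY, str(record[field]).encode(),
                         hashlib.sha256).hexdigest()[:12]
        out[field] = token
    return out

record = {"patient_id": "ABHA-1234", "phone": "9999999999", "hba1c": 7.2}
print(pseudonymise(record)["hba1c"])  # clinical value passes through: 7.2
```

Because the same input always maps to the same token, records for one patient can still be linked across visits for analysis, without exposing who the patient is.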

Sindura Ganapathi

So there are two aspects to it: one is technological, and the other is policy. There are other sessions entirely focused on the people working on policy, so I wouldn’t put you in the position of answering that. But on the technological front, Charlotte or Trevor, if you want to address it: what are some of the approaches, such as model learning without data being exchanged, or synthetic data, that have been at the forefront? Charlotte, whatever you want to add.

Charlotte Watts

I just wanted to say, in terms of the evaluations we want to support through this funding, we clearly expect anonymity; basically, for those evaluations to adhere to high-quality research standards. That means the kinds of bars, checks, and controls you would expect in any research study on health, and the ethical guidance and clearance procedures you need to adhere to. For us, that is an important part of any research we support, including in this initiative, and it covers issues of privacy among other things.

Sindura Ganapathi

Do you want to say anything about the emergence of new technologies that help preserve data privacy without giving up the learning and improvement of the models?

Trevor Mundel

Yeah, Sindura. For us, there is no compromise on patient data privacy in clinical trials, as Charlotte has mentioned. But AI does raise a lot of other issues that go almost beyond that. For instance, there are the various models of federated learning that people have introduced, where data stays locally private but contributes to the evolution of a model, which improves because it has access to a very diverse data source. Now, has that actually been regulated? We had an example of one of our grantees who produced a very good system for using ultrasound to diagnose certain chest diseases, based on federated contributions from different groups that kept their own data local and private while contributing to the model.

And that hasn’t really been tested, and neither have the policies around whether that counts as a disclosure that is acceptable in the age of AI. I think it is something we want to encourage, with the right framework.
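As an editorial aside, the federated learning pattern Trevor refers to can be sketched in a few lines: each site fits a shared model on its own private data and exposes only the resulting parameters, which a coordinator averages. This toy example uses a one-parameter linear model with made-up data, and is not drawn from the grantee’s actual system.

```python
# Sketch of federated averaging (FedAvg): each site trains on its own
# private data and shares only model parameters, never raw records.
# The model is a toy one-parameter regression y = w * x.

def local_update(w, data, lr=0.1):
    """One gradient-descent step computed entirely on a site's private pairs."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, sites):
    """The coordinator sees only each site's returned parameter, which it
    averages; the patient-level data never leaves the site."""
    local_ws = [local_update(global_w, site_data) for site_data in sites]
    return sum(local_ws) / len(local_ws)

# Three "hospitals" whose private data follows roughly y = 2x.
sites = [
    [(1, 2.1), (2, 3.9)],
    [(1, 1.8), (3, 6.2)],
    [(2, 4.0), (4, 8.1)],
]
w = 0.0
for _ in range(50):
    w = federated_round(w, sites)
print(round(w, 2))  # the shared model converges near 2.0
```

The regulatory question Trevor raises is visible even here: the raw `(x, y)` pairs never leave a site, yet the shared parameter still encodes information derived from them, which is why whether this counts as a "disclosure" is unsettled.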

Sindura Ganapathi

Thank you. Do you have the mic? Okay. Then if you have another mic, you can take it to the gentleman, madam, and then after you.

Participant

My question is for Professor Watts. You mentioned clinical decision support. The context, from an Indian healthcare setting, as you are well aware, is that the majority of our healthcare is delivered at the front line, so there is also an element of operational decision support. There is a set of geospatial AI models we are working on with Google for geospatial inferencing in the tuberculosis space, mostly active case finding and diagnostic network optimization. From an evidence perspective, we are obviously doing some retrospective analysis, and we plan to follow it up with a prospective analysis, although it is a single-user study. I am wondering if you have any thoughts on that.

Would this be of interest, and what is your inclination towards operational decision support? I am a physician myself, and a medical informaticist with a PhD. I can tell you one thing for sure: the patients who come into the system are, for the most part, taken care of; it is the silent patients out there in the community, undetected, who are not. So what is your inclination in this research grant towards such solutions?

Charlotte Watts

It’s a wonderful question, because essentially I come from public health. Our collective interest is in focusing our evaluations and generating evidence where there is the greatest opportunity to improve health and strengthen systems, and some of that might indeed be outreach and improved care for the underserved. So we are not going to prescribe what fits and what doesn’t, but ultimately we are interested in how an intervention integrates within the system. In the call we mention the importance of looking at interventions at the primary care level, not only at tertiary care. What will resonate with our interest is whether there are areas where the opportunity is big enough to merit an assessment: is this really translating into tangible health impacts, is the return actually affordable, and is it something that could be scaled? So that issue of how it connects with the system is an important part of the question we are interested in.

Trevor Mundel

I do think it’s a very important question, because you are probably all aware of the funding constraints we now face in the global health space, even for some of the exciting new technologies coming along, whether at the level of the Global Fund or of Gavi, both of which have not quite met the standard we would like in their replenishments. So there is simply less funding available for those critical commodities that could be life-changing. And when we get a TB vaccine, which we hope we might have in, say, three years, how are we going to afford to actually get it out to the people who need it?

So it’s exactly the kind of risk targeting you are talking about that can make all the difference: taking the lesser amount we can afford and putting it where the need is greatest. And that matching, which the AI systems and the geospatial targeting you describe can provide, is exactly the kind of solution we need to promote and understand.

Sindura Ganapathi

So, the person who has the mic, and then you can hand it to the next person after you ask your question.

Participant

It has been a great session. How do we go about building AI agents that are not only intelligent but also reassuring in very high-anxiety environments like maternal and infant care? I would love to hear your thoughts, because we are building something in the same space.

Sindura Ganapathi

When you say high anxiety, just so that we understand?

Participant

High anxiety in maternal and infant care. Even myself, as a new mother, I feel there are a lot of open areas where the mother doesn’t know what to do. It is an open field, and the support system of pediatricians, gynaecologists, and the mother’s own network is very thin when you go down to tier-two and tier-three cities. How do we go about building for that? I would love to get some thoughts.

Sindura Ganapathi

Take it.

Vikalp Sahni

One of the things we have done while building a lot of these agentic pipelines for doctors and for users is to keep a human in the loop while development is happening; that is extremely, extremely important, and it is what Trevor also mentioned, because where this can go and where it can lead is not something you can fully control today. So there are systems specifically designed so that anonymous, de-identified conversations are distilled to check whether the agents are working together in tandem. The second thing, more technical, that we have figured out is that the models are quite capable, but when you run a single agent with a single goal and a single prompt, that at times narrows down the whole worldview.

But if you run multiple agents collaborating, where there is a grounding agent whose job is to make sure the other agent is not going beyond the set boundaries, that is fundamental in healthcare. A single agent with a single prompt is what we should avoid, because it is quite a deep workflow, especially in maternal health, where mental health comes into play. It is fundamentally important to follow good technical principles and create a multi-agent architecture, but more importantly, to have a human in the loop, because we, as a company, haven’t found a way to do without one.

That is why we have a strong ten-member medical team, which is still growing, of doctors working alongside the technology.
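As an illustration of the architecture Vikalp outlines, a drafting agent paired with a grounding agent that enforces boundaries and escalates to a human, here is a minimal rule-based sketch. In a real system both agents would be model calls and the blocked-topic policy would be clinically authored; every name below is hypothetical.

```python
# Sketch of a two-agent guardrail: a drafting agent proposes a reply, a
# grounding agent checks it against hard boundaries, and anything it cannot
# clear is escalated to a human clinician. Rule functions stand in for LLMs.

BLOCKED_TOPICS = ("dosage", "prescribe", "diagnosis")  # assumed policy list

def drafting_agent(question: str) -> str:
    """Stand-in for a model that drafts a patient-facing answer."""
    return f"Here is some general guidance about: {question}"

def grounding_agent(question: str, draft: str) -> bool:
    """Approves the draft only if it stays inside the allowed boundaries."""
    text = (question + " " + draft).lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

def answer(question: str) -> str:
    draft = drafting_agent(question)
    if grounding_agent(question, draft):
        return draft
    return "ESCALATED: routed to the on-call medical team for review."

print(answer("how much sleep does a newborn need"))
print(answer("what dosage of paracetamol for my infant"))
```

The design point is that the grounding agent and the human escalation path sit outside the drafting agent’s prompt, so a single model drifting off-policy cannot, by itself, reach the user.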

Sindura Ganapathi

Thank you. Unfortunately, I have been told we are out of time, but the speakers will be available; please come up to them. One very quick thing before we go: is there anything you want to share about what you would like to see next year, when we come back to the AI Summit? I just heard it is being hosted in Geneva, so we are all showing up there. We have all these aspirations; what would it look like when we show up there and can say, okay, this year we did something together? Anything that comes to mind?

Trevor Mundel

You know, I’d love to see the next iteration of Vikalp’s patient-facing agent: an agent that would guide you along your health pathway and be completely transparent, so that I would actually understand why it made its decisions, and so that I would have 100% confidence that, in that anxiety-provoking situation, it never made an error in its guidance or on drug contraindications. It would always be correct on those things, and I wouldn’t have to be concerned about that. That’s the next iteration I’d love to see next year.

Sindura Ganapathi

Next year, maybe.

Charlotte Watts

And what I would like next year, instead of all of us funders sitting up here, is to see some of the partners we are funding, who are doing the work, really explain what this looks like operationally, and to have really honest conversations about what is working and what is not. Then we move away from the hype and start getting into the nitty-gritty of what this could and can be.

Richard Rukwata

Okay, quickly: I would like to see more collaboration between industry and regulators, because ultimately we are on the same side. We want the same thing: better-quality, safe, and effective medicines for all our people. So development in that area would be very exciting.

Sindura Ganapathi

The final word to you.

Monika Sharma

I would still love to see that, no matter how much evidence we generate from AI, no matter what we do, we still have the last word from the doctor who is sitting there, and that we never forget the human angle while we navigate the AI space. That is what I always want. Thank you so much.

Sindura Ganapathi

Yes, thank you so much. Next time we meet, I hope we all feel as optimistic as we do today, and then some. Thank you so much for attending, and thank you to our speakers. We have a souvenir for you from the India side for the session. Thank you so much.

V

Vikalp Sahni

Speech speed

140 words per minute

Speech length

1769 words

Speech time

753 seconds

Fragmented health information unified

Explanation

The speaker describes how current health data is fragmented across appointments and vitals, and how an AI‑enabled platform can consolidate this information into a single patient record, saving clinicians time and improving care continuity.


Evidence

“One is the fragmentation of information and clear delivery, be it right from taking an appointment or taking a vitals” [1]. “The care process right from being fragmented to being consolidated, understanding the patient’s entire medical history to making sure that the doctor’s time is saved while he’s seeing more patients and more medical data is captured” [2].


Major discussion point

AI‑enabled end‑to‑end patient care platform


Topics

Artificial intelligence | Social and economic development


AI summarises full history & supports local languages

Explanation

The AI can ingest collected records, generate a concise health summary, and present it in the patient’s native language, enabling better understanding and communication between doctor and patient.


Evidence

“She goes ahead, she just picks up a prompt, says summarize my health” [11]. “If you look at the information, all filled, the PDF view of the patient will have the entire medications, everything created in the local language” [14]. “There is a translation of all the remarks, advices, everything in the language that patient understands” [16]. “And now in a local language, she’s talking to the bot” [17].


Major discussion point

AI‑enabled end‑to‑end patient care platform


Topics

Artificial intelligence | Social and economic development


Real‑time AI scribe (EkaScribe) creates verified notes & allergy alerts

Explanation

During consultations the doctor activates the audio‑based EkaScribe, which records the encounter, converts it into verifiable medical notes, and automatically flags drug allergies for safety.


Evidence

“During the consultation, doctor just starts the audio -based EkaScribe, which is now doing the interaction between doctor and the patient, recording the interaction between doctor and the patient” [22]. “These interactions gets converted into medical notes and these medical notes are verifiable medical notes that doctors would see” [23]. “The capable AI -based EMR is now alerting that the patient is allergic to amoxicillin” [18]. “So, the patient is allergic to amoxicillin” [27].


Major discussion point

AI‑enabled end‑to‑end patient care platform


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Human‑in‑the‑loop & multi‑agent safety architecture

Explanation

The design emphasizes a multi‑agent AI system with a grounding agent that enforces boundaries, while keeping clinicians in the loop to ensure safe and trustworthy decisions.


Evidence

“It’s fundamentally important that we follow some good technical principle of creating a multi -agent architecture, but more importantly, have a human in the loop” [89]. “But if you are running multiple agents collaborating together where there is a grounding agent whose job is to make sure that the other agent is not sort of going beyond what the boundaries are, I think that is fundamental in healthcare” [90].


Major discussion point

Human‑in‑the‑loop and ecosystem‑centric AI design


Topics

Artificial intelligence | Capacity development


Data privacy, HIPAA, DPDP & end‑to‑end encryption

Explanation

The solution adheres to major privacy regulations such as HIPAA (US) and DPDP (India) and employs end‑to‑end encryption to protect sensitive health data throughout its lifecycle.


Evidence

“Some of the things that we as an organization try to follow is the general guidelines that has been provided by the competent authorities, such as be it HIPAA on the healthcare data or DPDP, which is the Act for Data Privacy in India” [97]. “And the technology, how it is growing, I think there are multiple other ways as well, like an end -to -end encryption and so on and so forth, where we can use it to keep things private” [98].


Major discussion point

Data privacy, security and ethical safeguards


Topics

Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs | Data governance


AI agents for high‑anxiety maternal & infant care

Explanation

The speaker stresses that single‑prompt agents are insufficient for maternal health; layered, multi‑agent systems with clinician oversight are needed to reassure users in high‑anxiety contexts.


Evidence

“If it is just a single agent, single prompt and a very, so, and that’s what we should avoid because it’s a quite deep workflow, especially if we look at maternal health and things where the mental health comes into being” [96]. “So how do we go about building AI agents that are not only intelligent, but reassuring in very high anxiety environments like maternal and infant care?” [26].


Major discussion point

AI for high‑anxiety and operational decision‑support contexts


Topics

Artificial intelligence | Social and economic development


S

Sindura Ganapathi

Speech speed

86 words per minute

Speech length

1852 words

Speech time

1280 seconds

Human energy counters AI hype

Explanation

The facilitator notes that sessions focusing on people and practical work bring a human element that pushes back against hype and encourages concrete dialogue.


Evidence

“There are other sessions entirely focused on people who are working on it” [136]. “And one very quick thing, just before we go, anything you want to share, what you would like to see next year when we come back to AI Summit?” [138].


Major discussion point

Overall sentiment, hype mitigation and future outlook


Topics

Social and economic development | The enabling environment for digital development


C

Charlotte Watts

Speech speed

189 words per minute

Speech length

1321 words

Speech time

417 seconds

Funding seeks rigorous real‑world evidence in LMICs

Explanation

The speaker emphasizes that large‑scale investments aim to generate high‑quality real‑world evidence, especially for low‑ and middle‑income countries, to assess health impact of AI integration.


Evidence

“Our focus is on low- and middle-income countries” [55]. “And essentially, what we want to do here is say, how do we generate real-world evidence on what does it really mean and are we really seeing real-world health impacts once we start to integrate AI into different health systems?” [28]. “So basically this investment is to try and address that evidence gap” [59].


Major discussion point

Funding, evaluation and generation of real‑world evidence


Topics

Financial mechanisms | Monitoring and measurement | Artificial intelligence


Research must guarantee anonymity & ethical clearance

Explanation

Evaluations of AI interventions must protect participant privacy, ensure anonymity, and follow strict ethical review procedures.


Evidence

“We’re very much expecting clearly an anonymity of, you know, basically for those evaluations to adhere to high quality research standards” [61]. “And what I would like to have next year is instead of all of us as funders sitting up here, I would like to see some of the partners that we’re funding who are doing work to really understand what this looks like operationally and to have really honest conversations about what’s working and what’s not working” [111].


Major discussion point

Data privacy, security and ethical safeguards


Topics

Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs


Operational decision‑support at primary‑care level (TB geospatial targeting)

Explanation

The speaker highlights the need to evaluate AI‑driven geospatial models for tuberculosis case‑finding and diagnostic network optimization, focusing on primary‑care impact and scalability.


Evidence

“And essentially, what we want to do here is say, how do we generate real-world evidence…” [28]. “So there’s a bunch of geospatial AI models that we are working with Google for geospatial inferencing in the tuberculosis space, mostly active case finding and then diagnostic network optimization” [131]. “And that matching, which the AI systems and that geospatial targeting that you’re talking about, is exactly the solution that we need to promote and understand how it works” [132].


Major discussion point

AI for high‑anxiety and operational decision‑support contexts


Topics

Artificial intelligence | Social and economic development


Moving from hype to nitty‑gritty implementation discussions

Explanation

The participant observes that the summit is shifting from hype and fear toward concrete, detailed conversations about how AI can be operationalized in health systems.


Evidence

“it’s a wonderful question because essentially I come from public health… we are starting to get into the nitty-gritty of what this really means” [141]. “And so we’re moving away from the hype to really actually starting to get into the nitty-gritty of what this could be and can be” [145].


Major discussion point

Overall sentiment, hype mitigation and future outlook


Topics

Social and economic development | Artificial intelligence


T

Trevor Mundel

Speech speed

169 words per minute

Speech length

981 words

Speech time

346 seconds

Technology is only ~10% of AI success

Explanation

The speaker stresses that the majority of AI impact depends on people, processes, and ecosystem factors rather than the technology itself.


Evidence

“Well, Sundara, you know, what I’ve heard frequently in this meeting, and I hear it quite often in the AI application space, is that technology is just 10% of the exercise in applications of AI” [74].


Major discussion point

Human‑in‑the‑loop and ecosystem‑centric AI design


Topics

Artificial intelligence | Capacity development


Funders must balance rapid AI deployment with safety

Explanation

From the funder perspective, there is pressure to accelerate access to AI tools while also ensuring a reflective, safety‑first approach to avoid downstream failures.


Evidence

“So we feel a tremendous pressure, I know, from the funder side in terms of how do we speed the availability, the access to a tool which looks like it might be a solution to some of those vexing problems” [43]. “So I think from the funder’s perspective, we need to have a situation where maybe taking a little bit of a reflective and a slower approach might be fast” [71]. “But I think that it really behooves us here to think about completely focusing on fast might be slow” [72].


Major discussion point

Funding, evaluation and generation of real‑world evidence


Topics

Financial mechanisms | Monitoring and measurement | Artificial intelligence


Federated learning enables privacy‑preserving model improvement

Explanation

Federated learning allows models to be trained on locally stored patient data, improving performance without moving raw data, thereby addressing privacy concerns.


Evidence

“So, for instance, you know, the various models of federated learning that people have introduced, where you can have locally private data but you contribute to the evolution of a model, which improves because it has access to a very diverse data source” [120].


Major discussion point

Data privacy, security and ethical safeguards


Topics

Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs | Data governance


Next‑year goal: transparent, trustworthy patient‑facing agents

Explanation

The speaker looks ahead to developing patient‑facing AI agents that are fully transparent and trustworthy, building on lessons from the current summit.


Evidence

“That’s the next iteration that I’d love to see next year” [149].


Major discussion point

Overall sentiment, hype mitigation and future outlook


Topics

Artificial intelligence | Social and economic development


Unified standards reduce fragmentation

Explanation

Coordinated efforts to define shared standards help reduce duplication and fragmentation in the rapidly evolving AI‑health field.


Evidence

“And secondly, I would say that by joining hands, we are reducing fragmentation in a rapidly evolving field” [79].


Major discussion point

Overall sentiment, hype mitigation and future outlook


Topics

Artificial intelligence | The enabling environment for digital development


M

Monika Sharma

Speech speed

166 words per minute

Speech length

629 words

Speech time

227 seconds

Coordinated funding reduces duplication & risk

Explanation

When funders work together they avoid redundant efforts, improve quality, and create shared standards that ease the burden on researchers and developers.


Evidence

“And now that we are coordinated, we are getting away with the risk of duplication, the quality that we want to see in the applications or in the products” [75]. “envision this as a commitment towards shared standards” [77]. “So it makes really life easy as a researcher, I would say” [78].


Major discussion point

Funding, evaluation and generation of real‑world evidence


Topics

Financial mechanisms | Artificial intelligence


Joint approach strengthens AI ecosystem

Explanation

A collaborative funding model strengthens the overall AI ecosystem, defining what good looks like and reducing the fragmented expectations faced by countries and developers.


Evidence

“So I think I really feel that having a joint approach towards it is kind of strengthening the whole ecosystem of AI” [50]. “And by aligning together, we are kind of defining what good looks like so that we reduce the burden on countries and developers who would otherwise face a patch of I would say patchwork of expectations” [77]. “And secondly, I would say that by joining hands, we are reducing fragmentation in a rapidly evolving field” [79].


Major discussion point

Overall sentiment, hype mitigation and future outlook


Topics

Artificial intelligence | The enabling environment for digital development


R

Richard Rukwata

Speech speed

143 words per minute

Speech length

445 words

Speech time

186 seconds

Regulators face opposite pressures: speed vs accountability

Explanation

Regulators must accelerate innovation while remaining accountable for safety, creating a tension between rapid approval and rigorous oversight.


Evidence

“We work with our regulatory system, et cetera, where you have two extreme pressures on a regulator and the one it needs to move fast” [36]. “I think those in our industry, the pharma industry, know that this is the biggest source of angst amongst industrialists that regulators take too long, and we are seen as an impediment to progress, actually” [40]. “I have very high regard for regulators because I have been working on our regulatory agency and the streamlining, and I can see how difficult job that is” [41].


Major discussion point

Regulatory landscape and AI integration


Topics

The enabling environment for digital development | Artificial intelligence


AI tools can accelerate drug‑approval & enable neutral reviews

Explanation

AI‑driven applications can streamline the drug‑approval process, providing neutral, faster reviews that help align industry and regulators.


Evidence

“Neutral applications that don’t necessarily speak to one side, but they enable all of us to at least reach a common position very quickly” [47]. “on an application for screening applications for marketing authorizations” [49].


Major discussion point

Regulatory landscape and AI integration


Topics

Artificial intelligence | The enabling environment for digital development


Industry‑regulator collaboration to streamline safe medicine delivery

Explanation

The speaker calls for closer collaboration between industry and regulators so that AI can be used to speed up safe medicine delivery without becoming a barrier.


Evidence

“Okay, so quickly, I would like to see a situation where there’s more collaboration between industry and regulators because ultimately we’re on the same side” [38].


Major discussion point

Overall sentiment, hype mitigation and future outlook


Topics

The enabling environment for digital development | Artificial intelligence


P

Participant

Speech speed

162 words per minute

Speech length

385 words

Speech time

142 seconds

AI agents must reassure users in high‑anxiety maternal/infant care

Explanation

The participant asks how AI agents can be both intelligent and comforting for users facing high‑stress situations such as maternal and infant health.


Evidence

“So how do we go about building AI agents that are not only intelligent, but reassuring in very high anxiety environments like maternal and infant care?” [26]. “High anxiety for maternal and infant care, because even myself as a new mother, I feel that there are a lot of open areas where the mother doesn’t know what to do” [127].


Major discussion point

AI for high‑anxiety and operational decision‑support contexts


Topics

Artificial intelligence | Social and economic development


Interest in operational decision‑support at primary‑care level

Explanation

The participant expresses interest in AI tools that provide operational decision support, especially for primary‑care applications like geospatial TB targeting.


Evidence

“So there’s also an element of operational decision support as such” [128]. “but would this be of interest and what is your level of inclination to operational decision support because I’m a physician myself, I’m a medical informaticist as a PhD” [129].


Major discussion point

AI for high‑anxiety and operational decision‑support contexts


Topics

Artificial intelligence | Social and economic development


Data privacy by design concerns

Explanation

The participant raises questions about how data privacy is built into AI systems, emphasizing the need for privacy‑by‑design approaches.


Evidence

“My questions around data privacy and data privacy by design” [116].


Major discussion point

Data privacy, security and ethical safeguards


Topics

Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs


Agreements

Agreement points

Human oversight and involvement is essential in AI healthcare systems

Speakers

– Vikalp Sahni
– Trevor Mundel
– Monika Sharma

Arguments

Multi-agent architecture with human oversight is essential for complex healthcare workflows like maternal care


Technology represents only 10% of AI applications; people and ecosystems are the remaining 90%


Human involvement remains essential regardless of AI advancement levels


Summary

All three speakers emphasized that despite AI advancement, human involvement remains crucial – Sahni advocates for human-in-the-loop during development with medical teams, Mundel stresses that people and ecosystems are 90% of AI applications, and Sharma insists that final decisions should always rest with human doctors


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Real-world evidence and rigorous evaluation are critical for AI healthcare implementation

Speakers

– Charlotte Watts
– Trevor Mundel

Arguments

Joint funding initiative aims to generate real-world evidence of AI integration in health systems, particularly in low- and middle-income countries


Primary data collection is crucial as global health has been constrained by lack of real-world evidence beyond modeling


Summary

Both speakers agree on the urgent need for real-world evidence rather than just theoretical models – Watts describes a collaborative funding initiative to evaluate AI systems in actual health systems, while Mundel emphasizes that global health has been plagued by lack of primary data and over-reliance on modeling


Topics

Artificial intelligence | Monitoring and measurement | Social and economic development


Coordinated approach reduces fragmentation and improves outcomes

Speakers

– Charlotte Watts
– Monika Sharma
– Richard Rukwata

Arguments

Joint funding initiative aims to generate real-world evidence of AI integration in health systems, particularly in low- and middle-income countries


Coordinated funding approach reduces fragmentation and creates shared standards for AI evaluation


AI tools can serve as neutral applications helping both regulators and industry reach common positions more efficiently


Summary

All three speakers advocate for coordination to reduce fragmentation – Watts through joint funding initiatives, Sharma through aligned funding standards, and Rukwata through neutral AI tools that help regulators and industry reach common ground


Topics

Artificial intelligence | Financial mechanisms | The enabling environment for digital development


Cautious implementation approach is necessary to avoid setbacks

Speakers

– Trevor Mundel
– Charlotte Watts

Arguments

Taking a reflective, slower approach to AI implementation might ultimately be faster by avoiding setbacks from premature deployment


Moving beyond hype to meaningful conversations about practical AI implementation and global collaboration


Summary

Both speakers emphasize moving beyond hype toward careful, thoughtful implementation – Mundel warns that rushing could cause setbacks similar to self-driving vehicles, while Watts appreciates the conference’s move toward substantive discussions rather than excessive hype or fear


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Similar viewpoints

Both speakers see AI as a solution to reduce inefficiencies and improve collaboration in healthcare systems – Sahni through comprehensive patient care solutions and Rukwata through neutral regulatory tools

Speakers

– Vikalp Sahni
– Richard Rukwata

Arguments

End-to-end AI solution addresses fragmentation, medical history collection, and doctor-patient interaction time


AI tools can serve as neutral applications helping both regulators and industry reach common positions more efficiently


Topics

Artificial intelligence | The enabling environment for digital development


Both emphasize the importance of AI applications at the primary care and community level rather than just high-level tertiary care, focusing on reaching underserved populations and addressing public health challenges

Speakers

– Charlotte Watts
– Participant

Arguments

Focus on primary care level interventions, not just tertiary care applications


Interest extends beyond clinical decision support to operational support, including geospatial AI for tuberculosis case finding


Topics

Artificial intelligence | Social and economic development | Closing all digital divides


Unexpected consensus

Regulatory jobs as the last to be replaced by AI

Speakers

– Richard Rukwata

Arguments

Regulators face dual pressure to accelerate innovation while maintaining responsibility for safety outcomes


Explanation

Rukwata’s humorous but insightful observation that regulatory jobs may be the last to remain because people always need someone to blame represents an unexpected consensus on the fundamental human need for accountability in AI systems, even from a regulator’s perspective


Topics

Artificial intelligence | The enabling environment for digital development


Personal experiences driving professional perspectives

Speakers

– Sindura Ganapathi
– Monika Sharma

Arguments

Conference energy demonstrates human connections remain vital in AI development discussions


Human involvement remains essential regardless of AI advancement levels


Explanation

Both speakers drew from personal experiences (vegetable market analogy and conversation with child about AI) to emphasize human elements in AI development, showing unexpected consensus that personal, human perspectives are valuable in technical discussions


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Overall assessment

Summary

The speakers demonstrated strong consensus on the need for human-centered AI development, real-world evidence generation, coordinated approaches to reduce fragmentation, and cautious implementation strategies. There was particular alignment on the importance of human oversight, primary care focus, and moving beyond hype toward practical implementation.


Consensus level

High level of consensus with complementary perspectives rather than conflicting views. The implications suggest a mature, responsible approach to AI in healthcare that prioritizes safety, evidence, and human involvement while recognizing the transformative potential of the technology. This consensus could facilitate collaborative efforts in AI healthcare development and regulation.


Differences

Different viewpoints

Speed vs. caution in AI implementation

Speakers

– Trevor Mundel
– Charlotte Watts

Arguments

Taking a reflective, slower approach to AI implementation might ultimately be faster by avoiding setbacks from premature deployment


Joint funding initiative aims to generate real-world evidence of AI integration in health systems, particularly in low- and middle-income countries


Summary

While both speakers acknowledge the need for evidence-based AI implementation, Mundel explicitly advocates for a slower, more reflective approach to avoid setbacks, whereas Watts focuses on accelerating evidence generation through coordinated funding initiatives. Mundel warns that rushing could be counterproductive, while Watts emphasizes the urgency of generating real-world evidence.


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Primary focus of AI evaluation scope

Speakers

– Charlotte Watts
– Participant

Arguments

Focus on primary care level interventions, not just tertiary care applications


Interest extends beyond clinical decision support to operational support, including geospatial AI for tuberculosis case finding


Summary

Watts emphasizes evaluating AI interventions at the primary care level rather than tertiary care, while the participant argues for broader operational decision support including geospatial AI for community-level case finding. The participant specifically highlights that silent patients in communities need attention beyond those who enter the health system.


Topics

Social and economic development | Artificial intelligence


Unexpected differences

Role of AI in reducing vs. maintaining human oversight

Speakers

– Richard Rukwata
– Monika Sharma

Arguments

AI tools can serve as neutral applications helping both regulators and industry reach common positions more efficiently


Human involvement remains essential regardless of AI advancement levels


Explanation

Rukwata sees AI as potentially reducing friction and making processes more efficient by being neutral and unbiased, while Sharma insists that human doctors must always have the final word regardless of AI advancement. This represents a fundamental disagreement about whether AI should reduce human involvement (Rukwata) or maintain it as essential (Sharma).


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Overall assessment

Summary

The main areas of disagreement center around the pace of AI implementation (speed vs. caution), the scope of AI evaluation (clinical vs. operational focus), and the appropriate level of human oversight in AI systems. While speakers generally agree on the importance of evidence-based approaches and human involvement, they differ significantly on implementation strategies and priorities.


Disagreement level

Moderate disagreement level with significant implications for AI governance and implementation strategies. The disagreements reflect fundamental tensions in the field between innovation acceleration and risk mitigation, between different application domains, and between varying philosophies about human-AI interaction. These disagreements could impact funding priorities, regulatory approaches, and the development of AI standards in healthcare.


Partial agreements

Partial agreements

All speakers agree on the critical importance of human involvement in AI systems, but they disagree on implementation approaches. Mundel emphasizes defining human roles in ecosystems, Sahni focuses on technical multi-agent architectures with medical team oversight, while Watts emphasizes research standards and ethical procedures.

Speakers

– Trevor Mundel
– Vikalp Sahni
– Charlotte Watts

Arguments

Technology represents only 10% of AI applications; people and ecosystems are the remaining 90%


Multi-agent architecture with human oversight is essential for complex healthcare workflows like maternal care


Research evaluations must follow high-quality standards including anonymity and ethical clearance procedures


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Both speakers acknowledge the tension between speed and safety in AI implementation, but propose different solutions. Rukwata sees AI as a neutral tool to resolve regulator-industry conflicts and speed up processes, while Mundel advocates for deliberately slowing down to avoid errors that could derail progress.

Speakers

– Richard Rukwata
– Trevor Mundel

Arguments

AI tools can serve as neutral applications helping both regulators and industry reach common positions more efficiently


Taking a reflective, slower approach to AI implementation might ultimately be faster by avoiding setbacks from premature deployment


Topics

The enabling environment for digital development | Building confidence and security in the use of ICTs



Takeaways

Key takeaways

AI in healthcare requires a multi-agent architecture with human oversight rather than single-agent systems, especially for complex workflows like maternal care


Technology represents only 10% of AI applications – the remaining 90% involves people and ecosystems, requiring focus beyond just technological advancement


Regulators can use AI as neutral tools to bridge gaps between industry and regulatory bodies, potentially reducing friction in approval processes


A coordinated funding approach among major health foundations can reduce fragmentation, create shared standards, and eliminate duplicate evaluation criteria for researchers


Real-world evidence generation is critical – there’s a significant gap between promising AI efficacy studies and rigorous evaluations of integrated AI systems in actual healthcare settings


Taking a slower, more reflective approach to AI implementation may ultimately be faster by avoiding setbacks from premature deployment and safety issues


Data privacy in healthcare AI must adhere to established frameworks like HIPAA and local data protection acts, with continuous certification requirements


Human involvement remains essential regardless of AI advancement levels, particularly in final decision-making roles


Resolutions and action items

Joint funding initiative launched by major health foundations (Wellcome Trust, Gates Foundation, Novo Nordisk Foundation) to support rigorous evaluations of AI integration in health systems


Focus funding specifically on low- and middle-income countries to generate real-world evidence of AI health impacts


Establish shared standards and aligned criteria for AI evaluation to reduce burden on countries and developers


Support evaluations at primary care level, not just tertiary care applications


Implement federated learning approaches that allow local data privacy while contributing to model improvement


Next year’s AI Summit in Geneva should feature funded partners presenting operational results rather than just funders discussing plans


Unresolved issues

How to effectively balance speed of innovation with safety requirements in AI healthcare applications


Specific technological solutions for preserving data privacy while enabling AI learning and improvement


Regulatory frameworks for new AI approaches like federated learning that haven’t been fully tested in policy contexts


How to build reassuring AI agents for high-anxiety healthcare environments like maternal and infant care


Addressing the ‘silent patients’ in communities who remain undetected and underserved by current healthcare systems


How to afford and deploy new healthcare technologies given reduced global health funding constraints


Ensuring AI systems remain transparent and explainable in their decision-making processes


Suggested compromises

Accepting that a slower, more reflective approach to AI implementation might be necessary to ensure long-term success and avoid setbacks


Using AI as neutral tools that serve both regulators and industry rather than favoring one side


Implementing multi-agent AI architectures with human oversight as a middle ground between fully automated and fully manual healthcare processes


Coordinated funding approach that balances innovation acceleration with rigorous safety evaluation requirements


Federated learning models that allow data contribution to AI improvement while maintaining local data privacy and control


Thought provoking comments

Technology is just 10% of the exercise in applications of AI. And the rest is really around people and ecosystems. And as soon as people say that, they then go back to talk about technology. So, I am interested in how we do more than just pay lip service to this notion that we really need to think about the ecosystem and the people involved, probably more than the technology itself.

Speaker

Trevor Mundel


Reason

This comment cuts through the typical AI hype by highlighting a fundamental contradiction in how AI discussions are conducted. It’s insightful because it calls out the gap between what people claim to prioritize (human-centered approaches) versus what they actually focus on (technology), forcing participants to confront this inconsistency.


Impact

This observation reframed the entire discussion from being technology-centric to human-centric. It influenced subsequent speakers to emphasize human oversight, regulatory frameworks, and real-world implementation challenges rather than just technical capabilities. The comment served as a philosophical anchor that kept bringing the conversation back to practical, human-centered considerations.


Completely focusing on fast might be slow… what could derail the good application of AI? You know, you think about it in the health area, which is so sensitive, the few errors, like on the regulatory front, relatively few errors that could occur… leads to a tremendous deceleration and things not moving ahead. We take the lesson of the self-driving vehicles… one fatal accident puts that whole enterprise at risk.

Speaker

Trevor Mundel


Reason

This paradoxical insight challenges the conventional wisdom that speed is always beneficial in innovation. By drawing parallels to self-driving cars, it introduces a sophisticated understanding of how public perception and trust can make or break technological adoption, especially in sensitive areas like healthcare.


Impact

This comment fundamentally shifted the discussion from ‘how fast can we deploy AI?’ to ‘how can we deploy AI responsibly?’ It influenced other speakers to emphasize evaluation, evidence generation, and careful implementation. The regulatory perspective from Dr. Rukwata and the funding approach discussion all built upon this foundation of cautious optimism.


I would like to see some of the partners that we’re funding who are doing work to really understand what this looks like operationally and to have really honest conversations about what’s working and what’s not working. And so we’re moving away from the hype to really actually starting to get into the nitty-gritty of what this could be and can be.

Speaker

Charlotte Watts


Reason

This comment is particularly insightful because it acknowledges that current AI discourse is dominated by hype and calls for radical honesty about both successes and failures. It represents a mature approach to innovation that values learning from setbacks as much as celebrating wins.


Impact

This comment validated the earlier concerns about moving beyond superficial discussions and established a framework for future conversations. It influenced the audience question about operational decision support and reinforced the theme that real-world evidence and honest evaluation are crucial for meaningful progress.


The patients who come into the system, they’re for the most part taken care of but those are all the silent patients who are out there undetected in the community.

Speaker

Participant (physician and medical informaticist)


Reason

This observation is profound because it shifts the focus from improving existing healthcare delivery to addressing the invisible healthcare gap. It highlights that AI's greatest potential may lie not in optimizing current systems but in reaching underserved populations who never access healthcare at all.


Impact

This comment expanded the scope of the discussion beyond clinical decision support to population health and equity. It prompted responses from both Charlotte and Trevor about targeting resources more effectively and using AI for outreach and risk identification, fundamentally broadening the conversation’s scope.


I would still love to see that no matter how much evidence we generate from AI, no matter what we do, we still have that last word from the doctor who is sitting there and never forget the human angle while we navigate the AI space.

Speaker

Monika Sharma


Reason

This closing comment encapsulates the central tension of the entire discussion: the balance between technological capability and human judgment. It is insightful because it does not reject AI but insists on preserving human agency and responsibility, which is crucial for maintaining trust and accountability in healthcare.


Impact

As the final substantive comment, it served as a philosophical bookend to Trevor’s earlier observation about human-centered approaches. It reinforced the discussion’s evolution from technology-focused to human-centered thinking and left participants with a clear principle to guide future AI development in healthcare.


Overall assessment

These key comments collectively transformed what could have been a typical AI showcase into a nuanced discussion about responsible innovation. The conversation evolved from demonstrating technological capabilities to examining the complex ecosystem needed for successful AI implementation in healthcare. Trevor’s early observations about the technology-versus-people paradox set the tone for deeper reflection, while Charlotte’s call for honest evaluation and Monika’s emphasis on preserving human judgment provided practical frameworks for moving forward. The participant’s insight about ‘silent patients’ expanded the scope beyond clinical optimization to population health equity. Together, these comments created a mature dialogue that acknowledged both AI’s potential and the critical importance of human-centered, evidence-based approaches to healthcare innovation.


Follow-up questions

How to build AI systems at scale for multiple languages and generate verifiable data for large-scale models?

Speaker

Vikalp Sahni


Explanation

This addresses the technical challenge of scaling AI healthcare solutions across linguistically diverse populations while ensuring model reliability and accuracy.


Who is evaluating the AI capabilities being built in healthcare?

Speaker

Vikalp Sahni


Explanation

This highlights the need for standardized evaluation frameworks and oversight mechanisms for AI healthcare applications.


How do we define the actual role for humans in the loop beyond just paying lip service to this concept?

Speaker

Trevor Mundel


Explanation

This addresses the critical need to move beyond theoretical discussions about human involvement to practical implementation of human oversight in AI systems.


How can federated learning models that keep data local but contribute to model improvement be properly regulated?

Speaker

Trevor Mundel


Explanation

This explores the regulatory gaps around new AI training methodologies that could preserve privacy while enabling model advancement.


What are the real-world health impacts when AI systems are integrated into different health systems?

Speaker

Charlotte Watts


Explanation

This addresses the evidence gap between AI efficacy studies and actual implementation outcomes in healthcare settings.


What are the costs and cost-effectiveness of integrating AI into health systems, particularly for ministry of health decision-making?

Speaker

Charlotte Watts


Explanation

This is crucial for understanding the economic viability and scalability of AI healthcare solutions for government health programs.


What unexpected challenges arise when AI systems are integrated into existing health bureaucracies?

Speaker

Charlotte Watts


Explanation

This seeks to identify implementation barriers and system integration issues that may not be apparent in controlled studies.


How can data privacy be incorporated at a policy level, particularly regarding privacy by design principles?

Speaker

Participant


Explanation

This addresses the need for comprehensive policy frameworks that protect sensitive health data while enabling AI innovation.


How do we build AI agents that are not only intelligent but reassuring in high-anxiety environments like maternal and infant care?

Speaker

Participant


Explanation

This explores the human-centered design challenges of creating AI systems that can provide emotional support and reassurance in sensitive healthcare contexts.


What does operational decision support look like for AI systems focused on community health and active case finding?

Speaker

Participant


Explanation

This examines how AI can support public health operations beyond clinical decision-making, particularly for underserved populations.


How can we develop more collaboration between industry and regulators in AI healthcare development?

Speaker

Richard Rukwata


Explanation

This addresses the need for improved partnerships to streamline AI healthcare innovation while maintaining safety standards.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.