Catalyzing Global Investment in AI for Health: WHO Strategic Roundtable
20 Feb 2026 12:00h - 13:00h
Summary
The panel convened to examine how artificial intelligence can be responsibly integrated into health care to improve outcomes while ensuring safety and equity [15]. Zameer Brey warned that AI-assist tools are often placed at the wrong point in clinical workflows, and argued that progress should be judged by tangible health improvements such as better TB diagnosis or diabetes adherence [12-16]. Using a flight-safety analogy, he stressed that health-care AI must aim for essentially zero risk of error and called for a shift from opaque “black-box” models to transparent, verifiable “glass-box” systems that log inputs, expose their decision logic, and include safeguards against harmful prescriptions [22-30]. He concluded by inviting partners to collaborate on pathways toward verified AI, emphasizing the need for traceable decision chains that satisfy legal and regulatory requirements [31-35].
Professor Prokar Dasgupta, representing Responsible AI UK, emphasized that implementation, not just invention, is critical, and described initiatives such as placing AI champions in hospitals across the UK, India and Africa to accelerate adoption [46-48]. He cited concrete projects including an ambient-AI note-taking system that reduces operating-room time, a tele-surgery platform enabling surgeons to operate remotely with sub-60 ms latency, and a robotic system capable of fully automated gallbladder removal, illustrating AI’s potential to expand equitable surgical access [49-53]. Dasgupta also noted limited public acceptance, observing that only a single hand was raised when clinicians were asked whether they would volunteer for a fully automated procedure, underscoring the importance of trust [54-57]. He argued that sustained investment must also target workforce development, because few medical curricula currently include AI training, and without such skills the promised benefits will not materialize [60].
Alain Labrique reinforced the shift from focusing on algorithmic accuracy to measuring real-world impact, acknowledging that clinician behavior change is slow but feasible when humans remain in the loop [62-65].
Payden P. summarized the discussion by declaring that AI in health has reached an inflection point, moving from speculative possibilities to concrete investment, implementation and impact [101-104]. He outlined that future investment must extend beyond innovation to include governance, regulation, evidence generation, data systems, workforce readiness and long-term partnerships, which together build the trust that unlocks sustainable funding [110-118]. The panel concluded that achieving equitable health outcomes with AI will depend on building verified, transparent systems, securing cross-sector trust, and investing in the people and infrastructure needed to translate promise into progress [119-120].
Keypoints
Major discussion points
– The need for “verified” AI that is transparent and risk-free in healthcare.
Zameer argued that health-AI must move from a “black-box” to a “glass-box” model, providing a full audit trail of inputs, logic and safeguards to ensure zero-risk prescribing and decision-making [22-31].
– Shifting focus from hype to concrete investment, implementation and impact.
Payden highlighted that AI in health has reached an inflection point where the conversation is now about funding governance, regulation, evidence generation, workforce readiness and long-term partnerships to make AI an equitable tool rather than a source of new inequalities [101-112].
– Barriers to clinical adoption and the need for research-driven change.
Zameer noted the entrenched nature of medical workflows and asked what level of clinical research and evaluation investment is required to alter long-standing practice patterns [38-40].
– Global equity, data diversity and real-world pilots as a pathway to inclusive AI.
Prokar described initiatives such as Responsible AI UK’s hospital champions, tele-surgery trials, and robotic automation projects, stressing that diverse data and equitable access are essential for success [46-60].
– Human-centered AI and coordinated donor/partner action.
Several speakers (Ken, Haitham, Zameer) called for keeping people at the core of AI systems, aligning donor strategies, and building coordinated, cross-sector partnerships to ensure AI benefits are realized responsibly [70][72-76][83-92].
Overall purpose / goal of the discussion
The panel was convened to move the conversation on artificial intelligence in health from speculative promise to practical, equitable impact. Participants examined how to verify AI safety, invest in the necessary infrastructure and evidence base, overcome clinical inertia, and ensure global inclusivity, ultimately seeking a shared roadmap for responsible implementation.
Overall tone and its evolution
– The session opened with formal, repetitive gratitude, establishing a courteous but neutral atmosphere.
– It quickly shifted to a critical and analytical tone, using analogies (e.g., flight safety) to stress the zero-risk expectation for health AI [22-25].
– As speakers presented concrete examples and investment needs, the tone became optimistic and solution-focused, highlighting pilots, partnerships, and skill-building [46-60][101-112].
– The concluding remarks returned to a collaborative and rallying tone, urging coordinated donor action and emphasizing human-centered design [70][72-76][83-92].
Overall, the discussion progressed from polite acknowledgment to a rigorous debate on challenges, and finally to a hopeful call for collective action.
Speakers
– Zameer Brey – Panelist (speaker)
– Ken Ichiro Natsume – Assistant Director General at the World Intellectual Property Organization (WIPO), policy expert on international intellectual property matters [S3]
– Prokar Dasgupta – Professor, practicing surgeon, innovator; leads AI implementation initiatives, affiliated with King’s College London (mentioned “my own group in King’s”) [S5]
– Alain Labrique – Panel moderator/facilitator, expert in digital health interventions and global health partnerships [S8]
– Justice Prathiba M. Singh – Justice (judicial title)
– Haitham Ali Ahmed El-Noush – (role not specified)
– Payden P. – Closing speaker, panel participant
Additional speakers:
– Elaine – referenced in discussion about legislative chain of proof (no role or title provided)
– Justice Simo – referenced as nodding, judicial title implied (no further details)
– Dr. Pagan – mentioned by Alain Labrique as “Dr. Pagan” (no role or title provided)
The session opened with a series of formal thank-you remarks from the moderators, establishing a courteous atmosphere before the substantive discussion began [1-6][7-8].
Zameer Brey anchored his remarks in a product-flow diagram, using it as the framework for the discussion [12-13]. He then questioned the placement of AI-assist functions at the end of clinical workflows, noting that clinicians are typically offered AI support only after completing all preparatory steps; in one redesign, developers simply “moved the AI assist button earlier on,” letting users invoke it when it made sense to them, and outcomes changed [12-13]. He argued that true progress must be demonstrated through measurable health benefits rather than merely deploying AI tools [12-13]. Brey used a flight-safety analogy, asserting that in health care the acceptable failure rate must be effectively zero; a 95% safety margin would be intolerable, and even a 99% margin would imply one fatal crash per hundred flights [22-25]. From this premise he advocated a shift from opaque “black-box” models to transparent “glass-box” systems that log every input, expose the underlying logic, and embed safeguards to prevent harmful prescriptions, including checks for allergies and catastrophic errors [26-30]. He concluded by inviting partners to co-create pathways toward verified AI, emphasizing a traceable decision chain that satisfies legal and regulatory requirements, a point underscored by Justice Simo’s nod of approval [31-35].
Brey also highlighted the entrenched nature of medical practice, describing clinicians as “well-involved and well-trodden” in their workflows and asking what level of clinical research and evaluation investment is required to shift these long-standing pathways [38-40]. This set the stage for a broader debate on the resources needed to overcome professional inertia.
Prokar Dasgupta, speaking on behalf of Responsible AI UK, reframed the conversation around implementation rather than invention. He noted that the programme places AI champions in hospitals across the UK, India and Africa to accelerate adoption [46-48]. He cited concrete pilots: an ambient-AI system that drafts clinical notes and saves a month of wasted operating-room time [49-50]; a tele-surgery platform (2,500 km distance, ≤60 ms latency) that could bring specialist surgery to underserved regions [51]; and a fully autonomous robotic system for gallbladder removal, described as “100% accurate” in pigs, though the audience was sceptical: only a single hand was raised when clinicians were asked to volunteer [55-57]. Dasgupta stressed that equitable impact depends on diverse data sets, illustrating the point with a story about wanting a watch alert that could warn of his elderly mother’s impending heart attack, something models cannot deliver without more diversified data [51-53]. He emphasized that without patient participation the investment will fail [57-58] and outlined the need to work with the “three C’s” (companies, countries, and civil society) to ensure responsible deployment [58-60]. He warned that without skilled health-workforce training, currently absent from most medical and nursing curricula, the investments will fail [60]. Dasgupta also referenced the “Weizenbaum test” as a future societal-impact benchmark [66-68].
The human-centred principle resonated across the panel. Ken Ichiro Natsume reiterated that AI should be leveraged with “human beings at the centre of those utilizations” [74-75]. Justice Prathiba M. Singh summed up the sentiment with a hopeful “Here’s to a healthier world” and called for technology and development initiatives to work together [77-79]. Alain Labrique added that the focus should shift from algorithmic accuracy to real-world impact, arguing that benchmarks ought to measure behavioural change and health outcomes rather than pure predictive performance [62-65].
Payden P. provided the closing synthesis, declaring that AI in health has reached an inflection point where the debate has moved from speculative possibilities to concrete investment, implementation and impact [101-104]. He outlined four pillars of future funding: (1) governance and regulation to ensure safety and trust; (2) evidence generation to demonstrate efficacy; (3) workforce readiness and capacity-building; and (4) long-term, cross-sector partnerships [110-118]. Trust, he argued, is the “currency that unlocks sustainable investment” [118-119], and he called on donors, governments and industry to collaborate in building these foundations [120].
Complementing this, Haitham Ali Ahmed El-Noush stressed the need for coordinated donor strategies, urging the development of shared priorities and pooled investments to rally behind AI-health initiatives [70].
Across the discussion, several points of agreement emerged. All speakers endorsed a human-centred, transparent approach to AI, the necessity of coordinated investment beyond pure innovation, and the imperative that AI benefits be equitably distributed (e.g., Brey’s glass-box vision, Dasgupta’s global pilots, Labrique’s impact focus, Natsume’s human-in-the-loop stance, and Haitham’s donor coordination) [24-30][46-48][62-65][74-75][70]. They also concurred that trust must be built through verifiable systems, robust governance and demonstrable outcomes [24-30][101-108][110-118].
Notable disagreements surfaced. First, Brey’s demand for zero-risk, fully verified AI contrasted with Dasgupta’s promotion of high-autonomy tools that, while touted as “100% accurate,” still faced public reluctance, revealing tension between ideal safety standards and pragmatic deployment [24-30][55-57]. Second, the allocation of funding diverged: Brey emphasized resources for verification infrastructure [26-30], whereas Payden and Labrique argued for broader system-wide investments in regulation, data infrastructure and capacity-building [101-108][110-114]. Third, the metric of success was contested; Labrique advocated impact-oriented benchmarks, while Brey prioritized absolute safety and error-free operation [63][24-30].
The panel distilled several key takeaways. Verified, glass-box AI that guarantees zero-risk prescribing is essential [24-30]; investment must now target governance, evidence generation, data systems, workforce training and long-term partnerships to translate promise into progress [101-108][110-118]; coordinated donor mechanisms are required to align priorities [70]; and human-centred design-keeping clinicians and patients in the loop and embedding AI education in curricula-is critical for acceptance and equity [74-75][60].
Action items proposed
1. Form a working group on verified/glass-box AI (Zameer’s invitation) [31].
2. Create pooled donor mechanisms for coordinated investment (Haitham’s suggestion) [70].
3. Fund governance, regulatory and evidence-generation programmes (Payden’s pillars) [101-108][110-114].
4. Embed AI modules into medical and nursing curricula (Dasgupta’s training call) [60].
5. Pilot inclusive projects such as ambient-AI note-taking, tele-surgery 2.0, and autonomous robotics with patient involvement (Dasgupta’s pilots) [49-57].
Unresolved issues remain, notably how to operationalise the zero-risk standard in real-world settings, the precise mechanisms for shifting entrenched clinical workflows, detailed funding models for coordinated donor action, and the development of global standards for data diversity and regulatory certification. The panel suggested a phased compromise: maintain human oversight while progressively increasing AI autonomy, pair rapid deployment of low-risk tools with rigorous verification before scaling to higher-risk applications, and align technological capability with societal acceptance through continuous patient and clinician engagement [74-75][55-57].
In sum, the discussion moved from polite acknowledgements to a rigorous examination of safety, verification, investment and equity, converging on a shared roadmap that balances stringent risk-mitigation with pragmatic, impact-driven implementation. The consensus underscores that AI can transform health only if it is transparent, trustworthy, human-centred and supported by coordinated, long-term investment in both technology and the people who will use it [101-108][118-120].
Thank you. So think about this: you’ve done all your hard work, you’ve made your notes, you’ve written your prescription, you’ve counseled the patient, and now you press AI assist. No thank you. All they did was to move the AI assist button earlier on and give the user the discretion to use it when it made sense to that user, and the results changed. The fourth level is: to what extent is the improvement actually going to yield an improvement in health outcomes? The reason we’re all here is: what’s fundamentally going to shift? Is this going to help us diagnose TB better, or help with adherence in diabetes, etc.?
So these are some of the fundamental questions, and I think we’ve got caught up with investment at levels one and two: let’s just check how this model works; let’s just check the product, without having given enough investment into how this gets integrated into the world. Let’s just see how this goes. So this is the product flow, and then ultimately, how does this shift outcomes over time? Can I take one more minute and talk about verified AI, or should I come back to this? I was thinking to myself, and it’s probably a bad analogy, but I’m going to put it out there anyway; I’m flying this evening, which is why I didn’t want to use it. If I said to you all, would you fly if the likelihood of the flight arriving safely was 95%? Would you fly if it was 95%? Would you fly if I told you it was 96, 97 or 98? No. Even if it was 95. Just think for a second: if it was 99%, that means every 100th flight taking off from Delhi airport would crash.
Would we fly? We’d go, oh, right, we’ll take some other means of transport. And the reason I’m emphasizing this is that when it comes to health care, the bar should be 0% risk of failure, 0% risk of error. And so with Elaine and many other partners we’re starting to have this discussion: how do you get AI to be verifiable, so that whatever the input is, you can document it, it’s transparent? And we spoke about this: can we shift the narrative from black box to glass box? Can we really know why the model made a particular decision? We gave it X input, the patient had these criteria, here’s the logic model,
and it gave that particular output. But when it gives that output, can we put some safeguards in place that make 100% sure it isn’t prescribing something the patient’s allergic to, or that’s going to end up in a catastrophic event, or that’s fundamentally flawed in its logic? And that’s where we’d like to invite partners to work with us on a pathway to verified AI. Thank you. And I can see Justice Simo is just nodding her head, because, you know, having that chain of proof is something we like to have in legislation. So it’s always nice when there’s a trail to follow to that decision. We couldn’t have cued it up better today, because the next person I’m going to ask is Professor Dasgupta, who is a clinician and an innovator.
I’m sure you’ve experienced the recalcitrance and challenge of shifting medical practice. And, you know, nurses and doctors are well known for being entrenched in their ways of doing things. Changing those well-involved and well-trodden paths of workflows and clinical decision pathways is very difficult. So what kind of investment do we need to make in clinical research, evaluation and evidence to shift those well-trodden paths of practice? Professor Dasgupta.
Namaskar. Thanks. You have to realize that I am a working surgeon, so in addition to invention and innovation, what I’m really interested in is implementation. I want to make a difference. And if you are ever a patient someday, it will make a difference to you. I come here on behalf of Responsible AI UK, a major investment from UK Research and Innovation, not just in AI in the UK, but into an international ecosystem, including the global south. We put AI champions in every hospital, and we are trying to expand to our partners in India and in Africa, where it is needed the most. Let me give you some examples of how we are doing this. Responsible AI UK, for example, funded an evaluation of ambient AI, writing those notes,
shortening the operating time, saving a month of wasted time in the operating room. The British Association of Physicians of Indian Origin realized: wouldn’t it be wonderful if our parents, many of whom are living in India (my mother is 87), could be warned before a heart attack? Wouldn’t it be nice if a message on my watch told me something was going to happen? The reason I decided to make a note of this is that the data is not diversified enough; without diversity of data we are not going to win this battle. Let me give you another example, on investment and inequity. Two weeks ago, if you look at the British Medical Journal, there is a major article from us on tele-surgery 2.0. It means the technology exists for a surgeon to operate from two and a half thousand kilometers away, using a robot, with a time delay of 60 milliseconds or less; it feels like you’re in the same operating room. Imagine this investment being one of the solutions for the 5 billion patients who do not have access to equitable surgery. That is an example. Let me give you a third example, and this is in automation. My own group at King’s has funded and invested in automation big time. The levels of autonomy in robotics go from 0 to 5, where 0 is no autonomy; the most autonomous machine today is level 3: you map the prostate with the ultrasound (all the men in this room have a prostate and, as we know, we have difficulty peeing), you press a button, and a water jet carves a channel in the middle of the prostate so that you don’t have to wake up 20 times at night to pee. That was until last November, when one university announced the first robotic system in the world which can operate on pig gallbladders.
Pig gallbladders, 100% accurate. Five days after this, I was at the Royal Academy of Engineering, with a group like this, and I said: hands up, everyone who is going to allow this machine to operate on them. Hands up, everyone who will allow a completely automated machine, 100% accurate in pigs, to take out your gallbladder. Any takers? There was a single hand in the room. So when we go to the public, they are saying: not yet. So we have to work with companies, of course; with countries, including the government side; and with civil society: the three C’s. If we do not bring our patients with us, all this investment is going to fail. And the final investment I would urge is in skills. There are hardly any medical and nursing schools in the world which have AI in the curriculum. If we do not have this embedded in the education of the next generation of healthcare workers, we are going to fail. So these are my parting thoughts to you. Thank you.
and replace “impressive” with “impactful,” focusing on things that get used and work in the real world. The benchmark might be the wrong thing: not accuracy, but actually impact. And then, of course, there is the challenge that Professor Dasgupta brought to us: it does take time to change behavior, but it is possible as long as, for the moment, we have humans in the loop. So I’d like to give each of you one sentence now just to wrap up. As you’ve heard the others, what has changed your thinking, and what’s the one message you’d like people to leave the room with? Let me just go sequentially down the row. Thank you.
So for donors, we need coordination, and there is a need to develop strategies, priorities, and investments so we can rally behind.
Fantastic. Ineji.
Thank you. I think we’re asked to respond in one sentence. I haven’t changed my mind, but one point which resonated with my heart, which I was not able to mention in my opening remarks, is this: we can leverage artificial intelligence with human beings at the center of those utilizations. That’s what I want to highlight. Thank you.
That’s the thing: I’m going to actually say one sentence. Here’s to a healthier world; may AI and technology really work together in the world.
Fantastic. Professor.
For AI tools and for the patients, I urge you to apply the Weizenbaum test, which means: do not just think about what these machines can do for us, but think about what the societal effects of these machines are. The change has to go from the Turing test to, today, the Weizenbaum test.
I think for me, the question of how we move from promise to progress is underpinned by a theme that I’m seeing at the conference, and I think it’s a very important one: we need to keep humans at the center of the AI revolution.
Fantastic. So, Dr. Pagan, you’ve been patiently hearing these wise words from our panel. I’d like to give you the last word to bring this home and leave the audience with food for thought before they go for food for their stomachs.
Thank you very much. Good afternoon to all. Sincere thanks to all the… I think it’s on. Yes. Sincere thanks to all the distinguished panelists for this very thought-provoking and very interesting conversation around AI and health. I think today’s conversation makes one thing very clear: AI and health has reached an inflection point. For years we spoke about possibility; today the conversation has shifted to investment, implementation, and impact. I think that was really highlighted and emphasized by all. The question is no longer whether AI can improve health. The question is whether we will invest in the right foundations to ensure it improves health for everyone, not a few. Over the past hour, several themes have emerged.
And the first is around investment. Investment must go beyond innovation. It must flow into the systems that make innovation safe, trusted, and scalable: governance and regulation, evidence generation, workforce readiness and capacity building (which came through very clearly), data systems, and long-term partnerships. These are not optional. They are the enabling conditions that determine whether AI becomes a tool for equity or a driver of new inequalities. Second, predictability builds confidence. When countries strengthen regulatory and legal frameworks, investment flows in. When evidence is generated and transparency is shared, investment grows. When partnerships are built across sectors, investment scales. In short, trust is the currency that unlocks sustainable investment. So I think these are some important points that I could take away from here.
And we look forward to working with different partners, investors, donors, government agencies to take AI and health further for the benefit of all the populations. Thank you.
Thank you so much. Those are reserved test patients in writing from the Capacity Building Commission and Curfew Borrow.
EventThe overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stressed the need for immediate action rather than just words. While acknowledging the …
Event“The session opened with a series of formal thank‑you remarks from the moderators, establishing a courteous atmosphere before the substantive discussion began.”
The knowledge base records that speakers in similar sessions began by expressing gratitude to the chair or delegates, establishing a respectful tone, e.g., S85, S86 and S87 describe opening remarks that thank the chairperson and set a courteous atmosphere.
“Brey used a flight‑safety analogy, asserting that in health care the acceptable failure rate must be effectively zero: a 95 % safety rate would mean five fatal crashes per hundred flights, and even a 99 % rate would still imply one.”
Risk framing with an airplane-safety analogy is discussed in the knowledge base, which defines risk as probability of undesirable outcomes and explicitly uses an aviation safety analogy to illustrate acceptable risk levels [S114].
“He advocated a shift from opaque ‘black‑box’ models to transparent ‘glass‑box’ systems that log every input, expose the underlying logic, and embed safeguards to prevent harmful prescriptions.”
The call for converting black-box AI into a “glass-box” with full transparency is echoed in the knowledge base, which states “The black box of data must become a glass box” and stresses the need for users to see data sources and training details [S13].
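The “glass‑box” idea above, namely that every input, reasoning step and output should be logged into a traceable, verifiable decision chain, can be illustrated with a minimal sketch. This is not any system described by the panel; the `DecisionRecord` class, its field names and the SHA‑256 fingerprinting are all illustrative assumptions about what an auditable record might look like.

```python
from dataclasses import dataclass, field
from typing import List
import hashlib
import json

@dataclass
class DecisionRecord:
    """Illustrative 'glass-box' audit entry: inputs, reasoning steps,
    and the final output are all logged rather than hidden."""
    inputs: dict
    reasoning: List[str] = field(default_factory=list)
    output: str = ""

    def log_step(self, step: str) -> None:
        # Record each piece of logic so the decision chain is inspectable.
        self.reasoning.append(step)

    def fingerprint(self) -> str:
        # Hash a canonical serialization of the full record, making the
        # logged chain tamper-evident for later verification.
        payload = json.dumps(
            {"inputs": self.inputs,
             "reasoning": self.reasoning,
             "output": self.output},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

# Example: a traceable (hypothetical) dosing recommendation.
rec = DecisionRecord(inputs={"patient_weight_kg": 70, "drug": "example_drug"})
rec.log_step("dose = 10 mg per kg of body weight")
rec.output = f"{10 * rec.inputs['patient_weight_kg']} mg"
print(rec.output)        # 700 mg
print(rec.fingerprint())  # 64-character SHA-256 hex digest
```

The design choice is that verification operates on the whole record, not just the output: a regulator replaying the logged inputs and steps can recompute the same fingerprint, which is one concrete reading of a “traceable decision chain.”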
“True progress must be demonstrated through measurable health benefits rather than merely deploying AI tools.”
Several knowledge-base entries stress that success should be measured by concrete health outcomes (e.g., reduced mortality, fewer complications) instead of technical metrics, aligning with the report’s emphasis on measurable health impact [S111] and [S112].
“AI systems need safeguards such as allergy checks and catastrophic‑error prevention, implying a need for human oversight in clinical decision‑making.”
The Oxford study cited in the knowledge base warns that AI health tools must operate with human oversight to avoid serious risks, supporting the report’s point about embedding safety checks and human-in-the-loop controls [S107]; a related source also calls for transparent, human-in-the-loop systems to maintain agency [S116].
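The safeguards described above, such as allergy checks and escalation to a human for borderline cases, can be sketched as a simple pre-prescription gate. Everything here is a hypothetical illustration: the function name, the rule set and the data fields are assumptions, not a description of any deployed system.

```python
from typing import Tuple

def check_prescription(prescription: dict, patient: dict) -> Tuple[bool, str]:
    """Illustrative safety gate: block known-allergy conflicts outright,
    and route out-of-range doses to mandatory human review."""
    drug = prescription["drug"]

    # Catastrophic-error prevention: never auto-approve a known allergen.
    if drug in patient.get("allergies", []):
        return False, f"blocked: patient allergic to {drug}"

    # Human-in-the-loop: unusual doses are flagged, not silently approved.
    low, high = prescription["safe_dose_range_mg"]
    if not (low <= prescription["dose_mg"] <= high):
        return False, "escalate: dose outside safe range, human review required"

    return True, "ok"

patient = {"allergies": ["penicillin"]}
print(check_prescription(
    {"drug": "penicillin", "dose_mg": 500, "safe_dose_range_mg": (250, 1000)},
    patient,
))  # (False, 'blocked: patient allergic to penicillin')
```

The point of the sketch is that the safeguard returns a reason string alongside the verdict, so every refusal or escalation is itself logged and explainable, in line with the human-oversight principle cited above.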
The panel shows strong convergence on four pillars: (1) human‑centred, transparent AI; (2) coordinated investment and capacity building beyond mere innovation; (3) trust and verifiability as pre‑conditions for adoption; and (4) equity and patient safety as overarching goals. These shared positions indicate a high level of consensus that the next phase for AI in health must be grounded in robust governance, pooled funding, a skilled workforce and patient‑focused design.
High consensus – the majority of speakers independently reinforce the same themes, suggesting that future policy and funding streams are likely to prioritize trustworthy, human‑centric AI systems supported by coordinated investment and capacity development.
The panel shows broad consensus that AI can transform health, but disagreement centers on risk tolerance, investment priorities, and how success should be measured. Zameer Brey pushes for a zero‑risk, fully auditable AI model, while others advocate pragmatic deployment, system‑level governance, and impact‑focused metrics.
Moderate – the divergences are substantive (risk vs deployment, funding focus) but do not fracture the shared vision of AI‑enabled health improvement. The implications are a need for coordinated policy that balances stringent safety standards with realistic pathways for scaling AI tools.
The discussion evolved from an initial, somewhat procedural focus on AI assistance to a nuanced debate about safety, verification, real‑world impact, and equity. Zameer Brey’s risk‑analogy and call for verifiable ‘glass‑box’ AI forced the panel to confront safety standards, while Prokar Dasgupta’s concrete implementation examples expanded the conversation to global equity and practical deployment. Alain Labrique’s emphasis on impact over accuracy redirected evaluation metrics, and the repeated human‑centric reminders reinforced ethical grounding. The introduction of a societal‑impact test (the ‘Wieselbaum test’) further deepened the ethical dimension. All these pivotal comments converged in Payden’s closing synthesis, which reframed the dialogue as a strategic investment challenge, highlighting governance, trust, and inclusive outcomes as the decisive factors for AI’s future in health. These key interventions shaped the panel’s trajectory, moving it from abstract enthusiasm to a concrete, action‑oriented roadmap.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.