Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable

20 Feb 2026 12:00h - 13:00h

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel convened to examine how artificial intelligence can be responsibly integrated into health care to improve outcomes while ensuring safety and equity [15]. Zameer Brey warned that AI-assist tools are often inserted at the wrong point in clinical workflows, noting that simply repositioning the AI-assist step to where clinicians actually need it changed results, and argued that progress should be judged by tangible health improvements such as better TB diagnosis or diabetes adherence [12-16]. Using a flight-safety analogy, he stressed that health-care AI must aim for essentially zero risk of error and called for a shift from opaque “black-box” models to transparent, verifiable “glass-box” systems that log inputs, expose their logic and embed safeguards against harmful prescriptions [22-30]. He concluded by inviting partners to collaborate on pathways toward verified AI, emphasizing the need for traceable decision chains that satisfy legal and regulatory requirements [31-35].


Professor Prokar Dasgupta, representing Responsible AI UK, emphasized that implementation, not just invention, is critical and described initiatives such as placing AI champions in hospitals across the UK, India and Africa to accelerate adoption [46-48]. He cited concrete projects including an ambient-AI system that reduces operating-room time, a tele-surgery platform enabling surgeons to operate remotely with sub-60 ms latency, and a robotic system capable of fully automated gallbladder removal, illustrating AI’s potential to expand equitable surgical access [49-53]. Dasgupta also noted limited public acceptance, observing that only a single hand was raised when clinicians were asked to volunteer for fully automated procedures, underscoring the importance of trust [54-57]. He argued that sustained investment must also target workforce development, because few medical curricula currently include AI training, and without such skills the promised benefits will not materialize [60].


Alain Labrique reinforced the shift from focusing on algorithmic accuracy to measuring real-world impact, acknowledging that clinician behavior change is slow but feasible when humans remain in the loop [62-65].


Payden P. summarized the discussion by declaring that AI in health has reached an inflection point, moving from speculative possibilities to concrete investment, implementation and impact [101-104]. He outlined that future investment must extend beyond innovation to include governance, regulation, evidence generation, data systems, workforce readiness and long-term partnerships, which together build the trust that unlocks sustainable funding [110-118]. The panel concluded that achieving equitable health outcomes with AI will depend on building verified, transparent systems, securing cross-sector trust, and investing in the people and infrastructure needed to translate promise into progress [119-120].


Keypoints


Major discussion points


The need for “verified” AI that is transparent and risk-free in healthcare.


Zameer argued that health-AI must move from a “black-box” to a “glass-box” model, providing a full audit trail of inputs, logic and safeguards to ensure zero-risk prescribing and decision-making [22-31].


Shifting focus from hype to concrete investment, implementation and impact.


Payden highlighted that AI in health has reached an inflection point where the conversation is now about funding governance, regulation, evidence generation, workforce readiness and long-term partnerships to make AI an equitable tool rather than a source of new inequalities [101-112].


Barriers to clinical adoption and the need for research-driven change.


Zameer noted the entrenched nature of medical workflows and asked what level of clinical research and evaluation investment is required to alter long-standing practice patterns [38-40].


Global equity, data diversity and real-world pilots as a pathway to inclusive AI.


Prokar described initiatives such as Responsible AI UK’s hospital champions, tele-surgery trials, and robotic automation projects, stressing that diverse data and equitable access are essential for success [46-60].


Human-centered AI and coordinated donor/partner action.


Several speakers (Ken, Haitham, Zameer) called for keeping people at the core of AI systems, aligning donor strategies, and building coordinated, cross-sector partnerships to ensure AI benefits are realized responsibly [70][72-76][83-92].


Overall purpose / goal of the discussion


The panel was convened to move the conversation on artificial intelligence in health from speculative promise to practical, equitable impact. Participants examined how to verify AI safety, invest in the necessary infrastructure and evidence base, overcome clinical inertia, and ensure global inclusivity, ultimately seeking a shared roadmap for responsible implementation.


Overall tone and its evolution


– The session opened with formal, repetitive gratitude, establishing a courteous but neutral atmosphere.


– It quickly shifted to a critical and analytical tone, using analogies (e.g., flight safety) to stress the zero-risk expectation for health AI [22-25].


– As speakers presented concrete examples and investment needs, the tone became optimistic and solution-focused, highlighting pilots, partnerships, and skill-building [46-60][101-112].


– The concluding remarks returned to a collaborative and rallying tone, urging coordinated donor action and emphasizing human-centered design [70][72-76][83-92].


Overall, the discussion progressed from polite acknowledgment to a rigorous debate on challenges, and finally to a hopeful call for collective action.


Speakers

Zameer Brey – Panelist (speaker)


Ken Ichiro Natsume – Assistant Director General at the World Intellectual Property Organization (WIPO), policy expert on international intellectual property matters [S3]


Prokar Dasgupta – Professor, practicing surgeon, innovator; leads AI implementation initiatives, affiliated with King’s College London (mentioned “my own group in King’s”) [S5]


Alain Labrique – Panel moderator/facilitator, expert in digital health interventions and global health partnerships [S8]


Justice Prathiba M. Singh – Justice (judicial title)


Haitham Ali Ahmed El-Noush – (role not specified)


Payden P. – Closing speaker, panel participant


Additional speakers:


– Elaine – referenced in discussion about legislative chain of proof (no role or title provided)


– Justice Simo – referenced as nodding, judicial title implied (no further details)


– Dr. Pagan – mentioned by Alain Labrique as “Dr. Pagan” (no role or title provided)


Full session report: Comprehensive analysis and detailed insights

The session opened with a series of formal thank-you remarks from the moderators, establishing a courteous atmosphere before the substantive discussion began [1-6][7-8].


Zameer Brey framed his remarks around the product flow diagram [12-13]. He questioned where AI-assist functions sit within clinical workflows, noting that clinicians are typically asked to complete all preparatory steps before being offered AI support; simply moving the AI-assist button earlier, so the user could invoke it when it made sense, changed the results [12-13]. He argued that true progress must be demonstrated through measurable health benefits rather than merely deploying AI tools [12-13]. Brey used a flight-safety analogy, asserting that in health care the acceptable failure rate must be effectively zero; a 95 % safety margin would be intolerable, and even a 99 % margin would imply one fatal crash per hundred flights [22-25]. From this premise he advocated a shift from opaque “black-box” models to transparent “glass-box” systems that log every input, expose the underlying logic, and embed safeguards to prevent harmful prescriptions, including checks for allergies and catastrophic errors [26-30]. He concluded by inviting partners to co-create pathways toward verified AI, emphasizing a traceable decision chain that satisfies legal and regulatory requirements, a point underscored by Justice Simo’s nod of approval [31-35].


Brey also highlighted the entrenched nature of medical practice, describing clinicians as “well-involved and well-trodden” in their workflows and asking what level of clinical research and evaluation investment is required to shift these long-standing pathways [38-40]. This set the stage for a broader debate on the resources needed to overcome professional inertia.


Prokar Dasgupta, speaking on behalf of Responsible AI UK, reframed the conversation around implementation rather than invention. He noted that the programme places AI champions in hospitals across the UK, India and Africa to accelerate adoption [46-48]. He cited concrete pilots: an ambient-AI system that drafts clinical notes and saves a month of operating-room time [49-50]; a tele-surgery platform (2,500 km distance, ≤60 ms latency) that could bring specialist surgery to underserved regions [51]; and a fully autonomous robotic system for gallbladder removal, described as “100 % accurate” while the audience expressed scepticism, with only a single hand raised when clinicians were asked to volunteer [55-57]. Dasgupta stressed that equitable impact depends on diverse data sets, illustrating the point with a story about his mother’s watch and the lack of diversified data [51-53]. He emphasized that without patient participation the investment will fail [57-58] and outlined the need to work with the “three C’s” (companies, countries and civil society) to ensure responsible deployment [58-60]. He warned that without skilled health-workforce training, currently absent from most medical and nursing curricula, the investments will fail [60]. Dasgupta also referenced the “Weizenbaum test” as a future societal-impact benchmark [66-68].


The human-centred principle resonated across the panel. Ken Ichiro Natsume reiterated that AI should be leveraged with “human beings at the centre of those utilizations” [74-75]. Justice Prathiba M. Singh summed up the sentiment with a hopeful “Here’s to a healthier world” and called for technology and development initiatives to work together [77-79]. Alain Labrique added that the focus should shift from algorithmic accuracy to real-world impact, arguing that benchmarks ought to measure behavioural change and health outcomes rather than pure predictive performance [62-65].


Payden P. provided the closing synthesis, declaring that AI in health has reached an inflection point where the debate has moved from speculative possibilities to concrete investment, implementation and impact [101-104]. He outlined four pillars of future funding: (1) governance and regulation to ensure safety and trust; (2) evidence generation to demonstrate efficacy; (3) workforce readiness and capacity-building; and (4) long-term, cross-sector partnerships [110-118]. Trust, he argued, is the “currency that unlocks sustainable investment” [118-119], and he called on donors, governments and industry to collaborate in building these foundations [120].


Complementing this, Haitham Ali Ahmed El-Noush stressed the need for coordinated donor strategies, urging the development of shared priorities and pooled investments to rally behind AI-health initiatives [70].


Across the discussion, several points of agreement emerged. All speakers endorsed a human-centred, transparent approach to AI, the necessity of coordinated investment beyond pure innovation, and the imperative that AI benefits be equitably distributed (e.g., Brey’s glass-box vision, Dasgupta’s global pilots, Labrique’s impact focus, Natsume’s human-in-the-loop stance, and Haitham’s donor coordination) [24-30][46-48][62-65][74-75][70]. They also concurred that trust must be built through verifiable systems, robust governance and demonstrable outcomes [24-30][101-108][110-118].


Notable disagreements surfaced. First, Brey’s demand for zero-risk, fully verified AI contrasted with Dasgupta’s promotion of high-autonomy tools that, while touted as “100 % accurate,” still faced public reluctance, revealing tension between ideal safety standards and pragmatic deployment [24-30][55-57]. Second, the allocation of funding diverged: Brey emphasized resources for verification infrastructure [26-30], whereas Payden and Labrique argued for broader system-wide investments in regulation, data infrastructure and capacity-building [101-108][110-114]. Third, the metric of success was contested; Labrique advocated impact-oriented benchmarks, while Brey prioritized absolute safety and error-free operation [63][24-30].


The panel distilled several key takeaways. Verified, glass-box AI that guarantees zero-risk prescribing is essential [24-30]; investment must now target governance, evidence generation, data systems, workforce training and long-term partnerships to translate promise into progress [101-108][110-118]; coordinated donor mechanisms are required to align priorities [70]; and human-centred design-keeping clinicians and patients in the loop and embedding AI education in curricula-is critical for acceptance and equity [74-75][60].


Action items proposed


1. Form a working group on verified/glass-box AI (Zameer’s invitation) [31].


2. Create pooled donor mechanisms for coordinated investment (Haitham’s suggestion) [70].


3. Fund governance, regulatory and evidence-generation programmes (Payden’s pillars) [101-108][110-114].


4. Embed AI modules into medical and nursing curricula (Dasgupta’s training call) [60].


5. Pilot inclusive projects such as ambient-AI note-taking, tele-surgery 2.0, and autonomous robotics with patient involvement (Dasgupta’s pilots) [49-57].


Unresolved issues remain, notably how to operationalise the zero-risk standard in real-world settings, the precise mechanisms for shifting entrenched clinical workflows, detailed funding models for coordinated donor action, and the development of global standards for data diversity and regulatory certification. The panel suggested a phased compromise: maintain human oversight while progressively increasing AI autonomy, pair rapid deployment of low-risk tools with rigorous verification before scaling to higher-risk applications, and align technological capability with societal acceptance through continuous patient and clinician engagement [74-75][55-57].


In sum, the discussion moved from polite acknowledgements to a rigorous examination of safety, verification, investment and equity, converging on a shared roadmap that balances stringent risk-mitigation with pragmatic, impact-driven implementation. The consensus underscores that AI can transform health only if it is transparent, trustworthy, human-centred and supported by coordinated, long-term investment in both technology and the people who will use it [101-108][118-120].


Session transcript: Complete transcript of the session
Haitham Ali Ahmed El‑Noush

Thank you. Thank you. Thank you.

Zameer Brey

Thank you. Thank you. Thank you. Thank you. We’ve confused shine… I’m sorry. Thank you. Thank you. Thank you. At the end. So think about this, you’ve done all your hard work, you’ve made your notes, you’ve written your prescription, you’ve counseled the patient and now you press AI assist. No thank you. All they did was to move the AI assist button earlier on and give the user the prescription to use it when it made sense to that user and the results changed. The fourth level is to what extent is the improvement actually going to yield an improvement in health outcomes? The reason we’re all here is what’s fundamentally going to shift? Is this going to help us get diagnosed TB better or help with adherence in diabetes, etc.

So these are some of the fundamental questions, and I think we’ve got caught up with investment at levels one and two. Let’s just check how this model works. Let’s just check the product, having given enough investment into how this gets integrated into the world. Let’s just see how this goes. So this is the product flow. This is the product flow. This is the product flow. And then ultimately, how does this shift outcomes over time? I think, can I take one more minute and talk about verified AI, or should I come back to this? I was thinking to myself, and it is probably a bad analogy, but I’m going to put it out there anyway because I’m flying this evening, that’s why I didn’t want to use it. But if I said to you all, would you fly if the likelihood of the flight arriving safely was 95 %? I’d fly, you’d fly, if it was 95 %. Would you fly if I told you it was 96, 97 or 98? No. Even if it was 95, just think for a second: if it was 99 %, that means every 100th flight taking off from Delhi airport would crash.

We wouldn’t fly. And then go, oh, right, we’ll take some other means of transport. And the reason I’m emphasizing this is that when it comes to health care, the bar should be 0 % risk of failure, 0 % risk of error. And so Elaine and many other partners we’re starting to have this discussion with is how do you get AI to be verifiable, so that you know that whatever the input is, you can document it, it’s transparent. And we spoke about this, which is: can we shift the narrative from black box to glass box? Can we really know why the model made a particular decision? We gave it X input. The patient had these criteria. Here’s the logic model.

and it gave that particular output. But when it gives that output, can we put some safeguards in place that makes 100 % sure that it isn’t prescribing something the patient’s allergic to or that’s going to end up in a catastrophic event or that’s fundamentally flawed in its logic? And that’s where we’d like to invite partners to work with us on a pathway to verified AI. Thank you. Thank you. And I can see Justice Simo. So Justice Simo is just nodding her head because I think, you know, having that chain of proof is something we like to have in legislation. So, you know, it’s always nice when there’s a trail to follow to that decision. We couldn’t have queued it up today because our next person I’m going to ask is Professor Dasgupta, who is a clinician and an innovator.

I’m sure you’ve experienced the recalcitrance and challenge of shifting medical practice. And, you know, nurses and doctors are well known for being entrenched in the way of doing things. And changing those well-involved and well-trodden paths of workflows and clinical decision pathways is very difficult. So what kind of investment do we need to make in clinical research and evaluation and evidence to shift those well-trodden paths of practice? Professor Dasgupta.

Prokar Dasgupta

Namaskar. Namaskar. Thanks. You will realize that I am a working surgeon, so in addition to invention and innovation, what I’m really interested in is implementation. I want to make a difference. And if you may be a patient someday, it will make a difference to you. I come here on behalf of Responsible AI UK, a major investment from UK Research and Innovation, not just in AI in the UK, but into an international ecosystem, including the global south. We put AI champions in every hospital, and we are trying to expand to our partners in India and in Africa, where it is needed the most. Let me give you some examples of how we are doing this. Responsible AI UK, for example, funded an evaluation of ambient AI, writing those notes.

Shortening the operating time, saving a month of wasted time in the operating room. The British Association of Physicians of Indian Origin realized: wouldn’t it be wonderful if our parents, many of whom are living in India (my mother is 87), before she has a heart attack, wouldn’t it be nice if a message on my watch told me something was going to happen? The reason I decided to make a note is because the data is not diversified enough. Without diversity of data we are not going to win this battle. Let me give you another example of investment of inequity. Two weeks ago, if you look at the British Medical Journal, there is a major article from us on tele-surgery 2.0. It means the technology exists for a surgeon to operate from two and a half thousand kilometers away using a robot, with a time delay of 60 milliseconds or less; it feels like you’re in the same operating room. Imagine this investment being one of the solutions for the 5 billion patients who do not have access to equitable surgery. That is an example. Let me give you a third example, and this is in automation. My own group in King’s has funded and invested in automation big time. The levels of autonomy in robotics go from 0 to 5: 0 is no autonomy, and the most autonomous machine today is level 3. All the men in this room have a prostate, and as we know, we have difficulty in peeing. You map the middle of the prostate with an ultrasound, you press a button, and a water jet ablates the middle of the prostate so that you don’t have to wake up 20 times at night to pee. Until last November, when one university announced the first in the world: a robotic system which can operate on big gallbladders.

Big gallbladders, 100 % accurate. Five days after this, I was at the Royal Academy of Engineering, a group like this, and I said, hands up everyone who is going to allow this machine to operate on them. So hands up, everyone who will allow a completely automated machine, 100 % accurate in pigs, to take out your gallbladder here. Any takers? There was one hand in the room. On the other occasion, there was a single hand in the room. So we went to these publics, but they are saying: not yet. So I do. Companies, of course, we have to work with them; countries, including the government side; civil society: the three C’s. If we do not bring our patients with us, all this investment is going to fail. And the final investment I would urge is in skills. There are hardly any medical and nursing schools in the world which have AI in the curriculum. If we do not have this embedded in the education of the next generation of healthcare workers, we are going to fail. So these are my parting thoughts to you. Thank you.

Thank you. Thank you.

Alain Labrique

and impressive with impactful, focusing on things that get used and work in the real world. A benchmark might be the wrong thing: not accuracy, but actually impact. And then, of course, you know, the challenge that Professor Dasgupta brought to us that, you know, it does take time to change behavior, but it is possible as long as, for the moment, we have humans in the loop. So I’d like to give each of you one sentence now just to wrap up. As you’ve heard others, what has changed your thoughts and what’s the one message you’d like to have people leave the room with? And let me just go sequentially down the road. Thank you.

Haitham Ali Ahmed El‑Noush

So for donors, we need coordination, and there is a need to develop strategies, priorities, and investments so we can rally behind.

Alain Labrique

Fantastic. Ineji.

Ken Ichiro Natsume

Thank you. I think we’re asked to respond in one sentence. I was going to say, we’re not going to do something simple. We need something. But I haven’t changed my mind, but one point which resonated to my heart, which I was not able to mention in my opening sentence, but one thing I’d like to highlight is that, okay, we can leverage artificial intelligence with human being at the center of those utilizations. So that’s what I want to highlight. Thank you.

Justice Prathiba M. Singh

That’s the thing. I’m going to actually say one sentence. Here’s to a healthier world. May AI and technology really work together in the world.

Zameer Brey

Fantastic. Professor.

Prokar Dasgupta

For AI tools and for the patients, I urge on you the Weizenbaum test, which means: do not just think about what these machines can do for us, but think about what are the societal effects of these machines. The change has to go from the Turing test to, today, the Weizenbaum test.

Zameer Brey

I think for me, the question of how do we move from promise to progress is underpinned by a theme that I’m seeing at the conference. I think that’s a very important question, and I think we need to keep humans at the center of the AI revolution.

Alain Labrique

Fantastic. So, Dr. Pagan, you’ve been patiently listening to these wise words from our panel. I’d like to give you the last word to bring this home and leave the audience with food for thought before they go for food for their stomachs.

Payden P

Thank you very much. Good afternoon to all. Sincere thanks to all the… I think it’s on. Yes. Sincere thanks to all the distinguished panelists for this very thought-provoking and very interesting conversation around AI and health. I think today’s conversation makes one thing very clear: AI and health has reached an inflection point. For years we spoke about possibility. Today the conversation has shifted to investment, implementation, and impact. I think that was really highlighted and emphasized by all. The question is no longer whether AI can improve health. The question is whether we will invest in the right foundations to ensure it improves health for everyone, not a few. Over the past hour, several themes have emerged.

And the first is around investment. Investment must go beyond innovation. It must flow into the systems that make innovation safe, trusted, and scalable: governance and regulation, evidence generation, workforce readiness and workforce capacity building, which came through very clearly, data systems, and long-term partnerships. These are not optional. They are the enabling conditions that determine whether AI becomes a tool for equity or a driver of new inequalities. Second, predictability builds confidence. When countries strengthen regulatory and legal frameworks, investment flows in. When evidence is generated and transparency shared, investment grows. When partnerships are built across sectors, investment scales. In short, trust is the currency that unlocks sustainable investment. So I think these are some important points that I could take out from here.

And we look forward to working with different partners, investors, donors, government agencies to take AI and health further for the benefit of all the populations. Thank you.

Alain Labrique

Thank you so much. Those are reserved test patients in writing from the Capacity Building Commission and Curfew Borrow.

Related Resources: Knowledge base sources related to the discussion topics (38)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The session opened with a series of formal thank‑you remarks from the moderators, establishing a courteous atmosphere before the substantive discussion began.”

The knowledge base records that speakers in similar sessions began by expressing gratitude to the chair or delegates, establishing a respectful tone, e.g., S85, S86 and S87 describe opening remarks that thank the chairperson and set a courteous atmosphere.

Additional Context (medium)

“Brey used a flight‑safety analogy, asserting that in health care the acceptable failure rate must be effectively zero; a 95 % safety margin would be intolerable, and even a 99 % margin would imply one fatal crash per hundred flights.”

Risk framing with an airplane-safety analogy is discussed in the knowledge base, which defines risk as probability of undesirable outcomes and explicitly uses an aviation safety analogy to illustrate acceptable risk levels [S114].

Confirmed (high)

“He advocated a shift from opaque “black‑box” models to transparent “glass‑box” systems that log every input, expose the underlying logic, and embed safeguards to prevent harmful prescriptions.”

The call for converting black-box AI into a “glass-box” with full transparency is echoed in the knowledge base, which states “The black box of data must become a glass box” and stresses the need for users to see data sources and training details [S13].

Additional Context (medium)

“True progress must be demonstrated through measurable health benefits rather than merely deploying AI tools.”

Several knowledge-base entries stress that success should be measured by concrete health outcomes (e.g., reduced mortality, fewer complications) instead of technical metrics, aligning with the report’s emphasis on measurable health impact [S111] and [S112].

Additional Context (medium)

“AI systems need safeguards such as allergy checks and catastrophic‑error prevention, implying a need for human oversight in clinical decision‑making.”

The Oxford study cited in the knowledge base warns that AI health tools must operate with human oversight to avoid serious risks, supporting the report’s point about embedding safety checks and human-in-the-loop controls [S107]; a related source also calls for transparent, human-in-the-loop systems to maintain agency [S116].

External Sources (121)
S1
How Small AI Solutions Are Creating Big Social Change — – Zameer Brey- Antoine Tesniere
S2
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — – Ken Ichiro Natsume- Prokar Dasgupta- Zameer Brey- Alain Labrique – Zameer Brey- Alain Labrique – Zameer Brey- Payden…
S3
Panel Discussion AI and the Creative Economy — -Kenichiro Natsume: Assistant Director General at WIPO (World Intellectual Property Organization), works on policy side …
S4
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-and-the-creative-economy — I’m seeing this big flashing red sign which says time’s up. I don’t know, mine or the panel’s. I’m hoping it’s only the …
S5
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Professor Prokar Dasgupta, speaking as both a practicing surgeon and innovator, provided sobering real-world evidence of…
S6
Classification of Digital Health Interventions v1.0 — 1. Hawkins, R. P., et al. (2008). Understanding tailoring in communicating about health. Health Education Research, 23(3…
S7
Multistakeholder Dialogue on National Digital Health Transformation — Alain Labrique: Fantastic. Thank you, Leah. I really appreciate everyone’s partnership. and engagement this morning,…
S8
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — -Alain Labrique- Panel moderator/facilitator
S9
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — – Prokar Dasgupta- Justice Prathiba M. Singh
S10
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — -Haitham Ali Ahmed El‑Noush- Role/expertise not specified in transcript
S11
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — – Haitham Ali Ahmed El‑Noush- Payden P. – Prokar Dasgupta- Payden P.
S12
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S13
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — “The black box of data must become a glass box.”[11]. “the commander taking a decision based on an AI -enabled system bu…
S14
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Thank you very much. That’s very nice. So moving to your left, Nikhil. Nikhil, you guys have done a phenomenal job in, y…
S15
Press Conference: Closing the AI Access Gap — They argue that the only path forward is through a collaborative approach that prioritizes trust. This requires active e…
S16
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S17
Education meets AI — This aligns with the Sustainable Development Goals (SDGs) of Quality Education (SDG 4) and Reduced Inequalities (SDG 10)…
S18
Successes & challenges: cyber capacity building coordination | IGF 2023 — In today’s world, cyberattacks and cybercrime incidents are on the rise, resulting in international, governmental, multi…
S19
How to believe in the future? — Another viewpoint suggests that the current profit-driven business model needs to be revisited. While acknowledging that…
S20
Keynote-Rishad Premji — The conversation has shifted from possibility to practicality, from experimentation to adoption and scaled impact
S21
Towards 2030 and Beyond: Accelerating the SDGs through Access to Evidence on What Works — These key comments shaped the discussion by grounding abstract concepts in concrete possibilities, emphasizing the moral…
S22
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — In no case have we seen that level of accuracy. So it’s very important that we keep the human in the loop. It’s very imp…
S23
Leveraging the UN system to advance global AI Governance efforts — The speaker advocates for a horizontal approach within the UN, urging agencies such as the WIPO, ITU, UNU, ILO, and FAI …
S24
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — Apart from data protection, the speakers also emphasized the significance of collaboration between the public and privat…
S25
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namaste. Honorable Minister Vaishnav, Your Excellency’s colleagues, let me begin by thanking our host, Prime Minister Mo…
S26
Harnessing AI for Child Protection | IGF 2023 — In conclusion, protecting children online requires a multifaceted approach. Legislative measures, such as the ones imple…
S27
OPENING SESSION | IGF 2023 — In conclusion, understanding and sharing information about the development of AI and ML are crucial for their effective …
S28
Healthcare experts demand transparency in AI use — Healthcare professionals, including researchers and clinicians, are keen to incorporate AI into their daily work but dem…
S29
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — This comment set a foundational tone for the entire discussion by establishing the importance of evidence over hype. It …
S30
Transforming Health Systems with AI From Lab to Last Mile — The speakers demonstrated strong consensus on the need for human-centered AI development, real-world evidence generation…
S31
Launch / Award Event #78 Digital Governance in Africa: Post-Summit of the Future — These key comments shaped the discussion by moving it from high-level policy frameworks to practical implementation chal…
S32
MedTech and AI Innovations in Public Health Systems — Ms. Padmanabhan identified three primary integration challenges: workflow integration, change management resistance, and…
S33
Diplomatic policy analysis — Digital divides:Not all countries have equal access to advanced analytical tools, perpetuating inequalities in diplomati…
S34
Introducción — – That national registries and programmes, as well as vaccination records and epidemiological surveill…
S35
Traversing biomedical science, technology & innovation, policy, and diplomacy — Building on these experiences, I am now keen on engaging the lifesciences community across countries with varying levels…
S36
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Global Cooperation Versus Regional Diversity Joanna Bryson: Hi, yeah, sure. Thanks very much and sorry not to be in …
S37
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou:Okay, thank you. Thank you Michel and this is definitely a tension and maybe a balance at some point between t…
S38
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S39
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S40
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S41
Democratizing AI: Open foundations and shared resources for global impact — ## International Collaboration Examples Mary-Anne Hartley: Yeah, sure. I think what we all saw with the use case over t…
S42
WSIS Action Line C7: E-health – Fostering foundations for digital health transformation in the age of AI — ## Background and Context Throughout the discussion, speakers consistently emphasised that government ownership and lea…
S43
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Current evaluation focuses on technical accuracy, but real-world success depends on user acceptance, which varies based …
S44
From Technical Safety to Societal Impact Rethinking AI Governanc — Virginia stresses that AI safety cannot be limited to technical robustness, accuracy or alignment. It must incorporate m…
S45
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — “A benchmark might be the wrong thing, not accuracy but actually impact.”[26]. “and impressive with impactful, focusing …
S46
High-level AI Standards panel — Practical Implementation and Real-World Impact While the database represents a valuable step forward, the true measure …
S47
Prosperity Through Data Infrastructure — Despite the challenges, the analysis suggests that successfully digitalising requires creative solutions, even when face…
S48
Can we test for trust? The verification challenge in AI — Moderate to high disagreement with significant implications. The fundamental disagreement between Yampolskiy’s pessimist…
S49
Day 0 Event #257 Enhancing Data Governance in the Public Sector — Moderate disagreement level with significant implications – the speakers largely agree on goals (effective data governan…
S50
Acknowledgements — The advantages of physically removing the human from a weapon delivery platform (such as remotely piloted vehicles like …
S51
Open Forum #73 The Need for Regulating Autonomous Weapon Systems — Kevin Whelan: Thank you and good afternoon everyone. It’s a pleasure to be here and to speak on behalf of Amnesty Inte…
S52
AI, smart cities, and the surveillance trade-off — The key is keeping humans in the loop at decision points that matter. AI can surface insights and recommendations, but p…
S53
Safe and Responsible AI at Scale Practical Pathways — Guardrails, Human‑in‑the‑Loop, and Risk‑Assessment Mechanisms Are Essential for Reliable Deployment
S54
Welcome Address — Modi emphasizes that AI development must focus on human values rather than purely machine efficiency. A human‑centric ap…
S55
Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World — Additionally, they emphasise the critical need for safeguarding security and user privacy in the interoperability standa…
S56
Hello from the CyberVerse: Maximizing the Benefits of Future Technologies — In virtual spaces, regulation and safety measures were discussed. Speakers underscored the need for flexible, ecosystem-…
S57
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — The analysis examines multiple perspectives on the importance of regulating AI. The speakers stress the necessity of reg…
S58
Ateliers : rapports restitution et séance de clôture — Aurélien Macé: Apparently I get 6.6 minutes, twice as much as the others, so I'm told. Le thème de vendre…
S59
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S60
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the 9821st meeting of the AI Securi…
S61
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — The adoption of digital health technology should consider the principle of equitable access. This means ensuring that al…
S62
Open Forum #33 Building an International AI Cooperation Ecosystem — Ethical Considerations and Inclusivity Human rights principles | Children rights | Privacy and data protection Pelayo …
S63
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — Geralyn Miller:Yeah, thank you very much for the question. So I want to respond to in this context to some of the commen…
S64
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Fundamental infrastructure challenges—including limited computing power, inadequate connectivity, and capacity gaps—requ…
S65
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S66
OPENING SESSION | IGF 2023 — In conclusion, understanding and sharing information about the development of AI and ML are crucial for their effective …
S67
Healthcare experts demand transparency in AI use — Healthcare professionals, including researchers and clinicians, are keen to incorporate AI into their daily work but dem…
S68
AI in healthcare gains regulatory compass from UK experts — Professor Alastair Denniston has outlined the core principles for regulating AI in healthcare, describing AI as the ‘X-ray…
S69
Knowledge Café: WSIS+20 Consultation: Strenghtening Multistakeholderism — This observation grounded the discussion in practical realities and influenced subsequent conversations about the need f…
S70
WS #103 Aligning strategies, protecting critical infrastructure — – The need to move from high-level discussions to concrete, actionable measures
S71
Keynote-Rishad Premji — The conversation has shifted from possibility to practicality, from experimentation to adoption and scaled impact
S72
IGF 2024 Opening Ceremony — This comment highlights the urgent need for practical action beyond policy discussions. It’s thought-provoking because i…
S73
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — The discussion revealed that technical capabilities often exceed institutional readiness for AI adoption. Behavioral cha…
S74
29, filed Jan. 22, 2010, at 9-10. — – Focus on the barriers to adoption. Successful efforts address multiple barriers to adoption simultaneously. They combi…
S75
MedTech and AI Innovations in Public Health Systems — Ms. Padmanabhan identified three primary integration challenges: workflow integration, change management resistance, and…
S76
World Economic Forum® — It can take 20-30 years to develop a new drug or vaccine, and the costs and risks are high. R&D efforts are not coor…
S77
Adoption and adaptation of e-health systems for developing nations: The case of Botswana — – Access to healthcare facilities. – Cost savings via telemedicine activities. – Collaboration amongst the key participa…
S78
WS #100 Integrating the Global South in Global AI Governance — Fadi Salim: Thank you. And this covers a little bit the grassroot element of it. So it’s awareness, diversity, inclusi…
S79
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — If compute, database and foundational models remain concentrated of a few, we risk creating a new form of inequality, an…
S80
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S81
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S82
Partnership on AI expands and launches initiatives focused on AI challenges and opportunities — The Partnership on AI, founded in September 2016 by Amazon, DeepMind/Google, Facebook, IBM, and Microsoft with the aim t…
S83
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S84
GOVERNING AI FOR HUMANITY — – 178 By promoting a common understanding, common ground and common benefits, the proposals above seek to address the ga…
S85
Ad Hoc Consultation: Friday 2nd February, Morning session — During the session, chaired by Mr. Chair, the speaker began by extending greetings to colleagues and esteemed delegates …
S87
Any other business /Adoption of the report/ Closure of the session — In summary, the speaker artfully blended expressions of gratitude with recognition of collaborative efforts and a call f…
S88
Open Mic & Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S89
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S90
World Economic Forum Town Hall on AI Ethics and Trust — The discussion maintained a serious, critical tone throughout, with panelists expressing genuine concern and urgency abo…
S91
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S92
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S93
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S94
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S95
WS #6 Bridging Digital Gaps in Agriculture & trade Transformation — The tone was largely optimistic and solution-oriented. Speakers were enthusiastic about the potential of the Internet Ba…
S96
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S97
Host Country Open Stage — The tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a profes…
S98
Next-Gen Industrial Infrastructure / Davos 2025 — The tone was largely optimistic and forward-looking, with speakers enthusiastically sharing their visions and initiative…
S99
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S100
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S101
Closing remarks – Charting the path forward — The tone throughout was consistently formal, diplomatic, and optimistic. It maintained a collaborative and forward-looki…
S102
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S103
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S104
https://dig.watch/event/india-ai-impact-summit-2026/catalyzing-global-investment-in-ai-for-health_-who-strategic-roundtable — Fantastic. Professor. Here’s the logic model. and it gave that particular output. But when it gives that output, can we…
S105
Closing Plenary of Global Roundtable — Chair:Thank you very much, Ms. Amner, for the very good summary, as well as to Ms. Lenker for the earlier summary. Excel…
S106
High Level Session 3: AI & the Future of Work — Ishita Barua: Thank you. In a world where AI can generate content faster than we are actually able to consume it and rea…
S107
AI health tools need clinicians to prevent serious risks, Oxford study warns — The University of Oxford has warned that AI in healthcare, primarily through chatbots, should not operate without human ov…
S108
AI shows promise in supporting emergency medical decisions — Drexel University researchers studied how AI can aid emergency decisions in pediatric trauma at Children’s National Medica…
S109
AI could save billions but healthcare adoption is slow — AI is being hailed as a transformative force in healthcare, with the potential to reduce costs and improve outcomes dramati…
S110
The Intelligent Coworker: AI’s Evolution in the Workplace — Christoph Schweizer advocated for new measurement approaches, emphasising “adoption and usage,” “employee satisfaction s…
S111
Responsible AI for Shared Prosperity — Success should be measured by actual impact on lives – reducing maternal mortality, eliminating diseases, escaping pover…
S112
Keynote-Roy Jakobs — Success will ultimately be measured by health outcomes rather than technology metrics – earlier disease detection, fewer…
S113
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-in-healthcare-india-ai-impact-summit — So which are the specific use cases that companies like Anthropic view or are targeting for to solve healthcare problems…
S114
Building Trustworthy AI Foundations and Practical Pathways — Risk should be defined as probability of undesirable outcomes characterized by likelihood and severity, using airplane s…
S115
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — Alban, can I pick up quickly? I think it’s really important, and actually I’m going to name the number if it’s okay. Oka…
S116
Toward Collective Action_ Roundtable on Safe & Trusted AI — Professor Jonathan Shock warned against the “Silicon Valley approach of move fast and break things” when dealing with go…
S117
Host Country Open Stage — D Silva emphasized the transformative potential of sustainability reporting, stating that “transparency is not just abou…
S118
What is it about AI that we need to regulate? — Concrete Actions to Address AI in Judicial Systems, Immigration and Government Decision-MakingThe discussions across IGF…
S119
Smart Regulation Rightsizing Governance for the AI Revolution — Low to moderate disagreement level. The speakers generally agreed on the problems (AI divides, need for cooperation, cap…
S120
The Innovation Beneath AI: The US-India Partnership powering the AI Era — He sees a large opportunity for U.S. and Indian firms to co‑create companies that will build refining capacity and reduc…
S121
In brief — – External evidence from systematic research: valid and clinically relevant findings from patient-centred clinical resea…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Zameer Brey
1 argument · 82 words per minute · 789 words · 572 seconds
Argument 1
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
EXPLANATION
Zameer argues that healthcare AI must be fully verifiable, moving from a black‑box to a glass‑box model, with zero tolerance for failure. He stresses that a transparent chain of input, logic and output is essential to ensure patient safety.
EVIDENCE
He uses a flight safety analogy to illustrate that healthcare AI must have zero tolerance for failure, then calls for a verifiable, ‘glass-box’ system that records inputs, logic and outputs, and includes safeguards against allergic reactions or catastrophic errors [24-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Roundtable transcripts record Brey advocating a shift from “black box” to “glass box” AI with documented inputs, transparent logic and traceable reasoning, emphasizing zero-tolerance for failure [S2] and broader calls for algorithmic transparency in AI systems [S13].
MAJOR DISCUSSION POINT
Verified AI
AGREED WITH
Prokar Dasgupta, Payden P.
DISAGREED WITH
Alain Labrique
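Brey’s input–logic–output chain can be made concrete. Below is a minimal sketch, not anything presented at the roundtable: the `DecisionRecord` fields, rule names, drug names and allergy check are all illustrative assumptions, shown only to indicate how a glass-box recommender might log every step it takes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry in the input -> logic -> output chain."""
    inputs: dict
    rule_fired: str
    output: str
    safeguards_checked: list
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def recommend(patient: dict, log: list) -> str:
    """Toy rule-based recommender that records every step it takes.

    The drug and rule names here are hypothetical placeholders."""
    safeguards = []
    # Safeguard: block any drug the patient is allergic to.
    if "penicillin" in patient.get("allergies", []):
        safeguards.append("allergy-check: penicillin blocked")
        decision, rule = "amoxicillin WITHHELD", "allergy_guard"
    else:
        safeguards.append("allergy-check: passed")
        decision, rule = "amoxicillin 500mg", "standard_course"
    # Every decision leaves a traceable record: inputs, the rule
    # that fired, the output, and which safeguards were checked.
    log.append(DecisionRecord(patient, rule, decision, safeguards))
    return decision

audit_log: list = []
print(recommend({"allergies": ["penicillin"]}, audit_log))  # amoxicillin WITHHELD
print(audit_log[0].rule_fired)                              # allergy_guard
```

The point of the sketch is the audit trail, not the toy rules: each output can be traced back to its inputs and the logic that produced it, which is what a "verifiable chain" would require of a real system.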
Prokar Dasgupta
2 arguments · 108 words per minute · 743 words · 410 seconds
Argument 1
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
EXPLANATION
Prokar highlights several real‑world AI projects—ambient AI for clinical note‑taking, tele‑surgery that enables remote operations, and autonomous robotic systems for procedures—as examples of concrete investment. He stresses that patient involvement and acceptance are crucial for equitable outcomes.
EVIDENCE
He cites several projects: an evaluation of ambient AI that writes clinical notes and reduces operating-room time [49-50]; a BMJ article on tele-surgery 2.0 enabling surgeons to operate from 2,500 km away with sub-60 ms latency [51]; and work on autonomous robotic systems for prostate procedures and gallbladder surgery, highlighting the need for patient acceptance [51-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The WHO roundtable cites Dasgupta’s presentation of real-world projects: ambient AI for clinical note-taking, tele-surgery with sub-60 ms latency, and autonomous robotic procedures, illustrating concrete investment and the need for patient acceptance [S2].
MAJOR DISCUSSION POINT
Concrete AI implementations
AGREED WITH
Zameer Brey, Payden P.
DISAGREED WITH
Zameer Brey
Argument 2
Embedding AI education within medical and nursing curricula to build a skilled health workforce (Prokar Dasgupta)
EXPLANATION
Prokar points out that very few medical and nursing schools currently teach AI, and calls for investment in skills development to embed AI education, ensuring the next generation of health workers can safely and effectively use AI tools.
EVIDENCE
He notes that there are hardly any medical and nursing schools that include AI in their curricula and urges investment in skills to embed AI education for the next generation of healthcare workers [60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy briefs on AI workforce development stress the importance of integrating AI training into medical and nursing education as part of a broader reskilling agenda, echoing Dasgupta’s call for curriculum integration [S16] and [S17].
MAJOR DISCUSSION POINT
AI education in health curricula
Haitham Ali Ahmed El‑Noush
1 argument · 10 words per minute · 47 words · 258 seconds
Argument 1
Coordination among donors and the development of clear strategies, priorities, and pooled investments to rally support (Haitham Ali Ahmed El‑Noush)
EXPLANATION
Haitham calls for donors to work together, establishing coordinated strategies, clear priorities and pooled funding mechanisms so that AI‑for‑health initiatives can be effectively supported and scaled.
EVIDENCE
He states that donors need coordination, clear strategies, priorities and pooled investments to rally behind AI for health initiatives [70].
MAJOR DISCUSSION POINT
Donor coordination
AGREED WITH
Prokar Dasgupta, Payden P., Alain Labrique
Payden P.
1 argument · 117 words per minute · 276 words · 141 seconds
Argument 1
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
EXPLANATION
Payden observes that AI in health has reached an inflection point, shifting the conversation from possibility to concrete investment, implementation, governance, evidence generation and sustained partnerships. He emphasizes that these elements are now the primary drivers of progress.
EVIDENCE
He notes that AI in health has reached an inflection point, moving from possibility to investment, implementation and impact, and emphasizes the need for governance, evidence generation and long-term partnerships [101-108]; he further outlines that investment must flow into safety, trust and scalability through regulation, data systems and capacity building [110-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Keynote remarks describe the AI-in-health conversation moving from possibility to practicality, emphasizing investment, governance, evidence generation and sustained partnerships, matching Payden’s framing [S20] and roundtable observations on safety, trust and scalability investments [S2].
MAJOR DISCUSSION POINT
Shift to investment and governance
AGREED WITH
Zameer Brey, Prokar Dasgupta
DISAGREED WITH
Zameer Brey, Alain Labrique
Alain Labrique
2 arguments · 87 words per minute · 219 words · 150 seconds
Argument 1
Investment must go beyond innovation to fund systems that ensure safety, trust, and scalability through regulation, data infrastructure, and capacity building (Alain Labrique)
EXPLANATION
Alain argues that funding should not stop at innovative AI tools; it must also support the surrounding systems—regulatory frameworks, data infrastructure and workforce capacity—that make AI safe, trustworthy and scalable, preventing new inequalities.
EVIDENCE
He argues that investment must go beyond pure innovation to fund systems that make AI safe, trusted and scalable, including governance, regulation, data infrastructure and workforce capacity building, describing these as essential enabling conditions [110-114]; he adds that predictability and trust attract further investment, linking regulatory strength, evidence generation and partnerships to increased funding [115-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Roundtable discussions note that funding should extend beyond novel tools to cover governance, regulation, data systems and workforce capacity building, directly supporting Labrique’s point [S2] and further reinforced by capacity-building commentary [S7].
MAJOR DISCUSSION POINT
Investment beyond innovation
AGREED WITH
Zameer Brey, Payden P.
DISAGREED WITH
Zameer Brey, Payden P.
Argument 2
Maintaining humans in the loop is crucial for behavior change and achieving real‑world impact (Alain Labrique)
EXPLANATION
Alain stresses that keeping humans involved in AI‑driven clinical workflows is essential for changing entrenched practices and delivering tangible health outcomes, noting that behavior change is possible when humans remain central.
EVIDENCE
He emphasizes that keeping humans in the loop is essential for changing clinical practice and achieving real-world impact, noting that behavior change is possible when humans remain involved [64-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panels on AI deployment stress keeping humans in the loop for safety, behavior change and real-world impact, aligning with Labrique’s argument [S22] and roundtable remarks on human-centric workflows [S2].
MAJOR DISCUSSION POINT
Human in the loop
AGREED WITH
Zameer Brey, Prokar Dasgupta, Kenichiro Natsume, Justice Prathiba M. Singh
DISAGREED WITH
Zameer Brey
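The human-in-the-loop pattern Labrique describes can be sketched as a simple confidence gate. This is a hypothetical illustration, not anything proposed at the roundtable: the function name, threshold value and example outputs are assumptions.

```python
from typing import Callable

def gated_decision(
    ai_confidence: float,
    ai_output: str,
    human_review: Callable[[str], str],
    threshold: float = 0.95,
) -> str:
    """Accept the AI suggestion only at high confidence;
    otherwise hand the case to a clinician for review, keeping
    the human in the loop at the decision points that matter."""
    if ai_confidence >= threshold:
        return ai_output
    return human_review(ai_output)

# A clinician override stands in for real review here:
print(gated_decision(0.80, "discharge", lambda s: "admit for observation"))
print(gated_decision(0.99, "discharge", lambda s: "admit for observation"))
```

The design choice is that the system never acts autonomously below the confidence bar; the clinician remains the final decision-maker for any case the model is unsure about.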
Kenichiro Natsume
1 argument · 143 words per minute · 84 words · 35 seconds
Argument 1
AI should be leveraged with humans at the core of its utilization (Kenichiro Natsume)
EXPLANATION
Kenichiro asserts that AI technologies must be deployed with a human‑centric approach, ensuring that people remain central to decision‑making and that AI augments rather than replaces human expertise.
EVIDENCE
He says AI can be leveraged but humans must remain at the centre of its utilisation, emphasizing a human-centric approach [74-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Natsume’s comments in the roundtable highlight a human-centric approach to AI, echoing broader calls for AI systems that augment rather than replace human expertise [S2] and the importance of human oversight noted in other AI governance sessions [S22].
MAJOR DISCUSSION POINT
Human‑centric AI
AGREED WITH
Zameer Brey, Prokar Dasgupta, Alain Labrique, Justice Prathiba M. Singh
Justice Prathiba M. Singh
1 argument · 120 words per minute · 27 words · 13 seconds
Argument 1
Collaboration between AI and broader technology sectors is essential for a healthier world (Justice Prathiba M. Singh)
EXPLANATION
Justice Singh delivers a concise statement that achieving a healthier world requires AI to work together with other technology sectors, highlighting the need for cross‑sector collaboration.
EVIDENCE
She delivers a concise statement that a healthier world requires AI and broader technology sectors to work together [78-79].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UN-level discussions advocate for cross-sector collaboration between AI and other technology domains to advance health outcomes, supporting Singh’s statement [S23] and examples of public-private partnership for innovation [S24].
MAJOR DISCUSSION POINT
AI‑technology collaboration
AGREED WITH
Zameer Brey, Prokar Dasgupta, Alain Labrique, Kenichiro Natsume
Agreements
Agreement Points
Human‑centered AI and keeping humans in the loop is essential for safe and acceptable health‑AI deployment.
Speakers: Zameer Brey, Prokar Dasgupta, Alain Labrique, Kenichiro Natsume, Justice Prathiba M. Singh
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Maintaining humans in the loop is crucial for behavior change and achieving real‑world impact (Alain Labrique)
AI should be leveraged with humans at the core of its utilization (Kenichiro Natsume)
Collaboration between AI and broader technology sectors is essential for a healthier world (Justice Prathiba M. Singh)
All speakers stress that AI systems must remain transparent, involve patients or users, and keep humans central to decision-making to ensure safety and acceptance [24-30][51-57][64-65][74-75][78-79].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with calls for human-in-the-loop safeguards in AI governance, as highlighted in discussions on smart-city AI and responsible AI at scale [S52][S53][S54].
Coordinated investment and capacity building beyond pure innovation are required to scale AI for health.
Speakers: Prokar Dasgupta, Payden P., Alain Labrique, Haitham Ali Ahmed El‑Noush
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
Investment must go beyond innovation to fund systems that ensure safety, trust, and scalability through regulation, data infrastructure, and capacity building (Alain Labrique)
Coordination among donors and the development of clear strategies, priorities, and pooled investments to rally support (Haitham Ali Ahmed El‑Noush)
Speakers call for pooled, strategic funding, governance structures, and workforce training to move AI from pilots to sustainable health impact [49-57][60][101-108][110-114][70].
POLICY CONTEXT (KNOWLEDGE BASE)
The WHO roundtable emphasized shifting investment from pure benchmarks toward impact-driven scaling and capacity building, echoing the need for coordinated financing and infrastructure development [S45][S46][S64].
Trust, verification and transparency are prerequisites for AI adoption in health.
Speakers: Zameer Brey, Alain Labrique, Payden P.
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Investment must go beyond innovation to fund systems that ensure safety, trust, and scalability through regulation, data infrastructure, and capacity building (Alain Labrique)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
All three emphasize that AI must be auditable, predictable and trustworthy; trust is described as the currency that unlocks sustainable investment [24-30][115-118][118-119].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder AI principles stress trust and transparency as preconditions for deployment, and recent debates highlight challenges in verification frameworks [S59][S60][S48].
Equity and patient safety must be central; AI should improve health for everyone, not a privileged few.
Speakers: Zameer Brey, Prokar Dasgupta, Payden P.
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
Speakers underline zero-tolerance for error, the need for patient involvement, and the goal that AI benefits all populations, not just a few [24-30][51-57][106-108].
POLICY CONTEXT (KNOWLEDGE BASE)
Digital health policy documents underline equitable access and a human-rights-based AI approach, calling for inclusive design and safety for all populations [S61][S58][S63][S55].
Similar Viewpoints
Both stress that AI must be transparent and trustworthy, with verifiable logic and safeguards, as a foundation for safe health deployment [24-30][115-118].
Speakers: Zameer Brey, Alain Labrique
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Investment must go beyond innovation to fund systems that ensure safety, trust, and scalability through regulation, data infrastructure, and capacity building (Alain Labrique)
Both call for a shift from pure innovation to concrete, funded implementation, governance and evidence generation to realise health impact [49-57][101-108].
Speakers: Prokar Dasgupta, Payden P.
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
Both highlight that AI must be deployed with patients/humans at the centre, ensuring acceptance and ethical use [51-57][74-75].
Speakers: Prokar Dasgupta, Ken Ichiro Natsume
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
AI should be leveraged with humans at the core of its utilization (Ken Ichiro Natsume)
Both stress the need for coordinated, strategic funding mechanisms and partnerships to scale AI for health [70][101-108].
Speakers: Haitham Ali Ahmed El‑Noush, Payden P.
Coordination among donors and the development of clear strategies, priorities, and pooled investments to rally support (Haitham Ali Ahmed El‑Noush)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
Unexpected Consensus
Even while advocating for highly autonomous technologies, speakers still stress the necessity of patient safety safeguards.
Speakers: Zameer Brey, Prokar Dasgupta
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Zameer calls for zero-risk, fully verifiable AI, while Prokar promotes autonomous robotics and tele-surgery; nevertheless both agree that safeguards, patient involvement and transparency are non-negotiable, revealing an unexpected alignment between caution and high-tech ambition [24-30][51-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Guidelines on safe AI at scale and governance frameworks insist on safety-by-design and guardrails even for autonomous systems [S53][S59][S44].
Overall Assessment

The panel shows strong convergence on four pillars: (1) human‑centred, transparent AI; (2) coordinated investment and capacity building beyond mere innovation; (3) trust and verifiability as pre‑conditions for adoption; and (4) equity and patient safety as overarching goals. These shared positions indicate a high level of consensus that the next phase for AI in health must be grounded in robust governance, pooled funding, skilled workforce and patient‑focused design.

High consensus – the majority of speakers independently reinforce the same themes, suggesting that future policy and funding streams are likely to prioritize trustworthy, human‑centric AI systems supported by coordinated investment and capacity development.

Differences
Different Viewpoints
Risk tolerance – zero‑risk verified AI vs pragmatic deployment with acceptable risk
Speakers: Zameer Brey, Prokar Dasgupta
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Zameer argues that health-care AI must be fully verifiable with a 0 % tolerance for failure, using a glass-box model and safeguards [24-30]. Prokar promotes deploying existing AI tools (ambient note-taking, tele-surgery, autonomous robots) even though they are not yet perfect and notes patient reluctance, suggesting a more pragmatic, incremental approach [49-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on AI trust reveal divergent views on zero-risk expectations versus pragmatic risk acceptance in deployment [S48][S44].
Where investment should be directed – verification infrastructure vs system‑wide governance, data and capacity building
Speakers: Zameer Brey, Payden P., Alain Labrique
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
Investment must go beyond innovation to fund systems that ensure safety, trust, and scalability through regulation, data infrastructure, and capacity building (Alain Labrique)
Zameer focuses funding on creating a transparent, auditable AI pipeline and safeguards [24-30]. Payden stresses that the new priority is financing governance, evidence generation and partnership mechanisms to make AI trustworthy [101-108]. Alain adds that investment must also cover regulatory frameworks, data systems and workforce capacity as essential enabling conditions [110-114]. The three speakers therefore disagree on the primary allocation of resources.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholder discussions note disagreement on prioritizing verification infrastructure versus broader governance, data and capacity investments [S49][S64][S45].
Metrics of success – safety/accuracy versus real‑world impact
Speakers: Alain Labrique, Zameer Brey
Maintaining humans in the loop is crucial for behavior change and achieving real‑world impact (Alain Labrique)
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Alain suggests that impact should be measured by behavior change and outcomes, arguing that benchmarks should focus on impact rather than pure accuracy [63]. Zameer centers the discussion on eliminating any error, using safety as the primary metric [24-30].
POLICY CONTEXT (KNOWLEDGE BASE)
WHO and AI standards panels argue that impact metrics should outweigh pure accuracy benchmarks, urging real-world outcome measurement [S45][S46][S44].
Unexpected Differences
Zero‑risk requirement versus realistic acceptance of residual risk in autonomous systems
Speakers: Zameer Brey, Prokar Dasgupta
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
… autonomous robotic system … 100 % accurate … yet only one hand raised in the room, indicating patient reluctance (Prokar Dasgupta)
Zameer’s insistence on a 0 % failure tolerance [24-30] clashes with Prokar’s presentation of advanced autonomous systems that are touted as “100 % accurate” but still face human acceptance barriers, revealing an unexpected tension between theoretical safety guarantees and practical deployment realities [51-57].
POLICY CONTEXT (KNOWLEDGE BASE)
The verification challenge and broader AI safety discourse highlight tension between zero-risk demands and tolerable residual risk in autonomous health AI [S48][S44].
Overall Assessment

The panel shows broad consensus that AI can transform health, but disagreement centers on risk tolerance, investment priorities, and measurement of success. Zameer pushes for a zero‑risk, fully auditable AI model, while others advocate for pragmatic deployment, system‑level governance, and impact‑focused metrics.

Moderate – the divergences are substantive (risk vs deployment, funding focus) but do not fracture the shared vision of AI‑enabled health improvement. The implications are a need for coordinated policy that balances stringent safety standards with realistic pathways for scaling AI tools.

Partial Agreements
All speakers share the overarching goal of harnessing AI to improve health outcomes, but they diverge on the primary pathway: Zameer stresses absolute verification, Payden stresses governance and partnership, Alain stresses system‑level investment, and Ken stresses a human‑centric deployment. The consensus is that AI must be trustworthy and human‑centered, yet the route to achieve that differs [24-30][101-108][110-114][74-75].
Speakers: Zameer Brey, Payden P., Alain Labrique, Ken Ichiro Natsume
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
Investment must go beyond innovation to fund systems that ensure safety, trust, and scalability through regulation, data infrastructure, and capacity building (Alain Labrique)
AI should be leveraged with humans at the core of its utilization (Ken Ichiro Natsume)
Both agree that financial resources are essential to scale AI for health. Prokar calls for targeted investment in specific tools and patient‑centred pilots, while Haitham calls for coordinated donor strategies and pooled funding mechanisms. They share the goal of mobilising money but differ on coordination versus project‑specific focus [49-57][70].
Speakers: Prokar Dasgupta, Haitham Ali Ahmed El‑Noush
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Coordination among donors and the development of clear strategies, priorities, and pooled investments to rally support (Haitham Ali Ahmed El‑Noush)
Takeaways
Key takeaways
AI in health must be trustworthy and verifiable – a ‘glass‑box’ approach that provides a transparent input‑output chain and aims for zero risk of error (Zameer Brey).
The conversation has shifted from speculative possibilities to concrete investment, implementation, and impact; funding must support safety, governance, evidence generation, data systems, workforce readiness and long‑term partnerships (Payden P., Alain Labrique).
Coordinated donor strategies and pooled investments are essential to align priorities and scale equitable AI solutions (Haitham Ali Ahmed El‑Noush).
Human beings must remain at the centre of AI deployment – keeping humans in the loop, embedding AI education in medical and nursing curricula, and ensuring societal impact and patient involvement (Ken Ichiro Natsume, Prokar Dasgupta, Justice Prathiba M. Singh).
Equity is a cross‑cutting requirement: AI tools need diverse data, global collaboration (UK, India, Africa), and patient‑focused design to avoid new inequalities (Prokar Dasgupta).
Trust, built through transparent regulation and demonstrable outcomes, is the currency that unlocks sustainable investment in health AI (Payden P.).
Resolutions and action items
Form a working group with partners to develop a pathway for verified, glass‑box AI in healthcare (proposed by Zameer Brey).
Create coordinated donor mechanisms and a shared strategic framework for AI health investments (suggested by Haitham Ali Ahmed El‑Noush).
Invest in governance, regulatory frameworks, evidence generation, data infrastructure and capacity‑building programmes to ensure safe, scalable AI (highlighted by Payden P. and Alain Labrique).
Integrate AI education into medical and nursing curricula worldwide to build a skilled health workforce (Prokar Dasgupta).
Pilot and evaluate concrete AI applications such as ambient note‑taking, tele‑surgery and autonomous robotics with patient involvement to demonstrate equity impact (Prokar Dasgupta).
Establish long‑term, cross‑sector partnerships (government, industry, civil society) to sustain AI health initiatives (Payden P.).
Unresolved issues
Concrete methods and standards for achieving the claimed zero‑risk, fully verifiable AI in clinical practice.
Specific strategies to overcome entrenched clinical workflows and achieve behaviour change among clinicians.
Detailed funding models and allocation mechanisms for coordinated donor investments.
How to operationalise global equity – e.g., data diversification, patient engagement, and access for low‑resource settings.
Exact regulatory and legal frameworks required to certify AI tools as safe and trustworthy.
Metrics and timelines for measuring real‑world health‑outcome improvements attributable to AI.
Suggested compromises
Adopt a phased approach that keeps humans in the loop while progressively increasing AI autonomy, balancing safety with innovation.
Combine rapid AI deployment (e.g., ambient note‑taking) with rigorous verification processes before scaling to higher‑risk applications.
Align AI development with both technological capability and societal acceptance, ensuring patient and clinician involvement throughout.
Prioritise investment in foundational systems (governance, data, training) alongside product development to mitigate risk while advancing impact.
Thought Provoking Comments
When it comes to health care, the bar should be 0 % risk of failure, 0 % risk of error. We need AI that is verifiable – a ‘glass box’ where we can document the input, see the logic, and ensure it never prescribes something harmful.
This reframes the AI discussion from performance metrics to absolute safety, introducing a stringent verification standard that challenges the prevailing tolerance for probabilistic risk in AI systems.
It shifted the conversation from generic enthusiasm about AI assistance to a critical focus on safety and transparency. Subsequent speakers (e.g., Prokar Dasgupta) referenced the need for trustworthy, equitable AI, and the panel later emphasized governance and trust as essential for investment.
Speaker: Zameer Brey
Responsible AI UK is funding real‑world implementations – from ambient AI that writes notes and shortens OR time, to tele‑surgery 2.0 that lets a surgeon operate 2,500 km away with ≤60 ms latency, and autonomous robotic systems for procedures like prostate treatment.
He moves the dialogue from abstract concepts to concrete, scalable examples, highlighting both technological feasibility and the importance of implementation in diverse settings.
His examples broadened the scope of the discussion to include equity, data diversity, and global health impact, prompting other panelists (Alain Labrique, Payden P.) to stress the need for investment in infrastructure and workforce readiness.
Speaker: Prokar Dasgupta
The challenge isn’t just about accuracy; it’s about impact. We must measure whether AI actually changes health outcomes, not just whether it makes the right prediction.
This comment redirects the metric of success from technical performance to real‑world health impact, urging a shift in evaluation criteria.
It prompted the panel to discuss evidence generation and outcome measurement, influencing Payden’s later summary about moving from possibility to measurable impact.
Speaker: Alain Labrique
We need to move from promise to progress – keep humans at the centre of the AI revolution and ensure AI tools are integrated into workflows with clear, trusted pathways.
Reiterates the central theme of human‑centric AI, reinforcing the ethical and practical necessity of integrating AI without displacing clinicians.
This repetition reinforced the human‑in‑the‑loop principle, which was echoed by Ken Ichiro Natsume and Justice Prathiba Singh, solidifying it as a consensus point before the closing remarks.
Speaker: Zameer Brey (repeated emphasis)
For AI tools and for patients, we must think beyond the Turing test to the ‘Weizenbaum test’ – evaluating societal effects, not just technical capability.
Introduces a novel evaluative framework that expands assessment from technical competence to societal impact, challenging the audience to consider broader consequences.
This sparked a subtle shift toward discussing ethical implications and equity, which later appeared in Payden’s emphasis on AI as a tool for equity versus a driver of new inequalities.
Speaker: Prokar Dasgupta
AI and health have reached an inflection point. The question is no longer whether AI can improve health, but whether we will invest in the right foundations – governance, regulation, evidence, workforce capacity – to ensure it improves health for everyone, not a few.
Synthesizes the discussion into a clear call to action, framing the next steps as investment in systemic enablers rather than just technology development.
Serves as the concluding turning point, consolidating earlier themes (safety, impact, equity, human‑centered design) into a strategic roadmap, and setting the tone for future collaborations and commitments.
Speaker: Payden P.
Overall Assessment

The discussion evolved from an initial, somewhat procedural focus on AI assistance to a nuanced debate about safety, verification, real‑world impact, and equity. Zameer Brey’s flight‑safety analogy and call for verifiable ‘glass‑box’ AI forced the panel to confront safety standards, while Prokar Dasgupta’s concrete implementation examples expanded the conversation to global equity and practical deployment. Alain Labrique’s emphasis on impact over accuracy redirected evaluation metrics, and the repeated human‑centric reminders reinforced ethical grounding. The introduction of a societal‑impact test (the ‘Weizenbaum test’) further deepened the ethical dimension. All these pivotal comments converged in Payden’s closing synthesis, which reframed the dialogue as a strategic investment challenge, highlighting governance, trust, and inclusive outcomes as the decisive factors for AI’s future in health. These key interventions shaped the panel’s trajectory, moving it from abstract enthusiasm to a concrete, action‑oriented roadmap.

Follow-up Questions
To what extent will AI improvements translate into actual health outcome improvements?
Understanding the real-world impact of AI on patient health is essential to justify its adoption beyond technical performance.
Speaker: Zameer Brey
How will AI integration shift health outcomes over time?
Longitudinal effects determine whether AI provides sustained benefits or merely short‑term gains.
Speaker: Zameer Brey
How can AI be made verifiable – shifting from a black‑box to a glass‑box model?
Transparency is needed for clinicians and regulators to trust AI recommendations and to audit decision pathways.
Speaker: Zameer Brey
What safeguards can ensure AI never prescribes something a patient is allergic to or that could cause catastrophic error?
Safety guarantees are critical for clinical acceptance and for meeting the zero‑risk expectation in healthcare.
Speaker: Zameer Brey
What level and type of investment is required in clinical research, evaluation, and evidence generation to shift entrenched clinical practice pathways?
Changing long‑standing workflows demands robust evidence and dedicated funding to overcome resistance from clinicians.
Speaker: Zameer Brey, Prokar Dasgupta
What are the broader societal effects of deploying AI tools in healthcare?
Beyond technical performance, AI may influence equity, employment, patient autonomy, and public trust, requiring systematic study.
Speaker: Prokar Dasgupta
How can we move from AI promise to measurable progress in health systems?
Identifying concrete steps, metrics, and implementation pathways is needed to translate hype into tangible benefits.
Speaker: Zameer Brey
How can AI curricula be integrated into medical and nursing education worldwide?
Embedding AI knowledge in health‑professional training ensures future workforce readiness and safe AI use.
Speaker: Prokar Dasgupta
How can we obtain and incorporate more diverse data sets to avoid bias and improve AI equity?
Diverse data are essential to develop AI that works reliably across different populations and reduces health disparities.
Speaker: Prokar Dasgupta
What pathways and standards are needed to develop verified AI that provides a transparent chain of proof for each decision?
Creating verifiable AI frameworks will support regulatory compliance and clinician confidence.
Speaker: Zameer Brey
How can regulatory and legal frameworks be strengthened to build trust and attract sustainable investment in AI for health?
Clear governance and legal certainty are prerequisites for scaling AI solutions responsibly.
Speaker: Payden P.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.