Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable
20 Feb 2026 12:00h - 13:00h
Session at a glance
Summary
This discussion focused on the investment, implementation, and impact of artificial intelligence in healthcare, examining how to move from promise to practical progress. The panelists emphasized that AI in health has reached an inflection point where the conversation has shifted from possibility to real-world application and outcomes. A key theme was the need for strategic investment beyond just innovation, extending into governance, regulation, evidence generation, workforce capacity building, and data systems that make AI safe, trusted, and scalable.
Zameer Brey highlighted the importance of verified AI with zero risk of failure in healthcare, advocating for transparent “glass box” models rather than “black box” systems where decision-making processes can be documented and understood. He stressed the need for safeguards to prevent catastrophic errors, such as prescribing medications patients are allergic to. Professor Dasgupta, speaking as both a working surgeon and innovator, emphasized the critical importance of implementation and bringing patients along in the AI revolution. He provided examples of successful AI applications, including ambient AI for note-taking, tele-surgery capabilities, and robotic automation, while noting public hesitancy toward fully automated medical procedures.
The discussion revealed that investment must flow into enabling conditions that determine whether AI becomes a tool for equity or drives new inequalities. Panelists agreed that keeping humans at the center of AI development is crucial, and that predictability and trust-building through strong regulatory frameworks are essential for attracting sustainable investment. The conversation concluded that success requires coordination among donors, strategic priorities, and partnerships across sectors to ensure AI improves health outcomes for everyone, not just a privileged few.
Keypoints
Major Discussion Points:
– AI Integration and Implementation Challenges: The discussion emphasized moving beyond just developing AI tools to focusing on how they integrate into real-world healthcare workflows. Speakers highlighted that simply having accurate AI isn’t enough – the timing, placement, and user experience of AI assistance in clinical practice determines its actual effectiveness.
– The Need for Verified and Transparent AI in Healthcare: A critical point was raised about the necessity for “verifiable AI” with zero tolerance for error in healthcare settings. The speakers advocated for shifting from “black box” to “glass box” AI systems where decision-making processes are transparent and traceable, with built-in safeguards to prevent catastrophic errors.
– Investment Beyond Innovation: Panelists stressed that investment must extend beyond just developing AI technology to include governance, regulation, evidence generation, workforce training, and data systems. The focus should be on building the foundational infrastructure that makes AI safe, trusted, and scalable.
– Human-Centered AI and Behavioral Change: The discussion highlighted the resistance to change in medical practice and the importance of keeping humans at the center of AI implementation. Even with 100% accurate AI systems, public and professional acceptance remains a significant challenge that requires careful attention to societal impacts.
– Equity and Global Access: Speakers emphasized ensuring AI benefits reach underserved populations globally, particularly in the Global South, rather than exacerbating existing healthcare inequalities. Examples included telesurgery capabilities that could address the 5 billion patients without access to equitable surgery.
Overall Purpose:
The discussion aimed to address how to move AI in healthcare from promise to practical progress, focusing on the investments, frameworks, and strategies needed to ensure AI improves health outcomes for everyone, not just a privileged few.
Overall Tone:
The tone was professional and forward-looking, with a sense of urgency about making AI work effectively in healthcare. While optimistic about AI’s potential, the conversation maintained a realistic and cautious approach, acknowledging significant challenges around implementation, acceptance, and equity. The tone remained consistently collaborative throughout, with speakers building on each other’s points and emphasizing the need for partnerships across sectors.
Speakers
Speakers from the provided list:
– Haitham Ali Ahmed El‑Noush – Role/expertise not specified in transcript
– Zameer Brey – Involved in AI development and healthcare technology, discusses AI integration and verified AI concepts
– Prokar Dasgupta – Professor, working surgeon, clinician and innovator, represents Responsible AI UK, specializes in tele-surgery and robotic surgery
– Alain Labrique – Panel moderator/facilitator
– Ken Ichiro Natsume – Role/expertise not clearly specified in transcript
– Justice Prathiba M. Singh – Justice (legal/judicial role), mentioned in context of legislation and legal frameworks
– Payden P. – Associated with Capacity Building Commission, provides closing remarks on AI and health policy
Additional speakers:
None identified – all speakers mentioned in the transcript are included in the provided speakers names list.
Full session report
This panel discussion examined the transition of artificial intelligence in healthcare from theoretical promise to practical implementation, featuring perspectives from clinicians, technologists, and policy experts on the challenges of deploying AI systems that deliver meaningful health outcomes.
Moving Beyond Technical Accuracy to Real-World Implementation
Zameer Brey emphasized that healthcare AI has reached an inflection point where the focus must shift from technical development to practical implementation and measurable health outcomes. He outlined four levels of AI evaluation, arguing that the industry has concentrated too heavily on technical accuracy while neglecting crucial aspects of workflow integration and actual health impact. Brey stressed that the timing and placement of AI assistance within clinical workflows is critical for adoption, noting that user experience and workflow integration are as important as technical performance.
The discussion highlighted how AI tools are often forced into clinical practice at inconvenient points in healthcare workflows, rather than being designed around how healthcare providers naturally want to engage with these systems. This represents a fundamental shift from technology-driven to human-centered implementation strategies.
The Need for “Verified AI” and Zero-Risk Standards
Brey introduced the concept of “verified AI” using a compelling aviation analogy. He observed that no one would board a flight with a 95%, or even a 99%, likelihood of arriving safely, since 99% would mean that every hundredth flight taking off would crash. Healthcare, he argued, must meet an even more stringent bar: zero risk of failure and zero risk of error, far beyond the safety standards accepted in other technological applications.
Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for shifting from “black box” to “glass box” AI systems where inputs can be documented, logic models are transparent, and reasoning can be traced and verified. This transparency is essential for clinical safety and building the trust necessary for widespread adoption.
Clinical Reality and Human Acceptance Challenges
Professor Prokar Dasgupta, speaking as both a practicing surgeon and innovator, provided sobering real-world evidence of implementation challenges. He described a robotic surgical system that achieved 100% accuracy in gallbladder removal procedures in pigs, but when he asked medical audiences whether they would allow such a system to operate on them, only one person raised their hand in each of two instances, despite the system’s perfect technical performance.
This highlighted a critical disconnect between technical achievement and human acceptance. Even when AI systems demonstrate superior performance, psychological barriers and trust issues create significant resistance to adoption. Dasgupta emphasized that the healthcare AI community must invest as much effort in understanding human psychology and social acceptance as in technical development.
Dasgupta also shared practical examples of AI implementation, including ambient AI systems that saved “a month of wasted time in the operating room” and telesurgery capabilities that allow a surgeon to operate from two and a half thousand kilometers away with a time delay of 60 milliseconds or less. He also noted the work of Responsible AI UK in investing in AI champions within hospitals to facilitate adoption.
Investment in Foundational Infrastructure
The discussion revealed consensus that successful AI implementation requires comprehensive investment beyond technical innovation. This includes governance frameworks, regulatory systems, evidence generation capabilities, and workforce capacity building. Dasgupta noted that very few medical and nursing schools worldwide have integrated AI into their curricula, representing a critical gap in preparing the next generation of healthcare workers.
The panelists identified the need for clear regulatory pathways and legal frameworks that create predictability and encourage sustainable investment. When countries establish robust governance structures, investment flows more readily, creating positive cycles of development and adoption.
Global Equity and Access Considerations
The discussion consistently addressed concerns that AI healthcare benefits might concentrate among privileged populations while underserved communities are left behind. Dasgupta highlighted AI’s potential to address global health inequities, noting that telesurgery technology could potentially provide surgical access to the 5 billion people worldwide who currently lack equitable surgical care.
However, panelists acknowledged that without intentional focus on equity, AI could exacerbate existing healthcare disparities. The concentration of AI development in wealthy countries, combined with lack of diverse training data, creates risks that AI systems will be optimized for populations with existing healthcare access while performing poorly for underserved communities.
Justice Prathiba M. Singh emphasized ensuring that technology and AI work together to create “a healthier world” for all populations, reinforcing the need for deliberate strategies to include global health equity considerations from the earliest stages of development.
Human-Centered Approaches and Societal Impact
Ken Ichiro Natsume emphasized that artificial intelligence must be leveraged with “human being at the center of those utilizations.” Dasgupta advocated for moving beyond traditional AI evaluation metrics to what he called the “Weizenbaum test,” which considers broader societal effects and implications of AI systems rather than simply whether machines can mimic human behavior.
This human-centered approach recognizes that successful AI implementation requires careful consideration of how AI systems affect healthcare relationships, professional roles, patient autonomy, and social structures. The discussion acknowledged that changing established medical practices is inherently difficult, as healthcare professionals often work within well-established workflows refined over years of practice.
Coordination and Future Directions
The panel concluded with recognition that AI in healthcare has moved beyond exploring possibilities to addressing concrete implementation challenges. Critical success factors identified include maintaining human-centered development approaches, investing in comprehensive foundational systems, building trust through transparency, addressing global equity from the outset, and fostering coordination across sectors.
The discussion emphasized that the ultimate measure of success for healthcare AI should be improved health outcomes for all populations, particularly those currently underserved, rather than technical benchmarks alone. This outcome-focused approach provides a framework for evaluating investments and ensuring that AI development serves broader health equity goals.
Several questions remain for future exploration, including mechanisms for achieving zero-risk AI systems, strategies for overcoming patient resistance to AI-driven care, methods for ensuring global data diversity, and approaches for embedding AI education throughout healthcare training programs.
Session transcript
Haitham Ali Ahmed El‑Noush: Thank you. So think about this: you’ve done all your hard work, you’ve made your notes, you’ve written your prescription, you’ve counseled the patient, and now you press AI assist. No, thank you. All they did was move the AI assist button earlier on and give the user the discretion to use it when it made sense to that user, and the results changed. The fourth level is: to what extent is the improvement actually going to yield an improvement in health outcomes? The reason we’re all here is, what’s fundamentally going to shift? Is this going to help us diagnose TB better or help with adherence in diabetes, etc.?
So these are some of the fundamental questions, and I think we’ve got caught up with investment at levels one and two: let’s just check how this model works, let’s just check the product, without having given enough investment into how this gets integrated into the real world, into the product flow, and then ultimately how this shifts outcomes over time. Can I take one more minute and talk about verified AI? I was thinking to myself, and it’s probably a bad analogy, but I’m going to put it out there anyway, because I’m flying this evening, and that’s why I didn’t want to use it: if I said to you all, would you fly if the likelihood of the flight arriving safely was 95%? Would you fly? If I told you it was 96, 97 or 98, would you fly? No. Just think for a second: even if it was 99%, that means every 100th flight taking off from Delhi airport would crash.
We would not fly. And then go, oh, right, we’ll take some other means of transport. And the reason I’m emphasizing this is that when it comes to health care, the bar should be 0% risk of failure, 0% risk of error. And so Elaine and many other partners we’re starting to have this discussion with is how do you get AI to be verifiable so that you know that whatever the input is, you can document that, it’s transparent, and we spoke about this, which is can we shift the narrative from black box to glass box? Can we really know why the model made a particular decision? We gave it X input. The patient had these criteria.
Here’s the logic model, and it gave that particular output. But when it gives that output, can we put some safeguards in place that make 100% sure that it isn’t prescribing something the patient’s allergic to, or that’s going to end up in a catastrophic event, or that’s fundamentally flawed in its logic? And that’s where we’d like to invite partners to work with us on a pathway to verified AI. Thank you. And I can see Justice Singh is just nodding her head, because, I think, you know, having that chain of proof is something we like to have in legislation. So, you know, it’s always nice when there’s a trail to follow to that decision.
We couldn’t have queued it up better, because the next person I’m going to ask is Professor Dasgupta, who is a clinician and an innovator. I’m sure you’ve experienced the recalcitrance and challenge of shifting medical practice. And, you know, nurses and doctors are well known for being entrenched in their way of doing things. And changing those well-involved and well-trodden paths of workflows and clinical decision pathways are very difficult to shift. So what kind of investment do we need to make in clinical research and evaluation and evidence to shift those well-trodden paths of practice? Professor Dasgupta.
Namaskar. Thanks. You should realize that I am a working surgeon, so in addition to invention and innovation, what I’m really interested in is implementation. I want to make a difference. And if you are ever a patient someday, it will make a difference to you. I come here on behalf of Responsible AI UK, a major investment from UK Research and Innovation, not just in AI in the UK, but into an international ecosystem, including the Global South. We put AI champions in every hospital, and we are trying to expand into our partners in India and in Africa, where it is needed the most. Let me give you some examples of how we are doing this. Responsible AI UK, for example, funded an evaluation of ambient AI, writing those notes.
Shortening the operating time, saving a month of wasted time in the operating room. The British Association of Physicians of Indian Origin realized: wouldn’t it be wonderful if, for our parents, many of whom are living in India (my mother is 87), before she has a heart attack, a message on my watch told me something was going to happen? The reason I decided to make a note of this is because the data is not diversified enough; without diversity of data we are not going to win this battle. Let me give you another example, of investment in equity. Two weeks ago, if you look at the British Medical Journal, there is a major article from us on tele-surgery 2.0. It means to me the technology exists for a surgeon to operate from two and a half thousand kilometers away using a robot with a time delay of 60 milliseconds or less; it feels like you’re in the same operating room. Imagine this investment being one of the solutions to the 5 billion patients who do not have access to equitable surgery. That is an example. Let me give you a third example, and this is in automation. My own group at King’s has funded and invested in automation big time. The levels of autonomy in robotics go from 0 to 5, where 0 is no autonomy; the most autonomous machine today is level 3. You map the prostate with an ultrasound (all the men in this room have a prostate, and as we know, we have difficulty in peeing), you press a button, and a water jet ablates the middle of the prostate so that you don’t have to wake up 20 times at night to pee. That was until last November, when one university announced the first robotic system in the world which can operate on pig gallbladders.
Pig gallbladders, 100% autonomous, 100% accurate. Five days after this, I was at the Royal Academy of Engineering, with a group like this, and I said: hands up, everyone who is going to allow this machine to operate on them. So hands up, everyone who will allow a completely automated machine, 100% accurate in pigs, to take out your gallbladder here. Any takers? There was one hand in the room. On the other occasion, there was a single hand in the room. So we have gone out to the public, but they are saying: not yet. They are still hesitant, still today. So I urge: companies, of course we have to work with them; countries, including the government side; and civil society. The three C’s. If we do not bring our patients with us, all this investment is going to fail. And the final investment I would urge is in skills. There are hardly any medical and nursing schools in the world which have AI in the curriculum; if we do not have this embedded in education of the next generation of healthcare workers we are going to fail. So these are my parting thoughts to you. Thank you.
Thank you. So, replacing impressive with impactful, focusing on things that get used and work in the real world. A benchmark might be the wrong thing, not accuracy but actually impact. And then, of course, you know, the challenge that Professor Dasgupta brought to us: that, you know, it does take time to change behavior, but it is possible as long as, for the moment, we have humans in the loop. So I’d like to give each of you one sentence now, just to wrap up. As you’ve heard the others, what has changed your thoughts, and what’s the one message you’d like people to leave the room with? Let me just go sequentially down the row.
Haitham Ali Ahmed El‑Noush:
So for donors, we need coordination, and there is a need to develop strategies, priorities, and investments so we can rally behind.
Fantastic. Ineji.
Thank you. I think we’re asked to respond in one sentence. I was going to say, we’re not going to do something simple. We need something. But I haven’t changed my mind, but one point which resonated to my heart, which I was not able to mention in my opening sentence, but one thing I’d like to highlight is that, okay, we can leverage artificial intelligence with human being at the center of those utilizations. So that’s what I want to highlight. Thank you.
That’s the thing, I’m going to actually say one sentence: here’s to a healthier world, where AI and technology really work together.
Fantastic. Professor.
For AI tools and for the patients, I urge you to apply the Weizenbaum test, which means: do not just think about what these machines can do for us, but think about what the societal effects of these machines are. The change has to go from the Turing test to, today, the Weizenbaum test.
I think for me, the question of how we move from promise to progress is underpinned by a theme that I’m seeing at the conference: we need to keep humans at the center of the AI revolution.
Fantastic. So, Dr. Payden, you’ve been patiently taking in these wise words from our panel. I’d like to give you the last word to bring this home and leave the audience with food for thought before they go for food for their stomachs.
Payden P.:
Thank you very much. Good afternoon to all. Sincere thanks to all the… I think it’s on. Yes. Sincere thanks to all the distinguished panelists for this very thought-provoking and very interesting conversation around AI and health. I think today’s conversation makes one thing very clear: AI and health has reached an inflection point. For years we spoke about possibility. Today the conversation has shifted to investment, implementation, and impact. I think that was really highlighted and emphasized by all. The question is no longer whether AI can improve health. The question is whether we will invest in the right foundations to ensure it improves health for everyone, not a few. Over the past hour, several themes have emerged.
And the first is around investment. Investment must go beyond innovation. It must flow into the systems that make innovation safe, trusted, and scalable through governance and regulation, evidence generation, workforce readiness, and also workforce capacity building, which came very clearly, data systems, and long-term partnerships. These are not optional. They are the enabling conditions that determine whether AI becomes a tool for equity or a driver of new inequalities. Second, predictability builds confidence. When countries strengthen regulatory and legal frameworks, investment flows in. When evidence is generated and transparently shared, investment grows. When partnerships are built across sectors, investment scales. In short, trust is the currency that unlocks sustainable investment. So I think these are some important points that I could take away from here.
And we look forward to working with different partners, investors, donors, government agencies to take AI and health further for the benefit of all the populations. Thank you.
Thank you so much for those wise parting words from the Capacity Building Commission.
Zameer Brey
Speech speed
82 words per minute
Speech length
789 words
Speech time
572 seconds
Verifiable Glass‑Box AI
Explanation
Zameer emphasizes the need to shift from opaque black‑box models to transparent, verifiable AI so that inputs and outputs can be documented, ensuring safety and accountability. He invites partners to collaborate on pathways toward verified AI.
Evidence
“And so Elaine and many other partners we’re starting to have this discussion with is how do you get AI to be verifiable so that you know that whatever the input is, you can document that, it’s transparent, and we spoke about this, which is can we shift the narrative from black box to glass box?” [1]. “And that’s where we’d like to invite partners to work with us on a pathway to verified AI.” [3].
Major discussion point
Verification, Transparency, and Trust in AI
Topics
Artificial intelligence | The enabling environment for digital development
Human‑Centered AI for Clinicians
Explanation
He argues that AI should augment clinicians while keeping humans central to decision‑making, acknowledging the difficulty of altering entrenched clinical workflows. Maintaining a human in the loop is essential for acceptance and impact.
Evidence
“We need to keep humans at the center of the AI revolution.” [2]. “And changing those well‑involved and well‑trodden paths of workflows and clinical decision pathways are very difficult to shift.” [46].
Major discussion point
Human‑Centered AI and Workflow Integration
Topics
Artificial intelligence | Capacity development
AI for TB, Diabetes and Other Diseases
Explanation
Brey questions whether AI tools will actually improve diagnosis of TB or adherence in diabetes, stressing the need for outcome‑focused evaluation across disease areas.
Evidence
“Is this going to help us diagnose TB better or help with adherence in diabetes, etc.” [42].
Major discussion point
Equity, Access, and Global Collaboration
Topics
Social and economic development | Artificial intelligence
Payden P.
Speech speed
117 words per minute
Speech length
276 words
Speech time
141 seconds
Trust as Currency for Investment
Explanation
Payden states that trust is the essential currency that unlocks sustainable investment, linking strong regulatory frameworks and transparency to the flow of funding for AI in health.
Evidence
“In short, trust is the currency that unlocks sustainable investment.” [16]. “When countries strengthen regulatory and legal frameworks, investment flows in.” [17].
Major discussion point
Verification, Transparency, and Trust in AI
Topics
Financial mechanisms | Artificial intelligence
Investment into Governance, Evidence, and Readiness
Explanation
He calls for AI investment to move beyond pure innovation, directing resources toward governance, regulation, evidence generation, and workforce readiness to ensure safe and scalable deployment.
Evidence
“It must flow into the systems that make innovation safe, trusted, and scalable through governance and regulation, evidence generation, workforce readiness, and also workforce capacity building, which came very clearly, data systems, and long‑term partnerships.” [5]. “Investment must go beyond innovation.” [12].
Major discussion point
Investment, Coordination, and Implementation
Topics
Financial mechanisms | The enabling environment for digital development
Building Workforce Capacity
Explanation
Payden highlights that workforce capacity and skills are enabling conditions that determine whether AI becomes a tool for equity or a driver of new inequalities.
Evidence
“They are the enabling conditions that determine whether AI becomes a tool for equity or a driver of new inequalities.” [6]. “It must flow into the systems that make innovation safe, trusted, and scalable through governance and regulation, evidence generation, workforce readiness, and also workforce capacity building…” [5].
Major discussion point
Skills Development and Workforce Readiness
Topics
Capacity development | Financial mechanisms
Alain Labrique
Speech speed
87 words per minute
Speech length
219 words
Speech time
150 seconds
Impact over Accuracy
Explanation
Labrique argues that benchmarks should prioritize real‑world impact rather than pure algorithmic accuracy, focusing on solutions that are actually used and effective in practice.
Evidence
“A benchmark might be the wrong thing, not accuracy but actually impact.” [26]. “Replacing impressive with impactful, focusing on things that get used and work in the real world.” [27].
Major discussion point
Verification, Transparency, and Trust in AI
Topics
Artificial intelligence | Social and economic development
Humans in the Loop for Behavior Change
Explanation
He notes that changing clinical behavior takes time but is possible as long as humans remain in the loop, underscoring the importance of human oversight in AI deployment.
Evidence
“it does take time to change behavior, but it is possible as long as, for the moment, we have humans in the loop.” [44].
Major discussion point
Human‑Centered AI and Workflow Integration
Topics
Artificial intelligence | Capacity development
Haitham Ali Ahmed El‑Noush
Speech speed
10 words per minute
Speech length
47 words
Speech time
258 seconds
Coordinated Donor Strategies
Explanation
He stresses that donors must coordinate, develop clear strategies, set priorities, and pool investments to effectively scale AI‑driven health initiatives.
Evidence
“So for donors, we need coordination, and there is a need to develop strategies, priorities, and investments so we can rally behind.” [34].
Major discussion point
Investment, Coordination, and Implementation
Topics
Financial mechanisms | The enabling environment for digital development
Prokar Dasgupta
Speech speed
108 words per minute
Speech length
743 words
Speech time
410 seconds
Funding AI Champions and Evaluation
Explanation
Prokar describes placing AI champions in hospitals and funding evaluations of ambient AI to drive implementation and scale of AI tools in health systems.
Evidence
“We put AI champions in every hospital, and we are trying to expand into our partners in India and in Africa, where it is needed the most.” [35]. “Responsible AI UK, for example, funded an evaluation of ambient AI, writing those notes.” [14].
Major discussion point
Investment, Coordination, and Implementation
Topics
Financial mechanisms | Artificial intelligence
Equitable AI through Data Diversity
Explanation
He highlights that lack of diverse data hampers equitable AI outcomes and cites technology that could serve billions lacking access to equitable surgery, stressing the need for inclusive data.
Evidence
“without diversity of data we are not going to win this battle” [41]. “it means to me the technology exists for a surgeon to operate … imagine this investment being one of the solutions to the 5 billion patients who do not have access to equitable surgery” [41].
Major discussion point
Equity, Access, and Global Collaboration
Topics
Social and economic development | Artificial intelligence
Embedding AI in Medical Education
Explanation
Prokar warns that very few medical and nursing schools teach AI; without embedding AI in curricula, the next generation of health workers will be unprepared for AI‑driven care.
Evidence
“there are hardly any medical and nursing schools in the world which have AI in the curriculum, if we do not have this embedded in education of the next generation of healthcare workers we are going to fail” [37].
Major discussion point
Skills Development and Workforce Readiness
Topics
Capacity development | Artificial intelligence
Ken Ichiro Natsume
Speech speed
143 words per minute
Speech length
84 words
Speech time
35 seconds
Human at the Center of Utilization
Explanation
Natsume stresses that AI should be leveraged with humans at the centre of its utilization, ensuring technology serves rather than replaces human judgment.
Evidence
“we can leverage artificial intelligence with human being at the center of those utilizations.” [45].
Major discussion point
Human‑Centered AI and Workflow Integration
Topics
Artificial intelligence | Capacity development
Justice Prathiba M. Singh
Speech speed
120 words per minute
Speech length
27 words
Speech time
13 seconds
Collaboration for a Healthier World
Explanation
She calls for collaborative effort across technology and stakeholders to build a healthier world, emphasizing joint work as essential for global health improvement.
Evidence
“and technology, we really work together in the world.” [11]. “Here’s to a healthier world.” [29].
Major discussion point
Equity, Access, and Global Collaboration
Topics
Social and economic development | Artificial intelligence
Agreements
Agreement points
Human-centered approach to AI development and implementation
Speakers
– Ken Ichiro Natsume
– Prokar Dasgupta
– Zameer Brey
– Alain Labrique
Arguments
Artificial intelligence must be leveraged with human beings at the center of utilization
The focus should shift from what machines can do for us to considering the societal effects of these machines
Humans must remain in the loop as behavior change takes time but is possible
Summary
All speakers emphasized that AI development should prioritize human needs, maintain human oversight, and consider broader societal implications rather than just technical capabilities
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Social and economic development
Investment must go beyond technical development to include foundational systems
Speakers
– Zameer Brey
– Payden P.
– Prokar Dasgupta
Arguments
Investment has been concentrated on technical development rather than integration into existing healthcare systems
Investment must extend beyond innovation to include governance, regulation, evidence generation, and workforce capacity building
Investment in skills and embedding AI education in healthcare worker training is essential for success
Summary
Speakers agreed that successful AI implementation requires comprehensive investment in supporting systems, governance, education, and integration rather than just technical innovation
Topics
Artificial intelligence | Financial mechanisms | The enabling environment for digital development | Capacity development
Focus on real-world impact and outcomes over technical metrics
Speakers
– Zameer Brey
– Alain Labrique
Arguments
Focus must shift from technical accuracy to real-world implementation and actual health outcomes improvement
The right benchmark may not be technical accuracy but actual impact
Summary
Both speakers emphasized that success should be measured by actual health outcomes and real-world impact rather than technical performance metrics
Topics
Artificial intelligence | Social and economic development | Monitoring and measurement
Need for transparency and trust in AI systems
Speakers
– Zameer Brey
– Payden P.
Arguments
AI systems need to shift from ‘black box’ to ‘glass box’ with transparent, documentable decision-making processes
Trust serves as the currency that unlocks sustainable investment in AI healthcare
Strong governance and transparency are necessary enabling conditions for AI success
Summary
Both speakers emphasized that transparency in AI decision-making and building trust are fundamental requirements for successful AI implementation in healthcare
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | The enabling environment for digital development
AI should promote equity rather than create new inequalities
Speakers
– Prokar Dasgupta
– Payden P.
– Justice Prathiba M. Singh
Arguments
Data diversity is crucial – without diversified data, the AI healthcare battle cannot be won
AI should become a tool for equity rather than a driver of new inequalities
Technology and AI should work together to create a healthier world for all
Summary
Speakers agreed that AI development must intentionally address equity concerns and ensure benefits reach all populations, particularly underserved communities
Topics
Artificial intelligence | Closing all digital divides | Human rights and the ethical dimensions of the information society
Similar viewpoints
Both speakers recognize that successful AI integration requires understanding and working with existing healthcare workflows and the challenges of changing established medical practices
Speakers
– Zameer Brey
– Prokar Dasgupta
Arguments
AI tools should be integrated at the right point in clinical workflows when users find them most valuable, not forced at inconvenient times
Changing established medical practices and clinical decision pathways requires significant investment in clinical research and evaluation
Topics
Artificial intelligence | Capacity development | Social and economic development
Both speakers identify education and workforce development as critical gaps that must be addressed for successful AI implementation in healthcare
Speakers
– Prokar Dasgupta
– Payden P.
Arguments
Very few medical and nursing schools worldwide have AI integrated into their curriculum
Investment must extend beyond innovation to include governance, regulation, evidence generation, and workforce capacity building
Topics
Artificial intelligence | Capacity development | Social and economic development
Both speakers emphasize the need for coordinated, strategic approaches to investment and the importance of clear frameworks to guide funding decisions
Speakers
– Haitham Ali Ahmed El‑Noush
– Payden P.
Arguments
Donors need better coordination and unified strategies to prioritize AI health investments effectively
Predictability in regulatory and legal frameworks builds confidence and attracts investment
Topics
Artificial intelligence | Financial mechanisms | The enabling environment for digital development
Unexpected consensus
Zero tolerance for AI errors in healthcare
Speakers
– Zameer Brey
Arguments
Healthcare AI must achieve zero percent risk of failure, unlike other industries where 95% accuracy might be acceptable
Explanation
While this was primarily argued by Zameer Brey, the lack of disagreement from other panelists suggests unexpected consensus on this extremely high standard for healthcare AI, which is more stringent than standards accepted in other industries
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Patient acceptance as a critical barrier to AI adoption
Speakers
– Prokar Dasgupta
Arguments
Even AI systems with 100% accuracy in trials face overwhelming patient resistance to fully automated procedures
Explanation
Dasgupta’s example of only one person willing to accept fully automated surgery despite 100% accuracy in animal trials highlights an unexpected consensus area – that technical perfection alone is insufficient without patient and provider acceptance
Topics
Artificial intelligence | Capacity development | Human rights and the ethical dimensions of the information society
Overall assessment
Summary
The speakers demonstrated strong consensus on key principles: human-centered AI development, the need for comprehensive investment beyond technical development, focus on real-world outcomes, transparency and trust-building, and equity considerations. There was also agreement on practical challenges including workforce education gaps, integration difficulties, and the need for coordinated investment strategies.
Consensus level
High level of consensus with complementary rather than conflicting viewpoints. The agreement suggests a mature understanding of AI implementation challenges and a shared vision for responsible AI development in healthcare. This consensus provides a strong foundation for collaborative action and policy development, though implementation details may still require further discussion.
Differences
Different viewpoints
Standards for AI accuracy and risk tolerance in healthcare
Speakers
– Zameer Brey
– Prokar Dasgupta
Arguments
Healthcare AI must achieve zero percent risk of failure, unlike other industries where 95% accuracy might be acceptable
Even 100% accurate systems face public resistance, so implementation must be pragmatic rather than purely technical
Summary
Brey demands absolute zero-risk AI systems using flight safety analogies, while Dasgupta acknowledges that even 100% accurate systems (like robotic gallbladder surgery) face public resistance, suggesting a more pragmatic approach to implementation despite technical perfection
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Primary focus for AI implementation strategy
Speakers
– Zameer Brey
– Prokar Dasgupta
Arguments
Focus must shift from technical accuracy to real-world implementation and actual health outcomes improvement
AI champions should be placed in every hospital to facilitate proper implementation
Summary
Brey emphasizes shifting focus from technical metrics to health outcomes and integration timing, while Dasgupta advocates for systematic institutional change through dedicated AI champions and comprehensive workforce education
Topics
Artificial intelligence | Capacity development | Social and economic development
Unexpected differences
Public acceptance versus technical perfection in AI implementation
Speakers
– Prokar Dasgupta
Arguments
The focus should shift from what machines can do for us to considering the societal effects of these machines
Explanation
Dasgupta’s revelation that even 100% accurate robotic surgery systems face overwhelming public rejection (only one person in a room would accept the procedure) represents an unexpected challenge that technical perfection alone cannot solve, highlighting a gap between technological capability and social acceptance that other speakers did not address
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Social and economic development
Overall assessment
Summary
The main disagreements center on implementation approaches (user-driven versus institutional), risk tolerance standards (zero-risk versus pragmatic acceptance), and the relative importance of technical transparency versus comprehensive governance frameworks
Disagreement level
Moderate disagreement with significant implications – while speakers share common goals of safe, effective AI implementation, their different approaches could lead to conflicting strategies for investment priorities, regulatory frameworks, and implementation timelines. The unexpected finding about public resistance to technically perfect AI systems suggests that social acceptance may be a more significant barrier than technical challenges, requiring a fundamental reconsideration of implementation strategies.
Partial agreements
Partial agreements
All speakers agree that successful AI implementation requires moving beyond technical development to focus on real-world integration, but they disagree on approach: Brey emphasizes user-driven timing and workflow integration, Dasgupta focuses on systematic institutional change through AI champions and education, while Payden advocates for comprehensive foundational systems including governance and regulation
Speakers
– Zameer Brey
– Prokar Dasgupta
– Payden P.
Arguments
AI tools should be integrated at the right point in clinical workflows when users find them most valuable, not forced at inconvenient times
Changing established medical practices and clinical decision pathways requires significant investment in clinical research and evaluation
Investment must extend beyond innovation to include governance, regulation, evidence generation, and workforce capacity building
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Both agree on the critical importance of transparency in AI systems, but Brey focuses specifically on technical transparency for clinical decision-making and safety verification, while Payden emphasizes transparency as part of broader governance frameworks for building trust and attracting investment
Speakers
– Zameer Brey
– Payden P.
Arguments
AI systems need to shift from ‘black box’ to ‘glass box’ with transparent, documentable decision-making processes
Strong governance and transparency are necessary enabling conditions for AI success
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Data governance
Both speakers agree on the goal of equitable AI healthcare access, but Dasgupta emphasizes the technical requirement of diverse data representation for effective AI systems, while Justice Singh focuses on the collaborative integration of technologies for universal health improvements
Speakers
– Prokar Dasgupta
– Justice Prathiba M. Singh
Arguments
Data diversity is crucial – without diversified data, the AI healthcare battle cannot be won
Technology and AI should work together to create a healthier world for all
Topics
Artificial intelligence | Closing all digital divides | Social and economic development
Takeaways
Key takeaways
AI in healthcare has reached an inflection point where the focus has shifted from possibility to investment, implementation, and impact
Healthcare AI must achieve zero percent risk of failure and transition from ‘black box’ to ‘glass box’ systems with complete transparency and verifiable decision-making processes
Investment must extend beyond technical innovation to include governance, regulation, workforce capacity building, data systems, and long-term partnerships
Human-centered AI development is essential, keeping humans in the loop and considering societal effects rather than just technical capabilities
Data diversity is crucial for AI success, and without it, healthcare AI initiatives will fail to serve all populations equitably
Trust serves as the currency that unlocks sustainable investment, built through predictable regulatory frameworks and transparent evidence generation
Educational integration is critical – AI must be embedded in medical and nursing school curricula to prepare the next generation of healthcare workers
Patient acceptance remains a significant challenge, as demonstrated by reluctance to accept fully automated surgical procedures despite high technical accuracy
Resolutions and action items
Develop a pathway to verified AI with complete transparency and accountability in medical decision-making
Place AI champions in every hospital to facilitate proper implementation and expand this model to partners in India and Africa
Create unified strategies and priorities for donors to coordinate AI health investments more effectively
Integrate AI education into medical and nursing school curricula worldwide
Work with partners on shifting AI systems from black box to glass box transparency
Invest in clinical research and evaluation to provide evidence needed to change established medical practices
Unresolved issues
How to achieve zero percent risk of failure in AI healthcare systems while maintaining practical functionality
How to overcome patient reluctance to accept AI-driven medical interventions, even when technically superior
Specific mechanisms for coordinating donor investments and avoiding duplication of efforts
How to ensure data diversity across different populations and healthcare systems globally
The timeline and specific steps needed to integrate AI education into existing medical curricula
How to balance the need for human oversight with the efficiency gains promised by AI automation
Specific regulatory frameworks needed to build trust while enabling innovation
Suggested compromises
Maintain humans in the loop during the transition period while behavior change occurs gradually
Focus on AI assistance rather than AI replacement, allowing users to choose when to engage AI tools in their workflow
Implement AI at different levels of autonomy (0-5 scale) rather than jumping directly to full automation
Balance technical accuracy benchmarks with real-world impact measurements
Develop AI systems that can operate effectively in resource-constrained environments while maintaining safety standards
Thought provoking comments
The flight safety analogy: ‘if I said to you all, would you fly if the likelihood of the flight arriving safely was 95%? I’d fly, you’d fly if it was 95%… if it was 99%? That means every 100th flight taking off from Delhi airport would crash… when it comes to health care, the bar should be 0% risk of failure, 0% risk of error.’
Speaker
Zameer Brey
Reason
This analogy brilliantly illustrates the fundamental difference between acceptable risk levels in different domains. It challenges the tech industry’s typical approach of ‘good enough’ accuracy rates and forces consideration of what standards should apply to healthcare AI.
Impact
This comment shifted the discussion from technical capabilities to safety standards and verification requirements. It introduced the concept of ‘verified AI’ and the need for transparent, glass-box solutions rather than black-box AI, fundamentally changing how the panel approached AI implementation in healthcare.
The gallbladder surgery experiment: ‘until last November when one university announced… the first in the world… robotic system which can operate on pig gallbladders… 100% accuracy… I said, hands up everyone who is going to allow this machine to operate on them… there was one hand in the room.’
Speaker
Prokar Dasgupta
Reason
This real-world example powerfully demonstrates the gap between technical achievement and public acceptance. It reveals that even perfect accuracy doesn’t guarantee adoption, highlighting the critical importance of trust and human psychology in AI implementation.
Impact
This comment introduced a crucial reality check to the discussion, shifting focus from technical capabilities to human acceptance and trust. It emphasized that successful AI implementation requires bringing patients along in the journey, not just achieving technical milestones.
The four-level investment framework, particularly: ‘The fourth level is to what extent is the improvement actually going to yield an improvement in health outcomes? The reason we’re all here is what’s fundamentally going to shift? Is this going to help us get diagnosed TB better or help with adherence in diabetes, etc.’
Speaker
Zameer Brey
Reason
This framework provides a structured way to evaluate AI investments beyond just technical metrics, emphasizing real-world health impact as the ultimate measure of success. It challenges the tendency to get caught up in technical achievements without considering actual health outcomes.
Impact
This comment provided an analytical framework that other panelists referenced and built upon. It shifted the conversation from discussing AI capabilities in isolation to evaluating them within a comprehensive impact assessment model.
The shift from Turing test to societal impact: ‘do not just think about what these machines can do for us, but think about what are the societal effects of these machines. The change has to go from the Turing test to today, the Weizenbaum test.’
Speaker
Prokar Dasgupta
Reason
This comment reframes the entire evaluation criteria for AI from technical capability (can it fool humans?) to societal responsibility (what are its broader impacts?). It introduces a more holistic and ethical framework for AI development.
Impact
This philosophical shift influenced the closing remarks and reinforced the theme that emerged throughout the discussion about keeping humans at the center of AI development and considering broader societal implications.
The diversity and equity challenge: ‘the data is not diversified enough without diversity of data we are not going to win this battle… imagine this investment being one of the solutions to the 5 billion patients who do not have access to equitable surgery’
Speaker
Prokar Dasgupta
Reason
This comment connects technical data quality issues directly to global health equity, showing how AI could either perpetuate or help solve healthcare disparities. It makes the abstract concept of data diversity concrete and urgent.
Impact
This comment reinforced the equity theme that ran through the discussion and connected technical considerations to global health justice, influencing the final emphasis on ensuring AI benefits everyone, not just a few.
Overall assessment
These key comments fundamentally shaped the discussion by elevating it from a technical conversation about AI capabilities to a comprehensive examination of implementation challenges, safety requirements, human acceptance, and societal impact. The flight safety analogy established the need for different standards in healthcare, while the gallbladder surgery example grounded the discussion in real-world human psychology. The four-level investment framework provided structure for evaluating AI beyond technical metrics, and the call to move from Turing test to societal impact assessment reframed success criteria entirely. Together, these insights created a discussion that balanced technical innovation with human-centered design, safety, equity, and real-world implementation challenges. The conversation evolved from ‘what can AI do?’ to ‘how do we ensure AI serves humanity responsibly and equitably?’ – a much more mature and nuanced approach to healthcare AI development.
Follow-up questions
How do we get AI to be verifiable so that we can document inputs, ensure transparency, and shift from black box to glass box AI?
Speaker
Zameer Brey
Explanation
This is critical for healthcare applications where 0% risk of failure should be the standard, requiring complete transparency in AI decision-making processes
How can we put safeguards in place to ensure AI doesn’t prescribe something a patient is allergic to or cause catastrophic events?
Speaker
Zameer Brey
Explanation
Patient safety is paramount in healthcare AI applications, and preventing harmful recommendations is essential for verified AI systems
What kind of investment do we need to make in clinical research and evaluation to shift well-established medical practice workflows?
Speaker
Alain Labrique
Explanation
Medical professionals are known for being entrenched in established practices, so understanding how to effectively change clinical decision pathways is crucial for AI adoption
How do we address the lack of data diversity to ensure AI systems work effectively across different populations?
Speaker
Prokar Dasgupta
Explanation
Without diverse data representation, AI systems may not be effective for all populations, particularly in global health applications
How do we bring patients along in accepting AI-driven healthcare solutions, given the resistance to fully automated systems?
Speaker
Prokar Dasgupta
Explanation
Public acceptance is crucial for AI implementation, as demonstrated by the reluctance to accept fully automated surgical procedures even when they show 100% accuracy in trials
How do we embed AI education in medical and nursing school curricula globally?
Speaker
Prokar Dasgupta
Explanation
There’s a critical gap in AI education for healthcare workers, which needs to be addressed to prepare the next generation for AI-integrated healthcare
How do we move from measuring AI accuracy to measuring actual health impact and outcomes?
Speaker
Alain Labrique
Explanation
The focus should shift from technical benchmarks to real-world health improvements and patient outcomes
How do we ensure AI becomes a tool for equity rather than a driver of new inequalities?
Speaker
Payden P.
Explanation
There’s a risk that AI could exacerbate existing health disparities if not implemented thoughtfully with equity considerations
What societal effects will AI machines have beyond their technical capabilities?
Speaker
Prokar Dasgupta
Explanation
Moving from the Turing test to the Weizenbaum test requires considering broader societal implications of AI implementation in healthcare
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.