WHO/Europe warns safeguards lag as AI use grows in health care

Member states across the WHO European Region cite improved patient care as a key driver of AI adoption, with many already integrating tools into everyday clinical settings.

AI is becoming embedded across health systems in the WHO European Region, with the report finding widespread adoption in diagnostics.

AI is becoming more deeply embedded in health systems across the WHO European Region, according to a new WHO/Europe report that maps adoption, governance, and readiness across 50 of the region’s 53 member states. Rather than presenting a purely positive picture of rapid innovation, the report warns that legal and ethical safeguards are not keeping pace with deployment.

The report shows that AI is already being used in a wide range of medical and administrative functions. Thirty-two countries, or 64% of respondents, said they are using AI-assisted diagnostics, particularly in imaging and detection, while half reported deploying AI chatbots for patient engagement and support. Countries most often said they were adopting AI to improve patient care, ease pressure on health workers, and increase efficiency across health services.

WHO/Europe’s findings suggest that health systems are beginning to adapt institutionally, but unevenly. Only four countries have adopted a dedicated national strategy on AI in health, while seven more are developing one. That leaves much of the region in a transitional phase, where AI tools are entering clinical and administrative settings faster than governments are building the structures needed to govern them properly.

The report places particular emphasis on accountability, regulation, and public trust. Forty-three countries, or 86% of respondents, identified legal uncertainty as the main barrier to wider AI adoption in health. At the same time, fewer than one in ten countries reported having liability standards in place for AI in health care, raising difficult questions about who is responsible when systems fail or cause harm.

That warning gives the report its real policy weight. The main issue is not simply that AI use is growing in diagnostics, administration, and patient interaction, but that many health systems still lack the legal clarity and governance capacity needed to use it safely. In that sense, WHO/Europe is framing AI less as a breakthrough story than as a test of whether public institutions can build trustworthy safeguards around fast-moving digital tools.

The broader significance is that the debate over AI in health care is shifting. Early attention focused on what the technology might do for diagnosis, triage, and efficiency. WHO/Europe is now pointing to a harder question: whether health systems can make AI useful without weakening patient safety, privacy, accountability, and public confidence.
