WHO/Europe warns safeguards lag as AI use grows in health care

AI is becoming more deeply embedded in health systems across the WHO European Region, according to a new WHO/Europe report that maps adoption, governance, and readiness across 50 of the region’s 53 member states. Rather than presenting a purely positive picture of rapid innovation, the report warns that legal and ethical safeguards are not keeping pace with deployment.

The report shows that AI is already being used in a wide range of medical and administrative functions. Thirty-two countries, or 64%, said they are using AI-assisted diagnostics, particularly in imaging and detection, while half reported deploying AI chatbots for patient engagement and support. Countries most often said they were adopting AI to improve patient care, reduce pressure on health workers, and increase efficiency across health services.

WHO/Europe’s findings suggest that health systems are beginning to adapt institutionally, but unevenly. Only four countries have adopted a dedicated national strategy on AI in health, while seven more are developing one. That leaves much of the region in a transitional phase, where AI tools are entering clinical and administrative settings faster than governments are building the structures needed to govern them properly.

The report places particular emphasis on accountability, regulation, and public trust. Legal uncertainty was identified by 43 countries, or 86%, as the main barrier to wider AI adoption in health. At the same time, fewer than one in ten countries reported having liability standards in place for AI in health care, raising difficult questions about responsibility when systems fail or cause harm.

That warning gives the report its real policy weight. The main issue is not simply that AI use is growing in diagnostics, administration, and patient interaction, but that many health systems still lack the legal clarity and governance capacity needed to use it safely. In that sense, WHO/Europe is framing AI less as a breakthrough story than as a test of whether public institutions can build trustworthy safeguards around fast-moving digital tools.

The broader significance is that the debate over AI in health care is shifting. Early attention focused on what the technology might do for diagnosis, triage, and efficiency. WHO/Europe is now pointing to a harder question: whether health systems can make AI useful without weakening patient safety, privacy, accountability, and public confidence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

UK invests £500 million in Sovereign AI fund to boost startups

The UK government has launched a £500 million Sovereign AI initiative to support domestic startups, aiming to strengthen national capabilities and reduce reliance on foreign technology providers.

The programme is designed to help companies start, scale and compete globally while remaining rooted in Britain.

The initiative combines direct investment with broader support, including fast-track visas, access to high-performance computing, and assistance in navigating regulation and procurement.

Early backing targets firms working on advanced AI infrastructure, life sciences and next-generation computing, reflecting a strategic focus on sectors with long-term economic and security implications.

A central feature is access to national supercomputing resources, addressing one of the most significant barriers to AI development.

By providing large-scale compute capacity and linking it to potential future investment, the programme aims to accelerate research, testing and deployment within the UK ecosystem.

Essentially, the policy signals a shift toward a more interventionist approach, positioning the state as an active investor rather than a passive regulator.

The objective is to anchor innovation domestically, ensuring that intellectual property, talent and economic value remain within the UK as global competition in AI intensifies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI accelerates life sciences research with a new specialised model

OpenAI has launched GPT-Rosalind, a purpose-built model designed to support complex workflows in biology, drug discovery and translational medicine.

The system focuses on improving reasoning across scientific domains, enabling researchers to process large volumes of data, literature and experimental inputs more efficiently.

The model is engineered to assist with early-stage discovery, where improvements can significantly influence downstream outcomes.

By supporting hypothesis generation, evidence synthesis and experimental design, GPT-Rosalind aims to streamline fragmented research processes that often slow scientific progress.

Integration with specialised tools and access to more than 50 scientific databases enable the new OpenAI model to operate across multi-step workflows.

Why does it matter?

Early evaluations indicate stronger performance in areas such as protein analysis, genomics and chemical reasoning, alongside improved capability in selecting and using domain-specific tools.

Access is currently limited through a controlled deployment framework, ensuring use within governed research environments.

Partnerships with organisations including Amgen and Moderna reflect a broader effort to apply AI to real-world scientific challenges while maintaining safeguards and oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New India partnership targets AI innovation and digital transformation

Broadcast Engineering Consultants India Limited (BECIL) and the Centre for Development of Advanced Computing (C-DAC) have signed a Memorandum of Understanding to collaborate on advanced technologies and digital transformation. The agreement focuses on joint projects, consultancy, and technical support across sectors.

The partnership covers AI, machine learning, Internet of Things, cybersecurity, 5G, and cloud computing. It also includes the development of turnkey solutions, technology transfer, and the commercialisation of innovative products.

Capacity development is a key component of the collaboration. Both organisations will support workforce upskilling and skill development to strengthen technical capabilities.

Officials stated that the partnership aims to leverage complementary strengths to deliver technology solutions. It is also expected to support innovation and contribute to India’s broader digital development objectives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Health queries dominate AI chatbot use, study finds

A large-scale study analysing more than 500,000 health-related conversations with Microsoft Copilot offers a detailed look at how people are using general-purpose AI chatbots for medical information, symptom questions, and healthcare navigation.

Published in Nature Health, the study suggests that conversational AI is increasingly being used as an early point of contact for health concerns outside formal clinical settings.

The largest share of conversations fell into the health information and education category, accounting for 40.7% of the sample. Users frequently asked about symptoms, conditions, nutrition, treatments, and medicines, often in ways that reflected personal concerns rather than detached information-seeking.

The study found that 18.8% of conversations involved users discussing their own health conditions, while roughly one in seven personal health queries concerned someone else, such as a child, partner, or parent.

Patterns of use also varied by device and time of day. Mobile users were more likely to ask personal and emotionally sensitive questions, particularly about symptoms and well-being, with activity rising in the evening and overnight.

Desktop use, by contrast, was more closely associated with work, study, and administrative tasks, including research, documentation, and medical paperwork during office hours.

The study also points to growing use of AI for practical healthcare navigation. Beyond questions about symptoms or conditions, users turned to Copilot for help with appointments, provider access, paperwork, and understanding parts of the healthcare system that can be difficult to navigate. That suggests people are not using chatbots only for medical curiosity, but also to manage the bureaucratic and logistical side of care.

The broader significance of the findings lies in what they reveal about the changing role of conversational AI in everyday health decision-making. General-purpose chatbots are not replacing clinicians, but they are increasingly occupying the space before, between, and around formal care, where people seek quick explanations, reassurance, and guidance.

That makes questions of accuracy, safety, and health literacy more important, especially when users may act on AI-generated responses without professional context or oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

WHO launches AI Community of Practice for emergency response surveillance

The World Health Organization Regional Office for the Eastern Mediterranean has launched a Community of Practice on AI for disaster and emergency response surveillance through the WHO Collaboratory platform.

According to the organisation, the initiative brings together national authorities, practitioners, researchers, partners, and WHO staff to share knowledge, build capacity, and develop practical guidance on the use of AI in surveillance, early warning, risk assessment, and operational response.

WHO says the Community of Practice is part of its AI Literacy Programme and is intended to strengthen national and regional capacity to evaluate, adopt, govern, and scale AI tools during disasters and health emergencies. Members will have access to training modules, peer-to-peer learning, technical working groups, and a repository of best practices and tested guidance.

The organisation states that the platform prioritises the ethical, equitable, and transparent use of AI in line with its standards. Dr Annette Heinzelmann, WHO Regional Emergency Director, a.i., said:

‘At WHO, we advocate for the science-driven use of artificial intelligence in public health response, especially during emergencies.’

Heinzelmann added:

‘Our priority is to ensure these technologies are applied in ways that are safe, ethical and grounded in public health needs. This initiative reflects our commitment to supporting Member States in translating innovation into faster, more effective emergency response.’

WHO says it launched the All-Hazards Information Management Toolkit last year as an AI-powered tool to support emergency information management, including rapid risk assessments, response plans, monitoring tools, and situation reports. According to WHO, participants from 20 countries were trained in the use of the toolkit and in AI literacy for emergency preparedness and surveillance.

Dr Oliver Morgan, Head of the WHO Hub for Pandemic and Epidemic Intelligence, said: ‘Artificial intelligence has enormous potential in public health, but its impact depends on how responsibly and effectively it is applied.’

Morgan expanded: ‘At the WHO Hub in Berlin, we develop innovative tools and bring experts together through initiatives like the Collaboratory to support countries and regions to detect health threats faster and respond more effectively. This Community of Practice helps ensure AI solutions move beyond pilots and into real-world emergency response, where speed, trust and usability matter most.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU proposes new Google search-data sharing measures under DMA

The European Commission has set out proposed measures that would require Google to share key search data with third-party providers under the Digital Markets Act (DMA), in a fresh step to open Europe’s online search market to greater competition. The move comes in the form of preliminary findings sent to Google, rather than a final decision, and is now subject to public consultation.

Under the proposal, Google would have to provide access to anonymised search data, including ranking, query, click, and view data, on fair, reasonable, and non-discriminatory terms. According to the Commission, the aim is to allow third-party search engines to improve their services and better challenge Google Search’s market position.

The proposed measures go beyond a general obligation to share data. They set out detailed conditions covering who should qualify for access, what data must be made available, how frequently it should be shared, how personal data should be anonymised, how pricing should be set, and how access procedures should work in practice. The consultation also explicitly includes companies offering online search services that incorporate AI chatbot functionality, showing that the case could shape competition not only in traditional search but also in AI-assisted search services.
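The Commission's findings do not specify a particular anonymisation technique, but one common safeguard for sharing query-level data is a k-anonymity style frequency threshold: queries issued by fewer than k distinct users are dropped before the data leaves the gatekeeper. A minimal sketch, with hypothetical data:

```python
from collections import Counter

def k_anonymise(query_log, k=10):
    """Keep only queries issued by at least k distinct users.

    query_log: list of (user_id, query) pairs.
    Rare queries, which could re-identify individuals, are dropped
    before aggregate counts are shared. This is an illustrative
    safeguard, not the mechanism mandated by the DMA proposal.
    """
    users_per_query = {}
    for user, query in query_log:
        users_per_query.setdefault(query, set()).add(user)
    allowed = {q for q, users in users_per_query.items() if len(users) >= k}
    # Return aggregate counts for queries that pass the threshold.
    return Counter(q for _, q in query_log if q in allowed)

# Twelve distinct users issue a common query; one user issues a
# rare, potentially identifying query.
log = [(u, "weather today") for u in range(12)] + [(1, "john doe 123 main st")]
shared = k_anonymise(log, k=10)
```

The design trade-off the consultation touches on is visible even in this toy: raising k strengthens privacy but discards more of the long-tail queries that rival search engines would find most useful.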

The consultation is tied to Article 6(11) of the DMA, which requires gatekeepers operating online search engines to share certain anonymised data with other search engines under FRAND terms. The Commission says it opened proceedings against Alphabet in January 2026 to specify how Google should comply with that obligation in practice.

Brussels is now asking stakeholders to comment on whether the proposed framework would work in practice, whether the anonymised data would remain useful enough to help rivals improve their services, whether additional measures are needed, and whether the implementation timeline is realistic. The consultation opened on 16 April 2026 and will run until 1 May 2026, with the Commission expecting to adopt a final decision by 27 July 2026.

The case is significant because it shows the DMA moving from broad obligations to detailed implementation. Rather than debating only whether large platforms should share data, the Commission is now trying to define what meaningful access would look like in operational terms, including what must be handed over, on what conditions, and with what privacy safeguards. In that sense, the Google case may become an important test of how far the DMA can reshape competition in digital search markets and related AI services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Paraguay introduces AI rules for courts with UNESCO support and human oversight focus

UNESCO has supported Paraguay in developing a regulatory framework governing the use of AI within its judicial system.

The policy, adopted by the Supreme Court of Justice of Paraguay, establishes clear limits on AI use, ensuring that such systems function strictly as support tools rather than replacing human decision-making.

The regulation outlines principles for the application of AI in data processing, information management and assisted decision-making. It emphasises transparency, accountability and respect for fundamental rights, requiring disclosure when AI tools influence judicial processes.

The framework aligns with UNESCO’s global guidelines on AI in courts, which promote human oversight, auditability and the protection of rights throughout the lifecycle of AI systems.

Implementation has been supported through technical cooperation, including training programmes to strengthen institutional capacity.

Such an approach in Paraguay reflects a broader trend towards embedding ethical safeguards in AI governance within public institutions. It highlights the role of international cooperation in shaping regulatory models that balance innovation with legal certainty and public trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada’s cyber resilience plan targets AI-driven threats to critical infrastructure

The Canadian Centre for Cyber Security has launched a new initiative to strengthen national resilience against escalating cyber threats targeting critical infrastructure.

The programme, titled CIREN (Critical Infrastructure Resilience and Escalated Threat Navigation), aims to prepare organisations for severe disruptions by improving readiness, response capacity, and long-term recovery planning.

The initiative reflects growing concern within Communications Security Establishment Canada over increasingly sophisticated cyber risks, including those amplified by AI.

Authorities highlight that both state-sponsored and criminal actors are exploiting automation and AI to accelerate attacks, raising the stakes for sectors such as energy, telecommunications, transport, and water systems.

CIREN outlines a structured approach centred on operational continuity during extreme scenarios.

Organisations are encouraged to prepare for prolonged isolation of critical systems, develop independent operating capabilities, and establish recovery frameworks capable of rebuilding infrastructure after major incidents. The focus remains on maintaining essential services under worst-case conditions.

The programme forms part of a broader national strategy in Canada to enhance cyber readiness through collaboration, threat intelligence, and practical guidance.

Officials stress that proactive planning and simplified defensive measures can significantly reduce real-world impact, particularly as cyber incidents grow in frequency, scale, and complexity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ILO warns against treating AI exposure indicators as job-loss forecasts

A new brief from the International Labour Organization (ILO) argues that AI exposure indicators should not be treated as forecasts of job losses, even as they become a more common tool for assessing how artificial intelligence could reshape work.

According to the ILO, these indicators can help identify where jobs may be affected by AI, but they do not show whether workers will actually be displaced or how labour markets will adjust in practice.

The brief examines how different exposure measures are constructed and why they often produce different results. Earlier approaches to automation focused mainly on routine and lower-skilled work, while newer AI-related models point to greater exposure in higher-skilled cognitive occupations, including roles in finance, computing, business, and education. That shift reflects the growing capacity of AI systems to perform tasks once seen as less vulnerable to automation.

The ILO stresses that exposure does not necessarily lead to job loss. Most indicators rely on static task descriptions and estimate what may be technically feasible, rather than what employers will actually adopt or what makes economic sense. They do not capture whether automation is profitable, whether it improves productivity, or how firms, workers, and institutions may respond over time.

The brief also argues that AI-related disruption is unlikely to stay confined to a narrow set of occupations. Jobs are linked through shared skills, career mobility, and workplace structures, meaning that changes in one part of the labour market can influence broader employment patterns elsewhere. That makes simple occupation-by-occupation risk scores less useful on their own than they may appear.

For that reason, the ILO says exposure indicators should be used as early warning signals rather than stand-alone labour market forecasts. It recommends combining them with evidence on employment, wages, job transitions, and broader economic and institutional conditions to build a more realistic picture of how AI is affecting work.

The broader significance of the brief is that it pushes back against the simplest narratives about AI and employment. Rather than asking how many jobs AI will eliminate, the ILO is urging policymakers to focus on where work may change, how quickly adoption may happen, and what kinds of institutions, skills, and labour protections will shape the outcome.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!