Experts warn over unreliable AI medical guidance

Investigations suggest some AI systems rely on unverified or outdated sources, increasing the risk of misleading medical guidance.

AI tools used for health searches are facing growing scrutiny after reports found that some systems provide incorrect or potentially harmful medical advice. As public use of generative AI for health queries widens, concerns are mounting over how such information is generated and verified.

An investigation by The Guardian found that Google's AI Overviews feature has sometimes produced guidance that contradicts established medical advice. Attention has also turned to data sources, since platforms such as ChatGPT frequently draw on user-generated or openly edited material.

Medical experts warn that unverified or outdated information poses risks, especially where clinical guidance changes rapidly. The European Lung Foundation has stressed that health-related AI outputs should meet the same standards as professional medical sources.

Efforts to counter misinformation are now expanding. The European Respiratory Society and its partners are running campaigns to protect public trust in science and encourage people to verify health information with qualified professionals.
