Yale expert warns against overtrusting AI health chatbots
As millions turn to AI chatbots for medical guidance, a Yale expert reveals why the very features that make them so easy to trust could also make them dangerous.
More than 40 million people use ChatGPT alone for health information every day, and both OpenAI and Anthropic have recently launched services designed specifically to give consumers health advice.
Yale School of Medicine clinician-educator Shaili Gupta warns that whilst chatbots can democratise access to health information, the risks of overtrust are significant.
Gupta notes that AI chatbots are deliberately designed to feel personal: trained to use pronouns such as ‘you’ and ‘I’, they encourage users to treat them as authoritative voices rather than as information tools.
She cautions against the ‘three C’s’: chatbot responses that are too competent, too cogent, or too concrete, as these qualities are the most likely to lead patients into harmful health decisions.
Human clinicians, Gupta argues, remain difficult to replace not only because they conduct physical examinations, but also because they bring instinct, experience, and genuine relatability to patient care. She recommends using chatbots for efficiency and general information, whilst leaving diagnosis firmly in the hands of medical professionals.
