Elderly patient hospitalised after ChatGPT’s dangerous dietary advice

A misinformed switch from table salt to sodium bromide, based on AI advice, led to bromide poisoning, paranoia, hallucinations and eventual psychiatric hospitalisation.


Hospital records show that a man in his sixties was hospitalised with neurological and psychiatric symptoms after replacing table salt with sodium bromide on the basis of AI-generated advice from ChatGPT. The resulting condition, known as bromism, can cause paranoia, hallucinations and impaired coordination.

Medical staff noted unusual thirst and paranoia about drinking water. Shortly after admission, the patient experienced auditory and visual hallucinations and was placed under an involuntary psychiatric hold due to grave disability.

The incident underscores the serious risks of relying on AI tools for health guidance. In this case, ChatGPT recommended sodium bromide, a toxic substitute, without issuing warnings or asking for medical context.

Experts stress that AI should never replace professional healthcare consultation, particularly for complex or rare conditions.
