Mental health concerns over chatbots fuel AI regulation calls
Psychotherapists fear that vulnerable people turning to chatbots instead of professional help may face serious mental health risks, with studies linking AI to harmful amplification of delusions.

The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that recent cases highlight the risks posed by ever more advanced systems.
Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.
Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, the systems can still produce unintended and harmful behaviour.
He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.
He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.
Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.
The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.
Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.