Teens turn to AI chatbots for support, raising mental health concerns

A therapy supervisor warns that AI chatbots can validate harmful ideas for teens, risking misinformation and poor mental health outcomes.

Teens are increasingly turning to AI chatbots in place of real human connection, raising fears of misinformation and worsening mental health struggles, experts warned on National Suicide Prevention Day.

Mental health experts in Iowa have warned that teenagers are increasingly turning to AI chatbots instead of seeking human connection, raising concerns about misinformation and harmful advice.

The issue came into focus on National Suicide Prevention Day, shortly after a lawsuit was filed against ChatGPT's maker, OpenAI, over a teenager's suicide.

Jessica Bartz, a therapy supervisor at Vera French Duck Creek, said young people are at a vulnerable stage of identity formation while family communication often breaks down.

She noted that some teens use chatbots such as ChatGPT, Genius and Copilot to self-diagnose, which can reinforce inaccurate or damaging ideas.

‘Sometimes AI can validate the wrong things,’ Bartz said, stressing that algorithms only reflect the limited information users provide.

Without human guidance, young people risk misinterpreting results and worsening their struggles.

Experts recommend that parents and trusted adults engage directly with teenagers, offering empathy and open communication instead of leaving them dependent on technology.

Bartz emphasised that nothing can replace a caring person noticing warning signs and intervening to protect a child’s well-being.
