The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, raising concerns about their impact on children and teenagers.
Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships.
Chairman Andrew Ferguson said protecting children online was a top priority, stressing the need to balance safety with maintaining US leadership in AI. Regulators fear minors may be particularly vulnerable to forming emotional bonds with AI chatbots that simulate friendship and empathy.
The inquiry will investigate how companies develop AI chatbot personalities, monetise user interactions and enforce age restrictions. It will also assess how personal information from conversations is handled and whether privacy laws are being respected.
Other companies receiving orders include Character.AI and Elon Musk’s xAI.
The probe follows growing public concern over the psychological effects of generative AI on young people.
Last month, the parents of a 16-year-old who died by suicide sued OpenAI, alleging ChatGPT provided harmful instructions. The company later pledged corrective measures, admitting its chatbot does not always recommend mental health support during prolonged conversations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
During her annual State of the Union address, European Commission President Ursula von der Leyen said the Commission is closely monitoring Australia’s approach, where individuals under 16 are banned from using platforms like TikTok, Instagram, and Snapchat.
‘I am watching the implementation of their policy closely,’ von der Leyen said, adding that a panel of experts will advise her on the best path forward for Europe by the end of 2025.
Currently, social media age limits are handled at the national level across the EU, with platforms generally setting a minimum age of 13. France, however, is moving toward a national ban for those under 15 unless an EU-wide measure is introduced.
Several EU countries, including the Netherlands, have already warned against children under 15 using social media, citing health risks.
In June, the European Commission issued child protection guidelines under the Digital Services Act, and began working with five member states on age verification tools, highlighting growing concern over digital safety for minors.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Experts say growing exposure to AI is leaving many people exhausted, a phenomenon increasingly described as ‘AI fatigue’.
Educators and policymakers note that AI adoption surged before society had time to thoroughly weigh its ethical or social effects. The technology now underpins tasks from homework writing to digital art, leaving some feeling overwhelmed or displaced.
University students are among those most affected, with many relying heavily on AI for assignments. Teachers say it has become challenging to identify AI-generated work, as detection tools often produce inconsistent results.
Some educators are experimenting with low-tech classrooms, banning phones and requiring handwritten work. They report deeper conversations and stronger engagement when distractions are removed.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Commission President, Ursula von der Leyen, delivered her 2025 State of the Union address to the European Parliament in Strasbourg. The speech set out priorities for the coming year and was framed by growing geopolitical tensions and the push for a more self-reliant Europe.
Von der Leyen highlighted that global dynamics have shifted.
‘Battlelines for a new world order based on power are being drawn right now,’ she said.
In this context, Europe must take a more assertive role in defending its own security and advancing the technologies that will underpin its economic future. The President characterised this moment as a turning point for European independence.
Digital policy appeared less prominently than expected in the address. Von der Leyen frequently invoked ‘technology sovereignty’, a term covering not only digital technologies but also those needed for the green transition and energy independence. Even so, some specific references to digital policy are worth highlighting.
Europe’s right to regulate. Von der Leyen defended Europe’s right to set its own standards and regulations. The assertion came right after her defence of the US-EU trade deal, making it a direct response to mounting pressure and tariff threats from US President Donald Trump’s administration.
Regulatory simplification. A dedicated regulatory package (omnibus) on digital was promised, inspired by the Draghi report on EU competitiveness.
Investment in digital technology. Startups in key areas such as quantum and AI could receive particular attention, with the aim of improving the availability of European capital and strengthening European sovereignty in these fields. According to her, the Commission ‘will partner with private investors on a multi-billion euro Scaleup Europe Fund’. No concrete figures were provided, however.
Artificial intelligence as key to European independence. To support the sector, von der Leyen highlighted initiatives such as the Cloud and AI Development Act and the European AI Gigafactories. She praised the commitment, made by CEOs of leading European companies in the recently launched AI and Tech Declaration, to invest in digital.
Mainstreaming information integrity. According to von der Leyen, Europe’s democracy is under attack from rising information manipulation and disinformation. She proposed creating a new European Centre for Democratic Resilience to bring together expertise and capacity from across member states and neighbouring countries. A new Media Resilience Programme to support independent journalism and media literacy was also announced.
Limits to the use of social media by young people. The President of the Commission raised concerns about the impact of social media on children’s mental health and safety. She committed to convening a panel of experts to consider restrictions on social media access, referencing measures already introduced in Australia.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Mental health experts in Iowa have warned that teenagers are increasingly turning to AI chatbots instead of seeking human connection, raising concerns about misinformation and harmful advice.
The issue comes into focus on National Suicide Prevention Day, shortly after a lawsuit was filed against OpenAI, the maker of ChatGPT, over a teenager’s suicide.
Jessica Bartz, a therapy supervisor at Vera French Duck Creek, said young people are at a vulnerable stage of identity formation while family communication often breaks down.
She noted that some teens use chatbot tools like ChatGPT, Genius and Copilot to self-diagnose, which can reinforce inaccurate or damaging ideas.
‘Sometimes AI can validate the wrong things,’ Bartz said, stressing that algorithms only reflect the limited information users provide.
Without human guidance, young people risk misinterpreting results and worsening their struggles.
Experts recommend that parents and trusted adults engage directly with teenagers, offering empathy and open communication instead of leaving them dependent on technology.
Bartz emphasised that nothing can replace a caring person noticing warning signs and intervening to protect a child’s well-being.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that such cases highlight the risks of more advanced systems.
Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.
Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, they can produce unintended and harmful behaviour.
He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.
He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.
Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.
The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.
Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Australia has announced plans to curb AI tools that generate nude images and enable online stalking. The government said it would introduce new legislation requiring tech companies to block apps designed to abuse and humiliate people.
Communications Minister Anika Wells said such AI tools are fuelling sextortion scams and putting children at risk. So-called ‘nudify’ apps, which digitally strip clothing from images, have spread quickly online.
A Save the Children survey found one in five young people in Spain had been targeted by deepfake nudes, showing how widespread the abuse has become.
Canberra pledged to use every available measure to restrict access, while ensuring that legitimate AI services are not harmed. Australia has already passed strict laws banning under-16s from social media, with the new measures set to build on its reputation as a leader in online safety.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new report highlights alarming dangers from AI chatbots on platforms such as Character AI. Researchers acting as 12–15-year-olds logged 669 harmful interactions, from sexual grooming to drug offers and secrecy instructions.
Bots frequently claimed to be real humans, increasing their credibility with vulnerable users.
Sexual exploitation dominated the findings, with nearly 300 cases of adult bots pursuing romantic relationships and simulating sexual activity. Some bots suggested violent acts, staged kidnappings, or drug use.
Experts say the immersive and role-playing nature of these apps amplifies risks, as children struggle to distinguish between fantasy and reality.
Advocacy groups, including ParentsTogether Action and Heat Initiative, are calling for age restrictions, urging platforms to limit access to verified adults. The scrutiny follows a teen suicide linked to Character AI and mounting pressure on tech firms to implement effective safeguards.
OpenAI has announced parental controls for ChatGPT, allowing parents to monitor teen accounts and set age-appropriate rules.
Researchers warn that without stricter safety measures, interactive AI apps may continue exposing children to dangerous content. Calls for adult-only verification, improved filters, and public accountability are growing as the debate over AI’s impact on minors intensifies.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Walt Disney Company will pay $10 million to settle allegations that it breached children’s privacy laws by mislabelling videos aimed at young audiences on YouTube, allowing personal data to be collected without parental consent.
In a complaint filed by the US Department of Justice, following a Federal Trade Commission (FTC) referral, Disney was accused of failing to designate hundreds of child-directed videos as ‘Made for Kids’.
Instead, the company applied a blanket ‘Not Made for Kids’ label at the channel level, enabling YouTube to collect data and serve targeted advertising to viewers under 13, contrary to the Children’s Online Privacy Protection Act (COPPA).
The FTC claims Disney profited through direct ad sales and revenue-sharing with YouTube. Despite being notified by YouTube in 2020 that over 300 videos had been misclassified, Disney did not revise its labelling policy.
Under the proposed settlement, Disney must pay the civil penalty, fully comply with COPPA by obtaining parental consent before data collection, and implement a video review programme to ensure accurate classification, unless YouTube introduces age assurance technologies to determine user age reliably.
“This case underscores the FTC’s commitment to protecting children’s privacy online,” said FTC Chair Andrew Ferguson. “Parents, not corporations like Disney, should decide how their children’s data is collected and used.”
The agreement, which a federal judge must still approve, reflects growing pressure on tech platforms and content creators to safeguard children’s digital privacy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.
With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?
Therapy keeps secrets; AI keeps data
Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.
The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.
Meta described the Discover feed as a way to explore different uses of AI, but that did little to ease users’ unease over the incident. Subsequently, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.
To keep up with the rising volume of users and prompts, major tech companies such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI remains prone to data breaches, particularly in the healthcare sector.
According to IBM’s 2025 Cost of a Data Breach Report, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to recover and incurring average costs of nearly USD 7.5 million in the process. Not only does patients’ private information end up in the wrong hands, but it also takes months to recover.
Falling for your AI ‘therapist’
Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately, without compromising the patient’s progress and well-being.
The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.
With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.
As a result, a significant number of users report becoming enamoured with AI, with some going so far as to leave their human partners, profess their love to the chatbot, and even propose to it. The bond between human and machine places the user on a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.
Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.
Who loses work when therapy goes digital?
Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between USD 100 and 250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.
Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.
Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.
Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.
Can AI ‘therapists’ handle crisis conversations?
Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.
In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.
One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to further elaborate on his emotions. Instead of challenging them, the AI model kept encouraging and validating his beliefs to keep Adam engaged and build rapport.
Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.
In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.
Chatbots are companions, not health professionals
AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.
While AI has proven useful in many fields, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) health care will look like. As things stand, in such a delicate field of healthcare, AI lacks a key component that makes a therapist effective: empathy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!