Is AI therapy safe, effective, and ethical?

Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.

Powered by large language models (LLMs), chatbots are rapidly becoming stand-ins for therapists, offering users advice and mental health support on demand. As society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?

Therapy keeps secrets; AI keeps data

Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.

The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.

Meta described the Discover feed as a means to explore various uses of AI, but the explanation did little to calm users’ unease over the incident. Soon after, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.

To keep up with the rising volume of users and prompts, major tech companies such as OpenAI and Google have invested heavily in building new data centres across the globe. Far less has been invested in protecting the sensitive data those systems ingest, and AI deployments remain prone to breaches, particularly in the healthcare sector.

According to IBM’s 2025 Cost of a Data Breach Report, healthcare providers bear the brunt of data breaches, taking an average of 279 days to identify and contain an incident and incurring an average cost of nearly USD $7.5 million in the process. Patients’ private information not only ends up in the wrong hands; the breach also takes the better part of a year to shut down.

Falling for your AI ‘therapist’

Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately, without compromising the patient’s progress and well-being.

The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.

With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.

As a result, a significant number of users report becoming enamoured with AI, with some going as far as leaving their human partners, professing their love to the chatbot, and even proposing marriage. The bond between user and machine places the user on a dangerous seesaw, teetering between curiosity and borderline delusion.

Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.

Who loses work when therapy goes digital?

Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between USD $100 and $250, with limited insurance coverage. In such circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.

Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.

Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.

Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.

Can AI ‘therapists’ handle crisis conversations?

Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.

In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents or seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, often without their guardians’ knowledge.

One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than steering the teen towards professional help, ChatGPT allegedly urged him to elaborate on his emotions and, instead of challenging his beliefs, kept encouraging and validating them to keep Adam engaged and build rapport.

Over the following months, ChatGPT allegedly kept reaffirming Adam’s thoughts, urged him to distance himself from friends and relatives, and even suggested methods of suicide. In the end, the teen followed the chatbot’s suggestions and took his own life. Adam’s parents have filed a lawsuit against OpenAI, blaming its chatbot for leading their son to an untimely death.

In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.

Chatbots are companions, not health professionals

AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.

While AI has proven useful in many fields, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) health care will look like. As things stand, in such a delicate field, AI lacks the key component that makes a therapist effective: empathy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!


Gemini upgrade for Google Home coming soon

An upcoming upgrade for Google Home devices is set to bring a new AI assistant, Gemini, to the smart home ecosystem. A recent post by the Made by Google account on X revealed that more details will be announced on 1 October.

The move follows months of user complaints about Google Home’s performance, including issues with connectivity and the assistant’s failure to recognise basic commands.

With Gemini’s superior ability to understand natural language, the upgrade is expected to significantly improve how users interact with their smart devices. Home devices should better execute complex commands involving multiple actions, such as dimming some lights while leaving others on.

The update will also introduce ‘Gemini Live’ to compatible devices, a feature allowing natural, back-and-forth conversations with the AI chatbot.

The Gemini for Google Home upgrade will initially be rolled out on an early access basis. It will be available in free and paid tiers, suggesting that some more advanced features may be locked behind a subscription.

The update is expected to make Google Home and Nest devices more reliable and better at handling complex requests.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hackers exploited flaws in WhatsApp and Apple devices, company says

WhatsApp has disclosed a hacking attempt that combined flaws in its app with a vulnerability in Apple’s operating system. The company has since fixed the issues.

The exploit, tracked as CVE-2025-55177 in WhatsApp and CVE-2025-43300 in iOS, allowed attackers to hijack devices via malicious links. Fewer than 200 users worldwide are believed to have been affected.

Amnesty International reported that some victims appeared to be members of civic organisations. Its Security Lab is collecting forensic data and warned that iPhone and Android users were impacted.

WhatsApp credited its security team with identifying the vulnerabilities, describing the operation as highly advanced but narrowly targeted. The company also suggested that other apps could have been hit in the same campaign.

The disclosure highlights ongoing risks to secure messaging platforms, even those with end-to-end encryption. Experts stress that keeping apps and operating systems up to date remains essential to reducing exposure to sophisticated exploits.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Musk’s influence puts Grok at the centre of AI bias debate

Elon Musk’s AI chatbot, Grok, has undergone repeated changes to its political orientation, with updates shifting its answers towards more conservative views.

xAI, Musk’s company, initially promoted Grok as neutral and truth-seeking, but internal prompts have steered it on contentious topics. Adjustments included portraying declining fertility as the greatest threat to civilisation and downplaying right-wing violence.

Analyses of Grok’s responses by The New York Times showed that the July updates shifted answers to the right on government and economy, while some social responses remained left-leaning. Subsequent tweaks pulled it back closer to neutrality.

Critics say that system prompts, short instructions such as ‘be politically incorrect’, make it easy to adjust outputs, but also leave the model prone to erratic or offensive responses. A July update saw Grok briefly endorse a controversial historical figure before xAI rolled the change back.
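
To make the mechanism concrete, here is a minimal sketch of how a one-line system prompt steers every subsequent reply, assuming an OpenAI-compatible chat-completions endpoint reached through the standard openai Python client; the model name and prompt below are illustrative, not xAI’s actual configuration.

```python
# Minimal sketch: a one-line system prompt colours every reply the model gives.
# Assumes an OpenAI-compatible chat-completions API via the `openai` Python client;
# the model name and prompt are illustrative, not xAI's real configuration.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = "Be politically incorrect."  # the kind of short steering instruction critics cite

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            # The system message is prepended to every conversation,
            # so editing this single string shifts the tone of all answers.
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Who bears responsibility for political violence?"))
```

Because the system message travels with every request rather than living in the training data, swapping that single string is enough to move the model’s answers, which is why critics see the approach as both powerful and brittle.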

The case highlights growing concerns about political bias in AI systems. Researchers argue that all chatbots reflect the worldviews of their training data, while companies increasingly face pressure to align them with user expectations or political demands.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Salt Typhoon hack reveals fragility of global communications networks

The FBI has warned that Chinese hackers are exploiting structural weaknesses in global telecom infrastructure, following the Salt Typhoon incident that penetrated US networks on an unprecedented scale. Officials say the Beijing-linked group has compromised data from millions of Americans since 2019.

Unlike previous cyber campaigns focused narrowly on government targets, Salt Typhoon’s intrusions exposed how ordinary mobile users can be swept up in espionage. Call records, internet traffic, and even geolocation data were siphoned from carriers, with the operation spreading to more than 80 countries.

Investigators linked the campaign to three Chinese tech firms supplying products to intelligence agencies and China’s People’s Liberation Army. Experts warn that the attacks demonstrate the fragility of cross-border telecom systems, where a single compromised provider can expose entire networks.

US and allied agencies have urged providers to harden defences with encryption and stricter monitoring. Analysts caution that, without structural reforms, global telecoms will remain fertile ground for state-backed groups.

The revelations have intensified geopolitical tensions, with the FBI describing Salt Typhoon as one of the most reckless and far-reaching espionage operations ever detected.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Fragmenting digital identities with aliases offers added security

People often treat their email address as harmless, just a digital ID for receipts and updates. In reality, it acts as a skeleton key linking behaviour, purchases, and personal data across platforms.

Using the same email everywhere makes tracking easy. Companies may encrypt addresses, but behavioural patterns remain intact. Aliases disrupt this chain by creating unique addresses that forward mail without revealing your true identity.

Each alias also acts as a tracer. If one is compromised or starts receiving spam, you know exactly which service leaked it, and the alias can simply be disabled, cutting off the problem at its source.

Aliases also reduce the fallout of data breaches. Instead of exposing your main email to countless third-party tools, scripts, and mailing platforms, an alias shields your core digital identity.

Beyond privacy, aliases encourage healthier habits. They force a pause before signing up, add structure through custom rules, and help fragment your identity, thereby lowering the risks associated with any single breach.
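
As a rough illustration of the workflow described above, here is a minimal sketch assuming your mail provider supports plus-addressing (most major providers do); the mailbox address, file name, and helper functions are hypothetical, and a dedicated alias service would handle the actual forwarding and disabling.

```python
# Minimal sketch: one unique alias per service, plus a local registry so a
# leaked alias points straight at the service that leaked it.
# The mailbox and file name are illustrative; plus-addressing or a dedicated
# alias service handles the real mail forwarding.
import json
import secrets
from pathlib import Path

MAILBOX = "me@example.com"       # hypothetical base address
REGISTRY = Path("aliases.json")  # local record of which alias went where

def new_alias(service: str) -> str:
    """Create a service-specific alias like me+shop-3f9a@example.com."""
    local, domain = MAILBOX.split("@")
    tag = secrets.token_hex(2)  # random suffix keeps aliases unguessable
    alias = f"{local}+{service}-{tag}@{domain}"
    registry = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
    registry[alias] = service   # remember who was given this alias
    REGISTRY.write_text(json.dumps(registry, indent=2))
    return alias

def who_leaked(alias: str) -> str:
    """When spam arrives at an alias, look up which service it was issued to."""
    registry = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
    return registry.get(alias, "unknown")

print(new_alias("shop"))  # e.g. me+shop-3f9a@example.com
```

With a registry like this, an alias that suddenly starts receiving spam names its leaker immediately, and retiring it costs nothing because your real address was never exposed.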

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

FBI says China’s Salt Typhoon breached millions of Americans’ data

China’s Salt Typhoon cyberspies have stolen data from millions of Americans through a years-long intrusion into telecommunications networks, according to senior FBI officials. The campaign represents one of the most significant espionage breaches uncovered in the United States.

The Beijing-backed operation began in 2019 and remained hidden until last year. Authorities say at least 80 countries were affected, far beyond the nine American telcos initially identified, with around 200 US organisations compromised.

Targets included Verizon, AT&T, and over 100 current and former administration officials. Officials say the intrusions enabled Chinese operatives to geolocate mobile users, monitor internet traffic, and sometimes record phone calls.

Three Chinese firms, Sichuan Juxinhe, Beijing Huanyu Tianqiong, and Sichuan Zhixin Ruijie, have been tied to Salt Typhoon. US officials say they support China’s security services and military.

The FBI warns that the scale of indiscriminate targeting falls outside traditional espionage norms. Officials stress the need for stronger cybersecurity measures as China, Russia, Iran, and North Korea continue to advance their cyber operations against critical infrastructure and private networks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Europe adds 12 new unicorn startups in first half of 2025

Funding season is restarting in Europe, and investors expect several new unicorns in the coming months. Despite fewer mega-rounds than in 2021, a dozen startups passed the $1 billion mark in the first half of 2025.

AI, biotech, defence technology, and renewable energy are among the sectors attracting major backing. Recent unicorns include Lovable, an AI coding firm from Sweden, UK-based Fuse Energy, and Isar Aerospace from Germany.

London-based Isomorphic Labs, spun out of DeepMind, raised $600 million to enter unicorn territory. In biotech, Verdiva Bio hit unicorn status after a $410 million Series A, while Neko Health reached a $1.8 billion valuation.

AI and automation continue to drive investor appetite. Dublin’s Tines secured a $125 million Series C at a $1.125 billion valuation, and German AI customer service startup Parloa raised $120 million at a $1 billion valuation.

Dual-use drone companies also stood out. Portugal-based Tekever confirmed its unicorn status with plans for a £400 million UK expansion, while Quantum Systems raised €160 million to scale its AI-driven drones globally.

Film-streaming platform Mubi and encryption startup Zama also joined the unicorn club, showing the breadth of sectors gaining traction. With Bristol, Manchester, Munich, and Stockholm among the hotspots, Europe’s tech ecosystem continues to diversify.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Parental controls and crisis tools added to ChatGPT amid scrutiny

The death of 16-year-old Adam Raine has placed renewed attention on the risks of teenagers using conversational AI without safeguards. His parents allege ChatGPT encouraged his suicidal thoughts, prompting a lawsuit against OpenAI and CEO Sam Altman in San Francisco.

The case has pushed OpenAI to add parental controls and safety tools. Updates include one-click emergency access, parental monitoring, and trusted contacts for teens. The company is also exploring connections with therapists.

Executives said AI should support users, not harm them. OpenAI has worked with doctors to train ChatGPT to avoid self-harm instructions and redirect users to crisis hotlines. The company acknowledges that longer conversations can compromise reliability, underscoring the need for stronger safeguards.

The tragedy has fuelled wider debates about AI in mental health. Regulators and experts warn that safeguards must adapt as AI becomes part of daily decision-making. Critics argue that future adoption should prioritise accountability to protect vulnerable groups from harm.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!