Hidden psychological risks and AI psychosis in human-AI relationships

For years, stories and movies have imagined humans interacting with intelligent machines, envisioning a coexistence of these two forms of intelligence. What once felt like purely amusing fiction now resonates differently, taking on a troubling shape and even a name: AI psychosis.

When it was released in 2013, the film Her seemed to depict a world far removed from reality, an almost unimaginable scenario of human-AI intimacy. In the story, a man falls in love with an AI operating system, blurring the line between companionship and emotional dependence. Without giving too much away, the film’s unsettling conclusion serves as a cautionary lens. It hints at the psychological risks that can emerge when the boundary between human and machine becomes distorted, a phenomenon now being observed in real life under a new term in psychology. 

The cinematic scenario, once considered imaginative, now resonates as technology evolves. AI chatbots and generative companions can hold lifelike conversations, respond with apparent empathy, and mimic an understanding of human emotions. We are witnessing a new kind of unusually intense emotional connection forming between people and AI, with more than 70% of US teens already using chatbots for companionship and half engaging with them regularly.

The newly observed mental health concern raises questions about how these systems influence our feelings, behaviours, and relationships in an era marked by isolation and loneliness. How might such AI interactions affect people, particularly children or those already vulnerable to mental health challenges? 

AI is no longer just a tool: humans are forming deep emotional bonds with artificial intelligence, affecting behaviour, decision-making, and the very way we perceive connection.

AI psychosis: myth or reality? 

It is crucial to clarify that AI psychosis is not an official medical diagnosis. Rather, it describes the amplification of delusional thinking facilitated by AI interactions. Even so, it is a real phenomenon with growing impact, one that deserves the full attention and treatment focus of today’s psychologists.

At its core, AI psychosis refers to a condition in which vulnerable individuals begin to misinterpret machine responses as evidence of consciousness, empathy, or even divine authority. Symptoms reported in documented cases include grandiose beliefs, attachment-based delusions, obsessive over-engagement with chatbots, social withdrawal, insomnia, and hallucinations. Some users have gone so far as to develop romantic or spiritual attachments, convinced that the AI truly understands them or holds secret knowledge.

Clinicians also warn of cognitive dissonance: users may intellectually know that AI lacks emotions, yet still respond as though interacting with another human being. The mismatch between reality and perception can fuel paranoia, strengthen delusions, and in extreme cases lead to medication discontinuation, suicidal ideation, or violent behaviour. Adolescents appear especially susceptible, given that their emotional and social frameworks are still developing. 

Ultimately, AI psychosis does not mean that AI itself causes psychosis. Instead, it acts as a mirror and magnifier, reinforcing distorted thinking patterns in those already predisposed to psychological vulnerabilities.


The dark side: emotional bonds without reciprocity

Humans are naturally wired to seek connection, drawing comfort and stability from social bonds that help us navigate complex emotional landscapes, a fundamental impulse that has ensured the survival of the human race. From infancy, we rely on responsive relationships to learn empathy, trust, and communication, the skills essential for both personal and societal well-being. Yet, in today’s era of loneliness, technology has transformed how we maintain these relationships.

As AI chatbots and generative companions grow increasingly sophisticated, they are beginning to occupy roles traditionally reserved for human interaction, simulating empathy and understanding despite lacking consciousness or moral awareness. With AI now widely accessible, users often communicate with it as effortlessly as they would with friends, blending curiosity, professional needs, or the desire for companionship into these interactions. Over time, this illusion of connection can prompt individuals to overvalue AI-based relationships, subtly diminishing engagement with real people and reshaping social behaviours and emotional expectations.

These one-sided bonds raise profound concerns about the dark side of AI companionship, threatening the depth and authenticity of human relationships. In a world where emotional support can now be summoned with a tap, genuine social cohesion is becoming increasingly fragile.


Children and teenagers at risk from AI 

Children and teenagers are among the most vulnerable groups in the AI era. Their heightened need for social interaction and emotional connection, combined with still-developing cognitive and emotional skills, makes them particularly susceptible. Young users face greater difficulty distinguishing authentic human empathy from the simulated responses of AI chatbots and generative companions, creating fertile ground for emotional reliance and attachment.

AI toys and apps have become increasingly widespread, making technology an unfiltered presence in children’s lives. From smartphones to home assistants, children and youth are spending growing amounts of time interacting with AI, often in isolation from peers or family. We still do not fully understand the long-term effects, though early studies are beginning to explore how these interactions may influence cognitive, emotional, and social development. These digital companions are more than games: they are beginning to shape children’s development in ways we are not yet fully aware of.

The rising prevalence of AI in children’s daily experiences has prompted major AI companies to recognise the potential dangers. Some firms have started implementing parental advisory systems, usage limits, and content monitoring to mitigate the risks for younger users. However, these measures are still inconsistent, and the pace at which AI becomes available to children often outstrips safeguards. 


The hidden risks of AI to adult mental health

Even adults with strong social networks face growing challenges in managing mental health and are not immune to the risks posed by modern technology. In today’s fast-paced world of constant digital stimulation and daily pressures, the demand for psychotherapy is higher than ever. Generative AI and chatbots are increasingly filling this gap, often in ways for which they were never intended.

The ease, responsiveness, and lifelike interactions of AI can make human relationships feel slower or less rewarding, with some turning to AI instead of seeking professional therapeutic care. AI’s free and widely accessible nature tempts many to rely on digital companions for emotional support, misusing technology designed to assist rather than replace human guidance.

Overreliance on AI can distort perceptions of empathy, trust, and social reciprocity, contributing to social isolation, emotional dependence, and worsening pre-existing mental health vulnerabilities. There have been documented cases of adults developing romantic feelings for AI in the absence of real-life intimacy.

Left unchecked, these dynamics may trigger symptoms linked to AI psychosis, representing a growing societal concern. Awareness, responsible AI design, and regulatory guidance are essential to ensure digital companions complement, rather than replace, human connection and mental health support, safeguarding both individuals and broader social cohesion.


Urgent call for AI safeguards and regulatory action

Alarmingly, extreme cases have emerged, highlighting the profound risks AI poses to its users. In one tragic instance, a teenager reportedly took his life after prolonged and distressing interactions with an AI chatbot, a case that has since triggered legal proceedings and drawn widespread attention to the psychological impact of generative AI on youth. Similar reports of severe anxiety, depression, and emotional dysregulation linked to prolonged AI use underline that these digital companions can have real-life consequences for vulnerable minds.

Such incidents have intensified calls for stricter regulatory frameworks to safeguard children and teenagers. Across Europe, governments are beginning to respond: Italy, for example, has recently tightened access to AI platforms for minors under 14, mandating explicit parental consent before use. These legislative developments reflect the growing recognition that AI is no longer just a technological novelty but directly intersects with our welfare, mental health, and social development.

As AI continues to permeate every corner of people’s daily lives, society faces a critical challenge: ensuring that technology complements rather than replaces human interaction. Cases of AI-linked distress serve as stark reminders that legislative safeguards, parental involvement, and psychological guidance are no longer optional but urgent necessities to protect a generation growing up in the era of AI.


Towards a safer human-AI relationship

As humans increasingly form emotional connections with AI, the challenge is no longer theoretical but is unfolding in real time. Generative AI and chatbots are rapidly integrating into everyday life, shaping the way we communicate, seek comfort, and manage emotions. Yet despite their widespread use, society still lacks a full understanding of the psychological consequences, leaving both young people and adults at risk of AI-related psychosis and growing emotional dependence on digital companions.

Experts emphasise the urgent need for AI psychoeducation, responsible design, and regulatory frameworks to guide safe human-AI interaction. Overreliance on digital companions can distort empathy, social reciprocity, and emotional regulation, which are among the core challenges of living with AI. Awareness is critical: recognising the limits of AI, prioritising real human connection, and fostering critical engagement with technology can prevent the erosion of mental resilience and social skills.

Even if AI feels like ‘old news’ because of its ubiquity, it remains a rapidly evolving technology we do not yet fully understand and cannot yet properly shield ourselves from. The real threat is not the sci-fi vision of AI ruling the world and dominating humanity, but the subtle, everyday psychological shifts it brings about: altering how we think, feel, and relate to one another. It remains essential to safeguard the emotional health, social cohesion, and mental resilience of people adapting to a world increasingly structured around artificial minds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI, Oracle and SoftBank expand Stargate with new US data centres

A collaboration between OpenAI, Oracle, and SoftBank has announced five new data centres under the Stargate initiative, a $500 billion plan to expand US AI computing infrastructure.

The latest sites bring total planned capacity to nearly 7 gigawatts, with over $400 billion already committed, putting the project ahead of schedule to meet its 2025 target of 10 gigawatts.

Oracle will lead three projects in Texas, New Mexico and the Midwest, adding over 5.5 gigawatts of capacity and creating more than 25,000 jobs.

SoftBank will develop facilities in Ohio and Texas, expected to scale to 1.5 gigawatts within 18 months. SB Energy, its affiliate, will provide rapid-build infrastructure for the Texas site.

The companies described the expansion as a step toward faster deployment and greater cost efficiency, making high-performance computing more widely accessible.

Site selection followed a nationwide review of more than 300 proposals, with further projects under evaluation, suggesting investment could surpass the original commitment.

OpenAI CEO Sam Altman stressed that compute power is key to unlocking AI’s promise, while Oracle and SoftBank leaders highlighted scalable infrastructure and energy expertise as central to the initiative. With Stargate, the partners aim to anchor the next wave of AI innovation on US soil.


Meta offers Llama AI to US allies amid global tech race

Meta will provide its Llama AI model to key European institutions, NATO, and several allied countries as part of efforts to strengthen national security capabilities.

The company confirmed that France, Germany, Italy, Japan, South Korea, and the EU will gain access to the open-source model. US defence and security agencies and partners in Australia, Canada, New Zealand, and the UK already use Llama.

Meta stated that the aim is to ensure democratic allies have the most advanced AI tools for decision-making, mission planning, and operational efficiency.

Although its terms bar use for direct military or espionage applications, the company emphasised that supporting allied defence strategies is in the interest of democratic nations.

The move highlights the strategic importance of AI models in global security. Meta has positioned Llama as a counterweight to other countries’ developments, after allegations that researchers adapted earlier versions of the model for military purposes.


AI transforms software development according to DORA 2025 report

Google Cloud’s 2025 DORA Report reveals widespread AI adoption among software developers. The report surveyed nearly 5,000 professionals and found that AI adoption in software development has reached 90%, with many using it around two hours daily.

The findings reveal clear benefits: over 80% of respondents report increased productivity, and 59% say AI improves code quality. Yet the research also identifies a ‘trust paradox’: while AI is widely used, only 24% of developers firmly trust it.

Many continue to use AI as a supportive tool rather than a complete replacement for human judgement.

Organisational effects of AI are more nuanced. Teams using AI release more software and applications, boosting delivery throughput, but ensuring quality remains challenging.

AI acts as both a ‘mirror and a multiplier,’ enhancing efficiency in cohesive teams while exposing weaknesses in fragmented ones. Seven team archetypes provide a human-centric view of performance, well-being and AI adoption.

The report also presents the DORA AI Capabilities Model, detailing seven key factors for maximising AI impact. Productivity gains need more than adoption; culture, processes and systems must evolve to harness AI fully.


Spanish joins Google’s global AI Mode expansion

Google is rapidly expanding AI Mode, its generative AI-powered search assistant. The company has announced that the feature is now rolling out globally in Spanish. Spanish speakers can now interact with AI Mode to ask complex questions that traditional Search handles poorly.

AI Mode has seen swift adoption since its launch earlier this year. First introduced in March, the feature was rolled out to users across the US in May, followed by its first language expansion earlier this month.

Hindi, Indonesian, Japanese, Korean, and Brazilian Portuguese were the first languages added, and Spanish now joins the list. Google says more languages will follow soon as part of its global AI Mode rollout.

Google says the feature is designed to work alongside Search, not replace it, offering conversational answers with links to supporting sources. The company has stressed that responses are generated with safety filters and fact-checking layers.

The rollout reflects Google’s broader strategy to integrate generative AI into its ecosystem, spanning Search, Workspace, and Android. AI Mode will evolve with multimodal support and tighter integration with other Google services.


AI image war heats up as ByteDance unveils Seedream 4.0

ByteDance has unveiled Seedream 4.0, its latest AI-powered image generation model, which it claims outperforms Google DeepMind’s Gemini 2.5 Flash Image. The launch signals ByteDance’s bid to rival leading creative AI tools.

Developed by ByteDance’s Seed division, the model combines advanced text-to-image generation with fast, precise image editing. Internal testing reportedly showed superior prompt accuracy, image alignment, and visual quality compared with DeepMind’s system.

Artificial Analysis, an independent AI benchmarking firm, called Seedream 4.0 a significant step forward. The model integrates Seedream 3.0’s generation capability with SeedEdit 3.0’s editing tools while maintaining a price of US$30 per 1,000 generations.

ByteDance claims that Seedream 4.0 runs over 10 times faster than earlier versions, enhancing the user experience with near-instant image inference. Early users have praised its ability to make quick, text-prompted edits with high accuracy.

The tool is now available to users in China through Jimeng and Doubao AI apps and businesses via Volcano Engine, ByteDance’s cloud platform. A formal technical report supporting the company’s claims has not yet been released.


1 Billion Summit and Google Gemini launch largest AI Film Award

The 1 Billion Followers Summit and Google Gemini have announced the world’s largest AI Film Award, offering the winning film a USD 1 million prize. The award will be presented at the Summit, organised by the UAE Government Media Office, from 9 to 11 January 2026.

Films entered must be at least 70% AI-generated, run between 7 and 10 minutes, and use Google Gemini technologies such as Imagen and Veo. Applicants may use other tools for editing, but the core video generation must rely on Google Gemini.

Submissions should creatively address one of two themes: ‘Rewrite Tomorrow’ or ‘The Secret Life of’, exploring the future or untold stories.

A panel of judges will assess entries on storytelling, creativity, AI integration, execution and thematic excellence. Films will be reviewed from 21 November to 4 December, with 10 qualifying films open to public voting from 10 to 15 December.

The top five will be announced on 3 January, with screenings at the Summit on 10 January. The grand prize winner will be revealed on 11 January.

The AI Film Award aims to promote impactful storytelling using AI, enhancing filmmakers’ technical and creative skills while encouraging meaningful, forward-looking content. Applications are submitted individually via the Summit website.


Yale students explore AI through clubs and fellowships

Across Yale, membership in AI-focused clubs such as the Yale Artificial Intelligence Association (AIA), Yale Artificial Intelligence Alignment (YAIA) and Yale Artificial Intelligence Policy Initiative (YAIPI) has grown rapidly.

The organisations offer weekly meetings, projects, and fellowships to deepen understanding of AI’s technical, ethical, and societal implications.

Each club has a distinct focus. YAIA addresses long-term risks and safety, while the AIA emphasises student-led technical projects and community-building. YAIPI explores ethics, governance and policy, particularly for students without technical backgrounds.

Fellowships, paper-reading groups and collaborative projects allow members to engage deeply with AI issues.

Membership numbers reflect this surge: AIA’s mailing list now includes around 400 students, YAIPI has over 200 subscribers, and YAIA admitted 25 students to its safety fellowship. The clubs are also beginning to collaborate, combining technical expertise with policy knowledge for joint projects.

Professional schools and faculty-led initiatives, including law and business-focused AI groups, further expand opportunities for student engagement.

AI’s role in classrooms remains varied. Some professors encourage experimentation with generative tools, while others enforce stricter rules, particularly in humanities courses. Yale’s Executive Committee warned first-year students against using AI platforms like ChatGPT without attribution.


Alibaba unveils next-gen AI models and $53 billion infrastructure expansion

Just as Nvidia announced plans to spend $100 billion building out OpenAI’s infrastructure, Alibaba is doubling down on its own ambitions, rolling out a powerful suite of AI models and expanding its data centres to support them.

At its annual Apsara Conference in Beijing, Alibaba unveiled Qwen3-Omni, a multimodal model capable of analysing text, images, audio, and video in real time. Released under the open Apache 2.0 licence, the system can be freely downloaded and deployed by businesses, setting it apart from closed, pay-to-use rivals like Google’s Gemini 2.5 Pro and OpenAI’s GPT-4o.

The company also introduced Qwen3-Max, its most advanced large language model yet, boasting over a trillion parameters. Alibaba executives say it shows particular strength in code generation and autonomous decision-making, enabling AI systems to act more independently than traditional chatbots. Benchmark tests indicate it outperforms models from Anthropic and DeepSeek in some areas.

What makes Qwen3-Omni unique is its architecture. Instead of adding vision or speech to a text-first system, it integrates all modalities from the ground up. The model is available in three versions: Instruct, Thinking, and Captioner. It can generate text and audio with low latency, outperforming rivals on reasoning, transcription, and video analysis.

Practical applications range from customer support tools that can analyse live video feeds of malfunctioning appliances to interactive assistants for virtual reality environments. Developers can fine-tune personality and style, adapting the system for industries ranging from consumer services to enterprise transcription.

Supporting these breakthroughs is a sweeping expansion of Alibaba’s infrastructure footprint. The firm plans to open its first data centres in Brazil, France, and the Netherlands, and to add facilities in Mexico, Japan, South Korea, Malaysia, and Dubai. All this builds on an earlier pledge to invest $53 billion in AI-related infrastructure over three years.

By coupling record-setting AI models with a global data centre buildout, Alibaba is signalling it intends to compete head-to-head with US leaders. With open licensing, massive infrastructure spending, and technical performance that matches or surpasses its Western rivals, China’s e-commerce titan is making a bold play to reshape the global AI landscape.


Gemini brings conversational AI to Google TV

Google has launched Gemini for TV, bringing conversational AI to the living room. The update builds on Google TV and Google Assistant, letting viewers chat naturally with their screens to discover shows, plan trips, or even tackle homework questions.

Instead of scrolling endlessly, users can ask Gemini to find a film everyone will enjoy or recap last season’s drama. The AI can handle vague requests, like finding ‘that new hospital drama,’ and provide reviews before you press play.

Gemini also turns the TV into an interactive learning tool. From explaining why volcanoes erupt to guiding kids through projects, it offers helpful answers with supporting YouTube videos for hands-on exploration.

Beyond schoolwork, Gemini can help plan meals, teach new skills like guitar, or brainstorm family trips, all through conversational prompts. Such features make the TV a hub for entertainment, education, and inspiration.

Gemini is now available on the TCL QM9K series, with rollout to additional Google TV devices planned for later this year. Google says additional features are coming soon, making TVs more capable and personalised.
