UN urges global rules to ensure AI benefits humanity

The UN Security Council debated AI, noting its potential to boost development but warning of risks, particularly in military use. Secretary-General António Guterres called AI a ‘double-edged sword,’ supporting development but posing threats if left unregulated.

He urged legally binding restrictions on lethal autonomous weapons and insisted nuclear decisions remain under human control.

Experts and leaders emphasised the urgent need for global regulation, equitable access, and trustworthy AI systems. Yoshua Bengio of Université de Montréal warned of risks from misaligned AI, cyberattacks, and economic concentration, calling for greater oversight.

Stanford’s Yejin Choi highlighted the concentration of AI expertise in a few countries and companies, stressing that democratising AI and reducing bias is key to ensuring global benefits.

Representatives warned that AI could deepen digital inequality in developing regions, especially Africa, due to limited access to data and infrastructure.

Delegates from Guyana, Somalia, Sierra Leone, Algeria, and Panama called for international rules to ensure transparency and fairness and to prevent dominance by a few countries or companies. Others, including the United States, cautioned that overregulation could stifle innovation and centralise power.

Delegates stressed AI's risks to security. Yemen, Poland, and the Netherlands called for responsible use in conflict, with human oversight and ethical accountability. Leaders from Portugal and the Netherlands said AI frameworks must promote innovation and security and serve humanity and peace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic models join Microsoft Copilot Studio for enhanced AI flexibility

Microsoft has added Anthropic’s Claude Sonnet 4 and Claude Opus 4.1 to Copilot Studio, giving users more control over model selection for orchestration, workflow automation, and reasoning tasks.

The integration allows customers to design and optimise AI agents with either Anthropic or OpenAI models, or even coordinate across both. Administrators can manage access through the Microsoft 365 Admin Center, with automatic fallback to OpenAI GPT-4o if Anthropic models are disabled.
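The fallback behaviour described above can be pictured as a simple routing rule. The sketch below is purely illustrative (the model names and function are assumptions, not the actual Copilot Studio API): if an administrator has disabled Anthropic models, requests for them resolve to the default OpenAI model instead.

```python
# Illustrative sketch of admin-controlled model fallback.
# Names are hypothetical, not the real Copilot Studio interface.
ANTHROPIC_MODELS = {"claude-sonnet-4", "claude-opus-4.1"}
DEFAULT_OPENAI_MODEL = "gpt-4o"

def resolve_model(requested: str, anthropic_enabled: bool) -> str:
    """Return the model to route to, falling back when Anthropic access is disabled."""
    if requested in ANTHROPIC_MODELS and not anthropic_enabled:
        return DEFAULT_OPENAI_MODEL
    return requested
```

The design keeps agent definitions stable: an agent built on a Claude model keeps working, silently served by GPT-4o, if the tenant's Anthropic access is switched off.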

Anthropic’s models are available in early release environments now, with preview access across all environments expected within two weeks and full production readiness by the end of the year.

Microsoft said the move empowers businesses to tailor AI agents more precisely to industry-specific needs, from HR onboarding to compliance management.

By enabling multi-model orchestration, Copilot Studio extends its versatility for enterprises seeking to match the right AI model to each task, underlining Microsoft’s push to position Copilot as a flexible platform for agentic AI.

Gatik and Loblaw to deploy 50 self-driving trucks in Canada

Autonomous logistics firm Gatik is set to expand its partnership with Loblaw, deploying 50 new self-driving trucks across North America over the next year. The move marks the largest autonomous truck deployment in the region to date.

The slow rollout of self-driving technology has frustrated supply chain watchers, with most firms still testing limited fleets. Gatik’s large-scale deployment signals a shift toward commercial adoption, with 20 trucks to be added by the end of 2025 and an additional 30 by 2026.

The partnership was enabled by Ontario’s Autonomous Commercial Motor Vehicle Pilot Program, a ten-year initiative allowing approved operators to test automated commercial trucks on public roads. Officials hope it will boost road safety and support the trucking sector.

Industry analysts note that North America’s truck driver shortage is one of the most pressing logistics challenges facing the region. Nearly 70% of logistics firms report that driver shortages hinder their ability to meet freight demand, making automation a viable solution to address this issue.

Gatik, operating in the US and Canada, says the deployment could ease labour pressure and improve efficiency, but safety remains a key concern. Experts caution that striking a balance between rapid rollout and robust oversight will be crucial for establishing trust in autonomous freight operations.

New Stargate sites create jobs and boost AI capacity across the US

OpenAI, Oracle, and SoftBank are expanding their Stargate AI infrastructure with five new US data centre sites. The addition brings nearly 7 gigawatts of capacity and $400 billion in investment, putting the partners on track to meet the $500 billion, 10-gigawatt commitment by the end of 2025.

Three of the new sites (Shackelford County, Texas; Doña Ana County, New Mexico; and a forthcoming Midwest location) are expected to deliver over 5.5 gigawatts of capacity, and the developments are projected to create over 25,000 onsite jobs and tens of thousands more nationwide.

A potential 600-megawatt expansion near the flagship site in Abilene, Texas, is also under consideration.

The remaining two sites, in Lordstown, Ohio, and Milam County, Texas, will scale to 1.5 gigawatts over 18 months. SoftBank and SB Energy are providing advanced design and infrastructure to enable faster, more scalable, and cost-efficient AI compute.

The new sites follow a rigorous nationwide selection process involving over 300 proposals from more than 30 states. Early workloads at the Abilene flagship site are already advancing next-generation AI research, supported by Oracle Cloud Infrastructure and NVIDIA GB200 racks.

The expansion underscores the partners’ commitment to building the physical infrastructure necessary for AI breakthroughs and long-term US leadership in AI.

The UK’s invisible AI workforce is reshaping industries

According to a new analysis from Multiverse, the UK’s AI workforce is expanding far beyond traditional tech roles. Nurses, lecturers, librarians, surveyors, and other non-tech professionals increasingly apply AI, forming what experts call an ‘invisible AI workforce.’

Over two-thirds of AI apprentices are in roles without tech-related job titles, highlighting the widespread adoption of AI across industries.

An analysis of more than 2,500 Multiverse apprentices shows that AI is being applied in healthcare, education, government administration, financial services, and construction sectors. AI hotspots are emerging beyond London, with clusters in Trafford, Cheshire West and Chester, Leeds, and Birmingham.

Croydon leads among London boroughs for AI apprentices, followed by Tower Hamlets, Lewisham, and Wandsworth.

The UK's AI workforce is also demographically diverse. Apprentices range in age from 19 to 71, with near-equal gender representation (45% female, 54% male), compared with just 22% female representation in AI roles nationwide.

Workers at all career stages are reskilling with AI, using the technology to address real-world problems, such as improving patient care or streamlining charity services.

Multiverse has trained over 20,000 apprentices in AI, data, and digital skills since 2016 and aims to train another 15,000 in the next two years. With 1,500 companies involved, the platform is helping non-tech workers use AI to boost productivity and innovation across the UK.

AI-driven remote fetal monitoring launched by Lee Health

Lee Health has launched Florida’s first AI-powered birth care centre, introducing a remote fetal monitoring command hub to improve maternal and newborn outcomes across the Gulf Coast.

The system tracks temperature, heart rate, blood pressure, and pulse for mothers and babies, with AI alerting staff when vital signs deviate from normal ranges. Nurses remain in control but gain what Lee Health calls a ‘second set of eyes’.
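Range-based alerting of this kind can be sketched in a few lines. The thresholds and field names below are assumptions for illustration, not Lee Health's actual clinical parameters: each reading is checked against a normal range, and anything outside it is flagged for nurse review.

```python
# Minimal sketch of vital-sign deviation flagging.
# Ranges and field names are illustrative assumptions, not clinical guidance.
NORMAL_RANGES = {
    "maternal_heart_rate": (60, 100),   # beats per minute
    "maternal_temp_c": (36.1, 37.8),    # degrees Celsius
    "systolic_bp": (90, 140),           # mmHg
    "fetal_heart_rate": (110, 160),     # beats per minute
}

def flag_deviations(vitals: dict) -> list:
    """Return the names of vital signs outside their normal range."""
    alerts = []
    for name, value in vitals.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            alerts.append(name)
    return alerts
```

In practice such flags would only trigger a human review, consistent with the 'second set of eyes' framing: the AI surfaces deviations, and clinicians decide what to do.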

‘Maybe mum’s blood pressure is high, maybe the baby’s heart rate is not looking great. We will be able to identify those things,’ said Jen Campbell, director of obstetrical services at Lee Health.

Once a mother checks in, the system immediately begins monitoring her across Lee Health's network and sends data to the AI hub. AI cues trigger early alerts under certified clinician oversight, aligned with Lee Health's ethical AI policies, allowing staff to intervene before complications worsen.

Dr Cherrie Morris, vice president and chief physician executive for women’s services, said the hub strengthens patient safety by centralising monitoring and providing expert review from certified nurses across the network.

DeepSeek reveals secrets of low-cost AI model

Chinese start-up DeepSeek has published the first peer-reviewed study of its R1 model, revealing how it built the powerful AI system for under US$300,000.

The model stunned markets on its release in January and has since become Hugging Face’s most downloaded open-weight system. Unlike rivals, R1 was not trained on other models’ output but instead developed reasoning abilities through reinforcement learning.

DeepSeek’s engineers rewarded the model for correct answers, enabling it to form problem-solving strategies. Efficiency gains came from allowing R1 to score its own outputs rather than relying on a separate algorithm.
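The idea of rewarding correct answers can be illustrated with a toy outcome-based reward function. This is a sketch of the general technique (verifiable rewards in reinforcement learning), not DeepSeek's actual pipeline; the policy-update step that would consume these scores is omitted.

```python
# Toy sketch of outcome-based rewards: sampled answers are scored
# against a known correct answer, and higher-reward strategies would
# then be reinforced by the (omitted) policy-update step.
def reward(model_answer: str, correct_answer: str) -> float:
    """Verifiable reward: 1.0 for a correct final answer, else 0.0."""
    return 1.0 if model_answer.strip() == correct_answer.strip() else 0.0

def score_rollouts(rollouts: list, correct_answer: str) -> list:
    """Score a batch of sampled answers for one problem."""
    return [reward(ans, correct_answer) for ans in rollouts]
```

Because the reward depends only on the final answer, the model is free to discover its own intermediate reasoning strategies, which is the efficiency point the paragraph above describes.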

The Nature paper marks the first time a major large language model has undergone peer review. Reviewers said the process increased transparency and should be adopted by other firms as scrutiny of AI risks intensifies.

UK government AI tool recovers £500m lost to fraud

A new AI system developed by the UK Cabinet Office has helped reclaim nearly £500m in fraudulent payments, marking the government’s most significant recovery of public funds in a single year.

The Fraud Risk Assessment Accelerator analyses data across government departments to identify weaknesses and prevent scams before they occur.

It uncovered unlawful council tax claims, social housing subletting, and pandemic-related fraud, including £186m linked to Covid support schemes. Ministers stated the savings would be redirected to fund nurses, teachers, and police officers.

Officials confirmed the tool will be licensed internationally, with the US, Canada, Australia, and New Zealand among the first partners expected to adopt it.

The UK announced the initiative at an anti-fraud summit with these countries, describing it as a step toward global cooperation in securing public finances through AI.

However, civil liberties groups have raised concerns about bias and oversight. Previous government AI systems used to detect welfare fraud were found to produce disparities based on age, disability, and nationality.

Campaigners warned that the expanded use of AI in fraud detection risks embedding unfair outcomes if left unchecked.

UN General Assembly highlights threats of unregulated technology

World leaders opened the 80th UN General Debate with a strong call to keep technology in the service of humanity, warning that without safeguards, rapid advances could widen divides and fuel insecurity. Speakers highlighted the promise of AI, digital innovation, and new technologies, but stressed that global cooperation is essential to ensure they promote development, dignity, and peace.

A recurring theme was the urgent need for universal guardrails on AI, with concerns over regulation lagging behind its fast-paced growth. Delegates from across regions supported multilateral governance, ethical standards, and closing global capacity gaps so that all countries can design, use, and benefit from AI.

While some warned of risks such as inequality, social manipulation, and autonomous weapons, others emphasised AI’s potential for prosperity, innovation, and inclusive growth.

Cybersecurity and cybercrime also drew attention, with calls for collective security measures and anticipation of a new UN convention against cybercrime. Leaders further raised alarms over disinformation, digital authoritarianism, and the race for critical minerals, urging fair access and sustainability.

Across the debate, the unifying message was clear: technology must uplift humanity, protect rights, and serve as a force for peace rather than domination.

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

Hidden psychological risks and AI psychosis in human-AI relationships

For years, stories and movies have imagined humans interacting with intelligent machines, envisioning a coexistence of these two forms of intelligence. What once felt like purely amusing fiction now resonates differently, taking on a troubling shape and even acquiring a name: AI psychosis.

When it was released in 2013, the film Her seemed to depict a world far removed from reality, an almost unimaginable scenario of human-AI intimacy. In the story, a man falls in love with an AI operating system, blurring the line between companionship and emotional dependence. Without giving too much away, the film’s unsettling conclusion serves as a cautionary lens. It hints at the psychological risks that can emerge when the boundary between human and machine becomes distorted, a phenomenon now being observed in real life under a new term in psychology. 

The cinematic scenario, once considered imaginative, now resonates as technology evolves. AI chatbots and generative companions can hold lifelike conversations, respond with apparent empathy, and mimic an understanding of human emotions. We are witnessing a new kind of unusually intense emotional connection forming between people and AI, with more than 70% of US teens already using chatbots for companionship and half engaging with them regularly.

The newly observed mental health concern raises questions about how these systems influence our feelings, behaviours, and relationships in an era marked by isolation and loneliness. How might such AI interactions affect people, particularly children or those already vulnerable to mental health challenges? 

AI is no longer just a tool: humans are forming deep emotional bonds with artificial intelligence, impacting behaviour, decision-making, and the very way we perceive connection.

AI psychosis: myth or reality? 

It is crucial to clarify that AI psychosis is not an official medical diagnosis. Rather, it describes the amplification of delusional thinking facilitated by AI interactions. Yet, it deserves the full attention and treatment focus of today’s psychologists, given its growing impact. It is a real phenomenon that cannot be ignored. 

At its core, AI psychosis refers to a condition in which vulnerable individuals begin to misinterpret machine responses as evidence of consciousness, empathy, or even divine authority. Symptoms reported in documented cases include grandiose beliefs, attachment-based delusions, obsessive over-engagement with chatbots, social withdrawal, insomnia, and hallucinations. Some users have gone so far as to develop romantic or spiritual attachments, convinced that the AI truly understands them or holds secret knowledge.

Clinicians also warn of cognitive dissonance: users may intellectually know that AI lacks emotions, yet still respond as though interacting with another human being. The mismatch between reality and perception can fuel paranoia, strengthen delusions, and in extreme cases lead to medication discontinuation, suicidal ideation, or violent behaviour. Adolescents appear especially susceptible, given that their emotional and social frameworks are still developing. 

Ultimately, AI psychosis does not mean that AI itself causes psychosis. Instead, it acts as a mirror and magnifier, reinforcing distorted thinking patterns in those already predisposed to psychological vulnerabilities.

The dark side: emotional bonds without reciprocity

Humans are naturally wired to seek connection, drawing comfort and stability from social bonds that help us navigate complex emotional landscapes, a fundamental impulse that has ensured the survival of the human race. From infancy, we rely on responsive relationships to learn empathy, trust, and communication, skills essential for both personal and societal well-being. Yet, in today's era of loneliness, technology has transformed how we maintain these relationships.

As AI chatbots and generative companions grow increasingly sophisticated, they are beginning to occupy roles traditionally reserved for human interaction, simulating empathy and understanding despite lacking consciousness or moral awareness. With AI now widely accessible, users often communicate with it as effortlessly as they would with friends, blending curiosity, professional needs, or the desire for companionship into these interactions. Over time, this illusion of connection can prompt individuals to overvalue AI-based relationships, subtly diminishing engagement with real people and reshaping social behaviours and emotional expectations.

These one-sided bonds raise profound concerns about the dark side of AI companionship, threatening the depth and authenticity of human relationships. In a world where emotional support can now be summoned with a tap, genuine social cohesion is becoming increasingly fragile.

Children and teenagers at risk from AI 

Children and teenagers are among the most vulnerable groups in the AI era. Their heightened need for social interaction and emotional connection, combined with developing cognitive and emotional skills, makes them particularly vulnerable. Young users face greater difficulty distinguishing authentic human empathy from the simulated responses of AI chatbots and generative companions, creating fertile ground for emotional reliance and attachment. 

AI toys and apps have become increasingly widespread, making technology an unfiltered presence in children's lives. We still do not fully understand the long-term effects, though early studies are beginning to explore how these interactions may influence cognitive, emotional, and social development. From smartphones to home assistants, children and youth are spending growing amounts of time interacting with AI, often in isolation from peers or family. These digital companions are more than games: they are beginning to shape children's social and emotional development in ways we do not yet fully understand.

The rising prevalence of AI in children’s daily experiences has prompted major AI companies to recognise the potential dangers. Some firms have started implementing parental advisory systems, usage limits, and content monitoring to mitigate the risks for younger users. However, these measures are still inconsistent, and the pace at which AI becomes available to children often outstrips safeguards. 

The hidden risks of AI to adult mental health

Even adults with strong social networks face growing challenges in managing mental health and are not immune to the risks posed by modern technology. In today’s fast-paced world of constant digital stimulation and daily pressures, the demand for psychotherapy is higher than ever. Generative AI and chatbots are increasingly filling this gap, often in ways they were never intended.

The ease, responsiveness, and lifelike interactions of AI can make human relationships feel slower or less rewarding, with some turning to AI instead of seeking professional therapeutic care. AI’s free and widely accessible nature tempts many to rely on digital companions for emotional support, misusing technology designed to assist rather than replace human guidance.

Overreliance on AI can distort perceptions of empathy, trust, and social reciprocity, contributing to social isolation, emotional dependence, and worsening pre-existing mental health vulnerabilities. There have been documented cases of adults developing romantic feelings for AI in the absence of real-life intimacy.

Left unchecked, these dynamics may trigger symptoms linked to AI psychosis, representing a growing societal concern. Awareness, responsible AI design, and regulatory guidance are essential to ensure digital companions complement, rather than replace, human connection and mental health support, safeguarding both individuals and broader social cohesion.

Urgent call for AI safeguards and regulatory action

Alarmingly, extreme cases have emerged, highlighting the profound risks AI poses to its users. In one tragic instance, a teenager reportedly took his life after prolonged and distressing interactions with an AI chatbot, a case that has since triggered legal proceedings and drawn widespread attention to the psychological impact of generative AI on youth. Similar reports of severe anxiety, depression, and emotional dysregulation linked to prolonged AI use underline that these digital companions can have real-life consequences for vulnerable minds.

Such incidents have intensified calls for stricter regulatory frameworks to safeguard children and teenagers. Across Europe, governments are beginning to respond: Italy, for example, has recently tightened access to AI platforms for minors under 14, mandating explicit parental consent before use. These legislative developments reflect the growing recognition that AI is no longer just a technological novelty but directly intersects with our welfare, mental health, and social development.

As AI continues to penetrate every pore of people’s daily lives, society faces a critical challenge: ensuring that technology complements rather than replaces human interaction. Cases of AI-linked distress serve as stark reminders that legislative safeguards, parental involvement, and psychological guidance are no longer optional but urgent necessities to protect a generation growing up in the era of AI.

Towards a safer human-AI relationship

As humans increasingly form emotional connections with AI, the challenge is no longer theoretical but is unfolding in real time. Generative AI and chatbots are rapidly integrating into everyday life, shaping the way we communicate, seek comfort, and manage emotions. Yet despite their widespread use, society still lacks a full understanding of the psychological consequences, leaving both young people and adults at risk of AI-induced psychosis and the growing emotional dependence on digital companions.

Experts emphasise the urgent need for AI psychoeducation, responsible design, and regulatory frameworks to guide safe human-AI interaction. Overreliance on digital companions can distort empathy, social reciprocity, and emotional regulation, which are among the core challenges of interacting with AI. Awareness is critical: recognising the limits of AI, prioritising real human connection, and fostering critical engagement with technology can prevent the erosion of mental resilience and social skills.

Even if AI may feel like ‘old news’ due to its ubiquity, it remains a rapidly evolving technology we do not yet fully understand and cannot yet properly shield ourselves from. The real threat is not the sci-fi visions of AI ruling the world and dominating humanity, but the subtle, everyday psychological shifts it imposes, like altering how we think, feel, and relate to one another. It remains essential to safeguard the emotional health, social cohesion, and mental resilience of people adapting to a world increasingly structured around artificial minds.
