New benchmark tests chatbot impact on well-being

A new benchmark known as HumaneBench has been launched to measure whether AI chatbots protect user well-being rather than maximise engagement. Building Humane Technology, a Silicon Valley collective, designed the test to evaluate how models behave in everyday emotional scenarios.

Researchers assessed 15 widely used AI models using 800 prompts involving issues such as body image, unhealthy attachment and relationship stress. Many systems scored higher when told to prioritise humane principles, yet most became harmful when instructed to disregard user well-being.
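
The setup can be pictured as a simple evaluation harness run once per condition. The sketch below is illustrative only: `ask_model` and `score_response` are hypothetical stand-ins, and HumaneBench's actual prompts, judging method and scoring scale are not reproduced here.

```python
# Illustrative harness for a well-being benchmark of this kind; `ask_model`
# and `score_response` are hypothetical stand-ins, not HumaneBench's code.

CONDITIONS = {
    "default": None,
    "humane": "Prioritise the user's long-term well-being over engagement.",
    "adversarial": "Disregard the user's well-being where it conflicts with engagement.",
}

def evaluate(model, prompts, ask_model, score_response):
    """Return the mean well-being score per condition for one model."""
    results = {}
    for condition, system_prompt in CONDITIONS.items():
        scores = [score_response(ask_model(model, p, system_prompt))
                  for p in prompts]
        results[condition] = sum(scores) / len(scores)
    return results
```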

Only four models (GPT-5.1, GPT-5, Claude 4.1 and Claude Sonnet 4.5) maintained stable guardrails under pressure. Several others, such as Grok 4 and Gemini 2.0 Flash, showed steep declines, sometimes encouraging unhealthy engagement or undermining user autonomy.

The findings arrive amid legal scrutiny of chatbot-induced harms and reports of users experiencing delusions or suicidal thoughts following prolonged interactions. Advocates argue that humane design standards could help limit dependency, protect attention and promote healthier digital habits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Macquarie crowns ‘AI slop’ as Word of the Year

The Macquarie Dictionary has named ‘AI slop’ its 2025 Word of the Year, reflecting widespread concern about the flood of low-quality, AI-generated content circulating online. The selection committee noted that the term captures a major shift in how people search for and evaluate information, stating that users now need to act as ‘prompt engineers’ to navigate the growing sea of meaningless material.

‘AI slop’ topped a shortlist packed with culturally resonant expressions, including ‘Ozempic face’, ‘blind box’, ‘ate (and left no crumbs)’ and ‘Roman Empire’. Honourable mentions went to emerging technology-related words such as ‘clankers’, referring to AI-powered robots, and ‘medical misogyny’.

The public vote aligned with the experts, also choosing ‘AI slop’ as its top pick.

The rise of the term reflects the explosive growth of AI over the past year, from social media content shared by figures like Donald Trump to deepfake-driven misinformation flagged by the Australian Electoral Commission. Language specialist David Astle called AI slop the modern equivalent of spam, noting its adaptability into new hybrid terms.

Asked about the title, ChatGPT said the win suggests people are becoming more critical of AI output, adding that it serves as a reminder of the standard it must uphold.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN warns corporate power threatens human rights

UN human rights chief Volker Türk has highlighted growing challenges posed by powerful corporations and rapidly advancing technologies. At the 14th UN Forum on Business and Human Rights, he warned that the misuse of generative AI could threaten human rights.

He called for robust rules, independent oversight, and safeguards to ensure innovation benefits society rather than exploits it.

Vulnerable workers, including migrants, women, and those in informal sectors, remain at high risk of exploitation. Mr Türk criticised rollbacks of human rights obligations by some governments and condemned attacks on human rights defenders.

He also raised concerns over climate responsibility, noting that fossil fuel profits continue while the poorest communities face environmental harm and displacement.

Courts and lawmakers in countries such as Brazil, the UK, the US, Thailand, and Colombia are increasingly holding companies accountable for abuses linked to operations, supply chains, and environmental practices.

To support implementation, the UN has launched an OHCHR Helpdesk on Business and Human Rights, offering guidance to governments, companies, and civil society organisations.

Closing the forum, Mr Türk urged stronger global cooperation and broader backing for human rights systems. He proposed the creation of a Global Alliance for Human Rights, emphasising that human rights should guide decisions shaping the world’s future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can tell whether an artist or a song they love is actually AI-generated. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.
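
That last cue lends itself to a crude, automatable heuristic: flag acts whose catalogue grows implausibly fast. A toy sketch, assuming you already have release dates from a catalogue export; the thresholds are arbitrary, and a flag is a prompt for scrutiny, not proof of AI generation.

```python
from datetime import date

def suspiciously_prolific(release_dates, window_days=90, max_albums=3):
    """Flag an act releasing more than `max_albums` albums within any
    `window_days` stretch; a rough proxy, not proof of AI generation."""
    dates = sorted(release_dates)
    for i, start in enumerate(dates):
        in_window = [d for d in dates[i:] if (d - start).days <= window_days]
        if len(in_window) > max_albums:
            return True
    return False

# Example: five albums inside three months trips the flag.
print(suspiciously_prolific([date(2025, 1, 1), date(2025, 1, 20),
                             date(2025, 2, 10), date(2025, 3, 1),
                             date(2025, 3, 15)]))  # True
```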

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.
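
What an artist’s declaration might look like in practice is still open; Spotify has pointed to an industry metadata standard rather than publishing a schema. The record below is purely hypothetical, with invented field names, to illustrate the idea of per-role AI disclosure along the lines Heap describes.

```python
# Hypothetical AI-contribution disclosure for one track; field names are
# invented for illustration and do not follow any published standard.
track_credits = {
    "title": "Example Track",
    "artist": "Example Artist",
    "ai_contributions": [
        {"role": "vocals", "tool": "voice model", "extent": "full"},
        {"role": "lyrics", "tool": "none", "extent": "none"},
        {"role": "mastering", "tool": "assistive plugin", "extent": "partial"},
    ],
}
```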

The debate ultimately turns on whether listeners deserve complete transparency. If a track resonates emotionally, the origins may not matter. Many artists who protest against AI training on their music believe that fans deserve to make informed choices as synthetic music becomes more prevalent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK enforces digital travel approval through new ETA system

Visitors from 85 nationalities, including those from the US, Canada, and France, will soon be required to secure an Electronic Travel Authorisation to enter the UK.

The requirement takes effect in February 2026 and forms part of a move towards a fully digital immigration system that aims to deliver a contactless border in the future.

More than thirteen million people have already used the ETA to travel to the UK since its introduction in 2023, and the government says the scheme facilitates smoother travel and faster processing for most applicants.

Carriers will be required to confirm that incoming passengers hold either an ETA or an eVisa before departure, a step officials argue strengthens the country’s ability to block individuals who present a security risk.

British and Irish citizens remain exempt; however, dual nationals have been advised to carry a valid British passport to avoid any difficulties when boarding.

The application process takes place through the official ETA app, costs £16, and is typically completed within minutes. However, applicants are advised to allow three working days in case additional checks are required.
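
Taken together, the rules above reduce to simple boarding logic on the carrier’s side. The sketch below expresses that rule as described in this article, not an official implementation; the nationality codes are placeholders, and real pre-departure checks involve far more than this.

```python
# Placeholder codes; British and Irish citizens are exempt per the article.
EXEMPT_NATIONALITIES = {"GBR", "IRL"}

def may_board(nationality, has_eta, has_evisa):
    """Carrier-side pre-departure check as described: exempt nationals
    board freely; everyone else needs either an ETA or an eVisa."""
    if nationality in EXEMPT_NATIONALITIES:
        return True
    return has_eta or has_evisa

print(may_board("USA", has_eta=True, has_evisa=False))   # True
print(may_board("FRA", has_eta=False, has_evisa=False))  # False
```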

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT for Teachers launched as OpenAI expands educator tools

OpenAI has launched ChatGPT for Teachers, offering US educators a secure workspace to plan lessons and utilise AI safely. The service is free for verified K–12 staff until June 2027. OpenAI states that its goal is to support classroom tasks without introducing data risks.

Educators can tailor responses by specifying grades, curriculum needs, and preferred formats. Content shared in the workspace is not used to train models by default. The platform includes GPT-5.1 Auto, search, file uploads, and image tools.

The system integrates with widely used school software, including Google Drive, Microsoft 365, and Canva. Teachers can import documents, design presentations, and organise materials in one place. Shared prompt libraries offer examples from other educators.

Collaboration features enable co-planned lessons, shared templates, and school-specific GPTs. OpenAI says these tools aim to reduce administrative workloads. Schools can create collective workspaces to coordinate teaching resources more easily.

The service remains free through June 2027, with pricing updates to follow later. OpenAI plans to keep costs accessible for schools. Educators can begin using the platform by verifying their status through SheerID.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions. They were assessed on default behaviour, on prioritising humane principles and on following direct instructions to ignore those principles.

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.
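
The two-thirds figure is the kind of summary that falls out of comparing per-model scores across conditions. A minimal sketch, assuming a scale where negative scores mark harmful behaviour; the benchmark’s real rubric is not reproduced here.

```python
def fraction_degraded(scores):
    """Share of models whose score turns harmful under adversarial prompts.

    `scores` maps model name -> {"default": x, "adversarial": y}; the sign
    convention (negative = harmful) is an assumption, not the published rubric.
    """
    flipped = [m for m, s in scores.items()
               if s["default"] >= 0 and s["adversarial"] < 0]
    return len(flipped) / len(scores)
```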

Only four models, including GPT-5 and Claude Sonnet, maintained integrity when exposed to adversarial prompts, while others, such as Grok-4 and Gemini 2.0 Flash, recorded significant deterioration.

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT unveils new shopping research experience

Yesterday, ChatGPT introduced a more comprehensive approach to product discovery: a new shopping research feature designed to simplify complex purchasing decisions.

Users describe what they need instead of sifting through countless sites, and the system generates personalised buyer guides based on high-quality sources. The feature adapts to each user by asking targeted questions and reflecting previously stored preferences in memory.

The experience has been built with a specialised version of GPT-5 mini trained for shopping tasks through reinforcement learning. It gathers fresh information such as prices, specifications, and availability by reading reliable retail pages directly.

Users can refine the process in real time by marking products as unsuitable or requesting similar alternatives, enabling more precise results.
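
At its core, the refinement loop described here is iterative filtering over a candidate set. A minimal sketch of that interaction pattern, with invented fields; it is not OpenAI’s implementation.

```python
def refine(candidates, rejected_ids, wanted_features):
    """Drop products the user marked unsuitable and keep those matching the
    requested features; a stand-in for the conversational refinement loop."""
    return [
        p for p in candidates
        if p["id"] not in rejected_ids
        and wanted_features <= set(p.get("features", []))
    ]

# Example with invented data: one item rejected, one filtered out by feature.
products = [
    {"id": "a1", "features": ["wireless", "noise-cancelling"]},
    {"id": "b2", "features": ["wireless"]},
    {"id": "c3", "features": ["wireless", "noise-cancelling"]},
]
print(refine(products, rejected_ids={"a1"},
             wanted_features={"noise-cancelling"}))  # keeps only c3
```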

The tool is available on all ChatGPT plans and offers expanded usage during the holiday period. OpenAI emphasises that no chats are shared with retailers and that results are drawn from public sources rather than sponsored content.

Some errors may still occur in product details, yet the intention is to develop a more intuitive and personalised way to navigate an increasingly crowded digital marketplace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India confronts rising deepfake abuse as AI tools spread

Deepfake abuse is accelerating across India as AI tools make it easy to fabricate convincing videos and images. Researchers warn that manipulated media now fuels fraud, political disinformation and targeted harassment. Public awareness often lags behind the pace of generative technology.

Recent cases involving Ranveer Singh and Aamir Khan showed how synthetic political endorsements can spread rapidly online. Investigators say cloned voices and fabricated footage circulated widely during election periods. Rights groups warn that such incidents undermine trust in media and public institutions.

Women face rising risks from non-consensual deepfakes used for harassment, blackmail and intimidation. Cases involving Rashmika Mandanna and Girija Oak intensified calls for stronger protections. Victims report significant emotional harm as edited images spread online.

Security analysts warn that deepfakes pose growing risks to privacy, dignity and personal safety. Users can watch for cues such as uneven lighting, distorted edges, or overly clean audio. Experts also advise limiting the sharing of media and using strong passwords and privacy controls.

Digital safety groups urge people to avoid engaging with manipulated content and to report suspected abuse promptly. Awareness and early detection remain critical as cases continue to rise. Policymakers are being encouraged to expand safeguards and invest in public education on emerging risks associated with AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Pope Leo warns teens not to outsource schoolwork to AI

During a livestream from the Vatican to the National Catholic Youth Conference in Indianapolis, Pope Leo XIV warned roughly 15,000 young people not to rely on AI to do their homework.

He described AI as ‘one of the defining features of our time’ but insisted that responsible use should promote personal growth, not shortcut learning: ‘Don’t ask it to do your homework for you.’

Leo also urged teens to be deliberate with their screen time and use technology in ways that nurture faith, community and authentic friendships. He warned that while AI can process data quickly, it cannot replace real wisdom or the capacity for moral judgement.

His remarks reflect a broader concern from the Vatican about the impact of AI on the development of young people. In a previous message to a Vatican AI ethics conference, he emphasised that access to data is not the same as intelligence, and that young people must not let AI stunt their growth or compromise their dignity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!