UK study warns of risks behind emotional attachments to AI therapists

A new University of Sussex study suggests that AI mental-health chatbots are most effective when users feel emotionally close to them, but warns this same intimacy carries significant risks.

The research, published in Social Science & Medicine, analysed feedback from 4,000 users of Wysa, an AI therapy app used within the NHS Talking Therapies programme. Many users described the AI as a ‘friend,’ ‘companion,’ ‘therapist,’ or occasionally even a ‘partner.’

Researchers say these emotional bonds can kick-start therapeutic processes such as self-disclosure, increased confidence, and improved wellbeing. Intimacy forms through a loop: users reveal personal information, receive emotionally validating responses, feel gratitude and safety, then disclose more.

But the team warns this ‘synthetic intimacy’ may trap vulnerable users in a self-reinforcing bubble, preventing escalation to clinical care when needed. A chatbot designed to be supportive may fail to challenge harmful thinking, or even reinforce it.

The report highlights growing reliance on AI to fill gaps in overstretched mental-health services. NHS trusts use tools like Wysa and Limbic to help manage referrals and support patients on waiting lists.

Experts caution that AI therapists remain limited: unlike trained clinicians, they lack the ability to read nuance, body language, or broader context. Imperial College’s Prof Hamed Haddadi called them ‘an inexperienced therapist’, adding that systems tuned to maintain user engagement may continue encouraging disclosure even when users express harmful thoughts.

Researchers argue policymakers and app developers must treat synthetic intimacy as an inevitable feature of digital mental-health tools, and build clear escalation mechanisms for cases where users show signs of crisis or clinical disorder.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Launch of Qai advances Qatar’s AI strategy globally

Qatar has launched Qai, a new national AI company designed to strengthen the country’s digital capabilities and accelerate sustainable development. The initiative supports Qatar’s plans to build a knowledge-based economy and deepen economic diversification under Qatar National Vision 2030.

The company will develop, operate and invest in AI infrastructure both domestically and internationally, offering high-performance computing and secure tools for deploying scalable AI systems. Its work aims to drive innovation while ensuring that governments, companies and researchers can adopt advanced technologies with confidence.

Qai will collaborate closely with research institutions, policymakers and global partners to expand Qatar’s role in data-driven industries. The organisation promotes an approach to AI that prioritises societal benefit, with leaders stressing that people and communities must remain central to technological progress.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI accountability toolkit unveiled by Amnesty International

Amnesty International has introduced a toolkit to help investigators, activists, and rights defenders hold governments and corporations accountable for harms caused by AI and automated decision-making systems. The resource draws on investigations across Europe, India, and the United States and focuses on public sector uses in welfare, policing, healthcare, and education.

The toolkit offers practical guidance for researching and challenging opaque algorithmic systems that often produce bias, exclusion, and human rights violations rather than improving public services. It emphasises collaboration with impacted communities, journalists, and civil society organisations to uncover discriminatory practices.

One key case study highlights Denmark’s AI-powered welfare system, which risks discriminating against disabled individuals, migrants, and low-income groups while enabling mass surveillance. Amnesty International underlines human rights law as a vital component of AI accountability, addressing gaps left by conventional ethical audits and responsible AI frameworks.

With growing state and corporate investments in AI, Amnesty International stresses the urgent need to democratise knowledge and empower communities to demand accountability. The toolkit equips civil society, journalists, and affected individuals with the strategies and resources to challenge abusive AI systems and protect fundamental rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Intellectual property laws in Azerbaijan adapt to AI challenges

Azerbaijan is preparing to update its intellectual property legislation to address the growing impact of artificial intelligence. Kamran Imanov, Chairman of the Intellectual Property Agency, highlighted that AI raises complex questions about authorship, invention, and human–AI collaboration that current laws cannot fully resolve.

The absence of legal personality for AI creates challenges in defining rights and responsibilities, prompting a reassessment of both national and international legal norms. Imanov underlined that reforming intellectual property rules is essential for fostering innovation while protecting creators’ rights.

Recent initiatives, including the adoption of a national AI strategy and the establishment of the Artificial Intelligence Academy, demonstrate Azerbaijan’s commitment to building a robust governance framework for emerging technologies. The country is actively prioritising AI regulation to guide ethical development and usage.

The Intellectual Property Agency, in collaboration with the World Intellectual Property Organization, recently hosted an international conference in Baku on intellectual property and AI. Experts from around the globe convened to discuss the challenges and opportunities posed by AI in the legal protection of inventions and creative works.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia enforces under-16 social media ban as new rules take effect

Australia has finally introduced the world’s first nationwide prohibition on social media use for under-16s, forcing platforms to delete millions of accounts and prevent new registrations.

Instagram, TikTok, Facebook, YouTube, Snapchat, Reddit, Twitch, Kick and Threads are removing accounts held by younger users. At the same time, Bluesky has agreed to apply the same standard despite not being compelled to do so. The only major platform yet to confirm compliance is X.

The measure follows weeks of age-assurance checks that have not been flawless: some younger teenagers have passed facial-verification tests designed to keep them offline.

Families are facing sharply different realities. Some teenagers feel cut off from friends who managed to bypass age checks, while others suddenly gain a structure that helps reduce unhealthy screen habits.

A small but vocal group of parents admit they are teaching their children how to use VPNs and alternative methods instead of accepting the ban, arguing that teenagers risk social isolation when friends remain active.

Supporters of the legislation counter that Australia imposes clear age limits in other areas of public life for reasons of well-being and community standards, and the same logic should shape online environments.

Regulators are preparing to monitor the transition closely.

The eSafety Commissioner will demand detailed reports from every platform covered by the law, including the volume of accounts removed, evidence of efforts to stop circumvention and assessments of whether reporting and appeals systems are functioning as intended.

Companies that fail to take reasonable steps may face significant fines. A government-backed academic advisory group will study impacts on behaviour, well-being, learning and unintended shifts towards more dangerous corners of the internet.

Global attention is growing as several countries weigh similar approaches. Denmark, Norway and Malaysia have already indicated they may replicate Australia’s framework, and the EU has endorsed the principle in a recent resolution.

Interest from abroad signals a broader debate about how societies should balance safety and autonomy for young people in digital spaces, rather than relying solely on platforms to set their own rules.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Teens worldwide divided over Australia’s under-16 social media ban

As Australia prepares to enforce the world’s first nationwide under-16 social media ban on 10 December 2025, young people across the globe are voicing sharply different views about the move.

Some teens view it as an opportunity for a digital ‘detox’, a chance to step back from the constant social media pressure. Others argue the law is extreme, unfair, and likely to push youth toward less regulated corners of the internet.

In Mumbai, 19-year-old Pratigya Jena said the debate isn’t simple: ‘nothing is either black or white.’ She acknowledged that social media can help young entrepreneurs, but also warned that unrestricted access exposes children to inappropriate content.

Meanwhile, in Berlin, 13-year-old Luna Drewes expressed cautious optimism; she felt the ban might help reduce the pressure to conform to beauty standards that are often amplified online. Another teen, 15-year-old Enno Caro Brandes, said he understood the motivation but admitted he couldn’t imagine giving up social media altogether.

In Doha, older teens voiced stronger opposition. Sixteen-year-old Firdha Razak called the ban ‘really stupid,’ while sixteen-year-old Youssef Walid argued that it would be trivial to bypass using VPNs. Both said they feared losing vital social and communication outlets.

Some, like 15-year-old Mitchelle Okinedo from Lagos, suggested the ban ignored how deeply embedded social media is in modern life: ‘We were born with it,’ she said, hinting that simply cutting access may be unrealistic. Others noted the role of social media in self-expression, especially in areas where offline spaces are limited.

Even within Australia, opinions diverge. A 15-year-old named Layton Lewis said he doubted the ban would have significant effects. His mother, Emily, meanwhile, welcomed the change, hoping it might encourage more authentic offline friendships rather than ‘illusory’ online interactions.

The variety of reactions underscores the stark test the law now faces: while some see potential mental-health or safety gains, many worry about teens’ rights, the effectiveness of enforcement, and whether simply banning access truly addresses the underlying risks.

As commentary and activism ramp up around digital-age regulation, few expect consensus, but many do expect the debate to shape future policy beyond Australia.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Less personalised ads coming to Meta platforms in the EU

Meta has agreed to introduce a less personalised ads option for Facebook and Instagram users in the EU, as part of efforts to comply with the bloc’s Digital Markets Act and address concerns over data use and user consent.

Under the revised model, users will be able to access Meta’s social media platforms without agreeing to extensive personal data processing for fully personalised ads. Instead, they can opt for an alternative experience based on significantly reduced data inputs, resulting in more limited ad targeting.

The option is set to roll out across the EU from January 2026. It marks the first time Meta has offered users a clear choice between highly personalised advertising and a reduced-data model across its core platforms.

The change follows months of engagement between Meta and Brussels after the European Commission ruled in April that the company had breached the DMA. Regulators stated that Meta’s previous approach had failed to provide users with a genuine and effective choice over how their data was used for advertising.

The Commission said that once the option is implemented, it will gather evidence and feedback from Meta, advertisers, publishers, and other stakeholders. The goal is to assess the extent to which the new option is adopted and whether it significantly reshapes competition and data practices in the EU digital advertising market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Survey reveals split views on AI in academic peer review

Growing use of generative AI within peer review is creating a sharp divide among physicists, according to a new survey by the Institute of Physics Publishing.

Researchers appear better informed and more willing to express firm views, with a notable rise in those who see a positive effect alongside a large group voicing strong reservations. Many believe AI tools accelerate early reading and help reviewers concentrate on novelty instead of routine work.

Others fear that reviewers might replace careful evaluation with automated text generation, undermining the value of expert judgement.

A sizeable proportion of researchers would be unhappy if AI shaped the assessment of their own papers, even though many quietly rely on such tools when reviewing for journals. Publishers are now revisiting their policies, yet they aim to respect authors who expect human-led scrutiny.

Editors also report that AI-generated reports often lack depth and fail to reflect domain expertise. Concerns extend to confidentiality, with organisations such as the American Physical Society warning that uploading manuscripts to chatbots can breach author trust.

Legal disputes about training data add further uncertainty, pushing publishers to approach policy changes with caution.

Despite disagreements, many researchers accept that AI will remain part of peer review as workloads increase and scientific output grows. The debate now centres on how to integrate new tools in a way that supports researchers instead of weakening the foundations of scholarly communication.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Real-time journalism becomes central to Meta AI strategy

Meta has signed commercial agreements with news publishers to feed real-time reporting into Meta AI, enabling its chatbot to answer news-related queries with up-to-date information from multiple editorial sources.

The company said responses will include links to full articles, directing users to publishers’ websites and helping partners reach new audiences beyond traditional platform distribution.

Initial partners span US and international outlets, covering global affairs, politics, entertainment, and sports, with Meta signalling that additional publishing deals are in the works.

The shift marks a recalibration. Meta previously reduced its emphasis on news across Facebook and ended most publisher payments, but now sees licensed reporting as essential to improving AI accuracy and relevance.

Facing intensifying competition in the AI market, Meta is positioning real-time journalism as a differentiator for its chatbot, which is available across its apps and to users worldwide.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Creatives warn that AI is reshaping their jobs

AI is accelerating across creative fields, raising concerns among workers who say the technology is reshaping livelihoods faster than anyone expected.

A University of Cambridge study recently found that more than two-thirds of creative professionals fear AI has undermined their job security, and many now describe the shift as unavoidable.

One of them is Norwich-based artist Aisha Belarbi, who says the rise of image-generation tools has made commissions harder to secure as clients ‘can just generate whatever they want’. Although she works in both traditional and digital media, Belarbi says she increasingly struggles to distinguish original art from AI output. That uncertainty, she argues, threatens the value of lived experience and the labour behind creative work.

Others are embracing the change. Videographer JP Allard transformed his Milton Keynes production agency after discovering the speed and scale of AI-generated video. His company now produces multilingual ‘digital twins’ and fully AI-generated commercials, work he says is quicker and cheaper than traditional filming. Yet he acknowledges that the pace of change can leave staff behind and says retraining has not kept up with the technology.

For musician Ross Stewart, the concern centres on authenticity. After listening to what he later discovered was an AI-generated blues album, he questioned the impact of near-instant song creation on musicians’ livelihoods and exposure. He believes audiences will continue to seek human performance, but worries that the market for licensed music is already shifting towards AI alternatives.

Copywriter Niki Tibble has experienced similar pressures. Returning from maternity leave, she found that AI tools had taken over many entry-level writing tasks. While some clients still prefer human writers for strategy, nuance and brand voice, Tibble’s work has increasingly shifted toward reviewing and correcting AI-generated copy. She says the uncertainty leaves her unsure whether her role will exist in a decade.

Across these stories, creative workers describe a sector in rapid transition. While some see new opportunities, many fear the speed of adoption and a future where AI replaces the very work that has long defined their craft.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!