Study warns against using AI for Valentine’s messages

Psychologists have urged caution over using AI to write Valentine’s Day messages, after research suggested people judge such use negatively in intimate contexts.

A University of Kent study surveyed 4,000 participants about their perceptions of people who relied on AI to complete various tasks. Respondents viewed AI use more negatively when it was applied to writing love letters, apologies, and wedding vows.

According to the findings, people who used AI for personal messages were seen as less caring, less authentic, less trustworthy, and lazier, even when the writing quality was high and the AI use was disclosed.

The research forms part of the Trust in Moral Machines project, supported by the University of Exeter. Lead researcher Dr Scott Claessens said people judge not only outcomes, but also the process behind them, particularly in socially meaningful tasks.

Dr Jim Everett, also from the University of Kent, said relying on AI for relationship-focused communication risks signalling lower effort and care. He added that AI could not replace the personal investment that underpins close human relationships.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN General Assembly appoints experts to the Independent International Scientific Panel on AI

The UN General Assembly has appointed 40 experts to serve on a newly created Independent International Scientific Panel on Artificial Intelligence, marking the launch of the first global scientific body dedicated to assessing the technology’s impact. The panel, established by a 2025 Assembly resolution, will produce annual evidence-based reports examining AI’s opportunities, risks and broader societal effects.

The members, selected from more than 2,600 candidates, will serve in their personal capacity for a three-year term running from February 2026 to February 2029. According to UN Secretary-General António Guterres, ‘we now have a multidisciplinary group of leading AI experts from across the globe, geographically diverse and gender-balanced, who will provide independent and impartial assessments of AI’s opportunities, risks and impacts, including to the new Global Dialogue on AI Governance’.

The appointments were approved by a recorded vote of 117 in favour to two against (Paraguay and the United States), with two abstentions (Tunisia and Ukraine). The United States requested a recorded vote, strongly objecting to the panel’s creation and arguing that it represents an ‘overreach of the UN’s mandate and competence’.

Other countries pushed back against that view. Uruguay, speaking on behalf of the Group of 77 and China, stressed the call for ‘comprehensive international frameworks that guarantee the fair inclusion of developing countries in shaping the future of AI governance’.

Several delegations highlighted the technology’s potential to improve public services, expand access to education and healthcare, and accelerate progress towards the Sustainable Development Goals.

Supporters of the initiative argued that AI’s global and interconnected nature requires coordinated governance. Spain, co-facilitator of the resolution that created the panel, stressed that AI ‘generates an interdependence that demands governance frameworks that no State can build by itself’ and offered to host the panel’s first in-person meeting.

The European Union and others underlined the importance of scientific excellence, independence and integrity to ensure the panel’s credibility.

The United Kingdom emphasised that the Panel’s independence, scientific rigour, integrity and ability to reflect diverse perspectives are ‘essential ingredients for the Panel’s legitimacy and for its reports to be widely utilised’. China urged the Panel to prioritise capacity-building as a ‘core issue’ in its future work, while Iran stressed that the ‘voice of developing countries must be heard’ and that such states must be empowered to benefit from impartial scientific guidance.

Ukraine, while supporting the initiative, expressed concerns about a potential conflict of interest involving an expert nominated by Russia.

In parallel with the AI appointments, the Assembly named two new members to the Joint Inspection Unit, the UN’s independent oversight body responsible for evaluations and investigations across the system. It also noted that Ghana, Saint Vincent and the Grenadines, and Togo had reduced their arrears below the threshold set by Article 19 of the UN Charter, which can suspend a country’s voting rights if dues remain unpaid for two full years.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Germany drafts reforms expanding offensive cyber powers

Politico reports that Germany is preparing legislative reforms that would expand the legal framework for conducting offensive cyber operations abroad and strengthen authorities to counter hybrid threats.

According to the Interior Ministry, two draft laws are under preparation:

  • One would revise the mandate of Germany’s foreign intelligence service to allow cyber operations outside national territory.
  • A second would grant security services expanded powers to counter hybrid threats and to conduct what the government describes as active cyber defence.

The discussion in Germany coincides with broader European debates on offensive cyber capabilities. The Netherlands, in particular, has incorporated offensive cyber elements into its national strategies.

The reforms in Germany remain in draft form and may face procedural and constitutional scrutiny. Adjustments to intelligence mandates could require amendments supported by a two-thirds majority in both the Bundestag and Bundesrat.

The proposed framework for ‘active cyber defence’ would focus on preventing or mitigating serious threats. Reporting by Tagesschau indicates that draft provisions may allow operational follow-up measures in ‘special national situations’, particularly where timely police or military assistance is not feasible.

Opposition lawmakers have raised questions regarding legal clarity, implementation mechanisms, and safeguards. Expanding offensive cyber authorities raises longstanding policy questions, including challenges of attribution to identify responsible actors; risks of escalation or diplomatic repercussions; oversight and accountability mechanisms; and compatibility with international law and norms of responsible state behaviour.

The legislative process is expected to continue through the year, with further debate anticipated in parliament.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI anxiety strains the modern workforce

Mounting anxiety is reshaping the modern workplace as AI alters job expectations and career paths. Pew Research indicates more than a third of employees believe AI could harm their prospects, fuelling tension across teams.

Younger workers feel particular strain, with 92% of Gen Z saying it is vital to speak openly about mental health at work. Communicators and managers must now deliver reassurance while coping with their own pressure.

Leadership expert Anna Liotta points to generational intelligence as a practical way to reduce friction and improve trust. She highlights how tailored communication can reduce misunderstanding and conflict.

Her latest research connects neuroscience, including the role of the vagus nerve, with practical workplace strategies. By combining emotional regulation with thoughtful messaging, she suggests that organisations can calm anxiety and build more resilient teams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

British Transport Police trial live facial recognition at London Bridge station

On 11 February 2026, the British Transport Police (BTP) deployed Live Facial Recognition (LFR) cameras at London Bridge railway station as the first phase of a six-month trial intended to assess how the technology performs in a busy railway environment.

The pilot, planned with Network Rail, the Department for Transport and the Rail Delivery Group, will scan faces passing through designated areas and compare them to a watchlist of individuals wanted for serious offences, generating alerts for officers to review.

BTP says the trial is part of efforts to make the railways safer by quickly identifying high-risk offenders, with future LFR deployments to be announced in advance online.

Operational procedures include deleting images of people not on the authorised database and providing alternative routes for passengers who prefer not to enter recognition zones, with public feedback encouraged via QR codes on signage.
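Conceptually, the screening step described above reduces to comparing an embedding of each detected face against watchlist entries and raising an alert only when similarity exceeds a threshold; non-matching images are discarded. The sketch below is purely illustrative: the function names, embedding values and threshold are assumptions, not details of BTP’s actual system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def screen_face(face_embedding, watchlist, threshold=0.8):
    """Return the best watchlist match above the threshold, else None.

    A non-None result would trigger an alert for officer review;
    None means no match, and the image is not retained.
    """
    best_id, best_score = None, threshold
    for person_id, ref_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, ref_embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Illustrative watchlist with a single reference embedding.
watchlist = {"suspect_A": [0.9, 0.1, 0.4]}
match = screen_face([0.9, 0.1, 0.4], watchlist)     # same face -> alert
no_match = screen_face([0.0, 1.0, 0.0], watchlist)  # dissimilar -> discard
```

The human-review step matters here: in deployed systems the threshold trades false alerts against missed matches, which is why alerts go to officers rather than triggering action automatically.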

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Institute of AI Education marks significant step for responsible AI in schools

The Institute of AI Education was officially launched at York St John University, bringing together education leaders, teachers, and researchers to explore practical and responsible approaches to AI in schools.

Discussions at the event focused on critical challenges, including fostering AI literacy, promoting fairness and inclusion, and empowering teachers and students to have agency over how AI tools are used.

The institute will serve as a collaborative hub, offering research-based guidance, professional development, and practical support to schools. A central message emphasised that AI should enhance the work of educators and learners, rather than replace them.

The launch featured interactive sessions with contributions from both education and technology leaders, as well as practitioners sharing real-world experiences of integrating AI into classrooms.

Strong attendance and active participation underscored the growing interest in AI across the education sector, with representatives from the Department for Education highlighting notable progress in early years and primary school settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russia signals no immediate Google ban as Android dependence remains critical

Officials in Russia have confirmed that no plans are underway to restrict access to Google, despite recent public debate about the possibility of a technical block. Anton Gorelkin, a senior lawmaker, said regulators clarified that such a step is not being considered.

Concerns centre on the impact a ban would have on devices running Android, which are used by a significant share of smartphone owners in the country.

Officials argue that a block on Google would disrupt essential digital services rather than encourage the company to resolve ongoing legal disputes over unpaid fines.

Gorelkin noted that court proceedings abroad are still in progress, meaning enforcement options remain open. He added that any future move to reduce reliance on Google services should follow a gradual pathway supported by domestic technological development rather than abrupt restrictions.

The comments follow earlier statements from another lawmaker, Andrey Svintsov, who acknowledged that blocking Google in Russia is technically feasible but unnecessary.

Officials now appear focused on creating conditions that would allow local digital platforms to grow without destabilising existing infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hybrid AI could reshape robotics and defence

Investors and researchers are increasingly arguing that the future of AI lies beyond large language models. In London and across Europe, startups are developing so-called world models designed to simulate physical reality rather than simply predict text.

Unlike LLMs, which rely on static datasets, world models aim to build internal representations of cause and effect. Advocates say these systems are better suited to autonomous vehicles, robotics, defence and industrial simulation.

London-based Stanhope AI is among the companies pursuing this approach, claiming its systems learn by inference and continuously update their internal maps. The company is reportedly working with European governments and aerospace firms on AI drone applications.

Supporters argue that safety and explainability must be embedded from the outset, particularly under frameworks such as the EU AI Act. Investors suggest that hybrid systems combining LLMs with physics-aware models could unlock large commercial markets across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces tension over potential ban on AI ‘pornification’

Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.

Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.

Centre and liberal groups take a different position by promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.

They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.

Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.

Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.

The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.

A clear stance from the Parliament is still pending, rather than an assured path toward agreement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers tackle LLM regression with on-policy training

Researchers at MIT, the Improbable AI Lab and ETH Zurich have proposed a fine-tuning method to address catastrophic forgetting in large language models, an issue that often causes models to lose earlier skills when trained on new tasks.

The technique, called self-distillation fine-tuning, allows a model to act as both teacher and student during training. In experiments run in Cambridge and Zurich, the approach preserved prior capabilities while improving accuracy on new tasks.
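The teacher/student idea can be illustrated with a toy loss function: a frozen copy of the model before fine-tuning supplies a target distribution, and a KL-divergence term penalises the updated model for drifting away from it, while a standard cross-entropy term drives learning on the new task. This is a minimal sketch under those assumptions, not the researchers’ actual implementation; the weighting `alpha` and all names here are illustrative.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def self_distillation_loss(student_logits, teacher_logits, target, alpha=0.5):
    """Cross-entropy on the new task plus a KL penalty for drifting from the teacher."""
    p_student = softmax(student_logits)
    p_teacher = softmax(teacher_logits)
    task_loss = -math.log(p_student[target])      # learn the new task
    kl = sum(t * math.log(t / s)                  # stay close to the old model
             for t, s in zip(p_teacher, p_student))
    return alpha * task_loss + (1 - alpha) * kl

# If the student has not moved, the KL penalty is zero: only the task loss remains.
teacher = [2.0, 0.5, -1.0]
loss_same = self_distillation_loss(teacher, teacher, target=0)
# A student that drifts far from the teacher pays an extra penalty.
loss_drifted = self_distillation_loss([-1.0, 0.5, 2.0], teacher, target=0)
```

The regularisation term is what limits forgetting: gradient updates that improve the new task but move the output distribution far from the pre-update model are penalised, keeping both objectives inside a single model.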

Enterprise teams often maintain separate model variants to prevent regression, which increases operational complexity. The researchers argue that their method could reduce this fragmentation and support continual learning within a single production model.

However, the method requires around 2.5 times more computing power than standard supervised fine-tuning. Analysts note that real-world deployment will depend on governance controls, training costs and suitability for regulated industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!