UK launches software security ambassadors scheme

The UK government has launched the Software Security Ambassadors Scheme to promote stronger software security practices nationwide. The initiative is led by the Department for Science, Innovation and Technology and the National Cyber Security Centre.

Participating organisations commit to championing the new Software Security Code of Practice within their industries. Signatories agree to lead by example through secure development, procurement and advisory practices, while sharing lessons learned to strengthen national cyber resilience.

The scheme aims to improve transparency and risk management across UK digital supply chains. Software developers are encouraged to embed security throughout the whole lifecycle, while buyers are expected to incorporate security standards into procurement processes.

Officials say the approach supports the UK’s broader economic and security goals by reducing cyber risks and increasing trust in digital technologies. The government believes that better security practices will help UK businesses innovate safely and withstand cyber incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI becomes mainstream in UK car buying behaviour, survey shows

A recent survey reported by AM-Online reveals that approximately 66 per cent of UK car buyers use artificial intelligence in some form as part of their vehicle research and buying process.

AI applications cited include chatbots for questions and comparisons, recommendation systems for model selection, and virtual advisors that help consumers weigh options based on preferences and budget.

Industry commentators suggest that this growing adoption reflects broader digital transformation trends in automotive retail, with dealerships and manufacturers increasingly deploying AI technologies to personalise sales experiences, streamline research and nurture leads.

The integration of AI tools is seen as boosting customer engagement and efficiency, but it also raises questions about privacy and data protection, transparency and the future role of human sales advisors as digital tools become more capable.

New consortium applies AI to early drug research

A new AI-driven drug discovery initiative with a budget exceeding €60 million has launched, bringing together academic and industry partners across Europe and North America. University College London is acting as the lead academic partner in the UK.

The five-year LIGAND-AI programme is funded through the Innovative Health Initiative and aims to speed up early drug discovery. Researchers will generate large open datasets showing how molecules bind to human proteins, supporting the training of advanced AI models.

The consortium, led by Pfizer and the Structural Genomics Consortium, includes 18 partners in nine countries. Work will focus on proteins linked to diseases such as cancer, neurological conditions and rare disorders.

UK-based UCL scientists will help build global research networks and promote open sharing of protein samples and machine learning models. Organisers say the project supports open science and long-term goals to map chemical modulators for every human protein.

UK toy industry trends show promising market recovery amid social media challenges

UK toy industry trends point to a recovering market, but the sector faces challenges from possible social media regulations for children.

After Australia introduced a ban on social media for under-16s, UK toy sellers are monitoring the possibility of similar policies.

The UK toy market is rebounding, with sales value rising 6 percent last year, the first growth since 2020. Despite cost-of-living pressures, families continue to prioritise spending on toys, especially during holidays like Christmas.

A major driver of UK toy industry trends is the growth of the ‘kidult’ market. Older children and adults now account for around 30 percent of toy sales and spend more on items such as Lego sets, collectable figurines, and pop-culture merchandise. That shift shows that the sector is no longer reliant solely on younger children.

Social media shapes UK toy industry trends, as platforms promote toys from films, games, music, and sports, with franchises like Pokémon and Minecraft driving consumer interest.

Potential social media restrictions could force the industry to adapt, relying more on in-store promotions, traditional media, or franchise collaborations. The sector must balance child-protection policies with its growing dependence on digital platforms to maintain growth.

UK study tests social media restrictions on children’s mental health

A major UK research project will examine how restricting social media use affects children’s mental health, sleep, and social lives, as governments debate tougher rules for under-16s.

The trial involves around 4,000 pupils from 30 secondary schools in Bradford and represents one of the first large-scale experimental studies of its kind.

Participants aged 12 to 15 will either have their social media use monitored or restricted through a research app limiting access to major platforms to one hour per day and imposing a night-time curfew.

Messaging services such as WhatsApp will remain available instead of being restricted, reflecting their role in family communication.

Researchers from the University of Cambridge and the Bradford Centre for Health Data Science will assess changes in anxiety, depression, sleep patterns, bullying, and time spent with friends and family.

Entire year groups within each school will experience the same conditions to capture social effects across peer networks rather than isolated individuals.

The findings, expected in summer 2027, arrive as UK lawmakers consider proposals for a nationwide ban on social media use by under-16s.

Although independent from government policy debates, the study aims to provide evidence to inform decisions in the UK and other countries weighing similar restrictions.

Labour MPs press Starmer to consider UK under-16s social media ban

Pressure is growing on Keir Starmer after more than 60 Labour MPs called for a UK ban on social media use for under-16s, arguing that children’s online safety requires firmer regulation instead of voluntary platform measures.

The signatories span Labour’s internal divides, including senior parliamentarians and former frontbenchers, signalling broad concern over the impact of social media on young people’s well-being, education and mental health.

Supporters of the proposal point to Australia’s recently implemented ban as a model worth following, suggesting that early evidence could guide UK policy development rather than prolonged inaction.

Starmer is understood to favour a cautious approach, preferring to assess the Australian experience before endorsing legislation, as peers prepare to vote on related measures in the coming days.

AI guidance released for UK tax professionals by leading bodies

Several UK professional organisations for tax practitioners, including the Chartered Institute of Taxation (CIOT) and the Society of Trust and Estate Practitioners (STEP), have published new AI guidance for members.

The documents aim to help tax professionals understand how to adopt AI tools securely and responsibly while maintaining professional standards and compliance with legal and regulatory frameworks.

The guidance stresses that members should be aware of risks associated with AI, including data quality, bias, model limitations and the need for human oversight. It encourages firms to implement robust governance, clear policies on use, appropriate training and verification processes where outputs affect client advice or statutory obligations.

By highlighting best practices, the professional bodies seek to balance the benefits of generative AI, such as improved efficiency and research assistance, with ethical considerations and core professional responsibilities.

The guidance also points to data-protection obligations under UK law and the importance of maintaining client confidentiality when using third-party AI systems.

Microsoft obtains UK and US court orders to disable cybercrime infrastructure

Microsoft has obtained court orders in the United Kingdom and the United States to disrupt the cybercrime-as-a-service platform RedVDS, marking the first time its Digital Crimes Unit (DCU) has pursued a major civil action outside the US.

According to Microsoft, the legal action targeted infrastructure supporting RedVDS, a service that provided virtualised computing resources used in fraud and other cyber-enabled criminal activity. The company sought relief in the UK courts because elements of the platform’s infrastructure were hosted by a UK-based provider, and a significant number of affected victims were located in the UK.

It is reported that the action was conducted with support from Europol’s European Cybercrime Centre (EC3), as well as German authorities, including the Central Office for Combating Internet Crime (ZIT) at the Frankfurt-am-Main Public Prosecutor’s Office and the Criminal Police Office of the state of Brandenburg.

RedVDS operated on a subscription basis, with access reportedly available for approximately $24 per month. The service provided customers with short-lived virtual machines, which could be used to support activities such as phishing campaigns, hosting malicious infrastructure, and facilitating online fraud.

Microsoft states that RedVDS infrastructure has been used in a range of cyber-enabled criminal activities since September 2025, including business email compromise (BEC). In BEC cases, attackers impersonate trusted individuals or organisations to induce victims to transfer funds to accounts under the attackers’ control.

According to Microsoft’s assessment, users of the service targeted organisations across multiple sectors and regions. The real estate sector was among those affected, with estate agents, escrow agents, and title companies reportedly targeted in Australia and Canada. Microsoft estimates that several thousand organisations in that sector experienced some level of impact.

The company also noted that RedVDS users combined the service with other tools, including generative AI technologies, to scale operations, identify potential targets, and generate fraudulent content.

UK users can now disable Shorts autoplay with new YouTube feature

YouTube has introduced a new parental control for users in the United Kingdom that lets parents and guardians disable Shorts autoplay and continuous scrolling, addressing concerns about addictive viewing patterns and excessive screen time among children.

The feature gives families greater control over how the short-form video feed behaves, allowing users to turn off the infinite-scroll experience that keeps viewers engaged longer.

The update comes amid broader efforts by tech platforms to provide tools that support healthier digital habits, especially for younger users. YouTube says the control can help parents set limits without entirely removing access to Shorts content.

The roll-out is initially targeted at UK audiences, with the company indicating feedback will guide potential expansion. YouTube’s new off-switch reflects growing industry awareness of screen-time impacts and regulatory scrutiny around digital wellbeing features.

Ofcom probes AI companion chatbot over age checks

Ofcom has opened an investigation into Novi Ltd over age checks on its AI companion chatbot. The probe focuses on duties under the Online Safety Act.

Regulators will assess whether children can access pornographic content without effective age assurance. Sanctions could include substantial fines or business disruption measures under the UK's Online Safety Act.

In a separate case, Ofcom confirmed enforcement pressure led Snapchat to overhaul its illegal content risk assessment. Revised findings now require stronger protections for UK users.

Ofcom said accurate risk assessments underpin online safety regulation. Platforms must match safeguards to real-world risks, particularly when AI and children are concerned.
