UK regulator targets misleading online reviews in new crackdown

The Competition and Markets Authority has launched new investigations into five companies as part of a wider crackdown on fake and misleading online reviews, targeting practices that shape consumer decisions rather than reflect genuine customer experiences.

The cases involve Autotrader, Feefo, Dignity, Just Eat and Pasta Evangelists, spanning sectors including car sales, food delivery and funeral services.

The CMA is examining whether negative reviews were suppressed, ratings inflated, or incentives offered in exchange for positive feedback without disclosure.

Concerns also extend to moderation practices and whether review systems provide a complete and accurate picture of customer experiences, rather than favouring reputational or commercial interests. No conclusions have yet been reached on whether consumer law has been breached.

Online reviews play a central role in consumer behaviour, influencing significant levels of spending across the UK economy.

Research indicates that a large majority of consumers rely on reviews when making purchasing decisions, raising concerns that misleading content can distort markets and undermine trust, particularly as AI makes it harder to detect fabricated reviews.

The investigations form part of a broader enforcement effort under the Digital Markets, Competition and Consumers Act 2024, which introduced stricter rules on fake and misleading reviews.

Authorities aim to improve transparency and accountability across digital platforms, with potential penalties reaching up to 10% of global turnover for companies found to have breached consumer protection laws.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK tests social media bans for children in national pilot

The UK government has launched a large-scale pilot programme to test social media restrictions in the homes of 300 teenagers, aiming to improve children’s well-being instead of relying solely on existing digital safety measures.

The initiative, led by the Department for Science, Innovation and Technology and supported by Liz Kendall, will run for six weeks and examine how limits on digital platforms affect young people’s daily lives, including sleep, schoolwork, and family relationships.

Families across the UK will be divided into groups testing different approaches. Some parents will block access to social media entirely, while others will introduce a one-hour daily limit on popular platforms such as Instagram, TikTok, and Snapchat.

Another group will implement overnight curfews, restricting access between 9 pm and 7 am, while a control group will maintain existing usage patterns rather than introducing changes.

Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls.

The pilot runs alongside a national consultation on children’s digital well-being, which has already received nearly 30,000 responses. Government officials and academic experts will analyse data gathered from both initiatives to guide future policy decisions.

The programme aims to ensure that any regulatory steps are evidence-based, reflecting real-life experiences rather than theoretical assumptions about digital behaviour.

Alongside the government trials, an independent scientific study funded by the Wellcome Trust will examine the effects of reduced social media use among adolescents.

Led by researchers from the University of Cambridge and the Bradford Institute for Health Research, the study will involve around 4,000 students aged 12 to 15.

Findings are expected to provide deeper insight into how social media influences anxiety, sleep, relationships, and overall well-being, supporting policymakers in shaping future online safety measures instead of relying on limited evidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ICO and Ofcom issue guidance on age assurance and online safety

The Information Commissioner’s Office and Ofcom have issued a joint statement outlining how age assurance measures should align with online safety and data protection requirements.

The guidance focuses on protecting children from harm online rather than treating safety and privacy as separate obligations, reflecting closer coordination between the two regulators.

The statement is directed at digital services likely to be accessed by children and falling within the scope of the Online Safety Act and UK data protection laws.

It provides a practical overview of existing policies, helping organisations understand how to meet both regulatory frameworks while implementing age assurance technologies.

Rather than introducing new rules, the guidance clarifies how current requirements interact in practice. It highlights the importance of designing systems that both verify users’ ages and safeguard personal data, ensuring that safety measures do not undermine privacy protections.

The approach encourages organisations to integrate compliance into service design instead of addressing obligations separately.

By aligning regulatory expectations, the ICO and Ofcom aim to support organisations in delivering safer online environments for children while maintaining strong data protection standards.

The joint effort signals a broader move towards coordinated digital regulation, where safety and privacy are addressed together to reflect the complexities of modern online services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK Digital Inclusion Action Plan delivers devices, funding and online access support

The UK Department for Science, Innovation and Technology said more than one million people have been helped online through its Digital Inclusion Action Plan. The update was published in a one-year progress report on the government strategy.

The department said over 22,000 devices were donated through government schemes and industry partnerships. It also confirmed £11.9 million in funding that supported more than 80 local digital inclusion programmes.

According to the report, the plan aims to improve access to devices, connectivity and digital skills. The government said all commitments in the strategy have either been delivered or remain on track.

The department added that partnerships with industry and charities helped expand access to broadband and mobile services, including more affordable connectivity. The programme also supported training and local initiatives to improve digital participation.

Secretary of State for Science, Innovation and Technology, Liz Kendall, said the programme is intended to expand access to online services, employment opportunities and communication tools. She added that the government plans to continue developing the initiative.

The department also confirmed it will take over the Essential Digital Skills Framework from Lloyds Banking Group and update it to reflect current needs, including online safety and the growing role of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK’s CMA sets AI consumer law guidance

The UK Competition and Markets Authority has issued guidance warning firms that AI agents must follow the same consumer protection laws as human staff. Businesses remain legally responsible for AI actions, even when third parties supply tools.

Companies are advised to be transparent when customers interact with AI systems, particularly where people might assume a human response. Clear labelling and honest explanations of capabilities are considered essential for informed consumer decisions.

Proper training and testing of AI tools should ensure respect for refund rights, contract terms and accurate product information. Human oversight is recommended to prevent errors, misleading claims and so-called hallucinated outputs.

Rapid fixes are expected when problems emerge, especially for services affecting large audiences or vulnerable users. In the UK, breaches of consumer law can trigger enforcement action, heavy fines and mandatory compensation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK pushes platforms to tackle AI abuse and online violence against women

The Department for Science, Innovation and Technology has called on online service providers to strengthen measures against digital harms targeting women and girls, as part of a commitment to halve such violence within a decade.

In a letter published on 23 March 2026, Liz Kendall outlined expectations for platforms operating under the Online Safety Act.

The letter states that the government has strengthened criminal law and regulatory frameworks, including new offences related to harmful pornographic practices and intimate image abuse.

It confirms that sharing or threatening to share sexually explicit deepfakes without consent constitutes a criminal offence, while the non-consensual creation of such content has also been criminalised and is being designated as a priority offence under the Act.

Further measures include amendments to the Crime and Policing Bill to ban so-called ‘nudification’ tools and extend illegal content duties to AI chatbots.

The government is also introducing a requirement for platforms to remove non-consensual intimate images within 48 hours, with a focus on reducing repeated reporting burdens for victims.

The Secretary of State urged companies to implement recommendations from Ofcom’s guidance on online safety for women and girls, including risk assessments, stronger privacy settings, and limits on the visibility of harmful content.

Platforms are expected to comply by the end of the year, with progress to be monitored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI added to St Helens council strategic risk register

In the UK, St Helens Council has added AI and digital disruption to its strategic risk register as it seeks to strengthen governance and oversight. The change reflects growing concern about how emerging technologies could affect operations and services.

The updated register, now featuring 12 strategic risks, was presented ahead of the audit and governance committee meeting. Council officials said effective risk management is vital to meeting the council’s objectives and mitigating potential challenges.

AI and digital disruption were cited for the first time alongside risks linked to extreme weather and community cohesion. The council noted that ethical, data privacy and workforce confidence issues are among the challenges associated with integrating AI into public services.

Leaders said other risks, including cybersecurity threats and budget pressures, remain under review. The move comes as local authorities across the UK weigh the impacts of new technologies on service delivery and strategic planning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Essex strawberry-picking robot wins national award for industry collaboration

A University of Essex robotics project designed to automate crop harvesting has won the Best Research Project (Industry Collaboration) award at the 2026 UKRI AI & Robotics Research Awards.

The Sustainable smArt Robotic Agriculture (SARA) project was developed in collaboration with industry partners Wilkin and Sons, JEPCO, and GyroPlant, and addresses three interconnected challenges: food security, labour shortages, and sustainability.

Central to the project is the development of low-cost AgriRobotics systems capable of adapting to different crops, tasks, and growing environments, automating repetitive, labour-intensive farm work whilst reducing wastage, carbon footprint, and dependence on increasingly scarce agricultural labour.

The team delivered a live strawberry-harvesting demonstration at the Innovate UK Robotics Industry Showcase in March, an event aligned with UKRI’s announcement of a £52 million competition for Robotics Adoption Hubs.

Building on the project’s success, lead researchers Professor Klaus McDonald-Maier and Dr Vishwanathan Mohan have launched a spinout company, Versatile RobotX, to accelerate the commercialisation of the technology and extend its global impact.

The SARA project previously won the Best Demonstration category at the same awards in 2025.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI hiring tools are rejecting graduates before a human ever reads their CV

AI is increasingly taking over the early stages of hiring, with 89% of UK recruiters planning to use it more in the recruitment process this year.

For graduates like Bhuvana Chilukuri, a third-year business student at Queen Mary University of London who has applied for over 100 roles without a single offer, this means facing automatic CV screening and AI video interviews, with some rejections arriving in under two minutes.

The scale of the problem is significant on both sides. Denis Machuel, CEO of Adecco, one of the world’s largest recruitment specialists, noted that candidates now need to send an average of 200 applications to receive a single job offer.

Meanwhile, law firm Mishcon de Reya received 5,000 applications for just 35 roles in its last hiring round, a volume driven in part by candidates using AI to write and mass-submit applications, prompting employers to deploy AI to filter them out.

Supporters of AI hiring tools argue they can reduce human bias and deliver more consistent decisions. But critics warn the process strips candidates of their personality and humanity, with applicants describing feeling ‘robotic’ and ‘monotone’ while recording answers into a screen with no human interaction.

Machuel acknowledged the tension, calling for AI and human judgement to be combined at the right moments in the process, arguing that balance is the only way to break what he described as a growing ‘arms race.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK firms struggle to turn AI adoption into measurable returns

AI adoption is accelerating across UK businesses, with 78% now using the technology in some capacity, rising to 85% among mid-sized organisations. A further 14% are exploring or planning implementation by 2026, reflecting the continued momentum behind AI adoption.

Despite widespread use, tangible results remain limited. Just 31% of UK businesses report a positive return on investment, while 18% say their AI initiatives have failed to deliver expected benefits. Another 16% indicate it is still too early to assess outcomes, highlighting the long lead times often associated with AI deployments.

A major issue lies in defining success. Only 41% of organisations using AI say they have a clear understanding of what success looks like, suggesting that adoption often outpaces strategic planning. Even among mid-sized firms, the most active adopters, fewer than half can articulate measurable goals.

The findings suggest that rapid uptake has outpaced organisational readiness. Many businesses are deploying AI tools without defining how they fit into workflows, what decisions they are meant to support, or whether the goal is efficiency, cost reduction, or growth.

For AI adoption to translate into real business value, companies will need stronger governance, clearer objectives, and measurable success criteria. Without that foundation, AI risks remaining an expensive experiment rather than a driver of long-term transformation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!