Pornhub will begin blocking access for new UK users from 2 February 2026, allowing entry only to people who had already created an account and completed age checks before that date, the company said, framing the move as a protest against how the UK’s Online Safety Act is being enforced.
The UK regime, overseen by Ofcom, requires porn services accessible in Britain to deploy ‘highly effective’ age assurance measures, not simple click-through age gates. Ofcom says traffic to pornography sites has fallen by about a third since the age-check deadline of 25 July 2025, and it has pursued investigations into dozens of services as enforcement ramps up.
Privacy and security concerns become sharper when adult platforms are turned into identity checkpoints. In December 2025, reporting linked a large leak of Pornhub premium-user analytics data, including emails and viewing/search histories, to a breach involving a third-party analytics provider, underscoring how sensitive such datasets can be when they are collected or retained.
Government and regulator messaging emphasises child protection and the Online Safety Act’s enforcement teeth, including significant penalties and, in extreme cases, access restrictions, while companies like Aylo argue that inconsistent enforcement simply pushes demand to riskier corners of the internet and fuels workarounds like VPNs.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The UK competition watchdog has proposed new rules that would force Google to give publishers greater control over how their content is used in search and AI tools.
The Competition and Markets Authority (CMA) plans to require opt-outs for AI-generated summaries and model training, marking the first major intervention under Britain’s new digital markets regime.
Publishers argue that generative AI threatens traffic and revenue by answering queries directly instead of sending users to the original sources.
The CMA proposal would also require clearer attribution of publisher content in AI results and stronger transparency around search rankings, including AI Overviews and conversational search features.
Additional measures under consultation include search engine choice screens on Android and Chrome, alongside stricter data portability obligations. The regulator says tailored obligations would give businesses and users more choice while supporting innovation in digital markets.
Google has warned that overly rigid controls could damage the user experience, describing the relationship between AI and search as complex.
The consultation runs until late February, with the outcome expected to shape how AI-powered search operates in the UK.
A UK-based AI and coding educator, The School of Coding and AI, has announced plans to open a £3 million campus in Dubai Media City, marking its expansion into the Middle East. The new site is scheduled to open in March, with student enrolment beginning in May, and will welcome learners from the UAE and international markets.
The expansion reflects the school’s ambition to become a global player in AI and computer science education, with the Gulf region identified as a key growth market. The move is supported by the UK Department for Business and Trade and aligns with wider efforts to strengthen UK–UAE cooperation in technology, education, and skills development.
The Dubai campus will offer flexible programmes in AI, computer science, and digital skills, aiming to upskill around 2,000 students and provide pathways to higher education. The initiative is intended to respond to growing regional demand for innovation-driven training and advanced digital capabilities.
The expansion of the School of Coding and AI underscores the growing importance of education exports and skills development in meeting global demand for AI talent.
The UK government plans to trial AI tutoring tools in secondary schools, with nationwide availability targeted for the end of 2027. The tools will be developed through a government-led tender, bringing together teachers, AI labs, and technology companies to co-create solutions aligned with classroom needs.
The initiative aims to provide personalised, one-to-one-style learning support, adapting to individual pupils’ needs and helping them catch up where they struggle. A central objective is to reduce educational inequality, with up to 450,000 disadvantaged pupils in years 9–11 potentially benefiting each year, particularly those eligible for free school meals.
AI tutoring tools are intended to complement, not replace, face-to-face teaching. Teachers will play a key role in co-designing, testing, and refining the tools, ensuring they support high-quality teaching, provide targeted help to struggling pupils, and stretch higher-performing students.
Safety and quality are positioned as non-negotiable. The tools will be rigorously tested to ensure they are safe, reliable, and aligned with the National Curriculum, and clear benchmarks will be developed for use in schools. Trials beginning later this year will generate evidence to guide wider rollout, alongside practical training for teachers and school staff to support confident and responsible use of AI.
UK authorities have unveiled a major policing reform programme that places AI and facial recognition at the centre of future law enforcement strategy. The plans include expanding the use of Live Facial Recognition and creating a national hub to scale AI tools across police forces.
The Home Office will fund 40 new facial recognition vans for town centres across England and Wales, significantly increasing real-time biometric surveillance capacity. Officials say the rollout responds to crime that increasingly involves digital activity.
The UK government will also invest £115 million over three years in a National Centre for AI in Policing, known as Police.AI. The centre will focus on speeding up investigations, reducing paperwork and improving crime detection.
New governance measures will regulate police use of facial recognition and introduce a public register of deployed AI systems. National data standards aim to strengthen accountability and coordination across forces.
Structural reforms include creating a National Police Service to tackle serious crime and terrorism. Predictive analytics, deepfake detection and digital forensics will play a larger operational role.
UK companies are reporting net job losses linked to AI adoption, with research showing a sharper impact than in other major economies. A Morgan Stanley survey found that firms using the technology for at least a year cut more roles than they created, with the effect most pronounced in the UK labour market.
The study covered sectors including retail, real estate, transport, healthcare equipment and automotive manufacturing, showing an average productivity increase of 11.5% among UK businesses. Comparable firms in the United States reported similar efficiency gains but continued to expand employment overall.
Researchers pointed to higher operating costs and tax pressures as factors amplifying the employment impact in Britain. Unemployment has reached a four-year high, while increases in the minimum wage and employer national insurance contributions have tightened hiring across industries.
Public concern over AI-driven displacement is also rising, with more than a quarter of UK workers fearing their roles could disappear within five years, according to recruitment firm Randstad. Younger workers expressed the highest anxiety, while older generations showed greater confidence in adapting.
Political leaders warn that unmanaged AI-driven change could disrupt labour markets. London mayor Sadiq Khan said the technology may cut many white-collar jobs, calling for action to create replacement roles.
Police in Japan have arrested a man accused of creating and selling non-consensual deepfake pornography using AI tools. The Tokyo Metropolitan Police Department said thousands of manipulated images of female celebrities were distributed through paid websites.
Investigators in Japan allege the suspect generated hundreds of thousands of images over two years using freely available generative AI software. Authorities say the content was promoted on social media before being sold via subscription platforms.
The arrest follows earlier cases in Japan and reflects growing concern among police worldwide. In South Korea, law enforcement has reported hundreds of arrests linked to deepfake sexual crimes, while cases have also emerged in the UK.
European agencies, including Europol, have also coordinated arrests tied to AI-generated abuse material. Law enforcement bodies say the spread of accessible AI tools is forcing rapid changes in forensic investigation and in the handling of digital evidence.
The transport sector is expected to be the first industry to face large-scale AI automation, particularly in frontline driving roles. Buses, taxis, trains, coaches and heavy goods vehicles are seen as especially vulnerable as autonomous technologies continue to mature.
Employers are increasingly drawn to automated vehicles because they can operate continuously, without the driving-time limits imposed on human workers. This makes automation economically appealing, especially in freight and logistics, where efficiency and round-the-clock operation are critical.
The shift could lead to the displacement of hundreds of thousands, or even millions, of transport workers. Concerns are growing over the lack of alternative job opportunities, as investment in reskilling across the UK has remained limited despite ongoing discussions about labour shortages.
Beyond employment, AI automation may have broader economic implications. Large-scale job losses would reduce tax revenues, potentially forcing governments to reconsider taxation policies, including taxing currently untaxed activities to offset falling revenue from employment income.
The UK government has launched the Software Security Ambassadors Scheme to promote stronger software security practices nationwide. The initiative is led by the Department for Science, Innovation and Technology and the National Cyber Security Centre.
Participating organisations commit to championing the new Software Security Code of Practice within their industries. Signatories agree to lead by example through secure development, procurement and advisory practices, while sharing lessons learned to strengthen national cyber resilience.
The scheme aims to improve transparency and risk management across UK digital supply chains. Software developers are encouraged to embed security throughout the whole lifecycle, while buyers are expected to incorporate security standards into procurement processes.
Officials say the approach supports the UK’s broader economic and security goals by reducing cyber risks and increasing trust in digital technologies. The government believes that better security practices will help UK businesses innovate safely and withstand cyber incidents.
A recent survey reported by AM-Online reveals that approximately 66 per cent of UK car buyers use artificial intelligence in some form as part of their vehicle research and buying process.
AI applications cited include chatbots for questions and comparisons, recommendation systems for model selection, and virtual advisors that help consumers weigh options based on preferences and budget.
Industry commentators suggest that this growing adoption reflects broader digital transformation trends in automotive retail, with dealerships and manufacturers increasingly deploying AI technologies to personalise sales experiences, streamline research and nurture leads.
The integration of AI tools is seen as boosting customer engagement and efficiency, but it also raises questions about privacy and data protection, transparency and the future role of human sales advisors as digital tools become more capable.