Turing Institute urges stronger AI research security

The Alan Turing Institute has warned that urgent action is needed to protect the UK’s AI research from espionage, intellectual property theft and risky international collaborations.

Its Centre for Emerging Technology and Security (CETaS) has published a report calling for a culture shift across academia to better recognise and mitigate these risks.

The report highlights inconsistencies in how security risks are understood within universities and a lack of incentives for researchers to follow government guidelines. Sensitive data, the dual-use potential of AI, and the risk of reverse engineering make the field particularly vulnerable to foreign interference.

Lead author Megan Hughes stressed the need for a coordinated response, urging government and academia to find the right balance between academic freedom and security.

The report outlines 13 recommendations, including expanding support for academic due diligence and issuing clearer guidance on high-risk international partnerships.

Further proposals call for compulsory research security training, better threat communication from national agencies, and standardised risk assessments before publishing AI research.

The aim is to build a more resilient research ecosystem as global interest in UK-led AI innovation continues to grow.

AI tools are not enough without basic cybersecurity

At London Tech Week, Darktrace and UK officials warned that many firms are over-relying on AI tools while failing to implement basic cybersecurity practices.

Despite the hype around AI, essential measures like user access control and system segmentation remain missing in many organisations.

Cybercriminals are already exploiting AI to automate phishing and accelerate intrusions in the UK, while outdated infrastructure and short-term thinking leave companies vulnerable.

Boards often struggle to assess AI tools properly, buying into trends rather than addressing real threats.

Experts stressed that AI is not a silver bullet and must be used alongside human expertise and solid security foundations.

Domain-specific AI models, built with transparency and interpretability, are needed to avoid the dangers of overconfidence and misapplication in high-risk areas.

UK government backs AI to help teachers and reduce admin

The UK government has unveiled new guidance for schools that promotes the use of AI to reduce teacher workloads and increase face-to-face time with pupils.

The Department for Education (DfE) says AI could take over time-consuming administrative tasks such as lesson planning, report writing, and email drafting—allowing educators to focus more on classroom teaching.

The guidance, aimed at schools and colleges in the UK, highlights how AI can assist with formative assessments like quizzes and low-stakes feedback, while stressing that teachers must verify outputs for accuracy and data safety.

It also recommends using only school-approved tools and limiting AI use to tasks that support rather than replace teaching expertise.

Education unions welcomed the move but said investment is needed to make it work. Leaders from the NAHT and ASCL praised AI’s potential to ease pressure on staff and help address recruitment issues, but warned that schools require proper infrastructure and training.

The government has pledged £1 million to support AI tool development for marking and feedback.

Education Secretary Bridget Phillipson said the plan will free teachers to deliver more personalised support, adding: ‘We’re putting cutting-edge AI tools into the hands of our brilliant teachers to enhance how our children learn and develop.’

Apple study finds AI fails on complex tasks

A recent study by Apple researchers has exposed significant limitations in the capabilities of advanced AI systems known as large reasoning models (LRMs).

These models, designed to solve complex problems through step-by-step thinking, experienced what the paper called a ‘complete accuracy collapse’ when faced with high-complexity tasks. Even when given an algorithm that should have ensured success, the models failed to deliver correct solutions.

Apple’s team suggested this may point to a fundamental limit in how current AI models scale up to general reasoning.

The study found that LRMs performed well with low- and medium-difficulty tasks but deteriorated sharply as the complexity increased.

Rather than increasing their effort as problems became harder, the models paradoxically reduced their reasoning effort, leading to complete failure.
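
As a rough sketch of how such an evaluation can work, consider a puzzle whose difficulty scales predictably and whose optimal solution is known algorithmically; a Tower-of-Hanoi-style task is assumed here purely for illustration, since the study’s task suite is not detailed above. A minimal Python harness:

```python
# Illustrative sketch: a task whose difficulty scales predictably (number of
# disks) and whose optimal solution is known, so a model's step-by-step
# output can be verified exactly. The puzzle choice is an assumption made
# for illustration, not a detail taken from the article above.

def hanoi(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list[tuple[str, str]]:
    """Return the optimal move sequence for n disks (2**n - 1 moves)."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi(n - 1, aux, src, dst))

def accuracy(model_moves: list[tuple[str, str]], n: int) -> float:
    """Fraction of the reference solution matched before the first wrong move."""
    reference = hanoi(n)
    correct = 0
    for got, want in zip(model_moves, reference):
        if got != want:
            break
        correct += 1
    return correct / len(reference)

# Difficulty grows exponentially with n (1, 3, 7, ..., 1023 moves),
# so accuracy can be tracked level by level to look for a collapse.
for n in range(1, 11):
    print(n, len(hanoi(n)))
```

Because the reference solution is exact, accuracy can be measured at each complexity level, which is how a sharp deterioration of the kind described becomes visible.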

Experts, including AI researcher Gary Marcus and Andrew Rogoyski of the University of Surrey in the UK, called the findings alarming and indicative of a potential dead end in current AI development.

The study tested systems from OpenAI, Google, Anthropic and DeepSeek, raising serious questions about how close the industry is to achieving AGI.

Oxford physicists push qubit precision to new heights

Oxford University physicists have achieved a world-first in quantum computing by setting a new record for single-qubit operation accuracy.

Using a trapped calcium ion as the qubit, the researchers controlled its state using electronic microwave signals instead of lasers.

Their experiment produced an error rate of just 0.000015 percent, or one mistake in 6.7 million operations, nearly ten times better than the previous benchmark set by the same team. The breakthrough brings quantum computers a step closer to becoming viable tools.
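
The two figures quoted are mutually consistent, as a quick conversion shows:

\[
0.000015\% = 1.5\times10^{-7},
\qquad
\frac{1}{1.5\times10^{-7}} \approx 6.7\times10^{6},
\]

i.e. roughly one error per 6.7 million operations.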

The experiment was conducted at room temperature and without magnetic shielding, making the approach more stable and cost-effective and simplifying future hardware requirements.

The precision reduces the number of qubits needed for error correction, making future quantum machines potentially smaller and faster.
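
One standard way to see why, assuming surface-code-style error correction (an assumption for illustration; the scheme is not specified above), is that the logical error rate falls with code distance \(d\) roughly as

\[
p_{\text{logical}} \sim \left(\frac{p}{p_{\text{th}}}\right)^{(d+1)/2},
\]

where \(p\) is the physical error rate and \(p_{\text{th}}\) the threshold. A distance-\(d\) surface code uses on the order of \(d^2\) physical qubits per logical qubit, so a lower \(p\) reaches a target logical error rate at a smaller \(d\), and therefore with fewer qubits.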

Despite the milestone, the researchers emphasised the need to improve two-qubit gate fidelity, where error rates remain significantly higher.

The project is part of the UK Quantum Computing and Simulation Hub, with wider support from the National Quantum Technologies Programme.

Teachers get AI support for marking and admin

According to new government guidance, teachers in England are now officially encouraged to use AI to reduce administrative tasks. The Department for Education has released training materials that support the use of AI for low-stakes marking and routine parent communication.

The guidance allows AI-generated letters, such as those informing parents about minor issues like head lice outbreaks, and suggests using the technology for quizzes or homework marking.

While the move aims to cut workloads and improve classroom focus, schools are also advised to implement clear policies on appropriate use and ensure manual checks remain in place.

Experts have welcomed the guidance as a step forward but noted concerns about data privacy, budget constraints, and potential misuse.

The guidance comes as UK nations explore AI in education, with Northern Ireland commissioning a study on its impact and Scotland and Wales also advocating its responsible use.

UK regulator probes 4chan over online safety rules

The UK communications regulator Ofcom has launched an investigation into the controversial message board 4chan for potentially breaching new online safety laws. Under the Online Safety Act, platforms must assess and manage risks related to illegal content affecting UK users.

Ofcom stated that it requested 4chan’s risk assessment in April but received no response, prompting a formal inquiry into whether the site failed to meet its duty to protect users. The nature of the illegal content being scrutinised has not been disclosed.

The regulator emphasised that it has the authority to fine companies up to £18 million or 10% of their global revenue, whichever is higher. The move marks a significant test of the UK’s stricter regulatory powers to hold online services accountable.
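
As a minimal sketch of the stated penalty rule (the revenue figure below is hypothetical):

```python
def max_fine(global_revenue_gbp: float) -> float:
    """Ofcom's stated ceiling: £18 million or 10% of global revenue, whichever is higher."""
    return max(18_000_000, 0.10 * global_revenue_gbp)

# Hypothetical example: a platform with £500m in global revenue.
print(f"£{max_fine(500_000_000):,.0f}")  # £50,000,000 (the 10% arm dominates)
```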

The watchdog’s concerns stem from user anonymity on 4chan, which has historically made the platform a hotspot for controversial, offensive, and often extreme content. A recent cyberattack further complicated matters, leaving parts of the website offline for over a week.

Alongside 4chan, Ofcom is also investigating the pornographic site First Time Videos for failing to demonstrate that robust age verification systems are in place to block access by under-18s. This is part of a broader crackdown, as platforms with age-restricted content face a July deadline to implement effective safeguards, which may include facial age-estimation technology.

Additionally, seven lesser-known file-sharing services, including Krakenfiles and Yolobit, are being scrutinised for potentially hosting child sexual abuse material. Like 4chan, these platforms reportedly failed to respond to Ofcom’s information requests. The regulator’s growing list of investigations signals a tougher era for digital platforms operating in the UK.

UK judges issue warning on unchecked AI use by lawyers

A senior UK judge has warned that lawyers may face prosecution if they continue citing fake legal cases generated by AI without verifying their accuracy.

High Court justice Victoria Sharp called the misuse of AI a threat to justice and public trust, after lawyers in two recent cases relied on false material created by generative tools.

In one £90 million lawsuit involving Qatar National Bank, a lawyer submitted 18 cases that did not exist. The client later admitted to supplying the false information, but Justice Sharp criticised the lawyer for depending on the client’s research instead of conducting proper legal checks.

In another case, five fabricated cases were used in a housing claim against the London Borough of Haringey. The barrister denied using AI but failed to provide a clear explanation.

Both incidents have been referred to professional regulators. Sharp warned that submitting false information could amount to contempt of court or, in severe cases, perverting the course of justice — an offence that can lead to life imprisonment.

While recognising AI as a useful legal tool, Sharp stressed the need for oversight and regulation. She said AI’s risks must be managed with professional discipline if public confidence in the legal system is to be preserved.

UK teams with tech giants on AI training

The UK government is launching a nationwide AI skills initiative aimed at both workers and schoolchildren, with Prime Minister Keir Starmer announcing partnerships with major tech companies including Google, Microsoft and Amazon.

The £187 million TechFirst programme will provide AI education to one million secondary students and train 7.5 million workers over the next five years.

Rather than keeping such tools limited to specialists, the government plans to make AI training accessible across classrooms and businesses. Companies involved will make learning materials freely available to boost digital skills and productivity, particularly in using chatbots and large language models.

Starmer said the scheme is designed to empower the next generation to shape AI’s future instead of being shaped by it. He called it the start of a new era of opportunity and growth, as the UK aims to strengthen its global leadership in AI.

The initiative arrives as the country’s AI sector, currently worth £72 billion, is projected to grow to more than £800 billion by 2035.
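
Taken together, those figures imply a compound annual growth rate of roughly 27 percent, assuming the £72 billion valuation is current and the growth runs over a ten-year horizon (both assumptions made for illustration, not stated in the announcement):

```python
# Implied compound annual growth rate (CAGR) from £72bn to £800bn over an
# assumed ten-year horizon; the horizon is an assumption, not from the article.
current, target, years = 72e9, 800e9, 10
cagr = (target / current) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~27.2%
```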

The government also signed two agreements with Nvidia to support a nationwide AI talent pipeline, reinforcing efforts to expand both the workforce and innovation in the sector.

Nvidia and FCA open AI sandbox for UK fintechs

Financial firms across the UK will soon be able to experiment with AI in a new regulatory sandbox, launched by the Financial Conduct Authority (FCA) in partnership with Nvidia.

Known as the Supercharged Sandbox, it offers a secure testing ground for firms wanting to explore AI tools without needing advanced computing resources of their own.

Set to begin in October, the initiative is open to any financial services company testing AI-driven ideas. Firms will have access to Nvidia’s accelerated computing platform and tailored AI software, helping them work with complex data, improve automation, and enhance risk management in a controlled setting.

The FCA said the sandbox is designed to support firms lacking the in-house capacity to test new technology.

It aims to provide not only computing power but also regulatory guidance and access to better datasets, creating an environment where innovation can flourish while remaining compliant with rules.

The move forms part of a wider push by the UK government to foster economic growth through innovation. Finance minister Rachel Reeves has urged regulators to clear away obstacles to growth and praised the FCA and Bank of England for acting on her call to cut red tape.
