UK government backs AI to help teachers and reduce admin

The UK government has unveiled new guidance for schools that promotes the use of AI to reduce teacher workloads and increase face-to-face time with pupils.

The Department for Education (DfE) says AI could take over time-consuming administrative tasks such as lesson planning, report writing, and email drafting—allowing educators to focus more on classroom teaching.

The guidance, aimed at schools and colleges in England, highlights how AI can assist with formative assessments like quizzes and low-stakes feedback, while stressing that teachers must verify outputs for accuracy and data safety.

It also recommends using only school-approved tools and limiting AI use to tasks that support, rather than replace, teaching expertise.

Education unions welcomed the move but said investment is needed to make it work. Leaders from the NAHT and ASCL praised AI’s potential to ease pressure on staff and help address recruitment issues, but warned that schools require proper infrastructure and training.

The government has pledged £1 million to support AI tool development for marking and feedback.

Education Secretary Bridget Phillipson said the plan will free teachers to deliver more personalised support, adding: ‘We’re putting cutting-edge AI tools into the hands of our brilliant teachers to enhance how our children learn and develop.’

INTERPOL cracks down on global cybercrime networks

Over 20,000 malicious IP addresses and domains linked to data-stealing malware have been taken down during Operation Secure, a coordinated cybercrime crackdown led by INTERPOL between January and April 2025.

Rather than tackling threats in isolation, law enforcement agencies from 26 countries worked together to locate rogue servers and dismantle criminal networks.

The operation, supported by cybersecurity firms including Group-IB, Kaspersky and Trend Micro, led to the removal of nearly 80 per cent of the identified malicious infrastructure. Authorities seized 41 servers, confiscated over 100GB of stolen data and arrested 32 suspects.

More than 216,000 individuals and organisations were alerted, helping them act quickly by changing passwords, freezing accounts or blocking unauthorised access.

Vietnamese police arrested 18 people, including a group leader found with cash, SIM cards and business records linked to fraudulent schemes. Sri Lankan and Nauruan authorities carried out home raids, arresting 14 suspects and identifying 40 victims.

In Hong Kong, police traced 117 command-and-control servers across 89 internet providers. INTERPOL hailed the operation as proof of the impact of cross-border cooperation in dismantling cybercriminal infrastructure.

Europe’s new digital diplomacy: From principles to power

In a decisive geopolitical shift, the European Union has unveiled its 2025 International Digital Strategy, signalling a turn from values-first diplomacy to a focus on security and competitiveness. As Jovan Kurbalija explains in his blog post titled ‘EU Digital Diplomacy: Geopolitical shift from focus on values to economic security’, the EU is no longer simply exporting its regulatory ideals — often referred to as the ‘Brussels effect’ — but is now positioning digital technology as central to its economic and geopolitical resilience.

The strategy places special emphasis on building secure digital infrastructure, such as submarine cables and AI factories, and deepening digital partnerships across continents. Unlike the 2023 Council Conclusions, which promoted a human-centric, rights-based approach to digital transformation, the 2025 Strategy prioritises tech sovereignty, resilient supply chains, and strategic defence-linked innovations.

Human rights, privacy, and inclusivity still appear, but mainly in supporting roles to broader goals of power and resilience. The EU’s new path reflects a realpolitik understanding that its survival in the global tech race depends on alliances, capability-building, and a nimble response to the rapid evolution of AI and cyber threats.

In practice, this means more digital engagement with key partners like India, Japan, and South Korea and coordinated global investments through the ‘Tech Team Europe’ initiative. The strategy introduces new structures like a Digital Partnership Network while downplaying once-central instruments like the AI Act.

With China largely sidelined and relations with the US in ‘wait and see’ mode, the EU seems intent on building an independent but interconnected digital path, reaching out to the Global South with a pragmatic offer of secure digital infrastructure and public-private investments.

Why does it matter?

Yet, major questions linger: how will these ambitious plans be implemented, who will lead them, and can the EU maintain coherence between its internal democratic values and this outward-facing strategic assertiveness? As Kurbalija notes, the success of this new digital doctrine will hinge on whether the EU can fuse its soft power legacy with the hard power realities of a turbulent tech-driven world.

Massive leak exposes data of millions in China

Cybersecurity researchers have uncovered a brief but significant leak of over 600 gigabytes of data, exposing information on millions of Chinese citizens.

The haul, containing WeChat, Alipay, banking, and residential records, appears to come from a centralised system, suggesting large-scale surveillance rather than a random data breach.

According to research from Cybernews and cybersecurity consultant Bob Diachenko, the data was likely used to build individuals’ detailed behavioural, social and economic profiles.

They warned the information could be exploited for phishing, fraud, blackmail or even disinformation campaigns. Although only 16 datasets were reviewed before the database vanished, they indicated a highly organised and purposeful collection effort.

The source of the leak remains unknown, but the scale and nature of the data suggest it may involve government-linked or state-backed entities rather than lone hackers.

The exposed information could allow malicious actors to track residence locations, financial activity and personal identifiers, placing millions of people at risk.

Digital Social Security cards coming this summer

The US Social Security Administration is launching digital access to Social Security numbers in the summer of 2025 through its ‘My Social Security’ portal. The initiative aims to improve convenience, reduce physical card replacement delays, and protect against identity theft.

The digital rollout responds to the challenges of outdated paper cards, rising fraud risks, and growing demand for remote access to US government services. Cybersecurity experts also recommend using VPNs, antivirus software, and identity monitoring services to guard against phishing scams and data breaches.

While it promises faster and more secure access, experts urge users to bolster account protection through strong passwords, two-factor authentication, and avoidance of public Wi-Fi when accessing sensitive data.
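
For readers curious about the mechanics behind that advice, the sketch below shows how time-based one-time passwords (TOTP), the most common form of two-factor authentication, are generated and checked. It is a generic illustration using the pyotp library, not a description of how the ‘My Social Security’ portal implements its own login security.

```python
# Minimal TOTP sketch using the pyotp library (illustrative only; it does not
# reflect the SSA portal's actual implementation).
import pyotp

# At enrolment, the service generates a shared secret and shows it to the user,
# usually as a QR code scanned by an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app derives a short-lived 6-digit code from the secret
# and the current time.
code = totp.now()

# At login, the server recomputes the expected code and compares;
# valid_window=1 tolerates small clock drift between device and server.
print(totp.verify(code, valid_window=1))  # True
```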

Users should regularly check their credit reports and SSA records and consider requesting an IRS PIN to prevent tax-related fraud. The SSA says this move will make Social Security more efficient without compromising safety.

Apple study finds AI fails on complex tasks

A recent study by Apple researchers exposed significant limitations in the capabilities of advanced AI systems and large reasoning models (LRMs).

These models, designed to solve complex problems through step-by-step thinking, experienced what the paper called a ‘complete accuracy collapse’ when faced with high-complexity tasks. Even when given an algorithm that should have ensured success, the models failed to deliver correct solutions.

Apple’s team suggested the results may point to a fundamental limit in how current AI models scale up to general reasoning.

The study found that LRMs performed well with low- and medium-difficulty tasks but deteriorated sharply as the complexity increased.

Rather than increasing their effort as problems became harder, the models paradoxically reduced their reasoning, leading to complete failure.
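
The article does not name the benchmark tasks, but a Tower-of-Hanoi-style puzzle is a useful illustration of the setup being described: a short, well-known algorithm guarantees a correct solution, yet the number of steps it requires grows exponentially, so the reasoning a model must produce balloons as complexity rises. The sketch below is an assumed illustration, not code from the study.

```python
# Illustrative sketch (not taken from the Apple paper): the classic recursive
# Tower of Hanoi algorithm always succeeds, but the optimal solution needs
# 2**n - 1 moves, so the required reasoning explodes as discs are added.
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the optimal sequence of (from_peg, to_peg) moves for n discs."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)  # clear the way
    moves.append((source, target))              # move the largest disc
    hanoi(n - 1, spare, target, source, moves)  # re-stack on top of it
    return moves

for n in (3, 7, 12):
    print(f"{n} discs -> {len(hanoi(n))} moves")  # 7, 127, 4095 moves
```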

Experts, including AI researcher Gary Marcus and Andrew Rogoyski of the University of Surrey in the UK, called the findings alarming and indicative of a potential dead end in current AI development.

The study tested systems from OpenAI, Google, Anthropic and DeepSeek, raising serious questions about how close the industry is to achieving AGI.

Teachers get AI support for marking and admin

According to new government guidance, teachers in England are now officially encouraged to use AI to reduce administrative tasks. The Department for Education has released training materials that support the use of AI for low-stakes marking and routine parent communication.

The guidance allows AI-generated letters, such as those informing parents about minor issues like head lice outbreaks, and suggests using the technology for quizzes or homework marking.

While the move aims to cut workloads and improve classroom focus, schools are also advised to implement clear policies on appropriate use and ensure manual checks remain in place.

Experts have welcomed the guidance as a step forward but noted concerns about data privacy, budget constraints, and potential misuse.

The guidance comes as UK nations explore AI in education, with Northern Ireland commissioning a study on its impact and Scotland and Wales also advocating its responsible use.

UK regulator probes 4chan over online safety rules

The UK communications regulator Ofcom has launched an investigation into the controversial message board 4chan for potentially breaching new online safety laws. Under the Online Safety Act, platforms must assess and manage risks related to illegal content affecting UK users.

Ofcom stated that it requested 4chan’s risk assessment in April but received no response, prompting a formal inquiry into whether the site failed to meet its duty to protect users. The nature of the illegal content being scrutinised has not been disclosed.

The regulator emphasised that it has the authority to fine companies up to £18 million or 10% of their global revenue, whichever is higher. The move marks a significant test of the UK’s stricter regulatory powers to hold online services accountable.
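
As a quick illustration of that penalty ceiling (a sketch of the rule as stated above, not Ofcom’s own calculation), the maximum fine is simply the larger of the two figures:

```python
# Online Safety Act penalty ceiling as described above: the greater of
# GBP 18 million or 10% of a company's global revenue.
def max_fine_gbp(global_revenue_gbp: float) -> float:
    return max(18_000_000, 0.10 * global_revenue_gbp)

print(max_fine_gbp(50_000_000))     # 18,000,000 -- the flat floor applies
print(max_fine_gbp(1_000_000_000))  # 100,000,000 -- 10% of revenue applies
```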

The watchdog’s concerns stem from user anonymity on 4chan, which has historically made the platform a hotspot for controversial, offensive, and often extreme content. A recent cyberattack further complicated matters, knocking parts of the website offline for over a week.

Alongside 4chan, Ofcom is also investigating pornographic site First Time Videos for failing to prove robust age verification systems are in place to block access by under-18s. This is part of a broader crackdown as platforms with age-restricted content face a July deadline to implement effective safeguards, which may include facial age-estimation technology.

Additionally, seven lesser-known file-sharing services, including Krakenfiles and Yolobit, are being scrutinised for potentially hosting child sexual abuse material. Like 4chan, these platforms reportedly failed to respond to Ofcom’s information requests. The regulator’s growing list of investigations signals a tougher era for digital platforms operating in the UK.

NHS launches urgent blood donor appeal

The NHS is appealing for one million new blood donors during National Blood Week, following a significant cyberattack on London hospitals that severely impacted blood stocks.

This urgent plea comes after an analysis by NHS Blood and Transplant revealed a shortfall of 200,000 donors, with only two percent of the population currently sustaining the nation’s blood supply.

The ongoing shortage stems from a June 2024 cyberattack, linked to the Russian-based Qilin ransomware group, which crippled the networks of Synnovis, a major NHS lab partner.

This disruption affected crucial pathology services at hospitals including King’s College Hospital and Guy’s and St Thomas’ NHS Foundation Trust, leading to postponed operations and procedures. The incident, which the NHS declared a ‘critical incident’, prompted an Amber alert for severe blood shortages.

With blood stocks remaining low, a situation exacerbated by recent bank holidays, the NHS now faces a pressing need to prevent a ‘Red Alert’, which would signify demand far exceeding capacity and threaten public safety.

A particular emphasis is being placed on finding more O-negative and Ro donors, urging the public to come forward and help replenish vital supplies.

China’s AI tools disabled for gaokao exam

As millions of high school students across China began the rigorous ‘gaokao’ college entrance exam, the country’s leading tech companies took unprecedented action by disabling AI features on their popular platforms.

Apps from Tencent, ByteDance, and Moonshot AI temporarily blocked features such as photo recognition and real-time question answering. This move aimed to prevent students from using AI chatbots to cheat during the critical national examination, which largely dictates university admissions in China.

This year, approximately 13.4 million students are participating in the ‘gaokao,’ a multi-day test that serves as a pivotal determinant for social mobility, particularly for those from rural or lower-income backgrounds.

The immense pressure associated with the exam has historically fueled intense test preparation. However, screenshots circulating on Chinese social media app Rednote confirmed that AI chatbots like Tencent’s YuanBao, ByteDance’s Doubao, and Moonshot AI’s Kimi displayed messages indicating the temporary closure of exam-relevant features to ensure fairness.

China’s ‘gaokao’ exam highlights a balanced approach to AI: promoting AI education from a young age, with compulsory instruction in Beijing schools this autumn, while firmly asserting that the technology is for learning, not cheating. Regulators draw a clear line: AI should aid development but never compromise academic integrity.

This coordinated action by major tech firms reinforces the message that AI has no place in the examination hall, despite China’s broader push to cultivate an AI-literate generation.
