Turing Institute urges stronger AI research security

The Alan Turing Institute has warned that urgent action is needed to protect the UK’s AI research from espionage, intellectual property theft and risky international collaborations.

Its Centre for Emerging Technology and Security (CETaS) has published a report calling for a culture shift across academia to better recognise and mitigate these risks.

The report highlights inconsistencies in how security risks are understood within universities and a lack of incentives for researchers to follow government guidelines. Sensitive data, the dual-use potential of AI, and the risk of reverse engineering make the field particularly vulnerable to foreign interference.

Lead author Megan Hughes stressed the need for a coordinated response, urging government and academia to find the right balance between academic freedom and security.

The report outlines 13 recommendations, including expanding support for academic due diligence and issuing clearer guidance on high-risk international partnerships.

Further proposals call for compulsory research security training, better threat communication from national agencies, and standardised risk assessments before publishing AI research.

The aim is to build a more resilient research ecosystem as global interest in UK-led AI innovation continues to grow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools are not enough without basic cybersecurity

At London Tech Week, Darktrace and UK officials warned that many firms are over-relying on AI tools while failing to implement basic cybersecurity practices.

Despite the hype around AI, essential measures like user access control and system segmentation remain missing in many organisations.

Cybercriminals are already exploiting AI to automate phishing and accelerate intrusions in the UK, while outdated infrastructure and short-term thinking leave companies vulnerable.

Boards often struggle to assess AI tools properly, buying into trends rather than addressing real threats.

Experts stressed that AI is not a silver bullet and must be used alongside human expertise and solid security foundations.

Domain-specific AI models, built with transparency and interpretability, are needed to avoid the dangers of overconfidence and misapplication in high-risk areas.

AI startup faces lawsuit from Disney and Universal

Two of Hollywood’s most powerful studios, Disney and Universal, have launched a copyright infringement lawsuit against the AI firm Midjourney, accusing it of illegally replicating iconic characters.

The studios claim the San Francisco-based company copied their creative works without permission, describing it as a ‘bottomless pit of plagiarism’.

Characters such as Darth Vader, Elsa, and the Minions were cited in the 143-page complaint, which alleges Midjourney used these images to train its AI system and generate similar content.

Disney and Universal argue that the AI firm failed to invest in the creative process, yet profited heavily from the output — reportedly earning US$300 million in paid subscriptions last year.

Despite early attempts by the studios to raise concerns and propose safeguards already adopted by other AI developers, Midjourney allegedly ignored them and pressed ahead with further product releases. The company, which calls itself a small, self-funded team of 11, has declined to comment on the lawsuit directly but insists it has a long future ahead.

Disney’s legal chief, Horacio Gutierrez, stressed the importance of protecting creative works that result from decades of investment. While supporting AI as a tool for innovation, he maintained that ‘piracy is piracy’, regardless of whether humans or machines carry it out.

The studios are seeking damages and a court order to stop the AI firm from continuing its alleged copyright violations.

Wikipedia halts AI summaries test after backlash

Wikipedia has paused a controversial trial of AI-generated article summaries following intense backlash from its community of volunteer editors.

The Wikimedia Foundation had planned a two-week opt-in test for mobile users using summaries produced by Aya, an open-weight AI model developed by Cohere.

However, the reaction from editors was swift and overwhelmingly negative. The discussion page became flooded with objections, with contributors arguing that such summaries risked undermining the site’s reputation for neutrality and accuracy.

Some expressed concerns that inserting AI content would override Wikipedia’s long-standing collaborative approach by effectively installing a single, unverifiable voice atop articles.

Editors warned that AI-generated summaries lacked proper sourcing and could compromise the site’s credibility. Recent AI blunders by other tech giants, including Google’s glue-on-pizza mishap and Apple’s false death alert, were cited as cautionary examples of reputational risk.

For many, the possibility of similar errors appearing on Wikipedia was unacceptable.

Marshall Miller of the Wikimedia Foundation acknowledged the misstep in communication and confirmed the project’s suspension.

While the Foundation remains interested in exploring AI to improve accessibility, it has committed to ensuring any future implementation involves direct participation from the Wikipedia community.

UK government backs AI to help teachers and reduce admin

The UK government has unveiled new guidance for schools that promotes the use of AI to reduce teacher workloads and increase face-to-face time with pupils.

The Department for Education (DfE) says AI could take over time-consuming administrative tasks such as lesson planning, report writing, and email drafting—allowing educators to focus more on classroom teaching.

The guidance, aimed at schools and colleges in the UK, highlights how AI can assist with formative assessments like quizzes and low-stakes feedback, while stressing that teachers must verify outputs for accuracy and data safety.

It also recommends using only school-approved tools and limits AI use to tasks that support rather than replace teaching expertise.

Education unions welcomed the move but said investment is needed to make it work. Leaders from the NAHT and ASCL praised AI’s potential to ease pressure on staff and help address recruitment issues, but warned that schools require proper infrastructure and training.

The government has pledged £1 million to support AI tool development for marking and feedback.

Education Secretary Bridget Phillipson said the plan will free teachers to deliver more personalised support, adding: ‘We’re putting cutting-edge AI tools into the hands of our brilliant teachers to enhance how our children learn and develop.’

INTERPOL cracks down on global cybercrime networks

Over 20,000 malicious IP addresses and domains linked to data-stealing malware have been taken down during Operation Secure, a coordinated cybercrime crackdown led by INTERPOL between January and April 2025.

Law enforcement agencies from 26 countries worked together to locate rogue servers and dismantle criminal networks instead of tackling threats in isolation.

The operation, supported by cybersecurity firms including Group-IB, Kaspersky and Trend Micro, led to the removal of nearly 80 per cent of the identified malicious infrastructure. Authorities seized 41 servers, confiscated over 100GB of stolen data and arrested 32 suspects.

More than 216,000 individuals and organisations were alerted, helping them act quickly by changing passwords, freezing accounts or blocking unauthorised access.

Vietnamese police arrested 18 people, including a group leader found with cash, SIM cards and business records linked to fraudulent schemes. Sri Lankan and Nauruan authorities carried out home raids, arresting 14 suspects and identifying 40 victims.

In Hong Kong, police traced 117 command-and-control servers across 89 internet providers. INTERPOL hailed the operation as proof of the impact of cross-border cooperation in dismantling cybercriminal infrastructure.

Europe’s new digital diplomacy: From principles to power

In a decisive geopolitical shift, the European Union has unveiled its 2025 International Digital Strategy, signalling a turn from a values-first diplomacy to a focus on security and competitiveness. As Jovan Kurbalija explains in his blog post titled ‘EU Digital Diplomacy: Geopolitical shift from focus on values to economic security’, the EU is no longer simply exporting its regulatory ideals — often referred to as the ‘Brussels effect’ — but is now positioning digital technology as central to its economic and geopolitical resilience.

The strategy places special emphasis on building secure digital infrastructure, such as submarine cables and AI factories, and deepening digital partnerships across continents. Unlike the 2023 Council Conclusions, which promoted a human-centric, rights-based approach to digital transformation, the 2025 Strategy prioritises tech sovereignty, resilient supply chains, and strategic defence-linked innovations.

Human rights, privacy, and inclusivity still appear, but mainly in supporting roles to broader goals of power and resilience. The EU’s new path reflects a realpolitik understanding that its survival in the global tech race depends on alliances, capability-building, and a nimble response to the rapid evolution of AI and cyber threats.

In practice, this means more digital engagement with key partners like India, Japan, and South Korea, and coordinated global investments through the ‘Tech Team Europe’ initiative. The strategy introduces new structures like a Digital Partnership Network while downplaying once-central instruments like the AI Act.

With China largely sidelined and relations with the US in ‘wait and see’ mode, the EU seems intent on building an independent but interconnected digital path, reaching out to the Global South with a pragmatic offer of secure digital infrastructure and public-private investments.

Why does it matter?

Yet, major questions linger: how will these ambitious plans be implemented, who will lead them, and can the EU maintain coherence between its internal democratic values and this outward-facing strategic assertiveness? As Kurbalija notes, the success of this new digital doctrine will hinge on whether the EU can fuse its soft power legacy with the hard power realities of a turbulent tech-driven world.

Massive leak exposes data of millions in China

Cybersecurity researchers have uncovered a brief but significant leak of over 600 gigabytes of data, exposing information on millions of Chinese citizens.

The haul, containing WeChat, Alipay, banking, and residential records, appears to come from a centralised system, possibly built for large-scale surveillance rather than assembled through a random data breach.

According to research from Cybernews and cybersecurity consultant Bob Diachenko, the data was likely used to build individuals’ detailed behavioural, social and economic profiles.

They warned the information could be exploited for phishing, fraud, blackmail or even disinformation campaigns. Although only 16 datasets were reviewed before the database vanished, they pointed to a highly organised and purposeful collection effort.

The source of the leak remains unknown, but the scale and nature of the data suggest it may involve government-linked or state-backed entities rather than lone hackers.

The exposed information could allow malicious actors to track residence locations, financial activity and personal identifiers, placing millions of people at risk.

Digital Social Security cards coming this summer

The US Social Security Administration is launching digital access to Social Security numbers in the summer of 2025 through its ‘My Social Security’ portal. The initiative aims to improve convenience, reduce physical card replacement delays, and protect against identity theft.

The digital rollout responds to the challenges of outdated paper cards, rising fraud risks, and growing demand for remote access to US government services.

While it promises faster and more secure access, experts urge users to bolster account protection through strong passwords, two-factor authentication, and avoidance of public Wi-Fi when accessing sensitive data. Cybersecurity experts also recommend VPNs, antivirus software, and identity monitoring services to guard against phishing scams and data breaches.

Users should regularly check their credit reports and SSA records and consider requesting an IRS PIN to prevent tax-related fraud. The SSA says this move will make Social Security more efficient without compromising safety.

Apple study finds AI fails on complex tasks

A recent study by Apple researchers has exposed significant limitations in the capabilities of advanced AI systems known as large reasoning models (LRMs).

These models, designed to solve complex problems through step-by-step thinking, experienced what the paper called a ‘complete accuracy collapse’ when faced with high-complexity tasks. Even when given an algorithm that should have ensured success, the models failed to deliver correct solutions.

Apple’s team suggested this may point to a fundamental limit in how current AI models scale up to general reasoning.

The study found that LRMs performed well with low- and medium-difficulty tasks but deteriorated sharply as the complexity increased.

Rather than increasing their effort as problems became harder, the models paradoxically reduced their reasoning, leading to complete failure.

Experts, including AI researcher Gary Marcus and Andrew Rogoyski of the University of Surrey in the UK, called the findings alarming and indicative of a potential dead end in current AI development.

The study tested systems from OpenAI, Google, Anthropic and DeepSeek, raising serious questions about how close the industry is to achieving AGI.
