Turing Institute urges stronger AI research security

The Alan Turing Institute has warned that urgent action is needed to protect the UK’s AI research from espionage, intellectual property theft and risky international collaborations.

Its Centre for Emerging Technology and Security (CETaS) has published a report calling for a culture shift across academia to better recognise and mitigate these risks.

The report highlights inconsistencies in how security risks are understood within universities and a lack of incentives for researchers to follow government guidelines. Sensitive data, the dual-use potential of AI, and the risk of reverse engineering make the field particularly vulnerable to foreign interference.

Lead author Megan Hughes stressed the need for a coordinated response, urging government and academia to find the right balance between academic freedom and security.

The report outlines 13 recommendations, including expanding support for academic due diligence and issuing clearer guidance on high-risk international partnerships.

Further proposals call for compulsory research security training, better threat communication from national agencies, and standardised risk assessments before publishing AI research.

The aim is to build a more resilient research ecosystem as global interest in UK-led AI innovation continues to grow.

Meta launches AI to teach machines physical reasoning

Meta Platforms has unveiled V-JEPA 2, an open-source AI model designed to help machines understand and interact with the physical world more like humans do.

The technology allows AI agents, including delivery robots and autonomous vehicles, to observe object movement and predict how those objects may behave in response to actions.

The company explained that just as people intuitively understand that a ball tossed into the air will fall due to gravity, AI systems using V-JEPA 2 gain a similar ability to reason about cause and effect in the real world.

Trained using video data, the model recognises patterns in how humans and objects move and interact, helping machines learn to reach, grasp, and reposition items more naturally.
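
To make the idea concrete, here is a minimal, self-contained sketch (in PyTorch) of the planning pattern the article describes: encode observations into a latent space, predict how that latent state changes under a candidate action, and pick the action whose predicted outcome lands closest to a goal embedding. Everything here, the toy networks, dimensions, and function names, is an illustrative assumption; it is not Meta's V-JEPA 2 code or API.

```python
# Conceptual sketch of latent-space action planning, loosely inspired by the
# article's description of V-JEPA 2. All modules and shapes are illustrative.
import torch
import torch.nn as nn

EMBED_DIM = 64

class Encoder(nn.Module):
    """Maps a flattened video frame to a latent embedding."""
    def __init__(self, frame_dim: int = 3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim, 128), nn.ReLU(), nn.Linear(128, EMBED_DIM)
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.net(frame)

class Predictor(nn.Module):
    """Predicts the next latent state from the current state and an action."""
    def __init__(self, action_dim: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMBED_DIM + action_dim, 128), nn.ReLU(), nn.Linear(128, EMBED_DIM)
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def choose_action(encoder, predictor, frame, goal_frame, candidate_actions):
    """Pick the candidate action whose predicted next embedding is closest to the goal."""
    with torch.no_grad():
        state = encoder(frame)
        goal = encoder(goal_frame)
        distances = torch.stack(
            [torch.dist(predictor(state, a), goal) for a in candidate_actions]
        )
    return int(distances.argmin())

if __name__ == "__main__":
    enc, pred = Encoder(), Predictor()
    frame, goal = torch.randn(3 * 32 * 32), torch.randn(3 * 32 * 32)
    actions = [torch.randn(4) for _ in range(8)]
    print("best candidate action index:", choose_action(enc, pred, frame, goal, actions))
```

In V-JEPA 2 itself the encoder and predictor are large networks trained on video at scale; the toy modules above only mirror the interface.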

Meta described the tool as a step forward in building AI that can think ahead, plan actions and respond intelligently to dynamic environments. In lab tests, robots powered by V-JEPA 2 performed simple tasks that relied on spatial awareness and object handling.

The company, led by CEO Mark Zuckerberg, is ramping up its AI initiatives to compete with rivals like Microsoft, Google, and OpenAI. By improving machine reasoning through world models such as V-JEPA 2, Meta aims to accelerate its progress toward more advanced AI.

Apple brings AI tools to apps and Siri

Apple is rolling out Apple Intelligence, its generative AI platform, across popular apps including Messages, Mail, and Notes. Introduced in late 2024 and expanded in 2025, the platform blends text and image generation, redesigned Siri features, and integrations with ChatGPT.

The AI-enhanced Siri can now edit photos, summarise content, and interact across apps with contextual awareness. Writing tools offer grammar suggestions, tone adjustments, and content generation, while image tools allow for Genmoji creation and prompt-based visuals via the Image Playground app.

Unlike competitors, Apple relies on on-device processing for many tasks, prioritising privacy. More complex queries are sent to its Private Cloud Compute system running on Apple Silicon servers, with a visible fallback when the device is offline. Additional features such as Visual Intelligence and Live Translation are expected later in 2025.
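
The hybrid design can be pictured as a simple routing decision: handle lightweight requests locally, escalate complex ones to the private cloud, and show a visible fallback when offline. The sketch below is a deliberately naive illustration of that pattern; the threshold, names, and messages are invented and do not reflect Apple's implementation.

```python
def handle_request(prompt: str, online: bool, threshold: int = 60) -> str:
    """Route a request: on-device if simple, private cloud if complex, fallback if offline."""
    if len(prompt) <= threshold:  # crude stand-in for a real complexity check
        return f"[on-device] handled: {prompt!r}"
    if online:
        return f"[private cloud] handled: {prompt!r}"
    return "[offline] this request needs cloud processing; please reconnect"

print(handle_request("Summarise this note", online=True))
print(handle_request(
    "Draft a long reply weighing three scheduling options and their trade-offs",
    online=False,
))
```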

India unveils AI incident reporting guidelines for critical infrastructure

India is developing guidelines that will require companies, developers, and public institutions to report AI-related incidents affecting critical infrastructure sectors such as telecommunications, power, and energy. The government aims to create a centralised database to record and classify incidents such as system failures, unexpected results, or harmful impacts caused by AI.

The initiative will help policymakers and stakeholders better understand and manage the risks AI poses to vital services, ensuring transparency and accountability. The proposed guidelines would require detailed reporting of each incident, including the AI application involved, its cause, location, affected sector, and severity of harm.
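
As a rough sketch of what such structured reporting could look like, the example below encodes the fields the draft names as a simple record. The field names, severity levels, and sample values are assumptions made for illustration; the actual schema is still under development.

```python
from dataclasses import dataclass, asdict
from enum import Enum
import json

class Severity(Enum):
    LOW = "low"
    MODERATE = "moderate"
    CRITICAL = "critical"

@dataclass
class AIIncidentReport:
    application: str       # the AI system involved
    cause: str             # e.g. model failure, data drift, misuse
    location: str
    affected_sector: str   # telecommunications, power, energy, ...
    severity: Severity

report = AIIncidentReport(
    application="traffic-optimisation model (hypothetical)",
    cause="unexpected output under an unseen load pattern",
    location="example region",
    affected_sector="telecommunications",
    severity=Severity.MODERATE,
)

record = asdict(report)
record["severity"] = report.severity.value  # enums are not JSON-serialisable
print(json.dumps(record, indent=2))
```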

The Telecommunications Engineering Centre (TEC) is spearheading the effort, focusing initially on telecom and digital infrastructure, with plans to extend the standard across other sectors and pitch it globally through the International Telecommunication Union. The framework aligns with international initiatives such as the OECD’s AI Incident Monitor and builds on government recommendations to improve oversight while fostering innovation.

Why does it matter?

The draft emphasises learning from incidents rather than penalising reporters, encouraging self-regulation to avoid excessive compliance burdens. This approach complements India's broader AI safety goals, including the recent launch of the IndiaAI Safety Institute, which works on risk management, ethical frameworks, and detection tools.

AI tools are not enough without basic cybersecurity

At London Tech Week, Darktrace and UK officials warned that many firms are over-relying on AI tools while failing to implement basic cybersecurity practices.

Despite the hype around AI, essential measures like user access control and system segmentation remain missing in many organisations.

Cybercriminals are already exploiting AI to automate phishing and accelerate intrusions in the UK, while outdated infrastructure and short-term thinking leave companies vulnerable.

Boards often struggle to assess AI tools properly, buying into trends rather than addressing real threats.

Experts stressed that AI is not a silver bullet and must be used alongside human expertise and solid security foundations.

Domain-specific AI models, built with transparency and interpretability, are needed to avoid the dangers of overconfidence and misapplication in high-risk areas.

AI must protect dignity, say US bishops

The US Conference of Catholic Bishops has urged Congress to centre AI policy on human dignity and the common good.

Their message outlines moral principles rather than technical guidance, warning against misuse of technology that may erode truth, justice, or the protection of the vulnerable.

The bishops caution against letting AI replace human moral judgement, especially in sensitive areas like family life, work, and warfare. Without strict oversight, they warn, AI risks deepening inequality and harming those already marginalised.

Their call includes demands for greater transparency, regulation of autonomous weapons, and stronger protections for children and workers in the US.

Rooted in Catholic social teaching, the letter frames AI not as a neutral innovation but as a force that must serve people, not displace them.

AI startup faces lawsuit from Disney and Universal

Two of Hollywood’s most powerful studios, Disney and Universal, have launched a copyright infringement lawsuit against the AI firm Midjourney, accusing it of illegally replicating iconic characters.

The studios claim the San Francisco-based company copied their creative works without permission, describing it as a ‘bottomless pit of plagiarism’.

Characters such as Darth Vader, Elsa, and the Minions were cited in the 143-page complaint, which alleges Midjourney used these images to train its AI system and generate similar content.

Disney and Universal argue that the AI firm failed to invest in the creative process, yet profited heavily from the output, reportedly earning US$300 million in paid subscriptions last year.

Despite early attempts by the studios to raise concerns and propose safeguards already adopted by other AI developers, Midjourney allegedly ignored them and pressed ahead with further product releases. The company, which calls itself a small, self-funded team of 11, has declined to comment on the lawsuit directly but insists it has a long future ahead.

Disney’s legal chief, Horacio Gutierrez, stressed the importance of protecting creative works that result from decades of investment. While supporting AI as a tool for innovation, he maintained that ‘piracy is piracy’, regardless of whether humans or machines carry it out.

The studios are seeking damages and a court order to stop the AI firm from continuing its alleged copyright violations.

Wikipedia halts AI summaries test after backlash

Wikipedia has paused a controversial trial of AI-generated article summaries following intense backlash from its community of volunteer editors.

The Wikimedia Foundation had planned a two-week opt-in trial for mobile users, with summaries produced by Aya, an open-weight AI model developed by Cohere.

However, the reaction from editors was swift and overwhelmingly negative. The discussion page became flooded with objections, with contributors arguing that such summaries risked undermining the site’s reputation for neutrality and accuracy.

Some expressed concerns that inserting AI content would override Wikipedia’s long-standing collaborative approach by effectively installing a single, unverifiable voice atop articles.

Editors warned that AI-generated summaries lacked proper sourcing and could compromise the site’s credibility. Recent AI blunders by other tech giants, including Google’s glue-on-pizza mishap and Apple’s false death alert, were cited as cautionary examples of reputational risk.

For many, the possibility of similar errors appearing on Wikipedia was unacceptable.

Marshall Miller of the Wikimedia Foundation acknowledged the misstep in communication and confirmed the project’s suspension.

While the Foundation remains interested in exploring AI to improve accessibility, it has committed to ensuring any future implementation involves direct participation from the Wikipedia community.

Nvidia announces new AI lab in UK and supercomputing wins in Europe

What began as a company powering 3D games in the 1990s has evolved into the backbone of the global AI revolution. Nvidia, once best known for its Riva TNT2 chips in consumer graphics cards like the Elsa Erazor III, now sits at the centre of scientific computing, defence, and national-scale innovation.

While gaming remains part of its identity—with record revenue of $3.8 billion in Q1 FY2026—it now accounts for less than 9% of Nvidia’s $44.1 billion total revenue. The company’s trajectory reflects its founder Jensen Huang’s ambition to lead beyond the gaming space, targeting AI, supercomputing, and global infrastructure.

Recent announcements reinforce this shift. Huang joined UK Prime Minister Sir Keir Starmer to open London Tech Week, affirming Nvidia’s commitment to launch an AI lab in the UK, as the government commits £1 billion to AI compute by 2030.

Nvidia also revealed that its Vera Rubin superchip will power Germany's ‘Blue Lion’ supercomputer, and that its Grace Hopper platform is at the heart of Jupiter, Europe's first exascale AI system, located at the Jülich Supercomputing Centre.

Nvidia’s presence now spans continents and disciplines, from powering national research to driving breakthroughs in climate modelling, quantum computing, and structural biology.

‘AI will supercharge scientific discovery and industrial innovation,’ said Huang. And with systems like Jupiter poised to run a quintillion (10¹⁸) operations per second, the company's growth story is far from over.

TechNext launches forecasting system to guide R&D strategy

Global R&D spending now exceeds $2 trillion a year, yet many companies still rely on intuition rather than evidence to shape innovation strategies—often at great cost.

TechNext, co-founded by Anuraag Singh and MIT’s Prof. Christopher L. Magee, aims to change that with a newly patented system that delivers data-driven forecasts for technology performance.

Built on large-scale empirical datasets and proprietary algorithms, the system enables organisations to anticipate which technologies are likely to improve most rapidly.
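
TechNext's patented algorithms and datasets are proprietary, but the general class of method, fitting an exponential improvement rate to historical performance data and extrapolating it forward, can be sketched in a few lines. The figures below are made up purely for illustration.

```python
# Minimal sketch of exponential technology-improvement forecasting.
# Data points are invented; this is not TechNext's patented system.
import numpy as np

years = np.array([2015, 2017, 2019, 2021, 2023])
performance = np.array([1.0, 1.9, 3.8, 7.4, 15.0])  # hypothetical benchmark units

# Log-linear fit: log(performance) = a * year + b  =>  annual rate = e^a - 1
a, b = np.polyfit(years, np.log(performance), deg=1)
annual_rate = np.exp(a) - 1
forecast_2030 = np.exp(a * 2030 + b)

print(f"estimated annual improvement rate: {annual_rate:.1%}")
print(f"extrapolated 2030 performance: {forecast_2030:.1f} units")
```

Ranking many technologies by such fitted rates is one way a decision-maker could compare where performance is improving fastest.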

‘R&D has become one of the fastest-growing expenses for companies, yet most decisions still rely on intuition rather than data,’ said Singh. ‘We have been flying blind.’

The tool has already drawn attention from major stakeholders, including the United States Air Force, multinational firms, VCs, and think tanks.

By quantifying the future of technologies—from autonomous vehicle perception systems to clean energy infrastructure—TechNext promises to help decision-makers avoid expensive dead ends and focus on long-term winners.
