AI tools are not enough without basic cybersecurity

At London Tech Week, Darktrace and UK officials warned that many firms are over-relying on AI tools while failing to implement basic cybersecurity practices.

Despite the hype around AI, essential measures like user access control and system segmentation remain missing in many organisations.

In the UK, cybercriminals are already exploiting AI to automate phishing and accelerate intrusions, while outdated infrastructure and short-term thinking leave companies vulnerable.

Boards often struggle to assess AI tools properly, buying into trends rather than addressing real threats.

Experts stressed that AI is not a silver bullet and must be used alongside human expertise and solid security foundations.

Domain-specific AI models, built with transparency and interpretability, are needed to avoid the dangers of overconfidence and misapplication in high-risk areas.

AI must protect dignity, say US bishops

The US Conference of Catholic Bishops has urged Congress to centre AI policy on human dignity and the common good.

Their message outlines moral principles rather than technical guidance, warning against misuse of technology that may erode truth, justice, or the protection of the vulnerable.

The bishops caution against letting AI replace human moral judgement, especially in sensitive areas like family life, work, and warfare. They warn that, without strict oversight, AI could deepen inequality and harm those already marginalised.

Their call includes demands for greater transparency, regulation of autonomous weapons, and stronger protections for children and workers in the US.

Rooted in Catholic social teaching, the letter frames AI not as a neutral innovation but as a force that must serve people, not displace them.

AI startup faces lawsuit from Disney and Universal

Two of Hollywood’s most powerful studios, Disney and Universal, have launched a copyright infringement lawsuit against the AI firm Midjourney, accusing it of illegally replicating iconic characters.

The studios claim the San Francisco-based company copied their creative works without permission, describing it as a ‘bottomless pit of plagiarism’.

Characters such as Darth Vader, Elsa, and the Minions were cited in the 143-page complaint, which alleges Midjourney used these images to train its AI system and generate similar content.

Disney and Universal argue that the AI firm failed to invest in the creative process, yet profited heavily from the output, reportedly earning US$300 million in paid subscriptions last year.

Despite early attempts by the studios to raise concerns and propose safeguards already adopted by other AI developers, Midjourney allegedly ignored them and pressed ahead with further product releases. The company, which calls itself a small, self-funded team of 11, has declined to comment on the lawsuit directly but insists it has a long future ahead.

Disney’s legal chief, Horacio Gutierrez, stressed the importance of protecting creative works that result from decades of investment. While supporting AI as a tool for innovation, he maintained that ‘piracy is piracy’, regardless of whether humans or machines carry it out.

The studios are seeking damages and a court order to stop the AI firm from continuing its alleged copyright violations.

Wikipedia halts AI summaries test after backlash

Wikipedia has paused a controversial trial of AI-generated article summaries following intense backlash from its community of volunteer editors.

The Wikimedia Foundation had planned a two-week opt-in test for mobile users, with summaries produced by Aya, an open-weight AI model developed by Cohere.

However, the reaction from editors was swift and overwhelmingly negative. The discussion page became flooded with objections, with contributors arguing that such summaries risked undermining the site’s reputation for neutrality and accuracy.

Some expressed concerns that inserting AI content would override Wikipedia’s long-standing collaborative approach by effectively installing a single, unverifiable voice atop articles.

Editors warned that AI-generated summaries lacked proper sourcing and could compromise the site’s credibility. Recent AI blunders by other tech giants, including Google’s glue-on-pizza mishap and Apple’s false death alert, were cited as cautionary examples of reputational risk.

For many, the possibility of similar errors appearing on Wikipedia was unacceptable.

Marshall Miller of the Wikimedia Foundation acknowledged the misstep in communication and confirmed the project’s suspension.

While the Foundation remains interested in exploring AI to improve accessibility, it has committed to ensuring any future implementation involves direct participation from the Wikipedia community.

Nvidia announces new AI lab in UK and supercomputing wins in Europe

What began as a company powering 3D games in the 1990s has evolved into the backbone of the global AI revolution. Nvidia, once best known for its Riva TNT2 chips in consumer graphics cards like the Elsa Erazor III, now sits at the centre of scientific computing, defence, and national-scale innovation.

While gaming remains part of its identity—with record revenue of $3.8 billion in Q1 FY2026—it now accounts for less than 9% of Nvidia’s $44.1 billion total revenue. The company’s trajectory reflects its founder Jensen Huang’s ambition to lead beyond the gaming space, targeting AI, supercomputing, and global infrastructure.

Recent announcements reinforce this shift. Huang joined UK Prime Minister Sir Keir Starmer to open London Tech Week, affirming Nvidia’s commitment to launch an AI lab in the UK, as the government commits £1 billion to AI compute by 2030.

Nvidia also revealed that its Vera Rubin superchip will power Germany’s ‘Blue Lion’ supercomputer, and its Grace Hopper platform is at the heart of Jupiter—Europe’s first exascale AI system, located at the Jülich Supercomputing Centre.

Nvidia’s presence now spans continents and disciplines, from powering national research to driving breakthroughs in climate modelling, quantum computing, and structural biology.

‘AI will supercharge scientific discovery and industrial innovation,’ said Huang. And with systems like Jupiter poised to run a quintillion (10¹⁸) operations per second, the company’s growth story is far from over.

TechNext launches forecasting system to guide R&D strategy

Global R&D spending now exceeds $2 trillion a year, yet many companies still rely on intuition rather than evidence to shape innovation strategies—often at great cost.

TechNext, co-founded by Anuraag Singh and MIT’s Prof. Christopher L. Magee, aims to change that with a newly patented system that delivers data-driven forecasts for technology performance.

Built on large-scale empirical datasets and proprietary algorithms, the system enables organisations to anticipate which technologies are likely to improve most rapidly.

‘R&D has become one of the fastest-growing expenses for companies, yet most decisions still rely on intuition rather than data,’ said Singh. ‘We have been flying blind.’

The tool has already drawn attention from major stakeholders, including the United States Air Force, multinational firms, venture capitalists, and think tanks.

By quantifying the future of technologies—from autonomous vehicle perception systems to clean energy infrastructure—TechNext promises to help decision-makers avoid expensive dead ends and focus on long-term winners.

UK government backs AI to help teachers and reduce admin

The UK government has unveiled new guidance for schools that promotes the use of AI to reduce teacher workloads and increase face-to-face time with pupils.

The Department for Education (DfE) says AI could take over time-consuming administrative tasks such as lesson planning, report writing, and email drafting—allowing educators to focus more on classroom teaching.

The guidance, aimed at schools and colleges in England, highlights how AI can assist with formative assessments like quizzes and low-stakes feedback, while stressing that teachers must verify outputs for accuracy and data safety.

It also recommends using only school-approved tools and limits AI use to tasks that support rather than replace teaching expertise.

Education unions welcomed the move but said investment is needed to make it work. Leaders from the NAHT and ASCL praised AI’s potential to ease pressure on staff and help address recruitment issues, but warned that schools require proper infrastructure and training.

The government has pledged £1 million to support AI tool development for marking and feedback.

Education Secretary Bridget Phillipson said the plan will free teachers to deliver more personalised support, adding: ‘We’re putting cutting-edge AI tools into the hands of our brilliant teachers to enhance how our children learn and develop.’

Sam Altman predicts AI will discover new ideas

In a new blog post titled ‘The Gentle Singularity’, OpenAI CEO Sam Altman predicted that AI systems capable of producing ‘novel insights’ may arrive as early as 2026.

While Altman’s essay blends optimism with caution, it subtly signals the company’s next central ambition: creating AI that goes beyond repeating existing knowledge and mimicking human reasoning to generate genuinely original ideas.

Altman’s comments echo a broader industry trend. Researchers are already using OpenAI’s recent o3 and o4-mini models to generate new hypotheses. Competitors like Google, Anthropic and FutureHouse are also shifting their focus towards scientific discovery.

Google’s AlphaEvolve has reportedly devised novel solutions to complex maths problems, while FutureHouse claims to have built AI capable of genuine scientific breakthroughs.

Despite the optimism, experts remain sceptical. Critics argue that AI still struggles to ask meaningful questions, a key ingredient for genuine insight.

Former OpenAI researcher Kenneth Stanley, now leading Lila Sciences, says generating creative hypotheses is a more formidable challenge than agentic behaviour. Whether OpenAI achieves the leap remains uncertain, but Altman’s essay may hint at the company’s next bold step.

Apple study finds AI fails on complex tasks

A recent study by Apple researchers has exposed significant limitations in advanced AI systems known as large reasoning models (LRMs).

These models, designed to solve complex problems through step-by-step thinking, experienced what the paper called a ‘complete accuracy collapse’ when faced with high-complexity tasks. Even when given an algorithm that should have ensured success, the models failed to deliver correct solutions.

Apple’s team suggested the results may point to a fundamental limit in how current AI models scale up to general reasoning.

The study found that LRMs performed well with low- and medium-difficulty tasks but deteriorated sharply as the complexity increased.

Rather than increasing their effort as problems became harder, the models paradoxically reduced their reasoning, leading to complete failure.

Experts, including AI researcher Gary Marcus and University of Surrey’s Andrew Rogoyski in the UK, called the findings alarming and indicative of a potential dead end in current AI development.

The study tested systems from OpenAI, Google, Anthropic and DeepSeek, raising serious questions about how close the industry is to achieving AGI.

Teachers get AI support for marking and admin

According to new government guidance, teachers in England are now officially encouraged to use AI to reduce administrative tasks. The Department for Education has released training materials that support the use of AI for low-stakes marking and routine parent communication.

The guidance allows AI-generated letters, such as those informing parents about minor issues like head lice outbreaks, and suggests using the technology for quizzes or homework marking.

While the move aims to cut workloads and improve classroom focus, schools are also advised to implement clear policies on appropriate use and ensure manual checks remain in place.

Experts have welcomed the guidance as a step forward but noted concerns about data privacy, budget constraints, and potential misuse.

The guidance comes as UK nations explore AI in education, with Northern Ireland commissioning a study on its impact and Scotland and Wales also advocating its responsible use.
