UK Justice Secretary pushes expanded AI use in courts to tackle backlogs

In a speech at the Microsoft AI Tour in London, Justice Secretary David Lammy outlined a vision for using AI to tackle the persistent backlog in the criminal justice system, which currently stands at tens of thousands of unresolved cases, by automating and streamlining court administration and case-progression tasks.

He described how pilot tools have already been used in the probation system to transcribe meetings and save over 25,000 hours of administrative time. He said similar AI transcription and summarisation systems are being tested in courts and tribunals to help judges, magistrates and legal advisers handle paperwork more efficiently.

Lammy also announced additional funding for an in-house Justice AI unit to support pilot tools such as an intelligent listing assistant (J-AI) that helps schedule and prioritise cases, and to strengthen partnerships with technology firms alongside funding programmes like LawtechUK that support law-tech innovation.

The Ministry of Justice will expand the use of AI tools to assist transcription, case summary generation and legal analysis, aiming to free up human judges and staff to focus on substantive decision-making.

The reforms come amid broader judicial changes, including lifting caps on court sitting days and proposals to reduce the number of jury trials for less serious offences, to alleviate bottlenecks that could otherwise take years to clear.

However, legal industry groups such as the Law Society of England and Wales have expressed reservations: while AI may help with administrative tasks, it should not replace critical human judgement in decisions with serious consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Children must be prepared for AI-era careers that don’t yet exist, say education leaders

Education leaders and industry stakeholders in South Africa say the rise of AI is transforming labour-market expectations so rapidly that many of tomorrow’s careers may not yet exist.

They argue that traditional curricula, centred on static knowledge and routine tasks, must evolve to prioritise adaptability, problem-solving, creativity, ethical reasoning and digital fluency: competencies that complement AI rather than compete with it.

Speakers at recent education forums emphasised that AI will continue to automate routine cognitive and technical work, pushing demand toward roles that require higher-order thinking and human-centred skills.

They described a growing need to integrate AI literacy and data skills into schooling from an early age to reduce future workforce displacement and prepare students to harness AI as a productive partner.

Experts also highlighted equity concerns: without intentional policy and investment to support under-resourced schools and communities, the ‘AI skills gap’ could exacerbate inequality. Some educators recommended stronger partnerships between government, tech industry and educational institutions to co-develop curricula, teacher training and accessible AI tools.

They underscored that competencies such as empathetic communication, cultural awareness and ethical judgement (areas where AI lacks robust capabilities) will remain crucial.

AI-powered electronic nose shows promise for early ovarian cancer screening

Researchers at Linköping University have developed an AI-powered electronic nose capable of detecting early signs of ovarian cancer in blood plasma samples. The pilot study, published in Advanced Intelligent Systems, reports 97 per cent accuracy using machine-learning models trained on biobank data.

Ovarian cancer is often diagnosed late because symptoms resemble those of more common conditions. In 2022, around 325,000 new cases and more than 200,000 deaths were recorded globally. Earlier detection could significantly improve survival rates and access to timely treatment.

The prototype device contains 32 commercially available sensors that detect volatile substances emitted by blood samples. Rather than targeting a single biomarker, the system analyses complex chemical patterns, with machine learning identifying signatures linked to ovarian cancer.
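Purely as an illustration of pattern-based classification over a multi-sensor array (the study’s actual models and data are described in the paper, not here; all numbers and names below are invented for the sketch), the idea of summarising each class by its characteristic response pattern can be shown with a toy nearest-centroid classifier on synthetic 32-dimensional “sensor” vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the data: 32 sensor readings per plasma sample.
N_SENSORS = 32
healthy = rng.normal(0.0, 0.1, size=(40, N_SENSORS))
cancer = rng.normal(0.3, 0.1, size=(40, N_SENSORS))  # shifted chemical pattern

X = np.vstack([healthy, cancer])
y = np.array([0] * 40 + [1] * 40)

# Nearest-centroid classifier: each class is summarised by its mean
# response pattern across all 32 sensors, and a new sample is assigned
# to whichever pattern it lies closer to.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(sample):
    dists = np.linalg.norm(centroids - sample, axis=1)
    return int(dists.argmin())

preds = np.array([predict(s) for s in X])
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is that no single sensor is diagnostic on its own; the classifier relies on the joint pattern across all 32 channels, which is what the reported machine-learning models exploit at far greater sophistication.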

Unlike conventional blood tests, which can be slow and rely on specific biomarkers, the electronic nose evaluates a broad spectrum of compounds. Researchers say the approach offers greater precision and could reduce screening costs while improving accessibility.

Developers estimate the test takes around 10 minutes and could become part of cancer screening programmes within three years. Although currently focused on ovarian cancer, the team suggests the method could eventually be adapted to detect multiple cancer types.

US tech giants eye Wales for major AI investment

American technology firms are increasingly looking to Wales as a destination for AI investment and data infrastructure. Strong inward investment figures and expanding growth zones are putting the nation firmly on the technology map.

Last year Wales secured £4.6bn in global investment across 65 foreign direct investment projects, marking a 23 per cent rise year on year. Thousands of jobs were created or safeguarded, outperforming many other UK regions.

Major projects underline the shift. US firm Vantage plans to transform the former Ford Bridgend plant into a large-scale data centre campus, while Microsoft is supporting another proposed scheme in Newport, both located within designated AI growth zones.

Beyond data centres, Wales offers land, connectivity and a supportive regulatory environment. Innovation clusters across Cardiff, Newport and North Wales, alongside strengths in life sciences, advanced manufacturing and renewable energy, are strengthening its appeal to global investors.

With expanding energy projects and a growing start-up pipeline, Wales is positioning itself as a competitive base for global business. Investors are increasingly encouraged to see it not as a regional outpost, but as an international platform rooted in strong economic foundations.

OpenClaw vulnerabilities exposed by AI-powered code scanner

Researchers at Endor Labs identified six high- and critical-severity vulnerabilities in the open-source AI agent framework OpenClaw, using an AI-powered static application security testing (SAST) engine to trace untrusted data flows. The flaws included server-side request forgery (SSRF), authentication bypass and path traversal.

The bugs affected multiple components of the agentic system, which integrates large language models with external tools and web services. Several SSRF issues were found in the gateway and authentication modules, potentially exposing internal services or cloud metadata depending on the deployment context.

Access control failures were also found in OpenClaw. A webhook handler lacked proper verification, enabling forged requests, while another flaw allowed unauthenticated access to protected functionality. Researchers confirmed exploitability with proof-of-concept demonstrations.

The team said that traditional static analysis tools struggle with modern AI software stacks, where inputs undergo multiple transformations before reaching sensitive operations. Their AI-based SAST engine preserved context across layers, tracing untrusted data from entry points to critical functions.
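The core idea of tracing untrusted data from entry points to sensitive operations can be illustrated with a toy taint-tracking sketch. This is not Endor Labs’ engine (which analyses code statically, whereas this sketch tags values at runtime), and every name below is invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tainted:
    """Wraps a value that originated from an untrusted source."""
    value: str

def from_request(param: str) -> Tainted:
    # Entry point: anything arriving from an HTTP request is untrusted.
    return Tainted(param)

def normalise(data):
    # A transformation layer: the taint label must survive it, which is
    # exactly what naive analyses lose track of across multiple layers.
    if isinstance(data, Tainted):
        return Tainted(data.value.strip().lower())
    return data.strip().lower()

def fetch_internal(url) -> str:
    # Sensitive sink: tainted data reaching a network fetch models an
    # SSRF risk (e.g. a request aimed at cloud metadata endpoints).
    if isinstance(url, Tainted):
        raise ValueError("tainted data reached network sink (potential SSRF)")
    return f"fetched {url}"

user_url = from_request("  HTTP://169.254.169.254/metadata  ")
cleaned = normalise(user_url)  # still tainted after transformation
try:
    fetch_internal(cleaned)
except ValueError as exc:
    print(exc)
```

The sketch shows why context preservation matters: the `normalise` step rewrites the value, and an analysis that forgets the taint label there would miss the SSRF path entirely.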

OpenClaw maintainers were notified through responsible disclosure and have since issued patches and advisories. Researchers argue that as AI agent frameworks expand into enterprise environments, security analysis must adapt to address both conventional vulnerabilities and AI-specific attack surfaces.

Sony targets AI music copyright use

Sony Group has developed technology designed to identify the original sources of music generated by AI. The move comes amid growing concern over the unauthorised use of copyrighted works in AI training.

According to Sony Group, the system can extract data from an underlying AI model and compare generated tracks with original compositions. The process aims to quantify how much specific works contributed to the output.
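Sony has not disclosed how the comparison works. Purely as an illustration, attribution tools of this general kind often reduce each track to a feature vector and rank catalogue originals by similarity to the generated output; the embedding step and all track names below are hypothetical:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_sources(generated, catalogue):
    """Rank catalogue tracks by similarity to a generated track.

    `generated` is a feature vector for the AI output; `catalogue` maps
    track names to feature vectors of original compositions. Both are
    hypothetical stand-ins for whatever audio embedding is actually used.
    """
    scores = {name: cosine_similarity(generated, vec)
              for name, vec in catalogue.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

rng = np.random.default_rng(1)
catalogue = {f"track_{i}": rng.normal(size=128) for i in range(5)}
# Make the generated track a noisy copy of track_2, so it should rank first.
generated = catalogue["track_2"] + rng.normal(scale=0.2, size=128)

ranking = rank_sources(generated, catalogue)
print(ranking[0][0])
```

A per-track similarity score of this sort is one plausible way to "quantify how much specific works contributed", though Sony's method of extracting evidence from the model itself would be considerably more involved.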

Composers, songwriters and publishers could use the technology to seek compensation from AI developers if their material was used without permission. Sony said the goal is to help ensure creators are properly rewarded.

Efforts to safeguard intellectual property have intensified across the music industry. Sony Music Entertainment in the US filed a copyright infringement lawsuit over AI-generated music in 2024, underscoring wider tensions around AI and creative rights.

Romania’s job market faces structural change as AI and automation rise

A Think by ING analysis finds that Romania’s recent macroeconomic slowdown reflects deep structural change rather than merely cyclical weakness.

After years of robust consumption-led expansion, fiscal tightening and weak domestic demand have curbed growth, while firms increasingly invest in automation and AI to boost productivity rather than expand headcount.

Industrial employment has declined: manufacturing jobs fell by around 25,000 in late 2025, and hiring has shifted toward defensive, replacement-only patterns.

Firms are integrating robotics, automated assembly lines and intelligent logistics systems, and service-sector work is also being reshaped by AI tools, even where formal adoption is still emerging.

A recent survey suggests that 68% of people in Romania have used AI tools, and 44% rely on them for work tasks such as administrative support and analysis, signalling rising informal use ahead of widespread enterprise deployment.

While automation and AI can raise productivity and output without proportional employment growth, they also tilt the labour market: high-skill specialised roles (e.g. AI, engineering, advanced management) are expected to remain resilient or grow, while routine roles (including some entry-level tech positions, call-centre jobs and administrative tasks) face stagnation or decline.

The result could be a ‘barbell’ labour market, with growth chiefly at the high and low ends and limited opportunities in mid-skill roles.

Real wage erosion, tight hiring and demographic trends (including a shrinking workforce) add to short-term challenges. In the near term, employment may remain subdued even as economic output recovers modestly by 2027.

Over the longer term, the economy’s shift toward capital-intensive, productivity-driven growth could support stronger output without generating broad employment, underscoring the need for education, reskilling and policy strategies that help workers adapt to AI-driven labour demand.

AI-generated film removed from cinemas after public backlash

A prize-winning AI-generated short film has been pulled from cinemas following criticism from audiences. Thanksgiving Day, created by filmmaker Igor Alferov, was due to screen in selected theatres before feature presentations.

Concerns emerged after news of the screening spread online, prompting complaints directed at AMC Theatres. The chain stated it had not programmed the film and that pre-show advertising partner Screenvision Media had arranged the placement.

AMC confirmed it would not participate in the initiative, meaning the AI film will no longer appear in its locations. The animated short, produced using Google’s Gemini 3.1 and Nano Banana Pro tools, had recently won an AI film festival award.

The episode comes amid broader debate about artificial intelligence in Hollywood. Industry insiders suggest studios are quietly increasing AI use in production, even as concerns grow over job losses and economic uncertainty within Los Angeles’ entertainment sector.

Study warns AI chatbots can reinforce delusions and mania

AI chatbots may pose serious risks for people with severe mental illnesses, according to a new study published in Acta Psychiatrica Scandinavica. Researchers found that tools such as ChatGPT can worsen psychiatric conditions by reinforcing users’ delusions, paranoia, mania, suicidal thoughts, and eating disorders.

The team examined health records from more than 54,000 patients and identified dozens of cases where AI interactions appeared to exacerbate symptoms. Experts warn that the actual number of affected individuals is likely far higher.

Because chatbots are designed to follow and validate a user’s input, they can unintentionally strengthen delusional thinking, turning digital assistants into echo chambers for psychosis.

Despite potential benefits for psychoeducation or alleviating loneliness, experts caution against using AI as a substitute for trained therapists. Chatbots should be tested in rigorous clinical trials before any therapeutic use, says Professor Søren Dinesen Østergaard.

The researchers urge healthcare providers to discuss AI chatbot use with patients, particularly those with severe mental illnesses, and call for central regulation of the technology. They argue that lessons from social media show that early oversight is essential to protect vulnerable populations.

Commission delays high-risk AI guidance

The European Commission has confirmed it will again delay publishing guidance on high-risk AI systems under the EU AI Act. The guidelines were due by 2 February 2026, but will now follow a revised timeline.

According to Euractiv, the document is intended to clarify which AI systems fall into the high-risk category and therefore face stricter obligations. Officials said more time is needed to incorporate significant stakeholder feedback.

The delay marks the second missed deadline and adds to broader implementation setbacks surrounding the EU AI Act. Several member states have yet to designate national enforcement bodies, complicating oversight preparations.

Brussels is also considering postponing the application of high-risk rules through a digital simplification package. Parliament and Council appear supportive of moving the August deadline back by more than a year, easing pressure on companies awaiting guidance.
