Experts propose frameworks for trustworthy AI systems

A coalition of researchers and experts has identified future research directions aimed at enhancing AI safety, robustness and quality as systems are increasingly integrated into critical functions.

The work highlights the need for improved tools to evaluate, verify and monitor AI behaviour across diverse real-world contexts, including methods to detect harmful outputs, mitigate bias and ensure consistent performance under uncertainty.

The discussion emphasises that technical quality attributes such as reliability, explainability, fairness and alignment with human values should be core areas of focus, especially for high-stakes applications in healthcare, transport, finance and public services.

Researchers advocate for interdisciplinary approaches, combining insights from computer science, ethics, and the social sciences to address systemic risks and to design governance frameworks that balance innovation with public trust.

The article also notes emerging strategies such as formal verification techniques, benchmarks for robustness and continuous post-deployment auditing, which could help contain unintended consequences and improve the safety of AI models before and after deployment at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI could harm the planet but also help save it

AI is often criticised for its growing electricity and water use, but experts argue it can also support sustainability. AI can reduce emissions, save energy, and optimise resource use across multiple sectors.

In agriculture, AI-powered irrigation helps farmers use water more efficiently. In Chile, precision systems reduced water consumption by up to 30%, while farmers earned extra income from verified savings.

Data centres and energy companies are deploying AI to improve efficiency, predict workloads, optimise cooling, monitor methane leaks, and schedule maintenance. These measures help reduce emissions and operational costs.

Buildings and aviation are also benefiting from AI. Smart systems manage heating, cooling, and appliances more efficiently. AI also optimises flight routes, reducing fuel consumption and contrail formation, showing that wider adoption could help fight climate change.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Project Genie allowing users to create interactive AI-generated worlds

Google has launched Project Genie, an experimental prototype that allows users to create and explore interactive AI-generated worlds. The web application, powered by Genie 3, Nano Banana Pro, and Gemini, is rolling out to Google AI Ultra subscribers in the US aged 18 and over.

Genie 3 is a world model: it simulates environmental dynamics and predicts how actions affect them in real time. Unlike static 3D snapshots, its environments are generated on the fly, with physics simulated as users move and interact.

Project Genie centres on three core capabilities: world sketching, exploration, and remixing. Users can prompt with text and images to create environments, define character perspectives, and preview worlds before entering.

As users navigate, the system generates paths in real time based on their actions.
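That pattern, in which the environment is rolled forward one user action at a time rather than pre-rendered, can be illustrated with a deliberately simple sketch. Nothing below reflects Google's actual implementation; the dataclass, the toy dynamics and the action names are hypothetical placeholders for the learned model behind Genie 3.

```python
# A conceptual toy, not Google's implementation: an action-conditioned
# world model rolls the environment forward one user action at a time,
# predicting the next state from the current one.
import math
from dataclasses import dataclass

@dataclass
class WorldState:
    x: float
    y: float
    heading: float  # degrees; stands in for the model's much richer latent state

def step(state: WorldState, action: str) -> WorldState:
    """Toy stand-in for learned dynamics: predict the next state from an action."""
    if action == "turn left":
        return WorldState(state.x, state.y, (state.heading - 90) % 360)
    if action == "turn right":
        return WorldState(state.x, state.y, (state.heading + 90) % 360)
    if action == "move forward":
        dx = math.cos(math.radians(state.heading))
        dy = math.sin(math.radians(state.heading))
        return WorldState(state.x + dx, state.y + dy, state.heading)
    return state  # unrecognised actions leave the world unchanged

# The world is generated step by step as the user navigates, not pre-rendered.
state = WorldState(0.0, 0.0, 0.0)
for action in ["move forward", "turn left", "move forward"]:
    state = step(state, action)
    print(action, "->", state)
```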

The experimental prototype has known limitations, including generation capped at 60 seconds, occasional deviations from prompts or real-world physics, and intermittent character-controllability issues.

Google emphasises responsible development as part of its mission to build AI that benefits humanity, with ongoing improvements planned based on user feedback.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Large language models mirror human brain responses to unexpected twists

Researchers at the University of Chicago are using AI to uncover insights into how the human brain processes surprise. The project, directed by Associate Professor Monica Rosenberg, compares human and AI responses to narrative moments to explore cognitive processes.

The study involved participants listening to stories whilst researchers recorded their responses through brain scans. Researchers then fed identical stories to the language model Llama, prompting it to predict subsequent text after each segment.

When AI predictions diverged from actual story content, that gap served as a measure of surprise, mirroring the discrepancy human readers experience when expectations fail.

Results showed a striking alignment between AI prediction errors and both participants’ reported feelings and brain-scan activity patterns. The correlation emerged when texts were analysed in 10- to 20-word chunks, suggesting humans and AI encode surprise at broader levels where ideas unfold.
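A rough sketch of how such a measure can be computed is shown below. It is not the Chicago team’s actual pipeline: the Llama checkpoint, the chunking scheme and the averaging are illustrative assumptions, using the standard Hugging Face transformers interface to score each chunk’s tokens given the story so far.

```python
# A minimal sketch, not the study's actual pipeline: per-chunk "surprise"
# estimated as a causal language model's average negative log-likelihood.
# The Llama checkpoint name and the 15-word chunk size are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.2-1B"  # hypothetical choice of Llama variant
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def chunk_surprisal(story: str, words_per_chunk: int = 15) -> list[float]:
    """Average per-token surprisal (nats) of each word chunk, scored given
    all of the story text that precedes it."""
    words = story.split()
    chunks = [" ".join(words[i:i + words_per_chunk])
              for i in range(0, len(words), words_per_chunk)]
    scores, context = [], ""
    for chunk in chunks:
        full = (context + " " + chunk).strip()
        ids = tokenizer(full, return_tensors="pt").input_ids
        # Approximate count of tokens belonging to the preceding context.
        ctx_len = len(tokenizer(context).input_ids) if context else 0
        with torch.no_grad():
            logits = model(ids).logits
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # next-token predictions
        targets = ids[0, 1:]
        nll = -log_probs[torch.arange(targets.shape[0]), targets]
        scores.append(nll[max(ctx_len - 1, 0):].mean().item())  # this chunk only
        context = full
    return scores

# These per-chunk scores can then be correlated with participants' reported
# surprise and with their brain responses for the same chunks.
```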

Fourth-year data science student Bella Summe, involved in the Cognition, Attention and Brain Lab research, noted the creative challenge of working in an emerging field.

Few studies have explored whether LLM prediction errors can serve as measures of human surprise, so the project demanded constant problem-solving and adaptation of the experimental design.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI reduces late breast cancer diagnoses by 12% in landmark study

AI in breast cancer screening reduced late diagnoses by 12% and increased early detection rates in the largest trial of its kind. The Swedish study involved 100,000 women randomly assigned to AI-supported screening or standard radiologist readings between April 2021 and December 2022.

The AI system analysed mammograms and assigned low-risk cases to single readings and high-risk cases to double readings by radiologists.
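The triage step itself amounts to a simple routing rule applied to the AI’s risk score. The sketch below is illustrative only, not the trial’s actual protocol; the score scale and the threshold are assumptions.

```python
# A minimal sketch of the triage rule described above, not the trial's
# actual protocol: the AI's risk score routes each mammogram to a single
# or double radiologist reading. Score scale and threshold are assumptions.

def assign_readings(risk_score: float, high_risk_threshold: float = 0.9) -> int:
    """Return how many independent radiologist readings to schedule."""
    return 2 if risk_score >= high_risk_threshold else 1

# Example: three screened cases with hypothetical AI risk scores in [0, 1].
for case_id, score in [("A", 0.12), ("B", 0.95), ("C", 0.40)]:
    print(f"case {case_id}: {assign_readings(score)} reading(s)")
```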

Results published in The Lancet showed 1.55 late-diagnosed cancers per 1,000 women in the AI group versus 1.76 per 1,000 in the control group, a 12% reduction, with 81% of cancers detected at the screening stage compared with 74% in the control group.

Dr Kristina Lång from Lund University said AI-supported mammography could reduce radiologist workload pressures and improve early detection, but cautioned that implementation must be done carefully with continuous monitoring.

Researchers stressed that screening still requires at least one human radiologist working alongside the AI, rather than being replaced by it. Cancer Research UK’s Dr Sowmiya Moorthie called the findings promising but noted that more research is needed to confirm their life-saving potential.

Breast Cancer Now’s Simon Vincent highlighted the significant potential for AI to support radiologists, emphasising that earlier diagnosis improves treatment outcomes for a disease that affects over 2 million people globally each year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Prism launches as OpenAI’s new workspace for scientific papers

OpenAI has launched Prism, a cloud-based LaTeX workspace designed to streamline the drafting, collaboration, and publication of academic papers. The tool integrates writing, citation management, real-time collaboration, and AI assistance into a single environment to reduce workflow friction.

Built specifically for scientific use, Prism embeds GPT-5.2 directly inside documents rather than as a separate chatbot. Researchers can rewrite sections, verify equations, test arguments, and clarify explanations without leaving the editing interface, positioning AI as a background collaborator.

Users can start new LaTeX projects or upload existing files through prism.openai.com using a ChatGPT account. Co-authors can join instantly, enabling simultaneous editing while maintaining structured formatting for equations, references, and manuscript layout.

OpenAI says Prism supports academic search, converts handwritten formulas into clean LaTeX, and allows voice-driven edits for faster reviews. Completed papers export as publication-ready PDFs alongside full source files.
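For reference, the ‘clean LaTeX’ such a conversion produces is ordinary structured markup of the kind shown below. The snippet is a generic, minimal example of an equation inside a manuscript, not actual Prism output.

```latex
\documentclass{article}
\usepackage{amsmath}

\begin{document}

\section{Results}

% e.g. a handwritten Gaussian integral converted into clean markup:
\begin{equation}
  \int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}
  \label{eq:gauss}
\end{equation}

Equation~\eqref{eq:gauss} can then be referenced throughout the manuscript.

\end{document}
```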

Initially available for free to personal ChatGPT users, the workspace will later expand to Business, Enterprise, and Education plans. The company frames the tool as a practical productivity layer rather than a research disruption platform.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK minister signals interest in universal basic income amid rising AI job disruption

Jason Stockwood, the UK investment minister, has suggested that a universal basic income could help protect workers as AI reshapes the labour market.

He argued that rapid advances in automation will cause disruptive shifts across several sectors, meaning the country must explore safety mechanisms rather than allowing sudden job losses to deepen inequality. He added that workers will need long-term retraining pathways as roles disappear.

Concern about the economic impact of AI continues to intensify.

Research by Morgan Stanley indicates that the UK is losing more jobs than it is creating because of automation and is being affected more severely than other major economies.

Warnings from London’s mayor, Sadiq Khan, and senior global business figures, including JP Morgan’s chief executive Jamie Dimon, point to the risk of mass unemployment unless governments and companies step in with support.

Stockwood confirmed that a universal basic income is not part of formal government policy, although he said people inside government are discussing the idea.

He took up his post in September after a long career in the technology sector, including senior roles at Match.com, Lastminute.com and Travelocity, as well as leading Simply Business through a significant sale.

Additionally, Stockwood said he no longer pushes for stronger wealth-tax measures, but he criticised wealthy individuals who seek to minimise their contributions to public finances. He suggested that those who prioritise tax avoidance lack commitment to their communities and the country’s long-term success.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Historic digital assets regulation bill approved by US Senate committee for the first time

The US Senate Agriculture Committee has voted along party lines to advance legislation on the cryptocurrency market structure, marking the first time such a bill has cleared a Senate committee.

The Digital Commodity Intermediaries Act passed with 12 Republicans voting in favour and 11 Democrats opposing, representing a significant development for digital asset regulation in the United States.

The legislation would grant the Commodity Futures Trading Commission new regulatory authority over digital commodities and establish consumer protections, including safeguards against conflicts of interest.

Chairman John Boozman proceeded with the bill after losing bipartisan support when Senator Cory Booker withdrew backing for the version presented. The Senate Banking Committee must approve the measure before the two versions can be combined and advanced to the Senate floor.

Democrats raised concerns about the legislation, particularly regarding President Donald Trump’s cryptocurrency ventures. Senator Booker stated the bill departed from bipartisan principles established in November, noting Republicans ‘walked away’ from previous agreements.

Democrats offered amendments to ban public officials from engaging in the crypto industry and to address foreign-adversary involvement in digital commodities, but all were rejected as outside the committee’s jurisdiction.

Senator Gillibrand expressed optimism about the bill’s advancement, whilst Boozman called the vote ‘a critical step towards creating clear rules’. The Senate Banking Committee’s consideration was postponed following opposition from the crypto industry, with no new hearing date set.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Physical AI becomes central to LG’s robotics and automation ambitions

LG Group affiliates are expanding into physical AI by combining robotics hardware, industrial data, and advanced AI models. The strategy aims to deliver integrated autonomous systems across industries. The group is positioning itself along the complete robotics value chain.

LG Electronics is strengthening its role in robotic actuators that enable precise humanoid movement. Leveraging decades of motor engineering, it recently launched the AXIUM actuator brand. The company has also expanded its investments across robotics manufacturers.

The company’s AI Research division is working on programs that help machines understand the real world. A dedicated lab is embedding vision and language capabilities in robots and factory systems, with the aim of enabling machines to predict and act autonomously in real time.

The CNS division is teaching robots the task-specific skills they need for different jobs, while LG Display is developing robot displays based on flexible panels that withstand harsh environments. Both divisions are drawing on their automotive and factory expertise to build robots.

Power and sensing components round out the group’s robotics plans. LG Energy Solution makes high-performance batteries for mobile robots, while LG Innotek develops cameras and sensors. Group leaders see building intelligent machines as key to future growth.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands faces rising digital sovereignty threat, data authority warns

The Dutch data protection authority has urged the government to act swiftly to protect the country’s digital sovereignty, warning that dependence on overseas technology firms could expose vital public services to significant risk.

Concern has intensified after DigiD, the national digital identity system, appeared set for acquisition by a US company, raising questions about long-term control of key infrastructure.

The watchdog argues that the Netherlands relies heavily on a small group of non-European cloud and IT providers, and stresses that public bodies lack clear exit strategies if foreign ownership suddenly shifts.

Additionally, the watchdog criticises the government for treating digital autonomy as an academic exercise rather than recognising its immediate implications for communication between the state and citizens.

In a letter to the economy minister, the authority calls for a unified national approach rather than fragmented decisions by individual public bodies.

It proposes sovereignty criteria for all government contracts and suggests termination clauses that enable the state to withdraw immediately if a provider is sold abroad. It also notes the importance of designing public services to allow smooth provider changes when required.

The watchdog urges the government to strengthen European capacity by investing in scalable domestic alternatives, including a Dutch-controlled government cloud. The economy ministry has declined to comment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!