Experts propose frameworks for trustworthy AI systems

A coalition of researchers and experts has identified future research directions aimed at enhancing AI safety, robustness and quality as systems are increasingly integrated into critical functions.

The work highlights the need for improved tools to evaluate, verify and monitor AI behaviour across diverse real-world contexts, including methods to detect harmful outputs, mitigate bias and ensure consistent performance under uncertainty.

The discussion emphasises that technical quality attributes such as reliability, explainability, fairness and alignment with human values should be core areas of focus, especially for high-stakes applications in healthcare, transport, finance and public services.

Researchers advocate for interdisciplinary approaches, combining insights from computer science, ethics, and the social sciences to address systemic risks and to design governance frameworks that balance innovation with public trust.

The article also notes emerging strategies such as formal verification techniques, benchmarks for robustness and continuous post-deployment auditing, which could help contain unintended consequences and improve the safety of AI models before and after deployment at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GDPR violation reports surge across Europe in 2025, study finds

European data protection authorities recorded a sharp rise in GDPR violation reports in 2025, according to a new study by law firm DLA Piper, signalling growing regulatory pressure across the European Union.

Average daily reports surpassed 400 for the first time since the regulation entered into force in 2018, reaching 443 incidents per day, a 22% increase on the previous year. The firm noted that expanding digital systems, new breach-reporting laws, and geopolitical cyber risks may be driving the surge.

Despite the higher number of cases in the EU, total fines remained broadly stable at around €1.2 billion for the year, pushing cumulative GDPR penalties since 2018 to €7.1 billion, underlining regulators’ continued willingness to impose major sanctions.

Ireland once again led enforcement figures, with fines imposed by its Data Protection Commission totalling €4.04 billion, reflecting the presence of major technology firms headquartered there, including Meta, Google, and Apple.

Recent headline penalties included a €1.2 billion fine against Meta and a €530 million sanction against TikTok over data transfers to China, while courts across Europe increasingly consider compensation claims linked to GDPR violations.

Europe opens new digital identity innovation hub in France

ID Campus has opened in the French city of Angers as a new European hub dedicated to identity technologies and trust-based digital services. Led by sovereign provider iDAKTO, the initiative aims to bring together public institutions, startups, and researchers to advance secure online systems.

The campus will support innovation, training, pilot projects, and cross-sector collaboration. A key focus is the rollout of the European Digital Identity Wallet. Deeptech firms, research labs, and international delegations are expected to use the site for testing and cooperation.

The project’s development involved partnerships with public bodies in France, including France Titres, La Mission French Tech, and Angers Loire Metropole, reflecting a wider push to strengthen national and European authentication infrastructure.

The official launch brought together leaders from government and industry to discuss the rise in digital adoption and tightening regulatory frameworks across Europe, as secure digital identity systems become central to public services and cross-border transactions.

European digital sovereignty remains a core driver of the initiative, with policymakers seeking interoperable trust frameworks that reduce reliance on non-European technologies.

AI companions raise growing ethical and mental health concerns

AI companions are increasingly being used for emotional support and social interaction, moving beyond novelty into mainstream use. Research shows that around one in three UK adults engage with AI for companionship, while teenagers and young adults represent some of the most intensive users of these systems.

However, the growing use of AI companions has raised serious mental health and safety concerns. In the United States, several cases have linked AI companions to suicides, prompting increased scrutiny of how these systems respond to vulnerable users.

As a result, regulatory pressure and legal action have increased. Some AI companion providers have restricted access for minors, while lawsuits have been filed against companies accused of failing to provide adequate safeguards. Developers say they are improving training and safety mechanisms, including better detection of mental distress and redirection to real-world support, though implementation varies across platforms.

At the same time, evidence suggests that AI companions can offer perceived benefits. Users report feeling understood, receiving coping advice, and accessing non-judgemental support. For some young users, AI conversations are described as more immediately satisfying than interactions with peers, especially during emotionally difficult moments.

Nevertheless, experts warn that heavy reliance on AI companionship may affect social development and human relationships. Concerns include reduced preparedness for real-world interactions, emotional dependency, and distorted expectations of empathy and reciprocity.

Overall, researchers say AI companionship is a growing societal trend, raising ethical and psychological concerns and intensifying calls for stronger safeguards, especially for minors and vulnerable users.

NVIDIA expands open AI tools for robotics

NVIDIA has unveiled a new suite of open physical AI models and frameworks aimed at accelerating robotics and autonomous systems development. The announcement was made at CES 2026 in the US.

The new tools span simulation, synthetic data generation, training orchestration and edge deployment. NVIDIA said the stack enables robots and autonomous machines to reason, learn and act in real-world environments using shared 3D standards.

Developers showcased applications ranging from construction and factory robots to surgical and service systems. Companies including Caterpillar and NEURA Robotics demonstrated how digital twins and open AI models improve safety and efficiency.

NVIDIA said open-source collaboration is central to advancing physical AI globally. The company aims to shorten development cycles while supporting safer deployment of autonomous machines across industries.

Conversational advertising arrives as OpenAI integrates sponsored content into ChatGPT

OpenAI has begun testing advertising placements inside ChatGPT, marking a shift toward monetising one of the world’s most widely used AI platforms. Sponsored content now appears below chatbot responses for free and low-cost users, integrating promotions directly into conversational queries.

Ads remain separate from organic answers, with OpenAI saying commercial content will not influence AI-generated responses. Users can see why specific ads appear, dismiss irrelevant placements, and disable personalisation. Advertising is excluded for younger users and sensitive topics.

Initial access is limited to enterprise partners, with broader availability expected later. Premium subscription tiers continue without ads, reflecting a freemium model similar to streaming platforms offering both paid and ad-supported options.

Pricing places ChatGPT ads among the most expensive digital formats. The value lies in reaching users at high-intent moments, such as during product research and purchase decisions. Measurement tools remain basic, tracking only impressions and clicks.

OpenAI’s move into advertising signals a broader shift as conversational AI reshapes how people discover information. Future performance data and targeting features will determine whether ChatGPT becomes a core ad channel or a premium niche format.

EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

The scrutiny centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature on the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal that they have been targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.

Large language models mirror human brain responses to unexpected twists

Researchers at the University of Chicago are using AI to uncover insights into how the human brain processes surprise. The project, directed by Associate Professor Monica Rosenberg, compares human and AI responses to narrative moments to explore cognitive processes.

The study involved participants listening to stories whilst researchers recorded their responses through brain scans. Researchers then fed identical stories to the language model Llama, prompting it to predict subsequent text after each segment.

When AI predictions diverged from actual story content, that gap served as a measure of surprise, mirroring the discrepancy human readers experience when expectations fail.

Results showed a striking alignment between AI prediction errors and both participants’ reported feelings and brain-scan activity patterns. The correlation emerged when texts were analysed in 10- to 20-word chunks, suggesting humans and AI encode surprise at the broader level where ideas unfold.

Fourth-year data science student Bella Summe, involved in the Cognition, Attention and Brain Lab research, noted the creative challenge of working in an emerging field.

Few studies have explored whether LLM prediction errors could serve as measures of human surprise, requiring constant problem-solving and experimental design adaptation throughout the project.

Prism launches as OpenAI’s new workspace for scientific papers

OpenAI has launched Prism, a cloud-based LaTeX workspace designed to streamline the drafting, collaboration, and publication of academic papers. The tool integrates writing, citation management, real-time collaboration, and AI assistance into a single environment to reduce workflow friction.

Built specifically for scientific use, Prism embeds GPT-5.2 directly inside documents rather than as a separate chatbot. Researchers can rewrite sections, verify equations, test arguments, and clarify explanations without leaving the editing interface, positioning AI as a background collaborator.

Users can start new LaTeX projects or upload existing files through prism.openai.com using a ChatGPT account. Co-authors can join instantly, enabling simultaneous editing while maintaining structured formatting for equations, references, and manuscript layout.

OpenAI says Prism supports academic search, converts handwritten formulas into clean LaTeX, and allows voice-driven edits for faster reviews. Completed papers export as publication-ready PDFs alongside full source files.

Initially available for free to personal ChatGPT users, the workspace will later expand to Business, Enterprise, and Education plans. The company frames the tool as a practical productivity layer rather than a research disruption platform.

UK minister signals interest in universal basic income amid rising AI job disruption

Jason Stockwood, the UK investment minister, has suggested that a universal basic income could help protect workers as AI reshapes the labour market.

He argued that rapid advances in automation will cause disruptive shifts across several sectors, meaning the country must explore safety mechanisms rather than allowing sudden job losses to deepen inequality. He added that workers will need long-term retraining pathways as roles disappear.

Concern about the economic impact of AI continues to intensify.

Research by Morgan Stanley indicates that the UK is losing more jobs than it is creating because of automation and is being affected more severely than other major economies.

Warnings from London’s mayor, Sadiq Khan, and senior global business figures, including JP Morgan chief executive Jamie Dimon, point to the risk of mass unemployment unless governments and companies step in with support.

Stockwood confirmed that a universal basic income is not part of formal government policy, although he said people inside government are discussing the idea.

He took up his post in September after a long career in the technology sector, including senior roles at Match.com, Lastminute.com and Travelocity, as well as leading Simply Business through a significant sale.

Additionally, Stockwood said he no longer pushes for stronger wealth-tax measures, but he criticised wealthy individuals who seek to minimise their contributions to public finances. He suggested that those who prioritise tax avoidance lack commitment to their communities and the country’s long-term success.
