ChatGPT ‘adult mode’ launch delayed as OpenAI focuses on core improvements

OpenAI has postponed the launch of ChatGPT’s ‘adult mode’, a feature designed to let verified adult users access erotica and other mature content.

Teams are focusing on improving intelligence, personality and proactive behaviour instead of releasing the feature immediately.

The feature was first announced by Sam Altman in October, with an initial December rollout, and aims to allow adults more freedom while maintaining safety for younger users.

The project faced an earlier delay as internal teams prioritised the core ChatGPT experience.

OpenAI stated it still supports the principle of treating adults like adults but warned that achieving the right experience will require more time. No new release date has been provided.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The EU faces growing AI copyright disputes

Courts across Europe are examining how copyright law applies to AI systems trained on large datasets, reviewing whether existing rules allow AI developers to use copyrighted books, music and journalism without permission.

One closely watched dispute involves a publisher challenging Google over summaries produced by its Gemini chatbot. The case, before the EU court in Luxembourg, could test how press publishers’ rights apply to AI-generated outputs.

Legal experts warn the ruling in Luxembourg may not resolve wider questions about AI training data. Many disputes in Europe focus on the EU copyright directive and its text and data mining exception.

Additional lawsuits across Europe involving music rights group GEMA and OpenAI are expected to continue for years. Policymakers in Europe are also considering updates to copyright rules as AI technology expands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU and Canada begin negotiations on a digital trade agreement

The European Commission and Canada have launched negotiations on a new Digital Trade Agreement to strengthen the rules governing cross-border digital commerce.

The initiative was announced in Toronto by the EU Trade Commissioner Maroš Šefčovič and Canadian International Trade Minister Maninder Sidhu.

The agreement will expand the digital dimension of the existing Comprehensive Economic and Trade Agreement, which has already increased trade in goods and services between the two partners.

Officials say the new negotiations aim to create clearer rules for businesses and consumers engaging in cross-border digital transactions.

Proposals under discussion include promoting paperless trade systems, recognising electronic signatures and digital contracts, and prohibiting customs duties on electronic transmissions.

The agreement between the EU and Canada will also seek to prevent protectionist practices such as unjustified data localisation requirements or forced transfers of software source code.

European officials argue that the negotiations reflect a broader effort to develop international standards for digital trade governance while preserving governments’ ability to regulate emerging challenges in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic job losses study reveals no evidence of AI-driven unemployment

A new Anthropic report finds AI has not yet caused significant job losses, introducing ‘observed exposure’ to measure actual workplace AI use.

Researchers combined language model capabilities with workplace data to identify occupations at risk of disruption. The study’s main finding is that although AI can perform many tasks, its actual adoption remains much lower across most industries.

Even in highly digital professions, only a fraction of potential automation results from actual AI use. Computer and mathematics occupations, for instance, rank among the most AI-exposed groups, yet AI currently covers only about 33% of tasks in these fields.

Across the broader economy, a key finding is that many roles experience little or no impact from AI. About 30% of workers are in jobs such as cooking, bartending, mechanics, and lifeguarding, where physical tasks dominate and measured AI exposure is almost zero.

The report also finds no clear evidence that AI adoption has increased unemployment or caused a spike in job losses since generative AI tools began spreading widely in 2022. Rather than triggering sudden job losses, researchers suggest labour-market effects emerge gradually, through slower hiring, shifting skill requirements, and changes in job composition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Berlin becomes home to Google AI research centre

Google has launched its new AI Centre in Berlin, creating a hub for researchers, developers, and leaders from Google DeepMind, Google Research, and Google Cloud. The centre aims to foster collaboration, debate, and innovation in AI.

The opening event highlighted the company’s work in advancing science and healthcare through AI-enabled agents and platforms. Google announced long-term research partnerships with the Technical University of Munich and Helmholtz Munich, backed by the Google.org AI for Science fund.

Built on Google’s existing research and engineering foundations in Germany and globally, the Berlin centre emphasises AI innovations with societal benefits. It will connect experts from science, business, academia, and politics to drive forward responsible AI development.

The centre will also serve as a platform for public engagement, hosting workshops, lectures, and events to raise awareness about AI applications, ethical considerations, and future opportunities across industries and communities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oracle launches AI system designed to predict construction safety risks

The US tech company Oracle has introduced a new AI platform to predict safety risks across construction projects.

The system, called Advisor for Safety, aims to shift industry practices from reactive incident response to predictive risk prevention.

The AI model was trained using safety information equivalent to more than 10,000 project-years across multiple project types and locations.

By analysing historical patterns, the platform generates weekly forecasts that identify projects statistically most likely to experience safety incidents.

The solution also integrates structured safety observation tools through systems such as Oracle Aconex and Oracle Primavera Unifier, allowing field teams to collect consistent data on mobile devices or web platforms.

These inputs improve predictive accuracy while enabling organisations to track potential hazards earlier in the project lifecycle.

According to Oracle, the system combines data streams ranging from incident reports and payroll records to project schedules and operational metrics.

Early adopters reportedly reduced workplace incidents by up to 50 percent and workers’ compensation costs by as much as 75 percent during the first year of use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini leads latest ORCA benchmark on AI maths accuracy

A new round of the ORCA (Omni Research on Calculation in AI) benchmark reveals significant progress in how leading AI chatbots handle real-world mathematical problems, while also highlighting persistent limitations in reliability and consistency.

The latest results show Google’s Gemini 3 Flash moving clearly ahead of competing systems, correctly answering nearly three-quarters of the 500 practical questions used in the benchmark.

Our readers may recall that the platform previously analysed the first edition of the ORCA benchmark, examining how AI chatbots performed on everyday quantitative tasks rather than purely academic problems. The earlier analysis already showed notable gaps between systems and raised questions about the reliability of AI models for calculations people might encounter in daily life.

The second benchmark compares four widely accessible models: ChatGPT-5.2, Gemini 3 Flash, Grok-4.1 and DeepSeek V3.2. Gemini recorded the largest improvement, decisively outpacing the others. ChatGPT and DeepSeek posted smaller but steady gains, while Grok’s results declined slightly in several subject areas.

Performance improvements were uneven across domains, with Gemini showing particularly strong gains in fields such as biology, chemistry, physics and health-related calculations.

Closer examination of the errors reveals why AI still struggles with mathematical accuracy. Calculation mistakes have increased as a share of total errors, while rounding and formatting problems have decreased.

Researchers explain that large language models do not actually compute numbers in the same way that calculators do. Instead, they predict likely sequences of words and numbers, which can lead to small shortcuts during multi-step reasoning that eventually produce incorrect results.

The benchmark also highlights another challenge: instability. The same question can produce different answers when asked multiple times, even when the model initially responded correctly. Such variation reflects the probabilistic nature of AI systems.

As a result, the benchmark concludes that AI chatbots can assist with calculations but cannot yet match the consistency of traditional calculators, which always return the same answer for the same input.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Debate grows over the future of privacy

Experts gathered in London, UK, to examine how the concept of privacy has evolved over centuries. Discussions highlighted that privacy was only widely recognised as a legal and social norm after the Second World War.

Speakers in London noted that earlier societies often viewed privacy with suspicion or did not recognise it at all. Historical examples discussed included practices from Roman society and the French monarchy.

Modern legal protections expanded rapidly in recent decades, with privacy laws now covering about 80 percent of the global population. Scholars said the concept remains relatively new despite its central role in modern democracies.

The debate also explored whether privacy will remain a stable social value as technology evolves. Analysts said emerging technologies such as AI are reshaping debates over personal data and surveillance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU Commission’s new guidance to push Cyber Resilience Act

The EU Commission has opened a public consultation on draft guidance to help companies apply the EU’s Cyber Resilience Act (CRA), a regulation that sets baseline cybersecurity requirements for hardware and software ‘products with digital elements’ to reduce vulnerabilities and improve security throughout a product’s life cycle. The guidance is framed as practical help, especially for microenterprises and SMEs, and the consultation runs until 31 March 2026.

The CRA is designed to make ‘secure by design’ the default for connected products people use every day, from consumer devices to business software, while giving users clearer information about a product’s security properties. In timeline terms, the Act entered into force on 10 December 2024. The incident reporting duties start on 11 September 2026, and the main obligations apply from 11 December 2027, giving industry a runway but also a clear countdown.

What the Commission is trying to nail down now are the parts companies have found hardest to interpret: how the rules apply to remote data processing solutions (cloud-linked features), how they treat free and open-source software, what ‘support periods’ mean in practice (i.e. how long security upkeep is expected), and how the CRA fits alongside other EU laws. In other words, this is less about announcing new rules and more about reducing legal grey zones before enforcement ramps up.

The guidance push also lands amid a broader policy drive, as on 20 January 2026, the Commission proposed a new EU cybersecurity package, built around a revised Cybersecurity Act and targeted NIS2 amendments. The package aims to harden ICT supply chains, including a framework to jointly identify and mitigate risks across 18 critical sectors, and would enable mandatory ‘de-risking’ of EU mobile telecom networks away from high‑risk third‑country suppliers. It also proposes a revamped EU cybersecurity certification system with simpler procedures, giving a default 12‑month timeline to develop certification schemes, while cutting red tape for tens of thousands of firms and strengthening ENISA’s role, including early warnings, ransomware support, and a major budget boost.

Taken together, the EU is moving from strategy documents to operational details: product security on one side (CRA) and ecosystem-level resilience on the other (supply chains, certification, incident reporting and supervision). For companies, that can be both reassuring and demanding: clearer guidance should reduce uncertainty, but the compliance reality may still be layered, especially for businesses spanning devices, software, cloud features, and cross-border operations. The Commission’s stakeholder feedback window is essentially a test of whether these rules can be made workable without diluting their bite.

Why does it matter?

Beyond technical risk, this is increasingly about sovereignty: who sets the rules for digital products, who can be trusted in supply chains, and how much dependency is acceptable in critical infrastructure. Digital governance expert Jovan Kurbalija argues that full ‘stack’ digital sovereignty, that is to say control over infrastructure, services, data, and AI knowledge, is concentrated in very few states, while most countries must balance openness with autonomy. The EU’s current wave of cybersecurity governance fits that pattern: it’s an attempt to turn security standards, certification, and supply-chain choices into a practical form of strategic control, not just to prevent hacks, but to protect democratic institutions, economic competitiveness, and trust in the digital tools people rely on.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI explains 5 AI value models transforming enterprise strategy

AI is beginning to reshape corporate strategy as organisations shift from isolated technology experiments to broader operational transformation.

According to OpenAI, businesses that treat AI as a collection of disconnected pilots risk missing the bigger structural change that the technology enables.

A new framework describes five value models through which AI can gradually reshape companies. The first stage focuses on workforce empowerment, where tools such as ChatGPT spread AI capabilities across teams and improve everyday productivity.

Once employees develop fluency, organisations can introduce AI-native distribution models that transform how customers discover products and interact with digital services.

More advanced stages involve specialised systems. Expert capability integrates AI into research, creative production, and domain-specific analysis, allowing professionals to explore a wider range of ideas and experiments.

Meanwhile, systems and dependency management introduce AI tools capable of safely updating interconnected digital environments, including codebases, documentation, and operational processes.

The final stage involves full process re-engineering through autonomous agents. In such environments, AI systems coordinate complex workflows across departments while maintaining governance, accountability, and auditability.

Organisations that successfully progress through these stages may eventually redesign their business models rather than merely improving efficiency within existing structures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!