Google warns Europe risks losing its AI advantage

European business leaders heard an urgent message in Brussels as Google underlined the scale of the continent’s AI opportunity and the risks of falling behind global competitors.

Debbie Weinstein, Google’s President for EMEA, argued that Europe holds immense potential for a new generation of innovative firms. Yet, too few companies can access the advanced technologies that already drive growth elsewhere.

Weinstein noted that only a small share of European businesses use AI, even though the region could unlock over a trillion euros in economic value within a decade.

She suggested that firms are hampered by limited access to cutting-edge models, rather than being supported with the most capable tools. She also warned that abrupt policy shifts and a crowded regulatory landscape make it harder for founders to experiment and expand.

Europe has the skills and talent to build strong AI-driven industries, but it needs more straightforward rules and a long-term approach to training.

Google pointed to its own investments in research centres, cybersecurity hubs and digital infrastructure across the continent, as well as programmes that have trained millions of Europeans in digital and entrepreneurial skills.

Weinstein insisted that a partnership between governments, industry and civil society is essential to prepare workers and businesses for the AI era.

She argued that providing better access to advanced AI, clearer legislation instead of regulatory overlap and sustained investment in skills would allow European firms to compete globally. With those foundations in place, she said Europe could secure its share of the emerging AI economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe urged to accelerate AI adoption

European policymakers are being urged to accelerate the adoption of AI, as Christine Lagarde warns that Europe risks missing another major technological shift. Her message highlights that global AI investment is soaring, yet its economic impact remains limited, similar to that of earlier innovation waves.

Lagarde argues that AI could boost productivity faster than past technologies because the infrastructure already exists, and the systems can improve their own performance. Scientific progress powered by AI, such as the rapid prediction of protein structures, signals how R&D can scale far quicker than before.

Europe’s challenge, she notes, is not building frontier models but ensuring rapid deployment across industries. Strong uptake of generative AI by European firms is encouraging, but fragmented regulation, high energy costs and limited risk capital remain significant frictions.

Strategic resilience in chips, data centres and interoperable standards is also essential to avoid deeper dependence on non-European systems.

Greater cooperation in shared data spaces, such as Manufacturing-X and the European Health Data Space, could unlock competitive advantages. Lagarde emphasises that Europe must act swiftly, as delays would hinder adoption and erode industrial competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Claude Opus 4.5 brings smarter AI to apps and developers

Anthropic has launched Claude Opus 4.5, now available on apps, API, and major cloud platforms. Priced at $ 5 per million tokens and $25 per million tokens, the update makes Opus-level AI capabilities accessible to a broader range of users, teams, and enterprises.

Alongside the model, updates to Claude Developer Platform and Claude Code introduce new tools for longer-running agents and enhanced integration with Excel, Chrome, and desktop apps.

Early tests indicate that Opus 4.5 can handle complex reasoning and problem-solving with minimal guidance. It outperforms previous versions on coding, vision, reasoning, and mathematics benchmarks, and even surpasses top human candidates in technical take-home exams.

The model demonstrates creative approaches to multi-step problems while remaining aligned with safety and policy constraints.

Significant improvements have been made to robustness and security. Claude Opus 4.5 resists prompt injection and handles complex tasks with less intervention through effort controls, context compaction, and multi-agent coordination.

Users can manage token usage more efficiently while achieving superior performance.

Claude Code now offers Plan Mode and desktop functionality for multiple simultaneous sessions, and consumer apps support uninterrupted long conversations. Beta access for Excel and Chrome lets enterprise and team users fully utilise Opus 4.5’s workflow improvements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN warns corporate power threatens human rights

UN human rights chief Volker Türk has highlighted growing challenges posed by powerful corporations and rapidly advancing technologies. At the 14th UN Forum, he warned that the misuse of generative AI could threaten human rights.

He called for robust rules, independent oversight, and safeguards to ensure innovation benefits society rather than exploiting it.

Vulnerable workers, including migrants, women, and those in informal sectors, remain at high risk of exploitation. Mr Türk criticised rollbacks of human rights obligations by some governments and condemned attacks on human rights defenders.

He also raised concerns over climate responsibility, noting that fossil fuel profits continue while the poorest communities face environmental harm and displacement.

Courts and lawmakers in countries such as Brazil, the UK, the US, Thailand, and Colombia are increasingly holding companies accountable for abuses linked to operations, supply chains, and environmental practices.

To support implementation, the UN has launched an OHCHR Helpdesk on Business and Human Rights, offering guidance to governments, companies, and civil society organisations.

Closing the forum, Mr Türk urged stronger global cooperation and broader backing for human rights systems. He proposed the creation of a Global Alliance for human rights, emphasising that human rights should guide decisions shaping the world’s future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

White House launches Genesis Mission for AI-driven science

Washington prepares for a significant shift in research as the White House launches the Genesis Mission, a national push to accelerate innovation through advanced AI. The initiative utilises AI to enhance US US technological leadership in a competitive global landscape.

The programme puts the Department of Energy at the centre, tasked with building a unified AI platform linking supercomputers, federal datasets and national laboratories.

The goal is to develop AI models and agents that automate experiments, test hypotheses and accelerate breakthroughs in key scientific fields.

Federal agencies, universities and private firms will conduct coordinated research using shared data spaces, secure computing and standardised partnership frameworks. Priority areas cover biotechnology, semiconductors, quantum science, critical materials and next-generation energy.

Officials argue that the Genesis Mission represents one of the most ambitious attempts to modernise US research infrastructure. Annual reviews will track scientific progress, security, collaborations and AI-driven breakthroughs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can determine whether AI-generated music AI actually from an artist or a song they love. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.

The debate ultimately turns on whether listeners deserve complete transparency. If a track resonates emotionally, the origins may not matter. Many artists who protest against AI training on their music believe that fans deserve to make informed choices as synthetic music becomes more prevalent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

NVIDIA powers a new wave of specialised AI agents to transform business

Agentic AI has entered a new phase as companies rely on specialised systems instead of broad, one-size-fits-all models.

Open-source foundations, such as NVIDIA’s Neuron family, now allow organisations to combine internal knowledge with tailored architectures, leading to agents that understand the precise demands of each workflow.

Firms across cybersecurity, payments and semiconductor engineering are beginning to treat specialisation as the route to genuine operational value.

CrowdStrike is utilising Nemotron and NVIDIA NIM microservices to enhance its Agentic Security Platform, which supports teams by handling high-volume tasks such as alert triage and remediation.

Accuracy has risen from 80 to 98.5 percent, reducing manual effort tenfold and helping analysts manage complex threats with greater speed.

PayPal has taken a similar path by building commerce-focused agents that enable conversational shopping and payments, cutting latency nearly in half while maintaining the precision required across its global network of customers and merchants.

Synopsys is deploying agentic AI throughout chip design workflows by pairing open models with NVIDIA’s accelerated infrastructure. Early trials in formal verification show productivity improvements of 72 percent, offering engineers a faster route to identifying design errors.

The company is blending fine-tuned models with tools such as the NeMo Agent Toolkit and Blueprints to embed agentic support at every stage of development.

Across industries, strategic steps are becoming clear. Organisations begin by evaluating open models before curating and securing domain-specific data and then building agents capable of acting on proprietary information.

Continuous refinement through a data flywheel strengthens long-term performance.

NVIDIA aims to support the shift by promoting Nemotron, NeMo and its broader software ecosystem as the foundation for the next generation of specialised enterprise agents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT for Teachers launched as OpenAI expands educator tools

OpenAI has launched ChatGPT for Teachers, offering US US educators a secure workspace to plan lessons and utilise AI safely. The service is free for verified K–12 staff until June 2027. OpenAI states that its goal is to support classroom tasks without introducing data risks.

Educators can tailor responses by specifying grades, curriculum needs, and preferred formats. Content shared in the workspace is not used to train models by default. The platform includes GPT-5.1 Auto, search, file uploads, and image tools.

The system integrates with widely used school software, including Google Drive, Microsoft 365, and Canva. Teachers can import documents, design presentations, and organise materials in one place. Shared prompt libraries offer examples from other educators.

Collaboration features enable co-planned lessons, shared templates, and school-specific GPTs. OpenAI says these tools aim to reduce administrative workloads. Schools can create collective workspaces to coordinate teaching resources more easily.

The service remains free through June 2027, with pricing updates to follow later. OpenAI plans to keep costs accessible for schools. Educators can begin using the platform by verifying their status through SheerID.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions. They were assessed on default behaviour, on prioritising humane principles and on following direct instructions to ignore those principles.

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.

Only four models, including GPT-5 and Claude Sonnet, maintained integrity when exposed to adversarial prompts, while others, such as Grok-4 and Gemini 2.0 Flash, recorded significant deterioration.

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT unveils new shopping research experience

Since yesterday, ChatGPT has introduced a more comprehensive approach to product discovery with a new shopping research feature, designed to simplify complex purchasing decisions.

Users describe what they need instead of sifting through countless sites, and the system generates personalised buyer guides based on high-quality sources. The feature adapts to each user by asking targeted questions and reflecting previously stored preferences in memory.

The experience has been built with a specialised version of GPT-5 mini trained for shopping tasks through reinforcement learning. It gathers fresh information such as prices, specifications, and availability by reading reliable retail pages directly.

Users can refine the process in real-time by marking products as unsuitable or requesting similar alternatives, enabling a more precise result.

The tool is available on all ChatGPT plans and offers expanded usage during the holiday period. OpenAI emphasises that no chats are shared with retailers and that search results are sourced from public data sources, rather than sponsored content.

Some errors may still occur in product details, yet the intention is to develop a more intuitive and personalised way to navigate an increasingly crowded digital marketplace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!