Google warns Europe risks losing its AI advantage

European business leaders heard an urgent message in Brussels as Google underlined the scale of the continent’s AI opportunity and the risks of falling behind global competitors.

Debbie Weinstein, Google’s President for EMEA, argued that Europe holds immense potential for a new generation of innovative firms. Yet, too few companies can access the advanced technologies that already drive growth elsewhere.

Weinstein noted that only a small share of European businesses use AI, even though the region could unlock over a trillion euros in economic value within a decade.

She suggested that firms are hampered by limited access to cutting-edge models, rather than being supported with the most capable tools. She also warned that abrupt policy shifts and a crowded regulatory landscape make it harder for founders to experiment and expand.

Europe has the skills and talent to build strong AI-driven industries, but it needs more straightforward rules and a long-term approach to training.

Google pointed to its own investments in research centres, cybersecurity hubs and digital infrastructure across the continent, as well as programmes that have trained millions of Europeans in digital and entrepreneurial skills.

Weinstein insisted that a partnership between governments, industry and civil society is essential to prepare workers and businesses for the AI era.

She argued that providing better access to advanced AI, clearer legislation instead of regulatory overlap and sustained investment in skills would allow European firms to compete globally. With those foundations in place, she said Europe could secure its share of the emerging AI economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN warns corporate power threatens human rights

UN human rights chief Volker Türk has highlighted growing challenges posed by powerful corporations and rapidly advancing technologies. At the 14th UN Forum, he warned that the misuse of generative AI could threaten human rights.

He called for robust rules, independent oversight, and safeguards to ensure innovation benefits society rather than exploiting it.

Vulnerable workers, including migrants, women, and those in informal sectors, remain at high risk of exploitation. Mr Türk criticised rollbacks of human rights obligations by some governments and condemned attacks on human rights defenders.

He also raised concerns over climate responsibility, noting that fossil fuel profits continue while the poorest communities face environmental harm and displacement.

Courts and lawmakers in countries such as Brazil, the UK, the US, Thailand, and Colombia are increasingly holding companies accountable for abuses linked to operations, supply chains, and environmental practices.

To support implementation, the UN has launched an OHCHR Helpdesk on Business and Human Rights, offering guidance to governments, companies, and civil society organisations.

Closing the forum, Mr Türk urged stronger global cooperation and broader backing for human rights systems. He proposed the creation of a Global Alliance for human rights, emphasising that human rights should guide decisions shaping the world’s future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can determine whether AI-generated music AI actually from an artist or a song they love. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.

The debate ultimately turns on whether listeners deserve complete transparency. If a track resonates emotionally, the origins may not matter. Many artists who protest against AI training on their music believe that fans deserve to make informed choices as synthetic music becomes more prevalent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

NVIDIA powers a new wave of specialised AI agents to transform business

Agentic AI has entered a new phase as companies rely on specialised systems instead of broad, one-size-fits-all models.

Open-source foundations, such as NVIDIA’s Neuron family, now allow organisations to combine internal knowledge with tailored architectures, leading to agents that understand the precise demands of each workflow.

Firms across cybersecurity, payments and semiconductor engineering are beginning to treat specialisation as the route to genuine operational value.

CrowdStrike is utilising Nemotron and NVIDIA NIM microservices to enhance its Agentic Security Platform, which supports teams by handling high-volume tasks such as alert triage and remediation.

Accuracy has risen from 80 to 98.5 percent, reducing manual effort tenfold and helping analysts manage complex threats with greater speed.

PayPal has taken a similar path by building commerce-focused agents that enable conversational shopping and payments, cutting latency nearly in half while maintaining the precision required across its global network of customers and merchants.

Synopsys is deploying agentic AI throughout chip design workflows by pairing open models with NVIDIA’s accelerated infrastructure. Early trials in formal verification show productivity improvements of 72 percent, offering engineers a faster route to identifying design errors.

The company is blending fine-tuned models with tools such as the NeMo Agent Toolkit and Blueprints to embed agentic support at every stage of development.

Across industries, strategic steps are becoming clear. Organisations begin by evaluating open models before curating and securing domain-specific data and then building agents capable of acting on proprietary information.

Continuous refinement through a data flywheel strengthens long-term performance.

NVIDIA aims to support the shift by promoting Nemotron, NeMo and its broader software ecosystem as the foundation for the next generation of specialised enterprise agents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT for Teachers launched as OpenAI expands educator tools

OpenAI has launched ChatGPT for Teachers, offering US US educators a secure workspace to plan lessons and utilise AI safely. The service is free for verified K–12 staff until June 2027. OpenAI states that its goal is to support classroom tasks without introducing data risks.

Educators can tailor responses by specifying grades, curriculum needs, and preferred formats. Content shared in the workspace is not used to train models by default. The platform includes GPT-5.1 Auto, search, file uploads, and image tools.

The system integrates with widely used school software, including Google Drive, Microsoft 365, and Canva. Teachers can import documents, design presentations, and organise materials in one place. Shared prompt libraries offer examples from other educators.

Collaboration features enable co-planned lessons, shared templates, and school-specific GPTs. OpenAI says these tools aim to reduce administrative workloads. Schools can create collective workspaces to coordinate teaching resources more easily.

The service remains free through June 2027, with pricing updates to follow later. OpenAI plans to keep costs accessible for schools. Educators can begin using the platform by verifying their status through SheerID.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions. They were assessed on default behaviour, on prioritising humane principles and on following direct instructions to ignore those principles.

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.

Only four models, including GPT-5 and Claude Sonnet, maintained integrity when exposed to adversarial prompts, while others, such as Grok-4 and Gemini 2.0 Flash, recorded significant deterioration.

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT unveils new shopping research experience

Since yesterday, ChatGPT has introduced a more comprehensive approach to product discovery with a new shopping research feature, designed to simplify complex purchasing decisions.

Users describe what they need instead of sifting through countless sites, and the system generates personalised buyer guides based on high-quality sources. The feature adapts to each user by asking targeted questions and reflecting previously stored preferences in memory.

The experience has been built with a specialised version of GPT-5 mini trained for shopping tasks through reinforcement learning. It gathers fresh information such as prices, specifications, and availability by reading reliable retail pages directly.

Users can refine the process in real-time by marking products as unsuitable or requesting similar alternatives, enabling a more precise result.

The tool is available on all ChatGPT plans and offers expanded usage during the holiday period. OpenAI emphasises that no chats are shared with retailers and that search results are sourced from public data sources, rather than sponsored content.

Some errors may still occur in product details, yet the intention is to develop a more intuitive and personalised way to navigate an increasingly crowded digital marketplace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia’s results fail to ease AI bubble fears

Record profits and year-on-year revenue growth above 60 percent have put Nvidia at the centre of debate over whether the surge in AI spending signals a bubble or a long-term boom.

CEO Jensen Huang and CFO Colette Kress dismissed concerns about the bubble, highlighting strong demand and expectations of around $65 billion in revenue for the next quarter.

Executives forecast global AI infrastructure spending could reach $3–4 trillion annually by the end of the decade as both generative AI and traditional cloud computing workloads increasingly run on GPUs.

Widespread adoption by major partners, including Meta, Anthropic and Salesforce, suggests lasting momentum rather than short-term hype.

Analysts generally agree that Nvidia’s performance remains robust, but questions persist over the sustainability of heavy investment in AI. Investors continue to monitor whether Big Tech can maintain this pace and if highly leveraged customers might expose Nvidia to future risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

India confronts rising deepfake abuse as AI tools spread

Deepfake abuse is accelerating across India as AI tools make it easy to fabricate convincing videos and images. Researchers warn that manipulated media now fuels fraud, political disinformation and targeted harassment. Public awareness often lags behind the pace of generative technology.

Recent cases involving Ranveer Singh and Aamir Khan showed how synthetic political endorsements can spread rapidly online. Investigators say cloned voices and fabricated footage circulated widely during election periods. Rights groups warn that such incidents undermine trust in media and public institutions.

Women face rising risks from non-consensual deepfakes used for harassment, blackmail and intimidation. Cases involving Rashmika Mandanna and Girija Oak intensified calls for stronger protections. Victims report significant emotional harm as edited images spread online.

Security analysts warn that deepfakes pose growing risks to privacy, dignity and personal safety. Users can watch for cues such as uneven lighting, distorted edges, or overly clean audio. Experts also advise limiting the sharing of media and using strong passwords and privacy controls.

Digital safety groups urge people to avoid engaging with manipulated content and to report suspected abuse promptly. Awareness and early detection remain critical as cases continue to rise. Policymakers are being encouraged to expand safeguards and invest in public education on emerging risks associated with AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Waymo wins regulatory green light to expand robotaxi reach in Bay Area and SoCal

Waymo has received regulatory approval from the California Department of Motor Vehicles to deploy its fully autonomous vehicles across significantly more territory.

In the Bay Area, the newly permitted regions include much of the East Bay, the North Bay (including Napa), and the Sacramento area. In Southern California, Waymo’s newly approved zone stretches from Santa Clarita down to San Diego.

While this approval allows for driverless operation, Waymo still requires additional regulatory clearances before it can begin carrying paying passengers in certain parts of the expansion area. The company says it plans to start welcoming riders in San Diego by mid-2026.

From a policy and urban mobility perspective, this marks a significant milestone for Waymo, laying the groundwork for a truly statewide robotaxi network. It will be essential to monitor how this expansion interacts with local transit planning, safety regulation, and infrastructure demands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot