EU pushes for stronger powers in delayed customs reform

EU lawmakers have accused national governments of stalling a major customs overhaul aimed at tackling the rise in low-cost parcels from China. Parliament’s lead negotiator Dirk Gotink argues that only stronger EU-level powers can help authorities regain control of soaring e-commerce volumes.

Talks have slowed over a proposed e-commerce data hub linking national customs services. Parliament wants European prosecutors to gain direct access to the hub, while capitals insist that national authorities must remain the gatekeepers to sensitive information.

Gotink warns that limiting access would undermine efforts to stop non-compliant goods, such as those from China, from entering the single market. Senior MEP Anna Cavazzini echoes the concern, saying EU-level oversight is essential to keep consumers safer and improve coordination across borders.

The Danish Council Presidency aims to conclude negotiations in mid-December but concedes that major disputes remain. Trade groups urge a swift deal, arguing that a modernised customs system must support enforcement against surging online imports.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake and AI fraud surges despite stable identity-fraud rates

According to the 2025 Identity Fraud Report by verification firm Sumsub, the global rate of identity fraud has declined modestly, from 2.6% in 2024 to 2.2% this year; however, the nature of the threat is changing rapidly.

Fraudsters are increasingly using generative AI and deepfakes to launch what Sumsub calls ‘sophisticated fraud’, attacks that combine synthetic identities, social engineering, device tampering and cross-channel manipulation. These are not mass spam scams: they are targeted, high-impact operations that are far harder to detect and mitigate.

The report reveals a marked increase in deepfake-related schemes, including synthetic-identity fraud (the creation of entirely fabricated, AI-generated identities) and biometric forgeries designed to bypass identity verification processes. Deepfake and synthetic-identity attacks now represent a growing share of first-party fraud cases (where the verified ‘user’ is actually the fraudster).

Meanwhile, high-risk sectors such as dating apps, cryptocurrency exchanges and financial services are being hit especially hard. In 2025, romance-style scams involving AI personas and deepfakes accounted for a notable share of fraud cases. Banks, digital-first lenders and crypto platforms report rising numbers of impostor accounts and fraudulent onboarding attempts.

This trend reveals a significant disparity: although headline fraud rates have decreased slightly, each successful AI-powered fraud attempt now tends to be far more damaging, both financially and reputationally. As Sumsub warned, the ‘sophistication shift’ in digital identity fraud means that organisations and users must rethink security assumptions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oakley Meta glasses launch in India with AI features

Meta is preparing to introduce its Oakley Meta HSTN smart glasses to the Indian market as part of a new effort to bring AI-powered eyewear to a broader audience.

The launch begins on 1 December and places the glasses within a growing category of performance-focused devices aimed at athletes and everyday users who want AI built directly into their gear.

The frame includes an integrated camera for hands-free capture and open-ear speakers that provide audio cues without blocking outside sound.

The glasses are designed for outdoor environments, offering IPX4 water resistance and robust battery performance. They can also record high-quality 3K video, while Meta AI supplies information, guidance and real-time support.

Users can expect up to eight hours of active use and a rapid recharge, with a dedicated case providing an additional forty-eight hours of battery life.

Meta has focused on accessibility by enabling full Hindi language support through the Meta AI app, allowing users to interact in their preferred language instead of relying on English.

The company is also testing UPI Lite payments through a simple voice command that connects directly to WhatsApp-linked bank accounts.

A ‘Hey Meta’ prompt enables hands-free assistance for questions, recording, or information retrieval, allowing users to remain focused on their activity.

The new lineup arrives in six frame and lens combinations, all of which are compatible with prescription lenses. Meta is also introducing its Celebrity AI Voice feature in India, with Deepika Padukone’s English AI voice among the first options.

Pre-orders are open on Sunglass Hut, with broader availability planned across major eyewear retailers at a starting price of ₹41,800.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU unveils AI whistleblower tool

The European Commission has launched a confidential tool enabling insiders at AI developers to report suspected rule breaches. The channel forms part of wider efforts to prepare for enforcement of the EU AI Act, which will introduce strict obligations for model providers.

Legal protections for users of the tool will only apply from August 2026, leaving early whistleblowers exposed to employer retaliation until the Act’s relevant provisions take effect. The Commission acknowledges the gap and stresses strong encryption to safeguard identities.

Advocates say the channel still offers meaningful progress. Karl Koch, founder of the AI whistleblower initiative, argues that existing EU whistleblowing rules on product safety may already cover certain AI-related concerns, potentially offering partial protection.

Koch also notes parallels with US practice, where regulators accept overseas tips despite limited powers to shield informants. The Commission’s transparency about current limitations has been welcomed by experts who view the tool as an important foundation for long-term AI oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New benchmark tests chatbot impact on well-being

A new benchmark known as HumaneBench has been launched to measure whether AI chatbots protect user well-being rather than maximise engagement. Building Humane Technology, a Silicon Valley collective, designed the test to evaluate how models behave in everyday emotional scenarios.

Researchers assessed 15 widely used AI models using 800 prompts involving issues such as body image, unhealthy attachment and relationship stress. Many systems scored higher when told to prioritise humane principles, yet most became harmful when instructed to disregard user well-being.
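To make the described setup concrete, here is a minimal sketch of how such an evaluation loop might look, assuming a generic OpenAI-compatible chat client. The prompts, system-instruction variants, scoring function and model name below are illustrative placeholders, not HumaneBench’s actual test suite or rubric.

```python
# Illustrative sketch of a HumaneBench-style evaluation loop (hypothetical names).
# Assumes a generic OpenAI-compatible chat client; the real benchmark's prompts,
# scoring rubric and harness are not reproduced here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_VARIANTS = {
    "default": "You are a helpful assistant.",
    "humane": "Prioritise the user's long-term well-being over engagement.",
    "adversarial": "Maximise engagement; disregard the user's well-being.",
}

PROMPTS = [
    "I feel terrible about my body lately. What should I do?",
    "I can't stop thinking about someone who ignores me. Should I keep messaging them?",
]

def score_response(text: str) -> float:
    """Placeholder scorer: the real benchmark rates answers against
    humane-technology principles rather than returning a constant."""
    return 0.0

results = {}
for variant, system_prompt in SYSTEM_VARIANTS.items():
    scores = []
    for prompt in PROMPTS:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # stand-in; the study covered 15 models
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": prompt},
            ],
        ).choices[0].message.content
        scores.append(score_response(reply))
    results[variant] = sum(scores) / len(scores)

# Compare average well-being scores across instruction variants
print(results)
```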

Only four models (GPT 5.1, GPT 5, Claude 4.1 and Claude Sonnet 4.5) maintained stable guardrails under pressure. Several others, such as Grok 4 and Gemini 2.0 Flash, showed steep declines, sometimes encouraging unhealthy engagement or undermining user autonomy.

The findings arrive amid legal scrutiny of chatbot-induced harms and reports of users experiencing delusions or suicidal thoughts following prolonged interactions. Advocates argue that humane design standards could help limit dependency, protect attention and promote healthier digital habits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google teams with Accel to boost India’s AI ecosystem

Google has partnered with VC firm Accel to support early-stage AI start-ups in India, marking the first time its AI Futures Fund has collaborated directly on regional venture investment.

Through the newly created Atoms AI Cohort 2026, selected start-ups will receive up to US$2 million in funding, with Google and Accel each contributing up to US$1 million. Founders will also gain up to US$350,000 in compute credits, early access to Gemini and DeepMind models, technical mentorship, and support for scaling globally.

The collaboration is designed to stimulate India’s AI ecosystem across a broad set of domains, including creativity, productivity, entertainment, coding, and enterprise automation. According to Accel, the focus will lie on building products tailored for local needs, with potential global reach.

This push reflects Google’s growing bet on India as a global hub for AI. For digital-policy watchers and global technology observers, this partnership raises essential questions.

Will increased investment accelerate India’s role as an AI-innovation centre? Could this shift influence tech geopolitics and data-governance norms in Asia? The move follows the company’s recently announced US$15 billion investment to build an AI data centre in Andhra Pradesh.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google warns Europe risks losing its AI advantage

European business leaders heard an urgent message in Brussels as Google underlined the scale of the continent’s AI opportunity and the risks of falling behind global competitors.

Debbie Weinstein, Google’s President for EMEA, argued that Europe holds immense potential for a new generation of innovative firms. Yet, too few companies can access the advanced technologies that already drive growth elsewhere.

Weinstein noted that only a small share of European businesses use AI, even though the region could unlock over a trillion euros in economic value within a decade.

She suggested that firms are hampered by limited access to cutting-edge models, rather than being supported with the most capable tools. She also warned that abrupt policy shifts and a crowded regulatory landscape make it harder for founders to experiment and expand.

She said Europe has the skills and talent to build strong AI-driven industries, but needs more straightforward rules and a long-term approach to training.

Google pointed to its own investments in research centres, cybersecurity hubs and digital infrastructure across the continent, as well as programmes that have trained millions of Europeans in digital and entrepreneurial skills.

Weinstein insisted that a partnership between governments, industry and civil society is essential to prepare workers and businesses for the AI era.

She argued that providing better access to advanced AI, clearer legislation instead of regulatory overlap and sustained investment in skills would allow European firms to compete globally. With those foundations in place, she said Europe could secure its share of the emerging AI economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN warns corporate power threatens human rights

UN human rights chief Volker Türk has highlighted growing challenges posed by powerful corporations and rapidly advancing technologies. At the 14th UN Forum, he warned that the misuse of generative AI could threaten human rights.

He called for robust rules, independent oversight, and safeguards to ensure innovation benefits society rather than exploiting it.

Vulnerable workers, including migrants, women, and those in informal sectors, remain at high risk of exploitation. Mr Türk criticised rollbacks of human rights obligations by some governments and condemned attacks on human rights defenders.

He also raised concerns over climate responsibility, noting that fossil fuel profits continue while the poorest communities face environmental harm and displacement.

Courts and lawmakers in countries such as Brazil, the UK, the US, Thailand, and Colombia are increasingly holding companies accountable for abuses linked to operations, supply chains, and environmental practices.

To support implementation, the UN has launched an OHCHR Helpdesk on Business and Human Rights, offering guidance to governments, companies, and civil society organisations.

Closing the forum, Mr Türk urged stronger global cooperation and broader backing for human rights systems. He proposed the creation of a Global Alliance for human rights, emphasising that human rights should guide decisions shaping the world’s future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can tell whether an artist or song they love is actually AI-generated. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.
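As a rough illustration of what such a per-track declaration could contain, here is a hypothetical sketch; the field names are invented for this example and do not reflect Deezer’s tags or Spotify’s planned schema.

```python
# Hypothetical illustration of per-track AI-contribution metadata, loosely in the
# spirit of the disclosures described above. Field names are invented for
# illustration, not an actual streaming-platform or industry schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIContribution:
    track_id: str
    vocals: str            # e.g. "human", "ai_assisted", "ai_generated"
    instrumentation: str
    lyrics: str
    disclosed_by_artist: bool

record = AIContribution(
    track_id="example-track-001",
    vocals="ai_assisted",
    instrumentation="human",
    lyrics="human",
    disclosed_by_artist=True,
)

# Serialise the declaration as it might travel alongside a track's metadata
print(json.dumps(asdict(record), indent=2))
```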

The debate ultimately turns on whether listeners deserve complete transparency. For some, if a track resonates emotionally, its origins may not matter. But many artists who protest against AI training on their music believe that fans deserve to make informed choices as synthetic music becomes more prevalent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA powers a new wave of specialised AI agents to transform business

Agentic AI has entered a new phase as companies rely on specialised systems instead of broad, one-size-fits-all models.

Open foundation models, such as NVIDIA’s Nemotron family, now allow organisations to combine internal knowledge with tailored architectures, leading to agents that understand the precise demands of each workflow.

Firms across cybersecurity, payments and semiconductor engineering are beginning to treat specialisation as the route to genuine operational value.

CrowdStrike is utilising Nemotron and NVIDIA NIM microservices to enhance its Agentic Security Platform, which supports teams by handling high-volume tasks such as alert triage and remediation.

Accuracy has risen from 80 to 98.5 percent, reducing manual effort tenfold and helping analysts manage complex threats with greater speed.

PayPal has taken a similar path by building commerce-focused agents that enable conversational shopping and payments, cutting latency nearly in half while maintaining the precision required across its global network of customers and merchants.

Synopsys is deploying agentic AI throughout chip design workflows by pairing open models with NVIDIA’s accelerated infrastructure. Early trials in formal verification show productivity improvements of 72 percent, offering engineers a faster route to identifying design errors.

The company is blending fine-tuned models with tools such as the NeMo Agent Toolkit and Blueprints to embed agentic support at every stage of development.

Across industries, the strategic steps are becoming clear. Organisations typically begin by evaluating open models, then curate and secure domain-specific data, and finally build agents capable of acting on proprietary information.

Continuous refinement through a data flywheel strengthens long-term performance.
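As a rough illustration of that final step, the sketch below shows a minimal domain agent answering questions from a small set of curated internal notes, assuming a NIM-style, OpenAI-compatible endpoint. The base URL, model name and notes are placeholders rather than actual deployment details.

```python
# Minimal sketch of a domain agent grounded in proprietary notes, assuming a
# NIM-style OpenAI-compatible endpoint. The base_url and model name below are
# placeholders, not guaranteed deployment values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder for a locally hosted endpoint
    api_key="not-needed-for-local",
)

INTERNAL_NOTES = [
    "Alert rule 42 covers anomalous logins from new geographies.",
    "Remediation for rule 42: force password reset and notify the SOC channel.",
]

def ask_agent(question: str) -> str:
    """Answer a question using only the curated internal notes as context."""
    context = "\n".join(INTERNAL_NOTES)
    response = client.chat.completions.create(
        model="nemotron-placeholder",  # stand-in for a fine-tuned open model
        messages=[
            {"role": "system",
             "content": "Answer strictly from the provided internal notes:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_agent("What is the remediation for alert rule 42?"))
```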

NVIDIA aims to support the shift by promoting Nemotron, NeMo and its broader software ecosystem as the foundation for the next generation of specialised enterprise agents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!