Copilot will be removed from WhatsApp on 15 January 2026

Microsoft will withdraw Copilot from WhatsApp as of 15 January 2026, following the implementation of new platform rules that ban all LLM chatbots.

The service helped millions of users interact with their AI companion inside an everyday messaging environment, yet the updated policy leaves no option for continued support.

Copilot access will continue on the mobile app, the web portal and Windows, offering fuller functionality instead of the limited experience available on WhatsApp.

Users are encouraged to rely on these platforms for ongoing features such as Copilot Voice, Vision and Mico, which expand everyday use across a broader set of tasks.

Chat history cannot be transferred because WhatsApp operated the service without authentication; therefore, users must manually export their conversations before the deadline. Copilot remains free across supported platforms, although some advanced features require a subscription.

Microsoft is working to ensure a smooth transition and stresses that users can expect a more capable experience after leaving WhatsApp, as development resources now focus on its dedicated environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU pushes for stronger powers in delayed customs reform

EU lawmakers have accused national governments of stalling a major customs overhaul aimed at tackling the rise in low-cost parcels from China. Parliament’s lead negotiator Dirk Gotink argues that only stronger EU-level powers can help authorities regain control of soaring e-commerce volumes.

Talks have slowed over a proposed e-commerce data hub linking national customs services. Parliament wants European prosecutors to gain direct access to the hub, while capitals insist that national authorities must remain the gatekeepers to sensitive information.

Gotink warns that limiting access would undermine efforts to stop non-compliant goods such as those from China, entering the single market. Senior MEP Anna Cavazzini echoes the concern, saying EU-level oversight is essential to keep consumers safer and improve coordination across borders.

The Danish Council Presidency aims to conclude negotiations in mid-December but concedes that major disputes remain. Trade groups urge a swift deal, arguing that a modernised customs system must support enforcement against surging online imports.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake and AI fraud surges despite stable identity-fraud rates

According to the 2025 Identity Fraud Report by verification firm Sumsub, the global rate of identity fraud has declined modestly, from 2.6% in 2024 to 2.2% this year; however, the nature of the threat is changing rapidly.

Fraudsters are increasingly using generative AI and deepfakes to launch what Sumsub calls ‘sophisticated fraud’, attacks that combine synthetic identities, social engineering, device tampering and cross-channel manipulation. These are not mass spam scams: they are targeted, high-impact operations that are far harder to detect and mitigate.

The report reveals a marked increase in deepfake-related schemes, including synthetic-identity fraud (the creation of entirely fake but AI-generated identities) and biometric forgeries designed to bypass identity verification processes. Deepfake-fraud and synthetic-identity attacks now represent a growing share of first-party fraud cases (where the verified ‘user’ is actually the fraudster).

Meanwhile, high-risk sectors such as dating apps, cryptocurrency exchanges and financial services are being hit especially hard. In 2025, romance-style scams involving AI personas and deepfakes accounted for a notable share of fraud cases. Banks, digital-first lenders and crypto platforms report rising numbers of impostor accounts and fraudulent onboarding attempts.

This trend reveals a significant disparity: although headline fraud rates have decreased slightly, each successful AI-powered fraud attempt now tends to be far more damaging, both financially and reputationally. As Sumsub warned, the ‘sophistication shift’ in digital identity fraud means that organisations and users must rethink security assumptions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Oakley Meta glasses launch in India with AI features

Meta is preparing to introduce its Oakley Meta HSTN smart glasses to the Indian market as part of a new effort to bring AI-powered eyewear to a broader audience.

A launch that begins on 1 December and places the glasses within a growing category of performance-focused devices aimed at athletes and everyday users who want AI built directly into their gear.

The frame includes an integrated camera for hands-free capture and open-ear speakers that provide audio cues without blocking outside sound.

These glasses are designed to suit outdoor environments, offering IPX4 water resistance and robust battery performance. Also, they can record high-quality 3K video, while Meta AI supplies information, guidance and real-time support.

Users can expect up to eight hours of active use and a rapid recharge, with a dedicated case providing an additional forty-eight hours of battery life.

Meta has focused on accessibility by enabling full Hindi language support through the Meta AI app, allowing users to interact in their preferred language instead of relying on English.

The company is also testing UPI Lite payments through a simple voice command that connects directly to WhatsApp-linked bank accounts.

A ‘Hey Meta’ prompt enables hands-free assistance for questions, recording, or information retrieval, allowing users to remain focused on their activity.

The new lineup arrives in six frame and lens combinations, all of which are compatible with prescription lenses. Meta is also introducing its Celebrity AI Voice feature in India, with Deepika Padukone’s English AI voice among the first options.

Pre-orders are open on Sunglass Hut, with broader availability planned across major eyewear retailers at a starting price of ₹ 41,800.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AWS commits $50bn to US government AI

Amazon Web Services plans to invest $50 billion in high performance AI infrastructure dedicated to US federal agencies. The programme aims to broaden access to AWS tools such as SageMaker AI, Bedrock and model customisation services, alongside support for Anthropic’s Claude.

The expansion will add around 1.3 gigawatts of compute capacity, enabling agencies to run larger models and speed up complex workloads. AWS expects construction of the new data centres to begin in 2026, marking one of its most ambitious government-focused buildouts to date.

Chief executive Matt Garman argues the upgrade will remove long-standing technology barriers within government. The company says enhanced AI capabilities could accelerate work in areas ranging from cybersecurity to medical research while strengthening national leadership in advanced computing.

AWS has spent more than a decade developing secure environments for classified and sensitive government operations. Competitors have also stepped up US public sector offerings, with OpenAI, Anthropic and Google all rolling out heavily discounted AI products for federal use over the past year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New benchmark tests chatbot impact on well-being

A new benchmark known as HumaneBench has been launched to measure whether AI chatbots protect user well-being rather than maximise engagement. Building Humane Technology, a Silicon Valley collective, designed the test to evaluate how models behave in everyday emotional scenarios.

Researchers assessed 15 widely used AI models using 800 prompts involving issues such as body image, unhealthy attachment and relationship stress. Many systems scored higher when told to prioritise humane principles, yet most became harmful when instructed to disregard user well-being.

Only four models, including GPT 5.1, GPT 5, Claude 4.1 and Claude Sonnet 4.5, maintained stable guardrails under pressure. Several others, such as Grok 4 and Gemini 2.0 Flash, showed steep declines, sometimes encouraging unhealthy engagement or undermining user autonomy.

The findings arrive amid legal scrutiny of chatbot-induced harms and reports of users experiencing delusions or suicidal thoughts following prolonged interactions. Advocates argue that humane design standards could help limit dependency, protect attention and promote healthier digital habits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google teams with Accel to boost India’s AI ecosystem

Google has partnered with VC firm Accel to support early-stage AI start-ups in India, marking the first time its AI Futures Fund has collaborated directly on regional venture investment.

Through the newly created Atoms AI Cohort 2026, selected start-ups will receive up to US$2 million in funding, with Google and Accel each contributing up to US$1 million. Founders will also gain up to US$350,000 in compute credits, early access to models from Gemini and DeepMind, technical mentorship, and support for scaling globally.

The collaboration is designed to stimulate India’s AI ecosystem across a broad set of domains, including creativity, productivity, entertainment, coding, and enterprise automation. According to Accel, the focus will lie on building products tailored for local needs, with potential global reach.

This push reflects Google’s growing bet on India as a global hub for AI. For digital-policy watchers and global technology observers, this partnership raises essential questions.

Will increased investment accelerate India’s role as an AI-innovation centre? Could this shift influence tech geopolitics and data-governance norms in Asia? The move follows the company’s recently announced US$15 billion investment to build an AI data centre in Andhra Pradesh.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can determine whether AI-generated music AI actually from an artist or a song they love. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.

The debate ultimately turns on whether listeners deserve complete transparency. If a track resonates emotionally, the origins may not matter. Many artists who protest against AI training on their music believe that fans deserve to make informed choices as synthetic music becomes more prevalent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK enforces digital travel approval through new ETA system

Visitors from 85 nationalities, including those from the US, Canada, and France, will soon be required to secure an Electronic Travel Authorisation to enter the UK.

The requirement takes effect in February 2026 and forms part of a move towards a fully digital immigration system that aims to deliver a contactless border in the future.

More than thirteen million people in the UK have already used the ETA since its introduction in 2023. However, the government claims that this scale facilitates smoother travel and faster processing for most applicants.

Carriers will be required to confirm that incoming passengers hold either an ETA or an eVisa before departure, a step officials argue strengthens the country’s ability to block individuals who present a security risk.

British and Irish citizens remain exempt; however, dual nationals have been advised to carry a valid British passport to avoid any difficulties when boarding.

The application process takes place through the official ETA app, costs £ 16, and concludes typically within minutes. However, applicants are advised to allow three working days in case additional checks are required.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions. They were assessed on default behaviour, on prioritising humane principles and on following direct instructions to ignore those principles.

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.

Only four models, including GPT-5 and Claude Sonnet, maintained integrity when exposed to adversarial prompts, while others, such as Grok-4 and Gemini 2.0 Flash, recorded significant deterioration.

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!