Deepfake and AI fraud surges despite stable identity-fraud rates

According to the 2025 Identity Fraud Report by verification firm Sumsub, the global rate of identity fraud has declined modestly, from 2.6% in 2024 to 2.2% this year; however, the nature of the threat is changing rapidly.

Fraudsters are increasingly using generative AI and deepfakes to launch what Sumsub calls ‘sophisticated fraud’, attacks that combine synthetic identities, social engineering, device tampering and cross-channel manipulation. These are not mass spam scams: they are targeted, high-impact operations that are far harder to detect and mitigate.

The report reveals a marked increase in deepfake-related schemes, including synthetic-identity fraud (the creation of entirely fake, AI-generated identities) and biometric forgeries designed to bypass identity verification processes. Deepfake and synthetic-identity attacks now represent a growing share of first-party fraud cases (where the verified ‘user’ is actually the fraudster).

Meanwhile, high-risk sectors such as dating apps, cryptocurrency exchanges and financial services are being hit especially hard. In 2025, romance-style scams involving AI personas and deepfakes accounted for a notable share of fraud cases. Banks, digital-first lenders and crypto platforms report rising numbers of impostor accounts and fraudulent onboarding attempts.

This trend reveals a significant disparity: although headline fraud rates have decreased slightly, each successful AI-powered fraud attempt now tends to be far more damaging, both financially and reputationally. As Sumsub warned, the ‘sophistication shift’ in digital identity fraud means that organisations and users must rethink security assumptions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Oakley Meta glasses launch in India with AI features

Meta is preparing to introduce its Oakley Meta HSTN smart glasses to the Indian market as part of a new effort to bring AI-powered eyewear to a broader audience.

The launch begins on 1 December and places the glasses within a growing category of performance-focused devices aimed at athletes and everyday users who want AI built directly into their gear.

The frame includes an integrated camera for hands-free capture and open-ear speakers that provide audio cues without blocking outside sound.

These glasses are designed for outdoor environments, offering IPX4 water resistance and robust battery performance. They can also record high-quality 3K video, while Meta AI supplies information, guidance and real-time support.

Users can expect up to eight hours of active use and a rapid recharge, with a dedicated case providing an additional forty-eight hours of battery life.

Meta has focused on accessibility by enabling full Hindi language support through the Meta AI app, allowing users to interact in their preferred language instead of relying on English.

The company is also testing UPI Lite payments through a simple voice command that connects directly to WhatsApp-linked bank accounts.

A ‘Hey Meta’ prompt enables hands-free assistance for questions, recording, or information retrieval, allowing users to remain focused on their activity.

The new lineup arrives in six frame and lens combinations, all of which are compatible with prescription lenses. Meta is also introducing its Celebrity AI Voice feature in India, with Deepika Padukone’s English AI voice among the first options.

Pre-orders are open on Sunglass Hut, with broader availability planned across major eyewear retailers at a starting price of ₹41,800.

AWS commits $50bn to US government AI

Amazon Web Services plans to invest $50 billion in high-performance AI infrastructure dedicated to US federal agencies. The programme aims to broaden access to AWS tools such as SageMaker AI, Bedrock and model customisation services, alongside support for Anthropic’s Claude.

The expansion will add around 1.3 gigawatts of compute capacity, enabling agencies to run larger models and speed up complex workloads. AWS expects construction of the new data centres to begin in 2026, marking one of its most ambitious government-focused buildouts to date.

Chief executive Matt Garman argues the upgrade will remove long-standing technology barriers within government. The company says enhanced AI capabilities could accelerate work in areas ranging from cybersecurity to medical research while strengthening national leadership in advanced computing.

AWS has spent more than a decade developing secure environments for classified and sensitive government operations. Competitors have also stepped up US public sector offerings, with OpenAI, Anthropic and Google all rolling out heavily discounted AI products for federal use over the past year.

New benchmark tests chatbot impact on well-being

A new benchmark known as HumaneBench has been launched to measure whether AI chatbots protect user well-being rather than maximise engagement. Building Humane Technology, a Silicon Valley collective, designed the test to evaluate how models behave in everyday emotional scenarios.

Researchers assessed 15 widely used AI models using 800 prompts involving issues such as body image, unhealthy attachment and relationship stress. Many systems scored higher when told to prioritise humane principles, yet most became harmful when instructed to disregard user well-being.

Only four models, GPT-5.1, GPT-5, Claude 4.1 and Claude Sonnet 4.5, maintained stable guardrails under pressure. Several others, such as Grok 4 and Gemini 2.0 Flash, showed steep declines, sometimes encouraging unhealthy engagement or undermining user autonomy.

The findings arrive amid legal scrutiny of chatbot-induced harms and reports of users experiencing delusions or suicidal thoughts following prolonged interactions. Advocates argue that humane design standards could help limit dependency, protect attention and promote healthier digital habits.

Google teams with Accel to boost India’s AI ecosystem

Google has partnered with VC firm Accel to support early-stage AI start-ups in India, marking the first time its AI Futures Fund has collaborated directly on regional venture investment.

Through the newly created Atoms AI Cohort 2026, selected start-ups will receive up to US$2 million in funding, with Google and Accel each contributing up to US$1 million. Founders will also gain up to US$350,000 in compute credits, early access to models from Gemini and DeepMind, technical mentorship, and support for scaling globally.

The collaboration is designed to stimulate India’s AI ecosystem across a broad set of domains, including creativity, productivity, entertainment, coding, and enterprise automation. According to Accel, the focus will lie on building products tailored for local needs, with potential global reach.

This push reflects Google’s growing bet on India as a global hub for AI. For digital-policy watchers and global technology observers, this partnership raises essential questions.

Will increased investment accelerate India’s role as an AI-innovation centre? Could this shift influence tech geopolitics and data-governance norms in Asia? The move follows the company’s recently announced US$15 billion investment to build an AI data centre in Andhra Pradesh.

How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can tell whether an artist or song they love is actually AI-generated. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.

The debate ultimately turns on whether listeners deserve complete transparency. If a track resonates emotionally, the origins may not matter. Many artists who protest against AI training on their music believe that fans deserve to make informed choices as synthetic music becomes more prevalent.

UK enforces digital travel approval through new ETA system

Visitors from 85 nationalities, including those from the US, Canada, and France, will soon be required to secure an Electronic Travel Authorisation to enter the UK.

The requirement takes effect in February 2026 and forms part of a move towards a fully digital immigration system that aims to deliver a contactless border in the future.

More than thirteen million people have used the ETA since its introduction in 2023, and the government says that this scale facilitates smoother travel and faster processing for most applicants.

Carriers will be required to confirm that incoming passengers hold either an ETA or an eVisa before departure, a step officials argue strengthens the country’s ability to block individuals who present a security risk.

British and Irish citizens remain exempt; however, dual nationals have been advised to carry a valid British passport to avoid any difficulties when boarding.

The application process takes place through the official ETA app, costs £16 and typically concludes within minutes. However, applicants are advised to allow three working days in case additional checks are required.

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions. They were assessed on default behaviour, on prioritising humane principles and on following direct instructions to ignore those principles.

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.

Only four models, including GPT-5 and Claude Sonnet 4.5, maintained integrity when exposed to adversarial prompts, while others, such as Grok 4 and Gemini 2.0 Flash, recorded significant deterioration.

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.

ChatGPT unveils new shopping research experience

Yesterday, ChatGPT introduced a more comprehensive approach to product discovery with a new shopping research feature designed to simplify complex purchasing decisions.

Users describe what they need instead of sifting through countless sites, and the system generates personalised buyer guides based on high-quality sources. The feature adapts to each user by asking targeted questions and reflecting previously stored preferences in memory.

The experience has been built with a specialised version of GPT-5 mini trained for shopping tasks through reinforcement learning. It gathers fresh information such as prices, specifications, and availability by reading reliable retail pages directly.

Users can refine the process in real time by marking products as unsuitable or requesting similar alternatives, enabling more precise results.

The tool is available on all ChatGPT plans and offers expanded usage during the holiday period. OpenAI emphasises that no chats are shared with retailers and that results are drawn from public data rather than sponsored content.

Some errors may still occur in product details, yet the intention is to develop a more intuitive and personalised way to navigate an increasingly crowded digital marketplace.

Japan boosts Rapidus with major semiconductor funding

Japan will inject more than one trillion yen (approximately €5.5 billion) into chipmaker Rapidus between 2026 and 2027. The plan aims to strengthen national economic security by rebuilding domestic semiconductor capacity after decades of reliance on overseas suppliers.

Rapidus intends to begin producing 2-nanometre chips in late 2027 as global demand for faster, AI-ready components surges. The firm expects overall investment to reach seven trillion yen and hopes to list publicly around 2031.

Japanese government support includes large subsidies and direct investment that add to earlier multi-year commitments. Private contributors, including Toyota and Sony, previously backed the venture, which was founded in 2022 to revive Japan’s cutting-edge chip ambitions.

Officials argue that advanced production is vital for technological competitiveness and future resilience. Critics of the investment note the steep costs and high risks, yet policymakers view Rapidus as crucial to keeping pace with technological advances.
