Deepfake and AI fraud surges despite stable identity-fraud rates

According to the 2025 Identity Fraud Report by verification firm Sumsub, the global rate of identity fraud has declined modestly, from 2.6% in 2024 to 2.2% this year; however, the nature of the threat is changing rapidly.

Fraudsters are increasingly using generative AI and deepfakes to launch what Sumsub calls ‘sophisticated fraud’, attacks that combine synthetic identities, social engineering, device tampering and cross-channel manipulation. These are not mass spam scams: they are targeted, high-impact operations that are far harder to detect and mitigate.

The report reveals a marked increase in deepfake-related schemes, including synthetic-identity fraud (the creation of entirely fabricated, AI-generated identities) and biometric forgeries designed to bypass identity verification processes. Deepfake and synthetic-identity attacks now represent a growing share of first-party fraud cases (where the verified ‘user’ is in fact the fraudster).

Meanwhile, high-risk sectors such as dating apps, cryptocurrency exchanges and financial services are being hit especially hard. In 2025, romance-style scams involving AI personas and deepfakes accounted for a notable share of fraud cases. Banks, digital-first lenders and crypto platforms report rising numbers of impostor accounts and fraudulent onboarding attempts.

This trend reveals a significant disparity: although headline fraud rates have decreased slightly, each successful AI-powered fraud attempt now tends to be far more damaging, both financially and reputationally. As Sumsub warned, the ‘sophistication shift’ in digital identity fraud means that organisations and users must rethink security assumptions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

NVIDIA powers a new wave of specialised AI agents to transform business

Agentic AI has entered a new phase as companies rely on specialised systems instead of broad, one-size-fits-all models.

Open-source foundations, such as NVIDIA’s Nemotron family, now allow organisations to combine internal knowledge with tailored architectures, leading to agents that understand the precise demands of each workflow.

Firms across cybersecurity, payments and semiconductor engineering are beginning to treat specialisation as the route to genuine operational value.

CrowdStrike is utilising Nemotron and NVIDIA NIM microservices to enhance its Agentic Security Platform, which supports teams by handling high-volume tasks such as alert triage and remediation.

Accuracy has risen from 80 percent to 98.5 percent, cutting manual effort tenfold and helping analysts manage complex threats with greater speed.

PayPal has taken a similar path by building commerce-focused agents that enable conversational shopping and payments, cutting latency nearly in half while maintaining the precision required across its global network of customers and merchants.

Synopsys is deploying agentic AI throughout chip design workflows by pairing open models with NVIDIA’s accelerated infrastructure. Early trials in formal verification show productivity improvements of 72 percent, offering engineers a faster route to identifying design errors.

The company is blending fine-tuned models with tools such as the NeMo Agent Toolkit and Blueprints to embed agentic support at every stage of development.

Across industries, a strategic pattern is becoming clear: organisations begin by evaluating open models, then curate and secure domain-specific data, and finally build agents capable of acting on proprietary information.

Continuous refinement through a data flywheel strengthens long-term performance.

NVIDIA aims to support the shift by promoting Nemotron, NeMo and its broader software ecosystem as the foundation for the next generation of specialised enterprise agents.


India confronts rising deepfake abuse as AI tools spread

Deepfake abuse is accelerating across India as AI tools make it easy to fabricate convincing videos and images. Researchers warn that manipulated media now fuels fraud, political disinformation and targeted harassment. Public awareness often lags behind the pace of generative technology.

Recent cases involving Ranveer Singh and Aamir Khan showed how synthetic political endorsements can spread rapidly online. Investigators say cloned voices and fabricated footage circulated widely during election periods. Rights groups warn that such incidents undermine trust in media and public institutions.

Women face rising risks from non-consensual deepfakes used for harassment, blackmail and intimidation. Cases involving Rashmika Mandanna and Girija Oak intensified calls for stronger protections. Victims report significant emotional harm as edited images spread online.

Security analysts warn that deepfakes pose growing risks to privacy, dignity and personal safety. Users can watch for cues such as uneven lighting, distorted edges, or overly clean audio. Experts also advise limiting the sharing of media and using strong passwords and privacy controls.

Digital safety groups urge people to avoid engaging with manipulated content and to report suspected abuse promptly. Awareness and early detection remain critical as cases continue to rise. Policymakers are being encouraged to expand safeguards and invest in public education on emerging risks associated with AI.


US warns of rising senior health fraud as AI lifts scam sophistication

AI-driven fraud schemes are on the rise across the US health system, exposing older adults to increasing financial and personal risks. Officials say tens of billions in losses have already been uncovered this year. High medical use and limited digital literacy leave seniors particularly vulnerable.

Criminals rely on schemes such as phantom billing, upcoding and identity theft using Medicare numbers. Fraud spans home health, hospice care and medical equipment services. Authorities warn that the ageing population will deepen exposure and increase long-term harm.

AI has made scams harder to detect by enabling cloned voices, deepfakes and convincing documents. The tools help impersonate providers and personalise attacks at scale. Even cautious seniors may struggle to recognise false calls or messages.

Investigators are also using AI to counter fraud by spotting abnormal billing, scanning records for inconsistencies and flagging high-risk providers. Cross-checking data across clinics and pharmacies helps identify duplicate claims. Automated prompts can alert users to suspicious contacts.
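The cross-checking described above can be illustrated with a minimal sketch: group claims by patient, procedure and service date, and flag any combination billed by more than one provider. All field names and the sample records are illustrative, not drawn from any real claims system.

```python
from collections import defaultdict

def flag_duplicate_claims(claims):
    """Flag (patient, procedure, date) combinations billed by more than
    one provider -- a simple stand-in for cross-checking claims across
    clinics and pharmacies. Field names are illustrative."""
    seen = defaultdict(set)
    for claim in claims:
        key = (claim["patient_id"], claim["procedure"], claim["date"])
        seen[key].add(claim["provider"])
    return {key: providers for key, providers in seen.items()
            if len(providers) > 1}

claims = [
    {"patient_id": "P1", "procedure": "E0601", "date": "2025-03-02", "provider": "clinic_a"},
    {"patient_id": "P1", "procedure": "E0601", "date": "2025-03-02", "provider": "pharmacy_b"},
    {"patient_id": "P2", "procedure": "G0151", "date": "2025-03-05", "provider": "clinic_a"},
]
print(flag_duplicate_claims(claims))
```

Real fraud-detection systems add probabilistic matching and provider risk scoring on top of exact-key checks like this one.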

Experts urge seniors to monitor statements, ignore unsolicited calls and avoid clicking unfamiliar links. They should verify official numbers, protect Medicare details and use strong login security. Suspicious activity should be reported to Medicare or to local fraud response teams.


VPN credential theft emerges as top ransomware entry point

Cyber Express reports that compromised VPN credentials are now the most common method for ransomware attackers to gain entry. In Q3 2025, nearly half of all ransomware incidents began with valid, stolen VPN logins.

The analysis, based on data from Beazley Security (the insurance arm of Beazley), reveals that threat actors are increasingly exploiting remote access tools, rather than relying solely on software exploits or phishing.

Notably, VPN misuse accounted for more initial access than social engineering, supply chain attacks or remote desktop credential compromises.

One contributing factor is that many organisations do not enforce multi-factor authentication (MFA) or maintain strict access controls for VPN accounts. Cyber Express highlights that this situation underscores the ‘critical need’ for MFA and for firms to monitor for credential leaks on the dark web.
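The MFA that organisations are urged to enforce is most commonly a time-based one-time password (TOTP). As a minimal sketch of how that second factor works, the following implements the RFC 6238 algorithm in standard-library Python; it is an illustration of the mechanism, not a production implementation.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password: HMAC-SHA1 over the
    30-second time counter, dynamically truncated to 6 digits."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if for_time is None else for_time
    counter = int(now // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on a shared secret and the current time window, a stolen VPN password alone is no longer sufficient for entry.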

The report also mentions specific ransomware groups such as Akira, Qilin and INC, which are known to exploit compromised VPN credentials, often via brute-force attacks or credential stuffing.

From a digital-security policy standpoint, the trend has worrying implications. It shows how traditional perimeter security (such as VPNs) is under pressure, and it reinforces calls for zero-trust architectures, tighter access governance and proactive credential monitoring.


Popular Python AI library compromised to deliver malware

Security researchers have confirmed that the Ultralytics YOLO library was hijacked in a supply-chain attack, where attackers injected malicious code into the PyPI-published versions 8.3.41 and 8.3.42. When installed, these versions deployed the XMRig cryptominer.

The compromise stemmed from Ultralytics’ continuous-integration workflow: by exploiting a GitHub Actions workflow, the attackers manipulated the automated build process, bypassing code review and injecting cryptocurrency-mining malware.

The maintainers quickly removed the malicious versions and released a clean build (8.3.43); however, newer reports suggest that further suspicious versions may have appeared.
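One practical defence against this kind of compromise is to pin dependencies and refuse versions known to be malicious. Below is a minimal sketch; the two compromised version numbers come from the report, while the function names and the safe-by-absence behaviour are illustrative choices.

```python
from importlib.metadata import PackageNotFoundError, version

# Versions of the ultralytics package reported as compromised in this incident.
COMPROMISED = {"8.3.41", "8.3.42"}

def check_version_string(installed, bad_versions=COMPROMISED):
    """Pure helper: is this version string outside the known-bad set?"""
    return installed not in bad_versions

def is_safe_install(package="ultralytics", bad_versions=COMPROMISED):
    """Return (installed_version, ok) for the local environment.
    An absent package is treated as safe-by-absence."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        return None, True
    return installed, check_version_string(installed, bad_versions)
```

A blocklist like this is only a coarse guard; pinning exact versions (ideally with hashes, via pip's hash-checking mode) in requirements files is the more robust defence against tampered releases.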

This incident illustrates the growing risk in AI library supply chains. As open-source AI frameworks become more widely used, attackers increasingly target their build systems to deliver malware, particularly cryptominers.


Trilateral sanctions target Media Land for supporting ransomware groups

The United States, together with the UK and Australia, has imposed coordinated sanctions on Media Land, a Russian bulletproof hosting provider accused of aiding ransomware groups and broader cybercrime. The measures target senior operators and sister companies linked to attacks on businesses and critical infrastructure.

Authorities in the UK and Australia say Media Land infrastructure aided ransomware groups, including LockBit, BlackSuit, and Play, and was linked to denial-of-service attacks on US organisations. OFAC also named operators and firms that maintained systems designed to evade law enforcement.

The action also expands earlier sanctions against Aeza Group, with entities accused of rebranding and shifting infrastructure through front companies such as Hypercore to avoid restrictions introduced this year. Officials say these efforts were designed to obscure operational continuity.

According to investigators, the network relied on overseas firms in Serbia and Uzbekistan to conceal its activity and establish technical infrastructure that was detached from the Aeza brand. These entities, along with the new Aeza leadership, were designated for supporting sanctions evasion and cyber operations.

The sanctions block assets under US jurisdiction and bar US persons from dealing with listed individuals or companies. Regulators warn that financial institutions interacting with sanctioned entities may face penalties, stating that the aim is to disrupt ransomware infrastructure and encourage operators to comply.


DPDP law takes effect as India tightens AI-era data protections

India has activated new Digital Personal Data Protection rules that sharply restrict how technology firms collect and use personal information. The framework limits data gathering to what is necessary for a declared purpose and requires clear explanations, opt-outs, and breach notifications for Indian users.

The rules apply across digital platforms, from social media and e-commerce to banks and public services. Companies must obtain parental consent for individuals under 18 and are prohibited from using children’s data for targeted advertising. Firms have 18 months to comply with the new safeguards.

Users can request access to their data, ask why it was collected, and demand corrections or updates. They may withdraw consent at any time and, in some cases, request deletion. Companies must respond within 90 days, and individuals can appoint someone to exercise these rights.

Civil society groups welcomed stronger user rights but warned that the rules may also expand state access to personal data. The Internet Freedom Foundation criticised limited oversight and said the provisions risk entrenching government control, reducing transparency for citizens.

India is preparing further digital regulations, including new requirements for AI and social media firms. With nearly a billion online users, the government has urged platforms to label AI-generated content amid rising concerns about deepfakes, online misinformation, and election integrity.


Azure weathers record 15.7 Tbps cloud DDoS attack

According to Microsoft, Azure was hit on 24 October 2025 by a massive multi-vector DDoS attack that peaked at 15.72 terabits per second and unleashed 3.64 billion packets per second on a single endpoint.

The attack was traced to the Aisuru botnet, a Mirai-derived IoT botnet. More than 500,000 unique IP addresses, mostly residential devices, participated in the assault. UDP floods from randomised source ports made the attack particularly potent and harder to filter.

Azure’s automated DDoS Protection infrastructure handled the traffic surge, filtering out malicious packets in real time and keeping customer workloads online.

From a security-policy viewpoint, this incident underscores how IoT devices continue to fuel some of the biggest cyber threats, and how major cloud platforms must scale defences rapidly to cope.


FCC set to rescind cyber rules after Salt Typhoon hack

The FCC is scheduled this week to vote on rescinding rules imposed in January that required major telecommunications carriers to secure networks from unauthorised access and interception under Section 105 of the Communications Assistance for Law Enforcement Act.

These measures were introduced after the Salt Typhoon cyber-espionage campaign exposed vulnerabilities in US telecom infrastructure.

Current FCC Chair Brendan Carr argues the prior policy exceeded the agency’s legal authority and did not offer flexible or targeted protections. The proposed reversal follows lobbying by major carriers who claim the rules could undermine partnership efforts between public and private sectors.

Lawmakers, including Maria Cantwell, ranking Democrat on the Senate Commerce Committee, have strongly opposed the move. They describe the Salt Typhoon campaign, attributed to Chinese-linked actors targeting numerous US carriers, as one of the most serious telecom breaches in US history, emphasising that loosening these rules could undermine national security.
