Deepfake and AI fraud surges despite falling identity-fraud rates
A new report finds that while overall identity-fraud attempts have dipped, AI- and deepfake-powered ‘sophisticated fraud’ is rising fast, up 180 percent.
According to the 2025 Identity Fraud Report by verification firm Sumsub, the global rate of identity fraud has declined modestly, from 2.6% in 2024 to 2.2% this year; however, the nature of the threat is changing rapidly.
Fraudsters are increasingly using generative AI and deepfakes to launch what Sumsub calls ‘sophisticated fraud’: attacks that combine synthetic identities, social engineering, device tampering and cross-channel manipulation. These are not mass spam scams; they are targeted, high-impact operations that are far harder to detect and mitigate.
The report reveals a marked increase in deepfake-related schemes, including synthetic-identity fraud (the creation of entirely fabricated, AI-generated identities) and biometric forgeries designed to bypass identity-verification processes. Deepfake and synthetic-identity attacks now account for a growing share of first-party fraud cases (where the verified ‘user’ is in fact the fraudster).
Meanwhile, high-risk sectors such as dating apps, cryptocurrency exchanges and financial services are being hit especially hard. In 2025, romance-style scams involving AI personas and deepfakes accounted for a notable share of fraud cases. Banks, digital-first lenders and crypto platforms report rising numbers of impostor accounts and fraudulent onboarding attempts.
This trend reveals a significant divergence: although headline fraud rates have declined slightly, each successful AI-powered fraud attempt now tends to be far more damaging, both financially and reputationally. As Sumsub warns, the ‘sophistication shift’ in digital identity fraud means that organisations and users must rethink their security assumptions.
