Why detecting deepfakes is no longer enough to stay secure

As AI-generated deepfakes grow more convincing and injection attacks surge, the systems organisations rely on to verify who they are dealing with are facing an unprecedented crisis of trust.

Concern is mounting at the policy level as well: sixty-one data protection authorities issued a joint declaration urging worldwide cooperation to confront the spread of non-consensual AI-generated images targeting children and vulnerable groups.

Deepfakes and injection attacks are no longer just tools for misinformation; they are now being deployed to break the identity verification systems that underpin banking, hiring, and account access.

Bad actors are targeting the critical moments when a system determines whether someone is a real person, from customer onboarding at banks to remote hiring and account recovery workflows.

Attackers exploit verification systems in two main ways: by using increasingly convincing synthetic faces and voice clones to mimic real people, and by launching injection attacks that substitute fraudulent video into the capture pipeline before it ever reaches the detection system.
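To see why injection attacks are so dangerous, consider a minimal sketch of the difference between checking frames and checking the whole session. Everything here is illustrative: the function names, the stubbed detector, and the boolean provenance flag are assumptions made for exposition, not any vendor's API.

```python
# A hedged sketch of why frame-level deepfake detection alone misses
# injection attacks: if a fraudulent stream is substituted before capture,
# the detector only ever sees attacker-chosen frames.

from dataclasses import dataclass

@dataclass
class CaptureSession:
    frames: list              # video frames (stubbed as strings here)
    device_id: str            # reported capture device
    is_hardware_camera: bool  # provenance signal: real sensor vs. virtual camera

def deepfake_score(frame) -> float:
    """Stub for a frame-level detector: probability the frame is synthetic."""
    return 0.02  # a convincing deepfake scores low, which is exactly the problem

def verify_frames_only(session: CaptureSession) -> bool:
    # Naive pipeline: trusts whatever frames arrive in the capture stream.
    return all(deepfake_score(f) < 0.5 for f in session.frames)

def verify_whole_session(session: CaptureSession) -> bool:
    # Hardened pipeline: also checks where the frames came from.
    if not session.is_hardware_camera:
        return False  # e.g. a virtual camera driver suggests injection
    return verify_frames_only(session)

# An injected stream from a virtual camera: the frames pass the detector,
# but the session-level provenance check catches the substitution.
injected = CaptureSession(frames=["f1", "f2"], device_id="virtual-cam-0",
                          is_hardware_camera=False)
print(verify_frames_only(injected))    # True  -> fooled
print(verify_whole_session(injected))  # False -> caught
```

In real deployments, provenance would rest on OS-level camera attestation rather than a single boolean, but the principle is the same: the pipeline must distrust the stream itself, not just its contents.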

According to the Entrust 2026 Identity Fraud Report, deepfakes are now linked to one in five biometric fraud attempts, with injection attacks rising 40% year-on-year.

Experts warn that detecting deepfakes alone is no longer sufficient. Enterprises must validate the whole session, including device integrity and behavioural signals, in real time.
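What validating the whole session might look like can be sketched as a simple risk score that fuses independent signals. The signal names, weights, and example values below are illustrative assumptions, not an industry standard.

```python
# A hedged sketch of session-level risk scoring that combines a deepfake
# detector with device and behavioural signals, evaluated per session.

def session_risk(deepfake_score: float, virtual_camera: bool,
                 device_integrity_ok: bool, behaviour_anomaly: float) -> float:
    """Combine independent signals into one risk score in [0, 1]."""
    risk = 0.4 * deepfake_score + 0.3 * behaviour_anomaly
    if virtual_camera:
        risk += 0.2  # stream did not originate from a hardware sensor
    if not device_integrity_ok:
        risk += 0.1  # rooted, emulated, or otherwise tampered device
    return min(risk, 1.0)

# A convincing deepfake (low detector score) can still be blocked when it
# arrives via a virtual camera on a compromised device.
print(session_risk(0.05, virtual_camera=True,
                   device_integrity_ok=False, behaviour_anomaly=0.3))  # ~0.41
```

The point of the design is that no single signal is decisive: a fake that defeats the detector still has to defeat the provenance and behavioural checks in the same session.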

Gartner predicts that by 2026, 30% of enterprises will no longer consider face-based identity verification reliable in isolation, given the pace of AI-generated deepfake attacks.
