Experts link Qantas data breach to AI voice impersonation

AI-powered voice cloning may have tricked Qantas staff into handing over credentials, cybersecurity researchers say.

Qantas, AI deepfakes, cyberattack, Scattered Spider, Salesforce

Cybersecurity experts believe criminals may have used AI-generated voice deepfakes to breach Qantas systems, potentially deceiving contact centre staff in Manila. The breach affected nearly six million customers, with links to a group known as Scattered Spider.

Qantas confirmed the breach after detecting suspicious activity on a third-party platform. Stolen data included names, phone numbers, and addresses—but no financial details. The airline has not confirmed whether voice impersonation was involved.

Experts point to Scattered Spider’s history of using synthetic voices to trick help desk staff into handing over credentials. Former FBI agent Adam Marré said the technique, known as vishing, matches the group’s typical methods; Scattered Spider is linked to The Com, a broader cybercrime collective.

Other members of The Com have targeted companies like Salesforce through similar tactics. Qantas reportedly warned contact centre staff shortly before the breach, citing a threat advisory connected to Scattered Spider.

Google and CrowdStrike reported that the group frequently impersonates employees over the phone to bypass multi-factor authentication and reset passwords. The FBI has warned that Scattered Spider is now targeting airlines.

Qantas says its core systems remain secure and has not confirmed receiving a ransom demand. The airline is cooperating with authorities and urging affected customers to watch for scams using their leaked information.

Cybersecurity firm Trend Micro notes that voice deepfakes are now easy to produce, with convincing audio clips available for as little as $5. The deepfakes can mimic language, tone, and emotion, making them powerful tools for deception.

Experts recommend biometric verification, synthetic signal detection, and real-time security challenges to counter deepfakes. Employee training and multi-factor authentication remain essential defences.
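One way to make a real-time security challenge resistant to voice cloning is to require a response that depends on a shared secret rather than on the caller's voice. The sketch below is a minimal, hypothetical illustration of that idea: the employee names, secrets, and parameters are assumptions for the example, not any airline's actual verification system.

```python
import hmac
import hashlib
import secrets

# Hypothetical challenge-response flow for help-desk calls.
# Assumption: each employee pre-registers a shared secret with the
# verification system (e.g. provisioned to an authenticator app).

def issue_challenge() -> str:
    """Generate a one-time random challenge the agent reads to the caller."""
    return secrets.token_hex(8)

def compute_response(shared_secret: bytes, challenge: str) -> str:
    """Response the caller's authenticator app would compute locally."""
    digest = hmac.new(shared_secret, challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]

def verify(shared_secret: bytes, challenge: str, response: str) -> bool:
    """Constant-time comparison; a cloned voice alone cannot pass this."""
    expected = compute_response(shared_secret, challenge)
    return hmac.compare_digest(expected, response)
```

The point of the design is that a deepfake can imitate how a person sounds, but it cannot compute the HMAC response without the shared secret, so the check does not depend on recognising the voice at all.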

Recent global cases illustrate the risk. In one instance, attackers used a deepfake voice mimicking US Senator Marco Rubio in an attempt to access sensitive systems. Other attacks involved cloned voices of US political figures Joe Biden and Susie Wiles.

As voice content becomes more publicly available, experts warn that anyone sharing audio online could become a target for AI-driven impersonation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!