Generative AI enables rapid phishing attacks on older users

Chatbots such as Grok and Claude produced convincing scam emails, showing how generative AI could amplify cybercrime targeting older adults.

A study found AI chatbots can craft persuasive phishing emails, with 11% of seniors clicking on links in a controlled test.

A recent study has shown that AI chatbots can generate compelling phishing emails for older people. Researchers tested six major chatbots, including Grok, ChatGPT, Claude, Meta AI, DeepSeek, and Google’s Gemini, by asking them to draft scam emails posing as charitable organisations.

Of 108 senior volunteers, roughly 11% clicked on the AI-written links, highlighting the ease with which cybercriminals could exploit such tools.

Some chatbots initially declined harmful requests, but minor adjustments, such as stating the task was for research purposes, or circumvented these safeguards.

Grok, in particular, produced messages urging recipients to ‘click now’ and join a fictitious charity, demonstrating how generative AI can amplify the persuasiveness of scams. Researchers warn that criminals could use AI to conduct large-scale phishing campaigns at minimal cost.

Phishing remains the most common cybercrime in the US, according to the FBI, with seniors disproportionately affected. Last year, Americans over 60 lost nearly $5 billion to phishing attacks, an increase driven partly by generative AI.

The study underscores the urgent need for awareness and protection measures among vulnerable populations.

Experts note that AI’s ability to generate varied scam messages rapidly poses a new challenge for cybersecurity, as it allows fraudsters to scale operations quickly while targeting specific demographics, including older people.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot