Hackers use ChatGPT to forge military ID in phishing attack
Experts warn that hackers are increasingly exploiting AI for phishing, malware development and impersonation of trusted institutions.

A hacking group has reportedly used ChatGPT to generate a fake military ID in a phishing attack targeting South Korea. The incident, uncovered by cybersecurity firm Genians, shows how AI can be misused to make malicious campaigns more convincing.
Researchers said the group, known as Kimsuky, used the chatbot to craft a counterfeit South Korean military identification card that lent credibility to a phishing email. Although the document looked genuine, the email it accompanied contained links to malware designed to extract data from victims’ devices.
Targets included journalists, human rights activists and researchers. Kimsuky has a history of cyber-espionage, and US officials have previously linked the group to global intelligence-gathering operations.
The findings highlight a wider trend of AI being exploited for cybercrime, from creating fake résumés to planning attacks and developing malware. Genians warned that attackers are rapidly adopting AI to impersonate trusted organisations, and noted that the full scale of the campaign remains unknown.