OpenAI cracks down on misuse of ChatGPT by foreign threat actors
OpenAI uncovered cyber campaigns using ChatGPT for malware, social engineering, and fake political content across platforms.
OpenAI has shut down a network of ChatGPT accounts allegedly linked to nation-state actors in Russia, China, Iran, North Korea, and elsewhere after uncovering their use in cyber and influence operations.
The banned accounts were used to assist in developing malware, automate social media content, and conduct reconnaissance on sensitive technologies.
According to OpenAI’s latest threat report, a Russian-speaking group used the chatbot to iteratively improve malware written in Go. Each account was used for a single round of code refinement and then abandoned, a tactic that underscores the group’s emphasis on operational security.
The malicious software was later disguised as a legitimate gaming tool and distributed online, infecting victims’ devices to exfiltrate sensitive data and establish long-term access.
Chinese-linked groups, including APT5 and APT15, were found using OpenAI’s models for a range of technical tasks, from researching satellite communications to developing scripts for Android app automation and penetration testing.
Other accounts were linked to influence campaigns that generated propaganda or polarising content in multiple languages, including efforts to pose as journalists and simulate public discourse around elections and geopolitical events.
The banned activities also included scams, social engineering, and politically motivated disinformation. OpenAI stressed that while it detected misuse, none of the activity involved sophisticated or large-scale attacks enabled solely by its tools.
The company said it is continuing to improve detection and mitigation efforts to prevent abuse of its models.