Cybercriminals use AI to target elections, says OpenAI
Cybercriminals are increasingly using AI, including ChatGPT, to generate fake election content.
OpenAI reports that cybercriminals are increasingly using its AI models to generate fake content aimed at influencing elections. The company has neutralised more than 20 such attempts this year, including accounts producing articles about the US election. Several accounts from Rwanda were banned in July for similar activity targeting elections in that country.
The company confirmed that none of these attempts succeeded in generating viral engagement or building sustained audiences. However, the use of AI in election interference remains a growing concern, especially as the US approaches its presidential election. The US Department of Homeland Security has also warned of foreign nations attempting to use AI tools to spread misinformation ahead of the vote.
As OpenAI strengthens its global position, the rise in election manipulation attempts underscores the need for heightened vigilance. The company recently completed a $6.6 billion funding round, further securing its status as one of the world's most valuable private companies.
ChatGPT continues to grow rapidly, reaching 250 million weekly active users since its launch in November 2022, underlining the platform's widespread influence.