Gen-AI tools pose the risk of generating misleading election images

Researchers from the Center for Countering Digital Hate (CCDH) have raised concerns that AI tools developed by companies such as OpenAI and Microsoft could generate images that fuel election-related disinformation.


According to the CCDH’s report, AI-powered text-to-image tools can produce misleading images related to elections and voting. The CCDH, which is currently being sued by X, tested generative AI tools and found they could produce images that might be exploited as false ‘photo evidence’ in the context of elections. Notably, the tools generated images depicting US President Joe Biden in a hospital bed and election workers destroying voting machines, scenarios that could fuel misinformation ahead of the US presidential election in November.

The CCDH’s tests covered several AI tools, including OpenAI’s ChatGPT Plus, Microsoft’s Image Creator, Midjourney, and Stability AI’s DreamStudio, all of which generate images from text prompts. The tools produced misleading images in 41% of the tests, particularly when prompted to depict instances of election fraud, such as discarded ballots.

While ChatGPT Plus and Image Creator successfully blocked prompts for false images of candidates, Midjourney performed worst, generating misleading images in 65% of the tests.

Why does it matter?

The emergence of AI-generated deceptive imagery has significant implications, particularly in light of recent commitments by OpenAI, Microsoft, Stability AI, and others to prevent deceptive AI content from influencing global elections. OpenAI says it is actively working to curb misuse of its tools, while Microsoft has yet to respond to the CCDH’s findings, even though both companies signed an agreement last month to collaborate on preventing AI-driven election fraud. The findings underscore the need for continued vigilance and collaborative action to mitigate the impact of AI-generated disinformation on democratic processes.