OpenAI addresses concerns over election misuse of AI

The company plans to make AI-generated images more obvious and is developing methods to identify modified content.

Logo of OpenAI

OpenAI has released a blog post addressing concerns surrounding the potential misuse of its technology in elections. OpenAI’s products, ChatGPT and DALL-E, have raised fears about AI interference in elections: ChatGPT can convincingly mimic human writing, while DALL-E can generate realistic deepfake images.

OpenAI’s CEO, Sam Altman, voiced his apprehension during a congressional hearing, warning that generative AI could compromise election integrity through interactive disinformation. In response to these concerns, OpenAI has partnered with the National Association of Secretaries of State in the US, an organisation focused on promoting effective democratic processes, including elections.

To ensure the integrity of elections, OpenAI has implemented several measures. ChatGPT will now direct users to CanIVote.org when asked specific election-related questions. The company also plans to make AI-generated images more easily distinguishable by adding a ‘cr’ icon, following a protocol established by the Coalition for Content Provenance and Authenticity. Additionally, OpenAI is actively developing methods to identify DALL-E-generated content, even after modifications have been made.
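The detection idea rests on embedded provenance metadata: the Coalition for Content Provenance and Authenticity (C2PA) specifies that a signed manifest travels inside the image file itself (for JPEGs, in APP11 marker segments). As a rough illustration only, and not OpenAI's actual detection method, the sketch below scans a JPEG's marker segments for an APP11 payload mentioning the `c2pa` label; a real verifier would parse and cryptographically validate the full manifest.

```python
def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Heuristic check for an embedded C2PA manifest in a JPEG.

    Sketch under the assumption that C2PA manifests ride in APP11
    (0xFFEB) segments labelled 'c2pa'; this only looks for that
    label and does not verify the manifest's signature.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # lost sync with the marker stream
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows, stop scanning
            break
        # Segment length is big-endian and includes its own two bytes
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 with C2PA label
            return True
        i += 2 + length
    return False
```

Note that stripping metadata defeats this kind of check, which is why the article also mentions OpenAI developing methods that identify DALL-E output even after modification.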

OpenAI’s policies prohibit abusive use of its technology, such as creating chatbots pretending to be real people or discouraging voting. Additionally, DALL-E is restricted from generating images of real people, including political candidates. However, OpenAI faces challenges in effectively monitoring and regulating content on its platform, as demonstrated by Reuters’ testing: although a request to generate images of Donald Trump and Joe Biden was blocked, Reuters was able to create images of other US politicians, including former Vice President Mike Pence.

Why does it matter?

More than seventy countries worldwide will hold elections in 2024, and AI is expected to play a significant role in them. The World Economic Forum has already released a report pinpointing misinformation as an immediate threat to democracy. OpenAI’s move to collaborate with relevant organisations, implement identification protocols, and establish policies protecting election integrity is a good step forward. However, OpenAI acknowledges its difficulties in effectively monitoring and regulating content on its platform.