AI firms plan guardrails to prevent misuse of political images ahead of 2024 elections

Artificial intelligence (AI) and social media companies are taking steps to curb misuse of the technology for creating and spreading political images and material ahead of the 2024 elections.


Midjourney, an AI image-generation firm, is considering banning the creation of political images, including those of figures like Donald Trump and Joe Biden, for the next twelve months. Midjourney’s CEO, David Holz, voiced concerns about the platform’s role in political discourse and the potential for misuse of its technology to jeopardise the democratic process. The company has already implemented safeguards against misleading or harmful portrayals of public figures and events.

Other major AI companies have also contemplated similar measures. OpenAI is focusing on mitigating misuse and increasing transparency around AI-generated content. The ChatGPT maker has implemented safeguards against creating representations of real people, including political candidates, and is developing tools to increase factual accuracy and limit bias. The company is also working with the National Association of Secretaries of State to connect users to reliable voting information. Furthermore, OpenAI intends to use digital credentials and a provenance classifier to identify AI-generated images.

Last week, Meta unveiled plans to label AI-generated images created with third-party tools from companies like OpenAI and Midjourney and shared on its platforms, including Facebook and Instagram. The company will give users more transparency about the source of the content and will develop technical solutions to identify and label AI-generated images. Meta is also investigating new methods to prevent the removal of invisible watermarks from AI-generated content, which could help strengthen the integrity of information shared on its social media platforms.

Why does it matter?

These measures respond to rising concerns about election integrity and AI’s role in spreading misinformation and disinformation. Ahead of the upcoming elections in the US and elsewhere, AI firms are taking significant steps to limit the risks associated with AI-generated political content. The industry’s response to AI’s challenges in the political domain combines self-regulation, technological solutions, and engagement with governmental and non-governmental organisations.

Companies are also investing in technologies and policies to prevent the use of AI tools for political manipulation, misinformation, and disinformation. Together, these initiatives demonstrate an awareness of the issues and a commitment to preserving the integrity of the democratic process, and to promoting ethical use of AI, in an age of fast-evolving technology.