EU calls for clear labeling of AI-generated content to combat disinformation

The EU has set a July deadline for companies to report on their safeguard measures and warned Twitter, which left the EU Code, to prepare for increased regulatory scrutiny.


European Commission Vice-President Vera Jourova has urged companies that use generative AI tools like ChatGPT and Bard, which are capable of producing disinformation, to clearly label such content as part of their efforts to combat fake news, particularly from Russia. OpenAI’s ChatGPT, backed by Microsoft, has become an immensely popular consumer application, prompting competing tech companies to release generative AI products of their own.

However, concerns have arisen over the misuse of this technology, with fears that malicious actors and even governments may exploit it to spread disinformation at scale. Jourova emphasised that companies such as Microsoft (with Bing Chat), Google (with Bard), and Meta Platforms (formerly Facebook) that have committed to the EU Code of Practice on Disinformation should implement safeguards to prevent the misuse of their generative AI services.

Additionally, companies whose services are capable of disseminating AI-generated disinformation should deploy technology to identify and label such content for users. The EU expects this labelling to be prominent, making clear that the content was not created by a human, using phrases like ‘this is the robot talking.’

Jourova stated that these companies will be required to report on the measures they have taken to address this issue by July. She also warned Twitter, which recently withdrew from the EU Code, to anticipate increased regulatory scrutiny, emphasising that its actions and compliance with EU law will be thoroughly and urgently examined.

Signatories of the Code of Practice on Disinformation met in Brussels to review the first year of progress under the revised initiative. The discussion focused largely on the growing presence of generative AI, particularly after Twitter’s departure from the voluntary programme. The signatories span diverse entities, ranging from research groups and civil society organisations to the largest platforms, which are now subject to new obligations under the Digital Services Act (DSA).