US authorities disrupt Russian AI-powered disinformation campaign

US authorities have disrupted a sophisticated Russian disinformation campaign that used AI-powered software, known as Meliorator, to create fake social media personas and spread false information in the US and internationally.


Authorities from multiple countries have issued warnings about a sophisticated Russia-backed disinformation campaign that leverages AI-powered software to spread false information both in the US and internationally. The software behind the operation, known as Meliorator, is reportedly being used by affiliates of RT (formerly Russia Today), a Russian state-sponsored media outlet, to create fake online personas and disseminate misleading content. Since at least 2022, Meliorator has been employed to spread disinformation targeting the US, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel, as detailed in a joint advisory released by US, Canadian, and Dutch security services.

Meliorator is designed to create fake social media profiles that appear to be real individuals, primarily from the US. These bots can generate original posts, follow users, like, comment, repost, and gain followers. They are capable of mirroring and amplifying existing Russian disinformation narratives. The identities of these bots are crafted based on specific parameters like location, political ideologies, and biographical data. Meliorator can also group bots with similar ideologies to enhance their personas.

Moreover, to avoid detection, the bot accounts followed genuine accounts aligned with their fabricated political leanings, and most had amassed over 100,000 followers. As of June 2024, Meliorator was only operational on X, but there are indications that its functionality may have expanded to other social media networks.

The US Justice Department (DOJ) announced the seizure of two domain names and the search of nearly a thousand social media accounts used by Russian actors to establish an AI-enhanced bot farm with Meliorator’s assistance. The bot farm operators registered fictitious social media accounts using private email servers linked to the seized domain names. The FBI took control of these domains, while social media platform X (formerly Twitter) voluntarily suspended the remaining identified bot accounts for violating terms of service.

FBI Director Christopher Wray emphasised that this marks a significant step in disrupting a Russian-sponsored AI-enhanced disinformation bot farm. The goal of the bot farm was to use AI to scale disinformation efforts, undermining partners in Ukraine and influencing geopolitical narratives favouring the Russian government. These accounts commonly posted pro-Kremlin content, including videos of President Vladimir Putin and criticism of the Ukrainian government.

US authorities have linked the development of Meliorator, begun in early 2022, to a former deputy editor-in-chief at RT. RT viewed the bot farm as an alternative means of distributing information beyond its television broadcasts, especially after going off the air in the US in early 2022. The Kremlin approved and financed the bot farm, with Russia’s Federal Security Service (FSB) having access to the software to advance its goals.

The DOJ highlighted that the use of US-based domain names by the FSB violates the International Emergency Economic Powers Act, and the associated payments breach US money laundering laws. Deputy Attorney General Lisa Monaco stated that the DOJ and its partners will not tolerate the use of AI by Russian government actors to spread disinformation and sow division among Americans.

Why does it matter?

The disruption of the Russian operation comes just four months before the US presidential election, a period during which security experts anticipate heightened hacking and covert social media influence attempts by foreign adversaries. Attorney General Merrick Garland noted that this is the first public accusation against a foreign government for using generative AI in a foreign influence operation.