Russian journalist targeted by deepfake in growing disinformation threat

The incident sparks worries about fake news reports featuring real news anchors, which could spread misinformation and undermine trust in the media.


In a concerning development, journalist Ksenia Turkova of VOA’s Russian Service has been targeted by a deepfake video, underscoring the growing threat of disinformation through AI-generated content. The video, which circulated on Facebook, falsely depicts Turkova endorsing a cryptocurrency trading product.

The incident has raised concerns about fabricated news reports featuring real news anchors, which could spread misinformation and erode trust in the media. Turkova, who initially believed the video to be genuine, now fears further deepfake manipulation that could tarnish her reputation as a journalist. Experts are calling for government regulation to address the growing threat posed by AI-generated disinformation.

Why does it matter?

This isn’t an isolated incident: journalists from various outlets have been targeted by deepfake impersonations designed to spread false information. Disinformation researcher John Scott-Railton of the Citizen Lab notes that creating fake news videos used to be resource-intensive, but that is no longer the case. Regulatory responses are taking shape worldwide, exemplified by the EU AI Act and a recent executive order signed by US President Joe Biden to guide the development of safe, secure, and trustworthy AI. Nevertheless, whether these measures can prevent the misuse of open-source AI systems remains a critical concern.