TikTok fails disinformation test ahead of EU elections, study reveals

A recent study by Global Witness has revealed alarming deficiencies in TikTok’s ability to manage disinformation related to the upcoming EU elections. The investigation tested the platform’s content moderation by submitting 16 disinformation ads. TikTok approved all of these ads, which included false information about polling station closures, incorrect voting methods, and incitements to violence.

The Global Witness study developed 16 disinformation ads relating to the upcoming European parliamentary elections in Ireland and submitted them to X, YouTube and TikTok, allowing at least 48 hours for review. Global Witness also said it is submitting a complaint to Irish regulators over TikTok’s failure to mitigate electoral threats. Examples of the fake ads include: 

  • Incorrect voting method: ‘Don’t vote in person this EU election! New reports find that ballots are being altered by election workers. Vote instead by texting 05505’
  • Incorrect voting requirements: ‘New rules change ID requirement for 2024 elections. If you don’t have a valid driving licence, you can’t vote this election day’ 
  • Polling station closure: ‘Following an unprecedented spike in contagious diseases, the Electoral Commission is to close all polling stations to protect staff and public health. Please vote online instead.’

While YouTube rejected most of the disinformation ads (14 out of 16) and X blocked all of them, suspending the submitting account’s ability to run ads, TikTok approved every single one: a concerning 100% approval rate. This highlights a significant vulnerability in TikTok’s moderation process, especially given its large and youthful user base. 

Why does it matter?

TikTok’s failure to moderate election-related content effectively violates both its own policies, which ‘do not allow misinformation or content about civic and electoral process that may result in voter interference, disrupt the peaceful transfer of power, or lead to off-platform violence’, and the EU’s Digital Services Act, which requires very large online platforms (VLOPs) to mitigate electoral risks by ensuring that they ‘are able to react rapidly to manipulation of their service aimed at undermining the electoral process and attempts to use disinformation and information manipulation to suppress voters.’

A similar study on TikTok led by the EU Disinfo Lab further emphasises the issue, highlighting concerns about algorithmic amplification, user demographics, and policy enforcement. TikTok’s recommendation algorithm often promotes sensational and misleading content, accelerating the spread of disinformation, and because the platform’s user base skews young, it can influence a critical segment of the electorate. Although TikTok has policies against political ads and disinformation, their enforcement could be more consistent and effective.

In TikTok’s response to the study, the platform acknowledged a violation of its policies, attributing the approvals to ‘human error’ identified in an internal investigation and saying it has implemented new processes to prevent a recurrence.

New York to require parental consent for social media access

New York lawmakers are preparing to ban social media companies from using algorithms to control content seen by youth without parental consent. The legal initiative, expected to be voted on this week, aims to protect minors from automated feeds and notifications during overnight hours unless parents approve. The move comes as social media platforms face increasing scrutiny for their addictive nature and impact on young people’s mental health.

Earlier this year, New York City Mayor Eric Adams announced a lawsuit against major social media companies, including Facebook and Instagram, for allegedly contributing to a mental health crisis among youth. Similar actions have been taken by other states, with Florida recently passing a law requiring parental consent for minors aged 14 and 15 to use social media and banning those under 14 from accessing these platforms.

Why does it matter?

The trend started with Utah, which became the first state to regulate children’s social media access last year. States like Arkansas, Louisiana, Ohio, and Texas have since followed suit. The heightened regulation is affecting social media companies, with shares of Meta and Snap seeing a slight decline in extended trading.

X now officially allows adult content

X, formerly known as Twitter, has officially updated its rules to allow the posting of adult and graphic content. Users can now share consensually produced NSFW (not safe for work) content, including AI-generated images and videos, provided they are clearly labelled. This change is a formal acknowledgement of practices that have existed unofficially for years, especially under the platform’s current ownership by Elon Musk, who has been exploring ways to host and potentially monetise adult content.

The new guidelines emphasise that while adult content is permitted, it must be consensually produced and appropriately labelled to prevent unintended exposure, particularly to minors. X continues to prohibit excessively gory content and any depiction of sexual violence, aligning with its existing violent content policies. The platform also requires users to mark posts containing sensitive media, ensuring that such content is only visible to users over 18 who have provided their birth dates.

This move opens the door for X to potentially develop services around adult content, possibly positioning itself as a competitor to platforms like OnlyFans. The prevalence of adult content on X has been significant, with about 13% of posts in 2022 containing such material, a figure that has likely increased with the proliferation of porn bots. Regulatory bodies will closely monitor X’s efforts to manage and eliminate non-consensual porn and child sexual abuse material (CSAM), especially following past fines and warnings from countries like Australia and India.

Adobe removes AI imitations after Ansel Adams estate complaint

Adobe faced backlash this weekend after the Ansel Adams estate criticised the company for selling AI-generated imitations of the famous photographer’s work. The estate posted a screenshot on Threads showing ‘Ansel Adams-style’ images on Adobe Stock, stating that Adobe’s actions had pushed them to their limit. Adobe allows AI-generated images on its platform but requires users to have appropriate rights and prohibits content created using prompts with other artists’ names.

In response, Adobe removed the offending content and reached out to the Adams estate, which claimed it had been contacting Adobe since August 2023 without resolution. The estate urged Adobe to respect intellectual property and support the creative community proactively. Adobe Stock’s Vice President, Matthew Smith, noted that moderators review all submissions, and the company can block users who violate rules.

Adobe’s Director of Communications, Bassil Elkadi, confirmed they are in touch with the Adams estate and have taken appropriate steps to address the issue. The Adams estate has thanked Adobe for the removal and expressed hope that the issue is resolved permanently.

Microsoft notes little AI impact on EU election disinformation

Microsoft’s president, Brad Smith, announced that the company has yet to observe significant use of AI to create disinformation campaigns in the upcoming European Parliament elections. This comes as Microsoft plans to invest 33.7 billion Swedish crowns ($3.21 billion) to expand its cloud and AI infrastructure in Sweden over the next two years. Smith acknowledged the risks of AI-generated deepfakes and abusive content but noted that the European elections have not been targeted heavily by such efforts.

Smith highlighted that while AI-generated fakes have been increasingly used in elections in countries like India, the United States, Pakistan, and Indonesia, the European context appears less affected. For instance, in India, deepfake videos of Bollywood actors criticising Prime Minister Narendra Modi and supporting the opposition went viral. In the EU, a Russian-language video falsely claimed that citizens were fleeing Poland for Belarus, but the EU’s disinformation team debunked it.

Ahead of the European Parliament elections from June 6-9, Microsoft’s training for candidates on monitoring AI-related disinformation appears to be paying off. While stopping short of declaring victory prematurely, Smith emphasised that current threats focus more on events like the Olympics than on the elections. This development follows the International Olympic Committee’s ban on the Russian Olympic Committee for recognising councils in Russian-occupied regions of Ukraine. Microsoft plans to release a detailed report on this issue soon.

EU alleges Russian disinformation ahead of elections

European governments are raising alarms over alleged Russian disinformation campaigns as the EU prepares for its parliamentary elections from June 6-9. They claim Moscow, alongside pro-Kremlin actors, is engaged in a broad interference effort to discredit European governments and destabilise the EU. Key tactics allegedly include the spread of manipulated information, deepfake videos, and fake news websites designed to resemble legitimate sources. For instance, the Czech Republic has identified voiceofeurope.com as a leading platform for pro-Russian influence operations, allegedly funded by Ukrainian politician Viktor Medvedchuk.

Russia, however, vehemently denies these accusations, labelling them as part of a Western-led information war aimed at tarnishing its reputation. Russian officials argue that the West is suppressing alternative viewpoints and has banned Russian state media such as RIA Novosti and Izvestia. They contend that the West’s intolerance for dissenting narratives fuels these allegations, positioning Russia as a fabricated enemy.

European officials also point to the sophisticated use of ‘deepfakes’ and ‘doppelganger’ sites in these disinformation efforts. Deepfakes, created with AI to produce realistic fake media, have been used to spread false narratives, such as the fake recording of Slovak politician Michal Simecka discussing vote rigging. Doppelganger sites, mimicking legitimate news sources, have disseminated false information, including fabricated stories about France’s policies.

In response, EU leaders emphasise the need for a strong, democratic Europe, while new regulations under the Digital Services Act demand swift action against illegal content and deceptive practices on social media platforms. Companies like Meta, Google, and TikTok have announced measures to combat disinformation before the elections.

Survey reveals concerns over potential AI abuse in US presidential election

A recent survey conducted by the Elon University Poll and the Imagining the Digital Future Center at Elon University has revealed widespread concerns among American adults regarding the impact of AI on the upcoming presidential election. According to the survey, more than three-fourths of respondents believe that abuses involving AI systems will influence the election outcome. Specifically, 73% of respondents fear AI will be used to manipulate social media, while 70% anticipate the spread of fake information through AI-generated content like deepfakes.

Moreover, the survey highlights concerns about targeted AI manipulation to dissuade certain voters from participating in the election, with 62% of respondents expressing apprehension about this possibility. Overall, 78% of Americans anticipate at least one form of AI abuse affecting the election, while over half believe all three identified forms are likely to occur. Lee Rainie, director of Elon University’s Imagining the Digital Future Center, notes that voters in the USA anticipate facing significant challenges in navigating misinformation and voter manipulation tactics facilitated by AI during the campaign period.

The survey underscores a strong consensus among Americans regarding the accountability of political candidates who maliciously alter or fake photos, videos, or audio files. A resounding 93% of respondents believe such candidates should face punishment, with opinions split between removal from office (46%) and criminal prosecution (36%). Additionally, the survey reveals concerns about the public’s ability to discern faked media, as 69% of respondents lack confidence in most voters’ ability to detect altered content.

OpenAI uncovers misuse of AI in deceptive campaigns

OpenAI, led by Sam Altman, announced it had disrupted five covert influence operations that misused its AI models for deceptive activities online. Over the past three months, actors from Russia, China, Iran, and Israel used AI to generate fake comments, articles, and social media profiles. These operations targeted issues such as Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, and politics in Europe and the US, aiming to manipulate public opinion and influence political outcomes.

Despite these efforts, OpenAI stated that the deceptive campaigns did not see increased audience engagement. The company emphasised that these operations included both AI-generated and manually-created content. OpenAI’s announcement highlights ongoing concerns about using AI technology to spread misinformation.

In response to these threats, OpenAI has formed a Safety and Security Committee, led by CEO Sam Altman and other board members, to oversee the training of its next AI model. Additionally, Meta Platforms reported similar findings of likely AI-generated content used deceptively on Facebook and Instagram, underscoring the broader issue of AI misuse in digital platforms.

TikTok aims to address US security concerns with new algorithm

TikTok is developing a separate recommendation algorithm for its 170 million US users in response to concerns from American lawmakers who are pushing to ban the app. The effort, ordered by ByteDance, TikTok’s Chinese parent company, involves separating millions of lines of code to create an independent US version, potentially paving the way for a divestiture of US assets.

The initiative, which predates a bill mandating TikTok’s US operations’ sale, is a response to bipartisan concerns that the app could provide Beijing with access to extensive user data. Despite ByteDance’s legal challenge to the new law, engineers continue to work on the complex and lengthy process of code separation, which is expected to take over a year.

TikTok has stated that selling its US assets is not feasible, citing commercial, technological, and legal constraints. However, the company is exploring options to demonstrate its US operations’ independence, including possibly open-sourcing parts of its algorithm. The success of this separation project could impact TikTok US’s performance, which currently relies on ByteDance’s engineering resources.

Meta discovers ‘likely AI-generated’ content praising Israel

Meta reported finding likely AI-generated content used deceptively on Facebook and Instagram, praising Israel’s handling of the Gaza conflict in comments under posts from global news organisations and US lawmakers. This campaign, linked to the Tel Aviv-based political marketing firm STOIC, targeted audiences in the US and Canada by posing as various concerned citizens. STOIC has not commented on the allegations.

Meta’s quarterly security report marks the first disclosure of text-based generative AI technology used in influence operations since its emergence in late 2022. While AI-generated profile photos have been identified in past operations, the use of text-based AI raises concerns about more effective disinformation campaigns. Despite this, Meta’s security team successfully disrupted the Israeli campaign early and maintained confidence in their ability to detect such networks.

The report detailed six covert influence operations disrupted in the first quarter, including an Iran-based network focused on the Israel-Hamas conflict, which did not use generative AI. As Meta and other tech giants continue to address potential AI misuse, upcoming elections in the EU and the US will test their defences against AI-generated disinformation.