Report reveals surge in fake accounts on X targeting US presidential election

Fake accounts discussing the US presidential election are increasing on the social media platform X, according to a report by Cyabra, an Israeli tech company specialising in AI-driven analysis. The report found that 15% of accounts praising former President Donald Trump and criticising President Joe Biden are fake, while 7% of accounts praising Biden and criticising Trump are fake.

Cyabra’s study analysed posts on X over two months, starting 1 March, focusing on popular hashtags and sentiment. The analysis showed a tenfold increase in fake accounts during March and April. Specifically, 12,391 out of 94,363 pro-Trump accounts and 803 out of 10,065 pro-Biden accounts were found to be bogus.

The report also noted that fake pro-Trump accounts appear to be part of a coordinated campaign pushing messages like ‘Vote for Trump’ and ‘Biden is the worst president the US has ever had,’ while fake pro-Biden accounts did not show coordinated activity.
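
Cyabra has not published the methodology behind its coordination findings. Purely as an illustrative sketch, one common signal of coordinated inauthentic behaviour is the same message repeated verbatim across many distinct accounts, which a simple grouping pass can surface; the function name, threshold, and sample data below are hypothetical:

```python
from collections import defaultdict

def find_coordinated_messages(posts, min_accounts=50):
    """Flag message texts pushed by unusually many distinct accounts.

    `posts` is an iterable of (account_id, text) pairs -- a hypothetical
    input format; real detection pipelines also weigh posting times,
    follower graphs, and account-creation patterns.
    """
    accounts_by_text = defaultdict(set)
    for account_id, text in posts:
        # Normalise lightly so whitespace/case variations still group together.
        key = ' '.join(text.lower().split())
        accounts_by_text[key].add(account_id)

    # Texts repeated verbatim by many accounts are coordination candidates.
    return {text: len(accounts)
            for text, accounts in accounts_by_text.items()
            if len(accounts) >= min_accounts}

# A slogan posted verbatim by 60 accounts is flagged; an organic post is not.
sample = [(f'acct_{i}', 'Vote for Trump') for i in range(60)]
sample.append(('acct_x', 'Interesting debate last night.'))
print(find_coordinated_messages(sample))  # {'vote for trump': 60}
```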

Although X did not respond to requests for comment, Elon Musk recently announced efforts to purge bots and trolls from the platform, including testing a ‘Not a Bot’ program in New Zealand and the Philippines.

Why does it matter?

Since Russia’s interference in the 2016 election, social media platforms have faced increased scrutiny. With the upcoming election on 5 November, Cyabra’s findings on fake accounts are a cause for alarm among election officials and misinformation experts. The situation is all the more concerning given X’s history of downplaying the presence of fake accounts on its platform. According to Reuters, in May 2022, Twitter claimed that fewer than 5% of its daily active users were ‘false or spam’ based on an internal review. Cyabra, however, estimated that 13.7% of Twitter profiles were inauthentic.

YouTube threatens to block Russian rights group’s channel

A Russian rights group, OVD-Info, reported that YouTube has threatened to block one of its channels in Russia, called Kak Teper, which discusses the war in Ukraine and political issues and has 100,000 subscribers.

Reuters reports that YouTube’s warning followed a complaint from Russian regulator Roskomnadzor, claiming the content violated information technology laws. OVD-Info is negotiating with YouTube and Google, labelling the potential block as political censorship.

YouTube did not specify which law was violated and did not respond to inquiries about the case, but it confirmed the reinstatement of videos from other opposition channels. Blocking YouTube entirely in Russia could prove unpopular, given the platform’s tens of millions of monthly users there.

Why does it matter? 

OVD-Info’s Kak Teper might become the first human rights channel to be blocked in its entirety on YouTube, warns Natalia Krapiva of Access Now, noting the growing threat to civil society’s presence on the platform. While Russia has blocked most foreign social media, YouTube has so far avoided a ban, though not without consequences: it has been repeatedly fined for hosting content deemed illegal by Russian authorities.

ChatGPT faces scrutiny from EU privacy watchdog over data accuracy

The EU’s privacy watchdog task force has raised concerns over OpenAI’s ChatGPT chatbot, stating that the measures taken to ensure transparency are insufficient to comply with the data accuracy principle. In a report released on Friday, the task force emphasised that while efforts to prevent misinterpretation of ChatGPT’s output are beneficial, they do not fully address concerns regarding data accuracy.

The task force was established by Europe’s national privacy watchdogs following concerns raised by authorities in Italy regarding ChatGPT’s usage. Despite ongoing investigations by national regulators, a comprehensive overview of the results has yet to be provided. The findings presented in the report represent a common understanding among national authorities.

Data accuracy is a fundamental principle of EU data protection law. The report highlights the probabilistic nature of ChatGPT’s system, which can lead to biased or false outputs. Furthermore, it warns that users may perceive ChatGPT’s outputs as factually accurate regardless of their actual accuracy, posing potential risks, especially where information about individuals is concerned.

Biden administration urges action against AI-generated sexual abuse images

The Biden administration is urging the tech and financial industries to combat the proliferation of AI-generated sexual abuse images, Time reports. Generative AI tools have made it easy to create explicit deepfakes, often targeting women, children, and LGBTQ+ individuals, with little recourse for the victims. With no federal legislation addressing the issue, the White House is calling for voluntary cooperation from companies to implement measures to stop these nonconsensual images.

Biden’s chief science adviser, Arati Prabhakar, noted the rapid increase in such abusive content and the need for companies to take responsibility. A document shared with the Associated Press outlines actions for various stakeholders, including AI developers, financial institutions, cloud providers, and app store gatekeepers, to restrict the monetisation and distribution of explicit images, particularly those involving minors. The administration also stressed the importance of stronger enforcement of terms of service and better mechanisms for victims to remove nonconsensual images online.

Why does it matter?

Last summer, major tech companies committed to AI safeguards, followed by an executive order from Biden to ensure AI development prioritises public safety, including measures to detect AI-generated child abuse imagery. However, high-profile incidents, such as the AI-generated deepfakes of Taylor Swift and the rise of such images in schools, reveal an urgent need for action and the potential insufficiency of voluntary commitments from companies. Recently, Forbes reported that AI-generated images of young girls in provocative outfits are spreading on TikTok and Instagram, drawing inappropriate comments from older men and raising concerns about potential exploitation.

GLAAD report: major social media platforms fail LGBTQ safety standards

GLAAD, the LGBTQ media advocacy organisation, gave failing grades to most major social media platforms for their handling of safety, privacy, and expression for the LGBTQ community online, as reported by The Hill. In the fourth annual Social Media Safety Index, GLAAD assessed hate, disinformation, anti-LGBTQ tropes, content suppression, AI, data protection, and the link between online hate and real-world harm.

Five of the six leading social media platforms assessed received failing grades: X (formerly Twitter), YouTube, Facebook, Instagram, and Threads. TikTok was the only platform not to receive an F, instead earning a D+ due to improvements in its Anti-Discrimination Ad Policy, which includes preventing advertisers from wrongfully targeting or excluding users. Threads received its first F since its launch in 2023, while the other four failed for the third consecutive year, with Facebook’s and Instagram’s ratings worsening from the previous year.

Why does it matter?

GLAAD uses this index to urge social media leaders to create safer environments for the LGBTQ community, noting a lack of enforcement of current policies in the digital sector and a clear link between online hate and increasing real-world violence and legislative attacks.

OpenAI strikes major content deal with News Corp

OpenAI, led by Sam Altman, has entered into a significant deal with media giant News Corp, securing access to content from its major publications. The agreement follows a recent content licensing deal with the Financial Times aimed at enhancing the capabilities of OpenAI’s ChatGPT. Such partnerships are essential for training AI models and provide a financial boost to news publishers traditionally excluded from the profits generated by internet companies distributing their content.

The financial specifics of the latest deal remain undisclosed, though the Wall Street Journal, a News Corp entity, reported that it could be valued at over $250 million across five years. The deal ensures that content from News Corp’s publications, including the Wall Street Journal, MarketWatch, and the Times, will not be immediately available on ChatGPT upon publication. The move is part of OpenAI’s ongoing efforts to secure diverse data sources, following a similar agreement with Reddit.

The announcement has positively impacted News Corp’s market performance, with shares rising by approximately 4%. OpenAI’s continued collaboration with prominent media platforms underscores its commitment to developing sophisticated AI models capable of generating human-like responses and comprehensive text summaries.

FCC proposes disclosure for AI-generated political ads

The US Federal Communications Commission (FCC) has proposed a requirement for political ads to disclose the use of AI-generated content. Chairwoman Jessica Rosenworcel announced Wednesday that the FCC would seek public comments on this potential rule. The initiative aims to ensure transparency in political advertising, allowing consumers to know when AI tools are utilised in the ads they view.

Under the proposed framework, candidate and issue ads would need to include disclosures about AI-generated content for cable, satellite TV, and radio providers, but not for streaming services like YouTube, which fall outside FCC regulation. The first step involves defining what constitutes AI-generated content and determining if such a regulation is necessary. The proposal marks the beginning of a fact-finding mission to develop new regulations.

The FCC document emphasises the public interest in protecting viewers from misleading or deceptive programming and promoting informed decision-making. While the proposal is still in its early stages, it reflects a growing concern about the impact of AI on political communication. The rule, if implemented, could deter low-effort AI-generated ads and help address deceptive practices in political advertising.

The FCC will gather more information on how this rule would interact with the Federal Trade Commission and the Federal Election Commission, which oversee advertising and campaign regulations. The timeline for the rule’s enforcement remains uncertain, pending further review and public input.

AI-generated child images on social media attract disturbing attention

AI-generated images of young girls, some as young as five, are spreading on TikTok and Instagram and drawing inappropriate comments from a troubling audience consisting mostly of older men, Forbes uncovers. The images depict children in provocative outfits and, while not illegal, are highly sexualised, prompting child safety experts to warn that they could pave the way to more severe exploitation.

The images are fuelling a sense of imminent danger, and platforms like TikTok and Instagram, both popular with minors, are struggling to address the issue. One popular account, ‘Woman With Chopsticks’, had 80,000 followers and viral posts viewed nearly half a million times across both platforms. A recent Stanford study revealed that the training data behind the AI tool Stable Diffusion 1.5 included child sexual abuse material (CSAM) involving real children, collected from various online sources.

Under federal law, tech companies must report suspected CSAM and exploitation to the National Center for Missing and Exploited Children (NCMEC), which then informs law enforcement. However, they are not required to remove the type of images discussed here. Nonetheless, NCMEC believes that social media companies should remove them, even if they exist in a legal grey area.

TikTok and Instagram assert that they have strict policies against AI-generated content involving minors in order to protect young people. TikTok bans AI-generated content depicting anyone under 18, while Meta removes material that sexualises or exploits children, whether real or AI-generated. Both platforms removed the accounts and posts identified by Forbes. Still, despite these policies, the ease of creating and sharing AI-generated images will remain a significant challenge for safeguarding children online.

Why does it matter?

The Forbes story reveals that such content, increasingly easy to find thanks to powerful recommendation algorithms, worsens online child exploitation, acting as a gateway to the exchange of more severe material and facilitating networking among offenders. One TikTok slideshow of young girls in pyjamas, posted on 13 January and found by the investigation, showed users moving to private messages. The Canadian Centre for Child Protection stressed that companies need to look beyond automated moderation to address how these images are shared and followed.

Concerns rise as Google implements AI for search engine answers

Google’s deployment of AI to condense search results is raising concern among publishers about potential declines in website traffic. The recently announced update to Google’s search engine will introduce AI-generated summaries of online queries in the US, with plans to expand globally. The change could diminish the significance of links and web pages for over a billion users, potentially shrinking audiences for bloggers, news outlets, and other online content creators who rely on Google referrals.

The AI-generated summaries, produced by Google’s Gemini technology, will offer concise insights drawn from various online sources, with minimal links. Google claims the change will encourage users to explore a wider range of websites, but critics, including Marketing AI Institute CEO Paul Roetzer, anticipate negative impacts on publishers and advertisers. With Google providing little information about the implications for these stakeholders, uncertainty looms over the future of online visibility and revenue generation.

Despite the concerns, some experts see potential opportunities for collaboration between AI companies and news outlets to leverage real-time data for AI models. Jeff Jarvis, a professor at the CUNY Graduate School of Journalism, suggests that news organisations with credible information could benefit from partnerships with AI giants. However, the advertising industry faces uncertainty, with Semasio CEO Jeff Ragovin warning of potential revenue losses and the need for better-targeted ads in the AI-driven search landscape.

Reddit partners with OpenAI for ChatGPT integration

Reddit has announced a new partnership with OpenAI, allowing the popular chatbot ChatGPT to access Reddit’s content. The partnership caused Reddit’s shares to surge by 12% in extended trading, and it is part of Reddit’s strategy to diversify its revenue streams beyond advertising, complementing its recent agreement with Alphabet, which enables Google’s AI models to use Reddit’s data.

OpenAI will utilise Reddit’s application programming interface (API) to access and distribute content, marking a big step in integrating AI with social media data. Additionally, OpenAI will serve as a Reddit advertising partner, potentially boosting Reddit’s advertising revenue.
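
The technical details of the OpenAI integration have not been disclosed. As a minimal sketch of what programmatic access to Reddit content looks like in general, the snippet below queries Reddit’s public JSON listing endpoint; the subreddit, User-Agent string, and parameters are illustrative placeholders, and production use requires OAuth credentials under Reddit’s API terms:

```python
import requests

# Public Reddit pages expose JSON listings; a descriptive User-Agent
# is expected, and unauthenticated requests are heavily rate-limited.
url = 'https://www.reddit.com/r/technology/top.json'
headers = {'User-Agent': 'example-digest-bot/0.1 (illustrative only)'}
params = {'limit': 5, 't': 'day'}  # top 5 posts of the past day

resp = requests.get(url, headers=headers, params=params, timeout=10)
resp.raise_for_status()

for child in resp.json()['data']['children']:
    post = child['data']
    print(post['score'], post['title'])
```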

Why does it matter?

The development follows Reddit’s IPO in March and its lucrative deal with Alphabet worth around $60 million annually. Investors see these partnerships as crucial for generating revenue from data sales, supplementing the company’s advertising income. Reddit’s recent financial reports indicate strong revenue growth and improving profitability, reflecting the success of its AI data deals and advertising initiatives. Consequently, Reddit’s stock has risen by 10.5% to $62.31, showing a significant increase since its market debut.