EU probes Meta platforms for deceptive ads

The European Commission has launched an investigation into Meta Platforms’ Facebook and Instagram over suspected failures to combat deceptive advertising and disinformation ahead of the European Parliament elections. Concerns centre not only on external actors such as Russia, China, and Iran, but also on political parties and organisations within the EU that have resorted to false information to sway voters in the 6–9 June elections.

Under the Digital Services Act (DSA), big tech companies must take stronger measures against illegal and harmful content on their platforms or face fines of up to 6% of their global annual turnover. EU digital chief Margrethe Vestager expressed concerns about Meta’s moderation practices and the transparency of its advertising and content moderation procedures, prompting the Commission to open proceedings to assess Meta’s compliance with the DSA.

Meta, with over 250 million monthly active users in the EU, defended its risk-mitigation process but faced suspicion from the Commission regarding its compliance with DSA obligations. Specific concerns include Meta’s handling of deceptive advertisements, disinformation campaigns, and coordinated inauthentic behaviour, as well as the absence of an effective third-party real-time civic discourse and election-monitoring tool ahead of the European Parliament elections.

The European Commission also raised issues regarding Meta’s decision to phase out its disinformation-tracking tool, CrowdTangle, without a suitable replacement. Meta now has five working days to inform the EU about any remedial actions to address the Commission’s concerns, signalling a pivotal moment in the ongoing battle against online misinformation and harmful content ahead of significant electoral events.

Meta platforms face a probe by EU for disinformation handling

EU regulators are gearing up to launch an investigation into Meta Platforms amid concerns about the company’s efforts to combat disinformation, mainly from Russia and other nations. According to a report by the Financial Times, the regulators are alarmed by Meta’s purported inadequacy in curbing the spread of political advertisements that could undermine the integrity of electoral processes. Citing sources familiar with the matter, the report suggests that Meta’s content moderation measures may not be addressing the issue effectively.

While the investigation is expected to be initiated imminently, the European Commission is anticipated to refrain from explicitly targeting Russia in its official statement. Instead, the focus will be on the broader problem of foreign actors manipulating information. Meta Platforms and the European Commission have yet to respond to requests for comment, indicating the gravity and sensitivity of the impending probe.

Why does it matter?

The timing of the investigation coincides with a significant year for elections across the globe, with numerous countries, including the UK, Austria, and Georgia, preparing to elect new leaders. Additionally, the European Parliament elections are slated for June, heightening the urgency for regulatory scrutiny over platforms like Meta. This development underscores the growing concern among regulators regarding the influence of disinformation on democratic processes, prompting concerted efforts to address these challenges effectively.

AI ‘girlfriend’ ads raise concerns on Meta platforms

Meta’s integration of AI across its platforms, including Facebook, Instagram, and WhatsApp, has raised concerns as Wired reports the proliferation of explicit ads for AI ‘girlfriends’ on these platforms. The investigation found tens of thousands of such ads violating Meta’s adult content advertising policy, which prohibits nudity, sexually suggestive content, and sexual services. Despite this policy, these ads continue to circulate on Meta’s platforms, sparking criticism from various communities, including sex workers, educators, and LGBTQ individuals, who feel unfairly targeted by Meta’s content policies.

For years, users have criticised Meta for what they perceive as discriminatory enforcement of its community guidelines. LGBTQ and sex educator accounts have reported instances of shadowbanning on Instagram, while WhatsApp has banned accounts associated with sex work. Additionally, Meta’s advertising approval process has come under scrutiny, with reports of gender-biased rejections of ads, such as those for sex toys and period care products. Despite these issues, explicit AI ‘girlfriend’ ads have evaded Meta’s enforcement mechanisms, highlighting a gap in the company’s content moderation efforts.

When approached, Meta acknowledged the presence of these ads and stated its commitment to removing them promptly. A Meta spokesperson emphasised the company’s ongoing efforts to improve its systems for detecting and removing ads that violate its policies. However, despite Meta’s assurances, Wired found that thousands of these ads remained active even days after the initial inquiry.

Meta spokesperson sentenced to six years in Russia

A military court in Moscow has reportedly sentenced Meta Platforms spokesperson Andy Stone to six years in prison in absentia for ‘publicly defending terrorism.’ This ruling comes amid Russia’s crackdown on Meta, which was designated as an extremist organisation in the country, resulting in the banning of Facebook and Instagram in 2022 due to Russia’s conflict with Ukraine.

Meta has yet to comment on the reported sentencing of Stone, who serves as the company’s communications director. Stone himself was unavailable for immediate response following the court’s decision. Stone’s lawyer, Valentina Filippenkova, said they intend to appeal the verdict and will seek an acquittal.

The Russian interior ministry initiated a criminal investigation against Stone late last year, although the specific charges were not disclosed then. According to state investigators, Stone’s online comments allegedly defended ‘aggressive, hostile, and violent actions’ against Russian soldiers involved in what Russia terms its ‘special military operation’ in Ukraine.

Why does it matter?

Stone’s sentencing underscores Russia’s stringent stance on online content related to its military activities in Ukraine, extending repercussions to individuals associated with Meta Platforms. The circumstances also reflect the broader context of heightened scrutiny and legal actions against perceived dissent and criticism within Russia’s digital landscape.

Meta shifts away from politics ahead of 2024 US election

In a significant shift ahead of the Trump-Biden rematch, Meta is distancing itself from politics after years of positioning itself as a key player in political discourse. The company has reduced the visibility of political content on Facebook and Instagram, imposed new rules on political advertisers, and downsized the team responsible for engaging with politicians and campaigns. The shift has reshaped digital outreach strategies for the 2024 US election and could transform political communication on social media platforms.

Meta’s retreat from politics follows years of controversy and public scrutiny, including outrage over Russian interference in the 2016 presidential race and the role of social media in the 6 January 2021 attack on the US Capitol. The company’s efforts to minimise political content in users’ news feeds reflect a broader trend away from news and politics on social media platforms. This shift has impacted major news outlets, with significant declines in user engagement observed across platforms.

As Meta redefines its approach to political content, political campaigns are adapting their strategies to navigate the new landscape. The Biden campaign has increased its social media presence to drive engagement, while Trump has turned to alternative platforms like Truth Social. However, both parties recognise the continued importance of Facebook as a vital tool for reaching voters despite the platform’s evolving restrictions on political advertising and content.

Why does it matter?

The changing dynamics of political communication on social media raise concerns about access to information and the role of tech companies in shaping public discourse. With political content increasingly marginalised on platforms like Facebook and Instagram, questions arise about how voters will stay informed about key issues during elections. As campaigns adjust to Meta’s evolving policies, the impact on democratic discourse and the dissemination of political information remains a topic of debate and scrutiny.

Apple removes WhatsApp and Threads from China app store

Apple has removed the Meta-owned apps WhatsApp and Threads from its app store in China, complying with orders from the country’s internet regulator, the Cyberspace Administration, which cited national security concerns. According to Apple, the move was made in accordance with local laws, despite its disagreement with the order. The Chinese government allegedly found content on WhatsApp and Threads regarding China’s president, Xi Jinping, that violated cybersecurity laws, though specifics were unclear.

This action intensifies the technology dispute between the US and China, with Apple and Meta caught in the middle. In the US, lawmakers are considering a bill that would compel ByteDance to divest its popular video app TikTok, citing national security risks due to its ties to China. Meanwhile, the White House is tightening restrictions on Beijing’s access to advanced technologies and American financing.

Apple, reliant on China for a significant portion of its revenue, has complied with Beijing’s demands in the past, including blocking various apps and establishing a data centre to store Chinese users’ iCloud data. As tensions persist, Apple has started diversifying its supply chain, reducing its dependence on Chinese manufacturing.

While Meta’s fallout from China may be less direct, the company faces challenges elsewhere, particularly in its strained relationship with Apple over privacy and data tracking issues. In the US, efforts to address concerns over TikTok’s ownership and data handling are gaining momentum, with legislation being packaged alongside other bills related to foreign aid.

Meta launches Llama 3 to challenge OpenAI

Meta Platforms launched its latest large language model, Llama 3, and a real-time image generator designed to update pictures as users type prompts. The development aims to catch up with the generative AI market leader, OpenAI. The models are set to be integrated into Meta’s virtual assistant, Meta AI, which the company claims is the most advanced among its free-to-use counterparts. Performance comparisons pit its reasoning, coding, and creative writing capabilities against models from competitors such as Google and Mistral AI.

Meta is giving prominence to its updated Meta AI assistant within its various platforms, positioning it to compete more directly with OpenAI’s ChatGPT. The assistant will feature prominently in Meta’s Facebook, Instagram, WhatsApp, and Messenger apps, along with a standalone website offering various functionalities, from creating vacation packing lists to providing homework help.

The development of Llama 3 is part of Meta’s efforts to challenge OpenAI’s leading position in generative AI. The company has openly released its Llama models for developers, aiming to disrupt rivals’ revenue plans with powerful free options. However, critics have raised safety concerns about unscrupulous actors’ potential misuse of such models.

While Llama 3 currently outputs only text, future versions will incorporate multimodal capabilities, generating text and images. Meta CEO Mark Zuckerberg emphasised the performance of Llama 3 versions against other free models, indicating a growing performance gap between free and proprietary models. The company aims to address previous issues with understanding context by leveraging high-quality data and significantly increasing the training data for Llama 3.

Meta oversight board reviews handling of sexually explicit AI-generated images

Meta Platforms’ Oversight Board is currently examining how the company handled two AI-generated sexually explicit images of female celebrities that circulated on Facebook and Instagram. The board, which operates independently but is funded by Meta, aims to evaluate Meta’s policies and enforcement practices surrounding AI-generated pornographic content. To prevent further harm, the board did not disclose the names of the celebrities depicted in the images.

Advancements in AI technology have led to an increase in fabricated content online, particularly explicit images and videos portraying women and girls. This surge in ‘deepfakes’ has posed significant challenges for social media platforms in combating harmful content. Earlier this year, Elon Musk’s social media platform X faced difficulties managing the spread of false explicit images of Taylor Swift, prompting temporary restrictions on related searches.

The Oversight Board highlighted two specific cases: one involving an AI-generated nude image resembling an Indian public figure shared on Instagram and another depicting a nude woman resembling an American public figure in a Facebook group for AI creations. Meta initially removed the latter image for violating its bullying and harassment policy but left the former image up until the board selected it for review.

In response to the board’s scrutiny, Meta acknowledged the cases and committed to implementing the board’s decisions. The prevalence of AI-generated explicit content underscores the need for clearer policies and stricter enforcement measures by tech companies to address the growing issue of ‘deepfakes’ online.

Meta temporarily suspends Threads in Türkiye

Meta Platforms Inc. announced that it will temporarily suspend its social networking app Threads in Türkiye starting 29 April to comply with an interim order from the Turkish Competition Authority. The decision, detailed in a blog post on Monday, aims to address concerns related to data sharing between Instagram and Threads as the competition watchdog investigates potential market dominance abuses by Meta. Despite this move, Meta reassured users that the shutdown of Threads in Türkiye will not affect other Meta services like Facebook, Instagram, or WhatsApp within the country or Threads in other global locations.

The Turkish Competition Authority initiated an investigation into Meta in December over possible competition law violations stemming from the integration of Instagram with Threads. The interim order, which restricts data merging between the two platforms, will remain effective until the authority reaches a final decision. Meta expressed disagreement with this decision, asserting its compliance with Turkish legal requirements and indicating plans to appeal the ruling.

Threads, Meta’s microblogging venture launched in July 2023, aimed to expand beyond Instagram’s media-centric format by offering a predominantly text-based social platform where users could share photos, links, and short videos. While Threads quickly gained traction in the US and over 100 other countries, its European debut was delayed until December 2023 due to stringent privacy regulations in the region. Despite this setback, Meta remains committed to navigating regulatory challenges while advancing its diverse social networking offerings.

New OpenAI and Meta AI models close to human-like reasoning

Meta and OpenAI are close to unveiling advanced AI models that can reason and plan, according to a Financial Times report. OpenAI’s COO, Brad Lightcap, hinted at the upcoming release of GPT-5, which he said would make significant progress in solving ‘hard problems’ of reasoning.

Yann LeCun, Meta’s chief AI scientist, and Joelle Pineau, VP of AI Research, envision AI agents capable of complex, multi-stage operations. The enhanced reasoning should enable the AI models to ‘search over possible answers,’ ‘plan sequences of actions,’ and model out the outcomes and consequences before execution.

Why does it matter?

Meta is getting ready to launch Llama 3 in various model sizes optimised for different apps and devices, including WhatsApp and Ray-Ban smart glasses. OpenAI is less open about its plans for GPT-5, but Lightcap expressed optimism about the model’s potential to reason.

Getting AI models to reason and plan is a critical step towards artificial general intelligence (AGI). Multiple definitions of AGI exist, but it can broadly be described as AI capable of performing at or beyond human level across a wide range of tasks.

Some scientists and experts have expressed concerns about building technology that will outperform human abilities. AI ‘godfathers’ Yoshua Bengio and Geoffrey Hinton have even warned of the threats AI could pose to humanity. Both Meta and OpenAI claim to be aiming for AGI, which could be worth trillions for the company that achieves it.