Meta introduces tools to fight disinformation ahead of EU elections

The European Commission announced on Tuesday that Meta Platforms has introduced measures to combat disinformation ahead of the EU elections. Meta has launched 27 real-time visual dashboards, one for each EU member state, to enable third-party monitoring of civic discourse and election activities.

This development comes after the European Commission opened an investigation into Meta last month for allegedly breaching EU online content rules. The investigation highlighted concerns that Meta’s Facebook and Instagram platforms were failing to adequately address disinformation and deceptive advertising.

While the formal proceedings against Meta continue, the European Commission stated that it would closely monitor the implementation of the new features to ensure their effectiveness in curbing disinformation.

CMA accepts Meta’s updated UK privacy compliance proposals

Meta Platforms has agreed to limit the use of certain data from advertisers on its Facebook Marketplace as part of an updated proposal accepted by the UK’s Competition and Markets Authority (CMA). The commitments aim to prevent Meta from exploiting its advertising customers’ data. The initial commitments, accepted by the CMA in November, included allowing competitors to opt out of having their data used to enhance Facebook Marketplace.

The British competition regulator has provisionally accepted Meta’s updated changes and is now seeking feedback from interested parties, with the consultation period closing on 14 June. Details of any further amendments to Meta’s initial proposals in the UK have yet to be disclosed. The decision reflects a broader effort by regulators to ensure fair competition and prevent dominant platforms from misusing data.

In November, Amazon committed to avoiding the use of marketplace data from rival sellers, thereby promoting a level playing field for third-party sellers. Both cases highlight the increasing scrutiny of major tech companies regarding their data practices and market power, aiming to foster a more competitive and transparent digital marketplace.

EU launches investigation into Facebook and Instagram over child safety

EU regulators announced on Thursday that Meta Platforms’ social media platforms, Facebook and Instagram, will be investigated for potential violations of EU online content rules relating to child safety, with breaches potentially resulting in significant fines. The scrutiny follows the EU’s implementation of the Digital Services Act (DSA) last year, which places greater responsibility on tech companies to address illegal and harmful content on their platforms.

The European Commission has expressed concerns that Facebook and Instagram have not adequately addressed risks to children, prompting an in-depth investigation. Issues highlighted include the potential for the platforms’ systems and algorithms to promote behavioural addictions among children and facilitate access to inappropriate content, leading to what the Commission refers to as ‘rabbit-hole effects’. Additionally, concerns have been raised regarding Meta’s age assurance and verification methods.

Why does it matter?

Meta, formerly known as Facebook, is already under EU scrutiny over election disinformation, particularly concerning the upcoming European Parliament elections. Violations of the DSA can result in fines of up to 6% of a company’s annual global turnover, indicating the seriousness with which EU regulators are approaching these issues. Meta’s response to the investigation and any subsequent actions will be closely monitored as the EU seeks to enforce stricter regulations on tech giants to protect online users, especially children, from harm.
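
To put the 6% ceiling in perspective, here is a back-of-the-envelope sketch; the turnover base is Meta’s reported full-year 2023 revenue, used purely as an assumption for illustration, since the size of any actual fine would be determined by regulators:

```python
# Rough illustration of the DSA fine ceiling: up to 6% of a company's
# annual global turnover. The turnover base below is an assumption for
# illustration (Meta's reported full-year 2023 revenue of ~$134.9bn).

DSA_FINE_CAP = 0.06            # DSA maximum: 6% of global annual turnover
annual_turnover_usd = 134.9e9  # assumed turnover base

max_fine_usd = DSA_FINE_CAP * annual_turnover_usd
print(f"Theoretical DSA fine ceiling: ${max_fine_usd / 1e9:.1f} billion")
# -> Theoretical DSA fine ceiling: $8.1 billion
```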

Workplace app discontinued as Meta invests in AI and metaverse

Meta Platforms, the parent company of Facebook, announced that it will discontinue its Workplace app, a platform geared towards work-related communications. The company made the decision as it shifts its focus towards developing AI and metaverse technologies. The Workplace app will be phased out for customers starting in June 2026, although Meta will continue to use it internally as a messaging board until August 2025, according to a statement from the company.

A spokesperson for Meta stated that the company is discontinuing Workplace to focus on building AI and metaverse technologies that it believes will fundamentally reshape the way people work. Over the next two years, Workplace customers will have the option to transition to Zoom’s Workvivo product, which Meta has designated as its preferred migration partner. Workplace was initially launched in 2016 to cater to businesses, offering features such as multi-company groups and shared spaces to facilitate collaboration among employees from different organisations.

Why does it matter?

The discontinuation of Workplace aligns with Meta’s strategic emphasis on advancing AI and metaverse technologies, which it views as integral to the future of digital communication. The shift in business direction has raised concerns about escalating costs that could weigh on the company’s growth trajectory. Despite the discontinuation, Meta has assured customers that billing and payment arrangements will remain unchanged until August of this year. Workplace currently offers a core plan priced at $4 per user per month, with add-ons available from $2 per user per month; monthly bills are calculated based on the number of billable users unless a fixed plan is in place.
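
Based on the published rates, a monthly bill scales linearly with the number of billable users. The following is a minimal sketch of that calculation; the user count and add-on selection are hypothetical:

```python
# Minimal sketch of how a Workplace monthly bill appears to be computed
# from the published per-user pricing. The user count and add-on count
# are hypothetical; customers on a fixed plan are billed differently.

CORE_PLAN_PER_USER = 4.00  # USD per billable user per month (core plan)
ADDON_PER_USER = 2.00      # USD per user per month (cheapest add-on tier)

billable_users = 250       # assumed: hypothetical organisation size
addons_enabled = 1         # assumed: one add-on purchased

monthly_bill = billable_users * (CORE_PLAN_PER_USER + addons_enabled * ADDON_PER_USER)
print(f"Estimated monthly bill: ${monthly_bill:,.2f}")
# -> Estimated monthly bill: $1,500.00
```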

Malaysia condemns Meta for removing posts on prime minister’s meeting with Hamas leader

Malaysia’s communications minister has criticised Meta Platforms for removing Facebook posts by local media covering Prime Minister Anwar Ibrahim’s meeting with a Hamas leader in Qatar. Anwar clarified that while he maintains relations with Hamas’s political leadership, he has no involvement in its military activities.

Expressing Malaysia’s support for the Palestinian cause, the government has asked Meta to explain the removal of posts by two media outlets about Anwar’s meeting. Additionally, a Facebook account covering Palestinian issues was closed.

Communications Minister Fahmi Fadzil condemned Meta’s actions, noting the posts’ relevance to the prime minister’s official visit to Qatar. He emphasised concerns about Meta’s disregard for media freedom.

Last October, Fahmi warned of potential action against Meta and other social media platforms if they obstructed pro-Palestinian content; Malaysia consistently advocates a two-state solution to the Israel-Palestine conflict.

Meta Platforms faces heavy fine in Turkey over data-sharing

Turkey’s competition board has levied a substantial fine of 1.2 billion lira ($37.20 million) against Meta Platforms following investigations into data-sharing practices across its social media platforms, including Facebook, Instagram, WhatsApp, and Threads. The board launched its inquiry last December, focusing in particular on potential competition law violations related to the integration of Threads and Instagram.

As part of its findings, the competition board imposed an interim measure in March to restrict data sharing between Threads and Instagram. In response, Meta announced the temporary shutdown of Threads in Turkey to comply with the interim order, reflecting the company’s efforts to adhere to regulatory directives.

The fine covers two separate investigations: 898 million lira attributed to the compliance process and investigations related to Facebook, Instagram, and WhatsApp, and a further 336 million lira for the inquiry into Threads (a combined 1.234 billion lira, rounded to the 1.2 billion headline figure). The board’s decision emphasises the importance of user consent and notification regarding data usage, ensuring transparency and control over personal data across Meta’s platforms.

Previously, the competition board had imposed fines on Meta, including daily penalties for insufficient documentation and notifications about data-sharing. While these penalties concluded on 3 May 2024, the recent fine extends the ongoing regulatory scrutiny over Meta’s business practices, echoing similar actions taken by regulatory authorities globally to ensure compliance with competition and data protection laws.

Meta found displaying explicit ‘AI Girlfriend’ ads, violating advertising policies

Meta-owned social media platforms, including Facebook, Instagram, and Messenger, have reportedly displayed explicit ads for ‘AI girlfriends’, violating the company’s advertising policies. An investigation by Wired uncovered over 29,000 instances of such ads in Meta’s ad library. The ads feature chatbots sending sexually suggestive messages and AI-generated images of women in provocative poses, often without the ‘NSFW’ (Not Safe for Work) label, raising concerns about users’ exposure to inappropriate content.

Although Meta prohibits adult content in advertising, including nudity and sexually explicit activities, about half of the identified ads breached its policies. Ryan Daniels, a Meta spokesperson, stated that the company is working to remove the violating ads promptly and is continuously improving its detection systems. However, he acknowledged ongoing attempts to circumvent the company’s policies and detection methods.

Why does it matter?

Sex workers, sex educators, LGBTQ users, and erotic artists have long claimed that Meta unfairly targets their content, as reported by Mashable. They argue that Instagram shadowbans LGBTQ and sex educator accounts, while WhatsApp bans sex worker accounts.

Another controversial incident occurred last November when Mashable reported that Meta rejected a period care ad as ‘adult or political.’ Meanwhile, NSFW ‘AI girlfriend’ ads appear to be slipping through Meta’s advertising policies, sparking discussions about selective enforcement.

EU probes Meta platforms for deceptive ads

The European Commission has launched an investigation into Meta Platforms’ Facebook and Instagram over suspected failures to combat deceptive advertising and disinformation ahead of the European Parliament elections. Concerns have arisen not only about external sources such as Russia, China, and Iran but also about actors within the EU, with political parties and organisations resorting to false information to sway voters in the 6-9 June elections.

Under the Digital Services Act (DSA), big tech companies must take stronger measures against illegal and harmful content on their platforms or face fines of up to 6% of their global annual turnover. EU digital chief Margrethe Vestager expressed concerns about Meta’s moderation practices and transparency regarding advertisement and content moderation procedures, prompting the Commission to initiate proceedings to assess Meta’s compliance with the DSA.

Meta, with over 250 million monthly active users in the EU, defended its risk-mitigation processes but faced suspicion from the Commission regarding its compliance with DSA obligations. Specific concerns include Meta’s handling of deceptive advertisements, disinformation campaigns, coordinated inauthentic behaviour, and the absence of an effective third-party real-time civic discourse and election-monitoring tool ahead of the European Parliament elections.

The European Commission also raised issues regarding Meta’s decision to phase out its disinformation-tracking tool, CrowdTangle, without a suitable replacement. Meta now has five working days to inform the EU about any remedial actions to address the Commission’s concerns, signalling a pivotal moment in the ongoing battle against online misinformation and harmful content ahead of significant electoral events.

Meta platforms face EU probe over disinformation handling

EU regulators are gearing up to launch an investigation into Meta Platforms amid concerns about the company’s efforts to combat disinformation, mainly from Russia and other nations. According to a report by the Financial Times, the regulators are alarmed by Meta’s purported inadequacy in curbing the spread of political advertisements that could undermine the integrity of electoral processes. Citing sources familiar with the matter, the report suggests that Meta’s content moderation measures may be insufficient to address the issue.

While the investigation is expected to be initiated imminently, the European Commission is anticipated to refrain from explicitly targeting Russia in its official statement, focusing instead on the broader problem of foreign actors manipulating information. Meta Platforms and the European Commission had yet to respond to requests for comment, reflecting the sensitivity of the impending probe.

Why does it matter?

The timing of the investigation coincides with a significant year for elections across the globe, with numerous countries, including the UK, Austria, and Georgia, preparing to elect new leaders. Additionally, the European Parliament elections are slated for June, heightening the urgency of regulatory scrutiny over platforms like Meta. This development underscores regulators’ growing concern about the influence of disinformation on democratic processes, prompting concerted efforts to address these challenges effectively.

AI ‘girlfriend’ ads raise concerns on Meta platforms

Meta’s integration of AI across its platforms, including Facebook, Instagram, and WhatsApp, has come under scrutiny as Wired reports a proliferation of explicit ads for AI ‘girlfriends’. The investigation found tens of thousands of such ads violating Meta’s adult content advertising policy, which prohibits nudity, sexually suggestive content, and sexual services. Despite this policy, the ads continue to circulate on Meta’s platforms, sparking criticism from various communities, including sex workers, educators, and LGBTQ individuals, who feel unfairly targeted by Meta’s content policies.

For years, users have criticised Meta for what they perceive as discriminatory enforcement of its community guidelines. LGBTQ and sex educator accounts have reported instances of shadowbanning on Instagram, while WhatsApp has banned accounts associated with sex work. Additionally, Meta’s advertising approval process has come under scrutiny, with reports of gender-biased rejections of ads, such as those for sex toys and period care products. Despite these issues, explicit AI ‘girlfriend’ ads have evaded Meta’s enforcement mechanisms, highlighting a gap in the company’s content moderation efforts.

When approached, Meta acknowledged the presence of these ads and stated its commitment to removing them promptly. A Meta spokesperson emphasised the company’s ongoing efforts to improve its systems for detecting and removing ads that violate its policies. However, despite Meta’s assurances, Wired found that thousands of these ads remained active even days after the initial inquiry.