Meta faces EU complaints over AI data use

Meta Platforms is facing 11 complaints over proposed changes to its privacy policy that could violate EU privacy regulations. The changes, set to take effect on 26 June, would allow Meta to use personal data, including posts and private images, to train its AI models without user consent. Advocacy group NOYB has urged privacy watchdogs to take immediate action against these changes, arguing that they breach the EU’s General Data Protection Regulation (GDPR).

Meta claims a legitimate interest in using users’ data to develop its AI models, which it may share with third parties. However, NOYB founder Max Schrems contends that the European Court of Justice has already ruled against Meta’s legitimate-interest argument for similar data use in advertising, and that the company is ignoring this precedent. Schrems criticises Meta’s approach, arguing that the company should obtain explicit opt-in consent rather than complicating the opt-out process.

In response to the impending policy changes, NOYB has called on data protection authorities across multiple European countries, including Austria, Germany, and France, to initiate an urgent procedure to address the situation. If found in violation of the GDPR, Meta could face fines of up to 4% of its global annual turnover.

Italian regulator fines Meta over user data misuse

Italy’s antitrust regulator AGCM (Autorità Garante della Concorrenza e del Mercato) has fined Meta, the owner of Facebook and Instagram, for unfair commercial practices. The authority imposed a fine of €3.5 million on Meta Platforms Ireland Ltd. and parent company Meta Platforms Inc. for two deceptive business practices in the creation and management of Facebook and Instagram accounts.

Specifically, the watchdog stated that Instagram users were not adequately informed about how their personal data was used for commercial purposes, and that users of both platforms were not given proper information on how to contest account suspensions.

Meta has already addressed these issues, according to the regulator. A Meta spokesperson expressed disagreement with AGCM’s decision and mentioned that the company is considering its options. They also highlighted that since August 2023, Meta has implemented changes for Italian users to increase transparency about data usage for advertising on Instagram.

Former Meta engineer sues over Gaza post suppression

A former Meta engineer has accused the company of bias in its handling of Gaza-related content, alleging he was fired for addressing bugs that suppressed Palestinian Instagram posts. Ferras Hamad, a Palestinian-American who worked on Meta’s machine learning team, filed a lawsuit in California state court alleging discrimination and wrongful termination. Hamad claims Meta exhibited a pattern of bias against Palestinians, including deleting internal communications about the deaths of Palestinian relatives and investigating employees’ use of the Palestinian flag emoji while not probing similar uses of the Israeli or Ukrainian flag emojis.

Why does it matter?

The lawsuit reflects ongoing criticism by human rights groups of Meta’s content moderation regarding Israel and the Palestinian territories. These concerns were amplified by the conflict that erupted in Gaza after the Hamas attack on Israel and Israel’s subsequent offensive.

Hamad asserts that his firing was linked to his efforts to fix issues that prevented Palestinian Instagram posts from appearing in searches and feeds, including a misclassified video by a Palestinian photojournalist.

Despite his manager confirming the task was part of his duties, Hamad was later investigated and fired, allegedly for violating a policy on working with accounts of people he knew personally, which he denies.

Australia considers forcing Meta to pay publishers for news

Australia is considering new regulations to make Meta Platforms, the parent company of Facebook, pay news companies for content. The development follows Meta’s decision to stop compensating Australian news publishers despite a 2021 law that mandates such payments. News Corp Australia’s executive chairman, Michael Miller, urged the government to enforce this law, criticising Meta for abandoning previous agreements and emphasising the need for fair negotiations.

Meta argues that interest in news on its platforms is declining and views its services as free distribution channels for media companies. Publishers counter that social media platforms profit unfairly from advertising revenue linked to news content. If the government enforces the 2021 law, Meta might restrict news sharing on Facebook in Australia, as it has done in Canada, raising concerns about increased misinformation.

Miller also highlighted the negative impacts of social media on mental health and called for a regulatory framework to protect Australians. His proposal includes holding tech firms accountable for all content, enforcing competition laws for digital advertising, improving consumer complaint processes, and supporting mental health programs. He suggested barring companies that fail to comply with these rules from operating in Australia. Meta has defended its actions, stating that it respects Australian laws and community standards and has implemented measures to promote online safety and prevent harm.

Spain suspends Meta’s election tools for EU election

Spain’s data protection authority, AEPD, has temporarily suspended two Meta tools planned for deployment on Facebook and Instagram during the upcoming European election. According to AEPD, the tools, named ‘Election Day Information’ (EDI) and ‘Voter Information Unit’ (VIU), potentially violate Spanish data protection regulations. Meta has contested the decision, stating that the tools were designed to respect users’ privacy and comply with GDPR standards.

Meta’s proposed data processing methods, aimed at sending notifications to eligible users reminding them to vote, raised concerns for AEPD. The agency highlighted that Meta’s selection of eligible voters based on user profile data such as city of residence and IP addresses was contrary to Spanish data protection regulations. AEPD deemed this data processing unnecessary, disproportionate, and excessive, as it excluded EU citizens living abroad and targeted non-EU citizens in Europe.

AEPD also criticised Meta’s data collection practices regarding users’ ages, stating there was no reliable mechanism to verify self-reported ages, and found Meta’s treatment of interaction data disproportionate to the stated purpose of informing users about the elections. In addition, Meta failed to justify the need to retain the collected data after the election, which AEPD said pointed to potential additional purposes for the processing operation.

Meta discovers ‘likely AI-generated’ content praising Israel

Meta reported finding likely AI-generated content used deceptively on Facebook and Instagram, praising Israel’s handling of the Gaza conflict in comments under posts from global news organisations and US lawmakers. This campaign, linked to the Tel Aviv-based political marketing firm STOIC, targeted audiences in the US and Canada by posing as various concerned citizens. STOIC has not commented on the allegations.

Meta’s quarterly security report marks the first time the company has disclosed the use of text-based generative AI in influence operations since the technology emerged in late 2022. While AI-generated profile photos have appeared in past operations, the use of text-based AI raises concerns about more effective disinformation campaigns. Even so, Meta’s security team disrupted the Israeli campaign early and remains confident in its ability to detect such networks.

The report detailed six covert influence operations disrupted in the first quarter, including an Iran-based network focused on the Israel-Hamas conflict, which did not use generative AI. As Meta and other tech giants continue to address potential AI misuse, upcoming elections in the EU and the US will test their defences against AI-generated disinformation.

Human rights groups protest Meta’s alleged censorship of pro-Palestinian content

Meta’s annual shareholder meeting on Wednesday sparked online protests from human rights groups, calling for an end to what they describe as systemic censorship of pro-Palestinian content on the company’s platforms and within its workforce. Nearly 200 Meta employees have recently urged CEO Mark Zuckerberg to address alleged internal censorship and biases on public platforms, advocating for greater transparency and an immediate ceasefire in Gaza.

Activists argue that after years of pressing Meta and other platforms for fairer content moderation, shareholders might exert more influence on the company than public pressure alone. Nadim Nashif, founder of the social media watchdog group 7amleh, highlighted that despite a decade of advocacy, the situation has deteriorated, necessitating new strategies like shareholder engagement to spur change.

The public statement from Meta employees earlier this month follows a 2023 internal petition that gathered over 450 signatures and whose author was investigated by HR for allegedly violating company rules. The latest letter condemns Meta’s actions as creating a ‘hostile and unsafe work environment’ for Palestinian, Arab, Muslim, and ‘anti-genocide’ colleagues, with many employees reporting censorship and dismissiveness from leadership.

During the shareholder meeting, Meta focused on its AI projects and managing disinformation, sidestepping the issue of Palestinian content moderation. Despite external audit findings and a letter from US Senator Elizabeth Warren criticising Meta’s handling of pro-Palestinian content, the company did not immediately address the circulating letters and petitions.

Meta introduces tools to fight disinformation ahead of EU elections

The European Commission announced on Tuesday that Meta Platforms has introduced measures to combat disinformation ahead of the EU elections. Meta has launched 27 real-time visual dashboards, one for each EU member state, to enable third-party monitoring of civic discourse and election activities.

This development comes after the European Commission investigated Meta last month for allegedly breaching EU online content regulations. The investigation highlighted concerns over Meta’s Facebook and Instagram platforms failing to address disinformation and deceptive advertising adequately.

While the formal procedures against Meta continue, the European Commission stated that it would closely monitor the implementation of these new features to ensure their effectiveness in curbing disinformation.

CMA accepts Meta’s updated UK privacy compliance proposals

Meta Platforms has agreed to limit the use of certain data from advertisers on its Facebook Marketplace as part of an updated proposal accepted by the UK’s Competition and Markets Authority (CMA). The commitments aim to prevent Meta from exploiting its advertising customers’ data. The initial commitments, accepted by the CMA in November, included allowing competitors to opt out of having their data used to enhance Facebook Marketplace.

The British competition regulator has provisionally accepted Meta’s updated changes and is now seeking feedback from interested parties, with the consultation period closing on 14 June. Details of any further amendments to Meta’s initial proposals in the UK have yet to be disclosed. The decision reflects a broader effort by regulators to ensure fair competition and prevent dominant platforms from misusing data.

In November, Amazon committed to avoiding the use of marketplace data from rival sellers, thereby promoting a level playing field for third-party sellers. Both cases highlight the increasing scrutiny of major tech companies’ data practices and market power, aimed at fostering a more competitive and transparent digital marketplace.

EU launches investigation into Facebook and Instagram over child safety

EU regulators announced on Thursday that Meta Platforms’ Facebook and Instagram will be investigated for potential violations of EU online content rules on child safety, which could result in significant fines. The scrutiny follows the EU’s implementation of the Digital Services Act (DSA) last year, which places greater responsibility on tech companies to address illegal and harmful content on their platforms.

The European Commission has expressed concerns that Facebook and Instagram have not adequately addressed risks to children, prompting an in-depth investigation. Issues highlighted include the potential for the platforms’ systems and algorithms to promote behavioural addictions among children and facilitate access to inappropriate content, leading to what the Commission refers to as ‘rabbit-hole effects’. Additionally, concerns have been raised regarding Meta’s age assurance and verification methods.

Why does it matter?

Meta is already under EU scrutiny over election disinformation, particularly concerning the upcoming European Parliament elections. Violations of the DSA can result in fines of up to 6% of a company’s annual global turnover, indicating the seriousness with which EU regulators are approaching these issues. Meta’s response to the investigation and any subsequent actions will be closely monitored as the EU seeks to enforce stricter regulations on tech giants to protect online users, especially children, from harm.
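
To put the 6% cap in perspective, a back-of-the-envelope calculation (the revenue figure is an illustrative assumption, not from the article): applying the cap to Meta’s reported 2023 revenue of roughly $135 billion would yield a theoretical maximum fine of about $8 billion.

\[
F_{\max} = 0.06 \times R \approx 0.06 \times \$135\,\text{bn} \approx \$8.1\,\text{bn}
\]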