Meta and Spotify criticise EU decisions on AI

Several tech companies, including Meta and Spotify, have criticised the European Union for what they describe as inconsistent decision-making on data privacy and AI. A collective letter from firms, researchers, and industry bodies warned that Europe risks losing competitiveness due to fragmented regulations. They urged data privacy regulators to deliver clear, harmonised decisions, allowing European data to be utilised in AI training for the benefit of the region.

The companies voiced concerns about the unpredictability of recent decisions made under the General Data Protection Regulation (GDPR). Meta, known for owning Facebook and Instagram, recently paused plans to collect European user data for AI development, following pressure from EU privacy authorities. Uncertainty surrounding which data can be used for AI models has become a major issue for businesses.

Tech firms have delayed product releases in Europe while seeking legal clarity. Meta postponed the European launch of its Twitter-like app Threads, while Google has also delayed the launch of AI tools in the EU market. The introduction of Europe’s AI Act earlier this year added further regulatory requirements, which firms argue complicate innovation.

The European Commission insists that all companies must comply with data privacy rules, and Meta has already faced significant penalties for breaches. The letter stresses the need for swift regulatory decisions to ensure Europe can remain competitive in the AI sector.

Meta wins lawsuit over Apple’s privacy changes

Meta Platforms has secured a legal victory after a US court dismissed a lawsuit accusing the tech giant of misleading shareholders about the impact of Apple’s privacy changes on its advertising business. The suit, brought by Israeli insurers and pension funds, claimed Meta concealed how Apple’s iOS privacy updates would diminish the effectiveness of ads on Facebook and Instagram, harming the company’s ad revenue.

The plaintiffs argued that Meta’s stock value dropped 53% within a year, wiping out over $500 billion in market value as the truth about Apple’s changes came to light. However, US District Judge Yvonne Gonzalez Rogers ruled that Meta’s eventual admission of a $10 billion financial hit in 2022 due to Apple’s policy did not prove that earlier disclosures were misleading or fraudulent.

In addition to the privacy claims, the lawsuit also alleged Meta had concealed former COO Sheryl Sandberg’s use of company resources for personal projects, including her wedding and book. The judge rejected these accusations, noting they were based on unverified media reports. Claims that Meta’s transition to Reels, a short-form video format inspired by TikTok, negatively impacted the company’s financial performance were also dismissed for lack of evidence.

Judge Rogers’ ruling effectively closes the case, dismissing it with prejudice, meaning it cannot be refiled. Meta and its top executives, including CEO Mark Zuckerberg and CFO Susan Li, have denied the allegations throughout the legal battle. Meta and the plaintiffs’ lawyers have not commented on the court’s decision.

EU to fine Meta over anti-competitive practices

Facebook’s owner, Meta, is bracing for a substantial fine from the European Union, according to sources familiar with the matter. The penalty stems from allegations that Meta is leveraging its dominance in social networking to stifle competition in the classified advertising sector. The company’s practice of linking its free Marketplace service with Facebook has raised concerns among EU regulators, who view this strategy as an attempt to edge out rivals.

The decision is expected as soon as next month, and it could be one of the final significant moves overseen by the EU’s current competition chief, Margrethe Vestager, before her departure. The investigation into Meta’s business practices marks a continuation of the EU’s broader efforts to crack down on the monopolistic behaviour of tech giants.

Currently, neither Meta nor the EU regulators have commented on the looming decision. However, this case could signal a more stringent approach to maintaining a level playing field in the digital marketplace, where tech companies have long held considerable power. The ruling could have substantial financial and operational consequences for Meta, potentially setting the tone for future regulatory actions in the tech industry.

Meta introduces new Instagram teen accounts

Meta is set to overhaul Instagram’s privacy settings for users under 18, introducing stricter controls to protect young users. Accounts for teenagers will now be private by default, ensuring only approved connections can message or tag them. The move comes amid growing concerns over the negative impact of social media on youth, with studies highlighting links to mental health issues such as depression and anxiety.

Parents will have more authority over their children’s accounts, including monitoring who they engage with and setting restrictions on app usage. Teens under 16 will need parental permission to change default settings. The update also includes new features like a 60-minute daily usage reminder and a default “sleep mode” to mute notifications overnight.

Social media platforms, including Meta’s Instagram, have faced numerous lawsuits, with critics arguing that these apps have addictive qualities and contribute to rising mental health problems in teenagers. Recent US legislation seeks to hold platforms accountable for their effects on young users, pushing Meta to introduce these changes.

The rollout will take place in the US, UK, Canada, and Australia within the next two months, with European Union users following later. Global adoption of the new teen accounts is expected by January next year.

Meta bans Russian state media over covert online operations

Meta, the parent company of Facebook, has banned several Russian state media outlets, including RT (Russia Today) and Rossiya Segodnya, from its platforms due to their involvement in covert online influence operations. The ban marks a significant escalation of Meta’s actions against Russian media, which it had previously restricted by limiting ad access and post visibility. Meta explained that after reviewing ongoing foreign interference by these outlets, it expanded its enforcement to ban them from all its apps, which include Instagram, WhatsApp, and Threads. The company expects the ban to take full effect in the coming days.

The decision follows recent charges by US authorities against two RT employees accused of money laundering in connection with efforts to influence the 2024 US elections. US Secretary of State Antony Blinken has urged countries to treat RT’s activities as covert intelligence operations rather than legitimate journalism. Despite these developments, RT has criticised the US government’s actions, accusing it of stifling the media outlet’s ability to function as a journalistic organisation.

Meta also shared that Russian state media outlets have attempted to conceal their online activities before, and it anticipates further attempts to evade the newly imposed restrictions. The Russian embassy and the White House have yet to comment on Meta’s decision.

Facebook and Instagram data to power Meta’s AI models

Meta Platforms will soon start using public posts on Facebook and Instagram to train its AI models in the UK. The company had paused its plans after regulatory concerns from the Irish privacy regulator and Britain’s Information Commissioner’s Office (ICO). The AI training will involve content such as photos, captions, and comments but will exclude private messages and data from users under 18.

Meta faced privacy-related backlash earlier in the year, leading to its decision to halt the AI model launch in Europe. The company has since engaged with UK regulators, resulting in a clearer framework that allows the AI training plans to proceed. The new strategy simplifies the way users can object to their data being processed.

From next week, Facebook and Instagram users in the UK will receive in-app notifications explaining how their public posts may be used for AI training. Users will also be informed on how to object to the use of their data. Meta has extended the window in which objections can be filed, aiming to address transparency concerns raised by both the ICO and advocacy groups.

Earlier in June, Meta’s AI plans faced opposition from privacy advocacy groups like NOYB, which urged regulators to intervene. These groups argued that Meta’s notifications did not fully meet the EU’s privacy and transparency standards. Meta’s latest updates are seen as an effort to align with these regulatory demands.

Meta revises AI labels on social media platforms to balance transparency and user experience

Meta’s decision to change how it labels AI-modified content on Instagram, Facebook, and Threads marks another shift in the company’s approach to generative AI. For content that has been edited with AI tools, the ‘AI info’ label will move to the post’s menu, reducing the visibility of AI’s involvement and making it easier for users to overlook or miss the AI editing details in such posts.

However, for content fully generated by AI, Meta will continue to prominently display the label beneath the user’s name, ensuring that posts created entirely by AI prompts remain visibly marked. The distinction Meta is making here seems to reflect the varying degrees of AI involvement in content creation.

Meta aims to increase transparency about content labelling, specifying whether an AI designation derives from industry signals or from user self-disclosure. The effort follows complaints and confusion over the previous ‘Made with AI’ label, particularly from photographers concerned that their real photos were being misrepresented.

This change may raise concerns about the potential for users to be misled, especially as AI editing tools become more sophisticated and the line between human and AI-created content continues to blur. It highlights the need for continued transparency as AI technology integrates more deeply into content creation across platforms.

Meta urged to rethink content removal amid Israel-Palestine controversy

Meta’s Oversight Board has advised the Facebook parent company not to automatically remove the phrase ‘From the river to the sea’, which is interpreted by some as a show of solidarity with Palestinians and by others as antisemitic. The board determined that the phrase holds multiple meanings and cannot be universally deemed harmful or violent.

The phrase refers to the region between the River Jordan and the Mediterranean Sea, encompassing Israel and the Palestinian territories. Often used at pro-Palestinian rallies, critics argue it calls for Israel’s destruction, while others dispute this interpretation. The board emphasised the importance of context in assessing such political speech, urging Meta to allow space for debate, particularly during times of conflict.

Meta expressed support for the board’s review, acknowledging the complexities involved in global content moderation. However, the Anti-Defamation League criticised the decision, saying the phrase makes Jewish and pro-Israel communities feel unsafe. The Oversight Board also called on Meta to restore data access for researchers and journalists following its recent decision to end the CrowdTangle tool.

The board’s ruling highlights the ongoing challenges of regulating sensitive content on social media platforms, where free speech must be balanced against community safety.

Meta’s Oversight Board rules on content moderation in Venezuela amidst post-election crisis

Meta’s Oversight Board has issued a decision regarding the company’s content moderation policies in Venezuela amidst violent crackdowns and widespread protests following the disputed presidential election.

The ruling addresses how Meta should handle posts concerning state-supported armed groups, known as ‘colectivos’. This follows Meta’s request for guidance on moderating increasing volumes of ‘anti-colectivos content’, highlighting two specific posts for review: an Instagram post saying ‘Go to hell! I hope they kill you all!’ aimed at the colectivos, and a Facebook post criticising Venezuela’s security forces, stating ‘kill those damn colectivos’.

The Oversight Board determined that neither post violated Meta’s rules on calls for violence, instead categorising both as ‘aspirational statements’ from citizens facing severe repression and threats to free expression from state-supported forces. The board justified this by noting the colectivos’ role in repressing civic space and committing human rights violations in Venezuela, particularly during the current post-election crisis. The board emphasised that the civilian population is predominantly the target of such abuses.

Additionally, the board critiqued Meta’s practice of making political content less visible across its platforms during critical times, expressing concerns that this could undermine users’ ability to express political dissent and raise awareness about the situation in Venezuela. It recommended that Meta adapt its policies to ensure political content, especially during crises like elections and post-electoral protests, receives the same reach as non-political content. This adjustment is vital for enabling citizens to share and amplify their political grievances during significant socio-political turmoil.

Why does it matter?

This decision is part of an ongoing debate about the role of political content on Meta’s platforms. Earlier this year, the board accepted its first case related to a post on Threads, another Meta service, focusing on the company’s decision to limit recommendations of political posts. The outcome of this related case is still pending, signalling potential further policy changes regarding political content on Meta’s platforms. The board’s decision underscores the critical role of context in content moderation, particularly in regions experiencing significant political and social upheaval.

Meta complies with Brazil’s data protection demands

Meta Platforms, the parent company of Facebook and Instagram, announced on Tuesday that it will inform Brazilian users about how their data is used to train generative AI. The move follows pressure from Brazil’s National Data Protection Authority (ANPD), which had previously suspended Meta’s new privacy policy over concerns about the use of personal data for AI training.

Starting this week, Meta users in Brazil will receive email and social media notifications, providing details on how their data might be used for AI development. Users will also have the option to opt out of this data usage. The ANPD had initially halted Meta’s privacy policy in July, but it lifted the suspension last Friday after Meta agreed to make these disclosures.

In response to the ANPD’s concerns, Meta had also temporarily suspended the use of generative AI tools in Brazil, including popular AI-generated stickers on WhatsApp, a platform with a significant user base. The suspension was enacted while Meta engaged in discussions with the ANPD to address the agency’s concerns.

Despite the ANPD lifting the suspension, Meta has yet to confirm whether it will immediately reinstate the AI tools in Brazil. When asked, the company reiterated that the suspension had been a measure taken during its ongoing talks with the data protection authority.

The development marks an important step in Brazil’s efforts to ensure transparency and user control over personal data in the age of AI.