Meta faces legal challenge on Instagram’s impact on teenagers

Meta Platforms is facing a lawsuit in Massachusetts for allegedly designing Instagram features to exploit teenagers’ vulnerabilities, causing addiction and harming their mental health. A Suffolk County judge rejected Meta’s attempt to dismiss the case, asserting that claims under state consumer protection law remain valid.

The company argued for immunity under Section 230 of the Communications Decency Act, which shields internet firms from liability for user-generated content. However, the judge ruled that this protection does not extend to Meta’s own business conduct or misleading statements about Instagram’s safety measures.

Massachusetts Attorney General Andrea Joy Campbell emphasised that the ruling allows the state to push for accountability and meaningful changes to safeguard young users. Meta expressed disagreement, maintaining that its efforts demonstrate a commitment to supporting young people.

The lawsuit highlights internal data suggesting Instagram’s addictive design, driven by features like push notifications and endless scrolling. It also claims Meta executives, including CEO Mark Zuckerberg, dismissed concerns raised by research indicating the need for changes to improve teenage users’ well-being.

Meta reintroduces facial recognition for celebrity scam protection

Meta, the parent company of Facebook, is testing facial recognition technology again, three years after halting its use due to privacy concerns. This time, the company is focusing on combating ‘celeb bait’ scams, which use public figures’ images in fraudulent advertisements. Meta plans to enrol around 50,000 celebrities in a trial program that will automatically compare their profile photos with those in suspicious ads. If the system detects a match, Meta will block the ad and notify the celebrities, who can opt out of the program.

The trial, which will begin globally in December, excludes regions where regulatory clearance has yet to be obtained, such as Britain, the European Union, South Korea, and certain US states like Texas and Illinois. Meta’s vice president of content policy, Monika Bickert, explained that the program protects celebrities from being exploited in scam ads, a growing problem on social media platforms. Meta aims to offer this protection while leaving participation in the trial up to the celebrities themselves.

The initiative comes at a time when Meta is balancing the need to address rising scam concerns while avoiding past criticisms over user data privacy. In 2021, Meta shut down its previous facial recognition system and deleted the face scan data of a billion users, citing growing concerns over biometric data use. Earlier this year, the company faced a $1.4 billion fine in Texas for allegedly collecting biometric data illegally.

In addition to targeting scam ads, Meta is also considering using facial recognition data to help everyday users regain access to their accounts, especially in cases where their accounts have been hacked or they have forgotten their passwords. Meta emphasises that all facial data generated by the new system will be deleted immediately after use, regardless of whether a scam is detected. The tool underwent extensive internal and external privacy reviews before being implemented.

Meta’s oversight board seeks public input on immigration posts

Meta’s Oversight Board has opened a public consultation on immigration-related content that may harm immigrants following two controversial cases on Facebook. The board, which operates independently but is funded by Meta, will assess whether the company’s policies sufficiently protect refugees, migrants, immigrants, and asylum seekers from severe hate speech.

The first case concerns a Facebook post made in May by a Polish far-right coalition, which used a racially offensive term. Despite the post accumulating over 150,000 views, 400 shares, and receiving 15 hate speech reports from users, Meta chose to keep it up following a human review. The second case involves a June post from a German Facebook page that included an image expressing hostility toward immigrants. Meta also upheld its decision to leave this post online after review.

Following the Oversight Board’s intervention, Meta’s experts reviewed both cases again but upheld the initial decisions. Helle Thorning-Schmidt, co-chair of the board, stated that these cases are critical in determining if Meta’s policies are effective and sufficient in addressing harmful content on its platform.

Meta unveils Movie Gen in collaboration with Blumhouse

Meta, the owner of Facebook, announced a partnership with Blumhouse Productions, known for hit horror films like ‘The Purge’ and ‘Get Out,’ to test its new generative AI video model, Movie Gen. This follows the recent launch of Movie Gen, which can produce realistic video and audio clips based on user prompts. Meta claims that this tool could compete with offerings from leading media generation startups like OpenAI and ElevenLabs.

Blumhouse has chosen filmmakers Aneesh Chaganty, The Spurlock Sisters, and Casey Affleck to experiment with Movie Gen, with Chaganty’s film set to appear on Meta’s Movie Gen website. In a statement, Blumhouse CEO Jason Blum emphasised the importance of involving artists in the development of new technologies, noting that innovative tools can enhance storytelling for directors.

This partnership highlights Meta’s aim to connect with the creative industries, which have expressed hesitance toward generative AI due to copyright and consent concerns. Several copyright holders have sued companies like Meta, alleging unauthorised use of their works to train AI systems. In response to these challenges, Meta has demonstrated a willingness to compensate content creators, recently securing agreements with actors such as Judi Dench, Kristen Bell, and John Cena for its Meta AI chatbot.

Meanwhile, Microsoft-backed OpenAI has been exploring potential partnerships with Hollywood executives for its video generation tool, Sora, though no deals have been finalised yet. In September, Lions Gate Entertainment announced a collaboration with another AI startup, Runway, underscoring the increasing interest in AI partnerships within the film industry.

Meta and Blumhouse test AI video tool for filmmakers

Meta has joined forces with Blumhouse, the Hollywood studio renowned for horror films, to test Movie Gen, its new AI-driven video tool. Movie Gen creates custom 1080p videos with sound from text-based inputs, offering filmmakers innovative ways to visualise their ideas.

The pilot project engaged prominent filmmakers, including Aneesh Chaganty, Casey Affleck, and The Spurlock Sisters, who integrated AI-generated clips into their films. Chaganty’s work is already featured on the Movie Gen website, with other contributions set to appear soon. The collaboration demonstrates how AI can become a creative partner, expanding artistic possibilities through responses to text prompts and advanced sound effects.

Blumhouse CEO Jason Blum praised the initiative, stating that these tools could empower artists to tell better stories and stressed the importance of involving creators early in the development phase. Meta aims to continue refining the tool by extending the pilot programme through 2025, encouraging user feedback to enhance its capabilities.

Alongside this initiative, Meta has expanded its AI chatbot, Meta AI, to 21 markets, including the UK and Brazil. Seen as a competitor to ChatGPT, Meta AI supports multiple languages, targeting 500 million monthly active users globally.

Meta’s oversight board investigates anti-immigration posts on Facebook

Meta’s Oversight Board has initiated a detailed investigation into how the company handles anti-immigration content on Facebook, following numerous user complaints. Helle Thorning-Schmidt, co-chair of the board and former Danish prime minister, underscored the crucial task of balancing free speech with the need to protect vulnerable groups from hate speech.

The investigation focuses on two contentious posts. The first is a meme from a page linked to Poland’s far-right Confederation party, featuring former prime minister Donald Tusk in a racially charged image alluding to the EU’s immigration pact. The image uses a term perceived as a racial slur in Poland, raising concerns about its impact. The second case involves an AI-generated image posted on a German Facebook page opposing leftist and green parties. It portrays a woman with Aryan features making a stop gesture, with accompanying text condemning immigrants as ‘gang-rape specialists’, a narrative linked to perceived outcomes of the Green Party’s immigration policies. This portrayal not only uses inflammatory rhetoric but also touches on deeply sensitive cultural issues within Germany.

Thorning-Schmidt highlighted the importance of examining Meta’s current approach to managing ‘coded speech’—subtle language or imagery that carries derogatory implications while avoiding direct violations of community standards.

The board’s investigation will assess whether Meta’s policies on hate speech are robust enough to protect individuals and communities at risk of discrimination, while still allowing for critical discourse on immigration matters. Meta’s policy is designed to protect refugees, migrants, immigrants, and asylum seekers from severe attacks while allowing critique of immigration laws.

Why does it matter?

The outcome of this investigation could prompt significant changes in how Meta moderates content on sensitive topics like immigration, striking a balance between curbing hate speech and preserving freedom of expression. Moreover, the Oversight Board’s handling of politically sensitive posts illustrates the broader challenge social media platforms face in moderating content that walks the fine line between free expression and incitement to division. It highlights the ongoing debate over the role of these platforms in managing nuanced or politically charged content, and could set a precedent.

Human-level AI still a decade away, Meta scientist warns

Achieving human-level AI may be at least a decade away, according to Meta’s chief AI scientist, Yann LeCun. Current AI systems, like large language models, fall short of true reasoning, memory, and planning, even though companies like OpenAI market their technologies with terms like ‘memory’ and ‘thinking’. LeCun cautions against the hype, saying these systems lack the deeper understanding required for complex human tasks.

LeCun argues that the limitations stem from how these AI models function. LLMs predict words, while image and video models predict pixels, limiting them to one- or two-dimensional predictions. In contrast, humans operate in a three-dimensional world, able to plan and adapt intuitively. Even the most advanced AI struggles with everyday actions, such as cleaning a room or driving a car, tasks children and teenagers can learn with ease.

The key to more advanced AI, according to LeCun, lies in ‘world models’ – systems capable of perceiving and predicting outcomes within a three-dimensional environment. These models would allow AI to form action plans without trial and error, similar to how humans quickly solve problems by envisioning the results of their actions. However, building these systems requires massive computational power, driving cloud providers to partner with AI companies.

FAIR, Meta’s research arm, has shifted its focus towards developing world models and objective-driven AI. Other labs are also pursuing this approach, with researchers such as Fei-Fei Li raising significant funding to explore the potential of world models. Despite growing interest, LeCun emphasises that significant technical challenges remain, and achieving human-level AI will likely take many years, if not a full decade.

Meta faces another round of layoffs affecting Threads and other teams

Meta experienced another wave of layoffs on Wednesday, affecting multiple teams, including those working on Threads, recruiting, legal operations, and design. The cuts are part of the company’s ongoing effort to reallocate resources to align with its strategic goals and location strategy. According to a statement from Meta, some teams were relocated and certain employees were shifted to new roles, while others faced job eliminations. In cases where roles were cut, Meta stated that it works to provide new opportunities for affected employees.

While the exact number of layoffs remains unclear, social media posts and anonymous employee accounts suggest several team members were dismissed through video calls. Some of those affected received six weeks of severance pay. According to The Verge, teams from Meta’s Reality Labs, Instagram, and WhatsApp divisions were also impacted by this round of layoffs.

Why does it matter?

Meta has been undergoing significant workforce reductions following the company’s pandemic-era expansion. In 2022, the tech giant laid off 13% of its workforce—approximately 11,000 employees—with CEO Mark Zuckerberg taking responsibility for the decision. Another 10,000 employees were cut in 2023, along with the withdrawal of 5,000 open positions. These ongoing changes reflect Meta’s shift toward streamlining operations amid a challenging economic environment.

Meta faces lawsuits over teen mental health concerns

A federal judge in California has ruled that Meta must face lawsuits from several US states alleging that Facebook and Instagram contribute to mental health problems among teenagers. The states argue that Meta’s platforms are deliberately designed to be addictive, harming young users. Over 30 states, including California, New York, and Florida, filed these lawsuits last year.

Judge Yvonne Gonzalez Rogers rejected Meta’s attempt to dismiss the cases, though she did limit some claims. Section 230 of the Communications Decency Act, which shields online platforms from liability for user-generated content, protects Meta from certain accusations. However, the judge found enough evidence to allow the lawsuits to proceed, enabling the plaintiffs to gather further evidence and pursue a potential trial.

The decision also impacts personal injury cases filed by individual users against Meta, TikTok, YouTube, and Snapchat. Meta is the only company named in the state lawsuits, with plaintiffs seeking damages and changes to allegedly harmful business practices. California Attorney General Rob Bonta welcomed the ruling, stating that Meta should be held accountable for the harm it has caused to young people.

Meta disagrees with the decision, insisting it has developed tools to support parents and teenagers, such as new Teen Accounts on Instagram. Google also rejected the allegations, saying its efforts to create a safer online experience for young people remain a priority. Many other lawsuits across the US accuse social media platforms of fuelling anxiety, depression, and body-image concerns through addictive algorithms.

Thousands of users impacted by Facebook and Instagram outage

On Monday, Meta Platforms’ social media platforms Facebook and Instagram experienced a significant outage affecting thousands of users across the US. According to Downdetector, a website that tracks service interruptions, the outage peaked around 1:35 p.m. ET, with over 12,000 users reporting issues with Facebook and more than 5,000 for Instagram.

By 2:09 p.m. ET, the number of reported problems had decreased significantly to around 659 for Facebook and 450 for Instagram. Downdetector’s data is based on user-submitted reports, so the actual number of impacted users may differ.

Meta Platforms did not respond to requests for comment. Earlier this year, a similar issue disrupted services globally for more than two hours, affecting hundreds of thousands of users. That event saw 550,000 disruption reports for Facebook and around 92,000 for Instagram.