Meta partners with Reuters for AI news content

Meta Platforms announced a new partnership with Reuters on Friday that allows its AI chatbot to give users real-time answers about news and current events using Reuters content. The agreement marks Meta’s return to licensed news distribution after it scaled back news content amid disputes with regulators and publishers over misinformation and revenue sharing. The financial specifics of the deal remain undisclosed; Meta and Reuters’ parent, Thomson Reuters, have chosen to keep the terms confidential.

Meta’s AI chatbot, available on Facebook, WhatsApp, and Instagram, will now offer users summaries of and links to Reuters articles when they ask news-related questions. Meta has not said whether Reuters content will also be used to train its language models, but the company confirms that Reuters will be compensated under a multi-year agreement, as reported by Axios.

Reuters, known for its fact-based journalism, confirmed that it has licensed its content to multiple tech providers for AI use, without detailing specific deals.

Why does it matter?

The partnership reflects a growing trend in tech, with companies like OpenAI and Perplexity also forming agreements with media outlets to enhance their AI responses with verified information from trusted news sources. Reuters has already collaborated with Meta on fact-checking initiatives, a partnership that began in 2020. This latest agreement aims to improve the reliability of Meta AI’s responses to real-time questions, potentially addressing ongoing concerns around misinformation and supporting the distribution of accurate, trustworthy news on social media platforms.

Meta prevails in shareholder child safety lawsuit

Meta Platforms and its CEO, Mark Zuckerberg, successfully defended against a lawsuit claiming the company misled shareholders about child safety on Facebook and Instagram. A US federal judge dismissed the case on Tuesday.

Judge Charles Breyer ruled that the plaintiff, Matt Eisner, failed to demonstrate that shareholders experienced financial harm due to Meta’s disclosures. He stated that federal law does not require companies to reveal all decisions regarding child safety measures or focus on their shortcomings.

Eisner had sought to delay Meta’s 2024 annual meeting and void its election results unless the company revised its proxy statement. However, the judge emphasised that many of Meta’s commitments in its proxy materials were aspirational and not legally binding. His dismissal, issued with prejudice, prevents Eisner from filing the same case again.

Meta still faces legal challenges from state attorneys general and hundreds of lawsuits from children, parents, and schools, accusing the company of fostering social media addiction. Other platforms, such as TikTok and Snapchat, also confront similar legal actions.

Meta launches new AI model for evaluating AI systems

Meta has released new AI models, including a tool called the Self-Taught Evaluator, which aims to reduce human involvement in the AI development process. The company’s latest batch of models is part of its ongoing efforts to enhance AI accuracy and efficiency across complex fields.

The new tool uses a ‘chain of thought’ technique, similar to one employed by OpenAI, breaking problems into logical steps for improved accuracy in science, coding, and mathematics. Meta trained the evaluator solely with AI-generated data, eliminating the need for human input at that stage.
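As a rough illustration of the idea, not Meta’s actual implementation, the sketch below shows how a ‘judge’ language model can be prompted to reason step by step before comparing two candidate answers. The generate function is a hypothetical stand-in for any model client.

```python
# Minimal sketch of chain-of-thought LLM-as-judge evaluation, assuming a
# generic completion API. `generate` is a hypothetical stand-in for a
# real model client, not a Meta API.
JUDGE_PROMPT = """You are judging two candidate answers to a question.
Think step by step: check each answer for factual accuracy, completeness,
and clarity, then conclude with 'Winner: A' or 'Winner: B'.

Question: {question}
Answer A: {answer_a}
Answer B: {answer_b}

Reasoning:"""


def generate(prompt: str) -> str:
    """Hypothetical language-model call; swap in a real client here."""
    raise NotImplementedError


def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Return the judge model's step-by-step verdict on the two answers."""
    return generate(JUDGE_PROMPT.format(
        question=question, answer_a=answer_a, answer_b=answer_b))
```

Verdicts produced this way can themselves be generated at scale, which is what allows an evaluator to be trained entirely on AI-generated data.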

The ability for AI to reliably assess other AI models could eventually replace costly processes such as Reinforcement Learning from Human Feedback. Meta researchers suggest that self-improving AI systems might perform better than human evaluators, marking progress toward autonomous digital assistants capable of managing complex tasks without supervision.

Meta’s latest release also includes upgrades to its Segment Anything model, tools for faster language model responses, and datasets aimed at discovering new inorganic materials. Unlike competitors Google and Anthropic, Meta makes its models accessible for public use, setting it apart in the industry.

Meta faces legal challenge on Instagram’s impact on teenagers

Meta Platforms is facing a lawsuit in Massachusetts for allegedly designing Instagram features to exploit teenagers’ vulnerabilities, causing addiction and harming their mental health. A Suffolk County judge rejected Meta’s attempt to dismiss the case, asserting that claims under state consumer protection law remain valid.

The company argued for immunity under Section 230 of the Communications Decency Act, which shields internet firms from liability for user-generated content. However, the judge ruled that this protection does not extend to Meta’s own business conduct or misleading statements about Instagram’s safety measures.

Massachusetts Attorney General Andrea Joy Campbell emphasised that the ruling allows the state to push for accountability and meaningful changes to safeguard young users. Meta expressed disagreement, maintaining that its efforts demonstrate a commitment to supporting young people.

The lawsuit highlights internal data suggesting Instagram’s addictive design, driven by features like push notifications and endless scrolling. It also claims Meta executives, including CEO Mark Zuckerberg, dismissed concerns raised by research indicating the need for changes to improve teenage users’ well-being.

Meta reintroduces facial recognition for celebrity scam protection

Meta, the parent company of Facebook, is testing facial recognition technology again, three years after halting its use over privacy concerns. This time, the company is focusing on combating ‘celeb bait’ scams, which use public figures’ images in fraudulent advertisements. Meta plans to enrol around 50,000 celebrities in a trial program that automatically compares their profile photos with the faces appearing in suspicious ads. If the system detects a match, Meta will block the ad and notify the celebrity, who can opt out of the program.
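Matching of this kind is typically done by comparing face embeddings rather than raw pixels. The minimal sketch below illustrates the general approach; the embed_face function and the similarity threshold are assumptions for illustration, not details Meta has disclosed.

```python
# Minimal sketch of embedding-based face matching. `embed_face` is a
# hypothetical stand-in for a real face-embedding model, and the
# threshold is an illustrative assumption, not a disclosed Meta detail.
import numpy as np


def embed_face(image_bytes: bytes) -> np.ndarray:
    """Hypothetical model that maps a face image to a unit vector."""
    raise NotImplementedError


def is_suspicious_match(profile_photo: bytes, ad_image: bytes,
                        threshold: float = 0.8) -> bool:
    """Flag the ad when the two face embeddings are highly similar."""
    a = embed_face(profile_photo)
    b = embed_face(ad_image)
    similarity = float(np.dot(a, b))  # cosine similarity for unit vectors
    return similarity >= threshold
```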

The trial, which begins globally in December, excludes regions where regulatory clearance has yet to be obtained, including Britain, the European Union, South Korea, and certain US states such as Texas and Illinois. Meta’s vice president of content policy, Monika Bickert, explained that the program protects celebrities from being exploited in scam ads, a growing problem on social media platforms, while leaving participants free to opt out.

The initiative comes as Meta balances the need to address rising scam concerns against past criticism over its handling of user data privacy. In 2021, Meta shut down its previous facial recognition system and deleted the face-scan data of a billion users, citing growing concerns over biometric data use. Earlier this year, the company agreed to a $1.4 billion settlement in Texas over allegations that it collected biometric data illegally.

In addition to targeting scam ads, Meta is also considering using facial recognition to help everyday users regain access to their accounts, for example when they have been hacked or have forgotten their passwords. Meta emphasises that all facial data generated by the new system will be deleted immediately after use, regardless of whether a scam is detected, and says the tool underwent extensive internal and external privacy reviews before implementation.

Meta’s oversight board seeks public input on immigration posts

Meta’s Oversight Board has opened a public consultation on immigration-related content that may harm immigrants following two controversial cases on Facebook. The board, which operates independently but is funded by Meta, will assess whether the company’s policies sufficiently protect refugees, migrants, immigrants, and asylum seekers from severe hate speech.

The first case concerns a Facebook post made in May by a Polish far-right coalition, which used a racially offensive term. Although the post accumulated over 150,000 views and 400 shares and drew 15 hate speech reports from users, Meta chose to keep it up following a human review. The second case involves a June post from a German Facebook page that included an image expressing hostility toward immigrants. Meta also upheld its decision to leave this post online after review.

Following the Oversight Board’s intervention, Meta’s experts reviewed both cases again but upheld the initial decisions. Helle Thorning-Schmidt, co-chair of the board, stated that these cases are critical in determining if Meta’s policies are effective and sufficient in addressing harmful content on its platform.

Meta unveils Movie Gen in collaboration with Blumhouse

Meta, the owner of Facebook, announced a partnership with Blumhouse Productions, known for hit horror films like ‘The Purge’ and ‘Get Out,’ to test its new generative AI video model, Movie Gen. This follows the recent launch of Movie Gen, which can produce realistic video and audio clips based on user prompts. Meta claims that this tool could compete with offerings from leading media generation startups like OpenAI and ElevenLabs.

Blumhouse has chosen filmmakers Aneesh Chaganty, The Spurlock Sisters, and Casey Affleck to experiment with Movie Gen, with Chaganty’s film set to appear on Meta’s Movie Gen website. In a statement, Blumhouse CEO Jason Blum emphasised the importance of involving artists in the development of new technologies, noting that innovative tools can enhance storytelling for directors.

This partnership highlights Meta’s aim to connect with the creative industries, which have expressed hesitance toward generative AI due to copyright and consent concerns. Several copyright holders have sued companies like Meta, alleging unauthorised use of their works to train AI systems. In response to these challenges, Meta has demonstrated a willingness to compensate content creators, recently securing agreements with actors such as Judi Dench, Kristen Bell, and John Cena for its Meta AI chatbot.

Meanwhile, Microsoft-backed OpenAI has been exploring potential partnerships with Hollywood executives for its video generation tool, Sora, though no deals have been finalised yet. In September, Lions Gate Entertainment announced a collaboration with another AI startup, Runway, underscoring the increasing interest in AI partnerships within the film industry.

Meta and Blumhouse test AI video tool for filmmakers

Meta has joined forces with Blumhouse, the Hollywood studio renowned for horror films, to test its new AI-driven video tool, Movie Gen. The tool creates custom 1080p videos with sound from text prompts, offering filmmakers new ways to visualise their ideas.

The pilot project engaged prominent filmmakers, including Aneesh Chaganty, Casey Affleck, and The Spurlock Sisters, who integrated AI-generated clips into their films. Chaganty’s work is already featured on the Movie Gen website, with other contributions set to appear soon. The collaboration demonstrates how AI can become a creative partner, expanding artistic possibilities through responses to text prompts and advanced sound effects.

Blumhouse CEO Jason Blum praised the initiative, stating that these tools could empower artists to tell better stories and stressed the importance of involving creators early in the development phase. Meta aims to continue refining the tool by extending the pilot programme through 2025, encouraging user feedback to enhance its capabilities.

Alongside this initiative, Meta has expanded its AI chatbot, Meta AI, to 21 markets, including the UK and Brazil. Seen as a competitor to ChatGPT, Meta AI supports multiple languages, targeting 500 million monthly active users globally.

Meta’s oversight board investigates anti-immigration posts on Facebook

Meta’s Oversight Board has initiated a detailed investigation into how the company handles anti-immigration content on Facebook, following numerous user complaints. Helle Thorning-Schmidt, co-chair of the board and former Danish prime minister, underscored the crucial task of balancing free speech with the need to protect vulnerable groups from hate speech.

The investigation focuses on two contentious posts. The first is a meme from a page linked to Poland’s far-right Confederation party, featuring former prime minister Donald Tusk in a racially charged image alluding to the EU’s immigration pact; it uses language perceived as a racial slur in Poland. The second case involves an AI-generated image posted on a German Facebook page opposing leftist and green parties. It portrays a woman with Aryan features making a stop gesture, with accompanying text condemning immigrants as ‘gang-rape specialists’, a narrative the post ties to the Green Party’s immigration policies. The post combines inflammatory rhetoric with deeply sensitive cultural issues in Germany.

Thorning-Schmidt highlighted the importance of examining Meta’s current approach to managing ‘coded speech’—subtle language or imagery that carries derogatory implications while avoiding direct violations of community standards.

The board’s investigation will assess whether Meta’s policies on hate speech are robust enough to protect individuals and communities at risk of discrimination, while still allowing for critical discourse on immigration matters. Meta’s policy is designed to protect refugees, migrants, immigrants, and asylum seekers from severe attacks while allowing critique of immigration laws.

Why does it matter?

The outcome of this investigation could prompt significant changes in how Meta moderates content on sensitive topics like immigration, striking a balance between curbing hate speech and preserving freedom of expression. The board’s willingness to take on politically charged posts also illustrates the broader challenge social media platforms face in moderating content that walks a fine line between free expression and incitement, and its rulings could set a precedent for how such nuanced content is handled.

Human-level AI still a decade away, Meta scientist warns

Achieving human-level AI may be at least a decade away, according to Meta’s chief AI scientist, Yann LeCun. Current AI systems, like large language models, fall short of true reasoning, memory, and planning, even though companies like OpenAI market their technologies with terms like ‘memory’ and ‘thinking’. LeCun cautions against the hype, saying these systems lack the deeper understanding required for complex human tasks.

LeCun argues that the limitations stem from how these AI models function. LLMs predict words, while image and video models predict pixels, limiting them to one- or two-dimensional predictions. In contrast, humans operate in a three-dimensional world, able to plan and adapt intuitively. Even the most advanced AI struggles with everyday actions, such as cleaning a room or driving a car, tasks children and teenagers can learn with ease.

The key to more advanced AI, according to LeCun, lies in ‘world models’ – systems capable of perceiving and predicting outcomes within a three-dimensional environment. These models would allow AI to form action plans without trial and error, similar to how humans quickly solve problems by envisioning the results of their actions. However, building these systems requires massive computational power, driving cloud providers to partner with AI companies.
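A toy sketch of the planning loop LeCun describes might look like the following: the agent ‘imagines’ the outcome of candidate actions using a learned dynamics model and acts on the most promising one, rather than proceeding by trial and error. The dynamics and scoring functions here are illustrative placeholders, not Meta’s architecture.

```python
# Toy sketch of planning with a learned world model: imagine the outcome
# of candidate actions, score each imagined state against a goal, and
# act on the best one. Dynamics and scoring are placeholders.
import numpy as np

rng = np.random.default_rng(0)


def predict_next_state(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Stand-in learned dynamics model (placeholder linear update)."""
    return state + 0.1 * action


def score(state: np.ndarray, goal: np.ndarray) -> float:
    """Higher is better: negative distance to the goal state."""
    return -float(np.linalg.norm(state - goal))


def plan(state: np.ndarray, goal: np.ndarray, n_candidates: int = 64) -> np.ndarray:
    """Choose the candidate action whose imagined outcome scores best."""
    actions = rng.normal(size=(n_candidates, state.size))
    return max(actions, key=lambda a: score(predict_next_state(state, a), goal))
```

The point of the sketch is that no real-world action is taken until the model has already evaluated the alternatives internally, which is what distinguishes this approach from trial-and-error learning.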

FAIR, Meta’s research arm, has shifted its focus towards developing world models and objective-driven AI. Other labs are also pursuing this approach, with researchers such as Fei-Fei Li raising significant funding to explore the potential of world models. Despite growing interest, LeCun emphasises that significant technical challenges remain, and achieving human-level AI will likely take many years, if not a full decade.