Meta urged to rethink content removal amid Israel-Palestine controversy

Meta’s Oversight Board has advised the Facebook parent company not to automatically remove the phrase ‘From the river to the sea’, which is interpreted by some as a show of solidarity with Palestinians and by others as antisemitic. The board determined that the phrase holds multiple meanings and cannot be universally deemed harmful or violent.

The phrase refers to the region between the River Jordan and the Mediterranean Sea, encompassing Israel and the Palestinian territories. It is often used at pro-Palestinian rallies; critics argue it calls for Israel’s destruction, while others dispute this interpretation. The board emphasised the importance of context in assessing such political speech, urging Meta to allow space for debate, particularly during times of conflict.

Meta expressed support for the board’s review, acknowledging the complexities involved in global content moderation. However, the Anti-Defamation League criticised the decision, saying the phrase makes Jewish and pro-Israel communities feel unsafe. The Oversight Board also called on Meta to restore data access for researchers and journalists following its recent decision to end the CrowdTangle tool.

The board’s ruling highlights the ongoing challenges in regulating sensitive content on social media platforms, with a need for balancing free speech and community safety.

Meta’s oversight board rules on content moderation in Venezuela amidst post-election crisis

Meta’s Oversight Board has issued a decision regarding the company’s content moderation policies in Venezuela amidst violent crackdowns and widespread protests following the disputed presidential election.

The ruling addresses how Meta should handle posts concerning state-supported armed groups, known as ‘colectivos’. This follows Meta’s request for guidance on moderating increasing volumes of ‘anti-colectivos content’, highlighting two specific posts for review: an Instagram post saying ‘Go to hell! I hope they kill you all!’ aimed at the colectivos, and a Facebook post criticising Venezuela’s security forces, stating ‘kill those damn colectivos’.

The Oversight Board determined that neither post violated Meta’s rules on calls for violence, instead categorising both as ‘aspirational statements’ from citizens facing severe repression and threats to free expression from state-supported forces. The board justified this by noting the colectivos’ role in repressing civic space and committing human rights violations in Venezuela, particularly during the current post-election crisis. The board emphasised that the civilian population is predominantly the target of such abuses.

Additionally, the board critiqued Meta’s practice of making political content less visible across its platforms during critical times, expressing concerns that this could undermine users’ ability to express political dissent and raise awareness about the situation in Venezuela. It recommended that Meta adapt its policies to ensure political content, especially during crises like elections and post-electoral protests, receives the same reach as non-political content. This adjustment is vital for enabling citizens to share and amplify their political grievances during significant socio-political turmoil.

Why does it matter?

This decision is part of an ongoing debate about the role of political content on Meta’s platforms. Earlier this year, the board accepted its first case related to a post on Threads, another Meta service, focusing on the company’s decision to limit recommendations of political posts. The outcome of this related case is still pending, signalling potential further policy changes regarding political content on Meta’s platforms. The board’s decision underscores the critical role of context in content moderation, particularly in regions experiencing significant political and social upheaval.

Meta complies with Brazil’s data protection demands

Meta Platforms, the parent company of Facebook and Instagram, announced on Tuesday that it will inform Brazilian users about how their data is used to train generative AI. The move follows pressure from Brazil’s National Data Protection Authority (ANPD), which had previously suspended Meta’s new privacy policy over concerns about the use of personal data for AI training.

Starting this week, Meta users in Brazil will receive email and social media notifications, providing details on how their data might be used for AI development. Users will also have the option to opt out of this data usage. The ANPD had initially halted Meta’s privacy policy in July, but it lifted the suspension last Friday after Meta agreed to make these disclosures.

In response to the ANPD’s concerns, Meta had also temporarily suspended using generative AI tools in Brazil, including popular AI-generated stickers on WhatsApp, a platform with a significant user base. This suspension was enacted while Meta engaged in discussions with the ANPD to address the agency’s concerns.

Despite the ANPD lifting the suspension, Meta has yet to confirm whether it will immediately reinstate the AI tools in Brazil. When asked, the company reiterated that the suspension was a measure taken while talks with the data protection authority were ongoing.

The development marks an important step in Brazil’s efforts to ensure transparency and user control over personal data in the age of AI.

AR studio closed as Meta prioritises AI and metaverse

Meta Platforms has announced plans to shut down its augmented reality studio, Meta Spark, which allowed third-party creators to design custom effects for Instagram and Facebook. The platform will close on 14 January, removing third-party AR effects such as filters, masks, and 3D objects created using the studio. However, Meta’s own first-party AR effects will remain available across its platforms, including Instagram, Facebook, and Messenger.

The decision aligns with Meta’s broader strategy to prioritise investments in AI and the metaverse, a virtual environment the company views as the future of the internet. In a blog post, the company confirmed that resources would now focus on developing the next generation of experiences, particularly in new form factors like AR glasses. The shift in strategy has left many third-party creators, who relied on Meta Spark, searching for alternatives.

Many creators have expressed disappointment at the platform’s closure, with some considering moving to other AR creation tools like Snapchat’s Lens Studio or Unity. Despite the discontinuation, the tech giant reassured users that existing reels and stories featuring third-party AR effects will remain accessible. However, the Meta Spark Hub and studio files will no longer be available after the shutdown.

In recent months, the company has also announced the phasing out of other projects, such as its work-focused Workplace app, which will cease operating for customers by June 2026. The company’s strategic focus on AI and emerging technologies reflects its ongoing efforts to redefine its core business in an increasingly competitive tech landscape.

Meta partners with Sage for geothermal power in the US

Meta Platforms has partnered with Sage Geosystems to source geothermal energy for its US data centres. The agreement supports the company’s expanding AI infrastructure, which demands substantial power. The initial phase of the 150-megawatt project, expected to be operational by 2027, will significantly boost the use of geothermal energy in the United States. While the exact location remains undecided, it will be east of the Rocky Mountains.

The deal aligns with the Biden administration’s push for clean energy investments from tech giants as they face growing electricity demands driven by AI advancements. Adopting AI technologies, particularly generative AI, is fuelling a rapid increase in electricity consumption, potentially complicating efforts to decarbonise the power sector by 2035. The Sage project represents Meta’s largest foray into renewable energy, a strategic move to manage rising infrastructure costs.

Sage Geosystems, a Houston-based startup, is pioneering next-generation geothermal technology that can be deployed in more locations than traditional methods. The company, supported by oil and gas firms Chesapeake Energy and Nabors Industries, validated its technology just two years ago, marking a significant step forward in the renewable energy sector.

Meta has been aggressively upgrading and expanding its infrastructure to support AI developments, substantially increasing expenses. With capex projected to reach up to $40 billion in 2024, the company expects infrastructure costs to remain a major expense driver in the coming years.

Zuckerberg alleges Biden admin pressured Meta on COVID censorship

Meta Platforms CEO Mark Zuckerberg has disclosed in a recent letter that senior Biden administration officials pressured his company to censor COVID-19 content during the pandemic. The letter, sent on 26 August to the US House Judiciary Committee, reveals Zuckerberg’s regret over not publicly addressing this pressure sooner and his acknowledgement of questionable content removal decisions made by Meta.

Zuckerberg detailed in the letter that, in 2021, the White House and other Biden administration officials exerted considerable pressure on Meta to suppress certain COVID-19-related content, including humour and satire. According to Zuckerberg, this pressure led to frustration when Meta did not fully comply.

The letter, which the Judiciary Committee shared on Facebook, highlights Zuckerberg’s criticism of the government’s actions. He expressed regret for not being more vocal about the situation and reflected on the decisions made with the benefit of hindsight.

Neither the White House nor Meta commented when approached outside regular business hours. The Judiciary Committee, led by Chairman Jim Jordan, has labelled the letter a ‘big win for free speech’, noting Zuckerberg’s admission that Facebook censored some content.

Additionally, Zuckerberg announced that he would refrain from contributing to electoral infrastructure for the upcoming presidential election. The decision follows his controversial $400 million donation in 2020 through his Chan Zuckerberg Initiative, which faced criticism and legal challenges from groups who perceived it as partisan.

Former Meta executive joins OpenAI to lead key initiatives

OpenAI has appointed a former Meta executive, Irina Kofman, as head of strategic initiatives. The hire follows a series of high-profile recruitments from major tech firms as OpenAI expands. Kofman, who worked on generative AI for five years at Meta, will report directly to Mira Murati, OpenAI’s chief technology officer.

Kofman’s role at OpenAI will involve addressing critical areas such as AI safety and preparedness. Her appointment is part of a broader strategy by OpenAI to bring in seasoned professionals to navigate the competitive landscape, which includes rivals like Google and Meta.

In recent months, OpenAI has also brought in other prominent figures from the tech industry. These include Kevin Weil, a former Instagram executive now serving as chief product officer, and Sarah Friar, the former CEO of Nextdoor, who has taken on the role of chief financial officer.

Meta has yet to comment on Kofman’s departure. The company increasingly relies on AI to enhance its advertising business, using the technology to optimise ad placements and provide marketers with tools for better campaign design.

Meta uncovers hack attempts on US officials’ WhatsApp accounts

Meta recently announced that it had detected attempts to hack WhatsApp accounts belonging to US officials from both the Biden and Trump administrations. The company linked these efforts to an Iranian hacker group, APT42, which has previously been connected to breaches in the Trump campaign. Meta described the attempts as a small-scale operation using social engineering tactics, where hackers posed as technical support from major companies like AOL, Google, Yahoo, and Microsoft.

After users flagged these suspicious activities, Meta blocked the accounts and confirmed that none of the targeted WhatsApp accounts had been compromised. The company explained that APT42 is known for deploying surveillance software on victims’ mobile devices, enabling them to access calls and text messages and even activate cameras and microphones without detection.

These hacking attempts are reportedly part of a broader campaign targeting US presidential campaigns earlier this month, just ahead of the upcoming presidential election. While Meta did not disclose the identities of those targeted, it indicated that the hackers focused on political and diplomatic figures, as well as business leaders from several countries, including the US, UK, Israel, the Palestinian territories, and Iran.

Meta’s findings underscore the ongoing risks of cyber-attacks targeting political figures and highlight the need for increased vigilance as the US heads into a critical election period.

Meta alters data use policy after CMA approval

Britain’s competition watchdog has approved Meta’s new approach to handling advertisers’ data on its platform. Previously under scrutiny for potentially unfair practices, the tech giant had initially allowed advertisers to opt out of having their data used to enhance Facebook Marketplace. However, Meta has now gone further, ensuring that no data from Facebook Marketplace advertisers will be used to improve the e-commerce platform, removing the need to opt in or out.

The UK’s Competition and Markets Authority (CMA), which began investigating Meta in 2021, has confirmed that these changes surpass the original commitments and do not disadvantage advertisers. The inquiry initially focused on whether Meta had an unfair edge in sectors like online classified ads and dating due to its data practices.

The decision follows a broader trend of tech companies, like Amazon, making adjustments to ensure fair competition. Last year, Amazon agreed not to use marketplace data from competing sellers to create a level playing field for third-party vendors.

Meta disrupts Russia’s AI-driven misinformation campaigns

According to a Meta security report, Russia’s use of generative AI in online deception campaigns has so far proved only marginally effective. Meta, the parent company of Facebook and Instagram, reported that while AI-powered tactics offer malicious actors some productivity and content-generation gains, they have not significantly advanced these influence efforts. Despite growing concerns about generative AI being used to manipulate elections, Meta has successfully disrupted such influence operations.

The report highlights that Russia remains a leading source of ‘coordinated inauthentic behaviour’ on social media, particularly since its invasion of Ukraine in 2022. These operations have primarily targeted Ukraine and its allies, with expectations that as the US election nears, Russia-backed campaigns will increasingly attack candidates who support Ukraine. Meta’s approach to detecting these campaigns focuses on account behaviour rather than content alone, as influence operations often span multiple online platforms.

Meta has observed that posts on X are sometimes used to bolster fabricated content. While Meta shares its findings with other internet companies, it notes that X has significantly reduced its content moderation efforts, making it a haven for disinformation. Researchers have also raised concerns about X, now owned by Elon Musk, being a platform for political misinformation. Musk, who supports Donald Trump, has been criticised for using his influence on the platform to spread falsehoods, including sharing an AI-generated deepfake video of Vice President Kamala Harris.