Facebook and Instagram data to power Meta’s AI models

Meta Platforms will soon start using public posts on Facebook and Instagram to train its AI models in the UK. The company had paused its plans after regulatory concerns from the Irish privacy regulator and Britain’s Information Commissioner’s Office (ICO). The AI training will involve content such as photos, captions, and comments but will exclude private messages and data from users under 18.

Meta faced privacy-related backlash earlier in the year, leading to its decision to halt the AI model launch in Europe. The company has since engaged with UK regulators, resulting in a clearer framework that allows the AI training plans to proceed. The new strategy simplifies the way users can object to their data being processed.

From next week, Facebook and Instagram users in the UK will receive in-app notifications explaining how their public posts may be used for AI training. Users will also be informed on how to object to the use of their data. Meta has extended the window in which objections can be filed, aiming to address transparency concerns raised by both the ICO and advocacy groups.

Earlier in June, Meta’s AI plans faced opposition from privacy advocacy groups like NOYB, which urged regulators to intervene. These groups argued that Meta’s notifications did not fully meet the EU’s privacy and transparency standards. Meta’s latest updates are seen as an effort to align with these regulatory demands.

Meta revises AI labels on social media platforms to balance transparency and user experience

Meta’s decision to change how it labels AI-modified content on Instagram, Facebook, and Threads marks another shift in the company’s approach to generative AI. For content edited with AI tools, the ‘AI info’ label is being moved into the post’s menu, reducing the visibility of AI’s involvement and making it easier for users to overlook or miss the AI editing details in such posts.

However, for content fully generated by AI, Meta will continue to prominently display the label beneath the user’s name, ensuring that posts created entirely by AI prompts remain visibly marked. The distinction Meta is making here seems to reflect the varying degrees of AI involvement in content creation.

Meta aims to increase transparency about content labelling by specifying whether an AI designation comes from industry signals or from user self-disclosure. This effort follows complaints and confusion over the previous ‘Made with AI’ label, particularly from photographers concerned that their real photos were being misrepresented.

This change may raise concerns about the potential for users to be misled, especially as AI editing tools become more sophisticated and the line between human and AI-created content continues to blur. It highlights the need for continued transparency as AI technology integrates more deeply into content creation across platforms.

Meta urged to rethink content removal amid Israel-Palestine controversy

Meta’s Oversight Board has advised the Facebook parent company not to automatically remove the phrase ‘From the river to the sea’, which is interpreted by some as a show of solidarity with Palestinians and by others as antisemitic. The board determined that the phrase holds multiple meanings and cannot be universally deemed harmful or violent.

The phrase refers to the region between the River Jordan and the Mediterranean Sea, encompassing Israel and the Palestinian territories. Often used at pro-Palestinian rallies, critics argue it calls for Israel’s destruction, while others dispute this interpretation. The board emphasised the importance of context in assessing such political speech, urging Meta to allow space for debate, particularly during times of conflict.

Meta expressed support for the board’s review, acknowledging the complexities involved in global content moderation. However, the Anti-Defamation League criticised the decision, saying the phrase makes Jewish and pro-Israel communities feel unsafe. The Oversight Board also called on Meta to restore data access for researchers and journalists following its recent decision to end the CrowdTangle tool.

The board’s ruling highlights the ongoing challenge of regulating sensitive content on social media platforms and the need to balance free speech with community safety.

Meta’s oversight board rules on content moderation in Venezuela amidst post-election crisis

Meta’s Oversight Board has issued a decision regarding the company’s content moderation policies in Venezuela amidst violent crackdowns and widespread protests following the disputed presidential election.

The ruling addresses how Meta should handle posts concerning state-supported armed groups, known as ‘colectivos’. This follows Meta’s request for guidance on moderating increasing volumes of ‘anti-colectivos content’, highlighting two specific posts for review: an Instagram post saying ‘Go to hell! I hope they kill you all!’ aimed at the colectivos, and a Facebook post criticising Venezuela’s security forces, stating ‘kill those damn colectivos’.

The Oversight Board determined that neither post violated Meta’s rules on calls for violence, instead categorising both as ‘aspirational statements’ from citizens facing severe repression and threats to free expression from state-supported forces. The board justified this by noting the colectivos’ role in repressing civic space and committing human rights violations in Venezuela, particularly during the current post-election crisis. The board emphasised that the civilian population is predominantly the target of such abuses.

Additionally, the board critiqued Meta’s practice of making political content less visible across its platforms during critical times, expressing concerns that this could undermine users’ ability to express political dissent and raise awareness about the situation in Venezuela. It recommended that Meta adapt its policies to ensure political content, especially during crises like elections and post-electoral protests, receives the same reach as non-political content. This adjustment is vital for enabling citizens to share and amplify their political grievances during significant socio-political turmoil.

Why does it matter?

This decision is part of an ongoing debate about the role of political content on Meta’s platforms. Earlier this year, the board accepted its first case related to a post on Threads, another Meta service, focusing on the company’s decision to limit recommendations of political posts. The outcome of this related case is still pending, signalling potential further policy changes regarding political content on Meta’s platforms. The board’s decision underscores the critical role of context in content moderation, particularly in regions experiencing significant political and social upheaval.

Meta complies with Brazil’s data protection demands

Meta Platforms, the parent company of Facebook and Instagram, announced on Tuesday that it will inform Brazilian users about how their data is utilised to train generative AI. The move follows pressure from Brazil’s National Data Protection Authority (ANPD), which had previously suspended Meta’s new privacy policy over concerns about the use of personal data for AI training.

Starting this week, Meta users in Brazil will receive email and social media notifications, providing details on how their data might be used for AI development. Users will also have the option to opt out of this data usage. The ANPD had initially halted Meta’s privacy policy in July, but it lifted the suspension last Friday after Meta agreed to make these disclosures.

In response to the ANPD’s concerns, Meta had also temporarily suspended its generative AI tools in Brazil, including the popular AI-generated stickers on WhatsApp, a platform with a significant user base in the country. The suspension remained in place while Meta engaged in discussions with the ANPD to address the agency’s concerns.

Despite the ANPD lifting the suspension, Meta has yet to confirm whether it will immediately reinstate the AI tools in Brazil. When asked, the company reiterated that the suspension was initially a measure taken during ongoing talks with the data protection authority.

The development marks an important step in Brazil’s efforts to ensure transparency and user control over personal data in the age of AI.

AR studio closed as Meta prioritises AI and metaverse

Meta Platforms has announced plans to shut down its augmented reality studio, Meta Spark, which allowed third-party creators to design custom effects for Instagram and Facebook. The platform will close on 14 January, removing third-party AR effects such as filters, masks, and 3D objects created with the studio. However, Meta’s first-party AR effects will remain available on its platforms, including Instagram, Facebook, and Messenger.

The decision aligns with Meta’s broader strategy to prioritise investments in AI and the metaverse, a virtual environment the company views as the future of the internet. In a blog post, the company confirmed that resources would now focus on developing the next generation of experiences, particularly in new form factors like AR glasses. The shift in strategy has left many third-party creators, who relied on Meta Spark, searching for alternatives.

Many creators have expressed disappointment at the platform’s closure, with some considering moving to other AR creation tools like Snapchat’s Lens Studio or Unity. Despite the discontinuation, the tech giant reassured users that existing reels and stories featuring third-party AR effects will remain accessible. However, the Meta Spark Hub and studio files will no longer be available after the shutdown.

In recent months, the company has also announced the phasing out of other projects, such as its work-focused Workplace app, which will cease operating for customers by June 2026. The company’s strategic focus on AI and emerging technologies reflects its ongoing efforts to redefine its core business in an increasingly competitive tech landscape.

Meta partners with Sage for geothermal power in the US

Meta Platforms has partnered with Sage Geosystems to source geothermal energy for its US data centres. The agreement supports the company’s expanding AI infrastructure, which demands substantial power. The initial phase of the 150-megawatt project, expected to be operational by 2027, will significantly boost the use of geothermal energy in the United States. While the exact location remains undecided, it will be east of the Rocky Mountains.

The deal aligns with the Biden administration’s push for clean energy investments from tech giants as they face growing electricity demands driven by AI advancements. Adopting AI technologies, particularly generative AI, is fuelling a rapid increase in electricity consumption, potentially complicating efforts to decarbonise the power sector by 2035. The Sage project represents Meta’s largest foray into renewable energy, a strategic move to manage rising infrastructure costs.

Sage Geosystems, a Houston-based startup, is pioneering next-generation geothermal technology that can be deployed in more locations than traditional methods. The company, supported by oil and gas firms Chesapeake Energy and Nabors Industries, validated its technology just two years ago, marking a significant step forward in the renewable energy sector.

Meta has been aggressively upgrading and expanding its infrastructure to support AI developments, substantially increasing expenses. With capex projected to reach up to $40 billion in 2024, the company expects infrastructure costs to remain a major expense driver in the coming years.

Zuckerberg alleges Biden admin pressured Meta on COVID censorship

Meta Platforms CEO Mark Zuckerberg has disclosed in a recent letter that senior Biden administration officials pressured his company to censor COVID-19 content during the pandemic. The letter, sent on 26 August to the US House Judiciary Committee, reveals Zuckerberg’s regret over not publicly addressing this pressure sooner and his acknowledgement of questionable content removal decisions made by Meta.

Zuckerberg detailed in the letter that, in 2021, the White House and other Biden administration officials exerted considerable pressure on Meta to suppress certain COVID-19-related content, including humour and satire. According to Zuckerberg, this pressure led to frustration when Meta did not fully comply.

The letter, which the Judiciary Committee shared on Facebook, highlights Zuckerberg’s criticism of the government’s actions. He expressed regret for not being more vocal about the situation and reflected on the decisions made with the benefit of hindsight.

The White House and Meta have not commented on the matter outside regular business hours. The Judiciary Committee, led by Chairman Jim Jordan, has labelled the letter a ‘big win for free speech,’ noting Zuckerberg’s admission that Facebook censored some content.

Additionally, Zuckerberg announced that he would refrain from contributing to electoral infrastructure for the upcoming presidential election. The decision follows his controversial $400 million donation in 2020 through the Chan Zuckerberg Initiative, which faced criticism and legal challenges from groups that perceived it as partisan.

Former Meta executive joins OpenAI to lead key initiatives

OpenAI has appointed a former Meta executive, Irina Kofman, as head of strategic initiatives. The hire follows a series of high-profile recruitments from major tech firms as OpenAI expands. Kofman, who worked on generative AI at Meta for five years, will report directly to Mira Murati, OpenAI’s chief technology officer.

Kofman’s role at OpenAI will involve addressing critical areas such as AI safety and preparedness. Her appointment is part of a broader strategy by OpenAI to bring in seasoned professionals to navigate the competitive landscape, which includes rivals like Google and Meta.

In recent months, OpenAI has also brought in other prominent figures from the tech industry. These include Kevin Weil, a former Instagram executive now serving as chief product officer, and Sarah Friar, the former CEO of Nextdoor, who has taken on the role of chief financial officer.

Meta has yet to comment on Kofman’s departure. The company increasingly relies on AI to enhance its advertising business, using the technology to optimise ad placements and provide marketers with tools for better campaign design.

Meta uncovers hack attempts on US officials’ WhatsApp accounts

Meta recently announced that it had detected attempts to hack WhatsApp accounts belonging to US officials from both the Biden and Trump administrations. The company linked these efforts to an Iranian hacker group, APT42, which has previously been connected to breaches in the Trump campaign. Meta described the attempts as a small-scale operation using social engineering tactics, where hackers posed as technical support from major companies like AOL, Google, Yahoo, and Microsoft.

After users flagged these suspicious activities, Meta blocked the accounts and confirmed that none of the targeted WhatsApp accounts had been compromised. The company explained that APT42 is known for deploying surveillance software on victims’ mobile devices, enabling them to access calls and text messages and even activate cameras and microphones without detection.

These hacking attempts are reportedly part of a broader campaign targeting US presidential campaigns earlier this month, just ahead of the upcoming presidential election. While Meta did not disclose the identities of those targeted, it indicated that the hackers focused on political and diplomatic figures, as well as business leaders from several countries, including the US, UK, Israel, the Palestinian territories, and Iran.

Meta’s findings underscore the ongoing risks of cyber-attacks targeting political figures and highlight the need for increased vigilance as the US heads into a critical election period.