Following Musk’s lead: How Meta’s eased content moderation is redrawing the digital landscape

Meta is overhauling its content policies to emphasize free expression, replacing third-party fact-checking with a user-driven Community Notes system. The company aims to reduce bias and over-enforcement, focusing automated tools on severe violations while using human review for lesser issues. Meta will also personalize political content, giving users more control over their feeds to foster diverse discourse.


2025 begins with a major shift in Meta’s content policy, one with far-reaching consequences for content governance. The change, which will roll out first in the United States, is inspired by Musk’s overhaul of content moderation at Twitter/X. The gist of Meta’s new content policy is:

  • Fact-checking out, community notes in
  • Fewer restrictions on content
  • Strict enforcement against illegal and highly severe content violations
  • A personalised approach to political content

According to Meta’s CEO Mark Zuckerberg, Meta is recommitting to its foundational value of free expression. In a bid to realign its content moderation practices with this philosophy, Meta is moving away from overly restrictive measures.

A major shift involves discontinuing the third-party fact-checking programme in the United States, originally established to provide independent context to online information. However, it faced criticism for introducing bias and stifling legitimate political discourse. Instead, Meta is launching a Community Notes system inspired by the approach used on platform X, allowing users to contribute context and diminishing the centralised control of fact-checking.

Earthquake in the fact-checking industry: Meta’s new content policy will impact a fast-growing industry of 225 fact-checking projects, organisations, and campaigns across 75 countries, according to the Duke Reporters’ Lab. An estimated 25,000–30,000 people work in this industry.

Acknowledging errors in its complex moderation systems, Meta aims to reduce over-enforcement and unnecessary censorship. The company estimated that 10% to 20% of its content removal actions may have been mistaken, particularly affecting discussions of sensitive subjects such as immigration and gender identity.

Meta plans to relax restrictions on these topics to ensure its platform mirrors the discourse in public forums, such as in media and legislative spaces. Meta will continue using automated systems for severe violations to improve content enforcement while relying on user reports for less serious issues. Efforts to refine its processes include expanding the staff involved in decision reviews and using AI models for secondary analysis.

In response to past user feedback, which led to a reduction in civic content, Meta now intends to personalise the delivery of political material. This new strategy will utilise user interactions to assess the relevance of such content, allowing for a tailored experience that caters to individual interests.

While these changes align with the broader shift in the US tech sector triggered by Musk’s changes to Twitter/X’s content policy, they could expose Meta to legal action internationally. In particular, Meta’s more relaxed content policy could violate provisions of the EU’s Digital Services Act. Meta may face similar pushback in other countries that are introducing much stricter content policies and regulations.