Social media giants criticized for weakening safety measures ahead of 2024 elections

A recent Free Press study documents 17 policy rollbacks that have weakened online content integrity over the past year.


As the 2024 elections approach, major social media platforms, including Alphabet, Meta, and Twitter/X, are under scrutiny for rolling back key safety policies and laying off moderation staff. A recent Free Press study identifies 17 such rollbacks affecting online content integrity over the past year.

The report warns of a heightened risk to democracy, citing more than 40,000 layoffs across these companies. Twitter’s drastic staff reduction under Elon Musk’s leadership is singled out as a significant factor behind a surge in misinformation and hate speech. The study calls for the urgent reinstatement of anti-misinformation measures and greater transparency, stressing the need for swift action before the 2024 elections.

Why does it matter?

The report points to several specific rollbacks: YouTube and X now allow content that disputes the integrity of the 2020 election, Meta, X, and YouTube have reinstated Donald Trump’s accounts, and Meta has failed to enforce its policies on labeling political ads. In a further shift, X announced in August that it would once again permit political advertisements.

Beyond Twitter, other platforms have cut key teams fighting mis- and disinformation. Meta’s 2023 layoffs, which exceeded 20,000 staff, included the team behind a fact-checking tool, content moderators, and trust-and-integrity positions. Alphabet, YouTube’s parent company, laid off about 12,600 people without clarifying how the cuts would affect content moderation. TikTok was the only major platform without significant rollbacks to its election integrity policies.