Meta pushes free speech at the cost of content control

Meta has announced that Instagram and Threads users will no longer be able to opt out of seeing political content from accounts they don’t follow. The change, part of a broader push toward promoting “free expression,” will take effect in the US this week and expand globally soon after. Users will be able to adjust how much political content they see but won’t be able to block it entirely.

Adam Mosseri, head of Instagram and Threads, had previously expressed reluctance to feature political posts, favouring community-focused content like sports and fashion. However, he now claims that users have asked to see more political material. Critics, including social media experts, argue the shift is driven by changing political dynamics in the US, particularly with Donald Trump’s imminent return to the White House.

While some users have welcomed Meta’s stance on free speech, many worry it could amplify misinformation and hate speech. Experts also caution that marginalised groups may face increased harm due to fewer content moderation measures. The changes could also push discontented users toward rival platforms like Bluesky, raising questions about Meta’s long-term strategy.

Brazil’s Lula criticises Meta’s move to end US fact-checking program

Brazilian President Luiz Inácio Lula da Silva has condemned Meta’s decision to discontinue its fact-checking program in the United States, calling it a grave issue. Speaking in Brasília on Thursday, Lula emphasised the need for accountability in digital communication, arguing that digital platforms carry the same responsibilities as traditional media. He announced plans to meet with government officials to discuss the matter.

Meta’s recent decision has prompted Brazilian prosecutors to seek clarification on whether the changes will affect the country. The company has been given 30 days to respond as part of an ongoing investigation into how social media platforms address misinformation and online violence in Brazil.

Justice Alexandre de Moraes of Brazil’s Supreme Court, known for his strict oversight of tech companies, reiterated that social media firms must adhere to Brazilian laws to continue operating in the country. Last year, he temporarily suspended X (formerly Twitter) over non-compliance with local regulations.

Meta has so far declined to comment on the matter in Brazil, fueling concerns over its commitment to tackling misinformation globally. The outcome of Brazil’s inquiry could have broader implications for how tech firms balance local laws with global policy changes.

Frank McCourt’s Project Liberty proposes TikTok US buyout

Frank McCourt’s Project Liberty, along with a group of partners, has formally proposed a bid to acquire TikTok’s US assets from ByteDance. The consortium announced its intentions just ahead of ByteDance’s January 19 deadline to sell the platform or face a ban under legislation signed by President Joe Biden in April.

The group says it has gathered sufficient financial backing, with interest from private equity funds, family offices, and high-net-worth individuals, plus debt financing from a leading US bank. The proposed value of the deal has not been disclosed.

McCourt stated the goal is to keep TikTok accessible to millions of US users without relying on its current algorithm while preventing a ban. Efforts are underway to engage with ByteDance, President-elect Trump, and the incoming administration to finalise the deal.

British universities abandon X over misinformation concerns

British universities are increasingly distancing themselves from Elon Musk’s X platform, citing its role in spreading misinformation and inciting racial unrest. A Reuters survey found that several institutions have stopped posting or significantly reduced their activity, joining a broader exodus of academics and public bodies. Concerns over falling engagement, violent content, and the platform’s perceived toxicity have driven the shift.

The University of Cambridge has seen at least seven of its colleges stop posting, while Oxford’s Merton College has deleted its account entirely. Institutions such as the University of East Anglia and London Metropolitan University report dwindling engagement, while arts conservatoires like Trinity Laban and the Royal Northern College of Music are focusing their communication efforts elsewhere. Some universities, including Buckinghamshire New University, have publicly stated that X is no longer a suitable space for meaningful discussion.

The retreat from X follows similar moves by British police forces, reflecting growing unease among public institutions. Despite the trend, some universities continue to maintain a presence on the platform, though many are actively exploring alternatives. X did not respond to requests for comment on the issue.

Musk plans edgier version of Grok

Elon Musk’s AI company, xAI, is preparing to launch a controversial feature for its chatbot, Grok, called ‘Unhinged Mode.’ According to a recently updated FAQ on the Grok website, this mode will deliver responses that are intentionally provocative, offensive, and irreverent, mimicking an amateur stand-up comedian pushing boundaries.

Musk first teased the idea of an unfiltered chatbot nearly a year ago, describing Grok as a tool that would answer controversial questions without self-censorship. While Grok has already been known for its edgy responses, it currently avoids politically sensitive topics. The new mode appears to be an effort to deliver on Musk’s vision of an anti-‘woke’ AI assistant, standing apart from more conservative competitors like OpenAI’s ChatGPT.

The move comes amid ongoing debates about political bias in AI systems. Musk has previously claimed that most AI tools lean left due to their reliance on web-based training data. He has vowed to make Grok politically neutral, blaming the internet’s content for any perceived bias in the chatbot’s current outputs. Critics, however, worry that unleashing an unfiltered mode could lead to harmful or offensive outputs, raising questions about the responsibility of AI developers.

As Grok continues to evolve, the AI industry is closely watching how users respond to Musk’s push for a less restrained chatbot. Whether this will prove a success or ignite further controversy remains to be seen.

EU denies censorship claims made by Meta

The European Commission has rejected accusations from Meta CEO Mark Zuckerberg that European Union laws censor social media, saying regulations only target illegal content. Officials clarified that platforms are required to remove posts deemed harmful to children or democracy, not lawful content.

Zuckerberg recently criticised EU regulations, claiming they stifle innovation and institutionalise censorship. In response, the Commission strongly denied the claims, emphasising its Digital Services Act does not impose censorship but ensures public safety through content regulation.

Meta has decided to end fact-checking in the US for Facebook, Instagram and Threads, opting for a ‘community notes’ system. The system allows users to highlight misleading posts, with notes published if diverse contributors agree they are helpful.

The EU confirmed that such a system could be acceptable in Europe, provided platforms submit risk assessments and demonstrate that their content moderation remains effective. Independent fact-checking will remain available to European users, including for US-based content.

Brazil warns tech firms to follow laws or face expulsion

Brazilian Supreme Court Judge Alexandre de Moraes reiterated on Wednesday that technology companies must comply with national laws to continue operating in the country. His statement followed Meta’s recent announcement that it would scale back its US fact-checking program, raising concerns about the impact on Brazil.

Speaking at an event marking the anniversary of anti-institution riots, Moraes emphasised that the court would not tolerate the use of hate speech for profit. Last year, he ordered the suspension of social media platform X for over a month due to its failure to moderate hate speech, a decision later upheld by the court. X owner Elon Musk criticised the move as censorship but ultimately complied with court demands to restore the platform’s services in Brazil.

Brazilian prosecutors have also asked Meta to clarify whether its US fact-checking changes will apply in Brazil, citing an ongoing investigation into social media platforms’ efforts to combat misinformation and violence. Meta has been given 30 days to respond but declined to comment through its local office.

Spain urges neutrality from social media platforms

The Spanish government has stressed that social media platforms must remain neutral and avoid interfering in political matters. The statement came after X’s owner, Elon Musk, commented on crime data involving foreigners in Catalonia.

Government spokesperson Pilar Alegría emphasised the need for absolute impartiality from such platforms when responding to questions about Musk’s remarks and his ongoing disagreements with European leaders like Keir Starmer and Emmanuel Macron.

Musk had reposted crime statistics from a Spanish newspaper, leading to criticism from Catalan officials. Catalonia’s Socialist leader Salvador Illa warned against using the region’s name to promote hate speech, while Spanish Prime Minister Pedro Sánchez rejected any link between immigration and crime rates.

The Spanish Interior Ministry previously reported stable or declining crime rates, affirming that immigration has no significant impact on criminal activity.

Telegram provided user data to US authorities following Durov’s arrest

Telegram, the popular messaging app, fulfilled 900 requests from US authorities for personal information about its users in 2024, with inquiries rising sharply after the arrest of CEO Pavel Durov in France. According to a report from 404 Media published on 7 January, the platform fulfilled only 14 requests for IP addresses and phone numbers between January and September 2024; the vast majority came in the final months of the year, with the full year’s requests affecting more than 2,000 users.

The increase in requests came after French authorities arrested Durov on 24 August, accusing Telegram of enabling criminal activity. Durov has stated that since 2018, Telegram has been providing user information like IP addresses and phone numbers to law enforcement authorities when requested. The policy, which is mentioned in Telegram’s privacy guidelines, continues to be a source of controversy.

Despite the ongoing legal issues, with Durov still barred from leaving France, Telegram, which has more than 950 million monthly active users, remains a key platform, particularly within the cryptocurrency community.

Meta ends fact-checking program in the US

Meta Platforms has announced the termination of its US fact-checking program and eased restrictions on politically charged discussions, such as immigration and gender identity. The decision, which affects Facebook, Instagram, and Threads, marks a significant shift in the company’s content moderation strategy. CEO Mark Zuckerberg framed the move as a return to ‘free expression,’ citing recent US elections as a cultural tipping point. The changes come as Meta seeks to build rapport with the incoming Trump administration.

In place of fact-checking, Meta plans to adopt a ‘Community Notes’ system, similar to that used by Elon Musk’s platform X. The company will also scale back proactive monitoring of hate speech, relying instead on user reports, while continuing to address high-severity violations like terrorism and scams. Meta is also relocating some policy teams from California to other states, signalling a broader operational shift. The decision follows the promotion of Republican policy executive Joel Kaplan to head of global affairs and the appointment of Trump ally Dana White to Meta’s board.

The move has sparked criticism from fact-checking organisations and free speech advocates. Angie Drobnic Holan, head of the International Fact-Checking Network, pushed back against Zuckerberg’s claims of bias, asserting that fact-checkers provide context rather than censorship. Critics, including the Centre for Information Resilience, warn that the policy rollback could exacerbate disinformation. For now, the changes will apply only to the US, with Meta maintaining its fact-checking operations in regions like the European Union, where stricter tech regulations are in place.

As Meta rolls out its ‘Community Notes’ system, global scrutiny is expected to intensify. The European Commission, already investigating Musk’s X over similar practices, noted Meta’s announcement and emphasised compliance with the EU’s Digital Services Act, which mandates robust content regulation. While Meta navigates a complex regulatory and political landscape, the impact of its new policies on disinformation and public trust remains uncertain.