German football club exits social media platform X over hate speech and disinformation concerns

German football club St Pauli has announced its withdrawal from the social media platform X, formerly Twitter, citing concerns over hate speech and disinformation. The club accused X’s owner, Elon Musk, of turning the platform into a space for unchecked racism and conspiracy theories, particularly amid Germany’s heated political climate ahead of snap elections in February.

St Pauli, known for its progressive values and activism, said X has become a “hate machine” where threats and insults go unpunished under the guise of free speech. The decision mirrors similar moves by media outlets such as The Guardian and Spain’s La Vanguardia, which also left X over concerns about harmful content.

While the German club’s account will remain online as an archive, St Pauli will no longer post new content. The Hamburg-based team, celebrated for its left-wing fan base and social initiatives, stated the decision aligns with its commitment to promoting inclusivity and combating hate.

Ireland intensifies regulation of digital platforms to curb terrorist content

The Irish media regulator, Coimisiún na Meán, has ordered TikTok, X, and Meta to take decisive steps to prevent the spread of terrorist content on their services, giving them three months to report on their progress.

This action follows notifications from EU authorities under the Terrorist Content Online Regulation. If the platforms fail to comply, the regulator can impose fines of up to four percent of their global revenue.

This decision aligns with Ireland’s broader enforcement of digital laws, including the Digital Services Act (DSA) and a new online safety code. The DSA has already prompted investigations, such as the European Commission’s probe into X last December, and Ireland’s new safety code will impose binding content moderation rules for video-sharing platforms with European headquarters in Ireland. These initiatives aim to curb the spread of harmful and illegal content on major social media platforms.

Guardian pulls out of X amid content concerns

The Guardian has announced its departure from X, citing concerns over harmful content such as racist and conspiracy-based posts. The decision marks a significant retreat from the platform, which Elon Musk acquired in 2022, by one of the UK’s most prominent news outlets. In an editorial, the Guardian said the downsides of remaining on X now outweigh any potential benefits.

The Guardian had more than 10.7 million followers on X, and its exit reflects rising concerns about the platform’s moderation policies. Critics argue that Musk’s relaxed approach has fostered an environment that tolerates misinformation and hate speech. Musk responded to the Guardian’s decision by dismissing the publication as “irrelevant” on X.

The Guardian’s move comes as other high-profile users, including former CNN anchor Don Lemon, also announce plans to leave X. Lemon expressed disappointment in the platform, saying it no longer supports meaningful debate. The UK has seen an increase in concerns about X’s impact, with British police, charities, and public health organisations also reconsidering their use of the platform.

The British government, however, still maintains a presence on X, though it refrains from paid promotions. Instead, it directs advertising efforts towards platforms like Instagram and Facebook. Observers note that the Guardian’s exit may prompt other media outlets to evaluate their stance on social media engagement.

Australia’s proposed ban on social media for under-16s sparks global debate on youth digital exposure

Australian Prime Minister Anthony Albanese announced a groundbreaking proposal on Thursday to implement a social media ban for children under 16. The proposed legislation would require social media platforms to verify users’ ages and ensure that minors are not accessing their services. Platforms that fail to comply would face substantial fines, while users or their parents would not face penalties for violating the law. Albanese emphasised that this initiative aims to protect children from the harmful effects of social media, stressing that parents and families could count on the government’s support.

The bill would not allow exemptions for children whose parents consent to their use of social media, and it would not ‘grandfather’ existing underage users. Social media platforms such as Instagram, TikTok, Facebook, X, and YouTube would be directly affected by the legislation. Minister for Communications Michelle Rowland said these platforms had been consulted on how the law could be practically enforced, but that no exemptions would be granted.

Some experts have questioned the blanket nature of the proposed ban, arguing that it may not be the most effective solution; last month, over 140 international experts signed an open letter urging the government to reconsider its approach. By contrast, social media companies including Meta (the parent company of Facebook and Instagram) have expressed support for age verification and parental consent tools. This debate echoes similar discussions in the US, where there have been efforts to restrict children’s access to social media for mental health reasons.

Russian court fines Apple for failing to remove two podcasts, RIA reports

A Moscow court has fined Apple 3.6 million roubles ($36,889) for refusing to remove two podcasts that were reportedly aimed at destabilising Russia’s political landscape, according to the RIA news agency. The court’s decision is part of a larger pattern of the Russian government targeting foreign technology companies for not complying with content removal requests. This action is seen as part of the Kremlin’s broader strategy to exert control over the digital space and reduce the influence of Western tech giants.

Since Russia’s invasion of Ukraine in 2022, the government has intensified its crackdown on foreign tech companies, accusing them of spreading content that undermines Russian authority and sovereignty. The Kremlin has already imposed similar fines on companies like Google and Meta, demanding the removal of content deemed harmful to national security or political stability. Critics argue that these moves are part of an orchestrated effort to suppress dissenting voices and maintain control over information, particularly in the face of growing international scrutiny.

Apple, like other Western companies, has faced mounting pressure to comply with Russia’s increasingly stringent regulations. While the company has largely resisted political content restrictions in other regions, the fine highlights the challenges it faces in operating within Russia’s tightly controlled media environment. Apple has not yet publicly commented on the ruling, but the decision reflects the growing risks for tech firms doing business in Russia as the country tightens its grip on digital platforms.

Mozambique and Mauritius block social media amid political crises

Mozambique and Mauritius are facing criticism for recent social media shutdowns amid political crises, with many arguing these actions infringe on digital rights. In Mozambique, platforms like Facebook and WhatsApp were blocked following protests over disputed election results.

Meanwhile, in Mauritius, the government imposed a similar blackout until after the 10 November election, following a wiretapping scandal involving leaked conversations of high-profile figures. Digital rights groups such as Access Now and the #KeepItOn coalition have condemned both shutdowns, arguing that they violate international human rights standards, including the African Commission on Human and Peoples’ Rights (ACHPR) resolution 580 and the International Covenant on Civil and Political Rights (ICCPR), as well as the two countries’ own constitutions.

In response, digital rights advocates are calling on telecommunications providers, including Emtel and Mauritius Telecom, to resist government orders to enforce the shutdowns. By maintaining internet connectivity, these companies could help preserve citizens’ access to information and uphold democratic principles in politically sensitive times.

Additionally, rights organisations argue that internet service providers have a unique role in supporting transparency and accountability, which are vital to democratic societies.

Kremlin seeks end to YouTube ban on Russian state media

The Kremlin has called on Google to lift its restrictions on Russian broadcasters on YouTube, pointing to mounting legal claims against the tech giant as potential leverage. Google blocked more than a thousand Russian channels and over 5.5 million videos, including state-funded media, after halting its ad services in Russia following the country’s invasion of Ukraine in 2022.

Russia’s legal actions against Google, initiated by 17 Russian TV channels, have led to compound fines based on the company’s revenue in Russia, accumulating to a staggering figure reportedly in the “undecillions,” according to Russian media. Kremlin spokesperson Dmitry Peskov described this enormous number as symbolic but urged Google to take these legal pressures seriously and reconsider its restrictions.
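The article does not spell out how the claims reached such a figure, but contemporaneous reports described penalties that double for each period of non-compliance. As a minimal sketch of that compounding (the 100,000-rouble starting fine and the weekly doubling schedule are illustrative assumptions, not figures from the article), exponential doubling reaches undecillion scale within roughly two years:

```python
# Illustrative only: the starting fine and doubling schedule below are
# assumptions for the sketch, not figures reported in the article.
fine = 100_000          # assumed starting fine, in roubles
weeks = 0
UNDECILLION = 10**36    # 1 undecillion on the short scale

while fine < UNDECILLION:
    fine *= 2           # assumed: penalty doubles each week of non-compliance
    weeks += 1

print(f"Exceeds one undecillion roubles after {weeks} weeks (~{weeks / 52:.1f} years)")
# -> Exceeds one undecillion roubles after 103 weeks (~2.0 years)
```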

Google has not commented on the demands. Russian officials argue that the restrictions infringe on the rights of the country’s broadcasters and hope that the enormous financial claims will compel Google to restore access to Russian media content on YouTube.

EU moves to formalise disinformation code under DSA

The EU’s voluntary code of practice on disinformation will soon become a formal set of rules under the Digital Services Act (DSA). According to Paul Gordon, assistant director at Ireland’s media regulator Coimisiún na Meán, efforts are underway to finalise the transition by January. He emphasised that the new regulations should lead to more meaningful engagement from platforms, moving beyond mere compliance.

Originally established in 2022 and signed by 44 companies, including Google, Meta, and TikTok, the code outlines commitments to combat online disinformation, such as increasing transparency in political advertising and enhancing cooperation during elections. A spokesperson for the European Commission confirmed that the code aims to be recognised as a ‘Code of Conduct’ under the DSA, which already mandates content moderation measures for online platforms.

The DSA, which has applied to all platforms since February, imposes the strictest rules on the largest online services, requiring them to mitigate risks associated with disinformation. The new code will help these platforms demonstrate compliance with the DSA’s obligations, as assessed by the Commission and the European Board for Digital Services. However, no specific timeline has been provided for the code’s formal implementation.

Musk’s platform under fire for inadequate fact-checking

Elon Musk’s social media platform, X, is facing criticism from the Center for Countering Digital Hate (CCDH), which claims its crowd-sourced fact-checking feature, Community Notes, is struggling to curb misinformation about the upcoming US election. According to a CCDH report, of 283 posts analysed for misleading election-related content, only 26% carried corrective notes visible to all users, allowing false narratives to reach massive audiences. The 209 uncorrected posts amassed over 2.2 billion views, raising concerns over the platform’s commitment to truth and transparency.
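As a quick consistency check on the report’s figures (a sketch using only the numbers quoted above; nothing here is independently verified), the 26% visibility rate and the 209 uncorrected posts do line up:

```python
# Consistency check on the CCDH figures cited above; all numbers come
# from the report as quoted, none are independently verified.
total_posts = 283
visible_note_rate = 0.26                            # share with notes shown to all users

corrected = round(total_posts * visible_note_rate)  # ≈ 74 posts with visible notes
uncorrected = total_posts - corrected               # 209 posts, matching the report

print(corrected, uncorrected)  # -> 74 209
```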

Community Notes was launched to empower users to flag inaccurate content, but critics argue that the system alone may be insufficient during critical events like elections. Calls for X to strengthen its safety measures follow the platform’s recent courtroom defeat in its lawsuit against the CCDH, which had faulted X for a rise in hate speech. The report also highlights Musk’s endorsement of Republican candidate Donald Trump as a potential complicating factor, since Musk himself has been accused of spreading misinformation.

In response to the ongoing scrutiny, five US state officials urged Musk in August to address misinformation spread by X’s AI chatbot, Grok, which has reportedly circulated false claims about the November election. X has yet to respond to these calls for stricter safeguards, and its ability to manage misinformation remains under close watch as the election approaches.

Missouri Attorney General accuses Google of censoring conservatives

Missouri Attorney General Andrew Bailey announced an investigation into Google on Thursday, accusing the tech giant of censoring conservative speech. In a statement shared on the social media platform X, Bailey called Google “the biggest search engine in America” and alleged that it has engaged in bias during what he described as “the most consequential election in our nation’s history.” Bailey cited no specific examples of censorship, and Google quickly dismissed the claims as “totally false”, maintaining its commitment to showing “useful information to everyone—no matter what their political beliefs are.”

Republicans have long contended that major social media platforms and search engines demonstrate an anti-conservative bias, though tech firms like Google have repeatedly denied these allegations. Concerns around this issue have intensified during the 2024 election campaign, especially as social media and online search are seen as significant factors influencing public opinion. Bailey’s investigation is part of a larger wave of Republican-led inquiries into potential online censorship, often focused on claims that conservative voices and views are suppressed.

Adding to these concerns, Donald Trump, the Republican presidential candidate, recently pledged that if he wins the upcoming election, he would push for the prosecution of Google, alleging that its search algorithm unfairly targets him by prioritising negative news stories. Trump has not offered evidence for these claims, and Google has previously stated its search results are generated based on relevance and quality to serve users impartially. As the November 5 election draws near, this investigation highlights the growing tension between Republican officials and major tech platforms, raising questions about how online content may shape future political campaigns.