Russia blocks Signal messaging app

Russia’s state communications watchdog, Roskomnadzor, has announced a nationwide block on the encrypted messaging app Signal. The restriction, reported by Interfax, is attributed to Signal’s failure to comply with Russian anti-terrorism laws aimed at preventing the use of messaging apps for extremist activities.

Users across Russia, including in Moscow and St Petersburg, experienced significant disruptions with Signal, which approximately one million Russians use for secure communications. Complaints about the app surged to over 1,500, indicating widespread issues. While Signal appeared to function normally for some users connecting through a VPN, others were unable to register new accounts or use the app at all without one.

Mikhail Klimarev, a telecom expert, confirmed that this block represents a deliberate action by Russian authorities rather than a technical malfunction. He noted that this is the first instance of Signal being blocked in Russia, marking a significant escalation in the country’s efforts to control encrypted communication platforms.

Roskomnadzor’s action follows previous attempts to restrict other messaging services, such as Telegram, which faced a similar blocking attempt in 2018. Despite those efforts, Telegram’s availability in Russia remained relatively unaffected. Signal has yet to comment on the block.

Meta wins appeal against anti-vaccine group

Meta Platforms (META.O) has successfully defended against an appeal by Children’s Health Defense (CHD), an anti-vaccine group founded by Robert F. Kennedy Jr., challenging Meta’s censorship of Facebook posts containing vaccine misinformation. The 9th US Circuit Court of Appeals in Pasadena, California, ruled that CHD did not prove Meta was influenced or coerced by federal officials to suppress anti-vaccine content, upholding a June 2021 decision by US District Judge Susan Illston.

CHD sued Meta in 2020, claiming its constitutional rights were violated when Meta flagged the group’s posts as vaccine misinformation and restricted its advertising on Facebook. Meta argued its actions were part of efforts to curb the spread of COVID-19 vaccine misinformation, including prohibiting claims that the vaccines are ineffective and directing users to authoritative sources like the World Health Organisation for accurate information.

Circuit Judge Eric Miller, appointed by former President Donald Trump, emphasised that Meta, as a private company, has the right under the First Amendment to regulate content on its platform and promote its views on vaccine safety and efficacy, even if they align with the government’s stance. The court also dismissed claims against the Poynter Institute and Science Feedback, which help Meta evaluate content accuracy.

Children’s Health Defense expressed disappointment with the ruling and is considering further legal action. Circuit Judge Daniel Collins dissented in part, suggesting that CHD could seek an injunction on its free speech claims, although he agreed that other claims, such as those for monetary damages, should be dismissed. The decision underscores the ongoing debate around content moderation and free speech in the digital age.

UK considers revising Online Safety Act amid riots

The British government is considering revisions to the Online Safety Act in response to a recent wave of racist riots allegedly fuelled by misinformation spread online. The act, passed in October but not yet in force, allows the government to fine social media companies up to 10% of their global turnover if they fail to remove illegal content, such as incitement to violence or hate speech. However, proposed changes could extend these penalties to platforms that allow ‘legal but harmful’ content, like misinformation, to thrive.

Britain’s Labour government inherited the act from the Conservatives, who had spent considerable time adjusting the bill to balance free speech with the need to curb online harms. A recent YouGov poll found that 66% of adults believe social media companies should be held accountable for posts inciting criminal behaviour, and 70% feel these companies are not sufficiently regulated. Additionally, 71% of respondents criticised social media platforms for not doing enough to combat misinformation during the riots.

In response to these concerns, Cabinet Office Minister Nick Thomas-Symonds announced that the government is prepared to revisit the act’s framework to ensure its effectiveness. London Mayor Sadiq Khan also voiced his belief that the law is not ‘fit for purpose’ and called for urgent amendments in light of the recent unrest.

Why does it matter?

The riots, which spread across Britain last week, were triggered by false online claims that the perpetrator of a 29 July knife attack, which killed three young girls, was a Muslim migrant. As tensions escalated, X owner Elon Musk contributed to the chaos by sharing misleading information with his large following, including a statement suggesting that civil war in Britain was ‘inevitable.’ Prime Minister Keir Starmer’s spokesperson condemned these comments, stating there was ‘no justification’ for such rhetoric.

X faces scrutiny for hosting extremist content

Concerns are mounting over content shared by the Palestinian militant group Hamas on X, the social media platform owned by Elon Musk. The Global Internet Forum to Counter Terrorism (GIFCT), which includes major companies like Facebook, Microsoft, and YouTube, is reportedly worried about X’s continued membership and position on its board, fearing it undermines the group’s credibility.

The Sunday Times reported that X has become the platform on which Hamas propaganda videos are most easily found, along with content from other UK-proscribed terrorist groups such as Hezbollah and Palestinian Islamic Jihad. Researchers were able to locate such videos on X within minutes.

Why does it matter?

These concerns come as X faces criticism for reducing its content moderation capabilities. The GIFCT’s independent advisory committee expressed alarm in its 2023 report, citing significant reductions in online trust and safety measures on specific platforms, implicitly pointing to X.

Elon Musk’s approach to turning X into a ‘free speech’ platform has included reinstating previously banned extremists, introducing paid verification, and cutting much of the moderation team. The shift has raised fears about X’s ability to manage extremist content effectively. Despite being a founding member of GIFCT, X has yet to meet its financial obligations to the group.

Additionally, the criticism Musk has faced in Britain highlights a complex and still unresolved policy question: whether to prioritise freedom of speech or to subject big tech social media owners to closer scrutiny in the interest of community safety.

YouTube faces uncertain future in Russia

As Russia tightens its grip on independent media, YouTube remains a vital platform for free expression, particularly for opposition voices. However, this may not last much longer. Recent mass outages reported by Russian internet services signal a possible shift, with lawmakers blaming Google’s outdated infrastructure for the slowdowns, a claim Google disputes.

The video platform, which has served as a key outlet for dissenting opinions, faces potential blocking in Russia. With independent media largely banned, YouTube has become a crucial source of opposition content, such as the widely viewed video by the late Alexei Navalny accusing President Vladimir Putin of corruption.

Experts warn that banning YouTube could severely impact online freedom and disrupt Russia’s internet connectivity. The widespread use of VPNs to bypass restrictions could also strain the country’s internet infrastructure, further complicating the situation.

Why does it matter?

The Russian government has historically throttled internet traffic to silence dissent, but it now relies on a more sophisticated censorship system. Despite the growing pressure, YouTube remains accessible, likely due to fears of public backlash and the potential strain on Russia’s networks.

As Moscow encourages users to switch to domestic platforms like VK Video, the future of YouTube in Russia hangs in the balance. While some non-political content creators may migrate, opposition channels could struggle to maintain their reach if forced off YouTube.

EU scrutiny of X could expand due to UK riots

The European Commission’s ongoing investigation into social media platform X, owned by Elon Musk, could factor in the company’s handling of harmful content during the recent UK riots.

Charges against X were issued last month under the Digital Services Act (DSA), which mandates stricter controls on illegal content and public security risks for large online platforms.

Although the UK is no longer part of the EU, content shared in Britain that violates DSA rules might still reach European users, potentially breaching the law. Recent events in Britain, where far-right and anti-Muslim groups exploited the fatal stabbing of three young girls to spread disinformation and incite violence, have raised concerns.

The European Commission acknowledged that while the DSA does not cover actions outside the EU, content visible in Europe from the UK could influence their proceedings against X. The company has yet to respond to these developments.

Maduro blocks X in Venezuela amid election dispute

President Nicolás Maduro has imposed a 10-day block on access to the social media platform X (formerly Twitter) in Venezuela, accusing its owner, Elon Musk, of using the platform to promote hatred following the country’s disputed presidential election. Reports from Caracas indicated that by Thursday night, posts on X were no longer loading on several major telephone services, including the state-owned Movilnet.

Maduro, in a speech after a pro-government march, claimed that Musk violated the platform’s own rules and incited hatred. He also accused X of being used by his political opponents to create unrest in Venezuela. As part of his response, Maduro signed a resolution from the National Telecommunications Commission (Conatel) to remove X from circulation in the country for ten days. However, he did not elaborate on the process involved.

The tension between Maduro and Musk escalated after the disputed 28 July presidential election, where Venezuelan electoral authorities declared Maduro the winner. However, opposition candidate Edmundo González claimed victory, citing records from 80% of the electronic voting machines. Musk criticised Maduro on X, calling him a dictator and accusing him of electoral fraud. Since the election, Maduro has expressed a desire to regulate social media in Venezuela, alleging that platforms like X are being used to threaten his supporters and create anxiety across the country.

Turkey blocks access to Instagram

Turkey has blocked access to the social media platform Instagram, according to an announcement by the country’s infotech regulator. The reason for and duration of the ban remain undisclosed, and the block has also rendered the platform’s mobile app inaccessible.

The decision follows remarks by Turkish communications official Fahrettin Altun, who criticised Instagram for allegedly blocking condolence posts about the killing of Ismail Haniyeh, a prominent figure in the Palestinian militant group Hamas. Altun labelled Instagram’s action ‘censorship’ and pointed out that the platform had not cited any policy violation as justification.

Meta Platforms Inc., the parent company of Instagram, has not yet responded to the ban or to Altun’s accusations. The Turkish Information Technologies and Communication Authority (BTK) made the decision public on its website on 2 August.

Civil society and industry share concerns about the UN draft Cybercrime Convention

Civil society organisations and more than 150 tech companies within the Cybersecurity Tech Accord urged the United Nations to revise the final draft of the UN Cybercrime Convention. Non-state stakeholders share concerns that the current language of the convention could lead to human rights abuses and criminalise the work of penetration testers, ethical hackers, security researchers, and journalists.

The UN member states are currently in the final round of negotiations for what will become the first global treaty on cybercrime, with talks running from 29 July to 8 August. The current draft, published on 23 May, has seen some positive changes, but the Tech Accord, in particular, calls for further revisions. The Office of the UN High Commissioner for Human Rights also noted that while the revised draft includes some welcome improvements, significant concerns remain about many provisions that fail to meet international human rights standards. The Electronic Frontier Foundation (EFF) added that the proposed convention mandates intrusive domestic surveillance measures and requires states to cooperate on surveillance and data sharing. It allows the collection, preservation, and sharing of electronic evidence for any crime deemed serious under a country’s domestic law, with minimal human rights safeguards, even with countries that have poor human rights records.

These shortcomings are particularly concerning given the already expansive use of existing cybercrime laws in some jurisdictions, which have been used to unduly restrict freedom of expression, target dissenting voices, and arbitrarily interfere with the privacy and anonymity of communications, according to the office’s analysis. A key concern of the Tech Accord is the lack of transparency in the convention’s current form, while the EFF calls for addressing the highly intrusive secret surveillance powers, which lack robust safeguards, and the insufficient protection for security researchers, among other concerns.

DoJ warns of TikTok’s potential to influence US elections

The US Justice Department has raised the alarm over TikTok’s potential influence on American politics, arguing that the app’s continued operation under ByteDance, its Chinese parent company, could enable covert interference by the Chinese government in US elections. In a recent federal court filing, prosecutors suggested that TikTok’s algorithm might be manipulated to sway public opinion and influence political discourse, posing a significant threat to national security.

The filing is part of a broader legal battle as TikTok challenges a new US law that could force a ban on the app unless its ownership is transferred by January 2025. The law, signed by President Joe Biden in April, addresses concerns over TikTok’s ties to China and its potential to compromise US security. TikTok argues that the law infringes on free speech and restricts access to information, as it targets a specific platform and its extensive global user base.

The Justice Department contends that the law aims not to suppress free speech but to address unique national security risks posed by TikTok’s connection to a foreign power. They suggest a possible solution could involve selling TikTok to an American company, allowing the app to continue operating in the US without interruption.

Why does it matter?

Concerns about TikTok’s data practices have been a focal point, with officials warning that the app collects extensive personal information from users, including location data and private messages. The department also pointed to technologies in China that could be used to influence the app’s content, raising further worries about TikTok’s role in data collection and content manipulation.

The debate highlights a clash between national security concerns and the protection of digital freedoms, as the outcome of the lawsuit could set a significant precedent for how the US handles foreign tech influence.