Social media platform Bluesky gains popularity in UK after Musk’s riot remarks

Bluesky, a social media platform, has reported a significant increase in signups in the United Kingdom recently as users look for alternatives to Elon Musk’s X. The increase follows Musk’s controversial remarks on ongoing riots in the UK, which have driven users, including several Members of Parliament, to explore other platforms. The company announced that it had experienced a 60% rise in activity from UK accounts.

Musk has faced criticism for inflaming tensions after riots in Britain were sparked by misinformation surrounding the murder of three girls in northern England. The Tesla CEO allegedly used X to disseminate misleading information to his vast audience, including a post claiming that civil war in Britain was ‘inevitable.’ The case has prompted Prime Minister Keir Starmer to respond and increased calls for the government to accelerate the implementation of online content regulations.

Bluesky highlighted that the UK had the most signups of any country for five of the last seven days. Once supported by Twitter co-founder Jack Dorsey, the platform is among the many apps vying to replace Twitter after Musk’s turbulent takeover in late 2022.

As of July, Bluesky had 688,568 monthly active users, small compared to X’s 76.9 million, according to Similarweb, a digital market intelligence firm. Despite its smaller size, the recent surge in UK signups to Bluesky signals a growing interest in alternative social media platforms.

Russia blocks Signal messaging app

Russia’s state communications watchdog, Roskomnadzor, has announced a nationwide block on the encrypted messaging app Signal. The restriction, reported by Interfax, is attributed to Signal’s failure to comply with Russian anti-terrorism laws aimed at preventing the use of messaging apps for extremist activities.

Users across Russia, including Moscow and St Petersburg, experienced significant disruptions with Signal, which approximately one million Russians use for secure communications. Complaints about the app surged to over 1,500, indicating widespread issues. While Signal appeared to function normally for some users with a VPN, it was inaccessible for others trying to register new accounts or use it without a VPN.

Mikhail Klimarev, a telecom expert, confirmed that this block represents a deliberate action by Russian authorities rather than a technical malfunction. He noted that this is the first instance of Signal being blocked in Russia, marking a significant escalation in the country’s efforts to control encrypted communication platforms.

Roskomnadzor’s action follows previous attempts to restrict other messaging services, such as Telegram, which faced a similar blocking attempt in 2018. Despite these efforts, Telegram’s availability in Russia remained largely unaffected. Signal has not yet commented on the current situation.

NOYB files complaint against X over AI data use

An Austrian advocacy group, NOYB, has filed a complaint against the social media platform X, owned by Elon Musk, accusing the company of using users’ data to train its AI systems without their consent. The complaint, led by privacy activist Max Schrems, was lodged with authorities in nine European Union countries, increasing pressure on Ireland’s Data Protection Commission (DPC), the lead EU regulator for many major US tech firms, whose EU operations are based in Ireland.

The Irish DPC seeks to limit or suspend X’s ability to process user data for AI training. X has agreed to halt the use of personal data for AI training until users can opt-out, following a recent Irish court hearing that found X had begun data collection before allowing users to object.

Nonetheless, NOYB’s complaint primarily focuses on X’s lack of cooperation and the inadequacy of its mitigation measures rather than questioning the legality of the data processing itself. Schrems emphasised the need for X to fully comply with EU law by obtaining user consent before using their data. X has yet to respond to the latest complaint but intends to work with the DPC on AI-related issues.

In a related case, Meta, Facebook’s parent company, delayed the launch of its AI assistant in Europe after the Irish DPC advised against it, following similar complaints from NOYB regarding the use of personal data for AI training.

UK considers revising Online Safety Act amid riots

The British government is considering revisions to the Online Safety Act in response to a recent wave of racist riots allegedly fueled by misinformation spread online. The act, passed in October but not yet enforced, currently allows the government to fine social media companies up to 10% of their global turnover if they fail to remove illegal content, such as incitements to violence or hate speech. However, proposed changes could extend these penalties to platforms that permit ‘legal but harmful’ content, like misinformation, to thrive.

Britain’s Labour government inherited the act from the Conservatives, who had spent considerable time adjusting the bill to balance free speech with the need to curb online harms. A recent YouGov poll found that 66% of adults believe social media companies should be held accountable for posts inciting criminal behaviour, and 70% feel these companies are not sufficiently regulated. Additionally, 71% of respondents criticised social media platforms for not doing enough to combat misinformation during the riots.

In response to these concerns, Cabinet Office Minister Nick Thomas-Symonds announced that the government is prepared to revisit the act’s framework to ensure its effectiveness. London Mayor Sadiq Khan also voiced his belief that the law is not ‘fit for purpose’ and called for urgent amendments in light of the recent unrest.

Why does it matter?

The riots, which spread across Britain last week, were triggered by false online claims that the perpetrator of a 29 July knife attack, which killed three young girls, was a Muslim migrant. As tensions escalated, X owner Elon Musk contributed to the chaos by sharing misleading information with his large following, including a statement suggesting that civil war in Britain was ‘inevitable.’ Prime Minister Keir Starmer’s spokesperson condemned these comments, stating there was ‘no justification’ for such rhetoric.

X faces scrutiny for hosting extremist content

Concerns are mounting over content shared by the Palestinian militant group Hamas on X, the social media platform owned by Elon Musk. The Global Internet Forum to Counter Terrorism (GIFCT), which includes major companies like Facebook, Microsoft, and YouTube, is reportedly worried about X’s continued membership and position on its board, fearing it undermines the group’s credibility.

The Sunday Times reported that X has become the easiest platform on which to find Hamas propaganda videos, along with content from other UK-proscribed terrorist groups such as Hezbollah and Palestinian Islamic Jihad. Researchers were able to locate such videos within minutes on X.

Why does it matter?

These concerns come as X faces criticism for reducing its content moderation capabilities. The GIFCT’s independent advisory committee expressed alarm in its 2023 report, citing significant reductions in online trust and safety measures on specific platforms, implicitly pointing to X.

Elon Musk’s approach to turning X into a ‘free speech’ platform has included reinstating previously banned extremists, allowing paid verification, and cutting much of the moderation team. The shift has raised fears about X’s ability to manage extremist content effectively. Despite being a founding member of GIFCT, X has yet to meet its financial obligations to the group.

Additionally, the criticism Musk has faced in Britain highlights a complex and, as yet, unresolved governance question: should policy prioritise freedom of speech, or subject big tech social media owners to greater scrutiny in the interest of community safety?

Ireland takes legal action against X over data privacy

The Irish Data Protection Commission (DPC) has launched legal action against the social media platform X, formerly Twitter, in a case that revolves around the processing of user data to train Grok, Musk’s AI large language model. The chatbot was developed by xAI, a company founded by Elon Musk, and serves as a search assistant for premium users on the platform.

The DPC is seeking a court order to stop or limit the processing of user data by X for training its AI systems, expressing concerns that this could violate the European Union’s General Data Protection Regulation (GDPR). The case may be referred to the European Data Protection Board for further review.

The legal dispute is part of a broader conflict between Big Tech companies and regulators over the use of personal data to develop AI technologies. Consumer organisations have accused X of breaching GDPR, a claim the company has vehemently denied, calling the DPC’s actions unwarranted and overly broad.

The Irish DPC has an important role in overseeing X’s compliance with the EU data protection laws since the platform’s operations in the EU are managed from Dublin. The current legal proceedings could significantly shift how Ireland enforces GDPR against large tech firms.

The DPC is also concerned about X’s plans to launch a new version of Grok, which is reportedly being trained using data from the EU and European Economic Area users. The privacy watchdog argues that this could worsen existing issues with data processing.

Despite X implementing some mitigation measures, such as offering users an opt-out option, these steps were not in place when the data processing began, leading to further scrutiny from the DPC. X has resisted the DPC’s requests to halt data processing or delay the release of the new Grok version, leading to an ongoing court battle.

The outcome of this case could set a precedent for how AI and data protection issues are handled across Europe.

TikTok challenges DOJ’s secret evidence request

TikTok and its parent company ByteDance are urging a US appeals court to dismiss the Justice Department’s request to keep parts of its legal case against TikTok confidential. The government aims to file over 15% of its brief and 30% of its evidence in secret, which TikTok argues would hinder its ability to challenge any potentially incorrect factual claims.

The Justice Department, which has not commented publicly, recently filed a classified document outlining security concerns regarding ByteDance’s ownership of TikTok. The document includes declarations from the FBI and other national security agencies.

The government contends that TikTok’s Chinese ownership poses a significant national security threat due to its access to vast amounts of personal data from American users and China’s potential for information manipulation.

In response, TikTok maintains that it has never and will never share US user data with China or manipulate video content as alleged. The company suggests appointing a district court judge as a special master to review the classified submissions if the court does not reject the secret evidence.

The Biden administration has asked the court to dismiss lawsuits filed by TikTok, ByteDance, and TikTok creators that aim to block a law requiring the divestiture of TikTok’s US assets by 19 January or face a ban. Despite the lack of evidence that the Chinese government has accessed US user data, the Justice Department insists that the potential risk remains too significant to ignore.

Russia fines Google and TikTok over banned content

Russia’s communications regulator, Roskomnadzor, has fined Alphabet’s Google and TikTok for not complying with orders to remove banned content. The Tagansky district court in Moscow imposed a 5 million rouble ($58,038) fine on Google and a 4 million rouble fine on TikTok. These penalties were issued because both platforms failed to remove content similar to material previously ordered to be deleted.

This is part of a broader effort by Russia over the past several years to enforce the removal of content it considers illegal from foreign technology platforms. Although relatively small, the fines have been persistent, reflecting Russia’s ongoing scrutiny and regulation of online content.

Moscow has been particularly critical of Google, especially for taking down YouTube channels associated with Russian media and public figures. Neither Google nor TikTok immediately responded to requests for comment on the fines.

US agency says Amazon to be held accountable for hazardous products

The Consumer Product Safety Commission (CPSC) of the United States declared that Amazon will be held accountable for selling hazardous third-party products on its platform. It has further asked the company to take steps to inform consumers and ensure that they return or destroy such products. The directive encompasses more than 400,000 items, including defective carbon monoxide detectors, unsafe hairdryers, and children’s sleepwear that violates flammability standards. In response, Amazon revealed its intention to contest the order in court.

The US agency stated that ‘Amazon failed to notify the public about these hazardous products and did not take adequate steps to encourage its customers to return or destroy them, thereby leaving consumers at substantial risk of injury’. The CPSC labelled Amazon as a ‘distributor’ of faulty products, as such products are stored and shipped by the company.

This is not a one-off incident for the company: in 2021, the CPSC also sued Amazon, compelling it to recall numerous hazardous products sold on its platform. Amazon subsequently removed most of these items and refunded customers. Nevertheless, the company maintained that it merely provides logistics for independent sellers and is not a distributor.

Spain fines Booking.com €413.2 million for market abuse

Spain’s competition regulator, the CNMC, has imposed a hefty fine of €413.2 million (US$448 million) on online reservation platform Booking.com. The fine, the largest ever levied by the CNMC, targets Booking.com’s dominant market position in Spain, where it holds a 70% to 90% share. The penalties stem from practices dating back to 2019.

The CNMC found Booking.com to be imposing unfair terms on hotels and stifling competition from other providers. This included a ban on hotels offering lower prices on their own websites compared to Booking.com’s listings, as well as the ability of Booking.com to unilaterally impose price discounts on hotels. Additionally, the platform mandated that hotels resolve disputes in Dutch courts.

Booking Holdings, Booking.com’s parent company, intends to appeal the fine. They argue that the issue falls under the remit of the European Union’s Digital Markets Act and express strong disagreement with the CNMC’s findings. Booking Holdings plans to challenge the decision in Spain’s high court.

The investigation was triggered by complaints lodged in 2021 by the Spanish Association of Hotel Managers and the Madrid Hotel Business Association. Another point of contention is Booking.com’s practice of offering benefits to hotels that generate higher fees, which critics argue unfairly restricts competition from alternative booking services.