X faces scrutiny for hosting extremist content

Concerns are mounting over content shared by the Palestinian militant group Hamas on X, the social media platform owned by Elon Musk. The Global Internet Forum to Counter Terrorism (GIFCT), which includes major companies like Facebook, Microsoft, and YouTube, is reportedly worried about X’s continued membership and position on its board, fearing it undermines the group’s credibility.

The Sunday Times reported that X has become the most accessible platform to find Hamas propaganda videos, along with content from other UK-proscribed terrorist groups like Hezbollah and Palestinian Islamic Jihad. Researchers were able to locate such videos within minutes on X.

Why does it matter?

These concerns come as X faces criticism for reducing its content moderation capabilities. The GIFCT’s independent advisory committee expressed alarm in its 2023 report, citing significant reductions in online trust and safety measures on specific platforms, implicitly pointing to X.

Elon Musk’s approach to turning X into a ‘free speech’ platform has included reinstating previously banned extremists, allowing paid verification, and cutting much of the moderation team. The shift has raised fears about X’s ability to manage extremist content effectively. Despite being a founding member of GIFCT, X has reportedly yet to meet its financial obligations to the group.

Additionally, the criticism Musk has faced in Great Britain points to a complex and still unresolved governance question: whether to prioritise freedom of speech, or to scrutinise big tech social media owners more closely and focus on community safety.

Ireland takes legal action against X over data privacy

The Irish Data Protection Commission (DPC) has launched legal action against the social media platform X, formerly Twitter, in a case that revolves around the processing of user data to train Grok, Musk’s AI large language model. The chatbot was developed by xAI, a company founded by Elon Musk, and serves as a search assistant for premium users on the platform.

The DPC is seeking a court order to stop or limit the processing of user data by X for training its AI systems, expressing concerns that this could violate the European Union’s General Data Protection Regulation (GDPR). The case may be referred to the European Data Protection Board for further review.

The legal dispute is part of a broader conflict between Big Tech companies and regulators over using personal data to develop AI technologies. Consumer organisations have accused X of breaching GDPR, a claim the company has vehemently denied, calling the DPC’s actions unwarranted and overly broad.

The Irish DPC has an important role in overseeing X’s compliance with the EU data protection laws since the platform’s operations in the EU are managed from Dublin. The current legal proceedings could significantly shift how Ireland enforces GDPR against large tech firms.

The DPC is also concerned about X’s plans to launch a new version of Grok, which is reportedly being trained using data from the EU and European Economic Area users. The privacy watchdog argues that this could worsen existing issues with data processing.

Despite X implementing some mitigation measures, such as offering users an opt-out option, these steps were not in place when the data processing began, leading to further scrutiny from the DPC. X has resisted the DPC’s requests to halt data processing or delay the release of the new Grok version, leading to an ongoing court battle.

The outcome of this case could set a precedent for how AI and data protection issues are handled across Europe.

TikTok challenges DOJ’s secret evidence request

TikTok and its parent company ByteDance are urging a US appeals court to dismiss the Justice Department’s request to keep parts of its legal case against TikTok confidential. The government aims to file over 15% of its brief and 30% of its evidence in secret, which TikTok argues would hinder its ability to challenge any potentially incorrect factual claims.

The Justice Department, which has not commented publicly, recently filed a classified document outlining security concerns regarding ByteDance’s ownership of TikTok. The document includes declarations from the FBI and other national security agencies.

The government contends that TikTok’s Chinese ownership poses a significant national security threat due to its access to vast amounts of personal data from American users and China’s potential for information manipulation.

In response, TikTok maintains that it has never shared US user data with China and never will, nor has it manipulated video content as alleged. The company suggests appointing a district court judge as a special master to review the classified submissions if the court does not reject the secret evidence.

The Biden administration has asked the court to dismiss lawsuits filed by TikTok, ByteDance, and TikTok creators that aim to block a law requiring the divestiture of TikTok’s US assets by 19 January or face a ban. Despite the lack of evidence that the Chinese government has accessed US user data, the Justice Department insists that the potential risk remains too significant to ignore.

Russia fines Google and TikTok over banned content

Russia’s communications regulator, Roskomnadzor, has fined Alphabet’s Google and TikTok for not complying with orders to remove banned content. The Tagansky district court in Moscow imposed a 5 million rouble ($58,038) fine on Google and a 4 million rouble fine on TikTok. The penalties were issued because both platforms failed to restrict content similar to material they had previously been ordered to remove.

This is part of a broader effort by Russia over the past several years to enforce the removal of content it considers illegal from foreign technology platforms. Although relatively small, the fines have been persistent, reflecting Russia’s ongoing scrutiny and regulation of online content.

Moscow has been particularly critical of Google, especially for taking down YouTube channels associated with Russian media and public figures. Neither Google nor TikTok immediately responded to requests for comment on the fines.

US agency says Amazon to be held accountable for hazardous products

The Consumer Product Safety Commission (CPSC) of the United States declared that Amazon will be held accountable for selling hazardous third-party products on its platform. It has further asked the company to take steps to inform consumers and ensure that they return or destroy such products. The directive covers roughly 400,000 items, including defective carbon monoxide detectors, unsafe hairdryers, and children’s sleepwear that violates flammability standards. In response, Amazon revealed its intention to contest the order in court.

The US agency stated that ‘Amazon failed to notify the public about these hazardous products and did not take adequate steps to encourage its customers to return or destroy them, thereby leaving consumers at substantial risk of injury’. The CPSC labelled Amazon as a ‘distributor’ of faulty products, as such products are stored and shipped by the company.

This is not a one-off incident for the company: in 2021, the CPSC also sued Amazon, seeking to compel it to recall numerous hazardous products sold on its platform. Amazon subsequently removed most of these items and refunded customers. Nevertheless, the company maintained that it merely provides logistics for independent sellers and is not a distributor.

Spain fines Booking.com €413.2 million for market abuse

Spain’s competition regulator, the CNMC, has imposed a hefty fine of €413.2 million (US$448 million) on the online reservation platform Booking.com. The fine, the largest ever levied by the CNMC, targets Booking.com’s dominant market position in Spain, where it holds a 70% to 90% share. The penalties stem from practices dating back to 2019.

The CNMC found Booking.com to be imposing unfair terms on hotels and stifling competition from other providers. This included a ban on hotels offering lower prices on their own websites compared to Booking.com’s listings, as well as the ability of Booking.com to unilaterally impose price discounts on hotels. Additionally, the platform mandated that hotels resolve disputes in Dutch courts.

Booking Holdings, Booking.com’s parent company, intends to appeal the fine. They argue that the issue falls under the remit of the European Union’s Digital Markets Act and express strong disagreement with the CNMC’s findings. Booking Holdings plans to challenge the decision in Spain’s high court.

The investigation was triggered by complaints lodged in 2021 by the Spanish Association of Hotel Managers and the Madrid Hotel Business Association. Another point of contention is Booking.com’s practice of offering benefits to hotels that generate higher fees, which critics argue unfairly restricts competition from alternative booking services.

DoJ warns of TikTok’s potential to influence US elections

The US Justice Department has raised the alarm over TikTok’s potential influence on American politics, arguing that the app’s continued operation under ByteDance, its Chinese parent company, could enable covert interference by the Chinese government in US elections. In a recent federal court filing, prosecutors suggested that TikTok’s algorithm might be manipulated to sway public opinion and influence political discourse, posing a significant threat to national security.

The filing is part of a broader legal battle as TikTok challenges a new US law that could force a ban on the app unless its ownership is transferred by January 2025. The law, signed by President Joe Biden in April, addresses concerns over TikTok’s ties to China and its potential to compromise US security. TikTok argues that the law infringes on free speech and restricts access to information, as it targets a specific platform and its extensive global user base.

The Justice Department contends that the law aims not to suppress free speech but to address unique national security risks posed by TikTok’s connection to a foreign power. They suggest a possible solution could involve selling TikTok to an American company, allowing the app to continue operating in the US without interruption.

Why does this matter?

Concerns about TikTok’s data practices have been a focal point, with officials warning that the app collects extensive personal information from users, including location data and private messages. The department also pointed to technologies in China that could potentially influence the app’s content, raising further worries about the app’s role in data collection and content manipulation.

The debate highlights a clash between national security concerns and the protection of digital freedoms, as the outcome of the lawsuit could set a significant precedent for how the US handles foreign tech influence.

LinkedIn agrees to $6.6 million settlement over ad metrics

LinkedIn has agreed to a $6.625 million settlement to resolve a proposed class action accusing the company of inflating ad metrics, leading to overcharges for advertisers. The preliminary settlement, filed in San Jose, California federal court, awaits approval by US Magistrate Judge Susan van Keulen. Although LinkedIn denies any wrongdoing, it has committed to hiring an outside auditor for two years to review its ad metrics.

The lawsuit originated from allegations by advertisers, including TopDevz of Sacramento and Noirefy of Chicago, who claimed LinkedIn counted video ad views even when the videos played off-screen as users scrolled past. This issue came to light after LinkedIn disclosed in November 2020 that software bugs had led to over 418,000 overcharges, mostly under $25. LinkedIn subsequently provided credits to nearly all affected advertisers.

The settlement covers US advertisers who purchased ads on LinkedIn from January 2015 to May 2023. LinkedIn stated that the settlement underscores its commitment to ad integrity and maintaining a trusted platform for users and customers. The advertisers’ lawyers may seek up to 25% of the settlement amount, approximately $1.656 million, for legal fees.

Judge van Keulen had previously dismissed the lawsuit in December 2021, but the advertisers appealed, and the appeal was put on hold for mediation. The case, known as In re LinkedIn Advertising Metrics Litigation, is being handled in the US District Court, Northern District of California.

YouTube faces speed drops in Russia amid tensions

YouTube speeds in Russia are expected to significantly decline on desktop computers due to Google’s failure to upgrade its equipment in the country and its refusal to unblock Russian media channels. The situation has drawn criticism from Alexander Khinshtein, head of the lower house of parliament’s information policy committee, who emphasised that the slowdown is a repercussion of YouTube’s actions. Khinshtein highlighted that download speeds on the platform have already decreased by 40% and could drop by up to 70% next week.

The decline in YouTube quality is attributed to Google’s inaction, particularly its failure to upgrade Google Global Cache servers in Russia. Additionally, Google has not invested in Russian infrastructure and allowed its local subsidiary to go bankrupt, preventing it from covering local data centre expenses. Communications regulator Roskomnadzor has echoed these concerns, indicating that the lack of upgrades has led to deteriorating service quality.

Google has faced multiple fines from Russia for not removing content deemed illegal or undesirable by the Russian government. Following Russia’s invasion of Ukraine, in March 2022 YouTube blocked channels associated with Russian state-funded media worldwide, citing its policy against content that denies or trivialises well-documented violent events. Subsequently, Google’s Russian subsidiary filed for bankruptcy, citing Russian authorities’ seizure of its bank account as the reason for its inability to function. Meanwhile, some Russian officials, including Chechen leader Ramzan Kadyrov, have proposed blocking YouTube entirely in response to the ongoing tensions.

China cracks down on unauthorised ChatGPT access

The Cyberspace Administration of China (CAC), China’s internet regulator, has publicly identified and named agents facilitating local ChatGPT access. The latest crackdown comes against the backdrop of OpenAI’s decision to restrict access to its API in ‘unsupported countries and territories’, including mainland China, Hong Kong, and Macau.

Alongside the CAC, other local authorities have penalised several website operators this year for providing unauthorised access to generative AI services like ChatGPT. These measures reflect the CAC’s commitment to enforcing China’s AI regulations, which mandate rigorous screening and registration of all AI services before they can be made publicly available. Even with these stringent rules, some developers and businesses have managed to sidestep the regulations by using virtual private networks.

Why does this matter?

Despite Beijing’s ambition to lead the global AI race, it strictly requires GenAI providers to uphold core socialist values and avoid generating content that threatens national security or the socialist system. As of January, about 117 GenAI products had been registered with the CAC, and 14 large language models and enterprise applications had received formal approval for commercial use.