Salt Typhoon cyberespionage operation raises alarm over US telecommunications security vulnerabilities

US government agencies are set to brief the House of Representatives on a widespread cyberespionage campaign allegedly linked to China. Known as Salt Typhoon, the operation reportedly targeted American telecommunications firms to steal call metadata and other sensitive information. A similar briefing was held for senators last week.

The White House revealed that at least eight US telecom companies had been affected, with the data of a large number of citizens compromised. Senator Ron Wyden is drafting legislation in response, while Senator Bob Casey expressed significant concern, noting that legislative action might be delayed until the new year.

On Wednesday, a Senate Commerce subcommittee will examine the broader risks posed by cyber threats to communication networks. Industry representatives, including Competitive Carriers Association CEO Tim Donovan, will contribute insights on best practices to counter such attacks.

China has denied the allegations, labelling them as disinformation, and reaffirmed its opposition to cyber theft. Officials and lawmakers continue to emphasise the gravity of the breaches, with Senator Richard Blumenthal calling the scale of Chinese hacking efforts ‘terrifying.’

Google and Meta under European scrutiny over teen ad partnership

European regulators are investigating a previously undisclosed advertising partnership between Google and Meta that targeted teenagers on YouTube and Instagram, the Financial Times reports. The now-cancelled initiative, aimed at promoting Instagram to users aged 13 to 17, allegedly bypassed Google’s policies restricting ad personalisation for minors.

The partnership, initially launched in the US with plans for global expansion, has drawn the attention of the European Commission, which has requested extensive internal records from Google, including emails and presentations, to evaluate potential violations. Google, defending its practices, stated that its safeguards for minors remain industry-leading and emphasised recent internal training to reinforce policy compliance.

This inquiry comes amid heightened concerns about the impact of social media on young users. Earlier this year, Meta introduced enhanced privacy features for teenagers on Instagram, reflecting the growing demand for stricter online protections for minors. Neither Meta nor the European Commission has commented on the investigation so far.

OpenAI expands AI tools with text-to-video feature

OpenAI has launched its text-to-video AI model, Sora, to ChatGPT Plus and Pro users, signalling a broader push into multimodal AI technologies. Initially limited to safety testers, Sora is now available as Sora Turbo at no additional cost, allowing users to create videos up to 20 seconds long in various resolutions and aspect ratios.

The move positions OpenAI to compete with similar tools from Meta, Google, and Stability AI. While the model is accessible in most regions, it remains unavailable in EU countries, the UK, and Switzerland due to regulatory considerations. OpenAI plans to introduce tailored pricing options for Sora next year.

The company emphasised safeguards against misuse, such as blocking harmful content like child exploitation and deepfake abuse. It also plans to gradually expand features, including uploads featuring people, as it strengthens protections. Sora marks another step in OpenAI’s efforts to innovate responsibly in the AI space.

Palantir and Anduril team up for defence AI

Palantir Technologies and Anduril Industries have joined forces to optimise defence data for AI training. Palantir’s platform will organise and label sensitive defence data for model training, while Anduril’s systems will manage the retention and distribution of this information for national security applications.

The collaboration highlights challenges in deploying AI for defence, where sensitive data complicates model training. Anduril recently partnered with OpenAI to integrate advanced AI into security missions, underscoring its commitment to autonomous defence solutions.

Palantir, a key player in the AI boom, continues to see robust demand from governments and businesses seeking advanced software solutions.

Bill targets Huawei, ZTE in US telecoms overhaul

The US House of Representatives is preparing to vote on a defence bill proposing $3 billion for telecom companies to replace equipment from Chinese firms Huawei and ZTE. The legislation aims to address security concerns posed by Chinese technology in American wireless networks. A previous allocation of $1.9 billion was deemed insufficient for the programme, which the Federal Communications Commission (FCC) estimates will cost nearly $5 billion.

The initiative, known as the ‘rip and replace’ programme, targets rural carriers reliant on the equipment, which could lose connectivity if funding gaps persist. FCC Chair Jessica Rosenworcel warned that insufficient funding might force some rural networks to shut down, endangering services such as 911 emergency calls. Rural regions face significant risks without immediate support for the removal and replacement of insecure telecoms infrastructure.

The proposed funding would also cover up to $500 million for regional technology hubs, supported by revenue from an FCC spectrum auction. Advocates emphasise the importance of securing connectivity while maintaining services for millions of Americans. Competitive Carriers Association CEO Tim Donovan welcomed the proposed funding, calling it critical for network security and consumer access.

Pavel Durov faces Paris court over Telegram allegations

Pavel Durov, founder of Telegram, appeared in a Paris court on 6 December to address allegations that the messaging app has facilitated criminal activity. Accompanied by his lawyers, Durov reportedly stated he trusted the French justice system but declined to comment further on the case.

The legal proceedings stem from charges brought against Durov in August, accusing him of running a platform that enables illicit transactions. Following his arrest at Le Bourget airport, he posted a $6 million bail and has been barred from leaving France until March 2025. If convicted, he could face up to 10 years in prison and a fine of 500,000 euros.

Industry experts fear the case against Durov reflects a broader crackdown on privacy-preserving technologies in the Web3 space. Parallels have been drawn with the arrest of Tornado Cash developer Alexey Pertsev, raising concerns over government overreach and the implications for digital privacy.

Blue Yonder hit by data theft in cyberattack

Supply chain software company Blue Yonder is investigating claims of data theft after the ‘Termite’ ransomware group threatened to release stolen data. The Arizona-based company, which serves major clients like DHL, Starbucks, and Walgreens, was hit by a ransomware attack on 21 November. While Blue Yonder initially confirmed a cyberattack, it did not disclose the perpetrators.

The Termite group, which recently claimed responsibility for the breach on its dark web leak site, says it has stolen 680 gigabytes of data, including documents, reports, and email lists. The group, believed to be a rebranded version of the Babuk ransomware gang, has threatened to release the data soon. Blue Yonder is working with cybersecurity experts to investigate the breach and has notified impacted customers, though it has not confirmed specific details about the stolen data.

The attack has caused operational disruptions for some clients, including UK supermarkets Morrisons and Sainsbury’s, and US company Starbucks, which was forced to manually calculate employee pay. The full extent of the attack on Blue Yonder’s 3,000+ customers remains unclear.

UN Cybercrime Convention raises human rights concerns in the Arab region

The imminent adoption of a new UN cybercrime convention by the General Assembly has sparked significant concerns over its implications for global digital rights, particularly in the Arab region. Critics argue that the convention, as currently drafted, lacks sufficient human rights safeguards, potentially empowering authoritarian regimes to suppress dissent both domestically and internationally.

In the Arab region, existing cybercrime laws often serve as tools to curb freedom of expression, with vague terms criminalising online speech that might undermine state prestige or harm public morals. These restrictions contravene Article 19 of the International Covenant on Civil and Political Rights, which requires limitations on expression to be lawful, necessary, and proportionate.

Such ambiguity in legal language fosters an environment of self-censorship, as individuals remain uncertain about the legal interpretation of their online content. The convention’s broad scope also raises alarm over international cooperation in cases that could infringe human rights. It allows for the collection of electronic evidence for ‘serious crimes’, which are vaguely defined and could include acts like defamation or expressions of sexual orientation—punishable by severe penalties in some countries.

That provision risks enabling extensive surveillance and data-sharing among nations with weak human rights records. In the Arab region, existing cybercrime laws already permit intrusive surveillance and mass data collection without adequate safeguards, threatening individuals’ privacy rights. Countries like Tunisia and Palestine lack mechanisms to notify individuals after surveillance, depriving them of the ability to seek redress for legal violations and exacerbating privacy concerns.

In light of these issues, Access Now and civil society organisations are urging UN member states to critically evaluate the convention and resist voting for its adoption in its current form. They recommend thorough national discussions to assess its human rights impacts and call for stronger safeguards in future negotiations.

Why does it matter?

Arab states are encouraged to align their cybercrime laws with international standards and engage civil society in discussions to demonstrate a genuine commitment to human rights. The overarching message is clear: without comprehensive reforms, the convention risks further eroding digital rights and undermining freedom of expression worldwide. It is imperative to ensure that any international treaty robustly protects human rights rather than enabling their violation under the guise of combating cybercrime.

International Red Cross sets guidelines for AI use

The International Committee of the Red Cross (ICRC) has introduced principles for using AI in its operations, aiming to harness the technology’s benefits while protecting vulnerable populations. The guidelines, unveiled in late November, reflect the organisation’s cautious approach amid growing interest in generative AI, such as ChatGPT, across various sectors.

ICRC delegate Philippe Stoll emphasised the importance of ensuring AI tools are robust and reliable to avoid unintended harm in high-stakes humanitarian contexts. The ICRC defines AI broadly as systems that perform tasks requiring human-like cognition and reasoning, extending beyond popular large language models.

Guided by its core principles of humanity, neutrality, and independence, the ICRC prioritises data protection and insists that AI tools address real needs rather than seeking problems to solve. That approach stems from the risks posed by deploying technologies in regions poorly represented in AI training data, as highlighted by a 2022 cyberattack that exposed sensitive beneficiary information.

Collaboration with academia is central to the ICRC’s strategy. Partnerships like the Meditron project with Switzerland’s EPFL focus on AI for clinical decision-making and logistics. These initiatives aim to improve supply chain management and enhance field operations while aligning with the organisation’s principles.

Despite interest in AI’s potential, Stoll cautioned against using off-the-shelf tools unsuited to specific local challenges, underscoring the need for adaptability and responsible innovation in humanitarian work.

Former ASML worker accused of selling secrets

A Rotterdam court is set to hold a pretrial hearing on Monday concerning a former employee of ASML accused of stealing intellectual property from the Dutch semiconductor equipment maker. The suspect, a 43-year-old Russian national, allegedly profited by selling company manuals, including those of ASML’s Mapper subsidiary, to Russian buyers, according to Dutch media reports.

ASML, which acquired Mapper in 2019, confirmed its awareness of the case and said it had filed a formal complaint, declining further comment during ongoing legal proceedings. The suspect is reportedly in custody, though details of the arrest remain unclear.

Mapper, a Dutch firm focused on developing E-beam lithography technology, was integrated into ASML following its 2019 bankruptcy. While Mapper’s product did not succeed, its engineers joined ASML’s chip-measuring business, helping to bolster the company’s capabilities. This acquisition eased concerns about sensitive technology falling into foreign hands, a priority for both the Dutch government and the US military.