The imminent adoption of a new UN cybercrime convention by the General Assembly has sparked significant concerns over its implications for global digital rights, particularly in the Arab region. Critics argue that the convention, as currently drafted, lacks sufficient human rights safeguards, potentially empowering authoritarian regimes to suppress dissent both domestically and internationally.
In the Arab region, existing cybercrime laws often serve as tools to curb freedom of expression, with vague terms criminalising online speech that might undermine state prestige or harm public morals. These restrictions contravene Article 19 of the International Covenant on Civil and Political Rights, which requires limitations on expression to be lawful, necessary, and proportionate.
Such ambiguity in legal language fosters an environment of self-censorship, as individuals remain uncertain about how their online content will be interpreted. The convention’s broad scope raises further alarm, as it permits international cooperation in cases that could infringe human rights. It allows for the collection of electronic evidence for ‘serious crimes’, a vaguely defined category that could cover acts such as defamation or expressions of sexual orientation, which carry severe penalties in some countries.
That provision risks enabling extensive surveillance and data-sharing among nations with weak human rights records. In the Arab region, existing cybercrime laws already permit intrusive surveillance and mass data collection without adequate safeguards, threatening individuals’ privacy rights. Countries like Tunisia and Palestine lack mechanisms to notify individuals after surveillance, removing their ability to seek redress for legal violations and exacerbating privacy concerns.
In light of these issues, Access Now and civil society organisations are urging UN member states to critically evaluate the convention and resist voting for its adoption in its current form. They recommend thorough national discussions to assess its human rights impacts and call for stronger safeguards in future negotiations.
Why does it matter?
Arab states are encouraged to align their cybercrime laws with international standards and engage civil society in discussions to demonstrate a genuine commitment to human rights. The overarching message is clear: without comprehensive reforms, the convention risks further eroding digital rights and undermining freedom of expression worldwide. It is imperative to ensure that any international treaty robustly protects human rights rather than enabling their violation under the guise of combating cybercrime.
The International Committee of the Red Cross (ICRC) has introduced principles for using AI in its operations, aiming to harness the technology’s benefits while protecting vulnerable populations. The guidelines, unveiled in late November, reflect the organisation’s cautious approach amid growing interest in generative AI, such as ChatGPT, across various sectors.
ICRC delegate Philippe Stoll emphasised the importance of ensuring AI tools are robust and reliable to avoid unintended harm in high-stakes humanitarian contexts. The ICRC defines AI broadly as systems that perform tasks requiring human-like cognition and reasoning, extending beyond popular large language models.
Guided by its core principles of humanity, neutrality, and independence, the ICRC prioritises data protection and insists that AI tools address real needs rather than being solutions in search of problems. That caution stems from the risks of deploying technologies in regions poorly represented in AI training data, and from experiences such as a 2022 cyberattack that exposed sensitive beneficiary information.
Collaboration with academia is central to the ICRC’s strategy. Partnerships like the Meditron project with Switzerland’s EPFL focus on AI for clinical decision-making and logistics. These initiatives aim to improve supply chain management and enhance field operations while aligning with the organisation’s principles.
Despite interest in AI’s potential, Stoll cautioned against using off-the-shelf tools unsuited to specific local challenges, underscoring the need for adaptability and responsible innovation in humanitarian work.
A Rotterdam court is set to hold a pretrial hearing on Monday concerning a former Russian employee of ASML accused of stealing intellectual property from the Dutch semiconductor equipment maker. The suspect, a 43-year-old Russian national, allegedly profited by selling company manuals, including those of ASML’s Mapper subsidiary, to Russian buyers, according to Dutch media reports.
ASML, which acquired Mapper in 2019, confirmed its awareness of the case and said it had filed a formal complaint, declining further comment during ongoing legal proceedings. The suspect is reportedly in custody, though details of the arrest remain unclear.
Mapper, a Dutch firm that developed e-beam lithography technology, was integrated into ASML following its 2019 bankruptcy. While Mapper’s product did not succeed, its engineers joined ASML’s chip-measuring business, helping to bolster the company’s capabilities. The acquisition eased concerns about sensitive technology falling into foreign hands, a priority for both the Dutch government and the US military.
President-elect Donald Trump’s transition team has invited tech giants, including Google, Microsoft, Meta, Snap, and TikTok, to a mid-December meeting focused on combating online drug sales, according to a report by The Information. The meeting aims to gather insights from these companies about challenges and priorities in addressing illegal drug activity on their platforms.
Trump has pledged to tackle the fentanyl crisis, emphasising stricter measures against its flow into the US from Mexico and Canada. He has also proposed a nationwide advertising campaign to educate the public about the dangers of fentanyl. Tech companies have faced scrutiny in the past for their platforms’ roles in facilitating drug sales, with Meta under investigation and eBay recently settling a case for failing to prevent the sale of devices used to make counterfeit pills.
The transition team has not commented publicly on the meeting, but it underscores the growing intersection between technology and public health issues, particularly as the US grapples with the devastating impact of fentanyl addiction and trafficking.
A US federal appeals court has upheld a law requiring TikTok’s Chinese parent company, ByteDance, to sell its US operations by 19 January or face a nationwide ban. The ruling, which cites national security concerns over ByteDance’s access to Americans’ data and its potential to influence public discourse, marks a significant win for the Justice Department. TikTok plans to appeal to the Supreme Court, hoping to block the divestment order.
The decision reflects bipartisan efforts to counter perceived threats from China, with Attorney General Merrick Garland calling it a vital step in preventing the Chinese government from exploiting TikTok. Critics, including the ACLU, argue that banning the app infringes on First Amendment rights, as 170 million Americans rely on TikTok for creative and social expression. The Chinese Embassy denounced the ruling, warning it could damage US-China relations.
Unless the ruling is overturned or President Biden extends the deadline, the law could also set a precedent for restricting other foreign-owned apps. Meanwhile, TikTok’s rivals, such as Meta and Google, have seen gains in the wake of the decision, as advertisers prepare for potential shifts in the social media landscape.
The European Union has directed TikTok to retain data related to Romania’s elections under the Digital Services Act, citing concerns over foreign interference. The move follows pro-Russia ultranationalist Calin Georgescu’s unexpected success in the presidential race’s first round, raising alarm about coordinated social media promotion.
Declassified documents revealed TikTok’s role in amplifying Georgescu’s profile through coordinated accounts and paid algorithmic promotion, despite his claim of no campaign spending. Romania’s security agencies have flagged these efforts as ‘hybrid Russian attacks,’ accusations Russia denies.
TikTok said it is cooperating with the EU to address the concerns and pledged to help establish the facts behind the allegations. Romania’s runoff presidential vote is seen as pivotal for the country’s EU alignment.
FCC Chairwoman Jessica Rosenworcel has proposed requiring US communications providers to certify annually that they have plans to defend against cyberattacks. The move comes amid growing concerns over espionage by ‘Salt Typhoon,’ a hacking group allegedly linked to Beijing that has infiltrated several American telecom companies to steal call data.
Rosenworcel highlighted the need for a modern framework to secure networks as US intelligence agencies assess the impact of Salt Typhoon’s widespread attack. A senior US official confirmed the hackers had stolen metadata from numerous Americans, breaching at least eight telecom firms.
The FCC proposal, which Rosenworcel has circulated to other commissioners, would take effect immediately if approved. The announcement follows a classified Senate briefing on the breach, but industry giants like Verizon, AT&T, and T-Mobile have yet to comment.
A senior US official revealed that a Chinese hacking group, known as ‘Salt Typhoon,’ has stolen vast amounts of Americans’ metadata in a broad cyberespionage effort targeting US telecommunications. While specific figures remain undisclosed, the hackers are said to have breached at least eight American telecom firms, including Verizon, AT&T, and T-Mobile.
Call record metadata — detailing who called whom, when, and where — was a key target, exposing sensitive personal and professional patterns. In some cases, telephone audio intercepts were also reportedly stolen. The campaign remains active, with the White House prioritising efforts to counter the intrusions.
Government agencies, including the FBI and the National Security Council, have briefed lawmakers and President Joe Biden on the matter, highlighting the severity of the breach. Efforts to secure the nation’s telecommunications infrastructure are ongoing.
Romania has been subjected to ‘aggressive hybrid Russian attacks’ during a series of recent elections, according to declassified documents from the country’s security council. The revelations come ahead of a presidential runoff between pro-Russian far-right candidate Calin Georgescu and pro-European centrist Elena Lasconi. Georgescu’s unexpected rise, attributed in part to coordinated promotion on TikTok, has raised alarms in this European Union and NATO member state.
Romanian intelligence reported over 85,000 cyberattacks exploiting vulnerabilities, including the publication of election website access data on Russian cybercrime platforms. The attacks persisted on election day and beyond, with officials concluding they stemmed from resources typical of a state actor. Russia has denied any involvement in the election.
If Georgescu wins, his anti-NATO stance and opposition to aiding Ukraine could isolate Romania from Western allies, marking a significant geopolitical shift. The alleged cyber campaigns have intensified concerns about election integrity in the region, drawing attention to the role of foreign interference in shaping democratic outcomes.
Anduril Industries and OpenAI have announced a partnership to advance AI applications for US national security. The collaboration will focus on enhancing counter-unmanned aircraft systems (CUAS), crucial for detecting and neutralising airborne drone threats.
By leveraging Anduril’s extensive CUAS data, AI models will be trained to respond to aerial threats in real time. OpenAI’s CEO, Sam Altman, highlighted the goal of safeguarding military personnel through these advanced AI solutions.
This partnership reflects the escalating global competition in AI-powered autonomous defence technologies, as nations like the United States and China race to innovate in automated military systems. Founded in 2017, Anduril specialises in autonomous systems, including drones and other tactical assets.