TikTok must now urgently appeal to the Supreme Court to block or reverse a law mandating ByteDance's sale of the popular short-video platform by 19 January, after an appeals court denied its request for more time. TikTok and ByteDance had submitted an emergency motion to the US Court of Appeals for the District of Columbia Circuit, seeking an extension so they could present their arguments to the US Supreme Court.
With 170 million American users and billions in ad revenue, the platform, a digital giant particularly beloved by younger generations, now stands on the edge of a ban in its largest foreign market. At the centre of this unprecedented conflict lies a confluence of national security concerns, free speech debates, and economic implications far beyond TikTok.
The origins of the current conflict can be traced back to 2020, when then-President Donald Trump attempted to ban TikTok and Chinese-owned WeChat, citing fears that Beijing could misuse Americans' data or manipulate public discourse through the platforms. The courts blocked Trump's effort, and in 2021, President Joe Biden revoked the Trump-era orders. Yet bipartisan concerns about TikTok's ties to the Chinese government remain. Lawmakers and US intelligence agencies have long raised alarms about the vast amount of data TikTok collects on its American users and the potential for Beijing to exploit this information for espionage or propaganda. This year, Congress passed a bill with overwhelming support requiring ByteDance to divest its US assets, marking the strictest legal threat the platform has ever faced.
The recent appeals court decision to uphold the law was welcomed by the Biden administration as necessary to protect US national security. The ruling cited the 'well-substantiated threat' posed by the Chinese government's relationship with ByteDance, arguing that China's influence over TikTok is fundamentally at odds with American free speech principles. Attorney General Merrick Garland praised the decision, calling it a crucial step in 'blocking the Chinese government from weaponising TikTok.' However, critics of the ruling, including free speech advocates and TikTok itself, have pushed back. The American Civil Liberties Union (ACLU) warned that banning the app would violate the First Amendment rights of millions of Americans who rely on TikTok to communicate and express themselves.
TikTok has vowed to appeal to the Supreme Court to halt the ruling before the 19 January deadline. Consequently, the Supreme Court's decision will determine whether the platform survives under ByteDance's ownership or faces a US ban. However, significant obstacles loom even if ByteDance attempts to sell TikTok's US operations. Any divestiture would need to demonstrate that the app is wholly independent of Chinese control, a requirement China's laws make nearly impossible to satisfy. ByteDance's prized algorithm, the key to TikTok's success, is classified as a technology export by Beijing and cannot be transferred without Chinese government approval.
Meanwhile, the economic consequences of a TikTok ban could be profound. Advertisers, who have collectively poured billions into the platform, are closely monitoring the situation. While brands are not yet pulling their marketing budgets, many are developing contingency plans to shift ad spending to rivals like Meta-owned Instagram, Alphabet's YouTube, and Snap. These platforms, all of which have rolled out short-form video features to compete with TikTok, stand to reap enormous benefits if TikTok disappears from the US landscape. Meta's stock price soared to an all-time high following the court ruling, reflecting investor optimism that its platforms will absorb TikTok's market share.
Content creators and small businesses that rely on the app for income now face an uncertain future. Many influencers are urging followers to connect with them on alternative platforms like Instagram, YouTube, and X (formerly Twitter) in case TikTok is banned. For small businesses, the situation is equally precarious. TikTok's integrated commerce feature, TikTok Shop, has exploded in popularity since its US launch in September 2023. This year, the platform generated $100 million in Black Friday sales, offering brands a unique and lucrative e-commerce channel. For merchants who have invested in TikTok Shop, a ban would mean losing a critical revenue stream with no comparable alternative.
TikTok's rise in the US has transformed digital advertising and e-commerce and reshaped global supply chains. Like its competitors Shein and Temu, TikTok Shop has connected American consumers with low-cost vendors, many of whom ship products directly from China. This dynamic reflects the broader economic tensions underpinning the TikTok controversy. The USA, wary of China's growing tech influence, has imposed strict export controls on Chinese technology and cracked down on perceived threats to its national security. Beijing, in turn, has retaliated with bans on critical minerals and stricter oversight of technologies leaving its borders. TikTok has become the latest and most visible symbol of this escalating US-China tech war.
The path forward is fraught with uncertainty. President Biden, whose administration has led the charge against TikTok, can extend the 19 January deadline by 90 days if he determines that a divestiture is in progress. That would push the final decision to President-elect Donald Trump, who has offered mixed messages about his stance on TikTok. While Trump previously sought to ban the app, he now claims he would not enforce the new law. Nevertheless, the legislation has broad bipartisan support, making it unlikely that a new administration could simply ignore it. Tech companies, meanwhile, face legal risks if they continue to provide services to TikTok after the deadline. App store operators like Apple and Google, as well as internet hosting providers, could face billions in fines if they fail to comply.
The Chinese government's role adds another layer of complexity. Beijing has fiercely opposed US efforts to force ByteDance into a sale, framing the TikTok dispute as a 'commercial robbery' designed to stifle China's technological ambitions. By classifying TikTok's algorithm as a protected export, China has made clear that any divestiture would be a lengthy and politically charged process, if it happens at all. Either way, ByteDance is caught between two powerful governments with irreconcilable demands.
For now, TikTok remains fully operational in the US, and its users continue to scroll, create, and shop as usual. However, the next few weeks will determine whether TikTok can resolve its existential question or join the growing list of casualties in the US-China tech war. The outcome will shape the future of one of the world's most influential social media platforms and set a precedent for how governments regulate foreign-owned technology in an era defined by digital dominance and geopolitical rivalry. Whether through divestiture, court intervention, or an outright ban, TikTok's fate in the US marks a turning point in the ongoing struggle to balance national security, economic interests, and the free flow of information in an inter(net)connected world.
Consumer law was first mentioned in the EU in the context of competition law in 1972, when policymakers began laying the groundwork for consumer protection policy. Despite the lack of a treaty basis, many regulatory initiatives took shape to protect consumers (food safety, prevention of doorstep selling, and unfair contract terms).
The first treaty-based mention of a specific consumer protection article came in the 1992 Maastricht Treaty. Today, EU consumer law is one of the best-developed substantive fields of EU law.
As contained in the Consolidated Version of the Treaty on the Functioning of the European Union (the treaty that consolidates the previous European Union treaties, as amended up to 2009), Article 169 specifically refers to consumer protection. Article 169(1) reads as follows:
‘In order to promote the interests of consumers and to ensure a high level of consumer protection, the Union shall contribute to protecting the health, safety and economic interests of consumers, as well as to promoting their right to information, education and to organise themselves in order to safeguard their interests.’
Given this history, it has long been established that consumer law purports to guarantee and protect the autonomy of the individual who enters the market without profit-making intentions. Beyond the goals set out in Article 169 TFEU, four main directives govern areas of consumer law: the 1985 Product Liability Directive, the 1993 Unfair Terms in Consumer Contracts Directive, the 2011 Consumer Rights Directive, and the subject of this analysis, the 2005 Unfair Commercial Practices Directive (UCPD).
Since then, there have been numerous amendments to the EU's consumer protection legislative framework. The most significant is the adoption of the Modernisation Directive.
Adopted on 27 November 2019, it amended four existing directives: the UCPD, the Price Indication Directive 98/6/EC, the Unfair Contract Terms Directive 93/13/EEC, and the Consumer Rights Directive 2011/83/EU. Even more recently, there have been specific proposals for amendments to the UCPD concerning environmental advertising, known as greenwashing, in line with furthering the European Union's Green Deal.
What is a UCP?
An unfair commercial practice (UCP) is a practice that is misleading (whether through deliberate action or omission of information), aggressive, or outright prohibited by law (blacklisted in Annex I UCPD). A UCP interferes with consumers' freedom to choose for themselves and affects their decision-making power.
Prohibited UCPs are set out in Article 5 of the UCPD, which provides that a practice is prohibited if it is contrary to professional diligence and materially distorts the average consumer's economic behaviour. The directive distinguishes two main categories of UCPs, with examples for both:
First, misleading practices through action (giving false information) or omission (leaving out important information).
Second, aggressive practices aimed at bullying consumers into buying a product.
Some examples of UCPs are bait advertising, non-transparent ranking of search results, misleading 'free' claims, false claims about cures, false green claims or greenwashing, certain game ads, false offers, and persistent unwanted calls. There is no exhaustive list of what a UCP may be, especially in the digital context, where technology is rapidly changing the way we behave towards one another.
This is especially evident in the case of AI, a buzzword that is often impossible to avoid nowadays. Stanford University computer science professor Dr Fei-Fei Li put it this way: 'AI is everywhere. It's not that big, scary thing in the future. AI is here with us.'
AI is used in UCPs to improve and streamline emotional, behavioural, and other types of targeting. Data can be collected using AI (scraping website reviews or analysing consumer trends), and this information can be leveraged against consumers to influence their decision-making powers, ultimately furthering the commercial goals of traders, potentially to the detriment of the interests of consumers.
To influence consumers' decision-making, AI-driven systems often employ measures that deceive and manipulate users, steering their decisions and thus breaching the UCPD. However, these violations often go unnoticed, since most people are unaware of the UCPD or of dark patterns.
UCPs, then, are practices that manipulate consumer choices, and the advancement of AI widens the gap between consumers and their freedom to decide what they want, often without them even knowing it.
What is the UCPD?
As part of consumer law and as already stated, this analysis will focus on the UCPD and its recent amendments.
The origin of the UCPD
The UCPD was not the EU's original legislation governing protection against UCPs. The first law relating to UCPs, the 1984 Misleading Advertising Directive, was amended by the UCPD upon the latter's adoption in 2005. The directive's scope grew from amendment to amendment, but at its core it has always been based on the prohibition of practices contrary to the requirements of professional diligence, as defined in Article 2(h) UCPD:
Professional diligence ‘means the standard of special skill and care which a trader may reasonably be expected to exercise towards consumers, commensurate with honest market practice and/or the general principle of good faith in the trader’s field of activity’.
The UCPD was introduced to establish a fully harmonised legal framework for combating unfair business-to-consumer practices across member states. This entailed harmonising different pre-existing laws into a cohesive and understandable legal framework. The harmonisation not only consolidated existing legislation and introduced some key amendments but also provided legal certainty by offering one central text to consult when dealing with unfair commercial practices in the EU.
One of the major drawbacks from a member state's perspective is that the UCPD has a full harmonisation effect (meaning that member states cannot introduce more or less protection through national legislation). As a result, member states could not introduce the measures they deemed necessary to protect consumers against UCPs. Member states retain some discretion to enact national UCP legislation in certain sectors, such as contract law, the health and safety aspects of products, and the regulation of professions, but for the most part they cannot introduce their own legislation concerning UCPs.
The goals and objectives of the UCPD are twofold. First, it aims to contribute to the internal market by removing obstacles to cross-border trade in the EU. Secondly, it seeks to ensure high consumer protection by shielding consumers from practices that distort their economic decisions and by prohibiting unfair and non-transparent practices.
The UCPD contains a blacklist of prohibited practices in Annex I. A trader cannot employ any of the practices listed there; doing so is a breach of the UCPD. There is no need to assess the practice, the potential economic distortion, or the average consumer: if a trader engages in a practice listed in Annex I, that behaviour is prohibited outright.
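To make the directive's layered logic concrete, here is a minimal, purely illustrative Python sketch of the order of assessment; the helper names and the tiny blacklist excerpt are hypothetical, and this is a simplification, not legal advice. A practice is checked against the Annex I blacklist first (unfair per se), then against the misleading/aggressive categories, and only then against the general professional diligence clause of Article 5.

```python
# Illustrative sketch of the UCPD's order of assessment (a simplification,
# not legal advice). Names and the blacklist excerpt are hypothetical.

ANNEX_I_BLACKLIST = {
    "bait_advertising",
    "false_free_claim",
    "fake_cure_claim",
    "persistent_unwanted_calls",
}

def is_unfair(practice: str, misleading: bool, aggressive: bool,
              contrary_to_professional_diligence: bool,
              distorts_consumer_decision: bool) -> bool:
    # Step 1: Annex I practices are unfair in all circumstances,
    # with no case-by-case assessment.
    if practice in ANNEX_I_BLACKLIST:
        return True
    # Step 2: misleading (Arts 6-7) or aggressive (Arts 8-9) practices,
    # if they (are likely to) distort the average consumer's decision.
    if misleading or aggressive:
        return distorts_consumer_decision
    # Step 3: the general clause of Article 5 catches the rest.
    return contrary_to_professional_diligence and distorts_consumer_decision

# A blacklisted practice needs no further assessment:
print(is_unfair("bait_advertising", False, False, False, False))  # True
```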
Past amendments to the UCPD
Before the UCPD was implemented, EU member states had their own national legislation and practices on consumer law and, specifically, on UCPs. This created difficulties for traders selling goods across borders, who had to consult many different legal texts.
By consolidating these rules, changing some and adding new ones, the EU was able to codify UCP rules in a single document, promoting fairness and legal certainty across the EU. The UCPD has been amended several times since it was first published in the Official Journal of the European Union.
These amendments introduced several changes to enhance consumer protection: rules on the marketing of dual-quality products, individual redress, fines for non-compliance, a reduced full-harmonisation effect of the directive, and information duties in the online context. In essence, they aim to improve the state of consumer law and protect consumers in the EU. Each is summarised in more detail below.
Marketing of dual-quality products: dual quality refers to the practice of some companies selling products in different member states under the same (or similar) branding and packaging but with different compositions. The directive's operative text does not set out the objective criteria that could justify marketing dual-quality products.
The directive's preamble (non-binding but still influential) gives examples of when the marketing of dual-quality products is permitted: where required by national legislation, due to the availability or seasonality of raw materials, as part of voluntary strategies to improve access to healthy and nutritious food, or when offering goods of the same brand in packages of different weights or volumes in different geographical markets.
Individual redress: a key aspect of these amendments is the creation of individual remedies for consumers, which did not previously exist. This harmonises remedies across the EU, as many member states had no individual consumer remedies. Article 11a of the directive provides for minimum harmonisation of remedies, meaning that member states can introduce legislation offering further consumer protection.
Fines: the amendments also changed the penalty regime compared to the previous UCPD, setting out criteria for imposing penalties in a lengthy list in Article 13(2) of the directive. In addition, the amendments set the maximum fine for widespread infringements at 4% of the trader's annual turnover in the member state(s) concerned; for a trader with, say, €50 million of such turnover, that cap would be €2 million.
Reduced full harmonisation: the amendments also introduced limits to the somewhat controversial full harmonisation of the UCPD, in two cases. The first concerns commercial excursions, known as 'Kaffeefahrten' in Germany: low-cost excursions marketed to the elderly during which UCPs such as deception and aggressive sales tactics occur.
The second concerns commercial practices involving unsolicited visits by a trader to a consumer’s home. If member states wish to introduce legislation to this effect, they must inform the European Commission, which has to inform traders (as part of the information obligation) on a separate, dedicated website.
Recent amendments to the UCPD
The UCPD is not an entrenched directive that cannot be amended, as its amendment in 2019 and the more recent 2024 amendments show. The new proposal introduces two additions to the existing list of practices considered misleading in environmental matters if they cause, or are likely to cause, the average consumer to make a transactional decision they would not otherwise make.
The first amendment concerns environmental claims related to future environmental performances without clear, objective, and publicly available commitments.
The second amendment relates to irrelevant advertising benefits for consumers that do not derive from any feature of the product or service.
Additionally, new amendments to the 'blacklist' in Annex I have been proposed. A practice added to the blacklist is considered unfair in all circumstances. These amendments relate to environmental matters associated with the European Green Deal and aim to curb 'greenwashing'. They include:
Displaying a sustainability label that is not based on a certification scheme or not established by public authorities.
Making a generic environmental claim for which the trader is not able to demonstrate recognised excellent environmental performance relevant to the claim.
Making an environmental claim about the entire product or the trader’s business when it concerns only a certain aspect of the product or a specific activity.
Claiming, based on the offsetting of greenhouse gas emissions, that a product has a neutral, reduced or positive impact on the environment in terms of greenhouse gas emissions.
The focus of the new amendments is evidently to reduce the environmental misconceptions that consumers may have about a product, since businesses greenwash products to nudge consumers into choosing them. The aim is to ensure that consumers in the EU can make an informed choice about whether a product contributes to environmental goals, without being manipulated or misled by the use of an environmental colour (green) or an ambiguous label ('sustainable').
Final thoughts
The level of consumer protection in the EU is ever-evolving, always aiming for higher standards. This is reflected in the EU's efforts to amend and strengthen the legislation that protects consumers.
Past amendments aimed to clarify unsettled areas of consumer law, such as what information must be provided and where member states can legislate on UCPs, reducing the effect of full harmonisation. They also introduced important new elements, such as redress mechanisms for individual consumers and criteria for fines.
The more recent amendments target traders' misleading greenwashing practices. Hopefully, they will help consumers make informed choices and make the EU more sustainable by cracking down on misleading sustainability claims and other unfair commercial practices.
Given that amendments only took place in 2024, it is unlikely that there will be any new amendments to the UCPD any time soon. However, in the years to come, there are bound to be new proposals, potentially targeting the intersection of AI and unfair commercial practices.
As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunities and risks. Beyond traditional regulatory frameworks, strategies include substantial investments in research, global standard setting, and international collaboration. A key development has been the establishment of AI safety institutes (AISIs), which aim to evaluate and verify AI models before public deployment, among other functions.
In November 2023, the UK and the USA launched their AI Safety Institutes, setting an example for others. In the following months, Japan, Canada, and the European Union followed suit through its AI Office. This wave of developments was further reinforced at the AI Seoul Summit in May 2024, where the Republic of Korea and Singapore introduced their institutes. Meanwhile, Australia, France, and Kenya announced similar initiatives.
Except for the EU AI Office, the AI safety institutes established so far lack regulatory authority. Their primary functions include conducting research, developing standards, and fostering international cooperation. While AISIs have the potential to make significant advances, they are not without challenges. Critics highlight overlapping mandates with existing standard-setting bodies, such as the International Organization for Standardization, which may create inefficiencies, as well as the risk of undue industry influence shaping their agendas. Others argue that the narrow focus on safety sidelines broader risks, such as ethical misuse, economic disruption, and societal inequality. Some also warn that this approach could stifle innovation and competitiveness, raising concerns about balancing safety with progress.
Introduction
The AI revolution, while built on decades-old technology, has taken much of the world by surprise, including policymakers. EU legislators, for instance, had to scramble to update their advanced legal drafts to account for the rise of generative AI tools like ChatGPT. The risks are considerable, ranging from AI-driven disinformation and the ethical dilemmas of autonomous systems to potential malfunctions, loss of oversight, and cybersecurity vulnerabilities. The World Economic Forum's Global Cybersecurity Outlook 2024 reports that half of industry leaders in sectors such as finance and agriculture view generative AI as a major cybersecurity threat within the next two years. These concerns, coupled with fears of economic upheaval and threats to national security, make clear that swift and coordinated action is essential.
The European Union’s AI Act, for instance, classifies AI systems by risk and mandates transparency along with rigorous testing protocols (among other requirements). Other regions are drafting similar legislation, while some governments opt for voluntary commitments from industry leaders. These measures alone cannot address the full scope of challenges posed by AI. In response, some countries have created specialised AI Safety Institutes to fill critical gaps. These institutes are meant to provide oversight and also advance empirical research, develop safety standards, and foster international collaboration – key components for responding to the rapid evolution of AI technologies.
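The AI Act's four risk tiers (unacceptable, high, limited, minimal) are real; the Python sketch below is a purely illustrative simplification, and the example systems and one-line summaries are assumptions for illustration, not the regulation's actual tests or wording.

```python
# Simplified, illustrative sketch of the AI Act's four risk tiers.
# The tiers are real; the example systems and one-line summaries are
# assumptions for illustration, not the regulation's actual wording.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social scoring)"
    HIGH = "permitted under strict obligations (e.g. hiring tools)"
    LIMITED = "transparency duties (e.g. chatbots must disclose they are AI)"
    MINIMAL = "largely unregulated (e.g. spam filters)"

# Hypothetical mapping used only to show the tiered structure.
EXAMPLE_SYSTEMS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} - {tier.value}")
```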
In May 2024, a significant advancement in global AI safety collaboration was achieved by establishing the International Network of AI Safety Institutes. This coalition brings together AI safety institutions from different regions, including Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK, and the USA.
In November 2024, the International Network of AI Safety Institutes convened for its inaugural meeting, marking an important step in global collaboration on AI safety. Discussions centred on advancing research, developing best practices for model testing, promoting global inclusion and knowledge-sharing, and laying the foundation for future initiatives ahead of the AI Action Summit in Paris in February 2025.
The first wave of AI safety institutes, established primarily by developed nations, has centred on safeguarding national security and reinforcing democratic values. As other countries establish their institutes, whether they will replicate these models or pursue alternative frameworks more attuned to local needs and contexts remains unclear. As in other digital policy areas, future initiatives from China and India could potentially serve as influential models.
Furthermore, while there is widespread consensus on the importance of key concepts such as ‘AI ethics,’ ‘human oversight,’ and ‘responsible AI,’ their interpretation often varies significantly. These terms are frequently moulded to align with individual nations’ political and cultural priorities, resulting in diverse practical applications. This divergence will inevitably influence the collaboration between AI safety institutes as the global landscape grows increasingly varied.
Finally, a Trump presidency in the USA, with its expected emphasis on deregulation, a more detached US stance toward multilateral institutions, and heightened focus on national security and competitiveness, could further undermine the cooperation needed for these institutes to achieve meaningful impact on AI safety.
Established: In November 2023, with a mission to lead international efforts on AI safety governance and develop global standards. Backed by £100 million in funding through 2030, enabling comprehensive research and policy development.
Key initiatives: – In November 2024, the UK and US AI safety institutes jointly evaluated Anthropic's updated Claude 3.5 Sonnet model, testing its biological, cyber, and software capabilities. The evaluation found that the model provided 'answers that should have been prevented' when tested with jailbreaks, i.e. prompts designed to elicit responses the model is intended to refuse.
– Researched and created structured templates, such as the ‘inability’ template, to demonstrate AI systems’ safety within specific deployment contexts.
The UK AI Safety Institute, launched in November 2023 with £100 million in funding through 2030, was created to spearhead global efforts in AI safety. Its mission centres on establishing robust international standards and advancing cutting-edge research. Key initiatives include risk assessments of advanced AI models (so-called 'frontier models') and fostering global collaboration to align safety practices. The institute's flagship event, the Bletchley Park AI Safety Summit, highlighted the UK's approach to tackling frontier AI risks, focusing on technical and empirical solutions. Frontier AI is described as follows in the Bletchley Declaration:
‘Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models. Substantial risks may arise from potential intentional misuse or unintended control issues relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are, therefore, hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology and where frontier AI systems may amplify risks such as disinformation.‘
However, this narrow emphasis has drawn criticism, questioning whether it sufficiently addresses AI’s broader, everyday challenges.
At the 2024 Stanford AI+Policy Symposium, Oliver Ilott, Director of the AI Safety Institute, articulated the UK’s vision for AI governance. He underscored that AI risks are highly context- and scenario-specific, arguing that no single institution could address all the challenges AI presents. ‘Creating such an entity would be like duplicating government itself,’ Ilott explained, advocating instead for a cross-governmental engagement where each sector addresses AI risks relevant to its domain. This approach highlights the UK’s deliberate choice to concentrate on ‘frontier harms’ – the most advanced and potentially existential AI threats – rather than adopting the broader, risk-based regulatory model championed by the EU.
The Bletchley Park AI Safety Summit reinforced this philosophy, with participating countries agreeing on the need for a ‘technical, empirical, and measurable’ understanding of AI risks. Ilott noted that the ‘core problem for governments is one of ignorance,’ cautioning that policymakers risk being perpetually surprised by rapid AI advancements. While high-profile summits elevate the political discourse, Ilott stressed that consistent technical work between these events is critical. To this end, the UK institute has prioritised building advanced testing capabilities and coordinating efforts across the government to ensure preparedness.
The UK’s approach diverges significantly from the EU’s more comprehensive, risk-based framework. The EU has implemented sweeping regulations addressing various AI applications, from facial recognition to general-purpose systems. In contrast, the UK’s more laissez-faire policy focuses narrowly on frontier technologies, promoting flexibility and innovation. The Safety Institute, with its targeted focus on addressing frontier risks, illustrates the UK’s approach. However, this narrow focus may leave gaps in governance, overlooking pressing issues like algorithmic bias, data privacy, and the societal impacts of AI already integrated into daily life.
Ultimately, the long-term success of the UK AI Safety Institute depends on the government’s ability to coordinate effectively across departments and to ensure that its focus does not come at the expense of broader societal safeguards.
Established: In 2023 under the National Institute of Standards and Technology, with a US$10 million budget and a focus on empirical research, model testing, and safety guidelines.
Key initiatives: – In November 2024, the US Artificial Intelligence Safety Institute at the US Department of Commerce’s National Institute of Standards and Technology announced the formation of the Testing Risks of AI for National Security Taskforce, which brings together partners from across the US government to identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology.
– Conducted joint pre-deployment evaluations (Anthropic’s Claude 3.5 model).
The US AI Safety Institute, established in 2023 under the National Institute of Standards and Technology with a US$10 million budget, is a critical component of the US’s approach to AI governance. Focused on empirical research, rigorous model testing, and developing comprehensive safety guidelines, the institute has sought to bolster national and global AI safety. Elizabeth Kelly, the institute’s director, explained at the 2024 AI+Policy Symposium, ‘AI safety is far from straightforward and filled with many open questions.’ She underscored the institute’s dual objective of addressing future harms while simultaneously mitigating present risks, emphasising that ‘safety drives innovation’ and that a robust safety framework can fuel healthy competition.
Kelly highlighted the collaborative nature of the US approach, which involves working closely with agencies like the Department of Energy to leverage specialised expertise, particularly in high-stakes areas such as nuclear safety. The institute’s priorities include fundamental research, advanced testing and evaluation, and developing standards for content authentication, like watermarking, to combat AI-generated misinformation. According to Kelly, the institute’s success hinges on building ‘an AI safety ecosystem larger than any single government,’ underscoring a vision for broad, cross-sectoral engagement.
The institute's strategy emphasises a decentralised and adaptive model of governance. By leveraging the expertise of various federal agencies, the US approach aims to remain nimble and responsive to emerging risks. Like the UK approach, this model contrasts with the European Union's AI Office, where AI safety is just one of five specialised units supported by two advisory roles. The EU AI Office distinguishes itself from other AI safety institutes by adopting a centralised and hierarchical model with a strong focus on compliance and harmonisation across EU member states. As part of a centralised structure, the AI Safety Unit may face delays in responding to rapidly emerging challenges due to its reliance on more rigid decision-making processes.
The US model’s flexibility supports innovation but may leave gaps in areas such as ethical governance and long-term accountability. The Institute operates under a presidential order, making its directives susceptible to shifts in political priorities. The election of Donald Trump for a new mandate introduces significant uncertainty into the institute’s future. Given Trump’s history of favouring deregulation, his administration could alter or dismantle the institute’s initiatives, reduce funding, or pivot away from stringent AI oversight. Such a shift could undermine progress in AI safety and lead to inconsistencies in governance, particularly if policies become more relaxed or innovation-focused at the expense of rigorous safety measures.
A repeal of Biden’s AI Executive Order appears likely, signalling shifts in AI policy priorities. Yet, Trump’s earlier AI executive orders emphasised civil liberties, privacy, and trustworthy AI alongside innovation, and it is possible that his future policy initiatives could maintain this balance.
Ultimately, the future of the US AI Safety Institute will depend on whether it can secure more permanent legislative backing to withstand political fluctuations. Elon Musk, a tech billionaire entrepreneur and a prominent supporter of Trump, advocates extensively to shift the focus of the AI policy debate to existential AI risks, and these efforts might also impact the work of the US AI Safety Institute.
Key initiatives: – Conducts surveys, evaluates AI safety methods, and develops standards while acting as a central hub for collaboration between industry, academia, and AI safety-related organisations in Japan.
– Addresses a wide range of AI-related issues, including social impact, AI systems, data governance, and content, with flexibility to adapt to global trends.
– Focuses on creating safety assessment standards, exploring anti-disinformation tools, cybersecurity measures, and developing a testbed environment for AI evaluation.
– Engages in global collaboration with the AI safety institutes in the UK and USA to align efforts and share expertise.
The Japan AI Safety Institute plays a central role in the nation’s AI governance strategy, aligning its efforts with Japan’s broader commitments under the G7 Hiroshima AI Process. Operating under the Council for Science, Technology, and Innovation, the institute is dedicated to fostering a safe, secure, and trustworthy AI ecosystem.
Akiko Murakami, Executive Director of the institute, emphasised at the 2024 AI+Policy Symposium the need to ‘balance innovation and regulation,’ underscoring that AI safety requires both interagency efforts and robust international collaboration. Highlighting recent progress, she referenced the agreement on interoperable standards reached during the US-Japan Summit in April 2024, underscoring Japan’s commitment to global alignment in AI governance.
Murakami explained that the institute’s approach stands out in terms of integrating private sector expertise. Many members, including leadership figures, participate part-time while continuing their roles in the industry. This model promotes a continuous exchange of insights between policy and practice, ensuring that the institute remains attuned to real-world technological advancements. However, she acknowledged that the institute faces challenges in setting traditional key performance indicators due to the rapid pace of AI development, suggesting the need for ‘alternative metrics’ to assess success beyond conventional safety benchmarks.
The Japan AI Safety Institute’s model prioritises flexibility, real-world industry engagement, and collaboration. The institute benefits from up-to-date expertise and insights by incorporating part-time private sector professionals, making it uniquely adaptable. This hybrid structure differs significantly from the centralised model of the US AI Safety Institute, which relies on federal budgets and agency-specific mandates to drive empirical research and safety guidelines. Japan’s model is also distinct from the European Union’s AI Office, which, besides the AI Safety Unit, has broad enforcement responsibilities of the AI Act across all member states and from the UK’s primary focus on frontier risks.
Zooming out from the AI safety institutes and examining each jurisdiction’s broader AI governance systems reveals differences in approaches. The EU’s governance is defined by its top-down regulatory framework, exemplified by ex-ante regulatory frameworks such as the AI Act, which aims to enforce uniform risk-based oversight across member states. In contrast, Japan employs a participatory governance model integrating government, academia, and industry through voluntary guidelines such as the Social Principles of Human-Centric AI. This strategy fosters flexibility, with stakeholders contributing directly to policy developments through ongoing dialogues; however, the reliance on voluntary standards risks weaker enforcement and accountability. The USA takes an agency-driven, sector-specific approach, emphasising national security and economic competitiveness while leaving the broader AI impacts less regulated. The UK is closer to the US approach, with an enhanced focus on frontier risks addressed mostly through empirical research and technical safeguards.
Japan’s emphasis on international collaboration and developing interoperable standards is a strategic choice. By actively participating in global efforts and agreements, Japan positions itself as a key player in shaping the international AI safety landscape.
While the Hiroshima AI Process and partnerships like the one with the USA are central to Japan’s strategy, they also make its success contingent on stable international relations. If geopolitical tensions were to rise or if global cooperation were to wane, Japan’s AI governance efforts could face setbacks.
Unlike the US and the UK, which established new institutions, Singapore repurposed an existing government body, the Digital Trust Centre. At the time of writing, not enough information is publicly available to assess the Centre's work.
– It conducts applied and investigator-led research through CIFAR and government-directed projects to address AI safety risks.
– Plays a key role in the International Network of AI Safety Institutes, contributing to global efforts on AI safety and co-developing guidance for responsible AI practices.
Established: January 2024, as part of the European Commission's AI innovation package to support startups and SMEs in developing trustworthy AI that complies with EU values and rules. Funding: €46.5 million in setup funding.
Key Initiatives: – Contributing to the coherent application of the AI Act across the member states, including the set-up of advisory bodies at EU level, facilitating support and information exchange.
– Developing tools, methodologies, and benchmarks for evaluating the capabilities and reach of general-purpose AI models, and classifying models with systemic risks.
– Drawing up state-of-the-art codes of practice to detail the rules, in cooperation with leading AI developers, the scientific community, and other experts.
– Investigating possible infringements of rules, including evaluations to assess model capabilities, and requesting providers to take corrective action.
– Preparing guidance and guidelines, implementing and delegated acts, and other tools to support effective implementation of the AI Act and monitor compliance with the regulation.
The EU AI Office stands out as both an AI safety institute, through its AI Safety Unit, and a regulatory body with broad enforcement powers under the AI Act across EU member states. The AI Safety Unit fulfils the typical functions of a safety institute, conducting evaluations and representing the office internationally in meetings with its counterparts. It is unclear whether the unit will have the necessary resources, in both personnel and funding, to perform model testing on the scale of its UK and US counterparts.
Established: November 2024, to ensure the safe use of artificial intelligence technology.
Key initiatives: – Preemptively addresses risks like misuse, technical limitations, and loss of control to enhance AI reliability.
– Provides guidance to reduce AI side effects, such as deepfakes, and supports companies in navigating global regulations and certifications.
– Participates in international efforts to establish AI safety norms and align with global frameworks.
– Partners with 24 domestic organisations to strengthen AI safety research and create a secure R&D environment.
– Collaborates with companies like Naver, LG, and SK Telecom to promote ethical AI practices and manage potential risks.
As of this writing, insufficient publicly available information exists to evaluate the work of the Institute, which was only recently established.
Conclusion
The AI safety institutes are at the beginning of their journey, having only just established a first basis for collaboration. While early testing efforts offer a glimpse of their potential, it remains to be seen whether these actions alone can effectively curb the deployment of AI models that pose significant risks. Diverging priorities, including national security concerns, data-sharing policies, and the further weakening of multilateral systems, could undermine their collective effectiveness.
Notably, nations such as India, Brazil, and China have yet to establish AI safety institutes. The governance models these countries propose may differ from existing approaches, setting the stage for a competition between differing visions of global AI safety.
Building trust between the institutes and the AI industry will be critical for meaningful collaboration. This trust could be cultivated through transparent engagement and mutual accountability. Equally, civil society must play an active role in this ecosystem, acting as a watchdog to ensure accountability and safeguard the broader public interest.
Finally, the evolving geopolitical landscape will profoundly impact the trajectory of these initiatives. The success of the AI safety institutes will depend on their ability to adapt to technical and policy challenges and how effectively they navigate and influence the complex global dynamics shaping AI governance.
There is no need to explain the importance of the global network we enjoy today. Much has been written about the possibilities and marvels it delivers daily. After an initial couple of decades of admiration, the same thing happened to the internet as to many other wonders of civilisation: we took it for granted. We rarely discuss its structure, its backbone, or the incentives behind it, unless it interferes with our daily life and freedom.
This is true for any network user, whether a state actor, a cloud computing company, or an everyday end user. When we look at the backbone of the internet, almost everything is open source. What does this mean? The basic protocols and the ways we connect over the internet are documented and open for everyone to observe, copy, and build upon. They are agreed upon as a set of transparent public instructions, free of proprietary obligations.
Industry and innovation
To distinguish innovation from industry (a distinction that matters for what follows), we can introduce a simple relationship: industry is the ecosystem that emerges from the need to make an invention more widely available. Industry supplies the vision of utility, and the value of the innovation is proven with every iteration of that utility. Following this relationship, we can say that the more transparent an innovation is, the greater the value we tend to assign it.
When we look at the internet industry, we observe that companies and strategies that embraced openness have benefited massively from the invention. The benefits of the open-source approach can accrue both to the invention itself and to the industry that follows. To name a few of the greatest examples: Alphabet (Google, YouTube, Maps), Linux (which runs almost the entire internet backbone infrastructure), and Android (which revolutionised the app market, levelled the entry field, and reduced the digital divide). All of them are built on the open-source innovation of the internet.
A closer look at resiliency
Let's look at one example that illustrates this precisely: bitcoin. It started as an open-source project and is still one of the most actively maintained public databases on the internet. Bitcoin brings back the idea of private money after a century of state monopoly on money. Although it is often labelled a danger to the international financial system, no coordinated action by states has managed to take the system down or ban it permanently. Why? The simple answer lies in the trade-off.
Stopping bitcoin (or any digital information online) is not impossible per se, but it would require massive resources: full control of all communication channels to the internet, including banning satellites from orbiting above your geolocation, and persistent efforts to ensure no one breaches the ban. In 2024, such a ban would create a tear in the fabric of society; the societal consequences would far outweigh any possible benefits.
Instead, as long as it remains neutral, bitcoin presents not a threat but an opportunity for all. Competitors built on bitcoin's principles are not the same for one particular reason: they are not as open source and transparent. No central bank digital currency (CBDC), privately issued stablecoin, or any of the thousand cryptocurrency imitators has proven to hold any of bitcoin's value. Following the earlier distinction: the innovation is open source, but the industry around it is much less so.
Open source is the right way, not the easy one
Does the above mean that an industry not based on open source cannot make great discoveries and innovate further? No, not at all. Intellectual property is a large part of the portfolio of the biggest tech companies; Apple, for example, spent around USD 22.6 billion on research and development in 2022. The proprietary industry moves the needle in the economy and creates wealth, while open source creates opportunities. We need both for a healthy future. Not every opportunity results in immediate wealth; some instead provide the inspiration to move forward rather than oppose change.
In simple terms, open source empowers a bottom-up approach to building the future. It expands the base of possible contributors and, perhaps most importantly, reduces the possibility of ending up in 'knowledge slavery'. It can create a healthy, neutral starting point, one that most will perceive as a chance rather than a threat.
If all of you had one particular innovation in mind while reading all this, you are right!
Artificial intelligence (AI) is a new frontier. AI is a bit more than just a technology; it is an agent. Still, it is an invention, so chances are high it will follow the path described above, enabling an entirely new industry of utility providers.
No need to be afraid
We hear all the (reasonable) concerns about AI development. Uncertainty about whether AI should be developed beyond human reach, and concerns about AI in executive positions, are all rooted in the fear of systems without oversight.
In the past, the carriers of the open-source approach (openness and transparency) were mostly in academia: universities and other research institutions contributed the most. The AI field is a bit different: there, companies are leading the way.
The power to preserve common knowledge still rests with states, yet under the current business and political circumstances, the private sector has become the biggest proponent of the open-source approach. With the emergence of large language models and generative AI, the biggest open-source initiatives have come from Meta (Llama) and Alphabet (T5). These companies have an incentive to establish open source as the standard for the future. We might be at an equilibrium moment in which both sides agree on the architecture of the future. Nations, international organisations, and the private sector should seize this opportunity. The new race towards more efficient technology should evoke optimism, but there can be none without a bottom-up, open-source approach to innovation.
The open-source approach is still the way forward for innovation: it can build neutral ground, or at least ground that will not be perceived as a threat.
Two things often come to mind when we hear the word 'crypto': freedom and crime. Cryptocurrencies have certainly revolutionised the financial world, offering speed, transparency, and accessibility not seen before. Yet their promise of financial liberation comes with unintended consequences. The decentralised, pseudonymous nature of crypto makes it a double-edged sword: for some it represents freedom, for others a tool for crime.
In 2023, illicit transactions involving cryptocurrencies reached USD 24.2 billion, according to TRM Labs, with scams and fraud accounting for nearly a third of the total.
These numbers reveal a sobering truth: while crypto has opened doors to innovation, it has also become an enabler for global crime networks, from drug and human trafficking to large-scale ransomware operations. Criminals exploit this space to mask their identities, making crypto the go-to medium for those operating in the shadows.
What are the common types of crypto fraud?
Crypto fraud takes many forms, each designed to exploit vulnerabilities and prey on the unsuspecting. The best-known are:
Ponzi and pyramid schemes – Fraudsters lure victims with promises of guaranteed high returns. These schemes use investments from new participants to pay earlier ones, creating an unsustainable cycle: when the influx of new investors dwindles, the scheme collapses, leaving most participants with nothing (a toy simulation of this dynamic follows the list). In 2023, these scams contributed significantly to the USD 24.2 billion received by illicit crypto addresses, showcasing their pervasive nature.
Phishing attacks – Fake websites, emails, and messages designed to mimic legitimate services trick victims into revealing sensitive information like wallet keys. A single successful phishing attack can drain entire crypto wallets, with victims often having no recourse. The shift to stablecoins, noted for their volume in scams, has intensified the use of such tactics.
Initial Coin Offering (ICO) scams – The ICO boom has introduced countless opportunities, and just as many risks. Fraudulent projects draw in investors with flashy whitepapers and grand promises, only to vanish with millions. ICO scams contributed a notable share of crypto crime in previous years, as highlighted by TRM Labs.
Rug pulls – Developers create hyped tokens, inflate their value, and abruptly withdraw liquidity, leaving investors holding worthless assets. In 2023, such schemes became increasingly sophisticated, targeting decentralised exchanges to exploit inexperienced investors.
Cryptojacking – Hackers infect computers or networks with malware to mine cryptocurrency without the owner's knowledge. This hidden crime drains energy and resources, often leaving victims to discover their losses long after the attack.
Fake exchanges and wallets – Fraudulent platforms mimic legitimate services, enticing users to deposit funds, only for those funds to disappear. These scams exploit the trust gap among new investors, further driving crypto-crime statistics.
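As referenced above, the Ponzi dynamic can be shown with a minimal Python sketch; all figures are made up for illustration, including the assumed 20% promised return and 40% cash-out rate per round.

```python
# Toy Ponzi-scheme dynamics (all numbers made up): payouts to earlier
# investors come from new deposits, so the scheme survives only while
# inflows grow, and collapses as soon as recruitment slows.

promised_return = 0.20                               # 20% per round, "guaranteed"
deposits_per_round = [100, 150, 180, 150, 100, 60]   # new money drying up
cashout_rate = 0.40                                  # share of investors cashing out

liabilities = 0.0   # what the scheme owes its investors
cash = 0.0          # what it actually holds

for round_no, new_money in enumerate(deposits_per_round, start=1):
    cash += new_money
    liabilities = (liabilities + new_money) * (1 + promised_return)
    owed_now = liabilities * cashout_rate
    if owed_now > cash:
        print(f"Round {round_no}: owes {owed_now:.0f}, holds {cash:.0f} -> collapse")
        break
    cash -= owed_now
    liabilities -= owed_now
    print(f"Round {round_no}: paid out {owed_now:.0f}, cash left {cash:.0f}")
```

Running this, the scheme pays out smoothly for four rounds and collapses in round five, as soon as new deposits dip: the promised returns were never backed by anything but incoming money.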
The connection between crypto fraud and money laundering
Crypto fraud and money laundering are two sides of the same coin. Stolen funds need to be legitimised, and criminals have devised a range of techniques to obscure their origins. One of the most common methods involves crypto mixers and tumblers. These services blend cryptocurrencies from various sources, making it nearly impossible to trace individual transactions.
The process often works as follows:
Initial theft: Stolen funds are moved from wallets linked to scams or hacks.
Mixing: These funds are transferred to a mixing service, where they are broken into smaller amounts and shuffled with others.
Redistribution: The mixed funds are sent to new, seemingly unrelated wallets.
Conversion: The laundered crypto is then converted to stablecoins or fiat currency, often through decentralised exchanges or peer-to-peer transactions, masking its origins.
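From an investigator's standpoint, a common (and here heavily simplified) heuristic is taint propagation: flag funds leaving wallets linked to theft and follow the flag through subsequent transactions. The toy Python sketch below, with made-up addresses and amounts, shows why a mixer defeats this naive tracing: once clean and stolen coins are pooled, every output inherits the flag, and attribution to a specific source is lost.

```python
# Toy taint-propagation over a hypothetical transaction list, illustrating
# why mixers defeat naive tracing. All addresses and amounts are made up.

transactions = [
    ("hack_wallet", "mixer", 10.0),   # stolen funds enter the mixer
    ("honest_user", "mixer", 10.0),   # legitimate funds join the same pool
    ("mixer", "wallet_a", 6.5),       # pooled outputs go to fresh wallets
    ("mixer", "wallet_b", 6.5),
    ("mixer", "wallet_c", 7.0),
]

tainted = {"hack_wallet"}
for sender, receiver, _amount in transactions:
    if sender in tainted:
        tainted.add(receiver)  # naive rule: whatever a tainted wallet pays is tainted

# Because the mixer pooled clean and stolen coins, every output wallet is
# flagged, including ones that may hold only the honest user's funds.
print(sorted(tainted - {"hack_wallet", "mixer"}))  # ['wallet_a', 'wallet_b', 'wallet_c']
```

Real analytics firms use far more sophisticated probabilistic methods, but the core difficulty the sketch shows, that pooling destroys one-to-one traceability, is exactly what mixers are designed to exploit.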
This method has made crypto a preferred tool for laundering money linked to drug cartels and even human trafficking networks. The convenience and pseudonymity of crypto ensure its growing role in these illicit industries.
How big is crypto crime, really?
The numbers are staggering. Last year (2023), illicit addresses received USD 24.2 billion in funds. While scamming and hacking revenues declined (29.2% and 54.3%, respectively), ransomware attacks and darknet market activity saw significant growth. Sanctions-related transactions alone accounted for USD 14.9 billion, driven by entities operating in restricted jurisdictions.
Bitcoin and Monero remain the most-used cryptocurrencies for darknet sales and ransomware.
Cryptocurrencies have become the currency of choice for underground networks, with darknet markets facilitating the sale of illicit goods. Human trafficking networks use crypto for cross-border payments, exploiting its decentralised nature to evade detection.
According to the Chainalysis report, the prevalence of crypto in these crimes highlights the urgent need for better monitoring and regulation.
Stablecoins like USDT are gaining traction: criminals prefer stablecoins for their reliability, as they mimic traditional fiat currencies, enabling transactions in environments where access to traditional banking is limited.
How to fight crypto crime?
Solving the issue of crypto crime requires a multi-faceted approach:
Regulatory innovation: Governments must create adaptable frameworks to address the evolving crypto landscape while encouraging legitimate use.
Public awareness: Educating users about common scams and best practices can reduce vulnerabilities at the grassroots level.
Global cooperation: international collaboration is essential, as cryptocurrencies know no borders. Only by sharing data and strategies can nations effectively combat cross-border crypto crime.
Cryptocurrency is a young and rapidly evolving space. While some countries have enacted comprehensive legislation, others lag behind. The pace of innovation makes it nearly impossible to create foolproof regulations: every new development introduces potential loopholes, requiring legislators to remain agile and informed.
The power of crypto: innovation or exploitation?
Cryptocurrencies hold immense power, offering unparalleled financial empowerment and innovation. But as the saying goes, with great power comes great responsibility: freedom must be balanced with accountability if crypto is to serve the greater good. Strikingly, stolen crypto assets currently circulate undetected within global financial systems, intertwined with legitimate transactions. The question is whether the industry can address vulnerabilities and implement robust safeguards without compromising its core principles of decentralisation and transparency. The true potential of crypto lies in its ability to reshape economies, empower the unbanked, and foster global financial inclusion. Yet this power can also be exploited if left unchecked, becoming a tool for crime in the wrong hands. The future of crypto depends on ensuring it remains a beacon of innovation and empowerment, harnessed responsibly to create a safer, more equitable financial ecosystem for all.
Chinese big tech companies have emerged as some of the most influential players in the global technology landscape, driving innovation and shaping industries across the board. These companies are deeply entrenched in everyday life in China, offering a wide range of services and products that span e-commerce, social media, gaming, cloud computing, AI, and telecommunications. Their influence is not confined to China: they also play a significant role in global markets, often competing directly with US tech giants.
The rivalry between China and the US has become one of the defining geopolitical struggles of the 21st century. The relationship oscillates between cooperation, fierce competition, and confrontation, influenced by regulatory policies, national security concerns, and shifting political priorities. The geopolitical pendulum of China–US tech relations, which swings largely independently of any single US election outcome, reflects the broader tensions between the two powers, with profound implications for global tech industries, innovation, and market dynamics.
The Golden Shield Project
In 2000, under President Jiang Zemin’s leadership, China launched the Golden Shield Project to control media and information flow within the country. The initiative aimed to safeguard national security and restrict the influence of Western propaganda. As part of the Golden Shield, many American tech giants such as Google, Facebook, and Netflix were blocked by the Great Firewall for not complying with China’s data regulations, while companies like Microsoft and LinkedIn were allowed to operate.
At the same time, China’s internet user base grew dramatically, reaching 800 million netizens by 2018, with 98% using mobile devices. This rapid expansion provided a fertile ground for Chinese tech firms, which thrived without significant competition from foreign players. Among the earliest beneficiaries of this system were the BATX companies, which capitalised on China’s evolving internet landscape and rapidly established a dominant presence in the market.
The powerhouses of Chinese tech
The major Chinese tech companies, often referred to as the Big Tech of China, include Alibaba Group, Tencent, Baidu, ByteDance, Huawei, Xiaomi, JD.com, Meituan, Pinduoduo, and Didi Chuxing.
Alibaba Group is a global e-commerce and technology conglomerate, operating platforms such as Taobao and Tmall for e-commerce, AliExpress for international retail, and Alipay for digital payments. The company also has significant investments in cloud computing with Alibaba Cloud and logistics.
Tencent, a massive tech conglomerate, is known for its social media and entertainment services. It owns WeChat, a widely used messaging app that offers payment services, social media features, and more. Tencent also has investments in gaming, owning major stakes in Riot Games, Epic Games, and Activision Blizzard, as well as interests in financial services and cloud computing.
Baidu, often called China’s Google, is a leading search engine provider. In addition to its search services, Baidu has a strong presence in AI development, autonomous driving, and cloud computing, particularly focusing on natural language processing and autonomous vehicles.
ByteDance, the company behind TikTok, has made a name for itself in short-form video content and AI-driven platforms. It also operates Douyin, the Chinese version of TikTok, along with Toutiao, a popular news aggregation platform. ByteDance has expanded into gaming, e-commerce, and other AI technologies.
Huawei is a global leader in telecommunications equipment and consumer electronics, particularly smartphones and 5G infrastructure. The company is deeply involved in cloud computing and AI, despite facing significant geopolitical challenges.
Xiaomi is a leading smartphone manufacturer that also produces smart home devices, wearables, and a wide range of consumer electronics. The company is growing rapidly in the Internet of Things (IoT) space and AI-driven products.
JD.com, one of China’s largest e-commerce platforms, operates similarly to Alibaba, focusing on direct sales, logistics, and tech solutions. JD.com has also made significant strides in robotics, AI, and logistics technology.
Meituan is best known for its food delivery and local services platform, offering everything from restaurant reservations to hotel bookings. The company also operates in sectors like bike-sharing, travel, and ride-hailing.
Pinduoduo has rapidly grown in e-commerce by focusing on group buying and social commerce, particularly targeting lower-tier cities and rural markets in China. The platform offers discounted products to users who buy in groups.
Didi Chuxing is China’s dominant ride-hailing service, offering various transportation services such as ride-hailing, car rentals, and autonomous driving technology.
But what are the BATX companies we mentioned earlier?
BATX
The term BATX refers to a group of the four dominant Chinese tech companies: Baidu, Alibaba, Tencent, and Xiaomi. These companies are central to China’s technology landscape and are often compared to the US “FAANG” group (Facebook, Apple, Amazon, Netflix, Google) because of their major influence across a range of industries, including e-commerce, search engines, social media, gaming, AI, and telecommunications. Together, BATX companies are key players in shaping China’s tech ecosystem and have a significant impact on global markets.
China’s strategy for tech growth
China’s technology development strategy has proven effective in propelling the country to the forefront of several high-tech industries. This ambitious approach, which involves broad investments across both large state-owned enterprises and smaller private startups, has fostered significant innovation and created a competitive business environment. As a result, it has the potential to serve as a model for other countries looking to stimulate tech growth.
A key driver of China’s success is its diverse investment strategy, supported by government-led initiatives like “Made in China 2025” and the “Thousand Talents Plan”. These programmes offer financial backing and attract top talent from around the globe. This inclusive approach has helped China rapidly emerge as a global leader in fields like AI, robotics, and semiconductors. However, critics argue that the strategy may be overly aggressive, potentially stifling competition and innovation.
Some have raised concerns that China’s government support unfairly favours domestic companies, providing subsidies and other advantages that foreign competitors do not receive. Yet, this type of protectionist approach is not unique to China; other countries have implemented similar strategies to foster the growth of their own industries.
Another critique is that China’s broad investment model may encourage risky ventures and the subsidising of failures, potentially leading to a market that is oversaturated with unprofitable businesses. While this criticism holds merit in some cases, the overall success of China’s strategy in cultivating a dynamic and competitive tech landscape remains evident.
Looking ahead, China’s technology development strategy is likely to continue evolving. As the country strengthens its position on the global stage, it may become more selective in its investments, focusing on firms with the potential for global leadership.
In any case, China’s strategy has shown it can drive innovation and foster growth. Other nations hoping to advance their technological sectors should take note of this model and consider implementing similar policies to enhance their own competitive and innovative business environments.
But under what regulatory framework does Chinese tech policy ultimately operate? How does it affect the whole project? Are there some negative effects of the tight state grip?
China’s regulatory pyramid: Balancing control and consequences
China’s regulatory approach to its booming tech sector is defined by a precarious balance of authority, enforcement, and market response. Angela Zhang, author of High Wire: How China Regulates Big Tech and Governs Its Economy, proposes a “dynamic pyramid model” to explain the system’s intricate dynamics. This model highlights three key features: hierarchy, volatility, and fragility.
The top-down structure of China’s regulatory system is a hallmark of its hierarchy. Regulatory agencies act based on directives from centralised leadership, creating a paradox. In the absence of clear signals, agencies exhibit inaction, allowing industries to flourish unchecked. Conversely, when leadership calls for stricter oversight, regulators often overreach. A prime example of this is the drastic shift in 2020 when China moved from years of leniency toward its tech giants to implementing sweeping crackdowns on firms like Alibaba and Tencent.
This erratic enforcement underscores the volatility of the system. Chinese tech regulation is characterised by cycles of lax oversight followed by abrupt crackdowns, driven by shifts in political priorities. The 2020–2022 crackdown, which involved antitrust investigations and record-breaking fines, sent shockwaves through markets, wiping out billions in market value. While the government eased its stance in 2022, the uncertainty created by such pendulum swings has left investors wary, with many viewing the Chinese market as unpredictable and risky.
Despite its intentions to address pressing issues like antitrust violations and data security, China’s heavy-handed regulatory approach often results in fragility. Rapid interventions can undermine confidence, stifle innovation, and damage the very sectors the government seeks to strengthen. Years of lax oversight exacerbate challenges, leaving regulators with steep issues to address and markets vulnerable to overcorrection.
This model offers a lens into the broader governance dynamics in China. The system’s centralised control and reactive policies aim to maintain stability but often generate unintended economic consequences. As Chinese tech firms look to expand overseas amid domestic challenges, the long-term impact of these regulatory cycles remains uncertain, potentially influencing China’s ability to compete on the global stage.
The battle for tech supremacy between the USA and China
The incoming US President Donald Trump is expected to adopt a more aggressive, unilateral approach to counter China’s technological growth, drawing on his history of quick, broad measures such as tariffs. Under his leadership, the USA is likely to expand export controls and impose tougher sanctions on Chinese tech firms. Trump’s advisors predict a significant push to add more companies to the US Entity List, which restricts US firms from selling to blacklisted companies. His administration might focus on using tariffs (potentially up to 60% on Chinese imports) and export controls to pressure China, even if it strains relations with international allies.
The escalating tensions have been further complicated by China’s retaliatory actions. In response to US export controls, China has targeted American companies like Micron Technology and imposed its own restrictions on essential materials for chipmaking and electric vehicle production. These moves highlight the interconnectedness of both economies, with the US still reliant on China for critical resources such as rare earth elements, which are vital for both technology and defence.
This intensifying technological conflict reflects broader concerns over data security, military dominance, and leadership in AI and semiconductors. As both nations aim to protect their strategic interests, the tech war is set to continue evolving, with major consequences for global supply chains, innovation, and the international balance of power in technology.
After three years of negotiations, initiated by a Russian proposal in 2017, the UN member states at the Ad Hoc Committee (AHC) adopted the draft of the first globally binding legal instrument on cybercrime. This convention will be presented to the UN General Assembly for formal adoption later this year. The Chair emphasised that the convention is a criminal justice legal instrument: its aim is to combat cybercrime by prohibiting certain behaviours by natural persons, not to regulate the behaviour of member states.
The convention’s adoption has proceeded despite significant opposition from human rights groups, civil society, and technology companies, who had raised concerns about the potential risks of increased surveillance. In July, DiploFoundation invited experts from various stakeholder groups to discuss their expectations before the final round of UN negotiations and to review the draft treaty. Experts noted an unprecedented alignment between industry and civil society on concerns with the draft, emphasising the urgent need for a treaty focused on core cybercrime offences, strengthened by robust safeguards and precise intent requirements.
Once formally adopted, how will the UN Cybercrime Convention (hereinafter, the UN Convention) impact the security of users in the cyber environment? What does this legal instrument actually say about cross-border cooperation in combating cybercrime? What human rights protections and safeguards does it provide?
We invited experts representing the participating delegations in these negotiations to provide us with a better understanding of the agreed draft convention and its practical implications for all of us.
Below, we’re sharing the main takeaways, and if you wish to watch the entire discussion, please follow this link.
Overview of the treaty: What would change once the UN Convention comes into effect?
Irene Grohsmann, Political Affairs Officer, Arms Control, Disarmament and Cybersecurity at the Federal Department of Foreign Affairs FDFA (Switzerland), began by outlining what will change once the convention comes into force. The Convention is new in the sense that it provides, for the first time at the UN level, a legal basis for states to request mutual legal assistance from each other, alongside other cooperation measures to fight cybercrime. It will also provide, for the first time, a global legal basis for further harmonisation of criminal legal provisions regarding cybercrime among the future states parties to the convention.
‘The Convention will be new in a sense that it provides a legal basis for the first time at UN level for states to request mutual legal assistance from each other and other cooperation measures to fight cybercrime. It will also provide, for the first time, a global legal basis for further harmonisation of criminal legal provisions, regarding cybercrime, between those future states parties to the convention.’
Irene Grohsmann, Political Affairs Officer, Arms Control, Disarmament and Cybersecurity at the Federal Department of Foreign Affairs FDFA (Switzerland)
At the same time, as Irene mentioned, much will remain the same: notably, the currently applicable standards (such as data protection and human rights safeguards) for fighting cybercrime in the context of law enforcement or cooperation measures. The new UN Convention does not change those existing standards but rather upholds them.
UN Convention vs. the existing instruments: How would they co-exist?
Irene recalled that the UN Convention largely relies on, and was particularly inspired by, the Budapest Convention, and therefore will not exclude the application of other existing international or regional instruments, nor will it take precedence over them. It will rather exist side by side with other relevant legal frameworks, as explicitly stated in the Convention’s preamble and Article 60. Furthermore, regional conventions are typically more concrete and thus remain highly relevant in combating cybercrime. Irene noted that when states are parties to both a regional convention and the UN Convention, they can opt for the regional one if it offers a more specific basis for cooperation. When states have ratified multiple conventions, they use key principles, such as specificity and favourability, to decide which to apply.
Andrew Owusu-Agyemang, Deputy Manager at the Cyber Security Authority (Ghana), agreed with Irene, highlighting the Malabo Convention’s specific provisions on data protection, cybersecurity, and national cybersecurity policy. Andrew noted that the Budapest Convention complements Malabo by covering gaps in procedural powers and international cooperation, benefiting parties like Ghana, a member of both. The novelty in the UN Cybercrime Convention, however, is that the text introduces the criminalisation of the non-consensual dissemination of intimate images. Together, these instruments are complementary, each filling gaps where the others fall short.
‘All these treaties can coexist because they are complementary in nature and do not polarize each other. However, the novelty in the UN Cybercrime Convention is that it introduces the criminalization of the non-consensual dissemination of intimate images.’
Andrew Owusu-Agyemang, Deputy Manager at the Cyber Security Authority (Ghana)
Cross-border cooperation and access to electronic evidence: What does the UN Convention say about this, including Article 27?
Catalina Vera Toro, Alternate Representative, Permanent Mission of Chile to the OAS, Ministry of Foreign Affairs (Chile), addressed how the UN Cybercrime Convention, particularly Article 27, handles cross-border cooperation for accessing electronic evidence, allowing states to compel individuals to produce data stored domestically or abroad if they have access to it. However, this raises concerns over accessing data across borders without the host country’s consent—a contentious issue in cybercrime. The Convention emphasises state sovereignty and encourages cooperation through mutual legal assistance rather than unilateral actions, advising states to request data access through established frameworks. While Article 27 allows states to order individuals within their borders to provide electronic data, it does not provide for unilateral cross-border data access without the consent of the other state involved.
‘The fact that we have a convention is also a positive note on what diplomacy and multilateralism can achieve. This convention helps bridge gaps between existing agreements and brings in new countries that are not part of those instruments, making it an instrumental tool for addressing cybercrime. That’s another positive aspect to consider.’
Catalina Vera Toro, Alternate Representative, Permanent Mission of Chile to the OAS, Ministry of Foreign Affairs (Chile)
Catalina noted that this approach balances effective law enforcement with respect for sovereignty. Unlike the Budapest Convention, which raised sovereignty concerns, the UN Convention emphasises cooperation to address these fears. While some states worry it may bypass formal processes, the Convention’s focus on mutual assistance aims to respect jurisdictions while enabling cybercrime cooperation.
Briony Daley Whitworth, Assistant Secretary, Cyber Affairs & Critical Technology Branch, Department of Foreign Affairs and Trade (Australia), commented on the placement of this article in the convention: it concerns law enforcement powers for investigating cybercrime within a state’s territory, distinct from cross-border data sharing. The article must be read alongside the jurisdiction chapter, which outlines the treaty’s provisions for investigating cybercrimes, including those linked to the territory of each state party. The sovereignty provisions set limits on enforcement powers, dictating where they apply. The article also includes procedural safeguards for data submission requests, such as judicial review. Importantly, ‘specified electronic data’ needs clarification: it covers data on personal devices as well as data controlled but not possessed by individuals, such as cloud-stored information. Legal entities, not just individuals, may be involved; for example, law enforcement would need to request data from a provider like Google rather than from the user. Briony highlighted that this framework in the UN Convention drew heavily from the Budapest Convention and stressed the importance of examining the existing interpretations used by its 76 parties to guide how Article 27 might be applied, reinforcing that cross-border data access requires the knowledge of the state involved.
Does the convention clarify how individuals and entities can challenge data requests from law enforcement? Briony emphasised the need for clear conditions and safeguards, noting that the convention requires compliance with international human rights laws and domestic review mechanisms. Individuals can challenge orders through judicial review, and law enforcement must justify warrants with scope, duration, and target limitations. However, Briony cautioned that the treaty’s high-level language relies on countries implementing these safeguards domestically. Catalina added that the convention’s protections work best as an integrated framework, noting that countries with strong checks and balances, like Chile, already offer resources for individual rights protection.
‘Human rights protections were really at the forefront of a lot of the negotiations over the last couple of years. We managed to set a uniquely high bar in the general provisions on human rights protections for a UN convention, particularly a criminal convention. This convention not only affirms that human rights apply but also states that nothing in it can be interpreted to permit the suppression of human rights. Additionally, it includes an article on the protection of personal data during international transfers, which is rare for a UN crime convention. Objectively, this convention offers more numerous and robust safeguards than other UN conventions. One of our priorities was ensuring that this convention does not legitimise bad actions. While we cannot stop bad actors, we can ensure that this convention helps combat their actions without legitimising them, which we have largely achieved through the human rights protections.’
Briony Daley Whitworth, Assistant Secretary, Cyber Affairs & Critical Technology Branch, Department of Foreign Affairs and Trade (Australia)
How does the UN Convention define and protect ‘electronic data’?
Catalina noted that defining ‘electronic data’ was challenging throughout negotiations, with interpretations varying based on a country’s governance, which impacts legal frameworks and human rights protections. The convention defines electronic data broadly, covering all types of data stored in digital services, including personal documents, photos, and notes – regardless of whether that data has been communicated to anyone. Importantly, accessing electronic data generally has a lower threshold than accessing content or traffic data, which have more specific definitions within the convention.
This broader definition enables states to request access to electronic data, even if it contains private information intended to remain confidential. However, Catalina emphasised that domestic legal frameworks and other provisions within the convention are designed to protect human rights and safeguard individual privacy.
Briony also clarified that ‘electronic data’ specifically refers to stored data, not actively communicated data. States differentiate electronic data from subscriber, traffic, and content data related to network communications. This definition is based on the Budapest Convention’s terminology for computer data, allowing for a wider interpretation of the types of data involved. She also emphasised that the UN Convention establishes a high standard for human rights protections, affirming their applicability and stating that it must not be interpreted to suppress rights. It includes provisions for protecting personal data during international transfers, reinforcing the commitment to human rights in electronic data contexts. However, Briony added that the Convention has some flaws, noting that Australia wishes certain elements had been more thoroughly addressed. Nonetheless, the UN Convention is a foundational framework for building trust among states to combat cybercrime effectively while balancing human rights commitments.
Technology transfer: What are the main takeaways from the convention to facilitate capacity building?
Andrew highlighted that technical assistance and capacity development are fundamental to effectively implementing this convention. The UN Cybercrime Treaty lays a robust foundation for technical assistance and capacity development, offering practical mechanisms such as MOUs, personnel exchanges, and collaborative events to strengthen countries’ capacities in their fight against cybercrime. The convention’s technical assistance chapter encourages parties to enter multilateral or bilateral agreements to implement relevant provisions. These MOUs, in particular, can facilitate the development of the capacities of law enforcement agencies, judges, and prosecutors, ensuring that cybercrime is prosecuted effectively.
Implementation and additional protocols: Which mechanisms does the draft convention include for keeping up to date with the pace of technological developments?
Irene clarified that, although the UN Convention has been adopted at the AHC, some topics need further discussion among member states. Due to time constraints, these discussions were postponed, including which crimes should be included in the criminalisation chapter. Some states, like Switzerland, prefer a focused list of cyber-dependent crimes, while others advocate for a broader inclusion of both cyber-dependent and cyber-enabled crimes. Irene noted that resource considerations influence Switzerland’s perspective, emphasising the need to focus on ratification and implementation rather than dividing resources with a supplementary protocol. While a supplementary protocol will need discussion in the future, there is still time to determine its content or negotiation topics.
Irene emphasised that the convention uses technology-neutral language to keep the text up to date with technological developments, allowing it to focus on behaviour rather than specific technologies, similar to the successful Budapest Convention. Adopted in 2001, the Budapest Convention has remained relevant for over two decades, and the hope is that the UN Convention will prove equally durable. Additionally, the convention allows for future amendments: once it is in force and the Conference of States Parties is established, member states can address any coverage inadequacies and consider amendments five years after implementation.
Ambassador Asoke Mukerji, India’s former ambassador to the United Nations in New York, who chaired India’s national multistakeholder group on recommending cyber norms for India in 2018, noted that, despite initial scepticism about the feasibility of such a framework, the current momentum demonstrates that, with trust and commitment, it is possible to establish international agreements addressing cybercrime. He also praised the effectiveness of multistakeholder participation in addressing the evolving challenges in cyberspace. However, Ambassador Mukerji cautioned about challenges regarding technology transfer, referring to recent statements at the UN General Assembly that could restrict such efforts. He expressed hope that developing countries would receive the necessary flexibility to negotiate favourable terms.
‘The negotiations took place against a very difficult global environment, and our participation from India proved to be useful. It demonstrated that countries, committed to a functional multilateral system, can benefit from it, impacting our objectives of international cooperation. Additionally, the process highlighted the effectiveness of multistakeholder participation in cyberspace. The convention and its negotiation process validate our choice to use this model to address the new challenges facing multilateralism.’
Ambassador Asoke Mukerji, India’s former ambassador to the United Nations in New York
Concluding remarks
The panellists unanimously highlighted the indispensable role of human rights standards, emphasising that any practical international cooperation against cybercrime must prioritise these principles. Briony also pointed out that the increasingly complex cyber threat landscape demands a collective response to enhance cybersecurity resilience and capabilities. The treaty’s significant achievements, including protections against child exploitation and the non-consensual dissemination of intimate images, reflect a commitment to safeguarding both victims’ and offenders’ rights. Catalina highlighted that certain types of crimes, such as gender-based violence, were also included in the text, and this is another significant achievement.
All experts also agreed that the active involvement of civil society, NGOs, and the private sector is vital for ensuring that diverse expertise contributes meaningfully to the ratification and implementation processes. Public-private partnerships were specifically mentioned as essential for fostering collaboration in cybercrime prevention. Ultimately, the success of the Convention lies not only in its provisions but also in the collaborative spirit that must underpin its implementation. By working together, stakeholders can create a safer and more secure cyberspace for all.
We at Diplo invite you all to re-watch the online expert discussion and engage in a broader conversation about the impacts of this negotiation process. In the meantime, stay tuned: we will continue to provide updates and analysis on the UN Cybercrime Convention and related processes.
As the 5 November US presidential election approaches, all eyes are on the tight race between former President Donald Trump and current Vice President Kamala Harris. Polls show the candidates are neck and neck, making voter mobilisation critical for both sides. In this high-stakes environment, the backing of major business groups could be a game changer, with influential figures like Elon Musk stepping into the spotlight.
Musk, the owner of X and one of the world’s wealthiest individuals, has recently rallied support for Trump’s campaign, highlighting the significant role that Big Tech, particularly the so-called ‘Magnificent Seven’, could play in determining the election’s outcome. As both candidates vie for the favour of corporate America, their strategies will likely reflect the growing influence of these business leaders in shaping public policy and voter sentiment.
The Magnificent Seven
The term ‘Magnificent Seven‘ originated with the 1960 Western film The Magnificent Seven, directed by John Sturges. The film follows a group of seven gunslingers, led by Yul Brynner and Steve McQueen, who are hired to protect a Mexican village from bandits. Its legacy spans sequels, a remake in 2016, and cultural resonance, especially for themes of bravery and teamwork.
In finance, The Magnificent Seven is a group of large American tech companies – Apple, Microsoft, Amazon, Nvidia, Meta Platforms, Tesla, and Alphabet. These companies are celebrated for their significant impact on consumer habits, influence over technological advancements, and dominance in the stock market. Holding immense weight in indices like the S&P 500 and NASDAQ, they are seen as critical drivers of market growth and key indicators of economic trends in areas like AI, e-commerce, and social media.
So, it’s quite understandable why the support of these tech giants might be the key to Trump or Harris winning their contested electoral duel.
Trump and tech executives
Top executives from major tech companies are increasingly reaching out to Donald Trump as the presidential election approaches. With polls showing a tight race between Trump and Vice President Kamala Harris, figures like Apple CEO Tim Cook and Amazon CEO Andy Jassy have initiated conversations with the former president. Even Mark Zuckerberg has expressed admiration for Trump following an assassination attempt on the former president. This shift comes after a tumultuous relationship marked by Facebook’s ban on Trump following the 6 January Capitol riot, a ban that was lifted in 2023.
Trump noted on the Barstool Sports podcast that he appreciates Zuckerberg’s current approach, emphasising that Zuckerberg is staying out of the election. Meta has taken steps to reduce political content on its platforms, including changes to Instagram that limit political recommendations unless users opt in. Zuckerberg has also stated that he will not endorse any candidate in the 2024 election and plans to avoid significant political engagement. Despite their past conflicts, including Trump’s characterisation of Facebook as an ‘enemy of the people’, Zuckerberg praised Trump’s resilient response to a recent assassination attempt, calling it ‘badass.’
This comment reflects a complicated dynamic between the two, as Trump claimed Zuckerberg expressed difficulty in voting for a Democrat in the upcoming election. However, Meta denied this, reiterating that Zuckerberg has not indicated any intention to vote for or endorse either candidate in the race.
Elon Musk’s relationship with Donald Trump has seen various phases, reflecting both support and criticism over the past years. Just two years ago, Musk voiced his disapproval of the former president, tweeting in 2022 that it was ‘time for Trump to hang up his hat & sail into the sunset.’ This tweet was in response to Trump publicly calling Musk a liar, accusing him of not being truthful about who he had voted for in past elections. Trump even doubted Musk’s then-pending purchase of Twitter, quipping to a rally crowd, ‘Elon is not going to buy Twitter.’ Of course, Musk did end up buying the platform, now called X, and has since made headlines for his shifting political alliances and increasingly public alignment with issues central to Trump’s campaign.
Musk’s stance on US politics was historically more progressive, with nearly exclusive support for Democrats. However, his views on President Biden have notably soured, particularly over unionisation efforts and Biden’s perceived lack of recognition of Tesla’s achievements. Notably, Tesla was not invited to Biden’s 2021 White House electric vehicle summit, despite its status as a major EV manufacturer. Musk’s frustration only grew as his companies have faced federal investigations under the Biden administration, including scrutiny over Tesla’s autopilot feature and his controversial acquisition of Twitter. By 2023, Musk expressed his dissatisfaction with the Biden administration, stopping short of an endorsement for Trump but hinting at his disapproval.
Since taking over Twitter, Musk has shifted noticeably to the right, aligning with Trump on issues like government censorship and criticisms of ‘woke’ ideology. He has lifted Trump’s previous ban on Twitter and frequently shares opinions that echo Trump’s base, from distrust of the media to concerns about unchecked immigration. Political analyst Ryan Broderick suggests that Musk’s stance has transformed drastically since 2018, noting that his earlier, more liberal ‘neoliberal, happy-go-lucky’ messages have given way to tweets that often appeal to the far-right, drawing criticism and sparking debates across the platform.
Trump has responded to this shift with a warmer stance toward Musk. Recently, he praised Musk at a news conference, lauding his patriotism and mutual concern for the country. Musk also seems to have cemented his support for Trump, especially after publicly endorsing him and calling for his recovery following an alleged assassination attempt.
Additionally, Musk has committed $100 million to support Trump, and now, in a move stirring debate, he’s offering $1 million a day to selected voters who sign a petition supporting the First and Second Amendments. This campaign, led by Musk’s America PAC, is focused on registering Trump supporters and has been actively promoting the initiative in Pennsylvania, a key battleground state.
Musk’s financial support and giveaway campaign have raised concerns among election law experts. The PAC requires participants to be registered voters to be eligible for the million-dollar check, which some experts say may cross legal lines. UCLA Law professor Rick Hasen noted that while it is legal to pay people to sign petitions, tying eligibility to voter registration could potentially violate laws against incentivising voter registration.
Kamala Harris and Silicon Valley
On the other hand, Kamala Harris’s presidential campaign has also garnered substantial support from Silicon Valley’s elite, signalling a strong connection between her candidacy and tech industry leaders. Harris’s relationship with Silicon Valley extends back over a decade, partly attributable to her tenure as California’s attorney general and her subsequent role as a US senator. This long-standing connection has led many tech leaders to believe she might adopt a friendlier stance towards the industry than the Biden administration. Notable figures like former Facebook COO Sheryl Sandberg, LinkedIn co-founder Reid Hoffman, philanthropist Melinda French Gates, and IAC chair Barry Diller are among those supporting Harris, and billionaire Laurene Powell Jobs, Steve Jobs’ widow, has been a close ally since 2013, when she hosted a fundraiser for Harris.
Beyond billionaires, Harris has also drawn support from a broad base of venture capitalists and tech workers. Employees at Alphabet, Amazon, and Microsoft have collectively contributed over $3 million to her campaign. Alphabet workers alone have donated $2.16 million, nearly 40 times their contribution to Trump. Amazon and Microsoft employees have also shown a strong preference for Harris, with their donations amounting to ten and twelve times that of Trump, respectively. While Meta and Apple have not reached the $1 million mark in contributions, their support for Harris also far exceeds what they have given to Trump.
Over 800 VCs have signed a ‘VCs For Kamala’ pledge, and a separate Tech4Kamala letter has gathered more than 1,200 signatures. Among her backers is Steve Spinner, a major Democratic fundraiser who has worked to consolidate Silicon Valley’s support behind Harris, arguing that the majority of the tech industry remains Democratic despite high-profile endorsements of Trump by figures like Elon Musk. Spinner emphasises that ‘for every one person who’s backing Trump, there’s 20 who are backing Kamala,’ dismissing pro-Trump tech figures as outliers in an overwhelmingly liberal industry.
However, this alignment is not without exceptions. David Marcus, former president of PayPal and CEO of the payment company Lightspark, has publicly shifted his allegiance from Democrats to Republicans, criticising what he sees as the Democratic leadership’s ‘hubris’ and its embrace of an ‘increasingly leftist ideology.’ His move underscores a divide within the tech sector, with some executives pulling away from a party they feel is distancing itself from the industry’s priorities.
Tech firms under scrutiny
A key point of focus is the regulatory scrutiny that Big Tech faces under President Joe Biden’s administration, specifically targeting companies like Apple and Google. Biden’s Department of Justice (DOJ) has pursued antitrust actions, arguing that Apple manipulates the smartphone market to limit competition and that Google’s practices resemble those of the AT&T monopoly that was dismantled in the 1980s. This intense scrutiny has created uncertainty for the tech giants, as they face regulatory challenges both at home and abroad, including significant tax penalties imposed by the EU: $14.4 billion for Apple and $2.6 billion for Google.
In older statements, Trump expressed dissatisfaction with Google’s treatment of him, previously calling for maximum-level prosecution against the company for alleged bias. However, he recently noted a shift in Google’s stance, commenting that they appear ‘more inclined’ to support him.
He also mentioned discussing Apple’s European tax rulings with CEO Tim Cook, implying that such regulatory issues would be addressed more favourably under his leadership. Donald Trump has hinted that he might ease this pressure if reelected, suggesting that regulatory hurdles for Big Tech might lessen under his administration.
Trump’s tech policy
Donald Trump’s vision for tech policy includes reducing regulatory barriers to foster innovation and growth. Trump has expressed concern over what he sees as ‘illegal censorship’ by Big Tech, particularly social media platforms, which he claims display bias against conservative viewpoints. The Trump administration previously pursued antitrust actions against tech giants like Google and Meta, and he remains critical of companies he believes unfairly limit free speech online.
Trump favours a hands-off approach to AI and cryptocurrencies, arguing that the industry should be allowed time to develop without heavy government oversight. His policies suggest he would scale back initiatives such as the electric vehicle challenge and roll back consumer protections implemented under the Biden administration. Trump’s tech policy largely reflects a belief that the market will regulate itself and that minimising government intervention will drive US competitiveness on the world stage. He is also promising favourable policies such as corporate tax cuts.
In general, Trump’s rhetoric suggests a friendlier approach to tech giants, framing his administration as one that would ‘set free’ companies burdened by regulation. This would represent a significant departure from Biden’s approach, which could lead to more extensive oversight, adding another layer of importance to the election’s outcome for these powerful tech companies.
Harris’s point of view
By contrast, Kamala Harris was appointed by Biden as the AI czar, tasked with enhancing regulations surrounding AI technology as outlined in his executive order. During her tenure in this role, Harris collaborated with leaders from major tech firms like OpenAI, Microsoft, Alphabet, and Anthropic, emphasising a commitment to prioritising safety over corporate profits. She voiced concerns at the Global Summit on AI Safety last year, asserting that without robust government oversight, tech companies often prioritise profit at the expense of public well-being and democratic stability.
Harris’ approach has also involved data privacy and bias protection, advocating for legislation to mitigate potential harms associated with AI and emerging digital platforms.
A major achievement for the Biden-Harris administration is the CHIPS and Science Act of 2022, which invested in American semiconductor production and in tech research and development. This legislation supports clean energy projects and green tech, aiming to secure the country’s tech independence and strengthen national security by bringing more tech manufacturing stateside. Harris’ policies have targeted consumer protection against data misuse and online misinformation, echoing the administration’s interest in strengthening net neutrality and advocating for clearer data privacy laws.
In that sense, experts predict that Harris will largely continue Biden’s current regulatory framework on technology and AI, with only minor adjustments.
However, Harris’s policy positions, particularly on issues crucial to the tech industry such as tax reform, immigration, and antitrust enforcement, remain largely unarticulated, prompting Silicon Valley to tread carefully. Although Harris’s long history in California politics has earned her a base of goodwill, her campaign must address these policy uncertainties to secure substantial financial and strategic backing from an industry navigating the political flux. This balancing act is particularly challenging as she vies to retain traditional Democratic support without alienating a tech sector that remains cautious in light of growing regulatory pressures under the Biden administration.
The future of the tech sector
In conclusion, as technology continues to shape the economy, both candidates’ policies reflect the broader economic vision they hope to achieve. Harris envisions an inclusive, equitable tech landscape where consumer protection and innovation go hand in hand, while Trump’s policies prioritise a market-driven model that incentivises growth with minimal intervention. These differences underscore the fundamental contrast in their governance styles and philosophies regarding the role of government in technology.
Ultimately, the next president’s approach to technology will play a crucial role in determining how Americans interact with the digital world, work in an AI-driven economy, and navigate issues of privacy and digital citizenship. As the candidates refine their platforms, voters will face a choice between competing visions of how to guide the nation through a transformative era in technology and innovation.
On 21 and 24 October, DiploFoundation provided just-in-time reporting from the UN Security Council sessions on scientific development and on women, peace, and security. Supported by Switzerland, this initiative aims to enhance the work of the UN Security Council and the broader UN system.
At the core of this effort is DiploAI, an advanced platform shaped by years of training on UN materials, which played a crucial role in unlocking the knowledge generated by the Security Council’s deliberations. This knowledge, often trapped in video recordings and transcripts, is now more accessible, providing valuable insights for diplomacy and global peace.
Unlocking the power of AI for peace and security
AI-supported reporting from the UN Security Council (UNSC) demonstrates the potential of combining cutting-edge technology with deep expertise in peace and security. This effort is part of ongoing work by DiploAI, which has been providing detailed reports on Security Council sessions in 2023-2024 and has covered the UN General Assembly (UNGA) for eight consecutive years. DiploAI is actively contributing to expanding the UN’s knowledge ecosystem.
Seamless interplay between experts and AI
The success of this initiative lies in the seamless interplay between DiploAI and security experts well-versed in UNSC procedures. The collaboration began with tailoring the AI system to the unique needs of the Council, using input from experts and diplomats to build a relevant knowledge base. Experts supplied key documents and session materials, which enhanced the AI’s contextual understanding. Feedback loops on keywords, topics, and focus areas ensured the AI’s output remained both accurate and diplomatically relevant.
A pivotal moment in this collaboration was the analysis of the New Agenda for Peace, where Security Council experts helped DiploAI identify over 400 critical topics, laying the foundation for a comprehensive taxonomy on peace and security at the UN. This expertise, combined with DiploAI’s technical capabilities, has resulted in an AI system attuned to the subtleties of diplomatic language and priorities. Furthermore, the project introduced a Knowledge Graph—a visual tool for displaying sentiment and relational analysis between statements and topics—which adds new depth to the analysis of Council sessions.
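To make the Knowledge Graph idea concrete, here is a minimal sketch of how statement–topic edges annotated with sentiment might be represented and queried. All names and scores below are invented for illustration; this is not DiploAI’s actual schema.

```python
# Toy knowledge graph: edges link statements to topics, with a sentiment score.
# (Hypothetical data; a real system would extract these from transcripts.)
edges = [
    ("Statement_12", "climate change",  {"sentiment": +0.6}),
    ("Statement_12", "food insecurity", {"sentiment": -0.4}),
    ("Statement_31", "peacekeeping",    {"sentiment": +0.2}),
]

def topics_for(statement):
    """Return (topic, sentiment) pairs connected to a given statement."""
    return [(t, attrs["sentiment"]) for s, t, attrs in edges if s == statement]

print(topics_for("Statement_12"))
# -> [('climate change', 0.6), ('food insecurity', -0.4)]
```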
Building on this foundation, DiploAI developed a custom chatbot capable of moving beyond standard Q&A interactions. By integrating data from all 2024 sessions and associated documents, the chatbot allows users to interact conversationally with the content, providing in-depth answers and real-time insights. This evolution marks a significant leap forward in accessing and understanding diplomatic data—shifting from static reports to interactive exploration of session materials.
AI and diplomatic sensitivities
The development of DiploAI’s Q&A module, refined through approximately ten iterations with feedback from UNSC experts, underscores the value of human-AI(-nism) collaboration. This module addresses essential diplomatic questions, with iterative refinements ensuring that responses meet the Council’s standards for accuracy and relevance. The result is an AI system capable of addressing critical inquiries while respecting the sensitivity required in diplomatic settings.
What’s new?
DiploAI’s suite of tools—including real-time meeting transcription and analysis—has transformed reporting and transparency at the UNSC. By integrating customised AI systems like retrieval-augmented generation (RAG) and knowledge graphs, DiploAI adds context, depth, and relevance to the extracted information. Trained on a vast corpus of diplomatic knowledge generated at Diplo over the last two decades, the AI system generates context-specific responses, providing comprehensive answers to questions about transcribed sessions.
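As an illustration of how a RAG pipeline answers questions from session transcripts, here is a deliberately simplified Python sketch. It substitutes a toy bag-of-words similarity for neural embeddings, and the two sample ‘transcripts’ are invented; a production system of the kind described would use proper embeddings, a vector store, and an LLM to generate the final answer.

```python
from collections import Counter
import math
import re

def embed(text):
    """Toy bag-of-words 'embedding' (real systems use neural embeddings)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented sample transcript snippets:
transcripts = [
    "Speaker X stressed climate change and food insecurity in the Sahel.",
    "Speaker Y called for peacekeeping reform and protection of civilians.",
]

def retrieve(question, k=1):
    """Return the k transcript snippets most similar to the question."""
    q = embed(question)
    return sorted(transcripts, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

question = "What was said about civilians?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # In a real pipeline, this prompt is sent to an LLM for the answer.
```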
Here are some numbers from 10 UNSC meetings that took place between January 2023 and October 2024:
Number of speakers and speech length
Unique speakers: 185
Total time: 201,221.25 seconds, or 2 days, 7 hours, 53 minutes, 41 seconds
Total speeches: 583
Total length: 396,172 words, or 0.67 ‘War and Peace’ books
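A quick arithmetic check confirms that the total speaking time is expressed in seconds: the stated breakdown into days, hours, and minutes falls out exactly.

```python
# 201,221.25 interpreted as seconds matches the stated breakdown;
# interpreted as minutes it would be roughly 140 days.
total = 201_221.25
days, rem = divmod(total, 86_400)      # seconds per day
hours, rem = divmod(rem, 3_600)        # seconds per hour
minutes, seconds = divmod(rem, 60)
print(days, hours, minutes, seconds)   # -> 2.0 7.0 53.0 41.25
```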
Frequency of selected topics
development – 1,665 mentions (most in ‘UNSC meeting: Peace and common development’, with 919 mentions)
climate change – 451 mentions (most in ‘UNSC meeting: Climate change and food insecurity’, with 329 mentions)
human rights – 360 mentions (most in ‘UNSC meeting: Peace and common development’, with 93 mentions)
civilians – 136 mentions (most in ‘UNSC meeting: Peacekeeping’, with 72 mentions)
international humanitarian law – 27 mentions (most in ‘UNSC meeting: Multilateral cooperation’, with 6 mentions)
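As a rough illustration of where such frequency counts come from, here is a minimal sketch that counts case-insensitive mentions of a topic phrase in a transcript. The function and the one-line sample transcript are invented for illustration; DiploAI’s actual pipeline is considerably more sophisticated (handling its topic taxonomy, synonyms, and context).

```python
import re

def count_mentions(transcript, topic):
    """Count case-insensitive occurrences of a topic phrase in a transcript."""
    return len(re.findall(re.escape(topic), transcript, flags=re.IGNORECASE))

sample = "Climate change, they said, and climate change again."
print(count_mentions(sample, "climate change"))  # -> 2
```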
In conclusion…
DiploAI’s reporting from the Security Council, supported by Switzerland, shows how AI can enhance diplomacy while staying grounded in human expertise and practical needs. This blend of technical capability and domain-specific knowledge demonstrates how AI, when developed collaboratively, can contribute to more inclusive, informed, and impactful diplomacy.
This summer, the UN Member States reached a milestone by agreeing on a draft for the organisation’s first-ever international convention against cybercrime. While this marks a significant step, it has raised many questions among those closely following cybercrime issues. One of the key concerns is how this new UN convention will coexist with current frameworks, particularly the Budapest Convention of the Council of Europe, which has been ratified by 76 countries, and is considered by the Council of Europe as the first international framework to address cybercrime. What distinguishes the UN convention from the Budapest Convention, and how will the two interact moving forward?
In this analysis, we take a close look at different chapters of both conventions to highlight the similarities and differences between the two documents.
Status and parties
The ‘United Nations Convention Against Cybercrime; strengthening international cooperation for combating certain crimes committed by means of information and communications technology systems and for the sharing of evidence in electronic form of serious crimes’, or simply the UN Convention, has not been formally adopted yet: while the draft was adopted by the Ad Hoc Committee by consensus, the text will be further considered by the General Assembly. Once formally adopted, the convention will come into force after ratification by 40 UN Member States.
The Convention on Cybercrime, or Budapest Convention, is a legally binding treaty established by a regional organisation, the Council of Europe. The Convention has been ratified by 76 states, including both members and non-members of the Council of Europe.
The Convention includes two protocols, developed and adopted over time. The first protocol, on xenophobia and racism via computer systems, was opened for signature in 2003. The second protocol, on enhanced cooperation and disclosure of electronic evidence, was finalised in 2022 and has so far been ratified only by Serbia and Japan. To come into force, the second protocol requires five ratifications.
The two treaties also differ in the parties that negotiated them: all UN Member States in the case of the UN Convention, versus the 46 member states of the Council of Europe.
Purposes & Scope
While both the Budapest Convention and the UN Convention share the overarching goal (which is to address cybercrime), their scopes are not exactly the same.
The Budapest Convention primarily focuses on the criminalisation of specific offences (e.g. illegal access, data/system interferences, computer-related fraud, child sexual abuse material), procedural powers to address cybercrime, and fostering international cooperation, by offering an advanced framework for cross-border access to electronic evidence (e-evidence).
The UN Convention’s aim is broader, taking a more comprehensive approach: it emphasises the need to prevent and combat cybercrime by strengthening international cooperation and by providing technical assistance and capacity building, particularly for developing countries.
In terms of scope, the UN Convention offers a broader institutional and global cooperation framework, while the Budapest Convention covers a wider and more specific range of criminal offences and procedural powers related to cybercrime.
Specifically, the Budapest Convention and its Second Protocol apply to e-evidence related to any criminal offence, while the UN Convention limits its scope to offences with a serious crime threshold, defined in the treaty as those punishable by a maximum deprivation of liberty of at least four years or a more serious penalty.
At the same time, the UN Convention is broader by addressing a wider range of issues, including the protection of state sovereignty, preventive measures, and provisions for technical assistance and information exchange, thus extending beyond the criminalisation and procedural focus of the Budapest Convention.
Definitions
To a large extent, the definitions in the Budapest Convention have been replicated in the UN Convention. However, there are some significant differences, particularly reflecting the broader scope of the UN Convention.
The UN Convention specifically uses the terms ‘ICT’ and ‘ICT systems’ instead of ‘computer’ or ‘computer systems’, broadening its applicability to a wider range of devices and technologies. This language has been a key point of criticism. Notably, in articles like 23(2)(b) and (c), and 35(1)(c), the reference to ‘any criminal offence’ extends beyond cybercrime, potentially allowing the collection of data for any crime as defined by national laws, raising concerns about overreach and the scope of its application. It also uses ‘electronic data’ instead of ‘computer data’ (as the Budapest Convention does) to encompass all forms of electronic data.
Specifically, article 2 defines ‘electronic data’ as ‘any representation of facts, information or concepts in a form suitable for processing in an information and communications technology system, including a program suitable to cause an information and communications technology system to perform a function’, a definition criticised by civil society for taking too broad an approach to the terminology. The UN Convention also explicitly introduces ‘content data’ and ‘serious crime’, which are mentioned but not defined in the Budapest Convention; this too drew criticism from civil society, as the definition of ‘serious offences’ is left to domestic law and will therefore vary from country to country.
Criminalisation
The UN Convention is broader in scope than the Budapest Convention, as it criminalises additional forms of conduct. While some offences, such as illegal access, are defined similarly in both conventions, the UN treaty expands the range of criminalised activities beyond the cyber-dependent crimes covered by the Budapest Convention, for instance by criminalising money laundering. The UN Convention also gives a broader scope to similar offences – this can be seen, for example, in the provisions related to child sexual abuse material (below).
While article 9 of the Budapest Convention criminalises actions related to child sexual abuse material, article 15 of the UN Cybercrime Convention extends beyond content and addresses solicitation, grooming, or making arrangements for the purpose of committing sexual offences against children. It thus focuses more on preventing sexual offences from occurring by targeting preparatory actions (solicitation or grooming), not just the possession or distribution of illegal content. However, it is important to note that both provisions concern content-based crimes, and criticism has focused on the risk that victims may face prosecution simply for possessing certain types of content, particularly when real-time data collection is involved. This raises concerns that such provisions might be misused to target individuals other than the perpetrators of the crimes.
Both the Budapest Convention and the UN Convention address the integration of child protection into domestic legislation. However, neither refers to the Optional Protocol to the Convention on the Rights of the Child on the sale of children, child prostitution, and child pornography, which has been ratified by 176 countries and already contains this obligation. While both instruments touch on other treaties, they do not incorporate or cite them directly in their text; the Budapest Convention is somewhat more comprehensive in this respect, as it explicitly references human rights treaties.
Offences related to child pornography (Art 9), the Budapest Convention
Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, the following conduct: producing child sexual abuse material for the purpose of its distribution through a computer system; offering or making available child sexual abuse material through a computer system; distributing or transmitting child sexual abuse material through a computer system; procuring child sexual abuse material through a computer system for oneself or for another person; possessing child sexual abuse material in a computer system or on a computer-data storage medium.
Solicitation or grooming for the purpose of committing a sexual offence against a child (Art 15), the UN Convention
1. Each State Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law the act of intentionally communicating, soliciting, grooming, or making any arrangement through an information and communications technology system for the purpose of committing a sexual offence against a child, as defined in domestic law, including for the commission of any of the offences established in accordance with article 14 of this Convention.
2. A State Party may require an act in furtherance of the conduct described in paragraph 1 of this article.
3. A State Party may consider extending criminalization in accordance with paragraph 1 of this article in relation to a person believed to be a child.
4. States Parties may take steps to exclude the criminalization of conduct as described in paragraph 1 of this article when committed by children.
The Budapest Convention doesn’t contain specific provisions for critical infrastructure protection, while the UN Convention specifically addresses the need to protect critical information infrastructures in article 21. At the same time, the UN Convention omits offences related to copyright infringement, which are included in the Budapest Convention.
It should also be noted that, unlike the UN Convention, the Budapest Convention integrates its criminalisation provisions across different sections and remains focused on core cybercrime offences such as illegal access, data interference, and system interference. This structure reflects a narrower focus on crimes directly involving computer systems and data, without expanding into broader cyber-enabled crimes.
Procedural powers
The UN Convention (articles 23-30) has a broader scope than the Budapest Convention (articles 14-21), as it incorporates additional measures from the UN Convention against Corruption (UNCAC) and the UN Convention against Transnational Organized Crime (UNTOC), such as provisions on the confiscation of crime proceeds (article 31) and on witness protection (articles 33 and 34), which are not covered in the Budapest Convention.
However, the core procedural powers of the two conventions are largely similar. Both outline comparable conditions and safeguards, though the UN Convention has faced significant criticism from civil society for relying on domestic laws to establish how these safeguards are applied, which can vary widely across countries. This variation can lead to inadequate protections in states whose local laws do not meet high human rights standards. A similar concern has been raised in relation to the Budapest Convention and its protocols for failing to provide specific procedural protections for privacy and freedom of expression.
Conditions and safeguards (Art 15), the Budapest Convention
1. Each Party shall ensure that the establishment, implementation and application of the powers and procedures provided for in this Section are subject to conditions and safeguards provided for under its domestic law, which shall provide for the adequate protection of human rights and liberties, including rights arising pursuant to obligations it has undertaken under the 1950 Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms, the 1966 United Nations International Covenant on Civil and Political Rights, and other applicable international human rights instruments, and which shall incorporate the principle of proportionality.
2. Such conditions and safeguards shall, as appropriate in view of the nature of the procedure or power concerned, inter alia, include judicial or other independent supervision, grounds justifying application, and limitation of the scope and the duration of such power or procedure.
3. To the extent that it is consistent with the public interest, in particular the sound administration of justice, each Party shall consider the impact of the powers and procedures in this section upon the rights, responsibilities, and legitimate interests of third parties.
Conditions and safeguards (Art 24), the UN Convention
1. Each State Party shall ensure that the establishment, implementation and application of the powers and procedures provided for in this chapter are subject to conditions and safeguards provided for under its domestic law, which shall provide for the protection of human rights, in accordance with its obligations under international human rights law, and which shall incorporate the principle of proportionality.
2. In accordance with and pursuant to the domestic law of each State Party, such conditions and safeguards shall, as appropriate in view of the nature of the procedure or power concerned, include, inter alia, judicial or other independent review, the right to an effective remedy, grounds justifying application, and limitation of the scope and the duration of such power or procedure.
3. To the extent that it is consistent with the public interest, in particular the proper administration of justice, each State Party shall consider the impact of the powers and procedures in this chapter upon the rights, responsibilities and legitimate interests of third parties.
4. The conditions and safeguards established in accordance with this article shall apply at the domestic level to the powers and procedures set forth in this chapter, both for the purpose of domestic criminal investigations and proceedings and for the purpose of rendering international cooperation by the requested State Party.
5. References to judicial or other independent review in paragraph 2 of this article are references to such review at the domestic level.
International cooperation
Firstly, the Budapest Convention and its Second Protocol allow international cooperation for the collection of electronic evidence related to any criminal offence. This broad scope means that countries can assist each other in investigations involving crimes beyond cyber-related activities, as long as electronic evidence is involved. The Budapest Convention emphasises cross-border cooperation through established networks and mechanisms like 24/7 contact points.
The UN Convention limits the scope of its international cooperation provisions to ‘serious crimes’ as defined by the treaty: offences punishable by a maximum deprivation of liberty of at least four years or a more serious penalty. However, as previously noted, articles such as 23(2)(b) and (c) and 35(1)(c) broaden this scope by referencing ‘any criminal offence’.
Secondly, the Second Protocol to the Budapest Convention includes a broader list of advanced tools for cross-border cooperation to obtain electronic evidence (e.g. emergency mutual assistance in article 10 and video conferencing in article 11), none of which appear in the UN Convention. The Budapest Convention also emphasises the timely preservation and sharing of data across borders, with an established network of 24/7 contact points to ensure rapid response in cybercrime investigations. The Second Protocol further strengthens data-sharing provisions, including direct cooperation with service providers and expedited disclosure of data in emergency situations.
The UN Convention provides mechanisms for data sharing but has been criticised for its provisions on confidentiality and transparency. Critics, including industry leaders, argue that the treaty contains too many requirements to keep requests confidential, which could limit transparency and oversight and raises concerns about how certain countries might use the data obtained for surveillance or other purposes.
On the other hand, the UN Convention covers more areas of international cooperation, since it draws on provisions from UNTOC and UNCAC and includes provisions on crime prevention as well as on the freezing, seizure, confiscation and return of the proceeds of crime (article 31), which the Budapest Convention does not include.
At the same time, the UN Convention lacks detailed safeguards, particularly regarding how surveillance and data sharing might impact privacy. One provision, in article 22, grants states the authority to assert jurisdiction over crimes committed outside their borders if their nationals are affected, effectively allowing states to interfere in one another’s domestic affairs. It also means that states wishing to use the convention to prosecute the conduct of individuals outside their territory can do so.
Further, article 27 allows states to compel individuals located in their territory to hand over electronic data (which, as noted, the treaty defines very broadly), no matter where that data is stored. The same power can be used to order service providers offering services in a state’s territory to disclose subscriber information relating to those services, which may include phone numbers, email addresses, account details, and other personally identifiable information.
Conclusion
As both the UN Cybercrime Convention and the Budapest Convention continue to shape global cybercrime policy, the challenge of how these instruments will coexist becomes increasingly relevant. The Budapest Convention, as the first international treaty on cybercrime, has long served as a foundational framework, providing a robust structure for addressing cyber-related offences while emphasising human rights and alignment with other international treaties.
However, states already party to the Budapest Convention may find themselves caught between the narrower, more established approach of that treaty and the broader mandates of the UN Convention. The latter’s focus on ‘serious crimes’, combined with the ambiguity around the scope of data collection for any offence defined by domestic law, could lead to inconsistencies in how cybercrime is addressed globally, especially where legal definitions of cyber offences differ between nations.
The ability of these two instruments to coexist may depend on diplomatic efforts to forge a complementary relationship between them. Ensuring that both conventions are implemented in a way that respects existing international norms and human rights will be key to avoiding legal fragmentation and to keeping global cybercrime prevention efforts effective and coordinated.