The global regulatory landscape of crypto: Between innovation and control

Blockchain and cryptocurrencies: transformative forces in modern economies

Blockchain is a digital ledger technology that records transactions securely, transparently, and immutable. It functions as a decentralised database, distributed across a network of computers, where data is stored in blocks linked together in chronological order. Each block contains a set of transactions, a timestamp, and a unique cryptographic hash that connects it to the previous block, forming a continuous chain.

 Symbol, Business Card, Paper, Text

The decentralised nature of blockchain means that no single entity has control over the data, and all participants in the network have access to the same version of the ledger. This structure ensures that transactions are tamper-proof, as altering any block would require changing all subsequent blocks and gaining consensus from the majority of the network. Cryptographic techniques and consensus mechanisms, such as proof of work or proof of stake, secure the blockchain, verifying and validating transactions without the need for a central authority.

Initially introduced as the underlying technology for Bitcoin in 2009, blockchain has since evolved to support a wide range of applications beyond cryptocurrencies. It enables smart contracts—self-executing agreements coded directly onto the blockchain—and has found applications in industries such as finance, supply chain management, healthcare, and voting systems. Blockchain’s ability to provide transparency, enhance security, and reduce the need for intermediaries has positioned it as a transformative technology with the potential to reshape the way information and value are exchanged globally. Cryptocurrency is a form of digital or virtual currency that relies on cryptography for security and operates on decentralised networks, typically powered by blockchain technology. Unlike traditional currencies issued and regulated by governments or central banks, cryptocurrencies are not controlled by any central authority, which makes them resistant to censorship and manipulation.

At its core, cryptocurrency functions as a digital medium of exchange, allowing individuals to send and receive payments directly without the need for intermediaries like banks. Transactions are recorded on a blockchain, ensuring transparency, immutability, and security. Each user has a unique digital wallet containing a private key, which grants them access to their funds, and a public key, which serves as their address for receiving payments.

Cryptocurrencies often rely on consensus mechanisms like proof of work or proof of stake to validate transactions and maintain the integrity of blockchain. Bitcoin, the first cryptocurrency, was launched by an anonymous entity known as Satoshi Nakamoto, to create a decentralised and transparent financial system. Since then, thousands of cryptocurrencies have emerged, each with its own unique features and use cases, ranging from smart contracts on Ethereum to stablecoins designed to minimise price volatility.

MicroStrategy now holds 244,800 bitcoins, worth roughly $9.45 billion, after recent large-scale purchases.

Cryptocurrencies can be used for various purposes, including online payments, investments, remittances, and decentralised finance. While they offer benefits such as lower transaction fees, financial sovereignty, and global accessibility, they also face challenges like regulatory uncertainty, price volatility, and scalability issues. Despite these challenges, cryptocurrencies have become a transformative force in the global economy, driving innovation and challenging traditional financial systems.

Regulation necessity

The need for cryptocurrency regulation arises from the rapid growth and widespread adoption of digital assets, which present both opportunities and risks for individuals, businesses, and governments. While cryptocurrencies offer numerous benefits, such as financial inclusion, decentralised finance, and cross-border transactions, their unique characteristics also create challenges that necessitate oversight to ensure the integrity, stability, and safety of financial systems.

One primary reason for regulation is to protect consumers and investors. The crypto market is highly volatile, with prices often experiencing extreme fluctuations. This instability exposes investors to significant risks, and the lack of oversight has led to numerous cases of fraud, scams, and Ponzi schemes. Regulation can establish safeguards, such as requiring exchanges to implement transparency, security measures, and fair practices, which help protect users from financial losses.

Another critical driver for regulation is the need to combat illicit activities. The pseudonymous nature of cryptocurrencies can make them attractive for money laundering, terrorist financing, tax evasion, and other illegal purposes. By enforcing Know Your Customer (KYC) and Anti-Money Laundering (AML) requirements, regulators can minimise these risks and ensure that digital assets are not exploited for unlawful activities.

Regulation is also necessary to enhance market stability and confidence. The crypto space has seen incidents such as exchange hacks, sudden bankruptcies, and the collapse of major projects, which have caused significant disruptions and undermined trust in the ecosystem. Regulatory frameworks can help ensure the resilience and security of the infrastructure supporting cryptocurrencies, fostering a more stable environment.

Furthermore, as cryptocurrencies increasingly integrate into the global economy, regulation is vital to maintain financial stability. Unregulated digital assets could potentially disrupt traditional economic systems, challenge monetary policies, and create systemic risks. By introducing clear rules for the interaction between cryptocurrencies and traditional finance, regulators can prevent market manipulation and mitigate risks to the broader economy.

Finally, regulatory clarity can encourage legitimacy and adoption. A well-regulated crypto market can attract institutional investors, foster innovation, and create opportunities for businesses while addressing the concerns of sceptics and governments. Clear and consistent regulatory frameworks can also ensure fair competition and enable the crypto industry to coexist with traditional financial systems.

 Accessories, Jewelry, Locket, Pendant

Cryptocurrency regulation is necessary to protect users, prevent misuse, stabilise markets, safeguard economies, and promote broader adoption. Striking the right balance is essential to supporting innovation while addressing risks, enabling cryptocurrencies to realise their full potential as a transformative financial tool.

The future of crypto regulation worldwide

Global crypto regulation is a complex and evolving landscape, as governments and regulatory bodies around the world approach the issue with varying degrees of acceptance, restriction, and oversight. Cryptocurrencies, by their nature, operate on decentralised networks that transcend borders, making regional or national regulation a challenging task for policymakers. Governments worldwide are introducing rules to govern digital assets, with organisations like the International Organization of Securities Commissions (IOSCO) and the World Economic Forum (WEF) emphasising the need for consistent global standards. IOSCO has outlined 18 key recommendations for managing crypto and digital assets, while the WEF’s Pathways to the Regulation of Crypto-Assets provides an overview of recent regulatory developments and highlights the necessity of international alignment in overseeing this rapidly evolving industry.

Although regulatory discussions around crypto assets have been ongoing for years, recent crises, including the collapse of crypto-friendly banks and platforms like FTX, have heightened the urgency for clear rules. These incidents have accelerated the drive for stricter accounting and reporting standards.

Some countries have adopted pro-crypto stances, recognising the technology’s potential for economic growth and innovation. These nations often implement clear regulatory frameworks that encourage blockchain development and crypto adoption while addressing risks such as fraud, money laundering, and tax evasion. For instance, countries like Switzerland, Singapore and El Salvador have established themselves as crypto-friendly hubs by offering favourable regulatory environments that support blockchain startups and initial coin offerings (ICOs).

A table with text on it

Conversely, other nations take a more restrictive approach, either banning cryptocurrencies outright or imposing strict controls. Many countries have implemented comprehensive bans on cryptocurrency trading and mining, citing concerns over financial stability, capital flight, and environmental impacts. Some governments are cautious about the use of cryptocurrencies in illicit activities such as money laundering and terrorism financing, leading to calls for stricter KYC and AML requirements. At the international level, organisations such as the Financial Action Task Force (FATF) have introduced guidelines aimed at harmonising cryptocurrency regulations across borders. These guidelines focus on combating financial crimes by requiring cryptocurrency exchanges and service providers to implement measures such as customer identification and transaction reporting. In addition to regulating existing cryptocurrencies, many central banks are exploring the development of Central Bank Digital Currencies (CBDCs) These government-backed digital currencies aim to provide the benefits of cryptocurrencies, such as faster payments and increased financial inclusion, while maintaining centralised control and regulatory oversight.

Overall, global cryptocurrency regulation is dynamic and fragmented, reflecting the varying priorities and perspectives of different jurisdictions. While some countries embrace cryptocurrencies as tools for innovation and financial empowerment, others prioritise control and risk mitigation. The future of crypto regulation is likely to involve a blend of international cooperation and national-level policymaking, as regulators strive to strike a balance between fostering innovation and addressing the challenges posed by this transformative technology.

Let us examine a few examples of regulations.

US cryptocurrency regulation progress

The United States has made slow but steady progress toward establishing a regulatory framework for cryptocurrencies. Legislative efforts like the Financial Innovation and Technology for the 21st Century Act (FIT21) and the Blockchain Regulatory Certainty Act aim to define when cryptocurrencies are classified as securities or commodities and clarify regulatory oversight. Although these bills have yet to gain significant traction, they lay the foundation for future advancements in crypto regulation.

However, Donald Trump’s incoming administration has pledged to position the US as a global leader in cryptocurrency innovation. Plans include creating a Bitcoin strategic reserve, revitalising crypto mining, and pursuing deregulation. The expected nomination of cryptocurrency advocate Paul Atkins as SEC chair has fueled optimism within the industry, raising hopes for a more collaborative and forward-thinking approach to digital asset regulation.

Trump’s family takes centre stage at the Gulf’s biggest bitcoin conference.

While deregulation is a priority, the sector still requires new rules to address its complexities. Key areas for clarification include defining when crypto assets qualify as securities under the Howey test and refining enforcement strategies to focus on fraud prevention without stifling innovation. Addressing the treatment of secondary crypto trading under securities laws could further enhance the competitiveness of US-based exchanges and keep crypto projects in the country.

By balancing deregulation with essential safeguards, the incoming administration could foster an environment of growth and innovation while ensuring compliance and investor protection. The groundwork being laid today may help shape a thriving future for the US cryptocurrency landscape.

Russia strengthens crypto rules

Russia has taken a significant step in regulating cryptocurrency by introducing new rules aimed at integrating digital assets into its financial system while maintaining economic stability. As of 11 January 2025, the Bank of Russia requires contracts involving digital rights—such as cryptocurrencies, tokenised securities, and digital tokens—used in foreign trade to be registered with authorised banks. This applies to import contracts exceeding RUB 3 million and export contracts over RUB 10 million, underscoring the country’s intent to balance oversight with operational efficiency in international trade.

 Architecture, Building, Housing, House, Mansion, Palace, Car, Transportation, Vehicle

The regulations also mandate residents to provide detailed documentation on crypto transactions tied to these contracts. These include records of digital asset transfers or receipts used as payments, along with information on related foreign exchange operations. This level of scrutiny is designed to enhance transparency and mitigate risks, reflecting Russia’s broader goal of establishing a secure and efficient framework for digital assets.

While the move could promote wider adoption of cryptocurrencies by offering regulatory clarity, it also imposes additional compliance obligations on businesses and investors. As digital assets gain prominence in the global economy, Russia aims to leverage their potential while ensuring they are used responsibly within its financial system.

The Bank of Russia’s initiative represents a pivotal moment in the evolution of the nation’s digital financial landscape. Market participants will need to adapt to these changes and navigate the new regulatory environment as Russia positions itself at the forefront of crypto regulation.

China’s complex crypto landscape

China has had a complicated relationship with cryptocurrency, once holding the largest market for Bitcoin transactions globally before a crackdown began in 2017. Despite these regulatory restrictions, the blockchain industry in China remains a leader, with over 5,000 blockchain-related companies. China’s government continues to restrict domestic cryptocurrency trading and initial coin offerings (ICOs), citing concerns over volatility, anonymous transactions, and lack of centralised control. However, major blockchain companies like Binance and Huobi remain influential, and China still leads in blockchain projects globally.

Legally, China does not recognise cryptocurrency as legal tender. Instead, it considers them virtual commodities. Since 2013, the government has implemented several regulations aimed at restricting cryptocurrency trading and protecting investors. These regulations include a ban on domestic cryptocurrency exchanges, and ICOs, as well as the participation of financial institutions in cryptocurrency activities. Although the country has not passed comprehensive cryptocurrency legislation, the government has consistently emphasised that trading virtual currencies carries risks for individuals.

 Symbol

China has also addressed the taxation of cryptocurrency profits. Income generated from trading virtual currencies is subject to individual income tax, specifically categorised under ‘property transfer income.’ Tax authorities require individuals to report the purchase price and taxes, with the government stepping in to determine prices if proof is not provided. The approach demonstrates China’s ongoing control over cryptocurrency activities within its borders.

Despite the regulatory restrictions, China’s blockchain sector remains robust and influential. The government is clearly focused on managing the risks associated with digital currencies while fostering blockchain innovation, which is likely to continue to influence global cryptocurrency trends.

EU’s comprehensive crypto framework

At the forefront of regulatory efforts is the European Union, which unveiled its comprehensive regulatory framework known as the Markets in Crypto-Assets Act (MiCA) in 2020. After nearly three years of development, MiCA was approved by the European Parliament in April 2023, with the enactment date set for 30 December 2024. The MiCA framework aims to create legal clarity and consistency across the EU, streamlining the regulatory approach to crypto assets. Before MiCA, crypto firms in the EU had to navigate a complex landscape of varying national regulations and multiple licensing requirements, but the new legislation provides a unified licensing structure, which will apply across all 27 member states.

 Text

MiCA applies to all crypto assets that fall outside traditional EU financial regulations, covering everything from electronic money tokens (EMTs) and asset-referenced tokens (ARTs) to other types of crypto assets. These assets are defined based on how they function and are backed. EMTs, for example, are digital assets backed by a single fiat currency, while ARTs are pegged to a basket of assets. MiCA does not automatically apply to non-fungible tokens (NFTs) unless they share characteristics with other regulated assets. Additionally, decentralised applications (dApps), decentralised finance (DeFi) projects, and decentralised autonomous organisations (DAOs) may not be fully subject to MiCA, unless they do not meet the criteria for decentralisation.

Businesses that offer crypto-asset services, known as crypto-asset service providers (CASPs), are at the heart of MiCA’s regulatory scope. These include entities involved in cryptocurrency exchanges, wallet services, and crypto trading platforms. Under MiCA, CASPs will need to obtain authorisation to operate across the EU, with a unified process that eliminates the need for multiple licenses in each country. Once authorised, these businesses can offer services across the entire EU, provided they comply with requirements around governance, capital, anti-money laundering, and data protection.

MiCA also introduces important provisions for stablecoins, particularly fiat-backed stablecoins, which must be backed by a 1:1 liquid reserve. However, algorithmic stablecoins—those that do not have explicit reserves tied to traditional assets—are banned. Issuers of EMTs and ARTs will be required to obtain authorisation and provide whitepapers, outlining the characteristics of the assets and the risks to prospective buyers. MiCA’s regulations are designed to protect consumers, reduce market manipulation, and ensure that crypto activities remain secure and transparent.

This regulatory shift is expected to reshape the crypto landscape in the EU, offering businesses and consumers clearer protections and encouraging market integrity. As MiCA comes into effect in 2025, its impact is likely to reverberate beyond Europe, as other nations look to adopt similar frameworks for managing digital assets.

Japan’s evolving crypto regulations

Japan is considering lighter regulations for cryptocurrency intermediaries that are not crypto exchanges. The Financial Services Agency (FSA) recently proposed this to the Financial System Council, following Japan’s early cryptocurrency regulation after the Mt. Gox hack. Currently, crypto intermediaries such as apps or wallets that connect users to exchanges must register as crypto asset exchange service providers (CAESPs), but many do not handle customer funds directly.

 Architecture, Building, Symbol, Text, Sign, Number, Office Building

To reduce the regulatory burden, the FSA is exploring a system where intermediaries would register, provide user information, follow advertising restrictions, and potentially be liable for damages. They might also be required to maintain a security deposit, with exchanges absorbing liability for affiliated intermediaries. This proposal aims to create a more flexible regulatory framework for crypto-related businesses that do not operate exchanges.

Brazil’s new crypto market law

In late 2022, the National Congress approved a bill regulating the cryptocurrency market, focusing on areas like competition, governance, security, and consumer protection. The Central Bank of Brazil (BCB) and the Securities and Exchange Commission (CVM) will oversee its implementation. While there was no specific crypto regulation before, the new law will require companies, including exchanges, to obtain licenses, register with the Brazilian National Registry of Legal Entities (CNPJ), and report suspicious activities to the Council for Financial Activities Control (COAF).

Brazil aims for technological autonomy with a new AI investment initiative.

The regulation mandates KYC (Know Your Customer) and KYT (Know Your Transaction) practices to combat money laundering. It also aligns with the Penal Code of Brazil, enforcing penalties for fraud and crimes. Notably, exchanges must separate client assets from company assets, a provision not yet included in the law but proposed by the Brazilian Association of Cryptoeconomics (ABCripto).

The law was set to take effect between May and June 2023, with full implementation, including licensing rules, expected by 2025. While the decentralised nature of the global crypto market presents challenges, the new regulatory framework aims to offer greater security and attract more investors to the growing Brazilian crypto market.

UK push for crypto regulation

The United Kingdom has taken significant steps to regulate digital currencies, mandating that any company offering such services must obtain proper authorisation from the Financial Conduct Authority (FCA). This regulation is part of a broader effort to establish a clear and secure framework for digital assets, including cryptocurrencies and digital tokens, within the UK financial ecosystem. One area of particular focus is stablecoins, which are digital currencies pegged to stable assets, such as the US dollar or the British pound. Stablecoins have garnered attention for their potential to revolutionise the payments sector by offering faster and cheaper transactions compared to traditional payment methods.

 Logo, Dynamite, Weapon, QR Code, Maroon, Text

The Bank of England has proposed new regulations specifically targeting stablecoins to maximise their benefits while addressing potential risks. These proposed rules aim to strike a balance between encouraging innovation in digital payments and ensuring the financial system’s stability. The regulations are designed to ensure that stablecoins do not pose risks to consumer protection or the integrity of the financial market, particularly in terms of preventing money laundering and illicit financial activities.

This move highlights the UK’s proactive approach to digital asset regulation, aiming to foster a secure environment where cryptocurrencies and blockchain technologies can thrive without undermining the broader financial infrastructure. The efforts also underscore the UK’s commitment to consumer protection, ensuring that individuals and businesses engaging with digital currencies are properly safeguarded. With this comprehensive regulatory approach, the UK is positioning itself as a leader in the integration of digital currencies into traditional finance, setting a precedent for other nations exploring similar regulatory frameworks.

Kenya΄s crypto regulation attempt

Kenya’s journey with cryptocurrency regulation has evolved from scepticism to a more open stance as the government recognises its potential benefits. Initially, in the early 2010s, cryptocurrencies like Bitcoin were viewed with caution by the Central Bank of Kenya (CBK), citing concerns over volatility, fraud, and lack of consumer protection. This led to a public warning against the use of virtual currencies in 2015. However, the growing global interest in digital currencies, including in Kenya, continued, with nearly 10% of Kenyans owning cryptocurrency by 2022, driven by factors such as financial inclusion and the appeal of blockchain technology.

Kenya flag is depicted on a sports cloth fabric with many folds. Sport team waving banner

A turning point for Kenya came in 2018, when the government set up a task force to explore blockchain and the potential of AI, building on the success of mobile money services like M-Pesa. By 2023, the country began assessing money laundering risks associated with virtual assets, signalling a shift in attitude toward cryptocurrencies. By December 2024, the government introduced a draft National Policy on Virtual Assets and Virtual Asset Service Providers (VASPs), outlining a regulatory framework to guide the development of the market.

The proposed regulations include licensing requirements for cryptocurrency exchanges and wallet providers, as well as measures to prevent money laundering and countering and terrorist financing. Consumer protection and cybersecurity are also central to the framework, ensuring that users’ funds and personal data are safeguarded. The draft regulations are open for public consultation until 24 January 2025, with the government seeking input from industry players, consumer groups, and the public.

Kenya’s path from opposition to embracing cryptocurrency reflects a broader trend towards digital financial innovation. By creating a balanced regulatory environment, Kenya hopes to position itself as a leader in Africa’s digital financial revolution, fostering economic growth and financial inclusion, much like the success it achieved with M-Pesa.

The need for a global approach

As we already explained, the international nature of cryptocurrency markets presents unique regulatory challenges. Cross-border activities increase the risk of fraud and investor harm, highlighting the necessity of consistent global standards. The WEF emphasises that international collaboration is “not just desirable but necessary” to maximise the benefits of blockchain technology while mitigating risks.

 Cleaning, Person, People, Adult, Female, Woman, Disk

Differences in market maturity, regulatory capacity, and regional priorities complicate alignment. However, organisations such as IOSCO or the Financial Stability Board (FSB)  stress the role of international bodies and national regulators in fostering a unified regulatory framework. A global approach would not only enhance consumer protections but also create an environment conducive to innovation, ensuring the responsible evolution of cryptocurrency markets.

As the crypto ecosystem evolves, governments and international organisations are working to balance innovation and regulation. By addressing the challenges posed by digital assets through comprehensive, coordinated efforts, the global community aims to create a stable and secure financial environment in the digital age.

The US clock strikes ‘ban or divest TikTok’

TikTok faces an uncertain future as the US government’s 19 January 2025 deadline approaches, demanding ByteDance divest its US operations or face a nationwide ban. The ultimatum, backed by the Supreme Court’s apparent readiness to uphold the decision, appears to be the culmination of years of scrutiny over the platform’s data practices and ties to China. Amid this mounting pressure, reports suggest Elon Musk, the owner of X (formerly Twitter), could acquire TikTok’s US operations, a proposal that has sparked debates about its feasibility and geopolitical implications.

Now, let’s see how it began..

How did the TikTok odyssey begin?

The story of TikTok began in 2014 with Musical.ly, a social media app enabling users to create and share lip-sync videos. Founded in Shanghai, it quickly gained traction among US and European teenagers. By 2017, Musical.ly had over 100 million users and caught the attention of ByteDance, a Chinese tech giant that acquired it for $1 billion. In 2018, ByteDance merged Musical.ly with its domestic app Douyin, launching TikTok for international audiences. Leveraging powerful machine-learning algorithms, TikTok’s ‘For You Page’ became its defining feature, captivating users with an endless stream of personalised content.

TikTok, Person, People, Computer Hardware, Electronics, Hardware, Art
The US clock strikes 'ban or divest TikTok' 18

By 2018, TikTok had become one of the most downloaded apps globally, surpassing giants like Facebook and Instagram. Its cultural influence exploded, reshaping how content was created and consumed. From viral dance challenges to comedic skits, TikTok carved out a unique space in the digital world, particularly among younger users. However, its meteoric rise also brought scrutiny. Concerns emerged over user data privacy and potential manipulation by its parent company ByteDance, which critics claimed had ties to the Chinese government.

The ‘ban or divest’ saga

The incipit of the current conflict can be traced back to 2020 when then-President Donald Trump attempted to ban TikTok and Chinese-owned WeChat, citing fears that Beijing could misuse US data or manipulate public discourse through the platforms. The courts blocked Trump’s effort, and in 2021, President Joe Biden revoked the Trump-era orders, but initiated its review of TikTok’s data practices, keeping the platform under scrutiny. Despite challenges, TikTok continued to grow, surpassing 1 billion active users by 2021. It implemented community guidelines and transparency measures to address content moderation and concerns about misinformation. It also planned to store US user data on Oracle-operated servers to mitigate fears of Chinese government access. However, bipartisan concerns over TikTok’s influence persisted, especially regarding its ties to the Chinese government and the potential data misuse. Lawmakers and US intelligence agencies have long raised alarms about the vast amount of data TikTok collects on its US users and the potential for Beijing to exploit this information for espionage or propaganda. Therefore, last year, Congress passed a bill with overwhelming support requiring ByteDance to divest its US assets, marking the strictest legal threat the platform has ever faced.

The 19 January 2025 deadline and the rumours about Elon Musk’s potential acquisition of TikTok

By 2024, TikTok was at the centre of a geopolitical storm. The US government’s demand for divestment or a ban by 19 January 2025 intensified the platform’s challenges. Amid these disputes, Elon Musk, owner of X (formerly Twitter), has emerged as a potential buyer for TikTok’s US operations. Musk’s ties to US and Chinese markets via Tesla’s Shanghai production hub position him as a unique figure in this debate. If Musk were to acquire TikTok, it could bolster X’s advertising reach and data capabilities, aligning with his broader ambitions in AI and technology. However, such a sale would involve overcoming numerous hurdles, including ByteDance’s valuation of TikTok at $40–50 billion and securing regulatory approvals from both Washington and Beijing. On the other hand, ByteDance, backed by Beijing, is resisting the sale, arguing that the conditioning violates free speech and poses significant logistical hurdles.

 Person, Bulldozer, machine, Text
The US clock strikes 'ban or divest TikTok' 19

TikTok has attempted to safeguard its US user base of 170 million by planning to allow users to download their data in case the ban takes effect. It has also reassured its 7,000 US employees that their jobs and benefits are secure, even if operations are halted. While new downloads would be prohibited under the ban, existing users could retain access temporarily, although the platform’s functionality would degrade over time.

The looming deadline has sparked a surge in alternative platforms, such as RedNote (known in China as Xiaohongshu), which has seen a significant influx of US users in anticipation of TikTok’s potential exit.

TikTok’s cultural legacy and future

The fate of TikTok in the US hangs in the balance as President-elect Donald Trump considers an executive order to delay the enforcement of the ‘ban or divest’ law by up to 90 days. The potential extension, supported by figures from both political sides, including Senate Majority Leader Chuck Schumer and Trump’s incoming national security adviser Mike Waltz, aims to provide ByteDance, TikTok’s Chinese owner, additional time to divest its US operations and avoid a nationwide ban. With over 170 million American users and substantial ad revenue at risk, lawmakers are increasingly wary of the disruption a ban could cause, signalling bipartisan support to keep the app operational while addressing national security concerns. TikTok CEO Shou Zi Chew’s attendance at Trump’s inauguration further hints at a shift in relations between the platform and the new administration. Meanwhile, the uncertainty has already driven US users to explore alternatives like RedNote as the clock ticks down to the Sunday deadline.

Either way, TikTok’s impact on culture and technology is undeniable. It has redefined digital content creation and inspired competitors like Instagram Reels and YouTube Shorts. Yet, its journey highlights the challenges of navigating geopolitical tensions and concerns over data privacy in a hyper-connected world. As the 19 January deadline looms, TikTok stands at a crossroads. Whether it becomes part of Musk’s tech empire, succumbs to a US ban, or finds another path, its legacy as a trailblazer in short-form video content remains secure. The platform’s next chapter, however, hangs in the balance, as these TikTok developments underscore the broader implications of its struggles, including the reshaping of the social media landscape and the role of government intervention in regulating digital platforms.

OEWG’s ninth substantive session: Limited progress in discussions

The UN Open-Ended Working Group (OEWG) on the security of and in the use of information and communications technologies in 2021–2025 held its ninth substantive session on 2-6 December 2024. 

During the session, states outlined cooperative measures to counter cyber threats, continued discussions on possible new norms, tried to reach additional layers of understanding on the international law, discussed elements of the future permanent mechanism, discussed CBMs implementation and the POC Directory operalisation, deliberated the development and operationalisation of the Global Portal on Cooperation and Capacity-Building and the Voluntary Fund, and debated about the shape of the UN mechanism that will succeed the OEWG 2021-2025. 

While there was consensus on certain broad goals, contentious debates highlighted deep divisions, particularly regarding the applicability of international law, the role of norms, and the modalities of stakeholder participation.

Some of the main takeaways from this session are:

  • The threat landscape is rapidly evolving and with it, the OEWG discussions on threats, including measures to counter those threats.
  • The discussion on norms backslides into old disputes, namely the implementation of existing norms vs the development of new norms, in which states hold their old positions. However, the discussion is not entirely static, as many proposals for new norms have emerged. 
  • While the discussions on international law have deepened, and the states have presented very detailed views, there is still no agreement on whether new legally binding regulations for cyberspace are needed.
  • The discussions on CBMs included numerous practical recommendations pertaining to CBM implementation, the sharing of best practices and the operationalisation of the POC directory.
  • Opinions differ on several issues regarding capacity building, including specific details on the structure and governance of the proposed portal, the exact parameters of the voluntary fund, and how to effectively integrate existing capacity-building initiatives without duplication.
  • States disagreed on the scope of thematic groups in the future mechanism: while some countries insist on keeping traditional pillars of the OEWG agenda (threats, norms, international law, CBMs and capacity building), others advocate for a more cross-cutting and policy-oriented nature of such groups. The modalities of multistakeholder engagement in the future mechanism are also in the air. The agenda for the next meeting of the OEWG in February 2025 will likely be inverted, and delegations will start with discussions on regular institutional dialogue to ensure enough time is dedicated to this most pressing issue.

Threats: A rapidly evolving threat landscape
 Text, Device, Grass, Lawn, Lawn Mower, Plant, Tool, Gun, Weapon

Discussions on threats have become more detailed – almost one-fourth of the session was dedicated to this topic. The chair noted that reflection of the rapidly evolving threat landscape, but also signals a growing comfort among states in candidly addressing these issues. 

What’s particularly interesting about this session is that states have dedicated just as much—if not more—time to discussing cooperative measures to counter these threats as they have to outline the threats themselves.

Threats states face in cyberspace

Emerging technologies, including AI, quantum computing, blockchain, and the Internet of Things (IoT), took centre stage in discussions. Delegates broadly acknowledged the dual-use nature of these innovations. On one hand, they offer immense developmental potential; on the other, they introduce sophisticated cyber risks. Multiple states, including South Korea, Kazakhstan, and Canada, highlighted how AI intensifies cyber risks, particularly ransomware, social engineering campaigns, and sophisticated cyberattacks. Concerns about AI misuse include threats to AI systems (Canada), generative AI amplifying attack surfaces (Israel), and adversarial manipulations such as prompt injections and model exfiltration (Bangladesh). 

Nations including Guatemala and Pakistan stressed the risks of integrating emerging technologies into critical systems, warning that without regulation, these systems could enable faster and more destructive cyberattacks. 

Despite the risks, states like Israel and Paraguay recognised the positive potential of AI in strengthening cybersecurity and called for harnessing its benefits responsibly. Countries like Italy and Israel called for international collaboration to ensure safe and trustworthy development and use of AI, aligning with human rights and democratic values.

Ransomware remains one of the most significant and prevalent cyber threats, as multiple delegations highlighted. Switzerland and Ireland flagged the growing sophistication of ransomware attacks, with the rise of ransomware-as-a-service lowering barriers for cybercriminals and enabling the proliferation of such threats. The Netherlands and Switzerland noted ransomware’s profound consequences on societal security, economic stability, and human welfare. Countries including Italy, Germany, and Japan emphasised ransomware’s disruptive impact on critical infrastructure and essential services, such as hospitals and businesses.

Critical infrastructure has become an increasingly prominent target for cyberattacks, with threats stemming from both cyber criminals and state-sponsored actors. Essential services such as healthcare, energy, and transportation are particularly affected. However, the EU, along with countries such as the Netherlands, Switzerland, and the USA, have also raised concerns about malicious activities disrupting essential services and international organisations, including humanitarian agencies. 

Countries such as Ireland, Canada, Argentina, Fiji and Vanuatu have raised alarms about the rising number of cyber incidents targeting these critical subsea infrastructures. These cables are vital for global communication and data transfer, and any disruption could have severe consequences. Ireland called for further examination of the particular vulnerabilities and threats to critical undersea infrastructure, the role of states in the private sector in the operation and security of such infrastructure, and the application of international law which must govern responsible state use and activity in this area.

Germany and Bangladesh highlighted the role of AI in automating disinformation campaigns, scaling influence operations and tailoring misinformation to specific cultural contexts. Countries such as China, North Korea and Albania noted the rampant spread of false narratives and misinformation, emphasising their ability to manipulate public opinion, influence elections, and undermine democratic processes. Misinformation is weaponised in various forms, including phishing attacks and social media manipulation. Misinformation and cyberattacks are increasingly part of broader hybrid threats, aiming to destabilise societies, weaken institutions, and interfere with electoral processes (Albania, Ukraine, Japan, Israel, and the Netherlands). Several countries, including Cuba, Russia, and Bangladesh, stressed how cyber threats, including disinformation and ICT manipulation, are used to undermine the sovereignty of states, interfere in internal affairs, and violate territorial integrity. Countries like Israel and Pakistan warned of the malicious use of bots, deepfakes, phishing schemes, and misinformation to influence public opinion, destabilise governments, and compromise national security. Bosnia highlighted the complexity of these evolving threats, which involve both state and non-state actors working together to destabilise countries, weaken trust, and undermine democratic values.

Cyber operations in the context of armed conflict are no longer a novel concept but have become routine in modern warfare, with enduring consequences, according to New Zealand. Similar observations were made by countries such as the USA, Germany, Albania, North Korea and Pakistan. A worrisome development was brought forth by Switzerland, which noted the involvement of non-state actors in offensive actions against ICTs within the framework of armed conflict between member states.

Countries are also increasingly concerned about the growing sophistication of hacking-as-a-service, malware, phishing, trojans, and DDoS attacks. They are also concerned about the use of cryptocurrencies for enhanced anonymity. Israel also highlighted that the proliferation and availability of advanced cyber tools in the hands of non-state actors and unauthorised private actors constitute a serious threat. The proliferation of commercial cyber intrusion tools, including spyware, is raising alarm among nations like Japan, Switzerland, the UK and France. The UK and France emphasised that certain states’ failure to combat malicious activities within their territories exacerbates the risks posed by these technologies. Additionally, Kazakhstan warned about advanced persistent threats (APTs) exploiting vulnerable IoT devices and zero-day vulnerabilities.

Cuba rejected the militarisation of cyberspace, offensive operations, and information misuse for political purposes. They called for peaceful ICT use and criticized media platforms for spreading misinformation. The UK emphasised states’ responsibilities to prevent malicious activities within their jurisdiction and to share technical information to aid network defenders. Russia warned against hidden functions in ICT products used to harm civilian populations, calling for accountability from countries enabling such activities. Columbia suggested that states which have been the victims of cyberattacks could consider the possibility of undertaking voluntary peer reviews, where they would share their experiences, including lessons learned, challenges, and protocols for protection, response, and recovery.

Cooperative measures to counter threats

Most countries noted the role of capacity building in enabling states to protect themselves. The EU called for coordinated efforts to capacity building and for more reflection on best practices and practical examples. Capacity-building initiatives should align with regional and national contexts, Switzerland and Kazakhstan noted, focusing on identifying vulnerabilities, conducting cyberattack simulations, and developing robust measures, Kazakhstan noted. Columbia highlighted that states should express their needs for capacity building to adequately identify the available supply. Malawi and Guatemala advocated for capacity building, partnerships with international organisations, and knowledge-sharing between governments, the private sector, and academia. Albania emphasised the importance of UN-led training initiatives for technical and policy-level personnel.

The discussions highlighted the urgent need to bridge the technological divide, enabling developing countries to benefit from advancements and manage cyber risks. Vanuatu emphasised the importance of international capacity-building and cooperation to ensure these nations can not only benefit from technological advancements but also manage the associated risks effectively. Zimbabwe called for the OEWG to support initiatives that provide technical assistance and training, empowering developing nations to build robust cybersecurity frameworks. Cuba reinforced this by advocating for the implementation of technical assistance mechanisms that enhance critical infrastructure security, respecting the national laws of the states receiving assistance. Nigeria stressed the importance of equipping personnel in developing countries with the skills to detect vulnerabilities early and deploy preventive measures to safeguard critical information systems.

States also noted that the topic of threats must be included in the new mechanism. Mexico proposed creating a robust deliberative space within the mechanism to deepen understanding and foster cooperation, enhancing capacities to counter ICT threats. Sri Lanka supported reviewing both existing and potential ICT threats within the international security context of the new mandate. Brazil suggested the future mechanism should incorporate dedicated spaces for sharing threats, vulnerabilities, and successful policies. Some countries gave concrete suggestions for thematic groups on threats under the new mechanism. For instance, France highlighted that sector-specific discussions on threats and resilience could serve as strong examples for thematic groups within the future mechanism. Colombia called for a standing thematic working group focused on areas like cyber incident management, secure connectivity technologies (e.g., 5G), and policies for patching and updates. Singapore emphasised using future discussions to focus on building an understanding of emerging technologies and their governance. Egypt advocated for a flexible thematic group on threats within the mechanism, capable of examining ICT incidents with political dimensions.. New Zealand recommended focusing discussions on cross-cutting themes such as critical infrastructure, enabling states to better understand and mitigate threats. Cuba echoed the importance of the future permanent mechanism taking into account the protection of critical infrastructure, and underscored the importance of supporting developing countries with limited resources to protect critical infrastructure.

Delegations highlighted the Global Point of Contact (POC) Directory as a key tool for enhancing international cooperation on cybersecurity. Ghana, Argentina and Kazakhstan emphasised its role in facilitating information exchange among technical and diplomatic contacts to address cyber threats. South Africa proposed using the POC Directory for cybersecurity training and sharing experiences on technologies like AI. Chile stressed that the POC Directory can play a central role in the for improved cyber intelligence capacity and coordinated responses to large-scale incidents. Malaysia called for broader participation and active engagement in POC activities.

Several countries emphasised the importance of strengthening collaboration among national Computer Emergency Response Teams (CERTs). Ghana and New Zealand supported CERT-to-CERT cooperation, with Ghana calling for sharing best practices. Nigeria suggested creating an international framework for harmonising cyber threat responses, including strategic planning and trend observation. Singapore highlighted timely and relevant CERT-related information sharing and capacity building as key to helping states, especially smaller ones, mitigate threats. Fiji prioritised capacity building for CERTs.

Several nations, including Argentina, Sri Lanka, and Indonesia, called for establishing a global platform for threat intelligence sharing. These platforms would enable real-time data exchange, incident reporting, and coordinated responses to strengthen collective security. Such mechanisms, built on mutual trust, would also facilitate transparency and enhance preparedness for emerging cyber challenges. Switzerland voiced support for discussing the platform but also noted that exchanging each member state’s perception of the identified threats can happen through bilateral, regional, or multilateral collaboration forums, or simply by making a member state’s findings publicly accessible.

Egypt noted that there must also be discussions on both the malicious use of ICT by non-state actors, as well as the role and responsibilities of the private sector in this regard. 

Countries like El Salvador and Ghana underscored the importance of integrating security and privacy by design approaches into all stages of system development, ensuring robust protections throughout the lifecycle of ICT systems.

Building shared resilience in cyberspace hinges on collective awareness of threats and vulnerabilities. Bosnia stressed collaboration as essential, while Moldova and Albania highlighted the need for education and awareness campaigns to engage governments, private entities, and civil society. Vietnam advocated using international forums and UN agencies like ITU to bolster critical infrastructure resilience. Similarly, Paraguay called for creating awareness on the use of covert information campaigns, which may become incident, cyber incidents and tools for cyberattacks. Zimbabwe emphasised the critical importance of operationalising CBMs to foster trust and cooperation among nations in cyberspace.Belgium and Egypt emphasised the need to focus on the human impact of cyber threats and to use methodologies measuring harm to victims.

Norms: New norms vs norms’ implementation
 Body Part, Hand, Person, Aircraft, Airplane, Transportation, Vehicle, Handshake

The discussions on norms highlighted once again the division of states on binding vs voluntary andell as the implementation of existing norms vs the development of new norms. 

The chair invited all delegations to reflect on how states can bridge the divides if the discussion on new norms means that states are not prioritising implementation and if states can do both. The chair reminded stakeholders that ideas for new norms have come from delegations, but also from stakeholders. He also added that some of the delegations have said it’s too late to discuss new norms because the process is concluding (e.g. Canada); However, he reminded that when states began the process, some of the delegations also said it’s too early to get into a discussion because it’s important to focus on implementation. The chair concluded by noting that ‘it’s never a good time and it’s always a good time’.

First of all, the main disagreement was over binding vs voluntary norms as well as implementation of existing norms vs development of new norms. Some states, including Zimbabwe, Russia, and Belarus, advocate for the development of a legally binding international instrument to govern ICT security and state behaviour. They argue that existing voluntary norms are insufficient to address emerging threats. 

However, the discussion also served as a platform for new proposals from delegations to achieve a safe and secure cyber environment.  

Some states also proposed specific new norms to address emerging challenges:

  • El Salvador suggested recognising the role of ethical hackers in cybersecurity.
  • Russia proposed several new norms, including:
    • The sovereign right of each state to ensure the security of its national information space as well as to establish norms and mechanisms for governance in its information space in accordance with national legislation.
    • Prevention of the use of ICTs to undermine and infringe upon the sovereignty, territorial integrity and independence of states as well as to interfere in their internal affairs.
    • Inadmissibility of unsubstantiated accusations brought against states of organising and committing wrongful acts with the use of ICTs including computer attacks followed by imposing various restrictions such as unilateral economic measures and other response measures
    • Settlement of interstate conflicts through negotiations, mediation, reconciliation or other peaceful means of the state’s choice including through consultations with the relevant national authorities of states involved.
  • Belarus suggested new norms which could include the norm of national sovereignty, the norm of non-interference in internal affairs, and the norm of exclusive jurisdiction of states over the ICT sphere within the bounds of their territory.
  • China noted that new norms could be developed for data security, supply chain security, and the protection of critical infrastructure, among others.

In addition to this, some states proposed amending or enhancing the existing norms:

  • EU would like to see greater emphasis on the protection of all critical infrastructures supporting essential public services, particularly medical and healthcare facilities, along with enhanced cooperation between states. The EU also wants a priority focus on the critical infrastructure norms 13F, G and H.
  • El Salvador proposed strengthening privacy protections under Norm E, which Malaysia, Singapore and Australia supported. 
  • UK suggested a new practical action recommending that states safeguard against the potential for the illegitimate and malicious use of commercially available ICT intrusion capabilities by ensuring that their development, dissemination, purchase, export or use is consistent with international law, including the protection of human rights and fundamental freedoms under Norm I, which Canada, Switzerland, Malaysia, Australia, France supported.
  • Kazakhstan proposed:
    • adding a focus on strengthening personal data protection measures through the development and enforcement of comprehensive data protection laws to safeguard personal data from unauthorized access, misuse, or exploitation under the norm E
    • emphasising the importance of conducting international scenario-based discussions that simulate ICT-related disruptions under Norm G
    • establishing unified baseline cybersecurity standards will enable all states, respective of their technological development, to protect their critical infrastructure effectively under Norm G
    • promoting ethical guidelines for the development and use of technologies such as AI under Norm K
  • Canada suggested adding text under norm G: ‘Cooperate and take measures to protect international and humanitarian organizations against malicious cyber activities which may disrupt the ability of these organizations to fulfill their respective mandates in a safe, secure and independent manner and undermine trust in their work’

In contrast, other states such as the US, Australia, UK, Canada, Switzerland, Italy and others opposed the creation of new binding norms and highlighted the necessity to prioritise the implementation of the existing voluntary framework.

In between these two polar opposites, there were states who favoured a parallel development arguing that the implementation and the development of new norms can proceed simultaneously. These states were Singapore, China, Indonesia, Malaysia, Brazil, and South Africa.

Egypt questioned if states need to discuss enacting a mix of both binding and non-binding measures to deal with the increasing and rapid development of threats, as well as suggested that states might consider developing a negative list of actions that states are required to refrain from.

Japan called for a priority to focus on the implementation of the norms in a more concrete way. Russia called for the same, and suggested that states present a review of their compliance with national legislation and doctrinal documents with the rules, norms, and principles of behaviour in the field of international information security (IIS), which has been approved by the UN. Russia submitted its review of national compliance with the agreed norms.

International law: applicability to use of ICTs in cyberspace
 Accessories, Bag, Handbag, Scale

More than fifty member states delivered their statements in the discussions on international law, which included several small and developing states that have previously not done so. 

The discussions highlighted the diverse national and regional perspectives on the application of international law, especially the Common African Position on the application of international law in cyberspace, and the EU’s Declaration on a Common Understanding of International Law in Cyberspace. Tonga, on behalf of the 14 Pacific Island Forum member states, presented a position on international law affirming that international law, including the UN Charter in its entirety, is applicable in cyberspace. Fiji, on behalf of a cross-regional group of states that includes Australia, Colombia, El Salvador, Estonia, Kiribati, Thailand, and Uruguay has recalled a working paper that reflected additional areas of convergence on the application of international law in the use of ICTs. 

As mentioned by Canada, Ireland, France, Switzerland, Australia, and others, these statements build momentum at the OEWG in building common understandings on international law, as over a hundred states have individually or collectively published their positions.

Applicability of international law to cyberspace

Despite the many published statements and intensified discussions, the main major rift between the states persists. On the one hand, the vast majority of the member states call for discussions on how international law applies in cyberspace and do not see the reason to negotiate new legally binding regulations. On the other hand, some states want to see the development of new legally binding regulations (Iran, also recalling requests by the countries of the Non-Aligned Movement, Cuba on behalf of the delegations of the Bolivarian Republic of Venezuela, Nicaragua, as well as Russia, China, Pakistan).

The majority of the states addressed the need to emphasise the applicability of international humanitarian law in the cyber context (EU, Lebanon, the USA, Australia, Poland, Finland, Republic of Korea, Japan, Malawi, Egypt, Sri Lanka, Brazil, South Africa, the Philippines, Ghana, and others) recalling the Resolution on protecting civilians and other protected persons and objects against the potential human cost of ICT activities during armed conflict adopted by consensus at the 34th International Conference of the Red Cross and Red Crescent as a major step forward in international armed conflicts.

EU, Colombia, El Salvador, Uruguay, Australia, Estonia, and others expressed regret that the APR3 did not include a reference to the international humanitarian law and called for it to be included in the final OEWG report.

Other topics

The states also shared what topics in international law shall be discussed in more detail. State responsibility, sovereignty and sovereign equality, attribution and accountability were the most mentioned topics. The member states differed in their opinions on whether the topic of international law and norms should be discussed in the future mechanism within one thematic track or not. 

On capacity building in international law, scenario-based exercises received overwhelming support, with Ghana and Sierra Leone recalling the importance of South-South cooperation and regional capacity-building efforts.

One of the main deciding factors for the future of discussions on international law will certainly be the future permanent mechanism if the states decide to establish under said mechanism a dedicated group which will discuss international law. That would allow states to keep a status quo until the end of the OEWG’s mandate and defer the issue to the next mechanism.

CBMs: Implementing the CBMs and operationalising the POC directory
 Stencil, Text

This session was marked by noticeable activity in the CBM domain – from both developed and developing states – with the organisation of substantial side events and dedicated conferences as well as cross-regional meetings throughout the year. The letter sent by the chair in mid-November channelled pragmatic discussions and the session was marked by numerous practical recommendations pertaining to CBM implementation, the sharing of best practices and the operationalisation of the POC directory. 

A new dynamic concerning CBMs is emerging, now that additional CBMs no longer appear to be a concern. It is likely that the further implementation of CBMs will rely on capillarity. First, from the general CBM implementation point of view, capillarity is expected through the sustained commitment from states to share best practices in a cross-regional way, as shown in the inter-regional conference on cybersecurity organized by the Republic of Korea and North Macedonia, bringing together the OSCE, OAS, ECOWAS and African Union. Second, new levels of participation in the POC directory have been specifically linked to such initiatives and to more general capacity-building to which states are highly recommended to contribute.

CBMs implementation and sharing of best practices

Whereas the guiding questions provided by the chair were oriented towards the implementation of existing CBMs, few new CBMs and measures were nevertheless proposed and not extensively picked up nor discussed by most delegations. The well-worn question of shared technical terminology was brought back to the table solely by Paraguay, and Thailand mentioned an additional measure about CERT-to-CERT cooperation. Finally, Iran proposed a 9th CBM considering the facilitation of access to the ICT security market with the view to mitigate potential risks in the supply chain. El Salvador and Malaysia recommended the inclusion of voluntary identification of critical infrastructure and critical information infrastructure to the CBM 7 current phrasing. 

Focusing on implementation, Switzerland shared an OSCE practice called ‘Adopt-a-CBM’ in which individual or several states adopt a CBM and are committed to its implementation and recommended that CBMs 2, 5, 7 and 8 would be suitable for this approach. Kazakhstan also advised something similar in focusing on specific CBMs and engaging with individual states to promote them. Indonesia and El Salvador displayed numerous ways to foster the implementation of CBMs, among which the importance of shared practices that could fuel guidelines as practical reference for member states.

A substantive engagement by various states was noted, especially about the sharing of specific practices pertaining to each CBMs. Whereas most of these practices are usually confined to regional frameworks, it is noticeable that numerous states have densely exchanged best practices at an ever more global level through the application of CBM 6 about the organisation of workshops, seminars and training programs with inclusive representation of states (Germany, Korea, Peru, Fiji and the UK) and CBM 2 about the exchanging of views and dialogue from bilateral to cross-regional and multilateral levels (Germany, Peru, and Moldova). Consequently, some states also shared their application of CBM 5 about the promotion of information exchange on cooperation and partnership between states to strengthen capacity-building (Korea, Peru). More specific best practice exchange on the protection of CI and CII (CBM 7) was also noted to be undertaken by several states (Malaysia, Fiji, and the UK). Finally, CBM 8 on the strengthening of public-private sector partnership and cooperation was also fostered by several states (Korea, Albania, and the UK).

POC directory operationalisation 

At the time of the 9th substantive meeting, 111 countries had joined the POC directory. Most states sharing insights on ways to increase participation suggested raising awareness through workshops, webinars and side events (for instance, Albania and Kazakhstan). At this level of participation, it is reasonable to think that any increase in participating states should be considered a matter of capacity-building (South Africa).

Still, some states already started sharing their experience with the use of the POC and the feedback could not be more contrasted. On the one hand, Russia stated that it already had problems when cooperating on incident response through the POC directory given that some contacts did not work and some technical POCs had too limited powers which left them unable to respond to notifications. Consequently, it recommended that the determination of the scope of competence of each of the POC should be the first priority task, only supported by Slovakia. On the other hand, France shared that it had received several demands of communications since the creation of the POC and that it answered positively to all of them. Russia and China urged other states to actively use the POC directory; France nevertheless advocated not to exploit and overuse the tool at the risk of making it inoperable.

Lines of division nevertheless sometimes fade and the one about the template question was definitely less stark than last session, considering that few states expressed their reluctance to build such a template (Switzerland and Israel). Contributions nevertheless ranged from general opinion about the format of the template to the very detail of its content. Most delegates advocated for flexible and voluntary templates (Indonesia, Malaysia, Singapore, Thailand, the Netherlands and Paraguay). This framing was justified as enabling a better accommodation of different institutional frameworks as well as local and regional concerns (Brazil, Thailand, the Netherlands, and Singapore). All states nevertheless reasserted the necessity for the template to be as simple as possible for either capacity-building and resource constraints (Kiribati and Russia) or emergency reasons (Brazil, Paraguay, and Thailand). South Africa, supported by Brazil, proposed that the template should at a minimum provide a brief description of the nature of assistance sought, details of the cyber incident, acknowledgement of receipt by the requested state and provide indicative response timeframes. Indonesia added to this list the response actions taken, the requests for technical assistance or additional information and the emergency contacts options. Finally, Kazakhstan notably suggested numerous examples of templates each dedicated to various scenarios such as incident escalation, threat intelligence, CBM reporting, POC verification, capacity-building, cross-border incident coordination, annual reporting and lessons learned. The Secretariat is still expected to produce such a template by April 2025 and the chair expressed its intention to have standardised templates as an outcome of the July report.

Capacity building: Trust fund and Global Cyber Security Cooperation Portal
(GCSCP)
 Art, Drawing, Doodle

As usual, capacity building is one of the topics where there is a high level of consensus, albeit in broad strokes. There isn’t a single delegation denying the importance of capacity building to enhance global cybersecurity. However, opinions differ on several issues, including specific details on the structure and governance of the proposed portal, the exact parameters of the voluntary fund, and how to effectively integrate existing capacity-building initiatives without duplication. It is expected that the OEWG will continue to speak about these issues at length in order to have concrete details in its July 2025 Annual Progress Report (APR) and to allow the future mechanism to dive deeper into capacity building.

During the December session, delegations discussed the development and operationalisation of the Global Portal on Cooperation and Capacity-Building. Most delegations envisioned the portal as a neutral, member-state-driven platform that would adapt dynamically to an evolving ICT environment, integrating modules like the needs-based catalogue to guide decision-making and track progress as well as Kuwait’s latest proposal to add a digital tool module to streamline norm adoption. On the contrary, Russia expressed concerns over the exchange of data on ICT incidents through the portal, stating that such data is confidential data and could be used to level politically motivated accusations. 

The session also discussed the creation of a Voluntary Contribution Fund to support capacity building in the future permanent mechanism. South Africa and other delegations highlighted the need for clearly defined objectives, governance, and operational frameworks to ensure the fund’s efficiency and transparency. Monitoring mechanisms were deemed essential to guarantee alignment with objectives. Delegates broadly agreed on avoiding duplication of efforts, emphasising that the portal and the fund should complement existing initiatives such as the UNIDIR cyber policy portal, the GFCE civil portal, and the World Bank Cyber Trust Fund, rather than replicate their functions or those of regional organizations.

Further deliberations addressed the timing of the next High-Level Global Roundtable on capacity building. The roundtable’s potential overlap with the 2025 Global Conference on Cyber Capacity Building in Geneva presented scheduling challenges, prompting consideration of a 2026 date. Discussions on UNODA’s mapping exercise revealed mixed views: while it highlighted ongoing capacity-building efforts, many felt it inadequately identified gaps, leading to calls for a yearly mapping exercise. 

Finally, multistakeholder engagement emerged as a contentious issue, with Canada and the UK criticising the exclusion of key organisations like FIRST and the GFCE from formal sessions. Delegates called for reforms to ensure broader, more inclusive participation from non-governmental and private sector entities essential to global cybersecurity efforts.

Regular institutional dialogue: Thematic groups and multistakeholder participation
 Accessories, Sunglasses, Text, Handwriting, Glasses

During the last substantive session in July 2024, states adopted the third Annual Progress Report (APR) which contained some modalities of the future regular institutional dialogue (RID) mechanism. One substantive plenary session, at least a week long, will be held annually to discuss key topics and consider thematic group recommendations. States decided that thematic groups within the mechanism would be established to allow for deeper discussions. The chair may convene intersessional meetings for additional issue-specific discussions. A review conference every five years will monitor the mechanism’s effectiveness, provide strategic direction, and decide on any modifications by consensus. 

 Text, Page, Symbol

At the December 2024 substantive session, states continued discussing the number and scope of dedicated thematic groups and modalities of stakeholder participation.

Thematic groups in the future mechanism

There was a general divergence between states regarding the scope of thematic groups. Russia, Cuba, Iran, China, and Indonesia insisted on keeping traditional pillars of the OEWG agenda (threats, norms, international law, CBMs and capacity building). However, the EU, Japan, Guatemala, the UK, Thailand, Chile, Argentina, Malaysia, Israel, and Australia advocated for a more cross-cutting and policy-oriented nature of such groups. 

France and Canada gave suggestions in that vein. France suggested creating three groups that would discuss (a) building the resilience of cyber ecosystems and critical infrastructures, (b) cooperation in the management of ICT-related incidents, and (c) prevention of conflict and increasing stability in cyberspace. Canada suggested addressing practical policy objectives, such as protecting critical infrastructure and assisting states during a cyber incident, including through focused capacity building. The USA suggested the same two groups and highlighted that the new mechanism should maintain the best of the OEWG format but also allow for more in-depth discussion via the cross-cutting working groups on specific policy challenges.

The chair noted that the pillars could help organise future plenary sessions and that cross-cutting groups do not have to signal the end of pillars.

Some states asked for a dedicated group on the applicability of international law (Switzerland, Singapore), but Australia objected. Also, states proposed a dedicated group to create a legally binding mechanism (Cuba, Russia, Iran, South Africa, Thailand). Israel suggested having rotating agendas for thematic groups to keep their number limited.

Multistakeholder participation in the future mechanism

One issue that the OEWG has been struggling with from the start is modalities of multistakeholder engagement. The extent and nature of stakeholder participation was an issue at this session as well. The EU called for meaningful stakeholder participation without a veto from a single state. Canada proposed an accreditation process for stakeholders while emphasising that states would retain decision-making power. Mexico proposed creating a multistakeholder panel to provide inputs on agenda items and suggested considering the UN Convention on Climate Change model for stakeholder participation. Israel suggested adopting stakeholder modalities similar to the Ad Hoc Committee on Cybercrime. In contrast, Iran and Russia argued for maintaining current OEWG modalities, limiting stakeholder participation to informal, consultative roles on technical matters. 

A number of questions remain open, the Chair noted. For instance, is there a need for a veto mechanism for stakeholder participation in the future process? If yes, is there a need for an override mechanism, or a screening mechanism? Is there a need for identical modalities for stakeholder participation in different parts of the future process?

As for the timing of meetings, states also expressed concerns that sessions are too lengthy and that attending numerous thematic sessions and intersessionals will be burdensome for small state delegations. The option to turn some of them into hybrid/virtual meetings was also criticised because states miss the opportunity for in-person interaction onsite. Another way to condense all the activities in 2-3 weeks at once also causes problems as there will be no room for reaching any agreement without properly consulting capital.

Argentina and South Korea asked for a report on the budget implications of the specialised groups, other mechanism initiatives, and the secretariats’ work. 

Finally, Canada, Egypt, the USA, the Philippines, New Zealand, the UK, Malaysia, Switzerland,  Izrael,  Colombia, and Czechia expressed the wish to dedicate more time to discuss the next mechanism at the beginning of the next substantive session. At the same time, Brazil, Argentina and South Africa suggested spending the entire February session on this issue.

What’s next?

As the end of the mandate approaches, with only one more substantive session scheduled in February 2025, the pressure for progress in multiple areas is mounting. 

So far, CBMs and capacity building remain the most uncomplicated topics to discuss and are just waiting to be operationalised. In fact, the OEWG’s schedule for the first quarter of 2025 includes the Global POC Directory simulation exercise and an example template for the Global POC Directory, as well as reports on the Global Ict Security Cooperation And Capacity-Building Portal and the Voluntary Fund. 

The discussion on threats has deepened, maintaining momentum despite occasional tensions between geopolitical rivals. 

However, the discussions on norms and international have been static for quite some time, with deeply entrenched views not budging. RID is currently the most pressing issue if states want to hit the ground running and not get tangled in red tape at the beginning of the next mechanism. 

To expedite discussions on RID, the Chair will put together a discussion paper and make it available to delegations well before the next substantive session in February 2025. The chair will also likely schedule an informal town hall meeting before the February session to hear reactions.

We used our DiploAI system to generate reports and transcripts from the session. Browse them on the dedicated page.

Interested in more OEWG? Visit our dedicated page:

un meeting 2022
UN Open-ended Working Group (OEWG)
This page provides detailed and real-time coverage on cybersecurity, peace and security negotiations at UN Open-Ended Working Group (OEWG) on security of and in the use of information and communications technologies 2021–2025.
un meeting 2022
UN Open-ended Working Group (OEWG)
This page provides detailed and real-time coverage on cybersecurity, peace and security negotiations at UN Open-Ended Working Group (OEWG) on security of and in the use of information and communications technologies 2021–2025.

Quantum leap: The future of computing

If AI was the buzzword for 2023 and 2024, quantum computing looks set to claim the spotlight in the years ahead. Despite growing interest, much remains unknown about this transformative technology, even as leading companies explore its immense potential.

Quantum computing and AI stand as two revolutionary technologies, each with distinct principles and goals. Quantum systems operate on the principles of quantum mechanics, using qubits capable of existing in multiple states simultaneously due to superposition. Such systems can address problems far beyond the reach of classical computers, including molecular simulations for medical research and complex optimisation challenges.

AI and quantum computing intersect in areas like machine learning, though AI still depends on classical computing infrastructure. Significant hurdles remain for quantum technology, including qubit errors and scalability. The extreme sensitivity of qubits to external factors, such as vibrations and temperature, complicates their control.

Quantum computing

Experts suggest quantum computers could become practical within 10 to 20 years. Classical computers are unlikely to be replaced, as quantum systems will primarily focus on solving tasks beyond classical capabilities. Leading companies are working to shorten development timelines, with advancements poised to transform the way technology is utilised.

Huge investments in quantum computing

Investments in quantum computing have reached record levels, with start-ups raising $1.5 billion across 50 funding rounds in 2024. Figure like this one nearly doubles the $785 million raised the previous year, setting a new benchmark. The growth in AI is partly driving these investments, as quantum computing promises to handle AI’s significant computational demands more efficiently.

Quantum computing offers unmatched speed and energy efficiency, with some estimates suggesting energy use could be reduced by up to 100 times compared to traditional supercomputers. As the demand for faster, more sustainable computing grows, quantum technologies are emerging as a key solution.

Microsoft and Atom Computing announce breakthrough

In November 2024, Microsoft and Atom Computing achieved a milestone in quantum computing. Their system linked 24 logical qubits using just 80 physical qubits, setting a record in efficiency. This advancement could transform industries like blockchain and cryptography by enabling faster problem-solving and enhancing security protocols.

Despite the challenges of implementing such systems, both companies are aiming to release a 1,000-qubit quantum computer by 2025. The development could accelerate the adoption of quantum technologies across various sectors, paving the way for breakthroughs in areas such as machine learning and materials science.

Overcoming traditional computing’s limitations

Start-ups like BlueQubit are transforming quantum computing into a practical tool for industries. The San Francisco-based company has raised $10 million to launch its Quantum-Software-as-a-Service platform, enabling businesses to use quantum processors and emulators that perform tasks up to 100 times faster than conventional systems.

Industries such as finance and pharmaceuticals are already leveraging quantum optimisation. Specialised algorithms are addressing challenges like financial modelling and drug discovery, showcasing quantum computing’s potential to surpass traditional systems in tackling complex problems.

Google among giants pushing quantum computing

Google has recently introduced its cutting-edge quantum chip, Willow, capable of solving a computational problem in just five minutes. Traditional supercomputers would require approximately 10 septillion years for the same task.

The achievement has sparked discussions about quantum computing’s link to multiverse theories. Hartmut Neven, head of Google’s Quantum AI team, suggested the performance might hint at parallel universes influencing quantum calculations. Willow’s success marks significant advancements in cryptography, material science, and artificial intelligence.

Commercialisation is already underway

Global collaborations are fast-tracking quantum technology’s commercialisation. SDT, a Korean firm, and Finnish start-up SemiQon have signed an agreement to integrate SemiQon’s silicon-based quantum processing units into SDT’s precision measurement systems.

SemiQon’s processors, designed to work with existing semiconductor infrastructure, lower production costs and enhance scalability. These partnerships pave the way for more stable and cost-effective quantum systems, bringing their use closer to mainstream industries.

Quantum technologies aiding mobile networks

Telefonica Germany and AWS are exploring quantum applications in mobile networks. Their pilot project aims to optimise mobile tower placement, improve network security with quantum encryption, and prepare for future 6G networks.

Telefonica’s migration of millions of 5G users to AWS cloud infrastructure demonstrates how combining quantum and cloud technologies can enhance network efficiency. The project highlights the growing impact of quantum computing on telecommunications.

Addressing emerging risks

Chinese researchers at Shanghai University have exposed the potential threats quantum computing poses to existing encryption standards. Using a D-Wave quantum computer, they breached algorithms critical to modern cryptographic systems, including AES-256, commonly used for securing cryptocurrency wallets.

Although current quantum hardware faces environmental and technical limitations, researchers stress the urgent need for quantum-resistant cryptography. New encryption methods are essential to safeguard digital systems against future quantum-based vulnerabilities.

Quantum computing promises revolutionary capabilities but must overcome significant challenges in scaling and stability. Its progress depends on interdisciplinary collaboration in physics, engineering, and economics. While AI thrives on rapid commercial investment, quantum technology requires long-term support to fulfil its transformative potential.

Overview of AI policy in 10 jurisdictions

Brazil

Summary:

Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspired by the EU’s AI Act, the bill proposes a risk-based framework, categorising AI systems as unacceptable (banned), high risk (strictly regulated), or low risk (less oversight). This effort builds on Brazil’s 2019 National AI Strategy, which emphasises ethical AI that benefits society, respects human rights, and ensures transparency. Using the OECD’s definition of AI, the bill aims to protect people while fostering innovation.

As of the time of writing, Brazil does not yet have any AI-specific regulations with the force of law. However, the country is actively working towards establishing a regulatory framework for artificial intelligence. Brazilian legislators are currently considering the Proposed AI Regulation Bill No. 2338/2023, though the timeline for its adoption remains uncertain.

Brazil’s journey toward AI regulation began with the launch of the Estratégia Brasileira de Inteligência Artificial (EBIA) in 2019. The strategy outlines the country’s vision for fostering responsible and ethical AI development. Key principles of the EBIA include:

  • AI should benefit people and the planet, contributing to inclusive growth, sustainable development, and societal well-being.
  • AI systems must be designed to uphold the rule of law, human rights, democratic values, and diversity, with safeguards in place, such as human oversight when necessary.
  • AI systems should operate robustly, safely, and securely throughout their lifecycle, with ongoing risk assessment and mitigation.
  • Organisations and individuals involved in the AI lifecycle must commit to transparency and responsible disclosure, providing information that helps:
  1. Promote general understanding of AI systems;
  2. Inform people about their interactions with AI;
  3. Enable those affected by AI systems to understand the outcomes;
  4. Allow those adversely impacted to challenge AI-generated results.

In 2020, Brazil’s Chamber of Deputies began working on Bill 21/2020, aiming to establish a Legal Framework of Artificial Intelligence. Over time, four bills were introduced before the Chamber ultimately approved Bill 21/2020.

Meanwhile, the Federal Senate established a Commission of Legal Experts to support the development of an alternative AI bill. The commission held public hearings and international seminars, consulted with global experts, and conducted research into AI regulations from other jurisdictions. This extensive process culminated in a report that informed the drafting of Bill 2338 of 2023, which aims to govern the use of AI.

Following a similar approach to the European Union’s AI Act, the proposed Brazilian bill adopts a risk-based framework, classifying AI systems into three categories:

  • Unacceptable risk (entirely prohibited),
  • High risk (subject to stringent obligations for providers), and
  • Non-high risk.

This classification aims to ensure that AI systems in Brazil are developed and deployed in a way that minimises potential harm while promoting innovation and growth.

Definition of AI 

As of the time of writing, the concept of AI adopted by the draft Bill is that adopted by the OECD: ‘An AI system is a machine-based system that can, for a given set of objectives defined by humans, make predictions, recommendations or decisions that influence real or virtual environments. AI systems are designed to operate with varying levels of autonomy.’

Other laws and official documents that may impact the regulation of AI 

Sources

Canada

Summary:

Canada is progressing toward AI regulation with the proposed Artificial Intelligence and Data Act (AIDA) introduced in 2022 as part of Bill C-27. The Act focuses on regulating high-impact AI systems through compliance with existing consumer protection and human rights laws, overseen by the Minister of Innovation with support from an AI and Data Commissioner. AIDA also includes criminal provisions against harmful AI uses and will define specific regulations in consultation with stakeholders. While the framework is finalised, a Voluntary Code of Conduct promotes accountability, fairness, transparency, and safety in generative AI development.

As of the time of writing, Canada does not yet have AI-specific regulations with the force of law. However, significant steps have been taken toward establishing a regulatory framework. In June 2022, the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022.

As of now, Bill C-27, the Digital Charter Implementation Act, 2022, remains under discussion and continues to progress through the legislative process. Currently, the Standing Committee on Industry and Technology (INDU) has announced that its review of the bill will stay on hold until at least February 2025. See here more details about the entire deliberation process.

The AIDA includes several key proposals:

  • High-impact AI systems must comply with existing Canadian consumer protection and human rights laws. Specific regulations defining these systems and their requirements will be developed in consultation with stakeholders to protect the public while minimising burdens on the AI ecosystem.
  • The Minister of Innovation, Science, and Industry will oversee the Act’s implementation, supported by an AI and Data Commissioner. Initially, this role will focus on education and assistance, but it will eventually take on compliance and enforcement responsibilities.
  • New criminal law provisions will prohibit reckless and malicious uses of AI that could harm Canadians or their interests.

In addition, Canada has introduced a Voluntary Code of Conduct for the responsible development and management of advanced generative AI systems. This code serves as a temporary measure while the legislative framework is being finalized.

The code of conduct sets out six core principles for AI developers and managers: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness. For instance, managers are responsible for ensuring that AI-generated content is clearly labeled, while developers must assess the training data and address harmful biases to promote fairness and equity in AI outcomes.

Definition of AI

At its current stage of drafting, the Artificial Intelligence and Data Act provides the following definitions:

‘Artificial intelligence system is a system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions.’

‘General-purpose system is an artificial intelligence system that is designed for use, or that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes and activities not contemplated during the system’s development.’

‘Machine-learning model is a digital representation of patterns identified in data through the automated processing of the data using an algorithm designed to enable the recognition or replication of those patterns.’

Other laws and official documents that may impact the regulation of AI

Sources 

India

Summary:

India is advancing its AI governance framework but currently has no binding AI regulations. Key initiatives include the 2018 National Strategy for Artificial Intelligence, which prioritises AI applications in sectors like healthcare and smart infrastructure, and the 2021 Principles for Responsible AI, which outline ethical standards such as safety, inclusivity, privacy, and accountability. Operational guidelines released later in 2021 emphasise ethics by design and capacity building. Recent developments include the 2024 India AI Mission, with over $1.25 billion allocated for infrastructure, innovation, and safe AI, and advisories addressing deepfakes and generative AI.

As of the time of this writing, no AI regulations currently carry the force of law in India. Several frameworks are being formulated to guide the regulation of AI, including:

  • The National Strategy for Artificial Intelligence released in June 2018, which aims to establish a strong basis for future regulation of AI in India and focuses on AI intervention in healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation.
  • The Principles for Responsible AI released in February 2021, which serve as India’s roadmap for creating an ethical, responsible AI ecosystem across sectors.
  • The Operationalizing Principles for Responsible AI released in August 2021, which emphasises the need for regulatory and policy interventions, capacity building, and incentivising ethics by design regarding AI.

The Principles for Responsible AI identify the following broad principles for responsible management of AI, which can be leveraged by relevant stakeholders in India:

  • The principle of safety and reliability.
  • The principle of equality.
  • The principle of inclusivity and non-discrimination.
  • The principle of privacy and security.
  • The principle of transparency.
  • The principle of accountability.
  • The principle of protection and reinforcement of positive human values.

The Ministry of Commerce and Industry has established an Artificial Intelligence Task Force, which issued a report in March 2018.

In March 2024, India announced an allocation of over $1.25 billion for the India AI Mission, which will cover various aspects of AI, including computing infrastructure capacity, skilling, innovation, datasets, and safe and trusted AI.

India’s Ministry of Electronics and Information Technology issued advisories related to deepfakes and generative AI in 2024.

Definition of AI

The Principles for Responsible AI describe AI as ‘a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sense, comprehend and act. Computer vision and audio processing can actively perceive the world around them by acquiring and processing images, sound, and speech. The natural language processing and inference engines can enable AI systems to analyse and understand the information collected. An AI system can also make decisions through inference engines or undertake actions in the physical world. These capabilities are augmented by the ability to learn from experience and keep adapting over time.’

Other laws and official documents that may impact the regulation of AI

Sources

Israel

Summary:

Israel does not yet have binding AI regulations but is advancing a flexible, principles-based framework to encourage responsible innovation. The government’s approach relies on ethical guidelines and voluntary standards tailored to specific sectors, with the potential for broader legislation if common challenges arise. Key milestones include a 2022 white paper on AI and the 2023 Artificial Intelligence Regulations and Ethics.

As of the time of this writing, no AI regulations currently carry the force of law in Israel. Israel’s approach to AI governance encourages responsible innovation in the private sector through a sector-specific, principles-based framework. This strategy uses non-binding tools, including ethical guidelines and voluntary standards, allowing for regulatory flexibility tailored to each sector’s needs. However, the policy also leaves room for the introduction of broader, horizontal legislation should common challenges arise across sectors.

A white paper on AI was published in 2022 by Israel’s Ministry of Innovation, Science and Technology in collaboration with the Ministry of Justice, followed by the Policy on Artificial Intelligence Regulations and Ethics published in 2023.  The AI Policy was developed pursuant to a government resolution that tasked the Ministry of Innovation, Science and Technology with advancing a national AI plan for Israel.

Definition of AI

The AI Policy describes an AI system as having ‘a wide range of applications such as autonomous vehicles, medical imaging analysis, credit scoring, securities trading, personalised learning and employment,’ notwithstanding that ‘the list of applications is constantly expanding.’

Other laws and official non binding documents that may impact the regulation of AI

Sources

Japan

Summary:

Japan currently has no binding AI regulations but relies on voluntary guidelines to encourage responsible AI development and use. The AI Guidelines for Business Version 1.0 promote principles like human rights, safety, fairness, transparency, and innovation, fostering a flexible governance model involving stakeholders across sectors. Recent developments include the establishment of the AI Safety Institute in 2024 and the draft ‘Basic Act on the Advancement of Responsible AI,’ which proposes legally binding rules for certain generative AI models, including vetting, reporting, and compliance standards.

At the time of this writing, no AI regulations currently carry the force of law in Japan

The updated AI Guidelines for Business Version 1.0 are not legally binding but are expected to support and induce voluntary efforts by developers, providers and business users of AI systems through compliance with generally recognised AI principles.

The principles outlined by the AI Guidelines are:

  • Human-centric – The utilisation of AI must not infringe upon the fundamental human rights guaranteed by the constitution and international standards.
  • Safety – Each AI business actor should avoid damage to the lives, bodies, minds, and properties of stakeholders.
  • Fairness – Elimination of unfair and harmful bias and discrimination.
  • Privacy protection – Each AI business actor respects and protects privacy.
  • Ensuring security – Each AI business actor ensures security to prevent the behaviours of AI from being unintentionally altered or stopped by unauthorised manipulations.
  • Transparency – Each AI business actor provides stakeholders with information to the reasonable extent necessary and technically possible while ensuring the verifiability of the AI system or service.
  • Accountability – Each AI business actor is accountable to stakeholders to ensure traceability, conforming to common guiding principles, based on each AI business actor’s role and degree of risk posed by the AI system or service.
  • Education/literacy – Each AI business actor is expected to provide persons engaged in its business with education regarding knowledge, literacy and ethics concerning the use of AI in a socially correct manner, and provide stakeholders with education about complexity, misinformation, and possibilities of intentional misuse.
  • Ensuring fair competition – Each AI business actor is expected to maintain a fair competitive environment so that new businesses and services using AI are created.
  • Innovation – Each AI business actor is expected to promote innovation and consider interconnectivity and interoperability.

The Guidelines emphasise a flexible governance model where various stakeholders are involved in a swift and ongoing process of assessing risks, setting objectives, designing systems, implementing solutions, and evaluating outcomes. This adaptive cycle operates within different governance structures, such as corporate policies, regulatory frameworks, infrastructure, market dynamics, and societal norms, ensuring they can quickly respond to changing conditions.

The AI Strategy Council was established to explore ways to harness AI’s potential while mitigating associated risks. On May 22, 2024, the Council presented draft discussion points outlining considerations on the necessity and possible scope of future AI regulations.

A working group has proposed the ‘Basic Act on the Advancement of Responsible AI,‘ which would introduce a hard law approach to regulating certain generative AI foundation models. Under the proposed law, the government would designate which AI systems and developers fall under its scope and impose obligations related to the vetting, operation, and output of these systems, along with periodic reporting requirements. 

Similar to the voluntary commitments made by major US AI companies in 2023, this framework would allow industry groups and developers to establish specific compliance standards. The government would have the authority to monitor compliance and enforce penalties for violations. If enacted, this would represent a shift in Japan’s AI regulation from a soft law to a more binding legal framework.

The AI Safety Institute was launched in February 2024 to examine the evaluation methods for AI safety and other related matters. The Institute is established within the Information-technology Promotion Agency, in collaboration with relevant ministries and agencies, including the Cabinet Office.

Definition of AI

The AI Guidelines define AI as an abstract concept that includes AI systems themselves as well as machine-learning software and programs.

Other laws and official non binding documents that may impact the regulation of AI

Sources

Saudi Arabia

Summary:

Saudi Arabia has no binding AI regulations but is advancing its AI agenda through initiatives under Vision 2030, led by the Saudi Data and Artificial Intelligence Authority. The Authority oversees the National Strategy for Data & AI, which includes developing startups, training specialists, and establishing policies and standards. In 2023, SDAIA issued a draft set of AI Ethics Principles, categorising AI risks into four levels: little or no risk, limited risk, high risk (requiring assessments), and unacceptable risk (prohibited). Recent 2024 guidelines for generative AI offer non-binding advice for government and public use. These efforts are supported by a $40 billion AI investment fund.

At the time of this writing, no AI regulations currently carry the force of law in Saudi Arabia. In 2016, Saudi Arabia unveiled a long-term initiative known as Vision 2030, a bold plan spearheaded by Crown Prince Mohammed Bin Salman. 

A key aspect of this initiative was the significant focus on advancing AI, which culminated in the establishment of the Saudi Data and Artificial Intelligence Authority (SDAIA) in August 2019. This same decree also launched the Saudi Artificial Intelligence Center and the Saudi Data Management Office, both operating under SDAIA’s authority. 

SDAIA was tasked with managing the country’s AI research landscape and enforcing new policies and regulations that aligned with its AI objectives. In October 2020, SDAIA rolled out the National Strategy for Data & AI, which broadened the scope of the AI agenda to include goals such as developing over 300 AI and data-focused startups and training more than 20,000 specialists in these fields.

SDAIA was tasked by the Council of Ministers’ Resolution No. 292 to create policies, governance frameworks, standards, and regulations for data and artificial intelligence, and to oversee their enforcement once implemented.  SDAIA have issued draft AI Ethics Principles in 2023. The document enumerates seven principles with corresponding conditions necessary for their sufficient implementation. They include: fairness, privacy and security, humanity, social and environmental benefits, reliability and safety, transparency and explainability, and accountability and responsibility.

Similar to the EU AI Act, the Principles categorise the risks associated with the development and utilization of AI into four levels with different compliance requirements for each:

  • Little or No Risk: Systems classified as posing little or no risk do not face restrictions, but the SDAIA recommends compliance with the AI Ethics Principles.
  • Limited Risk: Systems classified as limited risk are required to comply with the Principles.
  • High Risk: Systems classified as high risk are required to undergo both pre- and post-deployment conformity assessments, in addition to meeting ethical standards and relevant legal requirements. Such systems are noted for the significant risk they might pose to fundamental rights.
  • Unacceptable Risk: Systems classified as posing unacceptable risks to individuals’ safety, well-being, or rights are strictly prohibited. These include systems that socially profile or sexually exploit children, for instance.

On January 1, 2024, SDAIA released two sets of Generative AI Guidelines. The first is intended for government employees, while the second is aimed at the general public. 

Both documents offer guidance on the adoption and use of generative AI systems, using common scenarios to illustrate their application. They also address the challenges and considerations associated with generative AI, outline principles for responsible use, and suggest best practices. The Guidelines are not legally binding and serve as advisory frameworks.

Much of the attention surrounding Saudi Arabia’s AI advancements is driven by its large-scale investment efforts, notably a $40 billion fund dedicated to AI technology development.

Other laws and official non binding documents that may impact the regulation of AI

Sources

Singapore

Summary:

Singapore has no binding AI regulations but promoted responsible AI through frameworks developed by the Infocomm Media Development Authority (IMDA). Key initiatives include the Model AI Governance Framework, which offers ethical guidelines for the private sector, and AI Verify, a toolkit for assessing AI systems’ alignment with these standards. The National AI Strategy and its 2.0 update emphasise fostering a trusted AI ecosystem while driving innovation and economic growth.

As of the time of this writing, no AI regulations currently carry the force of law in Singapore. Singapore’s AI regulations are largely shaped by the Infocomm Media Development Authority (IMDA), an independent government body that operates under the Ministry of Communications and Information. This statutory board plays a central role in guiding the nation’s approach to artificial intelligence policies and frameworks. IMDA takes a prominent position in shaping Singapore’s technology policies and refers to itself as the ‘architect of the nation’s digital future,’ highlighting its pivotal role in steering the country’s digital transformation.

In 2019, the Smart Nation and Digital Government offices introduced an extensive National AI Strategy, outlining Singapore’s goal to boost its economy and become a leader in the global AI industry. To support these objectives, the government also established a National AI Office within the Ministry to oversee the execution of its AI initiatives.

The Singapore government has developed various frameworks and tools to guide AI deployment and promote the responsible use of AI:

  • The Model AI Governance Framework, that offers comprehensive guidelines to private sector entities on tackling essential ethical and governance challenges in the implementation of AI technologies.
  • AI Verify, was developed by IMDA in collaboration with private sector partners, and supported by the AI Verify Foundation (AIVF) and is a testing framework and toolkit for AI governance, created to assist organisations in assessing the alignment of their AI systems with ethical guidelines using standardised evaluations.
  • The National Artificial Intelligence Strategy 2.0, highlighting Singapore’s vision and dedication to fostering a trusted and accountable AI environment and promoting innovation and economic growth through AI.

Definition of AI

The 2020 Framework defines AI as ‘a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning, and, depending on the AI model, produce an output or decision (such as a prediction, recommendation and/or classification).’

The 2024 Framework defines Generative AI as ‘AI models capable of generating text, images or other media. They learn the patterns and structure of their input training data and generate new data with similar characteristics. Advances in transformer-based deep neural networks enable Generative AI to accept natural language prompts as input, including large language models.’

Other laws and official non binding documents that may impact the regulation of AI

Sources

Republic of Korea

Summary:

The Republic of Korea has no binding AI regulations but is actively developing its framework through the Ministry of Science and ICT and the Personal Information Protection Commission. Key initiatives include the 2019 National AI Strategy, the 2020 Human-Centered AI Ethics Standards, and the 2023 Digital Bill of Rights. Current legislative efforts focus on the proposed Act on the Promotion of AI Industry and Framework for Establishing Trustworthy AI, which adopts a ‘permit-first-regulate-later’ approach to foster innovation while addressing high-risk applications.

As of the time of this writing, no AI regulations currently carry the force of law in the Republic of Korea. However, two major institutions are actively guiding the development of AI-related policies: the Ministry of Science and ICT (MSIT) and the Personal Information Protection Commission (PIPC). While the PIPC concentrates on ensuring that privacy laws keep pace with AI advancements and emerging risks, MSIT leads the nation’s broader AI initiatives. Among these efforts is the AI Strategy High-Level Consultative Council, a collaborative platform where government and private stakeholders engage in discussions on AI governance.

The Republic of Korea has been progressively shaping its AI governance framework, beginning with the release of its National Strategy for Artificial Intelligence in December 2019. This was followed by the Human-Centered Artificial Intelligence Ethics Standards in 2020 and the introduction of the Digital Bill of Rights in May 2023. Although no comprehensive AI law exists as of yet, several AI-related legislative proposals have been introduced to the National Assembly since 2022. One prominent proposal currently under review is the Act on the Promotion of AI Industry and Framework for Establishing Trustworthy AI, which aims to consolidate earlier legislative drafts into a more cohesive approach.

Unlike the European Union’s AI Act, the Republic of Korea’s proposed legislation follows a ‘permit-first-regulate-later’ philosophy, which emphasises fostering innovation and industrial growth in AI technologies. The bill also outlines specific obligations for high-risk AI applications, such as requiring prior notifications to users and implementing measures to ensure AI systems are trustworthy and safe. The MSIT Minister announced the establishment of an AI Safety Institute at the 2024 AI Safety Summit.

Definition of AI

Under the proposed AI Act, ‘artificial intelligence’ is defined as the electronic implementation of human intellectual abilities such as learning, reasoning, perception, judgement, and language comprehension.

Other laws and official non binding documents that may impact the regulation of AI

Sources

UAE

Summary:

The UAE currently lacks binding AI regulations but actively promotes innovation through frameworks like regulatory sandboxes and allowing real-world testing of new technologies under regulatory oversight. AI governance in the UAE is shaped by its complex jurisdictional landscape, including federal laws, Mainland UAE, and financial free zones such as DIFC and ADGM. Key initiatives include the 2017 National Strategy for Artificial Intelligence 2031, managed by the UAE AI and Blockchain Council, which focuses on fairness, transparency, accountability, and responsible AI practices. Dubai’s 2019 AI Principles and Ethical AI Toolkit emphasize safety, fairness, and explainability in AI systems. The UAE’s AI Ethics: Principles and Guidelines (2022) provide a non-binding framework balancing innovation and societal interests, supported by the beta AI Ethics Self-Assessment Tool to evaluate and refine AI systems ethically. In 2023, the UAE released Falcon 180B, an open-source large language model, and in 2024, the Charter for the Development and Use of Artificial Intelligence, which aims to position the UAE as a global AI leader by 2031 while addressing algorithmic bias, privacy, and compliance with international standards.

At the time of this writing, no AI regulations currently carry the force of law in the UAE. The regulatory landscape of the United Arab Emirates is quite complex due to its division into multiple jurisdictions, each governed by its own set of rules and, in some cases, distinct regulatory bodies. 

Broadly, the UAE can be viewed in terms of its Financial Free Zones, such as the Dubai International Financial Centre (DIFC) and the Abu Dhabi Global Market (ADGM), which operate under separate legal frameworks, and Mainland UAE, which encompasses all areas outside these financial zones. Mainland UAE is further split into non-financial free zones and the broader onshore region, where the general laws of the country apply. As the UAE is a federal state composed of seven emirates – Dubai, Abu Dhabi, Sharjah, Fujairah, Ras Al Khaimah, Ajman, and Umm Al-Quwain – each of them retains control over local matters not specifically governed by federal law. The UAE is a strong advocate for “regulatory sandboxes,” a framework that allows new technologies to be tested in real-world conditions within a controlled setting, all under the close oversight of a regulatory authority.

In 2017, the UAE appointed a Minister of State for AI, Digital Economy and Remote Work Applications and released the National Strategy for Artificial Intelligence 2031, with the aim to create the country’s AI ecosystem. The UAE Artificial Intelligence and Blockchain Council is responsible for managing the National Strategy’s implementation, including crafting regulations and establishing best practices related to AI risks, data management, cybersecurity, and various other digital matters.

The City of Dubai launched the AI Principles and Guidelines for the Emirate of Dubai in January 2019. The Principles promote fairness, transparency, accountability, and explainability in AI development and oversight. Dubai introduced an Ethical AI Toolkit outlining principles for AI systems to ensure safety, fairness, transparency, accountability, and comprehensibility.

The UAE AI Ethics: Principles and Guidelines, released in December 2022 under the Minister of State for Artificial Intelligence, provides a non-binding framework for ethical AI design and use, focusing on fairness, accountability, transparency, explainability, robustness, human-centered design, sustainability, and privacy preservation. Drafted as a collaborative, multi-stakeholder effort, the guidelines balance the need for innovation with the protection of intellectual property and invite ongoing dialogue among stakeholders. It aims to evolve into a universal, practical, and widely adopted standard for ethical AI, aligning with the UAE National AI Strategy and Sustainable Development Goals to ensure AI serves societal interests while upholding global norms and advancing responsible innovation.

To operationalise these principles, the UAE has introduced a beta version of its AI Ethics Self-Assessment Tool, designed to help developers and operators evaluate the ethical performance of their AI systems. This tool encourages consideration of potential ethical challenges from initial development stages to full system maintenance and helps prioritise necessary mitigation measures. While non-compulsory, it employs weighted recommendations—where ‘should’ indicates high priority and ‘should consider’ denotes moderate importance—and discourages implementation unless a minimum ethics performance threshold is met. As a beta version, the tool invites extensive user feedback and shared use cases to refine its functionality.

In 2023, the UAE, through the support of the Advanced Technology Research Council under the Abu Dhabi government, released the open-source large language model, Falcon 180B, named after the country’s national bird.

In July 2024, the UAE’s AI, Digital Economy, and Remote Work Applications Office released the Charter for the Development and Use of Artificial Intelligence. The Charter establishes a framework to position the UAE as a global leader in AI by 2031, prioritising human well-being, safety, inclusivity, and fairness in AI development. It addresses algorithmic bias, ensures transparency and accountability, and emphasises innovation while safeguarding community privacy in line with UAE data standards. The Charter also highlights the need for ethical oversight and compliance with international treaties and local regulations to ensure AI serves societal interests and upholds fundamental rights.

Definition of AI

The  AI Office has defined AI as ‘systems or machines that mimic human intelligence to perform tasks and can iteratively improve themselves based on the data they collect’ in the 2023 AI Adoption Guideline in Government Services.

Other laws and official non binding documents that may impact the regulation of AI

Sources

UK

Summary:

The UK currently has no binding AI regulations but adopts a principles-based framework allowing sector-specific regulators to govern AI development and use within their domains. Key principles outlined in the 2023 White Paper: A Pro-Innovation Approach to AI Regulation include safety, transparency, fairness, accountability, and contestability. The UK’s National AI Strategy, overseen by the Office for Artificial Intelligence, aims to position the country as a global AI leader by promoting innovation and aligning with international frameworks. Recent developments, including proposed legislation for advanced AI models and the Digital Information and Smart Data Bill, signal a shift toward more structured regulation. The UK solidified its leadership in AI governance by hosting the 2023 Bletchley Summit, where 28 countries committed to advancing global AI safety and responsible development.

As of the time of this writing, no AI regulations currently carry the force of law in the UK. The UK supports a principles-based framework for existing sector-specific regulators to interpret and apply to the development and use of AI within their domains. The UK aims to position itself as a global leader in AI by establishing a flexible regulatory framework that fosters innovation and growth in the sector. In 2022, the Government  issued an AI Regulation Policy Paper followed by a White Paper in 2023 with the title ‘A Pro-Innovation Approach to AI Regulation.’

The White Paper lists five key principles designed to ensure responsible AI development: 

  1. Safety, Security, and Robustness. 
  2. Appropriate Transparency and Explainability.
  3. Fairness.
  4. Accountability and Governance.
  5. Contestability and Redress.

The UK Government set up an Office for Artificial Intelligence to oversee the implementation of the UK’s National AI Strategy, adopted in September 2021. The Strategy recognises the power of AI to increase resilience, productivity, growth and innovation across the private and public sectors, and sets up a plan for the next decade to position the UK as a world leader in artificial intelligence. The AI office will perform various central functions to support the framework’s implementation, including: 

  1. monitoring and evaluating the overall efficacy of the regulatory framework;
  2. assessing and monitoring risks across the economy arising from AI;
  3. promoting interoperability with international regulatory frameworks.

Shifting away from the flexible regulatory approach, In July 2024, King Charles III suggested plans to enact legislation requiring developers of the most advanced AI models to meet specific standards. Additionally, the announcement included the Digital Information and Smart Data Bill, which will reform data-related laws to ensure the safe development and use of emerging technologies, including AI. The details of how these measures will be implemented remain unclear.

The UK hosted in November 2023 the Bletchley Summit, positioning itself as a leader in fostering international collaboration on AI safety and governance. At the Summit, a landmark declaration was signed by 28 countries, committing to collaborate on managing the risks of frontier AI technologies, ensuring AI safety, and advancing responsible AI development and governance globally.

Definition of AI

The White Paper describes AI as ‘products and services that are ‘adaptable’ and ‘autonomous.”

Other laws and official non binding documents that may impact the regulation of AI

Sources

The US clock is ticking (for) TikTok

The ongoing TikTok legal saga in the USA has entered its most delicate phase yet, with a federal appeals court ruling to uphold a law that could force its Chinese parent company, ByteDance, to divest TikTok’s US operations by 19 January 2025 or face an unprecedented ban.

TikTok must now urgently appeal to the Supreme Court to either block or reverse a law mandating ByteDance’s sale of the popular short-video platform by 19 January, following an appeals court’s recent decision to deny additional time. TikTok and ByteDance submitted an emergency request to the US Court of Appeals for the District of Columbia, seeking an extension to present their arguments before the US Supreme Court.

With 170 million American users and billions in ad revenue, the platform, a digital giant particularly beloved by younger generations, now stands on the edge of a ban in its largest foreign market. At the centre of this unprecedented conflict lies a confluence of national security concerns, free speech debates, and economic implications far beyond TikTok.

The incipit of the current conflict can be traced back to 2020 when then-President Donald Trump attempted to ban TikTok and Chinese-owned WeChat, citing fears that Beijing could misuse Americans’ data or manipulate public discourse through the platforms. The courts blocked Trump’s effort, and in 2021, President Joe Biden revoked the Trump-era orders. Yet bipartisan concerns about TikTok’s ties to the Chinese government remain. Lawmakers and US intelligence agencies have long raised alarms about the vast amount of data TikTok collects on its American users and the potential for Beijing to exploit this information for espionage or propaganda. This year, Congress passed a bill with overwhelming support requiring ByteDance to divest its US assets, marking the strictest legal threat the platform has ever faced.

 Person, Bulldozer, machine, Text

The recent appeals court decision to uphold the law has been seen as necessary by Biden’s administration to protect US national security. The ruling cited the ‘well-substantiated threat’ posed by the Chinese government’s relationship with ByteDance, arguing that China’s influence over TikTok is fundamentally at odds with American free speech principles. Attorney General Merrick Garland praised the decision, calling it a crucial step in ‘blocking the Chinese government from weaponising TikTok.’ However, critics of the ruling, including free speech advocates and TikTok itself, have pushed back. The American Civil Liberties Union (ACLU) warned that banning the app would violate the First Amendment rights of millions of Americans who rely on TikTok to communicate and express themselves.

TikTok has vowed to appeal to the Supreme Court to halt the ruling before the 19 January deadline. Consequently, the Supreme Court’s decision will determine whether the platform will survive under ByteDance’s ownership or face a US ban. However, suspicions and obstacles loom even if ByteDance attempts to sell TikTok’s US operations. Any divestiture would need to prove the app is wholly independent of Chinese control—a requirement China’s laws make nearly impossible. ByteDance’s prized algorithm, the key to TikTok’s success, is classified as a technology export by Beijing and cannot be transferred without Chinese government approval.

TikTok, Person, People, Computer Hardware, Electronics, Hardware, Art

On the other hand, the economic consequences of a TikTok ban could be profound. Advertisers, who have collectively poured billions into the platform, are closely monitoring the situation. While brands are not yet pulling their marketing budgets, many are developing contingency plans to shift ad spending to rivals like Meta-owned Instagram, Alphabet’s YouTube, and Snap. These platforms, all of which have rolled out short-form video features to compete with TikTok, stand to reap enormous benefits if TikTok disappears from the US landscape. Meta’s stock price soared to an all-time high following the court ruling, reflecting investor optimism that its platforms will absorb TikTok’s market share.

Content creators and small businesses that rely on the app for income now face an uncertain future. Many influencers urge followers to connect with them on alternative platforms like Instagram, YouTube, and X (formerly Twitter) in case TikTok is banned. For small businesses, the situation is equally hard. TikTok’s integrated commerce feature, TikTok Shop, has exploded in popularity since its US launch in September 2023. This year, the platform generated $100 million in Black Friday sales, offering brands a unique and lucrative e-commerce channel. For merchants who have invested in TikTok Shop, a ban would mean losing a critical revenue stream with no comparable alternative.

 Electronics, Phone, Mobile Phone, Smoke Pipe

Yet TikTok’s rise in the US has transformed digital advertising and e-commerce and reshaped global supply chains. Like competitors Shein and Temu, TikTok Shop has connected American consumers with low-cost vendors, many of whom ship products directly from China. This dynamic reflects the extensive economic tensions underpinning the TikTok controversy. The USA, wary of China’s growing tech influence, has imposed strict export controls on Chinese technology and cracked down on perceived threats to its national security. Beijing, in turn, has retaliated with bans on critical minerals and stricter oversight of technologies leaving its borders. TikTok has become the latest and most visible symbol of this escalating US-China tech war.

The path forward is fraught with uncertainty. President Biden, whose administration has led the charge against TikTok, can extend the 19 January deadline by 90 days if he determines that a divestiture is in progress. This alternative would push the final decision to President-elect Donald Trump, who has offered mixed messages about his stance on TikTok. While Trump previously sought to ban the app, he now claims he would not enforce the new law. Nevertheless, the legislation has broad bipartisan support, making it unlikely that a new administration could simply ignore it. Tech companies, meanwhile, face legal risks if they continue to provide services to TikTok after the deadline. App stores like Apple and Google and internet hosting providers could face billions in fines if they fail to comply.

TikTok has launched Symphony Creative Studios globally, helping advertisers create customised, high-quality content through advanced AI tools.

The Chinese government’s role adds another layer of complexity. Beijing has fiercely opposed US efforts to force ByteDance into a sale, framing the TikTok dispute as a ‘commercial robbery’ designed to stifle China’s technological ambitions. By classifying TikTok’s algorithm as a protected export, China has clarified that any divestiture will be a lengthy and politically charged process if it happens at all. Either way, it leaves ByteDance caught between two powerful governments with irreconcilable demands.

For now, TikTok remains fully operational in the US, and its users continue to scroll, create, and shop as usual. However, the next few weeks will determine whether TikTok can escape its existential question or join the growing list of casualties in the US-China tech war. The outcome will shape the future of one of the world’s most influential social media platforms and set a precedent for how governments regulate foreign-owned technology in an era defined by digital dominance and geopolitical rivalry. Whether through divestiture, court intervention, or an outright ban, TikTok’s fate in the US marks a turning point in the ongoing struggle to balance national security, economic interests, and the free flow of information in an inter(net)connected world.

The evolution of the EU consumer protection law: Adapting to new challenges in the digital era

What is EU consumer law?

The first mention of consumer law in the EU was in the context of competition law in 1972 when policymakers started to pave the way to protect consumers in policy. Despite the lack of a legal treaty basis, many regulatory initiatives started to take shape to protect consumers (food safety, prevention of doorstep selling, and unfair contract terms). 

The first treaty-based mention of a specific consumer protection article was in the 1992 Maastricht treaty. Nowadays, the EU consumer law is one of the most and better developed substantive fields of the EU law.

As contained in the Consolidated Version of the Treaty on the Functioning of the European Union (the treaty that regroups all previous European Union treaties before 2009), Article 169 specifically refers to consumer protection. Article 169(1) reads as follows:

‘In order to promote the interests of consumers and to ensure a high level of consumer protection, the Union shall contribute to protecting the health, safety and economic interests of consumers, as well as to promoting their right to information, education and to organise themselves in order to safeguard their interests.’

Given its history, it has long been established that consumer law purports to guarantee and protect the autonomy of the individual who appears in the market without any profit-market intentions.  Beyond the goals set out in Article 169 TFEU, four main directives govern areas of consumer law, the 1985 Product Liability Directive, the 1993 Unfair Terms in Consumer Contracts Directive, the 2011 Consumer Rights Directive, and the subject of this analysis, the 2005 Unfair Commercial Practices Directive.

Since then, there have been numerous amendments to the EU’s consumer protection legislative framework. The main amendment in consumer law includes the adoption of the Modernisation Directive.

EU flags in front of European Commission

Adopted on 27 November 2019, it amended four existing directives, the UCPD, the Price Indication Directive 98/6/EC, the Unfair Contract Term Directive 93/13/EEC, and the Consumer Rights Directive 2011/83/EU. Even more recently, there have been specific proposals for amendments to the UCPD concerning environmental advertising, known as greenwashing, in line with furthering the European Union’s Green Deal.

What is UCP?

An unfair commercial practice (UCP) is a misleading practice (whether deliberate actions or omissions of information), aggressive or prohibited by law (blacklisted in Annex I UCPD). A UCP interferes with consumers’ free choice to determine something for themselves and affects their decision-making power.

Prohibited UCPs are explained in Article 5 of the UCPD.  It outlines that a UCP will be prohibited if it is contrary to professional diligence and materially distorts the average consumer’s economic behaviour. The EU clearly outlines and recalls that there are two main categories of UCPs, with examples for both:

  • First, misleading practices through action (giving false information) or omission (leaving out important information).
  • Second, aggressive practices aimed at bullying consumers into buying a product.

Some examples of UCPs are bait advertising, non-transparent search results ranking, free claims about cures, false green claims or greenwashing, certain game ads, false offers, and persistent unwanted calls. There is no exhaustive list of what a UCP may be, especially in the digital context where technology is rapidly changing the way we behave towards one another.

This is especially evident in the case of the use of AI. AI is a buzzword that is often impossible to avoid nowadays. Computer Science Professor at Standford University, Dr Fei-Fei Li, said that ‘AI is everywhere. It’s not that big, scary thing in the future. AI is here with us.’ 

AI is used in UCPs to improve and streamline emotional, behavioural, and other types of targeting. Data can be collected using AI (scraping website reviews or analysing consumer trends), and this information can be leveraged against consumers to influence their decision-making powers, ultimately furthering the commercial goals of traders, potentially to the detriment of the interests of consumers.

EU consumer protection

When influencing a consumer’s decision-making powers, AI will often employ measures to deceive and manipulate users to get them to influence their decision-making, thus breaching the UCPD. However, these violations often go unnoticed since most people are unaware of UCPD or dark patterns.

Therefore, UCPs are practices that manipulate consumer choices in a certain way, and the advancement of AI widens the gap between consumers and their freedom to decide what they want without them even knowing it.

What is the UCPD?

As part of consumer law and as already stated, this analysis will focus on the UCPD and its recent amendments.

The origin of the UCPD

The UCPD was not the original legislation governing the protection of UCP in the EU. The first law relating to UCPs was adopted in 2005 and amended the 1984 Misleading and Comparative Advertising Directive. Its scope grew from amendment to amendment, and at its core, the directive has always been based on the prohibition of practices contrary to the requirements of professional diligence as contained in Article 2(h) UCPD:

Professional diligence ‘means the standard of special skill and care which a trader may reasonably be expected to exercise towards consumers, commensurate with honest market practice and/or the general principle of good faith in the trader’s field of activity’.

The UCPD was introduced to establish a fully harmonised legal framework for combatting unfair business-to-consumer practices across member states. This entailed introducing legislation harmonising different pre-existing laws to form a cohesive and understandable legal framework. This harmonisation not only combined existing legislation whilst introducing some key amendments but also provided legal certainty by having one centralised document to consult when dealing with unfair commercial practices in the EU.

One of the major drawbacks from a member state’s perspective is that the UCPD has a full harmonisation effect (meaning that member states cannot introduce more or less protection through national legislation efforts). It implied that member states could not introduce the measures they deemed to be necessary to protect consumers against UCP. Member states do have some discretion to implement UCP national legislation in certain sectors such as contract law, health and safety aspects of products, and legislation on regulated professions, but for the most part, they cannot introduce their own pieces of legislation concerning UCPs.

The goals and objectives of the UCPD are twofold. First, it aims to contribute to the internal market by removing obstacles to cross-border trade in the EU. Secondly, it seeks to ensure high consumer protection by shielding consumers from practices that distort their economic decisions and by prohibiting unfair and non-transparent practices.

The UCPD has a blacklist in Annex I with all the prohibitions it includes. A trader cannot employ any of the practices listed in Annex I, and if they do, they are in breach of the UCPD. There is no need to assess the practice, the potential economic distortion or the average consumer. If a trader engages in a practice listed in Annex I of the UCPD, that behaviour is strictly prohibited.

Past amendments to the UCPD

Before the UCPD was implemented, EU member states had their own national legislations and practices regarding consumer law and specifically, UCP. However, this could cause issues for traders trying to sell goods to consumers as they had to consult many legal texts.

By consolidating all of these rules, changing some and adding new ones, the EU could codify UCP in a single document. This helps promote fairness and legal certainty across the EU. The UCPD has been amended several times since it was first published in the Official Journal of the European Union.

These amendments have covered several changes to enhance consumer protection and include the following: marketing of dual-quality products, individual redress, fines for non-compliance, reduced full harmonisation effect of the directive, and information duties in the online context. In essence, these amendments aim to improve the state of consumer law and protect consumers in the EU. Below is a summary of these amendments in more detail.

Marketing of dual quality products: dual quality refers to the issue of some companies selling products in different member states under the same (or similar) branding and packaging but with different compositions. There is currently no explanation of any objective justifications for the marketing of dual-quality products to be allowed under the directive, as there is no explanation of any possible objective criteria.

The directive’s preamble (non-binding but still influenceable) refers to certain examples where the marketing of dual-quality products is permitted. This can be permitted by national legislation, availability or seasonality of raw materials, voluntary strategies to improve access to healthy and nutritious food, and offering goods of the same brand in packages of different weights or volumes in different geographical markets.

Individual redress: a key aspect of these amendments is setting up individual remedies for consumers that did not exist previously. This harmonises remedy efforts across the EU, as many member states did not have individual consumer remedies. Article 11(a) of the directive will propose minimum harmonising remedies, meaning that member states can introduce legislation to further consumer protection.

Fines: the amendments introduced penalties and fines changed compared to the previous UCPD. The new amendments set out criteria for imposing penalties. It is a long list in article 13(2) of the directive. In addition to these criteria, the new amendment proposed that 4% of the EU’s global annual turnover should be the maximum fine for widespread infringement.

Reduced full harmonisation: the amendments also introduced limits to the somewhat controversial full harmonisation of the UCPD. They limited the harmonisation in 2 cases. The first concerns commercial excursions known as ‘Kaffeabrten‘ in Germany. These are low-cost excursions for the elderly where UCP sales occur, such as deception and aggressive sales tactics.

The second concerns commercial practices involving unsolicited visits by a trader to a consumer’s home. If member states wish to introduce legislation to this effect, they must inform the European Commission, which has to inform traders (as part of the information obligation) on a separate, dedicated website.

Recent amendments to the UCPD

The UCPD is not an entrenched directive that cannot be amended. This is evident from its amendment in 2019 and the more recent 2024 amendments.  The new proposal introduces two amendments that would add to the existing list of practices considered misleading if they cause or are likely to cause the average consumer to make a transactional decision they would not otherwise make in the context of environmental matters.

  • The first amendment concerns environmental claims related to future environmental performances without clear, objective, and publicly available commitments.
  • The second amendment relates to irrelevant advertising benefits for consumers that do not derive from any feature of the product or service.

Additionally, new amendments to the ‘blacklist’ in Annex I have been proposed. A practice added to the blacklist entails it to be considered as unfair in all circumstances. These amendments relate to environmental matters associated with the European Green Deal and aim to reduce the effect of ‘greenwashing’. These amendments include:

  • Displaying a sustainability label that is not based on a certification scheme or not established by public authorities.
  • Making a generic environmental claim for which the trader is not able to demonstrate recognised excellent environmental performance relevant to the claim.
  • Making an environmental claim about the entire product or the trader’s business when it concerns only a certain aspect of the product or a specific activity.
  • Claiming, based on the offsetting of greenhouse gas emissions, that a product has a neutral, reduced or positive impact on the environment in terms of greenhouse gas emissions.

The focus of the new amendments is evidently to reduce environmental misconceptions that consumers may have about a product, as businesses greenwash products to mislead them into choosing them. This aims to protect consumers in the EU so that they can make an informed choice about whether a product contributes to environmental goals or not without being manipulated or misled into believing that it is because of the use of an environmental colour (green) or an ambiguous title (sustainable).

Final thoughts

The level of consumer law protection in the EU is ever-evolving, always aiming to reach higher and higher peaks. This is reflected in the EU’s efforts to amend and strengthen the legislation that protects us consumers.

Past amendments aim to clarify doubtful areas of consumer law, such as what information should be provided and where member states can legislate on UCPs, reducing the effect of full harmonisation. These amendments also introduced new and important notions such as redress mechanisms for individual consumers along with criteria for fines.

 Architecture, Building, Office Building, City, Urban, High Rise, Condo, Housing, Handrail, Metropolis, Apartment Building, Window

The more recent amendments target trader’s actions towards misleading greenwashing practices. Hopefully, these greenwashing amendments will help consumers make their own informed choices and help make the EU more sustainable by cracking down on the use of misleading, sustainable, and unfair commercial practices.

Given that amendments only took place in 2024, it is unlikely that there will be any new amendments to the UCPD any time soon. However, in the years to come, there are bound to be new proposals, potentially targeting the intersection of AI and unfair commercial practices.

Are AI safety institutes shaping the future of trustworthy AI?

Summary

As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunities and risks. Beyond traditional regulatory frameworks, strategies include substantial investments in research, global standard setting, and international collaboration. A key development has been the establishment of AI safety institutes (AISIs), which aim to evaluate and verify AI models before public deployment, among other functions.

In November 2023, the UK and the USA launched their AI Safety Institutes, setting an example for others. In the following months, Japan, Canada, and the European Union followed suit through its AI Office. This wave of developments was further reinforced at the AI Seoul Summit in May 2024, where the Republic of Korea and Singapore introduced their institutes. Meanwhile, Australia, France, and Kenya announced similar initiatives.

Except for the EU AI Office, all other AI safety institutes established so far need more regulatory authority. Their primary functions include conducting research, developing standards, and fostering international cooperation. While AISIs have the potential to make significant advancements, they are not without challenges. Critics highlight issues such as overlapping mandates with existing standard-making bodies like the International Organization for Standardization that may create inefficiencies and the risk of undue industry influence shaping their agendas. Others argue that the narrow focus on safety sidelines broader risks, such as ethical misuse, economic disruption, and societal inequality. Some also warn that this approach could stifle innovation and competitiveness, raising concerns about balancing safety with progress.

Introduction

The AI revolution, while built on decades-old technology, has taken much of the world by surprise, including policymakers. The EU legislators, for instance, have had to scramble to update their advanced legal drafts to account for the rise of generative AI tools like ChatGPT. The risks are considerable, ranging from AI-driven disinformation, autonomous systems causing ethical dilemmas, potential malfunctions, and loss of oversight to cybersecurity vulnerabilities. The World Economic Forum’s Global Cybersecurity Outlook 2024 reports that half of industry leaders in sectors such as finance and agriculture view generative AI as a major cybersecurity threat within two years. These concerns, coupled with fears of economic upheaval and threats to national security, make clear that swift and coordinated action is essential.

The European Union’s AI Act, for instance, classifies AI systems by risk and mandates transparency along with rigorous testing protocols (among other requirements). Other regions are drafting similar legislation, while some governments opt for voluntary commitments from industry leaders. These measures alone cannot address the full scope of challenges posed by AI. In response, some countries have created specialised AI Safety Institutes to fill critical gaps. These institutes are meant to provide oversight and also advance empirical research, develop safety standards, and foster international collaboration – key components for responding to the rapid evolution of AI technologies.

In May 2024, a significant advancement in global AI safety collaboration was achieved by establishing the International Network of AI Safety Institutes. This coalition brings together AI safety institutions from different regions, including Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK, and the USA. 

In November 2024, the International Network of AI Safety Institutes convened for its inaugural meeting, marking an important step in global collaboration on AI safety. Discussions centred on advancing research, developing best practices for model testing, promoting global inclusion and knowledge-sharing, laying the foundation for future initiatives ahead of the AI Action Summit in Paris in February 2025.

The first wave of AI safety institutes, established primarily by developed nations, has centred on safeguarding national security and reinforcing democratic values. As other countries establish their institutes, whether they will replicate these models or pursue alternative frameworks more attuned to local needs and contexts remains unclear. As in other digital policy areas, future initiatives from China and India could potentially serve as influential models. 

Furthermore, while there is widespread consensus on the importance of key concepts such as ‘AI ethics,’ ‘human oversight,’ and ‘responsible AI,’ their interpretation often varies significantly. These terms are frequently moulded to align with individual nations’ political and cultural priorities, resulting in diverse practical applications. This divergence will inevitably influence the collaboration between AI safety institutes as the global landscape grows increasingly varied.

Finally, a Trump presidency in the USA, with its expected emphasis on deregulation, a more detached US stance toward multilateral institutions, and heightened focus on national security and competitiveness, could further undermine the cooperation needed for these institutes to achieve meaningful impact on AI safety.

Overview of AI safety institutes

The UK AI Safety Institute

Established: In November 2023, with a mission to lead international efforts on AI safety governance and develop global standards. Backed by £100 million in funding through 2030, enabling comprehensive research and policy development.

Key initiatives:
– In November 2024, the UK and the US AI safety institutes jointly evaluated Anthropic’s updated Claude 3.5 Sonnet model, testing its biological, cyber, and software capabilities. The evaluation found that the model provided ‘answers that should have been prevented’ when tested on jailbreaks or actions that produce a response from a model that is intended to be restricted.

– Researched and created structured templates, such as the ‘inability’ template, to demonstrate AI systems’ safety within specific deployment contexts.

– Released tools like Inspect Evals to evaluate AI systems.
Offers up to £200,000 in grants for researchers advancing systemic AI safety.

– Partnered with institutes in the US and France to develop safety frameworks, share research insights, and foster talent exchange.

– Expanded globally with a San Francisco office and published major studies, such as the International Scientific Report on Advanced AI Safety.

The UK AI Safety Institute, launched in November 2023 with £100 million in funding through 2030, was created to spearhead global efforts in AI safety. Its mission centres on establishing robust international standards and advancing cutting-edge research. Key initiatives include risk assessments of advanced AI models (so-called ‘frontier models’) and fostering global collaboration to align safety practices. The institute’s flagship event, the Bletchley Park AI Safety Summit, highlighted the UK’s approach to tackling frontier AI risks, focusing on technical and empirical solutions. Frontier AI is being described as follows in the Bletchely declaration

‘Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models. Substantial risks may arise from potential intentional misuse or unintended control issues relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are, therefore, hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology and where frontier AI systems may amplify risks such as disinformation.

However, this narrow emphasis has drawn criticism, questioning whether it sufficiently addresses AI’s broader, everyday challenges.

At the 2024 Stanford AI+Policy Symposium, Oliver Ilott, Director of the AI Safety Institute, articulated the UK’s vision for AI governance. He underscored that AI risks are highly context- and scenario-specific, arguing that no single institution could address all the challenges AI presents. ‘Creating such an entity would be like duplicating government itself,’ Ilott explained, advocating instead for a cross-governmental engagement where each sector addresses AI risks relevant to its domain. This approach highlights the UK’s deliberate choice to concentrate on ‘frontier harms’ – the most advanced and potentially existential AI threats – rather than adopting the broader, risk-based regulatory model championed by the EU.

The Bletchley Park AI Safety Summit reinforced this philosophy, with participating countries agreeing on the need for a ‘technical, empirical, and measurable’ understanding of AI risks. Ilott noted that the ‘core problem for governments is one of ignorance,’ cautioning that policymakers risk being perpetually surprised by rapid AI advancements. While high-profile summits elevate the political discourse, Ilott stressed that consistent technical work between these events is critical. To this end, the UK institute has prioritised building advanced testing capabilities and coordinating efforts across the government to ensure preparedness.

The UK’s approach diverges significantly from the EU’s more comprehensive, risk-based framework. The EU has implemented sweeping regulations addressing various AI applications, from facial recognition to general-purpose systems. In contrast, the UK’s more laissez-faire policy focuses narrowly on frontier technologies, promoting flexibility and innovation. The Safety Institute, with its targeted focus on addressing frontier risks, illustrates the UK’s approach. However, this narrow focus may leave gaps in governance, overlooking pressing issues like algorithmic bias, data privacy, and the societal impacts of AI already integrated into daily life.

Ultimately, the long-term success of the UK AI Safety Institute depends on the government’s ability to coordinate effectively across departments and to ensure that its focus does not come at the expense of broader societal safeguards. 

The US AI Safety Institute

Established: In 2023 under the National Institute of Standards and Technology, with a US$10 million budget, with a focus on empirical research, model testing, and safety guidelines.

Key initiatives:
– In November 2024, the US Artificial Intelligence Safety Institute at the US Department of Commerce’s National Institute of Standards and Technology announced the formation of the Testing Risks of AI for National Security Taskforce, which brings together partners from across the US government to identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology. 

– Conducted joint pre-deployment evaluations (Anthropic’s Claude 3.5 model).

– Launched the International Network of AI Safety Institutes to foster international collaboration, with an inaugural convening in San Francisco in November 2024.

– Issued guidance documents, requested input on chemical/biological AI risks, and formed a consortium with over 200 stakeholders to advance AI safety.
Signed agreements with entities like Anthropic and OpenAI to enhance research and evaluation efforts.

– Expanded leadership and outlined a strategic vision for global cooperation, aligning with the Biden administration’s AI Executive Order.

The US AI Safety Institute, established in 2023 under the National Institute of Standards and Technology with a US$10 million budget, is a critical component of the US’s approach to AI governance. Focused on empirical research, rigorous model testing, and developing comprehensive safety guidelines, the institute has sought to bolster national and global AI safety. Elizabeth Kelly, the institute’s director, explained at the 2024 AI+Policy Symposium, ‘AI safety is far from straightforward and filled with many open questions.’ She underscored the institute’s dual objective of addressing future harms while simultaneously mitigating present risks, emphasising that ‘safety drives innovation’ and that a robust safety framework can fuel healthy competition.

Kelly highlighted the collaborative nature of the US approach, which involves working closely with agencies like the Department of Energy to leverage specialised expertise, particularly in high-stakes areas such as nuclear safety. The institute’s priorities include fundamental research, advanced testing and evaluation, and developing standards for content authentication, like watermarking, to combat AI-generated misinformation. According to Kelly, the institute’s success hinges on building ‘an AI safety ecosystem larger than any single government,’ underscoring a vision for broad, cross-sectoral engagement.

The institute’s strategy emphasises a decentralised and adaptive model of governance. By leveraging the expertise of various federal agencies, the US approach aims to remain nimble and responsive to emerging risks. Similar to the UK approach, this model contrasts the European Union’s AI Office, where AI Safety is just one of the five specialised units supported by two advisory roles. The EU AI Office distinguishes itself from other AI Safety Institutes by adopting a centralised and hierarchical model with a strong focus on compliance and harmonisation across the EU member states. Being part of a centralised structure, the AI Safety unit may face delays in responding to rapidly emerging challenges due to its reliance on more rigid decision-making processes.

The US model’s flexibility supports innovation but may leave gaps in areas such as ethical governance and long-term accountability. The Institute operates under a presidential order, making its directives susceptible to shifts in political priorities. The election of Donald Trump for a new mandate introduces significant uncertainty into the institute’s future. Given Trump’s history of favouring deregulation, his administration could alter or dismantle the institute’s initiatives, reduce funding, or pivot away from stringent AI oversight. Such a shift could undermine progress in AI safety and lead to inconsistencies in governance, particularly if policies become more relaxed or innovation-focused at the expense of rigorous safety measures.

A repeal of Biden’s AI Executive Order appears likely, signalling shifts in AI policy priorities. Yet, Trump’s earlier AI executive orders emphasised civil liberties, privacy, and trustworthy AI alongside innovation, and it is possible that his future policy initiatives could maintain this balance.

Ultimately, the future of the US AI Safety Institute will depend on whether it can secure more permanent legislative backing to withstand political fluctuations. Elon Musk, a tech billionaire entrepreneur and a prominent supporter of Trump,  advocates extensively to shift the focus of the AI policy debate to existential AI risks, and these efforts might also impact the work of the US AI Safety Institute.

Japan’s AI Safety Institute

Established: In 2024, under the Council for Science, Technology, and Innovation, as part of the G7 Hiroshima AI Process.

Key initiatives:
– Conducts surveys, evaluates AI safety methods, and develops standards while acting as a central hub for collaboration between industry, academia, and AI safety-related organisations in Japan.

– Addresses a wide range of AI-related issues, including social impact, AI systems, data governance, and content, with flexibility to adapt to global trends.

– Focuses on creating safety assessment standards, exploring anti-disinformation tools, cybersecurity measures, and developing a testbed environment for AI evaluation.

– Engages in global collaboration with the AI safety institutes in the UK and USA to align efforts and share expertise.

The Japan AI Safety Institute plays a central role in the nation’s AI governance strategy, aligning its efforts with Japan’s broader commitments under the G7 Hiroshima AI Process. Operating under the Council for Science, Technology, and Innovation, the institute is dedicated to fostering a safe, secure, and trustworthy AI ecosystem.

Akiko Murakami, Executive Director of the institute, emphasised at the 2024 AI+Policy Symposium the need to ‘balance innovation and regulation,’ underscoring that AI safety requires both interagency efforts and robust international collaboration. Highlighting recent progress, she referenced the agreement on interoperable standards reached during the US-Japan Summit in April 2024, underscoring Japan’s commitment to global alignment in AI governance.

Murakami explained that the institute’s approach stands out in terms of integrating private sector expertise. Many members, including leadership figures, participate part-time while continuing their roles in the industry. This model promotes a continuous exchange of insights between policy and practice, ensuring that the institute remains attuned to real-world technological advancements. However, she acknowledged that the institute faces challenges in setting traditional key performance indicators due to the rapid pace of AI development, suggesting the need for ‘alternative metrics’ to assess success beyond conventional safety benchmarks.

The Japan AI Safety Institute’s model prioritises flexibility, real-world industry engagement, and collaboration. The institute benefits from up-to-date expertise and insights by incorporating part-time private sector professionals, making it uniquely adaptable. This hybrid structure differs significantly from the centralised model of the US AI Safety Institute, which relies on federal budgets and agency-specific mandates to drive empirical research and safety guidelines. Japan’s model is also distinct from the European Union’s AI Office, which, besides the AI Safety Unit, has broad enforcement responsibilities of the AI Act across all member states and from the UK’s primary focus on frontier risks.

Zooming out from the AI safety institutes and examining each jurisdiction’s broader AI governance systems reveals differences in approaches. The EU’s governance is defined by its top-down regulatory framework, exemplified by ex-ante regulatory frameworks such as the AI Act, which aims to enforce uniform risk-based oversight across member states. In contrast, Japan employs a participatory governance model integrating government, academia, and industry through voluntary guidelines such as the Social Principles of Human-Centric AI. This strategy fosters flexibility, with stakeholders contributing directly to policy developments through ongoing dialogues; however, the reliance on voluntary standards risks weaker enforcement and accountability. The USA takes an agency-driven, sector-specific approach, emphasising national security and economic competitiveness while leaving the broader AI impacts less regulated. The UK is closer to the US approach, with an enhanced focus on frontier risks addressed mostly through empirical research and technical safeguards. 

Japan’s emphasis on international collaboration and developing interoperable standards is a strategic choice. By actively participating in global efforts and agreements, Japan positions itself as a key player in shaping the international AI safety landscape. 

While the Hiroshima AI Process and partnerships like the one with the USA are central to Japan’s strategy, they also make its success contingent on stable international relations. If geopolitical tensions were to rise or if global cooperation were to wane, Japan’s AI governance efforts could face setbacks. 

Singapore’s AI Safety Institute 

Funding:  $50 million grant, starting from October 2022.

Key initiatives:
– Focuses on rigorous evaluation of AI systems, including generative AI, to address gaps in global AI safety science.

– Develops frameworks for the design, development, and deployment of safe and reliable AI models.

– Researches and implements methods to ensure the accuracy and reliability of AI-generated content.

– Provides science-based input for AI governance and contributes to international AI safety frameworks.

– Works with other AI safety institutes, including those in the USA and UK, to advance shared goals in AI safety and governance.

– Led the launch of the ASEAN Guide on AI Governance and Ethics to address regional AI safety needs cohesively and interoperably.

Unlike the US and the UK that established new institutions, Singapore repurposed an existing government body, the Digital Trust Centre. At the time of this writing, not enough information is publicly available to assess the work of the Centre. 

Canada’s AI Safety Institute

Established: November 2024, as part of Canada’s broader strategy to ensure the safe and responsible development of AI. Funding: C$50 million.

Key initiatives:
– CAISI operates under Innovation, Science and Economic Development Canada (ISED) and collaborates with the National Research Council of Canada (NRC) and the Canadian Institute for Advanced Research (CIFAR).

– It conducts applied and investigator-led research through CIFAR and government-directed projects to address AI safety risks.

– Plays a key role in the International Network of AI Safety Institutes, contributing to global efforts on AI safety and co-developing guidance for responsible AI practices.

– Supporting Canada’s Pan-Canadian Artificial Intelligence Strategy, the Artificial Intelligence and Data Act (Bill C-27), and voluntary codes of conduct for advanced AI systems.

As of this writing, more publicly available information is needed to evaluate the work of the Institute, which was only recently established.

European Union’s AI Office

Established: January 2024, the European Commission has launched an AI innovation package to support startups and SMEs in developing trustworthy AI that complies with EU values and rules. The AI office was part of this package. Funding: €46.5 million, setup funding.

Key Initiatives:
– Contributing to the coherent application of the AI Act across the member states, including the set-up of advisory bodies at EU level, facilitating support and information exchange.

– Developing tools, methodologies, and benchmarks for evaluating capabilities and reach of general-purpose AI models, and classifying models with systemic risks.

– Drawing up state-of-the-art codes of practice to detail out rules, in cooperation with leading AI developers, the scientific community and other experts

– Investigating possible infringements of rules, including evaluations to assess model capabilities, and requesting providers to take corrective action.

– Preparing guidance and guidelines, implementing and delegated acts, and other tools to support effective implementation of the AI Act and monitor compliance with the regulation.

The EU AI Office stands out as both an AI safety institute, through its AI Safety Unit, and a regulatory body with broad enforcement powers under the AI Act across EU member states. The AI Safety Unit fulfills the typical functions of a safety institute, conducting evaluations and representing the office internationally in meetings with its counterparts. It is not clear whether the AI Safety Unit  will have the necessary resources, both in terms of personnel and funding, to perform similar model testing as its UK and US counterparts. 

Republic of Korea’s AI Safety Institute

Established: November 2024, to ensure the safe use of artificial intelligence technology.

Key initiatives:
– Preemptively addresses risks like misuse, technical limitations, and loss of control to enhance AI reliability.

– Provides guidance to reduce AI side effects, such as deepfakes, and supports companies in navigating global regulations and certifications.

– Participates in international efforts to establish AI safety norms and align with global frameworks.

– Partners with 24 domestic organisations to strengthen AI safety research and create a secure R&D environment.

– Collaborates with companies like Naver, LG, and SK Telecom to promote ethical AI practices and manage potential risks.

As of this writing, insufficient publicly available information exists to evaluate the work of the Institute, which was only recently established.

Conclusion 

The AI safety institutes are beginning their journey, having only established their first basis for collaboration. While early testing efforts offer a glimpse of their potential, it remains to be seen whether these actions alone can effectively curb deploying AI models that pose significant risks. Diverging priorities, including national security concerns, data-sharing policies, and the further weakening of multilateral systems, could undermine their collective effectiveness.

Notably, nations such as India, Brazil, and China have yet to establish AI safety institutes. The governance models these countries propose may differ from existing approaches, setting the stage for a competition between differing visions of global AI safety. 

Building trust between the institutes and the AI industry will be critical for meaningful collaboration. This trust could be cultivated through transparent engagement and mutual accountability. Equally, civil society must play an active role in this ecosystem, acting as a watchdog to ensure accountability and safeguard the broader public interest.

Finally, the evolving geopolitical landscape will profoundly impact the trajectory of these initiatives. The success of the AI safety institutes will depend on their ability to adapt to technical and policy challenges and how effectively they navigate and influence the complex global dynamics shaping AI governance.

Open source (still) means innovations

There is no need to explain the importance of the global network innovation we enjoy today. Many lines have been written on the possibilities and the marvels the network delivers daily. After an initial couple of decades of admiration, the same thing happened with many other wonders of the world we witnessed throughout civilization. We took it for granted. We do not discuss its structure, backbone, and the incentive structure behind it. Unless it interferes with our daily life and freedom.

This is true for any network user, being a state actor, cloud computing company, or everyday end user. When we look at the backbone of the internet, almost everything is open source. What does this mean? Basic protocols and ways we connect over the internet are documented and open for everyone to observe, copy, and build upon. They are agreed upon as a set of transparent public instructions that are free of proprietary obligations. 

Industry and innovation 

To distinguish innovation from the industry (which might be important to go forward), we can introduce a simple correlation: The industry is an ecosystem that emerged on the need to make the invention more available. The vision of utility is in the industry, and the value of innovation is proven with every iteration of utility. Following this correlation, we can indeed say that the more transparent innovation, the greater its value (or we tend to give it such a position).

When we look at the internet industry, we observe that companies and strategies that followed openness have benefited massively from the invention. This system of benefits from the open source approach can work in depth for both the invention and the consequential industry. To name a couple of the greatest examples: Alphabet (Google, YouTube, or Maps), Linux (used to run almost the entire internet backbone infrastructure), Android (revolutionising the app market, levelling the entry field, and reducing the digital divide). All of them are open source, built on the open-source innovation of the internet.

 Architecture, Building, Diagram, CAD Diagram

A closer look at resiliency

Let’s look at one example that may illustrate this precisely: bitcoin. It started as an open-source project and is still one of the most maintained public databases on the internet. Bitcoin brings back the idea of private money after 100 years of the nation’s monopoly on money. Although it is pointed out as a danger to the international financial system, there is no possible coordinated action by such entities to take down this system and/or ban it permanently. Why? The simple answer is in the trade-off. 

Stopping bitcoin (or any digital information online) is not impossible per se but would require massive resources. This would require full control of all communication channels towards the internet, including banning satellites from orbiting above your geolocation and persistent efforts to ensure no one is breaching the ban. But in 2024, such a ban would create a tear in the fabric of society. Societal consequences would widely overcome the possible benefits.

Instead, as long as it is neutral, bitcoin does not present a threat but rather an opportunity for all. All other competitors built on bitcoin principles are not the same for that particular reason: they are not open source and transparent. No Central Bank Digital Currency (CBDC), privately issued stablecoin, or any of the thousand cryptocurrency impersonators have proven to hold any of the bitcoin’s value. Following the earlier distinction, innovation is open source, but the industry around it is not so much.

Open source is the right way, not the easy one

Does the above mean that when an industry is not based on open source, it cannot make great discoveries and innovate further? No, not at all. Intellectual property is a large part of the portfolio of the biggest tech companies. For example, Apple’s IP revenues culminated in around USD 22.6 billion in research and development expenditures (in 2022) The proprietary industry moves the needle in the economy and creates wealth, while open source creates opportunities. We need both for a healthy future. All of our opportunities may not result in imminent wealth, but rather in inspiration to move forward rather than oppose the change. 

In simple terms, open source empowers the bottom-up approach to building for the future. It helps expand the base of possible contributors, and maybe most importantly, reduces the possibility of ending up in ‘knowledge slavery’. It can create a healthy, neutral, starting point. The one most will perceive as a chance rather than a threat. 

If all of you had one particular innovation in mind while reading all this, you are right!

Artificial intelligence (AI) is a new frontier. AI is actually a bit more than just a technology, it is an agent. Anyhow, it is an invention, so chances are high it will follow the path we described above, enabling an entirely new industry of utility providers.

No need to be afraid

We hear all the (reasonable) concerns about AI development. Uncertainties on whether AI should be developed beyond human reach and concerns regarding AI in executive positions, all are based on fear of systems with no overview.  

In the past, the carriers of the open source (openness and transparency) approach were mostly in academia. Universities and other research institutions contributed the most to the open source approach. It is a bit different in the AI field. For that, companies are leading the way.  

The power to preserve common knowledge is still in the hands of states, and under the set of business and political circumstances, the private sector is also the biggest proponent of the open source approach. With the emergence of large language models and generative AI, the biggest open source initiatives came from Meta (LLaMa) and Alphabet (T5). They align with the incentive to statute open source as a standard for the future. We might be in an equilibrium moment in which both sides agree on the architecture for the future. Nations, international organisations, and the private sector should seize this opportunity. This new race toward more efficient technology of the future should evoke optimism, but there cannot be one without the bottom- up and open source approach to innovation. 

The open source approach is still the way forward for innovation. and can build neutral ground, or at least will not be perceived as a threat.

Read more of our ideas about the way forward in AI governance on the humAInism page

The dark side of crypto: fraud and money laundering

Two things often come to mind when we hear the word ‘crypto’: freedom and crime. Cryptocurrencies for sure have revolutionised the financial world, offering speed, transparency, and accessibility not seen before. Yet, their promise of financial liberation comes with unintended consequences. The decentralised, pseudonymous nature of crypto makes it a double-edged sword—for some it represents freedom and for others a tool for crime. 

In 2023, illicit transactions involving cryptocurrencies reached USD 24.2 billion, according to TRM Labs, with scams and fraud accounting for nearly a third of the total. 

These numbers reveal a sobering truth: while crypto has opened doors to innovation, it has also become an enabler for global crime networks, from drug and human trafficking to large-scale ransomware operations. Criminals exploit this space to mask their identities, making crypto the go-to medium for those operating in the shadows.

 Lighting, Adult, Male, Man, Person, Computer Hardware, Electronics, Hardware, Monitor, Screen, Computer, Laptop, Pc, Computer Keyboard, Furniture, Table, Clothing, Coat, Mouse

What are the common types of crypto fraud?

Crypto fraud takes many forms, each designed to exploit vulnerabilities and prey on the unsuspecting. The most known ones are: 

  • Ponzi and pyramid schemes– Fraudsters lure victims with promises of guaranteed high returns. These schemes use investments from new participants to pay earlier ones, creating an unsustainable cycle. When the influx of new investors dwindles, the scheme collapses, leaving most participants with nothing. In 2023, these scams contributed significantly to the USD 24.2 billion received by illicit crypto addresses, showcasing their pervasive nature.
  • Phishing attacks– Fake websites, emails, and messages designed to mimic legitimate services trick victims into revealing sensitive information like wallet keys. A single successful phishing attack can drain entire crypto wallets, with victims often having no recourse. The shift to stablecoins, noted for their volume in scams, has intensified the use of such tactics.
  • Initial Coin Offering (ICO) scams– The ICO boom has introduced countless opportunities—and risks. Fraudulent projects draw in investors with flashy whitepapers and grand promises, only to vanish with millions. For instance, ICO scams contributed to a notable chunk of crypto crimes in previous years, as highlighted by TRM Labs.
  • Rug pulls– Developers create hyped tokens, inflate their value, and abruptly withdraw liquidity, leaving investors holding worthless assets. In 2023, such schemes became increasingly sophisticated, targeting decentralised exchanges to exploit inexperienced investors.
  • Cryptojacking– Hackers infect computers or networks with malware to mine cryptocurrency without the owner’s knowledge. This hidden crime drains energy and resources, often leaving victims to discover their losses long after the attack. 
  • Fake exchanges and wallets– Fraudulent platforms mimic legitimate services, enticing users to deposit funds, only for them to disappear. These scams exploit the trust gap among new investors, further driving crypto-related crime statistics.
 Computer, Electronics, Laptop, Pc, Computer Hardware, Hardware, Disk, Advertisement, Poster, Dvd

The connection between crypto fraud and money laundering

Crypto fraud and money laundering are two sides of the same coin. Stolen funds need to be legitimised, and criminals have devised a range of techniques to obscure their origins. One of the most common methods involves crypto mixers and tumblers. These services blend cryptocurrencies from various sources, making it nearly impossible to trace individual transactions.

The process often works as follows:

  1. Initial theft: Stolen funds are moved from wallets linked to scams or hacks.
  2. Mixing: These funds are transferred to a mixing service, where they are broken into smaller amounts and shuffled with others.
  3. Redistribution: The mixed funds are sent to new, seemingly unrelated wallets.
  4. Conversion: The laundered crypto is then converted to stablecoins or fiat currency, often through decentralised exchanges or peer-to-peer transactions, masking its origins.

This method has made crypto a preferred tool for laundering money linked to drug cartels and even human trafficking networks. The convenience and pseudonymity of crypto ensure its growing role in these illicit industries. 

How big crypto crime really is? 

The numbers are staggering. Last year (2023), illicit addresses received USD 24.2 billion in funds. While scamming and hacking revenues declined (29.2% and 54.3%, respectively), ransomware attacks and darknet market activity saw significant growth. Sanctions-related transactions alone accounted for USD 14.9 billion, driven by entities operating in restricted jurisdictions.

Bitcoin and Monero remain the most-used cryptocurrency for darknet sales and ransomware.

Cryptocurrencies have become the currency of choice for underground networks and darknet markets facilitate the sale of illicit goods. Human trafficking networks use crypto for cross-border payments, exploiting its decentralised nature to evade detection. 

According to the Chainalysis report, the prevalence of crypto in these crimes highlights the urgent need for better monitoring and regulation. 

Stablecoins like USDT are gaining traction- criminals prefer stablecoins for their reliability as they mimic traditional fiat currencies, enabling transactions in environments where access to traditional banking is limited. 

 Accessories, Jewelry, Money, Ring

How to fight crypto crime? 

Solving the issue of crypto crime requires a multi-faceted approach:

  • Regulatory innovation: Governments must create adaptable frameworks to address the evolving crypto landscape while encouraging legitimate use.
  • Public awareness: Educating users about common scams and best practices can reduce vulnerabilities at the grassroots level.
  • Global cooperation: International collaboration is essential as cryptocurrencies knows no borders. Only by sharing data and strategies can nations effectively combat cross-border crypto crime.

The thing is cryptocurrency is a young and rapidly evolving space. While some countries have enacted comprehensive legislation, others lag behind. However, the pace of innovation makes it nearly impossible to create foolproof regulations. Every new development introduces potential loopholes, requiring legislators to remain agile and informed. 

The power of crypto: innovation or exploitation?

Cryptocurrencies hold immense power, offering unparalleled financial empowerment and innovation. As it usually happens, with great power comes great responsibility. Freedom must be balanced with accountability to ensure it serves civilisation for the greater good. Shockingly, stolen crypto assets are currently circulating undetected within global financial systems, intertwining with legitimate transactions. The question is: can the industry mitigate risks without compromising its core principles of decentralisation and transparency by addressing vulnerabilities and implementing robust safeguards? The true potential of crypto lies in its ability to reshape economies, empower the unbanked, and foster global financial inclusion. Yet, this power can also be exploited if left unchecked, becoming a tool for crime in the wrong hands. The future of crypto depends on ensuring it remains a beacon of innovation and empowerment, harnessed responsibly to create a safer, more equitable financial ecosystem for all.