Overview of AI policy in 15 jurisdictions

1. CHINA

Summary

China remains a global leader in AI, driven by significant state investment, a vast tech ecosystem and abundant data resources. Although no single, overarching AI law comparable to the EU AI Act is in place, the country has introduced a multilayered regulatory framework – combining data protection, copyright, AI-specific provisions, and ethical guidelines – to balance technological innovation with national security, content governance, and social stability.

AI landscape 

China’s regulatory landscape for AI is anchored by several core laws and a growing portfolio of AI-specific rules. At the core of this framework are data protection and copyright laws, which provide the legal baseline for AI deployments. 

The Personal Information Protection Law (PIPL), enacted in 2021, serves as a direct parallel to the EU’s General Data Protection Regulation (GDPR) by placing strict obligations on how personal data is collected and handled. Significantly, and unlike the GDPR, it clarifies that personal information already in the public domain can be processed without explicit consent as long as such use does not unduly infringe on individuals’ rights or go against their explicit objections. The PIPL also addresses automated decision-making, explicitly barring discriminatory or exploitative algorithmic practices, such as charging different prices to different consumer groups without justification.

Copyright considerations further shape the development of AI. Under the Chinese Copyright Law, outputs generated entirely by AI, devoid of human originality, cannot be granted copyright protection. Yet courts have repeatedly recognised that when users meaningfully contribute creative elements through prompts, they can secure copyrights in the resulting works, as illustrated by rulings in cases like Shenzhen Tencent v Shanghai Yingxun. At the same time, developers of generative AI systems have faced legal liabilities when their algorithms inadvertently produce content that violates intellectual property or personality rights, exemplified by high-profile instances involving the unauthorised use of the Ultraman character and imitations of distinctive human voices.

Over the past few years, these broader legal anchors have been reinforced by regulations specifically tailored for algorithmic and generative AI systems. One of the most notable is the Provisions on the Management of Algorithmic Recommendations in Internet Information Services of 2021, which target services deploying recommendation algorithms for personalised news feeds, product rankings, or other user-facing suggestions. Providers deemed capable of shaping public opinion must register with authorities, disclose essential technical details, and implement robust security safeguards. These requirements extend to ensuring transparency in how content is recommended and offering users the option to disable personalisation altogether.

In 2022, China introduced the Provisions on the Administration of Deep Synthesis Internet Information Services to address AI-generated synthetic media. These requirements obligate service providers to clearly label media that has been artificially generated or manipulated, particularly when there is a risk of misleading the public. To facilitate accountability, users must undergo real-name verification, and any provider offering a service with a marked capacity to influence public opinion or mobilise society must conduct additional security assessments.

Interim Measures for the Management of Generative Artificial Intelligence Services, which came into effect on 15 August 2023, apply to a broad range of generative technologies, from large language models to advanced image and audio generators. Led by the Cyberspace Administration of China (CAC), these rules require compliance with existing data and intellectual property laws, including obtaining informed user consent for personal data usage and engaging in comprehensive data labelling. Providers must also detect and block illegal or harmful content, particularly anything that might jeopardise national security, contravene moral standards, or infringe upon IP rights, and are expected to maintain thorough complaint mechanisms and special protective measures for minors. 

Where public opinion could be swayed, providers are required to file details of their algorithms for governmental review and may face additional scrutiny if they are deemed highly influential.

Building on these interim measures, the Basic Safety Requirements for Generative AI Services, which came into effect in 2024, took a more granular approach to technical controls. Issued by the National Information Security Standardization Technical Committee (TC260), these requirements outline 31 risk categories ranging from content that undermines socialist core values to discriminatory or infringing materials.

Under these guidelines, training data must be checked via random spot checks of at least 4,000 items drawn from the entire dataset, to verify that at least 96 percent of the sample is free from illegal or unhealthy information – defined as content implicating any of the 29 safety risks listed in the standard’s annex.
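
To make the spot-check arithmetic concrete, the sketch below simulates the sampling procedure. This is an illustrative sketch only, not an official compliance tool: the corpus format and the is_compliant reviewer function are hypothetical stand-ins for a real content review process.

    import random

    SAMPLE_SIZE = 4000      # minimum sample size named in the requirements
    PASS_THRESHOLD = 0.96   # at least 96 percent of sampled items must pass

    def spot_check(corpus, is_compliant):
        # Draw a random sample from the full training corpus (capped at its size).
        sample = random.sample(corpus, min(SAMPLE_SIZE, len(corpus)))
        # is_compliant is a hypothetical reviewer callback returning True/False.
        pass_rate = sum(1 for item in sample if is_compliant(item)) / len(sample)
        return pass_rate >= PASS_THRESHOLD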

Providers are similarly obligated to secure explicit consent from individuals whose personal data might be used in model development. If a user prompt is suspected of eliciting unlawful or inappropriate outputs, AI systems must be capable of refusing to comply, and providers are expected to maintain logs of such refusals and accepted queries.
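
As a rough illustration of the refusal-and-logging expectation, the following sketch declines prompts matching a blocklist and records every decision. It is a minimal sketch under stated assumptions: real systems would use trained classifiers rather than keyword matching, and the blocklist term, log format, and file path are hypothetical.

    import datetime
    import json

    BLOCKLIST = {"example_banned_term"}  # hypothetical placeholder term

    def handle_prompt(prompt, log_path="audit_log.jsonl"):
        # Toy stand-in for a classifier: flag prompts containing a blocked term.
        refused = any(term in prompt.lower() for term in BLOCKLIST)
        entry = {
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "prompt": prompt,
            "action": "refused" if refused else "accepted",
        }
        # Log both refusals and accepted queries, as the requirements expect.
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry, ensure_ascii=False) + "\n")
        return None if refused else prompt  # None signals no answer should be given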

Alongside these binding regulations, the Chinese government and local authorities have published a range of ethical and governance guidelines. The Ethical Norms for New Generation AI, released in 2021 by the National New Generation AI Governance Specialist Committee, articulate six guiding principles, including respect for human welfare, fairness, privacy, and accountability.

While these norms do not themselves impose concrete penalties, they have guided subsequent legislative efforts. In a more formal measure, the 2023 Measures for Scientific and Technological Ethics Review stipulate that institutions engaging in ethically sensitive AI research, particularly those working on large language models with the potential to sway social attitudes, must establish ethics committees.

These committees are subject to national registration, and violations can result in administrative, civil, or even criminal penalties. Local governments, such as those in Shenzhen and Shanghai, have further set up municipal AI ethics committees to oversee particularly high-risk AI projects, often requiring providers to conduct ex-ante risk reviews before introducing new systems.

Under the binding frameworks, providers can be subject to financial penalties, service suspension, or even criminal proceedings if they fail to meet content governance or user rights obligations.

In 2023, China’s State Council announced that it would draft an AI law. However, since then, China has halted all efforts to unify its AI legislation, instead opting for a piecemeal, sector-focused regulatory strategy that continues to evolve in response to emerging technologies.

2. AUSTRALIA

Summary

Australia takes a principles-based approach to AI governance, blending existing laws, such as privacy and consumer protection, with voluntary standards and whole-of-government policies to encourage both innovation and public trust. There is currently no single, overarching AI law; rather, the government has proposed additional, risk-based mandatory guardrails – especially for ‘high-risk’ AI uses – and issued a policy to ensure responsible adoption of AI across all federal agencies.

AI landscape

  • The Voluntary AI Safety Standard (2024) introduces ten guardrails – such as accountability, transparency, and model testing – that guide organisations toward safe AI practices.

There is no single all-encompassing AI law in Australia. The government has pursued a flexible approach that builds upon privacy protections, consumer safeguards, and voluntary principles while moving steadily towards risk-based regulation of high-impact AI applications. 

At the core of Australia’s legal baseline is the Privacy Act 1988, which has been under review to address emerging challenges, including AI-driven data processing and automated decision-making. Under updated guidance, any personal information handled by an AI system, including inferred or artificially generated data, falls under the Australian Privacy Principles. Organisations must therefore collect it lawfully and fairly (with consent for sensitive data), maintain transparency about AI usage, ensure accuracy, and uphold stringent security and oversight measures. Alongside the Privacy Act, the Consumer Data Right facilitates secure data sharing in sectors such as finance and energy, allowing AI-driven products to leverage richer data sets under strict consent mechanisms.

From a consumer protection standpoint, the Australian Consumer Law, enforced by the Australian Competition and Consumer Commission (ACCC), prohibits misleading or unfair conduct. This has occasionally encompassed AI-driven pricing or recommendation algorithms, as exemplified in the ACCC v Trivago case involving deceptive hotel pricing displays.

Various sectors impose complementary rules. The Online Safety Act 2021 addresses harmful or exploitative content, which may include AI-generated deepfakes. The Copyright Act governs the permissible scope of AI training data, while the Corporations Act 2001 influences AI tools used in financial services, such as algorithmic trading and robo-advice.

The government has introduced several AI-specific guidelines and policies to complement these laws:

  • The Voluntary AI Safety Standard (2024), issued by the Department of Industry, Science and Resources (DISR), covers accountability, data governance, model testing, and other risk management practices to help organisations innovate responsibly.

The proposed mandatory guardrails would distinguish two categories of high-risk AI:

  • Category 1 AI: foreseeable uses of AI with known but manageable risks.
  • Category 2 AI: more advanced or unpredictable AI systems with the potential for large-scale harm. Proposed enforcement mechanisms include licensing, registration, and mandatory ex-ante approvals.

A variety of additional AI initiatives complement these policies, such as the Australian Framework for Generative Artificial Intelligence (AI) in Schools, which sets guidelines for safe generative AI adoption in education, covering transparency, user protection, and data security; the AFP Technology Strategy, which sets guidelines for AI-based tools in federal law enforcement; and the Medical Research Future Fund, which invests in AI-driven healthcare pilots, such as diagnostics for skin cancer and radiological screenings.

Internationally, Australia aligns with the Global Partnership on Artificial Intelligence (GPAI) and the OECD AI Principles, actively collaborating with global partners on AI policy and digital trade.

3. SWITZERLAND

Summary

Switzerland follows a sector-focused, technology-neutral approach to AI regulation, grounded in strong data protection and existing legal frameworks for finance, medtech, and other industries. Although the Federal Council’s 2020 Strategy on Artificial Intelligence sets ethical and societal priorities, there is no single, overarching AI law in force.

AI landscape

  • Current AI uses fall under traditional Swiss laws. Switzerland has not enacted an overarching AI law, relying instead on sectoral oversight and cantonal initiatives.
  • Oversight responsibilities are distributed among several federal entities and occasionally supplemented by cantonal authorities. For instance, the Federal Data Protection and Information Commissioner (FDPIC) addresses privacy concerns, while the Financial Market Supervisory Authority (FINMA) exercises administrative powers to regulate financial institutions, including the authority to revoke licenses for noncompliance. The Federal Council sets the AI policy agenda. Cantonal governments, for their part, may provide frameworks for local pilot programmes, fostering public-private partnerships and encouraging best practices in AI adoption.
  • The Strategy on Artificial Intelligence (2020) emphasises human oversight, data governance, and collaborative R&D to position Switzerland as an innovation hub for finance, medtech, robotics, and precision engineering.

At the core of Swiss data protection is the Revised Federal Act on Data Protection (FADP), which took effect in 2023. It imposes strict obligations on entities that process personal data, extending to AI-driven activities. Under Article 21, the FADP places particular emphasis on automated individual decision-making, requiring transparency when significant personal or economic consequences may result. The FDPIC enforces the law, carrying out investigations and offering guidance, though it lacks broad direct penalty powers.

Beyond data privacy, AI solutions must comply with existing sectoral regulations. In healthcare, the Therapeutic Products Act and the corresponding Medical Devices Ordinance govern AI-based diagnostic tools, with Swissmedic classifying such systems as medical devices when applicable.

In finance, FINMA oversees AI applications in robo-advisory, algorithmic trading, and risk analysis, regularly issuing circulars and risk monitors that highlight expectations for transparency, reliability, and robust risk controls. Other domains, like autonomous vehicles and drones, fall under the jurisdiction of the Federal Department of the Environment, Transport, Energy, and Communications (DETEC), which grants pilot licenses and operational approvals through agencies such as the Federal Office of Civil Aviation (FOCA).

Liability, intellectual property, and non-discrimination matters are similarly addressed through existing legislation. The Product Liability Act, the Civil Code, and the Code of Obligations govern contracts and liability for AI products and services, while the Copyright Act and the Patents Act regulate AI training data usage and software IP rights. The Gender Equality Act and the Disability Discrimination Act may apply if AI outputs result in systematic bias or exclusion.

At a local level, several cantonal innovation hubs, such as Zurich’s Innovation Sandbox for Artificial Intelligence, support pilot projects and produce policy feedback on emerging technologies. The Swiss Supercomputer Project – a collaboration among national labs, Hewlett Packard Enterprise, and NVIDIA – provides high-performance computing resources to bolster AI research in areas ranging from precision engineering to climate simulations. In the same vein, the Swiss AI Initiative is a national effort led by ETH Zurich and the Swiss Federal Institute of Technology in Lausanne (EPFL), powered by one of the world’s most advanced GPU supercomputers, uniting experts across Switzerland to develop large-scale, domain-specific AI models.

The Digital Society Initiative at the University of Zurich focuses on interdisciplinary research and public engagement, exploring the ethical, social, and legal impacts of digital transformation. 

Switzerland engages with the OECD AI Principles and participates in the Council of Europe Committee on Artificial Intelligence. In November 2023, the Federal Council instructed DETEC and the Federal Department of Foreign Affairs to produce an overview of potential regulatory approaches for artificial intelligence, emphasising transparency, traceability, and alignment with international standards such as the Council of Europe’s AI Convention and the EU’s AI Act.

In February 2025, the two departments presented a plan proposing sector-specific legislative changes in areas like data protection and non-discrimination, along with non-binding measures such as self-declarations and industry-led solutions, to protect fundamental rights, bolster public trust, and support Swiss innovation.

4. TÜRKIYE

Summary

Türkiye strives to become a major regional AI hub by investing in industrial applications, defence innovation, and a rapidly growing tech workforce. While there is no single, overarching AI law at present, a Draft AI Bill introduced in June 2024 is under parliamentary review and, if enacted, will establish guiding principles on safety, transparency, accountability, and privacy, especially for high-risk AI like autonomous vehicles, medical diagnostics, and defence systems.

Existing sectoral legislation, privacy rules under the Law on the Protection of Personal Data, and the National Artificial Intelligence Strategy (2021–2025) shape responsible AI use across industries.

AI landscape

  • The National Artificial Intelligence Strategy (2021–2025) is a roadmap for talent development, data infrastructure, ethical frameworks, and AI hubs to spark local innovation.
  • The Draft AI Bill, proposed in June 2024, is pending parliamentary approval. The Draft proposes broad principles, such as safety, transparency, equality, accountability, and privacy, as well as a registration requirement for certain high-risk AI use cases.
  • The Personal Data Protection Law, overseen by the Personal Data Protection Authority (KVKK), underpins AI-driven processing of personal data and mandates informed consent and data minimisation.

There is no single, overarching AI law. Sectoral regulations play a key role. In banking and finance, the Banking Regulation and Supervision Agency (BRSA) supervises AI-driven credit scoring, risk analysis, and fraud detection, proposing rules mandating explicit consent and algorithmic fairness audits. The defence sector, led by state-owned enterprises such as TUSAŞ and ASELSAN, deploys autonomous drones and advanced targeting systems, although official details remain classified for national security reasons. The automotive industry invests in connected and self-driving vehicles – particularly through TOGG, Türkiye’s national electric car project – aligning with the National Artificial Intelligence Strategy’s push for advanced manufacturing.

The Law on Consumer Protection, the E-commerce Law, and the Turkish Criminal Code collectively impose transparency, fairness, and liability standards on AI-driven advertising, misinformation, and automated decision-making, while the Industrial Property Code governs the permissible use of copyrighted data for AI training and clarifies patentability criteria for AI-based innovations.

While not an EU member, Türkiye often harmonises regulations with EU norms to facilitate trade and ensure cross-border legal compatibility. It also engages in the Global Partnership on Artificial Intelligence (GPAI) and participates in the Council of Europe Committee on Artificial Intelligence.

5. MEXICO

Summary

Mexico does not have a single, overarching AI law or a fully institutionalised national strategy. The 2018 National AI Strategy – commissioned by the British Embassy in Mexico, developed by Oxford Insights and C Minds, and informed by government and expert input – has been influential in articulating principles for ethical AI adoption, open data, and talent development.

However, it has not been officially enforced as a national plan. AI adoption in the private sector remains limited, although the public sector ranks relatively high in Latin America for AI integration. Data protection laws were previously enforced by the National Institute for Transparency, Access to Information, and Personal Data Protection (INAI), which was eliminated in December 2024 due to budgetary constraints. These responsibilities now fall under the Secretariat of Anti-Corruption and Good Governance (SABG).

AI landscape

  • The 2018 National AI Strategy outlined fairness, accountability, and a robust AI workforce, but remains unevenly implemented.

At the heart of Mexico’s data governance is the Federal Law on the Protection of Personal Data Held by Private Parties (2010). This law imposes consent, transparency, and security obligations on any entity handling personal data, including AI-driven projects. Until December 2024, the National Institute for Transparency, Access to Information, and Personal Data Protection (INAI) enforced these rules; enforcement responsibilities have since been transferred to the Secretariat of Anti-Corruption and Good Governance. Although its powers focused primarily on privacy, INAI periodically offered guidance on best practices for AI-based solutions, such as public chatbots and e-commerce platforms.

Beyond privacy, other laws – such as the Consumer Protection Law and the E-Commerce Law – can indirectly govern AI use, particularly when automated tools influence marketing, pricing, or other consumer-facing decisions.

Copyright and IP regulations apply to AI developers, especially regarding training data usage and patent filings. For training data, developers must obtain proper licenses or use public-domain material to avoid copyright infringement when training AI models. Patents require a genuine technical solution and novelty, and AI cannot be named as the inventor. Mexico accounts for a significant share of AI patent applications in Latin America, alongside Brazil.

Mexico’s public sector ranks third in Latin America in terms of AI integration, with pilot projects in:

  • Healthcare (AI-based triage and diagnostics);
  • Agriculture (precision farming via drones);
  • Municipal services (chatbots and data analytics tools).

Nonetheless, private-sector adoption remains modest, scoring below the regional average in the Latin American AI Index; critics argue that this is due to Mexico’s relatively low R&D spending, fragmented policy environment, and insufficient incentives for businesses.

6. INDONESIA

Summary

While no single, overarching AI law is in place, Indonesia’s Ministry of Communication and Digital Affairs has announced a forthcoming AI regulation. Currently, the Personal Data Protection Law (2022) provides an important legal foundation for AI-related personal data processing.

Key institutions, including the Ministry of Communication and Information Technology (Kominfo, since renamed the Ministry of Communication and Digital Affairs) and the National Research and Innovation Agency (BRIN), jointly shape AI policies and promote R&D initiatives, with further sector-specific guidelines emerging at both the national and provincial levels.

Indonesia envisions AI as a driver of national development, aiming to strengthen healthcare, education, food security, and public services through the 2020–2045 Masterplan for National Artificial Intelligence (Stranas KA).

AI landscape

  • The Stranas KA (2020–2045) is a long-term roadmap dedicated to setting ethical AI goals, boosting data infrastructure, cultivating local capacity, and encouraging global partnerships.
  • The Personal Data Protection Law (2022) establishes consent, transparency, and data minimisation requirements, backed by Kominfo’s authority to impose administrative fines or suspend services.

In January 2025, Indonesia’s Ministry of Communication and Digital Affairs announced a forthcoming AI regulation that will build on guidelines emphasising transparency, accountability, human rights, and safety. Minister Meutya Hafid assigned Deputy Minister Nezar Patria to draft this regulation, as well as to gather stakeholder input across sectors such as education, health, infrastructure, and financial services.

Currently, Indonesia’s AI governance is anchored by strategic planning under the Stranas KA (2020–2045) and the Personal Data Protection Law (2022). Existing regulations, along with provincial-level guidelines and interministerial collaboration, guide the adoption of AI systems across multiple industries.

To bolster cybersecurity and data protection, the National Cyber and Encryption Agency sets additional security standards for AI implementations in critical sectors.

The Stranas KA (2020–2045) provides short-term milestones (2025) and long-term goals (2045) aimed at constructing a robust data infrastructure, prioritising ethical AI, and building a large talent pool. Five national priorities structure these efforts:

  • AI solutions for telemedicine, remote diagnostics, and hospital administration;
  • Automating public services with chatbots and data analytics;
  • Upskilling and training for a domestic AI workforce;
  • Precision agriculture, pest detection, and yield forecasting;
  • AI for traffic management, urban planning, and public safety.

The Stranas KA sets out only broad principles for areas such as data handling, model performance, and ethical compliance, rather than explicit, enforceable audit mandates, so formal requirements remain relatively limited.

Certain provincial governments have issued draft guidelines for AI usage in local services, including chatbots for administrative tasks and agritech solutions that support smallholder farmers. These guidelines typically incorporate privacy measures and user consent requirements, aligning with the Personal Data Protection Law.

Indonesia cooperates with ASEAN partners on cross-border digital initiatives. During its 2022 G20 presidency, Indonesia spotlighted AI as a tool for inclusive growth, focusing on bridging the digital divide.

7. EGYPT

Summary

Although there is no single, overarching AI law, the Egypt National Artificial Intelligence Strategy (2020) provides a roadmap for research, capacity development, and foreign investment in AI, while the Personal Data Protection Law No. 151 of 2020 governs personal data used by AI systems. In January 2025, President Abdel Fattah El-Sisi launched the updated 2025–2030 National AI Strategy, aiming to grow the ICT sector’s contribution to GDP to 7.7% by 2030, establish 250+ AI startups, and develop a talent pool of 30,000 AI professionals. The new strategy also announces the development of a national foundational model, including a large-scale Arabic language model, as a key enabler for Egypt’s AI ecosystem.

With multiple pilot projects, ranging from AI-assisted disease screening to smart city solutions, Egypt is laying the groundwork for broader AI deployment, with the Data Protection Authority providing oversight of AI-driven data processing. The Ministry of Communications and Information Technology (MCIT) spearheads AI policy, focusing on AI applications in healthcare, finance, agriculture, and education.

AI landscape

  • The Ministry of Communications and Information Technology (MCIT) leads Egypt’s AI efforts, coordinating with other ministries on digital transformation and legislative updates. The Data Protection Authority can levy fines or administrative measures against noncompliant entities, while the Central Bank of Egypt supervises AI-based credit scoring and fraud detection in financial services.
  • The 2020 National Artificial Intelligence Strategy established strategic goals for AI research, workforce development, and partnerships with global tech players, aligning with the Vision 2030 framework. The AI Strategy acknowledged non-discrimination and responsible usage, though enforcement mostly fell under existing data protection measures.
  • The newly introduced 2025–2030 National Artificial Intelligence Strategy builds on the first plan, with a focus on inclusive AI, domain-specific large language models, and stronger alignment with the Digital Egypt initiative.
  • The Personal Data Protection Law No. 151 of 2020 requires consent, data security, and transparency in automated processing, enforced by the Data Protection Authority.
  • Healthcare initiatives deploy AI-driven disease screening and telemedicine, expanded during public health emergencies. Agriculture pilots focus on yield prediction and irrigation optimisation. Smart cities apply AI in traffic management and public safety. Education reforms integrate AI curricula in universities, coordinated by MCIT and the Ministry of Higher Education.

As part of the 2025–2030 plan, Egypt is re-emphasising ethical AI, with additional guidelines under the Egyptian Charter for Responsible AI (2023) and plans for domain-specific AI regulations. The strategy also aims to strengthen AI infrastructure with next-generation data centres, robust 5G connectivity, and sustainable computing facilities.

AI adoption aligns closely with the overarching Egypt Vision 2030 framework, highlighting the role of AI in socio-economic reforms.

8. MALAYSIA

Summary

Malaysia aims to become a regional AI power through government-led initiatives such as the Malaysia Artificial Intelligence Roadmap (2021–2025) and the MyDIGITAL blueprint. While there is currently no single, overarching AI legislation, the National Guidelines on AI Governance and Ethics (2024) serve as a key reference point for responsible AI development. Established in December 2024, the National AI Office now centralises policy coordination and is expected to propose regulatory measures for high-stakes AI use cases.

AI landscape

  • The Malaysia Artificial Intelligence Roadmap (2021–2025) outlines talent building, ethical guidelines, and R&D priorities spanning sectors like finance, healthcare, and manufacturing.
  • The National Guidelines on AI Governance and Ethics (2024) promote seven key principles – fairness, reliability/safety, privacy/security, inclusiveness, transparency, accountability, and human well-being – and clarify stakeholder obligations for end users, policymakers, and developers.

In addition to these frameworks, sectoral bodies impose further requirements:

  • Bank Negara Malaysia (BNM) oversees AI in finance, emphasising fairness and transparency for credit scoring and fraud detection tools.

Major enterprises leverage AI for e-services, medical diagnostics, manufacturing optimisation, and real-time analytics.

9. NIGERIA

Summary

While no single, overarching AI law is in place, the Nigeria Data Protection Act (NDPA) provides an important legal foundation for AI-related personal data processing. Key institutions, including the Federal Ministry of Communications, Innovation and Digital Economy (FMCIDE) and the National Information Technology Development Agency (NITDA), shape Nigeria’s AI policy framework and encourage responsible adoption.

AI landscape

Nigeria’s NDPA applies to AI to some extent, as its provisions demand consent, data minimisation, and the possibility of human intervention for decisions with significant personal impact. The Nigeria Data Protection Commission (NDPC) has the authority to impose penalties on violators, and the Securities and Exchange Commission (SEC) requires robo-advisory firms to adopt safeguards against algorithmic errors or bias. The Nigerian Bar Association issued Guidelines for the Use of Artificial Intelligence in the Legal Profession in Nigeria in 2024, emphasising data privacy, human oversight, and transparency in AI-driven decisions.

In 2023, Nigeria joined the Bletchley Declaration on AI, pledging to cooperate internationally on responsible AI development.

10. KENYA

Summary

While no single, overarching AI law is in place, Kenya’s Data Protection Act (2019) offers a foundational framework for AI-related personal data usage, while existing ICT legislation, sector-specific guidelines, and taskforce reports further shape AI governance. The Ministry of Information, Communications, and the Digital Economy (MoIC) steers national digital transformation, supported by the Kenya ICT Authority’s oversight of ICT projects and the Office of the Data Protection Commissioner (ODPC) enforcing privacy provisions. The National Artificial Intelligence Strategy (2025–2030) aims to consolidate these diverse efforts, focusing on risk management, ethical standards, and broader AI-driven economic growth.

AI landscape

  • Kenya’s National Artificial Intelligence Strategy for 2025–2030 aims to drive inclusive, ethical, and innovation-driven AI adoption across key sectors – agriculture, healthcare, education, and public services – by establishing robust infrastructure, governance, and talent development frameworks to address national challenges and foster sustainable growth.
  • The Ministry of Information, Communications, and the Digital Economy (MoIC) sets high-level policy, reflecting AI priorities in Kenya’s broader digital agenda. The Kenya ICT Authority coordinates pilot projects, manages government ICT initiatives, and promotes AI adoption across sectors.
  • The Data Protection Act (2019) mandates consent, data minimisation, and user rights in automated decision-making. The Office of the Data Protection Commissioner (ODPC) enforces these rules, investigating breaches and imposing sanctions – particularly relevant to AI-driven digital lending and fintech solutions.
  • The Distributed Ledgers Technology and Artificial Intelligence Taskforce (2019) proposed ethics guidelines, innovation sandboxes, and specialised oversight for distributed ledgers technology, AI, the internet of things, and 5G wireless technology. The taskforce aimed to balance consumer and human rights protection with promoting innovation and market competition.

The Data Protection Act (2019) remains central, requiring accountability and consent for AI-driven profiling – particularly in high-impact domains like micro-lending, where machine-learning models analyse creditworthiness. 

The MoIC has integrated AI objectives into national strategies for e-government, supporting pilot projects such as chatbot-based public services and resource allocation.

The National AI Strategy aims to harmonise Kenya’s diverse AI efforts, addressing potential algorithmic bias, auditing standards, and the practicalities of responsible AI, particularly in healthcare, agritech, and fintech. To achieve this, the strategy sets out a clear governance framework, establishes multi-stakeholder collaboration platforms, and develops robust guidelines that promote transparent, ethical, and inclusive AI development across these priority sectors.

The government collaborates with global organisations such as GIZ, the World Bank, and UNDP, and with regional partners such as Smart Africa, in pursuit of its aspiration to become an AI hub in Africa.

11. ARGENTINA

Summary

While no single, overarching AI law is in place, Data Protection Law No. 25.326 (Habeas Data, 2000) provides an important baseline for AI-related personal data use, enforced by the Argentine Agency of Access to Public Information (AAIP). The government has developed a National AI Plan and has issued Recommendations for Trustworthy Artificial Intelligence (2023) to guide ethical AI adoption – especially within the public sector. Academic institutions, entrepreneurial tech clusters in Buenos Aires and Córdoba, and partnerships with multinational firms support Argentina’s growing AI ecosystem.

AI landscape

  • The National Artificial Intelligence Plan outlines high-level goals for ethical, inclusive AI development aligned with the country’s economic and social priorities.
  • The Data Protection Law No. 25.326 (Habeas Data, 2000) requires consent, transparency, and data minimisation in automated processing. The AAIP can sanction entities that misuse personal data, including through AI-driven profiling.
  • Recommendations for Trustworthy Artificial Intelligence (2023), approved by the Undersecretariat for Information Technologies, promote human-centred AI in public-sector projects, emphasising ethics, responsibility, and oversight.

Argentina’s AI governance relies on existing data protection rules and emerging policy instruments rather than a single, dedicated AI law. Public institutions like the Ministry of Science, Technology, and Innovation (MINCyT) and the Ministry of Economy coordinate research and innovation, working with the AAIP to ensure privacy compliance. The government also supports pilot programmes testing practical AI solutions.

Argentina’s newly launched AI unit within the Ministry of Security, designed to predict and prevent future crimes, has sparked controversy over surveillance, data privacy, and ethical concerns, prompting calls for greater transparency and regulation.

12. QATAR

Summary

While no single, overarching AI law is in place, Law No. 13 of 2016 Concerning Privacy and Protection of Personal Data serves as a key legal framework for AI-related personal data processing. The Ministry of Communications and Information Technology (MCIT) leads Qatar’s AI agenda through the National Artificial Intelligence Strategy for Qatar (2019), focusing on local expertise development, ethical guidelines, and strategic infrastructure – aligned with Qatar National Vision 2030. Enforcement of data privacy obligations is handled by MCIT’s Compliance and Data Protection (CDP) Department, which can impose fines for noncompliance. Oversight in finance, Sharia-compliant credit scoring, and other sensitive domains is provided by the Qatar Financial Centre Regulatory Authority and the Central Bank.

AI landscape

  • The National Artificial Intelligence Strategy for Qatar (2019) sets goals for talent development, research, ethics, and cross-sector collaboration, supporting the country’s economic diversification.
  • Law No. 13 of 2016 Concerning Privacy and Protection of Personal Data enforces consent, transparency, and robust security for personal data usage in AI. MCIT’s Compliance and Data Protection (CDP) Department monitors data privacy compliance, imposing monetary penalties for violations.
  • The Qatar Financial Centre Regulatory Authority and the Central Bank regulate AI-driven financial services, ensuring consumer protection and adherence to Sharia principles.
  • Lusail City, which brands itself as the city of the future and one of the most technologically advanced cities in the world, leverages AI-based traffic management, energy optimisation, and advanced surveillance. 

Although Qatar has not enacted a single, overarching AI law, its National AI Strategy and the work of the Artificial Intelligence Committee provide a structured blueprint, prioritising responsible, culturally aligned AI applications.

Qatar’s AI market is projected to reach USD 567 million by 2025, driven by strategic investments and digital infrastructure development that is expected to boost economic growth, attract global partnerships, and continue efforts to align national regulations with international standards.

13. PAKISTAN

Summary

While no single, overarching AI law is in place, the Ministry of Information Technology & Telecommunication (MoITT) spearheads AI policy through the Digital Pakistan Policy and the Draft National Artificial Intelligence Policy (2023), focusing on responsible AI adoption, skill-building, and risk management. Although the Personal Data Protection Bill is still pending, its adoption would introduce dedicated oversight for AI-driven personal data processing. In parallel, the proposed Regulation of Artificial Intelligence Act 2024 seeks to mandate human oversight of AI systems and impose substantial fines for violations.

AI landscape

  • The Ministry of Information Technology & Telecommunication (MoITT) drives Pakistan’s AI policy under the Digital Pakistan Vision, integrating AI across e-government services, education, and agritech.
  • The National Centre of Artificial Intelligence, under the Higher Education Commission, fosters research collaborations among universities.
  • The Digital Pakistan Policy (2018) underscores AI’s role in public-sector digitalisation and workforce development, while the Draft National Artificial Intelligence Policy (2023) emphasises ethically guided AI growth, job creation, and specialised training initiatives.
  • The Personal Data Protection Bill proposes establishing a data protection authority with enforcement powers over AI-related personal data misuse. 
  • The Regulation of Artificial Intelligence Act 2024 would fine violators up to PKR 2.5 billion (approximately USD 9 million), mandate transparent data collection, require human oversight in sensitive applications, and create a National AI Commission in Islamabad.
  • Pakistan uses AI to expedite citizen inquiries through chatbots, streamline government operations with digital ID systems, and address food security by optimising crop monitoring and yields. AI-based credit scoring broadens microfinance access but raises questions of fairness and privacy. 

Pakistan’s AI trajectory is propelled by the MoITT’s Digital Pakistan agenda, with the National Centre of Artificial Intelligence coordinating academic research in emerging fields like machine learning and robotics. 

Legislative initiatives are rapidly evolving. The Regulation of Artificial Intelligence Act 2024, currently under review by the Senate Standing Committee on Information Technology, aims to ensure responsible AI deployment, penalising misuse and unethical practices with high-value fines. Once enacted, the law would establish the National Artificial Intelligence Commission to govern AI adoption and uphold social welfare goals, with commissioners prohibited from holding public or political office. Parallel to this, the Personal Data Protection Bill would further strengthen consumer data rights by regulating AI-driven profiling.

Ongoing debates centre on balancing innovation with privacy, transparency, and accountability. As Pakistan expands international collaborations, particularly through the China-Pakistan Economic Corridor and broader Islamic cooperation forums, more concrete regulations are expected to emerge by the end of 2025.

14. VIETNAM

Summary

While Vietnam has not enacted a single, overarching AI law, the Law on Cyberinformation Security (2015) provides a basic legal framework that partially governs AI-driven data handling. Two ministries – the Ministry of Science and Technology (MOST) and the Ministry of Information and Communications (MIC) – jointly drive AI initiatives under Vietnam’s National Strategy on Research, Development and Application of AI by 2030, with an emphasis on AI education, R&D, and responsible use in manufacturing, healthcare, and e-governance. Although the national strategy references ethics and bias prevention, there is no single oversight body or binding ethical code for AI, prompting growing calls from civil society for greater transparency and accountability.

AI landscape

  • The Ministry of Science and Technology (MOST) allocates funds for AI research, supporting collaborations between universities, startups, and private enterprises.
  • The Ministry of Information and Communications (MIC) oversees the broader digital transformation agenda, sets cybersecurity standards, and can impose fines for data misuse under existing regulations.
  • The National Strategy on AI (2021–2030) aims to develop an AI-trained workforce (50,000 professionals), expand AI usage in public services through chatbots and digital government, and promote AI-based solutions in manufacturing, healthcare diagnostics, and city management. The strategy mentions ethical principles like bias mitigation and accountability but does not specify formal enforcement or an AI ethics board.
  • The Law on Cyberinformation Security (2015) outlines baseline data security measures for organisations, which partially apply to AI-related activities, as the law’s general data protection and system security requirements extend to AI systems that process or store personal or sensitive information. The MIC can impose fines or restrict services for cybersecurity breaches and unauthorised data processing. 
  • The State Bank of Vietnam can issue additional rules for AI deployments in finance or consumer lending.
  • Factories adopt AI for predictive maintenance, robotics, and supply-chain optimisation. AI-based diagnostics and imaging pilot projects are implemented in major hospitals, partially funded by MOST grants. AI chatbots reduce administrative backlogs. Ho Chi Minh City explores AI-driven traffic control and security systems. Tech hubs in Hanoi and Ho Chi Minh City foster AI-focused enterprises in fintech, retail analytics, and EdTech.

Vietnam’s push for AI is central to its ambition of enhancing economic competitiveness and digitising governance. However, comprehensive AI legislation remains absent. The National Strategy on AI acknowledges concerns around fairness, personal data rights, and possible algorithmic bias, but explicit regulatory mandates or ethics boards have yet to be instituted.

Vietnam collaborates with ASEAN on a regional digital masterplan and maintains partnerships with tech-leading countries, such as Japan and South Korea, for AI research and capacity development. The government is also formulating new regulations in the digital technology sector, including a draft Law on Digital Technology Industry, expected to be adopted in May 2025, which may introduce risk-based rules for AI and a sandbox approach for emerging technologies.

15. RUSSIA

Summary

Russia has adopted multiple AI-related policies – including an AI regulation framework, the National AI Development Strategy (2019–2030), the National Digital Economy Programme, and experimental legal regimes (ELRs) – to advance AI in tightly regulated environments. The recently enacted rules mandating liability insurance for AI developers in ELRs signal a shift toward stricter risk management.

AI landscape

  • The National AI Development Strategy (2019–2030), adopted via presidential decree, sets ambitious goals for AI R&D, talent growth, and widespread adoption in the healthcare, finance, and defence sectors.
  • Effective in 2025, Russia’s updated AI regulation framework prohibits AI in education if it simply completes student assignments (to prevent cheating), clarifies legal liability for AI-generated content, mandates accountability for AI-related harm, promotes human oversight, and focuses on national security through industry-specific guidelines.
  • Experimental Legal Regimes (ELRs) allow the testing of AI-driven solutions (e.g., autonomous vehicles in Moscow and Tatarstan). Federal Law No. 123-FZ, adopted in 2024, now requires developers in ELRs to insure civil liability for potential AI-related harm.

Russia has no single, overarching AI law. Instead, authorities have relied on diverse initiatives, laws, and financial incentives to direct AI governance. The keystone remains the National AI Development Strategy (2019), which focuses on technological sovereignty, deeper investment in research, and attracting talent.

Alongside it, the digital economy framework has bankrolled significant projects, from data centres to connectivity enhancements, enabling the preliminary deployment of advanced AI solutions.

In 2020, policymakers introduced the Conceptual Framework for the Regulation of AI and Robotics, identifying gaps in liability allocation among AI developers, users, and operators. As noted above, the resulting updated framework took effect in 2025.

Technical Committee 164 under Rosstandart issues AI-related safety and interoperability guidelines. Personal data management is governed by Federal Law No. 152-FZ, complemented by updated biometric data regulations that organise the handling of facial and voice profiles. The voluntary AI Ethics Code, shaped in collaboration with governmental entities and technology companies, aims to curb risks such as algorithmic bias, discriminatory profiling, and the unchecked use of personal data.

AI adoption is especially visible in the following:

  • Companies like Yandex are conducting trials of self-driving cars in designated zones. Under the new insurance requirements, liabilities for potential accidents must be covered.
  • The Central Bank endorses AI-driven services for fraud prevention and credit analysis, ensuring providers remain responsible under established banking and consumer protection laws.
  • AI-assisted diagnostic tools and telemedicine applications go through a registration process akin to medical device approval, overseen by Roszdravnadzor. 
  • Russian authorities use AI-driven facial recognition in public surveillance, managed by biometric data policies and overseen by security services. Advocacy groups have voiced concerns regarding privacy and data retention practices.

Data Protection Day 2025: A new mandate for data protection

This analysis provides a detailed summary of Data Protection Day, covering the most relevant aspects of each session. The event welcomed participants to Brussels, as well as virtually, to celebrate Data Protection Day 2025 together.

The tightly packed programme kicked off with opening remarks by the Secretary General of the European Data Protection Supervisor (EDPS), followed by a day of panels, speeches and side sessions from some of the brightest minds in the data protection field.

Keynote speech by Leonardo Cervera Navas

Given the recent political turmoil in the EU, specifically the annulment of the Romanian elections a few months ago, it was no surprise that the first keynote speech addressed how algorithms are used to destabilise and threaten democracies. Cervera Navas explained how third-country algorithms are used against EU democracies to target their values.

He then went on to discuss the significant power imbalance that arises when a few wealthy individuals and their companies dominate the tech world and end up violating our privacy. However, he turned towards a hopeful future, observing that the crisis in Europe is making Europeans stronger. ‘Our values are what unite us, and part of them are the data protection values the EDPB strongly upholds’, he emphasised.

He acknowledged the evident overlap of rules and regulations between different legal instruments but also highlighted the creation of tools that can help uphold our privacy, such as the Digital Clearing House 2.0.

Organiser’s panel moderated by Kait Bolongaro

This panel discussed a wide variety of data protection topics, such as the developments on the ground, how international cooperation played a role in the fight against privacy violations, and what each panellist’s priorities were for the upcoming years. That last question was especially interesting to hear given the professional affiliations of each panellist.

What is interesting about these panels is that the organisers spent a lot of time curating a diverse line-up, with speakers from academia, private industry, public bodies, and even the EDPS. This ensures that a panel’s topic is discussed from more than one point of view, which is much more engaging.

Wojciech Wiewiorowski, the current European Data Protection Supervisor, reminded us of the important role that data protection authorities (DPAs) play in the effective enforcement of the GDPR. Matthias Kloth, Head of Digital Governance and Sport at the Council of Europe (CoE), offered a broader perspective. As his work centres on the evolved Convention 108, now known as Convention 108+, he shed some light on the effort to update past laws and bring them into the modern age.

Regarding international cooperation, each panellist had their own unique take on how to facilitate and streamline it. Wiewiorowski rightly stated that data has no borders and that cooperation must be a global effort. However, he cautioned that, in the age of cooperation, we cannot settle for the ‘lowest common denominator level of protection’.

Jo Pierson, Professor at the Vrije Universiteit Brussel and Hasselt University, said that international cooperation is very challenging: a country’s values may change overnight, as illustrated by Trump’s recent re-election victory.

Audience questions

A member of the audience posed a very relevant question regarding the legal field as a whole. He asked the panellists what they thought of the fact that enforcing one’s rights is a difficult and costly process. To provide context, he explained how a person must be legally literate and bear their own costs in order to litigate or file an appeal.

Wiewiorowski of the EDPS pointed out that changing the GDPR’s procedural rules is not a feasible way to tackle this issue. Small-scale procedural amendments remain an option, but he does not foresee the GDPR being opened up in the coming years.

However, Pierson had a more practical take on the matter and suggested that this is where individuals and civil society organisations can join forces. Individuals can approach organisations such as noyb, Privacy International, and EDRi for help or advice. But that raises the question: on whose shoulders should this burden rest?

One last question from the audience concerned ‘DeepSeek’, the bombshell new Chinese AI recently dropped onto the market. The panellists were asked whether this new AI is an enemy or a friend to Europeans. Each panellist avoided calling Chinese AI either, but they found common ground on the need for international cooperation and on the fact that an open-source AI is not a bad thing if it can be trained by Europeans.

The last remark regarding this panel was Wiewiorowski’s comparison of Chinese AI to ‘Sputnik Day’ (recalling the 1950s space race between the United States and the USSR). Are we facing a new technological gap? Will non-Western allies and foes beat us in this digital arms race?

Data protection in a changing world: What lies ahead? Moderated by Anna Buchta

This session also posed a series of interesting questions to high-profile panellists. The range of this panel was impressive, bringing together opinions from the European Commission, the Polish Minister of Digital Affairs, the European Parliament, the UK’s Information Commissioner, and DIGITALEUROPE.

Notable was Marina Kaljurand of the European Parliament’s LIBE Committee and her passion for cyber matters. She revealed that many people in the European Parliament are not tech literate, while others are extremely well versed in how the technology is used. There seems to be a significant information asymmetry within the European Parliament that needs to be addressed if its members are to vote on digital regulations.

She gave an important overview of the state of data transfers with the UK and the USA. The UK has in place an adequacy decision that has raised multiple flags in the European Parliament and is set to expire in June 2025.

The future of data transfers with the UK is very uncertain. As for the USA, she mentioned that difficult times lie ahead due to the actions of the recently re-elected President Trump, which are degrading US-EU relations. Regarding her views on the child sexual abuse material regulation, she stressed how important it is to protect children: the debate is not about whether to protect them, but about how.

The currently proposed regulation risks intruding too far on privacy, but alternatives for protecting children are hard to find. This reflects how difficult regulating can be, even when everyone at the table shares the same goals.

Irena Moozova, the Deputy Director-General of DG JUST at the European Commission, said that her priorities for the upcoming years are to cut red tape, simplify guidelines for businesses, and support the compliance efforts of small and medium-sized enterprises. She also mentioned the public consultation phases for the upcoming Digital Fairness Act, to be held this summer.

John Edwards, the UK Information Commissioner, highlighted the transformative impact of emerging technologies, particularly Chinese AI, and how disruptive innovations can rapidly reshape markets. He discussed the ICO’s evolving strategies, noting their alignment with ideas shared by other experts. The organisation’s focus for the next two years includes key areas such as AI’s role in biometrics and tracking, as well as safeguarding children’s privacy. To address these priorities, the ICO has published an online tracking strategy and conducted research on children’s data privacy, including the development of systems tailored to protect young users.

Alberto Di Felice, Legal Counsel to DIGITALEUROPE, stressed the importance of simplifying regulations, stating repeatedly that there is too much bureaucracy and too many actors involved in regulation. For example, a company wanting to operate in the EU market may have to deal with DPAs, AI Act authorities, Data Governance Act authorities for public-sector data, product regulators for digital products, and financial sector authorities.

He advocated for a single regulator and for changes to streamline legal compliance. He also argued that the quality of regulation in Europe is poor and that regulations are sometimes simply too long: some AI Act articles run to 17 lines, with exceptions and sub-exceptions that even lawyers cannot make sense of.

Keynote speech by Beatriz de Anchorena on global data protection

Beatriz de Anchorena, Head of Argentina’s DPA and current Chair of the Convention 108+ Committee, delivered a compelling address on the importance of global collaboration in data protection. Representing a non-European perspective, she emphasised Argentina’s unique contribution to the Council of Europe (CoE).

Argentina was the first country outside Europe to receive an EU adequacy decision, which has since been renewed. Despite having a data protection law dating from the early 2000s, Argentina remains a leader in promoting modernised frameworks.

Anchorena highlighted Argentina’s role as the 23rd state to ratify Convention 108+, noting that only seven more ratifications are needed for it to enter fully into force. She advocated Convention 108+ as a global standard for data protection, capable of raising current standards without demanding complete homogeneity; instead, it offers common ground for nations to align on privacy matters.

What’s on your mind: Neuroscience and data protection. Moderated by Ella Mein

Marcello Ienca, a Professor of Ethics of AI and Neuroscience at the University of Munich, gave everyone in the audience a breakdown of how data and neuroscience intersect and the real-world implications for people’s privacy.

The brain, often described as the largest data repository in the world, presents a vast opportunity for exploration, and AI is acting as a catalyst in this process. Large language models are helping researchers decode the brain’s ‘hardware’ and ‘software’, although the full ‘language of thought’ remains elusive.

Neurotechnology raises real privacy and ethical concerns. For instance, the ability to identify biomarkers for conditions like schizophrenia or dementia introduces new vulnerabilities, such as the risk of ‘neuro discrimination’, where predicting a person’s illness might lead to stigmatisation or unequal treatment.

However, it is argued that understanding and predicting neurological conditions is important, as nearly every individual is expected to experience at least one neurological condition in their lifetime. As one panellist put it, ‘We cannot cure what we don’t understand, and we cannot understand what we don’t measure.’

This field also poses questions about data ownership and access. Who should have the ‘right to read brains’, and how can we ensure that access to such sensitive data, particularly emotions and memories unrelated to clinical goals, is tightly controlled? With the data economy in an ‘arms race’, there is a push to extract information directly from its source: the human brain.

As neurotechnology advances, balancing its potential benefits with safeguards will be important, so that innovation does not come at the cost of the individual privacy and autonomy mandated by law.

In addition to this breakdown, Jurisconsult Anna Austin explained the ECtHR’s legal background on the topic. A jurisconsult plays a key role in keeping the court informed by maintaining a network that monitors relevant case law from member states. Central to this discussion are questions of consent and waiver.

Under current ECtHR case law, any waiver must be unequivocal and fully informed, with the person fully understanding its consequences, a standard that can be challenging to meet. This high standard exists to safeguard fundamental rights, such as protection from torture and inhumane treatment, and to ensure the right to a fair trial. As it stands, she stated, there is no fully comprehensive waiver mechanism.

The right to a fair trial is an absolute right that needs to be understood in this context. One nuance here is therapeutic necessity, where forced medical interventions can be justified under strict conditions, with safeguards to ensure proportionality.

Yet concerns remain regarding self-incrimination under Article 6, particularly in scenarios where reading one’s mind could improperly compel evidence, raising questions about the abuse of such technologies.

Alessandra Pierucci from the Italian DPA raised the pertinent question of whether new laws should be created for this matter or whether existing ones are sufficient. Within the context of her work, the authority is developing a mental privacy risk assessment.

Beyond privacy: unveiling the true stakes of data protection. Moderated by Romain Robert

Nathalie Laneret, Vice President of Government Affairs and Public Policy at Criteo, presented her viewpoint on the role of AI and data protection. Addressing the balance between data protection and innovation, Laneret explained that these areas must work together.

She stressed the importance of finding ways to use pseudonymised data and of clear codes of conduct for businesses, pointing out that innovation is high on the European Commission’s political agenda.

Laneret addressed concerns about sensitive data, such as children’s data, highlighting Criteo’s proactive approach. With an internal ethics team, the company anticipated potential regulatory challenges around sensitive data, ensuring it stayed ahead of ethical and compliance issues.

In contrast, Max Schrems, Chair of noyb, offered a more critical perspective on data practices. He pointed out the economic disparity in the advertising model, explaining that while advertising generates only minimal revenue per user annually, users are often charged huge fees for their data. Schrems highlighted the importance of individuals having the right to give up their privacy if they choose, provided that consent is genuinely voluntary and freely given.

Forging the future: reinventing data protection? Moderated by Gabriela Zanfir-Fortuna

In this last panel, Johnny Ryan from the Irish Council for Civil Liberties painted a stark picture of the societal challenges tied to data misuse. He described a crisis fuelled by external influence, misunderstandings, and data being weaponised against individuals.

However, Ryan argued that the core issue is not merely the problems themselves but the fact that the EU lacks an effective and immediate response strategy. He stressed the need for swift protective measures, criticising the current underuse of interim tools that could mitigate harm in real time.

Nora Ni Loideain, Lecturer and Director of the Information Law and Policy Centre at the University of London, discussed the impact of the GDPR on data protection enforcement. She explained that DPAs had limited powers in the past: in events like the Cambridge Analytica scandal, for example, the UK’s data protection authority could fine Facebook only £500,000, owing to a lack of resources and authority.

This is where the GDPR has allowed DPAs to step up, with independence, greater resources, and stronger enforcement capabilities, significantly improving their ability to hold companies accountable for privacy violations.

Happy Data Protection Day 2025!

Legacy media vs social media and alternative media channels

In today’s digital age, the rapid proliferation of information has at once empowered and complicated the way societies communicate and stay informed. At its best, this interconnectedness fosters creativity, knowledge-sharing, and transparency. However, it also opens the floodgates for misinformation, disinformation, and the rise of deepfakes, tools that distort truth and challenge our ability to distinguish fact from fiction. These modern challenges are not confined to the fringes of the internet; they infiltrate mainstream platforms, influencing public opinion, political decisions, and cultural narratives on an unprecedented scale.

The emergence of alternative media platforms like podcasts, social media networks, and independent streaming channels has disrupted the traditional gatekeepers of information. While these platforms offer voices outside the mainstream a chance to be heard, they also often lack the editorial oversight of traditional media. This gap has created a complex media ecosystem where authenticity competes with sensationalism, and viral content can quickly overshadow fact-checking.

Content policy has become a battlefield, with platforms struggling to balance free expression and the need to curb harmful or deceptive narratives. The debate is further complicated by the increasing sophistication of deepfake technology and AI-generated content, which can fabricate convincing yet entirely false narratives. Whether it is a politician giving a speech they never delivered, a celebrity endorsing a product they have never used, or a manipulated video sparking social unrest, the stakes are high.

These challenges have sparked fierce debates among tech giants, policymakers, journalists, and users on who should bear responsibility for ensuring accurate and ethical content. Against this backdrop, recent high-profile incidents, such as Novak Djokovic’s response to perceived media bias, Joe Rogan’s defiance of traditional norms, and Elon Musk’s alleged ‘Nazi salute’, highlight the tension between established media practices and the uncharted territory of modern communication channels. These case studies shed light on the shifting dynamics of information dissemination in an era where the lines between truth and fabrication are increasingly blurred.

Case study No. 1: The Djokovic incident, traditional media vs social media dynamics

The intersection of media and public discourse took centre stage during the 2025 Australian Open when tennis icon Novak Djokovic decided to boycott an on-court interview with Channel 9, the official broadcaster of the tournament. The decision, rooted in a dispute over comments made by one of its journalists, Tony Jones, highlighted the ongoing tension between traditional media’s content policies and the freedom of expression offered by modern social media platforms.

The incident

On 19 January 2025, following his victory over Jiri Lehecka in the fourth round of the Australian Open, Novak Djokovic, the 24-time Grand Slam champion, refused to take part in the customary on-court interview for Channel 9, a long-standing practice in tennis that directly connects players with fans. The reason was not personal animosity towards the interviewer, Jim Courier, but rather remarks made by Channel 9 sports journalist Tony Jones. During a live broadcast, Jones had mocked Serbian fans chanting for Djokovic, calling the player ‘overrated’ and a ‘has-been’, and even suggested they ‘kick him out’, a phrase that resonated deeply given Djokovic’s deportation from Australia over vaccine mandate issues in 2022.

The response and social media amplification

In his post-match press conference, Djokovic clarified his stance, saying that he would not conduct interviews with Channel 9 until he received an apology from both Jones and the network for what he described as ‘insulting and offensive’ comments. The incident quickly escalated beyond the tennis courts when Djokovic took to X (formerly Twitter) to share a video explaining his actions, directly addressing his fans and the broader public. 

What happened was a protest against the Australian broadcaster and the strategic use of social media to bypass traditional media channels, often seen as gatekeepers of information with their own biases and agendas. The response was immediate; the video went viral, drawing comments from various quarters, including from Elon Musk, the owner of X. Musk retweeted Djokovic’s video with a critique of ‘legacy media’, stating, ‘It’s way better just to talk to the public directly than go through the negativity filter of legacy media.’ Djokovic’s simple reply, ‘Indeed’, underscored his alignment with this view, further fuelling the discussion about media integrity and control.

Content policy and misinformation

The incident brings to light several issues concerning content policy in traditional media. Traditional media like Channel 9 operate under strict content policies where editorial decisions are made to balance entertainment and journalistic integrity. However, remarks like those from Jones can blur this line, leading to public backlash and accusations of bias or misinformation.

The response from Channel 9, an apology after the public outcry, showcases the reactive nature of traditional media when managing content that might be deemed offensive or misleading, often after significant damage has already been done to public perception.

Unlike social media, where anyone can broadcast their viewpoint, traditional media has the infrastructure for fact-checking but can also be accused of pushing a narrative. The Djokovic case has raised questions about whether Jones’s comments were intended as humour or reflected a deeper bias against Djokovic or his nationality.

The role of social media

Social media platforms such as X enable figures like Djokovic to communicate directly with their audience, controlling their narrative without the mediation of traditional media. Direct public exposure can be empowering, but it can also bypass established journalistic checks and balances.

While this incident showcased the power of social media for positive storytelling, it also highlights the platform’s potential for misinformation. Without editorial oversight, messages can be amplified without context or correction, leading to public misinterpretation.

Case study No. 2: Alternative media and political discourse – The Joe Rogan experience

As traditional media grapples with issues of trust and relevance, alternative media platforms like podcasts have risen, offering new avenues for information dissemination. Joe Rogan’s podcast, ‘The Joe Rogan Experience’, has become a significant player in this space, influencing political discourse and public opinion, mainly through his interviews with high-profile figures such as Donald Trump and Kamala Harris.

Donald Trump’s podcast appearance

In 2024, Donald Trump’s appearance on Joe Rogan’s podcast was a pivotal moment, often credited with aiding his resurgence in the political arena and his eventual election as the 47th President of the USA. The podcast format made room for an extended, unscripted conversation in which Trump could discuss his policies, personality, and plans without the usual media constraints.

Unlike traditional media interviews, where questions and answers are often tightly controlled, Rogan’s podcast allowed Trump to engage with audiences more authentically, potentially influencing voters who felt alienated by mainstream media.

Critics argue that such platforms can spread misinformation due to the lack of immediate fact-checking. Yet, supporters laud the format for allowing a deeper understanding of the candidate’s views without the spin of journalists.

Kamala Harris’s conditional interview

Contrastingly, Kamala Harris’s approach to the same platform was markedly different. She requested special conditions for her interview, including pre-approved questions, which Rogan declined. Harris then chose not to participate, highlighting a critical difference in how politicians view and interact with alternative media. Her decision reflects a broader strategy among some politicians to control their media exposure, preferring environments where the narrative can be shaped to their advantage, which is often less feasible in an open podcast format.

Some might see her refusal as avoidance of tough, unfiltered questions, potentially impacting her public image as less transparent than figures like Trump, who embraced the platform.

Vladimir Klitschko’s interview on ‘The Joe Rogan Experience’

Adding another layer to this narrative, former Ukrainian boxer and political figure Vladimir Klitschko appeared on Rogan’s show, discussing his athletic career and geopolitical issues affecting Ukraine. This interview showcased how alternative media like podcasts can give a voice to international figures, offering a different perspective on global issues that might be underrepresented or misrepresented in traditional media.

Rogan’s discussions often delve into subjects with educational value, providing listeners with nuanced insights into complex topics, something traditional news might cover in soundbites.

Analysing media dynamics

Content policy in alternative media: While Rogan’s podcast does not adhere to the same content policies as traditional media, it does have its own set of guidelines, which include a commitment to free speech and a responsibility not to platform dangerous misinformation.

Fact-checking and public accountability: Unlike traditional media, where fact-checking can be institutional, podcast listeners often take on this role, leading to community-driven corrections or discussions on platforms like Reddit or X.

The spread of disinformation: Like social media, podcasts can be vectors of misinformation if not moderated or if hosts fail to challenge or correct inaccuracies. However, Rogan’s approach often includes challenging guests, providing a counterbalance.

Impact on journalism: The rise of podcasts challenges traditional journalism by offering alternative narratives, sometimes at the cost of depth or accuracy but gaining in terms of directness and personal connection with the audience.

Case study No. 3: Elon Musk and the ‘Nazi salute’

The evolution of media consumption has been profound, with the rise of social media and alternative channels significantly altering the landscape traditionally dominated by legacy media. The signs of this evolution are poignantly highlighted in a tweet by Elon Musk, where he commented on the dynamics of media interaction:

‘It was astonishing how insanely hard legacy media tried to cancel me for saying “my heart goes out to you” and moving my hand from my heart to the audience. In the end, this deception will just be another nail in the coffin of legacy media.’ – Elon Musk, 24 January 2025, 10:22 UTC 

Legacy media: the traditional gatekeepers

Legacy media, encompassing print, television, and radio, has long been the public’s primary source of news and information. These platforms have established content policies to ensure journalistic integrity, fact-checking, and editorial oversight. However, as Musk’s tweet suggests, they are often perceived as inherently biased, sometimes acting as ‘negativity filters’ that skew public perception. This critique reflects a broader sentiment that legacy media can be slow to adapt, overly cautious, and sometimes accused of pushing an agenda, as seen in Musk’s experience of being ‘cancelled’ over a simple gesture interpreted out of context. The traditional model involves gatekeepers who decide what news reaches the audience, which can lead to a controlled narrative that might not always reflect the full spectrum of public discourse. 

Modern social media: direct engagement

In contrast, social media platforms like X (formerly Twitter) democratise information dissemination by allowing direct communication from individuals to the public, bypassing traditional media gatekeepers. Musk’s use of X to address his audience directly illustrates this shift. Social media provides an unfiltered stage where public figures can share their stories, engage in real time, and counteract what they see as biased reporting from legacy media. This directness enhances transparency and authenticity but also poses significant challenges. Without the same level of editorial oversight, misinformation can spread rapidly, as social media algorithms often prioritise engagement over accuracy, potentially amplifying falsehoods or sensational content.

Alternative media channels: a new frontier

Beyond social media, alternative channels like podcasts, independent streaming services, and blogs have emerged, offering even more diverse voices and perspectives. These platforms often operate with less stringent content policies, emphasising freedom of speech and direct audience interaction. For instance, podcasts like ‘The Joe Rogan Experience’ have become influential by hosting long-form discussions that delve deeper into topics than typical news segments. This format allows for nuanced conversations but lacks the immediate fact-checking mechanisms of traditional media, relying instead on the community or the host’s discretion to challenge or correct misinformation. The rise of alternative media has challenged the monopoly of legacy media, providing platforms where narratives can be shaped by content creators themselves, often leading to a richer, albeit sometimes less regulated, exchange of ideas. 

Content policy and freedom of expression

The tension between content policy and freedom of expression is starkly highlighted in Musk’s tweet. Legacy media’s structured approach to content can sometimes suppress voices or misrepresent intentions, as Musk felt with his gesture. On the other hand, social media and alternative platforms offer broader freedom of expression, yet this freedom comes with the responsibility to manage content that might be misleading or harmful. The debate here revolves around how much control should be exerted over content to prevent harm while preserving the open nature of these platforms. Musk’s situation underscores the need for a balanced approach where the public can engage with authentic expressions without the distortion of ‘legacy media’s negativity filter’. 

To summarise:

The juxtaposition of Djokovic’s media strategies and the political interviews on ‘The Joe Rogan Experience’ illustrates a shift in how information is consumed, controlled, and critiqued. Traditional media continues to wield considerable influence but is increasingly challenged by platforms offering less censorship, potentially more misinformation, and direct, unfiltered communication. 

Elon Musk’s tweet is another vivid example of the ongoing battle between legacy media’s control over narrative and the liberating yet chaotic nature of modern social media and alternative channels. These platforms have reshaped the way information is consumed, offering both opportunities for direct, unmediated communication and challenges in maintaining the integrity of information. 

As society continues to navigate this complex media landscape, the balance between ensuring factual accuracy, preventing misinformation, and respecting freedom of speech will remain a critical discussion point. The future of media lies in finding this equilibrium, where the benefits of both traditional oversight (perhaps through stringent regulatory measures) and modern openness can coexist to serve an informed and engaged public.

DeepSeek: Speeding up the planet or levelling with ChatGPT?

Although the company’s name somewhat echoes that of the earlier-launched Google DeepMind, the new player in the market has sparked a surge in attention and public interest, becoming one of the biggest AI surprises on the planet upon its launch.

DeepSeek, a company headquartered in China, enjoys significant popularity primarily because its most sought-after features keep pace with giants like OpenAI and Google, and because of the far-from-negligible stock market changes that accompanied its arrival.

In the following points, we will explore these factors and what the future holds for this young company, particularly in the context of the dynamics between China and the US.

How did it start? Origins of DeepSeek

DeepSeek is a Chinese AI company based in Hangzhou, Zhejiang, founded by entrepreneur and businessman Liang Wenfeng. The company develops open-source LLMs and is owned by the Chinese hedge fund High-Flyer.

It all started back in 2015, when Liang Wenfeng co-founded High-Flyer. At first it was a startup, but in 2019 it grew into a hedge fund focused on developing and using AI trading algorithms. For the first two years, it used AI only for trading.

In 2023, High-Flyer founded a startup called DeepSeek, with Liang Wenfeng appointed CEO. Two years later, on 10 January 2025, DeepSeek announced the release of its first free-to-use chatbot app. In just 17 days, the app surpassed its main competitor, ChatGPT, as the most downloaded free app in the US, causing an unprecedented stir in the market.

Unprecedented impact on the market

Few missed the launch of the DeepSeek model, and the stock market felt the impact, as did some of the biggest giants.

For instance, the value of Nvidia shares dropped by as much as 18%. Similar declines were experienced by giants like OpenAI and Google, as well as by other AI companies focused on small and medium-sized enterprises.

On top of this, there is justified concern among investors, who could quickly shift their focus and redirect their investments; this could lead to an even more significant drop in the shares of the largest companies.

Open-source approach

DeepSeek embraces an open-source philosophy, making its AI algorithms, models, and training details freely accessible to the public. The company stated that it is committed to transparency and fosters collaboration among developers and researchers worldwide. They also advocate for a more inclusive and innovative AI ecosystem.

Their strategy has the potential to reshape the AI landscape, as it empowers individuals and organisations to contribute to the evolution of AI technology. DeepSeek’s initiative highlights the importance of open collaboration in driving progress and solving complex challenges in the tech industry.
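
In practice, ‘open source’ here means anyone can download the published weights and run or fine-tune them locally. As a purely illustrative sketch (the model id below is our assumption; any open checkpoint on the Hugging Face Hub works the same way), loading such a model with the transformers library might look like this:

```python
# Illustrative sketch: pulling open model weights from the Hugging Face Hub
# and generating text. The model id is an assumed example, not an endorsement.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-llm-7b-chat"  # hypothetical/illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What does open-source AI mean?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```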


With the growing demand for ethical and transparent AI development, DeepSeek’s open-source model sets a precedent for the industry. The company paves the way for a future where AI breakthroughs are driven by collective effort rather than proprietary control.

Cheaper AI model that shook the market

By being cheaper than the competition, DeepSeek has opened the doors of the AI market to many other companies that do not have as much financial power. As Dr Jovan Kurbalija, executive director of Diplo, says in his blog post titled ‘How David outwits Goliath in the age of AI?’, ‘the age of David challenging Goliath has arrived in AI’.

For individuals, this means monthly costs are reduced by 30% to 50%, which can be, and often is, the biggest incentive for users looking to save.

The privileges once enjoyed by those with greater financial resources are now available to those who want to advance their small and medium-sized businesses.

Cyber threats and challenges faced by DeepSeek

Shortly after its launch, DeepSeek faced a significant setback when it was revealed that an error had exposed sensitive information to the public.

This raised alarms for many, especially as its immense popularity had seen the AI Assistant downloaded from the App Store more times than OpenAI’s offering, and a large amount of data became accessible.

Experts have expressed concerns that others may have accessed the leaked data. The company has not yet commented on the incident, and the system’s vulnerability gives hacking groups a foothold to exploit.

DeepSeek vies for the top spot, ChatGPT defends the throne

The AI race is heating up as DeepSeek challenges industry leader ChatGPT, aiming to claim the top spot in AI. With its open-source approach, DeepSeek is rapidly gaining attention by making its models and training methods publicly available, fostering innovation and collaboration across the AI community.

The race was further spiced up by DeepSeek’s claim that it built an AI model on par with OpenAI’s ChatGPT for under $6 million (£4.8 million). In comparison, Microsoft, OpenAI’s main partner, plans to invest around $80 billion in AI infrastructure this year.


As DeepSeek pushes forward with its transparent and accessible model, the battle for AI supremacy intensifies. Whether openness will outmatch ChatGPT’s established presence remains to be seen, but one thing is sure—the AI landscape is evolving faster than ever.

Why is DeepSeek gaining popularity in 2025?

DeepSeek has emerged as a major player in AI by embracing an open-source philosophy, making its models and training data freely available to developers. This transparency has fuelled rapid innovation, allowing researchers and businesses to build upon its technology and contribute to advancements in AI.

Unlike closed systems controlled by major tech giants, DeepSeek’s approach promotes accessibility and collaboration, attracting a growing community of AI enthusiasts. Its cost-effective development, reportedly achieving results comparable to top-tier models with significantly lower investment, has also drawn attention.

As the demand for more open and adaptable AI solutions rises, DeepSeek’s commitment to shared knowledge positions it as a strong contender in the industry. Whether this strategy will redefine the AI landscape remains to be seen, but its growing influence in 2025 is undeniable.

DeepSeek in the future: Development, features, and strategies

Now that it has experienced ‘overnight success,’ the Chinese company aims to push DeepSeek to the top and position it among the most powerful AI firms in the world.

Users can definitely expect many advanced features that will fuel a fierce battle with giants like DeepMind and ChatGPT.

Strategically, DeepSeek will attempt to break into the American market and offer more financially accessible solutions, forcing the key players to make significant cuts.

DeepSeek is undoubtedly a real hit in the market, but it remains to be seen whether price is the only measure of its success.

Whether it will make a leap in its own technology and completely outpace the competition or remain shoulder to shoulder with the giants—or even falter—will be revealed in the near future.

One thing is sure: the Chinese company has seriously shaken up the market, which will need considerable time to recover.

Can quantum computing break the cryptocurrency’s code?

The digital revolution has ushered in remarkable innovations, and quantum computing is emerging as one of its brightest stars. As this technology begins to showcase its immense potential, questions are being raised about its impact on blockchain and cryptocurrency. With its ability to tackle problems once thought unsolvable, quantum computing is redefining the limits of computational power.

At the same time, its rapid advancements leave many wondering whether it will bolster the crypto ecosystem or undermine its security and decentralised nature. Can this computing breakthrough empower crypto, or does it pose a threat to its very foundations? Let’s dive deeper. 

What is quantum computing? 

Quantum computing represents a groundbreaking leap in technology. Unlike classical computers that process data in binary (0s and 1s), quantum computers use qubits, capable of existing in multiple states simultaneously due to quantum phenomena such as superposition and entanglement.
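
To make superposition concrete, here is a minimal, illustrative Python sketch (a classical simulation with numpy, not a real quantum device): a qubit starts in the basis state |0>, a Hadamard gate puts it into an equal superposition, and the Born rule gives a 50/50 measurement outcome.

```python
import numpy as np

# A qubit is a unit vector in C^2; |0> and |1> are the basis states.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate maps |0> to the equal superposition (|0> + |1>) / sqrt(2).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0

# Born rule: measurement probabilities are the squared amplitudes.
print(np.abs(psi) ** 2)  # [0.5 0.5] -- undetermined until measured
```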

For example, Google’s new chip, Willow, is claimed to solve a problem in just five minutes—a task that would take the world’s fastest supercomputers approximately ten septillion years—highlighting the extraordinary power of quantum computing and fuelling further debate about its implications. 

These advancements enable quantum machines to handle problems with countless variables, benefiting fields such as electric vehicles, climate research, and logistics optimisation. While quantum computing promises faster, more efficient processing, its intersection with blockchain technology adds a layer of complexity, and the story takes an interesting twist.


How does quantum computing relate to blockchain?

Blockchain technology relies on cryptographic protocols to secure transactions and ensure decentralisation. Cryptocurrencies like Bitcoin and Ethereum use elliptic curve cryptography (ECC) to safeguard wallets and transactions through mathematical puzzles that classical computers cannot solve quickly. 

Quantum computers pose a significant challenge to these cryptographic foundations. Their advanced processing power could potentially expose private keys or alter transaction records, threatening the trustless environment that blockchain depends upon.

Opportunities: Can crypto benefit from quantum computing? 

While the risks are concerning, quantum computing offers several opportunities to revolutionise blockchain: 

  • Faster transactions: Quantum algorithms could significantly accelerate transaction validation, addressing scalability challenges. 
  • Enhanced security: Developers can leverage quantum principles to create stronger, quantum-secure algorithms. 
  • Smarter decentralisation: Quantum-powered computations could enhance the functionality of smart contracts and decentralised apps (DApps). 

By embracing quantum advancements, the blockchain industry could evolve to become more robust and scalable—hopefully great news for the crypto community, which is optimistic about the potential for progress.

How does quantum computing threaten cryptocurrency? 

Despite its potential benefits, quantum computing poses significant risks to the cryptocurrency ecosystem, depending on how it is used and who controls it: 

  1. Breaking public-key cryptography
    Quantum computers running Shor’s algorithm could break ECC and RSA encryption; tasks that would take classical computers millennia could be accomplished by a quantum computer in mere hours. This capability threatens to expose private keys, allowing hackers to access wallets and steal funds (see the sketch after this list).
  2. Mining oligopoly 
    The mining process, vital for cryptocurrency creation and transaction validation, depends on computational difficulty. Quantum computers could dominate mining activities, disrupting the decentralisation and fairness fundamental to blockchain systems.
  3. Dormant wallet risks
    Wallets with exposed public keys, particularly older ones, are at heightened risk. A quantum attack could compromise these funds before users can adopt protective measures.
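
For a feel of why Shor’s algorithm is so dangerous, the toy sketch below shows its classical, number-theoretic skeleton on a tiny modulus; the order-finding step in the middle is exactly the part a quantum computer accelerates exponentially. The helper names and numbers are our own illustrative choices.

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n) -- the step a quantum
    computer performs exponentially faster via quantum Fourier sampling."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int) -> int:
    """Classical skeleton of Shor's algorithm for a toy modulus n."""
    assert gcd(a, n) == 1
    r = order(a, n)
    if r % 2:
        raise ValueError("odd order; retry with a different a")
    y = pow(a, r // 2, n)
    if y == n - 1:
        raise ValueError("trivial square root; retry with a different a")
    return gcd(y - 1, n)

print(shor_factor(15, 7))  # 3 -- a nontrivial factor of 15
```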

With projections suggesting that quantum computers capable of breaking current encryption standards could emerge within 10–20 years—or perhaps even sooner—the urgency to address these threats is intensifying.

Solutions: Quantum-resistant tokens and cryptography


Where there is a challenge, there is a solution. The crypto industry is proactively addressing quantum threats with quantum-resistant tokens and post-quantum cryptography. Lattice-based cryptography, for example, builds on mathematical problems believed to be too hard even for quantum computers, with projects like CRYSTALS-Kyber leading the charge. Hash-based methods, such as QRL’s XMSS, ensure data integrity, while code-based cryptography, like the McEliece system, protects messages by deliberately adding errors that only the key holder can remove. Multivariate polynomial cryptography also adds robust defences through systems of complex equations.
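
To give a flavour of the hash-based family, here is a minimal Lamport one-time signature in Python, the conceptual building block behind stateful schemes such as XMSS (an illustrative sketch, not the XMSS construction itself). Its security rests only on the hash function’s preimage resistance, which quantum computers weaken far less than they weaken ECC or RSA.

```python
import hashlib, secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen(bits: int = 256):
    # Secret key: two random 32-byte preimages per digest bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    # Public key: the hashes of those preimages.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(msg: bytes, sk):
    # Reveal one preimage per bit of the message digest (hence one-time use).
    d = int.from_bytes(H(msg), "big")
    return [sk[i][(d >> i) & 1] for i in range(len(sk))]

def verify(msg: bytes, sig, pk) -> bool:
    d = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(d >> i) & 1] for i in range(len(pk)))

sk, pk = keygen()
sig = sign(b"quantum-resistant hello", sk)
print(verify(b"quantum-resistant hello", sig, pk))  # True
```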

As we can see, promising solutions are already actively working to uphold blockchain principles. These innovations are crucial not only for securing crypto assets but also for maintaining the integrity of blockchain networks. Quantum-resistant measures ensure that transaction records remain immutable, safeguarding the trust and transparency that decentralised systems are built upon.

The quantum future for crypto 

Quantum computing holds tremendous promise for humanity, but it also brings challenges, particularly for blockchain and cryptocurrency. As its capabilities grow, the risks to existing cryptographic protocols become more apparent. However, the crypto community has shown remarkable resilience, with quantum-resistant technologies already being developed to secure the ecosystem. This cycle of threats and solutions is a perpetual motion—each technological advancement introduces new vulnerabilities, met with equally innovative defences. It is the inevitable price to pay for embracing the modern decentralised finance era and the transformative potential it brings. 

The future of crypto does not have to be at odds with quantum advancements. With proactive innovation, collaboration, and the implementation of quantum-safe solutions, blockchain can survive and thrive in the quantum era. So, is quantum computing a threat to cryptocurrency? The answer lies in our ability to adapt. After all, with great power comes great responsibility—and opportunity. 

The global regulatory landscape of crypto: Between innovation and control

Blockchain and cryptocurrencies: transformative forces in modern economies

Blockchain is a digital ledger technology that records transactions securely, transparently, and immutably. It functions as a decentralised database, distributed across a network of computers, where data is stored in blocks linked together in chronological order. Each block contains a set of transactions, a timestamp, and a unique cryptographic hash that connects it to the previous block, forming a continuous chain.


The decentralised nature of blockchain means that no single entity has control over the data, and all participants in the network have access to the same version of the ledger. This structure ensures that transactions are tamper-proof, as altering any block would require changing all subsequent blocks and gaining consensus from the majority of the network. Cryptographic techniques and consensus mechanisms, such as proof of work or proof of stake, secure the blockchain, verifying and validating transactions without the need for a central authority.
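
A minimal sketch of these two mechanisms, hash-linked blocks and proof of work, might look like this in Python (illustrative only; real chains add Merkle trees, difficulty adjustment, and peer-to-peer consensus):

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    # Hash the block's contents deterministically.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash: str, transactions: list, difficulty: int = 4) -> dict:
    """Proof of work: search for a nonce so the hash starts with zeros."""
    block = {"time": time.time(), "tx": transactions,
             "prev": prev_hash, "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

genesis = mine("0" * 64, ["coinbase -> alice"])
block2 = mine(block_hash(genesis), ["alice -> bob: 5"])

# Tampering with the first block changes its hash and breaks the link.
genesis["tx"][0] = "coinbase -> mallory"
print(block_hash(genesis) == block2["prev"])  # False -- the chain detects it
```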

Initially introduced as the underlying technology for Bitcoin in 2009, blockchain has since evolved to support a wide range of applications beyond cryptocurrencies. It enables smart contracts—self-executing agreements coded directly onto the blockchain—and has found applications in industries such as finance, supply chain management, healthcare, and voting systems. Blockchain’s ability to provide transparency, enhance security, and reduce the need for intermediaries has positioned it as a transformative technology with the potential to reshape the way information and value are exchanged globally.

Cryptocurrency is a form of digital or virtual currency that relies on cryptography for security and operates on decentralised networks, typically powered by blockchain technology. Unlike traditional currencies issued and regulated by governments or central banks, cryptocurrencies are not controlled by any central authority, which makes them resistant to censorship and manipulation.

At its core, cryptocurrency functions as a digital medium of exchange, allowing individuals to send and receive payments directly without the need for intermediaries like banks. Transactions are recorded on a blockchain, ensuring transparency, immutability, and security. Each user has a unique digital wallet containing a private key, which grants them access to their funds, and a public key, which serves as their address for receiving payments.
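
The key mechanics can be sketched in a few lines of Python, here assuming the third-party ecdsa package (pip install ecdsa); the ‘address’ derivation is deliberately simplified, as real wallets add further hashing and encoding steps:

```python
import hashlib
from ecdsa import SigningKey, SECP256k1  # secp256k1: the curve Bitcoin uses

# Private key: a random scalar. Public key: a point on the curve.
sk = SigningKey.generate(curve=SECP256k1)
vk = sk.get_verifying_key()

# A toy wallet "address": a hash of the public key (simplified).
address = hashlib.sha256(vk.to_string()).hexdigest()[:40]

# Spending means signing a transaction with the private key...
tx = b"pay 0.1 BTC to bob"
signature = sk.sign(tx)

# ...which anyone can verify against the public key alone.
print(address, vk.verify(signature, tx))  # <address> True
```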

Cryptocurrencies often rely on consensus mechanisms like proof of work or proof of stake to validate transactions and maintain the integrity of the blockchain. Bitcoin, the first cryptocurrency, was launched by the pseudonymous Satoshi Nakamoto to create a decentralised and transparent financial system. Since then, thousands of cryptocurrencies have emerged, each with its own unique features and use cases, ranging from smart contracts on Ethereum to stablecoins designed to minimise price volatility.


Cryptocurrencies can be used for various purposes, including online payments, investments, remittances, and decentralised finance. While they offer benefits such as lower transaction fees, financial sovereignty, and global accessibility, they also face challenges like regulatory uncertainty, price volatility, and scalability issues. Despite these challenges, cryptocurrencies have become a transformative force in the global economy, driving innovation and challenging traditional financial systems.

Regulation necessity

The need for cryptocurrency regulation arises from the rapid growth and widespread adoption of digital assets, which present both opportunities and risks for individuals, businesses, and governments. While cryptocurrencies offer numerous benefits, such as financial inclusion, decentralised finance, and cross-border transactions, their unique characteristics also create challenges that necessitate oversight to ensure the integrity, stability, and safety of financial systems.

One primary reason for regulation is to protect consumers and investors. The crypto market is highly volatile, with prices often experiencing extreme fluctuations. This instability exposes investors to significant risks, and the lack of oversight has led to numerous cases of fraud, scams, and Ponzi schemes. Regulation can establish safeguards, such as requiring exchanges to implement transparency, security measures, and fair practices, which help protect users from financial losses.

Another critical driver for regulation is the need to combat illicit activities. The pseudonymous nature of cryptocurrencies can make them attractive for money laundering, terrorist financing, tax evasion, and other illegal purposes. By enforcing Know Your Customer (KYC) and Anti-Money Laundering (AML) requirements, regulators can minimise these risks and ensure that digital assets are not exploited for unlawful activities.

Regulation is also necessary to enhance market stability and confidence. The crypto space has seen incidents such as exchange hacks, sudden bankruptcies, and the collapse of major projects, which have caused significant disruptions and undermined trust in the ecosystem. Regulatory frameworks can help ensure the resilience and security of the infrastructure supporting cryptocurrencies, fostering a more stable environment.

Furthermore, as cryptocurrencies increasingly integrate into the global economy, regulation is vital to maintain financial stability. Unregulated digital assets could potentially disrupt traditional economic systems, challenge monetary policies, and create systemic risks. By introducing clear rules for the interaction between cryptocurrencies and traditional finance, regulators can prevent market manipulation and mitigate risks to the broader economy.

Finally, regulatory clarity can encourage legitimacy and adoption. A well-regulated crypto market can attract institutional investors, foster innovation, and create opportunities for businesses while addressing the concerns of sceptics and governments. Clear and consistent regulatory frameworks can also ensure fair competition and enable the crypto industry to coexist with traditional financial systems.


Cryptocurrency regulation is necessary to protect users, prevent misuse, stabilise markets, safeguard economies, and promote broader adoption. Striking the right balance is essential to supporting innovation while addressing risks, enabling cryptocurrencies to realise their full potential as a transformative financial tool.

The future of crypto regulation worldwide

Global crypto regulation is a complex and evolving landscape, as governments and regulatory bodies around the world approach the issue with varying degrees of acceptance, restriction, and oversight. Cryptocurrencies, by their nature, operate on decentralised networks that transcend borders, making regional or national regulation a challenging task for policymakers. Governments worldwide are introducing rules to govern digital assets, with organisations like the International Organization of Securities Commissions (IOSCO) and the World Economic Forum (WEF) emphasising the need for consistent global standards. IOSCO has outlined 18 key recommendations for managing crypto and digital assets, while the WEF’s Pathways to the Regulation of Crypto-Assets provides an overview of recent regulatory developments and highlights the necessity of international alignment in overseeing this rapidly evolving industry.

Although regulatory discussions around crypto assets have been ongoing for years, recent crises, including the collapse of crypto-friendly banks and platforms like FTX, have heightened the urgency for clear rules. These incidents have accelerated the drive for stricter accounting and reporting standards.

Some countries have adopted pro-crypto stances, recognising the technology’s potential for economic growth and innovation. These nations often implement clear regulatory frameworks that encourage blockchain development and crypto adoption while addressing risks such as fraud, money laundering, and tax evasion. For instance, countries like Switzerland, Singapore and El Salvador have established themselves as crypto-friendly hubs by offering favourable regulatory environments that support blockchain startups and initial coin offerings (ICOs).


Conversely, other nations take a more restrictive approach, either banning cryptocurrencies outright or imposing strict controls. Many countries have implemented comprehensive bans on cryptocurrency trading and mining, citing concerns over financial stability, capital flight, and environmental impacts. Some governments are cautious about the use of cryptocurrencies in illicit activities such as money laundering and terrorism financing, leading to calls for stricter KYC and AML requirements.

At the international level, organisations such as the Financial Action Task Force (FATF) have introduced guidelines aimed at harmonising cryptocurrency regulations across borders. These guidelines focus on combating financial crimes by requiring cryptocurrency exchanges and service providers to implement measures such as customer identification and transaction reporting. In addition to regulating existing cryptocurrencies, many central banks are exploring the development of Central Bank Digital Currencies (CBDCs). These government-backed digital currencies aim to provide the benefits of cryptocurrencies, such as faster payments and increased financial inclusion, while maintaining centralised control and regulatory oversight.

Overall, global cryptocurrency regulation is dynamic and fragmented, reflecting the varying priorities and perspectives of different jurisdictions. While some countries embrace cryptocurrencies as tools for innovation and financial empowerment, others prioritise control and risk mitigation. The future of crypto regulation is likely to involve a blend of international cooperation and national-level policymaking, as regulators strive to strike a balance between fostering innovation and addressing the challenges posed by this transformative technology.

Let us examine a few examples of regulations.

US cryptocurrency regulation progress

The United States has made slow but steady progress toward establishing a regulatory framework for cryptocurrencies. Legislative efforts like the Financial Innovation and Technology for the 21st Century Act (FIT21) and the Blockchain Regulatory Certainty Act aim to define when cryptocurrencies are classified as securities or commodities and clarify regulatory oversight. Although these bills have yet to gain significant traction, they lay the foundation for future advancements in crypto regulation.

However, Donald Trump’s incoming administration has pledged to position the US as a global leader in cryptocurrency innovation. Plans include creating a Bitcoin strategic reserve, revitalising crypto mining, and pursuing deregulation. The expected nomination of cryptocurrency advocate Paul Atkins as SEC chair has fuelled optimism within the industry, raising hopes for a more collaborative and forward-thinking approach to digital asset regulation.


While deregulation is a priority, the sector still requires new rules to address its complexities. Key areas for clarification include defining when crypto assets qualify as securities under the Howey test and refining enforcement strategies to focus on fraud prevention without stifling innovation. Addressing the treatment of secondary crypto trading under securities laws could further enhance the competitiveness of US-based exchanges and keep crypto projects in the country.

By balancing deregulation with essential safeguards, the incoming administration could foster an environment of growth and innovation while ensuring compliance and investor protection. The groundwork being laid today may help shape a thriving future for the US cryptocurrency landscape.

Russia strengthens crypto rules

Russia has taken a significant step in regulating cryptocurrency by introducing new rules aimed at integrating digital assets into its financial system while maintaining economic stability. As of 11 January 2025, the Bank of Russia requires contracts involving digital rights—such as cryptocurrencies, tokenised securities, and digital tokens—used in foreign trade to be registered with authorised banks. This applies to import contracts exceeding RUB 3 million and export contracts over RUB 10 million, underscoring the country’s intent to balance oversight with operational efficiency in international trade.


The regulations also mandate residents to provide detailed documentation on crypto transactions tied to these contracts. These include records of digital asset transfers or receipts used as payments, along with information on related foreign exchange operations. This level of scrutiny is designed to enhance transparency and mitigate risks, reflecting Russia’s broader goal of establishing a secure and efficient framework for digital assets.

While the move could promote wider adoption of cryptocurrencies by offering regulatory clarity, it also imposes additional compliance obligations on businesses and investors. As digital assets gain prominence in the global economy, Russia aims to leverage their potential while ensuring they are used responsibly within its financial system.

The Bank of Russia’s initiative represents a pivotal moment in the evolution of the nation’s digital financial landscape. Market participants will need to adapt to these changes and navigate the new regulatory environment as Russia positions itself at the forefront of crypto regulation.

China’s complex crypto landscape

China has had a complicated relationship with cryptocurrency, once holding the largest market for Bitcoin transactions globally before a crackdown began in 2017. Despite these regulatory restrictions, the blockchain industry in China remains a leader, with over 5,000 blockchain-related companies. China’s government continues to restrict domestic cryptocurrency trading and initial coin offerings (ICOs), citing concerns over volatility, anonymous transactions, and lack of centralised control. However, major blockchain companies like Binance and Huobi remain influential, and China still leads in blockchain projects globally.

Legally, China does not recognise cryptocurrencies as legal tender; instead, it treats them as virtual commodities. Since 2013, the government has implemented several regulations aimed at restricting cryptocurrency trading and protecting investors, including a ban on domestic cryptocurrency exchanges and ICOs, as well as on the participation of financial institutions in cryptocurrency activities. Although the country has not passed comprehensive cryptocurrency legislation, the government has consistently emphasised that trading virtual currencies carries risks for individuals.


China has also addressed the taxation of cryptocurrency profits. Income generated from trading virtual currencies is subject to individual income tax, specifically categorised as ‘property transfer income’. Tax authorities require individuals to report the purchase price and pay the corresponding taxes, with the government stepping in to determine the price if proof is not provided. The approach demonstrates China’s ongoing control over cryptocurrency activities within its borders.

Despite the regulatory restrictions, China’s blockchain sector remains robust and influential. The government is clearly focused on managing the risks associated with digital currencies while fostering blockchain innovation, which is likely to continue to influence global cryptocurrency trends.

EU’s comprehensive crypto framework

At the forefront of regulatory efforts is the European Union, which unveiled its comprehensive framework, the Markets in Crypto-Assets Regulation (MiCA), in 2020. After nearly three years of development, MiCA was approved by the European Parliament in April 2023, with full application set for 30 December 2024. The framework aims to create legal clarity and consistency across the EU, streamlining the regulatory approach to crypto assets. Before MiCA, crypto firms in the EU had to navigate a complex landscape of varying national regulations and multiple licensing requirements; the new legislation provides a unified licensing structure that applies across all 27 member states.


MiCA applies to all crypto assets that fall outside traditional EU financial regulations, covering everything from electronic money tokens (EMTs) and asset-referenced tokens (ARTs) to other types of crypto assets. These assets are defined based on how they function and how they are backed. EMTs, for example, are digital assets backed by a single fiat currency, while ARTs are pegged to a basket of assets. MiCA does not automatically apply to non-fungible tokens (NFTs) unless they share characteristics with other regulated assets. Additionally, decentralised applications (dApps), decentralised finance (DeFi) projects, and decentralised autonomous organisations (DAOs) may fall outside MiCA’s scope, provided they are genuinely decentralised; projects that do not meet the decentralisation criteria may still be caught.

Businesses that offer crypto-asset services, known as crypto-asset service providers (CASPs), are at the heart of MiCA’s regulatory scope. These include entities involved in cryptocurrency exchanges, wallet services, and crypto trading platforms. Under MiCA, CASPs will need to obtain authorisation to operate across the EU, with a unified process that eliminates the need for multiple licenses in each country. Once authorised, these businesses can offer services across the entire EU, provided they comply with requirements around governance, capital, anti-money laundering, and data protection.

MiCA also introduces important provisions for stablecoins, particularly fiat-backed stablecoins, which must be backed by a 1:1 liquid reserve. However, algorithmic stablecoins—those that do not have explicit reserves tied to traditional assets—are banned. Issuers of EMTs and ARTs will be required to obtain authorisation and provide whitepapers, outlining the characteristics of the assets and the risks to prospective buyers. MiCA’s regulations are designed to protect consumers, reduce market manipulation, and ensure that crypto activities remain secure and transparent.

This regulatory shift is expected to reshape the crypto landscape in the EU, offering businesses and consumers clearer protections and encouraging market integrity. As MiCA comes into effect in 2025, its impact is likely to reverberate beyond Europe, as other nations look to adopt similar frameworks for managing digital assets.

Japan’s evolving crypto regulations

Japan is considering lighter regulations for cryptocurrency intermediaries that are not crypto exchanges. The Financial Services Agency (FSA) recently proposed this to the Financial System Council, following Japan’s early cryptocurrency regulation after the Mt. Gox hack. Currently, crypto intermediaries such as apps or wallets that connect users to exchanges must register as crypto asset exchange service providers (CAESPs), but many do not handle customer funds directly.


To reduce the regulatory burden, the FSA is exploring a system where intermediaries would register, provide user information, follow advertising restrictions, and potentially be liable for damages. They might also be required to maintain a security deposit, with exchanges absorbing liability for affiliated intermediaries. This proposal aims to create a more flexible regulatory framework for crypto-related businesses that do not operate exchanges.

Brazil’s new crypto market law

In late 2022, the National Congress approved a bill regulating the cryptocurrency market, focusing on areas like competition, governance, security, and consumer protection. The Central Bank of Brazil (BCB) and the Securities and Exchange Commission (CVM) will oversee its implementation. While there was no specific crypto regulation before, the new law will require companies, including exchanges, to obtain licenses, register with the Brazilian National Registry of Legal Entities (CNPJ), and report suspicious activities to the Council for Financial Activities Control (COAF).


The regulation mandates KYC (Know Your Customer) and KYT (Know Your Transaction) practices to combat money laundering. It also aligns with the Brazilian Penal Code, enforcing penalties for fraud and related crimes. Notably, a requirement that exchanges separate client assets from company assets is not yet included in the law; it has been proposed by the Brazilian Association of Cryptoeconomics (ABCripto).
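KYT means screening transaction patterns rather than customer identities. Below is a minimal sketch, assuming a simple rule set; the threshold and watchlist entries are invented for illustration, and actual reporting criteria are set by COAF.

  def flag_for_report(amount_brl: float, counterparty: str,
                      watchlist: set[str], threshold_brl: float = 50_000.0) -> bool:
      """Flag a transaction for a suspicious-activity report (illustrative rules only)."""
      return amount_brl >= threshold_brl or counterparty in watchlist

  WATCHLIST = {"addr_sanctioned_1", "addr_mixer_2"}  # hypothetical entries
  print(flag_for_report(120_000.0, "addr_ok", WATCHLIST))     # True: above threshold
  print(flag_for_report(1_000.0, "addr_mixer_2", WATCHLIST))  # True: watchlisted counterparty
  print(flag_for_report(1_000.0, "addr_ok", WATCHLIST))       # False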

The law was set to take effect between May and June 2023, with full implementation, including licensing rules, expected by 2025. While the decentralised nature of the global crypto market presents challenges, the new regulatory framework aims to offer greater security and attract more investors to the growing Brazilian crypto market.

UK push for crypto regulation

The United Kingdom has taken significant steps to regulate digital currencies, mandating that any company offering such services must obtain proper authorisation from the Financial Conduct Authority (FCA). This regulation is part of a broader effort to establish a clear and secure framework for digital assets, including cryptocurrencies and digital tokens, within the UK financial ecosystem. One area of particular focus is stablecoins, which are digital currencies pegged to stable assets, such as the US dollar or the British pound. Stablecoins have garnered attention for their potential to revolutionise the payments sector by offering faster and cheaper transactions compared to traditional payment methods.

 Logo, Dynamite, Weapon, QR Code, Maroon, Text

The Bank of England has proposed new regulations specifically targeting stablecoins to maximise their benefits while addressing potential risks. These proposed rules aim to strike a balance between encouraging innovation in digital payments and ensuring the financial system’s stability. The regulations are designed to ensure that stablecoins do not pose risks to consumer protection or the integrity of the financial market, particularly in terms of preventing money laundering and illicit financial activities.

This move highlights the UK’s proactive approach to digital asset regulation, aiming to foster a secure environment where cryptocurrencies and blockchain technologies can thrive without undermining the broader financial infrastructure. The efforts also underscore the UK’s commitment to consumer protection, ensuring that individuals and businesses engaging with digital currencies are properly safeguarded. With this comprehensive regulatory approach, the UK is positioning itself as a leader in the integration of digital currencies into traditional finance, setting a precedent for other nations exploring similar regulatory frameworks.

Kenya’s crypto regulation attempt

Kenya’s journey with cryptocurrency regulation has evolved from scepticism to a more open stance as the government recognises its potential benefits. Initially, in the early 2010s, cryptocurrencies like Bitcoin were viewed with caution by the Central Bank of Kenya (CBK), citing concerns over volatility, fraud, and lack of consumer protection. This led to a public warning against the use of virtual currencies in 2015. However, the growing global interest in digital currencies, including in Kenya, continued, with nearly 10% of Kenyans owning cryptocurrency by 2022, driven by factors such as financial inclusion and the appeal of blockchain technology.


A turning point for Kenya came in 2018, when the government set up a task force to explore blockchain and the potential of AI, building on the success of mobile money services like M-Pesa. By 2023, the country began assessing money laundering risks associated with virtual assets, signalling a shift in attitude toward cryptocurrencies. By December 2024, the government introduced a draft National Policy on Virtual Assets and Virtual Asset Service Providers (VASPs), outlining a regulatory framework to guide the development of the market.

The proposed regulations include licensing requirements for cryptocurrency exchanges and wallet providers, as well as measures to prevent money laundering and counter terrorist financing. Consumer protection and cybersecurity are also central to the framework, ensuring that users’ funds and personal data are safeguarded. The draft regulations are open for public consultation until 24 January 2025, with the government seeking input from industry players, consumer groups, and the public.

Kenya’s path from opposition to embracing cryptocurrency reflects a broader trend towards digital financial innovation. By creating a balanced regulatory environment, Kenya hopes to position itself as a leader in Africa’s digital financial revolution, fostering economic growth and financial inclusion, much like the success it achieved with M-Pesa.

The need for a global approach

As we already explained, the international nature of cryptocurrency markets presents unique regulatory challenges. Cross-border activities increase the risk of fraud and investor harm, highlighting the necessity of consistent global standards. The WEF emphasises that international collaboration is “not just desirable but necessary” to maximise the benefits of blockchain technology while mitigating risks.


Differences in market maturity, regulatory capacity, and regional priorities complicate alignment. However, organisations such as IOSCO and the Financial Stability Board (FSB) stress the role of international bodies and national regulators in fostering a unified regulatory framework. A global approach would not only enhance consumer protections but also create an environment conducive to innovation, ensuring the responsible evolution of cryptocurrency markets.

As the crypto ecosystem evolves, governments and international organisations are working to balance innovation and regulation. By addressing the challenges posed by digital assets through comprehensive, coordinated efforts, the global community aims to create a stable and secure financial environment in the digital age.

The US clock strikes ‘ban or divest TikTok’

TikTok faces an uncertain future as the US government’s 19 January 2025 deadline approaches, demanding ByteDance divest its US operations or face a nationwide ban. The ultimatum, backed by the Supreme Court’s apparent readiness to uphold the decision, appears to be the culmination of years of scrutiny over the platform’s data practices and ties to China. Amid this mounting pressure, reports suggest Elon Musk, the owner of X (formerly Twitter), could acquire TikTok’s US operations, a proposal that has sparked debates about its feasibility and geopolitical implications.

Now, let’s see how it began.

How did the TikTok odyssey begin?

The story of TikTok began in 2014 with Musical.ly, a social media app enabling users to create and share lip-sync videos. Founded in Shanghai, it quickly gained traction among US and European teenagers. By 2017, Musical.ly had over 100 million users and caught the attention of ByteDance, a Chinese tech giant that acquired it for $1 billion. In 2018, ByteDance merged Musical.ly with its domestic app Douyin, launching TikTok for international audiences. Leveraging powerful machine-learning algorithms, TikTok’s ‘For You Page’ became its defining feature, captivating users with an endless stream of personalised content.


By 2018, TikTok had become one of the most downloaded apps globally, surpassing giants like Facebook and Instagram. Its cultural influence exploded, reshaping how content was created and consumed. From viral dance challenges to comedic skits, TikTok carved out a unique space in the digital world, particularly among younger users. However, its meteoric rise also brought scrutiny. Concerns emerged over user data privacy and potential manipulation by its parent company ByteDance, which critics claimed had ties to the Chinese government.

The ‘ban or divest’ saga

The origins of the current conflict can be traced back to 2020, when then-President Donald Trump attempted to ban TikTok and Chinese-owned WeChat, citing fears that Beijing could misuse US data or manipulate public discourse through the platforms. The courts blocked Trump’s effort, and in 2021, President Joe Biden revoked the Trump-era orders but ordered his own review of TikTok’s data practices, keeping the platform under scrutiny. Despite these challenges, TikTok continued to grow, surpassing 1 billion active users by 2021. It implemented community guidelines and transparency measures to address content moderation and concerns about misinformation, and planned to store US user data on Oracle-operated servers to mitigate fears of Chinese government access. However, bipartisan concerns over TikTok’s influence persisted, especially regarding its ties to the Chinese government and potential data misuse. Lawmakers and US intelligence agencies have long raised alarms about the vast amount of data TikTok collects on its US users and the potential for Beijing to exploit this information for espionage or propaganda. Last year, Congress therefore passed a bill with overwhelming support requiring ByteDance to divest its US assets, marking the strictest legal threat the platform has ever faced.

The 19 January 2025 deadline and the rumours about Elon Musk’s potential acquisition of TikTok

By 2024, TikTok was at the centre of a geopolitical storm. The US government’s demand for divestment or a ban by 19 January 2025 intensified the platform’s challenges. Amid these disputes, Elon Musk, owner of X (formerly Twitter), has emerged as a potential buyer for TikTok’s US operations. Musk’s ties to both US and Chinese markets via Tesla’s Shanghai production hub position him as a unique figure in this debate. If Musk were to acquire TikTok, it could bolster X’s advertising reach and data capabilities, aligning with his broader ambitions in AI and technology. However, such a sale would involve overcoming numerous hurdles, including ByteDance’s valuation of TikTok at $40–50 billion and securing regulatory approvals from both Washington and Beijing. ByteDance, backed by Beijing, is resisting the sale, arguing that the forced divestiture violates free speech and poses significant logistical hurdles.


TikTok has attempted to safeguard its US user base of 170 million by planning to allow users to download their data in case the ban takes effect. It has also reassured its 7,000 US employees that their jobs and benefits are secure, even if operations are halted. While new downloads would be prohibited under the ban, existing users could retain access temporarily, although the platform’s functionality would degrade over time.

The looming deadline has sparked a surge in alternative platforms, such as RedNote (known in China as Xiaohongshu), which has seen a significant influx of US users in anticipation of TikTok’s potential exit.

TikTok’s cultural legacy and future

The fate of TikTok in the US hangs in the balance as President-elect Donald Trump considers an executive order to delay the enforcement of the ‘ban or divest’ law by up to 90 days. The potential extension, supported by figures from both political sides, including Senate Majority Leader Chuck Schumer and Trump’s incoming national security adviser Mike Waltz, aims to provide ByteDance, TikTok’s Chinese owner, additional time to divest its US operations and avoid a nationwide ban. With over 170 million American users and substantial ad revenue at risk, lawmakers are increasingly wary of the disruption a ban could cause, signalling bipartisan support to keep the app operational while addressing national security concerns. TikTok CEO Shou Zi Chew’s attendance at Trump’s inauguration further hints at a shift in relations between the platform and the new administration. Meanwhile, the uncertainty has already driven US users to explore alternatives like RedNote as the clock ticks down to the Sunday deadline.

Either way, TikTok’s impact on culture and technology is undeniable. It has redefined digital content creation and inspired competitors like Instagram Reels and YouTube Shorts. Yet, its journey highlights the challenges of navigating geopolitical tensions and concerns over data privacy in a hyper-connected world. As the 19 January deadline looms, TikTok stands at a crossroads. Whether it becomes part of Musk’s tech empire, succumbs to a US ban, or finds another path, its legacy as a trailblazer in short-form video content remains secure. The platform’s next chapter, however, hangs in the balance, as these TikTok developments underscore the broader implications of its struggles, including the reshaping of the social media landscape and the role of government intervention in regulating digital platforms.

OEWG’s ninth substantive session: Limited progress in discussions

The UN Open-Ended Working Group (OEWG) on the security of and in the use of information and communications technologies in 2021–2025 held its ninth substantive session on 2-6 December 2024. 

During the session, states outlined cooperative measures to counter cyber threats, continued discussions on possible new norms, sought additional layers of understanding on international law, discussed elements of the future permanent mechanism, CBM implementation and the operationalisation of the POC Directory, deliberated on the development and operationalisation of the Global Portal on Cooperation and Capacity-Building and the Voluntary Fund, and debated the shape of the UN mechanism that will succeed the OEWG 2021–2025.

While there was consensus on certain broad goals, contentious debates highlighted deep divisions, particularly regarding the applicability of international law, the role of norms, and the modalities of stakeholder participation.

Some of the main takeaways from this session are:

  • The threat landscape is rapidly evolving and with it, the OEWG discussions on threats, including measures to counter those threats.
  • The discussion on norms backslides into old disputes, namely the implementation of existing norms vs the development of new norms, in which states hold their old positions. However, the discussion is not entirely static, as many proposals for new norms have emerged. 
  • While the discussions on international law have deepened, and the states have presented very detailed views, there is still no agreement on whether new legally binding regulations for cyberspace are needed.
  • The discussions on CBMs included numerous practical recommendations pertaining to CBM implementation, the sharing of best practices and the operationalisation of the POC directory.
  • Opinions differ on several issues regarding capacity building, including specific details on the structure and governance of the proposed portal, the exact parameters of the voluntary fund, and how to effectively integrate existing capacity-building initiatives without duplication.
  • States disagreed on the scope of thematic groups in the future mechanism: while some countries insist on keeping the traditional pillars of the OEWG agenda (threats, norms, international law, CBMs and capacity building), others advocate for a more cross-cutting and policy-oriented nature for such groups. The modalities of multistakeholder engagement in the future mechanism are also up in the air. The agenda for the next meeting of the OEWG in February 2025 will likely be inverted, with delegations starting with discussions on regular institutional dialogue to ensure enough time is dedicated to this most pressing issue.

Threats: A rapidly evolving threat landscape

Discussions on threats have become more detailed – almost one-fourth of the session was dedicated to this topic. The chair noted that this reflects the rapidly evolving threat landscape, but also signals a growing comfort among states in candidly addressing these issues.

What is particularly interesting about this session is that states dedicated just as much—if not more—time to discussing cooperative measures to counter these threats as to outlining the threats themselves.

Threats states face in cyberspace

Emerging technologies, including AI, quantum computing, blockchain, and the Internet of Things (IoT), took centre stage in discussions. Delegates broadly acknowledged the dual-use nature of these innovations. On one hand, they offer immense developmental potential; on the other, they introduce sophisticated cyber risks. Multiple states, including South Korea, Kazakhstan, and Canada, highlighted how AI intensifies cyber risks, particularly ransomware, social engineering campaigns, and sophisticated cyberattacks. Concerns about AI misuse include threats to AI systems (Canada), generative AI amplifying attack surfaces (Israel), and adversarial manipulations such as prompt injections and model exfiltration (Bangladesh). 

Nations including Guatemala and Pakistan stressed the risks of integrating emerging technologies into critical systems, warning that without regulation, these systems could enable faster and more destructive cyberattacks. 

Despite the risks, states like Israel and Paraguay recognised the positive potential of AI in strengthening cybersecurity and called for harnessing its benefits responsibly. Countries like Italy and Israel called for international collaboration to ensure safe and trustworthy development and use of AI, aligning with human rights and democratic values.

Ransomware remains one of the most significant and prevalent cyber threats, as multiple delegations highlighted. Switzerland and Ireland flagged the growing sophistication of ransomware attacks, with the rise of ransomware-as-a-service lowering barriers for cybercriminals and enabling the proliferation of such threats. The Netherlands and Switzerland noted ransomware’s profound consequences on societal security, economic stability, and human welfare. Countries including Italy, Germany, and Japan emphasised ransomware’s disruptive impact on critical infrastructure and essential services, such as hospitals and businesses.

Critical infrastructure has become an increasingly prominent target for cyberattacks, with threats stemming from both cybercriminals and state-sponsored actors. Essential services such as healthcare, energy, and transportation are particularly affected. The EU, along with countries such as the Netherlands, Switzerland, and the USA, has also raised concerns about malicious activities disrupting essential services and international organisations, including humanitarian agencies.

Countries such as Ireland, Canada, Argentina, Fiji and Vanuatu raised alarms about the rising number of cyber incidents targeting critical subsea infrastructure. Undersea cables are vital for global communication and data transfer, and any disruption could have severe consequences. Ireland called for further examination of the particular vulnerabilities and threats to critical undersea infrastructure, the roles of states and the private sector in the operation and security of such infrastructure, and the application of the international law that must govern responsible state use and activity in this area.

Germany and Bangladesh highlighted the role of AI in automating disinformation campaigns, scaling influence operations and tailoring misinformation to specific cultural contexts. Countries such as China, North Korea and Albania noted the rampant spread of false narratives and misinformation, emphasising their ability to manipulate public opinion, influence elections, and undermine democratic processes. Misinformation is weaponised in various forms, including phishing attacks and social media manipulation. Misinformation and cyberattacks are increasingly part of broader hybrid threats, aiming to destabilise societies, weaken institutions, and interfere with electoral processes (Albania, Ukraine, Japan, Israel, and the Netherlands). Several countries, including Cuba, Russia, and Bangladesh, stressed how cyber threats, including disinformation and ICT manipulation, are used to undermine the sovereignty of states, interfere in internal affairs, and violate territorial integrity. Countries like Israel and Pakistan warned of the malicious use of bots, deepfakes, phishing schemes, and misinformation to influence public opinion, destabilise governments, and compromise national security. Bosnia highlighted the complexity of these evolving threats, which involve both state and non-state actors working together to destabilise countries, weaken trust, and undermine democratic values.

Cyber operations in the context of armed conflict are no longer a novel concept but have become routine in modern warfare, with enduring consequences, according to New Zealand. Similar observations were made by countries such as the USA, Germany, Albania, North Korea and Pakistan. A worrisome development was brought forth by Switzerland, which noted the involvement of non-state actors in offensive actions against ICTs within the framework of armed conflict between member states.

Countries are also increasingly concerned about the growing sophistication of hacking-as-a-service, malware, phishing, trojans, and DDoS attacks. They are also concerned about the use of cryptocurrencies for enhanced anonymity. Israel also highlighted that the proliferation and availability of advanced cyber tools in the hands of non-state actors and unauthorised private actors constitute a serious threat. The proliferation of commercial cyber intrusion tools, including spyware, is raising alarm among nations like Japan, Switzerland, the UK and France. The UK and France emphasised that certain states’ failure to combat malicious activities within their territories exacerbates the risks posed by these technologies. Additionally, Kazakhstan warned about advanced persistent threats (APTs) exploiting vulnerable IoT devices and zero-day vulnerabilities.

Cuba rejected the militarisation of cyberspace, offensive operations, and information misuse for political purposes. They called for peaceful ICT use and criticised media platforms for spreading misinformation. The UK emphasised states’ responsibilities to prevent malicious activities within their jurisdiction and to share technical information to aid network defenders. Russia warned against hidden functions in ICT products used to harm civilian populations, calling for accountability from countries enabling such activities. Colombia suggested that states which have been the victims of cyberattacks could consider the possibility of undertaking voluntary peer reviews, where they would share their experiences, including lessons learned, challenges, and protocols for protection, response, and recovery.

Cooperative measures to counter threats

Most countries noted the role of capacity building in enabling states to protect themselves. The EU called for coordinated capacity-building efforts and for more reflection on best practices and practical examples. Capacity-building initiatives should align with regional and national contexts, Switzerland and Kazakhstan noted; Kazakhstan added that they should focus on identifying vulnerabilities, conducting cyberattack simulations, and developing robust countermeasures. Colombia highlighted that states should express their needs for capacity building so that the available supply can be adequately identified. Malawi and Guatemala advocated for capacity building, partnerships with international organisations, and knowledge-sharing between governments, the private sector, and academia. Albania emphasised the importance of UN-led training initiatives for technical and policy-level personnel.

The discussions highlighted the urgent need to bridge the technological divide, enabling developing countries to benefit from advancements and manage cyber risks. Vanuatu emphasised the importance of international capacity-building and cooperation to ensure these nations can not only benefit from technological advancements but also manage the associated risks effectively. Zimbabwe called for the OEWG to support initiatives that provide technical assistance and training, empowering developing nations to build robust cybersecurity frameworks. Cuba reinforced this by advocating for the implementation of technical assistance mechanisms that enhance critical infrastructure security, respecting the national laws of the states receiving assistance. Nigeria stressed the importance of equipping personnel in developing countries with the skills to detect vulnerabilities early and deploy preventive measures to safeguard critical information systems.

States also noted that the topic of threats must be included in the new mechanism. Mexico proposed creating a robust deliberative space within the mechanism to deepen understanding and foster cooperation, enhancing capacities to counter ICT threats. Sri Lanka supported reviewing both existing and potential ICT threats within the international security context of the new mandate. Brazil suggested the future mechanism should incorporate dedicated spaces for sharing threats, vulnerabilities, and successful policies. Some countries gave concrete suggestions for thematic groups on threats under the new mechanism. For instance, France highlighted that sector-specific discussions on threats and resilience could serve as strong examples for thematic groups within the future mechanism. Colombia called for a standing thematic working group focused on areas like cyber incident management, secure connectivity technologies (e.g., 5G), and policies for patching and updates. Singapore emphasised using future discussions to focus on building an understanding of emerging technologies and their governance. Egypt advocated for a flexible thematic group on threats within the mechanism, capable of examining ICT incidents with political dimensions. New Zealand recommended focusing discussions on cross-cutting themes such as critical infrastructure, enabling states to better understand and mitigate threats. Cuba echoed the importance of the future permanent mechanism taking into account the protection of critical infrastructure, and underscored the importance of supporting developing countries with limited resources to protect critical infrastructure.

Delegations highlighted the Global Point of Contact (POC) Directory as a key tool for enhancing international cooperation on cybersecurity. Ghana, Argentina and Kazakhstan emphasised its role in facilitating information exchange among technical and diplomatic contacts to address cyber threats. South Africa proposed using the POC Directory for cybersecurity training and for sharing experiences with technologies like AI. Chile stressed that the POC Directory can play a central role in improving cyber intelligence capacity and coordinating responses to large-scale incidents. Malaysia called for broader participation and active engagement in POC activities.

Several countries emphasised the importance of strengthening collaboration among national Computer Emergency Response Teams (CERTs). Ghana and New Zealand supported CERT-to-CERT cooperation, with Ghana calling for sharing best practices. Nigeria suggested creating an international framework for harmonising cyber threat responses, including strategic planning and trend observation. Singapore highlighted timely and relevant CERT-related information sharing and capacity building as key to helping states, especially smaller ones, mitigate threats. Fiji prioritised capacity building for CERTs.

Several nations, including Argentina, Sri Lanka, and Indonesia, called for establishing a global platform for threat intelligence sharing. These platforms would enable real-time data exchange, incident reporting, and coordinated responses to strengthen collective security. Such mechanisms, built on mutual trust, would also facilitate transparency and enhance preparedness for emerging cyber challenges. Switzerland voiced support for discussing the platform but also noted that exchanging each member state’s perception of the identified threats can happen through bilateral, regional, or multilateral collaboration forums, or simply by making a member state’s findings publicly accessible.

Egypt noted that there must also be discussions on both the malicious use of ICTs by non-state actors and the role and responsibilities of the private sector in this regard.

Countries like El Salvador and Ghana underscored the importance of integrating security and privacy by design approaches into all stages of system development, ensuring robust protections throughout the lifecycle of ICT systems.

Building shared resilience in cyberspace hinges on collective awareness of threats and vulnerabilities. Bosnia stressed collaboration as essential, while Moldova and Albania highlighted the need for education and awareness campaigns to engage governments, private entities, and civil society. Vietnam advocated using international forums and UN agencies like the ITU to bolster critical infrastructure resilience. Similarly, Paraguay called for raising awareness of covert information campaigns, which may evolve into cyber incidents and tools for cyberattacks. Zimbabwe emphasised the critical importance of operationalising CBMs to foster trust and cooperation among nations in cyberspace. Belgium and Egypt emphasised the need to focus on the human impact of cyber threats and to use methodologies measuring harm to victims.

Norms: New norms vs norms’ implementation

The discussions on norms highlighted once again the division of states on binding vs voluntary norms, as well as on the implementation of existing norms vs the development of new norms.

The chair invited all delegations to reflect on how states can bridge the divides, on whether the discussion on new norms means that states are not prioritising implementation, and on whether states can do both. The chair reminded delegations that ideas for new norms have come from delegations, but also from stakeholders. He also noted that some delegations have said it is too late to discuss new norms because the process is concluding (e.g. Canada); however, he recalled that when states began the process, some delegations said it was too early to start such a discussion because it was important to focus on implementation. The chair concluded by noting that ‘it’s never a good time and it’s always a good time’.

The main disagreement remained binding vs voluntary norms, and the implementation of existing norms vs the development of new ones. Some states, including Zimbabwe, Russia, and Belarus, advocate for the development of a legally binding international instrument to govern ICT security and state behaviour. They argue that existing voluntary norms are insufficient to address emerging threats.

However, the discussion also served as a platform for new proposals from delegations to achieve a safe and secure cyber environment.  

Some states also proposed specific new norms to address emerging challenges:

  • El Salvador suggested recognising the role of ethical hackers in cybersecurity.
  • Russia proposed several new norms, including:
    • The sovereign right of each state to ensure the security of its national information space as well as to establish norms and mechanisms for governance in its information space in accordance with national legislation.
    • Prevention of the use of ICTs to undermine and infringe upon the sovereignty, territorial integrity and independence of states as well as to interfere in their internal affairs.
    • Inadmissibility of unsubstantiated accusations brought against states of organising and committing wrongful acts with the use of ICTs, including computer attacks, followed by the imposition of various restrictions such as unilateral economic measures and other response measures.
    • Settlement of interstate conflicts through negotiations, mediation, reconciliation or other peaceful means of the state’s choice including through consultations with the relevant national authorities of states involved.
  • Belarus suggested new norms which could include the norm of national sovereignty, the norm of non-interference in internal affairs, and the norm of exclusive jurisdiction of states over the ICT sphere within the bounds of their territory.
  • China noted that new norms could be developed for data security, supply chain security, and the protection of critical infrastructure, among others.

In addition to this, some states proposed amending or enhancing the existing norms:

  • The EU would like to see greater emphasis on the protection of all critical infrastructure supporting essential public services, particularly medical and healthcare facilities, along with enhanced cooperation between states. The EU also wants a priority focus on the critical infrastructure norms 13 (f), (g) and (h).
  • El Salvador proposed strengthening privacy protections under Norm E, which Malaysia, Singapore and Australia supported. 
  • The UK suggested a new practical action under Norm I, recommending that states safeguard against the illegitimate and malicious use of commercially available ICT intrusion capabilities by ensuring that their development, dissemination, purchase, export or use is consistent with international law, including the protection of human rights and fundamental freedoms; Canada, Switzerland, Malaysia, Australia and France supported this.
  • Kazakhstan proposed:
    • adding a focus on strengthening personal data protection measures through the development and enforcement of comprehensive data protection laws to safeguard personal data from unauthorised access, misuse, or exploitation under Norm E
    • emphasising the importance of conducting international scenario-based discussions that simulate ICT-related disruptions under Norm G
    • establishing unified baseline cybersecurity standards under Norm G, enabling all states, irrespective of their technological development, to protect their critical infrastructure effectively
    • promoting ethical guidelines for the development and use of technologies such as AI under Norm K
  • Canada suggested adding text under norm G: ‘Cooperate and take measures to protect international and humanitarian organizations against malicious cyber activities which may disrupt the ability of these organizations to fulfill their respective mandates in a safe, secure and independent manner and undermine trust in their work’

In contrast, other states such as the US, Australia, UK, Canada, Switzerland, Italy and others opposed the creation of new binding norms and highlighted the necessity to prioritise the implementation of the existing voluntary framework.

In between these two poles, some states favoured parallel development, arguing that the implementation of existing norms and the development of new ones can proceed simultaneously. These states included Singapore, China, Indonesia, Malaysia, Brazil, and South Africa.

Egypt questioned if states need to discuss enacting a mix of both binding and non-binding measures to deal with the increasing and rapid development of threats, as well as suggested that states might consider developing a negative list of actions that states are required to refrain from.

Japan called for priority to be given to implementing the norms in a more concrete way. Russia called for the same and suggested that states present a review of the compliance of their national legislation and doctrinal documents with the rules, norms, and principles of behaviour in the field of international information security (IIS) approved by the UN. Russia submitted its own review of national compliance with the agreed norms.

International law: applicability to use of ICTs in cyberspace

More than fifty member states delivered statements in the discussions on international law, including several small and developing states that had not previously done so.

The discussions highlighted the diverse national and regional perspectives on the application of international law, especially the Common African Position on the application of international law in cyberspace, and the EU’s Declaration on a Common Understanding of International Law in Cyberspace. Tonga, on behalf of the 14 Pacific Island Forum member states, presented a position on international law affirming that international law, including the UN Charter in its entirety, is applicable in cyberspace. Fiji, on behalf of a cross-regional group of states that includes Australia, Colombia, El Salvador, Estonia, Kiribati, Thailand, and Uruguay, recalled a working paper reflecting additional areas of convergence on the application of international law in the use of ICTs.

As mentioned by Canada, Ireland, France, Switzerland, Australia, and others, these statements add momentum to the OEWG’s efforts to build common understandings of international law, as over a hundred states have now individually or collectively published their positions.

Applicability of international law to cyberspace

Despite the many published statements and intensified discussions, the major rift between states persists. On the one hand, the vast majority of member states call for discussions on how international law applies in cyberspace and see no need to negotiate new legally binding regulations. On the other hand, some states want to see the development of new legally binding regulations (Iran, also recalling requests by the countries of the Non-Aligned Movement; Cuba, on behalf of the delegations of the Bolivarian Republic of Venezuela and Nicaragua; as well as Russia, China, and Pakistan).

The majority of states addressed the need to emphasise the applicability of international humanitarian law in the cyber context (EU, Lebanon, the USA, Australia, Poland, Finland, the Republic of Korea, Japan, Malawi, Egypt, Sri Lanka, Brazil, South Africa, the Philippines, Ghana, and others), recalling the Resolution on protecting civilians and other protected persons and objects against the potential human cost of ICT activities during armed conflict, adopted by consensus at the 34th International Conference of the Red Cross and Red Crescent, as a major step forward.

The EU, Colombia, El Salvador, Uruguay, Australia, Estonia, and others expressed regret that the third Annual Progress Report (APR3) did not include a reference to international humanitarian law and called for it to be included in the final OEWG report.

Other topics

The states also shared which topics in international law should be discussed in more detail. State responsibility, sovereignty and sovereign equality, and attribution and accountability were the most frequently mentioned. Member states differed on whether international law and norms should be discussed within a single thematic track in the future mechanism.

On capacity building in international law, scenario-based exercises received overwhelming support, with Ghana and Sierra Leone recalling the importance of South-South cooperation and regional capacity-building efforts.

One of the main deciding factors for the future of discussions on international law will be whether states decide to establish, under the future permanent mechanism, a dedicated group on international law. That would allow states to maintain the status quo until the end of the OEWG’s mandate and defer the issue to the next mechanism.

CBMs: Implementing the CBMs and operationalising the POC directory

This session was marked by noticeable activity in the CBM domain – from both developed and developing states – with substantial side events, dedicated conferences and cross-regional meetings organised throughout the year. The letter sent by the chair in mid-November channelled pragmatic discussions, and the session produced numerous practical recommendations pertaining to CBM implementation, the sharing of best practices and the operationalisation of the POC directory.

A new dynamic concerning CBMs is emerging now that additional CBMs no longer appear to be a concern. Further implementation of CBMs is likely to rely on capillarity, i.e. gradual diffusion through regional and cross-regional networks. First, from the general CBM implementation point of view, capillarity is expected through states’ sustained commitment to sharing best practices cross-regionally, as shown by the inter-regional conference on cybersecurity organised by the Republic of Korea and North Macedonia, which brought together the OSCE, OAS, ECOWAS and the African Union. Second, new levels of participation in the POC directory have been specifically linked to such initiatives and to more general capacity building, to which states are strongly encouraged to contribute.

CBMs implementation and sharing of best practices

Whereas the guiding questions provided by the chair were oriented towards the implementation of existing CBMs, a few new CBMs and measures were nevertheless proposed, though not extensively picked up or discussed by most delegations. The well-worn question of shared technical terminology was brought back to the table solely by Paraguay, and Thailand mentioned an additional measure on CERT-to-CERT cooperation. Iran proposed a ninth CBM on facilitating access to the ICT security market with a view to mitigating potential supply chain risks. El Salvador and Malaysia recommended adding the voluntary identification of critical infrastructure and critical information infrastructure to the current phrasing of CBM 7.

Focusing on implementation, Switzerland shared an OSCE practice called ‘Adopt-a-CBM’, in which one or several states adopt a CBM and commit to its implementation, and suggested that CBMs 2, 5, 7 and 8 would be suitable for this approach. Kazakhstan advised something similar: focusing on specific CBMs and engaging with individual states to promote them. Indonesia and El Salvador outlined numerous ways to foster the implementation of CBMs, among them the sharing of practices that could feed into guidelines serving as a practical reference for member states.

Substantive engagement by various states was noted, especially in the sharing of specific practices pertaining to each CBM. Whereas most of these practices are usually confined to regional frameworks, it is noticeable that numerous states exchanged best practices at an ever more global level through the application of CBM 6 on the organisation of workshops, seminars and training programmes with inclusive representation of states (Germany, Korea, Peru, Fiji and the UK) and CBM 2 on the exchange of views and dialogue from bilateral to cross-regional and multilateral levels (Germany, Peru, and Moldova). Some states also shared their application of CBM 5 on promoting information exchange on cooperation and partnership between states to strengthen capacity building (Korea, Peru). More specific exchanges of best practice on the protection of CI and CII (CBM 7) were also undertaken by several states (Malaysia, Fiji, and the UK). Finally, CBM 8 on strengthening public-private partnership and cooperation was also fostered by several states (Korea, Albania, and the UK).

POC directory operationalisation 

At the time of the 9th substantive meeting, 111 countries had joined the POC directory. Most states sharing insights on ways to increase participation suggested raising awareness through workshops, webinars and side events (for instance, Albania and Kazakhstan). At this level of participation, it is reasonable to think that any increase in participating states should be considered a matter of capacity-building (South Africa).

Still, some states have already started sharing their experience with the use of the POC directory, and the feedback could hardly be more contrasting. On the one hand, Russia stated that it had already encountered problems when cooperating on incident response through the POC directory, given that some contacts did not work and some technical POCs had powers too limited to respond to notifications. Consequently, it recommended that determining the scope of competence of each POC should be the first priority task, a position supported only by Slovakia. On the other hand, France shared that it had received several requests for communication since the creation of the directory and had answered all of them positively. Russia and China urged other states to actively use the POC directory; France nevertheless advocated not exploiting and overusing the tool, at the risk of making it inoperable.

Lines of division nevertheless sometimes fade, and the one around the template question was definitely less stark than last session, with only a few states expressing reluctance to build such a template (Switzerland and Israel). Contributions ranged from general opinions about the format of the template to the very details of its content. Most delegates advocated flexible and voluntary templates (Indonesia, Malaysia, Singapore, Thailand, the Netherlands and Paraguay), a framing justified as better accommodating different institutional frameworks as well as local and regional concerns (Brazil, Thailand, the Netherlands, and Singapore). All states nevertheless reasserted the necessity for the template to be as simple as possible, whether for capacity-building and resource reasons (Kiribati and Russia) or for emergency use (Brazil, Paraguay, and Thailand). South Africa, supported by Brazil, proposed that the template should at a minimum provide a brief description of the nature of assistance sought, details of the cyber incident, acknowledgement of receipt by the requested state, and indicative response timeframes. Indonesia added to this list the response actions taken, requests for technical assistance or additional information, and emergency contact options. Finally, Kazakhstan suggested numerous examples of templates, each dedicated to a scenario such as incident escalation, threat intelligence, CBM reporting, POC verification, capacity building, cross-border incident coordination, annual reporting, and lessons learned. The Secretariat is still expected to produce such a template by April 2025, and the chair expressed his intention to have standardised templates as an outcome of the July report.
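Pulling the delegations’ suggestions together, a request-for-assistance template might look roughly like the sketch below. The structure and field names are our own illustrative rendering of the South African and Indonesian proposals in Python, not an agreed UN format; the default timeframe is a placeholder.

  from dataclasses import dataclass, field

  @dataclass
  class AssistanceRequest:
      # Minimum fields proposed by South Africa (supported by Brazil):
      nature_of_assistance: str           # brief description of the assistance sought
      incident_details: str               # details of the cyber incident
      receipt_acknowledged: bool = False  # acknowledgement of receipt by the requested state
      indicative_response_days: int = 3   # indicative response timeframe (placeholder value)
      # Additions suggested by Indonesia:
      response_actions_taken: list[str] = field(default_factory=list)
      technical_assistance_requests: list[str] = field(default_factory=list)
      emergency_contacts: list[str] = field(default_factory=list)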

Capacity building: Trust fund and Global Cyber Security Cooperation Portal (GCSCP)

As usual, capacity building is one of the topics where there is a high level of consensus, albeit in broad strokes. There isn’t a single delegation denying the importance of capacity building to enhance global cybersecurity. However, opinions differ on several issues, including specific details on the structure and governance of the proposed portal, the exact parameters of the voluntary fund, and how to effectively integrate existing capacity-building initiatives without duplication. It is expected that the OEWG will continue to speak about these issues at length in order to have concrete details in its July 2025 Annual Progress Report (APR) and to allow the future mechanism to dive deeper into capacity building.

During the December session, delegations discussed the development and operationalisation of the Global Portal on Cooperation and Capacity-Building. Most delegations envisioned the portal as a neutral, member-state-driven platform that would adapt dynamically to an evolving ICT environment, integrating modules like the needs-based catalogue to guide decision-making and track progress, as well as Kuwait’s latest proposal to add a digital tool module to streamline norm adoption. In contrast, Russia expressed concerns over the exchange of data on ICT incidents through the portal, stating that such data is confidential and could be used to level politically motivated accusations.

The session also discussed the creation of a Voluntary Contribution Fund to support capacity building in the future permanent mechanism. South Africa and other delegations highlighted the need for clearly defined objectives, governance, and operational frameworks to ensure the fund’s efficiency and transparency. Monitoring mechanisms were deemed essential to guarantee alignment with objectives. Delegates broadly agreed on avoiding duplication of efforts, emphasising that the portal and the fund should complement existing initiatives such as the UNIDIR cyber policy portal, the GFCE civil portal, and the World Bank Cyber Trust Fund, rather than replicate their functions or those of regional organisations.

Further deliberations addressed the timing of the next High-Level Global Roundtable on capacity building. The roundtable’s potential overlap with the 2025 Global Conference on Cyber Capacity Building in Geneva presented scheduling challenges, prompting consideration of a 2026 date. Discussions on UNODA’s mapping exercise revealed mixed views: while it highlighted ongoing capacity-building efforts, many felt it inadequately identified gaps, leading to calls for a yearly mapping exercise. 

Finally, multistakeholder engagement emerged as a contentious issue, with Canada and the UK criticising the exclusion of key organisations like FIRST and the GFCE from formal sessions. Delegates called for reforms to ensure broader, more inclusive participation from non-governmental and private sector entities essential to global cybersecurity efforts.

Regular institutional dialogue: Thematic groups and multistakeholder participation

During the last substantive session in July 2024, states adopted the third Annual Progress Report (APR) which contained some modalities of the future regular institutional dialogue (RID) mechanism. One substantive plenary session, at least a week long, will be held annually to discuss key topics and consider thematic group recommendations. States decided that thematic groups within the mechanism would be established to allow for deeper discussions. The chair may convene intersessional meetings for additional issue-specific discussions. A review conference every five years will monitor the mechanism’s effectiveness, provide strategic direction, and decide on any modifications by consensus. 


At the December 2024 substantive session, states continued discussing the number and scope of dedicated thematic groups and modalities of stakeholder participation.

Thematic groups in the future mechanism

There was a general divergence between states regarding the scope of thematic groups. Russia, Cuba, Iran, China, and Indonesia insisted on keeping traditional pillars of the OEWG agenda (threats, norms, international law, CBMs and capacity building). However, the EU, Japan, Guatemala, the UK, Thailand, Chile, Argentina, Malaysia, Israel, and Australia advocated for a more cross-cutting and policy-oriented nature of such groups. 

France and Canada gave suggestions in that vein. France suggested creating three groups that would discuss (a) building the resilience of cyber ecosystems and critical infrastructures, (b) cooperation in the management of ICT-related incidents, and (c) prevention of conflict and increasing stability in cyberspace. Canada suggested addressing practical policy objectives, such as protecting critical infrastructure and assisting states during a cyber incident, including through focused capacity building. The USA suggested the same two groups and highlighted that the new mechanism should maintain the best of the OEWG format while also allowing for more in-depth discussion via cross-cutting working groups on specific policy challenges.

The chair noted that the pillars could help organise future plenary sessions and that cross-cutting groups do not have to signal the end of pillars.

Some states asked for a dedicated group on the applicability of international law (Switzerland, Singapore), but Australia objected. Also, states proposed a dedicated group to create a legally binding mechanism (Cuba, Russia, Iran, South Africa, Thailand). Israel suggested having rotating agendas for thematic groups to keep their number limited.

Multistakeholder participation in the future mechanism

One issue that the OEWG has been struggling with from the start is the modalities of multistakeholder engagement. The extent and nature of stakeholder participation was an issue at this session as well. The EU called for meaningful stakeholder participation without a veto from a single state. Canada proposed an accreditation process for stakeholders while emphasising that states would retain decision-making power. Mexico proposed creating a multistakeholder panel to provide inputs on agenda items and suggested considering the UN Framework Convention on Climate Change model for stakeholder participation. Israel suggested adopting stakeholder modalities similar to the Ad Hoc Committee on Cybercrime. In contrast, Iran and Russia argued for maintaining current OEWG modalities, limiting stakeholder participation to informal, consultative roles on technical matters.

A number of questions remain open, the Chair noted. For instance, is there a need for a veto mechanism for stakeholder participation in the future process? If yes, is there a need for an override mechanism, or a screening mechanism? Is there a need for identical modalities for stakeholder participation in different parts of the future process?

As for the timing of meetings, states expressed concerns that sessions are too lengthy and that attending numerous thematic sessions and intersessionals would burden small state delegations. The option of turning some of them into hybrid or virtual meetings was also criticised, because states would miss the opportunity for in-person interaction on site. Condensing all the activities into 2-3 weeks at once also causes problems, as there would be no room for reaching agreement without properly consulting capitals.

Argentina and South Korea asked for a report on the budget implications of the specialised groups, other mechanism initiatives, and the secretariat’s work.

Finally, Canada, Egypt, the USA, the Philippines, New Zealand, the UK, Malaysia, Switzerland, Israel, Colombia, and Czechia expressed the wish to dedicate more time to discussing the next mechanism at the beginning of the next substantive session. At the same time, Brazil, Argentina and South Africa suggested spending the entire February session on this issue.

What’s next?

As the end of the mandate approaches, with only one more substantive session scheduled in February 2025, the pressure for progress in multiple areas is mounting. 

So far, CBMs and capacity building remain the least complicated topics to discuss and are essentially waiting to be operationalised. In fact, the OEWG’s schedule for the first quarter of 2025 includes the Global POC Directory simulation exercise and an example template for the Global POC Directory, as well as reports on the Global ICT Security Cooperation and Capacity-Building Portal and the Voluntary Fund.

The discussion on threats has deepened, maintaining momentum despite occasional tensions between geopolitical rivals. 

However, the discussions on norms and international law have been static for quite some time, with deeply entrenched views not budging. RID is currently the most pressing issue if states want to hit the ground running and avoid getting tangled in red tape at the beginning of the next mechanism.

To expedite discussions on RID, the Chair will put together a discussion paper and make it available to delegations well before the next substantive session in February 2025. The chair will also likely schedule an informal town hall meeting before the February session to hear reactions.

We used our DiploAI system to generate reports and transcripts from the session. Browse them on the dedicated page.

Interested in more OEWG? Visit our dedicated page:

UN Open-ended Working Group (OEWG)
This page provides detailed and real-time coverage of cybersecurity, peace, and security negotiations at the UN Open-Ended Working Group (OEWG) on security of and in the use of information and communications technologies 2021–2025.

Quantum leap: The future of computing

If AI was the buzzword for 2023 and 2024, quantum computing looks set to claim the spotlight in the years ahead. Despite growing interest, much remains unknown about this transformative technology, even as leading companies explore its immense potential.

Quantum computing and AI stand as two revolutionary technologies, each with distinct principles and goals. Quantum systems operate on the principles of quantum mechanics, using qubits capable of existing in multiple states simultaneously due to superposition. Such systems can address problems far beyond the reach of classical computers, including molecular simulations for medical research and complex optimisation challenges.
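To make superposition concrete, below is a minimal, purely classical simulation of a single qubit in Python with NumPy (an illustrative toy, not quantum software): it applies a Hadamard gate to the |0⟩ state and samples measurement outcomes, reproducing the 50/50 statistics of an equal superposition.

```python
import numpy as np

# Basis state |0> as a 2-component complex vector
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>)/sqrt(2)
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0  # the qubit now 'holds both values at once'

# Born rule: measurement probabilities are the squared amplitudes
probs = np.abs(state) ** 2
print('P(0), P(1):', probs)  # [0.5, 0.5]

# Simulate 1,000 measurements; each one collapses the superposition
rng = np.random.default_rng(seed=7)
outcomes = rng.choice([0, 1], size=1_000, p=probs)
print('zeros:', (outcomes == 0).sum(), 'ones:', (outcomes == 1).sum())
```

A classical simulation like this needs memory that grows exponentially with the number of qubits, which is precisely why genuinely quantum hardware is needed for the large problems described above.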

AI and quantum computing intersect in areas like machine learning, though AI still depends on classical computing infrastructure. Significant hurdles remain for quantum technology, including qubit errors and scalability. The extreme sensitivity of qubits to external factors, such as vibrations and temperature, complicates their control.

Quantum computing

Experts suggest quantum computers could become practical within 10 to 20 years. Classical computers are unlikely to be replaced, as quantum systems will primarily focus on solving tasks beyond classical capabilities. Leading companies are working to shorten development timelines, with advancements poised to transform the way technology is utilised.

Huge investments in quantum computing

Investments in quantum computing have reached record levels, with start-ups raising $1.5 billion across 50 funding rounds in 2024. This figure nearly doubles the $785 million raised the previous year, setting a new benchmark. The growth in AI is partly driving these investments, as quantum computing promises to handle AI’s significant computational demands more efficiently.

Quantum computing offers unmatched speed and energy efficiency, with some estimates suggesting energy use could be reduced by up to 100 times compared to traditional supercomputers. As the demand for faster, more sustainable computing grows, quantum technologies are emerging as a key solution.

Microsoft and Atom Computing announce breakthrough

In November 2024, Microsoft and Atom Computing achieved a milestone in quantum computing. Their system linked 24 logical qubits using just 80 physical qubits, setting a record in efficiency. This advancement could transform industries like blockchain and cryptography by enabling faster problem-solving and enhancing security protocols.

Despite the challenges of implementing such systems, both companies are aiming to release a 1,000-qubit quantum computer by 2025. The development could accelerate the adoption of quantum technologies across various sectors, paving the way for breakthroughs in areas such as machine learning and materials science.
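To put the 24-from-80 figure in perspective, the back-of-the-envelope sketch below compares it with the roughly 1,000 physical qubits per logical qubit often cited for surface-code error correction; the 1,000 figure is an assumed round number for illustration, not one taken from either company.

```python
# Figures reported in the November 2024 Microsoft/Atom Computing announcement
physical_qubits = 80
logical_qubits = 24

overhead = physical_qubits / logical_qubits
print(f'Reported overhead: {overhead:.1f} physical qubits per logical qubit')

# Surface-code schemes are often estimated to need on the order of 1,000
# physical qubits per logical qubit (assumed round figure, for illustration)
surface_code_overhead = 1000
print(f'Improvement vs. that baseline: ~{surface_code_overhead / overhead:.0f}x')
```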

Overcoming traditional computing’s limitations

Start-ups like BlueQubit are transforming quantum computing into a practical tool for industries. The San Francisco-based company has raised $10 million to launch its Quantum-Software-as-a-Service platform, enabling businesses to use quantum processors and emulators that perform tasks up to 100 times faster than conventional systems.

Industries such as finance and pharmaceuticals are already leveraging quantum optimisation. Specialised algorithms are addressing challenges like financial modelling and drug discovery, showcasing quantum computing’s potential to surpass traditional systems in tackling complex problems.

Google among giants pushing quantum computing

Google has recently introduced its cutting-edge quantum chip, Willow, capable of solving a benchmark computational problem in just five minutes. Traditional supercomputers would require approximately 10 septillion (10²⁵) years for the same task.
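The gap between those two numbers is hard to intuit, so here is the arithmetic spelled out (illustrative only, taking '10 septillion' as 10²⁵ years):

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600
classical_seconds = 1e25 * SECONDS_PER_YEAR  # ~3.2e32 seconds
quantum_seconds = 5 * 60                     # Willow's reported five minutes

speedup = classical_seconds / quantum_seconds
print(f'Implied speedup: about 10^{round(math.log10(speedup))}')  # ~10^30
```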

The achievement has sparked discussions about quantum computing’s link to multiverse theories. Hartmut Neven, head of Google’s Quantum AI team, suggested the performance might hint at parallel universes influencing quantum calculations. Willow’s success marks significant advancements in cryptography, material science, and artificial intelligence.

Commercialisation is already underway

Global collaborations are fast-tracking quantum technology’s commercialisation. SDT, a Korean firm, and Finnish start-up SemiQon have signed an agreement to integrate SemiQon’s silicon-based quantum processing units into SDT’s precision measurement systems.

SemiQon’s processors, designed to work with existing semiconductor infrastructure, lower production costs and enhance scalability. These partnerships pave the way for more stable and cost-effective quantum systems, bringing their use closer to mainstream industries.

Quantum technologies aiding mobile networks

Telefonica Germany and AWS are exploring quantum applications in mobile networks. Their pilot project aims to optimise mobile tower placement, improve network security with quantum encryption, and prepare for future 6G networks.

Telefonica’s migration of millions of 5G users to AWS cloud infrastructure demonstrates how combining quantum and cloud technologies can enhance network efficiency. The project highlights the growing impact of quantum computing on telecommunications.

Addressing emerging risks

Chinese researchers at Shanghai University have exposed the potential threats quantum computing poses to existing encryption standards. Using a D-Wave quantum computer, they mounted attacks on substitution-permutation network algorithms of the kind that underpin modern cryptographic standards such as AES-256, which is commonly used for securing cryptocurrency wallets.

Although current quantum hardware faces environmental and technical limitations, researchers stress the urgent need for quantum-resistant cryptography. New encryption methods are essential to safeguard digital systems against future quantum-based vulnerabilities.
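One well-established way to quantify the symmetric-key side of this threat is Grover's algorithm, which searches an n-bit key space in roughly 2^(n/2) quantum operations instead of the 2^n a classical brute force needs, effectively halving a key's security level. The sketch below illustrates that arithmetic; the figures follow from the algorithm's known complexity, not from the D-Wave experiments described above.

```python
# Grover's algorithm searches an n-bit key space in about 2**(n/2)
# quantum operations rather than the 2**n of classical brute force,
# effectively halving the security level of a symmetric key.
for key_bits in (128, 192, 256):
    effective_bits = key_bits // 2
    print(f'AES-{key_bits}: classical brute force ~2^{key_bits} ops, '
          f'Grover ~2^{effective_bits} ops '
          f'-> effective {effective_bits}-bit security')
```

This is one reason migration guidance tends to treat public-key schemes (which Shor's algorithm would break outright) as more urgent than symmetric ciphers, where doubling the key length restores the security margin.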

Quantum computing promises revolutionary capabilities but must overcome significant challenges in scaling and stability. Its progress depends on interdisciplinary collaboration in physics, engineering, and economics. While AI thrives on rapid commercial investment, quantum technology requires long-term support to fulfil its transformative potential.

Overview of AI policy in 10 jurisdictions

Brazil

Summary:

Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspired by the EU’s AI Act, the bill proposes a risk-based framework, categorising AI systems as unacceptable (banned), high risk (strictly regulated), or low risk (less oversight). This effort builds on Brazil’s 2019 National AI Strategy, which emphasises ethical AI that benefits society, respects human rights, and ensures transparency. Using the OECD’s definition of AI, the bill aims to protect people while fostering innovation.

As of the time of writing, Brazil does not yet have any AI-specific regulations with the force of law. However, the country is actively working towards establishing a regulatory framework for artificial intelligence. Brazilian legislators are currently considering the Proposed AI Regulation Bill No. 2338/2023, though the timeline for its adoption remains uncertain.

Brazil’s journey toward AI regulation began with the launch of the Estratégia Brasileira de Inteligência Artificial (EBIA) in 2019. The strategy outlines the country’s vision for fostering responsible and ethical AI development. Key principles of the EBIA include:

  • AI should benefit people and the planet, contributing to inclusive growth, sustainable development, and societal well-being.
  • AI systems must be designed to uphold the rule of law, human rights, democratic values, and diversity, with safeguards in place, such as human oversight when necessary.
  • AI systems should operate robustly, safely, and securely throughout their lifecycle, with ongoing risk assessment and mitigation.
  • Organisations and individuals involved in the AI lifecycle must commit to transparency and responsible disclosure, providing information that helps:
  1. Promote general understanding of AI systems;
  2. Inform people about their interactions with AI;
  3. Enable those affected by AI systems to understand the outcomes;
  4. Allow those adversely impacted to challenge AI-generated results.

In 2020, Brazil’s Chamber of Deputies began working on Bill 21/2020, aiming to establish a Legal Framework of Artificial Intelligence. Over time, four bills were introduced before the Chamber ultimately approved Bill 21/2020.

Meanwhile, the Federal Senate established a Commission of Legal Experts to support the development of an alternative AI bill. The commission held public hearings and international seminars, consulted with global experts, and conducted research into AI regulations from other jurisdictions. This extensive process culminated in a report that informed the drafting of Bill 2338 of 2023, which aims to govern the use of AI.

Following a similar approach to the European Union’s AI Act, the proposed Brazilian bill adopts a risk-based framework, classifying AI systems into three categories:

  • Unacceptable risk (entirely prohibited),
  • High risk (subject to stringent obligations for providers), and
  • Non-high risk.

This classification aims to ensure that AI systems in Brazil are developed and deployed in a way that minimises potential harm while promoting innovation and growth.
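For readers who think in code, the toy sketch below restates the bill's three-tier logic as a simple mapping. The tier names follow the list above; the consequence strings are loose paraphrases of the bill's approach, and none of the identifiers reflect actual legal drafting.

```python
from enum import Enum

class RiskTier(Enum):
    """The three risk categories proposed by Bill 2338/2023."""
    UNACCEPTABLE = 'unacceptable'
    HIGH = 'high'
    NON_HIGH = 'non-high'

# Loosely paraphrased regulatory consequence per tier (illustrative only)
CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: 'Deployment entirely prohibited.',
    RiskTier.HIGH: 'Permitted, subject to stringent provider obligations.',
    RiskTier.NON_HIGH: 'Permitted, with lighter oversight.',
}

def triage(tier: RiskTier) -> str:
    """Return the paraphrased consequence for a classified AI system."""
    return CONSEQUENCES[tier]

print(triage(RiskTier.HIGH))
```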

Definition of AI 

As of the time of writing, the concept of AI adopted by the draft Bill is that adopted by the OECD: ‘An AI system is a machine-based system that can, for a given set of objectives defined by humans, make predictions, recommendations or decisions that influence real or virtual environments. AI systems are designed to operate with varying levels of autonomy.’

Other laws and official documents that may impact the regulation of AI 

Sources

Canada

Summary:

Canada is progressing toward AI regulation with the proposed Artificial Intelligence and Data Act (AIDA), introduced in 2022 as part of Bill C-27. The Act focuses on regulating high-impact AI systems through compliance with existing consumer protection and human rights laws, overseen by the Minister of Innovation with support from an AI and Data Commissioner. AIDA also includes criminal provisions against harmful AI uses and will define specific regulations in consultation with stakeholders. While the framework is being finalised, a Voluntary Code of Conduct promotes accountability, fairness, transparency, and safety in generative AI development.

As of the time of writing, Canada does not yet have AI-specific regulations with the force of law. However, significant steps have been taken toward establishing a regulatory framework. In June 2022, the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022.

As of now, Bill C-27, the Digital Charter Implementation Act, 2022, remains under discussion and continues to progress through the legislative process. Currently, the Standing Committee on Industry and Technology (INDU) has announced that its review of the bill will stay on hold until at least February 2025. See here for more details about the entire deliberation process.

The AIDA includes several key proposals:

  • High-impact AI systems must comply with existing Canadian consumer protection and human rights laws. Specific regulations defining these systems and their requirements will be developed in consultation with stakeholders to protect the public while minimising burdens on the AI ecosystem.
  • The Minister of Innovation, Science, and Industry will oversee the Act’s implementation, supported by an AI and Data Commissioner. Initially, this role will focus on education and assistance, but it will eventually take on compliance and enforcement responsibilities.
  • New criminal law provisions will prohibit reckless and malicious uses of AI that could harm Canadians or their interests.

In addition, Canada has introduced a Voluntary Code of Conduct for the responsible development and management of advanced generative AI systems. This code serves as a temporary measure while the legislative framework is being finalised.

The code of conduct sets out six core principles for AI developers and managers: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness. For instance, managers are responsible for ensuring that AI-generated content is clearly labelled, while developers must assess the training data and address harmful biases to promote fairness and equity in AI outcomes.

Definition of AI

At its current stage of drafting, the Artificial Intelligence and Data Act provides the following definitions:

‘Artificial intelligence system is a system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions.’

‘General-purpose system is an artificial intelligence system that is designed for use, or that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes and activities not contemplated during the system’s development.’

‘Machine-learning model is a digital representation of patterns identified in data through the automated processing of the data using an algorithm designed to enable the recognition or replication of those patterns.’

Other laws and official documents that may impact the regulation of AI

Sources 

India

Summary:

India is advancing its AI governance framework but currently has no binding AI regulations. Key initiatives include the 2018 National Strategy for Artificial Intelligence, which prioritises AI applications in sectors like healthcare and smart infrastructure, and the 2021 Principles for Responsible AI, which outline ethical standards such as safety, inclusivity, privacy, and accountability. Operational guidelines released later in 2021 emphasise ethics by design and capacity building. Recent developments include the 2024 India AI Mission, with over $1.25 billion allocated for infrastructure, innovation, and safe AI, and advisories addressing deepfakes and generative AI.

As of the time of this writing, no AI regulations currently carry the force of law in India. Several frameworks are being formulated to guide the regulation of AI, including:

  • The National Strategy for Artificial Intelligence released in June 2018, which aims to establish a strong basis for future regulation of AI in India and focuses on AI intervention in healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation.
  • The Principles for Responsible AI released in February 2021, which serve as India’s roadmap for creating an ethical, responsible AI ecosystem across sectors.
  • The Operationalizing Principles for Responsible AI released in August 2021, which emphasises the need for regulatory and policy interventions, capacity building, and incentivising ethics by design regarding AI.

The Principles for Responsible AI identify the following broad principles for responsible management of AI, which can be leveraged by relevant stakeholders in India:

  • The principle of safety and reliability.
  • The principle of equality.
  • The principle of inclusivity and non-discrimination.
  • The principle of privacy and security.
  • The principle of transparency.
  • The principle of accountability.
  • The principle of protection and reinforcement of positive human values.

The Ministry of Commerce and Industry has established an Artificial Intelligence Task Force, which issued a report in March 2018.

In March 2024, India announced an allocation of over $1.25 billion for the India AI Mission, which will cover various aspects of AI, including computing infrastructure capacity, skilling, innovation, datasets, and safe and trusted AI.

India’s Ministry of Electronics and Information Technology issued advisories related to deepfakes and generative AI in 2024.

Definition of AI

The Principles for Responsible AI describe AI as ‘a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sense, comprehend and act. Computer vision and audio processing can actively perceive the world around them by acquiring and processing images, sound, and speech. The natural language processing and inference engines can enable AI systems to analyse and understand the information collected. An AI system can also make decisions through inference engines or undertake actions in the physical world. These capabilities are augmented by the ability to learn from experience and keep adapting over time.’

Other laws and official documents that may impact the regulation of AI

Sources

Israel

Summary:

Israel does not yet have binding AI regulations but is advancing a flexible, principles-based framework to encourage responsible innovation. The government’s approach relies on ethical guidelines and voluntary standards tailored to specific sectors, with the potential for broader legislation if common challenges arise. Key milestones include a 2022 white paper on AI and the 2023 Policy on Artificial Intelligence Regulations and Ethics.

As of the time of this writing, no AI regulations currently carry the force of law in Israel. Israel’s approach to AI governance encourages responsible innovation in the private sector through a sector-specific, principles-based framework. This strategy uses non-binding tools, including ethical guidelines and voluntary standards, allowing for regulatory flexibility tailored to each sector’s needs. However, the policy also leaves room for the introduction of broader, horizontal legislation should common challenges arise across sectors.

A white paper on AI was published in 2022 by Israel’s Ministry of Innovation, Science and Technology in collaboration with the Ministry of Justice, followed by the Policy on Artificial Intelligence Regulations and Ethics published in 2023. The AI Policy was developed pursuant to a government resolution that tasked the Ministry of Innovation, Science and Technology with advancing a national AI plan for Israel.

Definition of AI

The AI Policy describes an AI system as having ‘a wide range of applications such as autonomous vehicles, medical imaging analysis, credit scoring, securities trading, personalised learning and employment,’ notwithstanding that ‘the list of applications is constantly expanding.’

Other laws and official non-binding documents that may impact the regulation of AI

Sources

Japan

Summary:

Japan currently has no binding AI regulations but relies on voluntary guidelines to encourage responsible AI development and use. The AI Guidelines for Business Version 1.0 promote principles like human rights, safety, fairness, transparency, and innovation, fostering a flexible governance model involving stakeholders across sectors. Recent developments include the establishment of the AI Safety Institute in 2024 and the draft ‘Basic Act on the Advancement of Responsible AI,’ which proposes legally binding rules for certain generative AI models, including vetting, reporting, and compliance standards.

At the time of this writing, no AI regulations currently carry the force of law in Japan.

The updated AI Guidelines for Business Version 1.0 are not legally binding but are expected to support and induce voluntary efforts by developers, providers and business users of AI systems through compliance with generally recognised AI principles.

The principles outlined by the AI Guidelines are:

  • Human-centric – The utilisation of AI must not infringe upon the fundamental human rights guaranteed by the constitution and international standards.
  • Safety – Each AI business actor should avoid damage to the lives, bodies, minds, and properties of stakeholders.
  • Fairness – Elimination of unfair and harmful bias and discrimination.
  • Privacy protection – Each AI business actor respects and protects privacy.
  • Ensuring security – Each AI business actor ensures security to prevent the behaviours of AI from being unintentionally altered or stopped by unauthorised manipulations.
  • Transparency – Each AI business actor provides stakeholders with information to the reasonable extent necessary and technically possible while ensuring the verifiability of the AI system or service.
  • Accountability – Each AI business actor is accountable to stakeholders to ensure traceability, conforming to common guiding principles, based on each AI business actor’s role and degree of risk posed by the AI system or service.
  • Education/literacy – Each AI business actor is expected to provide persons engaged in its business with education regarding knowledge, literacy and ethics concerning the use of AI in a socially correct manner, and provide stakeholders with education about complexity, misinformation, and possibilities of intentional misuse.
  • Ensuring fair competition – Each AI business actor is expected to maintain a fair competitive environment so that new businesses and services using AI are created.
  • Innovation – Each AI business actor is expected to promote innovation and consider interconnectivity and interoperability.

The Guidelines emphasise a flexible governance model where various stakeholders are involved in a swift and ongoing process of assessing risks, setting objectives, designing systems, implementing solutions, and evaluating outcomes. This adaptive cycle operates within different governance structures, such as corporate policies, regulatory frameworks, infrastructure, market dynamics, and societal norms, ensuring they can quickly respond to changing conditions.

The AI Strategy Council was established to explore ways to harness AI’s potential while mitigating associated risks. On May 22, 2024, the Council presented draft discussion points outlining considerations on the necessity and possible scope of future AI regulations.

A working group has proposed the ‘Basic Act on the Advancement of Responsible AI’, which would introduce a hard law approach to regulating certain generative AI foundation models. Under the proposed law, the government would designate which AI systems and developers fall under its scope and impose obligations related to the vetting, operation, and output of these systems, along with periodic reporting requirements.

Similar to the voluntary commitments made by major US AI companies in 2023, this framework would allow industry groups and developers to establish specific compliance standards. The government would have the authority to monitor compliance and enforce penalties for violations. If enacted, this would represent a shift in Japan’s AI regulation from a soft law to a more binding legal framework.

The AI Safety Institute was launched in February 2024 to examine the evaluation methods for AI safety and other related matters. The Institute is established within the Information-technology Promotion Agency, in collaboration with relevant ministries and agencies, including the Cabinet Office.

Definition of AI

The AI Guidelines define AI as an abstract concept that includes AI systems themselves as well as machine-learning software and programs.

Other laws and official non-binding documents that may impact the regulation of AI

Sources

Saudi Arabia

Summary:

Saudi Arabia has no binding AI regulations but is advancing its AI agenda through initiatives under Vision 2030, led by the Saudi Data and Artificial Intelligence Authority. The Authority oversees the National Strategy for Data & AI, which includes developing startups, training specialists, and establishing policies and standards. In 2023, SDAIA issued a draft set of AI Ethics Principles, categorising AI risks into four levels: little or no risk, limited risk, high risk (requiring assessments), and unacceptable risk (prohibited). Recent 2024 guidelines for generative AI offer non-binding advice for government and public use. These efforts are supported by a $40 billion AI investment fund.

At the time of this writing, no AI regulations currently carry the force of law in Saudi Arabia. In 2016, Saudi Arabia unveiled a long-term initiative known as Vision 2030, a bold plan spearheaded by Crown Prince Mohammed Bin Salman. 

A key aspect of this initiative was the significant focus on advancing AI, which culminated in the establishment of the Saudi Data and Artificial Intelligence Authority (SDAIA) in August 2019. This same decree also launched the Saudi Artificial Intelligence Center and the Saudi Data Management Office, both operating under SDAIA’s authority. 

SDAIA was tasked with managing the country’s AI research landscape and enforcing new policies and regulations that aligned with its AI objectives. In October 2020, SDAIA rolled out the National Strategy for Data & AI, which broadened the scope of the AI agenda to include goals such as developing over 300 AI and data-focused startups and training more than 20,000 specialists in these fields.

SDAIA was tasked by the Council of Ministers’ Resolution No. 292 to create policies, governance frameworks, standards, and regulations for data and artificial intelligence, and to oversee their enforcement once implemented. SDAIA issued draft AI Ethics Principles in 2023. The document enumerates seven principles with corresponding conditions necessary for their sufficient implementation. They include: fairness, privacy and security, humanity, social and environmental benefits, reliability and safety, transparency and explainability, and accountability and responsibility.

Similar to the EU AI Act, the Principles categorise the risks associated with the development and utilisation of AI into four levels, with different compliance requirements for each:

  • Little or No Risk: Systems classified as posing little or no risk do not face restrictions, but the SDAIA recommends compliance with the AI Ethics Principles.
  • Limited Risk: Systems classified as limited risk are required to comply with the Principles.
  • High Risk: Systems classified as high risk are required to undergo both pre- and post-deployment conformity assessments, in addition to meeting ethical standards and relevant legal requirements. Such systems are noted for the significant risk they might pose to fundamental rights.
  • Unacceptable Risk: Systems classified as posing unacceptable risks to individuals’ safety, well-being, or rights are strictly prohibited. These include systems that socially profile or sexually exploit children, for instance.

On January 1, 2024, SDAIA released two sets of Generative AI Guidelines. The first is intended for government employees, while the second is aimed at the general public. 

Both documents offer guidance on the adoption and use of generative AI systems, using common scenarios to illustrate their application. They also address the challenges and considerations associated with generative AI, outline principles for responsible use, and suggest best practices. The Guidelines are not legally binding and serve as advisory frameworks.

Much of the attention surrounding Saudi Arabia’s AI advancements is driven by its large-scale investment efforts, notably a $40 billion fund dedicated to AI technology development.

Other laws and official non-binding documents that may impact the regulation of AI

Sources

Singapore

Summary:

Singapore has no binding AI regulations but promotes responsible AI through frameworks developed by the Infocomm Media Development Authority (IMDA). Key initiatives include the Model AI Governance Framework, which offers ethical guidelines for the private sector, and AI Verify, a toolkit for assessing AI systems’ alignment with these standards. The National AI Strategy and its 2.0 update emphasise fostering a trusted AI ecosystem while driving innovation and economic growth.

As of the time of this writing, no AI regulations currently carry the force of law in Singapore. Singapore’s AI regulations are largely shaped by the Infocomm Media Development Authority (IMDA), an independent government body that operates under the Ministry of Communications and Information. This statutory board plays a central role in guiding the nation’s approach to artificial intelligence policies and frameworks. IMDA takes a prominent position in shaping Singapore’s technology policies and refers to itself as the ‘architect of the nation’s digital future,’ highlighting its pivotal role in steering the country’s digital transformation.

In 2019, the Smart Nation and Digital Government offices introduced an extensive National AI Strategy, outlining Singapore’s goal to boost its economy and become a leader in the global AI industry. To support these objectives, the government also established a National AI Office within the Ministry to oversee the execution of its AI initiatives.

The Singapore government has developed various frameworks and tools to guide AI deployment and promote the responsible use of AI:

  • The Model AI Governance Framework, which offers comprehensive guidelines to private sector entities on tackling essential ethical and governance challenges in the implementation of AI technologies.
  • AI Verify, a testing framework and toolkit for AI governance developed by IMDA in collaboration with private sector partners and supported by the AI Verify Foundation (AIVF), created to assist organisations in assessing the alignment of their AI systems with ethical guidelines using standardised evaluations.
  • The National Artificial Intelligence Strategy 2.0, which highlights Singapore’s vision and dedication to fostering a trusted and accountable AI environment and promoting innovation and economic growth through AI.

Definition of AI

The 2020 Framework defines AI as ‘a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning, and, depending on the AI model, produce an output or decision (such as a prediction, recommendation and/or classification).’

The 2024 Framework defines Generative AI as ‘AI models capable of generating text, images or other media. They learn the patterns and structure of their input training data and generate new data with similar characteristics. Advances in transformer-based deep neural networks enable Generative AI to accept natural language prompts as input, including large language models.’

Other laws and official non-binding documents that may impact the regulation of AI

Sources

Republic of Korea

Summary:

The Republic of Korea has no binding AI regulations but is actively developing its framework through the Ministry of Science and ICT and the Personal Information Protection Commission. Key initiatives include the 2019 National AI Strategy, the 2020 Human-Centered AI Ethics Standards, and the 2023 Digital Bill of Rights. Current legislative efforts focus on the proposed Act on the Promotion of AI Industry and Framework for Establishing Trustworthy AI, which adopts a ‘permit-first-regulate-later’ approach to foster innovation while addressing high-risk applications.

As of the time of this writing, no AI regulations currently carry the force of law in the Republic of Korea. However, two major institutions are actively guiding the development of AI-related policies: the Ministry of Science and ICT (MSIT) and the Personal Information Protection Commission (PIPC). While the PIPC concentrates on ensuring that privacy laws keep pace with AI advancements and emerging risks, MSIT leads the nation’s broader AI initiatives. Among these efforts is the AI Strategy High-Level Consultative Council, a collaborative platform where government and private stakeholders engage in discussions on AI governance.

The Republic of Korea has been progressively shaping its AI governance framework, beginning with the release of its National Strategy for Artificial Intelligence in December 2019. This was followed by the Human-Centered Artificial Intelligence Ethics Standards in 2020 and the introduction of the Digital Bill of Rights in May 2023. Although no comprehensive AI law exists as of yet, several AI-related legislative proposals have been introduced to the National Assembly since 2022. One prominent proposal currently under review is the Act on the Promotion of AI Industry and Framework for Establishing Trustworthy AI, which aims to consolidate earlier legislative drafts into a more cohesive approach.

Unlike the European Union’s AI Act, the Republic of Korea’s proposed legislation follows a ‘permit-first-regulate-later’ philosophy, which emphasises fostering innovation and industrial growth in AI technologies. The bill also outlines specific obligations for high-risk AI applications, such as requiring prior notifications to users and implementing measures to ensure AI systems are trustworthy and safe. The MSIT Minister announced the establishment of an AI Safety Institute at the 2024 AI Safety Summit.

Definition of AI

Under the proposed AI Act, ‘artificial intelligence’ is defined as the electronic implementation of human intellectual abilities such as learning, reasoning, perception, judgement, and language comprehension.

Other laws and official non-binding documents that may impact the regulation of AI

Sources

UAE

Summary:

The UAE currently lacks binding AI regulations but actively promotes innovation through frameworks like regulatory sandboxes, which allow real-world testing of new technologies under regulatory oversight. AI governance in the UAE is shaped by its complex jurisdictional landscape, including federal laws, Mainland UAE, and financial free zones such as DIFC and ADGM. Key initiatives include the 2017 National Strategy for Artificial Intelligence 2031, managed by the UAE AI and Blockchain Council, which focuses on fairness, transparency, accountability, and responsible AI practices. Dubai’s 2019 AI Principles and Ethical AI Toolkit emphasise safety, fairness, and explainability in AI systems. The UAE’s AI Ethics: Principles and Guidelines (2022) provide a non-binding framework balancing innovation and societal interests, supported by the beta AI Ethics Self-Assessment Tool to evaluate and refine AI systems ethically. In 2023, the UAE released Falcon 180B, an open-source large language model, and in 2024, the Charter for the Development and Use of Artificial Intelligence, which aims to position the UAE as a global AI leader by 2031 while addressing algorithmic bias, privacy, and compliance with international standards.

At the time of this writing, no AI regulations currently carry the force of law in the UAE. The regulatory landscape of the United Arab Emirates is quite complex due to its division into multiple jurisdictions, each governed by its own set of rules and, in some cases, distinct regulatory bodies. 

Broadly, the UAE can be viewed in terms of its Financial Free Zones, such as the Dubai International Financial Centre (DIFC) and the Abu Dhabi Global Market (ADGM), which operate under separate legal frameworks, and Mainland UAE, which encompasses all areas outside these financial zones. Mainland UAE is further split into non-financial free zones and the broader onshore region, where the general laws of the country apply. As the UAE is a federal state composed of seven emirates – Dubai, Abu Dhabi, Sharjah, Fujairah, Ras Al Khaimah, Ajman, and Umm Al-Quwain – each of them retains control over local matters not specifically governed by federal law. The UAE is a strong advocate for ‘regulatory sandboxes’, a framework that allows new technologies to be tested in real-world conditions within a controlled setting, all under the close oversight of a regulatory authority.

In 2017, the UAE appointed a Minister of State for AI, Digital Economy and Remote Work Applications and released the National Strategy for Artificial Intelligence 2031, with the aim of creating the country’s AI ecosystem. The UAE Artificial Intelligence and Blockchain Council is responsible for managing the National Strategy’s implementation, including crafting regulations and establishing best practices related to AI risks, data management, cybersecurity, and various other digital matters.

The City of Dubai launched the AI Principles and Guidelines for the Emirate of Dubai in January 2019. The Principles promote fairness, transparency, accountability, and explainability in AI development and oversight. Dubai introduced an Ethical AI Toolkit outlining principles for AI systems to ensure safety, fairness, transparency, accountability, and comprehensibility.

The UAE AI Ethics: Principles and Guidelines, released in December 2022 under the Minister of State for Artificial Intelligence, provides a non-binding framework for ethical AI design and use, focusing on fairness, accountability, transparency, explainability, robustness, human-centered design, sustainability, and privacy preservation. Drafted as a collaborative, multi-stakeholder effort, the guidelines balance the need for innovation with the protection of intellectual property and invite ongoing dialogue among stakeholders. The document aims to evolve into a universal, practical, and widely adopted standard for ethical AI, aligning with the UAE National AI Strategy and Sustainable Development Goals to ensure AI serves societal interests while upholding global norms and advancing responsible innovation.

To operationalise these principles, the UAE has introduced a beta version of its AI Ethics Self-Assessment Tool, designed to help developers and operators evaluate the ethical performance of their AI systems. This tool encourages consideration of potential ethical challenges from initial development stages to full system maintenance and helps prioritise necessary mitigation measures. While non-compulsory, it employs weighted recommendations—where ‘should’ indicates high priority and ‘should consider’ denotes moderate importance—and discourages implementation unless a minimum ethics performance threshold is met. As a beta version, the tool invites extensive user feedback and shared use cases to refine its functionality.

In 2023, the UAE, through the support of the Advanced Technology Research Council under the Abu Dhabi government, released the open-source large language model, Falcon 180B, named after the country’s national bird.

In July 2024, the UAE’s AI, Digital Economy, and Remote Work Applications Office released the Charter for the Development and Use of Artificial Intelligence. The Charter establishes a framework to position the UAE as a global leader in AI by 2031, prioritising human well-being, safety, inclusivity, and fairness in AI development. It addresses algorithmic bias, ensures transparency and accountability, and emphasises innovation while safeguarding community privacy in line with UAE data standards. The Charter also highlights the need for ethical oversight and compliance with international treaties and local regulations to ensure AI serves societal interests and upholds fundamental rights.

Definition of AI

The AI Office has defined AI as ‘systems or machines that mimic human intelligence to perform tasks and can iteratively improve themselves based on the data they collect’ in the 2023 AI Adoption Guideline in Government Services.

Other laws and official non-binding documents that may impact the regulation of AI

Sources

UK

Summary:

The UK currently has no binding AI regulations but adopts a principles-based framework allowing sector-specific regulators to govern AI development and use within their domains. Key principles outlined in the 2023 White Paper: A Pro-Innovation Approach to AI Regulation include safety, transparency, fairness, accountability, and contestability. The UK’s National AI Strategy, overseen by the Office for Artificial Intelligence, aims to position the country as a global AI leader by promoting innovation and aligning with international frameworks. Recent developments, including proposed legislation for advanced AI models and the Digital Information and Smart Data Bill, signal a shift toward more structured regulation. The UK solidified its leadership in AI governance by hosting the 2023 Bletchley Summit, where 28 countries committed to advancing global AI safety and responsible development.

As of the time of this writing, no AI regulations currently carry the force of law in the UK. The UK supports a principles-based framework for existing sector-specific regulators to interpret and apply to the development and use of AI within their domains. The UK aims to position itself as a global leader in AI by establishing a flexible regulatory framework that fosters innovation and growth in the sector. In 2022, the Government issued an AI Regulation Policy Paper, followed in 2023 by a White Paper titled ‘A Pro-Innovation Approach to AI Regulation.’

The White Paper lists five key principles designed to ensure responsible AI development: 

  1. Safety, Security, and Robustness. 
  2. Appropriate Transparency and Explainability.
  3. Fairness.
  4. Accountability and Governance.
  5. Contestability and Redress.

The UK Government set up an Office for Artificial Intelligence to oversee the implementation of the UK’s National AI Strategy, adopted in September 2021. The Strategy recognises the power of AI to increase resilience, productivity, growth and innovation across the private and public sectors, and sets out a plan for the next decade to position the UK as a world leader in artificial intelligence. The AI Office will perform various central functions to support the framework’s implementation, including:

  1. monitoring and evaluating the overall efficacy of the regulatory framework;
  2. assessing and monitoring risks across the economy arising from AI;
  3. promoting interoperability with international regulatory frameworks.

Shifting away from this flexible regulatory approach, the King’s Speech delivered by King Charles III in July 2024 signalled plans to enact legislation requiring developers of the most advanced AI models to meet specific standards. The announcement also included the Digital Information and Smart Data Bill, which would reform data-related laws to ensure the safe development and use of emerging technologies, including AI. The details of how these measures will be implemented remain unclear.

In November 2023, the UK hosted the Bletchley Summit, positioning itself as a leader in fostering international collaboration on AI safety and governance. At the Summit, a landmark declaration was signed by 28 countries, committing to collaborate on managing the risks of frontier AI technologies, ensuring AI safety, and advancing responsible AI development and governance globally.

Definition of AI

The White Paper describes AI as ‘products and services that are “adaptable” and “autonomous”.’

Other laws and official non-binding documents that may impact the regulation of AI

Sources