US charges Russian-Israeli citizen over Lockbit ransomware

The United States has charged Rostislav Panev, a Russian-Israeli dual citizen, for his alleged role as a developer for the Lockbit ransomware group, which authorities describe as one of the world’s most destructive cybercrime operations. Panev, arrested in Israel in August, awaits extradition.

Lockbit, active since 2019, targeted over 2,500 victims across 120 countries, including critical infrastructure and businesses, extorting $500 million. Recent arrests, guilty pleas, and international law enforcement efforts have significantly disrupted the group’s activities.

Experts say law enforcement actions have tarnished Lockbit’s reputation, reducing its attacks and deterring affiliates. Authorities emphasise the importance of holding cybercriminals accountable.

NETSCOUT enhances DDoS protection with AI/ML-driven adaptive solutions

NETSCOUT SYSTEMS announced significant updates to its Arbor Edge Defense (AED) and Arbor Enterprise Manager (AEM) products as part of its Adaptive DDoS Protection solution. These enhancements are designed to address the growing threats of AI-enabled DDoS attacks, which have surged in sophistication and frequency.

Application-layer and volumetric attacks have increased by 43% and 30%, respectively, with DDoS-for-hire services making attacks easier to execute. To combat these evolving threats, NETSCOUT leverages AI and machine learning (ML) within its ATLAS Threat Intelligence system, which monitors over 550 Tbps of real-time internet traffic across 500 ISPs and 2,000 enterprise sites worldwide.

The AI/ML-powered solution enables dynamic threat identification and mitigation, creating a scalable, proactive defence mechanism. The updated AED and AEM products automate a closed-loop DDoS attack detection and mitigation process, providing real-time protection by adapting to changing attack vectors and applying mitigation recommendations automatically.
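
Conceptually, the ‘closed loop’ described here is the generic measure-detect-mitigate-feedback cycle used in adaptive DDoS defence. A minimal sketch of that pattern, assuming a simple rolling-average baseline and hypothetical thresholds (illustrative only, not NETSCOUT’s actual algorithm):

```python
import time
from collections import deque

# Generic closed-loop detection/mitigation cycle (hypothetical names and
# thresholds; illustrative of the pattern, not NETSCOUT's implementation).

BASELINE_WINDOW = 60      # number of samples in the adaptive baseline
SPIKE_FACTOR = 5.0        # multiple of baseline that triggers mitigation

history = deque(maxlen=BASELINE_WINDOW)

def is_anomalous(sample_bps: float) -> bool:
    """Compare a traffic sample against the rolling-average baseline."""
    if len(history) < BASELINE_WINDOW:
        return False                      # still learning normal traffic
    baseline = sum(history) / len(history)
    return sample_bps > SPIKE_FACTOR * baseline

def mitigate(sample_bps: float) -> None:
    """Placeholder countermeasure (rate limiting, ACL push, scrubbing)."""
    print(f"mitigating suspected DDoS at {sample_bps / 1e9:.1f} Gbps")

def closed_loop(read_traffic_bps) -> None:
    """Measure -> detect -> mitigate, feeding clean samples back into the baseline."""
    while True:
        sample = read_traffic_bps()      # e.g. flow telemetry from the edge
        if is_anomalous(sample):
            mitigate(sample)             # attack traffic never updates the baseline
        else:
            history.append(sample)
        time.sleep(1)
```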

NETSCOUT’s solution also offers comprehensive protection across hybrid IT environments, including on-premises infrastructure, private data centres, and public cloud platforms like AWS and Microsoft Azure, with enhancements such as 200 Gbps mitigation capacity, high-performance decryption, and visibility into non-DDoS threats.

By minimising downtime and safeguarding business-critical services, NETSCOUT’s Adaptive DDoS Protection reduces business risk and protects productivity and reputation. As the threat landscape continues to evolve, organisations can rely on NETSCOUT’s technology to stay ahead of attackers and maintain IT resilience. Industry experts and agencies such as the Cybersecurity and Infrastructure Security Agency (CISA) highlight the need for adaptive cybersecurity measures, and NETSCOUT’s AI/ML-driven solutions aim to meet these demands with robust protection for critical IT infrastructure.

WhatsApp wins case as US judge rules against NSO Group

A US judge has ruled against Israel’s NSO Group in a lawsuit brought by WhatsApp, finding the spyware firm liable for hacking and breach of contract. The case, heard in Oakland, California, revolves around allegations that NSO exploited a vulnerability in WhatsApp to install Pegasus spyware, enabling unauthorised surveillance of 1,400 individuals. The court decision moves the case forward to determine damages.

Will Cathcart, head of WhatsApp, described the ruling as a triumph for privacy, emphasising the need for accountability in the spyware industry. WhatsApp expressed gratitude for support from various organisations and pledged continued efforts to safeguard private communications. Cybersecurity experts, including Citizen Lab’s John Scott-Railton, hailed the judgment as a pivotal moment for holding spyware companies accountable.

NSO argued that its Pegasus software serves to combat serious crime and threats to national security. However, courts had previously rejected the company’s claims of immunity, finding that its activities fell outside the protection of federal law. Appeals by NSO to higher courts, including the US Supreme Court, failed, paving the way for the trial to proceed.

The judgment signals a significant shift in how the spyware industry may be regulated, with implications for firms previously claiming they were not responsible for the misuse of their technology. Experts see it as a warning to surveillance companies that illegal actions will not go unchallenged.

TikTok faces ban in Albania after teen’s death

Albania has announced a one-year nationwide ban on TikTok, citing concerns about the platform’s influence on children. The decision follows the fatal stabbing of a 14-year-old boy in November, reportedly linked to social media disputes. Prime Minister Edi Rama revealed the ban as part of a broader strategy to enhance school safety after consultations with parents and teachers.

The Prime Minister has criticised TikTok and similar platforms for encouraging youth violence. Videos supporting the killing were shared online, raising alarms about the role of social media in such incidents. Rama stated that society, not children, bears responsibility for the issue, describing TikTok as a platform that holds children ‘hostage’.

Several European nations, including France and Germany, have introduced restrictions on social media for children. Albania’s move aligns with a growing global trend, with Australia recently approving a complete social media ban for users under 16.

TikTok responded by seeking clarity from the Albanian government, claiming no evidence linked the involved teens to the platform. A spokesperson suggested another platform might have hosted the content tied to the incident.

Trump signals support for TikTok amid national security debate

President-elect Donald Trump hinted at allowing TikTok to continue operating in the US, at least temporarily, citing the platform’s significant role in his presidential campaign. Speaking to conservative supporters in Phoenix, Arizona, Trump shared that his campaign content had garnered billions of views on TikTok, describing it as a “beautiful” success that made him reconsider the app’s future.

TikTok’s parent company, ByteDance, has faced pressure from US lawmakers to divest the app over national security concerns, with allegations that Chinese control of TikTok poses risks to American data. The US Supreme Court is set to decide on the matter, as ByteDance challenges a law that could force divestment. Without a favourable ruling or compliance with the law, TikTok could face a US ban by January 19, just before Trump takes office.

Trump’s openness to TikTok contrasts with bipartisan support for stricter measures against the app. While the Justice Department argues that Chinese ties to TikTok remain a security threat, TikTok counters that its user data and operations are managed within the US, with storage handled by Oracle and moderation decisions made domestically. Despite ongoing legal battles, Trump’s remarks and a recent meeting with TikTok’s CEO suggest he sees potential in maintaining the platform’s presence in the US market.

Tech giants join forces for US defence contracts, FT says

Data analytics firm Palantir Technologies and defence tech company Anduril Industries are leading efforts to form a consortium of technology companies to bid jointly for US government contracts, according to a report from the Financial Times. The group is expected to include SpaceX, OpenAI, Scale AI, autonomous shipbuilder Saronic, and other key players, with formal agreements anticipated as early as January.

The consortium aims to reshape the defence contracting landscape by combining cutting-edge technologies from some of Silicon Valley’s most innovative firms. A member involved in the initiative described it as a move toward creating “a new generation of defence contractors.” This collective effort seeks to enhance the efficiency of supplying advanced defence systems, leveraging technologies like AI, autonomous vehicles, and other innovations.

The initiative aligns with President-elect Donald Trump’s push for greater government efficiency, spearheaded in part by Elon Musk, who has been outspoken about reforming Pentagon spending priorities. Musk and others have criticised traditional defence programs, such as Lockheed Martin’s F-35 fighter jet, advocating instead for the development of cost-effective, AI-driven drones, missiles, and submarines.

With these partnerships, the consortium hopes to challenge the dominance of established defence contractors like Boeing, Northrop Grumman, and Lockheed Martin, offering a modernised approach to defence technology and procurement in the US.

North Korean hackers linked to surge in stolen cryptocurrency

Cryptocurrency theft reached $2.2bn (£1.76bn) in 2024, with North Korean hackers reportedly responsible for $1.3bn, according to a Chainalysis report. The total marks a 21% increase from 2023, though it remains lower than peak years.
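
As a quick sanity check on these figures, the reported 21% growth implies a 2023 total of roughly $1.8bn, and the North Korean share works out to about 59% of the year’s thefts. A back-of-the-envelope sketch (assumed inputs taken from the text; the report’s own 2023 total is not quoted here):

```python
# Back-of-the-envelope check of the Chainalysis figures cited above.
total_2024 = 2.2e9        # USD stolen in 2024
growth = 0.21             # reported 21% year-on-year increase
nk_2024 = 1.3e9           # amount attributed to North Korean hackers

implied_2023 = total_2024 / (1 + growth)
print(f"implied 2023 total: ${implied_2023 / 1e9:.2f}bn")        # ~$1.82bn
print(f"North Korean share of 2024: {nk_2024 / total_2024:.0%}")  # ~59%
```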

The study highlights that hackers often target private keys used to access crypto platforms, causing severe losses for centralised exchanges. Significant breaches included a $300m theft from Japan’s DMM Bitcoin and a $235m loss from India-based WazirX. Many attacks were linked to North Korean nationals posing as remote IT workers.

The United States government has accused Pyongyang of using stolen funds to evade sanctions and finance weapons programmes. Recently, 14 North Koreans were indicted in a federal court for alleged extortion schemes, while the State Department announced a $5m reward for information on these activities.

US CISA unveils draft update to National Cyber Incident Response Plan

The US Cybersecurity and Infrastructure Security Agency (CISA) has released a draft update to the National Cyber Incident Response Plan (NCIRP) for public feedback, reflecting changes in cybersecurity, law, policy, and operational processes since the plan’s 2016 release. Developed in collaboration with the Joint Cyber Defense Collaborative (JCDC) and the Office of the National Cyber Director (ONCD), the update aims to improve national preparedness for the growing complexity of cyber threats.

Key updates include clarifying how non-federal stakeholders, such as private sector entities, can participate in cyber incident response efforts, enhancing usability by aligning the plan with the incident response lifecycle, and incorporating the latest legal and policy changes. The NCIRP will now undergo regular updates to stay relevant as threats and technologies evolve.

As a strategic framework, the NCIRP coordinates efforts across federal agencies, state and local governments, the private sector, and international partners. It outlines four critical lines of effort (LOEs): Asset Response, Threat Response, Intelligence Support, and Affected Entity Response, ensuring cohesive and coordinated action during a cyber incident.

The plan also defines two key phases—Detection and Response—focusing on identifying significant incidents and then containing, eradicating, and recovering from them. Coordination between government agencies, private sector entities, and other stakeholders is vital to managing the response and minimising the impact on national security, the economy, and public health.

Collaboration and continuous improvement are central to the NCIRP’s success. The JCDC, Cyber Unified Coordination Group (Cyber UCG), and Cyber Response Group (CRG) ensure all stakeholders are aligned in their efforts, with the CRG overseeing policy coordination and broader strategic responses.

The NCIRP will be regularly reviewed and updated based on feedback and post-incident assessments, allowing it to adapt to new threats and technological changes. CISA is committed to strengthening the nation’s ability to respond to cyber incidents, emphasising the need for an agile, effective framework to keep pace with evolving cyber risks.

Overview of AI policy in 10 jurisdictions

Brazil

Summary:

Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspired by the EU’s AI Act, the bill proposes a risk-based framework, categorising AI systems as unacceptable (banned), high risk (strictly regulated), or low risk (less oversight). This effort builds on Brazil’s 2019 National AI Strategy, which emphasises ethical AI that benefits society, respects human rights, and ensures transparency. Using the OECD’s definition of AI, the bill aims to protect people while fostering innovation.

As of the time of writing, Brazil does not yet have any AI-specific regulations with the force of law. However, the country is actively working towards establishing a regulatory framework for artificial intelligence. Brazilian legislators are currently considering the Proposed AI Regulation Bill No. 2338/2023, though the timeline for its adoption remains uncertain.

Brazil’s journey toward AI regulation began with the launch of the Estratégia Brasileira de Inteligência Artificial (EBIA) in 2019. The strategy outlines the country’s vision for fostering responsible and ethical AI development. Key principles of the EBIA include:

  • AI should benefit people and the planet, contributing to inclusive growth, sustainable development, and societal well-being.
  • AI systems must be designed to uphold the rule of law, human rights, democratic values, and diversity, with safeguards in place, such as human oversight when necessary.
  • AI systems should operate robustly, safely, and securely throughout their lifecycle, with ongoing risk assessment and mitigation.
  • Organisations and individuals involved in the AI lifecycle must commit to transparency and responsible disclosure, providing information that helps:
  1. Promote general understanding of AI systems;
  2. Inform people about their interactions with AI;
  3. Enable those affected by AI systems to understand the outcomes;
  4. Allow those adversely impacted to challenge AI-generated results.

In 2020, Brazil’s Chamber of Deputies began working on Bill 21/2020, aiming to establish a Legal Framework of Artificial Intelligence. Over time, four bills were introduced before the Chamber ultimately approved Bill 21/2020.

Meanwhile, the Federal Senate established a Commission of Legal Experts to support the development of an alternative AI bill. The commission held public hearings and international seminars, consulted with global experts, and conducted research into AI regulations from other jurisdictions. This extensive process culminated in a report that informed the drafting of Bill 2338 of 2023, which aims to govern the use of AI.

Following a similar approach to the European Union’s AI Act, the proposed Brazilian bill adopts a risk-based framework, classifying AI systems into three categories:

  • Unacceptable risk (entirely prohibited),
  • High risk (subject to stringent obligations for providers), and
  • Non-high risk.

This classification aims to ensure that AI systems in Brazil are developed and deployed in a way that minimises potential harm while promoting innovation and growth.

Definition of AI 

As of the time of writing, the concept of AI adopted by the draft Bill is that adopted by the OECD: ‘An AI system is a machine-based system that can, for a given set of objectives defined by humans, make predictions, recommendations or decisions that influence real or virtual environments. AI systems are designed to operate with varying levels of autonomy.’

Canada

Summary:

Canada is progressing toward AI regulation with the proposed Artificial Intelligence and Data Act (AIDA), introduced in 2022 as part of Bill C-27. The Act focuses on regulating high-impact AI systems through compliance with existing consumer protection and human rights laws, overseen by the Minister of Innovation with support from an AI and Data Commissioner. AIDA also includes criminal provisions against harmful AI uses and will define specific regulations in consultation with stakeholders. While the framework is being finalised, a Voluntary Code of Conduct promotes accountability, fairness, transparency, and safety in generative AI development.

As of the time of writing, Canada does not yet have AI-specific regulations with the force of law. However, significant steps have been taken toward establishing a regulatory framework. In June 2022, the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022.

Bill C-27 remains under discussion and continues to progress through the legislative process. The Standing Committee on Industry and Technology (INDU) has announced that its review of the bill will remain on hold until at least February 2025.

The AIDA includes several key proposals:

  • High-impact AI systems must comply with existing Canadian consumer protection and human rights laws. Specific regulations defining these systems and their requirements will be developed in consultation with stakeholders to protect the public while minimising burdens on the AI ecosystem.
  • The Minister of Innovation, Science, and Industry will oversee the Act’s implementation, supported by an AI and Data Commissioner. Initially, this role will focus on education and assistance, but it will eventually take on compliance and enforcement responsibilities.
  • New criminal law provisions will prohibit reckless and malicious uses of AI that could harm Canadians or their interests.

In addition, Canada has introduced a Voluntary Code of Conduct for the responsible development and management of advanced generative AI systems. This code serves as a temporary measure while the legislative framework is being finalised.

The code of conduct sets out six core principles for AI developers and managers: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness. For instance, managers are responsible for ensuring that AI-generated content is clearly labelled, while developers must assess the training data and address harmful biases to promote fairness and equity in AI outcomes.

Definition of AI

At its current stage of drafting, the Artificial Intelligence and Data Act provides the following definitions:

‘Artificial intelligence system is a system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions.’

‘General-purpose system is an artificial intelligence system that is designed for use, or that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes and activities not contemplated during the system’s development.’

‘Machine-learning model is a digital representation of patterns identified in data through the automated processing of the data using an algorithm designed to enable the recognition or replication of those patterns.’

India

Summary:

India is advancing its AI governance framework but currently has no binding AI regulations. Key initiatives include the 2018 National Strategy for Artificial Intelligence, which prioritises AI applications in sectors like healthcare and smart infrastructure, and the 2021 Principles for Responsible AI, which outline ethical standards such as safety, inclusivity, privacy, and accountability. Operational guidelines released later in 2021 emphasise ethics by design and capacity building. Recent developments include the 2024 India AI Mission, with over $1.25 billion allocated for infrastructure, innovation, and safe AI, and advisories addressing deepfakes and generative AI.

As of the time of this writing, no AI regulations currently carry the force of law in India. Several frameworks are being formulated to guide the regulation of AI, including:

  • The National Strategy for Artificial Intelligence released in June 2018, which aims to establish a strong basis for future regulation of AI in India and focuses on AI intervention in healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation.
  • The Principles for Responsible AI released in February 2021, which serve as India’s roadmap for creating an ethical, responsible AI ecosystem across sectors.
  • The Operationalizing Principles for Responsible AI released in August 2021, which emphasises the need for regulatory and policy interventions, capacity building, and incentivising ethics by design regarding AI.

The Principles for Responsible AI identify the following broad principles for responsible management of AI, which can be leveraged by relevant stakeholders in India:

  • The principle of safety and reliability.
  • The principle of equality.
  • The principle of inclusivity and non-discrimination.
  • The principle of privacy and security.
  • The principle of transparency.
  • The principle of accountability.
  • The principle of protection and reinforcement of positive human values.

The Ministry of Commerce and Industry has established an Artificial Intelligence Task Force, which issued a report in March 2018.

In March 2024, India announced an allocation of over $1.25 billion for the India AI Mission, which will cover various aspects of AI, including computing infrastructure capacity, skilling, innovation, datasets, and safe and trusted AI.

India’s Ministry of Electronics and Information Technology issued advisories related to deepfakes and generative AI in 2024.

Definition of AI

The Principles for Responsible AI describe AI as ‘a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sense, comprehend and act. Computer vision and audio processing can actively perceive the world around them by acquiring and processing images, sound, and speech. The natural language processing and inference engines can enable AI systems to analyse and understand the information collected. An AI system can also make decisions through inference engines or undertake actions in the physical world. These capabilities are augmented by the ability to learn from experience and keep adapting over time.’

Israel

Summary:

Israel does not yet have binding AI regulations but is advancing a flexible, principles-based framework to encourage responsible innovation. The government’s approach relies on ethical guidelines and voluntary standards tailored to specific sectors, with the potential for broader legislation if common challenges arise. Key milestones include a 2022 white paper on AI and the 2023 Policy on Artificial Intelligence Regulations and Ethics.

As of the time of this writing, no AI regulations currently carry the force of law in Israel. Israel’s approach to AI governance encourages responsible innovation in the private sector through a sector-specific, principles-based framework. This strategy uses non-binding tools, including ethical guidelines and voluntary standards, allowing for regulatory flexibility tailored to each sector’s needs. However, the policy also leaves room for the introduction of broader, horizontal legislation should common challenges arise across sectors.

A white paper on AI was published in 2022 by Israel’s Ministry of Innovation, Science and Technology in collaboration with the Ministry of Justice, followed by the Policy on Artificial Intelligence Regulations and Ethics published in 2023. The AI Policy was developed pursuant to a government resolution that tasked the Ministry of Innovation, Science and Technology with advancing a national AI plan for Israel.

Definition of AI

The AI Policy describes an AI system as having ‘a wide range of applications such as autonomous vehicles, medical imaging analysis, credit scoring, securities trading, personalised learning and employment,’ notwithstanding that ‘the list of applications is constantly expanding.’

Japan

Summary:

Japan currently has no binding AI regulations but relies on voluntary guidelines to encourage responsible AI development and use. The AI Guidelines for Business Version 1.0 promote principles like human rights, safety, fairness, transparency, and innovation, fostering a flexible governance model involving stakeholders across sectors. Recent developments include the establishment of the AI Safety Institute in 2024 and the draft ‘Basic Act on the Advancement of Responsible AI,’ which proposes legally binding rules for certain generative AI models, including vetting, reporting, and compliance standards.

At the time of this writing, no AI regulations currently carry the force of law in Japan.

The updated AI Guidelines for Business Version 1.0 are not legally binding but are expected to support and induce voluntary efforts by developers, providers and business users of AI systems through compliance with generally recognised AI principles.

The principles outlined by the AI Guidelines are:

  • Human-centric – The utilisation of AI must not infringe upon the fundamental human rights guaranteed by the constitution and international standards.
  • Safety – Each AI business actor should avoid damage to the lives, bodies, minds, and properties of stakeholders.
  • Fairness – Elimination of unfair and harmful bias and discrimination.
  • Privacy protection – Each AI business actor respects and protects privacy.
  • Ensuring security – Each AI business actor ensures security to prevent the behaviours of AI from being unintentionally altered or stopped by unauthorised manipulations.
  • Transparency – Each AI business actor provides stakeholders with information to the reasonable extent necessary and technically possible while ensuring the verifiability of the AI system or service.
  • Accountability – Each AI business actor is accountable to stakeholders to ensure traceability, conforming to common guiding principles, based on each AI business actor’s role and degree of risk posed by the AI system or service.
  • Education/literacy – Each AI business actor is expected to provide persons engaged in its business with education regarding knowledge, literacy and ethics concerning the use of AI in a socially correct manner, and provide stakeholders with education about complexity, misinformation, and possibilities of intentional misuse.
  • Ensuring fair competition – Each AI business actor is expected to maintain a fair competitive environment so that new businesses and services using AI are created.
  • Innovation – Each AI business actor is expected to promote innovation and consider interconnectivity and interoperability.

The Guidelines emphasise a flexible governance model where various stakeholders are involved in a swift and ongoing process of assessing risks, setting objectives, designing systems, implementing solutions, and evaluating outcomes. This adaptive cycle operates within different governance structures, such as corporate policies, regulatory frameworks, infrastructure, market dynamics, and societal norms, ensuring they can quickly respond to changing conditions.

The AI Strategy Council was established to explore ways to harness AI’s potential while mitigating associated risks. On May 22, 2024, the Council presented draft discussion points outlining considerations on the necessity and possible scope of future AI regulations.

A working group has proposed the ‘Basic Act on the Advancement of Responsible AI’, which would introduce a hard-law approach to regulating certain generative AI foundation models. Under the proposed law, the government would designate which AI systems and developers fall under its scope and impose obligations related to the vetting, operation, and output of these systems, along with periodic reporting requirements.

Similar to the voluntary commitments made by major US AI companies in 2023, this framework would allow industry groups and developers to establish specific compliance standards. The government would have the authority to monitor compliance and enforce penalties for violations. If enacted, this would represent a shift in Japan’s AI regulation from a soft law to a more binding legal framework.

The AI Safety Institute was launched in February 2024 to examine the evaluation methods for AI safety and other related matters. The Institute is established within the Information-technology Promotion Agency, in collaboration with relevant ministries and agencies, including the Cabinet Office.

Definition of AI

The AI Guidelines define AI as an abstract concept that includes AI systems themselves as well as machine-learning software and programs.

Saudi Arabia

Summary:

Saudi Arabia has no binding AI regulations but is advancing its AI agenda through initiatives under Vision 2030, led by the Saudi Data and Artificial Intelligence Authority. The Authority oversees the National Strategy for Data & AI, which includes developing startups, training specialists, and establishing policies and standards. In 2023, SDAIA issued a draft set of AI Ethics Principles, categorising AI risks into four levels: little or no risk, limited risk, high risk (requiring assessments), and unacceptable risk (prohibited). Recent 2024 guidelines for generative AI offer non-binding advice for government and public use. These efforts are supported by a $40 billion AI investment fund.

At the time of this writing, no AI regulations currently carry the force of law in Saudi Arabia. In 2016, Saudi Arabia unveiled a long-term initiative known as Vision 2030, a bold plan spearheaded by Crown Prince Mohammed Bin Salman. 

A key aspect of this initiative was the significant focus on advancing AI, which culminated in the establishment of the Saudi Data and Artificial Intelligence Authority (SDAIA) in August 2019. This same decree also launched the Saudi Artificial Intelligence Center and the Saudi Data Management Office, both operating under SDAIA’s authority. 

SDAIA was tasked with managing the country’s AI research landscape and enforcing new policies and regulations that aligned with its AI objectives. In October 2020, SDAIA rolled out the National Strategy for Data & AI, which broadened the scope of the AI agenda to include goals such as developing over 300 AI and data-focused startups and training more than 20,000 specialists in these fields.

SDAIA was tasked by the Council of Ministers’ Resolution No. 292 to create policies, governance frameworks, standards, and regulations for data and artificial intelligence, and to oversee their enforcement once implemented. SDAIA issued draft AI Ethics Principles in 2023. The document enumerates seven principles, with corresponding conditions necessary for their sufficient implementation: fairness, privacy and security, humanity, social and environmental benefits, reliability and safety, transparency and explainability, and accountability and responsibility.

Similar to the EU AI Act, the Principles categorise the risks associated with the development and utilisation of AI into four levels, with different compliance requirements for each (a minimal illustrative mapping follows the list):

  • Little or No Risk: Systems classified as posing little or no risk do not face restrictions, but the SDAIA recommends compliance with the AI Ethics Principles.
  • Limited Risk: Systems classified as limited risk are required to comply with the Principles.
  • High Risk: Systems classified as high risk are required to undergo both pre- and post-deployment conformity assessments, in addition to meeting ethical standards and relevant legal requirements. Such systems are noted for the significant risk they might pose to fundamental rights.
  • Unacceptable Risk: Systems classified as posing unacceptable risks to individuals’ safety, well-being, or rights are strictly prohibited. These include systems that socially profile or sexually exploit children, for instance.
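
Read as a compliance table, the four tiers simply map each classification to a set of obligations. A minimal sketch of that mapping, with names invented for illustration (the draft Principles define these tiers in prose, not code):

```python
from enum import Enum

# Hypothetical encoding of the four risk tiers described above; the
# obligation strings paraphrase the draft Principles and are illustrative.

class RiskLevel(Enum):
    LITTLE_OR_NO = "little or no risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

OBLIGATIONS = {
    RiskLevel.LITTLE_OR_NO: ["compliance with the AI Ethics Principles recommended"],
    RiskLevel.LIMITED: ["compliance with the AI Ethics Principles required"],
    RiskLevel.HIGH: [
        "pre-deployment conformity assessment",
        "post-deployment conformity assessment",
        "ethical standards and relevant legal requirements",
    ],
    RiskLevel.UNACCEPTABLE: ["prohibited: may not be developed or deployed"],
}

def obligations(level: RiskLevel) -> list[str]:
    """Look up the compliance obligations attached to a risk classification."""
    return OBLIGATIONS[level]

print(obligations(RiskLevel.HIGH))
```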

On January 1, 2024, SDAIA released two sets of Generative AI Guidelines. The first is intended for government employees, while the second is aimed at the general public. 

Both documents offer guidance on the adoption and use of generative AI systems, using common scenarios to illustrate their application. They also address the challenges and considerations associated with generative AI, outline principles for responsible use, and suggest best practices. The Guidelines are not legally binding and serve as advisory frameworks.

Much of the attention surrounding Saudi Arabia’s AI advancements is driven by its large-scale investment efforts, notably a $40 billion fund dedicated to AI technology development.

Singapore

Summary:

Singapore has no binding AI regulations but promotes responsible AI through frameworks developed by the Infocomm Media Development Authority (IMDA). Key initiatives include the Model AI Governance Framework, which offers ethical guidelines for the private sector, and AI Verify, a toolkit for assessing AI systems’ alignment with these standards. The National AI Strategy and its 2.0 update emphasise fostering a trusted AI ecosystem while driving innovation and economic growth.

As of the time of this writing, no AI regulations currently carry the force of law in Singapore. The country’s approach to AI is largely shaped by the Infocomm Media Development Authority (IMDA), a statutory board under the Ministry of Communications and Information that plays a central role in guiding national AI policies and frameworks. IMDA takes a prominent position in shaping Singapore’s technology policies and describes itself as the ‘architect of the nation’s digital future’, highlighting its pivotal role in steering the country’s digital transformation.

In 2019, the Smart Nation and Digital Government offices introduced an extensive National AI Strategy, outlining Singapore’s goal to boost its economy and become a leader in the global AI industry. To support these objectives, the government also established a National AI Office within the Ministry to oversee the execution of its AI initiatives.

The Singapore government has developed various frameworks and tools to guide AI deployment and promote the responsible use of AI:

  • The Model AI Governance Framework, which offers comprehensive guidelines to private sector entities on tackling essential ethical and governance challenges in the implementation of AI technologies.
  • AI Verify, a testing framework and toolkit for AI governance developed by IMDA in collaboration with private sector partners and supported by the AI Verify Foundation (AIVF), created to assist organisations in assessing the alignment of their AI systems with ethical guidelines through standardised evaluations.
  • The National Artificial Intelligence Strategy 2.0, which sets out Singapore’s vision and dedication to fostering a trusted and accountable AI environment while promoting innovation and economic growth through AI.

Definition of AI

The 2020 Framework defines AI as ‘a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning, and, depending on the AI model, produce an output or decision (such as a prediction, recommendation and/or classification).’

The 2024 Framework defines Generative AI as ‘AI models capable of generating text, images or other media. They learn the patterns and structure of their input training data and generate new data with similar characteristics. Advances in transformer-based deep neural networks enable Generative AI to accept natural language prompts as input, including large language models.’

Republic of Korea

Summary:

The Republic of Korea has no binding AI regulations but is actively developing its framework through the Ministry of Science and ICT and the Personal Information Protection Commission. Key initiatives include the 2019 National AI Strategy, the 2020 Human-Centered AI Ethics Standards, and the 2023 Digital Bill of Rights. Current legislative efforts focus on the proposed Act on the Promotion of AI Industry and Framework for Establishing Trustworthy AI, which adopts a ‘permit-first-regulate-later’ approach to foster innovation while addressing high-risk applications.

As of the time of this writing, no AI regulations currently carry the force of law in the Republic of Korea. However, two major institutions are actively guiding the development of AI-related policies: the Ministry of Science and ICT (MSIT) and the Personal Information Protection Commission (PIPC). While the PIPC concentrates on ensuring that privacy laws keep pace with AI advancements and emerging risks, MSIT leads the nation’s broader AI initiatives. Among these efforts is the AI Strategy High-Level Consultative Council, a collaborative platform where government and private stakeholders engage in discussions on AI governance.

The Republic of Korea has been progressively shaping its AI governance framework, beginning with the release of its National Strategy for Artificial Intelligence in December 2019. This was followed by the Human-Centered Artificial Intelligence Ethics Standards in 2020 and the introduction of the Digital Bill of Rights in May 2023. Although no comprehensive AI law yet exists, several AI-related legislative proposals have been introduced to the National Assembly since 2022. One prominent proposal currently under review is the Act on the Promotion of AI Industry and Framework for Establishing Trustworthy AI, which aims to consolidate earlier legislative drafts into a more cohesive approach.

Unlike the European Union’s AI Act, the Republic of Korea’s proposed legislation follows a ‘permit-first-regulate-later’ philosophy, which emphasises fostering innovation and industrial growth in AI technologies. The bill also outlines specific obligations for high-risk AI applications, such as requiring prior notifications to users and implementing measures to ensure AI systems are trustworthy and safe. The MSIT Minister announced the establishment of an AI Safety Institute at the 2024 AI Safety Summit.

Definition of AI

Under the proposed AI Act, ‘artificial intelligence’ is defined as the electronic implementation of human intellectual abilities such as learning, reasoning, perception, judgement, and language comprehension.

UAE

Summary:

The UAE currently lacks binding AI regulations but actively promotes innovation through mechanisms such as regulatory sandboxes, which allow real-world testing of new technologies under regulatory oversight. AI governance in the UAE is shaped by its complex jurisdictional landscape, including federal laws, Mainland UAE, and financial free zones such as DIFC and ADGM. Key initiatives include the 2017 National Strategy for Artificial Intelligence 2031, managed by the UAE AI and Blockchain Council, which focuses on fairness, transparency, accountability, and responsible AI practices. Dubai’s 2019 AI Principles and Ethical AI Toolkit emphasise safety, fairness, and explainability in AI systems. The UAE’s AI Ethics: Principles and Guidelines (2022) provide a non-binding framework balancing innovation and societal interests, supported by the beta AI Ethics Self-Assessment Tool to evaluate and refine AI systems ethically. In 2023, the UAE released Falcon 180B, an open-source large language model, and in 2024, the Charter for the Development and Use of Artificial Intelligence, which aims to position the UAE as a global AI leader by 2031 while addressing algorithmic bias, privacy, and compliance with international standards.

At the time of this writing, no AI regulations currently carry the force of law in the UAE. The regulatory landscape of the United Arab Emirates is quite complex due to its division into multiple jurisdictions, each governed by its own set of rules and, in some cases, distinct regulatory bodies. 

Broadly, the UAE can be viewed in terms of its Financial Free Zones, such as the Dubai International Financial Centre (DIFC) and the Abu Dhabi Global Market (ADGM), which operate under separate legal frameworks, and Mainland UAE, which encompasses all areas outside these financial zones. Mainland UAE is further split into non-financial free zones and the broader onshore region, where the general laws of the country apply. As the UAE is a federal state composed of seven emirates – Dubai, Abu Dhabi, Sharjah, Fujairah, Ras Al Khaimah, Ajman, and Umm Al-Quwain – each of them retains control over local matters not specifically governed by federal law. The UAE is a strong advocate for “regulatory sandboxes,” a framework that allows new technologies to be tested in real-world conditions within a controlled setting, all under the close oversight of a regulatory authority.

In 2017, the UAE appointed a Minister of State for AI, Digital Economy and Remote Work Applications and released the National Strategy for Artificial Intelligence 2031, with the aim to create the country’s AI ecosystem. The UAE Artificial Intelligence and Blockchain Council is responsible for managing the National Strategy’s implementation, including crafting regulations and establishing best practices related to AI risks, data management, cybersecurity, and various other digital matters.

The City of Dubai launched the AI Principles and Guidelines for the Emirate of Dubai in January 2019. The Principles promote fairness, transparency, accountability, and explainability in AI development and oversight. Dubai introduced an Ethical AI Toolkit outlining principles for AI systems to ensure safety, fairness, transparency, accountability, and comprehensibility.

The UAE AI Ethics: Principles and Guidelines, released in December 2022 under the Minister of State for Artificial Intelligence, provides a non-binding framework for ethical AI design and use, focusing on fairness, accountability, transparency, explainability, robustness, human-centered design, sustainability, and privacy preservation. Drafted as a collaborative, multi-stakeholder effort, the guidelines balance the need for innovation with the protection of intellectual property and invite ongoing dialogue among stakeholders. It aims to evolve into a universal, practical, and widely adopted standard for ethical AI, aligning with the UAE National AI Strategy and Sustainable Development Goals to ensure AI serves societal interests while upholding global norms and advancing responsible innovation.

To operationalise these principles, the UAE has introduced a beta version of its AI Ethics Self-Assessment Tool, designed to help developers and operators evaluate the ethical performance of their AI systems. This tool encourages consideration of potential ethical challenges from initial development stages to full system maintenance and helps prioritise necessary mitigation measures. While non-compulsory, it employs weighted recommendations—where ‘should’ indicates high priority and ‘should consider’ denotes moderate importance—and discourages implementation unless a minimum ethics performance threshold is met. As a beta version, the tool invites extensive user feedback and shared use cases to refine its functionality.
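
The weighting scheme described here lends itself to a simple scoring model: ‘should’ items count more than ‘should consider’ items, and a system scoring below the minimum threshold is discouraged from deployment. A minimal sketch under those assumptions (the weights and threshold are hypothetical; the beta tool’s actual scoring is not described in this text):

```python
# Illustrative scoring logic for a weighted ethics self-assessment of the
# kind described above. Weights, items, and threshold are hypothetical.

WEIGHTS = {"should": 2.0, "should consider": 1.0}
THRESHOLD = 0.7   # assumed minimum ethics performance ratio

def assess(answers: list[tuple[str, bool]]) -> bool:
    """answers: (recommendation level, satisfied?) pairs for each item."""
    max_score = sum(WEIGHTS[level] for level, _ in answers)
    score = sum(WEIGHTS[level] for level, ok in answers if ok)
    ratio = score / max_score if max_score else 0.0
    print(f"ethics performance: {ratio:.0%}")
    return ratio >= THRESHOLD   # below threshold: deployment discouraged

# Example: one high-priority item unmet -> 3/5 = 60%, below the 70% bar
print(assess([("should", True), ("should", False), ("should consider", True)]))
```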

In 2023, the UAE, through the support of the Advanced Technology Research Council under the Abu Dhabi government, released the open-source large language model, Falcon 180B, named after the country’s national bird.

In July 2024, the UAE’s AI, Digital Economy, and Remote Work Applications Office released the Charter for the Development and Use of Artificial Intelligence. The Charter establishes a framework to position the UAE as a global leader in AI by 2031, prioritising human well-being, safety, inclusivity, and fairness in AI development. It addresses algorithmic bias, ensures transparency and accountability, and emphasises innovation while safeguarding community privacy in line with UAE data standards. The Charter also highlights the need for ethical oversight and compliance with international treaties and local regulations to ensure AI serves societal interests and upholds fundamental rights.

Definition of AI

In the 2023 AI Adoption Guideline in Government Services, the AI Office defined AI as ‘systems or machines that mimic human intelligence to perform tasks and can iteratively improve themselves based on the data they collect’.

UK

Summary:

The UK currently has no binding AI regulations but adopts a principles-based framework allowing sector-specific regulators to govern AI development and use within their domains. Key principles outlined in the 2023 White Paper: A Pro-Innovation Approach to AI Regulation include safety, transparency, fairness, accountability, and contestability. The UK’s National AI Strategy, overseen by the Office for Artificial Intelligence, aims to position the country as a global AI leader by promoting innovation and aligning with international frameworks. Recent developments, including proposed legislation for advanced AI models and the Digital Information and Smart Data Bill, signal a shift toward more structured regulation. The UK solidified its leadership in AI governance by hosting the 2023 Bletchley Summit, where 28 countries committed to advancing global AI safety and responsible development.

As of the time of this writing, no AI regulations currently carry the force of law in the UK. The UK supports a principles-based framework for existing sector-specific regulators to interpret and apply to the development and use of AI within their domains, aiming to position itself as a global leader in AI through a flexible regulatory approach that fosters innovation and growth in the sector. In 2022, the Government issued an AI Regulation Policy Paper, followed in 2023 by a White Paper titled ‘A Pro-Innovation Approach to AI Regulation’.

The White Paper lists five key principles designed to ensure responsible AI development: 

  1. Safety, Security, and Robustness. 
  2. Appropriate Transparency and Explainability.
  3. Fairness.
  4. Accountability and Governance.
  5. Contestability and Redress.

The UK Government set up an Office for Artificial Intelligence to oversee the implementation of the UK’s National AI Strategy, adopted in September 2021. The Strategy recognises the power of AI to increase resilience, productivity, growth and innovation across the private and public sectors, and sets out a plan for the next decade to position the UK as a world leader in artificial intelligence. The Office will perform various central functions to support the framework’s implementation, including:

  1. monitoring and evaluating the overall efficacy of the regulatory framework;
  2. assessing and monitoring risks across the economy arising from AI;
  3. promoting interoperability with international regulatory frameworks.

Shifting away from this flexible regulatory approach, the July 2024 King’s Speech announced plans to enact legislation requiring developers of the most advanced AI models to meet specific standards. The announcement also included the Digital Information and Smart Data Bill, which would reform data-related laws to ensure the safe development and use of emerging technologies, including AI. The details of how these measures will be implemented remain unclear.

In November 2023, the UK hosted the AI Safety Summit at Bletchley Park, positioning itself as a leader in fostering international collaboration on AI safety and governance. At the Summit, a landmark declaration was signed by 28 countries, committing them to collaborate on managing the risks of frontier AI technologies, ensuring AI safety, and advancing responsible AI development and governance globally.

Definition of AI

The White Paper describes AI as ‘products and services that are “adaptable” and “autonomous”’.

Senators push Biden to extend TikTok sale deadline amid legal uncertainty

Democratic Senator Ed Markey and Republican Senator Rand Paul are urging President Joe Biden to extend the January 19 deadline for ByteDance, the China-based owner of TikTok, to sell the app’s US assets or face a nationwide ban. The Supreme Court is set to hear arguments on January 10 regarding ByteDance’s legal challenge, which claims the law mandating the sale violates First Amendment free speech rights. In their letter to Biden, the senators highlighted the potential consequences for free expression and the uncertain future of the law.

The controversial legislation, signed by Biden in April, was passed due to national security concerns. The Justice Department asserts that TikTok’s vast data on 170 million American users poses significant risks, including potential manipulation of content. TikTok, however, denies posing any threat to US security.

The debate has split lawmakers. Senate Minority Leader Mitch McConnell supports enforcing the deadline, while President-elect Donald Trump has softened his stance, expressing support for TikTok and suggesting he would review the situation. The deadline falls just a day before Trump is set to take office on January 20, adding to the uncertainty surrounding the app’s fate.