AI advances ovarian cancer detection and speeds up blood tests

AI is revolutionising medical testing, including early detection of ovarian cancer and faster identification of life-threatening infections like pneumonia. Researchers are leveraging AI to interpret complex patterns in blood tests, improving accuracy and speed in diagnosing diseases.

Dr Daniel Heller’s team at Memorial Sloan Kettering Cancer Center developed a nanotube-based blood test that uses AI to detect ovarian cancer earlier than traditional methods. Despite limited data, the technology shows promise, with further studies underway to enhance its effectiveness and expand its application.

AI is also transforming infectious disease diagnosis. California-based Karius uses AI to identify pneumonia-causing pathogens within 24 hours, cutting costs and improving treatment outcomes. Meanwhile, AstraZeneca’s Dr Slavé Petrovski developed a platform that identifies over 120 diseases from UK Biobank data. However, challenges persist, including a lack of data sharing among researchers, prompting calls for more collaborative efforts.

Overview of AI policy in 10 jurisdictions

Brazil

Summary:

Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspired by the EU’s AI Act, the bill proposes a risk-based framework, categorising AI systems as unacceptable (banned), high risk (strictly regulated), or low risk (less oversight). This effort builds on Brazil’s 2019 National AI Strategy, which emphasises ethical AI that benefits society, respects human rights, and ensures transparency. Using the OECD’s definition of AI, the bill aims to protect people while fostering innovation.

As of the time of writing, Brazil does not yet have any AI-specific regulations with the force of law. However, the country is actively working towards establishing a regulatory framework for artificial intelligence. Brazilian legislators are currently considering the Proposed AI Regulation Bill No. 2338/2023, though the timeline for its adoption remains uncertain.

Brazil’s journey toward AI regulation began with the launch of the Estratégia Brasileira de Inteligência Artificial (EBIA) in 2019. The strategy outlines the country’s vision for fostering responsible and ethical AI development. Key principles of the EBIA include:

  • AI should benefit people and the planet, contributing to inclusive growth, sustainable development, and societal well-being.
  • AI systems must be designed to uphold the rule of law, human rights, democratic values, and diversity, with safeguards in place, such as human oversight when necessary.
  • AI systems should operate robustly, safely, and securely throughout their lifecycle, with ongoing risk assessment and mitigation.
  • Organisations and individuals involved in the AI lifecycle must commit to transparency and responsible disclosure, providing information that helps:
  1. Promote general understanding of AI systems;
  2. Inform people about their interactions with AI;
  3. Enable those affected by AI systems to understand the outcomes;
  4. Allow those adversely impacted to challenge AI-generated results.

In 2020, Brazil’s Chamber of Deputies began working on Bill 21/2020, aiming to establish a Legal Framework of Artificial Intelligence. Over time, four bills were introduced before the Chamber ultimately approved Bill 21/2020.

Meanwhile, the Federal Senate established a Commission of Legal Experts to support the development of an alternative AI bill. The commission held public hearings and international seminars, consulted with global experts, and conducted research into AI regulations from other jurisdictions. This extensive process culminated in a report that informed the drafting of Bill 2338 of 2023, which aims to govern the use of AI.

Following a similar approach to the European Union’s AI Act, the proposed Brazilian bill adopts a risk-based framework, classifying AI systems into three categories:

  • Unacceptable risk (entirely prohibited),
  • High risk (subject to stringent obligations for providers), and
  • Non-high risk.

This classification aims to ensure that AI systems in Brazil are developed and deployed in a way that minimises potential harm while promoting innovation and growth.
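
Purely as an illustration of how such a tiered scheme might be expressed in practice, the sketch below encodes the bill’s three categories as a simple lookup. The example purposes are hypothetical and not drawn from the bill’s actual text.

```python
from enum import Enum

class RiskTier(Enum):
    """The three risk tiers proposed by Bill 2338/2023."""
    UNACCEPTABLE = "unacceptable"  # entirely prohibited
    HIGH = "high"                  # stringent obligations for providers
    NON_HIGH = "non_high"          # lighter oversight

# Hypothetical mapping from a system's stated purpose to a tier; these
# example purposes are illustrative, not taken from the bill itself.
PROHIBITED = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"credit_scoring", "recruitment_screening", "biometric_identification"}

def classify(purpose: str) -> RiskTier:
    if purpose in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if purpose in HIGH_RISK:
        return RiskTier.HIGH
    return RiskTier.NON_HIGH
```

A system used for hypothetical "credit_scoring" would fall into the high-risk tier, while anything outside the two enumerated sets defaults to non-high risk.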

Definition of AI 

As of the time of writing, the concept of AI adopted by the draft Bill is that adopted by the OECD: ‘An AI system is a machine-based system that can, for a given set of objectives defined by humans, make predictions, recommendations or decisions that influence real or virtual environments. AI systems are designed to operate with varying levels of autonomy.’

Other laws and official documents that may impact the regulation of AI 

Sources

Canada

Summary:

Canada is progressing toward AI regulation with the proposed Artificial Intelligence and Data Act (AIDA), introduced in 2022 as part of Bill C-27. The Act focuses on regulating high-impact AI systems through compliance with existing consumer protection and human rights laws, overseen by the Minister of Innovation with support from an AI and Data Commissioner. AIDA also includes criminal provisions against harmful AI uses and will define specific regulations in consultation with stakeholders. While the framework is being finalised, a Voluntary Code of Conduct promotes accountability, fairness, transparency, and safety in generative AI development.

As of the time of writing, Canada does not yet have AI-specific regulations with the force of law. However, significant steps have been taken toward establishing a regulatory framework. In June 2022, the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022.

Bill C-27 remains under discussion and continues to progress through the legislative process. The Standing Committee on Industry and Technology (INDU) has announced that its review of the bill will stay on hold until at least February 2025.

The AIDA includes several key proposals:

  • High-impact AI systems must comply with existing Canadian consumer protection and human rights laws. Specific regulations defining these systems and their requirements will be developed in consultation with stakeholders to protect the public while minimising burdens on the AI ecosystem.
  • The Minister of Innovation, Science, and Industry will oversee the Act’s implementation, supported by an AI and Data Commissioner. Initially, this role will focus on education and assistance, but it will eventually take on compliance and enforcement responsibilities.
  • New criminal law provisions will prohibit reckless and malicious uses of AI that could harm Canadians or their interests.

In addition, Canada has introduced a Voluntary Code of Conduct for the responsible development and management of advanced generative AI systems. This code serves as a temporary measure while the legislative framework is being finalised.

The code of conduct sets out six core principles for AI developers and managers: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness. For instance, managers are responsible for ensuring that AI-generated content is clearly labelled, while developers must assess the training data and address harmful biases to promote fairness and equity in AI outcomes.

Definition of AI

At its current stage of drafting, the Artificial Intelligence and Data Act provides the following definitions:

‘Artificial intelligence system is a system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions.’

‘General-purpose system is an artificial intelligence system that is designed for use, or that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes and activities not contemplated during the system’s development.’

‘Machine-learning model is a digital representation of patterns identified in data through the automated processing of the data using an algorithm designed to enable the recognition or replication of those patterns.’

Other laws and official documents that may impact the regulation of AI

Sources 

India

Summary:

India is advancing its AI governance framework but currently has no binding AI regulations. Key initiatives include the 2018 National Strategy for Artificial Intelligence, which prioritises AI applications in sectors like healthcare and smart infrastructure, and the 2021 Principles for Responsible AI, which outline ethical standards such as safety, inclusivity, privacy, and accountability. Operational guidelines released later in 2021 emphasise ethics by design and capacity building. Recent developments include the 2024 India AI Mission, with over $1.25 billion allocated for infrastructure, innovation, and safe AI, and advisories addressing deepfakes and generative AI.

As of the time of this writing, no AI regulations currently carry the force of law in India. Several frameworks are being formulated to guide the regulation of AI, including:

  • The National Strategy for Artificial Intelligence released in June 2018, which aims to establish a strong basis for future regulation of AI in India and focuses on AI intervention in healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation.
  • The Principles for Responsible AI released in February 2021, which serve as India’s roadmap for creating an ethical, responsible AI ecosystem across sectors.
  • The Operationalizing Principles for Responsible AI released in August 2021, which emphasise the need for regulatory and policy interventions, capacity building, and incentivising ethics by design regarding AI.

The Principles for Responsible AI identify the following broad principles for responsible management of AI, which can be leveraged by relevant stakeholders in India:

  • The principle of safety and reliability.
  • The principle of equality.
  • The principle of inclusivity and non-discrimination.
  • The principle of privacy and security.
  • The principle of transparency.
  • The principle of accountability.
  • The principle of protection and reinforcement of positive human values.

The Ministry of Commerce and Industry has established an Artificial Intelligence Task Force, which issued a report in March 2018.

In March 2024, India announced an allocation of over $1.25 billion for the India AI Mission, which will cover various aspects of AI, including computing infrastructure capacity, skilling, innovation, datasets, and safe and trusted AI.

India’s Ministry of Electronics and Information Technology issued advisories related to deepfakes and generative AI in 2024.

Definition of AI

The Principles for Responsible AI describe AI as ‘a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sense, comprehend and act. Computer vision and audio processing can actively perceive the world around them by acquiring and processing images, sound, and speech. The natural language processing and inference engines can enable AI systems to analyse and understand the information collected. An AI system can also make decisions through inference engines or undertake actions in the physical world. These capabilities are augmented by the ability to learn from experience and keep adapting over time.’

Other laws and official documents that may impact the regulation of AI

Sources

Israel

Summary:

Israel does not yet have binding AI regulations but is advancing a flexible, principles-based framework to encourage responsible innovation. The government’s approach relies on ethical guidelines and voluntary standards tailored to specific sectors, with the potential for broader legislation if common challenges arise. Key milestones include a 2022 white paper on AI and the 2023 Artificial Intelligence Regulations and Ethics.

As of the time of this writing, no AI regulations currently carry the force of law in Israel. Israel’s approach to AI governance encourages responsible innovation in the private sector through a sector-specific, principles-based framework. This strategy uses non-binding tools, including ethical guidelines and voluntary standards, allowing for regulatory flexibility tailored to each sector’s needs. However, the policy also leaves room for the introduction of broader, horizontal legislation should common challenges arise across sectors.

A white paper on AI was published in 2022 by Israel’s Ministry of Innovation, Science and Technology in collaboration with the Ministry of Justice, followed by the Policy on Artificial Intelligence Regulations and Ethics published in 2023. The AI Policy was developed pursuant to a government resolution that tasked the Ministry of Innovation, Science and Technology with advancing a national AI plan for Israel.

Definition of AI

The AI Policy describes an AI system as having ‘a wide range of applications such as autonomous vehicles, medical imaging analysis, credit scoring, securities trading, personalised learning and employment,’ notwithstanding that ‘the list of applications is constantly expanding.’

Other laws and official non-binding documents that may impact the regulation of AI

Sources

Japan

Summary:

Japan currently has no binding AI regulations but relies on voluntary guidelines to encourage responsible AI development and use. The AI Guidelines for Business Version 1.0 promote principles like human rights, safety, fairness, transparency, and innovation, fostering a flexible governance model involving stakeholders across sectors. Recent developments include the establishment of the AI Safety Institute in 2024 and the draft ‘Basic Act on the Advancement of Responsible AI,’ which proposes legally binding rules for certain generative AI models, including vetting, reporting, and compliance standards.

At the time of this writing, no AI regulations currently carry the force of law in Japan.

The updated AI Guidelines for Business Version 1.0 are not legally binding but are expected to support and induce voluntary efforts by developers, providers and business users of AI systems through compliance with generally recognised AI principles.

The principles outlined by the AI Guidelines are:

  • Human-centric – The utilisation of AI must not infringe upon the fundamental human rights guaranteed by the constitution and international standards.
  • Safety – Each AI business actor should avoid damage to the lives, bodies, minds, and properties of stakeholders.
  • Fairness – Elimination of unfair and harmful bias and discrimination.
  • Privacy protection – Each AI business actor respects and protects privacy.
  • Ensuring security – Each AI business actor ensures security to prevent the behaviours of AI from being unintentionally altered or stopped by unauthorised manipulations.
  • Transparency – Each AI business actor provides stakeholders with information to the reasonable extent necessary and technically possible while ensuring the verifiability of the AI system or service.
  • Accountability – Each AI business actor is accountable to stakeholders to ensure traceability, conforming to common guiding principles, based on each AI business actor’s role and degree of risk posed by the AI system or service.
  • Education/literacy – Each AI business actor is expected to provide persons engaged in its business with education regarding knowledge, literacy and ethics concerning the use of AI in a socially correct manner, and provide stakeholders with education about complexity, misinformation, and possibilities of intentional misuse.
  • Ensuring fair competition – Each AI business actor is expected to maintain a fair competitive environment so that new businesses and services using AI are created.
  • Innovation – Each AI business actor is expected to promote innovation and consider interconnectivity and interoperability.

The Guidelines emphasise a flexible governance model where various stakeholders are involved in a swift and ongoing process of assessing risks, setting objectives, designing systems, implementing solutions, and evaluating outcomes. This adaptive cycle operates within different governance structures, such as corporate policies, regulatory frameworks, infrastructure, market dynamics, and societal norms, ensuring they can quickly respond to changing conditions.

The AI Strategy Council was established to explore ways to harness AI’s potential while mitigating associated risks. On May 22, 2024, the Council presented draft discussion points outlining considerations on the necessity and possible scope of future AI regulations.

A working group has proposed the ‘Basic Act on the Advancement of Responsible AI’, which would introduce a hard law approach to regulating certain generative AI foundation models. Under the proposed law, the government would designate which AI systems and developers fall under its scope and impose obligations related to the vetting, operation, and output of these systems, along with periodic reporting requirements.

Similar to the voluntary commitments made by major US AI companies in 2023, this framework would allow industry groups and developers to establish specific compliance standards. The government would have the authority to monitor compliance and enforce penalties for violations. If enacted, this would represent a shift in Japan’s AI regulation from a soft law to a more binding legal framework.

The AI Safety Institute was launched in February 2024 to examine the evaluation methods for AI safety and other related matters. The Institute is established within the Information-technology Promotion Agency, in collaboration with relevant ministries and agencies, including the Cabinet Office.

Definition of AI

The AI Guidelines define AI as an abstract concept that includes AI systems themselves as well as machine-learning software and programs.

Other laws and official non-binding documents that may impact the regulation of AI

Sources

Saudi Arabia

Summary:

Saudi Arabia has no binding AI regulations but is advancing its AI agenda through initiatives under Vision 2030, led by the Saudi Data and Artificial Intelligence Authority. The Authority oversees the National Strategy for Data & AI, which includes developing startups, training specialists, and establishing policies and standards. In 2023, SDAIA issued a draft set of AI Ethics Principles, categorising AI risks into four levels: little or no risk, limited risk, high risk (requiring assessments), and unacceptable risk (prohibited). Recent 2024 guidelines for generative AI offer non-binding advice for government and public use. These efforts are supported by a $40 billion AI investment fund.

At the time of this writing, no AI regulations currently carry the force of law in Saudi Arabia. In 2016, Saudi Arabia unveiled a long-term initiative known as Vision 2030, a bold plan spearheaded by Crown Prince Mohammed Bin Salman. 

A key aspect of this initiative was the significant focus on advancing AI, which culminated in the establishment of the Saudi Data and Artificial Intelligence Authority (SDAIA) in August 2019. This same decree also launched the Saudi Artificial Intelligence Center and the Saudi Data Management Office, both operating under SDAIA’s authority. 

SDAIA was tasked with managing the country’s AI research landscape and enforcing new policies and regulations that aligned with its AI objectives. In October 2020, SDAIA rolled out the National Strategy for Data & AI, which broadened the scope of the AI agenda to include goals such as developing over 300 AI and data-focused startups and training more than 20,000 specialists in these fields.

SDAIA was tasked by the Council of Ministers’ Resolution No. 292 to create policies, governance frameworks, standards, and regulations for data and artificial intelligence, and to oversee their enforcement once implemented. SDAIA issued draft AI Ethics Principles in 2023. The document enumerates seven principles with corresponding conditions necessary for their sufficient implementation. They include: fairness, privacy and security, humanity, social and environmental benefits, reliability and safety, transparency and explainability, and accountability and responsibility.

Similar to the EU AI Act, the Principles categorise the risks associated with the development and utilisation of AI into four levels, with different compliance requirements for each:

  • Little or No Risk: Systems classified as posing little or no risk do not face restrictions, but the SDAIA recommends compliance with the AI Ethics Principles.
  • Limited Risk: Systems classified as limited risk are required to comply with the Principles.
  • High Risk: Systems classified as high risk are required to undergo both pre- and post-deployment conformity assessments, in addition to meeting ethical standards and relevant legal requirements. Such systems are noted for the significant risk they might pose to fundamental rights.
  • Unacceptable Risk: Systems classified as posing unacceptable risks to individuals’ safety, well-being, or rights are strictly prohibited. These include systems that socially profile or sexually exploit children, for instance.
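
As a hypothetical sketch only, the four tiers above and the compliance posture attached to each could be encoded as a simple lookup; the obligation wording below paraphrases the draft Principles for illustration.

```python
# Hypothetical encoding of the four SDAIA risk levels and the compliance
# posture the draft Principles attach to each; the obligation strings
# are paraphrased for illustration, not quoted from the document.
TIER_OBLIGATIONS = {
    "little_or_no_risk": "no restrictions; compliance with the Principles recommended",
    "limited_risk": "compliance with the Principles required",
    "high_risk": "pre- and post-deployment conformity assessments required",
    "unacceptable_risk": "prohibited",
}

def obligations_for(tier: str) -> str:
    """Return the compliance requirement for a given risk tier."""
    if tier not in TIER_OBLIGATIONS:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return TIER_OBLIGATIONS[tier]
```
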

On January 1, 2024, SDAIA released two sets of Generative AI Guidelines. The first is intended for government employees, while the second is aimed at the general public. 

Both documents offer guidance on the adoption and use of generative AI systems, using common scenarios to illustrate their application. They also address the challenges and considerations associated with generative AI, outline principles for responsible use, and suggest best practices. The Guidelines are not legally binding and serve as advisory frameworks.

Much of the attention surrounding Saudi Arabia’s AI advancements is driven by its large-scale investment efforts, notably a $40 billion fund dedicated to AI technology development.

Other laws and official non-binding documents that may impact the regulation of AI

Sources

Singapore

Summary:

Singapore has no binding AI regulations but promotes responsible AI through frameworks developed by the Infocomm Media Development Authority (IMDA). Key initiatives include the Model AI Governance Framework, which offers ethical guidelines for the private sector, and AI Verify, a toolkit for assessing AI systems’ alignment with these standards. The National AI Strategy and its 2.0 update emphasise fostering a trusted AI ecosystem while driving innovation and economic growth.

As of the time of this writing, no AI regulations currently carry the force of law in Singapore. The country’s approach to AI governance is largely shaped by the Infocomm Media Development Authority (IMDA), a statutory board under the Ministry of Communications and Information. IMDA takes a prominent position in shaping Singapore’s technology policies and refers to itself as the ‘architect of the nation’s digital future,’ highlighting its pivotal role in steering the country’s digital transformation.

In 2019, the Smart Nation and Digital Government offices introduced an extensive National AI Strategy, outlining Singapore’s goal to boost its economy and become a leader in the global AI industry. To support these objectives, the government also established a National AI Office within the Ministry to oversee the execution of its AI initiatives.

The Singapore government has developed various frameworks and tools to guide AI deployment and promote the responsible use of AI:

  • The Model AI Governance Framework, which offers comprehensive guidelines to private sector entities on tackling essential ethical and governance challenges in the implementation of AI technologies.
  • AI Verify, a testing framework and toolkit for AI governance developed by IMDA in collaboration with private sector partners and supported by the AI Verify Foundation (AIVF), created to assist organisations in assessing the alignment of their AI systems with ethical guidelines using standardised evaluations.
  • The National Artificial Intelligence Strategy 2.0, which highlights Singapore’s vision and dedication to fostering a trusted and accountable AI environment and promoting innovation and economic growth through AI.

Definition of AI

The 2020 Framework defines AI as ‘a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning, and, depending on the AI model, produce an output or decision (such as a prediction, recommendation and/or classification).’

The 2024 Framework defines Generative AI as ‘AI models capable of generating text, images or other media. They learn the patterns and structure of their input training data and generate new data with similar characteristics. Advances in transformer-based deep neural networks enable Generative AI to accept natural language prompts as input, including large language models.’

Other laws and official non-binding documents that may impact the regulation of AI

Sources

Republic of Korea

Summary:

The Republic of Korea has no binding AI regulations but is actively developing its framework through the Ministry of Science and ICT and the Personal Information Protection Commission. Key initiatives include the 2019 National AI Strategy, the 2020 Human-Centered AI Ethics Standards, and the 2023 Digital Bill of Rights. Current legislative efforts focus on the proposed Act on the Promotion of AI Industry and Framework for Establishing Trustworthy AI, which adopts a ‘permit-first-regulate-later’ approach to foster innovation while addressing high-risk applications.

As of the time of this writing, no AI regulations currently carry the force of law in the Republic of Korea. However, two major institutions are actively guiding the development of AI-related policies: the Ministry of Science and ICT (MSIT) and the Personal Information Protection Commission (PIPC). While the PIPC concentrates on ensuring that privacy laws keep pace with AI advancements and emerging risks, MSIT leads the nation’s broader AI initiatives. Among these efforts is the AI Strategy High-Level Consultative Council, a collaborative platform where government and private stakeholders engage in discussions on AI governance.

The Republic of Korea has been progressively shaping its AI governance framework, beginning with the release of its National Strategy for Artificial Intelligence in December 2019. This was followed by the Human-Centered Artificial Intelligence Ethics Standards in 2020 and the introduction of the Digital Bill of Rights in May 2023. Although no comprehensive AI law exists as of yet, several AI-related legislative proposals have been introduced to the National Assembly since 2022. One prominent proposal currently under review is the Act on the Promotion of AI Industry and Framework for Establishing Trustworthy AI, which aims to consolidate earlier legislative drafts into a more cohesive approach.

Unlike the European Union’s AI Act, the Republic of Korea’s proposed legislation follows a ‘permit-first-regulate-later’ philosophy, which emphasises fostering innovation and industrial growth in AI technologies. The bill also outlines specific obligations for high-risk AI applications, such as requiring prior notifications to users and implementing measures to ensure AI systems are trustworthy and safe. The MSIT Minister announced the establishment of an AI Safety Institute at the 2024 AI Safety Summit.

Definition of AI

Under the proposed AI Act, ‘artificial intelligence’ is defined as the electronic implementation of human intellectual abilities such as learning, reasoning, perception, judgement, and language comprehension.

Other laws and official non-binding documents that may impact the regulation of AI

Sources

UAE

Summary:

The UAE currently lacks binding AI regulations but actively promotes innovation through mechanisms such as regulatory sandboxes, which allow real-world testing of new technologies under regulatory oversight. AI governance in the UAE is shaped by its complex jurisdictional landscape, including federal laws, Mainland UAE, and financial free zones such as DIFC and ADGM. Key initiatives include the 2017 National Strategy for Artificial Intelligence 2031, managed by the UAE AI and Blockchain Council, which focuses on fairness, transparency, accountability, and responsible AI practices. Dubai’s 2019 AI Principles and Ethical AI Toolkit emphasise safety, fairness, and explainability in AI systems. The UAE’s AI Ethics: Principles and Guidelines (2022) provide a non-binding framework balancing innovation and societal interests, supported by the beta AI Ethics Self-Assessment Tool to evaluate and refine AI systems ethically. In 2023, the UAE released Falcon 180B, an open-source large language model, and in 2024, the Charter for the Development and Use of Artificial Intelligence, which aims to position the UAE as a global AI leader by 2031 while addressing algorithmic bias, privacy, and compliance with international standards.

At the time of this writing, no AI regulations currently carry the force of law in the UAE. The regulatory landscape of the United Arab Emirates is quite complex due to its division into multiple jurisdictions, each governed by its own set of rules and, in some cases, distinct regulatory bodies. 

Broadly, the UAE can be viewed in terms of its Financial Free Zones, such as the Dubai International Financial Centre (DIFC) and the Abu Dhabi Global Market (ADGM), which operate under separate legal frameworks, and Mainland UAE, which encompasses all areas outside these financial zones. Mainland UAE is further split into non-financial free zones and the broader onshore region, where the general laws of the country apply. As the UAE is a federal state composed of seven emirates – Dubai, Abu Dhabi, Sharjah, Fujairah, Ras Al Khaimah, Ajman, and Umm Al-Quwain – each of them retains control over local matters not specifically governed by federal law. The UAE is a strong advocate for ‘regulatory sandboxes’, a framework that allows new technologies to be tested in real-world conditions within a controlled setting, all under the close oversight of a regulatory authority.

In 2017, the UAE appointed a Minister of State for AI, Digital Economy and Remote Work Applications and released the National Strategy for Artificial Intelligence 2031, with the aim to create the country’s AI ecosystem. The UAE Artificial Intelligence and Blockchain Council is responsible for managing the National Strategy’s implementation, including crafting regulations and establishing best practices related to AI risks, data management, cybersecurity, and various other digital matters.

The City of Dubai launched the AI Principles and Guidelines for the Emirate of Dubai in January 2019. The Principles promote fairness, transparency, accountability, and explainability in AI development and oversight. Dubai introduced an Ethical AI Toolkit outlining principles for AI systems to ensure safety, fairness, transparency, accountability, and comprehensibility.

The UAE AI Ethics: Principles and Guidelines, released in December 2022 under the Minister of State for Artificial Intelligence, provides a non-binding framework for ethical AI design and use, focusing on fairness, accountability, transparency, explainability, robustness, human-centred design, sustainability, and privacy preservation. Drafted as a collaborative, multi-stakeholder effort, the guidelines balance the need for innovation with the protection of intellectual property and invite ongoing dialogue among stakeholders. They aim to evolve into a universal, practical, and widely adopted standard for ethical AI, aligning with the UAE National AI Strategy and Sustainable Development Goals to ensure AI serves societal interests while upholding global norms and advancing responsible innovation.

To operationalise these principles, the UAE has introduced a beta version of its AI Ethics Self-Assessment Tool, designed to help developers and operators evaluate the ethical performance of their AI systems. The tool encourages consideration of potential ethical challenges from the initial development stages through full system maintenance and helps prioritise necessary mitigation measures. While voluntary, it employs weighted recommendations (‘should’ indicates high priority, while ‘should consider’ denotes moderate importance) and discourages deployment unless a minimum ethics performance threshold is met. As a beta release, the tool invites extensive user feedback and shared use cases to refine its functionality.

In 2023, the UAE, through the support of the Advanced Technology Research Council under the Abu Dhabi government, released Falcon 180B, an open-source large language model named after the country’s national bird.

In July 2024, the UAE’s AI, Digital Economy, and Remote Work Applications Office released the Charter for the Development and Use of Artificial Intelligence. The Charter establishes a framework to position the UAE as a global leader in AI by 2031, prioritising human well-being, safety, inclusivity, and fairness in AI development. It addresses algorithmic bias, ensures transparency and accountability, and emphasises innovation while safeguarding community privacy in line with UAE data standards. The Charter also highlights the need for ethical oversight and compliance with international treaties and local regulations to ensure AI serves societal interests and upholds fundamental rights.

Definition of AI

In the 2023 AI Adoption Guideline in Government Services, the AI Office defined AI as ‘systems or machines that mimic human intelligence to perform tasks and can iteratively improve themselves based on the data they collect’.

Other laws and official non-binding documents that may impact the regulation of AI

Sources

UK

Summary:

The UK currently has no binding AI regulations but adopts a principles-based framework allowing sector-specific regulators to govern AI development and use within their domains. Key principles outlined in the 2023 White Paper: A Pro-Innovation Approach to AI Regulation include safety, transparency, fairness, accountability, and contestability. The UK’s National AI Strategy, overseen by the Office for Artificial Intelligence, aims to position the country as a global AI leader by promoting innovation and aligning with international frameworks. Recent developments, including proposed legislation for advanced AI models and the Digital Information and Smart Data Bill, signal a shift toward more structured regulation. The UK solidified its leadership in AI governance by hosting the 2023 Bletchley Summit, where 28 countries committed to advancing global AI safety and responsible development.

As of the time of this writing, no AI regulations currently carry the force of law in the UK. The UK supports a principles-based framework for existing sector-specific regulators to interpret and apply to the development and use of AI within their domains. The UK aims to position itself as a global leader in AI by establishing a flexible regulatory framework that fosters innovation and growth in the sector. In 2022, the Government issued an AI Regulation Policy Paper, followed in 2023 by a White Paper titled ‘A Pro-Innovation Approach to AI Regulation’.

The White Paper lists five key principles designed to ensure responsible AI development: 

  1. Safety, Security, and Robustness. 
  2. Appropriate Transparency and Explainability.
  3. Fairness.
  4. Accountability and Governance.
  5. Contestability and Redress.

The UK Government set up an Office for Artificial Intelligence to oversee the implementation of the UK’s National AI Strategy, adopted in September 2021. The Strategy recognises the power of AI to increase resilience, productivity, growth and innovation across the private and public sectors, and sets out a plan for the next decade to position the UK as a world leader in artificial intelligence. The Office will perform various central functions to support the framework’s implementation, including: 

  1. monitoring and evaluating the overall efficacy of the regulatory framework;
  2. assessing and monitoring risks across the economy arising from AI;
  3. promoting interoperability with international regulatory frameworks.

Signalling a shift away from this flexible regulatory approach, the King’s Speech delivered by King Charles III in July 2024 announced plans to enact legislation requiring developers of the most advanced AI models to meet specific standards. The announcement also included the Digital Information and Smart Data Bill, which will reform data-related laws to ensure the safe development and use of emerging technologies, including AI. The details of how these measures will be implemented remain unclear.

In November 2023, the UK hosted the Bletchley Summit, positioning itself as a leader in fostering international collaboration on AI safety and governance. At the Summit, a landmark declaration was signed by 28 countries, committing to collaborate on managing the risks of frontier AI technologies, ensuring AI safety, and advancing responsible AI development and governance globally.

Definition of AI

The White Paper describes AI as ‘products and services that are “adaptable” and “autonomous”’.

Other laws and official non-binding documents that may impact the regulation of AI

Sources

Boon secures $20.5M to enhance AI tools for logistics

AI-powered logistics startup Boon has raised $20.5 million to revolutionise fleet and logistics operations. The funding, led by Marathon and Redpoint, includes $15.5 million from a Series A round and a previously undisclosed $5 million seed investment. The platform aims to streamline operations and improve efficiency by unifying data from diverse applications.

Boon targets inefficiencies in the logistics industry, particularly among small and medium-sized enterprises managing over 60 million fleet vehicles globally. Current tools, often fragmented across 15 to 20 applications, create administrative burdens. Boon’s AI agent addresses these challenges by automating processes, optimising workflows, and providing actionable insights.

Founder Deepti Yenireddy drew on her experience at fleet operations giant Samsara to design Boon. She assembled a team of experts from Apple, DoorDash, Google, and other leading firms to develop the platform. Boon plans to use the funding to expand its offerings, covering areas like container loading and staffing optimisation.

Early results have been promising. With paying customers representing 35,000 drivers and 10,000 vehicles, Boon reached an annual revenue run rate of $1 million within nine months. The company is hiring to accelerate growth and broaden its impact on the logistics sector.

Netherlands expands investment law to include AI and biotech

The Dutch government announced plans to expand its investment screening law to include emerging technologies like biotech, AI, and nanotechnology. The move aims to protect national security amid growing global tensions, with threats such as cyberattacks and espionage becoming more prevalent. Economy Minister Dirk Beljaarts emphasised the importance of safeguarding Dutch businesses, innovations, and the economy.

In addition to biotech and AI, the updated law will cover sensor and navigation technology, advanced materials, and nuclear technologies used in medicine. The government expects these changes to take effect by the second half of 2025.

Introduced in 2023, the investment screening law allows the Dutch government to block foreign takeovers of critical infrastructure or technology that could threaten national security. This comes after the Netherlands imposed restrictions on semiconductor exports to China under US pressure.

Meta projects Instagram to dominate US ad income

Instagram is poised to account for more than half of Meta Platforms’ US advertising revenue by 2025, according to research firm Emarketer. This anticipated growth is largely attributed to the platform’s enhanced monetisation strategies, particularly its focus on short-form video content such as Reels, which competes directly with TikTok and YouTube Shorts.

The increasing engagement with Reels has attracted marketers seeking to capitalise on the popularity of short videos, leading to a significant rise in ad placements. In 2024, Instagram’s ad revenue was primarily derived from its Feed (53.7%) and Stories (24.6%). Meanwhile, the combined revenue share from Explore, Reels, and potentially Threads is projected to grow to 9.6% in 2025.

Jasmine Enberg, principal analyst at Emarketer, notes that users now spend nearly two-thirds of their Instagram time watching videos, underscoring the platform’s shift towards video-centric content. Additionally, if a TikTok ban were to be enforced in the US, Reels could become a prominent alternative for advertisers, further boosting Instagram’s market share.

Norway to host the 2025 Internet Governance Forum

Norway has been selected by the UN to host the 2025 Internet Governance Forum (IGF), marking a significant milestone as the largest UN meeting ever held in the country. Scheduled for June 2025, the forum will gather thousands of participants from governments, civil society, academia, and the private sector to address critical issues in global internet governance.

Karianne Tung, Norway’s Minister of Digitalisation and Public Governance, emphasised the importance of the IGF, stating, ‘In an era where some countries seek to restrict online freedoms, it is more vital than ever for nations like Norway to engage in discussions and negotiations regarding the frameworks that govern the internet.’ Foreign Minister Espen Barth Eide echoed this sentiment, highlighting Norway’s commitment to a free and open internet as fundamental to democracy and human rights.

The IGF 2025 will celebrate the forum’s 20th anniversary, offering a platform for international collaboration on themes such as digital inclusion, public policy, and online safety. Over five days, the event will feature hundreds of presentations, workshops, and meetings, with around 4,000 in-person and an equal number of virtual participants expected to contribute.

Norwegian stakeholders will have a unique opportunity to showcase local innovations and perspectives on the global stage. Selected over Russia as the host, Norway’s role underscores the international community’s trust in its ability to facilitate meaningful dialogue on the future of the internet.

As the digital landscape evolves, the 2025 IGF is poised to be pivotal in shaping a safe, inclusive, and democratic online space for all.

IGF 2024 closing ceremony: Shaping the future of internet governance

The 19th Internet Governance Forum (IGF) in Riyadh concluded with a forward-looking ceremony that reflected on its achievements while setting ambitious goals for the future. The forum, a key platform for global discussions on internet governance, highlighted the importance of inclusivity, digital equality, and adapting to emerging technological challenges.

Li Junhua, UN Under-Secretary-General for Economic and Social Affairs, emphasised the enduring relevance of the WSIS principles and the ethical considerations essential in navigating digital innovation. Vint Cerf, chair of the IGF leadership panel, proposed elevating the IGF to a permanent status within the UN structure to secure stable funding and expand its impact.

‘The IGF must evolve to deliver tangible results,’ Cerf remarked, suggesting a focus on measurable metrics and concrete outputs, including revisiting foundational documents and preparing for the next IGF in Oslo. Olaf Kolkman from the Internet Society reinforced the need for continuous self-assessment, urging the IGF to enhance its processes for greater stakeholder benefits.

Inclusivity was a dominant theme, with speakers advocating for broader representation in digital policymaking. Ghanaian physician Dr. Angela Sulemana underscored the transformative power of digital tools in healthcare, highlighting the value of diverse perspectives, especially from young professionals.

Dr. Latifa al-Abdul Karim, a member of Saudi Arabia’s Shura Council, called for legislative innovation to address digital challenges, emphasising collaboration, inclusivity, and safeguarding vulnerable groups, including children and the environment. Juan Fernandez, senior advisor in the Ministry of Communications of Cuba, stressed the urgent need to bridge digital inequalities, particularly between developed and developing nations.

The forum also addressed pressing global issues, such as the digital divide and governance of emerging technologies like AI and quantum computing. The session closed with a call for stronger global digital cooperation and a shared commitment to implementing the Global Digital Compact.

As participants look to the IGF 2025 in Oslo, the focus remains on turning discussions into actionable outcomes, ensuring the internet remains a safe, inclusive, and transformative tool for all.

All transcripts from the Internet Governance Forum 2024 sessions can be found on dig.watch.

Shaping the future of the IGF: Reflections and aspirations

At the Internet Governance Forum (IGF) 2024 in Riyadh, the session ‘Looking Back, Moving Forward’ provided a platform to reflect on the forum’s 19-year history and envision its future role. Amid preparations for the World Summit on the Information Society (WSIS) Plus 20 review and the implementation of the Global Digital Compact (GDC), participants emphasised the IGF’s continued relevance as a multistakeholder platform for global internet governance.

A legacy of dialogue and collaboration

Speakers hailed the IGF’s unique role in fostering inclusive dialogue on digital policy. Timea Suto of the International Chamber of Commerce praised its vibrant ecosystem for addressing critical internet governance issues, while Valeria Betancourt from the Association for Progressive Communications highlighted its capacity to bring diverse stakeholders together for meaningful debates.

ICANN’s Göran Marby underscored the IGF’s centrality within the WSIS framework, describing it as a space for shaping narratives and informing policy through open discussion. Juan Fernandez from the Ministry of Communications of Cuba raised a critical point about representation, urging for more consistent and diverse attendance to ensure the forum remains truly inclusive. Other participants echoed this call and highlighted the importance of engaging voices from underrepresented regions and communities.

Evolving for greater impact

As the IGF approaches its 20th anniversary, there is broad consensus on the need to evolve its structure and mandate to enhance its effectiveness. Proposals included integrating the WSIS framework and GDC implementation into its work and making the IGF a permanent institution within the UN system.

‘Strengthening the IGF’s institutional foundation is crucial for its long-term impact,’ argued Vint Cerf, a founding father of the internet.

Speakers also stressed the importance of producing tangible outcomes. Valeria Betancourt and Göran Marby called for actionable recommendations and systematic progress tracking, while Lesotho’s ICT Minister, Nthati Moorosi, suggested special forums with private sector leaders to tackle connectivity challenges. These measures, they argued, would enhance the IGF’s relevance in addressing pressing digital issues.

Inclusivity and grassroots engagement

Enhancing inclusivity remained a recurring theme. Carol Roach, MAG Chair for IGF 2024, and Christine Arida, Board Member of the Strategic Advisory to the Executive President of the National Telecom Regulatory Authority of Egypt, highlighted the need to amplify voices from the Global South and engage underserved communities.

Leveraging national and regional IGFs (NRIs) was identified as a key strategy for grassroots engagement. ‘The IGF’s strength lies in its ability to facilitate conversations that reach the margins,’ noted Valeria Betancourt.

Balancing innovation with privacy and accessibility

Emerging technologies, particularly AI, featured prominently in discussions. Participants stressed the IGF’s role in addressing the governance challenges posed by rapid innovation while safeguarding privacy and inclusivity.

‘Multistakeholder processes must move beyond handshakes to deeper collaboration,’ remarked one speaker, capturing the need for cohesive efforts in navigating the evolving digital landscape.

Looking ahead

The session concluded with a collective vision for the IGF’s future. As it approaches its 20th year, the forum is tasked with balancing its role as a space for open dialogue with the need for concrete outcomes.

Strengthened partnerships, a clearer institutional framework, and an inclusive approach will be essential in ensuring the IGF remains a cornerstone of global internet governance. The journey forward will be defined by its ability to adapt and address the complex challenges of an increasingly interconnected world.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Democratising AI: the promise and pitfalls of open-source LLMs

At the Internet Governance Forum 2024 in Riyadh, the session ‘Democratising Access to AI with Open-Source LLMs’ explored a transformative vision: a world where open-source large language models (LLMs) democratise AI, making it accessible, equitable, and responsive to local needs. However, this vision remains a double-edged sword, revealing immense promise and critical challenges.

Panelists, including global experts from India, Brazil, Africa, and the Dominican Republic, championed open-source AI to prevent monopolisation by large tech companies. Melissa Muñoz Suro, Director of Innovation in the Dominican Republic, showcased Taina, an AI project designed to reflect the nation’s culture and language. ‘Open-source means breaking the domino effect of big tech reliance,’ she noted, emphasising that smaller economies could customise AI to serve their unique priorities and populations.

Yet, as Muñoz Suro underscored, resource constraints are a significant obstacle. Training open-source models requires computational power, infrastructure, and expertise, which are luxuries many Global South nations lack. Abraham Fifi Selby, a Global South AI expert, echoed this, calling for ‘public-private partnerships and investment in localised data infrastructure’ to bridge the gap. He highlighted the significance of African linguistic representation, emphasising that AI trained in local dialects is essential to addressing regional challenges.

The debate also brought ethical and governance concerns into sharp focus. Bianca Kremer, a researcher and activist from Brazil, argued that regulation is indispensable to combat monopolies and ensure AI fairness. She cited Brazil’s experience with algorithmic bias, pointing to an incident where generative AI stereotypically portrayed a Brazilian woman from a favela (urban slum) as holding a gun. ‘Open-source offers the power to fix these biases,’ Kremer explained, but insisted that robust regulation must accompany technological optimism.

Despite its potential, open-source AI risks misuse and dwindling incentives for large-scale investments. Daniele Turra from ISA Digital Consulting proposed redistributing computational resources—suggesting mechanisms like a ‘computing tax’ or infrastructure sharing by cloud giants to ensure equitable access. The session’s audience also pushed for practical solutions, including open datasets and global collaboration to make AI development truly inclusive.

While challenges persist, trust, collaboration, and local capacity-building remain critical to open-source AI’s success. As Muñoz Suro stated, ‘Technology should make life simpler, happier, and inclusive, and open-source AI, if done right, is the key to unlocking this vision.’

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Tackling internet fragmentation: A global challenge at IGF 2024

At the Internet Governance Forum (IGF) 2024 in Riyadh, the main session ‘Policy Network on Internet Fragmentation’ delved into implementing Article 29C of the Global Digital Compact (GDC), which seeks to prevent internet fragmentation. A diverse panel comprising government officials, technical experts, and civil society representatives highlighted the multifaceted nature of this issue and proposed actionable strategies to address it.

The scope of internet fragmentation

Panellists underscored that internet fragmentation manifests on technical, governance, and user experience levels. While the global network of over 70,000 autonomous systems remains technically unified, fragmentation is evident in user experiences. Anriette Esterhuysen from the Association for Progressive Communications pointed out, ‘How you view the internet as fragmented or not depends on whose internet you think it is.’ She stressed that billions face access and content restrictions, fragmenting their digital experience.

Gbenga Sesan of Paradigm Initiative echoed this concern, noting that fragmentation undermines the goal of universal connectivity by 2030. The tension between a seamless technical infrastructure and fractured user realities loomed large in the discussion.

Operationalising the GDC commitment

Alisa Heaver from the Dutch Ministry of Economic Affairs and Climate Policy highlighted the critical role of Article 29C as a blueprint for preventing fragmentation. She called for a measurable framework to track progress by the GDC’s 2027 review, emphasising that research on the economic impacts of fragmentation must be prioritised. ‘We need to start measuring internet fragmentation now more than ever,’ Heaver urged.

Strategies for collaboration and progress

Multistakeholder cooperation emerged as a cornerstone for addressing fragmentation. Wim Degezelle, a consultant with the IGF Secretariat, presented the Policy Network on Internet Fragmentation (PNIF) framework, while Amitabh Singhal of ICANN highlighted the IGF’s unique position in bridging technical and policy divides. Singhal also pointed to the potential renewal of the IGF’s mandate as pivotal in continuing these essential discussions.

The session emphasised inclusivity in technical standard-setting processes, with Sesan advocating for civil society’s role and audience members calling for stronger private sector engagement. Sheetal Kumar, co-facilitator of the session, stressed the importance of leveraging national and regional IGFs to foster localised dialogues on fragmentation.

Next steps and future outlook

The panel identified key actions, including developing measurable frameworks, conducting economic research, and utilising national and regional IGFs to sustain discussions. The upcoming IGF in 2025 was flagged as a milestone for assessing progress. Despite the issue’s complexity, the panellists were united in their commitment to fostering a more inclusive and seamless internet.

As Esterhuysen aptly summarised, addressing internet fragmentation requires a concerted effort to view the digital landscape through diverse lenses. This session reaffirmed that preventing fragmentation is not just a technical challenge but a deeply human one, demanding collaboration, research, and sustained dialogue.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.