o3 models set to enhance OpenAI’s capabilities

OpenAI has announced internal testing of its latest reasoning models, o3 and o3 mini, which aim to tackle complex problems more effectively than their predecessors. The o3 mini model is expected to launch by January, with the full o3 model to follow. These developments signal increased competition with rivals like Google, which recently released its second-generation Gemini AI model.

OpenAI’s advancements build on its earlier o1 models, released in September, which demonstrated improved reasoning in science, coding, and mathematics. The company is inviting external researchers to test the new o3 models before public release.

The announcement follows OpenAI’s $6.6 billion funding round in October, highlighting its growing influence in the generative AI market. As competition intensifies, both OpenAI and Google aim to push the boundaries of AI technology.

Robotic scientists aim to automate experiments

Tetsuwan Scientific, a startup founded by Cristian Ponce and Théo Schäfer, is developing robotic AI scientists designed to automate lab experiments. Inspired by the rapid evolution of AI models like GPT-4, these robots aim to address the repetitive and labour-intensive aspects of research. They combine low-cost robotic hardware with advanced software that interprets and executes scientific tasks autonomously.

The breakthrough came when Ponce tested AI’s ability to diagnose problems in scientific data and propose solutions. However, existing lab robots lacked the ability to act physically on those insights. Tetsuwan’s solution integrates AI to give robots the context and flexibility to perform tasks like pipetting and analysing results without constant reprogramming.

Currently working with La Jolla Labs in RNA therapeutic drug development, Tetsuwan has secured $2.7 million in funding to advance its technology. The ultimate goal is to create self-reliant AI scientists capable of automating the entire scientific process, from hypothesis to reproducible results, potentially accelerating innovation at an unprecedented pace.

Tech giants join forces for US defence contracts, FT says

Data analytics firm Palantir Technologies and defence tech company Anduril Industries are leading efforts to form a consortium of technology companies to bid jointly for US government contracts, according to a report from the Financial Times. The group is expected to include SpaceX, OpenAI, Scale AI, autonomous shipbuilder Saronic, and other key players, with formal agreements anticipated as early as January.

The consortium aims to reshape the defence contracting landscape by combining cutting-edge technologies from some of Silicon Valley’s most innovative firms. A member involved in the initiative described it as a move toward creating “a new generation of defence contractors.” This collective effort seeks to enhance the efficiency of supplying advanced defence systems, leveraging technologies like AI, autonomous vehicles, and other innovations.

The initiative aligns with President-elect Donald Trump’s push for greater government efficiency, spearheaded in part by Elon Musk, who has been outspoken about reforming Pentagon spending priorities. Musk and others have criticised traditional defence programs, such as Lockheed Martin’s F-35 fighter jet, advocating instead for the development of cost-effective, AI-driven drones, missiles, and submarines.

With these partnerships, the consortium hopes to challenge the dominance of established defence contractors like Boeing, Northrop Grumman, and Lockheed Martin, offering a modernised approach to defence technology and procurement in the US.

Free AI conversations with OpenAI’s new phone feature

OpenAI has introduced a new way to access its popular ChatGPT AI by phone. Users in the United States can now call 1-800-CHATGPT to speak with ChatGPT for up to 15 minutes per month at no cost. This innovative feature is powered by OpenAI’s Realtime API and marks a move towards making AI more approachable for everyday users.

For those outside the US, OpenAI has expanded access via WhatsApp, allowing global users to interact with ChatGPT through text. The initiative is part of OpenAI’s effort to offer a simplified version of ChatGPT, providing a ‘low-cost way’ to try the service through familiar communication channels.

OpenAI has reassured users that calls will not be used to train its models, distinguishing its approach from similar past services like Google’s now-defunct GOOG-411. With this launch, OpenAI continues to bridge the gap between technology and accessibility, making conversational AI more reachable than ever.

Matchmaking apps turn to advanced AI

The world of online dating is set for a significant shake-up as companies turn to AI to enhance user experiences. Major platforms like Tinder, Hinge, and Bumble are introducing AI-powered features aimed at improving matchmaking, personalising user journeys, and offering support for daters.

Hinge, part of the Match Group, plans to launch an AI-driven dating coach next year, helping users refine profiles and navigate conversations. Similarly, Bumble’s AI safety tools and enhanced matchmaking algorithms are already shaping the dating experience. These innovations aim to move dating apps from self-service platforms to guided experiences tailored to individual needs.

Experts believe AI could reduce the frustrations of early-stage communication by identifying more compatible matches and even offering tools like AI concierges to assist with planning dates. While the integration of AI into online dating is still in its early stages, the industry is poised for transformative changes that could redefine how people connect online.

AI advances ovarian cancer detection and speeds up blood tests

AI is revolutionising medical testing, including early detection of ovarian cancer and faster identification of life-threatening infections like pneumonia. Researchers are leveraging AI to interpret complex patterns in blood tests, improving accuracy and speed in diagnosing diseases.

Dr Daniel Heller’s team at Memorial Sloan Kettering Cancer Center developed a nanotube-based blood test that uses AI to detect ovarian cancer earlier than traditional methods. Despite limited data, the technology shows promise, with further studies underway to enhance its effectiveness and expand its application.

AI is also transforming infectious disease diagnosis. California-based Karius uses AI to identify pneumonia-causing pathogens within 24 hours, cutting costs and improving treatment outcomes. Meanwhile, AstraZeneca’s Dr Slavé Petrovski developed a platform that identifies over 120 diseases from UK Biobank data. However, challenges persist, including a lack of data sharing among researchers, prompting calls for more collaborative efforts.

Overview of AI policy in 10 jurisdictions

Brazil

Summary:

Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspired by the EU’s AI Act, the bill proposes a risk-based framework, categorising AI systems as unacceptable (banned), high risk (strictly regulated), or low risk (less oversight). This effort builds on Brazil’s 2019 National AI Strategy, which emphasises ethical AI that benefits society, respects human rights, and ensures transparency. Using the OECD’s definition of AI, the bill aims to protect people while fostering innovation.

As of the time of writing, Brazil does not yet have any AI-specific regulations with the force of law. However, the country is actively working towards establishing a regulatory framework for artificial intelligence. Brazilian legislators are currently considering the Proposed AI Regulation Bill No. 2338/2023, though the timeline for its adoption remains uncertain.

Brazil’s journey toward AI regulation began with the launch of the Estratégia Brasileira de Inteligência Artificial (EBIA) in 2019. The strategy outlines the country’s vision for fostering responsible and ethical AI development. Key principles of the EBIA include:

  • AI should benefit people and the planet, contributing to inclusive growth, sustainable development, and societal well-being.
  • AI systems must be designed to uphold the rule of law, human rights, democratic values, and diversity, with safeguards in place, such as human oversight when necessary.
  • AI systems should operate robustly, safely, and securely throughout their lifecycle, with ongoing risk assessment and mitigation.
  • Organisations and individuals involved in the AI lifecycle must commit to transparency and responsible disclosure, providing information that helps:
  1. Promote general understanding of AI systems;
  2. Inform people about their interactions with AI;
  3. Enable those affected by AI systems to understand the outcomes;
  4. Allow those adversely impacted to challenge AI-generated results.

In 2020, Brazil’s Chamber of Deputies began working on Bill 21/2020, aiming to establish a Legal Framework of Artificial Intelligence. Over time, four bills were introduced before the Chamber ultimately approved Bill 21/2020.

Meanwhile, the Federal Senate established a Commission of Legal Experts to support the development of an alternative AI bill. The commission held public hearings and international seminars, consulted with global experts, and conducted research into AI regulations from other jurisdictions. This extensive process culminated in a report that informed the drafting of Bill 2338 of 2023, which aims to govern the use of AI.

Following a similar approach to the European Union’s AI Act, the proposed Brazilian bill adopts a risk-based framework, classifying AI systems into three categories:

  • Unacceptable risk (entirely prohibited),
  • High risk (subject to stringent obligations for providers), and
  • Non-high risk.

This classification aims to ensure that AI systems in Brazil are developed and deployed in a way that minimises potential harm while promoting innovation and growth.

Definition of AI 

As of the time of writing, the concept of AI adopted by the draft Bill is that adopted by the OECD: ‘An AI system is a machine-based system that can, for a given set of objectives defined by humans, make predictions, recommendations or decisions that influence real or virtual environments. AI systems are designed to operate with varying levels of autonomy.’

Canada

Summary:

Canada is progressing toward AI regulation with the proposed Artificial Intelligence and Data Act (AIDA), introduced in 2022 as part of Bill C-27. The Act focuses on regulating high-impact AI systems through compliance with existing consumer protection and human rights laws, overseen by the Minister of Innovation with support from an AI and Data Commissioner. AIDA also includes criminal provisions against harmful AI uses and will define specific regulations in consultation with stakeholders. While the framework is being finalised, a Voluntary Code of Conduct promotes accountability, fairness, transparency, and safety in generative AI development.

As of the time of writing, Canada does not yet have AI-specific regulations with the force of law. However, significant steps have been taken toward establishing a regulatory framework. In June 2022, the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022.

Bill C-27 remains under discussion and continues to progress through the legislative process. The Standing Committee on Industry and Technology (INDU) has announced that its review of the bill will remain on hold until at least February 2025.

The AIDA includes several key proposals:

  • High-impact AI systems must comply with existing Canadian consumer protection and human rights laws. Specific regulations defining these systems and their requirements will be developed in consultation with stakeholders to protect the public while minimising burdens on the AI ecosystem.
  • The Minister of Innovation, Science, and Industry will oversee the Act’s implementation, supported by an AI and Data Commissioner. Initially, this role will focus on education and assistance, but it will eventually take on compliance and enforcement responsibilities.
  • New criminal law provisions will prohibit reckless and malicious uses of AI that could harm Canadians or their interests.

In addition, Canada has introduced a Voluntary Code of Conduct for the responsible development and management of advanced generative AI systems. This code serves as a temporary measure while the legislative framework is being finalised.

The code of conduct sets out six core principles for AI developers and managers: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness. For instance, managers are responsible for ensuring that AI-generated content is clearly labelled, while developers must assess training data and address harmful biases to promote fairness and equity in AI outcomes.

Definition of AI

At its current stage of drafting, the Artificial Intelligence and Data Act provides the following definitions:

‘Artificial intelligence system is a system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions.’

‘General-purpose system is an artificial intelligence system that is designed for use, or that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes and activities not contemplated during the system’s development.’

‘Machine-learning model is a digital representation of patterns identified in data through the automated processing of the data using an algorithm designed to enable the recognition or replication of those patterns.’

India

Summary:

India is advancing its AI governance framework but currently has no binding AI regulations. Key initiatives include the 2018 National Strategy for Artificial Intelligence, which prioritises AI applications in sectors like healthcare and smart infrastructure, and the 2021 Principles for Responsible AI, which outline ethical standards such as safety, inclusivity, privacy, and accountability. Operational guidelines released later in 2021 emphasise ethics by design and capacity building. Recent developments include the 2024 India AI Mission, with over $1.25 billion allocated for infrastructure, innovation, and safe AI, and advisories addressing deepfakes and generative AI.

As of the time of this writing, no AI regulations currently carry the force of law in India. Several frameworks are being formulated to guide the regulation of AI, including:

  • The National Strategy for Artificial Intelligence released in June 2018, which aims to establish a strong basis for future regulation of AI in India and focuses on AI intervention in healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation.
  • The Principles for Responsible AI released in February 2021, which serve as India’s roadmap for creating an ethical, responsible AI ecosystem across sectors.
  • The Operationalizing Principles for Responsible AI released in August 2021, which emphasises the need for regulatory and policy interventions, capacity building, and incentivising ethics by design regarding AI.

The Principles for Responsible AI identify the following broad principles for responsible management of AI, which can be leveraged by relevant stakeholders in India:

  • The principle of safety and reliability.
  • The principle of equality.
  • The principle of inclusivity and non-discrimination.
  • The principle of privacy and security.
  • The principle of transparency.
  • The principle of accountability.
  • The principle of protection and reinforcement of positive human values.

The Ministry of Commerce and Industry has established an Artificial Intelligence Task Force, which issued a report in March 2018.

In March 2024, India announced an allocation of over $1.25 billion for the India AI Mission, which will cover various aspects of AI, including computing infrastructure capacity, skilling, innovation, datasets, and safe and trusted AI.

India’s Ministry of Electronics and Information Technology issued advisories related to deepfakes and generative AI in 2024.

Definition of AI

The Principles for Responsible AI describe AI as ‘a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sense, comprehend and act. Computer vision and audio processing can actively perceive the world around them by acquiring and processing images, sound, and speech. The natural language processing and inference engines can enable AI systems to analyse and understand the information collected. An AI system can also make decisions through inference engines or undertake actions in the physical world. These capabilities are augmented by the ability to learn from experience and keep adapting over time.’

Israel

Summary:

Israel does not yet have binding AI regulations but is advancing a flexible, principles-based framework to encourage responsible innovation. The government’s approach relies on ethical guidelines and voluntary standards tailored to specific sectors, with the potential for broader legislation if common challenges arise. Key milestones include a 2022 white paper on AI and the 2023 Policy on Artificial Intelligence Regulations and Ethics.

As of the time of this writing, no AI regulations currently carry the force of law in Israel. Israel’s approach to AI governance encourages responsible innovation in the private sector through a sector-specific, principles-based framework. This strategy uses non-binding tools, including ethical guidelines and voluntary standards, allowing for regulatory flexibility tailored to each sector’s needs. However, the policy also leaves room for the introduction of broader, horizontal legislation should common challenges arise across sectors.

A white paper on AI was published in 2022 by Israel’s Ministry of Innovation, Science and Technology in collaboration with the Ministry of Justice, followed by the Policy on Artificial Intelligence Regulations and Ethics published in 2023. The AI Policy was developed pursuant to a government resolution that tasked the Ministry of Innovation, Science and Technology with advancing a national AI plan for Israel.

Definition of AI

The AI Policy describes an AI system as having ‘a wide range of applications such as autonomous vehicles, medical imaging analysis, credit scoring, securities trading, personalised learning and employment,’ notwithstanding that ‘the list of applications is constantly expanding.’

Japan

Summary:

Japan currently has no binding AI regulations but relies on voluntary guidelines to encourage responsible AI development and use. The AI Guidelines for Business Version 1.0 promote principles like human rights, safety, fairness, transparency, and innovation, fostering a flexible governance model involving stakeholders across sectors. Recent developments include the establishment of the AI Safety Institute in 2024 and the draft ‘Basic Act on the Advancement of Responsible AI,’ which proposes legally binding rules for certain generative AI models, including vetting, reporting, and compliance standards.

At the time of this writing, no AI regulations currently carry the force of law in Japan.

The updated AI Guidelines for Business Version 1.0 are not legally binding but are expected to support and induce voluntary efforts by developers, providers and business users of AI systems through compliance with generally recognised AI principles.

The principles outlined by the AI Guidelines are:

  • Human-centric – The utilisation of AI must not infringe upon the fundamental human rights guaranteed by the constitution and international standards.
  • Safety – Each AI business actor should avoid damage to the lives, bodies, minds, and properties of stakeholders.
  • Fairness – Elimination of unfair and harmful bias and discrimination.
  • Privacy protection – Each AI business actor respects and protects privacy.
  • Ensuring security – Each AI business actor ensures security to prevent the behaviours of AI from being unintentionally altered or stopped by unauthorised manipulations.
  • Transparency – Each AI business actor provides stakeholders with information to the reasonable extent necessary and technically possible while ensuring the verifiability of the AI system or service.
  • Accountability – Each AI business actor is accountable to stakeholders to ensure traceability, conforming to common guiding principles, based on each AI business actor’s role and degree of risk posed by the AI system or service.
  • Education/literacy – Each AI business actor is expected to provide persons engaged in its business with education regarding knowledge, literacy and ethics concerning the use of AI in a socially correct manner, and provide stakeholders with education about complexity, misinformation, and possibilities of intentional misuse.
  • Ensuring fair competition – Each AI business actor is expected to maintain a fair competitive environment so that new businesses and services using AI are created.
  • Innovation – Each AI business actor is expected to promote innovation and consider interconnectivity and interoperability.

The Guidelines emphasise a flexible governance model where various stakeholders are involved in a swift and ongoing process of assessing risks, setting objectives, designing systems, implementing solutions, and evaluating outcomes. This adaptive cycle operates within different governance structures, such as corporate policies, regulatory frameworks, infrastructure, market dynamics, and societal norms, ensuring they can quickly respond to changing conditions.

The AI Strategy Council was established to explore ways to harness AI’s potential while mitigating associated risks. On May 22, 2024, the Council presented draft discussion points outlining considerations on the necessity and possible scope of future AI regulations.

A working group has proposed the ‘Basic Act on the Advancement of Responsible AI’, which would introduce a hard law approach to regulating certain generative AI foundation models. Under the proposed law, the government would designate which AI systems and developers fall under its scope and impose obligations related to the vetting, operation, and output of these systems, along with periodic reporting requirements.

Similar to the voluntary commitments made by major US AI companies in 2023, this framework would allow industry groups and developers to establish specific compliance standards. The government would have the authority to monitor compliance and enforce penalties for violations. If enacted, this would represent a shift in Japan’s AI regulation from a soft law to a more binding legal framework.

The AI Safety Institute was launched in February 2024 to examine the evaluation methods for AI safety and other related matters. The Institute is established within the Information-technology Promotion Agency, in collaboration with relevant ministries and agencies, including the Cabinet Office.

Definition of AI

The AI Guidelines define AI as an abstract concept that includes AI systems themselves as well as machine-learning software and programs.

Saudi Arabia

Summary:

Saudi Arabia has no binding AI regulations but is advancing its AI agenda through initiatives under Vision 2030, led by the Saudi Data and Artificial Intelligence Authority. The Authority oversees the National Strategy for Data & AI, which includes developing startups, training specialists, and establishing policies and standards. In 2023, SDAIA issued a draft set of AI Ethics Principles, categorising AI risks into four levels: little or no risk, limited risk, high risk (requiring assessments), and unacceptable risk (prohibited). Recent 2024 guidelines for generative AI offer non-binding advice for government and public use. These efforts are supported by a $40 billion AI investment fund.

At the time of this writing, no AI regulations currently carry the force of law in Saudi Arabia. In 2016, Saudi Arabia unveiled a long-term initiative known as Vision 2030, a bold plan spearheaded by Crown Prince Mohammed Bin Salman. 

A key aspect of this initiative was the significant focus on advancing AI, which culminated in the establishment of the Saudi Data and Artificial Intelligence Authority (SDAIA) in August 2019. This same decree also launched the Saudi Artificial Intelligence Center and the Saudi Data Management Office, both operating under SDAIA’s authority. 

SDAIA was tasked with managing the country’s AI research landscape and enforcing new policies and regulations that aligned with its AI objectives. In October 2020, SDAIA rolled out the National Strategy for Data & AI, which broadened the scope of the AI agenda to include goals such as developing over 300 AI and data-focused startups and training more than 20,000 specialists in these fields.

SDAIA was tasked by the Council of Ministers’ Resolution No. 292 with creating policies, governance frameworks, standards, and regulations for data and artificial intelligence, and with overseeing their enforcement once implemented. SDAIA issued draft AI Ethics Principles in 2023. The document enumerates seven principles with corresponding conditions necessary for their sufficient implementation: fairness, privacy and security, humanity, social and environmental benefits, reliability and safety, transparency and explainability, and accountability and responsibility.

Similar to the EU AI Act, the Principles categorise the risks associated with the development and utilisation of AI into four levels, with different compliance requirements for each:

  • Little or No Risk: Systems classified as posing little or no risk do not face restrictions, but the SDAIA recommends compliance with the AI Ethics Principles.
  • Limited Risk: Systems classified as limited risk are required to comply with the Principles.
  • High Risk: Systems classified as high risk are required to undergo both pre- and post-deployment conformity assessments, in addition to meeting ethical standards and relevant legal requirements. Such systems are noted for the significant risk they might pose to fundamental rights.
  • Unacceptable Risk: Systems classified as posing unacceptable risks to individuals’ safety, well-being, or rights are strictly prohibited. These include systems that socially profile or sexually exploit children, for instance.

On January 1, 2024, SDAIA released two sets of Generative AI Guidelines. The first is intended for government employees, while the second is aimed at the general public. 

Both documents offer guidance on the adoption and use of generative AI systems, using common scenarios to illustrate their application. They also address the challenges and considerations associated with generative AI, outline principles for responsible use, and suggest best practices. The Guidelines are not legally binding and serve as advisory frameworks.

Much of the attention surrounding Saudi Arabia’s AI advancements is driven by its large-scale investment efforts, notably a $40 billion fund dedicated to AI technology development.

Singapore

Summary:

Singapore has no binding AI regulations but promotes responsible AI through frameworks developed by the Infocomm Media Development Authority (IMDA). Key initiatives include the Model AI Governance Framework, which offers ethical guidelines for the private sector, and AI Verify, a toolkit for assessing AI systems’ alignment with these standards. The National AI Strategy and its 2.0 update emphasise fostering a trusted AI ecosystem while driving innovation and economic growth.

As of the time of this writing, no AI regulations currently carry the force of law in Singapore. Singapore’s approach to AI is largely shaped by the Infocomm Media Development Authority (IMDA), a statutory board under the Ministry of Communications and Information. IMDA plays a central role in guiding the nation’s AI policies and frameworks, and refers to itself as the ‘architect of the nation’s digital future’, highlighting its pivotal role in steering the country’s digital transformation.

In 2019, the Smart Nation and Digital Government offices introduced an extensive National AI Strategy, outlining Singapore’s goal to boost its economy and become a leader in the global AI industry. To support these objectives, the government also established a National AI Office within the Ministry to oversee the execution of its AI initiatives.

The Singapore government has developed various frameworks and tools to guide AI deployment and promote the responsible use of AI:

  • The Model AI Governance Framework, which offers comprehensive guidance to private sector entities on addressing key ethical and governance challenges in the implementation of AI technologies.
  • AI Verify, a testing framework and toolkit for AI governance developed by IMDA in collaboration with private sector partners and supported by the AI Verify Foundation (AIVF), created to help organisations assess the alignment of their AI systems with ethical guidelines through standardised evaluations.
  • The National Artificial Intelligence Strategy 2.0, which sets out Singapore’s vision and commitment to fostering a trusted and accountable AI environment while promoting innovation and economic growth through AI.

Definition of AI

The 2020 Model AI Governance Framework defines AI as ‘a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning, and, depending on the AI model, produce an output or decision (such as a prediction, recommendation and/or classification).’

The 2024 Model AI Governance Framework for Generative AI defines Generative AI as ‘AI models capable of generating text, images or other media. They learn the patterns and structure of their input training data and generate new data with similar characteristics. Advances in transformer-based deep neural networks enable Generative AI to accept natural language prompts as input, including large language models.’

Republic of Korea

Summary:

The Republic of Korea has no binding AI regulations but is actively developing its framework through the Ministry of Science and ICT and the Personal Information Protection Commission. Key initiatives include the 2019 National AI Strategy, the 2020 Human-Centered AI Ethics Standards, and the 2023 Digital Bill of Rights. Current legislative efforts focus on the proposed Act on the Promotion of AI Industry and Framework for Establishing Trustworthy AI, which adopts a ‘permit-first-regulate-later’ approach to foster innovation while addressing high-risk applications.

As of the time of this writing, no AI regulations currently carry the force of law in the Republic of Korea. However, two major institutions are actively guiding the development of AI-related policies: the Ministry of Science and ICT (MSIT) and the Personal Information Protection Commission (PIPC). While the PIPC concentrates on ensuring that privacy laws keep pace with AI advancements and emerging risks, MSIT leads the nation’s broader AI initiatives. Among these efforts is the AI Strategy High-Level Consultative Council, a collaborative platform where government and private stakeholders engage in discussions on AI governance.

The Republic of Korea has been progressively shaping its AI governance framework, beginning with the release of its National Strategy for Artificial Intelligence in December 2019. This was followed by the Human-Centered Artificial Intelligence Ethics Standards in 2020 and the introduction of the Digital Bill of Rights in May 2023. Although no comprehensive AI law exists as of yet, several AI-related legislative proposals have been introduced to the National Assembly since 2022. One prominent proposal currently under review is the Act on the Promotion of AI Industry and Framework for Establishing Trustworthy AI, which aims to consolidate earlier legislative drafts into a more cohesive approach.

Unlike the European Union’s AI Act, the Republic of Korea’s proposed legislation follows a ‘permit-first-regulate-later’ philosophy, which emphasises fostering innovation and industrial growth in AI technologies. The bill also outlines specific obligations for high-risk AI applications, such as requiring prior notifications to users and implementing measures to ensure AI systems are trustworthy and safe. The MSIT Minister announced the establishment of an AI Safety Institute at the 2024 AI Safety Summit.

Definition of AI

Under the proposed AI Act, ‘artificial intelligence’ is defined as the electronic implementation of human intellectual abilities such as learning, reasoning, perception, judgement, and language comprehension.

UAE

Summary:

The UAE currently lacks binding AI regulations but actively promotes innovation through frameworks like regulatory sandboxes, which allow real-world testing of new technologies under regulatory oversight. AI governance in the UAE is shaped by its complex jurisdictional landscape, including federal laws, Mainland UAE, and financial free zones such as DIFC and ADGM. Key initiatives include the 2017 National Strategy for Artificial Intelligence 2031, managed by the UAE AI and Blockchain Council, which focuses on fairness, transparency, accountability, and responsible AI practices. Dubai’s 2019 AI Principles and Ethical AI Toolkit emphasise safety, fairness, and explainability in AI systems. The UAE’s AI Ethics: Principles and Guidelines (2022) provide a non-binding framework balancing innovation and societal interests, supported by the beta AI Ethics Self-Assessment Tool to evaluate and refine AI systems ethically. In 2023, the UAE released Falcon 180B, an open-source large language model, and in 2024, the Charter for the Development and Use of Artificial Intelligence, which aims to position the UAE as a global AI leader by 2031 while addressing algorithmic bias, privacy, and compliance with international standards.

At the time of this writing, no AI regulations currently carry the force of law in the UAE. The regulatory landscape of the United Arab Emirates is quite complex due to its division into multiple jurisdictions, each governed by its own set of rules and, in some cases, distinct regulatory bodies. 

Broadly, the UAE can be viewed in terms of its Financial Free Zones, such as the Dubai International Financial Centre (DIFC) and the Abu Dhabi Global Market (ADGM), which operate under separate legal frameworks, and Mainland UAE, which encompasses all areas outside these financial zones. Mainland UAE is further split into non-financial free zones and the broader onshore region, where the general laws of the country apply. As the UAE is a federal state composed of seven emirates – Dubai, Abu Dhabi, Sharjah, Fujairah, Ras Al Khaimah, Ajman, and Umm Al-Quwain – each of them retains control over local matters not specifically governed by federal law. The UAE is a strong advocate for “regulatory sandboxes,” a framework that allows new technologies to be tested in real-world conditions within a controlled setting, all under the close oversight of a regulatory authority.

In 2017, the UAE appointed a Minister of State for AI, Digital Economy and Remote Work Applications and released the National Strategy for Artificial Intelligence 2031, with the aim to create the country’s AI ecosystem. The UAE Artificial Intelligence and Blockchain Council is responsible for managing the National Strategy’s implementation, including crafting regulations and establishing best practices related to AI risks, data management, cybersecurity, and various other digital matters.

The City of Dubai launched the AI Principles and Guidelines for the Emirate of Dubai in January 2019. The Principles promote fairness, transparency, accountability, and explainability in AI development and oversight. Dubai introduced an Ethical AI Toolkit outlining principles for AI systems to ensure safety, fairness, transparency, accountability, and comprehensibility.

The UAE AI Ethics: Principles and Guidelines, released in December 2022 under the Minister of State for Artificial Intelligence, provides a non-binding framework for ethical AI design and use, focusing on fairness, accountability, transparency, explainability, robustness, human-centred design, sustainability, and privacy preservation. Drafted as a collaborative, multi-stakeholder effort, the guidelines balance the need for innovation with the protection of intellectual property and invite ongoing dialogue among stakeholders. The document aims to evolve into a universal, practical, and widely adopted standard for ethical AI, aligning with the UAE National AI Strategy and the Sustainable Development Goals to ensure AI serves societal interests while upholding global norms and advancing responsible innovation.

To operationalise these principles, the UAE has introduced a beta version of its AI Ethics Self-Assessment Tool, designed to help developers and operators evaluate the ethical performance of their AI systems. This tool encourages consideration of potential ethical challenges from initial development stages to full system maintenance and helps prioritise necessary mitigation measures. While non-compulsory, it employs weighted recommendations—where ‘should’ indicates high priority and ‘should consider’ denotes moderate importance—and discourages implementation unless a minimum ethics performance threshold is met. As a beta version, the tool invites extensive user feedback and shared use cases to refine its functionality.

In 2023, the UAE, through the support of the Advanced Technology Research Council under the Abu Dhabi government, released the open-source large language model, Falcon 180B, named after the country’s national bird.

In July 2024, the UAE’s AI, Digital Economy, and Remote Work Applications Office released the Charter for the Development and Use of Artificial Intelligence. The Charter establishes a framework to position the UAE as a global leader in AI by 2031, prioritising human well-being, safety, inclusivity, and fairness in AI development. It addresses algorithmic bias, ensures transparency and accountability, and emphasises innovation while safeguarding community privacy in line with UAE data standards. The Charter also highlights the need for ethical oversight and compliance with international treaties and local regulations to ensure AI serves societal interests and upholds fundamental rights.

Definition of AI

The AI Office has defined AI as ‘systems or machines that mimic human intelligence to perform tasks and can iteratively improve themselves based on the data they collect’ in the 2023 AI Adoption Guideline in Government Services.

UK

Summary:

The UK currently has no binding AI regulations but adopts a principles-based framework allowing sector-specific regulators to govern AI development and use within their domains. Key principles outlined in the 2023 White Paper: A Pro-Innovation Approach to AI Regulation include safety, transparency, fairness, accountability, and contestability. The UK’s National AI Strategy, overseen by the Office for Artificial Intelligence, aims to position the country as a global AI leader by promoting innovation and aligning with international frameworks. Recent developments, including proposed legislation for advanced AI models and the Digital Information and Smart Data Bill, signal a shift toward more structured regulation. The UK solidified its leadership in AI governance by hosting the 2023 Bletchley Summit, where 28 countries committed to advancing global AI safety and responsible development.

As of the time of this writing, no AI regulations currently carry the force of law in the UK. The UK supports a principles-based framework for existing sector-specific regulators to interpret and apply to the development and use of AI within their domains. The UK aims to position itself as a global leader in AI by establishing a flexible regulatory framework that fosters innovation and growth in the sector. In 2022, the Government issued an AI Regulation Policy Paper, followed in 2023 by a White Paper titled ‘A Pro-Innovation Approach to AI Regulation’.

The White Paper lists five key principles designed to ensure responsible AI development: 

  1. Safety, Security, and Robustness. 
  2. Appropriate Transparency and Explainability.
  3. Fairness.
  4. Accountability and Governance.
  5. Contestability and Redress.

The UK Government set up an Office for Artificial Intelligence to oversee the implementation of the UK’s National AI Strategy, adopted in September 2021. The Strategy recognises the power of AI to increase resilience, productivity, growth and innovation across the private and public sectors, and sets out a plan for the next decade to position the UK as a world leader in artificial intelligence. The Office will perform various central functions to support the framework’s implementation, including:

  1. monitoring and evaluating the overall efficacy of the regulatory framework;
  2. assessing and monitoring risks across the economy arising from AI;
  3. promoting interoperability with international regulatory frameworks.

Shifting away from this flexible regulatory approach, the King’s Speech delivered by King Charles III in July 2024 announced plans to enact legislation requiring developers of the most advanced AI models to meet specific standards. The announcement also included the Digital Information and Smart Data Bill, which will reform data-related laws to ensure the safe development and use of emerging technologies, including AI. The details of how these measures will be implemented remain unclear.

In November 2023, the UK hosted the Bletchley Summit, positioning itself as a leader in fostering international collaboration on AI safety and governance. At the Summit, a landmark declaration was signed by 28 countries, committing to collaborate on managing the risks of frontier AI technologies, ensuring AI safety, and advancing responsible AI development and governance globally.

Definition of AI

The White Paper describes AI as ‘products and services that are “adaptable” and “autonomous”’.

Geothermal energy startups rise as tech giants seek clean power for AI

Geothermal energy is gaining momentum as Big Tech companies like Meta and Google turn to it to power their energy-hungry AI data centres. Startups such as Fervo Energy and Sage Geosystems are partnering with these firms to harness geothermal’s promise of carbon-free, reliable electricity. Unlike wind and solar, geothermal energy offers consistent power, though it faces challenges like high drilling costs and long approval timelines.

Oil and gas companies are also showing interest. Devon Energy and other mid-sized producers are investing in geothermal to meet their own energy needs. However, major oil players like Chevron and Exxon Mobil remain focused on natural gas, promoting it alongside carbon capture technology to reduce emissions.

Interest in geothermal is expanding, particularly in Texas, where abundant resources and streamlined regulations attract new projects. More than 60 geothermal startups have emerged in recent years, supported by improving investment conditions and bipartisan legislative initiatives like the CLEAN Act and HEATS Act. If these bills pass, they could further boost the sector by simplifying project approvals.

With geothermal’s competitive costs—averaging $64 per megawatt-hour—it may become a key part of a diverse energy mix. As AI-driven data centres grow, the demand for clean and consistent power is driving geothermal’s rise, offering a potential alternative to traditional fossil fuels.

Boon secures $20.5M to enhance AI tools for logistics

AI-powered logistics startup Boon has raised $20.5 million to revolutionise fleet and logistics operations. The funding, led by Marathon and Redpoint, includes $15.5 million from a Series A round and a previously undisclosed $5 million seed investment. The platform aims to streamline operations and improve efficiency by unifying data from diverse applications.

Boon targets inefficiencies in the logistics industry, particularly among small and medium-sized enterprises managing over 60 million fleet vehicles globally. Current tools, often fragmented across 15 to 20 applications, create administrative burdens. Boon’s AI agent addresses these challenges by automating processes, optimising workflows, and providing actionable insights.

Founder Deepti Yenireddy drew on her experience at fleet operations giant Samsara to design Boon. She assembled a team of experts from Apple, DoorDash, Google, and other leading firms to develop the platform. Boon plans to use the funding to expand its offerings, covering areas like container loading and staffing optimisation.

Early results have been promising. With paying customers representing 35,000 drivers and 10,000 vehicles, Boon reached an annual revenue run rate of $1 million within nine months. The company is hiring to accelerate growth and broaden its impact on the logistics sector.

IGF 2024 and the future of AI, digital divides, and internet governance

Dear readers,

It has been a busy week, as the Internet Governance Forum (IGF) 2024 has been at the centre of attention for Diplo and the entire digital governance community, addressing the most pressing digital issues of our time: the rapid evolution of AI, the persistent digital divide, and the delicate balance of the governance processes reshaping the world. On 15–19 December, Diplo was closely involved in IGF 2024, held this year in Riyadh, Saudi Arabia, reporting on the discussions and contributing its knowledge to shape a human-centred digital future.

The forum brought together experts, policymakers, and stakeholders from around the globe, and discussions highlighted three dominant themes: AI governance, bridging the digital divide, and enhancing cybersecurity, underscoring the need for inclusive solutions and forward-thinking strategies.

AI governance

AI took centre stage, as expected, with debates on governance, ethics, and its societal impact. Discussions explored a multifaceted approach, combining international regulatory frameworks, voluntary industry commitments, and bottom-up governance models sensitive to local contexts. The Council of Europe’s Framework Convention on AI and the G7 Hiroshima AI Process were spotlighted as global initiatives striving to balance innovation and the protection of human rights.

The potential of AI to deepen inequalities was another focal point, with calls to address AI divides between developed and developing nations. Discussions stressed the importance of building local AI ecosystems, promoting capacity development in the Global South, and ensuring equitable access to AI infrastructure. As concerns about AI transparency and accountability grew, frameworks like the ethical principles of the Digital Cooperation Organisation (DCO) offered pathways to mitigate AI’s societal risks.

Diplo’s contribution to IGF 2024

Dr Jovan Kurbalija, Director of Diplo, approached the IGF in Riyadh with a historical perspective on AI’s roots in the Islamic Golden Age. He underscored the contributions of Islamic mathematicians and Islamic culture, which lie at the foundations of the digital world.

In the ‘Intelligent machines and society: An open-ended conversation’ session led by Diplo experts, attendees had the opportunity to explore AI’s profound philosophical, ethical, and practical implications, focusing on its impact on human identity, agency, and communication. Kurbalija introduced the concept of the ‘right to human imperfection’, urging the preservation of human flaws and agency amid AI-driven optimisation. 

Another leading expert and Director of Knowledge at Diplo, Sorina Teleanu, warned against the anthropomorphisation of AI and highlighted the risks surrounding brain data processing and questions of AI personhood, particularly with the emergence of artificial general intelligence (AGI). 

Diplo ‘Unpacking the Global Digital Compact’

Sorina’s recent publication, Unpacking the Global Digital Compact: Actors, Issues and Processes, presented at the IGF, provides a detailed account of the GDC negotiations over an 18-month process, tracking and analysing changes across different versions of GDC drafts. The publication presents a unique interplay between zooming in on specific provisions, sometimes on the edge of linguistic pedantry, and zooming out to provide a broader perspective on digital governance and cooperation. The publication also places the GDC in the broader context of global digital governance and cooperation mechanisms. It offers a set of questions to reflect on as stakeholders explore the interplay between the processes, implementation, and follow-up of the GDC, WSIS, and Agenda 2030.

The panel also addressed AI governance, with Kurbalija advocating for decentralised development to prevent power centralisation, while Henri-Jean Pollet from ISPA Belgium stressed open-source models to ensure reliability. The evolving human-AI dynamic was discussed, including changes in communication and the need for AI ethics education, as raised by Mohammad Abdul Haque Anu. Kurbalija underscored Diplo’s focus on AI tools that augment human knowledge without replacing decision-making, ending the session with a call for continued exploration of the role of AI in shaping the future of humanity.

Digital divides: meaningful connectivity and inclusion

The persistent digital divide remained a complex challenge, with one-third of the global population still offline. IGF discussions moved beyond simple access, championing the concept of ‘meaningful connectivity’, which ensures a safe, productive, and enriching online experience. Targeted investments in rural infrastructure, unlicensed spectrum use, and satellite technology like low Earth orbit (LEO) satellites were proposed as solutions to connect underserved communities.

Gender disparities also took the spotlight. Statistics revealed stark inequalities, with women representing just 10% of executive roles in tech. Speakers called for mentorship programmes, cultural sensitivity, and capacity development to increase women’s participation in digital spaces. Examples like India’s Unified Payments Interface and Brazil’s PIX system showcased how digital public infrastructure (DPI) can bridge economic gaps, provided such systems include robust consumer protections and digital literacy programmes.

Cybersecurity: resilience in a complex landscape

Cybersecurity sessions underscored the growing sophistication of cyber threats and the need for resilient digital infrastructure. Discussions called for universal cybersecurity standards flexible enough to adapt to diverse local contexts, while AI was recognised as both a solution and a risk for cybersecurity. AI enhances threat detection and automates responses, yet its vulnerabilities—like adversarial attacks and data poisoning—pose significant challenges.

Developing countries’ struggles to build cyber resilience were a recurring concern. Panellists emphasised capacity development, existing framework implementation, and tailored strategies. Cyber diplomacy emerged as a crucial tool, particularly in regions like Africa and the Middle East, where greater participation in global negotiations is needed to shape cyber norms and ensure equitable protections.

Content governance and environmental sustainability

The complexities of content moderation in diverse cultural contexts raised critical questions. While AI offers potential solutions for content moderation, its ethical implications and biases remain unresolved. Disinformation was another urgent issue, with experts advocating for digital literacy, fact-checking initiatives, and multistakeholder collaborations to preserve democratic integrity.

Sustainability was intertwined with digital policy discussions, as the environmental impact of AI, e-waste, and data infrastructure came into focus. The digital sector’s 4% contribution to global emissions sparked calls for sustainable IT procurement, circular economy strategies, and greener AI standards. Harnessing AI to achieve the sustainable development goals (SDGs) was also discussed, with its potential to accelerate progress through real-time data analysis and climate prediction.

Looking ahead: local realities and global cooperation

Looking ahead, discussions stressed the importance of multistakeholder cooperation in translating global frameworks like WSIS+20 and the Global Digital Compact into actionable local policies. In Riyadh, IGF 2024 reinforced that tackling digital challenges, from AI ethics to digital divides, requires a nuanced, multifaceted, holistic, and inclusive approach. The forum served as a sounding board for innovative ideas and a call to action: to build an equitable, sustainable, secure digital future for all.

Related news:

Jovan Kurbalija, Director of Diplo, stressed the importance of understanding fundamental AI concepts to facilitate deeper conversations beyond the usual concerns about bias and ethics.

In other news…

Norway to host the 2025 Internet Governance Forum

Norway has been selected by the UN to host the 2025 Internet Governance Forum (IGF), marking a significant milestone as the largest UN meeting ever held in the country.

Musk faces scrutiny over national security concerns

Elon Musk and his company SpaceX are facing multiple federal investigations into their compliance with security protocols designed to protect national secrets.

Visit dig.watch now for more detailed info on IGF 2024 sessions, related updates, and other topics!

Marko and the Digital Watch team


Highlights from the week of 13–20 December 2024

The forum, under the theme ‘Building our multistakeholder digital future’, will explore four key areas: harnessing innovation while managing risks, enhancing digital contributions to peace and development, advancing human rights…

Experts from government, international bodies, and the private sector highlighted social media platforms as primary sources of rapidly spreading misinformation…

The session included interactive exercises and highlighted the necessity of a multistakeholder approach to address global disparities in AI technology distribution…

Digital identity systems were deemed essential infrastructure for economic inclusion.

TikTok and ByteDance sought more time from the US Court of Appeals to argue their case at the Supreme Court, but this request was denied.

UN leaders at IGF 2024 explored digital transformation, showcasing refugee-focused apps, child data rights frameworks, and blockchain security systems. Panellists stressed collaboration, inclusion, and ethical technology use for sustainable progress.

Gender-based harassment and marginalisation were key themes at IGF 2024’s forum on journalist safety online.

The session focused on the potential of open-source large language models (LLMs) to democratise access to AI, particularly in fostering innovation and empowering smaller economies and the Global South.

Experts at IGF 2024 raised concerns over vague provisions in the UN Cybercrime Treaty threatening freedoms worldwide.

The discussion highlighted the importance of baseline cybersecurity measures, such as asset inventory and vulnerability management, and emphasised employee training and awareness.

Panellists from diverse sectors and regions discussed the significant challenges of misinformation, disinformation, and emerging technologies such as AI and deepfakes, which threaten democratic processes.


Reading corner


ByteDance, TikTok’s parent company, must divest TikTok’s US operations by 19 January 2025 or face a ban in the country.