Artificial intelligence

ChatGPT has taken the world by storm in the last few months. It has been covered extensively in the media, has become a topic of family dinner discussions, and has sparked new debates on the good and bad of AI. Questions that have been asked before now seem more pressing than ever.

Will AI replace us? Are our societies ready to embrace the good that AI – ChatGPT included – has to offer while minimising the bad? Shall we pause AI developments, as Elon Musk, Yuval Harari, and others have called for in their Open Letter? And who should answer governance and policy calls on AI: the UN, the US Congress, the European Parliament, or someone else?

As governments, international organisations, experts, businesses, users, and others explore these and similar questions, our coverage of AI technology and policy is meant to help you stay up-to-date with developments in this field, grasp their meaning, and separate hype from reality.

About AI: A brief introduction


Artificial intelligence (AI) might sound like something from a science fiction movie in which robots are ready to take over the world. While such robots are purely fixtures of science fiction (at least for now), AI is already part of our daily lives, whether we know it or not.

Think of your Gmail inbox: some of the emails you receive end up in your spam folder, while others are marked as ‘social’ or ‘promotions’. How does this happen? Google uses AI algorithms to automatically filter and sort emails into categories. These algorithms can be seen as small programs trained to recognise certain elements within an email that make it likely to be, for example, a spam message. When the algorithm identifies one or several of those elements, it marks the email as spam and sends it to your spam folder. Of course, algorithms do not work perfectly, but they are continuously improved: when you find a legitimate email in your spam folder, you can tell Google that it was wrongly marked as spam, and Google uses that information to improve how its algorithms work.
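
For a flavour of how such a filter can work under the hood, here is a minimal, illustrative sketch of a text classifier trained on labelled examples. The phrases and labels are invented, and Gmail’s actual models are far more sophisticated.

```python
# A toy spam filter: learn from labelled examples, then classify new mail.
# All training phrases and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = ['win a free prize now', 'cheap pills online', 'claim your reward',
          'lunch tomorrow?', 'meeting notes attached', 'see you at the match']
labels = ['spam', 'spam', 'spam', 'ham', 'ham', 'ham']

spam_filter = make_pipeline(TfidfVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)                      # 'train' on known examples

print(spam_filter.predict(['free prize online']))    # expected: ['spam']
# User feedback ('not spam') can be appended to the training data and the
# model refit - one simple way such a filter keeps improving over time.
```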

AI is widely used in internet services: search engines use AI to provide better search results; social media platforms rely on AI to automatically detect hate speech and other forms of harmful content; and online stores use AI to suggest products you are likely to be interested in based on your previous shopping habits. More complex forms of AI are used in manufacturing, transportation, agriculture, healthcare, and many other areas. Self-driving cars, programs able to recognise certain medical conditions with the accuracy of a doctor, systems developed to track and predict the impact of weather conditions on crops – they all rely on AI technologies.

As the name suggests, AI systems are embedded with some level of ‘intelligence’, which makes them capable of performing certain tasks or replicating certain specific behaviours that normally require human intelligence. What makes them ‘intelligent’ is a combination of data and algorithms. Let’s look at an example that involves a technique called machine learning. Imagine a program able to recognise cars among millions of images. First, the program is fed a large number of car images. Algorithms then ‘study’ those images to discover patterns, in particular the specific elements that characterise the image of a car. Through machine learning, the algorithms ‘learn’ what a car looks like. Later on, when presented with millions of different images, they are able to identify the ones that contain a car. This is, of course, a simplified example – there are far more complex AI systems out there. But essentially all of them involve some initial training data and an algorithm that learns from that data in order to perform a task.
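
As a toy illustration of this train-then-recognise pattern, the sketch below fits a simple classifier on made-up feature vectors standing in for photos. Real systems learn from millions of raw pixels with far more powerful models.

```python
# Each 'image' here is a hypothetical feature vector (e.g. wheel-like shapes,
# window-like shapes, greenery) standing in for an actual photo.
from sklearn.neighbors import KNeighborsClassifier

car_images     = [[0.9, 0.8, 0.1], [0.85, 0.9, 0.2], [0.95, 0.7, 0.15]]
not_car_images = [[0.1, 0.2, 0.9], [0.2, 0.1, 0.8], [0.15, 0.3, 0.95]]

X = car_images + not_car_images
y = ['car'] * 3 + ['not car'] * 3

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)                            # the algorithm 'studies' the examples

print(model.predict([[0.9, 0.75, 0.2]]))   # a new image; expected: ['car']
```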

Some AI systems go beyond this, learning from their own experience and improving themselves. One famous example is DeepMind’s AlphaGo Zero: the program initially knows only the rules of the game of Go; it then plays against itself and learns from its successes and failures to become better and better.
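
For a flavour of how self-play works, here is a heavily simplified, illustrative sketch (not DeepMind’s method): a tabular agent that knows only the rules of tic-tac-toe and improves by nudging its estimate of each move’s value towards the final result of every game it plays against itself.

```python
# Self-play on tic-tac-toe with a simple Monte Carlo-style value update.
import random
from collections import defaultdict

Q = defaultdict(float)        # learned value of each (board, move) pair
ALPHA, EPSILON = 0.5, 0.1     # learning rate and exploration rate

def moves(board):
    return [i for i, c in enumerate(board) if c == ' ']

def winner(board):
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def choose(board):
    if random.random() < EPSILON:                        # occasionally explore
        return random.choice(moves(board))
    return max(moves(board), key=lambda m: Q[(board, m)])

def play_one_game():
    board, player, history = ' ' * 9, 'X', []
    while moves(board) and not winner(board):
        m = choose(board)
        history.append((board, m, player))
        board = board[:m] + player + board[m+1:]
        player = 'O' if player == 'X' else 'X'
    w = winner(board)
    for state, move, p in history:                       # learn from the outcome
        reward = 0 if w is None else (1 if p == w else -1)
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

for _ in range(50_000):       # the agent plays both sides, improving as it goes
    play_one_game()
print('learned values for', len(Q), 'board-move pairs')
```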

Going back to where we started: is AI really able to match human intelligence? In specific cases – like playing the game of Go – the answer is ‘yes’. That said, what has been coined ‘artificial general intelligence’ (AGI) – advanced AI systems able to replicate human intellectual capabilities across complex and combined tasks – does not yet exist. Experts are divided on whether AGI is something we will see in the near future, but it is certain that scientists and tech companies will continue to develop ever more complex AI systems.


The policy implications of AI

Applying AI for social good is a principle that many tech companies have adhered to. They see AI as a tool that can help address some of the world’s most pressing problems, in areas such as climate change and disease eradication. The technology and its many applications certainly carry significant potential for good, but there are also risks. Accordingly, the policy implications of AI advancements are far-reaching. While AI can generate economic growth, there are growing concerns over the significant disruptions it could bring to the labour market. Issues related to privacy, safety, and security are also in focus.

As innovations in the field continue, more and more AI standards and AI governance frameworks are being developed to help ensure that AI applications have minimal unintended consequences.


Social and economic

AI has significant potential to stimulate economic growth and contribute to sustainable development. But it also comes with disruptions and challenges.


Safety and security

AI applications bring into focus issues related to cybersecurity (from cybersecurity risks specific to AI systems to AI applications in cybersecurity), human safety, and national security.


Human rights

The uptake of AI raises profound implications for privacy and data protection, freedom of expression, freedom of assembly, non-discrimination, and other human rights and freedoms.


Ethical concerns

The involvement of AI algorithms in judgments and decision-making gives rise to concerns about ethics, fairness, justice, transparency, and accountability.

Governing AI

When debates on AI governance first emerged, one overarching question was whether AI-related challenges (in areas such as safety, privacy, and ethics) call for new legal and regulatory frameworks, or whether existing ones could be adapted to also cover AI. 

Applying and adapting existing regulation was seen by many as the most suitable approach. But as AI innovation accelerated and applications became more and more pervasive, AI-specific governance and regulatory initiatives started emerging at national, regional, and international levels.


US Blueprint for an AI Bill of Rights

The Blueprint for an AI Bill of Rights is a guide for a society that protects people from AI threats and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by From Principles to Practice—a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualising these principles in the technological design process. 


China’s Interim Measures for Generative Artificial Intelligence

Released in July 2023 and applicable starting 15 August 2023, the measures apply to ‘the use of generative AI to provide services for generating text, pictures, audio, video, and other content to the public in the People’s Republic of China’. The regulation covers issues related to intellectual property rights, data protection, transparency, and data labelling, among others.


EU’s AI Act

Proposed by the European Commission in April 2021 and currently under negotiation among the EU institutions, the draft AI regulation introduces a risk-based regulatory approach for AI systems: systems posing unacceptable risks are banned; high-risk systems (for instance, AI used in performing surgeries) are strictly regulated; and systems involving only limited risks mainly face transparency requirements towards end users.


UNESCO Recommendation on AI Ethics

Adopted by UNESCO member states in November 2021, the recommendation outlines a series of values, principles, and actions to guide states in the formulation of their legislation, policies, and other instruments regarding AI. For instance, the document calls for action to guarantee individuals more privacy and data protection, by ensuring transparency, agency, and control over their personal data. Explicit bans on the use of AI systems for social scoring and mass surveillance are also highlighted, and there are provisions for ensuring that real-world biases are not replicated online.


OECD Recommendation on AI

Adopted by the OECD Council in May 2019, the recommendation encourages countries to promote and implement a series of principles for responsible stewardship of trustworthy AI, from inclusive growth and human-centred values to transparency, security, and accountability. Governments are further encouraged to invest in AI research and development, foster digital ecosystems for AI, shape enabling policy environments, build human capacities, and engage in international cooperation for trustworthy AI.


Council of Europe work on a Convention on AI and human rights

In 2021, the Committee of Ministers of the Council of Europe (CoE) approved the creation of a Committee on Artificial Intelligence (CAI), tasked with elaborating a legal instrument on the development, design, and application of AI systems that is based on the CoE’s standards on human rights, democracy, and the rule of law, and conducive to innovation. Since 2022, the CAI has been working on a [Framework] Convention on AI, Human Rights, Democracy and the Rule of Law.


Group of Governmental Experts on Lethal Autonomous Weapons Systems

Within the UN system, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems (LAWS) to explore the technical, military, legal, and ethical implications of LAWS. The group has convened annually since its creation. In 2019, it agreed on a series of Guiding Principles which, among other issues, confirmed that international humanitarian law applies to the potential development and use of LAWS, and highlighted that human responsibility must be retained for decisions on the use of weapons systems.


Global Partnership on Artificial Intelligence

Launched in June 2020 and counting 29 members as of 2023, the Global Partnership on Artificial Intelligence (GPAI) is a multistakeholder initiative dedicated to ‘sharing multidisciplinary research and identifying key issues among AI practitioners, with the objective of facilitating international collaboration, reducing duplication, acting as a global reference point for specific AI issues, and ultimately promoting trust in and the adoption of trustworthy AI’.

AI standards as a bridge between technology and policy

Despite their technical nature – or rather because of it – standards have an important role to play in bridging technology and policy. In the words of three major standards development organisations (SDOs), standards can ‘underpin regulatory frameworks and […] provide appropriate guardrails for responsible, safe and trustworthy AI development’. As hard regulations are being shaped to govern the development and use of AI, standards are increasingly seen as a mechanism for demonstrating compliance with legal provisions.

Standards for AI are currently developed within a wide range of SDOs at national, regional, and international levels. In the EU, for instance, the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI) are working on AI standards to complement the upcoming AI Act. At the International Telecommunication Union (ITU), several study groups and focus groups within the Telecommunication Standardization Sector (ITU-T) are carrying out standardisation and pre-standardisation work on issues as diverse as AI-enabled multimedia applications, AI for health, and AI for natural disaster management. And the Joint Technical Committee 1 on Information Technology – a joint initiative of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) – has a subcommittee (SC 42) dedicated to AI standards.


National AI strategies

As AI technologies continue to evolve at a fast pace and find more and more applications in various areas, countries are increasingly aware that they need to keep up with this evolution and take advantage of it. Many are developing national AI development strategies, as well as addressing the economic, social, and ethical implications of AI advancements. China, for example, released a national AI development plan in 2017, intended to help make the country the world leader in AI by 2030 and build a national AI industry worth US$150 billion. In the United Arab Emirates (UAE), the adoption of a national AI strategy was complemented by the appointment of a State Minister for AI to work on ‘making the UAE the world’s best prepared [country] for AI and other advanced technologies’. Canada, France, Germany, and Mauritius were among the first countries to launch national AI strategies. These are only a few examples; many more countries have adopted or are working on such plans and strategies.


In depth: Africa and artificial intelligence

Africa is taking steps towards a faster uptake of AI, and AI-related investments and innovation are advancing across the continent. Governments are adopting national AI strategies, regional and continental organisations are exploring the same, and there is increasing participation in global governance processes focused on various aspects of AI.


AI at the international level

The Council of Europe, the EU, the OECD, and UNESCO are not the only international spaces where AI-related issues are discussed; the technology and its policy implications now feature on the agendas of a wide range of international organisations and processes. Technical standards for AI are being developed at the ITU, ISO, IEC, and other standard-setting bodies. The ITU also hosts an annual AI for Good summit exploring the use of AI to accelerate progress towards sustainable development. UNICEF has begun working on using AI to realise and uphold children’s rights, while the International Labour Organization (ILO) is looking at the impact of AI automation on the world of work. The World Intellectual Property Organization (WIPO) is discussing intellectual property issues related to the development of AI, the World Health Organization (WHO) looks at the applications and implications of AI in healthcare, and the World Meteorological Organization (WMO) has been using AI in weather forecasting, natural hazard management, and disaster risk reduction.

As discussions on digital cooperation have advanced at the UN level, AI has been one of the topics addressed within this framework. The 2019 report of the UN High-Level Panel on Digital Cooperation tackles issues such as the impact of AI on labour markets, AI and human rights, and the impact of the misuse of AI on trust and social cohesion. The UN Secretary-General’s Roadmap on Digital Cooperation, issued in 2020, identifies gaps in international coordination, cooperation, and governance when it comes to AI. The Our Common Agenda report released by the Secretary-General in 2021 proposes the development of a Global Digital Compact (with principles for ‘an open, free and secure digital future for all’) which could, among other elements, promote the regulation of AI ‘to ensure that it is aligned with shared global values’. 

AI and its governance dimensions have featured high on the agenda of bilateral and multilateral processes such as the EU-US Trade and Technology Council, G7, G20, and BRICS. Regional organisations such as the African Union (AU), the Association of Southeast Asian Nations (ASEAN), and the Organization of American States (OAS) are also paying increasing attention to leveraging the potential of AI for economic growth and sustainable development.

In recent years, annual meetings of the Internet Governance Forum (IGF) have featured AI among their main themes.


More on the policy implications of AI

The economic and social implications of AI

AI has significant potential to stimulate economic growth. In production processes, AI systems increase automation and make processes smarter, faster, and cheaper, thereby bringing savings and increased efficiency. AI can improve the efficiency and quality of existing products and services, and can also generate new ones, thus leading to the creation of new markets. It is estimated that the AI industry could contribute up to US$15.7 trillion to the global economy by 2030. Beyond the economic potential, AI can also contribute to achieving the sustainable development goals (SDGs); for instance, AI can be used to detect water service lines containing hazardous substances (SDG 6 – clean water and sanitation), to optimise the supply and consumption of energy (SDG 7 – affordable and clean energy), and to analyse climate change data and generate climate modelling, helping to predict and prepare for disasters (SDG 13 – climate action). Across the private sector, companies have been launching programmes dedicated to fostering the role of AI in achieving sustainable development. Examples include IBM’s Science for Social Good, Google’s AI for Social Good, and Microsoft’s AI for Good projects.

For this potential to be fully realised, there is a need to ensure that the economic benefits of AI are broadly shared at a societal level, and that the possible negative implications are adequately addressed. The 2022 edition of the Government AI Readiness Index warns that ‘care needs to be taken to make sure that AI systems don’t just entrench old inequalities or disenfranchise people. In a global recession, these risks are evermore important.’ One significant risk is that of a new form of global digital divide, in which some countries reap the benefits of AI, while others are left behind. Estimates for 2030 show that North America and China will likely experience the largest economic gains from AI, while developing countries – with lower rates of AI adoption – will register only modest economic increases.

The disruptions that AI systems could bring to the labour market are another source of concern. Many studies estimate that automated systems will make some jobs obsolete and lead to unemployment. Such concerns have led to discussions about introducing a ‘universal basic income’ to compensate individuals for disruptions brought to the labour market by robots and other AI systems. There are, however, also opposing views, according to which AI advancements will generate new jobs that compensate for those lost, without affecting overall employment rates. One point on which there is broad agreement is the need to better adapt education and training systems to the new requirements of the job market. This entails not only preparing new generations, but also allowing the current workforce to re-skill and up-skill.


AI, safety, and security

AI applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the need to design systems that can properly react to unforeseen situations with minimal unintended consequences. Beyond self-driving cars, the (potential) development of other autonomous systems – such as lethal autonomous weapons systems – has sparked additional and intense debates on their implications for human safety.

AI also has implications in the cybersecurity field. In addition to the cybersecurity risks associated with AI systems themselves (as AI is increasingly embedded in critical systems, these systems need to be secured against potential cyberattacks), the technology has a dual use: it can be employed both to commit and to prevent cybercrime and other forms of cyberattacks. As the possibility of using AI to assist in cyberattacks grows, so does the integration of the technology into cybersecurity strategies. The same characteristics that make AI a powerful tool for perpetrating attacks also help defend against them, raising hopes of levelling the playing field between attackers and cybersecurity experts.
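
On the defensive side, a common technique is anomaly detection: a model learns what normal activity looks like and flags deviations. Below is a minimal, illustrative sketch using an isolation forest; the ‘telemetry’ features (bytes transferred, failed logins, hour of day) and their values are invented for the example.

```python
from sklearn.ensemble import IsolationForest
import numpy as np

# synthetic 'normal' traffic: [bytes transferred, failed logins, hour of day]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[500, 1, 14], scale=[100, 1, 3], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

event = np.array([[50_000, 30, 3]])   # huge transfer, many failed logins, 3 a.m.
print(detector.predict(event))        # expected: [-1], i.e. flagged as anomalous
```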

Going a step further, AI is also looked at from the perspective of national security. The US Intelligence Community, for example, has included AI among the areas that could generate national security concerns, especially due to its potential applications in warfare and cyber defence, and its implications for national economic competitiveness.


AI and human rights

AI systems work with enormous amounts of data, and this raises concerns regarding privacy and data protection. Online services such as social media platforms, e-commerce stores, and multimedia content providers collect information about users’ online habits, and use AI techniques such as machine learning to analyse the data and ‘improve the user’s experience’ (for example, Netflix suggests movies you might want to watch based on movies you have already seen). AI-powered products such as smart speakers also involve the processing of user data, some of it personal in nature. Facial recognition technologies embedded in public street cameras have direct privacy implications.
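
To make the Netflix-style example concrete, below is a minimal sketch of one family of techniques behind such suggestions – collaborative filtering, i.e. recommending what similar users liked. The ratings matrix is invented for illustration, and production recommenders are far more sophisticated.

```python
# Toy collaborative filtering: score unseen items by the tastes of similar users.
import numpy as np

# rows = users, columns = movies; 0 means 'not yet watched' (invented data)
ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 1, 0],
                    [1, 0, 5, 4]], dtype=float)

def recommend(user):
    # cosine similarity between this user and every other user
    sims = ratings @ ratings[user] / (
        np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user]) + 1e-9)
    sims[user] = 0                           # ignore self-similarity
    scores = sims @ ratings                  # weight others' ratings by similarity
    scores[ratings[user] > 0] = -np.inf      # don't re-suggest watched movies
    return int(np.argmax(scores))

print('suggest movie', recommend(0))         # user 0 resembles user 1 -> movie 2
```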

How is all of this data processed? Who has access to it and under what conditions? Are users even aware that their data is extensively used? These are only some of the questions generated by the increased use of personal data in the context of AI applications. What solutions are there to ensure that AI advancements do not come at the expense of user privacy? Strong privacy and data protection regulations (including in terms of enforcement), enhanced transparency and accountability for tech companies, and embedding privacy and data protection guarantees into AI applications during the design phase are some possible answers.

Algorithms, which power AI systems, could also have consequences for other human rights. For example, AI tools aimed at automatically detecting and removing hate speech from online platforms could negatively affect freedom of expression: even when such tools are trained on large amounts of data, the algorithms can wrongly identify a text as hate speech. Complex algorithms and biased big datasets can also serve to reinforce and amplify discrimination, especially against those who are already disadvantaged.
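
A toy sketch of how such over-blocking arises: a classifier that has only ever seen the word ‘hate’ in abusive posts will also flag harmless sentences containing it. The training sentences and labels below are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

posts  = ['I hate you', 'you are awful people', 'hate hate hate',
          'have a nice day', 'the weather is lovely', 'thanks for your help']
labels = [1, 1, 1, 0, 0, 0]                     # 1 = hate speech, 0 = acceptable

moderator = make_pipeline(CountVectorizer(), MultinomialNB()).fit(posts, labels)
print(moderator.predict(['I hate broccoli']))   # likely [1]: a false positive
```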


Ethical concerns

As AI algorithms involve judgements and decision-making – replicating similar human processes – concerns are being raised regarding ethics, fairness, justice, transparency, and accountability. The risk of discrimination and bias in decisions made by or with the help of AI systems is one such concern, as illustrated by the debate over facial recognition technology (FRT). Several studies have shown that FRT programs present racial and gender biases, as the algorithms involved are largely trained on photos of males and of white people. If law enforcement agencies rely on such technologies, this could lead to biased and discriminatory decisions, including false arrests.
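
One basic step towards uncovering such bias is to measure a system’s accuracy separately for each demographic group rather than only in aggregate. Below is a minimal sketch of such an audit; the groups, labels, and predictions are invented for illustration.

```python
from collections import defaultdict

# (group, true_label, model_prediction) triples - illustrative values only
records = [
    ('group_a', 1, 1), ('group_a', 0, 0), ('group_a', 1, 1), ('group_a', 0, 0),
    ('group_b', 1, 0), ('group_b', 0, 1), ('group_b', 1, 1), ('group_b', 0, 0),
]

tally = defaultdict(lambda: [0, 0])              # group -> [correct, total]
for group, truth, prediction in records:
    tally[group][0] += int(truth == prediction)
    tally[group][1] += 1

for group, (correct, total) in tally.items():
    print(f'{group}: accuracy {correct / total:.0%}')  # unequal rates signal bias
```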

One way of addressing concerns over AI ethics could be to combine ethical training for technologists (encouraging them to prioritise ethical considerations when creating AI systems) with the development of technical methods for designing AI systems in a way that avoids such risks (i.e. fairness, transparency, and accountability by design). The Institute of Electrical and Electronics Engineers’ Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems is one example of an initiative aimed at ensuring that technologists are educated, trained, and empowered to prioritise ethical considerations in the design and development of intelligent systems.

Researchers are carefully exploring the ethical challenges posed by AI and are working, for example, on the development of AI algorithms that can ‘explain themselves’. Being able to better understand how an algorithm makes a certain decision could also help improve that algorithm.
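
As a toy illustration of the underlying idea, the sketch below inspects a linear model’s coefficients to see which inputs push a decision one way or the other. The ‘loan’ features and data are hypothetical; real explainability work applies richer methods (such as LIME or SHAP) to far more complex models.

```python
from sklearn.linear_model import LogisticRegression
import numpy as np

features = ['income', 'debt', 'years_employed']     # hypothetical inputs
X = np.array([[50, 5, 10], [20, 15, 1], [60, 2, 12], [15, 20, 0]], dtype=float)
y = np.array([1, 0, 1, 0])                          # 1 = loan approved

model = LogisticRegression().fit(X, y)
for name, weight in zip(features, model.coef_[0]):
    print(f'{name}: {weight:+.2f}')    # sign and size hint at each feature's pull
```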


AI and other digital technologies and infrastructures


Telecom infrastructure

AI is used to optimise network performance, conduct predictive maintenance, dynamically allocate network resources, and improve customer experience, among other uses.


Internet of things

The interplay between AI and IoT can be seen in multiple applications, from smart home devices and vehicle autopilot systems to drones and smart cities applications.


Semiconductors

AI algorithms are used in the design of chips, for instance to improve performance and power efficiency. In turn, semiconductors themselves underpin AI hardware and research.


Quantum computing

Although largely still a field of research, quantum computing promises enhanced computational power which, coupled with AI, can help address complex problems.


Other advanced technologies

AI techniques are increasingly used in the research and development of other emerging and advanced technologies, from 3D printing and virtual reality, to biotechnology and synthetic biology.