Webinar: Data for change: listening to the voices of children about their experiences online

How does the Safe Online initiative support the development of evidence-based solutions to make the internet safe for children?

During this webinar, Serena Tommasino from the Safe Online Initiative at the End Violence Partnership will highlight the importance of evidence generation in informing prevention, response, and cross-sectoral efforts, drawing on Safe Online’s $71 million investment portfolio, which has had impact in more than 80 countries globally.

This includes sharing key findings from Disrupting Harm, a large-scale research project implemented in 13 countries in Africa and Asia between 2019 and 2022 and currently under way in 11 additional countries (2022–2025), which will be presented by Daniel Kardefelt-Winther from UNICEF Innocenti. The session will demonstrate how data and evidence can inform policy change and tech industry practices.

UAE Climate Tech

The UAE Ministry of Industry and Advanced Technology, ADNOC, and Masdar are organising the UAE Climate Tech conference on 10 and 11 May 2023. The conference responds to the pressing need for large-scale decarbonisation and climate action that also supports social and economic growth, and will feature various technology, innovation, and investment opportunities. More than 100 companies will display their technologies in areas such as carbon capture, AI, robotics, digitalisation, hydrogen, alternative fuels, and low-carbon energy solutions.

2050 Electronic & Electrical Waste Outlook in West Asia

The report ‘2050 Electronic and Electrical Waste Outlook in West Asia’ provides an overview of the electronic waste (e-waste) problem in West Asia and proposes a stepwise plan to help countries manage it in an environmentally sound way. It finds that 99.9% of e-waste in the West Asia region is currently unmanaged or mismanaged: e-waste is either disposed of in landfills or handled by the informal sector, with significant health and environmental consequences.

The report introduces two scenarios, Business as Usual (BaU) and Circular Economy (CE), to project long-term e-waste outcomes in West Asia by 2050. If current practices continue (the BaU scenario), the amount of electrical and electronic equipment (EEE) placed on the market (POM) in the region will double by 2050, as will the amount of e-waste generated, mostly coming from low- and middle-income countries. Under the CE scenario, however, there could be a 33% reduction in EEE POM and a 14% reduction in e-waste generated compared to the BaU scenario.

In conclusion, the report recommends that substantial investments be made in e-waste management infrastructure, appropriate legislation be developed, strong long-term binding targets be established, and consumer awareness of the issue be raised throughout the West Asia region to unlock the benefits of the CE scenario.

Global Digital Regulatory Outlook 2023 – Policy and regulation to spur digital transformation

The Global Digital Regulatory Outlook 2023 assesses the progress of regulation in 193 countries worldwide, providing valuable insights for regulators and policy-makers seeking to understand and shape the regulatory landscape to harness the benefits of digital transformation. It discusses the importance of agile and iterative policy implementation for digital transformation, identifies five tensions in policy and regulatory models, and highlights nine regulatory issues that need the attention of regulators, including the internet, cybersecurity, AI, and regulatory sandboxes.

The report also suggests five strategies to drive digital transformation and introduces a unified framework for evaluating the readiness of national policy, legal, and governance frameworks for digital transformation, assisting national ICT regulators in making evidence-based decisions. The analysis is based on ITU’s set of benchmarks and evidence-informed frameworks.

Germany Plans to Ban Huawei and ZTE from 5G Network

Germany is planning to ban Huawei and ZTE from its 5G network due to national security concerns. However, this move could interfere with the rollout of 5G services in Germany.

The Chinese embassy expressed strong dissatisfaction towards Germany’s reported plans to ban Huawei and ZTE from the country’s 5G network.

The Chinese embassy accused the German government of generalising the concept of national security and abusing state power, in violation of economic laws and fair competition principles.

Western security officials claimed that Huawei and ZTE represent potential threats to national security due to their close ties with Beijing.

Germany passed legislation in 2021 to increase security standards for its 5G networks without implementing an outright ban on Chinese firms.

Huawei currently supplies nearly 60% of the base stations and 5G infrastructure in Germany.

China claims that the ban would interfere with the rollout of 5G services in Germany, and urges the German government to listen to rational voices within its own borders.

Cloud computing

AI and cloud computing

Together, AI and cloud computing enable advanced AI applications, scalable infrastructure, collaborative research, cost optimisation, and efficient resource management. But the misuse of AI can threaten the security of cloud infrastructure. 

The interplay between AI and cloud computing

The merging of AI’s capabilities with cloud-based computing environments, often referred to as the AI cloud, is already in progress. This can be seen in digital assistants, which combine AI technology with cloud resources and big data to deliver immediate services, such as facilitating purchases or providing real-time information like traffic and weather. Notable examples of this integration include virtual assistants like Siri, Amazon Alexa, and Google Home, as well as ChatGPT, which is powered by Microsoft Azure’s cloud infrastructure. Cloud computing makes this possible by providing AI algorithms with the necessary computational power and scalability for large-scale data processing and complex computations. By using distributed computing in the cloud, AI tasks can be accelerated through parallel execution, reducing both development and deployment time. When edge computing and AI are combined, intelligence can be brought to the network edge, enabling real-time analysis and responsiveness. At the same time, AI techniques optimise cloud infrastructure management, enhancing performance and reducing costs.
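
To illustrate the parallel-execution point, here is a minimal sketch in Python, assuming a hypothetical score() function standing in for real model inference. Each chunk of data is handled by a separate worker process, much as a cloud platform distributes AI workloads across machines:

```python
# Minimal sketch of parallel AI inference across workers.
# score() is a hypothetical stand-in for real model inference.
from concurrent.futures import ProcessPoolExecutor

def score(batch):
    # Placeholder for model inference on one batch of inputs.
    return [x * 0.5 + 1.0 for x in batch]

def split(data, n_chunks):
    size = max(1, len(data) // n_chunks)
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Each chunk is scored in a separate process, mimicking distributed workers.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = pool.map(score, split(data, 4))
    predictions = [p for chunk in results for p in chunk]
    print(len(predictions))
```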

AI, cloud computing and (data) security

The convergence of AI and cloud computing offers both opportunities and obstacles for security and privacy. On the one hand, AI-driven security solutions can contribute to improved cloud security by detecting threats, identifying anomalies, and utilising sophisticated encryption techniques. Furthermore, AI facilitates better authentication methods for cloud users. But AI can also pose threats to the security and privacy of cloud infrastructure and data. Malicious actors can target AI systems to manipulate or deceive AI algorithms, leading to unauthorised access and data breaches. AI can also be used to automate and accelerate cyberattacks against cloud computing systems, making them more difficult to detect and potentially resulting in compromised data. Additionally, AI-powered malware and chatbots may be developed to deceive users and gain access to sensitive information.
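
As a concrete (if simplified) illustration of AI-driven anomaly detection for cloud security, the sketch below trains scikit-learn’s IsolationForest on synthetic access-log features; the feature set (requests per minute, bytes transferred, distinct endpoints) and the numbers are invented for this example:

```python
# Hedged sketch: anomaly detection on cloud access logs with scikit-learn.
# All features and values below are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal traffic: ~60 req/min, ~5 KB transferred, ~8 distinct endpoints.
normal = rng.normal(loc=[60, 5_000, 8], scale=[10, 1_000, 2], size=(500, 3))
# A handful of aggressive sessions that should stand out.
attack = rng.normal(loc=[600, 90_000, 40], scale=[50, 5_000, 5], size=(5, 3))
logs = np.vstack([normal, attack])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(logs)          # -1 marks suspected anomalies
print(np.where(flags == -1)[0])      # indices of flagged log entries
```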

Cloud computing caused a shift from storing data on the hard disks of our computers to storing it on servers in the cloud. Examples include your email account, which you can access from different devices, and any photos, videos, or documents you store online (even if your account is private).

Cloud computing offers ubiquitous access to all of our data and services from any device, anywhere there is an Internet connection.

The first wave of cloud computing started with the use of online mail servers (Gmail, Yahoo, etc.), social media applications (Facebook, Twitter, etc.), and online applications (wikis, blogs, Google Docs, etc.).

Apart from everyday applications, cloud computing is used extensively for business software. More and more of our digital assets are moving from our hard disks to the cloud. Due to their large server farms, tech giants such as Google, Microsoft, Apple, Amazon, and Facebook are among the main cloud computing players in the private sector.

Emerging technologies

AI and emerging technologies

Although still often referred to as an emerging technology, AI outgrew that label some time ago: it has not only spread across different industries and sectors, but is increasingly influencing our daily lives. AI is also changing the dynamics of computing and accelerating the development of other emerging technologies, offering opportunities but also leading to new challenges.

AI to accelerate opportunities 

Data is a key ingredient for most emerging technologies, but it is AI that puts this data to use. AI allows designers to accurately predict, fine-tune, and adjust parameters for 3D printing, helping to control processes, avoid errors, and save time. In biotech, AI can be used to analyse large-scale genomic data for personalised medicine, accelerate drug discovery through predictive modelling, and support brain-computer interfaces used, for instance, in brain-controlled prosthetic limbs. AI can also improve the user experience in VR and AR, where it is used to create digital content, improve the display of digital information, and ensure safe interactions in mixed-reality settings.

AI increases challenges

AI-assisted reverse engineering can reconstruct 3D-printed objects without the consent of their designers. In biotechnology, AI systems might be used to expose sensitive private data and breach privacy by analysing personal biometric information. The interplay between AI and advances in neuroscience, such as brain-computer interfaces, raises questions of privacy, security, and even human autonomy. VR and AR devices also carry a high risk of AI-assisted data misuse: when digital environments and avatars are created with AI, the authenticity of content and of digital identities is at stake.

We live in an era of fast technological progress, with new digital devices, applications, and tools being developed almost on a daily basis. 3D printing, augmented reality (AR) and virtual reality (VR), biotechnology, and quantum technology are some of the most rapidly advancing areas, with many implications for society.

How is 3D printing impacting current manufacturing business models, and what consequences does it have for the future of work? Is AR an opportunity to improve the provision of education, especially in remote areas? And what are the ethical boundaries within which biotechnology should operate? These are some of the policy questions linked to these emerging technologies.

Blockchain

AI and blockchain technology

AI has various applications in blockchain technology. It has the potential to enhance blockchain systems by analysing smart contracts, detecting fraud, optimising scalability, and enabling tokenisation, among other uses. But it also comes with challenges, for instance, in the form of AI-driven attacks aimed at exploiting blockchain vulnerabilities.

AI to complement blockchain technology

AI algorithms are already used to optimise the consensus mechanism used on cryptocurrency blockchains by analysing and enhancing the efficiency and effectiveness of the consensus protocols. Using machine learning algorithms and data analysis, AI can identify patterns, optimise parameters, and predict successful consensus strategies. Additionally, AI can help address challenges related to scalability and energy consumption.

AI can also enable the tokenisation of assets, facilitating the creation and management of digital assets on blockchain platforms. Asset management systems powered by AI can automate processes like asset valuation, portfolio management, and investment decision-making. Security issues associated with blockchain can be identified and mitigated using AI. For instance, AI is used to analyse patterns in DDoS attacks and identify possible security holes in the code. AI techniques are also employed to verify smart contracts and reduce the likelihood of exploits and vulnerabilities. Furthermore, by analysing transaction patterns and identifying suspicious behaviour, AI can detect fraudulent activities within blockchain networks and help prevent illicit activities such as fraud and money laundering. Additionally, AI can help enhance the privacy and security of blockchain networks by developing advanced encryption algorithms and employing privacy-preserving techniques to protect sensitive data and transactions.

As is the case with other technologies, different blockchain systems are often incompatible with each other. AI solutions that enable different blockchains to communicate are in development and will potentially create new opportunities.

Challenges at the intersection of AI and blockchain 

The integration of AI and blockchain technology presents several challenges. Adversarial attacks are a significant concern, as AI can exploit blockchain system vulnerabilities and compromise security and integrity. The analytical capabilities of AI can potentially de-anonymise blockchain data, thereby raising privacy concerns. Additionally, the resource-intensive nature of AI systems often necessitates significant computational power; when integrated with blockchain systems, AI systems can exacerbate scalability and performance issues (i.e. the limited resources of blockchain networks may be strained by the processing power and storage requirements of AI tasks). Finally, governance and regulatory challenges arise when determining responsibility and accountability in decentralised AI-powered blockchain systems.


Digitalisation, e-commerce, and the emergence of e-money in our daily lives have made the notion of non-physical currency quite common. Since the early 2000s, the idea of a digital payment system and a digital currency native to the Internet has become very attractive.

What is a blockchain? Simply put, it is a data ledger (think of an accounting ledger, which records every ‘in’ and ‘out’ transaction). The ledger is distributed, which means that many copies of the same ledger exist on computers worldwide. It is also secured by strong cryptography, which protects it from malicious actors attempting to change any information within the blockchain.
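
The tamper-resistance idea can be sketched in a few lines of Python using only the standard library. Each block stores the hash of its predecessor, so altering any recorded entry breaks the chain (an illustrative toy, not a real blockchain implementation):

```python
# Minimal sketch of a hash-chained ledger.
import hashlib, json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for i, data in enumerate(["Alice pays Bob 5", "Bob pays Carol 2"], start=1):
    chain.append({"index": i, "data": data, "prev": block_hash(chain[-1])})

def is_valid(chain):
    # Every block must reference the hash of the block before it.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(is_valid(chain))                     # True
chain[1]["data"] = "Alice pays Bob 500"    # tamper with the ledger
print(is_valid(chain))                     # False: the chain exposes the change
```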

How was this technology born? In 1992, W. Scott Stornetta and Stuart Haber presented the idea of blocks of digital data chained by cryptography to prevent tampering with time-stamped documents. In 2008, an anonymous person known by the name of Satoshi Nakamoto proposed a new payment system to a group of prominent cryptographers and mathematicians through a cryptography mailing list.

The proposal, titled Bitcoin: A Peer-to-Peer Electronic Cash System, was based on an online distributed ledger – verified by cryptography – functioning through a ‘proof-of-work’ consensus mechanism, the same technique that had earlier been proposed to tackle email spam. The term blockchain was not mentioned in the proposal; it was coined later, with reference to Stornetta and Haber’s work.
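
Proof-of-work itself can be illustrated with a toy example: find a nonce whose hash starts with a given number of zeros. Finding the nonce is computationally expensive, while verifying it is trivial (Bitcoin mining applies the same principle at a vastly higher difficulty):

```python
# Toy proof-of-work: search for a nonce that produces a hash
# with the required number of leading zeros.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce   # costly to find, trivial for others to verify
        nonce += 1

print(mine("Alice pays Bob 5"))
```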

How is new data added to a blockchain? Every computer (or node) synchronises the data through a consensus-based mechanism. Once data is added to a blockchain, it cannot be removed or altered unless there is a consensus.

There are many types of blockchain databases. The main types are open blockchains, and closed or private blockchains.

Artificial intelligence

About AI: A brief introduction

Artificial intelligence (AI) might sound like something from a science fiction movie in which robots are ready to take over the world. While such robots are purely fixtures of science fiction (at least for now), AI is already part of our daily lives, whether we know it or not.

Think of your Gmail inbox: Some of the emails you receive end up in your spam folder, while others are marked as ‘social’ or ‘promotion’. How does this happen? Google uses AI algorithms to automatically filter and sort emails into categories. These algorithms can be seen as small programs that are trained to recognise certain elements within an email that make it likely to be, for example, a spam message. When the algorithm identifies one or several of those elements, it marks the email as spam and sends it to your spam folder. Of course, algorithms do not work perfectly, but they are continuously improved: when you find a legitimate email in your spam folder, you can tell Google that it was wrongly marked as spam, and Google uses that information to improve how its algorithms work.
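
In miniature, such a filter is simply a text classifier trained on labelled messages. The sketch below uses a naive Bayes classifier from scikit-learn on a tiny invented dataset; real spam filters learn from millions of messages and use many more signals than the text alone:

```python
# Hedged sketch of the spam-filtering idea: a classifier trained on
# labelled emails. The dataset is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now", "cheap loans click here",
    "meeting agenda for Monday", "lunch tomorrow?",
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)
print(model.predict(["free prize inside"]))   # ['spam']

# User feedback becomes a new labelled example for retraining:
emails.append("free prize inside")   # suppose the user marks it legitimate
labels.append("ham")
model.fit(emails, labels)
```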

AI is widely used in internet services: Search engines use AI to provide better search results; social media platforms rely on AI to automatically detect hate speech and other forms of harmful content; and online stores use AI to suggest products you are likely to be interested in based on your previous shopping habits. More complex forms of AI are used in manufacturing, transportation, agriculture, healthcare, and many other areas. Self-driving cars, programs able to recognise certain medical conditions with the accuracy of a doctor, systems developed to track and predict the impact of weather conditions on crops – they all rely on AI technologies.

As the name suggests, AI systems are embedded with some level of ‘intelligence’, which makes them capable of performing certain tasks or replicating certain specific behaviours that normally require human intelligence. What makes them ‘intelligent’ is a combination of data and algorithms. Let’s look at an example involving a technique called machine learning. Imagine a program able to recognise cars among millions of images. First, that program is fed a large number of car images. Algorithms then ‘study’ those images to discover patterns, in particular the specific elements that characterise the image of a car. Through machine learning, algorithms ‘learn’ what a car looks like, so that, when later presented with millions of different images, they can identify those that contain a car. This is, of course, a simplified example – there are far more complex AI systems out there. But essentially all of them involve some initial training data and an algorithm that learns from that data in order to perform a task.
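
In practice, such a recogniser is often built by fine-tuning a pretrained network rather than training from scratch. The PyTorch sketch below assumes a hypothetical folder of labelled photos (photos/car and photos/not_car) and retrains only the final layer of a ResNet-18:

```python
# Simplified sketch of the car-recognition example via transfer learning.
# The directory layout (photos/car, photos/not_car) is an assumption
# made for illustration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("photos", transform=transform)  # car / not_car
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: car, not car

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, targets in loader:          # one pass over the labelled images
    optimiser.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimiser.step()
```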

Some AI systems go beyond this, by being able to learn from themselves and improve themselves. One famous example is DeepMind’s AlphaGo Zero: The program initially only knows the rules of the Go game; however, it then plays the game with itself and learns from its successes and failures to become better and better.

Going back to where we started: Is AI really able to match human intelligence? In specific cases – like playing the game of Go – the answer is ‘yes’. That said, what has been coined ‘artificial general intelligence’ (AGI) – advanced AI systems that can replicate human intellectual capabilities in order to perform complex and combined tasks – does not yet exist. Experts are divided on whether AGI is something we will see in the near future, but it is certain that scientists and tech companies will continue to develop ever more complex AI systems.


The policy implications of AI

Applying AI for social good is a principle that many tech companies have adhered to. They see AI as a tool that can help address some of the world’s most pressing problems, in areas such as climate change and disease eradication. The technology and its many applications certainly carry significant potential for good, but there are also risks. Accordingly, the policy implications of AI advancements are far-reaching. While AI can generate economic growth, there are growing concerns over the significant disruptions it could bring to the labour market. Issues related to privacy, safety, and security are also in focus.

As innovations in the field continue, more and more AI standards and AI governance frameworks are being developed to help ensure that AI applications have minimal unintended consequences.

Social and economic

AI has significant potential to stimulate economic growth and contribute to sustainable development. But it also comes with disruptions and challenges.

Safety and security

AI applications bring into focus issues related to cybersecurity (from cybersecurity risks specific to AI systems to AI applications in cybersecurity), human safety, and national security.

Human rights

The uptake of AI raises profound implications for privacy and data protection, freedom of expression, freedom of assembly, non-discrimination, and other human rights and freedoms.

Ethical concerns

The involvement of AI algorithms in judgments and decision-making gives rise to concerns about ethics, fairness, justice, transparency, and accountability.

Governing AI

When debates on AI governance first emerged, one overarching question was whether AI-related challenges (in areas such as safety, privacy, and ethics) call for new legal and regulatory frameworks, or whether existing ones could be adapted to also cover AI. 

Applying and adapting existing regulation was seen by many as the most suitable approach. But as AI innovation accelerated and applications became more and more pervasive, AI-specific governance and regulatory initiatives started emerging at national, regional, and international levels.

USA’s Blueprint for an AI Bill of Rights

The Blueprint for an AI Bill of Rights is a guide for a society that protects people from AI threats and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, the framework is accompanied by From Principles to Practice, a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps towards actualising these principles in the technological design process.

China’s Interim Measures for Generative Artificial Intelligence

Released in July 2023 and applicable starting 15 August 2023, the measures apply to ‘the use of generative AI to provide services for generating text, pictures, audio, video, and other content to the public in the People’s Republic of China’. The regulation covers issues related to intellectual property rights, data protection, transparency, and data labelling, among others.

EU’s AI Act

Proposed by the European Commission in April 2021, the EU AI Act was formally adopted by the Council of the EU on 21 May 2024 and came into effect on 1 August of the same year. The regulation introduces a risk-based approach to AI systems: if an AI system poses unacceptable risks, it is banned; if an AI system comes with high risks (for instance, AI used in performing surgeries), it is strictly regulated; and if an AI system involves only limited risks, the focus is placed on ensuring transparency for end users.

UNESCO Recommendation on AI Ethics

Adopted by UNESCO member states in November 2021, the recommendation outlines a series of values, principles, and actions to guide states in the formulation of their legislation, policies, and other instruments regarding AI. For instance, the document calls for action to guarantee individuals more privacy and data protection, by ensuring transparency, agency, and control over their personal data. Explicit bans on the use of AI systems for social scoring and mass surveillance are also highlighted, and there are provisions for ensuring that real-world biases are not replicated online.

OECD Recommendation on AI

Adopted by the OECD Council in May 2019, the recommendation encourages countries to promote and implement a series of principles for responsible stewardship of trustworthy AI, from inclusive growth and human-centred values to transparency, security, and accountability. Governments are further encouraged to invest in AI research and development, foster digital ecosystems for AI, shape enabling policy environments, build human capacities, and engage in international cooperation for trustworthy AI.

Council of Europe work on a Convention on AI and human rights

In 2021, the Committee of Ministers of the Council of Europe (CoE) approved the creation of a Committee on Artificial Intelligence (CAI) tasked with elaborating a legal instrument on the development, design, and application of AI systems based on the CoE’s standards on human rights, democracy, and the rule of law, and conducive to innovation. On 17 May 2024, the Committee of Ministers adopted the Framework Convention on AI, Human Rights, Democracy and the Rule of Law, which was opened for signature on 5 September 2024.

Group of Governmental Experts on Lethal Autonomous Weapons Systems

Within the UN System, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems (LAWS) to explore the technical, military, legal, and ethical implications of LAWS.  The group has been convened on an annual basis since its creation. In 2019, it agreed on a series of Guiding principles, which, among other issues, confirmed the application of international humanitarian law to the potential development and use of LAWS, and highlighted that human responsibility must be retained for decisions on the use of weapons systems.

Global Partnership on Artificial Intelligence

Launched in June 2020 and counting 29 members in 2024, the Global Partnership on Artificial Intelligence (GPAI) is a multistakeholder initiative dedicated to ‘sharing multidisciplinary research and identifying key issues among AI practitioners, with the objective of facilitating international collaboration, reducing duplication, acting as a global reference point for specific AI issues, and ultimately promoting trust in and the adoption of trustworthy AI’.

African Union (AU) Continental AI Strategy

Adopted by the AU Executive Council at its session on 18–19 July 2024, the Continental AI Strategy advocates for unified national approaches among AU member states to navigate the complexities of AI-driven transformation. It seeks to enhance regional and global cooperation, positioning Africa as a leader in inclusive and responsible AI development. The strategy emphasises a people-centric, development-oriented, and inclusive approach, structured around five key focus areas and fifteen policy recommendations.

AI standards as a bridge between technology and policy

Despite their technical nature – or rather because of that – standards have an important role to play in bridging technology and policy. In the words of three major standard developing organisations (SDOs), standards can ‘underpin regulatory frameworks and […] provide appropriate guardrails for responsible, safe and trustworthy AI development’. As hard regulations are being shaped to govern the development and use of AI, standards are increasingly seen as a mechanism to demonstrate compliance with legal provisions.

Standards for AI are currently developed within a wide range of SDOs at national, regional, and international levels. In the EU, for instance, the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI) are working on AI standards to complement the AI Act. At the International Telecommunication Union (ITU), several study groups and focus groups within its Telecommunication Standardization Sector (ITU-T) are carrying out standardisation and pre-standardisation work across issues as diverse as AI-enabled multimedia applications, AI for health, and AI for natural disaster management. And the Joint Technical Committee 1 on Information Technology – a joint initiative of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) – has a subcommittee dedicated to AI standards.

National AI strategies

As AI technologies continue to evolve at a fast pace and find more and more applications in various areas, countries are increasingly aware that they need to keep up with this evolution and take advantage of it. Many are developing national AI development strategies, as well as addressing the economic, social, and ethical implications of AI advancements. China, for example, released a national AI development plan in 2017, intended to help make the country the world leader in AI by 2030 and build a national AI industry worth US$150 billion. In the United Arab Emirates (UAE), the adoption of a national AI strategy was complemented by the appointment of a State Minister for AI to work on ‘making the UAE the world’s best prepared [country] for AI and other advanced technologies’. Canada, France, Germany, and Mauritius were among the first countries to launch national AI strategies. These are only a few examples; many more countries have adopted or are working on such plans and strategies, as the map below shows.

[Map: national AI strategies around the world. Last updated: September 2025]

In depth: Africa and artificial intelligence

Africa is taking steps towards a faster uptake of AI, and AI-related investments and innovation are advancing across the continent. Governments are adopting national AI strategies, regional and continental organisations are exploring the same, and there is increasing participation in global governance processes focused on various aspects of AI.

AI on the international level

The Council of Europe, the EU, the OECD, and UNESCO are not the only international spaces where AI-related issues are discussed; the technology and its policy implications now feature on the agenda of a wide range of international organisations and processes. Technical standards for AI are being developed at the ITU, ISO, IEC, and other standard-setting bodies. The ITU also hosts an annual AI for Good summit exploring the use of AI to accelerate progress towards sustainable development. UNICEF has begun working on using AI to realise and uphold children’s rights, while the International Labour Organization (ILO) is looking at the impact of AI automation on the world of work. The World Intellectual Property Organization (WIPO) is discussing intellectual property issues related to the development of AI, the World Health Organization (WHO) looks at the applications and implications of AI in healthcare, and the World Meteorological Organization (WMO) has been using AI in weather forecasting, natural hazard management, and disaster risk reduction.

As discussions on digital cooperation have advanced at the UN level, AI has been one of the topics addressed within this framework. The 2019 report of the UN High-Level Panel on Digital Cooperation tackles issues such as the impact of AI on labour markets, AI and human rights, and the impact of the misuse of AI on trust and social cohesion. The UN Secretary-General’s Roadmap on Digital Cooperation, issued in 2020, identifies gaps in international coordination, cooperation, and governance when it comes to AI. The Our Common Agenda report released by the Secretary-General in 2021 proposes the development of a Global Digital Compact (with principles for ‘an open, free and secure digital future for all’) which could, among other elements, promote the regulation of AI ‘to ensure that it is aligned with shared global values’. 

AI and its governance dimensions have featured high on the agenda of bilateral and multilateral processes such as the EU-US Trade and Technology Council, G7, G20, and BRICS. Regional organisations such as the African Union (AU), the Association of Southeast Asian Nations (ASEAN), and the Organization of American States (OAS) are also paying increasing attention to leveraging the potential of AI for economic growth and sustainable development.

In recent years, annual meetings of the Internet Governance Forum (IGF) have featured AI among their main themes.

More on the policy implications of AI

The economic and social implications of AI

AI has significant potential to stimulate economic growth. In production processes, AI systems increase automation and make processes smarter, faster, and cheaper, bringing savings and increased efficiency. AI can improve the efficiency and quality of existing products and services, and can also generate new ones, thus leading to the creation of new markets. It is estimated that the AI industry could contribute up to US$15.7 trillion to the global economy by 2030. Beyond the economic potential, AI can also contribute to achieving the sustainable development goals (SDGs); for instance, AI can be used to detect water service lines containing hazardous substances (SDG 6 – clean water and sanitation), to optimise the supply and consumption of energy (SDG 7 – affordable and clean energy), and to analyse climate change data and generate climate modelling, helping to predict and prepare for disasters (SDG 13 – climate action). Across the private sector, companies have been launching programmes dedicated to fostering the role of AI in achieving sustainable development; examples include IBM’s Science for Social Good, Google’s AI for Social Good, and Microsoft’s AI for Good projects.

For this potential to be fully realised, there is a need to ensure that the economic benefits of AI are broadly shared at a societal level, and that the possible negative implications are adequately addressed. The 2022 edition of the Government AI Readiness Index warns that ‘care needs to be taken to make sure that AI systems don’t just entrench old inequalities or disenfranchise people. In a global recession, these risks are evermore important.’ One significant risk is that of a new form of global digital divide, in which some countries reap the benefits of AI, while others are left behind. Estimates for 2030 show that North America and China will likely experience the largest economic gains from AI, while developing countries – with lower rates of AI adoption – will register only modest economic increases.

The disruptions that AI systems could bring to the labour market are another source of concern. Many studies estimate that automated systems will make some jobs obsolete and lead to unemployment. Such concerns have led to discussions about introducing a ‘universal basic income’ to compensate individuals for disruptions brought to the labour market by robots and other AI systems. There are, however, also opposing views, according to which AI advancements will generate new jobs that will compensate for those lost, without affecting overall employment rates. One point on which there is broad agreement is the need to better adapt education and training systems to the new requirements of the job market. This entails not only preparing the new generations, but also allowing the current workforce to reskill and upskill.

AI, safety, and security

AI applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the need to design systems that can properly react to unforeseen situations with minimal unintended consequences. Beyond self-driving cars, the (potential) development of other autonomous systems – such as lethal autonomous weapons systems – has sparked additional and intense debates on their implications for human safety.

AI also has implications in the cybersecurity field. In addition to the cybersecurity risks associated with AI systems themselves (e.g. as AI is increasingly embedded in critical systems, those systems need to be secured against potential cyberattacks), the technology has a dual function: it can be used as a tool both to commit and to prevent cybercrime and other forms of cyberattacks. As the possibility of using AI to assist in cyberattacks grows, so does the integration of this technology into cybersecurity strategies. The same characteristics that make AI a powerful tool for perpetrating attacks also help defend against them, raising hopes of levelling the playing field between attackers and cybersecurity experts.

Going a step further, AI is also looked at from the perspective of national security. The US Intelligence Community, for example, has included AI among the areas that could generate national security concerns, especially due to its potential applications in warfare and cyber defence, and its implications for national economic competitiveness.

AI and human rights

AI systems work with enormous amounts of data, and this raises concerns regarding privacy and data protection. Online services such as social media platforms, e-commerce stores, and multimedia content providers collect information about users’ online habits, and use AI techniques such as machine learning to analyse the data and ‘improve the user’s experience’ (for example, Netflix suggests movies you might want to watch based on movies you have already seen). AI-powered products such as smart speakers also involve the processing of user data, some of it of a personal nature. Facial recognition technologies embedded in public street cameras have direct privacy implications.

How is all of this data processed? Who has access to it and under what conditions? Are users even aware that their data is extensively used? These are only some of the questions generated by the increased use of personal data in the context of AI applications. What solutions are there to ensure that AI advancements do not come at the expense of user privacy? Strong privacy and data protection regulations (including in terms of enforcement), enhanced transparency and accountability for tech companies, and embedding privacy and data protection guarantees into AI applications during the design phase are some possible answers.

Algorithms, which power AI systems, could also have consequences for other human rights. For example, AI tools aimed at automatically detecting and removing hate speech from online platforms could negatively affect freedom of expression: even when such tools are trained on significant amounts of data, the algorithms can wrongly identify a text as hate speech. Complex algorithms and human-biased big data sets can serve to reinforce and amplify discrimination, especially against those who are already disadvantaged.

Ethical concerns

As AI algorithms involve judgements and decision-making – replicating similar human processes – concerns are being raised regarding ethics, fairness, justice, transparency, and accountability. The risk of discrimination and bias in decisions made by or with the help of AI systems is one such concern, as illustrated in the debate over facial recognition technology (FRT). Several studies have shown that FRT programs present racial and gender biases, as the algorithms involved are largely trained on photos of males and white people. If law enforcement agencies rely on such technologies, this could lead to biased and discriminatory decisions, including false arrests.

One way of addressing concerns over AI ethics could be to combine ethical training for technologists (encouraging them to prioritise ethical considerations when creating AI systems) with the development of technical methods for designing AI systems in ways that avoid such risks (i.e. fairness, transparency, and accountability by design). The Institute of Electrical and Electronics Engineers’ Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems is one example of an initiative aimed at ensuring that technologists are educated, trained, and empowered to prioritise ethical considerations in the design and development of intelligent systems.

Researchers are carefully exploring the ethical challenges posed by AI and are working, for example, on the development of AI algorithms that can ‘explain themselves’. Being able to better understand how an algorithm makes a certain decision could also help improve that algorithm.


AI and other digital technologies and infrastructures

Telecom infrastructure

AI is used to optimise network performance, conduct predictive maintenance, dynamically allocate network resources, and improve customer experience, among other uses.

Internet of things

The interplay between AI and the IoT can be seen in multiple areas, from smart home devices and vehicle autopilot systems to drones and smart city applications.

Semiconductors

AI algorithms are used in the design of chips, for instance to improve performance and power efficiency. In turn, semiconductors provide the hardware on which AI systems and AI research run.

Quantum computing

Although largely still a field of research, quantum computing promises enhanced computational power which, coupled with AI, can help address complex problems.

Other advanced technologies

AI techniques are increasingly used in the research and development of other emerging and advanced technologies, from 3D printing and virtual reality, to biotechnology and synthetic biology.