The economic and social implications of AI
AI has significant potential to stimulate economic growth. In production processes, AI systems increase automation and make processes smarter, faster, and cheaper, bringing savings and efficiency gains. AI can improve the efficiency and quality of existing products and services, and can also generate new ones, leading to the creation of new markets. It is estimated that the AI industry could contribute up to US$15.7 trillion to the global economy by 2030. Beyond the economic potential, AI can also contribute to achieving the sustainable development goals (SDGs); for instance, AI can be used to detect water service lines containing hazardous substances (SDG 6 – clean water and sanitation), to optimise the supply and consumption of energy (SDG 7 – affordable and clean energy), and to analyse climate change data and generate climate models, helping to predict and prepare for disasters (SDG 13 – climate action). Across the private sector, companies have launched programmes dedicated to fostering the role of AI in achieving sustainable development. Examples include IBM’s Science for Social Good, Google’s AI for Social Good, and Microsoft’s AI for Good projects.
For this potential to be fully realised, the economic benefits of AI need to be broadly shared at a societal level, and the possible negative implications adequately addressed. The 2022 edition of the Government AI Readiness Index warns that ‘care needs to be taken to make sure that AI systems don’t just entrench old inequalities or disenfranchise people. In a global recession, these risks are ever more important.’ One significant risk is that of a new form of global digital divide, in which some countries reap the benefits of AI while others are left behind. Estimates for 2030 suggest that North America and China will likely experience the largest economic gains from AI, while developing countries – with lower rates of AI adoption – will register only modest gains.
The disruptions that AI systems could bring to the labour market are another source of concern. Many studies estimate that automated systems will make some jobs obsolete and lead to unemployment. Such concerns have prompted discussions about introducing a ‘universal basic income’ to compensate individuals for disruptions brought to the labour market by robots and other AI systems. There are, however, also opposing views, according to which AI advancements will generate new jobs that compensate for those lost, without affecting overall employment rates. One point of broad agreement is the need to better adapt education and training systems to the new requirements of the job market. This entails not only preparing new generations, but also enabling the current workforce to re-skill and up-skill.
AI, safety, and security
AI applications in the physical world (e.g. in transportation) bring into focus issues of human safety and the need to design systems that can properly react to unforeseen situations with minimal unintended consequences. Beyond self-driving cars, the (potential) development of other autonomous systems – such as lethal autonomous weapons systems – has sparked further intense debate on their implications for human safety.
AI also has implications in the cybersecurity field. In addition to the cybersecurity risks associated with AI systems themselves (e.g. as AI is increasingly embedded in critical systems, these need to be secured against potential cyberattacks), the technology has a dual function: it can be used as a tool both to commit and to prevent cybercrime and other forms of cyberattacks. As the possibility of using AI to assist in cyberattacks grows, so does the integration of the technology into cybersecurity strategies. The same characteristics that make AI a powerful tool for perpetrating attacks also help to defend against them, raising hopes of levelling the playing field between attackers and cybersecurity experts.
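To give a flavour of how AI-style techniques are folded into defensive strategies, here is a minimal sketch of statistical anomaly detection on login activity. The data, threshold, and feature choice are invented for illustration; this is not a production defence mechanism.

```python
# Illustrative sketch: flagging anomalous login activity with a simple
# statistical model (z-score). All data and thresholds are hypothetical.
import statistics

# Hourly login-attempt counts observed during a normal baseline period.
baseline = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10, 12, 9]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count: int, threshold: float = 3.0) -> bool:
    """Flag counts more than `threshold` standard deviations above baseline."""
    return (count - mean) / stdev > threshold

# A sudden burst of attempts, as in a credential-stuffing attack.
print(is_anomalous(11))   # False: within normal variation
print(is_anomalous(90))   # True: flagged for review
```

Real defensive systems learn far richer models of normal behaviour, but the underlying idea is the same: characterise the baseline, then surface deviations for investigation.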
Going a step further, AI is also examined from the perspective of national security. The US Intelligence Community, for example, has included AI among the areas that could generate national security concerns, especially due to its potential applications in warfare and cyber defence, and its implications for national economic competitiveness.
AI and human rights
AI systems work with enormous amounts of data, and this raises concerns regarding privacy and data protection. Online services such as social media platforms, e-commerce stores, and multimedia content providers collect information about users’ online habits, and use AI techniques such as machine learning to analyse the data and ‘improve the user’s experience’ (for example, Netflix suggests movies you might want to watch based on those you have already seen). AI-powered products such as smart speakers also involve the processing of user data, some of it of a personal nature. Facial recognition technologies embedded in public street cameras have direct privacy implications.
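As a rough illustration of the kind of machine-learning technique at work (not Netflix’s actual system, which is far more sophisticated), here is a toy item-similarity recommender. The titles and viewing data are invented.

```python
# Toy item-based recommender: suggest titles similar to ones a user liked.
# Titles and viewing data are invented; real systems are far more complex.
import math

# Each film maps to a 0/1 vector over users (1 = that user watched it).
ratings = {
    "Film A": [1, 1, 0, 1],
    "Film B": [1, 1, 0, 0],
    "Film C": [0, 0, 1, 1],
}

def cosine(u, v):
    """Cosine similarity between two viewing vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def similar_to(title):
    """Return the film watched by the most similar set of users."""
    scores = {other: cosine(ratings[title], vec)
              for other, vec in ratings.items() if other != title}
    return max(scores, key=scores.get)

print(similar_to("Film A"))  # "Film B": largely the same users watched it
```

The privacy concern follows directly from the mechanics: the recommendation only works because the service retains a detailed record of each user’s behaviour.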
How is all of this data processed? Who has access to it and under what conditions? Are users even aware that their data is extensively used? These are only some of the questions generated by the increased use of personal data in the context of AI applications. What solutions are there to ensure that AI advancements do not come at the expense of user privacy? Strong privacy and data protection regulations (including in terms of enforcement), enhanced transparency and accountability for tech companies, and embedding privacy and data protection guarantees into AI applications during the design phase are some possible answers.
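One concrete example of the design-phase safeguards mentioned above (an illustrative choice on our part, not one named in the policy discussion itself) is differential privacy, which adds calibrated noise to aggregate statistics before they are released. A minimal sketch, with invented data and parameters:

```python
# Minimal sketch of a differentially private count (Laplace mechanism).
# Epsilon and the records are illustrative; real deployments require
# careful privacy budgeting and auditing.
import random

def dp_count(records, predicate, epsilon=0.5):
    """Return a noisy count; the sensitivity of a counting query is 1."""
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

users = [{"age": 34}, {"age": 29}, {"age": 41}, {"age": 52}]
# An analyst learns roughly how many users are over 40, but the noise
# prevents confident inferences about any single individual.
print(dp_count(users, lambda u: u["age"] > 40))
```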
Algorithms, which power AI systems, could also have consequences for other human rights. For example, AI tools aimed at automatically detecting and removing hate speech from online platforms could negatively affect freedom of expression: even when such tools are trained on large amounts of data, the algorithms can wrongly identify a text as hate speech. Complex algorithms and big data sets that embed human bias can reinforce and amplify discrimination, especially against those who are already disadvantaged. A stylised example of this failure mode follows.
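The sketch below uses a naive keyword filter rather than a trained classifier (real moderation models are statistical, but they exhibit the same context-blindness), and the word list and texts are invented. It shows how counter-speech that quotes abusive language can be flagged alongside the abuse itself.

```python
# Stylised illustration of a moderation false positive. Real systems use
# trained classifiers, but lack of context causes a similar failure mode.
BLOCKLIST = {"vermin", "subhuman"}  # invented placeholder terms

def flag(text: str) -> bool:
    """Flag a text if any word, stripped of punctuation, is blocklisted."""
    words = {w.strip('.,!"').lower() for w in text.split()}
    return bool(words & BLOCKLIST)

attack = 'They are vermin and should leave.'
counter_speech = 'Calling refugees "vermin" is dehumanising and wrong.'

print(flag(attack))          # True: the intended catch
print(flag(counter_speech))  # True: false positive, condemnation removed too
```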
Ethical concerns
As AI algorithms involve judgements and decision-making – replicating similar human processes – concerns are being raised regarding ethics, fairness, justice, transparency, and accountability. The risk of discrimination and bias in decisions made by or with the help of AI systems is one such concern, as illustrated in the debate over facial recognition technology (FRT). Several studies have shown that FRT programs exhibit racial and gender biases, as the algorithms involved are largely trained on photos of males and of white people. If law enforcement agencies rely on such technologies, this could lead to biased and discriminatory decisions, including false arrests.
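Studies of this kind typically document bias by auditing error rates per demographic group on a labelled test set. The sketch below illustrates the method with invented evaluation records; the groups, labels, and rates are hypothetical.

```python
# Minimal bias audit: compare false match rates across demographic groups.
# The evaluation records are invented, for illustration only.
from collections import defaultdict

# (group, predicted_match, actual_match) from a hypothetical FRT test set
results = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, False), ("group_b", True, True),
    ("group_b", False, False), ("group_b", False, False),
]

errors = defaultdict(lambda: [0, 0])  # group -> [false matches, non-match trials]
for group, predicted, actual in results:
    if not actual:                     # only non-matching pairs can false-match
        errors[group][1] += 1
        if predicted:
            errors[group][0] += 1

for group, (fm, trials) in errors.items():
    print(f"{group}: false match rate {fm / trials:.0%}")
# Unequal rates across groups indicate the kind of bias the studies report.
```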
One way of addressing concerns over AI ethics could be to combine ethical training for technologists (encouraging them to prioritise ethical considerations when creating AI systems) with the development of technical methods for designing AI systems so that they avoid such risks (i.e. fairness, transparency, and accountability by design). The Institute of Electrical and Electronics Engineers’ Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems is one example of an initiative aimed at ensuring that technologists are educated, trained, and empowered to prioritise ethical considerations in the design and development of intelligent systems.
Researchers are carefully exploring the ethical challenges posed by AI and are working, for example, on the development of AI algorithms that can ‘explain themselves’. Being able to better understand how an algorithm makes a certain decision could also help improve that algorithm.
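To give a flavour of what a self-explaining decision can look like, here is a minimal sketch in which a linear scoring model reports each feature’s contribution to its output. The model, weights, and features are invented; explainability research covers far more complex models than this.

```python
# Minimal sketch of an explainable decision: a linear model whose output
# can be decomposed exactly into per-feature contributions. All weights,
# features, and the threshold are invented for illustration.
WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
THRESHOLD = 0.5

def decide_and_explain(applicant: dict):
    """Return a decision plus features ranked by how strongly they drove it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score > THRESHOLD else "deny"
    # The explanation is exact for a linear model; for complex models,
    # attribution methods can only approximate such a decomposition.
    return decision, sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

decision, why = decide_and_explain({"income": 2.0, "debt": 1.5, "years_employed": 1.0})
print(decision)  # 'deny': the debt contribution outweighs income here
print(why)       # features ranked by the size of their contribution
```

An explanation of this kind also serves the improvement loop mentioned above: if the ranked contributions reveal that a decision hinged on an inappropriate feature, developers know exactly where to intervene.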