Artificial intelligence (AI) has been around for many years. Many consider that the official birth of AI as an academic discipline and field of research came in 1956, when participants at the Dartmouth Conference coined the term ‘AI’ and worked on the premise that ‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.’ Since then, AI has evolved continuously and has found uses in many areas, from manufacturing, transportation, and agriculture, to online services and cybersecurity solutions.
For example, several companies are working towards enabling self-driving cars, new automatic translation tools are being developed, and researchers are proposing AI-based technologies for various purposes such as detection of abusive domain names at the time of registration.
Internet companies are also increasingly developing AI tools to respond to different needs. Jigsaw, a Google-initiated start-up, has been working on Conversation AI, a tool aimed at automatically detecting hate speech and other forms of verbal abuse and harassment online. Facebook has built an AI program, called DeepText, that could help catch spam and other unwanted messages, and is using AI to combat the spread of terrorist content via its network.
These and similar advances have, or are expected to have, implications in several policy areas (economic, societal, educational, etc.), and governments, the technical community, and private sector actors worldwide are increasingly taking them into consideration.
The policy implications of AI are far-reaching. While AI can potentially lead to economic growth, there are growing concerns over the significant disruptions it could bring to the labour market. Issues related to privacy, safety, and security have also come into focus, with calls for the development of standards that can help ensure that AI applications have minimal unintended consequences.
AI has significant potential to drive economic growth. Used in production processes, AI systems bring automation, making processes smarter, faster, and cheaper, and therefore delivering savings and increased efficiency. AI can improve the efficiency and quality of existing products and services, and also generate new ones, thus leading to the creation of new markets. For this potential to be fully realised, the economic benefits of AI need to be broadly shared across society, and possible negative implications need to be adequately addressed.
One such possible implication concerns the disruptions that AI systems could bring to the labour market. Concerns have been raised that automated systems will make some jobs obsolete and lead to unemployment. Such concerns have prompted discussions about the introduction of a 'universal basic income' that would compensate individuals for disruptions brought to the labour market by robots and other AI systems.
There are, however, also opposing views, according to which AI advancements will generate new jobs that compensate for those lost, without affecting overall employment rates. One point of broad agreement is the need to better adapt education and training systems to the new requirements of the job market. It is also often underlined that adapting the workforce to these requirements means not only preparing the new generations, but also allowing the current workforce to re-skill and up-skill.
Artificial intelligence applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the need to design systems that can properly react to unforeseen situations and have minimal unintended consequences. AI also has implications in the cybersecurity field. On the one hand, there are cybersecurity risks specific to AI systems: as AI is increasingly embedded in critical systems, these systems need to be secured against potential cyber-attacks.
On the other hand, AI has applications in cybersecurity: the technology is used, for example, in email applications to perform spam filtering, and it is also increasingly employed in applications aimed at detecting more serious cybersecurity vulnerabilities and addressing cyber-threats.
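To make the spam-filtering example concrete, here is a minimal sketch of one classic approach: a naive Bayes classifier trained on a bag-of-words representation of messages. It assumes the scikit-learn library is available; the training messages and labels are invented for illustration and do not reflect any particular vendor's system.

```python
# Minimal spam-filtering sketch: bag-of-words features + naive Bayes.
# Training data is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",       # spam
    "Meeting agenda for Monday attached",     # legitimate
    "Cheap loans, limited offer, act now",    # spam
    "Quarterly report ready for review",      # legitimate
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = legitimate

# Turn raw text into word counts, then fit the classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Classify a new, unseen message
print(model.predict(["Free offer, click now"]))  # -> [1], i.e. spam
```

Real filters learn from far larger corpora and richer features, but the principle, learning word statistics from labelled examples, is the same.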
AI systems work with enormous amounts of data, and this raises concerns regarding privacy and data protection. Such concerns are well illustrated by the increasingly important interplay between AI, the Internet of Things (IoT), and big data.
AI provides ‘thinking’ for IoT devices, making them ‘smart’. These devices, in turn, generate significant amounts of data – sometimes labelled as big data. This data is then analysed and used to verify the initial AI algorithms and to identify new cognitive patterns that could be integrated into new AI algorithms. In this context, developers of AI systems are asked to ensure the integrity of the data used, as well as to embed privacy and data protection guarantees into AI applications.
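As a rough illustration of this loop, the sketch below 'learns' a normal pattern from historical sensor readings and flags new readings that deviate from it, the kind of signal that could then feed back into updated algorithms. The readings and the three-sigma threshold are hypothetical, chosen only to make the idea concrete.

```python
# Toy illustration of the AI-IoT-big data loop: learn 'normal' sensor
# behaviour from past data, then flag anomalies in new readings.
import statistics

# Hypothetical temperature readings streamed from an IoT sensor
history = [21.0, 21.4, 20.8, 21.1]   # data already analysed
incoming = [35.9, 21.2]              # new readings from the device

mean = statistics.mean(history)
stdev = statistics.stdev(history)

for value in incoming:
    if abs(value - mean) > 3 * stdev:
        # an unexpected pattern: candidate input for refining the algorithm
        print(f"Anomaly detected: {value}")
    else:
        print(f"Normal reading: {value}")
```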
Algorithms, which underpin AI systems, could also have consequences for other human rights, such as freedom of expression, and several civil society groups and intergovernmental organisations are looking into such issues.
As AI algorithms involve judgements and decision-making – replacing similar human processes – concerns have been raised regarding ethics, fairness, justice, transparency, and accountability. The risk of discrimination and bias in decisions made by AI systems is one such concern, as illustrated by the debate surrounding Jigsaw’s Conversation AI tool.
While potentially addressing problems related to misuse of the Internet public space, the software also raises a major ethical issue: how can machines determine what is and what is not appropriate language? One way of addressing some of these concerns could be to combine ethical training for technologists (encouraging them to prioritise ethical considerations in the creation of autonomous technologies) with the development of technical methods for designing AI systems in a way that allows them to avoid such risks (i.e. fairness, transparency, and accountability by design).
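As one very simple illustration of what a 'fairness by design' check might involve, the sketch below compares a model's positive-decision rates across two groups, a metric known as demographic parity. The decisions and group labels are invented; real audits use real data and a broader set of fairness metrics.

```python
# Toy fairness check: compare positive-decision rates across two groups.
# Decisions and group labels are invented for illustration.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical model outputs (1 = approve)
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

def positive_rate(group: str) -> float:
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

gap = abs(positive_rate("a") - positive_rate("b"))
print(f"Demographic parity gap: {gap:.2f}")  # a large gap may signal bias
```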
Researchers are carefully exploring the ethical challenges posed by AI and are working, for example, on the development of AI algorithms that can 'explain themselves'. The Institute of Electrical and Electronics Engineers (IEEE) has launched a Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, aimed at helping ensure that technologists are educated, trained, and empowered to prioritise ethical considerations in the design and development of intelligent systems.
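The sketch below shows one rudimentary way an algorithm can 'explain itself': inspecting the learned weights of a linear text classifier to see which words pushed it towards a decision. It assumes the scikit-learn library and uses invented comments and labels; it is a simple stand-in for, not an example of, the more sophisticated explainability methods researchers are developing.

```python
# Toy explainability sketch: a linear classifier's weights show which
# words drive its 'toxic' vs 'non-toxic' decisions. Data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

comments = ["you are brilliant", "you are awful", "great point", "awful take"]
toxic = [0, 1, 0, 1]  # hypothetical labels: 1 = toxic

vec = CountVectorizer()
X = vec.fit_transform(comments)
clf = LogisticRegression().fit(X, toxic)

# Rank words by learned weight: positive weights push towards 'toxic'
for word, weight in sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                           key=lambda pair: pair[1], reverse=True):
    print(f"{word}: {weight:+.2f}")
```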
One overarching question is whether AI-related challenges (especially regarding safety, privacy and data protection, and ethics) call for new legal and regulatory frameworks, or whether existing ones can be adapted to address them. Adapting current regulation is seen by many as the most suitable approach for the time being. Governments are advised that, when considering regulatory approaches towards AI, attention should be paid to ensuring that such approaches do not hinder innovation and progress.
Aspects related to accountability and liability in AI systems are also viewed as important legal issues to consider, and questions have been raised as to the legal status of AI machines (i.e. robots): should they be regarded as natural persons, legal persons, animals, or objects, or should a new category be created? In a January 2017 report containing recommendations to the European Commission on civil law rules on robotics, the European Parliament recommended, among other things, that the Commission consider 'creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently'. This proposal, however, was met with reticence by some, as demonstrated in an open letter addressed to the European Commission by over 150 AI and robotics researchers, industry leaders, and other experts. In their view, creating a legal personality for a robot is inappropriate from an ethical and legal perspective: while aware of the importance of addressing the issue of liability of autonomous robots, they believe that 'creating a legal status of electronic person would be ideological and non-sensical and non-pragmatic'.
While AI technologies continue to evolve at a fast pace and find more and more applications in various areas, countries increasingly understand that they need to keep up with this evolution and indeed take advantage of it. Many are elaborating national AI development strategies, as well as addressing the economic, social, and ethical implications of AI advancements.
China, for example, released a national AI development plan in 2017, intended to help make the country the world leader in AI by 2030 and build a national AI industry worth US$150 billion. The United Arab Emirates (UAE) also has an AI strategy, whose main aim is to support the development of AI solutions in several vital sectors of the country, such as transportation, healthcare, space exploration, smart consumption, water, technology, education, and agriculture. The country has even appointed a State Minister for AI, to work on ‘making the UAE the world’s best prepared for AI and other advanced technologies’. In 2018, France and Germany were among the countries that followed this trend of launching national AI development plans. These are only a few examples, and many more countries are working on such plans and strategies on an ongoing basis.
Artificial intelligence and its various existing and potential applications feature more and more often on the agendas of intergovernmental and international organisations. The International Telecommunication Union (ITU), for example, has facilitated discussions on intelligent transport systems, while the International Labour Organization (ILO) has started looking at the impact of automation on the world of work. AI has also featured high on the agendas of meetings such as the World Economic Forum, G7 summits, and OECD gatherings. All these entities and processes are exploring different policy implications of AI and suggesting approaches for tackling the inherent challenges.
Some intergovernmental organisations have established processes to look at certain aspects of AI and its uses. Within the UN system, for example, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts on lethal autonomous weapons systems (LAWS) to explore the technical, military, legal, and ethical implications of LAWS. The Council of Europe has set up a Committee of Experts to study the human rights dimensions of automated data processing and different forms of AI. The European Commission has created a High-Level Expert Group on Artificial Intelligence to support the implementation of a European strategy on AI and to elaborate recommendations on future policy development and on ethical, legal, and societal issues related to AI, including socio-economic challenges.
In 2013, the Meeting of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (Convention on Certain Conventional Weapons) agreed on a mandate on LAWS, providing for a group of experts 'to discuss the questions related to emerging technologies in the areas of lethal autonomous weapons systems'. The group convened three times, in 2014, 2015, and 2016, and produced reports that fed into the meetings of the High Contracting Parties to the Convention. In 2016, the CCW High Contracting Parties decided to establish a Group of Governmental Experts on LAWS (CCW GGE) to build on the work of the previous groups of experts.
Mandate. The CCW GGE was mandated to examine issues related to emerging technologies in the area of LAWS, in the context of the objectives and purposes of the Convention on Certain Conventional Weapons.
Composition. The group is open-ended in nature: it is open to all High Contracting Parties, states not party to the CCW, international organisations, and non-governmental organisations.
The Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT) was set up by the Council of Europe Committee of Ministers.
Mandate. The group is carrying out its work between January 2018 and December 2019.
Composition. The group is composed of 13 experts: seven representatives of governments or member states, designated by the Steering Committee on Media and Information Society (CDMSI), and six independent experts appointed by the Secretary General for their recognised expertise in the fields of freedom of expression and the independence of the media online and offline.
Artificial intelligence has been around for many years. Launched as a field of research more than 60 years ago, AI is now shaping the so-called fourth industrial revolution, and is implemented in many areas. Let’s take a look…
AI is also seen as a technology that can enhance human intelligence and assist humans in performing certain tasks.
Several companies, from Google to Uber, are working towards enabling self-driving cars powered by AI systems. Some have already started testing such cars on the roads. Drones powered by AI are no longer news, while autonomous weapons raise concerns about their potential implications for humankind.
Scientists are looking at ways in which AI can enhance other technologies, such as the IoT. A team at the Massachusetts Institute of Technology (MIT), for example, has developed a chip that could enable IoT devices to run powerful AI algorithms locally, thus improving their efficiency.
AI applications range from tools that can help catch spam and other unwanted messages on social networks or in e-mails, to algorithms that can help protect systems and networks as complex as the CERN grid.
Internet companies are increasingly using AI algorithms to deal with hate speech, terrorist content, and other forms of extremist content online. Researchers are proposing new algorithms that could be used in content control policies; one example is an AI algorithm for identifying racist code words (words used as substitutes for references to communities) on social media.
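The intuition behind such algorithms can be sketched briefly: words used as substitutes for other words tend to occur in similar contexts, so their learned vector representations end up close together. The tiny hand-made vectors below are purely illustrative assumptions; real systems learn high-dimensional embeddings from large social media corpora.

```python
# Toy illustration of code-word detection via vector similarity.
# The 3-dimensional 'context vectors' are hand-made for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

vectors = {
    "known_slur": [0.9, 0.1, 0.8],
    "code_word":  [0.85, 0.15, 0.75],  # appears in near-identical contexts
    "ordinary":   [0.1, 0.9, 0.2],
}

for word in ("code_word", "ordinary"):
    sim = cosine(vectors["known_slur"], vectors[word])
    verdict = "candidate code word" if sim > 0.9 else "likely benign"
    print(f"{word}: similarity={sim:.2f} -> {verdict}")
```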
Internet and tech companies employ AI to improve existing services or design new ones. A few examples: Twitter has started using AI to improve users’ experience, Google has launched a new job search engine based on AI algorithms, and Microsoft has a range of AI-based intelligent applications (from Calendar.help to the AI chatbot Zo).
Researchers have been working on improving the accuracy of translation tools by using AI. Examples include Microsoft Translator and Google Translate. The World Intellectual Property Organization (WIPO) also developed an AI-based tool to facilitate the translation of patent documents.
AI applications in the medical field range from medical robots to algorithms that could improve medical diagnosis and treatment.
AI and robotics are the drivers of the fourth industrial revolution, as automated systems are increasingly being deployed in IT, manufacturing, agriculture, power grids, rail systems, etc.
The private sector and the academic community alike are continuously exploring new avenues of AI development. While some focus on developing new AI-based applications in areas such as those mentioned above, others focus on trying to address issues such as accountability and responsibility in AI algorithms. In October 2017, for example, researchers at Columbia and Lehigh universities developed a tool, called DeepXplore, that could help bring transparency into AI systems, through a process described as ‘reverse engineering the learning process to understand its logic’. And in June 2018, IBM presented an AI system that can engage in reasoned arguments with humans on complex topics.
Author and curator: Sorina Teleanu
[Last updated: 21 February 2019]
">Except where otherwise noted, the content on this website is licensed by DiploFoundation under CC BY-NC-ND 4.0 International. External content is licensed by the respective authors. Please inform us when making use of the content.