Artificial intelligence: Policy implications, applications, and developments

On this page: Policy implications | Governmental initiatives | International processes | AI applications | Research

The field of artificial intelligence (AI) has seen significant advances over the past few years, in areas such as autonomous vehicles, smart buildings, medical robots, communications, and intelligent education systems.

For example, several companies are working towards enabling self-driving cars, new automatic translation tools are being developed, and researchers are proposing AI-based technologies for various purposes such as detection of abusive domain names at the time of registration. 

Internet companies are also increasingly developing AI tools to respond to different needs. Jigsaw, a Google-initiated start-up, has been working on Conversation AI, a tool aimed at automatically detecting hate speech and other forms of verbal abuse and harassment online. Facebook has built an AI program called DeepText, which could help catch spam and other unwanted messages, and is using AI to combat the spread of terrorist content via its network.

These and other similar advances are expected to have implications in several policy areas (economic, social, educational, etc.), and governments, the technical community, and private sector actors worldwide are increasingly considering them.


The policy implications of artificial intelligence

The policy implications of AI are far-reaching. While AI can potentially lead to economic growth, there are growing concerns over the significant disruptions it could bring to the labour market. Issues related to privacy, safety, and security have also been brought into focus, with calls being made for the development of standards that can help ensure that AI applications have minimal unintended consequences.

Economic and social

AI has significant potential to lead to economic growth. Used in production processes, AI systems bring automation, making processes smarter, faster, and cheaper, and therefore bringing savings and increased efficiency. AI can improve the efficiency and quality of existing products and services, and also generate new ones, thus leading to the creation of new markets. For this potential to be fully realised, the economic benefits of AI need to be broadly shared across society, and possible negative implications adequately addressed.

One such possible implication relates to the disruptions that AI systems could bring to the labour market. There are concerns that automated systems will make some jobs obsolete and lead to unemployment. Such concerns have led to discussions about the introduction of a 'universal basic income' that would compensate individuals for the disruptions brought to the labour market by robots and other AI systems.

There are, however, also opposing views, according to which AI advancements will generate new jobs that compensate for those lost, without affecting overall employment rates. One point on which there is broad agreement is the need to better adapt education and training systems to the new requirements of the job market. It is also often underlined that adapting the workforce to an AI-driven economy means not only preparing new generations, but also enabling the current workforce to re-skill and up-skill.


Safety and security

Artificial intelligence applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the need to design systems that can properly react to unforeseen situations and have minimal unintended consequences. AI also has implications in the cybersecurity field. On the one hand, there are cybersecurity risks specific to AI systems: as AI is increasingly embedded in critical systems, these systems need to be secured against potential cyber-attacks.

On the other hand, AI has applications in cybersecurity: the technology is used, for example, in email applications to perform spam filtering, and it is also increasingly employed in applications aimed at detecting more serious cybersecurity vulnerabilities and addressing cyber-threats.
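
To illustrate how such AI-based spam filtering can work at its simplest, the sketch below trains a small Naive Bayes text classifier on a handful of labelled messages. It assumes the scikit-learn library is available, and both the sample messages and the labels are invented purely for illustration; it does not represent the implementation used by any particular email provider.

```python
# Minimal sketch of an AI-based spam filter, assuming scikit-learn is installed.
# The tiny training set below is invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click here",   # spam
    "Limited offer, claim your reward",   # spam
    "Meeting moved to 3pm tomorrow",      # legitimate
    "Please review the attached report",  # legitimate
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features fed into a Naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free reward today"]))  # likely output: ['spam']
```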


Privacy and data protection

AI systems work with enormous amounts of data, and this raises concerns regarding privacy and data protection. Such concerns are well illustrated by the increasingly important interplay between AI, the Internet of Things (IoT), and big data.

AI provides ‘thinking’ for IoT devices, making them ‘smart’. These devices, in turn, generate significant amounts of data – sometimes labelled as big data. This data is then analysed and used to verify the initial AI algorithms and to identify new cognitive patterns that could be integrated into new AI algorithms. In this context, developers of AI systems are asked to ensure the integrity of the data they use, as well as to embed privacy and data protection guarantees into AI applications.
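
As one possible illustration of what embedding privacy and data protection guarantees into such an AI/IoT data pipeline can mean in practice, the sketch below pseudonymises device identifiers with a keyed hash and strips direct identifiers before readings are passed on for analysis. The field names, key handling, and data format are assumptions made for the example, not a prescribed or standard method.

```python
# Sketch of privacy-by-design for IoT telemetry: device identifiers are replaced
# with keyed, irreversible pseudonyms and direct identifiers are dropped before
# the data reaches the analytics/AI pipeline. Field names and the key are
# illustrative assumptions only.
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-securely-stored-key"  # assumption: stored securely outside the code in practice

def pseudonymise(device_id: str) -> str:
    """Replace a raw device ID with a keyed, irreversible pseudonym."""
    return hmac.new(SECRET_KEY, device_id.encode(), hashlib.sha256).hexdigest()[:16]

def prepare_for_analysis(reading: dict) -> dict:
    """Keep only the fields the AI pipeline needs; drop direct identifiers."""
    return {
        "device": pseudonymise(reading["device_id"]),
        "temperature": reading["temperature"],
        "timestamp": reading["timestamp"],
    }

raw = {"device_id": "thermostat-42", "owner": "Jane Doe",
       "temperature": 21.5, "timestamp": "2018-06-18T10:00:00Z"}
print(json.dumps(prepare_for_analysis(raw)))  # 'owner' is dropped, the device ID is pseudonymised
```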


Ethics

As AI algorithms involve judgements and decision-making – replacing similar human processes – concerns have been raised regarding ethics, fairness, justice, transparency, and accountability. The risk of discrimination and bias in decisions made by AI systems is one such concern, as illustrated in the debate that has surrounded Jigsaw’s Conversation AI tool.

While potentially addressing problems related to the misuse of the Internet as a public space, the software also raises a major ethical issue: how can machines determine what is and what is not appropriate language? One way of addressing some of these concerns could be to combine ethical training for technologists (encouraging them to prioritise ethical considerations in the creation of autonomous technologies) with the development of technical methods for designing AI systems in a way that allows them to avoid such risks (i.e. fairness, transparency, and accountability by design).

Researchers are carefully exploring the ethical challenges posed by AI and are working, for example, on the development of AI algorithms that can 'explain themselves'. The Institute of Electrical and Electronics Engineers (IEEE) has launched a Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, aimed at ensuring that technologists are educated, trained, and empowered to prioritise ethical considerations in the design and development of intelligent systems.
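
To give a flavour of what an algorithm that can 'explain itself' might look like in its simplest form, the sketch below trains a linear text classifier (assuming scikit-learn and NumPy are available) on a few invented examples and reports the words that contributed most to a given prediction. It is only an illustration of the general idea of explainability, not a reconstruction of Conversation AI or of any specific research system.

```python
# Minimal sketch of a 'self-explaining' text classifier: a linear model whose
# per-word contributions can be inspected for any prediction.
# The training data and labels are invented for illustration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["you are an idiot", "idiot, go away",
         "have a nice day", "thanks for your help"]
labels = [1, 1, 0, 0]  # 1 = abusive, 0 = not abusive (toy labels)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def explain(text, top_k=3):
    """Return the words that pushed the prediction towards 'abusive' the most."""
    vec = vectorizer.transform([text]).toarray()[0]
    contributions = vec * clf.coef_[0]         # per-word contribution to the score
    words = vectorizer.get_feature_names_out()
    top = np.argsort(contributions)[::-1][:top_k]
    return [(words[i], round(float(contributions[i]), 3)) for i in top if contributions[i] > 0]

print(explain("what an idiot"))  # e.g. [('idiot', ...)]
```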


Legal

One overarching question is whether AI-related challenges (especially regarding safety, privacy and data protection, and ethics) call for new legal and regulatory frameworks, or whether existing ones can be adapted to address them. Adapting current regulation is seen by many as the most suitable approach for the time being. Governments are advised that, when considering regulatory approaches towards AI, attention should be paid to ensuring that such approaches do not hinder innovation and progress.

Aspects related to accountability and liability in AI systems are also viewed as important legal issues to consider, and questions are raised as to the legal status of AI machines (i.e. robots): should they be regarded as natural persons, legal persons, animals or objects, or should a new category be created? In a January 2017 report containing recommendations to the European Commission on civil law rules on robotics, the European Parliament recommended, among other things, that the Commission consider 'creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently'.



Governmental initiatives

While AI technologies continue to evolve at a fast pace and find more and more applications in various areas, countries increasingly understand that they need to keep up with this evolution and take advantage of it. Many are elaborating national AI development strategies, as well as addressing the economic, social, and ethical implications of AI advancements.



International processes 

Artificial intelligence and its various existing and potential applications feature more and more often on the agenda of intergovernmental and international organisations. The International Telecommunication Union (ITU), for example, has facilitated discussions on intelligent transport systems, while the International Labour Organisation (ILO) has started looking at the impact of automation on the world of work. Within the UN system, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts on lethal autonomous weapons systems (LAWS), to explore the technical, military, legal, and ethical implications of LAWS.

Group of Governmental Experts on Lethal Autonomous Weapons Systems

In 2013, the Meeting of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (Convention on Certain Conventional Weapons) agreed to mandate a group of experts 'to discuss the questions related to emerging technologies in the areas of lethal autonomous weapons systems'. The group was convened three times, in 2014, 2015, and 2016, and produced reports which fed into the meetings of the High Contracting Parties to the Convention. In 2016, the High Contracting Parties decided to establish a Group of Governmental Experts on LAWS (CCW GGE), to build on the work of the previous groups of experts.

Mandate. The CCW GGE was mandated to examine issues related to emerging technologies in the area of LAWS, in the context of the objectives and purposes of the Convention on Certain Conventional Weapons.

Composition. The group has an open-ended nature, and is open to all High Contracting Parties and non-State Parties to the CCW, international organisations, and non-governmental organisations.


Developments

  • 20 November 2017: At the end of its November 2017 meeting, the CCW GGE adopted a report outlining several conclusions and recommendations. Among these: international humanitarian law applies fully to all weapons systems, including the potential development and use of LAWS; responsibility for the deployment of any new weapon systems in armed conflicts remains with states; given the dual nature of technologies in the area of intelligent autonomous systems, the Group's work should not hamper progress in or access to civilian research and development and use of these technologies; there is a need to further assess the aspects of human-machine interaction in the development, deployment, and use of emerging technologies in the area of LAWS; and there should be further discussions on possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of LAWS. The Group also recommended that it meet for a duration of 10 days in 2018 in Geneva.
  • 4 September 2017: The Chairperson of the CCW GGE submitted a Food-for-thought Paper outlining a series of questions that could form the basis for discussion at the group's meeting in November. The proposed questions revolve around three main areas: technology (e.g. whether the technologies that could contribute to LAWS could be broadly characterised as AI/autonomous systems; whether there is likely to be a shift from narrow AI to general AI; etc.), military effects (e.g. whether LAWS could be accommodated under existing chains of military command and control), and legal and ethical issues (e.g. where does legal accountability and liability reside for autonomous systems? what are the main features of national/regional laws planned or already in place for autonomous systems? etc.).


 

Council of Europe Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence

The Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT) was set up by the Council of Europe Committee of Ministers with the following tasks:

  • Prepare follow-up work, with a view to a possible standard-setting instrument, on the basis of the study on the human rights dimensions of automated data processing techniques (in particular algorithms) and possible regulatory implications.
  • Carry out a study on the development and use of new digital technologies and services, including different forms of artificial intelligence, as they may impact peoples’ enjoyment of fundamental rights and freedoms in the digital age – with a view to give guidance for future standard-setting in this field.
  • Carry out a study on a possible standard-setting instrument on forms of liability and jurisdictional issues in the application of civil and administrative defamation laws in Council of Europe member states.

Mandate. The group will carry out its work between January 2018 and December 2019.

Composition. The group is composed of 13 experts: seven representatives of governments or member States, designated by the Steering Committee on Media and Information Society (CDMSI), and six independent experts with recognised expertise in the fields of freedom of expression and independence of the media online and offline, appointed by the Secretary General.

Meetings

  • 6–7 March 2018: First meeting

The applications of artificial intelligence

Artificial intelligence has been around for many years. Launched as a field of research more than 60 years ago, AI is now shaping the so-called fourth industrial revolution, and is implemented in many areas. Let’s take a look…

AI-augmented systems

AI is also seen as a technology that can enhance human intelligence and assist humans in performing certain tasks.


Autonomous systems (cars, weapons, etc.)

Several companies, from Google to Uber, are working towards enabling self-driving cars powered by AI systems. Some have already started testing such cars on the roads. Drones powered by AI are no longer news, while autonomous weapons raise concerns about their potential implications for humankind.


Internet of Things

Scientists are looking at ways in which AI can enhance other technologies, such as the IoT. A team at the Massachusetts Institute of Technology (MIT), for example, has developed a chip that could enable IoT devices to run powerful AI algorithms locally, thus improving their efficiency.

Cybersecurity and cybercrime

AI applications range from tools that can help catch spam and other unwanted messages on social networks or in e-mails, to algorithms that can help protect systems and networks as complex as the CERN grid.

Content policy

Internet companies are increasingly using AI algorithms to deal with hate speech, terrorist content, and other forms of extremist content online. Researchers are also proposing new algorithms that could be used in content control policy, one example being an AI algorithm for identifying racist code words (words used as substitutes for references to communities) on social media.


Designing or improving online services

Internet and tech companies employ AI to improve existing services or design new ones. A few examples: Twitter has started using AI to improve users’ experience, Google has launched a new job search engine based on AI algorithms, and Microsoft has a range of AI-based intelligent applications (from Calendar.help to the AI chatbot Zo).


Translation

Researchers have been working on improving the accuracy of translation tools by using AI. Examples include Microsoft Translator and Google Translate. The World Intellectual Property Organization (WIPO) also developed an AI-based tool to facilitate the translation of patent documents.


Healthcare

AI applications in the medical field range from medical robots to algorithms that could improve medical diagnosis and treatment.

Industrial applications

AI and robotics are the drivers of the fourth industrial revolution, as automated systems are increasingly being deployed in IT, manufacturing, agriculture, power grids, rail systems, etc.



Ongoing research

The private sector and the academic community alike are continuously exploring new avenues of AI development. While some focus on developing new AI-based applications in areas such as those mentioned above, others focus on trying to address issues such as accountability and responsibility in AI algorithms.


 

Author and curator: Sorina Teleanu


 

[Last updated: 18 June 2018]

 
