Artificial intelligence: Policy implications, applications, and developments


Artificial intelligence (AI) has been around for many years. Many consider that the official birth of AI as an academic discipline and field of research came in 1956, when participants at the Dartmouth Conference coined the term ‘AI’ and put forward the conjecture that ‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it’. From that moment on, AI has been continuously evolving and has found uses in many areas, from manufacturing, transportation, and agriculture, to online services and cybersecurity solutions.

For example, several companies are working towards enabling self-driving cars, new automatic translation tools are being developed, and researchers are proposing AI-based technologies for various purposes, such as the detection of abusive domain names at the time of registration.

Internet companies are also increasingly developing AI tools to respond to different needs. Jigsaw, a Google-initiated start-up, has been working on Conversation AI, a tool aimed at automatically detecting hate speech and other forms of verbal abuse and harassment online. Facebook has built an AI program, called DeepText, that could help catch spam and other unwanted messages, and is using AI to combat the spread of terrorist content on its network.

These and other similar advances have, or are expected to have, implications for several policy areas (economic, social, educational, etc.), and governments, the technical community, and private sector actors worldwide are paying them increasing attention.


The policy implications of artificial intelligence

The policy implications of AI are far-reaching. While AI can potentially lead to economic growth, there are growing concerns over the significant disruptions it could bring to the labour market. Issues related to privacy, safety, and security have also come into focus, with calls for the development of standards that can help ensure that AI applications have minimal unintended consequences.

Economic and social

AI has significant potential to drive economic growth. Used in production processes, AI systems bring automation, making processes smarter, faster, and cheaper, and therefore delivering savings and increased efficiency. AI can improve the efficiency and quality of existing products and services, and also generate new ones, thus leading to the creation of new markets. For this potential to be fully realised, the economic benefits of AI need to be broadly shared across society, and possible negative implications adequately addressed.

One such possible implication relates to the disruptions that AI systems could bring to the labour market. Concerns have been raised that automated systems will make some jobs obsolete and lead to unemployment. Such concerns have prompted discussions about the introduction of a ‘universal basic income’ that would compensate individuals for labour market disruptions brought about by robots and other AI systems.

There are, however, also opposing views, according to which AI advancements will generate new jobs that compensate for those lost, without affecting overall employment rates. One point of broad agreement is the need to better adapt education and training systems to the new requirements of the job market. It is also often underlined that adapting the workforce to these new requirements means not only preparing new generations, but also enabling the current workforce to re-skill and up-skill.

Safety and security

Artificial intelligence applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the need to design systems that can properly react to unforeseen situations and have minimal unintended consequences. AI also has implications in the cybersecurity field. On the one hand, there are cybersecurity risks specific to AI systems: as AI is increasingly embedded in critical systems, these systems need to be secured against potential cyber-attacks.

On the other hand, AI has applications in cybersecurity; the technology is used, for example, in email applications to perform spam filtering, and it is increasingly employed in applications aimed at detecting more serious cybersecurity vulnerabilities and addressing cyber-threats.
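As a simple illustration of how such spam filtering works, the sketch below trains a Naive Bayes text classifier on a handful of invented messages; production filters apply the same supervised learning principle at much larger scale (the messages, labels, and model choice here are assumptions made purely for illustration):

```python
# A minimal sketch of AI-based spam filtering: a Naive Bayes classifier
# trained on a tiny, made-up set of labelled messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click here",      # spam
    "Limited offer: cheap loans, act fast",  # spam
    "Meeting moved to 3pm, see agenda",      # legitimate
    "Here are the minutes from yesterday",   # legitimate
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features feed a probabilistic classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free prize today"]))  # -> ['spam']
```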

Privacy, data protection, and other human rights

AI systems work with enormous amounts of data, and this raises concerns regarding privacy and data protection. Such concerns are well illustrated by the increasingly important interplay between AI, the Internet of Things (IoT), and big data.

AI provides ‘thinking’ for IoT devices, making them ‘smart’. These devices, in turn, generate significant amounts of data – sometimes labelled big data. This data is then analysed and used to verify the initial AI algorithms and to identify new cognitive patterns that could be integrated into new AI algorithms. In this context, developers of AI systems are asked to ensure the integrity of the data used, as well as to embed privacy and data protection guarantees into AI applications.
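Differential privacy is one well-studied technique for embedding such guarantees. The minimal sketch below, which uses invented sensor readings and an illustrative privacy parameter, adds calibrated noise to an aggregate query so that the published result reveals little about any individual device or user:

```python
# A sketch of the Laplace mechanism from differential privacy, applied
# to a count query over hypothetical IoT sensor readings.
import numpy as np

rng = np.random.default_rng(42)
readings = rng.normal(loc=21.0, scale=2.0, size=1000)  # fake sensor data

def private_count(data, threshold, epsilon):
    """Count readings above a threshold, with epsilon-DP Laplace noise.

    A count query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace(1/epsilon) noise suffices.
    """
    true_count = int(np.sum(data > threshold))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

print(private_count(readings, threshold=24.0, epsilon=0.5))
```

Smaller values of epsilon give stronger privacy but noisier results, a trade-off that system designers must weigh.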

Algorithms, which feed AI systems, could also have consequences for other human rights, such as freedom of expression, and several civil society groups and intergovernmental organisations are looking into such issues.

Ethics

As AI algorithms involve judgements and decision-making – replacing similar human processes – concerns have been raised regarding ethics, fairness, justice, transparency, and accountability. The risk of discrimination and bias in decisions made by AI systems is one such concern, as illustrated by the debate surrounding Jigsaw’s Conversation AI tool.

While potentially addressing problems related to the misuse of the Internet as a public space, the software also raises a major ethical issue: how can machines determine what is and what is not appropriate language? One way of addressing some of these concerns could be to combine ethical training for technologists (encouraging them to prioritise ethical considerations in the creation of autonomous technologies) with the development of technical methods for designing AI systems in such a way that they avoid these risks (i.e. fairness, transparency, and accountability by design).
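To make ‘fairness by design’ slightly more concrete, one minimal technical method is to audit a model’s decisions against a group-fairness metric before deployment. The sketch below computes the demographic parity gap for two hypothetical groups, using invented decisions:

```python
# A minimal fairness audit: compare positive-decision rates across a
# protected attribute. Groups and decisions are made up for illustration.
import numpy as np

group = np.array(["a", "a", "a", "b", "b", "b", "b", "a"])  # protected attribute
approved = np.array([1, 0, 1, 0, 0, 1, 0, 1])               # model decisions

def demographic_parity_gap(groups, decisions):
    """Absolute difference in positive-decision rates between groups."""
    rate_a = decisions[groups == "a"].mean()
    rate_b = decisions[groups == "b"].mean()
    return abs(rate_a - rate_b)

print(f"Demographic parity gap: {demographic_parity_gap(group, approved):.2f}")
```

A gap near zero is no proof of fairness, but a large gap is an early warning that the system may be treating groups differently.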

Researchers are carefully exploring the ethical challenges posed by AI and are working, for example, on the development of AI algorithms that can ‘explain themselves’. The Institute of Electrical and Electronics Engineers (IEEE) has launched a Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, aimed at helping to ensure that technologists are educated, trained, and empowered to prioritise ethical considerations in the design and development of intelligent systems.
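A modest example of a model that can ‘explain itself’ is a decision tree, whose learned rules can be printed and inspected directly. The sketch below uses the standard Iris dataset purely as a convenient stand-in; explainability research, of course, targets far more complex models:

```python
# A self-explaining model in miniature: a shallow decision tree whose
# decision rules can be exported as human-readable text.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# Every prediction can be traced back to explicit if/else rules.
print(export_text(tree, feature_names=list(iris.feature_names)))
```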

Legal

One overarching question is whether AI-related challenges (especially regarding safety, privacy and data protection, and ethics) call for new legal and regulatory frameworks, or whether existing ones can be adapted to address them. Adapting current regulation is seen by many as the most suitable approach for the time being. Governments are advised, when considering regulatory approaches to AI, to pay attention to ensuring that such approaches do not hinder innovation and progress.

Aspects related to accountability and liability in AI systems are also viewed as important legal issues, and questions are raised as to the legal status of AI machines (i.e. robots): should they be regarded as natural persons, legal persons, animals, or objects, or should a new category be created? In a January 2017 report containing recommendations to the European Commission on civil law rules on robotics, the European Parliament recommended, among other things, that the Commission consider ‘creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently’.

This proposal, however, was met with reticence by some, as demonstrated in an open letter addressed to the European Commission by over 150 experts in AI, robotics, industry, and other fields. In their view, creating a legal personality for a robot is inappropriate from an ethical and legal perspective: while aware of the importance of addressing the issue of liability of autonomous robots, they believe that ‘creating a legal status of electronic person would be ideological and non-sensical and non-pragmatic’.


Governmental initiatives

While AI technologies continue to evolve at a fast pace and to find more and more applications in various areas, countries increasingly understand that they need to keep up with this evolution and, indeed, take advantage of it. Many are elaborating national AI development strategies, as well as addressing the economic, social, and ethical implications of AI advancements.

China, for example, released a national AI development plan in 2017, intended to help make the country the world leader in AI by 2030 and to build a national AI industry worth $150 billion. The United Arab Emirates (UAE) also has an AI strategy, whose main aim is to support the development of AI solutions in several vital sectors of the country, such as transportation, healthcare, space exploration, smart consumption, water, technology, education, and agriculture. The country has even appointed a State Minister for AI, to work on ‘making the UAE the world’s best prepared for AI and other advanced technologies’. In 2018, France and Germany were among the countries that followed this trend of launching national AI development plans. These are only a few examples; many more countries are working on such plans and strategies on an ongoing basis, as the map below shows.

[Map: countries with national AI strategies and plans]


AI on the international scene

Artificial intelligence and its various existing and potential applications feature more and more often on the agenda of intergovernmental and international organisations. The International Telecommunication Union (ITU), for example, has facilitated discussions on intelligent transport systems, while the International Labour Organization (ILO) has started looking at the impact of automation on the world of work. AI has also featured high on the agenda of meetings such as the World Economic Forum, G7 summits, and OECD gatherings. All these entities and processes are exploring different policy implications of AI and suggesting approaches for tackling inherent challenges.


International processes

Some intergovernmental organisations have established processes to look at certain aspects of AI and its uses. Within the UN system, for example, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts on lethal autonomous weapons systems (LAWS) to explore the technical, military, legal, and ethical implications of LAWS. The Council of Europe has set up a Committee of Experts to study the human rights dimensions of automated data processing and different forms of AI. The European Commission created a High-Level Expert Group on Artificial Intelligence to support the implementation of a European strategy on AI and to elaborate recommendations on future-related policy development and on ethical, legal, and societal issues related to AI, including socio-economic challenges.

Group of Governmental Experts on Lethal Autonomous Weapons Systems

In 2013, the Meeting of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (Convention on Certain Conventional Weapons) agreed on a mandate on LAWS and mandated the creation of a group of experts ‘to discuss the questions related to emerging technologies in the areas of lethal autonomous weapons systems’. The group was convened three times, in 2014, 2015, and 2016, and produced reports which fed into meetings of the High Contracting Parties to the Convention. In 2016, the CCW High Contracting Parties decided to establish a Group of Governmental Experts on LAWS (CCW GGE) to build on the work of the previous groups of experts.

Mandate. The CCW GGE was mandated to examine issues related to emerging technologies in the area of LAWS, in the context of the objectives and purposes of the Convention on Certain Conventional Weapons.

Composition. The group is open-ended in nature, being open to all High Contracting Parties and non-State Parties to the CCW, international organisations, and non-governmental organisations.

Developments

  • 23 October 2018: In its Report of the 2018 session, the CCW GGE reiterated the applicability of international humanitarian law to the development and use of LAWS, and noted that human responsibility must be retained when it comes to decisions on the use of weapons systems. The report also summarised the Group’s discussions on the human element in the use of lethal force, on aspects of human-machine interaction in the development, deployment, and use of emerging technologies in the area of LAWS, and on the potential military implications of related technologies. With regard to possible policy options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of LAWS, the report outlines the proposals discussed within the Group, including a legally binding instrument, a political declaration, and clarity on the implementation of existing obligations under international law, in particular international humanitarian law.
  • 20 November 2017: At the end of its November 2017 meeting, the CCW GGE adopted a report outlining several conclusions and recommendations, among them: international humanitarian law applies fully to all weapons systems, including the potential development and use of LAWS; responsibility for the deployment of any new weapons systems in armed conflict remains with states; given the dual nature of technologies in the area of intelligent autonomous systems, the Group’s work should not hamper progress in, or access to, civilian research, development, and use of these technologies; there is a need to further assess aspects of human-machine interaction in the development, deployment, and use of emerging technologies in the area of LAWS; and there should be further discussions on possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of LAWS. The Group also recommended that it meet for a duration of ten days in 2018 in Geneva.


Council of Europe Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence

The Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT) was set up by the Council of Europe Committee of Ministers with the following tasks:

  • Prepare follow-up, with a view to the preparation of a possible standard-setting instrument, on the basis of the study on the human rights dimensions of automated data processing techniques (in particular algorithms) and possible regulatory implications.
  • Carry out a study on the development and use of new digital technologies and services, including different forms of artificial intelligence, as they may impact people’s enjoyment of fundamental rights and freedoms in the digital age, with a view to giving guidance for future standard-setting in this field.
  • Carry out a study on a possible standard-setting instrument on forms of liability and jurisdictional issues in the application of civil and administrative defamation laws in Council of Europe member states.

Mandate. The group is carrying out its work between January 2018 and December 2019.

Composition. The group is composed of 13 experts: seven representatives of governments or member states, designated by the Steering Committee on Media and Information Society (CDMSI), and six independent experts appointed by the Secretary General, with recognised expertise in the fields of freedom of expression and independence of the media online and offline.

Meetings

  • Upcoming:
    • 18–19 March 2019, Strasbourg | Third meeting of MSI-AUT – Draft agenda
  • Previous:
    • 17–18 September 2018, Strasbourg | Second meeting of MSI-AUT
    • 6–7 March 2018 | First meeting of MSI-AUT

Developments

  • 13 February 2019 | The Committee of Ministers of the Council of Europe adopted the Declaration on the manipulative capabilities of algorithmic processes.
  • 16 November 2018 | MSI-AUT publishes a Draft Declaration of the Committee of Ministers on the manipulative capabilities of algorithmic processes. The document draws the attention of states to the right of all human beings to take decisions and form opinions independently of automated systems. It underlines the risks of using massive amounts of personal and non-personal data to sort and micro-target people, to identify vulnerabilities, and to reshape social environments to achieve specific goals and vested interests. The draft encourages states (1) to consider additional protective frameworks to address the impacts of the targeted use of data on the exercise of human rights; (2) to initiate inclusive public debates on permissible forms of persuasion and unacceptable manipulation; (3) to take measures to ensure that effective legal guarantees are in place against such forms of interference; and (4) to empower users by promoting digital literacy on how much data is generated and used for commercial purposes.
  • 12 November 2018 | MSI-AUT publishes a Draft Recommendation of the Committee of Ministers to member States on the human rights impacts of algorithmic systems. The document outlines that the misuse of algorithmic systems can jeopardise the rights to privacy, freedom of expression, and prohibition of discrimination provided by the European Convention for the Protection of Human Rights and Fundamental Freedoms. Although public and private sector initiatives to develop ethical guidelines for the design, development, and deployment of algorithmic systems are welcome, they do not substitute for the duty of states to guarantee that human rights obligations are embedded into all steps of their algorithmic operations. In addition, states should ensure appropriate regulatory frameworks to promote human rights-respecting technological innovation by all actors. The Recommendation also outlines a series of guidelines for states on actions to be taken vis-à-vis the human rights impacts of algorithmic systems, such as data quality and modelling standards; principles of transparency and contestability; the provision of effective judicial and non-judicial remedies to review algorithmic decisions; the implementation of precautionary measures to maintain control over the use of algorithmic systems; and empowerment through research and public awareness. Lastly, the document underlines responsibilities for private actors with respect to human rights and fundamental freedoms that states should aim to ensure, including guidelines on data quality and modelling, transparency, effective remedies, and precautionary measures.
  • 9 November 2018 | MSI-AUT publishes a draft Study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. The study examines, among other things, the possible risks, harms, and wrongs that the application of advanced technologies might bring, especially in relation to the right to a fair trial and to ‘due process’, the right to freedom of expression and information, the right to privacy and data protection, and the right to protection against discrimination in the exercise of rights and freedoms. One of the key conclusions of the study is that ‘those who deploy and reap the benefits of advanced digital technologies (including AI) in the provision of services must be responsible for their adverse consequences’. As such, it is suggested that states introduce regulations which ensure that responsibility for the adverse risks, harms, and wrongs arising from the operation of advanced digital technologies is duly allocated.
  • 9 October 2018 | MSI-AUT publishes a Draft study on forms of liability and jurisdictional issues in the application of civil and administrative defamation laws in Council of Europe member states.


The applications of artificial intelligence

Artificial intelligence has been around for many years. Launched as a field of research more than 60 years ago, AI is now shaping the so-called fourth industrial revolution, and is implemented in many areas. Let’s take a look…

AI-augmented systems

AI is also seen as a technology that can enhance human intelligence and assist humans in performing certain tasks.

Autonomous systems (cars, weapons, etc.)

Several companies, from Google to Uber, are working towards enabling self-driving cars powered by AI systems. Some have already started testing such cars on the roads. Drones powered by AI are no longer news, while autonomous weapons raise concerns about their potential implications for humankind.

Internet of Things

Scientists are looking at ways in which AI can enhance other technologies, such as the IoT. A team at the Massachusetts Institute of Technology (MIT), for example, has developed a chip that could enable IoT devices to run powerful AI algorithms locally, thus improving their efficiency.
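One reason running AI locally on small devices is difficult is their limited memory and power; techniques such as weight quantization help. The toy sketch below, which is not the MIT chip’s actual method, stores 32-bit neural network weights as 8-bit integers, cutting memory use by roughly four times at the cost of a small approximation error:

```python
# A toy illustration of weight quantization for on-device AI: map
# 32-bit floats onto 8-bit integers and measure the reconstruction error.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256)).astype(np.float32)

scale = np.abs(weights).max() / 127.0          # fit floats into int8 range
quantized = np.round(weights / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

print(f"Max reconstruction error: {np.abs(weights - dequantized).max():.4f}")
print(f"Memory: {weights.nbytes} bytes -> {quantized.nbytes} bytes")
```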

Cybersecurity and cybercrime

AI applications range from tools that can help catch spam and other unwanted messages on social networks or in e-mails, to algorithms that can help protect systems and networks as complex as the CERN grid.
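A common pattern behind such protective tools is anomaly detection: a model learns what normal activity looks like and flags deviations. The sketch below uses an Isolation Forest on invented traffic features; the features and thresholds are assumptions for illustration, not the design of any specific system:

```python
# Anomaly-based intrusion detection in miniature: an Isolation Forest
# trained on 'normal' traffic flags a suspicious traffic pattern.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Hypothetical features: [packets per second, average packet size]
normal_traffic = rng.normal(loc=[100, 500], scale=[10, 50], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=7)
detector.fit(normal_traffic)

suspicious = np.array([[950, 60]])   # e.g. a flood of tiny packets
print(detector.predict(suspicious))  # -> [-1], i.e. flagged as an anomaly
```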

Content policy

Internet companies are increasingly using AI algorithms to deal with hate speech, terrorism content, and other forms of extremist content online. Researchers are also proposing new algorithms that could be used in content control policy, one example being an AI algorithm for identifying racist code words (words used as substitutes for references to communities) on social media.
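One way such code-word detection can work is through word embeddings: words used in similar contexts receive similar vectors, so a coded substitute drifts towards the terms it replaces. The sketch below trains a tiny word2vec model on an invented corpus; real studies train on large volumes of social media posts, and flagged candidates still require human review:

```python
# A sketch of embedding-based code-word detection: words that appear in
# the same contexts as known community references get similar vectors.
# The corpus is tiny and invented, so the output is illustrative only.
from gensim.models import Word2Vec

corpus = [
    "ban all googles from the platform".split(),
    "the googles are ruining this neighbourhood".split(),
    "members of that community are ruining this neighbourhood".split(),
    "ban all members of that community".split(),
]

model = Word2Vec(corpus, vector_size=32, window=3, min_count=1,
                 epochs=200, seed=1)

# Candidate code words are those whose nearest neighbours overlap with
# known target terms; these are then passed to human reviewers.
print(model.wv.most_similar("googles", topn=3))
```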

Designing or improving online services

Internet and tech companies employ AI to improve existing services or design new ones. A few examples: Twitter has started using AI to improve users’ experience, Google has launched a new job search engine based on AI algorithms, and Microsoft has a range of AI-based intelligent applications (from Calendar.help to the AI chatbot Zo).

Translation

Researchers have been working on improving the accuracy of translation tools by using AI. Examples include Microsoft Translator and Google Translate. The World Intellectual Property Organization (WIPO) has also developed an AI-based tool to facilitate the translation of patent documents.

Healthcare and medical sciences

AI applications in the medical field range from medical robots to algorithms that could improve medical diagnosis and treatment.

Industrial applications

AI and robotics are the drivers of the fourth industrial revolution, as automated systems are increasingly being deployed in IT, manufacturing, agriculture, power grids, rail systems, etc.


Ongoing research

The private sector and the academic community alike are continuously exploring new avenues of AI development. While some focus on developing new AI-based applications in areas such as those mentioned above, others focus on addressing issues such as accountability and responsibility in AI algorithms. In October 2017, for example, researchers at Columbia and Lehigh universities developed a tool, called DeepXplore, that could help bring transparency into AI systems, through a process described as ‘reverse engineering the learning process to understand its logic’. And in June 2018, IBM presented an AI system that can engage in reasoned arguments with humans on complex topics.


Author and curator: Sorina Teleanu


[Last updated: 10 July 2019]
