Artificial intelligence: Policy implications, applications, and developments

The field of artificial intelligence (AI) has seen significant advances over the past few years, in areas such as autonomous vehicles and smart buildings, medical robots, communications, and intelligent education systems.

For example, several companies are working towards enabling self-driving cars, new automatic translation tools are being developed, and researchers are proposing AI-based technologies for various purposes such as detection of abusive domain names at the time of registration. 

Internet companies are also increasingly developing AI tools to respond to different needs. Jigsaw, a Google-initiated start-up, has been working on Conversation AI, a tool aimed at automatically detecting hate speech and other forms of verbal abuse and harassment online. Facebook has built an AI program, called DeepText, that could help catch spam and other unwanted messages, and is using AI to combat the spread of terrorist content via its network.

These and other similar advances are expected to have implications in several policy areas (economic, societal, educational, etc.), and governments, the technical community, and private sector actors worldwide are increasingly considering them.

On this page: Policy implications | Governmental initiatives | International processes | AI applications | Research


The policy implications of artificial intelligence

The policy implications of AI are far‐reaching. While AI can potentially lead to economic growth, there are growing concerns over the significant disruptions it could bring to the labour market. Issues related to privacy, safety, and security have also been brought into focus, with calls being made for the development of standards that can help ensure that AI applications have minimal unintended consequences.

Economic and social

AI has significant potential to lead to economic growth. Used in production processes, AI systems bring automation, making processes smarter, faster, and cheaper, and therefore bringing savings and increased efficiency. AI can improve the efficiency and quality of existing products and services, and also generate new ones, thus leading to the creation of new markets. For this potential to be fully explored, there is a need to ensure that the economic benefits of AI are broadly shared across society, and that possible negative implications are adequately addressed.

One such possible implication relates to the disruptions that AI systems could bring to the labour market. Concerns are raised that automated systems will make some jobs obsolete and lead to unemployment. Such concerns have led to discussions about the introduction of a 'universal basic income' that would compensate individuals for disruptions brought to the labour market by robots and other AI systems.

There are, however, also opposing views, according to which AI advancements will generate new jobs that compensate for those lost, without affecting overall employment rates. One point on which there is broad agreement is the need to better adapt education and training systems to the new requirements of the jobs market. It is also often underlined that adapting the workforce to AI requirements does not only mean preparing the new generations, but also allowing the current workforce to re-skill and up-skill.


29 March 2018: G7 employment and innovation ministers discuss ways to prepare for jobs of the future

20 February 2018: Employees are cautiously optimistic about AI, survey finds

8 November 2017: Taxing robots will not protect jobs, report says

20 October 2017: UK businesses call for a joint commission to look at AI impact on people and jobs

11 October 2017: 30% of jobs in OECD countries at risk of automation

Safety and security

Artificial intelligence applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the need to design systems that can properly react to unforeseen situations, with minimal unintended consequences. AI also has implications in the cybersecurity field. On the one hand, there are cybersecurity risks specific to AI systems: as AI is increasingly embedded in critical systems, these systems need to be secured against potential cyber-attacks.

On the other hand, AI has applications in cybersecurity; the technology is being used, for example, in email applications to perform spam filtering, but it is also increasingly employed in applications aimed to detect more serious cybersecurity vulnerabilities and address cyber-threats.
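To illustrate the kind of technique behind such spam filters, the sketch below implements a minimal naive Bayes text classifier. It is a toy example with hypothetical training messages, not any vendor's actual system; real filters use far larger corpora and additional signals.

```python
import math
from collections import Counter

# Toy training data (hypothetical examples, for illustration only)
SPAM = ["win money now", "free prize claim now", "cheap meds win big"]
HAM = ["meeting agenda attached", "lunch tomorrow", "project status update"]

def train(docs):
    """Count word frequencies across a list of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = train(SPAM), train(HAM)
vocab = set(spam_counts) | set(ham_counts)

def score(message, counts, class_size):
    """Log-probability of a message under one class, with add-one smoothing."""
    total = sum(counts.values())
    log_prob = math.log(class_size / (len(SPAM) + len(HAM)))  # class prior
    for word in message.split():
        log_prob += math.log((counts[word] + 1) / (total + len(vocab)))
    return log_prob

def is_spam(message):
    """Classify by comparing the message's score under each class."""
    return score(message, spam_counts, len(SPAM)) > score(message, ham_counts, len(HAM))

print(is_spam("claim your free prize now"))       # True
print(is_spam("agenda for the project meeting"))  # False
```

The same idea — scoring a message against word statistics learned from labelled examples — underpins many production filters, though modern systems typically use neural models rather than simple word counts.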


20 February 2018: Report outlines security threats from malicious use of AI

13 February 2018: Researchers demonstrate AI systems can learn from implicit human feedback

13 February 2018: AI featured in the annual threat assessment of the US intelligence community

12 June 2017: Researchers try to make AI safer by having algorithms learn from human feedback

Privacy and data protection

AI systems work with enormous amounts of data, and this raises concerns regarding privacy and data protection. Such concerns are well illustrated by the increasingly important interplay between AI, the Internet of Things (IoT), and big data.

AI provides ‘thinking’ for IoT devices, making them ‘smart’. These devices, in turn, generate significant amounts of data – sometimes labeled as big data. This data is then analysed and used for the verification of initial AI algorithms and for the identification of new cognitive patterns that could be integrated into new AI algorithms. In this context, developers of AI systems are asked to ensure the integrity of the used data, as well as embed privacy and data protection guarantees into AI applications. 


14 August 2017: Amazon uses AI to identify and protect sensitive data

Artificial intelligence - Internet of Things - Big data


Ethical concerns

As AI algorithms involve judgements and decision-making – replacing similar human processes – concerns have been raised regarding ethics, fairness, justice, transparency, and accountability. The risk of discrimination and bias in decisions made by AI systems is one such concern, as illustrated by the debate surrounding Jigsaw's Conversation AI tool.

While potentially addressing problems related to the misuse of the Internet public space, the software also raises a major ethical issue: how can machines determine what is and what is not appropriate language? One way of addressing some of these concerns could be to combine ethical training for technologists (encouraging them to prioritise ethical considerations in the creation of autonomous technologies) with the development of technical methods for designing AI systems in a way that allows them to avoid such risks (i.e. fairness, transparency, and accountability by design).

Researchers are carefully exploring the ethical challenges posed by AI and are working, for example, on the development of AI algorithms that can 'explain themselves'. The Institute of Electrical and Electronics Engineers (IEEE) has launched a Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, aimed at contributing to ensuring that technologists are educated, trained, and empowered to prioritise ethical considerations in the design and development of intelligent systems.


9 April 2018: AI experts ask governments to introduce algorithmic impact assessments

25 January 2018: UK Prime Minister calls for ethical rules for AI

18 October 2017: Researchers call for more accountability in AI systems

3 October 2017: DeepMind launches Ethics & Society Unit

23 August 2017: Germany adopts ethics guidelines for automated driving

19 July 2017: IEEE to develop standard for personal data AI agent

12 June 2017: Researchers call for guidelines on AI accountability

AI - ethics and accountability


Legal and regulatory issues

One overarching question is whether AI-related challenges (especially regarding safety, privacy and data protection, and ethics) call for new legal and regulatory frameworks, or whether existing ones can be adapted to address them. Adapting current regulations is seen by many as the most suitable approach for the time being. Governments are advised that, when considering regulatory approaches towards AI, attention should be paid to ensuring that such approaches do not hinder innovation and progress.

Aspects related to accountability and liability in AI systems are also viewed as important legal issues to consider, and questions are raised as to the legal status of AI machines (i.e. robots): should they be regarded as natural persons, legal persons, animals, or objects, or should a new category be created? In a January 2017 report containing recommendations to the European Commission on civil law rules on robotics, the European Parliament recommended, among other things, that the Commission consider 'creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently'.


5 April 2018: AI experts concerned about the idea of granting legal status to robots

6 November 2017: AI bot granted residence in Tokyo

25 October 2017: Saudi Arabia grants citizenship to a robot

24 October 2017: Tech industry group adopts AI policy principles

10 October 2017: Estonia to address the legal status of AI

11 July 2017: Researchers able to identify specific neurons responsible for AI decisions


Governmental initiatives

While AI technologies continue to evolve at a fast pace and find more and more applications in various areas, countries increasingly understand that they need to keep up with this evolution and, indeed, take advantage of it. Many are elaborating national AI development strategies, as well as addressing the economic, social, and ethical implications of AI advancements.


16 April 2018: UK parliamentary committee issues recommendations on AI

10 April 2018: European countries sign declaration on AI cooperation

29 March 2018: French president launches national AI strategy

21 March 2018: Bill introduced in the US Congress on AI and security

9 March 2018: European Commission forms experts groups on AI and liability and new technologies

5 March 2018: UAE forms Council to oversee AI integration in public sector

19 February 2018: India sets up its first AI institute

9 February 2018: Indian government creates committees on AI

12 January 2018: France: Foreign takeover of AI and data companies might require governmental approval

5 January 2018: Chinese government invests in an AI technology park

12 December 2017: Bill introduced in the US Congress to promote AI development

19 October 2017: UAE launches AI strategy and appoints minister for AI

16 October 2017: Report outlines recommendations for UK to advance in AI

4 October 2017: US Senate committee endorses bill on self-driving cars

14 September 2017: India’s AI task force starts working

13 September 2017: US government approves new guidelines for self-driving cars

1 September 2017: Russian president warns about global monopolies in AI

21 August 2017: Taiwan to invest in AI initiatives

20 July 2017: China releases AI development plan

19 July 2017: UK Parliament launches inquiry into AI implications


International processes 

Artificial intelligence and its various existing and potential applications feature more and more often on the agenda of intergovernmental and international organisations. The International Telecommunication Union (ITU), for example, has facilitated discussions on intelligent transport systems, while the International Labour Organization (ILO) has started looking at the impact of automation on the world of work. Within the UN System, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts on lethal autonomous weapons systems (LAWS), to explore the technical, military, legal, and ethical implications of LAWS.

Group of Governmental Experts on Lethal Autonomous Weapons Systems

In 2013, the Meeting of State Parties to the Convention on prohibitions or restrictions on the use of certain conventional weapons which may be deemed to be excessively injurious or have indiscriminate effects (Convention on Certain Conventional Weapons) agreed on a mandate on LAWS and mandated the creation of a group of experts 'to discuss the questions related to emerging technologies in the areas of lethal autonomous weapons systems'. The group was convened three times, in 2014, 2015, and 2016, and produced reports which fed into meetings of the High Contracting Parties to the Convention. In 2016, the CCW High Contracting Parties decided to establish a Group of Governmental Experts on LAWS (CCW GGE), to build on the work of the previous groups of experts. 

Mandate. CCW GGE was mandated to examine issues related to emerging technologies in the area of LAWS in the context of the objectives and purposes of the Convention on Certain Conventional Weapons. 

Composition. The group has an open-ended nature, and is open to all High Contracting Parties and non-State Parties to the CCW, international organisations, and non-governmental organisations.



  • 20 November 2017: At the end of its November 2017 meeting, the CCW GGE adopted a report outlining several conclusions and recommendations. Among these: international humanitarian law applies fully to all weapons systems, including the potential development and use of LAWS; responsibility for the deployment of any new weapon systems in armed conflicts remains with states; given the dual nature of technologies in the area of intelligent autonomous systems, the Group's work should not hamper progress in or access to civilian research and development and use of these technologies; there is a need to further assess the aspects of human-machine interaction in the development, deployment, and use of emerging technologies in the area of LAWS; and there should be further discussions on possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of LAWS. The Group also recommended that it meet for a duration of 10 days in 2018 in Geneva.
  • 4 September 2017: The Chairperson of the CCW GGE submitted a Food-for-thought Paper outlining a series of questions that could form the basis for discussion at the group's meeting in November. The proposed questions revolve around three main areas: technology (e.g. whether the technologies that could contribute to LAWS could be broadly characterised as AI/autonomous systems; whether there is likely to be a shift from narrow AI to general AI; etc.), military effects (e.g. whether LAWS could be accommodated under existing chains of military command and control), and legal and ethical issues (e.g. where does legal accountability and liability reside for autonomous systems? what are the main features of national/regional laws planned or already in place for autonomous systems? etc.).



Council of Europe Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence

The Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT) was set up by the Council of Europe Committee of Ministers with the following tasks:

  • Prepare follow-up work, with a view to the preparation of a possible standard-setting instrument, on the basis of the study on the human rights dimensions of automated data processing techniques (in particular algorithms) and possible regulatory implications.
  • Carry out a study on the development and use of new digital technologies and services, including different forms of artificial intelligence, as they may impact peoples’ enjoyment of fundamental rights and freedoms in the digital age – with a view to give guidance for future standard-setting in this field.
  • Carry out a study on a possible standard-setting instrument on forms of liability and jurisdictional issues in the application of civil and administrative defamation laws in Council of Europe member states.

Mandate. The group will carry out its work between January 2018 and December 2019.

Composition. The group is composed of 13 experts: seven government or member states' representatives, designated by the Steering Committee on Media and Information Society (CDMSI), and six independent experts, appointed by the Secretary General, with recognised expertise in the fields of freedom of expression and independence of the media online and offline.


  • 6–7 March 2018: First meeting


The applications of artificial intelligence

Artificial intelligence has been around for many years. Launched as a field of research more than 60 years ago, AI is now shaping the so-called fourth industrial revolution, and is implemented in many areas. Let’s take a look…




AI-augmented systems

AI is also seen as a technology that can enhance human intelligence and assist humans in performing certain tasks.


4 February 2018: China to use AI to enhance its nuclear submarines

Autonomous systems (cars, weapons, etc.)

Several companies, from Google to Uber, are working towards enabling self-driving cars powered by AI systems, and some have already started testing such cars on public roads. AI-powered drones are no longer a novelty, while autonomous weapons raise concerns about their potential implications for humankind.


4 April 2018: Researchers concerned about reported work on autonomous weapons at a South Korean university

2 April 2018: California goes ahead with plans to allow fully autonomous cars

6 March 2018: The UK to review legislation to prepare for self-driving vehicles

5 March 2018: Researchers bring driverless cars closer to seeing around corners

1 March 2018: Arizona and California allow fully autonomous cars on public roads

15 February 2018: Germany not to acquire autonomous weapons

25 January 2018: Self-driving buses in trial in Stockholm

5 January 2018: Half of new cars in China to be powered with AI by 2020

7 November 2017: Waymo tests self-driving cars with no safety driver

11 October 2017: California proposes changes to its self-driving cars regulations

4 October 2017: US Senate committee endorses bill on self-driving cars

10 September 2017: UK not to use fully autonomous weapons

4 September 2017: Elon Musk says AI could lead to third world war

23 August 2017: Germany adopts guidelines for automated driving

20 August 2017: AI companies write to UN on autonomous weapons

11 August 2017: Autonomous vehicles to have profound impact on labour, US gov study says

Internet of Things

Scientists are looking at ways in which AI can enhance other technologies, such as the IoT. A team at the Massachusetts Institute of Technology (MIT), for example, has developed a chip that could enable IoT devices to run powerful AI algorithms locally, thus improving their efficiency.

Cybersecurity and cybercrime

AI applications range from tools that can help catch spam and other unwanted messages on social networks or in e-mails, to algorithms that can help protect systems and networks as complex as the CERN grid.

Content policy

Internet companies are increasingly using AI algorithms to deal with hate speech, terrorism content, and other forms of extremist content online. Researchers propose new algorithms that could be used in content control policy, one example being an AI algorithm for identifying racist code words (code words used to substitute references to communities) on social media.
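The intuition behind such code-word research is distributional: a substitute word tends to appear in the same contexts as the term it replaces. The toy sketch below illustrates this with neutral placeholder tokens ('badword', 'codeword') and a tiny invented corpus; it compares words by the similarity of their co-occurrence contexts, a much-simplified stand-in for the embedding-based methods used in the actual research.

```python
import math
from collections import defaultdict

# Tiny invented corpus: "badword" is a known flagged term, and
# "codeword" is used as a substitute in the same contexts.
CORPUS = [
    "go away badword you ruin everything",
    "go away codeword you ruin everything",
    "nice weather today friends",
    "codeword people ruin everything",
]

def cooccurrence_vectors(sentences, window=2):
    """Map each word to counts of words seen within `window` positions of it."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        words = sent.split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    vecs[w][words[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = cooccurrence_vectors(CORPUS)
# A candidate word is suspicious if its contexts resemble the flagged term's.
print(cosine(vecs["codeword"], vecs["badword"]))  # high (shared contexts)
print(cosine(vecs["weather"], vecs["badword"]))   # 0.0 (no shared contexts)
```

In practice, systems of this kind rank candidate words by similarity to a seed list of known abusive terms, then pass high-scoring candidates to human reviewers.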


15 June 2017: Facebook talks about using both AI and human expertise to tackle online terrorism content

Designing or improving online services

Internet and tech companies employ AI to improve existing services or design new ones. A few examples: Twitter has started using AI to improve users' experience, Google has launched a new job search engine based on AI algorithms, and Microsoft offers a range of AI-based intelligent applications (such as the AI chatbot Zo).


21 June 2017: AI being used in payment fraud prevention for e-commerce


Automatic translation

Researchers have been working on improving the accuracy of translation tools by using AI. Examples include Microsoft Translator and Google Translate. The World Intellectual Property Organization (WIPO) has also developed an AI-based tool to facilitate the translation of patent documents.


14 March 2018: Microsoft announces milestone in machine translation


Health

AI applications in the medical field range from medical robots to algorithms that could improve medical diagnosis and treatment.

Industrial applications

AI and robotics are the drivers of the fourth industrial revolution, as automated systems are increasingly being deployed in IT, manufacturing, agriculture, power grids, rail systems, etc.


17 October 2017: Intel announces first neural network processor


Ongoing research

The private sector and the academic community alike are continuously exploring new avenues of AI development. While some focus on developing new AI-based applications in areas such as those mentioned above, others focus on addressing issues such as accountability and responsibility in AI algorithms.


13 February 2018: Researchers demonstrate AI systems can learn from implicit human feedback

5 December 2017: Researchers develop system enabling self-supervised robot learning

25 October 2017: Researchers develop tool to bring transparency into AI

18 October 2017: AlphaGo Zero: DeepMind’s newest AI system that learns from itself

15 September 2017: Facebook opens AI lab in Canada

7 September 2017: MIT and IBM launch AI research lab

4 September 2017: AI research institute established in Australia

28 July 2017: Researchers work on empowering AI with imagination

12 July 2017: Microsoft launches AI for Earth initiative and Research AI lab

11 July 2017: Researchers able to identify specific neurons responsible for AI decisions


Author and curator: Sorina Teleanu



[Last updated: 16 April 2018]

