Artificial intelligence (AI) has been around for many years. Many consider that the official birth of AI as an academic discipline and field of research was in 1956, when participants at the Dartmouth Conference coined the term ‘AI’ and put forward the conjecture that ‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it’. From that moment on, AI has been continuously evolving and has found its use in many areas, from manufacturing, transportation, and agriculture, to online services and cybersecurity solutions.
For example, several companies are working towards enabling self-driving cars, new automatic translation tools are being developed, and researchers are proposing AI-based technologies for various purposes such as detection of abusive domain names at the time of registration.
Internet companies are also increasingly developing AI tools to respond to different needs. Jigsaw, a Google-initiated start-up, has been working on Conversation AI, a tool aimed at automatically detecting hate speech and other forms of verbal abuse and harassment online. Facebook has built an AI program called DeepText, which could help catch spam and other unwanted messages, and is using AI to combat the spread of terrorist content via its network.
These and other similar advances have, or are expected to have, implications in several policy areas (economic, societal, educational, etc.), and governments, the technical community, and private sector actors worldwide are increasingly considering them.
The policy implications of AI are far‐reaching. While AI can potentially lead to economic growth, there are growing concerns over the significant disruptions it could bring to the labour market. Issues related to privacy, safety, and security have also been brought into focus, with calls being made for the development of standards that can help ensure that AI applications have minimum unintended consequences.
Economic and social
AI has significant potential to lead to economic growth. Used in production processes, AI systems bring automation, making processes smarter, faster, and cheaper, and therefore bringing savings and increased efficiency. AI can improve the efficiency and quality of existing products and services, and also generate new ones, thus leading to the creation of new markets. For this potential to be fully realised, there is a need to ensure that the economic benefits of AI are broadly shared across society, and that the possible negative implications are adequately addressed.
One such possible implication relates to the disruptions that AI systems could bring to the labour market. Concerns are raised that automated systems will make some jobs obsolete and lead to unemployment. Such concerns have led to discussions about the introduction of a 'universal basic income' that would compensate individuals for labour market disruptions brought about by robots and other AI systems.
There are, however, also opposing views, according to which AI advancements will generate new jobs that will compensate for those lost, without affecting overall employment rates. One point on which there is broad agreement is the need to better adapt education and training systems to the new requirements of the job market. It is also often underlined that adapting the workforce to AI-driven requirements means not only preparing the new generations, but also allowing the current workforce to reskill and upskill.
- 26 June 2019 | EU High-Level Expert Group publishes AI policy and investment recommendations
- 12 June 2019 | Up to one in five jobs to go to robots, according to CIO survey
- 21 February 2019 | UK to allocate up to £100 million for AI-focused higher education
- 3 January 2019 | India to introduce AI as an optional subject in secondary education
- 10 December 2018 | Study looks into the effect of AI on the future of humans
- 29 March 2018 | G7 employment and innovation ministers discuss ways to prepare for jobs of the future
- 20 February 2018 | Employees are cautiously optimistic about AI, survey finds
Safety and security
Artificial intelligence applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the need to design systems that can properly react to unforeseen situations and have minimum unintended consequences. AI also has implications in the cybersecurity field. On the one hand, there are cybersecurity risks specific to AI systems: as AI is increasingly embedded in critical systems, these systems need to be secured against potential cyber-attacks.
On the other hand, AI has applications in cybersecurity: the technology is used, for example, in email applications to perform spam filtering, and it is increasingly employed in applications aimed at detecting more serious cybersecurity vulnerabilities and addressing cyber-threats.
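The spam-filtering use mentioned above can be illustrated with a classic technique: a naive Bayes text classifier, which scores a message against word frequencies learned from labelled examples. The sketch below is a minimal toy, not any real filter's implementation; the corpus and all messages are invented for the example, and production systems use far larger models and feature sets.

```python
# Toy naive Bayes spam filter. All training messages are invented examples.
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs, label in {'spam', 'ham'}."""
    word_counts = {'spam': Counter(), 'ham': Counter()}
    label_counts = Counter()
    for text, label in messages:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    vocab = set(word_counts['spam']) | set(word_counts['ham'])
    scores = {}
    for label in ('spam', 'ham'):
        # log prior + sum of log likelihoods, with add-one smoothing
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1)
                              / (total + len(vocab) + 1))
        scores[label] = score
    return max(scores, key=scores.get)

corpus = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the project team", "ham"),
]
wc, lc = train(corpus)
print(classify("free prize money", wc, lc))  # classified as spam on this toy corpus
```

The same scoring idea, scaled up and combined with sender reputation and other signals, underpins many real-world filters.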
- 19 December 2018 | ENISA names AI the number one concern for European cybersecurity
- 20 February 2018 | Report outlines security threats from malicious use of AI
- 13 February 2018 | Researchers demonstrate AI systems can learn from implicit human feedback
- 13 February 2018 | AI featured in the annual threat assessment of the US intelligence community
Privacy, data protection, and other human rights
AI systems work with enormous amounts of data, and this raises concerns regarding privacy and data protection. Such concerns are well illustrated by the increasingly important interplay between AI, the Internet of Things (IoT), and big data.
AI provides ‘thinking’ for IoT devices, making them ‘smart’. These devices, in turn, generate significant amounts of data – sometimes labelled as big data. This data is then analysed and used to verify the initial AI algorithms and to identify new cognitive patterns that could be integrated into new AI algorithms. In this context, developers of AI systems are asked to ensure the integrity of the data used, as well as to embed privacy and data protection guarantees into AI applications.
Algorithms, which feed AI systems, could also have consequences on other human rights, such as freedom of expression, and several civil society groups and intergovernmental organisations are looking into such issues.
- 17 September 2019 | At least 75 countries use AI surveillance technology, report finds
- 16 September 2019 | Report explores algorithmic bias in UK policing sector
- 12 September 2019 | California is close to banning facial recognition technology
- 4 September 2019 | UK court rules facial recognition use by police lawful
- 3 September 2019 | Facebook introduces changes to its facial recognition settings
- 20 August 2019 | Ugandan police admit using facial recognition technology
- 19 August 2019 | US presidential candidate calls for ban on police use of facial recognition
- 8 August 2019 | Lawsuit against Facebook's use of facial recognition given green light
- 16 July 2019 | Oakland city to ban facial recognition technology
- 13 June 2019 | San Francisco to use AI to prevent bias in prosecutions
- 12 June 2019 | Amazon executive calls for regulation of face recognition technology
- 17 May 2019 | UNESCO issues recommendations to address gender bias in AI applications
- 16 May 2019 | San Francisco bans facial recognition technology
- 17 April 2019 | Microsoft refuses to sell facial recognition technology to California LEA
- 6 March 2019 | UNESCO: ROAM principles should apply to AI development
- 13 February 2019 | Council of Europe adopts declaration on manipulative capabilities of algorithmic processes
- 16 November 2018 | Council of Europe Committee of Experts drafts Declaration on the Manipulative Capabilities of Algorithmic Processes
- 12 November 2018 | Council of Europe Committee of Experts publishes a Draft Recommendation of the Committee of Ministers to member States on human rights impacts of algorithmic systems
- 31 October 2018 | UN Special Rapporteur explores AI implications for human rights
- 23 October 2018 | Data protection and privacy commissioners adopt declaration on ethics and AI
- 14 May 2018 | UK Information Commissioner concerned about the use of facial recognition technology by the police
As AI algorithms involve judgements and decision-making – replicating similar human processes – concerns have been raised regarding ethics, fairness, justice, transparency, and accountability. The risk of discrimination and bias in decisions made by AI systems is one such concern, as illustrated in the debate that has surrounded Jigsaw’s Conversation AI tool.
While potentially addressing problems related to misuse of the Internet public space, the software also raises a major ethical issue: How can machines determine what is and what is not appropriate language? One way of addressing some of these concerns could be to combine ethical training for technologists (encouraging them to prioritise ethical considerations in the creation of autonomous technologies) with the development of technical methods for designing AI systems in a way that they can avoid such risks (i.e. fairness, transparency, and accountability by design).
Researchers are carefully exploring the ethical challenges posed by AI and are working, for example, on the development of AI algorithms that can 'explain themselves'. The Institute of Electrical and Electronics Engineers (IEEE) has launched a Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, aimed at contributing to ensuring that technologists are educated, trained, and empowered to prioritise ethical considerations in the design and development of intelligent systems.
- 26 May 2019 | Chinese institutes and companies launch Beijing AI Principles
- 8 April 2019 | EU High-Level Expert Group presents ethics guidelines for AI
- 4 April 2019 | Google dissolves advisory council on AI
- 26 March 2019 | Google appoints advisory council on AI
- 19 March 2019 | Stanford University launches Institute for Human-Centered AI
- 20 January 2019 | Facebook and the Technical University of Munich open AI ethics centre
- 13 December 2018 | New institute in Australia to explore ethics and AI
- 3 December 2018 | European Ethical Charter on the use of AI in judicial systems
- 16 November 2018 | UN Rapporteur on poverty and human rights calls for more transparency around AI
- 30 October 2018 | Telefonica adopts ethical principles to guide its AI work
- 18 June 2018 | European Commission hosts high-level meeting on AI and ethics
- 7 June 2018 | Google outlines AI principles
- 3 May 2018 | Facebook outlines its work on preventing bias in AI
- 9 April 2018 | AI experts ask governments to introduce algorithmic impact assessments
- 25 January 2018 | UK Prime Minister calls for ethical rules for AI
One overarching question is whether AI-related challenges (especially regarding safety, privacy and data protection, and ethics) call for new legal and regulatory frameworks, or whether existing ones can be adapted to address them. Adapting current regulation is seen by many as the most suitable approach for the time being. Governments are advised that, when considering regulatory approaches towards AI, attention should be paid to ensuring that such approaches do not hinder innovation and progress.
Aspects related to accountability and liability in AI systems are also viewed as important legal issues to consider, and questions are raised as to the legal status of AI machines (i.e. robots): should they be regarded as natural persons, legal persons, animals or objects, or should a new category be created? In a January 2017 report containing recommendations to the European Commission on civil law rules on robotics, the European Parliament recommended, among other things, that the Commission consider 'creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently'. Such a proposal, however, was received with reticence by some, as demonstrated in an open letter addressed to the European Commission by over 150 AI and robotics experts, industry leaders, and other specialists. In their view, creating a legal personality for a robot is inappropriate from an ethical and legal perspective: while aware of the importance of addressing the issue of liability of autonomous robots, they believe that 'creating a legal status of electronic person would be ideological and non-sensical and non-pragmatic'.
- 4 February 2019 | Policy-makers should embrace AI innovation instead of over-regulating it, report says
- 16 December 2018 | AI Now calls for governmental regulation of AI
- 12 December 2018 | Google CEO says the industry should be trusted to regulate AI
- 30 November 2018 | US FCC chairman emphasises the need for 'regulatory humility' towards AI
- 31 October 2018 | Public Knowledge calls for a US federal authority to focus on AI
- 5 April 2018 | AI experts concerned about the idea of granting legal status to robots
While AI technologies continue to evolve at a fast pace and have more and more applications in various areas, countries are increasingly understanding that they need to keep up with this evolution and indeed take advantage of it. Many are elaborating national AI development strategies, as well as addressing the economic, social, and ethical implications of AI advancements.
China, for example, released a national AI development plan in 2017, intended to contribute to making the country the world leader in AI by 2030 and to build a national AI industry worth US$150 billion. The United Arab Emirates (UAE) also has an AI strategy, whose main aim is to support the development of AI solutions for several vital sectors in the country, such as transportation, healthcare, space exploration, smart consumption, water, technology, education, and agriculture. The country has even appointed a State Minister for AI, to work on ‘making the UAE the world’s best prepared for AI and other advanced technologies’. In 2018, France and Germany were among the countries that followed this trend of launching national AI development plans. These are only a few examples; many more countries are working on such plans and strategies on an ongoing basis, as the map below shows.
- 12 September 2019 | US Air Force publishes AI strategy
- 6 September 2019 | US Department of Energy establishes AI office
- 5 September 2019 | Switzerland's digital strategy calls for transparent algorithmic decision-making systems
- 30 August 2019 | Saudi Arabia creates commission and centre for AI
- 12 August 2019 | Malta launches public consultation on ethical AI framework
- 9 August 2019 | USA outlines plan for federal engagement in AI standardisation
- 8 August 2019 | UK to invest £250 million in AI for health
- 30 July 2019 | EU states need more time to develop AI strategies
- 24 July 2019 | New York establishes AI commission
- 2 July 2019 | US NIST outlines proposals for federal engagement in setting AI standards
- 21 June 2019 | USA updates its National AI Research and Development Plan
- 21 May 2019 | Government AI Readiness Index 2019 released
- 16 May 2019 | UK appoints AI Council
- 6 May 2019 | Czech Republic adopts national AI strategy
- 20 April 2019 | Dubai Future Council on AI meets for the first time
- 15 April 2019 | Portugal uses AI platform to boost exports
- 11 April 2019 | Slovenia to set up international AI research centre
- 22 March 2019 | Flemish Region in Belgium launches AI plan
- 20 March 2019 | US government launches AI-dedicated website
- 7 March 2019 | UAE government launches Think AI initiative
- 21 February 2019 | UK to allocate up to £100 million for AI-focused higher education
- 19 February 2019 | UK updates code of conduct for AI in the health system
- 11 February 2019 | US president launches the American AI Initiative
- 11 February 2019 | US Defense Department launches AI strategy
- 7 February 2019 | India to step up AI development plans
- 1 February 2019 | US army activates AI task force
- 18 January 2019 | Members appointed to the US National Security Commission for Artificial Intelligence
- 16 November 2018 | Germany to invest 3 billion euros in AI by 2025
- 10 November 2018 | UAE establishes lab to develop AI regulations
- 1 November 2018 | Malta establishes task force to develop national AI strategy
- 6 June 2018 | Singapore launches AI governance and ethics initiatives
- 11 May 2018 | US White House creates Select Committee to advance R&D in AI
- 11 May 2018 | India: Government-appointed task force issues recommendations on AI
- 16 April 2018 | UK parliamentary committee issues recommendations on AI
- 29 March 2018 | French president launches national AI strategy
- 21 March 2018 | Bill introduced in the US Congress on AI and security
- 5 March 2018 | UAE forms Council to oversee AI integration in public sector
- 19 February 2018 | India sets up its first AI institute
- 9 February 2018 | Indian government creates committees on AI
- 12 January 2018 | France: Foreign takeover of AI and data companies might require governmental approval
- 5 January 2018 | Chinese government invests in an AI technology park
Artificial intelligence and its various existing and potential applications feature more and more often on the agenda of intergovernmental and international organisations. The International Telecommunication Union (ITU), for example, has facilitated discussions on intelligent transport systems, while the International Labour Organisation (ILO) has started looking at the impact of automation on the world of work. AI has also been featuring high on the agenda of meetings such as the World Economic Forum, G7 Summits, and OECD gatherings. All these entities and processes are exploring different policy implications of AI and suggesting approaches for tackling inherent challenges.
- 11 September 2019 | Council of Europe establishes Ad Hoc Committee on AI
- 26 August 2019 | G7 leaders agree on Strategy for an open, free and secure digital transformation, which also tackles AI
- 22 August 2019 | Digital industry associations outline recommendations for the G7 Summit
- 9 June 2019 | G20 Digital Economy Ministers endorse AI principles
- 22 May 2019 | OECD Council adopts AI recommendations
- 25 January 2019 | AI among the most prominent topics at the World Economic Forum in Davos
- 1 January 2019 | AI4EU project officially launched
- 31 December 2018 | The IGF publishes final output of the Best Practice Forum on AI, IoT and big data
- 7 December 2018 | European Commission presents a Coordinated Plan on AI
- 9 June 2018 | G7 leaders agree on a common vision for the future of AI
- 25 April 2018 | European Commission outlines AI approach
- 10 April 2018 | European countries sign declaration on AI cooperation
- 9 March 2018 | European Commission forms experts groups on AI and liability and new technologies
Some intergovernmental organisations have established processes to look at certain aspects of AI and its uses. Within the UN System, for example, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts on lethal autonomous weapons systems (LAWS), to explore the technical, military, legal, and ethical implications of LAWS. The Council of Europe has set up a Committee of Experts to study the human rights dimensions of automated data processing and different forms of AI. The European Commission created a High-Level Expert Group on Artificial Intelligence to support the implementation of a European strategy on AI and to elaborate recommendations on future-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges.
Group of Governmental Experts on Lethal Autonomous Weapons Systems
In 2013, the Meeting of State Parties to the Convention on prohibitions or restrictions on the use of certain conventional weapons which may be deemed to be excessively injurious or have indiscriminate effects (Convention on Certain Conventional Weapons) agreed to convene a group of experts 'to discuss the questions related to emerging technologies in the areas of lethal autonomous weapons systems'. The group was convened three times, in 2014, 2015, and 2016, and produced reports which fed into meetings of the High Contracting Parties to the Convention. In 2016, the CCW High Contracting Parties decided to establish a Group of Governmental Experts on LAWS (CCW GGE), to build on the work of the previous groups of experts.
Mandate. CCW GGE was mandated to examine issues related to emerging technologies in the area of LAWS in the context of the objectives and purposes of the Convention on Certain Conventional Weapons.
Composition. The group has an open-ended nature, and is open to all High Contracting Parties and non-State Parties to the CCW, international organisations, and non-governmental organisations.
- 20–21 August 2019, Geneva | Second meeting of the 2019 GGE on LAWS
- 25–29 March 2019, Geneva | First meeting of the 2019 GGE on LAWS
- 27–31 August 2018, Geneva | Second meeting of the 2018 GGE on LAWS
- 9–13 April 2018, Geneva | First meeting of the 2018 GGE on LAWS
- 13–17 November 2017, Geneva | Meeting of the 2017 GGE on LAWS
- 21 August 2019: An advanced version of the Draft Report of the 2019 session of GGE LAWS was made available.
- 23 October 2018: In its Report of the 2018 session, the CCW GGE reiterated the applicability of international humanitarian law to the development and use of LAWS and noted that human responsibility must be retained when it comes to decisions on the use of weapons systems. The report also summarised the group's discussions on the human element in the use of lethal force, on aspects of human-machine interaction in the development, deployment, and use of emerging technologies in the area of LAWS, and on the potential military implications of related technologies. When it comes to possible policy options for addressing the humanitarian and international security challenges posed by emerging technologies in the context of LAWS, the report outlines the proposals discussed within the Group: a legally binding instrument, a political declaration, and clarity on the implementation of existing obligations under international law, in particular international humanitarian law.
- 20 November 2017: At the end of its November 2017 meeting, the CCW GGE adopted a report outlining several conclusions and recommendations. Among these: international humanitarian law applies fully to all weapons systems, including the potential development and use of LAWS; responsibility for the deployment of any new weapon systems in armed conflicts remains with states; given the dual nature of technologies in the area of intelligent autonomous systems, the Group's work should not hamper progress in or access to civilian research and development and use of these technologies; there is a need to further assess the aspects of human-machine interaction in the development, deployment, and use of emerging technologies in the area of LAWS; and there should be further discussions on possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of LAWS. The Group also recommended that it meet for a duration of 10 days in 2018 in Geneva.
- Report of the 2018 session of the GGE on LAWS
- Chair's summary of the discussion on agenda items 6 a, b, c and d, 9–13 April 2018
- Working papers submitted for the first and for the second 2018 meetings of the GGE on LAWS
- Working papers submitted before the GGE's November 2017 meeting: Chairperson Food-for-thought paper | Netherlands | Belgium | Germany and France | Netherlands and Switzerland | United States of America I and II | Russian Federation | Switzerland | Non-Aligned Movement and Other States Parties to the CCW
- Searching for meaningful human control: The April 2018 meeting on lethal autonomous weapons systems, by Barbara Rosen Jacobson, DiploFoundation (April 2018)
- Lethal Autonomous Weapons Systems: Mapping the GGE debate, by Barbara Rosen Jacobson, DiploFoundation (November 2017)
- Defending the boundary: Constraints and requirements on the use of autonomous weapon systems under international humanitarian and human rights law, by the Geneva Academy of International Humanitarian Law and Human Rights (May 2017)
- Artificial intelligence: Lethal autonomous weapons systems and peace time threats, by Regina Surber, ICT4Peace Foundation
Council of Europe Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence
The Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT) was set up by the Council of Europe Committee of Ministers with the following tasks:
- Prepare follow up with a view to the preparation of a possible standard setting instrument on the basis of the study on the human rights dimensions of automated data processing techniques (in particular algorithms and possible regulatory implications).
- Carry out a study on the development and use of new digital technologies and services, including different forms of artificial intelligence, as they may impact peoples’ enjoyment of fundamental rights and freedoms in the digital age – with a view to give guidance for future standard-setting in this field.
- Carry out a study on a possible standard-setting instrument on forms of liability and jurisdictional issues in the application of civil and administrative defamation laws in Council of Europe member states.
Mandate. The group is carrying out its work between January 2018 and December 2019.
Composition. The group is composed of 13 experts, comprising seven government or member States’ representatives, designated by the Steering Committee on Media and Information Society (CDMSI), and six independent experts, appointed by the Secretary-General with recognised expertise in the fields of freedom of expression and independence of the media online and off-line.
- 18–19 March 2019, Strasbourg | Third meeting of MSI-AUT
- 17–18 September 2018, Strasbourg | Second meeting of MSI-AUT
- 6–7 March 2018 | First meeting of MSI-AUT
- 26 June 2019 | MSI-AUT launches a public consultation on a consolidated Draft Recommendation of the Committee of Ministers to member States on the human rights impacts of algorithmic systems
- 13 February 2019 | The Committee of Ministers of the Council of Europe adopted the Declaration on the manipulative capabilities of algorithmic processes.
- 16 November 2018 | MSI-AUT publishes a Draft Declaration of the Committee of Ministers on the manipulative capabilities of algorithmic processes. The document draws the attention of states to the rights of all human beings to take decisions and form opinions independently of automated systems. It underlines the risks of using massive amounts of personal and non-personal data to sort and micro-target people, to identify vulnerabilities, and to reshape social environments to achieve specific goals and vested interests. The draft encourages states (1) to consider additional protective frameworks to address the impacts of the targeted use of data on the exercise of human rights; (2) to initiate inclusive public debates on permissible forms of persuasion and unacceptable manipulation; (3) to take measures to ensure that effective legal guarantees are in place against such forms of interference; and (4) to empower users by promoting digital literacy on how much data are generated and used for commercial purposes.
- 12 November 2018 | MSI-AUT publishes a Draft Recommendation of the Committee of Ministers to member States on human rights impacts of algorithmic systems. The document outlines that the misuse of algorithmic systems can jeopardise the rights to privacy, freedom of expression, and prohibition of discrimination provided by the European Convention for the Protection of Human Rights and Fundamental Freedoms. Although public and private sector initiatives to develop ethical guidelines for the design, development, and deployment of algorithmic systems are welcome, they do not substitute the duty of states to guarantee that human rights obligations are embedded into all steps of their algorithmic operations. In addition, states should ensure appropriate regulatory frameworks to promote human rights-respecting technological innovation by all actors. The Recommendation also outlines a series of guidelines for states on actions to be taken vis-a-vis the human rights impacts of algorithmic systems, such as data quality and modelling standards; principles of transparency and contestability; provision of effective judicial and non-judicial remedies to review algorithmic decisions; the implementation of precautionary measures to maintain control over the use of algorithmic systems; and empowerment through research and public awareness. Lastly, the document underlines responsibilities for private actors with respect to human rights and fundamental freedoms that states should aim to ensure, including guidelines on data quality and modelling, transparency, effective remedies, and precautionary measures.
- 9 November 2018 | MSI-AUT publishes a draft Study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. The study examines, among other things, the possible risks, harms, and wrongs that the application of advanced technologies might have, especially in relation to the right to a fair trial and to 'due process', the right to freedom of expression and information, the right to privacy and data protection, and the right to protection against discrimination in the exercise of rights and freedoms. One of the key conclusions of the study is that 'those who deploy and reap the benefits of advanced digital technologies (including AI) in the provision of services must be responsible for their adverse consequences'. As such, it is suggested that states introduce regulations which ensure that the responsibility for the adverse risks, harms, and wrongs arising from the operation of advanced digital technologies is duly allocated.
- 9 October 2018 | MSI-AUT publishes a Draft study on forms of liability and jurisdictional issues in the application of civil and administrative defamation laws in Council of Europe member states.
Artificial intelligence has been around for many years. Launched as a field of research more than 60 years ago, AI is now shaping the so-called fourth industrial revolution, and is implemented in many areas. Let’s take a look…
AI is also seen as a technology that can enhance human intelligence and assist humans in performing certain tasks.
- 11 January 2019 | Malaysian judges to start using AI
- 20 December 2018 | Parliament and state legislature in India to use AI
- 4 February 2018 | China to use AI to enhance its nuclear submarines
Autonomous systems (cars, weapons, etc.)
Several companies, from Google to Uber, are working towards enabling self-driving cars powered by AI systems. Some have already started testing such cars on the roads. Drones powered by AI are no longer news, while autonomous weapons raise concerns about their potential implications for humankind.
- 19 August 2019 | New report looks at the role of tech companies in the development of autonomous weapons
- 25 March 2019 | UN Secretary-General calls for a ban on lethal autonomous weapons
- 1 March 2019 | AI weapons may be harder to control than nuclear ones, warns Henry Kissinger
- 6 February 2019 | The UK government initiated public consultations on advanced trials of automated vehicles
- 31 January 2019 | Singapore issues a set of provisional national standards for driverless vehicles
- 14 January 2019 | The US Department of Transportation proposes new rules for operating drones
- 7 January 2019 | The UK Department for Transport published the results of public consultations concerning the use of drones in the UK
- 1 January 2019 | Ontario expands its automated vehicle pilot program
- 19 December 2018 | The British Standards Institute developed new cybersecurity standard for self-driving vehicles
- 6 June 2018 | US government seeking new powers to counter threatening drones
- 17 May 2018 | European Commission outlines vision on automated mobility
- 17 May 2018 | Fully self-driving cars tested on public roads in Texas
- 13 May 2018 | Dubai police to deploy driverless cars by 2020
- 10 May 2018 | US government approves drone-testing projects
- 7 May 2018 | Researchers build self-driving cars that can navigate without maps
- 4 April 2018 | Researchers concerned about reported work on autonomous weapons at a South Korean university
- 2 April 2018 | California goes ahead with plans to allow fully autonomous cars
- 6 March 2018 | The UK to review legislation to prepare for self-driving vehicles
- 5 March 2018 | Researchers bring driverless cars closer to seeing around corners
- 1 March 2018 | Arizona and California allow fully autonomous cars on public roads
- 15 February 2018 | Germany not to acquire autonomous weapons
- 25 January 2018 | Self-driving buses in trial in Stockholm
- 5 January 2018 | Half of new cars in China to be powered with AI by 2020
Internet of Things
Scientists are looking at ways in which AI can enhance other technologies such as the IoT. A team at the Massachusetts Institute of Technology (MIT), for example, has developed a chip that could enable IoT devices to run powerful AI algorithms locally, thus improving their efficiency.
Cybersecurity and cybercrime
AI applications range from tools that can help catch spam and other unwanted messages on social networks or in e-mails, to algorithms that can help protect systems and networks as complex as the CERN grid.
Internet companies are increasingly using AI algorithms to deal with hate speech, terrorism content, and other forms of extremist content online. Researchers are also proposing new algorithms that could be used in content control policy; one example is an AI algorithm for identifying racist code words (substitute terms used to refer to targeted communities) on social media.
- 17 September 2019 | Facebook and UK's Metropolitan Police partner to tackle streaming of armed attacks
- 11 September 2019 | Chinese regulators to impose obligations for AI algorithms to promote 'mainstream values'
- 18 July 2019 | AI ‘is not a silver bullet’ for moderating online content, report stresses
- 8 July 2019 | Instagram launches new AI-powered feature against online bullying
- 15 March 2019 | Facebook announces AI tool to fight revenge porn
- 15 November 2018 | Facebook to change its news feed algorithm
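To give a sense of the simplest techniques behind the spam-catching tools mentioned above: many early filters relied on naive Bayes text classification, which scores a message against word frequencies learned from labelled examples. The sketch below is purely illustrative, trained on a toy, hypothetical dataset; it is not the approach used by any of the companies mentioned here, whose systems are far more sophisticated.

```python
# Minimal naive Bayes spam filter (illustrative sketch only).
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs; returns per-label word and doc counts."""
    word_counts = {}
    label_counts = Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label maximising log P(label) + sum of log P(word | label)."""
    vocab = set().union(*word_counts.values())
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)       # log prior
        denom = sum(word_counts[label].values()) + len(vocab)    # add-one smoothing
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training data (hypothetical examples, not a real dataset)
training = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting at noon tomorrow", "ham"),
    ("lunch with the team today", "ham"),
]
counts, labels = train(training)
print(classify("claim your free prize now", counts, labels))  # prints: spam
```

Modern moderation systems replace such word-count models with large neural classifiers, but the underlying idea of scoring content against labelled training data remains the same.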
Designing or improving online services
Internet and tech companies employ AI to improve existing services or design new ones. A few examples: Twitter has started using AI to improve users’ experience, Google has launched a new job search engine based on AI algorithms, and Microsoft has a range of AI-based intelligent applications (from Calendar.help to the AI chatbot Zo).
Researchers have been working on improving the accuracy of translation tools by using AI. Examples include Microsoft Translator and Google Translate. The World Intellectual Property Organization (WIPO) also developed an AI-based tool to facilitate the translation of patent documents.
- 12 June 2018 | Google uses AI for offline translation
- 14 March 2018 | Microsoft announces milestone in machine translation
Healthcare and medical sciences
AI applications in the medical field range from medical robots to algorithms that could improve medical diagnosis and treatment.
- 24 April 2019 | AI used to translate brain signals into speech
- 19 February 2019 | UK updates code of conduct for AI in the health system
AI and robotics are the drivers of the fourth industrial revolution, as automated systems are increasingly being deployed in IT, manufacturing, agriculture, power grids, rail systems, etc.
The private sector and the academic community alike are continuously exploring new avenues of AI development. While some focus on developing new AI-based applications in areas such as those mentioned above, others focus on addressing issues such as accountability and responsibility in AI algorithms. In October 2017, for example, researchers at Columbia and Lehigh universities developed a tool, called DeepXplore, that could help bring transparency into AI systems, through a process described as ‘reverse engineering the learning process to understand its logic’. And in June 2018, IBM presented an AI system that can engage in reasoned arguments with humans on complex topics.
- 4 September 2019 | AI system passes eighth-grade science test
- 19 August 2019 | Google develops real-time hand-tracking technology
- 13 August 2019 | Google works on improving speech recognition for people with impaired speech
- 12 August 2019 | Amazon announces improvements to its facial recognition technology
- 30 July 2019 | Scientists find a new way to decode speech from the human brain
- 22 July 2019 | Microsoft and OpenAI partner to develop artificial general intelligence
- 16 July 2019 | Elon Musk's Neuralink reveals work on brain-machine interfaces
- 2 July 2019 | Scientists use AI to develop flu vaccine
- 5 June 2019 | Study shows AI can be trained to fake UN political speeches
- 17 April 2019 | Google opens AI centre in Ghana
- 16 January 2019 | Microsoft signed a deal with the district government of Pudong New Area (Shanghai) to launch an AI and IoT lab in Shanghai
- 15 October 2018 | MIT established new college to focus on AI
- 18 June 2018 | IBM launches AI system that can debate with humans
- 15 June 2018 | Researchers bring AI closer to understanding 3D spaces
- 14 June 2018 | Researchers develop AI system able to see through walls
- 13 February 2018 | Researchers demonstrate AI systems can learn from implicit human feedback