Artificial intelligence
Artificial intelligence (AI) might sound like something out of a science fiction movie in which robots are ready to take over the world. While such robots remain the stuff of science fiction (at least for now), AI is already part of our daily lives, whether we know it or not.
Think of your Gmail inbox: Some of the e-mails you receive end up in your spam folder, while others are marked as ‘social’ or ‘promotions’. How does this happen? Google uses AI algorithms to automatically filter and sort e-mails into categories. These algorithms can be seen as small programs trained to recognise certain elements within an e-mail that make it likely to be spam, for example. When the algorithm identifies one or several of those elements, it marks the e-mail as spam and sends it to your spam folder. Of course, algorithms do not work perfectly, but they are continuously improved: When you find a legitimate e-mail in your spam folder, you can tell Google that it was wrongly marked as spam, and Google uses that information to improve how its algorithms work.
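To make this concrete, here is a minimal sketch of spam filtering as text classification, using Python and scikit-learn. The tiny training set is invented for illustration; this is not Google's actual system, which is trained on vast amounts of data and continuously updated with user feedback.

```python
# A minimal sketch of spam filtering as text classification (scikit-learn).
# The tiny training set below is invented for illustration; a real filter is
# trained on millions of labelled e-mails and retrained as users report errors.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "You won a free prize, claim your money now",   # spam
    "Cheap loans, act now, limited offer",          # spam
    "Meeting rescheduled to Monday at 10am",        # legitimate
    "Here are the minutes from yesterday's call",   # legitimate
]
labels = ["spam", "spam", "ham", "ham"]

# Turn each e-mail into word counts, then fit a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Claim your free prize now"]))   # -> ['spam']
print(model.predict(["Agenda for Monday's meeting"])) # -> ['ham']
```

A user clicking ‘not spam’ supplies exactly the kind of corrected label that such a model can be retrained on.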
AI is widely used in Internet services: Search engines use AI to provide better search results; social media platforms rely on AI to automatically detect hate speech and other forms of harmful content; and online stores use AI to suggest products you are likely to be interested in, based on your previous shopping habits. More complex forms of AI are used in manufacturing, transportation, agriculture, healthcare, and many other areas. Self-driving cars, programs able to recognise certain medical conditions with the accuracy of a doctor, systems developed to track and predict the impact of weather conditions on crops – they all rely on AI technologies.
As the name suggests, AI systems are embedded with some level of ‘intelligence’ which makes them capable of performing certain tasks, or of replicating specific behaviours that normally require human intelligence. What makes them ‘intelligent’ is a combination of data and algorithms. Let’s look at an example involving a technique called machine learning. Imagine a program able to recognise cars among millions of images. First, the program is fed a large number of car images. Algorithms then ‘study’ those images to discover patterns, and in particular the specific elements that characterise the image of a car. Through machine learning, the algorithms ‘learn’ what a car looks like. Later on, when presented with millions of different images, they are able to identify those that contain a car. This is, of course, a simplified example – there are far more complex AI systems out there. But essentially all of them involve some initial training data and an algorithm that learns from that data in order to perform a task. Some AI systems go further, being able to learn and improve on their own.
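The car example can be sketched in a few lines of Python. Since a car photo dataset cannot be bundled here, scikit-learn's built-in handwritten digits stand in for car images; the principle of learning patterns from labelled examples is the same.

```python
# A minimal sketch of the 'learn from labelled images' idea, using scikit-learn.
# Handwritten digits stand in for car photos: during training, the model
# discovers the pixel patterns that characterise each class.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # 8x8 grey-scale images, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# 'Training' = fitting weights so known images map to their known labels.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

# 'Recognition' = applying the learned patterns to images never seen before.
print(f"accuracy on unseen images: {model.score(X_test, y_test):.2f}")
```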
One famous example is DeepMind’s AlphaGo Zero: The program starts out knowing only the rules of the game of Go; it then plays against itself, learning from its successes and failures to become better and better (a toy sketch of this self-play idea follows at the end of this introduction).

Going back to where we started: Is AI really able to match human intelligence? In specific cases – like playing the game of Go – the answer is ‘yes’. That being said, what has been coined ‘artificial general intelligence’ (AGI) – advanced AI systems that can replicate human intellectual capabilities in order to perform complex and combined tasks – does not yet exist. Experts have divided opinions on whether AGI is something we will see in the near future, but it is certain that scientists and tech companies will continue to develop ever more complex AI systems.

What are the policy implications of AI? Applying AI for social good is a principle that many tech companies have adhered to. They see AI as a tool that can help address some of the world’s most pressing problems, in areas such as climate change and disease eradication. The technology and its many applications certainly carry significant potential for good, but there are also risks. Accordingly, the policy implications of AI advancements are far-reaching. While AI can generate economic growth, there are growing concerns over the significant disruptions it could bring to the labour market. Issues related to privacy, safety, and security are also in focus. As innovation in the field continues, more and more stakeholders are calling for AI standards and AI governance frameworks to help ensure that AI applications have minimal unintended consequences.
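To make the self-play idea concrete, here is a toy sketch in Python: noughts and crosses stands in for Go, and a simple value table stands in for AlphaGo Zero's neural networks and tree search. The program starts knowing only the rules and improves by nudging its evaluation of positions towards wins and away from losses; everything here is a deliberately simplified stand-in, not DeepMind's method.

```python
# A toy sketch of learning by self-play: the program knows only the rules,
# plays against itself, and learns position values from game outcomes.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

values = defaultdict(float)   # value of (position, player-who-just-moved)
EPS, LR = 0.2, 0.05           # exploration rate and learning rate

def choose(board, player):
    moves = [i for i, c in enumerate(board) if c == " "]
    if random.random() < EPS:
        return random.choice(moves)   # explore: try something new
    # Exploit: pick the move leading to the position we currently rate highest.
    def score(m):
        after = board[:m] + (player,) + board[m+1:]
        return values[(after, player)]
    return max(moves, key=score)

for episode in range(20000):
    board, player, history = (" ",) * 9, "X", []
    while True:
        m = choose(board, player)
        board = board[:m] + (player,) + board[m+1:]
        history.append((board, player))
        win = winner(board)
        if win or " " not in board:
            break
        player = "O" if player == "X" else "X"
    # Learn from the outcome: reward the winner's positions, punish the loser's.
    for pos, mover in history:
        target = 0.0 if win is None else (1.0 if mover == win else -1.0)
        values[(pos, mover)] += LR * (target - values[(pos, mover)])
```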
The economic and social implications
AI has significant potential to stimulate economic growth. In production processes, AI systems increase automation and make processes smarter, faster, and cheaper, bringing savings and increased efficiency. AI can improve the efficiency and quality of existing products and services, and can also generate new ones, thus leading to the creation of new markets. At its current pace of development, it is estimated that the AI industry could contribute up to US$15.7 trillion to the global economy by 2030. Beyond the economic potential, AI can also contribute to achieving some of the sustainable development goals (SDGs). In fact, AI applications have already been developed and deployed to help address challenges in areas covered by the SDGs, such as climate change and health. In China, for example, IBM’s Green Horizons initiative is using AI to predict levels of air pollution. Machine learning and facial recognition are being used as part of MERON (Method for Extremely Rapid Observation of Nutritional Status) to detect malnutrition. Several companies have launched programmes dedicated to fostering the role of AI in achieving sustainable development. Examples include IBM’s Science for Social Good, Google’s AI for Social Good, and Microsoft’s AI for Good projects.
For this potential to be fully realised, there is a need to ensure that the economic benefits of AI are broadly shared at a societal level, and that the possible negative implications are adequately addressed. One significant risk is that of a new form of global digital divide, in which some countries reap the benefits of AI while others are left behind. Estimates for 2030 show that North America and China will likely experience the largest economic gains from AI, while developing countries - with lower rates of AI adoption - will register only modest economic increases. The disruptions that AI systems could bring to the labour market are another source of concern. Many studies estimate that automated systems will make some jobs obsolete and lead to unemployment. Such concerns have led to discussions about the introduction of a 'universal basic income' that would compensate individuals for disruptions brought to the labour market by robots and other AI systems. There are, however, also opposing views, according to which AI advancements will generate new jobs that will compensate for those lost, without affecting overall employment rates. One point on which there is broad agreement is the need to better adapt education and training systems to the new requirements of the job market. This entails not only preparing new generations, but also enabling the current workforce to reskill and upskill.
Developments
- 10 May 2022 | Ireland appoints AI ambassador
- 8 December 2021 | UNESCO and partners launch initiative to promote data and algorithms literacy
- 25 March 2021 | UK Trades Union Congress outlines recommendations to ensure AI improves working lives
- 3 March 2021 | AI Index Report 2021 published
- 1 February 2021 | Saudi Arabia launches centre for AI for energy
- 19 January 2021 | UK competition authority launches consultation on algorithms and competition
- 5 August 2020 | Australia assists Vietnam in using AI for post-COVID-19 recovery
Safety and security considerations
AI applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the need to design systems that can properly react to unforeseen situations with minimal unintended consequences. Beyond self-driving cars, the (potential) development of other autonomous systems - such as lethal autonomous weapons systems - has sparked additional and intense debates about their implications for human safety. AI also has implications in the cybersecurity field. On the one hand, there are cybersecurity risks specific to AI systems: As AI is increasingly embedded in critical systems, these systems need to be secured against potential cyber-attacks. On the other hand, AI has applications in cybersecurity; the technology is used, for example, in e-mail applications to perform spam filtering, but it is also increasingly employed in applications aimed at detecting more serious cybersecurity vulnerabilities and addressing cyber-threats.
Going a step further, some see AI as an issue with implications for national security. The US Intelligence Community, for example, has included AI among the areas that could generate national security concerns, especially due to its potential applications in warfare and cyber defence, and its implications for national economic competitiveness.
Developments
- 15 September 2021 | AUKUS security partnership to cover cyber capabilities, AI and quantum technologies
- 1 March 2021 | US National Security Commission on AI releases final report
- 15 December 2020 | ENISA publishes report on AI cybersecurity challenges
- 19 November 2020 | New report underlines malicious uses of artificial intelligence
Human rights
AI systems work with enormous amounts of data, and this raises concerns regarding privacy and data protection. Online services such as social media platforms, e-commerce stores, and multimedia content providers collect information about users’ online habits, and use AI techniques such as machine learning to analyse the data and ‘improve the user’s experience’ (for example, Netflix suggests movies you might want to watch based on movies you have already seen). AI-powered products such as smart speakers also involve the processing of user data, some of it of a personal nature. Facial recognition technologies embedded in public street cameras have direct privacy implications.

How is all of this data processed? Who has access to it, and under what conditions? Are users even aware that their data is extensively used? These are only some of the questions raised by the increased use of personal data in AI applications. What solutions are there to ensure that AI advancements do not come at the expense of user privacy? Strong privacy and data protection regulations (including in terms of enforcement), enhanced transparency and accountability for tech companies, and embedding privacy and data protection guarantees into AI applications during the design phase are some possible answers.

The algorithms that power AI systems could also have consequences for other human rights, and several civil society groups and intergovernmental organisations are looking into such issues. For example, AI tools aimed at automatically detecting and removing hate speech from online platforms could negatively affect freedom of expression: Even when such tools are trained on significant amounts of data, the algorithms can wrongly identify a text as hate speech.
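The trade-off at the heart of this problem can be illustrated with a short Python sketch: a moderation system removes posts whose classifier score exceeds a threshold, and wherever that threshold is set, some legitimate speech is wrongly removed while some harmful content slips through. All scores and labels below are invented.

```python
# A minimal sketch of the moderation trade-off: the classifier is imperfect,
# so any removal threshold trades wrongly removed legitimate speech (false
# positives) against harmful posts left online. All data below is invented.
posts = [  # (classifier score, is it actually hate speech?)
    (0.95, True),
    (0.81, False),  # heated but legitimate criticism, scored high
    (0.62, True),
    (0.55, False),
    (0.33, True),   # harmful post phrased mildly, scored low
    (0.20, False),
    (0.10, False),
]

for threshold in (0.9, 0.7, 0.5, 0.3):
    removed = [(s, h) for s, h in posts if s >= threshold]
    false_pos = sum(1 for _, h in removed if not h)           # legitimate speech removed
    missed = sum(1 for s, h in posts if s < threshold and h)  # harm left online
    print(f"threshold {threshold}: {len(removed)} removed, "
          f"{false_pos} wrongly removed, {missed} harmful posts missed")
```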
Developments
- 12 May 2022 | US government issues warning about disability discrimination caused by AI tools used for employment decisions
- 10 May 2022 | Civil society groups call on MEPs to ban biometric mass surveillance
- 1 March 2022 | Civil society calls on EU to ban predictive AI in policing and criminal justice
- 14 February 2022 | Texas attorney general sues Meta over biometric data collection
- 30 November 2021 | Civil society groups urge EU to prioritise fundamental rights in AI regulation
- 29 November 2021 | UK government publishes standard for algorithmic transparency
- 20 November 2021 | UK Information Commissioner’s Office to fine Clearview AI
- 15 October 2021 | Moscow adds facial recognition payment at 240 metro stations
- 8 October 2021 | USA to develop AI ‘bill of rights’
- 6 October 2021 | European Parliament calls for ban on automated recognition in public spaces
- 15 September 2021 | UN High Commissioner for Human Rights calls for moratorium on AI systems carrying high risks for human rights
- 13 September 2021 | European Commission launches InTouchAI.eu initiative to promote human-centric AI
- 25 August 2021 | Investigation reveals use of Clearview AI software in 24 countries
- 30 July 2021 | China’s Supreme People’s Court issues rules to regulate use of facial recognition technology
- 1 July 2021 | Maine, USA enacts regulations for facial surveillance systems
- 29 June 2021 | US federal agencies should better assess risks associated with facial recognition technology, says government report
- 21 June 2021 | EU data protection bodies call for ban on use of AI for automated recognition of human features in public spaces
- 18 June 2021 | UK Information Commissioner issues opinion on live facial recognition in public space
- 11 June 2021 | Council of Europe ministerial conference tackles AI and freedom of expression
- 27 May 2021 | Australian Human Rights Commission recommends moratorium on high-risk facial recognition technology
- 27 May 2021 | Civil rights groups file privacy complaints against facial recognition company Clearview AI
- 19 May 2021 | Amazon extends moratorium on police use of facial recognition
- 16 April 2021 | Italian data protection authority: Sari facial recognition system proposed by Ministry of Interior could lead to mass surveillance
- 16 April 2021 | Members of European Parliament and civil society groups call on Commission to ban biometric mass surveillance
- 8 March 2021 | MEPs call on the European Commission to prioritise human rights in AI legislative proposal
- 12 February 2021 | Minneapolis City Council bans use of facial recognition technology
- 11 February 2021 | Police use of Clearview AI found unlawful in Sweden
- 2 February 2021 | Clearview AI found in breach of Canadian privacy laws
- 28 January 2021 | Council of Europe calls for strict regulation of facial recognition
- 26 January 2021 | Amnesty International calls for ban on facial recognition in New York
- 21 January 2021 | Group advising UK government issues recommendations on public-private collaboration in using live facial recognition technology
- 12 January 2021 | Civil society organisations call on European Commission to introduce red lines in upcoming AI legislative proposal
- 7 January 2021 | European Commission registers citizens’ initiative calling for ban on biometric mass surveillance
- 14 December 2020 | Massachusetts bill banning public agencies from using facial recognition returned by state governor
- 14 December 2020 | EU Agency for Fundamental Rights issues report on AI and fundamental rights
- 10 December 2020 | New York City Council bans businesses from using facial recognition without public notice
- 3 December 2020 | UK Surveillance Camera Commissioner issues guidelines for police on use of facial recognition technology
- 1 December 2020 | Massachusetts legislators adopt bill banning public agencies from using facial recognition
- 17 November 2020 | Los Angeles Police bans use of external facial recognition systems
- 12 November 2020 | European NGOs launch ‘Reclaim Your Face’ campaign against facial recognition in public spaces
- 12 November 2020 | Canadian data protection authority issues proposals for regulating AI
- 4 November 2020 | Portland city (USA) votes to ban use of facial recognition by city agencies
- 21 October 2020 | Presidency of the Council of the EU issues conclusions on AI and human rights
- 21 October 2020 | Global Privacy Assembly adopts resolution on AI and facial recognition technology
- 8 October 2020 | Vermont, USA places moratorium on police use of facial recognition technology
- 9 September 2020 | Portland city (USA) introduces restrictions on facial recognition technology
Ethical concerns
As AI algorithms involve judgements and decision-making - replicating similar human processes - concerns are being raised regarding ethics, fairness, justice, transparency, and accountability. The risk of discrimination and bias in decisions made by or with the help of AI systems is one such concern, as illustrated by the debate over facial recognition technology (FRT). Several studies have shown that FRT programs exhibit racial and gender biases, as the algorithms involved are largely trained on photos of males and of white people. If law enforcement agencies rely on such technologies, this could lead to biased and discriminatory decisions, including false arrests.

One way of addressing concerns over AI ethics could be to combine ethical training for technologists (encouraging them to prioritise ethical considerations when creating AI systems) with the development of technical methods for designing AI systems in a way that avoids such risks (i.e. fairness, transparency, and accountability by design). The Institute of Electrical and Electronics Engineers’ Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems is one example of an initiative aimed at ensuring that technologists are educated, trained, and empowered to prioritise ethical considerations in the design and development of intelligent systems. Researchers are also exploring the ethical challenges posed by AI and are working, for example, on the development of AI algorithms that can 'explain themselves': Being able to better understand how an algorithm reaches a certain decision could also help improve that algorithm.
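The kind of audit used to expose such biases can be sketched in a few lines of Python: instead of looking only at overall accuracy, compare error rates across demographic groups. All records below are invented for illustration.

```python
# A minimal sketch of a fairness audit: compare false match rates across
# demographic groups rather than overall accuracy. All records are invented.
records = [  # (group, predicted_match, true_match)
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False),  # false match: could mean a wrongful arrest
    ("group_b", True, True), ("group_b", True, False),
    ("group_b", False, False),
]

for group in ("group_a", "group_b"):
    rows = [r for r in records if r[0] == group]
    negatives = [r for r in rows if not r[2]]          # people who are NOT a match
    false_matches = sum(1 for r in negatives if r[1])  # but were flagged anyway
    fpr = false_matches / len(negatives)
    print(f"{group}: false match rate {fpr:.0%} on {len(rows)} samples")
```

A system can score well on aggregate accuracy while one group bears almost all of the false matches; auditing per group makes that visible.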
Developments
- 16 March 2022 | Socio-technical approach is needed to mitigate bias in AI, NIST report argues
- 15 September 2021 | IEEE launches standard addressing ethical concerns during system design
- 2 July 2021 | UNESCO gets closer to adopting recommendation on AI ethics
- 10 January 2021 | New York City Council proposes bill to regulate automated employment decision tools
- 1 December 2020 | Working group created to advance ethical and responsible AI
- 5 November 2020 | Pope Francis emphasises that AI progress must serve humankind
- 22 July 2020 | US Intelligence Community releases AI ethics principles
Governing AI
One overarching question is whether AI-related challenges (especially regarding safety, privacy, and ethics) call for new legal and regulatory frameworks, or whether existing ones can be adapted to address them. Adapting current regulation is seen by many as the most suitable approach for the time being, and governments are advised, when considering regulatory approaches towards AI, to pay attention to ensuring that such approaches do not hinder innovation and progress.

Aspects related to accountability and liability in AI systems are also viewed as important legal issues to consider, and questions are being raised regarding the legal status of AI machines (i.e. robots): Should they be regarded as natural persons, legal persons, animals, or objects, or should a new category be created? In a January 2017 report containing recommendations to the European Commission on civil law rules on robotics, the European Parliament recommended that the Commission consider 'creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently'.

Such a proposal, however, was met with reticence by some, as demonstrated in an open letter addressed to the European Commission by over 150 AI and robotics experts, industry leaders, and other experts. In their view, creating a legal personality for a robot is inappropriate from an ethical and legal perspective: While aware of the importance of addressing the issue of liability of autonomous robots, they believe that 'creating a legal status of electronic person would be ideological and non-sensical and non-pragmatic'.
Developments
- 20 July 2022 | UK government issues AI action plan and outlines AI regulatory approaches
- 16 June 2022 | AI act tabled in the Canadian parliament
- 26 May 2022 | Singapore launches AI testing framework
- 1 April 2022 | China to inspect Big Tech’s algorithms to ensure compliance with rules
- 29 January 2022 | Chinese authorities propose regulation for deepfakes
- 6 January 2022 | China publishes regulation for algorithms used in recommendation systems
- 21 December 2021 | European Patent Office confirms decision that AI cannot be considered inventor
- 29 November 2021 | UK government publishes standard for algorithmic transparency
- 8 October 2021 | USA to develop AI ‘bill of rights’
- 30 July 2021 | Australian court rules that AI can be recognised as inventor in patent submissions
- 28 July 2021 | South Africa grants patent for AI-created invention
- 1 July 2021 | Maine, USA enacts regulations for facial surveillance systems
- 30 June 2021 | US Government Accountability Office issues AI accountability framework
- 21 April 2021 | European Commission publishes proposal for regulating AI
- 14 April 2021 | EU draft regulation on AI leaked online
- 30 March 2021 | Council of Europe’s Ad hoc Committee on AI launches multistakeholder consultation
- 22 March 2021 | Over 20 companies call on G7 to establish Data and Technology Forum
- 8 March 2021 | MEPs call on the European Commission to prioritise human rights in AI legislative proposal
- 17 December 2020 | Council of Europe’s CAHAI adopts feasibility study on AI legal framework
- 14 December 2020 | Council of Europe’s CAHAI releases publication titled ‘Towards regulation of AI systems’
- 3 December 2020 | US president issues executive order on trustworthy AI in federal government
- 17 November 2020 | White House issues Guidance for Regulation of Artificial Intelligence Applications
- 12 November 2020 | Canadian data protection authority issues proposals for regulating AI
- 4 November 2020 | Portland city (USA) votes to ban use of facial recognition by city agencies
- 22 October 2020 | Council of Europe’s Parliamentary Assembly calls for legally binding instrument to govern AI
- 8 October 2020 | Vermont, USA places moratorium on police use of facial recognition technology
- 9 September 2020 | Portland city (USA) introduces restrictions on facial recognition technology
- 28 July 2020 | New Zealand launches Algorithm Charter
AI governmental initiatives
As AI technologies continue to evolve at a fast pace and find more and more applications in various areas, countries are increasingly aware that they need to keep up with this evolution and take advantage of it. Many are developing national AI development strategies, as well as addressing the economic, social, and ethical implications of AI advancements. China, for example, released a national AI development plan in 2017, intended to help make the country the world leader in AI by 2030 and build a national AI industry worth US$150 billion. The United Arab Emirates (UAE) also has an AI strategy, which aims to support the development of AI solutions in several vital sectors, such as transportation, healthcare, space exploration, smart consumption, water, technology, education, and agriculture. The country has even appointed a State Minister for AI to work on ‘making the UAE the world’s best prepared [country] for AI and other advanced technologies’. In 2018, France and Germany were among the countries that followed this trend of launching national AI development plans. These are only a few examples: Many more countries are working on such plans and strategies on an ongoing basis.
- Read more about governmental AI initiatives and stay up to date with developments
- See also: Artificial intelligence in Africa: National strategies and initiatives
AI on the international scene
AI and its various existing and potential applications are featured more and more often on the agenda of intergovernmental and international organisations. The International Telecommunication Union (ITU), for example, is facilitating discussions on intelligent transport systems, while the International Labour Organization (ILO) is looking at the impact of AI automation on the world of work. AI has also featured prominently on the agenda of meetings such as the World Economic Forum, G7 summits, and OECD gatherings. All of these entities and processes are exploring different policy implications of AI and suggesting approaches for tackling the challenges inherent to the technology. Some intergovernmental organisations have established processes to look at certain aspects of AI and its uses. Within the UN system, for example, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) to explore the technical, military, legal, and ethical implications of LAWS.
Read more about the GGE on LAWS on the dedicated process page
The Council of Europe set up a Committee of Experts to study the human rights dimensions of automated data processing and of different forms of AI, as well as an Ad Hoc Committee on AI (CAHAI) to examine the feasibility of a legal framework for the development, design, and application of AI. The latter was later followed by a Committee on AI, dedicated to establishing an international negotiation process and elaborating a legal framework on the development, design, and application of AI, based on the Council of Europe’s standards on human rights, democracy, and the rule of law.

The European Commission created a High-Level Expert Group on Artificial Intelligence to support the implementation of the European strategy on AI and to elaborate recommendations on future-related policy development and on ethical, legal, and societal issues related to AI, including socio-economic challenges.

Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, the Republic of Korea, Singapore, Slovenia, the UK, the USA, and the EU have launched the Global Partnership on Artificial Intelligence (GPAI) - an international and multistakeholder initiative dedicated to guiding 'the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth'.
- See also: AI in Africa: Continental policies and initiatives | Africa's participation in international processes related to AI
Developments
- 3 May 2022 | European Parliament: EU should be a global standard-setter in AI
- 4 April 2022 | Council of Europe’s Committee on AI holds inaugural meeting
- 22 February 2022 | OECD launches framework for classifying AI systems
- 17 December 2021 | Group of governmental experts on autonomous weapons to continue work in 2022
- 8 December 2021 | UNESCO and partners launch initiative to promote data and algorithms literacy
- 30 November 2021 | Civil society groups urge EU to prioritise fundamental rights in AI regulation
- 24 November 2021 | UNESCO adopts recommendation on ethics of AI
- 22 October 2021 | NATO publishes its first AI strategy
- 6 October 2021 | European Parliament calls for ban on automated recognition in public spaces
- 15 September 2021 | IEEE launches standard addressing ethical concerns during system design
- 15 September 2021 | UN High Commissioner for Human Rights calls for moratorium on AI systems carrying high risks for human rights
- 15 September 2021 | Intergovernmental organisations launch globalpolicy.AI platform on AI governance
- 13 September 2021 | European Commission launches InTouchAI.eu initiative to promote human-centric AI
- 2 July 2021 | UNESCO gets closer to adopting recommendation on AI ethics
- 21 June 2021 | EU data protection bodies call for ban on use of AI for automated recognition of human features in public spaces
- 18 June 2021 | OECD issues report on state of implementation of AI principles
- 11 June 2021 | Council of Europe ministerial conference tackles AI and freedom of expression
- 20 May 2021 | OECD launches consultation on framework for classifying AI systems
- 21 April 2021 | European Commission publishes proposal for regulating AI
- 19 April 2021 | Chair summary highlights work of the GGE on LAWS
- 14 April 2021 | EU draft regulation on AI leaked online
- 30 March 2021 | Council of Europe’s Ad hoc Committee on AI launches multistakeholder consultation
- 17 March 2021 | Council of Europe’s Committee of Ministers adopts declaration on AI-enabled social services decision-making
- 17 March 2021 | European Parliament committee adopts draft resolution on AI in education, culture, and audiovisual sector
- 8 March 2021 | MEPs call on the European Commission to prioritise human rights in AI legislative proposal
- 9 February 2021 | ITU launches focus group on AI for natural disaster management
- 28 January 2021 | Council of Europe calls for strict regulation of facial recognition
- 20 January 2021 | European Parliament adopts report on civil and military uses of AI
- 17 December 2020 | Council of Europe’s CAHAI adopts feasibility study on AI legal framework
- 14 December 2020 | Council of Europe’s CAHAI releases publication titled ‘Towards regulation of AI systems’
- 12 December 2020 | European Parliament committee adopts guidelines for military and non-military use of AI
- 5 November 2020 | Freedom Online Coalition issues statement on AI and human rights
- 22 October 2020 | Council of Europe’s Parliamentary Assembly calls for legally binding instrument to govern AI
- 21 October 2020 | Presidency of the Council of the EU issues conclusions on AI and human rights
- 21 October 2020 | Global Privacy Assembly adopts resolution on AI and facial recognition technology
- 1 October 2020 | European Parliament’s Legal Affairs Committee adopts reports on AI; the resolutions are adopted in the Parliament on 20 October
- 23 September 2020 | European Parliament's AI committee holds constitutive meeting
- 28 July 2020 | IEEE Standards Association announces three new AI initiatives
- 22 July 2020 | G20 Digital Ministers reiterate commitment to promoting human-centred AI
The applications of AI
AI has been around for many years. Launched as a field of research more than 60 years ago, AI now has applications in many areas, from online services to industry and healthcare. Let’s take a look.
AI chatbots and assistants
Multiple tech companies have developed AI chatbots and virtual assistants intended to make people’s lives easier. At their core, AI-powered chatbots facilitate communication between users and devices, most often via text-based commands. In general, chatbots are programmed to provide specific replies to specific questions or statements. More advanced virtual assistants - embedded in desktop computers, smartphones, smart speakers, and other IoT devices - can perform Internet searches, manage calendars, control media players, etc. Most of them act on voice commands; activation is triggered either by a keyword (like ‘Hey Google’ for Google Assistant) or after the user taps an icon (as with Siri on Mac computers). Some of the most advanced AI assistants are embedded in smart home systems: They allow users to control IoT-powered home devices simply by voice (adjusting music volume, turning on the heating system, opening the garage door, etc.). Apple’s Siri, Amazon’s Alexa, Google Assistant, and Microsoft’s Cortana are some of the most famous and widely used examples of AI virtual assistants. As virtual assistants gain new capabilities and are increasingly used by the mass market, issues of privacy and data protection come into focus: What happens with the data collected by these assistants? Is this data stored only locally, or is it transferred to companies and used for other purposes?
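The ‘specific replies to specific questions’ model, together with keyword activation, can be sketched in a few lines of Python. The wake word and intents below are invented; commercial assistants rely on far richer speech recognition and natural language understanding.

```python
# A minimal sketch of a keyword-activated chatbot: dormant until it hears a
# wake word, then matches keywords to canned replies. All intents are invented.
WAKE_WORD = "hey demo"

INTENTS = {  # keyword patterns -> canned replies
    ("weather",): "Sorry, I am not connected to a forecast service.",
    ("lights", "on"): "Turning the lights on.",
    ("lights", "off"): "Turning the lights off.",
}

def reply(utterance: str) -> str:
    text = utterance.lower()
    if not text.startswith(WAKE_WORD):
        return ""  # assistant stays dormant until it hears the wake word
    for keywords, answer in INTENTS.items():
        if all(k in text for k in keywords):
            return answer
    return "Sorry, I did not understand that."

print(reply("hey demo, turn the lights on"))  # -> Turning the lights on.
print(reply("what's the weather like?"))      # -> '' (no wake word, stays silent)
```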
Online services and applications
Internet companies increasingly rely on AI to improve their online services or design new ones. AI algorithms are behind search engines, social media platforms, and online stores, among others. For example, Twitter uses AI to improve users’ experience, Google operates a job search engine based on AI algorithms, and Microsoft has a range of AI-based intelligent applications (from Calendar.help to the AI chatbot Zo). AI is also used by Internet platforms to identify and remove hate speech, terrorist content, and other forms of harmful online content. Researchers are exploring new algorithms that could be used more efficiently in content-control policy, while reducing the risk of bias and the possible negative consequences for freedom of expression.
Translation
Researchers have been working on improving the accuracy of translation tools by using AI. Examples include Microsoft Translator and Google Translate. The World Intellectual Property Organization (WIPO) also developed an AI-based tool to facilitate the translation of patent documents.
Internet of Things
AI and the Internet of Things (IoT) complement each other. AI provides the ‘thinking’ for IoT devices, making them ‘smart’. These devices, in turn, generate significant amounts of data - sometimes labelled big data. This data is then analysed and used to verify the initial AI algorithms and to identify new cognitive patterns that could be integrated into new AI algorithms. The interplay between AI and the IoT can already be seen in multiple applications: smart home devices able to learn users’ preferences and adapt to their habits, vehicle autopilot systems, drones, smart city applications, etc. Scientists are continuously looking at new ways in which AI and the IoT can work together. A team at the Massachusetts Institute of Technology (MIT), for example, has developed a chip that could enable IoT devices to run powerful AI algorithms locally, thus improving their efficiency. The policy implications of AI and IoT-powered applications cover issues such as privacy, data protection, cybersecurity, and cybercrime.
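A minimal Python sketch of such on-device ‘thinking’: a sensor keeps a small rolling model of normal readings and flags anomalies locally, instead of streaming every measurement to the cloud. The temperature stream below is simulated.

```python
# A minimal sketch of local 'smarts' on an IoT sensor: a rolling model of
# normal readings flags anomalies on-device. The data stream is simulated.
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=20)  # recent readings = the device's model of 'normal'

def check(reading: float) -> bool:
    """Return True if the reading looks anomalous, then learn from it."""
    anomalous = False
    if len(window) == window.maxlen:
        mu, sigma = mean(window), stdev(window)
        anomalous = abs(reading - mu) > 3 * sigma  # simple 3-sigma rule
    window.append(reading)
    return anomalous

stream = [21.0 + 0.1 * (i % 5) for i in range(40)] + [35.0]  # sudden spike
for t, value in enumerate(stream):
    if check(value):
        print(f"t={t}: anomalous reading {value} C, alerting the hub")
```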
Cybersecurity and cybercrime
As cyber-threats become increasingly complex, AI has the potential to assist organisations in dealing with cybersecurity and cybercrime challenges more efficiently. AI techniques and AI-driven data analytics help cybersecurity professionals better understand cyber-threats and related risks, allowing them to respond faster and with more confidence. AI is also used to detect breaches, threats, and possible attacks, as well as to devise responses to such risks. AI applications range from tools that can help catch spam and other unwanted messages on social networks or in e-mails, to algorithms that can help protect systems and networks as complex as the CERN grid. The use of AI in authentication, identity, and access management solutions is increasingly relevant, some of which involve the scanning and recognition of biometrics (such as fingerprints, retinas, and palm prints). Many tech companies are developing AI-based cybersecurity applications and platforms, and, as more and more start-ups are launched in this field, innovative solutions are being developed continuously. At the same time, cybercriminals are also turning to AI to speed up their game: They can rely on AI to test and improve their malware, and to devise malware that is immune to existing cybersecurity solutions.
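One common technique, anomaly detection, can be sketched with scikit-learn: an unsupervised model learns what normal network activity looks like and flags deviations for an analyst. The traffic features below are invented.

```python
# A minimal sketch of AI-assisted threat detection: an unsupervised model
# learns 'normal' traffic and flags deviations. All figures are invented.
from sklearn.ensemble import IsolationForest

normal_traffic = [  # [bytes sent (KB), connections per minute]
    [500, 3], [520, 4], [480, 2], [510, 3], [495, 4],
    [530, 5], [505, 3], [490, 2], [515, 4], [500, 3],
]

# Fit on traffic assumed benign; the model scores how 'isolated' new points are.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_events = [
    [505, 3],      # looks like business as usual
    [90000, 250],  # sudden burst: possible data exfiltration or scanning
]
for event, verdict in zip(new_events, detector.predict(new_events)):
    status = "ANOMALY - escalate to analyst" if verdict == -1 else "normal"
    print(event, "->", status)
```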
Autonomous systems (cars, weapons, etc)
Several tech companies (e.g. Google’s Waymo, Uber) and automobile manufacturers (e.g. Audi, Ford, Tesla) are working on autonomous cars powered by AI systems. Their ultimate objective is to develop fully autonomous vehicles, based on systems able to control a vehicle entirely without any human intervention. At the moment, the technology allows what is known as ‘high automation’: The vehicle can perform all driving functions autonomously under certain conditions, while the driver may still have the option to take control. In the automation levels developed by the Society of Automotive Engineers, this corresponds to level 4, one step away from full automation. Companies such as Waymo, General Motors, and Uber are already deploying level 4 autonomous vehicles in some cities (particularly in the USA) as part of pilot projects. Because the testing and operation of autonomous vehicles has safety and security implications, authorities are increasingly moving towards introducing regulations - or at the very least guidelines - to govern these activities.
Read more about Autonomous vehicles on the dedicated page
The automotive industry is not the only one exploring the use of AI to enable autonomous technologies. AI-powered drones are no longer news; however, they too have safety, security, and privacy implications. And the potential development of autonomous weapons systems raises concerns about their implications for humankind.
Healthcare and medical sciences
AI applications in the medical field range from surgical robots to algorithms that could improve medical diagnosis and treatment. In healthcare, big data and machine learning are improving diagnosis and the ability to establish customised treatments for different diseases and medical conditions. AI is already used to improve and speed up the detection of diseases such as cancer, and tech companies are continuously working on new AI-powered tools that can assist in the early and accurate detection of medical conditions. Moreover, AI-powered devices are used to monitor a person’s health condition, and caregiving robots are even being developed to provide nursing services. In medical research, scientists can now use big data, algorithms, and AI to explore and analyse vast amounts of data, making their work faster and more accurate. For example, researchers are using AI to develop anti-flu vaccines and to translate human brain signals into speech.
Industrial applications
AI and robotics are drivers of the fourth industrial revolution, especially as smart systems are increasingly deployed in IT, manufacturing, agriculture, power grids, rail systems, etc. Big data and AI can help factories better understand their processes and identify solutions to make them more efficient and reduce energy consumption. Some factories, for example, already use AI to optimise their processes and adapt them to new circumstances, as well as to detect and predict malfunctions in their equipment before they occur. Manufacturers can also use AI to test new ideas, with tools such as Autodesk’s generative design software. Moreover, using AI to improve process efficiency can also reduce environmental impact and cut waste. AI applications in agriculture include autonomous robots able to harvest crops and perform other agricultural tasks, AI-powered hardware and software that monitor and analyse crop and soil conditions (e.g. drones to collect data and AI techniques to analyse it), and algorithms that track and predict weather and other environmental conditions that can affect crops. In the energy sector, AI-powered robots are tasked with inspecting, repairing, and maintaining energy installations. Power grid operators also use AI to analyse vast amounts of data, improve grid management, and monitor the balance between electricity supply and demand. In railway systems, AI solutions are deployed to monitor railway networks and assist in maintenance operations. There are many other examples of AI being used across industrial sectors, and more applications are being developed on an ongoing basis. As industry increasingly relies on AI solutions, multiple policy issues come into focus, from the impact on the labour market to the need to protect AI-dependent infrastructures from cyber-risks.
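The predictive maintenance idea mentioned above can be sketched in Python: learn from historical sensor readings which conditions preceded failures, then flag machines at risk. All figures are invented.

```python
# A minimal sketch of predictive maintenance: learn which sensor conditions
# preceded past failures, then flag at-risk machines. Data is invented.
from sklearn.ensemble import RandomForestClassifier

# Historical readings: [vibration (mm/s), temperature (C)] and whether the
# machine failed within the following week.
readings = [
    [2.1, 60], [2.4, 62], [2.0, 58], [2.3, 61],   # healthy
    [7.8, 85], [8.4, 90], [7.1, 88], [8.9, 92],   # preceded failures
]
failed_soon = [0, 0, 0, 0, 1, 1, 1, 1]

model = RandomForestClassifier(random_state=0).fit(readings, failed_soon)

# Score today's fleet and schedule maintenance for high-risk machines.
fleet = {"pump-1": [2.2, 59], "pump-2": [7.5, 87]}
for name, features in fleet.items():
    risk = model.predict_proba([features])[0][1]
    note = " -> schedule maintenance" if risk > 0.5 else ""
    print(f"{name}: failure risk {risk:.0%}{note}")
```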
Financial services
AI is increasingly used by financial institutions such as banks and credit lenders to make credit decisions. For example, algorithms and machine learning analyse different types of information to help decide whether to offer a loan to a potential customer. Improving predictions and managing risks are other areas where AI has proven useful for financial institutions: In 2017, for example, traders relied on analytical solutions provided by the AI company Kensho to predict an extended drop in the British pound. AI is also demonstrating growing efficiency in fraud prevention and detection. The technology is used in credit card fraud detection systems, which rely on information about a client’s buying behaviour and location history to identify potentially fraudulent activities that contradict their usual spending habits. In the banking sector, AI applications range from AI-powered assistants that help clients with tasks such as scheduling payments and checking balances, to apps that offer personalised financial advice. Other uses of AI in the financial sector cover trading and investment banking (for example, for investment research or predictive analytics), underwriting (to predict whether a loan applicant is likely to pay back the loan), insurance services (e.g. automating claims processing or customising insurance policies), and authentication and identity verification (e.g. software that identifies a customer via facial or fingerprint recognition, in online banking systems or at ATMs).
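The behavioural side of card fraud detection can be sketched in a few lines of Python. Real systems use machine learning models trained on millions of transactions; this sketch uses two hand-written rules to show the underlying idea of comparing each transaction against a customer's profile.

```python
# A minimal sketch of behaviour-based fraud screening: compare each new
# transaction with the customer's usual pattern. All figures are invented.
from math import dist

profile = {
    "avg_amount": 45.0,      # customer's typical spend per transaction
    "home": (48.85, 2.35),   # usual location (lat, lon)
}

def looks_fraudulent(amount: float, location: tuple) -> bool:
    unusual_amount = amount > 10 * profile["avg_amount"]
    far_from_home = dist(location, profile["home"]) > 5.0  # rough, in degrees
    return unusual_amount and far_from_home

transactions = [
    (38.0, (48.86, 2.34)),    # groceries near home: fine
    (900.0, (40.71, -74.0)),  # large purchase far away: flag for review
]
for amount, location in transactions:
    if looks_fraudulent(amount, location):
        print(f"Flagged: {amount} at {location} - ask the customer to confirm")
```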
Education
AI holds a lot of promise in the education sector. AI tools are used by educational institutions to make administrative tasks more efficient, to automate grading (especially for multiple-choice tests), or to speed up admission processes. AI is also increasingly employed in the development of smart content. For example, Cram101, developed by Content Technologies, relies on AI to make textbook content more comprehensible to students by summarising chapters, providing flashcards and tests, etc.

Intelligent tutoring systems use AI to adapt the educational process to the characteristics and needs of each student. The China-based company Squirrel, for example, focuses on helping students score better on standardised tests. Courses are divided into many small elements called knowledge points. For each point, there are video lectures, notes, examples, and exercises. Throughout the study process, the system determines which knowledge points the student needs to focus on more, and adapts the curriculum accordingly (a minimal sketch of this mechanism follows below). Such a system is described as adaptive learning: It determines what students know and do not know, and focuses on the latter. Going a step further, personalised learning aims to customise the learning process not only to what students know and do not know, but also to what they want to learn and how they learn best. Personalised learning frameworks rely on AI to analyse vast amounts of information about students and to provide new content and learning experiences that match each student’s specific profile.

Some schools have started experimenting with virtual facilitators and intelligent tutors. For example, schools in Bengaluru, India, use robots to complement human teachers. The robots are taught to deliver certain lessons and to respond to frequently asked questions from students. This, in turn, gives teachers more time to focus on the children and on more personalised teaching. As the potential of AI in education continues to be explored, questions are also being raised: Is adaptive learning indeed useful? Or does it focus too much on standardised learning and testing, without actually preparing students to adapt to the fast-changing world of work? Would students be better off learning via intelligent platforms or from robots, or would they miss the interaction with human teachers?
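The knowledge-point mechanism can be sketched in a few lines of Python: keep a mastery estimate per knowledge point, update it after every exercise, and always serve the point the student is weakest on. The point names and learning-rate value are invented.

```python
# A minimal sketch of adaptive learning over 'knowledge points': estimate
# mastery per point and always practise the weakest one. Data is invented.
mastery = {"fractions": 0.5, "decimals": 0.5, "percentages": 0.5}
LR = 0.3  # how quickly estimates react to new answers

def record_answer(point: str, correct: bool) -> None:
    """Nudge the mastery estimate towards 1 on success, 0 on failure."""
    target = 1.0 if correct else 0.0
    mastery[point] += LR * (target - mastery[point])

def next_knowledge_point() -> str:
    """Adapt the curriculum: practise the weakest knowledge point."""
    return min(mastery, key=mastery.get)

for point, correct in [("fractions", True), ("decimals", False),
                       ("percentages", True), ("decimals", False)]:
    record_answer(point, correct)

print(mastery)                 # 'decimals' drops towards 0 after two misses
print(next_knowledge_point())  # -> 'decimals'
```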
Public sector
AI is increasingly used in the public sector: in public administration, law enforcement, judicial systems, etc. Thanks to its ability to process vast amounts of information and identify connections between data sets, AI brings more efficiency to administrative processes, helping to improve the provision of public services. Smart virtual assistants are already used by public authorities to improve interaction with citizens; examples include Latvia’s UNA and Singapore’s Ask Jamie. Parliamentary processes can also benefit from AI tools: The Indian Parliament, for example, has embarked on a journey to use AI for more efficient data processing and for simplifying and improving legislative work. Law enforcement agencies also rely on AI in some of their work; for example, facial recognition technology can help them identify criminal suspects. Judges and courts may turn to AI in the hope that it will help them issue more consistent decisions or make the justice system cheaper and fairer. But things do not always go as planned, and unintended consequences can appear: Using AI can also lead to biased and discriminatory decisions.
Entertainment
AI applications in the entertainment sector are numerous, covering the movie industry, sports, games, and fashion, among others. Customised user experience is one illustration of AI at work in these sectors: Netflix, for example, relies on machine learning to suggest movies its users are likely to want to watch, and the personal styling service Stitch Fix uses data and algorithms to pick clothing items and accessories that match its customers’ style and preferences. Using AI to design clothes is a reality as well, as demonstrated by Glitch, a company founded by two computer scientists. In the gaming industry, AI is used to create a more enjoyable player experience. In sports, the technology has multiple applications, from assessing the performance of players and predicting fatigue and injuries, to optimising broadcasting and advertising activities. AI is also starting to be used in audiovisual content production. In the movie industry, for example, IBM’s Watson and its underlying machine learning techniques were used to develop the trailer for 20th Century Fox’s movie Morgan, while McCann Erickson Japan developed an AI-powered creative director to direct the production of TV commercials. US-based Digital Domain uses AI to produce advanced visual effects for movies, while the Belgian company Scriptbook claims its AI algorithms can predict whether a film will be successful by analysing the script. Flow Machines and Amadeus Code employ algorithms to assist artists and amateurs in creating music. AI’s potential in content production also generates concerns; one increasingly relevant example is that of deepfakes - the use of AI to create fake video and audio recordings that could be used for malicious purposes. There are also questions about the use of AI in personalising user experiences, particularly the question of choice: If we simply rely on ‘recommendations’ made by content streaming platforms or online stores, to what extent are our choices really personal?
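The ‘you might also like’ mechanism can be sketched in Python with a tiny collaborative filter: find viewers with similar tastes and suggest titles they rated highly. The viewers, titles, and ratings below are invented; production recommenders are far more sophisticated.

```python
# A minimal sketch of collaborative filtering: recommend what similar
# viewers rated highly. All viewers, titles, and ratings are invented.
ratings = {
    "ana":   {"space_opera": 5, "noir_city": 4, "robot_docu": 1},
    "ben":   {"space_opera": 5, "noir_city": 5, "cooking_show": 4},
    "chloe": {"robot_docu": 5, "cooking_show": 2},
}

def similarity(a: dict, b: dict) -> float:
    """Rough agreement on commonly rated titles (0 if no overlap)."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    return sum(a[t] * b[t] for t in common) / (len(common) * 25)

def recommend(user: str) -> str:
    seen = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = similarity(seen, theirs)
        for title, rating in theirs.items():
            if title not in seen:
                scores[title] = scores.get(title, 0) + sim * rating
    return max(scores, key=scores.get)

print(recommend("ana"))  # -> 'cooking_show', via the similar viewer 'ben'
```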
AI research and development
The private sector and the academic community alike are continuously exploring new avenues of AI development. While some focus on developing new AI-based applications in areas such as those mentioned above, others focus on addressing issues such as accountability and responsibility in AI algorithms. In October 2017, for example, researchers at Columbia and Lehigh universities presented a tool, called DeepXplore, that could help bring transparency into AI systems through a process described as ‘reverse engineering the learning process to understand its logic’. And in June 2018, IBM presented an AI system able to engage in reasoned arguments with humans on complex topics.
Developments
- 28 July 2022 | DeepMind uses AI to predict the structure of almost all proteins
- 30 May 2022 | Google bans deepfake projects on its Colab platform
- 13 April 2022 | US researchers develop devices to allow AI to work without connecting to the internet
- 24 January 2022 | Meta announces AI supercomputer
- 11 January 2022 | Malta launches AI research fund
- 20 December 2021 | US administration launches portal for AI researchers
- 16 June 2021 | Experts develop AI model to detect and attribute deepfakes
- 14 May 2021 | Facebook researchers teach AI to forget
- 6 May 2021 | Researchers develop AI model able to detect sarcasm in social media
- 22 March 2021 | Researchers use ultrasound and machine learning to decode and predict movement intentions in brain
- 11 January 2021 | Artificial Intelligence Lab for Biosciences announced in the Netherlands
- 13 October 2020 | Researchers develop AI system that mimics biological models to function with small number of neurons
- 19 May 2020 | Microsoft announces AI supercomputer
- 24 March 2020 | US universities launch projects to demystify AI black boxes
- 25 February 2020 | Researchers connect brain and artificial neurons via the Internet