Human rights and the governance of artificial intelligence
February 2020
Policy Reports
INTRODUCTION
Artificial intelligence (AI) has the potential to revolutionize the way both the public and private sectors operate. AI technologies currently power virtual assistants on smart devices, provide fraud alerts for banking applications and help improve health diagnostics. AI solutions are also increasingly used in sectors such as law enforcement, judicial decision-making, border security, international migration management and the military.
To date, there is no single agreed definition of AI. In general, AI can be understood as ‘systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals’. Simply put, AI involves the use of techniques that allow machines to approximate some aspects of human cognition. Machine learning is one of these techniques, by which machines are trained to perform tasks generally associated with human intelligence, such as natural language processing. Deep learning, a subset of machine learning, is also increasingly relied on for image and face recognition. Machines learn from vast amounts of data using algorithms (i.e. sets of instructions used to solve problems). AI algorithms can analyse data, find patterns, make inferences and predict behaviour at a level and speed that greatly surpass human capabilities. Deep learning structures algorithms into layers to create an artificial neural network, enabling machines to learn and make decisions on their own.
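To make this description concrete, the short sketch below (illustrative only, and not drawn from the report itself) uses the open-source scikit-learn library in Python. A small artificial neural network is trained on a built-in dataset of handwritten digits: it learns patterns from labelled examples and then predicts the labels of examples it has never seen. The library, dataset and parameter choices are assumptions made purely for illustration.

    # Illustrative sketch of machine learning: a model learns patterns from
    # labelled example data and then predicts outcomes for unseen cases.
    from sklearn.datasets import load_digits          # small built-in dataset of handwritten digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier  # a simple artificial neural network

    # Hold back a quarter of the data so the model is tested on examples it was not trained on.
    digits = load_digits()
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    # The network is organised in layers; training adjusts its internal parameters
    # so that its predictions match the labelled training examples.
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)

    # The trained model now predicts labels for inputs it has never seen.
    print("Accuracy on unseen examples:", model.score(X_test, y_test))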
AI is bound to enable innovation in the decades to come, so much so that some say it has become the new electricity. If that is truly the case, then policymakers, businesses and civil society must understand the opportunities and challenges before they turn the switch on.