Emerging technologies and human rights

Report


This session, moderated by Ms Viveka Bonde (LightNet Foundation) and Mr Jan Kleijssen (Director of the Information Society and Action against Crime Directorate, Council of Europe), featured discussions on how to ensure that the development of emerging technologies is in line with human rights, focusing in particular on the role that states and companies should play in responding to these issues.

Kleijssen introduced the discussion by briefly presenting the work of the Council of Europe on human rights online and its conventions, as well as its new mandate to establish a legal instrument directly addressing the impacts of artificial intelligence (AI).

First, Mr Olivier Bringer (Head of Next Generation Internet Unit, European Commission) argued that regulators face a number of challenges in tackling these issues, partly because policymaking and technological development move at conflicting speeds. For Bringer, a significant corpus of legislation is already available, so there is no need to reinvent the wheel in future regulations. Instead, more resources need to be made available to regulators, and collaboration among them should be strengthened. Digital literacy is also fundamental: policymakers, but also judges, need to acquire the skills necessary to understand AI technologies.

Ms Lise Fuhr (Director-General, European Telecommunications Network Operators' Association (ETNO)) presented the telecom industry's position on these issues. Fuhr insisted that more work needs to be done on the principles guiding AI, but that new regulations should not prevent the development of these technologies. AI is now used in a wide range of sectors, illustrating how complex and problematic its regulation can be. When considering changes to the regulatory framework, Fuhr stressed that predictability and legal certainty remain essential for the business sector.

Ms Joanna Goodey (Head of Research & Data Unit, European Union Agency for Fundamental Rights (FRA)) argued that when it comes to AI, debates continue over whether there should be a unified regulatory framework or rather sectoral approaches. In the EU, the General Data Protection Regulation (GDPR) is the most up-to-date piece of legislation relevant to AI, but a significant number of the existing rules were developed before these technologies emerged.

Insisting that fundamental rights also apply in the context of emerging technologies, Goodey argued that the issues of redress and access to justice are fundamental for end-users. Redress, by means of penalties or legal action, needs to be available, which highlights the shortcomings of self-regulatory approaches. New regulatory frameworks for AI need to mobilise existing tools (such as the ombudsperson) and models provided by other sectors (in particular in terms of agency and oversight) to enforce the rights of individuals.

Mr Joe McNamee (Independent Expert) argued that these new issues can be addressed by building on past experience in Internet regulation, as well as on already established instruments, such as the European Convention on Human Rights. There is no need to reinvent these instruments, only to continue adapting them. Law and human rights instruments are essential in protecting individuals, but fundamental rights remain the responsibility of states. As shown by the recent EU Terrorism Directive, governments in Europe tend to avoid accountability and instead incentivise private actors to directly enforce their rules, for instance by unilaterally removing users' online content, despite the potential interference with their freedom of expression.

Mr Max Senges (Programme Manager, Google Research & Education) argued that we should remember that the Internet is a recent technological achievement, with a surprisingly resilient architecture. Senges indicated that, at the institutional level, the Internet Governance Forum (IGF) should play a greater role on these issues, including in coordination with other Internet governance institutions. There is a need for more collaboration to find solutions that encompass the perspectives of all stakeholders. Senges also argued that there is no such thing as an 'Internet industry'. The Internet is instead 'eating the world', as everything becomes Internet governance-related. Rather than regulation, there is a great need to work on the principles and ethics allowing for the use of AI in all sectors.

By Clément Perarnaud
