Moderated by Mr Andrew Smith (Advocacy Officer, ARTICLE 19), this side event addressed the challenges and opportunities that new technologies and artificial intelligence (AI) systems bring to the right to privacy and the protection of civic space. Germany and Brazil have shared a draft resolution at the 42nd Human Rights Council (HRC) on ‘Privacy in the digital age’, and this session aimed to identify the priorities for protecting human rights online. In his introduction, Smith referred to the conclusions on the impacts of AI in the Special Rapporteur on freedom of expression’s 2018 report to the UN General Assembly.
Mr Francisco Vera (Advocacy Officer, Privacy International) addressed the numerous links between AI, facial recognition, and identification (ID) systems. Vera first signalled that ID and surveillance systems differ only in their respective purposes, but fundamentally rely on very similar databases. The creation and consolidation of large databases of personal information can serve many legitimate uses, but also creates great potential for abuse, in particular via the increasing connection of ID systems with facial recognition technologies. According to Vera, though there is no one-size-fits-all solution, the regulation of digital identity systems needs a clear definition of legal bases, and must follow the principles of necessity and proportionality. Finally, privacy and security need to be taken into account from the inception of these systems.
Ms Vidushi Marda (Digital Programme Officer, ARTICLE 19) focussed on the biggest challenges raised by AI and facial recognition in terms of human rights. According to her, facial recognition systems are fundamentally designed to violate privacy, since they require citizens to give up intimate information for the technology to function. The fact that several actors, including the city of San Francisco, have chosen to ban the use of facial recognition technologies by public authorities demonstrates that, aside from regulation, ‘we need to think about taking certain systems off the table’. The unpredictability, opacity, and inaccuracy of these systems should invite us to reconsider their use, especially when they have a direct effect on the human rights of individuals. The use of AI in content moderation shows, for instance, how these systems cannot understand context or satire, and will always remain imperfect.
Mr Danilo Krivokapic (SHARE Foundation, Serbia) first addressed the issue of transparency and accountability by presenting the current situation in Serbia, where authorities are developing new surveillance camera systems. The documentation about this project remains confidential, despite the fact that it will affect all of Serbia’s citizens. Moreover, he recalled that although a number of actors (such as Microsoft) are pushing for regulation of facial recognition, this area is already regulated: mass surveillance is forbidden by international standards, and more than 130 countries have already established a framework for data protection.