The session addressed the need for governance innovations and practical applications of artificial intelligence (AI) to protect the information ecosystem and human rights.
In his keynote address, Mr Jovan Kurbalija (Executive Director & Co-Lead, United Nations High-level Panel on Digital Cooperation) pointed to the importance of applying values to the digital reality and to the necessity of bringing innovators and regulators to a common level of understanding about AI technologies.
Kurbalija explained that considerations of ethics, human rights, and legal liability are, among others, ways to transfer values to technologies. He highlighted that the Sustainable Development Goals (SDGs) could provide a useful safeguard for technological development and that new technologies should be developed in line with the different SDGs.
According to Kurbalija, another issue is the dichotomy between the promises that come with new developments and their actual delivery. He cited the example of the decentralisation of work through digitally connected devices, noting that studies show the share of remote work remains rather low.
Moreover, he mentioned that in an age where everything is being optimised for maximal efficiency, we should consider having the right to be imperfect.
Mr Marc Warner (CEO, Faculty) explained that disinformation is not a new phenomenon, but that today’s digitised world has made it easier to spread false information. He further noted that we might soon be confronted with ‘deepfakes’ in addition to the already circulating fake news. Deepfakes are synthetic videos that combine existing image and audio material to replicate a person’s face and features while making them appear to say whatever their creator wishes.
Warner noted that deepfakes could be widely deployed as early as 2020 and highlighted the growing concern for future elections and other important events. He explained that these videos are improving rapidly and could become indistinguishable from real ones in some three to four years.
Warner spoke about some of Faculty’s work within the Alliance of Democracies Foundation, which seeks to combat the hazard of deepfakes through public awareness campaigns, the empowerment of the media by equipping them with adequate tools and training, and the connection of contributors to combine ideas for new data approaches.
Mr Bertram Malle (Professor, Department of Cognitive, Linguistic, and Psychological Sciences, Brown University) explained the importance of teaching robots to abide by norms and highlighted that robots will increasingly interact with humans in all aspects of their lives.
Malle noted that robots cannot be ‘raised’ or educated the way humans are, but that they can instead learn through observation and through instructions embedded in their programming. It is therefore essential to ensure the machines’ continuous learning and the flexibility of their programming so that they can adapt to evolving norms.
He pointed to the difficulty of teaching robots, given that machines must be prepared for the norms of their specific environment and that norms are not homogeneous across contexts. Malle also spoke about the necessity of evaluation mechanisms to assess machines’ norm competence before their deployment.
Ms Wafa Ben-Hassine (MENA Policy Counsel, Access Now) identified the growing use of AI in criminal justice decisions as a key future concern and noted the dangers of psychological profiling for election purposes as a worrying trend. She recognised the many benefits of AI but stressed that the environments in which it is deployed must be assessed, given that technological solutions need to be specific and cannot take a universal approach.
Ben-Hassine further spoke about the responsibility of states in developing human rights impact assessments before the deployment and development of AI as well as in establishing procedures for remedy. She also highlighted the importance of holding businesses and developers accountable for their products and pointed out that infringements and failure to comply with the rules should have a significant impact on their revenues.
Ms Regina Surber (Scientific Advisor, ICT4Peace and the Zurich Hub for Ethics and Technology) said that the creation of psychological profiles and the tailoring of information with the intent to manipulate people violate human dignity.
In terms of the benefits of AI technologies, Surber mentioned the ability to outsource many of our tasks, which would in turn create space to reflect on human achievements and new norms, and inspire more reflection on how we want to live.
Surber further noted that tech companies are currently setting principles and are ahead of the regulators. States therefore have to quickly close that information gap in order to enforce human rights frameworks and norms.
Mr Mark Latonero (Research Lead, Data & Human Rights, Data & Society) said that it was a mistake to think that the risks of technology will only appear in the future, noting that the biggest risks are unfolding now, as the technology is being developed. He emphasised that using AI to allocate resources and make decisions in fields such as credit scoring, jail sentencing, and job applications is one of the most important issues, given that these decisions can severely alter human lives. Latonero insisted that existing human rights law already provides a globally recognised framework and expressed his concern that conversations about AI development did not pay more attention to human dignity.
Ms Malavika Jayaram (Executive Director, Digital Asia Hub) criticised the lack of focus on human rights in the debate about AI and said that rights concerns are often dismissed too easily. She explained that discussions about rights are frequently seen as slowing down progress, which is why more attention is given to business and innovation perspectives.
According to Jayaram, one way to reap the benefits of this new technology would be to apply it first to fields that do not require human data, such as agriculture or scientific research. This would give developers more information that could later be used to develop AI applications for use with humans.
Moreover, Jayaram stressed that solutions to the challenges posed by AI must be discussed in an interdisciplinary manner in order to capture all positions.