Disruptive technology I: What does artificial intelligence mean for human rights due diligence?

26 Nov 2018 01:00h

Event report

The session was organised by Business for Social Responsibility (BSR) and Article One. It was co-moderated by Mr Dunstan Allison-Hope (Managing Director, Business for Social Responsibility (BSR)) and Mr Faris Natour (Co-Founder and Principal, Article One). In his opening remarks, Allison-Hope mentioned the risks related to artificial intelligence (AI) stemming from the complexity of the deployed technology. He further noted the speed with which developments take place in the field, the uncertainty connected to AI developments, and the question of how human rights protection mechanisms can be implemented amid these evolutions. Natour highlighted that we are already in the middle of AI-driven transformations and that AI is already being used in many sectors and in tools such as translation apps, facial recognition, and search engine algorithms.

Ms Hibah Kamal-Grayson (Public Policy Manager, Human Rights and Internet Governance, Google) said that at its core, AI is an adaptive technology which is already used in various fields, such as spam detection, as well as for more complex issues such as wildfire predictions.

She highlighted that Google abides by existing laws, but that it also respects the principles set out by the Global Network Initiative and its internal AI principles published in June 2018. Kamal-Grayson recognised that while Google works according to its own principles, they are a good starting point and can be expanded upon. She further pointed out that core tensions arise from the difficulty of developing all-encompassing due diligence standards while designing them in a way that makes them enforceable. ‘No one stakeholder, no one sector will be able to figure out this challenge by itself’.

Ms Eimear Farrell (Advocate and Advisor, Technology and Human Rights, Amnesty Tech, Amnesty International) spoke about the UN Global Compact’s Project Breakthrough, which aims to analyse how AI can help achieve the sustainable development goals (SDGs).

Farrell identified the increasing use of human rights language in AI principles, rather than references to ethical standards, as a very positive development in the field. According to her, the UN Guiding Principles on Business and Human Rights need to adapt to, and incorporate, the technological developments of AI. Referring to the Universal Declaration of Human Rights, Farrell noted that it was very innovative when it was first adopted, and that new frameworks for AI also need to be bold. She also saw a role for civil society in engaging with companies, not just calling them out on human rights abuses.

Mr Steve Crown (Deputy General Counsel, Human Rights, Microsoft) said that both ethics and human rights language are included in Microsoft’s guiding principles, but that which framing is emphasised most depends on the audience.

Through human rights impact assessments, Microsoft has tried to determine where its responsibility lies across its entire value chain. Crown mentioned the need to give AI users guidance, similar to the instructions accompanying medicine, in order to inform them about the appropriate uses and limitations of the technology. Given the adaptive nature of AI, simply accessing the source code of certain algorithms is insufficient to understand certain processes. It is therefore important to inform users about the potential applications and limits of the technology they are using.

Ms Sabrina Rau (Senior Research Officer, Big Data and Technology Project, School of Law Human Rights Centre, University of Essex) explained that while AI learns and adapts through algorithms, data and big data are the elements fuelling the technology and should thus be given more attention. According to Rau, human rights due diligence must be respected at all stages of the value chain, given that wrongful or skewed data can lead to unwanted outcomes. Data must therefore also be monitored and managed along the value chain. Rau also mentioned the importance of due diligence in business relationships and of ensuring transparency throughout these processes.

Ms Kelli Schlegel (Manager, Human Rights, Intel) said that in order to implement privacy by design, it needs to be a core concern for companies. As technology becomes increasingly widespread, more issues are coming to light. Developers must therefore be trained in human rights protection and due diligence mechanisms in order to understand how their creations can impact human rights and how to develop technology that operates within the boundaries of due diligence. Schlegel also mentioned that implementing due diligence is often easier in existing processes than when designing new applications. However, once the due diligence phase is in place, a review board or a channel for employees to raise concerns about respect for human rights needs to be established.

Mr Minwoo Kim (Research Professor, Korea University Human Rights Center) explained that AI carries many risks, as it amplifies privacy issues due to the data it collects and requires in order to function. The trend towards decentralisation only increases these difficulties. Kim further noted that privacy by design protects only one human right, whereas due diligence should take the entire human rights framework into account.

Ms Olga DiPretoro (Program Officer, Winrock International) spoke about developments that have reinforced companies’ due diligence role, and mentioned the example of the US Tariff Act, which requires companies to prove continuous monitoring of their value chains.

According to DiPretoro, data checks need to be enhanced in order to reinforce due diligence mechanisms. Companies’ data has so far been assessed individually, and neither the data nor the results of its analysis are shared within the industry. She pointed to duplicated efforts where audit results were not shared, thereby limiting companies’ ability to make improvements. She urged improved collaboration between business and civil society on information sharing that would allow consistent analysis.

A representative from SAP mentioned that AI is good at discovering patterns and improving processes. According to him, human rights should be viewed as a business process that starts with a fundamental commitment to the UN Guiding Principles on Business and Human Rights. While the commitment to these principles does not involve AI directly, it is a crucial step towards respecting due diligence. He noted that AI can help assess the actual and potential impacts of businesses on human rights by predicting risks or visualising relationships between suppliers and customers. AI can thus be used to monitor value chain processes, to follow ongoing interactions with communities and their application of AI technology, and to analyse contracts and other legal documents to identify weak human rights protection mechanisms. This information can then be shared within a collaborative framework and used to benchmark businesses’ performance.