Algorithmic transparency and the right to explanation

12 Nov 2018 11:15h - 12:15h

Event report


This breakout session explored the concept of algorithmic transparency, why it is important, and how to make artificial intelligence (AI) and related systems more understandable to the public. The discussion also reflected on the General Data Protection Regulation (GDPR), which, broadly speaking, prohibits subjecting individuals to decisions based solely on automated processing of their personal data without their prior consent.
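To make that provision concrete, the following minimal Python sketch (an editorial illustration, not something presented in the session; the function and field names are hypothetical, and real GDPR Article 22 compliance involves further exceptions) shows an automated decision pipeline that falls back to human review when the user has not given prior consent:

def automated_score(application: dict) -> str:
    # Stand-in for a real scoring model.
    return 'approved' if application['income'] >= 30000 else 'rejected'

def decide(application: dict) -> str:
    # In the spirit of the GDPR: no decision based solely on automated
    # processing without the user's prior consent.
    if not application.get('consented_to_automated_decision', False):
        return 'queued_for_human_review'
    return automated_score(application)

print(decide({'income': 45000}))  # queued_for_human_review
print(decide({'income': 45000, 'consented_to_automated_decision': True}))  # approved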

Mr Alex Comninos, Independent Researcher, and Ms Deborah Brown, Association for Progressive Communications, moderated the discussion. Comninos reminded the audience that AI is a 62-year-old field, currently popular mainly as part of the ecosystem of the Internet of Things (IoT) and big data. It is accessible and profitable, but not yet understandable to the public. He invited the room to set aside science-fiction narratives of AI and instead discuss country-specific examples of explaining AI.

Ms Imane Bello, Lecturer and Researcher, Sciences Po, began by saying that AI is biased and contextual. The concept of algorithmic bias is linked to algorithmic systems that are computational and predictive: to automate decision-making, computer programs are used to identify patterns in data and build on them. The large amounts of historical training data involved are often incomplete or unrepresentative of certain parts of the population. This leads to the ‘reproduction of culturally-engrained biases because algorithmic systems are intrinsically situated in places where they are developed and deployed’, Bello said. Algorithms do not predict the future; they make decisions about the present based on old information. This has direct implications for the GDPR’s provisions safeguarding human rights. The GDPR allows for exceptions, and these instances, she said, should be the focus of the discussion.
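A minimal sketch (an editorial illustration with made-up numbers, not code shown in the session) of how this reproduction of bias works: a frequency-based ‘model’ trained on skewed historical loan decisions simply learns, and then re-applies, the past disparity between two hypothetical groups A and B:

from collections import Counter

# Past decisions: group A was approved 80% of the time, group B only 30%.
historical_decisions = (
    [('A', 'approved')] * 80 + [('A', 'rejected')] * 20 +
    [('B', 'approved')] * 30 + [('B', 'rejected')] * 70
)

# 'Training': estimate the approval rate per group from past decisions.
counts = Counter(historical_decisions)
approval_rate = {
    g: counts[(g, 'approved')] / (counts[(g, 'approved')] + counts[(g, 'rejected')])
    for g in ('A', 'B')
}

# 'Prediction': approve whenever the historical approval rate exceeds 0.5,
# so yesterday's disparity becomes today's decision rule.
for group, rate in approval_rate.items():
    print(group, round(rate, 2), '->', 'approve' if rate > 0.5 else 'reject')
# A 0.8 -> approve
# B 0.3 -> reject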

Ms Karen Reilly, Technical Community, Western Europe and Others Group, said that her main interest is AI and its application as a social problem. According to Reilly, technologists do not pay enough attention to the social sciences, and AI development teams are too homogeneous. With respect to algorithmic transparency, she sees insufficiently planned technologies as the biggest risk. The technology community should document its work more thoroughly and transparently, so that end outcomes can be explained on the basis of every step of developing and deploying AI. Many current approaches to AI transparency are practical tools such as end-user needs assessments, documenting code, and knowing where things are in the infrastructure.
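A minimal sketch (an editorial illustration; the field names are hypothetical) of the kind of process documentation Reilly described: each automated decision is logged together with its inputs, the model version, and the output, so that the outcome can later be explained and audited step by step:

import json
from datetime import datetime, timezone

def log_decision(inputs: dict, model_version: str, output: str,
                 path: str = 'decisions.log') -> None:
    # Append one auditable record per automated decision.
    record = {
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'model_version': model_version,
        'inputs': inputs,
        'output': output,
    }
    with open(path, 'a') as f:
        f.write(json.dumps(record) + '\n')

log_decision({'income': 45000, 'group': 'B'}, 'risk-model-1.3', 'rejected')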

Ms Lorena Jaume-Palasi, Founder, Ethical Tech Society, said her focus is the humans who are the object of the technology. The current debate on algorithms and automated processes concentrates on the mathematical properties of these systems and on statistical bias. Statistical bias becomes relevant once humans interact with the technology. Jaume-Palasi discussed criminal justice algorithms, in particular the COMPAS software used in the United States and the research done on it by ProPublica. One side of the debate concerns the mathematical bias of the software; the other seeks to understand how people use the software to accept or reject decisions suggested by algorithms. ‘It is important to understand what the factors are that manipulate people to use the technology in legitimate or illegitimate ways’, she noted.
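The statistical side of that debate can be illustrated with a minimal sketch (an editorial illustration with made-up numbers, not ProPublica’s actual data): comparing the false positive rate, i.e. the share of people who did not reoffend but were nevertheless flagged as high risk, across two hypothetical groups:

def false_positive_rate(records: list) -> float:
    # Among people who did NOT reoffend, what share were flagged high-risk?
    non_reoffenders = [r for r in records if not r['reoffended']]
    flagged = sum(1 for r in non_reoffenders if r['high_risk'])
    return flagged / len(non_reoffenders)

group_a = ([{'reoffended': False, 'high_risk': True}] * 2 +
           [{'reoffended': False, 'high_risk': False}] * 8)
group_b = ([{'reoffended': False, 'high_risk': True}] * 4 +
           [{'reoffended': False, 'high_risk': False}] * 6)

print(false_positive_rate(group_a))  # 0.2
print(false_positive_rate(group_b))  # 0.4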

During the interactive discussion, the audience made several points. The COMPAS software raised the question of the relationship between industrial trade secrets (black-box software) and the public interest in the purposes for which such software is used. Jaume-Palasi responded that companies need to allow more oversight of the mathematical formulas behind their automated decisions, without disregarding their legitimate wish to keep trade secrets safe. The process of coding and creating algorithms is already changing, she added, and governments are demanding more input into AI technologies and methodologies.

The panel agreed that a holistic approach is needed, one that draws on a multidisciplinary set of perspectives in order to avoid reinforcing age-old systems of bias. Full transparency of algorithms to the public, however, should not be the goal. Instead, algorithms can be audited and transparency enhanced in applications of public significance, and some governments are already advocating this.

A question was raised from the audience as to whether the explainability of algorithms relates to algorithmic models or to the code itself. Comninos replied that algorithmic transparency goes beyond the algorithm itself and should include the human systems around it. AI systems are highly interdisciplinary, so future discussion should focus on the interpretation of algorithmic output. Lastly, the discussion addressed the political conversation about AI decision-making, and it was agreed that enhancing education and literacy about AI systems is a good step forward.

By Jana Mišić