Who is in Charge? Accountability for Algorithms on Platforms

Session: WS 98

12 Nov 2018 - 12:15 to 13:45

#IGF2018, #WS98

Report

[Read more session reports and live updates from the 13th Internet Governance Forum]

Algorithms have a significant social and economic impact on the public and private sectors, as well as on users. Frameworks should be established to ensure that algorithms are used for good purposes, and it is essential that both the design and the use of algorithms be transparent and explainable. However, it is still rather early to consider regulating this technology.

The workshop organiser, Ms Kristina Olausson, Policy Officer, European Telecommunications Network Operators' Association (ETNO), asked the audience to split into two groups for a break-out session to discuss with the panellists who should be held accountable for the impact of algorithms.

Mr Phillip Malloch, Vice President, Telia Company, was the first to summarise the discussions in his group.

His group had broken the topic down into four areas:

  • how can we make the utilisation of algorithms really understandable for all those people involved?
  • how can we reconcile transparency with intellectual property rights in the private and commercial space?
  • what is the role of a government actor, the private sector and others?
  • what is the purpose for initiating the use of algorithms?

Malloch mentioned the idea of a potential moratorium on ‘war situations, weapons or other situations where there's probably a strong human element necessary to make a decision’. Moreover, he stressed that existing principles, such as the UN principles on human rights, already apply to algorithms, and that we should now consider how to extrapolate them to the regulation of artificial intelligence (AI) and algorithm usage. Participants raised the question of a potential forum for regulating algorithms, mentioning the efforts of the European Commission, as well as the European Union (EU) and the Group of Seven (G7) more generally. Finally, Malloch highlighted the notion of trust, which all actors need to value while creating and regulating AI and algorithms, although some level of oversight should remain.

Ms Lorena Jaume-Palasí, Executive Director, AlgorithmWatch, began her summary with the idea of ‘explainability’ and its various definitions. These definitions depend on what type of data was used by the algorithm, how that data was weighted and how this shaped the algorithm's workings, and whether the algorithm's output is discriminatory in its social impact.

In addition, she described the difference between transparency and explainability: ‘An explanation is reconstruction. It's always a justification’. Transparency has different dimensions: transparency to whom, and of what. It also differs between the developer's, the user's, and the policymaker's points of view.

Ms Karen Reilly, Managing Director of Tungsten Lab, continued the summary, drawing attention to the impact of algorithm outputs derived from large datasets. The data itself may not be sensitive, but the output drawn from it can be, for example at the intersections of health, race, gender, or economic status. Explainability should therefore also encompass this dimension.

Finally, Ms Fanny Hedvigi, European Policy Manager, Access Now, reported on the implications of the EU's General Data Protection Regulation (GDPR) for algorithms, on whether data protection regulation actually relates to explainability, and on possible redress against automated, algorithm-based decisions that affect human rights. She also raised the issue of the difference between personal data and the datasets built on it, along with the insights and conclusions that the private sector or other actors can draw from those datasets. The latter may not be protected by law, which can become a challenge for law enforcement authorities.

The moderator, Mr Gonzalo Lopez-Barajas, Manager Public Policy & Internet, Telefónica, added several points at the end. First, the public sector should also be transparent towards citizens in its use of algorithms. Users should be trusted when they say that algorithms cause them harm, and harmful algorithms should be stopped. Finally, he concluded that it is too early to regulate algorithm technology, so as not to hamper innovation.

 

By Ilona Stadnik
