Big-data, Business, and Respect for Human Rights (OF49)

Session: OF49 

19 Dec 2017 - 17:15 to 18:15

#IGF2017, #OF49

Report


This Open Forum, organised by the Council of Europe, the European Broadcasting Union, and the Federal Department of Foreign Affairs of Switzerland, addressed the human rights and responsible business challenges related to the management of big data. Moderator Mr Lee Hibbard, Internet Governance Co-ordinator at the Council of Europe, started by asking the audience to reflect on whether they were worried or excited about big data. The feedback was split between caution and optimism, with privacy and discrimination concerns on the one hand, and on the other hand a recognition of the enormous opportunities that big data could provide for society.

Mr Giacomo Mazzone, Head of Institutional Relations of the European Broadcasting Union, explained that addressing big data challenges requires collaboration across sectors. In the media sector, challenges materialise in the move from a ‘linear’ to an ‘on-demand’ world, where personal data on user behaviour is collected and could provide sensitive insights, often through third parties. Understanding how these challenges are addressed in other industries could help provide a solution.

Mr Rémy Friedmann, Senior Advisor, Desk for Human Security and Business, Swiss Federal Department of Foreign Affairs, explained how the government of Switzerland is addressing big data concerns through its National Action Plan on Business and Human Rights, which enshrines the duty of the state, the responsibility of businesses, and the requirement of individuals’ access to remedy. The relevance of big data in this framework is cross-sectoral, which raises the question of whether negative human rights impacts can be identified and acted upon (preventatively) together with other governments and stakeholders.

Amb. Corina Călugăru, Coordinator of Information Policy (TC-INF), Ambassador, Permanent Representative of the Republic of Moldova to the Council of Europe, then presented the Council of Europe’s work in relation to the data protection issues that have been raised with the advent of big data. She explained that there are existing mechanisms to address the challenge, including the Council of Europe’s Internet Governance Strategy, which involves cooperation with Internet companies to ensure a more protective way of working with big data. The Council of Europe’s Convention 108 addresses data protection and is currently being modernised to be able to tackle new challenges. She closed her presentation by explaining that the new issues, mechanisms, and terminology related to big data can be difficult to digest for governments and civil society, and that it is important to develop an understanding among all stakeholders of how to protect human rights, the rule of law, and democracy in the big data age.

Ms Alessandra Pierucci, Chair of the Data Protection Committee of the Council of Europe, further highlighted the relevance of Convention 108, which is legally binding and open to ratification by states outside of the Council of Europe. Its modernisation process includes new provisions on transparency (the duty of data controllers to explain to data subjects how their data is being used), new rights (including the right to object and not to be subject to automated decisions) and guidelines on big data. The Council of Europe recognises the innovative potential of big data, but also notes the need to understand the ethical and social risks generated by the use of big data.

Mr Philippe Cudré-Mauroux, Full Professor at the University of Fribourg, leading the eXascale Infolab, provided an overview of the uses and methods of data science, which are often meant to develop descriptive, predictive, and – increasingly – prescriptive models. As data science makes it impossible to clearly understand in advance what kind of model will be developed and how it will be used, the very essence of data science might conflict with privacy rights. Transparency from industry players, privacy-preserving data science, and the automated brokering of personal data might help to make the two more compatible.
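One established approach to the privacy-preserving data science mentioned above is differential privacy, where calibrated noise masks any individual's contribution to an aggregate result. The following sketch is purely illustrative and was not referenced in the session; the record structure and parameter values are hypothetical:

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)


def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so a noise scale of 1/epsilon
    satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


# Hypothetical user records: (age, clicked_ad)
records = [(34, True), (29, False), (45, True), (52, True), (23, False)]
noisy_clicks = dp_count(records, lambda r: r[1], epsilon=0.5)
```

A smaller epsilon means more noise and stronger privacy; an analyst sees only the perturbed aggregate, never whether a particular user clicked.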

Mr John Morrison, Executive Director of the Institute for Human Rights and Business, emphasised the complexity of the topic of big data and explained that existing human rights frameworks are ‘stretched to the limit’ trying to provide answers to its challenges. In addition, industry – and especially the ‘industrial Internet’ – is insufficiently addressing the human rights concerns of big data, attending mainly to the short-term risk of data security. In the medium term, however, reliance on algorithms could lead to discrimination, algorithmic profiling could steer behaviour, and facial recognition could be misused by repressive regimes. Ultimately, in the long term, we need to address the question of human agency: what happens when humans are isolated from decision-making processes? Morrison concluded that rather than transparency, the focus needs to be on individual and collective consent on how data is used.

Mr Bernard Shen, Assistant General Counsel at Microsoft, ended the series of presentations on a more positive note: artificial intelligence (AI) – generated by big data – always starts with a human purpose and can be used for good, he said. Nevertheless, it is important to be aware that it can be imperfectly applied, on the basis of incomplete or unrepresentative data. Partnerships on AI are needed to learn from each other and develop good practices, and industry needs to live up to human rights principles.

In the discussion that followed, an audience member asked whether perfect data – removed from human bias – could mitigate some of the human rights challenges. The panellists warned that biases are wired into the data, and that it is dangerous to think that they could be removed. Nevertheless, Shen reminded the group that human decision-making is also prone to flaws and unconscious bias, and it would be wrong to assume that human decisions are always superior to those generated by AI. In fact, AI might be able to detect and correct human bias. One participant indicated that rather than looking at biases in AI, we need to consider how AI risks being used against individuals, and to look into who has access to big data and thereby the power to misuse it.

In their concluding statements, Hibbard raised the question of how to enforce legal instruments developed to mitigate big data’s human rights risks, and Mazzone added that trust is necessary in order to be able to capture the opportunities of big data. Friedmann concluded by underlining the importance for companies to be aware of their responsibilities and the need for all stakeholders to address these risks and work together to maximise the positive potential of big data.

By Barbara Rosen Jacobsen
