Expert workshop on the right to privacy in the digital age

26 Feb 2018

Event report

FIRST DAY

The exploratory workshop on the right to privacy in the digital age took place on 19–20 February 2018 at the Office of the High Commissioner for Human Rights (OHCHR) in Geneva, Switzerland. The workshop was held pursuant to Human Rights Council (HRC) Resolution 34/7 of 23 March 2017 and had ‘the purpose of identifying and clarifying principles, standards and best practices regarding the promotion and protection of the right to privacy in the digital age, including the responsibility of business enterprises in this regard’.

The session was opened by Ms Peggy Hicks, director of the Thematic Engagement, Special Procedures and Right to Development Division of the OHCHR, who reflected on the fact that the right to privacy lies at the intersection of the human rights discourse and digital technology. Although data-driven technology offers multiple opportunities for society at large, it also poses challenges, especially because ensuring the right to privacy is linked to the enjoyment of other rights, such as the freedoms of expression and association.

In particular, Hicks affirmed that there is currently an urgent need to strengthen the protection of human rights vis-à-vis:

  • Digital surveillance (especially for human rights defenders and activists).
  • Data processing, both by the government and private actors.
  • Internet of Things (IoT): ‘smart’ devices will continue to create new sources of data thus posing new threats for individuals and groups.

FIRST PANEL – Setting the scene: the role of the right to privacy within the human rights framework and for civic protection

The first session focused on the position occupied by the right to privacy within the existing human rights legal framework.

Ms Anja Seibert-Fohr, UN Human Rights Committee member, affirmed that the challenges to the right to privacy posed by social media can be effectively tackled by existing provisions such as the International Covenant on Civil and Political Rights (ICCPR), in particular Article 17: ‘1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation. 2. Everyone has the right to the protection of the law against such interference or attacks.’ She considered that effectively guaranteeing the right to privacy is a problem that affects multiple countries. The main issues concern states’ access to data collected and/or administered by private parties, and whether such data can be transferred outside the country. The scope of application of Article 17 is rather broad: ‘interference’ includes electronic surveillance, online surveillance, online tapping and metadata collection. Moreover, the claim that the ICCPR does not apply extraterritorially has to be rejected as well.

The UN Human Rights Committee’s interpretation of Article 17 concerns the existence of substantive standards protecting the right to privacy: generally speaking, the Covenant does not establish a total ban on surveillance, but it ensures that when interference occurs, it is not of an arbitrary nature and is regulated by statute. In short, interference is possible only if authorised by law and restricted to the circumstances of the case.

Mr Joe Cannataci, UN special rapporteur on the right to privacy, considered the legal protection granted by the Council of Europe (CoE) Convention 108. He said that the right to privacy is a fundamental and universal human right; however, it is neither absolute nor self-executing. Its main purpose is to ensure the protection of individuals, because citizens need to have safeguards and judicial remedies available regardless of whether the threat is national or international. The basic principles applied are usually necessity and proportionality. At the national level, effective guarantees are not always granted. For example, the legislation concerning the oversight of domestic intelligence within UN member states requires amendments and reinforcement. Moreover, 75% of UN member states have no system of detailed safeguards in place when it comes to the surveillance of their citizens by other states. He concluded by remarking that, although developed within a European framework, Convention 108 is becoming the de facto global convention on the protection of personal data, as non-European states have also ratified it or are waiting to do so. Moreover, with regard to surveillance, UN member states need to develop a new regulatory framework, because the existing Convention 108 is not sufficient.

Ms Anita Ramasastry, chair of the UN Working Group on Business and Human Rights, focused on the relevance of the UN Guiding Principles on Business and Human Rights, introduced in 2011 after tech companies in the USA were accused of human rights abuses for providing user data to the government. She illustrated the three main pillars upon which the Guiding Principles rest.

  1. States’ existing obligations to respect, protect and fulfil human rights and fundamental freedoms when companies transfer data. For example, the European Parliament has adopted a data export restriction, prohibiting tech companies from transferring data to countries where the right to privacy is not ensured.
  2. Corporate respect for human rights and applicable laws: for example, when considering the principle of due diligence and the ‘cause contribution’ and ‘direct link’ tests.
  3. Access to remedy: businesses and states have an obligation to provide effective remedies for breaches, via both judicial and non-judicial means, to ensure reparation.

Mr David Kaye, UN special rapporteur on the promotion and protection of the right to freedom of opinion and expression, mainly addressed the interlinkage between the right to privacy and other human rights. In particular, he maintained that ‘privacy is under threat in ways that neither we nor the law are familiar with’. Legally speaking, under Article 2 of the ICCPR, state parties are bound to ensure respect for the rights included in the Covenant. Article 17 not only ensures the right to privacy, but also protects against discrimination and interference in the enjoyment of such rights. Moreover, the restrictions that may be imposed by states under Article 19(3) do not apply to the right to hold an opinion (Art. 19(1)) and, most importantly, they need to be put to the ‘necessary and proportionate’ test. He concluded his presentation by affirming that privacy is seriously at risk and that encryption and anonymity in digital communication enable individuals to exercise their rights to freedom of opinion and expression in the digital age and, as such, deserve strong protection (as highlighted in the 2015 UN report on ‘Encryption, anonymity and human rights framework’).
 

SECOND PANEL – Surveillance and communications interception

Ms Lorna McGregor, director of the Human Rights Centre, Essex Law School, moderated the second panel on the interlinkage between surveillance and the right to privacy. She referred to the 2014 OHCHR report on the ‘Right to privacy in the digital age’ and reflected on the lack of transparency both in the implementation of surveillance techniques and in the application of the principle of proportionality.

Mr Arthur Gwagwa, senior Internet policy and human rights research fellow at the Centre for Intellectual Property and Information Technology Law, Strathmore University, Kenya, considered that since the publication of the aforementioned report, two important developments have taken place. First, in 2015, Prof. Joe Cannataci was appointed as the first ever special rapporteur on the right to privacy. Second, during 2016, the discussion regarding the right to privacy started to also focus on the role of companies. He stressed the importance of including companies in the discussion, because they are often found to be involved in censorship together with governments. He also warned about new forms of surveillance, such as ‘social media intelligence’, which uses various tools to monitor data in the public domain, for example on social media. He concluded his speech by stating that there are three main dimensions to consider when addressing security: individual, business, and society, and that security is not only a matter of technology, but also a matter of policy reach.

Ms Sarah McKune, senior researcher at Citizen Lab, University of Toronto, Canada, focused on cyber-threats targeting civil society and international cybersecurity initiatives. She explained that with current technological developments, new forms of targeted surveillance are now possible (such as phishing operations and network injections). She specified that, legally speaking, there is no justification (under the principles of legality, necessity and proportionality) for the use of such surveillance by governments. She concluded by considering that what needs to be done is to ensure that a system designed for lawful interception is not used by governments for unlawful purposes against civil society leaders.

Ms Gail Kent, global security expert at Facebook, United Kingdom, considered Facebook’s policy regarding data sharing. She specified that Facebook, being a US-based company, abides by US national laws, which means that it provides content and data to governments in accordance with the law. This means that Facebook is allowed, under certain circumstances, to provide basic subscriber information (username, IP addresses, phone number and last log-in) but not to share lifetime content and metadata. Moreover, she explained that when a government requests access to data from Facebook, a three-step approach is in place: first, the government is required to use its national legal framework allowing for data access; then, the jurisdiction of the individual whose data is requested is considered; and lastly, it is verified that the request complies with human rights standards.
 

THIRD PANEL – Securing and protecting online confidentiality

The session, moderated by Mr David Kaye, UN special rapporteur on the promotion and protection of the right to freedom of opinion and expression, discussed the variety of ways in which online confidentiality is under threat and measures to better protect it.

Mr ‘Gbenga Sesan, executive director of the Paradigm Initiative, Nigeria, talked about the differences in the meaning of confidentiality among various groups. In a study on different African countries, the trend observed – especially in electoral contexts – is that of using information as an opportunity to gain increased control. Sesan noted that the power dynamic is not tilted in favour of citizens; rather, confidentiality is linked to the sense of privilege attached to senior authority. While the tensions between security and privacy remain relevant, this conversation needs to be held together with civil society rather than in silos. Progress has been made in the area of encryption, in particular at the level of awareness among citizens and deployment in services used on a daily basis. In conclusion, Sesan stressed the importance of consent and choice in the use of information: ‘confidentiality is a choice – the fact that I have put my personal information online is a choice – it does not mean I waived my entire right to privacy for something else’.

Ms Fanny Hidvégi, European policy manager at Access Now, Brussels, highlighted the actions taken by states. She started with the Hungarian example: a bill introduced in 2015 to ban encrypted services and demand mandatory backdoors, which did not pass into law. There is a need for a public debate over encryption; the zero-sum framing of the discussion is counterproductive. ‘We have a right to both security and privacy. Necessary cannot be just useful, reasonable and desirable, we need to search for evidence in that field’, she pointed out. EU policy work on encryption is currently taking place in three areas: 1) direct co-operation with law enforcement agencies (which undermines the MLAT reform), 2) regulation of vulnerabilities and dual-use export technologies, and 3) e-privacy reform. According to Hidvégi, the latter is the most important law on the confidentiality of information in the EU and is related to economic aspects and to increasing trust in the Digital Single Market.

Shifting perspective to the national level, Mr Eduardo Bertoni, director of the National Access to Public Information Agency (NAPI) of Argentina, offered the example of his own country in complying with international standards. Argentina passed its first access to information law in 2016 and created NAPI as an oversight body. There was a deliberate decision to include the oversight of the data protection law in the mandate of NAPI, alongside the monitoring of access to information. In practice, there is a possible conflict between access to information and freedom of expression, on the one hand, and data protection on the other, therefore NAPI has two directorates at the same level: one for access to information and one for data protection. In cases in which both rights are concerned, NAPI internal regulations say that a transparent decision needs to be taken after hearing both directorates. Since 2016, work has been ongoing in Argentina to change the data protection law: the draft that will go before congress shortly includes specific provisions on privacy by design. Currently, Argentina is in the process of becoming a party to Convention 108 of the Council of Europe.

The ensuing discussion with the participants touched on the secrecy of telecommunications, the example of the Dutch law on encryption, the difficulty of finding common ground across different ministries when work is done in silos, responsibility at different levels, the ease of use of encryption tools, and the evolution of international law. Two main take-aways were put forward by the moderator: first, the simplicity of tools is key, but the flipside is that the convenience and efficiency of the digital age all have privacy implications; second, challenges in protecting online confidentiality exist not just for governments or civil society, but for everyone.
 

FOURTH PANEL – Processing of personal data by individuals, governments, business enterprises and private organisations

The moderator, Ms Malavika Jayaram, executive director of the Digital Asia Hub, Hong Kong, provided a background for the discussion, highlighting recent developments that raise concerns: India’s Aadhaar system is privacy-invasive, yet often presented as a great example of e-governance; the Chinese social credit system provides inspiration for many jurisdictions to beta-test similar schemes in environments with weak rights protections; the social contract between citizens and states is evolving as the collection and processing of data is increasingly performed by private actors; and among the youth, there is an invasion of privacy by peers. The panellists were given seven minutes to discuss the work of their organisations and the challenges faced in addressing personal data protection in this complex environment.

Ms Nighat Dad, executive director of the Digital Rights Foundation, Pakistan, discussed the absence of a privacy law or data protection legislation in her country. In her opinion, telecommunication operators starting to work in Pakistan benefit from the fact that there are no local laws to protect users, and in practice these companies do not observe the same procedures as they do elsewhere, where legislation is in force.

The world’s biggest biometric database is currently being developed in Pakistan; it will register more than 200 million people through biometric procedures. Dad also discussed the system of mass surveillance in cities, sometimes rolled out under ‘safe city’ projects. There is no transparency about the collection, processing and distribution of data in ‘safe cities’. Work is also underway on a new cybercrime law, which features several provisions that give mass-surveillance powers to different government bodies without any accountability measures in place.

Ms Sophie Kwasny, head of the Data Protection Unit, Council of Europe, discussed the modernisation of the Council of Europe (CoE) Convention 108. There is no equivalent at the UN level on data protection, thus Convention 108 is unique. It was opened for signature in 1981 to all countries and, alongside the 47 CoE member states, four more countries have acceded. Nearly 70 countries are observers.

A shift that Kwasny observed in the field of data protection is that governments have started calling for regulation and for an increase in the level of protection afforded. Modernisation of Convention 108 goes hand in hand with that, as the responsibility angle was stressed in the work undertaken over the last seven years with the involvement of over 50 countries. The changes pertain to increasing the rights of the data subjects, increased transparency and accountability, assessment of the likely impact on human rights prior to data processing, and the introduction of data breach notification obligations. In conclusion, Kwasny highlighted the horizontal scope of the convention (processing by law enforcement, private sector, etc.).

Mr Chawki Gaddes, professor of constitutional law and president of the National Authority for the Protection of Personal Data of Tunisia, presented the experience of his country as a precursor for data protection in the region, starting with a related provision introduced in the constitution in 2002. In 2014, the new constitution enlarged the scope of protection to private life more generally. In 2017, Tunisia became the 51st member of Convention 108 and is currently planning a new data protection law in accordance with the GDPR. In Gaddes’s view, to have the culture of privacy and data protection internalised on a mass scale, greater efforts need to be put into awareness raising and education. In transitional contexts, the role of a data protection authority is not to sanction, but to support the development of a rights-respecting culture. The authority’s activities include conferences and public events for judges and legal personnel. A recent success he noted was the withdrawal of the draft law on biometric cards, which had been declared unconstitutional.

The discussions that followed touched on incentives for data protection safeguards at the national level, the challenges faced by the Global South in protecting rights, as well as space for multistakeholder dialogue. An example cited was the strengthening of co-operation between the CoE and the private sector: in a recent exchange of letters, major Internet companies and associations committed to working together with the pan-European organisation on issues such as child online protection, freedom of expression and cybercrime.

 

SECOND DAY

FIRST PANEL – New and emerging issues

Mr Danilo Doneda, independent consultant and professor at the State University of Rio de Janeiro, Brazil, considered that new technologies pose different challenges vis-à-vis data protection regulation. The main issue is that technological developments and automation have created an imbalance in privacy protection: data protection principles are based upon the idea that individuals have control over, and knowledge of, how their data is used, an assumption that automation increasingly undermines. He suggested that in order to improve data protection, preventive measures need to be taken and the focus should not rest only upon the individual; rather, protection should be embedded in the technology itself (i.e. privacy by design) and take into consideration the social implications of data usage. He concluded by reaffirming that new uses of data demand new regulatory frameworks. However, the risk of bureaucratisation and excessive technicality can create significant imbalances of power and information, making it difficult (if not impossible) for citizens to be conscious ‘agents’ in the data protection and privacy discussion.

Ms Malavika Jayaram, executive director of the Digital Asia Hub, Hong Kong, drew attention to the fact that transparency and accountability are lacking when it comes to relying on algorithms and Artificial Intelligence (AI). In particular, she questioned whether AI and computation can reduce discrimination when the data they are built upon express discrimination. For example, in the case of the house-sharing application Airbnb, the software’s design favours discrimination as it asks users to provide personal data (i.e. names) in order to secure a booking. In a study conducted a year earlier, bookings with foreign names or names that ‘sounded black’ had a 30% higher chance of being rejected. She concluded that AI can be an empowering instrument in allowing people to overcome local physical and language barriers; however, as long as it remains a ‘black box’ lacking transparency, it also poses serious challenges.

Mr Alessandro Mantelero, associate professor of law, Polytechnic University of Turin, Italy, told the audience not to forget the collective dimension and the impact that AI has on society at large. In particular, he considered that the forms and categories of discrimination created by algorithms are different from the ‘traditional’ categories and groups which are objects of discrimination. This is because such categories are defined artificially by computation, which means that they can change rapidly, even every three seconds, as an individual’s behaviour changes, without the individual being aware of it. He maintained that the debate regarding AI should be an ethical discussion comprising specific rights (e.g. the right to privacy) and consumer protection, and taking into consideration the existing human rights legal framework. Although AI poses concrete legal issues, the discussion is ethical in nature because the manner in which AI operates is not consistent with current societal values. He concluded his presentation by stressing the necessity to:

  • Consider the collective dimension when addressing data protection rather than focusing only on the individual level.
  • Criticise the ‘purpose’ argument: the problem with AI is that the purpose driving the computations is unknown, and in the case of machine learning, such purpose is evident although it is not clear how to reach it.
  • Move away from traditional approaches that consider data protection only through the lens of impact assessment. Rather, the discussion needs to also include societal and ethical values.
     

SECOND PANEL – Safeguards, oversight and remedies

The final session was opened by Mr Thorsten Wetzling, project director at the Privacy Project, Stiftung Neue Verantwortung, Germany, who explained the rationale of a multistakeholder approach to ensuring the rule of law and providing effective oversight of, and remedy for, various data collection processes. The evolving practice of data collection (including via phishing, network exploitation, etc.) requires new governance mechanisms, as technological innovations continue to challenge the legal system. There is no shortage of guiding principles promoting effective intelligence oversight, yet it remains an ambitious and unattained benchmark. Modern security and intelligence agencies use a wide range of digital tools; effective checks and balances are therefore imperative to monitor and to sanction the potential abuse of power. Importantly, oversight is not a fixed concept but a work in progress, and a broader set of perspectives is therefore needed.

According to Ms Katitza Rodriguez, international rights director at the Electronic Frontier Foundation (EFF), cross-border access to data by law enforcement agencies is a key priority right now, as many institutions and governments are exploring new cross-border data paths that threaten user privacy and data protection. Among these, she made reference to the CLOUD Act proposal in the USA, the European e-evidence debates, the negotiation of the second additional protocol to the Budapest Convention, and provisions in bilateral agreements. Her intervention focused on the safeguards currently missing, as advocated by the EFF: an individual notice requirement (users should be notified as early as possible so that they can challenge the decision or seek remedy), judicially authorised access, a strong factual basis for surveillance, dual privacy protection norms, and protection for content and metadata alike.

Mr Eduardo Bertoni, director of the National Access to Public Information Agency (NAPI) of Argentina, focused on the data processing practices of corporations and drew on his experience heading an oversight body in Argentina. Jurisdiction, in his view, is the key problem faced when protecting cross-border data processed by global corporations. According to its mandate, NAPI has to control the implementation and application of the data protection law for people in Argentina. For an Argentinian, though, it is difficult to exercise the right to access, suppress or correct information in a global database held by an Internet company. This issue is taken into account in the draft of the new data protection law, whose provisions are similar to the GDPR in this regard: the processing of the data of Argentinian citizens needs to be in line with the national law. In Bertoni’s view, there are still many grey areas when it comes to jurisdiction, and clarity is needed as to when to apply national law.

The perspective of Internet service providers (ISPs) was provided by Mr Mike Silber, head of the legal and commercial department at Liquid Telecom, South Africa. Among the key pillars of the ecosystem in which ISPs operate are cybercrime and privacy legislation, as well as lawful interception and customer protection. These cannot be considered in isolation, since one affects the rest: it is important for the impact of cybercrime laws on privacy to be clearly understood at the drafting stage, in order to provide a consistent legal framework. Operators ask for a mechanism that strikes the right balance to be embedded in the law, rather than being requested to make subjective decisions and policy determinations on the fly. Conventions are useful, provided they are well thought through, but he called for caution against over-reliance on conventions that are not well defined. Silber also called attention to the work of technical bodies and the move towards standardisation (the Internet Engineering Task Force (IETF) integrating security and privacy considerations in its requests for comments).

Lastly, Ms Lorna McGregor, director of the Human Rights Centre, Essex Law School, talked about the right to remedy. What we have done in the digital era is to interpret and apply the existing framework of rights, which remains valid. By challenging practices, laws and policies in court and before other dispute resolution mechanisms, we can push the boundaries, as in the case of the right to be forgotten or data protection. ‘The right to remedy continues to be poorly discussed, because we are still learning the practices’, stressed McGregor. There is confusion over remedy in relation to algorithms, since it is understood as a way to solve a problem in a technology, rather than as a right under international human rights law. Under the latter, states have an obligation to ensure effective access to justice where there is a claim; this is also reflected in the Ruggie framework on Business and Human Rights. However, remedies are generally thought of retrospectively, whereas it would be important to build them into our norms from the start. Even if remedy without borders is a key principle of international human rights, implementation remains a challenge. The three main obstacles McGregor identified for the effective implementation of the right to remedy are:

  1. The lack of regulation in certain areas of digital space, such as intelligence sharing between states or the Internet of Things.
  2. The lack of transparency and notification obligations towards the user (if users do not know that their rights have been violated, they cannot assert a claim in this regard).
  3. The use of evidence and the possibility of challenging it, further complicated by the use of predictive technology and algorithms (credit scoring systems, etc.).

The discussion concluded that oversight is critical in the digital age and it should be included in both mechanisms of internal review and in the design of platforms for clarifying issues as they emerge. There is a need to strengthen oversight co-operation, for example, between local data protection agencies and security bodies, but also at the international level. A call was made for multistakeholder participation in establishing oversight mechanisms. To seek remedy, notification is mandatory, but not enough: there is a risk of perpetuating a digital divide by establishing different regimes for access to data and protection of rights. For geopolitical reasons, some forms of protection are better than others in the area of privacy and data protection. To address these issues, a mapping of different types of harm and possibilities to redress them would be a useful starting point.

CONCLUDING REMARKS

The summary of the discussion was provided by the Office of the High Commissioner, UN Human Rights, the host of the event. The main take-aways from the sessions were:

  • more work and further guidance are needed to unpack the available legal framework for the protection of privacy
  • in addition to developing the principles, greater effort is needed to ensure adequate implementation
  • there is still a lack of adequate legal and procedural guidance at the national level, and innovative institutional set-ups can be the way forward; it is critical to give all human rights sufficient weight in the design of new institutions
  • there is an increased reliance on extraterritoriality and demands for access to data stored abroad, and many attacks were noted on encryption and individual rights
  • there is an emergence of powerful data-driven technology that brings both risks and opportunities for consent and anonymity  
  • the protection of children’s rights in digital space was a new discussion point and it needs to continue
  • learning from each other is fundamental and there is a need to work across silos

Written contributions that add to the discussion can be submitted via email (to privacyworkshop@ohchr.org) in the next month.
