Exploring Emerging PETs for Data Governance with Trust | IGF 2023 Open Forum #161
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Session report
Full session report
Udbhav Tiwari
Mozilla Corporation, which is owned by the Mozilla Foundation, is a unique organization in the technology sector. It operates without the typical incentives for profit maximization and prioritizes user welfare and the public interest. Although Mozilla initially had a strong policy of collecting no data at all, not even telemetry, it found that a browser used to access billions of websites could not be maintained and improved without some telemetry. It has since explored privacy-preserving ways of collecting that information, separating the “who” from the “what” to protect user privacy.
Privacy-preserving technologies have become increasingly feasible with the proliferation of internet availability, bandwidth, and computational power. Privacy has emerged as a key differentiating factor for products, leading to increased investment in privacy-focused solutions.
Mozilla has taken a critical stance on Google’s Chrome Privacy Sandbox set of technologies, acknowledging that they improve on current practices but asserting that their technical claims still need validation. Mozilla is also exploring privacy-preserving protocols such as the Distributed Aggregation Protocol (DAP) and Oblivious HTTP (OHTTP) for collecting telemetry information.
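To illustrate the “separating the who from the what” idea behind protocols such as OHTTP and DAP, here is a minimal, hypothetical Python sketch. The class names and the plaintext payload are illustrative assumptions only; real deployments encrypt the payload to the collector (so the relay cannot read it) and typically aggregate reports so the collector never sees individual values.

```python
from dataclasses import dataclass


@dataclass
class Report:
    # Opaque bytes as far as the relay is concerned (HPKE-encrypted in practice).
    payload: bytes


class Collector:
    """Sees WHAT was reported, never learns who sent it."""

    def __init__(self) -> None:
        self.reports: list = []

    def ingest(self, report: Report) -> None:
        self.reports.append(report)
        print(f"collector: stored report ({len(report.payload)} bytes), sender unknown")


class Relay:
    """Sees WHO is sending, never looks inside the payload."""

    def __init__(self, collector: Collector) -> None:
        self.collector = collector

    def forward(self, client_ip: str, report: Report) -> None:
        # The client address is dropped here and never reaches the collector.
        print(f"relay: report received from {client_ip}, stripping sender identity")
        self.collector.ingest(report)


if __name__ == "__main__":
    relay = Relay(Collector())
    # In OHTTP the payload would be encrypted to the collector's key, so even
    # the relay could not read it; plaintext is used here only for brevity.
    relay.forward("198.51.100.7", Report(payload=b"page_load_ms=840;site=example.org"))
```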
While recognizing the value of advertising to support internet publishers, Mozilla deems the current state of the advertising ecosystem unsustainable. They have introduced features like Firefox’s “Total Cookie Protection” to enhance user privacy while still allowing essential functionality.
Mozilla has raised concerns about Google’s Privacy Sandbox standards potentially becoming the de facto norms, with the potential to impact privacy and competition. They advocate for responsible implementation of PETs to strike a balance between privacy and data collection.
Human involvement in data collection decisions is crucial to consider the risks to user privacy. Mozilla emphasizes the importance of accountability and responsible practices.
In summary, Mozilla Corporation distinguishes itself in the technology sector with its focus on user welfare and the public interest. They actively explore privacy-preserving technologies, criticize Google’s Privacy Sandbox, and advocate for responsible data collection practices. Through their efforts, Mozilla aims to foster a more privacy-protective and user-centered tech industry.
Wojciech Wiewiórowski
The European Data Protection Supervisor (EDPS) plays an essential role in safeguarding privacy within the European Union (EU). A key priority is the effective implementation of existing privacy law through practical tools rather than additional legislation. The EDPS supervises the EU institutions, bodies and agencies and advises during the EU legislative process, ensuring that privacy concerns are integrated into decision-making. Its ultimate goal is to promote a safer digital future by strengthening the role of IT architects and a comprehensive privacy engineering approach.
In line with the EDPS’s efforts, Wojciech Wiewiórowski, a prominent figure in the field, acknowledges and supports the work of non-governmental organizations (NGOs) in enforcing privacy policies. He recognizes the vital role that NGOs play and suggests that their work should have been undertaken by data protection commissions much earlier. This recognition highlights the importance of collaboration between regulatory bodies and NGOs in effectively safeguarding individuals’ privacy rights.
Furthermore, Eurostat, the statistical office of the European Union, has developed privacy-preserving tools such as trusted execution environments and trusted smart surveys. These innovative tools aim to ensure privacy while conducting official statistics. The United Nations has included these tools in their guide on privacy enhancing technologies for official statistics, further validating their importance and effectiveness in maintaining data privacy.
Overall, the European Data Protection Supervisor, Wojciech Wiewiórowski, and Eurostat are actively working to uphold privacy rights and create a safer digital environment. Their focus on utilizing tools and collaborating with NGOs demonstrates their commitment to establishing a robust framework for data protection. Embracing these initiatives provides individuals with greater confidence in the privacy of their personal information.
Clara Clark Nevola
Privacy enhancing technologies (PETs) are becoming increasingly important in today’s digital era as they enable data sharing while protecting privacy. The Information Commissioner’s Office (ICO) in the UK has recognised the significance of PETs and has released guidelines that outline how these technologies can support data minimisation, security, and protection.
The ICO’s guidelines highlight the role that PETs play in achieving data minimisation, which refers to the practice of only collecting and retaining the minimum amount of personal data necessary for a specific purpose. By implementing PETs, organisations can ensure that they are processing and sharing data only to the extent required, thereby reducing the risk of potential breaches or misuse.
Furthermore, PETs contribute to data security, addressing concerns about the potential vulnerability of shared data. Different types of PETs, such as homomorphic encryption, secure multi-party computation, and zero-knowledge proofs, offer various solutions for securing data in different sharing scenarios. Homomorphic encryption allows computations to be done on encrypted data without having to decrypt it, while secure multi-party computation enables multiple parties to perform a computation on their data without revealing any sensitive information. Zero-knowledge proofs allow the verification of a claim without revealing the supporting data. These technologies can help protect data integrity while allowing for collaboration and data sharing.
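As a concrete illustration of the secure multi-party computation idea described above, the following toy Python sketch uses additive secret sharing, a common MPC building block: each party splits its private value into random shares so that only the sum can be reconstructed. The modulus, party inputs and function names are assumptions for teaching purposes; there is no networking and no protection against malicious parties, so this is not a production protocol.

```python
import secrets

MODULUS = 2**61 - 1  # arithmetic is done modulo a public constant


def share(value: int, n_parties: int) -> list:
    """Split `value` into n random shares that sum to it modulo MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares


def secure_sum(private_inputs: list) -> int:
    n = len(private_inputs)
    # Each party shares its input with every other party.
    all_shares = [share(v, n) for v in private_inputs]
    # Party j locally adds the j-th share of every input; each partial sum looks random.
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % MODULUS for j in range(n)]
    # Only the partial sums are combined, so no individual input is revealed.
    return sum(partial_sums) % MODULUS


if __name__ == "__main__":
    hospital_counts = [1_204, 877, 2_311]  # each hospital keeps its own count private
    print(secure_sum(hospital_counts))     # 4392, without any party revealing its input
```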
Anonymisation or de-identification is another key aspect of PETs. By applying these techniques, organisations can remove or alter personal identifiers, making it more difficult to link shared data to specific individuals. This helps to protect privacy while still allowing for data analysis and research.
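One simple de-identification technique of the kind referred to above is keyed pseudonymisation, sketched below in Python. The record, field names and key handling are hypothetical; note that this alone is pseudonymisation rather than full anonymisation, since whoever holds the key can still link records, and indirect identifiers need separate treatment.

```python
import hashlib
import hmac
import secrets

# In practice the key would live in a key-management system, not alongside the data.
PSEUDONYM_KEY = secrets.token_bytes(32)


def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible pseudonym."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]


record = {"patient_id": "943-476-5919", "postcode": "SW1A 1AA", "diagnosis": "J45"}
safe_record = {
    "patient_ref": pseudonymise(record["patient_id"]),   # linkable across datasets, not reversible
    "postcode_district": record["postcode"].split()[0],  # coarsened indirect identifier
    "diagnosis": record["diagnosis"],
}
print(safe_record)
```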
Despite the clear benefits of PETs, challenges remain. Technical standards for PETs need to be developed to ensure interoperability and ease of implementation. Additionally, the costs associated with implementing PETs can be high, posing a barrier to adoption for some organisations. Awareness and understanding of PETs also need to be improved, particularly among lower-tech organisations that could greatly benefit from them.
Data sharing itself poses challenges beyond legal considerations. Organisational and business barriers, such as concerns about reputation and commercial interests, can hinder data sharing efforts. Stakeholders often express reluctance to share their data due to uncertainties about how it will be used or what the outcomes may be.
To overcome these challenges, the ICO advocates for partnerships and collaborations between PET developers, academics, and traditional organisations like local governments and health bodies. By bringing together experts from different fields, these partnerships can elevate awareness and understanding of PETs and facilitate their adoption by traditional organisations.
In conclusion, privacy enhancing technologies are crucial tools for enabling data sharing and protecting privacy in the digital era. The ICO’s guidelines demonstrate how PETs can support data minimisation, security, and protection. While challenges exist in terms of technical standards, costs, and awareness, partnerships between PET developers and traditional organisations can help overcome these obstacles. By promoting the adoption of PETs, organisations can achieve a balance between data sharing and privacy protection, fostering innovation and collaboration while safeguarding individuals’ personal information.
Suchakra Sharma
The speakers in the discussion present different perspectives on privacy in software development. One speaker argues in favour of considering Privacy Enhancement Technologies (PETs) from the software perspective. This involves examining how software handles data, as it can provide insights into developers’ intentions and identify potential privacy violations. The speaker highlights the importance of evaluating the software in order to predict and prevent privacy breaches. As a solution, Privado is developing a tool that can assess how software handles data.
On the other hand, another speaker focuses on the significance of technically verifiable Privacy Impact Assessments (PIAs) in ensuring proactive privacy. They note that during software development, the necessary information for PIAs is already available. By incorporating PIAs into the development process, privacy regulations can be adhered to right from the design phase to deployment. To facilitate this, a tool has been built to perform verifiable PIAs, identifying potential privacy violations in advance. This approach is seen as a guarantee for proactive privacy.
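A rough intuition for such technically verifiable checks is static analysis of how code moves data: the hypothetical Python sketch below flags outbound network calls whose arguments mention fields that look like personal data. The field names, sink list and sample snippet are all assumptions made for illustration; real products, including the tool described by the speaker, rely on far richer data-flow analysis.

```python
import ast

PERSONAL_DATA_HINTS = {"email", "phone", "ssn", "location", "ip_address"}
NETWORK_SINKS = {"post", "get", "send", "track"}

SAMPLE_SOURCE = """
import requests

def sync_profile(user):
    requests.post("https://analytics.example.com/collect",
                  json={"email": user.email, "plan": user.plan})
"""


def find_findings(source: str) -> list:
    """Return human-readable findings where personal data appears to reach a network sink."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            if node.func.attr in NETWORK_SINKS:
                dumped = ast.dump(node)
                hits = sorted(h for h in PERSONAL_DATA_HINTS if h in dumped)
                if hits:
                    findings.append(
                        f"line {node.lineno}: {node.func.attr}() sends possible personal data {hits}"
                    )
    return findings


for finding in find_findings(SAMPLE_SOURCE):
    print(finding)
```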
The third speaker explores the possibility of certifying software for privacy compliance. They highlight the importance of evaluating the data processing and handling intentions of software. By doing so, privacy compliance checks can be conducted before the software is deployed. They suggest that regulatory laws such as GDPR and CCPA can be translated into fine-grained checks and tests for compliance. This certification process is considered a potential solution to ensure privacy in software development.
In conclusion, the speakers all emphasize the need to evaluate how software handles data and ensure compliance with privacy regulations throughout the entire software development lifecycle. By considering PETs, performing verifiable PIAs, and certifying software for privacy compliance, proactive measures can be taken to protect privacy. These perspectives highlight the increasing importance of addressing privacy concerns in the software development process.
Maximilian Schrems
Noyb, the privacy NGO co-founded by Maximilian Schrems, has developed a system that automates the generation and management of complaints about General Data Protection Regulation (GDPR) violations. The system has achieved a 42% compliance rate simply by proactively sending companies specific compliance guidelines.
The system operates by performing an auto-scan of websites to identify potential GDPR violations, which is then followed by manual verification. Once a violation is detected, the system auto-generates a complaint, which is then transferred to the violating company for action. Additionally, a platform is used for companies to provide feedback and declare their compliance.
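The sketch below illustrates, in Python, what such an auto-scan step might look like in principle: reading a consent-banner configuration and flagging likely violations for human review. The JSON field names and rules are invented for illustration and are not the actual OneTrust schema or noyb's real rule set.

```python
import json

# Hypothetical banner configuration as a consent-management platform might expose it.
banner_config = json.loads("""
{
  "showRejectAllButtonOnFirstLayer": false,
  "preTickedPurposes": ["analytics", "advertising"],
  "withdrawalAsEasyAsConsent": false
}
""")

RULES = [
    ("no reject button on the first layer",
     lambda c: c.get("showRejectAllButtonOnFirstLayer") is False),
    ("pre-ticked consent boxes",
     lambda c: bool(c.get("preTickedPurposes"))),
    ("withdrawing consent is harder than giving it",
     lambda c: c.get("withdrawalAsEasyAsConsent") is False),
]


def scan(config: dict) -> list:
    """Return suspected violations; a human reviewer still verifies before any complaint is filed."""
    return [name for name, predicate in RULES if predicate(config)]


print(scan(banner_config))
```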
Interestingly, the system has observed a domino effect, wherein even companies that were not directly intervened with have shown improved compliance. This suggests that the awareness and actions taken by some companies have influenced others in the industry to improve their GDPR compliance as well.
Data protection authorities recognise the potential for efficiency that new technologies can bring, but they also express concerns and high levels of interest. They acknowledge that utilising new technologies, such as the automated GDPR compliance system, can increase efficiency by eliminating trivial tasks and increasing the quality of work through the use of well-proven templates.
However, implementing new technology poses certain challenges. The adoption of new technology requires technical infrastructures, such as programmers, to support its implementation. Additionally, a culture shift is necessary for organisations to focus on specific tasks related to the new technology and adapt to the changes it brings.
In conclusion, noyb’s automated system for GDPR compliance has achieved a significant compliance rate and has demonstrated the potential for technology to enforce and improve GDPR compliance in a more efficient manner. While there are challenges associated with implementing new technology, the benefits of increased efficiency and quality are substantial. It is noteworthy that the system has also influenced compliance improvement among companies that were not directly addressed, highlighting its positive impact on the industry as a whole.
Nicole Stephensen
The analysis explores different perspectives on privacy-enhancing technology and data protection. One argument presented is that privacy-enhancing technology should not replace good decision-making. It is emphasised that governments and organizations have a positive duty to ensure that their information practices accord with relevant privacy and data protection laws and community expectations. This suggests that while privacy-enhancing technology can be beneficial, it should not be solely relied upon to make ethical and responsible decisions regarding data privacy.
Another argument highlighted is the struggle faced by organizations in identifying and mitigating risks, particularly when dealing with large volumes of data or complex vendor relationships. Data leakage is mentioned as a common occurrence that often happens without the organization’s awareness, and it qualifies as a personal data breach. This indicates that organizations may face challenges in effectively managing and protecting data, especially in situations involving extensive data sets or intricate vendor arrangements.
However, the analysis also acknowledges the utility of privacy-enhancing technologies in controlling data leakage. Specifically, the example of Q-Privacy is provided as a tool that allows organizations to audit for data leakage and enforce rules about data usage. This suggests that privacy-enhancing technologies, particularly those focused on data accountability, can play a valuable role in preventing and controlling data leakage incidents.
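A minimal sketch of the data-accountability idea, in Python, is an audit that compares outbound records against the fields permitted for a declared purpose. The purposes, field lists and record below are hypothetical and are not intended to describe how Q-Privacy or any other product actually works.

```python
ALLOWED_FIELDS_BY_PURPOSE = {
    "billing": {"customer_id", "invoice_total", "billing_address"},
    "support": {"customer_id", "ticket_id", "issue_summary"},
}


def audit(purpose: str, outbound_record: dict) -> list:
    """Return field names in the outbound record that are not permitted for the declared purpose."""
    allowed = ALLOWED_FIELDS_BY_PURPOSE.get(purpose, set())
    return sorted(set(outbound_record) - allowed)


leak = audit("support", {
    "customer_id": "C-1042",
    "ticket_id": "T-88",
    "issue_summary": "login failure",
    "date_of_birth": "1991-04-02",   # not needed for support: flagged as potential leakage
})
print(leak)  # ['date_of_birth']
```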
Furthermore, the importance of prioritizing purpose specification and collection minimization in data protection practices is highlighted. The argument put forward states that these are the building blocks for a culture that limits the use and disclosure of personal data as much as possible. This implies that organizations should be cautious in collecting only necessary data and clearly defining the purposes for which it will be used. By doing so, they can actively contribute to a privacy-conscious environment.
Lastly, the analysis identifies several barriers to the implementation of privacy-enhancing technologies. These include the privacy maturity of the technology suppliers, their geographical location, and the budget of the organization. Additionally, it is noted that decision makers in the privacy domain tend to be more in the legal space and have a less technical focus, which could also be a barrier for adoption. This suggests that a multifaceted approach is necessary to address these barriers and promote the effective adoption and integration of privacy-enhancing technologies.
In conclusion, the analysis provides an overview of various perspectives on privacy-enhancing technology and data protection. It emphasizes the importance of good decision-making, compliance with privacy laws and community expectations, risk identification and mitigation, data accountability tools, purpose specification, and collection minimization in ensuring effective data protection practices. Moreover, the analysis sheds light on the challenges and barriers associated with the implementation of privacy-enhancing technologies, highlighting the need for a comprehensive approach to overcome these obstacles.
Christian Reimsbach Kounatze
Technology can offer scalable solutions to privacy problems that are themselves growing in scale. Maximilian Schrems, a prominent figure in this field, emphasizes the advantages of efficient systems that eliminate trivial work and raise the overall quality of output. By using well-proven templates and carefully selecting the cases to work on, such systems greatly improve efficiency and productivity.
Privacy tools, in particular, are seen as indispensable in supporting the work of agencies involved in data protection. These tools enable agencies to effectively navigate the complex landscape of privacy management. However, barriers hinder the widespread adoption of privacy-enhancing technologies. Factors such as low budgets, a lack of technical focus in decision-making teams, and the prioritization of larger organizations impede the adoption and implementation of these technologies. Addressing these issues is crucial to fully benefitting from the advantages offered by privacy-enhancing technologies.
Automation is widely regarded as a crucial component in privacy management. It allows for scaling efforts and addressing the challenges posed by the ever-increasing scale of privacy concerns. However, human involvement should not be replaced entirely. Speakers agree that a balance must be struck between automation and human decision-making. While automation can streamline processes, human oversight and decision-making play an integral role in ensuring ethical and responsible practices. Striking this balance is key to realizing the full potential of automation in privacy management.
In conclusion, the speakers at the event highlighted the significant role that technology, privacy tools, and human involvement play in addressing problems and supporting the work of agencies in the realm of privacy and data protection. Scalable solutions, efficient systems, and the adoption of privacy-enhancing technologies are essential in tackling the challenges at hand. While automation is critical, it should not replace the human touch. By acknowledging these factors and working towards effective implementation, privacy can be ensured in an increasingly digital world.
Session transcript
Christian Reimsbach Kounatze:
Okay, I would say our speaker has arrived, so we can actually start the session. So welcome everyone to this session, IGF session, on emerging privacy-enhancing technologies, but also a little bit more. So maybe as a short introduction, my name is Christian Reimsbach. I’m a member of the OECD Secretariat in charge of privacy and data governance. And today we have a good, interesting set of different speakers that will talk to us about the role, essentially the role of technologies for enhancing privacy and data governance with trust. We will not only talk about classic privacy-enhancing technologies such as, let’s pick one, homomorphic encryption, or federated learning, which has been discussed in the past a lot. But we actually will have a broader discussion about the role of digital technologies in not only being the problem when it comes to privacy, but going beyond that to become a solution, or to be used as a solution, and what the challenges related to that are. So we have different speakers that will make an intervention. We will start, indeed, with the role of privacy-enhancing technologies, but then we will move to broader discussions. And without further ado, I would like to invite our very first speaker from the UK’s Data Protection Authority to make her intervention. And maybe I would let each of you briefly introduce yourself, because maybe that’s a little bit quicker instead of going through each of you individually. I would say let’s start with the very first presentation, and Clara, the floor is yours. But maybe very briefly, if I may say so: the idea in terms of the run of show is to have a series of interventions by our speakers. They have roughly seven minutes, and after that we will have a first set of questions and discussions. We will then, roughly 30 minutes before the end, open the floor to the audience, and we may also have a second round after that. So be prepared. And Clara, the floor is yours, if you may introduce yourself very briefly, also talk a little bit about the ICO if you want, and then go ahead with the subject matter. Thank you. Thank you. And can I just check, are the slides showing on your side? It’s showing pretty well. Yes. Okay, great.
Clara Clark Nevola:
So, my name is Clara Clark Nevola and I’m joining you from the UK this morning. Well, my morning, I guess your afternoon. And thanks very much for having me, and I’m sorry I can’t be there in person. So my bit will be to talk about the privacy enhancing technologies aspect, and in doing so I’ll also kind of introduce my role and maybe the role of the Information Commissioner. So the way that I and the ICO regard privacy enhancing technologies is basically as a tool to enable data sharing. If you’re not familiar with the Information Commissioner’s Office, we’re the UK’s independent data protection authority. We regulate data protection and wider information rights, and, as with other data protection authorities, we are independent of government but publicly funded. We produce guidance. We take enforcement action. We provide advice and support for organizations and members of the public, and we also engage with governments and other stakeholders on advancing policy positions in this area. I sit within the technology policy team, and our role is to anticipate and understand and shape how emerging technologies and innovation impact people in society. And that’s very much how I’ve approached privacy enhancing technologies. So maybe the first question would be, well, what are privacy enhancing technologies? But actually I’m not going to start from there, because I think, although it’s interesting to understand how they work and what they do, it’s, I think, more interesting to approach what a privacy enhancing technology actually does. So this is quite a vague term, it covers multiple disparate techniques, and I see it more as a sort of toolbox, where each tool in the toolbox can do a different thing. And instead of explaining, you know, what is a hammer? It’s more interesting to see, well, what can a hammer do for you? So if you have some furniture that you need to assemble, how can you put this furniture together? So not so much how do you make a screwdriver, or the technical components of a screwdriver, but the screwdriver allows you to screw two pieces together. And with that optic is how I’d invite you to approach privacy enhancing technologies. So instead of focusing straight away on the tools, I’ll start with explaining what the problem is. So what is this furniture that we’re trying to assemble? And broadly, the furniture we’re trying to assemble, the problem statement that we have, is that data sharing is difficult. There’s lots of different scenarios in which data sharing has challenges. And these challenges are sometimes data protection law, but in many cases they’re much broader. So they will be reputational, commercial, organizational barriers. So typical scenarios of data sharing involve two or more organizations who are trying to share data between each other. So for example, a hospital and the local government might want to share data to see what the overlap of patients or social services is. And another very common scenario is publication of data. So this is no longer reciprocal sharing, but outputting of data to an audience or to the public at large. Then we have sort of putting multiple databases into one. So one organization ingests data from multiple sources. So you might think of a local government wanting to make road layout improvements, and they need to take in data from the police and maybe from hospitals, maybe citizen feedback, site campaign data, and they need to bring it all together.
And another typical scenario is just the need to keep that data secure. So for example, if a government uses an external provider to host data, they may need to be sure that it’s particularly secure. So these are the sort of problem statements we have, with various tasks to be done. And now I’ll move on to explain what tools might be the best to use for this. And this is kind of where the privacy-enhancing technology bit fits in. So what are they, what do they do? So in the first scenario, where you need to share data between multiple parties, I’ve clumped together the types of privacy-enhancing technologies that would be useful in that scenario. So I won’t dwell on them in detail. Given the time constraint, I’m happy to go back to them later if anyone has questions, but I’ll just give a brief overview. With homomorphic encryption, the kind of underlying concept is that it allows computations to be performed on encrypted data without the data first being decrypted, which keeps the data much more secure and minimizes the access that you can have to it. Secure multi-party computation is a relatively similar protocol, but more suitable for large groups. And zero-knowledge proofs are a bit different. They refer to a protocol where one person needs to prove something to somebody else, we could say typically whether you’re above a certain age, so you’re eligible to do a certain activity, drive a car, purchase alcohol. And instead of revealing the underlying data, so maybe date of birth, you can just prove that you’re over whatever the threshold is. So it minimizes the data that’s shared. For publication and ingestion, two techniques are both applicable. So differential privacy is a way to prevent information about specific individuals being revealed or inferences about them being made. So it adds noise, adds records, and measures how much information about a certain person is revealed. Synthetic data, meanwhile, is essentially artificial data which replicates the patterns or statistical properties of the real data. So you would have a real data set generate a synthetic data set that maintains the same properties, but is not the real underlying data. So it either anonymizes or significantly de-identifies the data, depending on which route you go down. And then finally, trusted execution, not finally, but I’ll say federated learning first. Federated learning is very useful for ingesting data from multiple sources. So typically you would need to move all the data across to a central hub. So imagine you are developing a tool for medical imaging. You need to collect all the medical images from a whole group of hospitals to have a large enough data set to train the model that you’re then going to use to detect these images. With federated learning, you avoid the need to move the data across, and you instead train a model locally and then bring together centrally the improvements in that model. So it really reduces the need to share data. And then finally, trusted execution environments are essentially a security application, a combination of hardware and software, that allows data to be isolated within a system. So that’s a whistle-stop tour. And I’ll move on to talk a little bit about our involvement in this area as the ICO. So this year, in June this year, we published Guidance on Privacy Enhancing Technologies. So if you’d like to know more detail about anything that I’ve talked about, I would highly recommend you read the guidance.
And we focused on the link between these technologies and the benefits they bring to data protection law. So how privacy enhancing technologies can support data minimization, data security, and data protection by design and by default. And we’ve provided explanations of all the technologies for people who are not familiar with them, and also that mapping between the use of the tool and compliance with the law, to help both decision makers in organizations and developers of these technologies. And that’s a flavor of what the guidance contains. This kind of one-to-one mapping with, OK, you’re using a tool. How should you use the tool and how will it help? And we’ve also provided examples of scenarios in which PETs could be appropriate. And I’m sure we’ll come back to talking a bit more about the risks and benefits. But I think it’s important to note that privacy enhancing technologies really, really help with data sharing and data reuse. But they’re still a relatively emerging field. There are some great examples of them being used already in practice, so they’re not an academic concept, but they’re still relatively new. So there are still issues with maturity and expertise. I’m just, I can see Christian’s looking at me. So I’m going to finish up, which is that there are still a few challenges to solve, as mentioned. So, you know, they’re a great screwdriver, but they’re maybe not yet an electric screwdriver, and there are still issues to understand: how we can match up well the users of privacy enhancing technologies with the developers, so how do you bring the expertise to the people who need them, how can technical standards develop in this area, and how can costs be brought down. So, that’s my introduction to privacy enhancing technologies, and I’ll hand back to Christian.
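To make the “adding noise” intuition in the explanation above concrete, the following minimal Python sketch applies the Laplace mechanism used in differential privacy to a counting query. The count, the epsilon value and the function name are illustrative assumptions only; a real system would also track a privacy budget across repeated queries.

```python
import numpy as np


def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise; the sensitivity of a counting query is 1."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)


# e.g. number of patients matching a query, released with epsilon = 0.5
print(dp_count(1_204, epsilon=0.5))
```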
Christian Reimsbach Kounatze:
Thank you very much, Clara. And maybe before we move on to the next presenter, I just want to provide a little bit of context on why we started with Clara’s presentation, because I realized that I maybe missed clarifying that point. And the reason is that privacy enhancing technologies have traditionally been looked at as essentially the first kind of approaches and tools, if you want. If you ask people to think about the role of digital technologies and how we can protect privacy, people would traditionally or typically point to privacy enhancing technologies. And as Clara’s presentation has highlighted, this has definitely evolved. So there are now new types of privacy enhancing technologies, which she addressed. And maybe, if I may, Clara, ask you one question, because it actually also opens things up a little bit for the later discussion: why has the ICO decided to look into this and to publish the guidance? If you could elaborate on that a little bit before we then move to our next
Clara Clark Nevola:
presenter who is sitting next to me. Of course. So, we’ve long been advocates for kind of responsible data sharing and it’s something that stakeholders frequently tell us that data sharing is really hard, that even no matter how much we say data protection is not a barrier to data sharing, there are always challenges. And a lot of the challenges we’re seeing were not so much legal. but they were more organizational and business wise in the sense of you would have a data set and you would not want to share it because you don’t know what’s going to happen to it afterwards which is a legitimate concern and with privacy enhancing technology you can massively reduce that risk so I was talking about homomorphic encryption if you hand over a data set to a third party you don’t know what they’re going to do with that you know you have a contract to say how they can use it but you don’t have ultimate visibility over it while if you implement homomorphic encryption there’s a technical limit to the queries that you can put in so you have a guarantee that the data is only being queried for a pre-approved set of things so we thought it’s exciting and useful to develop data sharing yeah thank you I think now it’s a good time to move to our next speaker I will also ask you to introduce yourself but maybe as a as a kind of a
Christian Reimsbach Kounatze:
context why you are next: essentially, we thought that everybody probably knows the Mozilla Foundation, and they are obviously also using privacy enhancing technologies, and we will hear about that. So essentially it is a good illustration of not only the potential of privacy enhancing technologies, but also an example where every one of us is potentially interacting with this kind of technology. So again, please introduce yourself, and maybe you want to talk about the
Udbhav Tiwari:
Mozilla Foundation eventually and yes thank you so hi I’m Udbhav Tiwari I work with the Mozilla’s public policy team where I’m the head of global product policy my job is to work with internal technical experts and external regulators and lawmakers to help them understand the consequences of regulation as well as ways in which that regulation could be improved to further Mozilla’s mission and Mozilla is a unique organization because we’re of course known most for our browser but we’re actually a corporation that’s owned by a foundation so the Mozilla Corporation has a single share that’s owned by the Mozilla Foundation and that means that most of the typical incentives that apply in the technology sector don’t necessarily apply to us shareholder pressure the driver or pressure for profits and which at some level we believe are responsible for some of the more egregious practices when it comes to data collection in the space. And the reason that context, I think, is particularly important for this session is Mozilla as an organization, when we started the Firefox browser now almost 25 years ago, for the first maybe 10 to 15 years, had a very strong policy of simply not collecting any data at all. And usually, when organizations say that, they’re actually talking about user data. So for example, even today, Mozilla’s browsing history is end-to-end encrypted, which means that if you have history, say, on your desktop and you’re accessing it on your phone, the only two places where that history exists in an unencrypted format are those two devices. Mozilla does not have access to that. But 15 years ago, we didn’t even collect any telemetry. And obviously, both it came from our very strong privacy credentials and the idea that we would not collect any data at all, even if it’s not directly about our users or their practices. But ultimately, we realized, as we became a more popular browser, that for a product that people used to access hundreds of millions, in fact, billions of websites around the internet, not having access to any telemetry would mean that we would never be able to make a product that would actually serve our users. Because that telemetry was used to detect which websites were breaking and which websites were throwing compatibility errors so that we could then go investigate those websites and speak directly to developers in a manner that we could resolve that and make changes in our products to help make sure that they don’t happen again. And that’s the period when we started exposing privacy-preserving ways of collecting this information, which within Mozilla essentially means separating the who from the what. And that separation for us has been quite a long journey. And that journey specifically, I think, over the last three to four years has crystallized around maybe three issues. And I think those are the three kind of maybe samples that I will be talking about to both explore Mozilla’s thinking, but also to react to developments that are taking place in the external world. The first is there’s definitely been a recognition that the proliferation of internet availability, bandwidth, and connectivity, along with computational power, has enabled certain kinds of privacy-preserving technologies today that were not available or not as feasible a few years ago. The second is that privacy post-2014 has definitely, both because of laws like the GDPR, but also because of reputational concerns, actively started to become a differentiator between products. 
And people are choosing products because of privacy. So the net investment that is coming into the space in these technologies has increased. And finally, and this is both related to Mozilla, but something that we don’t do ourselves, is the developments that are taking place in the advertising ecosystem. Specifically, Google’s Chrome Privacy Sandbox set of technologies, which have garnered a lot of attention over the last couple of years, for attempting to do all of the parts of the advertising ecosystem, targeting, attribution, remarketing, in a more privacy-preserving manner. And Mozilla has arguably been one of the biggest and most vocal critics of some of these technologies. Because we think, while they are better than the current practices that are enabled by the third party ecosystem, the technical validation of many of the claims that they make still requires some work. And those are the three things that I’m actually going to talk about. On the first piece, which is Mozilla’s own practices, there are, I would say, two standards that people at Mozilla have been integrally involved in, that are now almost done at the IETF. One of them is actually done. One of them is oblivious HTTP.
And the other is DAP, which is the Distributed Aggregation Protocol. Both of these standards essentially
work by, firstly, sending data in a manner where there is an intermediary or a proxy in between that separates where the data is coming from from what the actual substance of that data is. For the individuals in the room and on the session, if you use Apple’s private relay service, which is available on iOS, it works in a very similar manner in order to set it so that even Apple does not know either your DNS lookups or your browsing history, because it’s first sent to a proxy, where the proxy strips the information about where it’s coming from. And then it’s sent to the destination. ultimately. Mozilla is actively exploring ways in which we could use these technologies in order to collect telemetry information. And we expect to make some announcements on this regard in the coming weeks and months. There’s been a lot of progress. But one of the things that has actually held us back, I would say, is that the number of players in the ecosystem that are willing to engage with these technologies is still actually quite limited, both from the demand side, which is how many players actually want to collect technologies with these privacy-preserving manners and in this manner. And as you can imagine, the more suppliers they are, the more customers they are, the more competition is, the cheaper they will be, has definitely not happened yet, despite the fact that in comparison to some of the more complicated and possibly more promising technologies like homomorphic encryption, these are much, much cheaper. And it’s not actually technology that is holding the deployment of DAP or the deployment of oblivious HTTP back. It’s the fact that there are actually very few service providers that provide the infrastructure to be able to utilize these technologies, which are, relatively speaking, much easier to implement than differential privacy or homomorphic encryption. On the second point, which is Mozilla’s own thinking with regard to the developments in this space, I would say that when it comes to the evolution around targeted advertising that’s taken place, it’s almost certain now that the only browser in the market that still collects or has not disabled third-party cookies yet by default is Google Chrome. And the pressure that Google has been subject to by privacy advocates, by regulators on this has been quite high. So what Google has done is now proposed a set of technologies called the Privacy Sandbox Technologies that attempt to do what the current advertising ecosystem does in a more privacy-preserving manner. What Mozilla has said on this more broadly is that we support the idea. We support the concept of why the idea exists, because Mozilla, for example, does not block ads by default in Mozilla Firefox. We do believe that advertising is a valid way to support. publishers on the internet. However, we do think that the current state of the advertising ecosystem is absolutely unsustainable. And that’s the reason we block trackers, that’s the reason we block fingerprinters, and all of the underlying infrastructure that may enable the advertising ecosystem, including third-party cookies, are actively harmful to user privacy and security. And we’ve done a lot of technical work in the last couple of years in order to implement that. 
The biggest one there is TCP, or Total Cookie Protection, which actually creates jars of information in which people can, when websites, when you visit a website, say the newyorktimes.com, and there’s a button on the newyorktimes.com that lets you like a Facebook, like it on Facebook or share it on Facebook, Facebook actually gets the ability to drop a cookie onto your computer that will then also note the fact that you’ve been to newyorktimes.com, you’ve been to instagram.com, you’ve been to washingtonpost.com, which may also have that button. And what Firefox does is it creates jars where each time a website is accessed, there’s a separate jar in which the cookie for that website and many other identifiers are dropped, and these jars cannot talk to each other. So that’s a way of limiting the harm of the ecosystem by still giving users the ability to gain from the benefits of third-party cookies, because we also use heuristics in order to determine is this an advertising third-party cookie or is it a third-party cookie that’s actually enabling single sign-on, which is essentially when you click on sign in with Google or sign in with Apple on different websites as well. And as we develop these technologies, the one thing that we realize is firstly, it’s actually possible to give users a good balanced experience between those two things, which is not having tracking, but still allowing them to support publishers if they choose to do so, and giving them the option to say, go to the Mozilla add-on store and download an ad blocker if that’s what they want to do. So we think that that choice has been very valuable. And finally, because I know I’m at time as well, is on the Google Privacy Sandbox piece, what we have said is that right now, there’s a very serious. 
risk that the standards and technologies under Google Privacy Sandbox will become the de facto way in which large parts of these activities are carried out on the internet and we think that that’s both a privacy concern but also more importantly a competition concern because it’s that interplay between privacy and competition where traditional advertisers who are not Google don’t like those technologies because they say that that will mean Google’s own technology and first-party motive data will become more valuable and people like us and privacy advocates don’t like those technologies because they don’t go far enough right so it’s definitely a scenario where everyone is like quite unhappy with the state of play but what Mozilla thinks is that if these standards are going to be deployed and they are Google has announced that they will stop third-party cookies by the end of next year we think that they should happen at standards bodies because there is a process in standards bodies like the W3C and like the IETF that vets and validates these standards for both their technical capabilities as well as for their potential for interoperability with other ecosystems in a world where more than 60% of the individuals who use the internet are running on a variant of Chrome which is the chromium browser engine these technologies have a very strong ability to shape what the future of the internet and advertising and tracking may look like and while they are privacy enhancing technologies if privacy enhancing technologies like them are adopted at the scale at which they will be adopted they need a lot more scrutiny than they have received so far and which is why we’ve advocated a lot with the Competition and Markets Authority in the UK and we’ve also had engaged with many other regulators around the world both privacy and competition advocating for why processes need to be better than some of them have included conversations with Google as well so with that I’ll end and happy to answer any questions. Thank you, thank you very much, I think you raised quite a
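The per-site cookie “jars” described above can be modelled very roughly as a cookie store partitioned by top-level site, as in the hypothetical Python sketch below. The class and method names are illustrative assumptions and do not reflect Firefox’s actual implementation of Total Cookie Protection.

```python
from collections import defaultdict


class PartitionedCookieStore:
    def __init__(self) -> None:
        # (top_level_site, cookie_origin) -> {cookie_name: value}
        self._jars = defaultdict(dict)

    def set_cookie(self, top_level_site: str, cookie_origin: str, name: str, value: str) -> None:
        # The cookie is stored in a jar keyed by the site the user is actually visiting.
        self._jars[(top_level_site, cookie_origin)][name] = value

    def get_cookies(self, top_level_site: str, cookie_origin: str) -> dict:
        # Only the jar for this top-level site is ever consulted.
        return dict(self._jars[(top_level_site, cookie_origin)])


store = PartitionedCookieStore()
store.set_cookie("nytimes.com", "facebook.com", "uid", "abc123")

print(store.get_cookies("nytimes.com", "facebook.com"))         # {'uid': 'abc123'}
print(store.get_cookies("washingtonpost.com", "facebook.com"))  # {} – separate jar, no cross-site linkage
```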
Christian Reimsbach Kounatze:
number of points that we will definitely need to come back to during our discussion. And one of the points, if I may, because it actually also opens up the door a little bit for the next intervention to some extent. But let’s say more broadly, for all of us, it’s a question about the difficulties related to validating the technical claims, and what that actually means for the selection, and also for policy makers and the regulators that are trying to promote the adoption of privacy enhancing technologies, but also the issue of interoperability. I think this is maybe a topic that I also would like us to discuss. But what I also find interesting was that you were talking about the current state of the ecosystem, of the advertisement ecosystem, and highlighting that there are obviously some challenges. And I think our next speakers, starting with Max and then Stefan, will address exactly that state. But what is more interesting, and this is really why I also look forward to their presentation, is that they are essentially talking now about a different role of technologies for supporting privacy, which is the enforcement side. Because interestingly, you talked about how a lot of those technologies have gained higher adoption because of, or thanks to, the GDPR. So we have a legal regime in place, but apparently we will hear what is happening with cookies and how they’re being used. And so without further ado, I will give you the floor, Max. And I understand you will co-present with Stefan. So I’ll let you manage that between the two of you. The floor is yours. And if you may introduce yourself and what your NGO does. Thanks a lot.
Maximilian Schrems:
Thanks for the invitation, and early morning from Vienna. Stefan is on the second one for practicality reasons; I’m just gonna do the presentation myself. Stefan is the developer that actually works on a lot of these things. So to get it maybe also out of the policy-only discussion and into some hands-on discussion, that is especially what Stefan would be here for. So I’m just going to run through our presentation, trying to be as quick as possible. So fundamentally, we at noyb do different enforcement projects. We have deep dives if there’s really a big legal issue, but we also see that there are just mass violations. So violations where the GDPR is just violated, which I usually compare to speeding, where it’s not a big complicated legal case. It’s not a big overly dramatic situation, but we just see mass violations where people just basically do that and violate the law in masses. Typically in the privacy or even in the digital community, we’re still working on most of that in a rather analog way. Typically when lawyers work on digital issues, it gets as digital as Word, usually, and that’s about it. So the idea was, if we have these hundreds and hundreds of violations, we have to speed up, especially as we’re a small organization based mainly on donations. So you have to be efficient in what you’re doing as well, which is a similar issue for governments as well, I guess. What we thought about on how to approach all of that is a bit like a speeding camera. I can tell you from an Austrian perspective, if you speed in Austria, typically your license plate is automatically read by the speeding camera. The speed is absolutely automatically calculated, and it’s automatically transferred into a ticket that gets mailed to you, and you basically get a code to pay the fine. There is no human intervention in any of these legal procedures anymore. They’re fully automated, and that’s basically what we do for these standard violations in other areas of the law as well, because it’s just inefficient to have people for that. Now, we thought to kind of take that thinking and apply it especially to web technologies right now; in future plans, that could also be used for mobile technologies, for example. And the idea was basically to come up with a multi-step system that allows us to generate complaints automatically, manage them automatically, and also settle
That also allowed us to scale up because we know thousands of websites are using exactly the same software to do this cookie banner and OneTrust actually has a JSON configuration file where most of the configurations of the cookie are stored so we can actually or the computer can read it quite well because for example this is like the banner show reject all button false so it basically doesn’t show you reject button on the first layer and you can take it right from the JSON file to know that that is there or not there. Same thing for many other of the configurations of a cookie banner. In the background OneTrust provides a interface where the admin can change that so we also took screenshots to explain to the companies which button they would have to fix to make sure that they comply with the GDPR and that was kind of like the systems basically the back end and the technology like the technological way of saving these settings and that we could basically auto collect. We did a first kind of code search there’s a website for example called public www where you can search like you can search on google on this you can search for code in the website so to see which software a website is using and thereby you get a list of all the websites that use in this case the OneTrust cookie banner and that already allows us to only focus on the websites that actually have it used and not have to scrape the whole web for random pages that actually use OneTrust. What we then have is that we actually first auto-scan the website to see if there’s any violations, and then we actually have a manual scan where an individual really goes to the website and checks it. We did have a two-screen setup usually where there’s a test environment on one side which was a virtual machine. We’re right now changing that and then you have basically a management interface where you can manage the case yourself. We need to do that also because under the law, we need to have a data subject to someone that is directly concerned to actually bring a case. All of that basically gives you a big fancy list where you can filter all the cases and then take a case and do your assessment. We only filed if the human and the computer basically decided that it’s a violation. So we have this two people have to agree system to make sure that there’s a low error rate. Once you’ve done that, we basically auto-generated a complaint, which is text blocks that generate the PDF, where you have certain elements that are filled automatically, certain elements that turn on and off, depending on which violations you found on the website, basically fed from the JSON file and what you found in that. We typically then send that to the individual company first. That was one of the biggest issues because we had to make sure that they don’t think it’s them, because if you get like there’s a legal procedure against you, most people will just throw it away. We even tried to use some of the systems that the companies use. They typically use, for example, A-B testing to figure out which type of interaction works the best. So we A-B tested that as well and saw for different types of e-mails, we sent to the company, we get a better or lower compliance rate. So we thought if they can manipulate the users into clicking the Yes button with A-B testing, we can probably manipulate them into compliance with the law. By doing A-B testing, that was the approach there. As I said, we even have a full guideline on how to be fully compliant. 
So it was served on a silver spoon, on a silver platter, to actually have that done. If companies actually decided to comply with that, they could go to a platform where they could log in with a case number and a password, and then they were able to actually let us know that they had fully complied and that they had fixed the problem. We then automatically were able to scan that and verify it. And also from a lawyer’s perspective, we were able to get all the feedback from the companies in an automated format. So we didn’t have hundreds of emails with some law firms that send you endless text; we basically got the feedback in a structured way as well. Now what’s super interesting is if you look at that from a statistical point of view. We were doing the first version, and that’s pretty much what I showed to you, in more of a duct-tape technology version. We just did a first test and saw how well it worked. And what was interesting was two things. First of all, we had a 42% compliance rate just by sending the companies an email with a specific instruction of what’s legal, what’s not legal, and that there would be further action taken if they’re not compliant. And that was already a huge number. That’s better than what we get from the data protection authorities if we’re doing cases there. So that was really interesting, that we had a very good compliance rate here. The second thing that was interesting, and that’s kind of dependent on the violation, I won’t go into that, but it’s different per violation how good the compliance was. There was only, and that’s a side note, there was only about 18% that fully complied, because we typically had six or seven violations and they fixed some of them. So the 40% is the overall number of violations. But the really interesting part was the domino effect that came out of it. Typically in law, we do not go after every person, after everybody that’s speeding. We intervene often enough that people feel, oh, speeding can actually be a problem. And what we saw is we scanned about 5,000 pages and then actually only sent an email to about 500. When we continued with the rest, we suddenly saw that hundreds of the other websites had all fixed their cookie banners, even though we had never intervened with them. So what happened? In the background, companies understood, oh, there is actually now enforcement action going on. I heard that from a colleague, I heard that from a software provider that also sent emails around. And suddenly we saw a huge amount of compliance without even intervening. And that’s exactly this idea of general deterrence
The other thing that we basically do here is that we upgrade a lot of the individual functionality that we can actually, the first version wasn’t cookie banners, but you can use that tomorrow for tracking pixels, for some other web technology, some script, anything else. And these modular parts, you can basically plug into the software and take it back out. That is fundamentally what this is gonna make a lot different. The rest is mainly really making the interfaces usable for an average lawyer so that we don’t need a tech person every time you need to change something in a PDF. That is the elements that we’re working on right now. For us, that was really one of the most useful projects I think we’ve done, especially considering like input-output ratios and really moving enforcement forward. So on that side, I think it’s a very interesting approach in the sense that we’re kind of working in a digital sphere, but still do kind of pretty analog procedures. And we could probably learn from a lot of areas on how we can do that better. So thanks for that. And I hope if there’s questions, especially technical questions, that Stefan can jump in on all of these. Okay, thank you.
Christian Reimsbach Kounatze :
Thank you very much, Max. I have probably one brief question, if you could elaborate on it, because it will actually also be a good transition to the next speaker: I think you mentioned that you had talked to data protection authorities. Could you briefly say what feedback you received and how high the interest is among those data protection authorities to implement this kind of tool in their processes?
Maximilian Schrems:
So I think, on a very personal level, if I may say, the answer was a mix of fear, because it was too much, something never seen before, a different world, and of high interest, in the sense of: how can we really be efficient in our work and also get rid of useless work for employees, because a lot of these tiny things are just very trivial and you don’t need a lawyer for a lot of that. One element that I forgot to mention: the quality usually gets better, because if you have a one-time template that was vetted by the more senior people, you know that what you are doing is going to produce good results, while if a more junior person has to do it for the first time, there is a very good chance that something will go wrong or be forgotten. So it is efficiency plus quality that you can get through systems that work well. However, the big problem in reality is that you need to implement it: you need programmers, people who really understand it, and the management skills to find the right cases, because this doesn’t work for every case. A big thing for us was also not to get entangled in details anymore, to really tell the lawyers: we are only doing these two things. There may be ten other violations on the website, which we just ignore for now because that doesn’t scale. That is a bit of a culture change that you need as well, even if it is annoying to say, okay, this is really a matter of going for this one topic, doing it well and quickly, and next time we do the next topic, which is a very different approach from procedures where we usually do everything.
Christian Reimsbach Kounatze :
Cool. I think, just to highlight, and I have noted it because it is a good topic for the later discussion, you just mentioned the word scale. And I think this is definitely one of the common themes when it comes to using technologies for addressing privacy problems: that we have a potential solution, or a support for a solution, or part of the solution, that basically helps us scale with the scale of the problem, so to speak. But we will get to that point, hopefully. Now it is my pleasure to give the floor to the European Data Protection Supervisor. And obviously one particular question, Wojciech, given that you are following Max’s presentation, is to what extent these tools are relevant for your agency, but also for your colleagues’ agencies. And maybe also talk about the role of privacy technologies, or technologies more broadly, in supporting your work and your cause. So the floor is yours. Thank you very much.
Wojciech Wiewiórowski:
Thank you for having me there. Thank you for being able to talk with you, even at such an early hour here in Brussels. So all the best from Brussels. First, a few words on the institution itself, the European Data Protection Supervisor. I guess most of you are familiar with it, but for those who are hearing for the first time about the very complicated system of privacy governance in Europe, the European Data Protection Supervisor is the supervisor of the EU, of the European Union institutions, bodies, and agencies. So I am not the super data protection commissioner for all of Europe, but the commissioner for the EU bodies and EU institutions. At the same time, we have 27 member state jurisdictions and 27 data protection commissioners, one in each member state; some of them have an even more complicated structure. Anyway, what is more important for today’s discussion is not our supervisory role towards the EU institutions, but the fact that we are advisors in the legislative process in the European Union, and also the fact that we provide the Secretariat for the European Data Protection Board, which consists of all these data protection authorities. So I am not speaking in the name of all these authorities, but I can give you a sense of the approach we have among the European data protection authorities. Well, it is a good idea to put me just after Max, because I can react to what he said about the resonance his work creates among the data protection authorities and in the market. It is true that there are a lot of data protection authorities interested in the practical deployment of solutions similar to the ones that noyb builds. It is also true that for some data protection authorities it is strange that an NGO, the civil society movement, can do the things which are called enforcement. Actually, it is enforcement; that is the way to make things run. And I am saying that also as a person who has always said that what Max did was what the data protection commissioners should have done ten years earlier, and they never asked the right questions. Anyway, coming back to the main point of the discussion, it is true that these tools prepared by noyb, including the information retrieval systems connected with them, are things that should exist in most of the data protection authorities, especially those that have an IT structure that is really independent from other institutions. As data protection authorities, we rather try to deal with things through law and guidelines. But it is true that some data protection authorities do have their laboratories and IT teams that are preparing tools as well. We try to do that as the European Data Protection Supervisor too, because we still remember that there is a kind of limit to the legislative actions we can take; making more law does not necessarily help. The point we are at in the European Union is that we have the law, and the law is not bad. The thing is that we have to operationalize it, also by promoting the role of IT architects and a comprehensive privacy engineering approach. That is something that lies at the roots of our strategy as the EDPS. And in our strategy for this mandate, shaping a safer digital future, a new strategy for a new decade, we put the tools in as one of the pillars.
The tools: we are going to use the tools, and we are going to develop new ones. Of course, as I said, it is not that easy for all data protection authorities to create a laboratory where these tools are really produced. But authorities like the ICO, like the CNIL, like the Canadian authority, like some of the German authorities, are ready to do it and ready to prepare their own tools. What we do as EDPS, apart from the very small things connected with remote control and remote audits, is to try to organize the community. We have IPEN, the Internet Privacy Engineering Network, which is a platform for engineers who are preparing the best solutions, to discuss them and to disseminate information about the different solutions produced by different organizations. But we also try to make use of the fact that the European Union institutions, around 70 institutions, have their own achievements in this field. Let me give just two examples of such solutions, both coming from Eurostat, the statistical office and the agency dealing with statistics in the European Union. They are both also given as examples in the current United Nations guide on privacy-enhancing technologies for official statistics. The first one is the processing of longitudinal mobile network operator data, where Eurostat has developed a proof-of-concept solution with a technology provider; the main goal of the project is to explore the feasibility of secure private computing solutions for privacy-preserving processing of mobile network operator data. The technology used for the project was a trusted execution environment with hardware isolation, delivered by the market. So it is not only Eurostat preparing this; Eurostat is deploying and, let’s say, localizing it, but business is involved in it. The second one, also from Eurostat, is the development of trusted smart surveys. Once again, this is a situation in which Eurostat is trying to localize, on the IT infrastructure for the EU institutions, a solution that was prepared for the market. So these are the things we develop, these are the things we try to promote, and this is the kind of culture we try to deploy among the clerks of the quite byzantine institution that the European Union administration is.
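To illustrate the kind of statistical processing the first Eurostat example targets, here is a small, hypothetical sketch of an aggregation step over pseudonymised mobile network operator events, with small cells suppressed before release. The threshold, field names, and data layout are assumptions for illustration; the actual proof of concept runs this kind of computation inside a hardware-isolated trusted execution environment, which is not modelled here.

```python
# Hypothetical sketch of the aggregation step such a pilot targets; the enclave
# isolation used in the real proof of concept is not shown.
from collections import Counter

K_ANONYMITY_THRESHOLD = 10  # assumed policy: suppress cells with fewer devices

def aggregate_presence(events):
    """events: iterable of (pseudonymous_device_id, area_code, hour).

    Returns per-(area, hour) device counts with small cells suppressed,
    so no released figure can single out a handful of subscribers.
    """
    seen = set()           # deduplicate a device within the same cell
    counts = Counter()
    for device_id, area, hour in events:
        key = (device_id, area, hour)
        if key in seen:
            continue
        seen.add(key)
        counts[(area, hour)] += 1
    return {cell: n for cell, n in counts.items() if n >= K_ANONYMITY_THRESHOLD}

# Usage with toy data: the AT-01 cell (a single device) is suppressed.
events = [("dev" + str(i), "AT-13", 9) for i in range(12)] + [("dev1", "AT-01", 9)]
print(aggregate_presence(events))
```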
Christian Reimsbach Kounatze :
Thank you very much, Wojciech. Just a question, because I think what I liked about the examples you pointed out was that you were essentially directing us, or your speech was basically directing us, towards a solution for how to promote the use of those different technologies, and you also gave examples of data protection authorities that are leading the way. I was wondering if you could also talk a little bit about the importance of guidance in that particular role; or maybe we can talk about that later on, when we talk about solutions and how to promote this, because obviously this is where the UK ICO guidance plays a role. So let’s put that aside, because I just realized that time is running and we need to move on, sorry, to our next speaker. And here I would like to give you the floor, Suchakra, to introduce yourself and your organization, what it does, and how it relates to the discussion about technology and the role of technology for privacy protection. I think one of the key elements, at least from my understanding, is that what you are doing is helping us scale with the problem and address some of the issues related to privacy. But I will let you talk and introduce yourself.
Suchakra Sharma:
So I’ll just share my screen as well, so everybody can see, and then we can talk. So I am Suchakra. I am the chief scientist of a nice little startup called Privado. And what we are trying to do is look at PETs from a very different perspective. The way PETs have been developed, and the way privacy itself is generally looked at, is from the perspective of the user or the solution providers. But what we are thinking is that data is not floating in the ether; it is moved from one system to another by software. So why not just look at the software itself, which is handling the data? It can give you an interesting perspective on what the intention of the developer was when they were developing the software, and then you can track what is happening. So essentially we are trying to catch privacy violations before they even manifest inside the system. Even before you release software, you can understand how it is going to handle data, and you can do it at all the points in the chain where software handles the data. As Max was pointing out with automating everything, take a ticketing system: that ticketing system is software. When it takes a photo of a car, it captures some information, private information, which then translates into a ticket that goes through five or six systems behind it. Those are all points where data is flowing. How about we understand that whole system itself, and then we can predict what is going to happen to the data? So that is the perspective. I have a PhD in computer engineering, have been working in cybersecurity for about six years, and for almost two years now in privacy, and I am trying to bring the learnings I have from the cybersecurity industry into this environment. Okay, so visiting a doctor: this is how you do it these days. You fill out a form, which has a lot of private information, PHI; the doctor looks at it, keeps it safe for some time, and then it gets shredded, hopefully. But now we have something new in this millennium: we have software, and the software is now handling your data. Things have not changed much, but with the advent of software, this data gets exchanged through multiple hands, goes through logs, gets to an advertiser. You don’t even know what that software is doing; you just trust it. You go to a doctor’s office, you fill in the details, you just trust it. But what is happening behind the scenes, and this is true because we have observed and analyzed software, is that it uses a lot of technologies that proliferate this data. So essentially, at development time the software has no data; it just has the intention of what to do with the data. But as the software gets deployed, some of the data gets put into an analytics service, some goes to a third party, and then there are databases everywhere; the data expands. So it is worthwhile to look at the software itself, because that is where the intention of what to do with the data is, and you can actually do it. What happens to your data is actually defined in software. So at the time the software is built or deployed, we can get information about the data inventory, the doctor’s name, the patient’s name, et cetera. We can get a map of the data.
The intention of the software is, say, to take the patient’s name and put it into this analytics service, and based on where the data is going to be stored, for example a data center in US East, you can actually get the location of where the data will be. Again, no data is being processed; it is just the intention of what to do with the data. Or third-party transfers: if that doctor’s software has some odd connection to another piece of software that is using advertising, you can track it all the way. This gives us something I would like to call technically verifiable PIAs. Every organization tries to do PIAs, privacy impact assessments, but that cycle is too long. There are documents that have to be filled out, then you go back to the engineers and the developers, and then the lawyers also get involved and want to see the document in a specific format. But what if you had all this information very early on in the game? If you do it at that stage, it is easy, it is early, and it is proactive privacy. If you try to do it at later stages, trying to understand where the data went using ten other technologies, it is a little bit late. So this is one PET we would like to propose: an expansion of PETs by making the software itself not leak your private information in many places. One example: in Canada, I am in Toronto right now and it is pretty late, there is a directive released by the government on privacy impact assessments, and all organizations have to fill in these PIAs and go through a process. Canada had this dental benefit last year, and they created a summary, and there is a small piece of text fulfilling one of the points which says individuals submit their personal information on the CRA, the Canada Revenue Agency, website, which uses HTTPS, et cetera. That’s about it. And to get that kind of assessment, they would have gone through multiple places, looking at previous assessments, looking at software. But software changes so rapidly. The moment you introduce a new kind of dental benefit or a vaccination plan, the software gets developed rapidly, and you never know what went inside it. But all this information is already there, as we discussed, because when the software was developed, we knew what was supposed to happen to the data. So imagine, before even making that kind of service public, what if you could find out whether it is collecting your PII or PHI and transmitting it to some other odd service you never imagined; these days that can be open LLMs. And you can actually do it. We have built a tool, which is also open source, you should check it out, that allows you to identify that if a developer decided to collect an address inside the software, it can say: new data element found, at this exact place. If it is a privacy violation, fix it very early on; you don’t have to wait for a big assessment and then go back. You can immediately know that, yes, today this developer sat down and decided to collect address information. You have this information right there. You can then see the flow, where it went. You can actually analyze the software.
Just as a human writes it, our tool tries to analyze that software to see the intention of the human. You can see that the data will eventually go to OpenAI, or to a MongoDB database somewhere, or that it gets leaked to a console, which again is a privacy issue; people don’t realize it, but it is a very big privacy issue. You can get this deeper understanding just by looking at code, because code carries the intention of what the developer wanted to do with the data. That is essentially what it is, and having these technically verifiable PIAs opens a new door. You get a chain of trust, so the accountability perspective also comes into the picture: you have a record of modifications right from design to development to deployment. You have an opportunity to certify software now; you can have privacy-certified applications, because you know that an application handles private data in a more secure manner and has not integrated these dubious advertising things. You can translate the privacy intentions of legal directives, those big documents we were seeing, into very fine-grained checks that are followed. This can open the door to taking high-level laws like the GDPR and the CCPA, and the nuances in them, and converting them into really fine checks that can be run on software to say that, yes, this is compliant, even before it gets deployed; so, in a manner, automating what Max is trying to do, but doing it very, very early, even before the software gets deployed. And then it opens a paradigm for privacy engineers: they can now proactively help build privacy-respecting apps, because privacy engineering gets involved. It is a new role that should exist, it is very important, and privacy engineers can help build privacy-respecting apps. But what we have also observed is that this cannot replace human processes; they are absolutely essential. What if there is no policy on sharing a document? You can do as many nice things as possible on the software side, but that is essentially what it is. Yeah, that’s about it. Questions?
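As an illustration of the approach Suchakra describes, the sketch below scans source text for declared personal-data elements and for calls that send data to risky destinations. It is deliberately simplified: the patterns, categories, and sink list are assumptions for the example, and a real engine such as the open-source tool he mentions builds a proper data-flow graph from parsed code rather than matching strings.

```python
# Simplified illustration of source-level privacy scanning. The patterns and
# categories below are assumptions for the example, not Privado's actual rules.
import re

DATA_ELEMENTS = {          # identifier substrings -> data category
    "email": "contact_info",
    "address": "contact_info",
    "patient_name": "health_data",
    "ssn": "national_id",
}
SINKS = {                  # call substrings -> destination type
    "analytics.track": "third_party_analytics",
    "requests.post": "external_transfer",
    "logger.info": "log_leak",
    "print(": "console_leak",
}

def scan_source(path, source):
    """Return (line_no, finding) pairs for data elements and risky sinks."""
    findings = []
    for no, line in enumerate(source.splitlines(), start=1):
        for ident, category in DATA_ELEMENTS.items():
            if re.search(rf"\b\w*{ident}\w*\b", line, re.IGNORECASE):
                findings.append((no, f"{path}: data element '{ident}' ({category})"))
        for call, dest in SINKS.items():
            if call in line:
                findings.append((no, f"{path}: potential flow to {dest} via '{call}'"))
    return findings

# Usage on a toy snippet: flags the health-data element, the log line, and the
# outbound transfer of an address to an advertising endpoint.
sample = """\
patient_name = form["name"]
logger.info("created record for %s", patient_name)
requests.post("https://api.example-ads.net/collect", json={"addr": home_address})
"""
for line_no, msg in scan_source("intake.py", sample):
    print(line_no, msg)
```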
Christian Reimsbach Kounatze :
Thank you very much, Suchakra. I also thank you for making the connection to the previous presentation by Max. And obviously one of the questions that I was wondering about is that this is also an approach that NGOs or privacy advocates could theoretically use to enforce privacy law, and that data protection authorities could use when they are doing in-house screening or impact assessments and the like. But we also have a set of professions operating within firms, and I would say this is a good link to our next speaker, Nicole. So if you could introduce yourself and how your work and your experience relate to what the previous speakers have said. The floor is yours.
Nicole Stephensen:
Thank you so much. Hello, everyone. I feel really honoured and delighted to be following such a wonderful group of presenters. Thank you so much for having me today. My name is Nicole Stephensen and I’m a partner at IIS Partners, an Australian privacy and data protection consultancy now in its 20th year of operation. You’ll also hear from my accent that I am a Canadian, which means I’m both a Canadian and an Australian citizen, but I’ve been living here in Australia for 20 years now. I lead our privacy services function at IIS Partners, where my specialism is in privacy program management and culture building, so you can sort of picture how I’m potentially going to wrap up today’s session. And I’d like to start with the essence of my intervention in mind and put to the group that privacy-enhancing technology should not replace good decision-making at the outset. Our governments and organizations still have a positive duty to ensure that their information practices accord with relevant privacy and data protection laws and community expectations. Now, in my work there is a large focus on strategic privacy risk management, which is natural, right? Because the work of a privacy consultant often relates to identifying and mitigating risk around decisions that have already been taken: organizational information policy or practice, projects or programs, and, of course, technology deployments. And sometimes I find that our governments and organizations can be educated on what their risks are, but particularly where there are large volumes of personal data or complex vendor relationships involved, they might struggle to solve for these using conventional methods. As an example, where there’s a risk of unauthorized disclosure of personal data into vendor processing environments, such as through vendor APIs or single sign-on digital handshakes, it can be quite difficult for organizations to test whether a risk exists only in the realm of possibility. We often see those types of risks borne out in privacy impact assessments: consultants like me say, oh, you might have a risk of unauthorized disclosure here. But is that only in the realm of possibility, or is it actually playing out in reality? Unauthorized disclosures to vendors that are processing personal data on an organization’s behalf often happen without any real awareness of the organization, and we often refer to this as data leakage. But it is also highly likely to qualify as a personal data breach, depending on the jurisdiction you’re in. And although I’m a huge proponent of administrative controls like contracts, data leakage isn’t something that a contract with a vendor is going to eliminate properly, or even control for sufficiently at the outset. As all of you know, remediating data breaches is a backward-looking exercise. This is where I think privacy-enhancing technologies have deep potential utility. In the context of controlling for data leakage, so let’s use this as our example space, privacy-enhancing technologies will probably take the form of data accountability tools. This is more of a grey-area category for PETs, as compared with some of the technologies that have been discussed here today, where technology can assist an organization to enforce rules about what should or should not happen to personal data.
And the rules are going to be found in things like the data protection laws applicable to the organization, or they might be set out as commitments to the community in the organization’s privacy policy, or expressed as contractual provisions between the organization and its various vendors or service providers. All of this said, though, the implementation of privacy-enhancing technologies does not remove from our governments or organizations those initial accountabilities associated with things like purpose specification: why do we need the data in the first place, and do we have a fit and proper purpose for collecting and using it? And then, of course, collection minimization: are we only collecting the personal data that we need to fulfil that proper purpose? These are the vital building blocks for a climate or culture that limits use and disclosure of personal data to the greatest extent possible, with or without the involvement of privacy-enhancing technologies. Now, all of that said, in my experience the business case for implementing privacy-enhancing technologies, at least as I’ve seen here in Australia, can be complicated by a number of factors. The first is whether the PET supplier is a small business or startup, because they themselves might lack the necessary privacy or cyber maturity; not in all cases, but certainly in many, particularly where there is no bucket of venture capital sitting behind the small business or startup. The second is the geographical location of the PET supplier. There are many associated legal requirements or barriers that might impact an organization’s or government’s ability to engage that PET supplier, and there might also be socio-political biases depending on where that supplier is based; if we take the Western conceptualization of privacy, a potential PET supplier based somewhere without those same socio-political norms or ideals might face a barrier. Finally, the budget of the government agency or organization. One thing we are noticing is that where privacy-enhancing technologies deal with large volumes of data and are priced on units or volume of data, the budget can blow out and effectively remove the agency’s or organization’s ability to use that technology at all. Now, I wanted to share with you that IIS Partners recently established a subsidiary company called TrustWorks 360, because we think privacy-enhancing technologies are a thing, and an important thing, in Australia and in the wider global market. TrustWorks 360 is working to bring privacy-enhancing technologies and other privacy and security management solutions to the ANZ and Asia-Pacific market, which is where we play. The feedback so far has been that it’s a real challenge. I actually approached one of our privacy-enhancing technology partners when I was considering the comments I would bring to the group today. They are called Q-Privacy, and they deploy tools that both allow organizations to audit for data leakage, remembering the example I gave you before, and also establish and enforce rules ensuring that only the personal data specified for a processing purpose is able to be pulled into those vendor environments.
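As a rough illustration of the kind of rule such a data accountability tool might enforce, the sketch below applies a per-vendor allow-list derived from the stated processing purpose and blocks anything outside it. The vendor names, fields, and behaviour are hypothetical and are not a description of Q-Privacy’s product.

```python
# Illustrative only: field names, vendor names and rules are hypothetical.
ALLOWED_FIELDS = {
    # vendor -> fields the contract / processing purpose actually requires
    "mailing-service": {"email", "first_name"},
    "payment-gateway": {"order_id", "amount", "currency"},
}

class DataLeakageError(Exception):
    pass

def release_to_vendor(vendor, record):
    """Pass only allow-listed fields through; refuse unknown vendors."""
    allowed = ALLOWED_FIELDS.get(vendor)
    if allowed is None:
        raise DataLeakageError(f"no processing agreement on file for {vendor!r}")
    extra = set(record) - allowed
    if extra:
        # In audit mode this would be logged as suspected leakage rather than raised.
        raise DataLeakageError(f"fields {sorted(extra)} not permitted for {vendor!r}")
    return {k: v for k, v in record.items() if k in allowed}

# Usage: the second call is blocked because date_of_birth was never specified
# for the mailing purpose.
print(release_to_vendor("mailing-service", {"email": "a@example.com", "first_name": "Ana"}))
try:
    release_to_vendor("mailing-service", {"email": "a@example.com", "date_of_birth": "1990-01-01"})
except DataLeakageError as err:
    print("blocked:", err)
```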
Now, I think this type of data accountability tool is exciting for the global privacy marketplace, and I think it has great utility for organizations that deal with volumes of data that cannot possibly be monitored by a person. In these cases, with my consulting hat on, I would say that automated solutions are much more suitable than relying on, say, the privacy officer or the DPO in an organization to try to get a handle on this. But there are barriers to uptake, and when I asked Q-Privacy to share what, in their experience, those barriers are, they gave me a couple of points to share with the room. The first is that there seems to be a low priority for uptake of PETs in small-to-medium organizations or smaller governments, because there is such a regulatory focus on big tech; if everybody’s eyes are on big tech, it means no one sees what we’re doing over here, so organizations risk-manage their decisions in relation to privacy, possibly waiting for a data breach before taking action on anything. The second is that there tends to be an avoidance of zero-trust approaches to personal information or personal data management of the kind that Q-Privacy is deploying, along with low budgets, so there tends to be more of a focus on third-party risk assessment tools and standard legal contracts, treating those as sufficient. And finally, most decision-makers in the privacy domain tend to sit in the legal space; we tend to see legal teams or corporate services teams dealing with privacy issues for their governments or organizations, and they have a less technical focus. So the lack of privacy engineers, or of people who understand how privacy-enhancing technologies work, is a barrier to uptake. And with that, because I know we want to have at least 15 minutes for questions, I will end my discussion here. Again, thank you to all of you and to the room for attending today.
Christian Reimsbach Kounatze :
Thank you. Thank you very much, Nicole. I think you pointed out a number of questions that I would like us to discuss. I just wanted to invite the audience in the room, as well as online, to feel free to raise questions. I have a couple of them myself, so I will take my privilege as moderator to ask a few. One is definitely the question about adoption that was raised: if we all agree that these technologies are great, why is it that not everyone is using them? Some of these technologies have been around for a long time, so how come they still seem to be something exotic that needs to be discussed at the IGF? That would be my number one question. And another one, if I may, because it is the one that strikes me most from the discussion: everyone seems to agree that automation is great and that it is needed to scale with the problem, but at the same time everyone seems to be saying, or at least I heard this multiple times, that humans should not be replaced, that there should be a role for humans kept in the process. If you could elaborate on that, because I think that is something that some people may, for different reasons, try to forget or ignore. So I will let you intervene. We start maybe with Clara and keep the order of interventions; you can address some of these points and put the emphasis where you wish. And maybe, sorry, before you do, I just wanted to acknowledge and express my appreciation that some of you are joining from very early in the morning, in particular you, Suchakra, from Canada. This is very much appreciated.
Clara Clark Nevola:
Well, it’s starting to get light, you can see in the background; a nearly normal morning now. I think I’ll take your first question, about why we do not see these technologies ingrained yet. It’s something we’re working on at the ICO; it’s our next step after the privacy-enhancing technologies guidance. And basically our explanation is that the organisations that would most benefit from privacy-enhancing technologies do not yet know that they exist. There is real interest in them within the expert community, and that’s where the use cases currently are. Is my sound OK? Your sound is OK. The video is freezing a little bit, but we can hear you well. OK, thank you. So basically the lower-tech organisations are not yet aware of these technologies, and one of the things we’re working on is how we can bring people who are more expert in PETs, so PET developers, academics, organisations that are more technically minded and have already implemented them, together with more traditional organisations, local government, health bodies, to really understand why you would use a PET. So that’s my explanation for question one, and I’ll hand over.
Udbhav Tiwari:
Okay. So, on the point of question two and why humans are important, it’s not just a question of automation. There is a very real risk, which we also often discuss within Mozilla, that privacy-enhancing technologies may lead people to start collecting even more data than they already do, because it becomes so easy to collect and many of the associated risks no longer exist. Independent of the technology being deployed, whether you’re using a tool to check code or trying to make sure data leaks don’t take place, I think it’s really important for organizations to first ask: should this kind of data be collected in the first place? Is there a real use for it? What will it be used for? Is it worth the risk of what may happen if this data ends up leaking? That matters as much as the resources and tools they invest in to protect that information. For me, at least, that’s the primary reason why human beings are important, because the decisions about what to collect are obviously made by human beings. If you are collecting more information than you need and it ends up leaking, rather than investing in tooling to prevent that from happening, maybe you should reconsider whether the data should have been collected in the first place. I think it’s been a very enlightening conversation also because there are two parts. One is privacy-enhancing technologies once the data exists within an organization, but there is also the piece of privacy-enhancing technologies that allow you to collect data without the parts that actually make it identifiable. For example, both of the things I mentioned earlier, Oblivious HTTP and DAP, allow you to collect information in a manner that is aggregated and equally useful, but with almost zero consequence if that entire dataset ends up in the real world, because it has been collected in a manner where unique identifiers no longer correspond to the people who actually generated them. So I think that’s also an interesting point to keep in mind for the first question. Thank you. Max, if you’d like to say a few words.
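A conceptual sketch of the “separate the who from the what” idea Udbhav refers to is given below: a relay sees who is sending but forwards only an opaque payload, while the collector sees the measurements but never the sender, and only releases aggregates. The class names and the minimum-count rule are assumptions for illustration; real deployments follow the Oblivious HTTP and DAP specifications, which additionally encrypt the payload so the relay cannot read it.

```python
# Conceptual sketch of separating "who" from "what"; the encryption layer of
# the real protocols is omitted here.
from collections import Counter

class Relay:
    """Sees who is sending, never what is inside (payloads treated as opaque)."""
    def __init__(self, collector):
        self.collector = collector
    def forward(self, client_ip, opaque_payload):
        # Drop the identifying metadata; only the payload moves on.
        self.collector.ingest(opaque_payload)

class Collector:
    """Sees the measurements, never the sender; only aggregates are reported."""
    def __init__(self):
        self.counts = Counter()
    def ingest(self, payload):
        self.counts[payload.decode()] += 1
    def report(self, min_count=3):
        return {k: v for k, v in self.counts.items() if v >= min_count}

# Usage: four clients submit crash metrics; only the common value is released.
collector = Collector()
relay = Relay(collector)
for ip, metric in [("203.0.113.5", "crash:startup"), ("198.51.100.7", "crash:startup"),
                   ("192.0.2.9", "crash:startup"), ("203.0.113.8", "crash:gpu")]:
    relay.forward(ip, metric.encode())
print(collector.report())   # {'crash:startup': 3}; the rare value is withheld
```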
Nicole Stephensen:
I find privacy-enhancing technology can be quite impenetrable for organizations and governments, for folks who are not technical, who are not engineers, who may not even be policy people with an awareness of what privacy-enhancing technologies do. Finding a way to capture what they are in plain language, almost like a sales pamphlet: these are the types of privacy-enhancing technologies that are out there, this is what they look like, and this is how they can be deployed within an organization or government. That type of stepped approach, I think, would be really, really useful, particularly in jurisdictions like this one.
Christian Reimsbach Kounatze :
Thank you very much to all of you for being here in person and online, at incredible hours. I took note of the different suggestions, and what is also great is that with this event we have been able to extend the understanding of what PETs, or what the role of digital technologies, could be, beyond the almost traditional technologies of today to something much, much broader. And with that, thank you very much, and we look forward to continuing the conversation. Thank you. Bye. It was a good discussion. Yes, definitely. Some of the others, especially the Privado presentation, covered things I had heard about, but I didn’t know it had advanced to some of the things they are attempting to do. I mean, for me, if I were to deploy that software in a company, I would probably say that it is itself a huge privacy risk, because it has to have access to everything. Yes, that’s an interesting point. I had a separate conversation with him about that, and obviously one of the challenges is that this cannot be run by a third party unless you really put trust mechanisms in place, because privacy is only one of the problems; I could imagine it being the smallest problem in the company. Tristan, your mic is still live, just to let you know.
Speakers
Christian Reimsbach Kounatze
Speech speed
169 words per minute
Speech length
2384 words
Speech time
844 secs
Arguments
Technologies, especially in the realm of privacy, can help in addressing problems by providing solutions that scale with the scale of the problem
Supporting facts:
- Maximilian Schrems discussed how efficiency and quality could be improved through the implementation of systems that work well
- These systems allow for getting rid of trivial work, improving quality through the use of proven templates, and also in choosing the right cases to work on
Topics: Technology, Privacy, Scale, Problem-solving
Need to understand the reasons for low adoption of privacy-enhancing technologies despite their benefits
Supporting facts:
- Some of these technologies have been around for a long time, but they seem to be something exotic that needs discussion at the IGF
- Certain barriers, such as low budgets, less technical focus in decision-making teams, and low priority given to smaller organizations could be impeding the widespread adoption of these technologies
Topics: privacy-enhancing technologies, adoption
Importance of maintaining human roles while using automated tools
Supporting facts:
- Speakers agreed that automation is great and needed to scale the problem of privacy management, but humans should not be replaced
- There needs to be a balance where automation scales up efforts while human involvement is retained to manage and make decisions
Topics: automation, human roles, technology
Report
In the realm of technology and privacy, it has been established that these two areas can provide scalable solutions to effectively address problems. Maximilian Schrems, a prominent figure in this field, emphasizes the advantages of implementing efficient systems that can eliminate trivial work and enhance the overall quality of work.
By using proven templates and carefully selecting cases to work on, these systems greatly improve efficiency and productivity. Privacy tools, in particular, are seen as indispensable in supporting the work of agencies involved in data protection. These tools enable agencies to effectively navigate the complex landscape of privacy management.
However, barriers hinder the widespread adoption of privacy-enhancing technologies. Factors such as low budgets, a lack of technical focus in decision-making teams, and the prioritization of larger organizations impede the adoption and implementation of these technologies. Addressing these issues is crucial to fully benefitting from the advantages offered by privacy-enhancing technologies.
Automation is widely regarded as a crucial component in privacy management. It allows for scaling efforts and addressing the challenges posed by the ever-increasing scale of privacy concerns. However, human involvement should not be replaced entirely. Speakers agree that a balance must be struck between automation and human decision-making.
While automation can streamline processes, human oversight and decision-making play an integral role in ensuring ethical and responsible practices. Striking this balance is key to realizing the full potential of automation in privacy management. In conclusion, the speakers at the event highlighted the significant role that technology, privacy tools, and human involvement play in addressing problems and supporting the work of agencies in the realm of privacy and data protection.
Scalable solutions, efficient systems, and the adoption of privacy-enhancing technologies are essential in tackling the challenges at hand. While automation is critical, it should not replace the human touch. By acknowledging these factors and working towards effective implementation, privacy can be ensured in an increasingly digital world.
Clara Clark Nevola
Speech speed
178 words per minute
Speech length
2146 words
Speech time
722 secs
Arguments
Privacy enhancing technologies are a tool to enable data sharing
Supporting facts:
- The Information Commissioner’s Office in the UK has released guidelines detailing how privacy enhancing tech can support data minimization, data security, and data protection by design and by default
Topics: Data Protection, Data Sharing, Privacy Enhancing Technologies
Data sharing has challenges beyond just data protection laws, including reputational, commercial, and organizational barriers
Supporting facts:
- Data sharing often involves multiple organisations, and can be for reciprocal purposes, for publication, or for the aggregation of information from multiple sources into a single database
Topics: Data Protection, Data Sharing, Data Governance
Still challenges to overcome around the maturity, expertise, and cost associated with privacy enhancing technologies
Supporting facts:
- While there are some good examples of these technologies being used in practice, technical standards still need to be developed, costs can be high, and there is a need to bring the expertise to the people who require these technologies
Topics: Data Protection, Privacy Enhancing Technologies
Privacy enhancing technologies have become crucial to protect privacy in the digital era
Supporting facts:
- Privacy enhancing technologies are typically the first approach when thinking about the role of digital technologies in privacy protection
- Recent developments have introduced new types of privacy enhancing technologies
Topics: Digital Technologies, Privacy Enhancing Technologies
Organisations most likely to benefit from Privacy Enhancing Technologies(PETs) don’t know about their existence.
Supporting facts:
- The use case for PETs is currently limited within the expert communities where they’re known and understood.
- Lower-tech organisations that would benefit greatly from PETs use are not aware of these technologies.
Topics: Privacy Enhancing Technologies, awareness, adoption
There’s need to bring experts in PETs together with traditional organisations.
Supporting facts:
- In order to elevate awareness and understanding of PETs, there need to be partnerships and collaborations between PET developers, academics, technically minded organisations and traditional organisations like local governments and health bodies
Topics: Privacy Enhancing Technologies, health bodies, local government
Report
Privacy enhancing technologies (PETs) are becoming increasingly important in today’s digital era as they enable data sharing while protecting privacy. The Information Commissioner’s Office (ICO) in the UK has recognised the significance of PETs and has released guidelines that outline how these technologies can support data minimisation, security, and protection.
The ICO’s guidelines highlight the role that PETs play in achieving data minimisation, which refers to the practice of only collecting and retaining the minimum amount of personal data necessary for a specific purpose. By implementing PETs, organisations can ensure that they are processing and sharing data only to the extent required, thereby reducing the risk of potential breaches or misuse.
Furthermore, PETs contribute to data security, addressing concerns about the potential vulnerability of shared data. Different types of PETs, such as homomorphic encryption, secure multi-party computation, and zero-knowledge proofs, offer various solutions for securing data in different sharing scenarios. Homomorphic encryption allows computations to be done on encrypted data without having to decrypt it, while secure multi-party computation enables multiple parties to perform a computation on their data without revealing any sensitive information.
Zero-knowledge proofs allow the verification of a claim without revealing the supporting data. These technologies can help protect data integrity while allowing for collaboration and data sharing. Anonymisation or de-identification is another key aspect of PETs. By applying these techniques, organisations can remove or alter personal identifiers, making it more difficult to link shared data to specific individuals.
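As a toy illustration of the secure multi-party computation idea summarised above, the following sketch lets three parties learn the sum of their inputs without revealing them, using additive secret sharing over a prime field. The inputs and modulus are assumptions for the example; production systems rely on vetted protocol libraries rather than hand-rolled sharing.

```python
# Toy illustration of secure multi-party computation via additive secret sharing.
import random

PRIME = 2_147_483_647  # field modulus (assumed large enough for the toy inputs)

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

inputs = {"hospital_a": 120, "hospital_b": 75, "hospital_c": 240}   # e.g. case counts
n = len(inputs)

# Each party shares its input; it keeps one share and sends the others out.
all_shares = {name: share(value, n) for name, value in inputs.items()}

# Each party locally sums the shares it holds (one column each) ...
partial_sums = [sum(all_shares[name][i] for name in inputs) % PRIME for i in range(n)]

# ... and only these partial sums are combined, revealing the total alone.
total = sum(partial_sums) % PRIME
print(total)   # 435, with no party having seen another party's raw count
```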
This helps to protect privacy while still allowing for data analysis and research. Despite the clear benefits of PETs, challenges remain. Technical standards for PETs need to be developed to ensure interoperability and ease of implementation. Additionally, the costs associated with implementing PETs can be high, posing a barrier to adoption for some organisations.
Awareness and understanding of PETs also need to be improved, particularly among lower-tech organisations that could greatly benefit from them. Data sharing itself poses challenges beyond legal considerations. Organisational and business barriers, such as concerns about reputation and commercial interests, can hinder data sharing efforts.
Stakeholders often express reluctance to share their data due to uncertainties about how it will be used or what the outcomes may be. To overcome these challenges, the ICO advocates for partnerships and collaborations between PET developers, academics, and traditional organisations like local governments and health bodies.
By bringing together experts from different fields, these partnerships can elevate awareness and understanding of PETs and facilitate their adoption by traditional organisations. In conclusion, privacy enhancing technologies are crucial tools for enabling data sharing and protecting privacy in the digital era.
The ICO’s guidelines demonstrate how PETs can support data minimisation, security, and protection. While challenges exist in terms of technical standards, costs, and awareness, partnerships between PET developers and traditional organisations can help overcome these obstacles. By promoting the adoption of PETs, organisations can achieve a balance between data sharing and privacy protection, fostering innovation and collaboration while safeguarding individuals’ personal information.
Maximilian Schrems
Speech speed
218 words per minute
Speech length
2595 words
Speech time
713 secs
Arguments
Enforcing the adoption of technology to speed up GDPR compliance
Supporting facts:
- noyb developed a system for auto-generating complaints about GDPR compliance and managing them.
- An auto-scan checks websites for violations, followed by manual verification.
- The system auto-generates a complaint, which gets transferred to the violating company.
- A platform is used for companies to provide feedback and declare compliance.
- This system achieved a 42% compliance rate by proactively sending guidelines to companies.
- Improved compliance was observed even from companies that were not directly intervened with (domino effect).
Topics: GDPR, OneTrust, Data privacy, Automation
Feedback from data protection authorities included fear and high interest
Supporting facts:
- Authorities fear new technology and change but recognize potential for efficiency
Topics: Data Protection, Machine Learning, Efficiency
Adopting new technology can boost efficiency and quality
Supporting facts:
- Efficiency can be increased by getting rid of trivial tasks
- Increased quality by using a well-proven template
Topics: Data Protection, Technology Adoption, Efficiency, Quality
Report
noyb, an organisation, has developed a system that automates the generation and management of complaints about General Data Protection Regulation (GDPR) compliance. This system has proven to be effective in achieving a 42% compliance rate by proactively sending guidelines to companies.
The system operates by performing an auto-scan of websites to identify potential GDPR violations, which is then followed by manual verification. Once a violation is detected, the system auto-generates a complaint, which is then transferred to the violating company for action.
Additionally, a platform is used for companies to provide feedback and declare their compliance. Interestingly, the system has observed a domino effect, wherein even companies that were not directly intervened with have shown improved compliance. This suggests that the awareness and actions taken by some companies have influenced others in the industry to improve their GDPR compliance as well.
Data protection authorities recognise the potential for efficiency that new technologies can bring, but they also express concerns and high levels of interest. They acknowledge that utilising new technologies, such as the automated GDPR compliance system, can increase efficiency by eliminating trivial tasks and increasing the quality of work through the use of well-proven templates.
However, implementing new technology poses certain challenges. The adoption of new technology requires technical infrastructures, such as programmers, to support its implementation. Additionally, a culture shift is necessary for organisations to focus on specific tasks related to the new technology and adapt to the changes it brings.
In conclusion, NOIP’s automated system for GDPR compliance has achieved a significant compliance rate and has demonstrated the potential for technology to enforce and improve GDPR compliance in a more efficient manner. While there are challenges associated with implementing new technology, the benefits of increased efficiency and quality are substantial.
It is noteworthy that the system has also influenced compliance improvement among companies that were not directly addressed, highlighting its positive impact on the industry as a whole.
Nicole Stephensen
Speech speed
176 words per minute
Speech length
1786 words
Speech time
609 secs
Arguments
Privacy-enhancing technology should not replace good decision-making at the outset.
Supporting facts:
- Governments and organizations have a positive duty to ensure that their information practices accord with relevant privacy and data protection laws and community expectations.
- Strategic privacy risk management often relates to identifying and mitigating risk around decisions that have already been taken.
Topics: Privacy-enhancing technology, Decision making, Data protection
Organizations tend to struggle with risk identification and mitigation particularly where large volumes of data or complex vendor relationships are involved.
Supporting facts:
- Data leakage often happens without awareness of the organization and this qualifies as a personal data breach.
Topics: Risk management, Data Management
Privacy enhancing technologies have utility in controlling for data leakage through data accountability tools.
Supporting facts:
- Q-Privacy is an example of a tool that allows organizations to audit for data leakage and to enforce rules about data usage.
Topics: Privacy enhancing technologies, Data leakage, Data accountability
Report
The analysis explores different perspectives on privacy-enhancing technology and data protection. One argument presented is that privacy-enhancing technology should not replace good decision-making. It is emphasised that governments and organizations have a positive duty to ensure that their information practices accord with relevant privacy and data protection laws and community expectations.
This suggests that while privacy-enhancing technology can be beneficial, it should not be solely relied upon to make ethical and responsible decisions regarding data privacy. Another argument highlighted is the struggle faced by organizations in identifying and mitigating risks, particularly when dealing with large volumes of data or complex vendor relationships.
Data leakage is mentioned as a common occurrence that often happens without the organization’s awareness, and it qualifies as a personal data breach. This indicates that organizations may face challenges in effectively managing and protecting data, especially in situations involving extensive data sets or intricate vendor arrangements.
However, the analysis also acknowledges the utility of privacy-enhancing technologies in controlling data leakage. Specifically, the example of Q-Privacy is provided as a tool that allows organizations to audit for data leakage and enforce rules about data usage. This suggests that privacy-enhancing technologies, particularly those focused on data accountability, can play a valuable role in preventing and controlling data leakage incidents.
Furthermore, the importance of prioritizing purpose specification and collection minimization in data protection practices is highlighted. The argument put forward states that these are the building blocks for a culture that limits the use and disclosure of personal data as much as possible.
This implies that organizations should be cautious in collecting only necessary data and clearly defining the purposes for which it will be used. By doing so, they can actively contribute to a privacy-conscious environment. Lastly, the analysis identifies several barriers to the implementation of privacy-enhancing technologies.
These include the privacy maturity of the technology suppliers, their geographical location, and the budget of the organization. Additionally, it is noted that decision makers in the privacy domain tend to be more in the legal space and have a less technical focus, which could also be a barrier for adoption.
This suggests that a multifaceted approach is necessary to address these barriers and promote the effective adoption and integration of privacy-enhancing technologies. In conclusion, the analysis provides an overview of various perspectives on privacy-enhancing technology and data protection. It emphasizes the importance of good decision-making, compliance with privacy laws and community expectations, risk identification and mitigation, data accountability tools, purpose specification, and collection minimization in ensuring effective data protection practices.
Moreover, the analysis sheds light on the challenges and barriers associated with the implementation of privacy-enhancing technologies, highlighting the need for a comprehensive approach to overcome these obstacles.
Suchakra Sharma
Speech speed
188 words per minute
Speech length
1845 words
Speech time
589 secs
Arguments
Privacy Enhancement Technologies (PETs) should be considered from the software perspective
Supporting facts:
- Data is moved from one system to another by software.
- Looking at the software handling the data can give an understanding of developers’ intention and predict potential privacy violations.
- Privado is developing a tool to evaluate how software handles data
Topics: Data Privacy, Software Development, PETs
Technically verifiable Privacy Impact Assessments (PIAs) can guarantee proactive privacy
Supporting facts:
- The information necessary for PIAs are already available during software development.
- This process can ensure privacy regulations are adhered to from the design to deployment.
- A tool has been built to perform these verifiable PIAs that identifies potential privacy violations.
Topics: Data Privacy, Software Development, PIAs
Certifying software for privacy can become a possibility
Supporting facts:
- The data processing and handling intentions of a software can be evaluated.
- This allows for privacy compliance checks before software deployment.
- Regulatory laws like GDPR, CCPA can be translated into fine-grained checks and tests for compliance.
Topics: Data privacy, Software Certification, Regulations
Report
The speakers in the discussion present different perspectives on privacy in software development. One speaker argues in favour of considering Privacy Enhancement Technologies (PETs) from the software perspective. This involves examining how software handles data, as it can provide insights into developers’ intentions and identify potential privacy violations.
The speaker highlights the importance of evaluating the software in order to predict and prevent privacy breaches. As a solution, Privado is developing a tool that can assess how software handles data. On the other hand, another speaker focuses on the significance of technically verifiable Privacy Impact Assessments (PIAs) in ensuring proactive privacy.
They note that during software development, the necessary information for PIAs is already available. By incorporating PIAs into the development process, privacy regulations can be adhered to right from the design phase to deployment. To facilitate this, a tool has been built to perform verifiable PIAs, identifying potential privacy violations in advance.
This approach is seen as a way to guarantee proactive privacy. Finally, the speaker explores the possibility of certifying software for privacy compliance, highlighting the importance of evaluating the data processing and handling intentions of software. By doing so, privacy compliance checks can be conducted before the software is deployed.
He suggests that regulations such as the GDPR and CCPA can be translated into fine-grained checks and tests for compliance (a hypothetical example of such a check appears at the end of this section), and that this certification process could help ensure privacy in software development. In conclusion, the speaker emphasizes the need to evaluate how software handles data and to ensure compliance with privacy regulations throughout the entire software development lifecycle.
By considering PETs, performing verifiable PIAs, and certifying software for privacy compliance, proactive measures can be taken to protect privacy. These perspectives highlight the increasing importance of addressing privacy concerns in the software development process.
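As mentioned above, regulatory requirements could in principle be expressed as fine-grained, repeatable checks. The hedged Python sketch below shows one way this might look: a GDPR-style restriction on special-category data is encoded as an automated assertion that could run before deployment. The categories and sink names are invented for illustration and are not taken from the session.

```python
# Hypothetical example of turning a regulatory requirement into a concrete,
# repeatable check. The rule loosely mirrors GDPR-style restrictions on
# special-category data; element names and sink categories are invented.

SPECIAL_CATEGORY = {"health_record", "biometric_template", "religion"}
EXTERNAL_SINKS = {"third_party_analytics", "ad_network"}

def check_special_category_export(flows):
    """Fail if any special-category data element reaches an external sink."""
    violations = [(d, s) for d, s in flows
                  if d in SPECIAL_CATEGORY and s in EXTERNAL_SINKS]
    assert not violations, f"Special-category data sent externally: {violations}"

# Could run as part of CI before deployment.
check_special_category_export([
    ("health_record", "internal_db"),
    ("email", "third_party_analytics"),
])
print("special-category export check passed")
```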
Udbhav Tiwari
Speech speed
202 words per minute
Speech length
2672 words
Speech time
795 secs
Arguments
Mozilla is a unique organization that does not have the typical incentives that apply in the technology sector
Supporting facts:
- Mozilla Corporation is owned by Mozilla Foundation
- There’s no shareholder pressure or drive for profits
Topics: Mozilla Foundation, Technology sector, Data Collection
Mozilla is exploring ways to use privacy-preserving technologies such as DAP and Oblivious HTTP (OHTTP) to collect telemetry information
Supporting facts:
- Apple’s Private Relay service uses similar technology to separate data’s origin from its substance
- Few service providers currently offer the infrastructure necessary for utilizing DAP and OHTTP
Topics: Privacy-preserving technologies, DAP, Oblivious HTTP, Mozilla, Telemetry information
Mozilla introduced ‘Total Cookie Protection’, which segregates cookies and other identifiers dropped by each website the user visits
Supporting facts:
- TCP creates separate ‘jars’ for cookies from different websites, which cannot communicate with each other
- The tool uses heuristics to discern useful third-party cookies from advertising ones
Topics: Mozilla, Total Cookie Protection, User privacy
The usage of privacy enhancing technologies (PETs) may cause increased data collection
Supporting facts:
- PETs may make it easier and less risky to collect data, which can lead to an increase in overall data collection
Topics: Privacy Enhancing Technologies, Data Collection, Data Protection
The decision of what data to collect is crucial and should involve human input
Supporting facts:
- Deciding what data to collect is a human decision, and sometimes choosing not to collect certain data can be the safest option
Topics: Data Collection, Decision Making, Human Input
The need for Privacy Enhancing Technologies (PETs) that enable data collection without identifiable information
Supporting facts:
- PETs can be developed to collect data without identifiable information, which helps ensure privacy and minimizes the consequences if data leakage occurs
Topics: Privacy Enhancing Technologies, Data Collection, Data Anonymization
Report
Mozilla Corporation, owned by Mozilla Foundation, is a unique organization in the technology sector. It operates without the typical incentives for profit maximization and prioritizes user welfare and the public interest. While initially having a strong policy against data collection, Mozilla had to make changes due to limitations in product development.
They have since explored privacy-preserving ways of collecting information, separating the “who” from the “what” to protect user privacy. Privacy-preserving technologies have become increasingly feasible with the proliferation of internet availability, bandwidth, and computational power. Privacy has emerged as a key differentiating factor for products, leading to increased investment in privacy-focused solutions.
Mozilla has taken a critical stance on Google’s Chrome Privacy Sandbox set of technologies, acknowledging improvements but asserting the need for technical validation. They are also exploring the use of Privacy-Preserving Technologies (PETs) like Decentralized Ad Delivery (DAP) and Oblivious HTTP (OHTP) for telemetry information collection.
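For readers unfamiliar with how such technologies separate the "who" from the "what", the conceptual Python sketch below illustrates the idea behind Oblivious HTTP: a relay sees only who is sending, while the gateway sees only what is sent. Real OHTTP encrypts requests to the gateway's public key with HPKE; the pre-shared Fernet key and all names here are stand-ins chosen only to keep the example self-contained.

```python
# Conceptual sketch of the "separate the who from the what" idea behind
# Oblivious HTTP. Real OHTTP uses HPKE public-key encryption; a pre-shared
# symmetric key stands in here (pip install cryptography). Names are invented.

from cryptography.fernet import Fernet

gateway_key = Fernet.generate_key()        # in reality: the gateway's key pair
client_cipher = Fernet(gateway_key)

def client_send(telemetry: bytes) -> bytes:
    # The client encrypts the payload; the relay never sees the plaintext.
    return client_cipher.encrypt(telemetry)

def relay_forward(client_ip: str, blob: bytes) -> bytes:
    # The relay knows WHO is sending (the IP) but not WHAT is inside.
    print(f"relay: forwarding {len(blob)} opaque bytes from {client_ip}")
    return blob

def gateway_receive(blob: bytes) -> bytes:
    # The gateway learns WHAT was sent but never sees the client's address.
    return Fernet(gateway_key).decrypt(blob)

payload = client_send(b'{"crashes": 2}')
print(gateway_receive(relay_forward("203.0.113.7", payload)))
```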
While recognizing the value of advertising to support internet publishers, Mozilla deems the current state of the advertising ecosystem unsustainable. They have introduced features like Firefox’s “Total Cookie Protection” to enhance user privacy while still allowing essential functionality. Mozilla has raised concerns about Google’s Privacy Sandbox standards potentially becoming the de facto norms, with the potential to impact privacy and competition.
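A toy model can make the "cookie jar" idea concrete: cookies are partitioned by the top-level site under which they were set, so a tracker embedded on two different sites ends up with two unrelated jars. The sketch below is illustrative only and does not reflect Firefox's actual implementation; the class and site names are hypothetical.

```python
# Toy model of the "cookie jar" idea behind Total Cookie Protection:
# third-party cookies are partitioned by the top-level site that was open
# when they were set. Illustration only, not Firefox's implementation.

from collections import defaultdict

class PartitionedCookieStore:
    def __init__(self):
        # jars[(top_level_site, cookie_domain)] -> {name: value}
        self.jars = defaultdict(dict)

    def set_cookie(self, top_level_site, cookie_domain, name, value):
        self.jars[(top_level_site, cookie_domain)][name] = value

    def get_cookies(self, top_level_site, cookie_domain):
        return dict(self.jars[(top_level_site, cookie_domain)])

store = PartitionedCookieStore()
store.set_cookie("news.example", "tracker.example", "uid", "abc123")
# The same tracker embedded on another site sees an empty, separate jar.
print(store.get_cookies("shop.example", "tracker.example"))   # {}
```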
They advocate for responsible implementation of PETs to strike a balance between privacy and data collection. Human involvement in data collection decisions is crucial to consider the risks to user privacy. Mozilla emphasizes the importance of accountability and responsible practices.
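One classic illustration of collecting a useful statistic without identifiable information is randomized response, sketched below in Python. It is shown purely as an example of the principle of separating aggregate insight from individual data and is not the specific mechanism discussed in the session.

```python
# Randomized response: each client flips coins before answering, so no single
# report is trustworthy, yet the aggregate can be corrected for the noise.
# Illustrative only; not the specific mechanism discussed in the session.

import random

def randomized_report(truth: bool, p_honest: float = 0.5) -> bool:
    """With probability p_honest answer truthfully, otherwise answer at random."""
    if random.random() < p_honest:
        return truth
    return random.random() < 0.5

def estimate_rate(reports, p_honest: float = 0.5) -> float:
    """Invert the added noise to estimate the true proportion of 'yes' answers."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_honest) * 0.5) / p_honest

# 10,000 simulated clients, 30% of whom truly have the attribute.
truths = [random.random() < 0.3 for _ in range(10_000)]
reports = [randomized_report(t) for t in truths]
print(round(estimate_rate(reports), 3))   # close to 0.3
```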
In summary, Mozilla Corporation distinguishes itself in the technology sector with its focus on user welfare and the public interest. They actively explore privacy-preserving technologies, criticize Google’s Privacy Sandbox, and advocate for responsible data collection practices. Through their efforts, Mozilla aims to foster a more privacy-protective and user-centered tech industry.
Wojciech Wiewiórowski
Speech speed
141 words per minute
Speech length
1119 words
Speech time
476 secs
Arguments
The European Data Protection Supervisor prioritizes tools in its mandate as a way to operationalize privacy laws
Supporting facts:
- The European Data Protection Supervisor serves as the supervisor for European Union (EU) institutions, bodies, and agencies
- They advise in the legislative process in the European Union
- Their goal is to provide a safer digital future by promoting IT architects and a comprehensive privacy engineering approach
Topics: The European Data Protection Supervisor, Data privacy, Privacy-enhancing tools
Developing tools such as trusted execution environments and trusted smart surveys can help ensure privacy
Supporting facts:
- These tools were developed by Eurostat
- They have been provided in the guide on privacy enhancing technologies for official statistics by the United Nations
Topics: Eurostat, Privacy-preserving tools, Trusted Execution Environment, Trusted Smart Surveys
Report
The European Data Protection Supervisor (EDPS) plays an essential role in safeguarding privacy within the European Union (EU). Their key priority is the effective implementation of privacy laws through the use of tools. The EDPS serves as a supervisor for EU institutions and offers advice during the legislative process, ensuring that privacy concerns are integrated into decision-making.
Their ultimate goal is to promote a safer digital future by advocating for the use of IT architects and a comprehensive privacy engineering approach. In line with the EDPS’s efforts, Wojciech Wiewiórowski, a prominent figure in the field, acknowledges and supports the work of non-governmental organizations (NGOs) in enforcing privacy policies.
He recognizes the vital role that NGOs play and suggests that their work should have been undertaken by data protection commissions much earlier. This recognition highlights the importance of collaboration between regulatory bodies and NGOs in effectively safeguarding individuals’ privacy rights.
Furthermore, Eurostat, the statistical office of the European Union, has developed privacy-preserving tools such as trusted execution environments and trusted smart surveys. These tools aim to ensure privacy in the production of official statistics. The United Nations has included them in its guide on privacy-enhancing technologies for official statistics, further validating their importance and effectiveness in maintaining data privacy.
Overall, the European Data Protection Supervisor, Wojciech Wiewiórowski, and Eurostat are actively working to uphold privacy rights and create a safer digital environment. Their focus on utilizing tools and collaborating with NGOs demonstrates their commitment to establishing a robust framework for data protection.
Embracing these initiatives provides individuals with greater confidence in the privacy of their personal information.