By Feodora Hamza
OpenAI’s ChatGPT has gained widespread attention for its ability to generate human-like text in response to prompts. However, after months of celebration for OpenAI and ChatGPT, the company is now facing action from several European data protection authorities who believe it has scraped people’s personal data without their consent. The Italian Data Protection Authority temporarily blocked the use of ChatGPT as a precautionary measure, while French, German, Irish, and Canadian data regulators are also investigating how OpenAI collects and uses data. In addition, the European Data Protection Board has set up an EU-wide task force to coordinate investigations and enforcement concerning ChatGPT. These moves have sparked a heated debate about AI language models and raised important ethical and regulatory questions, particularly around data protection and privacy.
Concerns around GDPR compliance: How can generative AI comply with data protection rules such as GDPR?
According to Italian authorities, OpenAI’s disclosure regarding its collection of user data during the post-training phase of its system, specifically chat logs of interactions with ChatGPT, is not entirely transparent. This raises concerns about compliance with General Data Protection Regulation (GDPR) provisions that aim to safeguard the privacy and personal data of EU citizens, such as the principles of transparency, purpose limitation, data minimisation, and data subject rights.
As conditions for lifting the ban it imposed on ChatGPT, Italy outlined the steps OpenAI must take. The company must either obtain user consent for data scraping or demonstrate a legitimate interest in collecting the data, which can be established when a company processes personal data within a client relationship, for direct marketing purposes, to prevent fraudulent activities, or to safeguard the network and information security of its IT systems. In addition, OpenAI must explain to users how ChatGPT uses their data and offer them the option to have their data erased, or to refuse permission for the program to use it.
The company’s choice to offer an opt-out feature comes amid mounting pressure from European data protection regulators concerning the firm’s data collection and usage practices. Italy has demanded OpenAI’s compliance with the GDPR by April 30. In response, OpenAI implemented a user opt-out form and the ability to object to personal data being used in ChatGPT, allowing Italy to restore access to the platform in the country. This move is a positive step towards empowering individuals to manage their data.
Challenges in deleting inaccurate or unwanted information from AI systems remain
However, deleting inaccurate or unwanted information from AI systems in compliance with the GDPR is far harder. Although some companies have been ordered to delete algorithms built on unauthorised data, removing all personal data used to train a model remains difficult. The problem is that machine learning models often have complex, black-box architectures that make it hard to trace how any given data point, or set of data points, is being used. As a result, excluding specific data usually means retraining the model on a smaller dataset, which is time-consuming and costly for companies.
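To make the retraining problem concrete, here is a minimal, hypothetical sketch (the record layout and `subject_id` field are assumptions, not OpenAI's actual pipeline): honouring an erasure request means filtering the training corpus and then retraining from scratch, because a trained model's weights cannot simply be edited to forget one person.

```python
# Hypothetical sketch: a GDPR erasure request handled at the dataset level.
# The model itself cannot "unlearn" a person; the corpus is filtered and
# the model retrained on what remains -- the costly step described above.

def filter_corpus(corpus, erasure_requests):
    """Drop every record tied to a data subject who requested erasure."""
    blocked = set(erasure_requests)
    return [record for record in corpus if record["subject_id"] not in blocked]

corpus = [
    {"subject_id": "u1", "text": "chat log A"},
    {"subject_id": "u2", "text": "chat log B"},
    {"subject_id": "u3", "text": "chat log C"},
]

# User "u2" asks for their data to be erased.
cleaned = filter_corpus(corpus, ["u2"])
# `cleaned` now holds only u1 and u3; retraining on it is the expensive part.
```

The filtering itself is trivial; the cost the article points to lies entirely in the retraining run that must follow.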
Data protection experts argue that OpenAI could have saved itself a lot of trouble by building in robust data record-keeping from the start. Instead, it is common in the AI industry to build datasets for AI models by scraping the web indiscriminately and then outsourcing the work of removing duplicates or irrelevant data points, filtering out unwanted content, and fixing typos. The dominant paradigm in AI development is that more training data is better: OpenAI’s GPT-3 model was trained on a massive 570 GB of data. These methods, and the sheer size of the datasets, mean that tech companies often lack a full understanding of what has gone into training their models.
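The record-keeping the experts describe can be sketched in a few lines. This is a simplified illustration, not any company's actual system: each scraped document is stored with its source and collection date at ingestion time, so a later question such as "what data about this person went into the model?" can be answered from the records rather than by reverse-engineering the model.

```python
# Hypothetical sketch of provenance-aware ingestion: every scraped document
# is stored under a content hash together with where and when it was
# collected, and exact duplicates are skipped at the door.
import hashlib
from datetime import date

def ingest(store, text, source_url):
    """Record a scraped document with provenance; return False on duplicates."""
    doc_id = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if doc_id in store:  # deduplication by content hash
        return False
    store[doc_id] = {
        "text": text,
        "source": source_url,
        "collected": date.today().isoformat(),
    }
    return True

store = {}
ingest(store, "example page", "https://example.com/a")
ingest(store, "example page", "https://example.com/b")  # same text: skipped
```

Keeping such records from day one is what would let a company answer transparency and erasure requests cheaply, instead of confronting an opaque, already-trained model.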
While many criticise the GDPR as cumbersome and as hampering innovation, experts argue that the legislation serves as a model for pushing companies to improve their practices when they are compelled to comply with it. It is presently one of the only means individuals have to exercise any control over their digital lives and data in a world that is becoming progressively automated.
The impact on the future of generative AI: The need for ongoing dialogue and collaboration between AI developers, users, and regulators
This highlights the need for ongoing dialogue and collaboration between AI developers, users, and regulators to ensure that the technology is used responsibly and ethically. ChatGPT appears to be in for a rough ride with Europe’s privacy watchdogs. The Italian ban was likely only the beginning: because OpenAI has not yet set up a local headquarters in any EU country, it remains exposed to further investigations and bans from any member state’s data protection authority.
However, while EU regulators are still wrapping their heads around the regulatory implications of, and for, generative AI, companies like OpenAI continue to benefit from, and monetise, the lack of regulation in this area. With the EU’s Artificial Intelligence Act expected to pass soon, the EU aims to address the gaps the GDPR leaves when regulating AI and to inspire similar initiatives in other countries. The impact of generative AI models on privacy will likely remain on regulators’ agendas for many years to come.