OpenAI’s ChatGPT faces scrutiny from Italian privacy watchdog

According to the authority, there are indications of violations of data privacy law. OpenAI has been granted a 30-day period to present its defence arguments in response to the allegations.


The Garante per la protezione dei dati personali, Italy’s data protection authority (DPA), notified OpenAI that the AI application behind ChatGPT breaches data protection regulations. Specifically, the Italian DPA concluded that the available evidence pointed to violations of provisions of the EU General Data Protection Regulation (GDPR). The finding follows a probe the authority opened in March 2023. OpenAI has been given a 30-day period to present its defence arguments in response to the allegations. The Italian DPA added that it would also take into account the findings of the ChatGPT task force established by the European Data Protection Board (EDPB) in its final determination on the case.

In March last year, the Italian DPA imposed a temporary ban on ChatGPT due to concerns over the processing of Italian users’ data. At the time, the authority raised concerns about the absence of a legal basis for the extensive collection and processing of personal data for algorithm training purposes, and noted that the lack of age verification mechanisms risked exposing children to inappropriate responses on the platform. At the end of April that same year, OpenAI fulfilled the requirements set forth by the authority, and ChatGPT has been accessible to Italian users ever since.

Why does it matter?

As the situation unfolds, OpenAI will have the opportunity to present its defence and address the concerns raised by the Italian DPA. The outcome of this investigation will likely have implications not only for ChatGPT but also for the broader landscape of AI applications and their adherence to data protection standards in the EU.