Meta halts AI launch in Europe after EU regulator ruling

Meta is halting the launch of its AI in Europe after Ireland’s data protection authority asked the company to delay training its models on users’ publicly posted content.


Meta’s main EU regulator, the Irish Data Protection Commission (DPC), requested that the company delay the training of its large language models (LLMs) on content published publicly by adults on the company’s platforms. In response, Meta announced it would not be launching its AI in Europe for the time being.

The main reason behind the request is Meta’s plan to use this publicly posted content to train its AI models without explicitly seeking consent. The company claims it must do so or else its AI ‘won’t accurately understand important regional languages, cultures or trending topics on social media’, and it is already developing continent-specific AI technology. Another cause for concern is Meta’s use of information belonging to people who do not use its services: in a message to its Facebook users, the company said it may process information about non-users if they appear in an image or are mentioned on its platforms.

The DPC welcomed Meta’s decision to delay the rollout. The commission is leading the regulation of Meta’s AI tools on behalf of EU data protection authorities (DPAs), 11 of which received complaints from advocacy group NOYB (None Of Your Business). NOYB argues that the GDPR is flexible enough to accommodate this AI, provided that Meta asks for users’ consent. The delay comes just before Meta’s new privacy policy takes effect on 26 June.

Beyond the EU, the executive director of the UK’s Information Commissioner’s Office welcomed the delay, adding that ‘in order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset.’