Britain intensifies efforts to protect user data from AI
UK strengthens AI data privacy regulations, targeting companies collecting personal data without consent. Information commissioner warns AI firms of penalties for non-compliance with data protection laws.
The United Kingdom (UK) is taking a firm stance against AI companies that collect personal data without proper consent, particularly in relation to chatbots. The information commissioner has warned firms using generative AI technology that they face penalties if they fail to obtain permission or demonstrate a valid reason, such as a legitimate interest, for collecting personal data. The Information Commissioner’s Office (ICO) has the power to issue notices, enforcement orders, or fines of up to £17 million under data protection laws.
Recently, the UK government has engaged in discussions with major AI companies to determine appropriate regulations. Organisations using generative AI technology must fulfil their data protection obligations and ensure that they handle personal data lawfully, either through consent or legitimate interests. Even if the information is publicly accessible, data protection laws still apply, as emphasised by Professor Lorna Woods of the University of Essex: ‘Data protection rules apply whether or not you’ve made something public.’
In a parallel crackdown, Ofcom, the online safety regulator, is planning stricter rules for AI companies, including a requirement to conduct risk assessments for any new AI technologies. Additionally, the Competition and Markets Authority (CMA), the UK’s competition watchdog, is investigating the AI market, with a specific focus on safety implications.