EU regulators work with tech giants on AI rules

The companies have sought the European Commission’s input on their new AI products, particularly those in the large language model space.


According to Ireland’s Data Protection Commission, leading global internet companies are working closely with EU regulators to ensure their AI products comply with the bloc’s stringent data protection laws. The body, which oversees compliance for major firms such as Google, Meta, Microsoft, TikTok, and OpenAI, has yet to exercise its full regulatory power over AI but could enforce significant changes to business models in order to uphold data privacy.

AI introduces several potential privacy issues, such as whether companies may use publicly available data to train AI models and what legal basis they have for processing personal data. AI operators must also guarantee individuals’ rights, including the right to have their data erased, and address the risk of AI models generating incorrect personal information. The regulator has noted significant engagement from tech giants seeking guidance on their AI innovations, particularly large language models.

Following consultations with the Irish regulator, Google has already agreed to delay and modify its Gemini AI chatbot. Ireland leads on regulation because many tech firms have their EU headquarters there, although other EU regulators can influence decisions through the European Data Protection Board. AI operators must comply with both the new EU AI Act and the General Data Protection Regulation, which imposes fines of up to 4% of a company’s global annual turnover for non-compliance.

Why does it matter?

Ireland’s broad regulatory authority means that companies that fail to perform due diligence on new products could be forced to alter their designs. As the EU’s AI regulatory landscape evolves, tech firms must navigate both the AI Act and existing data protection laws to avoid substantial penalties.