EU AI Act begins as tech firms push back

Developers must comply with new EU AI regulations or face heavy scrutiny and penalties.

The EU begins enforcing its AI Act this week, requiring developers to assess risks, report incidents and improve transparency.

Europe’s AI crackdown is now officially under way, as the EU enforces the first rules targeting developers of generative AI models like ChatGPT.

Under the AI Act, firms must now assess systemic risks, conduct adversarial testing, ensure cybersecurity, report serious incidents, and even disclose energy usage. The goal is to prevent harms related to bias, misinformation, manipulation, and lack of transparency in AI systems.

Although the legislation was passed last year, the EU only released developer guidance on 10 July, leaving tech giants with little time to adapt.

Meta, which developed the Llama AI model, has refused to sign the voluntary code of practice, arguing that it introduces legal uncertainty. Other developers have expressed concerns over how vague and generic the guidance remains, especially around copyright and practical compliance.

The EU’s approach also stands in contrast to that of the US, where the re-elected Trump administration has launched a far looser AI Action Plan. While Washington favours minimal restrictions to encourage innovation, Brussels is focused on safety and transparency.

Trade tensions may grow, but experts warn that developers should not count on future political deals and should instead take immediate steps toward compliance.

The AI Act’s rollout will continue into 2026, with the next phase focusing on high-risk AI systems in healthcare, law enforcement, and critical infrastructure.

Meanwhile, questions remain over whether AI-generated content qualifies for copyright protection and how companies should handle AI in marketing or supply chains. For now, Europe’s push for safer AI is accelerating—whether Big Tech likes it or not.
