Tripartite Accord: Germany, France, and Italy publish joint AI regulation framework

The three countries are pushing back against the tiered approach envisioned in the EU AI Act for foundation models, favouring codes of conduct over what they call untested norms.

Three European powerhouses—France, Germany, and Italy—have jointly unveiled an agreement aimed at regulating AI within the continent.

The joint agreement, detailed in a shared paper, revolves around the concept of ‘mandatory self-regulation through codes of conduct’, primarily directed at foundation models designed to generate diverse outputs. Notably, these countries are aligning against the initial tiered approach envisioned in the EU AI Act for foundation models, advocating instead for a framework that emphasises codes of conduct over what they termed untested norms.

What does the current draft of the EU AI Act propose? A tiered approach to regulating AI based on its potential risks: AI systems are categorised into different risk bands, with stricter obligations attached to higher risk levels.
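For illustration only, the tiered idea can be sketched as a mapping from risk band to regulatory obligations. The band names and obligations below are assumptions chosen to convey the concept, not text from the Act:

```python
# Illustrative sketch of a tiered, risk-based regulatory scheme.
# Band names and obligations are hypothetical, not quoted from the EU AI Act.
RISK_BANDS = {
    "minimal": [],                                              # little or no regulation
    "limited": ["transparency notices"],
    "high": ["conformity assessment", "human oversight", "logging"],
    "unacceptable": ["prohibited"],                             # banned outright
}

def obligations(risk_band: str) -> list[str]:
    """Return the (illustrative) obligations attached to a risk band."""
    if risk_band not in RISK_BANDS:
        raise ValueError(f"unknown risk band: {risk_band}")
    return RISK_BANDS[risk_band]

print(obligations("high"))
```

The point of contention is that the three governments would rather not see foundation models slotted into such bands at all, preferring codes of conduct instead.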

The joint paper argues that the AI Act should regulate only the application of AI rather than the technology itself if Europe is ‘to compete globally’, said Germany’s Digital Affairs Minister Volker Wissing, who expressed his satisfaction that such an agreement had been reached with France and Italy.

The paper outlines a requirement for developers of foundation models to define model cards. What are model cards? They are documents that provide information about machine learning models, detailing aspects such as their intended use, performance characteristics, limitations, and potential biases. They serve as a means of transparency, allowing users to understand a model’s behaviour and assess its suitability for specific applications. These model cards would be based on best practices within the developer community.
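As an illustration, a minimal model card might capture fields like the ones below. This is a hypothetical sketch; the field names and values are assumptions, not a schema prescribed by the joint paper:

```python
# A hypothetical, minimal model card for a foundation model.
# Field names and values are illustrative only.
model_card = {
    "model_name": "example-foundation-model",       # hypothetical name
    "intended_use": "General-purpose text generation",
    "out_of_scope_uses": ["Medical diagnosis", "Legal advice"],
    "limitations": ["May produce factually incorrect output"],
    "known_biases": ["Training data skews toward English-language sources"],
}

def summarise(card: dict) -> str:
    """Render a short transparency summary from a model card."""
    return (
        f"{card['model_name']}: {card['intended_use']}; "
        f"{len(card['limitations'])} documented limitation(s), "
        f"{len(card['known_biases'])} documented bias(es)."
    )

print(summarise(model_card))
```

The transparency value lies less in any particular format than in making intended use, limitations, and biases explicit before deployment.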

The joint paper suggests that an AI governance body could contribute to formulating guidelines and overseeing the implementation of model cards. It recommends no immediate sanctions; should violations of the code of conduct be identified after a certain period, a sanctions system could then be established.