EU Council’s Spanish presidency proposes governance framework for AI foundation models
The Spanish presidency has proposed a governance framework for overseeing foundation models and high-impact AI models under the EU’s AI law. This framework includes a scientific panel and specific obligations for models like GPT-4.
The Spanish presidency of the EU Council of Ministers has introduced a governance structure for supervising foundation models and high-impact foundation models under the EU's AI legislation. The structure involves creating a scientific panel and lays out specific responsibilities concerning models such as OpenAI's GPT-4, which powers ChatGPT.
The European Commission would have exclusive authority to oversee these responsibilities, conduct investigations, enforce the rules, and carry out audits of foundation models. It could delegate those audits to independent auditors or vetted red-team experts. For high-impact models, the proposal calls for adversarial evaluations by red teams.
The proposal also includes a system for penalizing non-compliant providers, although the precise penalties remain undefined. Its governance framework comprises the AI Office and a scientific panel responsible for engaging with the scientific community, advising on high-impact models, and monitoring safety concerns.
Why does this matter?
The Spanish presidency's proposal may exert considerable influence on the ongoing negotiations as the AI legislation approaches the final stages of the legislative process. It marks a significant step towards the regulation of AI in the EU, especially concerning advanced AI models like GPT-4.