EU advances GPAI framework with focus on forecasting systemic risks
AI regulation efforts in the EU expand through structured forecasting and scenario-based risk evaluation systems.
At the third meeting of the Signatory Taskforce, the European Commission advanced discussions on how to strengthen oversight of advanced AI systems through the General-Purpose AI Code of Practice, with a particular focus on risk forecasting and harmful manipulation.
The latest GPAI taskforce meeting focused on improving how providers assess and anticipate systemic risks linked to high-impact AI models. A central proposal would require providers to estimate, using structured forecasting methods, when future systems may exceed the highest systemic risk tier already reached by any of their existing models.
The Commission is also considering using aggregate forecasts across the industry to provide a broader view of technological trends, including compute capacity, algorithmic efficiency, and data availability. The aim is to improve visibility into how capabilities may evolve across the sector rather than only at the level of individual providers.
Attention also turned to harmful manipulation, which the Code treats as a recognised systemic risk. Discussions focused on how providers should develop realistic scenarios for testing and evaluating model behaviour, including in deployment settings such as chatbot interfaces, third-party applications, and agentic systems.
The initiative reflects a wider EU regulatory approach centred on transparency, accountability, and proactive governance in AI development. Rather than waiting for harms to materialise, the Code of Practice is being used to push providers to identify risks earlier and to adopt more structured safety planning for general-purpose AI models with systemic risk.
