The European Commission is closing its consultation on a draft implementing regulation on detailed arrangements for certain proceedings under the AI Act.
The draft states that it lays down detailed arrangements and conditions for the evaluation of general-purpose AI models under Article 92, including procedures for involving independent experts and selecting them. It also lays down detailed arrangements and procedural safeguards for proceedings in view of the possible adoption of decisions under Article 101 of Regulation (EU) 2024/1689.
Under Article 2, a European Commission decision requesting access to a general-purpose AI model would have to specify the technical means, components, and conditions by which the provider must provide that access. The draft states that access may include APIs, internal access, source code, model weights, access to the infrastructure used to host the model, access to inspect and modify system state, and all levels of access granted to the provider’s own employees.
The draft also states that the European Commission may require a provider to disable and remove logging measures that could track or record the Commission’s access, to the extent necessary to ensure the integrity and confidentiality of the evaluation process. Providers receiving such a request would have to provide access in a timely and effective manner.
Regarding independent experts, the draft states that the European Commission must take into account factors such as shared ownership, governance, management, personnel, resources, and contractual relationships when assessing independence. It also states that appointed experts must remain independent throughout their appointment and that the confidentiality, integrity, and availability of sensitive information must be protected.
For proceedings that may lead to fines, the draft states that the European Commission may initiate proceedings against relevant conduct by providers of general-purpose AI models. It also states that, based on a prima facie finding of an infringement, the Commission may by decision order interim measures on grounds of urgency, where there is a risk of serious damage to health or safety or on other public-interest grounds covered by Regulation (EU) 2024/1689, including preventing a general-purpose AI model from being made available on the market.
Procedural safeguards include written observations on preliminary findings, with a time limit of at least 14 days set by the European Commission, and rules governing access to the file. The draft states that the addressee may obtain access to documents mentioned in the preliminary findings, subject to redactions protecting business secrets and other confidential information, while broader access may be granted under terms of disclosure set by the Commission.
The annex sets format and length requirements for written observations submitted under Article 7. It states that observations must be submitted in a format that allows electronic processing, digitisation, and character recognition, and sets requirements for page format, font, spacing, margins, and numbering. Written observations must not exceed 50 pages, while annexes do not count towards that limit if they have a purely evidential and instrumental function and are proportionate in number and length.
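The annex’s page-counting rule can be expressed programmatically. The sketch below is a hypothetical illustration of that rule only, assuming a 50-page cap with annexes excluded when they are purely evidential; the function and field names are illustrative and do not come from the draft regulation.

```python
# Hypothetical sketch of the annex's page-limit rule: written observations
# are capped at 50 pages, and annexes are excluded from the count only if
# they serve a purely evidential and instrumental function. All names here
# are illustrative assumptions, not terms from the draft regulation.

PAGE_LIMIT = 50

def within_page_limit(observation_pages, annexes):
    """annexes: list of dicts with 'pages' and a 'purely_evidential' flag."""
    counted = observation_pages
    for annex in annexes:
        # Annexes count towards the limit unless purely evidential.
        if not annex.get("purely_evidential", False):
            counted += annex["pages"]
    return counted <= PAGE_LIMIT

# A 48-page submission with one evidential annex stays within the limit.
print(within_page_limit(48, [{"pages": 120, "purely_evidential": True}]))  # True
```

Note that the sketch does not model the separate proportionality requirement on the number and length of annexes, which would call for an additional check.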
The draft also lays down limitation periods for the imposition and enforcement of penalties, rules on the beginning and setting of time periods, and provisions on the transmission and receipt of information. It states that documents transmitted by digital means must use at least one qualified electronic signature and that, for real-time or near real-time information shared through APIs or equivalent means, the European Commission will define the methods and duration of that sharing.
The regulation states that it would enter into force on the twentieth day following its publication in the Official Journal of the European Union.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Two decades after the launch of the .eu domain, the EU has marked its role in establishing a unified digital identity across member states.
On 7 April 2006, the .eu top-level domain (TLD) was launched, offering businesses, citizens, and organisations a pan-EU online identity.
Over time, .eu has developed into one of the largest country-code domains globally, with millions of registrations and consistent growth.
Its technical stability and security record, including uninterrupted service since launch, have reinforced its reputation as a reliable digital infrastructure. Investments in fraud detection and data integrity have further strengthened trust in its ecosystem.
The domain has also evolved to reflect the EU’s linguistic diversity, with the introduction of internationalised domain names and additional scripts such as Cyrillic and Greek. These developments have expanded accessibility and reinforced inclusivity within the European digital space.
Looking ahead, .eu is positioned as a key instrument for advancing digital sovereignty and supporting the Single Market. Its role in global internet governance discussions is expected to grow, particularly as the EU institutions seek to shape a more open, secure, and rights-based digital environment.
The European Union has entered a new phase in the governance of AI, moving from the legislative adoption of the Artificial Intelligence Act (AI Act) towards its practical implementation. This phase places particular emphasis on the obligations of providers of general-purpose AI (GPAI) models, reflecting the increasing role of such systems in the broader digital ecosystem.
The AI Act, adopted in 2024, establishes a comprehensive legal framework for AI within the EU. It introduces a risk-based approach that classifies AI systems into categories ranging from minimal risk to unacceptable risk, with corresponding regulatory requirements.
According to the official text of the regulation, the framework is designed to ensure that AI systems placed on the market in the Union are ‘safe and respect existing law on fundamental rights and Union values.’
While earlier discussions around the Act focused on its legislative negotiation and scope, the current phase centres on how its provisions will be applied in practice.
General-purpose AI models within the AI Act
A key element of this implementation phase concerns general-purpose AI models. These models, which can be integrated into a wide range of downstream applications, occupy a distinct position within the regulatory framework.
The AI Act defines general-purpose AI models as systems that can be used across multiple tasks and contexts and may ‘serve a variety of purposes, both for direct use and for integration into other AI systems.’
That positioning reflects the broad applicability of these models, particularly in areas such as natural language processing, content generation, and data analysis.
The Act also recognises that the widespread deployment of such models may have implications beyond individual use cases, particularly when integrated into high-risk systems.
Obligations for providers of GPAI models
The European Commission, together with the European AI Office, has begun outlining expectations for compliance with provisions related to general-purpose AI.
According to official EU materials, providers of GPAI models are required to ensure that technical documentation is drawn up and kept up to date.
The regulation specifies that providers should ‘draw up and keep up-to-date technical documentation of the model,’ ensuring that relevant information is accessible for compliance and oversight purposes. In addition, transparency obligations require providers to make certain information available to downstream deployers.
This is intended to support the responsible integration of GPAI models into other systems.
Distinction between GPAI and systemic-risk models
The AI Act introduces a distinction between general-purpose AI models and those considered to pose systemic risk.
Models that meet specific criteria, such as scale, capability, or deployment level, may be classified as having a systemic impact.
For such models, additional obligations apply, including requirements related to evaluation, risk mitigation, and reporting. The European Commission has indicated that further guidance will clarify how systemic risk thresholds are determined, including through delegated acts and technical standards.
Role of the European AI Office in implementation
The European AI Office, established within the European Commission, plays a central role in supporting the implementation of the AI Act.
Its responsibilities include contributing to the consistent application of the regulation, coordinating with national authorities, and supporting the development of methodologies for compliance.
According to the European Commission, the AI Office is tasked with ‘ensuring the coherent implementation of the AI Act across the Union.’ The Office is also expected to contribute to the development of benchmarks, testing frameworks, and guidance documents that support both regulators and providers.
Phased implementation timeline
The implementation of the AI Act is structured as a phased process, with different provisions becoming applicable over time.
That phased approach allows stakeholders to adapt to the regulatory requirements while enabling authorities to establish enforcement mechanisms.
Provisions related to general-purpose AI models are among the earlier elements to be operationalised, reflecting their central role in the current AI landscape.
The European Commission has indicated that additional implementing acts and guidance documents will be issued as part of this process.
Coordination with national authorities
While the European AI Office plays a coordinating role at the EU level, enforcement remains the responsibility of national authorities within member states.
The AI Act establishes mechanisms for cooperation and information-sharing to support a harmonised approach across the European Union.
National authorities are expected to work closely with the AI Office and the European Commission to oversee compliance and address emerging challenges.
Stakeholder engagement and technical guidance
The implementation phase also involves engagement with a range of stakeholders, including industry actors, civil society organisations, and technical experts.
The European Commission has also initiated consultations and workshops to gather input on practical aspects of implementation, such as documentation standards and risk assessment methodologies.
This process supports the development of operational guidance applicable across sectors and use cases.
Interaction with the EU digital regulatory framework
The AI Act operates alongside other EU digital instruments, and these frameworks address different aspects of the digital ecosystem, including data protection, platform governance, and market competition.
The relationship between the AI Act and these instruments is expected to be clarified further during implementation.
International context: OECD and UN approaches
The governance of general-purpose AI models is also being addressed at the international level.
The OECD AI Principles state that AI systems should be ‘robust, secure and safe throughout their entire lifecycle,’ and emphasise accountability for their functioning.
At the UN level, the Global Digital Compact process addresses issues related to transparency, accountability, and oversight of digital technologies, including AI.
These initiatives provide non-binding guidance, in contrast to the legally binding framework established by the EU AI Act.
Ongoing development of technical standards
The development of technical standards is an important component of the implementation process.
The European Commission has indicated that it will work with standardisation organisations to develop specifications related to documentation, evaluation, and risk management.
These standards are expected to support the practical application of the AI Act’s provisions.
From regulatory framework to regulatory practice
The current phase of the EU AI Act marks a transition from legislative design to regulatory practice.
For providers of general-purpose AI models, this involves preparing to meet obligations related to documentation, transparency, and risk management. For regulators, the focus is on ensuring consistent application of the rules across member states, supported by coordination mechanisms and guidance from the AI Office.
The implementation process is expected to evolve as further guidance is issued.
Conclusion
The European Union’s AI Act is entering its implementation phase, with a particular focus on general-purpose AI models.
That phase involves translating the regulation’s legal provisions into operational requirements, supported by guidance from the European Commission and the AI Office.
The development of technical standards, coordination mechanisms, and compliance frameworks will play a central role in this process. As implementation progresses, further clarification is expected through additional guidance and regulatory measures, contributing to the operationalisation of the EU’s approach to AI governance.
The European Commission has awarded a €5 million grant to strengthen independent fact-checking capacity across the European Union and associated countries. The initiative will establish a comprehensive support network for fact-checkers working in all EU languages.
The European Fact-Checking Standards Network will lead the project alongside seven partner organisations. The scheme will provide fact-checkers with protection covering legal support, cybersecurity assistance, psychological support and access to an independent European repository of fact-checks.
By expanding Europe’s independent fact-checking community, the initiative will improve the Union’s ability to detect and analyse disinformation threats. The announcement reflects the Commission’s commitment to safeguarding information integrity and democratic resilience across the Union.
The EU’s interim ePrivacy derogation, which allowed certain communications services to voluntarily detect child sexual abuse online, expired after 3 April 2026, bringing to an end the temporary legal basis that had permitted some providers to scan private communications for child sexual abuse material under limited conditions.
The exemption applied to number-independent interpersonal communications services such as messaging, webmail, and internet telephony platforms, allowing them to use specific technologies to detect, report, and remove child sexual abuse material in private communications.
Under the temporary framework, providers were also required to make information from reports submitted to authorities and the European Commission available in a structured, machine-readable format.
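The requirement for a structured, machine-readable format can be illustrated with a simple serialised record. The sketch below is a hypothetical example only: every field name is an assumption for illustration, since the regulation does not prescribe a specific schema in this text.

```python
import json

# Hypothetical sketch of a structured, machine-readable report record of the
# kind the interim framework required providers to make available. All field
# names are illustrative assumptions, not taken from the regulation itself.
report = {
    "report_id": "example-0001",
    "service_type": "number-independent interpersonal communications",
    "detection_technology": "hash-matching",
    "submitted_to": ["national authority", "European Commission"],
    "content_removed": True,
}

# JSON satisfies the "structured, machine-readable" property: any recipient
# can parse the record programmatically rather than extracting it from prose.
serialised = json.dumps(report, indent=2)
print(serialised)
```

A machine-readable format of this kind lets authorities aggregate reports across providers automatically, which is the practical point of the requirement.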
On 26 March 2026, the European Parliament said the derogation would not be extended after negotiations with the Council of the European Union failed to produce an agreement. Parliament had supported a further extension on 11 March, backing a shorter prolongation until August 2027 and a narrower scope than the European Commission had proposed, but no final deal was reached before the deadline.
The expiry leaves the EU without an updated interim arrangement, while negotiations on a permanent legal framework for addressing online child sexual abuse continue. In practice, that means the bloc still has no settled long-term answer to one of its most difficult digital policy questions: how to reconcile child protection measures with privacy and confidentiality rules governing private communications.
Why does it matter?
Because the lapse removes the temporary EU legal basis that had allowed some messaging and other communications services to voluntarily use detection technologies for online child sexual abuse under a limited exemption from ePrivacy rules. That creates immediate legal and operational uncertainty for providers that had relied on the framework, while also reopening a wider policy conflict the EU has still not resolved: how to support child safety online without undermining privacy, confidentiality of communications, and data protection safeguards in the absence of a permanent legislative solution.
Members of the European Parliament (MEPs) completed a visit to Beijing and Shanghai to address pressing e-commerce challenges affecting the European single market.
The delegation studied local business models and market supervision frameworks, engaging with Chinese regulators, e-commerce platforms, and EU company representatives.
The discussions highlighted the surge of parcels from China, which now account for 91% of small shipments to Europe, and the resulting pressures on fair competition.
MEPs stressed that regulatory compliance must be consistent across all operators, ensuring consumer protection is not compromised by disparities in market practices or enforcement gaps.
The delegation urged representatives of e-commerce platforms to implement preventive measures, reinforcing accountability in areas such as product safety, customs compliance, and the removal of unsafe goods from the market.
MEPs underscored that these standards are essential to maintaining a sustainable and secure e-commerce environment for European citizens.
The visit, the first in eight years, demonstrated the EU’s commitment to safeguarding consumer rights, strengthening international cooperation, and ensuring digital commerce evolves in a manner that is fair, transparent, and safe for all citizens.
Amnesty International has warned that proposed EU reforms presented as a way to simplify digital regulation and boost competitiveness could weaken core safeguards for privacy and fundamental rights. At the centre of the concern is the European Commission’s ‘Digital Omnibus’ initiative, which would affect major pieces of legislation, including the General Data Protection Regulation and the AI Act.
Amnesty and other civil society groups argue that the package risks reopening key protections in the EU’s digital rulebook under the banner of regulatory simplification.
Among the most controversial proposals are changes to how personal data is defined, along with exceptions that could make it easier for companies to retain or reuse data for AI systems. Critics say that such changes would weaken safeguards intended to limit excessive data collection and to preserve accountability in how personal information is processed.
Concerns also extend to the AI Act, where proposed adjustments could reduce obligations for high-risk systems. According to Amnesty, companies may be given greater discretion in how they assess and disclose risks, potentially lowering transparency and limiting external scrutiny.
Delays in implementation, the organisation argues, could also allow harmful systems to remain in use without full regulatory oversight.
The broader reform agenda may reach beyond privacy and AI rules. Future ‘fitness checks’ could also affect frameworks such as the Digital Services Act and the Digital Markets Act, raising wider concerns about whether the EU’s digital regulatory model is being softened in the name of competitiveness.
For critics, the cumulative risk is that the balance of the EU digital framework could begin to shift away from rights protection and public accountability, and towards greater corporate flexibility in areas linked to surveillance, discrimination, and market power.
A speech by European Central Bank Executive Board member Piero Cipollone outlines how a digital euro could strengthen Europe’s resilience and autonomy in payments.
The initiative responds to growing dependence on non-European financial infrastructure, which increasingly shapes transaction rules, costs, and access across the euro area.
According to Mr Cipollone, ‘dependence on a non-European infrastructure leaves users vulnerable to an outright withdrawal of access.’
Most card transactions in the euro area depend on non-European schemes, while declining cash usage intensifies dependence on digital systems beyond European control.
He added that the proposed digital euro would function as a sovereign digital payment method, available online and offline, ensuring continuity and privacy.
It would also reduce reliance on foreign providers, lower transaction costs, and create a unified infrastructure supporting competition and innovation across EU payment systems.
Beyond retail payments, the ECB emphasises a broader strategy including tokenised central bank money and distributed ledger technologies.
These measures aim to strengthen financial integration, prevent fragmentation, and ensure that the EU’s digital financial ecosystem develops on foundations aligned with its economic sovereignty.
The European Commission has committed €5 million to strengthen independent fact-checking networks, reinforcing efforts to counter disinformation across Europe. The initiative seeks to expand verification capacity in all EU languages while improving coordination among key stakeholders.
It also establishes a centralised European repository of verified information, designed to enhance transparency and improve access to reliable content across the EU.
Led by the European Fact-Checking Standards Network, the project builds on existing frameworks such as the European Digital Media Observatory. The initiative forms part of the EU’s broader strategy to strengthen information integrity and safeguard democratic processes.
By reinforcing independent verification ecosystems, the programme reflects a policy-driven effort to address disinformation threats while supporting a more resilient and trustworthy digital environment across Europe.
The European Commission has highlighted the growing impact of the Strategic Technologies for Europe Platform (STEP), which has mobilised €29 billion to strengthen innovation and competitiveness across key sectors.
The initiative supports the development and manufacturing of critical technologies, reinforcing the Union’s strategic autonomy.
Funding has been directed toward digital and deep-tech innovation, clean technologies, biotechnology and defence, combining resources from EU programmes and Member States.
Such a coordinated approach reflects efforts to reduce strategic dependencies rather than rely on fragmented investment strategies.
The platform has also improved access to funding, with hundreds of calls and projects supported across all Member States. Tools such as the STEP Seal and the planned AI-based access systems aim to simplify processes and attract further public and private investment into high-potential projects.
Looking ahead, the initiative is shaping broader reforms, including proposals for a European Competitiveness Fund. These developments signal a continued focus on streamlining funding mechanisms while supporting innovation ecosystems and long-term economic growth across Europe.