The European Commission is closing its consultation on a draft implementing regulation on detailed arrangements for certain proceedings under the AI Act.
The draft states that it lays down detailed arrangements and conditions for the evaluation of general-purpose AI models under Article 92, including procedures for selecting and involving independent experts. It also lays down detailed arrangements and procedural safeguards for proceedings in view of the possible adoption of decisions under Article 101 of Regulation (EU) 2024/1689.
Under Article 2, a European Commission decision requesting access to a general-purpose AI model would have to specify the technical means, components, and conditions by which the provider must provide that access. The draft states that access may include APIs, internal access, source code, model weights, access to the infrastructure used to host the model, access to inspect and modify system state, and all levels of access granted to the provider’s own employees.
The draft also states that the European Commission may require a provider to disable and remove logging measures that could track or record the Commission’s access, to the extent necessary to ensure the integrity and confidentiality of the evaluation process. Providers from whom access is requested would have to provide it in a timely and effective manner.
Regarding independent experts, the draft states that the European Commission must take into account factors such as shared ownership, governance, management, personnel, resources, and contractual relationships when assessing independence. It also states that appointed experts must remain independent throughout their appointment and that the confidentiality, integrity, and availability of sensitive information must be protected.
For proceedings that may lead to fines, the draft states that the European Commission may initiate proceedings against relevant conduct by providers of general-purpose AI models. It also states that the Commission may, by decision and on grounds of urgency, order interim measures where there is a risk of serious damage to health, safety, or other public interests covered by Regulation (EU) 2024/1689. Such measures, based on a prima facie finding of an infringement, could include preventing a general-purpose AI model from being made available on the market.
Procedural safeguards include written observations on preliminary findings, with a time limit of at least 14 days set by the European Commission, and rules governing access to the file. The draft states that the addressee may obtain access to documents mentioned in the preliminary findings, subject to redactions protecting business secrets and other confidential information, while broader access may be granted under terms of disclosure set by the Commission.
The annex sets format and length requirements for written observations submitted under Article 7. It states that observations must be submitted in a format that allows electronic processing, digitisation, and character recognition, and sets requirements for page format, font, spacing, margins, and numbering. Written observations must not exceed 50 pages, while annexes do not count towards that limit if they have a purely evidential and instrumental function and are proportionate in number and length.
The draft also lays down limitation periods for the imposition and enforcement of penalties, rules on the beginning and setting of time periods, and provisions on the transmission and receipt of information. It states that documents transmitted by digital means must use at least one qualified electronic signature and that, for real-time or near real-time information shared through APIs or equivalent means, the European Commission will define the methods and duration of that sharing.
The regulation states that it would enter into force on the twentieth day following its publication in the Official Journal of the European Union.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Corning Incorporated and Meta Platforms have begun construction on a major expansion of Corning’s optical cable manufacturing facility in Hickory, North Carolina. The project will support advanced AI data centres using US-developed technology.
The initiative is part of a multiyear, up to $6 billion agreement between the two companies to accelerate the deployment of high-performance data centres. Under the agreement, Corning will supply Meta with new optical fibre, cable, and connectivity solutions.
Meta will act as the anchor customer for the Hickory expansion, which will produce optical cable critical for AI infrastructure. The expansion is expected to strengthen domestic manufacturing and create additional skilled jobs in North Carolina.
Corning currently employs more than 5,000 people in the state and plans to increase its workforce by 15 to 20 percent. Executives emphasised the partnership’s role in advancing US innovation and supporting the next generation of AI infrastructure.
The European Union has entered a new phase in the governance of AI, moving from the legislative adoption of the Artificial Intelligence Act (AI Act) towards its practical implementation. This phase places particular emphasis on the obligations of providers of general-purpose AI (GPAI) models, reflecting the increasing role of such systems in the broader digital ecosystem.
The AI Act, adopted in 2024, establishes a comprehensive legal framework for AI within the EU. It introduces a risk-based approach that classifies AI systems into categories ranging from minimal risk to unacceptable risk, with corresponding regulatory requirements.
According to the official text of the regulation, the framework is designed to ensure that AI systems placed on the market in the Union are ‘safe and respect existing law on fundamental rights and Union values.’
While earlier discussions around the Act focused on its legislative negotiation and scope, the current phase centres on how its provisions will be applied in practice.
General-purpose AI models within the AI Act
A key element of this implementation phase concerns general-purpose AI models. These models, which can be integrated into a wide range of downstream applications, occupy a distinct position within the regulatory framework.
The AI Act defines general-purpose AI models as systems that can be used across multiple tasks and contexts and may ‘serve a variety of purposes, both for direct use and for integration into other AI systems.’
That positioning reflects the broad applicability of these models, particularly in areas such as natural language processing, content generation, and data analysis.
The Act also recognises that the widespread deployment of such models may have implications beyond individual use cases, particularly when integrated into high-risk systems.
Obligations for providers of GPAI models
The European Commission, together with the European AI Office, has begun outlining expectations for compliance with provisions related to general-purpose AI.
According to official EU materials, providers of GPAI models are required to ensure that technical documentation is drawn up and kept up to date.
The regulation specifies that providers should ‘draw up and keep up-to-date technical documentation of the model,’ ensuring that relevant information is accessible for compliance and oversight purposes. In addition, transparency obligations require providers to make certain information available to downstream deployers.
This is intended to support the responsible integration of GPAI models into other systems.
Distinction between GPAI and systemic-risk models
The AI Act introduces a distinction between general-purpose AI models and those considered to pose systemic risk.
Models that meet specific criteria, such as scale, capability, or deployment level, may be classified as having a systemic impact.
For such models, additional obligations apply, including requirements related to evaluation, risk mitigation, and reporting. The European Commission has indicated that further guidance will clarify how systemic risk thresholds are determined, including through delegated acts and technical standards.
Role of the European AI Office in implementation
The European AI Office, established within the European Commission, plays a central role in supporting the implementation of the AI Act.
Its responsibilities include contributing to the consistent application of the regulation, coordinating with national authorities, and supporting the development of methodologies for compliance.
According to the European Commission, the AI Office is tasked with ‘ensuring the coherent implementation of the AI Act across the Union.’ The Office is also expected to contribute to the development of benchmarks, testing frameworks, and guidance documents that support both regulators and providers.
Phased implementation timeline
The implementation of the AI Act is structured as a phased process, with different provisions becoming applicable over time.
That phased approach allows stakeholders to adapt to the regulatory requirements while enabling authorities to establish enforcement mechanisms.
Provisions related to general-purpose AI models are among the earlier elements to be operationalised, reflecting their central role in the current AI landscape.
The European Commission has indicated that additional implementing acts and guidance documents will be issued as part of this process.
Coordination with national authorities
While the European AI Office plays a coordinating role at the EU level, enforcement remains the responsibility of national authorities within member states.
The AI Act establishes mechanisms for cooperation and information-sharing to support a harmonised approach across the European Union.
National authorities are expected to work closely with the AI Office and the European Commission to oversee compliance and address emerging challenges.
Stakeholder engagement and technical guidance
The implementation phase also involves engagement with a range of stakeholders, including industry actors, civil society organisations, and technical experts.
The European Commission has also initiated consultations and workshops to gather input on practical aspects of implementation, such as documentation standards and risk assessment methodologies.
This process supports the development of operational guidance applicable across sectors and use cases.
Interaction with the EU digital regulatory framework
The AI Act operates alongside other instruments in the EU digital rulebook, which address different aspects of the digital ecosystem, including data protection, platform governance, and market competition.
The relationship between the AI Act and these instruments is expected to be clarified further during implementation.
International context: OECD and UN approaches
The governance of general-purpose AI models is also being addressed at the international level.
The OECD AI Principles state that AI systems should be ‘robust, secure and safe throughout their entire lifecycle,’ and emphasise accountability for their functioning.
At the UN level, the Global Digital Compact process addresses issues related to transparency, accountability, and oversight of digital technologies, including AI.
These initiatives provide non-binding guidance, in contrast to the legally binding framework established by the EU AI Act.
Ongoing development of technical standards
The development of technical standards is an important component of the implementation process.
The European Commission has indicated that it will work with standardisation organisations to develop specifications related to documentation, evaluation, and risk management.
These standards are expected to support the practical application of the AI Act’s provisions.
From regulatory framework to regulatory practice
The current phase of the EU AI Act marks a transition from legislative design to regulatory practice.
For providers of general-purpose AI models, this involves preparing to meet obligations related to documentation, transparency, and risk management. For regulators, the focus is on ensuring consistent application of the rules across member states, supported by coordination mechanisms and guidance from the AI Office.
The implementation process is expected to evolve as further guidance is issued.
Conclusion
The European Union’s AI Act is entering its implementation phase, with a particular focus on general-purpose AI models.
That phase involves translating the regulation’s legal provisions into operational requirements, supported by guidance from the European Commission and the AI Office.
The development of technical standards, coordination mechanisms, and compliance frameworks will play a central role in this process. As implementation progresses, further clarification is expected through additional guidance and regulatory measures, contributing to the operationalisation of the EU’s approach to AI governance.
The US software company, Adobe, has introduced Student Spaces, a free AI study tool within Acrobat designed to help students generate learning materials efficiently.
Users can create flashcards, quizzes, mind maps, podcasts, and editable presentations from PDFs, Docs, PowerPoint, Excel, URLs, and handwritten notes.
The tool builds on Acrobat’s AI features, now allowing students to interact with a chat assistant grounded in uploaded documents, reducing errors.
Adobe tested the tool with 500 students from universities including Harvard, Berkeley, and Brown, and emphasises its convenience, letting students generate study materials without constantly moving files.
The goal is to simplify study workflows and support learning across multiple document types.
New research published by the Information Commissioner’s Office (ICO) found that 24% of primary school-aged children have shared their real name or address online, while 21% of parents and carers have never spoken to them about online privacy. It also found that 22% of children have shared personal information, such as health details, with AI tools.
Research published by the ICO also found that 71% of parents worry that information their child shares today could affect their future. Findings also show that 46% do not feel confident protecting their children’s privacy online, 44% say they try but are not sure they are doing enough, and 42% say they probably do not spend enough time checking privacy settings.
Online privacy is one of the least-discussed online safety topics among parents, according to the ICO. Its research found that 38% discuss it less than once a month, while 90% have discussed screen time in the past month.
Emily Keaney, Deputy Commissioner at the ICO, said: ‘The internet offers amazing opportunities for children – but every click can leave a hidden data trail and these digital footprints can last forever.’ She added: ‘We wouldn’t expect our children to share their birthdays or address with a stranger in a shop, because we’d explain stranger danger to them from a very young age, but kids these days are growing up online.’
Keaney said: ‘We know that where children’s details – like their name, interests and pictures – aren’t protected, the potential risks are serious: unwanted contact from strangers, grooming and radicalisation.’ She said children’s online privacy ‘requires a whole society approach’ and added: ‘We have taken and will continue to take action to hold tech companies accountable for their role.’
Keaney also said: ‘There’s a role for parents too but the problem is that many families have never been shown how to talk to their children about online privacy.’ She added: ‘This is where the ICO comes in. We want parents to feel empowered and children to feel digitally confident, because only then will they be able to start to trust in how their data is used and be part of the whole society solution that is needed for online safety.’
The ICO campaign website outlines three steps for parents: talk regularly with children about online privacy, carefully choose what personal information to share, and check privacy settings on new devices and apps.
Major technology and security companies have joined forces under Project Glasswing to defend critical software infrastructure using advanced AI. The initiative brings together organisations including AWS, Apple, Google, Microsoft, NVIDIA, Cisco, CrowdStrike, JPMorganChase and the Linux Foundation.
Anthropic is deploying its frontier model, Claude Mythos Preview, at the centre of the effort. The system detects complex software vulnerabilities at scale, uncovering thousands of previously unknown flaws across operating systems, browsers, and core infrastructure.
The model’s findings suggest a major shift in cybersecurity capabilities. AI systems are increasingly capable of matching or surpassing human expert performance in vulnerability discovery, raising both defensive opportunities and security risks.
Some of the flaws identified had persisted for decades, undetected by traditional testing methods.
Project Glasswing aims to convert these capabilities into a coordinated defensive advantage. Partners will use the model to scan and secure systems more efficiently, supported by $100 million in usage credits and additional funding for open-source security initiatives.
The programme also targets long-term improvements in cybersecurity standards and secure development practices.
Modern society depends on software that runs critical infrastructure, including banking systems, healthcare networks, energy grids, and communications platforms. When AI systems find vulnerabilities at scale, the balance shifts between attackers and defenders, making hidden weaknesses easier to uncover and faster to fix before exploitation.
For global infrastructure, this means cybersecurity is shifting from slow, human-driven auditing to continuous, AI-assisted defence, where speed, coordination, and secure-by-design practices become essential to maintaining stability and reducing systemic risk.
The UK’s Information Commissioner’s Office has issued new guidance on the growing use of AI in recruitment, warning jobseekers may be unaware of how automated systems influence hiring decisions. The regulator says greater transparency is needed as adoption accelerates.
Automated decision-making tools are increasingly used to screen applications, analyse CVs and rank candidates. While this can improve efficiency, some applicants may be rejected before any human review takes place.
The regulator highlights risks including bias, lack of clarity and potential unfair treatment if safeguards around the use of AI are not properly applied. Employers are expected to monitor systems for discrimination and clearly explain how decisions are made.
Jobseekers are entitled to know when automation is used, to challenge outcomes, and to request human review. The guidance aims to ensure fair and lawful hiring practices as AI becomes increasingly embedded in UK recruitment.
The introduction of new AI ethics guidelines by China signals a structured attempt to formalise governance frameworks for rapidly expanding AI systems.
Coordinated by the Ministry of Industry and Information Technology of the People’s Republic of China and multiple state bodies, the policy integrates ethical oversight directly into technological development processes.
A central feature of the framework is the emphasis on operationalising ethical principles such as fairness, accountability, and human well-being through technical review mechanisms.
By focusing on data selection, algorithmic design, and system architecture, the guidelines move towards embedding ethical safeguards at the development stage and protecting intellectual property rights in AI ethics review technologies.
Such an approach reflects a broader shift towards anticipatory governance, where risks such as bias, discrimination, and algorithmic manipulation are addressed before deployment.
China’s policy also highlights the role of infrastructure in ethical governance, including the development of auditing tools, risk assessment systems, and curated datasets.
Scenario-based evaluation mechanisms indicate an effort to tailor oversight to specific use cases, recognising that AI risks vary significantly across sectors. Instead of relying solely on static compliance rules, the framework promotes adaptive governance aligned with technological complexity.
Ultimately, the outcome is a governance model that seeks to maintain technological competitiveness while addressing societal risks, contributing to wider global debates on how states can regulate AI systems without constraining their development.
US District Court for the District of Columbia Chief Judge James Boasberg and US District Court for the District of Massachusetts Judge Allison Burroughs discussed AI, privacy, and the courts during the IAPP Global Summit 2026 in Washington, D.C.
The IAPP report said Burroughs pointed to the gap between older legal protections and newer technologies, including debates over how surveillance rules apply to cell-tower data. Burroughs said existing laws and constitutional protections are ‘not keeping up, never have kept up and never will keep up’ with the speed of innovation.
Burroughs commented: ‘The gap is getting bigger for two reasons. One is that there’s so much more data stored electronically that if you even search for someone’s laptop, you’re going to get more data now than you used to get, and the other one is that there is so much more technology, there are just so many ways of gaining access to data.’
Another part of the IAPP report stated that Boasberg referred to a case in which lawyers submitted filings containing hallucinatory information generated through AI use. According to the report, he required that side to pay attorney’s fees to the other side as a sanction after discovering that AI had been used in the briefs.
Boasberg noted at the IAPP session: ‘I’m sure lawyers using AI is happening a lot more on the state level, and some judges are referring lawyers to state bars (for possible discipline), but there have been federal judges whose opinions included hallucinatory (citations) and that was obviously embarrassing for them.’ He added: ‘The question is how can it help without compromising privacy issues, sealed cases; there’s just a whole lot that we have to figure out, but I think judges are trying to learn how we can use this constructively.’
Burroughs also remarked at the IAPP event that judges want disclosure when lawyers use AI in filings. She said: ‘We want lawyers to tell us when they’ve used AI. They can use it, but they have to disclose it.’ She added: ‘They can use AI, they can’t use AI, they must disclose when they’re using it, they have to certify that they do citation checks to make sure they don’t have hallucinatory citations — it’s hard to think of what these rules would be going forward today.’
According to the IAPP report, the summit discussion focused on how AI is affecting legal filings, surveillance questions, and court practice.
A newly released ‘Student AI Bill of Rights’ in the US outlines a proposed framework to protect learners as AI tools become increasingly widespread in education. The initiative aims to establish clear standards for fairness, transparency and accountability.
The document highlights the need for students to be informed when AI systems are used in teaching, assessment or administration. It also stresses that students should retain control over their personal data and academic work.
Another central principle is accountability, with students given the right to question and appeal decisions made or influenced by AI systems. The framework also calls for safeguards to prevent bias and ensure equal access to educational opportunities.
While not legally binding, the proposal is designed to guide higher education institutions in developing responsible AI policies. It reflects growing efforts to define ethical standards for AI use in education in the US.