EU-backed financing boosts Bulgaria’s high-tech sector and innovation growth

The European Investment Fund (EIF) will manage a €210 million financing initiative to support high-tech businesses in Bulgaria, focusing on sectors such as AI, microelectronics and advanced technologies.

The programme operates within the JEREMIE Bulgaria framework, which aims to improve access to capital for small and medium-sized enterprises.

The initiative reflects a broader EU strategy to strengthen innovation capacity and support sustainable economic growth through targeted investment mechanisms.

The EIF, a subsidiary of the EIB Group, will prioritise equity financing and scale-up support to address structural gaps that often limit the expansion of high-growth companies within national markets.

The programme also aligns with wider efforts to retain technological talent and reduce reliance on external capital by reinforcing domestic innovation ecosystems.

By supporting dual-use technologies and strategic sectors, the measure contributes to both economic competitiveness and technological resilience.

Under the programme's revolving funding model, reinvested capital is expected to sustain long-term financing capacity, reinforcing Bulgaria's position within regional venture capital networks and supporting the development of a more mature innovation economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU monitoring highlights platform performance under revised hate speech code

The European Commission has published the first monitoring results under the revised Code of Conduct on Countering Illegal Hate Speech Online+, providing insight into how major platforms handle reported content.

The assessment combines independent monitoring with self-reported data from participating companies.

Findings indicate that most platforms reviewed a majority of notifications within 24 hours, in line with their commitments.

However, a significant share of reported cases was either disputed or classified as erroneous, with inaccuracies partly attributed to monitoring bodies’ misuse of reporting channels.

The monitoring exercise functions as a structured stress test within the framework of the Digital Services Act (DSA), assessing whether platforms meet minimum response thresholds and apply appropriate measures when illegal hate speech is identified under national and EU law.

Publication of the results aims to strengthen transparency and accountability, while informing improvements ahead of the next monitoring cycle.

The Code of Conduct on Countering Illegal Hate Speech Online+ now operates as part of the EU’s co-regulatory approach to platform governance.

EU approves Italian State aid to support graphene-based photonic chip development

The European Commission has approved a €211 million Italian State aid measure to support the development of photonic chips based on graphene technology.

The funding will be provided to the Italian SME CamGraPhIC, with project activities taking place in Pisa and Bergamo.

The initiative focuses on optical transceivers that transmit data using light rather than electrons. The use of graphene instead of silicon is expected to enhance performance and energy efficiency across sectors such as telecommunications, automotive, aerospace and defence.

The Commission assessed the measure under EU State aid rules and concluded that the funding is necessary, proportionate and aligned with research and innovation objectives. It also found that the project would not proceed without public support, demonstrating an incentive effect.

The decision reflects broader EU efforts to strengthen semiconductor capabilities and support advanced digital technologies through targeted public investment and regulatory oversight.

EU launches Mediterranean digital programme to support governance, cybersecurity and skills

The European Commission has launched a digital transformation programme for countries in North Africa and the Middle East, marking the first digital initiative under the Pact for the Mediterranean.

The EU aims to support inclusive and sustainable growth by improving access to digital services and strengthening regulatory alignment.

The initiative focuses on enhancing digital governance by aligning telecommunications regulations with EU standards and strengthening national regulatory authorities. It also promotes regional cooperation by creating coordinated networks across participating countries.

Cybersecurity forms a central component, with measures designed to improve national frameworks and institutional capacity to prevent and respond to cyber threats.

Additionally, the programme advances digital skills development based on EU competency frameworks, supporting long-term capacity development.

The approach reflects a broader policy objective to foster regional digital integration, strengthen institutional resilience and promote secure and inclusive digital transformation across neighbouring regions.

EU AI Continent Action Plan shows progress in infrastructure, data and governance

The European Commission has reported significant progress under its AI Continent Action Plan, marking one year of implementation aimed at strengthening Europe’s position in AI. The strategy focuses on infrastructure, data, talent, adoption and trustworthy AI.

Investment in computational capacity has expanded, with AI factories deployed across European supercomputers and further large-scale facilities in development. These initiatives aim to increase access to advanced computing resources for researchers and emerging companies.

On data governance, the Commission introduced the Data Union Strategy and complementary regulatory measures to improve data sharing and provide legal certainty for businesses.

Efforts to support talent development and mobility, alongside new training initiatives in the EU, form another central component of the plan.

The programme also promotes AI adoption across public and industrial sectors through targeted funding and coordinated initiatives. The overall approach reflects a policy framework designed to balance innovation with regulatory oversight and alignment with European values.

EU advances AI copyright safeguards through GPAI taskforce discussions

The European Commission has convened the second meeting of the Signatory Taskforce under the General-Purpose AI (GPAI) Code of Practice, focusing on copyright protection in AI systems.

The discussion brought together signatories to exchange early implementation practices and technical approaches.

Participants examined methods to reduce copyright risks in AI-generated outputs, highlighting measures applied across the model’s lifecycle, including data selection, training, and deployment.

Emphasis was placed on combining technical safeguards with organisational processes to improve transparency and effectiveness.

One approach presented involved training models on licensed content alongside attribution systems that identify similarities between generated outputs and source material. The method aims to support fair remuneration and strengthen accountability within AI development.

The meeting also addressed mechanisms for handling complaints from rights holders, with participants discussing procedures for accessible and timely responses.

The exchange forms part of ongoing EU efforts to refine governance standards for AI systems and copyright compliance.

Greece moves to restrict youth social media access with new digital age rules

Greece has announced new measures to protect minors online, introducing a national ‘digital age of majority’ that restricts access to social media for users under 15.

The policy forms part of a broader strategy addressing child safety and digital overuse, with implementation scheduled for January 2027.

The initiative places primary responsibility on platforms, requiring robust age-verification systems and periodic re-verification of existing accounts. Authorities will oversee compliance under the EU’s Digital Services Act framework, with penalties including fines and operational restrictions for violations.

The policy builds on earlier tools such as KidsWallet, an age-verification mechanism already deployed nationally.

Authorities in Greece argue that reliance on parental control alone is insufficient, citing increasing evidence linking excessive platform use to mental health risks, including anxiety, reduced sleep, and social isolation.

The proposal aligns with wider European discussions on youth protection, including efforts to establish a unified digital age threshold across member states.

Greece has also called for stronger EU-wide enforcement mechanisms, positioning the measure as part of a coordinated approach to safeguarding minors in digital environments.

European Commission consultation closes on draft AI Act procedure rules

The European Commission is closing its consultation on a draft implementing regulation on detailed arrangements for certain proceedings under the AI Act.

The draft states that it lays down detailed arrangements and conditions for the evaluation of general-purpose AI models under Article 92, including procedures for selecting and involving independent experts. It also lays down detailed arrangements and procedural safeguards for proceedings in view of the possible adoption of decisions under Article 101 of Regulation (EU) 2024/1689.

Under Article 2, a European Commission decision requesting access to a general-purpose AI model would have to specify the technical means, components, and conditions by which the provider must provide that access. The draft states that access may include APIs, internal access, source code, model weights, access to the infrastructure used to host the model, access to inspect and modify system state, and all levels of access granted to the provider’s own employees.

The draft also states that the European Commission may require a provider to disable and remove logging measures that could track or record the Commission’s access, to the extent necessary to ensure the integrity and confidentiality of the evaluation process. Providers receiving such an access request would have to provide access in a timely and effective manner.

Regarding independent experts, the draft states that the European Commission must take into account factors such as shared ownership, governance, management, personnel, resources, and contractual relationships when assessing independence. It also states that appointed experts must remain independent throughout their appointment and that the confidentiality, integrity, and availability of sensitive information must be protected.

For proceedings that may lead to fines, the draft states that the European Commission may initiate proceedings against relevant conduct by providers of general-purpose AI models. It also states that the Commission may, by decision, order interim measures on grounds of urgency due to a risk of serious damage to health, safety or other public interests covered by Regulation (EU) 2024/1689, including preventing a general-purpose AI model from being made available on the market, based on a prima facie finding of an infringement.

Procedural safeguards include written observations on preliminary findings, with a time limit of at least 14 days set by the European Commission, and rules governing access to the file. The draft states that the addressee may obtain access to documents mentioned in the preliminary findings, subject to redactions protecting business secrets and other confidential information, while broader access may be granted under terms of disclosure set by the Commission.

The annex sets format and length requirements for written observations submitted under Article 7. It states that observations must be submitted in a format that allows electronic processing, digitisation, and character recognition, and sets requirements for page format, font, spacing, margins, and numbering. Written observations must not exceed 50 pages, while annexes do not count towards that limit if they have a purely evidential and instrumental function and are proportionate in number and length.

The draft also lays down limitation periods for the imposition and enforcement of penalties, rules on the beginning and setting of time periods, and provisions on the transmission and receipt of information. It states that documents transmitted by digital means must use at least one qualified electronic signature and that, for real-time or near real-time information shared through APIs or equivalent means, the European Commission will define the methods and duration of that sharing.

The regulation states that it would enter into force on the twentieth day following its publication in the Official Journal of the European Union.

EU digital identity strengthens after 20 years of .eu expansion

Two decades after the launch of the .eu domain, the EU has marked its role in establishing a unified digital identity across member states.

On 7 April 2006, the .eu top-level domain (TLD) was launched, offering businesses, citizens, and organisations a pan-EU online identity.

Over time, .eu has developed into one of the largest country-code domains globally, with millions of registrations and consistent growth.

Its technical stability and security record, including uninterrupted service since launch, have reinforced its reputation as a reliable digital infrastructure. Investments in fraud detection and data integrity have further strengthened trust in its ecosystem.

The domain has also evolved to reflect the EU’s linguistic diversity, with the introduction of internationalised domain names and additional scripts such as Cyrillic and Greek. These developments have expanded accessibility and reinforced inclusivity within the European digital space.
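The mechanics behind internationalised domain names can be sketched with Python's built-in `idna` codec, which converts Unicode labels to the ASCII-compatible `xn--` (Punycode) form that DNS resolvers actually handle. The Cyrillic name below is a hypothetical illustration, not a registered .eu domain:

```python
# Internationalised domain names (IDNs) are stored in the DNS as
# ASCII-compatible "xn--" labels (Punycode). Python's built-in
# "idna" codec converts between the two forms, label by label.

unicode_domain = "пример.eu"  # hypothetical Cyrillic example ("example.eu")

# Encode: Unicode form -> ASCII-compatible form used by resolvers
ascii_domain = unicode_domain.encode("idna").decode("ascii")
print(ascii_domain)   # xn--e1afmkfd.eu

# Decode: ASCII-compatible form -> Unicode form shown to users
roundtrip = ascii_domain.encode("ascii").decode("idna")
print(roundtrip)      # пример.eu
```

Registries such as .eu accept registrations in the Unicode form, while the wire format remains plain ASCII, which is why additional scripts could be added without changes to the underlying DNS infrastructure.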

Looking ahead, .eu is positioned as a key instrument for advancing digital sovereignty and supporting the Single Market. Its role in global internet governance discussions is expected to grow, particularly as the EU institutions seek to shape a more open, secure, and rights-based digital environment.

The implementation of the EU AI Act with a focus on general-purpose AI models

Transition from legislation to implementation

The European Union has entered a new phase in the governance of AI, moving from the legislative adoption of the Artificial Intelligence Act (AI Act) towards its practical implementation. This phase places particular emphasis on the obligations of providers of general-purpose AI (GPAI) models, reflecting the increasing role of such systems in the broader digital ecosystem.

The AI Act, adopted in 2024, establishes a comprehensive legal framework for AI within the EU. It introduces a risk-based approach that classifies AI systems into categories ranging from minimal risk to unacceptable risk, with corresponding regulatory requirements.

According to the official text of the regulation, the framework is designed to ensure that AI systems placed on the market in the Union are ‘safe and respect existing law on fundamental rights and Union values.’

While earlier discussions around the Act focused on its legislative negotiation and scope, the current phase centres on how its provisions will be applied in practice.

General-purpose AI models within the AI Act

A key element of this implementation phase concerns general-purpose AI models. These models, which can be integrated into a wide range of downstream applications, occupy a distinct position within the regulatory framework.

The AI Act defines general-purpose AI models as systems that can be used across multiple tasks and contexts and may ‘serve a variety of purposes, both for direct use and for integration into other AI systems.’

That positioning reflects the broad applicability of these models, particularly in areas such as natural language processing, content generation, and data analysis.

The Act also recognises that the widespread deployment of such models may have implications beyond individual use cases, particularly when integrated into high-risk systems.

Obligations for providers of GPAI models

The European Commission, together with the European AI Office, has begun outlining expectations for compliance with provisions related to general-purpose AI.

According to official EU materials, providers of GPAI models are required to ensure that technical documentation is drawn up and kept up to date.

The regulation specifies that providers should ‘draw up and keep up-to-date technical documentation of the model,’ ensuring that relevant information is accessible for compliance and oversight purposes. In addition, transparency obligations require providers to make certain information available to downstream deployers.

This is intended to support the responsible integration of GPAI models into other systems.

Distinction between GPAI and systemic-risk models

The AI Act introduces a distinction between general-purpose AI models and those considered to pose systemic risk.

Models that meet specific criteria, such as scale, capability, or deployment level, may be classified as having a systemic impact.

For such models, additional obligations apply, including requirements related to evaluation, risk mitigation, and reporting. The European Commission has indicated that further guidance will clarify how systemic risk thresholds are determined, including through delegated acts and technical standards.

Role of the European AI Office in implementation

The European AI Office, established within the European Commission, plays a central role in supporting the implementation of the AI Act.

Its responsibilities include contributing to the consistent application of the regulation, coordinating with national authorities, and supporting the development of methodologies for compliance.

(Image: European AI Office. Source: digital-strategy.ec.europa.eu/en/policies/ai-office)

According to the European Commission, the AI Office is tasked with ‘ensuring the coherent implementation of the AI Act across the Union.’ The Office is also expected to contribute to the development of benchmarks, testing frameworks, and guidance documents that support both regulators and providers.

Phased implementation timeline

The implementation of the AI Act is structured as a phased process, with different provisions becoming applicable over time.

That phased approach allows stakeholders to adapt to the regulatory requirements while enabling authorities to establish enforcement mechanisms.

Provisions related to general-purpose AI models are among the earlier elements to be operationalised, reflecting their central role in the current AI landscape.

The European Commission has indicated that additional implementing acts and guidance documents will be issued as part of this process.

Coordination with national authorities

While the European AI Office plays a coordinating role at the EU level, enforcement remains the responsibility of national authorities within member states.

The AI Act establishes mechanisms for cooperation and information-sharing to support a harmonised approach across the European Union.

National authorities are expected to work closely with the AI Office and the European Commission to oversee compliance and address emerging challenges.

Stakeholder engagement and technical guidance

The implementation phase also involves engagement with a range of stakeholders, including industry actors, civil society organisations, and technical experts.

The European Commission has also initiated consultations and workshops to gather input on practical aspects of implementation, such as documentation standards and risk assessment methodologies.

This process supports the development of operational guidance applicable across sectors and use cases.

Interaction with the EU digital regulatory framework

The AI Act forms part of a broader EU digital policy framework that includes instruments such as the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), and the Digital Markets Act (DMA).

These frameworks address different aspects of the digital ecosystem, including data protection, platform governance, and market competition.

The relationship between the AI Act and these instruments is expected to be clarified further during implementation.

International context: OECD and UN approaches

The governance of general-purpose AI models is also being addressed at the international level.

The OECD AI Principles state that AI systems should be ‘robust, secure and safe throughout their entire lifecycle,’ and emphasise accountability for their functioning.

At the UN level, the Global Digital Compact process addresses issues related to transparency, accountability, and oversight of digital technologies, including AI.

These initiatives provide non-binding guidance, in contrast to the legally binding framework established by the EU AI Act.

Ongoing development of technical standards

The development of technical standards is an important component of the implementation process.

The European Commission has indicated that it will work with standardisation organisations to develop specifications related to documentation, evaluation, and risk management.

These standards are expected to support the practical application of the AI Act’s provisions.

From regulatory framework to regulatory practice

The current phase of the EU AI Act marks a transition from legislative design to regulatory practice.

For providers of general-purpose AI models, this involves preparing to meet obligations related to documentation, transparency, and risk management. For regulators, the focus is on ensuring consistent application of the rules across member states, supported by coordination mechanisms and guidance from the AI Office.

The implementation process is expected to evolve as further guidance is issued.

Conclusion

The European Union’s AI Act is entering its implementation phase, with a particular focus on general-purpose AI models.

This phase involves translating the regulation’s legal provisions into operational requirements, supported by guidance from the European Commission and the AI Office.

The development of technical standards, coordination mechanisms, and compliance frameworks will play a central role in this process. As implementation progresses, further clarification is expected through additional guidance and regulatory measures, contributing to the operationalisation of the EU’s approach to AI governance.
