Council compromise text advances EU AI Act changes

The Council of the European Union has confirmed agreement on a compromise text for the Digital Omnibus on AI, a proposal intended to simplify parts of the EU AI Act’s implementation while preserving protections for health, safety, and fundamental rights.

The Permanent Representatives Committee confirmed the agreement on 13 May 2026, following informal negotiations between the EU institutions on 6 May. The Council Presidency was authorised to send a letter to the European Parliament stating that, if Parliament adopts the text at first reading, the Council will approve Parliament’s position.

The compromise text amends Regulation (EU) 2024/1689 on AI and Regulation (EU) 2018/1139 on civil aviation. It says targeted changes are needed because delayed standards, national governance structures, and conformity assessment frameworks have created compliance burdens heavier than expected.

The proposal would adjust several AI Act implementation rules, including provisions on AI literacy, treatment of small mid-cap enterprises, conformity assessment, AI regulatory sandboxes, real-world testing, and the role of the AI Office. It would also simplify some registration and monitoring requirements while providing more time for high-risk AI obligations to apply.

One major addition concerns prohibited AI practices. The text would prohibit placing on the market, putting into service, or using AI systems that generate or manipulate realistic non-consensual intimate images, videos, audio, or similar material of identifiable people. It would also prohibit AI systems that generate or manipulate child sexual abuse material, subject to limited lawful exceptions.

The compromise text also modifies the AI literacy obligation. Instead of requiring providers and deployers to ensure a sufficient level of AI literacy among staff, the revised wording would require them to take measures to support AI literacy, while clarifying that they are not required to guarantee a specific level for each individual.

For high-risk AI systems, the compromise text proposes delayed application dates for certain obligations: 2 December 2027 for systems classified as high-risk under Article 6(2) and Annex III, and 2 August 2028 for systems classified as high-risk under Article 6(1) and Annex I. The text says this is intended to address implementation challenges linked to delayed standards, guidance, and national competent authorities.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

CJEU backs fair remuneration for press publishers

The Court of Justice of the European Union (CJEU) has ruled that member states may allow press publishers to claim fair remuneration when they authorise online service providers to use their publications.

The judgment came in a case involving Meta Platforms Ireland’s challenge to a decision by the Italian Communications Regulatory Authority (AGCOM) setting criteria for determining fair remuneration for online use of press publications. Meta argued that the Italian framework conflicted with EU rules on publishers’ rights under the Digital Single Market copyright directive.

The CJEU found that a fair remuneration right for publishers can be compatible with EU law if the payment is consideration for authorising online service providers to use press publications. Publishers must also be able to refuse authorisation or grant it free of charge, and online service providers cannot be required to pay for it when they do not use the publications.

The ruling also says online service providers may be required to negotiate with publishers without limiting content visibility during talks and to provide data needed to calculate remuneration. The CJEU said such obligations may restrict the freedom to conduct a business, but appear justified where they help ensure fair negotiations and support EU objectives on copyright, media pluralism, and publishers’ ability to recoup investments.

The CJEU also found that powers granted to AGCOM to set criteria, determine remuneration in the event of disagreement, ensure compliance with information obligations, and impose penalties may be permissible if they support the effective implementation of publishers’ rights.

The final assessment remains for the national court, which must verify whether the Italian legislation satisfies the conditions identified by the CJEU.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Council of the EU extends cyber sanctions framework until 2027

The Council of the European Union has extended restrictive measures against individuals and entities involved in cyber-attacks threatening the EU and its member states until 18 May 2027. The legal framework behind the sanctions regime had already been extended until 18 May 2028.

The framework allows the EU to impose targeted sanctions on persons or entities involved in significant cyber-attacks that constitute an external threat to the Union or its member states. Measures can also be imposed in response to cyber-attacks against third countries or international organisations, where they support Common Foreign and Security Policy objectives.

Current listings under the regime apply to 19 individuals and seven entities. Sanctioned actors face asset freezes, while EU citizens and companies are prohibited from making funds or economic resources available to them. Listed individuals are also subject to travel bans preventing them from entering or transiting through EU territory.

The Council said the individual listings will continue to be reviewed every 12 months. It also said the measures are intended to deter malicious cyber activity and uphold the international rules-based order by ensuring accountability for those responsible.

The sanctions mechanism forms part of the EU’s broader cyber diplomacy toolbox, established in 2017 to strengthen coordinated diplomatic responses to malicious cyber activity. The Council said the EU and its member states would continue working with international partners to promote an open, free, stable and secure cyberspace.

Why does it matter?

The decision shows how cybersecurity has become part of the EU’s foreign policy and sanctions toolkit, not only a matter of technical defence. By extending cyber sanctions listings, the EU is reinforcing its use of diplomatic and economic measures to deter malicious cyber activity, attribute responsibility and signal that significant cyber-attacks can carry geopolitical consequences.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU Commission reviews Android DMA rules on interoperability

The European Commission is consulting third parties on proposed measures requiring Alphabet to ensure effective interoperability between Google Android and AI services under the Digital Markets Act.

The draft measures focus on AI services’ access to key Android capabilities, including wake-word activation, contextual data, integration with applications, and access to hardware and software resources needed for reliable and responsive services.

The Commission opened proceedings in January 2026 to specify how Alphabet should comply with DMA interoperability obligations for features relevant to AI services. Its proposed measures cover invocation, context, actions on apps and the operating system, access to resources, and general requirements such as free access, documented frameworks and APIs, technical assistance and reporting.

Stakeholders were asked to comment on the effectiveness, completeness, feasibility and implementation timelines of the proposed measures, particularly from the perspective of AI service providers and Android device manufacturers.

Input from Alphabet and interested third parties may lead to adjustments before the Commission adopts a final decision making the measures legally binding. The Commission is expected to adopt that decision by 27 July 2026.

Why does it matter?

The case shows how the DMA is being applied to the emerging competitive landscape for AI assistants and mobile operating systems. If third-party AI services need access to Android features such as wake words, contextual data, app actions and on-device resources to compete effectively, interoperability rules could shape which AI tools reach users and how much control gatekeepers retain over mobile AI ecosystems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU weighs social media age rules to protect children

The European Commission has signalled that it may propose EU-level rules on delaying children’s access to social media, as concerns grow over addictive platform design, harmful content and AI-enabled risks for minors.

In a keynote address at the European Summit on Artificial Intelligence and Children in Copenhagen, European Commission President Ursula von der Leyen said the EU must consider whether young people should be given more time before using social media. She said the question was not whether young people should have access to social media, but ‘whether social media should have access to young people’.

Von der Leyen said almost all EU member states had called for an assessment of whether a minimum age is needed, while Denmark and nine other member states want to introduce one. She added that the Commission’s expert panel on child safety online is advising on the issue, and that a legal proposal could follow this summer, depending on its findings.

Von der Leyen linked the debate to wider concerns about platform business models. She argued that children’s attention was being treated as a commodity through addictive design, advertising, algorithmic recommendation systems and content that can harm mental health. She also pointed to risks linked to AI-generated sexualised images and child sexual abuse material.

The Commission President cited enforcement under the Digital Services Act, including actions involving TikTok, Meta and X, as well as investigations into platforms over whether children are being drawn into harmful content. She said the EU had created strong tools through the Digital Services Act and the Digital Markets Act, and that platforms breaking the rules would be held accountable.

Von der Leyen said that any age restriction model would depend on reliable age verification. She said the EU had developed an open-source age verification app that would soon be available, including a rollout in Denmark by summer, and that the Union was working with member states to integrate it into digital wallets.

The speech also framed child online safety as a matter of platform responsibility, not just parental control. Von der Leyen said social media companies should be responsible for product safety in the same way other industries are, adding that ‘safety by design’ protections should be strengthened and expanded. She also pointed to the forthcoming Digital Fairness Act, which is expected to address addictive and harmful design practices.

Why does it matter?

The speech suggests that EU child online safety policy may be moving from platform accountability after harm occurs towards more structural controls over access, design and age verification. A possible delay to children’s access to social media would mark a major shift in how the EU approaches their participation online, raising questions about privacy-preserving age checks, children’s rights, parental responsibility, platform duties and the balance between protection and digital inclusion.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Ombudsman criticises Commission over X risk report access

The European Ombudsman has criticised the European Commission’s handling of a request for public access to a risk assessment report submitted by social media platform X under the Digital Services Act.

The case concerned a journalist’s request to access X’s 2023 risk assessment report, which large online platforms must provide under the DSA. The Commission refused to assess the report for possible disclosure, arguing that access could undermine X’s commercial interests, an ongoing DSA investigation and an independent audit.

The Ombudsman found it unreasonable for the Commission to rely on a general presumption of non-disclosure rather than individually assessing the report. She said the circumstances in which the EU courts have allowed such presumptions differ from the rules applying to DSA risk assessment reports.

Although X has since made the report public with redactions, the Ombudsman recommended that the Commission conduct its own assessment and aim to give the journalist the widest access possible, including potentially to parts redacted by the company. If access is refused for any sections, the Commission must explain why.

The finding of maladministration highlights the importance of transparency in the oversight of very large online platforms under the DSA, particularly where documents are relevant to public scrutiny of platform risk management and regulatory enforcement.

Why does it matter?

The case tests how far transparency obligations around very large online platforms can be limited by broad claims of commercial sensitivity or ongoing investigations. DSA risk assessment reports are central to understanding how platforms identify and manage systemic risks, so access decisions affect public oversight of EU digital regulation as much as the rights of individual requesters.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The EDPS evidently wants Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. The post’s significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU briefing warns AI health benefits need safeguards

A European Parliamentary Research Service briefing says AI could improve healthcare, disease prevention and well-being across the EU, but warns that its growing use in health advice, AI companions and tools used by children, young people and older adults requires strong safeguards and human oversight.

The briefing, focused on health and well-being in the age of AI, says AI is already supporting diagnostics, personalised treatment, health-risk forecasting, hospital management, pharmaceutical development and disease surveillance. It points to use cases in areas such as radiology, oncology, cardiology, rare diseases and cross-border health data exchange.

AI-powered health chatbots and virtual assistants can help people access health information, understand complex topics and prepare for medical consultations. However, the briefing warns that such tools may also create privacy risks, spread inaccurate or misleading information, and encourage users to delay or replace professional medical advice.

AI companions are presented as another area where benefits and risks coexist. They may support social interaction and alert caregivers when people are at risk of isolation, but cannot replace human relationships and may deepen loneliness or worsen mental health risks for vulnerable users.

For older adults, AI-enabled wearables, in-home sensors, assistive technologies and smart care platforms could support independent living and improve care. At the same time, the briefing warns of privacy and data security concerns, emotional dependency and the risk that technology could replace rather than complement personal interaction.

Young people and children face different risks as AI becomes part of daily life, learning, health advice and social interaction. The briefing highlights possible exposure to harmful content, cyberbullying, emotional dependency, privacy violations, reduced critical thinking, sleep disruption, sedentary behaviour and social withdrawal.

The research service says the EU AI Act, the General Data Protection Regulation, the European Health Data Space, and sector-specific rules on medical devices and diagnostics form part of the EU framework for managing these risks. It concludes that AI’s health benefits can be realised only if innovation is balanced with safeguards, digital skills and a commitment to keeping human care and social connection at the centre.

Why does it matter?

AI is becoming part of healthcare not only through clinical tools, but also through consumer-facing chatbots, companions, wearables and support systems used by vulnerable groups. That widens the policy challenge from medical safety to privacy, misinformation, emotional dependency, digital skills and the preservation of human care.

The briefing shows why health-related AI governance cannot rely only on innovation or efficiency gains. Trustworthy use will depend on safeguards that protect patients, children, older adults and other vulnerable users while ensuring AI supports, rather than replaces, professional care and social connection.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn faces allegations over data access practices

Privacy rights group noyb has filed a complaint against LinkedIn, alleging that the platform restricts access to certain user data by placing it behind a paid Premium subscription.

The complaint centres on LinkedIn’s ‘Who’s viewed your profile’ feature, which shows users who have visited their profile. According to noyb, LinkedIn tracks profile visits and makes detailed visitor information available to Premium subscribers, while refusing to provide the same data free of charge when users submit an access request under Article 15 of the GDPR.

Noyb argues that users have the right to receive their own personal data free of charge under EU data protection rules. The organisation claims that LinkedIn has cited data protection concerns when refusing access requests, despite making similar information available through its paid subscription service.

The complaint was lodged with the Austrian Data Protection Authority and seeks enforcement action requiring LinkedIn to provide the data requested, as well as potential penalties. Noyb also questions whether LinkedIn’s tracking of profile visits complies with EU consent requirements.

LinkedIn has reportedly denied the allegations, saying it complies with applicable rules and provides relevant information in accordance with its privacy policies.

The case adds to ongoing scrutiny of how digital platforms handle data access rights in the EU, particularly when information collected about users is also used for paid services.

Why does it matter?

The complaint tests whether platforms can monetise access to information that may also fall under users’ GDPR right of access. If regulators side with noyb, the case could affect how subscription-based platforms structure premium features that involve personal data, especially when the same data is withheld from non-paying users who make formal access requests.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OECD finds audit institutions are building AI capacity but struggling to scale

Public audit institutions are expanding their use of AI, but most remain at an early stage of adoption, with a significant gap between pilot projects and full operational deployment, according to a new OECD paper.

Drawing on consultations with 15 institutions across 14 countries and the European Union, the paper says AI is being explored to strengthen oversight and improve audit processes in areas such as anomaly detection, document processing, knowledge management and predictive risk assessment.

The OECD says institutional commitment is already visible across several indicators. Among the institutions consulted, 67% reported having a formal AI strategy, 80% had internal AI guidelines or policies, 87% offered AI-related staff training, and 87% had at least one AI tool in production.

However, the paper stresses that maturity levels vary widely and that many tools remain limited in scale or are still being tested. It identifies a gap between experimentation and scalable operational deployment, despite the growing integration of AI into broader digital transformation efforts.

The paper highlights several emerging audit use cases, including machine-learning systems for anomaly detection in procurement and financial records, predictive models to identify entities at higher risk of distress or non-compliance, intelligent document processing for extracting data from unstructured files, and generative AI tools for drafting, summarising and translating documents.

It also points to more specialised applications, such as semantic search, knowledge management, and visual or spatial analysis using satellite imagery, drones or other sensor-based systems.

Despite growing experimentation, the OECD says the main barriers to wider use remain structural. Fragmented data systems, weak interoperability, limited internal technical expertise and uneven digital infrastructure continue to slow progress.

The paper argues that robust data governance, secure and interoperable systems, and stronger in-house development capacity will be critical if public audit bodies are to scale AI responsibly while maintaining transparency, accountability and public trust.

It also stresses that AI is being positioned as a support tool rather than a substitute for auditors. Across the cases reviewed, human oversight remains central, both because of current limitations in explainability and reliability and because audit institutions are treating AI adoption cautiously in high-stakes oversight settings.

The OECD presents the current period as a transitional phase in which public audit institutions are building the foundations needed for broader and more trustworthy use of AI in oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!