Council of the EU extends cyber sanctions framework until 2027

The Council of the European Union has extended restrictive measures against individuals and entities involved in cyber-attacks threatening the EU and its member states until 18 May 2027. The legal framework behind the sanctions regime had already been extended until 18 May 2028.

The framework allows the EU to impose targeted sanctions on persons or entities involved in significant cyber-attacks that constitute an external threat to the Union or its member states. Measures can also be imposed in response to cyber-attacks against third countries or international organisations, where they support Common Foreign and Security Policy objectives.

Current listings under the regime apply to 19 individuals and seven entities. Sanctioned actors face asset freezes, while EU citizens and companies are prohibited from making funds or economic resources available to them. Listed individuals are also subject to travel bans preventing them from entering or transiting through EU territory.

The Council said the individual listings will continue to be reviewed every 12 months. It also said the measures are intended to deter malicious cyber activity and uphold the international rules-based order by ensuring accountability for those responsible.

The sanctions mechanism forms part of the EU’s broader cyber diplomacy toolbox, established in 2017 to strengthen coordinated diplomatic responses to malicious cyber activity. The Council said the EU and its member states would continue working with international partners to promote an open, free, stable and secure cyberspace.

Why does it matter?

The decision shows how cybersecurity has become part of the EU’s foreign policy and sanctions toolkit, not only a matter of technical defence. By extending cyber sanctions listings, the EU is reinforcing its use of diplomatic and economic measures to deter malicious cyber activity, attribute responsibility and signal that significant cyber-attacks can carry geopolitical consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU Commission reviews Android DMA rules on interoperability

The European Commission is consulting third parties on proposed measures requiring Alphabet to ensure effective interoperability between Google Android and AI services under the Digital Markets Act.

The draft measures focus on AI services’ access to key Android capabilities, including wake-word activation, contextual data, integration with applications, and access to hardware and software resources needed for reliable and responsive services.

The Commission opened proceedings in January 2026 to specify how Alphabet should comply with DMA interoperability obligations for features relevant to AI services. Its proposed measures cover invocation, context, actions on apps and the operating system, access to resources, and general requirements such as free access, documented frameworks and APIs, technical assistance and reporting.

Stakeholders were asked to comment on the effectiveness, completeness, feasibility and implementation timelines of the proposed measures, particularly from the perspective of AI service providers and Android device manufacturers.

Input from Alphabet and interested third parties may lead to adjustments before the Commission adopts a final decision making the measures legally binding. The Commission is expected to adopt that decision by 27 July 2026.

Why does it matter?

The case shows how the DMA is being applied to the emerging competitive landscape for AI assistants and mobile operating systems. If third-party AI services need access to Android features such as wake words, contextual data, app actions and on-device resources to compete effectively, interoperability rules could shape which AI tools reach users and how much control gatekeepers retain over mobile AI ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU weighs social media age rules to protect children

The European Commission has signalled that it may propose EU-level rules on delaying children’s access to social media, as concerns grow over addictive platform design, harmful content and AI-enabled risks for minors.

In a keynote address at the European Summit on Artificial Intelligence and Children in Copenhagen, European Commission President Ursula von der Leyen said the EU must consider whether young people should be given more time before using social media. She said the question was not whether young people should have access to social media, but ‘whether social media should have access to young people’.

Von der Leyen said almost all EU member states had called for an assessment of whether a minimum age is needed, while Denmark and nine other member states want to introduce one. She added that the Commission’s expert panel on child safety online is advising on the issue, and that a legal proposal could follow this summer, depending on its findings.

Von der Leyen linked the debate to wider concerns about platform business models. She argued that children’s attention was being treated as a commodity through addictive design, advertising, algorithmic recommendation systems and content that can harm mental health. She also pointed to risks linked to AI-generated sexualised images and child sexual abuse material.

The Commission President cited enforcement under the Digital Services Act, including actions involving TikTok, Meta and X, as well as investigations into platforms over whether children are being drawn into harmful content. She said the EU had created strong tools through the Digital Services Act and the Digital Markets Act, and that platforms breaking the rules would be held accountable.

Von der Leyen said that any age restriction model would depend on reliable age verification. She said the EU had developed an open-source age verification app that would soon be available, including a rollout in Denmark by summer, and that the Union was working with member states to integrate it into digital wallets.

The speech also framed child online safety as a matter of platform responsibility, not just parental control. Von der Leyen said social media companies should be responsible for product safety in the same way other industries are, adding that ‘safety by design’ protections should be strengthened and expanded. She also pointed to the forthcoming Digital Fairness Act, which is expected to address addictive and harmful design practices.

Why does it matter?

The speech suggests that EU child online safety policy may be moving from platform accountability after harm occurs towards more structural controls over access, design and age verification. A possible delay to children’s access to social media would mark a major shift in how the EU approaches children’s participation online, raising questions about privacy-preserving age checks, children’s rights, parental responsibility, platform duties and the balance between protection and digital inclusion.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Ombudsman criticises Commission over X risk report access

The European Ombudsman has criticised the European Commission’s handling of a request for public access to a risk assessment report submitted by social media platform X under the Digital Services Act.

The case concerned a journalist’s request to access X’s 2023 risk assessment report, which large online platforms must provide under the DSA. The Commission refused to assess the report for possible disclosure, arguing that access could undermine X’s commercial interests, an ongoing DSA investigation and an independent audit.

The Ombudsman found it unreasonable for the Commission to rely on a general presumption of non-disclosure rather than individually assessing the report. She said the circumstances in which the EU courts have allowed such presumptions differ from the rules applying to DSA risk assessment reports.

Although X has since made the report public with redactions, the Ombudsman recommended that the Commission conduct its own assessment and aim to give the journalist the widest access possible, including potentially to parts redacted by the company. If access is refused for any sections, the Commission must explain why.

The finding of maladministration highlights the importance of transparency in the oversight of very large online platforms under the DSA, particularly where documents are relevant to public scrutiny of platform risk management and regulatory enforcement.

Why does it matter?

The case tests how far transparency obligations around very large online platforms can be limited by broad claims of commercial sensitivity or ongoing investigations. DSA risk assessment reports are central to understanding how platforms identify and manage systemic risks, so access decisions affect public oversight of EU digital regulation as much as the rights of individual requesters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The EDPS evidently wants Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. The post’s significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU briefing warns AI health benefits need safeguards

A European Parliamentary Research Service briefing says AI could improve healthcare, disease prevention and well-being across the EU, but warns that its growing use in health advice, AI companions and tools used by children, young people and older adults requires strong safeguards and human oversight.

The briefing, focused on health and well-being in the age of AI, says AI is already supporting diagnostics, personalised treatment, health-risk forecasting, hospital management, pharmaceutical development and disease surveillance. It points to use cases in areas such as radiology, oncology, cardiology, rare diseases and cross-border health data exchange.

AI-powered health chatbots and virtual assistants can help people access health information, understand complex topics and prepare for medical consultations. However, the briefing warns that such tools may also create privacy risks, spread inaccurate or misleading information, and encourage users to delay or replace professional medical advice.

AI companions are presented as another area where benefits and risks coexist. They may support social interaction and alert caregivers when people are at risk of isolation, but cannot replace human relationships and may deepen loneliness or worsen mental health risks for vulnerable users.

For older adults, AI-enabled wearables, in-home sensors, assistive technologies and smart care platforms could support independent living and improve care. At the same time, the briefing warns of privacy and data security concerns, emotional dependency and the risk that technology could replace rather than complement personal interaction.

Young people and children face different risks as AI becomes part of daily life, learning, health advice and social interaction. The briefing highlights possible exposure to harmful content, cyberbullying, emotional dependency, privacy violations, reduced critical thinking, sleep disruption, sedentary behaviour and social withdrawal.

The research service says the EU AI Act, the General Data Protection Regulation, the European Health Data Space, and sector-specific rules on medical devices and diagnostics form part of the EU framework for managing these risks. It concludes that AI’s health benefits can be realised only if innovation is balanced with safeguards, digital skills and a commitment to keeping human care and social connection at the centre.

Why does it matter?

AI is becoming part of healthcare not only through clinical tools, but also through consumer-facing chatbots, companions, wearables and support systems used by vulnerable groups. That widens the policy challenge from medical safety to privacy, misinformation, emotional dependency, digital skills and the preservation of human care.

The briefing shows why health-related AI governance cannot rely only on innovation or efficiency gains. Trustworthy use will depend on safeguards that protect patients, children, older adults and other vulnerable users while ensuring AI supports, rather than replaces, professional care and social connection.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn faces allegations over data access practices

Privacy rights group noyb has filed a complaint against LinkedIn, alleging that the platform restricts access to certain user data by placing it behind a paid Premium subscription.

The complaint centres on LinkedIn’s ‘Who’s viewed your profile’ feature, which shows users who have visited their profile. According to noyb, LinkedIn tracks profile visits and makes detailed visitor information available to Premium subscribers, while refusing to provide the same data free of charge when users submit an access request under Article 15 of the GDPR.

Noyb argues that users have the right to receive their own personal data free of charge under EU data protection rules. The organisation claims that LinkedIn has cited data protection concerns when refusing access requests, despite making similar information available through its paid subscription service.

The complaint was lodged with the Austrian Data Protection Authority and seeks enforcement action requiring LinkedIn to provide the data requested, as well as potential penalties. Noyb also questions whether LinkedIn’s tracking of profile visits complies with EU consent requirements.

LinkedIn has reportedly denied the allegations, saying it complies with applicable rules and provides relevant information in accordance with its privacy policies.

The case adds to ongoing scrutiny of how digital platforms handle data access rights in the EU, particularly when information collected about users is also used for paid services.

Why does it matter?

The complaint tests whether platforms can monetise access to information that may also fall under users’ GDPR right of access. If regulators side with noyb, the case could affect how subscription-based platforms structure premium features that involve personal data, especially when the same data is withheld from non-paying users who make formal access requests.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OECD finds audit institutions are building AI capacity but struggling to scale

Public audit institutions are expanding their use of AI, but most remain at an early stage of adoption, with a significant gap between pilot projects and full operational deployment, according to a new OECD paper.

Drawing on consultations with 15 institutions across 14 countries and the European Union, the paper says AI is being explored to strengthen oversight and improve audit processes in areas such as anomaly detection, document processing, knowledge management and predictive risk assessment.

The OECD says institutional commitment is already visible across several indicators. Among the institutions consulted, 67% reported having a formal AI strategy, 80% had internal AI guidelines or policies, 87% offered AI-related staff training, and 87% had at least one AI tool in production.

However, the paper stresses that maturity levels vary widely and that many tools remain limited in scale or are still being tested. It identifies a gap between experimentation and scalable operational deployment, despite the growing integration of AI into broader digital transformation efforts.

The paper highlights several emerging audit use cases, including machine-learning systems for anomaly detection in procurement and financial records, predictive models to identify entities at higher risk of distress or non-compliance, intelligent document processing for extracting data from unstructured files, and generative AI tools for drafting, summarising and translating documents.

It also points to more specialised applications, such as semantic search, knowledge management, and visual or spatial analysis using satellite imagery, drones or other sensor-based systems.

Despite growing experimentation, the OECD says the main barriers to wider use remain structural. Fragmented data systems, weak interoperability, limited internal technical expertise and uneven digital infrastructure continue to slow progress.

The paper argues that robust data governance, secure and interoperable systems, and stronger in-house development capacity will be critical if public audit bodies are to scale AI responsibly while maintaining transparency, accountability and public trust.

It also stresses that AI is being positioned as a support tool rather than a substitute for auditors. Across the cases reviewed, human oversight remains central, both because of current limitations in explainability and reliability and because audit institutions are treating AI adoption cautiously in high-stakes oversight settings.

The OECD presents the current period as a transitional phase in which public audit institutions are building the foundations needed for broader and more trustworthy use of AI in oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU reaches provisional deal on targeted AI Act changes

The Council presidency and European Parliament negotiators have reached a provisional agreement on targeted changes to the EU AI Act as part of the Omnibus VII package, which aims to simplify parts of the Union’s digital rulebook and ease implementation burdens.

According to the announcement, the deal broadly preserves the thrust of the Commission’s proposal on high-risk AI systems. The provisional agreement sets new application dates of 2 December 2027 for stand-alone high-risk AI systems and 2 August 2028 for high-risk AI systems embedded in products.

The agreement also extends certain simplification measures beyond SMEs to small mid-caps, while keeping some safeguards. It reinstates the obligation for providers to register AI systems in the EU database where they consider those systems exempt from high-risk classification, and restores the requirement of strict necessity for processing special categories of personal data for bias detection and correction.

At the same time, the co-legislators added a new prohibited AI practice covering the generation of non-consensual sexual and intimate content and child sexual abuse material (CSAM). The deal also postpones the deadline for national AI regulatory sandboxes to 2 August 2027 and shortens the grace period for transparency measures for AI-generated content from 6 months to 3 months, with a new deadline of 2 December 2026.

The provisional agreement further clarifies the division of supervisory powers between the AI Office and national authorities, particularly where general-purpose AI models and downstream AI systems are developed by the same provider, by listing exceptions where national authorities remain competent. It also addresses overlaps between the AI Act and sectoral legislation in areas such as medical devices, toys, machinery, lifts and watercraft: where sectoral law imposes AI-specific requirements similar to those of the AI Act, the AI Act’s application can be limited through implementing acts. A specific solution was found for the machinery regulation, which is exempted from the direct applicability of the AI Act; instead, the Commission is empowered to adopt delegated acts under that regulation adding health and safety requirements for AI systems classified as high-risk under the AI Act.

The text must still be endorsed by both the Council and the European Parliament before undergoing legal and linguistic revision and formal adoption. The proposal is part of the EU’s broader simplification agenda, which has been driven by calls from the European Council and followed by a series of Omnibus packages since early 2025.

Marilena Raouna, Deputy Minister for European Affairs of the Republic of Cyprus, elaborated: ‘Today’s agreement on the AI Act significantly supports our companies by reducing recurring administrative costs. It ensures legal certainty and a smoother and more harmonised implementation of the rules across the Union, strengthening EU’s digital sovereignty and overall competitiveness.’

Raouna added: ‘At the same time, we are stepping up the protection of children, targeting risks linked to AI systems. This agreement is clear evidence of our institutions’ ability to act swiftly and deliver on our commitments. It marks the first deliverable under the “One Europe, One Market” roadmap agreed by the three institutions last week, well within the set deadline.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Commission publishes first Digital Markets Act review

The European Commission has published its first formal review of the Digital Markets Act, assessing how the regulation is affecting the behaviour of large online platforms in the EU digital economy. According to the review, the law has produced visible changes in some areas, while also exposing continuing problems in implementation and enforcement.

The review points to changes in user choice since the DMA entered into force in March 2024. These include support for third-party app stores and prompts on devices to select browsers or search engines, alongside reported increases in usage and downloads of alternative services.

Enforcement action is also a central part of the assessment. In April 2025, Apple was fined €500 million for blocking developers from directing users to cheaper purchasing options, while Meta was fined €200 million over its ‘consent or pay’ model. Both companies are appealing the decisions.

At the same time, the review identifies clear implementation challenges. It says investigations are taking around twice as long as the 12-month target, while legal procedures are being used to slow compliance. It also raises broader questions about whether fast-growing areas such as AI tools and cloud platforms should eventually be brought within the scope of the regulation.

The Digital Markets Act is therefore presented less as a completed intervention than as an ongoing regulatory process. The review suggests that its long-term impact will depend not only on the rules already in force, but also on how consistently they are enforced and how the EU responds to changes in digital markets.

Why does it matter?

The review matters because it shows that the real test of the Digital Markets Act is no longer whether the EU can write rules for large platforms, but whether it can enforce them quickly and adapt them to new market realities. Early changes in user choice suggest the law is starting to affect platform behaviour. However, delays in investigations and questions around AI and cloud services show that the regulatory contest is still evolving.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!