EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The EDPS wants Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. Its significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU briefing warns AI health benefits need safeguards

A European Parliamentary Research Service briefing says AI could improve healthcare, disease prevention and well-being across the EU, but warns that its growing use in health advice, AI companions and tools used by children, young people and older adults requires strong safeguards and human oversight.

The briefing, focused on health and well-being in the age of AI, says AI is already supporting diagnostics, personalised treatment, health-risk forecasting, hospital management, pharmaceutical development and disease surveillance. It points to use cases in areas such as radiology, oncology, cardiology, rare diseases and cross-border health data exchange.

AI-powered health chatbots and virtual assistants can help people access health information, understand complex topics and prepare for medical consultations. However, the briefing warns that such tools may also create privacy risks, spread inaccurate or misleading information, and encourage users to delay or replace professional medical advice.

AI companions are presented as another area where benefits and risks coexist. They may support social interaction and alert caregivers when people are at risk of isolation, but cannot replace human relationships and may deepen loneliness or worsen mental health risks for vulnerable users.

For older adults, AI-enabled wearables, in-home sensors, assistive technologies and smart care platforms could support independent living and improve care. At the same time, the briefing warns of privacy and data security concerns, emotional dependency and the risk that technology could replace rather than complement personal interaction.

Young people and children face different risks as AI becomes part of daily life, learning, health advice and social interaction. The briefing highlights possible exposure to harmful content, cyberbullying, emotional dependency, privacy violations, reduced critical thinking, sleep disruption, sedentary behaviour and social withdrawal.

The research service says the EU AI Act, the General Data Protection Regulation, the European Health Data Space, and sector-specific rules on medical devices and diagnostics form part of the EU framework for managing these risks. It concludes that AI’s health benefits can be realised only if innovation is balanced with safeguards, digital skills and a commitment to keeping human care and social connection at the centre.

Why does it matter?

AI is becoming part of healthcare not only through clinical tools, but also through consumer-facing chatbots, companions, wearables and support systems used by vulnerable groups. That widens the policy challenge from medical safety to privacy, misinformation, emotional dependency, digital skills and the preservation of human care.

The briefing shows why health-related AI governance cannot rely only on innovation or efficiency gains. Trustworthy use will depend on safeguards that protect patients, children, older adults and other vulnerable users while ensuring AI supports, rather than replaces, professional care and social connection.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Canada issues age assurance guidance

The Office of the Privacy Commissioner of Canada has issued guidance on how organisations should assess and implement age assurance tools for websites and online services.

The OPC states that age assurance should only be used where there is a clear legal requirement or a demonstrable risk of harm to children. It emphasises that organisations must evaluate whether alternative, less intrusive measures could address these risks before adopting such systems.

The guidance highlights that any age assurance approach, including those that use AI, must be proportionate, limit personal data collection, and operate in a privacy-protective manner. It also warns against using collected data for other purposes or linking user activity across sessions.

The OPC adds that organisations must let users choose which type of personal information is used in an age-assurance process, provide appeal mechanisms, and minimise repeated verification. The framework aims to balance child protection with privacy rights, and the guidance applies to online services in Canada.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SHEIN faces Irish inquiry over EU data transfers to China

Ireland’s Data Protection Commission has opened an inquiry into Infinite Styles Services Co. Ltd. (known as SHEIN Ireland), over transfers of personal data of EU and EEA users to China.

The inquiry will examine whether SHEIN Ireland has complied with its obligations under the General Data Protection Regulation in relation to those transfers. The DPC said it will assess compliance with GDPR principles on personal data processing, transparency obligations under Article 13, and Chapter V requirements governing transfers of personal data to third countries.

The regulator said its decision to begin the inquiry was issued to SHEIN Ireland at the end of April. The case comes as data transfers to China face growing regulatory scrutiny in Europe, including through recent DPC enforcement action and complaints filed with other European supervisory authorities.

Deputy Commissioner Graham Doyle said: ‘When an individual’s personal data is transferred to a country outside the EU, the GDPR requires that this personal data is afforded essentially the same protections as it would within the EU.’

He added: ‘Recent regulatory action by the DPC, together with complaints to other European supervisory authorities, has brought data transfers to China, in particular, into focus. The inquiry is an important strategic priority for the DPC and we intend to cooperate closely with our peer European Supervisory Authorities as part of the investigation.’

Under the GDPR, transfers of personal data outside the EU and EEA must meet specific safeguards so that the level of protection provided under EU law is not undermined. Where no European Commission adequacy decision exists for a third country, organisations must rely on alternative mechanisms, such as standard contractual clauses, and demonstrate that equivalent protections are in place.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Instagram pulls the plug on encrypted chats

Instagram will no longer support end-to-end encrypted chats from 8 May 2026, ending an optional privacy feature for some direct messages on the platform.

Users affected by the change are being prompted to download any messages or media from encrypted chats that they wish to keep before the feature is removed. Instagram’s help page says users may need to update the app to access or download their end-to-end encrypted chats.

End-to-end encryption allows only the people in a conversation to read messages or hear calls, with messages protected by encryption keys linked to authorised devices. On Instagram, however, encrypted chats were an optional feature rather than the default for all direct messages.

After 8 May 2026, users will no longer be able to send or receive end-to-end encrypted messages or calls on Instagram. The help page also notes that users can still report messages from encrypted chats and that shared content may still be forwarded outside an encrypted conversation.

The change marks a rollback of a privacy feature on one of Meta’s major social platforms, even as end-to-end encryption remains central to debates over secure communications, platform safety and user confidentiality.

Why does it matter?

End-to-end encryption is widely seen as a core privacy protection because it limits access to message content, including by the platform itself. Its removal from Instagram encrypted chats raises questions about how major platforms prioritise privacy features, user safety, product complexity and interoperability across their messaging services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI introduces a trusted contact safety feature in ChatGPT

OpenAI has started rolling out Trusted Contact, an optional safety feature in ChatGPT designed to help connect adult users with real-world support during moments of serious emotional distress.

The feature allows users to nominate one trusted adult, such as a friend, family member or caregiver, who may receive a notification if OpenAI’s automated systems and trained reviewers detect that the user may have discussed self-harm in a way that indicates a serious safety concern.

OpenAI said the feature is intended to add another layer of support alongside existing safeguards in ChatGPT, including prompts that encourage users to contact crisis hotlines, emergency services, mental health professionals, or trusted people when appropriate. The company stressed that Trusted Contact does not replace professional care or crisis services.

Users can add a trusted contact through ChatGPT settings. The contact receives an invitation explaining the role and must accept it within one week before the feature becomes active. Users can later edit or remove their trusted contact, while the trusted contact can also remove themselves.

If ChatGPT detects a possible serious self-harm concern, the user is informed that their trusted contact may be notified and is encouraged to reach out directly. A small team of specially trained reviewers then assesses the situation before any notification is sent.

OpenAI said notifications are intentionally limited and do not include chat details or transcripts. Instead, they indicate only the general reason, namely that self-harm came up in a potentially concerning way, and encourage the trusted contact to check in. The company said every notification undergoes human review, and it aims to complete that review in under one hour.

The feature was developed with guidance from clinicians, researchers and organisations specialising in mental health and suicide prevention, including the American Psychological Association. OpenAI said Trusted Contact forms part of broader efforts to improve how AI systems respond to people experiencing distress and connect them with real-world care, relationships and resources.

Why does it matter?

Trusted Contact points to a broader shift in AI safety away from content moderation alone toward real-world support mechanisms for users in moments of vulnerability. As conversational AI systems become part of everyday personal reflection and emotional support, companies face growing pressure to define when and how they should intervene, how much privacy to preserve, and what role human review should play in high-risk situations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Central Bank moves forward with digital euro technical work

The European Central Bank is advancing technical work on the digital euro, a proposed electronic form of central bank money designed to complement cash in an increasingly digital payments landscape.

The project reflects Europe’s response to the rapid shift towards digital payments, where cards, apps and mobile wallets are increasingly used for everyday transactions. The ECB says a digital euro would provide a European payment option that could be used across the euro area, both online and offline.

Users would be able to store digital euro holdings in an account set up with a bank or public intermediary and use them for in-store, online and person-to-person payments. The ECB says the system would aim to combine the convenience of digital payments with features associated with cash, including offline functionality.

Policy objectives include strengthening Europe’s strategic autonomy in payments, supporting monetary sovereignty and ensuring access to public money in digital form. The ECB has also presented privacy as a central design feature, saying offline digital euro payments would offer cash-like privacy, with transaction details known only to the payer and the recipient.

The project remains conditional on the EU legislative process. The ECB aims to be technically ready for a potential first issuance of the digital euro in 2029, assuming the necessary EU legislation is adopted in 2026.

Supporters view the digital euro as a way to preserve the role of central bank money in digital payments and reduce reliance on non-European payment providers. Debate continues over how to balance innovation, privacy, financial inclusion, bank intermediation and public trust.

Why does it matter?

The digital euro would shape how public money functions in a digital economy increasingly dominated by private payment platforms and international card schemes. Its significance lies not only in creating a new payment tool, but in preserving access to central bank money, supporting European payment sovereignty and setting privacy expectations for public digital infrastructure.

Its success will depend on whether the final design can offer clear benefits over existing payment options while maintaining trust, usability and strong safeguards. The project also raises broader questions about how central banks remain relevant in everyday payments without crowding out private-sector innovation or weakening the role of commercial banks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Dutch court backs Solvinity DigiD contract despite US data access fears

The District Court of The Hague has rejected an attempt by three Dutch citizens to block the government from renewing its contract with Solvinity, the company responsible for hosting and technically managing systems linked to DigiD.

The plaintiffs argued that Solvinity’s planned acquisition by US-based IT provider Kyndryl could place sensitive data from more than 16 million DigiD users under US jurisdiction, potentially exposing it to US authorities and creating risks to critical public services such as healthcare, pensions, taxes, and unemployment systems.

Despite these concerns, the court ruled in favour of the Dutch State, allowing the agreement to proceed. Judges did not accept arguments that the deal would immediately threaten data security or justify halting the contract.

The decision leaves further scrutiny to the Investment Assessment Office, which is reviewing national security risks linked to the acquisition. The case highlights ongoing tensions around digital sovereignty and data protection in the Netherlands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn faces allegations over data access practices

Privacy rights group noyb has filed a complaint against LinkedIn, alleging that the platform restricts access to certain user data by placing it behind a paid Premium subscription.

The complaint centres on LinkedIn’s ‘Who’s viewed your profile’ feature, which shows users who have visited their profile. According to noyb, LinkedIn tracks profile visits and makes detailed visitor information available to Premium subscribers, while refusing to provide the same data free of charge when users submit an access request under Article 15 of the GDPR.

Noyb argues that users have the right to receive their own personal data free of charge under the EU data protection rules. The organisation claims that LinkedIn has cited data protection concerns when refusing access requests, despite making similar information available through its paid subscription service.

The complaint was lodged with the Austrian Data Protection Authority and seeks enforcement action requiring LinkedIn to provide the data requested, as well as potential penalties. Noyb also questions whether LinkedIn’s tracking of profile visits complies with the EU consent requirements.

LinkedIn has reportedly denied the allegations, saying it complies with applicable rules and provides relevant information in accordance with its privacy policies.

The case adds to ongoing scrutiny of how digital platforms handle data access rights in the EU, particularly when information collected about users is also used for paid services.

Why does it matter?

The complaint tests whether platforms can monetise access to information that may also fall under users’ GDPR right of access. If regulators side with noyb, the case could affect how subscription-based platforms structure premium features that involve personal data, especially when the same data is withheld from non-paying users who make formal access requests.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EESC backs revised Cybersecurity Act with warnings on ENISA and supply chains

The European Economic and Social Committee has backed the EU’s proposed revision of the Cybersecurity Act, supporting reforms to ENISA, the cybersecurity certification framework and ICT supply-chain security, while warning that the next phase of the EU cyber policy must remain workable in practice.

In its opinion, the committee argues that cybersecurity and ICT supply-chain security should not be treated as narrow technical questions. Instead, it presents them as matters of economic security and geopolitical resilience, closely linked to the EU’s competitiveness, legal certainty and broader resilience.

The opinion welcomes the European Commission’s attempt to update the Cybersecurity Act and align related rules under NIS 2, particularly where the package aims to simplify compliance and reduce overlapping obligations. At the same time, the committee says that a stronger ENISA will require stronger backing. If the agency is expected to take on more responsibilities, those tasks should come with adequate resources, specialist staff and a mandatory workforce plan.

The committee also supports a single-entry point for incident reporting. It says parallel reporting requirements under NIS 2, DORA and sector-specific rules should be streamlined so that one comprehensive report can serve all relevant regulatory regimes.

On ICT supply-chain security, the opinion supports a structured EU framework for identifying key assets and addressing high-risk suppliers. However, it warns that restrictions and phase-outs should be transparent, proportionate and supported by realistic transition plans that account for replacement timelines, service continuity, costs, labour-market effects and the risk of shifting compliance burdens onto smaller firms outside the regulation’s scope.

The committee also calls for the cyber debate to address democratic resilience. A proposed amendment would give ENISA a clearer role in supporting election security, democratic resilience and public awareness of cyber threats, disinformation and safe digital behaviour.

Why does it matter?

The opinion supports a more centralised and strategic EU cybersecurity framework, but also highlights the practical risks of expanding cyber regulation faster than institutions and companies can implement it. The debate around ENISA’s mandate, incident reporting and ICT supply-chain restrictions will shape how far the EU can strengthen cyber resilience without creating fragmented obligations or disproportionate burdens for smaller firms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!