European Ombudsman criticises Commission over X risk report access

The European Ombudsman has criticised the European Commission’s handling of a request for public access to a risk assessment report submitted by social media platform X under the Digital Services Act (DSA).

The case concerned a journalist’s request to access X’s 2023 risk assessment report, which very large online platforms must provide under the DSA. The Commission refused to assess the report for possible disclosure, arguing that access could undermine X’s commercial interests, an ongoing DSA investigation and an independent audit.

The Ombudsman found it unreasonable for the Commission to rely on a general presumption of non-disclosure rather than individually assessing the report. She said the circumstances in which the EU courts have allowed such presumptions differ from the rules applying to DSA risk assessment reports.

Although X has since made the report public with redactions, the Ombudsman recommended that the Commission conduct its own assessment and aim to give the journalist the widest access possible, including potentially to parts redacted by the company. If access is refused for any sections, the Commission should explain why.

The finding of maladministration highlights the importance of transparency in the oversight of very large online platforms under the DSA, particularly where documents are relevant to public scrutiny of platform risk management and regulatory enforcement.

Why does it matter?

The case tests how far transparency obligations around very large online platforms can be limited by broad claims of commercial sensitivity or ongoing investigations. DSA risk assessment reports are central to understanding how platforms identify and manage systemic risks, so access decisions affect public oversight of EU digital regulation as much as the rights of individual requesters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Health New Zealand issues guidance on use of generative AI and large language models

Health New Zealand has published new guidance on generative AI and large language models for healthcare settings.

The guidance states that the National Artificial Intelligence and Algorithm Expert Advisory Group evaluates the use of generative AI tools and LLMs and recommends caution in their application across Health New Zealand environments. It notes that further data is needed to assess risks and benefits in the New Zealand health context.

Employees and contractors are prohibited from entering personal, confidential or sensitive patient or organisational information into unapproved LLMs or generative AI tools. The guidance also says such tools must not be used for clinical decisions or personalised patient advice.

Staff using generative AI tools in other contexts must take full responsibility for checking the information generated and acknowledge when generative AI has been used to create content. Anyone planning to use generative AI or LLMs is also asked to seek advice from the advisory group.

The guidance highlights potential risks including privacy breaches, inaccurate or misleading outputs, bias in training data, lack of transparency in model outputs, data sovereignty concerns and intellectual property risks. It also notes that generative AI systems may not adequately support te reo Māori and other minority languages spoken in Aotearoa New Zealand.

Why does it matter?

The guidance shows how health systems are beginning to set practical boundaries for generative AI before its use becomes routine in clinical and administrative settings. By prohibiting unapproved tools for patient data, clinical decisions and personalised advice, Health New Zealand is drawing a clear line between limited productivity uses and high-risk healthcare applications. At the same time, its references to Māori data sovereignty and language support widen the governance frame to include equity, cultural rights and data protection concerns that standard technology policies may not fully address.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Young users’ reliance on ChatGPT raises questions over AI advice and autonomy

Sam Altman has described a generational divide in how people use ChatGPT, saying younger users are integrating the tool more deeply into learning, planning and everyday decision-making.

Speaking at Sequoia Capital’s AI Ascent 2025, the OpenAI CEO said older users tend to treat ChatGPT more like a search tool, while people in their 20s and 30s often use it as a personal advisor. College students, he said, are going further by treating ChatGPT almost like an operating system, connecting it to files, tasks and complex workflows.

The remarks point to a shift in how AI tools are being embedded into daily routines, particularly among students and younger adults. Business Insider reported that a February 2025 OpenAI report found US college students were among the platform’s most frequent users, while a Pew Research Center survey found that 26% of US teens aged 13 to 17 used ChatGPT for schoolwork in 2024, double the share recorded in 2023.

Altman’s comments also raise questions about dependence, accuracy and boundaries as AI systems move closer to advisory roles. While users may benefit from private spaces to test ideas, organise tasks and prepare decisions, concerns remain about over-reliance, data privacy and the shifting role of human relationships in decision-making.

Why does it matter?

The trend suggests that AI is becoming more than an information tool for younger users. As ChatGPT and similar systems become part of studying, planning and personal decision-making, they influence not only how information is consumed, but also how habits, confidence and judgement develop.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia launches national AI platform ‘AI.gov.au’

Australia’s Department of Industry, Science and Resources has announced the launch of AI.gov.au through the National Artificial Intelligence Centre. The platform is designed to help organisations adopt AI safely and responsibly in line with the National AI Plan.

AI.gov.au provides a central source of guidance, tools and resources to support businesses and not-for-profits. It aims to help users identify AI opportunities, plan implementation, manage risks and build internal capability.

The platform’s development was informed by research and engagement with industry and government, which highlighted the need for clear starting points, practical advice and support for managing the organisational change that comes with AI adoption. It also supports the AI Safety Institute’s work by improving access to safety guidance.

Initial features focus on small and medium-sized enterprises and include training, case studies and adoption tools, with further updates planned. The initiative reflects efforts to strengthen AI uptake and governance in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and face no specific obligations under the regulation.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The EDPS appears to want Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. Its significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU briefing warns AI health benefits need safeguards

A European Parliamentary Research Service briefing says AI could improve healthcare, disease prevention and well-being across the EU, but warns that its growing use in health advice, AI companions and tools used by children, young people and older adults requires strong safeguards and human oversight.

The briefing, focused on health and well-being in the age of AI, says AI is already supporting diagnostics, personalised treatment, health-risk forecasting, hospital management, pharmaceutical development and disease surveillance. It points to use cases in areas such as radiology, oncology, cardiology, rare diseases and cross-border health data exchange.

AI-powered health chatbots and virtual assistants can help people access health information, understand complex topics and prepare for medical consultations. However, the briefing warns that such tools may also create privacy risks, spread inaccurate or misleading information, and encourage users to delay or replace professional medical advice.

AI companions are presented as another area where benefits and risks coexist. They may support social interaction and alert caregivers when people are at risk of isolation, but cannot replace human relationships and may deepen loneliness or worsen mental health risks for vulnerable users.

For older adults, AI-enabled wearables, in-home sensors, assistive technologies and smart care platforms could support independent living and improve care. At the same time, the briefing warns of privacy and data security concerns, emotional dependency and the risk that technology could replace rather than complement personal interaction.

Young people and children face different risks as AI becomes part of daily life, learning, health advice and social interaction. The briefing highlights possible exposure to harmful content, cyberbullying, emotional dependency, privacy violations, reduced critical thinking, sleep disruption, sedentary behaviour and social withdrawal.

The research service says the EU AI Act, the General Data Protection Regulation, the European Health Data Space, and sector-specific rules on medical devices and diagnostics form part of the EU framework for managing these risks. It concludes that AI’s health benefits can be realised only if innovation is balanced with safeguards, digital skills and a commitment to keeping human care and social connection at the centre.

Why does it matter?

AI is becoming part of healthcare not only through clinical tools, but also through consumer-facing chatbots, companions, wearables and support systems used by vulnerable groups. That widens the policy challenge from medical safety to privacy, misinformation, emotional dependency, digital skills and the preservation of human care.

The briefing shows why health-related AI governance cannot rely only on innovation or efficiency gains. Trustworthy use will depend on safeguards that protect patients, children, older adults and other vulnerable users while ensuring AI supports, rather than replaces, professional care and social connection.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Canada issues age assurance guidance

The Office of the Privacy Commissioner of Canada has issued guidance on how organisations should assess and implement age assurance tools for websites and online services.

The OPC states that age assurance should only be used where there is a clear legal requirement or a demonstrable risk of harm to children. It emphasises that organisations must evaluate whether alternative, less intrusive measures could address these risks before adopting such systems.

The guidance highlights that any age assurance approach, including those that use AI, must be proportionate, limit personal data collection, and operate in a privacy-protective manner. It also warns against using collected data for other purposes or linking user activity across sessions.

The OPC adds that organisations must give users a choice over the type of personal information used in the age assurance process, provide appeal mechanisms, and minimise repeated verification. The framework aims to balance child protection with privacy rights, and the guidance applies to online services in Canada.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

SHEIN faces Irish inquiry over EU data transfers to China

Ireland’s Data Protection Commission has opened an inquiry into Infinite Styles Services Co. Ltd. (known as SHEIN Ireland) over transfers of personal data of EU and EEA users to China.

The inquiry will examine whether SHEIN Ireland has complied with its obligations under the General Data Protection Regulation in relation to those transfers. The DPC said it will assess compliance with GDPR principles on personal data processing, transparency obligations under Article 13, and Chapter V requirements governing transfers of personal data to third countries.

The regulator said its decision to begin the inquiry was issued to SHEIN Ireland at the end of April. The case comes as data transfers to China face growing regulatory scrutiny in Europe, including through recent DPC enforcement action and complaints filed with other European supervisory authorities.

Deputy Commissioner Graham Doyle said: ‘When an individual’s personal data is transferred to a country outside the EU, the GDPR requires that this personal data is afforded essentially the same protections as it would within the EU.’

He added: ‘Recent regulatory action by the DPC, together with complaints to other European supervisory authorities, has brought data transfers to China, in particular, into focus. The inquiry is an important strategic priority for the DPC and we intend to cooperate closely with our peer European Supervisory Authorities as part of the investigation.’

Under the GDPR, transfers of personal data outside the EU and EEA must meet specific safeguards so that the level of protection provided under EU law is not undermined. Where no European Commission adequacy decision exists for a third country, organisations must rely on alternative mechanisms, such as standard contractual clauses, and demonstrate that equivalent protections are in place.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Instagram pulls the plug on encrypted chats

Instagram will no longer support end-to-end encrypted chats from 8 May 2026, ending an optional privacy feature for some direct messages on the platform.

Users affected by the change are being prompted to download any messages or media from encrypted chats that they wish to keep before the feature is removed. Instagram’s help page says users may need to update the app to access or download their end-to-end encrypted chats.

End-to-end encryption allows only the people in a conversation to read messages or hear calls, with messages protected by encryption keys linked to authorised devices. On Instagram, however, encrypted chats were an optional feature rather than the default for all direct messages.
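As a rough sketch of the general principle only (not Instagram’s or Meta’s actual implementation, and using the PyNaCl library with purely hypothetical participants), end-to-end encryption can be illustrated in a few lines of Python:

    from nacl.public import PrivateKey, Box

    # Each participant generates a key pair on their own device;
    # the private keys never leave those devices.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts with her private key and Bob's public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"hello")

    # Only Bob, holding his private key, can decrypt the message;
    # a platform relaying the ciphertext never sees the plaintext.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    assert plaintext == b"hello"

In this sketch the platform only ever handles the ciphertext, which is why removing such a feature is significant from a privacy standpoint.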

After 8 May 2026, users will no longer be able to send or receive end-to-end encrypted messages or calls on Instagram. The help page also notes that users can still report messages from encrypted chats and that shared content may still be forwarded outside an encrypted conversation.

The change marks a rollback of a privacy feature on one of Meta’s major social platforms, even as end-to-end encryption remains central to debates over secure communications, platform safety and user confidentiality.

Why does it matter?

End-to-end encryption is widely seen as a core privacy protection because it limits access to message content, including by the platform itself. Its removal from Instagram encrypted chats raises questions about how major platforms prioritise privacy features, user safety, product complexity and interoperability across their messaging services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot

OpenAI introduces a trusted contact safety feature in ChatGPT

OpenAI has started rolling out Trusted Contact, an optional safety feature in ChatGPT designed to help connect adult users with real-world support during moments of serious emotional distress.

The feature allows users to nominate one trusted adult, such as a friend, family member or caregiver, who may receive a notification if OpenAI’s automated systems and trained reviewers detect that the user may have discussed self-harm in a way that indicates a serious safety concern.

OpenAI said the feature is intended to add another layer of support alongside existing safeguards in ChatGPT, including prompts that encourage users to contact crisis hotlines, emergency services, mental health professionals, or trusted people when appropriate. The company stressed that Trusted Contact does not replace professional care or crisis services.

Users can add a trusted contact through ChatGPT settings. The contact receives an invitation explaining the role and must accept it within one week before the feature becomes active. Users can later edit or remove their trusted contact, while the trusted contact can also remove themselves.

If ChatGPT detects a possible serious self-harm concern, the user is informed that their trusted contact may be notified and is encouraged to reach out directly. A small team of specially trained reviewers then assesses the situation before any notification is sent.

OpenAI said notifications are intentionally limited and do not include chat details or transcripts. Instead, they share only the general reason, namely that self-harm came up in a potentially concerning way, and encourage the trusted contact to check in. The company said every notification undergoes human review, which it aims to complete in under one hour.

The feature was developed with guidance from clinicians, researchers and organisations specialising in mental health and suicide prevention, including the American Psychological Association. OpenAI said Trusted Contact forms part of broader efforts to improve how AI systems respond to people experiencing distress and connect them with real-world care, relationships and resources.

Why does it matter?

Trusted Contact points to a broader shift in AI safety away from content moderation alone toward real-world support mechanisms for users in moments of vulnerability. As conversational AI systems become part of everyday personal reflection and emotional support, companies face growing pressure to define when and how they should intervene, how much privacy to preserve, and what role human review should play in high-risk situations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!