Singapore Ministry of Health addresses AI-developed drugs and patient data safeguards

Singapore’s Ministry of Health has said that drugs developed with the use of AI will be subject to the same regulatory expectations as conventionally developed medicines, including requirements on quality, safety and efficacy.

The ministry made the statement in response to a parliamentary question on the regulation of AI-developed drugs, clinical trials and safeguards for patient data used in AI-related healthcare innovation.

It said the Health Sciences Authority’s approach is aligned with international regulatory principles on the responsible use of AI in drug development, including those outlined by the US Food and Drug Administration and the European Medicines Agency.

The ministry also said that patient data used for AI development is covered by existing data protection and cybersecurity safeguards, including obligations under Singapore’s Personal Data Protection Act to maintain patient confidentiality and prevent data leakage.

Authorities will continue to monitor developments in AI-related healthcare innovation and strengthen safeguards where necessary.

Why does it matter?

The response signals that Singapore is not creating a separate, lighter pathway for AI-developed medicines, but is applying existing drug safety standards while monitoring how AI changes research, development and clinical use. The issue is relevant for digital health governance because AI in drug development depends not only on regulatory approval of final products, but also on the protection of patient data used to train, test or validate health-related AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

French CNIL hosts global privacy talks in Paris

The French Commission Nationale de l’Informatique et des Libertés (CNIL) will host the G7 roundtable of data protection and privacy authorities in June 2026. The meeting aims to strengthen international cooperation amid rapid digital and AI developments.

The roundtable, created in 2021, brings together data protection authorities from G7 countries and the EU. It focuses on sharing legal and technological developments and encouraging coordinated approaches to common challenges.

Key areas of work for 2026 include emerging technologies, enforcement cooperation and the free flow of data. The discussions are expected to address growing concerns about data protection amid expanding AI use.

The CNIL said the French presidency will prioritise dialogue and practical cooperation, with the aim of supporting global governance that respects fundamental rights. The event will take place in Paris.


OpenAI found non-compliant in Canadian ChatGPT privacy probe

Canada’s federal and provincial privacy regulators have found that aspects of OpenAI’s collection, use, and disclosure of personal information through ChatGPT did not comply with applicable private-sector privacy laws, particularly in relation to model training on publicly accessible online data and user interactions.

The joint investigation was conducted by the Office of the Privacy Commissioner of Canada, the Commission d’accès à l’information du Québec, and the privacy commissioners of British Columbia and Alberta.

It examined OpenAI’s GPT-3.5 and GPT-4 models as used in ChatGPT, focusing on whether the company’s handling of personal information from public internet sources, licensed third-party datasets, and user interactions met legal requirements on appropriate purposes, consent, transparency, accuracy, access, retention, and accountability.

The regulators accepted that OpenAI’s overall purposes for developing and deploying ChatGPT were legitimate and appropriate. However, they found that the company’s initial collection of personal information from publicly accessible websites and licensed third-party sources for model training was overbroad and therefore inappropriate, given the scale, sensitivity, and potential inaccuracy of the data involved, as well as the limits of the mitigation measures in place at the time.

The Offices also found that OpenAI failed to obtain valid consent to collect and use personal information from public internet sources to train its models. They concluded that implied consent was not sufficient because the data could include sensitive personal information and because individuals would not reasonably have expected information about them posted online to be scraped and used for AI model training in this way.

On user interactions with ChatGPT, the regulators accepted that using some chat data for model improvement could serve OpenAI’s legitimate purposes. Still, they found that express consent should have been obtained.

They said OpenAI’s safeguards at the time were not strong enough to ensure that sensitive personal information would not be included in training data, and that many users would not reasonably have understood that their conversations could be used to train models or reviewed by human trainers.

The report also found that OpenAI should have obtained express consent for certain disclosures of personal information through ChatGPT outputs, especially where the information was sensitive or fell outside individuals’ reasonable expectations.

While OpenAI had introduced measures to reduce the risk of sensitive disclosures, the regulators said those measures covered a narrower set of information than the broader categories of personal information protected under the relevant privacy laws.


Generative AI guidance issued by Australia’s New South Wales tribunal

The New South Wales Civil and Administrative Tribunal (NCAT) has issued guidance on the acceptable use of generative AI in tribunal proceedings as part of Privacy Awareness Week NSW 2026, which this year focuses on personal information risks in the age of AI.

According to NCAT, generative AI tools may be used to assist with administrative and organisational tasks such as summarising material, organising information, or preparing chronologies. At the same time, the tribunal warns that such tools can create privacy risks if users enter personal, sensitive, or confidential information.

The guidance is set out in NCAT Procedural Direction 7 on the use of generative AI, together with an accompanying fact sheet. NCAT says the aim is to clarify when generative AI may be used in tribunal-related work while reinforcing obligations to protect personal and confidential information.

The tribunal also draws a clear line around evidentiary material. Generative AI must not be used to generate or alter evidence in tribunal proceedings, including statements, affidavits, statutory declarations, character references, or other evidentiary documents.

NCAT further states that generative AI must not be used to generate content for an expert report unless the tribunal has given permission. It is encouraging parties and their representatives to review the guidance before using such tools in proceedings.


China closes consultation on digital virtual human services

The Cyberspace Administration of China has closed its public consultation on the draft Administrative Measures for Digital Virtual Human Information Services, which set out proposed rules for digital virtual human services provided to the public in China.

The notice states that the consultation opened in April 2026 and that comments were accepted until 6 May 2026. According to the draft, the measures would apply to internet information services delivered to the public within China through digital virtual humans.

The draft says providers and users must process data for lawful purposes and within a lawful scope, use data from legal sources, and fulfil their data security responsibilities. It also requires technical and other necessary measures to protect data storage and transmission and to prevent leaks or improper use.

The text further requires digital virtual human service providers and users to establish mechanisms for security risk monitoring and warning, emergency response, and anti-addiction, to strengthen content-direction management, and to retain logs. Providers whose services have public opinion attributes or social mobilisation capacity would also be required to complete algorithm filing procedures and security assessments in line with existing national rules.

Beyond cybersecurity and data protection, the draft includes provisions on personal information, personality rights, intellectual property, content controls, labelling requirements, and protections for minors. It defines digital virtual humans as non-physical virtual figures that simulate human appearance and may have voice, behaviour, interactive capabilities, or personality traits, created using graphics, digital image processing, or AI technologies.


New Meta age assurance system aims to prevent underage access

Meta has expanded its use of AI to strengthen age assurance and improve enforcement of underage account policies across its platforms. The systems are designed to detect users under 13 so their accounts can be removed, and to place suspected teens into protected Teen Account settings on Instagram and Facebook in regions including the EU, Brazil, and the US.

The technology analyses a range of signals, including profile information, user activity, and other contextual indicators, to estimate age more accurately. Automated systems are also being used to support faster and more consistent review of reports related to underage use.

Visual analysis has also become part of Meta’s broader detection approach, with the company saying its systems look for general age-related indicators rather than attempting to identify specific individuals. Reporting tools have been simplified, and AI-assisted moderation is being used to improve the speed and reliability of enforcement decisions.

Alongside these enforcement measures, Meta is increasing parental engagement through notifications and guidance to encourage more accurate age reporting and safer online behaviour. The wider effort reflects growing pressure on platforms to move beyond self-declared age checks and to build stronger systems to protect younger users.

Why does it matter?

The move is significant because age assurance is becoming a core platform governance issue rather than a secondary moderation tool. Meta is trying to show that large social platforms can use AI not only to recommend or personalise content, but also to enforce minimum age rules at scale. That matters because regulators are increasingly questioning whether self-declared age data is enough to protect minors online. It also points to a broader shift in which platforms are expected to combine safety obligations, automated detection, and parental tools into a more active system of child protection.


Cybersecurity and AI safety in focus at European Parliament discussion

Members of the European Parliament’s Committee on the Internal Market and Consumer Protection are set to discuss the safety of AI systems that could pose serious security risks.

According to the event description, the discussion will examine how existing EU legislation applies in practice, particularly the AI Act and the Cybersecurity Act. It will focus on how advanced AI systems are developed and managed when they may present security risks, and on how companies are implementing the EU rules and the challenges they face.

Experts from the European Union Agency for Cybersecurity (ENISA) and the European Commission are expected to take part. They will explain how the relevant legal and regulatory frameworks operate in practice across the EU, including the rules governing AI systems.

The discussion also comes as the European Commission has proposed changes to the Cybersecurity Act. In the European Parliament, the Committee on Industry, Research and Energy is leading work on the file, while IMCO is contributing an opinion focused on internal market and consumer protection aspects.


Data access emerges as cornerstone of EU AI plan

The European Commission has unveiled its AI Continent Action Plan, setting out a strategy to strengthen Europe’s position in the global AI landscape. The plan responds to rapid international advances and seeks to accelerate AI adoption across European industry and public services, where progress remains uneven.

Rather than introducing a new regulatory framework, the plan brings together targeted investments and policy measures around five priorities: expanding AI infrastructure, improving access to data, accelerating adoption in strategic sectors, strengthening skills, and supporting the implementation of existing rules.

Access to high-quality and interoperable data is presented as one of the key conditions for scaling AI in Europe. The plan links this objective to the EU’s wider data strategy and to efforts to make cross-border data use more practical, enabling organisations to train and deploy AI systems more effectively while operating within Europe’s transparency and accountability standards.

The broader ambition is to move Europe from fragmented experimentation towards more scalable and trustworthy AI deployment. In that sense, the Action Plan treats data, infrastructure, skills, and implementation capacity as parts of the same competitiveness agenda rather than separate policy tracks.

Why does it matter?

Europe’s AI challenge is no longer only about regulation, but about whether companies and public institutions can actually build and use AI at scale. If access to data remains fragmented across borders, sectors, and technical systems, the EU risks falling further behind competitors that already combine compute, capital, and data more effectively. By putting data access alongside infrastructure and skills, the Commission is signalling that AI competitiveness will depend as much on operational capacity as on rules or research strength.


Agentic AI risks outlined in joint cyber agency guidance

Six cybersecurity agencies have jointly published guidance urging organisations to adopt agentic AI services cautiously. The document warns that greater autonomy can increase cyber risk, particularly as agentic AI is introduced into critical infrastructure, defence, and other mission-critical environments.

The authors say organisations should use agentic AI primarily for low-risk and non-sensitive tasks and should not grant it broad or unrestricted access to sensitive data or critical systems. The guidance also recommends incremental deployment rather than large-scale implementation from the outset.

The document was co-authored by agencies from Australia, the United States, Canada, New Zealand, and the United Kingdom: the Australian Signals Directorate’s Australian Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency and National Security Agency, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre, and the UK’s National Cyber Security Centre.

It defines agentic AI as systems composed of one or more agents that rely on AI models, such as large language models, to interpret context, make decisions, and take actions, often without continuous human intervention. The guidance says these systems often combine an LLM-based agent with tools, external data, memory, and planning functions, which expands both capability and attack surface.

The agencies say agentic AI inherits many of the vulnerabilities already associated with large language models while introducing greater complexity and new systemic risks. The document identifies five broad categories of concern: privilege risks, design and configuration risks, behaviour risks, structural risks, and accountability risks.

It warns that over-privileged agents, insecure third-party tools, goal misalignment, emergent or deceptive behaviour, and opaque decision-making chains can all increase the likelihood and impact of compromise. To reduce those risks, the guidance recommends secure design, strong identity management, defence-in-depth, comprehensive testing, threat modelling, progressive deployment, isolation, continuous monitoring, and strict privilege controls.

The agencies also stress that human approval should remain in place for high-impact actions and that agentic AI security should be treated as part of broader cybersecurity governance rather than as a separate discipline. The document concludes by calling for stronger research, collaboration, and agent-specific evaluations as the technology matures.
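The combination of strict privilege controls and human approval for high-impact actions can be made concrete with a minimal sketch. The agent names, tool names, and risk tiers below are illustrative assumptions, not taken from the guidance itself; a real deployment would integrate such checks into the agent runtime and identity management layer:

```python
# Sketch of two controls recommended for agentic AI: a per-agent tool
# allow-list (least privilege, default deny) and mandatory human approval
# before any high-impact action. All names here are hypothetical.

HIGH_IMPACT = {"delete_records", "transfer_funds"}  # assumed high-impact tools

ALLOW_LIST = {
    "summariser": {"read_docs"},                    # low-risk agent
    "ops_agent": {"read_docs", "delete_records"},   # privileged agent
}

def authorise(agent: str, tool: str, human_approved: bool = False) -> bool:
    """Allow a tool call only if the agent holds the privilege and,
    for high-impact tools, a human has explicitly approved the action."""
    if tool not in ALLOW_LIST.get(agent, set()):
        return False  # least privilege: anything not granted is denied
    if tool in HIGH_IMPACT and not human_approved:
        return False  # human-in-the-loop for high-impact actions
    return True
```

Under this scheme, `authorise("summariser", "delete_records")` is denied outright, while `authorise("ops_agent", "delete_records")` is denied until a human approves it, which mirrors the guidance's call to keep approval in place for high-impact actions.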

Why does it matter?

The guidance matters because it draws a clear line between ordinary AI adoption and agentic systems that can act with far more autonomy inside real operational environments. Once AI tools move from assisting users to making decisions, calling tools, and interacting with sensitive systems, the security challenge shifts from model safety alone to full organisational risk management. That is why the document treats agentic AI not as a niche technical issue, but as a governance and cyber resilience problem that organisations need to control before deploying at scale.



European Commission urges fast rollout of EU age verification app

The European Commission has adopted a recommendation urging member states to accelerate the rollout of the EU age verification app and make it available by the end of the year. The recommendation says the app can be deployed either as a standalone solution or integrated into a European Digital Identity Wallet.

According to the Commission, the app is intended to let users prove they meet a required age threshold without disclosing their exact age, identity, or other personal details. The Commission has also published a blueprint for the system, leaving it to member states to customise and produce the app for their citizens.

The recommendation sets out actions for member states to support rapid availability and interoperability, including implementation plans and coordination to ensure the swift rollout of the solution across the EU.

The measure forms part of the EU’s wider approach to protecting minors online under the Digital Services Act, which requires online platforms to ensure a high level of privacy, safety, and security for minors.

Executive Vice-President Henna Virkkunen said: ‘Effective and privacy-preserving age verification is the next piece of the puzzle that we are getting closer to completing, as we work towards an online space where our children are safe and empowered to use positively and responsibly without restricting the rights of adults.’

Why does it matter?

The move takes age verification in the EU from a general policy objective to a more concrete implementation phase. Rather than leaving platforms and member states to develop separate solutions, the Commission is trying to steer the bloc towards a common privacy-preserving model that can work across borders.

That matters for both child protection and regulatory coherence, because if countries adopt incompatible systems or move at very different speeds, enforcement under the Digital Services Act could become uneven in practice.
