Singapore Ministry of Health addresses AI-developed drugs and patient data safeguards

Singapore’s Ministry of Health has said that drugs developed with the use of AI will be subject to the same regulatory expectations as conventionally developed medicines, including requirements on quality, safety and efficacy.

The ministry made the statement in response to a parliamentary question on the regulation of AI-developed drugs, clinical trials and safeguards for patient data used in AI-related healthcare innovation.

It said the Health Sciences Authority’s approach is aligned with international regulatory principles on the responsible use of AI in drug development, including those outlined by the US Food and Drug Administration and the European Medicines Agency.

The ministry also said that patient data used for AI development is covered by existing data protection and cybersecurity safeguards, including obligations under Singapore’s Personal Data Protection Act to maintain patient confidentiality and prevent data leakage.

Authorities will continue to monitor developments in AI-related healthcare innovation and strengthen safeguards where necessary.

Why does it matter?

The response signals that Singapore is not creating a separate, lighter pathway for AI-developed medicines, but is applying existing drug safety standards while monitoring how AI changes research, development and clinical use. The issue is relevant for digital health governance because AI in drug development depends not only on regulatory approval of final products, but also on the protection of patient data used to train, test or validate health-related AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK’s ICO issues guidance on AI-generated FOI requests

The UK Information Commissioner’s Office (ICO) has published new guidance to help public authorities handle Freedom of Information (FOI) requests generated using AI, as public authorities report growing pressure from higher volumes and more complex requests.

According to the ICO, some AI-generated requests misquote or misinterpret FOI legislation, while others require significant clarification before they can be processed. The regulator says the guidance is intended to give FOI teams practical support so they can continue meeting their legal duties without adding new burdens.

The guidance addresses issues that practitioners say are increasingly common, including requests generated with AI that misstate the law, a rising number of submissions that need refinement, and the need to ensure requests are handled fairly and consistently regardless of how they were created.

It also includes example wording that public authorities can use to encourage more responsible use of AI by requesters and to support clearer and more effective FOI submissions. The ICO says the aim is to reduce delays, errors, and complaints linked to poorly framed or confusing requests.

Deborah Clark, the ICO’s Upstream Regulation Manager, clarified: ‘This guidance is about giving teams practical, sensible support, not adding new burdens. It does not change the law or create new requirements; instead, it helps teams apply existing FOI principles consistently, regardless of how a request is created. Used responsibly, AI also has the potential to help public authorities improve how they handle FOI requests, and this guidance sits alongside our wider work to support innovation that delivers real benefits for organisations and the public.’

The ICO says the guidance applies to all public authorities covered by the Freedom of Information Act and draws on existing casework, stakeholder engagement, practitioner feedback, and input from its AI specialists.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Rising data centre demand increases energy and cyber risks

Data centres are increasingly central to digital economies, but their rapid expansion is reshaping both electricity demand and cybersecurity risks. According to the International Energy Agency, data centres used about 1.5% of global electricity in 2024, with demand rising as AI and cloud services expand.

These facilities operate as both energy consumers and producers, relying on grid power while also maintaining on-site generation and battery systems. Their ability to switch power sources instantly supports service continuity but can also cause sudden load shifts that challenge grid stability during outages or cyber incidents.

Cybersecurity is now closely tied to energy resilience. Data centres depend on interconnected systems such as backup power, cooling, and digital control networks, all of which require continuous monitoring and protection.

Weaknesses in any part of this ‘system of systems’ can affect both service availability and wider electricity infrastructure.

Why does it matter? 

Data centres are becoming critical infrastructure that directly affects both digital services and electricity systems. Shared planning for power disruptions, cyber events, and load management is increasingly seen as necessary to ensure stability across both.

Their rising energy demand and reliance on complex on-site and grid power arrangements mean disruptions or cyber incidents can have wider knock-on effects, making resilience and cross-sector coordination essential for overall system stability.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

French CNIL hosts global privacy talks in Paris

The French Commission Nationale de l’Informatique et des Libertés (CNIL) will host the G7 roundtable of data protection and privacy authorities in June 2026. The meeting aims to strengthen international cooperation amid rapid digital and AI developments.

The roundtable, created in 2021, brings together data protection authorities from G7 countries and the EU. It focuses on sharing legal and technological developments and encouraging coordinated approaches to common challenges.

Key areas of work for 2026 include emerging technologies, enforcement cooperation and the free flow of data. The discussions are expected to address growing concerns about data protection amid expanding AI use.

The CNIL stated that the French presidency will prioritise dialogue and practical cooperation, aiming to support global governance that respects fundamental rights, and that the event will take place in Paris.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU reaches provisional deal on targeted AI Act changes

The Council presidency and European Parliament negotiators have reached a provisional agreement on targeted changes to the EU AI Act as part of the Omnibus VII package, which aims to simplify parts of the Union’s digital rulebook and ease implementation burdens.

According to the announcement, the deal broadly preserves the thrust of the Commission’s proposal on high-risk AI systems. The provisional agreement sets new application dates of 2 December 2027 for stand-alone high-risk AI systems and 2 August 2028 for high-risk AI systems embedded in products.

The agreement also extends certain simplification measures beyond SMEs to small mid-caps, while keeping some safeguards. It reinstates the obligation for providers to register AI systems in the EU database where they consider those systems exempt from high-risk classification, and restores the requirement of strict necessity for processing special categories of personal data for bias detection and correction.

At the same time, the co-legislators added a new prohibited AI practice covering the generation of non-consensual sexual and intimate content and child sexual abuse material (CSAM). The deal also postpones the deadline for national AI regulatory sandboxes to 2 August 2027 and shortens the grace period for transparency measures for AI-generated content from 6 months to 3 months, with a new deadline of 2 December 2026.

The provisional agreement further clarifies the division of supervisory powers between the AI Office and national authorities, particularly where general-purpose AI models and downstream AI systems are developed by the same provider, by listing exceptions where national authorities remain competent.

It also addresses overlaps between the AI Act and sectoral legislation in areas such as medical devices, toys, machinery, lifts, and watercraft: if the sectoral law has AI-specific requirements similar to the AI Act’s, the AI Act’s application is limited through implementing acts. A specific solution was found for machinery regulation by exempting it from the direct applicability of the AI Act, while the Commission is empowered to adopt delegated acts under the machinery regulation adding health and safety requirements for AI systems classified as high-risk under the AI Act.

The text must still be endorsed by both the Council and the European Parliament before undergoing legal and linguistic revision and formal adoption. The proposal is part of the EU’s broader simplification agenda, which has been driven by calls from the European Council and followed by a series of Omnibus packages since early 2025.

Marilena Raouna, Deputy Minister for European Affairs of the Republic of Cyprus, elaborated: ‘Today’s agreement on the AI Act significantly supports our companies by reducing recurring administrative costs. It ensures legal certainty and a smoother and more harmonised implementation of the rules across the Union, strengthening EU’s digital sovereignty and overall competitiveness.’

Raouna added: ‘At the same time, we are stepping up the protection of children targeting risks linked to the AI systems. This agreement is clear evidence of our institutions’ ability to act swiftly and deliver on our commitments. It marks the first deliverable under the ‘One Europe, One Market’ roadmap agreed by the three institutions last week, well within the set deadline.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Norway joins Pax Silica initiative to secure AI and semiconductor supply chains

The Pax Silica initiative, which focuses on secure AI, semiconductor, and critical raw materials supply chains, has expanded with the addition of Norway. The partnership aims to strengthen technological innovation while protecting sensitive technologies.

Norway joins a group of 14 participating countries, including the USA, Japan, the UK and India. Norwegian officials said participation could improve market access for domestic companies operating in advanced technological sectors and strengthen economic security cooperation with strategic partners.

Minister of Trade and Industry, Cecilie Myrseth, said the initiative aligns with Norway’s goal of expanding cooperation with leading countries in AI and emerging technologies. Norwegian ambassador to the USA, Anniken Huitfeldt, is expected to formally sign the agreement on behalf of the country.

The move also complements broader Norwegian and European efforts to secure access to critical technologies and supply chains. The government highlighted initiatives linked to the European Chips Act and the EU Critical Raw Materials Act as part of a wider strategy to strengthen technology resilience and industrial competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple may be preparing a major Siri AI shake-up in iOS 27

Apple is reportedly preparing a major expansion of Apple Intelligence that could allow users to choose which AI model powers Siri and other system features. According to recent reports, iOS 27, iPadOS 27, and macOS 27 may introduce a new ‘Extensions’ framework designed to integrate third-party AI systems directly into Apple’s software ecosystem.

The reported feature would allow applications such as Gemini and Claude to connect with Siri through their App Store apps. Users may be able to select different AI providers for different tasks, while Apple is also said to be testing separate Siri voices for responses generated by external models rather than Apple’s own systems.

The move would expand Apple’s broader AI partnership strategy rather than replace existing integrations. ChatGPT already supports selected Apple Intelligence functions, and earlier reporting suggested Google Gemini could eventually power parts of Siri itself. The new framework appears aimed at turning Apple devices into a wider AI platform that supports multiple large language models rather than a single assistant stack.

Apple is expected to present further details during its Worldwide Developers Conference on 8 June 2026. If the reported changes materialise, they could significantly reshape how users interact with AI assistants by giving them more control over which models handle tasks such as search, writing, and image generation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI found non-compliant in Canadian ChatGPT privacy probe

Canada’s federal and provincial privacy regulators have found that aspects of OpenAI’s collection, use, and disclosure of personal information through ChatGPT did not comply with applicable private-sector privacy laws, particularly in relation to model training on publicly accessible online data and user interactions.

The joint investigation was conducted by the Office of the Privacy Commissioner of Canada, the Commission d’accès à l’information du Québec, and the privacy commissioners of British Columbia and Alberta.

It examined OpenAI’s GPT-3.5 and GPT-4 models as used in ChatGPT, focusing on whether the company’s handling of personal information from public internet sources, licensed third-party datasets, and user interactions met legal requirements on appropriate purposes, consent, transparency, accuracy, access, retention, and accountability.

The regulators accepted that OpenAI’s overall purposes for developing and deploying ChatGPT were legitimate and appropriate. However, they found that the company’s initial collection of personal information from publicly accessible websites and licensed third-party sources for model training was overbroad and therefore inappropriate, given the scale, sensitivity, and potential inaccuracy of the data involved, as well as the limits of the mitigation measures in place at the time.

The Offices also found that OpenAI failed to obtain valid consent to collect and use personal information from public internet sources to train its models. They concluded that implied consent was not sufficient because the data could include sensitive personal information and because individuals would not reasonably have expected information about them posted online to be scraped and used for AI model training in this way.

On user interactions with ChatGPT, the regulators accepted that using some chat data for model improvement could serve OpenAI’s legitimate purposes. Still, they found that express consent should have been obtained.

They said OpenAI’s safeguards at the time were not strong enough to ensure that sensitive personal information would not be included in training data, and that many users would not reasonably have understood that their conversations could be used to train models or reviewed by human trainers.

The report also found that OpenAI should have obtained express consent for certain disclosures of personal information through ChatGPT outputs, especially where the information was sensitive or fell outside individuals’ reasonable expectations.

While OpenAI had introduced measures to reduce the risk of sensitive disclosures, the regulators said those measures covered a narrower set of information than the broader categories of personal information protected under the relevant privacy laws.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Siri AI delays lead to $250 million Apple settlement

Apple has agreed to pay $250 million to settle a class action lawsuit alleging that it misled consumers about the readiness and availability of AI-powered Siri features promoted ahead of the iPhone 16 launch. Under the proposed agreement, eligible US customers who bought supported iPhone models between 10 June 2024 and 29 March 2025 may receive between $25 and $95 per device, depending on the number of claims. Apple denied wrongdoing and settled the case without admitting liability.

The complaint argued that consumers who purchased supported iPhone 15 and iPhone 16 models expected advanced Apple Intelligence features and a significantly upgraded Siri experience that were not available at the time of sale. Plaintiffs said Apple’s marketing created the impression that the new capabilities would arrive sooner and with broader functionality than users ultimately received.

The settlement comes shortly before Apple’s annual Worldwide Developers Conference, where the company is widely expected to present further updates to Siri and its wider AI strategy.

Why does it matter?

The case shows how AI product marketing is becoming a legal and regulatory risk, not just a branding issue. As technology companies use generative AI features to drive device sales and platform adoption, courts and consumers are paying closer attention to whether those capabilities are actually available when products reach the market. The Apple settlement suggests that overstating AI readiness can create liability even before regulators step in, making transparency around launch claims increasingly important across the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI guidance issued by Australia’s New South Wales tribunal

The New South Wales Civil and Administrative Tribunal (NCAT) has issued guidance on the acceptable use of generative AI in tribunal proceedings as part of Privacy Awareness Week NSW 2026, which this year focuses on personal information risks in the age of AI.

According to NCAT, generative AI tools may be used to assist with administrative and organisational tasks such as summarising material, organising information, or preparing chronologies. At the same time, the tribunal warns that such tools can create privacy risks if users enter personal, sensitive, or confidential information.

The guidance is set out in NCAT Procedural Direction 7 on the use of generative AI, together with an accompanying fact sheet. NCAT says the aim is to clarify when generative AI may be used in tribunal-related work while reinforcing obligations to protect personal and confidential information.

The tribunal also draws a clear line around evidentiary material. Generative AI must not be used to generate or alter evidence in tribunal proceedings, including statements, affidavits, statutory declarations, character references, or other evidentiary documents.

NCAT further states that generative AI must not be used to generate content for an expert report unless the tribunal has given permission. It is encouraging parties and their representatives to review the guidance before using such tools in proceedings.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!