UK strengthens AI healthcare governance to ensure safety, equity and system-wide evaluation

The Medicines and Healthcare products Regulatory Agency in the UK has outlined priorities for regulating AI in healthcare, focusing on safety, effectiveness and public trust.

The approach includes strengthening pre-market evaluation and post-market surveillance, particularly for adaptive systems operating in real-world settings.

Contributions from the Health Foundation and the National Commission for the Regulation of AI in Healthcare highlight the need for broader governance frameworks.

These extend beyond technical validation to include implementation challenges, system-wide impacts and the role of human oversight in clinical environments.

The analysis emphasises that AI in healthcare operates as a socio-technical system, requiring assessment of usability, fairness and real-world outcomes. It also identifies gaps in current evaluation practices, particularly in local service assessments, which may lack consistency and reliability.

Strengthening evaluation standards, improving coordination and addressing risks such as bias and inequity are presented as central to enabling safe and scalable adoption.

Such a framework in the UK aims to balance innovation with accountability while ensuring equitable access to healthcare technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes cybersecurity access as defenders gain new tools

OpenAI has expanded its Trusted Access for Cyber programme and introduced a more permissive AI model designed specifically for cybersecurity work. The initiative reflects a broader shift in digital security, in which advanced AI tools are increasingly integrated into both defensive and offensive cyber operations.

The development highlights a structural change in cybersecurity, where defenders are no longer relying solely on traditional tools but are instead incorporating AI systems capable of analysing code, identifying vulnerabilities and accelerating incident response.

At the same time, the same technological capabilities are becoming accessible to malicious actors, intensifying the need for controlled and verified access.

New automated vulnerability tools are being deployed to detect and fix security flaws at scale, moving towards continuous AI-assisted protection. Rather than periodic security reviews, development environments are gradually shifting towards real-time monitoring and automated remediation.

The broader implication is a tightening link between AI capability growth and cyber risk management. Access frameworks based on identity verification and trust signals aim to balance the wider availability of defensive tools with safeguards against misuse.

The expansion of AI-driven cybersecurity tools reflects a structural shift in how digital infrastructure is protected at scale. As software systems become more complex and interconnected, traditional periodic security checks are increasingly insufficient to manage fast-evolving threats. 

Cybersecurity is moving towards an always-on, automated model in which the balance between openness and restriction will shape how resilient global digital infrastructure becomes as AI-driven threats and defences evolve in parallel.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Sussex police deploy AI cameras to detect traffic offences

Sussex Police has introduced AI cameras to detect drivers using mobile phones or not wearing seatbelts. The technology is being deployed to support enforcement and reduce road safety risks.

The rollout follows a 2024 trial by National Highways in Sussex, during which 458 offences were detected in seven days. Most cases involved seatbelt violations, while others included mobile phone use or both offences combined.

Chief Constable Jo Shiner said the cameras are intended to support policing rather than replace it. She added that AI cameras help monitor driver behaviour and enable action where necessary.

Police and Crime Commissioner Katy Bourne said the technology would strengthen enforcement and allow resources to be used more effectively. She noted that collisions linked to phone use and lack of seatbelts continue to cause injuries.

The cameras, supplied by Acusensus, will operate for several weeks before evaluation. Officials said the system will contribute to wider road safety efforts and ongoing monitoring initiatives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Data Protection Board introduces DPIA template to strengthen GDPR compliance

The European Data Protection Board has introduced a standardised template for Data Protection Impact Assessments (DPIAs), aiming to improve consistency and simplify GDPR compliance across Europe.

The initiative follows the board’s broader effort to harmonise regulatory practices and make data protection requirements easier for organisations to apply.

A DPIA is required when data processing is likely to pose a high risk to individuals’ rights and freedoms. It involves describing how personal data is handled, assessing necessity and proportionality, and identifying measures to reduce risk.

The new template is designed to guide organisations step by step, offering structured fields that improve clarity and reduce the risk of incomplete or inconsistent assessments.

While use of the template is not mandatory, organisations are encouraged to adopt it as a practical tool to streamline reporting and ensure completeness. An accompanying document simplifies key concepts and addresses common uncertainties, making implementation more accessible across sectors.

The template will remain open for public consultation until 9 June, after which national data protection authorities are expected to integrate it into their frameworks. Stakeholders are invited to provide feedback during this period as part of ongoing efforts to align data protection practices across the EU.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Polish data protection authority seeks personal data rules for civic budgets

The President of Poland’s Personal Data Protection Office, Mirosław Wróblewski, has called for legislation clarifying how personal data should be processed in so-called civic budget procedures.

In a submission to the Minister of the Interior and Administration, Wróblewski said that current local government rules do not comprehensively regulate the processing of personal data in participatory budgeting.

According to the office, civic budget procedures involve the processing of personal data not only by public authorities but also by citizens who collect, record, and submit support lists for proposed projects. The authority says this has created practical difficulties for both public bodies responsible for consultations and the people whose data are processed.

The office says local government laws in Poland should clarify who acts as the data controller, what categories of personal data may be processed, how the status of eligible voters should be verified, and how personal data should be secured. It notes that current rules leave these issues largely to local resolutions, without precise statutory criteria on data processing.

The submission also raises concerns about the scope of personal data collected during voting. It states that some civic budget procedures require voters to provide a PESEL number, which can exclude residents who do not have one, including some foreigners and Polish citizens born abroad who use only a passport.

The office says the collection and further processing of PESEL numbers for strictly defined purposes should follow directly from legal provisions and notes that administrative case law has generally found no legal basis for requiring it in this context.

The authority also calls for rules on electronic voting in civic budgets. It says that local authorities do not always consider themselves responsible for data security before support lists are transferred, and that people collecting signatures are not always aware of their responsibilities for processing personal data.

The authority also adds that digital platforms used for such voting should meet minimum criteria consistent with the GDPR and with broader cybersecurity and digital identity frameworks, including NIS2 and eIDAS2.

According to the office, such systems should comply with data minimisation requirements and ensure transparency and verifiability of the voting process, including auditability and verification of vote counting.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Belgian DPA releases new AI harms information brochure

The Belgian Data Protection Authority has outlined the impact of AI on privacy in a new publication, highlighting growing concerns around data use and protection. The analysis forms part of its ongoing work on emerging technologies.

According to the Belgian Data Protection Authority, AI systems rely on large volumes of data, which can increase risks related to the processing of personal data and compliance with existing regulations. This raises questions about transparency and accountability.

The authority notes that AI can make it more difficult for individuals to understand how their data is used, particularly in complex or automated decision-making systems. This may challenge established data protection principles.

The Authority emphasises the need to adapt regulatory approaches and safeguards to ensure privacy rights remain protected as AI adoption expands in Belgium.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea warns on AI fake news risks

Reporting by The Korea Herald states that South Korean Prime Minister Kim Min-seok has warned of the risks of AI-generated fake news ahead of an upcoming election. Authorities are urging greater vigilance as digital content becomes harder to verify.

According to the report, AI technologies are increasingly capable of producing realistic false information, including manipulated images and videos. This raises concerns about their potential impact on public opinion and trust.

The government has called for precautionary measures to limit the spread of misinformation and protect the integrity of democratic processes. This includes encouraging awareness and responsible use of AI tools.

The warning reflects broader concerns about the influence of AI-driven disinformation during election cycles in South Korea.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian authorities warn of data exploitation through social media platforms

Social media and messaging services pose growing security and privacy risks, with personal data used to build profiles for fraud, espionage, or social engineering. Even routine posts may contribute to broader data collection and unintended exposure.

Platforms typically collect extensive user and device data under evolving privacy policies, sometimes storing it across jurisdictions with varying legal protections. Such conditions increase the risks of identity theft, reputational harm, and the misuse of aggregated personal information.

The Australian Government advises organisations to restrict access to official accounts, train staff, and enforce clear policies on what can be shared. It also highlights the importance of breach response procedures to maintain operational security.

For individuals, the Government guidance recommends limiting exposure of personal data, using privacy settings, avoiding unknown contacts, and applying strong authentication.

Regular updates, careful app permissions, and device security measures are also encouraged to reduce cyber risks.

Strengthening awareness and applying consistent security practices reduces vulnerability and supports more resilient organisational systems in an increasingly interconnected digital environment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Corporate AI governance gaps highlighted in UNESCO report

UNESCO and the Thomson Reuters Foundation have published ‘Responsible AI in practice: 2025 global insights from the AI Company Data Initiative’, presenting findings from what the report describes as the largest global dataset of corporate responsible AI disclosures.

The report analyses 2,972 companies across 11 sectors and multiple regions using publicly available disclosures and company survey responses collected through the AI Company Data Initiative.

The report says AI is being embedded across companies’ products, services, and internal operations faster than governance and disclosure are developing. It states that 43.7% of companies publicly communicate having an AI strategy or guidelines, but only 13% publicly claim adherence to a formal AI governance framework.

Among those that do cite a framework, 53% refer to the EU AI Act, while 43.6% cite ‘other’ frameworks, a dispersion the report says weakens comparability across the wider AI governance ecosystem.

The publication also says many companies describe AI governance in conceptual terms while providing less evidence on operational controls, accountability pathways, monitoring, and remediation. It states that 40% report board- or committee-level oversight on AI, and 12.4% report having a policy to ensure a human oversees AI systems.

At the same time, the publication says 72% of companies do not report conducting any AI-related impact assessment. Of those that do, 11% report environmental impact assessments and 7% report human rights impact assessments. These findings are presented visually in the report’s key statistics on page 10.

Regarding labour impacts, the report says companies do not provide adequate protection for workers as AI reshapes jobs. It states that while 31% of companies claim to have AI training programmes, only 12% offer structured training with comprehensive coverage. It also argues that effective worker protection requires stronger evidence of reskilling, retraining, redeployment, transition support, and access to remedy where AI affects workers’ rights.

Why does it matter?

The report further states that ethical issues, including human rights and environmental impacts, are being sidelined in AI governance and risk management, while transparency regarding training data, third-party systems, and user rights remains uneven. It presents the AI Company Data Initiative as a tool to help companies assess their governance practices against UNESCO’s Recommendation on the Ethics of AI and to give investors more comparable information on how AI is governed in practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Japan approves APPI amendment bill on personal data, AI training, and fines

Japan’s Cabinet has approved a bill to amend the Act on the Protection of Personal Information, or APPI, for submission to parliament.

The proposed amendments combine stricter enforcement with regulatory easing. They would introduce an administrative fine system, strengthen protections for children’s data and certain biometric data, and allow broader use of personal data for AI training. The bill would also ease some data-breach notification requirements.

Japan’s Digital Minister, Hisashi Matsumoto, said enabling the use of sensitive personal data without consent is important for developing domestic AI models. He said the bill seeks to balance that objective with stronger protections for children’s data and facial-recognition data, as well as the introduction of administrative fines.

The fine mechanism would be introduced in a limited form. Provisions to impose fines for large-scale data breaches resulting from inadequate security measures were removed. Instead, the bill would target improper acquisition or use of personal data, unlawful provision of data to third parties, and misuse of sensitive data beyond stated statistical purposes, including transfers to third parties.

According to the proposal, fines would apply in large-scale cases involving more than 1,000 affected individuals, with amounts linked to profits derived from unlawful data handling. During drafting, the Personal Information Protection Commission also dropped plans to introduce consumer class actions for legal redress, while saying it would continue studying the issue.

The Personal Information Protection Commission is seeking passage during the current parliamentary session. The proposal follows a lengthy amendment process, during which earlier plans faced opposition from business and technology groups.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!