EDPB adopts scientific research data guidelines and Europrivacy opinions

The European Data Protection Board (EDPB) has adopted guidelines on the processing of personal data for scientific research purposes during its latest plenary, and opened them for public consultation until 25 June. The Board also created a dedicated ‘sprint team’ to complete its upcoming guidelines on anonymisation by the summer.

According to the EDPB, the new guidelines are intended to provide researchers with greater clarity on how the General Data Protection Regulation (GDPR) applies to scientific research while protecting individuals’ fundamental rights. The Board says the text clarifies the meaning of ‘scientific research’ under the GDPR and sets out six indicative factors to help determine whether processing is carried out for scientific research purposes.

The guidelines also explain that further processing for scientific research purposes is presumed to be compatible with the initial purpose for collecting personal data, meaning controllers do not need to carry out the GDPR purpose compatibility test. The EDPB says controllers must still ensure that the legal basis for the initial processing is also suitable for the further processing of personal data for scientific research purposes.

EDPB Chair Anu Talus said: ‘Scientific research can drive societal progress and improve our daily lives. Our guidelines facilitate innovative research by helping researchers to navigate the GDPR. The EDPB is committed to supporting the scientific community and unlocking the full potential of scientific research in the EU while upholding data protection rights.’

On consent, the Board says controllers may rely on ‘broad consent’ when research purposes are not fully known at the time of data collection, provided appropriate safeguards are in place. It also says controllers may seek consent separately for individual research projects once their purposes become known, and that a combination of broad and dynamic consent is possible.

The guidelines also address the rights of individuals, including the rights to erasure and to object, and explain when limitations may apply in the context of scientific research. The EDPB says the text also clarifies how responsibilities should be allocated when several entities are involved in processing, and outlines safeguards such as anonymisation or pseudonymisation, secure processing environments, privacy-enhancing technologies, confidentiality arrangements, and conditions for further use.

In addition, the Board adopted two opinions on sets of Europrivacy certification criteria submitted for approval as European Data Protection Seals. One opinion approves an updated set of criteria whose scope now includes controllers and processors established outside Europe that are subject to Article 3(2) GDPR.

The second, adopted for the first time, recognises Europrivacy certification criteria as a European Data Protection Seal that can be used as a tool for transfers under Articles 42 and 46 GDPR. According to the EDPB, this will allow data importers outside Europe that are not subject to the GDPR to apply for Europrivacy certification covering the transferred data they receive.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Researchers flag risks in EU AI changes

A research paper by Hannah van Kolfschooten, Barry Solaiman and Daria Onitiu examines how recent European Union policy proposals could affect safeguards for medical AI under the EU AI Act. The study focuses on changes linked to broader simplification initiatives.

According to the authors, the reforms could maintain the classification of AI-enabled medical devices as high risk while removing key obligations tied to that classification. These include requirements on data governance, risk management and human oversight.

The paper argues that this shift would separate risk classification from the safeguards that give it practical meaning. It suggests that reliance may move back towards existing medical device laws without equivalent AI-specific protections.

The authors warn that such changes could weaken oversight, increase legal uncertainty and affect patient safety where AI systems influence clinical decisions in the European Union.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia’s OAIC updates the Children’s Online Privacy Code page during public consultation

The Office of the Australian Information Commissioner (OAIC) updated its Children’s Online Privacy Code page, as the regulator continues consultation on a draft code that will set privacy rules for online services likely to be accessed by children.

The page says the Code is being developed under the Privacy and Other Legislation Amendment Act 2024 and will operate as an APP Code under the Privacy Act 1988.

According to the OAIC, the Code will apply to online services that fall within the categories of social media services, relevant electronic services, and designated internet services under the Online Safety Act 2021, where those services are likely to be accessed by children or primarily concern children’s activities. The regulator says the Code is intended to put children at the centre of privacy protections in Australia while also lifting privacy practices more broadly.

The updated page highlights the current public consultation on the exposure draft of the Children’s Online Privacy Code. It also refers users to separate consultation pathways for children, young people, parents and carers, and for industry, civil society, academia, and other interested parties.

The OAIC also says it has created a dedicated Privacy for Kids hub to support participation in the consultation. According to the page, the hub includes workbooks and child-friendly guides to help explain the draft Code to children, young people, and parents and carers.

In addition, the updated page invites stakeholders to register for an OAIC webinar on the Children’s Online Privacy Code public consultation. The OAIC says the Code must be finalised and registered by 10 December 2026.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Minnesota weighs AI free speech limits

The National Constitution Center reports that Minnesota lawmakers are considering a constitutional amendment to exclude AI systems from free speech protections. The proposal would clarify that such rights apply to people, not machines.

According to the National Constitution Center, the amendment would add language stating that AI does not have the right to speak, write or publish sentiments freely. Human free speech protections would remain unchanged under the proposal.

The article highlights ongoing debate around the measure, with supporters arguing it distinguishes human rights from technological tools, while critics warn it could affect how AI-generated content is treated under the law.

The National Constitution Center notes that the proposal reflects broader tensions over how legal systems should address AI and free expression as the issue develops in Minnesota.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU targets platforms over child safety and addictive design practices

The European Commission has intensified enforcement under the Digital Services Act (DSA), targeting online platforms over child safety concerns, addictive design features, and insufficient age-verification systems.

Executive Vice-President Virkkunen said the measures are intended to ensure platforms are held accountable when services expose minors to harmful or restricted content.

Actions have been taken against multiple major platforms, including TikTok, Facebook, Instagram, Snapchat, and Shein, over concerns related to design practices such as infinite scroll, autoplay, and highly personalised recommendation systems.

Additional enforcement has also been launched against pornographic platforms for failing to implement adequate age verification tools.

Alongside enforcement, the EU has developed a digital age verification app designed to give users control over personal data through privacy-preserving technology based on zero-knowledge proofs.

The system is already technically ready and is being tested across several member states, either as a standalone tool or integrated into national digital wallets.
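The privacy-preserving idea behind such age checks is that a platform learns only the yes/no answer it needs, never the underlying birthdate. The sketch below illustrates that principle with a simple signed attestation (selective disclosure); it is a toy illustration, not an actual zero-knowledge proof, and the issuer key and function names are hypothetical — the EU app's real design uses asymmetric credentials and zero-knowledge techniques.

```python
import hmac, hashlib, json
from datetime import date

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical shared key; real schemes use asymmetric signatures

def issue_age_attestation(birthdate: date, today: date) -> dict:
    """Issuer derives a boolean claim from the birthdate and signs only that claim.
    The birthdate itself never appears in the attestation handed to the platform."""
    years = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    claim = {"age_over_18": years >= 18}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_attestation(att: dict) -> bool:
    """Platform checks the issuer's tag; it learns only the boolean, not the birthdate."""
    payload = json.dumps(att["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["tag"]) and att["claim"]["age_over_18"]

att = issue_age_attestation(date(2000, 1, 15), date(2026, 2, 1))
print(verify_attestation(att))  # True: age confirmed without disclosing the birthdate
```

A genuine zero-knowledge scheme strengthens this further: the verifier cannot even link two presentations of the same credential, which is why the Commission's app relies on zero-knowledge proofs rather than plain signed attributes.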

The Commission is also preparing an EU-wide coordination mechanism to standardise accreditation of national solutions and avoid fragmentation across member states. The initiative aims to establish a unified age-verification framework that upholds privacy standards and supports wider adoption across digital services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

UK strengthens AI healthcare governance to ensure safety, equity and system-wide evaluation

The Medicines and Healthcare products Regulatory Agency in the UK has outlined priorities for regulating AI in healthcare, focusing on safety, effectiveness and public trust.

Its approach includes strengthening pre-market evaluation and post-market surveillance, particularly for adaptive systems operating in real-world settings.

Contributions from the Health Foundation and the National Commission for the Regulation of AI in Healthcare highlight the need for broader governance frameworks.

These extend beyond technical validation to include implementation challenges, system-wide impacts and the role of human oversight in clinical environments.

The analysis emphasises that AI in healthcare operates as a socio-technical system, requiring assessment of usability, fairness and real-world outcomes. It also identifies gaps in current evaluation practices, particularly in local service assessments, which may lack consistency and reliability.

Strengthening evaluation standards, improving coordination and addressing risks such as bias and inequity are presented as central to enabling safe and scalable adoption.

The resulting UK framework aims to balance innovation with accountability while ensuring equitable access to healthcare technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes cybersecurity access as defenders gain new tools

OpenAI has expanded its Trusted Access for Cyber programme and introduced a more permissive AI model designed specifically for cybersecurity work. The initiative reflects a broader shift in digital security, in which advanced AI tools are increasingly integrated into both defensive and offensive cyber operations.

The development highlights a structural change in cybersecurity, where defenders are no longer relying solely on traditional tools but are instead incorporating AI systems capable of analysing code, identifying vulnerabilities and accelerating incident response.

At the same time, the same technological capabilities are becoming accessible to malicious actors, intensifying the need for controlled and verified access.

New automated vulnerability tools are being deployed to detect and fix security flaws at scale, moving towards continuous AI-assisted protection. Rather than periodic security reviews, development environments are gradually shifting towards real-time monitoring and automated remediation.

The broader implication is a tightening link between AI capability growth and cyber risk management. Access frameworks based on identity verification and trust signals aim to balance the wider availability of defensive tools with safeguards against misuse.

The expansion of AI-driven cybersecurity tools comes as software systems grow more complex and interconnected, leaving traditional periodic security checks increasingly insufficient against fast-evolving threats.

Cybersecurity is moving towards an always-on, automated model, and the balance struck between openness and restriction will shape how resilient digital infrastructure becomes as AI-driven threats and defences evolve in parallel.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Sussex police deploy AI cameras to detect traffic offences

Sussex Police has introduced AI cameras to detect drivers using mobile phones or not wearing seatbelts. The technology is being deployed to support enforcement and reduce road safety risks.

The rollout follows a 2024 trial by National Highways in Sussex, during which 458 offences were detected in 7 days. Most cases involved seatbelt violations, while others included mobile phone use or both offences combined.

Chief Constable Jo Shiner said the cameras are intended to support policing rather than replace it. She added that AI cameras help monitor driver behaviour and enable action where necessary.

Police and Crime Commissioner Katy Bourne said the technology would strengthen enforcement and allow resources to be used more effectively. She noted that collisions linked to phone use and lack of seatbelts continue to cause injuries.

The cameras, supplied by Acusensus, will operate for several weeks before evaluation. Officials said the system will contribute to wider road safety efforts and ongoing monitoring initiatives.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Data Protection Board introduces DPIA template to strengthen GDPR compliance

The European Data Protection Board has introduced a standardised template for Data Protection Impact Assessments (DPIAs), aiming to improve consistency and simplify GDPR compliance across Europe.

The initiative follows the board’s broader effort to harmonise regulatory practices and make data protection requirements easier for organisations to apply.

A DPIA is required when data processing is likely to pose a high risk to individuals’ rights and freedoms. It involves describing how personal data is handled, assessing necessity and proportionality, and identifying measures to reduce risk.

The new template is designed to guide organisations step by step, offering structured fields that improve clarity and reduce the risk of incomplete or inconsistent assessments.

While use of the template is not mandatory, organisations are encouraged to adopt it as a practical tool to streamline reporting and ensure completeness. An accompanying document simplifies key concepts and addresses common uncertainties, making implementation more accessible across sectors.
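The value of structured fields is that gaps become visible before an assessment is submitted. The sketch below models the elements the article lists (processing description, necessity and proportionality, risks and mitigations) as a simple record with a completeness check; all field names are hypothetical illustrations, not the EDPB template's actual structure.

```python
from dataclasses import dataclass, field

# Hypothetical field names for illustration only; the EDPB template defines its own fields.
@dataclass
class DPIARecord:
    processing_description: str          # how personal data is handled
    purposes: list[str]
    necessity_justification: str         # why the processing is necessary
    proportionality_justification: str   # why it is proportionate to the purpose
    identified_risks: list[str]
    mitigation_measures: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Structured fields make omissions explicit: core justifications must be
        filled in, and identified risks must be matched by mitigation measures."""
        return bool(
            self.processing_description
            and self.necessity_justification
            and self.proportionality_justification
            and (not self.identified_risks or self.mitigation_measures)
        )

dpia = DPIARecord(
    processing_description="Patient survey responses stored in an EU-hosted database",
    purposes=["service improvement"],
    necessity_justification="Feedback cannot be assessed without the responses",
    proportionality_justification="Only aggregate, pseudonymised data is retained",
    identified_risks=["re-identification from free-text answers"],
    mitigation_measures=["strip free text before analysis"],
)
print(dpia.is_complete())  # True
```

A risk listed without a corresponding mitigation would make `is_complete()` return False, which is the kind of inconsistency a standardised template is meant to catch.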

The template will remain open for public consultation until 9 June, after which national data protection authorities are expected to integrate it into their frameworks. Stakeholders are invited to provide feedback during this period as part of ongoing efforts to align data protection practices across the EU.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Polish data protection authority seeks personal data rules for civic budgets

The President of Poland’s Personal Data Protection Office, Mirosław Wróblewski, has called for legislation clarifying how personal data should be processed in so-called civic budget procedures.

In a submission to the Minister of the Interior and Administration, Wróblewski said that current local government rules do not comprehensively regulate the processing of personal data in participatory budgeting.

According to the office, civic budget procedures involve the processing of personal data not only by public authorities but also by citizens who collect, record, and submit support lists for proposed projects. The authority says this has created practical difficulties for both public bodies responsible for consultations and the people whose data are processed.

The office says local government laws in Poland should clarify who acts as the data controller, what categories of personal data may be processed, how the status of eligible voters should be verified, and how personal data should be secured. It notes that current rules leave these issues largely to local resolutions, without precise statutory criteria on data processing.

The submission also raises concerns about the scope of personal data collected during voting. It states that some civic budget procedures require voters to provide a PESEL number, which can exclude residents who do not have one, including some foreigners and Polish citizens born abroad who use only a passport.

The office says the collection and further processing of PESEL numbers for strictly defined purposes should follow directly from legal provisions and notes that administrative case law has generally found no legal basis for requiring it in this context.

The authority also calls for rules on electronic voting in civic budgets. It says that local authorities do not always consider themselves responsible for data security before support lists are transferred, and that people collecting signatures are not always aware of their responsibilities for processing personal data.

It adds that digital platforms used for such voting should meet minimum criteria consistent with the GDPR and with broader cybersecurity and digital identity frameworks, including NIS2 and eIDAS2.

According to the office, such systems should comply with data minimisation requirements and ensure transparency and verifiability of the voting process, including auditability and verification of vote counting.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!