Data Protection Act regulations bring AI code requirement into force

The UK has brought into force regulations requiring the Information Commissioner to prepare a code of practice on the processing of personal data in relation to AI and automated decision-making.

The Data Protection Act 2018 (Code of Practice on Artificial Intelligence and Automated Decision-Making) Regulations 2026 were made on 16 April, laid before Parliament on 21 April, and came into force on 12 May. The regulations apply across England and Wales, Scotland and Northern Ireland.

Under the regulations, the Information Commissioner must prepare a code giving guidance on good practice in the processing of personal data under the UK GDPR and the Data Protection Act 2018 when developing and using AI and automated decision-making systems.

The code must also include guidance on good practice in the processing of children’s personal data. Automated decision-making is defined by reference to provisions in the UK GDPR and the Data Protection Act 2018 inserted through the Data (Use and Access) Act 2025.

The instrument also modifies the panel requirements for preparing or amending the code. Any panel established to consider the code must not consider or report on aspects relating to national security.

The explanatory note states that no full impact assessment was prepared for the instrument because the regulations themselves are not expected to have a significant impact on the private, voluntary or public sectors. The Information Commissioner must produce an impact assessment when preparing the code.

Why does it matter?

The regulations move UK guidance on AI, automated decision-making and personal data onto a statutory track. The eventual code could become an important reference point for organisations using AI systems that process personal data, particularly where automated decisions or children’s data are involved. For now, the main development is procedural: the Information Commissioner is required to prepare the code, while the practical compliance details will follow through that process.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU reaches provisional deal on targeted AI Act changes

The Council presidency and European Parliament negotiators have reached a provisional agreement on targeted changes to the EU AI Act as part of the Omnibus VII package, which aims to simplify parts of the Union’s digital rulebook and ease implementation burdens.

According to the announcement, the deal broadly preserves the thrust of the Commission’s proposal on high-risk AI systems. The provisional agreement sets new application dates of 2 December 2027 for stand-alone high-risk AI systems and 2 August 2028 for high-risk AI systems embedded in products.

The agreement also extends certain simplification measures beyond SMEs to small mid-caps, while keeping some safeguards. It reinstates the obligation for providers to register AI systems in the EU database where they consider those systems exempt from high-risk classification, and restores the requirement of strict necessity for processing special categories of personal data for bias detection and correction.

At the same time, the co-legislators added a new prohibited AI practice covering the generation of non-consensual sexual and intimate content and child sexual abuse material (CSAM). The deal also postpones the deadline for national AI regulatory sandboxes to 2 August 2027 and shortens the grace period for transparency measures for AI-generated content from 6 months to 3 months, with a new deadline of 2 December 2026.

The provisional agreement further clarifies the division of supervisory powers between the AI Office and national authorities, particularly where general-purpose AI models and downstream AI systems are developed by the same provider, by listing exceptions where national authorities remain competent. It also addresses overlaps between the AI Act and sectoral legislation in areas such as medical devices, toys, machinery, lifts and watercraft: where sectoral law imposes AI-specific requirements similar to those of the AI Act, the AI Act's application is limited through implementing acts. A specific solution was found for the machinery regulation, which is exempted from the direct applicability of the AI Act; instead, the Commission is empowered to adopt delegated acts under the machinery regulation adding health and safety requirements for AI systems classified as high-risk under the AI Act.

The text must still be endorsed by both the Council and the European Parliament before undergoing legal and linguistic revision and formal adoption. The proposal is part of the EU’s broader simplification agenda, which has been driven by calls from the European Council and followed by a series of Omnibus packages since early 2025.

Marilena Raouna, Deputy Minister for European Affairs of the Republic of Cyprus, elaborated: ‘Today’s agreement on the AI Act significantly supports our companies by reducing recurring administrative costs. It ensures legal certainty and a smoother and more harmonised implementation of the rules across the Union, strengthening EU’s digital sovereignty and overall competitiveness.’

Raouna added: ‘At the same time, we are stepping up the protection of children targeting risks linked to the AI systems. This agreement is clear evidence of our institutions’ ability to act swiftly and deliver on our commitments. It marks the first deliverable under the ‘One Europe, One Market’ roadmap agreed by the three institutions last week, well within the set deadline.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ireland and the EU intensify DSA pressure on Meta

Coimisiún na Meán, the media regulator of Ireland, has launched two formal investigations into Meta over the design of recommender systems on Facebook and Instagram under the Digital Services Act. The investigations focus on whether users are prevented from choosing recommendation feeds that are not based on the profiling of their personal data.

Coimisiún na Meán said concerns emerged following platform supervision reviews and complaints linked to potential ‘dark patterns’ and deceptive interface designs. Regulators are examining whether users can easily access and modify non-profiled recommendation feeds as required under Article 27 of the DSA, alongside whether interface designs may improperly influence user choices under Article 25.

John Evans, Digital Services Commissioner at Coimisiún na Meán, said recommender systems can repeatedly push harmful material into user feeds, particularly affecting children and younger users. The regulator also warned that Very Large Online Platforms (VLOPs) must ensure users can exercise their rights under the DSA without manipulation or unnecessary barriers.

EU investigates Meta over under-13 access on Instagram and Facebook

At the same time, the European Commission has preliminarily found Meta in potential breach of the DSA over failures to adequately prevent children under 13 from accessing Instagram and Facebook. Regulators said Meta’s age verification and reporting systems may be ineffective, while the company’s risk assessments allegedly failed to properly address harms faced by underage users.

Why does it matter?

These investigations are critical because they could shape how the DSA is enforced across Europe, particularly in cases involving children and algorithmic recommendation systems. If regulators conclude that Meta failed to properly protect minors or used manipulative interface designs that discouraged users from choosing non-profiled feeds, the case may set a wider precedent for how large online platforms handle age assurance, user consent, privacy protections, and recommender system transparency under EU law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI found non-compliant in Canadian ChatGPT privacy probe

Canada’s federal and provincial privacy regulators have found that aspects of OpenAI’s collection, use, and disclosure of personal information through ChatGPT did not comply with applicable private-sector privacy laws, particularly in relation to model training on publicly accessible online data and user interactions.

The joint investigation was conducted by the Office of the Privacy Commissioner of Canada, the Commission d’accès à l’information du Québec, and the privacy commissioners of British Columbia and Alberta.

It examined OpenAI’s GPT-3.5 and GPT-4 models as used in ChatGPT, focusing on whether the company’s handling of personal information from public internet sources, licensed third-party datasets, and user interactions met legal requirements on appropriate purposes, consent, transparency, accuracy, access, retention, and accountability.

The regulators accepted that OpenAI’s overall purposes for developing and deploying ChatGPT were legitimate and appropriate. However, they found that the company’s initial collection of personal information from publicly accessible websites and licensed third-party sources for model training was overbroad and therefore inappropriate, given the scale, sensitivity, and potential inaccuracy of the data involved, as well as the limits of the mitigation measures in place at the time.

The Offices also found that OpenAI failed to obtain valid consent to collect and use personal information from public internet sources to train its models. They concluded that implied consent was not sufficient because the data could include sensitive personal information and because individuals would not reasonably have expected information about them posted online to be scraped and used for AI model training in this way.

On user interactions with ChatGPT, the regulators accepted that using some chat data for model improvement could serve OpenAI’s legitimate purposes. Still, they found that express consent should have been obtained.

They said OpenAI’s safeguards at the time were not strong enough to ensure that sensitive personal information would not be included in training data, and that many users would not reasonably have understood that their conversations could be used to train models or reviewed by human trainers.

The report also found that OpenAI should have obtained express consent for certain disclosures of personal information through ChatGPT outputs, especially where the information was sensitive or fell outside individuals’ reasonable expectations.

While OpenAI had introduced measures to reduce the risk of sensitive disclosures, the regulators said those measures covered a narrower set of information than the broader categories of personal information protected under the relevant privacy laws.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO and Oxford University launch global AI course for courts

UNESCO, in partnership with the University of Oxford, has launched a free online course aimed at preparing judicial systems for the growing role of AI in legal decision-making.

AI is already shaping court processes, influencing evidence assessment, and affecting access to justice. Yet, many legal professionals lack structured guidance to evaluate such systems within a rule-of-law framework.

The UNESCO programme introduces a practical, human rights-based approach to AI, combining legal, ethical, and operational perspectives.

Developed with institutions including Oxford’s Saïd Business School and Blavatnik School of Government, the course equips participants with tools to assess algorithmic outputs, manage risks of bias, and maintain judicial independence in increasingly digital court environments.

Central to UNESCO’s initiative is a newly developed AI and Rule of Law Checklist, designed to help courts scrutinise AI systems and their outputs, including use as evidence.

The course also addresses broader concerns, including fairness, transparency, accountability, and the protection of vulnerable groups, reflecting rising global reliance on AI across justice systems.

Supported by the EU, the course is available globally, free of charge, with certification from the University of Oxford. As AI becomes embedded in judicial processes, capacity-building efforts aim to ensure technological adoption strengthens rather than undermines the rule of law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPB adopts scientific research data guidelines and Europrivacy opinions

The European Data Protection Board (EDPB) has adopted guidelines on the processing of personal data for scientific research purposes during its latest plenary, and opened them for public consultation until 25 June. The Board also created a dedicated ‘sprint team’ to complete its upcoming guidelines on anonymisation by the summer.

According to the EDPB, the new guidelines are intended to provide researchers with greater clarity on how the General Data Protection Regulation (GDPR) applies to scientific research while protecting individuals’ fundamental rights. The Board says the text clarifies the meaning of ‘scientific research’ under the GDPR and sets out six indicative factors to help determine whether processing is carried out for scientific research purposes.

The guidelines also explain that further processing for scientific research purposes is presumed to be compatible with the initial purpose for collecting personal data, meaning controllers do not need to carry out the GDPR purpose compatibility test. The EDPB says controllers must still ensure that the legal basis for the initial processing is also suitable for the further processing of personal data for scientific research purposes.

EDPB Chair Anu Talus said: ‘Scientific research can drive societal progress and improve our daily lives. Our guidelines facilitate innovative research by helping researchers to navigate the GDPR. The EDPB is committed to supporting the scientific community and unlocking the full potential of scientific research in the EU while upholding data protection rights.’

On consent, the Board says controllers may rely on ‘broad consent’ when research purposes are not fully known at the time of data collection, provided appropriate safeguards are in place. It also says controllers may seek consent separately for individual research projects once their purposes become known, and that a combination of broad and dynamic consent is possible.

The guidelines also address the rights of individuals, including the rights to erasure and to object, and explain when limitations may apply in the context of scientific research. The EDPB says the text also clarifies how responsibilities should be allocated when several entities are involved in processing, and outlines safeguards such as anonymisation or pseudonymisation, secure processing environments, privacy-enhancing technologies, confidentiality arrangements, and conditions for further use.

In addition, the Board adopted two opinions on two sets of Europrivacy certification criteria for approval as European Data Protection Seals. One opinion approves an updated set of criteria whose scope now includes controllers and processors established outside Europe that are subject to Article 3(2) GDPR.

The second, adopted for the first time, recognises Europrivacy certification criteria as a European Data Protection Seal that can be used as a tool for transfers under Articles 42 and 46 GDPR. According to the EDPB, this will allow data importers outside Europe that are not subject to the GDPR to apply to the Europrivacy certification scheme for transferred data they receive.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU updates technology licensing competition rules to reflect data and digital markets

The European Commission has adopted revised rules governing technology transfer agreements (Technology Transfer Block Exemption Regulation and Guidelines on the application of Article 101 of the Treaty to technology transfer agreements), updating a framework originally introduced in 2014.

These changes aim to reflect developments in the digital economy, particularly the growing role of data and standardised technologies in enabling interoperability across markets.

Technology transfer agreements allow firms to license intellectual property such as patents, software and design rights, supporting the dissemination of innovation. While such agreements are often considered pro-competitive, they may also create risks if they restrict market access or distort competition.

The revised framework clarifies how these agreements are assessed under Article 101 of the Treaty on the Functioning of the European Union.

The updated rules introduce specific guidance on data licensing and licensing negotiation groups, addressing new market practices.

They also refine conditions under which agreements benefit from exemptions, including simplified criteria for early-stage technologies and clearer safeguards for technology pools linked to industry standards.

Overall, the EU's revision seeks to improve legal certainty for businesses while ensuring that licensing practices support innovation, competition and the broader functioning of the single market. The new framework will apply from May 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Meta over WhatsApp AI access in major antitrust enforcement case

The European Commission has issued a supplementary charge sheet to Meta, known as a Supplementary Statement of Objections, outlining concerns over potential restrictions on third-party AI assistants' access to WhatsApp.

The move forms part of an ongoing investigation into a possible abuse of a dominant market position under EU competition rules.

The Commission's preliminary assessment suggests that recent policy changes, including the introduction of access fees, may have effects equivalent to an earlier exclusion of competing AI services.

This raises concerns about barriers to entry and reduced competition in the emerging market for AI assistants.

As part of interim measures under Article 102 of the Treaty on the Functioning of the European Union, regulators are considering requiring Meta to restore access to its services under previous conditions.

Such measures aim to prevent serious and potentially irreversible harm to competition while the investigation continues.

The case has been expanded to cover the entire European Economic Area, reflecting coordination with national authorities.

These proceedings highlight increasing regulatory scrutiny of platform control over AI ecosystems and access to digital markets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK tests AI transcripts to improve access to justice and reduce court costs

The UK Ministry of Justice, alongside HM Courts & Tribunals Service, has launched a study examining how AI can be used to generate court transcripts more efficiently.

The initiative aims to reduce the cost and time required for accessing official court records.

Currently, transcript fees can be prohibitively expensive, limiting access for victims seeking clarity on court proceedings. The proposed use of AI-based systems, including in-house tools such as Justice Transcribe, could lower these barriers while maintaining required accuracy standards.

The policy forms part of broader efforts in the UK to modernise the justice system and enhance transparency. It aligns with legislative developments, including the Victims and Courts Bill, and plans to provide free access to sentencing remarks in Crown Court cases from 2027.

By improving access to legal records, the initiative seeks to strengthen accountability and support victims’ understanding of judicial processes, contributing to a more accessible and responsive justice system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI maps hidden structure in legal systems to support better regulation

A study from Sultan Qaboos University shows how AI can be used to map hidden structural relationships within legal systems, offering new ways to understand how laws interact and evolve.

Published in The Journal of Engineering Research, the research applies natural language processing and network analysis to Oman’s 2023 Labour Law.

The analysis reveals that legal provisions operate as an interconnected system rather than isolated rules. Certain articles emerge as highly influential ‘hubs’, with Article 147 identified as a central node whose modification could generate cascading effects across multiple parts of the legislation.

These interdependencies are visualised through network mapping techniques that highlight structural relationships not easily detected through traditional review.

To construct this model, researchers developed a four-stage methodology combining Arabic-language NLP tools with industrial engineering approaches. Legal texts were mapped using terminology and cross-referencing patterns, with outputs validated by Omani legislative experts to ensure accuracy and relevance.
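The "hub" idea behind the study's network analysis can be illustrated with a toy sketch: model cross-references between articles as graph edges and count how many links touch each article. The article numbers and links below are hypothetical, purely for illustration, and the study itself used Arabic NLP and richer network metrics than a simple degree count.

```python
from collections import Counter

# Hypothetical cross-references between articles of a statute, as
# (citing_article, cited_article) pairs. Illustrative data only, not
# the actual cross-reference set of Oman's 2023 Labour Law.
cross_refs = [
    (10, 147), (12, 147), (55, 147), (90, 147),
    (147, 3), (20, 12), (55, 20), (3, 10),
]

# Degree = number of cross-reference links touching an article.
# Articles with unusually high degree act as structural "hubs":
# amending them may cascade through every article linked to them.
degree = Counter()
for citing, cited in cross_refs:
    degree[citing] += 1
    degree[cited] += 1

hub, links = degree.most_common(1)[0]
print(f"Most connected article: {hub} ({links} links)")  # → Article 147
```

In this toy data, the article touched by the most links surfaces as the hub, mirroring how the study identified Article 147 as a central node whose modification could ripple across the legislation.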

The study highlights links between labour law and broader regulatory domains, including commercial regulation, social protection, occupational health, and immigration policy.

The findings underline AI’s potential in the regulatory sector to improve coherence, reveal interdependencies, and support scalable, more consistent legal frameworks across jurisdictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!