Data Protection Act regulations bring AI code requirement into force

The UK has brought into force regulations requiring the Information Commissioner to prepare a code of practice on the processing of personal data in relation to AI and automated decision-making.

The Data Protection Act 2018 (Code of Practice on Artificial Intelligence and Automated Decision-Making) Regulations 2026 were made on 16 April, laid before Parliament on 21 April, and came into force on 12 May. The regulations apply across England and Wales, Scotland and Northern Ireland.

Under the regulations, the Information Commissioner must prepare a code giving guidance on good practice in the processing of personal data under the UK GDPR and the Data Protection Act 2018 when developing and using AI and automated decision-making systems.

The code must also include guidance on good practice in the processing of children’s personal data. Automated decision-making is defined by reference to provisions in the UK GDPR and the Data Protection Act 2018 inserted through the Data (Use and Access) Act 2025.

The instrument also modifies the panel requirements for preparing or amending the code. Any panel established to consider the code must not consider or report on aspects relating to national security.

The explanatory note states that no full impact assessment was prepared for the instrument because the regulations themselves are not expected to have a significant impact on the private, voluntary or public sectors. The Information Commissioner must produce an impact assessment when preparing the code.

Why does it matter?

The regulations move UK guidance on AI, automated decision-making and personal data onto a statutory track. The eventual code could become an important reference point for organisations using AI systems that process personal data, particularly where automated decisions or children’s data are involved. For now, the main development is procedural: the Information Commissioner is required to prepare the code, while the practical compliance details will follow through that process.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta gives parents deeper insight into teen algorithms

Meta has introduced new supervision features designed to give parents greater visibility into the content shaping teenagers’ experiences on Instagram.

The updated tools allow parents and guardians to view the general topics their teens engage with through Instagram’s ‘Your Algorithm’ feature, which helps shape recommendations on Reels and Explore. Meta said parents in selected markets will soon receive notifications when teens add new interests, such as basketball, photography or musicals, helping explain why recommended content may change over time.

The company said the feature remains subject to existing teen safety protections and content restrictions already applied to Teen Accounts, including limits on certain content for users aged 13 and above and enforcement of Meta’s Community Standards.

Meta has also consolidated supervision tools for Instagram, Facebook, Messenger and Meta Horizon into a single Family Centre hub. Parents can now manage supervised accounts, safety settings and invitations across multiple apps without switching between separate platforms.

Meta said the number of US teens enrolled in supervision on Instagram has more than doubled over the past year. Additional updates planned for the coming months include aggregated activity insights, such as total time spent across Meta’s apps, to give families broader visibility into teen online habits.

Why does it matter?

The update shows how major platforms are responding to pressure for greater transparency around their recommendation systems, particularly regarding teenagers. While the tools do not reveal the full logic of Instagram’s algorithm, they give parents more visibility into the interest categories shaping teen content feeds and create another layer of oversight around personalised recommendations, screen time and online safety.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New South Wales criminalises AI sexual deepfakes

Australia’s New South Wales state has clarified that creating, sharing, or threatening to share sexually explicit images, videos, or audio of a person without consent is a criminal offence, including where the material has been digitally altered or generated using AI.

The state government strengthened protections in 2025 by amending the Crimes Act 1900 to cover digitally generated deepfakes. The law already applied to real sexually explicit images, but it now also covers content created or altered by AI to place someone in a sexual situation they were never in.

The reforms mean that non-consensual sexual images or audio are covered regardless of how they were made. Threatening to create or share such material is also a criminal offence in New South Wales, with penalties of up to three years in prison, a fine of up to A$11,000, or both.

Courts can also order offenders to remove or delete the material. Failure to comply with such an order can result in up to two years’ imprisonment, a fine of up to A$5,500, or both.

The law operates alongside existing child abuse material offences. Under criminal law, any material depicting a person under 18 in a sexually explicit way can be treated as child abuse material, including AI-generated content.

Criminal proceedings against people under 16 can begin only with the approval of the Director of Public Prosecutions, which is intended to ensure that only the most serious matters involving young people enter the criminal justice system.

Limited exemptions apply for proper purposes, including genuine medical, scientific, law enforcement, or legal proceedings-related purposes. A review of the law will take place 12 months after it comes into effect to assess how it is working and whether changes are needed.

The changes are intended to address the misuse of AI and deepfake technology to harass, shame, or exploit people through fake digital content. New South Wales says its criminal law works alongside national online safety frameworks, including the work of Australia’s eSafety Commissioner, as it seeks to keep privacy and consent protections aligned with emerging technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU weighs social media age rules to protect children

The European Commission has signalled that it may propose EU-level rules on delaying children’s access to social media, as concerns grow over addictive platform design, harmful content and AI-enabled risks for minors.

In a keynote address at the European Summit on Artificial Intelligence and Children in Copenhagen, European Commission President Ursula von der Leyen said the EU must consider whether young people should be given more time before using social media. She said the question was not whether young people should have access to social media, but ‘whether social media should have access to young people’.

Von der Leyen said almost all EU member states had called for an assessment of whether a minimum age is needed, while Denmark and nine other member states want to introduce one. She added that the Commission’s expert panel on child safety online is advising on the issue, and that a legal proposal could follow this summer, depending on its findings.

Von der Leyen linked the debate to wider concerns about platform business models. She argued that children’s attention was being treated as a commodity through addictive design, advertising, algorithmic recommendation systems and content that can harm mental health. She also pointed to risks linked to AI-generated sexualised images and child sexual abuse material.

The Commission President cited enforcement under the Digital Services Act, including actions involving TikTok, Meta and X, as well as investigations into platforms over whether children are being drawn into harmful content. She said the EU had created strong tools through the Digital Services Act and the Digital Markets Act, and that platforms breaking the rules would be held accountable.

Von der Leyen said that any age restriction model would depend on reliable age verification. She said the EU had developed an open-source age verification app that would soon be available, including a rollout in Denmark by summer, and that the Union was working with member states to integrate it into digital wallets.

The speech also framed child online safety as a matter of platform responsibility, not just parental control. Von der Leyen said social media companies should be responsible for product safety in the same way other industries are, adding that ‘safety by design’ protections should be strengthened and expanded. She also pointed to the forthcoming Digital Fairness Act, which is expected to address addictive and harmful design practices.

Why does it matter?

The speech suggests that EU child online safety policy may be moving from platform accountability after harm occurs towards more structural controls over access, design and age verification. A possible social media delay would mark a major shift in how the EU approaches children’s participation online, raising questions about privacy-preserving age checks, children’s rights, parental responsibility, platform duties and the balance between protection and digital inclusion.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Texas lawsuit targets Netflix data practices

The Attorney General of Texas has filed a lawsuit against Netflix, alleging the company unlawfully collected user data without consent. The case claims the platform tracked extensive behavioural information from both adults and children while presenting itself as privacy-conscious.

According to the lawsuit, Netflix allegedly logged viewing habits, device usage and other interactions, turning user activity into monetised data. The lawsuit further claims that this data was shared with brokers and advertising technology firms to build detailed consumer profiles.

The Attorney General also argues that Netflix designed features to increase engagement, including autoplay, which allegedly encouraged prolonged viewing, particularly among younger users. These practices allegedly contradict the platform’s public messaging about being ad-free and family-friendly.

Texas’s complaint quoted a statement from Netflix co-founder and Chairman Reed Hastings, who allegedly said the company did not collect user data. He sought to distinguish Netflix’s approach from other major technology platforms with regard to data collection.

The Attorney General also claims that Netflix’s alleged surveillance violates the Texas Deceptive Trade Practices Act. The legal action seeks to halt the alleged data practices, introduce stricter controls, such as disabling autoplay for children, and impose penalties under consumer protection law, including civil fines of $10,000 per violation. The case highlights ongoing scrutiny of data practices by major technology platforms in the USA.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Child safety concerns dominate Europe’s digital agenda

A growing majority of Europeans believe stronger online protections for children and young people should remain a top policy priority, according to new findings from the Special Eurobarometer on the Digital Decade.

The European Commission said 92% of Europeans consider further action to protect children and young people online a top priority, reflecting sustained concern over the impact of digital platforms on younger users.

Mental health risks linked to social media ranked among the biggest concerns, with 93% of respondents calling for stronger protections. Cyberbullying, online harassment, and better age-restriction mechanisms for inappropriate content were also highlighted by 92% of respondents.

Concerns over AI and online manipulation also remain high. The survey found that 39% of respondents cited privacy or data protection as a barrier to using AI, followed by accuracy or incorrect information at 36% and ethical issues or misuse of generative AI tools at 32%.

Around 87% of Europeans agreed that online manipulation, including disinformation, foreign interference, AI-generated content and deepfakes, poses a threat to democratic processes. Another 80% said AI development should be carefully regulated to ensure safety, even if oversight places constraints on developers.

The findings also show continuing concern over online platforms. Europeans reported being personally affected by fake news and disinformation, misuse of personal data and insufficient protections for minors, with concerns over fake news and child protection showing the sharpest increases since 2024.

Why does it matter?

The findings show that public concern over digital technologies in Europe is increasingly centred on safety, rights and accountability, particularly for children and young people. They also suggest that trust in platforms and AI systems will depend not only on innovation and access, but also on visible safeguards against manipulation, harmful content, privacy risks, and weak protections for minors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK’s Ofcom prioritises child protection and AI moderation under Online Safety Act

The UK’s Ofcom has outlined its main online safety priorities for 2026–27, signalling tougher oversight of digital platforms under the UK’s Online Safety Act. The regulator said it will continue focusing heavily on child protection while expanding enforcement efforts against illegal hate speech, terrorism-related material, intimate image abuse, and AI-generated harms.

The regulator confirmed that more than 100,000 online services now fall within the scope of the legislation, creating major compliance and enforcement challenges. Ofcom said it will continue investigating platforms that fail to prevent harmful or illegal content, while also preparing new rules linked to additional UK legislation covering cyberflashing, non-consensual intimate imagery, and generative AI services.

Ofcom stated that major online platforms have already introduced broader age verification measures under regulatory pressure. Services including gaming, dating, social media, and pornography platforms have implemented stronger age checks and child safety protections.

Furthermore, the regulator said it will expand supervision of large technology companies and publish updated safety codes later this year, including guidance on AI-powered moderation systems.

According to Ofcom, future compliance work will increasingly focus on the effectiveness of platform moderation systems rather than relying solely on reactive content removal. The regulator also plans to strengthen protections for women and girls online through new technical standards designed to block the spread of non-consensual intimate images and sexual deepfakes at scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

FTC guidance sets out platform duties under Take It Down Act

The US Federal Trade Commission has issued guidance for online platforms on compliance with Section 3 of the Take It Down Act, which takes effect on 19 May 2026 and requires covered platforms to remove non-consensual intimate photos or videos within 48 hours of receiving a valid request.

The FTC says the law applies to a broad range of online platforms, including websites, apps, social media, messaging, image and video sharing, and gaming services. Platforms may fall under the law if they primarily provide a forum for user-generated content or regularly publish, curate, host, or furnish intimate content shared without consent.

Covered platforms must provide clear and conspicuous plain-language information about how people can submit removal requests for intimate photos or videos shared without consent. The FTC says platforms should make the process easy to use, including for people who do not have an account on the service.

The law also covers ‘digital forgeries’, including intimate images that were digitally created or altered using software, apps, or AI. Platforms that receive a valid request must remove the reported content and make reasonable efforts to locate and remove known identical copies within 48 hours.

The FTC also encourages platforms to help prevent removed images from spreading further, including through hashing technology and, where appropriate, by sharing hashes with services such as the National Center for Missing and Exploited Children’s Take It Down service or StopNCII.org.
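The hash-matching approach the FTC points to can be illustrated with a minimal sketch: a platform fingerprints each removed image and checks new uploads against that blocklist. This is a simplified illustration only, not the method used by NCMEC’s Take It Down or StopNCII.org; real systems typically rely on perceptual hashing so that resized or re-encoded copies also match, whereas the cryptographic hash below only catches byte-identical files.

```python
import hashlib

def file_hash(data: bytes) -> str:
    # Exact-match fingerprint: identical bytes produce an identical hash.
    return hashlib.sha256(data).hexdigest()

# Blocklist of hashes for images already removed after valid requests
# (the image bytes here are placeholders for illustration).
blocked_hashes = {file_hash(b"reported-image-bytes")}

def should_block(upload: bytes) -> bool:
    # Reject an upload whose hash matches a known removed image.
    return file_hash(upload) in blocked_hashes
```

Because a single changed byte produces a completely different SHA-256 digest, exact hashing alone cannot find near-duplicates; that gap is why industry schemes use perceptual hashes, which stay similar under cropping or compression.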

Violations of the Take It Down Act will be enforced by the FTC and treated as violations of an FTC rule. The agency says platforms that breach the law may face civil penalties of $53,088 per violation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Canada issues age assurance guidance

The Office of the Privacy Commissioner of Canada has issued guidance on how organisations should assess and implement age assurance tools for websites and online services.

The OPC states that age assurance should only be used where there is a clear legal requirement or a demonstrable risk of harm to children. It emphasises that organisations must evaluate whether alternative, less intrusive measures could address these risks before adopting such systems.

The guidance highlights that any age assurance approach, including those that use AI, must be proportionate, limit personal data collection, and operate in a privacy-protective manner. It also warns against using collected data for other purposes or linking user activity across sessions.

The OPC adds that organisations must give users a choice over the type of personal information used in an age assurance process, provide appeal mechanisms, and minimise repeated verification. The framework aims to balance child protection with privacy rights, and the guidance applies to online services in Canada.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ireland and the EU intensify DSA pressure on Meta

Coimisiún na Meán, the media regulator of Ireland, has launched two formal investigations into Meta over the design of recommender systems on Facebook and Instagram under the Digital Services Act. The investigations focus on whether users are prevented from choosing recommendation feeds that are not based on the profiling of their personal data.

Coimisiún na Meán said concerns emerged following platform supervision reviews and complaints linked to potential ‘dark patterns’ and deceptive interface designs. Regulators are examining whether users can easily access and modify non-profiled recommendation feeds as required under Article 27 of the DSA, alongside whether interface designs may improperly influence user choices under Article 25.

John Evans, Digital Services Commissioner at Coimisiún na Meán, said recommender systems can repeatedly push harmful material into user feeds, particularly affecting children and younger users. The regulator also warned that Very Large Online Platforms (VLOPs) must ensure users can exercise their rights under the DSA without manipulation or unnecessary barriers.

EU investigates Meta over under-13 access on Instagram and Facebook

At the same time, the European Commission has issued preliminary findings that Meta may be in breach of the DSA over failures to adequately prevent children under 13 from accessing Instagram and Facebook. Regulators said Meta’s age verification and reporting systems may be ineffective, while the company’s risk assessments allegedly failed to properly address harms faced by underage users.

Why does it matter?

These investigations are critical because they could shape how the DSA is enforced across Europe, particularly in cases involving children and algorithmic recommendation systems. If regulators conclude that Meta failed to properly protect minors or used manipulative interface designs that discouraged users from choosing non-profiled feeds, the case may set a wider precedent for how large online platforms handle age assurance, user consent, privacy protections, and recommender system transparency under EU law.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!