Scotland considers new offence for AI intimate images

The Scottish government has launched a consultation proposing a specific criminal offence for creating AI-generated intimate images without consent. Existing Scots law covers the sharing of such images, but ministers say gaps remain around their creation.

The consultation also seeks views on criminalising digital tools designed solely to produce intimate images and videos. Ministers aim to address harms linked to emerging AI technologies that affect women and girls across Scotland.

Additional proposals include a statutory aggravation where domestic abuse involves a pregnant woman, requiring courts to treat such cases more seriously at sentencing. Measures to strengthen protections against spiking offences are also under review.

Justice Secretary Angela Constance said responses would inform future action to reduce violence against women and girls. The consultation also considers changes to non-harassment orders and examines whether further laws on non-fatal strangulation are needed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Uni.lu expert urges schools to embrace AI

AI should be integrated into Luxembourg classrooms rather than avoided, according to Gilbert Busana of the University of Luxembourg. Speaking to RTL Today, he said ignoring AI would be a disservice to pupils and teachers alike.

Busana argued that AI should be taught both as a standalone subject and across disciplines in Luxembourg schools. Clear guidelines are needed to define when and how pupils may use AI, alongside transparency about its role in assignments.

He stressed that developing AI literacy is essential to protect critical thinking. Assessment methods may shift away from focusing solely on final outputs towards evaluating the learning process itself.

Teachers in Luxembourg are increasingly becoming coaches rather than mere transmitters of knowledge. Busana said continuous professional training and collaboration within schools will be vital as AI reshapes education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU moves to enforce digital fairness rules with stronger consumer oversight

Regulatory scrutiny of the EU’s digital fairness framework is set to begin on 1 July as the European Commission moves to tighten its supervision of online platforms.

The initiative forms part of a broader effort to ensure stronger consumer protection across digital markets, with officials signalling stricter oversight of commercial practices that disadvantage users.

The Commission is preparing a major upgrade of its consumer protection framework, expected by December 2026.

The reforms aim to reinforce enforcement tools under the Unfair Commercial Practices Directive and the Consumer Protection Cooperation Regulation, allowing regulators to intervene more effectively when platforms breach fairness standards.

Michael McGrath, Commissioner for Democracy, Justice and Rule of Law, has highlighted the need for greater transparency and accountability as digital markets expand rapidly.

The forthcoming scrutiny focuses on ensuring that platforms respect transparency obligations, avoid manipulating users and provide fair conditions in online transactions.

Regulators seek to replace fragmented enforcement with a more coordinated model that reflects the increasingly cross-border nature of digital commerce.

Stronger consumer safeguards are becoming central to the digital agenda of the EU.

The next phase of reforms is expected to streamline investigations across member states and deliver more predictable outcomes for affected consumers, offering steadier enforcement instead of reactive measures taken after violations escalate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Flood of unusable Meta AI abuse tips overwhelms US investigators

Investigators in the US say that AI systems used by Meta are flooding child protection units with large volumes of unhelpful reports, draining resources rather than assisting ongoing cases.

Officers in the Internet Crimes Against Children network told a New Mexico court that most alerts generated by the company’s platforms lack essential evidence or contain material that is not criminal, leaving teams unable to progress investigations.

Meta rejects the claim that it prioritises profit, stressing its cooperation with law enforcement and highlighting rapid response times to emergency requests.

Its position is challenged by officers who say the volume of AI-generated alerts has doubled since 2024, particularly after the Report Act broadened reporting obligations.

They argue that adolescent conversations and incomplete data now form a sizeable portion of the alerts, while genuine cases of child sexual abuse material are becoming harder to detect.

Internal company documents disclosed at trial show Meta executives raising concerns as early as 2019 about the impact of end-to-end encryption on the firm’s ability to identify child exploitation and support investigators.

Child safety groups have long warned that encryption could limit early detection, even though Meta says it has introduced new tools designed to operate safely within encrypted environments.

The growing influx of unusable tips is taking a heavy toll on investigative teams. Officers in the US say each report must still be reviewed manually, despite the low likelihood of actionable evidence, and this backlog is diminishing morale at a time when they say resources have not kept pace with demand.

They warn that meaningful cases risk being delayed as units struggle with a workload swollen by AI systems tuned to avoid regulatory penalties rather than investigative value.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Colorado targets AI chatbot safety

AI chatbots operating in Colorado would face new child safety and suicide prevention requirements under a bipartisan bill introduced in the state legislature. Lawmakers say the measure responds to parents' concerns about harmful chatbot interactions.

House Bill 1263 would require companies to clearly inform children in Colorado that they are interacting with AI rather than a real person. Platforms would also be barred from offering engagement rewards to child users.

The proposal mandates reasonable safeguards to prevent sexually explicit content and to stop chatbots from encouraging emotional dependence, including romantic role-playing. Parental control options would also be required where services are accessible to children in Colorado.

Companies would need to provide suicide prevention resources when users express self-harm thoughts and report such incidents to the Colorado attorney general. Violations would be treated as consumer protection infractions, carrying fines of up to $1,000 per occurrence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EDPS and regulators unite to address misuse of AI imagery across jurisdictions

The European Data Protection Supervisor (EDPS) and authorities from 61 jurisdictions issued a joint statement on AI-generated imagery, warning about tools that create realistic depictions of identifiable individuals without consent. The move underscores concerns over privacy, dignity and child safety.

Authorities said advances in AI image and video tools, especially when integrated into social media platforms, have enabled non-consensual intimate imagery, defamatory depictions, and other harmful content. Children and vulnerable groups are seen as particularly at risk.

The EDPS and the other signatories reminded organisations that AI content-generation systems must comply with applicable data protection and privacy laws. They stressed that creating non-consensual intimate imagery may constitute a criminal offence in many jurisdictions.

Organisations are urged to implement safeguards against misuse of personal data, ensure transparency about system capabilities and uses, and provide accessible mechanisms for swift content removal. Stronger protections and age-appropriate information are expected where children are involved.

Authorities signalled plans for coordinated responses, including enforcement, policy development and education initiatives. The EDPS and fellow signatories urged organisations to engage proactively with regulators and ensure innovation does not undermine fundamental rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Reddit hit with a major ICO penalty over children’s privacy failures

The UK’s Information Commissioner’s Office has fined Reddit £14.47 million after finding that the platform unlawfully used children’s personal information and failed to put in place adequate age checks.

The regulator concluded that Reddit allowed children under 13 to access the platform without robust age-verification measures, leaving them exposed to content they were not able to understand or control.

Although Reddit updated its processes in July 2025, self-declaration remained easy to bypass, offering only a veneer of protection. Investigators also found that the company had not completed a data protection impact assessment until 2025, despite a large number of teenagers using the service.

Concerns were heightened by the volume of children affected and the risks created by relying on inadequate age checks.

The regulator noted that unlawful data processing occurred over a prolonged period, and that children were at risk of viewing harmful material while their information was processed without a lawful basis.

UK Information Commissioner John Edwards said companies must prioritise meaningful age assurance and understand the responsibilities set out in the Children’s Code.

The ICO said it will continue monitoring Reddit’s current controls and expects online platforms to align with robust age-assurance standards rather than rely on weak verification.

It will coordinate its oversight with Ofcom as part of broader efforts to strengthen online safety and ensure under-18s benefit from high privacy protections by default.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI preparing kids for careers that don’t exist yet, say education leaders

Education leaders and industry stakeholders in South Africa say the rise of AI is transforming labour-market expectations to the point that tomorrow’s careers may not yet exist.

They argue that traditional curricula, centred on static knowledge and routine tasks, must evolve to prioritise adaptability, problem solving, creativity, ethical reasoning and digital fluency, competencies that complement AI rather than compete with it.

Speakers at recent education forums emphasised that AI will continue to automate routine cognitive and technical work, pushing demand toward roles that require higher-order thinking and human-centred skills.

They described a growing need to integrate AI literacy and data skills into schooling from an early age to reduce future workforce displacement and prepare students to harness AI as a productive partner.

Experts also highlighted equity concerns: without intentional policy and investment to support under-resourced schools and communities, the ‘AI skills gap’ could exacerbate inequality. Some educators recommended stronger partnerships between government, tech industry and educational institutions to co-develop curricula, teacher training and accessible AI tools.

They underscored that competencies such as empathetic communication, cultural awareness and ethical judgement, areas where AI lacks robust capabilities, will remain crucial.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global privacy regulators warn of rising AI deepfake harms

Privacy regulators from around the world have issued a joint warning about the rise of AI-generated deepfakes, arguing that the spread of non-consensual images poses a global risk instead of remaining a problem confined to individual countries.

Sixty-one authorities endorsed a declaration that draws attention to AI images and videos depicting real people without their knowledge or consent.

The signatories highlight the rapid growth of intimate deepfakes, particularly those targeting children and individuals from vulnerable communities. They note that such material often circulates widely on social platforms and may fuel exploitation or cyberbullying.

The declaration argues that the scale of the threat requires coordinated action rather than isolated national responses.

European authorities, including the European Data Protection Board and the European Data Protection Supervisor, support the effort to build global cooperation.

Regulators say that only joint oversight can limit the harms caused by AI systems that generate false depictions and fail to protect individuals' privacy as required under frameworks such as the General Data Protection Regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU drops revised GDPR personal data definition amid regulatory pressure

Governments across the EU have withdrawn the revised definition of personal data from the GDPR omnibus package, softening earlier proposals that had prompted strong resistance from regulators and civil society.

The decision signals a preference for maintaining the original scope of the General Data Protection Regulation instead of reopening sensitive debates that risked weakening long-standing protections.

Greater attention is now placed on the forthcoming pseudonymisation guidelines prepared by the European Data Protection Board. These guidelines are expected to shape how organisations interpret key safeguards, offering practical direction instead of altering the legal definition of personal data.

The updated prominence given to the guidance reflects a broader trend within the Council towards regulatory clarity rather than legislative redesign.

The compromise text also maintains links with the wider review of the ePrivacy Directive, keeping future updates aligned with existing digital-rights rules.

Member states appear increasingly cautious about reopening foundational privacy concepts, opting to strengthen enforcement through guidance and implementation rather than altering core definitions in law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!