UK launches consultation on possible social media ban for under-16s

Britain has opened a public consultation examining whether children under 16 should face restrictions or a potential ban on social media use. Young people, parents and educators are being invited to share views before ministers decide on future policy.

Officials are considering several options beyond a full ban, including disabling addictive platform features, introducing overnight curfews, regulating access to AI chatbots, and tightening age verification rules. Pilot schemes will test proposed measures to gather practical evidence on their effectiveness.

The debate follows international momentum after Australia introduced restrictions on under-16 access to major platforms, with Spain signalling similar intentions. Political parties, charities and campaigners remain divided over whether bans or stronger safety regulations offer better protection.

Children’s organisations warn blanket prohibitions could push young users towards less regulated online spaces, creating a ‘false sense of security’. Researchers and policymakers instead emphasise improving platform safety standards while allowing young people to socialise and express themselves online responsibly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

FTC signals flexibility on COPPA age checks

The US FTC has issued a policy statement signalling greater flexibility in enforcing parts of the Children’s Online Privacy Protection Act when companies deploy age verification tools. The agency said it will not take enforcement action where personal data is collected solely for age verification purposes.

The FTC framed age assurance as a key safeguard to prevent children from accessing inappropriate content online in the US. Officials said the approach is intended to encourage broader adoption of age verification technologies by online services.

While offering flexibility, the US regulator stressed that organisations must maintain strong safeguards, including data deletion practices and clear notice to parents and children. The FTC also warned that personal data used beyond age verification could still trigger enforcement action under COPPA.

As with the 2023 amendments to the rule, legal experts cautioned that companies using age assurance may face additional compliance duties under state youth privacy laws, even as federal requirements evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Australia begins a landmark study on social media minimum age

Australia's eSafety Commissioner has launched a major evaluation of the country's Social Media Minimum Age to understand how platforms are applying the requirement and what effects it is having on children, young people and families.

The study aims to deliver robust evidence about both intended and unintended impacts as the national debate on youth, wellbeing and digital environments intensifies.

Over more than two years, the research will follow over four thousand children and families in Australia, combining surveys, interviews, group discussions and privacy-protected smartphone tracking.

Administrative data from national literacy assessments and health systems will be linked to deepen understanding of online behaviour, wellbeing and exposure to risk. All research materials are publicly available through the Open Science Framework to maintain transparency.

The project is led by eSafety’s Research and Evaluation team in partnership with the Stanford University Social Media Lab and an Academic Advisory Group of specialists in mental health, youth development and digital technologies.

Young people themselves are shaping the study through the eSafety Youth Council, ensuring that the interpretation reflects lived experience rather than external assumptions. Full ethics approval underpins the methodology, which meets strict standards of integrity and privacy.

Findings will be released from late 2026 onward, with early reports analysing the experiences of children under sixteen.

The results will inform a legislative review conducted by Australia’s Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts.

eSafety expects the evaluation to become a major evidence source for policymakers, researchers and communities as the global conversation on minors and social media regulation continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI use among students surges as chatbots reshape schoolwork

More than half of US teenagers use AI tools to help with schoolwork, according to a new Pew Research Center study. The survey found that 54% of students aged 13 to 17 have used chatbots such as OpenAI’s ChatGPT or Microsoft’s Copilot to research assignments or solve maths problems.

Usage has risen in recent years. In 2024, 26% of US teens reported using ChatGPT for schoolwork, up from 13% in 2023. The latest survey of 1,458 teens and parents found 44% use AI for some schoolwork, while 10% rely on chatbots for most tasks.

Researchers say AI assistance is becoming routine in classrooms. Colleen McClain, a senior researcher at Pew and co-author of the report, said chatbot use for schoolwork is now a common practice among teens.

Findings come amid an intensifying debate over generative AI in education. Supporters argue that schools should teach students to use and evaluate AI tools, while critics warn of misinformation, reduced critical thinking, and increased cheating.

Recent research has raised questions about learning outcomes. One study by Cambridge University Press & Assessment and Microsoft Research found that students who took notes without chatbot support showed stronger reading comprehension than those using AI assistance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Scotland considers new offence for AI intimate images

The Scottish government has launched a consultation proposing a specific criminal offence for creating AI-generated intimate images without consent. Existing Scots law covers the sharing of such images, but ministers say gaps remain around their creation.

The consultation also seeks views on criminalising digital tools designed solely to produce intimate images and videos. Ministers aim to address harms linked to emerging AI technologies affecting women and girls across Scotland.

Additional proposals include a statutory aggravation where domestic abuse involves a pregnant woman, requiring courts to treat such cases more seriously at sentencing. Measures to strengthen protections against spiking offences are also under review.

Justice Secretary Angela Constance said responses would inform future action to reduce violence against women and girls. The consultation also considers changes to non-harassment orders and examines whether further laws on non-fatal strangulation are needed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Uni.lu expert urges schools to embrace AI

AI should be integrated into classrooms rather than avoided, according to Gilbert Busana of the University of Luxembourg. Speaking to RTL Today, he said ignoring AI would be a disservice to pupils and teachers alike.

Busana argued that AI should be taught both as a standalone subject and across disciplines in Luxembourg's schools. Clear guidelines are needed to define when and how pupils may use AI, alongside transparency about its role in assignments.

He stressed that developing AI literacy is essential to protecting critical thinking. Assessment methods may shift away from focusing solely on final outputs towards evaluating the learning process itself.

Teachers are increasingly becoming coaches rather than simple transmitters of knowledge. Busana said continuous professional training and collaboration within schools will be vital as AI reshapes education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU moves to enforce digital fairness rules with stronger consumer oversight

Regulatory scrutiny of the EU’s digital fairness framework is set to begin on 1 July as the European Commission moves to tighten its supervision of online platforms.

The initiative forms part of a broader effort to ensure stronger consumer protection across digital markets, with officials signalling stricter oversight of commercial practices that disadvantage users.

The Commission is preparing a major upgrade of its consumer protection framework, expected by December 2026.

The reforms aim to reinforce enforcement tools under the Unfair Commercial Practices Directive and the Consumer Protection Cooperation Regulation, allowing regulators to intervene more effectively when platforms breach fairness standards.

Michael McGrath, Commissioner for Democracy, Justice and Rule of Law, has highlighted the need for greater transparency and accountability as digital markets expand rapidly.

The forthcoming scrutiny focuses on ensuring that platforms respect transparency obligations, avoid manipulating users and provide fair conditions in online transactions.

Regulators seek to replace fragmented enforcement with a more coordinated model that reflects the increasingly cross-border nature of digital commerce.

Stronger consumer safeguards are becoming central to the digital agenda of the EU.

The next phase of reforms is expected to streamline investigations across member states and deliver more predictable outcomes for affected consumers, offering steadier enforcement instead of reactive measures taken after violations escalate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta AI flood of unusable abuse tips overwhelms US investigators

Investigators in the US say AI systems used by Meta are flooding child protection units with large volumes of unhelpful reports, draining resources rather than assisting ongoing cases.

Officers in the Internet Crimes Against Children network told a New Mexico court that most alerts generated by the company’s platforms lack essential evidence or contain material that is not criminal, leaving teams unable to progress investigations.

Meta rejects the claim that it prioritises profit, stressing its cooperation with law enforcement and highlighting rapid response times to emergency requests.

Its position is challenged by officers who say the volume of AI-generated alerts has doubled since 2024, particularly after the Report Act broadened reporting obligations.

They argue that adolescent conversations and incomplete data now form a sizeable portion of the alerts, while genuine cases of child sexual abuse material are becoming harder to detect.

Internal company documents disclosed at trial show Meta executives raising concerns as early as 2019 about the impact of end-to-end encryption on the firm’s ability to identify child exploitation and support investigators.

Child safety groups have long warned that encryption could limit early detection, even though Meta says it has introduced new tools designed to operate safely within encrypted environments.

The growing influx of unusable tips is taking a heavy toll on investigative teams. Officers in the US say each report must still be reviewed manually, despite the low likelihood of actionable evidence, and this backlog is diminishing morale at a time when they say resources have not kept pace with demand.

They warn that meaningful cases risk being delayed as units struggle with a workload swollen by AI systems tuned to avoid regulatory penalties rather than investigative value.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Colorado targets AI chatbot safety

AI chatbots operating in Colorado would face new child safety and suicide prevention requirements under a bipartisan bill introduced in the state legislature. Lawmakers say the measure responds to parents' concerns about harmful chatbot interactions.

House Bill 1263 would require companies to clearly inform children in Colorado that they are interacting with AI rather than a real person. Platforms would also be barred from offering engagement rewards to child users.

The proposal mandates reasonable safeguards to prevent sexually explicit content and to stop chatbots from encouraging emotional dependence, including romantic role-playing. Parental control options would also be required where services are accessible to children in Colorado.

Companies would need to provide suicide prevention resources when users express self-harm thoughts and report such incidents to the Colorado attorney general. Violations would be treated as consumer protection infractions, carrying fines of up to $1,000 per occurrence in Colorado.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EDPS and regulators unite to address misuse of AI imagery across jurisdictions

The European Data Protection Supervisor (EDPS) and authorities from 61 jurisdictions issued a joint statement on AI-generated imagery, warning about tools that create realistic depictions of identifiable individuals without consent. The move underscores concerns over privacy, dignity and child safety.

Authorities said advances in AI image and video tools, especially when integrated into social media platforms, have enabled non-consensual intimate imagery, defamatory depictions, and other harmful content. Children and vulnerable groups are seen as particularly at risk.

The EDPS and the other signatories reminded organisations that AI content-generation systems must comply with applicable data protection and privacy laws. They stressed that creating non-consensual intimate imagery may constitute a criminal offence in many jurisdictions.

Organisations are urged to implement safeguards against misuse of personal data, ensure transparency about system capabilities and uses, and provide accessible mechanisms for swift content removal. Stronger protections and age-appropriate information are expected where children are involved.

Authorities signalled plans for coordinated responses, including enforcement, policy development and education initiatives. The EDPS and fellow signatories urged organisations to engage proactively with regulators and ensure innovation does not undermine fundamental rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!