Australia reviews children’s social media ban

Australia has begun reviewing its ban on social media accounts for children under 16, introduced in December 2025. Australia’s eSafety Commissioner is tracking more than 4,000 children and families to assess how the policy works in practice.

Researchers will analyse surveys, interviews and voluntary smartphone data to measure how young people interact with apps. Officials aim to understand how the ban affects children, parents and everyday online behaviour.

Early reactions have been mixed, with some teenagers telling media outlets they bypass age verification systems, and platforms reportedly remain accessible to some minors.

Meanwhile, the UK government has launched a public consultation on potential social media restrictions for children. Policymakers in the UK are seeking views on bans, stronger age verification and limits on addictive platform features.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU considers placing Roblox under strict Digital Services Act rules

European regulators are examining whether Roblox should be brought under the Digital Services Act’s most stringent tier of obligations, from which the platform has so far remained exempt.

The European Commission began analysing the gaming platform’s reported user figures after the company disclosed roughly 48 million monthly users across the EU.

Numbers above the DSA threshold of 45 million average monthly users in the EU could qualify Roblox as a Very Large Online Platform. Such a designation would mark the first time a gaming platform enters the category alongside social media services already subject to heightened oversight.

Platforms receiving the label must conduct regular risk assessments, submit mitigation reports and demonstrate stronger safeguards for minors.

Regulatory pressure has already begun at the national level. The Dutch Authority for Consumers and Markets launched an investigation in January after concerns that children could encounter violent or sexually explicit content within Roblox games or interact with harmful actors through online features.

Designation at the EU level would transfer supervisory authority to the European Commission, enabling wider investigations and potential fines if violations occur. Officials are still verifying user data before making a formal decision, and no deadline has been announced for the process.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Parliament deadlock leaves EU chat-scanning extension in doubt

The civil liberties committee failed to secure majority backing for its amended report on extending the EU’s temporary chat-scanning rules, leaving Parliament without a clear negotiating position.

Members of Parliament reviewed the amendments on Monday, but the final text did not garner sufficient support, leaving the proposal without endorsement as the adoption deadline approaches.

At stake is a proposal to extend the current derogation that allows tech companies to voluntarily scan their services for Child Sexual Abuse Material (CSAM).

The existing regime expires in April 2026 and was intended only as a stopgap while a permanent Child Sexual Abuse Regulation was developed. Years of stalled negotiations have led to the temporary rules being extended twice since 2021.

The Council has already approved its position without changes to the Commission proposal, creating a tight timeline for Parliament.

With trilogue talks finally underway, institutions would need to conclude discussions unusually quickly to prevent the legal basis from expiring. If no agreement is reached by April, companies would lose their ability to scan services under the EU law.

The committee confirmed that the file will now move to plenary in the week of 9–12 March, where political groups may table new amendments. The outcome will determine whether the temporary regime remains in place while negotiations on the permanent system continue.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Europe pressed to slow digital age-verification push amid privacy fears

Hundreds of academics have urged governments to halt plans for mandatory age checks on social media rather than accelerate deployment without first assessing the risks.

The warning arrives as several European states consider restrictions on children’s access to online platforms and as companies promote verification tools such as live selfies or uploads of government-issued IDs.

Researchers argue that, instead of offering meaningful protection, current systems expose people to privacy breaches, security vulnerabilities and malicious sites that simply ignore verification rules.

They say scientific consensus has not yet formed on the benefits or harms of age-assurance technologies, making large-scale implementation premature and potentially discriminatory.

The letter stresses that any credible system would require cryptographic safeguards for every query, protecting data in transit rather than leaving identity checks to platforms without robust technical guarantees.
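As a rough illustration of the per-query safeguard the letter describes, the sketch below shows an issuer attesting only to a boolean age claim, bound to a fresh nonce so the verifying platform never sees an identity document. All names here are hypothetical, and the shared HMAC key is used purely for brevity; a real deployment would rely on asymmetric signatures or zero-knowledge proofs so that platforms can verify tokens without being able to forge them.

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical issuer key, for illustration only. With a shared key the
# verifier could forge tokens; real schemes use asymmetric cryptography.
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(age_over_16: bool, nonce: str) -> dict:
    """Issuer attests only to a boolean claim, never the user's identity."""
    claim = json.dumps({"age_over_16": age_over_16, "nonce": nonce}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_verify(token: dict, expected_nonce: str) -> bool:
    """Platform checks integrity and freshness without seeing any ID document."""
    expected_tag = hmac.new(ISSUER_KEY, token["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_tag, token["tag"]):
        return False  # token was tampered with or not issued by the authority
    claim = json.loads(token["claim"])
    return claim["nonce"] == expected_nonce and claim["age_over_16"]

# Per-query flow: the platform issues a fresh nonce, the user presents a token.
nonce = secrets.token_hex(8)
token = issue_age_token(True, nonce)
assert platform_verify(token, nonce)           # valid, fresh token passes
assert not platform_verify(token, "replayed")  # stale nonce is rejected
```

Binding each token to a per-query nonce is what makes the check resistant to replay, which is one reason the signatories argue such infrastructure would be costly to build and operate at global scale.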

Academics believe such infrastructure would be complex to build globally and would create friction that many providers may refuse to adopt.

Concern escalated after early deployments in Italy and France, where verification is already mandatory.

Signatories, including Ronald Rivest and Bart Preneel, warn that governments risk introducing a socially unacceptable system that increases exposure to data misuse instead of ensuring children’s safety online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU pressures Meta over alleged smart glasses privacy breaches

Lawmakers in the European Parliament are pressing the European Commission for clarity after reports that Meta’s smart glasses recorded people in intimate moments without their knowledge.

Concerns intensified when Swedish outlets reported that Ray-Ban AI glasses captured and uploaded sensitive footage in violation of strict consent requirements under the EU’s General Data Protection Regulation.

The reports indicate that personal data from EU users was sent to Sama, a third-party contractor, in Kenya for human review. Annotators working there said they viewed images of individuals changing clothes and believed the recordings were taken without consent.

They added that Meta’s attempts to blur faces or apply other safeguards failed often enough to expose identifiable material, falling short of proper anonymisation.

EU privacy law requires clear information and consent before collecting and processing personal data, and additional safeguards when exporting data to countries without recognised adequacy status.

Kenya is still negotiating such recognition with the Commission, meaning contractual protections would be necessary.

The Irish Data Protection Commission, responsible for Meta’s GDPR oversight, has been contacted amid questions about whether Meta complied with EU requirements.

Lawmakers also want the Commission to examine whether proposed changes in the Digital Omnibus package could dilute privacy protections rather than strengthen them.

Critics argue the reforms might ease data-use rules for AI training at a moment when allegations about Meta’s smart glasses have intensified scrutiny of the EU’s broader digital policy agenda.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK launches consultation on possible social media ban for under-16s

Britain has opened a public consultation examining whether children under 16 should face restrictions or a potential ban on social media use. Young people, parents and educators are being invited to share views before ministers decide on future policy.

Officials are considering several options beyond a full ban, including disabling addictive platform features, introducing overnight curfews, regulating access to AI chatbots, and tightening age verification rules. Pilot schemes will test proposed measures to gather practical evidence on their effectiveness.

The debate follows international momentum after Australia introduced restrictions on under-16 access to major platforms, with Spain signalling similar intentions. Political parties, charities and campaigners remain divided over whether bans or stronger safety regulations offer better protection.

Children’s organisations warn blanket prohibitions could push young users towards less regulated online spaces, creating a ‘false sense of security’. Researchers and policymakers instead emphasise improving platform safety standards while allowing young people to socialise and express themselves online responsibly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

FTC signals flexibility on COPPA age checks

The US FTC has issued a policy statement signalling greater flexibility in enforcing parts of the Children’s Online Privacy Protection Act when companies deploy age verification tools. The agency said it will not take enforcement action where personal data is collected solely for age verification purposes.

The FTC framed age assurance as a key safeguard to prevent children from accessing inappropriate content online in the US. Officials said the approach is intended to encourage broader adoption of age verification technologies by online services.

While offering flexibility, the US regulator stressed that organisations must maintain strong safeguards, including data deletion practices and clear notice to parents and children. The FTC also warned that personal data used beyond age verification could still trigger enforcement action under COPPA.

As with the 2023 amendments, legal experts cautioned that companies using age assurance may face additional compliance duties under state youth privacy laws, even as federal requirements evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Australia begins a landmark study on social media minimum age

Australia’s eSafety Commissioner has launched a major evaluation of the Social Media Minimum Age requirement to understand how platforms are applying it and what effects it is having on children, young people and families.

The study aims to deliver robust evidence about both intended and unintended impacts as the national debate on youth, wellbeing and digital environments intensifies.

Over more than two years, the research will follow over 4,000 children and families in Australia, combining surveys, interviews, group discussions and privacy-protected smartphone tracking.

Administrative data from national literacy assessments and health systems will be linked to deepen understanding of online behaviour, wellbeing and exposure to risk. All research materials are publicly available through the Open Science Framework to maintain transparency.

The project is led by eSafety’s Research and Evaluation team in partnership with the Stanford University Social Media Lab and an Academic Advisory Group of specialists in mental health, youth development and digital technologies.

Young people themselves are shaping the study through the eSafety Youth Council, ensuring that the interpretation reflects lived experience rather than external assumptions. Full ethics approval underpins the methodology, which meets strict standards of integrity and privacy.

Findings will be released from late 2026 onward, with early reports analysing the experiences of children under 16.

The results will inform a legislative review conducted by Australia’s Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts.

eSafety expects the evaluation to become a major evidence source for policymakers, researchers and communities as the global conversation on minors and social media regulation continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Scotland considers new offence for AI intimate images

The Scottish government has launched a consultation proposing a specific criminal offence for creating AI-generated intimate images without consent. Existing Scots law covers the sharing of such photos, but ministers in Scotland say gaps remain around their creation.

The consultation also seeks views on criminalising digital tools designed solely to produce intimate images and videos. Ministers aim to address harms linked to emerging AI technologies affecting women and girls across Scotland.

Additional proposals include a statutory aggravation where domestic abuse involves a pregnant woman, requiring courts to treat such cases more seriously at sentencing. Measures to strengthen protections against spiking offences are also under review.

Justice Secretary Angela Constance said responses would inform future action to reduce violence against women and girls. The consultation also considers changes to non-harassment orders and examines whether further laws on non-fatal strangulation are needed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Uni.lu expert urges schools to embrace AI

AI should be integrated into classrooms rather than avoided, according to Gilbert Busana of the University of Luxembourg. Speaking to RTL Today, he said ignoring AI would be a disservice to pupils and teachers alike.

Busana argued that AI should be taught both as a standalone subject and across disciplines in Luxembourg schools. Clear guidelines are needed to define when and how pupils may use AI, alongside transparency about its role in assignments.

He stressed that developing AI literacy in Luxembourg is essential to protect critical thinking. Assessment methods may shift away from focusing solely on final outputs towards evaluating the learning process itself.

Teachers in Luxembourg are increasingly becoming coaches rather than simple transmitters of knowledge. Busana said continuous professional training and collaboration within schools will be vital as AI reshapes education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot