Social media platforms ordered to enforce minimum age rules in Australia

Australia’s eSafety Commissioner has formally notified major social media platforms, including Facebook, Instagram, TikTok, Snapchat, and YouTube, that they must comply with new minimum age restrictions from 10 December.

The rule will require these services to prevent users under 16 from creating accounts.

eSafety determined that nine popular services currently meet the definition of age-restricted platforms since their main purpose is to enable online social interaction. Platforms that fail to take reasonable steps to block underage users may face enforcement measures, including fines of up to A$49.5 million.

The agency clarified that the list of age-restricted platforms will not remain static, as new services will be reviewed and reassessed over time. Others, such as Discord, Google Classroom, and WhatsApp, are excluded for now as they do not meet the same criteria.

Commissioner Julie Inman Grant said the new framework aims to delay children’s exposure to social media and limit harmful design features such as infinite scroll and opaque algorithms.

She emphasised that age limits are only part of a broader effort to build safer, more age-appropriate online environments supported by education, prevention, and digital resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU conference highlights the need for collaboration in digital safety and growth

European politicians and experts gathered in Billund for the conference ‘Towards a Safer and More Innovative Digital Europe’, hosted by the Danish Parliament.

The discussions centred on how to protect citizens online while strengthening Europe’s technological competitiveness.

Lisbeth Bech-Nielsen, Chair of the Danish Parliament’s Digitalisation and IT Committee, stated that the event demonstrated the need for the EU to act more swiftly to harness its collective digital potential.

She emphasised that only through cooperation and shared responsibility can the EU match the pace of global digital transformation and fully benefit from its combined strengths.

The first theme addressed online safety and responsibility, focusing on the enforcement of the Digital Services Act, child protection, and the accountability of e-commerce platforms importing products from outside the EU.

Participants highlighted the importance of listening to young people and improving cross-border collaboration between regulators and industry.

The second theme examined Europe’s competitiveness in emerging technologies such as AI and quantum computing. Speakers called for more substantial investment, harmonised digital skills strategies, and better support for businesses seeking to expand within the single market.

The Billund conference emphasised that Europe’s digital future depends on striking a balance between safety, innovation, and competitiveness, which can only be achieved through joint action and long-term commitment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK teachers rethink assignments as AI reshapes classroom practice

Nearly eight in ten UK secondary teachers say AI has forced a rethink of how assignments are set, a British Council survey finds. Many now design tasks either to deter AI use or to harness it constructively in lessons. The findings reflect rapid cultural and technological shifts across schools.

Approaches are splitting along two paths: over a third of teachers design AI-resistant tasks, while nearly six in ten purposefully integrate AI tools. Younger staff are the most likely to adapt, yet strong majorities across all age groups report changes to their practices.

Perceived impacts remain mixed. Six in ten teachers worry about students’ communication skills, with some citing narrower vocabulary and weaker writing and comprehension. Similar shares report improvements in listening, pronunciation, and confidence, suggesting benefits for speech-focused learning.

Language norms are evolving with digital culture. Most UK teachers now look up slang and online expressions, from ‘rizz’ to ‘delulu’ to ‘six, seven’. Staff are adapting lesson design while seeking guidance and training that keeps pace with students’ online lives.

Long-term views diverge. Some believe AI could lift outcomes, while others remain unconvinced and prefer guardrails to limit misuse. British Council leaders say support should focus on practical classroom integration, teacher development, and clear standards that strike a balance between innovation and academic integrity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australian influencer family moves to UK over child social media ban

An Australian influencer family known as the Empire Family is relocating to the UK to avoid Australia’s new social media ban for under-16s, which begins in December. The law requires major platforms to take steps preventing underage users from creating or maintaining accounts.

The family, comprising mothers Beck and Bec Lea, their 17-year-old son Prezley and 14-year-old daughter Charlotte, said the move will allow Charlotte to continue creating online content. She has hundreds of thousands of followers across YouTube, TikTok and Instagram, with all accounts managed by her parents.

Beck said they support the government’s intent to protect young people from harm but are concerned about the uncertainty surrounding enforcement methods, such as ID checks or facial recognition. She said the family wanted stability while the system is clarified.

The Australian ban, described as a world first, will apply to Facebook, Instagram, TikTok, X and YouTube. Non-compliant firms could face fines of up to A$49.5 million, while observers say the rules raise new privacy and data protection concerns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK traffic to Pornhub plunges after age-verification law

In response to the UK’s new age-verification law, Pornhub reports that visits from UK users have fallen by about 77%.

The drop follows legislation designed to block under-18s from accessing adult sites through mandatory age checks.

The company states that it began enforcing the verification system early in October, noting that many users are now turned away or fail the checks.

According to Pornhub, this explains the significant decrease in traffic from the UK. The platform emphasised that this is a reflection of compliance rather than an admission of harm.

Critics argue that the law creates risks of overblocking and privacy concerns, as users may turn to less regulated or unsafe alternatives. This case also underscores tensions between content regulation, digital rights and the efficacy of age-gating as a tool.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian police create AI tool to decode predators’ slang

Australian police are developing an AI tool with Microsoft to decode slang and emojis used by online predators. The technology is designed to interpret coded messages in digital conversations to help investigators detect harmful intent more quickly.

Federal Police Commissioner Krissy Barrett said social media has become a breeding ground for exploitation, bullying, and radicalisation. The AI-based prototype, she explained, could allow officers to identify threats earlier and rescue children before abuse occurs.

Barrett also warned about the rise of so-called ‘crimefluencers’, offenders using social media trends to lure young victims, many of whom are pre-teen or teenage girls. Australian authorities believe understanding modern online language is key to disrupting their methods.

The initiative follows Australia’s new under-16 social media ban, due to take effect in December. Regulators worldwide are monitoring the country’s approach as governments struggle to balance online safety with privacy and digital rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Character.ai restricts teen chat access on its platform

The AI chatbot service Character.ai has announced that teenagers will no longer be able to chat with its AI characters from 25 November.

Under-18s will instead be limited to generating content such as videos, as the platform responds to concerns over risky interactions and lawsuits in the US.

Character.ai has faced criticism after avatars related to sensitive cases were discovered on the site, prompting safety experts and parents to call for stricter measures.

The company cited feedback from regulators and safety specialists, explaining that AI chatbots can pose emotional risks for young users by feigning empathy or providing misleading encouragement.

Character.ai also plans to introduce new age verification systems and fund a research lab focused on AI safety, alongside enhancing role-play and storytelling features that are less likely to place teens in vulnerable situations.

Safety campaigners welcomed the decision but emphasised that such preventative measures should have been implemented sooner.

Experts warn the move reflects a broader shift in the AI industry, where platforms increasingly recognise the importance of child protection in a landscape transitioning from permissionless innovation to more regulated oversight.

Analysts note the challenge for Character.ai will be maintaining teen engagement without encouraging unsafe interactions.

Separating creative play from emotionally sensitive exchanges is key, and the company’s new approach may signal a maturing phase in AI development, where responsible innovation prioritises the protection of young users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta and TikTok agree to comply with Australia’s under-16 social media ban

Meta and TikTok have confirmed they will comply with Australia’s new law banning under-16s from using social media platforms, though both warned it will be difficult to enforce. The legislation, taking effect on 10 December, will require major platforms to remove accounts belonging to users under that age.

The law is among the world’s strictest, but regulators and companies are still working out how it will be implemented. Social media firms face fines of up to A$49.5 million if found in breach, yet they are not required to verify every user’s age directly.

TikTok’s Australia policy head, Ella Woods-Joyce, warned the ban could drive children toward unregulated online spaces lacking safety measures. Meta’s director, Mia Garlick, acknowledged the ‘significant engineering and age assurance challenges’ involved in detecting and removing underage users.

Critics including YouTube and digital rights groups have labelled the ban vague and rushed, arguing it may not achieve its aim of protecting children online. The government maintains that platforms must take ‘reasonable steps’ to prevent young users from accessing their services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Estimating biological age from routine records with LifeClock

LifeClock, reported in Nature Medicine, estimates biological age from routine health records. Trained on 24.6 million visits and 184 indicators, it offers a low-cost route to precision health beyond simple chronology.

Researchers found two distinct clocks: a paediatric development clock and an adult ageing clock. Specialised models improved accuracy, reflecting programmed growth in childhood versus gradual decline in adulthood. Biomarkers diverged between the two stages, aligning with either growth or deterioration.

LifeClock stratified risk years ahead. In children, clusters flagged malnutrition, developmental disorders, and endocrine issues, including markedly higher odds of pituitary hyperfunction and obesity. Adult clusters signalled future diabetes, stroke, renal failure, and cardiovascular disease.

Performance was strong after fine-tuning: the area under the curve hit 0.98 for current diabetes and 0.91 for future diabetes. EHRFormer outperformed RNN and gradient-boosting baselines across longitudinal records.

Authors propose LifeClock for accessible monitoring, personalised interventions, and prevention. Adding wearables and real-time biometrics could refine responsiveness, enabling earlier action on emerging risks and supporting equitable precision medicine at the population scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Meta and TikTok for DSA breaches

The European Commission has accused Meta and TikTok of breaching the Digital Services Act (DSA), highlighting failures in handling illegal content and providing researchers access to public data.

Meta’s Facebook and Instagram were found to make it too difficult for users to report illegal content or receive responses to complaints, the Commission said in its preliminary findings.

Investigations began after complaints to Ireland’s content regulator, where Meta’s EU base is located. The Commission’s inquiry, which has been ongoing since last year, aims to ensure that large platforms protect users and meet EU safety obligations.

Meta and TikTok can submit counterarguments before penalties of up to six percent of global annual turnover are imposed.

Both companies face separate concerns about denying researchers adequate access to platform data and preventing oversight of systemic online risks. TikTok is under further examination over the protection of minors and advertising transparency.

The Commission has launched 14 such DSA-related proceedings, none of which has yet concluded.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!