Meta, TikTok and Snapchat prepare to block under-16s as Australia enforces social media ban

Social media platforms, including Meta, TikTok and Snapchat, will begin sending notices to more than a million Australian teens, telling them to download their data, freeze their profiles or lose access when the national ban for under-16s comes into force on 10 December.

According to people familiar with the plans, platforms will deactivate accounts believed to belong to users under 16. The roughly 20 million Australians aged 16 and over will not be affected. Compliance marks a shift after a year of opposition from tech firms, which had warned the rules would be intrusive or unworkable.

Companies plan to rely on their existing age-estimation software, which predicts age from behavioural signals such as likes and engagement patterns. Only users who challenge a block will be directed to age-assurance apps, which estimate age from a selfie and, if the result is disputed, allow users to upload ID. Trials show the tools work, but accuracy drops for 16- and 17-year-olds.

Yoti’s Chief Policy Officer, Julie Dawson, said disruption should be brief, with users adapting within a few weeks. Meta, Snapchat, TikTok and Google declined to comment. In earlier hearings, most respondents stated that they would comply.

The law bars under-16s from mainstream platforms, with no parental override available. It follows renewed concern over youth safety after internal Meta documents revealed in 2021 showed harm linked to heavy social media use.

A smooth rollout is expected to influence other countries as they explore similar measures. France, Denmark, Florida and the UK have pursued age checks with mixed results due to concerns over privacy and practicality.

Consultants say governments are watching to see whether Australia’s requirement for platforms to take ‘reasonable steps’ to block minors, including trying to detect VPN use, works in practice without causing significant disruption for other users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Snap brings Perplexity’s answer engine into Chat for nearly a billion users

Starting in early 2026, Perplexity’s AI will be integrated into Snapchat’s Chat, accessible to nearly 1 billion users. Snapchatters can ask questions and receive concise, cited answers in-app. Snap says the move reinforces its position as a trusted, mobile-first AI platform.

Under the deal, Perplexity will pay Snap $400 million in cash and equity over one year, tied to the global rollout. Revenue contribution is expected to begin in 2026. Snap points to its 943 million monthly active users and its reach of over 75% of 13–34-year-olds in more than 25 countries.

Perplexity frames the move as meeting curiosity where it occurs, within everyday conversations. Evan Spiegel says Snap aims to make AI more personal, social, and fun, woven into friendships and conversations. Both firms pitch the partnership as enhancing discovery and learning on Snapchat.

Perplexity joins, rather than replaces, Snapchat’s existing My AI. Messages sent to Perplexity will inform personalisation on Snapchat, similar to My AI’s current behaviour. Snap claims the approach is privacy-safe and designed to provide credible, real-time answers from verifiable sources.

Snap casts this as a first step toward a broader AI partner platform inside Snapchat. The companies plan creative, trusted ways for leading AI providers to reach Snap’s global community. The integration aims to enable seamless, in-chat exploration while keeping users within Snapchat’s product experience.


Social media platforms ordered to enforce minimum age rules in Australia

Australia’s eSafety Commissioner has formally notified major social media platforms, including Facebook, Instagram, TikTok, Snapchat, and YouTube, that they must comply with new minimum age restrictions from 10 December.

The rule will require these services to prevent social media users under 16 from creating accounts.

eSafety determined that nine popular services currently meet the definition of age-restricted platforms, since their main purpose is to enable online social interaction. Platforms that fail to take reasonable steps to block underage users may face enforcement measures, including fines of up to A$49.5 million.

The agency clarified that the list of age-restricted platforms will not remain static, as new services will be reviewed and reassessed over time. Others, such as Discord, Google Classroom, and WhatsApp, are excluded for now as they do not meet the same criteria.

Commissioner Julie Inman Grant said the new framework aims to delay children’s exposure to social media and limit harmful design features such as infinite scroll and opaque algorithms.

She emphasised that age limits are only part of a broader effort to build safer, more age-appropriate online environments supported by education, prevention, and digital resilience.


Identifying AI-generated videos on social media

AI-generated videos are flooding social media, and identifying them is becoming increasingly difficult. Low resolution or grainy footage can hint at artificial creation, though even polished clips may be deceptive.

Subtle flaws often reveal AI manipulation, including unnatural skin textures, unrealistic background movements, or odd patterns in hair and clothing. Shorter, highly compressed clips can conceal these artefacts, making detection even more challenging.

Digital literacy experts warn that traditional visual cues will soon be unreliable. Viewers should prioritise the source and context of online videos, approach content critically, and verify information through trustworthy channels.


Australian influencer family moves to UK over child social media ban

An Australian influencer family known as the Empire Family is relocating to the UK to avoid Australia’s new social media ban for under-16s, which begins in December. The law requires major platforms to take steps preventing underage users from creating or maintaining accounts.

The family, comprising mothers Beck and Bec Lea, their 17-year-old son Prezley and 14-year-old daughter Charlotte, said the move will allow Charlotte to continue creating online content. She has hundreds of thousands of followers across YouTube, TikTok and Instagram, with all accounts managed by her parents.

Beck said they support the government’s intent to protect young people from harm but are concerned about the uncertainty surrounding enforcement methods, such as ID checks or facial recognition. She said the family wanted stability while the system is clarified.

The Australian ban, described as a world first, will apply to Facebook, Instagram, TikTok, X and YouTube. Non-compliant firms could face fines of up to A$49.5 million, while observers say the rules raise new privacy and data protection concerns.


Meta and TikTok agree to comply with Australia’s under-16 social media ban

Meta and TikTok have confirmed they will comply with Australia’s new law banning under-16s from using social media platforms, though both warned it will be difficult to enforce. The legislation, taking effect on 10 December, will require major platforms to remove accounts belonging to users under that age.

The law is among the world’s strictest, but regulators and companies are still working out how it will be implemented. Social media firms face fines of up to A$49.5 million if found in breach, yet they are not required to verify every user’s age directly.

TikTok’s Australia policy head, Ella Woods-Joyce, warned the ban could drive children toward unregulated online spaces lacking safety measures. Meta’s director, Mia Garlick, acknowledged the ‘significant engineering and age assurance challenges’ involved in detecting and removing underage users.

Critics including YouTube and digital rights groups have labelled the ban vague and rushed, arguing it may not achieve its aim of protecting children online. The government maintains that platforms must take ‘reasonable steps’ to prevent young users from accessing their services.


OpenAI rolls out pet-centric AI video features and social tools in Sora

OpenAI has announced significant enhancements to its text-to-video app Sora. The update introduces new features including pet and object ‘cameos’ in AI-generated videos, expanded video editing tools, social sharing elements and a forthcoming Android version of the app.

Using the new pet cameo feature, users will be able to upload photos of their pets or objects and then incorporate them into animated video scenes generated by Sora. The objective is to deepen personalisation and creative expression by letting users centre their own non-human characters.

Sora is also gaining editing capabilities that simplify the creation process. Users can remix existing videos, apply stylistic changes, and use social features such as feeds where others’ creations can be viewed and shared. The Android app is noted as ‘coming soon’, which will extend Sora’s reach beyond the initial iOS and web release.

The move reflects OpenAI’s strategy to transition Sora from an experimental novelty into a more fully featured social video product. By enabling user-owned content (pets/objects), expanding sharing functionality and broadening platform reach, Sora is positioned to compete in the generative video and social media landscape.

At the same time, the update raises questions around content use, copyright (especially when user-owned pets or objects are included), deepfake risks, and moderation. Given Sora’s prior scrutiny over synthetic media, the expansion into more personalised video may prompt further regulatory or ethical review.


EU nations back Danish plan to strengthen child protection online

EU countries have agreed to step up efforts to improve child protection online by supporting Denmark’s Jutland Declaration. The initiative, signed by 25 member states, focuses on strengthening existing EU rules that safeguard minors from harmful and illegal online content.

However, Denmark’s proposal to ban social media for children under 15 did not gain full backing, with several governments preferring other approaches.

The declaration highlights growing concern about young people’s exposure to inappropriate material and the addictive nature of online platforms.

It stresses the need for more reliable age verification tools and refers to the upcoming Digital Fairness Act as an opportunity to introduce such safeguards. Ministers argued that the same protections applied offline should exist online, where risks for minors remain significant.

Danish officials believe stronger measures are essential to address declining well-being among young users. Some EU countries, including Germany, Spain and Greece, expressed support for tighter protections but rejected outright bans, calling instead for balanced regulation.

Meanwhile, the European Commission has asked major platforms such as Snapchat, YouTube, Apple and Google to provide details about their age verification systems under the Digital Services Act.

These efforts form part of a broader EU drive to ensure a safer digital environment for children, as investigations into online platforms continue across Europe.


Tech giants race to remake social media with AI

Tech firms are racing to integrate AI into social media, reshaping online interaction while raising fresh concerns over privacy, misinformation, and copyright. Platforms like OpenAI’s Sora and Meta’s Vibes are at the centre of the push, blending generative AI tools with short-form video features similar to TikTok.

OpenAI’s Sora allows users to create lifelike videos from text prompts, but film studios say copyrighted material is appearing without permission. OpenAI has promised tighter controls and a revenue-sharing model for rights holders, while Meta has introduced invisible watermarks to identify AI content.

Safety concerns are mounting as well. Lawsuits allege that AI chatbots such as Character.AI have contributed to mental health issues among teenagers. OpenAI and Meta have added stronger restrictions for young users, including limits on mature content and tighter communication controls for minors.

Critics question whether users truly want AI-generated content dominating their feeds, describing the influx as overwhelming and confusing. Yet industry analysts say the shift could define the next era of social media, as companies compete to turn AI creativity into engagement and profit.


Google cautions Australia on youth social media ban proposal

US tech giant Google, owner of YouTube, has reiterated its commitment to children’s online safety while cautioning against Australia’s proposed ban on social media use for those under 16.

Speaking before the Senate Environment and Communications References Committee, Google’s Public Policy Senior Manager Rachel Lord said the legislation, though well-intentioned, may be difficult to enforce and could have unintended effects.

Lord highlighted Google’s 23-year presence in Australia and its contribution of over $53 billion to the economy in 2024, while YouTube’s creative ecosystem added $970 million to GDP and supported more than 16,000 jobs.

She said the company’s investments, including the $1 billion Digital Future Initiative, reflect its long-term commitment to Australia’s digital development and infrastructure.

According to Lord, YouTube already provides age-appropriate products and parental controls designed to help families manage their children’s experiences online.

Requiring children to access YouTube without accounts, she argued, would remove these protections and risk undermining safe access to educational and creative content used widely in classrooms, music, and sport.

She emphasised that YouTube functions primarily as a video streaming platform rather than a social media network, serving as a learning resource for millions of Australian children.

Lord called for legislation that strengthens safety mechanisms instead of restricting access, saying the focus should be on effective safeguards and parental empowerment rather than outright bans.
