Australia strengthens parent support for new social media age rules

Yesterday, Australia entered a new phase of its online safety framework with the introduction of the Social Media Minimum Age policy.

eSafety has established a new Parent Advisory Group to support families as the country transitions to enhanced safeguards for young people. The group held its first meeting, with the Commissioner underlining the need for practical and accessible guidance for carers.

The initiative brings together twelve organisations representing a broad cross-section of communities in Australia, including First Nations families, culturally diverse groups, parents of children with disability and households in regional areas.

Their role is to help eSafety refine its approach, so parents can navigate social platforms with greater confidence, rather than feeling unsupported during rapid regulatory change.

The group will advise on parent engagement, offer evidence-informed insights and test updated resources such as the redeveloped Online Safety Parent Guide.

Their advice will aim to ensure materials remain relevant, inclusive and able to reach priority communities that often miss out on official communications.

Members will serve voluntarily until June 2026 and will work with eSafety to improve distribution networks and strengthen the national conversation on digital literacy. Their collective expertise is expected to shape guidance that reflects real family experiences instead of abstract policy expectations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok rolls out mindfulness and screen-time reset features

TikTok has announced a set of new well-being features designed to help users build more balanced digital habits. The rollout includes an in-app experience with breathing exercises, calming audio tracks and short ‘Well-being Missions’ that reward mindful behaviour.

The missions are interactive tasks, such as quizzes and flashcards, that encourage users to explore TikTok’s existing digital-wellness tools (like Sleep Hours and Screen Time Management). Completing these missions earns users badges, reinforcing positive habits. In early tests, approximately 40 percent of people who saw the missions chose to try them.

TikTok is also experimenting with a dedicated ‘pause and recharge’ space within the app, offering safe, calming activities that help users disconnect, for instance before bedtime or after long scrolling sessions.

The broader effort reflects TikTok’s growing emphasis on digital wellness, part of a larger industry trend on the responsible and healthy use of social platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok launches new tools to manage AI-generated content

TikTok has announced new tools to help users shape and understand AI-generated content (AIGC) in their feeds. A new ‘Manage Topics’ control will let users adjust how much AI content appears in their For You feed, alongside existing keyword filters and the ‘not interested’ option.

The aim is to personalise content rather than remove it entirely.

To strengthen transparency, TikTok is testing ‘invisible watermarking’ for AI-generated content created with TikTok tools or uploaded with C2PA Content Credentials. Combined with creator labels and AI detection, these watermarks help track and identify content even after it is edited or re-uploaded.
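
As a rough illustration of why layered signals matter, the sketch below shows how a platform might combine provenance metadata, an invisible watermark and a creator's self-applied label into a single display label. The helper names and decision logic are hypothetical and do not describe TikTok's actual pipeline or the real C2PA tooling.

    # Illustrative sketch only: names and logic are hypothetical, not TikTok's systems.
    from dataclasses import dataclass

    @dataclass
    class ProvenanceSignals:
        has_content_credentials: bool   # e.g. C2PA metadata attached at upload
        watermark_detected: bool        # invisible watermark found in the media itself
        creator_labelled_ai: bool       # creator self-applied an AI-generated label

    def aigc_label(signals: ProvenanceSignals) -> str:
        # Any single positive signal is enough: metadata can be stripped on
        # re-upload, while a robust watermark may survive edits, so the
        # checks back each other up.
        if signals.has_content_credentials or signals.watermark_detected:
            return "AI-generated (detected)"
        if signals.creator_labelled_ai:
            return "AI-generated (creator labelled)"
        return "No AI label"

    # A re-uploaded clip that lost its metadata but kept its watermark is still caught.
    print(aigc_label(ProvenanceSignals(False, True, False)))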

The platform has also launched a $2 million AI literacy fund to support global experts in creating educational content on responsible AI, and it collaborates with industry partners and non-profits such as the Partnership on AI to promote transparency, research and best practices.

Investments in AI extend beyond moderation and labelling. TikTok is developing features such as Smart Split and AI Outline to enhance creativity and discovery, while also using AI to protect user safety and support the well-being of its trust and safety teams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta, TikTok and Snapchat prepare to block under-16s as Australia enforces social media ban

Social media platforms, including Meta, TikTok and Snapchat, will begin sending notices to more than a million Australian teens, telling them to download their data, freeze their profiles or lose access when the national ban for under-16s comes into force on 10 December.

According to people familiar with the plans, platforms will deactivate accounts believed to belong to users under the age of 16, while the roughly 20 million older Australians on these services will not be affected. The cooperation marks a shift from the year-long opposition of tech firms, which had warned the rules would be intrusive or unworkable.

Companies plan to rely on their existing age-estimation software, which predicts age from behaviour signals such as likes and engagement patterns. Only users who challenge a block will be pushed to the age assurance apps. These tools estimate age from a selfie and, if disputed, allow users to upload ID. Trials show they work, but accuracy drops for 16- and 17-year-olds.
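
That escalation can be pictured as a three-stage decision flow. The Python sketch below is purely illustrative: the function names, thresholds and toy estimators are hypothetical stand-ins for the platforms' proprietary models and for vendor services such as Yoti's, not any company's real implementation.

    # Illustrative sketch of the escalation flow described above; all names and
    # estimators are hypothetical stand-ins, not any platform's real system.
    from dataclasses import dataclass

    MINIMUM_AGE = 16

    @dataclass
    class AgeCheckResult:
        blocked: bool
        stage: str            # which stage produced the decision
        estimated_age: float

    def behavioural_estimate(signals: dict[str, float]) -> float:
        # Stand-in for a model inferring age from likes and engagement patterns;
        # here just an average so the sketch runs end to end.
        return sum(signals.values()) / max(len(signals), 1)

    def selfie_estimate(selfie: bytes) -> float:
        # Stand-in for a facial age-estimation service; returns a fixed value.
        return 17.0

    def check_account(signals: dict[str, float],
                      challenge_selfie: bytes | None = None,
                      id_document_age: int | None = None) -> AgeCheckResult:
        # Stage 1: passive behavioural estimate applied to every account.
        age = behavioural_estimate(signals)
        if age >= MINIMUM_AGE:
            return AgeCheckResult(False, "behavioural", age)
        # Stage 2: a user who challenges the block submits a selfie.
        if challenge_selfie is not None:
            age = selfie_estimate(challenge_selfie)
            if age >= MINIMUM_AGE:
                return AgeCheckResult(False, "selfie", age)
        # Stage 3: if the selfie result is disputed, an uploaded ID settles it.
        if id_document_age is not None:
            return AgeCheckResult(id_document_age < MINIMUM_AGE, "id_document",
                                  float(id_document_age))
        return AgeCheckResult(True, "behavioural", age)

    # An account whose behaviour suggests roughly 14 stays blocked unless a
    # later stage raises the estimate above the threshold.
    print(check_account({"estimated_age_from_engagement": 14.0}))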

Yoti’s Chief Policy Officer, Julie Dawson, said disruption should be brief, with users adapting within a few weeks. Meta, Snapchat, TikTok and Google declined to comment. In earlier hearings, most respondents stated that they would comply.

The law bars under-16s from mainstream platforms, with no option for a parental override. It follows renewed concern over youth safety after internal Meta documents, leaked in 2021, revealed harms linked to heavy social media use.

A smooth rollout in Australia is expected to influence other jurisdictions exploring similar measures. France, Denmark, the UK and the US state of Florida have pursued age checks with mixed results, amid concerns over privacy and practicality.

Consultants say governments are watching to see whether Australia’s requirement for platforms to take ‘reasonable steps’ to block minors, including trying to detect VPN use, works in practice without causing significant disruption for other users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Snap brings Perplexity’s answer engine into Chat for nearly a billion users

Starting in early 2026, Perplexity’s AI will be integrated into Snapchat’s Chat, accessible to nearly 1 billion users. Snapchatters can ask questions and receive concise, cited answers in-app. Snap says the move reinforces its position as a trusted, mobile-first AI platform.

Under the deal, Perplexity will pay Snap $400 million in cash and equity over one year, tied to the global rollout, with the revenue contribution expected to begin in 2026. Snap points to its 943 million monthly active users and a reach of more than 75% of 13–34-year-olds across more than 25 countries.

Perplexity frames the move as meeting curiosity where it occurs, within everyday conversations. Evan Spiegel says Snap aims to make AI more personal, social, and fun, woven into friendships and conversations. Both firms pitch the partnership as enhancing discovery and learning on Snapchat.

Perplexity joins, rather than replaces, Snapchat’s existing My AI. Messages sent to Perplexity will inform personalisation on Snapchat, similar to My AI’s current behaviour. Snap claims the approach is privacy-safe and designed to provide credible, real-time answers from verifiable sources.

Snap casts this as a first step toward a broader AI partner platform inside Snapchat. The companies plan creative, trusted ways for leading AI providers to reach Snap’s global community. The integration aims to enable seamless, in-chat exploration while keeping users within Snapchat’s product experience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Social media platforms ordered to enforce minimum age rules in Australia

Australia’s eSafety Commissioner has formally notified major social media platforms, including Facebook, Instagram, TikTok, Snapchat, and YouTube, that they must comply with new minimum age restrictions from 10 December.

The rule will require these services to prevent social media users under 16 from creating accounts.

eSafety determined that nine popular services currently meet the definition of age-restricted platforms since their main purpose is to enable online social interaction. Platforms that fail to take reasonable steps to block underage users may face enforcement measures, including fines of up to A$49.5 million.

The agency clarified that the list of age-restricted platforms will not remain static, as new services will be reviewed and reassessed over time. Others, such as Discord, Google Classroom, and WhatsApp, are excluded for now as they do not meet the same criteria.

Commissioner Julie Inman Grant said the new framework aims to delay children’s exposure to social media and limit harmful design features such as infinite scroll and opaque algorithms.

She emphasised that age limits are only part of a broader effort to build safer, more age-appropriate online environments supported by education, prevention, and digital resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Identifying AI-generated videos on social media

AI-generated videos are flooding social media, and identifying them is becoming increasingly difficult. Low-resolution or grainy footage can hint at artificial creation, though even polished clips may be deceptive.

Subtle flaws often reveal AI manipulation, including unnatural skin textures, unrealistic background movements, or odd patterns in hair and clothing. Shorter, highly compressed clips can conceal these artefacts, making detection even more challenging.

Digital literacy experts warn that traditional visual cues will soon be unreliable. Viewers should prioritise the source and context of online videos, approach content critically, and verify information through trustworthy channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian influencer family moves to UK over child social media ban

An Australian influencer family known as the Empire Family is relocating to the UK to avoid Australia’s new social media ban for under-16s, which begins in December. The law requires major platforms to take steps preventing underage users from creating or maintaining accounts.

The family, comprising mothers Beck and Bec Lea, their 17-year-old son Prezley and 14-year-old daughter Charlotte, said the move will allow Charlotte to continue creating online content. She has hundreds of thousands of followers across YouTube, TikTok and Instagram, with all accounts managed by her parents.

Beck said they support the government’s intent to protect young people from harm but are concerned about the uncertainty surrounding enforcement methods, such as ID checks or facial recognition. She said the family wanted stability while the system is clarified.

The Australian ban, described as a world first, will apply to Facebook, Instagram, TikTok, X and YouTube. Non-compliant firms could face fines of up to A$49.5 million, while observers say the rules raise new privacy and data protection concerns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta and TikTok agree to comply with Australia’s under-16 social media ban

Meta and TikTok have confirmed they will comply with Australia’s new law banning under-16s from using social media platforms, though both warned it will be difficult to enforce. The legislation, taking effect on 10 December, will require major platforms to remove accounts belonging to users under that age.

The law is among the world’s strictest, but regulators and companies are still working out how it will be implemented. Social media firms face fines of up to A$49.5 million if found in breach, yet they are not required to verify every user’s age directly.

TikTok’s Australia policy head, Ella Woods-Joyce, warned the ban could drive children toward unregulated online spaces lacking safety measures. Meta’s director, Mia Garlick, acknowledged the ‘significant engineering and age assurance challenges’ involved in detecting and removing underage users.

Critics, including YouTube and digital rights groups, have labelled the ban vague and rushed, arguing it may not achieve its aim of protecting children online. The government maintains that platforms must take ‘reasonable steps’ to prevent young users from accessing their services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI rolls out pet-centric AI video features and social tools in Sora

OpenAI has announced significant enhancements to its text-to-video app Sora. The update introduces new features including pet and object ‘cameos’ in AI-generated videos, expanded video editing tools, social sharing elements and a forthcoming Android version of the app.

Using the new pet cameo feature, users will be able to upload photos of their pets or objects and then incorporate them into animated video scenes generated by Sora. The objective is to deepen personalisation and creative expression by letting users centre their own non-human characters.

Sora is also gaining editing capabilities that simplify the creation process. Users can remix existing videos, apply stylistic changes and use social features such as feeds where others’ creations can be viewed and shared. The Android app is listed as ‘coming soon’, which will extend Sora’s accessibility beyond the initial iOS and web release.

The move reflects OpenAI’s strategy to transition Sora from an experimental novelty into a more fully featured social video product. By enabling user-owned content (pets/objects), expanding sharing functionality and broadening platform reach, Sora is positioned to compete in the generative video and social media landscape.

At the same time, the update raises questions around content use, copyright (especially when user-owned pets or objects are included), deepfake risks, and moderation. Given Sora’s prior scrutiny over synthetic media, the expansion into more personalised video may prompt further regulatory or ethical review.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!