Teens launch High Court bid to stop Australia’s under-16 social media ban

Two teenagers in Australia have taken the federal government to the High Court in an effort to stop the country’s under-16 social media ban, which is due to begin on 10 December. The case was filed by the Digital Freedom Project with two 15-year-olds, Noah Jones and Macy Neyland, listed as plaintiffs. The group says the law strips young people of the implied constitutional freedom of political communication.

The ban will lead to the deactivation of more than one million accounts held by users under 16 across platforms such as YouTube, TikTok, Snapchat, Twitch, Facebook and Instagram. The Digital Freedom Project argues that removing young people from these platforms blocks them from engaging in public debate. Neyland said the rules silence teens who want to share their views on issues that affect them.

The Digital Freedom Project’s president, John Ruddick, is a Libertarian Party politician in New South Wales. After the lawsuit became public, Communications Minister Anika Wells told Parliament the government would not shift its position in the face of legal threats. She said the government’s priority is supporting parents rather than platform operators.

The law, passed in November 2024, is supported by most Australians according to polling. The government says research links heavy social media use among young teens to bullying, misinformation and harmful body-image content.

Companies that fail to comply with the ban risk penalties of up to A$49.5 million. Lawmakers and tech firms abroad are watching how the rollout unfolds, as Australia’s approach is among the toughest efforts globally to restrict minors’ access to social platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok rolls out mindfulness and screen-time reset features

TikTok has announced a set of new well-being features designed to help users build more balanced digital habits. The rollout includes an in-app experience with breathing exercises, calming audio tracks and short ‘Well-being Missions’ that reward mindful behaviour.

The missions are interactive tasks, such as quizzes and flashcards, that encourage users to explore TikTok’s existing digital-wellness tools (like Sleep Hours and Screen Time Management). Completing these missions earns users badges, reinforcing positive habits. In early tests, approximately 40 percent of people who saw the missions chose to try them.

TikTok is also experimenting with a dedicated ‘pause and recharge’ space within the app. This includes safe, calming activities that help users disconnect, for instance before bedtime or after long scrolling sessions.

The broader effort reflects TikTok’s growing emphasis on digital wellness, part of a larger industry trend on the responsible and healthy use of social platforms.

TikTok launches new tools to manage AI-generated content

TikTok has announced new tools to help users shape and understand AI-generated content (AIGC) in their feeds. A new ‘Manage Topics’ control will let users adjust how much AI content appears in their For You feed, working alongside the existing keyword filters and the ‘not interested’ option.

The aim is to personalise content rather than remove it entirely.

To strengthen transparency, TikTok is testing ‘invisible watermarking’ for AI-generated content created with TikTok tools or uploaded using C2PA Content Credentials. Combined with creator labels and AI detection, these watermarks help track and identify content even if edited or re-uploaded.

The platform has launched a $2 million AI literacy fund to support global experts in creating educational content on responsible AI. TikTok collaborates with industry partners and non-profits like Partnership on AI to promote transparency, research, and best practices.

Investments in AI extend beyond moderation and labelling. TikTok is developing new features such as Smart Split and AI Outline to enhance creativity and discovery, while using AI to protect user safety and improve the well-being of its trust and safety teams.

Teenagers still face harmful content despite new protections

In the UK and other countries, teenagers continue to encounter harmful social media content, including posts about bullying, suicide and weapons, despite the Online Safety Act coming into effect in July.

A BBC investigation using test profiles revealed that some platforms continue to expose young users to concerning material, particularly on TikTok and YouTube.

The experiment, conducted with six fictional accounts aged 13 to 15, revealed differences in exposure between boys and girls.

While Instagram showed marked improvement, with no harmful content displayed during the latest test, TikTok users were repeatedly served posts about self-harm and abuse, and one YouTube profile encountered videos featuring weapons and animal harm.

Experts warned that changes will take time and urged parents to monitor their children’s online activity actively. They also recommended open conversations about content, the use of parental controls, and vigilance rather than relying solely on the new regulatory codes.

TikTok faces scrutiny over AI moderation and UK staff cuts

TikTok has responded to the Science, Innovation and Technology Committee regarding proposed cuts to its UK Trust and Safety teams. The company claimed that reducing staff while expanding AI, third-party specialists, and more localised teams would improve moderation effectiveness.

The social media platform, however, did not provide any supporting data or risk assessment to justify these changes. MPs previously called for more transparency on content moderation data during an inquiry into social media, misinformation, and harmful algorithms.

TikTok’s increasing reliance on AI comes amid broader concerns over AI safety, following reports of chatbots encouraging harmful behaviours.

Committee Chair Dame Chi Onwurah expressed concern that AI cannot reliably replace human moderators. She warned that automated moderation could itself cause harm and criticised TikTok for not providing evidence that the staff cuts would protect users.

The Committee urges the Government and Ofcom to take action to ensure user safety before the staffing reductions are implemented. Dame Chi Onwurah emphasised that without credible data, it is impossible to determine whether the changes will effectively protect users.

Meta, TikTok and Snapchat prepare to block under-16s as Australia enforces social media ban

Social media platforms, including Meta, TikTok and Snapchat, will begin sending notices to more than a million Australian teens, telling them to download their data, freeze their profiles or lose access when the national ban for under-16s comes into force on 10 December.

According to people familiar with the plans, platforms will deactivate accounts believed to belong to users under the age of 16. The roughly 20 million older Australians will not be affected. The move marks a shift after a year of opposition from tech firms, which had warned the rules would be intrusive or unworkable.

Companies plan to rely on their existing age-estimation software, which predicts age from behaviour signals such as likes and engagement patterns. Only users who challenge a block will be pushed to the age assurance apps. These tools estimate age from a selfie and, if disputed, allow users to upload ID. Trials show they work, but accuracy drops for 16- and 17-year-olds.

Yoti’s Chief Policy Officer, Julie Dawson, said disruption should be brief, with users adapting within a few weeks. Meta, Snapchat, TikTok and Google declined to comment. In earlier hearings, most respondents stated that they would comply.

The law blocks teenagers from using mainstream platforms and provides no parental override. It follows renewed concern over youth safety after internal Meta documents revealed in 2021 showed harm linked to heavy social media use.

A smooth rollout is expected to influence other countries as they explore similar measures. France, Denmark, the UK and the US state of Florida have pursued age checks with mixed results, amid concerns over privacy and practicality.

Consultants say governments are watching to see whether Australia’s requirement for platforms to take ‘reasonable steps’ to block minors, including trying to detect VPN use, works in practice without causing significant disruption for other users.

Meta and TikTok agree to comply with Australia’s under-16 social media ban

Meta and TikTok have confirmed they will comply with Australia’s new law banning under-16s from using social media platforms, though both warned it will be difficult to enforce. The legislation, taking effect on 10 December, will require major platforms to remove accounts belonging to users under that age.

The law is among the world’s strictest, but regulators and companies are still working out how it will be implemented. Social media firms face fines of up to A$49.5 million if found in breach, yet they are not required to verify every user’s age directly.

TikTok’s Australia policy head, Ella Woods-Joyce, warned the ban could drive children toward unregulated online spaces lacking safety measures. Meta’s director, Mia Garlick, acknowledged the ‘significant engineering and age assurance challenges’ involved in detecting and removing underage users.

Critics including YouTube and digital rights groups have labelled the ban vague and rushed, arguing it may not achieve its aim of protecting children online. The government maintains that platforms must take ‘reasonable steps’ to prevent young users from accessing their services.

EU investigates Meta and TikTok for DSA breaches

The European Commission has accused Meta and TikTok of breaching the Digital Services Act (DSA), highlighting failures in handling illegal content and providing researchers access to public data.

Meta’s Facebook and Instagram were found to make it too difficult for users to report illegal content or receive responses to complaints, the Commission said in its preliminary findings.

Investigations began after complaints to Ireland’s content regulator, where Meta’s EU base is located. The Commission’s inquiry, which has been ongoing since last year, aims to ensure that large platforms protect users and meet EU safety obligations.

Meta and TikTok can submit counterarguments before penalties of up to six percent of global annual turnover are imposed.

Both companies face separate concerns about denying researchers adequate access to platform data and preventing oversight of systemic online risks. TikTok is under further examination for minor protection and advertising transparency issues.

The Commission has launched 14 such DSA proceedings to date; none has yet concluded.

EU warns Meta and TikTok over transparency failures

The European Commission has found that Meta and TikTok violated key transparency obligations under the EU’s Digital Services Act (DSA). According to preliminary findings, both companies failed to provide adequate data access to researchers studying public content on their platforms.

The Commission said Facebook, Instagram, and TikTok imposed ‘burdensome’ conditions that left researchers with incomplete or unreliable data, hampering efforts to investigate the spread of harmful or illegal content online.

Meta faces additional accusations of breaching the DSA’s rules on user reporting and complaints. The Commission said the ‘Notice and Action’ systems on Facebook and Instagram were not user-friendly and contained ‘dark patterns’ (manipulative design choices that discouraged users from reporting problematic content).

Moreover, Meta allegedly failed to give users sufficient explanations when their posts or accounts were removed, undermining transparency and accountability requirements set by the law.

Both companies have the opportunity to respond before the Commission issues final decisions. However, if the findings are confirmed, Meta and TikTok could face fines of up to 6% of their global annual revenue.

The EU executive also announced new rules, effective next week, that will expand data access for ‘vetted’ researchers, allowing them to study internal platform dynamics and better understand how large social media platforms shape online information flows.

Trump signs order to advance TikTok spin-off tied to his allies

President Donald Trump has signed an executive order that paves the way for TikTok to remain in the US, despite a law requiring its Chinese owner, ByteDance, to divest the app or face a ban. The order grants negotiators 120 more days to finalise a deal, marking the fifth time Trump has delayed enforcement of the law passed by Congress and upheld by the Supreme Court.

The deal would transfer most of TikTok’s US operations to a new company controlled by American investors. Among them are Oracle co-founder Larry Ellison, private equity firm Silver Lake, and Susquehanna International’s Jeff Yass, a prominent Republican donor. An Emirati consortium known as MGX would also participate, reflecting the Gulf’s growing role in global tech investments. ByteDance would keep a minority stake and retain control of the app’s recommendation algorithm, a sticking point for the lawmakers who originally pushed for the sale.

Speaking from the Oval Office, Trump described the incoming management as ‘very smart Americans’ and said Chinese President Xi Jinping had approved the arrangement. Asked whether TikTok would favour pro-Trump content, the president joked that he would prefer a ‘100 percent MAGA’ feed but insisted the app would remain open to all perspectives.

Critics argue the arrangement undermines the very law that forced ByteDance to sell. By preserving a Chinese stake and leaving ByteDance in charge of the algorithm, the deal raises questions about whether the national security concerns that motivated Congress have truly been addressed. Some legal scholars say the White House’s role in handpicking buyers aligned with Trump’s political allies only adds to fears of political influence over a platform used by 170 million Americans.

The negotiations also highlight TikTok’s enormous influence and profit potential. Investors worldwide, including Rupert Murdoch’s Fox Corp., expressed interest in a slice of the app. TikTok’s algorithm, which will still be trained in China but adapted with US data, will remain central to the platform’s success. Oracle will continue to oversee American user data and review the algorithm for security risks.

The unusual process has fuelled debate about political power and digital influence. Critics such as California Governor Gavin Newsom warned that placing TikTok in the hands of Trump-friendly investors could create new risks of propaganda. Others note that the deal reflects less a clear national security strategy than a high-stakes convergence of money, politics and global tech rivalry.