Australia introduces groundbreaking bill to ban social media for children under 16

Australia’s government introduced a bill to parliament aiming to ban social media use for children under 16, with potential fines of up to A$49.5 million ($32 million) for platforms that fail to comply. The law would enforce age verification, possibly using biometrics or government IDs, setting the highest global age limit for social media use without exemptions for parental consent or existing accounts.

Prime Minister Anthony Albanese described the reforms as a response to the physical and mental health risks social media poses, particularly for young users. Harmful content, such as material promoting negative body image among girls and misogynistic content aimed at boys, has fuelled the government’s push for strict measures. Messaging services, gaming, and educational platforms like Google Classroom and Headspace would remain accessible under the proposal.

While opposition parties support the bill, independents and the Greens are calling for more details. Communications Minister Michelle Rowland emphasised that the law places responsibility on platforms, not parents or children, to implement robust age-verification systems. Privacy safeguards, including mandatory destruction of collected data, are also part of the proposed legislation. Australia’s policy would be among the world’s strictest, surpassing similar efforts in France and the US.
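The bill does not prescribe a particular technical mechanism, but the obligations described above (platform-side verification, with the collected data destroyed afterwards) can be illustrated with a minimal sketch. The verify_age_signal callable, the MINIMUM_AGE constant, and the whole flow below are hypothetical stand-ins, not anything drawn from the legislation or from an actual platform.

```python
from dataclasses import dataclass
from typing import Callable

MINIMUM_AGE = 16  # threshold proposed in the Australian bill


@dataclass
class AgeGateResult:
    """Only the pass/fail outcome is kept; no identity data is stored."""
    is_of_age: bool


def check_signup_age(raw_evidence: bytes,
                     verify_age_signal: Callable[[bytes], int]) -> AgeGateResult:
    """Hypothetical platform-side age gate.

    `verify_age_signal` stands in for an external verifier (a biometric
    age estimate, a government ID check, etc.) and returns an estimated
    age. The raw evidence is discarded as soon as the check completes,
    mirroring the bill's requirement that collected data be destroyed.
    """
    try:
        estimated_age = verify_age_signal(raw_evidence)
        return AgeGateResult(is_of_age=estimated_age >= MINIMUM_AGE)
    finally:
        del raw_evidence  # drop the local reference; nothing is persisted
```

A real system would also have to guarantee deletion on the verifier’s side and in any logs, which is the kind of end-to-end safeguard the bill’s mandatory-destruction clause is aimed at.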

Tighter messaging controls for under-13 players on Roblox

Roblox has announced new measures to protect users under 13, permanently removing their ability to send messages outside of games. In-game messaging will remain available, but only with parental consent. Parents can now remotely manage accounts, oversee friend lists, set spending controls, and enforce screen time limits.

The gaming platform, which boasts 89 million users, has faced scrutiny over claims of child abuse on its service. In August, Turkish authorities blocked Roblox, citing concerns over user-generated content. A lawsuit filed in 2022 accused the company of facilitating exploitation, including sexual and financial abuse of a young girl in California.

New rules also limit communication for younger players, allowing under-13 users to receive public broadcast messages only within specific games. Roblox will implement updated content descriptors such as ‘Minimal’ and ‘Restricted’ to classify games, restricting access for users under nine to appropriate experiences.

Access to restricted content will now require users to be at least 17 years old and verify their age. These changes aim to enhance child safety amid growing concerns and highlight Roblox’s efforts to address ongoing challenges in its community.
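Roblox has not published the enforcement logic behind these labels, but the age rules summarised above can be sketched as a simple lookup. Only the ‘Minimal’ and ‘Restricted’ descriptors are named in the announcement; the dictionary, the can_access helper, and the thresholds below are illustrative assumptions.

```python
# Illustrative mapping from content descriptors to the minimum age needed
# to access them. Only 'Minimal' and 'Restricted' are named in the
# announcement; the exact thresholds here are assumptions.
MINIMUM_AGE_BY_DESCRIPTOR = {
    "Minimal": 0,      # open to the youngest users
    "Restricted": 17,  # requires a verified age of at least 17
}


def can_access(user_age: int, age_verified: bool, descriptor: str) -> bool:
    """Return True if a user may access an experience with this descriptor."""
    required = MINIMUM_AGE_BY_DESCRIPTOR.get(descriptor)
    if required is None:
        return False  # unknown descriptor: deny by default
    if descriptor == "Restricted" and not age_verified:
        return False  # restricted content also requires age verification
    return user_age >= required
```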

FTC’s Holyoak raises concerns over AI and kids’ data

Federal Trade Commissioner Melissa Holyoak has called for closer scrutiny of how AI products handle data from younger users, raising concerns about privacy and safety. Speaking at an American Bar Association meeting in Washington, Holyoak questioned what happens to information collected from children using AI tools, comparing their interactions to asking advice from a toy like a Magic 8 Ball.

The FTC, which enforces the Children’s Online Privacy Protection Act, has previously sued platforms like TikTok over alleged violations. Holyoak suggested the agency should evaluate its authority to investigate AI privacy practices as the sector evolves. Her remarks come as the FTC faces a leadership change with President-elect Donald Trump set to appoint a successor to Lina Khan, known for her aggressive stance against corporate consolidation.

Holyoak, considered a potential acting chair, emphasised that the FTC should avoid a rigid approach to mergers and acquisitions, while also predicting challenges to the agency’s worker noncompete ban. She noted that a Supreme Court decision on the matter could provide valuable clarity.

UK proposes fines for executives over illegal knife sales ads

The UK government is considering fines of up to £10,000 for social media executives who fail to remove illegal knife advertisements from their platforms. This proposal is part of Labour’s effort to halve knife crime in the next decade by addressing the ‘unacceptable use’ of online spaces to market illegal weapons and promote violence.

Under the plans, police would have the power to issue warnings to online companies and require the removal of specific content, with further penalties imposed on senior executives if action is not taken swiftly. The government also aims to tighten laws around the sale of ninja swords, following the tragic case of 16-year-old Ronan Kanda, who was killed with a weapon bought online.

Home Secretary Yvette Cooper stated that these new sanctions are part of a broader mission to reduce knife crime, which has devastated many communities. The proposals, backed by a coalition including actor Idris Elba, aim to ensure that online marketplaces take greater responsibility in preventing the sale of dangerous weapons.

Australia’s proposed ban on social media for under-16s sparks global debate on youth digital exposure

Australian Prime Minister Anthony Albanese announced a groundbreaking proposal on Thursday to implement a social media ban for children under 16. The proposed legislation would require social media platforms to verify users’ ages and ensure that minors are not accessing their services. Platforms that fail to comply would face substantial fines, while users or their parents would not face penalties for violating the law. Albanese emphasised that this initiative aims to protect children from the harmful effects of social media, stressing that parents and families could count on the government’s support.

The bill would not allow exemptions for children whose parents consent to their use of social media, and it would not ‘grandfather’ existing users who are underage. Social media platforms such as Instagram, TikTok, Facebook, X, and YouTube would be directly affected by the legislation. Communications Minister Michelle Rowland said the platforms had been consulted on how the law could be enforced in practice, but that no exemptions would be granted.

Some experts have voiced concerns about the blanket nature of the proposed ban, suggesting it may not be the most effective solution, while social media companies, including Meta (the parent company of Facebook and Instagram), have expressed support for age-verification and parental-consent tools. Last month, more than 140 international experts signed an open letter urging the government to reconsider its approach. The debate echoes similar discussions in the US, where there have been efforts to restrict children’s access to social media on mental health grounds.

Australia plans to ban social media for children under 16

The Australian government has announced plans to introduce a ban on social media access for children under 16, with legislation expected to pass by late next year. Prime Minister Anthony Albanese described the move as part of a world-leading initiative to combat the harms social media inflicts on children, particularly the negative impact on their mental and physical health. He highlighted concerns over the influence of harmful body image content for girls and misogynistic material directed at boys.

Australia is also testing age-verification systems, such as biometrics and government ID, to ensure that children cannot access social media platforms. The new legislation will not allow exemptions, including for children with parental consent or those with pre-existing accounts. Social media platforms will be held responsible for preventing access to minors, rather than placing the burden on parents or children.

The proposed ban includes major platforms such as Meta’s Instagram and Facebook, TikTok, YouTube, and X (formerly Twitter). While some digital industry representatives, like the Digital Industry Group, have criticised the plan, arguing it could push young people toward unregulated parts of the internet, Australian officials stand by the measure, emphasising the need for strong protections against online harm.

This move positions Australia as a leader in regulating children’s access to social media, with no other country having implemented such stringent age-verification requirements. The new rules will be introduced into parliament this year and are set to take effect 12 months after the legislation passes.

TikTok faces lawsuit in France after teen suicides linked to platform

Seven families in France are suing TikTok, alleging that the platform’s algorithm exposed their teenage children to harmful content, leading to tragic consequences, including the suicides of two 15-year-olds. Filed at the Créteil judicial court, the joint action seeks to hold TikTok accountable for what the families describe as dangerous content promoting self-harm, eating disorders, and suicide.

The families’ lawyer, Laure Boutron-Marmion, argues that TikTok, as a company offering its services to minors, must address its platform’s risks and shortcomings. She emphasised the need for TikTok’s legal liability to be recognised, especially given that its algorithm is often blamed for pushing disturbing content. TikTok, like Meta’s Facebook and Instagram, faces multiple lawsuits worldwide accusing these platforms of targeting minors in ways that harm their mental health.

TikTok has previously stated it is committed to protecting young users’ mental well-being and has invested in safety measures, according to CEO Shou Zi Chew’s remarks to US lawmakers earlier this year.

Coventry University project bridges education gap in Vietnam with AI tools

Coventry University researchers are using AI to support teachers in northern Vietnam’s rural communities, where access to technology and training is often limited. Led by Dr Petros Lameras, the GameAid project introduces educators to generative AI, which creates text, images, and other materials in response to prompts, helping teachers improve lesson development and classroom engagement.
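As a rough illustration of what ‘responding to prompts’ means in practice, the sketch below asks a text model to draft a short lesson outline. The generate callable is a placeholder for whichever generative-AI backend the GameAid tools actually use; the project’s real implementation is not described in the source.

```python
from typing import Callable


def draft_lesson_outline(generate: Callable[[str], str],
                         topic: str, grade: str) -> str:
    """Ask a generative model for a lesson outline from a plain-text prompt.

    `generate` is a placeholder (prompt -> text) standing in for any
    generative-AI backend; it is not GameAid's actual tooling.
    """
    prompt = (
        f"Write a three-part lesson outline on '{topic}' for {grade} students "
        "in a rural classroom with limited access to technology. Include one "
        "short hands-on activity that requires no internet."
    )
    return generate(prompt)
```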

The GameAid initiative uses a game-based approach to demonstrate AI’s practical benefits, providing tools and guidelines that enable teachers to integrate AI into their curriculum. Dr Lameras highlights the project’s importance in transforming educators’ technological skills, while Dr Nguyen Thi Thu Huyen from Hanoi University emphasises its potential to close the educational gap between Vietnam’s urban and rural areas.

The initiative is seen as a key step towards promoting equal learning opportunities, offering much-needed educational resources to under-represented groups. Researchers at Coventry hope that their work will support more positive learning outcomes across Vietnam’s diverse educational landscape.

WhatsApp group exposes students to explicit content

Clacton County High School in Essex, UK, has issued a warning to parents about a WhatsApp group called ‘Add Everyone,’ which reportedly exposes children to explicit and inappropriate material. In a Facebook post, the school advised parents to ensure their children avoid joining the group, urging them to block and report it if necessary. The warning comes amid rising concern about online safety for young people, though the school noted it had no reports of its students joining the group.

Parents have reacted strongly to the warning, with many sharing experiences of their children being added to groups containing inappropriate content. One parent described it as ‘absolutely disgusting’ and ‘scary’ that young users could be added so easily, while others expressed relief that their children left the group immediately. A similar alert was issued by Clacton Coastal Academy, which posted on social media about explicit content circulating in WhatsApp groups, though it clarified that no students at their academy had reported it.

Essex Police are also investigating reports from the region of unsolicited and potentially illegal content being shared via WhatsApp. The force emphasised that, while WhatsApp can be useful for staying connected, it can also be a channel for unsolicited and abusive material. Police have encouraged parents and students to use online reporting tools to flag harmful content and reminded parents to discuss online safety measures with their children.

UK man sentenced to 18 years for using AI to create child sexual abuse material

In a landmark case for AI and criminal justice, a UK man has been sentenced to 18 years in prison for using AI to create child sexual abuse material (CSAM). Hugh Nelson, 27, from Bolton, used an app called Daz 3D to turn regular photos of children into exploitative 3D imagery, according to reports. In several cases, he created these images based on photographs provided by individuals who personally knew the children involved.

Nelson sold the AI-generated images on various online forums, reportedly making around £5,000 (about $6,500) over an 18-month period. His activities were uncovered when he attempted to sell one of his digital creations to an undercover officer, charging £80 (about $103) per image.

Following his arrest, Nelson faced multiple charges, including encouraging the rape of a child, attempting to incite a child to engage in sexual acts, and distributing illegal images. The case highlights the dark side of AI misuse and underscores the growing need for regulation around technology-enabled abuse.