An Australian court upheld an order on Friday requiring Elon Musk’s X to pay a fine of A$610,500 ($418,000) for not cooperating with a regulator’s request regarding anti-child-abuse practices. X had contested the fine, but the Federal Court of Australia determined that the company was obligated to respond to a notice from the eSafety Commissioner, which sought information about measures to combat child sexual exploitation material on the platform.
Musk’s company argued that it was not obligated to respond to the notice because it had been merged into a new corporate entity under his control, a restructuring it said extinguished its liability. However, eSafety Commissioner Julie Inman Grant cautioned that accepting this argument could set a troubling precedent, enabling foreign companies to evade regulatory responsibilities in Australia through corporate restructuring. Alongside the fine, eSafety has also launched civil proceedings against X over the noncompliance.
This is not the first confrontation between Musk and Australia’s internet safety regulator. Earlier this year, the eSafety Commissioner ordered X to take down posts showing a bishop being stabbed during a sermon. X contested the order in court, claiming that a regulator in one country should not control global content visibility. Ultimately, X retained the posts after the Australian regulator withdrew its case. Musk labelled the order as censorship and claimed it was part of a larger agenda by the World Economic Forum to impose global eSafety regulations.
Texas Attorney General Ken Paxton has filed a lawsuit against TikTok, accusing the platform of violating children’s privacy laws. The lawsuit alleges that TikTok shared personal information of minors without parental consent, in breach of Texas’s Securing Children Online through Parental Empowerment Act (SCOPE Act).
The legal action seeks an injunction and civil penalties, with fines up to $10,000 per violation. Paxton claims TikTok failed to provide adequate privacy tools for children and allowed data to be shared from accounts set to private. Targeted advertising to children was also a concern raised in the lawsuit.
The lawsuit also targets TikTok’s parent company, ByteDance, accusing it of prioritising profits over child safety. Paxton stressed the importance of holding large tech companies accountable for their role in protecting minors online.
The case was filed in Galveston County court, with TikTok yet to comment on the matter. The lawsuit represents a broader concern about the protection of children’s online privacy in the digital age.
Children who are chronically ill and unable to attend school can now stay connected to the classroom using the AV1 robot, developed by the company No Isolation from Norway. This innovative technology serves as their eyes and ears, allowing them to engage with lessons and interact with friends remotely. Controlled via an app, the robot sits on a classroom desk, enabling students to rotate its view, speak to classmates, and even signal when they want to participate.
The AV1 has been especially valuable for children undergoing long-term treatment or experiencing mental health challenges, helping them maintain a connection with their peers and stay socially included. Schools in the United Kingdom can rent or purchase the AV1, and the robot has seen its widest adoption in the UK and Germany, where over 1,000 units are active. For many students, the robot has become a lifeline during extended absences from school.
Though widely praised, the AV1 faces logistical challenges in schools and hospitals, including administrative hurdles and technical issues such as weak Wi-Fi. Despite these obstacles, teachers and families have found the robot to be highly effective, with privacy protections and features tailored to students’ needs, including the option to avoid showing their face on screen.
Research has highlighted the AV1’s potential to keep children both socially and academically connected, and No Isolation has rolled out a training resource, AV1 Academy, to support teachers and schools in using the technology effectively. With its user-friendly design and robust privacy features, the AV1 continues to make a positive impact on the lives of children facing illness and long absences from school.
The European Commission has requested information from YouTube, Snapchat, and TikTok regarding the algorithms used to recommend content to users. Concerns have been raised about the influence of these systems on issues like elections, mental health, and protecting minors. The inquiry falls under the Digital Services Act (DSA), aiming to address potential systemic risks, including the spread of illegal content such as hate speech and drug promotion.
TikTok faces additional scrutiny about measures to prevent bad actors from manipulating the platform, especially during elections. These platforms must provide detailed information on their systems by 15 November. Failure to comply could result in further action, including potential fines.
The DSA mandates that major tech companies take more responsibility for tackling illegal and harmful content. The EU has previously initiated similar non-compliance proceedings against other tech giants, including Meta, AliExpress, and TikTok, over content regulation.
Ello, an AI reading companion designed to help children struggling with reading, has introduced a new feature called ‘Storytime’. This feature enables kids to create their own stories by choosing from a range of settings, characters, and plots. Story options are tailored to the child’s reading level and current lessons, helping them practise essential reading skills.
Ello’s AI, represented by a bright blue elephant, listens to children as they read aloud and helps correct mispronunciations. The tool uses phonics-based strategies to adapt stories based on the child’s responses, ensuring personalised and engaging experiences. It also offers two reading modes: one where the child and Ello take turns reading and another, more supportive mode for younger readers.
The Storytime feature distinguishes itself from other AI-assisted story creation tools by focusing on reading development. The technology has been tested with teachers and children, and includes safeguards to ensure age-appropriate content. Future versions of the product may allow even more creative input from children, while maintaining helpful structure to avoid overwhelming them.
Ello’s subscription costs $14.99 per month, with discounted pricing for low-income families. The company also partners with schools to offer its services for free, and has recently made its collection of decodable children’s books available online at no cost.
Cloudflare, internet service providers, and network equipment providers are collaborating to improve the safety and privacy of internet users globally. By offering Cloudflare’s DNS resolvers at no cost, these providers can deliver advanced security features, such as malware blocking and content filtering, to their customers.
This partnership empowers ISPs and equipment manufacturers to improve their service offerings and ensures that users can enjoy a safer browsing experience at no additional cost. With children spending more time online, particularly during the COVID-19 pandemic, the demand for protective measures has never been greater.
Cloudflare’s initiatives, such as the launch of 1.1.1.1 for Families, allow these partners to implement content filtering and security features tailored specifically for households. The strategic alignment ensures that families can confidently navigate the internet, knowing that harmful content is being filtered and their online activities are shielded from threats.
Furthermore, Cloudflare, alongside ISPs and network equipment providers, addresses the challenges users face in setting up effective online protections. Many consumers find configuring DNS settings and implementing security features daunting. To tackle this issue, Cloudflare is working with its partners to simplify the setup process.
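To make the mechanics concrete: at its simplest, switching a household onto 1.1.1.1 for Families means pointing DNS lookups at Cloudflare’s published family resolver addresses (1.1.1.2 for malware blocking, 1.1.1.3 for malware and adult-content blocking). The minimal sketch below, which assumes the third-party dnspython package and treats the all-zero answer for filtered domains as indicative rather than guaranteed, shows what such a lookup looks like:

```python
# Minimal sketch: resolve a domain through Cloudflare's 1.1.1.1 for Families
# resolver and flag answers that look filtered. Requires the third-party
# dnspython package (pip install dnspython); the resolver addresses are
# Cloudflare's published family DNS endpoints.

import dns.resolver

FAMILY_RESOLVER = "1.1.1.3"  # malware + adult-content filtering (1.1.1.2 = malware only)


def lookup(domain: str, nameserver: str = FAMILY_RESOLVER) -> list[str]:
    """Return the A records for `domain` as seen through the chosen resolver."""
    resolver = dns.resolver.Resolver(configure=False)  # ignore system DNS settings
    resolver.nameservers = [nameserver]
    answer = resolver.resolve(domain, "A")
    return [record.address for record in answer]


if __name__ == "__main__":
    addresses = lookup("example.com")
    # Cloudflare's family resolver is understood to answer filtered domains
    # with 0.0.0.0, so an all-zero response suggests the domain was blocked.
    blocked = addresses == ["0.0.0.0"]
    print(f"example.com -> {addresses} (filtered: {blocked})")
```

In practice, ISPs embed this choice at the router or network level so subscribers never touch a DNS setting themselves, which is exactly the friction the partnership aims to remove.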
By integrating Cloudflare’s services directly into their platforms, ISPs can provide a seamless user experience that encourages the adoption of these safety measures. This collaborative approach ensures that even the least tech-savvy users can benefit from enhanced security without feeling overwhelmed.
Why does this matter?
Cloudflare, internet service providers, and network equipment providers understand the need for flexible, customisable solutions to meet diverse user needs. With Cloudflare’s Gateway product, ISPs can offer advanced filtering options that let users tailor their online experience, including content restrictions and scheduling, such as limiting social media access. These customisable options empower users to control their online safety while boosting customer satisfaction and loyalty.
California has introduced a new law requiring schools to limit or ban the use of smartphones to combat rising concerns about their impact on mental health and education. Governor Gavin Newsom signed the bill following increasing evidence linking excessive phone use with anxiety, depression, and learning difficulties.
California is joining thirteen other states, including Florida, which introduced a similar ban last year. The Los Angeles Unified School District, the state’s largest, already prohibited phones for its 429,000 students earlier this year. The law, aimed at promoting student focus and social development, reflects a broader national movement to reduce smartphone use among young people.
Surgeon General Vivek Murthy has warned of the growing mental health crisis associated with social media, comparing it to the dangers of smoking. Studies in the US suggest that teenagers spending more than three hours a day on social media are at increased risk of mental illness, with average usage exceeding four hours daily.
School boards across California will be required to implement policies limiting phone use by July 2026, with updates every five years. Newsom stressed the importance of addressing the issue early to improve students’ wellbeing and academic focus.
Snapchat is positioning itself as a healthier social media alternative for teens, with CEO Evan Spiegel emphasising the platform’s different approach at the company’s annual conference. Recent research from the University of Amsterdam supports this view, showing that while platforms like TikTok and Instagram negatively affect youth mental health, Snapchat use appears to have positive effects on friendships and well-being.
However, critics argue that Snapchat’s disappearing messages feature can facilitate illegal activities. Matthew Bergman, an advocate for social media victims, claimed the platform has been used by drug dealers, citing instances of children dying from fentanyl poisoning after buying drugs via the app. Despite these concerns, Snapchat remains popular, particularly with younger users.
Industry analysts recognise the platform’s efforts but highlight its ongoing challenges. As Snapchat continues to grow its user base, balancing privacy and safety with revenue generation remains a key issue, especially as it struggles to compete with bigger players like TikTok, Meta, and Google for advertising.
Snapchat’s appeal lies in its low-pressure environment, with features like disappearing stories and augmented reality filters. Young users, like 14-year-old Lily, appreciate the casual nature of communication on the platform, while content creators praise its ability to offer more freedom and reduce social pressure compared to other social media platforms.
Telegram has apparently decided to relax its policy restrictions and will provide users’ IP addresses and phone numbers to authorities in response to valid legal requests. The shift in policy, announced by CEO Pavel Durov, marks a significant change for a platform long known for its resistance to government data demands. The update comes in the wake of Durov’s recent legal troubles in France, where he is facing charges related to the spread of child abuse material on the platform.
Durov, under investigation since his arrest in France last month, says the new measures are part of a broader effort to deter criminal activity on Telegram. Historically, the platform has been criticised for its lax approach to moderation, often ignoring government requests to remove illegal content or share information on suspected criminals. Now, using AI alongside human moderators, the app hides problematic content from search results.
Telegram has long been a tool for activists and dissidents, especially in countries like Russia and Iran, where it has been used to challenge authoritarian regimes. However, the platform has also attracted extremists, conspiracy theorists, and white supremacists. In some cases, Telegram has been used to coordinate real-world attacks, leading to mounting pressure on the company to take greater responsibility.