Meta Platforms is facing a lawsuit in Massachusetts for allegedly designing Instagram features to exploit teenagers’ vulnerabilities, causing addiction and harming their mental health. A Suffolk County judge rejected Meta’s attempt to dismiss the case, ruling that claims under the state’s consumer protection law remain valid.
The company argued for immunity under Section 230 of the Communications Decency Act, which shields internet firms from liability for user-generated content. However, the judge ruled that this protection does not extend to Meta’s own business conduct or misleading statements about Instagram’s safety measures.
Massachusetts Attorney General Andrea Joy Campbell emphasised that the ruling allows the state to push for accountability and meaningful changes to safeguard young users. Meta expressed disagreement, maintaining that its efforts demonstrate a commitment to supporting young people.
The lawsuit highlights internal data suggesting Instagram’s addictive design, driven by features like push notifications and endless scrolling. It also claims Meta executives, including CEO Mark Zuckerberg, dismissed concerns raised by research indicating the need for changes to improve teenage users’ well-being.
US federal prosecutors are ramping up efforts to tackle the use of AI tools in creating child sexual abuse images, as they fear the technology could lead to a rise in illegal content. The Justice Department has already pursued two cases this year against individuals accused of using generative AI to produce explicit images of minors. James Silver, chief of the Department’s Computer Crime and Intellectual Property Section, anticipates more cases, cautioning against the normalisation of AI-generated abuse material.
Child safety advocates and prosecutors worry that AI systems can alter ordinary photos of children to produce abusive content, making it more challenging to identify and protect actual victims. The National Center for Missing and Exploited Children reports approximately 450 cases each month involving AI-generated abuse material. While this number is small compared to the millions of online child exploitation reports received, it represents a concerning trend in the misuse of technology.
The legal framework around AI-generated abuse material is still evolving, particularly in cases where no identifiable children are depicted. Prosecutors are turning to obscenity charges when traditional child pornography laws do not apply, as in the case of Steven Anderegg, accused of using Stable Diffusion to create explicit images. Similarly, US Army soldier Seth Herrera faces child pornography charges for allegedly using AI chatbots to alter innocent photos of children into abusive content. Both defendants have pleaded not guilty.
Nonprofit groups like Thorn and All Tech Is Human are working with major tech companies, including Google, Amazon, Meta, OpenAI, and Stability AI, to prevent AI models from generating abusive content and to monitor their platforms. Thorn’s vice president, Rebecca Portnoff, emphasised that the issue is not just a future risk but a current problem, urging action during this critical period to prevent its escalation.
The Australian government is moving toward a social media ban for younger users, sparking concerns among youth and experts about potential negative impacts on vulnerable communities. The proposed restrictions, intended to combat issues such as addiction and online harm, may sever vital social connections for teens from migrant, LGBTQIA+, and other minority backgrounds.
Refugee youth like 14-year-old Tereza Hussein, who relies on social media to connect with distant family, fear the policy will cut off essential lifelines. Experts argue that banning platforms could increase mental health struggles, especially for teens already managing anxiety or isolation. Youth advocates are calling for better content moderation instead of blanket bans.
The Australian government aims to trial age verification as a first step, though the specific platforms and age limits remain unclear. Similar attempts elsewhere, including in France and the US, have struggled as tech-savvy users bypass restrictions through virtual private networks (VPNs).
Prime Minister Anthony Albanese has promoted the idea, highlighting parents’ desire for children to be more active offline. Critics, however, suggest the ban reflects outdated nostalgia, with experts cautioning that social media plays a crucial role in the daily lives of young people today. Legislation is expected by the end of the year.
A federal judge in California has ruled that Meta must face lawsuits from several US states alleging that Facebook and Instagram contribute to mental health problems among teenagers. The states argue that Meta’s platforms are deliberately designed to be addictive, harming young users. Over 30 states, including California, New York, and Florida, filed these lawsuits last year.
Judge Yvonne Gonzalez Rogers rejected Meta’s attempt to dismiss the cases, though she did limit some claims. Section 230 of the Communications Decency Act, which offers online platforms legal protection from liability for user-generated content, shields Meta from certain accusations. However, the judge found enough evidence to allow the lawsuits to proceed, enabling the plaintiffs to gather further evidence and pursue a potential trial.
The decision also impacts personal injury cases filed by individual users against Meta, TikTok, YouTube, and Snapchat. Meta is the only company named in the state lawsuits, with plaintiffs seeking damages and changes to allegedly harmful business practices. California Attorney General Rob Bonta welcomed the ruling, stating that Meta should be held accountable for the harm it has caused to young people.
Meta disagrees with the decision, insisting it has developed tools to support parents and teenagers, such as the new Teen Accounts on Instagram. Google also denied the allegations, saying its efforts to create a safer online experience for young people remain a priority. Many other lawsuits across the US accuse social media platforms of fuelling anxiety, depression, and body-image concerns through addictive algorithms.
The Telecommunications Regulatory Authority (TRA) in Oman has launched several initiatives to protect children’s internet usage, responding to alarming statistics showing that nearly 86% of children in the Sultanate use the internet. Recognising that a substantial portion of this demographic spends considerable time online, with 43.5% using it for information searches and 34% for entertainment and communication, the authority is also pursuing a proposed law to regulate children’s internet activities.
These initiatives adopt the definition of a child set out in Oman’s Child Protection Law No. 22/2014, consistent with the ITU’s definition: any individual under 18. They include the ‘Be Aware’ national awareness campaign, aimed at educating families on safe internet practices; the Secure Net programme, developed in partnership with Omantel and UNICEF to offer parental control features; and the Safe Net service, designed to protect users from online threats such as viruses and phishing attacks.
Through these efforts, the TRA is committed to promoting a safe and responsible digital environment for children in Oman. By addressing the growing challenges of internet usage among minors, the authority aims to foster a culture of awareness and security that empowers families and protects the well-being of the younger generation in the digital landscape.
TikTok is facing multiple lawsuits from 13 US states and the District of Columbia, accusing the platform of harming and failing to protect young users. The lawsuits, filed in New York, California, and other states, allege that TikTok uses intentionally addictive software to maximise user engagement and profits, particularly targeting children who lack the ability to set healthy boundaries around screen time.
California Attorney General Rob Bonta condemned TikTok for fostering social media addiction to boost corporate profits, while New York Attorney General Letitia James connected the platform to mental health issues among young users. Washington D.C. Attorney General Brian Schwalb further accused TikTok of operating an unlicensed money transmission service through its live streaming and virtual currency features and claimed that the platform enables the sexual exploitation of minors.
TikTok, in response, denied the allegations and expressed disappointment in the legal action taken, arguing that the states should collaborate on solutions instead. The company pointed to safety measures, such as screen time limits and privacy settings for users under 16.
These lawsuits are part of a broader set of legal challenges TikTok is facing, including a prior lawsuit from the US Justice Department over children’s privacy violations. The company is also dealing with efforts to ban the app in the US due to concerns about its Chinese ownership.
An Australian court upheld an order on Friday requiring Elon Musk’s X, formerly known as Twitter, to pay a fine of A$610,500 ($418,000) for failing to cooperate with a regulator’s request regarding its anti-child-abuse practices. X had contested the fine, but the Federal Court of Australia determined that the company was obligated to respond to a notice from the eSafety Commissioner, which sought information about measures to combat child sexual exploitation material on the platform.
Musk’s company claimed it was not obligated to respond to the notice due to its integration into a new corporate entity under his control, which it argued eliminated its liability. However, eSafety Commissioner Julie Inman Grant cautioned that accepting this argument could set a troubling precedent, enabling foreign companies to evade regulatory responsibilities in Australia through corporate restructuring. Alongside the fine, eSafety has also launched civil proceedings against X for noncompliance.
This is not the first confrontation between Musk and Australia’s internet safety regulator. Earlier this year, the eSafety Commissioner ordered X to take down posts showing a bishop being stabbed during a sermon. X contested the order in court, claiming that a regulator in one country should not control global content visibility. Ultimately, X retained the posts after the Australian regulator withdrew its case. Musk labelled the order as censorship and claimed it was part of a larger agenda by the World Economic Forum to impose global eSafety regulations.
Texas Attorney General Ken Paxton has filed a lawsuit against TikTok, accusing the platform of violating children’s privacy laws. The lawsuit alleges that TikTok shared personal information of minors without parental consent, in breach of Texas’s Securing Children Online through Parental Empowerment Act (SCOPE Act).
The legal action seeks an injunction and civil penalties, with fines up to $10,000 per violation. Paxton claims TikTok failed to provide adequate privacy tools for children and allowed data to be shared from accounts set to private. Targeted advertising to children was also a concern raised in the lawsuit.
TikTok’s parent company, ByteDance, is accused of prioritising profits over child safety. Paxton stressed the importance of holding large tech companies accountable for their role in protecting minors online.
The case was filed in Galveston County court; TikTok has yet to comment on the matter. The lawsuit reflects broader concerns about protecting children’s online privacy in the digital age.
Children who are chronically ill and unable to attend school can now stay connected to the classroom using the AV1 robot, developed by the Norwegian company No Isolation. This innovative technology serves as their eyes and ears, allowing them to follow lessons and interact with friends remotely. Controlled via an app, the robot sits on a classroom desk, enabling students to rotate its view, speak to classmates, and even signal when they want to participate.
The AV1 has been especially valuable for children undergoing long-term treatment or experiencing mental health challenges, helping them maintain a connection with their peers and stay socially included. In the UK, schools can rent or purchase the AV1, and the robot has been widely adopted there and in Germany, where over 1,000 units are active. For many students, it has become a lifeline during extended absences from school.
Though widely praised, there are logistical challenges in introducing the AV1 to schools and hospitals, including administrative hurdles and technical issues like weak Wi-Fi. Despite these obstacles, teachers and families have found the robot to be highly effective, with privacy protections and features tailored to students’ needs, including the option to avoid showing their face on screen.
Research has highlighted the AV1’s potential to keep children both socially and academically connected, and No Isolation has rolled out a training resource, AV1 Academy, to support teachers and schools in using the technology effectively. With its user-friendly design and robust privacy features, the AV1 continues to make a positive impact on the lives of children facing illness and long absences from school.