Meta Platforms and its CEO, Mark Zuckerberg, successfully defended against a lawsuit claiming the company misled shareholders about child safety on Facebook and Instagram. A US federal judge dismissed the case on Tuesday.
Judge Charles Breyer ruled that the plaintiff, Matt Eisner, failed to demonstrate that shareholders experienced financial harm due to Meta’s disclosures. He stated that federal law does not require companies to reveal all decisions regarding child safety measures or focus on their shortcomings.
Eisner had sought to delay Meta’s 2024 annual meeting and void its election results unless the company revised its proxy statement. However, the judge emphasised that many of Meta’s commitments in its proxy materials were aspirational and not legally binding. His dismissal, issued with prejudice, prevents Eisner from filing the same case again.
Meta still faces legal challenges from state attorneys general and hundreds of lawsuits from children, parents, and schools, accusing the company of fostering social media addiction. Other platforms, such as TikTok and Snapchat, also confront similar legal actions.
Meta Platforms is facing a lawsuit in Massachusetts for allegedly designing Instagram features to exploit teenagers’ vulnerabilities, causing addiction and harming their mental health. A Suffolk County judge rejected Meta’s attempt to dismiss the case, ruling that the claims under state consumer protection law remain valid.
The company argued for immunity under Section 230 of the Communications Decency Act, which shields internet firms from liability for user-generated content. However, the judge ruled that this protection does not extend to Meta’s own business conduct or misleading statements about Instagram’s safety measures.
Massachusetts Attorney General Andrea Joy Campbell emphasised that the ruling allows the state to push for accountability and meaningful changes to safeguard young users. Meta expressed disagreement, maintaining that its efforts demonstrate a commitment to supporting young people.
The lawsuit highlights internal data suggesting that Instagram’s design is addictive, driven by features like push notifications and endless scrolling. It also claims Meta executives, including CEO Mark Zuckerberg, dismissed internal research indicating that changes were needed to improve teenage users’ well-being.
US federal prosecutors are ramping up efforts to tackle the use of AI tools in creating child sexual abuse images, as they fear the technology could lead to a rise in illegal content. The Justice Department has already pursued two cases this year against individuals accused of using generative AI to produce explicit images of minors. James Silver, chief of the Department’s Computer Crime and Intellectual Property Section, anticipates more cases, cautioning against the normalisation of AI-generated abuse material.
Child safety advocates and prosecutors worry that AI systems can alter ordinary photos of children to produce abusive content, making it more challenging to identify and protect actual victims. The National Center for Missing and Exploited Children reports approximately 450 cases each month involving AI-generated abuse material. While this number is small compared to the millions of online child exploitation reports received, it represents a concerning trend in the misuse of technology.
The legal framework is still evolving regarding cases involving AI-generated abuse, particularly when identifiable children are not depicted. Prosecutors are resorting to obscenity charges when traditional child pornography laws do not apply. This is evident in the case of Steven Anderegg, accused of using Stable Diffusion to create explicit images. Similarly, US Army soldier Seth Herrera faces child pornography charges for allegedly using AI chatbots to alter innocent photos into abusive content. Both defendants have pleaded not guilty.
Nonprofit groups like Thorn and All Tech Is Human are working with major tech companies, including Google, Amazon, Meta, OpenAI, and Stability AI, to prevent AI models from generating abusive content and to monitor their platforms. Thorn’s vice president, Rebecca Portnoff, emphasised that the issue is not just a future risk but a current problem, urging action during this critical period to prevent its escalation.
Starting in December, Britain’s media regulator Ofcom will set out new safety requirements for social media platforms, compelling them to take action against illegal content. Under the new guidelines, tech companies will have three months to assess the risks of harmful content or face consequences, including hefty fines or even having their services blocked. These requirements stem from the Online Safety Act, passed last year to protect users, particularly children, from harmful content.
Ofcom Chief Executive Melanie Dawes emphasised that the time for discussion is over and that 2025 will be pivotal for making the internet a safer space. Platforms such as Meta, the parent company of Facebook and Instagram, have already introduced changes to limit risks such as children being contacted by strangers. However, the regulator has made it clear that companies failing to meet the new standards will face strict penalties.
The Australian government is moving toward a social media ban for younger users, sparking concerns among youth and experts about potential negative impacts on vulnerable communities. The proposed restrictions, intended to combat issues such as addiction and online harm, may sever vital social connections for teens from migrant, LGBTQIA+, and other minority backgrounds.
Refugee youth like 14-year-old Tereza Hussein, who relies on social media to connect with distant family, fear the policy will cut off essential lifelines. Experts argue that banning platforms could increase mental health struggles, especially for teens already managing anxiety or isolation. Youth advocates are calling for better content moderation instead of blanket bans.
The Australian government aims to trial age verification as a first step, though the specific platforms and age limits remain unclear. Similar attempts elsewhere, including in France and the US, have struggled as tech-savvy users bypass restrictions through virtual private networks (VPNs).
Prime Minister Anthony Albanese has promoted the idea, highlighting parents’ desire for children to be more active offline. Critics, however, suggest the ban reflects outdated nostalgia, with experts cautioning that social media plays a crucial role in the daily lives of young people today. Legislation is expected by the end of the year.
A federal judge in California has ruled that Meta must face lawsuits from several US states alleging that Facebook and Instagram contribute to mental health problems among teenagers. The states argue that Meta’s platforms are deliberately designed to be addictive, harming young users. Over 30 states, including California, New York, and Florida, filed these lawsuits last year.
Judge Yvonne Gonzalez Rogers rejected Meta’s attempt to dismiss the cases, though she did narrow some claims. Section 230, the US law that gives online platforms broad legal protections, shields Meta from certain accusations. However, the judge found enough evidence to allow the lawsuits to proceed, enabling the plaintiffs to gather further evidence and pursue a potential trial.
The decision also impacts personal injury cases filed by individual users against Meta, TikTok, YouTube, and Snapchat. Meta is the only company named in the state lawsuits, with plaintiffs seeking damages and changes to allegedly harmful business practices. California Attorney General Rob Bonta welcomed the ruling, stating that Meta should be held accountable for the harm it has caused to young people.
Meta disagrees with the decision, insisting it has developed tools to support parents and teenagers, such as the new Teen Accounts on Instagram. Google also rejected the allegations, saying its efforts to create a safer online experience for young people remain a priority. Many other lawsuits across the US accuse social media platforms of fuelling anxiety, depression, and body-image concerns through addictive algorithms.
Oman’s Telecommunications Regulatory Authority (TRA) has launched several initiatives to protect children’s internet usage, responding to alarming statistics revealing that nearly 86% of children in the Sultanate use the internet, with 43.5% using it for information searches and 34% for entertainment and communication. Recognising that a substantial portion of this demographic spends considerable time online, the authority is actively pursuing a proposed law to regulate children’s internet activities.
The initiative aligns with the ITU’s definition of a child and with Oman’s Child Protection Law No. 22/2014, which defines a child as anyone under 18. The measures include the ‘Be Aware’ national awareness campaign, which educates families on safe internet practices; the Secure Net program, developed in partnership with Omantel and UNICEF to offer parental control features; and the Safe Net service, designed to protect users from online threats such as viruses and phishing attacks.
Through these efforts, the TRA is committed to promoting a safe and responsible digital environment for children in Oman. By addressing the growing challenges of internet usage among minors, the authority aims to foster a culture of awareness and security that empowers families and protects the well-being of the younger generation in the digital landscape.
Turkey has blocked access to the messaging platform Discord after the company refused to share information requested by the government. A court in Ankara issued the decision, citing concerns over child sexual abuse and obscene content being shared by users on the platform. The Information and Communication Technologies Authority confirmed the ban.
The action follows outrage after a 19-year-old in Istanbul murdered two women, with Discord users allegedly praising the incident online. Justice Minister Yilmaz Tunc explained that there was sufficient suspicion of illegal activity linked to the platform, which prompted the court to intervene.
Transport Minister Abdulkadir Uraloglu added that monitoring platforms like Discord is difficult, as security forces can only act when users report content. Discord’s refusal to provide data, such as IP addresses, further complicated the situation, leading to the decision to block the service.
The ban in Turkey coincides with a similar action in Russia, where Discord was blocked for violating local laws after failing to remove prohibited content. The platform has faced growing scrutiny over its handling of illegal activity.
TikTok is facing multiple lawsuits from 13 US states and the District of Columbia, accusing the platform of harming and failing to protect young users. The lawsuits, filed in New York, California, and other states, allege that TikTok uses intentionally addictive software to maximise user engagement and profits, particularly targeting children who lack the ability to set healthy boundaries around screen time.
California Attorney General Rob Bonta condemned TikTok for fostering social media addiction to boost corporate profits, while New York Attorney General Letitia James connected the platform to mental health issues among young users. Washington D.C. Attorney General Brian Schwalb further accused TikTok of operating an unlicensed money transmission service through its live streaming and virtual currency features and claimed that the platform enables the sexual exploitation of minors.
TikTok, in response, denied the allegations and expressed disappointment in the legal action taken, arguing that the states should collaborate on solutions instead. The company pointed to safety measures, such as screen time limits and privacy settings for users under 16.
These lawsuits are part of a broader set of legal challenges TikTok is facing, including a prior lawsuit from the US Justice Department over children’s privacy violations. The company is also dealing with efforts to ban the app in the US over concerns about its Chinese ownership.
An Australian court upheld an order on Friday requiring Elon Musk’s X to pay a fine of A$610,500 ($418,000) for not cooperating with a regulator’s request regarding anti-child-abuse practices. X had contested the fine, but the Federal Court of Australia determined that the company was obligated to respond to a notice from the eSafety Commissioner, which sought information about measures to combat child sexual exploitation material on the platform.
Musk’s company argued that it was not obligated to respond to the notice because its merger into a new corporate entity under his control had extinguished its liability. However, eSafety Commissioner Julie Inman Grant cautioned that accepting this argument could set a troubling precedent, enabling foreign companies to evade regulatory responsibilities in Australia through corporate restructuring. Alongside the fine, eSafety has also launched civil proceedings against X for noncompliance.
This is not the first confrontation between Musk and Australia’s internet safety regulator. Earlier this year, the eSafety Commissioner ordered X to take down posts showing a bishop being stabbed during a sermon. X contested the order in court, claiming that a regulator in one country should not control global content visibility. Ultimately, X retained the posts after the Australian regulator withdrew its case. Musk labelled the order as censorship and claimed it was part of a larger agenda by the World Economic Forum to impose global eSafety regulations.