In a landmark case for AI and criminal justice, a UK man has been sentenced to 18 years in prison for using AI to create child sexual abuse material (CSAM). Hugh Nelson, 27, from Bolton, used an app called Daz 3D to turn regular photos of children into exploitative 3D imagery, according to reports. In several cases, he created these images based on photographs provided by individuals who personally knew the children involved.
Nelson sold the AI-generated images on various online forums, reportedly making around £5,000 (approximately $6,500) over an 18-month period. His activities were uncovered when he attempted to sell one of his digital creations to an undercover officer, charging £80 (about $103) per image.
Following his arrest, Nelson faced multiple charges, including encouraging the rape of a child, attempting to incite a minor to engage in sexual acts, and distributing illegal images. The case is significant because it highlights the darker side of generative AI and underscores the growing need for regulation of technology-enabled abuse.
A Florida mother is suing the AI chatbot startup Character.AI, alleging it played a role in her 14-year-old son’s suicide by fostering an unhealthy attachment. Megan Garcia claims her son Sewell became ‘addicted’ to Character.AI and formed an emotional dependency on one of its chatbots, which allegedly represented itself as both a psychotherapist and a romantic partner, contributing to his mental distress.
According to the lawsuit filed in Orlando, Florida, Sewell shared suicidal thoughts with the chatbot, which reportedly reintroduced these themes in later conversations. Garcia argues the platform’s realistic nature and hyper-personalised interactions led her son to isolate himself, suffer from low self-esteem, and ultimately feel unable to live outside of the world the chatbot created.
Character.AI offered condolences and noted it has since implemented additional safety features, such as prompts for users expressing self-harm thoughts, to improve protection for younger users. Garcia’s lawsuit also names Google, alleging it extensively contributed to Character.AI’s development, although Google denies involvement in the product’s creation.
The lawsuit is part of a wider trend of legal claims against tech companies by parents concerned about the impact of online services on teenage mental health. While Character.AI, with an estimated 20 million users, faces unique claims regarding its AI-powered chatbot, other platforms such as TikTok, Instagram, and Facebook are also under scrutiny.
Meta Platforms and its CEO, Mark Zuckerberg, successfully defended against a lawsuit claiming the company misled shareholders about child safety on Facebook and Instagram. A US federal judge dismissed the case on Tuesday.
Judge Charles Breyer ruled that the plaintiff, Matt Eisner, failed to demonstrate that shareholders suffered financial harm as a result of Meta’s disclosures. He stated that federal law does not require companies to reveal every decision regarding child safety measures or to highlight the shortcomings of those measures.
Eisner had sought to delay Meta’s 2024 annual meeting and void its election results unless the company revised its proxy statement. However, the judge emphasised that many of Meta’s commitments in its proxy materials were aspirational rather than legally binding. Because the dismissal was issued with prejudice, Eisner cannot file the same case again.
Meta still faces legal challenges from state attorneys general and hundreds of lawsuits from children, parents, and schools, accusing the company of fostering social media addiction. Other platforms, such as TikTok and Snapchat, also confront similar legal actions.
Meta Platforms is facing a lawsuit in Massachusetts for allegedly designing Instagram features to exploit teenagers’ vulnerabilities, causing addiction and harming their mental health. A Suffolk County judge rejected Meta’s attempt to dismiss the case, ruling that claims under the state’s consumer protection law can proceed.
The company argued for immunity under Section 230 of the Communications Decency Act, which shields internet firms from liability for user-generated content. However, the judge ruled that this protection does not extend to Meta’s own business conduct or misleading statements about Instagram’s safety measures.
Massachusetts Attorney General Andrea Joy Campbell emphasised that the ruling allows the state to push for accountability and meaningful changes to safeguard young users. Meta expressed disagreement, maintaining that its efforts demonstrate a commitment to supporting young people.
The lawsuit highlights internal data suggesting Instagram’s addictive design, driven by features like push notifications and endless scrolling. It also claims Meta executives, including CEO Mark Zuckerberg, dismissed concerns raised by research indicating the need for changes to improve teenage users’ well-being.
US federal prosecutors are ramping up efforts to tackle the use of AI tools in creating child sexual abuse images, as they fear the technology could lead to a rise in illegal content. The Justice Department has already pursued two cases this year against individuals accused of using generative AI to produce explicit images of minors. James Silver, chief of the Department’s Computer Crime and Intellectual Property Section, anticipates more cases, cautioning against the normalisation of AI-generated abuse material.
Child safety advocates and prosecutors worry that AI systems can alter ordinary photos of children to produce abusive content, making it more challenging to identify and protect actual victims. The National Center for Missing and Exploited Children reports approximately 450 cases each month involving AI-generated abuse. While this number is small compared to the millions of online child exploitation reports received, it represents a concerning trend in the misuse of technology.
The legal framework is still evolving regarding cases involving AI-generated abuse, particularly when identifiable children are not depicted. Prosecutors are resorting to obscenity charges when traditional child pornography laws do not apply. This is evident in the case of Steven Anderegg, accused of using Stable Diffusion to create explicit images. Similarly, US Army soldier Seth Herrera faces child pornography charges for allegedly using AI chatbots to alter innocent photos into abusive content. Both defendants have pleaded not guilty.
Nonprofit groups like Thorn and All Tech Is Human are working with major tech companies, including Google, Amazon, Meta, OpenAI, and Stability AI, to prevent AI models from generating abusive content and to monitor their platforms. Thorn’s vice president, Rebecca Portnoff, emphasised that the issue is not just a future risk but a current problem, urging action during this critical period to prevent its escalation.
Starting in December, Britain’s media regulator Ofcom will outline new safety demands for social media platforms, compelling them to take action against illegal content. Under the new guidelines, tech companies will have three months to assess the risks of harmful content or face consequences, including hefty fines or even having their services blocked. These demands stem from the Online Safety Act, passed last year to protect users, particularly children, from harmful content.
Ofcom Chief Executive Melanie Dawes emphasised that the time for discussion is over and that 2025 will be pivotal for making the internet a safer space. Platforms such as Meta, the parent company of Facebook and Instagram, have already introduced changes to limit risks such as children being contacted by strangers. However, the regulator has made it clear that any companies failing to meet the new standards will face strict penalties.
The Australian government is moving toward a social media ban for younger users, sparking concerns among youth and experts about potential negative impacts on vulnerable communities. The proposed restrictions, intended to combat issues such as addiction and online harm, may sever vital social connections for teens from migrant, LGBTQIA+, and other minority backgrounds.
Refugee youth like 14-year-old Tereza Hussein, who relies on social media to connect with distant family, fear the policy will cut off essential lifelines. Experts argue that banning platforms could increase mental health struggles, especially for teens already managing anxiety or isolation. Youth advocates are calling for better content moderation instead of blanket bans.
The Australian government aims to trial age verification as a first step, though the specific platforms and age limits remain unclear. Similar attempts elsewhere, including in France and the US, have faced challenges, with tech-savvy users bypassing restrictions through virtual private networks (VPNs).
Prime Minister Anthony Albanese has promoted the idea, highlighting parents’ desire for children to be more active offline. Critics, however, suggest the ban reflects outdated nostalgia, with experts cautioning that social media plays a crucial role in the daily lives of young people today. Legislation is expected by the end of the year.
A federal judge in California has ruled that Meta must face lawsuits from several US states alleging that Facebook and Instagram contribute to mental health problems among teenagers. The states argue that Meta’s platforms are deliberately designed to be addictive, harming young users. Over 30 states, including California, New York, and Florida, filed these lawsuits last year.
Judge Yvonne Gonzalez Rogers rejected Meta’s attempt to dismiss the cases, though she did limit some claims. Section 230 of the Communications Decency Act, which offers online platforms legal protections, shields Meta from certain accusations. However, the judge found enough evidence to allow the lawsuits to proceed, enabling the plaintiffs to gather further evidence and pursue a potential trial.
The decision also impacts personal injury cases filed by individual users against Meta, TikTok, YouTube, and Snapchat. Meta is the only company named in the state lawsuits, with plaintiffs seeking damages and changes to allegedly harmful business practices. California Attorney General Rob Bonta welcomed the ruling, stating that Meta should be held accountable for the harm it has caused to young people.
Meta disagrees with the decision, insisting it has developed tools to support parents and teenagers, such as the new Teen Accounts on Instagram. Google also rejected the allegations, saying its efforts to create a safer online experience for young people remain a priority. Many other lawsuits across the US accuse social media platforms of fuelling anxiety, depression, and body-image concerns through addictive algorithms.
Oman’s Telecommunications Regulatory Authority (TRA) has launched several initiatives to protect children online, responding to alarming statistics showing that nearly 86% of children in the Sultanate use the internet. Recognising that a substantial portion of this demographic spends considerable time online (43.5% use it for information searches and 34% for entertainment and communication), the authority is actively pursuing a proposed law to regulate children’s internet activities.
The initiative aligns with the ITU’s definition of a child and with Oman’s Child Protection Law No. 22/2014, which defines a child as an individual under 18. The measures include the ‘Be Aware’ national awareness campaign, which educates families on safe internet practices; the Secure Net programme, developed in partnership with Omantel and UNICEF to offer parental control features; and the Safe Net service, designed to protect users from online threats such as viruses and phishing attacks.
Through these efforts, the TRA is committed to promoting a safe and responsible digital environment for children in Oman. By addressing the growing challenges of internet usage among minors, the authority aims to foster a culture of awareness and security that empowers families and protects the well-being of the younger generation in the digital landscape.
Turkey has blocked access to the messaging platform Discord after the company refused to share information requested by the government. A court in Ankara issued the decision, citing concerns over child sexual abuse and obscene content being shared by users on the platform. The Information Technologies and Communication Authority confirmed the ban.
The action follows outrage after a 19-year-old in Istanbul murdered two women, with Discord users allegedly praising the incident online. Justice Minister Yilmaz Tunc explained that there was sufficient suspicion of illegal activity linked to the platform, which prompted the court to intervene.
Transport Minister Abdulkadir Uraloglu added that monitoring platforms like Discord is difficult, as security forces can only act when users report content. Discord’s refusal to provide data, such as IP addresses, further complicated the situation, leading to the decision to block the service.
The ban in Turkey coincides with a similar action in Russia, where Discord was blocked for violating local laws after failing to remove prohibited content. The platform has faced growing scrutiny over its handling of illegal activity.