Growing demand for AI-generated child abuse material on the dark web

According to new research conducted by Anglia Ruskin University, there is rising interest among online offenders in learning how to create AI-generated child sexual abuse material, as evidenced by interactions on the dark web. The finding comes from an analysis of chats on dark web forums over the past 12 months, where group members were found to be teaching one another how to create child sexual abuse material using online guides and videos and exchanging advice.

Members of these forums have been gathering their own supplies of non-AI content to learn how to make these images. Researchers Dr Deanna Davy and Prof Sam Lundrigan also found that some members referred to those who created AI images as ‘artists’, while others hoped the technology would soon become capable enough to make the process easier.

Why does it matter?

This trend has serious ramifications for child safety. Dr Davy said that AI-generated child sexual abuse material warrants a greater understanding of how offenders create and share such content, especially among police and public protection agencies. Professor Lundrigan added that the trend ‘adds to the growing global threat of online child abuse in all forms and must be viewed as a critical area to address in our response to this type of crime’.

Man who used AI to create indecent images of children faces jail

In a groundbreaking case in the UK, a 27-year-old man named Hugh Nelson has admitted to using AI technology to create indecent images of children, a crime for which he is expected to be jailed. Nelson pleaded guilty to multiple charges at Bolton Crown Court, including attempting to incite a minor into sexual activity, distributing and making indecent images, and publishing obscene content. His sentencing is scheduled for 25 September.

The case, described by Greater Manchester Police (GMP) as ‘deeply horrifying,’ marks the first instance in the region—and possibly nationally—where AI technology was used to transform ordinary photographs of children into indecent images. Detective Constable Carly Baines, who led the investigation, emphasised the global reach of Nelson’s crimes, noting that arrests and safeguarding measures have been implemented in various locations worldwide.

Authorities hope this case will influence future legislation, as the use of AI in such offences is not yet fully addressed by current UK laws. The Crown Prosecution Service highlighted the severity of the crime, warning that the misuse of emerging technologies to generate abusive imagery could lead to an increased risk of actual child abuse.

FTC sues TikTok over child privacy violations

The Federal Trade Commission (FTC), supported by the Department of Justice (DOJ), has filed a lawsuit against TikTok and its parent company ByteDance for violating children’s privacy laws. The lawsuit claims that TikTok breached the Children’s Online Privacy Protection Act (COPPA) by failing to notify and obtain parental consent before collecting data from children under 13. The case also alleges that TikTok did not adhere to a 2019 FTC consent order regarding the same issue.

According to the complaint, TikTok collected personal data from underage users without proper parental consent, using this information to target ads and build user profiles. Despite knowing these practices violated COPPA, ByteDance and TikTok allowed children to use the platform by bypassing age restrictions. Even when parents requested account deletions, TikTok made the process difficult and often did not comply.

FTC Chair Lina M. Khan stated that TikTok’s actions jeopardised the safety of millions of children, and the FTC is determined to protect kids from such violations. The DOJ emphasised the importance of upholding parental rights to safeguard children’s privacy.

The lawsuit seeks civil penalties against ByteDance and TikTok and a permanent injunction to prevent future COPPA violations. The case will be heard by the US District Court for the Central District of California.

US Senate approves major online child safety reforms

The US Senate has passed significant online child safety reforms in a near-unanimous vote, but the fate of these bills remains uncertain in the House of Representatives. The two pieces of legislation, known as the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) and the Kids Online Safety Act (KOSA), aim to protect minors from targeted advertising and unauthorised data collection while also enabling parents and children to delete their information from social media platforms. The Senate’s bipartisan approval, with a vote of 91-3, marks a critical step towards enhancing online safety for minors.

COPPA 2.0 and KOSA have sparked mixed reactions within the tech industry. While platforms like Snap and X have shown support for KOSA, Meta Platforms and TikTok executives have expressed reservations. Critics, including the American Civil Liberties Union and certain tech industry groups, argue that the bills could limit minors’ access to essential information on topics such as vaccines, abortion, and LGBTQ issues. Despite amendments to address these concerns, some, like Senator Ron Wyden, remain unconvinced of the bills’ efficacy and are concerned about their potential impact on vulnerable groups.

The high economic stakes are highlighted by a Harvard study indicating that top US social media platforms generated approximately $11 billion in advertising revenue from users under 18 in 2022. Advocates for the bills, such as Maurine Molak of ParentsSOS, view the Senate vote as a historic milestone in protecting children online. However, the legislation’s future hinges on its passage in the Republican-controlled House, which is currently in recess until September.

English school reprimanded for facial recognition misuse

Chelmer Valley High School in Essex, United Kingdom, has been formally reprimanded by the UK’s data protection regulator, the ICO, for using facial recognition technology without obtaining proper consent from students. The school began using the technology for cashless lunch payments in March 2023 but failed to carry out a required data protection impact assessment before implementation. Additionally, the school used an opt-out system for consent, contrary to UK GDPR regulations, which require clear affirmative action.

The incident has reignited the debate over the use of biometric data in schools. The ICO’s action echoes a similar situation from 2021, when schools in Scotland faced scrutiny for using facial recognition for lunch payments. Sweden was the first to issue a GDPR fine for using facial recognition in a school in 2019, highlighting the growing global concern over privacy and biometric data in educational settings.

Mark Johnson of Big Brother Watch criticised the use of facial recognition, emphasising that children should not be treated like ‘walking bar-codes’ and should be taught to protect their personal data. The ICO chose to issue a public reprimand rather than a fine, recognising that this was the school’s first offence and that public institutions require a different approach from private companies.

The ICO stressed the importance of proper data handling, especially in environments involving children, and urged organisations to prioritise data protection when introducing new technologies. Lynne Currie of the ICO emphasised the need for schools to comply with data protection laws to maintain trust and safeguard children’s privacy rights.

AI tools create realistic child abuse images, says report

A report from the Internet Watch Foundation (IWF) has exposed a disturbing misuse of AI to generate deepfake child sexual abuse images based on real victims. While the tools used to create these images remain legal in the UK, the images themselves are illegal. The case of a victim, referred to as Olivia, exemplifies the issue. Abused between the ages of three and eight, Olivia was rescued in 2023, but dark web users are now employing AI tools to create new abusive images of her, with one model available for free download.

The IWF report also reveals an anonymous dark web page with links to AI models for 128 child abuse victims. Offenders are compiling collections of images of named victims, such as Olivia, and using them to fine-tune AI models to create new material. Additionally, the report mentions models that can generate abusive images of celebrity children. Analysts found that 90% of these AI-generated images are realistic enough to fall under the same laws as real child sexual abuse material, highlighting the severity of the problem.

Indian data protection law under fire for inadequate child online safety measures

India’s data protection law, the Digital Personal Data Protection Act (DPDPA), must hold platforms accountable for child safety, according to a panel discussion hosted by the Citizen Digital Foundation (CDF). The webinar, ‘With Alice, Down the Rabbit Hole’, explored the challenges of online child safety and age assurance in India, highlighting the significant threat posed by subversive content and online threats to children.

Nidhi Sudhan, the panel moderator, criticised tech companies for paying lip service to child safety while employing engagement-driven algorithms that can be harmful to children. YouTube was highlighted as a major concern, with CDF researcher Aditi Pillai noting the issues with its algorithms. Dhanya Krishnakumar, a journalist and parent, emphasised the difficulty of imposing age verification without causing additional harm, such as peer pressure and cyberbullying, and stressed the need for open discussions to improve digital literacy.

Aparajita Bharti, co-founder of the Quantum Hub and Young Leaders for Active Citizenship (YLAC), argued that India requires a different approach from the West, as many parents lack the resources to ensure online child safety. Arnika Singh, co-founder of Social & Media Matters, pointed out that India’s diversity necessitates context-specific solutions, rather than one-size-fits-all policies.

The panel called for better accountability from tech platforms and more robust measures within the DPDPA. Nivedita Krishnan, director of law firm Pacta, warned that the DPDPA’s requirement for parental consent could unfairly burden parents with accountability for their children’s online activities. Chitra Iyer, co-founder and CEO of consultancy Space2Grow, highlighted the need for platforms to prioritise user safety over profit. Arnika Singh concluded that the DPDPA requires stronger enforcement mechanisms and should consider international models for better regulation.

FTC bans NGL app from minors, issues $5 million fine for cyberbullying exploits

The US Federal Trade Commission (FTC) and the Los Angeles District Attorney’s Office have banned the anonymous messaging app NGL from serving children under 18 due to rampant cyberbullying and threats.

The FTC’s latest action, part of a broader crackdown on companies mishandling consumer data or making exaggerated AI claims, also requires NGL to pay $5 million and implement age restrictions to prevent minors from using the app. NGL, which marketed itself as a safe space for teens, was found to have exploited its young users by sending them fake, anonymous messages designed to prey on their social anxieties.

The app then charged users for information about the senders, often providing only vague hints. The FTC lawsuit, which names NGL’s co-founders, highlights the app’s deceptive practices and its failure to protect users. The case is also a notable example of FTC Chair Lina Khan’s focus on regulating digital data and holding companies accountable for AI-related misconduct.

The FTC’s action is part of a larger effort to protect children online, with states like New York and Florida also passing laws to limit minors’ access to social media. Regulatory pushes like this one aim to address growing concerns about the impact of social media on children’s mental health.

US Supreme Court declines Snapchat case

The US Supreme Court decided not to review a case involving a Texas teenager who sued Snapchat, alleging the platform did not adequately protect him from sexual abuse by a teacher. The minor, known as Doe, accused Snap Inc. of negligence for failing to safeguard young users from sexual predators, particularly a teacher who exploited him via the app. Bonnie Guess-Mazock, the teacher involved, was convicted of sexually assaulting the teenager.

Lower courts dismissed the lawsuit, citing Section 230 of the Communications Decency Act, which shields internet companies from liability for content posted by users. With the Supreme Court declining to hear the case, Snapchat retains its protection under this law. Justices Clarence Thomas and Neil Gorsuch expressed concerns about the broad immunity granted to social media platforms under Section 230.

Why does this matter?

The case has sparked wider debate about the responsibilities of tech companies in preventing such abuses and whether laws like Section 230 should be revised to hold them more accountable for content on their platforms. Both US political parties have called for reforms to ensure internet companies can be held liable when their platforms are used for harmful activities.

US DoJ to file lawsuit against TikTok for alleged children’s privacy violations

The US Department of Justice (DoJ) will sue ByteDance’s TikTok again later this year in a consumer protection lawsuit focusing on alleged children’s privacy violations. The legal action is being brought on behalf of the Federal Trade Commission (FTC), but the DoJ will not pursue allegations that TikTok misled US consumers about data security, specifically dropping claims that the company failed to inform users that China-based employees could access their personal and financial information.

The decision suggests that the primary focus will now be on how TikTok handles children’s privacy. The FTC had referred a complaint against TikTok and its parent, ByteDance, to the DoJ, stating that its investigation found evidence suggesting the company may be breaking the Children’s Online Privacy Protection Act. The federal act requires apps and websites aimed at kids to obtain parental consent before collecting personal information from children under 13.

Simultaneously, TikTok and ByteDance are challenging a US law that aims to ban the popular short video app in the United States starting from 19 January next year.