In a groundbreaking case in the UK, a 27-year-old man named Hugh Nelson has admitted to using AI technology to create indecent images of children, a crime for which he is expected to be jailed. Nelson pleaded guilty to multiple charges at Bolton Crown Court, including attempting to incite a minor into sexual activity, distributing and making indecent images, and publishing obscene content. His sentencing is scheduled for 25 September.
The case, described by Greater Manchester Police (GMP) as ‘deeply horrifying,’ marks the first instance in the region—and possibly nationally—where AI technology was used to transform ordinary photographs of children into indecent images. Detective Constable Carly Baines, who led the investigation, emphasised the global reach of Nelson’s crimes, noting that arrests and safeguarding measures have been implemented in various locations worldwide.
Authorities hope this case will influence future legislation, as the use of AI in such offences is not yet fully addressed by current UK laws. The Crown Prosecution Service highlighted the severity of the crime, warning that the misuse of emerging technologies to generate abusive imagery could lead to an increased risk of actual child abuse.
Türkiye has blocked access to the popular kids’ gaming platform Roblox due to concerns over content that could potentially lead to child abuse. Justice Minister Yilmaz Tunc announced the decision on X, citing a legal ruling based on a law regulating internet broadcasting. He emphasised the state’s constitutional duty to protect children and condemned the harmful use of technology.
Because it hosts content that could lead to the abuse of children, the gaming platform Roblox and its links in app stores have, within the scope of an investigation conducted by the Adana Chief Public Prosecutor’s Office, under the “Regulation of Broadcasts Made on the Internet and These Broadcasts…
The ban on Roblox is the latest in a series of measures targeting internet platforms in Türkiye. Recently, Instagram faced similar restrictions after a senior aide to President Recep Tayyip Erdogan accused the social media platform of censoring posts related to the death of Hamas political leader Ismail Haniyeh.
Roblox has not yet responded to requests for comment regarding the ban. The company has been grappling with challenges related to keeping inappropriate content off its platform, including issues involving paedophiles.
The development highlights the ongoing tension between the Turkish government and digital platforms as authorities enforce stringent measures to control online content and protect vulnerable users.
The Federal Trade Commission (FTC), supported by the Department of Justice (DOJ), has filed a lawsuit against TikTok and its parent company ByteDance for violating children’s privacy laws. The lawsuit claims that TikTok breached the Children’s Online Privacy Protection Act (COPPA) by failing to notify and obtain parental consent before collecting data from children under 13. The case also alleges that TikTok did not adhere to a 2019 FTC consent order regarding the same issue.
According to the complaint, TikTok collected personal data from underage users without proper parental consent, using this information to target ads and build user profiles. Despite knowing these practices violated COPPA, ByteDance and TikTok allowed children to use the platform by bypassing age restrictions. Even when parents requested account deletions, TikTok made the process difficult and often did not comply.
FTC Chair Lina M. Khan stated that TikTok’s actions jeopardised the safety of millions of children, and the FTC is determined to protect kids from such violations. The DOJ emphasised the importance of upholding parental rights to safeguard children’s privacy.
The lawsuit seeks civil penalties against ByteDance and TikTok and a permanent injunction to prevent future COPPA violations. The case will be heard by the US District Court for the Central District of California.
The US Senate has passed significant online child safety reforms in a near-unanimous vote, but the fate of these bills remains uncertain in the House of Representatives. The two pieces of legislation, known as the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) and the Kids Online Safety Act (KOSA), aim to protect minors from targeted advertising and unauthorised data collection while also enabling parents and children to delete their information from social media platforms. The Senate’s bipartisan approval, with a vote of 91-3, marks a critical step towards enhancing online safety for minors.
COPPA 2.0 and KOSA have sparked mixed reactions within the tech industry. While platforms like Snap and X have shown support for KOSA, Meta Platforms and TikTok executives have expressed reservations. Critics, including the American Civil Liberties Union and certain tech industry groups, argue that the bills could limit minors’ access to essential information on topics such as vaccines, abortion, and LGBTQ issues. Despite amendments to address these concerns, some, like Senator Ron Wyden, remain unconvinced of the bills’ efficacy and worried about their potential impact on vulnerable groups.
The high economic stakes are highlighted by a Harvard study indicating that top US social media platforms generated approximately $11 billion in advertising revenue from users under 18 in 2022. Advocates for the bills, such as Maurine Molak of ParentsSOS, view the Senate vote as a historic milestone in protecting children online. However, the legislation’s future hinges on its passage in the Republican-controlled House, which is currently in recess until September.
Meta’s Oversight Board has criticised the company’s rules on sexually explicit AI-generated depictions of real people, stating they are ‘not sufficiently clear.’ That follows the board’s review of two pornographic deepfakes of famous women posted on Meta’s Facebook and Instagram platforms. The board found that both images violated Meta’s policy against ‘derogatory sexualised photoshop,’ which is considered bullying and harassment and should have been promptly removed.
In one case involving an Indian public figure, Meta failed to act on a user report within 48 hours, leading to an automatic ticket closure. The image was only removed after the board intervened. In contrast, Meta’s systems automatically took down the image of an American celebrity. The board recommended that Meta clarify its rules to cover a broader range of editing techniques, including generative AI. It criticised the company for not adding the Indian woman’s image to a database for automatic removals.
Meta has stated it will review the board’s recommendations and update its policies accordingly. The board emphasised the importance of removing harmful content to protect those impacted, noting that many victims of deepfake intimate images are not public figures and struggle to manage the spread of non-consensual depictions.
The US Senate has unanimously passed the DEFIANCE Act, allowing victims of nonconsensual intimate images created by AI, known as deepfakes, to sue their creators for damages. The bill enables victims to pursue civil remedies against those who produced or distributed sexually explicit deepfakes with malicious intent. Victims identifiable in these deepfakes can receive up to $150,000 in damages and up to $250,000 if linked to sexual assault, stalking, or harassment.
The legislative move follows high-profile incidents, such as AI-generated explicit images of Taylor Swift appearing on social media and similar cases affecting high school girls across the country. Senate Majority Leader Chuck Schumer emphasised the widespread impact of malicious deepfakes, highlighting the urgent need for protective measures.
Schumer described the DEFIANCE Act as part of broader efforts to implement AI safeguards to prevent significant harm. He called on the House to pass the bill, which has a companion bill awaiting consideration. Schumer assured victims that the government is committed to addressing the issue and protecting individuals from the abuses of AI technology.
Chelmer Valley High School in Essex, United Kingdom, has been formally reprimanded by the UK’s data protection regulator, the ICO, for using facial recognition technology without obtaining proper consent from students. The school began using the technology for cashless lunch payments in March 2023 but failed to carry out a required data protection impact assessment before implementation. Additionally, the school used an opt-out system for consent, contrary to UK GDPR regulations, which require clear affirmative action.
The incident has reignited the debate over the use of biometric data in schools. The ICO’s action echoes a similar situation from 2021, when schools in Scotland faced scrutiny for using facial recognition for lunch payments. Sweden was the first to issue a GDPR fine for using facial recognition in a school in 2019, highlighting the growing global concern over privacy and biometric data in educational settings.
Mark Johnson of Big Brother Watch criticised the use of facial recognition, emphasising that children should not be treated like ‘walking bar-codes’ and should be taught to protect their personal data. The ICO chose to issue a public reprimand rather than a fine, recognising that this was the school’s first offence and that public institutions call for a different approach than private companies.
The ICO stressed the importance of proper data handling, especially in environments involving children, and urged organisations to prioritise data protection when introducing new technologies. Lynne Currie of the ICO emphasised the need for schools to comply with data protection laws to maintain trust and safeguard children’s privacy rights.
A report from the Internet Watch Foundation (IWF) has exposed a disturbing misuse of AI to generate deepfake child sexual abuse images based on real victims. While the tools used to create these images remain legal in the UK, the images themselves are illegal. The case of a victim, referred to as Olivia, exemplifies the issue. Abused between the ages of three and eight, Olivia was rescued in 2023, but dark web users are now employing AI tools to create new abusive images of her, with one model available for free download.
The IWF report also reveals an anonymous dark web page with links to AI models for 128 child abuse victims. Offenders are compiling collections of images of named victims, such as Olivia, and using them to fine-tune AI models to create new material. Additionally, the report mentions models that can generate abusive images of celebrity children. Analysts found that 90% of these AI-generated images are realistic enough to fall under the same laws as real child sexual abuse material, highlighting the severity of the problem.
A recent study has revealed that AI chatbots pose significant risks to children, who often view them as lifelike and trustworthy. Dr Nomisha Kurian from the University of Cambridge calls for urgent action to prioritise ‘child-safe AI’ in the development of these technologies.
Kurian’s research highlights incidents where AI chatbots provided harmful advice to children, such as Amazon’s Alexa instructing a child to touch a live electrical plug and Snapchat’s My AI giving tips on losing virginity.
These cases underscore the ‘empathy gap’ in AI, where chatbots fail to respond appropriately to children’s unique needs and vulnerabilities.
The study proposes a 28-item framework to help developers create safer AI by working closely with educators and child safety experts. Kurian argues that AI has great potential if designed responsibly, but proactive measures are essential to protect young users.
India’s data protection law, the Digital Personal Data Protection Act (DPDPA), must hold platforms accountable for child safety, according to a panel discussion hosted by the Citizen Digital Foundation (CDF). The webinar, ‘With Alice, Down the Rabbit Hole’, explored the challenges of online child safety and age assurance in India, highlighting the significant threat posed by subversive content and online threats to children.
Nidhi Sudhan, the panel moderator, criticised tech companies for paying lip service to child safety while employing engagement-driven algorithms that can be harmful to children. YouTube was highlighted as a major concern, with CDF researcher Aditi Pillai noting the issues with its algorithms. Dhanya Krishnakumar, a journalist and parent, emphasised the difficulty of imposing age verification without causing additional harm, such as peer pressure and cyberbullying, and stressed the need for open discussions to improve digital literacy.
Aparajita Bharti, co-founder of the Quantum Hub and Young Leaders for Active Citizenship (YLAC), argued that India requires a different approach from the West, as many parents lack the resources to ensure online child safety. Arnika Singh, co-founder of Social & Media Matters, pointed out that India’s diversity necessitates context-specific solutions, rather than one-size-fits-all policies.
The panel called for better accountability from tech platforms and more robust measures within the DPDPA. Nivedita Krishnan, director of law firm Pacta, warned that the DPDPA’s requirement for parental consent could unfairly burden parents with accountability for their children’s online activities. Chitra Iyer, co-founder and CEO of consultancy Space2Grow, highlighted the need for platforms to prioritise user safety over profit. Arnika Singh concluded that the DPDPA requires stronger enforcement mechanisms and should consider international models for better regulation.