Calls for ‘digital vaccination’ of children to combat fake news

A recently published report by the University of Sheffield and its research partners proposes a ‘digital vaccination’ for children to combat misinformation. The report sets out recommendations for digital upskilling and innovative approaches to closing the digital divide that limits the opportunities of millions of children in the UK.

The authors warn of severe economic and educational consequences if these issues go unaddressed, highlighting that over 40% of UK children lack access to broadband or a device and that digital skills shortages cost the UK £65 billion annually.

The report calls for adopting the Minimum Digital Living Standards framework to ensure every household has the digital infrastructure it needs. It also stresses the need for improved digital literacy education in schools, teacher training, and new government guidance to mitigate online risks, including fake news.

AI cats spark online controversy and curiosity – meet Chubby

A new phenomenon in the digital world has taken the internet by storm: AI-generated cats like Chubby are captivating millions with their peculiar and often heart-wrenching stories. Videos featuring these virtual felines, crafted by AI, depict them in bizarre and tragic situations, garnering immense views and engagement on platforms like TikTok and YouTube. Chubby, a rotund ginger cat, has become particularly iconic, with videos of his misadventures, from shoplifting to being jailed, resonating deeply with audiences across the globe.

These AI-generated cat stories are not just popular; they are controversial, blurring the line between art and digital spam. Content creators are leveraging AI tools to produce these videos rapidly, feeding social media algorithms that favour such content, which often leads to virality. Despite criticisms of the quality and intent behind this AI-generated content, it is clear that these videos are striking a chord with viewers, many of whom find themselves unexpectedly moved by the fictional plights of these digital cats.

The surge in AI-generated cat videos raises questions about the future of online content and the role of AI in shaping what we consume. While some see it as a disturbing trend, others argue that it represents a new form of digital art, with creators like Charles, the mastermind behind Chubby, believing that AI can indeed produce compelling and emotionally resonant material. The popularity of these videos, particularly those with tragic endings, suggests that there is a significant demand for this type of content.

As AI continues to evolve and integrate further into social media, the debate over the value and impact of AI-generated content is likely to intensify. Whether these videos will remain a staple of internet culture or fade as a passing trend remains to be seen. For now, AI-generated cats like Chubby are at the forefront of a fascinating and complex intersection between technology, art, and human emotion.

California’s child safety law faces legal setback

A US appeals court has upheld an essential aspect of an injunction against a California law designed to protect children from harmful online content. The law, known as the California Age-Appropriate Design Code Act, was challenged by NetChoice, a trade group representing major tech companies, which argued that it violated free speech rights under the First Amendment. The court agreed, ruling that the law’s requirement for companies to create detailed reports on potential risks to children was likely unconstitutional.

The court suggested that California could protect children through less restrictive means, such as enhancing education for parents and children about online dangers or offering incentives for companies to filter harmful content. The appeals court partially overturned a lower court’s injunction but sent the case back for further review, particularly concerning provisions related to the collection of children’s data.

California’s law, modelled after a similar UK law, was set to take effect in July 2024. Governor Gavin Newsom defended the law, emphasising the need for child safety and urging NetChoice to drop its legal challenge. Despite this, NetChoice hailed the court’s decision as a win for free speech and online security, highlighting the ongoing legal battle over online content regulation.

Growing demand for AI-generated child abuse material on the dark web

New research conducted by Anglia Ruskin University has found rising interest among online offenders in learning how to create AI-generated child sexual abuse material, as evidenced by their interactions on the dark web. The finding is based on an analysis of chats in dark web forums over the past 12 months, where group members were found to be teaching each other how to create child sexual abuse material using online guides and videos, and exchanging advice.

Members of these forums have also gathered supplies of non-AI content to use in learning how to make these images. Researchers Dr Deanna Davy and Prof Sam Lundrigan revealed that some members referred to those who created AI images as artists, while others hoped the technology would soon become capable enough to make the process easier.

Why does it matter?

This trend has serious ramifications for child safety. Dr Davy said that AI-generated child sexual abuse material demands a deeper understanding of how offenders create and share such content, especially among police and public protection agencies. Prof Lundrigan added that the trend ‘adds to the growing global threat of online child abuse in all forms and must be viewed as a critical area to address in our response to this type of crime’.

Man who used AI to create indecent images of children faces jail

In a groundbreaking case in the UK, a 27-year-old man named Hugh Nelson has admitted to using AI technology to create indecent images of children, a crime for which he is expected to be jailed. Nelson pleaded guilty to multiple charges at Bolton Crown Court, including attempting to incite a minor into sexual activity, distributing and making indecent images, and publishing obscene content. His sentencing is scheduled for 25 September.

The case, described by Greater Manchester Police (GMP) as ‘deeply horrifying,’ marks the first instance in the region—and possibly nationally—where AI technology was used to transform ordinary photographs of children into indecent images. Detective Constable Carly Baines, who led the investigation, emphasised the global reach of Nelson’s crimes, noting that arrests and safeguarding measures have been implemented in various locations worldwide.

Authorities hope this case will influence future legislation, as the use of AI in such offences is not yet fully addressed by current UK laws. The Crown Prosecution Service highlighted the severity of the crime, warning that the misuse of emerging technologies to generate abusive imagery could lead to an increased risk of actual child abuse.

FTC sues TikTok over child privacy violations

The Federal Trade Commission (FTC), supported by the Department of Justice (DOJ), has filed a lawsuit against TikTok and its parent company ByteDance for violating children’s privacy laws. The lawsuit claims that TikTok breached the Children’s Online Privacy Protection Act (COPPA) by failing to notify and obtain parental consent before collecting data from children under 13. The case also alleges that TikTok did not adhere to a 2019 FTC consent order regarding the same issue.

According to the complaint, TikTok collected personal data from underage users without proper parental consent, using this information to target ads and build user profiles. Despite knowing these practices violated COPPA, ByteDance and TikTok allowed children to use the platform by bypassing age restrictions. Even when parents requested account deletions, TikTok made the process difficult and often did not comply.

FTC Chair Lina M. Khan stated that TikTok’s actions jeopardised the safety of millions of children, and the FTC is determined to protect kids from such violations. The DOJ emphasised the importance of upholding parental rights to safeguard children’s privacy.

The lawsuit seeks civil penalties against ByteDance and TikTok and a permanent injunction to prevent future COPPA violations. The case will be reviewed by the US District Court for the Central District of California.

US Senate approves major online child safety reforms

The US Senate has passed significant online child safety reforms in a near-unanimous vote, but the fate of these bills remains uncertain in the House of Representatives. The two pieces of legislation, known as the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) and the Kids Online Safety Act (KOSA), aim to protect minors from targeted advertising and unauthorised data collection while also enabling parents and children to delete their information from social media platforms. The Senate’s bipartisan approval, with a vote of 91-3, marks a critical step towards enhancing online safety for minors.

COPPA 2.0 and KOSA have sparked mixed reactions within the tech industry. While platforms like Snap and X have shown support for KOSA, Meta Platforms and TikTok executives have expressed reservations. Critics, including the American Civil Liberties Union and certain tech industry groups, argue that the bills could limit minors’ access to essential information on topics such as vaccines, abortion, and LGBTQ issues. Despite amendments intended to address these concerns, some, like Senator Ron Wyden, remain unconvinced of the bills’ efficacy and worry about their potential impact on vulnerable groups.

The high economic stakes are highlighted by a Harvard study indicating that top US social media platforms generated approximately $11 billion in advertising revenue from users under 18 in 2022. Advocates for the bills, such as Maurine Molak of ParentsSOS, view the Senate vote as a historic milestone in protecting children online. However, the legislation’s future hinges on its passage in the Republican-controlled House, which is currently in recess until September.

English school reprimanded for facial recognition misuse

Chelmer Valley High School in Essex, United Kingdom, has been formally reprimanded by the UK’s data protection regulator, the Information Commissioner’s Office (ICO), for using facial recognition technology without obtaining proper consent from students. The school began using the technology for cashless lunch payments in March 2023 but failed to carry out a required data protection impact assessment before implementation. Additionally, the school used an opt-out system for consent, contrary to UK GDPR rules, which require clear affirmative action.

The incident has reignited the debate over the use of biometric data in schools. The ICO’s action echoes a similar situation from 2021, when schools in Scotland faced scrutiny for using facial recognition for lunch payments. Sweden was the first to issue a GDPR fine for using facial recognition in a school in 2019, highlighting the growing global concern over privacy and biometric data in educational settings.

Mark Johnson of Big Brother Watch criticised the use of facial recognition, emphasising that children should not be treated like ‘walking bar-codes’ and should be taught to protect their personal data. The ICO chose to issue a public reprimand rather than a fine, recognising that this was the school’s first offence and that public institutions require a different approach from private companies.

The ICO stressed the importance of proper data handling, especially in environments involving children, and urged organisations to prioritise data protection when introducing new technologies. Lynne Currie of the ICO emphasised the need for schools to comply with data protection laws to maintain trust and safeguard children’s privacy rights.

AI tools create realistic child abuse images, says report

A report from the Internet Watch Foundation (IWF) has exposed a disturbing misuse of AI to generate deepfake child sexual abuse images based on real victims. While the tools used to create these images remain legal in the UK, the images themselves are illegal. The case of a victim, referred to as Olivia, exemplifies the issue. Abused between the ages of three and eight, Olivia was rescued in 2023, but dark web users are now employing AI tools to create new abusive images of her, with one model available for free download.

The IWF report also reveals an anonymous dark web page with links to AI models for 128 child abuse victims. Offenders are compiling collections of images of named victims, such as Olivia, and using them to fine-tune AI models to create new material. Additionally, the report mentions models that can generate abusive images of celebrity children. Analysts found that 90% of these AI-generated images are realistic enough to fall under the same laws as real child sexual abuse material, highlighting the severity of the problem.

Indian data protection law under fire for inadequate child online safety measures

India’s data protection law, the Digital Personal Data Protection Act (DPDPA), must hold platforms accountable for child safety, according to a panel discussion hosted by the Citizen Digital Foundation (CDF). The webinar, ‘With Alice, Down the Rabbit Hole’, explored the challenges of online child safety and age assurance in India, highlighting the significant risks that subversive content and other online threats pose to children.

Nidhi Sudhan, the panel moderator, criticised tech companies for paying lip service to child safety while employing engagement-driven algorithms that can be harmful to children. YouTube was highlighted as a major concern, with CDF researcher Aditi Pillai noting the issues with its algorithms. Dhanya Krishnakumar, a journalist and parent, emphasised the difficulty of imposing age verification without causing additional harm, such as peer pressure and cyberbullying, and stressed the need for open discussions to improve digital literacy.

Aparajita Bharti, co-founder of the Quantum Hub and Young Leaders for Active Citizenship (YLAC), argued that India requires a different approach from the West, as many parents lack the resources to ensure online child safety. Arnika Singh, co-founder of Social & Media Matters, pointed out that India’s diversity necessitates context-specific solutions, rather than one-size-fits-all policies.

The panel called for better accountability from tech platforms and more robust measures within the DPDPA. Nivedita Krishnan, director of law firm Pacta, warned that the DPDPA’s requirement for parental consent could unfairly burden parents with accountability for their children’s online activities. Chitra Iyer, co-founder and CEO of consultancy Space2Grow, highlighted the need for platforms to prioritise user safety over profit. Arnika Singh concluded that the DPDPA requires stronger enforcement mechanisms and should consider international models for better regulation.