Meta oversight board calls for clearer rules on AI-generated pornography

Meta’s Oversight Board has criticised the company’s rules on sexually explicit AI-generated depictions of real people as ‘not sufficiently clear.’ The criticism follows the board’s review of two pornographic deepfakes of famous women posted on Facebook and Instagram. The board found that both images violated Meta’s rule against ‘derogatory sexualised photoshop’, which falls under its bullying and harassment policy, and should have been removed promptly.

In one case involving an Indian public figure, Meta failed to act on a user report within 48 hours, so the ticket was closed automatically; the image was removed only after the board intervened. In contrast, Meta’s systems automatically took down the image of an American celebrity. The board recommended that Meta clarify its rules to cover a broader range of editing techniques, including generative AI, and criticised the company for not adding the Indian woman’s image to a database used for automatic removals.

Meta has stated it will review the board’s recommendations and update its policies accordingly. The board emphasised the importance of removing harmful content to protect those impacted, noting that many victims of deepfake intimate images are not public figures and struggle to manage the spread of non-consensual depictions.

US Senate passes bill to combat AI deepfakes

The US Senate has unanimously passed the DEFIANCE Act, which allows victims of nonconsensual, AI-generated intimate images, known as deepfakes, to sue their creators for damages. The bill enables victims to pursue civil remedies against those who produced or distributed sexually explicit deepfakes with malicious intent. Identifiable victims can receive up to $150,000 in damages, rising to $250,000 where the deepfake is connected to sexual assault, stalking, or harassment.

The legislative move follows high-profile incidents, such as AI-generated explicit images of Taylor Swift appearing on social media and similar cases affecting high school girls across the country. Senate Majority Leader Chuck Schumer emphasised the widespread impact of malicious deepfakes, highlighting the urgent need for protective measures.

Schumer described the DEFIANCE Act as part of broader efforts to implement AI safeguards and prevent significant harm. He called on the House, where a companion bill awaits consideration, to pass the legislation. Schumer assured victims that the government is committed to addressing the issue and protecting individuals from the abuses of AI technology.

English school reprimanded for facial recognition misuse

Chelmer Valley High School in Essex, United Kingdom, has been formally reprimanded by the UK’s data protection regulator, the Information Commissioner’s Office (ICO), for using facial recognition technology without obtaining proper consent from students. The school began using the technology for cashless lunch payments in March 2023 but failed to carry out the required data protection impact assessment beforehand. It also relied on an opt-out system for consent, contrary to the UK GDPR, which requires consent to be given through clear affirmative action.

The incident has reignited the debate over the use of biometric data in schools. The ICO’s action echoes a similar case from 2021, when schools in Scotland faced scrutiny for using facial recognition for lunch payments. In 2019, Sweden issued the first GDPR fine for the use of facial recognition in a school, highlighting growing global concern over privacy and biometric data in educational settings.

Mark Johnson of Big Brother Watch criticised the use of facial recognition, arguing that children should not be treated like ‘walking bar-codes’ and should be taught to protect their personal data. The ICO chose to issue a public reprimand rather than a fine, noting that this was the school’s first offence and that public institutions warrant a different approach from private companies.

The ICO stressed the importance of proper data handling, especially in environments involving children, and urged organisations to prioritise data protection when introducing new technologies. Lynne Currie of the ICO emphasised the need for schools to comply with data protection laws to maintain trust and safeguard children’s privacy rights.

AI tools create realistic child abuse images, says report

A report from the Internet Watch Foundation (IWF) has exposed a disturbing misuse of AI to generate deepfake child sexual abuse images based on real victims. While the tools used to create these images remain legal in the UK, the images themselves are illegal. The case of a victim, referred to as Olivia, exemplifies the issue. Abused between the ages of three and eight, Olivia was rescued in 2023, but dark web users are now employing AI tools to create new abusive images of her, with one model available for free download.

The IWF report also reveals an anonymous dark web page with links to AI models for 128 child abuse victims. Offenders are compiling collections of images of named victims, such as Olivia, and using them to fine-tune AI models to create new material. Additionally, the report mentions models that can generate abusive images of celebrity children. Analysts found that 90% of these AI-generated images are realistic enough to fall under the same laws as real child sexual abuse material, highlighting the severity of the problem.

Cambridge researcher urges child-safe AI development

A recent study has revealed that AI chatbots pose significant risks to children, who often view them as lifelike and trustworthy. Dr Nomisha Kurian from the University of Cambridge calls for urgent action to prioritise ‘child-safe AI’ in the development of these technologies.

Kurian’s research highlights incidents where AI chatbots provided harmful advice to children, such as Amazon’s Alexa instructing a child to touch a live electrical plug and Snapchat’s My AI giving tips on losing virginity.

These cases underscore the ‘empathy gap’ in AI, where chatbots fail to respond appropriately to children’s unique needs and vulnerabilities.

The study proposes a 28-item framework to help developers create safer AI by working closely with educators and child safety experts. Kurian argues that AI has great potential if designed responsibly, but proactive measures are essential to protect young users.

Indian data protection law under fire for inadequate child online safety measures

India’s data protection law, the Digital Personal Data Protection Act (DPDPA), must hold platforms accountable for child safety, according to a panel discussion hosted by the Citizen Digital Foundation (CDF). The webinar, ‘With Alice, Down the Rabbit Hole’, explored the challenges of online child safety and age assurance in India, highlighting the significant risks that subversive content and other online threats pose to children.

Nidhi Sudhan, the panel moderator, criticised tech companies for paying lip service to child safety while employing engagement-driven algorithms that can be harmful to children. YouTube was highlighted as a major concern, with CDF researcher Aditi Pillai noting the issues with its algorithms. Dhanya Krishnakumar, a journalist and parent, emphasised the difficulty of imposing age verification without causing additional harm, such as peer pressure and cyberbullying, and stressed the need for open discussions to improve digital literacy.

Aparajita Bharti, co-founder of the Quantum Hub and Young Leaders for Active Citizenship (YLAC), argued that India requires a different approach from the West, as many parents lack the resources to ensure online child safety. Arnika Singh, co-founder of Social & Media Matters, pointed out that India’s diversity necessitates context-specific solutions, rather than one-size-fits-all policies.

The panel called for better accountability from tech platforms and more robust measures within the DPDPA. Nivedita Krishnan, director of law firm Pacta, warned that the DPDPA’s requirement for parental consent could unfairly burden parents with accountability for their children’s online activities. Chitra Iyer, co-founder and CEO of consultancy Space2Grow, highlighted the need for platforms to prioritise user safety over profit. Arnika Singh concluded that the DPDPA requires stronger enforcement mechanisms and should consider international models for better regulation.

FTC bans NGL app from minors, issues $5 million fine for cyberbullying exploits

The US Federal Trade Commission (FTC) and the Los Angeles District Attorney’s Office have banned the anonymous messaging app NGL from serving children under 18 due to rampant cyberbullying and threats.

The FTC’s latest action, part of a broader crackdown on companies mishandling consumer data or making exaggerated AI claims, also requires NGL to pay $5 million and implement age restrictions to prevent minors from using the app. NGL, which marketed itself as a safe space for teens, was found to have exploited its young users by sending them fake, anonymous messages designed to prey on their social anxieties.

The app then charged users for information about the senders, often providing only vague hints. The FTC lawsuit, which names NGL’s co-founders, highlights the app’s deceptive practices and its failure to protect users. The case against NGL is a notable example of FTC Chair Lina Khan’s focus on regulating digital data and holding companies accountable for AI-related misconduct.

The FTC’s action is part of a larger effort to protect children online, with states like New York and Florida also passing laws to limit minors’ access to social media. Regulatory pushes like this one aim to address growing concerns about the impact of social media on children’s mental health.

US Supreme Court declines Snapchat case

The US Supreme Court decided not to review a case involving a Texas teenager who sued Snapchat, alleging the platform did not adequately protect him from sexual abuse by a teacher. The minor, known as Doe, accused Snap Inc. of negligence for failing to safeguard young users from sexual predators, particularly a teacher who exploited him via the app. Bonnie Guess-Mazock, the teacher involved, was convicted of sexually assaulting the teenager.

Lower courts dismissed the lawsuit, citing Section 230 of the Communications Decency Act, which shields internet companies from liability for content posted by users. With the Supreme Court declining to hear the case, Snapchat retains its protection under this law. Justices Clarence Thomas and Neil Gorsuch expressed concerns about the broad immunity granted to social media platforms under Section 230.

Why does this matter?

The case has sparked wider debate about the responsibilities of tech companies in preventing such abuses and whether laws like Section 230 should be revised to hold them more accountable for content on their platforms. Both US political parties have called for reforms to ensure internet companies can be held liable when their platforms are used for harmful activities.

Australia asks internet companies to help regulate online access for minors

Australia has given internet-related companies six months to develop enforceable codes aimed at preventing children from accessing pornography and other inappropriate content online, the eSafety Commissioner announced on Tuesday. To outline its expectations, the regulator published a policy paper to guide the industry codes. Preliminary drafts of the codes are due by 3 October 2024, and final codes must be submitted by 19 December 2024.

These codes will complement the government’s current drive on online content policy and safety. “We […] need industry to play their part by putting in some effective barriers to protect children,” said eSafety Commissioner Julie Inman Grant. The companies concerned range from dating apps and social media platforms to search engines and games.

The codes will centre primarily on pornography but will also cover themes of suicide and serious health issues such as self-harm and eating disorders. Potential measures to protect children from explicit content include age verification systems, default parental controls, and software to blur or filter inappropriate material.

The regulator specified that the measures will not block access entirely. “We know kids will always be curious and will likely seek out porn as they enter adolescence and explore their sexuality, so, many of these measures are really focused on preventing unintentional exposure to young children.” Australia has previously decided against the use of age verification for pornographic or adult content websites.

Australia already has a number of online safety codes, many of which were developed through similar consultations with NGOs and civil society actors. Spokespeople for Google and Meta have said they will continue to engage with the commissioner on the development of regulation and safety codes.

US billionaire aims to acquire TikTok to challenge Big Tech dominance

Frank McCourt, a US real estate billionaire, aims to acquire TikTok to combat the negative influence of major tech platforms on society. Known for owning the Los Angeles Dodgers and Olympique de Marseille, McCourt has been vocal about the harm these platforms inflict, particularly on children. Speaking at the Collision tech conference in Toronto, he emphasised the manipulative nature of social media algorithms, linking them to societal chaos and political polarisation.

McCourt’s concern stems from the detrimental impact of social media on mental health, especially among children; he cites rising anxiety, depression, and youth suicides. His solution is a ‘new internet’ built on an open-source, decentralised protocol where users control their own data, a vision he calls Project Liberty. Acquiring TikTok, with its vast base of young users, would significantly advance this initiative. Project Liberty has garnered support from internet pioneer Tim Berners-Lee and NYU professor Jonathan Haidt.

The acquisition bid comes amid US government pressures on TikTok to divest from Chinese ownership due to national security concerns. While the future of TikTok’s ownership remains uncertain, McCourt hopes this situation will raise awareness about data privacy issues across all platforms, emphasising the need for user control over personal data to preserve democratic values.