New York to require parental consent for social media access

New York lawmakers are preparing to ban social media companies from using algorithms to control the content shown to young users without parental consent. The legislation, expected to be voted on this week, aims to shield minors from algorithm-driven feeds, and from notifications during overnight hours, unless parents approve. The move comes as social media platforms face increasing scrutiny over their addictive design and impact on young people’s mental health.

Earlier this year, New York City Mayor Eric Adams announced a lawsuit against major social media companies, including Facebook and Instagram, for allegedly contributing to a mental health crisis among youth. Other states have also taken action, with Florida recently passing a law requiring parental consent for minors aged 14 and 15 to use social media and banning those under 14 from accessing these platforms.

Why does it matter?

The trend started with Utah, which became the first state to regulate children’s social media access last year. States like Arkansas, Louisiana, Ohio, and Texas have since followed suit. The heightened regulation is affecting social media companies, with shares of Meta and Snap seeing a slight decline in extended trading.

X now officially allows adult content

X, formerly known as Twitter, has officially updated its rules to allow the posting of adult and graphic content. Users can now share consensually produced NSFW (not safe for work) content, including AI-generated images and videos, provided they are clearly labelled. This change is a formal acknowledgement of practices that have existed unofficially for years, especially under the platform’s current ownership by Elon Musk, who has been exploring ways to host and potentially monetise adult content.

The new guidelines emphasise that while adult content is permitted, it must be consensually produced and appropriately labelled to prevent unintended exposure, particularly to minors. X continues to prohibit excessively gory content and any depiction of sexual violence, in line with its existing violent content policies. The platform also requires users to mark posts containing sensitive media, ensuring such content is visible only to users over 18 who have provided their date of birth.

This move opens the door for X to potentially develop services around adult content, possibly positioning itself as a competitor to platforms like OnlyFans. The prevalence of adult content on X has been significant, with about 13% of posts in 2022 containing such material, a figure that has likely increased with the proliferation of porn bots. Regulatory bodies will closely monitor X’s efforts to manage and eliminate non-consensual porn and child sexual abuse material (CSAM), especially following past fines and warnings from countries like Australia and India.

FBI charges man with creating AI-generated child abuse material

A Wisconsin man, Steven Anderegg, has been charged by the FBI with creating over 10,000 sexually explicit and abusive images of children using AI. The 42-year-old allegedly used the popular AI tool Stable Diffusion to generate around 13,000 hyper-realistic images depicting prepubescent children in disturbing and explicit scenarios. Authorities discovered the images on his laptop following a tip-off from the National Center for Missing & Exploited Children (NCMEC), which had flagged his Instagram activity.

Anderegg’s charges include creating, distributing, and possessing child sexual abuse material (CSAM), as well as sending explicit content to a minor. If convicted, he faces up to 70 years in prison. The case marks one of the first instances in which the FBI has charged someone for generating AI-created child abuse material. The rise in such cases has prompted significant concern among child safety advocates and AI researchers, who warn of the increasing potential for AI to facilitate the creation of harmful content.

Reports of online child abuse have surged, partly due to the proliferation of AI-generated material. In 2023, the NCMEC noted a 12% increase in flagged incidents, straining its resources. The Department of Justice has reaffirmed its commitment to prosecuting those who exploit AI to create CSAM, emphasising that AI-generated explicit content is equally punishable under the law.

Stable Diffusion, an open-source AI model, has been identified as a tool used to generate such material. Stability AI, the company behind its development, has stated that the model used by Anderegg was an earlier version created by another startup, RunwayML. Stability AI asserts that it has since implemented stronger safeguards to prevent misuse and prohibits creating illegal content with its tools.

AI-generated child images on social media attract disturbing attention

AI-generated images of young girls, some appearing as young as five, are spreading on TikTok and Instagram and drawing inappropriate comments from a troubling audience made up largely of older men, a Forbes investigation has found. The images depict children in provocative outfits and, while not illegal, are highly sexualised; child safety experts warn that they can pave the way for more severe exploitation.

The concern is acute because TikTok and Instagram, both popular with minors, are struggling to address the issue. One popular account, “Woman With Chopsticks,” had 80,000 followers and viral posts viewed nearly half a million times across both platforms. A recent Stanford study also revealed that the AI tool Stable Diffusion 1.5 was trained on data that included child sexual abuse material (CSAM) involving real children, collected from various online sources.

Under federal law, tech companies must report suspected CSAM and exploitation to the National Center for Missing and Exploited Children (NCMEC), which then informs law enforcement. However, they are not required to remove the type of images discussed here. Nonetheless, NCMEC believes that social media companies should remove these images, even if they exist in a legal grey area.

TikTok and Instagram assert that they have strict policies against AI-generated content involving minors in order to protect young people. TikTok bans such content depicting anyone under 18, while Meta removes material that sexualises or exploits children, whether real or AI-generated. Both platforms removed the accounts and posts identified by Forbes. Still, despite these policies, the ease of creating and sharing AI-generated images is likely to remain a significant challenge for safeguarding children online.

Why does it matter?

The Forbes story reveals that such content, made increasingly easy to find by powerful recommendation algorithms, worsens online child exploitation by acting as a gateway to the exchange of more severe material and by facilitating networking among offenders. In one example from the investigation, a 13 January TikTok slideshow of young girls in pyjamas drew comments in which users moved to private messages. The Canadian Centre for Child Protection stressed that companies need to look beyond automated moderation to address how these images are shared and who follows them.

EU launches investigation into Facebook and Instagram over child safety

EU regulators announced on Thursday that Meta Platforms’ social media services, Facebook and Instagram, will be investigated for potential violations of EU online content rules on child safety, which could result in significant fines. The scrutiny follows the EU’s implementation of the Digital Services Act (DSA) last year, which places greater responsibility on tech companies to address illegal and harmful content on their platforms.

The European Commission has expressed concerns that Facebook and Instagram have not adequately addressed risks to children, prompting an in-depth investigation. Issues highlighted include the potential for the platforms’ systems and algorithms to promote behavioural addictions among children and facilitate access to inappropriate content, leading to what the Commission refers to as ‘rabbit-hole effects’. Additionally, concerns have been raised regarding Meta’s age assurance and verification methods.

Why does it matter?

Meta, formerly known as Facebook, is already under EU scrutiny over election disinformation, particularly concerning the upcoming European Parliament elections. Violations of the DSA can result in fines of up to 6% of a company’s annual global turnover, indicating how seriously EU regulators are treating these issues. Meta’s response to the investigation and any subsequent actions will be closely monitored as the EU seeks to enforce stricter rules on tech giants to protect online users, especially children, from harm.

OpenAI considers allowing AI-generated pornography

OpenAI is sparking debate by considering the possibility of allowing users to generate explicit content, including pornography, using its AI-powered tools like ChatGPT and DALL-E. While maintaining a ban on deepfakes, OpenAI’s proposal has raised concerns among campaigners who question its commitment to producing ‘safe and beneficial’ AI. The company sees potential for ‘not-safe-for-work’ (NSFW) content creation but stresses the importance of responsible usage and adherence to legal and ethical standards.

The proposal, outlined in a document discussing OpenAI’s AI development practices, aims to initiate discussions about the boundaries of content generation within its products. Joanne Jang, an OpenAI employee, stressed the need for maximum user control while ruling out deepfake creation. Despite acknowledging the importance of discussions around sexuality and nudity, OpenAI maintains strong safeguards against deepfakes and prioritises protecting users, particularly children.

Critics, however, have accused OpenAI of straying from its mission statement of developing safe and beneficial AI by delving into potentially harmful commercial endeavours like AI erotica. Concerns about the spread of AI-generated pornography have been underscored by recent incidents, prompting calls for tighter regulation and ethical considerations in the tech sector. While OpenAI’s policies prohibit sexually explicit content, questions remain about the effectiveness of safeguards and the company’s approach to handling sensitive content creation.

Why does it matter?

As discussions unfold, stakeholders, including lawmakers, experts, and campaigners, closely scrutinise OpenAI’s proposal and its potential implications for online safety and ethical AI development. With growing concerns about the misuse of AI technology, the debate surrounding OpenAI’s stance on explicit content generation highlights broader challenges in balancing innovation, responsibility, and societal well-being in the digital age.

Tech firms urged to implement child safety measures in UK

Social media platforms such as Facebook, Instagram, and TikTok face proposed measures in the UK to modify their algorithms and better safeguard children from harmful content. These measures, outlined by regulator Ofcom, are part of the broader Online Safety Act and include implementing robust age checks to shield children from harmful material related to sensitive topics like suicide, self-harm, and pornography.

Ofcom’s Chief Executive, Melanie Dawes, has underscored the situation’s urgency, emphasising the necessity of holding tech firms accountable for protecting children online. She asserts that platforms must reconfigure aggressive algorithms that push harmful content to children and incorporate age verification mechanisms.

Social media companies’ use of complex algorithms to curate content has raised serious concerns, as these algorithms often amplify harmful material that can negatively influence children. The proposed measures seek to address this by urging platforms to reconfigure their algorithmic systems to prioritise child safety and give children a safer online experience tailored to their age.

UK’s Technology Secretary, Michelle Donelan, called for social media platforms to engage with regulators and proactively implement these measures, cautioning against waiting for enforcement and potential fines. After a consultation, Ofcom plans to finalise its Children’s Safety Codes of Practice within a year, with anticipated enforcement actions, including penalties for non-compliance, once parliament approves.

Kyrgyzstan blocks TikTok over child protection concerns

Kyrgyzstan has banned TikTok following security service recommendations to safeguard children. The decision comes amid growing global scrutiny over the social media app’s impact on children’s mental health and data privacy.

The Kyrgyz digital ministry cited ByteDance’s failure to comply with child protection laws, sparking concerns from advocacy groups about arbitrary censorship. The decision reflects Kyrgyzstan’s broader trend of tightening control over media and civil society, departing from its relatively open stance.

Meanwhile, TikTok continues to face scrutiny worldwide over its data policies and alleged connections to the Chinese government.

Why does it matter?

This decision stems from legislation approved last summer aimed at curbing the distribution of ‘harmful’ online content accessible to minors. Such content encompasses material featuring ‘non-traditional sexual relationships’, material that undermines ‘family values’, and material promoting illegal conduct, substance abuse, or anti-social behaviour. Chinese officials have not publicly commented on the decision, although in March, Beijing accused the US of ‘bullying’ over similar actions against TikTok.

UK bans sex offender from AI tools after child abuse conviction

A convicted sex offender in the UK has been banned from using ‘AI-creating tools’ for five years, marking the first known case of its kind. Anthony Dover, 48, received the prohibition as part of a sexual harm prevention order, preventing him from accessing AI generation tools without prior police permission. This includes text-to-image generators and ‘nudifying’ websites used to produce explicit deepfake content.

Dover’s case highlights the increasing concern over the proliferation of AI-generated sexual abuse imagery, prompting government action. The UK recently introduced a new offence making it illegal to create sexually explicit deepfakes of adults without consent, with penalties including prosecution and unlimited fines. The move aims to address the evolving landscape of digital exploitation and safeguard individuals from the misuse of advanced technology.

Charities and law enforcement agencies emphasise the urgent need for collaboration to combat the spread of AI-generated abuse material. Recent prosecutions reveal a growing trend of offenders exploiting AI tools to create highly realistic and harmful content. The Internet Watch Foundation (IWF) and the Lucy Faithfull Foundation (LFF) stress the importance of targeting both offenders and tech companies to prevent the production and dissemination of such material.

Why does it matter?

The decision to restrict an adult sex offender’s access to AI tools sets a precedent for future monitoring and prevention measures. While the specific reasons for Dover’s ban remain unclear, it underscores the broader effort to mitigate the risks posed by digital advancements in sexual exploitation. Law enforcement agencies are increasingly adopting proactive measures to address emerging threats and protect vulnerable individuals from harm in the digital age.

European Commission gives TikTok 24 hours to provide risk assessment of TikTok Lite

European regulators have demanded a risk assessment from TikTok within 24 hours regarding its new app, TikTok Lite, recently launched in France and Spain. The European Commission, acting under the Digital Services Act (DSA), is concerned about potential impacts on children and users’ mental health. This action follows an investigation into TikTok, opened two months ago, for potential breaches of EU tech rules.

Thierry Breton, the EU industry chief, emphasised the need for TikTok to conduct a risk assessment before launching the app in the 27-country EU. The DSA requires platforms to take stronger actions against illegal and harmful content, with penalties of up to 6% of their global annual turnover for violations. Breton likened the potentially addictive and toxic nature of ‘social media lite’ to ‘cigarettes light,’ underlining the commitment to protecting minors under the DSA.

TikTok Lite, targeted at users aged 18+, includes a ‘Task and Reward Lite’ programme that allows users to earn points by engaging in specific platform activities. These points can be redeemed for rewards like Amazon vouchers, PayPal gift cards, or TikTok coins for tipping creators. The Commission expressed concerns about the app’s impact on minors and users’ mental health, particularly potential addictive behaviours.

Why does it matter?

TikTok has been directed to provide the requested risk assessment for TikTok Lite within 24 hours and additional information by 26 April. The Commission will analyse TikTok’s response and determine the next steps. TikTok has acknowledged the request for information and stated that it is in direct contact with the Commission regarding this matter. Additionally, the Commission has asked for details on measures implemented by TikTok to mitigate systemic risks associated with the new app.