New York to require parental consent for social media access

New York lawmakers are preparing to bar social media companies from using algorithms to curate the content shown to minors without parental consent. The legislation, expected to be voted on this week, also aims to shield minors from automated feeds and from notifications during overnight hours unless parents approve. The move comes as social media platforms face increasing scrutiny over their addictive design and impact on young people’s mental health.

Earlier this year, New York City Mayor Eric Adams announced a lawsuit against major social media companies, including Facebook and Instagram, for allegedly contributing to a mental health crisis among youth. Similar actions have been taken by other states, with Florida recently passing a law requiring parental consent for minors aged 14 and 15 to use social media and banning those under 14 from accessing these platforms.

Why does it matter?

The trend started with Utah, which became the first state to regulate children’s social media access last year. States like Arkansas, Louisiana, Ohio, and Texas have since followed suit. The heightened regulation is affecting social media companies, with shares of Meta and Snap seeing a slight decline in extended trading.

FBI charges man with creating AI-generated child abuse material

A Wisconsin man, Steven Anderegg, has been charged by the FBI with creating thousands of sexually explicit and abusive images of children using AI. The 42-year-old allegedly used the popular AI tool Stable Diffusion to generate around 13,000 hyper-realistic images depicting prepubescent children in disturbing and explicit scenarios. Authorities discovered the images on his laptop following a tip-off from the National Center for Missing & Exploited Children (NCMEC), which had flagged his Instagram activity.

Anderegg’s charges include creating, distributing, and possessing child sexual abuse material (CSAM), as well as sending explicit content to a minor. If convicted, he faces up to 70 years in prison. The case marks one of the first instances in which the FBI has charged someone for generating AI-created child abuse material. The rise in such cases has prompted significant concern among child safety advocates and AI researchers, who warn of AI’s growing potential to facilitate the creation of harmful content.

Reports of online child abuse have surged, partly due to the proliferation of AI-generated material. In 2023, the NCMEC noted a 12% increase in flagged incidents, straining their resources. The Department of Justice has reaffirmed its commitment to prosecuting those who exploit AI to create CSAM, emphasising that AI-generated explicit content is equally punishable under the law.

Stable Diffusion, an open-source AI model, has been identified as a tool used to generate such material. Stability AI, the company behind its development, has stated that the model used by Anderegg was an earlier version created by another startup, RunwayML. Stability AI asserts that it has since implemented stronger safeguards to prevent misuse and prohibits creating illegal content with its tools.

AI-generated child images on social media attract disturbing attention

AI-generated images of young girls, some depicted as young as five, are spreading on TikTok and Instagram and drawing inappropriate comments from an audience made up largely of older men, a Forbes investigation has found. The images show children in provocative outfits and, while not illegal, are highly sexualised, prompting child safety experts to warn that they could pave the way to more severe exploitation.

The spread is alarming because platforms like TikTok and Instagram, both popular with minors, are struggling to address the issue. One popular account, “Woman With Chopsticks,” had 80,000 followers and viral posts viewed nearly half a million times across both platforms. A recent Stanford study also revealed that the AI tool Stable Diffusion 1.5 was trained on child sexual abuse material (CSAM) involving real children, collected from various online sources.

Under federal law, tech companies must report suspected CSAM and exploitation to the National Center for Missing and Exploited Children (NCMEC), which then informs law enforcement. However, they are not required to remove the type of images discussed here. Nonetheless, NCMEC believes that social media companies should remove these images, even if they exist in a legal grey area.

TikTok and Instagram assert that they have strict policies against AI-generated content involving minors, designed to protect young people. TikTok bans such content showing anyone under 18, while Meta removes material that sexualises or exploits children, whether real or AI-generated. Both platforms removed the accounts and posts identified by Forbes. Even so, the ease of creating and sharing AI-generated images remains a significant challenge for safeguarding children online.

Why does it matter?

The Forbes story reveals that such content, made increasingly easy to find by powerful recommendation algorithms, worsens online child exploitation by acting as a gateway to the exchange of more severe material and by facilitating networking among offenders. One TikTok slideshow of young girls in pyjamas, posted on 13 January and found by the investigation, showed users moving their exchanges to private messages. The Canadian Centre for Child Protection stressed that companies need to look beyond automated moderation and address how these images are shared and followed.

EU launches investigation into Facebook and Instagram over child safety

EU regulators announced on Thursday that Meta Platforms’ social media services, Facebook and Instagram, will be investigated for potential violations of EU online content rules on child safety, a probe that could result in significant fines. The scrutiny follows the EU’s implementation of the Digital Services Act (DSA) last year, which places greater responsibility on tech companies to address illegal and harmful content on their platforms.

The European Commission has expressed concerns that Facebook and Instagram have not adequately addressed risks to children, prompting an in-depth investigation. Issues highlighted include the potential for the platforms’ systems and algorithms to promote behavioural addictions among children and facilitate access to inappropriate content, leading to what the Commission refers to as ‘rabbit-hole effects’. Additionally, concerns have been raised regarding Meta’s age assurance and verification methods.

Why does it matter?

Meta, formerly known as Facebook, is already under EU scrutiny over election disinformation, particularly concerning the upcoming European Parliament elections. Violations of the DSA can result in fines of up to 6% of a company’s annual global turnover, indicating how seriously EU regulators are approaching these issues. Meta’s response to the investigation and any subsequent actions will be closely watched as the EU seeks to enforce stricter rules on tech giants to protect online users, especially children, from harm.

Tech firms urged to implement child safety measures in UK

Social media platforms such as Facebook, Instagram, and TikTok face proposed measures in the UK to modify their algorithms and better safeguard children from harmful content. These measures, outlined by regulator Ofcom, are part of the broader Online Safety Act and include implementing robust age checks to shield children from harmful material related to sensitive topics like suicide, self-harm, and pornography.

Ofcom’s Chief Executive, Melanie Dawes, has underscored the situation’s urgency, emphasising the necessity of holding tech firms accountable for protecting children online. She asserts that platforms must reconfigure aggressive algorithms that push harmful content to children and incorporate age verification mechanisms.

The use of complex algorithms by social media companies to curate content has raised serious concerns, as these algorithms often amplify harmful material and can negatively influence children. The proposed measures urge platforms to re-evaluate their algorithmic systems and prioritise child safety, giving children a safer online experience tailored to their age.

UK’s Technology Secretary, Michelle Donelan, called for social media platforms to engage with regulators and proactively implement these measures, cautioning against waiting for enforcement and potential fines. After a consultation, Ofcom plans to finalise its Children’s Safety Codes of Practice within a year, with anticipated enforcement actions, including penalties for non-compliance, once parliament approves.

UNICEF study finds video games can boost children’s well-being when properly designed

New research from UNICEF Innocenti’s Global Office of Research and Foresight, as part of the Responsible Innovation in Technology for Children (RITEC) project, suggests that video games can significantly enhance the well-being of children if designed thoughtfully.

This international collaboration, co-founded by UNICEF and the LEGO Group and funded by the LEGO Foundation, highlights that well-designed digital games can promote children’s autonomy, competence, creativity, identity, emotion regulation, and relationship building.

The study, conducted in partnership with the University of Sheffield, New York University, the City University of New York and the Queensland University of Technology, found that digital games offer children valuable experiences such as a sense of control, mastery, achievement, and the ability to explore personal and social identities. However, the positive impact of games depends on their ability to cater to children’s unique needs and desires.

As digital games evolve, the research advocates for designs prioritising young players’ safety, creativity, and emotional development, potentially redefining gaming’s role in nurturing future generations.

Why does it matter?

Traditionally, video games have been viewed with scepticism and often considered detrimental to children’s psychological and emotional development, especially because of their frequently addictive features. This new study offers a more nuanced perspective, prompting a reevaluation of how games are crafted and integrated into children’s lives rather than attempts to eliminate them altogether, a challenging and potentially counterproductive approach.

TikTok responds to EU concerns, suspends rewards in Lite app

TikTok has suspended the rewards functions in TikTok Lite, a new app catering to regions with slower internet speeds. The decision follows concerns raised by the European Commission regarding the app’s ‘Task and Reward Program,’ which incentivises user engagement with rewards such as Amazon vouchers and PayPal gift cards. In particular, the EU executive has highlighted worries about the programme’s potentially addictive effects, especially on children, given the app’s inadequate age verification mechanisms.

In response to the Commission’s concerns, TikTok stated its commitment to engaging constructively with regulators and suspended the rewards functions. However, Commissioner Thierry Breton emphasised that concerns about the platform’s addictiveness persist and that an investigation into TikTok Lite’s compliance with the Digital Services Act (DSA) is ongoing. The DSA, which came into force recently, regulates how online platforms handle illegal and harmful content, and TikTok falls under its jurisdiction as a very large online platform (VLOP).

Under the DSA, TikTok was required to conduct and submit a risk assessment before launching the Lite app. However, the Commission’s proceedings revealed TikTok’s initial failure to meet this requirement. Despite missing the initial deadline, TikTok eventually submitted the risk assessment, indicating compliance with the Commission’s demands. France’s digital minister and MEPs have welcomed TikTok’s suspension decision, signalling a positive response from the EU authorities regarding the company’s efforts to address regulatory concerns.

EU threatens TikTok Lite suspension over mental health concerns

The European Commission has warned TikTok that a key feature of TikTok Lite may be suspended in the European Union as early as Thursday if the company fails to address concerns regarding its impact on users’ mental health. The action is being taken under the EU’s Digital Services Act (DSA), which mandates that large online platforms act against harmful content or face fines of up to 6% of their global annual turnover.

Thierry Breton, the EU industry chief, emphasised the Commission’s readiness to implement interim measures, including suspending TikTok Lite, if TikTok does not provide compelling evidence of the feature’s safety. Breton highlighted concerns about potential addiction generated by TikTok Lite’s reward program.

TikTok has been given a 24-hour deadline to provide a risk assessment report on TikTok Lite, and until 3 May to supply additional requested information, in order to avoid penalties. TikTok has yet to respond to the Commission’s requests for comment.

The TikTok Lite app, recently launched in France and Spain, includes a reward program in which users earn points by completing specific tasks on the platform. However, TikTok did not submit a risk assessment report before the app’s launch, as the DSA requires. The Commission remains firm on enforcing the rules to protect users’ well-being amid the growing influence of digital platforms.

Kyrgyzstan blocks TikTok over child protection concerns

Kyrgyzstan has banned TikTok following security service recommendations to safeguard children. The decision comes amid growing global scrutiny over the social media app’s impact on children’s mental health and data privacy.

The Kyrgyz digital ministry cited ByteDance’s failure to comply with child protection laws, sparking concerns from advocacy groups about arbitrary censorship. The decision reflects Kyrgyzstan’s broader trend of tightening control over media and civil society, departing from its relatively open stance.

Meanwhile, TikTok continues to face scrutiny worldwide over its data policies and alleged connections to the Chinese government.

Why does it matter?

This decision stems from legislation approved last summer aimed at curbing the distribution of ‘harmful’ online content accessible to minors. Such content encompasses material featuring ‘non-traditional sexual relationships’ or undermining ‘family values,’ as well as material promoting illegal conduct, substance abuse, or anti-social behaviour. Chinese officials have not publicly commented on the decision, although in March, Beijing accused the US of ‘bullying’ over similar actions against TikTok.

UK bans sex offender from AI tools after child abuse conviction

A convicted sex offender in the UK has been banned from using ‘AI-creating tools’ for five years, marking the first known case of its kind. Anthony Dover, 48, received the prohibition as part of a sexual harm prevention order, preventing him from accessing AI generation tools without prior police permission. This includes text-to-image generators and ‘nudifying’ websites used to produce explicit deepfake content.

Dover’s case highlights the increasing concern over the proliferation of AI-generated sexual abuse imagery, prompting government action. The UK recently introduced a new offence making it illegal to create sexually explicit deepfakes of adults without consent, with penalties including prosecution and unlimited fines. The move aims to address the evolving landscape of digital exploitation and safeguard individuals from the misuse of advanced technology.

Charities and law enforcement agencies emphasise the urgent need for collaboration to combat the spread of AI-generated abuse material. Recent prosecutions reveal a growing trend of offenders exploiting AI tools to create highly realistic and harmful content. The Internet Watch Foundation (IWF) and the Lucy Faithfull Foundation (LFF) stress the importance of targeting both offenders and tech companies to prevent the production and dissemination of such material.

Why does it matter?

The decision to restrict an adult sex offender’s access to AI tools sets a precedent for future monitoring and prevention measures. While the specific reasons for Dover’s ban remain unclear, it underscores the broader effort to mitigate the risks posed by digital advancements in sexual exploitation. Law enforcement agencies are increasingly adopting proactive measures to address emerging threats and protect vulnerable individuals from harm in the digital age.