Warning labels for social media suggested by US Surgeon General

US Surgeon General Vivek Murthy has called for a warning label on social media apps to highlight the harm these platforms can cause young people, particularly adolescents. In a New York Times op-ed, Murthy emphasised that while a warning label alone won’t make social media safe, it can raise awareness and influence behaviour, similar to tobacco warning labels. The proposal requires legislative approval from Congress. Social media platforms like Facebook, Instagram, TikTok, and Snapchat have faced longstanding criticism for their negative impact on youth, including shortened attention spans, negative body image, and vulnerability to online predators and bullies.

Murthy’s proposal comes amid increasing efforts by youth advocates and lawmakers to protect children from social media’s harmful effects. US senators grilled CEOs of major social media companies, accusing them of failing to protect young users from dangers such as sexual predators. States are also taking action; New York recently passed legislation requiring parental consent for users under 18 to access ‘addictive’ algorithmic content, and Florida has banned children under 14 from social media platforms while requiring parental consent for 14- and 15-year-olds.

Despite these growing concerns and legislative efforts, major social media companies have not publicly responded to Murthy’s call for warning labels. The push for such labels is part of broader initiatives to mitigate the mental health risks associated with social media use among adolescents, aiming to reduce issues like anxiety and depression linked to these platforms.

X bans over 230,000 accounts in India for violations

Between April 26 and May 25, Elon Musk’s X Corp banned 229,925 accounts in India, primarily for promoting child sexual exploitation and non-consensual nudity. An additional 967 accounts were removed for promoting terrorism, bringing the total to 230,892 banned accounts during this period. X Corp’s monthly report, filed in compliance with India’s IT Rules, 2021, noted that the company received 17,580 user complaints in India. The company processed 76 grievances appealing account suspensions but upheld all suspensions after review.

The report also mentioned 31 general account-related inquiries. Most user complaints involved ban evasion (6,881), hateful conduct (3,763), sensitive adult content (3,205), and abuse/harassment (2,815). Previously, between March 26 and April 25, X banned 184,241 accounts in India and removed 1,303 for promoting terrorism.

Why does it matter?

India, with nearly 700 million internet users, has introduced new regulations for social media, streaming services, and digital news outlets. These rules require firms to enable traceability of encrypted messages, establish local offices with senior officials, comply with takedown requests within 24 hours, resolve grievances within 15 days, and publish a monthly compliance report detailing requests received and actions taken.

New York lawmakers pass bills on social media restrictions

New York state lawmakers have passed new legislation to restrict social media platforms from showing ‘addictive’ algorithmic content to users under 18 without parental consent. The measure aims to mitigate online risks to children, making New York the latest state to take such action. A companion bill was also passed, which bars online sites from collecting and selling the personal data of minors.

Governor Kathy Hochul is expected to sign both bills into law, calling them a significant step toward addressing the youth mental health crisis and ensuring a safer digital environment. The legislation could impact revenues for social media companies like Meta, which generated significant income from advertising to minors.

While industry associations have criticised the bills as unconstitutional and an assault on free speech, proponents argue that the measures are necessary to protect adolescents from mental health issues linked to excessive social media use. The SAFE (Stop Addictive Feeds Exploitation) for Kids Act will require parental consent before minors can view algorithm-driven content; without it, platforms must instead serve a chronological feed of followed accounts and popular content.

The New York Child Data Protection Act, the companion bill, will bar online sites from collecting, using, or selling the personal data of minors without informed consent. Violations could result in significant penalties, adding a layer of protection for young internet users.

Microsoft faces GDPR investigation over data protection concerns

The advocacy group NOYB has filed two complaints against Microsoft’s 365 Education software suite, alleging that the company is shifting its responsibilities for children’s personal data onto schools that are not equipped to handle them. The complaints centre on concerns about transparency and the processing of children’s data on the Microsoft platform, potentially in violation of the European Union’s General Data Protection Regulation (GDPR).

The first complaint alleges that Microsoft’s contracts with schools attempt to shift responsibility for GDPR compliance onto them despite schools lacking the capacity to monitor or enforce Microsoft’s data practices. That could result in children’s data being processed in ways that do not comply with GDPR. The second complaint highlights the use of tracking cookies within Microsoft 365 Education software, which reportedly collects user browsing data and analyses user behaviour, potentially for advertising purposes.

NOYB claims that such tracking occurs without users’ consent or the schools’ knowledge, and that there appears to be no legal justification for it under GDPR. The group has asked the Austrian Data Protection Authority to investigate the complaints and determine the extent of data processing by Microsoft 365 Education, and has urged the authority to impose fines if GDPR violations are confirmed.

Microsoft has not yet responded to the complaints, though the company has stated that Microsoft 365 Education complies with GDPR and other applicable privacy laws and that it thoroughly protects the privacy of its young users.

New York to require parental consent for social media access

New York lawmakers are preparing to bar social media companies from using algorithms to control the content seen by young users without parental consent. The measure, expected to be voted on this week, would shield minors from automated feeds, as well as from notifications during overnight hours, unless their parents approve. The move comes as social media platforms face increasing scrutiny over their addictive design and impact on young people’s mental health.

Earlier this year, New York City Mayor Eric Adams announced a lawsuit against major social media companies, including Facebook and Instagram, for allegedly contributing to a mental health crisis among youth. Similar actions have been taken by other states, with Florida recently passing a law requiring parental consent for minors aged 14 and 15 to use social media and banning those under 14 from accessing these platforms.

Why does it matter?

The trend started with Utah, which became the first state to regulate children’s social media access last year. States like Arkansas, Louisiana, Ohio, and Texas have since followed suit. The heightened regulation is affecting social media companies, with shares of Meta and Snap seeing a slight decline in extended trading.

X now officially allows adult content

X, formerly known as Twitter, has officially updated its rules to allow the posting of adult and graphic content. Users can now share consensually produced NSFW (not safe for work) content, including AI-generated images and videos, provided they are clearly labelled. This change is a formal acknowledgement of practices that have existed unofficially for years, especially under the platform’s current ownership by Elon Musk, who has been exploring ways to host and potentially monetise adult content.

The new guidelines emphasise that while adult content is permitted, it must be consensually produced and appropriately labelled to prevent unintended exposure, particularly to minors. X continues to prohibit excessively gory content and any depiction of sexual violence, aligning with its existing violent content policies. The platform also requires users to mark posts containing sensitive media, ensuring that such content is visible only to users over 18 who have provided their birth dates.

This move opens the door for X to potentially develop services around adult content, possibly positioning itself as a competitor to platforms like OnlyFans. The prevalence of adult content on X has been significant, with about 13% of posts in 2022 containing such material, a figure that has likely increased with the proliferation of porn bots. Regulatory bodies will closely monitor X’s efforts to manage and eliminate non-consensual porn and child sexual abuse material (CSAM), especially following past fines and warnings from countries like Australia and India.

FBI charges man with creating AI-generated child abuse material

A Wisconsin man, Steven Anderegg, has been charged by the FBI for creating over 10,000 sexually explicit and abusive images of children using AI. The 42-year-old allegedly used the popular AI tool Stable Diffusion to generate around 13,000 hyper-realistic images depicting prepubescent children in disturbing and explicit scenarios. Authorities discovered these images on his laptop following a tip-off from the National Center for Missing & Exploited Children (NCMEC), which had flagged his Instagram activity.

Anderegg’s charges include creating, distributing, and possessing child sexual abuse material (CSAM), as well as sending explicit content to a minor. If convicted, he faces up to 70 years in prison. The case marks one of the first instances in which the FBI has charged someone for generating AI-created child abuse material. The rise in such cases has prompted significant concern among child safety advocates and AI researchers, who warn of the increasing potential for AI to facilitate the creation of harmful content.

Reports of online child abuse have surged, partly due to the proliferation of AI-generated material. In 2023, the NCMEC noted a 12% increase in flagged incidents, straining its resources. The Department of Justice has reaffirmed its commitment to prosecuting those who exploit AI to create CSAM, emphasising that AI-generated explicit content is equally punishable under the law.

Stable Diffusion, an open-source AI model, has been identified as a tool used to generate such material. Stability AI, the company behind its development, has stated that the model used by Anderegg was an earlier version created by another startup, RunwayML. Stability AI asserts that it has since implemented stronger safeguards to prevent misuse and prohibits creating illegal content with its tools.

AI-generated child images on social media attract disturbing attention

AI-generated images of young girls, some as young as five, are spreading on TikTok and Instagram and drawing inappropriate comments from a troubling audience consisting mostly of older men, a Forbes investigation has uncovered. The images depict children in provocative outfits and, while not illegal, are highly sexualised, prompting child safety experts to warn that they could pave the way to more severe exploitation.

Platforms like TikTok and Instagram, both popular with minors, are struggling to address the issue. One popular account, ‘Woman With Chopsticks,’ had 80,000 followers and viral posts viewed nearly half a million times across both platforms. A recent Stanford study also revealed that the AI tool Stable Diffusion 1.5 was trained on data that included child sexual abuse material (CSAM) involving real children, collected from various online sources.

Under federal law, tech companies must report suspected CSAM and exploitation to the National Center for Missing & Exploited Children (NCMEC), which then informs law enforcement. However, they are not required to remove the type of images discussed here. Nonetheless, NCMEC believes that social media companies should take them down, even though they fall into a legal grey area.

TikTok and Instagram assert that they have strict policies against AI-generated content involving minors in order to protect young people. TikTok bans AI-generated content depicting anyone under 18, while Meta removes material that sexualises or exploits children, whether real or AI-generated. Both platforms removed the accounts and posts identified by Forbes. Even so, the ease of creating and sharing AI-generated images remains a significant challenge for safeguarding children online.

Why does it matter?

The Forbes story reveals that such content, made increasingly easy to find by powerful recommendation algorithms, worsens online child exploitation by acting as a gateway to the exchange of more severe material and by facilitating networking among offenders. One 13 January TikTok slideshow of young girls in pyjamas, found by the investigation, showed users moving their exchanges to private messages. The Canadian Centre for Child Protection stressed that companies need to look beyond automated moderation to address how these images are shared and followed.

EU launches investigation into Facebook and Instagram over child safety

EU regulators announced on Thursday that Meta’s Facebook and Instagram will be investigated for potential violations of EU online content rules on child safety, an inquiry that could result in significant fines. The scrutiny follows the EU’s implementation of the Digital Services Act (DSA) last year, which places greater responsibility on tech companies to address illegal and harmful content on their platforms.

The European Commission has expressed concerns that Facebook and Instagram have not adequately addressed risks to children, prompting an in-depth investigation. Issues highlighted include the potential for the platforms’ systems and algorithms to promote behavioural addictions among children and facilitate access to inappropriate content, leading to what the Commission refers to as ‘rabbit-hole effects’. Additionally, concerns have been raised regarding Meta’s age assurance and verification methods.

Why does it matter?

Meta, formerly known as Facebook, is already under EU scrutiny over election disinformation, particularly concerning the upcoming European Parliament elections. Violations of the DSA can result in fines of up to 6% of a company’s annual global turnover, indicating the seriousness with which EU regulators are approaching these issues. Meta’s response to the investigation and any subsequent actions will be closely monitored as the EU seeks to enforce stricter regulations on tech giants to protect online users, especially children, from harm.

OpenAI considers allowing AI-generated pornography

OpenAI is sparking debate by considering the possibility of allowing users to generate explicit content, including pornography, using its AI-powered tools like ChatGPT and DALL-E. While the company maintains a ban on deepfakes, its proposal has raised concerns among campaigners who question its commitment to producing ‘safe and beneficial’ AI. The company sees potential for ‘not-safe-for-work’ (NSFW) content creation but stresses the importance of responsible usage and adherence to legal and ethical standards.

The proposal, outlined in a document discussing OpenAI’s AI development practices, aims to initiate discussions about the boundaries of content generation within its products. Joanne Jang, an OpenAI employee, stressed the need for maximum user control while ruling out deepfake creation. Despite acknowledging the importance of discussions around sexuality and nudity, OpenAI maintains strong safeguards against deepfakes and prioritises protecting users, particularly children.

Critics, however, have accused OpenAI of straying from its mission statement of developing safe and beneficial AI by delving into potentially harmful commercial endeavours like AI erotica. Concerns about the spread of AI-generated pornography have been underscored by recent incidents, prompting calls for tighter regulation and ethical considerations in the tech sector. While OpenAI’s policies prohibit sexually explicit content, questions remain about the effectiveness of safeguards and the company’s approach to handling sensitive content creation.

Why does it matter?

As discussions unfold, stakeholders including lawmakers, experts, and campaigners are closely scrutinising OpenAI’s proposal and its potential implications for online safety and ethical AI development. With growing concerns about the misuse of AI technology, the debate surrounding OpenAI’s stance on explicit content generation highlights broader challenges in balancing innovation, responsibility, and societal well-being in the digital age.