FTC bans NGL app from serving minors, issues $5 million fine over cyberbullying exploits

The US Federal Trade Commission (FTC) and the Los Angeles District Attorney’s Office have banned the anonymous messaging app NGL from serving children under 18 due to rampant cyberbullying and threats.

The FTC’s latest action, part of a broader crackdown on companies mishandling consumer data or making exaggerated AI claims, also requires NGL to pay $5 million and implement age restrictions to prevent minors from using the app. NGL, which marketed itself as a safe space for teens, was found to have exploited its young users by sending them fake, anonymous messages designed to prey on their social anxieties.

The app then charged users for information about the senders, often providing only vague hints. The FTC lawsuit, which names NGL’s co-founders, highlights the app’s deceptive practices and its failure to protect users. The case against NGL is also a notable example of FTC Chair Lina Khan’s focus on regulating digital data and holding companies accountable for AI-related misconduct.

The FTC’s action is part of a larger effort to protect children online, with states like New York and Florida also passing laws to limit minors’ access to social media. Regulatory pushes like this one aim to address growing concerns about the impact of social media on children’s mental health.

US Supreme Court declines Snapchat case

The US Supreme Court decided not to review a case involving a Texas teenager who sued Snapchat, alleging the platform did not adequately protect him from sexual abuse by a teacher. The minor, known as Doe, accused Snap Inc. of negligence for failing to safeguard young users from sexual predators, particularly a teacher who exploited him via the app. Bonnie Guess-Mazock, the teacher involved, was convicted of sexually assaulting the teenager.

Lower courts dismissed the lawsuit, citing Section 230 of the Communications Decency Act, which shields internet companies from liability for content posted by users. With the Supreme Court declining to hear the case, Snapchat retains its protection under this law. Justices Clarence Thomas and Neil Gorsuch expressed concerns about the broad immunity granted to social media platforms under Section 230.

Why does this matter?

The case has sparked wider debate about the responsibilities of tech companies in preventing such abuses and whether laws like Section 230 should be revised to hold them more accountable for content on their platforms. Both US political parties have called for reforms to ensure internet companies can be held liable when their platforms are used for harmful activities.

US DoJ to file lawsuit against TikTok for alleged children’s privacy violations

TikTok will face another lawsuit from the US Department of Justice (DoJ) later this year, this time a consumer protection suit focusing on alleged children’s privacy violations. The DoJ is acting on a referral from the Federal Trade Commission (FTC), but it will not pursue allegations that TikTok misled US consumers about data security, specifically dropping claims that the company failed to inform users that China-based employees could access their personal and financial information.

The decision suggests that the primary focus will now be on how TikTok handles children’s privacy. The FTC had referred a complaint against TikTok and its parent, ByteDance, to the DoJ concerning potential violations of children’s privacy, stating that it had investigated TikTok and found evidence suggesting the company may be breaking the Children’s Online Privacy Protection Act. The federal act requires apps and websites aimed at kids to get parental consent before collecting personal information from children under 13.

Simultaneously, TikTok and ByteDance are challenging a US law that aims to ban the popular short video app in the United States starting from 19 January next year.

US Justice Department to investigate TikTok over child privacy complaint

The US Federal Trade Commission (FTC) has referred a complaint against TikTok and its parent company, ByteDance, to the Justice Department over potential violations of children’s privacy. The move follows an investigation that suggested the companies might be breaking the law, with the FTC deeming it in the public interest to proceed with the complaint. The investigation stems from allegations that TikTok failed to comply with a 2019 agreement to safeguard children’s privacy.

TikTok has been in discussions with the FTC for over a year to address the agency’s concerns. The company expressed disappointment over the FTC’s decision to pursue litigation rather than continue negotiations, arguing that many of the FTC’s allegations are outdated or incorrect. TikTok says it remains committed to resolving the issues and believes it has already addressed many of the concerns.

Separately, TikTok is facing scrutiny from US Congress regarding the potential misuse of data from its 170 million US users by the Chinese government, a claim TikTok denies. Additionally, TikTok is preparing to file a legal brief challenging a recent law that mandates its parent company, ByteDance, to divest TikTok’s US assets by 19 January or face a ban.

Warning labels for social media suggested by US Surgeon General

US Surgeon General Vivek Murthy has called for a warning label on social media apps to highlight the harm these platforms can cause young people, particularly adolescents. In a New York Times op-ed, Murthy emphasised that while a warning label alone won’t make social media safe, it can raise awareness and influence behaviour, similar to tobacco warning labels. The proposal requires legislative approval from Congress. Social media platforms like Facebook, Instagram, TikTok, and Snapchat have faced longstanding criticism for their negative impact on youth, including shortened attention spans, negative body image, and vulnerability to online predators and bullies.

Murthy’s proposal comes amid increasing efforts by youth advocates and lawmakers to protect children from social media’s harmful effects. US senators grilled CEOs of major social media companies, accusing them of failing to protect young users from dangers such as sexual predators. States are also taking action; New York recently passed legislation requiring parental consent for users under 18 to access ‘addictive’ algorithmic content, and Florida has banned children under 14 from social media platforms while requiring parental consent for 14- and 15-year-olds.

Despite these growing concerns and legislative efforts, major social media companies have not publicly responded to Murthy’s call for warning labels. The push for such labels is part of broader initiatives to mitigate the mental health risks associated with social media use among adolescents, aiming to reduce issues like anxiety and depression linked to these platforms.

New York lawmakers pass bills on social media restrictions

New York state lawmakers have passed new legislation to restrict social media platforms from showing ‘addictive’ algorithmic content to users under 18 without parental consent. The measure aims to mitigate online risks to children, making New York the latest state to take such action. A companion bill was also passed, which limits online sites from collecting and selling the personal data of minors.

Governor Kathy Hochul is expected to sign both bills into law, calling them a significant step toward addressing the youth mental health crisis and ensuring a safer digital environment. The legislation could impact revenues for social media companies like Meta, which generated significant income from advertising to minors.

While industry associations have criticised the bills as unconstitutional and an assault on free speech, proponents argue that the measures are necessary to protect adolescents from mental health issues linked to excessive social media use. The SAFE (Stop Addictive Feeds Exploitation) for Kids Act will require parental consent before minors can view algorithm-driven content; without consent, platforms must instead provide a chronological feed of posts from followed accounts and popular content.

The New York Child Data Protection Act, the companion bill, will bar online sites from collecting, using, or selling the personal data of minors without informed consent. Violations could result in significant penalties, adding a layer of protection for young internet users.

Microsoft faces GDPR investigation over data protection concerns

The advocacy group NOYB has filed two complaints against Microsoft’s 365 Education software suite, alleging that the company is shifting its responsibilities for children’s personal data onto schools that are not equipped to handle these responsibilities. The complaints centre on concerns about transparency and processing children’s data on the Microsoft platform, potentially violating the European Union’s General Data Protection Regulation (GDPR).

The first complaint alleges that Microsoft’s contracts with schools attempt to shift responsibility for GDPR compliance onto them despite schools lacking the capacity to monitor or enforce Microsoft’s data practices. That could result in children’s data being processed in ways that do not comply with GDPR. The second complaint highlights the use of tracking cookies within Microsoft 365 Education software, which reportedly collects user browsing data and analyses user behaviour, potentially for advertising purposes.

NOYB claims that such tracking practices occur without users’ consent or the schools’ knowledge, and that there appears to be no legal justification for them under GDPR. The group has requested that the Austrian Data Protection Authority investigate the complaints and determine the extent of data processing by Microsoft 365 Education, and has urged the authority to impose fines if GDPR violations are confirmed.

Microsoft has not yet responded to the complaints, but the company has stated that Microsoft 365 Education complies with GDPR and other applicable privacy laws and that it thoroughly protects the privacy of its young users.

New York to require parental consent for social media access

New York lawmakers are preparing to ban social media companies from using algorithms to control content seen by youth without parental consent. The legal initiative, expected to be voted on this week, aims to protect minors from automated feeds and notifications during overnight hours unless parents approve. The move comes as social media platforms face increasing scrutiny for their addictive nature and impact on young people’s mental health.

Earlier this year, New York City Mayor Eric Adams announced a lawsuit against major social media companies, including Facebook and Instagram, for allegedly contributing to a mental health crisis among youth. Similar actions have been taken by other states, with Florida recently passing a law requiring parental consent for minors aged 14 and 15 to use social media and banning those under 14 from accessing these platforms.

Why does it matter?

The trend started with Utah, which became the first state to regulate children’s social media access last year. States like Arkansas, Louisiana, Ohio, and Texas have since followed suit. The heightened regulation is affecting social media companies, with shares of Meta and Snap seeing a slight decline in extended trading.

FBI charges man with creating AI-generated child abuse material

A Wisconsin man, Steven Anderegg, has been charged by the FBI for creating over 10,000 sexually explicit and abusive images of children using AI. The 42-year-old allegedly used the popular AI tool Stable Diffusion to generate around 13,000 hyper-realistic images depicting prepubescent children in disturbing and explicit scenarios. Authorities discovered these images on his laptop following a tip-off from the National Center for Missing & Exploited Children (NCMEC), which had flagged his Instagram activity.

Anderegg’s charges include creating, distributing, and possessing child sexual abuse material (CSAM), as well as sending explicit content to a minor. If convicted, he faces up to 70 years in prison. The case marks one of the first instances in which the FBI has charged someone for generating AI-created child abuse material. The rise in such cases has prompted significant concern among child safety advocates and AI researchers, who warn of AI’s growing potential to facilitate the creation of harmful content.

Reports of online child abuse have surged, partly due to the proliferation of AI-generated material. In 2023, the NCMEC noted a 12% increase in flagged incidents, straining their resources. The Department of Justice has reaffirmed its commitment to prosecuting those who exploit AI to create CSAM, emphasising that AI-generated explicit content is equally punishable under the law.

Stable Diffusion, an open-source AI model, has been identified as a tool used to generate such material. Stability AI, the company behind its development, has stated that the model used by Anderegg was an earlier version created by another startup, RunwayML. Stability AI asserts that it has since implemented stronger safeguards to prevent misuse and prohibits creating illegal content with its tools.

AI-generated child images on social media attract disturbing attention

AI-generated images of young girls, some depicted as young as five, are spreading on TikTok and Instagram, drawing inappropriate comments from a troubling audience consisting mostly of older men, a Forbes investigation has uncovered. The images depict children in provocative outfits and, while not illegal, are highly sexualised, prompting child safety experts to warn about their potential to lead to more severe exploitation.

The images are raising alarm, and platforms like TikTok and Instagram, both popular with minors, are struggling to address the issue. One popular account, ‘Woman With Chopsticks’, had 80,000 followers and viral posts viewed nearly half a million times across both platforms. A recent Stanford study revealed that the AI tool Stable Diffusion 1.5 was trained on data that included child sexual abuse material (CSAM) involving real children, collected from various online sources.

Under federal law, tech companies must report suspected CSAM and exploitation to the National Center for Missing and Exploited Children (NCMEC), which then informs law enforcement. However, they are not required to remove the type of images discussed here. Nonetheless, NCMEC believes that social media companies should remove these images, even though they exist in a legal grey area.

TikTok and Instagram assert that they have strict policies against AI-generated content involving minors in order to protect young people. TikTok bans AI-generated content depicting anyone under 18, while Meta removes material that sexualises or exploits children, whether real or AI-generated. Both platforms removed the accounts and posts identified by Forbes. Despite these policies, however, the ease of creating and sharing AI-generated images will remain a significant challenge for safeguarding children online.

Why does it matter?

The Forbes story reveals that such content, which has become increasingly easy to find due to powerful algorithmic recommendations, worsens online child exploitation by acting as a gateway to the exchange of more severe material and by facilitating networking among offenders. A TikTok slideshow of young girls in pyjamas, posted on 13 January and found by the investigation, drew comments in which users invited one another to move to private messages. The Canadian Centre for Child Protection stressed that companies need to look beyond automated moderation to address how these images are shared and followed.