The US Federal Trade Commission (FTC) and the Los Angeles District Attorney’s Office have banned the anonymous messaging app NGL from serving children under 18 due to rampant cyberbullying and threats.
The FTC’s latest action, part of a broader crackdown on companies mishandling consumer data or making exaggerated AI claims, also requires NGL to pay $5 million and implement age restrictions to prevent minors from using the app. NGL, which marketed itself as a safe space for teens, was found to have exploited its young users by sending them fake, anonymous messages designed to prey on their social anxieties.
The app then charged users for information about the senders, often providing only vague hints. The FTC lawsuit, which names NGL’s co-founders, highlights the app’s deceptive practices and its failure to protect users. The case against NGL is also a notable example of FTC Chair Lina Khan’s focus on regulating digital data and holding companies accountable for AI-related misconduct.
The FTC’s action is part of a larger effort to protect children online, with states like New York and Florida also passing laws to limit minors’ access to social media. Regulatory pushes like this one aim to address growing concerns about the impact of social media on children’s mental health.
The US Supreme Court decided not to review a case involving a Texas teenager who sued Snapchat, alleging the platform did not adequately protect him from sexual abuse by a teacher. The minor, known as Doe, accused Snap Inc. of negligence for failing to safeguard young users from sexual predators, particularly a teacher who exploited him via the app. Bonnie Guess-Mazock, the teacher involved, was convicted of sexually assaulting the teenager.
Lower courts dismissed the lawsuit, citing Section 230 of the Communications Decency Act, which shields internet companies from liability for content posted by users. With the Supreme Court declining to hear the case, Snapchat retains its protection under this law. Justices Clarence Thomas and Neil Gorsuch expressed concerns about the broad immunity granted to social media platforms under Section 230.
Why does this matter?
The case has sparked wider debate about the responsibilities of tech companies in preventing such abuses and whether laws like Section 230 should be revised to hold them more accountable for content on their platforms. Both US political parties have called for reforms to ensure internet companies can be held liable when their platforms are used for harmful activities.
Australia has given internet-related companies six months to develop enforceable codes aimed at preventing children from accessing pornography and other inappropriate content online, the eSafety Commissioner announced on Tuesday. To outline its expectations, the regulator presented a policy paper to guide the industry’s drafting of the codes. Preliminary drafts are due by 3 October 2024, and final codes must be submitted by 19 December 2024.
These codes will complement current government efforts on online content policy and safety. “We […] need industry to play their part by putting in some effective barriers to protect children,” said the eSafety Commissioner, Ms Inman Grant. The companies covered range from dating apps and social media platforms to search engines and online games.
The codes will centre primarily on pornography but will also cover themes such as suicide, self-harm, and eating disorders. Potential measures to protect children from explicit content include age verification systems, default parental controls, and software to blur or filter inappropriate material.
The regulator specified that the measures are not intended to block access entirely: “We know kids will always be curious and will likely seek out porn as they enter adolescence and explore their sexuality, so, many of these measures are really focused on preventing unintentional exposure to young children.” Australia has previously decided against mandating age verification for pornographic or adult content websites.
Australia already has a number of online safety codes, many of which were developed through similar consultations with NGOs and civil society actors. Spokespeople for Google and Meta have said they will continue to engage with the commissioner on the development of the new codes.
Frank McCourt, a US real estate billionaire, aims to acquire TikTok to combat the negative influence of major tech platforms on society. Known for owning the Los Angeles Dodgers and Olympique de Marseille, McCourt has been vocal about the harm these platforms inflict, particularly on children. Speaking at the Collision tech conference in Toronto, he emphasised the manipulative nature of social media algorithms, linking them to societal chaos and political polarisation.
McCourt’s concern stems from the detrimental impact of social media on mental health, especially among children, citing rising anxiety, depression, and youth suicides. His solution is a ‘new internet’ based on an open-source, decentralised protocol where users control their own data, a vision he calls Project Liberty. Acquiring TikTok, with its vast base of young users, would significantly advance this initiative. Project Liberty has garnered support from internet pioneer Tim Berners-Lee and NYU professor Jonathan Haidt.
The acquisition bid comes amid US government pressures on TikTok to divest from Chinese ownership due to national security concerns. While the future of TikTok’s ownership remains uncertain, McCourt hopes this situation will raise awareness about data privacy issues across all platforms, emphasising the need for user control over personal data to preserve democratic values.
TikTok will face another lawsuit from the US Department of Justice (DoJ) later this year, a consumer protection case focused on alleged children’s privacy violations. The legal move comes on a referral from the Federal Trade Commission (FTC), but the DoJ will not pursue allegations that TikTok misled US consumers about data security, specifically dropping claims that the company failed to inform users that China-based employees could access their personal and financial information.
The decision suggests that the primary focus will now be on how TikTok handles children’s privacy. The FTC had referred to the DoJ a complaint against TikTok and its parent, ByteDance, stating that its investigation found evidence suggesting the companies may be breaking the Children’s Online Privacy Protection Act. The federal act requires apps and websites aimed at children to obtain parental consent before collecting personal information from users under 13.
The US Federal Trade Commission (FTC) has referred a complaint against TikTok and its parent company, ByteDance, to the Justice Department over potential violations of children’s privacy. The FTC said its investigation suggested the companies might be breaking the law and that proceeding with the complaint was in the public interest. The investigation stems from allegations that TikTok failed to comply with a 2019 agreement to safeguard children’s privacy.
TikTok has been in discussions with the FTC for over a year to address the agency’s concerns. The company expressed disappointment over the FTC’s decision to pursue litigation rather than continue negotiations, arguing that many of the FTC’s allegations are outdated or incorrect. TikTok says it remains committed to resolving the issues and believes it has already addressed many of the concerns.
Separately, TikTok is facing scrutiny from US Congress regarding the potential misuse of data from its 170 million US users by the Chinese government, a claim TikTok denies. Additionally, TikTok is preparing to file a legal brief challenging a recent law that mandates its parent company, ByteDance, to divest TikTok’s US assets by 19 January or face a ban.
US Surgeon General Vivek Murthy has called for a warning label on social media apps to highlight the harm these platforms can cause young people, particularly adolescents. In a New York Times op-ed, Murthy emphasised that while a warning label alone won’t make social media safe, it can raise awareness and influence behaviour, similar to tobacco warning labels. The proposal requires legislative approval from Congress. Social media platforms like Facebook, Instagram, TikTok, and Snapchat have faced longstanding criticism for their negative impact on youth, including shortened attention spans, negative body image, and vulnerability to online predators and bullies.
Murthy’s proposal comes amid increasing efforts by youth advocates and lawmakers to protect children from social media’s harmful effects. US senators grilled CEOs of major social media companies, accusing them of failing to protect young users from dangers such as sexual predators. States are also taking action; New York recently passed legislation requiring parental consent for users under 18 to access ‘addictive’ algorithmic content, and Florida has banned children under 14 from social media platforms while requiring parental consent for 14- and 15-year-olds.
Despite these growing concerns and legislative efforts, major social media companies have not publicly responded to Murthy’s call for warning labels. The push for such labels is part of broader initiatives to mitigate the mental health risks associated with social media use among adolescents, aiming to reduce issues like anxiety and depression linked to these platforms.
Between 26 April and 25 May, Elon Musk’s X Corp banned 229,925 accounts in India, primarily for promoting child sexual exploitation and non-consensual nudity. Additionally, 967 accounts were removed for promoting terrorism, bringing the total to 230,892 banned accounts during this period. In compliance with the new IT Rules, 2021, X Corp’s monthly report noted receiving 17,580 user complaints in India. The company processed 76 grievances appealing account suspensions but upheld all suspensions after review.
The report also mentioned 31 general account-related inquiries. Most user complaints involved ban evasion (6,881), hateful conduct (3,763), sensitive adult content (3,205), and abuse/harassment (2,815). Previously, between 26 March and 25 April, X banned 184,241 accounts in India and removed 1,303 for promoting terrorism.
Why does it matter?
India, with nearly 700 million internet users, has introduced new regulations for social media, streaming services, and digital news outlets. These rules mandate firms to enable traceability of encrypted messages, establish local offices with senior officials, comply with takedown requests within 24 hours, resolve grievances within 15 days, and publish a monthly compliance report detailing received requests and actions taken.
New York state lawmakers have passed new legislation to restrict social media platforms from showing ‘addictive’ algorithmic content to users under 18 without parental consent. The measure aims to mitigate online risks to children, making New York the latest state to take such action. Lawmakers also passed a companion bill that limits online sites from collecting and selling the personal data of minors.
Governor Kathy Hochul is expected to sign both bills into law, calling them a significant step toward addressing the youth mental health crisis and ensuring a safer digital environment. The legislation could impact revenues for social media companies like Meta, which generated significant income from advertising to minors.
While industry associations have criticised the bills as unconstitutional and an assault on free speech, proponents argue that the measures are necessary to protect adolescents from mental health issues linked to excessive social media use. The SAFE (Stop Addictive Feeds Exploitation) for Kids Act will require parental consent before minors can be shown algorithm-driven content; without it, platforms must instead provide a chronological feed of followed accounts and popular content.
The New York Child Data Protection Act, the companion bill, will bar online sites from collecting, using, or selling the personal data of minors without informed consent. Violations could result in significant penalties, adding a layer of protection for young internet users.
Privacy advocacy group NOYB has filed two complaints with the Austrian Data Protection Authority over Microsoft 365 Education. The first complaint alleges that Microsoft’s contracts with schools attempt to shift responsibility for GDPR compliance onto them, despite schools lacking the capacity to monitor or enforce Microsoft’s data practices. That could result in children’s data being processed in ways that do not comply with GDPR. The second complaint highlights the use of tracking cookies within Microsoft 365 Education software, which reportedly collect user browsing data and analyse user behaviour, potentially for advertising purposes.
NOYB claims that such tracking practices occur without users’ consent or the schools’ knowledge, and there appears to be no legal justification for it under GDPR. They request that the Austrian Data Protection Authority investigate the complaints and determine the extent of data processing by Microsoft 365 Education. The group has also urged the authority to impose fines if GDPR violations are confirmed.
Microsoft has not yet responded to the complaints. Still, the company has stated that Microsoft 365 Education complies with GDPR and other applicable privacy laws and that it thoroughly protects the privacy of its young users.