TikTok faces lawsuit over viral challenge death

A US appeals court has recently revived a lawsuit against TikTok, filed by the mother of a 10-year-old girl who tragically died after participating in a dangerous viral challenge on the platform. The blackout challenge, which involved users choking themselves until they lost consciousness, led to the death of Nylah Anderson in 2021.

The case hinges on the argument that TikTok’s algorithm itself recommended the harmful challenge to Nylah, placing the company’s conduct outside the federal protections that typically shield internet companies from liability for user-generated content. The 3rd US Circuit Court of Appeals in Philadelphia ruled that Section 230 of the Communications Decency Act, which generally protects online platforms from such lawsuits, does not apply to algorithmic recommendations made by the company itself.

Judge Patty Shwartz, writing for the panel, explained that while Section 230 covers third-party content, it does not extend to the platform’s content curation decisions. This ruling marks a substantial shift from previous cases where courts had upheld Section 230 to shield platforms from liability related to harmful user-generated content.

The court’s decision reflects a broader interpretation of a recent US Supreme Court ruling, which recognised that algorithms used by platforms represent editorial judgments by the companies themselves. According to this view, TikTok’s algorithm-driven recommendations are considered the company’s speech, not protected by Section 230.

The lawsuit, brought by Tawainna Anderson against TikTok and its parent company ByteDance, was initially dismissed by a lower court, but the appeals court has now allowed the case to proceed. Anderson’s lawyer, Jeffrey Goodman, hailed the ruling as a blow to Big Tech’s sweeping immunity claims. Meanwhile, Judge Paul Matey criticised TikTok for prioritising profits over safety, underscoring that the platform cannot claim immunity beyond what Congress has granted.

AI cats spark online controversy and curiosity – meet Chubby

A new phenomenon in the digital world has taken the internet by storm: AI-generated cats like Chubby are captivating millions with their peculiar and often heart-wrenching stories. Videos featuring these virtual felines, crafted by AI, depict them in bizarre and tragic situations, garnering immense views and engagement on platforms like TikTok and YouTube. Chubby, a rotund ginger cat, has become particularly iconic, with videos of his misadventures, from shoplifting to being jailed, resonating deeply with audiences across the globe.

[Embedded TikTok video from @mpminds: ‘Step into a new dimension where AI and humans come together!’]

These AI-generated cat stories are not just popular; they are controversial, blurring the line between art and digital spam. Content creators are leveraging AI tools to produce these videos rapidly, feeding social media algorithms that favour such content, which often leads to virality. Despite criticisms of the quality and intent behind this AI-generated content, it is clear that these videos are striking a chord with viewers, many of whom find themselves unexpectedly moved by the fictional plights of these digital cats.

The surge in AI-generated cat videos raises questions about the future of online content and the role of AI in shaping what we consume. While some see it as a disturbing trend, others argue that it represents a new form of digital art, with creators like Charles, the mastermind behind Chubby, believing that AI can indeed produce compelling and emotionally resonant material. The popularity of these videos, particularly those with tragic endings, suggests that there is a significant demand for this type of content.

As AI continues to evolve and integrate further into social media, the debate over the value and impact of AI-generated content is likely to intensify. Whether these videos will remain a staple of internet culture or fade as a passing trend remains to be seen. For now, AI-generated cats like Chubby are at the forefront of a fascinating and complex intersection between technology, art, and human emotion.

California’s child safety law faces legal setback

A US appeals court has upheld a key part of an injunction against a California law designed to protect children from harmful online content. The law, known as the California Age-Appropriate Design Code Act, was challenged by NetChoice, a trade group representing major tech companies, which argued that it violated free speech rights under the First Amendment. The court agreed, stating that the law’s requirement for companies to create detailed reports on potential risks to children was likely unconstitutional.

The court suggested that California could protect children through less restrictive means, such as enhancing education for parents and children about online dangers or offering incentives for companies to filter harmful content. The appeals court partially overturned a lower court’s injunction but sent the case back for further review, particularly concerning provisions related to the collection of children’s data.

California’s law, modelled after a similar UK law, was set to take effect in July 2024. Governor Gavin Newsom defended the law, emphasising the need for child safety and urging NetChoice to drop its legal challenge. Despite this, NetChoice hailed the court’s decision as a win for free speech and online security, highlighting the ongoing legal battle over online content regulation.

Growing demand for AI-generated child abuse material on the dark web

According to new research from Anglia Ruskin University, online offenders are showing growing interest in learning how to create AI-generated child sexual abuse material, as evidenced by interactions on the dark web. Researchers analysed chats that took place on dark web forums over the past 12 months and found that group members were teaching each other how to create such material using online guides and videos, and exchanging advice.

Forum members have gathered supplies of non-AI content to learn how to make these images. Researchers Dr Deanna Davy and Prof Sam Lundrigan also found that some members referred to those who created AI images as ‘artists’, while others hoped the technology would soon become capable enough to make the process easier.

Why does it matter?

This trend has serious ramifications for child safety. Dr Davy said that AI-generated child sexual abuse material demands a greater understanding of how offenders create and share such content, particularly among police and public protection agencies. Professor Lundrigan added that the trend ‘adds to the growing global threat of online child abuse in all forms and must be viewed as a critical area to address in our response to this type of crime’.

Man who used AI to create indecent images of children faces jail

In a groundbreaking case in the UK, a 27-year-old man named Hugh Nelson has admitted to using AI technology to create indecent images of children, a crime for which he is expected to be jailed. Nelson pleaded guilty to multiple charges at Bolton Crown Court, including attempting to incite a minor into sexual activity, distributing and making indecent images, and publishing obscene content. His sentencing is scheduled for 25 September.

The case, described by Greater Manchester Police (GMP) as ‘deeply horrifying,’ marks the first instance in the region—and possibly nationally—where AI technology was used to transform ordinary photographs of children into indecent images. Detective Constable Carly Baines, who led the investigation, emphasised the global reach of Nelson’s crimes, noting that arrests and safeguarding measures have been implemented in various locations worldwide.

Authorities hope this case will influence future legislation, as the use of AI in such offences is not yet fully addressed by current UK laws. The Crown Prosecution Service highlighted the severity of the crime, warning that the misuse of emerging technologies to generate abusive imagery could lead to an increased risk of actual child abuse.

Türkiye blocks Roblox amid child protection concerns

Türkiye has blocked access to the popular kids’ gaming platform Roblox due to concerns over content that could potentially lead to child abuse. Justice Minister Yilmaz Tunc announced the decision on X, citing a legal ruling based on a law regulating internet broadcasting. He emphasised the state’s constitutional duty to protect children and condemned the harmful use of technology.

The ban on Roblox is the latest in a series of measures targeting internet platforms in Türkiye. Recently, Instagram faced similar restrictions after a senior aide to President Recep Tayyip Erdogan accused the social media platform of censoring posts related to the death of Hamas political leader Ismail Haniyeh.

Roblox has not yet responded to requests for comment regarding the ban. The company has been grappling with challenges related to keeping inappropriate content off its platform, including issues involving paedophiles.

The development highlights the ongoing tension between the Turkish government and digital platforms as authorities enforce stringent measures to control online content and protect vulnerable users.

FTC sues TikTok over child privacy violations

The Federal Trade Commission (FTC), supported by the Department of Justice (DOJ), has filed a lawsuit against TikTok and its parent company ByteDance for violating children’s privacy laws. The lawsuit claims that TikTok breached the Children’s Online Privacy Protection Act (COPPA) by failing to notify and obtain parental consent before collecting data from children under 13. The case also alleges that TikTok did not adhere to a 2019 FTC consent order regarding the same issue.

According to the complaint, TikTok collected personal data from underage users without proper parental consent, using this information to target ads and build user profiles. Despite knowing these practices violated COPPA, ByteDance and TikTok allowed children to use the platform by bypassing age restrictions. Even when parents requested account deletions, TikTok made the process difficult and often did not comply.

FTC Chair Lina M. Khan stated that TikTok’s actions jeopardised the safety of millions of children, and the FTC is determined to protect kids from such violations. The DOJ emphasised the importance of upholding parental rights to safeguard children’s privacy.

The lawsuit seeks civil penalties against ByteDance and TikTok and a permanent injunction to prevent future COPPA violations. The case will be heard in the US District Court for the Central District of California.

US Senate approves major online child safety reforms

The US Senate has passed significant online child safety reforms in a near-unanimous vote, but the fate of these bills remains uncertain in the House of Representatives. The two pieces of legislation, known as the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) and the Kids Online Safety Act (KOSA), aim to protect minors from targeted advertising and unauthorised data collection while also enabling parents and children to delete their information from social media platforms. The Senate’s bipartisan approval, with a vote of 91-3, marks a critical step towards enhancing online safety for minors.

COPPA 2.0 and KOSA have sparked mixed reactions within the tech industry. While platforms like Snap and X have shown support for KOSA, Meta Platforms and TikTok executives have expressed reservations. Critics, including the American Civil Liberties Union and certain tech industry groups, argue that the bills could limit minors’ access to essential information on topics such as vaccines, abortion, and LGBTQ issues. Despite amendments to address these concerns, some, like Senator Ron Wyden, remain unconvinced of the bills’ efficacy and worry about their potential impact on vulnerable groups.

The high economic stakes are highlighted by a Harvard study indicating that top US social media platforms generated approximately $11 billion in advertising revenue from users under 18 in 2022. Advocates for the bills, such as Maurine Molak of ParentsSOS, view the Senate vote as a historic milestone in protecting children online. However, the legislation’s future hinges on its passage in the Republican-controlled House, which is currently in recess until September.

Meta oversight board calls for clearer rules on AI-generated pornography

Meta’s Oversight Board has criticised the company’s rules on sexually explicit AI-generated depictions of real people, stating they are ‘not sufficiently clear’. The criticism follows the board’s review of two pornographic deepfakes of famous women posted on Meta’s Facebook and Instagram platforms. The board found that both images violated Meta’s policy against ‘derogatory sexualised photoshop’, which falls under bullying and harassment, and should have been promptly removed.

In one case involving an Indian public figure, Meta failed to act on a user report within 48 hours, leading to an automatic ticket closure. The image was only removed after the board intervened. In contrast, Meta’s systems automatically took down the image of an American celebrity. The board recommended that Meta clarify its rules to cover a broader range of editing techniques, including generative AI. It criticised the company for not adding the Indian woman’s image to a database for automatic removals.

Meta has stated it will review the board’s recommendations and update its policies accordingly. The board emphasised the importance of removing harmful content to protect those impacted, noting that many victims of deepfake intimate images are not public figures and struggle to manage the spread of non-consensual depictions.

US Senate passes bill to combat AI deepfakes

The US Senate has unanimously passed the DEFIANCE Act, allowing victims of nonconsensual intimate images created by AI, known as deepfakes, to sue their creators for damages. The bill enables victims to pursue civil remedies against those who produced or distributed sexually explicit deepfakes with malicious intent. Victims identifiable in these deepfakes can receive up to $150,000 in damages and up to $250,000 if linked to sexual assault, stalking, or harassment.

The legislative move follows high-profile incidents, such as AI-generated explicit images of Taylor Swift appearing on social media and similar cases affecting high school girls across the country. Senate Majority Leader Chuck Schumer emphasised the widespread impact of malicious deepfakes, highlighting the urgent need for protective measures.

Schumer described the DEFIANCE Act as part of broader efforts to implement AI safeguards to prevent significant harm. He called on the House to pass the bill, which has a companion bill awaiting consideration. Schumer assured victims that the government is committed to addressing the issue and protecting individuals from the abuses of AI technology.