Judge blocks Utah’s social media law targeting minors

A federal judge has temporarily halted a new Utah law designed to protect minors’ mental health by regulating social media use. The law, set to go into effect on 1 October, would have required social media companies to verify users’ ages and impose restrictions on accounts used by minors. Chief US District Judge Robert Shelby granted a preliminary injunction, finding that the law likely violates social media companies’ First Amendment rights by restricting their speech.

The lawsuit, filed by tech industry group NetChoice, argued that the law unfairly targets social media platforms while exempting other websites, creating content-based restrictions. NetChoice represents major tech firms, including Meta, YouTube, Snapchat, and X (formerly Twitter). The court found these arguments convincing, holding that the law likely fails the heightened scrutiny required of laws regulating speech.

Utah officials expressed disappointment with the ruling but affirmed their commitment to protecting children from the harmful effects of social media. Attorney General Sean Reyes stated that his office is reviewing the decision and is considering further steps. Governor Spencer Cox signed the law in March, hoping to shield minors from the negative impact of social media. Still, the legal battle underscores the complexity of balancing free speech with safeguarding children online.

The ruling is part of a broader national debate, with courts blocking similar laws in states like California, Texas, and Arkansas. Chris Marchese, director of NetChoice’s litigation centre, hailed the decision as a victory, emphasising that the law is deeply flawed and should be permanently struck down. This ongoing legal struggle reveals the challenge of finding solutions to address growing concerns over the effects of social media on youth without infringing on constitutional rights.

Australia plans age limits for social media use

Australia is preparing to introduce age restrictions for social media use to protect children’s mental and physical health. Prime Minister Anthony Albanese announced the plan, emphasising that the government would conduct an age verification trial before finalising the laws, likely setting the minimum age between 14 and 16. Albanese stressed the importance of reducing children’s reliance on social media in favour of real-life activities, citing growing concerns about the harmful effects of digital platforms.

The proposed law would make Australia one of the first countries to implement such a restriction. However, similar attempts, including in the EU, have faced resistance over concerns about limiting minors’ online rights. Tech giants like Meta, the parent company of Facebook and Instagram, whose platforms currently set a self-imposed minimum age of 13, have responded cautiously, calling for tools that empower young users rather than outright restrictions.

Why does it matter?

Australia’s move comes amid a parliamentary inquiry into social media’s impact on society, where testimonies have highlighted its negative influence on teenagers’ mental health. However, critics warn that the law may backfire, potentially driving younger users into unregulated, hidden areas of the internet. Digital rights advocates and experts from institutions like the Queensland University of Technology have expressed concerns, arguing that exclusion from mainstream platforms could harm children’s digital engagement and safety.

Australia’s eSafety Commissioner has also noted that restriction-based approaches may limit access to critical support services for younger users. As the debate continues, social media industry groups urge the government to consult with experts to ensure the policy does not inadvertently expose children to greater risks online.

Telegram tightens content rules after criticism

Telegram founder Pavel Durov announced that the messaging platform will tighten its content moderation policies following criticism over its use for illegal activities. The decision comes after Durov was placed under formal investigation in France over alleged offences linked to fraud, money laundering, and the sharing of abusive content. In a message to his 12.2 million subscribers, Durov stressed that most users were law-abiding but acknowledged that a small percentage were tarnishing the platform’s reputation. He vowed to transform Telegram’s moderation practices from a source of criticism into one of praise.

While details on how Telegram will improve its moderation remain sparse, Durov revealed that some features frequently misused for illegal activity had already been removed. These include disabling media uploads on a standalone blogging tool and scrapping the People Nearby feature, which scammers had exploited; the platform will now focus on showcasing legitimate businesses instead. The changes follow Durov’s arrest and questioning in France, which raised significant concerns within the tech industry over free speech, platform responsibility, and content policing.

Critics, including former Meta executive Katie Harbath, warned that improving moderation would not be simple. Harbath suggested that Durov, like other tech CEOs before him, may be in for a difficult task. Telegram also quietly updated its Frequently Asked Questions, removing language that previously claimed it did not monitor illegal content in private chats, signalling a potential shift in how it approaches privacy and illegal activity.

Durov also defended Telegram’s moderation efforts, stating that the platform removes millions of harmful posts and channels daily, and dismissing claims that it is a haven for illegal content. He expressed surprise at the French investigation, noting that authorities could have contacted the company’s EU representative or him directly to address their concerns.

Snapchat faces lawsuit over child exploitation claims

New Mexico has filed a lawsuit against Snap Inc, alleging that Snapchat’s design facilitates the sharing of child sexual exploitation material. Attorney General Raul Torrez stated that a months-long investigation found Snapchat to be a key platform for sextortion, where predators coerce minors into sending explicit content.

Snap said it is reviewing the complaint and will respond in court. The company has invested significant funds into trust and safety measures and continues to work with law enforcement and safety experts to combat such issues.

Snapchat is widely used by teens because of its disappearing-message feature, which has been criticised for misleading users into believing their content vanishes. According to Torrez, predators can permanently capture the content, creating a virtual collection of child sexual images that is shared indefinitely.

As part of the investigation, investigators opened a decoy Snapchat account and discovered 10,000 records of child sexual abuse material on the dark web, where Snapchat was identified as a major source of such content. New Mexico also sued Meta last December for similar reasons.

Belgian schools tighten smartphone restrictions to combat distractions and cyberbullying

An international school near Brussels, Belgium, has implemented a strict policy to curb smartphone use by requiring students to place their devices in a locker at the start of the day. If students are found using smartphones, the devices are confiscated and returned at the end of the school day.

This initiative, led by school director David Bogaerts, is set to be adopted by hundreds of schools across Brussels and Wallonia. The new Francophone community government plans to enforce a smartphone ban in primary schools and the first three years of secondary school. This reflects a growing trend across Europe, with the Netherlands already enacting such bans and France and Ireland contemplating similar measures. The same debate is also ongoing in the US.

These bans are driven by rising concerns over distractions and cyberbullying associated with smartphones, along with the negative effects of excessive screen time on children’s mental health. European Commission President Ursula von der Leyen highlighted these concerns, emphasising the critical importance of teenage years for brain and personality development and the susceptibility of young people to social media’s harms. In classrooms, teachers face challenges managing apps like TikTok, Snapchat, and the newer TenTen, which distract students significantly.

In response, the Francophone school federation Wallonie-Bruxelles Enseignement (WBE) has announced a comprehensive smartphone ban, arguing that previous, less restrictive measures have failed.

Why does it matter?

While some support the ban for its clarity and its positive impact on behaviour and attention spans, others warn it may prevent students from learning responsible smartphone use during their formative years. Alternatives include using monitoring apps as educational tools and integrating laptops for digital learning, offering a more balanced approach to managing technology in schools.

Pavel Durov, a transgressor or a fighter for free speech and privacy?

It has not been long since Elon Musk was harshly criticised by the British government for spreading extremist content and advocating for freedom of speech on his platform. That freedom of speech has arguably become a luxury few can afford, especially on platforms whose owners are less committed to those principles as they try to comply with the demands of governments worldwide. The British riots, during which individuals were allegedly arrested for social media posts, further illustrate the complexity of regulating speech on social media. While governments and like-minded observers may argue that such actions are necessary to curb violent extremism and keep volatile situations from escalating, others see them as a dangerous encroachment on free speech.

The line between expressing controversial opinions and inciting violence or allowing crime on social media platforms is often blurred, and the consequences of crossing it can be severe. However, let us look at a situation where someone is arrested for allegedly turning a blind eye to organised crime activities on his platform, as in the case of Telegram’s CEO. 

Namely, Pavel Durov, Telegram’s founder and CEO, became another symbol of resistance against government control over digital communications, alongside Elon Musk. His arrest in Paris on 25 August 2024 sparked a global debate on the fine line between freedom of speech and the responsibilities that come with running a platform that allows for uncensored, encrypted communication. French authorities reportedly detained Durov on an arrest warrant related to his involvement in a preliminary investigation and his unwillingness to grant authorities access to his encrypted messaging app, which has close to 1 billion users worldwide. The investigation concerns Telegram’s alleged role in enabling a wide range of crimes through insufficient moderation and a lack of cooperation with law enforcement. The charges against him—allegations of enabling criminal activities such as child exploitation, drug trafficking, terrorism, and fraud, as well as refusing to cooperate with authorities—are severe. However, they also raise critical questions about the extent to which a platform owner can or should be held accountable for the actions of its users.

Durov’s journey from Russia to France highlights the complex interplay between tech entrepreneurship and state control. He first made his mark in Russia, founding VKontakte, a platform that quickly became a refuge for political dissenters. His refusal to comply with Kremlin demands to hand over user data and sell the platform eventually forced him out of the country in 2014. By then, he had already launched Telegram in 2013, a messaging app focused on privacy and encryption, which has since become a tool for those seeking to avoid government surveillance. However, his commitment to privacy has put him at odds with various governments, leading to a life of constant movement across borders to evade legal and political challenges.

In France, Durov’s initially promising relationship with the government soured over time. Invited by President Emmanuel Macron in 2018 to consider moving Telegram to Paris, Durov even accepted French citizenship in 2021. However, the French government’s growing concerns about Telegram’s role in facilitating illegal activities, from terrorism to drug trafficking, led to increased scrutiny. The tension, as we already know, culminated in Durov’s recent detention, which is part of a broader investigation into whether platforms like Telegram enable online criminality.

Durov’s relationship with the United Arab Emirates adds another layer of complexity. After leaving Russia, Durov based Telegram in the UAE, where he was granted citizenship and received significant financial backing. However, the UAE’s restrictive political environment and stringent digital controls have made this partnership a delicate one, with Durov carefully navigating the country’s security concerns while maintaining Telegram’s operations.

The USA, too, has exerted pressure on Durov. Despite repeated attempts by US authorities to enlist his cooperation in controlling Telegram, Durov has steadfastly resisted, reinforcing his reputation as a staunch defender of digital freedom. He told Tucker Carlson in a recent interview that the FBI had approached a Telegram engineer, attempting to secretly hire him to install a backdoor that would allow US intelligence agencies to spy on users. His refusal to collaborate with the FBI has only heightened his standing as a symbol of resistance against governmental overreach in the digital realm.

Given Durov’s intriguing record of controversial tech entrepreneurship, his arrest certainly gives grounds for speculation. At the same time, it seems not just a simple legal dispute but a symbol of the growing diplomatic and legal tensions between governments and tech platforms over control of cyberspace. His journey from Russia to his current predicament in France highlights a broader issue: the universal challenge of balancing free expression with national security.

Telegram, based in Dubai and widely used across Russia and the former Soviet Union, has faced scrutiny for its role in disseminating unfiltered content, especially during the Russia-Ukraine conflict. Durov, who left Russia in 2014 after refusing to comply with government demands, has consistently maintained that Telegram is a neutral platform committed to user privacy and free speech. Additionally, his multiple citizenships, Russian (from birth, as a Soviet citizen until the 1991 dissolution of the Soviet Union), Saint Kitts and Nevis (since 2013), French (since 2021), and Emirati (since 2021), have only escalated tensions, with concerned governments pressing French President Emmanuel Macron for clarification on the matter. Even Elon Musk confronted Macron by responding directly to his post on X, claiming that ‘It would be helpful to the global public to understand more details about why he was arrested’, and describing the arrest as an attack on free speech.

Despite the unclear circumstances and the vagueness of the official evidence justifying the arrest, Durov will undoubtedly have to face the probe and answer the accusations under the laws applicable to the case. It is therefore worth examining the relevant laws and clarifying which legal measures bear on the case.

The legal backdrop to Durov’s arrest is complex, involving both US and EU laws that govern digital platforms. Section 230 of the US Communications Decency Act of 1996, often called the ‘twenty-six words that created the internet’, is the natural reference point for any discussion of platform liability. The law, in essence, protects online platforms from liability for user-generated content as long as they act in good faith to remove unlawful material. This legal shield has allowed platforms like Telegram to flourish, offering robust encryption and a promise of privacy that appeals to millions of users worldwide. However, the immunity is not absolute: Section 230 does not protect against federal criminal liability, meaning that a platform found to have knowingly allowed illegal activities to proliferate without taking adequate steps to curb them could still expose its owner to prosecution in the USA.

In the EU context, the recently implemented Digital Services Act (DSA) imposes stricter obligations on digital platforms, particularly those with significant user bases. Although Telegram, with its 41 million users in the EU, falls short of the 45-million-user threshold for the ‘very large online platforms’ (VLOP) category that would subject it to the most stringent DSA requirements, it would still be obligated to act against illegal content. The DSA emphasises transparency, accountability, and cooperation with law enforcement—a framework that contrasts sharply with Telegram’s ethos of privacy and minimal interference.


The case also invites comparisons with other tech moguls who have faced similar dilemmas. Elon Musk’s acquisition of Twitter, now rebranded as X, has been marked by his advocacy for free speech. However, even Musk has had to navigate the treacherous waters of content moderation, facing pressure from governments to combat disinformation and extremist content on his platform. The latest example is the dispute with Brazil’s Supreme Court, where X risks being ordered to shut down in Brazil over alleged misinformation and extremist content spread on the platform. The conflict has deepened tensions between Musk and Supreme Court Judge Alexandre de Moraes, whom Musk has accused of engaging in censorship.

Similarly, Mark Zuckerberg’s Meta has been embroiled in controversies over its role in child exploitation and, especially, in spreading harmful content, from political misinformation to hate speech. On the other hand, Zuckerberg’s recent admission in an official letter that, in 2021, the White House and other Biden administration officials exerted considerable pressure on Meta to suppress certain COVID-19-related content, including humour and satire, adds fuel to the fire concerning the abuse of legal measures to stifle freedom of speech through excessive, government-driven content moderation. Nevertheless, both Musk and Zuckerberg have had to strike a balance between maintaining a platform that allows open dialogue and complying with legal requirements to prevent the spread of harmful content.

The story of Chris Pavlovski, CEO of Rumble, further complicates this narrative. His decision to leave the EU following Durov’s arrest underscores the growing unease among tech leaders about the EU’s increasing regulatory pressure. Pavlovski’s departure can be seen as a preemptive move to avoid the legal and financial risks of operating in a jurisdiction that is tightening its grip on digital platforms. It also reflects a broader trend of tech companies seeking more favourable regulatory environments, often at the expense of user rights and freedoms.

All these controversial examples bring us to the heart of this debate: where to draw the line between free speech and harm prevention. Encrypted platforms like Telegram offer unparalleled privacy but pose significant challenges for law enforcement. The potential for these platforms to be used by criminals and extremists cannot be ignored. However, there is no simple solution: overzealous regulation risks stifling free expression and driving users to even more secretive and unregulated corners of the internet.

Pavel Durov’s case is a microcosm of the larger global struggle over digital rights. It forces us to confront uncomfortable questions: Do platforms like Telegram have a responsibility to monitor and control the content shared by their users, even at the cost of privacy? Should governments have the power to compel these platforms to act, or does this represent an unacceptable intrusion into the private sphere? Should social media companies that monetise content on their platforms be held responsible for the content they allow? And ultimately, how do we strike the right balance between privacy and security in the digital world we live in?

These questions will only become more pressing as we watch Durov’s and similar legal cases unfold. The outcome of his case could set a precedent that shapes the future of digital communication, influencing not just Telegram but all platforms that value user privacy and free speech. Either way, Durov’s case also highlights the inherent conflict between cyberspace and real space. There was once a notion that the online world—the domain of bits, bytes, and endless data streams—existed apart from the physical reality we live in. In the early days of the internet, this virtual space seemed like an expansive, unregulated frontier where the laws of the physical world did not necessarily apply. However, cyberspace was never a separate entity; rather, it was an extension, a layer added to the world we already knew. Therefore, the concept of punishment in the digital world has always been, and still is, rooted in the physical world. Those held responsible for crimes committed online are not confined to a virtual jail; they answer to real-world legal systems, courts, and prisons.

TikTok faces lawsuit over viral challenge death

A US appeals court has recently revived a lawsuit against TikTok, filed by the mother of a 10-year-old girl who tragically died after participating in a dangerous viral challenge on the platform. The blackout challenge, which involved users choking themselves until they lost consciousness, led to the death of Nylah Anderson in 2021.

The case hinges on the argument that TikTok’s algorithm recommended the harmful challenge to Nylah despite federal protections typically shielding internet companies from liability for user-generated content. The 3rd US Circuit Court of Appeals in Philadelphia ruled that Section 230 of the Communications Decency Act, which generally protects online platforms from such lawsuits, does not apply to algorithmic recommendations made by the company itself.

Judge Patty Shwartz, writing for the panel, explained that while Section 230 covers third-party content, it does not extend to the platform’s own content curation decisions. This ruling marks a substantial shift from previous cases, in which courts invoked Section 230 to shield platforms from liability for harmful user-generated content.

The court’s decision reflects a broader interpretation of a recent US Supreme Court ruling, which recognised that algorithms used by platforms represent editorial judgments by the companies themselves. On this view, TikTok’s algorithm-driven recommendations constitute the company’s own speech, which Section 230 does not protect.

The lawsuit, brought by Tawainna Anderson against TikTok and its parent company ByteDance, was initially dismissed by a lower court, but the appeals court has now allowed the case to proceed. Anderson’s lawyer, Jeffrey Goodman, hailed the ruling as a blow to Big Tech’s immunity protections. Meanwhile, Judge Paul Matey criticised TikTok for prioritising profits over safety, underscoring that the platform cannot claim immunity beyond what Congress has granted.

AI cats spark online controversy and curiosity – meet Chubby

A new phenomenon in the digital world has taken the internet by storm: AI-generated cats like Chubby are captivating millions with their peculiar and often heart-wrenching stories. Videos featuring these virtual felines, crafted by AI, depict them in bizarre and tragic situations, garnering immense views and engagement on platforms like TikTok and YouTube. Chubby, a rotund ginger cat, has become particularly iconic, with videos of his misadventures, from shoplifting to being jailed, resonating deeply with audiences across the globe.

[Embedded TikTok video by @mpminds: ‘Step into a new dimension where AI and humans come together!’]

These AI-generated cat stories are not just popular; they are controversial, blurring the line between art and digital spam. Content creators are leveraging AI tools to produce these videos rapidly, feeding social media algorithms that favour such content, which often leads to virality. Despite criticisms of the quality and intent behind this AI-generated content, it is clear that these videos are striking a chord with viewers, many of whom find themselves unexpectedly moved by the fictional plights of these digital cats.

The surge in AI-generated cat videos raises questions about the future of online content and the role of AI in shaping what we consume. While some see it as a disturbing trend, others argue that it represents a new form of digital art, with creators like Charles, the mastermind behind Chubby, believing that AI can indeed produce compelling and emotionally resonant material. The popularity of these videos, particularly those with tragic endings, suggests that there is a significant demand for this type of content.

As AI continues to evolve and integrate further into social media, the debate over the value and impact of AI-generated content is likely to intensify. Whether these videos will remain a staple of internet culture or fade as a passing trend remains to be seen. For now, AI-generated cats like Chubby are at the forefront of a fascinating and complex intersection between technology, art, and human emotion.

California’s child safety law faces legal setback

A US appeals court has upheld an essential aspect of an injunction against a California law designed to protect children from harmful online content. The law, known as the California Age-Appropriate Design Code Act, was challenged by NetChoice, a trade group representing major tech companies, on the grounds that it violated free speech rights under the First Amendment. The court agreed, stating that the law’s requirement for companies to create detailed reports on potential risks to children was likely unconstitutional.

The court suggested that California could protect children through less restrictive means, such as enhancing education for parents and children about online dangers or offering incentives for companies to filter harmful content. The appeals court partially overturned a lower court’s injunction but sent the case back for further review, particularly concerning provisions related to the collection of children’s data.

California’s law, modelled after a similar UK law, was set to take effect in July 2024. Governor Gavin Newsom defended the law, emphasising the need for child safety and urging NetChoice to drop its legal challenge. Despite this, NetChoice hailed the court’s decision as a win for free speech and online security, highlighting the ongoing legal battle over online content regulation.

Growing demand for AI-generated child abuse material on the dark web

According to new research from Anglia Ruskin University, there is rising interest among online offenders in learning how to create AI-generated child sexual abuse material, as evidenced by interactions on the dark web. The finding comes from an analysis of chats on dark web forums over the past 12 months, in which group members were found to be teaching each other how to create child sexual abuse material using online guides and videos and exchanging advice.

Members of these forums have drawn on their existing stores of non-AI content to learn how to make such images. Researchers Dr Deanna Davy and Prof Sam Lundrigan also revealed that some members referred to those who created AI images as artists, while others hoped the technology would soon become capable enough to make the process even easier.

Why does it matter?

This trend has massive ramifications for child safety. Dr Davy stated that the rise of AI-generated child sexual abuse material demands a greater understanding of how offenders create and share such content, especially on the part of police and public protection agencies. Prof Lundrigan added that the trend ‘adds to the growing global threat of online child abuse in all forms and must be viewed as a critical area to address in our response to this type of crime’.