Australia is preparing to introduce age restrictions for social media use to protect children’s mental and physical health. Prime Minister Anthony Albanese announced the plan, emphasising that the government would conduct an age verification trial before finalising the laws, likely setting the minimum age between 14 and 16. Albanese stressed the importance of reducing children’s reliance on social media in favour of real-life activities, citing growing concerns about the harmful effects of digital platforms.
The proposed law would make Australia one of the first countries to implement such a restriction. However, past attempts by the EU have faced resistance over concerns about limiting minors’ online rights. Tech giants such as Meta, the parent company of Facebook and Instagram, whose platforms currently set a self-imposed minimum age of 13, have responded cautiously, calling for tools that empower young users rather than outright restrictions.
Why does this matter?
Australia’s move comes amid a parliamentary inquiry into social media’s impact on society, where testimonies have highlighted its negative influence on teenagers’ mental health. However, critics warn that the law may backfire, potentially driving younger users into unregulated, hidden areas of the internet. Digital rights advocates and experts from institutions like the Queensland University of Technology have expressed concerns, arguing that exclusion from mainstream platforms could harm children’s digital engagement and safety.
Australia’s eSafety Commissioner has also noted that restriction-based approaches may limit access to critical support services for younger users. As the debate continues, social media industry groups urge the government to consult with experts to ensure the policy does not inadvertently expose children to greater risks online.
New Mexico has filed a lawsuit against Snap Inc, alleging that Snapchat’s design facilitates the sharing of child sexual exploitation material. Attorney General Raul Torrez stated that a months-long investigation found Snapchat to be a key platform for sextortion, where predators coerce minors into sending explicit content.
Snap said it is reviewing the complaint and will respond in court. The company has invested significant funds into trust and safety measures and continues to work with law enforcement and safety experts to combat such issues.
Snapchat is widely used by teens due to its disappearing message feature, which has been criticised for misleading users. According to Torrez, predators can permanently capture the content, creating a virtual collection of child sexual images that are shared indefinitely.
As part of the investigation, investigators opened a decoy Snapchat account and discovered 10,000 records of child sexual abuse material on the dark web, with Snapchat identified as a major source of such content on those sites. New Mexico also sued Meta last December on similar grounds.
France has recently implemented a new ‘digital comma’ system to curb mobile phone use among students as the new school semester begins. Currently in a pilot phase, the policy covers 200 middle schools, where students must either hand in their phones to teachers upon arrival or seal them in lockers, ensuring they are not accessible during school hours.
The initiative is designed to reinforce a 2018 policy that prohibited elementary and middle school students from using phones on campus, a rule criticised for its lack of effective enforcement. Should this pilot prove successful over the next four months, the French government plans to extend the ban to all schools nationwide beginning next year.
The decision to enforce stricter control over mobile phone use stems from growing concerns about the negative impact of digital devices on young people’s well-being. A report by the Screen Use Expert Committee, established by President Emmanuel Macron, highlights several risks associated with excessive screen time, including sleep disorders, reduced physical activity, obesity, and vision problems. The committee advocates a gradual introduction of digital devices: children under three should have no contact with them, while those under 11 should not use mobile phones at all. For older children, it recommends limited access to phones without internet capabilities, with full access to internet-enabled phones delayed until age 15 and, even then, without access to social networking services.
This trend is not isolated to France – numerous other countries have also begun to regulate mobile phone use in schools. The UK, Germany, Italy, and several states in the United States, including California and New York, have all implemented or are considering similar restrictions. These actions reflect a growing recognition of the need to protect children from the potential harms associated with digital technology and a commitment to fostering a healthier and more focused learning environment. As the conversation around digital device usage continues to evolve, it is clear that many nations prioritise the well-being of their youth in the face of rapidly advancing technology.
It has not been long since Elon Musk was harshly criticised by the British government for spreading extremist content while advocating for freedom of speech on his platform. That freedom of speech has arguably become a luxury few can afford, especially on platforms whose owners are less committed to those principles while trying to comply with the requirements of governments worldwide. The British riots, during which individuals were allegedly arrested for social media posts, further illustrate the complexity of regulating social media. While governments and like-minded observers may argue that such actions are necessary to curb violent extremism and prevent critical situations from escalating, others see them as a dangerous encroachment on free speech.
The line between expressing controversial opinions and inciting violence or enabling crime on social media platforms is often blurred, and the consequences of crossing it can be severe. Consider, then, a situation in which someone is arrested for allegedly turning a blind eye to organised criminal activity on his platform, as in the case of Telegram’s CEO.
Pavel Durov, Telegram’s founder and CEO, has become another symbol of resistance against government control over digital communications, alongside Elon Musk. His arrest in Paris on 25 August 2024 sparked a global debate on the fine line between freedom of speech and the responsibilities that come with running a platform that allows uncensored, encrypted communication. French authorities reportedly detained Durov under an arrest warrant related to a preliminary investigation and his unwillingness to grant authorities access to his encrypted messaging app, which has over 1 billion users worldwide. The investigation concerns Telegram’s alleged role in enabling a wide range of crimes through insufficient moderation and a lack of cooperation with law enforcement. The charges against him, including allegations of enabling criminal activities such as child exploitation, drug trafficking, terrorism, and fraud, as well as refusing to cooperate with authorities, are severe. However, they also raise critical questions about the extent to which a platform owner can or should be held accountable for the actions of its users.
In 2011, Durov said the Russian government asked him to delete the accounts of anti-government people on his social media platform. He refused. After the 2014 coup in Ukraine, Durov refused to provide the Russian government with information about users involved in the event.
Durov’s journey from Russia to France highlights the complex interplay between tech entrepreneurship and state control. He first made his mark in Russia, founding VKontakte, a platform that quickly became a refuge for political dissenters. His refusal to comply with Kremlin demands to hand over user data and sell the platform eventually forced him out of the country in 2014. In the meantime, Durov had launched Telegram in 2013, a messaging app focused on privacy and encryption, which has since become a tool for those seeking to avoid government surveillance. However, his commitment to privacy has put him at odds with various governments, leading to a life of constant movement across borders to evade legal and political challenges.
In France, Durov’s initially promising relationship with the government soured over time. Invited by President Emmanuel Macron in 2018 to consider moving Telegram to Paris, Durov even accepted French citizenship in 2021. However, the French government’s growing concerns about Telegram’s role in facilitating illegal activities, from terrorism to drug trafficking, led to increased scrutiny. The tension, as we already know, culminated in Durov’s recent detention, which is part of a broader investigation into whether platforms like Telegram enable online criminality.
Durov’s relationship with the United Arab Emirates adds another layer of complexity. After leaving Russia, Durov based Telegram in the UAE, where he was granted citizenship and received significant financial backing. However, the UAE’s restrictive political environment and stringent digital controls have made this partnership a delicate one, with Durov carefully navigating the country’s security concerns while maintaining Telegram’s operations.
Pavel Durov left Russia when the government tried to control his social media company, Telegram. But in the end, it wasn’t Putin who arrested him for allowing the public to exercise free speech. It was a western country, a Biden administration ally and enthusiastic NATO member,… https://t.co/F83E9GbNHC
The USA, too, has exerted pressure on Durov. Despite repeated attempts by US authorities to enlist his cooperation in controlling Telegram, Durov has steadfastly resisted, reinforcing his reputation as a staunch defender of digital freedom. He recently told Tucker Carlson in an interview that the FBI had approached a Telegram engineer, attempting to secretly hire him to install a backdoor that would allow US intelligence agencies to spy on users. His refusal to collaborate with the FBI has only heightened his standing as a symbol of resistance against governmental overreach in the digital realm.
Given such an intriguing and controversial career, Durov’s arrest inevitably invites speculation. It appears to be not just a simple legal dispute but a symbol of the growing diplomatic and legal tensions between governments and tech platforms over control of cyberspace. His journey from Russia to his current predicament in France highlights a broader issue: the universal challenge of balancing free expression with national security.
Telegram, based in Dubai and widely used across Russia and the former Soviet Union, has faced scrutiny for its role in disseminating unfiltered content, especially during the Russia-Ukraine conflict. Durov, who left Russia in 2014 after refusing to comply with government demands, has consistently maintained that Telegram is a neutral platform committed to user privacy and free speech. His multiple citizenships, including Russian (since the dissolution of the Soviet Union in 1991, having been a Soviet citizen from birth), Saint Kitts and Nevis (since 2013), French (since 2021), and Emirati (since 2021), have only escalated tensions, with concerned governments pressing French President Emmanuel Macron for clarification on the matter. Even Elon Musk confronted Macron by responding directly to his post on X, claiming that ‘It would be helpful to the global public to understand more details about why he was arrested’, and describing the arrest as an attack on free speech.
It would be helpful to the global public to understand more details about why he was arrested
Despite the unclear circumstances and the vague official evidence justifying the arrest and the ensuing court process, Durov will have to face the probe and answer the accusations under the laws applicable to the case. It is therefore worth looking at the relevant legal frameworks and clarifying which measures actually bear on the case.
The legal backdrop to Durov’s arrest is complex, involving both US and EU laws that govern digital platforms. Section 230 of the US Communications Decency Act of 1996, often called the ‘twenty-six words that created the internet’, is the most frequently cited reference point in debates about platform liability. The law, in essence, protects online platforms from liability for user-generated content as long as they act in good faith to remove unlawful material. This legal shield has allowed platforms like Telegram to flourish, offering robust encryption and a promise of privacy that appeals to millions of users worldwide. However, the immunity is not absolute: Section 230 does not protect against federal criminal liability, which means that if a platform is found to have knowingly allowed illegal activities to proliferate without taking adequate steps to curb them, its owner can still be held liable.
In the EU context, the recently implemented Digital Services Act (DSA) imposes stricter obligations on digital platforms, particularly those with significant user bases. Although Telegram, with its 41 million users in the EU, falls below the 45-million-user threshold for the ‘very large online platform’ (VLOP) category that would subject it to the most stringent DSA requirements, it is still subject to DSA obligations to act against illegal content. The DSA emphasises transparency, accountability, and cooperation with law enforcement, a framework that contrasts sharply with Telegram’s ethos of privacy and minimal interference.
True, Brazil is controlled by a tyrannical dictator masquerading as a judge https://t.co/kkPfNRrBOh
Similarly, Mark Zuckerberg’s Meta has been embroiled in controversies over its role in child exploitation and, especially, in spreading harmful content, from political misinformation to hate speech. At the same time, Zuckerberg’s recent admission in an official letter that, in 2021, the White House and other Biden administration officials exerted considerable pressure on Meta to suppress certain COVID-19-related content, including humour and satire, adds fuel to the fire concerning government officials’ use of legal levers to stifle freedom of speech through excessive content moderation. Nevertheless, both Musk and Zuckerberg have had to strike a balance between maintaining platforms that allow open dialogue and complying with legal requirements to prevent the spread of harmful content.
When you say you are committed to freedom of expression, you are lying. We have a letter from France that proves this, without a doubt.
We had to shutdown Rumble in France because you have NO committment to freedom of expression.
The story of Chris Pavlovski, CEO of Rumble, further complicates this narrative. His decision to leave the EU following Durov’s arrest underscores the growing unease among tech leaders about the EU’s increasing regulatory pressure. Pavlovski’s departure can be seen as a preemptive move to avoid the legal and financial risks of operating in a jurisdiction that is tightening its grip on digital platforms. It also reflects a broader trend of tech companies seeking more favourable regulatory environments, often at the expense of user rights and freedoms.
All these controversial examples bring us to the heart of this debate: where to draw the line between free speech and harm prevention. Encrypted platforms like Telegram offer unparalleled privacy but pose significant challenges for law enforcement. The potential for these platforms to be used by criminals and extremists cannot be ignored. Yet there is no simple solution: overzealous regulation risks stifling free expression and driving users to even more secretive and unregulated corners of the internet.
Pavel Durov’s case is a microcosm of the larger global struggle over digital rights. It forces us to confront uncomfortable questions: Do platforms like Telegram have a responsibility to monitor and control the content shared by their users, even at the cost of privacy? Should governments have the power to compel these platforms to act, or does this represent an unacceptable intrusion into the private sphere? Should social media companies that monetise content on their platforms be held responsible for the content they allow? And ultimately, how do we strike a balance between privacy and security in the digital world we live in?
These questions will only become more pressing as we watch Durov’s and similar legal cases unfold. The outcome of his case could set a precedent that shapes the future of digital communication, influencing not just Telegram but all platforms that value user privacy and free speech. Either way, Durov’s case also highlights the inherent conflict between cyberspace and real space. There was once a notion that the online world, the domain of bits, bytes, and endless data streams, existed apart from the physical reality we live in. In the early days of the internet, this virtual space seemed like an expansive, unregulated frontier where the laws of the physical world did not necessarily apply. However, cyberspace was never a separate entity; it was an extension, a layer added to the world we already knew. The concept of punishment in the digital world has therefore always been rooted in the physical world: those who commit or enable crimes online are not confined to a virtual jail but are subject to real-world legal systems, courts, and prisons.
A recently published report by the University of Sheffield and its research partners proposes implementing a ‘digital vaccination’ for children to combat misinformation and bridge the digital divide. The report sets out recommendations for digital upskilling and innovative approaches to address the digital divide that hampers the opportunities of millions of children in the UK.
The authors warn that there could be severe economic and educational consequences without addressing these issues, highlighting that over 40% of UK children lack access to broadband or a device, and digital skills shortages cost £65 billion annually.
The report calls for adopting the Minimum Digital Living Standards framework to ensure every household has adequate digital infrastructure. It also stresses the need for improved digital literacy education in schools, teacher training, and new government guidance to mitigate online risks, including fake news.
A new phenomenon in the digital world has taken the internet by storm: AI-generated cats like Chubby are captivating millions with their peculiar and often heart-wrenching stories. Videos featuring these virtual felines, crafted by AI, depict them in bizarre and tragic situations, garnering immense views and engagement on platforms like TikTok and YouTube. Chubby, a rotund ginger cat, has become particularly iconic, with videos of his misadventures, from shoplifting to being jailed, resonating deeply with audiences across the globe.
These AI-generated cat stories are not just popular; they are controversial, blurring the line between art and digital spam. Content creators are leveraging AI tools to produce these videos rapidly, feeding social media algorithms that favour such content, which often leads to virality. Despite criticisms of the quality and intent behind this AI-generated content, it is clear that these videos are striking a chord with viewers, many of whom find themselves unexpectedly moved by the fictional plights of these digital cats.
The surge in AI-generated cat videos raises questions about the future of online content and the role of AI in shaping what we consume. While some see it as a disturbing trend, others argue that it represents a new form of digital art, with creators like Charles, the mastermind behind Chubby, believing that AI can indeed produce compelling and emotionally resonant material. The popularity of these videos, particularly those with tragic endings, suggests that there is a significant demand for this type of content.
As AI continues to evolve and integrate further into social media, the debate over the value and impact of AI-generated content is likely to intensify. Whether these videos will remain a staple of internet culture or fade as a passing trend remains to be seen. For now, AI-generated cats like Chubby are at the forefront of a fascinating and complex intersection between technology, art, and human emotion.
A US appeals court has upheld an essential aspect of an injunction against a California law designed to protect children from harmful online content. The law, known as the California Age-Appropriate Design Code Act, was challenged by NetChoice, a trade group representing major tech companies, on the grounds that it violated free speech rights under the First Amendment. The court agreed, stating that the law’s requirement for companies to create detailed reports on potential risks to children was likely unconstitutional.
The court suggested that California could protect children through less restrictive means, such as enhancing education for parents and children about online dangers or offering incentives for companies to filter harmful content. The appeals court partially overturned a lower court’s injunction but sent the case back for further review, particularly concerning provisions related to the collection of children’s data.
California’s law, modelled after a similar UK law, was set to take effect in July 2024. Governor Gavin Newsom defended the law, emphasising the need for child safety and urging NetChoice to drop its legal challenge. Despite this, NetChoice hailed the court’s decision as a win for free speech and online security, highlighting the ongoing legal battle over online content regulation.
New research by Anglia Ruskin University finds rising interest among online offenders in learning how to create AI-generated child sexual abuse material, as evident from interactions on the dark web. The finding comes from an analysis of chats in dark web forums over the past 12 months, in which members were found to be teaching each other how to create such material using online guides and videos and by exchanging advice.
Members of these forums have gathered their own supply of non-AI content to learn how to make these images. Researchers Dr Deanna Davy and Prof Sam Lundrigan also found that some members referred to those who created AI images as artists, while others hoped the technology would soon become capable enough to make the process easier.
Why does it matter?
This trend has serious ramifications for child safety. Dr Davy said that AI-generated child sexual abuse material demands a greater understanding of how offenders create and share such content, especially among police and public protection agencies. Professor Lundrigan added that this trend ‘adds to the growing global threat of online child abuse in all forms and must be viewed as a critical area to address in our response to this type of crime’.
In a groundbreaking case in the UK, a 27-year-old man named Hugh Nelson has admitted to using AI technology to create indecent images of children, a crime for which he is expected to be jailed. Nelson pleaded guilty to multiple charges at Bolton Crown Court, including attempting to incite a minor into sexual activity, distributing and making indecent images, and publishing obscene content. His sentencing is scheduled for 25 September.
The case, described by Greater Manchester Police (GMP) as ‘deeply horrifying,’ marks the first instance in the region—and possibly nationally—where AI technology was used to transform ordinary photographs of children into indecent images. Detective Constable Carly Baines, who led the investigation, emphasised the global reach of Nelson’s crimes, noting that arrests and safeguarding measures have been implemented in various locations worldwide.
Authorities hope this case will influence future legislation, as the use of AI in such offences is not yet fully addressed by current UK laws. The Crown Prosecution Service highlighted the severity of the crime, warning that the misuse of emerging technologies to generate abusive imagery could lead to an increased risk of actual child abuse.
The Federal Trade Commission (FTC), supported by the Department of Justice (DOJ), has filed a lawsuit against TikTok and its parent company ByteDance for violating children’s privacy laws. The lawsuit claims that TikTok breached the Children’s Online Privacy Protection Act (COPPA) by failing to notify and obtain parental consent before collecting data from children under 13. The case also alleges that TikTok did not adhere to a 2019 FTC consent order regarding the same issue.
According to the complaint, TikTok collected personal data from underage users without proper parental consent, using this information to target ads and build user profiles. Despite knowing these practices violated COPPA, ByteDance and TikTok allowed children to bypass age restrictions and use the platform. Even when parents requested account deletions, TikTok made the process difficult and often did not comply.
FTC Chair Lina M. Khan stated that TikTok’s actions jeopardised the safety of millions of children, and the FTC is determined to protect kids from such violations. The DOJ emphasised the importance of upholding parental rights to safeguard children’s privacy.
The lawsuit seeks civil penalties against ByteDance and TikTok and a permanent injunction to prevent future COPPA violations. The case will be heard by the US District Court for the Central District of California.