Telegram has apparently decided to relax its privacy policy and will provide users’ IP addresses and phone numbers to authorities in response to valid legal requests. The shift in policy, announced by CEO Pavel Durov, marks a significant change for the platform, which has long been known for its resistance to government data demands. The update comes in the wake of Durov’s recent legal troubles in France, where he is facing charges related to the spread of child abuse materials on the platform.
Durov, under investigation since his arrest in France last month, says the new measures are part of broader efforts to deter criminal activity on Telegram. Historically, Telegram has been criticised for its lax approach to moderation, often ignoring government requests to remove illegal content or share information on suspected criminals. Now, using AI alongside human moderators, the app hides problematic content from search results.
Telegram has long been a tool for activists and dissidents, especially in countries like Russia and Iran, where it has been used to challenge authoritarian regimes. However, the platform has also attracted extremists, conspiracy theorists, and white supremacists. In some cases, Telegram has been used to coordinate real-world attacks, leading to mounting pressure on the company to take greater responsibility.
Sensitive personal and medical data belonging to millions of customers of Star Health, India’s largest standalone health insurer, has been leaked and made accessible through chatbots on Telegram. The breach exposes names, phone numbers, addresses, and even medical diagnoses. The stolen data, amounting to 7.24 terabytes and comprising over 31 million records, is being sold via these chatbots. Despite the insurer’s initial claims that there was no widespread compromise, numerous policy and claims documents have been publicly available for weeks. Victims were not notified of the breach, even though their private details were openly traded.
Telegram, known for its rapid growth fuelled by customisable chatbots, is under heightened scrutiny as these bots become tools for cybercriminals. Even with Telegram’s attempts to remove them, new bots emerge, offering stolen data. This situation underscores the ongoing difficulties Indian companies face in protecting sensitive information as hackers increasingly exploit modern platforms for illicit activities.
Star Health has informed local authorities about the breach, but millions of customers remain vulnerable to identity theft and fraud. This incident highlights major concerns about the safety of sensitive information in India’s digital landscape, emphasising the urgent need for stronger data protection laws and cybersecurity measures.
Meta is set to overhaul Instagram’s privacy settings for users under 18, introducing stricter controls to protect young users. Accounts for teenagers will now be private by default, ensuring only approved connections can message or tag them. The move comes amid growing concerns over the negative impact of social media on youth, with studies highlighting links to mental health issues such as depression and anxiety.
Parents will have more authority over their children’s accounts, including monitoring who they engage with and setting restrictions on app usage. Teens under 16 will need parental permission to change default settings. The update also includes new features like a 60-minute daily usage reminder and a default “sleep mode” to mute notifications overnight.
Social media platforms, including Meta’s Instagram, have faced numerous lawsuits, with critics arguing that these apps have addictive qualities and contribute to rising mental health problems in teenagers. Recent US legislation seeks to hold platforms accountable for their effects on young users, pushing Meta to introduce these changes.
The rollout will take place in the US, UK, Canada, and Australia within the next two months, with European Union users following later. Global adoption of the new teen accounts is expected by January next year.
France has taken a bold legal step with its new law, targeting tech executives whose platforms enable illegal activities. The pioneering legislation, enacted in January 2023, puts France at the forefront of efforts to curb cybercrime. The law allows for criminal charges against tech leaders, like Telegram CEO Pavel Durov, for complicity in crimes committed through their platforms. Durov is under formal investigation in France, facing potential charges that could carry a 10-year prison sentence and a €500,000 fine. He denies Telegram’s role in facilitating illegal transactions, stating that the platform complies with EU regulations.
The LOPMI law, France’s orientation and programming law for the Ministry of the Interior, outlines a €15 billion plan over five years to address future security challenges by enhancing human, legal, and budgetary resources. It aims to modernise operations through digital transformation, with nearly half the budget dedicated to digitising services, modernising investigative tools, and improving cybercrime response. The law plans to create 8,500 new jobs, improve recruitment diversity, and double police and gendarme presence on the ground by 2030. It emphasises crisis management, especially for civil security and climate events, and strengthens public order for major international events like the 2024 Olympics. Additionally, it aims to improve border security through advanced technology and cooperation.
While the US has prosecuted individuals like Ross Ulbricht, founder of the Silk Road marketplace, those cases required proof of active involvement in criminal activity. The French law, by contrast, seeks to hold platform operators accountable for illegal actions facilitated through their sites, even if they were not directly involved.
Prosecutors in Paris, led by Laure Beccuau, have praised the law as a powerful tool in their fight against organised cybercrime, including child exploitation, credit card trafficking, and denial-of-service attacks. The recent high-profile arrest of Durov and the shutdown of other criminal platforms like Coco highlight France’s aggressive stance in combating online crime. The J3 cybercrime unit overseeing Durov’s case has been involved in other relevant investigations, including the notorious case of Dominique Pelicot, who used the anonymous chat forum Coco to orchestrate heinous crimes.
While the law gives French authorities unprecedented power, legal and academic experts caution that its untested nature could lead to challenges in court. Nonetheless, France’s new cybercrime law seriously escalates the global battle against online criminal activity.
A federal judge has temporarily halted a new Utah law designed to protect minors’ mental health by regulating social media use. The law, which was due to take effect on 1 October, would have required social media companies to verify users’ ages and impose restrictions on accounts used by minors. Chief US District Judge Robert Shelby granted a preliminary injunction, stating that the law likely violates the First Amendment rights of social media companies by overly restricting their free speech.
The lawsuit, filed by tech industry group NetChoice, argued that the law unfairly targets social media platforms while exempting other websites, creating content-based restrictions. NetChoice represents major tech firms, including Meta, YouTube, Snapchat, and X (formerly Twitter). The court found their arguments convincing, holding that the law was unlikely to survive the heightened scrutiny applied to laws regulating speech.
Utah officials expressed disappointment with the ruling but affirmed their commitment to protecting children from the harmful effects of social media. Attorney General Sean Reyes stated that his office is reviewing the decision and is considering further steps. Governor Spencer Cox signed the law in March, hoping to shield minors from the negative impact of social media. Still, the legal battle underscores the complexity of balancing free speech with safeguarding children online.
The ruling is part of a broader national debate, with courts blocking similar laws in states like California, Texas, and Arkansas. Chris Marchese, director of NetChoice’s litigation centre, hailed the decision as a victory, emphasising that the law is deeply flawed and should be permanently struck down. This ongoing legal struggle reveals the challenge of finding solutions to address growing concerns over the effects of social media on youth without infringing on constitutional rights.
Australia is preparing to introduce age restrictions for social media use to protect children’s mental and physical health. Prime Minister Anthony Albanese announced the plan, emphasising that the government would conduct an age verification trial before finalising the laws, likely setting the minimum age between 14 and 16. Albanese stressed the importance of reducing children’s reliance on social media in favour of real-life activities, citing growing concerns about the harmful effects of digital platforms.
The proposed law would make Australia one of the first countries to implement such a restriction. However, past attempts by the EU have faced resistance over concerns about limiting minors’ online rights. Tech giants like Meta, the parent company of Facebook and Instagram, which currently enforces a self-imposed minimum age of 13, have responded cautiously, calling for tools that empower young users rather than outright restrictions.
Why does it matter?
Australia’s move comes amid a parliamentary inquiry into social media’s impact on society, where testimonies have highlighted its negative influence on teenagers’ mental health. However, critics warn that the law may backfire, potentially driving younger users into unregulated, hidden areas of the internet. Digital rights advocates and experts from institutions like the Queensland University of Technology have expressed concerns, arguing that exclusion from mainstream platforms could harm children’s digital engagement and safety.
Australia’s eSafety Commissioner has also noted that restriction-based approaches may limit access to critical support services for younger users. As the debate continues, social media industry groups urge the government to consult with experts to ensure the policy does not inadvertently expose children to greater risks online.
Telegram founder Pavel Durov announced that the messaging platform will tighten its content moderation policies following criticism over its use for illegal activities. The decision comes after Durov was placed under formal investigation in France for crimes linked to fraud, money laundering, and sharing abusive content. In a message to his 12.2 million subscribers, Durov stressed that most users were law-abiding but acknowledged that a small percentage were tarnishing the platform’s reputation. He vowed to transform Telegram’s moderation practices from a source of criticism to one of praise.
While details on how Telegram will improve its moderation remain sparse, Durov revealed that some features frequently misused for illegal activity had already been removed. These include disabling media uploads on a standalone blogging tool and scrapping the People Nearby feature, which scammers had exploited. The platform will now focus on showcasing legitimate businesses instead. These changes follow Durov’s arrest and questioning in France, raising significant concerns within the tech industry over free speech, platform responsibility, and content policing.
Critics, including former Meta executive Katie Harbath, warned that improving moderation would not be simple. Harbath suggested that Durov, like other tech CEOs before him, is in for a difficult task. Telegram also quietly updated its Frequently Asked Questions, removing language that previously claimed it did not monitor illegal content in private chats, signalling a potential shift in how it approaches privacy and illegal activity.
Durov also defended Telegram’s moderation efforts, stating that the platform removes millions of harmful posts and channels daily, and dismissing claims that it is a haven for illegal content. He expressed surprise at the French investigation, noting that authorities could have contacted the company’s EU representative or himself directly to address concerns.
New Mexico has filed a lawsuit against Snap Inc, alleging that Snapchat’s design facilitates the sharing of child sexual exploitation material. Attorney General Raul Torrez stated that a months-long investigation found Snapchat to be a key platform for sextortion, where predators coerce minors into sending explicit content.
Snap said it is reviewing the complaint and will respond in court. The company has invested significant funds into trust and safety measures and continues to work with law enforcement and safety experts to combat such issues.
Snapchat is widely used by teens due to its disappearing message feature, which has been criticised for misleading users. According to Torrez, predators can permanently capture the content, amassing collections of child sexual images that are then shared indefinitely.
As part of the investigation, officials set up a decoy Snapchat account and discovered 10,000 records of child sexual abuse material on dark web sites, where Snapchat was identified as a major source of such content. New Mexico also sued Meta last December for similar reasons.
An international school near Brussels, Belgium, has implemented a strict policy to curb smartphone use by requiring students to place their devices in a locker at the start of the day. If students are found using smartphones, the devices are confiscated and returned at the end of the school day.
This initiative, led by school director David Bogaerts, is set to be adopted by hundreds of schools across Brussels and Wallonia. The new Francophone community government plans to enforce a smartphone ban in primary schools and the first three years of secondary schools. This reflects a growing trend across Europe, with the Netherlands already enacting such bans and France and Ireland contemplating similar measures. The same debate is also ongoing in the US.
These bans are driven by rising concerns over distractions and cyberbullying associated with smartphones, along with the negative effects of excessive screen time on children’s mental health. European Commission President Ursula von der Leyen highlighted these concerns, emphasising the critical importance of teenage years for brain and personality development and the susceptibility of young people to social media’s harms. In classrooms, teachers face challenges managing apps like TikTok, Snapchat, and the newer TenTen, which distract students significantly.
In response, the Francophone school federation Wallonie-Bruxelles Enseignement (WBE) has announced a comprehensive smartphone ban, arguing that previous, less restrictive measures have failed.
Why does it matter?
While some support the ban for its clarity and positive impact on behaviour and attention spans, others warn it may prevent students from learning responsible smartphone use during their formative years. Alternatives include using monitoring apps as educational tools and integrating laptops for digital learning, offering a more balanced approach to managing technology in schools.
Not long ago, Elon Musk was harshly criticised by the British government for allowing extremist content to spread while advocating freedom of speech on his platform. That freedom of speech may have become a luxury few can afford, especially on platforms whose owners are less committed to such principles as they try to comply with the demands of governments worldwide. The British riots, during which individuals were allegedly arrested over social media posts, further illustrate the complexity of regulating speech on social media. While governments and their supporters may argue that such actions are necessary to curb violent extremism and prevent critical situations from escalating, others see them as a dangerous encroachment on, and undermining of, free speech.
The line between expressing controversial opinions and inciting violence or enabling crime on social media platforms is often blurred, and the consequences of crossing it can be severe. Consider, then, a situation where someone is arrested for allegedly turning a blind eye to organised crime on his platform, as in the case of Telegram’s CEO.
Pavel Durov, Telegram’s founder and CEO, became another symbol of resistance against government control over digital communications, alongside Elon Musk. His arrest in Paris on 25 August 2024 sparked a global debate on the fine line between freedom of speech and the responsibilities that come with running a platform that allows for uncensored, encrypted communication. French authorities allegedly detained Durov on an arrest warrant related to his involvement in a preliminary investigation and his unwillingness to grant authorities access to his encrypted messaging app, which has nearly a billion users worldwide. The investigation concerns Telegram’s alleged role in enabling a wide range of crimes through insufficient moderation and a lack of cooperation with law enforcement. The charges against him, which include allegations of enabling criminal activities such as child exploitation, drug trafficking, terrorism, and fraud, as well as refusing to cooperate with authorities, are severe. However, they also raise critical questions about the extent to which a platform owner can or should be held accountable for the actions of its users.
In 2011, Durov said the Russian government asked him to delete the accounts of anti-government people on his social media platform. He refused. After the 2014 coup in Ukraine, Durov refused to provide the Russian government with information about users involved in the event.
Durov’s journey from Russia to France highlights the complex interplay between tech entrepreneurship and state control. He first made his mark in Russia, founding VKontakte, a platform that quickly became a refuge for political dissenters. His refusal to comply with Kremlin demands to hand over user data and sell the platform eventually forced him out of the country in 2014. Meanwhile, Durov launched Telegram in 2013, a messaging app focused on privacy and encryption, which has since become a tool for those seeking to avoid government surveillance. However, his commitment to privacy has put him at odds with various governments, leading to a life of constant movement across borders to evade legal and political challenges.
In France, Durov’s initially promising relationship with the government soured over time. Invited by President Emmanuel Macron in 2018 to consider moving Telegram to Paris, Durov even accepted French citizenship in 2021. However, the French government’s growing concerns about Telegram’s role in facilitating illegal activities, from terrorism to drug trafficking, led to increased scrutiny. The tension, as we already know, culminated in Durov’s recent detention, which is part of a broader investigation into whether platforms like Telegram enable online criminality.
Durov’s relationship with the United Arab Emirates adds another layer of complexity. After leaving Russia, Durov based Telegram in the UAE, where he was granted citizenship and received significant financial backing. However, the UAE’s restrictive political environment and stringent digital controls have made this partnership a delicate one, with Durov carefully navigating the country’s security concerns while maintaining Telegram’s operations.
Pavel Durov left Russia when the government tried to control his social media company, Telegram. But in the end, it wasn’t Putin who arrested him for allowing the public to exercise free speech. It was a western country, a Biden administration ally and enthusiastic NATO member,… https://t.co/F83E9GbNHC
The USA, too, has exerted pressure on Durov. Despite repeated attempts by US authorities to enlist his cooperation in controlling Telegram, Durov has steadfastly resisted, reinforcing his reputation as a staunch defender of digital freedom. In a recent interview with Tucker Carlson, he said the FBI had approached a Telegram engineer, attempting to secretly hire him to install a backdoor that would allow US intelligence agencies to spy on users. His refusal to collaborate with the FBI has only heightened his standing as a symbol of resistance against governmental overreach in the digital realm.
Durov’s intriguing and controversial career as a tech entrepreneur certainly gives us reasons for speculation about his arrest. At the same time, the case seems not just a simple legal dispute but a symbol of the growing diplomatic and legal tensions between governments and tech platforms over control of cyberspace. His journey from Russia to his current predicament in France highlights a broader issue: the universal challenge of balancing free expression with national security.
Telegram, based in Dubai and widely used across Russia and the former Soviet Union, has faced scrutiny for its role in disseminating unfiltered content, especially during the Russia-Ukraine conflict. Durov, who left Russia in 2014 after refusing to comply with government demands, has consistently maintained that Telegram is a neutral platform committed to user privacy and free speech. In addition, his multiple citizenships, Russian (held from birth, first as Soviet citizenship and then, after the dissolution of the Soviet Union in 1991, as Russian), Saint Kitts and Nevis (since 2013), French (since 2021), and UAE (since 2021), have only escalated tensions, with the governments concerned pressing French President Emmanuel Macron for clarification on the matter. Even Elon Musk confronted Macron by responding directly to his post on X, claiming that ‘It would be helpful to the global public to understand more details about why he was arrested’, and describing the case as an attack on free speech.
Despite the unclear circumstances and the vagueness of the official evidence justifying the arrest, Durov will undoubtedly have to face the probe and answer the accusations under the laws governing the case. It is therefore worth looking at the relevant legal frameworks and clarifying which measures apply.
The legal backdrop to Durov’s arrest is complex, involving both US and EU laws that govern digital platforms. Section 230 of the US Communications Decency Act of 1996, often called the ‘twenty-six words that created the internet’, is the most frequently cited reference point in debates of this kind. In essence, the law protects online platforms from liability for user-generated content as long as they act in good faith to remove unlawful material. This legal shield has allowed platforms like Telegram to flourish, offering robust encryption and a promise of privacy that appeals to millions of users worldwide. However, the immunity is not absolute. Section 230 does not protect against federal criminal liability, which means that if a platform is found to have knowingly allowed illegal activities to proliferate without taking adequate steps to curb them, its operator could indeed be held liable in the US.
In the EU context, the recently implemented Digital Services Act (DSA) imposes stricter obligations on digital platforms, particularly those with significant user bases. Although Telegram, with its 41 million users in the EU, falls short of the 45-million-user threshold for the ‘very large online platforms’ (VLOP) category that would subject it to the most stringent DSA requirements, it is still obliged to act against illegal content. The DSA emphasises transparency, accountability, and cooperation with law enforcement—a framework that contrasts sharply with Telegram’s ethos of privacy and minimal interference.
True, Brazil is controlled by a tyrannical dictator masquerading as a judge https://t.co/kkPfNRrBOh
Similarly, Mark Zuckerberg’s Meta has been embroiled in controversies over its role in child exploitation and, especially, in spreading harmful content, from political misinformation to hate speech. Zuckerberg’s recent admission in an official letter that, in 2021, the White House and other Biden administration officials exerted considerable pressure on Meta to suppress certain COVID-19-related content, including humour and satire, adds fuel to the debate over government officials abusing legal measures to stifle freedom of speech and force excessive content moderation. Nevertheless, both Musk and Zuckerberg have had to strike a balance between maintaining platforms that allow for open dialogue and complying with legal requirements to prevent the spread of harmful content.
When you say you are committed to freedom of expression, you are lying. We have a letter from France that proves this, without a doubt.
We had to shut down Rumble in France because you have NO commitment to freedom of expression.
The story of Chris Pavlovski, CEO of Rumble, further complicates this narrative. His decision to leave the EU following Durov’s arrest underscores the growing unease among tech leaders about increasing regulatory pressure in the EU. Pavlovski’s departure can be seen as a preemptive move to avoid the legal and financial risks of operating in a jurisdiction that is tightening its grip on digital platforms. It also reflects a broader trend of tech companies seeking more favourable regulatory environments, often at the expense of user rights and freedoms.
All these controversial examples bring us to the heart of this debate: where to draw the line between free speech and harm prevention. Encrypted platforms like Telegram offer unparalleled privacy but pose significant challenges for law enforcement. The potential for these platforms to be used by criminals and extremists cannot be ignored. However, there is no simple solution: overzealous regulation risks stifling free expression and driving users to even more secretive and unregulated corners of the internet.
Pavel Durov’s case is a microcosm of the larger global struggle over digital rights. It forces us to confront uncomfortable questions: Do platforms like Telegram have a responsibility to monitor and control the content shared by their users, even at the cost of privacy? Should governments have the power to compel these platforms to act, or does this represent an unacceptable intrusion into the private sphere? Should social media companies that monetise content on their platforms be held responsible for the content they allow? And ultimately, how do we find the balance in the digital world we live in to optimally combine privacy and security in our society?
These questions will only become more pressing as we watch Durov’s and similar legal cases unfold. The outcome of his case could set a precedent that shapes the future of digital communication, influencing not just Telegram but all platforms that value user privacy and free speech. Either way, Durov’s case also highlights the inherent conflict between cyberspace and real space. There was once a notion that the online world, the domain of bits, bytes, and endless data streams, existed apart from the physical reality we live in. In the early days of the internet, this virtual space seemed like an expansive, unregulated frontier where the laws of the physical world did not necessarily apply. However, cyberspace was never a separate entity; rather, it was an extension, a layer added to the world we already knew. Punishment in the digital world has therefore always been, and still is, rooted in the physical world. Those held responsible for crimes committed online are not confined to a virtual jail; they are subject to real-world legal systems, courts, and prisons.