Pavel Durov under formal investigation in France for alleged organised crime activities

Telegram founder Pavel Durov is under formal investigation in France for his alleged involvement in organised crime activities facilitated through the messaging platform. A French judge granted Durov bail on Wednesday, requiring him to pay €5 million, report to police twice weekly, and remain within French territory. Durov’s legal troubles stem from accusations that Telegram has been complicit in enabling illicit activities such as child exploitation, drug trafficking, and fraud, and that it has refused to cooperate with authorities.

Durov was initially arrested at a Paris airport on Saturday, sparking significant debate about the balance between free speech and law enforcement. The French authorities’ move to formally investigate Durov highlights ongoing concerns about Telegram’s role in criminal activities and its lack of cooperation with judicial requests. The investigation began in February and is part of a broader effort by European law enforcement to hold tech platforms accountable for illegal activities.

The situation has strained diplomatic relations between France and Russia. Russian officials, including Foreign Minister Sergei Lavrov, have criticised France’s actions, while the Kremlin has offered support to Durov on account of his Russian citizenship. However, Durov holds multiple citizenships, including French and Emirati, which complicates the situation.

Telegram, known for its commitment to free speech and privacy, has faced criticism for being a platform where extremist groups, conspiracy theorists, and political dissidents can operate with little oversight. Despite these concerns, Telegram has defended its moderation practices, stating that they comply with EU law and are continually improving.

French President Emmanuel Macron, an active Telegram user who granted Durov French citizenship in 2021, has insisted that the legal actions against Durov are not politically motivated. As the investigation continues, it will test the limits of free speech and the responsibility of tech companies in the digital age.

NCA seeks input on network message guidelines

The National Communications Authority (NCA) is conducting a public consultation on its draft Guidelines for the Management of Network Promotional Messages. The guidelines are designed to set industry standards for the transmission of such messages, ensuring they are sent in a legal, ethical, and transparent manner. They also aim to protect consumer rights by introducing clear opt-in and opt-out mechanisms, regulating the frequency and timing of messages, and standardising sender identification for better consumer recognition.

The consultation, which began on 2 August 2024, is ongoing and will conclude on 19 September 2024. The NCA encourages all stakeholders, including service providers, consumer advocacy groups, and the general public, to participate by reviewing the draft guidelines and providing feedback. The NCA has reaffirmed its commitment to transparency by announcing that all submissions will be treated as non-confidential and published on the NCA website as they are received.

Businesses prepare for uncertain future with AI regulation

The growing risk of AI regulation is becoming a key concern for US companies, with 27% of Fortune 500 firms citing it in their recent reports. The development of AI rules is seen as a potential threat to innovation and business practices, especially in light of state-level initiatives such as California’s SB 1047. Companies fear that such regulations could hinder AI model development and sharing, with hundreds of similar bills being proposed nationwide.

Businesses like Moody’s have voiced concerns over how AI regulation could increase compliance burdens, while others like Johnson & Johnson are mindful of global efforts, including the EU’s AI Act. Despite the potential for greater oversight, companies like Booking Holdings have acknowledged the benefits of regulating AI models to prevent biases and other risks. The White House’s Executive Order on AI and the rise of state legislation point to a future of tighter regulation.

To manage these risks, some corporations are taking matters into their own hands by implementing internal AI guidelines ahead of new laws. S&P Global has established its own AI policies to anticipate upcoming regulations but remains concerned that new laws could impede competition in the AI space. On the other hand, companies such as Nasdaq have already begun working with regulators on AI-enabled solutions, demonstrating how businesses are navigating the complex regulatory landscape.

Despite these challenges, companies are pressing ahead with AI initiatives as they seek to stay competitive. Even amid regulatory uncertainty and laws that vary from state to state, businesses are unwilling to slow their AI development, knowing their rivals are likely to push forward. Industry leaders believe thoughtful regulation could eventually benefit AI adoption if it supports responsible and innovative practices.

TikTok faces lawsuit over viral challenge death

A US appeals court has recently revived a lawsuit against TikTok, filed by the mother of a 10-year-old girl who tragically died after participating in a dangerous viral challenge on the platform. The blackout challenge, which involved users choking themselves until they lost consciousness, led to the death of Nylah Anderson in 2021.

The case hinges on the argument that TikTok’s algorithm recommended the harmful challenge to Nylah despite federal protections typically shielding internet companies from liability for user-generated content. The 3rd US Circuit Court of Appeals in Philadelphia ruled that Section 230 of the Communications Decency Act, which generally protects online platforms from such lawsuits, does not apply to algorithmic recommendations made by the company itself.

Judge Patty Shwartz, writing for the panel, explained that while Section 230 covers third-party content, it does not extend to the platform’s own content curation decisions. This ruling marks a substantial shift from previous cases in which courts applied Section 230 to shield platforms from liability for harmful user-generated content.

The court’s decision reflects a broader interpretation of a recent US Supreme Court ruling, which recognised that algorithms used by platforms represent editorial judgments by the companies themselves. According to this view, TikTok’s algorithm-driven recommendations are considered the company’s speech, not protected by Section 230.

The lawsuit, brought by Tawainna Anderson against TikTok and its parent company ByteDance, was initially dismissed by a lower court, but the appeals court has now allowed the case to proceed. Anderson’s lawyer, Jeffrey Goodman, hailed the ruling as a blow to Big Tech’s immunity protections. Meanwhile, Judge Paul Matey criticised TikTok for prioritising profits over safety, underscoring that the platform cannot claim immunity beyond what Congress has granted.

TikTok faces new challenges as key leader exits

Nicole Iacopetti, TikTok’s head of content strategy and policy, is set to leave the company in September, marking a significant change in the platform’s leadership. Her departure follows the earlier exit of former COO V Pappas and comes amid an ongoing reorganisation led by current COO Adam Presser.

TikTok’s strategy is evolving as the platform grows, aiming to cater to an older audience. According to industry insights, content is becoming more complex and engaging, with a notable trend toward interactive elements like online games, which have gained popularity among users over 30.

The platform has faced intense scrutiny from US lawmakers, who have raised concerns over data privacy and its connections to China, leading to discussions of a potential ban. Despite these challenges, TikTok remains a powerful tool for reaching younger audiences, particularly in the political sphere, where it is used to engage younger voters.

As TikTok navigates these changes, the platform’s influence in the political landscape is expected to grow, and the next US president will likely need to acknowledge its power in connecting with voters in a more personal and dynamic way.

French Ministry of Justice weighs charges against Telegram CEO

The French Ministry of Justice is weighing charges against Pavel Durov to decide whether he will be placed under formal investigation following his recent arrest as part of a probe into organised crime on the messaging platform. Durov, who was detained on Saturday evening after landing at a Paris airport on a private jet, now faces scrutiny over the potential criminal liability of app providers, amid a broader debate about the balance between freedom of speech and law enforcement.

Telegram boasts nearly 1 billion users and is particularly popular in Russia, Ukraine, and former Soviet republics. Being placed under formal investigation in France does not imply guilt but signals that judges believe sufficient evidence exists to continue the probe. Such investigations can take years before a case goes to trial or is shelved. If Durov is placed under formal investigation, judges will also consider whether he should be held in pretrial detention, particularly if there is concern he might flee.

Currently, the broader investigation is focused on unidentified individuals and examines allegations including facilitating illicit transactions, possession of child sexual abuse material, drug trafficking, fraud, withholding information from authorities, and providing cryptographic services to criminals. The prosecutor’s office has not clarified which specific charges, if any, Durov might face, saying only that an update on the investigation is expected soon.

Durov’s French lawyer has not responded to repeated requests for comment. His arrest has exacerbated tensions between Russia and France, especially given France’s support for Ukraine in its ongoing conflict with Russia. President Emmanuel Macron has officially stated that the arrest was not politically motivated.

Durov has been in police custody since his arrest on Saturday and can be held for a maximum of 96 hours, or four days, before judges must decide whether to proceed with a formal investigation.

Social media platforms urged to comply with new Malaysian regulations

Malaysia is moving forward with a plan to regulate social media platforms by requiring them to obtain licences if they have more than eight million users. The move aims to tackle rising cybercrime in the country, with legal action possible for non-compliance by January 2025. While tech industry group Asia Internet Coalition (AIC), whose members include Google, Meta, and X, raised concerns over the clarity of the regulations, Communications Minister Fahmi Fadzil stated that tech companies must respect Malaysian laws if they wish to continue operating.

The AIC initially called for a pause on the plan in an open letter to Prime Minister Anwar Ibrahim, describing the licensing regime as ‘unworkable’ and warning it could stifle innovation by placing undue burdens on businesses. The group also highlighted the need for formal public consultation, leaving uncertainty regarding the obligations imposed on social media companies. However, the letter was later edited, removing references to the regulations being ‘unworkable’ and deleting a list of AIC member companies.

Malaysia’s communications ministry has remained firm on the new regulations, asserting that the country’s laws stand above the tech giants operating within its borders. The government has been in discussions with industry representatives and plans to conduct a public inquiry to gather feedback from industry players and the public on the regulatory framework. Despite the objections from the AIC, implementation of the new licensing regime is set to proceed without delay.

Calls for ‘digital vaccination’ of children to combat fake news

A recently published report by the University of Sheffield and its research partners proposes implementing a ‘digital vaccination’ for children to combat misinformation and bridge the digital divide. The report sets out recommendations for digital upskilling and innovative approaches to address the digital divide that hampers the opportunities of millions of children in the UK.

The authors warn of severe economic and educational consequences if these issues go unaddressed, highlighting that over 40% of UK children lack access to broadband or a device and that digital skills shortages cost the UK £65 billion annually.

The report calls for adopting the Minimum Digital Living Standards framework to ensure every household has the digital infrastructure it needs. It also stresses the need for improved digital literacy education in schools, teacher training, and new government guidance to mitigate online risks, including fake news.

Iran bans VPNs to tighten internet control

Iran has officially banned virtual private networks (VPNs) as part of a broader effort to tighten control over internet access. The directive, issued by the Supreme Council of Cyberspace and endorsed by Supreme Leader Ayatollah Ali Khamenei, prohibits the use of VPNs unless authorised by the authorities. The regulation is particularly alarming for many Iranians, who have relied on VPNs to bypass extensive internet censorship and access blocked content, including popular social media platforms like Facebook, Twitter, and Instagram, as well as streaming services such as YouTube and Netflix.

The motivations behind the crackdown are complex, with reports suggesting that some officials may profit from the VPN trade, indicating that the ban may serve to eliminate competition rather than solely address national security concerns. Furthermore, the lack of clarity regarding enforcement leaves citizens uncertain about the potential consequences of violations.

Public reaction has been largely negative, as the ban criminalises the everyday behaviour of many citizens. Critics argue that it reflects a deep-seated fear of popular mobilisation should people gain unrestricted access to information. This sentiment has garnered international attention, with the United States condemning the ban and reaffirming its commitment to supporting internet freedom in Iran.

Overall, the VPN ban is part of a broader trend of digital repression in Iran, especially following the protests sparked by Mahsa Amini’s death in 2022. As the government tightens its control, the tension between state authority and citizens’ right to access information poses significant challenges to digital rights in the country.

Zuckerberg alleges Biden admin pressured Meta on COVID censorship

Meta Platforms CEO Mark Zuckerberg has disclosed in a recent letter that senior Biden administration officials pressured his company to censor COVID-19 content during the pandemic. The letter, sent on 26 August to the US House Judiciary Committee, reveals Zuckerberg’s regret over not publicly addressing this pressure sooner and his acknowledgement of questionable content removal decisions made by Meta.

Zuckerberg detailed in the letter that, in 2021, the White House and other Biden administration officials exerted considerable pressure on Meta to suppress certain COVID-19-related content, including humour and satire. According to Zuckerberg, officials expressed frustration when Meta did not fully comply.

The letter, which the Judiciary Committee shared on Facebook, highlights Zuckerberg’s criticism of the government’s actions. He expressed regret for not being more vocal about the situation sooner and acknowledged that, with the benefit of hindsight, some of Meta’s content removal decisions were questionable.

The White House and Meta did not immediately respond to requests for comment made outside regular business hours. The Judiciary Committee, led by Chairman Jim Jordan, has labelled the letter a ‘big win for free speech’, noting Zuckerberg’s admission that Facebook censored some content.

Additionally, Zuckerberg announced that he would refrain from contributing to electoral infrastructure for the upcoming presidential election. The decision follows his controversial $400 million donation in 2020 through the Chan Zuckerberg Initiative, which drew criticism and legal challenges from groups that perceived it as partisan.