X must pay fine over child protection dispute

An Australian court has upheld a ruling requiring Elon Musk’s X, previously known as Twitter, to pay a $418,000 fine. The fine was issued for failing to cooperate with a request from the eSafety Commissioner regarding anti-child-abuse measures on the platform.

X had contested the penalty, arguing that it was no longer bound by regulatory obligations following a corporate restructure under Musk’s ownership. However, the court ruled that the platform was still required to respond to the request made by the Australian internet safety regulator.

The eSafety Commissioner stated that accepting X’s argument could have set a worrying precedent for foreign companies merging to avoid regulatory responsibilities. Civil proceedings against X have also begun due to its noncompliance.

Musk’s platform has clashed with authorities in Australia before, notably in a case where X refused to remove content showing a stabbing incident. The company claimed that one country should not dictate global online content.

TikTok faces lawsuit in Texas over child privacy breach

Texas Attorney General Ken Paxton has filed a lawsuit against TikTok, accusing the platform of violating children’s privacy laws. The lawsuit alleges that TikTok shared personal information of minors without parental consent, in breach of Texas’s Securing Children Online through Parental Empowerment Act (SCOPE Act).

The legal action seeks an injunction and civil penalties, with fines up to $10,000 per violation. Paxton claims TikTok failed to provide adequate privacy tools for children and allowed data to be shared from accounts set to private. Targeted advertising to children was also a concern raised in the lawsuit.

TikTok’s parent company, ByteDance, is accused of prioritising profits over child safety. Paxton stressed the importance of holding large tech companies accountable for their role in protecting minors online.

The case was filed in Galveston County court, with TikTok yet to comment on the matter. The lawsuit represents a broader concern about the protection of children’s online privacy in the digital age.

EU questions YouTube, TikTok, and Snapchat over algorithms

The European Commission has requested information from YouTube, Snapchat, and TikTok regarding the algorithms used to recommend content to users. Concerns have been raised about the influence of these systems on issues like elections, mental health, and protecting minors. The inquiry falls under the Digital Services Act (DSA), aiming to address potential systemic risks, including the spread of illegal content such as hate speech and drug promotion.

TikTok faces additional scrutiny over its measures to prevent bad actors from manipulating the platform, especially during elections. The platforms must provide detailed information on these systems by 15 November. Failure to comply could result in further action, including potential fines.

The DSA mandates that major tech companies take more responsibility in tackling illegal and harmful content. The EU has previously initiated similar non-compliance proceedings against other tech giants, including Meta, AliExpress, and TikTok, over content regulation.

This latest request reflects the EU’s ongoing efforts to ensure greater accountability from social media platforms. The focus remains on protecting users and maintaining a fair and safe digital environment.

Cloudflare partners with ISPs to enhance internet security and privacy for users worldwide

Cloudflare has partnered with internet service providers and network equipment makers to improve the safety and privacy of internet users globally. By offering Cloudflare’s DNS resolvers at no cost, these providers can deliver advanced security features that are crucial in today’s digital landscape.

The partnership enables ISPs and equipment manufacturers to improve their service offerings and ensures that users can enjoy a safer browsing experience at no additional cost. With children spending more time online, particularly during the COVID-19 pandemic, the demand for protective measures has never been greater.

Cloudflare’s initiatives, such as the launch of 1.1.1.1 for Families, allow these partners to implement content filtering and security features tailored specifically for households. The strategic alignment ensures that families can confidently navigate the internet, knowing that harmful content is being filtered and their online activities are shielded from threats.
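For readers curious about what “pointing a household at a filtering resolver” actually involves, the short Python sketch below illustrates the idea. It uses the dnspython library and Cloudflare’s publicly documented 1.1.1.1 for Families addresses (1.1.1.3 and 1.0.0.3 for the malware-and-adult-content tier); the function name and the note about how blocked domains are answered are illustrative assumptions, not Cloudflare’s or any ISP’s actual integration code.

```python
# Minimal sketch: querying Cloudflare's public "1.1.1.1 for Families" resolvers
# with dnspython. The resolver IPs come from Cloudflare's public documentation;
# the rest (domain choice, blocked-answer handling) is illustrative only.
import dns.resolver

FAMILIES_RESOLVERS = ["1.1.1.3", "1.0.0.3"]  # malware + adult-content filtering tier


def lookup(domain: str) -> list[str]:
    """Resolve a domain through the filtering resolvers and return its A records."""
    resolver = dns.resolver.Resolver(configure=False)  # ignore the system's own DNS settings
    resolver.nameservers = FAMILIES_RESOLVERS
    try:
        answer = resolver.resolve(domain, "A")
        return [record.to_text() for record in answer]
    except dns.resolver.NXDOMAIN:
        return []  # domain does not exist


if __name__ == "__main__":
    # Filtered domains are typically answered with a blocking address such as 0.0.0.0,
    # so a device pointed at these resolvers simply never reaches the filtered site.
    print(lookup("example.com"))
```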

Furthermore, Cloudflare, alongside ISPs and network equipment providers, addresses the challenges users face in setting up effective online protections. Many consumers find configuring DNS settings and implementing security features daunting. To tackle this issue, Cloudflare is working with its partners to simplify the setup process.

By integrating Cloudflare’s services directly into their platforms, ISPs can provide a seamless user experience that encourages the adoption of these important safety measures. This collaborative approach ensures that even the least tech-savvy users can benefit from enhanced security without feeling overwhelmed.

Why does this matter?

Cloudflare, internet service providers, and network equipment providers understand the need for flexible, customisable solutions to meet diverse user needs. With Cloudflare’s Gateway product, ISPs can offer advanced filtering options that let users tailor their online experience, including content restrictions and scheduling, such as limiting social media access. These customisable options empower users to control their online safety while boosting customer satisfaction and loyalty.

New law targets excessive phone use in California schools

California has introduced a new law requiring schools to limit or ban the use of smartphones to combat rising concerns about their impact on mental health and education. Governor Gavin Newsom signed the bill following increasing evidence linking excessive phone use with anxiety, depression, and learning difficulties.

California is joining thirteen other states, including Florida, which introduced a similar ban last year. The Los Angeles Unified School District, the state’s largest, already banned phones for its 429,000 students earlier this year. The law, aimed at promoting student focus and social development, reflects a broader national movement to reduce smartphone use among young people.

Surgeon General Vivek Murthy has warned of the growing mental health crisis associated with social media, comparing it to the dangers of smoking. Studies in the US suggest that teenagers spending more than three hours a day on social media are at increased risk of mental illness, with average usage exceeding four hours daily.

School boards across California will be required to implement policies limiting phone use by July 2026, with updates every five years. Newsom stressed the importance of addressing the issue early to improve students’ wellbeing and academic focus.

Snapchat’s balance between user safety and growth remains a challenge

Snapchat is positioning itself as a healthier social media alternative for teens, with CEO Evan Spiegel emphasising the platform’s different approach at the company’s annual conference. Recent research from the University of Amsterdam supports this view, showing that while platforms like TikTok and Instagram negatively affect youth mental health, Snapchat use appears to have positive effects on friendships and well-being.

However, critics argue that Snapchat’s disappearing messages feature can facilitate illegal activities. Matthew Bergman, an advocate for social media victims, claimed the platform has been used by drug dealers, citing instances of children dying from fentanyl poisoning after buying drugs via the app. Despite these concerns, Snapchat remains popular, particularly with younger users.

Industry analysts recognise the platform’s efforts but highlight its ongoing challenges. As Snapchat continues to grow its user base, balancing privacy and safety with revenue generation remains a key issue, especially as it struggles to compete with bigger players like TikTok, Meta, and Google for advertising.

Snapchat’s appeal lies in its low-pressure environment, with features like disappearing stories and augmented reality filters. Young users, like 14-year-old Lily, appreciate the casual nature of communication on the platform, while content creators praise its ability to offer more freedom and reduce social pressure compared to other social media platforms.

Telegram to share user data with authorities

Telegram has relaxed its policy and will now provide users’ IP addresses and phone numbers to authorities in response to valid legal requests. The shift, announced by CEO Pavel Durov, marks a significant change for the platform, which has long been known for its resistance to government data demands. The update comes in the wake of Durov’s recent legal troubles in France, where he is facing charges related to the spread of child abuse material on the platform.

Durov, under investigation since his arrest in France last month, says the new measures are part of broader efforts to deter criminal activity on Telegram. Historically, Telegram has been criticised for its lax approach to moderation, often ignoring government requests to remove illegal content or share information on suspected criminals. The app now uses AI and human moderators to hide problematic content from search results.

The case against Durov has intensified scrutiny of Telegram’s role in facilitating illegal activities. French authorities have accused Durov of refusing to cooperate with law enforcement by not providing data for wiretaps related to criminal investigations. Durov denies the charges and has remained in France while the inquiry continues.

Why does this matter?

Telegram has long been a tool for activists and dissidents, especially in countries like Russia and Iran, where it has been used to challenge authoritarian regimes. However, the platform has also attracted extremists, conspiracy theorists, and white supremacists. In some cases, Telegram has been used to coordinate real-world attacks, leading to mounting pressure on the company to take greater responsibility.

In response to these challenges, Telegram has introduced several policy changes. Earlier this month, the platform temporarily disabled new media uploads on its Telegraph blogging tool to combat bots and scammers. These moves signal a new chapter for Telegram as it navigates the delicate balance between privacy, free speech, and public safety.

Massive data leak hits India’s Star Health

Sensitive personal and medical data belonging to millions of customers of Star Health, India’s largest standalone health insurer, has been leaked and made accessible through chatbots on Telegram. The breach exposed names, phone numbers, addresses, and even medical diagnoses. The stolen data, amounting to 7.24 terabytes and more than 31 million records, is being sold via these chatbots. Despite the insurer’s initial claims that there was no widespread compromise, numerous policy and claims documents have been publicly available for weeks, and victims were not notified even though their private details were being openly traded.

Telegram, known for its rapid growth fuelled by customisable chatbots, is under heightened scrutiny as these bots become tools for cybercriminals. Even as Telegram removes them, new bots emerge offering the stolen data. The situation underscores the ongoing difficulties Indian companies face in protecting sensitive information as hackers increasingly exploit modern platforms for illicit activities.

Star Health has informed local authorities about the breach, but millions of customers remain vulnerable to identity theft and fraud. This incident highlights major concerns about the safety of sensitive information in India’s digital landscape, emphasising the urgent need for stronger data protection laws and cybersecurity measures.

Meta introduces new Instagram teen accounts

Meta is set to overhaul Instagram’s privacy settings for users under 18, introducing stricter controls to protect young users. Accounts for teenagers will now be private by default, ensuring only approved connections can message or tag them. The move comes amid growing concerns over the negative impact of social media on youth, with studies highlighting links to mental health issues such as depression and anxiety.

Parents will have more authority over their children’s accounts, including monitoring who they engage with and setting restrictions on app usage. Teens under 16 will need parental permission to change default settings. The update also includes new features like a 60-minute daily usage reminder and a default “sleep mode” to mute notifications overnight.

Social media platforms, including Meta’s Instagram, have faced numerous lawsuits, with critics arguing that these apps have addictive qualities and contribute to rising mental health problems in teenagers. Recent US legislation seeks to hold platforms accountable for their effects on young users, pushing Meta to introduce these changes.

The rollout will take place in the US, UK, Canada, and Australia within the next two months, with European Union users following later. Global adoption of the new teen accounts is expected by January next year.

Telegram’s Pavel Durov faces criminal probe in France under LOPMI law

France has taken a bold legal step with its new law, targeting tech executives whose platforms enable illegal activities. The pioneering legislation, enacted in January 2023, puts France at the forefront of efforts to curb cybercrime. The law allows for criminal charges against tech leaders, like Telegram CEO Pavel Durov, for complicity in crimes committed through their platforms. Durov is under formal investigation in France, facing potential charges that could carry a 10-year prison sentence and a €500,000 fine. He denies Telegram’s role in facilitating illegal transactions, stating that the platform complies with EU regulations.

The so-called LOPMI law (Loi d’Orientation et de Programmation du Ministère de l’Intérieur, no. 2023-22), unique in its scope, has yet to be tested in court, making France the first country to directly target tech executives in this way. Legal experts point out that no similar laws exist in the US or elsewhere in the Western world.

While the US has prosecuted individuals like Ross Ulbricht, founder of the Silk Road marketplace, those cases required proof of active involvement in criminal activity. However, French law seeks to hold platform operators accountable for illegal actions facilitated through their sites, even if they were not directly involved.

Prosecutors in Paris, led by Laure Beccuau, have praised the law as a powerful tool in their fight against organised cybercrime, including child exploitation, credit card trafficking, and denial-of-service attacks. The recent high-profile arrest of Durov and the shutdown of other criminal platforms like Coco highlight France’s aggressive stance in combating online crime. The J3 cybercrime unit overseeing Durov’s case has been involved in other relevant investigations, including the notorious case of Dominique Pelicot, who used the anonymous chat forum Coco to orchestrate heinous crimes.

While the law gives French authorities unprecedented power, legal and academic experts caution that its untested nature could lead to challenges in court. Nonetheless, France’s new cybercrime law marks a serious escalation in the global battle against online criminal activity.