Social media platforms face penalties over child safety

The UK government is intensifying efforts to safeguard children online, with new measures requiring social media platforms to implement robust age verification and protect young users from harmful content. Technology Secretary Peter Kyle highlighted the importance of ‘watertight’ systems, warning that companies failing to comply could face significant fines or even prison terms for executives.

The measures, part of the Online Safety Act passed in 2023, will see platforms penalised for failing to address issues such as bullying, violent content, and risky stunts. Ofcom, the UK’s communications regulator, is set to outline further obligations in January, including stricter ID verification for adult-only apps.

Debate continues over the balance between safety and accessibility. While some advocate for bans similar to Australia’s under-16 restrictions, teenagers consulted by Kyle emphasised the positive aspects of social media, including learning opportunities and community connections. Research into the impact of screen time on mental health is ongoing, with new findings expected next year.

Vietnam enacts strict internet rules targeting social media and gaming

Vietnam’s new internet law, known as ‘Decree 147,’ came into effect Wednesday, requiring platforms like Facebook and TikTok to verify user identities and share data with authorities upon request. Critics view the move as a crackdown on freedom of expression, with activists warning it will stifle dissent and blur the lines between legal and illegal online activity. Under the rules, tech companies must store verified information alongside users’ names and dates of birth and remove government-designated “illegal” content within 24 hours.

The decree also impacts the booming social commerce sector by allowing only verified accounts to livestream. Additionally, it imposes gaming restrictions on minors, capping sessions at one hour each and total play at 180 minutes daily. Vietnam, with over 65 million Facebook users and a growing gaming population, may see significant disruptions in online behaviour and businesses.

Critics liken the law to China’s tight internet controls. Activists and content creators have expressed fear of persecution, citing recent examples like the 12-year prison sentence for a YouTuber critical of the government. Despite the sweeping measures, some local businesses and gamers remain sceptical about enforcement, suggesting a wait-and-see approach to the decree’s real-world impact.

Bots and disinformation test Bluesky’s resilience

Bluesky has seen explosive growth in recent months, surpassing 25 million users. The platform, which promotes decentralisation and user control, gained attention during the US elections and after X’s brief ban in Brazil. Bluesky has become an appealing alternative for those disenchanted with traditional platforms like Meta and X, offering curated features and a community-focused experience.

Rapid growth, however, has introduced significant challenges. Bots and AI-driven accounts have flooded the site, spreading misinformation and cluttering user interactions. The platform’s small team has worked swiftly to combat these issues, increasing its moderation capacity and introducing new tools to tackle impersonation and spam. Despite these efforts, the fight against AI bots and disinformation continues to grow more complex.

Bluesky’s commitment to decentralisation and user control has attracted users frustrated with larger platforms’ power dynamics. Experts caution, however, that the platform faces hurdles in maintaining its integrity while scaling its operations. Political and social fragmentation in online spaces could also limit Bluesky’s growth compared to visual-heavy platforms like TikTok and Instagram, which dominate younger audiences.

As the platform navigates its challenges, its future remains uncertain. Balancing growth, moderation, and user satisfaction will be critical to establishing Bluesky as a sustainable alternative in the competitive social media landscape.

Messaging app Viber blocked in Russia

Russian authorities have blocked access to the Viber messaging app, citing violations of rules aimed at curbing terrorism, extremism, and drug-related activities. The decision was announced by Roskomnadzor, the country’s communications regulator, marking the latest action in a series of restrictions on social media platforms.

Viber, owned by Japan’s Rakuten Group, had been a vocal opponent of Russian disinformation. Hiroshi Mikitani, Rakuten’s chief executive, previously described the app as a tool to combat propaganda, stating that the platform took a firm stance against fake news. However, Rakuten has yet to respond to the block.

This development comes amidst an ongoing digital crackdown in Russia, which has targeted various platforms perceived as threats to state narratives. Critics argue that such measures stifle free communication and independent information sharing. Viber now joins the list of restricted apps as Russia intensifies its grip on online spaces.

Britain enforces new online safety rules for social media platforms

Britain’s new online safety regime officially took effect on Monday, compelling social media platforms like Facebook and TikTok to combat criminal activity and prioritise safer design. Media regulator Ofcom introduced the first codes of practice aimed at tackling illegal harms, including child sexual abuse and content encouraging suicide. Platforms have until March 16, 2025, to assess the risks of harmful content and implement measures like enhanced moderation, easier reporting, and built-in safety tests.

Ofcom’s Chief Executive, Melanie Dawes, emphasised that tech companies are now under scrutiny to meet strict safety standards. Failure to comply after the deadline could result in fines of up to £18 million ($22.3 million) or 10% of a company’s global revenue. Britain’s Technology Secretary Peter Kyle described the new rules as a significant shift in online safety, pledging full support for regulatory enforcement, including potential site blocks.

The Online Safety Act, enacted last year, sets rigorous requirements for platforms to protect children and remove illegal content. High-risk sites must employ automated tools like hash-matching to detect child sexual abuse material. More safety regulations are expected in the first half of 2025, marking a major step in the UK’s fight for safer online spaces.

UK’s online safety rules take effect

Social media platforms operating in the UK have been given until March 2025 to identify and mitigate illegal content on their services or risk fines of up to 10% of their global revenue. The warning comes as the Online Safety Act (OSA) begins to take effect, with Ofcom, the regulator, releasing final guidelines on tackling harmful material, including child sexual abuse, self-harm promotion, and extreme violence.

Dame Melanie Dawes, Ofcom’s chief, described this as the industry’s “last chance” to reform. “If platforms fail to act, we will take enforcement measures,” she warned, adding that public pressure for stricter action could grow. Companies must conduct risk assessments by March, focusing on how such material appears and devising ways to block its spread.

While hailed as a step forward, critics argue the law leaves gaps in child safety measures. The Molly Rose Foundation and NSPCC have expressed concerns about the lack of targeted action on harmful content in private messaging and self-harm imagery. Despite these criticisms, the UK government views the Act as a reset of societal expectations for tech firms, aiming to ensure a safer online environment.

Viral tweets mislead on Indian government support for Pi Coin

Recent viral tweets have falsely claimed that the Indian government is supporting Pi Coin, citing an article from the Ministry of Ayush’s website. The article, however, was posted on a user-generated content (UGC) platform hosted on the site, not by government officials. The Ministry of Ayush, responsible for traditional medicine, has no official connection to Pi Coin, and the article appears to be user-submitted content posted to build backlinks.

Despite its appearance on a government site, the article does not represent the views or support of the Ministry of Ayush or any other Indian government body. These misleading claims were likely spread by Pi Coin promotional accounts.

Users should verify the sources of information they encounter, especially on social media, where misinformation can spread quickly. Claims that the Indian government is backing Pi Coin are false, and readers should treat such content circulating online with caution.

UK social media platforms criticised over safety failures

Nearly a quarter of children aged 8-17 in the UK lie about their age to access adult social media platforms, according to a new Ofcom report. The media regulator criticised current verification processes as insufficient and warned tech companies they face heavy fines if they fail to improve safety measures under the Online Safety Act, which takes effect in 2025.

The law will require platforms to implement ‘highly effective’ age assurance to prevent underage users from accessing adult content. Ofcom’s findings highlight the risks children face from harmful material online, sparking concerns from advocates like the Molly Rose Foundation, which warns that tech companies are not enforcing their own rules.

Some social media platforms, including TikTok, claim they are enhancing safety measures with machine learning and other innovations. However, BBC investigations and feedback from teenagers suggest that bypassing current systems remains alarmingly easy, with no ID verification required for account setup. Calls for stricter regulation continue as online safety concerns grow.

Social media fine plan dropped in Australia

Australia’s government has abandoned a proposal to fine social media platforms up to 5% of their global revenue for failing to curb online misinformation. The decision follows resistance from various political parties, making the legislation unlikely to pass the Senate.

Communications Minister Michelle Rowland stated the proposal aimed to enhance transparency and hold tech companies accountable for limiting harmful misinformation online. Despite broad public support for tackling misinformation, opposition from conservative and crossbench politicians stalled the plan.

The centre-left Labor government, currently lagging in polls, faces criticism for its approach. Greens senator Sarah Hanson-Young described the proposed law as a ‘half-baked option,’ adding to calls for more robust measures against misinformation.

Industry group DIGI, including Meta, argued the proposal merely reinforced an existing code. Australia’s tech regulation efforts are part of broader concerns about foreign platforms undermining national sovereignty.

Study finds 75% of news posts shared without reading

A new study has revealed that 75% of news-related social media posts are shared without being read, highlighting the rapid spread of unverified information. Researchers from US universities analysed over 35 million Facebook posts from 2017 to 2020, focusing on key moments in American politics. The study found that many users share links based on headlines, summaries, or the number of likes a post has received, without ever clicking to read the full article.

The study, published in Nature Human Behaviour, suggests this behaviour may be driven by information overload and the fast-paced nature of social media. Users often feel pressured to share content quickly without fully processing it, fuelling the spread of misinformation. The research also pointed out that political partisans are more likely to share news without reading, though this could also be influenced by a few highly active, partisan accounts.

To mitigate the spread of misinformation, the authors suggest social media platforms implement warnings or alerts to inform users of the risks involved in sharing content without reading it. This would help users make more informed decisions before reposting news articles.