Australia enacts groundbreaking law banning under-16s from social media

Australia has approved a groundbreaking law banning children under 16 from accessing social media, following a contentious debate. The new rules target major platforms such as Meta, TikTok, and Snapchat, which face fines of up to A$49.5 million if they fail to keep under-16s off their services. A trial of enforcement methods begins in January 2025, with the ban due to take full effect later that year. The move comes amid growing global concern about the mental health impact of social media on young people, with several countries considering similar restrictions.

The law, which marks a significant political win for Prime Minister Anthony Albanese, has received widespread public support, with 77% of Australians backing the ban. However, it has faced opposition from privacy advocates, child rights groups, and social media companies, which argue it was rushed through without adequate consultation. Critics also warn that it could inadvertently harm vulnerable groups, such as LGBTQIA+ or migrant teens, by cutting them off from supportive online communities.

Despite the backlash, many parents and mental health advocates support the ban, citing concerns about social media’s role in exacerbating youth mental health issues. High-profile campaigns and testimonies from parents of children affected by cyberbullying have helped drive public sentiment in favour of the law. However, some experts warn the ban could have unintended consequences, pushing young people toward more dangerous corners of the internet where they can avoid detection.

The law also has the potential to strain relations between Australia and the United States, as tech companies with major US ties, including Meta and X, have voiced concerns about its implications for internet freedom. While these companies have pledged to comply, there remain significant questions about how the law will be enforced and whether it can achieve its intended goals without infringing on privacy or digital rights.

UK social media platforms criticised over safety failures

Nearly a quarter of children aged 8-17 in the UK lie about their age to access adult social media platforms, according to a new Ofcom report. The media regulator criticised current verification processes as insufficient and warned tech companies they face heavy fines if they fail to improve safety measures under the Online Safety Act, which takes effect in 2025.

The law will require platforms to implement ‘highly effective’ age assurance to prevent underage users from accessing adult content. Ofcom’s findings highlight the risks children face from harmful material online, sparking concerns from advocates like the Molly Rose Foundation, which warns that tech companies are not enforcing their own rules.

Some social media platforms, including TikTok, claim they are enhancing safety measures with machine learning and other innovations. However, BBC investigations and feedback from teenagers suggest that bypassing current systems remains alarmingly easy, with no ID verification required for account setup. Calls for stricter regulation continue as online safety concerns grow.
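The weakness Ofcom is pointing at is easy to illustrate: most sign-up flows still rest on a self-declared date of birth, which a child can simply misstate. The sketch below is a minimal, hypothetical Python example (the function names and threshold are assumptions for illustration, not any platform’s actual code) showing why self-declaration alone provides no real assurance:

```python
from datetime import date

MIN_AGE = 13  # assumed self-declared minimum age on a typical mainstream platform

def age_in_years(dob: date, today: date) -> int:
    """Whole years between a date of birth and today."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def can_register(claimed_dob: date) -> bool:
    """Naive sign-up gate: trusts whatever date of birth the user types in.

    Nothing here checks the claim against an ID document, a facial age
    estimate, or any other signal, so an underage user only needs to enter
    an earlier birth year to get through.
    """
    return age_in_years(claimed_dob, date.today()) >= MIN_AGE

# A child who simply claims a 1998 birth date passes the check:
print(can_register(date(1998, 1, 1)))  # True, whatever the user's real age
```

‘Highly effective’ age assurance, in Ofcom’s terms, means supplementing or replacing this kind of self-declaration with signals that are far harder to fake, such as ID checks or facial age estimation.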

T-Mobile prevents cyberattack, safeguarding customer data

T-Mobile has reported recent attempts by cyber attackers to infiltrate its systems. The US telecom giant confirmed that its security measures successfully prevented access to sensitive customer data, including calls, voicemails, and texts. The intrusion originated from a compromised network connected to T-Mobile’s systems, prompting the company to sever the connection.

The attackers’ behaviour resembled that of Salt Typhoon, a Chinese-linked cyber-espionage group, though T-Mobile has not confirmed their identity. The firm’s Chief Security Officer, Jeff Simon, stated that customer information remained secure and that services were not disrupted. The findings were reported to the US government for further investigation.

Simon attended a White House meeting last week to discuss escalating cyber threats. The FBI and the Cybersecurity & Infrastructure Security Agency recently disclosed an ongoing investigation into a Chinese-linked espionage campaign targeting several US telecom providers.

The broader operation reportedly infiltrated multiple companies, stealing sensitive call data and accessing private communications. The breaches compromised the devices of individuals in government and politics, including campaign staff during the 2024 US presidential election, raising national security concerns.

CrowdStrike raises forecasts amid cyberattack surge

Cybersecurity firm CrowdStrike has raised its annual revenue and profit projections, reporting a 29% rise in third-quarter revenue to $1.01 billion, which surpassed analyst estimates. The company credits robust demand for its comprehensive cybersecurity services as businesses respond to growing online threats. Recent high-profile cyberattacks on companies such as AT&T and Ticketmaster have driven greater investment in digital defences.

CrowdStrike now expects annual revenue of between $3.92 billion and $3.93 billion, up from its previous forecast, and has raised its adjusted profit-per-share estimate to $3.74–$3.76. However, a fourth-quarter revenue outlook that merely matched analyst expectations led to a slight dip in its share price.

Rival Palo Alto Networks has similarly seen strong growth, reflecting increased spending in the cybersecurity sector. Despite challenges such as a July outage, CrowdStrike maintains optimism for sustained growth, bolstered by its enhanced customer engagement programmes.

UNESCO survey finds many influencers don’t fact-check content

A new survey by UNESCO reveals that over 60% of online influencers fail to fact-check the content they share with their followers. The study, conducted by researchers at Bowling Green State University, surveyed 500 influencers across 45 countries about their content-sharing practices. It found that many influencers struggle to assess the reliability of information, with 42% relying on the number of likes and shares a post receives as a measure of credibility.

The survey also highlighted that only 37% of content creators use mainstream media as a source, with personal experience and their own research being the top sources for content. And while many influencers are aware of the challenge of misinformation, 73% expressed interest in training to better handle disinformation and online hate speech.

UNESCO is responding to this need by launching a month-long training programme designed to equip influencers with tools to combat disinformation. The course will teach content creators how to verify information, source from diverse outlets, and debunk false narratives, aiming to improve the overall quality of online information.

Margrethe Vestager reflects on EU legacy as competition chief

Margrethe Vestager, the European Union’s outgoing competition chief, is stepping down after a decade of high-profile confrontations with tech giants like Apple and Google. In an exit interview, she expressed regret over not being more aggressive in regulating Big Tech, acknowledging the continued dominance of major platforms despite billions in fines. She described her tenure as ‘partly successful,’ noting the slow pace of change in the tech landscape.

Vestager was instrumental in shaping the EU’s regulatory framework, pushing for initiatives like the Digital Markets Act (DMA) to curb monopolistic behaviour. However, she conceded that the full impact of these measures may take years to be felt. She emphasised the importance of stronger enforcement and deterrence, advocating for a bolder approach to regulating tech firms globally.

Her reflections also highlighted the role of the Digital Services Act (DSA) in overseeing social media platforms and addressing harmful content. Platforms like X and Telegram, which face criticism for inadequate content moderation, were pointed out as examples of why robust regulation is necessary. Vestager stressed that platforms undermining democracy must comply with the EU’s stringent laws.

As she prepares to transition to academia, Vestager’s departure marks the end of an era. While her legacy includes significant strides in holding tech companies accountable, the ongoing influence of these firms signals that the battle for better regulation is far from over. Teresa Ribera Rodríguez will succeed her, tasked with continuing this critical work.

Australian parliament advances social media restrictions for kids

Australia’s House of Representatives passed a groundbreaking bill on Wednesday aiming to ban social media use for children under 16. The bill, supported by Prime Minister Anthony Albanese’s Labor government and the opposition, introduces strict measures requiring platforms to implement age-verification systems. Companies could face fines of up to A$49.5 million ($32 million) for breaches. The Senate will debate the bill next, with Albanese pushing for its approval before the year ends.

The bill follows an emotional inquiry that highlighted cyberbullying’s devastating effects, including testimony from parents of children who self-harmed. While advocates argue the ban will protect young people’s mental health, critics, including youth groups and human rights organisations, warn it risks cutting teens off from vital social connections. Tech giants like Google, Meta, and TikTok have urged the government to delay the legislation until a proposed age-verification trial concludes in 2025.

Despite these concerns, public opinion overwhelmingly supports the ban, with recent polls showing 77% approval. Parent advocacy groups have praised the initiative as a critical step in addressing the negative impacts of social media on children. However, critics within parliament and civil rights groups have called for more nuanced solutions, emphasising the importance of balancing protection with privacy and self-expression rights.

If the bill passes, Australia will become a global leader in stringent social media regulation, but the debate over how best to safeguard young users while respecting their freedoms is far from over.

Tech giants push back against Australia’s social media ban for children

Google and Meta are urging the Australian government to delay a proposed law that would prohibit social media use for children under 16, citing insufficient time to evaluate its potential effects. Prime Minister Anthony Albanese’s government aims to pass the bill, which includes some of the strictest child social media controls globally, before the parliamentary year ends on Thursday. However, critics argue the rushed timeline undermines thorough debate and expert input.

The bill requires social media platforms, not parents or children, to implement age-verification systems, potentially involving biometrics or government ID checks. Platforms that fail to comply could face fines of up to A$49.5 million ($32 million). While the Liberal opposition is likely to support the legislation, some independents and tech companies such as TikTok and Elon Musk’s X have raised concerns about its clarity and its impact on human rights, including freedom of expression and access to information.
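For a sense of what that obligation could look like in practice, the sketch below is an illustrative assumption, not anything prescribed by the bill; every name in it is hypothetical. It treats a self-declared age as insufficient and only lets an account stand once a stronger verification method has established that the holder is 16 or over:

```python
from dataclasses import dataclass
from enum import Enum, auto

MIN_AGE = 16  # threshold set by the Australian bill

class Method(Enum):
    SELF_DECLARED = auto()    # user typed in a birth date
    GOVERNMENT_ID = auto()    # checked against an identity document
    FACIAL_ESTIMATE = auto()  # biometric age estimation

# Verification methods the platform is willing to rely on
STRONG_METHODS = {Method.GOVERNMENT_ID, Method.FACIAL_ESTIMATE}

@dataclass
class Account:
    user_id: str
    age: int        # age as established by `method`
    method: Method

def may_hold_account(account: Account) -> bool:
    """Allow the account only if the holder is verifiably 16 or older.

    A self-declared age is not treated as verification: until a strong
    method confirms the age, the platform refuses or suspends the account
    rather than risk the systemic failures the bill penalises.
    """
    return account.method in STRONG_METHODS and account.age >= MIN_AGE

print(may_hold_account(Account("u1", 17, Method.GOVERNMENT_ID)))    # True
print(may_hold_account(Account("u2", 17, Method.SELF_DECLARED)))    # False: unverified
print(may_hold_account(Account("u3", 15, Method.FACIAL_ESTIMATE)))  # False: under 16
```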

Tech companies argue the government should wait for the results of an age-verification trial before proceeding. TikTok called the bill rushed and poorly consulted, while Meta described it as “inconsistent and ineffective.” Meanwhile, Elon Musk criticised the bill as a potential tool for broader internet control, amplifying debates over balancing child safety with digital freedoms.

As a Senate committee prepares a report on the legislation, the controversy underscores the global challenge of regulating children’s online activity without infringing on broader rights.

US court to decide TikTok’s future amid ByteDance divestment law

A United States federal appeals court is set to rule by 6 December on whether ByteDance, TikTok’s Chinese parent company, must divest its US operations or face a ban. The ruling will address national security concerns raised by the Justice Department, which alleges that TikTok’s Chinese ownership poses risks because of its access to vast amounts of American user data. ByteDance has challenged the law as unconstitutional, arguing it unfairly targets TikTok and violates free speech.

The three-judge panel could uphold the law, leading to a likely appeal by ByteDance. Alternatively, the court might allow the law but criticise its fairness, requiring further certification of TikTok as a security risk. A ruling deeming the law unconstitutional could halt efforts to force ByteDance to sell TikTok’s US assets. Any outcome may result in further legal battles, including an appeal to the Supreme Court.

The case underscores tensions between US national security priorities and free market principles, with over 170 million Americans actively using TikTok. The final decision could shape the future of tech regulation and US-China relations.

Meta proposes EU standards for teen safety online

Meta has proposed a unified system of age verification and safety standards across the EU to better protect teenagers online. The plan would require parental approval for app downloads by users under 16, with app stores notifying parents to obtain consent. Meta also advocates consistent age-appropriate content guidelines and parental supervision tools for teen accounts.
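As a rough sketch of the app-store flow Meta describes (all class, method, and account names below are hypothetical assumptions for illustration, not Meta’s or any app store’s actual API), a download request from an under-16 account would be held until a linked parent responds:

```python
from dataclasses import dataclass, field

PARENTAL_APPROVAL_AGE = 16  # age threshold in Meta's proposal

@dataclass
class DownloadRequest:
    child_id: str
    app_name: str
    status: str = "pending"  # pending -> approved / denied

@dataclass
class AppStore:
    parent_of: dict[str, str]  # child account -> linked parent account
    ages: dict[str, int]       # account -> age on record
    notifications: list[tuple[str, str]] = field(default_factory=list)

    def request_download(self, child_id: str, app_name: str) -> DownloadRequest:
        """Start a download; under-16 accounts trigger a consent request to the parent."""
        req = DownloadRequest(child_id, app_name)
        if self.ages[child_id] >= PARENTAL_APPROVAL_AGE:
            req.status = "approved"  # older users download directly
        else:
            parent = self.parent_of[child_id]
            self.notifications.append((parent, f"Approve {app_name} for {child_id}?"))
        return req

    def respond(self, req: DownloadRequest, approved: bool) -> None:
        """Record the parent's decision; the download proceeds only if approved."""
        req.status = "approved" if approved else "denied"

store = AppStore(parent_of={"teen1": "parent1"}, ages={"teen1": 14})
req = store.request_download("teen1", "ExampleApp")
print(store.notifications)  # [('parent1', 'Approve ExampleApp for teen1?')]
store.respond(req, approved=True)
print(req.status)           # 'approved'
```

Placing the consent step at the app-store level, rather than inside each individual app, is the core of what Meta is asking EU lawmakers to standardise.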

The proposal follows calls from incoming EU technology commissioner Henna Virkkunen, who emphasised protecting minors as a priority. Meta’s global head of safety, Antigone Davis, highlighted the fragmented nature of current European regulations, urging the adoption of uniform rules to ensure better protections for teens.

Although some EU frameworks like the Digital Services Act and Audiovisual Media Services Directive touch on youth safety, the lack of EU-wide standards leaves much to member states. Meta’s proposal aligns with ongoing discussions around the Child Sexual Abuse Material regulation, which aims to enhance online protections for minors.