India introduces new rules for critical telecom infrastructure

The government of India introduced the Telecommunications (Critical Telecommunication Infrastructure) Rules, 2024, on 22 November. The rules require telecom entities designated as Critical Telecommunication Infrastructure (CTI) to grant government-authorised personnel access to inspect their hardware, software, and data. Issued under the Telecommunications Act, 2023, they empower the government to designate telecom networks as CTI if their disruption could severely affect national security, the economy, public health, or safety.

The rules mandate that telecom entities appoint a Chief Telecom Security Officer (CTSO) to oversee cybersecurity efforts and report incidents within six hours, a deadline relaxed from the two hours proposed in the draft rules. This brings India's telecom sector in line with the existing Telecom Cyber Security Rules and CERT-In directions, though experts argue that the six-hour window does not align with global standards and may contribute to over-regulation.

Telecom networks are already governed under the Information Technology Act, creating potential overlap with other regulatory frameworks, such as that of the National Critical Information Infrastructure Protection Centre (NCIIPC). The rules also raise concerns about inspection protocols and data access: they do not specify when inspections can be triggered or what limits apply to government personnel accessing sensitive information.

Experts have also questioned whether adequate accountability measures exist in cases of abuse of power, and whether government officials could access telecom subscribers' personal data during these inspections. To implement the rules, telecom entities must provide detailed documentation to the government, including network architecture, access lists, cybersecurity plans, and security audit reports. They must also maintain logs and documentation for at least two years to assist in detecting anomalies.

Additionally, remote maintenance or repairs from outside India require government approval, and upgrades to hardware or software must be reviewed within 14 days. Immediate upgrades are allowed during cybersecurity incidents, with notification to the government within 24 hours. A digital portal will be established to manage these rules, but concerns about the lack of transparency in communications have been raised. Finally, all CTI hardware, software, and spares must meet Indian Telecommunication Security Assurance Requirements.

Australia’s new social media ban faces backlash from Big Tech

Australia’s new law banning children under 16 from using social media has drawn strong criticism from major tech companies. The law, passed late on Thursday, targets platforms such as Meta’s Instagram and Facebook, as well as TikTok, imposing fines of up to A$49.5 million on platforms that allow minors to log in. Meta and TikTok argue that the legislation was rushed through parliament without adequate consultation and could have harmful unintended consequences, such as driving young users to less visible, more dangerous corners of the internet.

The law was introduced after a parliamentary inquiry into the harmful effects of social media on young people, with testimony from parents of children who had been bullied online. While the Australian government had warned tech companies about the impending legislation for months, the bill was fast-tracked in a chaotic final session of parliament. Critics, including Meta, have raised concerns about the lack of clear evidence linking social media to mental health issues and question the rushed process.

Despite the backlash, the law has strong political backing, and the government is set to begin a trial of enforcement methods in January, with the full ban expected to take effect by November 2025. Australia’s long-standing tensions with major US-based tech companies, including previous legislation requiring platforms to pay for news content, are also fueling the controversy. As the law moves forward, both industry representatives and lawmakers face challenges in determining how it will be practically implemented.

Dubai Police partners with Crystal Intelligence to bolster security in digital asset sector

Crystal Intelligence and Dubai Police have collaborated to address economic crimes within the rapidly growing digital asset space. By combining advanced blockchain analytics with law enforcement expertise, the two entities aim to predict and prevent financial crimes, ensuring robust security within the digital asset ecosystem.

The collaboration reflects Dubai’s commitment to remaining at the forefront of global blockchain innovation. As part of its broader strategy, the UAE, and Dubai in particular, has positioned itself as a leader in digital assets by creating a regulatory framework that fosters innovation while ensuring security and compliance.

The establishment of the Virtual Assets Regulatory Authority (VARA), the world’s first dedicated regulator for virtual assets, has attracted numerous blockchain companies and service providers to the city, further cementing Dubai’s role as a central hub for digital assets. The partnership will also strengthen Dubai Police’s capabilities through Crystal Intelligence’s advanced tools for transaction monitoring, risk management, and predictive analytics.

Why does it matter?

These tools will enable law enforcement to proactively detect and address fraudulent activities across blockchain networks, thereby ensuring the integrity of Dubai’s digital asset market. By combining regulatory foresight with cutting-edge technology, Dubai demonstrates its leadership in integrating innovation with security. Ultimately, this partnership sets a new global standard for digital asset security and offers a model for other jurisdictions to follow as they navigate the complexities of financial crimes in the digital asset space.

Mixed reactions as Australia bans social media for minors

Australia’s recent approval of a social media ban for children under 16 has sparked mixed reactions nationwide. While the government argues that the law sets a global benchmark for protecting youth from harmful online content, critics, including tech giants like TikTok, warn that it could push minors to darker corners of the internet. The law, which will fine platforms like Meta’s Facebook, Instagram and TikTok up to A$49.5 million if they fail to enforce it, takes effect one year after a trial period begins in January.

Prime Minister Anthony Albanese emphasised the importance of protecting children’s physical and mental health, citing social media’s harmful effects on body image and the spread of misogynistic content. Despite broad support (77% of Australians back the measure), opinion remains divided. Some, like Sydney resident Francesca Sambas, approve of the ban, citing concerns over inappropriate content, while others, like Shon Klose, view it as an overreach that undermines democracy. Young people, meanwhile, say they intend to bypass the restrictions, with 11-year-old Emma Wakefield saying she would find ways to access social media secretly.

The ban makes Australia the first country to impose such a strict regulation, going further than France and several US states, whose restrictions hinge on parental consent. The law’s swift, fast-tracked passage through parliament has drawn criticism from social media companies, which argue it lacked proper scrutiny. TikTok, in particular, warned that the law could worsen risks to children rather than protect them.

The move has also raised concerns about Australia’s relationship with the United States, as figures like Elon Musk have criticised the law as a potential overreach. However, Albanese defended the law, drawing parallels to age-based restrictions on alcohol, and reassured parents that while enforcement may not be perfect, it’s a necessary step to protect children online.

AWS and Telefonica Germany test quantum tech in mobile networks

Telefonica Germany has partnered with Amazon Web Services (AWS) to explore quantum technologies in its mobile network. The pilot project aims to optimise mobile tower placement, enhance security with quantum encryption, and provide insights for the development of 6G networks.

Quantum computing, known for its potential to outperform traditional systems, is expected to revolutionise industries, including telecommunications. Experts stress the importance of early engagement with prototypes to prepare for the arrival of powerful quantum systems. Telefonica’s Chief Technology & Information Officer, Mallik Rao, highlighted their proactive approach in integrating these emerging technologies.

Telefonica Germany has already made strides in modernising its network, recently migrating one million 5G customers to AWS cloud infrastructure. Plans are underway to transfer millions more over the next year and a half. Rao described the transition as smooth and beneficial for performance.

AWS and Telefonica’s collaboration underlines the growing interest among tech leaders in harnessing quantum mechanics for groundbreaking advancements in speed and security.

AI cloned voices fool bank security systems

Advancements in AI voice cloning have revealed vulnerabilities in banking security, as a BBC reporter demonstrated how cloned voices can bypass voice recognition systems. Using an AI-generated version of her voice, she successfully accessed accounts at two major banks, Santander and Halifax, simply by playing back the phrase “my voice is my password.”

The experiment highlighted potential security gaps: the cloned voice worked through basic speakers and required no high-tech setup. The banks responded that voice ID is one layer of a multi-layered security system and maintained that it is more secure than traditional authentication methods. Experts, however, view the demonstration as a wake-up call about the risks posed by generative AI.

Cybersecurity specialists warn that rapid advancements in voice cloning technology could widen opportunities for fraud. They emphasise the importance of evolving defences to meet these challenges, especially as AI continues to blur the line between real and fake identities.

US FTC targets tech support scams with new rule changes

The Federal Trade Commission (FTC) has strengthened its rules to better protect consumers from tech support scams. With new amendments to the Telemarketing Sales Rule (TSR), the agency can now act against fraudsters even when victims initiate the call, closing a loophole that left many unable to seek justice.

Tech support scams commonly trick victims through fake pop-ups, emails, and warnings that urge them to contact bogus help desks. These scams have disproportionately affected older adults, who are five times more likely to be targeted, leading to over $175 million in reported losses.

Previously, the US FTC could only pursue scammers if they made the initial call. The rule change now removes exemptions for technical support services, allowing the agency to crack down on deceptive practices regardless of how contact is made. Authorities are also targeting fraudulent pop-ups as part of a broader effort to combat these schemes.

With cases like the fake ‘Geek Squad’ scams resulting in millions in losses, the FTC’s expanded powers mark a significant step in holding scammers accountable and protecting vulnerable populations from financial harm.

Australia enacts groundbreaking law banning under-16s from social media

Australia has approved a groundbreaking law banning children under 16 from accessing social media, following a contentious debate. The new regulation targets major tech companies like Meta, TikTok, and Snapchat, which will face fines of up to A$49.5 million if they allow minors to log in. Starting with a trial period in January, the law is set to take full effect in 2025. The move comes amid growing global concerns about the mental health impact of social media on young people, with several countries considering similar restrictions.

The law, which marks a significant political win for Prime Minister Anthony Albanese, has received widespread public support, with 77% of Australians backing the ban. However, it has faced opposition from privacy advocates, child rights groups, and social media companies, which argue the law was rushed through without adequate consultation. Critics also warn that it could inadvertently harm vulnerable groups, such as LGBTQIA or migrant teens, by cutting them off from supportive online communities.

Despite the backlash, many parents and mental health advocates support the ban, citing concerns about social media’s role in exacerbating youth mental health issues. High-profile campaigns and testimonies from parents of children affected by cyberbullying have helped drive public sentiment in favour of the law. However, some experts warn the ban could have unintended consequences, pushing young people toward more dangerous corners of the internet where they can avoid detection.

The law also has the potential to strain relations between Australia and the United States, as tech companies with major US ties, including Meta and X, have voiced concerns about its implications for internet freedom. While these companies have pledged to comply, there remain significant questions about how the law will be enforced and whether it can achieve its intended goals without infringing on privacy or digital rights.

UK social media platforms criticised over safety failures

Nearly a quarter of children aged 8-17 in the UK lie about their age to access adult social media platforms, according to a new Ofcom report. The media regulator criticised current verification processes as insufficient and warned tech companies that they face heavy fines if they fail to improve safety measures under the Online Safety Act, which takes effect in 2025.

The law will require platforms to implement ‘highly effective’ age assurance to prevent underage users from accessing adult content. Ofcom’s findings highlight the risks children face from harmful material online, sparking concerns from advocates like the Molly Rose Foundation, which warns that tech companies are not enforcing their own rules.

Some social media platforms, including TikTok, claim they are enhancing safety measures with machine learning and other innovations. However, BBC investigations and feedback from teenagers suggest that bypassing current systems remains alarmingly easy, with no ID verification required for account setup. Calls for stricter regulation continue as online safety concerns grow.

T-Mobile prevents cyberattack, safeguarding customer data

T-Mobile has reported recent attempts by cyber attackers to infiltrate its systems. The US telecom giant confirmed that its security measures successfully prevented access to sensitive customer data, including calls, voicemails, and texts. The intrusion originated from a compromised network connected to T-Mobile’s systems, prompting the company to sever the connection.

The attackers’ tactics resembled those of Salt Typhoon, a Chinese-linked cyber-espionage group, though T-Mobile has not confirmed their identity. The firm’s Chief Security Officer, Jeff Simon, stated that customer information remained secure, with no disruption to services. Findings were reported to the US government for further investigation.

Simon attended a White House meeting last week to discuss escalating cyber threats. The FBI and the Cybersecurity & Infrastructure Security Agency recently disclosed an ongoing investigation into a Chinese-linked espionage campaign targeting several US telecom providers.

The broader operation reportedly infiltrated multiple companies, stealing sensitive call data and accessing private communications. The breaches compromised devices belonging to figures in government and politics, including campaign staff during the 2024 US presidential election, raising national security concerns.