Meta tightens financial ad rules in Australia

Meta Platforms announced stricter rules for advertisers promoting financial products and services in Australia, aiming to curb online scams. Following an October initiative in which Meta removed 8,000 deceptive ‘celeb bait’ ads, the company now requires advertisers to verify beneficiary and payer details, including their Australian Financial Services Licence number, before running financial ads.

This move is part of Meta’s ongoing efforts to protect Australians from scams involving fake investment schemes using celebrity images. Verified advertisers must also display a “Paid for By” disclaimer, ensuring transparency in financial advertisements.

The updated policy comes amid a broader regulatory push in Australia, even as the government recently abandoned plans to fine internet platforms for spreading misinformation. The crackdown on online platforms is part of a growing effort to assert Australian sovereignty over foreign tech companies, with a federal election looming.

Australia begins trial of teen social media ban

Australia’s government is conducting a world-first trial to enforce its national social media ban for children under 16, focusing on age-checking technology. The trial, set to begin in January and run through March, will involve around 1,200 randomly selected Australians. It will help guide the development of effective age verification methods, as platforms like Meta, X (formerly Twitter), TikTok, and Snapchat must prove they are taking ‘reasonable steps’ to keep minors off their services or face fines of up to A$49.5 million ($32 million).

The trial is overseen by the Age Check Certification Scheme and will test several age-checking techniques, such as video selfies, document uploads for verification, and email cross-checking. Although platforms like YouTube are exempt, the trial is seen as a crucial step for setting a global precedent for online age restrictions, which many countries are now considering due to concerns about youth mental health and privacy.

The trial’s outcomes could influence how other nations approach enforcing age restrictions, despite concerns from some lawmakers and tech companies about privacy violations and free speech. The government has responded by assuring that users will not be required to hand over personal data without being offered alternative verification options. The age-check process could significantly shape global efforts to regulate social media access for children in the coming years.

Australian social media ban sparked by politician’s wife’s call to action

Australia has passed a landmark law banning children under 16 from using social media, following a fast-moving push led by South Australian Premier Peter Malinauskas. The law, which takes effect in November 2025, aims to protect young people from the harmful effects of social media, including mental health issues linked to cyberbullying and body image problems. The measure enjoys widespread support, with a government survey showing 77% of Australians backing it. However, it has sparked significant opposition from tech companies and privacy advocates, who argue that the law is rushed and could push young users to more dangerous parts of the internet.

The push for the national ban gained momentum after Malinauskas’s state-level initiative in September to restrict social media access for children under 14. This led to a broader federal response, with Prime Minister Anthony Albanese’s government introducing a nationwide version of the policy. The legislation eliminates parental discretion: no child under 16 may use social media even with parental consent, and platforms that fail to enforce the rules face fines. This contrasts with policies in countries like France and in the US state of Florida, where minors can access social media with parental permission.

While the law has garnered support from most of Australia’s political leaders, it has faced strong criticism from social media companies like Meta and TikTok. These platforms warn that the law could drive teens to hidden corners of the internet and that the rushed process leaves many questions unanswered. Despite the backlash, the law passed with bipartisan support, and a trial of age-verification technology will begin in January to prepare for its full implementation.

The debate over the law highlights growing concerns worldwide about the impact of social media on young people. Although some critics argue that the law is an overreach, others believe it is a necessary step to protect children from online harm. With the law now in place, Australia has set a precedent that could inspire other countries grappling with similar issues.

India introduces new rules for critical telecom infrastructure

The government of India introduced the Telecommunications (Critical Telecommunication Infrastructure) Rules, 2024, on 22 November, which require telecom entities designated as Critical Telecommunication Infrastructure (CTI) to grant government-authorised personnel access to inspect hardware, software, and data. These rules are part of the Telecommunications Act, 2023, empowering the government to designate telecom networks as CTI if their disruption could severely impact national security, the economy, public health, or safety.

The rules mandate that telecom entities appoint a Chief Telecom Security Officer (CTSO) to oversee cybersecurity efforts and report incidents within six hours, a deadline relaxed from the two hours proposed in the draft rules. This aligns India’s telecom sector with the existing Telecom Cyber Security Rules and CERT-In directions, though experts argue that the six-hour window still falls short of global standards and may contribute to over-regulation.

Telecom networks are already governed under the Information Technology Act, creating potential overlaps with other regulatory frameworks such as the National Critical Information Infrastructure Protection Centre (NCIIPC). The rules also raise concerns about inspection protocols and data access, as they lack clarity on when inspections can be triggered or what limitations should be placed on government personnel accessing sensitive information.

Experts have also questioned the accountability measures in case of abuse of power and the potential for government officials to access the personal data of telecom subscribers during these inspections. To implement these rules, telecom entities must provide detailed documentation to the government, including network architecture, access lists, cybersecurity plans, and security audit reports. They must also maintain logs and documentation for at least two years to assist in detecting anomalies.

Additionally, remote maintenance or repairs from outside India require government approval, and upgrades to hardware or software must be reviewed within 14 days. Immediate upgrades are allowed during cybersecurity incidents, with notification to the government within 24 hours. A digital portal will be established to manage these rules, but concerns about the lack of transparency in communications have been raised. Finally, all CTI hardware, software, and spares must meet Indian Telecommunication Security Assurance Requirements.

Australia’s new social media ban faces backlash from Big Tech

Australia’s new law banning children under 16 from using social media has sparked strong criticism from major tech companies. The law, passed late on Thursday, targets platforms like Meta’s Instagram and Facebook, as well as TikTok, imposing fines of up to A$49.5 million for allowing minors to log in. Tech giants, including TikTok and Meta, argue that the legislation was rushed through parliament without adequate consultation and could have harmful unintended consequences, such as driving young users to less visible, more dangerous parts of the internet.

The law was introduced after a parliamentary inquiry into the harmful effects of social media on young people, with testimony from parents of children who had been bullied online. While the Australian government had warned tech companies about the impending legislation for months, the bill was fast-tracked in a chaotic final session of parliament. Critics, including Meta, have raised concerns about the lack of clear evidence linking social media to mental health issues and have questioned the rushed process.

Despite the backlash, the law has strong political backing, and the government is set to begin a trial of enforcement methods in January, with the full ban expected to take effect by November 2025. Australia’s long-standing tensions with major US-based tech companies, including previous legislation requiring platforms to pay for news content, are also fuelling the controversy. As the law moves forward, both industry representatives and lawmakers face challenges in determining how it will be practically implemented.

Dubai Police partners with Crystal Intelligence to bolster security in digital asset sector

Crystal Intelligence and Dubai Police have collaborated to address economic crimes within the rapidly growing digital asset space. By combining advanced blockchain analytics with law enforcement expertise, the two entities aim to predict and prevent financial crimes, ensuring robust security within the digital asset ecosystem.

The collaboration reflects Dubai’s commitment to remaining at the forefront of global blockchain innovation. Moreover, as part of its broader strategy, the UAE, particularly Dubai, has positioned itself as a leader in digital assets by creating a regulatory framework that fosters innovation while ensuring security and compliance.

Notably, establishing the Virtual Assets Regulatory Authority (VARA), the world’s first regulator for virtual assets, has attracted numerous blockchain companies and service providers to the city, further solidifying Dubai’s role as a central hub for digital assets. This collaboration also involves strengthening Dubai Police’s capabilities through Crystal Intelligence’s advanced tools in transaction monitoring, risk management, and predictive analytics.

Why does it matter?

These tools will enable law enforcement to proactively detect and address fraudulent activities across blockchain networks, thereby ensuring the integrity of Dubai’s digital asset market. By combining regulatory foresight with cutting-edge technology, Dubai demonstrates its leadership in integrating innovation with security. Ultimately, this partnership sets a new global standard for digital asset security and offers a model for other jurisdictions to follow as they navigate the complexities of financial crimes in the digital asset space.

Mixed reactions as Australia bans social media for minors

Australia’s recent approval of a social media ban for children under 16 has sparked mixed reactions nationwide. While the government argues that the law sets a global benchmark for protecting youth from harmful online content, critics, including tech giants like TikTok, warn that it could push minors to darker corners of the internet. The law, which will fine platforms such as Meta’s Facebook and Instagram, as well as TikTok, up to A$49.5 million if they fail to enforce it, takes effect one year after a trial period begins in January.

Prime Minister Anthony Albanese emphasised the importance of protecting children’s physical and mental health, citing the harmful impact of social media on body image and misogynistic content. Despite widespread support—77% of Australians back the measure—many are divided. Some, like Sydney resident Francesca Sambas, approve of the ban, citing concerns over inappropriate content, while others, like Shon Klose, view it as an overreach that undermines democracy. Young people, however, expressed their intent to bypass the restrictions, with 11-year-old Emma Wakefield saying she would find ways to access social media secretly.

The ban positions Australia as the first country to impose such a strict regulation, ahead of France and several US states, which impose restrictions based on parental consent. The swift passage of the law, which was fast-tracked through parliament, has drawn criticism from social media companies, which argue it was rushed and lacked proper scrutiny. TikTok, in particular, warned that the law could worsen risks to children rather than protect them.

The move has also raised concerns about Australia’s relationship with the United States, as figures like Elon Musk have criticised the law as a potential overreach. However, Albanese defended the law, drawing parallels to age-based restrictions on alcohol, and reassured parents that while enforcement may not be perfect, it’s a necessary step to protect children online.

AWS and Telefonica Germany test quantum tech in mobile networks

Telefonica Germany has partnered with Amazon Web Services (AWS) to explore quantum technologies in its mobile network. The pilot project aims to optimise mobile tower placement, enhance security with quantum encryption, and provide insights for the development of 6G networks.

Quantum computing, known for its potential to outperform traditional systems, is expected to revolutionise industries, including telecommunications. Experts stress the importance of early engagement with prototypes to prepare for the arrival of powerful quantum systems. Telefonica’s Chief Technology & Information Officer, Mallik Rao, highlighted their proactive approach in integrating these emerging technologies.

Telefonica Germany has already made strides in modernising its network, recently migrating one million 5G customers to AWS cloud infrastructure. Plans are underway to transfer millions more over the next year and a half. Rao described the transition as smooth and beneficial for performance.

AWS and Telefonica’s collaboration underlines the growing interest among tech leaders in harnessing quantum mechanics for groundbreaking advancements in speed and security.

AI cloned voices fool bank security systems

Advancements in AI voice cloning have revealed vulnerabilities in banking security, as a BBC reporter demonstrated how cloned voices can bypass voice recognition systems. Using an AI-generated version of her voice, she successfully accessed accounts at two major banks, Santander and Halifax, simply by playing back the phrase “my voice is my password.”

The experiment highlighted potential security gaps, as the cloned voice worked through basic speakers and required no high-tech setup. The banks noted that voice ID is part of a multi-layered security system and maintained that it is more secure than traditional authentication methods. Experts, however, view the demonstration as a wake-up call about the risks posed by generative AI.

Cybersecurity specialists warn that rapid advancements in voice cloning technology could increase opportunities for fraud. They emphasise the importance of evolving defences to address these challenges, especially as AI continues to blur the lines between real and fake identities.

US FTC targets tech support scams with new rule changes

The Federal Trade Commission (FTC) has strengthened its rules to better protect consumers from tech support scams. With new amendments to the Telemarketing Sales Rule (TSR), the agency can now act against fraudsters even when victims initiate the call, closing a loophole that left many unable to seek justice.

Tech support scams commonly trick victims through fake pop-ups, emails, and warnings that urge them to contact bogus help desks. These scams have disproportionately affected older adults, who are five times more likely to be targeted, leading to over $175 million in reported losses.

Previously, the FTC could only pursue scammers if they made the initial call. The rule change removes exemptions for technical support services, allowing the agency to crack down on deceptive practices regardless of how contact is made. Authorities are also targeting fraudulent pop-ups as part of a broader effort to combat these schemes.

With cases like the fake ‘Geek Squad’ scams resulting in millions in losses, the FTC’s expanded powers mark a significant step in holding scammers accountable and protecting vulnerable populations from financial harm.