The Australian Federal Police (AFP) is increasingly turning to AI to handle the vast amounts of data it encounters during investigations. With the average investigation now involving around 40 terabytes of data, AI has become essential for sifting through information from sources such as seized phones, child exploitation referrals, and cyber incidents. Benjamin Lamont, the AFP’s manager for technology strategy, emphasised that, given the overwhelming scale of the data, AI is crucial to help manage cases, including reviewing massive amounts of video footage and emails.
The AFP is also working on custom AI solutions, including tools for structuring large datasets and identifying potential criminal activity from old mobile phones. One such dataset is a staggering 10 petabytes, while individual phones can hold up to 1 terabyte of data. Lamont noted that AI makes these files far easier for officers to process; reviewing them manually would be an impossible task for human investigators alone. The AFP is also developing AI systems to detect deepfake images and to protect officers from graphic content by summarising or modifying such material before it is viewed.
While the AFP has faced criticism over its use of AI, particularly for using Clearview AI for facial recognition, Lamont acknowledged the need for continuous ethical oversight. The AFP has implemented a responsible technology committee to ensure AI use remains ethical, emphasising the importance of transparency and human oversight in AI-driven decisions.
The Swedish government is exploring age restrictions on social media platforms to combat the rising problem of gangs recruiting children online for violent crimes. Officials warn that platforms like TikTok and Snapchat are being used to lure minors—some as young as 11—into carrying out bombings and shootings, contributing to Sweden’s status as the European country with the highest per capita rate of deadly shootings. Justice Minister Gunnar Strommer emphasised the seriousness of the issue and urged social media companies to take concrete action.
Swedish police report that the number of children under 15 involved in planning murders has tripled compared to last year, highlighting the urgency of the situation. Education Minister Johan Pehrson noted the government’s interest in measures such as Australia’s recent ban on social media for children under 16, stating that no option is off the table. Officials also expressed frustration at the slow progress by tech companies in curbing harmful content.
Representatives from platforms like TikTok, Meta, and Google attended a recent Nordic meeting to address the issue, pledging to help combat online recruitment. However, Telegram and Signal were notably absent. The government has warned that stronger regulations could follow if the tech industry fails to deliver meaningful results.
Chinese drone manufacturers DJI and Autel Robotics face potential bans in the US under a proposed military bill. The legislation requires a national security review within a year to assess risks posed by their drones. If no review occurs, the companies will automatically join the Federal Communications Commission’s ‘Covered List,’ effectively blocking the sale of new models.
DJI, the world’s largest drone producer, claims the process is unfair, citing extensive security audits and enhanced privacy features. Autel Robotics, also impacted by the proposal, has previously been flagged for investigation over national security concerns.
US lawmakers remain concerned about potential surveillance risks and data vulnerabilities linked to Chinese drones. DJI has refuted these claims and denies that any forced labour is involved in its production, although US customs authorities have cited such concerns to block some of its imports.
The controversy reflects escalating tensions in US-China relations, particularly in technology and national security domains. The outcome of the proposed bill could reshape the landscape of the commercial drone market in the United States.
US government agencies are set to brief the House of Representatives on a widespread cyberespionage campaign allegedly linked to China. Known as Salt Typhoon, the operation reportedly targeted American telecommunications firms to steal call metadata and other sensitive information. A similar briefing was held for senators last week.
The White House revealed that at least eight US telecom companies had been affected, compromising the data of a large number of citizens. Senator Ron Wyden is drafting legislation in response, while Senator Bob Casey expressed significant concern, noting that legislative action might be delayed until the new year.
On Wednesday, a Senate Commerce subcommittee will examine the broader risks posed by cyber threats to communication networks. Industry representatives, including Competitive Carriers Association CEO Tim Donovan, will contribute insights on best practices to counter such attacks.
China has denied the allegations, labelling them as disinformation, and reaffirmed its opposition to cyber theft. Officials and lawmakers continue to emphasise the gravity of the breaches, with Senator Richard Blumenthal calling the scale of Chinese hacking efforts ‘terrifying.’
European regulators are investigating a previously undisclosed advertising partnership between Google and Meta that targeted teenagers on YouTube and Instagram, the Financial Times reports. The now-cancelled initiative aimed at promoting Instagram to users aged 13 to 17 allegedly bypassed Google’s policies restricting ad personalisation for minors.
The partnership, initially launched in the US with plans for global expansion, has drawn the attention of the European Commission, which has requested extensive internal records from Google, including emails and presentations, to evaluate potential violations. Google, defending its practices, stated that its safeguards for minors remain industry-leading and emphasised recent internal training to reinforce policy compliance.
This inquiry comes amid heightened concerns about the impact of social media on young users. Earlier this year, Meta introduced enhanced privacy features for teenagers on Instagram, reflecting the growing demand for stricter online protections for minors. Neither Meta nor the European Commission has commented on the investigation so far.
OpenAI has launched its text-to-video AI model, Sora, to ChatGPT Plus and Pro users, signalling a broader push into multimodal AI technologies. Initially limited to safety testers, Sora is now available as Sora Turbo at no additional cost, allowing users to create videos up to 20 seconds long in various resolutions and aspect ratios.
The move positions OpenAI to compete with similar tools from Meta, Google, and Stability AI. While the model is accessible in most regions, it remains unavailable in EU countries, the UK, and Switzerland due to regulatory considerations. OpenAI plans to introduce tailored pricing options for Sora next year.
The company emphasised safeguards against misuse, such as blocking harmful content like child exploitation material and deepfake abuse. It also plans to gradually expand features, including uploads featuring people, as it enhances protections. Sora marks another step in OpenAI’s efforts to innovate responsibly in the AI space.
Palantir Technologies and Anduril Industries have joined forces to optimise defence data for AI training. Palantir’s platform will organise and label sensitive defence data for model training, while Anduril’s systems will manage the retention and distribution of this information for national security applications.
The collaboration highlights challenges in deploying AI for defence, where sensitive data complicates model training. Anduril recently partnered with OpenAI to integrate advanced AI into security missions, underscoring its commitment to autonomous defence solutions.
Palantir, a key player in the AI boom, continues to see robust demand from governments and businesses seeking advanced software solutions.
The US House of Representatives is preparing to vote on a defence bill proposing $3 billion for telecom companies to replace equipment from Chinese firms Huawei and ZTE. The legislation aims to address security concerns posed by Chinese technology in American wireless networks. A previous allocation of $1.9 billion was deemed insufficient for the programme, which the Federal Communications Commission (FCC) estimates will cost nearly $5 billion.
The initiative, known as the ‘rip and replace’ programme, targets rural carriers reliant on the equipment, which could lose connectivity if funding gaps persist. FCC Chair Jessica Rosenworcel warned that insufficient funding might force some rural networks to shut down, endangering services such as 911 emergency calls. Rural regions face significant risks without immediate support for the removal and replacement of insecure telecoms infrastructure.
The proposed funding would also cover up to $500 million for regional technology hubs, supported by revenue from an FCC spectrum auction. Advocates emphasise the importance of securing connectivity while maintaining services for millions of Americans. Competitive Carriers Association CEO Tim Donovan welcomed the proposed funding, calling it critical for network security and consumer access.
Pavel Durov, founder of Telegram, appeared in a Paris court on 6 December to address allegations that the messaging app has facilitated criminal activity. Represented by his lawyers, Durov reportedly stated he trusted the French justice system but declined to comment further on the case.
The legal proceedings stem from charges brought against Durov in August, accusing him of running a platform that enables illicit transactions. Following his arrest at Le Bourget airport, he posted a $6 million bail and has been barred from leaving France until March 2025. If convicted, he could face up to 10 years in prison and a fine of 500,000 euros.
Industry experts fear the case against Durov reflects a broader crackdown on privacy-preserving technologies in the Web3 space. Parallels have been drawn with the arrest of Tornado Cash developer Alexey Pertsev, raising concerns over government overreach and the implications for digital privacy.
Supply chain software company Blue Yonder is investigating claims of data theft after the ‘Termite’ ransomware group threatened to release stolen data. The Arizona-based company, which serves major clients like DHL, Starbucks, and Walgreens, was hit by a ransomware attack on 21 November. While Blue Yonder initially confirmed a cyberattack, it did not disclose the perpetrators.
The Termite group, which recently took responsibility for the breach on its dark web leak site, claims to have stolen 680 gigabytes of data, including documents, reports, and email lists. The group, believed to be a rebranded version of the Babuk ransomware gang, has threatened to release the data soon. Blue Yonder is working with cybersecurity experts to investigate the breach and has notified impacted customers, though it has not confirmed specific details about the stolen data.
The attack has caused operational disruptions for some clients, including UK supermarkets Morrisons and Sainsbury’s, and US company Starbucks, which was forced to manually calculate employee pay. The full extent of the attack on Blue Yonder’s 3,000+ customers remains unclear.