The Australian Federal Police (AFP) is increasingly turning to AI to handle the vast amounts of data it encounters during investigations. With investigations averaging around 40 terabytes of data, AI has become essential for sifting through information from sources like seized phones, child exploitation referrals, and cyber incidents. Benjamin Lamont, the AFP’s manager for technology strategy, said the overwhelming scale of the data makes AI crucial for managing cases, including reviewing massive volumes of video footage and email.
The AFP is also working on custom AI solutions, including tools for structuring large datasets and identifying potential criminal activity on old mobile phones. One such dataset runs to a staggering 10 petabytes, while an individual phone can hold up to 1 terabyte of data. Lamont noted that AI makes these files far easier for officers to process, a task that would otherwise be impossible for human investigators alone. The AFP is also developing AI systems to detect deepfake images and to protect officers from graphic content by summarising or modifying such material before it is viewed.
The AFP has faced criticism over its use of AI, particularly its use of Clearview AI for facial recognition, and Lamont acknowledged the need for continuous ethical oversight. The AFP has established a responsible technology committee to ensure its use of AI remains ethical, emphasising the importance of transparency and human oversight in AI-driven decisions.
The Swedish government is exploring age restrictions on social media platforms to combat the rising problem of gangs recruiting children online for violent crimes. Officials warn that platforms like TikTok and Snapchat are being used to lure minors, some as young as 11, into carrying out bombings and shootings, contributing to Sweden's status as the European country with the highest per capita rate of deadly shootings. Justice Minister Gunnar Strommer emphasised the seriousness of the issue and urged social media companies to take concrete action.
Swedish police report that the number of children under 15 involved in planning murders has tripled compared to last year, highlighting the urgency of the situation. Education Minister Johan Pehrson noted the government’s interest in measures such as Australia’s recent ban on social media for children under 16, stating that no option is off the table. Officials also expressed frustration at the slow progress by tech companies in curbing harmful content.
Representatives from platforms like TikTok, Meta, and Google attended a recent Nordic meeting to address the issue, pledging to help combat online recruitment. However, Telegram and Signal were notably absent. The government has warned that stronger regulations could follow if the tech industry fails to deliver meaningful results.
European regulators are investigating a previously undisclosed advertising partnership between Google and Meta that targeted teenagers on YouTube and Instagram, the Financial Times reports. The now-cancelled initiative, aimed at promoting Instagram to users aged 13 to 17, allegedly bypassed Google’s policies restricting ad personalisation for minors.
The partnership, initially launched in the US with plans for global expansion, has drawn the attention of the European Commission, which has requested extensive internal records from Google, including emails and presentations, to evaluate potential violations. Google, defending its practices, stated that its safeguards for minors remain industry-leading and emphasised recent internal training to reinforce policy compliance.
This inquiry comes amid heightened concerns about the impact of social media on young users. Earlier this year, Meta introduced enhanced privacy features for teenagers on Instagram, reflecting the growing demand for stricter online protections for minors. Neither Meta nor the European Commission has commented on the investigation so far.
OpenAI has rolled out its text-to-video AI model, Sora, to ChatGPT Plus and Pro users, signalling a broader push into multimodal AI technologies. Initially limited to safety testers, Sora is now available as Sora Turbo at no additional cost, allowing users to create videos of up to 20 seconds in various resolutions and aspect ratios.
The move positions OpenAI to compete with similar tools from Meta, Google, and Stability AI. While the model is accessible in most regions, it remains unavailable in EU countries, the UK, and Switzerland due to regulatory considerations. OpenAI plans to introduce tailored pricing options for Sora next year.
The company emphasised safeguards against misuse, such as blocking harmful content including child exploitation material and deepfake abuse. It also plans to gradually expand features, including uploads featuring real people, as it strengthens protections. Sora marks another step in OpenAI’s efforts to innovate responsibly in the AI space.
Data brokers Mobilewalla and Gravy Analytics have agreed to stop using sensitive location data following a settlement with the US Federal Trade Commission (FTC). The agreement addresses concerns about tracking individuals’ religious beliefs, political leanings, and pregnancy status through mobile device data.
The settlement represents the first ban on collecting location data through online advertising auctions. The FTC accused the companies of unfair practices, stating that Mobilewalla gathered information from ad auction platforms without consumers’ consent. Such platforms allow advertisers to bid on specific audiences but can inadvertently expose consumers to privacy risks.
Gravy Analytics, owned by Unacast, sold location data to government contractors, prompting constitutional concerns from FTC commissioners. Mobilewalla disputed the allegations but stated the agreement allows it to continue offering insights while respecting privacy. Both companies committed to halting sensitive data usage and introducing opt-out options for consumers.
FTC Chair Lina Khan highlighted the broader risks of targeted advertising, warning that Americans’ sensitive data is at risk of misuse. The settlement is part of the Biden administration’s effort to regulate data brokers and strengthen privacy protections, as outlined by proposed rules from the US Consumer Financial Protection Bureau.
Australia's competition watchdog has called for a review of efforts to ensure more choice for internet users, citing Google’s dominance in the search engine market and the failure of its competitors to capitalise on the rise of AI. A report by the Australian Competition and Consumer Commission (ACCC) highlighted concerns about the growing influence of Big Tech, particularly Google and Microsoft, as they integrate generative AI into their search services. This raises questions about the accuracy and reliability of AI-generated search results.
While the use of AI in search engines is still in its early stages, the ACCC warns that large tech companies’ financial strength and market presence give them a significant advantage. The commission is concerned that AI-driven search could spread misinformation, as consumers may perceive AI-generated responses as more useful even when they are less accurate. In response, Australia is pushing for new regulations, including laws to prevent anti-competitive behaviour and improve consumer choice.
The Australian government has already introduced several measures targeting tech giants, such as requiring social media platforms to pay for news content and restricting access for children under 16. A proposed new law could impose hefty fines on companies that suppress competition. The ACCC has called for service-specific codes to address data advantages and ensure consumers have more freedom to switch between services. The inquiry is expected to close by March next year.
Australia's government is conducting a world-first trial to enforce its national social media ban for children under 16, focusing on age-checking technology. The trial, set to begin in January and run through March, will involve around 1,200 randomly selected Australians. It will help guide the development of effective age verification methods, as platforms like Meta, X (formerly Twitter), TikTok, and Snapchat must prove they are taking ‘reasonable steps’ to keep minors off their services or face fines of up to A$49.5 million ($32 million).
The trial is overseen by the Age Check Certification Scheme and will test several age-checking techniques, such as video selfies, document uploads for verification, and email cross-checking. Although platforms like YouTube are exempt, the trial is seen as a crucial step for setting a global precedent for online age restrictions, which many countries are now considering due to concerns about youth mental health and privacy.
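These techniques amount to independent signals that a platform would ultimately have to fold into a single pass/fail decision. The sketch below is purely illustrative: the trial has not published a decision rule, so the field names and the conservative "every available signal must clear 16" policy are assumptions made for this example, not anything drawn from the trial's actual methodology.

```python
# Illustrative sketch only. The trial has not published a decision rule;
# the field names, threshold handling, and combination policy below are
# assumptions made for this example.
from dataclasses import dataclass
from typing import Optional

MIN_AGE = 16  # threshold set by the Australian under-16 ban


@dataclass
class AgeSignals:
    selfie_estimated_age: Optional[float] = None   # facial age estimation from a video selfie
    document_verified_age: Optional[int] = None    # age confirmed via an uploaded ID document
    email_cross_check_age: Optional[int] = None    # age inferred from email/account cross-checks


def passes_age_check(signals: AgeSignals) -> bool:
    """Return True only if at least one signal is present and every
    available signal clears the threshold; absent evidence is treated
    as a failure (a deliberately conservative assumption)."""
    available = [
        age for age in (
            signals.document_verified_age,
            signals.selfie_estimated_age,
            signals.email_cross_check_age,
        )
        if age is not None
    ]
    return bool(available) and all(age >= MIN_AGE for age in available)


if __name__ == "__main__":
    print(passes_age_check(AgeSignals(selfie_estimated_age=17.2)))   # True
    print(passes_age_check(AgeSignals(document_verified_age=15)))    # False
    print(passes_age_check(AgeSignals()))                            # False: no evidence
```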
The trial’s outcomes could influence how other nations approach enforcing age restrictions, despite concerns from some lawmakers and tech companies about privacy violations and free speech. The government has responded by assuring that users will not be forced to hand over personal data without being offered alternative verification options. The age-check process could significantly shape global efforts to regulate children’s access to social media in the coming years.
A recent investigation revealed that most top-selling mobile games in the UK fail to disclose the presence of loot boxes in their advertisements, despite regulations mandating transparency. Loot boxes, which provide randomised in-game items often obtained through payments, have drawn criticism for fostering addictive behaviours and targeting vulnerable groups, including children. Of the 45 highest-grossing games analysed on Google Play, only two clearly mentioned loot boxes in their advertisements.
The UK Advertising Standards Authority, which oversees compliance, has acknowledged the issue and promised further action, but it has faced criticism for slow and limited enforcement. Critics argue that lax self-regulation within the gaming industry enables companies to prioritise profits over player well-being, particularly as loot boxes reportedly generate around US$15 billion annually.
Advocacy groups and researchers have voiced alarm over these findings, warning of long-term consequences. Zoë Osmond of GambleAware emphasised the risks of exposing children to gambling-like features in games, which could lead to harmful habits later in life. The gaming industry has so far resisted stricter government intervention, despite mounting evidence of non-compliance and harm.
Australia has passed a landmark law banning children under 16 from using social media, following a fast-moving push led by South Australian Premier Peter Malinauskas. The law, which takes effect in November 2025, aims to protect young people from the harmful effects of social media, including mental health issues linked to cyberbullying and body image problems. The measure enjoys widespread public support, with a government survey showing 77% of Australians backing it. However, it has sparked significant opposition from tech companies and privacy advocates, who argue that the law is rushed and could push young users to more dangerous parts of the internet.
The push for the national ban gained momentum after Malinauskas’s state-level initiative in September to restrict social media access for children under 14. This led to a broader federal response, with Prime Minister Anthony Albanese’s government introducing a nationwide version of the policy. The legislation allows no parental discretion: no child under 16 may use social media, and platforms that fail to enforce the rules face fines. This contrasts with policies in countries like France and Florida, where minors can access social media with parental permission.
While the law has garnered support from most of Australia’s political leaders, it has faced strong criticism from social media companies like Meta and TikTok. These platforms warn that the law could drive teens to hidden corners of the internet and that the rushed process leaves many questions unanswered. Despite the backlash, the law passed with bipartisan support, and a trial of age-verification technology will begin in January to prepare for its full implementation.
The debate over the law highlights growing concerns worldwide about the impact of social media on young people. Although some critics argue that the law is an overreach, others believe it is a necessary step to protect children from online harm. With the law now in place, Australia has set a precedent that could inspire other countries grappling with similar issues.
Australia’s new law banning children under 16 from using social media has sparked strong criticism from major tech companies. The law, passed late on Thursday, targets platforms like Meta’s Instagram and Facebook, as well as TikTok, imposing fines of up to A$49.5 million for allowing minors to log in. Tech giants, including TikTok and Meta, argue that the legislation was rushed through parliament without adequate consultation and could have harmful unintended consequences, such as driving young users to less visible, more dangerous parts of the internet.
The law was introduced after a parliamentary inquiry into the harmful effects of social media on young people, with testimony from parents of children who had been bullied online. While the Australian government had warned tech companies about the impending legislation for months, the bill was fast-tracked in a chaotic final session of parliament. Critics, including Meta, have raised concerns about the lack of clear evidence linking social media to mental health issues and question the rushed process.
Despite the backlash, the law has strong political backing, and the government is set to begin a trial of enforcement methods in January, with the full ban expected to take effect by November 2025. Australia’s long-standing tensions with major US-based tech companies, including over previous legislation requiring platforms to pay for news content, are also fuelling the controversy. As the law moves forward, both industry representatives and lawmakers face challenges in determining how it will be implemented in practice.