Australia’s competition watchdog has called for a review of efforts to ensure more choice for internet users, citing Google’s dominance in the search engine market and the failure of its competitors to capitalise on the rise of AI. A report by the Australian Competition and Consumer Commission (ACCC) highlighted concerns about the growing influence of Big Tech, particularly Google and Microsoft, as they integrate generative AI into their search services. This raises questions about the accuracy and reliability of AI-generated search results.
While the use of AI in search engines is still in its early stages, the ACCC warns that large tech companies’ financial strength and market presence give them a significant advantage. The commission expressed concerns that AI-driven search could spread misinformation, as consumers may find AI-generated responses more useful even though they can be less accurate. In response, Australia is pushing for new regulations, including laws to prevent anti-competitive behaviour and improve consumer choice.
The Australian government has already introduced several measures targeting tech giants, such as requiring social media platforms to pay for news content and restricting access for children under 16. A proposed new law could impose hefty fines on companies that suppress competition. The ACCC has called for service-specific codes to address data advantages and ensure consumers have more freedom to switch between services. The inquiry is expected to close by March next year.
Cate Blanchett has voiced her concerns about the societal implications of AI, describing the threat as ‘very real.’ In an interview with the BBC, the Australian actress shared her scepticism about advancements like driverless cars and AI’s potential to replicate human voices, noting the broader risks for humanity. Blanchett emphasised that AI could replace anyone, not just actors, and criticised some technological advancements as ‘experimentation for its own sake.’
While promoting Rumours, her new apocalyptic comedy film, Blanchett described the plot as reflective of modern anxieties. The film, directed by Guy Maddin, portrays world leaders navigating absurd situations, offering both satire and a critique of detachment from reality. Blanchett highlighted how the story reveals the vulnerability and artificiality of political figures once removed from their structures of power.
Maddin shared that his characters emerged from initial disdain but evolved into figures of empathy as the narrative unfolded. Blanchett added that both actors and politicians face infantilisation within their respective systems, highlighting parallels in their perceived disconnection from the real world.
The UK faces an escalating cyber threat from hostile states and criminal gangs, according to Richard Horne, head of the National Cyber Security Centre (NCSC). In his first major speech, Horne warned that the severity of these risks is being underestimated, citing a significant rise in cyber incidents, particularly from Russia and China. He described Russia’s cyber activity as ‘aggressive and reckless’ while noting that China’s operations are highly sophisticated with growing global ambitions.
Over the past year, the NCSC responded to 430 cyber incidents, a marked increase from the previous year. Among them, 12 were deemed especially severe, a threefold rise from 2023. The agency highlighted the growing threats to critical infrastructure and supply chains, urging both public and private sectors to strengthen their cyber defences. The UK also faces a growing number of ransomware attacks, often originating from Russia, which target key organisations like the British Library and healthcare services.
Horne emphasised the human costs of cyber-attacks, citing how these incidents disrupt vital services like healthcare and education. The rise in ransomware, often linked to Russian criminal gangs, is a major concern, and the NCSC is working to address these challenges. The agency’s review also pointed to increasing cyber activity from China, Iran, and North Korea, with these states targeting the UK’s infrastructure and private sector.
Experts like Professor Alan Woodward of Surrey University echoed Horne’s concerns, urging the UK to step up its cybersecurity efforts to keep pace with evolving threats. With adversaries growing more sophisticated, the government and businesses must act swiftly to protect the country’s digital infrastructure.
A new lawsuit accuses Apple of illegally surveilling employees’ personal devices and iCloud accounts while restricting discussions about pay and workplace conditions. Filed in California by Amar Bhakta, a digital advertising employee, the suit claims Apple mandates software installations on personal devices used for work, enabling access to private data such as emails, photos, and health information. The lawsuit also alleges Apple enforces confidentiality policies that hinder whistleblowing and discussions about working conditions.
Bhakta asserts he was instructed to avoid discussing his work on podcasts and remove job-related details from LinkedIn. The complaint argues these practices suppress employee rights, including whistleblowing and job market mobility. Apple denies the claims, stating they lack merit and emphasising its commitment to employee training on workplace rights.
This case joins other legal challenges faced by Apple, including allegations of underpaying female employees and discouraging discussions about workplace bias and pay disparity. Filed under a California law allowing workers to sue on behalf of the state, the lawsuit could lead to penalties, with a portion allocated to employees bringing the claims.
Five Canadian news companies have launched a lawsuit against OpenAI, claiming its AI systems violate copyright laws. Torstar, Postmedia, The Globe and Mail, The Canadian Press, and CBC/Radio-Canada allege the company uses their journalism without permission or compensation. The legal filing, made in Ontario’s superior court, seeks damages and a permanent ban on OpenAI using their materials unlawfully.
The companies argue that OpenAI has deliberately appropriated their intellectual property for commercial purposes. In their statement, they emphasised the public value of journalism and condemned OpenAI’s actions as illegal. OpenAI, however, defended its practices, stating that its models rely on publicly available data and comply with fair use and copyright principles. The firm also noted its efforts to collaborate with publishers and provide mechanisms for opting out.
The case follows a trend of lawsuits by various creators, including authors and artists, against AI companies over the use of copyrighted content. The Canadian lawsuit does not name Microsoft, a major OpenAI backer; separately, Elon Musk recently expanded a legal case accusing both companies of attempting to dominate the generative AI market unlawfully.
Italy’s data protection authority has issued a warning to publisher GEDI over sharing personal data with OpenAI, citing potential violations of EU privacy regulations. GEDI, part of the Agnelli family’s Exor group, entered into a strategic partnership with OpenAI in September to provide Italian-language content for ChatGPT users.
Under the deal, OpenAI’s chatbot would feature GEDI’s attributed content and links, while GEDI’s journalism could help refine the AI’s accuracy. Concerns have arisen due to the sensitive nature of the archives, which contain information on millions of individuals. The regulator highlighted that such data requires careful handling and warned of potential sanctions if EU rules are breached.
GEDI clarified that the partnership does not involve selling personal data and noted that the project is still under review. No editorial content has been shared with OpenAI to date, according to a company statement. Discussions with the watchdog are ongoing, with GEDI expressing hope for constructive dialogue to resolve concerns.
Representatives from OpenAI have not yet commented on the matter.
Meta Platforms announced stricter regulations for advertisers promoting financial products and services in Australia, aiming to curb online scams. Following an October initiative where Meta removed 8,000 deceptive ‘celeb bait’ ads, the company now requires advertisers to verify beneficiary and payer details, including their Australian Financial Services License number, before running financial ads.
This move is part of Meta’s ongoing efforts to protect Australians from scams involving fake investment schemes using celebrity images. Verified advertisers must also display a ‘Paid for By’ disclaimer, ensuring transparency in financial advertisements.
The updated policy follows a broader regulatory push in Australia, where the government recently abandoned plans to fine internet platforms for spreading misinformation. The crackdown on online platforms is part of a growing effort to assert Australian sovereignty over foreign tech companies, with a federal election looming.
Meta Platforms, the owner of Facebook, Instagram, and WhatsApp, is set to face trial in Spain in October 2025 over a €551 million ($582 million) lawsuit filed by 87 media companies. The complaint, led by the AMI media association, accuses Meta of unfair competition in advertising through its alleged misuse of user data from 2018 to 2023.
The media companies argue that Meta’s extensive data collection provides it with an unfair advantage in crafting personalised ads, violating EU data protection regulations. Prominent Spanish publishers, including El Pais owner Prisa and ABC publisher Vocento, are among the plaintiffs. A separate €160 million lawsuit against Meta was also filed by Spanish broadcasters last month on similar grounds.
The lawsuits are part of a broader effort by traditional media to push back against tech giants, which they claim undermine their revenue and fail to pay fair fees for content use. In response to similar challenges in other countries, Meta has restricted news sharing on its platforms and reduced its focus on news and political content in user feeds.
Meta has not yet commented on the Spanish lawsuits, which highlight ongoing tensions between digital platforms and legacy media seeking to safeguard their economic interests.
The US Federal Trade Commission (FTC) has initiated an antitrust investigation into Microsoft, examining its software licensing, cloud computing operations, and AI-related practices. Sources indicate the probe, approved by FTC Chair Lina Khan before her anticipated departure, also investigates claims of restrictive licensing aimed at limiting competition in cloud services.
Microsoft is the latest Big Tech firm under regulatory pressure. Alphabet, Apple, Meta, and Amazon face similar lawsuits over alleged monopolistic practices in markets ranging from app stores to advertising. Penalties and court rulings loom as regulators focus on digital fairness.
The FTC’s probe highlights growing concerns about the influence of Big Tech on consumer choice and competition. As scrutiny intensifies, the outcomes could reshape the technology sector’s landscape, impacting businesses and consumers alike.
Australia’s government is conducting a world-first trial to enforce its national social media ban for children under 16, focusing on age-checking technology. The trial, set to begin in January and run through March, will involve around 1,200 randomly selected Australians. It will help guide the development of effective age verification methods, as platforms like Meta, X (formerly Twitter), TikTok, and Snapchat must prove they are taking ‘reasonable steps’ to keep minors off their services or face fines of up to A$49.5 million ($32 million).
The trial is overseen by the Age Check Certification Scheme and will test several age-checking techniques, such as video selfies, document uploads for verification, and email cross-checking. Although platforms like YouTube are exempt, the trial is seen as a crucial step for setting a global precedent for online age restrictions, which many countries are now considering due to concerns about youth mental health and privacy.
The trial’s outcomes could influence how other nations approach enforcing age restrictions, despite concerns from some lawmakers and tech companies about privacy violations and free speech. The government has responded with assurances that users will not be required to hand over personal data without alternative verification options. The age-check process could significantly shape global efforts to regulate social media access for children in the coming years.