Google has successfully defended itself against a revived privacy lawsuit in the UK concerning the transfer of patient data from the Royal Free London NHS Trust. The legal case, brought by patient Andrew Prismall on behalf of 1.6 million individuals, alleged that the data shared with Google’s AI division, DeepMind Technologies, was misused.
The Royal Free NHS Trust had transferred the data in 2015 to assist in developing an AI app designed to detect kidney injuries. Although Britain’s Information Commissioner’s Office ruled in 2017 that the data-sharing arrangement violated privacy laws, a subsequent lawsuit against Google and DeepMind was dismissed last year due to insufficient grounds.
On Wednesday, the Court of Appeal upheld this dismissal, rejecting Prismall’s attempt to challenge the earlier ruling. Google has not commented on the outcome, which closes a high-profile chapter in the debate over privacy and technology’s role in healthcare.
BeReal, the selfie-sharing app acquired by French mobile games publisher Voodoo earlier this year, is under scrutiny for allegedly violating European data protection rules. A privacy complaint filed by Noyb, a European privacy rights organisation, accuses the app of using manipulative ‘dark patterns’ to coerce users into consenting to ad tracking, a tactic that may breach the General Data Protection Regulation (GDPR).
The controversy centres on a consent banner introduced in July 2024, which appears to offer users a straightforward choice to accept or refuse tracking. However, Noyb argues that users who decline tracking face daily pop-ups when they try to post, while those who consent are spared further interruptions. This practice, Noyb asserts, pressures users into compliance, undermining the GDPR’s requirement that consent be ‘freely given.’
The complaint has been filed with France’s data protection authority, CNIL, and demands that BeReal revise its consent process to comply with GDPR. It also calls for any improperly obtained data to be deleted and suggests a fine for the alleged violations. BeReal’s parent company, Voodoo, has yet to comment on the complaint.
This case highlights growing concerns over dark patterns in social media apps, with regulators emphasising the need for fair and transparent consent mechanisms in line with user privacy rights.
Samsung has filed a legal challenge against India’s Competition Commission (CCI), accusing the watchdog of unlawfully detaining employees and seizing data during a 2022 raid connected to an antitrust investigation involving Amazon and Walmart-owned Flipkart. The CCI claims Samsung colluded with the e-commerce giants to launch products exclusively online, a practice it argues violates competition laws.
In its filing with the High Court in the northern city of Chandigarh, Samsung alleged that confidential data was improperly taken from its employees during the raid and requested the return of the material. Samsung has secured an injunction to pause the CCI’s proceedings but seeks a broader ruling to prevent the use of the seized data. The CCI, in turn, has asked the Supreme Court to consolidate similar challenges by Samsung and 22 other parties, arguing that companies are attempting to derail the investigation.
The case stems from findings earlier this year that Amazon, Flipkart, and smartphone companies like Samsung engaged in anti-competitive practices by favouring select sellers and using exclusive product launches. While Amazon and Flipkart deny wrongdoing, brick-and-mortar retailers have long criticised their pricing and market strategies. Samsung, a major smartphone brand in India with a 14% market share, maintains it was wrongly implicated and cooperated only as a third party in the investigation.
Australia’s government is set to introduce new rules requiring major tech companies to pay Australian media outlets for news content. Companies such as Meta and Google could face millions in charges if they fail to reach commercial agreements with publishers. The Assistant Treasurer emphasised that the rules aim to foster fair negotiations, with charges applying only to platforms earning over $250 million in Australian revenue.
The proposed regulations follow previous efforts to hold tech firms accountable for news content. Laws passed in 2021 required firms to compensate publishers, leading to temporary disruptions on Meta’s platforms before agreements were reached. However, Meta announced it would end those arrangements by 2024, scaling back its promotion of news globally.
The plan has drawn criticism from tech companies, who argue that most users do not access platforms for news and that publishers willingly share content for exposure. Despite these objections, Australian media organisations, including News Corp, anticipate benefits. The government’s broader efforts to regulate Big Tech include banning under-16s from social media and targeting scams.
Australia’s bold stance continues to set precedents for handling global tech giants, adding to growing international scrutiny. News publishers are optimistic about forming new commercial relationships under the proposed framework.
Policymakers seeking to regulate AI face an uphill battle as the science evolves faster than safeguards can be devised. Elizabeth Kelly, director of the US Artificial Intelligence Safety Institute, highlighted challenges such as “jailbreaks” that bypass AI security measures and the ease of tampering with digital watermarks meant to identify AI-generated content. Speaking at the Reuters NEXT conference, Kelly acknowledged the difficulty in establishing best practices without clear evidence of their effectiveness.
The US AI Safety Institute, launched under the Biden administration, is collaborating with academic, industry, and civil society partners to address these issues. Kelly emphasised that AI safety transcends political divisions, calling it a “fundamentally bipartisan issue” amid the upcoming transition to Donald Trump’s presidency. The institute recently hosted a global meeting in San Francisco, bringing together safety bodies from 10 countries to develop interoperable tests for AI systems.
Kelly described the gathering as a convergence of technical experts focused on practical solutions rather than typical diplomatic formalities. While the challenges remain significant, the emphasis on global cooperation and expertise offers a promising path forward.
The Australian Federal Police (AFP) is increasingly turning to AI to handle the vast amounts of data it encounters during investigations. With investigations involving an average of 40 terabytes of data, AI has become essential in sifting through information from sources such as seized phones, child exploitation referrals, and cyber incidents. Benjamin Lamont, AFP’s manager for technology strategy, said the overwhelming scale of data makes AI crucial to managing cases, including reviewing massive amounts of video footage and emails.
The AFP is also working on custom AI solutions, including tools for structuring large datasets and identifying potential criminal activity from old mobile phones. One such dataset is a staggering 10 petabytes, while individual phones can hold up to 1 terabyte of data. Lamont pointed out that AI plays a crucial role in making these files easier for officers to process, which would otherwise be an impossible task for human investigators alone. The AFP is also developing AI systems to detect deepfake images and protect officers from graphic content by summarising or modifying such material before it’s viewed.
While the AFP has faced criticism over its use of AI, particularly for using Clearview AI for facial recognition, Lamont acknowledged the need for continuous ethical oversight. The AFP has implemented a responsible technology committee to ensure AI use remains ethical, emphasising the importance of transparency and human oversight in AI-driven decisions.
Microsoft has revamped Copilot on Windows, introducing a floating quick view UI and a new keyboard shortcut. The updated feature, now described as a ‘native’ experience, allows users to trigger Copilot through the Alt + Space shortcut or the system tray. The floating quick view stays on top of other applications until dismissed or deactivated with the same shortcut.
The update is available for both Windows 10 and Windows 11 users, despite ongoing plans to phase out Windows 10 by October 2025. It offers a simplified way to access Copilot, similar to Microsoft’s Companion apps for files and calendar management. However, the choice of the Alt + Space shortcut has raised concerns due to potential conflicts with other applications using the same command.
Copilot initially debuted as a sidebar in Windows 11 before being demoted to a basic web app. While Microsoft now labels the latest version as ‘native,’ it remains essentially a web view with enhanced functionality. Further refinements to keyboard shortcuts are being explored to address usability challenges.
TikTok’s Canadian branch has filed an emergency motion with the country’s Federal Court to review a government order requiring it to cease operations due to national security concerns. The company, owned by China’s ByteDance, is challenging the December 5 order and seeking either its annulment or a return to the government for further review. The motion argues that shutting down TikTok’s Canadian operations could result in significant job losses.
The legal challenge comes after Canada began investigating TikTok’s plans to expand its business in the country last year. The investigation led to last month’s order, which did not block Canadian access to the app but mandated the company’s exit from the Canadian market. TikTok emphasised the importance of maintaining a local presence for its platform in Canada, where it has over 14 million monthly users.
Under Canadian law, the government can assess foreign investments’ risks to national security, though details of the investigations are kept confidential. The case follows similar actions in the US, where the government has pressured ByteDance to sell TikTok’s US assets by January 2025 or face a ban. TikTok is currently seeking a temporary block on this US law as well.
The Swedish government is exploring age restrictions on social media platforms to combat the rising problem of gangs recruiting children online for violent crimes. Officials warn that platforms like TikTok and Snapchat are being used to lure minors—some as young as 11—into carrying out bombings and shootings, contributing to Sweden’s status as the European country with the highest per capita rate of deadly shootings. Justice Minister Gunnar Strömmer emphasised the seriousness of the issue and urged social media companies to take concrete action.
Swedish police report that the number of children under 15 involved in planning murders has tripled compared to last year, highlighting the urgency of the situation. Education Minister Johan Pehrson noted the government’s interest in measures such as Australia’s recent ban on social media for children under 16, stating that no option is off the table. Officials also expressed frustration at the slow progress by tech companies in curbing harmful content.
Representatives from platforms like TikTok, Meta, and Google attended a recent Nordic meeting to address the issue, pledging to help combat online recruitment. However, Telegram and Signal were notably absent. The government has warned that stronger regulations could follow if the tech industry fails to deliver meaningful results.
The European Data Protection Supervisor (EDPS) is reviewing the European Commission’s response to a March ruling that its use of Microsoft 365 violated the bloc’s data protection laws. Monday marked the deadline for the Commission to address the EDPS order to halt unlawful data flows and renegotiate its contracts with Microsoft.
On Tuesday, EDPS Wojciech Wiewiórowski confirmed receipt of the Commission’s report, emphasising the complexity of the case and hinting that a detailed analysis will take time. Both the Commission and Microsoft are appealing the EDPS decision, with related cases set to progress through the courts in 2025.
The outcome could have significant implications for the Commission’s use of tech platforms and broader data privacy enforcement in the EU. For now, all parties remain tight-lipped, extending the uncertainty over the resolution of this high-profile dispute.