US court revives Chrome users’ privacy lawsuit against Google

A US appeals court has reinstated a lawsuit against Google, allowing Chrome users to pursue claims that the company collected their data without permission. The case centres on users who chose not to synchronise their Chrome browsers with their Google accounts yet allege that Google still gathered their information.

The 9th US Circuit Court of Appeals in San Francisco determined that a lower court had prematurely dismissed the case without adequately considering whether users had consented to the data collection. The decision follows a previous settlement where Google agreed to destroy billions of records in a similar lawsuit, which accused the company of tracking users who believed they were browsing privately in Chrome’s ‘Incognito’ mode.

Google has expressed disagreement with the ruling, asserting confidence in its privacy controls and the benefits of Chrome Sync, which helps users maintain a consistent experience across devices. However, the plaintiffs’ lawyer welcomed the court’s decision and is preparing for a trial.

Why does this matter?

Initially dismissed in December 2022, the lawsuit has now been sent back to the district court for further proceedings. The case could affect Chrome users who have used the browser since July 2016 without enabling the sync function, raising broader questions about the clarity and transparency of Google’s privacy policies.

AI cats spark online controversy and curiosity – meet Chubby

A new phenomenon in the digital world has taken the internet by storm: AI-generated cats like Chubby are captivating millions with their peculiar and often heart-wrenching stories. Videos featuring these virtual felines, crafted by AI, depict them in bizarre and tragic situations, garnering immense views and engagement on platforms like TikTok and YouTube. Chubby, a rotund ginger cat, has become particularly iconic, with videos of his misadventures, from shoplifting to being jailed, resonating deeply with audiences across the globe.


These AI-generated cat stories are not just popular; they are controversial, blurring the line between art and digital spam. Content creators are leveraging AI tools to produce these videos rapidly, feeding social media algorithms that favour such content, which often leads to virality. Despite criticisms of the quality and intent behind this AI-generated content, it is clear that these videos are striking a chord with viewers, many of whom find themselves unexpectedly moved by the fictional plights of these digital cats.

The surge in AI-generated cat videos raises questions about the future of online content and the role of AI in shaping what we consume. While some see it as a disturbing trend, others argue that it represents a new form of digital art, with creators like Charles, the mastermind behind Chubby, believing that AI can indeed produce compelling and emotionally resonant material. The popularity of these videos, particularly those with tragic endings, suggests that there is a significant demand for this type of content.

As AI continues to evolve and integrate further into social media, the debate over the value and impact of AI-generated content is likely to intensify. Whether these videos will remain a staple of internet culture or fade as a passing trend remains to be seen. For now, AI-generated cats like Chubby are at the forefront of a fascinating and complex intersection between technology, art, and human emotion.

Trump shares fake AI-generated images of Swift fans

Donald Trump has shared AI-generated images on social media, showing Taylor Swift fans endorsing his presidential campaign. The images, which are clearly fake, have sparked controversy, particularly since Swift has not publicly supported any candidates in the 2024 US election.

Trump, however, embraced the images, responding with ‘I accept!’ on his platform. The posts were also shared by an account that reposts his content on X (formerly Twitter). Despite their obvious fabrication, the posts have drawn significant attention online.

Taylor Swift, who endorsed Joe Biden in the last election, has not commented on these fake images. Her history with AI-generated content has been fraught, including deepfake videos that once led to a temporary ban on her searches on X.

Swift’s potential legal actions against AI content providers remain a topic of interest. However, the source of these recent fake posts remains unknown, raising concerns about the use of AI in political propaganda.

California’s child safety law faces legal setback

A US appeals court has upheld an essential aspect of an injunction against a California law designed to protect children from harmful online content. The law, known as the California Age-Appropriate Design Code Act, was challenged by NetChoice, a trade group representing major tech companies, on the grounds that it violates free speech rights under the First Amendment. The court agreed, stating that the law’s requirement for companies to create detailed reports on potential risks to children was likely unconstitutional.

The court suggested that California could protect children through less restrictive means, such as enhancing education for parents and children about online dangers or offering incentives for companies to filter harmful content. The appeals court partially overturned a lower court’s injunction but sent the case back for further review, particularly concerning provisions related to the collection of children’s data.

California’s law, modelled after a similar UK law, was set to take effect in July 2024. Governor Gavin Newsom defended the law, emphasising the need for child safety and urging NetChoice to drop its legal challenge. Despite this, NetChoice hailed the court’s decision as a win for free speech and online security, highlighting the ongoing legal battle over online content regulation.

Brazilian court limits WhatsApp data sharing in landmark ruling

A federal judge in São Paulo has issued a ruling that could significantly change how WhatsApp handles its users’ data in Brazil, limiting data sharing with other companies in the Meta group. The decision responds to a class action lawsuit filed by the Federal Public Ministry and the Brazilian Consumer Defense Institute.

Concretely, the judge ordered two measures: a prohibition on sharing Brazilian users’ data with other companies in the Meta group, and a requirement to implement an ‘opt-out’ function within the application within 90 days.

Daniel Monastersky, partner at Data Governance Latam, explained that although WhatsApp argued that its data-sharing practices are legal and that the company has provided adequate information to its users, the court did not consider it sufficiently clear and transparent. The ruling states that some of the company’s practices could constitute an abuse of consumers in Brazil.

Why does this matter?

The ruling was issued in the context of a growing global concern about protecting personal data and transparency in the practices of big technology companies. The decision could have significant implications for WhatsApp and other technology companies operating in the country. It could also serve as a precedent for similar cases in other jurisdictions, especially in countries seeking to strengthen their data protection laws.

Massive data breach exposes 2.7 billion US records online

A massive data breach has resulted in the exposure of over 2.7 billion records from National Public Data (NPD), now available on a criminal forum. The leaked data includes sensitive information such as names, mailing addresses, and Social Security numbers. Although the exact accuracy of the records is unclear, the breach is substantial, potentially affecting a significant portion of the US population.

The stolen database was posted on Breachforums, a site known for distributing such leaks, and was made available for free download. NPD, which compiles and sells personal data from public sources, is facing multiple lawsuits for failing to protect this information. The breach highlights ongoing issues with data security, as this is not the first time NPD’s data has been compromised.

In response to the data breach, there are increased calls for improved data protection measures and identity theft protection. Affected individuals are advised to monitor their accounts and be cautious of phishing attempts. This incident underscores the need for stronger encryption and security practices to safeguard personal data.

NPD has not yet responded to requests for comment. The breach raises serious concerns about the company’s data management practices and its responsibility to protect the information it collects.

AI innovation at Singapore’s NUHS reduces workload

Singapore’s National University Health System (NUHS) is leveraging advanced AI technologies to enhance efficiency and reduce administrative workloads in healthcare. Through the RUSSELL-GPT platform, which integrates large language models (LLMs) via Amazon Bedrock on Amazon Web Services (AWS), over a thousand clinicians now benefit from automated tasks such as drafting referrals and summarising patient data, reducing administrative time by 40%.

The NUHS team is working on event-driven Generative AI models that can perform tasks automatically when triggered by specific events, such as drafting discharge letters without needing any prompts. This approach aims to streamline processes further and reduce the administrative burden on healthcare staff.
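The event-driven approach described above can be pictured as a simple publish/subscribe loop, where a recorded clinical event triggers a drafting handler with no prompt from the clinician. The sketch below is purely illustrative: the event names, payload fields, and `draft_discharge_letter` helper are all hypothetical, NUHS’s actual RUSSELL-GPT internals are not public, and a real system would call an LLM rather than fill a template.

```python
from typing import Callable

# Minimal event bus: handlers subscribe to named clinical events.
_handlers: dict[str, list[Callable[[dict], str]]] = {}

def subscribe(event_name: str, handler: Callable[[dict], str]) -> None:
    """Register a handler to run whenever the named event fires."""
    _handlers.setdefault(event_name, []).append(handler)

def publish(event_name: str, payload: dict) -> list[str]:
    """Fire an event and collect each handler's output (e.g. a drafted letter)."""
    return [handler(payload) for handler in _handlers.get(event_name, [])]

def draft_discharge_letter(event: dict) -> str:
    # Hypothetical handler. In a real deployment this step would call a
    # large language model (NUHS serves models via Amazon Bedrock);
    # here it just fills a stub template so the flow is visible.
    return (f"Discharge summary for {event['patient_id']}: "
            f"admitted {event['admitted']}, discharged {event['discharged']}.")

# Wiring: recording a discharge automatically produces a draft letter.
subscribe("patient.discharged", draft_discharge_letter)

drafts = publish("patient.discharged", {
    "patient_id": "P-001",
    "admitted": "2024-08-01",
    "discharged": "2024-08-05",
})
print(drafts[0])
```

The point of the design is that drafting is a side effect of routine record-keeping: clinicians never open a chat window, which is what removes the prompting step from their workload.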

Ensuring patient data security is a top priority for NUHS, with robust measures in place to keep data within Singapore and comply with local privacy laws. RUSSELL-GPT also includes features to mitigate the risks of AI hallucinations, with mandatory training for users on recognising and managing such occurrences.

Despite the promise of LLMs, NUHS acknowledges that these models are not a cure-all. Classical AI still plays a critical role in tasks like clustering information and providing predictive insights, underlining the need for a balanced use of both approaches in healthcare.

Call for US investigation of TP-Link amid cybersecurity fears

Two US lawmakers have called on the Biden administration to investigate Chinese company TP-Link Technology Co. over concerns that its WiFi routers could pose a national security risk. The request was made in a letter to the Commerce Department, highlighting the potential for cyber attacks using vulnerabilities in TP-Link firmware. The company, a global leader in WiFi router sales, has not yet responded to the inquiry.

Concerns were raised after reports surfaced that TP-Link routers were exploited in cyber attacks targeting government officials in Europe. The lawmakers expressed fears that similar attacks could be carried out against US infrastructure. They have urged the Commerce Department to assess the threat posed by Chinese-affiliated routers, particularly TP-Link’s, given its market dominance.

TP-Link, founded in China in 1996, has been linked to cybersecurity concerns before. Last year, the US Cybersecurity and Infrastructure Security Agency (CISA) flagged vulnerabilities in the company’s routers that could be used for remote attacks. Around the same time, a Chinese state-sponsored hacking group was found to have targeted European officials using malicious implants in TP-Link routers.

The Commerce Department has the authority to impose bans or restrictions on technology transactions with companies from nations considered adversarial to US interests, including China. The investigation could lead to new measures aimed at preventing potential security risks from Chinese-made equipment in critical US infrastructure.

English Premier League to upgrade offside calls with new technology

The English Premier League is set to enhance offside decision-making with new technology from Genius Sports. Multiple iPhones, paired with advanced machine-learning models, will assist referees in making more accurate offside calls. Traditional Video Assistant Referee (VAR) systems have faced criticism for slow reviews and inconsistent decisions, leading to this shift.

Genius Sports developed ‘Semi-Automated Offside Technology’ (SAOT) as part of its GeniusIQ system. Up to 28 iPhones will be placed around the pitch to generate 3D models of players, offering precise offside line determinations. The iPhones, which capture between 7,000 and 10,000 data points per player, will replace expensive 4K cameras.

Strategically positioned on custom rigs, iPhones will cover optimal areas of the pitch. Data collected will be processed by the GeniusIQ system, using predictive algorithms to assess player positions even when obscured. High framerate recording and local processing capabilities further enhance the system’s accuracy.

Genius Sports plans to fully implement the system in the Premier League by the end of the year. While the exact date remains unconfirmed, this marks a significant advancement in football technology, promising a more precise and consistent approach to offside rulings.

Australia sets six-month deadline for AI use disclosure

Government agencies in Australia must disclose their use of AI within six months under a new policy effective from 1st September. The policy mandates that agencies prepare a transparency statement detailing their AI adoption and usage, which must be publicly accessible. Agencies must also designate a technology executive responsible for ensuring the policy’s implementation.

The transparency statements, updated annually or after significant changes, will include information on compliance, monitoring effectiveness, and measures to protect the public from potential AI-related harm. Although staff training on AI is strongly encouraged, it is not a mandatory requirement under the new policy.

The policy was developed in response to concerns about public trust, recognising that a lack of transparency and accountability in AI use could hinder its adoption. The government in Australia aims to position itself as a model of safe and responsible AI usage by integrating the new policy with existing frameworks and legislation.

Minister for Finance and the APS, Katy Gallagher, emphasised the importance of the policy in guiding agencies to use AI responsibly, ensuring Australians’ confidence in the government’s application of these technologies.