Google faces backlash from privacy advocates over new tracking rules

Google has introduced changes to its online tracking policies, allowing fingerprinting, a technique that collects data such as IP addresses and device information to help advertisers identify users. The new rules mark a shift in Google’s approach to online tracking.

Google states that these data signals are already widely used across the industry and that its goal is to balance privacy with the needs of businesses and advertisers. The company previously restricted fingerprinting for ad targeting but now argues that evolving internet usage—such as browsing from smart TVs and gaming consoles—has made conventional tracking methods, like cookies, less effective. The company also emphasises that users continue to have choices regarding personalised ads and that it encourages responsible data use across the industry.

Critics argue that fingerprinting is harder for users to control compared to cookies, as it does not rely on locally stored files but rather collects real-time data about a user’s device and network. Some privacy advocates believe this change marks a shift toward tracking methods that provide users with fewer options to opt out.
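
To illustrate why critics say fingerprinting is harder to escape than cookies, here is a minimal, hypothetical TypeScript sketch of the kind of signals any in-page script can read passively, using only standard browser APIs; the IP address mentioned above is visible to servers at the network layer without any script at all. This shows the general technique only, not Google's or any vendor's actual implementation.

```typescript
// Minimal browser-fingerprint sketch (illustrative, runs in a browser).
// Every signal below is read passively: nothing is stored on the device,
// which is why clearing cookies does not remove a fingerprint.
async function deviceFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,                                      // browser and OS build
    navigator.language,                                       // locale
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // display profile
    Intl.DateTimeFormat().resolvedOptions().timeZone,         // timezone
    String(navigator.hardwareConcurrency),                    // CPU core count
  ].join('|');

  // Hash the combined signals into a single stable identifier.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}

deviceFingerprint().then((id) => console.log('fingerprint:', id));
```

Because the identifier is recomputed from the device's own characteristics on every visit, there is no stored file for the user to delete, which is the crux of the opt-out concern.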

Martin Thomson, an engineer at Mozilla, noted that by allowing fingerprinting, Google has given itself—and the advertising industry it dominates—permission to use a form of tracking that people can’t do much to stop. Lena Cohen, staff technologist at the Electronic Frontier Foundation, expressed similar concerns, stating that fingerprinting could make user data more accessible to advertisers, data brokers, and law enforcement.

The UK’s Information Commissioner’s Office (ICO) has raised concerns over fingerprinting, stating that it could reduce users’ ability to control how their information is collected. In a December blog post, Stephen Almond, the ICO’s Executive Director of Regulatory Risk, described the change as irresponsible and said that advertisers and businesses using the technology will need to demonstrate compliance with privacy and data protection laws.

Google responded that it welcomes further discussions with regulators and highlighted that IP addresses have long been used across the industry for fraud prevention and security.

EU scraps tech patent, AI liability, and messaging privacy rules

The European Commission has abandoned proposed rules on technology patents, AI liability, and messaging-app privacy, citing a lack of foreseeable agreement among EU lawmakers and member states. The draft rules faced strong opposition from industry groups and major technology firms. A proposed regulation on standard essential patents, designed to streamline licensing disputes for telecom and smart device technologies, was scrapped after opposition from patent holders such as Nokia and Ericsson. Car manufacturers and tech giants such as Apple and Google had pushed for reforms to reduce royalty costs.

A proposal that would have allowed consumers to sue AI developers for harm caused by their technology was also withdrawn. The AI Liability Directive, first introduced in 2022, aimed to hold providers accountable for failures in AI systems. Legal experts say the move does not indicate a shift in the EU’s approach to AI regulation, as several laws already govern the sector. Meanwhile, plans to extend telecom privacy rules to platforms like WhatsApp and Skype have been dropped. The proposal, first introduced in 2017, had been stalled due to disagreements over tracking cookies and child protection measures.

The decision has drawn mixed reactions from industry groups. Nokia welcomed the withdrawal of patent rules, arguing they would have discouraged European investment in research and development. The Fair Standards Alliance, representing firms such as BMW, Tesla, and Google, expressed disappointment, warning that the decision undermines fair patent licensing. The Commission has stated it will reassess the need for revised proposals but has not provided a timeline for future regulatory efforts.

Belgium plans AI use for law enforcement and telecom strategy

Belgium’s new government, led by Prime Minister Bart De Wever, has announced plans to use AI tools in law enforcement, including facial recognition technology for identifying criminal suspects. The initiative will be overseen by Vanessa Matz, the country’s first federal minister for digitalisation, AI, and privacy. The AI policy is set to comply with the EU’s AI Act, which restricts high-risk systems such as facial recognition but allows exceptions for law enforcement under strict conditions.

Alongside AI applications, the Belgian government also aims to combat disinformation by promoting transparency in online platforms and increasing collaboration with tech companies and media. The government’s approach to digitalisation also includes a long-term strategy to improve telecom infrastructure, focusing on providing ultra-fast internet access to all companies by 2030 and preparing for potential 6G rollouts.

The government has outlined a significant digital strategy that seeks to balance technological advancements with strong privacy and legal protections. As part of this, they are working on expanding camera legislation for smarter surveillance applications. These moves are part of broader efforts to strengthen the country’s digital capabilities in the coming years.

Europol highlights encryption concerns at the World Economic Forum

At the World Economic Forum in Davos, Europol’s executive director, Catherine De Bolle, urged tech companies to provide law enforcement access to encrypted messages, citing public safety concerns. While she argued this is necessary to combat crime and protect democracy, critics highlighted the risks of undermining encryption, which is essential for privacy and individual freedoms.

De Bolle compared accessing encrypted communications to executing a search warrant in a locked house. However, this analogy oversimplifies the issue, as encryption safeguards sensitive data and ensures private communication, even under authoritarian regimes. Weakening it could lead to widespread misuse, enabling mass surveillance and suppression, as seen in places like Russia.

Advocates for privacy stress that encryption is not merely a barrier to crime but a cornerstone of democracy, enabling free speech and safeguarding against state overreach. While law enforcement has other tools for crime-fighting, creating backdoors to encryption would expose everyone to cyber risks and potentially render digital security obsolete.

If governments succeed in weakening encryption, decentralised, blockchain-backed alternatives could gain traction, making such access nearly impossible in the future. The debate underscores the critical balance between ensuring security and preserving fundamental rights.

World ID forced to stop offering crypto for biometrics in Brazil

Brazil’s data protection authority, ANPD, has ordered Tools for Humanity (TFH), the company behind the World ID project, to cease offering crypto or financial compensation for biometric data collection. The move comes after an investigation launched in November 2024, with the ANPD citing concerns that financial incentives could unduly influence individuals’ consent to share sensitive biometric data, such as iris scans.

The World ID project, which aims to create a universal digital identity, uses eye-scanning technology developed by TFH. The ANPD’s decision also reflects its concerns over the irreversible nature of biometric data collection and the inability to delete this information once submitted. Under Brazilian law, consent for processing such sensitive data must be freely given and informed, without undue influence.

This is not the first regulatory issue for World ID: Germany’s data protection authority also issued corrective measures in December 2024, requiring the project to comply with the EU’s General Data Protection Regulation (GDPR). Meanwhile, the value of World Network’s native token, WLD, has dropped sharply, falling by over 8% in the past 24 hours and 83% from its peak in March 2024.

US regulator escalates complaint against Snap

The United States Federal Trade Commission (FTC) has referred a complaint about Snap Inc’s AI-powered chatbot, My AI, to the Department of Justice (DOJ) for further investigation. The FTC alleges the chatbot caused harm to young users, though specific details about the alleged harm remain undisclosed.

Snap Inc defended its chatbot, asserting that My AI operates under rigorous safety and privacy measures, and criticised the FTC for lacking concrete evidence to support its claims. Despite the company’s reassurances, the FTC stated it had uncovered indications of potential legal violations.

The announcement weighed on Snap’s stock, with shares dropping 5.2% to close at $11.22 on Thursday. The FTC noted that publicising the complaint’s transfer to the DOJ was in the public interest, underscoring the gravity of the allegations.

Google and Microsoft join inauguration donor list

Google and Microsoft have each pledged $1 million to support Donald Trump’s upcoming presidential inauguration, joining other tech giants such as Meta and Amazon, as well as Apple chief executive Tim Cook, in contributing significant sums. The donations appear to be part of broader strategies by these companies to maintain access to political leadership in a rapidly changing regulatory environment.

Google, which has faced threats from Trump regarding potential break-ups, aims to secure goodwill through financial contributions and online visibility, including a YouTube livestream of the inauguration. Microsoft has also maintained steady political donations, previously giving $500,000 to Trump’s first inauguration as well as to President Joe Biden’s ceremony.

This alignment with Trump marks a notable trend of tech companies seeking to protect their interests, particularly as issues like antitrust regulations and data privacy laws remain in political crosshairs. With both tech giants navigating a landscape of increased government scrutiny, their contributions indicate a cautious approach to preserving influence at the highest levels of power.

These donations reflect a pragmatic move by Silicon Valley, where cultivating political ties is seen as a way to safeguard business operations amid shifting political dynamics.

Study reveals privacy risks of smart home cameras

Smart home cameras have become a staple for security-conscious households, offering peace of mind by monitoring both indoor and outdoor spaces. However, new research by Surfshark exposes alarming privacy concerns, showing that these devices collect far more user data than necessary. Outdoor security camera apps top the list, gathering an average of 12 data points, including sensitive information such as precise location, email addresses, and payment details. That is 50% more than the average for other smart devices.

Indoor camera apps are slightly less invasive but still problematic, collecting an average of nine data points, including audio data and purchase histories. Some apps, like those from Arlo, Deep Sentinel, and D-Link, even extract contact information unnecessarily, raising serious questions about user consent and safety. The absence of robust privacy regulations leaves users vulnerable to data breaches, cyberattacks, and misuse of personal information.

Experts recommend limiting data-sharing permissions, using strong passwords, and regularly updating privacy settings to mitigate risks. Options such as enabling local storage instead of cloud services and employing a VPN can further protect against data leaks. While smart cameras bring convenience, they highlight the urgent need for clearer regulations to safeguard consumer privacy in the era of connected technology.

Apple’s iPhone photo feature sparks privacy concerns

Apple has introduced an ‘Enhanced Visual Search’ feature in iOS 18, allowing users to identify landmarks in photos by matching data with a global database. While convenient, the feature has sparked privacy concerns, as it is enabled by default, requiring users to manually turn it off in settings if they prefer not to share photo data with Apple.

The feature uses on-device machine learning to detect landmarks in photos, creating encrypted ‘vector embeddings’ of image data. These are then sent to Apple for comparison with its database. While the company has reportedly implemented privacy safeguards, such as encrypting and condensing data into machine-readable formats, critics argue the feature should have been opt-in rather than opt-out, aligning with Apple’s usual privacy standards.
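
Apple has not published the exact pipeline, so the TypeScript sketch below only illustrates the general embedding-matching pattern the article describes: an image is reduced to a numeric vector and scored against reference vectors, here with cosine similarity. The landmark names and toy vectors are hypothetical, and the encryption layer Apple describes is omitted entirely.

```typescript
// Illustrative embedding match: compare a query vector against a database
// of reference vectors and return the closest entry. The embedding step
// itself (image -> vector) would come from an on-device ML model (assumed).
type Embedding = number[];

function cosineSimilarity(a: Embedding, b: Embedding): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Scan the reference set and keep the highest-scoring landmark.
function bestMatch(
  query: Embedding,
  database: Map<string, Embedding>,
): { name: string; score: number } | null {
  let best: { name: string; score: number } | null = null;
  for (const [name, vector] of database) {
    const score = cosineSimilarity(query, vector);
    if (!best || score > best.score) best = { name, score };
  }
  return best;
}

// Hypothetical usage with toy 3-dimensional embeddings.
const landmarks = new Map<string, Embedding>([
  ['Eiffel Tower', [0.9, 0.1, 0.2]],
  ['Colosseum', [0.1, 0.8, 0.3]],
]);
console.log(bestMatch([0.85, 0.15, 0.25], landmarks)); // closest: Eiffel Tower
```

The privacy question in the article is not the matching itself but where it runs: a lookup against a global landmark database happens on Apple's servers, which is why the data must leave the device at all.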

The feature builds on Apple’s earlier ‘Visual Look Up’ tool, which identifies objects such as plants or symbols without sending data to Apple’s servers. Privacy advocates suggest that Apple could have kept this fully on-device approach for Enhanced Visual Search, questioning why similar functionality now requires sharing data.

The debate highlights ongoing tensions between technological convenience and user privacy, raising questions about how far companies should go in enabling features that require data sharing without explicit consent.

Meta resolves Australian privacy dispute over Cambridge Analytica scandal

Meta Platforms, the parent company of Facebook, has settled a major privacy lawsuit in Australia with a record A$50 million payment. This settlement concludes years of legal proceedings over allegations that personal data of 311,127 Australian Facebook users was improperly exposed and risked being shared with consulting firm Cambridge Analytica. The firm was infamous for using such data for political profiling, including work on the Brexit campaign and Donald Trump’s election.

Australia’s privacy watchdog initiated the case in 2020 after finding that a third-party personality quiz app on Facebook’s platform, This is Your Digital Life, was linked to the broader Cambridge Analytica scandal first revealed in 2018. Australian Information Commissioner Elizabeth Tydd described the settlement as the largest of its kind in the nation, addressing significant privacy concerns.

Meta stated the agreement was reached on a “no admission” basis, marking an end to the legal battle. The case had already yielded a significant victory for Australian regulators when the High Court declined to hear Meta’s appeal in 2023, forcing the company into mediation. The outcome highlights Australia’s growing resolve in holding global tech firms accountable for user data protection.