Tech giants face pushback over AI and book piracy

Meta and Anthropic’s recent attempts to defend their use of copyrighted books in training AI tools under the US legal concept of ‘fair use’ are unlikely to succeed in UK courts, according to the Publishers Association and the Society of Authors.

Legal experts argue that ‘fair use’ is far broader than the UK’s stricter ‘fair dealing’ rules, which limit the unauthorised use of copyrighted works.

The controversy follows revelations that Meta may have used pirated books from Library Genesis (LibGen) to train its AI model, Llama 3. Legal filings in the US claim the use of these books was transformative and formed only a small part of the training data.

However, UK organisations and authors insist that such use amounts to large-scale copyright infringement and would not be justified under UK law.

Calls for transparency and licensing reform are growing, with more than 8,000 writers signing a petition and protests planned outside Meta’s London headquarters.

Critics, including Baroness Beeban Kidron, argue that AI models rely on the creativity and quality of copyrighted content—making it all the more important for authors to retain control and receive proper compensation.

For more information on these topics, visit diplomacy.edu.

Aylo Holdings faces legal pressure over privacy concerns

Canada’s privacy commissioner has launched legal action against Aylo Holdings, the Montreal-based operator of Pornhub and other adult websites, for failing to ensure consent from individuals featured in uploaded content.

Commissioner Philippe Dufresne said Aylo had not adequately addressed concerns raised in an earlier investigation, which found the company allowed intimate images to be shared without the direct permission of those depicted.

The commissioner is seeking a Federal Court order to compel compliance with Canadian privacy law. Aylo Holdings has denied violating privacy laws and expressed disappointment at the legal action.

The company claims it has been in ongoing discussions with regulators and has implemented significant measures to prevent non-consensual content from being shared. These include mandatory uploader verification, proof of consent for all participants, stricter moderation, and banning content downloads.

The case stems from a complaint by a woman whose ex-boyfriend uploaded intimate images of her without her consent.

Although Aylo says the incident occurred in 2015 and policies have since improved, the privacy commissioner insists that stronger enforcement is needed. The legal battle could have significant implications for content moderation policies in the adult entertainment industry.


Google faces backlash from privacy advocates over new tracking rules

Google has introduced changes to its online tracking policies, allowing fingerprinting, a technique that collects data such as IP addresses and device information to help advertisers identify users. The new rules mark a shift in Google’s approach to online tracking.

Google states that these data signals are already widely used across the industry and that its goal is to balance privacy with the needs of businesses and advertisers. The company previously restricted fingerprinting for ad targeting but now argues that evolving internet usage—such as browsing from smart TVs and gaming consoles—has made conventional tracking methods, like cookies, less effective. The company also emphasises that users continue to have choices regarding personalised ads and that it encourages responsible data use across the industry.

Critics argue that fingerprinting is harder for users to control compared to cookies, as it does not rely on locally stored files but rather collects real-time data about a user’s device and network. Some privacy advocates believe this change marks a shift toward tracking methods that provide users with fewer options to opt out.
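The contrast with cookies can be made concrete: a fingerprint is not stored on the device at all but derived each visit from signals the device exposes anyway. The following is a minimal illustrative sketch, not any real advertising system; the signal names and values are hypothetical, and real fingerprinting uses many more attributes.

```python
import hashlib

def fingerprint(signals: dict[str, str]) -> str:
    """Derive a stable identifier by hashing passively observed signals."""
    # Serialise the signals in a fixed key order so the same device and
    # network always yield the same digest, with nothing stored client-side.
    canonical = "|".join(f"{key}={signals[key]}" for key in sorted(signals))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical signals a server or script could observe on each visit.
device = {
    "ip": "203.0.113.7",
    "user_agent": "Mozilla/5.0 (SmartTV; Tizen 7.0)",
    "screen": "3840x2160",
    "timezone": "Europe/London",
    "language": "en-GB",
}

print(fingerprint(device))
```

Because the identifier is recomputed from live device and network data rather than read from a stored file, clearing cookies does not change it; that is the property critics say leaves users with fewer ways to opt out.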

Martin Thomson, an engineer at Mozilla, noted that by allowing fingerprinting, Google has given itself—and the advertising industry it dominates—permission to use a form of tracking that people can’t do much to stop. Lena Cohen, staff technologist at the Electronic Frontier Foundation, expressed similar concerns, stating that fingerprinting could make user data more accessible to advertisers, data brokers, and law enforcement.

The UK’s Information Commissioner’s Office (ICO) has raised concerns over fingerprinting, stating that it could reduce users’ ability to control how their information is collected. In a December blog post, Stephen Almond, the ICO’s Executive Director of Regulatory Risk, called the change irresponsible and said that advertisers and businesses using the technology will need to demonstrate compliance with privacy and data protection law.

Google responded that it welcomes further discussions with regulators and highlighted that IP addresses have long been used across the industry for fraud prevention and security.


EU scraps tech patent, AI liability, and messaging privacy rules

The European Commission has abandoned proposed regulations on technology patents, AI liability, and privacy rules for messaging apps, citing a lack of foreseeable agreement among EU lawmakers and member states. The draft rules faced strong opposition from industry groups and major technology firms.

A proposed regulation on standard essential patents, designed to streamline licensing disputes for telecom and smart device technologies, was scrapped after opposition from patent holders like Nokia and Ericsson. Car manufacturers and tech giants such as Apple and Google had pushed for reforms to reduce royalty costs.

A proposal that would have allowed consumers to sue AI developers for harm caused by their technology was also withdrawn. The AI Liability Directive, first introduced in 2022, aimed to hold providers accountable for failures in AI systems. Legal experts say the move does not indicate a shift in the EU’s approach to AI regulation, as several laws already govern the sector. Meanwhile, plans to extend telecom privacy rules to platforms like WhatsApp and Skype have been dropped. The proposal, first introduced in 2017, had been stalled due to disagreements over tracking cookies and child protection measures.

The decision has drawn mixed reactions from industry groups. Nokia welcomed the withdrawal of patent rules, arguing they would have discouraged European investment in research and development. The Fair Standards Alliance, representing firms such as BMW, Tesla, and Google, expressed disappointment, warning that the decision undermines fair patent licensing. The Commission has stated it will reassess the need for revised proposals but has not provided a timeline for future regulatory efforts.


Belgium plans AI use for law enforcement and telecom strategy

Belgium’s new government, led by Prime Minister Bart De Wever, has announced plans to utilise AI tools in law enforcement, including facial recognition technology for identifying criminal suspects. The initiative will be overseen by Vanessa Matz, the country’s first federal minister for digitalisation, AI, and privacy. The AI policy is set to comply with the EU’s AI Act, which restricts real-time facial recognition in public spaces but allows narrow exceptions for law enforcement under strict conditions.

Alongside AI applications, the Belgian government also aims to combat disinformation by promoting transparency in online platforms and increasing collaboration with tech companies and media. The government’s approach to digitalisation also includes a long-term strategy to improve telecom infrastructure, focusing on providing ultra-fast internet access to all companies by 2030 and preparing for potential 6G rollouts.

The government has outlined a significant digital strategy that seeks to balance technological advancements with strong privacy and legal protections. As part of this, they are working on expanding camera legislation for smarter surveillance applications. These moves are part of broader efforts to strengthen the country’s digital capabilities in the coming years.

Europol highlights encryption concerns at the World Economic Forum

At the World Economic Forum in Davos, Europol’s executive director, Catherine De Bolle, urged tech companies to provide law enforcement access to encrypted messages, citing public safety concerns. While she argued this is necessary to combat crime and protect democracy, critics highlighted the risks of undermining encryption, which is essential for privacy and individual freedoms.

De Bolle compared accessing encrypted communications to executing a search warrant in a locked house. However, this analogy oversimplifies the issue, as encryption safeguards sensitive data and ensures private communication, even under authoritarian regimes. Weakening it could lead to widespread misuse, enabling mass surveillance and suppression, as seen in places like Russia.

Advocates for privacy stress that encryption is not merely a barrier to crime but a cornerstone of democracy, enabling free speech and safeguarding against state overreach. While law enforcement has other tools for crime-fighting, creating backdoors to encryption would expose everyone to cyber risks and potentially render digital security obsolete.

If governments succeed in weakening encryption, decentralised solutions backed by blockchain technology could rise, making such access nearly impossible in the future. The debate underscores the critical balance between security and preserving fundamental rights.

World ID forced to stop offering crypto for biometrics in Brazil

Brazil’s data protection authority, ANPD, has ordered Tools for Humanity (TFH), the company behind the World ID project, to cease offering crypto or financial compensation for biometric data collection. The move comes after an investigation launched in November 2023, with the ANPD citing concerns over the potential influence of financial incentives on individuals’ consent to share sensitive biometric data, such as iris scans.

The World ID project, which aims to create a universal digital identity, uses eye-scanning technology developed by TFH. The ANPD’s decision also reflects its concerns over the irreversible nature of biometric data collection and the inability to delete this information once submitted. Under Brazilian law, consent for processing such sensitive data must be freely given and informed, without undue influence.

This is not the first regulatory issue for World ID, as Germany’s data protection authority also issued corrective measures in December 2023, requiring the project to comply with the EU’s General Data Protection Regulation (GDPR). Meanwhile, the value of World Network’s native token, WLD, has dropped significantly, falling by over 8% in the past 24 hours and 83% from its peak.

US regulator escalates complaint against Snap

The United States Federal Trade Commission (FTC) has referred a complaint about Snap Inc’s AI-powered chatbot, My AI, to the Department of Justice (DOJ) for further investigation. The FTC alleges the chatbot caused harm to young users, though specific details about the alleged harm remain undisclosed.

Snap Inc defended its chatbot, asserting that My AI operates under rigorous safety and privacy measures and criticised the FTC for lacking concrete evidence to support its claims. Despite the company’s reassurances, the FTC stated it had uncovered indications of potential legal violations.

The announcement impacted Snap’s stock performance, with shares dropping by 5.2% to close at $11.22 on Thursday. The US FTC noted that publicising the complaint’s transfer to the DOJ was in the public interest, underscoring the gravity of the allegations.

Google and Microsoft join inauguration donor list

Google and Microsoft have each pledged $1 million to support Donald Trump’s upcoming presidential inauguration, joining other tech giants such as Meta, Amazon, and Apple’s Tim Cook in contributing significant sums. The donations appear to be part of broader strategies by these companies to maintain access to political leadership in a rapidly changing regulatory environment.

Google, which has faced threats from Trump regarding potential break-ups, aims to secure goodwill through financial contributions and online visibility, including a YouTube livestream of the inauguration. Microsoft has also maintained steady political donations, previously giving $500,000 to Trump’s first inauguration as well as to President Joe Biden’s ceremony.

This alignment with Trump marks a notable trend of tech companies seeking to protect their interests, particularly as issues like antitrust regulations and data privacy laws remain in political crosshairs. With both tech giants navigating a landscape of increased government scrutiny, their contributions indicate a cautious approach to preserving influence at the highest levels of power.

These donations reflect a pragmatic move by Silicon Valley, where cultivating political ties is seen as a way to safeguard business operations amid shifting political dynamics.

Study reveals privacy risks of smart home cameras

Smart home cameras have become a staple for security-conscious households, offering peace of mind by monitoring both indoor and outdoor spaces. However, new research by Surfshark exposes alarming privacy concerns, showing that these devices collect far more user data than necessary. Outdoor security camera apps top the list, gathering an average of 12 data points, including sensitive information such as precise location, email addresses, and payment details, which is 50% more than the average for other smart devices.

Indoor camera apps are slightly less invasive but still problematic, collecting an average of nine data points, including audio data and purchase histories. Some apps, like those from Arlo, Deep Sentinel, and D-Link, even extract contact information unnecessarily, raising serious questions about user consent and safety. The absence of robust privacy regulations leaves users vulnerable to data breaches, cyberattacks, and misuse of personal information.

Experts recommend limiting data-sharing permissions, using strong passwords, and regularly updating privacy settings to mitigate risks. Options such as enabling local storage instead of cloud services and employing a VPN can further protect against data leaks. While smart cameras bring convenience, they highlight the urgent need for clearer regulations to safeguard consumer privacy in the era of connected technology.