New ranking shows which AI respects your data

A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.

The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.

Le Chat emerged as the top performer thanks to its limited data collection and clear privacy practices, though it lost some points on transparency.

ChatGPT followed in second place, earning praise for its clear privacy policies and for offering users tools to limit data use, despite concerns about how it handles training data. Grok, xAI’s chatbot, took third place, though its privacy policy was harder to read.

At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.

Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.

Incogni’s report found that some services, such as ChatGPT, Grok and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek and Meta AI, offered no clear way to opt out.

The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit accuses Anthropic of misusing user content

Reddit has taken legal action against AI startup Anthropic, alleging that the company scraped its platform without permission and used the data to train and commercialise its Claude AI models.

The lawsuit, filed in the San Francisco Superior Court, accuses Anthropic of breach of contract, unjust enrichment, and interference with Reddit’s operations.

According to Reddit, Anthropic accessed the platform more than 100,000 times despite publicly claiming to have stopped doing so.

The complaint claims Anthropic ignored Reddit’s technical safeguards, such as robots.txt files, and bypassed the platform’s user agreement to extract large volumes of user-generated content.
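
For context, robots.txt is a voluntary convention: a well-behaved crawler downloads the file and checks its rules before requesting pages, but nothing technically forces compliance, which is why Reddit pairs the claim with its user agreement. As a minimal sketch of what compliance looks like (not Anthropic’s actual code, and with a purely hypothetical crawler name), Python’s standard library can perform the check:

```python
# Sketch of a robots.txt-respecting fetch using only the Python standard
# library. The crawler name "ExampleBot" is hypothetical.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()  # download and parse the site's crawling rules

url = "https://www.reddit.com/r/technology/"
if rp.can_fetch("ExampleBot", url):
    print("robots.txt permits ExampleBot to fetch", url)
else:
    print("robots.txt disallows ExampleBot from fetching", url)
```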

Reddit argues that Anthropic’s actions undermine its licensing deals with companies like OpenAI and Google, which have agreed to strict content usage and deletion protocols.

The filing asserts that Anthropic intentionally used personal data from Reddit without ever seeking user consent, calling the company’s conduct deceptive. Despite public statements suggesting respect for privacy and web-scraping limitations, Anthropic is portrayed as having disregarded both.

The lawsuit even cites Anthropic’s own 2021 research that acknowledged Reddit content as useful in training AI models.

Reddit is now seeking damages, repayment of profits, and a court order to stop Anthropic from using its data further. The market responded positively, with Reddit’s shares closing nearly 6.7% higher at $118.21, a sign of investor support for the company’s aggressive stance on data protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp to add usernames for better privacy

WhatsApp is preparing to introduce usernames, allowing users to hide their phone numbers and opt for a unique ID instead. Meta’s push reflects growing demand for more secure and anonymous communication online.

Currently in development and not yet available for testing, the new feature will let users create usernames with letters, numbers, periods, and underscores, while blocking misleading formats like web addresses.
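
As a rough sketch of how such a rule could be enforced (WhatsApp’s exact constraints have not been published, so the character set, length limits and web-address heuristic below are assumptions), a validator might accept the permitted characters while rejecting strings that resemble URLs:

```python
import re

# Hypothetical username validator: accepts letters, digits, periods and
# underscores, and rejects strings that look like web addresses.
# The length limits (3-30) and the URL heuristic are assumptions, not
# WhatsApp's published rules.
USERNAME_RE = re.compile(r"[A-Za-z0-9._]{3,30}")
LOOKS_LIKE_URL = re.compile(r"(?i)(https?://|www\.|\.(com|net|org)$)")

def is_valid_username(name: str) -> bool:
    return bool(USERNAME_RE.fullmatch(name)) and not LOOKS_LIKE_URL.search(name)

print(is_valid_username("ana_reyes.92"))  # True
print(is_valid_username("www.example"))   # False: resembles a web address
```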

The move aims to improve privacy by letting users connect without revealing personal contact details. A system message will alert contacts whenever a username is updated, adding transparency to the process.

Although the feature has so far only been spotted in beta builds, it is expected to roll out soon, bringing WhatsApp in line with other major messaging platforms that already support username-based identities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The future of search: Personalised AI and the privacy crossroads

The rise of personalised AI is poised to radically reshape how we interact with technology, with search engines evolving into intelligent agents that not only retrieve information but also understand and act on our behalf. No longer just a list of links, search is merging into chatbots and AI agents that synthesise information from across the web to deliver tailored answers.

Google and OpenAI have already begun this shift, with services like AI Overviews and ChatGPT Search leading a trend that analysts say could cut traditional search volume by 25% by 2026. That transformation is driven by the AI industry’s hunger for personal data.

To offer highly customised responses and assistance, AI systems require in-depth profiles of their users, encompassing everything from dietary preferences to political beliefs. The deeper the personalisation, the greater the privacy risks.

OpenAI, for example, envisions a ‘super assistant’ capable of managing nearly every aspect of your digital life, fed by detailed knowledge of your past interactions, habits, and preferences. Google and Meta are pursuing similar paths, with Mark Zuckerberg even imagining AI therapists and friends that recall your social context better than you do.

As these tools become more capable, they also grow more invasive. Wearable, always-on AI devices equipped with microphones and cameras are on the horizon, signalling an era of ambient data collection.

AI assistants won’t just help answer questions—they’ll book vacations, buy gifts, and even manage your calendar. But with these conveniences comes unprecedented access to our most intimate data, raising serious concerns over surveillance and manipulation.

Policymakers are struggling to keep up. Without a comprehensive federal privacy law, the US relies on a patchwork of state laws and limited federal oversight. Proposals to regulate data sharing, such as forcing Google to hand over user search histories to competitors like OpenAI and Meta, risk compounding the problem unless strict safeguards are enacted.

As AI becomes the new gatekeeper to the internet, regulators face a daunting task: enabling innovation while ensuring that the AI-powered future doesn’t come at the expense of our privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google pays around $1.4 billion over privacy case

Google has agreed to pay $1.375 billion to settle a lawsuit brought by the state of Texas over allegations that it violated users’ privacy through features such as Incognito mode, Location History, and biometric data collection.

Despite the sizeable sum, Google denies any wrongdoing, stating that the claims were based on old practices that have since been changed.

Texas Attorney General Ken Paxton announced the settlement, emphasising that large tech firms are not above the law.

He accused Google of covertly tracking individuals’ locations and personal searches, while also collecting biometric data such as voiceprints and facial geometry — all without users’ consent. Paxton claimed the state’s legal challenge had forced Google to answer for its actions.

Although the settlement resolves two lawsuits filed in 2022, the specific terms and how the funds will be used remain undisclosed. A Google spokesperson maintained that the resolution brings closure to claims about past practices without requiring any changes to its current products.

The case comes after a similar $1.4 billion agreement involving Meta, which faced accusations of unlawfully gathering facial recognition data. The repeated scrutiny from Texas authorities signals a broader pushback against the data practices of major tech companies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft Recall raises privacy alarm again

Fresh concerns are mounting over privacy risks after Microsoft confirmed the return of its controversial Recall feature for Copilot+ PCs. Recall takes continuous screenshots of everything on a Windows user’s screen and stores them in a searchable database powered by AI.

Although screenshots are saved locally and protected by a PIN, experts warn the system undermines the security of encrypted apps like WhatsApp and Signal by storing anything shown on screen, even if it was meant to disappear.

Critics argue that even users who have not enabled Recall could have their private messages captured if someone they are chatting with has the feature switched on.

Cybersecurity experts have already demonstrated that guessing the PIN gives full access to all screen content—deleted or not—including sensitive conversations, images, and passwords.

With no automatic warning or opt-out for people being recorded, concerns are growing that secure communication is being eroded by stealth.

At the same time, Meta has revealed new AI tools for WhatsApp that can summarise chats and suggest replies. Although the company insists its ‘Private Processing’ feature will ensure security, experts are questioning why secure messaging platforms need AI integrations at all.

Even if WhatsApp’s AI remains private, Microsoft Recall could still quietly record and store messages, creating a privacy paradox that many users may not fully understand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp introduces privacy feature to block Meta AI

Meta has come under fire for integrating its AI assistant into WhatsApp, with users spotting an unremovable blue circle representing Meta AI’s presence.

While Google has favoured opt-in models for AI tools, Meta’s approach has sparked backlash, with some critics accusing it of disregarding WhatsApp’s privacy-first roots. Though users can’t remove the assistant entirely, WhatsApp now offers a workaround to disable its functions in individual chats.

A new ‘Advanced Chat Privacy’ setting allows users to block AI interactions on a chat-by-chat basis. When enabled, it prevents chats from being exported, stops media from auto-downloading and, crucially, blocks Meta AI from accessing messages.

WhatsApp says this is part of a broader plan to offer greater privacy controls, reaffirming its focus on secure and private messaging.

Meta maintains that it cannot read message content and that only limited data is shared when AI is used. Still, the company advises against sharing sensitive information with Meta AI.

The new privacy setting is being rolled out to all users on the latest version of WhatsApp and can be activated via the chat settings menu.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple challenges UK government over encrypted iCloud access order

A British court has confirmed that Apple is engaged in legal proceedings against the UK government concerning a statutory notice linked to iCloud account encryption. The Investigatory Powers Tribunal (IPT), which handles cases involving national security and surveillance, disclosed limited information about the case, lifting previous restrictions on its existence.

The dispute centres on a government-issued Technical Capability Notice (TCN), which, according to reports, required Apple to provide access to encrypted iCloud data for users in the UK. Apple subsequently removed the option for end-to-end encryption on iCloud accounts in the region earlier this year. While the company has not officially confirmed the connection, it has consistently stated it does not create backdoors or master keys for its products.

The government’s position has been to neither confirm nor deny the existence of individual notices. However, in a rare public statement, a government spokesperson clarified that TCNs do not grant direct access to data and must be used in conjunction with appropriate warrants and authorisations. The spokesperson also stated that the notices are designed to support existing investigatory powers, not expand them.

The IPT allowed the basic facts of the case to be released following submissions from media outlets, civil society organisations, and members of the United States Congress. These parties argued that public interest considerations justified disclosure of the case’s existence. The tribunal concluded that confirming the identities of the parties and the general subject matter would not compromise national security or the public interest.

Previous public statements by US officials, including the former President and the current Director of National Intelligence, have acknowledged concerns surrounding the TCN process and its implications for international technology companies. In particular, questions have been raised regarding transparency and oversight of such powers.

Legal academics and members of the intelligence community have also commented on the broader implications of government access to encrypted platforms, with some suggesting that increased openness may be necessary to maintain public trust.

The case remains ongoing. Future proceedings will be determined once both parties have reviewed a private judgment issued by the court. The IPT is expected to issue a procedural timetable following input from both Apple and the UK Home Secretary.

For more information on these topics, visit diplomacy.edu.

Tech giants face pushback over AI and book piracy

Meta and Anthropic’s recent attempts to defend their use of copyrighted books in training AI tools under the US legal concept of ‘fair use’ are unlikely to succeed in UK courts, according to the Publishers Association and the Society of Authors.

Legal experts argue that ‘fair use’ is far broader than the UK’s stricter ‘fair dealing’ rules, which limit the unauthorised use of copyrighted works.

The controversy follows revelations that Meta may have used pirated books from Library Genesis to train its AI model, Llama 3. Meta’s legal filings in the US claim the use of these books was transformative and that they formed only a small part of the training data.

However, UK organisations and authors insist that such use amounts to large-scale copyright infringement and would not be justified under UK law.

Calls for transparency and licensing reform are growing, with more than 8,000 writers signing a petition and protests planned outside Meta’s London headquarters.

Critics, including Baroness Beeban Kidron, argue that AI models rely on the creativity and quality of copyrighted content—making it all the more important for authors to retain control and receive proper compensation.

For more information on these topics, visit diplomacy.edu.

Aylo Holdings faces legal pressure over privacy concerns

Canada’s privacy commissioner has launched legal action against Aylo Holdings, the Montreal-based operator of Pornhub and other adult websites, for failing to ensure consent from individuals featured in uploaded content.

Commissioner Philippe Dufresne said Aylo had not adequately addressed concerns raised in an earlier investigation, which found the company allowed intimate images to be shared without the direct permission of those depicted.

The commissioner is seeking a Federal Court order to enforce compliance with Canada’s privacy laws. Aylo Holdings has denied any violation and expressed disappointment at the legal action.

The company claims it has been in ongoing discussions with regulators and has implemented significant measures to prevent non-consensual content from being shared. These include mandatory uploader verification, proof of consent for all participants, stricter moderation, and banning content downloads.

The case stems from a complaint by a woman whose ex-boyfriend uploaded intimate images of her without her consent.

Although Aylo says the incident occurred in 2015 and policies have since improved, the privacy commissioner insists that stronger enforcement is needed. The legal battle could have significant implications for content moderation policies in the adult entertainment industry.

For more information on these topics, visit diplomacy.edu.