EU proposal to scan private messages gains support

The European Union’s ‘Chat Control’ proposal is gaining traction, with 19 member states now supporting a plan to scan private messages on encrypted apps. If adopted, apps such as WhatsApp, Signal, and Telegram would be required to scan all messages, photos, and videos on users’ devices before they are encrypted.
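
The mechanism at issue is usually described as client-side scanning: content is matched against a blocklist of known-bad hashes on the device, before end-to-end encryption is applied. The sketch below is a minimal illustration of that idea, not the proposal’s actual specification; the hash list is a placeholder, and real deployments use perceptual hashing (PhotoDNA-style, robust to re-encoding) rather than the exact SHA-256 match shown.

```python
import hashlib

# Illustrative blocklist of known-bad content hashes (placeholder value).
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def allowed_to_encrypt(payload: bytes) -> bool:
    """Return True if the payload passes the on-device check.

    Real systems use perceptual hashes (robust to resizing and
    re-encoding) rather than an exact digest as shown here.
    """
    return hashlib.sha256(payload).hexdigest() not in KNOWN_HASHES

message = b"holiday photos attached"
if allowed_to_encrypt(message):
    print("proceed to end-to-end encryption and send")
else:
    print("content flagged before it was ever encrypted")
```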

France, Denmark, Belgium, Hungary, Sweden, Italy, and Spain back the measure, while Germany has yet to decide. The proposal could pass by mid-October under the EU’s qualified majority voting system if Germany joins.

The initiative aims to prevent child sexual abuse material (CSAM) but has sparked concerns over mass surveillance and the erosion of digital privacy.

In addition to scanning, the proposal would introduce mandatory age verification, which could remove anonymity on messaging platforms. Critics argue the plan amounts to real-time surveillance of private conversations and threatens fundamental freedoms.

Telegram founder Pavel Durov recently warned that censorship and regulatory pressure are pushing France towards societal collapse. He disclosed attempts by French officials to censor political content on his platform, requests he refused to comply with.

Human rights must anchor crypto design

Crypto builders face growing pressure to design systems that protect fundamental human rights from the outset. As concerns mount over surveillance, state-backed ID systems, and AI impersonation, experts warn that digital infrastructure must not compromise individual freedom.

Privacy-by-default, censorship resistance, and decentralised self-custody are no longer idealistic features — they are essential for any credible Web3 system. Critics argue that many current tools merely replicate traditional power structures, offering centralisation disguised as innovation.

The collapse of platforms like FTX has only strengthened calls for human-centric solutions.

New approaches are needed to ensure people can prove their personhood online without relying on governments or corporations. Digital inclusion depends on verification systems that are censorship-resistant, privacy-preserving and accessible.

Likewise, self-custody must evolve beyond fragile key backups and complex interfaces to empower everyday users.
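
One commonly proposed way past fragile single backups is to split a key into shares held by different guardians, so that no one backup is a single point of failure. The sketch below is a minimal n-of-n XOR split to illustrate the idea; production wallets typically use threshold schemes such as Shamir secret sharing (e.g. SLIP-0039), and all function names here are hypothetical.

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split a key into n XOR shares; all n are needed to recover it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def recover_key(shares: list[bytes]) -> bytes:
    """XOR all shares back together to reconstruct the key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

seed = secrets.token_bytes(32)   # e.g. a wallet seed
parts = split_key(seed, 3)       # give one share to each guardian
assert recover_key(parts) == seed
```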

While embedding values in code brings ethical and political risks, avoiding the issue could lead to greater harm. For the promise of Web3 to be realised, rights must be a design priority — not an afterthought.

Europe to launch Eurosky to regain digital control

Europe is taking steps to assert its digital independence by launching the Eurosky initiative, a government-backed project to reduce reliance on US tech giants.

Eurosky seeks to build European infrastructure for social media platforms and promote digital sovereignty. The goal is to ensure that the continent’s digital space is governed by European laws, values, and rules, rather than being subject to the influence of foreign companies or governments.

To support this goal, Eurosky plans to implement a decentralised content moderation system, modelled after the approach used by the Bluesky network.

Moderation, essential for removing harmful or illegal content like child exploitation or stolen data, remains a significant obstacle for new platforms. Eurosky offers a non-profit moderation service to help emerging social media providers handle this task, thus lowering the barriers to entering the market.
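
In the Bluesky model this works by decoupling moderation from hosting: independent ‘labeler’ services publish labels about content, and clients choose which labelers to trust and which labels to act on. The sketch below is a conceptual illustration of that pattern; the data shapes and names are assumptions, not Eurosky’s or the AT Protocol’s actual interfaces.

```python
# Conceptual sketch: clients filter posts using labels published by
# independent moderation services ("labelers") they choose to trust.

posts = [
    {"id": 1, "text": "holiday photos"},
    {"id": 2, "text": "phishing link"},
]

# Each labeler maps post ids to the labels it has applied.
labelers = {
    "eu-nonprofit-moderation": {2: {"scam"}},
    "another-labeler": {},
}

subscribed = {"eu-nonprofit-moderation"}        # the user's chosen labelers
blocked_labels = {"scam", "csam", "stolen-data"}

def visible(post: dict) -> bool:
    """Hide a post if any subscribed labeler applied a blocked label."""
    for name in subscribed:
        if labelers[name].get(post["id"], set()) & blocked_labels:
            return False
    return True

print([p["id"] for p in posts if visible(p)])   # -> [1]
```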

The project enjoys strong public and political backing. Polls show that majorities in France, Germany, and Spain prefer Europe-based platforms, with only 5% favouring US providers.

Eurosky also has support from four European governments, though their identities remain undisclosed. This momentum aligns with a broader shift in user behaviour, as Europeans increasingly turn to local tech services amid privacy and sovereignty concerns.

Privacy concerns rise over Gemini’s on‑device data access

From 7 July 2025, Google’s Gemini AI will access your WhatsApp, SMS, and call apps by default, even without Gemini Apps Activity enabled, through an Android OS ‘System Intelligence’ integration.

Google insists the assistant cannot read or summarise your WhatsApp messages; it only performs actions like sending replies and accessing notifications.

Integration occurs at the operating‑system level, granting Gemini enhanced control over third‑party apps, including reading and responding to notifications or handling media.

However, this has prompted criticism from privacy-minded users, who view it as intrusive data access, even though Google maintains that no content is shared off the device.

Alarmed users quickly turned the feature off via Gemini’s in-app settings or resorted to more advanced measures, such as removing Gemini with ADB or disabling the Google app entirely.
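
For reference, the ADB route usually cited works through Android’s package manager, removing the app for the current user. Below is a minimal sketch driving adb from Python; the package name is the one commonly reported for the Gemini app and is an assumption that may vary by device and region.

```python
import subprocess

# Commonly reported package name for the Gemini app (an assumption;
# verify on your device with: adb shell pm list packages | grep google).
GEMINI_PACKAGE = "com.google.android.apps.bard"

def remove_for_current_user(package: str) -> None:
    """Uninstall a package for user 0 over ADB (reversible by reinstalling)."""
    subprocess.run(
        ["adb", "shell", "pm", "uninstall", "--user", "0", package],
        check=True,
    )

remove_for_current_user(GEMINI_PACKAGE)
```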

The controversy highlights growing concerns over how deeply OS‑level AI tools can access personal data, blurring the lines between convenience and privacy.

New ranking shows which AI respects your data

A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.

The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.

Le Chat emerged as the top performer thanks to limited data collection and clarity in privacy practices, even though it lost some points on transparency.

ChatGPT followed in second place, earning praise for clear privacy policies and for offering users tools to limit data use, despite concerns about how it handles training data. Grok, xAI’s chatbot, took third place, though its privacy policy was harder to read.

At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.

Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.

Incogni’s report found that some services, such as ChatGPT, Grok, and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek, and Meta AI, offer no clear way to opt out.

The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.

Reddit accuses Anthropic of misusing user content

Reddit has taken legal action against AI startup Anthropic, alleging that the company scraped its platform without permission and used the data to train and commercialise its Claude AI models.

The lawsuit, filed in San Francisco Superior Court, accuses Anthropic of breach of contract, unjust enrichment, and interference with Reddit’s operations.

According to Reddit, Anthropic accessed the platform more than 100,000 times despite publicly claiming to have stopped doing so.

The complaint claims Anthropic ignored Reddit’s technical safeguards, such as robots.txt files, and bypassed the platform’s user agreement to extract large volumes of user-generated content.
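
robots.txt is advisory rather than enforceable: it only restrains crawlers that actually check it. Below is a minimal sketch of what a compliant crawler is expected to do, using Python’s standard urllib.robotparser; the user-agent string is a placeholder.

```python
from urllib.robotparser import RobotFileParser

# A compliant crawler fetches robots.txt and honours it before crawling.
rp = RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()

user_agent = "ExampleBot"          # placeholder crawler identity
url = "https://www.reddit.com/r/python/"

if rp.can_fetch(user_agent, url):
    print("allowed: fetch the page")
else:
    print("disallowed: a compliant crawler stops here")
```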

Reddit argues that Anthropic’s actions undermine its licensing deals with companies like OpenAI and Google, which have agreed to strict content usage and deletion protocols.

The filing asserts that Anthropic intentionally used personal data from Reddit without ever seeking user consent, calling the company’s conduct deceptive. Despite public statements suggesting respect for privacy and web-scraping limitations, Anthropic is portrayed as having disregarded both.

The lawsuit even cites Anthropic’s own 2021 research that acknowledged Reddit content as useful in training AI models.

Reddit is now seeking damages, repayment of profits, and a court order to stop Anthropic from using its data further. The market responded positively, with Reddit’s shares closing nearly 67% higher at $118.21—indicating investor support for the company’s aggressive stance on data protection.

WhatsApp to add usernames for better privacy

WhatsApp is preparing to introduce usernames, allowing users to hide their phone numbers and opt for a unique ID instead. Meta’s push reflects growing demand for more secure and anonymous communication online.

Currently in development and not yet available for testing, the new feature will let users create usernames with letters, numbers, periods, and underscores, while blocking misleading formats like web addresses.
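
Based on those rules, a validator might look like the sketch below; the allowed character set comes from the description above, while the length limits and the TLD check are assumptions, since WhatsApp has not published a specification.

```python
import re

ALLOWED = re.compile(r"^[a-zA-Z0-9._]{3,30}$")  # assumed length limits
LOOKS_LIKE_URL = re.compile(r"\.(com|net|org|io)$", re.IGNORECASE)  # assumed TLD list

def is_valid_username(name: str) -> bool:
    """Accept letters, digits, periods and underscores; reject URL-like names."""
    return bool(ALLOWED.match(name)) and not LOOKS_LIKE_URL.search(name)

print(is_valid_username("jane_doe.99"))  # True
print(is_valid_username("example.com"))  # False: looks like a web address
```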

The move aims to improve privacy by letting users connect without revealing personal contact details. A system message will alert contacts whenever a username is updated, adding transparency to the process.

Although still in development, the feature is expected to roll out soon, bringing WhatsApp in line with other major messaging platforms that already support username-based identities.

The future of search: Personalised AI and the privacy crossroads

The rise of personalised AI is poised to radically reshape how we interact with technology, with search engines evolving into intelligent agents that not only retrieve information but also understand and act on our behalf. No longer just a list of links, search is merging into chatbots and AI agents that synthesise information from across the web to deliver tailored answers.

Google and OpenAI have already begun this shift, with services like AI Overview and ChatGPT Search leading a trend that analysts say could cut traditional search volume by 25% by 2026. That transformation is driven by the AI industry’s hunger for personal data.

To offer highly customised responses and assistance, AI systems require in-depth profiles of their users, encompassing everything from dietary preferences to political beliefs. The deeper the personalisation, the greater the privacy risks.

OpenAI, for example, envisions a ‘super assistant’ capable of managing nearly every aspect of your digital life, fed by detailed knowledge of your past interactions, habits, and preferences. Google and Meta are pursuing similar paths, with Mark Zuckerberg even imagining AI therapists and friends that recall your social context better than you do.

As these tools become more capable, they also grow more invasive. Wearable, always-on AI devices equipped with microphones and cameras are on the horizon, signalling an era of ambient data collection.

AI assistants won’t just help answer questions—they’ll book vacations, buy gifts, and even manage your calendar. But with these conveniences comes unprecedented access to our most intimate data, raising serious concerns over surveillance and manipulation.

Policymakers are struggling to keep up. Without a comprehensive federal privacy law, the US relies on a patchwork of state laws and limited federal oversight. Proposals to regulate data sharing, such as forcing Google to hand over user search histories to competitors like OpenAI and Meta, risk compounding the problem unless strict safeguards are enacted.

As AI becomes the new gatekeeper to the internet, regulators face a daunting task: enabling innovation while ensuring that the AI-powered future doesn’t come at the expense of our privacy.

Google pays around $1.4 billion to settle privacy case

Google has agreed to pay $1.375 billion to settle a lawsuit brought by the state of Texas over allegations that it violated users’ privacy through features such as Incognito mode, Location History, and biometric data collection.

Despite the sizable sum, Google denies any wrongdoing, stating that the claims concerned old practices that have since been changed.

Texas Attorney General Ken Paxton announced the settlement, emphasising that large tech firms are not above the law.

He accused Google of covertly tracking individuals’ locations and personal searches, while also collecting biometric data such as voiceprints and facial geometry — all without users’ consent. Paxton claimed the state’s legal challenge had forced Google to answer for its actions.

Although the settlement resolves two lawsuits filed in 2022, the specific terms and how the funds will be used remain undisclosed. A Google spokesperson maintained that the resolution brings closure to claims about past practices and does not require any changes to its current products.

The case comes after a similar $1.4 billion agreement involving Meta, which faced accusations of unlawfully gathering facial recognition data. The repeated scrutiny from Texas authorities signals a broader pushback against the data practices of major tech companies.

Microsoft Recall raises privacy alarm again

Fresh concerns are mounting over privacy risks after Microsoft confirmed the return of its controversial Recall feature for Copilot+ PCs. Recall takes continuous screenshots of everything on a Windows user’s screen and stores them in a searchable, AI-powered database.

Although screenshots are saved locally and protected by a PIN, experts warn the system undermines the security of encrypted apps like WhatsApp and Signal by storing anything shown on screen, even if it was meant to disappear.

Critics argue that even users who have not enabled Recall could have their private messages captured if someone they are chatting with has the feature switched on.

Cybersecurity experts have already demonstrated that guessing the PIN gives full access to all screen content—deleted or not—including sensitive conversations, images, and passwords.

With no automatic warning or opt-out for people being recorded, concerns are growing that secure communication is being eroded by stealth.

At the same time, Meta has revealed new AI tools for WhatsApp that can summarise chats and suggest replies. Although the company insists its ‘Private Processing’ feature will ensure security, experts are questioning why secure messaging platforms need AI integrations at all.

Even if WhatsApp’s AI remains private, Microsoft Recall could still quietly record and store messages, creating a privacy paradox that many users may not fully understand.
