Europe to launch Eurosky to regain digital control

Europe is taking steps to assert its digital independence by launching the Eurosky initiative, a government-backed project to reduce reliance on US tech giants.

Eurosky seeks to build European infrastructure for social media platforms and promote digital sovereignty. The goal is to ensure that the continent’s digital space is governed by European laws, values, and rules, rather than being subject to the influence of foreign companies or governments.

To support this goal, Eurosky plans to implement a decentralised content moderation system, modelled after the approach used by the Bluesky network.
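
Bluesky’s approach, for reference, separates moderation from hosting: independent ‘labeler’ services attach labels such as spam or illegal content to posts, and each client decides which labelers to trust and how to act on their labels. Below is a minimal Python sketch of that pattern; all names are illustrative assumptions, not part of Eurosky’s actual design:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Label:
        post_id: str   # the post being labelled
        value: str     # e.g. 'spam' or 'csam'
        labeler: str   # the moderation service that issued the label

    def visible(post_id, labels, trusted_labelers, hidden_values):
        # hide a post only if a labeler this client trusts flagged it
        return not any(
            l.post_id == post_id
            and l.labeler in trusted_labelers
            and l.value in hidden_values
            for l in labels
        )

    labels = [Label('p1', 'spam', 'eu-labeler.example'),
              Label('p2', 'csam', 'rogue-labeler.example')]
    trusted = {'eu-labeler.example'}
    print(visible('p1', labels, trusted, {'spam', 'csam'}))  # False: a trusted labeler flagged it
    print(visible('p2', labels, trusted, {'spam', 'csam'}))  # True: label came from an untrusted service

Decoupling the two means a new platform can plug into an existing non-profit labeler instead of building a moderation pipeline from scratch.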

Moderation, essential for removing harmful or illegal content such as child exploitation material or stolen data, remains a significant obstacle for new platforms. Eurosky plans to offer a non-profit moderation service to help emerging social media providers handle this task, lowering the barriers to market entry.

The project enjoys strong public and political backing. Polls show that majorities in France, Germany, and Spain prefer Europe-based platforms, with only 5% favouring US providers.

Eurosky also has support from four European governments, though their identities remain undisclosed. This momentum aligns with a broader shift in user behaviour, as Europeans increasingly turn to local tech services amid privacy and sovereignty concerns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy concerns rise over Gemini’s on‑device data access

From 7 July 2025, Google’s Gemini AI will access your WhatsApp, SMS and call apps by default, even without Gemini Apps Activity enabled, through an Android OS-level ‘System Intelligence’ integration.

Google insists the assistant cannot read or summarise your WhatsApp messages; it only performs actions like sending replies and accessing notifications.

Integration occurs at the operating‑system level, granting Gemini enhanced control over third‑party apps, including reading and responding to notifications or handling media.

However, the change has drawn criticism from privacy‑minded users, who view it as intrusive data access, even though Google maintains that no content is shared off‑device.

Alarmed users quickly turned off the feature via Gemini’s in‑app settings or resorted to more advanced measures, like removing Gemini with ADB or turning off the Google app entirely.
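
For readers unfamiliar with the ADB route: it typically means uninstalling the Gemini app for the current user over a USB debugging connection. A minimal sketch, assuming Gemini still ships under the package name com.google.android.apps.bard (verify on your own device before running anything):

    # confirm the package name on the connected device
    adb shell pm list packages | grep bard

    # remove Gemini for the current user only (no root required)
    adb shell pm uninstall --user 0 com.google.android.apps.bard

    # restore it later if needed
    adb shell pm install-existing com.google.android.apps.bard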

The controversy highlights growing concerns over how deeply OS‑level AI tools can access personal data, blurring the lines between convenience and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ranking shows which AI respects your data

A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.

The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.

Le Chat emerged as the top performer thanks to its limited data collection and privacy-conscious practices, although it lost some points on transparency.

ChatGPT followed in second place, earning praise for its clear privacy policy and for offering users tools to limit how their data is used, despite concerns about its handling of training data. Grok, xAI’s chatbot, took third place, though its privacy policy was harder to read.

At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.

Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.

Incogni’s report found that some services, such as ChatGPT, Grok and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek and Meta AI, offered no clear way to opt out.

The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit accuses Anthropic of misusing user content

Reddit has taken legal action against AI startup Anthropic, alleging that the company scraped its platform without permission and used the data to train and commercialise its Claude AI models.

The lawsuit, filed in San Francisco Superior Court, accuses Anthropic of breach of contract, unjust enrichment, and interference with Reddit’s operations.

According to Reddit, Anthropic accessed the platform more than 100,000 times despite publicly claiming to have stopped doing so.

The complaint claims Anthropic ignored Reddit’s technical safeguards, such as robots.txt files, and bypassed the platform’s user agreement to extract large volumes of user-generated content.
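
For context, robots.txt is a plain-text file served at a site’s root that asks crawlers to stay away from given paths; compliance is entirely voluntary, which is why ignoring it is framed as bypassing a safeguard rather than breaking a technical lock. A directive of the kind at issue, addressed to Anthropic’s published crawler user agent, looks like this (an illustrative rule, not Reddit’s verbatim file):

    User-agent: ClaudeBot
    Disallow: /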

Reddit argues that Anthropic’s actions undermine its licensing deals with companies like OpenAI and Google, which have agreed to strict content usage and deletion protocols.

The filing asserts that Anthropic intentionally used personal data from Reddit without ever seeking user consent, calling the company’s conduct deceptive. Despite public statements suggesting respect for privacy and web-scraping limitations, Anthropic is portrayed as having disregarded both.

The lawsuit even cites Anthropic’s own 2021 research that acknowledged Reddit content as useful in training AI models.

Reddit is now seeking damages, repayment of profits, and a court order to stop Anthropic from using its data further. The market responded positively, with Reddit’s shares closing nearly 7% higher at $118.21, indicating investor support for the company’s aggressive stance on data protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp to add usernames for better privacy

WhatsApp is preparing to introduce usernames, allowing users to hide their phone numbers and opt for a unique ID instead. Meta’s push reflects growing demand for more secure and anonymous communication online.

Currently in development and not yet available for testing, the new feature will let users create usernames with letters, numbers, periods, and underscores, while blocking misleading formats like web addresses.
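
WhatsApp has not published its exact validation rules, but a checker for the constraints described above might look like the following hypothetical Python sketch (the character set comes from the report; the length limits and URL heuristics are assumptions):

    import re

    # allowed characters per the report; the 3-30 length range is a guess
    USERNAME_RE = re.compile(r'^[A-Za-z0-9._]{3,30}$')
    # crude heuristic for names that could be mistaken for web addresses
    URL_LIKE_RE = re.compile(r'(^www\.)|(\.(com|net|org)$)', re.IGNORECASE)

    def is_valid_username(name: str) -> bool:
        return bool(USERNAME_RE.match(name)) and not URL_LIKE_RE.search(name)

    print(is_valid_username('jane_doe.99'))  # True
    print(is_valid_username('www.example'))  # False: looks like a URL
    print(is_valid_username('shop.com'))     # False: ends in a domain suffix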

The move aims to improve privacy by letting users connect without revealing personal contact details. A system message will alert contacts whenever a username is updated, adding transparency to the process.

Although still under development, the feature is expected to roll out soon, bringing WhatsApp in line with other major messaging platforms that already support username-based identities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The future of search: Personalised AI and the privacy crossroads

The rise of personalised AI is poised to radically reshape how we interact with technology, with search engines evolving into intelligent agents that not only retrieve information but also understand and act on our behalf. No longer just a list of links, search is merging into chatbots and AI agents that synthesise information from across the web to deliver tailored answers.

Google and OpenAI have already begun this shift, with services like AI Overviews and ChatGPT Search leading a trend that analysts say could cut traditional search volume by 25% by 2026. That transformation is driven by the AI industry’s hunger for personal data.

To offer highly customised responses and assistance, AI systems require in-depth profiles of their users, encompassing everything from dietary preferences to political beliefs. The deeper the personalisation, the greater the privacy risks.

OpenAI, for example, envisions a ‘super assistant’ capable of managing nearly every aspect of your digital life, fed by detailed knowledge of your past interactions, habits, and preferences. Google and Meta are pursuing similar paths, with Mark Zuckerberg even imagining AI therapists and friends that recall your social context better than you do.

As these tools become more capable, they also grow more invasive. Wearable, always-on AI devices equipped with microphones and cameras are on the horizon, signalling an era of ambient data collection.

AI assistants won’t just help answer questions—they’ll book vacations, buy gifts, and even manage your calendar. But with these conveniences comes unprecedented access to our most intimate data, raising serious concerns over surveillance and manipulation.

Policymakers are struggling to keep up. Without a comprehensive federal privacy law, the US relies on a patchwork of state laws and limited federal oversight. Proposals to regulate data sharing, such as forcing Google to hand over user search histories to competitors like OpenAI and Meta, risk compounding the problem unless strict safeguards are enacted.

As AI becomes the new gatekeeper to the internet, regulators face a daunting task: enabling innovation while ensuring that the AI-powered future doesn’t come at the expense of our privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google pays around $1.4 billion over privacy case

Google has agreed to pay $1.375 billion to settle a lawsuit brought by the state of Texas over allegations that it violated users’ privacy through features such as Incognito mode, Location History, and biometric data collection.

Despite the sizable sum, Google denies any wrongdoing, stating that the claims were based on outdated practices which have since been updated.

Texas Attorney General Ken Paxton announced the settlement, emphasising that large tech firms are not above the law.

He accused Google of covertly tracking individuals’ locations and personal searches, while also collecting biometric data such as voiceprints and facial geometry — all without users’ consent. Paxton claimed the state’s legal challenge had forced Google to answer for its actions.

Although the settlement resolves two lawsuits filed in 2022, the specific terms and how the funds will be used remain undisclosed. A Google spokesperson maintained that the resolution brings closure to claims about past practices and does not require any changes to its current products.

The case comes after a similar $1.4 billion agreement involving Meta, which faced accusations of unlawfully gathering facial recognition data. The repeated scrutiny from Texas authorities signals a broader pushback against the data practices of major tech companies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft Recall raises privacy alarm again

Fresh concerns are mounting over privacy risks after Microsoft confirmed the return of its controversial Recall feature for Copilot+ PCs. Recall takes continuous screenshots of everything on a Windows user’s screen and stores them in a searchable database powered by AI.

Although screenshots are saved locally and protected by a PIN, experts warn the system undermines the security of encrypted apps like WhatsApp and Signal by storing anything shown on screen, even if it was meant to disappear.

Critics argue that even users who have not enabled Recall could have their private messages captured if someone they are chatting with has the feature switched on.

Cybersecurity experts have already demonstrated that guessing the PIN gives full access to all screen content—deleted or not—including sensitive conversations, images, and passwords.

With no automatic warning or opt-out for people being recorded, concerns are growing that secure communication is being eroded by stealth.

At the same time, Meta has revealed new AI tools for WhatsApp that can summarise chats and suggest replies. Although the company insists its ‘Private Processing’ feature will ensure security, experts are questioning why secure messaging platforms need AI integrations at all.

Even if WhatsApp’s AI remains private, Microsoft Recall could still quietly record and store messages, creating a privacy paradox that many users may not fully understand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp introduces privacy feature to block Meta AI

Meta has come under fire for integrating its AI assistant into WhatsApp, with users spotting an unremovable blue circle representing Meta AI’s presence.

While Google has favoured opt-in models for AI tools, Meta’s approach has sparked backlash, with some critics accusing it of disregarding WhatsApp’s privacy-first roots. Though users can’t remove the assistant entirely, WhatsApp now offers a workaround to disable its functions in individual chats.

A new ‘Advanced Chat Privacy’ setting allows users to block AI interactions on a chat-by-chat basis. When enabled, the setting prevents chats from being exported, stops media from auto-downloading and, crucially, blocks Meta AI from accessing messages.

WhatsApp says this is part of a broader plan to offer greater privacy controls, reaffirming its focus on secure and private messaging.

Meta maintains that it cannot read message content and that only limited data is shared when AI is used. Still, the company advises against sharing sensitive information with Meta AI.

The new privacy setting is being rolled out to all users on the latest version of WhatsApp and can be activated via the chat settings menu.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple challenges UK government over encrypted iCloud access order

A British court has confirmed that Apple is engaged in legal proceedings against the UK government concerning a statutory notice linked to iCloud account encryption. The Investigatory Powers Tribunal (IPT), which handles cases involving national security and surveillance, disclosed limited information about the case, lifting previous restrictions on its existence.

The dispute centres on a government-issued Technical Capability Notice (TCN), which, according to reports, required Apple to provide access to encrypted iCloud data for users in the UK. Apple subsequently withdrew its optional end-to-end encryption for iCloud accounts (Advanced Data Protection) in the UK earlier this year. While the company has not officially confirmed the connection, it has consistently stated that it does not create backdoors or master keys for its products.

The government’s position has been to neither confirm nor deny the existence of individual notices. However, in a rare public statement, a government spokesperson clarified that TCNs do not grant direct access to data and must be used in conjunction with appropriate warrants and authorisations. The spokesperson also stated that the notices are designed to support existing investigatory powers, not expand them.

The IPT allowed the basic facts of the case to be released following submissions from media outlets, civil society organisations, and members of the United States Congress. These parties argued that public interest considerations justified disclosure of the case’s existence. The tribunal concluded that confirming the identities of the parties and the general subject matter would not compromise national security or the public interest.

Previous public statements by US officials, including the former President and the current Director of National Intelligence, have acknowledged concerns surrounding the TCN process and its implications for international technology companies. In particular, questions have been raised regarding transparency and oversight of such powers.

Legal academics and members of the intelligence community have also commented on the broader implications of government access to encrypted platforms, with some suggesting that increased openness may be necessary to maintain public trust.

The case remains ongoing. Future proceedings will be determined once both parties have reviewed a private judgment issued by the court. The IPT is expected to issue a procedural timetable following input from both Apple and the UK Home Secretary.

For more information on these topics, visit diplomacy.edu.