Google has unveiled VaultGemma, a new large language model built from the ground up around differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.
Differential privacy injects carefully calibrated mathematical noise into computations over data, preventing the identification of any individual while still producing accurate aggregate results. The technique has long been used in regulated industries, but it has been challenging to apply to large language models without compromising performance.
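For intuition, here is a minimal sketch of the Laplace mechanism, the textbook form of differential privacy; the function name and toy dataset below are hypothetical illustrations, and a differentially private LLM instead adds noise to gradients during training (DP-SGD) rather than to a single query:

```python
import numpy as np

def private_count(records, epsilon=1.0):
    """Counting query under the Laplace mechanism: noise is scaled to the
    query's sensitivity (1 for a count) divided by the privacy budget epsilon."""
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(records) + noise

# Smaller epsilon -> more noise -> stronger privacy, lower accuracy.
patients = ["alice", "bob", "carol", "dana"]
print(private_count(patients, epsilon=0.5))  # roughly 4, give or take a few
```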
VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled while maintaining stability and efficiency comparable to non-private LLMs.
This breakthrough could have significant implications for developers building AI systems in privacy-sensitive domains ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.
Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Blind and partially sighted users say AI-powered smart glasses are restoring their independence. Andrew Tutty in Ontario uses the devices for everyday tasks such as identifying food or matching clothes, while others, like Emilee Schevers, rely on them to confirm traffic signals before crossing the road.
The AI glasses, developed by Meta, are cheaper than many other assistive devices, which can cost thousands. They connect to smartphones, using voice commands and apps like Be My Eyes to describe surroundings or link with volunteers.
Experts, however, caution that the glasses come with significant privacy concerns. Built-in cameras stream everything within view to large tech firms, raising questions about surveillance, data use and algorithmic reliability.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.
With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?
Therapy keeps secrets; AI keeps data
Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.
The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.
Meta described the Discover feed as a way to explore different uses of AI, but that did little to ease unease over the incident. Soon after, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.
To keep up with the rising volume of users and prompts, major tech companies such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, far less has been done to protect sensitive data, and AI systems remain prone to breaches, particularly in the healthcare sector.
According to IBM’s 2025 Cost of a Data Breach Report, healthcare providers bear the brunt of data breaches, taking an average of 279 days to recover and incurring an average cost of nearly USD 7.5 million in the process. Not only does patients’ private information end up in the wrong hands, but it also takes the better part of a year to contain the damage.
Falling for your AI ‘therapist’
Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately, without compromising the patient’s progress and well-being.
The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.
With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.
As a result, a significant number of users report becoming enamoured with AI, with some going as far as leaving their human partners, professing their love to the chatbot, and even proposing. The bond between human and machine places the user on a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.
Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.
Who loses work when therapy goes digital?
Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between USD 100 and 250, with limited insurance coverage. Given those costs, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.
Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.
Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.
Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.
Can AI ‘therapists’ handle crisis conversations?
Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.
In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.
One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the AI model kept encouraging and validating them to keep Adam engaged and build rapport.
Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.
In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.
Chatbots are companions, not health professionals
AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.
While AI has proven useful in multiple fields of work, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) health care will look like. As things stand, in such a delicate field, AI lacks a key component that makes a therapist effective: empathy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Imagine dreaming of your next holiday and feeling a rush of excitement. That emotional peak is when your attention is most engaged, and neuro-contextual advertising aims to meet you there.
Neuro-contextual AI goes beyond page-level relevance. It interprets emotional signals of interest and intent in real time while preserving user privacy. It asks why users interact with content at a specific moment, not just what they view.
When ads align with emotion, interest and intention, engagement rises. A car ad may shift tone accordingly: action-fuelled visuals for thrill seekers, and softer, nostalgic tones for someone browsing family stories.
Emotions shape memory and decisions. Emotionally intelligent advertising fosters connection, meaning and loyalty rather than just attention.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has begun rolling out a feature that enables its Gemini AI chatbot to automatically remember key personal details and preferences from previous chats, unless users opt out. The update builds on earlier functionality in which memory could only be activated on request.
The update is enabled by default on Gemini 2.5 Pro in select countries and will be extended to the 2.5 Flash version later. Users can deactivate it under the Personal Context setting in the app.
Alongside auto-memory, Google is introducing Temporary Chats, a privacy tool for one-off interactions. These conversations aren’t saved to your history, aren’t used to train Gemini, and are deleted after 72 hours.
Google is also renaming ‘Gemini Apps Activity’ to ‘Keep Activity’. From 2 September, the setting, when enabled, lets Google sample uploads such as files and photos to improve its services, while still offering the option to opt out.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Dame Diana Johnson, the UK policing minister, has reassured the public that the expanded use of live facial recognition vans is measured and proportionate.
She emphasised that the tools aim only to assist police in locating high-harm offenders, not to create a surveillance society.
Addressing concerns raised by Labour peer Baroness Chakrabarti, who argued the technology was being introduced outside existing legal frameworks, Johnson firmly rejected such claims.
She stated that UK public acceptance would depend on a responsible and targeted application.
By framing the technology as a focused tool for effective law enforcement rather than pervasive monitoring, Johnson seeks to balance public safety with civil liberties and privacy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Union’s ‘Chat Control’ proposal is gaining traction, with 19 member states now supporting a plan to scan all private messages on encrypted apps. Under the plan, from October, apps like WhatsApp, Signal, and Telegram would have to scan all messages, photos, and videos on users’ devices before encryption.
France, Denmark, Belgium, Hungary, Sweden, Italy, and Spain back the measure, while Germany has yet to decide. The proposal could pass by mid-October under the EU’s qualified majority voting system if Germany joins.
The initiative aims to prevent child sexual abuse material (CSAM) but has sparked concerns over mass surveillance and the erosion of digital privacy.
In addition to scanning, the proposal would introduce mandatory age verification, which could remove anonymity on messaging platforms. Critics argue the plan amounts to real-time surveillance of private conversations and threatens fundamental freedoms.
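To make that mechanism concrete, the sketch below shows, in deliberately simplified form, what on-device ‘scanning before encryption’ (client-side scanning) would involve; the blocklist, hash check, and function names are illustrative assumptions rather than anything specified in the proposal, which envisages detection technologies such as perceptual hashing and classifiers:

```python
import hashlib

# Hypothetical blocklist of known illegal content; real systems would rely on
# perceptual hashes or ML classifiers supplied by a detection service,
# not exact SHA-256 digests.
BLOCKED_HASHES = {"<digest of known illegal content>"}

def passes_client_side_scan(payload: bytes) -> bool:
    """Hash the content on the device and check it against the blocklist."""
    return hashlib.sha256(payload).hexdigest() not in BLOCKED_HASHES

def send_message(payload: bytes) -> None:
    # The scan runs before the app is allowed to encrypt anything.
    if not passes_client_side_scan(payload):
        raise RuntimeError("Content flagged by on-device scanner")
    # ...only at this point would the message be end-to-end encrypted and sent...

send_message("Hello!".encode("utf-8"))  # unflagged content goes through
```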
Telegram founder Pavel Durov recently warned that censorship and regulatory pressure risk societal collapse in France. He disclosed attempts by French officials to censor political content on his platform, requests he refused to comply with.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Crypto builders face growing pressure to design systems that protect fundamental human rights from the outset. As concerns mount over surveillance, state-backed ID systems, and AI impersonation, experts warn that digital infrastructure must not compromise individual freedom.
Privacy-by-default, censorship resistance, and decentralised self-custody are no longer idealistic features — they are essential for any credible Web3 system. Critics argue that many current tools merely replicate traditional power structures, offering centralisation disguised as innovation.
The collapse of platforms like FTX has only strengthened calls for human-centric solutions.
New approaches are needed to ensure people can prove their personhood online without relying on governments or corporations. Digital inclusion depends on verification systems that are censorship-resistant, privacy-preserving and accessible.
Likewise, self-custody must evolve beyond fragile key backups and complex interfaces to empower everyday users.
While embedding values in code brings ethical and political risks, avoiding the issue could lead to greater harm. For the promise of Web3 to be realised, rights must be a design priority — not an afterthought.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Eurosky seeks to build European infrastructure for social media platforms and promote digital sovereignty. The goal is to ensure that the continent’s digital space is governed by European laws, values, and rules, rather than being subject to the influence of foreign companies or governments.
To support this goal, Eurosky plans to implement a decentralised content moderation system, modelled after the approach used by the Bluesky network.
Moderation, essential for removing harmful or illegal content like child exploitation or stolen data, remains a significant obstacle for new platforms. Eurosky offers a non-profit moderation service to help emerging social media providers handle this task, thus lowering the barriers to entering the market.
The project enjoys strong public and political backing. Polls show that majorities in France, Germany, and Spain prefer Europe-based platforms, with only 5% favouring US providers.
Eurosky also has support from four European governments, though their identities remain undisclosed. This momentum aligns with a broader shift in user behaviour, as Europeans increasingly turn to local tech services amid privacy and sovereignty concerns.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
From 7 July 2025, Google’s Gemini AI will default to accessing your WhatsApp, SMS and call apps, even without Gemini Apps Activity enabled, through an Android OS ‘System Intelligence’ integration.
Google insists the assistant cannot read or summarise your WhatsApp messages; it only performs actions like sending replies and accessing notifications.
Integration occurs at the operating‑system level, granting Gemini enhanced control over third‑party apps, including reading and responding to notifications or handling media.
However, this has prompted criticism from privacy‑minded users, who view it as intrusive data access, even though Google maintains no off‑device content sharing.
Alarmed users quickly turned off the feature via Gemini’s in‑app settings or resorted to more advanced measures, like removing Gemini with ADB or turning off the Google app entirely.
The controversy highlights growing concerns over how deeply OS‑level AI tools can access personal data, blurring the lines between convenience and privacy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!