Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model built with differential privacy at its core. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.

Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
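The core idea can be illustrated with a toy example. The sketch below applies the Laplace mechanism, a standard building block of differential privacy, to a simple statistic (a mean). The function name, bounds and parameters are illustrative assumptions for this sketch only; Google’s pipeline applies differential privacy during model training (typically by noising gradients), not to a single statistic like this.

```python
import math
import random

def private_mean(values, epsilon, lower=0.0, upper=100.0):
    """Epsilon-differentially-private mean via the Laplace mechanism (toy sketch)."""
    # Clamp each record so no single value can move the mean by more
    # than (upper - lower) / n -- this bounds the query's sensitivity.
    clamped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / len(clamped)
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) noise by inverse-transform sampling.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return sum(clamped) / len(clamped) + noise
```

A smaller epsilon means more noise and stronger privacy; repeated queries on the same data consume privacy budget, which real systems must track cumulatively.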

VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled while maintaining stability and efficiency comparable to non-private LLMs.

This breakthrough could have significant implications for developers building privacy-sensitive AI systems in sectors ranging from healthcare and finance to government services, suggesting that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use ChatGPT for fake ID attack

A hacking group has reportedly used ChatGPT to generate a fake military ID in a phishing attack targeting South Korea. The incident, uncovered by cybersecurity firm Genians, shows how AI can be misused to make malicious campaigns more convincing.

Researchers said the group, known as Kimsuky, crafted a counterfeit South Korean military identification card to support a phishing email. While the document looked genuine, the email instead contained links to malware designed to extract data from victims’ devices.

Targets included journalists, human rights activists and researchers. Kimsuky has a history of cyber-espionage, and US officials have previously linked the group to global intelligence-gathering operations.

The findings highlight a wider trend of AI being exploited for cybercrime, from creating fake résumés to planning attacks and developing malware. Genians warned that attackers are rapidly using AI to impersonate trusted organisations, while the full scale of the breach is unknown.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EdChat AI app set for South Australian schools amid calls for careful use

South Australian public schools will soon gain access to EdChat, a ChatGPT-style app developed by Microsoft in partnership with the state government. Education Minister Blair Boyer said the tool will roll out next term across public high schools following a successful trial.

Safeguards have been built into EdChat to protect student data and alert moderators if students type concerning prompts, such as those related to self-harm or other sensitive topics. Boyer said student mental health was a priority during the design phase.

Teachers report that students use EdChat to clarify instructions, get maths solutions explained, and quiz themselves on exam topics. Adelaide Botanic High School principal Sarah Chambers described it as an ‘education equaliser’ that provides students with access to support throughout the day.

While many educators in Australia welcome the rollout, experts warn against overreliance on AI tools. Toby Walsh of UNSW said students must still learn how to write essays and think critically, while others noted that AI could actually encourage deeper questioning and analysis.

RMIT computing expert Michael Cowling said generative AI can strengthen critical thinking when used for brainstorming and refining ideas. He emphasised that students must learn to critically evaluate AI output and use the technology as a tool rather than as a substitute for learning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Industry leaders urge careful AI use in research projects

The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.

Head judge Denise Turner of IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.

Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.

OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.

ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rising data centre demand pushes utilities to invest

US electricity prices are rising as the energy demands of data centres surge, driven by the rapid growth of AI technologies. The average retail price per kilowatt-hour increased by 6.5% between May 2024 and May 2025, with some states experiencing significantly sharper increases.

Maine saw the sharpest rise in electricity prices at 36.3%, with Connecticut and Utah following closely behind. Utilities are passing on infrastructure costs, including new transmission lines, to consumers. In Northern Virginia, residents could face monthly bill increases of up to $37 by 2040.

Analysts warn that the shift to generative AI will lead to a 160% surge in energy use at data centres by 2030. Water use is also rising sharply, as Google reported its facilities consumed around 6 billion gallons in 2024 alone, amid intensifying global AI competition.

Tech giants are turning to alternative energy to keep pace. Google has announced plans to power data centres with small nuclear reactors through a partnership with Kairos Power, while Microsoft and Amazon are ramping up nuclear investments to secure long-term supply.

President Donald Trump has pledged more than $92 billion in AI and energy infrastructure investments, underlining Washington’s push to ensure the US remains competitive in the AI race despite mounting strain on the grid and water resources.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

German state pushes digital sovereignty

The northern German state of Schleswig-Holstein is pushing ahead with an ambitious plan to replace Microsoft software in its public administration with open-source alternatives.

With around 30,000 civil servants, a workforce comparable in size to the European Commission’s, the region has already migrated most staff to the new systems. It expects to cut its Office licences by more than two-thirds before the end of the month.

Instead of relying on Word, Outlook or SharePoint, staff are switching to LibreOffice, Thunderbird, Open-Xchange and Nextcloud. A Linux pilot is also underway to test replacing Windows itself.

The digital minister, Dirk Schrödter, admitted the schedule is tight but said that 24,000 employees are already using the new setup. By 2029, only a handful of Microsoft licences should remain, kept for compatibility with federal services.

The transition has not been free of challenges: some judges have called for a return to Outlook, citing outages, while software from larger providers such as SAP has proven difficult to adapt.

Still, Schrödter argued the investment is about sovereignty rather than cost-cutting, comparing Europe’s reliance on Big Tech to its dependence on Russian gas before 2022. He urged Brussels to prioritise open-source solutions in procurement rules to reduce dependence on foreign tech giants.

Although Schleswig-Holstein is a relatively small region, its programme has already influenced wider German and European initiatives.

Similar efforts, including Germany’s OpenDesk project, have gained traction in France, Italy and the Netherlands, with several governments now watching the experiment closely.

Schrödter said the state’s progress surprises many observers, but he believes it shows how public administrations can regain control of their digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces memory feature to Claude AI for workplace productivity

The AI startup Anthropic has added a memory feature to its Claude AI, designed to automatically recall details from earlier conversations, such as project information and team preferences.

Initially, the upgrade is only available to Team and Enterprise subscribers, who can manage, edit, or delete the content that the system retains.

Anthropic presents the tool as a way to improve workplace efficiency by sparing users from repeating instructions. Enterprise administrators have additional controls, including turning memory off entirely.

Privacy safeguards are included, such as an ‘incognito mode’ for conversations that are not stored.

Analysts view the step as an effort to catch up with competitors like ChatGPT and Gemini, which already offer similar functions. Memory also links with Claude’s newer tools for creating spreadsheets, presentations, and PDFs, allowing past information to be reused in future documents.

Anthropic plans a wider release after testing the feature with businesses. Experts suggest the approach could strengthen the company’s position in the AI market by offering both continuity and security, which appeal to enterprises handling sensitive data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Educators question boundaries of plagiarism in AI era

As AI tools such as ChatGPT become more common among students, schools and colleges report that some educators now assume any assignment completed at home involves AI. Take-home writing tasks and traditional homework, they say, risk being devalued.

Teachers and students are confused over what constitutes legitimate versus dishonest AI use. Some students use AI to outline, edit, or translate texts. Others avoid asking for guidance about AI usage because rules vary by classroom, and admitting AI help might lead to accusations.

Schools are adapting by shifting towards in-class writing, verbal assessments and locked-down work environments.

Faculty at institutions such as the University of California, Berkeley, and Carnegie Mellon have begun issuing updated syllabus templates that spell out AI expectations, including bans, approvals or partial allowances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK plans AI systems to monitor offenders and prevent crimes before they occur

Under its AI Action Plan, the UK government is expanding the use of AI across prisons, probation and courts to monitor offenders, assess risk and prevent crime before it occurs.

One key measure involves an AI violence prediction tool that uses factors such as an offender’s age, past violent incidents and institutional behaviour to identify those most likely to pose a risk.

These predictions will inform decisions to increase supervision or relocate prisoners within custody wings before violence occurs.

Another component scans seized mobile phone content to highlight secret or coded messages that may signal plotting of violent acts, intelligence operations or contraband activities.

Officials are also working to merge offender records across courts, prisons and probation to create a single digital identity for each offender.

UK authorities say the goal is to reduce reoffending and prioritise public and staff safety, while shifting resources from reactive investigations to proactive prevention. Civil liberties groups caution about privacy, bias and the risk of overreach if transparency and oversight are not built in.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple notifies French users after commercial spyware threats surge

France’s national cybersecurity agency, CERT-FR, has confirmed that Apple issued another set of threat notifications on 3 September 2025. The alerts inform certain users that devices linked to their iCloud accounts may have been targeted by spyware.

These latest alerts mark this year’s fourth campaign, following earlier waves in March, April and June. Targeted individuals include journalists, activists, politicians, lawyers and senior officials.

CERT-FR says the attacks are highly sophisticated and involve mercenary spyware tools. Many intrusions appear to exploit zero-day or zero-click vulnerabilities, meaning a device can be compromised without any interaction from the victim.

Apple advises victims to preserve threat notifications, avoid altering device settings that could obscure forensic evidence, and contact authorities and cybersecurity specialists. Users are encouraged to enable features like Lockdown Mode and update devices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!