Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model trained from the ground up with differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most capable differentially private LLM to date.

Differential privacy adds calibrated mathematical noise to data, preventing the identification of any individual while still producing accurate overall results. The method has long been used in regulated industries, but has been difficult to apply to large language models without compromising performance.
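In practice, that noise is calibrated to a privacy budget. A minimal sketch of the classic Laplace mechanism illustrates the idea for a simple counting query; the `private_count` function, the salary figures and the `epsilon` value are invented for this example and are not Google’s training pipeline, which applies noise during model training rather than to query results:

```python
import math
import random

def private_count(values, threshold, epsilon):
    """Count entries above a threshold, then add Laplace noise.

    A count query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so noise drawn from
    Laplace(0, 1/epsilon) yields epsilon-differential privacy.
    Smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for v in values if v > threshold)
    # Sample Laplace(0, 1/epsilon) via the inverse-CDF method
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Example: release a privacy-protected count of salaries above 50k
salaries = [42_000, 58_000, 61_000, 47_000, 75_000]
noisy = private_count(salaries, 50_000, epsilon=1.0)
```

Repeated releases consume the privacy budget cumulatively, which is why applying such guarantees across billions of training steps, as VaultGemma does, has historically hurt model quality.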

VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled while maintaining stability and efficiency comparable to those of non-private LLMs.

This breakthrough could have significant implications for developers building privacy-sensitive AI systems in fields ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use ChatGPT for fake ID attack

A hacking group has reportedly used ChatGPT to generate a fake military ID in a phishing attack targeting South Korea. The incident, uncovered by cybersecurity firm Genians, shows how AI can be misused to make malicious campaigns more convincing.

Researchers said the group, known as Kimsuky, crafted a counterfeit South Korean military identification card to lend credibility to a phishing email. While the document looked genuine, the email contained links to malware designed to extract data from victims’ devices.

Targets included journalists, human rights activists and researchers. Kimsuky has a history of cyber-espionage, and US officials have previously linked the group to global intelligence-gathering operations.

The findings highlight a wider trend of AI being exploited for cybercrime, from creating fake résumés to planning attacks and developing malware. Genians warned that attackers are rapidly adopting AI to impersonate trusted organisations, and noted that the full scale of the breach remains unknown.

EdChat AI app set for South Australian schools amid calls for careful use

South Australian public schools will soon gain access to EdChat, a ChatGPT-style app developed by Microsoft in partnership with the state government. Education Minister Blair Boyer said the tool will roll out next term across public high schools following a successful trial.

Safeguards have been built into EdChat to protect student data and alert moderators if students type concerning prompts, such as those related to self-harm or other sensitive topics. Boyer said student mental health was a priority during the design phase.

Teachers report that students use EdChat to clarify instructions, get maths solutions explained, and quiz themselves on exam topics. Adelaide Botanic High School principal Sarah Chambers described it as an ‘education equaliser’ that provides students with access to support throughout the day.

While many educators in Australia welcome the rollout, experts warn against overreliance on AI tools. Toby Walsh of UNSW said students must still learn how to write essays and think critically, while others noted that AI could actually encourage deeper questioning and analysis.

RMIT computing expert Michael Cowling said generative AI can strengthen critical thinking when used for brainstorming and refining ideas. He emphasised that students must learn to critically evaluate AI output and utilise the technology as a tool, rather than a substitute for learning.

Industry leaders urge careful AI use in research projects

The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.

Head judge Denise Turner of IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.

Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.

OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.

ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.

Banana AI saree trend pushes Google Gemini to the top

A few months after Ghibli-style AI images went viral, a new trend is sweeping Instagram: 1990s Bollywood-style saree portraits generated with Google’s Gemini Nano Banana tool.

Known as the ‘Banana AI saree’ edit, the feature allows users to turn ordinary selfies into nostalgic retro-style images. The edits evoke classic cinema with chiffon sarees, grainy textures, bold make-up, and jasmine-adorned hair, often styled under golden sunlight for a vintage glow.

The tool has quickly become a social media hit. Users can experiment with sarees, retro sherwanis, or even traditional dhotis by adjusting prompts to personalise their look. The trend follows earlier viral edits such as hyper-realistic 3D figurine portraits.

With its popularity soaring, Google Gemini has now topped the Apple App Store’s free apps chart in both India and the US, outpacing competitors like ChatGPT and Grok. Google DeepMind’s chief, Demis Hassabis, praised the Gemini team, calling it ‘just the start’.

Rising data centre demand pushes utilities to invest

US electricity prices are rising as the energy demands of data centres surge, driven by the rapid growth of AI technologies. The average retail price per kilowatt-hour increased by 6.5% between May 2024 and May 2025, with some states experiencing significantly sharper increases.

Maine saw the sharpest rise in electricity prices at 36.3%, with Connecticut and Utah following closely behind. Utilities are passing on infrastructure costs, including new transmission lines, to consumers. In Northern Virginia, residents could face monthly bill increases of up to $37 by 2040.

Analysts warn that the shift to generative AI will lead to a 160% surge in energy use at data centres by 2030. Water use is also rising sharply, as Google reported its facilities consumed around 6 billion gallons in 2024 alone, amid intensifying global AI competition.

Tech giants are turning to alternative energy to keep pace. Google has announced plans to power data centres with small nuclear reactors through a partnership with Kairos Power, while Microsoft and Amazon are ramping up nuclear investments to secure long-term supply.

President Donald Trump has pledged more than $92 billion in AI and energy infrastructure investments, underlining Washington’s push to ensure the US remains competitive in the AI race despite mounting strain on the grid and water resources.

DeepMind CEO Demis Hassabis says learning how to learn is key to the AI future

Nobel laureate Demis Hassabis has argued that the most crucial ability for the next generation will be learning how to learn.

Speaking at the Odeon of Herodes Atticus in Athens, Greece, he said adaptability was vital as AI reshapes work and education.

The neuroscientist and former chess prodigy predicted that AGI machines with human-level versatility could emerge within a decade. He described it as a development that may create a future of radical abundance, although he warned of risks.

Hassabis urged a stronger focus on ‘meta-skills’ such as optimising approaches to new subjects, instead of relying solely on traditional disciplines.

Given the speed of technological change, he emphasised that people will need to update their knowledge continuously throughout their careers.

His remarks came during a discussion with Greek Prime Minister Kyriakos Mitsotakis, who warned that the unchecked growth of technology giants could fuel economic inequality and social unrest if citizens do not see clear benefits from AI adoption.

Hassabis’s work on protein folding won him the 2024 Nobel Prize in Chemistry.

xAI cuts 500 jobs as focus turns to specialist AI tutors

Elon Musk’s company xAI has laid off around 500 staff from its data annotation team after deciding to reduce its focus on general AI tutors. The employees were told their system access would be revoked immediately, although salaries will be paid until contracts end or until 30 November.

According to reports, the company will instead invest more heavily in specialist AI tutors for areas such as video games, web design, data science, medicine, and STEM.

xAI announced plans to expand the specialist team tenfold, describing the roles as highly valuable to developing its technology.

The shift comes as xAI continues to promote its chatbot Grok. Musk recently highlighted its predictive abilities, sharing benchmarks that measure performance in forecasting politics, economics, sports, and cultural events.

Observers see the move toward specialist tutors as a way to refine Grok’s training and strengthen its commercial applications.

The layoffs follow earlier signs of restructuring, with some senior staff reportedly losing access to internal systems before the formal announcement.

Analysts suggest the changes reflect a strategic recalibration, aiming to boost productivity instead of spreading resources too thinly across generalist roles.

Anthropic introduces memory feature to Claude AI for workplace productivity

The AI startup Anthropic has added a memory feature to its Claude AI, designed to automatically recall details from earlier conversations, such as project information and team preferences.

Initially, the upgrade is only available to Team and Enterprise subscribers, who can manage, edit, or delete the content that the system retains.

Anthropic presents the tool as a way to improve workplace efficiency by sparing users from repeating instructions. Enterprise administrators have additional controls, including the ability to turn memory off entirely.

Privacy safeguards are included, such as an ‘incognito mode’ for conversations that are not stored.

Analysts view the step as an effort to catch up with competitors like ChatGPT and Gemini, which already offer similar functions. Memory also links with Claude’s newer tools for creating spreadsheets, presentations, and PDFs, allowing past information to be reused in future documents.

Anthropic plans a wider release after testing the feature with businesses. Experts suggest the approach could strengthen the company’s position in the AI market by offering both continuity and security, which appeal to enterprises handling sensitive data.

Educators question boundaries of plagiarism in AI era

As AI tools such as ChatGPT become more common among students, schools and colleges report that some educators now assume any assignment completed at home involves AI. They say take-home writing tasks and traditional homework risk being devalued as a result.

Teachers and students are confused over what constitutes legitimate versus dishonest AI use. Some students use AI to outline, edit, or translate texts. Others avoid asking for guidance about AI usage because rules vary by classroom, and admitting AI help might lead to accusations.

Schools are adapting by shifting towards in-class writing, verbal assessments and locked-down work environments.

Faculty at institutions such as the University of California, Berkeley and Carnegie Mellon have begun issuing updated syllabus templates that spell out AI expectations, including bans, approvals or partial allowances.
