Japan-backed AI avatar to highlight climate risks at Osaka Expo

An AI avatar named Una will be presented at the UN pavilion during the 2025 World Expo in Osaka later this month as part of efforts to promote climate action.

The anime-inspired character, developed with support from the Japanese government, will use 3D hologram technology to engage visitors from 29 September to 4 October.

Una was launched online in May and can respond automatically in multiple languages, including English and Japanese. She was created under the Pacific Green Transformation Project, which supports renewable energy initiatives such as electric vehicles in Samoa and hydropower in Vanuatu.

Her role is to share stories of Pacific island nations facing the impacts of rising sea levels and raise awareness about climate change.

Kanni Wignaraja, UN assistant secretary-general and regional director for Asia and the Pacific, described Una as a strong voice for threatened communities. Influenced by Japanese manga and anime, she is designed to act as a cultural ambassador who connects Pacific struggles with Japanese audiences.

Pacific sea levels have risen by more than 15 centimetres in some regions over the past three decades, leading to flooding, crop damage and migration fears. The risks are existential for nations like Tuvalu, with an average elevation of just two metres.

The UN hopes Una will encourage the public to support renewable energy adoption and climate resilience in vulnerable regions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI search tools challenge Google’s dominance

AI tools are increasingly reshaping how people search online, with large language models like ChatGPT drawing millions away from traditional engines.

Montreal-based lawyer and consultant Anja-Sara Lahady says she now turns to ChatGPT instead of Google for everyday tasks such as meal ideas, interior decoration tips and drafting low-risk emails. She describes it as a second assistant rather than a replacement for legal reasoning.

ChatGPT’s weekly user base has surged to around 800 million, double the figure reported earlier in the year. Data shows that nearly 6% of desktop searches are already directed to language models, compared with barely half that rate a year ago.

Academics such as Professor Feng Li argue that users favour AI tools because they reduce cognitive effort by providing clear summaries instead of multiple links. However, he warns that verification remains essential due to factual errors.

Google insists its search activity continues to expand, supported by AI Overviews and AI Mode, which offer more conversational and tailored answers.

Yet testimony in a US antitrust case revealed that Google searches on Apple devices via Safari declined for the first time in two decades, underlining the competitive pressure from AI.

The rise of language models is also forcing a shift in digital marketing. Agencies report that LLMs highlight trusted websites, press releases and established media rather than social media content.

This change may influence consumer habits, with evidence suggesting that referrals from AI systems often lead to higher-quality sales conversions. For many users, AI now represents a faster and more personal route to decisions on products, travel or professional tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI challenges how students prepare for exams

Australia’s Year 12 students are the first to complete their final year of school with widespread access to AI tools such as ChatGPT.

Educators warn that while the technology can support study, it risks undermining the core skills of independent thinking and writing. In English, the only compulsory subject, critical thinking is now viewed as more essential than ever.

Trials in New South Wales and South Australia use AI programs designed to guide rather than provide answers, but teachers remain concerned about how to verify work and ensure students value their own voices.

Experts argue that exams, such as the VCE English paper in October, highlight the reality that AI cannot sit assessments. Students must still practise planning, drafting and reflecting on ideas, skills which remain central to academic success.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model built to deliver state-of-the-art privacy guarantees through differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.

Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
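
To illustrate the core idea, the sketch below answers a simple count query with Laplace noise scaled to the query’s sensitivity, so no single person’s record can be pinned down from the result. It is a minimal, hypothetical example of the general technique, not code from Google or VaultGemma; the function name private_count and the toy dataset are invented for illustration.

```python
# Minimal sketch of the Laplace mechanism, the classic differential-privacy
# building block: add noise proportional to sensitivity / epsilon.
# (Hypothetical illustration; not Google or VaultGemma code.)
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Return a noisy count satisfying epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0  # adding or removing one person changes the count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [23, 35, 41, 52, 29, 67, 44]
print(private_count(ages, lambda age: age > 40))  # noisy, but close to the true value of 4
```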

VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled, while maintaining comparable stability and efficiency to non-private LLMs.
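
For training, the standard recipe is DP-SGD: clip each example’s gradient and add calibrated noise before every optimiser step. The sketch below shows that recipe on a toy model using the open-source Opacus library, purely as an assumed stand-in for illustration; the article does not describe how VaultGemma itself was trained.

```python
# Minimal DP-SGD training sketch with Opacus (assumed stand-in library);
# a toy model and random data, not VaultGemma's actual training stack.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
data = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32)

# Wrap model, optimizer and loader so each step clips per-sample gradients
# and adds calibrated noise -- the core of DP-SGD.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,  # more noise: stronger privacy, lower utility
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

criterion = nn.CrossEntropyLoss()
for features, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()

# Report the privacy budget spent so far (epsilon at a fixed delta).
print(privacy_engine.get_epsilon(delta=1e-5))
```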

This breakthrough could have significant implications for developers building privacy-sensitive AI systems, ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EdChat AI app set for South Australian schools amid calls for careful use

South Australian public schools will soon gain access to EdChat, a ChatGPT-style app developed by Microsoft in partnership with the state government. Education Minister Blair Boyer said the tool will roll out next term across public high schools following a successful trial.

Safeguards have been built into EdChat to protect student data and alert moderators if students type concerning prompts, such as those related to self-harm or other sensitive topics. Boyer said student mental health was a priority during the design phase.

Teachers report that students use EdChat to clarify instructions, get maths solutions explained, and quiz themselves on exam topics. Adelaide Botanic High School principal Sarah Chambers described it as an ‘education equaliser’ that provides students with access to support throughout the day.

While many educators in Australia welcome the rollout, experts warn against overreliance on AI tools. Toby Walsh of UNSW said students must still learn how to write essays and think critically, while others noted that AI could actually encourage deeper questioning and analysis.

RMIT computing expert Michael Cowling said generative AI can strengthen critical thinking when used for brainstorming and refining ideas. He emphasised that students must learn to critically evaluate AI output and utilise the technology as a tool, rather than a substitute for learning.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Industry leaders urge careful AI use in research projects

The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.

Head judge Denise Turner of IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.

Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.

OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.

ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

German state pushes digital sovereignty

The northern German state of Schleswig-Holstein is pushing ahead with an ambitious plan to replace Microsoft software in its public administration with open-source alternatives.

With around 30,000 civil servants, a workforce comparable in size to the European Commission’s, the region has already migrated most staff to new systems. It expects to cut its Office licences by more than two-thirds before the end of the month.

Instead of relying on Word, Outlook or SharePoint, staff are switching to LibreOffice, Thunderbird, Open Xchange and Nextcloud. A Linux pilot is also underway, testing the replacement of Windows itself.

The digital minister, Dirk Schrödter, admitted the schedule is tight but said that 24,000 employees are already using the new setup. By 2029, only a handful of Microsoft licences should remain, kept for compatibility with federal services.

The transition has not been free of challenges. Some judges have called for a return to Outlook, citing outages, while software from larger providers such as SAP has proven difficult to adapt.

Still, Schrödter argued the investment is about sovereignty rather than cost-cutting, comparing Europe’s reliance on Big Tech to its dependence on Russian gas before 2022. He urged Brussels to prioritise open-source solutions in procurement rules to reduce dependence on foreign tech giants.

Although Schleswig-Holstein is a relatively small region, its programme has already influenced wider German and European initiatives.

Similar efforts, including Germany’s OpenDesk project, have gained traction in France, Italy and the Netherlands, with several governments now watching the experiment closely.

Schrödter said the state’s progress surprises many observers, but he believes it shows how public administrations can regain control of their digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces memory feature to Claude AI for workplace productivity

The AI startup Anthropic has added a memory feature to its Claude AI, designed to automatically recall details from earlier conversations, such as project information and team preferences.

Initially, the upgrade is only available to Team and Enterprise subscribers, who can manage, edit, or delete the content that the system retains.

Anthropic presents the tool as a way to improve workplace efficiency instead of forcing users to repeat instructions. Enterprise administrators have additional controls, including the option to turn memory off entirely.

Privacy safeguards are included, such as an ‘incognito mode’ for conversations that are not stored.

Analysts view the step as an effort to catch up with competitors like ChatGPT and Gemini, which already offer similar functions. Memory also links with Claude’s newer tools for creating spreadsheets, presentations, and PDFs, allowing past information to be reused in future documents.

Anthropic plans a wider release after testing the feature with businesses. Experts suggest the approach could strengthen the company’s position in the AI market by offering both continuity and security, which appeal to enterprises handling sensitive data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Educators question boundaries of plagiarism in AI era

As AI tools such as ChatGPT become more common among students, schools and colleges report that some educators now treat assignments completed at home as almost certain to involve AI. Educators say take-home writing tasks and traditional homework risk being devalued.

Teachers and students are confused over what constitutes legitimate versus dishonest AI use. Some students use AI to outline, edit, or translate texts. Others avoid asking for guidance about AI usage because rules vary by classroom, and admitting AI help might lead to accusations.

Schools are adapting by shifting towards in-class writing, verbal assessments and locked-down work environments.

Faculty at institutions such as the University of California, Berkeley and Carnegie Mellon have started to issue updated syllabus templates that spell out AI expectations, including bans, approvals or partial allowances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPB adopts guidelines on the interplay between DSA and GDPR

The European Data Protection Board (EDPB) has adopted its first guidelines on how the Digital Services Act (DSA) and the General Data Protection Regulation (GDPR) work together. The aim is to clarify how the GDPR should be applied in the context of the DSA.

Presented during the EDPB’s September plenary, the guidelines ensure consistent interpretation where the DSA involves personal data processing by online intermediaries such as search engines and platforms. While enforcement of the DSA falls under the relevant authorities’ discretion, the EDPB’s input supports harmonised application across the EU’s evolving digital regulatory framework, covering areas including:

  • Notice-and-action mechanisms that allow individuals or entities to report illegal content;
  • Recommender systems that online platforms use to present specific content to users in a particular order or with particular prominence;
  • Provisions protecting minors’ privacy, safety and security, including the prohibition of profiling-based advertising that uses their data;
  • Transparency of advertising on online platforms; and
  • The prohibition of profiling-based advertising that uses special categories of data.

Following initial guidelines on the GDPR and DSA, the EDPB is now working with the European Commission on joint guidelines covering the interplay between the Digital Markets Act (DMA) and GDPR, as well as between the upcoming AI Act and the EU data protection laws. The aim is to ensure consistency and coherent safeguards across the evolving regulatory landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!