AI challenges how students prepare for exams

Australia’s Year 12 students are the first to complete their final school years with widespread access to AI tools such as ChatGPT.

Educators warn that while the technology can support study, it risks undermining the core skills of independent thinking and writing. In English, the only compulsory subject, critical thinking is now viewed as more essential than ever.

Trials in New South Wales and South Australia use AI programs designed to guide rather than provide answers, but teachers remain concerned about how to verify work and ensure students value their own voices.

Experts argue that exams, such as the VCE English paper in October, highlight the reality that AI cannot sit assessments. Students must still practise planning, drafting and reflecting on ideas, skills which remain central to academic success.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model built to deliver strong privacy guarantees through differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.

Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
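As a concrete illustration of that idea, below is a minimal sketch of the classic Laplace mechanism applied to a single counting query, written in Python with NumPy. It is a generic textbook example, not code from VaultGemma; the function name private_count and the epsilon value are purely illustrative.

import numpy as np

def private_count(true_count, epsilon, rng=None):
    # A counting query has sensitivity 1: adding or removing one person's
    # record changes the answer by at most 1, so Laplace noise with scale
    # 1/epsilon gives epsilon-differential privacy for this single query.
    rng = np.random.default_rng() if rng is None else rng
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon means more noise and stronger privacy; the aggregate
# answer stays useful while any single individual's presence is masked.
print(private_count(true_count=1234, epsilon=0.5))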

VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled while maintaining stability and efficiency comparable to non-private LLMs.
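At training time, that guarantee is usually obtained with a DP-SGD-style update: clip each example’s gradient, average, and add calibrated Gaussian noise. The sketch below shows that generic recipe in NumPy, assuming precomputed per-example gradients; it is not Google’s training code, and names such as dp_sgd_step, clip_norm and noise_multiplier are placeholders.

import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1, rng=None):
    # Clip each per-example gradient to bound any one example's influence,
    # average the clipped gradients, then add Gaussian noise scaled to the
    # clipping bound before taking an ordinary gradient step.
    rng = np.random.default_rng() if rng is None else rng
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

Repeating such noisy, clipped steps is what makes large-scale training hard to keep stable and accurate, which is the trade-off VaultGemma is said to address.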

This breakthrough could have significant implications for developers building privacy-sensitive AI systems in sectors ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EdChat AI app set for South Australian schools amid calls for careful use

South Australian public schools will soon gain access to EdChat, a ChatGPT-style app developed by Microsoft in partnership with the state government. Education Minister Blair Boyer said the tool will roll out next term across public high schools following a successful trial.

Safeguards have been built into EdChat to protect student data and alert moderators if students type concerning prompts, such as those related to self-harm or other sensitive topics. Boyer said student mental health was a priority during the design phase.

Teachers report that students use EdChat to clarify instructions, get maths solutions explained, and quiz themselves on exam topics. Adelaide Botanic High School principal Sarah Chambers described it as an ‘education equaliser’ that provides students with access to support throughout the day.

While many educators in Australia welcome the rollout, experts warn against overreliance on AI tools. Toby Walsh of UNSW said students must still learn how to write essays and think critically, while others noted that AI could actually encourage deeper questioning and analysis.

RMIT computing expert Michael Cowling said generative AI can strengthen critical thinking when used for brainstorming and refining ideas. He emphasised that students must learn to evaluate AI output critically and use the technology as a tool rather than a substitute for learning.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Industry leaders urge careful AI use in research projects

The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.

Head judge Denise Turner of IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.

Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.

OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.

ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

German state pushes digital sovereignty

The northern German state of Schleswig-Holstein is pushing ahead with an ambitious plan to replace Microsoft software in its public administration with open-source alternatives.

With around 30,000 civil servants, a workforce comparable in size to the European Commission’s, the region has already migrated most staff to new systems. It expects to cut its Office licences by more than two-thirds before the end of the month.

Instead of relying on Word, Outlook or SharePoint, staff are switching to LibreOffice, Thunderbird, Open Xchange and Nextcloud. A Linux pilot is also underway, testing the replacement of Windows itself.

The digital minister, Dirk Schrödter, admitted the schedule is tight but said that 24,000 employees are already using the new setup. By 2029, only a handful of Microsoft licences should remain, kept for compatibility with federal services.

The transition has not been free of challenges. Some judges have called for a return to Outlook, citing outages, while software from larger providers such as SAP has proven difficult to adapt.

Still, Schrödter argued the investment is about sovereignty rather than cost-cutting, comparing Europe’s reliance on Big Tech to its dependence on Russian gas before 2022. He urged Brussels to prioritise open-source solutions in procurement rules to reduce dependence on foreign tech giants.

Although Schleswig-Holstein is a relatively small region, its programme has already influenced wider German and European initiatives.

Similar efforts, including Germany’s OpenDesk project, have gained traction in France, Italy and the Netherlands, with several governments now watching the experiment closely.

Schrödter said the state’s progress surprises many observers, but he believes it shows how public administrations can regain control of their digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces memory feature to Claude AI for workplace productivity

The AI startup Anthropic has added a memory feature to its Claude AI, designed to automatically recall details from earlier conversations, such as project information and team preferences.

Initially, the upgrade is only available to Team and Enterprise subscribers, who can manage, edit, or delete the content that the system retains.

Anthropic presents the tool as a way to improve workplace efficiency by sparing users from repeating instructions. Enterprise administrators have additional controls, including turning memory off entirely.

Privacy safeguards are included, such as an ‘incognito mode’ for conversations that are not stored.

Analysts view the step as an effort to catch up with competitors like ChatGPT and Gemini, which already offer similar functions. Memory also links with Claude’s newer tools for creating spreadsheets, presentations, and PDFs, allowing past information to be reused in future documents.

Anthropic plans a wider release after testing the feature with businesses. Experts suggest the approach could strengthen the company’s position in the AI market by offering both continuity and security, which appeal to enterprises handling sensitive data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Educators question boundaries of plagiarism in AI era

As AI tools such as ChatGPT become more common among students, schools and colleges report that some educators now assume assignments completed at home almost certainly involve AI. Educators say take-home writing tasks and traditional homework risk being devalued as a result.

Teachers and students are confused over what constitutes legitimate versus dishonest AI use. Some students use AI to outline, edit, or translate texts. Others avoid asking for guidance about AI usage because rules vary by classroom, and admitting AI help might lead to accusations.

Schools are adapting by shifting towards in-class writing, verbal assessments and locked-down work environments.

Faculty at institutions such as the University of California, Berkeley, and Carnegie Mellon have started issuing updated syllabus templates that spell out AI expectations, including bans, approvals or partial allowances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPB adopts guidelines on the interplay between DSA and GDPR

The European Data Protection Board (EDPB) has adopted its first guidelines on how the Digital Services Act (DSA) and the General Data Protection Regulation (GDPR) work together. The aim is to clarify how the GDPR should be applied in the context of the DSA.

Presented during the EDPB’s September plenary, the guidelines aim to ensure consistent interpretation where the DSA involves personal data processing by online intermediaries such as search engines and platforms. While enforcement of the DSA falls under the relevant authorities’ discretion, the EDPB’s input supports harmonised application across the EU’s evolving digital regulatory framework, including:

  • Notice-and-action mechanisms that allow individuals or entities to report illegal content,
  • Recommender systems used by online platforms to present content to users in a particular relative order or prominence,
  • Provisions ensuring minors’ privacy, safety and security, including the prohibition of advertising based on profiling of their data,
  • Transparency of advertising by online platforms, and
  • The prohibition of advertising based on profiling using special categories of data.

Following initial guidelines on the GDPR and DSA, the EDPB is now working with the European Commission on joint guidelines covering the interplay between the Digital Markets Act (DMA) and GDPR, as well as between the upcoming AI Act and the EU data protection laws. The aim is to ensure consistency and coherent safeguards across the evolving regulatory landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France pushes for nighttime social media curfews for teens

French lawmakers are calling for stricter regulations on teen social media use, including mandatory nighttime curfews, following a parliamentary report examining TikTok’s psychological impact on minors.

The 324-page report, published Thursday by a National Assembly Inquiry Commission, proposes that social media accounts for 15- to 18-year-olds be automatically disabled between 10 p.m. and 8 a.m. to help combat mental health issues.

The report contains 43 recommendations, including greater funding for youth mental health services, awareness campaigns in schools, and a national ban on social media access for those under 15. Platforms with algorithmic recommendation systems, like TikTok, are specifically targeted.

Arthur Delaporte, the lead rapporteur and a socialist MP, also announced plans to refer TikTok to the Paris Public Prosecutor, accusing the platform of knowingly exposing minors to harmful content.

The report follows a December 2024 lawsuit filed by seven families who claim TikTok’s content contributed to their children’s suicides.

TikTok rejected the accusations, calling the report ‘misleading’ and highlighting its safety features for minors.

The report urges France not to wait for EU-level legislation and instead to lead on national regulation. President Emmanuel Macron previously demanded an EU-wide ban on social media for under-15s.

However, the European Commission has said cultural differences make such a bloc-wide approach unfeasible.

Looking ahead, the report supports stronger obligations in the upcoming Digital Fairness Act, such as giving users greater control over content feeds and limiting algorithmic manipulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ukraine urges ethical use of AI in education

AI can help build individual learning paths for Ukraine’s 3.5 million students, but its use must remain ethical, First Deputy Minister of Education and Science Yevhen Kudriavets has said.

Speaking to UNN, Kudriavets stressed that AI can analyse large volumes of information and help students acquire the knowledge they need more efficiently. He said AI could construct individual learning trajectories faster than teachers working manually.

He warned, however, that AI should not replace the educational process and that safeguards must be found to prevent misuse.

Kudriavets also said students in Ukraine should understand the reasons behind using AI, adding that it should be used to achieve knowledge rather than to obtain grades.

The deputy minister emphasised that technology itself is neutral, and how people choose to apply it determines whether it benefits education.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!