Study reveals why humans adapt better than AI

A new interdisciplinary study from Bielefeld University and other leading institutions explores why humans excel at adapting to new situations while AI systems often struggle. Researchers found humans generalise through abstraction and concepts, while AI relies on statistical or rule-based methods.

The study proposes a framework to align human and AI reasoning, defining generalisation, how it works, and how it can be assessed. Experts say differences in generalisation limit AI flexibility and stress the need for human-centred design in medicine, transport, and decision-making.

Researchers collaborated across more than 20 institutions, including Bielefeld, Bamberg, Amsterdam, and Oxford, under the SAIL project. The initiative aims to develop AI systems that are sustainable, transparent, and better able to support human values and decision-making.

Interdisciplinary insights may guide the responsible use of AI in human-AI teams, ensuring machines complement rather than disrupt human judgement.

The findings underline the importance of bridging cognitive science and AI research to foster more adaptable, trustworthy, and human-aligned AI systems capable of tackling complex, real-world challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Millions of customer records stolen in Kering luxury brand data breach

Kering has confirmed a data breach affecting several of its luxury brands, including Gucci, Balenciaga, Brioni, and Alexander McQueen, after unauthorised access to its Salesforce systems compromised millions of customer records.

Hacking group ShinyHunters has claimed responsibility, alleging it exfiltrated 43.5 million records from Gucci and nearly 13 million from the other brands. The stolen data includes names, email addresses, dates of birth, sales histories, and home addresses.

Kering stated that the incident occurred in June 2025 and did not compromise bank or credit card details or national identifiers. The company has reported the breach to the relevant regulators and is notifying the affected customers.

Evidence shared by ShinyHunters suggests Balenciaga made an initial ransom payment of €500,000 before negotiations broke down. The group released sample data and chat logs to support its claims.

ShinyHunters has exploited Salesforce weaknesses in previous attacks targeting luxury, travel, and financial firms. Questions remain about the total number of affected customers and the potential exposure of other Kering brands.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google lays off over 200 AI contractors amid union tensions

US tech giant Google has dismissed more than 200 contractors working on its Gemini chatbot and AI Overviews tool, sparking criticism from labour advocates and claims of retaliation against workers pushing for unionisation.

Many of the affected staff were highly trained ‘super raters’ who helped refine Google’s AI systems before being abruptly laid off.

The move highlights growing concerns over job insecurity in the AI sector, where companies depend heavily on outsourced and low-paid contract workers instead of permanent employees.

Workers allege they were penalised for raising issues about inadequate pay, poor working conditions, and the risks of training AI that could eventually replace them.

Google has attempted to distance itself from the controversy, arguing that subcontractor GlobalLogic handled the layoffs rather than the company itself.

Yet critics say that outsourcing allows the tech giant to expand its AI operations without accountability, while undermining collective bargaining efforts.

Labour experts warn that the cuts reflect a broader industry trend in which AI development rests on precarious work arrangements. With union-busting claims intensifying, the dismissals are now seen as part of a deeper struggle over workers’ rights in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

PDGrapher AI tool aims to speed up precision medicine development

Harvard Medical School researchers have developed an AI tool that could transform drug discovery by identifying multiple drivers of disease and suggesting treatments to restore cells to a healthy state.

The model, called PDGrapher, utilises graph neural networks to map the relationships between genes, proteins, and cellular pathways, thereby predicting the most effective targets for reversing disease. Unlike traditional approaches that focus on a single protein, it considers multiple factors at once.

Trained on datasets of diseased cells before and after treatment, PDGrapher correctly predicted known drug targets and identified new candidates supported by emerging research. The model ranked correct targets up to 35% higher and ran 25 times faster than comparable tools.

Researchers are now applying PDGrapher to complex diseases such as Parkinson’s, Alzheimer’s, and various cancers, where single-target therapies often fail. By identifying combinations of targets, the tool can help overcome drug resistance and expedite treatment design.

Senior author Marinka Zitnik said the ultimate goal is to create a cellular ‘roadmap’ to guide therapy development and enable personalised treatments for patients. After further validation, PDGrapher could become a cornerstone in precision medicine.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI challenges how students prepare for exams

Australia’s Year 12 students are the first to complete their final school years with widespread access to AI tools such as ChatGPT.

Educators warn that while the technology can support study, it risks undermining the core skills of independent thinking and writing. In English, the only compulsory subject, critical thinking is now viewed as more essential than ever.

Trials in New South Wales and South Australia use AI programs designed to guide rather than provide answers, but teachers remain concerned about how to verify work and ensure students value their own voices.

Experts argue that exams, such as the VCE English paper in October, highlight the reality that AI cannot sit assessments. Students must still practise planning, drafting and reflecting on ideas, skills which remain central to academic success.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US and China reach framework deal on TikTok

The United States and China have reached a tentative ‘framework’ deal on the future of TikTok’s American operations, US Treasury Secretary Scott Bessent confirmed during trade talks in Madrid. The agreement, which still requires the approval of Presidents Donald Trump and Xi Jinping, is intended to head off a looming deadline under which the video-sharing app could be banned in the US unless its Chinese owner ByteDance sells its American division.

US officials say the framework addresses national security concerns by paving the way for US ownership of TikTok’s operations, while China insists any final deal must not undermine its companies’ interests. Washington has long argued that the app’s access to US user data poses significant risks, while ByteDance maintains its American arm operates independently and respects user privacy.

The law mandating a sale or ban, upheld by the Supreme Court earlier this year, is due to take effect on 17 September. Although the framework marks progress, key details remain unresolved, particularly over whether TikTok’s recommendation algorithm and user data will be fully transferred, stored, and protected in the US.

Experts warn that unless strict safeguards are included, the deal may solve ownership issues without closing potential ‘backdoors’ for Beijing. Concerns also remain over how much influence China retains, with negotiators linking TikTok’s fate to wider tariff discussions between the two powers.

If fully implemented, the agreement could represent a breakthrough in both trade relations and tech governance. But with ByteDance among China’s most powerful AI firms, the stakes go far beyond social media, touching on questions of global competition, national security, and digital sovereignty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model built to offer cutting-edge privacy through differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.

Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
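The noise-addition idea can be illustrated with the classic Laplace mechanism on a simple counting query. This is a generic sketch of the principle, not VaultGemma’s actual training procedure (training a model privately uses more elaborate machinery); the function names and parameters below are illustrative.

```python
import math
import random

def dp_count(values, threshold, epsilon):
    """Differentially private count of values above a threshold.

    A counting query has sensitivity 1 (adding or removing one
    person changes the count by at most 1), so Laplace noise with
    scale 1/epsilon gives epsilon-differential privacy. Smaller
    epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for v in values if v > threshold)
    # Sample Laplace(0, 1/epsilon) noise via inverse-CDF transform.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise
```

With a large epsilon the noisy answer stays close to the true count, while a small epsilon hides any individual’s contribution at the cost of accuracy, which is exactly the trade-off the article describes.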

VaultGemma is designed to eliminate that trade-off. Google says the model can be trained and deployed with differential privacy enabled while maintaining stability and efficiency comparable to non-private LLMs.

This breakthrough could have significant implications for developers building privacy-sensitive AI systems, ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use ChatGPT for fake ID attack

A hacking group has reportedly used ChatGPT to generate a fake military ID in a phishing attack targeting South Korea. The incident, uncovered by cybersecurity firm Genians, shows how AI can be misused to make malicious campaigns more convincing.

Researchers said the group, known as Kimsuky, crafted a counterfeit South Korean military identification card to support a phishing email. While the document looked genuine, the email instead contained links to malware designed to extract data from victims’ devices.

Targets included journalists, human rights activists and researchers. Kimsuky has a history of cyber-espionage, and US officials have previously linked the group to global intelligence-gathering operations.

The findings highlight a wider trend of AI being exploited for cybercrime, from creating fake résumés to planning attacks and developing malware. Genians warned that attackers are rapidly adopting AI to impersonate trusted organisations, and said the full scale of the campaign remains unknown.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EdChat AI app set for South Australian schools amid calls for careful use

South Australian public schools will soon gain access to EdChat, a ChatGPT-style app developed by Microsoft in partnership with the state government. Education Minister Blair Boyer said the tool will roll out next term across public high schools following a successful trial.

Safeguards have been built into EdChat to protect student data and alert moderators if students type concerning prompts, such as those related to self-harm or other sensitive topics. Boyer said student mental health was a priority during the design phase.

Teachers report that students use EdChat to clarify instructions, get maths solutions explained, and quiz themselves on exam topics. Adelaide Botanic High School principal Sarah Chambers described it as an ‘education equaliser’ that provides students with access to support throughout the day.

While many educators in Australia welcome the rollout, experts warn against overreliance on AI tools. Toby Walsh of UNSW said students must still learn how to write essays and think critically, while others noted that AI could actually encourage deeper questioning and analysis.

RMIT computing expert Michael Cowling said generative AI can strengthen critical thinking when used for brainstorming and refining ideas. He emphasised that students must learn to critically evaluate AI output and utilise the technology as a tool, rather than a substitute for learning.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Industry leaders urge careful AI use in research projects

The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.

Head judge Denise Turner of IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.

Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.

OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.

ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!