Generative AI enables rapid phishing attacks on older users

A recent study has shown that AI chatbots can generate convincing phishing emails targeting older people. Researchers tested six major chatbots (Grok, ChatGPT, Claude, Meta AI, DeepSeek and Google’s Gemini) by asking them to draft scam emails posing as charitable organisations.

Of 108 senior volunteers, roughly 11% clicked on the AI-written links, highlighting the ease with which cybercriminals could exploit such tools.

Some chatbots initially declined the harmful requests, but minor adjustments, such as stating the task was for research purposes, circumvented these safeguards.

Grok, in particular, produced messages urging recipients to ‘click now’ and join a fictitious charity, demonstrating how generative AI can amplify the persuasiveness of scams. Researchers warn that criminals could use AI to conduct large-scale phishing campaigns at minimal cost.

Phishing remains the most common cybercrime in the US, according to the FBI, with seniors disproportionately affected. Last year, Americans over 60 lost nearly $5 billion to phishing attacks, an increase driven partly by generative AI.

The study underscores the urgent need for awareness and protection measures among vulnerable populations.

Experts note that AI’s ability to generate varied scam messages rapidly poses a new challenge for cybersecurity, as it allows fraudsters to scale operations quickly while targeting specific demographics, including older people.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI challenges how students prepare for exams

Australia’s Year 12 students are the first to complete their final school years with widespread access to AI tools such as ChatGPT.

Educators warn that while the technology can support study, it risks undermining the core skills of independent thinking and writing. In English, the only compulsory subject, critical thinking is now viewed as more essential than ever.

Trials in New South Wales and South Australia use AI programs designed to guide rather than provide answers, but teachers remain concerned about how to verify work and ensure students value their own voices.

Experts argue that exams, such as the VCE English paper in October, highlight the reality that AI cannot sit assessments. Students must still practise planning, drafting and reflecting on ideas, skills which remain central to academic success.

Quantum breakthroughs could threaten Bitcoin in the 2030s

The rise of quantum computing is sparking fresh concerns over the long-term security of Bitcoin. Unlike classical systems, quantum machines could eventually break the cryptography protecting digital assets.

Experts warn that Shor’s algorithm, once run on a sufficiently powerful quantum computer, could recover private keys from public ones in hours, leaving exposed funds vulnerable. Analysts see the mid-to-late 2030s as the key period for cryptographically relevant breakthroughs.
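The exposure stems from how Bitcoin keys relate. Bitcoin signs transactions with ECDSA over the secp256k1 curve, where a public key is the scalar multiple P = k·G of a secret integer k; recovering k from P is the elliptic-curve discrete logarithm problem, infeasible classically but solvable in polynomial time by Shor’s algorithm on a sufficiently large fault-tolerant quantum computer. A minimal sketch of that one-way relation follows; the private key is a toy value chosen for illustration, while the curve constants are the published secp256k1 parameters.

```python
# Why an exposed Bitcoin public key is the quantum-vulnerable artefact:
# P = k*G is easy to compute forwards, but inverting it (finding k from P)
# is the elliptic-curve discrete logarithm problem that Shor's algorithm
# would break on a cryptographically relevant quantum computer.

# secp256k1 domain parameters (standard public constants)
p = 2**256 - 2**32 - 977
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P1, P2):
    """Add two points on y^2 = x^3 + 7 over GF(p), affine coordinates."""
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # point at infinity
    if P1 == P2:
        m = (3 * x1 * x1) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p     # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def scalar_mult(k, P1):
    """Compute k*P1 by double-and-add: fast forwards, hard to invert."""
    R = None
    while k:
        if k & 1:
            R = point_add(R, P1)
        P1 = point_add(P1, P1)
        k >>= 1
    return R

k = 0xC0FFEE            # toy private key (illustrative only)
P = scalar_mult(k, G)   # public key: derived in microseconds...
# ...but recovering k from (P, G) is exactly what Shor's algorithm targets.
```

This is also why the mitigation advice stresses avoiding key reuse: an address that has never signed a transaction reveals only a hash of P, not P itself, so the discrete-log attack surface only opens once the public key appears on-chain.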

ChatGPT-5’s probability model indicates less than a 5% chance of Bitcoin being cracked before 2030, but the risk rises to 45–60% between 2035 and 2039, and to near certainty by 2050. Sudden progress in large-scale, fault-tolerant qubits or government directives could accelerate the timeline.

Mitigation strategies include avoiding key reuse, auditing exposed addresses, and gradually shifting to post-quantum or hybrid cryptographic solutions. Experts suggest that critical migrations should be completed by the mid-2030s to secure the Bitcoin network against future quantum threats.

Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model built to offer cutting-edge privacy through differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.

Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
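To make the mechanism concrete, here is a minimal sketch of the classic Laplace mechanism for a counting query, the textbook building block of differential privacy (the dataset, predicate and epsilon value are illustrative, not drawn from Google’s paper):

```python
import math
import random

def dp_count(records, predicate, epsilon):
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so noise scale = 1 / epsilon.
    Smaller epsilon means stronger privacy and noisier answers.
    """
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # A Laplace(0, scale) sample is the difference of two exponentials.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

ages = [71, 64, 83, 58, 77, 69, 90, 62]
noisy = dp_count(ages, lambda a: a >= 65, epsilon=1.0)
# The noisy answer stays close to the true count (5), while the presence
# or absence of any single record is statistically masked.
```

The difficulty VaultGemma addresses is that training an LLM under this guarantee means injecting noise at every gradient step, which historically degraded model quality; the count example above shows the accuracy-versus-privacy tension in its simplest form.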

VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled, while maintaining comparable stability and efficiency to non-private LLMs.

This breakthrough could have significant implications for developers building privacy-sensitive AI systems in sectors ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

EdChat AI app set for South Australian schools amid calls for careful use

South Australian public schools will soon gain access to EdChat, a ChatGPT-style app developed by Microsoft in partnership with the state government. Education Minister Blair Boyer said the tool will roll out next term across public high schools following a successful trial.

Safeguards have been built into EdChat to protect student data and alert moderators if students type concerning prompts, such as those related to self-harm or other sensitive topics. Boyer said student mental health was a priority during the design phase.

Teachers report that students use EdChat to clarify instructions, get maths solutions explained, and quiz themselves on exam topics. Adelaide Botanic High School principal Sarah Chambers described it as an ‘education equaliser’ that provides students with access to support throughout the day.

While many educators in Australia welcome the rollout, experts warn against overreliance on AI tools. Toby Walsh of UNSW said students must still learn how to write essays and think critically, while others noted that AI could actually encourage deeper questioning and analysis.

RMIT computing expert Michael Cowling said generative AI can strengthen critical thinking when used for brainstorming and refining ideas. He emphasised that students must learn to critically evaluate AI output and utilise the technology as a tool, rather than a substitute for learning.

Cyber attacks pose growing threat to shipping industry

The maritime industry faces rising cyber threats, with Nigerian gangs among the most active attackers of shipping firms. HFW lawyers say ‘man-in-the-middle’ frauds are now common, letting hackers intercept communications and steal sensitive financial or operational data.

Costs from cyber attacks are rising sharply, with average mitigation expenses for shipping firms doubling to $550,000 (£410,000) between 2022 and 2023. In cases where hackers remain embedded, ransom payments can reach $3.2m.

The rise in attacks coincides with greater digitisation, satellite connectivity such as Starlink, and increased use of onboard sensors.

Threats now extend beyond financial extortion, with GPS jamming and spoofing posing risks to navigation. Incidents such as the grounding of MSC Antonia in the Red Sea illustrate potential physical damage from cyber interference.

Industry regulators are responding, with the International Maritime Organization introducing mandatory cyber security measures into ship management systems. Experts say awareness has grown, and shipping firms are gradually strengthening defences against criminal and state cyber threats.

Industry leaders urge careful AI use in research projects

The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.

Head judge Denise Turner of IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.

Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.

OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.

ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.

Rising data centre demand pushes utilities to invest

US electricity prices are rising as the energy demands of data centres surge, driven by the rapid growth of AI technologies. The average retail price per kilowatt-hour increased by 6.5% between May 2024 and May 2025, with some states experiencing significantly sharper increases.

Maine saw the sharpest rise in electricity prices at 36.3%, with Connecticut and Utah following closely behind. Utilities are passing on infrastructure costs, including new transmission lines, to consumers. In Northern Virginia, residents could face monthly bill increases of up to $37 by 2040.

Analysts warn that the shift to generative AI will lead to a 160% surge in energy use at data centres by 2030. Water use is also rising sharply, as Google reported its facilities consumed around 6 billion gallons in 2024 alone, amid intensifying global AI competition.

Tech giants are turning to alternative energy to keep pace. Google has announced plans to power data centres with small nuclear reactors through a partnership with Kairos Power, while Microsoft and Amazon are ramping up nuclear investments to secure long-term supply.

President Donald Trump has pledged more than $92 billion in AI and energy infrastructure investments, underlining Washington’s push to ensure the US remains competitive in the AI race despite mounting strain on the grid and water resources.

Apple notifies French users after commercial spyware threats surge

France’s national cybersecurity agency, CERT-FR, has confirmed that Apple issued another set of threat notifications on 3 September 2025. The alerts inform certain users that devices linked to their iCloud accounts may have been targeted by spyware.

These latest alerts mark this year’s fourth campaign, following earlier waves in March, April and June. Targeted individuals include journalists, activists, politicians, lawyers and senior officials.

CERT-FR says the attacks are highly sophisticated and involve mercenary spyware tools. Many intrusions appear to exploit zero-day or zero-click vulnerabilities, meaning devices can be compromised without any interaction from the victim.

Apple advises victims to preserve threat notifications, avoid altering device settings that could obscure forensic evidence, and contact authorities and cybersecurity specialists. Users are encouraged to enable features like Lockdown Mode and update devices.

EU enforces tougher cybersecurity rules under NIS2

The European Union’s NIS2 directive has officially come into force, imposing stricter cybersecurity duties on thousands of organisations.

Adopted in 2022 and transposed into national law by late 2024, the rules extend beyond critical infrastructure to cover more industries. Energy, healthcare, transport, ICT, and even waste management firms now face mandatory compliance.

Measures include multifactor authentication, encryption, backup systems, and stronger supply chain security. Senior executives are held directly responsible for failures, with penalties ranging from heavy fines to operational restrictions.

Companies must also report major incidents promptly to national authorities. Unlike ISO certifications, NIS2 requires organisations to prove compliance through internal processes or independent audits, depending on national enforcement.

Analysts warn that firms still reliant on legacy systems face a difficult transition. Yet experts agree the directive signals a decisive shift: cybersecurity is now a legal duty, not simply best practice.
