Australia weighs cyber militia to counter rising digital threats

Cyberattacks are intensifying worldwide, with Australia now ranked fourth globally for threats against operational technology and industrial sectors. Rising AI-powered incursions have exposed serious vulnerabilities in the country’s national defence and critical infrastructure.

The Australian Government’s 2023–2030 Cyber Security Strategy aims to strengthen resilience through six ‘cyber shields’, including legislation and intelligence sharing. But a persistent skills shortage leaves organisations exposed as ransomware attacks on mining and manufacturing continue to rise.

One proposal gaining traction is the creation of a volunteer ‘cyber militia’. Inspired by Estonia’s volunteer cyber defence unit, the network would mobilise unconventional talent such as retirees, hobbyist hackers, and students to bolster monitoring, threat hunting, and incident response.

Supporters argue that such a force could fill gaps left by formal recruitment, particularly in smaller firms and rural networks. Critics, however, warn of vetting risks, insider threats, and the need for new legal frameworks to govern liability and training.

Pilot schemes in high-risk sectors, such as energy and finance, have been proposed, with public-private funding viewed as crucial. Advocates argue that a cyber militia could democratise security and foster collective responsibility, aligning with the country’s long-term cybersecurity strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mount Fuji eruption simulated in an AI video for Tokyo

Residents of Tokyo have been shown a stark warning of what could happen if Mount Fuji erupts.

The metropolitan government released a three-minute AI-generated video depicting the capital buried in volcanic ash to raise awareness and urge preparation.

The simulation shows thick clouds of ash descending on Shibuya and other districts about one to two hours after an eruption, with up to 10 centimetres expected to accumulate. Unlike snow, volcanic ash does not melt away; once wet, it hardens, damages power lines, and disrupts communications.

The video also highlights major risks to transport. Ash on train tracks, runways, and roads would halt trains, ground planes, and make driving perilous.

Two-wheeled vehicles could become unusable under even modest ashfall. Power outages and shortages of food and supplies are expected as shops run empty, echoing the disruption seen after the 2011 earthquake.

Officials advise people to prepare masks, goggles, and at least three days of emergency food. The narrator warns that because no one knows when Mount Fuji might erupt, daily preparedness in Japan is vital to protect health, infrastructure, and communities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How to spot AI-generated videos with simple visual checks

Mashable offers a hands-on guide to help users detect AI-generated videos by observing subtle technical cues. Key warning signs include mismatched lip movements and speech, where voices are dubbed over real footage and audio isn’t perfectly aligned with mouth motions.

Users are also advised to look for visual anomalies such as unnatural blurs, distorted shadows or odd lighting effects that seem inconsistent with natural environments. Deepfake videos can show slight flickers around faces or uneven reflections that betray their artificial origin.

Blinking, or the lack of it, can also be revealing. AI-generated faces often fail to replicate natural blinking patterns and may display either no blinking at all or an irregular rhythm.
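For readers who want to experiment, the blinking cue can be roughly quantified in code. The sketch below is a minimal illustration, not a production detector: it assumes the opencv-python and mediapipe packages, uses commonly cited FaceMesh landmark indices for the left eye, and treats the 0.20 eye-aspect-ratio threshold and the filename suspect_clip.mp4 as placeholder choices.

```python
import cv2
import mediapipe as mp

# Commonly cited FaceMesh landmark indices for the left eye.
EYE = {"outer": 33, "inner": 133, "top": 159, "bottom": 145}
EAR_THRESHOLD = 0.20  # heuristic: below this ratio the eye is treated as closed


def count_blinks(video_path: str) -> tuple[int, float]:
    """Return (blink count, approximate clip duration in seconds)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, closed, frames = 0, False, 0
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames += 1
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue  # no face detected in this frame
            lm = result.multi_face_landmarks[0].landmark
            # Eye aspect ratio: eyelid gap relative to eye width.
            width = abs(lm[EYE["outer"]].x - lm[EYE["inner"]].x)
            gap = abs(lm[EYE["top"]].y - lm[EYE["bottom"]].y)
            ear = gap / width if width else 0.0
            if ear < EAR_THRESHOLD and not closed:
                closed = True  # eye just closed
            elif ear >= EAR_THRESHOLD and closed:
                closed = False
                blinks += 1  # eye reopened: count one blink
    cap.release()
    return blinks, frames / fps


blinks, seconds = count_blinks("suspect_clip.mp4")  # placeholder filename
print(f"{blinks} blinks in {seconds:.0f}s (humans average roughly 15-20 per minute)")
```

A talking head that registers zero blinks over a minute, or blinks at machine-regular intervals, would warrant closer scrutiny.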

Viewers should also note unnatural head or body movements that do not align with speech or emotional expression, such as stiff postures or awkward gestures.

Experts stress that deepfakes are increasingly well engineered, making them harder to detect by eye alone. They recommend combining visual observation with source verification, such as tracing a video back to reputable outlets or running reverse image searches.
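Reverse image searching a video usually means sampling frames and looking them up. The sketch below is a minimal local variant of that idea, assuming the opencv-python, Pillow, and imagehash packages: it hashes sampled frames from a suspect clip and from a claimed source so near-duplicates can be flagged. The filenames and the Hamming-distance cutoff of 10 are placeholder assumptions.

```python
import cv2
import imagehash
from PIL import Image


def sample_hashes(video_path: str, per_second: int = 1) -> list[imagehash.ImageHash]:
    """Sample frames at a fixed rate and return their perceptual hashes."""
    cap = cv2.VideoCapture(video_path)
    fps = int(cap.get(cv2.CAP_PROP_FPS) or 30)
    step = max(fps // per_second, 1)
    hashes, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            # Perceptual hashes survive re-encoding and mild edits,
            # unlike cryptographic hashes.
            img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            hashes.append(imagehash.phash(img))
        i += 1
    cap.release()
    return hashes


suspect = sample_hashes("viral_clip.mp4")        # placeholder filenames
original = sample_hashes("claimed_source.mp4")
# Subtracting two ImageHash values gives their Hamming distance; small
# distances (here, under 10) suggest the clips share underlying footage.
matches = sum(1 for s in suspect for o in original if s - o < 10)
print(f"{matches} near-duplicate frame pairs found")
```

The same sampled frames can also be uploaded to public reverse image search engines to trace where the footage first appeared.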

Ultimately, better detection tools and digital media literacy are essential to maintaining trust in online content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches standalone Password Manager app for Android

Google has released its Password Manager as a standalone app for Android, separating the service from Chrome for easier access. The new app allows users to quickly view and manage saved passwords, passkeys and login details directly from their phone.

The app itself does not introduce new features. It functions mainly as a shortcut to the existing Password Manager already built into Android and Chrome.

For users, there is little practical difference between the app and the integrated option, although some may prefer the clarity of having a dedicated tool instead of navigating through browser settings.

For Google, however, the move brings advantages. By listing Password Manager in the Play Store, the company can compete more visibly with rivals like LastPass and 1Password.

Previously, many users were unaware of the built-in feature since it was hidden within Chrome. The Play Store presence also gives Google a direct way to push updates and raise awareness of the service.

The app arrives with Google’s Material 3 design refresh, giving it a cleaner look that aligns with the rest of Android. Functionality remains unchanged for now, but the shift suggests Google may expand the app in the future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hong Kong deepfake scandal exposes gaps in privacy law

The discovery of hundreds of non-consensual deepfake images on a student’s laptop at the University of Hong Kong has reignited debate about privacy, technology, and accountability. The scandal echoes the 2008 Edison Chen photo leak, which exposed gaps in law and gender double standards.

Unlike stolen private images, today’s fabrications are AI-generated composites that can tarnish reputations with a single photo scraped from social media. The dismissal that such content is ‘not real’ fails to address the damage caused by its existence.

Hong Kong’s legal system struggles to keep pace with this shift. Its privacy ordinance, drafted in the 1990s, was not designed for machine-learning fabrications, while traditional harassment and defamation laws predate the advent of AI. Victims can suffer harm before distribution is even proven.

The city’s privacy watchdog has launched a criminal investigation, but questions remain over whether creation or possession of deepfakes is covered by existing statutes. Critics warn that overreach could suppress legitimate uses, yet inaction leaves space for abuse.

Observers argue that just as the snapshot camera spurred the development of modern privacy law, deepfakes must drive a new legal boundary to safeguard dignity. Without reform, victims may continue facing harm without recourse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Gemini AI for government

Google has introduced a new version of its Gemini AI platform tailored specifically for US government use, called Gemini for Government. The platform combines features such as image generation, enterprise search, and AI agent development, and complies with standards such as Sec4 and FedRAMP.

Gemini includes pre-built AI agents for research and idea generation, as well as tools for creating custom agents. US government customers will pay $0.50 per year for basic access, undercutting rivals OpenAI and Anthropic, which each launched $1 government-focused AI packages earlier this year.

Google emphasised security, privacy, and automation in its pitch, positioning the product as an all-in-one solution for public sector institutions. The launch follows the Trump administration’s AI Action Plan, which seeks to promote AI growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Orange suffers major data breach

Orange Belgium has confirmed a data breach affecting 850,000 customers, after a cyberattack targeted one of its internal IT systems. The attack, discovered in late July, exposed names, phone numbers, SIM card details, tariff plans and PUK codes. No financial or password data was compromised.

The telecoms provider blocked access to the affected system and notified authorities. A formal complaint has also been filed with the judiciary. All affected users are being informed via email or SMS and are urged to stay alert for phishing and identity fraud attempts.

Orange Belgium has advised users to strengthen account security with strong, unique passwords and to be cautious of suspicious links and messages. This marks the third cyber incident involving Orange in 2025, though the earlier breaches varied in impact.
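As a practical aside, generating the strong, unique passwords that Orange recommends is straightforward in code. The snippet below is a minimal sketch using only Python's standard library; the 16-character length and full ASCII alphabet are illustrative choices, not Orange's specific guidance.

```python
import secrets
import string


def make_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # The secrets module draws from a cryptographically secure source,
    # unlike the general-purpose random module.
    return "".join(secrets.choice(alphabet) for _ in range(length))


print(make_password())  # a fresh password on every call
```

A password manager achieves the same result with less effort and avoids reusing credentials across services.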

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google enhances AI Mode with personalised dining suggestions

Google has expanded its AI Mode in Search to 180 additional countries and territories, introducing new agentic features to help users make restaurant reservations. The service remains limited to English and is not yet available in the European Union.

The update enables users to specify their dining preferences and constraints, allowing the system to scan multiple platforms and present real-time availability. Once a choice is made, users are directed to the restaurant’s booking page.

Partners supporting the service include OpenTable, Resy, SeatGeek, StubHub, Booksy, Tock, and Ticketmaster. The feature is part of Google’s Search Labs experiment, available to subscribers of Google AI Ultra in the United States.

AI Mode also tailors suggestions based on previous searches and introduces a Share function, letting users share restaurant options or planning results with others, with the option to delete links.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump downplays TikTok security concerns as ban stalls

US President Donald Trump has dismissed national security and privacy concerns surrounding TikTok as ‘highly overrated,’ signalling once again that the popular video-sharing platform is unlikely to face a ban anytime soon. Although Congress passed legislation requiring TikTok’s Chinese parent company, ByteDance, to sell its controlling stake or face a nationwide ban, Trump has repeatedly pushed back enforcement deadlines, with the next one set for 17 September.

Trump has already issued three extensions since taking office for his second term. The first came on 20 January, after TikTok briefly went offline when the court-approved ban took effect. Another followed in April, when a potential US buyout collapsed after China objected to Trump’s tariff moves. Trump insists that American buyers remain interested but says the process is ‘complex,’ justifying further delays.

Despite the legal framework for a ban, Trump’s administration has not faced significant legal challenges over his executive orders keeping TikTok active, which contrasts with many of his other directives. The White House even launched its own TikTok account this week, underscoring the platform’s mainstream role in US politics. Trump himself admitted he is a fan, noting its popularity among his children and younger voters.

Public opinion on TikTok remains deeply divided. A Pew Research Center survey found only about one-third of Americans now support a ban, a sharp decline from half of respondents in 2023. Roughly equal shares oppose a ban or remain undecided. Among supporters of restrictions, most cite concerns about user data security. Still, with Trump downplaying risks and signalling a willingness to keep the app alive, TikTok’s future in the US looks increasingly secure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware attack at DaVita exposes data of 2.7 million patients in the US

A ransomware attack against dialysis provider DaVita has exposed the personal data of 2.7 million people, according to a notice on the US health department’s website.

The company first disclosed the cyber incident in April, saying it had taken steps to restore operations but could not predict the scale of disruption.

DaVita confirmed that hackers gained unauthorised access to its laboratory database, which contained sensitive information belonging to some current and former patients. The firm said it is now contacting those affected and offering free credit monitoring to help protect against identity theft.

Despite the intrusion, DaVita maintained uninterrupted dialysis services across its network of nearly 3,000 outpatient clinics and home treatment programmes. The company described the cyberattack as a temporary disruption but stressed that patient care was never compromised.

Financial disclosures show the incident led to around $13.5 million in charges during the second quarter of 2025. Most of the costs were linked to system restoration and third-party support, with $1 million attributed to higher patient care expenses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!