New cyberattack method poses major threat to smart grids, study finds

A new study published in ‘Engineering’ highlights a growing cybersecurity threat to smart grids as they become more complex due to increased integration of distributed energy sources.

The research, conducted by Zengji Liu, Mengge Liu, Qi Wang, and Yi Tang, focuses on a sophisticated form of cyberattack known as a false data injection attack (FDIA) that targets data-driven algorithms used in smart grid operations.

As modern power systems adopt technologies like battery storage and solar panels, they rely more heavily on algorithms to manage energy distribution and grid stability. However, these algorithms can be exploited.

The study introduces a novel black-box FDIA method that injects false data directly at the measurement modules of distributed power supplies, using generative adversarial networks (GANs) to produce stealthy attack vectors.
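To make the idea concrete, here is a minimal sketch of how a GAN can be trained to produce small, plausible-looking perturbations to sensor measurements. It illustrates the general technique only, not the authors’ implementation: the network sizes, the perturbation cap and the synthetic training data are all assumptions.

```python
# Minimal sketch of a GAN that learns stealthy measurement perturbations.
# Illustrative only: layer sizes, EPSILON and the training data are
# assumptions, not the parameters used in the study.
import torch
import torch.nn as nn

N_MEAS = 16      # number of measurement channels (assumed)
EPSILON = 0.05   # per-unit cap on injected deviation, to stay stealthy

generator = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, N_MEAS), nn.Tanh(),   # output in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(N_MEAS, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),     # P(measurement vector is genuine)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_measurements: torch.Tensor) -> None:
    batch = real_measurements.size(0)
    noise = torch.randn(batch, 32)
    # Attack vector: genuine readings plus a small learned perturbation.
    fake = real_measurements + EPSILON * generator(noise)

    # Discriminator learns to tell genuine readings from falsified ones.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real_measurements), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator learns to make falsified readings indistinguishable.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()

# Demo with synthetic data standing in for captured sensor traces.
train_step(torch.rand(64, N_MEAS))
```

The key point is the adversarial objective: the generator is rewarded only when its falsified readings fool a detector trained on genuine ones, which is what makes the resulting attack vectors stealthy.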

What makes this method particularly dangerous is that it doesn’t require detailed knowledge of the grid’s internal workings, making it more practical and harder to detect in real-world scenarios.

The researchers also proposed an approach for estimating controller and filter parameters in distributed energy systems, which lowers the barrier to launching such attacks.

To test the method, the team simulated attacks on the New England 39-bus system, specifically targeting a deep learning model used for transient stability prediction. Results showed a dramatic drop in accuracy—from 98.75% to 56%—after the attack.

The attack also proved effective across multiple neural network models and on larger grids, such as the IEEE 118-bus and 145-bus test systems.

These findings underscore the urgent need for better cybersecurity defenses in the evolving smart grid landscape. As systems grow more complex and reliant on AI-driven management, developing robust protection against FDIA threats will be critical.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum computing threatens Bitcoin: Experts debate timeline

Recent breakthroughs in quantum computing have revived fears about the long-term security of Bitcoin (BTC).

With IBM aiming to release the first fault-tolerant quantum computer, the IBM Quantum Starling, by 2029, experts are increasingly concerned that such advancements could undermine Bitcoin’s cryptographic backbone.

Bitcoin currently relies on elliptic curve cryptography (ECC) to secure wallets and transactions, alongside the SHA-256 hashing algorithm. ECC is vulnerable to Shor’s algorithm, which a sufficiently powerful quantum computer could use to recover private keys from exposed public keys, while SHA-256’s effective strength would be roughly halved by Grover’s search.
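A simplified sketch shows where the exposure sits. The placeholder key below is made up, and a single SHA-256 stands in for Bitcoin’s real address pipeline (SHA-256 followed by RIPEMD-160 and a checksum):

```python
# Toy illustration of Bitcoin's two cryptographic layers.
import hashlib

# Placeholder compressed public key -- made up for this sketch.
public_key = bytes.fromhex("02" + "ab" * 32)

# Layer 1: ECC (secp256k1) ties a private key to this public key.
# Shor's algorithm could recover the private key, but only once the
# public key is exposed on-chain (when an address spends or is reused).
# Layer 2: hashing. Hashes are not broken by Shor's algorithm, and
# Grover's search merely halves their effective security.
address_commitment = hashlib.sha256(public_key).hexdigest()
print("address commitment:", address_commitment[:16], "...")
```

This is why analysts typically count coins sitting in addresses with already-exposed public keys as the most quantum-vulnerable.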

Google quantum researcher Craig Gidney warned in May 2025 that quantum resources required to break RSA encryption had been significantly overestimated. Although Bitcoin uses ECC, not RSA, Gidney’s research hinted at a threat window between 2030 and 2035 for crypto systems.

Opinions on the timeline vary. Adam Back, Blockstream CEO and early Bitcoin advocate, believes a quantum threat is still at least two decades away. However, he admitted that future progress could force users to migrate coins to quantum-safe wallets—potentially even Satoshi Nakamoto’s dormant holdings.

Others are more alarmed. David Carvalho, CEO of Naoris Protocol, claimed in a June 2025 op-ed that Bitcoin could be cracked within five years, pointing to emerging technologies like Microsoft’s Majorana chip. He estimated that nearly 30% of BTC is stored in quantum-vulnerable addresses.

‘Just one breach could destroy trust in the entire ecosystem,’ Carvalho warned, noting that BlackRock has already acknowledged the quantum risk in its Bitcoin ETF filings.

Echoing this urgency, billionaire investor Chamath Palihapitiya said in late 2024 that SHA-256 could be broken within two to five years if companies scale quantum chips like Google’s 105-qubit Willow. He urged the crypto industry to start updating encryption protocols before it’s too late.

While truly fault-tolerant quantum machines capable of breaking Bitcoin are not yet available, the accelerating pace of research suggests that preparing for a quantum future is no longer optional—it’s a necessity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI health tools need clinicians to prevent serious risks, Oxford study warns

The University of Oxford has warned that AI in healthcare, particularly chatbots, should not operate without human oversight.

Researchers found that relying solely on AI for medical self-assessment could worsen patient outcomes instead of improving access to care. The study highlights how these tools, while fast and data-driven, fall short in delivering the judgement and empathy that only trained professionals can offer.

The findings raise alarm about the growing dependence on AI to fill gaps caused by doctor shortages and rising costs. Chatbots are often seen as scalable solutions, but without rigorous human-in-the-loop validation, they risk providing misleading or inconsistent information, particularly to vulnerable groups.

Rather than helping, they might increase health disparities by delaying diagnosis or giving patients false reassurance.

Experts are calling for safer, hybrid approaches that embed clinicians into the design and ongoing use of AI tools. The Oxford researchers stress that continuous testing, ethical safeguards and clear protocols must be in place.

Instead of replacing clinical judgement, AI should support it. The future of digital healthcare hinges not just on innovation but on responsibility and partnership between technology and human care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack on Nova Scotia Power exposes sensitive data of 280,000 customers

Canada’s top cyber-defence official has spoken out following the ransomware attack that compromised the personal data of 280,000 Nova Scotia Power customers.

The breach, which occurred on 19 March but went undetected until 25 April, affected over half of the utility’s customer base. Stolen data included names, addresses, birthdates, driver’s licences, social insurance numbers, and banking details.

Rajiv Gupta, head of the Canadian Centre for Cyber Security, confirmed that Nova Scotia Power had contacted the agency following the incident.

While he refrained from discussing operational details or attributing blame, he highlighted the rising frequency of ransomware attacks against critical infrastructure across Canada.

He explained how criminal groups use double extortion tactics — stealing data and locking systems — to pressure organisations into paying ransoms, often without guaranteeing system restoration or data confidentiality.

Although the utility declined to pay the ransom, the fallout has led to a wave of scrutiny. Gupta warned that growing interconnectivity and the integration of legacy systems with internet-facing platforms have increased vulnerability.

He urged utilities and other infrastructure operators to build defences based on worst-case scenarios and to adopt recommended cyber hygiene practices and the Centre’s ransomware playbook.

In response to the breach, the Nova Scotia Energy Board has approved a $1.8 million investment in cybersecurity upgrades.

The Canadian cyber agency, although lacking regulatory authority, continues to provide support and share lessons from such incidents with other organisations to raise national resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

German state leaves Microsoft Teams for digital sovereignty

In a bold move highlighting growing concerns over digital sovereignty, the German state of Schleswig-Holstein is cutting ties with Microsoft. Announced by Digitalisation Minister Dirk Schroedter, the state is uninstalling the tech giant’s ubiquitous software across its entire administration.

‘We’re done with Teams!’ declared Minister Schroedter, signalling a complete shift away from Microsoft products like Word, Excel, Outlook, and eventually the Windows operating system itself. Instead, Schleswig-Holstein is turning to open-source alternatives like LibreOffice and Linux.

The reason? A strong desire to ‘take back control’ of its data and reduce reliance on US tech giants. Minister Schroedter emphasised that recent geopolitical tensions, particularly following Donald Trump’s return to the White House and rising US-EU friction, have ‘strengthened interest’ in their path.

‘The war in Ukraine revealed our energy dependencies,’ he noted, ‘and now we see there are also digital dependencies.’ The transition, affecting all 60,000 public servants, including police, judges, and eventually teachers, begins in less than three months.

Data will also move away from Microsoft-controlled clouds to German infrastructure. Beyond sovereignty, the state expects significant cost savings – potentially tens of millions of euros – compared to licensing fees and mandatory updates, which experts say can leave organisations feeling taken ‘by the throat’. The move also references long-standing antitrust concerns, like the EU’s investigation into Microsoft bundling Teams.

Microsoft was earlier accused of blocking the email of ICC Chief Prosecutor Karim Khan in compliance with US sanctions—an action it denied, noting the ICC had reportedly switched to ProtonMail. The incident raised fresh questions about digital sovereignty and the risks of foreign cloud dependency.

Why does it matter?

While challenges exist, like potential staff resistance highlighted by past struggles in Munich, Schleswig-Holstein is forging ahead. They join other entities like France’s gendarmerie and are watched by cities like Copenhagen and Aarhus. Bolstered by the new EU ‘Interoperable Europe Act’, Schleswig-Holstein aims to be a pioneer, proving that governments can successfully reclaim control of their digital destiny.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google pushes users to move away from passwords

Google urges users to move beyond passwords, citing widespread reuse and vulnerability to phishing attacks. The company is now promoting alternatives like passkeys and social sign-ins as more secure and user-friendly options.

Data from Google shows that half of users reuse passwords, while the rest either memorise or write them down. Gen Z is leading the shift and is significantly more likely to adopt passkeys and social logins than older generations.

Passkeys, stored on user devices, eliminate traditional password input and reduce phishing risks by relying on biometrics or device PINs for authentication. However, limited app support and difficulty syncing across devices remain barriers to broader adoption.
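Under the hood, a passkey is a device-bound keypair used in a challenge-response exchange defined by the WebAuthn/FIDO2 standards. The sketch below imitates that flow with Ed25519 signatures from the `cryptography` package; the exchange is simplified and the variable names are illustrative, not the WebAuthn API.

```python
# Toy challenge-response flow behind passkeys (illustrative; real
# passkeys use the WebAuthn/FIDO2 protocol, not this exact exchange).
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device generates a keypair; only the PUBLIC key
# leaves the device. The private key stays behind biometrics or a PIN.
device_key = Ed25519PrivateKey.generate()
server_stored_pubkey = device_key.public_key()

# Login: the server sends a random challenge; the device signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# The server verifies with the stored public key. No shared secret is
# ever transmitted, so there is nothing reusable for a phisher to steal.
server_stored_pubkey.verify(signature, challenge)  # raises if invalid
print("passkey login verified")
```

Because the server stores only a public key and verifies signatures over fresh challenges, a phished or breached server yields nothing an attacker can replay.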

Google highlights that while social sign-ins offer convenience, they come with privacy trade-offs by giving large companies access to more user activity data. Users still relying on passwords are advised to adopt app-based two-factor authentication over SMS or email, which are far less secure.
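‘App-based’ two-factor authentication typically means TOTP one-time codes (RFC 6238), which are computed locally from a shared secret and the current time rather than delivered over an interceptable channel like SMS. A minimal sketch, using a made-up demo secret:

```python
# Minimal TOTP (RFC 6238) generator -- what authenticator apps compute.
# The base32 secret below is a made-up example, not a real credential.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    # Counter = number of elapsed 30-second windows since the Unix epoch.
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # current 6-digit code for this demo secret
```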

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK remote work still a major data security risk

A new survey reveals that 69% of UK companies reported data breaches to the Information Commissioner’s Office (ICO) over the past year, a steep rise from 53% in 2024.

The research conducted by Apricorn highlights that nearly half of remote workers knowingly compromised data security.

Based on responses from 200 UK IT security leaders, the study found that phishing remains the leading cause of breaches, followed by human error. Despite widespread remote work policies, 58% of organisations believe staff lack the proper tools or skills to protect sensitive data.

The use of personal devices for work has climbed to 56%, while only 19% of firms now mandate company-issued hardware. These trends raise ongoing concerns about endpoint security, data visibility, and maintaining GDPR compliance in hybrid work environments.

Technical support gaps and unclear encryption practices remain pressing issues, with nearly half of respondents finding it increasingly difficult to manage remote work technology. Apricorn’s Jon Fielding called for a stronger link between written policy and practical security measures to reduce breaches.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Real-time, on-device security: The only way to stop modern mobile Trojans

Mobile banking faces a serious new threat: AI-powered Trojans operating silently within legitimate apps. These advanced forms of malware go beyond stealing login credentials—they use AI to intercept biometrics, manipulate app flows in real time, and execute fraud without raising alarms.

Today’s AI Trojans adapt on the fly. They bypass signature-based detection and cloud-based threat engines by completing attacks directly on the device before traditional systems can react.

Most current security tools weren’t designed for this level of sophistication, exposing banks and users.

To counter this, experts advocate for AI-native security built directly into mobile apps—systems that operate on the device itself, monitoring user interactions and app behaviour in real time to detect anomalies and stop fraud before it begins.
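As a rough sketch of what on-device behavioural monitoring can look like, the snippet below keeps a running baseline of the user’s interaction timing and flags sharp deviations locally, with no cloud round-trip. The feature (inter-tap interval) and the threshold are illustrative assumptions, not any vendor’s detection model.

```python
# Toy on-device anomaly detector: flags interaction timings that deviate
# sharply from the user's own baseline. Feature choice and threshold are
# illustrative assumptions, not a production fraud model.
import math

class OnDeviceAnomalyDetector:
    """Welford's online mean/variance -- cheap enough to run in-app."""

    def __init__(self, z_threshold: float = 4.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.z_threshold = z_threshold

    def observe(self, inter_tap_ms: float) -> bool:
        """Returns True if this interaction looks anomalous."""
        if self.n >= 30:  # only score once a baseline exists
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(inter_tap_ms - self.mean) / std > self.z_threshold:
                return True  # e.g. scripted taps far faster than the user
        # Update the running baseline (Welford's algorithm).
        self.n += 1
        delta = inter_tap_ms - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (inter_tap_ms - self.mean)
        return False

detector = OnDeviceAnomalyDetector()
for ms in [420, 390, 450, 410, 433] * 6 + [12]:   # 12 ms: bot-like tap
    if detector.observe(ms):
        print(f"anomaly: {ms} ms between taps")
```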

As these AI threats grow more common, the message is clear: mobile apps must defend themselves from within. Real-time, on-device protection is now essential to safeguarding users and staying ahead of a rapidly evolving risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Santa Clara offers AI training with Silicon Valley focus

Santa Clara University has launched a new master’s programme in AI designed to equip students with technical expertise and ethical insight.

The interdisciplinary degree, offered through the School of Engineering, blends software and hardware tracks to address the growing need for professionals who can manage AI systems responsibly.

The course offers two concentrations: one focusing on algorithms and computation for computer science students and another tailored to engineering students interested in robotics, devices, and AI chip design. Students will also engage in real-world practicums with Silicon Valley companies.

Faculty say the programme integrates ethical training into its core, aiming to produce graduates who can develop intelligent technologies with social awareness. As AI tools increasingly shape society and education, the university hopes to prepare students for both innovation and accountability.

Professor Yi Fang, director of the Responsible AI initiative, said students will leave with a deeper understanding of AI’s societal impact. The initiative reflects a broader trend in higher education, where demand for AI-related skills continues to rise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI companions are becoming emotional lifelines

Researchers at Waseda University found that three in four users turn to AI for emotional advice, reflecting growing psychological attachment to chatbot companions. Their new tool, the Experiences in Human-AI Relationships Scale, reveals that many users see AI as a steady presence in their lives.

Two patterns of attachment emerged: anxiety, where users fear being emotionally let down by AI, and avoidance, marked by discomfort with emotional closeness. These patterns closely resemble human relationship styles, despite AI’s inability to reciprocate or abandon its users.

Lead researcher Fan Yang warned that emotionally vulnerable individuals could be exploited by platforms encouraging overuse or financial spending. Sudden disruptions in service, he noted, might even trigger feelings akin to grief or separation anxiety.

The study, based on Chinese participants, suggests AI systems might shape user behaviour depending on design and cultural context. Further research is planned to explore links between AI use and long-term well-being, social function, and emotional regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!