Quantum computing threatens Bitcoin: Experts debate timeline

Recent breakthroughs in quantum computing have revived fears about the long-term security of Bitcoin (BTC).

With IBM aiming to release the first fault-tolerant quantum computer, the IBM Quantum Starling, by 2029, experts are increasingly concerned that such advancements could undermine Bitcoin’s cryptographic backbone.

Bitcoin currently relies on elliptic curve cryptography (ECC) and the SHA-256 hashing algorithm to secure wallets and transactions. However, only ECC is vulnerable to Shor’s algorithm, which a sufficiently powerful quantum computer could use to derive private keys from exposed public keys; SHA-256 faces just the quadratic speedup of Grover’s algorithm, a far weaker threat.
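The asymmetry between the two algorithms can be seen with simple arithmetic: Shor’s runs in polynomial time against ECC’s discrete-logarithm problem, while Grover’s merely square-roots the brute-force cost of inverting SHA-256, leaving an infeasible 2^128 operations. A minimal illustration:

```python
import math

# Classical preimage search on SHA-256 explores a 2^256 space.
classical_preimage_work = 2 ** 256

# Grover's algorithm gives only a square-root speedup on unstructured search,
# so the quantum cost is still 2^128 operations -- far beyond any machine.
grover_preimage_work = math.isqrt(classical_preimage_work)

print(grover_preimage_work == 2 ** 128)  # -> True
```

This is why the realistic quantum concern for Bitcoin is ECC key exposure, not the SHA-256 mining and hashing layer.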

Google quantum researcher Craig Gidney warned in May 2025 that quantum resources required to break RSA encryption had been significantly overestimated. Although Bitcoin uses ECC, not RSA, Gidney’s research hinted at a threat window between 2030 and 2035 for crypto systems.

Opinions on the timeline vary. Adam Back, Blockstream CEO and early Bitcoin advocate, believes a quantum threat is still at least two decades away. However, he admitted that future progress could force users to migrate coins to quantum-safe wallets—potentially even Satoshi Nakamoto’s dormant holdings.

Others are more alarmed. David Carvalho, CEO of Naoris Protocol, claimed in a June 2025 op-ed that Bitcoin could be cracked within five years, pointing to emerging technologies like Microsoft’s Majorana chip. He estimated that nearly 30% of BTC is stored in quantum-vulnerable addresses.

‘Just one breach could destroy trust in the entire ecosystem,’ Carvalho warned, noting that BlackRock has already acknowledged the quantum risk in its Bitcoin ETF filings.

Echoing this urgency, billionaire investor Chamath Palihapitiya said in late 2024 that SHA-256 could be broken within two to five years if companies scale quantum chips like Google’s 105-qubit Willow. He urged the crypto industry to start updating encryption protocols before it’s too late.

While truly fault-tolerant quantum machines capable of breaking Bitcoin are not yet available, the accelerating pace of research suggests that preparing for a quantum future is no longer optional—it’s a necessity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google pushes users to move away from passwords

Google urges users to move beyond passwords, citing widespread reuse and vulnerability to phishing attacks. The company is now promoting alternatives like passkeys and social sign-ins as more secure and user-friendly options.

Data from Google shows that half of users reuse passwords, while the rest either memorise or write them down. Gen Z is leading the shift and is significantly more likely to adopt passkeys and social logins than older generations.

Passkeys, stored on user devices, eliminate traditional password input and reduce phishing risks by relying on biometrics or device PINs for authentication. However, limited app support and difficulty syncing across devices remain barriers to broader adoption.

Google highlights that while social sign-ins offer convenience, they come with privacy trade-offs by giving large companies access to more user activity data. Users still relying on passwords are advised to adopt app-based two-factor authentication over SMS or email, which are far less secure.
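The ‘app-based two-factor authentication’ recommended here is usually TOTP (RFC 6238), which can be sketched with nothing but the Python standard library. The secret below is the RFC’s published test value, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 seconds
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, t=59))  # -> 287082
```

Because the code is derived from a shared secret and the clock rather than delivered over a network, it cannot be intercepted in transit the way SMS or email codes can.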

UK remote work still a major data security risk

A new survey reveals that 69% of UK companies reported data breaches to the Information Commissioner’s Office (ICO) over the past year, a steep rise from 53% in 2024.

The research conducted by Apricorn highlights that nearly half of remote workers knowingly compromised data security.

Based on responses from 200 UK IT security leaders, the study found that phishing remains the leading cause of breaches, followed by human error. Despite widespread remote work policies, 58% of organisations believe staff lack the proper tools or skills to protect sensitive data.

The use of personal devices for work has climbed to 56%, while only 19% of firms now mandate company-issued hardware. These trends raise ongoing concerns about endpoint security, data visibility, and maintaining GDPR compliance in hybrid work environments.

Technical support gaps and unclear encryption practices remain pressing issues, with nearly half of respondents finding it increasingly difficult to manage remote work technology. Apricorn’s Jon Fielding called for a stronger link between written policy and practical security measures to reduce breaches.

Real-time, on-device security: The only way to stop modern mobile Trojans

Mobile banking faces a serious new threat: AI-powered Trojans operating silently within legitimate apps. These advanced forms of malware go beyond stealing login credentials; they use AI to intercept biometrics, manipulate app flows in real time, and execute fraud without raising alarms.

Today’s AI Trojans adapt on the fly. They bypass signature-based detection and cloud-based threat engines by completing attacks directly on the device before traditional systems can react.

Most current security tools weren’t designed for this level of sophistication, exposing banks and users.

To counter this, experts advocate for AI-native security built directly into mobile apps: systems that operate on the device itself, monitoring user interactions and app behaviour in real time to detect anomalies and stop fraud before it begins.

As these AI threats grow more common, the message is clear: mobile apps must defend themselves from within. Real-time, on-device protection is now essential to safeguarding users and staying ahead of a rapidly evolving risk.

NSA and allies set AI data security standards

The National Security Agency (NSA), in partnership with cybersecurity agencies from the UK, Australia, New Zealand, and others, has released new guidance aimed at protecting the integrity of data used in AI systems.

The Cybersecurity Information Sheet (CSI), titled AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems, outlines emerging threats and sets out 10 recommendations for mitigating them.

The CSI builds on earlier joint guidance from 2024 and signals growing global urgency around safeguarding the data used to train and operate AI systems.

The report identifies three core risks across the AI lifecycle: tampered datasets in the supply chain, deliberately poisoned data intended to manipulate models, and data drift—where changes in data over time reduce performance or create new vulnerabilities.

These threats may erode accuracy and trust in AI systems, particularly in sensitive areas like defence, cybersecurity, and critical infrastructure, where even small failures could have far-reaching consequences.

To reduce these risks, the CSI recommends a layered approach—starting with sourcing data from reliable origins and tracking provenance using digital credentials. It advises encrypting data at every stage, verifying integrity with cryptographic tools, and storing data securely in certified systems.

Additional measures include deploying zero trust architecture, using digital signatures for dataset updates, and applying access controls based on data classification instead of relying on broad administrative trust.
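As a concrete, if simplified, illustration of these integrity recommendations, the sketch below hashes a dataset and authenticates its version manifest. A real deployment would use asymmetric digital signatures (e.g. Ed25519) and proper key management rather than this illustrative HMAC, and all names here are hypothetical:

```python
import hashlib
import hmac
import json

def digest(data: bytes) -> str:
    """Content digest recorded in the dataset manifest."""
    return hashlib.sha256(data).hexdigest()

def sign_manifest(key: bytes, manifest: dict) -> str:
    """Authenticate a manifest update (stand-in for a digital signature)."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(key: bytes, manifest: dict, tag: str, data: bytes) -> bool:
    """Check both the manifest's authenticity and the data's integrity."""
    ok_tag = hmac.compare_digest(sign_manifest(key, manifest), tag)
    return ok_tag and manifest["sha256"] == digest(data)

key = b"dataset-signing-key"  # hypothetical key, for illustration only
data = b"training-batch-001"
manifest = {"version": 1, "sha256": digest(data)}
tag = sign_manifest(key, manifest)

print(verify(key, manifest, tag, data))         # untampered -> True
print(verify(key, manifest, tag, data + b"x"))  # tampered   -> False
```

Tracking provenance then amounts to carrying such signed manifests through every hand-off in the supply chain, so tampering at any stage is detectable downstream.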

The CSI also urges ongoing risk assessments using frameworks like NIST’s AI RMF, encouraging organisations to anticipate emerging challenges such as quantum threats and advanced data manipulation.

Privacy-preserving techniques, secure deletion protocols, and infrastructure controls round out the recommendations.

Rather than treating AI as a standalone tool, the guidance calls for embedding strong data governance and security throughout its lifecycle to prevent compromised systems from shaping critical outcomes.

Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom’, malware designed to intercept and manipulate a user’s internet traffic rather than merely infect the device.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down. However, the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require technical setup, typically involving multiple configuration steps, instead of a simple Windows installer.

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.

Hackers target recruiters with fake CVs and malware

A financially driven hacking group known as FIN6 has reversed the usual job scam model by targeting recruiters instead of job seekers. Using realistic LinkedIn and Indeed profiles, the attackers pose as candidates and send malware-laced CVs hosted on reputable cloud platforms.

Rather than attaching files, the attackers prompt recruiters to type in resume URLs manually, bypassing email security tools. These URLs lead to fake portfolio sites hosted on Amazon Web Services that selectively deliver malware to visitors who pass checks as human.

Victims receive a zip file containing a disguised shortcut that installs the more_eggs malware, which is capable of credential theft and remote access.

The JavaScript-based tool, linked to another group known as Venom Spider, uses legitimate Windows utilities to evade detection.

The campaign includes stealthy techniques such as traffic filtering, living-off-the-land binaries, and persistent registry modifications. Its domains mimic real personal names, helping the attackers build trust while running a convincing phishing operation.

Turing Institute urges stronger AI research security

The Alan Turing Institute has warned that urgent action is needed to protect the UK’s AI research from espionage, intellectual property theft and risky international collaborations.

Its Centre for Emerging Technology and Security (CETaS) has published a report calling for a culture shift across academia to better recognise and mitigate these risks.

The report highlights inconsistencies in how security risks are understood within universities and a lack of incentives for researchers to follow government guidelines. Sensitive data, the dual-use potential of AI, and the risk of reverse engineering make the field particularly vulnerable to foreign interference.

Lead author Megan Hughes stressed the need for a coordinated response, urging government and academia to find the right balance between academic freedom and security.

The report outlines 13 recommendations, including expanding support for academic due diligence and issuing clearer guidance on high-risk international partnerships.

Further proposals call for compulsory research security training, better threat communication from national agencies, and standardised risk assessments before publishing AI research.

The aim is to build a more resilient research ecosystem as global interest in UK-led AI innovation continues to grow.

AI tools are not enough without basic cybersecurity

At London Tech Week, Darktrace and UK officials warned that many firms are over-relying on AI tools while failing to implement basic cybersecurity practices.

Despite the hype around AI, essential measures like user access control and system segmentation remain missing in many organisations.

Cybercriminals are already exploiting AI to automate phishing and accelerate intrusions in the UK, while outdated infrastructure and short-term thinking leave companies vulnerable.

Boards often struggle to assess AI tools properly, buying into trends rather than addressing real threats.

Experts stressed that AI is not a silver bullet and must be used alongside human expertise and solid security foundations.

Domain-specific AI models, built with transparency and interpretability, are needed to avoid the dangers of overconfidence and misapplication in high-risk areas.

INTERPOL cracks down on global cybercrime networks

Over 20,000 malicious IP addresses and domains linked to data-stealing malware have been taken down during Operation Secure, a coordinated cybercrime crackdown led by INTERPOL between January and April 2025.

Law enforcement agencies from 26 countries worked together to locate rogue servers and dismantle criminal networks instead of tackling threats in isolation.

The operation, supported by cybersecurity firms including Group-IB, Kaspersky and Trend Micro, led to the removal of nearly 80 per cent of the identified malicious infrastructure. Authorities seized 41 servers, confiscated over 100GB of stolen data and arrested 32 suspects.

More than 216,000 individuals and organisations were alerted, helping them act quickly by changing passwords, freezing accounts or blocking unauthorised access.

Vietnamese police arrested 18 people, including a group leader found with cash, SIM cards and business records linked to fraudulent schemes. Sri Lankan and Nauruan authorities carried out home raids, arresting 14 suspects and identifying 40 victims.

In Hong Kong, police traced 117 command-and-control servers across 89 internet providers. INTERPOL hailed the effort as proof of the impact of cross-border cooperation in dismantling cybercriminal infrastructure instead of allowing it to flourish undisturbed.
