DeepSeek searches soar after ChatGPT outage

ChatGPT users faced widespread disruption on 10 June 2025 after a global outage hit OpenAI’s services, affecting both the chatbot and associated APIs. OpenAI has yet to confirm the cause, stating only that users were experiencing high error rates and delays.

The blackout halted work for many creative teams who rely on the tool to generate content and meet deadlines. While some were stalled, others turned to alternatives, sparking a surge in interest in rival AI chatbots.

Searches for DeepSeek, a Chinese-developed AI model, jumped 109% to over 2.1 million on the outage day. Claude AI saw a 95% increase in queries, while interest in Google Gemini and Microsoft Copilot also spiked significantly.

Industry experts say the incident underscores the risk of overdependence on a single platform and highlights the growing maturity of competing AI tools. While frustrating for many, the disruption appears to be fuelling broader competition and diversification in the generative AI market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google pushes users to move away from passwords

Google urges users to move beyond passwords, citing widespread reuse and vulnerability to phishing attacks. The company is now promoting alternatives like passkeys and social sign-ins as more secure and user-friendly options.

Data from Google shows that half of users reuse passwords, while the rest either memorise or write them down. Gen Z is leading the shift and is significantly more likely to adopt passkeys and social logins than older generations.

Passkeys, stored on user devices, eliminate traditional password input and reduce phishing risks by relying on biometrics or device PINs for authentication. However, limited app support and difficulty syncing across devices remain barriers to broader adoption.

Google highlights that while social sign-ins offer convenience, they come with privacy trade-offs by giving large companies access to more user activity data. Users still relying on passwords are advised to adopt app-based two-factor authentication over SMS or email, which are far less secure.
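The app-based two-factor authentication Google recommends typically means a time-based one-time password (TOTP) generator such as an authenticator app. A minimal sketch of the TOTP scheme standardised in RFC 6238, using only the Python standard library (the secret below is the RFC’s published test key, shown purely for illustration):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 time-based one-time password, as generated by authenticator apps."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of `step`-second intervals since the epoch.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

Because the code is derived from a shared secret and the current time, it cannot be intercepted in transit the way an SMS or email code can, which is why app-based codes are considered far more phishing-resistant.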


UK remote work still a major data security risk

A new survey reveals that 69% of UK companies reported data breaches to the Information Commissioner’s Office (ICO) over the past year, a steep rise from 53% in 2024.

The research conducted by Apricorn highlights that nearly half of remote workers knowingly compromised data security.

Based on responses from 200 UK IT security leaders, the study found that phishing remains the leading cause of breaches, followed by human error. Despite widespread remote work policies, 58% of organisations believe staff lack the proper tools or skills to protect sensitive data.

The use of personal devices for work has climbed to 56%, while only 19% of firms now mandate company-issued hardware. These trends raise ongoing concerns about endpoint security, data visibility, and maintaining GDPR compliance in hybrid work environments.

Technical support gaps and unclear encryption practices remain pressing issues, with nearly half of respondents finding it increasingly difficult to manage remote work technology. Apricorn’s Jon Fielding called for a stronger link between written policy and practical security measures to reduce breaches.


Major internet outage disrupts Google Cloud and popular platforms

A sweeping internet outage on Thursday, 12 June 2025, caused significant disruptions to services relying on Google Cloud and Cloudflare. Popular platforms such as Spotify, Discord, Twitch, and Fubo experienced widespread downtime, with many users reporting service interruptions through Downdetector.

Despite rampant speculation online about a large-scale cyberattack, neither Google nor Cloudflare has confirmed any such link. Google Cloud reported an incident affecting users globally, stating engineers had identified the root cause and were actively working on mitigation.

While some services have begun to recover, there is no definitive timeline for full restoration. Google attributed the issue to internal authentication problems, though details remain limited.

Adding to the confusion, social media posts speculated that multiple cloud providers, including AWS and Azure, were affected. However, this was quickly challenged, and Cloudflare clarified that only a small portion of its services, which rely on Google Cloud, experienced issues—its core systems remained fully operational.

While investigations are ongoing and recovery efforts are in progress, the full extent and exact cause of the outage remain unclear. For now, users are advised to monitor official updates from service providers as the situation develops.


NSA and allies set AI data security standards

The National Security Agency (NSA), in partnership with cybersecurity agencies from the UK, Australia, New Zealand, and others, has released new guidance aimed at protecting the integrity of data used in AI systems.

The Cybersecurity Information Sheet (CSI), titled AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems, outlines emerging threats and sets out 10 recommendations for mitigating them.

The CSI builds on earlier joint guidance from 2024 and signals growing global urgency around safeguarding AI data instead of allowing systems to operate without scrutiny.

The report identifies three core risks across the AI lifecycle: tampered datasets in the supply chain, deliberately poisoned data intended to manipulate models, and data drift—where changes in data over time reduce performance or create new vulnerabilities.
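The third risk, data drift, is commonly monitored with simple statistical checks that compare incoming data against a trusted reference sample. A minimal sketch (the three-standard-error threshold is an illustrative choice, not drawn from the guidance):

```python
import statistics

def drifted(reference, current, threshold=3.0):
    """Flag drift when the current batch mean moves more than
    `threshold` standard errors away from the reference mean."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    standard_error = sigma / len(current) ** 0.5
    return abs(statistics.mean(current) - mu) > threshold * standard_error
```

Production systems would track many features and distributions rather than a single mean, but the principle is the same: detect when live data no longer resembles the data the model was validated on.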

These threats may erode accuracy and trust in AI systems, particularly in sensitive areas like defence, cybersecurity, and critical infrastructure, where even small failures could have far-reaching consequences.

To reduce these risks, the CSI recommends a layered approach—starting with sourcing data from reliable origins and tracking provenance using digital credentials. It advises encrypting data at every stage, verifying integrity with cryptographic tools, and storing data securely in certified systems.
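The integrity-verification step can be as simple as recording a cryptographic digest when a dataset is published and re-checking it before training. A minimal sketch using SHA-256 from the Python standard library (the dataset bytes here stand in for real files):

```python
import hashlib
import hmac

def sha256_digest(data):
    """Hex SHA-256 digest of a dataset blob, recorded at publication time."""
    return hashlib.sha256(data).hexdigest()

def verify_dataset(data, expected_digest):
    """Reject the dataset if it no longer matches the provenance record.
    compare_digest performs a timing-safe comparison."""
    return hmac.compare_digest(sha256_digest(data), expected_digest)
```

Any tampering in the supply chain changes the digest, so a mismatch signals that the data should not be trusted for training or inference.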

Additional measures include deploying zero trust architecture, using digital signatures for dataset updates, and applying access controls based on data classification instead of relying on broad administrative trust.

The CSI also urges ongoing risk assessments using frameworks like NIST’s AI RMF, encouraging organisations to anticipate emerging challenges such as quantum threats and advanced data manipulation.

Privacy-preserving techniques, secure deletion protocols, and infrastructure controls round out the recommendations.

Rather than treating AI as a standalone tool, the guidance calls for embedding strong data governance and security throughout its lifecycle to prevent compromised systems from shaping critical outcomes.


UK National Cyber Security Centre calls for strategic cybersecurity policy agenda

The United Kingdom’s National Cyber Security Centre (NCSC), part of GCHQ, has called for the adoption of a long-term, strategic policy agenda to address increasing cybersecurity risks. That appeal follows prolonged delays in the introduction of updated cybersecurity legislation by the UK government.

In a blog post co-authored by Ollie Whitehouse, the NCSC’s Chief Technology Officer, and Paul W., its Principal Technical Director, the agency underscored the need for more political engagement in shaping the country’s cybersecurity landscape. Although the NCSC does not possess policymaking powers, its latest message highlights its growing concern over the UK’s limited progress in implementing comprehensive cybersecurity reforms.

Whitehouse has previously argued that the current technology market fails to incentivise the development and maintenance of secure digital products. He asserts that while the technical community knows how to build secure systems, commercial pressures and market conditions often favour speed, cost-cutting, and short-term gains over security. That, he notes, is a structural issue that cannot be resolved through voluntary best practices alone and likely requires legislative and regulatory measures.

The UK government has yet to introduce the long-anticipated Cyber Security and Resilience Bill to Parliament. Initially described by its predecessor as a step toward modernising the country’s cyber legislation, the bill remains unpublished. Another delayed effort is a consultation led by the Home Office on ransomware response policy, which was postponed due to the snap election and is still awaiting an official government response.

The agency’s call mirrors similar debates in the United States, where former Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly advocated for holding software vendors accountable for product security. The Biden administration’s national cybersecurity strategy introduced early steps toward vendor liability, a concept that has gained traction among experts like Whitehouse.

However, the current US administration under President Trump has since rolled back some of these requirements, most notably through a recent executive order eliminating obligations for government contractors to attest to their products’ security.

By contrast, the European Union has advanced several legislative initiatives aimed at strengthening digital security, including the Cyber Resilience Act. Yet, these efforts face challenges of their own, such as reconciling economic priorities with cybersecurity requirements and adapting EU-wide standards to national legal systems.

In its blog post, the NCSC reiterated that the financial and societal burden of cybersecurity failures is currently borne by consumers, governments, insurers, and other downstream actors. The agency argues that addressing these issues requires a reassessment of underlying market dynamics—particularly those that do not reward secure development practices or long-term resilience.

While the NCSC lacks the authority to enforce regulations, its increasingly direct communications reflect a broader shift within parts of the UK’s cybersecurity community toward advocating for more comprehensive policy intervention.


Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom’, malware designed to intercept and manipulate a user’s internet traffic rather than merely infect the device.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down. However, the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require technical setup, typically involving multiple configuration steps; they are not distributed as a simple Windows installer.

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.


Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudify’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.


Crypto conferences face rising phishing risks

Crypto events have grown rapidly worldwide in recent years. Unfortunately, this expansion has led to an increase in scams targeting attendees, according to Kraken’s chief security officer, Nick Percoco.

Recent conferences have seen lax personal security, with exposed devices and careless sharing of sensitive information. These lapses make it easier for criminals to launch phishing campaigns and impersonation attacks.

Phishing remains the top threat at these events, exploiting typical conference activities such as QR code scanning and networking. Attackers distribute malicious links disguised as legitimate follow-ups, allowing them to gain access to wallets and sensitive data with minimal technical skill.

Use of public Wi-Fi, unverified QR codes, and openly discussing high-value trades in public areas further increase risks. Attendees are urged to use burner wallets and verify every QR code carefully.
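Verifying a QR code in practice means checking the host of the decoded URL against domains you already trust before opening it. A minimal sketch (the allowlist and URLs below are illustrative; note that a naive substring check would be fooled by lookalike hosts such as `kraken.com.evil.example`):

```python
from urllib.parse import urlparse

def is_trusted(url, allowed_hosts):
    """Return True only if the URL's host is an allowed domain
    or a genuine subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == h or host.endswith("." + h) for h in allowed_hosts)
```

The same check applies to follow-up links received after networking at an event: inspect the real destination host, not just how the link is labelled.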

The dangers have become very real, highlighted by violent crimes in France, where prominent crypto professionals were targeted in kidnappings and ransom demands. These incidents show that risks are no longer confined to the digital world.

Basic security mistakes such as leaving devices unlocked or oversharing personal information can have severe consequences. Experts call for a stronger security culture at events and beyond, including multi-factor authentication, cautious password management, and heightened situational awareness.


Hackers target recruiters with fake CVs and malware

A financially driven hacking group known as FIN6 has reversed the usual job scam model by targeting recruiters instead of job seekers. Using realistic LinkedIn and Indeed profiles, the attackers pose as candidates and send malware-laced CVs hosted on reputable cloud platforms.

Because the CVs contain no clickable links, recruiters are asked to type the resume URLs in manually, bypassing email security tools. These URLs lead to fake portfolio sites hosted on Amazon Web Services that selectively deliver malware to users who pass as humans.

Victims receive a zip file containing a disguised shortcut that installs the more_eggs malware, which is capable of credential theft and remote access.

The JavaScript-based tool, linked to another group known as Venom Spider, uses legitimate Windows utilities to evade detection.

The campaign includes stealthy techniques such as traffic filtering, living-off-the-land binaries, and persistent registry modifications. Domains used include those mimicking real names, allowing attackers to gain trust while launching a powerful phishing operation.
