DeepSeek searches soar after ChatGPT outage

ChatGPT users faced widespread disruption on 10 June 2025 after a global outage hit OpenAI’s services, affecting both the chatbot and associated APIs. OpenAI has yet to confirm the cause, stating only that users were experiencing high error rates and delays.

The blackout halted work for many creative teams who rely on the tool to generate content and meet deadlines. While some were stalled, others turned to alternatives, sparking a surge in interest in rival AI chatbots.

Searches for DeepSeek, a Chinese-developed AI model, jumped 109% to over 2.1 million on the day of the outage. Claude AI saw a 95% increase in queries, while interest in Google Gemini and Microsoft Copilot also spiked significantly.

Industry experts say the incident underscores the risk of overdependence on a single platform and highlights the growing maturity of competing AI tools. While frustrating for many, the disruption appears to be fuelling broader competition and diversification in the generative AI market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sam Altman says GPT-4o demand overwhelmed OpenAI’s GPU supply

OpenAI faced significant infrastructure strain after its GPT-4o image generator went viral for producing Ghibli-style memes. The viral surge added a million new users in under an hour, putting immense pressure on the company’s systems.

CEO Sam Altman admitted that OpenAI had to slow feature rollouts and borrow computing power from its research division to keep the service running. The platform temporarily introduced rate limits as it coped with overloaded GPUs.

Altman described the situation as unprecedented, saying no other company has had to manage such intense viral spikes. He noted that image generation with GPT-4o requires significant compute resources, which the company could not fully meet with its current GPU inventory.

Despite the challenges, Altman maintained that OpenAI is committed to managing high user demand while continuing development. The company is also considering watermarking the AI images created by free users to help manage scale and traceability.

Google pushes users to move away from passwords

Google urges users to move beyond passwords, citing widespread reuse and vulnerability to phishing attacks. The company is now promoting alternatives like passkeys and social sign-ins as more secure and user-friendly options.

Data from Google shows that half of users reuse passwords, while the rest either memorise or write them down. Gen Z is leading the shift and is significantly more likely to adopt passkeys and social logins than older generations.

Passkeys, stored on user devices, eliminate traditional password input and reduce phishing risks by relying on biometrics or device PINs for authentication. However, limited app support and difficulty syncing across devices remain barriers to broader adoption.

Google highlights that while social sign-ins offer convenience, they come with privacy trade-offs by giving large companies access to more user activity data. Users still relying on passwords are advised to adopt app-based two-factor authentication over SMS or email, which are far less secure.
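The app-based two-factor authentication recommended above typically means time-based one-time passwords (TOTP), the mechanism behind authenticator apps: codes are derived locally from a shared secret and the current time, so there is no SMS or email channel to intercept. A minimal sketch of the standard RFC 6238 derivation (an illustration, not any vendor's implementation; the secret shown in the usage note is the RFC's own test key):

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, at=None, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time password (RFC 6238, built on RFC 4226 HOTP)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of `step`-second intervals since the Unix epoch.
    counter = int(at if at is not None else time.time()) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

For example, with the RFC 6238 test secret (`GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`), `totp(secret, at=59, digits=8)` yields the published test vector `94287082`. Because the code depends only on the secret and the clock, a phisher who intercepts one code gains at most a 30-second window, unlike a reusable password.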

Google Gemini now summarizes PDFs with actionable prompts in Drive

Google is expanding Gemini’s capabilities by allowing the AI assistant to summarize PDF documents directly in Google Drive—and it’s doing more than just generating summaries.

Users will now see clickable suggestions like drafting proposals or creating interview questions based on resume content, making Gemini a more proactive productivity tool.

The update builds on earlier Gemini integrations in Drive, which already surface pop-up summaries and action prompts when a PDF is opened.

Users with smart features and personalization turned on will notice a new preview window interface, eliminating the need to open a separate tab.

Gemini’s PDF summaries are now available in over 20 languages and will gradually roll out over the next two weeks.

The feature supports personal and business accounts, including Business Standard/Plus users, Enterprise tiers, Gemini Education, and Google AI Pro and Ultra plans.

UK remote work still a major data security risk

A new survey reveals that 69% of UK companies reported data breaches to the Information Commissioner’s Office (ICO) over the past year, a steep rise from 53% in 2024.

The research conducted by Apricorn highlights that nearly half of remote workers knowingly compromised data security.

Based on responses from 200 UK IT security leaders, the study found that phishing remains the leading cause of breaches, followed by human error. Despite widespread remote work policies, 58% of organisations believe staff lack the proper tools or skills to protect sensitive data.

The use of personal devices for work has climbed to 56%, while only 19% of firms now mandate company-issued hardware. These trends raise ongoing concerns about endpoint security, data visibility, and GDPR compliance in hybrid work environments.

Technical support gaps and unclear encryption practices remain pressing issues, with nearly half of respondents finding it increasingly difficult to manage remote work technology. Apricorn’s Jon Fielding called for a stronger link between written policy and practical security measures to reduce breaches.

Santa Clara offers AI training with Silicon Valley focus

Santa Clara University has launched a new master’s programme in AI designed to equip students with technical expertise and ethical insight.

The interdisciplinary degree, offered through the School of Engineering, blends software and hardware tracks to address the growing need for professionals who can manage AI systems responsibly.

The course offers two concentrations: one focusing on algorithms and computation for computer science students and another tailored to engineering students interested in robotics, devices, and AI chip design. Students will also engage in real-world practicums with Silicon Valley companies.

Faculty say the programme integrates ethical training into its core, aiming to produce graduates who can develop intelligent technologies with social awareness. As AI tools increasingly shape society and education, the university hopes to prepare students for both innovation and accountability.

Professor Yi Fang, director of the Responsible AI initiative, said students will leave with a deeper understanding of AI’s societal impact. The initiative reflects a broader trend in higher education, where demand for AI-related skills continues to rise.

Meta bets big on AI, partners with Scale AI in strategic move

Meta Platforms has made a major move in the AI space by investing $14.8 billion in Scale AI, acquiring a 49% stake and pushing the data-labelling startup’s valuation past $29 billion.

As part of the deal, Scale AI founder Alexandr Wang will join Meta’s leadership to head its new superintelligence unit, while continuing to serve on Scale AI’s board. The investment deepens Meta’s commercial ties with Scale and is seen as a strategic step to secure top-tier AI expertise.

Scale AI will use the funds to drive innovation and strengthen client partnerships, while also providing partial liquidity to shareholders and equity holders. Jason Droege, Scale’s Chief Strategy Officer and former Uber Eats executive, will serve as interim CEO.

‘This partnership is a testament to our team’s work and the scale of opportunity ahead,’ said Droege. Wang added, ‘Meta’s investment affirms the limitless path forward for AI and Scale’s role in bridging human values with transformative technologies.’

Scale will remain independent, continuing to support AI labs, corporations, and government agencies with data infrastructure as the race for AI dominance intensifies.

UK health sector adopts AI while legacy tech lags

The UK’s healthcare sector has rapidly embraced AI, with adoption rising from 47% in 2024 to 94% in 2025, according to SOTI’s new report ‘Healthcare’s Digital Dilemma’.

AI is no longer confined to administrative tasks, as 52% of healthcare professionals now use it for diagnosis and 57% to personalise treatments. SOTI’s Stefan Spendrup said AI is improving how care is delivered and helping clinicians make more accurate, patient-specific decisions.

However, outdated systems continue to hamper progress. Nearly all UK health IT leaders report challenges from legacy infrastructure, Internet of Things (IoT) tech and telehealth tools.

While connected devices are widely used to support patients remotely, 73% rely on outdated, unintegrated systems, significantly higher than the global average of 65%.

These systems limit interoperability and heighten security risks, with 64% experiencing regular tech failures and 43% citing network vulnerabilities.

The strain on IT teams is evident. Nearly half report being unable to deploy or manage new devices efficiently, and more than half struggle to offer remote support or access detailed diagnostics. Time lost to troubleshooting remains a common frustration.

The UK appears more affected by these challenges than other countries surveyed, indicating a pressing need to modernise infrastructure instead of continuing to patch ageing technology.

While data security remains the top IT concern in UK healthcare, the share of IT teams treating it as a priority has fallen from 33% in 2024 to 24% in 2025, even as reported data breaches rose sharply from 71% to 84%.

Spendrup warned that innovation risks being undermined unless the sector rebalances priorities, with more focus on securing systems and replacing legacy tools instead of delaying necessary upgrades.

Bitcoin price climbs as Google searches drop

Bitcoin has surged to around $107,000, close to its all-time high, yet global search interest has dropped to a five-year low. While past price jumps were matched by public curiosity, current data suggests a notable lack of retail attention.

Analysts believe the trend reflects a shift in how Bitcoin is perceived. No longer a fringe phenomenon, the cryptocurrency has matured into a mainstream asset.

Institutional investors, ETFs, and even governments are now the driving force behind Bitcoin’s momentum, with companies such as Ark Invest and Metaplanet continuing to increase their holdings.

Bitwise CEO Hunter Horsley noted the rally appears quieter because corporate players are accumulating Bitcoin strategically, unlike the hype-fuelled surges of previous cycles. Meanwhile, retail interest may be shifting to flashier sectors such as AI tokens and memecoins.

Falling search traffic may signal that Bitcoin has entered a more stable phase. Rather than trending online, it is now being treated as a serious long-term investment — a possible sign of growing market maturity.

NSA and allies set AI data security standards

The National Security Agency (NSA), in partnership with cybersecurity agencies from the UK, Australia, New Zealand, and others, has released new guidance aimed at protecting the integrity of data used in AI systems.

The Cybersecurity Information Sheet (CSI), titled ‘AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems’, outlines emerging threats and sets out 10 recommendations for mitigating them.

The CSI builds on earlier joint guidance from 2024 and signals growing global urgency around safeguarding AI data instead of allowing systems to operate without scrutiny.

The report identifies three core risks across the AI lifecycle: tampered datasets in the supply chain, deliberately poisoned data intended to manipulate models, and data drift—where changes in data over time reduce performance or create new vulnerabilities.

These threats may erode accuracy and trust in AI systems, particularly in sensitive areas like defence, cybersecurity, and critical infrastructure, where even small failures could have far-reaching consequences.
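Of the three risks, data drift is the one teams can monitor continuously in production. The CSI does not prescribe a method, but a common minimal approach (sketched here under that assumption) is to flag a batch whose feature mean shifts by more than a few baseline standard deviations:

```python
import statistics


def drift_alert(baseline, current, threshold: float = 3.0) -> bool:
    """Flag drift when the current batch mean moves more than `threshold`
    baseline standard deviations away from the baseline mean.

    `baseline` and `current` are sequences of numeric feature values;
    the 3-sigma default threshold is an illustrative choice, not a standard.
    """
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline) or 1e-12  # guard against zero variance
    shift = abs(statistics.fmean(current) - mu) / sigma
    return shift > threshold
```

Real deployments would track many features and use distribution-level tests rather than a single mean, but even this simple check turns silent degradation into an explicit alert.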

To reduce these risks, the CSI recommends a layered approach—starting with sourcing data from reliable origins and tracking provenance using digital credentials. It advises encrypting data at every stage, verifying integrity with cryptographic tools, and storing data securely in certified systems.
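The provenance-and-integrity steps above can be sketched in a few lines: fingerprint each dataset file with SHA-256 and keep an authenticated provenance record alongside it. This is an assumed minimal illustration (using an HMAC tag where a production system would use a full digital-signature scheme and certified storage), not code from the guidance:

```python
import hashlib
import hmac
import json


def fingerprint(path) -> str:
    """SHA-256 digest of a dataset file, streamed in 64 KiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


def sign_record(path, key: bytes) -> dict:
    """Provenance record (file name + digest) authenticated with an HMAC tag."""
    record = {"file": str(path), "sha256": fingerprint(path)}
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record


def verify_record(path, key: bytes, record: dict) -> bool:
    """True only if the record is authentic AND the file is unmodified."""
    payload = json.dumps(
        {"file": record["file"], "sha256": record["sha256"]}, sort_keys=True
    ).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["tag"], expected) and (
        fingerprint(path) == record["sha256"]
    )
```

Any tampering with the dataset after signing (the supply-chain and poisoning risks above) changes the digest and causes verification to fail before the data reaches training.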

Additional measures include deploying zero trust architecture, using digital signatures for dataset updates, and applying access controls based on data classification instead of relying on broad administrative trust.

The CSI also urges ongoing risk assessments using frameworks like NIST’s AI RMF, encouraging organisations to anticipate emerging challenges such as quantum threats and advanced data manipulation.

Privacy-preserving techniques, secure deletion protocols, and infrastructure controls round out the recommendations.

Rather than treating AI as a standalone tool, the guidance calls for embedding strong data governance and security throughout its lifecycle to prevent compromised systems from shaping critical outcomes.
