Bitcoin price climbs as Google searches drop

Bitcoin has surged to around $107,000, close to its all-time high, yet global search interest has dropped to a five-year low. While past price jumps were matched by public curiosity, current data suggests a notable lack of retail attention.
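
For readers who want to check the search-interest claim themselves, here is a minimal sketch using the unofficial pytrends client for Google Trends; the library choice and parameters are assumptions, and any Trends wrapper would do the same job.

```python
# Minimal sketch: pull five years of worldwide Google Trends interest for 'Bitcoin'
# using the unofficial pytrends client (pip install pytrends).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["Bitcoin"], timeframe="today 5-y")  # last five years, worldwide

interest = pytrends.interest_over_time()  # weekly scores normalised to 0-100
latest = interest["Bitcoin"].iloc[-1]
print(f"Latest interest score: {latest} (5-year peak: {interest['Bitcoin'].max()})")
```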

Analysts believe the trend reflects a shift in how Bitcoin is perceived. No longer a fringe phenomenon, the cryptocurrency has matured into a mainstream asset.

Institutional investors, ETFs, and even governments are now the driving force behind Bitcoin’s momentum, with companies such as Ark Invest and Metaplanet continuing to increase their holdings.

Bitwise CEO Hunter Horsley noted the rally appears quieter because corporate players are accumulating Bitcoin strategically, unlike the hype-fuelled surges of previous cycles. Meanwhile, retail interest may be shifting to flashier sectors such as AI tokens and memecoins.

Falling search traffic may signal that Bitcoin has entered a more stable phase. Rather than trending online, it is now being treated as a serious long-term investment — a possible sign of growing market maturity.

NSA and allies set AI data security standards

The National Security Agency (NSA), in partnership with cybersecurity agencies from the UK, Australia, New Zealand, and others, has released new guidance aimed at protecting the integrity of data used in AI systems.

The Cybersecurity Information Sheet (CSI), titled AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems, outlines emerging threats and sets out 10 recommendations for mitigating them.

The CSI builds on earlier joint guidance from 2024 and signals growing international urgency around safeguarding the data that AI systems are trained on and operate with, rather than allowing those systems to run without scrutiny.

The report identifies three core risks across the AI lifecycle: tampered datasets in the supply chain, deliberately poisoned data intended to manipulate models, and data drift—where changes in data over time reduce performance or create new vulnerabilities.
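
As a concrete illustration of the third risk, the sketch below flags data drift by comparing one production feature against its training baseline with a two-sample Kolmogorov-Smirnov test; the feature values and the 0.05 threshold are assumptions made for this example, not part of the CSI.

```python
# Minimal sketch: detect data drift by comparing a live feature's distribution
# against the training baseline with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def drifted(train_feature: np.ndarray, live_feature: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the live data looks drawn from a different distribution."""
    _statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha

# Toy usage: the live data is shifted, so the check should report drift.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
production = rng.normal(loc=0.6, scale=1.0, size=5_000)
print("Drift detected:", drifted(baseline, production))
```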

These threats may erode accuracy and trust in AI systems, particularly in sensitive areas like defence, cybersecurity, and critical infrastructure, where even small failures could have far-reaching consequences.

To reduce these risks, the CSI recommends a layered approach—starting with sourcing data from reliable origins and tracking provenance using digital credentials. It advises encrypting data at every stage, verifying integrity with cryptographic tools, and storing data securely in certified systems.
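
As a sketch of what the provenance and integrity advice can look like in code, the example below records a SHA-256 digest for each data file in a simple manifest and re-verifies the files before they are used; the file layout and manifest format are assumptions, not part of the CSI.

```python
# Minimal sketch: record SHA-256 digests for dataset files in a manifest,
# then verify them before the data is used to train or update a model.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(data_dir: Path, manifest: Path) -> None:
    entries = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))}
    manifest.write_text(json.dumps(entries, indent=2))

def verify_manifest(data_dir: Path, manifest: Path) -> bool:
    expected = json.loads(manifest.read_text())
    return all(sha256_of(data_dir / name) == digest for name, digest in expected.items())

# Usage: call write_manifest() when data is ingested, and refuse to train
# if verify_manifest() later returns False.
```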

Additional measures include deploying zero trust architecture, using digital signatures for dataset updates, and applying access controls based on data classification instead of relying on broad administrative trust.
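
To illustrate the dataset-update recommendation, here is a minimal sketch using Ed25519 signatures from the Python cryptography package; key handling is deliberately simplified, and in practice the signing key would sit in an HSM or key-management service rather than in application code.

```python
# Minimal sketch: sign a dataset update with Ed25519 and verify it before ingestion.
# Key handling is simplified for illustration; production keys belong in an HSM or KMS.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

update = b"rows appended to training set, batch 2025-06-12"
signature = private_key.sign(update)  # distributed alongside the update

try:
    public_key.verify(signature, update)  # consumers check before ingesting
    print("Dataset update accepted")
except InvalidSignature:
    print("Dataset update rejected: signature check failed")
```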

The CSI also urges ongoing risk assessments using frameworks like NIST’s AI RMF, encouraging organisations to anticipate emerging challenges such as quantum threats and advanced data manipulation.

Privacy-preserving techniques, secure deletion protocols, and infrastructure controls round out the recommendations.

Rather than treating AI as a standalone tool, the guidance calls for embedding strong data governance and security throughout its lifecycle to prevent compromised systems from shaping critical outcomes.

Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom’, malware designed not merely to infect the device but to intercept and manipulate the user’s internet traffic.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down. However, the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require a technical setup involving multiple configuration steps rather than a single Windows installer.
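
Where a project does publish an official checksum, a minimal sketch of verifying a download before running it might look like the following; the file name and digest are placeholders, not values published by DeepSeek.

```python
# Minimal sketch: compare a downloaded file's SHA-256 digest against the value
# published on the project's official site before executing it.
import hashlib
from pathlib import Path

def matches_published_digest(path: Path, expected_sha256: str) -> bool:
    """Return True only if the file's digest matches the officially published one."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256.lower()

# Usage (placeholder values): only run the installer if this returns True.
# matches_published_digest(Path("downloaded_installer.exe"), "<digest from the official site>")
```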

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.

Meta’s V-JEPA 2 teaches AI to think, plan, and act in 3D space

Meta has released V-JEPA 2, an open-source AI model designed to understand and predict real-world environments in 3D. Described as a ‘world model’, it enables machines to simulate physical spaces—offering a breakthrough for robotics, self-driving cars, and intelligent assistants.

Unlike traditional AI that relies on labelled data, V-JEPA 2 learns from unlabelled video clips, building an internal simulation of how the world works. As a result, AI agents can reason, plan, and act more like humans.

Based on Meta’s JEPA architecture and containing 1.2 billion parameters, the model improves significantly on action prediction and environmental modelling compared to its predecessor.

Meta says this approach mirrors how humans intuitively understand cause and effect—like predicting a ball’s motion or avoiding people in a crowd. V-JEPA 2 helps AI agents develop this same intuition, making them more adaptive in dynamic, unfamiliar situations.
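
Conceptually, an agent equipped with a world model plans by imagining how each candidate action would play out and picking the one whose predicted outcome scores best. The sketch below is purely illustrative; DummyWorldModel and score are hypothetical stand-ins, not V-JEPA 2's actual interface.

```python
# Illustrative sketch of planning with a learned world model.
# DummyWorldModel and score() are hypothetical placeholders, not V-JEPA 2's real API.
class DummyWorldModel:
    """Stands in for a learned model that predicts the next state given an action."""
    def predict(self, state: float, action: float) -> float:
        return state + action  # a real model would roll the world forward in latent space

def score(state: float) -> float:
    return -abs(state - 10.0)  # prefer imagined outcomes close to a goal value of 10

def plan(world_model, current_state, candidate_actions, horizon=5):
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        state = current_state
        for _ in range(horizon):
            state = world_model.predict(state, action)  # imagine the rollout
        value = score(state)
        if value > best_score:
            best_action, best_score = action, value
    return best_action

print(plan(DummyWorldModel(), current_state=0.0, candidate_actions=[0.5, 1.0, 2.0, 3.0]))
```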

Meta’s Chief AI Scientist Yann LeCun describes world models as ‘abstract digital twins of reality’—vital for machines to understand and predict what comes next. This effort aligns with Meta’s broader push into AI, including a planned $14 billion investment in Scale AI for data labelling.

V-JEPA 2 joins a growing wave of interest in world models. Google DeepMind is building its own called Genie, while AI researcher Fei-Fei Li recently raised $230 million for her startup World Labs, focused on similar goals.

Meta believes V-JEPA 2 brings us closer to machines that can learn, adapt, and operate in the physical world with far greater autonomy and intelligence.

Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudify’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.

Vitalik unveils Lean Ethereum for post-quantum protection

Ethereum developers have revealed a ‘Lean Ethereum’ roadmap that seeks to simplify the blockchain’s base layer while preparing it for post-quantum security. The proposal was discussed by co-founder Vitalik Buterin and researcher Justin Drake during a Berlin conference session.

The plan prioritises three core goals: enhanced security through post-quantum signatures, reduced complexity in Ethereum’s structure, and improved efficiency to lower latency and costs.

Developers are already exploring four research tracks, including a three-step-finality protocol, quantum-resistant signatures, zero-knowledge virtual machines, and improved data layering through erasure coding.
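
To give a flavour of what quantum-resistant signatures involve, here is a minimal sketch of a Lamport one-time signature, a classic hash-based scheme whose security rests only on the underlying hash function; it is purely illustrative and not the scheme Ethereum would actually adopt.

```python
# Minimal sketch of a Lamport one-time signature, a hash-based construction often
# cited as a building block for post-quantum signatures. Illustrative only.
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(message: bytes, sk):
    digest = int.from_bytes(H(message), "big")
    # Reveal one secret per bit of the message hash.
    return [sk[i][(digest >> i) & 1] for i in range(256)]

def verify(message: bytes, signature, pk) -> bool:
    digest = int.from_bytes(H(message), "big")
    return all(H(signature[i]) == pk[i][(digest >> i) & 1] for i in range(256))

sk, pk = keygen()
sig = sign(b"lean ethereum", sk)
print(verify(b"lean ethereum", sig, pk))  # True; a Lamport key must never be reused
```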

Under the broader ‘lean’ concept, Ethereum may soon adopt lean staking, verifiability for low-power devices, and simplified cryptographic design. Modular logic and formal checks are part of the plan, aligned with zkEVM pilots and inclusion list development.

Although the roadmap doesn’t suggest an immediate upgrade, the Ethereum Foundation described it as a cohesive strategy that ties current innovation to long-term resilience. Core teams will prototype components and assess trade-offs in ongoing working group discussions.

AI traffic wars: ChatGPT dominates, Gemini and Claude lag behind

ChatGPT has cemented its position as the world’s leading AI assistant, racking up 5.5 billion visits in May 2025 alone—roughly 80% of all global generative AI traffic. That is more than double the combined total of Google’s Gemini, DeepSeek, Grok, Perplexity, and Claude.

With over 500 million weekly active users and a mobile app attracting 250 million monthly users last autumn, ChatGPT has become the default AI tool for hundreds of millions globally.

Despite a brief dip in early 2025, OpenAI quickly reversed course. Its partnership with Microsoft helped, but the bigger factor is that ChatGPT simply works well for the average user.

While other platforms chase benchmark scores and academic praise, ChatGPT has focused on accessibility and usefulness, qualities that have proven decisive.

Some competitors have made surprising gains. Chinese start-up DeepSeek saw explosive growth, from 33.7 million users in January to 436 million visits by May.

[Chart: visit numbers for ChatGPT/OpenAI, Claude, Gemini, Grok, Perplexity, and DeepSeek]

Operating at a fraction of the cost of Western rivals—and relying on older Nvidia chips—DeepSeek is growing rapidly in Asia, particularly in China, India, and Indonesia.

Meanwhile, despite integration across its platforms, Google’s Gemini lags behind with 527 million visits, and Claude, backed by Amazon and Google, is barely breaking 100 million despite high scores in reasoning tasks.

The broader impact of AI’s rise is reshaping the internet. Legacy platforms like Chegg, Quora, and Fiverr are losing traffic fast, while tools focused on code completion, voice generation, and automation are gaining traction.

In the race for adoption, OpenAI has already won. For the rest of the industry, the fight is no longer for first place—but for who finishes next.

Tether invests in Canadian gold company to strengthen its reserves

Tether has acquired nearly a third of a Canadian gold firm as part of its expanding dual investment strategy in Bitcoin and gold. It bought 78.4 million shares in Elemental Altus for CA$121.5 million, securing 31.9% of the company.

The company, best known for minting USDT, now holds over 100,000 Bitcoin and nearly 80 tons of physical gold. It describes this as a ‘dual pillar strategy’ designed to safeguard value and improve financial resilience amid rising inflation and monetary uncertainty.

Tether CEO Paolo Ardoino said Bitcoin and gold offer complementary protections, with the former serving as a decentralised hedge and the latter as a long-trusted store of value. The company has also issued XAUT, a stablecoin backed by gold, currently valued at over $833 million.

USDT continues to dominate global trading volumes and has gained popularity in emerging markets as a digital dollar alternative. Tether says holding gold and crypto strengthens its traditional and decentralised finance position.

Trump highlights crypto plans at Coinbase summit

US President Donald Trump sent a prerecorded message to Coinbase’s State of Crypto Summit, reaffirming his commitment to advancing crypto regulation in the US.

The administration is working with Congress to pass the GENIUS Act, which supports dollar-backed stablecoins and clear market frameworks.

Congress is preparing to vote on the GENIUS Act in the Senate, while the House is moving forward with the CLARITY Act. The latter seeks to clarify the regulatory roles of the SEC and the Commodity Futures Trading Commission concerning digital assets.

Both bills form part of a broader effort to create a clear legal environment for the crypto industry.

Some Democrats oppose Trump’s crypto ties, especially the family-backed stablecoin from World Liberty Financial. Despite tensions, Trump continues promoting his crypto agenda through conferences and videos.

Crypto conferences face rising phishing risks

Crypto events have grown rapidly worldwide in recent years. Unfortunately, this expansion has led to an increase in scams targeting attendees, according to Kraken’s chief security officer, Nick Percoco.

Recent conferences have seen lax personal security, with exposed devices and careless sharing of sensitive information. These lapses make it easier for criminals to launch phishing campaigns and impersonation attacks.

Phishing remains the top threat at these events, exploiting typical conference activities such as QR code scanning and networking. Attackers distribute malicious links disguised as legitimate follow-ups, allowing them to gain access to wallets and sensitive data with minimal technical skill.

Use of public Wi-Fi, unverified QR codes, and openly discussing high-value trades in public areas further increase risks. Attendees are urged to use burner wallets and verify every QR code carefully.
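
As a small illustration of that advice, the sketch below runs a few sanity checks on a URL decoded from a conference QR code before it is opened; the trusted-domain list is an assumption for the example, not an official allowlist.

```python
# Minimal sketch: sanity-check a URL decoded from a QR code before opening it.
# The trusted-domain list is an illustrative assumption, not an official allowlist.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"kraken.com", "coinbase.com"}  # example entries only

def looks_suspicious(url: str) -> list[str]:
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    warnings = []
    if parsed.scheme != "https":
        warnings.append("not served over HTTPS")
    if host.startswith("xn--"):
        warnings.append("punycode hostname (possible lookalike domain)")
    if not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
        warnings.append("domain not on your personal trusted list")
    return warnings

print(looks_suspicious("http://xn--krken-0ra.com/airdrop"))  # flags all three checks
```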

The dangers have become very real, highlighted by violent crimes in France, where prominent crypto professionals were targeted in kidnappings and ransom demands. These incidents show that risks are no longer confined to the digital world.

Basic security mistakes such as leaving devices unlocked or oversharing personal information can have severe consequences. Experts call for a stronger security culture at events and beyond, including multi-factor authentication, cautious password management, and heightened situational awareness.
