NSA and allies set AI data security standards

The National Security Agency (NSA), in partnership with cybersecurity agencies from the UK, Australia, New Zealand, and others, has released new guidance aimed at protecting the integrity of data used in AI systems.

The Cybersecurity Information Sheet (CSI), titled AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems, outlines emerging threats and sets out 10 recommendations for mitigating them.

The CSI builds on earlier joint guidance from 2024 and signals growing global urgency around safeguarding AI data instead of allowing systems to operate without scrutiny.

The report identifies three core risks across the AI lifecycle: tampered datasets in the supply chain, deliberately poisoned data intended to manipulate models, and data drift—where changes in data over time reduce performance or create new vulnerabilities.

These threats may erode accuracy and trust in AI systems, particularly in sensitive areas like defence, cybersecurity, and critical infrastructure, where even small failures could have far-reaching consequences.

To reduce these risks, the CSI recommends a layered approach—starting with sourcing data from reliable origins and tracking provenance using digital credentials. It advises encrypting data at every stage, verifying integrity with cryptographic tools, and storing data securely in certified systems.

Additional measures include deploying zero trust architecture, using digital signatures for dataset updates, and applying access controls based on data classification instead of relying on broad administrative trust.
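
One of the recommendations above, verifying dataset integrity with cryptographic tools, can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions, not code from the CSI: the function names and the idea of recording a digest at sourcing time are ours, and a real deployment would pair this with proper digital signatures and key management.

```python
import hashlib
import hmac

def dataset_digest(path: str) -> str:
    """Compute a SHA-256 digest of a dataset file, streaming in chunks
    so large training sets do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_integrity(path: str, expected_digest: str) -> bool:
    """Compare the file's current digest against one recorded when the
    dataset was sourced; constant-time comparison avoids timing leaks."""
    return hmac.compare_digest(dataset_digest(path), expected_digest)
```

Recording the digest alongside provenance metadata at ingestion time means any later tampering in the supply chain changes the hash and fails this check.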

The CSI also urges ongoing risk assessments using frameworks like NIST’s AI RMF, encouraging organisations to anticipate emerging challenges such as quantum threats and advanced data manipulation.

Privacy-preserving techniques, secure deletion protocols, and infrastructure controls round out the recommendations.

Rather than treating AI as a standalone tool, the guidance calls for embedding strong data governance and security throughout its lifecycle to prevent compromised systems from shaping critical outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom,’ malware designed to intercept and manipulate a user’s internet traffic instead of merely infecting the device.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down. However, the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require technical setup, typically involving multiple configuration steps, instead of a simple Windows installer.

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.
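
The exact-match check implied here can be made concrete: lookalike domains such as the one used in this campaign pass a casual glance but fail a strict host comparison. A small sketch, with the caveat that the allowlisted hosts below are illustrative assumptions, not an official list of DeepSeek domains:

```python
from urllib.parse import urlsplit

# Assumed official hosts for illustration only -- always confirm the
# real list from the vendor's verified channels.
OFFICIAL_HOSTS = {"deepseek.com", "www.deepseek.com"}

def is_official(url: str) -> bool:
    """Exact host match only: lookalikes such as 'deepseek-platform.com'
    fail even though they contain the brand name. The URL must include
    a scheme (e.g. https://) for the host to be parsed."""
    host = urlsplit(url).hostname or ""
    return host.lower() in OFFICIAL_HOSTS
```

Substring checks ("deepseek" in the URL) are exactly what typosquatters exploit; an allowlist of full hostnames closes that gap.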

Meta’s V-JEPA 2 teaches AI to think, plan, and act in 3D space

Meta has released V-JEPA 2, an open-source AI model designed to understand and predict real-world environments in 3D. Described as a ‘world model’, it enables machines to simulate physical spaces—offering a breakthrough for robotics, self-driving cars, and intelligent assistants.

Unlike traditional AI that relies on labelled data, V-JEPA 2 learns from unlabelled video clips, building an internal simulation of how the world works. As a result, AI can reason, plan, and act more like humans.

Based on Meta’s JEPA architecture and containing 1.2 billion parameters, the model improves significantly on action prediction and environmental modelling compared to its predecessor.

Meta says this approach mirrors how humans intuitively understand cause and effect—like predicting a ball’s motion or avoiding people in a crowd. V-JEPA 2 helps AI agents develop this same intuition, making them more adaptive in dynamic, unfamiliar situations.

Meta’s Chief AI Scientist Yann LeCun describes world models as ‘abstract digital twins of reality’—vital for machines to understand and predict what comes next. This effort aligns with Meta’s broader push into AI, including a planned $14 billion investment in Scale AI for data labelling.

V-JEPA 2 joins a growing wave of interest in world models. Google DeepMind is building its own called Genie, while AI researcher Fei-Fei Li recently raised $230 million for her startup World Labs, focused on similar goals.

Meta believes V-JEPA 2 brings us closer to machines that can learn, adapt, and operate in the physical world with far greater autonomy and intelligence.

CoreWeave expands AI infrastructure with Google tie‑up

CoreWeave has secured a pivotal role in Google Cloud’s new infrastructure partnership with OpenAI. The specialist GPU cloud provider will supply Nvidia‑based compute resources to Google, which will allocate them to OpenAI to support the rising demand for services like ChatGPT.

Already under an $11.9 billion, five‑year contract with OpenAI and backed by a $350 million equity investment, CoreWeave recently expanded the deal further.

Adding Google Cloud as a customer helps the company diversify beyond Microsoft, its top client in 2024.

The arrangement positions Google as a neutral provider of AI computing power amid fierce competition with Amazon and Microsoft.

CoreWeave’s stock has surged over 270 percent since its March IPO, illustrating investor confidence in its expanding role in the AI infrastructure boom.

Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudity’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.

AI traffic wars: ChatGPT dominates, Gemini and Claude lag behind

ChatGPT has cemented its position as the world’s leading AI assistant, racking up 5.5 billion visits in May 2025 alone—roughly 80% of all global generative AI traffic. That’s more than double the combined total of Google’s Gemini, DeepSeek, Grok, Perplexity, and Claude.

With over 500 million weekly active users and a mobile app attracting 250 million monthly users last autumn, ChatGPT has become the default AI tool for hundreds of millions globally.

Despite a brief dip in early 2025, OpenAI quickly recovered. Its partnership with Microsoft helped, but the decisive factor is that ChatGPT simply works well for the average user.

While other platforms chase benchmark scores and academic praise, ChatGPT has focused on accessibility and usefulness—qualities that have proven decisive.

Some competitors have made surprising gains. Chinese start-up DeepSeek saw explosive growth, from 33.7 million users in January to 436 million visits by May.

[Chart: traffic comparison of ChatGPT/OpenAI, Claude, Gemini, Grok, Perplexity, and DeepSeek]

Operating at a fraction of the cost of Western rivals—and relying on older Nvidia chips—DeepSeek is growing rapidly in Asia, particularly in China, India, and Indonesia.

Meanwhile, despite integration across its platforms, Google’s Gemini lags behind with 527 million visits, and Claude, backed by Amazon and Google, is barely breaking 100 million despite high scores in reasoning tasks.

The broader impact of AI’s rise is reshaping the internet. Legacy platforms like Chegg, Quora, and Fiverr are losing traffic fast, while tools focused on code completion, voice generation, and automation are gaining traction.

In the race for adoption, OpenAI has already won. For the rest of the industry, the fight is no longer for first place—but for who finishes next.

AI video tool Veo 3 Fast rolls out for Gemini and Flow users

Google has introduced Veo 3 Fast, a speedier version of its AI video-generation tool that promises to cut production time in half.

Now available to Gemini Pro and Flow Pro users, the updated model creates 720p videos more than twice as fast as its predecessor—marking a step forward in scaling Google’s video AI infrastructure.

Gemini Pro subscribers can now generate three Veo 3 Fast videos daily as part of their plan. Meanwhile, Flow Pro users can create videos using 20 credits per clip, significantly reducing costs compared to previous models. Gemini Ultra subscribers enjoy even more generous limits under their premium tier.

The upgrade is more than a performance boost. According to Google’s Josh Woodward, the improved infrastructure also paves the way for smoother playback and better subtitles—enhancements that aim to make video creation more seamless and accessible.

Google is also testing voice prompt capabilities, allowing users to speak their video ideas and watch them materialise on-screen.

Although Veo 3 Fast is currently limited to 720p resolution, it encourages creativity through rapid iteration. Users can experiment with prompts and edits without having to perfect their first attempt.

While the results won’t rival Hollywood, the model opens up new possibilities for businesses, creators, and filmmakers looking to prototype video ideas or quickly produce content without traditional filming.

Czechia bids to host major EU AI computing centre

Czechia is positioning itself to host one of the European Union’s planned AI ‘gigafactories’—large-scale computing centres designed to strengthen Europe’s AI capabilities and reduce dependence on global powers like the United States.

Jan Kavalírek, the Czech government’s AI envoy, confirmed to the Czech News Agency that talks with a private investor are progressing well and potential locations have already been identified.

While the application for the EU funding is not yet final, Kavalírek said, ‘We are very close.’ The EU has allocated around €20 billion for these AI infrastructure projects, with significant contributions also expected from private sources.

Germany and Denmark are also vying to host similar facilities. If successful, Czechia’s bid could transform the country into a key AI infrastructure hub for Europe, offering powerful computational resources for sectors such as public administration, healthcare, and finance.

Lukáš Benzl, director of the Czech Association of Artificial Intelligence, described the initiative as a potential ‘motor for the AI economy’ across the continent.

Canva makes AI use mandatory in coding interviews

Australian design giant Canva has revamped its technical interview process to reflect modern software development, requiring job candidates to demonstrate their ability to use AI coding assistants.

The shift aims to better assess how candidates would perform on the job, where tools like Copilot and Claude are already part of engineers’ daily workflows.

Previously, interviews focused on coding fundamentals without assistance. Now, candidates must solve engineering problems using AI tools in ways that reflect real-world scenarios, demanding effective prompting and judgement rather than simply getting correct outputs.

The change follows internal experiments where Canva found that AI could easily handle traditional interview questions. Company leaders argue that the old approach no longer measured actual job readiness, given that many engineers rely on AI to navigate codebases and accelerate prototyping.

By integrating AI into hiring, Canva joins many firms that are adapting to a tech workforce increasingly shaped by intelligent automation. The company says the goal is not to test if candidates know how to use AI but how well they use it to build solutions.

Nvidia and Samsung invest in Skild AI, boosting robotics innovation

Nvidia and Samsung are joining a major Series B funding round for Skild AI, a robotics software start-up, with investments of $25 million and $10 million, respectively.

According to Bloomberg, the round, led by SoftBank with a $100 million commitment, is expected to value the company at approximately $4.5 billion.

Skild AI develops foundation models and software designed for various robotic systems, from consumer devices to industrial machines. The company previously raised $300 million in Series A funding in 2023, when it was valued at $1.5 billion.

Samsung’s latest investment reinforces its growing focus on robotics. Earlier this year, it became the largest shareholder in South Korea-based Rainbow Robotics, which is known for its collaborative robots. The company also operates a Future Robotics Office to steer strategic innovation.

For Nvidia, the investment aligns with broader efforts in AI and automation. In March, the chipmaker partnered with General Motors to co-develop AI systems that train next-generation manufacturing models for use in vehicles, factories, and robotics.
