CoreWeave expands AI infrastructure with Google tie‑up

CoreWeave has secured a pivotal role in Google Cloud’s new infrastructure partnership with OpenAI. The specialist GPU cloud provider will supply Nvidia‑based compute resources to Google, which will allocate them to OpenAI to support the rising demand for services like ChatGPT.

Already under an $11.9 billion, five‑year contract with OpenAI and backed by a $350 million equity investment, CoreWeave recently expanded the deal further.

Adding Google Cloud as a customer helps the company diversify beyond Microsoft, its top client in 2024.

The arrangement positions Google as a neutral provider of AI computing power amid fierce competition with Amazon and Microsoft.

CoreWeave’s stock has surged over 270 percent since its March IPO, illustrating investor confidence in its expanding role in the AI infrastructure boom.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudity’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.


AI video tool Veo 3 Fast rolls out for Gemini and Flow users

Google has introduced Veo 3 Fast, a speedier version of its AI video-generation tool that promises to cut production time in half.

Now available to Gemini Pro and Flow Pro users, the updated model creates 720p videos more than twice as fast as its predecessor—marking a step forward in scaling Google’s video AI infrastructure.

Gemini Pro subscribers can now generate three Veo 3 Fast videos daily as part of their plan. Meanwhile, Flow Pro users can create videos using 20 credits per clip, significantly reducing costs compared to previous models. Gemini Ultra subscribers enjoy even more generous limits under their premium tier.

The upgrade is more than a performance boost. According to Google’s Josh Woodward, the improved infrastructure also paves the way for smoother playback and better subtitles—enhancements that aim to make video creation more seamless and accessible.

Google is also testing voice prompt capabilities, allowing users to describe video ideas aloud and watch them materialise on-screen.

Although Veo 3 Fast is currently limited to 720p resolution, it encourages creativity through rapid iteration. Users can experiment with prompts and edits without needing a perfect first attempt.

While the results won’t rival Hollywood, the model opens up new possibilities for businesses, creators, and filmmakers looking to quickly prototype video ideas or produce content without traditional filming.


Czechia bids to host major EU AI computing centre

Czechia is positioning itself to host one of the European Union’s planned AI ‘gigafactories’—large-scale computing centres designed to strengthen Europe’s AI capabilities and reduce dependence on global powers like the United States.

Jan Kavalírek, the Czech government’s AI envoy, confirmed to the Czech News Agency that talks with a private investor are progressing well and potential locations have already been identified.

While the application for the EU funding is not yet final, Kavalírek said, ‘We are very close.’ The EU has allocated around €20 billion for these AI infrastructure projects, with significant contributions also expected from private sources.

Germany and Denmark are also vying to host similar facilities. If successful, Czechia’s bid could transform the country into a key AI infrastructure hub for Europe, offering powerful computational resources for sectors such as public administration, healthcare, and finance.

Lukáš Benzl, director of the Czech Association of Artificial Intelligence, described the initiative as a potential ‘motor for the AI economy’ across the continent.


Canva makes AI use mandatory in coding interviews

Australian design giant Canva has revamped its technical interview process to reflect modern software development, requiring job candidates to demonstrate their ability to use AI coding assistants.

The shift aims to better assess how candidates would perform on the job, where tools like Copilot and Claude are already part of engineers’ daily workflows.

Previously, interviews focused on coding fundamentals without assistance. Now, candidates must solve engineering problems using AI tools in ways that reflect real-world scenarios, demanding effective prompting and judgement rather than simply getting correct outputs.

The change follows internal experiments where Canva found that AI could easily handle traditional interview questions. Company leaders argue that the old approach no longer measured actual job readiness, given that many engineers rely on AI to navigate codebases and accelerate prototyping.

By integrating AI into hiring, Canva joins many firms that are adapting to a tech workforce increasingly shaped by intelligent automation. The company says the goal is not to test whether candidates know how to use AI, but how well they use it to build solutions.


Meta hires top AI talent from Google and Sesame

Meta is assembling a new elite AI research team aimed at developing artificial general intelligence (AGI), luring top talent from rivals including Google and AI voice startup Sesame.

Among the high-profile recruits are Jack Rae, a principal researcher at Google DeepMind, and Johan Schalkwyk, a machine learning lead from Sesame.

Meta is also close to finalising a multibillion-dollar investment in Scale AI, a data-labelling startup led by CEO Alexandr Wang, who is also expected to join the new initiative.

The new group, referred to internally as the ‘superintelligence’ team, is central to CEO Mark Zuckerberg’s plan to close the gap with competitors like Google and OpenAI.

Following disappointment over Meta’s recent AI model, Llama 4, Zuckerberg hopes the newly acquired expertise will help improve future models and expand AI capabilities in areas like voice and personalisation.

Zuckerberg has taken a hands-on approach, personally recruiting engineers and researchers, sometimes meeting with them at his homes in California. Meta is reportedly offering compensation packages worth tens of millions of dollars, including equity, to attract leading AI talent.

The company aims to hire around 50 people for the team and is also seeking a chief scientist to help lead the effort.

The broader strategy involves investing heavily in data, chips, and human expertise — three pillars of advanced AI development. By partnering with Scale AI and recruiting high-profile researchers, Meta is trying to strengthen its position in the AI race.

Meanwhile, rivals like Google are reinforcing their defences, with Koray Kavukcuoglu named as chief AI architect in a new senior leadership role to ensure DeepMind’s technologies are more tightly integrated into Google’s products.


AI cheating crisis leaves teachers in despair

Teachers across the US are growing alarmed by widespread student use of AI for assignments, calling it a crisis that undermines education itself. Some professors report that students now rely on AI for everything from note-taking to essay writing, leaving educators questioning the future of learning.

The fear of false accusations is rising among honest students, with some recording their screens to prove their work is genuine. Detection tools often misfire, further complicating efforts to distinguish real effort from AI assistance.

While some argue for banning tech and returning to traditional classroom methods, others suggest rethinking US education entirely. Rather than fighting AI, some believe it offers a chance to re-engage students by giving them meaningful work they want to do.


Cisco to reinvent network security for the AI era

Cisco has introduced a major evolution in security policy management, aiming to help enterprises scale securely without increasing complexity. At the centre of this transformation is Cisco’s Security Cloud Control, a unified policy framework designed to simplify and centralise the enforcement of security policies across a wide range of environments and technologies.

With the introduction of the Mesh Policy Engine, organisations can now define a single, intent-based policy that applies seamlessly across Cisco and third-party firewalls. Cisco is also upgrading its network security infrastructure to support AI-ready environments.

The new Hybrid Mesh Firewall includes the high-performance 6100 Series for data centres and the cost-efficient 200 Series for branch deployments, offering advanced threat inspection and integrated SD-WAN. Enforcement is extended across SD-WAN, smart switches, and ACI fabric, ensuring consistent protection.

Additionally, Cisco has deepened its integration with Splunk to enhance threat detection, investigation, and response (TDIR). Firewall log data feeds into Splunk for advanced analytics, while new SOAR integrations automate key responses like host isolation and policy enforcement.

Combined with telemetry from Cisco’s broader ecosystem, these tools provide faster, more informed threat management.

Together, these advancements position Cisco as a leader in AI-era cybersecurity, offering a unified and intelligent platform that reduces complexity, improves detection and response, and secures emerging technologies like agentic AI. By embedding policy-driven security into the core of enterprise networks, Cisco is enabling organisations to innovate with AI safely and securely.


Turing Institute urges stronger AI research security

The Alan Turing Institute has warned that urgent action is needed to protect the UK’s AI research from espionage, intellectual property theft and risky international collaborations.

Its Centre for Emerging Technology and Security (CETaS) has published a report calling for a culture shift across academia to better recognise and mitigate these risks.

The report highlights inconsistencies in how security risks are understood within universities and a lack of incentives for researchers to follow government guidelines. Sensitive data, the dual-use potential of AI, and the risk of reverse engineering make the field particularly vulnerable to foreign interference.

Lead author Megan Hughes stressed the need for a coordinated response, urging government and academia to find the right balance between academic freedom and security.

The report outlines 13 recommendations, including expanding support for academic due diligence and issuing clearer guidance on high-risk international partnerships.

Further proposals call for compulsory research security training, better threat communication from national agencies, and standardised risk assessments before publishing AI research.

The aim is to build a more resilient research ecosystem as global interest in UK-led AI innovation continues to grow.


Meta launches AI to teach machines physical reasoning

Meta Platforms has unveiled V-JEPA 2, an open-source AI model designed to help machines understand and interact with the physical world more like humans do.

The technology allows AI agents, including delivery robots and autonomous vehicles, to observe object movement and predict how those objects may behave in response to actions.

The company explained that just as people intuitively understand that a ball tossed into the air will fall due to gravity, AI systems using V-JEPA 2 gain a similar ability to reason about cause and effect in the real world.

Trained using video data, the model recognises patterns in how humans and objects move and interact, helping machines learn to reach, grasp, and reposition items more naturally.

Meta described the tool as a step forward in building AI that can think ahead, plan actions and respond intelligently to dynamic environments. In lab tests, robots powered by V-JEPA 2 performed simple tasks that relied on spatial awareness and object handling.

The company, led by CEO Mark Zuckerberg, is ramping up its AI initiatives to compete with rivals like Microsoft, Google, and OpenAI. By improving machine reasoning through world models such as V-JEPA 2, Meta aims to accelerate its progress toward more advanced AI.
