Google Cloud and NVIDIA join forces to accelerate enterprise AI and industrial digitalisation

NVIDIA and Google Cloud are expanding their collaboration to bring advanced AI computing to a wider range of enterprise workloads.

The new Google Cloud G4 virtual machines, powered by NVIDIA RTX PRO 6000 Blackwell GPUs, are now generally available, combining high-performance computing with scalability for AI, design, and industrial applications.

The announcement also makes NVIDIA Omniverse and Isaac Sim available on the Google Cloud Marketplace, offering enterprises new tools for digital twin development, robotics simulation, and AI-driven industrial operations.

These integrations enable customers to build realistic virtual environments, train intelligent systems, and streamline design processes.

Powered by the Blackwell architecture, the RTX PRO 6000 GPUs support next-generation AI inference and advanced graphics capabilities. Enterprises can use them to accelerate complex workloads ranging from generative and agentic AI to high-fidelity simulations.

The partnership strengthens Google Cloud’s AI infrastructure and reinforces NVIDIA’s position as a leading provider of end-to-end computing for enterprise transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA and TSMC celebrate first US-made Blackwell AI chip

A collaboration between NVIDIA and TSMC has marked a historic milestone with the first NVIDIA Blackwell wafer produced on US soil.

The event, held at TSMC’s facility in Phoenix, symbolised the start of volume production for the Blackwell architecture and a major step toward domestic AI chip manufacturing.

NVIDIA CEO Jensen Huang described it as a moment that brings advanced technology and industrial strength back to the US.

The partnership highlights how the companies aim to strengthen the US semiconductor supply chain by producing the world’s most advanced chips domestically.

TSMC Arizona will manufacture next-generation two-, three- and four-nanometre technologies, crucial for AI, telecommunications, and high-performance computing. The process transforms raw wafers through layering, etching, and patterning into the high-speed processors driving the AI revolution.

TSMC executives praised the achievement as the result of decades of partnership with NVIDIA, built on innovation and technical excellence.

Both companies believe that local chip production will help meet the rising global demand for AI infrastructure while securing the US’s strategic position in advanced technology manufacturing.

NVIDIA also plans to use its AI, robotics, and digital twin platforms to design and manage future American facilities, deepening its commitment to domestic production.

The companies say their shared investment signals a long-term vision of sustainable innovation, industrial resilience, and technological leadership for the AI era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple launches M5 with bigger AI gains

Apple has unveiled the M5 chip, targeting a major jump in on-device AI. Apple says peak GPU compute for AI is more than four times that of the M4, with a Neural Accelerator in each of the chip’s 10 GPU cores.

The CPU pairs up to four performance cores with six efficiency cores for multithreaded work up to 15 percent faster than the M4. A faster 16-core Neural Engine and higher unified memory bandwidth of 153 GB/s aim to speed up Apple Intelligence features.

Graphics upgrades include third-generation ray tracing and reworked caching for up to 45 percent higher performance than the M4 in supported apps. Apple also points to AI-assisted gains such as smoother gameplay and quicker 3D renders, plus a Vision Pro refresh rate of up to 120 Hz.

The M5 chip arrives in the 14-inch MacBook Pro, iPad Pro, and Apple Vision Pro, with pre-orders now open. Apple highlights tighter integration with Core ML and the Tensor APIs in Metal 4, plus support for larger local models via unified memory of up to 32 GB.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SenseTime and Cambricon strengthen cooperation for China’s AI future

SenseTime and Cambricon Technologies have entered a strategic cooperation agreement to jointly develop an open and mutually beneficial AI ecosystem in China. The partnership will focus on software-hardware integration, vertical industry innovation, and the globalisation of AI technologies.

By combining SenseTime’s strengths in large model R&D, AI infrastructure, and industrial applications with Cambricon’s expertise in intelligent computing chips and high-performance hardware, the collaboration supports China’s national ‘AI+’ strategy.

Both companies aim to foster a new AI development model defined by synergy between software and hardware, enhancing domestic innovation and global competitiveness in the AI sector.

The agreement also includes co-development of adaptive chip solutions and integrated AI systems for enterprise and industrial use. By focusing on compatibility between the latest AI models and hardware architectures, the two firms plan to offer scalable, high-efficiency computing solutions.

The partnership seeks to drive intelligent transformation across industries and promote the growth of emerging AI enterprises through joint innovation and ecosystem building.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tokens-at-scale with Intel’s Crescent Island and Xe architecture

Intel has unveiled its ‘Crescent Island’ data-centre GPU at the OCP Global Summit, targeting real-time inference everywhere with high memory capacity and energy-efficient performance for agentic AI.

Intel’s Sachin Katti said that scaling complex inference requires heterogeneous systems and an open, developer-first software stack; Intel positions its Xe architecture GPUs to deliver efficient headroom as token volumes surge.

Intel’s approach spans from the AI PC to the data centre and edge, pairing Xeon 6 processors and GPUs with workload-centric orchestration to simplify deployment and scaling while maintaining developer continuity.

Crescent Island is designed for air-cooled enterprise servers, optimised for power and cost, and tuned for inference with large memory capacity and bandwidth.

Key features include the Xe3P microarchitecture for performance-per-watt gains, 160 GB of LPDDR5X memory, broad data-type support for ‘tokens-as-a-service’ workloads, and a unified software stack already proven on the Arc Pro B-Series; customer sampling is slated for the second half of 2026.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Dell joins Microsoft and Nscale on hyperscale AI capacity

Nscale has signed an expanded deal with Microsoft, in collaboration with Dell, to deliver about 200,000 NVIDIA GB300 GPUs across Europe and the US. The company calls it one of the largest AI infrastructure contracts to date. The build-out targets surging enterprise demand for GPU capacity.

A ~240MW hyperscale AI campus in Texas, leased from Ionic Digital, will host roughly 104,000 GB300s from Q3 2026. Nscale plans to scale the site to 1.2GW, with Microsoft holding an option on a second 700MW phase from late 2027. The campus is optimised for air-cooled, power-efficient deployments.

In Europe, Nscale will deploy about 12,600 GB300s from Q1 2026 at Start Campus in Sines, Portugal, supporting sovereign AI needs within the EU. A separate UK facility at Loughton will house around 23,000 GB300s from Q1 2027. The 50MW site is scalable to 90MW to support Azure services.

A Norway programme also advances the Aker-Nscale joint venture’s plans for about 52,000 GB300s at Narvik, alongside Nscale’s GW+ greenfield sites and orchestration targeting training, fine-tuning, and inference at scale. Microsoft emphasises sustainability and global availability.

Both firms cast the pact as deepening transatlantic tech ties and accelerating the rollout of next-gen AI services. Nscale says few providers can deploy GPU fleets at this pace. The roadmap points to sovereign-grade, multi-region capacity with lower-latency platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Broadcom unite to deploy 10 gigawatts of AI accelerators

US firm OpenAI has announced a multi-year collaboration with Broadcom to design and deploy 10 gigawatts of custom AI accelerators.

The partnership will combine OpenAI’s chip design expertise with Broadcom’s networking and Ethernet technologies to create large-scale AI infrastructure. The deployment is expected to begin in the second half of 2026 and be completed by the end of 2029.

The collaboration enables OpenAI to integrate insights gained from its frontier models directly into the hardware, enhancing efficiency and performance.

Broadcom will develop racks of AI accelerators and networking systems across OpenAI’s data centres and those of its partners. The initiative is expected to meet growing global demand for advanced AI computation.

Executives from both companies described the partnership as a significant step toward the next generation of AI infrastructure. OpenAI CEO Sam Altman said it would help deliver the computing capacity needed to realise the benefits of AI for people and businesses worldwide.

Broadcom CEO Hock Tan called the collaboration a milestone in the industry’s pursuit of more capable and scalable AI systems.

The agreement strengthens Broadcom’s position in AI networking and underlines OpenAI’s move toward greater control of its technological ecosystem. By developing its own accelerators, OpenAI aims to boost innovation while advancing its mission to ensure artificial general intelligence benefits humanity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia DGX Spark launches as the world’s smallest AI supercomputer

Nvidia has launched the DGX Spark, described as the world’s smallest AI supercomputer.

Designed for developers and smaller enterprises, the Spark offers data centre-level performance without the need for costly AI server infrastructure or cloud rentals. It features Nvidia’s GB10 Grace Blackwell superchip, ConnectX-7 networking, and the company’s complete AI software stack.

The system, co-developed with ASUS and Dell, supports up to 128GB of unified memory, enabling users to train and run substantial AI models locally.

Nvidia CEO Jensen Huang compared Spark’s mission to that of the original DGX-1, which he hand-delivered to the Elon Musk co-founded OpenAI in 2016, marking the start of the AI revolution. The new Spark, he said, aims to place supercomputing power directly in the hands of every developer.

Running on Nvidia’s Linux-based DGX OS, the Spark is built for AI model creation rather than general computing or gaming. Two units can be connected to handle models with up to 405 billion parameters.

The device complements Nvidia’s DGX Station, powered by the more advanced GB300 Grace Blackwell Ultra chip.

Nvidia continues to dominate the AI chip industry through its powerful hardware and CUDA platform, securing multi-billion-dollar deals with companies such as OpenAI, Google, Meta, Microsoft, and Amazon. The DGX Spark reinforces its position by expanding access to AI computing at the desktop level.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands safeguards economic security through Nexperia intervention

The Dutch Minister of Economic Affairs has invoked the Goods Availability Act in response to serious governance issues at semiconductor manufacturer Nexperia.

The measure, announced on 30 September 2025, seeks to ensure the continued availability of the company’s products in the event of an emergency. Nexperia, headquartered in Nijmegen, will be allowed to maintain its normal production activities.

The decision follows recent indications of significant management deficiencies and actions within Nexperia that could affect the safeguarding of vital technological knowledge and capacity in the Netherlands and across Europe.

Authorities view these capabilities as essential for economic security, as Nexperia supplies chips for the automotive and consumer electronics industries.

Under the order, the Minister of Economic Affairs may block or reverse company decisions considered harmful to Nexperia’s long-term stability or to the preservation of Europe’s semiconductor value chain.

The Dutch government described the use of the Goods Availability Act as exceptional, citing the urgency and scale of the governance concerns.

Officials emphasised that the action applies only to Nexperia and does not target other companies, sectors, or countries. The decision may be contested through the courts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Beijing tightens grip on rare earth exports

China has announced new restrictions on rare earth and permanent magnet exports, significantly escalating its control over critical materials essential for advanced technologies and defence production. The move, revealed ahead of President Donald Trump’s expected meeting with President Xi Jinping, introduces the most stringent export controls yet.

For the first time, Beijing will require foreign companies to obtain approval to export magnets that contain even minimal Chinese-sourced materials or were made with Chinese technology, effectively extending its influence across the global supply chain.

The restrictions could have profound implications for the US defence and semiconductor industries. Rare earth elements are indispensable for producing fighter jets, submarines, missiles, and other advanced systems.

Beginning 1 December 2025, any company tied to foreign militaries, particularly the US military, will likely be denied export licences, while applications for high-tech uses, such as next-generation semiconductors, will face case-by-case reviews. These measures grant Chinese authorities broad discretion to delay or deny exports, tightening their strategic control at a time when Washington already struggles to boost domestic production.

Beijing’s announcement also bars Chinese nationals from participating in overseas rare earth projects without government authorisation, aiming to block the transfer of technical know-how abroad. Analysts suggest the move serves both as a negotiating tactic ahead of renewed trade talks and as a continuation of China’s long-term strategy to weaponise its dominance of the rare earth sector, which accounts for over 90% of the world’s magnet manufacturing.

Meanwhile, the US is racing to build resilience. Noveon Magnetics and Lynas Rare Earths are partnering to establish a domestic magnet supply chain, while the Department of War has invested heavily in MP Materials to expand rare earth mining and processing capacity.

Yet experts warn that developing these capabilities will take years, leaving China with significant leverage over global supply chains critical to US national security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!