Jensen Huang of Nvidia rules out China Blackwell talks for now

Nvidia CEO Jensen Huang said the company is not in active discussions to sell Blackwell-family AI chips to Chinese firms and has no current plans to ship them. He also clarified remarks about the US-China AI race, saying he intended to acknowledge China’s technical strength rather than predict an outcome.

Huang spoke in Taiwan ahead of meetings with TSMC, as Nvidia expands partnerships and pitches its platforms across regions and industries. The company has added roughly a trillion dollars in value this year and remains the world’s most valuable business despite recent share volatility.

US controls still bar sales of Nvidia’s most advanced data-centre AI chips into China, and a recent bilateral accord did not change that. Officials have indicated approvals for Blackwell remain off the table, keeping a potentially large market out of reach for now.

Analysts say uncertainty around China’s access to the technology feeds broader questions about the durability of hyperscale AI spending. Rivals, including AMD and Broadcom, are racing to win share as customers weigh long-term returns on data-centre buildouts.

Huang is promoting Nvidia’s end-to-end stack to reassure buyers that massive investments will yield productivity gains across sectors. He said he hopes policy environments eventually allow Nvidia to serve China again, but reiterated there are no active talks.

Naver expands physical AI ambitions with $690 million GPU investment

South Korean technology leader Naver is deepening its AI ambitions through a $690 million investment in graphics processing units beginning in 2025.

The move aims to strengthen its AI infrastructure and drive the development of physical AI, a field merging digital intelligence with robotics, logistics, and autonomous systems.

Beyond its internal use, Naver plans to monetise its expanded computing power by offering GPU-as-a-Service to clients across sectors, creating new revenue opportunities aligned with its AI ecosystem.

Chief Executive Choi Soo-yeon described physical AI as the firm’s next growth pillar, combining robotics, data, and generative AI to reshape both digital and industrial environments. The company already holds a significant share of the global robotics operating system market, underlining its technological maturity.

The investment marks a strategic shift from software-based AI to infrastructure-driven intelligence, positioning Naver as a leader in integrating AI with real-world applications.

As global competition intensifies, Naver’s model of coupling high-performance computing with robotics innovation signals the emergence of South Korea as a centre for applied AI technology.

Blackwell stance on China exports holds as Washington weighs tech pace

AI export policy in Washington remains firm, with officials saying the most advanced Nvidia Blackwell chips will not be sold to China. A White House spokesperson confirmed the stance during a briefing. The position follows weeks of speculation about scaled-down variants.

Senior economic officials have floated the possibility of a later shift, citing the rapid pace of chip development. If Blackwell is quickly superseded by newer designs, future sales could be reconsidered. Any change would hinge on assessments of technological parity, licensing and national security.

Nvidia’s chief executive signalled hope that Blackwell-family products could eventually be supplied to China, while noting there are no current plans to do so. Company guidance emphasises both commercial and research applications. Analysts say licensing clarity will dictate data centre buildouts and training roadmaps.

Policy hawks argue that cutting-edge accelerators should remain in US-allied markets to protect strategic advantages. Others counter that export channels can be reopened once hardware is no longer state-of-the-art. The debate now centres on timelines measured in product cycles.

Diplomatic calendars may influence further discussions, with potential leader-level meetings next year alongside major international gatherings. Officials portrayed the broader bilateral relationship as steadier. The industry will track any signals that link geopolitical dialogue to chip export regulations.

Nexperia chip exports may resume as China softens stance on ban

China’s Ministry of Commerce announced plans to exempt specific Nexperia orders from its export ban, aiming to stabilise the global semiconductor supply chain after the Netherlands seized control of the Chinese-owned Dutch chipmaker.

The ministry said exemptions would be granted where the relevant criteria are met and encouraged affected firms to apply.

The move follows a meeting between Chinese President Xi Jinping and US President Donald Trump in Busan, where the two sides reached a framework allowing Nexperia to resume shipments under eased restrictions.

Washington reportedly agreed to pause its 50 percent subsidiary rule, which extends export restrictions to companies at least half-owned by entities on its trade blacklist. Wingtech Technology, Nexperia’s Chinese parent, has been subject to these restrictions since December.

Beijing’s export ban, imposed after the Dutch government took control of Nexperia on national security grounds, disrupted supplies from the company’s Dongguan factory, which assembles about 70 percent of its products.

China condemned the Netherlands for intervening in corporate affairs, warning that such actions deepen global supply chain instability.

A new phase for Hyundai and NVIDIA in AI mobility and manufacturing

NVIDIA and Hyundai Motor Group will build a Blackwell-powered AI factory for autonomous vehicles, smart plants and robotics. The partners will co-develop core physical AI, shifting from tool adoption to capability building across mobility, manufacturing and on-device chips.

The programme targets integrated training, validation and deployment on 50,000 Blackwell GPUs. In parallel, both sides will back a physical AI cluster in South Korea with about $3 billion, creating an NVIDIA AI Technology Center, Hyundai’s Physical AI Application Center and regional data centres.

Hyundai will use NVIDIA DGX for model training, Omniverse and Cosmos on RTX PRO Servers for digital twins and simulation, and DRIVE AGX Thor in vehicles and robots for real-time intelligence. The stack underpins design, testing and deployment at an industrial scale.

Factory digital twins will unify data, enable virtual commissioning and improve predictive maintenance, supporting safer human-robot work. Isaac Sim will validate tasks and ergonomics before line deployment, speeding robot integration and lifting throughput, quality and uptime.

Vehicles will gain evolving features via Nemotron and NeMo, from autonomy to personalised assistants and adaptive comfort. DRIVE AGX Thor with safety-certified DriveOS will power driver assistance and next-generation safety, linking car and factory into one intelligent ecosystem.

An AI factory brings Nvidia compute into Samsung’s fabs

Nvidia and Samsung outlined a semiconductor AI factory that embeds accelerated computing into production. Over 50,000 GPUs will drive digital twins, predictive maintenance, and real-time optimisation. Partners present the project as a template for autonomous fabs.

The alliance spans design and manufacturing. Samsung uses CUDA-X and EDA tools to speed simulation and verification, and reports roughly twentyfold gains in computational lithography at advanced nodes from integrating cuLitho into its optical proximity correction (OPC) workflows.

Factory planning and logistics run on Omniverse digital twins and RTX PRO servers. Unified analytics support anomaly detection, capacity planning, and flow balancing. Managers expect shorter ramps and smoother changeovers with higher equipment effectiveness.

Robotics and edge AI extend intelligence to the line. Isaac Sim, Cosmos models, and Jetson Thor target safe collaboration, faster task retargeting, and teleoperation. Samsung’s in-house models enable multilingual assistance and on-site decision support.

A decades-long Nvidia–Samsung relationship underpins the effort, from the DRAM used in Nvidia’s early NV1 to HBM3E and HBM4 today. Work continues on memory, modules and foundry services, plus AI-RAN research with telecom networks in South Korea and academia, linking factory intelligence with next-generation connectivity.

CXMT launches LPDDR5X chips as China advances in semiconductor race

ChangXin Memory Technologies has begun mass production of LPDDR5X chips, marking a major milestone in China’s effort to strengthen its position in the global semiconductor market.

The Hefei-based manufacturer, preparing for a Shanghai stock listing, said its new DRAM generation will support faster data transfer and lower power use across mobile devices and AI systems.

The LPDDR5X range includes chips with speeds of up to 10,667 Mbps, positioning CXMT as a growing competitor to industry leaders such as Samsung, SK Hynix and Micron.
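
As a rough, hedged illustration of what that headline figure implies: assuming 10,667 Mbps is a per-pin data rate and the package exposes a common 64-bit bus (neither assumption is stated by CXMT), peak theoretical bandwidth works out to roughly 85 GB/s.

# Back-of-envelope peak bandwidth for an LPDDR5X package.
# Assumptions (not from the article): 10,667 Mbps is the per-pin data rate,
# and the package uses a 64-bit bus, as is typical in phones and laptops.
per_pin_mbps = 10_667                       # megabits per second, per pin
bus_width_bits = 64                         # assumed bus width
peak_gbits_per_s = per_pin_mbps * bus_width_bits / 1_000
peak_gbytes_per_s = peak_gbits_per_s / 8
print(f"Peak bandwidth ≈ {peak_gbytes_per_s:.1f} GB/s")   # ≈ 85.3 GB/s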

Earlier LPDDR5 versions launched in 2023 had already helped the firm progress towards advanced 16-nanometre manufacturing, narrowing the technological gap with global rivals.

Industry data indicate a rising global demand for memory chips, driven by AI applications and high-bandwidth computing. Additionally, DRAM revenue increased 17.1 percent in the second quarter, reaching US$31.6 billion.
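
For context, a simple back-of-envelope (assuming the 17.1 percent rise is measured against the previous quarter, which the figure does not state explicitly) puts the implied prior-quarter revenue at roughly US$27 billion.

# Implied prior-quarter DRAM revenue, assuming 17.1% growth is quarter on quarter.
current_revenue_bn = 31.6                   # US$ billion, second-quarter figure
growth = 0.171                              # 17.1% increase
prior_revenue_bn = current_revenue_bn / (1 + growth)
print(f"Implied prior quarter ≈ US${prior_revenue_bn:.1f} billion")   # ≈ US$27.0 billion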

CXMT’s expansion comes as it targets a Shanghai IPO valued at around 300 billion yuan, highlighting both investor interest and the ambition of China to achieve greater chip self-sufficiency.

Trainium2 power surges as AWS’s Project Rainier enters service for Anthropic

Anthropic and AWS switched on Project Rainier, a vast Trainium2 cluster spanning multiple US sites to accelerate Claude’s evolution.

Project Rainier is now fully operational, less than a year after its announcement. AWS engineered an EC2 UltraCluster of Trainium2 UltraServers to deliver massive training capacity. Anthropic says it offers more than five times the compute used for prior Claude models.

UltraServers bind four Trainium2 servers with high-speed NeuronLinks so that 64 chips act as one. Tens of thousands of these UltraServers are connected through Elastic Fabric Adapter networking across buildings. The design reduces latency within racks while preserving flexible scale across data centres.

Anthropic is already training and serving Claude on Rainier across the US and plans to exceed one million Trainium2 chips by year’s end. More computing should raise model accuracy, speed evaluations, and shorten iteration cycles for new frontier releases.
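
A minimal sketch of the scale arithmetic those figures imply (the 16-chips-per-server value is inferred from the four-server, 64-chip UltraServer description, and the UltraServer count is a straight division, not an AWS-published number):

# Scale arithmetic implied by the article's description of Project Rainier.
chips_per_ultraserver = 64                   # stated: 64 Trainium2 chips act as one
servers_per_ultraserver = 4                  # stated: four servers per UltraServer
chips_per_server = chips_per_ultraserver // servers_per_ultraserver   # inferred: 16
target_chips = 1_000_000                     # stated goal for year's end
ultraservers_needed = target_chips // chips_per_ultraserver           # 15,625
print(chips_per_server, ultraservers_needed)                          # 16 15625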

AWS controls the stack from chip to data centre for reliability and efficiency. Teams tune power delivery, cooling, and software orchestration. New sites add water-wise cooling, contributing to the company’s renewable energy and net-zero goals.

Humanoid robots set to power Foxconn’s new Nvidia server plant in Houston

Foxconn will add humanoid robots to a new Houston plant building Nvidia AI servers from early 2026. Announced at Nvidia’s developer conference, the move deepens their partnership and positions the site as a US showcase for AI-driven manufacturing.

Humanoid systems based on Nvidia’s Isaac GR00T N are built to perceive parts, adapt on the line, and work with people. Unlike fixed industrial arms, they handle delicate assembly and switch tasks via software updates. Goals include flexible throughput, faster retooling, and fewer stoppages.

AI models are trained in simulation using digital twins and reinforcement learning to improve accuracy and safety. On the line, robots self-tune as analytics predict maintenance and balance workloads, unlocking gains across logistics, assembly, testing, and quality control.

Texas offers proximity to a growing semiconductor and AI cluster, as well as policy support for domestic capacity. Foxconn also plans expansions in Wisconsin and California to meet global demand for AI servers. Scaling output should ease supply pressures around Nvidia-class compute in data centres.

Job roles will shift as routine tasks automate and oversight becomes data-driven. Human workers focus on design, line configuration, and AI supervision, with safety gates for collaboration. Analysts see a template for Industry 4.0 factories running near-continuously with rapid changeovers.

Nvidia and Deutsche Telekom plan €1 billion AI data centre in Germany

Plans are being rolled out for a €1 billion data centre in Germany to bolster Europe’s AI infrastructure, with Nvidia and Deutsche Telekom set to co-fund the project.

The facility is expected to serve enterprise customers, including SAP SE, Europe’s largest software company, and to deploy around 10,000 advanced chips known as graphics processing units (GPUs).
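
As a hedged back-of-envelope, spreading the reported €1 billion across roughly 10,000 GPUs gives an all-in figure of about €100,000 per GPU; this covers the whole facility (buildings, power, networking and the chips), not the price of an individual accelerator.

# Rough all-in project cost per GPU implied by the reported figures.
# This spreads facility cost over accelerators; it is not a chip price.
project_cost_eur = 1_000_000_000    # reported project budget
gpu_count = 10_000                  # reported number of GPUs
print(f"≈ €{project_cost_eur / gpu_count:,.0f} per GPU, all-in")   # ≈ €100,000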

While significant for Europe, the build is modest compared with gigawatt-scale sites elsewhere, highlighting the region’s push to catch up with US and Chinese capacity.

An announcement is anticipated next month in Berlin alongside senior industry and government figures, with Munich identified as the planned location.

The move aligns with EU efforts to expand AI compute, including the €200 billion initiative announced in February to grow capacity over the next five to seven years.
