Next-gen AI infrastructure boosted by Samsung HBM4
Mass production of the new high-bandwidth memory signals intensifying competition in AI semiconductor supply chains and performance optimisation.
Samsung Electronics has commenced mass production and commercial shipments of its next-generation HBM4 memory, marking the first industry deployment of the advanced high-bandwidth solution.
The launch strengthens the company’s position in AI infrastructure hardware as demand for accelerated computing intensifies.
Built on sixth-generation 10nm-class DRAM and a 4nm logic base die, HBM4 delivers per-pin transfer speeds of 11.7Gbps, with performance scalable to 13Gbps. Bandwidth per stack rises accordingly, easing data bottlenecks as AI models and processing demands grow.
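To put those figures in context, the arithmetic below is a minimal sketch of peak per-stack bandwidth, assuming the JEDEC HBM4 2048-bit interface width (the article itself quotes only the pin speeds):

```python
# Sketch: peak per-stack bandwidth from per-pin speed.
# Assumes a 2048-bit HBM4 interface (JEDEC spec); pin speeds from the article.
def stack_bandwidth_tbs(pin_speed_gbps: float, bus_width_bits: int = 2048) -> float:
    """Peak bandwidth in TB/s: pin speed (Gbit/s) x pins, converted to bytes."""
    return pin_speed_gbps * bus_width_bits / 8 / 1000  # bits -> bytes, Gb -> Tb

print(f"{stack_bandwidth_tbs(11.7):.2f} TB/s")  # at launch speed: 3.00 TB/s
print(f"{stack_bandwidth_tbs(13.0):.2f} TB/s")  # when scaled:     3.33 TB/s
```

Under these assumptions, a single stack approaches 3 TB/s at launch speed, which is what makes the memory relevant to GPU-bound AI workloads.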
Engineering upgrades extend beyond raw speed. Enhanced stacking architecture, low-power design integration, and thermal optimisation have improved energy efficiency and heat dissipation, supporting large-scale data centre deployments and sustained GPU workloads.
Production scale-up is already in motion, backed by expanded manufacturing capacity and industry partnerships. Samsung expects HBM revenue growth to accelerate into 2026, with next-generation variants and custom configurations scheduled for future release cycles.
