NVIDIA brings RDMA acceleration to S3 object storage for AI workloads
Cloudian, Dell and HPE adopt NVIDIA’s accelerated object storage libraries.
AI workloads are driving unprecedented data growth, with enterprises projected to generate almost 400 zettabytes annually by 2028. NVIDIA says traditional storage models cannot match the speed and scale needed for modern training and inference systems.
The company is promoting RDMA (remote direct memory access) for S3-compatible storage, which accelerates object data transfers by letting network adapters move data directly between memory buffers, bypassing host CPUs and the bottlenecks of TCP networking. The approach promises higher throughput per terabyte of storage and lower latency across AI factories and cloud deployments.
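For context, a conventional S3 GET over TCP illustrates the data path being replaced. The sketch below uses the standard boto3 client; the endpoint, bucket, and key names are placeholders rather than anything from NVIDIA's announcement.

```python
# Conventional S3 GET over TCP with boto3 (the path RDMA is meant to bypass).
# Endpoint, bucket, and key are placeholders for illustration only.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# The object body travels through the kernel TCP stack and is copied into
# host memory buffers by the CPU before any GPU transfer can happen.
response = s3.get_object(Bucket="training-data", Key="shards/shard-000042.tar")
payload = response["Body"].read()

# A separate host-to-device copy (e.g., via CUDA or a training framework)
# is then needed to stage the bytes into GPU memory.
print(f"Fetched {len(payload)} bytes over the TCP data path")
```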
Key benefits include lower storage costs, workload portability across environments and faster access for training, inference and vector database workloads. NVIDIA says freeing CPU resources also improves overall GPU utilisation and project efficiency.
RDMA client libraries run directly on GPU compute nodes, enabling faster object retrieval during training. While initially optimised for NVIDIA hardware, the architecture is open and can be extended by other vendors and users seeking higher storage performance.
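The article does not document the client API itself, so the following is a purely hypothetical sketch: the `rdma_s3` module and every call on it are invented here to illustrate the pattern the paragraph describes, in which object bytes land directly in GPU memory while the CPU only orchestrates.

```python
# Hypothetical sketch only: the 'rdma_s3' module and its API are invented
# for illustration and do not correspond to NVIDIA's actual libraries.
import rdma_s3  # hypothetical RDMA-capable S3 client

client = rdma_s3.Client(endpoint="objectstore.example.com")

# Register a GPU-resident buffer so the NIC can DMA object data into it
# directly, with no intermediate copy through host memory.
gpu_buffer = client.register_gpu_buffer(device=0, size=256 * 1024 * 1024)

# The GET bypasses the kernel TCP stack; the payload is written straight
# into the registered GPU buffer via RDMA.
nbytes = client.get_object_into(
    bucket="training-data",
    key="shards/shard-000042.tar",
    dest=gpu_buffer,
)

# The CPU issues control messages but never touches the payload, which is
# why the approach frees host cycles and improves GPU utilisation.
print(f"{nbytes} bytes delivered directly to GPU memory")
```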
Cloudian, Dell and HPE are integrating the technology into products such as HyperStore, ObjectScale and Alletra Storage MP X10000. NVIDIA is working with partners to standardise the approach, arguing that accelerated object storage is now essential for large-scale AI systems.
