Accelerated AI Storage Performance With RDMA for S3-Compatible Storage
Today’s AI workloads are data-intensive, requiring more scalable and affordable storage than ever. By 2028, enterprises are projected to generate nearly 400 zettabytes of data annually, with 90% of new data being unstructured, comprising audio, video, PDFs, images and more.
This massive scale, combined with the need for data portability between on-premises infrastructure and the cloud, is pushing the AI industry to evaluate new storage options.
Enter RDMA for S3-compatible storage, which uses remote direct memory access (RDMA) to accelerate the S3 application programming interface (API)-based storage protocol and is optimized for AI data and workloads.
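Because the acceleration happens at the transport layer, applications keep using the standard S3 API. The Python sketch below is a minimal illustration of that point using boto3; the endpoint URL, bucket and key are placeholders, and whether the transfer rides TCP or RDMA is assumed to be configured in the storage client stack beneath this code, not in the application.

```python
import boto3

# Point the standard S3 client at an S3-compatible endpoint.
# (Placeholder URL; the TCP-vs-RDMA transport choice is assumed to be
# handled below the API, so this application code is unchanged either way.)
s3 = boto3.client("s3", endpoint_url="https://storage.example.internal")

# Ordinary S3 GET: fetch a training shard and read its bytes.
response = s3.get_object(Bucket="training-data", Key="shards/shard-0001.tar")
payload = response["Body"].read()
print(f"fetched {len(payload)} bytes")
```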
Object storage has long been used as a lower-cost option for applications such as archives, backups, data lakes and activity logs that don't require the fastest performance. While some customers already use object storage for AI training, they want more performance for the fast-paced world of AI.
This solution, which incorporates NVIDIA networking, delivers faster and more efficient object storage by using RDMA for object data transfers.
For customers, this means higher throughput per terabyte of storage, higher throughput per watt, lower cost per terabyte and significantly lower latencies compared with TCP, the traditional network transport protocol for object storage.
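Those throughput and latency figures are also easy to sanity-check against a specific deployment. The following hedged micro-benchmark times repeated GETs of one object and reports average throughput; the endpoint, bucket and key are placeholders, and comparing transports simply means pointing it first at a TCP-backed endpoint and then at an RDMA-accelerated one.

```python
import time
import boto3

# Placeholder endpoint and object; swap in your own to compare
# a TCP-backed endpoint against an RDMA-accelerated one.
ENDPOINT = "https://storage.example.internal"
BUCKET, KEY, RUNS = "training-data", "shards/shard-0001.tar", 10

s3 = boto3.client("s3", endpoint_url=ENDPOINT)

total_bytes, start = 0, time.perf_counter()
for _ in range(RUNS):
    body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"]
    total_bytes += len(body.read())
elapsed = time.perf_counter() - start

print(f"{RUNS} GETs, {total_bytes / 1e9:.2f} GB in {elapsed:.2f} s "
      f"({total_bytes / elapsed / 1e9:.2f} GB/s avg)")
```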
Other benefits include:
- Lower Cost: End users can reduce the cost of their AI storage, which can also speed up project approval and implementation.
- Workload Portability: Customers can run their AI workloads unmodified both on premises and in cloud service provider and neocloud environments, using a common storage API (see the sketch after this list).
- Accelerated Storage: Faster data access and performance for AI training and inference — including vector databases and key-value cache storage for inference in AI factories.
- AI data platform solutions gain faster object storage access and more metadata for content indexing and retrieval.
- Reduced CPU Utilization: RDMA for S3-compatible storage doesn’t use the host CPU for data transfer, meaning this critical resource is available to deliver AI value for customers.
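To make the portability point in the list above concrete, here is a minimal sketch, assuming only that each environment exposes its S3-compatible endpoint through an environment variable (S3_ENDPOINT is a hypothetical name): the application code itself is identical on premises and in the cloud.

```python
import os
import boto3

# Hypothetical environment variable: the only thing that changes
# between on-prem, CSP and neocloud deployments is the endpoint.
endpoint = os.environ.get("S3_ENDPOINT")  # set differently per environment

# Same client code, same S3 API, regardless of where it runs.
s3 = boto3.client("s3", endpoint_url=endpoint)
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="training-data"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```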