
Feb 2018 - IDC Survey Spotlight - Doc # US43587818

What Type of Storage Architecture Will Be Used for On-Premises Run of AI/ML/DL Workloads?

By: Ritu Jyoti, Program Vice President, Artificial Intelligence Strategies

Online Presentation

Abstract

This IDC Survey Spotlight analyzes the storage architectures currently used, or planned for use, to run AI/ML/DL workloads on premises. Specifically, it highlights the expected increase in adoption of software-defined storage, hyperconverged infrastructure, and all-flash arrays.

"Today, traditional SAN/NAS is largely used for on-premises run of AI/ML/DL workloads due to their existing deployment footprint and earlier stages of AI adoption, but with the need to scale dynamically, store large volumes of data at relatively low cost, and support high performance, software-defined storage, hyperconverged infrastructure, and all-flash arrays will gain adoption, aligned with the individual offering specific advantages and the data pipeline stage of AI deployment," said Ritu Jyoti, research director, for IDC's Enterprise Storage, Server, and Infrastructure software team at IDC. "This will mean speed and power to support faster analysis and decision making, along with overall improvement in operational efficiency."


Coverage

Content
  • 2 slides

