Mar 2019 - Conference Proceeding: Tech Supplier - Doc # DR2019_T2_PR
AI Infrastructure: Horsepower Changes Everything
These event proceedings were presented at the IDC Directions conferences in Santa Clara and Boston in March 2019.
The days of the omnipresent homogeneous, general-purpose server are over. The speed at which machine learning (ML) and deep learning (DL) training and inferencing can be executed is critically important for organizations developing and deploying AI applications. The growing adoption of AI has led to a proliferation of infrastructure technologies aimed at increasing the performance and reducing the latency of AI data flows. Increasingly, parallelization is the preferred approach, with AI infrastructure starting to resemble HPC infrastructure. Established companies as well as start-ups are developing new technologies, such as processors, coprocessors, interconnects, and orchestration layers, aimed at AI workloads. Server vendors are incorporating these technologies in various ways and expanding their products up the stack and beyond. Meanwhile, cloud service providers are competing with new AI-focused instances. Peter Rutten provides an overview of the current AI infrastructure landscape, where it is heading, and how to navigate the myriad options.
SambaNova Systems, Inc., Wave Computing, Inc., Advanced Micro Devices, Inc., Amazon Web Services Inc., NVIDIA Corporation, Intel Corporation, Xilinx, Inc., Habana Labs Ltd., Cerebras Systems Inc., Graphcore Ltd., Google Inc.