Understanding the Limits of Conventional Hardware Architectures for Deep-Learning

12/04/2021
by Michael Davies, et al.

Deep learning, and hardware for it, has garnered immense academic and industry interest over the past five years, including almost 100 startups, more than $5B of VC investment, and renewed relevance for the role of computer architecture. However, the state of the art remains NVIDIA's TensorCore-based systems, which provide (i) top-of-the-line performance, (ii) a turnkey software stack, and (iii) coverage across a wide spectrum of DL network styles (DL architectures, in AI parlance). Other academic and industry efforts have explored novel approaches such as spatial dataflow, CGRAs, systolic arrays, and FPGA LUTs blended with fixed-function units; each has required its own innovations in architecture, compilers, and software stack integration. None of these, however, has yet satisfied all three metrics that NVIDIA's TensorCore and software stack provide, and most appear to perform worse. In this paper, we systematically investigate the behavior of DL workloads and the needs they impose on hardware, compilers, and software. We show that SIMD/short-vector execution, caching, and synchronization in a fairly well-understood multicore chip organization we call UPCYCLE can achieve day-zero software maturity and deliver large integer-factor speedups over state-of-the-art NVIDIA solutions. First, compared to an A100 at small batch sizes, UPCYCLE is geo-mean 3.8X faster for inference and geo-mean 4.2X faster for training, while consuming only half the power. Second, the UPCYCLE architecture requires no new compiler or software stack innovation. Third, it provides full DL-architecture coverage and can be instantiated as a training-optimized, inference-optimized, or balanced training-and-inference system. Overall, this paper motivates treating software maturity as a first-class design constraint in developing new architectures for DL. This is achieved by revisiting well-understood ideas and upcycling them for future DL architectures...
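As a quick illustration of how the aggregate numbers above relate, here is a minimal Python sketch. The per-workload speedups are hypothetical placeholders (the abstract reports only the geo-mean aggregates), while the performance-per-watt figures follow arithmetically from the stated 3.8X/4.2X speedups at half the A100's power.

```python
from math import prod

def geo_mean(xs):
    """Geometric mean: the n-th root of the product of n values."""
    return prod(xs) ** (1.0 / len(xs))

# Hypothetical per-workload speedups (placeholders, not from the paper);
# the abstract reports only the aggregates: geo-mean 3.8X for inference
# and 4.2X for training versus an A100 at small batch sizes.
inference_speedups = [2.9, 3.6, 5.2]
print(f"geo-mean inference speedup: {geo_mean(inference_speedups):.2f}X")

# Perf/watt follows from the stated figures: UPCYCLE consumes half the
# A100's power, so its perf-per-watt advantage is double the speedup.
power_ratio = 0.5
print(f"inference perf/watt advantage: {3.8 / power_ratio:.1f}X")  # 7.6X
print(f"training perf/watt advantage:  {4.2 / power_ratio:.1f}X")  # 8.4X
```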


Related research

02/06/2020 · The Deep Learning Compiler: A Comprehensive Survey
The difficulty of deploying various deep learning (DL) models on diverse...

05/26/2021 · A Full-Stack Search Technique for Domain Optimized Deep Learning Accelerators
The rapidly-changing deep learning landscape presents a unique opportuni...

10/10/2020 · Cross-Stack Workload Characterization of Deep Recommendation Systems
Deep learning based recommendation systems form the backbone of most per...

12/12/2021 · Demystifying Developers' Issues in Distributed Training of Deep Learning Software
Deep learning (DL) has been pervasive in a wide spectrum of nowadays sof...

03/17/2022 · A Survey of Multi-Tenant Deep Learning Inference on GPU
Deep Learning (DL) models have achieved superior performance. Meanwhile,...

11/24/2019 · FusionStitching: Boosting Execution Efficiency of Memory Intensive Computations for DL Workloads
Performance optimization is the art of continuous seeking a harmonious m...

07/15/2021 · A64FX – Your Compiler You Must Decide!
The current number one of the TOP500 list, Supercomputer Fugaku, has dem...
