On the Difficulty of Designing Processor Arrays for Deep Neural Networks

06/24/2020
by   Kevin Stehle, et al.

Systolic arrays are a promising computing concept that is particularly in line with CMOS technology trends and the linear algebra operations found in the processing of artificial neural networks. The recent success of such deep learning methods in a wide range of applications has led to a variety of models which, albeit conceptually similar since they are based on convolutions and fully-connected layers, show great diversity in their operations due to a large design space: an operand's dimensions vary substantially, since they depend on design principles such as receptive field size, number of features, and the striding, dilation, and grouping of features. Furthermore, recent networks extend previously plain feedforward models with various connectivity patterns, as in ResNet or DenseNet. The problem of choosing an optimal systolic array configuration cannot be solved analytically; instead, methods and tools are required that facilitate fast and accurate reasoning about optimality in terms of total cycles, utilization, and amount of data movement. In this work we introduce Camuy, a lightweight model of a weight-stationary systolic array for linear algebra operations that allows quick exploration of different configurations, such as systolic array dimensions and input/output bit widths. Camuy aids accelerator designers either in finding optimal configurations for a particular network architecture or in achieving robust performance across a variety of network architectures. It offers simple integration into existing machine learning tool stacks (e.g., TensorFlow) through custom operators. We present an analysis of popular DNN models to illustrate how Camuy can estimate required cycles, data movement costs, and systolic array utilization, and show how progress in network architecture design impacts the efficiency of inference on accelerators based on systolic arrays.
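To give a flavor of the kind of reasoning such a model enables, the following is a minimal first-order sketch of cycle and utilization estimation for a matrix multiply on a weight-stationary systolic array. The function name and the cost formula (weight preload plus streaming plus pipeline fill/drain per tile) are illustrative assumptions for this sketch, not Camuy's actual implementation.

```python
import math

def systolic_estimate(M, K, N, rows=128, cols=128):
    """First-order estimate for an (M,K) x (K,N) matmul on a
    rows x cols weight-stationary systolic array.

    Illustrative analytic sketch only; not Camuy's actual model.
    Returns (total_cycles, utilization).
    """
    # The K x N weight matrix is tiled to fit the array; each tile
    # must be loaded and streamed through separately.
    tiles = math.ceil(K / rows) * math.ceil(N / cols)

    # Per tile: preload weights row by row (~rows cycles), then stream
    # the M activation rows through the array; pipeline fill and drain
    # add roughly rows + cols - 1 cycles.
    cycles_per_tile = rows + M + rows + cols - 1
    total_cycles = tiles * cycles_per_tile

    # Utilization: useful MACs divided by the array's peak MAC slots
    # over the estimated run time.
    macs = M * K * N
    utilization = macs / (rows * cols * total_cycles)
    return total_cycles, utilization
```

For example, a 256x128 by 128x128 multiply on a 128x128 array fits in a single tile; the estimate shows that preload and fill/drain overheads already cap utilization well below peak, which is the kind of effect a configuration-exploration tool needs to expose.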

