A Formalism of DNN Accelerator Flexibility

06/07/2022
by   Sheng-Chun Kao, et al.

The high efficiency of domain-specific hardware accelerators for machine learning (ML) comes from specialization, at the cost of reduced configurability and flexibility. There is growing interest in developing flexible ML accelerators to make them future-proof against the rapid evolution of Deep Neural Networks (DNNs). However, the notion of accelerator flexibility has always been used informally, preventing computer architects from conducting systematic, apples-to-apples design-space exploration (DSE) across trillions of choices. In this work, we formally define accelerator flexibility and show how it can be integrated into DSE. Specifically, we capture DNN accelerator flexibility along four axes: tiling, ordering, parallelization, and array shape. We categorize existing accelerators into 16 classes based on the axes of flexibility they support, and define a precise quantification of an accelerator's degree of flexibility along each axis. We leverage these to develop a novel flexibility-aware DSE framework. We demonstrate how this framework can be used to perform first-of-their-kind evaluations, including an isolation study that identifies the individual impact of each flexibility axis. We show that adding flexibility features to a hypothetical DNN accelerator designed in 2014 improves runtime on future (i.e., present-day) DNNs by 11.8x geomean.
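The 16 classes follow directly from the four axes: each axis is either supported flexibly or fixed, giving 2^4 combinations. The sketch below, which assumes only this binary flexible/fixed reading of each axis (the paper's full formalism also quantifies the *degree* of flexibility per axis), enumerates the taxonomy:

```python
from itertools import product

# The four flexibility axes defined in the paper's formalism.
AXES = ("tiling", "ordering", "parallelization", "array_shape")

def flexibility_classes():
    """Enumerate all 2^4 = 16 accelerator classes, where each class
    is identified by the subset of axes an accelerator supports flexibly."""
    classes = []
    for flags in product((False, True), repeat=len(AXES)):
        supported = frozenset(ax for ax, f in zip(AXES, flags) if f)
        classes.append(supported)
    return classes

classes = flexibility_classes()
print(len(classes))  # 16
# The two extremes: a fully fixed accelerator supports no axis flexibly,
# while a fully flexible one supports all four.
print(frozenset() in classes, frozenset(AXES) in classes)  # True True
```

An existing accelerator is then placed in the taxonomy by mapping it to the subset of axes it can reconfigure; a fixed-function design lands in the empty-set class.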
