Dynamic Stripes: Exploiting the Dynamic Precision Requirements of Activation Values in Neural Networks

06/01/2017
by Alberto Delmas, et al.

Stripes is a Deep Neural Network (DNN) accelerator that uses bit-serial computation to offer performance proportional to the fixed-point precision of the activation values. The fixed-point precisions are determined a priori using profiling and are selected at a per-layer granularity. This paper presents Dynamic Stripes, an extension to Stripes that detects precision variance at runtime and at a finer granularity. This extra level of precision reduction increases performance by 41%.
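The abstract describes two mechanisms: bit-serial arithmetic whose cycle count scales with the activation precision, and runtime detection of the precision a group of activations actually needs. The following is a minimal sketch of that idea, not the authors' hardware design; it assumes unsigned post-ReLU activations, and the function names (required_precision, bit_serial_dot) are illustrative only.

```python
# Sketch of the Dynamic Stripes idea: the number of bit-serial cycles
# for a group of fixed-point activations is set by the widest value
# actually present in the group, not by a precision profiled offline.

def required_precision(activations):
    """Bits needed for the largest value in the group.

    Assumes unsigned fixed-point activations (e.g. post-ReLU values).
    In hardware this would be leading-zero detection over the group.
    """
    return max((a.bit_length() for a in activations), default=1)

def bit_serial_dot(activations, weights):
    """Dot product computed one activation bit position per cycle.

    Cycle count is proportional to the detected precision, which is
    the performance effect the paper exploits.
    """
    p = required_precision(activations)
    acc = 0
    for bit in range(p):                      # one cycle per bit position
        partial = sum(w for a, w in zip(activations, weights)
                      if (a >> bit) & 1)      # add weights where bit is set
        acc += partial << bit                 # shift partial sum into place
    return acc, p

# Example: a group whose values all fit in 3 bits finishes in 3 cycles
# instead of, say, a profiled per-layer precision of 8.
acts = [3, 5, 1, 7]
ws = [2, -1, 4, 3]
result, cycles = bit_serial_dot(acts, ws)
assert result == sum(a * w for a, w in zip(acts, ws))
print(f"dot = {result}, bit-serial cycles = {cycles}")
```

The sketch detects precision per group of activations, which stands in for the paper's "finer granularity" than per-layer profiling; the actual accelerator performs this detection and the bit-serial accumulation in hardware.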

Related research

04/17/2018 | DPRed: Making Typical Activation Values Matter In Deep Learning Computing
We show that selecting a fixed precision for all activations in Convolut...

02/09/2015 | Deep Learning with Limited Numerical Precision
Training of large-scale deep neural networks is often constrained by the...

12/31/2018 | Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm
The high computational and parameter complexity of neural networks makes...

09/29/2019 | AdaptivFloat: A Floating-point based Data Type for Resilient Deep Learning Inference
Conventional hardware-friendly quantization methods, such as fixed-point...

11/25/2020 | Ax-BxP: Approximate Blocked Computation for Precision-Reconfigurable Deep Neural Network Acceleration
Precision scaling has emerged as a popular technique to optimize the com...

12/04/2021 | On the Implementation of Fixed-point Exponential Function for Machine Learning and Signal Processing Accelerators
The natural exponential function is widely used in modeling many enginee...

05/23/2000 | Applying MDL to Learning Best Model Granularity
The Minimum Description Length (MDL) principle is solidly based on a pro...
