Deep Learning with Limited Numerical Precision

02/09/2015
by Suyog Gupta, et al.

Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited-precision data representation and computation on neural network training. Within the context of low-precision fixed-point computation, we observe that the rounding scheme plays a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only a 16-bit wide fixed-point number representation when stochastic rounding is used, incurring little to no degradation in classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.
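The stochastic rounding scheme referenced in the abstract rounds a value down to the nearest representable fixed-point level with probability proportional to its distance from the level above, and up otherwise, so the quantized value equals the input in expectation. Below is a minimal NumPy sketch of this idea; the function name, the <16, 12> word/fraction split, and the saturation behavior are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stochastic_round_fixed_point(x, word_length=16, frac_length=12, rng=None):
    """Quantize x to a signed <word_length, frac_length> fixed-point format
    using stochastic rounding (illustrative sketch, not the paper's code)."""
    rng = np.random.default_rng() if rng is None else rng
    eps = 2.0 ** (-frac_length)          # smallest representable increment
    scaled = np.asarray(x, dtype=np.float64) / eps
    lower = np.floor(scaled)
    prob_up = scaled - lower             # fractional part = P(round up)
    rounded = (lower + (rng.random(scaled.shape) < prob_up)) * eps
    # Saturate to the range of a signed word_length-bit fixed-point number.
    max_val = (2.0 ** (word_length - 1) - 1) * eps
    min_val = -(2.0 ** (word_length - 1)) * eps
    return np.clip(rounded, min_val, max_val)

# In expectation the rounded values match the input, so information smaller
# than the quantization step is preserved on average across many updates.
x = np.full(100_000, 0.1)
print(stochastic_round_fixed_point(x).mean())   # close to 0.1
```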

Related research

04/06/2021 · TENT: Efficient Quantization of Neural Networks on the tiny Edge with Tapered FixEd PoiNT
In this research, we propose a new low-precision framework, TENT, to lev...

03/24/2021 · A Simple and Efficient Stochastic Rounding Method for Training Neural Networks in Low Precision
Conventional stochastic rounding (CSR) is widely employed in the trainin...

02/19/2023 · Fixflow: A Framework to Evaluate Fixed-point Arithmetic in Light-Weight CNN Inference
Convolutional neural networks (CNN) are widely used in resource-constrai...

12/31/2018 · Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm
The high computational and parameter complexity of neural networks makes...

02/24/2021 · FIXAR: A Fixed-Point Deep Reinforcement Learning Platform with Quantization-Aware Training and Adaptive Parallelism
In this paper, we present a deep reinforcement learning platform named F...

06/01/2017 · Dynamic Stripes: Exploiting the Dynamic Precision Requirements of Activation Values in Neural Networks
Stripes is a Deep Neural Network (DNN) accelerator that uses bit-serial ...

02/27/2017 · Low-Precision Batch-Normalized Activations
Artificial neural networks can be trained with relatively low-precision ...
