A Simple and Efficient Stochastic Rounding Method for Training Neural Networks in Low Precision

03/24/2021
by Lu Xia, et al.

Conventional stochastic rounding (CSR) is widely employed in the training of neural networks (NNs), showing promising training results even in low-precision computations. We introduce an improved stochastic rounding method that is simple and efficient. The proposed method succeeds in training NNs with 16-bit fixed-point numbers and provides faster convergence and higher classification accuracy than both CSR and the deterministic round-to-nearest method.
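The improved method itself is not described in this abstract, but the baseline it is compared against, conventional stochastic rounding, can be sketched briefly. The snippet below is a minimal NumPy illustration of CSR onto a signed fixed-point grid; the choice of 8 fractional bits, the function name, and the saturating clip are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def stochastic_round_fixed_point(x, frac_bits=8, total_bits=16, rng=None):
    """Conventional stochastic rounding (CSR) onto a signed fixed-point grid.

    A value x is rounded down to the nearest multiple of eps = 2**-frac_bits
    with probability 1 - p, and up with probability p, where p is the
    fractional distance of x/eps above its floor. The expected result equals
    x (unbiased), which is what lets very small gradient updates survive in
    low-precision training instead of being flushed to zero.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = 2.0 ** -frac_bits
    scaled = np.asarray(x, dtype=np.float64) / eps
    floor = np.floor(scaled)
    p = scaled - floor                              # probability of rounding up
    rounded = floor + (rng.random(floor.shape) < p)
    # Saturate to the representable range of a signed 16-bit fixed-point word
    # (an assumption for this sketch).
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    return np.clip(rounded, lo, hi) * eps

# Example: an update below eps/2 is always flushed to zero by round-to-nearest,
# but is preserved on average by stochastic rounding.
x = np.full(100_000, 0.001)                         # eps = 2**-8 ~ 0.0039
print(stochastic_round_fixed_point(x).mean())      # approx. 0.001, not 0.0
```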


Related research

Deep Learning with Limited Numerical Precision (02/09/2015)
Training of large-scale deep neural networks is often constrained by the...

Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks (07/08/2016)
It is known that training deep neural networks, in particular, deep conv...

Efficient Stochastic Inference of Bitwise Deep Neural Networks (11/20/2016)
Recently published methods enable training of bitwise neural networks wh...

NAMSG: An Efficient Method For Training Neural Networks (05/04/2019)
We introduce NAMSG, an adaptive first-order algorithm for training neura...

On non-iterative training of a neural classifier (12/14/2015)
Recently an algorithm was discovered which separates points in n-dimen...

On the Convergence of the Gradient Descent Method with Stochastic Fixed-point Rounding Errors under the Polyak-Lojasiewicz Inequality (01/23/2023)
When training neural networks with low-precision computation, rounding e...

Improved stochastic rounding (05/31/2020)
Due to the limited number of bits in floating-point or fixed-point arith...
