Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks

07/08/2016
by Darryl D. Lin, et al.

It is known that training deep neural networks, in particular deep convolutional networks, with aggressively reduced numerical precision is challenging. The stochastic gradient descent algorithm becomes unstable in the presence of noisy gradient updates resulting from arithmetic with limited numerical precision. One well-accepted solution that facilitates the training of low-precision fixed-point networks is stochastic rounding. However, to the best of our knowledge, the source of the instability in training neural networks with noisy gradient updates has not been well investigated. This work attempts to draw a theoretical connection between low numerical precision and training algorithm stability. In doing so, we also propose, and verify through experiments, methods that improve the training performance of deep convolutional networks in fixed point.
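The stochastic rounding the abstract refers to replaces round-to-nearest with a rounding step whose error is zero-mean: a value is rounded up with probability equal to its fractional distance from the lower quantization level. Below is a minimal NumPy sketch of this standard technique, not the authors' implementation; the signed fixed-point format and the parameter names wl (word length) and fl (fractional length) are illustrative assumptions.

```python
import numpy as np

def stochastic_round_fixed_point(x, wl=16, fl=8, rng=None):
    """Quantize an array to a signed fixed-point format with wl total bits
    and fl fractional bits using stochastic rounding.

    Each value is rounded up with probability equal to its fractional
    distance from the lower quantization level, so the rounding error is
    zero-mean (illustrative sketch; not the paper's exact scheme).
    """
    x = np.asarray(x, dtype=np.float64)
    rng = np.random.default_rng() if rng is None else rng
    scale = 2.0 ** fl                  # quantization step is 2**-fl
    scaled = x * scale
    lower = np.floor(scaled)
    prob_up = scaled - lower           # fractional part in [0, 1)
    rounded = lower + (rng.random(x.shape) < prob_up)
    # Saturate to the representable range of a signed wl-bit integer.
    lo, hi = -2 ** (wl - 1), 2 ** (wl - 1) - 1
    return np.clip(rounded, lo, hi) / scale
```

Because the expected value of the quantized tensor equals the input, small gradient updates that round-to-nearest would deterministically truncate to zero still accumulate in expectation, which is why the technique helps stabilize low-precision training.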
