An Efficient Approach to Mitigate Numerical Instability in Backpropagation for 16-bit Neural Network Training

07/30/2023
by Juyoung Yun, et al.

In this research, we delve into the numerical instability observed in 16-bit computations of machine learning models, particularly when employing popular optimization algorithms such as RMSProp and Adam. This instability commonly arises during the training of deep neural networks, disrupting the learning process and hindering the effective deployment of such models. We identify a single hyperparameter, epsilon, as the main culprit behind this numerical instability. An in-depth exploration of the role of epsilon in these optimizers under 16-bit computation reveals that a minor adjustment of its value restores the functionality of RMSProp and Adam, enabling the effective use of 16-bit neural networks. We further propose a novel method to mitigate the identified numerical instability. This method builds on the updates of the Adam optimizer and significantly improves the robustness of the learning process in 16-bit computations. This study contributes to a better understanding of optimization in low-precision computation and provides an effective solution to a longstanding issue in training deep neural networks, opening new avenues for more efficient and stable model training.
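The abstract attributes the instability to epsilon in the denominator of the RMSProp and Adam updates: the common default of 1e-8 is smaller than the smallest subnormal float16 value (about 6e-8), so it rounds to zero, and the division can blow up whenever the second-moment estimate is also tiny. The minimal sketch below illustrates this with a single hand-written Adam-style step in PyTorch; the framework, the toy values, and the adjusted epsilon of 1e-4 are illustrative assumptions, not the paper's prescribed implementation or setting.

```python
import torch

# Minimal sketch of the instability described above (not the paper's method):
# one Adam-style update carried out entirely in float16.
def adam_step(param, grad, m, v, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * (grad * grad)
    # The denominator sqrt(v) + eps is where 16-bit precision bites:
    # if both terms underflow to zero, the update is no longer finite.
    update = lr * (m / (torch.sqrt(v) + eps))
    return param - update, m, v

p = torch.tensor(0.5, dtype=torch.float16)
g = torch.tensor(1e-4, dtype=torch.float16)   # small gradient: g*g underflows to 0
m0 = torch.zeros((), dtype=torch.float16)
v0 = torch.zeros((), dtype=torch.float16)

eps_default = torch.tensor(1e-8, dtype=torch.float16)   # rounds to 0.0 in float16
eps_adjusted = torch.tensor(1e-4, dtype=torch.float16)  # still representable

print(adam_step(p, g, m0, v0, eps=eps_default)[0])   # non-finite: the step blows up
print(adam_step(p, g, m0, v0, eps=eps_adjusted)[0])  # finite, well-behaved update
```

With the default epsilon the denominator collapses to zero and the parameter becomes non-finite in a single step, whereas a float16-representable epsilon keeps the update bounded; the same reasoning applies to the sqrt(v) + eps denominator in RMSProp.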

Related research

04/30/2021  PositNN: Training Deep Neural Networks with Mixed Low-Precision Posit
Low-precision formats have proven to be an efficient way to reduce not o...

11/01/2017  Towards Effective Low-bitwidth Convolutional Neural Networks
This paper tackles the problem of training a deep convolutional neural n...

06/06/2022  8-bit Numerical Formats for Deep Neural Networks
Given the current trend of increasing size and complexity of machine lea...

11/23/2019  Training Modern Deep Neural Networks for Memory-Fault Robustness
Because deep neural networks (DNNs) rely on a large number of parameters...

10/22/2019  Neural Network Training with Approximate Logarithmic Computations
The high computational complexity associated with training deep neural n...

10/13/2020  Revisiting BFloat16 Training
State-of-the-art generic low-precision training algorithms use a mix of ...

12/02/2018  Neural Rejuvenation: Improving Deep Network Training by Enhancing Computational Resource Utilization
In this paper, we study the problem of improving computational resource ...
