Unintended Effects on Adaptive Learning Rate for Training Neural Network with Output Scale Change

03/05/2021
by Ryuichi Kanoh, et al.

A multiplicative constant scaling factor is often applied to the model output to adjust the dynamics of neural network parameters. This scaling has served as one of the key interventions in empirical studies of lazy and active training behavior. However, we show that the combination of such output scaling and a commonly used adaptive learning rate optimizer strongly affects the training behavior of the neural network. This is problematic because it can cause unintended behavior and lead to misinterpretation of experimental results. Specifically, for some scaling settings, the effect of the adaptive learning rate disappears or is strongly influenced by the scaling factor. To avoid these unintended effects, we present a modification of the optimization algorithm and demonstrate remarkable differences between adaptive learning rate optimization and simple gradient descent, especially with a small (<1.0) scaling factor.
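To illustrate the interaction described in the abstract, below is a minimal NumPy sketch (not the paper's experimental setup) of why an Adam-style update can mask an output scaling factor: under the simplifying assumption that scaling the output by alpha roughly rescales the parameter gradients by alpha, the bias-corrected first Adam step m_hat / sqrt(v_hat) is invariant to that rescaling, while a plain SGD step grows linearly with it. The `adam_step` helper, the gradient vector, and the chosen alpha values are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def adam_step(grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Single Adam update; returns (parameter delta, new m, new v)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    return -lr * m_hat / (np.sqrt(v_hat) + eps), m, v

rng = np.random.default_rng(0)
base_grad = rng.normal(size=5)         # stand-in gradient of the unscaled model

for alpha in (0.1, 1.0, 10.0):         # hypothetical output scale factors
    grad = alpha * base_grad           # assumption: output scaling rescales the gradients
    sgd_delta = -1e-3 * grad
    adam_delta, _, _ = adam_step(grad, np.zeros(5), np.zeros(5), t=1)
    print(f"alpha={alpha:5.1f}  |SGD step|={np.linalg.norm(sgd_delta):.3e}"
          f"  |Adam step|={np.linalg.norm(adam_delta):.3e}")
```

Running this prints essentially identical Adam step norms for every alpha, while the SGD step norm changes by orders of magnitude, which is the sense in which an output-scaling intervention can be silently neutralized by an adaptive learning rate optimizer.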

