
AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients

10/15/2020
by Juntang Zhuang, et al.

Most popular optimizers for deep learning can be broadly categorized as adaptive methods (e.g., Adam) and accelerated schemes (e.g., stochastic gradient descent (SGD) with momentum). For many models, such as convolutional neural networks (CNNs), adaptive methods typically converge faster but generalize worse than SGD; for complex settings such as generative adversarial networks (GANs), adaptive methods are typically the default because of their stability. We propose AdaBelief to simultaneously achieve three goals: fast convergence as in adaptive methods, good generalization as in SGD, and training stability. The intuition behind AdaBelief is to adapt the stepsize according to the "belief" in the current gradient direction. Viewing the exponential moving average (EMA) of the noisy gradient as the prediction of the gradient at the next time step, if the observed gradient deviates greatly from the prediction, we distrust the current observation and take a small step; if the observed gradient is close to the prediction, we trust it and take a large step. We validate AdaBelief in extensive experiments, showing that it outperforms other methods with fast convergence and high accuracy on image classification and language modeling. Specifically, on ImageNet, AdaBelief achieves accuracy comparable to SGD. Furthermore, when training a GAN on CIFAR-10, AdaBelief demonstrates high stability and improves the quality of generated samples compared to a well-tuned Adam optimizer. Code is available at https://github.com/juntang-zhuang/Adabelief-Optimizer
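To make the "belief" mechanism concrete, below is a minimal NumPy sketch of the update rule the abstract describes, not the official implementation (see the linked repository for that). The class name AdaBeliefSketch and the default hyperparameters are illustrative assumptions; the key point is that the second-moment term tracks the squared deviation (g_t - m_t)^2 of the observed gradient from its EMA prediction, rather than the raw squared gradient g_t^2 as in Adam.

```python
import numpy as np

class AdaBeliefSketch:
    """Minimal sketch of the AdaBelief update rule (illustrative, not the official code)."""

    def __init__(self, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = None  # EMA of gradients: the "prediction" of the next gradient
        self.s = None  # EMA of (g_t - m_t)^2: how far observations deviate from the prediction
        self.t = 0     # step counter for bias correction

    def step(self, params, grad):
        if self.m is None:
            self.m = np.zeros_like(params)
            self.s = np.zeros_like(params)
        self.t += 1
        # EMA of the gradient, as in Adam.
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        # Key difference from Adam: accumulate the squared deviation of the observed
        # gradient from its EMA prediction, rather than the raw squared gradient.
        self.s = self.beta2 * self.s + (1 - self.beta2) * (grad - self.m) ** 2 + self.eps
        # Bias-corrected estimates, as in Adam.
        m_hat = self.m / (1 - self.beta1 ** self.t)
        s_hat = self.s / (1 - self.beta2 ** self.t)
        # Large deviation (low "belief") -> large denominator -> small step, and vice versa.
        return params - self.lr * m_hat / (np.sqrt(s_hat) + self.eps)
```

As a quick sanity check, the sketch can be run on a toy quadratic, e.g. `theta = opt.step(theta, 2 * theta)` in a loop drives `theta` toward zero. Because the denominator shrinks when successive gradients agree with their EMA prediction, AdaBelief can take larger steps than Adam in consistent, low-curvature directions while still damping steps when gradients are noisy.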


Related Research

10/12/2022
AdaNorm: Adaptive Gradient Norm Correction based Optimizer for CNNs
The stochastic gradient descent (SGD) optimizers are generally used to t...

08/02/2019
Calibrating the Learning Rate for Adaptive Gradient Methods to Improve Generalization Performance
Although adaptive gradient methods (AGMs) have fast speed in training de...

06/22/2021
Adapting Stepsizes by Momentumized Gradients Improves Optimization and Generalization
Adaptive gradient methods, such as Adam, have achieved tremendous succes...

09/07/2021
Tom: Leveraging trend of the observed gradients for faster convergence
The success of deep learning can be attributed to various factors such a...

07/19/2019
Lookahead Optimizer: k steps forward, 1 step back
The vast majority of successful deep neural networks are trained using v...

02/09/2020
On the distance between two neural networks and the stability of learning
How far apart are two neural networks? This is a foundational question i...

07/18/2021
A New Adaptive Gradient Method with Gradient Decomposition
Adaptive gradient methods, especially Adam-type methods (such as Adam, A...

Code Repositories

golem

A simple tensor library for the computational graph on CPU


adabelief-toy-examples-pytorch

numerical experiments in https://arxiv.org/pdf/2010.07468.pdf
