An improvement of the convergence proof of the ADAM-Optimizer

04/27/2018
by Sebastian Bock et al.

A common way to train neural networks is backpropagation. This algorithm relies on a gradient descent method, which needs an adaptive step size. In the area of neural networks, the ADAM-Optimizer is one of the most popular adaptive step size methods; it was introduced by Kingma and Ba (2015). The 5865 citations in only three years additionally underline the importance of that paper. We discovered that the given convergence proof of the optimizer contains some mistakes which render the proof incorrect. In this paper we give an improved convergence proof of the ADAM-Optimizer.
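
For reference, the standard Adam update from Kingma and Ba (2015), whose convergence proof this work corrects, can be sketched as follows. This is a minimal Python/NumPy illustration: the function name adam_step and the quadratic toy objective are our own illustrative choices, not part of the paper; the hyperparameter defaults are those suggested by Kingma and Ba.

import numpy as np

def adam_step(theta, m, v, grad, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and its
    square, bias correction, and the adaptive parameter step."""
    m = beta1 * m + (1 - beta1) * grad           # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second (uncentered) moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                 # bias-corrected second moment
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage on f(theta) = ||theta||^2, whose gradient is 2 * theta (illustrative only).
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 1001):                         # t starts at 1 so bias correction is defined
    grad = 2.0 * theta
    theta, m, v = adam_step(theta, m, v, grad, t)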

Related research

04/07/2019  On the convergence proof of AMSGrad and a new version
The adaptive moment estimation algorithm Adam (Kingma and Ba, ICLR 2015)...

06/30/2020  Guarantees for Tuning the Step Size using a Learning-to-Learn Approach
Learning-to-learn (using optimization algorithms to learn a new optimize...

03/05/2020  On the Convergence of Adam and Adagrad
We provide a simple proof of the convergence of the optimization algorit...

07/24/2023  An Isometric Stochastic Optimizer
The Adam optimizer is the standard choice in deep learning applications....

07/31/2023  Lookbehind Optimizer: k steps back, 1 step forward
The Lookahead optimizer improves the training stability of deep neural n...

07/02/2023  Bidirectional Looking with A Novel Double Exponential Moving Average to Adaptive and Non-adaptive Momentum Optimizers
Optimizer is an essential component for the success of deep learning, wh...

03/12/2022  Optimizer Amalgamation
Selecting an appropriate optimizer for a given problem is of major inter...
