Divergence Results and Convergence of a Variance Reduced Version of ADAM

10/11/2022
by   Ruiqi Wang, et al.

Stochastic optimization algorithms that use exponential moving averages of past gradients, such as ADAM, RMSProp and AdaGrad, have achieved great success in many applications, especially in training deep neural networks. ADAM in particular stands out as efficient and robust. Despite its outstanding performance, ADAM has been proven to diverge on some specific problems. We revisit the divergence question and provide divergent examples under stronger conditions, such as divergence in expectation or with high probability. Under a variance reduction assumption, we show that an ADAM-type algorithm converges, which indicates that it is the variance of the gradients that causes the divergence of the original ADAM. To this end, we propose a variance reduced version of ADAM and provide a convergence analysis of the algorithm. Numerical experiments show that the proposed algorithm performs as well as ADAM. Our work suggests a new direction for fixing the convergence issues.
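To make the idea concrete, below is a minimal sketch of one way to combine ADAM's moment estimates with SVRG-style variance reduction. This is a hypothetical illustration based on the abstract, not the paper's exact algorithm: the `grad(x, i)` and `full_grad(x)` callables, the snapshot schedule, and all hyperparameter defaults are assumptions.

```python
import numpy as np

def vr_adam(grad, full_grad, x0, data_size, lr=1e-3, beta1=0.9, beta2=0.999,
            eps=1e-8, epochs=20, inner_steps=100, rng=None):
    """Sketch of an SVRG-style variance-reduced ADAM update (assumed scheme).

    grad(x, i)    -- stochastic gradient at x for sample index i (assumed API)
    full_grad(x)  -- full-batch gradient at x (assumed API)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)   # first moment estimate
    v = np.zeros_like(x)   # second moment estimate
    t = 0
    for _ in range(epochs):
        x_snap = x.copy()              # snapshot point for variance reduction
        g_snap = full_grad(x_snap)     # full gradient at the snapshot
        for _ in range(inner_steps):
            t += 1
            i = rng.integers(data_size)
            # variance-reduced gradient: stochastic gradient plus control variate
            g = grad(x, i) - grad(x_snap, i) + g_snap
            # standard ADAM moment updates driven by the corrected gradient
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g * g
            m_hat = m / (1 - beta1 ** t)   # bias correction
            v_hat = v / (1 - beta2 ** t)
            x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x
```

The intuition matches the abstract: the control variate `grad(x, i) - grad(x_snap, i) + g_snap` shrinks the variance of the stochastic gradient fed into ADAM's exponential moving averages, which is the quantity the paper identifies as the cause of divergence.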


