Sequential convergence of AdaGrad algorithm for smooth convex optimization

11/24/2020
by Cheik Traoré, et al.

We prove that the iterates produced by either the scalar step-size variant or the coordinatewise variant of the AdaGrad algorithm are convergent sequences when applied to convex objective functions with Lipschitz gradients. The key insight is that such AdaGrad sequences satisfy a variable-metric quasi-Fejér monotonicity property, which allows us to prove convergence.
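The coordinatewise variant mentioned in the abstract scales each coordinate of the gradient by the inverse square root of the accumulated squared partial derivatives; this diagonal scaling is the variable metric appearing in the analysis. Below is a minimal sketch of that update in Python. The function names, the base step size eta, and the stabilizer eps are illustrative choices, not values taken from the paper.

```python
import numpy as np

def adagrad_coordinatewise(grad, x0, eta=1.0, eps=1e-8, iters=5000):
    """Sketch of coordinatewise AdaGrad: coordinate i uses step size
    eta / sqrt(eps + sum of squared past partial derivatives in coordinate i)."""
    x = np.asarray(x0, dtype=float).copy()
    g_accum = np.zeros_like(x)                    # per-coordinate sum of squared gradients
    for _ in range(iters):
        g = grad(x)
        g_accum += g * g                          # accumulate squared partial derivatives
        x -= eta * g / np.sqrt(eps + g_accum)     # diagonal (variable-metric) scaling
    return x

# Example: minimize the smooth convex quadratic f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_star = adagrad_coordinatewise(lambda x: A.T @ (A @ x - b), x0=np.zeros(2))
print(x_star)
```

The scalar step-size variant replaces the per-coordinate accumulator with a single scalar sum of squared gradient norms, so all coordinates share one decreasing step size.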
