SAdam: A Variant of Adam for Strongly Convex Functions

05/08/2019
by Guanghui Wang, et al.

The Adam algorithm has become extremely popular for large-scale machine learning. Under the convexity condition, it has been proved to enjoy a data-dependent O(√T) regret bound, where T is the time horizon. However, whether strong convexity can be utilized to further improve the performance remains an open problem. In this paper, we give an affirmative answer by developing a variant of Adam (referred to as SAdam) which achieves a data-dependent O(log T) regret bound for strongly convex functions. The essential idea is to maintain a faster-decaying yet controlled step size for exploiting strong convexity. In addition, under a special configuration of hyperparameters, our SAdam reduces to SC-RMSprop, a recently proposed variant of RMSprop for strongly convex functions, for which we provide the first data-dependent logarithmic regret bound. Empirical results on optimizing strongly convex functions and training deep networks demonstrate the effectiveness of our method.
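To make the "faster-decaying yet controlled step size" concrete, below is a minimal NumPy sketch of an SAdam-style update. It is an illustration under stated assumptions, not the paper's exact algorithm: the O(1/t) step size, a time-varying second-moment decay of the form 1 − γ/t, and dividing by the second moment without a square root are the ingredients the abstract alludes to, while the hyperparameter defaults (alpha, beta1, gamma, delta) and the exact damping term delta/t are illustrative choices.

import numpy as np

def sadam_sketch(grad_fn, x0, T, alpha=1.0, beta1=0.9, gamma=0.9, delta=1e-2):
    """Hedged sketch of an SAdam-style update (after Wang et al., 2019).

    Compared with Adam, the step size decays as alpha/t rather than
    alpha/sqrt(t), and the second moment enters the denominator without
    a square root, so the effective step size shrinks faster -- the
    mechanism for exploiting strong convexity described in the abstract.
    Projection onto a feasible set is omitted for simplicity.
    """
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)   # first moment (momentum)
    v = np.zeros_like(x)   # second moment
    for t in range(1, T + 1):
        g = grad_fn(x)
        beta2_t = 1.0 - gamma / t                  # time-varying decay (assumed form)
        m = beta1 * m + (1.0 - beta1) * g
        v = beta2_t * v + (1.0 - beta2_t) * g * g
        # O(1/t) step size and no square root on v; delta/t is an
        # illustrative damping term that keeps the step under control.
        x = x - (alpha / t) * m / (v + delta / t)
    return x

For instance, on the strongly convex quadratic f(x) = ½‖x‖² (so grad_fn is the identity), sadam_sketch(lambda x: x, np.ones(5), T=1000) drives the iterate toward the minimizer at the origin.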


Related research

- 06/17/2017 · Variants of RMSProp and Adagrad with Logarithmic Regret Bounds
  Adaptive gradient methods have recently become very popular, in particul...

- 04/28/2021 · FastAdaBelief: Improving Convergence Rate for Belief-based Adaptive Optimizer by Strong Convexity
  The AdaBelief algorithm demonstrates superior generalization ability to ...

- 03/21/2021 · Online Strongly Convex Optimization with Unknown Delays
  We investigate the problem of online convex optimization with unknown de...

- 10/16/2020 · Projection-free Online Learning over Strongly Convex Sets
  To efficiently solve online problems with complicated constraints, proje...

- 09/22/2020 · Strongly Convex Divergences
  We consider a sub-class of the f-divergences satisfying a stronger conve...

- 10/15/2020 · Revisiting Projection-free Online Learning: the Strongly Convex Case
  Projection-free optimization algorithms, which are mostly based on the c...

- 05/08/2021 · A Simple yet Universal Strategy for Online Convex Optimization
  Recently, several universal methods have been proposed for online convex...
