Riemannian stochastic recursive momentum method for non-convex optimization

08/11/2020 ∙ by Andi Han, et al.

We propose a stochastic recursive momentum method for Riemannian non-convex optimization that achieves a near-optimal complexity of 𝒪̃(ϵ^-3) for finding an ϵ-approximate solution with one sample. That is, our method requires only 𝒪(1) gradient evaluations per iteration and does not require restarting with a large batch gradient, which is commonly used to obtain the faster rate. Extensive experimental results demonstrate the superiority of our proposed algorithm.
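To make the recursive-momentum idea concrete, the sketch below runs a STORM-style update on the unit sphere: the stochastic Riemannian gradient at the new point is corrected by the transported difference between the old momentum and the old gradient, both evaluated on the same single sample. The least-squares objective, the projection-based gradient and transport, the normalization retraction, and all step sizes are illustrative assumptions for this sketch, not the paper's exact algorithm or hyperparameters.

```python
# Minimal sketch of a STORM-style recursive-momentum update on the unit sphere.
# The objective, manifold operations, and constants below are assumptions made
# for illustration; they do not reproduce the authors' method or analysis.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.standard_normal((n, d))   # synthetic least-squares data (assumption)
b = rng.standard_normal(n)

def stochastic_rgrad(x, idx):
    """Riemannian gradient of 0.5*||A_idx x - b_idx||^2 on the sphere:
    Euclidean gradient projected onto the tangent space at x."""
    g = A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)
    return g - (g @ x) * x

def transport(x_new, v):
    """Vector transport by projection onto the tangent space at x_new."""
    return v - (v @ x_new) * x_new

def retract(x, v):
    """Retraction by re-normalization back onto the unit sphere."""
    y = x + v
    return y / np.linalg.norm(y)

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
idx = rng.integers(n, size=1)
m = stochastic_rgrad(x, idx)      # initialize momentum with one stochastic gradient

eta, a = 0.05, 0.1                # assumed step size and momentum parameter
for t in range(500):
    x_new = retract(x, -eta * m)
    idx = rng.integers(n, size=1) # O(1) samples per iteration, no large-batch restart
    # Recursive momentum: new gradient plus the transported correction term,
    # with both gradients evaluated on the *same* sample idx.
    m = stochastic_rgrad(x_new, idx) + (1 - a) * (
        transport(x_new, m) - transport(x_new, stochastic_rgrad(x, idx)))
    x = x_new

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```

Setting a = 1 in this sketch recovers plain Riemannian SGD with a single sample per step; the recursive correction term is what reduces the variance of the search direction without ever forming a large batch.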


Related research

∙ 07/03/2020 ∙ Variance reduction for Riemannian non-convex optimization with batch size adaptation
  Variance reduction techniques are popular in accelerating gradient desce...

∙ 06/06/2021 ∙ Minibatch and Momentum Model-based Methods for Stochastic Non-smooth Non-convex Optimization
  Stochastic model-based methods have received increasing attention lately...

∙ 05/15/2020 ∙ Momentum with Variance Reduction for Nonconvex Composition Optimization
  Composition optimization is widely-applied in nonconvex machine learning...

∙ 11/01/2021 ∙ STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization
  In this work we investigate stochastic non-convex optimization problems ...

∙ 02/12/2018 ∙ Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization
  The problem of minimizing sum-of-nonconvex functions (i.e., convex funct...

∙ 07/18/2018 ∙ Convergence guarantees for RMSProp and ADAM in non-convex optimization and their comparison to Nesterov acceleration on autoencoders
  RMSProp and ADAM continue to be extremely popular algorithms for trainin...