Riemannian stochastic recursive momentum method for non-convex optimization

08/11/2020 ∙ by Andi Han, et al.

We propose a stochastic recursive momentum method for Riemannian non-convex optimization that achieves a near-optimal complexity of 𝒪̃(ε^-3) to find an ε-approximate solution with a single sample per iteration. That is, our method requires 𝒪(1) gradient evaluations per iteration and does not require restarting with a large-batch gradient, which is commonly used to obtain the faster rate. Extensive experimental results demonstrate the superiority of the proposed algorithm.
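The abstract's key idea is a recursive momentum (STORM-style) gradient estimator that uses one stochastic sample per iteration, combined with Riemannian retraction and vector transport. The sketch below illustrates this recursion on a toy Rayleigh-quotient problem on the unit sphere; the objective, step size, momentum parameter, noise model, and the projection-based retraction/transport are all illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

# Toy problem (assumed for illustration): minimize f(x) = x^T A x on the
# unit sphere; the minimizer is the eigenvector of the smallest eigenvalue.
rng = np.random.default_rng(0)
n = 20
A = np.diag(np.arange(1.0, n + 1))  # eigenvalues 1..20; optimum value is 1

def proj(x, v):
    # Orthogonal projection onto the tangent space of the sphere at x.
    return v - (x @ v) * x

def retract(x, v):
    # Metric-projection retraction: take a tangent step, renormalize.
    y = x + v
    return y / np.linalg.norm(y)

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
eta, beta = 0.01, 0.1       # step size and momentum weight (assumed values)
d = proj(x, 2 * A @ x)      # initial gradient estimate d_0

for t in range(3000):
    x_new = retract(x, -eta * d)
    # Recursive momentum with ONE shared sample per iteration:
    #   d_t = G(x_t; xi) + (1 - beta) * (T(d_{t-1}) - T(G(x_{t-1}; xi))),
    # where T is vector transport (here: projection onto the new tangent space).
    noise = 0.1 * rng.standard_normal(n)         # simulated sampling noise
    g_new = proj(x_new, 2 * A @ x_new + noise)
    g_old = proj(x_new, proj(x, 2 * A @ x + noise))
    d = g_new + (1 - beta) * (proj(x_new, d) - g_old)
    x = x_new

print(round(float(x @ A @ x), 2))  # Rayleigh quotient, close to the minimum
```

Because the correction term re-evaluates the gradient at the previous point with the same sample, fresh noise enters the estimate only with weight beta, which is what removes the need for periodic large-batch restarts.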
