Riemannian stochastic quasi-Newton algorithm with variance reduction and its convergence analysis

03/15/2017
by Hiroyuki Kasai, et al.

Stochastic variance reduction algorithms have recently become popular for minimizing the average of a large but finite number of loss functions. The present paper proposes a Riemannian stochastic quasi-Newton algorithm with variance reduction (R-SQN-VR). Because gradients evaluated at different points on a manifold lie in different tangent spaces, they cannot be averaged, added, or subtracted directly; these key challenges are addressed with the notions of retraction and vector transport. We present convergence analyses of R-SQN-VR on both non-convex and retraction-convex functions under retraction and vector transport operators. The proposed algorithm is evaluated on Karcher mean computation on the symmetric positive-definite manifold and on low-rank matrix completion on the Grassmann manifold. In all cases, the proposed algorithm outperforms the state-of-the-art Riemannian batch and stochastic gradient algorithms.
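To make the tangent-space issue concrete, the sketch below illustrates only the variance-reduction component on which R-SQN-VR builds, using the unit sphere as a simple example manifold. It is a minimal, hypothetical sketch rather than the authors' implementation: the names sphere_retraction, sphere_transport, grad_f_i, and rsvrg_step are illustrative, and the quasi-Newton (L-BFGS-like) Hessian approximation of R-SQN-VR is omitted.

```python
# Minimal sketch (not the authors' code) of an R-SVRG-style variance-reduced
# step on the unit sphere. Gradients computed at the snapshot point live in a
# different tangent space than the current iterate, so the correction term
# must be vector-transported before it can be subtracted.
import numpy as np

def sphere_projection(x, v):
    """Project an ambient vector v onto the tangent space of the sphere at x."""
    return v - np.dot(x, v) * x

def sphere_retraction(x, v):
    """Retract a tangent vector v at x back onto the unit sphere."""
    y = x + v
    return y / np.linalg.norm(y)

def sphere_transport(x, y, v):
    """Transport a tangent vector v from T_x M to T_y M by projection."""
    return sphere_projection(y, v)

def rsvrg_step(x, x_tilde, full_grad_tilde, grad_f_i, step_size):
    """One variance-reduced stochastic gradient step on the sphere.

    x               : current iterate (unit vector)
    x_tilde         : snapshot point of the outer loop
    full_grad_tilde : full Riemannian gradient at x_tilde
    grad_f_i        : callable returning the Riemannian gradient of one
                      sampled loss f_i at a given point
    """
    # Correction computed at the snapshot, transported to T_x M.
    correction = sphere_transport(
        x_tilde, x, grad_f_i(x_tilde) - full_grad_tilde
    )
    # Variance-reduced stochastic gradient in T_x M.
    xi = grad_f_i(x) - correction
    # Move against xi and map back onto the manifold.
    return sphere_retraction(x, -step_size * xi)
```

On the sphere, vector transport by projection and the normalization retraction are cheap; on the symmetric positive-definite and Grassmann manifolds used in the paper's experiments, manifold-specific retraction and transport operators play the same role.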


Related research

02/18/2017: Riemannian stochastic variance reduced gradient
Stochastic variance reduction algorithms have recently become popular fo...

04/06/2017: Accelerated Stochastic Quasi-Newton Optimization on Riemann Manifolds
We propose an L-BFGS optimization algorithm on Riemannian manifolds usin...

05/24/2016: Riemannian stochastic variance reduced gradient on Grassmann manifold
Stochastic variance reduction algorithms have recently become popular fo...

06/28/2021: Distributed stochastic gradient tracking algorithm with variance reduction for non-convex optimization
This paper proposes a distributed stochastic algorithm with variance red...

05/26/2016: Stochastic Variance Reduced Riemannian Eigensolver
We study the stochastic Riemannian gradient algorithm for matrix eigen-d...

08/14/2020: On the globalization of Riemannian Newton method
In the present paper, in order to find a singularity of a vector field de...

02/01/2023: Riemannian Stochastic Approximation for Minimizing Tame Nonsmooth Objective Functions
In many learning applications, the parameters in a model are structurall...
