Exponentially convergent stochastic k-PCA without variance reduction

04/03/2019
by Cheng Tang, et al.

We present Matrix Krasulina, an algorithm for online k-PCA that generalizes Krasulina's classic method (Krasulina, 1969) from the vector to the matrix case. We show, both theoretically and empirically, that the algorithm naturally adapts to the low-rankness of the data and converges exponentially fast to the ground-truth principal subspace. Notably, our result suggests that, despite various recent efforts to accelerate the convergence of stochastic-gradient-based methods by adding an O(n)-time variance-reduction step, for the k-PCA problem a truly online SGD variant suffices to achieve exponential convergence on intrinsically low-rank data.
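The abstract does not reproduce the update rule, but the idea behind a Krasulina-style step is to move the current subspace estimate toward each incoming sample along the part of that sample not yet captured by the subspace (the projection residual). Below is a minimal numpy sketch of such a matrix-valued step on synthetic low-rank data; the function name krasulina_step_k, the learning-rate schedule, and the synthetic setup are illustrative assumptions, not the authors' exact algorithm or experiments.

import numpy as np

def krasulina_step_k(W, x, lr):
    """One stochastic Krasulina-style update of a k x d iterate W.

    W  : (k, d) array whose row span estimates the top-k principal subspace
    x  : (d,) data sample
    lr : learning rate

    Sketch: nudge W toward x along the component of x that lies outside the
    current row space of W, which decreases the reconstruction error
    E||x - P_W x||^2. For k = 1 this reduces to the classic vector update.
    """
    Wx = W @ x                                  # coordinates of x in the current basis, shape (k,)
    # Projection of x onto the row space of W (solve handles non-orthonormal rows)
    proj = W.T @ np.linalg.solve(W @ W.T, Wx)   # shape (d,)
    residual = x - proj                         # part of x not yet captured
    return W + lr * np.outer(Wx, residual)      # rank-one correction, shape (k, d)


# Toy usage on synthetic, approximately rank-k data (illustrative only)
rng = np.random.default_rng(0)
d, k, n = 50, 3, 20000
basis = np.linalg.qr(rng.standard_normal((d, k)))[0]              # ground-truth subspace
X = rng.standard_normal((n, k)) @ basis.T + 0.01 * rng.standard_normal((n, d))

W = rng.standard_normal((k, d))
for t, x in enumerate(X):
    W = krasulina_step_k(W, x, lr=0.5 / (1 + 0.01 * t))

# Alignment between the learned row space and the true subspace
Q = np.linalg.qr(W.T)[0]
print("subspace alignment:", np.linalg.norm(Q.T @ basis))         # near sqrt(k) when recovered

Each update touches a single sample and requires no O(n) variance-reduction pass over the data, which is the "truly online" regime the abstract refers to.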

Related research

07/05/2013 · Stochastic Optimization of PCA with Capped MSG
We study PCA as a stochastic optimization problem and propose a novel st...

09/30/2015 · Convergence of Stochastic Gradient Descent for PCA
We consider the problem of principal component analysis (PCA) in a strea...

05/17/2019 · Online Distributed Estimation of Principal Eigenspaces
Principal components analysis (PCA) is a widely used dimension reduction...

04/03/2018 · Average performance analysis of the stochastic gradient method for online PCA
This paper studies the complexity of the stochastic gradient algorithm f...

06/22/2021 · Stochastic Polyak Stepsize with a Moving Target
We propose a new stochastic gradient method that uses recorded past loss...

12/13/2017 · Exponential convergence of testing error for stochastic gradient methods
We consider binary classification problems with positive definite kernel...

09/07/2017 · Adaptive PCA for Time-Varying Data
In this paper, we present an online adaptive PCA algorithm that is able ...
