Accelerated Stochastic Power Iteration

07/10/2017
by Christopher De Sa, et al.

Principal component analysis (PCA) is one of the most powerful tools in machine learning. The simplest method for PCA, the power iteration, requires O(1/Δ) full-data passes to recover the principal component of a matrix with eigen-gap Δ. Lanczos, a significantly more complex method, achieves an accelerated rate of O(1/√(Δ)) passes. Modern applications, however, motivate methods that ingest only a subset of the available data, known as the stochastic setting. In the online stochastic setting, simple algorithms like Oja's iteration achieve the optimal sample complexity O(σ^2/Δ^2). Unfortunately, they are fully sequential, and they also require O(σ^2/Δ^2) iterations, far from the O(1/√(Δ)) rate of Lanczos. We propose a simple variant of the power iteration with an added momentum term that achieves both the optimal sample and iteration complexity. In the full-pass setting, standard analysis shows that momentum achieves the accelerated rate O(1/√(Δ)). We demonstrate empirically that naively applying momentum to a stochastic method does not result in acceleration. We then perform a novel, tight variance analysis that reveals the "breaking-point variance" beyond which this acceleration does not occur. By combining this insight with modern variance reduction techniques, we construct stochastic PCA algorithms, for both the online and offline settings, that achieve an accelerated iteration complexity of O(1/√(Δ)). Because our methods are embarrassingly parallel, this acceleration translates directly to wall-clock time when deployed in a parallel environment. Our approach is very general and applies to many non-convex optimization problems, which can now be accelerated using the same technique.
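
The momentum-augmented power iteration the abstract describes is compact enough to sketch. Below is a minimal NumPy sketch of the recurrence w_{t+1} = A w_t - β w_{t-1}, with both iterates rescaled by the same factor each step; the function name, the stopping rule, and the toy setup are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def momentum_power_iteration(A, beta, num_iters=200, seed=0):
    """Sketch of power iteration with a momentum term.

    Runs the recurrence w_{t+1} = A @ w_t - beta * w_{t-1}, rescaling
    w_t and w_{t-1} by the same factor each step so the linear
    recurrence is preserved while the iterates stay bounded.
    """
    rng = np.random.default_rng(seed)
    w_prev = np.zeros(A.shape[0])
    w = rng.standard_normal(A.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(num_iters):
        w_next = A @ w - beta * w_prev
        scale = np.linalg.norm(w_next)
        w, w_prev = w_next / scale, w / scale  # joint rescaling
    return w  # approximate top eigenvector of A

# Toy usage (assumed setup): a symmetric matrix with a known eigen-gap.
# beta = lambda_2**2 / 4 is the natural heavy-ball choice of momentum
# coefficient, used here as an assumption about the tuning.
eigvals = np.array([1.0, 0.8, 0.5, 0.2])
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((4, 4)))
A = Q @ np.diag(eigvals) @ Q.T
w = momentum_power_iteration(A, beta=0.8**2 / 4)
print(abs(w @ Q[:, 0]))  # should be close to 1
```

With this choice of β, the dominant component of the iterate decays like the larger root of x^2 - λ_1 x + β, while every other component is damped at rate √β, which is the source of the accelerated O(1/√(Δ)) behavior in the deterministic setting.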

Related research

ASVRG: Accelerated Proximal SVRG (10/07/2018)
This paper proposes an accelerated proximal stochastic variance reduced ...

Direct Acceleration of SAGA using Sampled Negative Momentum (06/28/2018)
Variance reduction is a simple and effective technique that accelerates ...

Katyusha: The First Direct Acceleration of Stochastic Gradient Methods (03/18/2016)
Nesterov's momentum trick is famously known for accelerating gradient de...

Multiplication-Avoiding Variant of Power Iteration with Applications (10/22/2021)
Power iteration is a fundamental algorithm in data analysis. It extracts...

Non-Sparse PCA in High Dimensions via Cone Projected Power Iteration (05/15/2020)
In this paper, we propose a cone projected power iteration algorithm to ...

Accelerating Value Iteration with Anchoring (05/26/2023)
Value Iteration (VI) is foundational to the theory and practice of moder...

Biologically Plausible Online Principal Component Analysis Without Recurrent Neural Dynamics (10/16/2018)
Artificial neural networks that learn to perform Principal Component Ana...
