The Fast Convergence of Incremental PCA

01/15/2015
by Akshay Balsubramani, et al.

We consider a setting in which we see samples in R^d drawn i.i.d. from some distribution with mean zero and unknown covariance A. We wish to compute the top eigenvector of A incrementally, with an algorithm that maintains an estimate of the top eigenvector in O(d) space and adjusts that estimate with each new data point that arrives. Two classical schemes of this kind are due to Krasulina (1969) and Oja (1983). We give finite-sample convergence rates for both.
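The abstract does not spell out the two update rules; the sketch below uses their standard textbook forms, with a synthetic zero-mean Gaussian stream, a diagonal covariance, and a 1/t step-size schedule as illustrative assumptions rather than choices made in the paper.

```python
import numpy as np

def oja_step(v, x, gamma):
    # Oja (1983): move along x scaled by (x . v), then project back onto the unit sphere.
    v = v + gamma * x * (x @ v)
    return v / np.linalg.norm(v)

def krasulina_step(v, x, gamma):
    # Krasulina (1969): stochastic gradient step on the Rayleigh quotient v^T A v / v^T v.
    xv = x @ v
    return v + gamma * (x * xv - (xv ** 2 / (v @ v)) * v)

# Illustrative run on synthetic zero-mean data (assumed setup, not from the paper).
rng = np.random.default_rng(0)
d, n, c = 10, 20000, 5.0
A = np.diag(np.linspace(1.0, 2.0, d))        # covariance whose top eigenvector is the last basis vector
X = rng.multivariate_normal(np.zeros(d), A, size=n)

v0 = rng.standard_normal(d)
v_oja, v_kra = v0 / np.linalg.norm(v0), v0.copy()
for t, x in enumerate(X, start=1):
    gamma = c / t                            # assumed step-size schedule gamma_t = c / t
    v_oja = oja_step(v_oja, x, gamma)
    v_kra = krasulina_step(v_kra, x, gamma)

top = np.eye(d)[-1]                          # true top eigenvector of A
print("Oja alignment:      ", abs(v_oja @ top))
print("Krasulina alignment:", abs(v_kra @ top) / np.linalg.norm(v_kra))
```

The two updates differ mainly in how they control the norm of the estimate: Oja's rule renormalizes after every step, while Krasulina's rule subtracts the Rayleigh-quotient component, making each step a stochastic gradient step on the Rayleigh quotient.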


Related research

04/04/2017  Finite Sample Analyses for TD(0) with Function Approximation
TD(0) is one of the most commonly used algorithms in reinforcement learn...

08/28/2018  Convergence of Krasulina Scheme
Principal component analysis (PCA) is one of the most commonly used stat...

05/16/2019  Basis Expansions for Functional Snippets
Estimation of mean and covariance functions is fundamental for functiona...

06/28/2023  Finite-Sample Symmetric Mean Estimation with Fisher Information Rate
The mean of an unknown variance-σ^2 distribution f can be estimated from...

07/01/2014  SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
In this work we introduce a new optimisation method called SAGA in the s...

02/05/2021  Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency
We offer a theoretical characterization of off-policy evaluation (OPE) i...

04/08/2018  Pointwise adaptation via stagewise aggregation of local estimates for multiclass classification
We consider a problem of multiclass classification, where the training s...
