The Fast Convergence of Incremental PCA

01/15/2015 · Akshay Balsubramani et al. · University of California, San Diego

We consider a situation in which we see samples in R^d drawn i.i.d. from some distribution with mean zero and unknown covariance A. We wish to compute the top eigenvector of A in an incremental fashion - with an algorithm that maintains an estimate of the top eigenvector in O(d) space, and incrementally adjusts the estimate with each new data point that arrives. Two classical such schemes are due to Krasulina (1969) and Oja (1983). We give finite-sample convergence rates for both.


1 Introduction

Principal component analysis (PCA) is a popular form of dimensionality reduction that projects a data set on the top eigenvector(s) of its covariance matrix. The default method for computing these eigenvectors uses $O(d^2)$ space for data in $\mathbb{R}^d$, which can be prohibitive in practice. It is therefore of interest to study incremental schemes that take one data point at a time, updating their estimates of the desired eigenvectors with each new point. For computing one eigenvector, such methods use $O(d)$ space.

For the case of the top eigenvector, this problem has long been studied, and two elegant solutions were obtained by Krasulina [7] and Oja [9]. Their methods are closely related. At time $n-1$, they have some estimate $V_{n-1} \in \mathbb{R}^d$ of the top eigenvector. Upon seeing the next data point, $X_n$, they update this estimate as follows:

$V_n = V_{n-1} + \gamma_n \left( X_n X_n^T V_{n-1} - \frac{(X_n^T V_{n-1})^2}{\|V_{n-1}\|^2}\, V_{n-1} \right)$   (Krasulina)

$V_n = \dfrac{V_{n-1} + \gamma_n X_n X_n^T V_{n-1}}{\|V_{n-1} + \gamma_n X_n X_n^T V_{n-1}\|}$   (Oja)

Here $\gamma_n$ is a “learning rate” that is typically proportional to $1/n$.
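For concreteness, here is a minimal numpy sketch of the two updates as written above; the function names are our own, and the choice of the step size $\gamma_n$ is left to the caller.

```python
import numpy as np

def krasulina_step(v, x, gamma):
    """One Krasulina update: v <- v + gamma * (x x^T v - ((x.v)^2 / ||v||^2) v)."""
    xv = float(x @ v)
    return v + gamma * (xv * x - (xv ** 2 / float(v @ v)) * v)

def oja_step(v, x, gamma):
    """One Oja update: v <- (v + gamma * x x^T v) / ||v + gamma * x x^T v||."""
    w = v + gamma * float(x @ v) * x
    return w / np.linalg.norm(w)
```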

Suppose the points $X_1, X_2, \ldots$ are drawn i.i.d. from a distribution on $\mathbb{R}^d$ with mean zero and covariance matrix $A$. The original papers proved that these estimators converge almost surely to the top eigenvector of $A$ (call it $v^*$) under mild conditions:

  • $\sum_n \gamma_n = \infty$ while $\sum_n \gamma_n^2 < \infty$.

  • If $\lambda_1, \lambda_2$ denote the top two eigenvalues of $A$, then $\lambda_1 > \lambda_2$.

  • $\mathbb{E}\|X_n\|^k < \infty$ for some suitable $k$ (for instance, $k = 8$ works).

There are also other incremental estimators for which convergence has not been established; see, for instance, [12] and [16].

In this paper, we analyze the rate of convergence of the Krasulina and Oja estimators. They can be treated in a common framework, as stochastic approximation algorithms for maximizing the Rayleigh quotient

$G(v) = \frac{v^T A v}{v^T v}.$

The maximum value of this function is $\lambda_1$, and is achieved at $v^*$ (or any nonzero multiple thereof). The gradient is

$\nabla G(v) = \frac{2}{\|v\|^2} \left( A - \frac{v^T A v}{v^T v}\, I \right) v.$

Since the conditional expectation of $X_n X_n^T$ given the past is $A$, we see that Krasulina’s method is stochastic gradient descent. The Oja procedure is closely related: as pointed out in [10], the two are identical to within second-order terms.

Recently, there has been a lot of work on rates of convergence for stochastic gradient descent (for instance, [11]), but this has typically been limited to convex cost functions. These results do not apply to the non-convex Rayleigh quotient, except at the very end, when the system is near convergence. Most of our analysis focuses on the buildup to this finale.

We measure the quality of the solution $V_n$ at time $n$ using the potential function

$\Psi_n = 1 - \frac{(V_n \cdot v^*)^2}{\|V_n\|^2},$

where $v^*$ is taken to have unit norm. This quantity lies in the range $[0,1]$, and we are interested in the rate at which it approaches zero. The result, in brief, is that $\mathbb{E}[\Psi_n] = O(1/n)$, under conditions that are similar to those above, but stronger. In particular, we require that $\gamma_n$ be proportional to $1/n$ and that $\|X_n\|$ be bounded.
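This potential is easy to compute in simulations where the true top eigenvector is known; a small sketch follows, with a helper name of our own choosing.

```python
import numpy as np

def potential(V, v_star):
    """Psi = 1 - (V . v*)^2 / ||V||^2, assuming v_star is a unit vector.

    Psi is 0 when V is aligned with the top eigenvector and 1 when orthogonal to it.
    """
    return 1.0 - float(V @ v_star) ** 2 / float(V @ V)
```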

1.1 The algorithm

We analyze the following procedure.

  1. Set starting time. Set the clock to time $n_0$.

  2. Initialization. Initialize $V_{n_0}$ uniformly at random from the unit sphere in $\mathbb{R}^d$.

  3. For time $n = n_0 + 1, n_0 + 2, \ldots$:

    1. Receive the next data point, $X_n$.

    2. Update step. Perform either the Krasulina or Oja update, with $\gamma_n = c/n$.

The first step is similar to using a learning rate of the form $\gamma_n = c/(n + n_0)$, as is often done in stochastic gradient descent implementations [1]. We have adopted it because the initial sequence of updates is highly noisy: during this phase $V_n$ moves around wildly, and cannot be shown to make progress. It becomes better behaved when the step size $\gamma_n$ becomes smaller, that is to say when $n$ gets larger than some suitable $n_0$. By setting the start time to $n_0$, we can simply fast-forward the analysis to this moment.
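The following sketch puts the steps above together for the Krasulina update; the default values of c and n0 are illustrative placeholders, not recommendations from the paper.

```python
import numpy as np

def incremental_top_eigenvector(stream, d, c=2.0, n0=100, rng=None):
    """Run the procedure above on an iterable `stream` of points in R^d."""
    rng = np.random.default_rng() if rng is None else rng
    # Step 2: initialize V_{n0} uniformly at random from the unit sphere in R^d.
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    n = n0                                   # Step 1: set the clock to time n0.
    for x in stream:                         # Step 3: process points one at a time.
        n += 1
        gamma = c / n                        # step size gamma_n = c / n
        xv = float(x @ v)                    # Krasulina update
        v = v + gamma * (xv * x - (xv ** 2 / float(v @ v)) * v)
    return v / np.linalg.norm(v)
```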

1.2 Initialization

One possible initialization is to set $V_{n_0}$ to the first data point that arrives, or to the average of a few data points. This seems sensible enough, but can fail dramatically in some situations.

Here is an example. Suppose $X$ can take on just $2d$ possible values: $\pm \sigma e_1, \pm e_2, \ldots, \pm e_d$, where the $e_i$ are coordinate directions and $\sigma$ is a small constant. Suppose further that the distribution of $X$ is specified by a single positive number $p$:

$\Pr(X = \sigma e_1) = \Pr(X = -\sigma e_1) = \frac{p}{2}, \qquad \Pr(X = e_i) = \Pr(X = -e_i) = \frac{1-p}{2(d-1)} \ \text{ for } i > 1.$

Then $X$ has mean zero and covariance $\mathrm{diag}\bigl(\sigma^2 p, \tfrac{1-p}{d-1}, \ldots, \tfrac{1-p}{d-1}\bigr)$. We will assume that $\sigma$ and $p$ are chosen so that $\sigma^2 p > \frac{1-p}{d-1}$; in our notation, the top eigenvalues are then $\lambda_1 = \sigma^2 p$ and $\lambda_2 = \frac{1-p}{d-1}$, and the target vector is $v^* = e_1$.

If $V_n$ is ever orthogonal to some $e_i$, it will remain so forever. This is because both the Krasulina and Oja updates have the following properties: the new estimate $V_n$ lies in the span of $V_{n-1}$ and $X_n$, and if $X_n^T V_{n-1} = 0$, then $V_n$ is simply a scalar multiple of $V_{n-1}$.

If $V_{n_0}$ is initialized to a random data point, then with probability $1 - p$, it will be assigned to some $\pm e_i$ with $i > 1$, and will converge to a multiple of that same $e_i$ rather than to $e_1$. Likewise, if it is initialized to the average of a few data points, then with constant probability it will be orthogonal to $e_1$ and remain so always.

Setting $V_{n_0}$ to a random unit vector avoids this problem. However, there are doubtless cases, for instance when the data has intrinsic dimension much lower than $d$, in which a better initializer is possible.

1.3 The setting of the learning rate

In order to get a sense of what rates of convergence we might expect, let’s return to the example of a random vector $X$ with $2d$ possible values. In the Oja update $V_n \propto (I + \gamma_n X_n X_n^T)\, V_{n-1}$, we can ignore normalization if we are merely interested in the progress of the potential function $\Psi_n$. Since the $X_n$ correspond to coordinate directions, each update changes just one coordinate of $V_n$:

$X_n = \pm \sigma e_1 \ \Rightarrow\ (V_n)_1 = (1 + \gamma_n \sigma^2)\,(V_{n-1})_1, \qquad X_n = \pm e_i \ \Rightarrow\ (V_n)_i = (1 + \gamma_n)\,(V_{n-1})_i.$

Recall that we initialize $V_{n_0}$ to a random vector from the unit sphere. For simplicity, let’s just suppose that $n_0 = 0$ and that this initial value is the all-ones vector (again, we don’t have to worry about normalization). On each iteration the first coordinate is updated with probability exactly $p$, and thus

$\mathbb{E}[(V_n)_1] = \prod_{t=1}^{n} \left( 1 + \frac{c\, \sigma^2 p}{t} \right) \approx n^{c \lambda_1},$

since $\lambda_1 = \sigma^2 p$. Likewise, for $i > 1$,

$\mathbb{E}[(V_n)_i] = \prod_{t=1}^{n} \left( 1 + \frac{c\,(1-p)}{(d-1)\, t} \right) \approx n^{c \lambda_2}.$

If all goes according to expectation, then at time $n$,

$\Psi_n \approx \frac{(d-1)\, n^{2 c \lambda_2}}{n^{2 c \lambda_1} + (d-1)\, n^{2 c \lambda_2}}.$

(This is all very rough, but can be made precise by obtaining concentration bounds for the $(V_n)_i$.) From this, we can see that it is not possible to achieve a rate $\mathbb{E}[\Psi_n] = O(1/n)$ unless $c \geq 1/(2(\lambda_1 - \lambda_2))$. Therefore, we will assume this when stating our final results, although most of our analysis is in terms of general $c$. An interesting practical question, to which we do not have an answer, is how one would empirically set $c$ without prior knowledge of the eigenvalue gap.
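As a sanity check on this calculation, one can simulate a distribution of this form and vary $c$. A rough sketch follows; the parameter values (d, sigma, p, the tested values of c) and the helper names are our own illustrative choices.

```python
import numpy as np

def sample_toy(d, sigma, p, rng):
    """Draw one point from the coordinate-aligned example distribution."""
    x = np.zeros(d)
    sign = rng.choice([-1.0, 1.0])
    if rng.random() < p:
        x[0] = sign * sigma              # +/- sigma * e_1 with probability p
    else:
        x[rng.integers(1, d)] = sign     # +/- e_i with i > 1, uniformly otherwise
    return x

def run_oja(c, d=20, sigma=0.5, p=0.9, n_steps=100_000, seed=0):
    """Return the potential Psi_n after n_steps Oja updates with gamma_n = c / n."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for n in range(1, n_steps + 1):
        x = sample_toy(d, sigma, p, rng)
        w = v + (c / n) * float(x @ v) * x
        v = w / np.linalg.norm(w)
    return 1.0 - v[0] ** 2               # v* = e_1 in this example

# lambda1 = sigma^2 p and lambda2 = (1 - p) / (d - 1); larger c should converge faster.
for c in (1.0, 5.0, 20.0):
    print(c, run_oja(c))
```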

1.4 Nested sample spaces

For $n \geq n_0$, let $\mathcal{F}_n$ denote the sigma-field of all outcomes up to and including time $n$: $\mathcal{F}_n = \sigma(V_{n_0}, X_{n_0+1}, \ldots, X_n)$. We start by showing that the potential decreases in expectation at each step:

$\mathbb{E}[\Psi_n \mid \mathcal{F}_{n-1}] \leq \Psi_{n-1} \bigl( 1 - 2 \gamma_n (\lambda_1 - \lambda_2)(1 - \Psi_{n-1}) \bigr) + O(\gamma_n^2).$

Initially $\Psi_n$ is likely to be close to 1. For instance, if the initial $V_{n_0}$ is picked uniformly at random from the surface of the unit sphere in $\mathbb{R}^d$, then we’d expect $\Psi_{n_0} \approx 1 - 1/d$. This means that the initial rate of decrease is very small, because of the $(1 - \Psi_{n-1})$ term.

To deal with this, we divide the analysis into epochs: the first takes $\Psi_n$ from $1 - \epsilon_0$ to $1 - 2\epsilon_0$, the second from $1 - 2\epsilon_0$ to $1 - 4\epsilon_0$, and so on until $\Psi_n$ finally drops below $1/2$. We use martingale large deviation bounds to bound the length of each epoch, and also to argue that $\Psi_n$ does not regress. In particular, we establish a sequence of times $n_0 < n_1 < n_2 < \cdots$ such that (with high probability)

$\Psi_n \leq 1 - \epsilon_j \ \text{ for all } n \geq n_j, \ \text{ where } \epsilon_j = 2^j \epsilon_0. \qquad\qquad (1)$

The analysis of each epoch uses martingale arguments, but at the same time, assumes that $\Psi_n$ remains bounded above. Combining the two requires a careful specification of the sample space at each step. Let $\Omega$ denote the sample space of all realizations $(V_{n_0}, X_{n_0+1}, X_{n_0+2}, \ldots)$, and $\Pr$ the probability distribution on these sequences. For any $0 < \delta < 1$, we define a nested sequence of spaces $\Omega \supseteq \Omega'_{n_0} \supseteq \Omega'_{n_0+1} \supseteq \cdots$ such that each $\Omega'_n$ is $\mathcal{F}_n$-measurable, has probability at least $1 - \delta$, and moreover consists exclusively of realizations that satisfy the constraints (1) up to and including time $n$. We can then build martingale arguments by restricting attention to $\Omega'_{n-1}$ when computing the conditional expectations of quantities at time $n$.

1.5 Main result

We make the following assumptions:

  1. The $X_n \in \mathbb{R}^d$ are i.i.d. with mean zero and covariance $A$.

  2. There is a constant $B$ such that $\|X_n\|^2 \leq B$.

  3. The eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_d$ of $A$ satisfy $\lambda_1 > \lambda_2$.

  4. The step sizes are of the form $\gamma_n = c/n$.

Under these conditions, we get the following rate of convergence for the Krasulina update.

Theorem 1.1.

There are absolute constants for which the following holds. Pick any $0 < \delta < 1$, and any $c > 1/(2(\lambda_1 - \lambda_2))$. Set the step sizes to $\gamma_n = c/n$, and set the starting time $n_0$ sufficiently large, as a function of $d$, $B$, $c$, and $\delta$. Then there is a nested sequence of subsets $\Omega'_{n_0} \supseteq \Omega'_{n_0+1} \supseteq \cdots$ of the sample space, each of probability at least $1 - \delta$, such that for any $n \geq n_0$, the restricted expectation $\mathbb{E}_n[\Psi_n]$ is at most a constant (depending on $d$, $B$, $\lambda_1 - \lambda_2$, $c$, and $\delta$) times $1/n$, where $\mathbb{E}_n$ denotes expectation restricted to $\Omega'_n$.

In other words, for fixed problem parameters, this bound is of the form $O(1/n)$.

The result above also holds for the Oja update up to absolute constants.

We also remark that a small modification to the final step in the proof of the above yields a rate of $O(n^{-2c(\lambda_1 - \lambda_2)})$ for $c < 1/(2(\lambda_1 - \lambda_2))$, with an identical definition of $n_0$. The details are in the proof, in Appendix D.2.

1.6 Related work

There is an extensive line of work analyzing PCA from the statistical perspective, in which the convergence of various estimators is characterized under certain conditions, including generative models of the data [5] and various assumptions on the covariance matrix spectrum [14, 4] and eigenvalue spacing [17]. Such works do provide finite-sample guarantees, but they apply only to the batch case and/or are computationally intensive, rather than considering an efficient incremental algorithm.

Among incremental algorithms, the work of Warmuth and Kuzmin [15] describes and analyzes worst-case online PCA, using an experts-setting algorithm with a super-quadratic per-iteration cost. More efficient general-purpose incremental PCA algorithms have lacked finite-sample analyses [2]. There have been recent attempts to remedy this situation by relaxing the nonconvexity inherent in the problem [3] or making generative assumptions [8]. The present paper directly analyzes the oldest known incremental PCA algorithms under relatively mild assumptions.

2 Outline of proof

We now sketch the proof of Theorem 1.1; almost all the details are relegated to the appendix.

Recall that for $n \geq n_0$, we take $\mathcal{F}_n$ to be the sigma-field of all outcomes up to and including time $n$, that is, $\mathcal{F}_n = \sigma(V_{n_0}, X_{n_0+1}, \ldots, X_n)$.

An additional piece of notation: we will use $v_n$ to denote $V_n / \|V_n\|$, the unit vector in the direction of $V_n$. Thus, for instance, the Rayleigh quotient can be written $G(v_n) = v_n^T A v_n$.

2.1 Expected per-step change in potential

We first bound the expected improvement in $\Psi_n$ in each step of the Krasulina or Oja algorithms.

Theorem 2.1.

For any $n > n_0$, we can write $\Psi_n \leq \Psi_{n-1} - Z_n + \beta_n$, where $\beta_n$ is a non-stochastic quantity with $\beta_n = O(\gamma_n^2)$,

and where $Z_n$ is an $\mathcal{F}_n$-measurable random variable with the following properties:

  • $\mathbb{E}[Z_n \mid \mathcal{F}_{n-1}] \geq 2 \gamma_n (\lambda_1 - \lambda_2)\, \Psi_{n-1} (1 - \Psi_{n-1})$.

  • $|Z_n| = O(\gamma_n)$.

The theorem follows from Lemmas A.4 and A.5 in the appendix. Its characterization of the two estimators is almost identical, and for simplicity we will henceforth deal only with Krasulina’s estimator. All the subsequent results hold also for Oja’s method, up to constants.

2.2 A large deviation bound for $\Psi_n$

We know from Theorem 2.1 that $\Psi_n \leq \Psi_{n-1} - Z_n + \beta_n$, where $\beta_n$ is non-stochastic and $Z_n$ is a quantity of positive expected value. Thus, in expectation, and modulo a small additive term, $\Psi_n$ decreases monotonically. However, the amount of decrease at the $n$th time step can be arbitrarily small when $\Psi_{n-1}$ is close to 1. Thus, we need to show that $\Psi_n$ is eventually bounded away from 1, i.e. there exist some $\epsilon_0 > 0$ and some time $n_0$ such that for any $n \geq n_0$, we have $\Psi_n \leq 1 - \epsilon_0$.

Recall from the algorithm specification that we advance the clock so as to skip the pre-$n_0$ phase. Given this, what can we expect $\Psi_{n_0}$ to be? If the initial estimate $V_{n_0}$ is a random unit vector, then $\mathbb{E}[\Psi_{n_0}] = 1 - 1/d$ and, roughly speaking, $\Psi_{n_0}$ is unlikely to be very much larger than this. If $n_0$ is sufficiently large, then $\Psi_n$ may subsequently increase a little bit, but not by very much. In this section, we establish the following bound.

Theorem 2.2.

Suppose the initial estimate $V_{n_0}$ is chosen uniformly at random from the surface of the unit sphere in $\mathbb{R}^d$. Assume also that the step sizes are of the form $\gamma_n = c/n$, for some constant $c > 0$. Then for any $0 < \delta < 1$, if $n_0$ is sufficiently large (as a function of $c$, $B$, and $\delta$), then with probability at least $1 - \delta$ we have $\Psi_n \leq 1 - \epsilon_0$ for all $n \geq n_0$, for an $\epsilon_0$ depending on $d$ and $\delta$.

To prove this, we start with a simple recurrence for the moment-generating function of $\Psi_n$.

Lemma 2.3.

Consider a filtration $(\mathcal{F}_n)$ and a sequence of random variables $(Y_n)$, with each $Y_n$ being $\mathcal{F}_n$-measurable, such that there are two sequences of nonnegative constants, $(a_n)$ and $(b_n)$, for which:

  • $\mathbb{E}[Y_n \mid \mathcal{F}_{n-1}] \leq a_n Y_{n-1}$.

  • Each $Y_n$ takes values in an interval of length $b_n$.

Then for any $t > 0$, we have $\mathbb{E}[e^{t Y_n} \mid \mathcal{F}_{n-1}] \leq \exp\bigl( t\, a_n Y_{n-1} + t^2 b_n^2 / 8 \bigr)$.

This relation shows how to define a supermartingale based on the $Y_n$, from which we can derive a large deviation bound on $Y_n$.

Lemma 2.4.

Assume the conditions of Lemma 2.3, and also that . Then, for any integer and any ,

In order to apply this to the sequence $(\Psi_n)$, we need to first calculate the moment-generating function of its starting value $\Psi_{n_0}$.

Lemma 2.5.

Suppose a vector is picked uniformly at random from the surface of the unit sphere in , where . Define . Then, for any ,

Putting these pieces together yields Theorem 2.2.

2.3 Intermediate epochs of improvement

We have seen that, for suitable $\epsilon_0$ and $n_0$, it is likely that $\Psi_n \leq 1 - \epsilon_0$ for all $n \geq n_0$. We now define a series of epochs in which $1 - \Psi_n$ successively doubles, until $\Psi_n$ finally drops below $1/2$.

To do this, we specify intermediate goals $(n_0, \epsilon_0), (n_1, \epsilon_1), \ldots, (n_J, \epsilon_J)$, where $n_0 < n_1 < \cdots < n_J$ and $\epsilon_j = 2^j \epsilon_0$ with $\epsilon_J = 1/2$, with the intention that:

$\Psi_n \leq 1 - \epsilon_j \ \text{ for all } n \geq n_j. \qquad\qquad (2)$

Of course, this can only hold with a certain probability.

Let $\Omega$ denote the sample space of all realizations $(V_{n_0}, X_{n_0+1}, X_{n_0+2}, \ldots)$, and $\Pr$ the probability distribution on these sequences. We will show that, for a certain choice of the schedule $(n_j)$, all constraints (2) can be met by excluding just a small portion of $\Omega$.

We consider a specific realization $\omega \in \Omega$ to be good if it satisfies (2). Call this set of realizations $\Omega'$:

$\Omega' = \{\omega \in \Omega : \Psi_n \leq 1 - \epsilon_j \ \text{for all } 0 \leq j \leq J \text{ and all } n \geq n_j\}.$

For technical reasons, we also need to look at realizations that are good up to time $n$. Specifically, for each $n \geq n_0$, define

$\Omega'_n = \{\omega \in \Omega : \Psi_m \leq 1 - \epsilon_j \ \text{whenever } n_j \leq m \leq n\}.$

Crucially, this is $\mathcal{F}_n$-measurable. Also note that $\Omega'_{n_0} \supseteq \Omega'_{n_0+1} \supseteq \cdots$ and $\Omega' = \bigcap_{n \geq n_0} \Omega'_n$.

We can talk about expectations under the distribution $\Pr$ restricted to subsets of $\Omega$. In particular, let $\Pr_n$ be the restriction of $\Pr$ to $\Omega'_n$; that is, for any event $A$, we have $\Pr_n(A) = \Pr(A \cap \Omega'_n)$. As for expectations with respect to $\Pr_n$, for any function $f$ of the realization, we define $\mathbb{E}_n[f] = \mathbb{E}\bigl[f \cdot \mathbf{1}(\Omega'_n)\bigr]$.

Here is the main result of this section.

Theorem 2.6.

Assume that , where and . Pick any and select a schedule that satisfies the conditions

(3)

as well as . Then .

The first step towards proving this theorem is bounding the moment-generating function of $\Psi_n$ in terms of that of $\Psi_{n-1}$.

Lemma 2.7.

Suppose . Suppose also that , where . Then for any ,

We would like to use this result to bound $\mathbb{E}_n[e^{t \Psi_n}]$ in terms of $\mathbb{E}_{n-1}[e^{t \Psi_{n-1}}]$ for $n > n_0$. The shift in sample spaces is easily handled using the following observation.

Lemma 2.8.

If is nondecreasing, then for any .

A repeated application of Lemmas 2.7 and 2.8 yields the following.

Lemma 2.9.

Suppose that conditions (3) hold. Then for and any ,

Now that we have bounds on the moment-generating functions of the intermediate $\Psi_{n_j}$, we can apply martingale deviation bounds, as in Lemma 2.4, to obtain the following, from which Theorem 2.6 ensues.

Lemma 2.10.

Assume conditions (3) hold. Pick any , and set . Then

2.4 The final epoch

Recall the definition of the intermediate goals in (2), (3). The final epoch is the period $n \geq n_J$, at which point $\Psi_n \leq 1/2$. The following consequence of Lemmas A.4 and 2.8 captures the rate at which $\Psi_n$ decreases during this phase.

Lemma 2.11.

For all ,

where and .

By solving this recurrence relation, and piecing together the various epochs, we get the overall convergence result of Theorem 1.1.

Note that Lemma 2.11 closely resembles the recurrence relation followed by the squared distance from the optimum of stochastic gradient descent (SGD) on a strongly convex function [11]. As a consequence, the incremental PCA algorithms we study have convergence rates of the same form as SGD in this scenario.

3 Experiments

When performing PCA in practice with massive $d$ and a large/growing dataset, an incremental method like that of Krasulina or Oja remains practically viable, even as quadratic-time and -memory algorithms become increasingly impractical. Arora et al. [2] have a more complete discussion of the empirical necessity of incremental PCA algorithms, including a version of Oja’s method which is shown to be extremely competitive in practice.

Since the efficiency benefits of these types of algorithms are well understood, we now instead focus on the effect of the learning rate on the performance of Oja’s algorithm (results for Krasulina’s are extremely similar). We use the CMU PIE faces [13], consisting of 11554 images of size $32 \times 32$, as a prototypical example of a dataset with most of its variance captured by a few PCs, as shown in Fig. 1.

[Figure 1: the spectrum of the CMU PIE dataset, showing most of the variance concentrated in the top few principal components. Figure 2: log-log plot of the convergence of Oja’s algorithm for several settings of the constant $c$ in the learning rate.]

We expect from Theorem 1.1 and the discussion in the introduction that varying $c$ (the constant in the learning rate $\gamma_n = c/n$) will influence the overall rate of convergence. In particular, if $c$ is low, then halving it can be expected to halve the exponent of $n$, and hence the slope of the log-log convergence graph (cf. the remark after Thm. 1.1). This is exactly what occurs in practice, as illustrated in Fig. 2. The dotted line in that figure is a convergence rate of $1/n$, drawn as a guide.
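One simple way to quantify the slopes in such a plot is to fit a line to the log-log trace of the error; a minimal sketch (the function name and burn-in value are our own choices):

```python
import numpy as np

def loglog_slope(errors, burn_in=100):
    """Least-squares slope of log(error) vs. log(n); about -1 in the O(1/n) regime."""
    errors = np.asarray(errors, dtype=float)
    n = np.arange(burn_in, len(errors))
    slope, _ = np.polyfit(np.log(n), np.log(errors[burn_in:]), 1)
    return slope
```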

4 Open problems

Several fundamental questions remain unanswered. First, the convergence rates of the two incremental schemes depend on the multiplier $c$ in the learning rate $\gamma_n = c/n$. If it is too low, convergence will be slower than $O(1/n)$. If it is too high, the constant in the rate of convergence will be large. Is there a simple and practical scheme for setting $c$?

Second, what can be said about incrementally estimating the top $k$ eigenvectors, for $k > 1$? Both methods we consider extend easily to this case [10]; the estimate at time $n$ is a $d \times k$ matrix $V_n$ whose columns correspond to the $k$ eigenvectors, with the invariant $V_n^T V_n = I_k$ always maintained. In Oja’s algorithm, for instance, when a new data point $X_n$ arrives, the following update is performed:

$W_n = V_{n-1} + \gamma_n X_n X_n^T V_{n-1}, \qquad V_n = \mathrm{orth}(W_n),$

where the second step orthonormalizes the columns of $W_n$, for instance by Gram–Schmidt. It would be interesting to characterize the rate of convergence of this scheme.
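A sketch of this k-vector extension, using a QR decomposition for the orthonormalization step (Gram–Schmidt would do equally well); the function name and sign convention are our own choices.

```python
import numpy as np

def oja_block_step(V, x, gamma):
    """One k-eigenvector Oja update: V is d x k with orthonormal columns."""
    W = V + gamma * np.outer(x, x @ V)   # W = V + gamma * x x^T V
    Q, R = np.linalg.qr(W)               # re-orthonormalize the columns of W
    # Fix QR's sign ambiguity so columns do not flip direction between steps.
    return Q * np.sign(np.diag(R))
```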

Finally, our analysis applies to a modified procedure in which the starting time $n_0$ is artificially set to a large constant. This seems unnecessary in practice, and it would be useful to extend the analysis to the case where $n_0 = 1$.

Acknowledgments

The authors are grateful to the National Science Foundation for support under grant IIS-1162581.

References

  • [1] A. Agarwal, O. Chapelle, M. Dudík, and J. Langford. A reliable effective terascale linear learning system. CoRR, abs/1110.4198, 2011.
  • [2] R. Arora, A. Cotter, K. Livescu, and N. Srebro. Stochastic optimization for PCA and PLS. In 50th Annual Allerton Conference on Communication, Control, and Computing, pages 861–868. 2012.
  • [3] R. Arora, A. Cotter, and N. Srebro. Stochastic optimization of PCA with capped MSG. In Advances in Neural Information Processing Systems, 2013.
  • [4] G. Blanchard, O. Bousquet, and L. Zwald. Statistical properties of kernel principal component analysis. Machine Learning, 66(2-3):259–294, 2007.
  • [5] T. T. Cai, Z. Ma, and Y. Wu. Sparse PCA: Optimal rates and adaptive estimation. CoRR, abs/1211.1309, 2012.
  • [6] R. Durrett. Probability: Theory and Examples. Duxbury, second edition, 1995.
  • [7] T.P. Krasulina. A method of stochastic approximation for the determination of the least eigenvalue of a symmetrical matrix. USSR Computational Mathematics and Mathematical Physics, 9(6):189–195, 1969.
  • [8] I. Mitliagkas, C. Caramanis, and P. Jain. Memory limited, streaming PCA. In Advances in Neural Information Processing Systems, 2013.
  • [9] E. Oja. Subspace Methods of Pattern Recognition. Research Studies Press, 1983.
  • [10] E. Oja and J. Karhunen. On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix. Journal of Mathematical Analysis and Applications, 106:69–84, 1985.
  • [11] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In International Conference on Machine Learning, 2012.
  • [12] S. Roweis. EM algorithms for PCA and SPCA. In Advances in Neural Information Processing Systems, 1997.
  • [13] T. Sim, S. Baker, and M. Bsat. The CMU pose, illumination, and expression database. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12):1615–1618, 2003.
  • [14] V.Q. Vu and J. Lei. Minimax rates of estimation for sparse PCA in high dimensions. Journal of Machine Learning Research - Proceedings Track, 22:1278–1286, 2012.
  • [15] M.K. Warmuth and D. Kuzmin. Randomized PCA algorithms with regret bounds that are logarithmic in the dimension. In Advances in Neural Information Processing Systems. 2007.
  • [16] J. Weng, Y. Zhang, and W.-S. Hwang. Candid covariance-free incremental principal component analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(8):1034–1040, 2003.
  • [17] L. Zwald and G. Blanchard. On the convergence of eigenspaces in kernel principal component analysis. In Advances in Neural Information Processing Systems, 2005.

Appendix A Expected per-step change in potential

A.1 The change in potential of Krasulina’s update

Write Krasulina’s update equation as $V_n = V_{n-1} + \gamma_n \xi_n$, where

$\xi_n = X_n X_n^T V_{n-1} - \frac{(X_n^T V_{n-1})^2}{\|V_{n-1}\|^2}\, V_{n-1}.$

We start with some basic observations.

Lemma A.1.

For all ,

  1. $\xi_n$ is orthogonal to $V_{n-1}$.

  2. .

  3. .

  4. .

Proof.

For (a), let $x_\perp$ denote the component of $X_n$ orthogonal to $V_{n-1}$. Then $\xi_n = (X_n^T V_{n-1})\, x_\perp$, which is orthogonal to $V_{n-1}$.

For (b), note from the previous formulation that .

Part (c) follows directly from .

For (d), we use . ∎

We now check that the alignment between $V_n$ and $v^*$ grows in expectation with each iteration.

Lemma A.2.

For any , we have

  1. .

  2. .

Proof.

Part (a) follows directly from the update rule:

Part (b) follows by substituting the expression for from Lemma A.1(c):

In order to use Lemma A.2 to bound the change in potential $\Psi_n - \Psi_{n-1}$, we need to relate the resulting expectations to the quantity $\Psi_{n-1}(1 - \Psi_{n-1})$.

Lemma A.3.

For any , we have .

Proof.

It is easiest to think of $v_{n-1}$ in the eigenbasis of $A$: the component of $v_{n-1}$ in the direction $v^*$ has squared length $1 - \Psi_{n-1}$, and the orthogonal component has squared length $\Psi_{n-1}$. Then

Therefore,

We can now explicitly bound the expected change in $\Psi_n$ in each iteration.

Lemma A.4.

For any $n > n_0$, we can write $\Psi_n \leq \Psi_{n-1} - Z_n + \beta_n$, where $\beta_n$ is a non-stochastic quantity with $\beta_n = O(\gamma_n^2)$, and where $Z_n$

is an $\mathcal{F}_n$-measurable random variable with the following properties:

  • $\mathbb{E}[Z_n \mid \mathcal{F}_{n-1}] \geq 2 \gamma_n (\lambda_1 - \lambda_2)\, \Psi_{n-1} (1 - \Psi_{n-1})$.

  • $|Z_n| = O(\gamma_n)$.

Proof.

Using Lemmas A.1 and A.2(a),

which is . The conditional expectation of can be determined from Lemma A.2(b):

and this can be lower-bounded using Lemma A.3.

Finally, we need to determine the range of possible values of . By expanding , we get
