Sparse PCA via Covariance Thresholding

11/20/2013
by Yash Deshpande, et al.

In sparse principal component analysis we are given noisy observations of a low-rank matrix of dimension n×p and seek to reconstruct it under additional sparsity assumptions. In particular, we assume here that each of the principal components v_1,...,v_r has at most s_0 non-zero entries. We are particularly interested in the high-dimensional regime in which p is comparable to, or even much larger than, n. In an influential paper, Johnstone and Lu (2004) introduced a simple algorithm that estimates the support of the principal vectors v_1,...,v_r from the largest entries in the diagonal of the empirical covariance. This method can be shown to identify the correct support with high probability if s_0 ≤ K_1 √(n/log p), and to fail with high probability if s_0 ≥ K_2 √(n/log p), for two constants 0 < K_1, K_2 < ∞. Despite a considerable amount of work over the last ten years, no practical algorithm exists with provably better support recovery guarantees. Here we analyze a covariance thresholding algorithm that was recently proposed by Krauthgamer, Nadler, and Vilenchik. On the basis of numerical simulations (for the rank-one case), these authors conjectured that covariance thresholding correctly recovers the support with high probability for s_0 ≤ K√n (assuming n of the same order as p). We prove this conjecture, and in fact establish a more general guarantee that covers higher rank as well as n much smaller than p. Recent computational lower bounds (Berthet and Rigollet, 2013; Ma and Wigderson, 2015) suggest that no polynomial-time algorithm can do significantly better. The key technical component of our analysis develops new bounds on the norm of kernel random matrices, in regimes that were not considered before.
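To make the procedure concrete, here is a minimal sketch of covariance thresholding in the rank-one case, with Johnstone and Lu's diagonal thresholding as a baseline. It assumes a spiked model x_i = √β·u_i·v + z_i with an s_0-sparse component v; the constant `tau`, the decision to keep the diagonal untouched, and the top-s_0 support-selection rule are illustrative choices, not the paper's exact tuning (which, e.g., uses sample splitting).

```python
# A minimal sketch of covariance thresholding for rank-one sparse PCA.
# The threshold constant `tau` and the support rule are hypothetical defaults.
import numpy as np

def covariance_thresholding(X, s0, tau=2.0):
    """Estimate the support of a sparse principal component.

    X   : (n, p) data matrix with i.i.d. rows.
    s0  : assumed sparsity of the principal component.
    tau : soft-thresholding constant (illustrative choice).
    """
    n, p = X.shape
    sigma = X.T @ X / n                              # empirical covariance
    # Soft-threshold the off-diagonal entries at level tau / sqrt(n).
    level = tau / np.sqrt(n)
    off = sigma - np.diag(np.diag(sigma))
    eta = np.sign(off) * np.maximum(np.abs(off) - level, 0.0)
    eta += np.diag(np.diag(sigma))                   # keep the diagonal
    # Leading eigenvector of the thresholded covariance.
    _, V = np.linalg.eigh(eta)
    v_hat = V[:, -1]
    # Take the s0 coordinates of largest magnitude as the support estimate.
    return np.sort(np.argsort(-np.abs(v_hat))[:s0]), v_hat

def diagonal_thresholding(X, s0):
    """Johnstone-Lu baseline: s0 largest diagonal entries of the covariance."""
    n, _ = X.shape
    d = np.einsum('ij,ij->j', X, X) / n              # diagonal of X^T X / n
    return np.sort(np.argsort(-d)[:s0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p, s0, beta = 2000, 1500, 60, 3.0
    v = np.zeros(p)
    idx = rng.choice(p, size=s0, replace=False)
    v[idx] = rng.choice([-1.0, 1.0], size=s0) / np.sqrt(s0)
    X = np.sqrt(beta) * np.outer(rng.standard_normal(n), v) \
        + rng.standard_normal((n, p))
    ct, _ = covariance_thresholding(X, s0)
    dt = diagonal_thresholding(X, s0)
    true = np.sort(idx)
    print("covariance thresholding overlap:", np.intersect1d(ct, true).size / s0)
    print("diagonal thresholding overlap:  ", np.intersect1d(dt, true).size / s0)
```

The contrast the abstract draws shows up directly in such experiments: once s_0 grows past order √(n/log p) the diagonal entries of the covariance no longer separate signal from noise coordinates, while soft-thresholding the off-diagonal entries suppresses the noise enough for the leading eigenvector to remain informative up to s_0 of order √n.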
