
The Sparse Principal Component of a Constant-rank Matrix

by Megasthenis Asteris et al.
Technical University of Crete

Computing the sparse principal component of a matrix is equivalent to identifying the principal submatrix with the largest maximum eigenvalue; finding this optimal submatrix is what renders the problem NP-hard. In this work, we prove that if the matrix is positive semidefinite and its rank is constant, then its sparse principal component is polynomially computable. Our proof relies on the recently developed auxiliary unit vector technique for identifying polynomially solvable problems. Moreover, we use this technique to design an algorithm which, for any sparsity value, computes the sparse principal component with complexity O(N^(D+1)), where N and D are the matrix size and rank, respectively. Our algorithm is fully parallelizable and memory efficient.
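To make the rank-one case concrete, here is a hedged sketch (not the paper's general algorithm, whose details are not in this abstract): for a rank-one PSD matrix A = v·vᵀ, the k×k principal submatrix with the largest maximum eigenvalue is the one indexed by the k largest-magnitude entries of v, so the k-sparse principal component follows immediately. The function names below are illustrative, and the brute-force check enumerates all supports only to verify the shortcut on a toy input.

```python
import itertools
import numpy as np

def sparse_pc_rank1(v, k):
    """k-sparse principal component of the rank-1 PSD matrix A = v v^T.

    For rank one, the optimal support is simply the k entries of v
    with the largest magnitudes.
    """
    support = np.argsort(-np.abs(v))[:k]
    x = np.zeros_like(v, dtype=float)
    x[support] = v[support]
    return x / np.linalg.norm(x)

def brute_force_max_eig(A, k):
    """Largest maximum eigenvalue over all k x k principal submatrices.

    Exhaustive over all supports, hence exponential in general; used
    here only as a correctness check on a small example.
    """
    n = A.shape[0]
    return max(
        np.linalg.eigvalsh(A[np.ix_(s, s)])[-1]  # eigvalsh sorts ascending
        for s in itertools.combinations(range(n), k)
    )

# Toy example: the 2-sparse PC of A = v v^T keeps the two largest |v_i|.
v = np.array([0.5, -3.0, 1.0, 2.0])
A = np.outer(v, v)
x = sparse_pc_rank1(v, 2)
# The Rayleigh quotient of x matches the best 2x2 principal submatrix.
assert np.isclose(x @ A @ x, brute_force_max_eig(A, 2))
```

For rank D > 1, the top-k-magnitude shortcut no longer applies, which is where the paper's O(N^(D+1)) construction comes in.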



