
The Sparse Principal Component of a Constant-rank Matrix

12/20/2013
by Megasthenis Asteris et al.
Technical University of Crete

The computation of the sparse principal component of a matrix is equivalent to identifying its principal submatrix with the largest maximum eigenvalue; finding this optimal submatrix is what renders the problem NP-hard. In this work, we prove that if the matrix is positive semidefinite and its rank is constant, then its sparse principal component can be computed in polynomial time. Our proof relies on the auxiliary unit vector technique, which was recently developed to show that certain problems are polynomially solvable. Moreover, we use this technique to design an algorithm which, for any sparsity value, computes the sparse principal component with complexity O(N^{D+1}), where N and D are the matrix size and rank, respectively. Our algorithm is fully parallelizable and memory efficient.
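To make the low-rank reduction concrete, here is a minimal Python/NumPy sketch of the idea behind the abstract: for A = V V^T with V of size N x D, maximizing x^T A x = ||V^T x||^2 over k-sparse unit vectors x means that the optimal support is the set of k largest-magnitude entries of V c for some unit vector c in R^D. The paper enumerates all O(N^D) such candidate supports exactly via the auxiliary unit vector technique; the sketch below merely samples random directions c, so it is a heuristic illustration of the reduction, not the paper's exact algorithm. The function name sparse_pc_low_rank and the num_dirs parameter are my own, not from the paper.

import numpy as np

def sparse_pc_low_rank(V, k, num_dirs=2000, seed=0):
    """Approximate the k-sparse principal component of A = V @ V.T.

    V is an N x D factor of a rank-D PSD matrix. For any unit c in R^D,
    the optimal k-sparse support aligned with V @ c consists of the k
    largest-magnitude entries of V @ c. The paper enumerates the O(N^D)
    distinct supports exactly; here we only sample random directions c,
    so this is a heuristic stand-in for the exact enumeration.
    """
    N, D = V.shape
    rng = np.random.default_rng(seed)
    best_val, best_x = -np.inf, None
    for _ in range(num_dirs):
        c = rng.standard_normal(D)
        c /= np.linalg.norm(c)
        # Candidate support: indices of the k largest |(V c)_i|.
        S = np.argsort(-np.abs(V @ c))[:k]
        # Best unit vector on this support: the leading eigenvector of
        # the k x k principal submatrix A[S, S] = V[S] @ V[S].T.
        w, U = np.linalg.eigh(V[S] @ V[S].T)  # eigenvalues ascending
        if w[-1] > best_val:
            best_val = w[-1]
            x = np.zeros(N)
            x[S] = U[:, -1]
            best_x = x
    return best_x, best_val

# Example: rank-3 PSD matrix of size N = 50, sparsity k = 5.
V = np.random.default_rng(1).standard_normal((50, 3))
x, val = sparse_pc_low_rank(V, k=5)
print(val, np.count_nonzero(x))  # objective value and sparsity of x

Note that once a support S is fixed, the inner problem is solved exactly by an eigendecomposition of the k x k submatrix; the hardness lies entirely in choosing S, which is why bounding the number of candidate supports by O(N^D) yields the O(N^{D+1}) total complexity.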


Related research

05/22/2017 · ReFACTor: Practical Low-Rank Matrix Estimation Under Column-Sparsity
Various problems in data analysis and statistical genetics call for reco...

09/06/2020 · A Framework for Private Matrix Analysis
We study private matrix analysis in the sliding window model where only ...

11/21/2017 · Kullback-Leibler Principal Component for Tensors is not NP-hard
We study the problem of nonnegative rank-one approximation of a nonnegat...

03/03/2013 · Sparse PCA through Low-rank Approximations
We introduce a novel algorithm that computes the k-sparse principal comp...

08/16/2016 · Faster Principal Component Regression and Stable Matrix Chebyshev Approximation
We solve principal component regression (PCR), up to a multiplicative ac...

05/30/2021 · An iterative Jacobi-like algorithm to compute a few sparse eigenvalue-eigenvector pairs
In this paper, we describe a new algorithm to compute the extreme eigenv...