Subspace Learning from Extremely Compressed Measurements

04/03/2014
by   Akshay Krishnamurthy, et al.

We consider learning the principal subspace of a large set of vectors from an extremely small number of compressive measurements of each vector. Our theoretical results show that even a constant number of measurements per column suffices to approximate the principal subspace to arbitrary precision, provided that the number of vectors is large. This result is achieved by a simple algorithm that computes the eigenvectors of an estimate of the covariance matrix. The main insight is to exploit an averaging effect that arises from applying a different random projection to each vector. We provide a number of simulations confirming our theoretical results.
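The algorithm described in the abstract can be sketched in a few lines of numpy. The idea: back-project each vector's compressive measurements, average the resulting outer products into a covariance estimate, and take its top eigenvectors. With a fresh Gaussian projection per vector (entries scaled so that E[Φᵀ Φ] = I), the expectation of the estimate is a scaled copy of the true covariance plus a multiple of the identity, neither of which changes the principal subspace. This is a minimal illustrative sketch, not the paper's exact procedure; the dimensions, the Gaussian projection design, and the subspace-error metric are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m, n = 30, 3, 10, 100_000  # ambient dim, subspace dim, measurements per vector, number of vectors

# Ground-truth k-dimensional principal subspace (orthonormal basis).
U, _ = np.linalg.qr(rng.standard_normal((d, k)))

# Accumulate a covariance estimate from per-vector compressive measurements.
S = np.zeros((d, d))
for _ in range(n):
    x = U @ rng.standard_normal(k)                   # vector lying in the subspace
    Phi = rng.standard_normal((m, d)) / np.sqrt(m)   # fresh random projection, E[Phi.T @ Phi] = I
    y = Phi @ x                                      # m compressive measurements of x
    z = Phi.T @ y                                    # back-projection; E[z z^T] is x x^T up to scale + identity shift
    S += np.outer(z, z)
S /= n

# Top-k eigenvectors of the estimated covariance approximate the principal subspace.
eigvals, eigvecs = np.linalg.eigh(S)
U_hat = eigvecs[:, -k:]

# Subspace error: spectral norm of the difference of orthogonal projectors
# (the sine of the largest principal angle between the two subspaces).
err = np.linalg.norm(U @ U.T - U_hat @ U_hat.T, 2)
print(f"subspace error: {err:.3f}")
```

Note that m = 10 measurements per vector is far below the ambient dimension d = 30; each individual z zᵀ is a very noisy rank-one estimate, and it is only the averaging over many vectors, each with its own projection, that drives the subspace error down.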


Related research:

- 07/23/2014 · Subspace Learning From Bits: Networked sensing, where the goal is to perform complex inference using ...
- 06/02/2015 · Extreme Compressive Sampling for Covariance Estimation: This paper studies the problem of estimating the covariance of a collect...
- 11/02/2012 · Minimax sparse principal subspace estimation in high dimensions: We study sparse principal components analysis in high dimensions, where ...
- 02/02/2016 · On the Nyström and Column-Sampling Methods for the Approximate Principal Components Analysis of Large Data Sets: In this paper we analyze approximate methods for undertaking a principal...
- 12/08/2022 · A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces: Many machine learning problems encode their data as a matrix with a poss...
- 02/19/2014 · Subspace Learning with Partial Information: The goal of subspace learning is to find a k-dimensional subspace of R^d...
- 07/02/2023 · Mode-wise Principal Subspace Pursuit and Matrix Spiked Covariance Model: This paper introduces a novel framework called Mode-wise Principal Subsp...
