Subspace Learning with Partial Information

02/19/2014
by Alon Gonen, et al.

The goal of subspace learning is to find a k-dimensional subspace of R^d such that the expected squared distance between instance vectors and the subspace is as small as possible. In this paper we study subspace learning in a partial information setting, in which the learner can only observe r < d attributes from each instance vector. We propose several efficient algorithms for this task and analyze their sample complexity.
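One natural way to approach this setting (an illustrative sketch under standard assumptions, not necessarily the algorithms proposed in the paper) is to note that the optimal subspace is spanned by the top-k eigenvectors of the second-moment matrix E[x x^T], and that an unbiased estimate of this matrix can be formed even when only r of the d attributes of each instance are observed: if the r coordinates are sampled uniformly without replacement, each diagonal entry of x x^T is seen with probability r/d and each off-diagonal entry with probability r(r-1)/(d(d-1)), so reweighting the observed entries by the inverse of these probabilities restores unbiasedness. The function names below are hypothetical.

```python
import numpy as np

def partial_covariance_estimate(X, r, rng):
    """Unbiased estimate of the second-moment matrix E[x x^T] when only
    r of the d attributes of each instance are observed, sampled
    uniformly without replacement. Illustrative sketch only."""
    n, d = X.shape
    p_diag = r / d                        # P(coordinate i is observed)
    p_off = r * (r - 1) / (d * (d - 1))   # P(coordinates i != j both observed)
    C = np.zeros((d, d))
    off_mask = ~np.eye(d, dtype=bool)
    for x in X:
        idx = rng.choice(d, size=r, replace=False)
        xs = np.zeros(d)
        xs[idx] = x[idx]                  # the learner sees only these r entries
        M = np.outer(xs, xs)
        # reweight observed entries so that E[M] = x x^T
        M[np.diag_indices(d)] /= p_diag
        M[off_mask] /= p_off
        C += M
    return C / n

def subspace_from_partial(X, k, r, seed=0):
    """Orthonormal basis (d x k) of the top-k eigenspace of the
    partially-observed second-moment estimate."""
    rng = np.random.default_rng(seed)
    C = partial_covariance_estimate(X, r, rng)
    C = (C + C.T) / 2                     # symmetrize against numerical noise
    w, V = np.linalg.eigh(C)
    return V[:, np.argsort(w)[::-1][:k]]
```

When r = d the estimate coincides exactly with the empirical second-moment matrix, so the method degrades gracefully to ordinary PCA; for r < d the estimate is noisier, which is what drives the sample-complexity trade-off the abstract refers to.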


