Automatic Subspace Learning via Principal Coefficients Embedding

11/17/2014
by Xi Peng, et al.

In this paper, we address two challenging problems in unsupervised subspace learning: 1) how to automatically identify the feature dimension of the learned subspace (i.e., automatic subspace learning), and 2) how to learn the underlying subspace in the presence of Gaussian noise (i.e., robust subspace learning). We show that these two problems can be solved simultaneously by a new method called principal coefficients embedding (PCE). For a given data set D ∈ R^{m×n}, PCE recovers a clean data set D_0 ∈ R^{m×n} from D and simultaneously learns a global reconstruction relation C ∈ R^{n×n} of D_0. By preserving C into an m'-dimensional space, the proposed method obtains a projection matrix that captures the latent manifold structure of D_0, where m' ≪ m is automatically determined by the rank of C with theoretical guarantees. PCE has three advantages: 1) it can automatically determine the feature dimension even when the data are sampled from a union of multiple linear subspaces in the presence of Gaussian noise; 2) although its objective function only models Gaussian noise, experimental results show that it is also robust to non-Gaussian noise (e.g., random pixel corruption) and real disguises; 3) it has a closed-form solution and can be computed very fast. Extensive experimental results show the superiority of PCE on a range of databases with respect to classification accuracy, robustness, and efficiency.
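
Since the abstract outlines the full pipeline (recover a clean D_0 from D, learn a global reconstruction relation C in closed form, then embed C into an m'-dimensional space fixed by rank(C)), the following minimal NumPy sketch may help make the flow concrete. The energy-based rank rule, the shape-interaction-style coefficient matrix, the eigen-embedding step, and the names pce_sketch and energy are illustrative assumptions, not the authors' exact closed-form derivation.

```python
# Minimal PCE-style sketch (illustrative assumptions; the rank rule, the
# shape-interaction-style coefficient matrix, and the eigen-embedding step
# are stand-ins for the paper's exact closed-form solution).
import numpy as np

def pce_sketch(D, energy=0.98):
    """D: m x n data matrix (columns are samples).
    Returns (D0, C, P): a denoised estimate D0, an n x n reconstruction
    relation C with D0 = D0 @ C, and an m x r projection matrix P whose
    width r = rank(C) plays the role of the automatic feature dimension."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)

    # Assumed rank selection: keep singular values covering `energy` of the
    # spectrum (the paper instead derives the rank with theoretical guarantees).
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = min(int(np.searchsorted(cumulative, energy)) + 1, len(s))

    D0 = U[:, :r] * s[:r] @ Vt[:r, :]   # low-rank (clean) data estimate
    C = Vt[:r, :].T @ Vt[:r, :]         # global reconstruction relation: D0 @ C == D0

    # Preserve C with a linear map: take the top-r eigenvectors of a
    # symmetrized similarity built from D0 and C (one common surrogate).
    M = D0 @ ((C + C.T) / 2) @ D0.T
    _, eigvecs = np.linalg.eigh(M)      # eigenvalues in ascending order
    P = eigvecs[:, -r:]                 # m x r projection matrix

    return D0, C, P

# Usage: features = P.T @ x for a new sample x, then apply any classifier.
```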

Related research

02/14/2012 - Rank/Norm Regularization with Closed-Form Solutions: Application to Subspace Clustering
When data is sampled from an unknown subspace, principal component analy...

01/13/2012 - Nonparametric Sparse Representation
This paper suggests a nonparametric scheme to find the sparse solution o...

07/06/2020 - Non-Gaussian component analysis: testing the dimension of the signal subspace
Dimension reduction is a common strategy in multivariate data analysis w...

02/03/2017 - Intrinsic Grassmann Averages for Online Linear and Robust Subspace Learning
Principal Component Analysis (PCA) is a fundamental method for estimatin...

02/28/2018 - Exactly Robust Kernel Principal Component Analysis
We propose a novel method called robust kernel principal component analy...

11/03/2020 - Kernel Two-Dimensional Ridge Regression for Subspace Clustering
Subspace clustering methods have been widely studied recently. When the ...

02/20/2023 - Fast and Painless Image Reconstruction in Deep Image Prior Subspaces
The deep image prior (DIP) is a state-of-the-art unsupervised approach f...
