
Manifold Denoising by Nonlinear Robust Principal Component Analysis

by He Lyu, et al.

This paper extends robust principal component analysis (RPCA) to nonlinear manifolds. Suppose that the observed data matrix is the sum of a sparse component and a component drawn from some low-dimensional manifold. Is it possible to separate them using ideas similar to RPCA? Is there any benefit to treating the manifold as a whole, as opposed to treating each local region independently? We answer both questions affirmatively by proposing and analyzing an optimization framework that separates the sparse component from the manifold in the presence of noise. Theoretical error bounds are provided when the tangent spaces of the manifold satisfy certain incoherence conditions. We also provide a near-optimal choice of the tuning parameters for the proposed optimization formulation with the help of a new curvature estimation method. The efficacy of our method is demonstrated on both synthetic and real datasets.
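The abstract builds on classic (linear) RPCA, which decomposes an observed matrix M into a low-rank part L and a sparse part S. As background only (the paper's nonlinear manifold extension is not reproduced here), the following is a minimal sketch of principal component pursuit solved by an inexact augmented-Lagrangian iteration; the function names and default parameter choices are illustrative assumptions, not the authors' code.

```python
import numpy as np

def soft_threshold(X, tau):
    # Elementwise soft-thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

def rpca_pcp(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal component pursuit (a sketch):
        minimize ||L||_* + lam * ||S||_1   subject to   L + S = M.
    lam and mu defaults follow common heuristics; tune for real data."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))      # standard PCP weight
    if mu is None:
        mu = 0.25 * m * n / np.sum(np.abs(M))
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                     # dual variable
    norm_M = np.linalg.norm(M)
    for _ in range(max_iter):
        # Alternate the two proximal updates, then a dual ascent step.
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y = Y + mu * residual
        if np.linalg.norm(residual) < tol * norm_M:
            break
    return L, S
```

The paper replaces the global low-rank model for L with a component drawn from a low-dimensional manifold, which is why tangent-space incoherence and curvature estimation enter the analysis.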



