
Geometric Foundations of Data Reduction
The purpose of this paper is to provide a complete survey of (spectral) manifold learning methods and nonlinear dimensionality reduction (NLDR) in data reduction. The first two NLDR methods in history were both published in Science in 2000; they solve the similar problem of reducing high-dimensional data endowed with an intrinsic nonlinear structure. This intrinsic nonlinear structure is commonly interpreted by computer scientists and theoretical physicists as a manifold, a concept from geometry and topology in theoretical mathematics. In 2001, the term Manifold Learning first appeared alongside an NLDR method called Laplacian Eigenmaps, proposed by Belkin and Niyogi. In the typical manifold learning setup, the data set, also called the observation set, is distributed on or near a low-dimensional manifold M embedded in ℝ^D, so that each observation has a D-dimensional representation. The goal of (spectral) manifold learning is to reduce these observations to a compact lower-dimensional representation based on their geometric information. The reduction procedure is called a (spectral) manifold learning method. In this paper, we derive each (spectral) manifold learning method in both matrix and operator representations, and we then discuss the convergence behavior of each method in a uniform geometric language. Hence, we name the survey Geometric Foundations of Data Reduction.
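The setup described above can be illustrated with a minimal sketch: observations sampled near a 2-dimensional manifold embedded in ℝ^3 are reduced to a 2-dimensional representation with Laplacian Eigenmaps. This is not code from the survey itself; it is a hedged illustration assuming scikit-learn, whose `SpectralEmbedding` implements the Belkin–Niyogi method via the eigenvectors of a graph Laplacian built on the observations.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import SpectralEmbedding

# Observation set: points on or near a 2-D manifold embedded in R^D, D = 3.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# Laplacian Eigenmaps (Belkin & Niyogi, 2001): build a neighborhood graph
# on the observations and embed using the bottom nontrivial eigenvectors
# of its graph Laplacian, yielding a compact lower-dimensional representation.
embedding = SpectralEmbedding(n_components=2, n_neighbors=10, random_state=0)
Y = embedding.fit_transform(X)

print(X.shape)  # D-dimensional observations: (1000, 3)
print(Y.shape)  # reduced representation:     (1000, 2)
```

The choice of `n_neighbors` controls the neighborhood graph that encodes the local geometric information; the convergence analyses surveyed in the paper concern how such discrete constructions approach the manifold's Laplace–Beltrami operator as the sample size grows.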