Regularizers versus Losses for Nonlinear Dimensionality Reduction: A Factored View with New Convex Relaxations

by Yaoliang Yu, et al.

We demonstrate that almost all non-parametric dimensionality reduction methods can be expressed as a simple procedure: regularized loss minimization followed by singular value truncation. By distinguishing the roles of the loss and the regularizer in this procedure, we recover a factored perspective that reveals some gaps in the current literature. Beyond identifying a useful new loss for manifold unfolding, a key contribution is to derive new convex regularizers that combine distance maximization with rank reduction. These regularizers can be applied to any loss.
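To make the two-stage procedure concrete, here is a minimal NumPy sketch of one simple instance: squared loss on a centered Gram matrix with a trace-norm regularizer (whose minimizer over PSD matrices has a closed form via eigenvalue soft-thresholding), followed by singular value truncation to obtain the embedding. The function name, the choice of loss, and the regularization weight are illustrative assumptions, not the paper's specific formulation.

```python
import numpy as np

def embed(X, n_components=2, reg=0.1):
    """Illustrative two-stage NLDR: regularized loss minimization,
    then singular value truncation (hypothetical example)."""
    # Center the data and form the Gram (inner-product) matrix.
    Xc = X - X.mean(axis=0)
    K = Xc @ Xc.T

    # Stage 1 -- regularized loss minimization: with a squared loss
    # and a trace-norm regularizer over PSD matrices, the minimizer
    # soft-thresholds the eigenvalues of K by `reg` (closed form).
    w, V = np.linalg.eigh(K)
    w = np.maximum(w - reg, 0.0)

    # Stage 2 -- singular value truncation: keep the top eigenpairs
    # and scale eigenvectors to produce the low-dimensional points.
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(w[idx])
```

Swapping in a different loss (e.g., one tailored to manifold unfolding) or one of the paper's convex distance-maximizing regularizers changes only Stage 1; the truncation step is unchanged.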

