Theoretical Foundations of t-SNE for Visualizing High-Dimensional Clustered Data

by T. Tony Cai, et al.

This study investigates the theoretical foundations of t-distributed stochastic neighbor embedding (t-SNE), a popular nonlinear dimension reduction and data visualization method. A novel theoretical framework for the analysis of t-SNE, based on the gradient descent approach, is presented. For the early exaggeration stage of t-SNE, we show its asymptotic equivalence to a power iteration based on the underlying graph Laplacian, characterize its limiting behavior, and uncover its deep connection to Laplacian spectral clustering, along with fundamental principles such as early stopping as implicit regularization. These results explain the intrinsic mechanism and the empirical benefits of this computational strategy. For the embedding stage of t-SNE, we characterize the kinematics of the low-dimensional map throughout the iterations and identify an amplification phase, featuring intercluster repulsion and the expansive behavior of the low-dimensional map. The general theory explains the fast convergence rate and the exceptional empirical performance of t-SNE in visualizing clustered data, informs the interpretation of the t-SNE output, and provides theoretical guidance for selecting tuning parameters in various applications.
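The abstract's central claim for the early exaggeration stage can be illustrated with a small numerical sketch: when attractive forces dominate, the gradient update y ← (1 − h)y + hPy (with P a row-stochastic affinity matrix) is a lazy power iteration driven by the random-walk graph Laplacian I − P, and stopping early recovers the sign pattern of a spectral-clustering eigenvector. The helper names, toy data, and step size below are illustrative assumptions, not the authors' code or the actual t-SNE implementation.

```python
# Hedged sketch of the claimed equivalence between t-SNE's early exaggeration
# stage and a power iteration based on the graph Laplacian. All names
# (build_affinities, exaggeration_step) and the toy data are illustrative.

import math
import random

def build_affinities(points, sigma=1.0):
    """Row-stochastic Gaussian affinity matrix P, a stand-in for t-SNE's
    high-dimensional similarities."""
    n = len(points)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                P[i][j] = math.exp(-d2 / (2 * sigma ** 2))
        s = sum(P[i])
        P[i] = [v / s for v in P[i]]
    return P

def exaggeration_step(P, y, h=0.5):
    """One step of the exaggerated attractive dynamics:
    y_i <- y_i + h * sum_j P_ij (y_j - y_i), i.e. y <- (1 - h) y + h P y,
    a lazy power iteration with the random-walk Laplacian I - P."""
    n = len(y)
    Py = [sum(P[i][j] * y[j] for j in range(n)) for i in range(n)]
    return [(1 - h) * y[i] + h * Py[i] for i in range(n)]

# Two well-separated 2-D clusters.
random.seed(0)
pts = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(10)] + \
      [(random.gauss(5, 0.1), random.gauss(5, 0.1)) for _ in range(10)]
P = build_affinities(pts)

# Random 1-D initialization, recentered each step so the trivial constant
# eigenvector is projected out and the cluster-separating mode dominates.
y = [random.gauss(0, 1) for _ in range(20)]
for _ in range(50):
    y = exaggeration_step(P, y)
    m = sum(y) / len(y)
    y = [v - m for v in y]

# After stopping, points within each cluster have collapsed together and the
# two clusters carry opposite signs -- the Laplacian spectral-clustering picture.
signs = [v > 0 for v in y]
print(signs[:10], signs[10:])
```

The early-stopping point matters: iterating far beyond the mixing time of each cluster would shrink all centered components toward zero, which is one way to read the abstract's "early stopping as implicit regularization."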






