Stochastic Neighbor Embedding separates well-separated clusters

02/09/2017, by Uri Shaham, et al.

Stochastic Neighbor Embedding and its variants are widely used dimensionality reduction techniques; despite their popularity, no theoretical results are known. We prove that the optimal SNE embedding of well-separated clusters from high dimensions into any Euclidean space R^d successfully separates the clusters in a quantitative sense. The result also applies to a larger family of methods, including a variant of t-SNE.
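The claimed behavior can be illustrated empirically. The sketch below (not from the paper) embeds three well-separated Gaussian clusters from R^10 into R^2 with scikit-learn's t-SNE and checks that the clusters stay separated; the cluster count, spacing, and noise scale are illustrative choices, not parameters from the paper.

```python
# Illustrative sketch: well-separated high-dimensional clusters remain
# separated after a t-SNE (SNE-variant) embedding into R^2.
# Assumes scikit-learn is available; all constants are illustrative.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
k, n, dim, sep = 3, 50, 10, 100.0           # clusters, points each, ambient dim, center spacing
centers = sep * np.eye(dim)[:k]             # k centers, pairwise far apart in R^10
X = np.vstack([c + rng.normal(scale=1.0, size=(n, dim)) for c in centers])
labels = np.repeat(np.arange(k), n)

Y = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)

# Separation check in the embedding: each point's nearest neighbor
# (excluding itself) should carry the same cluster label.
D = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
np.fill_diagonal(D, np.inf)
agreement = np.mean(labels[D.argmin(axis=1)] == labels)
print(f"1-NN label agreement in R^2: {agreement:.2f}")
```

With clusters this far apart, the nearest-neighbor label agreement in the 2-D embedding is essentially perfect, matching the qualitative prediction of the theorem.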


Related research

09/15/2023 · Supervised Stochastic Neighbor Embedding Using Contrastive Learning
  Stochastic neighbor embedding (SNE) methods t-SNE, UMAP are two most pop...

05/03/2022 · A unified view on Self-Organizing Maps (SOMs) and Stochastic Neighbor Embedding (SNE)
  We propose a unified view on two widely used data visualization techniqu...

08/18/2021 · Stochastic Cluster Embedding
  Neighbor Embedding (NE) that aims to preserve pairwise similarities betw...

09/22/2020 · Stochastic Neighbor Embedding with Gaussian and Student-t Distributions: Tutorial and Survey
  Stochastic Neighbor Embedding (SNE) is a manifold learning and dimension...

08/10/2017 · Automatic Selection of t-SNE Perplexity
  t-Distributed Stochastic Neighbor Embedding (t-SNE) is one of the most w...

10/06/2021 · T-SNE Is Not Optimized to Reveal Clusters in Data
  Cluster visualization is an essential task for nonlinear dimensionality ...

06/10/2011 · A Computational Framework for Nonlinear Dimensionality Reduction of Large Data Sets: The Exploratory Inspection Machine (XIM)
  In this paper, we present a novel computational framework for nonlinear ...
