Heavy-tailed kernels reveal a finer cluster structure in t-SNE visualisations

02/15/2019 ∙ by Dmitry Kobak, et al. ∙ Universität Tübingen ∙ Yale University

T-distributed stochastic neighbour embedding (t-SNE) is a widely used data visualisation technique. It differs from its predecessor SNE by the low-dimensional similarity kernel: the Gaussian kernel was replaced by the heavy-tailed Cauchy kernel, solving the "crowding problem" of SNE. Here, we develop an efficient implementation of t-SNE for a t-distribution kernel with an arbitrary degree of freedom ν, with ν→∞ corresponding to SNE and ν=1 corresponding to the standard t-SNE. Using theoretical analysis and toy examples, we show that ν<1 can further reduce the crowding problem and reveal finer cluster structure that is invisible in standard t-SNE. We further demonstrate the striking effect of heavier-tailed kernels on large real-life data sets such as MNIST, single-cell RNA-sequencing data, and the HathiTrust library. We use domain knowledge to confirm that the revealed clusters are meaningful. Overall, we argue that modifying the tail heaviness of the t-SNE kernel can yield additional insight into the cluster structure of the data.

1 Introduction

T-distributed stochastic neighbour embedding (t-SNE) [12] and related methods [15, 13] are used for data visualisation in many scientific fields dealing with thousands or even millions of high-dimensional samples. These fields range from single-cell cytometry [1] and transcriptomics [16, 19], where samples are cells and features are proteins or genes, to population genetics [4], where samples are people and features are single-nucleotide polymorphisms, to the humanities [14], where samples are books and features are words.

T-SNE was developed from an earlier method called SNE [5]. The central idea of SNE was to describe pairwise relationships between high-dimensional points in terms of normalised affinities: close neighbours have high affinity whereas distant samples have near-zero affinity. SNE then positions the points in two dimensions such that the Kullback-Leibler divergence between the high- and low-dimensional affinities is minimised. This worked to some degree but suffered from what was later called the “crowding problem”. The idea of t-SNE was to adjust the kernel transforming pairwise low-dimensional distances into affinities: the Gaussian kernel was replaced by the heavy-tailed Cauchy kernel (t-distribution with one degree of freedom, ν=1), ameliorating the crowding problem.

The choice of the specific heavy-tailed kernel was mostly motivated by mathematical and computational simplicity: a t-distribution with ν=1 has a density proportional to 1/(1+x²), which is mathematically compact and fast to compute. However, a t-distribution with any finite ν has heavier tails than the Gaussian distribution (which corresponds to ν→∞). It is therefore reasonable to explore the whole spectrum of values of ν from ∞ down to 0. Given that t-SNE (ν=1) outperforms SNE (ν→∞), it might be that for some data sets ν<1 would offer additional insights into the structure of the data.

While this seems like a straightforward extension and has already been discussed in the literature [10, 18], no efficient implementation of this idea has been available until now. T-SNE is usually optimised via adaptive gradient descent. While it is easy to write down the gradient for an arbitrary value of ν, the exact t-SNE from the original paper requires O(n²) time and memory and cannot be run for sample sizes much larger than a few thousand points. Efficient approximations have been developed that allow running approximate t-SNE for much larger sample sizes [11, 9], but until now they have only been implemented for ν=1. As a result, the effect of ν<1 on large real-life datasets has remained unknown.

Here we show that the recent FIt-SNE approximation [9] can be modified to use an arbitrary value of ν, and demonstrate that ν<1 can reveal “hidden” structure, invisible with standard t-SNE.

2 Results

2.1 t-SNE with arbitrary degree of freedom

SNE defines the directional affinity of point j to point i as

p_{j|i} = exp(−‖x_i − x_j‖² / 2σ_i²) / Σ_{k≠i} exp(−‖x_i − x_k‖² / 2σ_i²).

For each i, this forms a probability distribution over all points j ≠ i (all p_{i|i} are set to zero). The variance of the Gaussian kernel σ_i² is chosen such that the perplexity of this probability distribution has some pre-specified value. In symmetric SNE (SSNE) and t-SNE the affinities are symmetrised and normalised,

p_{ij} = (p_{j|i} + p_{i|j}) / (2n),

to form a probability distribution on the set of all pairs (i, j), where n is the number of points. (In the following text we will not make a distinction between the symmetric SNE (SSNE) and the original, asymmetric, SNE.)

The points are then arranged in a low-dimensional space to minimise the Kullback-Leibler (KL) divergence between p_{ij} and the affinities in the low-dimensional space, q_{ij}:

L = KL(P‖Q) = Σ_{i≠j} p_{ij} log(p_{ij} / q_{ij}),   q_{ij} = k(d_{ij}) / Σ_{k≠l} k(d_{kl}),   d_{ij} = ‖y_i − y_j‖.

Here k(d) is a kernel that transforms the Euclidean distance d_{ij} between any two points into affinities, and y_i are the low-dimensional coordinates (all q_{ii} are set to 0).

SNE uses the Gaussian kernel k(d) = exp(−d²/2). T-SNE uses the t-distribution with one degree of freedom (also known as the Cauchy distribution): k(d) = 1/(1 + d²). Here we consider a general t-distribution kernel

k(d) = (1 + d²/ν)^(−(ν+1)/2).

As in [18], we use a simplified version defined as

k(d) = 1 / (1 + d²/ν)^ν.

This simplified kernel corresponds to a scaled t-distribution with 2ν−1 degrees of freedom. This means that using the simplified kernel instead of the general one in t-SNE produces an identical output apart from a global scaling factor. At the same time, the simplified kernel allows any ν > 0, including ν ≤ 1/2, for which the corresponding number of degrees of freedom 2ν−1 becomes zero or negative, i.e. it allows kernels with tails heavier than those of any possible t-distribution. (Equivalently, we could use an even simpler kernel k(d) = 1/(1 + d²)^ν that differs from the one above only by scaling. We prefer the form with d²/ν because of the explicit Gaussian limit at ν → ∞.) Yang et al. [18] use the same kernel but with ν replaced by 1/α, and call it “heavy-tailed SNE” (HSSNE).
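For concreteness, the simplified kernel can be sketched in a few lines of NumPy (an illustrative snippet, not part of the original implementation; the function name is ours):

```python
import numpy as np

def kernel(d, nu):
    """Simplified t-distribution kernel k(d) = 1 / (1 + d^2 / nu)^nu."""
    return 1.0 / (1.0 + d**2 / nu) ** nu

d = np.linspace(0.0, 5.0, 6)
print(np.allclose(kernel(d, 1.0), 1.0 / (1.0 + d**2)))        # nu = 1 recovers the Cauchy kernel
print(np.allclose(kernel(d, 1e6), np.exp(-d**2), atol=1e-4))  # nu -> infinity approaches a Gaussian
print(kernel(3.0, 0.5) > kernel(3.0, 1.0))                    # nu < 1 has heavier tails
```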

The gradient of the loss function (see Appendix or [18]) is

∂L/∂y_i = 4 Σ_j (p_ij − q_ij) (y_i − y_j) / (1 + d_ij²/ν).

Any implementation of exact t-SNE can be easily modified to use this expression instead of the standard ν = 1 gradient.
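The gradient expression above is equally easy to prototype; the following is a minimal NumPy sketch of an exact-gradient computation (our own illustrative code, not the reference implementation; array names are ours):

```python
import numpy as np

def tsne_gradient(Y, P, nu):
    """Exact gradient of the KL loss for the simplified kernel with d.o.f. nu.
    Y: (n, 2) embedding coordinates; P: (n, n) symmetric affinities with zero diagonal, summing to 1."""
    diff = Y[:, None, :] - Y[None, :, :]          # pairwise differences y_i - y_j
    d2 = np.sum(diff**2, axis=-1)                 # squared pairwise distances
    K = (1.0 + d2 / nu) ** (-nu)                  # kernel values k(d_ij)
    np.fill_diagonal(K, 0.0)
    Q = K / K.sum()                               # low-dimensional affinities q_ij
    W = (P - Q) / (1.0 + d2 / nu)                 # (p_ij - q_ij) times the kernel raised to 1/nu
    return 4.0 * np.sum(W[:, :, None] * diff, axis=1)
```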

Modern t-SNE implementations make two approximations. First, they set most p_ij to zero, apart from only a small number of close neighbours [11, 9], accelerating the attractive force computations (which can be very efficiently parallelised). This carries over to the ν ≠ 1 case. The repulsive forces are approximated in FIt-SNE by interpolation on a grid, further accelerated with the Fourier transform [9]. This interpolation can be carried out for ν ≠ 1 in full analogy to the ν = 1 case (see Appendix).

Importantly, the runtime of FIt-SNE with ν ≠ 1 is practically the same as with ν = 1. For example, embedding MNIST (n = 70 000) with perplexity 50 as described below took 90 seconds with ν = 1 and 97 seconds with ν = 0.5 on a computer with four double-threaded cores, 3.4 GHz each. (The numbers correspond to 1000 gradient descent iterations. The slight speed decrease is due to a more efficient implementation of the interpolation code for the special case of ν = 1.)
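For orientation, running FIt-SNE with a non-standard kernel might look as follows; we assume here that the Python wrapper fast_tsne exposes the degree of freedom as a df argument (as in FIt-SNE version 1.1.0 mentioned in the Discussion), so the exact argument name should be treated as an assumption:

```python
import numpy as np
from fast_tsne import fast_tsne  # Python wrapper shipped with FIt-SNE (assumed to be on the path)

X = np.random.randn(70000, 50)   # e.g. 50 PCs of the data (placeholder)

# Standard t-SNE (nu = 1) vs a heavier-tailed kernel; the `df` keyword is how
# FIt-SNE >= 1.1.0 is assumed to expose the degree of freedom.
Z_standard = fast_tsne(X, perplexity=50)
Z_heavy = fast_tsne(X, perplexity=50, df=0.5)
```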

2.2 Toy examples

We first applied exact t-SNE with various values of ν to a simple toy data set consisting of several well-separated clusters. Specifically, we generated a 10-dimensional data set with 100 data points in each of the 10 classes (1000 points overall). The points in class i were sampled from a spherical Gaussian distribution with its mean proportional to e_i, the i-th basis vector. We used perplexity 50 and default optimisation parameters (1000 iterations, learning rate 200, early exaggeration 12, length of early exaggeration 250, initial momentum 0.5, switched to 0.8 after 250 iterations).
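A data set of this kind can be generated as follows; the between-mean spacing below is an illustrative value, not necessarily the one used for Figure 1:

```python
import numpy as np

rng = np.random.default_rng(42)
n_classes, n_per_class, dim = 10, 100, 10
spacing = 6.0  # illustrative spacing between cluster means (assumption)

X = np.concatenate([
    rng.standard_normal((n_per_class, dim)) + spacing * np.eye(dim)[i]  # spherical cluster centred on a multiple of e_i
    for i in range(n_classes)
])
labels = np.repeat(np.arange(n_classes), n_per_class)
print(X.shape)  # (1000, 10)
```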

Figure 1: Toy example with ten Gaussian clusters. (A) SNE visualisation of 10 spherical clusters that are all equally far away from each other (ν = 100). (B) Standard t-SNE visualisation of the same data set (ν = 1). (C) t-SNE visualisation with ν = 0.5. The same random seed was used for initialisation in all panels. Scale bars are shown in the bottom-right of each panel.

Figure 1 shows the t-SNE results for ν = 100, ν = 1, and ν = 0.5. A t-distribution with ν = 100 degrees of freedom is very close to the Gaussian distribution, so here and below we will refer to the ν = 100 result as SNE. We see that class separation monotonically increases with decreasing ν: t-SNE (Figure 1B) separates the classes much better than SNE (Figure 1A), but t-SNE with ν = 0.5 separates them much better still (Figure 1C).

Figure 2: Toy example with ten “dumbbell”-shaped clusters. (A) SNE visualisation of 10 dumbbell-shaped clusters (ν = 100). (B) Standard t-SNE visualisation (ν = 1). (C) t-SNE visualisation with ν = 0.5.

In the above toy example, the choice between different values of ν is mostly aesthetic. This is not the case in the next toy example. Here we increase the dimensionality to 20 and shift 50 points in each class i along an additional, class-specific dimension in one direction, and the remaining 50 points in the opposite direction. The intuition is that now each of the 10 classes has a “dumbbell” shape. This shape is invisible in SNE (Figure 2A) and hardly visible in standard t-SNE (Figure 2B), but becomes apparent with ν = 0.5 (Figure 2C). In this case, decreasing ν below 1 is necessary to bring out the fine structure of the data.

2.3 Mathematical analysis

We showed that decreasing ν increases cluster separation (Figures 1, 2). Why does this happen? An informal argument is that, in order to match the between-cluster affinities p_ij, the distances between clusters in the t-SNE embedding need to grow when the kernel becomes progressively more heavy-tailed [12].

To quantify this effect, we considered an example of two standard Gaussian clusters in 10 dimensions, with the between-centroid distance chosen large enough that the clusters can be unambiguously separated. We used exact t-SNE (perplexity 50) with various values of ν from 0.2 to 3.0 and measured the cluster separation in the embedding. As a scale-invariant measure of separation we used the between-centroid distance divided by the root-mean-square within-cluster distance. Indeed, we observed a monotonic decrease of this measure with growing ν (Figure 3).
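This separation measure is straightforward to compute; below is a small helper (our own sketch, interpreting the within-cluster distance as the distance to the cluster centroid):

```python
import numpy as np

def separation(Y, labels):
    """Between-centroid distance divided by the root-mean-square within-cluster distance
    for a two-cluster embedding Y of shape (n, 2) with binary labels."""
    A, B = Y[labels == 0], Y[labels == 1]
    between = np.linalg.norm(A.mean(axis=0) - B.mean(axis=0))
    within = np.sqrt(np.mean([np.sum((C - C.mean(axis=0))**2, axis=1).mean() for C in (A, B)]))
    return between / within
```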

Figure 3: Separation in the t-SNE visualisation between the two well-separated clusters as a function of ν. Separation was measured as the between-centroid distance divided by the root-mean-square within-cluster distance.

The informal argument mentioned above can be replaced by the following formal one. Consider two high-dimensional clusters (n points in each) with all pairwise within-cluster distances equal to D_w and all pairwise between-cluster distances equal to D_b (this can be achieved in a space of sufficiently high dimensionality). In this case, the matrix p_ij has only two unique non-zero values: all within-cluster affinities are proportional to g(D_w) and all between-cluster affinities to g(D_b), where g(·) is the Gaussian kernel corresponding to the chosen perplexity value. Consider an exact t-SNE mapping to a space of the same dimensionality. In this idealised case, t-SNE can achieve zero loss by setting the within- and between-cluster distances d_w and d_b in the embedding such that the within- and between-cluster affinities are matched exactly. This will happen if

k(d_b) / k(d_w) = g(D_b) / g(D_w).

Plugging in the expression for k(d) and denoting the constant right-hand side by ρ, we obtain

(ν + d_b²) / (ν + d_w²) = ρ^(−1/ν).

The left-hand side can be seen as a measure of class separation close to the one used in Figure 3, and the right-hand side monotonically decreases with increasing ν (note that ρ < 1 because D_b > D_w).

In the simulation shown in Figure 3, the matrix p_ij does not have only two unique elements, the target dimensionality is two, and t-SNE cannot possibly achieve zero loss. Still, qualitatively we observe the same behaviour: an approximately power-law decrease of separation with increasing ν.

2.4 Real-life data sets

We now demonstrate that these theoretical insights are relevant to practical use cases on large-scale data sets. Here we use approximate t-SNE (FIt-SNE).

2.4.1 MNIST

We applied t-SNE with various values of ν to the MNIST data set (Figure 4), comprising n = 70 000 grayscale 28×28 images of handwritten digits. As a pre-processing step, we used principal component analysis (PCA) to reduce the dimensionality from 784 to 50. We used perplexity 50 and default optimisation parameters apart from the learning rate, which we increased. (To get a good t-SNE visualisation of MNIST, it is helpful to increase either the learning rate or the length of the early exaggeration phase. Default optimisation parameters often lead to some of the digits being split into two clusters. In the cytometric context, this phenomenon was described in detail by [2].) For easier reproducibility, we initialised the t-SNE embedding with the first two PCs (scaled such that PC1 had standard deviation 0.0001).
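The pre-processing and the scaled PCA initialisation can be sketched with scikit-learn as follows (data loading and the t-SNE call itself are omitted):

```python
import numpy as np
from sklearn.decomposition import PCA

# X: (70000, 784) array of flattened MNIST images (loading not shown)
X50 = PCA(n_components=50).fit_transform(X)

# Initialise the embedding with the first two PCs, rescaled so that PC1 has std 0.0001
init = X50[:, :2].copy()
init = init / np.std(init[:, 0]) * 0.0001
```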

Figure 4: MNIST data set (n = 70 000). (A) SNE visualisation (ν = 100). (B) Standard t-SNE visualisation (ν = 1). (C) t-SNE visualisation with ν = 0.5. The colours are consistent across panels (A–C); labels are shown in (A). PCA initialisation was used in all three cases. Transparency 0.5 for all dots in all three panels. (D) Average images for some individual sub-clusters from (C). The sub-clusters were isolated via DBSCAN with default settings as implemented in scikit-learn. Up to five sub-clusters with at least 100 points are shown, ordered from top to bottom by abundance.

To the best of our knowledge, Figure 4A is the first SNE (ν = 100) visualisation of the whole of MNIST: we are not aware of any SNE implementation that can handle a data set of this size. It produces a surprisingly good visualisation but is nevertheless clearly outperformed by standard t-SNE (ν = 1, Figure 4B): many digits coalesce together in SNE but get separated into clearly distinct clusters in t-SNE. Remarkably, reducing ν to 0.5 makes each digit further split into multiple separate sub-clusters (Figure 4C), revealing a fine structure within each of the digits.

To demonstrate that these sub-clusters are meaningful, we computed the average MNIST image for some of the sub-clusters (Figure 4D). In each case, the shapes appear to be meaningfully distinct: e.g. for the digit “4”, the handwriting is more italic in one sub-cluster, wider in another, and features a non-trivial homotopy group (i.e. has a loop) in yet another one. Similarly, digit “2” is separated into three sub-clusters, with the most abundant one showing a loop in the bottom-left and the next most abundant one having a sharp angle instead. Digit “1” is split according to the stroke angle. Re-running t-SNE using random initialisation with different seeds yielded consistent results. Points that appear as outliers in Figure 4C mostly correspond to confusingly written digits.
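The sub-cluster averages of Figure 4D can be reproduced along these lines (an illustrative sketch; variable names are ours):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Z: (n, 2) t-SNE embedding with nu = 0.5; X: (n, 784) raw images; y: (n,) digit labels
mask = (y == 4)                                   # e.g. inspect the digit "4"
sub = DBSCAN().fit_predict(Z[mask])               # default settings, as in the figure caption
sizes = {c: np.sum(sub == c) for c in set(sub) if c != -1}
top = [c for c in sorted(sizes, key=sizes.get, reverse=True) if sizes[c] >= 100][:5]
mean_images = [X[mask][sub == c].mean(axis=0).reshape(28, 28) for c in top]
```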

MNIST has been a standard example for t-SNE starting from the original t-SNE paper [12], and it has often been observed that t-SNE preserves meaningful within-digit structure. Indeed, the sub-clusters that we identified in Figure 4C are usually close together in Figure 4B. (This can be clearly seen in an animation that slowly decreases ν from 100 to 0.5, see http://github.com/berenslab/finer-tsne.) However, standard t-SNE does not separate them into visually isolated sub-clusters, and so does not make this internal structure obvious.

2.4.2 Single-cell RNA-sequencing data

For the second example, we took the transcriptomic data set from [16], comprising cells from the adult mouse cortex (sequenced with the Smart-seq2 protocol). Dimensions correspond to genes, and the data are the integer counts of RNA transcripts of each gene in each cell. Using a custom expert-validated clustering procedure, the authors divided these cells into 133 clusters. In Figure 5, we used the cluster ids and cluster colours from the original publication.

Figure 5A shows the standard t-SNE (ν = 1) of this data set, following common transcriptomic pre-processing steps as described in [7]. Briefly, we row-normalised and log-transformed the data, selected the 3000 most variable genes, and used PCA to further reduce the dimensionality to 50. We used perplexity 50 and PCA initialisation. The resulting t-SNE visualisation is in reasonable agreement with the clustering results; however, it lumps many clusters together into contiguous “islands” or “continents” and overall suggests many fewer than 133 distinct clusters.
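A minimal sketch of these pre-processing steps follows; the normalisation target and the gene-selection statistic below are simplifications and may differ from the exact procedure of [7]:

```python
import numpy as np
from sklearn.decomposition import PCA

# counts: (n_cells, n_genes) matrix of transcript counts
cpm = counts / counts.sum(axis=1, keepdims=True) * 1e6   # row-normalise (here: counts per million, an assumption)
logdata = np.log2(cpm + 1)                               # log-transform

top_genes = np.argsort(logdata.var(axis=0))[-3000:]      # 3000 most variable genes (simplified criterion)
X50 = PCA(n_components=50).fit_transform(logdata[:, top_genes])
```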

Figure 5: Tasic et al. data set. (A) Standard t-SNE visualisation (ν = 1). Cluster ids and cluster colours are taken from the original publication [16]: cold colours for excitatory neurons, warm colours for inhibitory neurons, and grey/brown colours for non-neural cells such as astrocytes or microglia. (B) t-SNE visualisation with ν = 0.5. (C) A zoom-in into the left side of panel (B) showing all Vip clusters from Tasic et al. Black circles mark cluster centroids (medians).

Reducing the number of degrees of freedom to ν = 0.5 splits many of the contiguous islands into “archipelagos” of smaller disjoint areas (Figure 5B). In many cases, this roughly agrees with the clustering results of [16]. Figure 5C shows a zoom-in into the Vip clusters (west-southwest part of panel B), which provide one such example: isolated islands correspond well to individual clusters (or sometimes pairs of clusters). Importantly, the cluster labels in this data set are not ground truth; nevertheless, the agreement between the cluster labels and t-SNE with ν = 0.5 provides additional evidence that this data categorisation is meaningful.

2.4.3 HathiTrust library

For the final example, we used the HathiTrust library data set [14]. The full data set comprises 13.6 million books and can be described with several million features that represent the counts of each word in each book. We used the pre-processed data from [14]: briefly, the word counts were row-normalised, log-transformed, projected to 1280 dimensions using a random linear projection, and then reduced to 100 PCs. (The data set was downloaded from https://zenodo.org/record/1477018.) The available meta-data include author name, book title, publication year, language, and Library of Congress classification (LCC) code. For simplicity, we took a subset consisting of all books in the Russian language. We used perplexity 50 and an increased learning rate.

Figure 6: Russian-language part of the HathiTrust library. (A) Standard t-SNE visualisation (ν = 1). Colour denotes publication year. (B) t-SNE visualisation with ν = 0.5. Contours in both panels are kernel density estimate contour lines for mathematical literature (lower right) and poetry (upper left), plotted with seaborn.kdeplot() with the Gaussian bandwidth set to 2.0. Contour levels were manually tuned to enclose the majority of the books.

Figure 6A shows the standard t-SNE visualisation (ν = 1) coloured by publication year. The most salient feature is that pre-1917 books cluster together (orange/red colours): this is due to the major reform of Russian orthography implemented in 1917, which led to most words changing their spelling. However, not much substructure can be seen among the books published after (or before) 1917. In contrast, the t-SNE visualisation with ν = 0.5 fragments the corpus into a large number of islands (Figure 6B).

We can identify some of the islands by inspecting the available meta-data. For example, mathematical literature (LCC code QA) is not separated from the rest in standard t-SNE, but occupies the leftmost island in t-SNE with ν = 0.5 (blue contour lines in both panels). Several neighbouring islands correspond to the physics literature (LCC code QC; not shown). In an attempt to capture something radically different from mathematics, we selected all books authored by several famous Russian poets (Anna Akhmatova, Alexander Blok, Joseph Brodsky, Afanasy Fet, Osip Mandelstam, Vladimir Mayakovsky, Alexander Pushkin, and Fyodor Tyutchev). This is not a curated list: there are non-poetry books authored by these authors, while many other poets were not included (the list was not cherry-picked; we made it before looking at the data). Nevertheless, with ν = 0.5 the poetry books printed after 1917 seemed to occupy two neighbouring islands, and the ones printed before 1917 were reasonably isolated as well (Figure 6B, black contour lines). In the standard t-SNE visualisation, poetry was not at all separated from the surrounding population of books.

3 Related work

Yang et al. [18] introduced symmetric SNE with the kernel family k(d) = 1/(1 + αd²)^(1/α), calling it “heavy-tailed symmetric SNE” (HSSNE). This is exactly the same kernel family as the simplified kernel from Section 2.1, but with ν replaced by 1/α. However, Yang et al. did not show any examples of heavier-tailed kernels revealing additional structure compared to ν = 1 and did not provide an implementation suitable for large sample sizes. Interestingly, Yang et al. argued that gradient descent is not suitable for HSSNE and suggested an alternative optimisation algorithm; here we demonstrated that the standard t-SNE optimisation works reasonably well in a wide range of ν values (but see Discussion).

Van der Maaten [10] discussed the choice of the number of degrees of freedom in the t-distribution kernel in the context of parametric t-SNE. He argued that values above 1 might be warranted when embedding the data in more than two dimensions. He also implemented a version of parametric t-SNE that optimises over ν. However, similar to [18], [10] did not contain any examples of ν ≠ 1 being actually useful in practice.

UMAP [13] is a promising recent algorithm closely related to the earlier LargeVis [15]; both are similar to t-SNE but modify the repulsive forces to make them amenable to sampling-based stochastic optimisation. UMAP uses the following family of similarity kernels:

k(d) = 1 / (1 + a·d^(2b)),

which reduces to the Cauchy kernel when a = b = 1 and is more heavy-tailed when b < 1. UMAP's default values of a and b are determined by the min_dist input parameter (default value 0.1). Decreasing min_dist all the way to zero corresponds to decreasing b to 0.79. In our experiments, we observed that modifying min_dist (or b directly) led to an effect qualitatively similar to modifying ν in t-SNE. For some data sets this required manually decreasing b below 0.79. In the case of MNIST, setting b well below 0.79, but not b = 0.79 itself, revealed sub-digit structure (Figure S1), an effect that has not been described before (cf. [13], where McInnes et al. state that min_dist is “an essentially aesthetic parameter”). In other words, the same conclusion seems to apply to UMAP: heavy-tailed kernels reveal a finer cluster structure. A more in-depth study of the relationship between the two algorithms is beyond the scope of this paper.
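To experiment with the kernel tails in UMAP, one can either lower min_dist or pass a and b directly, since both are constructor arguments of umap.UMAP; the specific values below are illustrative and the input matrix X is assumed to be defined:

```python
import umap

# X: pre-processed data matrix (e.g. 50 PCs), assumed to be defined
emb_default = umap.UMAP(min_dist=0.1).fit_transform(X)   # default: a, b derived from min_dist and spread

# Heavier-tailed kernel 1 / (1 + a * d^(2b)): set a and b by hand, with b well below 0.79
emb_heavy = umap.UMAP(a=1.0, b=0.3).fit_transform(X)
```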

4 Discussion

We showed that using ν < 1 in t-SNE can yield insightful visualisations that are qualitatively different from those obtained with the standard choice of ν = 1. Crucially, the choice of ν = 1 was made in [12] for reasons of mathematical convenience, and we are not aware of any a priori argument in favour of ν = 1. As ν ≠ 1 still yields a t-distribution kernel (a scaled t-distribution, to be precise), we prefer not to use a separate acronym (HSSNE [18]). If needed, one can refer to t-SNE with ν < 1 as “heavy-tailed” t-SNE.

We found that lowering ν below 1 makes progressively finer structure apparent in the visualisation and brings out smaller clusters, which, at least in the data sets studied here, are often meaningful. In a way, ν can be thought of as a “magnifying glass” for the standard t-SNE representation. We do not think that there is one ideal value of ν suitable for all data sets and all situations; instead, we consider it a useful adjustable parameter of t-SNE, complementary to the perplexity. We observed a non-trivial interaction between ν and perplexity: small vs. large perplexity makes the affinity matrix p_ij represent the local vs. global structure of the data [7], whereas small vs. large ν makes the embedding represent the finer vs. coarser structure of the affinity matrix. In practice, it can make sense to treat (perplexity, ν) as a two-dimensional parameter space to explore. However, for large data sets, it is computationally unfeasible to substantially increase the perplexity beyond its standard range of 30–100 (as it would prohibitively increase the runtime), and so ν becomes the only available parameter to adjust.

Figure 7: Quality assessment of the MNIST embedding with various values of ν, after 1000 gradient descent iterations with an increased learning rate (scaled PCA initialisation). The horizontal axis is on a log scale. The values of ν were sampled on a grid with a step size that was finer for smaller ν. The black line shows the KL divergence (left axis). Running gradient descent with ν = 0.5 for 10 000 iterations (Figure S3) lowered the KL divergence down to 3.6, which was still above the minimum value across ν. Blue lines show neighbourhood preservation (the fraction of the k nearest neighbours of each point that remain within the k nearest neighbours in the embedding, averaged over all points) for three values of k.

One important caveat should be kept in mind. It is well known that t-SNE, especially with low perplexity, can find “clusters” in pure noise, picking up random fluctuations in the density [17]. This can happen with ν = 1 but gets exacerbated with lower values of ν. A related point concerns clustered real-life data where separate clusters (local density peaks) can sometimes be connected by an area of lower but non-zero density: for example, [16] argued that many pairs of their 133 clusters have intermediate cells. Our experiments demonstrate that lowering ν can make such clusters more and more isolated in the embedding, creating a potentially misleading appearance of perfect separation (see e.g. Figure 1). In other words, there is a trade-off between bringing out finer cluster structure and preserving continuities between clusters.

Choosing a value of ν that yields the most faithful representation of a given data set is challenging, because it is difficult to quantify the “faithfulness” of any given embedding [8]. For example, for MNIST, the KL divergence attains its minimum at one particular value of ν (Figure 7), but it may not be the ideal metric to quantify embedding quality [6]. Indeed, we found that k-nearest-neighbour (KNN) preservation [8] peaked elsewhere, and at different values of ν for different neighbourhood sizes k (Figure 7). We stress that we do not think that KNN preservation is the most appropriate metric here; our point is that different metrics can easily disagree with each other. In general, there may not be a single “best” embedding of high-dimensional data in a two-dimensional space. Rather, by varying ν, one can obtain different complementary “views” of the data.
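The KNN preservation metric used in Figure 7 can be computed, for example, as follows (our own sketch of the metric described in the text):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_preservation(X, Z, k):
    """Fraction of each point's k nearest neighbours in the high-dimensional data X that are
    still among its k nearest neighbours in the embedding Z, averaged over all points."""
    nn_hi = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X, return_distance=False)[:, 1:]
    nn_lo = NearestNeighbors(n_neighbors=k + 1).fit(Z).kneighbors(Z, return_distance=False)[:, 1:]
    return np.mean([len(set(a) & set(b)) / k for a, b in zip(nn_hi, nn_lo)])
```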

Very low values of ν correspond to kernels with very wide and very flat tails, leading to vanishing gradients and difficult convergence. We found that ν = 0.5 could still be used safely, whereas much smaller values led to optimisation problems (Figure S2). In fact, it may take more iterations to reach convergence for ν < 1 than for ν = 1. As an example, running t-SNE on MNIST with ν = 0.5 for ten times longer than we did for Figure 4C led to the embedding expanding much further (which slows down the FIt-SNE interpolation) and, as a result, resolving additional sub-clusters (Figure S3). On a related note, when using only a single MNIST digit as input for t-SNE with ν = 0.5, the embedding also fragments into many more clusters (Figure S4), which we hypothesise is due to the points rapidly expanding to occupy a much larger area compared to what happens in the full MNIST embedding (Figure S4). This can be counterbalanced by increasing the strength of the attractive forces (Figure S4). Overall, the effect of the embedding scale on the cluster resolution remains an open research question.

In conclusion, we have shown that adjusting the heaviness of the kernel tails in t-SNE can be a valuable tool for data exploration and visualisation. As a practical recommendation, we suggest embedding any given data set using several values of ν, each inducing a different level of clustering and hence providing insight that cannot be obtained from the standard choice ν = 1 alone. (Our code is available at http://github.com/berenslab/finer-tsne. The main FIt-SNE repository at http://github.com/klugerlab/FIt-SNE was updated to support any ν; version 1.1.0.)

5 Appendix

The loss function, up to a constant term Σ_{ij} p_ij log p_ij, can be rewritten as follows:

L = −Σ_{ij} p_ij log k(d_ij) + log Σ_{ij} k(d_ij),    (1)

where we took into account that Σ_{ij} p_ij = 1. The first term in Eq. (1) contributes attractive forces to the gradient while the second term yields repulsive forces. The gradient is

∂L/∂y_i = −2 Σ_j p_ij ∇_i log k(d_ij) + (2 / Σ_{kl} k(d_kl)) Σ_j ∇_i k(d_ij)    (2)
        = −2 Σ_j (p_ij − q_ij) ∇_i log k(d_ij),    (3)

where ∇_i denotes the gradient with respect to y_i. The first expression is more convenient for numeric optimisation while the second one can be more convenient for mathematical analysis.

For the kernel

k(d) = 1 / (1 + d²/ν)^ν,

the gradient of its logarithm is

∇_i log k(d_ij) = −2 (y_i − y_j) / (1 + d_ij²/ν).    (4)

Plugging Eq. 4 into Eq. 3, we obtain the expression for the gradient [18]

∂L/∂y_i = 4 Σ_j (p_ij − q_ij) (y_i − y_j) / (1 + d_ij²/ν).

(Note that the C++ Barnes-Hut t-SNE implementation [11] absorbed the factor 4 into the learning rate, and the FIt-SNE implementation [9] followed this convention.)

For numeric optimisation it is convenient to split this expression into the attractive and the repulsive terms. Plugging Eq. 4 into Eq. 2, we obtain

∂L/∂y_i = F_att + F_rep,
F_att = 4 Σ_j p_ij (y_i − y_j) / (1 + d_ij²/ν),
F_rep = −(4/Z) Σ_j (y_i − y_j) / (1 + d_ij²/ν)^(ν+1),

where

Z = Σ_{k≠l} (1 + d_kl²/ν)^(−ν).

It is noteworthy that the expression for F_att contains the kernel raised to the 1/ν power, which cancels out the fractional exponent in k(d). This makes the runtime of the attractive force computation unaffected by the value of ν. In FIt-SNE, the sum over j in F_att is approximated by the sum over the 3P approximate nearest neighbours of point i obtained using Annoy [3], where P is the provided perplexity value. This heuristic comes from [11]. The remaining p_ij values are set to zero.
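The approximate nearest-neighbour search can be done with Annoy along the following lines (an illustrative sketch; the number of trees and other settings are ours and not FIt-SNE's exact defaults):

```python
import numpy as np
from annoy import AnnoyIndex

def approx_knn(X, perplexity=50, n_trees=50):
    """Return, for each point, the indices of its 3*perplexity approximate nearest neighbours."""
    n, dim = X.shape
    index = AnnoyIndex(dim, 'euclidean')
    for i in range(n):
        index.add_item(i, X[i].tolist())
    index.build(n_trees)
    k = 3 * perplexity
    # the nearest neighbour of a point is the point itself, so query k + 1 and drop it
    return np.array([index.get_nns_by_item(i, k + 1)[1:] for i in range(n)])
```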

The repulsive term F_rep can be approximated using the interpolation scheme from [9]. It allows fast approximate computation of sums of the form

Σ_j K(y_i, y_j)   and   Σ_j K(y_i, y_j) y_j,

where K is any smooth kernel, by using polynomial interpolation of K on a fine grid. (The accuracy of the interpolation can somewhat decrease for small values of ν. One can increase the accuracy by decreasing the spacing of the interpolation grid (see FIt-SNE documentation). We found that this did not noticeably affect the visualisations.) All kernels appearing in F_rep are smooth.

Acknowledgements

This work was supported by the Deutsche Forschungsgemeinschaft (BE5601/4-1, EXC 2064, Project ID 390727645) (PB), the Federal Ministry of Education and Research (FKZ 01GQ1601, 01IS18052C), and the National Institute of Mental Health under award number U19MH114830 (DK and PB), NIH grants F30HG010102 and U.S. NIH MSTP Training Grant T32GM007205 (GCL), NSF grant DMS-1763179 and the Alfred P. Sloan Foundation (SS), and the NIH grant R01HG008383 (YK). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

  • [1] Amir, E.a.D., Davis, K.L., Tadmor, M.D., Simonds, E.F., Levine, J.H., Bendall, S.C., Shenfeld, D.K., Krishnaswamy, S., Nolan, G.P., Pe’er, D.: viSNE enables visualization of high dimensional single-cell data and reveals phenotypic heterogeneity of leukemia. Nature Biotechnology 31(6),  545 (2013)
  • [2] Belkina, A.C., Ciccolella, C.O., Anno, R., Spidlen, J., Halpert, R., Snyder-Cappione, J.: Automated optimal parameters for t-distributed stochastic neighbor embedding improve visualization and allow analysis of large datasets. bioRxiv (2018)
  • [3] Bernhardsson, E.: Annoy. https://github.com/spotify/annoy (2013)
  • [4] Diaz-Papkovich, A., Anderson-Trocme, L., Gravel, S.: Revealing multi-scale population structure in large cohorts. bioRxiv (2018)
  • [5] Hinton, G., Roweis, S.: Stochastic neighbor embedding. In: Advances in Neural Information Processing Systems. pp. 857–864 (2003)
  • [6] Im, D.J., Verma, N., Branson, K.: Stochastic neighbor embedding under f-divergences. arXiv (2018)
  • [7] Kobak, D., Berens, P.: The art of using t-SNE for single-cell transcriptomics. bioRxiv (2018)
  • [8] Lee, J.A., Verleysen, M.: Quality assessment of dimensionality reduction: Rank-based criteria. Neurocomputing 72(7-9), 1431–1443 (2009)
  • [9] Linderman, G.C., Rachh, M., Hoskins, J.G., Steinerberger, S., Kluger, Y.: Fast interpolation-based t-SNE for improved visualization of single-cell RNA-seq data. Nature Methods 16, 243–245 (2019)
  • [10] van der Maaten, L.: Learning a parametric embedding by preserving local structure. In: International Conference on Artificial Intelligence and Statistics. pp. 384–391 (2009)
  • [11] van der Maaten, L.: Accelerating t-SNE using tree-based algorithms. Journal of Machine Learning Research 15(1), 3221–3245 (2014)
  • [12] van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(Nov), 2579–2605 (2008)
  • [13] McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv (2018)
  • [14] Schmidt, B.: Stable random projection: Lightweight, general-purpose dimensionality reduction for digitized libraries. Journal of Cultural Analytics (2018)
  • [15] Tang, J., Liu, J., Zhang, M., Mei, Q.: Visualizing large-scale and high-dimensional data. In: Proceedings of the 25th International Conference on World Wide Web. pp. 287–297. International World Wide Web Conferences Steering Committee (2016)
  • [16] Tasic, B., Yao, Z., Graybuck, L.T., Smith, K.A., Nguyen, T.N., Bertagnolli, D., Goldy, J., Garren, E., Economo, M.N., Viswanathan, S., et al.: Shared and distinct transcriptomic cell types across neocortical areas. Nature 563(7729),  72 (2018)
  • [17] Wattenberg, M., Viégas, F., Johnson, I.: How to use t-SNE effectively. Distill 1(10),  e2 (2016)
  • [18] Yang, Z., King, I., Xu, Z., Oja, E.: Heavy-tailed symmetric stochastic neighbor embedding. In: Advances in Neural Information Processing Systems. pp. 2169–2177 (2009)
  • [19] Zeisel, A., Hochgerner, H., Lonnerberg, P., Johnsson, A., Memic, F., van der Zwan, J., Haring, M., Braun, E., Borm, L., La Manno, G., et al.: Molecular architecture of the mouse nervous system. Cell 174(4), 999–1014 (2018)

Supplementary Figures

Figure S1: UMAP visualisations of the MNIST data set with default parameters and min_dist set to 0.5, 0.1 (default), 0.01, and with b manually set to a value well below 0.79. Each subplot shows the a and b values of the UMAP kernel 1/(1 + a·d^(2b)). With min_dist below 0.01, the kernel as a function of min_dist hardly changes. As a side note, one can obtain an MNIST embedding very similar to the one that UMAP gives with default settings using t-SNE with late exaggeration, i.e. multiplying all attractive forces by a constant factor after the early exaggeration period (first 250 iterations) is over. This makes sense, given that the main difference between t-SNE and UMAP is in the repulsive term.
Figure S2: Toy example with ten “dumbbell”-shaped clusters from Figure 2, here embedded with a value of ν below 0.5. The top-left plot shows the result after 1000 gradient descent iterations (default). Note that the dumbbell shape is lost: whereas the number of visible clusters increased as ν was lowered from 100 to 0.5 (Figure 2), it decreased when ν was lowered further. We believe the reason for this is that the strong repulsion between dumbbells “squashes” them at the beginning of optimisation into very compact blobs. It is likely that longer optimisation would resolve the dumbbell shapes. This is difficult to test because such a heavy-tailed kernel is extremely wide and flat, leading to slow convergence. The top-right plot shows the result after 5000 iterations. Here a few outlying points get pushed to the periphery. Zooming in on the main 10 clusters (bottom-left) still does not resolve the dumbbell shapes. Further zooming in on one of the dumbbells (bottom-right) shows that the points are squashed into 1D, which may be a sign of poor convergence. In a separate set of experiments, we observed a similar phenomenon with MNIST: after 1000 iterations, this smaller ν yielded fewer visible clusters than ν = 0.5. Our conclusion is that smaller values of ν should be used with caution.
Figure S3: t-SNE visualisation of the MNIST dataset with ν = 0.5. The top-left panel is identical to Figure 4C; it was obtained with 1000 gradient descent iterations (the default value). The top-right panel corresponds to 10 000 iterations and has many more isolated sub-clusters. This can also be seen in the bottom row showing the respective zoom-ins into the digit “4”. At the same time, the embedding after 1000 iterations is not misleading; it is simply a coarser-grained version of the embedding after 10 000 iterations. Using 10 000 iterations is impractical: whereas 1000 iterations finished in 1.5 minutes, 10 000 iterations took 4 hours 30 minutes. This is because the FIt-SNE interpolation scheme uses a regular interpolation grid with the number of nodes growing quadratically with the embedding size, and the embedding after 10 000 iterations expands over a much larger range than the embedding after 1000 iterations. In principle, an implementation based on the fast multipole method (FMM) could be developed to dramatically accelerate the gradient computation in this setting where most of the embedding space is “empty space”, but the current FIt-SNE implementation does not support it. Note that the standard t-SNE embedding with ν = 1 also expands much further after 10 000 iterations compared to 1000 iterations. However, with ν = 1 it does not resolve additional sub-clusters, at least in MNIST.
Figure S4: t-SNE visualisation of a MNIST subset consisting of all images of the digit “4” (perplexity 50). Left: ν = 0.5, the same as in Figure 4C. Note the large number of isolated clusters. We believe this happens because the embedding rapidly expands to a larger area, compared to Figure S3 (bottom-left). One piece of evidence for this is that re-running t-SNE after adding several random Gaussian clusters roughly recovers the shape of the digit “4” archipelago from the full MNIST embedding (middle). Right: ν = 0.5 and exaggeration factor 1.5 [7], i.e. all attractive forces are multiplied by 1.5 after the end of the early exaggeration phase (during the early exaggeration they are multiplied by 12). This also roughly recovers the sub-clusters from the full MNIST embedding (Figure S3). The relationship between ν and exaggeration remains a topic for future work.