R package for Barnes-Hut t-SNE
The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects.
Data-visualization techniques are an essential tool for any data analyst, as they allow the analyst to visually explore the data and generate hypotheses. One of the key limitations of traditional visualization techniques such as histograms, scatter plots, and parallel coordinate plots is that they only facilitate the visualization of one or a few data variables at a time. In order to get an idea of the structure of all variables in the data, it is therefore necessary to perform an automatic analysis of the data before making the visualization, for instance, by learning a low-dimensional embedding of the data. In such an embedding, each data object is represented by a low-dimensional point in such a way that nearby points correspond to similar data objects and distant points correspond to dissimilar data objects. The low-dimensional embedding can readily be visualized in, e.g., a scatter plot or a parallel coordinate plot.
A plethora of embedding techniques have been proposed over the last decade, e.g., [5, 15, 20, 23, 25, 26]. For creating two- or three-dimensional embeddings that can readily be visualized in a scatter plot, a family of techniques based on stochastic neighbor embedding (SNE) has recently become very popular. These techniques compute an N × N similarity matrix in both the original data space and in the low-dimensional embedding space; the similarities take the form of a probability distribution over pairs of points, in which high probabilities correspond to similar objects or points. The probabilities are generally defined as normalized Gaussian or Student-t kernels, which causes SNE to focus on preserving local data structure. The embedding is learned by minimizing the Kullback-Leibler divergence between the probability distributions in the original data space and the embedding space with respect to the locations of the points in the embedding. As the resulting cost function is non-convex, this minimization is typically performed using first- or second-order gradient-descent techniques [5, 11, 27]. The gradient of the Kullback-Leibler divergence may be interpreted as an N-body system in which all of the points exert forces on each other.
One of the key limitations of SNE (and of its variants) is that its computational and memory complexity scales quadratically in the number of data objects N. In practice, this limits the applicability of SNE to data sets with only a few thousand points. To visualize larger data sets, landmark implementations of SNE may be used, but this is hardly a satisfactory solution.
In this paper, we develop a new algorithm for (t-)SNE that requires only O(N log N) computation and O(N) memory. Our new algorithm computes a sparse approximation of the similarities between the original data objects using vantage-point trees, and subsequently approximates the forces between the corresponding points in the embedding using a Barnes-Hut algorithm, which is commonly used by astronomers to perform N-body simulations. The Barnes-Hut algorithm reduces the number of pairwise forces that need to be computed by exploiting the fact that the forces exerted by a group of points on a point that is relatively far away are all very similar.
A large body of previous work has focused on decreasing the computational complexity of algorithms that scale quadratically in the amount of data when implemented naively. Most of these studies focus on speeding up nearest-neighbor searches using space-partitioning (metric) trees (e.g., B-trees, cover trees, and vantage-point trees) or locality-sensitive hashing approaches (e.g., [12, 29]). Motivated by the strong performance of metric trees reported in earlier work, we opt to use metric trees to approximate the similarities of the input objects in our algorithm.
Several prior studies have also developed algorithms to speed up N-body computations. Most prominently, [7, 8] developed a dual-tree algorithm that is similar in spirit to the Barnes-Hut algorithm we use in this work. The dual-tree algorithm does not consider interactions between single points and groups of points like the Barnes-Hut algorithm does; instead, it only considers interactions between groups of points. In preliminary experiments (see appendix), we found the dual-tree and Barnes-Hut algorithms to perform on par when used in the context of t-SNE; we opt for the Barnes-Hut algorithm here because it is conceptually simpler. Prior work has also used the fast Gauss transform [9, 30] (a special case of a fast multipole method) to speed up the computation of Gaussian N-body interactions. Since in t-SNE the forces exerted on the bodies are non-Gaussian, such an approach cannot readily be applied here.
t-Distributed Stochastic Neighbor Embedding (t-SNE) minimizes the Kullback-Leibler divergence between two distributions: a distribution P that measures pairwise similarities between the original data objects and a distribution Q that measures pairwise similarities between the corresponding points in the embedding. Suppose we are given a data set of N objects D = {x_1, x_2, ..., x_N} and a function d(x_i, x_j) that computes a distance between a pair of objects, e.g., their Euclidean distance. Our aim is to learn an s-dimensional embedding E = {y_1, y_2, ..., y_N} in which each object x_i is represented by a point y_i, with s typically set to 2 or 3. To this end, t-SNE defines joint probabilities p_ij that measure the pairwise similarity between objects x_i and x_j by symmetrizing two conditional probabilities as follows:

p_{j|i} = \frac{\exp(-d(x_i, x_j)^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-d(x_i, x_k)^2 / 2\sigma_i^2)}, \quad p_{i|i} = 0, \qquad p_{ij} = \frac{p_{j|i} + p_{i|j}}{2N}.
Herein, the bandwidth σ_i of the Gaussian kernels is set such that the perplexity of the conditional distribution equals a predefined perplexity u. The optimal value of σ_i varies per object, and is found using a simple binary search; see the original t-SNE paper for details. A heavy-tailed distribution is used to measure the similarity q_ij between the two corresponding points y_i and y_j in the embedding:

q_{ij} = \frac{(1 + \| y_i - y_j \|^2)^{-1}}{\sum_{k \neq l} (1 + \| y_k - y_l \|^2)^{-1}}.
In the embedding, a normalized Student-t kernel is used to measure similarities rather than a normalized Gaussian kernel to account for the difference in volume between high- and low-dimensional spaces. The locations of the embedding points are learned by minimizing the Kullback-Leibler divergence between the joint distributions P and Q:

C(E) = KL(P \| Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}.
This cost function is non-convex; it is typically minimized by descending along the gradient:

\frac{\partial C}{\partial y_i} = 4 \sum_{j \neq i} (p_{ij} - q_{ij}) \, q_{ij} Z \, (y_i - y_j),
where we defined the normalization term Z = \sum_{k \neq l} (1 + \| y_k - y_l \|^2)^{-1}. The evaluation of both joint distributions P and Q is O(N^2), because their respective normalization terms sum over all pairs of points. Since t-SNE scales quadratically in the number of objects N, its applicability is limited to data sets with only a few thousand data objects; beyond that, learning becomes very slow.
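To make the above definitions concrete, the following minimal NumPy sketch computes the exact (quadratic-cost) quantities introduced in this section: the input similarities p_ij with the per-object binary search over σ_i, the Student-t similarities q_ij, the Kullback-Leibler cost, and its gradient. It is an illustration only, not the optimized implementation referred to below; the function and variable names are ours.

```python
# Illustrative NumPy sketch of the exact O(N^2) t-SNE quantities defined above.
# Not the implementation used in the experiments; names are illustrative only.
import numpy as np

def input_similarities(X, perplexity=30.0, tol=1e-5, max_steps=100):
    """Symmetrized joint probabilities p_ij; each bandwidth sigma_i is found by a
    binary search so that the conditional distribution has the given perplexity."""
    N = X.shape[0]
    D = np.square(X[:, None, :] - X[None, :, :]).sum(-1)       # squared pairwise distances
    P_cond = np.zeros((N, N))
    target_entropy = np.log(perplexity)                         # desired Shannon entropy (nats)
    for i in range(N):
        d = np.delete(D[i], i)
        lo, hi, beta = 0.0, np.inf, 1.0                          # beta = 1 / (2 sigma_i^2)
        for _ in range(max_steps):
            p = np.exp(-d * beta)
            p /= max(p.sum(), 1e-12)
            entropy = -np.sum(p * np.log(np.maximum(p, 1e-12)))
            if abs(entropy - target_entropy) < tol:
                break
            if entropy > target_entropy:                         # too flat: narrow the kernel
                lo, beta = beta, (beta * 2.0 if np.isinf(hi) else (beta + hi) / 2.0)
            else:                                                # too peaked: widen the kernel
                hi, beta = beta, (beta + lo) / 2.0
        P_cond[i, np.arange(N) != i] = p
    return (P_cond + P_cond.T) / (2.0 * N)                       # p_ij = (p_j|i + p_i|j) / 2N

def embedding_similarities(Y):
    """Normalized Student-t similarities q_ij and the normalization term Z."""
    num = 1.0 / (1.0 + np.square(Y[:, None, :] - Y[None, :, :]).sum(-1))
    np.fill_diagonal(num, 0.0)
    Z = num.sum()
    return num / Z, Z

def kl_cost_and_gradient(P, Y):
    """Exact Kullback-Leibler cost and its gradient with respect to the embedding."""
    Q, Z = embedding_similarities(Y)
    cost = np.sum(P * np.log(np.maximum(P, 1e-12) / np.maximum(Q, 1e-12)))
    diff = Y[:, None, :] - Y[None, :, :]                         # pairwise differences y_i - y_j
    W = (P - Q) * (Q * Z)                                        # (p_ij - q_ij) (1 + ||y_i - y_j||^2)^-1
    grad = 4.0 * np.einsum('ij,ijk->ik', W, diff)
    return cost, grad
```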
Barnes-Hut-SNE uses metric trees to approximate P by a sparse distribution in which only O(uN) values are non-zero, and approximates the gradients ∂C/∂y_i using a Barnes-Hut algorithm.
As the input similarities are computed using a (normalized) Gaussian kernel, the probabilities p_ij corresponding to dissimilar input objects x_i and x_j are (nearly) infinitesimal. Therefore, we can use a sparse approximation to the probabilities p_ij without a substantial negative effect on the quality of the final embeddings. In particular, we compute the sparse approximation by finding the ⌊3u⌋ nearest neighbors of each of the N input objects, and redefining the pairwise similarities p_ij as:

p_{j|i} = \frac{\exp(-d(x_i, x_j)^2 / 2\sigma_i^2)}{\sum_{k \in N_i} \exp(-d(x_i, x_k)^2 / 2\sigma_i^2)} \text{ if } j \in N_i, \quad p_{j|i} = 0 \text{ otherwise;} \qquad p_{ij} = \frac{p_{j|i} + p_{i|j}}{2N}.
Herein, N_i represents the set of the ⌊3u⌋ nearest neighbors of x_i, and σ_i is set such that the perplexity of the conditional distribution equals u. The nearest-neighbor sets are found in O(uN log N) time by building a vantage-point tree on the data set.
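As an illustration of this sparse construction (not our actual implementation), the following sketch restricts each conditional distribution to a fixed-size neighbor set and symmetrizes the result into a sparse matrix; for brevity, it uses a brute-force neighbor search as a stand-in for the vantage-point tree described below, and the helper names are ours.

```python
# Illustrative sketch of the sparse input similarities: conditional probabilities are
# computed over each object's nearest-neighbor set only and then symmetrized.
# The brute-force neighbor search is a stand-in for the vantage-point tree.
import numpy as np
from scipy.sparse import csr_matrix

def sparse_input_similarities(X, n_neighbors, perplexity, tol=1e-5, max_steps=100):
    N = X.shape[0]
    D = np.square(X[:, None, :] - X[None, :, :]).sum(-1)        # stand-in for the tree search
    neighbors = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]     # skip the object itself
    rows, cols, vals = [], [], []
    target = np.log(perplexity)
    for i in range(N):
        d = D[i, neighbors[i]]
        lo, hi, beta = 0.0, np.inf, 1.0                          # beta = 1 / (2 sigma_i^2)
        for _ in range(max_steps):                               # binary search on the perplexity
            p = np.exp(-d * beta)
            p /= max(p.sum(), 1e-12)
            entropy = -np.sum(p * np.log(np.maximum(p, 1e-12)))
            if abs(entropy - target) < tol:
                break
            if entropy > target:
                lo, beta = beta, (beta * 2.0 if np.isinf(hi) else (beta + hi) / 2.0)
            else:
                hi, beta = beta, (beta + lo) / 2.0
        rows.extend([i] * n_neighbors)
        cols.extend(neighbors[i])
        vals.extend(p)
    P_cond = csr_matrix((vals, (rows, cols)), shape=(N, N))      # sparse conditional probabilities
    return (P_cond + P_cond.T) / (2.0 * N)                       # symmetrized sparse p_ij
```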
Vantage-point tree. In a vantage-point tree, each node stores a data object x_i and the radius of a (hyper)ball that is centered on this object. All non-leaf nodes have two children: data objects that are located inside the ball are stored under the left child of the node, whereas data objects that are located outside the ball are stored under the right child. The tree is constructed by presenting the data objects one by one, traversing the tree based on whether the current data object lies inside or outside a ball, and creating a new leaf node in which the object is stored. The radius of the new leaf node is set to the median distance between its object and all other objects that lie inside the ball represented by its parent node. To construct a vantage-point tree, the objects need not necessarily be points in a high-dimensional feature space; the availability of the pairwise distance function d(·, ·) suffices. (In our experiments, however, we use vector data and Euclidean distances.)
A nearest-neighbor search is performed using a depth-first search on the tree that computes the distance of the objects stored in the nodes to the target object, whilst maintaining i) a list of the current nearest neighbors and ii) the distance τ to the furthest nearest neighbor in the current neighbor list. The value of τ determines whether or not a node should be explored: if there can still be objects inside the ball whose distance to the target object is smaller than τ, the left child is searched, and if there can still be objects outside the ball whose distance to the target object is smaller than τ, the right child is searched. The order in which the children are searched depends on whether the target object lies inside or outside the ball of the current node: the left child is examined first if the target object lies inside the ball, because the odds are that the nearest neighbors of the target object are also located inside the ball; the right child is searched first whenever the target object lies outside of the ball.
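The following sketch illustrates such a vantage-point tree. For simplicity it is built recursively with a median split rather than by inserting objects one by one, but the search logic described above (keeping the k current nearest neighbors and the radius τ to the furthest of them, and deciding per node whether a child can still contain closer objects) is the same; the class and method names are ours, and any distance function can be supplied, e.g. the Euclidean distance.

```python
# Illustrative vantage-point tree: batch construction with a median split, and a
# depth-first k-nearest-neighbor search with the pruning rule described above.
import heapq
import random

class VPTree:
    def __init__(self, items, dist):
        self.dist = dist
        self.vp, self.radius = None, 0.0
        self.inside = self.outside = None
        items = list(items)
        if not items:
            return
        self.vp = items.pop(random.randrange(len(items)))        # choose a vantage point
        if items:
            dists = [dist(self.vp, x) for x in items]
            self.radius = sorted(dists)[len(dists) // 2]          # median distance = ball radius
            inner = [x for x, d in zip(items, dists) if d <= self.radius]
            outer = [x for x, d in zip(items, dists) if d > self.radius]
            self.inside = VPTree(inner, dist) if inner else None
            self.outside = VPTree(outer, dist) if outer else None

    def knn(self, target, k):
        """Return the k stored objects closest to `target`."""
        heap = []                                                 # max-heap of current neighbors
        tau = [float('inf')]                                      # distance to furthest neighbor

        def search(node):
            if node is None or node.vp is None:
                return
            d = self.dist(target, node.vp)
            if d < tau[0]:
                heapq.heappush(heap, (-d, id(node.vp), node.vp))
                if len(heap) > k:
                    heapq.heappop(heap)
                if len(heap) == k:
                    tau[0] = -heap[0][0]
            # Visit the child on the target's side of the ball boundary first; the
            # other child is visited only if it can still contain closer objects.
            first, second = ((node.inside, node.outside) if d <= node.radius
                             else (node.outside, node.inside))
            search(first)
            if abs(d - node.radius) < tau[0]:
                search(second)

        search(self)
        return [obj for _, _, obj in sorted(heap, key=lambda t: -t[0])]
```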
To approximate the t-SNE gradient, we start by splitting the gradient into two parts as follows:

\frac{\partial C}{\partial y_i} = 4 (F_{attr} + F_{rep}) = 4 \Big( \sum_{j \neq i} p_{ij} q_{ij} Z (y_i - y_j) - \sum_{j \neq i} q_{ij}^2 Z (y_i - y_j) \Big),
where F_attr denotes the sum of all attractive forces (the left sum), whereas F_rep denotes the sum of all repulsive forces (the right sum). Computing the sum of all attractive forces, F_attr, is computationally efficient; it can be done by summing over all non-zero elements of the sparse distribution P in O(uN). (Note that the term q_ij Z = (1 + ||y_i - y_j||^2)^{-1} can be computed in O(1).) However, a naive computation of the sum of all repulsive forces, F_rep, is O(N^2). We now develop a Barnes-Hut algorithm to approximate F_rep efficiently in O(N log N).
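The attractive term lends itself to a direct illustration: the sketch below sums over the non-zero entries of the sparse matrix P only, computing the term q_ij Z in constant time per entry. It assumes that P is stored as a SciPy sparse matrix; the function name is ours.

```python
# Illustrative computation of F_attr: a sum over the non-zero entries of the sparse
# symmetrized similarities P, so the cost is proportional to the number of stored
# neighbors rather than to N^2.
import numpy as np
from scipy.sparse import coo_matrix

def attractive_forces(P, Y):
    P = coo_matrix(P)
    F_attr = np.zeros_like(Y)
    for i, j, p_ij in zip(P.row, P.col, P.data):
        diff = Y[i] - Y[j]
        qZ = 1.0 / (1.0 + float(np.dot(diff, diff)))   # q_ij * Z, computable in O(1)
        F_attr[i] += p_ij * qZ * diff
    return F_attr
```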
Consider three points y_i, y_j, and y_k with ||y_i - y_j|| ≈ ||y_i - y_k|| ≫ ||y_j - y_k||. In this situation, the contributions of y_j and y_k to F_rep will be roughly equal. The Barnes-Hut algorithm exploits this by i) constructing a quadtree on the current embedding, ii) traversing the quadtree using a depth-first search, and iii) deciding at every node of the quadtree whether the corresponding cell can be used as a "summary" for the gradient contributions of all points in that cell.
Quadtree. A quadtree is a tree in which each node represents a rectangular cell with a particular center, width, and height. Non-leaf nodes have four children that split up the cell into four smaller cells (quadrants) that lie "northwest", "northeast", "southwest", and "southeast" of the center of the parent node (see Figure 1 for an illustration). Leaf nodes represent cells that contain at most one point of the embedding; the root node represents the cell that contains the complete embedding. In each node, we store the center of mass of the embedding points that are located inside the corresponding cell, y_cell, and the number of points that lie inside the cell, N_cell. A quadtree on N points can be constructed by inserting the points one by one, splitting a leaf node whenever a second point is inserted in its cell, and updating y_cell and N_cell of all visited nodes.
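A minimal quadtree along these lines might look as follows: each node stores the center of mass y_cell and the count N_cell of the points in its cell, and a leaf is split into four quadrants as soon as a second point is inserted. This is a sketch only (it assumes distinct two-dimensional points), and the class and function names are ours.

```python
# Illustrative quadtree for a 2-D embedding: every node keeps the center of mass and
# the number of points in its cell; leaves hold at most one point. Assumes distinct points.
import numpy as np

class QuadTreeNode:
    def __init__(self, center, half_width):
        self.center = np.asarray(center, dtype=float)    # center of the square cell
        self.half_width = float(half_width)              # half the side length of the cell
        self.n_points = 0                                # N_cell
        self.mass_center = np.zeros(2)                   # y_cell
        self.point = None                                # the single point held by a leaf
        self.children = None                             # four quadrants after a split

    def insert(self, y):
        # Update the cell summary (center of mass and count) on the way down.
        self.mass_center = (self.mass_center * self.n_points + y) / (self.n_points + 1)
        self.n_points += 1
        if self.children is None:
            if self.point is None:                       # empty leaf: store the point here
                self.point = y
                return
            old, self.point = self.point, None           # occupied leaf: split and push down
            self._split()
            self._child_for(old).insert(old)
        self._child_for(y).insert(y)

    def _split(self):
        h = self.half_width / 2.0
        self.children = [QuadTreeNode(self.center + np.array([dx, dy]) * h, h)
                         for dx in (-1, 1) for dy in (-1, 1)]

    def _child_for(self, y):
        return self.children[2 * int(y[0] > self.center[0]) + int(y[1] > self.center[1])]

def build_quadtree(Y):
    center = (Y.min(axis=0) + Y.max(axis=0)) / 2.0
    half_width = float((Y.max(axis=0) - Y.min(axis=0)).max()) / 2.0 + 1e-9
    root = QuadTreeNode(center, half_width)
    for y in Y:
        root.insert(np.asarray(y, dtype=float))
    return root
```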
Approximating the gradient. To approximate the repulsive part of the gradient, F_rep = Σ_{j≠i} q_ij^2 Z (y_i - y_j), we note that if a cell is sufficiently small and sufficiently far away from point y_i, the contributions q_ij^2 Z (y_i - y_j) will be roughly similar for all points y_j inside that cell. We can, therefore, approximate these contributions by N_cell q_{i,cell}^2 Z (y_i - y_cell), where we define q_{i,cell} Z = (1 + ||y_i - y_cell||^2)^{-1}. Because evaluating q_ij itself requires the normalization term Z, which is an O(N^2) sum, we first approximate Σ_{j≠i} q_ij^2 Z^2 (y_i - y_j) by performing a depth-first search on the quadtree, assessing at each node whether or not that node may be used as a "summary" for all the embedding points that are located in the corresponding cell. During this search, we construct an estimate of Z = Σ_{k≠l} (1 + ||y_k - y_l||^2)^{-1} in the same way. The two approximations thus obtained are then used to compute F_rep via F_rep = (Σ_{j≠i} q_ij^2 Z^2 (y_i - y_j)) / Z.
We use the condition proposed in the original Barnes-Hut algorithm to decide whether a cell may be used as a "summary" for all points in that cell. The condition compares the distance of the cell to the target point with the size of the cell:

\frac{r_{cell}}{\| y_i - y_{cell} \|} < \theta,
where r_cell represents the length of the diagonal of the cell under consideration and θ is a threshold that trades off speed and accuracy (higher values of θ lead to poorer approximations). In preliminary experiments, we also explored various other conditions that take into account the rapid decay of the Student-t tail, but we did not find these alternative conditions to lead to a better accuracy-speed trade-off. (The problem with more complex conditions is that they require expensive computations at each cell; by contrast, the condition above can be evaluated very rapidly.)
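Putting these pieces together, the sketch below (which reuses the QuadTreeNode and build_quadtree helpers from the quadtree sketch above) approximates F_rep: for every point it traverses the tree depth-first, uses a cell as a summary whenever the ratio of the cell diagonal to the distance is below θ, and accumulates both Σ_j q_ij^2 Z^2 (y_i - y_j) and the estimate of Z, whose ratio yields the repulsive forces. It illustrates the procedure described above rather than reproducing our optimized implementation, and the default value of θ is only a placeholder.

```python
# Illustrative Barnes-Hut approximation of F_rep using the quadtree sketched earlier.
import numpy as np

def repulsive_forces(root, Y, theta=0.5):
    """root: quadtree built on Y (see build_quadtree above); returns approximate F_rep."""
    F_repZ = np.zeros_like(Y)     # accumulates sum_j q_ij^2 Z^2 (y_i - y_j)
    Z_est = 0.0                   # accumulates the estimate of Z over all point-cell pairs

    def traverse(node, y_i, acc):
        nonlocal Z_est
        if node is None or node.n_points == 0:
            return
        diff = y_i - node.mass_center
        dist2 = float(np.dot(diff, diff))
        diag = 2.0 * np.sqrt(2.0) * node.half_width               # diagonal of the square cell
        if node.children is None or diag * diag < theta * theta * dist2:
            if dist2 == 0.0:                                      # skip the point's own term
                return
            qZ = 1.0 / (1.0 + dist2)                              # (1 + ||y_i - y_cell||^2)^-1
            Z_est += node.n_points * qZ
            acc += node.n_points * qZ * qZ * diff                 # N_cell q_i,cell^2 Z^2 (y_i - y_cell)
        else:
            for child in node.children:
                traverse(child, y_i, acc)

    for i in range(Y.shape[0]):
        traverse(root, Y[i], F_repZ[i])
    return F_repZ / max(Z_est, 1e-12)                             # divide by the estimate of Z
```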
Dual-tree algorithms. Whilst the Barnes-Hut algorithm considers point-cell interactions, further speed-ups may be obtained by computing only cell-cell interactions. This can be done using a dual-tree algorithm that simultaneously traverses the quadtree twice and, for every pair of nodes, decides whether the interaction between the corresponding cells can be used as a "summary" for the interactions between all points inside these two cells. Perhaps surprisingly, we found such an approach to perform only on par with the Barnes-Hut algorithm in preliminary experiments. The computational advantages of the dual-tree algorithm evaporate because, after computing an interaction between two cells, one still needs to determine to which set of points the interaction applies. This can be done by searching the cell or by storing a list of children in each node during tree construction; both of these approaches are computationally costly. (It should be noted that the dual-tree algorithm is, however, much faster at approximating the value of the t-SNE cost function.) The results of our experiments with dual-tree algorithms are presented in the appendix.
We performed experiments on four large data sets to evaluate the performance of Barnes-Hut-SNE. Code for our algorithm is available from http://homepage.tudelft.nl/19j49/tsne.
Data sets. We performed experiments on four data sets: i) the MNIST data set, ii) the CIFAR-10 data set, iii) the NORB data set, and iv) the TIMIT data set. The MNIST data set contains 70,000 grayscale handwritten digit images of size 28×28 pixels, each of which corresponds to one of ten classes. The CIFAR-10 data set is an annotated subset of the 80 million tiny images data set that contains RGB images of size 32×32 pixels, leading to 3,072-dimensional input objects; each image corresponds to one of ten classes. The (small) NORB data set contains grayscale images of toys from five different classes, rendered on a uniform background under 6 lighting conditions, 9 elevations (30 to 70 degrees, every 5 degrees), and 18 azimuths (0 to 340 degrees, every 20 degrees). All images contain 96×96 pixels. The TIMIT data set contains speech data from which MFCC, delta, and delta-delta features were extracted, leading to 39-dimensional features; each frame in the data is annotated with a phone label. We used the TIMIT training set, which contains over a million frames, in our experiments.
Experimental setup. In all our experiments, we follow the experimental setup of the original t-SNE paper as closely as possible. In particular, we initialize the embedding points by sampling from a Gaussian with a small variance. We run a gradient-descent optimizer for a fixed number of iterations, setting an initial step size that is updated during the optimization using an adaptive learning-rate scheme. We use an additional momentum term, with a smaller weight during the first iterations of the optimization than in the remainder of the optimization. The perplexity u is fixed to the same value in all experiments. Following the original setup, high-dimensional data sets were preprocessed using PCA to reduce their dimensionality before running our algorithm.
During the first learning iterations, we multiply all p_ij-values by a user-defined constant α > 1. As explained in the original t-SNE paper, this "early exaggeration" trick helps t-SNE to find a better global structure in the early stages of the optimization. In preliminary experiments, we found that this trick becomes increasingly important for obtaining good embeddings as the data set size grows: it becomes harder for the optimization to find a good global structure when there are more points in the embedding, because there is less space for clusters to move around. In our experiments, we fix α to a value larger than the one used for standard t-SNE.
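Schematically, the optimization described above can be summarized as in the sketch below, which combines momentum-based gradient descent with the multiplication of the p_ij values during the first iterations. The numeric constants are illustrative placeholders rather than the exact values used in our experiments, and `gradient` stands for any routine (exact or Barnes-Hut) that returns the gradient ∂C/∂y_i.

```python
# Schematic t-SNE optimization loop: momentum-based gradient descent with an initial
# "exaggeration" phase in which all p_ij are multiplied by a constant alpha > 1.
# All constants below are illustrative placeholders.
import numpy as np

def optimize_embedding(P, N, gradient, n_iter=1000, step_size=200.0,
                       alpha=12.0, exaggeration_iters=250, momentum_switch=250, seed=0):
    rng = np.random.default_rng(seed)
    Y = 1e-2 * rng.standard_normal((N, 2))       # small random initialization
    update = np.zeros_like(Y)
    P_work = P * alpha                           # exaggerated similarities for the first phase
    for t in range(n_iter):
        if t == exaggeration_iters:
            P_work = P                           # switch back to the true p_ij
        grad = gradient(P_work, Y)
        momentum = 0.5 if t < momentum_switch else 0.8
        update = momentum * update - step_size * grad
        Y += update
    return Y
```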
We present the results of three sets of experiments. In the first experiment, we investigate the effect of the trade-off parameter θ on the speed and the quality of the embeddings produced by Barnes-Hut-SNE on the MNIST data set. In the second experiment, we investigate the computation time required to run Barnes-Hut-SNE as a function of the number of data objects N (also on the MNIST data set). In the third experiment, we construct and visualize embeddings of all four data sets.
Results. Figure 2 presents the results of an experiment in which we varied the speed-accuracy trade-off parameter θ used to learn the embedding. The figure shows the computation time required to construct embeddings of all 70,000 MNIST digit images, as well as the nearest-neighbor error (computed based on the digit labels) of the corresponding embeddings. The results presented in the figure show that the trade-off parameter θ can be increased substantially without negatively affecting the quality of the embedding, whilst leading to very substantial reductions in the amount of computation required to embed all MNIST digits. (Note that the special case θ = 0 corresponds to standard t-SNE; we did not run an experiment with θ = 0 because standard t-SNE would take days to complete on the full MNIST data set.)
In Figure 3, we compare standard t-SNE and Barnes-Hut-SNE in terms of i) the computation time required to embed MNIST digit images as a function of the data set size N and ii) the nearest-neighbor errors of the corresponding embeddings. (Note that the y-axis of the left figure, which represents the required computation time in seconds, uses a logarithmic scale.) In these experiments, we fixed the parameter θ that trades off speed and accuracy. The results presented in the figure show that Barnes-Hut-SNE is orders of magnitude faster than standard t-SNE, whilst the difference in the quality of the constructed embeddings (as measured by the nearest-neighbor errors) is negligible. Most prominently, the computational advantages of Barnes-Hut-SNE rapidly increase as the number of objects in the data set grows.
Figure 4 presents embeddings of all four data sets constructed using Barnes-Hut-SNE. The colors of the points indicate the classes of the corresponding objects; the titles of the plots indicate the computation time used to construct the corresponding embeddings. As before, we fixed θ to the same value in all four experiments. The results in the figure show that Barnes-Hut-SNE can construct high-quality embeddings of, e.g., the MNIST handwritten digit images in a matter of minutes. (Although our MNIST embedding contains many more points, it may be compared with the MNIST embedding presented in the original t-SNE paper; visually, the structure of the two embeddings is very similar.) The results also show that Barnes-Hut-SNE makes it practical to embed data sets with more than a million data points: the TIMIT embedding shows all data points and was constructed in less than four hours.
A version of the MNIST embedding in which the original digit images are shown is presented in Figure 5. The results show that, like standard t-SNE, Barnes-Hut-SNE is very good at preserving local structure of the data in the embedding: for instance, the visualization clearly shows that orientation is one of the main sources of variation within the cluster of ones.
We presented a new t-SNE algorithm, called Barnes-Hut-SNE, that i) constructs a sparse approximation to the similarities between input objects using vantage-point trees, and ii) approximates the t-SNE gradient using a variant of the Barnes-Hut algorithm. The new algorithm runs in O(N log N) rather than O(N^2), and requires only O(N) memory. Our experimental evaluation of Barnes-Hut-SNE shows that it is substantially faster than standard t-SNE, and that it facilitates the visualization of data sets with millions of data objects in scatter plots.
A drawback of Barnes-Hut-SNE is that it does not provide any error bounds. Indeed, there exist alternative algorithms that do provide such error bounds; we aim to explore these alternatives in future work to see whether they can be used to bound the error made in our t-SNE gradient computations, and to bound the error in the final embedding. Another limitation of Barnes-Hut-SNE is that it can only be used to embed data in two or three dimensions: generalizations to higher dimensions are infeasible because the size of the tree grows exponentially in the dimensionality of the embedding space. Having said that, this limitation is not very severe, since t-SNE is mainly used for visualization (i.e., for embedding in two or three dimensions). Moreover, it is relatively straightforward to replace the quadtree by metric trees that scale better to high-dimensional spaces.
In future work, we plan to further scale up our algorithm by developing parallelized implementations that can run on data sets that are too large to be fully stored in memory. We also aim to investigate the effect of varying the value of θ during the optimization. In addition, we plan to explore to what extent adapted versions of our algorithm (that use metric trees instead of quadtrees) can be used to speed up techniques for relational embedding (e.g., [4, 18]).
The author is supported by EU-FP7 Social Signal Processing (SSPNet) and by the Netherlands Institute for Advanced Study (NIAS). The author thanks Geoffrey Hinton for many helpful discussions, and two anonymous reviewers for their helpful comments.
Proceedings of the International Conference on Machine Learning, pages 97–104, 2006.
Conference on Artificial Intelligence (AAAI), 2011.
Approximate nearest neighbors: Towards removing the curse of dimensionality. In Proceedings of the Symposium on Theory of Computing, 1998.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 97–104, 2004.
Learning distributed representations of concepts using linear relational embedding. IEEE Transactions on Knowledge and Data Engineering, 13(2):232–244, 2001.
Large margin hidden Markov models for automatic speech recognition. In Advances in Neural Information Processing Systems, volume 19, pages 1249–1456, 2007.
80 million tiny images: A large dataset for non-parametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958–1970, 2008.
Improved fast Gauss transform and efficient kernel density estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 664–671, 2003.
We also performed experiments with a dual-tree implementation of t-SNE. Dual-tree t-SNE differs from Barnes-Hut-SNE in that it considers only cell-cell instead of point-cell interactions. It simultaneously traverses the quadtree twice, and decides for each pair of nodes whether the interaction between these nodes can be used as a "summary" for all points in the cells corresponding to these two nodes. We decide whether the interaction between a pair of nodes may be used as a "summary" interaction by comparing the diameters r_cellA and r_cellB of the two cells with the distance ||y_cellA - y_cellB|| between their centers-of-mass y_cellA and y_cellB, using a speed-accuracy trade-off parameter θ' that plays the same role as θ in Barnes-Hut-SNE.
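For completeness, the sketch below outlines such a dual-tree traversal. It builds a small quadtree whose nodes also store the indices of the points in their cells (one of the bookkeeping options mentioned earlier) and traverses pairs of nodes; the summary criterion used here, which compares the combined cell diagonals with the distance between the centers-of-mass, is an assumed stand-in for the exact condition we used, and all names are illustrative.

```python
# Illustrative dual-tree repulsion: cell-cell interactions on a quadtree whose nodes
# also store point indices. The summary condition below is an assumed stand-in, not
# the exact criterion used in the experiments. Assumes distinct 2-D points.
import numpy as np

class Cell:
    def __init__(self, Y, indices, center, half_width):
        self.indices = np.asarray(indices)
        self.n = len(self.indices)
        self.com = Y[self.indices].mean(axis=0) if self.n else np.zeros(2)   # center of mass
        self.diag = 2.0 * np.sqrt(2.0) * half_width                          # cell diagonal
        self.children = None
        if self.n > 1:                                                       # split until leaves hold one point
            self.children, h = [], half_width / 2.0
            for dx in (-1, 1):
                for dy in (-1, 1):
                    mask = ((Y[self.indices, 0] > center[0]) == (dx > 0)) & \
                           ((Y[self.indices, 1] > center[1]) == (dy > 0))
                    self.children.append(Cell(Y, self.indices[mask],
                                              center + np.array([dx, dy]) * h, h))

def dual_tree_repulsion(Y, theta=0.5):
    center = (Y.min(axis=0) + Y.max(axis=0)) / 2.0
    half = float((Y.max(axis=0) - Y.min(axis=0)).max()) / 2.0 + 1e-9
    root = Cell(Y, np.arange(Y.shape[0]), center, half)
    F_repZ, Z = np.zeros_like(Y), 0.0

    def traverse(a, b):
        nonlocal Z
        if a.n == 0 or b.n == 0:
            return
        diff = a.com - b.com
        dist = float(np.linalg.norm(diff))
        if dist > 0.0 and a.diag + b.diag < theta * dist:          # assumed cell-cell summary condition
            qZ = 1.0 / (1.0 + dist * dist)
            Z += a.n * b.n * qZ
            F_repZ[a.indices] += b.n * qZ * qZ * diff              # same interaction for every point in a
        elif a.children is None and b.children is None:            # two leaves: exact interactions
            for i in a.indices:
                for j in b.indices:
                    d = Y[i] - Y[j]
                    d2 = float(np.dot(d, d))
                    if d2 > 0.0:
                        qZ = 1.0 / (1.0 + d2)
                        Z += qZ
                        F_repZ[i] += qZ * qZ * d
        elif b.children is None or (a.children is not None and a.diag >= b.diag):
            for child in a.children:                               # recurse into the larger cell
                traverse(child, b)
        else:
            for child in b.children:
                traverse(a, child)

    traverse(root, root)
    return F_repZ / max(Z, 1e-12)
```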
Figure 6 presents the results of an experiment in which we investigate the influence of the trade-off parameter θ' on the learning time and on the quality of the embedding for the MNIST data set. The results in the figure may be readily compared to those in Figure 2. They show that, whilst the dual-tree algorithm provides additional speed-ups compared to the Barnes-Hut algorithm, the quality of the embedding also deteriorates much faster as the trade-off parameter increases. The quality of the embedding obtained with the dual-tree algorithm at a particular value of θ' roughly equals that of a Barnes-Hut embedding at a corresponding value of θ, and these two embeddings are constructed in roughly the same time (viz. in approximately 650–700 seconds). Figure 7 shows the performance of dual-tree t-SNE with a fixed θ' as a function of the number of MNIST digits N. The results in Figure 7 can be readily compared to those in Figure 3. Again, the results show that dual-tree t-SNE performs roughly on par with Barnes-Hut-SNE, irrespective of the size of the data set N.