Generative adversarial networks (GANs) (Goodfellow et al., 2014) are a class of methods for training generative models which have recently been shown to produce image samples of excellent quality. They have been applied in numerous areas (Radford et al., 2015; Salimans et al., 2016; Ho & Ermon, 2016). Briefly, the framework can be described as follows. We attempt to mimic a given target distribution $p_{data}$ by constructing two networks $G$ and $D$ called the generator and the discriminator. The generator learns to sample from the target distribution by transforming a random input vector $z$ into a vector $G(z)$, and the discriminator learns to distinguish the model distribution $p_{model}$ from $p_{data}$.
The training procedure for GANs typically consists of applying gradient descent in turn to the discriminator and the generator in order to minimize a loss function. Finding a good loss function is a topic of ongoing research, and several options were proposed in (Mao et al., 2016; Arjovsky et al., 2017).
A significant issue in the GANs framework is estimating the quality of the generated samples. In traditional GAN models, the discriminator loss cannot be used as a metric and does not necessarily decrease during training. In more involved architectures such as WGAN (Arjovsky et al., 2017) the discriminator (critic) loss is argued to correlate with image quality; however, using this loss as a measure of quality is nontrivial. Training GANs is known to be difficult in general and presents issues such as mode collapse, when $p_{model}$ fails to capture the multimodal nature of $p_{data}$ and in extreme cases all the generated samples may be identical. Several techniques to improve the training procedure were proposed in (Salimans et al., 2016; Gulrajani et al., 2017).
In this work, we attack the problem of estimating the quality and diversity of the generated images using the machinery of topology. The well-known Manifold Hypothesis (Goodfellow et al., 2016) states that in many cases, such as the case of natural images, the support of the distribution $p_{data}$ is concentrated on a low-dimensional manifold $\mathcal{M}_{data}$ in a Euclidean space. This manifold is assumed to have a very complex non-linear structure and is hard to define explicitly. It can be argued that interesting features and patterns of the images from $p_{data}$ can be analyzed in terms of topological properties of $\mathcal{M}_{data}$, namely in terms of loops and higher-dimensional holes in $\mathcal{M}_{data}$. Similarly, we can assume that $p_{model}$ is supported on a manifold $\mathcal{M}_{model}$ (under mild conditions on the architecture of the generator this statement can be made precise (Shao et al., 2017)), and for sufficiently good GANs this manifold can be argued to be quite similar to $\mathcal{M}_{data}$ (see Fig. 1). This intuitive claim will be supported later by numerical experiments. Based on this hypothesis we develop an approach which allows for comparing the topology of the underlying manifolds of two point clouds in a stochastic manner, providing us with a visual way to detect mode collapse and a score which allows for comparing the quality of various trained models. Informally, since the task of computing the precise topological properties of the underlying manifolds based only on samples is ill-posed by nature, we estimate them using a certain probability distribution (see Section 4).
We test our approach on several real-life datasets and popular GAN models (DCGAN, WGAN, WGAN-GP) and show that the obtained results agree well with intuition and allow for comparison of various models (see Section 5).
2 Main idea
Let us briefly discuss our approach before delving into technical details. As described in the introduction, we would like to compare the topological properties of $\mathcal{M}_{data}$ and $\mathcal{M}_{model}$ in some way. This task is complicated by the fact that we do not have access to the manifolds themselves but merely to samples from them. A natural approach in this case is to approximate these manifolds by simpler spaces whose topological properties resemble those of $\mathcal{M}_{data}$ and $\mathcal{M}_{model}$.
The main example of such spaces are simplicial complexes (Fig. 2), which are built from intervals, triangles, and other higher-dimensional simplices. Several methods exist to reconstruct the underlying manifold with a simplicial complex. All such approaches use proximity information about the data, such as pairwise distances between samples. Typically one chooses some threshold parameter $\alpha$, and based on its value decides which simplices are added to the approximation (see Fig. 3).
However, a single value of $\alpha$ is not enough: for very small values the reconstructed space is just a disjoint union of points, and for very large values it is a single connected blob, while the correct approximation lies somewhere in between. This issue is resolved by considering a family (Fig. 4, a) of simplicial complexes parametrized by the (‘persistence’) parameter $\alpha$. It is also convenient to refer to $\alpha$ as time, with the idea that we gradually throw more simplices into our simplicial complex as time goes by. For each value of $\alpha$ we can compute topological properties of the corresponding simplicial complex, namely homology, which encodes the number of holes of various dimensions in a space. Controlling the value of $\alpha$ allows us to decide which sizes of holes are meaningful and should not be discarded as noise. For the simplicial complex presented in Fig. 3 there are two one-dimensional holes; for a slightly bigger value of $\alpha$ the lower hole disappears (Fig. 4, b) while the top one remains intact, which suggests that the top hole is the more important topological feature. Information about how homology changes with respect to $\alpha$ can be conveniently encoded in the so-called persistence barcodes (Ghrist, 2008; Zomorodian & Carlsson, 2005). An example of such a barcode is given in Fig. 4, c. In general, to find the rank of the $k$-th homology (which counts the number of $k$-dimensional holes) at some fixed value $\alpha_0$, one counts the intersections of the vertical line $\alpha = \alpha_0$ with the intervals in the corresponding block of the barcode.
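This counting step is simple enough to sketch directly. The following minimal Python snippet (the barcode endpoints are hypothetical, chosen for illustration) counts how many persistence intervals contain a given value of the parameter:

```python
def betti_at(alpha, intervals):
    """Rank of homology at parameter alpha: the number of
    persistence intervals [birth, death) containing alpha."""
    return sum(1 for birth, death in intervals if birth <= alpha < death)

# Hypothetical barcode for one homology dimension: two holes are born
# early; the shorter, noisier interval dies sooner.
barcode = [(0.1, 0.9), (0.15, 0.4)]
print(betti_at(0.3, barcode))  # both intervals contain 0.3 -> 2
print(betti_at(0.5, barcode))  # only the long interval remains -> 1
```

The long interval surviving at most values of `alpha` is exactly the visual intuition of an "important" topological feature.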
These barcodes provide a way to compare topological properties of the underlying manifolds. In principle, we could obtain a measure of similarity of two datasets by comparing the barcodes of the simplicial complexes constructed from each dataset (as described in Fig. 3), but this approach has disadvantages, such as the huge number of simplices for large datasets. Moreover, extracting interesting topological properties from such large simplicial complexes requires various tricks (Ghrist, 2008). To remedy these issues, note that we are in fact interested in topological rather than geometrical approximations. The difference is that a much smaller number of simplices often suffices for a correct estimate of the topological properties; e.g., for any number of points sampled from a circle the correct answer can be obtained by taking just three of them (thus obtaining a triangle, which is topologically equivalent to the circle). Based on these ideas the so-called witness complex was introduced (De Silva & Carlsson, 2004), which provides a topological approximation with a small number of simplices. To achieve this, a small subset of landmark points is chosen, and a simplicial complex is constructed using these points as vertices (while also taking into account the proximity information about all the remaining points, called witnesses).
To construct a numerical measure which can be compared across datasets, we would like to estimate the correct values of homology. Comparing the computed barcodes directly is a challenging task since they are non-trivial mathematical objects (some metrics exist but are hard to compute). We take a simpler route: to extract meaningful topological data from the barcode we propose computing the Relative Living Times (RLT) of each number of holes that was observed. They are defined as the ratio of the total time during which this number was present to the value of $\alpha$ at which the points connect into a single blob. These relative living times can be interpreted as a confidence in our approximation: if, say, for a large fraction of the whole period of topological activity we observed exactly one one-dimensional hole (as in Fig. 4), then this is probably an accurate estimate of the topology of the underlying space.
Choosing the correct landmarks is a nontrivial task. We follow the discussion in (De Silva & Carlsson, 2004), which advises choosing them randomly. To account for this randomness, we compute the RLT stochastically by repeating the experiment a large number of times. By averaging the obtained RLT we compute the Mean Relative Living Times (MRLT).
By construction, they add up to $1$, and employing a Bayesian point of view we can interpret them as a probability distribution reflecting our confidence about the correct number of holes on average. An example of such a distribution is given in Fig. 5, where we run our method on a simple planar dataset (embedded in a high-dimensional space). To quantitatively evaluate the topological difference between two datasets we propose computing the distance between their MRLT (the precise definition is given in Section 4). Strictly speaking, the support of the distribution may fail to be a manifold in the precise mathematical sense; however, the analysis is still applicable since it deals with arbitrary topological spaces. Now let us introduce all the technical details.
3 Homology to the rescue
In this section we briefly discuss the important concepts of simplicial complexes and homology. For a thorough introduction we refer the reader to classical texts such as (Hatcher, 2002; May, 1999).
The simplicial complex is a classical concept widely used in topology. Formally, it is defined as follows.
A simplicial complex (more precisely an abstract simplicial complex) is specified by the following data:
The vertex set $V$.
A collection $\Sigma$ of simplices, where a $k$-dimensional simplex is defined as a $(k+1)$-element subset of $V$.
We require that the collection $\Sigma$ is closed under taking faces: for each $k$-dimensional simplex in $\Sigma$, all the $(k-1)$-dimensional simplices obtained by deleting one of its vertices are also elements of $\Sigma$.
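The closure property is easy to verify mechanically. A small Python sketch (the helper name is ours, not from the paper):

```python
from itertools import combinations

def is_simplicial_complex(simplices):
    """Check the closure property: every face (subset obtained by
    deleting one vertex) of every simplex is itself in the collection."""
    S = {frozenset(s) for s in simplices}
    for simplex in S:
        for face in combinations(simplex, len(simplex) - 1):
            if len(face) > 0 and frozenset(face) not in S:
                return False
    return True

# A filled triangle {1,2,3} together with all its edges and vertices.
triangle = [(1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
print(is_simplicial_complex(triangle))             # True
print(is_simplicial_complex([(1, 2, 3), (1, 2)]))  # missing faces -> False
```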
An example of a simplicial complex is presented in Fig. 2. It contains several vertices, one-dimensional edges, and a two-dimensional face. Note that the maximal simplices determine the complex, since by the closure property all the edges of the two-dimensional face are also elements of the complex. Important topological properties of a simplicial complex (such as connectedness or the existence of a one-dimensional loop) do not depend on the Euclidean space in which it is embedded or on the precise positions of the vertices, but merely on the combinatorial data: the number of points and which vertices together span a simplex.
As described in Section 2, given a dataset $X$ sampled from a manifold $\mathcal{M}$, we would like to compute a family of simplicial complexes topologically approximating $\mathcal{M}$ on various scales, namely witness complexes. This family is defined as follows. First we choose a subset $L \subset X$ of points called landmarks (whereas the points in $X$ are called witnesses) and some distance function $d(x, y)$, e.g., the ordinary Euclidean distance. There is not much theory about how to choose the best landmarks, but several strategies were proposed in (De Silva & Carlsson, 2004). The first is to choose landmarks sequentially by solving a certain minimax problem, and the second is to pick landmarks at random (by uniformly selecting a fixed number of points from $X$). We follow the second approach since the minimax strategy is known to have some flaws, such as the tendency to pick outliers. The selected landmarks serve as the vertices of the simplicial complex, and the witnesses help to decide which simplices are inserted via a predicate “is witnessed”:
with $\alpha$ being a relaxation parameter which, as it varies, provides us with a family of simplicial complexes. The maximal value of $\alpha$ for the analysis is typically chosen proportional to the maximal pairwise distance between points in $X$. Witness complexes, even for small landmark sets, are good topological approximations of $\mathcal{M}$. The main advantage of a witness complex is that it allows constructing a reliable approximation using a relatively small number of simplices, which makes the problem tractable even for large datasets. Even though it is known that in some cases it may fail to recover the correct topology (Boissonnat et al., 2009), it can still be used to compare topological properties of datasets, and if a better method is devised, we can easily replace the witness complex with this new, more reliable simplicial complex.
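As a rough illustration of how witnesses gate simplex insertion, here is a hedged Python sketch of one common relaxation of the witness condition; it is not the exact rule used by GUDHI or by the paper, and all names and data are hypothetical:

```python
import numpy as np

def is_witnessed(simplex, witnesses, landmarks, alpha):
    """One common relaxed witness condition (a sketch, not GUDHI's exact
    rule): a simplex given by landmark indices is witnessed if some
    witness w lies within d(w, nearest landmark) + alpha of every
    vertex of the simplex."""
    for w in witnesses:
        d_to_landmarks = np.linalg.norm(landmarks - w, axis=1)
        nearest = d_to_landmarks.min()
        if all(d_to_landmarks[i] <= nearest + alpha for i in simplex):
            return True
    return False

landmarks = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
witnesses = np.array([[0.4, 0.1], [0.9, 0.9]])
# Increasing alpha admits more simplices, yielding the family of complexes.
print(is_witnessed((0, 1), witnesses, landmarks, alpha=0.5))  # True
print(is_witnessed((0, 1), witnesses, landmarks, alpha=0.0))  # False
```

As `alpha` grows, more simplices satisfy the predicate, matching the "time" intuition from Section 2.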
The precise definition of homology is technical, and we omit it due to limited space; we refer the reader to (Hatcher, 2002) [Chapter 2] for a thorough discussion. The most important properties of homology can be summarized as follows. For any topological space $X$ the so-called homology groups $H_k(X)$ are introduced. The actual number of $k$-dimensional holes in $X$ is given by the rank of $H_k(X)$, a concept quite similar to the dimension of a vector space. These ranks are called the Betti numbers $\beta_k$ and serve as a coarse numerical measure of homology.
Homology is known to be one of the most easily computable topological invariants. In the case of $X$ being a simplicial complex, $H_k(X)$ can be computed using little more than linear algebra, namely by analyzing the kernels and images of certain linear maps (boundary operators). The dimensions of the matrices appearing in this task equal the numbers of simplices of the corresponding dimensions in $X$. Existing algorithms (Kaczynski et al., 2006) can handle extremely large simplicial complexes (with millions of simplices) and are available in numerous software packages. An important property is that the $k$-th homology depends only on simplices of dimension at most $k + 1$, which significantly speeds up computations.
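To make the linear-algebra viewpoint concrete, here is a toy computation (not the paper's code) of the Betti numbers of a hollow triangle (three vertices, three edges, no two-dimensional face) from the rank of its boundary matrix:

```python
import numpy as np

# Boundary matrix d1: columns are the edges (1,2), (1,3), (2,3); rows are
# the vertices 1, 2, 3; entries are +-1 when a vertex bounds an edge.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])

rank_d1 = np.linalg.matrix_rank(d1)
n_vertices, n_edges = d1.shape

# beta_0 = (#vertices) - rank d1: number of connected components.
beta0 = n_vertices - rank_d1
# beta_1 = dim ker d1 - rank d2; with no 2-simplices, rank d2 = 0.
beta1 = (n_edges - rank_d1) - 0

print(beta0, beta1)  # 1 1 -> one connected component, one loop
```

Filling in the triangle would add a column to a $d_2$ matrix of rank 1, killing the loop ($\beta_1 = 0$) as expected.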
In Section 2 we discussed that to find a proxy for the correct topology of $\mathcal{M}$ it is insufficient to use a single simplicial complex; rather, a family of simplicial complexes is required. As we transition from one simplicial complex to another, some holes may appear and some disappear. To distinguish the essential holes from those that should be considered noise, the concept of persistence was introduced (Edelsbrunner et al., 2000; Zomorodian & Carlsson, 2005). The Structure Theorem (Zomorodian & Carlsson, 2005) states that for each generator of homology (“hole” in our terminology) one can provide the time of its “birth” and “death”. This data is pictorially represented as a persistence barcode (Fig. 4, [bottom]), with the horizontal axis representing the parameter $\alpha$ and the vertical axis indexing the homology generators. To compute these barcodes, an efficient algorithm was proposed in (Zomorodian & Carlsson, 2005). As input, one supplies a sequence of pairs $(\sigma, \alpha_\sigma)$, with $\sigma$ being a simplex and $\alpha_\sigma$ its time of appearance in the family. This algorithm is implemented in several software packages such as Dionysus and GUDHI (Maria et al., 2014), but the witness complex is supported only in the latter.
Let us now explain how we apply these concepts to construct a measure comparing the topological properties of two datasets. First let us define the key part of the algorithm, the relative living times (RLT) of homology. Suppose that for a dataset $X$ and some choice of landmarks $L$ we have obtained a persistence barcode with the persistence parameter $\alpha$ spanning the range $[0, \alpha_{max}]$. Let us fix the dimension $k$ in which we study the homology, and let $\{[b_j, d_j]\}$ be the collection of persistence intervals in this dimension. Then, to find the Betti number $\beta_k$ for a fixed value $\alpha_0$, one counts the number of persistence intervals containing $\alpha_0$, obtaining the integer-valued function

$$\beta_k(\alpha) = \#\{\, j : \alpha \in [b_j, d_j] \,\}.$$
Then the RLT are defined as follows (for non-negative integers $i$):

$$\mathrm{RLT}(i, k, X, L) = \frac{\mu\{\alpha \in [0, \alpha_{max}] : \beta_k(\alpha) = i\}}{\alpha_{max}},$$

where $\mu$ denotes the total length of the corresponding (finite) union of intervals. That is, for each possible value of $\beta_k$ we find how long it existed relative to the whole period of topological activity. Note that in our analysis we use witness complexes, which depend on the choice of landmarks, which is random. Thus it is reasonable to consider the distribution of $\mathrm{RLT}(i, k, X, L)$ over the sets of landmarks; in other words, we repeatedly sample the landmarks and compute the RLT of the obtained persistence barcode. After sufficiently many experiments we can approximate the Mean Relative Living Times (MRLT):

$$\mathrm{MRLT}(i, k, X) = \mathbb{E}_{L}\big[\mathrm{RLT}(i, k, X, L)\big].$$
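A minimal Python sketch of the RLT/MRLT computation described above, approximating the living times on a grid; the function names and the toy barcodes are ours, not the paper's implementation:

```python
import numpy as np

def rlt(intervals, alpha_max, i_max=10, n_grid=1000):
    """Relative living times: the fraction of [0, alpha_max) during
    which exactly i persistence intervals are alive, for each i,
    approximated on a uniform grid."""
    grid = np.linspace(0, alpha_max, n_grid, endpoint=False)
    betti = np.zeros(n_grid, dtype=int)
    for birth, death in intervals:
        betti += (grid >= birth) & (grid < death)
    return np.array([(betti == i).mean() for i in range(i_max)])

def mrlt(barcodes, alpha_max, i_max=10):
    """Mean RLT: average over barcodes from repeated random landmark draws."""
    return np.mean([rlt(b, alpha_max, i_max) for b in barcodes], axis=0)

# Hypothetical barcodes from two landmark draws on the same dataset.
barcodes = [[(0.1, 0.6)], [(0.2, 0.7), (0.3, 0.4)]]
dist = mrlt(barcodes, alpha_max=1.0)
print(dist.sum())  # each RLT sums to 1, hence the MRLT sum to 1
```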
We hypothesize that these quantities provide a good way to compare the topological properties of datasets, as they serve as measures of confidence in the estimation of the topology of the underlying manifolds. From Eq. 3 it follows that

$$\sum_{i} \mathrm{MRLT}(i, k, X) = 1,$$

which suggests that for a fixed value of $k$ we can interpret $\mathrm{MRLT}(i, k, X)$ as a probability distribution (over integers $i$). This distribution encodes our certainty about the number of $k$-dimensional holes in the underlying manifold of $X$ on average. In this work we consider the case $k = 1$, i.e., we study the first homology of datasets. We motivate this by drawing an analogy with the Taylor series: we can get a good understanding of the behavior of a function by looking at the first terms of the series (see also (Ghrist, 2008) for a discussion). Based on this probabilistic interpretation, given two datasets $X_1$ and $X_2$ we define a measure of their topological similarity (Geometry Score) in the following way:

$$\mathrm{GeomScore}(X_1, X_2) = \sum_{i=0}^{i_{max}-1} \big(\mathrm{MRLT}(i, 1, X_1) - \mathrm{MRLT}(i, 1, X_2)\big)^2,$$

with $i_{max}$ being an upper bound on $\beta_1$ for $X_1$ and $X_2$ (for typical datasets we found that a moderate value of $i_{max}$ suffices).
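Assuming the score is the sum of squared differences of the two MRLT vectors, as described above, a minimal sketch with hypothetical MRLT values:

```python
import numpy as np

def geom_score(mrlt1, mrlt2):
    """Geometry Score: squared L2 distance between the MRLT of two
    datasets (lower means topologically more similar)."""
    return float(np.sum((np.asarray(mrlt1) - np.asarray(mrlt2)) ** 2))

# Hypothetical MRLT: a dataset confident in one hole versus a
# mode-collapsed one whose mass sits at zero holes.
real = [0.1, 0.8, 0.1, 0.0]
fake = [0.9, 0.1, 0.0, 0.0]
print(geom_score(real, real))  # 0.0
print(geom_score(real, fake))  # (0.8)^2 + (0.7)^2 + (0.1)^2 = approx 1.14
```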
To construct the witness complex given the sets of landmarks and witnesses, one provides the matrix of pairwise distances between $L$ and $X$ and the maximal value of the persistence parameter (see Eq. 1). In our experiments, we chose $\alpha_{max}$ proportional to the maximal pairwise distance between points in $X$, with some coefficient $\gamma$. Since we only compute $H_0$ and $H_1$, only simplices of dimension at most $2$ are needed. In principle, any value of $\gamma$ suffices to compare two datasets; however, in our experiments we found that a fairly small value yields good distributions (for large $\gamma$ a lot of time is spent in the regime of a single connected blob, which shifts the distributions towards $i = 0$). We summarize our approach in Algorithm 1 and Algorithm 2. We also suggest that, to obtain accurate results, datasets of the same size should be used for comparison.
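For instance, the maximal persistence parameter can be computed from the pairwise distance matrix in a few lines (a sketch; the function name and the value of the coefficient are ours):

```python
import numpy as np

def alpha_max(points, gamma):
    """Maximal persistence parameter: gamma times the largest pairwise
    distance in the dataset (gamma is a tunable coefficient)."""
    # Pairwise distance matrix via broadcasting; O(N^2) memory, which is
    # fine for moderate dataset sizes.
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    return gamma * dists.max()

pts = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])
print(alpha_max(pts, gamma=0.5))  # max distance is 5.0 -> 2.5
```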
Let us briefly discuss the complexity of each step in the main loop of Algorithm 1. Suppose that we have a dataset $X$ of $N$ points of dimensionality $D$. Computing the matrix of pairwise distances between all points in the dataset and the landmark points requires $O(N |L| D)$ operations. The complexity of the next step, computing the persistence barcode, is hard to estimate; however, note that it does not depend on the dimensionality of the data, and in practice this computation is faster than computing the distance matrix in the previous step (for datasets of significant dimensionality). All the remaining parts of the algorithm take a negligible amount of time. This linear scaling of the complexity w.r.t. the dimensionality of the data allows us to apply our method even to high-dimensional datasets. On a typical laptop (3.1 GHz Intel Core i5 processor) one iteration of the inner loop of Algorithm 1 for one class of the MNIST dataset completes in under a second.
We have implemented Algorithms 1 and 2 in Python, using GUDHI (http://gudhi.gforge.inria.fr/) for computing witness complexes and persistence barcodes. Our code is available on GitHub (https://github.com/KhrulkovV/geometry-score). Default values of the parameters in Algorithm 1 were used for the experiments unless otherwise specified. We test our method on several datasets and GAN models:
Synthetic data — on synthetic datasets we demonstrate that our method allows for distinguishing the datasets based on their topological properties.
MNIST — we compare the topological properties of the MNIST dataset of handwritten digits with samples generated by the WGAN and WGAN-GP models trained on it.
CelebA — to demonstrate that our method can be applied to datasets of large dimensionality we analyze the CelebA dataset (Liu et al., 2015) and check if we can detect mode collapse in a GAN using MRLT.
CaloGAN — as the final experiment we apply our algorithm to a dataset of a non-visual origin and evaluate the specific generative model CaloGAN (Paganini et al., 2017).
For this experiment we generated a collection of simple datasets (see Fig. 6), each containing the same number of points. As a test problem, we would like to evaluate which of the datasets is the best approximation to the ground truth dataset. For each dataset we ran Algorithm 1 and computed the MRLT using Eq. 4. The resulting distributions are visualized in Fig. 6, [bottom]. We observe that we can correctly identify the number of one-dimensional holes in each space using the MAP estimate $\arg\max_i \mathrm{MRLT}(i, 1, X)$.
It is clear which dataset is the most similar to the ground truth, which is supported by the fact that their MRLT are almost identical. Note that on such simple datasets we were able to recover the correct homology with almost complete confidence; this will not be the case for the more complicated manifolds in the next experiment.
In this experiment we compare the topological properties of the MNIST dataset and samples generated by the WGAN and WGAN-GP models trained on MNIST. It was claimed that the WGAN-GP model produces better images, and we would like to verify whether we can detect this using topology. For the GAN implementations we used the code (https://github.com/igul222/improved_wgan_training) provided by the authors of (Gulrajani et al., 2017). We trained each model for the same number of epochs and generated samples for the analysis. To compare the topology of each class individually, we trained a CNN classifier on MNIST (with high test accuracy) and split the generated datasets into 10 classes (containing roughly equal numbers of images each). For every class and each of the corresponding datasets (‘base’, ‘wgan’, ‘wgan-gp’) we ran Algorithm 1 and computed the MRLT. Similarly, we evaluated the MRLT for the entire datasets without splitting them into classes. The obtained MRLT are presented in Fig. 7, and the corresponding Geometry Scores for each model are given in Table 1. We observe that both models produce distributions which are very close to the ground truth, but for almost all classes WGAN-GP shows better scores. We can also note that for the entire datasets (Fig. 7, [right]) the predicted values of homology do not seem to be much bigger than for each individual digit. One possible explanation is that samples of one class fill the holes in the underlying manifolds of other, visually similar classes.
We now analyze the popular CelebA dataset, consisting of photos of various celebrities. In this experiment we would like to study whether we can detect mode collapse using our method. To achieve this, we train two GAN models: a good model with a high-capacity generator, and a second model with the generator much weaker than the discriminator. We utilize the DCGAN model and use the implementation provided (https://github.com/carpedm20/DCGAN-tensorflow) by the authors of (Radford et al., 2015). For the first model (‘dcgan’) we use the default settings, and for the second (‘bad-dcgan’) we reduce the latent dimension, the size of the fully connected layer in the generator, and the number of filters in the convolutional layers. To obtain faces, we perform a central crop of the images in the dataset. We trained both models and produced samples for our analysis; similarly, we randomly picked the same number of (cropped) images from the original dataset. We report the obtained results in Fig. 8. The MRLT obtained using the good model matches the ground truth almost perfectly, and the low Geometry Score of the generated dataset confirms the good visual quality of the samples (Radford et al., 2015). The MRLT obtained using the weak model is maximized at $i = 0$, which suggests that the samples are either identical or present very little topological diversity (compare with Fig. 5), which we confirmed visually. In Fig. 8, [right] we report the behavior of the Geometry Score and the Inception Score (Salimans et al., 2016) w.r.t. the iteration number. The Inception Score uses the pretrained Inception network (Szegedy et al., 2015) and is defined as

$$\mathrm{IS}(G) = \exp\big(\mathbb{E}_{x \sim p_{model}} \, \mathrm{KL}\big(p(y \mid x)\,\|\,p(y)\big)\big),$$
where $p(y \mid x)$ is approximated by the Inception network and $p(y)$ is computed as $\frac{1}{N}\sum_{i=1}^{N} p(y \mid x_i)$. Note that the Geometry Score of the better model rapidly decreases, while that of the mode-collapsed model stagnates at high values. Such behavior could not be observed with the Inception Score.
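A small self-contained sketch of this score, with a hypothetical matrix of class probabilities standing in for the Inception network outputs:

```python
import numpy as np

def inception_score(p_yx):
    """Inception Score from an (N, C) matrix of conditional class
    probabilities p(y|x): exp of the mean KL divergence between
    p(y|x) and the marginal p(y)."""
    p_y = p_yx.mean(axis=0)
    kl = np.sum(p_yx * (np.log(p_yx) - np.log(p_y)), axis=1)
    return float(np.exp(kl.mean()))

# Confident, diverse predictions score high; uniform predictions score 1.
diverse = np.array([[0.98, 0.01, 0.01],
                    [0.01, 0.98, 0.01],
                    [0.01, 0.01, 0.98]])
uniform = np.full((3, 3), 1.0 / 3.0)
print(inception_score(uniform))        # 1.0
print(inception_score(diverse) > 2.0)  # True
```

Note that the score depends only on classifier outputs, which is one reason it cannot detect the kind of stagnation the Geometry Score reveals.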
In this experiment, we apply our technique to a dataset from experimental particle physics. This dataset (https://data.mendeley.com/datasets/pvn3xc3wy5/1) represents a collection of calorimeter responses (a calorimeter is an experimental apparatus measuring the energy of particles), and it was used to create a generative model (Paganini et al., 2017) in order to help physicists working at the LHC. Evaluating the obtained model (https://github.com/hep-lbdl/CaloGAN) is a non-trivial task and was originally performed by comparing physical properties of the generated and the real data. Since our method is not limited to visual datasets, we can apply it to confirm the quality of this model. For the analysis we used the ‘eplus’ dataset, which is split into three parts (‘layer 0’, ‘layer 1’, ‘layer 2’) containing matrices of different sizes. We trained the CaloGAN model with default settings and generated samples (each sample combines data for all layers). We then randomly picked the same number of samples from the original dataset and compared the MRLT of the real and generated samples for each layer. Results are presented in Fig. 9. It appears that the topological properties of this dataset are rather trivial; however, they are correctly identified by CaloGAN. Slight dissimilarities between the distributions may be connected to the fact that the physical properties of the generated samples do not exactly match those of the real ones, as was analyzed by the authors of (Paganini et al., 2017).
6 Related work and discussion
Several performance measures have been introduced to assess GANs trained on natural images. The Inception Score (Salimans et al., 2016) uses the outputs of the pretrained Inception network, and a modification called the Fréchet Inception Distance (FID) (Heusel et al., 2017) also takes into account second-order information from the final layer of this model. Contrary to these methods, our approach does not use auxiliary networks and is not limited to visual data. We note, however, that since we only take topological properties into account (which do not change if we, say, shift the entire dataset by a constant vector), assessing the visual quality of samples may be difficult based only on our algorithm; thus, in the case of natural images we propose using our method in conjunction with other metrics such as FID. We also hypothesize that in the case of high data dimensionality, the Geometry Score of features extracted using some network will adequately assess the performance of a GAN.
We have introduced a new algorithm for evaluating a generative model. We showed that the topology of the underlying manifold of generated samples may differ from the topology of the original data manifold, which provides insight into the properties of GANs and can be used for hyperparameter tuning. We do not claim, however, that the obtained metric correlates with visual quality as estimated by humans, and we leave this analysis to future work. We hope that our research will be useful for further theoretical understanding of GANs.
We would like to thank the anonymous reviewers for their valuable comments. We also thank Maxim Rakhuba for productive discussions and making our illustrations better. This study was supported by the Ministry of Education and Science of the Russian Federation (grant 14.756.31.0001).
- Arjovsky et al. (2017) Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
- Barratt & Sharma (2018) Barratt, S. and Sharma, R. A note on the Inception Score. arXiv preprint arXiv:1801.01973, 2018.
- Boissonnat et al. (2009) Boissonnat, J.-D., Guibas, L. J., and Oudot, S. Y. Manifold reconstruction in arbitrary dimensions using witness complexes. Discrete & Computational Geometry, 42(1):37–70, 2009.
- De Silva & Carlsson (2004) De Silva, V. and Carlsson, G. E. Topological estimation using witness complexes. SPBG, 4:157–166, 2004.
- Edelsbrunner et al. (2000) Edelsbrunner, H., Letscher, D., and Zomorodian, A. Topological persistence and simplification. In Foundations of Computer Science, 2000. Proceedings. 41st Annual Symposium on, pp. 454–463. IEEE, 2000.
- Ghrist (2008) Ghrist, R. Barcodes: the persistent topology of data. Bulletin of the American Mathematical Society, 45(1):61–75, 2008.
- Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N. D., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems 27, pp. 2672–2680. Curran Associates, Inc., 2014.
- Goodfellow et al. (2016) Goodfellow, I., Bengio, Y., and Courville, A. Deep Learning, volume 1. MIT Press, Cambridge, 2016.
- Gulrajani et al. (2017) Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028, 2017.
- Hatcher (2002) Hatcher, A. Algebraic topology. Cambridge University Press, 2002.
- Heusel et al. (2017) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Klambauer, G., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a Nash equilibrium. arXiv preprint arXiv:1706.08500, 2017.
- Ho & Ermon (2016) Ho, J. and Ermon, S. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems 29, pp. 4565–4573. Curran Associates, Inc., 2016.
- Kaczynski et al. (2006) Kaczynski, T., Mischaikow, K., and Mrozek, M. Computational homology, volume 157. Springer Science & Business Media, 2006.
- Liu et al. (2015) Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), 2015.
- Lucic et al. (2017) Lucic, M., Kurach, K., Michalski, M., Gelly, S., and Bousquet, O. Are GANs created equal? A large-scale study. arXiv preprint arXiv:1711.10337, 2017.
- Mao et al. (2016) Mao, X., Li, Q., Xie, H., Lau, R. Y., Wang, Z., and Smolley, S. P. Least squares generative adversarial networks. arXiv preprint arXiv:1611.04076, 2016.
- Maria et al. (2014) Maria, C., Boissonnat, J.-D., Glisse, M., and Yvinec, M. The Gudhi library: Simplicial complexes and persistent homology. In International Congress on Mathematical Software, pp. 167–174. Springer, 2014.
- May (1999) May, J. P. A concise course in algebraic topology. University of Chicago press, 1999.
- Paganini et al. (2017) Paganini, M., de Oliveira, L., and Nachman, B. CaloGAN: Simulating 3D high energy particle showers in multi-layer electromagnetic calorimeters with generative adversarial networks. arXiv preprint arXiv:1705.02355, 2017.
- Radford et al. (2015) Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
- Salimans et al. (2016) Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X., and Chen, X. Improved techniques for training GANs. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 2234–2242. Curran Associates, Inc., 2016.
- Shao et al. (2017) Shao, H., Kumar, A., and Fletcher, P. T. The Riemannian geometry of deep generative models. arXiv preprint arXiv:1711.08014, 2017.
- Szegedy et al. (2015) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going deeper with convolutions. In Computer Vision and Pattern Recognition (CVPR), 2015. URL http://arxiv.org/abs/1409.4842.
- Zomorodian & Carlsson (2005) Zomorodian, A. and Carlsson, G. Computing persistent homology. Discrete & Computational Geometry, 33(2):249–274, 2005.