Deep generative models, such as Variational Autoencoders (VAE) Kingma and Welling (2014) and Generative Adversarial Networks (GAN) Goodfellow et al. (2014), have received a great deal of attention due to their ability to learn complex, high-dimensional distributions. One of the biggest impediments to future research is the lack of quantitative evaluation methods to accurately assess the quality of trained models. Without a proper evaluation metric, researchers often need to visually inspect generated samples or resort to qualitative techniques which can be subjective. One of the main difficulties for quantitative assessment lies in the fact that the distribution is only specified implicitly – one can learn to sample from a predefined distribution, but cannot evaluate the likelihood efficiently. In fact, even if likelihood computation were computationally tractable, it might be inadequate and misleading for high-dimensional problems Theis et al. (2016).
As a result, surrogate metrics are often used to assess the quality of the trained models. Some proposed measures, such as Inception Score (IS) Salimans et al. (2016) and Fréchet Inception Distance (FID) Heusel et al. (2017), have shown promising results in practice. In particular, FID has been shown to be robust to image corruption, it correlates well with the visual fidelity of the samples, and it can be computed on unlabeled data.
However, all of the metrics commonly applied to evaluating generative models share a crucial weakness: Since they yield a one-dimensional score, they are unable to distinguish between different failure cases. For example, the generative models shown in Figure 1 obtain similar FIDs but exhibit different sample characteristics: the model on the left trained on MNIST LeCun et al. (1998) produces realistic samples, but only generates a subset of the digits. On the other hand, the model on the right produces low-quality samples which appear to cover all digits. A similar effect can be observed on the CelebA Liu et al. (2015) data set. In this work we argue that a single-value summary is not adequate to compare generative models.
Motivated by this shortcoming, we present a novel approach which disentangles the divergence between distributions into two components: precision and recall. Given a reference distribution and a learned distribution , precision intuitively measures the quality of samples from , while recall measures the proportion of that is covered by . Furthermore, we propose an elegant algorithm which can compute these quantities based on samples from and . In particular, using this approach we are able to quantify the degree of mode dropping and mode inventing based on samples from the true and the learned distributions.
Our contributions: (1) We introduce a novel definition of precision and recall for distributions and prove that the notion is theoretically sound and has desirable properties, (2) we propose an efficient algorithm to compute these quantities, (3) we relate these notions to total variation, IS and FID, (4) we demonstrate that in practice one can quantify the degree of mode dropping and mode inventing on real world data sets (image and text data), and (5) we compare several types of generative models based on the proposed approach – to our knowledge, this is the first metric that experimentally confirms the folklore that GANs often produce "sharper" images, but can suffer from mode collapse (high precision, low recall), while VAEs produce "blurry" images, but cover more modes of the distribution (low precision, high recall).
2 Background and Related Work
The task of evaluating generative models is an active research area. Here we focus on recent work in the context of deep generative models for image and text data. Classic approaches relying on comparing log-likelihood have received some criticism due to the fact that one can achieve high likelihood but low image quality, and conversely, high-quality images but low likelihood Theis et al. (2016). While the likelihood can be approximated in some settings, kernel density estimation in high-dimensional spaces is extremely challenging Wu et al. (2017); Theis et al. (2016). Other failure modes related to density estimation in high-dimensional spaces have been elaborated in Theis et al. (2016); Huszár (2015). A recent review of popular approaches is presented in Borji (2018).
The Inception Score (IS) Salimans et al. (2016) offers a way to quantitatively evaluate the quality of generated samples in the context of image data. Intuitively, the conditional label distribution $p(y \mid x)$ of samples containing meaningful objects should have low entropy, while the label distribution over the whole data set should have high entropy. Formally, $\mathrm{IS}(Q) = \exp\bigl(\mathbb{E}_{x \sim Q}[\mathrm{KL}(p(y \mid x) \,\|\, p(y))]\bigr)$. Several drawbacks of the IS are discussed in Barratt and Sharma (2018).
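To make the formula concrete, here is a minimal, self-contained sketch of the IS computation on toy classifier outputs (the function name and toy data are ours for illustration, not from the paper or any library):

```python
import math

def inception_score(cond_probs):
    """IS = exp(E_x[KL(p(y|x) || p(y))]), given one conditional label
    distribution p(y|x) per generated sample x."""
    n = len(cond_probs)
    num_classes = len(cond_probs[0])
    # Marginal label distribution p(y): average of the conditionals.
    marginal = [sum(p[j] for p in cond_probs) / n for j in range(num_classes)]
    # Average KL divergence between each conditional and the marginal.
    avg_kl = sum(
        p[j] * math.log(p[j] / marginal[j])
        for p in cond_probs for j in range(num_classes) if p[j] > 0
    ) / n
    return math.exp(avg_kl)

# Confident, diverse predictions: low conditional entropy, high marginal
# entropy, so the IS approaches the number of classes (3 here).
sharp = [[0.98, 0.01, 0.01], [0.01, 0.98, 0.01], [0.01, 0.01, 0.98]]
# Uninformative predictions: the IS collapses to 1.
flat = [[1 / 3, 1 / 3, 1 / 3]] * 3
```

Note that the score is bounded above by the number of classes, one of the limitations examined in Barratt and Sharma (2018).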
The FID Heusel et al. (2017) provides an alternative approach which requires no labeled data. The samples are first embedded in some feature space (e.g., a specific layer of the Inception network for images). Then, a continuous multivariate Gaussian is fit to the data and the distance computed as $\mathrm{FID} = \lVert \mu - \hat\mu \rVert_2^2 + \operatorname{tr}\bigl(\Sigma + \hat\Sigma - 2(\Sigma \hat\Sigma)^{1/2}\bigr)$, where $(\mu, \Sigma)$ and $(\hat\mu, \hat\Sigma)$ denote the mean and covariance of the corresponding samples. FID is sensitive both to the addition of spurious modes and to mode dropping (see Figure 5 and results in Lucic et al. (2018)). Bińkowski et al. (2018) recently introduced an unbiased alternative to FID, the Kernel Inception Distance. While unbiased, it shares an extremely high Spearman rank-order correlation with FID Kurach et al. (2018).
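As a sanity-check illustration of the formula (not the actual Inception-feature computation), consider the univariate case, where the trace term reduces to $(\sigma_1 - \sigma_2)^2$. The helper below is a sketch with our own naming:

```python
import math

def fid_1d(mean1, var1, mean2, var2):
    """Frechet distance between two 1-D Gaussians. The matrix square
    root in the general FID formula reduces to sqrt(var1 * var2) here,
    so the distance is (mean1 - mean2)^2 + (sqrt(var1) - sqrt(var2))^2."""
    return (mean1 - mean2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)
```

Shifting the mean or mismatching the variance both increase the distance, and the distance is zero only for identical Gaussians.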
Another approach is to train a classifier between the real and fake distributions and to use its accuracy on a test set as a proxy for the quality of the samples Lopez-Paz and Oquab (2016); Im et al. (2018). This approach necessitates training a classifier for each model, which is seldom practical. Furthermore, the classifier might detect a single dimension where the true and generated samples differ (e.g., barely visible artifacts in generated images) and enjoy high accuracy, which runs the risk of assigning lower quality to a better model.
To the best of our knowledge, all commonly used metrics for evaluating generative models are one-dimensional in that they only yield a single score or distance. A notion of precision and recall has previously been introduced in Lucic et al. (2018) where the authors compute the distance to the manifold of the true data and use it as a proxy for precision and recall on a synthetic data set. Unfortunately, it is not possible to compute this quantity for more complex data sets.
3 PRD: Precision and Recall for Distributions
In this section, we derive a novel notion of precision and recall to compare a distribution $Q$ to a reference distribution $P$. The key intuition is that precision should measure how much of $Q$ can be generated by a "part" of $P$ while recall should measure how much of $P$ can be generated by a "part" of $Q$. Figure 4 (a)-(d) shows four toy examples for $P$ and $Q$ to visualize this idea: (a) If $P$ is bimodal and $Q$ only captures one of the modes, we should have perfect precision but only limited recall. (b) In the opposite case, we should have perfect recall but only limited precision. (c) If $P = Q$, we should have perfect precision and recall. (d) If the supports of $P$ and $Q$ are disjoint, we should have zero precision and recall.
Let $S$ be the (non-empty) intersection of the supports¹ of $P$ and $Q$. Then, $P$ may be viewed as a two-component mixture where the first component $P_S$ is a probability distribution on $S$ and the second component $P_{\bar S}$ is defined on the complement of $S$. Similarly, $Q$ may be rewritten as a mixture of $Q_S$ and $Q_{\bar S}$. More formally, for some $\alpha, \beta \in (0, 1]$, we define

$$P = \beta P_S + (1-\beta) P_{\bar S} \quad\text{and}\quad Q = \alpha Q_S + (1-\alpha) Q_{\bar S}. \tag{1}$$

¹For a distribution $P$ defined on a finite state space $\Omega$, we define $\mathrm{supp}(P) = \{\omega \in \Omega : P(\omega) > 0\}$.
This decomposition allows for a natural interpretation: $P_{\bar S}$ is the part of $P$ that cannot be generated by $Q$, so its mixture weight $1-\beta$ may be viewed as a loss in recall. Similarly, $Q_{\bar S}$ is the part of $Q$ that cannot be generated by $P$, so $1-\alpha$ may be regarded as a loss in precision. In the case where $P_S = Q_S$, i.e., the distributions $P$ and $Q$ agree on $S$ up to scaling, $\alpha$ and $\beta$ provide us with a simple two-number precision and recall summary satisfying the examples in Figure 4 (a)-(d).
If $P_S \neq Q_S$, we are faced with a conundrum: Should the differences in $P_S$ and $Q_S$ be attributed to losses in precision or recall? Is $Q$ inadequately "covering" $P$ or is it generating "unnecessary" noise? Inspired by PR curves for binary classification, we propose to resolve this predicament by providing a trade-off between precision and recall instead of a two-number summary for any two distributions $P$ and $Q$. To parametrize this trade-off, we consider a distribution $\mu$ on $S$ that signifies a "true" common component of $P$ and $Q$ and, similarly to (1), we decompose both $P$ and $Q$ as

$$P = \beta \mu + (1-\beta)\nu_P \quad\text{and}\quad Q = \alpha \mu + (1-\alpha)\nu_Q. \tag{2}$$
The distribution $P$ is viewed as a two-component mixture where the first component is $\mu$ and the second component $\nu_P$ signifies the part of $P$ that is "missed" by $Q$ and should thus be considered a recall loss. Similarly, $Q$ is decomposed into $\mu$ and the part $\nu_Q$ that signifies noise and should thus be considered a precision loss. As $\mu$ is varied, this leads to a trade-off between precision and recall.
It should be noted that unlike PR curves for binary classification where different thresholds lead to different classifiers, trade-offs between precision and recall here do not constitute different models or distributions – the proposed PRD curves only serve as a description of the characteristics of the model with respect to the target distribution.
3.2 Formal definition
For simplicity, we consider distributions $P$ and $Q$ that are defined on a finite state space, though the notion of precision and recall can be extended to arbitrary distributions. By combining (1) and (2), we obtain the following formal definition of precision and recall.
Definition 1. For $\alpha, \beta \in (0, 1]$, the probability distribution $Q$ has precision $\alpha$ at recall $\beta$ w.r.t. $P$ if there exist distributions $\mu$, $\nu_P$ and $\nu_Q$ such that

$$P = \beta \mu + (1-\beta)\nu_P \quad\text{and}\quad Q = \alpha \mu + (1-\alpha)\nu_Q. \tag{3}$$
Definition 2. The set of attainable pairs of precision and recall of a distribution $Q$ w.r.t. a distribution $P$ is denoted by $\mathrm{PRD}(Q, P)$ and it consists of all $(\alpha, \beta)$ satisfying Definition 1 and the pair $(0, 0)$.
The set $\mathrm{PRD}(Q, P)$ characterizes the above-mentioned trade-off between precision and recall and can be visualized similarly to PR curves in binary classification: Figure 4 (a)-(d) shows the set $\mathrm{PRD}(Q, P)$ on a 2D-plot for the examples (a)-(d) in Figure 4. Note how the plot distinguishes between (a) and (b): Any symmetric evaluation method (such as FID) assigns these cases the same score although they are highly different. The interpretation of the set $\mathrm{PRD}(Q, P)$ is further aided by the following basic properties, which we prove in Section A.1 in the appendix.
Theorem 1. Let $P$ and $Q$ be probability distributions defined on a finite state space $\Omega$. The set $\mathrm{PRD}(Q, P)$ satisfies the following properties:

1. $(1, 1) \in \mathrm{PRD}(Q, P)$ if and only if $P = Q$ (equality)
2. $\mathrm{PRD}(Q, P) = \{(0, 0)\}$ if and only if $\mathrm{supp}(P) \cap \mathrm{supp}(Q) = \emptyset$ (disjoint supports)
3. $\max \{\alpha \mid (\alpha, \beta) \in \mathrm{PRD}(Q, P)\} = Q(\mathrm{supp}(P))$ (maximum precision)
4. $\max \{\beta \mid (\alpha, \beta) \in \mathrm{PRD}(Q, P)\} = P(\mathrm{supp}(Q))$ (maximum recall)
5. if $(\alpha, \beta) \in \mathrm{PRD}(Q, P)$, $\alpha' \in (0, \alpha]$ and $\beta' \in (0, \beta]$, then $(\alpha', \beta') \in \mathrm{PRD}(Q, P)$ (monotonicity)
6. $(\alpha, \beta) \in \mathrm{PRD}(Q, P)$ if and only if $(\beta, \alpha) \in \mathrm{PRD}(P, Q)$ (duality)
Property 1 in combination with Property 5 guarantees that if $P = Q$, the set $\mathrm{PRD}(Q, P)$ contains the interior of the unit square, see case (c) in Figure 4. Similarly, Property 2 assures that whenever there is no overlap between $P$ and $Q$, $\mathrm{PRD}(Q, P)$ only contains the origin, see case (d) of Figure 4. Properties 3 and 4 provide a connection to the decomposition in (1) and allow an analysis of the cases (a) and (b) in Figure 4: As expected, $Q$ in (a) achieves a maximum precision of 1 but only a maximum recall of 0.5 while in (b), maximum recall is 1 but maximum precision is 0.5. Note that the quantities $\alpha$ and $\beta$ here are by construction the same as in (1). Finally, Property 6 provides a natural interpretation of precision and recall: The precision of $Q$ w.r.t. $P$ is equal to the recall of $P$ w.r.t. $Q$ and vice versa.
Clearly, not all cases are as simple as the examples (a)-(d) in Figure 4, in particular if $P$ and $Q$ differ on the intersection of their supports. The examples (e) and (f) in Figure 4 and the resulting sets $\mathrm{PRD}(Q, P)$ illustrate the importance of the trade-off between precision and recall as well as the utility of the set $\mathrm{PRD}(Q, P)$. In both cases, $P$ and $Q$ have the same support while $Q$ has high precision and low recall in case (e) and low precision and high recall in case (f). This is clearly captured by the sets $\mathrm{PRD}(Q, P)$. Intuitively, the examples (e) and (f) may be viewed as noisy versions of the cases (a) and (b) in Figure 4.
3.3 Algorithm
Computing the set $\mathrm{PRD}(Q, P)$ based on Definitions 1 and 2 is non-trivial as one has to check whether there exist suitable distributions $\mu$, $\nu_P$ and $\nu_Q$ for all possible values of $\alpha$ and $\beta$. We introduce an equivalent definition of $\mathrm{PRD}(Q, P)$ in Theorem 2 that does not depend on the distributions $\mu$, $\nu_P$ and $\nu_Q$ and that leads to an elegant algorithm to compute practical PRD curves.
Theorem 2. Let $P$ and $Q$ be two probability distributions defined on a finite state space $\Omega$. For $\lambda > 0$, define the functions

$$\alpha(\lambda) = \sum_{\omega \in \Omega} \min\bigl(\lambda P(\omega), Q(\omega)\bigr) \quad\text{and}\quad \beta(\lambda) = \sum_{\omega \in \Omega} \min\Bigl(P(\omega), \frac{Q(\omega)}{\lambda}\Bigr). \tag{4}$$

Then, it holds that $\mathrm{PRD}(Q, P) = \bigl\{\bigl(\theta\alpha(\lambda), \theta\beta(\lambda)\bigr) \mid \lambda \in (0, \infty),\ \theta \in [0, 1]\bigr\}$.
We prove the theorem in Section A.2 in the appendix. The key idea of Theorem 2 is illustrated in Figure 4: The set $\mathrm{PRD}(Q, P)$ may be viewed as a union of segments of the lines $\beta = \alpha/\lambda$ over all $\lambda \in (0, \infty)$. Each segment starts at the origin and ends at the maximal achievable value $\bigl(\alpha(\lambda), \beta(\lambda)\bigr)$. This provides a surprisingly simple algorithm to compute $\mathrm{PRD}(Q, P)$ in practice: Simply compute the pairs $\alpha(\lambda)$ and $\beta(\lambda)$ as defined in (4) for an equiangular grid of values of $\lambda$. For a given angular resolution $m \in \mathbb{N}$, we compute

$$\Bigl\{\bigl(\alpha(\lambda_i), \beta(\lambda_i)\bigr) \;\Big|\; \lambda_i = \tan\Bigl(\frac{i}{m+1} \cdot \frac{\pi}{2}\Bigr),\ i = 1, \ldots, m\Bigr\}.$$
To compare different distributions $Q_i$, one may simply plot their respective PRD curves, while an approximation of the full sets $\mathrm{PRD}(Q_i, P)$ may be computed by interpolation between the computed pairs and the origin. An implementation of the algorithm is available at https://github.com/msmsajjadi/precision-recall-distributions.
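For illustration, the computation behind Theorem 2 can be sketched in a few lines of Python for discrete histograms (a simplified sketch of ours, not the released implementation linked above):

```python
import math

def prd_curve(p, q, num_angles=1001):
    """PRD curve of q w.r.t. the reference p, both given as dicts
    mapping states to probabilities. For each slope lambda on an
    equiangular grid, alpha(lambda) = sum_w min(lambda * p(w), q(w))
    and beta(lambda) = alpha(lambda) / lambda."""
    states = set(p) | set(q)
    curve = []
    for i in range(1, num_angles + 1):
        lam = math.tan(i / (num_angles + 1) * math.pi / 2)
        alpha = sum(min(lam * p.get(w, 0.0), q.get(w, 0.0)) for w in states)
        curve.append((alpha, alpha / lam))  # (precision, recall)
    return curve

# Toy example (a) from the text: p is bimodal, q captures one mode.
p = {"mode1": 0.5, "mode2": 0.5}
q = {"mode1": 1.0}
curve = prd_curve(p, q)
max_precision = max(a for a, b in curve)  # close to 1: samples look "real"
max_recall = max(b for a, b in curve)     # close to 0.5: one mode is missed
```

The toy bimodal example recovers the expected behavior: perfect precision but only half the recall.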
3.4 Connection to total variation distance
Theorem 2 provides a natural interpretation of the proposed approach. For $\lambda = 1$, we have

$$\alpha(1) = \beta(1) = \sum_{\omega \in \Omega} \min\bigl(P(\omega), Q(\omega)\bigr) = 1 - \delta(P, Q),$$

where $\delta(P, Q)$ denotes the total variation distance between $P$ and $Q$. As such, our notion of precision and recall may be viewed as a generalization of total variation distance.
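This relation is easy to verify numerically: at $\lambda = 1$, the overlap $\sum_\omega \min(P(\omega), Q(\omega))$ equals one minus the total variation distance. A small sketch (helper names are ours):

```python
def overlap(p, q):
    """alpha(1) = beta(1) = sum_w min(p(w), q(w))."""
    states = set(p) | set(q)
    return sum(min(p.get(w, 0.0), q.get(w, 0.0)) for w in states)

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    states = set(p) | set(q)
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in states)

# Two overlapping discrete distributions on a toy state space.
p = {"a": 0.7, "b": 0.3}
q = {"a": 0.4, "b": 0.4, "c": 0.2}
```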
4 Application to Deep Generative Models
In this section, we show that the algorithm introduced in Section 3.3 can be readily applied to evaluate precision and recall of deep generative models. In practice, access to $P$ and $Q$ is given via samples $X_P \sim P$ and $X_Q \sim Q$. Given that both $P$ and $Q$ are continuous distributions, the probability of generating a specific point sampled from $P$ is 0, so the distributions need to be compared at a coarser level of granularity. Furthermore, there is strong empirical evidence that comparing samples in image space runs the risk of assigning higher quality to a worse model Lopez-Paz and Oquab (2016); Salimans et al. (2016); Theis et al. (2016). A common remedy is to apply a pre-trained classifier trained on natural images and to compare $X_P$ and $X_Q$ at a feature level. Intuitively, in this feature space the samples should be compared based on statistical regularities in the images rather than random artifacts resulting from the generative process Lopez-Paz and Oquab (2016); Odena et al. (2016).
Following this line of work, we first use a pre-trained Inception network to embed the samples (i.e., using the Pool3 layer Heusel et al. (2017)). We then cluster the union of $X_P$ and $X_Q$ in this feature space using mini-batch k-means with $k = 20$ Sculley (2010). Intuitively, we reduce the problem to a one-dimensional one in which the histograms over the cluster assignments can be meaningfully compared. Hence, failing to produce samples from a cluster with many samples from the true distribution will hurt recall, and producing samples in clusters without many real samples will hurt precision. As the clustering algorithm is randomized, we run the procedure several times and average over the resulting PRD curves. We note that such a clustering is meaningful as shown in Figure 9 in the appendix and that it can be efficiently scaled to very large sample sizes Bachem et al. (2016b, a).
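After the clustering step, each sample set reduces to a histogram over cluster indices, and these histograms play the role of the discrete distributions compared by the PRD algorithm. A minimal sketch of this reduction (the embedding and mini-batch k-means steps are omitted; the toy assignments and names are ours):

```python
from collections import Counter

def cluster_histogram(assignments, num_clusters):
    """Turn per-sample cluster assignments into a probability histogram
    over clusters, i.e., a discrete distribution suitable for PRD."""
    counts = Counter(assignments)
    n = len(assignments)
    return {c: counts.get(c, 0) / n for c in range(num_clusters)}

# Toy example: real samples spread over 4 clusters, generated samples
# concentrate on two of them (mode collapse, which should hurt recall).
real_assignments = [0, 0, 1, 1, 2, 2, 3, 3]
fake_assignments = [0, 0, 0, 0, 1, 1, 1, 1]
p = cluster_histogram(real_assignments, 4)
q = cluster_histogram(fake_assignments, 4)
```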
We stress that from the point of view of the proposed algorithm, only a meaningful embedding is required. As such, the algorithm can be applied to various data modalities. In particular, we show in Section 4.1 that besides image data, the algorithm can be applied to a text generation task.
4.1 Adding and dropping modes from the target distribution
Mode collapse or mode dropping is a major challenge in GANs Goodfellow et al. (2014); Salimans et al. (2016). Due to the symmetry of commonly used metrics with respect to precision and recall, the only way to assess whether the model is producing low-quality images or dropping modes is by visual inspection. In stark contrast, the proposed metric can quantitatively disentangle these effects which we empirically demonstrate.
We consider three data sets commonly used in the GAN literature: MNIST LeCun et al. (1998), Fashion-MNIST Xiao et al. (2017), and CIFAR-10 Krizhevsky and Hinton (2009). These data sets are labeled and consist of 10 balanced classes. To show the sensitivity of the proposed measure to mode dropping and mode inventing, we first fix $P$ to contain samples from the first 5 classes in the respective test set. Then, for a fixed $i$, we generate a set $Q_i$ which consists of samples from the first $i$ classes from the training set. As $i$ increases, $Q_i$ covers an increasing number of classes from $P$, which should result in higher recall. As we increase $i$ beyond 5, $Q_i$ includes samples from an increasing number of classes that are not present in $P$, which should result in a loss in precision, but not in recall as the other classes are already covered. Finally, the set $Q_5$ covers the same classes as $P$, so it should have high precision and high recall.
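The expected behavior of this experiment can be checked on a toy version using the maximum attainable precision and recall (Properties 3 and 4 of Theorem 1): mode dropping caps recall at the covered mass of $P$, while mode inventing caps precision at the mass of $Q$ lying on the support of $P$. A sketch with our own naming:

```python
def max_precision_recall(p, q):
    """Maximum precision Q(supp(P)) and maximum recall P(supp(Q))."""
    supp_p = {w for w, v in p.items() if v > 0}
    supp_q = {w for w, v in q.items() if v > 0}
    max_precision = sum(v for w, v in q.items() if w in supp_p)
    max_recall = sum(v for w, v in p.items() if w in supp_q)
    return max_precision, max_recall

# Reference over classes 0-4; Q_i is uniform over the first i classes.
p = {c: 0.2 for c in range(5)}
q3 = {c: 1 / 3 for c in range(3)}  # mode dropping: recall capped at 0.6
q8 = {c: 1 / 8 for c in range(8)}  # mode inventing: precision capped at 0.625
```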
Figure 5 (left) shows the IS and FID for the CIFAR-10 data set (results on the other data sets are shown in Figure 11 in the appendix). Since the IS is not computed w.r.t. a reference distribution, it is invariant to the choice of $P$, so as we add classes to $Q_i$, the IS increases. The FID decreases as we add more classes up to $i = 5$ before it starts to increase again as we add spurious modes. Critically, FID fails to distinguish the cases of mode dropping and mode inventing: sets $Q_i$ with $i < 5$ and $Q_j$ with $j > 5$ can share similar FIDs. In contrast, Figure 5 (middle) shows our PRD curves as we vary the number of classes in $Q_i$. Adding correct modes leads to an increase in recall, while adding fake modes leads to a loss of precision.
We also apply the proposed approach to text data as shown in Figure 5 (right). In particular, we use the MultiNLI corpus of crowd-sourced sentence pairs annotated with topic and textual entailment information Williams et al. (2017). After discarding the entailment label, we collect all unique sentences for the same topic. Following Cífka et al. (2018), we embed these sentences using a BiLSTM with 2048 cells in each direction and max pooling, leading to a 4096-dimensional embedding Conneau et al. (2017). We consider 5 classes from this data set and fix $P$ to contain samples from all classes in order to measure the loss in recall for different $Q_i$. The curves in Figure 5 (right) successfully demonstrate the sensitivity of recall to mode dropping.
4.2 Assessing class imbalances for GANs
In this section we analyze the effect of class imbalance on the PRD curves.
Figure 6 shows a pair of GANs trained on MNIST which have virtually the same FID, but very different PRD curves. The model on the left generates a subset of the digits in high quality, while the model on the right seems to generate all digits, but each in low quality. We can naturally interpret this difference via the PRD curves: at lower desired recall levels, the model on the left enjoys higher precision – it generates several digits of high quality. If, however, one desires a higher recall level, the model on the right enjoys higher precision as it covers all digits.
To confirm this, we train an MNIST classifier on the embedding of $X_P$ with the ground-truth labels and plot the distribution of the predicted classes for both models. The histograms clearly show that the model on the left failed to generate all classes (loss in recall), while the model on the right produces a more balanced distribution over all classes (high recall). At the same time, the classifier has an average confidence² of 96.7% on the samples of the left model compared to 88.6% on those of the right model, indicating that the sample quality of the former is higher. This aligns very well with the PRD plots: samples on the left have high quality but are not diverse, in contrast to the samples on the right which are diverse but have low quality.

²We denote the output of the classifier for its highest value at the softmax layer as confidence. The intuition is that higher values signify higher confidence of the model for the given label.
This analysis reveals a connection to IS, which is based on the premise that the conditional label distribution $p(y \mid x)$ should have low entropy, while the marginal $p(y)$ should have high entropy. To further analyze the relationship between IS and the proposed approach, we plot the conditional entropy $\mathbb{E}_x[H(p(y \mid x))]$ against precision and the marginal entropy $H(p(y))$ against recall in Figure 10 in the appendix. The results over a large number of GANs and VAEs show a large Spearman correlation of -0.83 for precision and 0.89 for recall. We however stress two key differences between the approaches: Firstly, to compute the quantities in IS one needs a classifier and a labeled data set, in contrast to the proposed PRD metric which can be applied to unlabeled data. Secondly, IS only captures losses in recall w.r.t. classes, while our metric measures more fine-grained recall losses (see Figure 8 in the appendix).
4.3 Application to GANs and VAEs
We evaluate the precision and recall of 7 GAN types and the VAE with 100 hyperparameter settings each, as provided by Lucic et al. (2018). In order to visualize this vast quantity of models, one needs to summarize the PRD curves. A natural idea is to compute the maximum $F_1$ score, which corresponds to the harmonic mean between precision and recall, as a single-number summary. This idea is fundamentally flawed as $F_1$ is symmetric. However, its generalization $F_\beta$, defined as $F_\beta = (1 + \beta^2)\,\frac{p \cdot r}{\beta^2 p + r}$, provides a way to quantify the relative importance of precision $p$ and recall $r$: $F_\beta$ with $\beta > 1$ weighs recall higher than precision, whereas $\beta < 1$ weighs precision higher than recall. As a result, we propose to distill each PRD curve into a pair of values: the maximum $F_8$ and the maximum $F_{1/8}$.
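This summary can be sketched as follows (the toy PRD curve and function names are ours for illustration):

```python
def f_beta(precision, recall, beta):
    """Generalized F-score; beta > 1 weighs recall more than precision."""
    if precision == 0.0 or recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

def distill(curve, beta=8.0):
    """Summarize a PRD curve as (max F_beta, max F_{1/beta})."""
    return (max(f_beta(p, r, beta) for p, r in curve),
            max(f_beta(p, r, 1.0 / beta) for p, r in curve))

# A precision-biased model: high precision, low recall everywhere on
# the curve, so the precision-weighted F_{1/8} dominates F_8.
precision_biased = [(0.9, 0.1), (0.7, 0.3)]
f8, f1_8 = distill(precision_biased)
```

Plotting the pair $(F_8, F_{1/8})$ for each model places precision-biased models above the diagonal and recall-biased models below it.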
Figure 7 compares the maximum $F_8$ with the maximum $F_{1/8}$ for these models on the Fashion-MNIST data set. We choose $\beta = 8$ as it offers a good insight into the bias towards precision versus recall. Since $F_8$ weighs recall higher than precision and $F_{1/8}$ does the opposite, models with higher recall than precision will lie below the diagonal and models with higher precision than recall will lie above. To our knowledge, this is the first metric which confirms the folklore that VAEs are biased towards higher recall, but may suffer from precision issues (e.g., due to blurring effects), at least on this data set. On the right, we show samples from four models on the extreme ends of the plot for all combinations of high and low precision and recall. We include similar plots on the MNIST, CIFAR-10 and CelebA data sets in the appendix.
5 Conclusion
Quantitatively evaluating generative models is a challenging task of paramount importance. In this work we show that one-dimensional scores are not sufficient to capture different failure cases of current state-of-the-art generative models. As an alternative, we propose a novel notion of precision and recall for distributions and prove that both notions are theoretically sound and have desirable properties. We then connect these notions to total variation distance as well as FID and IS, and we develop an efficient algorithm that can be readily applied to evaluate deep generative models based on samples. We investigate the properties of the proposed algorithm on real-world data sets, including image and text generation, and show that it captures the precision and recall of generative models. Finally, we find empirical evidence supporting the folklore that VAEs produce samples of lower quality, while being less prone to mode collapse than GANs.
- Bachem et al. [2016a] Olivier Bachem, Mario Lucic, Hamed Hassani, and Andreas Krause. Fast and provably good seedings for k-means. In Advances in Neural Information Processing Systems (NIPS), 2016a.
- Bachem et al. [2016b] Olivier Bachem, Mario Lucic, S Hamed Hassani, and Andreas Krause. Approximate k-means++ in sublinear time. In AAAI, 2016b.
- Barratt and Sharma [2018] Shane Barratt and Rishi Sharma. A Note on the Inception Score. arXiv preprint arXiv:1801.01973, 2018.
- Bińkowski et al. [2018] Mikołaj Bińkowski, Dougal J. Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD GANs. In International Conference on Learning Representations (ICLR), 2018.
- Borji [2018] Ali Borji. Pros and Cons of GAN Evaluation Measures. arXiv preprint arXiv:1802.03446, 2018.
- Cífka et al. [2018] Ondřej Cífka, Aliaksei Severyn, Enrique Alfonseca, and Katja Filippova. Eval all, trust a few, do wrong to none: Comparing sentence generation models. arXiv preprint arXiv:1804.07972, 2018.
- Conneau et al. [2017] Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. arXiv preprint arXiv:1705.02364, 2017.
- Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Networks. In Advances in Neural Information Processing Systems (NIPS), 2014.
- Heusel et al. [2017] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a Nash equilibrium. In Advances in Neural Information Processing Systems (NIPS), 2017.
- Huszár [2015] Ferenc Huszár. How (not) to Train your Generative Model: Scheduled Sampling, Likelihood, Adversary? arXiv preprint arXiv:1511.05101, 2015.
- Im et al. [2018] Daniel Jiwoong Im, He Ma, Graham Taylor, and Kristin Branson. Quantitatively evaluating GANs with divergences proposed for training. In International Conference on Learning Representations (ICLR), 2018.
- Kingma and Welling [2014] Diederik P Kingma and Max Welling. Auto-encoding Variational Bayes. In International Conference on Learning Representations (ICLR), 2014.
- Krizhevsky and Hinton [2009] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, 2009.
- Kurach et al. [2018] Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, and Sylvain Gelly. The GAN Landscape: Losses, architectures, regularization, and normalization. arXiv preprint arXiv:1807.04720, 2018.
- LeCun et al. [1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
- Liu et al. [2015] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), 2015.
- Lopez-Paz and Oquab [2016] David Lopez-Paz and Maxime Oquab. Revisiting Classifier Two-Sample Tests. In International Conference on Learning Representations (ICLR), 2016.
- Lucic et al. [2018] Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are GANs Created Equal? A Large-Scale Study. In Advances in Neural Information Processing Systems (NIPS), 2018.
- Odena et al. [2016] Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 2016.
- Salimans et al. [2016] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved Techniques for Training GANs. In Advances in Neural Information Processing Systems (NIPS), 2016.
- Sculley [2010] David Sculley. Web-scale k-means clustering. In International Conference on World Wide Web (WWW), 2010.
- Theis et al. [2016] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. In International Conference on Learning Representations (ICLR), 2016.
- Williams et al. [2017] Adina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426, 2017.
- Wu et al. [2017] Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger Grosse. On the quantitative analysis of decoder-based generative models. In International Conference on Learning Representations (ICLR), 2017.
- Xiao et al. [2017] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv preprint arXiv:1708.07747, 2017.
Appendix A Proofs
Lemma 1. Let $P$ and $Q$ be probability distributions defined on a finite state space $\Omega$ and let $\alpha, \beta \in (0, 1]$. Then, $(\alpha, \beta) \in \mathrm{PRD}(Q, P)$ if and only if there exists a distribution $\mu$ such that for all $\omega \in \Omega$

$$\beta\mu(\omega) \leq P(\omega) \quad\text{and}\quad \alpha\mu(\omega) \leq Q(\omega). \tag{5}$$
A.1 Proof of Theorem 1
We show each of the properties independently:
Property 2 (disjoint supports): We show both directions of the claim by contraposition, i.e., we show $\mathrm{supp}(P) \cap \mathrm{supp}(Q) \neq \emptyset \Leftrightarrow \mathrm{PRD}(Q, P) \neq \{(0, 0)\}$. Consider an arbitrary $\omega' \in \mathrm{supp}(P) \cap \mathrm{supp}(Q)$. Then, by definition we have $P(\omega') > 0$ and $Q(\omega') > 0$. Let $\mu$ be defined as the distribution with $\mu(\omega') = 1$ and $\mu(\omega) = 0$ for all $\omega \neq \omega'$. Clearly, it holds that $P(\omega')\mu(\omega) \leq P(\omega)$ and $Q(\omega')\mu(\omega) \leq Q(\omega)$ for all $\omega \in \Omega$. Hence, by Lemma 1, we have $(Q(\omega'), P(\omega')) \in \mathrm{PRD}(Q, P)$ which implies that $\mathrm{PRD}(Q, P) \neq \{(0, 0)\}$ as claimed. Conversely, $\mathrm{PRD}(Q, P) \neq \{(0, 0)\}$ implies by Lemma 1 that there exist $\alpha > 0$ and $\beta > 0$ as well as a distribution $\mu$ satisfying (5). Let $\omega' \in \mathrm{supp}(\mu)$, which implies $\mu(\omega') > 0$ and thus by (5) also $P(\omega') > 0$ and $Q(\omega') > 0$. Hence, $\omega'$ is both in the support of $P$ and $Q$ which implies $\mathrm{supp}(P) \cap \mathrm{supp}(Q) \neq \emptyset$ as claimed.
To prove the claim, we next show that there exists $(\alpha, \beta) \in \mathrm{PRD}(Q, P)$ with $\alpha = Q(\mathrm{supp}(P))$. Let $\alpha^* = Q(\mathrm{supp}(P))$. If $\alpha^* = 0$, then $\mathrm{supp}(P) \cap \mathrm{supp}(Q) = \emptyset$ and $(0, 0) \in \mathrm{PRD}(Q, P)$ by Definition 2 as claimed. For the case $\alpha^* > 0$, let $S = \mathrm{supp}(P) \cap \mathrm{supp}(Q)$. By definition of $\alpha^*$, we have $Q(S) = \alpha^*$. Furthermore, $S \neq \emptyset$ since $Q(\omega) > 0$ for at least one $\omega \in \mathrm{supp}(P)$. Consider the distribution $\mu$ where $\mu(\omega) = Q(\omega)/\alpha^*$ for all $\omega \in S$ and $\mu(\omega) = 0$ for $\omega \notin S$. By construction, $\mu$ satisfies (5) in Lemma 1 with $\alpha = \alpha^*$ and $\beta = \min_{\omega \in S} P(\omega)/\mu(\omega) > 0$, and hence $(\alpha^*, \beta) \in \mathrm{PRD}(Q, P)$ as claimed.
A.2 Proof of Theorem 2
We first show that $\mathrm{PRD}(Q, P) \subseteq \{(\theta\alpha(\lambda), \theta\beta(\lambda)) \mid \lambda \in (0, \infty), \theta \in [0, 1]\}$. We consider an arbitrary element $(\alpha, \beta) \in \mathrm{PRD}(Q, P)$ and show that $(\alpha, \beta) = (\theta\alpha(\lambda), \theta\beta(\lambda))$ for some $\lambda \in (0, \infty)$ and $\theta \in [0, 1]$. For the case $(\alpha, \beta) = (0, 0)$, the result holds trivially for the choice of $\theta = 0$ and $\lambda = 1$. For the case $(\alpha, \beta) \neq (0, 0)$, we choose $\lambda = \alpha/\beta$ and $\theta = \alpha/\alpha(\lambda)$. Since $\beta(\lambda) = \alpha(\lambda)/\lambda$ by definition, this implies $\theta\beta(\lambda) = \alpha/\lambda = \beta$ as required. Furthermore, $\alpha, \beta > 0$ since by Definitions 1 and 2, $\alpha = 0$ if and only if $\beta = 0$. Similarly, we show that $\theta \leq 1$: By Lemma 1 there exists a distribution $\mu$ such that $\alpha\mu(\omega) \leq Q(\omega)$ and $\beta\mu(\omega) \leq P(\omega)$ for all $\omega \in \Omega$. This implies that $\alpha\mu(\omega) = \lambda\beta\mu(\omega) \leq \lambda P(\omega)$ and thus $\alpha\mu(\omega) \leq \min(\lambda P(\omega), Q(\omega))$ for all $\omega \in \Omega$. Summing over all $\omega \in \Omega$, we obtain $\alpha \leq \alpha(\lambda)$ which implies $\theta \leq 1$.
Finally, we show that $\{(\theta\alpha(\lambda), \theta\beta(\lambda)) \mid \lambda \in (0, \infty), \theta \in [0, 1]\} \subseteq \mathrm{PRD}(Q, P)$. Consider arbitrary $\lambda \in (0, \infty)$ and $\theta \in [0, 1]$. If $\alpha(\lambda) = 0$, the claim holds trivially since $(0, 0) \in \mathrm{PRD}(Q, P)$. Otherwise, define the distribution $\mu$ by $\mu(\omega) = \min(\lambda P(\omega), Q(\omega))/\alpha(\lambda)$ for all $\omega \in \Omega$. By definition, $\alpha(\lambda)\mu(\omega) \leq Q(\omega)$ for all $\omega \in \Omega$. Similarly, $\beta(\lambda)\mu(\omega) \leq P(\omega)$ for all $\omega \in \Omega$ since $\beta(\lambda)\mu(\omega) = \min(P(\omega), Q(\omega)/\lambda)$. Because $\theta \leq 1$, this implies $\theta\alpha(\lambda)\mu(\omega) \leq Q(\omega)$ and $\theta\beta(\lambda)\mu(\omega) \leq P(\omega)$ for all $\omega \in \Omega$. Hence, by Lemma 1, $(\theta\alpha(\lambda), \theta\beta(\lambda)) \in \mathrm{PRD}(Q, P)$ for all $\lambda \in (0, \infty)$ and $\theta \in [0, 1]$ as claimed. ∎
Appendix B Further figures
Figure: real images (left) and generated images (right).