MMD and Relative MMD test
Probabilistic generative models provide a powerful framework for representing data that avoids the expense of manual annotation typically needed by discriminative approaches. Model selection in this generative setting can be challenging, however, particularly when likelihoods are not easily accessible. To address this issue, we introduce a statistical test of relative similarity, which is used to determine which of two models generates samples that are significantly closer to a real-world reference dataset of interest. We use as our test statistic the difference in maximum mean discrepancies (MMDs) between the reference dataset and each model dataset, and derive a powerful, low-variance test based on the joint asymptotic distribution of the MMDs between each reference-model pair. In experiments on deep generative models, including the variational autoencoder and generative moment matching network, the tests provide a meaningful ranking of model performance as a function of parameter and training settings.
Generative models based on deep learning techniques aim to provide sophisticated and accurate models of data, without expensive manual annotation
(Bengio, 2009; Kingma et al., 2014). This is especially of interest as deep networks tend to require comparatively large training samples to achieve a good result (Krizhevsky et al., 2012). Model selection within this class of techniques can be a challenge, however. First, likelihoods can be difficult to compute for some families of recently proposed models based on deep learning (Goodfellow et al., 2014; Li et al., 2015b). The current best method to evaluate such models is based on Parzen-window estimates of the log likelihood
(Goodfellow et al., 2014, Section 5). Second, if we are given two models with similar likelihoods, we typically do not have a computationally inexpensive hypothesis test to determine whether one likelihood is significantly higher than the other. Permutation testing or other generic strategies are often computationally prohibitive, bearing in mind the relatively high computational requirements of deep networks (Krizhevsky et al., 2012). In this work, we provide an alternative strategy for model selection, based on a novel, nonparametric hypothesis test of relative similarity. We treat the two trained networks being compared as generative models (Goodfellow et al., 2014; Hinton et al., 2006; Salakhutdinov and Hinton, 2009)
, and test whether the first candidate model generates samples significantly closer to a reference validation set. The null hypothesis is that the ordering is reversed, and the second candidate model is closer to the reference (further, both samples are assumed to remain distinct from the reference, as will be the case for any sufficiently complex modeling problem).
Our model selection criterion is based on the maximum mean discrepancy (MMD) (Gretton et al., 2006, 2012a)
, which represents the distance between embeddings of empirical distributions in a reproducing kernel Hilbert space (RKHS). The maximum mean discrepancy is a metric on the space of probability distributions when a characteristic kernel is used
(Fukumizu et al., 2008; Sriperumbudur et al., 2010), meaning that the distribution embeddings are unique for each probability measure. Recently, the MMD has been used in training generative models adversarially (Li et al., 2015b; Dziugaite et al., 2015), where the MMD measures the distance of the generated samples to some reference target set; it has been used for statistical model criticism (Lloyd and Ghahramani, 2015); and to minimize the effect of nuisance variables on learned representations (Louizos et al., 2016). Rather than train a single model using the MMD distance to a reference distribution, our goal in this work is to evaluate the relative performance of two models, by testing whether one generates samples significantly closer to the reference distribution than the other. This extends the applicability of the MMD to problems of model selection and evaluation. Key to this result is a novel expression for the joint asymptotic distribution of two correlated MMDs (between samples generated from each model, and samples from the reference distribution). Li et al. (2015a)
have derived the joint distribution of a specific MMD estimator under the assumption that the distributions are equal. By contrast, we derive the case in which the distributions are unequal, as is expected due to irreducible model error.
We provide a detailed introduction to the MMD and its associated notation in Section 2. We derive the joint asymptotic distribution of the MMDs in Section 3: this uses similar ideas to the relative dependence test in Bounliphone et al. (2015), with the additional complexity due to there being three independent samples to deal with, rather than a single joint sample. We formulate a hypothesis test of relative similarity, to determine whether the difference in MMDs is statistically significant. Our first test benchmark is on synthetic data for which the ground truth is known (Section 4), where we verify that the test performs correctly under the null and the alternative.
Finally, in Section 5, we demonstrate the performance of our test over a broad selection of model comparison problems in the deep learning setting, by evaluating relative similarity of pairs of model outputs to a validation set over a range of training regimes and settings. Our benchmark models include the variational autoencoder (Kingma and Welling, 2014) and the generative moment matching network (Li et al., 2015b). We first demonstrate that the test performs as expected in scenarios where the same model is trained with different training set sizes, and the relative ordering of model performance is known. We then fix the training set size and change various architectural parameters of these networks, showing which models are significantly preferred with our test. We validate the rankings returned by the test using a separate set of data for which we compute alternate metrics for assessing the models, such as classification accuracy and likelihood.
In comparing samples from distributions, we use the Maximum Mean Discrepancy (MMD) (Gretton et al., 2006, 2012a). We briefly review this statistic and its asymptotic behaviour for a single pair of samples.
(Gretton et al., 2012a, Definition 2: Maximum Mean Discrepancy (MMD)) Let $\mathcal{F}$ be an RKHS, with $\phi(x)$ the continuous feature mapping from each $x \in \mathcal{X}$, such that the inner product between the features is given by the kernel function $k(x, x') := \langle \phi(x), \phi(x') \rangle_{\mathcal{F}}$. Then the squared population MMD is

(1) $\mathrm{MMD}^2[\mathcal{F}, p, q] = \left\| \mathbb{E}_{x \sim p}[\phi(x)] - \mathbb{E}_{y \sim q}[\phi(y)] \right\|_{\mathcal{F}}^2 = \mathbb{E}_{x, x' \sim p}\, k(x, x') - 2\, \mathbb{E}_{x \sim p,\, y \sim q}\, k(x, y) + \mathbb{E}_{y, y' \sim q}\, k(y, y').$
The following theorem describes an unbiased quadratic-time estimate of the MMD, and its asymptotic distribution when $p$ and $q$ are different.
(Gretton et al., 2012a, Lemma 6 and Corollary 16: Unbiased empirical estimate and asymptotic distribution of $\mathrm{MMD}^2$) Define observations $X := \{x_1, \ldots, x_m\}$ and $Y := \{y_1, \ldots, y_n\}$, independently and identically distributed (i.i.d.) from $p$ and $q$, respectively. An unbiased empirical estimate of $\mathrm{MMD}^2$ is a sum of two U-statistics and a sample average,

(2) $\mathrm{MMD}_u^2[\mathcal{F}, X, Y] = \frac{1}{m(m-1)} \sum_{i=1}^{m} \sum_{j \neq i} k(x_i, x_j) + \frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j \neq i} k(y_i, y_j) - \frac{2}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} k(x_i, y_j).$

Let $v_i := (x_i, y_i)$ be i.i.d. random variables, where $v_i \sim p \times q$. When $m = n$, an unbiased empirical estimate of $\mathrm{MMD}^2$ is

(3) $\mathrm{MMD}_u^2[\mathcal{F}, X, Y] = \frac{1}{n(n-1)} \sum_{i \neq j} h(v_i, v_j),$

a U-statistic with kernel $h(v_i, v_j) := k(x_i, x_j) + k(y_i, y_j) - k(x_i, y_j) - k(x_j, y_i)$. We assume $\mathbb{E}\left[h^2\right] < \infty$. When $p \neq q$, $\mathrm{MMD}_u^2[\mathcal{F}, X, Y]$ converges in distribution to a Gaussian according to

(4) $\sqrt{n}\left( \mathrm{MMD}_u^2[\mathcal{F}, X, Y] - \mathrm{MMD}^2[\mathcal{F}, p, q] \right) \xrightarrow{d} \mathcal{N}\left(0, \sigma_{XY}^2\right),$

where

(5) $\sigma_{XY}^2 = 4 \left( \mathbb{E}_v\!\left[ \left( \mathbb{E}_{v'}\, h(v, v') \right)^2 \right] - \left( \mathbb{E}_{v, v'}\, h(v, v') \right)^2 \right),$

uniformly at rate $n^{-1/2}$.
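To make the estimator concrete, the unbiased quadratic-time estimate of Eq. (2) can be sketched in a few lines of NumPy. This is an illustrative sketch with a Gaussian kernel; the function names are ours, not taken from the paper's released code:

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    # Pairwise Gaussian kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2_u(X, Y, bandwidth=1.0):
    # Unbiased quadratic-time estimate of MMD^2 (Eq. 2): the within-sample
    # averages exclude the diagonal terms k(x_i, x_i) and k(y_i, y_i).
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()
```

Because the estimator is unbiased, its value fluctuates around zero when the two samples come from the same distribution, and grows as the distributions separate.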
A two-sample test may be constructed using the MMD as a test statistic; however, when $p = q$ the statistic is degenerate, and the asymptotic distribution is a weighted sum of $\chi^2$ variables (which can have infinitely many terms; Gretton et al., 2012a).
By contrast, our problem setting is to determine with high significance whether a target distribution $p$ is closer to one of two candidate distributions $q_1$ or $q_2$, based on two empirical estimates of the MMD and their variances. This requires us to characterize the variances $\sigma_{XY}^2$ and $\sigma_{XZ}^2$ of the estimates $\mathrm{MMD}_u^2[\mathcal{F}, X, Y]$ and $\mathrm{MMD}_u^2[\mathcal{F}, X, Z]$, as well as the covariance of these two dependent estimates (the dependence arises from the shared sample $X$). Fortunately, degeneracy does not arise if we assume $q_1$ and $q_2$ are each distinct from $p$.
In the next section, we obtain the joint asymptotic distribution of two dependent MMD statistics. We demonstrate how this joint distribution can be empirically estimated, and use the resulting parametric form to construct a computationally efficient and powerful hypothesis test for relative similarity.
In this section, we derive our statistical test for relative similarity as measured by MMD. In order to maximize the statistical efficiency of the test, we reuse the samples from the reference distribution, denoted by $X$, to compute the MMD estimates with respect to both candidate distributions $q_1$ and $q_2$. We consider the two estimates $\mathrm{MMD}_u^2[\mathcal{F}, X, Y]$ and $\mathrm{MMD}_u^2[\mathcal{F}, X, Z]$; as the data sample $X$ is identical between them, these estimates are correlated. We therefore first derive the joint asymptotic distribution of these two metrics, and use this to construct a statistical test.
We assume that $X := \{x_1, \ldots, x_n\} \sim p$, $Y := \{y_1, \ldots, y_n\} \sim q_1$, and $Z := \{z_1, \ldots, z_n\} \sim q_2$ are i.i.d. samples of equal size, and that $p \neq q_1$, $p \neq q_2$; then

(6) $\sqrt{n} \left( \begin{pmatrix} \mathrm{MMD}_u^2[\mathcal{F}, X, Y] \\ \mathrm{MMD}_u^2[\mathcal{F}, X, Z] \end{pmatrix} - \begin{pmatrix} \mathrm{MMD}^2[\mathcal{F}, p, q_1] \\ \mathrm{MMD}^2[\mathcal{F}, p, q_2] \end{pmatrix} \right) \xrightarrow{d} \mathcal{N}\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma_{XY}^2 & \sigma_{XYXZ} \\ \sigma_{XYXZ} & \sigma_{XZ}^2 \end{pmatrix} \right).$
We substitute the kernel MMD definition from Equation (2), expand the terms in the expectation, and determine their empirical estimates in order to compute the variances in practice. The proof and additional details of the following derivations are given in Appendix A.
An empirical estimate of the covariance $\sigma_{XYXZ}$ in Equation (6), neglecting higher order terms, can be computed in $\mathcal{O}(n^2)$:

(7) $\hat{\sigma}_{XYXZ} = \frac{4}{n} \Bigg[ \frac{1}{n(n-1)^2}\, \mathbf{1}^\top \tilde{K}_{XX} \tilde{K}_{XX} \mathbf{1} - \frac{1}{n^2(n-1)}\, \mathbf{1}^\top \tilde{K}_{XX} \left( K_{XY} + K_{XZ} \right) \mathbf{1} + \frac{1}{n^3}\, \mathbf{1}^\top K_{XY}^\top K_{XZ} \mathbf{1} - \left( \frac{1}{n(n-1)} \mathbf{1}^\top \tilde{K}_{XX} \mathbf{1} - \frac{1}{n^2} \mathbf{1}^\top K_{XY} \mathbf{1} \right) \left( \frac{1}{n(n-1)} \mathbf{1}^\top \tilde{K}_{XX} \mathbf{1} - \frac{1}{n^2} \mathbf{1}^\top K_{XZ} \mathbf{1} \right) \Bigg],$

where $\mathbf{1}$ is a vector of ones of appropriate size, while $K_{XX}$, $K_{XY}$, and $K_{XZ}$ refer to the kernel matrices, with $\tilde{K}$ indicating that the diagonal entries have been set to zero (cf. Appendix A). The variances $\sigma_{XY}^2$ and $\sigma_{XZ}^2$ in Equation (5) are estimated analogously to Equation (7). Based on the empirical distribution from Equation (6), we now describe a statistical test to solve the following problem:
Let $p$, $q_1$, and $q_2$ be defined as above. Given observations $X$, $Y$, and $Z$ i.i.d. from $p$, $q_1$, and $q_2$ respectively, such that $p \neq q_1$ and $p \neq q_2$, we test the hypothesis that $q_1$ is closer to $p$ than $q_2$ is; i.e., we test the null hypothesis $\mathcal{H}_0: \mathrm{MMD}^2[\mathcal{F}, p, q_1] \geq \mathrm{MMD}^2[\mathcal{F}, p, q_2]$ versus the alternative hypothesis $\mathcal{H}_1: \mathrm{MMD}^2[\mathcal{F}, p, q_1] < \mathrm{MMD}^2[\mathcal{F}, p, q_2]$, at a given significance level $\alpha$.
The test statistic is used to compute a $p$-value under the standard normal distribution. The statistic is obtained by rotating the joint distribution (cf. Eq. (6)) by $\pi/4$ about the origin, and integrating the resulting projection on the first axis, in a manner similar to Bounliphone et al. (2015). Denote the asymptotically normal distribution of $\left[ \mathrm{MMD}_u^2[\mathcal{F}, X, Y],\ \mathrm{MMD}_u^2[\mathcal{F}, X, Z] \right]^\top$ as $\mathcal{N}(\mu, \Sigma)$. The resulting distribution from rotating by $\pi/4$ and projecting onto the primary axis is $\mathcal{N}\left(\mu^*, (\sigma^*)^2\right)$, where

(8) $\mu^* = \left[ R_{\pi/4}\, \mu \right]_1 = \tfrac{\sqrt{2}}{2} \left( \mathrm{MMD}^2[\mathcal{F}, p, q_1] - \mathrm{MMD}^2[\mathcal{F}, p, q_2] \right)$

(9) $(\sigma^*)^2 = \left[ R_{\pi/4}\, \Sigma\, R_{\pi/4}^\top \right]_{11} = \tfrac{1}{2} \left( \sigma_{XY}^2 - 2\sigma_{XYXZ} + \sigma_{XZ}^2 \right),$

with $R_{\pi/4}$ the rotation by $\pi/4$. Then, the $p$-value for testing $\mathcal{H}_0$ versus $\mathcal{H}_1$ is bounded as

(10) $p \leq \Phi\left( \frac{\mathrm{MMD}_u^2[\mathcal{F}, X, Y] - \mathrm{MMD}_u^2[\mathcal{F}, X, Z]}{\sqrt{\sigma_{XY}^2 + \sigma_{XZ}^2 - 2\sigma_{XYXZ}}} \right),$
where $\Phi$ is the CDF of the standard normal distribution. Code for performing the test is available.¹

¹ Code and examples can be found at https://github.com/eugenium/MMD
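The whole procedure of Eqs. (6)–(10) amounts to: estimate both MMDs against the shared reference sample, plug in leading-order estimates of the variances and covariance, and evaluate the Gaussian CDF. The NumPy sketch below illustrates this; it uses our own simplified plug-in variance estimates and helper names, not the exact implementation from the released code:

```python
import numpy as np
from math import erf, sqrt

def gaussian_kernel(A, B, bw=1.0):
    # Pairwise Gaussian kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def relative_mmd_test(X, Y, Z, bw=1.0):
    # H matrices hold the U-statistic kernel h from Eq. (3), one per pair;
    # both reuse the reference sample X, which correlates the two estimates.
    n = len(X)
    Kxx = gaussian_kernel(X, X, bw)
    Kxy, Kxz = gaussian_kernel(X, Y, bw), gaussian_kernel(X, Z, bw)
    Hxy = Kxx + gaussian_kernel(Y, Y, bw) - Kxy - Kxy.T
    Hxz = Kxx + gaussian_kernel(Z, Z, bw) - Kxz - Kxz.T
    np.fill_diagonal(Hxy, 0.0)
    np.fill_diagonal(Hxz, 0.0)
    mmd2_xy = Hxy.sum() / (n * (n - 1))
    mmd2_xz = Hxz.sum() / (n * (n - 1))

    def zeta(Ha, Hb):
        # Leading-order (co)variance term: E[(E'h)(E'g)] - (E h)(E g), cf. Eq. (5).
        return (Ha.mean(1) * Hb.mean(1)).mean() - Ha.mean() * Hb.mean()

    # Variance of the difference: var_xy + var_xz - 2 * cov_xyxz, cf. Eq. (9).
    var_diff = (4.0 / n) * (zeta(Hxy, Hxy) + zeta(Hxz, Hxz) - 2.0 * zeta(Hxy, Hxz))
    # One-sided p-value, Eq. (10): a small p rejects H0 in favor of
    # "q1 (sample Y) is closer to the reference than q2 (sample Z)".
    t = (mmd2_xy - mmd2_xz) / sqrt(max(var_diff, 1e-12))
    p = 0.5 * (1.0 + erf(t / sqrt(2.0)))
    return mmd2_xy, mmd2_xz, p
```

Swapping the roles of `Y` and `Z` flips the test direction, so a conclusive preference for one model yields a small $p$-value in one ordering and a large one in the other.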
We verify the validity of the hypothesis test described above using a synthetic data set in which we can directly control the relative similarity between distributions. We constructed three Gaussian distributions as illustrated in Figure
1. These Gaussian distributions are specified with different means so that we can control the degree of relative similarity between them. The question is whether the similarity between the reference and the first candidate is greater than the similarity between the reference and the second. In these experiments, we used a Gaussian kernel with bandwidth selected as the median pairwise distance between data points. We fixed the means of two of the Gaussians and varied the mean of the third over 41 regularly spaced values, avoiding the degenerate cases in which two of the distributions coincide. Figure 3 shows the $p$-values of the relative similarity test as this mean varies. As the varied mean crosses the point where the two similarities are equal, the $p$-values quickly transition from $1$ to $0$, indicating strong discrimination of the test. In Figure 2, we compare the power of our test to the power of a naive test in which the reference sample is split in two, so that the MMD estimates have no covariance: the latter simple approach clearly does worse than ours (a similar comparison in testing relative dependence returned the same advantage for a test based on the joint distribution; see Bounliphone et al. (2015, Section 3)). Figure 4 shows an empirical scatter plot of the pairs of MMD statistics along with an isocurve of the estimated distribution, demonstrating that the parametric Gaussian distribution is well calibrated to the empirical values. Furthermore, we validate our derived formulas using simulations in Appendix B, where we show that the $p$-values have the correct distribution under the null.
Figure 2: Power of the tests.
An important potential application of the Relative MMD can be found in recent work on unsupervised learning with deep neural networks
(Kingma and Welling, 2014; Bengio et al., 2014; Larochelle and Murray, 2011; Salakhutdinov and Hinton, 2009; Li et al., 2015b; Goodfellow et al., 2014). As noted by several authors, the evaluation of generative models is a challenging open problem (Li et al., 2015b; Goodfellow et al., 2014), and the distributions of samples from these models are very complex and difficult to evaluate. Relative MMD performance can be used to compare different model settings, or even model families, in a statistically valid framework. To compare two models using our test, we generate samples from both, and compare these to a set of real target data samples that were not used to train either model. In the experiments in the sequel we focus on the recently introduced variational autoencoder (VAE) (Kingma and Welling, 2014) and the generative moment matching networks (GMMN) (Li et al., 2015b). The former jointly trains an encoder and decoder network by maximizing a regularized variational lower bound (Kingma and Welling, 2014). The latter class of models is purely generative, minimizing an MMD-based objective; it works best when coupled with a separate autoencoder which reduces the dimensionality of the data. An architectural schematic for both classes of models is provided in Fig. 5
. Both these models can be trained using standard backpropagation
(Rumelhart et al., 1988). Using the latent variable prior, we can directly sample the data distribution of these models without using MCMC procedures (Hinton et al., 2006; Salakhutdinov and Hinton, 2009). We use the MNIST and FreyFace datasets for our analysis (LeCun et al., 1998; Kingma and Welling, 2014; Goodfellow et al., 2014). We first demonstrate the effectiveness of our test in a setting where we have a theoretical basis for expecting superiority of one unsupervised model versus another. Specifically, we use a setup where more training samples were used to create one model than the other. We find that the Relative MMD framework agrees with the expected results (models trained with more data generalize better). We then demonstrate how the Relative MMD can be used in evaluating network architecture choices, and we show that our test strongly agrees with other established metrics, but in contrast can provide significance results using just the validation data, while other methods may require an additional test set.
Several practical matters must be considered when applying the Relative MMD test. The choice of kernel can affect the quality of results; in particular, a more suitable kernel can give faster convergence. In this work we extend the logic of the median heuristic
(Gretton et al., 2012b) for bandwidth selection by computing the median pairwise distance between samples from $X$ and $Y$ and averaging it with the median pairwise distance between samples from $X$ and $Z$, which helps to maximize the difference between the two MMD statistics. Although the derivations for the variance of our statistic hold in all cases, the estimates require asymptotic arguments and thus a sufficiently large $n$. Selecting the kernel bandwidth in an appropriate range can therefore substantially increase the power of the test at a fixed sample size. While we observed the median heuristic to work well in our experiments, there are cases where alternative choices of kernel can provide greater power: for instance, the kernel can be chosen to maximize the expected test power on a held-out dataset (Gretton et al., 2012b).
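The averaged median heuristic described above can be sketched as follows; the helper names are ours, and this is an illustrative implementation rather than the one in the released code:

```python
import numpy as np

def median_pairwise_distance(A, B):
    # Median Euclidean distance over all cross pairs (a_i, b_j).
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    return np.median(d)

def averaged_median_bandwidth(X, Y, Z):
    # Average the median X-Y distance with the median X-Z distance,
    # as described in the text, to set the Gaussian kernel bandwidth.
    return 0.5 * (median_pairwise_distance(X, Y) + median_pairwise_distance(X, Z))
```

By construction the bandwidth always lies between the two per-pair medians, so neither candidate dominates the kernel choice.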
We use the architecture from Kingma and Welling (2014) with a hidden layer at both the encoder and decoder and a latent variable layer as shown in Figure 5a. We use sigmoidal activation for the hidden layers of encoder and decoder. For the FreyFace data, we use a Gaussian prior on the latent space and data space. For MNIST, we used a Bernoulli prior for the data space. We fix the training set size of the second autoencoder to 300 images for the FreyFace data and 1500 images for the MNIST data. We vary the number of training samples for the first autoencoder. We then generate samples from both autoencoders and compare them using Relative MMD to a held out set of data. We use 1500 FreyFace samples as the target in Relative MMD and 15000 images from MNIST. Since a single sample of the data might lead to better generalization performance by chance, we repeat this experiment multiple times and record whether the relative similarity test indicated a network is preferred or if it failed to reject the null hypothesis. The results are shown in Figure 6
which demonstrates that we are closely following the expected model preferences. Additionally for MNIST we use another separate set of supervised training and test data. We encode this data using both autoencoders and use logistic regression to obtain a classification accuracy. The indicated accuracies closely match the results of the relative similarity test, further validating the test.
We consider model selection between networks using different architectures. We train two encoders: one a fixed reference model (400 hidden units and 20 latent variables), and the other varying as specified in Table 1. 25000 images from the MNIST data set were used for training. We use another 20000 images as the target data in Relative MMD. Finally, we use a set of 10000 training and 10000 test images for a supervised task experiment. We use the labels in the MNIST data and perform training and classification using an $\ell_2$-regularized logistic regression on the encoded features. In addition, we use the supervised task test data to evaluate the variational lower bound of the data under the two models (Kingma and Welling, 2014). We show the result of this experiment in Table 1. For each comparison we take a different subset of the training data, which helps demonstrate the variation in lower bound and accuracy when retraining the reference architecture. We use a fixed significance level and indicate when the test favors one autoencoder over the other or fails to reject the null hypothesis. We find that the Relative MMD evaluation of the models closely matches performance on the supervised task and the test set variational lower bound.
Hidden (VAE 1)  Latent (VAE 1)  Relative MMD Result  Accuracy VAE 1 (%)  Accuracy VAE 2 (%)  Lower Bound VAE 1  Lower Bound VAE 2
200             5               Favor VAE 2          —                   —                   −126               −97
200             20              Favor VAE 2          —                   —                   −115               −105
400             50              Favor VAE 1          —                   —                   −99.6              −123.44
800             20              Favor VAE 1          —                   —                   −111               −115
800             50              Favor VAE 1          —                   —                   −101               −103
We demonstrate our hypothesis test on a different class of deep generative models, Generative Moment Matching Networks (GMMN) (Li et al., 2015b). This recently introduced model has shown competitive performance in terms of test set likelihood on the MNIST data. Furthermore, the training of this model is based on the MMD criterion. Li et al. (2015b) propose to use the model along with an autoencoder, which is the setup we employ in this work. Here a standard autoencoder model is first trained on the data to obtain a low-dimensional representation; a GMMN network is then trained on the latent representations (Figure 5).
We use the relative similarity test to evaluate various architectural choices in this new class of models. We start from the baseline model specified in Li et al. (2015b) and associated software. The details of the reference model are specified in Figure 5.
We vary the number of autoencoder hidden layers (1 to 4), generative model layers (1, 4, or 5), the number of network nodes (all, or 50% of the reference model), and the use of dropout on the autoencoder. We use the same training set of 55000, validation set of 5000, and test set of 10000 as in (Li et al., 2015b; Goodfellow et al., 2014). In total we train 48 models. We use these to compare 4 simplified binary network architecture choices using the Relative MMD: use of dropout on the autoencoder, few (1) or more (4 or 5) GMMN layers, few (1 or 2) or more (3 or 4) autoencoder layers, and the number of network nodes. We use our test to compare these model settings using the validation set as the target in the relative similarity test, and samples from the models as the two sources. To validate our results we compare them to likelihoods computed on the test set. The results are shown in Table 2. We see that the likelihood results computed on a separate test set follow the conclusions obtained from MMD on the validation set. In particular, we find that using fewer hidden layers for the GMMN and more hidden nodes generally produces better models.
Relative MMD Preference
Experimental Condition (A/B)  A    Inconclusive  B    Avg Likelihood A  Avg Likelihood B
Dropout/No Dropout            199  17            360  —                 —
More/Fewer GMMN Layers        105  14            393  —                 —
More/Fewer Nodes              450  13            113  —                 —
More/Fewer AE Layers          231  21            324  —                 —
In these experiments we have seen that the Relative MMD test can be used to compare deep generative models, yielding judgements aligned with other metrics. Comparisons to other metrics are important for verifying that our test is sensible, but they can obscure the fact that MMD is a valid evaluation technique on its own. When evaluating only sample-generating models where likelihood computation is not possible, MMD is an appropriate and tractable metric to consider, in addition to Parzen-window log-likelihoods and the visual appearance of the samples. In several ways it is potentially more appropriate than Parzen windows, as it allows one to consider directly the discrepancy between the test data samples and the model samples, while allowing for significance results. In such a situation, when comparing the performance of several models using the MMD against a single set of test samples, the Relative MMD test can provide an automatic significance value without expensive cross-validation procedures.
Gaussian kernels are closely related to Parzen-window estimates, thus computing an MMD in this case can be considered related to comparing Parzen-window log-likelihoods. The MMD gives several advantages, however. First, the asymptotics of the MMD are quite different from those of Parzen windows, since the Parzen-window bandwidth shrinks as the sample size grows. The asymptotics of relative tests with shrinking bandwidth are unknown: even the two-sample case is challenging (Krishnamurthy et al., 2015). Other two-sample tests are not easily extendable to relative tests (Rosenbaum, 2005; Friedman and Rafsky, 1979; Hall and Tajvidi, 2002). This is because those tests rely on graph edge counting or nearest-neighbor-type statistics, and their null distributions are obtained via combinatorial arguments which are not easily extended from two to three samples. The MMD is a U-statistic, hence its asymptotic behavior is much more easily generalised to multiple dependent statistics.
There are two primary advantages of the MMD over the variational lower bound, where it is known (Kingma and Welling, 2014): first, we have a characterization of the asymptotic behavior, which allows us to determine when the difference in performance is significant; second, comparing two lower bounds produced from two different models is unreliable, as we do not know how conservative either lower bound is.
We have described a novel nonparametric statistical hypothesis test for relative similarity based on the Maximum Mean Discrepancy. The test is consistent, and the computation time is quadratic. Our proposed test statistic is theoretically justified for the task of comparing samples from arbitrary distributions as it can be shown to converge to a quantity which compares all moments of the two pairs of distributions.
We evaluate test performance on synthetic data, where the degree of similarity can be controlled. Our experimental results on model selection for deep generative networks show that Relative MMD can be a useful approach to comparing such models. There is a strong correspondence between the test results and the expected likelihood, prediction accuracy, and variational lower bounds on the models tested. Moreover, our test has the advantage over these alternatives of providing guarantees of statistical significance to its conclusions. This suggests that the relative similarity test will be useful in evaluating hypotheses about network architectures, for example that AE+GMMN models may generalize better when fewer layers are used in the generative model. Code for our method is available.²

² https://github.com/eugenium/MMD
We thank Joel Veness for helpful comments. This work is partially funded by Internal Funds KU Leuven, ERC Grant 259112, FP7MCCIG 334380, the Royal Academy of Engineering through the Newton Alumni Scheme, and DIGITEO 20130788DSOPRANO. WB is supported by a CentraleSupélec fellowship.
The variance and the covariance for a U-statistic are described in Hoeffding [1948, Eq. 5.13] and Serfling [2009, Chap. 5].
Similarly, let $v_1, \ldots, v_n$ be i.i.d. random variables, and let $h$ be a symmetric kernel of order 2 with $\theta := \mathbb{E}[h(v_1, v_2)]$. An unbiased estimator of $\theta$ is the U-statistic

(12) $U_n = \frac{1}{n(n-1)} \sum_{i \neq j} h(v_i, v_j),$

with $\zeta_1 := \mathrm{Cov}\left( h(v_1, v_2), h(v_1, v_2') \right)$ and $\zeta_2 := \mathrm{Var}\left( h(v_1, v_2) \right)$. Then the variance of a U-statistic with a kernel of order 2 is given by

(13) $\mathrm{Var}(U_n) = \frac{4(n-2)}{n(n-1)}\, \zeta_1 + \frac{2}{n(n-1)}\, \zeta_2.$

Equation (13), neglecting higher order terms, can be written as

(14) $\mathrm{Var}(U_n) = \frac{4}{n}\, \zeta_1 + \mathcal{O}\!\left(n^{-2}\right),$

where $\zeta_1 = \mathbb{E}_{v_1}\!\left[ \left( \mathbb{E}_{v_2}\, h(v_1, v_2) \right)^2 \right] - \theta^2$ for the variance term, and $\zeta_1 = \mathrm{Cov}\left( h(v_1, v_2), g(v_1, v_2') \right)$ for the covariance term between two U-statistics with kernels $h$ and $g$ sharing the variable $v_1$.
Let $x, x' \sim p$, $y, y' \sim q_1$, and $z, z' \sim q_2$, all mutually independent, and write $\mu_p := \mathbb{E}_x[\phi(x)]$ for the mean embedding of $p$ (similarly $\mu_{q_1}$ and $\mu_{q_2}$). We will also make use of the fact that $\mathbb{E}_{x'}\, k(x, x') = \langle \phi(x), \mu_p \rangle_{\mathcal{F}}$ for an appropriately chosen inner product and feature map $\phi$. With $v := (x, y)$ and $h$ the kernel of $\mathrm{MMD}_u^2[\mathcal{F}, X, Y]$ from Equation (3), we then denote

(15) $\mathbb{E}_{v'}\, h(v, v') = \langle \phi(x) - \phi(y),\; \mu_p - \mu_{q_1} \rangle_{\mathcal{F}}.$

We note many terms in the expansion of the squares below cancel out due to independence: any product of inner products involving independent variables factorizes, for example $\mathbb{E}_{x,y}\left[ \langle \phi(x), \mu_p \rangle \langle \phi(y), \mu_{q_1} \rangle \right] = \langle \mu_p, \mu_p \rangle \langle \mu_{q_1}, \mu_{q_1} \rangle$, and cancels against the corresponding term of $\left( \mathbb{E}_{v,v'}\, h(v,v') \right)^2$.

We can thus simplify to the following expression for $\zeta_1^{XY}$ in Equation (14):

(16) $\zeta_1^{XY} = \mathbb{E}_v\!\left[ \left( \mathbb{E}_{v'}\, h(v,v') \right)^2 \right] - \left( \mathbb{E}_{v,v'}\, h(v,v') \right)^2$

(17) $= \mathrm{Var}_{x,y}\left( \langle \phi(x) - \phi(y),\; \mu_p - \mu_{q_1} \rangle \right)$

(18) $= \mathrm{Var}_x\left( \langle \phi(x),\; \mu_p - \mu_{q_1} \rangle \right) + \mathrm{Var}_y\left( \langle \phi(y),\; \mu_p - \mu_{q_1} \rangle \right)$

(19) $= \mathbb{E}_x\!\left[ \left( \mathbb{E}_{x'} k(x,x') - \mathbb{E}_{y} k(x,y) \right)^2 \right] - \left( \mathbb{E}_{x,x'} k(x,x') - \mathbb{E}_{x,y} k(x,y) \right)^2 + \mathbb{E}_y\!\left[ \left( \mathbb{E}_{x} k(x,y) - \mathbb{E}_{y'} k(y,y') \right)^2 \right] - \left( \mathbb{E}_{x,y} k(x,y) - \mathbb{E}_{y,y'} k(y,y') \right)^2.$

Substituting empirical expectations over the data sample for the population expectations in Eq. (19) gives an estimate computable in $\mathcal{O}(n^2)$ from the kernel matrices:

(20) $\hat{\zeta}_1^{XY} = \frac{1}{n(n-1)^2} \mathbf{1}^\top \tilde{K}_{XX} \tilde{K}_{XX} \mathbf{1} - \frac{2}{n^2(n-1)} \mathbf{1}^\top \tilde{K}_{XX} K_{XY} \mathbf{1} + \frac{1}{n^3} \mathbf{1}^\top K_{XY}^\top K_{XY} \mathbf{1} + \frac{1}{n(n-1)^2} \mathbf{1}^\top \tilde{K}_{YY} \tilde{K}_{YY} \mathbf{1} - \frac{2}{n^2(n-1)} \mathbf{1}^\top \tilde{K}_{YY} K_{XY}^\top \mathbf{1} + \frac{1}{n^3} \mathbf{1}^\top K_{XY} K_{XY}^\top \mathbf{1} - \left( \frac{1}{n(n-1)} \mathbf{1}^\top \tilde{K}_{XX} \mathbf{1} - \frac{1}{n^2} \mathbf{1}^\top K_{XY} \mathbf{1} \right)^2 - \left( \frac{1}{n^2} \mathbf{1}^\top K_{XY} \mathbf{1} - \frac{1}{n(n-1)} \mathbf{1}^\top \tilde{K}_{YY} \mathbf{1} \right)^2.$

Derivation of the first term, for example:

(21) $\mathbb{E}_x\!\left[ \left( \mathbb{E}_{x'}\, k(x,x') \right)^2 \right] \approx \frac{1}{n} \sum_{i=1}^n \left( \frac{1}{n-1} \sum_{j \neq i} k(x_i, x_j) \right)^2 = \frac{1}{n(n-1)^2}\, \mathbf{1}^\top \tilde{K}_{XX} \tilde{K}_{XX} \mathbf{1}.$

The same cancellation argument applies to the covariance: the two estimates depend on each other only through the shared variable $x$, and all cross terms involving $y$ and $z$ factorize and cancel. We can thus simplify to the following expression for $\zeta_1^{XY,XZ}$:

(22) $\zeta_1^{XY,XZ} = \mathrm{Cov}_x\left( \langle \phi(x),\; \mu_p - \mu_{q_1} \rangle,\; \langle \phi(x),\; \mu_p - \mu_{q_2} \rangle \right)$

(23) $= \mathbb{E}_x\!\left[ \langle \phi(x), \mu_p - \mu_{q_1} \rangle \langle \phi(x), \mu_p - \mu_{q_2} \rangle \right] - \langle \mu_p, \mu_p - \mu_{q_1} \rangle \langle \mu_p, \mu_p - \mu_{q_2} \rangle,$

whose empirical estimate yields Equation (7).
In this section we propose an alternative strategy: deriving directly the variance of a U-statistic of the difference of MMDs with a joint variable. This formulation agrees with the derivation of the covariance matrix and subsequent projection, and provides extra insight.

Let $u_i := (x_i, y_i, z_i)$, $i = 1, \ldots, n$, be i.i.d. random variables, where $u_i \sim p \times q_1 \times q_2$. Then the difference of the unbiased estimators of $\mathrm{MMD}^2[\mathcal{F}, p, q_1]$ and $\mathrm{MMD}^2[\mathcal{F}, p, q_2]$ is given by

(24) $\mathrm{MMD}_u^2[\mathcal{F}, X, Y] - \mathrm{MMD}_u^2[\mathcal{F}, X, Z] = \frac{1}{n(n-1)} \sum_{i \neq j} g(u_i, u_j),$

with $g$, the kernel of the difference, of order 2, as follows (the terms $k(x_i, x_j)$ cancel in the difference):

(25) $g(u_i, u_j) = k(y_i, y_j) - k(x_i, y_j) - k(x_j, y_i) - k(z_i, z_j) + k(x_i, z_j) + k(x_j, z_i).$

Equation (24) is a U-statistic, and thus we can apply Equation (14) to obtain its variance. We first note

(26) $\mathbb{E}_{u'}\, g(u, u') = \langle \phi(x),\; \mu_{q_2} - \mu_{q_1} \rangle + \langle \phi(y),\; \mu_{q_1} - \mu_p \rangle + \langle \phi(z),\; \mu_p - \mu_{q_2} \rangle$

(27) $\mathbb{E}_{u,u'}\, g(u, u') = \mathrm{MMD}^2[\mathcal{F}, p, q_1] - \mathrm{MMD}^2[\mathcal{F}, p, q_2].$

We are now ready to derive the dominant leading term, $\zeta_1^g$, in the variance expression (14). By the mutual independence of $x$, $y$, and $z$,

(28) $\zeta_1^g = \mathrm{Var}_x\left( \langle \phi(x),\; \mu_{q_2} - \mu_{q_1} \rangle \right) + \mathrm{Var}_y\left( \langle \phi(y),\; \mu_{q_1} - \mu_p \rangle \right) + \mathrm{Var}_z\left( \langle \phi(z),\; \mu_p - \mu_{q_2} \rangle \right),$

which coincides with $\zeta_1^{XY} + \zeta_1^{XZ} - 2\, \zeta_1^{XY,XZ}$, the quantity obtained from the covariance matrix in Equation (6) after rotation and projection.
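The agreement between the two derivations can also be checked numerically. In the sketch below (Gaussian kernel and helper names are our own), the leading-order variance of the difference statistic (Eq. (28)) is estimated both from the three joint covariance terms and directly from the single difference kernel $g = h_{XY} - h_{XZ}$; the two plug-in estimates coincide exactly, since $\zeta_1$ is bilinear in its two kernel arguments:

```python
import numpy as np

def gauss(A, B, bw=1.0):
    # Pairwise Gaussian kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (300, 2))   # reference sample
Y = rng.normal(0.5, 1.0, (300, 2))   # candidate 1
Z = rng.normal(2.0, 1.0, (300, 2))   # candidate 2

Kxy, Kxz = gauss(X, Y), gauss(X, Z)
Hxy = gauss(X, X) + gauss(Y, Y) - Kxy - Kxy.T   # h-kernel of MMD_u^2[X, Y]
Hxz = gauss(X, X) + gauss(Z, Z) - Kxz - Kxz.T   # h-kernel of MMD_u^2[X, Z]
np.fill_diagonal(Hxy, 0.0)
np.fill_diagonal(Hxz, 0.0)

def zeta(Ha, Hb):
    # Plug-in estimate of the leading-order term: E[(E'h)(E'g)] - (E h)(E g).
    return (Ha.mean(1) * Hb.mean(1)).mean() - Ha.mean() * Hb.mean()

# Route 1: joint covariance terms, combined as in the rotation/projection.
route1 = zeta(Hxy, Hxy) + zeta(Hxz, Hxz) - 2.0 * zeta(Hxy, Hxz)
# Route 2: variance of the single difference U-statistic with kernel g (Eq. 25).
G = Hxy - Hxz
route2 = zeta(G, G)
assert np.isclose(route1, route2)   # bilinearity makes the two routes agree
```

The identity holds for any choice of kernel and data, since expanding $\zeta_1(g, g)$ with $g = h_{XY} - h_{XZ}$ reproduces the three terms of the first route term by term.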