
A Test of Relative Similarity For Model Selection in Generative Models

by Wacha Bounliphone, et al.
KU Leuven

Probabilistic generative models provide a powerful framework for representing data that avoids the expense of manual annotation typically needed by discriminative approaches. Model selection in this generative setting can be challenging, however, particularly when likelihoods are not easily accessible. To address this issue, we introduce a statistical test of relative similarity, which is used to determine which of two models generates samples that are significantly closer to a real-world reference dataset of interest. We use as our test statistic the difference in maximum mean discrepancies (MMDs) between the reference dataset and each model dataset, and derive a powerful, low-variance test based on the joint asymptotic distribution of the MMDs between each reference-model pair. In experiments on deep generative models, including the variational auto-encoder and generative moment matching network, the tests provide a meaningful ranking of model performance as a function of parameter and training settings.


1 Introduction

Generative models based on deep learning techniques aim to provide sophisticated and accurate models of data, without expensive manual annotation (Bengio, 2009; Kingma et al., 2014). This is especially of interest as deep networks tend to require comparatively large training samples to achieve a good result (Krizhevsky et al., 2012). Model selection within this class of techniques can be a challenge, however. First, likelihoods can be difficult to compute for some families of recently proposed models based on deep learning (Goodfellow et al., 2014; Li et al., 2015b). The current best method to evaluate such models is based on Parzen-window estimates of the log likelihood (Goodfellow et al., 2014, Section 5). Second, if we are given two models with similar likelihoods, we typically do not have a computationally inexpensive hypothesis test to determine whether one likelihood is significantly higher than the other. Permutation testing or other generic strategies are often computationally prohibitive, bearing in mind the relatively high computational requirements of deep networks (Krizhevsky et al., 2012).

In this work, we provide an alternative strategy for model selection, based on a novel, non-parametric hypothesis test of relative similarity. We treat the two trained networks being compared as generative models (Goodfellow et al., 2014; Hinton et al., 2006; Salakhutdinov and Hinton, 2009), and test whether the first candidate model generates samples significantly closer to a reference validation set. The null hypothesis is that the ordering is reversed, and the second candidate model is closer to the reference (further, both samples are assumed to remain distinct from the reference, as will be the case for any sufficiently complex modeling problem).

Our model selection criterion is based on the maximum mean discrepancy (MMD) (Gretton et al., 2006, 2012a), which represents the distance between embeddings of empirical distributions in a reproducing kernel Hilbert space (RKHS). The maximum mean discrepancy is a metric on the space of probability distributions when a characteristic kernel is used (Fukumizu et al., 2008; Sriperumbudur et al., 2010), meaning that the distribution embeddings are unique for each probability measure. Recently, the MMD has been used in training generative models adversarially (Li et al., 2015b; Dziugaite et al., 2015), where the MMD measures the distance of the generated samples to some reference target set; it has been used for statistical model criticism (Lloyd and Ghahramani, 2015); and to minimize the effect of nuisance variables on learned representations (Louizos et al., 2016).

Rather than train a single model using the MMD distance to a reference distribution, our goal in this work is to evaluate the relative performance of two models, by testing whether one generates samples significantly closer to the reference distribution than the other. This extends the applicability of the MMD to problems of model selection and evaluation. Key to this result is a novel expression for the joint asymptotic distribution of two correlated MMDs (between samples generated from each model, and samples from the reference distribution). Li et al. (2015a) have derived the joint distribution of a specific MMD estimator under the assumption that the distributions are equal. By contrast, we derive the case in which the distributions are unequal, as is expected due to irreducible model error.

We provide a detailed introduction to the MMD and its associated notation in Section 2. We derive the joint asymptotic distribution of the MMDs in Section 3: this uses similar ideas to the relative dependence test in Bounliphone et al. (2015), with additional complexity due to there being three independent samples to deal with, rather than a single joint sample. We formulate a hypothesis test of relative similarity, to determine whether the difference in MMDs is statistically significant. Our first test benchmark is on synthetic data for which the ground truth is known (Section 4), where we verify that the test performs correctly under the null and the alternative.

Finally, in Section 5, we demonstrate the performance of our test over a broad selection of model comparison problems in the deep learning setting, by evaluating relative similarity of pairs of model outputs to a validation set over a range of training regimes and settings. Our benchmark models include the variational auto-encoder (Kingma and Welling, 2014) and the generative moment matching network (Li et al., 2015b). We first demonstrate that the test performs as expected in scenarios where the same model is trained with different training set sizes, and the relative ordering of model performance is known. We then fix the training set size and change various architectural parameters of these networks, showing which models are significantly preferred with our test. We validate the rankings returned by the test using a separate set of data for which we compute alternate metrics for assessing the models, such as classification accuracy and likelihood.

2 Background Material

In comparing samples from distributions, we use the Maximum Mean Discrepancy (MMD) (Gretton et al., 2006, 2012a). We briefly review this statistic and its asymptotic behaviour for a single pair of samples.

Definition 1.

(Gretton et al., 2012a, Definition 2: Maximum Mean Discrepancy (MMD)) Let $\mathcal{F}$ be an RKHS, with the continuous feature mapping $\phi(x) \in \mathcal{F}$ from each $x \in \mathcal{X}$, such that the inner product between the features is given by the kernel function $k(x, x') := \langle \phi(x), \phi(x') \rangle_{\mathcal{F}}$. Then the squared population MMD is

$$\mathrm{MMD}^2(\mathcal{F}, p, q) = \left\| \mathbb{E}_{x \sim p}\,\phi(x) - \mathbb{E}_{y \sim q}\,\phi(y) \right\|_{\mathcal{F}}^2 \tag{1}$$

$$= \mathbb{E}_{x, x' \sim p}\, k(x, x') - 2\, \mathbb{E}_{x \sim p,\, y \sim q}\, k(x, y) + \mathbb{E}_{y, y' \sim q}\, k(y, y'). \tag{2}$$
The following theorem describes an unbiased quadratic-time estimate of the MMD, and its asymptotic distribution when $p$ and $q$ are different.

Theorem 1.

(Gretton et al., 2012a, Lemma 6 and Corollary 16: unbiased empirical estimate and asymptotic distribution of $\mathrm{MMD}^2$) Define observations $X_m := \{x_1, \dots, x_m\}$ and $Y_n := \{y_1, \dots, y_n\}$, independently and identically distributed (i.i.d.) from $p$ and $q$, respectively. An unbiased empirical estimate of $\mathrm{MMD}^2(\mathcal{F}, p, q)$ is a sum of two U-statistics and a sample average,

$$\widehat{\mathrm{MMD}}_u^2(X_m, Y_n) = \frac{1}{m(m-1)} \sum_{i=1}^{m} \sum_{j \neq i}^{m} k(x_i, x_j) + \frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j \neq i}^{n} k(y_i, y_j) - \frac{2}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} k(x_i, y_j). \tag{3}$$

Let $w_i := (x_i, y_i)$ be i.i.d. random variables, where

$$h(w_i, w_j) := k(x_i, x_j) + k(y_i, y_j) - k(x_i, y_j) - k(x_j, y_i).$$

When $m = n$, an unbiased empirical estimate of $\mathrm{MMD}^2(\mathcal{F}, p, q)$ is

$$\widehat{\mathrm{MMD}}_u^2(X_m, Y_m) = \frac{1}{m(m-1)} \sum_{i \neq j} h(w_i, w_j), \tag{4}$$

with

$$\sigma^2_{XY} := 4\left( \mathbb{E}_{w}\!\left[ \left( \mathbb{E}_{w'}\, h(w, w') \right)^2 \right] - \left( \mathbb{E}_{w, w'}\, h(w, w') \right)^2 \right). \tag{5}$$

We assume $\mathbb{E}[h^2] < \infty$. When $p \neq q$, $\widehat{\mathrm{MMD}}_u^2$ converges in distribution to a Gaussian according to

$$\sqrt{m}\left( \widehat{\mathrm{MMD}}_u^2(X_m, Y_m) - \mathrm{MMD}^2(\mathcal{F}, p, q) \right) \xrightarrow{d} \mathcal{N}\!\left(0, \sigma^2_{XY}\right),$$

uniformly at rate $1/\sqrt{m}$.
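As a concrete illustration of the estimator in Equation (3), here is a minimal NumPy sketch of the unbiased MMD² with a Gaussian kernel; the helper names (gaussian_kernel, mmd2_unbiased) are our own and not taken from the paper's released code.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    """Gaussian kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 bandwidth^2))."""
    sq_dists = (np.sum(A ** 2, axis=1)[:, None]
                + np.sum(B ** 2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2_unbiased(X, Y, bandwidth):
    """Unbiased MMD^2 of Eq. (3): two U-statistics plus a sample average."""
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    np.fill_diagonal(Kxx, 0.0)  # U-statistics exclude the i == j terms
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (m * (m - 1))
            + Kyy.sum() / (n * (n - 1))
            - 2.0 * Kxy.mean())
```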

A two-sample test may be constructed using the MMD as a test statistic; however, when $p = q$ the statistic is degenerate, and the asymptotic distribution is a weighted sum of $\chi^2$ variables (which can have infinitely many terms; Gretton et al., 2012a).

By contrast, our problem setting is to determine with high significance whether a target distribution $P_x$ is closer to one of two candidate distributions, $P_y$ or $P_z$, based on two empirical estimates of the MMD and their variances. This requires us to characterize the variance of $\widehat{\mathrm{MMD}}_u^2$ for each pair of samples, as well as the covariance of the two dependent estimates $\widehat{\mathrm{MMD}}_u^2(X_m, Y_n)$ and $\widehat{\mathrm{MMD}}_u^2(X_m, Z_r)$ (the dependence arises from the shared sample $X_m$). Fortunately, degeneracy does not arise if we assume $P_y$ and $P_z$ are each distinct from $P_x$.

In the next section, we obtain the joint asymptotic distribution of two dependent MMD statistics. We demonstrate how this joint distribution can be empirically estimated, and use the resulting parametric form to construct a computationally efficient and powerful hypothesis test for relative similarity.

3 Joint asymptotic distribution of two correlated MMDs and a resulting test statistic

In this section, we derive our statistical test for relative similarity as measured by the MMD. In order to maximize the statistical efficiency of the test, we reuse the samples from the reference distribution, denoted $X_m \sim P_x$, to compute the MMD estimates with the two candidate distributions $P_y$ and $P_z$. We consider the two MMD estimates $\widehat{\mathrm{MMD}}_u^2(X_m, Y_n)$ and $\widehat{\mathrm{MMD}}_u^2(X_m, Z_r)$; as the data sample $X_m$ is identical between them, these estimates are correlated. We therefore first derive the joint asymptotic distribution of these two statistics, and use this to construct a statistical test.

Theorem 2.

We assume that $P_x \neq P_y$, $P_x \neq P_z$, $\mathbb{E}[h_{XY}^2] < \infty$, and $\mathbb{E}[h_{XZ}^2] < \infty$, where $h_{XY}$ and $h_{XZ}$ are the U-statistic kernels of the two MMDs (cf. Appendix A); then

$$\sqrt{m}\left( \begin{pmatrix} \widehat{\mathrm{MMD}}_u^2(X_m, Y_n) \\ \widehat{\mathrm{MMD}}_u^2(X_m, Z_r) \end{pmatrix} - \begin{pmatrix} \mathrm{MMD}^2(\mathcal{F}, P_x, P_y) \\ \mathrm{MMD}^2(\mathcal{F}, P_x, P_z) \end{pmatrix} \right) \xrightarrow{d} \mathcal{N}\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma^2_{XY} & \sigma_{XYXZ} \\ \sigma_{XYXZ} & \sigma^2_{XZ} \end{pmatrix} \right). \tag{6}$$
We substitute the kernel MMD definition from Equation (2), expand the terms in the expectation, and determine their empirical estimates in order to compute the variances in practice. The proof and additional details of the following derivations are given in Appendix A.

An empirical estimate of the covariance $\sigma_{XYXZ}$ appearing in Equation (6), neglecting higher order terms, can be computed in quadratic time:

where $\mathbf{1}$ is a vector of 1s of appropriate size, while $\tilde{K}_{XY}$ and $\tilde{K}_{XZ}$ refer to the kernel matrices, with the tilde indicating that the diagonal entries have been set to zero (cf. Appendix A). Similarly, the variance in Equation (5) is estimated in the same fashion as in Equation (7).
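Since the exact empirical expression is deferred to the appendix, the following sketch gives a hedged, leading-order stand-in (our own code, not a transcription of the paper's Equation (7)): it assumes equal, paired sample sizes m = n = r, builds the U-statistic kernel matrices h_XY and h_XZ of Appendix A, and applies the approximation of Equation (14).

```python
import numpy as np

def h_matrix(Kaa, Kbb, Kab):
    """H[i, j] = k(a_i, a_j) + k(b_i, b_j) - k(a_i, b_j) - k(a_j, b_i)."""
    H = Kaa + Kbb - Kab - Kab.T
    np.fill_diagonal(H, 0.0)  # U-statistic: only i != j pairs contribute
    return H

def mmd_variances(H1, H2):
    """Leading-order estimates of sigma^2_XY, sigma^2_XZ and sigma_XYXZ."""
    m = H1.shape[0]
    u1 = H1.sum() / (m * (m - 1))   # MMD_u^2(X, Y), paired U-statistic form
    u2 = H2.sum() / (m * (m - 1))   # MMD_u^2(X, Z)
    r1 = H1.sum(axis=1) / (m - 1)   # estimates of the conditional means of h_XY
    r2 = H2.sum(axis=1) / (m - 1)   # estimates of the conditional means of h_XZ
    var_xy = 4.0 * (np.mean(r1 ** 2) - u1 ** 2) / m
    var_xz = 4.0 * (np.mean(r2 ** 2) - u2 ** 2) / m
    cov = 4.0 * (np.mean(r1 * r2) - u1 * u2) / m
    return var_xy, var_xz, cov
```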

Based on the empirical distribution from Equation (6), we now describe a statistical test to solve the following problem:

Problem 1 (Relative similarity test).

Let $\mathcal{F}$, $P_y$ and $P_z$ be defined as above, and let $x$ be an independent random variable with distribution $P_x$. Given observations $X_m := \{x_1, \dots, x_m\}$, $Y_n := \{y_1, \dots, y_n\}$ and $Z_r := \{z_1, \dots, z_r\}$, i.i.d. from $P_x$, $P_y$ and $P_z$ respectively, such that $P_x \neq P_y$ and $P_x \neq P_z$, we test the hypothesis that $P_y$ is closer to $P_x$ than $P_z$ is, i.e. we test the null hypothesis $\mathcal{H}_0\colon \mathrm{MMD}^2(\mathcal{F}, P_x, P_y) \ge \mathrm{MMD}^2(\mathcal{F}, P_x, P_z)$ versus the alternative hypothesis $\mathcal{H}_1\colon \mathrm{MMD}^2(\mathcal{F}, P_x, P_y) < \mathrm{MMD}^2(\mathcal{F}, P_x, P_z)$, at a given significance level $\alpha$.

The test statistic is used to compute a $p$-value under the standard normal distribution. The statistic is obtained by rotating the joint distribution (cf. Eq. (6)) by $\pi/4$ about the origin, and integrating the resulting projection on the first axis, in a manner similar to Bounliphone et al. (2015). Denote the asymptotically normal distribution of $\left(\widehat{\mathrm{MMD}}_u^2(X_m, Y_n),\, \widehat{\mathrm{MMD}}_u^2(X_m, Z_r)\right)^\top$ as $\mathcal{N}(\mu, \Sigma)$. The resulting distribution from rotating by $\pi/4$ and projecting onto the primary axis is $\mathcal{N}\left(\mu^*, \sigma^{*2}\right)$, where

$$\mu^* = [R\mu]_1, \qquad \sigma^{*2} = [R \Sigma R^\top]_{11},$$

with $R$ the rotation by $\pi/4$. Then, the $p$-value for testing $\mathcal{H}_0$ versus $\mathcal{H}_1$ is

$$p \le \Phi\!\left( \frac{\widehat{\mathrm{MMD}}_u^2(X_m, Y_n) - \widehat{\mathrm{MMD}}_u^2(X_m, Z_r)}{\sqrt{\sigma^2_{XY} + \sigma^2_{XZ} - 2\,\sigma_{XYXZ}}} \right), \tag{8}$$

where $\Phi$ is the CDF of a standard normal distribution, and $\mathcal{H}_0$ is rejected when $p$ falls below the chosen significance level. We have made code for performing the test available.¹

¹ Code and examples can be found at
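Assembling the pieces, the following is a minimal end-to-end sketch of the test, reusing gaussian_kernel, h_matrix and mmd_variances from the sketches above, and again under our simplifying assumption that m = n = r; the function name relative_mmd_test is ours.

```python
import numpy as np
from scipy.stats import norm

def relative_mmd_test(X, Y, Z, bandwidth):
    """p-value for H0: MMD^2(P_x, P_y) >= MMD^2(P_x, P_z).

    A small p-value rejects H0, i.e. favors P_y as the closer model.
    """
    m = len(X)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kzz = gaussian_kernel(Z, Z, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    Kxz = gaussian_kernel(X, Z, bandwidth)
    H1 = h_matrix(Kxx, Kyy, Kxy)  # U-statistic kernel of MMD_u^2(X, Y)
    H2 = h_matrix(Kxx, Kzz, Kxz)  # U-statistic kernel of MMD_u^2(X, Z)
    stat = (H1.sum() - H2.sum()) / (m * (m - 1))
    var_xy, var_xz, cov = mmd_variances(H1, H2)
    sigma = np.sqrt(max(var_xy + var_xz - 2.0 * cov, 1e-12))
    return norm.cdf(stat / sigma)  # cf. Eq. (8); reject H0 when below alpha
```

By symmetry, swapping Y and Z replaces p with 1 - p, so a p-value close to 1 indicates that the second model is significantly closer to the reference.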

4 Experimental validation of the relative MMD test

We verify the validity of the hypothesis test described above using a synthetic data set in which we can directly control the relative similarity between distributions. We constructed three Gaussian distributions, as illustrated in Figure 1. These Gaussian distributions are specified with different means, so that we can control the degree of relative similarity between them. The question is whether the similarity between $P_x$ and $P_y$ is greater than the similarity between $P_x$ and $P_z$. In these experiments, we used a Gaussian kernel with bandwidth selected as the median pairwise distance between data points; we fixed $\mu_x$ and $\mu_z$, and varied $\mu_y$ between them, for 41 regularly spaced values (avoiding the degenerate cases $P_y = P_x$ or $P_y = P_z$). Figure 3 shows the $p$-values of the relative similarity test as $\mu_y$ varies. When $\mu_y$ varies around the point at which $P_y$ and $P_z$ are equally distant from $P_x$, the $p$-values quickly transition from one extreme to the other, indicating strong discrimination of the test. In Figure 2, we compare the power of our test to the power of a naive test in which the reference sample is split in two, so that the MMDs have no covariance: clearly, the latter simple approach does worse than ours (a similar comparison in testing relative dependence returned the same advantage for a test based on the joint distribution; see Bounliphone et al. (2015, Section 3)). Figure 4 shows an empirical scatter plot of the pairs of MMD statistics, along with an iso-curve of the estimated distribution, demonstrating that the parametric Gaussian distribution is well calibrated to the empirical values. Furthermore, we validate our derived formulas using simulations in Appendix B, where we show the $p$-values have the correct distribution under the null.

Figure 1: Illustration of the synthetic dataset, where $P_x$, $P_y$ and $P_z$ are respectively Gaussian distributed with means $\mu_x$, $\mu_y$, and $\mu_z$, and common variance $\sigma^2$.


Figure 2: Comparison of the power of the proposed method to that of an independent test analogous to Bounliphone et al. (2015, Section 3), as a function of the sample size $m$.
Figure 3: We fixed $\mu_x$ and $\mu_z$, and varied $\mu_y$ over 41 regularly spaced values, plotted versus the $p$-values of 100 repeated tests.

Figure 4: The empirical scatter plot of the joint MMD statistics for 200 repeated tests, along with the iso-curve of the analytical Gaussian distribution estimated by Equation (6). The analytical distribution closely matches the empirical scatter plot, verifying the correctness of the variances.
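As a hypothetical driver in the spirit of this benchmark (the means, sample size, and seed below are our own illustrative choices, not the paper's exact settings), one can sweep $\mu_y$ between $\mu_x$ and $\mu_z$ and watch the p-values of relative_mmd_test transition:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 500
X = rng.normal(0.0, 1.0, size=(m, 1))  # reference sample from P_x
Z = rng.normal(1.0, 1.0, size=(m, 1))  # second candidate P_z

for mu_y in np.linspace(0.1, 0.9, 9):  # first candidate P_y moves toward P_z
    Y = rng.normal(mu_y, 1.0, size=(m, 1))
    pooled = np.vstack([X, Y, Z])
    dists = np.sqrt(((pooled[:, None, :] - pooled[None, :, :]) ** 2).sum(-1))
    bandwidth = np.median(dists[dists > 0])  # median pairwise distance
    p = relative_mmd_test(X, Y, Z, bandwidth)
    print(f"mu_y = {mu_y:.1f}: p = {p:.3f}")  # low p while P_y is the closer model
```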

5 Model Selection for Deep Unsupervised Neural Networks

An important potential application of the Relative MMD can be found in recent work on unsupervised learning with deep neural networks (Kingma and Welling, 2014; Bengio et al., 2014; Larochelle and Murray, 2011; Salakhutdinov and Hinton, 2009; Li et al., 2015b; Goodfellow et al., 2014). As noted by several authors, the evaluation of generative models is a challenging open problem (Li et al., 2015b; Goodfellow et al., 2014), and the distributions of samples from these models are very complex and difficult to evaluate. The Relative MMD can be used to compare different model settings, or even model families, in a statistically valid framework. To compare two models using our test, we generate samples from both, and compare these to a set of real target data samples that were not used to train either model.

In the experiments in the sequel, we focus on the recently introduced variational auto-encoder (VAE) (Kingma and Welling, 2014) and the generative moment matching network (GMMN) (Li et al., 2015b). The former jointly trains an encoder and a decoder network by optimizing a regularized variational lower bound (Kingma and Welling, 2014). The latter class of models is purely generative and minimizes an MMD-based objective; it works best when coupled with a separate auto-encoder that reduces the dimensionality of the data. An architectural schematic for both classes of models is provided in Figure 5. Both models can be trained using standard backpropagation (Rumelhart et al., 1988). By sampling the latent variables from their prior, we can sample from the data distribution of these models directly, without MCMC procedures (Hinton et al., 2006; Salakhutdinov and Hinton, 2009).

We use the MNIST and FreyFace datasets for our analysis (LeCun et al., 1998; Kingma and Welling, 2014; Goodfellow et al., 2014). We first demonstrate the effectiveness of our test in a setting where we have a theoretical basis for expecting the superiority of one unsupervised model over another: specifically, a setup where more training samples were used to create one model than the other. We find that the Relative MMD framework agrees with the expected result, namely that models trained with more data generalize better. We then demonstrate how the Relative MMD can be used in evaluating network architecture choices, and show that our test strongly agrees with other established metrics, while, in contrast to them, providing significance results using just the validation data, where other methods may require an additional test set.

Several practical matters must be considered when applying the Relative MMD test. The selection of kernel can affect the quality of results; in particular, a more suitable kernel can give faster convergence. In this work we extend the logic of the median heuristic (Gretton et al., 2012b) for bandwidth selection: we compute the median pairwise distance between samples from $X$ and $Y$, and average it with the median pairwise distance between samples from $X$ and $Z$, which helps to maximize the difference between the two MMD statistics. Although the derivations for the variance of our statistic hold in all cases, the estimates rely on asymptotic arguments and thus require a sufficiently large sample size. Selecting the kernel bandwidth in an appropriate range can therefore substantially increase the power of the test at a fixed sample size. While we observed the median heuristic to work well in our experiments, there are cases where alternative choices of kernel can provide greater power: for instance, the kernel can be chosen to maximize the expected test power on a held-out dataset (Gretton et al., 2012b).
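A short sketch of this averaged median heuristic follows; the helper name is ours, and scipy's cdist computes the pairwise Euclidean distances.

```python
import numpy as np
from scipy.spatial.distance import cdist

def relative_median_bandwidth(X, Y, Z):
    """Average the median pairwise distances between (X, Y) and (X, Z)."""
    med_xy = np.median(cdist(X, Y))
    med_xz = np.median(cdist(X, Z))
    return 0.5 * (med_xy + med_xz)
```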

Figure 5: (a) Variational auto-encoder reference model. We have 400 hidden nodes (in both encoder and decoder) and 20 latent variables in the reference model for our experiments. (b) Auto-encoder + GMMN reference model. The auto-encoder (indicated in orange) is trained separately, and has 1024 and 32 hidden nodes in its decode and encode hidden layers. The GMMN has 10 variables generated by the prior, and hidden layers with 64, 256, 256, and 1024 nodes respectively. In both networks, red arrows indicate the data flow during sampling.

5.1 Variational Auto-Encoder Sample Size and Architecture Experiments

We use the architecture from Kingma and Welling (2014), with a hidden layer at both the encoder and decoder and a latent variable layer, as shown in Figure 5a. We use sigmoidal activations for the hidden layers of the encoder and decoder. For the FreyFace data, we use a Gaussian prior on the latent space and data space. For MNIST, we use a Bernoulli prior for the data space. We fix the training set size of the second auto-encoder to 300 images for the FreyFace data and 1500 images for the MNIST data, and vary the number of training samples for the first auto-encoder. We then generate samples from both auto-encoders and compare them using Relative MMD to a held-out set of data: 1500 FreyFace samples as the target, and 15000 images for MNIST. Since a single sample of the data might lead to better generalization performance by chance, we repeat this experiment multiple times and record whether the relative similarity test indicates that a network is preferred, or fails to reject the null hypothesis. The results, shown in Figure 6, demonstrate that the test closely follows the expected model preferences. Additionally, for MNIST we use a separate set of supervised training and test data: we encode this data using both auto-encoders and use logistic regression to obtain a classification accuracy. The indicated accuracies closely match the results of the relative similarity test, further validating the test.

Figure 6: We show the effect of (a) varying the training set size of one auto-encoder trained on MNIST data. (c) As a secondary validation we compute the classification accuracy of MNIST on a separate train/test set encoded using encoder 1 and encoder 2. (b) We then show the effect of varying the training set size of one auto-encoder using the FreyFace data. We note that due to the size of the FreyFace dataset, we limit the range of ratios used. From this figure we see that the results of the relative similarity test match our expectation: more data produces models which more closely match the true distribution.

We next consider model selection between networks using different architectures. We train two encoders: one a fixed reference model (400 hidden units and 20 latent variables), and the other varying as specified in Table 1. 25000 images from the MNIST data set were used for training. We use another 20000 images as the target data in Relative MMD. Finally, we use a set of 10000 training and 10000 test images for a supervised task experiment: using the labels in the MNIST data, we perform training and classification with an $\ell_2$-regularized logistic regression on the encoded features. In addition, we use the supervised task test data to evaluate the variational lower bound of the data under the two models (Kingma and Welling, 2014). We show the result of this experiment in Table 1. For each comparison we take a different subset of the training data, which helps demonstrate the variation in lower bound and accuracy when re-training the reference architecture. We use a fixed significance level, and indicate when the test favors one auto-encoder over the other or fails to reject the null hypothesis. We find that the Relative MMD evaluation of the models closely matches performance on the supervised task and the test set variational lower bound.

Hidden (VAE 1) | Latent (VAE 1) | RelativeMMD Result | Accuracy VAE 1 (%) | Accuracy VAE 2 (%) | Lower Bound VAE 1 | Lower Bound VAE 2
200 | 5 | Favor VAE 2 | | | -126 | -97
200 | 20 | Favor VAE 2 | | | -115 | -105
400 | 50 | Favor VAE 1 | | | -99.6 | -123.44
800 | 20 | Favor VAE 1 | | | -111 | -115
800 | 50 | Favor VAE 1 | | | -101 | -103

Table 1: We compare several variational auto-encoder (VAE) architectural choices for the number of hidden units in both decoder and encoder, and the number of latent variables. The reference encoder, denoted VAE 2, has 400 hidden units and 20 latent variables. We denote the competing architectural models as VAE 1, varying the number of hidden nodes in both the decoder and encoder and the number of latent variables. Our test closely follows the performance difference of the auto-encoders on a supervised task (MNIST digit classification), as well as the variational lower bound on a withheld set of data. The data used for evaluating the accuracy and lower bound is separate from that used to train the auto-encoders and for the hypothesis test.

5.2 Generative Moment Matching Networks Architecture Experiments

We demonstrate our hypothesis test on a different class of deep generative models, the Generative Moment Matching Network (GMMN) (Li et al., 2015b). This recently introduced model has shown competitive performance in terms of test set likelihood on the MNIST data, and its training is itself based on the MMD criterion. Li et al. (2015b) propose to use the model along with an auto-encoder, which is the setup we employ in this work: a standard auto-encoder is first trained on the data to obtain a low-dimensional representation, then a GMMN network is trained on the latent representations (Figure 5).

We use the relative similarity test to evaluate various architectural choices in this new class of models. We start from the baseline model specified in Li et al. (2015b) and associated software. The details of the reference model are specified in Figure 5.

We vary the number of auto-encoder hidden layers (1 to 4), generative model layers (1, 4, or 5), the number of network nodes (all or 50% of the reference model), and the use of dropout on the auto-encoder. We use the same training set of 55000 images, validation set of 5000, and test set of 10000 as in (Li et al., 2015b; Goodfellow et al., 2014). In total we train 48 models. We use these to compare 4 simplified binary network architecture choices using the Relative MMD: use of dropout on the auto-encoder, few (1) or more (4 or 5) GMMN layers, few (1 or 2) or more (3 or 4) auto-encoder layers, and the number of network nodes. We use our test to compare these model settings, using the validation set as the target in the relative similarity test and samples from the models as the two sources. To validate our results, we compare them to likelihoods computed on the test set. The results are shown in Table 2. We see that the likelihood results computed on a separate test set follow the conclusions obtained from MMD on the validation set. In particular, we find that using fewer hidden layers for the GMMN and more hidden nodes generally produces better models.

Experimental Condition (A/B) | RelativeMMD Prefers A | Inconclusive | RelativeMMD Prefers B | Avg Likelihood A | Avg Likelihood B
Dropout/No Dropout | 199 | 17 | 360 | |
More/Fewer GMMN Layers | 105 | 14 | 393 | |
More/Fewer Nodes | 450 | 13 | 113 | |
More/Fewer AE Layers | 231 | 21 | 324 | |

Table 2: For each experimental condition (e.g. dropout or no dropout) we show the number of times the Relative MMD prefers models in group A or group B, and the number of inconclusive tests. We use the validation set as the target data for Relative MMD. An average likelihood on the MNIST test set for each group is shown with error bars. We can see that the MMD choices are in agreement with the likelihood evaluations. In particular, we identify that models with fewer GMMN layers and models with more nodes produce more favourable samples, which is confirmed by the likelihood results.

5.3 Discussion

In these experiments we have seen that the Relative MMD test can be used to compare deep generative models, yielding judgements that align with other metrics. Comparisons to other metrics are important for verifying that our test is sensible, but they can obscure the fact that the MMD is a valid evaluation technique on its own. When evaluating models that can only be sampled from, so that likelihood computation is not possible, the MMD is an appropriate and tractable metric to consider, in addition to Parzen-window log likelihoods and the visual appearance of samples. In several ways it is potentially more appropriate than Parzen windows, as it directly considers the discrepancy between the test data samples and the model samples, while allowing for significance results. In such a situation, comparing the performance of several models using the MMD against a single set of test samples, the Relative MMD test can provide an automatic significance value without expensive cross-validation procedures.

Gaussian kernels are closely related to Parzen-window estimates, thus computing an MMD with a Gaussian kernel can be considered related to comparing Parzen-window log-likelihoods. The MMD gives several advantages, however. First, the asymptotics of the MMD are quite different from those of Parzen windows, since the Parzen-window bandwidth shrinks as the sample size grows; the asymptotics of relative tests with shrinking bandwidth are unknown, and even the two-sample case is challenging (Krishnamurthy et al., 2015). Second, other two-sample tests are not easily extendable to relative tests (Rosenbaum, 2005; Friedman and Rafsky, 1979; Hall and Tajvidi, 2002). This is because those tests rely on graph edge counting or nearest-neighbor-type statistics, and their null distributions are obtained via combinatorial arguments which are not easily extended from two to three samples. The MMD is a U-statistic, hence its asymptotic behavior is much more easily generalised to multiple dependent statistics.

There are two primary advantages of the MMD over the variational lower bound, in cases where the latter is known (Kingma and Welling, 2014): first, we have a characterization of the asymptotic behavior of the MMD, which allows us to determine when a difference in performance is significant; second, comparing two lower bounds produced from two different models is unreliable, as we do not know how conservative either lower bound is.

6 Conclusion

We have described a novel non-parametric statistical hypothesis test for relative similarity based on the Maximum Mean Discrepancy. The test is consistent, and the computation time is quadratic. Our proposed test statistic is theoretically justified for the task of comparing samples from arbitrary distributions as it can be shown to converge to a quantity which compares all moments of the two pairs of distributions.

We evaluate test performance on synthetic data, where the degree of similarity can be controlled. Our experimental results on model selection for deep generative networks show that Relative MMD can be a useful approach to comparing such models. There is a strong correspondence between the test results and the expected likelihood, prediction accuracy, and variational lower bounds on the models tested. Moreover, our test has the advantage over these alternatives of providing guarantees of statistical significance for its conclusions. This suggests that the relative similarity test will be useful in evaluating hypotheses about network architectures, for example that AE-GMMN models may generalize better when fewer layers are used in the generative model. Code for our method is available.


We thank Joel Veness for helpful comments. This work is partially funded by Internal Funds KU Leuven, ERC Grant 259112, FP7-MC-CIG 334380, the Royal Academy of Engineering through the Newton Alumni Scheme, and DIGITEO 2013-0788D-SOPRANO. WB is supported by a CentraleSupélec fellowship.


  • Bengio (2009) Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
  • Bengio et al. (2014) Y. Bengio, E. Thibodeau-Laufer, G. Alain, and J. Yosinski. Deep generative stochastic networks trainable by backprop. In Proceedings of the 31st International Conference on Machine Learning, 2014.
  • Bounliphone et al. (2015) W. Bounliphone, A. Gretton, A. Tenenhaus, and M. B. Blaschko. A low variance consistent test of relative dependency. In F. Bach and D. Blei, editors, Proceedings of The 32nd International Conference on Machine Learning, volume 37 of JMLR Workshop and Conference Proceedings, pages 20–29, 2015.
  • Dziugaite et al. (2015) G. K. Dziugaite, D. M. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In Conference on Uncertainty in Artificial Intelligence, 2015.
  • Friedman and Rafsky (1979) J. H. Friedman and L. C. Rafsky. Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests. The Annals of Statistics, pages 697–717, 1979.
  • Fukumizu et al. (2008) K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In Advances in Neural Information Processing Systems, pages 489–496, Cambridge, MA, 2008. MIT Press.
  • Goodfellow et al. (2014) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
  • Gretton et al. (2006) A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample-problem. In Advances in neural information processing systems, pages 513–520, 2006.
  • Gretton et al. (2012a) A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773, 2012a.
  • Gretton et al. (2012b) A. Gretton, D. Sejdinovic, H. Strathmann, S. Balakrishnan, M. Pontil, K. Fukumizu, and B. K. Sriperumbudur. Optimal kernel choice for large-scale two-sample tests. In F. Pereira, C. Burges, L. Bottou, and K. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1205–1213. 2012b.
  • Hall and Tajvidi (2002) P. Hall and N. Tajvidi. Permutation tests for equality of distributions in high-dimensional settings. Biometrika, 89(2):359–374, 2002.
  • Hinton et al. (2006) G. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554, 2006.
  • Hoeffding (1948) W. Hoeffding. A class of statistics with asymptotically normal distribution. The annals of mathematical statistics, pages 293–325, 1948.
  • Kingma and Welling (2014) D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.
  • Kingma et al. (2014) D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
  • Krishnamurthy et al. (2015) A. Krishnamurthy, K. Kandasamy, B. Póczos, and L. A. Wasserman. On estimating divergence. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, 2015.
  • Krizhevsky et al. (2012) A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
  • Larochelle and Murray (2011) H. Larochelle and I. Murray. The neural autoregressive distribution estimator. Journal of Machine Learning Research, 15:29–37, 2011.
  • LeCun et al. (1998) Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • Li et al. (2015a) S. Li, Y. Xie, H. Dai, and L. Song. M-statistic for kernel change-point detection. In Advances in Neural Information Processing Systems, pages 3348–3356, 2015a.
  • Li et al. (2015b) Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In International Conference on Machine Learning, pages 1718–1727, 2015b.
  • Lloyd and Ghahramani (2015) J. R. Lloyd and Z. Ghahramani. Statistical model criticism using kernel two sample tests. In Advances in Neural Information Processing Systems, 2015.
  • Louizos et al. (2016) C. Louizos, K. Swersky, Y. Li, M. Welling, and R. Zemel. The variational fair auto encoder. In International Conference on Learning Representations, 2016.
  • Rosenbaum (2005) P. R. Rosenbaum. An exact distribution-free test comparing two multivariate distributions based on adjacency. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(4):515–530, 2005.
  • Rumelhart et al. (1988) D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. In J. A. Anderson and E. Rosenfeld, editors, Neurocomputing: Foundations of Research, pages 696–699. MIT Press, Cambridge, MA, USA, 1988.
  • Salakhutdinov and Hinton (2009) R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pages 448–455, 2009.
  • Serfling (2009) R. J. Serfling. Approximation theorems of mathematical statistics, volume 162. John Wiley & Sons, 2009.
  • Sriperumbudur et al. (2010) B. K. Sriperumbudur, A. Gretton, K. Fukumizu, G. R. Lanckriet, and B. Schölkopf. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11:1517–1561, 2010.

Appendix A Detailed Derivations of the Test Variance and Covariance

The variance and the covariance for a U-statistic are described in Hoeffding [1948, Eq. 5.13] and Serfling [2009, Chap. 5].

Let $u_i := (x_i, y_i)$, $i = 1, \dots, m$, be i.i.d. random variables, where $x_i \sim P_x$ and $y_i \sim P_y$. An unbiased estimator of $\mathrm{MMD}^2(\mathcal{F}, P_x, P_y)$ is

$$\widehat{\mathrm{MMD}}_u^2(X_m, Y_m) = \frac{1}{m(m-1)} \sum_{i \neq j} h_{XY}(u_i, u_j),$$

with $h_{XY}(u_i, u_j) := k(x_i, x_j) + k(y_i, y_j) - k(x_i, y_j) - k(x_j, y_i)$.

Similarly, let $v_i := (x_i, z_i)$, $i = 1, \dots, m$, be i.i.d. random variables, where $x_i \sim P_x$ and $z_i \sim P_z$. An unbiased estimator of $\mathrm{MMD}^2(\mathcal{F}, P_x, P_z)$ is

$$\widehat{\mathrm{MMD}}_u^2(X_m, Z_m) = \frac{1}{m(m-1)} \sum_{i \neq j} h_{XZ}(v_i, v_j),$$

with $h_{XZ}(v_i, v_j) := k(x_i, x_j) + k(z_i, z_j) - k(x_i, z_j) - k(x_j, z_i)$.
Then the variance/covariance for a U-statistic with a kernel of order 2 is given by

$$\mathrm{Var}(U_m) = \frac{4(m-2)}{m(m-1)}\, \zeta_1 + \frac{2}{m(m-1)}\, \zeta_2. \tag{13}$$

Neglecting higher-order terms, Equation (13) can be written as

$$\mathrm{Var}(U_m) \approx \frac{4}{m}\, \zeta_1, \tag{14}$$

where $\zeta_1 = \mathrm{Cov}\left(h_{XY}(u_1, u_2),\, h_{XY}(u_1, u_3)\right)$ for the variance term, and $\zeta_1 = \mathrm{Cov}\left(h_{XY}(u_1, u_2),\, h_{XZ}(v_1, v_3)\right)$ for the covariance term $\sigma_{XYXZ}$.


for all $i \neq j$, and $0$ for $i = j$. The same convention holds for $\tilde{K}_{YY}$ and $\tilde{K}_{ZZ}$. We will also make use of the fact that $\mathbb{E}_{x, x'}[k(x, x')] = \langle \mu_x, \mu_x \rangle$ for an appropriately chosen inner product, and mean embedding function $\mu$. We then denote


A.1 Variance of MMD

We note that many terms in the expansion of the squares above cancel out due to independence of the samples.

We can thus simplify to the following expression for $\sigma^2_{XY}$:


Substituting empirical expectations over the data sample for the population expectations in Eq. (19) gives


As an example, the first term is derived as follows:


A.2 Covariance of MMD

We note that many terms in the expansion of the squares above cancel out due to independence of the samples.

We can thus simplify to the following expression for $\sigma_{XYXZ}$:


A.3 Derivation of the variance of the difference of two MMD statistics

In this section we present an alternate strategy: directly deriving the variance of a U-statistic for the difference of the MMDs, defined over a joint variable. This formulation agrees with the derivation via the covariance matrix and subsequent projection, and provides additional insight.

Let $t_i := (x_i, y_i, z_i)$, $i = 1, \dots, m$, be i.i.d. random variables, with $x_i \sim P_x$, $y_i \sim P_y$ and $z_i \sim P_z$. Then the difference of the unbiased estimators of $\mathrm{MMD}^2(\mathcal{F}, P_x, P_y)$ and $\mathrm{MMD}^2(\mathcal{F}, P_x, P_z)$ is given by

$$\widehat{\mathrm{MMD}}_u^2(X_m, Y_m) - \widehat{\mathrm{MMD}}_u^2(X_m, Z_m) = \frac{1}{m(m-1)} \sum_{i \neq j} h(t_i, t_j), \tag{24}$$

with $h$, the kernel of order 2, given as follows (the $k(x_i, x_j)$ terms of $h_{XY}$ and $h_{XZ}$ cancel in the difference):

$$h(t_i, t_j) = k(y_i, y_j) - k(x_i, y_j) - k(x_j, y_i) - k(z_i, z_j) + k(x_i, z_j) + k(x_j, z_i).$$
Equation (24) is a U-statistic, and thus we can apply Equation (14) to obtain its variance. We first note


We are now ready to derive the dominant leading term, $\zeta_1$, in the variance expression (14).
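As a numerical sanity check on this alternate strategy, the sketch below (same assumptions and naming conventions as the earlier sketches, i.e. our own code rather than the paper's) builds the difference kernel of Equation (24) directly and applies the dominant term of Equation (14); up to the neglected higher-order terms, the result agrees with var_xy + var_xz - 2 cov from the mmd_variances sketch of Section 3.

```python
import numpy as np

def difference_variance(Kyy, Kzz, Kxy, Kxz):
    """Leading-order variance of MMD_u^2(X, Y) - MMD_u^2(X, Z) via Eq. (24)."""
    m = Kyy.shape[0]
    # The k(x_i, x_j) terms of h_XY and h_XZ cancel in the difference kernel.
    H = Kyy - Kxy - Kxy.T - Kzz + Kxz + Kxz.T
    np.fill_diagonal(H, 0.0)           # U-statistic: i != j pairs only
    u = H.sum() / (m * (m - 1))        # the difference U-statistic itself
    r = H.sum(axis=1) / (m - 1)        # conditional means of h given t_i
    zeta1 = np.mean(r ** 2) - u ** 2   # dominant term of Eq. (14)
    return 4.0 * zeta1 / m
```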