Detecting when dataset shift occurs is a fundamental problem in learning, as one needs to re-train the system to adapt to the new data before it makes wrong predictions. Strictly speaking, if $\mathcal{D}_{tr}$ is the training data and $\mathcal{D}_{te}$ is the test data, a dataset shift occurs when the hypothesis that $\mathcal{D}_{tr}$ and $\mathcal{D}_{te}$ are sampled from the same distribution no longer holds, that is $p_{tr}(x, y) \neq p_{te}(x, y)$. The aim of this work is to provide a statistical test to determine whether such a shift has occurred, given a set of samples from the training and test sets.
To cope with the complexity of the joint distribution, a large body of literature has emerged in recent years approaching easier versions of the problem, where the distributions are assumed to differ only by one factor. For example, a covariate shift occurs when, in the decomposition $p(x, y) = p(y \mid x)\, p(x)$, we have $p_{tr}(y \mid x) = p_{te}(y \mid x)$ but $p_{tr}(x) \neq p_{te}(x)$. Prior distribution shift, conditional shift and others can be defined in a similar way. For a thorough reference, the reader may want to consider Quiñonero-Candela et al. or Moreno-Torres et al.
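As a concrete illustration of the covariate-shift setting, the following toy sketch (ours, not from the paper; the sinusoidal conditional is an arbitrary choice) draws training and test sets that share the conditional $p(y \mid x)$ while their marginals over $x$ differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, x_mean):
    """Draw (x, y) with a fixed conditional p(y | x) = N(sin(x), 0.1^2)."""
    x = rng.normal(loc=x_mean, scale=1.0, size=n)   # p(x) depends on x_mean
    y = np.sin(x) + rng.normal(scale=0.1, size=n)   # p(y | x) is shared
    return x, y

# Training and test sets share p(y | x) but differ in p(x): a covariate shift.
x_tr, y_tr = sample(500, x_mean=0.0)
x_te, y_te = sample(500, x_mean=2.0)
print(x_tr.mean(), x_te.mean())
```

A model fit on the training set still sees the correct conditional at test time; only the region of the input space it is queried on has moved.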
As often happens, such assumptions are too strong to hold in practice and require an expertise about the data distribution at hand that cannot be taken for granted. A recent work by Long et al. tackled the same question, but without making restrictive hypotheses on what changes between the training and test distributions. They developed the Joint Distribution Discrepancy (JDD), a way of measuring the distance between any two joint distributions, regardless of which factor changed. They build on the Maximum Mean Discrepancy (MMD) introduced in Gretton et al. by noticing that a joint distribution can be mapped into a tensor product feature space via kernel embedding.
The main idea behind MMD and JDD is to measure the distance between distributions by comparing their embeddings in a Reproducing Kernel Hilbert Space (RKHS). An RKHS $\mathcal{H}$ is a Hilbert space of functions $f: \mathcal{X} \to \mathbb{R}$ equipped with an inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ and a norm $\| \cdot \|_{\mathcal{H}}$. In the context of this work, all elements of the space are functions that can be evaluated by means of inner products, $f(x) = \langle f, \phi(x) \rangle_{\mathcal{H}}$, thanks to the reproducing property. Here $k(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{H}}$ is a kernel function that takes care of the embedding by defining an implicit feature mapping $\phi: \mathcal{X} \to \mathcal{H}$. As always, $k(x, x')$ can be viewed as a measure of similarity between the points $x$ and $x'$. If a characteristic kernel is used, then the embedding is injective and uniquely preserves all the information about a distribution $p$. According to the seminal work by Smola et al., the kernel embedding of a distribution $p$ in $\mathcal{H}$ is given by $\mu_p = \mathbb{E}_{x \sim p}[\phi(x)]$.
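The reproducing property makes the empirical mean embedding easy to work with: evaluating $\hat{\mu}_p$ at a point reduces to averaging kernel evaluations, $\hat{\mu}_p(x) = \langle \hat{\mu}_p, \phi(x) \rangle_{\mathcal{H}} = \frac{1}{m} \sum_i k(x_i, x)$. A minimal sketch, assuming an RBF kernel and synthetic Gaussian data (not the paper's setup):

```python
import numpy as np

def rbf(a, b, gamma=0.25):
    """RBF kernel k(a, b), evaluated row-wise when a is a matrix."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))   # sample from p

def mean_embedding(x):
    """Evaluate the empirical embedding at x via the reproducing property:
    mu_p(x) = <mu_p, phi(x)> = (1/m) * sum_i k(x_i, x)."""
    return float(rbf(X, x).mean())

# Higher where p puts more mass, lower in the tails of p.
print(mean_embedding(np.zeros(2)), mean_embedding(np.full(2, 5.0)))
```

The embedding is thus an ordinary real-valued function on the input space, and comparing two distributions reduces to comparing two such functions in RKHS norm.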
Having all the required tools in place, we can introduce the MMD and the JDD.
Definition 1 (Maximum Mean Discrepancy (MMD)).
Let $\mathcal{F}$ be the unit ball in an RKHS $\mathcal{H}$. If $X = \{x_1, \ldots, x_m\}$ and $Y = \{y_1, \ldots, y_n\}$ are samples from distributions $p$ and $q$ respectively, then the MMD is
$$\mathrm{MMD}[\mathcal{F}, p, q] = \sup_{f \in \mathcal{F}} \left( \mathbb{E}_{x \sim p}[f(x)] - \mathbb{E}_{y \sim q}[f(y)] \right).$$
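The supremum over the RKHS unit ball admits a closed form in terms of kernel evaluations (Gretton et al.): the biased empirical MMD is the RKHS norm of the difference between the two empirical mean embeddings. A sketch with an RBF kernel on synthetic data (illustrative only; the function names are ours):

```python
import numpy as np

def rbf_gram(A, B, gamma=0.25):
    """RBF Gram matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def mmd_biased(X, Y, gamma=0.25):
    """Biased empirical MMD: the RKHS norm of the difference between the
    empirical mean embeddings of X and Y."""
    m2 = (rbf_gram(X, X, gamma).mean() + rbf_gram(Y, Y, gamma).mean()
          - 2 * rbf_gram(X, Y, gamma).mean())
    return np.sqrt(max(m2, 0.0))

rng = np.random.default_rng(2)
same = mmd_biased(rng.normal(size=(300, 2)), rng.normal(size=(300, 2)))
diff = mmd_biased(rng.normal(size=(300, 2)), rng.normal(2.0, 1.0, (300, 2)))
print(same, diff)   # the shifted pair yields a larger discrepancy
```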
Definition 2 (Joint Distribution Discrepancy (JDD)).
Let $\mathcal{F}$ be the unit ball in an RKHS $\mathcal{H}$ on the tensor product feature space. If $\{(x_i, y_i)\}_{i=1}^{m}$ and $\{(x'_j, y'_j)\}_{j=1}^{n}$ are samples from joint distributions $p$ and $q$ respectively, then the JDD is
$$\mathrm{JDD}[\mathcal{F}, p, q] = \left\| \mathbb{E}_{(x, y) \sim p}[\phi(x) \otimes \psi(y)] - \mathbb{E}_{(x, y) \sim q}[\phi(x) \otimes \psi(y)] \right\|_{\mathcal{H}},$$
where $\phi$ and $\psi$ are the feature mappings yielding the kernels $k$ and $l$, respectively.
A biased empirical estimate of the JDD can be obtained by replacing the population expectations with empirical expectations computed on $m$ samples from $p$ and $n$ samples from $q$:
$$\widehat{\mathrm{JDD}} = \left\| \frac{1}{m} \sum_{i=1}^{m} \phi(x_i) \otimes \psi(y_i) - \frac{1}{n} \sum_{j=1}^{n} \phi(x'_j) \otimes \psi(y'_j) \right\|_{\mathcal{H}}.$$
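Since the tensor-product kernel factorises as $k(x, x')\, l(y, y')$, the Gram matrix on the joint space is the elementwise product of the two marginal Gram matrices, and the empirical JDD can be computed in closed form. A sketch under these assumptions (the sample construction below is a toy example of ours, not the paper's experiment):

```python
import numpy as np

def rbf_gram(A, B, gamma=0.25):
    """RBF Gram matrix between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def jdd_biased(Xp, Yp, Xq, Yq, gamma=0.25, sigma=0.25):
    """Biased empirical JDD between paired samples {(x_i, y_i)} ~ p and
    {(x'_j, y'_j)} ~ q.  The tensor-product embedding makes the Gram matrix
    on the joint space the elementwise product K * L of the marginal Grams."""
    def joint_mean(XA, YA, XB, YB):
        return (rbf_gram(XA, XB, gamma) * rbf_gram(YA, YB, sigma)).mean()
    j2 = (joint_mean(Xp, Yp, Xp, Yp) + joint_mean(Xq, Yq, Xq, Yq)
          - 2 * joint_mean(Xp, Yp, Xq, Yq))
    return np.sqrt(max(j2, 0.0))

rng = np.random.default_rng(7)
# p: y is a noisy copy of x; q: y is a noisy *negated* copy of x.
# The marginals of x and y match across p and q; only the joint differs.
Xp = rng.normal(size=(300, 2)); Yp = Xp + 0.1 * rng.normal(size=(300, 2))
Xq = rng.normal(size=(300, 2)); Yq = -Xq + 0.1 * rng.normal(size=(300, 2))
Xr = rng.normal(size=(300, 2)); Yr = Xr + 0.1 * rng.normal(size=(300, 2))

null = jdd_biased(Xp, Yp, Xr, Yr)   # same joint distribution
alt = jdd_biased(Xp, Yp, Xq, Yq)    # different joint, same marginals
print(null, alt)
```

Note that a marginal two-sample test on $x$ or $y$ alone would see nothing in this construction, while the JDD separates the two joints.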
Moreover, throughout the paper we restrict ourselves to the case of bounded kernels, specifically $0 \leq k(x, x') \leq K$ for all $x$ and $x'$ and for all kernels.
3 The test
Under the null hypothesis that $p = q$, we would expect the JDD to be zero and the empirical JDD to converge towards zero as more samples are acquired. The following theorem provides a bound on the deviations of the empirical JDD from the ideal value of zero. These deviations may happen in practice, but if they are too large we will want to reject the null hypothesis.
Interestingly, the Type II error probability decreases to zero at rate $O(m^{-1/2})$, preserving the same convergence properties found in the kernel two-sample test of Gretton et al. We warn the reader that this result was obtained by neglecting the dependency between $x$ and $y$. See Sec. 4 and following for a deeper discussion.
To validate our proposal, we handcraft a joint distribution starting from MNIST data as follows. We sample an image from a specific class and define the pair of observations $(x, y)$ as the vertical and horizontal projection histograms of the sampled image. Fig. 1(a) depicts the process. The number of samples obtained in the described manner is denoted by $m$, and they all belong to the same class. It is easy to see why the distribution is joint: both projections are computed from the same image. For all the experiments we employed an RBF kernel, which is known to be characteristic, i.e. it induces a one-to-one embedding. Formally, for $x$ and $x'$ ($y$ and $y'$) distributed according to $p$ or $q$ indistinctly, we have
$$k(x, x') = \exp\left( -\gamma \| x - x' \|^2 \right), \qquad l(y, y') = \exp\left( -\sigma \| y - y' \|^2 \right).$$
The parameters $\gamma$ and $\sigma$ have been experimentally set to 0.25. Accordingly, both kernels are bounded by $K = 1$.
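The observation-pair construction can be sketched as follows; to stay self-contained, the snippet uses a random toy image in place of an MNIST digit (the helper names are ours):

```python
import numpy as np

def projection_pair(img):
    """Map an image to the pair (x, y) of its vertical and horizontal
    projection histograms (column sums and row sums, normalised)."""
    v = img.sum(axis=0).astype(float)   # vertical projection, one bin per column
    h = img.sum(axis=1).astype(float)   # horizontal projection, one bin per row
    return v / v.sum(), h / h.sum()

def rbf(a, b, gamma=0.25):
    """RBF kernel, bounded by 1 and attaining it when a == b."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

rng = np.random.default_rng(3)
img = rng.integers(1, 256, size=(8, 8))   # toy stand-in for an MNIST digit
x, y = projection_pair(img)
print(rbf(x, x), rbf(x, y))
```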
In the first experiment we obtain $X$ by sampling $m$ images from a single digit class. Similarly, we collect $Y$ by applying a rotation of $\theta$ degrees to other sampled images from the same class. Of course, when $\theta = 0$, the two samples $X$ and $Y$ come from the same distribution and the null hypothesis that $p = q$ should not be rejected. On the contrary, as $\theta$ increases in absolute value we expect to see the JDD increase as well, up to the point of exceeding the critical value defined in Eq. (4). Fig. 1(b) illustrates the behavior of the JDD when $X$ and $Y$ are sampled from increasingly different distributions.
To deepen the analysis, in Fig. 2 we study the behavior of the critical value as the significance level $\alpha$ and the sample size $m$ change.
5 Limitations and conclusions
The proof of Theorem 1 is based on McDiarmid's inequality, which is not defined for joint distributions. As a result, we considered all random variables of both distributions independent of each other, despite this rarely being the case. However, preliminary empirical experiments show encouraging results, suggesting that the test could be safely applied to evaluate the equivalence of joint distributions under broad independence cases.
6.1 Preliminaries to the proofs
In order to prove our test, we first need to introduce McDiarmid's inequality and a modified version of the Rademacher average with respect to an $m$-sample obtained from a joint distribution.
Theorem 2 (McDiarmid's inequality).
Let $f: \mathcal{X}^m \to \mathbb{R}$ be a function such that for all $i \in \{1, \ldots, m\}$, there exist $c_i < \infty$ for which
$$\sup_{X \in \mathcal{X}^m,\, \tilde{x} \in \mathcal{X}} \left| f(x_1, \ldots, x_m) - f(x_1, \ldots, x_{i-1}, \tilde{x}, x_{i+1}, \ldots, x_m) \right| \leq c_i.$$
Then for all probability measures $p$ and every $\epsilon > 0$,
$$\Pr_X \left( f(X) - \mathbb{E}_X[f(X)] > \epsilon \right) < \exp \left( \frac{-2 \epsilon^2}{\sum_{i=1}^{m} c_i^2} \right),$$
where $\mathbb{E}_X$ denotes the expectation over the $m$ random variables $x_i \sim p$, and $\Pr_X$ denotes the probability over these variables.
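As a quick numerical illustration (ours, not part of the proof): for the sample mean of $m$ variables bounded in $[0, 1]$, replacing one coordinate moves $f$ by at most $c_i = 1/m$, so $\sum_i c_i^2 = 1/m$ and the bound reads $\exp(-2 m \epsilon^2)$. A Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(4)
m, eps, trials = 200, 0.1, 5000

# f(X) = mean of m variables bounded in [0, 1]: replacing one coordinate
# moves f by at most c_i = 1/m, hence sum_i c_i^2 = 1/m and McDiarmid gives
#   P(f(X) - E[f(X)] > eps) < exp(-2 * m * eps^2).
X = rng.uniform(0.0, 1.0, size=(trials, m))
deviations = X.mean(axis=1) - 0.5        # E[f(X)] = 0.5 for Uniform(0, 1)
empirical = float(np.mean(deviations > eps))
bound = float(np.exp(-2 * m * eps**2))
print(empirical, bound)
```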
Definition 3 (Joint Rademacher average).
Let $\mathcal{F}$ be the unit ball in an RKHS on the domain $\mathcal{X} \times \mathcal{Y}$, with kernels bounded between $0$ and $K$. Let $Z = \{(x_1, y_1), \ldots, (x_m, y_m)\}$ be an i.i.d. sample drawn according to a probability measure $p$ on $\mathcal{X} \times \mathcal{Y}$, and let $\sigma_1, \ldots, \sigma_m$ be i.i.d. random variables taking values in $\{-1, +1\}$ with equal probability. We define the joint Rademacher average
$$R_m(\mathcal{F}, Z) = \mathbb{E}_{\sigma} \sup_{f \in \mathcal{F}} \left| \frac{1}{m} \sum_{i=1}^{m} \sigma_i f(x_i, y_i) \right|.$$
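For the RKHS unit ball the inner supremum has a closed form, $\sup_{\|f\| \le 1} |\frac{1}{m} \sum_i \sigma_i f(x_i, y_i)| = \frac{1}{m} \| \sum_i \sigma_i \phi(x_i) \otimes \psi(y_i) \| = \frac{1}{m} \sqrt{\sigma^\top G \sigma}$, where $G$ is the Gram matrix of the tensor-product kernel. A Monte Carlo sketch of this quantity (our code, with RBF kernels as in the experiments):

```python
import numpy as np

def rbf_gram(A, gamma=0.25):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(A**2, 1)[None, :] - 2 * A @ A.T
    return np.exp(-gamma * d2)

def joint_rademacher(X, Y, n_draws=2000, seed=5):
    """Monte Carlo estimate of the joint Rademacher average.  For the RKHS
    unit ball the supremum has the closed form
      sup_f |(1/m) sum_i s_i f(x_i, y_i)| = (1/m) * sqrt(s^T G s),
    with G = K * L the Gram matrix of the tensor-product kernel."""
    rng = np.random.default_rng(seed)
    m = len(X)
    G = rbf_gram(X) * rbf_gram(Y)                   # joint Gram matrix
    s = rng.choice([-1.0, 1.0], size=(n_draws, m))  # Rademacher signs
    quad = np.einsum('ti,ij,tj->t', s, G, s)        # s^T G s for each draw
    return float(np.mean(np.sqrt(quad.clip(0)) / m))

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 2))
Y = X + 0.1 * rng.normal(size=(100, 2))
print(joint_rademacher(X, Y))   # on the order of 1/sqrt(m) for bounded kernels
```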
6.2 Proof of Theorem 1
We start by applying McDiarmid's inequality to $\widehat{\mathrm{JDD}}$ under the simplifying hypothesis that all the random variables involved are independent,
$$\Pr \left( \widehat{\mathrm{JDD}} - \mathbb{E}\big[\widehat{\mathrm{JDD}}\big] > \epsilon \right) < \exp \left( \frac{-2 \epsilon^2}{\sum_i c_i^2} \right).$$
Without loss of generality, let us consider the variation of $\widehat{\mathrm{JDD}}$ with respect to any $x_i$. Since $\mathcal{F}$ is the unit ball in the Reproducing Kernel Hilbert Space, we have
$$|f(x_i, y_i)| \leq \|f\|_{\mathcal{H}} \sqrt{k(x_i, x_i)\, l(y_i, y_i)} \leq K$$
for all $f \in \mathcal{F}$ and for all $(x_i, y_i)$. Consequently, the largest variation to $\widehat{\mathrm{JDD}}$ is bounded by $2K/m$, as the bound in Eq. (11) also holds for all $f \in \mathcal{F}$ and for all $(x'_j, y'_j)$. Summing up the squared maximum variations for all $x_i, y_i$ and $x'_j, y'_j$, the denominator in Eq. (7) becomes
$$\sum_i c_i^2 = 2m \left( \frac{2K}{m} \right)^2 + 2n \left( \frac{2K}{n} \right)^2 = 8K^2 \left( \frac{1}{m} + \frac{1}{n} \right).$$
To fully exploit McDiarmid's inequality, we also need to bound the expectation of $\widehat{\mathrm{JDD}}$. To this end, similarly to Gretton et al., we exploit symmetrisation (Eq. (14)(d)) by means of a ghost sample, i.e. a set of observations whose sampling bias is removed through expectation (Eq. (14)(b)). In particular, let $\{(\bar{x}_i, \bar{y}_i)\}_{i=1}^{m}$ and $\{(\bar{x}'_j, \bar{y}'_j)\}_{j=1}^{n}$ be i.i.d. samples drawn independently of $X$ and $Y$ respectively; then
In Eq. (14), (b) adds a difference that equals 0 since $p = q$ under the null hypothesis, and (c) employs Jensen's inequality.
-  Mingsheng Long, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. arXiv preprint arXiv:1605.06636, 2016.
-  Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723–773, 2012.
-  Joaquin Quiñonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. Dataset shift in machine learning. The MIT Press, 2009.
-  Jose G Moreno-Torres, Troy Raeder, Rocío Alaiz-Rodríguez, Nitesh V Chawla, and Francisco Herrera. A unifying view on dataset shift in classification. Pattern Recognition, 45(1):521–530, 2012.
-  Kenji Fukumizu, Arthur Gretton, Xiaohai Sun, and Bernhard Schölkopf. Kernel measures of conditional dependence. In NIPS, volume 20, pages 489–496, 2007.
-  Alex Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A Hilbert space embedding for distributions. In International Conference on Algorithmic Learning Theory, pages 13–31. Springer, 2007.
-  Arthur Gretton, Karsten M Borgwardt, Malte Rasch, Bernhard Schölkopf, and Alex J Smola. A kernel method for the two-sample-problem. In Advances in neural information processing systems, pages 513–520, 2006.
-  Colin McDiarmid. On the method of bounded differences. Surveys in combinatorics, 141(1):148–188, 1989.
-  Peter L Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002.