A Statistical Test for Joint Distributions Equivalence

07/25/2016 · Francesco Solera et al.

We provide a distribution-free test that can be used to determine whether any two joint distributions p and q are statistically different by inspection of a large enough set of samples. Following recent efforts from Long et al. [1], we rely on joint kernel distribution embedding to extend the kernel two-sample test of Gretton et al. [2] to the case of joint probability distributions. Our main result can be directly applied to verify if a dataset-shift has occurred between training and test distributions in a learning framework, without further assuming the shift has occurred only in the input, in the target or in the conditional distribution.

1 Introduction

Detecting when dataset shifts occur is a fundamental problem in learning, as one needs to re-train the system to adapt to the new data before it starts making wrong predictions. Strictly speaking, if the training data is sampled from a joint distribution $p(x, y)$ and the test data from $q(x, y)$, a dataset shift occurs when the hypothesis that the two sets are sampled from the same distribution wanes, that is $p(x, y) \neq q(x, y)$. The aim of this work is to provide a statistical test to determine whether such a shift has occurred, given a set of samples from the training and test sets.

To cope with the complexity of joint distributions, a lot of literature has emerged in recent years approaching easier versions of the problem, where the distributions are assumed to differ only by one factor. For example, a covariate shift occurs when, in the decomposition $p(x, y) = p(y \mid x)\, p(x)$, the conditionals agree, $p(y \mid x) = q(y \mid x)$, but the marginals differ, $p(x) \neq q(x)$. Prior distribution shift, conditional shift and others can be defined in a similar way. For a good reference, the reader may want to consider Quiñonero-Candela et al. [3] or Moreno-Torres et al. [4].

2 Preliminaries

As it often happens, such assumptions are too strong to hold in practice and require an expertise about the data distribution at hand which cannot be taken for granted. A recent work by Long et al. [1] has tried to tackle the same question without making restrictive hypotheses on what changes between training and test distributions. They developed the Joint Distribution Discrepancy (JDD), a way of measuring the distance between any two joint distributions, regardless of how they factorize. They build on the Maximum Mean Discrepancy (MMD) introduced in Gretton et al. [2] by noticing that a joint distribution can be mapped into a tensor-product feature space via kernel embedding.

The main idea behind MMD and JDD is to measure the distance between distributions by comparing their embeddings in a Reproducing Kernel Hilbert Space (RKHS). An RKHS $\mathcal{H}$ is a Hilbert space of functions $f \colon \mathcal{X} \to \mathbb{R}$ equipped with an inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ and a norm $\|\cdot\|_{\mathcal{H}}$. In the context of this work, all elements of the space can be evaluated by means of inner products, $f(x) = \langle f, \phi(x) \rangle_{\mathcal{H}}$, thanks to the reproducing property. Here $k(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{H}}$ is a kernel function that takes care of the embedding by defining an implicit feature mapping $\phi \colon \mathcal{X} \to \mathcal{H}$. As always, $k(x, x')$ can be viewed as a measure of similarity between the points $x$ and $x'$. If a characteristic kernel is used, then the embedding is injective and uniquely preserves all the information about a distribution [5]. According to the seminal work by Smola et al. [6], the kernel embedding of a distribution $p$ in $\mathcal{H}$ is given by $\mu_p = \mathbb{E}_{x \sim p}[\phi(x)]$.

Having all the required tools in place, we can introduce the MMD and the JDD.

Definition 1 (Maximum Mean Discrepancy (MMD) [2]).

Let $\mathcal{F}$ be the unit ball in a RKHS $\mathcal{H}$. If $X = \{x_1, \ldots, x_m\}$ and $Z = \{z_1, \ldots, z_n\}$ are samples from distributions $p$ and $q$ respectively, then the MMD is

$$\mathrm{MMD}(p, q) = \sup_{f \in \mathcal{F}} \big( \mathbb{E}_{x \sim p}[f(x)] - \mathbb{E}_{z \sim q}[f(z)] \big) = \big\| \mu_p - \mu_q \big\|_{\mathcal{H}}. \qquad (1)$$
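In practice the biased estimate of Eq. (1) is computed from Gram matrices only, since $\|\mu_p - \mu_q\|^2$ expands into kernel evaluations. The following minimal sketch illustrates this; the RBF kernel and its bandwidth are illustrative choices, not part of the definition.

    import numpy as np

    def rbf_gram(A, B, gamma=0.25):
        """Gram matrix of an RBF kernel, k(a, b) = exp(-gamma * ||a - b||^2).
        The bandwidth gamma is an illustrative choice, not prescribed by Eq. (1)."""
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * np.maximum(sq, 0.0))

    def mmd_biased(X, Z, kernel=rbf_gram):
        """Biased empirical MMD, i.e. the RKHS norm of the difference between the
        empirical mean embeddings of X and Z, computed via the kernel trick."""
        m, n = len(X), len(Z)
        Kxx, Kzz, Kxz = kernel(X, X), kernel(Z, Z), kernel(X, Z)
        mmd2 = Kxx.sum() / m**2 + Kzz.sum() / n**2 - 2 * Kxz.sum() / (m * n)
        return np.sqrt(max(mmd2, 0.0))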
Definition 2 (Joint Distribution Discrepancy (JDD) [1]).

Let $\mathcal{F}$ be the unit ball in the tensor-product RKHS $\mathcal{H} \otimes \mathcal{G}$. If $(X, Y) = \{(x_1, y_1), \ldots, (x_m, y_m)\}$ and $(X', Y') = \{(x'_1, y'_1), \ldots, (x'_n, y'_n)\}$ are samples from joint distributions $p$ and $q$ respectively, then the JDD is

$$\mathrm{JDD}(p, q) = \sup_{f \in \mathcal{F}} \big( \mathbb{E}_{(x,y) \sim p}[f(x, y)] - \mathbb{E}_{(x,y) \sim q}[f(x, y)] \big) = \big\| \mathbb{E}_{(x,y) \sim p}[\phi(x) \otimes \psi(y)] - \mathbb{E}_{(x,y) \sim q}[\phi(x) \otimes \psi(y)] \big\|_{\mathcal{H} \otimes \mathcal{G}}, \qquad (2)$$

where $\phi \colon \mathcal{X} \to \mathcal{H}$ and $\psi \colon \mathcal{Y} \to \mathcal{G}$ are the feature mappings yielding the kernels $k$ and $l$, respectively.

Note that, differently from Long et al. [1], we do not square the norm in Eq. (2). A biased empirical estimation of JDD can be obtained by replacing the population expectations with the empirical expectations computed on the $m$ samples $(X, Y)$ from $p$ and the $n$ samples $(X', Y')$ from $q$:

$$\widehat{\mathrm{JDD}}(X, Y; X', Y') = \Big\| \frac{1}{m} \sum_{i=1}^{m} \phi(x_i) \otimes \psi(y_i) - \frac{1}{n} \sum_{j=1}^{n} \phi(x'_j) \otimes \psi(y'_j) \Big\|_{\mathcal{H} \otimes \mathcal{G}}. \qquad (3)$$

Moreover, throughout the paper, we restrict ourselves to the case of bounded kernels, specifically $0 \le k(x, x') \le K$ for all $x$ and $x'$, and likewise for all the kernels involved.
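The empirical JDD of Eq. (3) admits the same kernel-trick computation as the MMD, because the inner product of two joint embeddings reduces to the product kernel $k \cdot l$. Below is a minimal sketch under bounded RBF kernels; the bandwidth is again an illustrative choice.

    import numpy as np

    def rbf_gram(A, B, gamma=0.25):
        """RBF Gram matrix; with this kernel the bound K = 1 of Sec. 2 holds."""
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * np.maximum(sq, 0.0))

    def jdd_biased(X, Y, Xp, Yp, kx=rbf_gram, ky=rbf_gram):
        """Biased empirical JDD of Eq. (3). The inner product of two joint
        embeddings phi(x) (x) psi(y) reduces to the product kernel k * l,
        so only Gram matrices are needed."""
        m, n = len(X), len(Xp)
        Kpp = kx(X, X) * ky(Y, Y)      # <phi(x_i) ox psi(y_i), phi(x_j) ox psi(y_j)>
        Kqq = kx(Xp, Xp) * ky(Yp, Yp)
        Kpq = kx(X, Xp) * ky(Y, Yp)
        jdd2 = Kpp.sum() / m**2 + Kqq.sum() / n**2 - 2 * Kpq.sum() / (m * n)
        return np.sqrt(max(jdd2, 0.0))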

3 The test

Under the null hypothesis that $p = q$, we would expect the JDD to be zero and the empirical JDD to converge towards zero as more samples are acquired. The following theorem provides a bound on deviations of the empirical JDD from the ideal value of zero. These deviations may happen in practice, but if they are too large we will want to reject the null hypothesis.

Theorem 1.

Let $p$, $q$, $(X, Y)$, $(X', Y')$, $k$, $l$ and $K$ be defined as in Sec. 1 and Sec. 2. If the null hypothesis $p = q$ holds, and for simplicity $m = n$, we have

$$\widehat{\mathrm{JDD}}(X, Y; X', Y') \;\le\; \frac{2K}{\sqrt{m}} \left( 2 + \sqrt{\log \frac{1}{\alpha}} \right) \qquad (4)$$

with probability at least $1 - \alpha$. As a consequence, the null hypothesis can be rejected with a significance level $\alpha$ if Eq. (4) is not satisfied.

Interestingly, the Type II error probability decreases to zero at rate $O(m^{-1/2})$, preserving the same convergence properties found in the kernel two-sample test of Gretton et al. [2]. We warn the reader that this result was obtained by neglecting the dependency between $x$ and $y$. See Sec. 4 and the following sections for a deeper discussion.
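For concreteness, the decision rule can be sketched as follows, taking the threshold in the form given by Eq. (4); the constants follow our statement of the bound, while the $O(m^{-1/2})$ scaling is the essential part.

    import numpy as np

    def jdd_threshold(m, alpha=0.05, K=1.0):
        """Critical value in the form of Eq. (4):
        (2K / sqrt(m)) * (2 + sqrt(log(1/alpha)))."""
        return (2.0 * K / np.sqrt(m)) * (2.0 + np.sqrt(np.log(1.0 / alpha)))

    def reject_null(jdd_value, m, alpha=0.05, K=1.0):
        """Reject the null hypothesis p = q at level alpha if the empirical
        JDD exceeds the critical value."""
        return jdd_value > jdd_threshold(m, alpha, K)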

4 Experiments

To validate our proposal, we handcraft joint distributions starting from MNIST data as follows. We sample an image from a specific class and define the pair of observations $(x, y)$ as the vertical and horizontal projection histograms of the sampled image. Fig. 1(a) depicts the process. The number of samples obtained in the described manner is denoted by $m$, and they all belong to the same class. It is easy to see why the distribution is joint: the two projections are computed from the same image, so $x$ and $y$ are statistically dependent. For all the experiments we employed RBF kernels, which are known to be characteristic [7], i.e. to induce a one-to-one embedding. Formally, for $x, x'$ and $y, y'$ distributed according to $p$ or $q$ indistinctly, we have

$$k(x, x') = \exp\!\big(-\gamma_x \|x - x'\|^2\big), \qquad l(y, y') = \exp\!\big(-\gamma_y \|y - y'\|^2\big). \qquad (5)$$

The parameters $\gamma_x$ and $\gamma_y$ have been experimentally set to 0.25. Accordingly, both kernels are bounded by $K = 1$.
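The construction of the paired observations can be sketched as follows. The 28×28 image format is the standard MNIST one; the unit-sum normalisation of the projections is our own choice, since the exact preprocessing is not specified, and the kernel bandwidth may need tuning to the resulting scale.

    import numpy as np

    def projection_pair(image):
        """Build the paired observation (x, y) of Sec. 4 from a 28x28 image:
        x is the sum of each row, y the sum of each column. Normalisation to
        unit sum is an assumption, not prescribed by the text."""
        x = image.sum(axis=1).astype(float)   # one value per row
        y = image.sum(axis=0).astype(float)   # one value per column
        return x / max(x.sum(), 1.0), y / max(y.sum(), 1.0)

    def build_sample(images):
        """Stack the (x, y) pairs of a set of images into two arrays X, Y.
        `images` is assumed to be an array of shape (m, 28, 28)."""
        pairs = [projection_pair(img) for img in images]
        X = np.stack([p[0] for p in pairs])
        Y = np.stack([p[1] for p in pairs])
        return X, Y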

In the first experiment we obtain $(X, Y)$ by sampling $m$ images from a fixed digit class. Similarly, we collect $(X', Y')$ by applying a rotation of $\theta$ degrees to other $m$ images sampled from the same class. Of course, when $\theta = 0$, the two samples $(X, Y)$ and $(X', Y')$ come from the same distribution and the null hypothesis that $p = q$ should not be rejected. On the opposite, as $\theta$ increases in absolute value we expect JDD to increase as well, up to the point of exceeding the critical value defined in Eq. (4). Fig. 1(b) illustrates the behavior of JDD when $(X, Y)$ and $(X', Y')$ are sampled from increasingly different distributions.
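This first experiment can be sketched by reusing the helpers defined above (build_sample, jdd_biased and jdd_threshold); the digit class, sample size, angle grid and scipy-based rotation are illustrative choices and may need adjustment to reproduce Fig. 1(b).

    import numpy as np
    from scipy.ndimage import rotate  # rotation of the raw images

    def rotation_experiment(digit_images, m=500, angles=range(-40, 45, 5), alpha=0.05):
        """`digit_images` is assumed to be an array of shape (N, 28, 28) holding
        the MNIST images of a single, fixed digit class."""
        rng = np.random.default_rng(0)
        idx = rng.choice(len(digit_images), size=2 * m, replace=False)
        ref, other = digit_images[idx[:m]], digit_images[idx[m:]]
        X, Y = build_sample(ref)                      # sample from p
        thr = jdd_threshold(m, alpha)                 # critical value of Eq. (4)
        for theta in angles:
            rot = np.stack([rotate(img, theta, reshape=False) for img in other])
            Xp, Yp = build_sample(rot)                # sample from q (rotated images)
            jdd = jdd_biased(X, Y, Xp, Yp)
            print(f"theta={theta:+4d}  JDD={jdd:.4f}  reject={jdd > thr}")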

To deepen the analysis, in Fig. 2 we study the behavior of the critical value by changing the significance level $\alpha$ and the sample size $m$.

                       
Figure 1: In (a) we show an exemplar image drawn from the MNIST dataset. Observations are the projection histograms along both axes, i.e. $x$ ($y$) is obtained by summing pixel values across rows (columns). On the right, (b) depicts the behavior of the JDD measure when the samples $(X', Y')$ are drawn from a different distribution w.r.t. $(X, Y)$, specifically the distribution of rotated images. The rotation is controlled by the parameter $\theta$. The green line shows the critical value for rejecting the null hypothesis (acceptance region below).
Figure 2: On the left, (a) depicts the JDD critical value for the test of Eq. (4) as the significance level $\alpha$ and the sample size $m$ change. Cooler colors correspond to lower values of the JDD threshold. Not surprisingly, more conclusive and desirable tests can be obtained either by lowering the required confidence $1 - \alpha$ or by increasing $m$. Complementarily, (b) shows the convergence rate of the test threshold at increasing sample size, for a fixed value of $\alpha$. It is worth noticing that the elbow of the convergence curve is reached already at moderate sample sizes.
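The kind of grid shown in Fig. 2(a) can be reproduced, up to the constants of Eq. (4) as stated above, with a few lines; the values of $\alpha$ and $m$ below are illustrative.

    import numpy as np

    # Grid of critical values as in Fig. 2(a): rows index alpha, columns index m.
    alphas = np.array([0.01, 0.05, 0.10])
    ms = np.array([50, 100, 500, 1000, 5000])
    grid = (2.0 / np.sqrt(ms)[None, :]) * (2.0 + np.sqrt(np.log(1.0 / alphas))[:, None])
    print(np.round(grid, 3))  # the threshold shrinks as m grows and grows as alpha shrinks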

5 Limitations and conclusions

The proof of Theorem 1 is based on McDiarmid's inequality, which is not defined for joint distributions. As a result, we considered all random variables of both distributions independent of each other, despite this rarely being the case. However, preliminary empirical experiments show encouraging results, suggesting that the test can be safely applied to evaluate the equivalence of joint distributions well beyond the strict independence case.

6 Appendices

6.1 Preliminaries to the proofs

In order to prove our test, we first need to introduce McDiarmid's inequality and a modified version of the Rademacher average with respect to the $m$-sample obtained from a joint distribution.

Theorem 2 (McDiarmid’s inequality [8]).

Let $f \colon \mathcal{X}^m \to \mathbb{R}$ be a function such that for all $i \in \{1, \ldots, m\}$, there exist $c_i < \infty$ for which

$$\sup_{x_1, \ldots, x_m,\, \tilde{x}_i} \big| f(x_1, \ldots, x_m) - f(x_1, \ldots, x_{i-1}, \tilde{x}_i, x_{i+1}, \ldots, x_m) \big| \le c_i. \qquad (6)$$

Then for all probability measures $p$ and every $\epsilon > 0$,

$$P\big( f(X) - \mathbb{E}_X[f(X)] \ge \epsilon \big) \le \exp\left( -\frac{2\epsilon^2}{\sum_{i=1}^{m} c_i^2} \right), \qquad (7)$$

where $\mathbb{E}_X$ denotes the expectation over the $m$ random variables $X = (x_1, \ldots, x_m)$ drawn i.i.d. from $p$, and $P$ denotes the probability over these variables.

Definition 3 (Joint Rademacher average).

Let $\mathcal{F}$ be the unit ball in an RKHS on the domain $\mathcal{X} \times \mathcal{Y}$, with kernels bounded between $0$ and $K$. Let $\{(x_1, y_1), \ldots, (x_m, y_m)\}$ be an i.i.d. sample drawn according to the probability measure $p$ on $\mathcal{X} \times \mathcal{Y}$, and let $\sigma_1, \ldots, \sigma_m$ be i.i.d. random variables taking values in $\{-1, +1\}$ with equal probability. We define the joint Rademacher average

$$R_m(\mathcal{F}, p) = \mathbb{E}_{(x,y),\sigma}\left[ \sup_{f \in \mathcal{F}} \left| \frac{1}{m} \sum_{i=1}^{m} \sigma_i \, \big\langle f, \phi(x_i) \otimes \psi(y_i) \big\rangle \right| \right]. \qquad (8)$$
Theorem 3 (Bound on joint Rademacher average).

Let $R_m(\mathcal{F}, p)$ be the joint Rademacher average defined as in Def. 3; then

$$R_m(\mathcal{F}, p) \le \sqrt{\frac{K^2}{m}} = \frac{K}{\sqrt{m}}. \qquad (9)$$
Proof.

The proof follows the main steps of Bartlett and Mendelson [9], Lemma 22. Recall that $k(x, x') \le K$ and $l(y, y') \le K$, for all $x, x'$ and $y, y'$.

$$R_m(\mathcal{F}, p) = \mathbb{E}\left[ \sup_{\|f\| \le 1} \left| \Big\langle f, \frac{1}{m} \sum_{i=1}^{m} \sigma_i \, \phi(x_i) \otimes \psi(y_i) \Big\rangle \right| \right] = \frac{1}{m}\, \mathbb{E} \left\| \sum_{i=1}^{m} \sigma_i \, \phi(x_i) \otimes \psi(y_i) \right\| \le \frac{1}{m} \left( \mathbb{E} \sum_{i=1}^{m} k(x_i, x_i)\, l(y_i, y_i) \right)^{1/2} \le \sqrt{\frac{K^2}{m}}. \qquad (10)$$
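The bound of Eq. (9) can also be checked numerically: for the unit ball, the supremum in Eq. (8) equals the RKHS norm of $\frac{1}{m}\sum_i \sigma_i\, \phi(x_i)\otimes\psi(y_i)$, which is computable from the product Gram matrix. The sketch below gives a Monte Carlo estimate on synthetic dependent pairs; the data and kernel parameters are illustrative.

    import numpy as np

    def rbf_gram(A, B, gamma=0.25):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * np.maximum(sq, 0.0))

    def joint_rademacher_mc(X, Y, trials=2000, seed=0):
        """Monte Carlo estimate of the joint Rademacher average of Eq. (8):
        the supremum over the unit ball equals || (1/m) sum_i sigma_i phi(x_i) ox psi(y_i) ||,
        whose square is sigma^T (Kx * Ky) sigma / m^2."""
        rng = np.random.default_rng(seed)
        m = len(X)
        G = rbf_gram(X, X) * rbf_gram(Y, Y)   # k(x_i, x_j) * l(y_i, y_j)
        vals = []
        for _ in range(trials):
            s = rng.choice([-1.0, 1.0], size=m)
            vals.append(np.sqrt(max(s @ G @ s, 0.0)) / m)
        return float(np.mean(vals))

    # Sanity check against Eq. (9): R_m <= K / sqrt(m) with K = 1 for RBF kernels.
    X = np.random.default_rng(1).normal(size=(200, 5))
    Y = X[:, :3] + 0.1 * np.random.default_rng(2).normal(size=(200, 3))  # dependent pair
    print(joint_rademacher_mc(X, Y), 1.0 / np.sqrt(200))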

6.2 Proof of Theorem 1

We start by applying McDiarmid's inequality to $\widehat{\mathrm{JDD}}(X, Y; X', Y')$ under the simplifying hypothesis that $m = n$.

Without loss of generality, let us consider the variation of $\widehat{\mathrm{JDD}}$ with respect to any single pair $(x_i, y_i)$. Since $\mathcal{F}$ is the unit ball in the Reproducing Kernel Hilbert Space, we have

$$\big| f(x_i, y_i) \big| = \big| \big\langle f, \phi(x_i) \otimes \psi(y_i) \big\rangle \big| \le \big\| \phi(x_i) \otimes \psi(y_i) \big\| = \sqrt{k(x_i, x_i)\, l(y_i, y_i)} \le K \qquad (11)$$

for all $f \in \mathcal{F}$ and for all pairs $(x_i, y_i)$. Consequently, the largest variation of $\widehat{\mathrm{JDD}}$ with respect to a single pair is bounded by $2K/m$, as the bound in Eq. (11) also holds for all $f \in \mathcal{F}$ and for all pairs $(x'_j, y'_j)$. Summing up the squared maximum variations over all $2m$ pairs, the denominator in Eq. (7) becomes

$$\sum_{i=1}^{2m} c_i^2 = 2m \left( \frac{2K}{m} \right)^2 = \frac{8K^2}{m}, \qquad (12)$$

yielding

$$P\Big( \widehat{\mathrm{JDD}}(X, Y; X', Y') - \mathbb{E}\big[\widehat{\mathrm{JDD}}(X, Y; X', Y')\big] \ge \epsilon \Big) \le \exp\left( -\frac{\epsilon^2 m}{4K^2} \right). \qquad (13)$$

To fully exploit McDiarmid's inequality, we also need to bound the expectation of $\widehat{\mathrm{JDD}}$. To this end, similarly to Gretton et al. [2], we exploit symmetrisation (Eq. (14)(d)) by means of a ghost sample, i.e. a set of observations whose sampling bias is removed through expectation (Eq. (14)(b)). In particular, let $(\bar{X}, \bar{Y})$ and $(\bar{X}', \bar{Y}')$ be i.i.d. samples of size $m$ drawn independently of $(X, Y)$ and $(X', Y')$ respectively; then

$$
\begin{aligned}
\mathbb{E}\big[\widehat{\mathrm{JDD}}\big]
&= \mathbb{E}_{(X,Y),(X',Y')} \Big\| \frac{1}{m} \sum_{i} \phi(x_i) \otimes \psi(y_i) - \frac{1}{m} \sum_{j} \phi(x'_j) \otimes \psi(y'_j) \Big\| && \text{(a)} \\
&= \mathbb{E}_{(X,Y),(X',Y')} \Big\| \mathbb{E}_{(\bar X,\bar Y),(\bar X',\bar Y')} \Big[ \frac{1}{m} \sum_{i} \big( \phi(x_i) \otimes \psi(y_i) - \phi(\bar x_i) \otimes \psi(\bar y_i) \big) - \frac{1}{m} \sum_{j} \big( \phi(x'_j) \otimes \psi(y'_j) - \phi(\bar x'_j) \otimes \psi(\bar y'_j) \big) \Big] \Big\| && \text{(b)} \\
&\le \mathbb{E} \Big\| \frac{1}{m} \sum_{i} \big( \phi(x_i) \otimes \psi(y_i) - \phi(\bar x_i) \otimes \psi(\bar y_i) \big) - \frac{1}{m} \sum_{j} \big( \phi(x'_j) \otimes \psi(y'_j) - \phi(\bar x'_j) \otimes \psi(\bar y'_j) \big) \Big\| && \text{(c)} \\
&= \mathbb{E}_{\sigma}\, \mathbb{E} \Big\| \frac{1}{m} \sum_{i} \sigma_i \big( \phi(x_i) \otimes \psi(y_i) - \phi(\bar x_i) \otimes \psi(\bar y_i) \big) - \frac{1}{m} \sum_{j} \sigma'_j \big( \phi(x'_j) \otimes \psi(y'_j) - \phi(\bar x'_j) \otimes \psi(\bar y'_j) \big) \Big\| && \text{(d)} \\
&\le 4\, R_m(\mathcal{F}, p) \;\le\; \frac{4K}{\sqrt{m}}. && \text{(e)}
\end{aligned}
\qquad (14)
$$

In Eq. (14), (b) adds a difference that equals 0 since $\mu_p = \mu_q$ under the null hypothesis, and (c) employs Jensen's inequality; (d) is the symmetrisation step, and (e) follows from the triangle inequality together with Theorem 3.

By substituting the upper bound of Eq. (14) into Eq. (13), and setting the right-hand side of Eq. (13) equal to $\alpha$ so that $\epsilon = 2K\sqrt{\log(1/\alpha)/m}$, we obtain Theorem 1.

References

  • [1] Mingsheng Long, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. arXiv preprint arXiv:1605.06636, 2016.
  • [2] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723–773, 2012.
  • [3] Joaquín Quiñonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. Dataset shift in machine learning. The MIT Press, 2009.
  • [4] Jose G Moreno-Torres, Troy Raeder, Rocío Alaiz-Rodríguez, Nitesh V Chawla, and Francisco Herrera. A unifying view on dataset shift in classification. Pattern Recognition, 45(1):521–530, 2012.
  • [5] Kenji Fukumizu, Arthur Gretton, Xiaohai Sun, and Bernhard Schölkopf. Kernel measures of conditional dependence. In NIPS, volume 20, pages 489–496, 2007.
  • [6] Alex Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A Hilbert space embedding for distributions. In International Conference on Algorithmic Learning Theory, pages 13–31. Springer, 2007.
  • [7] Arthur Gretton, Karsten M Borgwardt, Malte Rasch, Bernhard Schölkopf, and Alex J Smola. A kernel method for the two-sample-problem. In Advances in neural information processing systems, pages 513–520, 2006.
  • [8] Colin McDiarmid. On the method of bounded differences. Surveys in combinatorics, 141(1):148–188, 1989.
  • [9] Peter L Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002.