Effects of sampling skewness of the importance-weighted risk estimator on model selection

04/19/2018 ∙ by Wouter M. Kouw, et al. ∙ Delft University of Technology

Importance-weighting is a popular and well-researched technique for dealing with sample selection bias and covariate shift. It has desirable characteristics such as unbiasedness, consistency and low computational complexity. However, weighting can have a detrimental effect on an estimator as well. In this work, we empirically show that the sampling distribution of an importance-weighted estimator can be skewed. For sample selection bias settings, and for small sample sizes, the importance-weighted risk estimator produces underestimates for data sets in the body of the sampling distribution, i.e. the majority of cases, and large overestimates for data sets in the tail of the sampling distribution. These under- and overestimates of the risk lead to suboptimal regularization parameters when used for importance-weighted validation.


I Introduction

Sampling with selection bias is often the only means to acquire data. Bias in this context refers to the fact that certain observations occur more frequently than normal [1]. For instance, in social science experiments, data collected from university students will have different properties than data collected from the larger national or global population. This results in a statistical classification problem where the training and test data come from different distributions. Such problems are very challenging, because the information that is relevant to accurately classifying training samples might not be relevant to classifying test samples. This problem is more commonly known as sample selection bias or covariate shift [2, 3, 4]. The setting from which the training data originates is often referred to as the source domain, while the setting of interest is called the target domain [5]. Instead of attempting to collect data in an unbiased manner, which might be difficult for operational, financial or ethical reasons, we are interested in correcting for the domain difference and generalizing from the source to the target domain.

In the case of covariate shift, the dominant method of accounting for the differences between domains is importance-weighting: samples in the source domain are weighted based on their importance to the target domain. The classifier subsequently changes its predictions in order to avoid misclassifying highly important samples. It has been shown that, under certain conditions, an importance-weighted classifier will converge to the optimal target classifier [6]. How fast it learns depends heavily on how different the domains are, as expressed by, for instance, the Rényi divergence between them [6]. The larger the divergence between the domains, the slower the rate of convergence of the classifier parameter estimator.

Although importance-weighted classifiers are consistent under the right circumstances, their performance still depends strongly on how the importance weights themselves are determined. There has been quite a large variety of work on the behavior of different types of weight estimators: ratios of parametric probability distributions [7], kernel density estimators [8], kernel mean matching [9], logistic discrimination [10], the Kullback-Leibler importance estimation procedure [11], unconstrained least-squares importance fitting [12], nearest-neighbour-based estimators [13] and conservative minimax estimators [14]. Interestingly, these weight estimators can trade off consistency of the estimator for faster convergence, by enforcing smoothness, inhibiting bimodality of the weight distribution or ensuring a minimum weight value.

Importance-weighting is crucial to evaluating classifiers as well. Model selection is often done through cross-validation, where the training set is split into parts and each part is held back once to be evaluated on later [15, 16]. However, the standard cross-validation procedure does not account for domain differences. As a result, its hyperparameter estimates are not optimal with respect to the target domain [17, 11, 18]. Effectively, standard cross-validation can produce a model selection bias [19]. The importance-weighted risk, on the other hand, can account for domain differences: by weighting the source validation data, it approximates the target risk more closely. Better approximations of the target risk allow for hyperparameter estimates that make the model generalize better.

Importance-weighting is a widely-trusted and influential method, but it can act in quite surprising ways. In this paper we show that, for small sample sizes, the sampling distribution of an importance-weighted estimator can be skewed. Skewness refers to the fact that a distribution is not symmetric. That means that, although the estimator is unbiased, in the case of positive skew it will underestimate the parameter of interest for the majority of data sets. Conversely, in the case of negative skew, it will overestimate the true parameter for the majority of data sets. We explore the subsequent effects of this property on model selection under covariate shift.

II Preliminaries

In this section, we introduce our notation, describe our example setting and explain importance-weighting.

II-A Notation

Consider an input space $\mathcal{X}$, part of a $D$-dimensional vector space such as $\mathbb{R}^{D}$, and a set of classes $\mathcal{Y} = \{-1, +1\}$. A source domain is a joint distribution defined over these spaces, $p_{\mathcal{S}}(x, y)$, marked with the subscript $\mathcal{S}$, and a target domain is another, $p_{\mathcal{T}}(x, y)$, marked with $\mathcal{T}$. Assuming covariate shift implies that the domains' conditional distributions are equal, i.e. $p_{\mathcal{S}}(y \mid x) = p_{\mathcal{T}}(y \mid x)$, while the marginal data distributions are different, i.e. $p_{\mathcal{S}}(x) \neq p_{\mathcal{T}}(x)$.

Samples from the source domain are denoted as the pair $(x_i, y_i)$, with $n$ samples forming the source dataset $\mathcal{D}_{\mathcal{S}} = \{(x_i, y_i)\}_{i=1}^{n}$. Similarly, target samples are denoted as $(z_j, u_j)$, with $m$ samples forming the target dataset $\mathcal{D}_{\mathcal{T}} = \{(z_j, u_j)\}_{j=1}^{m}$. A classifier is a function that maps the input space to the set of classes, $h : \mathcal{X} \rightarrow \mathcal{Y}$.

II-B Example setting

For the purposes of illustrating a few concepts in the upcoming sections, we generate an example of a covariate shift classification problem. For the target data distribution, a normal distribution with a mean of $0$ and a standard deviation of $1$ is taken; $p_{\mathcal{T}}(x) = \mathcal{N}(x \mid 0, 1)$. For the source data distribution, we take a normal distribution with a mean of $0$ as well, but with a smaller standard deviation $\sigma_{\mathcal{S}} < 1$; $p_{\mathcal{S}}(x) = \mathcal{N}(x \mid 0, \sigma_{\mathcal{S}}^{2})$. The class priors in both domains are set to be equal: $p_{\mathcal{S}}(y) = p_{\mathcal{T}}(y) = 1/2$. Similarly, the class-posterior distributions are set to be equal as well, both in the form of a cumulative normal distribution: $p_{\mathcal{S}}(y \mid x) = p_{\mathcal{T}}(y \mid x) = \Phi(yx)$. Figure 1 plots the class-conditional distributions for the source domain (top) and the target domain (bottom). Essentially, the source domain is a biased sample of the target domain, because it favors samples close to the mean and the decision boundary. Data is drawn through rejection sampling.
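As a concrete illustration, the following is a minimal Python sketch of this setting. The source standard deviation ($0.75$) and the sample sizes are assumed values for illustration only, and each domain's marginal is sampled directly rather than by rejection, which yields the same distributions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
SIGMA_T, SIGMA_S = 1.0, 0.75  # target std from the text; source std assumed

def sample_domain(n, sigma):
    """Draw (x, y) with p(x) = N(0, sigma^2) and p(y = +1 | x) = Phi(x)."""
    x = rng.normal(0.0, sigma, size=n)
    y = np.where(rng.uniform(size=n) < norm.cdf(x), 1, -1)
    return x, y

x_s, y_s = sample_domain(100, SIGMA_S)  # source sample (narrower marginal)
x_t, y_t = sample_domain(100, SIGMA_T)  # target sample
```

By symmetry of $\Phi(yx)$ around $x = 0$, this construction gives equal class priors of $1/2$ in both domains.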

Fig. 1: Example case of a covariate shift classification problem. (Top) source domain, with class-conditional distributions $p_{\mathcal{S}}(x \mid y)$. (Bottom) target domain, with $p_{\mathcal{T}}(x \mid y)$.

II-C Empirical Risk Minimization

A classifier is a function that assigns a class to each input. Here we focus on linear classifiers, which project the data onto a parameter vector $\theta$ and make decisions based on which side of the decision boundary the data point falls: $h(x) = \mathrm{sign}(\theta^{\top} x)$ [20]. In the empirical risk minimization framework, the classifier's decisions are evaluated using a loss function $\ell$.

Risk corresponds to the expected loss that the classifier incurs: $R(h) = \mathbb{E}_{x, y} [\, \ell(h(x), y) \,]$ [21]. For the examples in this paper, we choose a quadratic loss function, $\ell(h(x), y) = (h(x) - y)^{2}$ (known from the Fisher classifier and the least-squares SVM). Because the risk function is an expectation, it can be approximated using the sample average: $\hat{R}(h) = \frac{1}{n} \sum_{i=1}^{n} \ell(h(x_i), y_i)$.
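As a sketch, the sample-average risk with quadratic loss for a one-dimensional linear classifier $h_{\theta}(x) = \theta x$ (the function name is ours, not the paper's):

```python
import numpy as np

def empirical_risk(theta, x, y):
    """Sample average of the quadratic loss (theta * x - y)^2.

    The quadratic loss is applied to the real-valued output theta * x;
    the class decision itself would be sign(theta * x).
    """
    return np.mean((theta * x - y) ** 2)
```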

II-D Importance weighting

Considering that each domain has its own joint distribution, it has its own risk function as well. The source risk is $R_{\mathcal{S}}(h) = \mathbb{E}_{\mathcal{S}} [\ell(h(x), y)]$, while the target risk is $R_{\mathcal{T}}(h) = \mathbb{E}_{\mathcal{T}} [\ell(h(z), u)]$. Their estimators are, respectively:

$$\hat{R}_{\mathcal{S}}(h) = \frac{1}{n} \sum_{i=1}^{n} \ell(h(x_i), y_i), \qquad \hat{R}_{\mathcal{T}}(h) = \frac{1}{m} \sum_{j=1}^{m} \ell(h(z_j), u_j).$$

It is possible to relate the source and target risk functions to each other as follows:

$$R_{\mathcal{T}}(h) = \mathbb{E}_{\mathcal{T}} \big[ \ell(h(x), y) \big] = \mathbb{E}_{\mathcal{S}} \left[ \frac{p_{\mathcal{T}}(x, y)}{p_{\mathcal{S}}(x, y)} \, \ell(h(x), y) \right].$$

In the case of covariate shift, where $p_{\mathcal{S}}(y \mid x) = p_{\mathcal{T}}(y \mid x)$, the ratio of the joint distributions reduces to the ratio of the data marginal distributions, $p_{\mathcal{T}}(x) / p_{\mathcal{S}}(x)$. The new estimator is:

$$\hat{R}_{\mathcal{W}}(h) = \frac{1}{n} \sum_{i=1}^{n} w(x_i) \, \ell(h(x_i), y_i),$$

where $w(x) = p_{\mathcal{T}}(x) / p_{\mathcal{S}}(x)$. So, the target risk can be estimated through a weighted average with respect to the source samples. Hence, the ratio of distributions can be recognized as importance weights: the ratio is larger than $1$ for samples that have a high probability under the target distribution relative to the source distribution, and smaller than $1$ for samples that have a relatively low probability.
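In the example setting both marginals are known normal distributions, so the weights and the weighted risk estimate can be computed in closed form. A minimal sketch, with the source standard deviation again an assumed value:

```python
import numpy as np
from scipy.stats import norm

def weighted_risk(theta, x, y, sigma_s=0.75, sigma_t=1.0):
    """Importance-weighted risk estimate from source samples (x, y)."""
    w = norm.pdf(x, 0.0, sigma_t) / norm.pdf(x, 0.0, sigma_s)  # w(x) = p_T / p_S
    return np.mean(w * (theta * x - y) ** 2)
```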

Fig. 2: Histogram of the importance weights in the example scenario.

The importance weights themselves are often distributed according to an exponential or geometric distribution: many weights are small and a few weights are large. Figure 2 presents a histogram for the example setting. As the domains become more dissimilar, eventually almost all weights will be nearly zero.

Weighting can have interesting effects on the behavior of an estimator. The next section discusses the variation in estimates as a function of different data sets.

III Sampling distribution

The probability distribution of an estimator's results as a function of data is called the sampling distribution. Properties of this distribution are interesting for a number of reasons. Firstly, the difference between the expected value of the sampling distribution and the underlying true risk is called the estimator's bias. It can be desirable to have an unbiased risk estimator, $\mathbb{E} [\hat{R}(h)] = R(h)$ for all $h$. In other words, there should be no systematic deviation in its estimates. For the case of importance-weighting, it is possible to show that the risk estimator is unbiased:

$$\mathbb{E}_{\mathcal{S}} \big[ \hat{R}_{\mathcal{W}}(h) \big] = \mathbb{E}_{\mathcal{S}} \big[ w(x) \, \ell(h(x), y) \big] = \mathbb{E}_{\mathcal{T}} \big[ \ell(h(x), y) \big] = R_{\mathcal{T}}(h).$$
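Unbiasedness can also be checked numerically: averaged over many source data sets, the weighted estimate should approach the target risk. A minimal Monte Carlo sketch, with the same assumed distribution parameters and an assumed classifier value as before:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
theta, n, reps = 1.0, 50, 10000

estimates = np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, 0.75, size=n)  # source draw (assumed sigma_S)
    y = np.where(rng.uniform(size=n) < norm.cdf(x), 1, -1)
    w = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, 0.0, 0.75)
    estimates[r] = np.mean(w * (theta * x - y) ** 2)

# Reference value: the target risk, approximated with one large target draw.
z = rng.normal(0.0, 1.0, size=1_000_000)
u = np.where(rng.uniform(size=z.size) < norm.cdf(z), 1, -1)
print(estimates.mean(), np.mean((theta * z - u) ** 2))  # close to each other
```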

Fig. 3: Histograms of the risk estimates over data sets drawn by rejection sampling from the setting described in Section II-B, for different sample sizes. Note that the skewness diminishes with more samples.

III-A Sampling variance

Secondly, the variance of the sampling distribution is informative about how uncertain, or conversely how accurate, an estimator is. If the sampling variance reduces as a function of the sample size, then the estimator becomes more accurate with more data [22]. However, in the case of a weighted estimator, depending on the size of the weights, the sampling variance might diverge to infinity [6, 23]. For instance, it can be shown that the variance of the sampling distribution diverges for cases where the domains are too far apart [6]. In fact, for our example case, it can be shown how the weights directly scale the sampling variance:

$$\mathrm{Var}_{\mathcal{S}} \big[ \hat{R}_{\mathcal{W}}(h) \big] = \frac{1}{n} \Big( \mathbb{E}_{\mathcal{S}} \big[ w(x)^{2} \, \ell(h(x), y)^{2} \big] - R_{\mathcal{T}}(h)^{2} \Big).$$

Doing the same derivation for the target risk estimator yields:

$$\mathrm{Var}_{\mathcal{T}} \big[ \hat{R}_{\mathcal{T}}(h) \big] = \frac{1}{m} \Big( \mathbb{E}_{\mathcal{T}} \big[ \ell(h(z), u)^{2} \big] - R_{\mathcal{T}}(h)^{2} \Big).$$

They differ in the expectation term: the weights scale the expected squared loss. For settings where the weights are small, i.e. settings where the domains are close, the importance-weighted estimator converges faster and is more accurate. This fact is exploited in importance sampling [24, 25, 26]. However, for settings where the weights are large, i.e. settings where the domains are far apart, the weighted estimator has a larger sampling variance, is therefore more uncertain, and needs more samples to achieve the same level of accuracy as the target risk estimator.
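The variance comparison can be simulated as well: over repeated draws of equal size, the weighted source estimates spread more widely than plain target estimates whenever the weights are large. A sketch under the same assumed parameters:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
theta, n, reps = 1.0, 50, 10000
r_w, r_t = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, 0.75, size=n)  # source draw (assumed sigma_S)
    y = np.where(rng.uniform(size=n) < norm.cdf(x), 1, -1)
    w = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, 0.0, 0.75)
    r_w[r] = np.mean(w * (theta * x - y) ** 2)
    z = rng.normal(0.0, 1.0, size=n)   # target draw of the same size
    u = np.where(rng.uniform(size=n) < norm.cdf(z), 1, -1)
    r_t[r] = np.mean((theta * z - u) ** 2)

print(r_w.var(), r_t.var())  # sampling variances of the two estimators
```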

III-B Sampling skewness

The skewness of a distribution is an indicator of how symmetric it is around its expected value. For small sample sizes, the distribution of the weights can skew the sampling distribution of an importance-weighted estimator. The skewness of a distribution can be expressed using the moment coefficient of skewness, the third standardized moment [27, 28]:

$$\gamma \big[ \hat{R} \big] = \frac{\mathbb{E} \big[ (\hat{R} - \mathbb{E}[\hat{R}])^{3} \big]}{\mathrm{Var} \big[ \hat{R} \big]^{3/2}}.$$

A negative skew (also known as left-skewed) means that the probability mass of the distribution is concentrated to the right of the mean, while a positive skew (a.k.a. right-skewed) implies that the probability mass concentrates to the left of the mean. Our importance-weighted estimator is skewed as:

$$\gamma \big[ \hat{R}_{\mathcal{W}}(h) \big] = \frac{1}{\sqrt{n}} \, \frac{\mathbb{E}_{\mathcal{S}} \big[ \big( w(x) \, \ell(h(x), y) - R_{\mathcal{T}}(h) \big)^{3} \big]}{\mathrm{Var}_{\mathcal{S}} \big[ w(x) \, \ell(h(x), y) \big]^{3/2}}.$$

Again, doing the same derivation for the target risk estimator leads to:

$$\gamma \big[ \hat{R}_{\mathcal{T}}(h) \big] = \frac{1}{\sqrt{m}} \, \frac{\mathbb{E}_{\mathcal{T}} \big[ \big( \ell(h(z), u) - R_{\mathcal{T}}(h) \big)^{3} \big]}{\mathrm{Var}_{\mathcal{T}} \big[ \ell(h(z), u) \big]^{3/2}},$$

showing that the skew of the importance-weighted estimator depends on multiplying the cubic loss with the squared weights: expanding the numerator, the leading term is $\mathbb{E}_{\mathcal{S}} [w(x)^{3} \ell(h(x), y)^{3}] = \mathbb{E}_{\mathcal{T}} [w(x)^{2} \ell(h(x), y)^{3}]$. If the weights are large, the existing skew is scaled up. Note that the skew also reduces as the sampling variance increases.

The moments of the sampling distribution of the risk estimator depend heavily on the problem setting. It is therefore difficult to make general statements regarding all possible covariate shift classification problems. We can, however, illustrate the skew for the example case. In order to evaluate the risk estimator's ability to validate a classifier, the classifier needs to remain fixed while the risk is computed for different data sets. We took a fixed linear classifier $h$. Figure 3 plots the histograms of the risk estimates over repeated rejection sampling. Note that each repetition corresponds to a single validation data set. After computing the risks, it becomes apparent that the sampling distribution of $\hat{R}_{\mathcal{W}}$ is positively skewed and that its skew diminishes as the sample size increases.
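This experiment can be sketched as follows: hold the classifier fixed, redraw validation sets, and compute the sample moment coefficient of skewness of the resulting risk estimates with scipy.stats.skew. The distribution parameters, classifier value and repetition counts are assumptions for illustration:

```python
import numpy as np
from scipy.stats import norm, skew

rng = np.random.default_rng(3)

def weighted_risk_samples(n, reps, theta=1.0):
    """Weighted risk estimates over `reps` source validation sets of size n."""
    out = np.empty(reps)
    for r in range(reps):
        x = rng.normal(0.0, 0.75, size=n)  # assumed source sigma_S
        y = np.where(rng.uniform(size=n) < norm.cdf(x), 1, -1)
        w = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, 0.0, 0.75)
        out[r] = np.mean(w * (theta * x - y) ** 2)
    return out

for n in (10, 50, 250):
    print(n, skew(weighted_risk_samples(n, 5000)))  # positive, shrinking with n
```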

Fig. 4: Boxplots of the regularization parameter estimates based on the importance-weighted risk estimator $\hat{R}_{\mathcal{W}}$, for different sample sizes.

IV Model selection

The importance-weighted risk estimator is crucial to model selection under covariate shift. Standard cross-validation does not account for domain differences [17]. Validating the model on source samples leads to hyperparameters that are not optimal with respect to the target domain [18]. Importance-weighting the source data results in a validation data set that more closely matches the target domain [11]. However, importance-weighted cross-validation suffers from a number of issues: for large domain differences, the sampling variance can diverge, resulting in highly inaccurate estimates [6, 23], and for small sample sizes, the sampling distribution can be skewed. How this skew affects validation is shown in the following experiment.

IV-A Body versus tail

The defining property of a skewed distribution is that the majority of the probability mass lies to one side of its expected value. The narrow region with a large amount of probability mass is called the body, while the long, low-probability-mass region to the side of the body is called the tail. In the case of the example setting, the weighted risk estimator's sampling distribution has a body on the left and a tail that drops off slowly to the right, as can be seen in Figure 3 for small sample sizes. Note that high-probability-mass regions of a sampling distribution correspond to many data sets. The risk estimates in the body are smaller than the expected value of the sampling distribution, i.e. the true target risk $R_{\mathcal{T}}(h)$. Hence, the body contains underestimates of the target risk, while the right-hand tail contains overestimates. Note that the body contains many relatively small underestimates, while the tail contains a few relatively large overestimates. We know that these deviations cancel out, because the importance-weighted risk estimator is unbiased. However, for the large majority of data sets, the risk is underestimated. This directly affects the hyperparameter estimates obtained in cross-validation.

IV-B Regularization parameter selection

In order to evaluate the importance-weighted risk estimator's usefulness for model selection, we evaluate it for a regularized classifier. The problem setting is still the example setting from Section II-B. We take the same linear classifier as before, but this time we add $\ell_2$-regularization, minimizing $\hat{R}(h_{\theta}) + \lambda \|\theta\|^{2}$ with $\lambda \geq 0$. We draw samples from the source domain and evaluate the classifier using the importance-weighted risk estimator. Following that, we select the $\lambda$ for which the estimated risk is minimal: $\hat{\lambda} = \arg\min_{\lambda} \hat{R}_{\mathcal{W}}(h_{\lambda})$. For the example case, the expected risk can be found analytically, and with it the $\lambda$ that minimizes the true risk, i.e. the optimal value of $\lambda$. The better the risk estimator approximates the expected risk, the better the resulting $\hat{\lambda}$ approximates this optimal value.
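A minimal sketch of the selection step: fit a ridge-regularized one-dimensional least-squares classifier per candidate λ on a training draw, validate with the importance-weighted risk on a separate source draw, and keep the minimizer. The grid, split and distribution parameters are our assumptions, not the paper's exact protocol.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def draw_source(n):
    x = rng.normal(0.0, 0.75, size=n)  # assumed source sigma_S
    y = np.where(rng.uniform(size=n) < norm.cdf(x), 1, -1)
    return x, y

def select_lambda(n, lambdas):
    """Pick the lambda minimizing the importance-weighted validation risk."""
    x_tr, y_tr = draw_source(n)  # training draw
    x_va, y_va = draw_source(n)  # validation draw
    w = norm.pdf(x_va, 0.0, 1.0) / norm.pdf(x_va, 0.0, 0.75)
    risks = []
    for lam in lambdas:
        # Closed-form 1-D ridge solution of sum (theta*x - y)^2 + lam*theta^2.
        theta = np.sum(x_tr * y_tr) / (np.sum(x_tr ** 2) + lam)
        risks.append(np.mean(w * (theta * x_va - y_va) ** 2))
    return lambdas[int(np.argmin(risks))]

print(select_lambda(50, np.logspace(-3, 3, 25)))
```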

This procedure of drawing samples, computing the risk and selecting $\hat{\lambda}$ is repeated many times. All data sets for which the estimated risk is smaller than the average risk over all repetitions are deemed part of the body, while all data sets with risks larger than the average are deemed part of the tail. Figure 4 shows the boxplots of $\hat{\lambda}$ for the body and tail separately, with each subplot covering one sample size. The dotted line corresponds to the optimal $\lambda$, and the black bars in the boxplots mark the average estimates. For the smallest sample size, the body produces large overestimates of the regularization parameter, while the tail produces large underestimates. For intermediate sample sizes, the effect is smaller, with the tail producing more accurate estimates. For the largest sample size, the differences between the body and the tail are nearly gone.

Figure 5 plots the proportions of data sets belonging to the body versus the tail over all repetitions. Looking at the number of data sets that make up each part, we can conclude that the majority is part of the body. For the smallest data sets there are, in fact, twice as many data sets in the body as in the tail. As the sample size increases, the sampling distribution becomes less skewed and the proportions become equal.

Fig. 5: Proportions of data sets in the body (blue) versus tail (red).

V Discussion

Although the current problem setting is 1-dimensional, we believe that higher-dimensional problem settings behave along the same lines. However, the current indications of "enough" validation data may not carry over to higher-dimensional settings; we expect that more validation data is required in that case. Also, in the reverse setting, where the source domain is wider than the target domain, the skew of the risk estimator's sampling distribution is negative instead of positive, which means that all statements regarding over- and underestimates are reversed.

We have chosen a quadratic loss function to evaluate risk, but we believe that the results presented here will hold for other choices of loss function as well. The skewness stems from the skewness of the importance weights, which does not depend on the loss function.

A limitation of our study is the fact that it only covers the case of Gaussian data distributions. It would be helpful to attain a more general understanding of the effects of the skewness of the risk estimator’s sampling distribution. Unfortunately, generalizing the behavior of a sampling distribution for all possible covariate shift problem settings is not trivial.

Nonetheless, if the skew in the sampling distribution is indeed caused by the geometric distribution of the weights, then the current results might extend to importance-weighted classifiers as well. If the sampling distributions of classifier parameter estimators are skewed, then we might also see many overestimates of classifier parameters for the majority of data sets and a few large underestimates for rare cases, when sample sizes are small. Regularization has the potential to correct this, which again stresses the importance of having a good model selection procedure for covariate shift problems.

VI Conclusion

We presented an empirical study of the effects of a skewed sampling distribution of the importance-weighted risk estimator on model selection. Depending on the problem setting, for small sample sizes, the estimator will frequently produce underestimates and infrequently produce large overestimates of the target risk. When used for validation under sample selection bias, this causes the regularization parameter to be overestimated for the majority of data sets. However, with enough data, the skew diminishes.

References

  • [1] J. Heckman, “Varieties of selection bias,” The American Economic Review, vol. 80, no. 2, pp. 313–318, 1990.
  • [2] C. Cortes, M. Mohri, M. Riley, and A. Rostamizadeh, “Sample selection bias correction theory,” in Algorithmic Learning Theory, 2008, pp. 38–53.
  • [3] J. Quiñonero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence, Dataset Shift in Machine Learning.   The MIT Press, 2009.
  • [4] J. G. Moreno-Torres, T. Raeder, R. Alaiz-Rodríguez, N. V. Chawla, and F. Herrera, “A unifying view on dataset shift in classification,” Pattern Recognition, vol. 45, no. 1, pp. 521–530, 2012.
  • [5] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan, “A theory of learning from different domains,” Machine Learning, vol. 79, no. 1-2, pp. 151–175, 2010.
  • [6] C. Cortes, Y. Mansour, and M. Mohri, “Learning bounds for importance weighting,” in Advances in Neural Information Processing Systems, 2010, pp. 442–450.
  • [7] H. Shimodaira, “Improving predictive inference under covariate shift by weighting the log-likelihood function,” Journal of Statistical Planning and Inference, vol. 90, no. 2, pp. 227–244, 2000.
  • [8] B. W. Silverman, Density Estimation for Statistics and Data Analysis.   CRC Press, 1986, vol. 26.
  • [9] J. Huang, A. Gretton, K. M. Borgwardt, B. Schölkopf, and A. J. Smola, “Correcting sample selection bias by unlabeled data,” in Advances in Neural Information Processing Systems, 2006, pp. 601–608.
  • [10] S. Bickel, M. Brückner, and T. Scheffer, “Discriminative learning under covariate shift,” Journal of Machine Learning Research, vol. 10, pp. 2137–2155, 2009.
  • [11] M. Sugiyama, M. Krauledat, and K.-R. Müller, “Covariate shift adaptation by importance weighted cross validation,” Journal of Machine Learning Research, vol. 8, pp. 985–1005, 2007.
  • [12] T. Kanamori, S. Hido, and M. Sugiyama, “A least-squares approach to direct importance estimation,” Journal of Machine Learning Research, vol. 10, pp. 1391–1445, 2009.
  • [13] M. Loog, “Nearest neighbor-based importance weighting,” in International Workshop on Machine Learning for Signal Processing, 2012, pp. 1–6.
  • [14] J. Wen, C.-N. Yu, and R. Greiner, “Robust learning under uncertain test distributions: Relating covariate shift to model misspecification,” in International Conference on Machine Learning, 2014, pp. 631–639.
  • [15] B. Efron and R. J. Tibshirani, An Introduction to the Bootstrap.   CRC Press, 1994.
  • [16] R. Kohavi et al., “A study of cross-validation and bootstrap for accuracy estimation and model selection,” in IJCAI, vol. 14, no. 2, 1995, pp. 1137–1145.
  • [17] M. Sugiyama and K.-R. Müller, “Model selection under covariate shift,” in International Conference on Artificial Neural Networks.   Springer, 2005, pp. 235–240.
  • [18] W. M. Kouw and M. Loog, “On regularization parameter estimation under covariate shift,” in International Conference on Pattern Recognition.   IEEE, 2016, pp. 426–431.
  • [19] G. C. Cawley and N. L. Talbot, “On over-fitting in model selection and subsequent selection bias in performance evaluation,” Journal of Machine Learning Research, vol. 11, no. Jul, pp. 2079–2107, 2010.
  • [20] J. Friedman, T. Hastie, and R. Tibshirani, The Elements of Statistical Learning.   Springer Series in Statistics. Springer, Berlin, 2001, vol. 1.
  • [21] M. Mohri, A. Rostamizadeh, and A. Talwalkar, Foundations of Machine Learning.   MIT Press, 2012.
  • [22] P. Massart, Concentration inequalities and model selection.   Springer, 2007, vol. 6.
  • [23] W. Kouw and M. Loog, “On reducing sampling variance in covariate shift using control variates,” arXiv preprint arXiv:1710.06514, 2017.
  • [24] H. Kahn and A. W. Marshall, “Methods of reducing sample size in Monte Carlo computations,” Journal of the Operations Research Society of America, vol. 1, no. 5, pp. 263–278, 1953.
  • [25] R. M. Neal, “Annealed importance sampling,” Statistics and Computing, vol. 11, no. 2, pp. 125–139, 2001.
  • [26] A. B. Owen, Monte Carlo theory, methods and examples, 2013.
  • [27] H. Cramér, Mathematical Methods of Statistics.   Princeton University Press, 2016, vol. 9.
  • [28] D. Joanes and C. Gill, “Comparing measures of sample skewness and kurtosis,” Journal of the Royal Statistical Society: Series D, vol. 47, no. 1, pp. 183–189, 1998.