Stable Learning via Sparse Variable Independence

12/02/2022
by Han Yu, et al.

The problem of covariate-shift generalization has attracted intensive research attention. Previous stable learning algorithms employ sample reweighting schemes to decorrelate the covariates when no explicit domain information about the training data is available. However, with finite samples it is difficult to learn weights that ensure perfect independence and thereby eliminate the unstable variables. Moreover, decorrelating within the stable variables can inflate the variance of learned models because it over-reduces the effective sample size, so these algorithms require a very large sample size to work. In this paper, with theoretical justification, we propose SVI (Sparse Variable Independence) for the covariate-shift generalization problem. We introduce a sparsity constraint to compensate for the imperfection of sample reweighting in the finite-sample setting of previous methods. Furthermore, we combine independence-based sample reweighting and sparsity-based variable selection in an iterative procedure that avoids decorrelating within the stable variables, increasing the effective sample size and thereby alleviating variance inflation. Experiments on both synthetic and real-world datasets demonstrate the improvement in covariate-shift generalization performance brought by SVI.
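The abstract does not give pseudocode, so the following is only a minimal sketch of the alternation it describes, under loud assumptions: the decorrelation weights are estimated with a shuffled-copy density-ratio heuristic (a stand-in, not the paper's reweighting procedure), sparse selection is an ordinary Lasso, and the toy data, the helper `decorrelation_weights`, and all thresholds are hypothetical illustrations.

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(0)

# Toy data: x1 is the stable covariate; x2 is an unstable covariate that
# relates to the outcome only through its correlation with x1.
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 0.1 * rng.normal(size=n)

def _augment(X):
    # Pairwise products let a linear classifier pick up differences
    # in correlation structure, not just in the marginals.
    p = X.shape[1]
    prods = [X[:, j] * X[:, k] for j in range(p) for k in range(j + 1, p)]
    return np.column_stack([X] + prods) if prods else X

def decorrelation_weights(X, seed=0):
    """Density-ratio reweighting toward a column-shuffled copy of X, whose
    covariates are independent by construction (a heuristic stand-in for
    the independence-based reweighting step)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    X_ind = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
    Z = _augment(np.vstack([X, X_ind]))
    t = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = shuffled copy
    clf = LogisticRegression(max_iter=1000).fit(Z, t)
    pr = clf.predict_proba(_augment(X))[:, 1]
    w = pr / (1.0 - pr)  # estimated ratio p_independent(x) / p_train(x)
    return w / w.sum()

# Alternate reweighting (over the currently selected covariates only) with
# sparsity-based selection, so stable covariates are not decorrelated away.
selected = np.arange(X.shape[1])
for _ in range(3):
    w = decorrelation_weights(X[:, selected])
    lasso = Lasso(alpha=0.05)
    lasso.fit(X[:, selected], y, sample_weight=w * len(w))
    keep = np.abs(lasso.coef_) > 1e-3
    if keep.all() or not keep.any():
        break
    selected = selected[keep]

print("selected covariates:", selected.tolist())
```

On this toy example the iteration should retain the stable covariate (index 0); the key design point mirrored from the abstract is that each reweighting pass runs only over the variables that survived the previous sparse-selection pass.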

Related research

Stable Prediction via Leveraging Seed Variable (06/09/2020)
In this paper, we focus on the problem of stable prediction across unkno...

Locally Sparse Networks for Interpretable Predictions (06/11/2021)
Despite the enormous success of neural networks, they are still hard to ...

Double-Weighting for Covariate Shift Adaptation (05/15/2023)
Supervised learning is often affected by a covariate shift in which the ...

Stable Learning via Causality-based Feature Rectification (07/30/2020)
How to learn a stable model under agnostic distribution shift between tr...

Reweighting samples under covariate shift using a Wasserstein distance criterion (10/19/2020)
Considering two random variables with different laws to which we only ha...

Extensions of stability selection using subsamples of observations and covariates (07/18/2014)
We introduce extensions of stability selection, a method to stabilise va...

Generalized Label Shift Correction via Minimum Uncertainty Principle: Theory and Algorithm (02/26/2022)
As a fundamental problem in machine learning, dataset shift induces a pa...
