Stable Prediction across Unknown Environments

06/16/2018
by Kun Kuang, et al.

In many important machine learning applications, the training distribution used to learn a probabilistic classifier differs from the testing distribution on which the classifier will be used to make predictions. Traditional methods correct for the distribution shift by reweighting the training data with the ratio of the density between the test and training data. In many applications, however, training takes place without prior knowledge of the testing distribution on which the algorithm will later be applied. Recently, methods have been proposed to address the shift by learning causal structure, but those methods rely on the diversity of multiple training environments to achieve good performance and face complexity limitations in high dimensions. In this paper, we propose a novel Deep Global Balancing Regression (DGBR) algorithm that jointly optimizes a deep auto-encoder model for feature selection and a global balancing model for stable prediction across unknown environments. The global balancing model constructs balancing weights that facilitate estimation of the partial effect of each feature (holding all other features fixed), a problem that is challenging in high dimensions, and thus helps identify stable, causal relationships between features and outcomes. The deep auto-encoder model is designed to reduce the dimensionality of the feature space, making global balancing easier. We show, both theoretically and with empirical experiments, that our algorithm can make stable predictions across unknown environments. Our experiments on both synthetic and real-world datasets demonstrate that our DGBR algorithm outperforms state-of-the-art methods for stable prediction across unknown environments.
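To make the global balancing idea concrete, here is a minimal numpy sketch, not the authors' implementation. It treats each binary feature in turn as a "treatment" and learns per-sample weights that shrink the weighted-mean difference of all other features between the two treatment groups; in DGBR this balancing model is trained jointly with a deep auto-encoder, which this toy example omits. The data, learning rate, and finite-difference gradient are illustrative assumptions.

```python
import numpy as np

def balancing_loss(w, X):
    """Global balancing loss: for every binary feature j (treated in turn
    as a 'treatment'), measure the weighted-mean difference of all other
    features between the j=1 group and the j=0 group."""
    n, p = X.shape
    loss = 0.0
    for j in range(p):
        I = X[:, j]                      # treatment indicator for feature j
        rest = np.delete(X, j, axis=1)   # all remaining features
        t, c = w * I, w * (1 - I)
        mean_t = rest.T @ t / max(t.sum(), 1e-8)
        mean_c = rest.T @ c / max(c.sum(), 1e-8)
        loss += np.sum((mean_t - mean_c) ** 2)
    return loss

rng = np.random.default_rng(0)
n, p = 100, 4
X = (rng.random((n, p)) < 0.5).astype(float)
# correlate feature 1 with feature 0 to create imbalance
X[:, 1] = np.where(rng.random(n) < 0.8, X[:, 0], X[:, 1])

theta = np.zeros(n)                      # w = exp(theta) keeps weights positive
loss_before = balancing_loss(np.exp(theta), X)

# crude descent via finite-difference gradients (for illustration only)
eps, lr = 1e-4, 5.0
for step in range(30):
    base = balancing_loss(np.exp(theta), X)
    grad = np.zeros(n)
    for i in range(n):
        theta[i] += eps
        grad[i] = (balancing_loss(np.exp(theta), X) - base) / eps
        theta[i] -= eps
    theta -= lr * grad
loss_after = balancing_loss(np.exp(theta), X)
print(loss_before, loss_after)  # imbalance shrinks after reweighting
```

After reweighting, correlations between features are reduced, so a regression on the weighted data comes closer to isolating each feature's own (stable) effect on the outcome.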


