A Learning Based Hypothesis Test for Harmful Covariate Shift

12/06/2022
by   Tom Ginsberg, et al.

The ability to quickly and accurately identify covariate shift at test time is a critical and often overlooked component of safe machine learning systems deployed in high-risk domains. While methods exist for detecting when predictions should not be made on out-of-distribution test examples, identifying distribution-level differences between training and test time can help determine when a model should be removed from the deployment setting and retrained. In this work, we define harmful covariate shift (HCS) as a change in distribution that may weaken the generalization of a predictive model. To detect HCS, we use the discordance between an ensemble of classifiers trained to agree on training data and disagree on test data. We derive a loss function for training this ensemble and show that the disagreement rate and entropy represent powerful discriminative statistics for HCS. Empirically, we demonstrate the ability of our method to detect harmful covariate shift with statistical certainty on a variety of high-dimensional datasets. Across numerous domains and modalities, we show state-of-the-art performance compared to existing methods, particularly when the number of observed test samples is small.
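To make the mechanism concrete, below is a minimal, hedged sketch of the constrained-disagreement idea described in the abstract: one ensemble member is fit to agree with ground-truth labels on training data while being pushed to disagree with a base model's pseudo-labels on unlabeled test data, and its disagreement rate on those test samples serves as a test statistic. The function names, the alpha weight, and the bounded disagreement surrogate are illustrative assumptions, not the paper's exact derived loss or API.

```python
# Minimal sketch (not the authors' reference implementation).
# Names and the disagreement surrogate below are illustrative assumptions.

import torch
import torch.nn.functional as F


def constrained_disagreement_loss(train_logits: torch.Tensor,
                                  train_labels: torch.Tensor,
                                  test_logits: torch.Tensor,
                                  test_pseudo_labels: torch.Tensor,
                                  alpha: float = 1.0) -> torch.Tensor:
    """Fit an ensemble member to agree with labels on training data while
    disagreeing with the base model's pseudo-labels on unlabeled test data.
    `alpha` (an assumed hyperparameter) balances the two terms."""
    # Standard supervised term: agree on the labeled training set.
    agree = F.cross_entropy(train_logits, train_labels)
    # Disagreement term: push probability mass away from the base model's
    # predicted class on test samples via the bounded surrogate -log(1 - p).
    test_probs = F.softmax(test_logits, dim=1)
    p_pseudo = test_probs.gather(1, test_pseudo_labels.unsqueeze(1)).squeeze(1)
    disagree = -torch.log1p(-p_pseudo.clamp(max=1 - 1e-6)).mean()
    return agree + alpha * disagree


def disagreement_rate(member_logits: torch.Tensor,
                      base_logits: torch.Tensor) -> float:
    """Fraction of test samples on which the constrained member predicts a
    different class than the base model -- one candidate test statistic."""
    member_pred = member_logits.argmax(dim=1)
    base_pred = base_logits.argmax(dim=1)
    return (member_pred != base_pred).float().mean().item()
```

Roughly speaking, harmful shift would then be flagged when the observed disagreement statistic (or ensemble entropy) on the test samples is extreme relative to a null distribution of the same statistic computed on held-out in-distribution data.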

Related research

04/19/2023 · Information Geometrically Generalized Covariate Shift Adaptation
Many machine learning methods assume that the training and test data fol...

08/18/2021 · Contrastive Identification of Covariate Shift in Image Data
Identifying covariate shift is crucial for making machine learning syste...

05/22/2023 · MAGDiff: Covariate Data Set Shift Detection via Activation Graphs of Deep Neural Networks
Despite their successful application to a variety of tasks, neural netwo...

12/21/2022 · Not Just Pretty Pictures: Text-to-Image Generators Enable Interpretable Interventions for Robust Representations
Neural image classifiers are known to undergo severe performance degrada...

04/13/2023 · Unified Out-Of-Distribution Detection: A Model-Specific Perspective
Out-of-distribution (OOD) detection aims to identify test examples that ...

07/01/2020 · Identifying Causal Effect Inference Failure with Uncertainty-Aware Models
Recommending the best course of action for an individual is a major appl...

11/07/2016 · Revisiting Distributionally Robust Supervised Learning in Classification
Distributionally Robust Supervised Learning (DRSL) is necessary for buil...
