Estimating Generalization under Distribution Shifts via Domain-Invariant Representations

07/06/2020
by Ching-Yao Chuang, et al.

When machine learning models are deployed on a test distribution different from the training distribution, they can perform poorly while overestimating their own performance. In this work, we aim to better estimate a model's performance under distribution shift, without access to target labels. To do so, we use a set of domain-invariant predictors as a proxy for the unknown, true target labels. Since the error of the resulting risk estimate depends on the target risk of the proxy model, we study the generalization of domain-invariant representations and show that the complexity of the latent representation has a significant influence on the target risk. Empirically, our approach (1) enables self-tuning of domain adaptation models, and (2) accurately estimates the target error of given models under distribution shift. Further applications include model selection, deciding when to stop training early, and error detection.
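The core idea admits a short sketch. The snippet below illustrates one way the proxy-based estimate could be computed, assuming we already have class predictions from the model under evaluation and from one or more domain-invariant "check" predictors on the same unlabeled target inputs. The function names (proxy_risk, estimate_target_error) and the max-over-proxies aggregation are illustrative assumptions of this sketch, not the paper's exact procedure.

```python
import numpy as np

def proxy_risk(model_preds: np.ndarray, proxy_preds: np.ndarray) -> float:
    """Disagreement rate between the evaluated model and one
    domain-invariant proxy predictor on unlabeled target inputs.
    Since true target labels are unavailable, the proxy's
    predictions stand in for them."""
    return float(np.mean(model_preds != proxy_preds))

def estimate_target_error(model_preds: np.ndarray,
                          proxy_preds_list: list[np.ndarray]) -> float:
    """Aggregate over a set of proxies. Taking the maximum
    disagreement yields a pessimistic (worst-case) estimate;
    this aggregation choice is an assumption of this sketch."""
    return max(proxy_risk(model_preds, p) for p in proxy_preds_list)

# Hypothetical usage: predictions over 10 classes on 1000 target points.
rng = np.random.default_rng(0)
model_preds = rng.integers(0, 10, size=1000)
proxies = [rng.integers(0, 10, size=1000) for _ in range(3)]
print(f"estimated target error: {estimate_target_error(model_preds, proxies):.3f}")
```

As the abstract notes, the quality of such an estimate hinges on the target risk of the proxy models themselves: the disagreement rate is only informative if the domain-invariant proxies generalize well to the target distribution.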


Related research

- Diagnosing Model Performance Under Distribution Shift (03/03/2023)
- Mandoline: Model Evaluation under Distribution Shift (07/01/2021)
- Confidence-Based Model Selection: When to Take Shortcuts for Subpopulation Shifts (06/19/2023)
- Finding Competence Regions in Domain Generalization (03/17/2023)
- Monotonic Risk Relationships under Distribution Shifts for Regularized Risk Minimization (10/20/2022)
- Causally-motivated Shortcut Removal Using Auxiliary Labels (05/13/2021)
- Learning to Ignore: Fair and Task Independent Representations (01/11/2021)
