Distribution Shift Inversion for Out-of-Distribution Prediction

06/14/2023
by Runpeng Yu, et al.

The machine learning community has witnessed the emergence of a myriad of Out-of-Distribution (OoD) algorithms, which address the distribution shift between the training and testing distributions by searching for a unified predictor or an invariant feature representation. However, the task of directly mitigating the distribution shift in the unseen testing set is rarely investigated, because the testing distribution is unavailable during the training phase, making it impossible to train a distribution translator that maps between the training and testing distributions. In this paper, we explore how to bypass the need for the testing distribution when training such a distribution translator and how to make distribution translation useful for OoD prediction. We propose a portable Distribution Shift Inversion algorithm: before being fed into the prediction model, the OoD testing samples are first linearly combined with additional Gaussian noise and then transferred back towards the training distribution using a diffusion model trained only on the source distribution. Theoretical analysis establishes the feasibility of our method. Experimental results on both multiple-domain and single-domain generalization datasets show that our method provides a general performance gain when plugged into a wide range of commonly used OoD algorithms.
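A minimal sketch of the test-time procedure described above, under stated assumptions: `diffusion_model` is a pretrained source-distribution diffusion model exposing a hypothetical one-step `denoise(x, t)` API, `ood_predictor` is any downstream OoD prediction model, and the mixing weight `lam` and starting timestep `t_start` are illustrative hyperparameters rather than the paper's reported values.

```python
import torch

def distribution_shift_inversion(x_ood, diffusion_model, ood_predictor,
                                 lam=0.7, t_start=200):
    """Translate OoD test samples back towards the source (training)
    distribution before prediction: mix each sample with Gaussian noise,
    then run the reverse process of a diffusion model trained only on
    the source distribution."""
    # Linearly combine the OoD sample with additional Gaussian noise.
    noise = torch.randn_like(x_ood)
    x = lam * x_ood + (1.0 - lam) * noise

    # Run the reverse (denoising) process from an intermediate timestep,
    # pulling the noised sample towards the training distribution.
    # `denoise` is an assumed one-step reverse-diffusion interface.
    for t in reversed(range(t_start)):
        x = diffusion_model.denoise(x, t)

    # Feed the translated sample to any plugged-in OoD predictor.
    return ood_predictor(x)
```

Because the translation happens purely at test time, the sketch leaves the downstream predictor untouched, which is what makes the procedure portable across OoD algorithms.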


