Rethinking Importance Weighting for Transfer Learning

12/19/2021
by Nan Lu, et al.

A key assumption in supervised learning is that training and test data follow the same probability distribution. However, this fundamental assumption is not always satisfied in practice, e.g., due to changing environments, sample selection bias, privacy concerns, or high labeling costs. Transfer learning (TL) relaxes this assumption and allows us to learn under distribution shift. Classical TL methods typically rely on importance weighting: a predictor is trained based on the training losses weighted according to the importance, i.e., the test-over-training density ratio. However, as real-world machine learning tasks become increasingly complex, high-dimensional, and dynamic, novel approaches have recently been explored to cope with these challenges. In this article, after introducing the foundations of importance-weighting-based TL, we review recent advances based on joint and dynamic importance-predictor estimation. Furthermore, we introduce a method of causal mechanism transfer that incorporates causal structure into TL. Finally, we discuss future perspectives of TL research.
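
As a concrete illustration of the classical two-step importance-weighting recipe described above (not of the joint or dynamic estimation methods the article reviews), here is a minimal Python sketch on assumed synthetic data: the test-over-training density ratio is estimated with a domain classifier that separates training inputs from unlabeled test inputs, and the predictor is then trained with per-sample weights equal to those ratios. Function names such as estimate_importance and the toy data are illustrative, not taken from the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance(x_train, x_test):
    """Estimate w(x) = p_test(x) / p_train(x) on the training inputs.

    Probabilistic-classification trick: a domain classifier separating
    training inputs (label 0) from unlabeled test inputs (label 1) gives
    P(test | x), which converts to the density ratio via Bayes' rule.
    """
    x = np.vstack([x_train, x_test])
    d = np.concatenate([np.zeros(len(x_train)), np.ones(len(x_test))])
    domain_clf = LogisticRegression(max_iter=1000).fit(x, d)
    p_test = domain_clf.predict_proba(x_train)[:, 1]
    prior_ratio = len(x_train) / len(x_test)  # corrects for domain-size imbalance
    return prior_ratio * p_test / (1.0 - p_test)

def train_importance_weighted(x_train, y_train, weights):
    """Importance-weighted empirical risk minimization.

    Any learner accepting per-sample weights can be plugged in; a plain
    logistic-regression classifier serves as the predictor here.
    """
    return LogisticRegression(max_iter=1000).fit(
        x_train, y_train, sample_weight=weights
    )

# Toy usage under synthetic covariate shift: test inputs come from a
# shifted Gaussian while the labeling rule p(y | x) stays the same.
rng = np.random.default_rng(0)
x_tr = rng.normal(0.0, 1.0, size=(500, 2))
y_tr = (x_tr.sum(axis=1) > 0).astype(int)
x_te = rng.normal(0.7, 1.0, size=(300, 2))

w = estimate_importance(x_tr, x_te)
predictor = train_importance_weighted(x_tr, y_tr, w)
```

Under covariate shift (and assuming the test distribution's support is covered by the training distribution), reweighting the training losses by this density ratio makes the weighted training risk an unbiased estimate of the test risk, which is why the importance is the central quantity in classical TL.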


Related research

- 03/12/2019  Transfer Adaptation Learning: A Decade Survey. "The world we see is ever-changing and it always changes with people, thi..."
- 06/08/2020  Rethinking Importance Weighting for Deep Learning under Distribution Shift. "Under distribution shift (DS) where the training data distribution diffe..."
- 07/08/2020  A One-step Approach to Covariate Shift Adaptation. "A default assumption in many machine learning scenarios is that the trai..."
- 11/29/2017  Dimension Reduction for Robust Covariate Shift Correction. "In the covariate shift learning scenario, the training and test covariat..."
- 04/19/2018  Effects of sampling skewness of the importance-weighted risk estimator on model selection. "Importance-weighting is a popular and well-researched technique for deal..."
- 07/01/2021  Mandoline: Model Evaluation under Distribution Shift. "Machine learning models are often deployed in different settings than th..."
- 03/07/2023  When is Importance Weighting Correction Needed for Covariate Shift Adaptation? "This paper investigates when the importance weighting (IW) correction is..."
