
Coping with Label Shift via Distributionally Robust Optimisation

10/23/2020
by   Jingzhao Zhang, et al.

The label shift problem refers to the supervised learning setting where the train and test label distributions do not match. Existing work addressing label shift usually assumes access to an unlabelled test sample. This sample may be used to estimate the test label distribution, and to then train a suitably re-weighted classifier. While approaches using this idea have proven effective, their scope is limited as it is not always feasible to access the target domain; further, they require repeated retraining if the model is to be deployed in multiple test environments. Can one instead learn a single classifier that is robust to arbitrary label shifts from a broad family? In this paper, we answer this question by proposing a model that minimises an objective based on distributionally robust optimisation (DRO). We then design and analyse a gradient descent-proximal mirror ascent algorithm tailored for large-scale problems to optimise the proposed objective, and establish its convergence. Finally, through experiments on CIFAR-100 and ImageNet, we show that our technique can significantly improve performance over a number of baselines in settings where label shift is present.
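To make the abstract's recipe concrete, the sketch below illustrates the general idea of a min-max objective over label shifts: alternate a gradient-descent step on the model parameters with a mirror-ascent (exponentiated-gradient) step on an adversarial class-weight vector that is kept close to the training label prior via a KL penalty. This is a minimal illustration of the technique under stated assumptions, not the authors' exact algorithm or uncertainty set; all names (model, per_class_loss, kl_strength, train_prior) are illustrative.

```python
# Minimal sketch of a DRO objective over label shifts (not the paper's exact
# algorithm): alternate gradient descent on the model with mirror ascent
# (exponentiated gradient) on an adversarial class-weight vector w.
import torch
import torch.nn.functional as F


def per_class_loss(logits, labels, num_classes):
    """Mean cross-entropy within each class; zero for classes absent in the batch."""
    sample_losses = F.cross_entropy(logits, labels, reduction="none")
    out = torch.zeros(num_classes, device=logits.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            out[c] = sample_losses[mask].mean()
    return out


def dro_label_shift_step(model, opt, x, y, w, train_prior,
                         ascent_lr=0.1, kl_strength=1.0):
    """One alternating update: descend on the model parameters for the
    w-weighted loss, then take a mirror-ascent step on w (a point on the
    simplex), penalising KL(w || train_prior) so the adversary stays within
    a plausible family of label shifts."""
    num_classes = w.numel()

    # Descent step: minimise the adversarially re-weighted per-class loss.
    logits = model(x)
    class_losses = per_class_loss(logits, y, num_classes)
    loss = (w * class_losses).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Mirror-ascent step on w: an exponentiated-gradient update, i.e. the
    # proximal step under the negative-entropy mirror map on the simplex.
    with torch.no_grad():
        grad_w = class_losses - kl_strength * (
            torch.log(w + 1e-12) - torch.log(train_prior + 1e-12))
        w = w * torch.exp(ascent_lr * grad_w)
        w = w / w.sum()
    return loss.item(), w
```

In a training loop, w would typically start at the training label distribution (or uniform) and be carried across batches; because the classifier is trained against the worst shift in the family, no test-time re-weighting or retraining per target environment is required, which is the point of the robust formulation.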


Related research

03/22/2019: Regularized Learning for Domain Adaptation under Label Shifts
We propose Regularized Learning under Label shifts (RLLS), a principled ...

03/23/2020: Minimax optimal approaches to the label shift problem
We study minimax rates of convergence in the label shift problem. In add...

03/15/2019: On Target Shift in Adversarial Domain Adaptation
Discrepancy between training and testing domains is a fundamental proble...

06/28/2021: Ensembling Shift Detectors: an Extensive Empirical Evaluation
The term dataset shift refers to the situation where the data used to tr...

07/10/2020: Robust Classification under Class-Dependent Domain Shift
Investigation of machine learning algorithms robust to changes between t...

06/17/2020: Self-training Avoids Using Spurious Features Under Domain Shift
In unsupervised domain adaptation, existing theory focuses on situations...

07/07/2021: Test for non-negligible adverse shifts
Statistical tests for dataset shift are susceptible to false alarms: the...