Calibration with Bias-Corrected Temperature Scaling Improves Domain Adaptation Under Label Shift in Modern Neural Networks

01/21/2019
by Avanti Shrikumar, et al.

Label shift refers to the phenomenon where the marginal probability p(y) of observing a particular class changes between the training and test distributions while the conditional probability p(x|y) stays fixed. This is relevant in settings such as medical diagnosis, where a classifier trained to predict disease from observed symptoms may need to be adapted to a population in which the baseline frequency of the disease is higher. Given calibrated estimates of p(y|x), one can apply an EM algorithm to correct for the shift in class proportions between the training and test distributions without ever needing to estimate p(x|y). Unfortunately, modern neural networks typically fail to produce well-calibrated probabilities, compromising the effectiveness of this approach. Although Temperature Scaling can greatly reduce miscalibration in these networks, it can leave behind a systematic bias in the probabilities that still poses a problem. To address this, we extend Temperature Scaling with class-specific bias parameters, which largely eliminates systematic bias in the calibrated probabilities and allows for effective domain adaptation under label shift. We term our calibration approach "Bias-Corrected Temperature Scaling". In experiments on CIFAR10, we find that EM with Bias-Corrected Temperature Scaling significantly outperforms both EM with Temperature Scaling and the recently-proposed Black-Box Shift Estimation.
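The two ingredients described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' released implementation: `bcts_calibrate` and `em_label_shift` are hypothetical names, the temperature and per-class bias are assumed to have already been fit (in practice by minimizing negative log-likelihood on held-out data), and the EM loop follows the standard Saerens-style prior re-estimation that uses only calibrated posteriors p(y|x) and the training priors.

```python
import numpy as np

def bcts_calibrate(logits, temperature, bias):
    """Bias-Corrected Temperature Scaling: softmax(logits / T + b).

    `temperature` is a scalar and `bias` a per-class vector; both are
    assumed to have been fit on a held-out validation set.
    """
    z = logits / temperature + bias
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def em_label_shift(calibrated_probs, train_priors, n_iter=100):
    """EM re-estimation of test-set class priors q(y) under label shift.

    Uses only calibrated posteriors p(y|x) from the training-distribution
    model and the training priors p(y); p(x|y) is never computed.
    """
    q = train_priors.copy()
    for _ in range(n_iter):
        # E-step: re-weight each posterior by the current prior ratio q(y)/p(y)
        adjusted = calibrated_probs * (q / train_priors)
        adjusted /= adjusted.sum(axis=1, keepdims=True)
        # M-step: new prior estimate is the mean adjusted posterior
        q = adjusted.mean(axis=0)
    return q, adjusted

# Toy usage: calibrate a few 2-class logit rows, then adapt the priors.
logits = np.array([[2.0, 0.0], [0.0, 2.0], [1.5, 0.5], [3.0, 0.2]])
probs = bcts_calibrate(logits, temperature=1.5, bias=np.array([0.0, 0.1]))
test_priors, adapted_probs = em_label_shift(probs, np.array([0.5, 0.5]))
```

The returned `adapted_probs` are the posteriors re-weighted for the estimated test distribution, which is what makes the adapted classifier's decisions track the shifted priors.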


Related research

09/30/2019 - Well-calibrated Model Uncertainty with Temperature Scaling for Dropout Variational Inference
10/28/2019 - Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration
07/26/2022 - Domain Adaptation under Open Set Label Shift
02/12/2018 - Detecting and Correcting for Label Shift with Black Box Predictors
07/29/2022 - Factorizable Joint Shift in Multinomial Classification
12/25/2020 - Contextual Temperature for Language Modeling
08/12/2020 - Local Temperature Scaling for Probability Calibration
