Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift

06/26/2020
by Alex J. Chan, et al.

Modern neural networks have proven to be powerful function approximators, providing state-of-the-art performance in a multitude of applications. They fall short, however, in their ability to quantify confidence in their predictions, which is crucial in high-stakes applications that involve critical decision-making. Bayesian neural networks (BNNs) aim to solve this problem by placing a prior distribution over the network's parameters, thereby inducing a posterior distribution that encapsulates predictive uncertainty. While existing variants of BNNs based on Monte Carlo dropout produce reliable (albeit approximate) uncertainty estimates over in-distribution data, they tend to exhibit over-confidence in predictions made on target data whose feature distribution differs from the training data, i.e., the covariate shift setup. In this paper, we develop an approximate Bayesian inference scheme based on posterior regularisation, wherein unlabelled target data are used as "pseudo-labels" of model confidence that regularise the model's loss on labelled source data. We show that this approach significantly improves the accuracy of uncertainty quantification on covariate-shifted data sets, with minimal modification to the underlying model architecture. We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
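The two ingredients the abstract describes can be illustrated in a short sketch: Monte Carlo dropout (keeping dropout active at prediction time and averaging over stochastic forward passes) to obtain approximate predictive uncertainty, and a regularised loss that combines cross-entropy on labelled source data with a penalty nudging predictions on unlabelled target data toward low confidence. This is not the authors' exact scheme; the tiny one-hidden-layer network, the uniform-distribution "low-confidence pseudo-label", and the weight `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_forward(x, W1, W2, p=0.5, T=50, rng=rng):
    """Run T stochastic forward passes with dropout kept ON at
    prediction time (MC dropout); returns (T, N, C) class probabilities."""
    probs = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
        mask = rng.random(h.shape) > p         # sample a dropout mask
        h = h * mask / (1.0 - p)               # inverted-dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs.append(e / e.sum(axis=1, keepdims=True))
    return np.stack(probs)

def predictive_entropy(probs):
    """Entropy of the MC-averaged predictive distribution,
    a standard summary of predictive uncertainty."""
    p_bar = probs.mean(axis=0)
    return -(p_bar * np.log(p_bar + 1e-12)).sum(axis=1)

def regularised_loss(p_src, y_src, p_tgt, lam=0.1):
    """Cross-entropy on labelled source predictions plus a KL penalty
    pulling unlabelled-target predictions toward the uniform
    distribution (a 'low-confidence pseudo-label'); lam is a
    hypothetical trade-off weight."""
    n = p_src.shape[0]
    ce = -np.log(p_src[np.arange(n), y_src] + 1e-12).mean()
    u = np.full_like(p_tgt, 1.0 / p_tgt.shape[1])
    kl = (u * (np.log(u) - np.log(p_tgt + 1e-12))).sum(axis=1).mean()
    return ce + lam * kl
```

With random weights, `mc_dropout_forward` yields a spread of predictions whose entropy rises on inputs far from the training distribution; in training, the KL term discourages the network from making confident predictions on the shifted target data.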


Related research

- Dangers of Bayesian Model Averaging under Covariate Shift (06/22/2021): Approximate Bayesian inference for neural networks is considered a robus...
- Uncertainty-Informed Deep Learning Models Enable High-Confidence Predictions for Digital Histopathology (04/09/2022): A model's ability to express its own predictive uncertainty is an essent...
- BaCOUn: Bayesian Classifiers with Out-of-Distribution Uncertainty (07/12/2020): Traditional training of deep classifiers yields overconfident models tha...
- PAC Prediction Sets Under Covariate Shift (06/17/2021): An important challenge facing modern machine learning is how to rigorous...
- Auditing Pointwise Reliability Subsequent to Training (01/02/2019): To use machine learning in high stakes applications (e.g. medicine), we ...
- Being a Bit Frequentist Improves Bayesian Neural Networks (06/18/2021): Despite their compelling theoretical properties, Bayesian neural network...
- Probabilistic Circuits That Know What They Don't Know (02/13/2023): Probabilistic circuits (PCs) are models that allow exact and tractable p...
