Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift

by Alex J. Chan, et al.

Modern neural networks have proven to be powerful function approximators, providing state-of-the-art performance in a multitude of applications. They however fall short in their ability to quantify confidence in their predictions; this is crucial in high-stakes applications that involve critical decision-making. Bayesian neural networks (BNNs) aim at solving this problem by placing a prior distribution over the network's parameters, thereby inducing a posterior distribution that encapsulates predictive uncertainty. While existing variants of BNNs based on Monte Carlo dropout produce reliable (albeit approximate) uncertainty estimates over in-distribution data, they tend to exhibit over-confidence in predictions made on target data whose feature distribution differs from the training data, i.e., the covariate shift setup. In this paper, we develop an approximate Bayesian inference scheme based on posterior regularisation, wherein unlabelled target data serve as "pseudo-labels" of model confidence that regularise the model's loss on labelled source data. We show that this approach significantly improves the accuracy of uncertainty quantification on covariate-shifted data sets, with minimal modification to the underlying model architecture. We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.








Code Repositories


Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift (ICML 2021) by Alex J. Chan, Ahmed M. Alaa, Zhaozhi Qian, and Mihaela van der Schaar.


1 Introduction

Modern neural networks have achieved state-of-the-art predictive performance in a wide variety of applications. They are especially useful in areas where a large quantity of labelled i.i.d. data is available (Krizhevsky et al., 2012). However, neural networks fall short in their ability to quantify confidence in their predictions, which makes them difficult to apply in mission-critical domains. The immediate problem is that neural networks may issue erroneous predictions with high confidence (Ovadia et al., 2019). Such over-confident predictions mislead rather than inform human experts’ decisions and can lead to severe consequences in high-stakes applications.

In practice, the task of quantifying predictive uncertainty is even more challenging because the training and testing data are typically not i.i.d., owing to the impact of exogenous factors over time or inconsistencies in data collection. This situation is known as covariate shift (Shimodaira, 2000), and research indicates that it may cause neural networks to display unexpected behaviour (Ovadia et al., 2019). In the extreme case, they may confidently produce nonsensical predictions for out-of-distribution adversarial examples (Madry et al., 2017). While in this work we do not consider the scenario of a targeted adversarial attack, we would like the network to return a high-uncertainty prediction if a test point falls far from any of the training data. We motivate the need for work in this area with a concrete example: consider trying to predict the mortality rate for a group of patients with prostate cancer in a country where we have no labels due to tight privacy regulations on medical data. We do, however, have access to labelled examples from another country which we could use to train the model. The problem is that the populations of the two countries may differ in distribution, so a model trained purely on the labelled data may perform poorly on the unlabelled data, both in terms of accuracy and uncertainty estimation. This type of problem is common in the medical setting, and errors here are especially damaging since model predictions may directly affect the treatment a patient receives.

Bayesian neural networks (BNNs) (Neal, 2012) aim to solve the uncertainty quantification problem by learning neural networks via Bayesian inference, a principled way to reason under uncertainty. BNNs encapsulate the prediction uncertainty in the posterior predictive distribution, which is typically intractable and has to be approximated (Graves, 2011; Blundell et al., 2015). While existing approximation methods are able to produce reliable uncertainty estimates over in-distribution data, it has been shown that they tend to be over-confident under covariate shift.

In this paper, we propose Transductive Dropout, a method leveraging information from the unlabelled target data to find a better approximation to the posterior. We make the following observation: a point being in the target data is an indication that the model should output higher uncertainty, because the target distribution is not well represented by the training data due to covariate shift. Therefore, we use whether the data come from the training or target set as a “pseudo-label” of model confidence. This naturally leads to a posterior regularisation term which we incorporate into the variational approximation objective. Our regulariser can be easily applied to many current network architectures and inference schemes — here, we demonstrate its usefulness in Monte Carlo Dropout, showing that the resulting model much more appropriately quantifies its uncertainty under covariate shift. Empirical evaluations demonstrate that our method performs competitively compared to Bayesian and frequentist approaches in the task of prostate cancer mortality prediction across globally diverse populations.

Figure 1: High-level depiction of our approach. We first generate our augmented data set with pseudo-labels before feeding forward to make predictions and then back-propagating both errors through the network.


2 Related Work

2.1 Overview of Related Methods

Utilising unlabelled data to improve uncertainty estimates under covariate shift is a relatively unexplored area in the literature. Here we highlight some of the key methods in the surrounding fields to contextualise our work.

Bayesian Uncertainty Estimate for Neural Networks

Bayesian methodology has been applied to quantify the predictive uncertainty of neural networks, leading to a large family of methods known as Bayesian neural networks (BNNs). A BNN learns a posterior distribution over parameters that encapsulates the model uncertainty. Due to the complexity of deep neural networks, the exact posterior is usually intractable. Hence, much of the research in the BNN literature is devoted to finding better approximate inference algorithms for the posterior. Popular approximate Bayesian approaches include dropout-based variational inference (Gal and Ghahramani, 2016; Kingma et al., 2015) and stochastic variational Bayesian inference (Blundell et al., 2015; Graves, 2011; Louizos and Welling, 2017). These methods are known to achieve reliable uncertainty estimates in the i.i.d. scenario. However, recent research has cast doubt on the validity of these uncertainty estimates under covariate shift (Ovadia et al., 2019). Moreover, the above methods do not make use of any unlabelled data for training or inference.

Semi-Supervised Learning

Semi-supervised learning (SSL) covers the broad field of learning from both labelled and unlabelled data (Zhu and Goldberg, 2009). It is generally separated into two branches, with most of the work covering inductive SSL, which aims to use the unlabelled data to learn a general mapping from the features to the outcome. Many recent works encourage the model to generalise better by using a regularisation term computed on the unlabelled data (Berthelot et al., 2019). This includes entropy minimisation, which encourages the model to produce confident predictions on unlabelled data (Grandvalet and Bengio, 2005; Lee, 2013; Jean et al., 2018), and consistency regularisation, which ensures the predictions for slightly perturbed data stay similar (Sajjadi et al., 2016). The other branch covers transductive SSL, where the aim is to make predictions only over the given unlabelled points, with no need to generalise further. As we will show later, the proposed Transductive Dropout fits more into this framework, using the unlabelled data as a regulariser in order to induce a better variational approximation to the intractable posterior distribution.

However, our work differs significantly from traditional SSL in several ways. First, we note that most existing works in SSL focus entirely on using unlabelled data to improve predictive performance (e.g. accuracy), but much less thought has been given to improving the uncertainty estimates for those predictions, which is the focus of this paper. Furthermore, our work explicitly addresses the issue of covariate shift between source and target data, whereas traditional SSL often assumes that they are i.i.d. In addition, most of the recent work in SSL considers problems like image classification and natural language processing, where the methods can leverage the complicated dependencies in the features; we do not consider this a focus and develop a method that works appropriately for tabular data as well.

Unsupervised Domain Adaptation

Unsupervised domain adaptation (UDA) is the task of training models to achieve better performance on a target domain, with access to only unlabelled data in the target domain and labelled data from a (different) source domain. Kouw and Loog (2019) contains a detailed review of popular UDA methods. As with SSL, existing works on UDA centre around improving predictive performance rather than producing well-calibrated uncertainty estimates. Our work contributes to the UDA literature by proposing a method to improve the uncertainty estimates on the predictions made in the target domain.

Transfer Learning

In the setting of transfer learning (Torrey and Shavlik, 2010), the task does involve a change in distribution over features, but it typically also involves some amount of labels on the target set (known as one-shot or few-shot learning). This has led to a lot of work that uses the training set to learn a useful prior for a second model that can be trained on the labelled data in the target set (Raina et al., 2006; Karbalayghareh et al., 2018). Given the complete lack of labels in our target data set, this is inapplicable to our problem.

3 Preliminaries

3.1 Notation and Problem Setup

Let $x \in \mathcal{X} \subseteq \mathbb{R}^d$ be a $d$-dimensional feature vector, and let $y \in \mathcal{Y}$ be the prediction target, where $\mathcal{Y} = \mathbb{R}$ for regression targets and $\mathcal{Y} = \{1, \ldots, K\}$ for $K$-class classification targets. We are presented with two sources of training data: a labelled data set $\mathcal{D}_s$, and an unlabelled data set $\mathcal{D}_t$. The labelled data set comprises a collection of feature-label pairs, i.e., $\mathcal{D}_s = \{(x_i, y_i)\}_{i=1}^{n}$, whereas the unlabelled set comprises a collection of feature instances only, i.e., $\mathcal{D}_t = \{\tilde{x}_j\}_{j=1}^{m}$.

We assume that $\mathcal{D}_s$ consists of i.i.d. samples of features and labels drawn from the distribution

$$(x_i, y_i) \sim P_s(x) \cdot P(y \mid x),$$

where both $P_s(x)$ and $P(y \mid x)$ are unknown, and can only be accessed empirically through $\mathcal{D}_s$. Throughout the paper, we will refer to $P_s(x)$ as the source feature distribution — feature instances in the unlabelled data set are assumed to be drawn from a shifted feature distribution as follows:

$$\tilde{x}_j \sim P_t(x),$$

where $P_t(x) \neq P_s(x)$, whereas the unobserved labels in the data set $\mathcal{D}_t$, i.e., the blue dots in Figure 2 corresponding to $\tilde{x}_j$, are generated from the same conditional distribution $P(y \mid x)$. Note that even though the feature distributions $P_s(x)$ and $P_t(x)$ differ, the conditional $P(y \mid x)$ is invariant across the two data sets. This situation is commonly known as covariate shift (Shimodaira, 2000). We denote the entirety of observed data $\mathcal{D} = \mathcal{D}_s \cup \mathcal{D}_t$.
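To make the setup concrete, here is a minimal sketch of a covariate-shifted data set in Python. The specific Gaussian marginals and the sinusoidal conditional are invented for illustration (they are not the distributions used in the paper); the point is that source and target features come from different marginals while labels always follow the same conditional.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_conditional(x, noise_sd=0.1):
    # Shared conditional P(y | x): identical for source and target points.
    return np.sin(2.0 * x) + noise_sd * rng.standard_normal(x.shape)

# Source marginal P_s(x) and shifted target marginal P_t(x) differ.
x_source = rng.normal(loc=0.0, scale=1.0, size=200)   # features of D_s
y_source = true_conditional(x_source)                 # observed labels of D_s
x_target = rng.normal(loc=3.0, scale=1.0, size=200)   # features of D_t (labels never observed)
```

A model trained on `(x_source, y_source)` alone sees almost no data near the bulk of `x_target`, which is exactly the regime where its uncertainty estimates matter.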

3.2 Learning from (and for) unlabelled data

Our key objective is to use the (source) labelled data set $\mathcal{D}_s$ to train a model that will be applied to the (target) unlabelled data set $\mathcal{D}_t$. However, since the feature distributions in $\mathcal{D}_s$ and $\mathcal{D}_t$ mismatch, we cannot expect a model trained on $\mathcal{D}_s$ to generalise perfectly to $\mathcal{D}_t$. Thus, we aim at training the model to learn which prediction instances can be confidently transferred from $\mathcal{D}_s$ to $\mathcal{D}_t$, and which cannot be confidently generalised across the two distributions. To this end, we train the model to score its uncertainty on predictions issued for all feature instances in $\mathcal{D}_t$.

We take a Bayesian approach to uncertainty estimation. That is, for a model with parameters $\theta$ and a test point $x^*$, the Bayesian posterior predictive distribution over the outcome $y^*$ is

$$p(y^* \mid x^*, \mathcal{D}_s) = \int p(y^* \mid x^*, \theta)\, p(\theta \mid \mathcal{D}_s)\, d\theta. \tag{1}$$
The posterior decomposition in (1) comprises two types of uncertainty (Malinin and Gales, 2018). Data uncertainty, captured by $p(y^* \mid x^*, \theta)$ and also referred to as aleatoric uncertainty, is the variance of the true conditional distribution $P(y \mid x)$, reflecting the inherent ambiguity or noise in the true labels (Gal et al., 2017). The second type of uncertainty, model uncertainty, captured by $p(\theta \mid \mathcal{D}_s)$, pertains to the model’s epistemic uncertainty created by the lack of training examples in the vicinity of the test feature $x^*$. Since the conditional $P(y \mid x)$ is invariant across the source and target distributions, it is the model uncertainty that we focus on.
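The two components can be read off from Monte Carlo samples of the posterior: by the law of total variance, the total predictive variance splits into the average per-draw noise variance (aleatoric) and the variance of the per-draw means (epistemic). A minimal sketch, with numbers made up for illustration:

```python
import numpy as np

def decompose_uncertainty(mean_samples, var_samples):
    """Split total predictive variance into its two components.

    mean_samples: per-posterior-draw predictive means, shape (T,)
    var_samples:  per-posterior-draw predictive (noise) variances, shape (T,)
    """
    aleatoric = var_samples.mean()   # expected data (noise) variance
    epistemic = mean_samples.var()   # spread of the means across posterior draws
    return aleatoric, epistemic

# Toy example: T = 4 posterior draws for one test point.
means = np.array([0.9, 1.1, 1.0, 1.0])
noise = np.array([0.04, 0.04, 0.04, 0.04])
alea, epis = decompose_uncertainty(means, noise)
```

For a test point far from the training data the posterior draws disagree, so the epistemic term grows even when the noise term stays fixed.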

3.3 Standard approximate Bayesian is not enough…

A true Bayesian model (with appropriate priors) would completely capture model uncertainty in $p(\theta \mid \mathcal{D}_s)$ by simply training the model on $\mathcal{D}_s$ in a supervised fashion, while completely ignoring the unlabelled data in $\mathcal{D}_t$ (Sugiyama and Storkey, 2007). However, exact Bayesian inference in neural networks is generally intractable (and computationally expensive), hence existing practical solutions to Bayesian modelling rely on approximate inference schemes, for example based on Monte Carlo dropout (MCDP) (Gal and Ghahramani, 2016).

While approximate inference via MCDP — with appropriate hyper-parameter tuning — provides reliable uncertainty estimates for in-distribution data (i.e., feature instances in $\mathcal{D}_s$), it has been shown in Ovadia et al. (2019) that these methods lead to miscalibrated estimates of uncertainty for out-of-distribution data. In the next section, we develop an approximate Bayesian scheme that makes use of the unlabelled data in $\mathcal{D}_t$ to provide more accurate uncertainty estimates on the predictions made for feature instances drawn from the shifted distribution $P_t(x)$.

4 Transductive Regularisation

How can we use our knowledge of the unlabelled data in $\mathcal{D}_t$ to improve the uncertainty estimates on predictions made for the target distribution $P_t(x)$? In this section, we develop an approximate Bayesian method tailored to this task. Here, we regard a neural network (NN) as a distribution $p(y \mid x, \theta)$ that assigns a probability to each possible output $y$.
4.1 Variational inference with posterior regularisation

In a Bayesian framework, we specify a prior distribution $p(\theta)$ on the NN parameters, and obtain the posterior via Bayes’ rule. In practice, the posteriors $p(\theta \mid \mathcal{D}_s)$ and $p(y^* \mid x^*, \mathcal{D}_s)$ in (1) are both intractable. To address this issue, we use variational inference, whereby a surrogate distribution $q_\phi(\theta)$ parameterised by $\phi$ is used to approximate $p(\theta \mid \mathcal{D}_s)$. The parameter $\phi$ is obtained by minimising the KL-divergence between $q_\phi(\theta)$ and $p(\theta \mid \mathcal{D}_s)$ as follows (Graves, 2011):

$$\phi^* = \arg\min_{\phi}\; \mathrm{KL}\big(q_\phi(\theta)\,\|\, p(\theta \mid \mathcal{D}_s)\big). \tag{2}$$
In practice the KL divergence is not minimised directly; rather, this is achieved by maximising the evidence lower bound (ELBO), which can be written as

$$\mathrm{ELBO}(\phi) = \mathbb{E}_{q_\phi(\theta)}\big[\log p(\mathcal{D}_s \mid \theta)\big] - \mathrm{KL}\big(q_\phi(\theta)\,\|\, p(\theta)\big), \tag{3}$$

which can be seen as a balance of two terms: the objective maximises the log-likelihood under the surrogate distribution (first term) while regularising the approximation to not stray too far from the prior (second term). Variational inference also leads to an approximate posterior predictive distribution $q_\phi(y^* \mid x^*)$, obtained by replacing $p(\theta \mid \mathcal{D}_s)$ in (1) with its variational counterpart $q_\phi(\theta)$. Note that the unlabelled data in $\mathcal{D}_t$ is ancillary to the optimisation problem in (2), since mere evidence maximisation would render $P(y \mid x)$ the only relevant conditional for finding the variational parameter $\phi$. Hence, vanilla variational Bayes is insufficient in our setup, as it cannot capitalise on our knowledge of the unlabelled data in $\mathcal{D}_t$.

To incorporate the unlabelled data in $\mathcal{D}_t$ into our inference machine, we resort to posterior regularisation (Zhu et al., 2014). That is, instead of computing the variational posterior that best matches the true posterior in KL distance, we add a regulariser $R(q_\phi; \mathcal{D})$ to the objective in (2), i.e.,

$$\phi^* = \arg\min_{\phi}\; \mathrm{KL}\big(q_\phi(\theta)\,\|\, p(\theta \mid \mathcal{D}_s)\big) + R(q_\phi; \mathcal{D}), \tag{4}$$

in order to explicitly influence the learned variational posterior so that it produces the desired uncertainty profiles, i.e., posterior variance, over the target feature distribution $P_t(x)$.

What do our sought-after uncertainty profiles look like? In order to design the regulariser $R(q_\phi; \mathcal{D})$, we first need to specify the influences it should exert on the learned variational posterior $q_\phi$. Let $\mathbb{E}[q]$ and $\mathbb{V}[q]$ denote the mean and variance of a given distribution $q$, respectively. A “good” variational posterior is one that matches the true posterior $p(\theta \mid \mathcal{D}_s)$ and induces the following uncertainty profile: for any pair of features $\tilde{x}_i, \tilde{x}_j$ drawn from the target distribution, the variational posterior satisfies

$$\mathbb{V}\big[q_\phi(y \mid \tilde{x}_i)\big] \;\leq\; \mathbb{V}\big[q_\phi(y \mid \tilde{x}_j)\big] \quad \text{whenever } \tilde{x}_i \text{ is closer (in distribution) to } \mathcal{D}_s \text{ than } \tilde{x}_j. \tag{5}$$

That is, the variance of the variational posterior, which quantifies the model’s uncertainty, should be smaller for target test points that are close (in distribution) to the labelled data in $\mathcal{D}_s$, and vice versa. The key idea behind our posterior regularisation approach is that the augmentation of labelled and unlabelled data serves to provide “pseudo-labels” of model confidence — by regarding the condition in (5) as an auxiliary classification task wherein $q_\phi$ predicts whether a feature $x$ is drawn from the source or target distribution, we can “train” $q_\phi$ to make this binary prediction via its variance. Building on this insight, the rest of this section builds a regulariser that enables $q_\phi$ to discriminate source and target features.

Figure 2: Pictorial depiction of transductive dropout inference. (a) Here, we depict an exemplary one-dimensional feature space, along with the corresponding variational posterior and feature-dependent dropout rate $p(x)$. Transductive dropout inference operates by adapting the dropout rate so that it induces larger posterior variance in regions with a dense concentration of unlabelled data but a low density of labelled data. (b) This panel shows an exemplary realisation of labelled and unlabelled data sets for the same example in panel (a). Red dots are fully observed, whereas for blue ones, we only observe the locations but not the outputs on the $y$-axis. The typical behaviour of transductive dropout is to increase the dropout rates in regions where unlabelled data are denser than labelled data, creating more variability in the Monte Carlo samples of the network outputs. Here, exemplary instances of test-time dropout applied to the network architecture for different values of the feature $x$ are depicted.


4.2 Posterior regularisation via transductive dropout

As discussed above, we seek a variational posterior $q_\phi$ that best fits the labelled data in $\mathcal{D}_s$, and discriminates source and target data. Before proceeding, we first define an augmented data set $\mathcal{D}' = \{(x_i, y_i, c_i)\}_i$, where

$$c_i = \begin{cases} 0, & x_i \in \mathcal{D}_s, \\ 1, & x_i \in \mathcal{D}_t, \end{cases}$$

and where $y_i = \varnothing$ corresponds to a missing value for the label when $x_i \in \mathcal{D}_t$. In addition, we define the monotonic function $\sigma: \mathbb{R}_+ \to [0, 1]$ as a map from positive real values to the unit interval. Given the variational distribution $q_\phi$, our prediction of whether the feature $x_i$ comes from the source or target distribution is

$$\hat{c}_i = \sigma\big(\mathbb{V}\big[q_\phi(y \mid x_i)\big]\big), \tag{6}$$

which follows directly from the condition in (5). Given (6), we define the regulariser in (4) as the cross-entropy loss between the predicted and true auxiliary variables, $\hat{c}_i$ and $c_i$, i.e.,

$$R(q_\phi; \mathcal{D}) = -\sum_{i} \Big[\, c_i \log \hat{c}_i + (1 - c_i) \log(1 - \hat{c}_i) \,\Big]. \tag{7}$$
Thus, our variational posterior is obtained by plugging the regulariser into (4) and solving for $\phi^*$, with the optional inclusion of a hyperparameter $\lambda$ to control the level of regularisation. The exact choice of $\sigma$ can also be controlled, although in our experiments it made little difference, and we settled on a fixed choice. We note that this regularisation scheme addresses the issue of over-confident predictions on the target set without taking the naive approach of simply increasing the variance everywhere — it is balanced by the location of the source data set, which lowers the variance in the locations where we are appropriately confident. Since the regulariser above solves the transductive learning problem of classifying source and target data in a way that resembles semi-supervised learning (Rohrbach et al., 2013), we call $R(q_\phi; \mathcal{D})$ a transductive regulariser. In what follows, we propose a practical way to implement transductive regularisation within the MCDP approximate inference framework.
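The regulariser can be sketched directly from its definition. In the snippet below we assume, purely for illustration, the monotone map σ(v) = 1 − e^(−v) from positive reals to the unit interval (the paper leaves the exact choice of σ open):

```python
import numpy as np

def transductive_regulariser(pred_variance, is_target):
    """Cross-entropy between the squashed posterior variance and the
    source/target 'pseudo-label' of confidence (0 = source, 1 = target)."""
    c_hat = 1.0 - np.exp(-pred_variance)  # monotone map from [0, inf) to [0, 1)
    eps = 1e-12                           # numerical guard for the logs
    return -np.mean(is_target * np.log(c_hat + eps)
                    + (1.0 - is_target) * np.log(1.0 - c_hat + eps))

# A posterior that is confident on a source point and uncertain on a target
# point incurs a small penalty; the reverse profile is penalised heavily.
well_calibrated = transductive_regulariser(np.array([0.01, 5.0]), np.array([0.0, 1.0]))
miscalibrated   = transductive_regulariser(np.array([5.0, 0.01]), np.array([0.0, 1.0]))
```

The asymmetry is the whole point: the penalty pushes variance up on target points and down on source points, rather than inflating it everywhere.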

Transductive Dropout. We extend the MCDP approximate inference scheme in (Gal and Ghahramani, 2016) by applying our posterior regularisation penalty, and allowing the dropout rates to vary per data point, dependent on the feature values. By enabling the dropout rates to be a function of $x$, we provide more degrees of freedom to flexibly craft the posterior variance $\mathbb{V}[q_\phi(y \mid x)]$ so that it accurately discriminates source and target data points.

Let $p(x)$ be the dropout rate of the underlying NN model. We parameterise $p(x)$ to be dependent on the feature value as follows. Let $g_\psi$ be a neural network with a sigmoid output layer and parameters $\psi$, i.e., $g_\psi$ maps feature values to dropout rates so that $p(x) = g_\psi(x) \in (0, 1)$. This equates to an approximate posterior distribution over the NN weights in which each weight is dropped with probability $g_\psi(x)$ and retained otherwise:

$$q_\phi(W \mid x) = \prod_{k} \Big[\, g_\psi(x)\, \delta(w_k) + \big(1 - g_\psi(x)\big)\, \delta(w_k - m_k) \,\Big], \tag{8}$$

where $\{m_k\}$ is the set of weights for the NN modelling the conditional distribution $p(y \mid x, \theta)$, and $\phi = (\psi, \{m_k\})$. This leaves an optimisation objective (of the form in (4)) over the variational parameters $\phi$. Using the equivalence between KL minimisation and squared loss minimisation under dropout regularisation, we can write the objective function in (4) as

$$\mathcal{L}(\phi) = \sum_{i:\, c_i = 0} \big(y_i - \hat{y}_\phi(x_i)\big)^2 + \lambda\, R(q_\phi; \mathcal{D}), \tag{9}$$

with the possibility of adding a weight regulariser as well. As we can see, this objective incorporates both labelled and unlabelled data: the data set $\mathcal{D}_s$ contributes to the first term, which is concerned with fitting the observed labels drawn from the source distribution, whereas the second term, which depends on the entire augmented data set $\mathcal{D}'$, makes sure that the induced variational posterior is aware of the mismatch between source and target feature distributions. We can see that this scheme, as depicted in Figure 1, acts in a similar way to (3), primarily optimising the likelihood of the data under the approximation while constrained by a regulariser on the form of the distribution, only now the regulariser induces more specific behaviour and makes use of $\mathcal{D}_t$.

The regulariser in (9) can be computed in backpropagation using sample estimates of the posterior variance as follows. Let $\phi^{(t)}$ be the current estimate of the variational parameters at a given iteration of the gradient descent procedure. To evaluate the model loss and gradients, we use the MCDP forward pass to sample $T$ outputs $\hat{y}^{(1)}(x_i), \ldots, \hat{y}^{(T)}(x_i)$ for every $x_i$ in $\mathcal{D}'$, and compute a Monte Carlo sample estimate of the transductive regularisation term as follows:

$$\hat{R} = -\sum_{i} \Big[\, c_i \log \sigma\big(\hat{\mathbb{V}}_i\big) + (1 - c_i) \log\big(1 - \sigma\big(\hat{\mathbb{V}}_i\big)\big) \,\Big], \qquad \hat{\mathbb{V}}_i = \frac{1}{T} \sum_{t=1}^{T} \Big(\hat{y}^{(t)}(x_i) - \bar{y}(x_i)\Big)^2, \tag{10}$$

where $\bar{y}(x_i)$ is the mean of the sampled outputs. Computation of the estimator in (10) only involves the forward pass, and evaluating its gradients is straightforward.
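The MCDP forward pass amounts to repeated stochastic passes with units dropped at the (feature-dependent) rate; the sample variance of the outputs is the plug-in variance estimate. A toy sketch with an invented one-layer "network" (not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_variance(forward, x, dropout_rate, n_samples=100):
    """Monte Carlo estimate of the predictive variance at x: run the
    stochastic forward pass n_samples times and take the sample variance."""
    preds = np.array([forward(x, dropout_rate) for _ in range(n_samples)])
    return preds.var()

# Toy stochastic 'network': a single hidden layer whose units are dropped
# with the given rate, mimicking an MCDP forward pass.
W = rng.standard_normal(16)

def forward(x, p):
    mask = rng.random(16) >= p            # keep each unit with prob 1 - p
    h = np.tanh(W * x) * mask / (1 - p)   # inverted-dropout rescaling
    return h.mean()

v_low = mc_variance(forward, 1.0, dropout_rate=0.1, n_samples=500)
v_high = mc_variance(forward, 1.0, dropout_rate=0.6, n_samples=500)
```

Higher dropout rates produce more variable Monte Carlo outputs, which is exactly the mechanism transductive dropout exploits to raise posterior variance on target-dense regions.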

Key insights Figure 2 provides a pictorial depiction of our transductive dropout inference procedure applied to an exemplary, one-dimensional feature space. A key insight is that transductive dropout inference learns to adapt the dropout rate so that it induces larger posterior uncertainty for regions with dense concentration of unlabelled data, but low density for labelled data.

4.3 Limitations

With the target data included in the training regime, it would seem that this method does not lend itself to online deployment, where new test points arrive over time. We note, though, that retraining is not practically necessary every time a new prediction needs to be made; given an initial collection of test points, retraining would only be useful once we encounter a significant amount of new data that is covariate shifted even further than the original targets.

5 Experiments

5.1 Toy Dataset

Figure 3: Comparison of uncertainty predictions. (a) Here we show the confidence intervals for MCDP, demonstrating that while appropriate over the labelled data, they remain over-confident at the unlabelled data. (b) This panel shows the predictions for transductive dropout: the mean prediction remains equally accurate while producing more uncertainty at the unlabelled locations.


Figure 4: Dropout rate distribution Smoothed density estimates of the dropout rate distribution over the source and target sets.


In this section, we consider a toy 1-d regression example to show how standard BNNs produce over-confident uncertainty estimates under covariate shift. The features in the source and target data sets are generated i.i.d. from two distinct distributions, and the labels in both sets are generated from the same conditional: a nonlinear function of the features plus zero-centred Gaussian noise. The source and target data sets contain 50 data points each.

Figure 3 compares the fit of MCDP (left) and transductive dropout (right). Both networks consist of two fully connected hidden layers with 32 and 64 neurons per respective layer and tanh activations. We see they produce similar mean predictions near the labelled training points. However, MCDP starts to issue over-confident predictions as the feature distribution shifts away from the training data. On the other hand, transductive dropout learns to output larger uncertainty in the areas of low density under the source distribution. In Figure 4 we plot smoothed density estimates of the learnt dropout rates for both source and target distribution points. In the source distribution the learnt rates are all quite tight around 0.18, while in the target distribution there is a much bigger spread, reflective of the points’ distances from the labelled data.

5.2 Prostate Cancer Mortality Prediction


Prostate cancer is the third most common cancer in men, with half a million new cases each year around the world (Quinn and Babb, 2002). It is far more common among the elderly, with around 75% of cases occurring in men aged over 65 years. Therefore, prostate cancer is expected to bring an increasing healthcare burden to countries with ageing populations (Hsing et al., 2000). The latest clinical guideline for prostate cancer treatment recommends watchful waiting or non-invasive treatment for early-stage patients who have a low mortality rate (Heidenreich et al., 2011). Surgery (radical prostatectomy) is recommended instead for high-risk patients whose health condition deteriorates rapidly. The patient’s survival outlook therefore plays an important role in treatment decisions. Hence, improved accuracy and uncertainty quantification in mortality prediction will help clinicians design effective treatment plans and improve patients’ life expectancy.


We consider the problem of predicting, and estimating the uncertainty of, the mortality rate for patients with prostate cancer. Our training data consist of 240,486 patients enrolled in the American SEER program (SEER, 2019), while for our target data we consider a group of 10,086 patients enrolled in the British Prostate Cancer UK program (UK, 2019). For both sets of patients we have identical covariate data, with information concerning age, PSA, and Gleason scores, as well as the clinical stage and which treatment, if any, they are receiving. Note that while we have the same features for both sets, this is an area where we expect a level of covariate shift given the different programs and the transition from American to British patients. Indeed, we do see this: without giving a full breakdown of the summary statistics, patients in Prostate Cancer UK are in general older, with higher Gleason scores, though not as far along in the clinical stages.


We compare our method against competitive methods from the probabilistic deep learning literature based on their prevalence and applicability. While we consider this work quite different from semi-supervised learning, which does not usually consider improving uncertainty estimates, we also include MixMatch as a benchmark (Berthelot et al., 2019). The methods we consider are:

  1. MLP - Standard feed-forward neural network to benchmark accuracy.

  2. Dropout - Monte Carlo dropout with rate 0.5 (Gal and Ghahramani, 2016; Srivastava et al., 2014).

  3. Concrete Dropout - Dropout with the rate treated as an additional variational parameter optimised with respect to the ELBO (Gal et al., 2017).

  4. Ensemble - Ensemble of feed-forward MLPs (Lakshminarayanan et al., 2017).

  5. MixMatch - We implement a version of the MixMatch algorithm (Berthelot et al., 2019) where we perform one round of label guessing and mixup, without sharpening. As the base predictive model we use an MC Dropout network.

  6. Last Layer Approximations (LL) - Approximate inference for only the parameters of the last layer of the network (Riquelme et al., 2018), using Dropout.

  7. Transductive Dropout - No Regularisation (TDNR) - We implement transductive dropout as described above but without the addition of our variance regulariser, to show that the gains are not just down to the ability to adapt the dropout rate to the input.

For all of the neural networks we use the same architecture of two fully connected hidden layers of 128 units each with tanh activation functions. The initial weights are randomly drawn from N(0, 0.1) and all networks are trained using Adam (Kingma and Ba, 2015). Hyperparameter optimisation remains an open problem under covariate shift; we used a validation set consisting of 10% of the labelled data, selected not entirely at random but based on propensity score matching, in order to obtain a set more reflective of the target data. With this, hyperparameters were selected for all models through grid search.
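One way to realise such a propensity-based split — a sketch under our own assumptions, not the paper's exact procedure — is to fit a small source-vs-target classifier and hold out the labelled points scored as most target-like:

```python
import numpy as np

def target_propensity(x_source, x_target, steps=500, lr=0.1):
    """Fit a tiny logistic model P(point is from target | x) by gradient
    descent; its score ranks labelled points by similarity to the target."""
    X = np.concatenate([x_source, x_target])[:, None]
    X = np.hstack([X, np.ones((len(X), 1))])  # add a bias column
    c = np.concatenate([np.zeros(len(x_source)), np.ones(len(x_target))])
    w = np.zeros(2)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - c) / len(c)      # gradient step on the BCE loss
    return 1.0 / (1.0 + np.exp(-X[: len(x_source)] @ w))

# Labelled points most 'target-like' form the validation set.
rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, 200)   # labelled (source) features, invented
xt = rng.normal(2.0, 1.0, 200)   # unlabelled (target) features, invented
scores = target_propensity(xs, xt)
val_idx = np.argsort(scores)[-len(xs) // 10:]  # top 10% by target propensity
```

Validating on the most target-like labelled points gives hyperparameter choices that better reflect performance under the shifted distribution.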

Method | Test Perf. | Error Pred. | CI Width | Misclassified SD | INPT
MLP | 0.720 ± 0.012 | N/A | N/A | N/A | N/A
MC Dropout | 0.729 ± 0.016 | 0.730 ± 0.016 | 0.093 | 0.025 | 8
Concrete Dropout | 0.791 ± 0.012 | 0.794 ± 0.012 | 0.151 | 0.066 | 76
Ensemble | 0.761 ± 0.014 | 0.782 ± 0.014 | 0.037 | 0.018 | 8
MixMatch | 0.728 ± 0.016 | 0.726 ± 0.016 | 0.082 | 0.021 | 0
LL | 0.723 ± 0.014 | 0.696 ± 0.014 | 0.073 | 0.028 | 22
TDNR | 0.836 ± 0.010 | 0.808 ± 0.011 | 0.197 | 0.068 | 18
Transductive Dropout | 0.861 ± 0.009 | 0.857 ± 0.009 | 0.130 | 0.110 | 189

Table 1: Area under the ROC curve for two tasks: first, correctly predicting the mortality of patients in the test set; and second, predicting whether the model will make an error for a given patient. We also report the average confidence interval (CI) width over test predictions, the average standard deviation (SD) at misclassified points, and the increased number of patients receiving treatment (INPT) using the associated uncertainty in the model at a fixed risk level.

Evaluation metrics

We consider five evaluation metrics for a comprehensive understanding of model performance. First, we consider the prediction accuracy as measured by AUROC, shown as “Test Perf.” in Table 1 (Bewick et al., 2004). Second, we consider the standard deviation of the posterior predictive distribution as an (unnormalised) predictor of whether or not the model will make an error on a given input. The corresponding AUROC score (“Error Pred.”) measures the agreement between model uncertainty and the chance of predicting wrongly, and hence reflects whether the model is well calibrated. Third, we present the average width of the 95% predictive interval as a measure of general model confidence on unlabelled data (“CI Width”). Next, we show the standard deviation of the predictive distribution on misclassified data (“Misclassified SD”). Finally, we show the increased number of patients receiving treatment (INPT) using the associated uncertainty in the model at a fixed risk level. All quantities related to the posterior distribution are estimated by MC sampling.
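The error-prediction metric can be computed directly from the predictive SDs and the misclassification indicators. A rank-based sketch (the scores and labels below are invented for illustration):

```python
import numpy as np

def auroc(scores, labels):
    """Rank-based AUROC: the probability that a randomly chosen positive
    (here, a misclassified point) receives a higher score than a randomly
    chosen negative, counting ties as one half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Predictive SD as a score for 'will the model err on this point?'
sd = np.array([0.30, 0.08, 0.05, 0.10, 0.28, 0.04])
err = np.array([1, 1, 0, 0, 1, 0])  # 1 = misclassified
score = auroc(sd, err)
```

A score near 1 means high-variance predictions coincide with the model's mistakes, i.e., the uncertainty is well calibrated in the ranking sense used here.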

Main results

First, we note that transductive dropout improves AUROC on mortality prediction over the other benchmarks, demonstrating that our improved uncertainty calibration does not come at the cost of mean accuracy. Our focus, though, is on the calibration of the uncertainty estimates. While it is ultimately impossible to test directly how close uncertainty predictions are to the true uncertainty, we can test indirectly by using the posterior predictive variance to classify whether or not the model will make a mistake. The intuition is that an appropriately uncertain model will have high variance when a mistake is likely and low variance otherwise, so strong performance when using the variance as a predictor of model error indicates appropriate uncertainty estimates. Here transductive dropout significantly outperforms the other benchmarks, suggesting that its high-variance predictions are indeed those more likely to be wrong. We additionally focus on the predictions each method gets wrong and look at the average standard deviation at those points. On average, transductive dropout is much less confident about its incorrect predictions than the other benchmarks, which is the preferred behaviour. Importantly, this is not at the expense of confidence over all predictions: both concrete dropout and TDNR have, on average, larger confidence intervals than transductive dropout.

Impact on patients

Given our motivations, we also ground the performance of our method in how it could be used in real-world decision making about the treatments offered to patients. There are many reasons a treatment option may not be offered to a patient, including cost and potential side effects; as such, there will usually be an associated risk level above which a patient must fall in order to receive treatment. It is thus very damaging to patients for a model to confidently predict them to be low risk when they are not. In Table 1 we set a threshold and show how many more patients would receive treatment if we consider the coverage of the confidence interval on the patient's risk, under the assumption that these cases can be handed off to a human expert who will classify them correctly. We see that transductive dropout results in a large increase in patients previously misclassified as low risk now receiving treatment; we develop the impact on treatment options further in Figure 5. There we set a treatment risk threshold at and show how the size of the predicted confidence interval over a patient's risk affects the increased number of patients correctly receiving treatment. Naturally, for all methods, as the confidence interval grows the number of correctly treated patients increases, but transductive dropout consistently outperforms the other benchmarks as it is less often confidently incorrect in its risk predictions.
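The INPT-style count described above can be sketched as below. This is a hedged reconstruction of the decision rule, not the paper's code: the function name is hypothetical, the risk threshold `tau` is a free parameter (its value is elided in the text), and the expert hand-off is modelled as perfect, per the stated assumption.

```python
import numpy as np

def increased_patients_treated(risk_mean, ci_halfwidth, truly_high_risk, tau):
    """Count truly high-risk patients who gain treatment once uncertainty
    is taken into account (an INPT-style quantity).

    A point prediction below the risk threshold tau denies treatment. If
    the confidence interval around that prediction still covers tau, the
    case is deferred to a human expert, who (by assumption) classifies it
    correctly.
    """
    risk_mean = np.asarray(risk_mean, dtype=float)
    upper = risk_mean + np.asarray(ci_halfwidth, dtype=float)
    denied = risk_mean < tau                  # point estimate says low risk
    deferred = denied & (upper >= tau)        # but the CI crosses tau
    # extra correctly treated = deferred cases that are in fact high risk
    return int(np.sum(deferred & np.asarray(truly_high_risk, dtype=bool)))
```

Under this rule, a well-calibrated model earns a high count by being uncertain precisely on the high-risk patients it would otherwise misclassify as low risk, while a confidently wrong model never triggers the deferral.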

Figure 5: Improved patient outcome. We show how many more patients, for a risk threshold of , correctly receive treatment as the size of the confidence interval on the predicted risk changes.


How does the covariate shift affect uncertainty? It is of interest to consider how the covariate shift has actually impacted our model's performance. To that end, we examine the feature distribution of the points misclassified by the model and compare it to both the source and target sets. One of the most important factors affecting both the treatment and survival of prostate cancer patients is age at diagnosis (Bechis et al., 2011). Studies have shown that older patients tend to have a worse survival outlook and are more likely to receive surgery (Hall et al., 2005). In our source data, the average age at diagnosis is 66 years, moving up to 70 in the target set. Comparing this to the distribution over ages for incorrectly predicted cases, where the average is 74, we see that it is the patients considerably older than those usually seen in the training data about whom the model is less sure. We see a similar story in PSA scores (measurements of prostate-specific antigen in the patient's blood). The PSA score is known to be a highly sensitive indicator of the risk level and severity of prostate cancer, and it is widely adopted in cancer screening and monitoring (Grimm et al., 2012). Again we see an increase in the average from 14.8 to 18.4 between the source and target sets, but for the incorrectly classified cases the mean is much higher, at 28.6. The percentage of patients receiving surgery in the incorrectly classified group is twice that of those correctly classified, suggesting that our models are least confident in the areas we would consider most at risk given domain knowledge: the more elderly with high levels of PSA. The model struggles with (is much less confident about) these patients because their feature values do not have high density in the training data, demonstrating that blind application of a model to a covariate-shifted data set may easily yield surprisingly incorrect predictions.
Fortunately, transductive dropout tends to return high uncertainty over its predictions on this covariate-shifted data, so that the practitioner can suitably inform any decisions to be taken as a result of these predictions.

5.3 Additional Results

We focused here on an important real-world example problem, but with the aim of demonstrating generalisation we provide further benchmark results on some publicly available data sets from the UCI machine learning repository (Dua and Graff, 2017) in Appendix A.

6 Conclusions

In this paper we introduced transductive regularisation, a method for using unlabelled data to calibrate the variance of Bayesian neural networks by introducing the auxiliary task of using the posterior predictive variance to discriminate between source and target distributions. We showed that this amounts to performing posterior regularisation in approximate Bayesian inference and results in more useful uncertainty predictions. We examined an instantiation of this framework within MCDP, transductive dropout, and demonstrated its applicability on the real task of predicting prostate cancer mortality, where it outperforms the tested benchmarks and exhibits a higher level of appropriate uncertainty calibration.

Future Work

The question of perfect calibration is clearly not solved, and several immediate directions for further work present themselves. The first is an extension to frequentist probabilistic ensembles; this is not entirely trivial, since model uncertainty is captured by averaging across elements of the ensemble, making estimates during training more complicated to obtain. The second is an application to the active learning setting, using targeted labelled-data acquisition to appropriately reduce model uncertainty - the sub-network used to predict the rate in transductive dropout could form an important part of an acquisition function.


Acknowledgements

We would like to thank the anonymous reviewers for their helpful comments and suggestions. Research in this paper was supported by the National Science Foundation (NSF grants 1524417 and 1722516), and the US Office of Naval Research (ONR).


References

  • S. K. Bechis, P. R. Carroll, and M. R. Cooperberg (2011) Impact of age at diagnosis on prostate cancer treatment and survival. Journal of Clinical Oncology 29 (2), pp. 235.
  • D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. A. Raffel (2019) MixMatch: a holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems, pp. 5050–5060.
  • V. Bewick, L. Cheek, and J. Ball (2004) Statistics review 13: receiver operating characteristic curves. Critical Care 8 (6), pp. 508.
  • C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra (2015) Weight uncertainty in neural networks. In International Conference on Machine Learning.
  • D. Dua and C. Graff (2017) UCI machine learning repository. University of California, Irvine, School of Information and Computer Sciences.
  • Y. Gal and Z. Ghahramani (2016) Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059.
  • Y. Gal, J. Hron, and A. Kendall (2017) Concrete dropout. In Advances in Neural Information Processing Systems, pp. 3581–3590.
  • Y. Grandvalet and Y. Bengio (2005) Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems, pp. 529–536.
  • A. Graves (2011) Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pp. 2348–2356.
  • P. Grimm, I. Billiet, D. Bostwick, A. P. Dicker, S. Frank, J. Immerzeel, M. Keyes, P. Kupelian, W. R. Lee, S. Machtens, et al. (2012) Comparative analysis of prostate-specific antigen free survival outcomes for patients with low, intermediate and high risk prostate cancer treatment by radical therapy. Results from the Prostate Cancer Results Study Group. BJU International 109, pp. 22–29.
  • W. Hall, A. Jani, J. Ryu, S. Narayan, and S. Vijayakumar (2005) The impact of age and comorbidity on survival outcomes and treatment patterns in prostate cancer. Prostate Cancer and Prostatic Diseases 8 (1), pp. 22–30.
  • A. Heidenreich, J. Bellmunt, M. Bolla, S. Joniau, M. Mason, V. Matveev, N. Mottet, H. Schmid, T. van der Kwast, T. Wiegel, et al. (2011) EAU guidelines on prostate cancer. Part 1: screening, diagnosis, and treatment of clinically localised disease. European Urology 59 (1), pp. 61–71.
  • A. W. Hsing, L. Tsao, and S. S. Devesa (2000) International trends and patterns of prostate cancer incidence and mortality. International Journal of Cancer 85 (1), pp. 60–67.
  • N. Jean, S. M. Xie, and S. Ermon (2018) Semi-supervised deep kernel learning: regression with unlabeled data by minimizing predictive variance. In Advances in Neural Information Processing Systems, pp. 5322–5333.
  • A. Karbalayghareh, X. Qian, and E. R. Dougherty (2018) Optimal Bayesian transfer learning. IEEE Transactions on Signal Processing 66 (14), pp. 3724–3739.
  • D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. ICLR.
  • D. P. Kingma, T. Salimans, and M. Welling (2015) Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pp. 2575–2583.
  • W. M. Kouw and M. Loog (2019) A review of domain adaptation without target labels. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105.
  • B. Lakshminarayanan, A. Pritzel, and C. Blundell (2017) Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, pp. 6402–6413.
  • D. Lee (2013) Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, Vol. 3, pp. 2.
  • C. Louizos and M. Welling (2017) Multiplicative normalizing flows for variational Bayesian neural networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 2218–2227.
  • A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2017) Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
  • A. Malinin and M. Gales (2018) Predictive uncertainty estimation via prior networks. In Advances in Neural Information Processing Systems, pp. 7047–7058.
  • R. M. Neal (2012) Bayesian Learning for Neural Networks. Vol. 118, Springer Science & Business Media.
  • Y. Ovadia, E. Fertig, J. Ren, Z. Nado, D. Sculley, S. Nowozin, J. V. Dillon, B. Lakshminarayanan, and J. Snoek (2019) Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. arXiv preprint arXiv:1906.02530.
  • M. Quinn and P. Babb (2002) Patterns and trends in prostate cancer incidence, survival, prevalence and mortality. Part I: international comparisons. BJU International 90 (2), pp. 162–173.
  • R. Raina, A. Y. Ng, and D. Koller (2006) Constructing informative priors using transfer learning. In Proceedings of the 23rd International Conference on Machine Learning, pp. 713–720.
  • C. Riquelme, G. Tucker, and J. Snoek (2018) Deep Bayesian bandits showdown: an empirical comparison of Bayesian deep networks for Thompson sampling. arXiv preprint arXiv:1802.09127.
  • M. Rohrbach, S. Ebert, and B. Schiele (2013) Transfer learning in a transductive setting. In Advances in Neural Information Processing Systems, pp. 46–54.
  • M. Sajjadi, M. Javanmardi, and T. Tasdizen (2016) Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In Advances in Neural Information Processing Systems, pp. 1163–1171.
  • SEER (2019) Surveillance, Epidemiology, and End Results (SEER) Program.
  • H. Shimodaira (2000) Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference 90 (2), pp. 227–244.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15 (1), pp. 1929–1958.
  • M. Sugiyama and A. J. Storkey (2007) Mixture regression for covariate shift. In Advances in Neural Information Processing Systems, pp. 1337–1344.
  • L. Torrey and J. Shavlik (2010) Transfer learning. In Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, pp. 242–264.
  • P. C. UK (2019) Prostate Cancer UK.
  • J. Zhu, N. Chen, and E. P. Xing (2014) Bayesian inference with posterior regularization and applications to infinite latent SVMs. The Journal of Machine Learning Research 15 (1), pp. 1799–1847.
  • X. Zhu and A. B. Goldberg (2009) Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning 3 (1), pp. 1–130.

Appendix A Additional Results

We provide some additional results in Table 2 on publicly available data sets taken from the UCI machine learning repository. Specifically, we take three data sets, Breast Cancer, Iris, and Wine, and adapt them slightly to fit the covariate-shifted setting more naturally. First we turn each into a binary classification problem by taking the class with the most members as positive and all others as negative. We then split the data into training and testing sets by projecting onto the first principal component and sampling a 20% testing set weighted by this value.
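The PCA-weighted split described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the function name is hypothetical, and the exact weighting scheme (here, sampling probabilities proportional to the shifted projection) is our reading of "weighted by this value", which may differ in detail from the authors' code.

```python
import numpy as np

def covariate_shift_split(X, y, test_frac=0.2, seed=0):
    """Split a data set so the test distribution is shifted along the
    first principal component, mimicking covariate shift."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    # first principal component via SVD of the centred data
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ vt[0]
    # sampling weights increase with the projection, so the test set
    # over-represents one end of the principal axis
    w = proj - proj.min() + 1e-8
    w = w / w.sum()
    n_test = int(round(test_frac * len(X)))
    test_idx = rng.choice(len(X), size=n_test, replace=False, p=w)
    train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
    return (X[train_idx], y[train_idx]), (X[test_idx], y[test_idx])
```

Because the test points are drawn preferentially from one end of the principal axis, the resulting train and test feature distributions differ while the labelling function is untouched, which is exactly the covariate-shift setting.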

For all of the neural networks we consider the same architecture of two fully connected hidden layers of 32 and 64 hidden units, each with tanh activations. The initial weights are randomly drawn from N(0, 0.1) and all networks are trained using Adam. We consider the prediction accuracy as measured by AUROC, shown as "TEST PERF.", as well as the standard deviation of the posterior predictive distribution as an (unnormalised) predictor of whether or not the model will make an error on a given input. The corresponding AUROC score ("ERROR PRED.") measures the agreement between model uncertainty and the chance of predicting wrongly, and hence reflects whether the model is well calibrated.
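An MC-dropout forward pass through this architecture can be sketched in plain numpy as below. This is an illustrative sketch, not the experimental code: the function names are hypothetical, the dropout rate is a free parameter, and we read N(0, 0.1) as mean 0 with standard deviation 0.1 (the paper does not disambiguate variance vs. SD).

```python
import numpy as np

def init_params(d_in, rng):
    """Weights drawn from N(0, 0.1) for a 32/64-unit two-hidden-layer MLP
    with a single sigmoid output, matching the setup described above."""
    sizes = [(d_in, 32), (32, 64), (64, 1)]
    params = []
    for m, n in sizes:
        params += [rng.normal(0.0, 0.1, size=(m, n)), np.zeros(n)]
    return tuple(params)

def mc_dropout_predict(x, params, p_drop=0.5, n_samples=100, rng=None):
    """Predictive mean and SD from MC dropout: dropout masks stay active
    at prediction time, so repeated stochastic forward passes act as
    approximate posterior predictive samples."""
    rng = np.random.default_rng() if rng is None else rng
    W1, b1, W2, b2, W3, b3 = params
    outs = []
    for _ in range(n_samples):
        h1 = np.tanh(x @ W1 + b1)
        h1 = h1 * (rng.random(h1.shape) > p_drop) / (1 - p_drop)  # inverted dropout
        h2 = np.tanh(h1 @ W2 + b2)
        h2 = h2 * (rng.random(h2.shape) > p_drop) / (1 - p_drop)
        logits = h2 @ W3 + b3
        outs.append(1.0 / (1.0 + np.exp(-logits)))                # sigmoid output
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)
```

The per-point SD returned here is exactly the quantity scored by the "ERROR PRED." metric.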

We see that transductive dropout always performs strongly on test performance, and though not always the best it is certainly competitive in all cases, demonstrating that there does not appear to be a toll on mean predictive power. Further, transductive dropout does remain the best across the data sets on the task of error prediction, demonstrating better uncertainty calibration, the focus of this work.

Method | Breast Cancer Test Perf. | Breast Cancer Error Pred. | Iris Test Perf. | Iris Error Pred. | Wine Test Perf. | Wine Error Pred.
MC Dropout | 0.979 ± 0.012 | 0.662 ± 0.033 | 0.937 ± 0.044 | 0.063 ± 0.046 | 0.972 ± 0.026 | 0.775 ± 0.155
Concrete Dropout | 0.791 ± 0.006 | 0.794 ± 0.006 | 0.952 ± 0.038 | 0.847 ± 0.055 | 1.000 ± 0.000 | 0.915 ± 0.050
Ensemble | 0.978 ± 0.011 | 0.675 ± 0.007 | 0.960 ± 0.041 | 0.571 ± 0.115 | 0.993 ± 0.007 | 0.939 ± 0.037
MixMatch | 0.986 ± 0.010 | 0.529 ± 0.046 | 0.242 ± 0.069 | 0.758 ± 0.069 | 0.889 ± 0.050 | 0.611 ± 0.105
LL | 0.950 ± 0.033 | 0.329 ± 0.032 | 0.929 ± 0.064 | 0.071 ± 0.064 | 0.986 ± 0.013 | 0.575 ± 0.230
TDNR | 0.979 ± 0.013 | 0.945 ± 0.026 | 0.940 ± 0.045 | 0.657 ± 0.105 | 1.000 ± 0.000 | 0.890 ± 0.055
Transductive Dropout | 0.968 ± 0.017 | 0.975 ± 0.015 | 0.956 ± 0.045 | 0.877 ± 0.082 | 1.000 ± 0.000 | 0.951 ± 0.034

Table 2: For the three data sets we present the area under the ROC curve for two tasks: first, correctly predicting the classification in the test set; second, predicting whether the model will make an error for a given test point.