
Evaluating and Boosting Uncertainty Quantification in Classification

by   Xiaoyang Huang, et al.

The emergence of artificial intelligence techniques in biomedical applications urges researchers to pay more attention to uncertainty quantification (UQ) in machine-assisted medical decision making. For classification tasks, prior studies on UQ are difficult to compare with each other, due to the lack of a unified quantitative evaluation metric. Considering that well-performing UQ models ought to know when classification models act incorrectly, we design a new evaluation metric, the area under the Confidence-Classification Characteristic curve (AUCCC), to quantitatively evaluate the performance of UQ models. AUCCC is threshold-free, robust to perturbation, and insensitive to classification performance. We evaluate several UQ methods (e.g., max softmax output) with AUCCC to validate its effectiveness. Furthermore, a simple scheme, named Uncertainty Distillation (UDist), is developed to boost UQ performance, where a confidence model distills the confidence estimated by deep ensembles. The proposed method is easy to implement; it consistently outperforms strong baselines on natural and medical image datasets in our experiments.





1 Introduction

“Rise of the machines” in biomedical applications requires more research on AI interpretability, security, privacy, and fairness. Uncertainty quantification (UQ), an important capability for explainable artificial intelligence (XAI), is urgently needed for safety-critical tasks, e.g., medical decision making [begoli2019need]. With effective UQ models, clinicians can intervene in automated medical decision procedures when decisions are made with low confidence.

We focus on uncertainty quantification for classification. As suggested by a prior study [lakshminarayanan2017simple], two aspects should be examined in UQ models: 1) calibration and 2) generalization. Calibration measures how confident a UQ model is on accurate / inaccurate classification results for in-distribution inputs. However, UQ performance should be decoupled from classification performance: a classifier (and its UQ model) “may be very accurate yet miscalibrated, and vice versa” [lakshminarayanan2017simple]. Generalization, in turn, concerns whether the model is uncertain on out-of-distribution inputs. Hendrycks and Gimpel (2017) [hendrycks2016baseline] use the maximum softmax output to effectively identify out-of-distribution samples.

However, it is difficult to compare prior methods of uncertainty quantification (in classification) with each other, due to the lack of a unified quantitative evaluation. Evaluations are either qualitative [gal2016dropout] or highly dependent on classification performance [lakshminarayanan2017simple]. Research on out-of-distribution detection [hendrycks2016baseline, liang2017enhancing] uses Receiver Operating Characteristic analysis to quantitatively evaluate performance, yet it is not applicable to the in-distribution setting. Motivated by unifying the evaluation of in-distribution (calibration) and out-of-distribution (generalization) uncertainty quantification, we propose Confidence-Classification Characteristic (CCC) analysis, a UQ evaluation method orthogonal to classification. The area under the CCC curve (AUCCC) is a quantitative evaluation metric that is threshold-free, robust to perturbation, and insensitive to classification performance. Our experiments on several datasets validate the effectiveness of CCC.

Another major challenge in UQ is that the “ground truth” of uncertainty estimates is generally not available [lakshminarayanan2017simple]. Bayesian approaches to UQ, e.g., Monte Carlo dropout [gal2016dropout], have thereby become prevalent. Besides, deep ensembles [lakshminarayanan2017simple], a non-Bayesian (yet probabilistic) alternative, provide a simple and scalable scheme to estimate uncertainty. These studies estimate the confidence (confidence = 1 − uncertainty) from the softmax output. Yet a key insight of this study is that classification and its confidence should be modeled separately. Inspired by knowledge distillation [hinton2015distilling], we develop a cascade model, named Uncertainty Distillation (UDist), to distill the uncertainty estimated by Deep Ensembles [lakshminarayanan2017simple]. Experiments empirically validate the effectiveness of this simple scheme over Deep Ensembles on UQ.

2 Evaluating Uncertainty Quantification

2.1 A Discriminative View of Uncertainty Quantification

Intuitively, classification and uncertainty quantification are calibrated if the classification accuracy is higher at high confidence and vice versa. Deep Ensembles [lakshminarayanan2017simple] use a curve of accuracy vs. confidence to evaluate calibration, yet this evaluation is highly dependent on the classification model. We believe the evaluation of UQ should be orthogonal to classification performance. Even if the classifier performs poorly, a UQ model should be regarded as perfect if it is “confident” on all accurate classifications and “uncertain” on all inaccurate ones. In this regard, we propose a discriminative view of UQ, where the UQ model discriminates between “accurate” and “inaccurate” classifications.

Figure 1:

(a) Illustration of the discriminative view of uncertainty quantification. (b) Analogy between (b1) Receiver Operating Characteristic (ROC) and (b2) Confidence-Classification Characteristic (CCC) analysis. Similar to ROC, a unique confidence confusion matrix is constructed given a threshold t. By varying t, we obtain various confusion matrices. We plot the CCC curve with 1 − CRejR on the x-axis and CAccR on the y-axis.

With the help of this discriminative view, we are able to quantitatively analyze the performance of a UQ model. A UQ model is expected to separate the “accurate” and “inaccurate” classifications as far as possible, by assigning different confidence scores to different instances. As demonstrated in Figure 1 (a), given the classification results and the confidence scores (assume confidence ∈ [0, 1]), several metrics can measure this separability, e.g., cross entropy and the Brier score [lakshminarayanan2017simple]. We take cross entropy (CE) for illustration, defining “accurate” as positive and “inaccurate” as negative. CE for the upper (U), middle (M) and bottom (B) cases indicates that (M) is the worst and (B) is the best. In fact, (M) is generated by shifting all confidences in (U) to the right by the same margin, so it should not be regarded as a different UQ model. Besides, (B) is a UQ model with a small perturbation to the leftmost and rightmost confidences in (U). We believe a fair evaluation of uncertainty quantification should not be sensitive to such small perturbations. Cross entropy and the Brier score are defective measures in this sense.
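This sensitivity can be checked numerically. The sketch below is a minimal illustration with made-up confidences (the `auroc` helper and all values are ours, not from the paper): uniformly shifting every confidence changes the cross entropy, while a ranking-based measure such as AUROC-style separability is unchanged.

```python
import numpy as np

def auroc(labels, scores):
    """Rank-based AUROC: probability that a positive outranks a negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

def cross_entropy(labels, scores):
    """Binary cross entropy of confidences against accurate/inaccurate flags."""
    s = np.clip(scores, 1e-7, 1 - 1e-7)
    return -np.mean(labels * np.log(s) + (1 - labels) * np.log(1 - s))

labels = np.array([1, 1, 1, 0, 0, 0])              # 1 = accurate classification
conf = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2])    # case (U)
shifted = conf + 0.05                               # case (M): same-margin shift

assert auroc(labels, conf) == auroc(labels, shifted)  # ranking metric unchanged
assert cross_entropy(labels, conf) != cross_entropy(labels, shifted)
```

The shift leaves the ordering of scores intact, which is exactly the property a threshold-free, rank-based metric rewards.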

For this reason, an eligible metric should be threshold-free, robust to perturbation, and insensitive to classification performance, which motivates us to design a new evaluation metric for uncertainty quantification.

2.2 Confidence-Classification Characteristic Analysis

Following the discriminative view of UQ, we introduce Confidence-Classification Characteristic (CCC) analysis, which is motivated by Receiver Operating Characteristic (ROC) analysis [fawcett2006introduction]. Formally, each instance is mapped to a class label via a classification model, and meanwhile mapped to a confidence score via a UQ model, indicating whether the classification should be accepted. Given an instance, there are four possible outcomes: 1) correct accept (CAcc), if the result is accepted on an accurate classification; 2) correct reject (CRej), if the result is rejected on an inaccurate classification; 3) incorrect accept (IAcc), if the result is accepted on an inaccurate classification; 4) incorrect reject (IRej), if the result is rejected on an accurate classification. Given a set of instances, a two-by-two confidence confusion matrix can be obtained, as in Figure 1 (b). Several metrics can be calculated based on this matrix, among which we define the two most important, the correct accept rate (CAccR) and the correct reject rate (CRejR):

CAccR = CAcc / (CAcc + IRej),    CRejR = CRej / (CRej + IAcc).
A two-dimensional CCC curve is plotted with CAccR on the y-axis and 1 − CRejR on the x-axis, which captures the trade-off between accepting more results (including more incorrect ones) and rejecting more results (excluding more incorrect ones). Similar to ROC, the CCC curve lies higher if the model performs better at confidence estimation. To further reduce the evaluation to a single scalar value, we define the area under the CCC curve as AUCCC. A random confidence model leads to an AUCCC of 0.5, indicating that the confidences of accurate and inaccurate results are entirely mixed up. On the other hand, a perfect confidence model is expected to have an AUCCC of 1.0, meaning that every accurate result has a confidence score higher than every inaccurate result. In more general circumstances, the CCC curve is convex with an AUCCC score between 0.5 and 1.0. CCC analysis elegantly unifies in-distribution and out-of-distribution uncertainty quantification, by labeling all classification results on out-of-distribution instances as “inaccurate classification”. It is worth noting that, even for binary classification, CCC and ROC measure different aspects. ROC evaluates how accurate the predictions are, given various classification thresholds; CCC focuses on how accurate the accepted predictions are, with regard to various confidence thresholds, given a fixed classification threshold (e.g., 0.5).
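A minimal sketch of the curve construction, under our reading of the definitions (sweep an accept threshold over the sorted confidences; function names are ours and ties are assumed absent for simplicity):

```python
import numpy as np

def ccc_curve(accurate, conf):
    """Sweep an accept threshold over the confidences (assuming no ties):
    each point gives (1 - CRejR, CAccR) for one threshold."""
    order = np.argsort(-np.asarray(conf))
    acc = np.asarray(accurate, dtype=float)[order]
    caccr = np.cumsum(acc) / acc.sum()            # accepted accurate / all accurate
    iaccr = np.cumsum(1 - acc) / (1 - acc).sum()  # accepted inaccurate / all inaccurate
    x = np.concatenate([[0.0], iaccr])            # 1 - CRejR
    y = np.concatenate([[0.0], caccr])            # CAccR
    return x, y

def auccc(accurate, conf):
    """Area under the CCC curve via the trapezoidal rule."""
    x, y = ccc_curve(accurate, conf)
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2))
```

For instance, with correctness flags [1, 1, 0, 1, 0] and confidences [0.9, 0.8, 0.7, 0.6, 0.5], the score equals the fraction of accurate/inaccurate pairs ranked correctly (5/6 here), mirroring the rank interpretation of ROC-AUC.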

Essentially, CCC is a special instance of ROC analysis (see Figure 1 (b) for illustration), where confidence serves as the score function, and “accurate” / “inaccurate” classification serves as the positive / negative class. Hence CCC inherits several advantages of ROC analysis. First, CCC is threshold-free, as all possible thresholds are taken to generate the CCC curve. Second, CCC is robust to perturbation; subtle perturbation of the confidence scores does not affect the CCC curve, as long as the order of the scores is preserved. Third, considering that ROC is insensitive to class imbalance [fawcett2006introduction], CCC is insensitive to classification performance, which decouples the evaluation of UQ from the classification model.

3 Boosting UQ Performance via Uncertainty Distillation

Figure 2: The cascade for Uncertainty Distillation. In the base training stage, classification models are trained independently. Their outputs are then averaged and temperature scaled, before being concatenated with the input image and fed into the confidence model in the uncertainty distillation stage. The softened ensemble output of the actual class (p̃_y*) serves as the “ground truth” in the confidence loss.

Prior research couples the modeling of classification and confidence in a single model. Even though the confidence extracted from the classification output is theoretically calibrated (it is proven [gneiting2007strictly] that outputs optimized with a proper scoring rule, e.g., the cross entropy loss, are calibrated with “confidence”) [gneiting2007strictly, lakshminarayanan2017simple], there is no trivial way to extract the output of the actual class at inference time, due to the lack of ground truth. The max softmax output has been proposed [hendrycks2016baseline] as an alternative; it is, however, over-confident on inaccurate classifications. Inspired by knowledge distillation [hinton2015distilling], where a student model is better optimized with the teachers’ outputs as ground truth, we propose to distill the uncertainty estimated by a deep ensemble [lakshminarayanan2017simple]. This simple scheme, named Uncertainty Distillation (UDist), is effective in boosting UQ performance.

As depicted in Figure 2, in the base training stage, classification models are trained independently on the dataset, from which softmax probability distributions are obtained for each instance. Due to over-fitting, the probabilities on the training set are generally higher than those on the test set, which we call the over-confidence issue. To alleviate the over-confidence issue of a single model, an ensemble probability is obtained by averaging, for better calibration [lakshminarayanan2017simple]. We then apply temperature scaling to further soften the over-confident output:

p̃_i = p̄_i^(1/T) / Σ_j p̄_j^(1/T),  where p̄ is the averaged ensemble probability and T > 1 is the temperature.
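The softening step can be sketched as follows (a minimal illustration; the formulation via powers of the averaged probabilities and the value T = 2 are our assumptions, not values from the paper):

```python
import numpy as np

def soften(p_bar, T=2.0):
    """Temperature-scale an averaged probability vector: p^(1/T), renormalized.
    T > 1 flattens the distribution, reducing over-confidence."""
    scaled = np.clip(p_bar, 1e-12, None) ** (1.0 / T)
    return scaled / scaled.sum(axis=-1, keepdims=True)

# Average two (hypothetical) ensemble members, then soften.
p_bar = np.mean([[0.98, 0.01, 0.01],
                 [0.90, 0.05, 0.05]], axis=0)
p_soft = soften(p_bar, T=2.0)
# The softened maximum is lower than the raw ensemble maximum.
assert p_soft.max() < p_bar.max()
```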
We denote p̃_y* as the output of the actual class y* from the softened ensemble output p̃, which represents the (softened) confidence estimated by the deep ensemble. p̃_y* is generally unavailable at inference time, due to the lack of ground truth. To this end, we optimize the confidence score towards this confidence estimate with a cross entropy loss, also named the confidence loss in our study,

L_conf = − p̃_y* · log c(x) − (1 − p̃_y*) · log(1 − c(x)),  where c(x) is the confidence score predicted by the confidence model.
In the uncertainty distillation stage, a cascade model is designed as the confidence model, whose input is the concatenation of the input image and the softened probability from the deep ensemble. The input image alone is insufficient, since the confidence depends jointly on the input and the output. Theoretically, the UDist cascade is able to identify inaccurate classifications, trained with the confidence loss and the less over-confident ensemble estimates as ground truth.
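The confidence loss described above can be sketched as a stand-alone function (an illustrative implementation; the soft target is the softened ensemble probability of the actual class, and all numbers are made up):

```python
import numpy as np

def confidence_loss(c_pred, p_target):
    """Cross entropy between the predicted confidence and the soft target
    distilled from the (softened) ensemble output of the actual class."""
    c = np.clip(c_pred, 1e-7, 1 - 1e-7)
    return float(np.mean(-p_target * np.log(c) - (1 - p_target) * np.log(1 - c)))

# A prediction matching the distilled target incurs a lower loss than a
# miscalibrated one.
target = np.array([0.85])
assert confidence_loss(np.array([0.85]), target) < confidence_loss(np.array([0.3]), target)
```

Since the loss is a proper scoring rule in c, it is minimized when the predicted confidence matches the distilled target.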

4 Experiments

We validate the effectiveness of the proposed methods on a widely-used natural image dataset, CIFAR-10, and a multi-label medical image dataset, ChestX-ray14 [wang2017chestx]. AUCCC proves valid for evaluating several baselines, and the proposed UDist cascade consistently outperforms the baselines. Finally, we evaluate the identification of in-distribution (id) and out-of-distribution (ood) samples.

4.1 Cifar-10

4.1.1 Experiment Settings

CIFAR-10 consists of colored natural images in classes, among which are for training and for testing. A standard data normalization and augmentation scheme, widely used for this dataset, is adopted. We set up several baselines: 1) VGG [simonyan2014very]; 2) DenseNet [huang2017densely]; 3) a deep ensemble of four DenseNets; 4) deep ensembles with temperature scaling, proposed by Liang and Li [liang2017enhancing]. For UDist, two DenseNets are adopted as the classification model and the confidence model, respectively, trained with and tested with . Each DenseNet shares the same structure: dense layers are repeated times before down-sampling, with a growth rate of , resulting in a model size of . VGG and DenseNet share about the same number of parameters. Since classification performance is not our focus, we employ small networks due to computation constraints. We use the Adam optimizer [kingma2014adam] for all training, with a batch size of and an initial learning rate of . All classification models are trained for epochs, while the confidence model is trained for only epochs since it converges extremely fast. We decay the learning rate by at the and milestones of all epochs. Accuracy (excluding UDist) and AUCCC on the test set are reported.

Metrics               VGG      DenseNet   Deep Ensemble   Deep Ensemble (temp. scaled)   UDist
Classification Acc    90.80    91.61      92.99           93.00                          -
Confidence AUCCC      90.48    91.21      92.75           93.03                          93.56

Table 1: Classification and UQ performance on the CIFAR-10 dataset.

4.1.2 Results

As shown in Table 1, improvements in classification also lead to improvements in UQ performance. VGG has the worst performance due to its simple structure. A deep ensemble of DenseNets improves both classification accuracy and AUCCC by a large margin. Notably, UDist further improves UQ performance over the temperature-scaled ensemble.

4.2 ChestX-ray14

4.2.1 Experiment Settings

In this experiment, we use the NIH ChestX-ray14 dataset [wang2017chestx], which consists of X-ray images from patients. Each patient is labeled with diseases. The multi-label confidence loss is averaged over the confidence losses of all classes. The model configuration and training strategies are the same as in the CIFAR-10 experiment, resulting in a model size of .

Metrics               VGG      DenseNet   Deep Ensemble   Deep Ensemble (temp. scaled)   UDist
Classification AUC    81.50    82.37      82.86           82.86                          -
Confidence AUCCC      81.34    82.07      82.56           82.56                          82.73

Table 2: Classification and UQ performance on the ChestX-ray14 dataset.

4.2.2 Results

UDist outperforms all baselines in uncertainty quantification. Note that the deep ensemble provides only a modest advance over DenseNet on this dataset, while our model achieves an even more significant improvement, by a margin of .

4.3 On Out-of-Distribution Samples

4.3.1 Experiment Settings

Deep models are expected to provide low confidence when the test data is distinct from the training data [hendrycks2016baseline]. With better UQ performance, a model better distinguishes out-of-distribution (ood) data from in-distribution (id) data. We conduct experiments on two pairs of datasets. First, we evaluate models trained on CIFAR-10 with CIFAR-10 (id) vs. the Street View House Numbers (SVHN) dataset [netzer2011reading] (ood). Second, we evaluate models trained on ChestX-ray14 with ChestX-ray14 (id) vs. the OCT2017 dataset [kermany2018identifying] (ood), which contains optical coherence tomography (OCT) images of the retina. Both experiments randomly select the same number of ood samples as id samples. The deep ensemble with temperature scaling proposed by Liang and Li [liang2017enhancing] serves as a strong baseline for UDist. We report two kinds of metrics. The first is our AUCCC metric, where predictions on ood samples are labeled as inaccurate; CCC thus unifies id and ood uncertainty quantification. Second, to match the experiment settings of a prior study [hendrycks2016baseline], we label predictions on id samples as accurate and those on ood samples as inaccurate, and apply ROC analysis to measure how well id and ood samples are separated, yielding a metric named I/O AUROC. Higher values on both metrics indicate better out-of-distribution (generalization) UQ performance.
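The two labelings can be contrasted in a small sketch (all confidences and correctness flags below are hypothetical, and `auroc` is a plain rank-based helper of ours): the AUCCC labeling keeps the id correctness flags and marks every ood prediction inaccurate, while the I/O AUROC labeling marks all id samples accurate.

```python
import numpy as np

def auroc(labels, scores):
    """Rank-based AUROC: probability that a positive outranks a negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return float((pos[:, None] > neg[None, :]).mean()
                 + 0.5 * (pos[:, None] == neg[None, :]).mean())

id_conf = np.array([0.9, 0.8, 0.6])   # confidences on in-distribution samples
id_correct = np.array([1, 1, 0])      # whether each id prediction is accurate
ood_conf = np.array([0.7, 0.5])       # confidences on out-of-distribution samples

conf = np.concatenate([id_conf, ood_conf])
# AUCCC labeling: ood counts as "inaccurate", id keeps its correctness flags.
auccc_labels = np.concatenate([id_correct, np.zeros(len(ood_conf), dtype=int)])
# I/O AUROC labeling: every id sample "accurate", every ood sample "inaccurate".
io_labels = np.concatenate([np.ones(len(id_conf), dtype=int),
                            np.zeros(len(ood_conf), dtype=int)])

auccc_score = auroc(auccc_labels, conf)
io_score = auroc(io_labels, conf)
```

In this toy example the id classifier is over-confident on one wrong prediction; the AUCCC labeling credits a model for ranking that wrong id prediction low, which the I/O labeling cannot.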

                  CIFAR-10 vs. SVHN              ChestX-ray14 vs. OCT2017
Metrics           Deep Ensemble    UDist         Deep Ensemble    UDist
I/O AUROC         97.76            98.18         68.09            69.42
AUCCC             98.04            98.25         70.61            71.77

Table 3: UQ performance on identifying out-of-distribution samples.

4.3.2 Results

As shown in Table 3, UDist is effective in distinguishing ood instances from id instances in all settings, indicating its excellent performance in uncertainty quantification. Besides, we note two further findings: 1) The AUCCC metric is generally higher than I/O AUROC. We argue that this is because AUCCC considers accurate and inaccurate results on id samples separately, which is a more suitable criterion than I/O AUROC. 2) UDist achieves a more significant improvement on the I/O AUROC metric, i.e., when we consider ood vs. id generalization alone. We argue that this improvement comes from UDist’s ability to distinguish ood samples from id samples.

5 Discussions

5.0.1 Over-confidence issue.

A major challenge for uncertainty distillation is the over-confidence issue on the training set. In our study, we apply ensembling and temperature scaling to alleviate this problem. However, deep neural networks still tend to overfit the training set; on CIFAR-10, a base model achieves an AUCCC over 0.99, much higher than that on the test set. Our Uncertainty Distillation cascade demonstrates promising results even when trained with over-confident uncertainty estimates. In future work, independent datasets for uncertainty estimation will be involved to address the over-confidence issue.

5.0.2 On the cascade inputs.

In our current cascade model, the classification information is directly concatenated with the image input. It decays after layers of convolution and normalization, and becomes hard to associate with the final outputs, especially for multi-label classification (e.g., ChestX-ray14). Advanced techniques in conditional generation may help address this problem.

6 Conclusion

In this study, we propose a new evaluation method, Confidence-Classification Characteristic analysis, which unifies in-distribution (calibration) and out-of-distribution (generalization) uncertainty quantification. Moreover, a cascade model, named Uncertainty Distillation, is proposed to boost uncertainty quantification performance over strong baselines. In future work, we plan to address the over-confidence issue and simplify the cascade structure.