Confidence Calibration for Convolutional Neural Networks Using Structured Dropout

06/23/2019
by Adrian V. Dalca, et al.

In classification applications, we often want probabilistic predictions to reflect confidence or uncertainty. Dropout, a commonly used training technique, has recently been linked to Bayesian inference, yielding an efficient way to quantify uncertainty in neural network models. However, as previously demonstrated, confidence estimates computed with a naive implementation of dropout can be poorly calibrated, particularly when using convolutional networks. In this paper, through the lens of ensemble learning, we associate calibration error with the correlation between the models sampled with dropout. Motivated by this, we explore the use of structured dropout to promote model diversity and improve confidence calibration. We use the SVHN, CIFAR-10 and CIFAR-100 datasets to empirically compare model diversity and confidence errors obtained using various dropout techniques. We also show the merit of structured dropout in a Bayesian active learning application.
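As a concrete illustration of the test-time sampling the paper builds on, the sketch below shows Monte Carlo dropout with a structured (channel-wise) variant in PyTorch: dropout is kept active at inference, and the softmax outputs of several stochastic forward passes are averaged to obtain a predictive distribution and a confidence score. This is a minimal sketch, not the authors' code; the network, dropout rate, and number of samples are illustrative assumptions.

```python
# Minimal sketch: MC sampling with structured (channel-wise) dropout
# to estimate predictive confidence. Architecture and rates are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallConvNet(nn.Module):
    """Toy CNN with channel-wise (structured) dropout after each conv block."""

    def __init__(self, num_classes: int = 10, p: float = 0.2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p),              # drops entire feature maps, not single units
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)


@torch.no_grad()
def mc_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Average softmax outputs over stochastic forward passes (MC dropout)."""
    model.train()  # keep dropout active at test time to sample an ensemble of subnetworks
    probs = torch.stack(
        [F.softmax(model(x), dim=1) for _ in range(n_samples)]
    )
    mean_probs = probs.mean(dim=0)                 # predictive distribution per input
    confidence, prediction = mean_probs.max(dim=1)  # confidence = max class probability
    return prediction, confidence, mean_probs


if __name__ == "__main__":
    model = SmallConvNet()
    x = torch.randn(4, 3, 32, 32)  # e.g. CIFAR-10-sized inputs
    pred, conf, _ = mc_predict(model, x)
    print(pred, conf)
```

Dropping whole feature maps rather than individual activations decorrelates the sampled subnetworks more strongly, which is the property the paper links to better-calibrated confidence estimates.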


Related research

Dropout Distillation for Efficiently Estimating Model Confidence (09/27/2018)
We propose an efficient way to output better calibrated uncertainty scor...

Calibration and Uncertainty Quantification of Bayesian Convolutional Neural Networks for Geophysical Applications (05/25/2021)
Deep neural networks offer numerous potential applications across geosci...

Parameters Estimation for the Cosmic Microwave Background with Bayesian Neural Networks (11/19/2019)
In this paper, we present the first study that compares different models...

Maxout Networks (02/18/2013)
We consider the problem of designing models to leverage a recently intro...

Empirical confidence estimates for classification by deep neural networks (03/21/2019)
How well can we estimate the probability that the classification, C(f(x)...

Robustly representing inferential uncertainty in deep neural networks through sampling (11/05/2016)
As deep neural networks (DNNs) are applied to increasingly challenging p...

Reliable Prediction Errors for Deep Neural Networks Using Test-Time Dropout (04/12/2019)
While the use of deep learning in drug discovery is gaining increasing a...
