Confidence Calibration in Deep Neural Networks through Stochastic Inferences

09/28/2018
by Seonguk Seo, et al.

We propose a generic framework to calibrate the accuracy and confidence (score) of a prediction through stochastic inferences in deep neural networks. We first analyze, by Bayesian modeling of stochastic regularization, the relation between the variation of model parameters across inferences for a single example and the variance of the corresponding prediction scores. Our empirical observation shows that the accuracy and score of a prediction are highly correlated with the variance of multiple stochastic inferences obtained with stochastic depth or dropout. Motivated by these facts, we design a novel variance-weighted confidence-integrated loss function composed of two cross-entropy terms, one with respect to the ground truth and one with respect to the uniform distribution, balanced by the variance of the stochastic prediction scores. The proposed loss function enables us to learn deep neural networks that predict confidence-calibrated scores using a single inference. Our algorithm achieves outstanding confidence calibration and improves classification accuracy with two popular stochastic regularization techniques, stochastic depth and dropout, across multiple models and datasets; it significantly alleviates the overconfidence issue in deep neural networks by training networks to achieve prediction accuracy proportional to prediction confidence.
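The abstract describes a loss that mixes a ground-truth cross-entropy term with a uniform-distribution cross-entropy term, weighted by the variance of multiple stochastic predictions. A minimal NumPy sketch of that idea follows; the normalization of the variance by its 1/4 upper bound and the exact mixing rule are our illustrative choices, not the paper's precise formulation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def vwci_loss(stochastic_logits, label, eps=1e-12):
    """Variance-weighted confidence-integrated loss (sketch).

    stochastic_logits: (T, C) logits from T stochastic forward passes
                       (e.g. dropout or stochastic depth kept active).
    label: ground-truth class index.
    Returns (loss, alpha), where alpha is the variance-derived weight.
    """
    probs = softmax(stochastic_logits)          # (T, C) stochastic scores
    # The variance of a value in [0, 1] is at most 1/4, so dividing the
    # mean per-class variance by 0.25 maps it into [0, 1]; this
    # normalization is an assumption made for the sketch.
    alpha = min(1.0, probs.var(axis=0).mean() / 0.25)
    p = probs.mean(axis=0)                      # averaged prediction
    ce_gt = -np.log(p[label] + eps)             # H(y, p): ground-truth term
    ce_uniform = -np.log(p + eps).mean()        # H(u, p): uniform term
    return (1.0 - alpha) * ce_gt + alpha * ce_uniform, alpha
```

When the stochastic passes agree (low variance), alpha is near zero and the loss reduces to standard cross-entropy; when they disagree, the uniform term dominates and pushes the prediction toward lower confidence.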

Related research:

- Dual Focal Loss for Calibration (05/23/2023)
  The use of deep neural networks in real-world applications require well-...

- Dropout Distillation for Efficiently Estimating Model Confidence (09/27/2018)
  We propose an efficient way to output better calibrated uncertainty scor...

- GSC Loss: A Gaussian Score Calibrating Loss for Deep Learning (03/02/2022)
  Cross entropy (CE) loss integrated with softmax is an orthodox component...

- Confidence-Aware Learning for Deep Neural Networks (07/03/2020)
  Despite the power of deep neural networks for a wide range of tasks, an ...

- Calibrating Deep Neural Networks using Focal Loss (02/21/2020)
  Miscalibration – a mismatch between a model's confidence and its correct...

- Nonparametric fusion learning: synthesize inferences from diverse sources using depth confidence distribution (11/13/2020)
  Fusion learning refers to synthesizing inferences from multiple sources ...

- Mitigating Neural Network Overconfidence with Logit Normalization (05/19/2022)
  Detecting out-of-distribution inputs is critical for safe deployment of ...
