Soft Calibration Objectives for Neural Networks

07/30/2021
by Archit Karandikar, et al.

Optimal decision making requires that classifiers produce uncertainty estimates consistent with their empirical accuracy. However, deep neural networks are often under- or over-confident in their predictions. Consequently, methods have been developed to improve the calibration of their predictive uncertainty, both during training and post-hoc. In this work, we propose differentiable losses to improve calibration based on a soft (continuous) version of the binning operation underlying popular calibration-error estimators. When incorporated into training, these soft calibration losses achieve state-of-the-art single-model ECE across multiple datasets with less than a 1% decrease in accuracy relative to the cross-entropy baseline on CIFAR-100. When incorporated post-training, the soft-binning-based calibration-error objective improves upon temperature scaling, a popular recalibration method. Overall, experiments across losses and datasets demonstrate that using calibration-sensitive procedures yields better uncertainty estimates under dataset shift than the standard practice of using a cross-entropy loss and post-hoc recalibration.
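
To make the soft-binning idea concrete, below is a minimal sketch of a differentiable, soft-binned calibration-error loss in PyTorch. This is an illustrative reconstruction under stated assumptions, not the authors' exact formulation: the function name `soft_binned_ece`, the squared-distance softmax memberships, the `temperature` parameter, and the default bin count are choices made here for clarity. The key point it demonstrates is that replacing the hard histogram assignment of standard ECE with soft bin memberships lets gradients flow through the binning step.

```python
import torch


def soft_binned_ece(confidences, accuracies, n_bins=15, temperature=0.01):
    """Differentiable (soft-binned) expected calibration error sketch.

    confidences: (N,) predicted top-class probabilities in [0, 1]
    accuracies:  (N,) 1.0 if the prediction was correct, else 0.0
    Soft bin memberships replace the hard histogram used by standard ECE,
    so the result can be used as a training objective.
    """
    bin_centers = torch.linspace(
        0.5 / n_bins, 1.0 - 0.5 / n_bins, n_bins, device=confidences.device
    )
    # Soft assignment: bins whose centers are closer to a confidence value
    # receive exponentially more membership weight.
    dists = -(confidences.unsqueeze(1) - bin_centers.unsqueeze(0)) ** 2
    memberships = torch.softmax(dists / temperature, dim=1)        # (N, n_bins)

    bin_mass = memberships.sum(dim=0) + 1e-12                      # (n_bins,)
    bin_conf = (memberships * confidences.unsqueeze(1)).sum(dim=0) / bin_mass
    bin_acc = (memberships * accuracies.unsqueeze(1)).sum(dim=0) / bin_mass

    # Weighted average gap between confidence and accuracy across bins.
    weights = bin_mass / confidences.shape[0]
    return (weights * (bin_conf - bin_acc).abs()).sum()


if __name__ == "__main__":
    # Synthetic stand-in data: random confidences and correctness labels.
    probs = torch.rand(128, requires_grad=True)
    correct = (torch.rand(128) < probs.detach()).float()
    loss = soft_binned_ece(probs, correct)
    loss.backward()  # gradients flow through the soft binning
    print(float(loss))
```

In a training setup of the kind the abstract describes, a term like this would typically be added to the cross-entropy loss with a small weight; applied post-training, it could instead be minimized over a few recalibration parameters (e.g., a temperature), which is the role of the soft-binning-based post-hoc objective mentioned above.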

