Diverse Ensembles Improve Calibration

07/08/2020
by Asa Cooper Stickland, et al.

Modern deep neural networks can produce poorly calibrated predictions, especially when the training and test distributions are mismatched. Training an ensemble of models and averaging their predictions can help alleviate these issues. We propose a simple technique to improve calibration: use a different data augmentation for each ensemble member. We additionally use the idea of "mixing" un-augmented and augmented inputs to improve calibration when the test and training distributions are the same. These simple techniques improve calibration and accuracy over strong baselines on the CIFAR10 and CIFAR100 benchmarks, and on out-of-domain data from their corrupted versions.
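
To make the two ideas concrete, below is a minimal PyTorch sketch, not the paper's implementation: each ensemble member trains with its own data augmentation, each batch mixes augmented and un-augmented inputs, and test-time predictions average the members' softmax outputs. The specific augmentations, the 50% mixing fraction, the `mixed_batch` helper, and the toy linear classifier are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

torch.manual_seed(0)

# One distinct augmentation per ensemble member. These particular choices
# are illustrative assumptions, not the paper's exact augmentation set.
member_augmentations = [
    T.RandomHorizontalFlip(p=0.5),
    T.RandomCrop(32, padding=4),
    T.RandomRotation(degrees=15),
    T.GaussianBlur(kernel_size=3),
]

def make_model(num_classes=10):
    # Stand-in classifier; a real experiment would use e.g. a ResNet.
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))

def mixed_batch(x, augment, unaugmented_fraction=0.5):
    # "Mixing": apply the member's augmentation to part of the batch
    # and leave the rest un-augmented.
    x = x.clone()
    n_aug = int(x.size(0) * (1 - unaugmented_fraction))
    x[:n_aug] = augment(x[:n_aug])
    return x

# Toy tensors standing in for CIFAR10 images and labels.
images = torch.rand(128, 3, 32, 32)
labels = torch.randint(0, 10, (128,))

# Train each member on batches mixing its own augmentation with clean inputs.
models = [make_model() for _ in member_augmentations]
for model, augment in zip(models, member_augmentations):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(5):  # a few toy steps; real training runs many epochs
        opt.zero_grad()
        logits = model(mixed_batch(images, augment))
        loss = nn.functional.cross_entropy(logits, labels)
        loss.backward()
        opt.step()

# At test time, average the members' predicted class probabilities.
with torch.no_grad():
    probs = torch.stack([m(images).softmax(dim=-1) for m in models])
    ensemble_probs = probs.mean(dim=0)  # shape: (batch, num_classes)
```

Averaging probabilities rather than logits follows the usual deep-ensemble convention; here, diversity across members comes from the differing augmentations on top of random initialization.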

Related research

- Combining Ensembles and Data Augmentation can Harm your Calibration (10/19/2020): Ensemble methods which average over multiple neural network predictions …
- Should Ensemble Members Be Calibrated? (01/13/2021): Underlying the use of statistical approaches for a wide range of applications …
- RankMixup: Ranking-Based Mixup Training for Network Calibration (08/23/2023): Network calibration aims to accurately estimate the level of confidences …
- Local calibration of verbal autopsy algorithms (10/24/2018): Computer-coded verbal autopsy (CCVA) algorithms used to generate burden-…
- Uncertainty Quantification and Deep Ensembles (07/17/2020): Deep Learning methods are known to suffer from calibration issues: they …
- Can Calibration Improve Sample Prioritization? (10/12/2022): Calibration can reduce overconfident predictions of deep neural networks …
- The Calibration Generalization Gap (10/05/2022): Calibration is a fundamental property of a good predictive model: it requires …
