Diversity regularization in deep ensembles

02/22/2018
by Changjian Shui, et al.

Calibrating the confidence of supervised learning models is important in any context where the certainty of predictions must be reliable. However, it has been reported that deep neural network models are often too poorly calibrated for complex tasks that require reliable uncertainty estimates over their predictions. In this work, we propose a strategy for training deep ensembles with a diversity function regularization, which improves calibration while maintaining similar prediction accuracy.
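
The abstract describes adding a diversity term to the ensemble training objective. Below is a minimal, hedged sketch of that general idea in PyTorch; the specific penalty (squared deviation of each member's predictive distribution from the ensemble mean), the weight lam, and the toy networks are illustrative assumptions, not the paper's exact diversity function.

# Hypothetical sketch (not the authors' exact method): training a small deep
# ensemble with an added diversity penalty on the members' predictions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def diversity_regularized_loss(logits_list, targets, lam=0.1):
    # Standard fit term: cross-entropy averaged over ensemble members.
    ce = torch.stack([F.cross_entropy(lg, targets) for lg in logits_list]).mean()

    # Diversity term: mean squared deviation of each member's predictive
    # distribution from the ensemble average; subtracting it rewards
    # disagreement between members.
    probs = torch.stack([F.softmax(lg, dim=-1) for lg in logits_list])  # [M, B, C]
    mean_p = probs.mean(dim=0, keepdim=True)
    diversity = (probs - mean_p).pow(2).sum(dim=-1).mean()

    return ce - lam * diversity

# Toy usage: an ensemble of M small classifiers trained jointly.
M, in_dim, n_classes = 5, 20, 3
members = [nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))
           for _ in range(M)]
params = [p for m in members for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(32, in_dim)
y = torch.randint(0, n_classes, (32,))

logits_list = [m(x) for m in members]
loss = diversity_regularized_loss(logits_list, y, lam=0.1)
opt.zero_grad()
loss.backward()
opt.step()

At test time the ensemble prediction is the average of the member distributions; the regularization weight lam trades off individual member accuracy against the spread of predictions that supports better-calibrated uncertainty.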

Related research

02/01/2023 - Pathologies of Predictive Diversity in Deep Ensembles
Classical results establish that ensembles of small models benefit when ...

06/18/2019 - Maximizing Overall Diversity for Improved Uncertainty Estimates in Deep Ensembles
The inaccuracy of neural network models on inputs that do not stem from ...

01/26/2022 - Improving robustness and calibration in ensembles with diversity regularization
Calibration and uncertainty estimation are crucial topics in high-risk e...

03/10/2020 - DIBS: Diversity inducing Information Bottleneck in Model Ensembles
Although deep learning models have achieved state-of-the-art performance...

08/28/2023 - Diversified Ensemble of Independent Sub-Networks for Robust Self-Supervised Representation Learning
Ensembling a neural network is a widely recognized approach to enhance m...

11/04/2019 - Ensembles of Locally Independent Prediction Models
Many ensemble methods encourage their constituent models to be diverse, ...

07/17/2020 - Uncertainty Quantification and Deep Ensembles
Deep Learning methods are known to suffer from calibration issues: they ...
