Multi-Head Multi-Loss Model Calibration

03/02/2023
by Adrian Galdran, et al.

Delivering meaningful uncertainty estimates is essential for the successful deployment of machine learning models in clinical practice. A central aspect of uncertainty quantification is a model's ability to return predictions that are well aligned with its actual probability of being correct, also known as model calibration. Although many methods have been proposed to improve calibration, no technique can match the simple but expensive approach of training an ensemble of deep neural networks. In this paper we introduce a form of simplified ensembling that bypasses the costly training and inference of deep ensembles while retaining their calibration capabilities. The idea is to replace the common linear classifier at the end of a network with a set of heads that are supervised with different loss functions to enforce diversity in their predictions. Specifically, each head is trained to minimize a weighted Cross-Entropy loss, but the weights differ across branches. We show that the resulting averaged predictions achieve excellent calibration without sacrificing accuracy on two challenging datasets for histopathological and endoscopic image classification. Our experiments indicate that Multi-Head Multi-Loss classifiers are inherently well calibrated, outperforming other recent calibration techniques and even challenging the performance of Deep Ensembles. Code to reproduce our experiments can be found at <https://github.com/agaldran/mhml_calibration>.
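To make the mechanism concrete, below is a minimal PyTorch sketch of the idea: several linear heads share one backbone, each head minimizes a cross-entropy loss with its own class-weight vector, and the per-head softmax outputs are averaged at inference. All names here (`MultiHeadClassifier`, `head_class_weights`, the choice of three heads) are illustrative assumptions, not the authors' code; see the linked repository for the actual implementation.

```python
# Illustrative sketch of a Multi-Head Multi-Loss classifier; names are
# hypothetical and not taken from the paper's repository.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, n_classes: int, n_heads: int = 3):
        super().__init__()
        self.backbone = backbone  # any feature extractor with its final layer removed
        # One linear head per branch, replacing the usual single classifier.
        self.heads = nn.ModuleList([nn.Linear(feat_dim, n_classes) for _ in range(n_heads)])

    def forward(self, x):
        feats = self.backbone(x)
        return [head(feats) for head in self.heads]  # one logit tensor per head


def multi_head_loss(logits_per_head, targets, head_class_weights):
    # Each head minimizes cross-entropy with its own class-weight vector;
    # the differing weights are what push the heads toward diverse predictions.
    losses = [F.cross_entropy(logits, targets, weight=w)
              for logits, w in zip(logits_per_head, head_class_weights)]
    return torch.stack(losses).mean()


@torch.no_grad()
def predict_proba(model, x):
    # At inference time, average the per-head softmax probabilities.
    logits_per_head = model(x)
    probs = torch.stack([logits.softmax(dim=-1) for logits in logits_per_head])
    return probs.mean(dim=0)
```

A simple way to instantiate `head_class_weights` is to give each head a vector that up-weights a different subset of the classes (e.g., with four classes and three heads, `torch.tensor([2., 1., 1., 1.])` for the first head, `torch.tensor([1., 2., 1., 1.])` for the second, and so on), so that no two heads are supervised identically.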


Related research

- 10/07/2021: Improving MC-Dropout Uncertainty Estimates with Calibration Error-based Optimization
  Uncertainty quantification of machine learning and deep learning methods...
- 04/08/2023: Deep Anti-Regularized Ensembles provide reliable out-of-distribution uncertainty quantification
  We consider the problem of uncertainty quantification in high dimensiona...
- 12/07/2022: A Simple Nadaraya-Watson Head can offer Explainable and Calibrated Classification
  In this paper, we empirically analyze a simple, non-learnable, and nonpa...
- 11/30/2021: The Devil is in the Margin: Margin-based Label Smoothing for Network Calibration
  In spite of the dominant performances of deep neural networks, recent wo...
- 09/30/2021: Learning to Predict Trustworthiness with Steep Slope Loss
  Understanding the trustworthiness of a prediction yielded by a classifie...
- 02/25/2021: Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling
  With a better understanding of the loss surfaces for multilayer networks...
- 07/17/2020: Uncertainty Quantification and Deep Ensembles
  Deep Learning methods are known to suffer from calibration issues: they ...
