The Devil is in the Margin: Margin-based Label Smoothing for Network Calibration

11/30/2021
by Bingyuan Liu, et al.

Despite the dominant performance of deep neural networks, recent work has shown that they are poorly calibrated, resulting in over-confident predictions. Miscalibration can be exacerbated by overfitting due to the minimization of the cross-entropy during training, as it pushes the predicted softmax probabilities to match the one-hot label assignments. This yields a pre-softmax activation for the correct class that is significantly larger than the remaining activations. Recent evidence from the literature suggests that loss functions embedding implicit or explicit maximization of the entropy of the predictions yield state-of-the-art calibration performance. We provide a unifying constrained-optimization perspective on current state-of-the-art calibration losses. Specifically, these losses can be viewed as approximations of a linear penalty (or a Lagrangian) imposing equality constraints on logit distances. This points to an important limitation of such underlying equality constraints: the ensuing gradients constantly push towards a non-informative solution, which may prevent the model from reaching the best compromise between discriminative performance and calibration during gradient-based optimization. Following these observations, we propose a simple and flexible generalization based on inequality constraints, which imposes a controllable margin on logit distances. Comprehensive experiments on a variety of image classification, semantic segmentation and NLP benchmarks demonstrate that our method sets novel state-of-the-art results on these tasks in terms of network calibration, without affecting discriminative performance. The code is available at https://github.com/by-liu/MbLS.
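To make the idea concrete, below is a minimal PyTorch-style sketch of a cross-entropy loss augmented with a hinge-style penalty on logit distances, following the abstract's description: distances are only penalized when they exceed a controllable margin, rather than being pushed to zero. The function name and the `margin` and `alpha` hyperparameters (and their default values) are illustrative assumptions, not taken from the official implementation linked above.

```python
import torch
import torch.nn.functional as F


def margin_logit_distance_loss(logits, targets, margin=10.0, alpha=0.1):
    """Cross-entropy plus a margin penalty on logit distances (sketch).

    Instead of driving all logit distances towards zero (the implicit
    effect of equality-constraint-style penalties), only distances larger
    than `margin` are penalized, i.e. the inequality constraint d_j <= margin.
    """
    # Standard cross-entropy for discriminative performance.
    ce = F.cross_entropy(logits, targets)

    # Logit distances: gap between the max logit and every logit (>= 0).
    max_logit = logits.max(dim=1, keepdim=True).values
    distances = max_logit - logits  # shape: (batch, num_classes)

    # Hinge penalty: only the part of each distance exceeding the margin counts.
    penalty = F.relu(distances - margin).mean()

    return ce + alpha * penalty


# Toy usage on random data (8 samples, 10 classes).
if __name__ == "__main__":
    logits = torch.randn(8, 10, requires_grad=True)
    targets = torch.randint(0, 10, (8,))
    loss = margin_logit_distance_loss(logits, targets)
    loss.backward()
    print(loss.item())
```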

