On Calibrating Semantic Segmentation Models: Analysis and An Algorithm

12/22/2022
by Dongdong Wang, et al.

We study the problem of semantic segmentation calibration. For image classification, many solutions have been proposed to alleviate the miscalibration of model confidence. However, to date, confidence calibration research on semantic segmentation remains limited. We provide a systematic study of the calibration of semantic segmentation models and propose a simple yet effective approach. First, we find that model capacity, crop size, multi-scale testing, and prediction correctness all affect calibration. Among these factors, prediction correctness, especially misprediction, matters most, because mispredictions tend to be over-confident. Next, we propose a simple, unifying, and effective approach, namely selective scaling, which separates correct from incorrect predictions during scaling and focuses on smoothing the logits of mispredictions. We then study popular existing calibration methods and compare them with selective scaling on semantic segmentation calibration. We conduct extensive experiments on a variety of benchmarks, covering both in-domain and domain-shift calibration, and show that selective scaling consistently outperforms other methods.
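To make the idea of selective scaling concrete, below is a minimal sketch of two-temperature logit scaling. It assumes per-pixel logits as a NumPy array and a precomputed boolean correctness mask; in practice such a mask would come from an auxiliary correctness classifier rather than ground truth, and the function name `selective_scaling` and the temperature values are illustrative, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def selective_scaling(logits, correct_mask, t_correct=1.0, t_incorrect=2.0):
    """Scale per-pixel logits with two temperatures (illustrative sketch).

    logits:       (H, W, C) array of per-pixel class logits.
    correct_mask: (H, W) boolean array; True where the prediction is
                  believed correct (assumed to come from an auxiliary
                  correctness classifier, not ground truth).
    t_correct:    temperature applied to pixels flagged as correct.
    t_incorrect:  larger temperature that smooths (flattens) the logits
                  of suspected mispredictions, reducing over-confidence.
    """
    temps = np.where(correct_mask, t_correct, t_incorrect)   # (H, W)
    scaled = logits / temps[..., None]                        # broadcast over classes
    return softmax(scaled, axis=-1)

# Toy usage: a 2x2 "image" with 3 classes, one pixel flagged as a likely misprediction.
logits = np.array([[[4.0, 1.0, 0.5], [3.0, 2.5, 0.1]],
                   [[5.0, 0.2, 0.1], [2.0, 1.9, 1.8]]])
correct_mask = np.array([[True, True],
                         [True, False]])
probs = selective_scaling(logits, correct_mask, t_correct=1.0, t_incorrect=3.0)
print(probs.max(axis=-1))  # confidence is lowered only for the flagged pixel
```

The larger temperature on suspected mispredictions is what implements the "misprediction logit smoothing" described in the abstract: confident but wrong pixels are pushed toward a flatter class distribution, while correct predictions are left largely untouched.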

Related research

Local Temperature Scaling for Probability Calibration (08/12/2020)
For semantic segmentation, label probabilities are often uncalibrated as...

Label Calibration for Semantic Segmentation Under Domain Shift (07/20/2023)
Performance of a pre-trained semantic segmentation model is likely to su...

Calibration Meets Explanation: A Simple and Effective Approach for Model Confidence Estimates (11/06/2022)
Calibration strengthens the trustworthiness of black-box models by produ...

Distribution-aware Margin Calibration for Semantic Segmentation in Images (12/21/2021)
The Jaccard index, also known as Intersection-over-Union (IoU), is one o...

Rethinking Confidence Calibration for Failure Prediction (03/06/2023)
Reliable confidence estimation for the predictions is important in many ...

It's better to say "I can't answer" than answering incorrectly: Towards Safety critical NLP systems (08/21/2020)
In order to make AI systems more reliable and their adoption in safety c...

The Devil is in the Margin: Margin-based Label Smoothing for Network Calibration (11/30/2021)
In spite of the dominant performances of deep neural networks, recent wo...
