Cal-SFDA: Source-Free Domain-adaptive Semantic Segmentation with Differentiable Expected Calibration Error

08/06/2023
by Zixin Wang, et al.

The prevalence of domain adaptive semantic segmentation has prompted concerns regarding source domain data leakage, where private information from the source domain could inadvertently be exposed in the target domain. To circumvent the requirement for source data, source-free domain adaptation has emerged as a viable solution that leverages self-training methods to pseudo-label high-confidence regions and adapt the model to the target data. However, the confidence scores obtained are often highly biased due to over-confidence and class-imbalance issues, which render both model selection and optimization problematic. In this paper, we propose a novel calibration-guided source-free domain adaptive semantic segmentation (Cal-SFDA) framework. The core idea is to estimate the expected calibration error (ECE) from the segmentation predictions, serving as a strong indicator of the model's generalization capability to the unlabeled target domain. The estimated ECE scores, in turn, assist model training and fair checkpoint selection in both the source training and target adaptation stages. During model pre-training on the source domain, we ensure the differentiability of the ECE objective by leveraging the LogSumExp trick and use the ECE scores to select the best source checkpoints for adaptation. To enable ECE estimation on the target domain without requiring labels, we train a value net for ECE estimation and apply statistics warm-up to its BatchNorm layers for stability. The estimated ECE scores assist in determining the reliability of predictions and enable class-balanced pseudo-labeling, positively guiding the adaptation progress and inhibiting potential error accumulation. Extensive experiments on two widely-used synthetic-to-real transfer tasks show that the proposed approach surpasses the previous state-of-the-art by up to 5.25% mIoU.
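For intuition, the standard (binned) expected calibration error is $\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{N} \left| \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \right|$, where the $B_m$ partition the $N$ predictions by confidence. The sketch below illustrates, under stated assumptions, how a LogSumExp smoothing of the max softmax probability yields a differentiable calibration gap; the unbinned surrogate, the helper names, and the temperature value are illustrative choices by this editor, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_confidence(logits, tau=10.0):
    """Smooth approximation of the per-pixel max softmax probability.

    The hard max over class probabilities is replaced by a temperature-scaled
    LogSumExp, which is differentiable everywhere; larger tau tightens the
    approximation toward the true maximum probability.
    """
    probs = F.softmax(logits, dim=1)                  # (B, C, H, W)
    # LogSumExp over the class dimension approximates max_c probs[:, c]
    return torch.logsumexp(tau * probs, dim=1) / tau  # (B, H, W)

def differentiable_calibration_gap(logits, labels, ignore_index=255):
    """Unbinned calibration gap |confidence - correctness| over valid pixels.

    A simplified, differentiable surrogate of ECE: no hard binning is used,
    and the correctness term (from argmax predictions) acts as a constant
    target, so gradients flow only through the smoothed confidence.
    """
    conf = soft_confidence(logits)       # (B, H, W)
    preds = logits.argmax(dim=1)         # (B, H, W)
    valid = labels != ignore_index
    correct = (preds == labels).float()
    return (conf[valid] - correct[valid]).abs().mean()
```

In this reading, such a differentiable gap could be added as an auxiliary objective during source pre-training, while the per-bin aggregation of the same gap recovers the usual ECE score used for checkpoint selection.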


