Bidirectional Self-Training with Multiple Anisotropic Prototypes for Domain Adaptive Semantic Segmentation

04/16/2022
by   Yulei Lu, et al.

A thriving trend in domain adaptive segmentation is to generate high-quality pseudo labels for the target domain and retrain the segmentor on them. Under this self-training paradigm, several competitive methods have exploited latent-space information, establishing the feature centroids (a.k.a. prototypes) of the semantic classes and determining pseudo-label candidates by their distances from these centroids. In this paper, we argue that the latent space contains more information to be exploited, and we take a step further to capitalize on it. First, instead of merely using source-domain prototypes to determine the target pseudo labels, as most traditional methods do, we bidirectionally produce target-domain prototypes as well, using them to down-weight or discard source features that may be too hard or too disturbed for adaptation. Second, existing attempts model each category as a single, isotropic prototype, ignoring the variance of the feature distribution, which can cause confusion between similar categories. To address this issue, we propose to represent each category with multiple anisotropic prototypes via a Gaussian Mixture Model, fitting the de facto distribution of the source domain and estimating the likelihood of target samples from the probability density. We apply our method to the GTA5->Cityscapes and Synthia->Cityscapes tasks and achieve 61.2 and 62.8 mean IoU respectively, substantially outperforming other competitive self-training methods. Notably, in categories that suffer severely from categorical confusion, such as "truck" and "bus", our method achieves 56.4 and 68.8 respectively, further demonstrating the effectiveness of our design.
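The core idea of likelihood-based pseudo-labeling with multiple anisotropic prototypes can be illustrated with a small sketch. This is not the authors' implementation: all mixture parameters below are hypothetical toy values (in the paper they would be fitted to source-domain features), and the confidence threshold is an assumed placeholder. Each class is modeled as a mixture of full-covariance Gaussian components; a target sample is assigned the class with the highest mixture log-likelihood, or rejected when every class scores below the threshold.

```python
# Hedged sketch: GMM-based pseudo-labeling with multiple anisotropic
# prototypes per class. Toy parameters, not the authors' fitted models.
import numpy as np

def mvn_logpdf(x, mean, cov):
    """Log-density of a full-covariance multivariate normal at rows of x."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    maha = np.einsum("ij,ij->i", d, np.linalg.solve(cov, d.T).T)
    return -0.5 * (mean.size * np.log(2 * np.pi) + logdet + maha)

def gmm_loglik(x, components):
    """Mixture log-likelihood: log-sum-exp over weighted components."""
    comp = np.stack([np.log(w) + mvn_logpdf(x, m, c)
                     for w, m, c in components], axis=1)
    m = comp.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(comp - m).sum(axis=1, keepdims=True)))[:, 0]

# Two toy classes, each a 2-component mixture with anisotropic covariances;
# in the actual method these parameters would come from EM on source features.
classes = {
    0: [(0.5, np.array([0.0, 0.0]), np.diag([1.0, 0.04])),
        (0.5, np.array([1.0, 0.0]), np.diag([1.0, 0.04]))],
    1: [(0.5, np.array([4.0, 4.0]), np.diag([0.09, 2.25])),
        (0.5, np.array([5.0, 5.0]), np.diag([0.09, 2.25]))],
}

def pseudo_label(feats, threshold=-10.0):
    """Label each target feature by max class likelihood; -1 = rejected."""
    scores = np.stack([gmm_loglik(feats, classes[c])
                       for c in sorted(classes)], axis=1)
    labels = scores.argmax(axis=1)
    labels[scores.max(axis=1) < threshold] = -1  # drop unreliable candidates
    return labels

tgt = np.array([[0.1, 0.05], [4.2, 4.3], [50.0, 50.0]])
labels = pseudo_label(tgt)  # distant outlier falls below the threshold
```

Because the covariances are full (here diagonal but elongated), a sample far along a low-variance direction is penalized much more than one equally far along a high-variance direction, which is what distinguishes this density-based criterion from plain Euclidean distance to a single centroid.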


Related research

01/26/2021 - Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation
Self-training is a competitive approach in domain adaptive segmentation, ...

03/18/2022 - Class-Balanced Pixel-Level Self-Labeling for Domain Adaptive Semantic Segmentation
Domain adaptive semantic segmentation aims to learn a model with the sup...

03/29/2021 - Get away from Style: Category-Guided Domain Adaptation for Semantic Segmentation
Unsupervised domain adaptation (UDA) becomes more and more popular in ta...

03/02/2023 - Multi-Source Soft Pseudo-Label Learning with Domain Similarity-based Weighting for Semantic Segmentation
This paper describes a method of domain adaptive training for semantic s...

08/12/2022 - Exploring High-quality Target Domain Information for Unsupervised Domain Adaptive Semantic Segmentation
In unsupervised domain adaptive (UDA) semantic segmentation, the distill...

04/19/2022 - SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation
Domain adaptive semantic segmentation attempts to make satisfactory dens...

04/06/2022 - Towards Robust Adaptive Object Detection under Noisy Annotations
Domain Adaptive Object Detection (DAOD) models a joint distribution of i...
