Give Me Your Trained Model: Domain Adaptive Semantic Segmentation without Source Data

06/22/2021 ∙ by Yuxi Wang, et al. ∙ 0

Benefiting from the considerable pixel-level annotations collected in a specific situation (source), a trained semantic segmentation model performs quite well there, but fails in a new situation (target) due to the large domain shift. To mitigate the domain gap, previous cross-domain semantic segmentation methods typically assume that source data and target data co-exist during distribution alignment. However, access to source data in real-world scenarios may raise privacy concerns and violate intellectual property. To tackle this problem, we focus on an interesting and challenging cross-domain semantic segmentation setting in which only the trained source model is provided to the target domain, and propose a unified framework called Domain Adaptive Semantic Segmentation without Source data (DAS^3 for short). Specifically, DAS^3 consists of three schemes: feature alignment, self-training, and information propagation. First, we develop a focal entropic loss on the network outputs to implicitly align the target features with the unseen source features via the provided source model. Second, in addition to the positive pseudo labels used in vanilla self-training, we introduce negative pseudo labels to this field for the first time and develop a bi-directional self-training strategy to enhance representation learning in the target domain. Finally, the information propagation scheme further reduces the intra-domain discrepancy within the target domain via pseudo semi-supervised learning. Extensive experiments on synthesis-to-real and cross-city driving datasets validate that DAS^3 yields state-of-the-art performance, even on par with methods that require access to source data.
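The abstract does not give the exact formulations, but the two core ingredients can be sketched in plain Python under stated assumptions: an entropy loss on per-pixel class probabilities with a focal-style weight that emphasizes hard (high-entropy) pixels, and bi-directional pseudo labeling that assigns a positive label when the top class is confident and negative labels ("this pixel is not class k") when a class probability is near zero. The function names, thresholds, and the specific weighting `(H / H_max)^gamma` are illustrative assumptions, not the paper's actual definitions.

```python
import math

def focal_entropy_loss(probs, gamma=2.0, eps=1e-12):
    """Illustrative focal-style entropy loss (assumed form, not the paper's).

    probs: list of per-pixel class-probability vectors.
    Each pixel's Shannon entropy H is weighted by (H / H_max)^gamma,
    so uncertain (hard) pixels dominate the loss during minimization.
    """
    num_classes = len(probs[0])
    h_max = math.log(num_classes)  # entropy of the uniform distribution
    total = 0.0
    for p in probs:
        h = -sum(pk * math.log(pk + eps) for pk in p)
        total += (h / h_max) ** gamma * h
    return total / len(probs)

def bidirectional_pseudo_labels(probs, pos_thresh=0.9, neg_thresh=0.05):
    """Illustrative bi-directional pseudo labeling (thresholds are assumptions).

    Positive label: the argmax class, kept only if its probability is high.
    Negative labels: classes the pixel almost certainly does NOT belong to.
    """
    pos, neg = [], []
    for p in probs:
        c = max(range(len(p)), key=lambda k: p[k])
        pos.append(c if p[c] >= pos_thresh else None)      # confident -> keep
        neg.append([k for k, pk in enumerate(p) if pk <= neg_thresh])
    return pos, neg
```

For example, a confident pixel `[0.95, 0.03, 0.02]` receives positive label 0 and negative labels {1, 2}, while an ambiguous pixel `[0.4, 0.35, 0.25]` receives no positive label but still contributes strongly to the entropy term.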





