IJCAI 2020 & IJCV 2021: Unsupervised Scene Adaptation with Memory Regularization in vivo
This paper focuses on unsupervised domain adaptation, i.e., transferring knowledge from the source domain to the target domain, in the context of semantic segmentation. Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data. Yet the pseudo labels of the target-domain data are usually predicted by a model trained on the source domain, so the generated labels inevitably contain incorrect predictions due to the discrepancy between the training domain and the test domain, which could be transferred to the final adapted model and largely compromise the training process. To overcome this problem, this paper proposes to explicitly estimate the prediction uncertainty during training to rectify the pseudo label learning for unsupervised semantic segmentation adaptation. Given an input image, the model outputs the semantic segmentation prediction as well as the uncertainty of the prediction. Specifically, we model the uncertainty via the prediction variance and involve the uncertainty in the optimization objective. To verify the effectiveness of the proposed method, we evaluate it on two prevalent synthetic-to-real semantic segmentation benchmarks, i.e., GTA5 → Cityscapes and SYNTHIA → Cityscapes, as well as one cross-city benchmark, i.e., Cityscapes → Oxford RobotCar. We demonstrate through extensive experiments that the proposed approach (1) dynamically sets different confidence thresholds according to the prediction variance, (2) rectifies the learning from noisy pseudo labels, and (3) achieves significant improvements over conventional pseudo label learning and yields competitive performance on all three benchmarks.
Deep neural networks (DNNs) have been widely adopted in the field of semantic segmentation, yielding state-of-the-art performance Liang et al. (2017); Wei et al. (2018). However, recent works show that DNNs generalize poorly to unseen environments, e.g., testing data collected on rainy days Hendrycks and Dietterich (2019); Wu et al. (2019). One straightforward idea is to annotate more training data from the target environment and then re-train the segmentation model. However, the semantic segmentation task usually demands dense annotations, and it is unaffordable to manually annotate pixel-wise labels for data collected in new environments. To address this challenge, researchers resort to unsupervised semantic segmentation adaptation, which takes one step closer to real-world practice. In unsupervised semantic segmentation adaptation, two datasets collected in different environments are considered: a labeled source-domain dataset, where category labels are provided for every pixel, and an unlabeled target-domain dataset, which provides only the collected data without annotations. Compared with annotated data in the target domain, unlabeled data is usually easy to collect. Semantic segmentation adaptation aims at leveraging the labeled source-domain data as well as the unlabeled target-domain data to adapt the well-trained model to the target environment.
The main challenge of semantic segmentation adaptation is the discrepancy of data distribution between the source domain and the target domain. There are two lines of methods for semantic segmentation adaptation. On one hand, several existing works focus on domain alignment by minimizing the distribution discrepancy at different levels, such as the pixel level Wu et al. (2018, 2019); Hoffman et al. (2018), feature level Huang et al. (2018); Yue et al. (2019); Luo et al. (2019a); Zhang et al. (2019b) and semantic level Tsai et al. (2018, 2019); Wang et al. (2019). Despite great success, this line of work is sub-optimal, because the alignment objective drives the model to learn the shared knowledge between domains but ignores the domain-specific knowledge, which is one of the keys to the final target, i.e., a model adapted to the target domain. On the other hand, some researchers focus on learning the domain-specific knowledge of the target domain by fully exploiting the unlabeled target-domain data Zou et al. (2018, 2019); Han et al. (2019). Specifically, this line of methods usually adopts a two-stage pipeline, similar to the traditional semi-supervised framework Lee (2013). The first step is to predict pseudo labels with the knowledge learned from the labeled data, e.g., the model trained on the source domain. The second step is to minimize the cross-entropy loss on the pseudo labels of the unlabeled target-domain data. In the training process, the pseudo labels are usually regarded as accurate annotations to optimize the model.
However, one inherent problem exists in pseudo-label-based scene adaptation approaches: pseudo labels usually suffer from the noise caused by the model being trained on a different data distribution (see Figure 1), and the noisy labels could compromise the subsequent learning. Although some existing works Zou et al. (2018, 2019) have proposed to manually set a threshold to neglect low-confidence pseudo labels, this remains challenging in several aspects. First, the value of the threshold is hard to determine for different target domains: it depends on the similarity between the source domain and the target domain, which is hard to estimate in advance. Second, the value of the threshold is also hard to determine for different categories. For example, objects such as traffic signs appear rarely in the source domain, so the overall confidence score for a rare category is relatively low, and a high threshold may discard the information of rare categories. Third, the threshold is also related to the location of the pixel. For example, a pixel in the center of an object, such as a car, is relatively easy to predict, while a pixel on the object edge usually faces ambiguous predictions. This reflects that the threshold should consider not only the confidence score but also the location of the pixel. In summary, every pixel in the segmentation map needs to be treated differently, and a fixed threshold can hardly match this demand.
To address the mentioned challenges, we propose one simple and effective method for semantic segmentation adaptation via modeling uncertainty, which automatically provides a pixel-wise threshold for the input image. Without introducing extra parameters or modules, we formulate the uncertainty as the prediction variance, which reflects the model's uncertainty about the prediction in a bootstrapping manner. Meanwhile, we explicitly involve the variance in the optimization objective, called variance regularization, which works as an automatic threshold and is compatible with the standard cross-entropy loss. The automatic threshold rectifies the learning from noisy labels and keeps the training coherent. Therefore, the proposed method can effectively exploit the domain-specific information offered by pseudo labels and take advantage of the unlabeled target-domain data.
In a nutshell, our contributions are as follows:
To our knowledge, ours is among the first attempts to exploit uncertainty estimation to enable an automatic threshold for learning from noisy pseudo labels. This is in contrast to most existing domain adaptation methods, which directly utilize noisy pseudo labels or manually set the confidence threshold.
Without introducing extra parameters or modules, we formulate the uncertainty as the prediction variance. Specifically, we introduce a new regularization term, variance regularization, which is compatible with the standard cross-entropy loss. The variance regularization works as the automatic threshold, and rectifies the learning from noisy pseudo labels.
We verify the proposed method on two synthetic-to-real benchmarks and one cross-city benchmark. The proposed method has achieved significant improvements over the conventional pseudo label learning, yielding competitive performance to existing methods.
The main challenge in unsupervised domain adaptation is the different data distribution between the source domain and the target domain Fu et al. (2015); Wang et al. (2018). To deal with this challenge, some pioneering works Hoffman et al. (2018); Wu et al. (2018) propose to transfer the visual style of the source-domain data to the target domain, so that the model can be trained on labeled data with the target style. Similarly, some recent works leverage Generative Adversarial Networks (GANs) to transfer the source-domain images to multiple domains and intend to learn domain-invariant features Wu et al. (2019); Yue et al. (2019). Furthermore, some works focus on aligning the intermediate activations of the neural network: Luo et al. (2019a, b) utilize the attention mechanism to refine the feature alignment. Instead of modifying the visual appearance, the alignment of high-level semantic features has also attracted a lot of attention: Tsai et al. (2018, 2019) propose to utilize a discriminator to demand similar semantic outputs between the two domains. In summary, this line of methods focuses on alignment, learning the shared knowledge between the source and target domains. However, the domain-specific information is usually ignored, which is one of the keys to adaptation in the target environment. Therefore, in this paper, we resort to another line of methods, which is based on pseudo label learning.
Pseudo Label Learning. The main idea is close to the conventional semi-supervised learning approach of entropy minimization, which was first proposed to leverage unlabeled data Grandvalet and Bengio (2005). Entropy minimization encourages the model to give predictions with a higher confidence score. In practice, Reed et al. (2014) propose bootstrapping via entropy minimization and show its effectiveness on object detection and emotion recognition. Furthermore, Lee (2013) exploits the trained model to predict pseudo labels for the unlabeled data, and then fine-tunes the model as in supervised learning to fully leverage the unlabeled data. Recently, Pan et al. (2019) utilize pseudo label learning to align the target-domain data with the source-domain prototypes. For unsupervised semantic segmentation, Zou et al. (2019, 2018) introduce the pseudo label strategy to semantic segmentation adaptation and provide a comprehensive analysis of the regularization terms. In a similar spirit, Zheng and Yang (2019) also apply pseudo labels to learn domain-specific features, yielding competitive results. However, one inherent weakness of pseudo label learning is that the pseudo labels usually contain noisy predictions. Although most pseudo labels are correct, wrong labels also exist and could compromise the subsequent training: if the model is fine-tuned on noisy labels, the errors are transferred to the adapted model. Different from existing works, we do not treat all pseudo labels equally; we intend to rectify the learning from noisy labels. The proposed method explicitly predicts the uncertainty of the pseudo labels when fine-tuning the model, and this uncertainty can be regarded as an automatic threshold to adjust the learning from noisy labels.
To address the noise, existing works have explored uncertainty estimation from different aspects, such as the input data, the annotation, and the model weights. In this work, we focus on the annotation uncertainty. Our target is to learn a model that can predict whether an annotation is correct, and learn from noisy pseudo labels. Among existing works, Bayesian networks are widely used to predict the uncertainty of the weights in the network Nielsen and Jensen (2009). In a similar spirit, Kendall and Gal (2017) apply Bayesian theory to the predictions of computer vision tasks, intending to provide not only the prediction results but also the confidence of the prediction. Further, Yu et al. (2019) explicitly model the uncertainty via an extra auxiliary branch, and involve random noise in the training; their model explicitly estimates the feature mean as well as the prediction variance. Inspired by the above-mentioned works, we propose to leverage the prediction variance to formulate the uncertainty. There are two fundamental differences between previous works and ours: (1) We do not introduce extra modules or parameters to simulate the noise. Instead, we leverage the prediction discrepancy within the segmentation model. (2) We explicitly involve the uncertainty in the training objective and adopt an adaptive method to learn the pixel-wise uncertainty map automatically. The proposed method does not need a manually set threshold to enforce the pseudo label learning.
In Section 3.1, we first provide the problem definition and notations. We then revisit the conventional domain adaptation method based on pseudo labels and discuss the limitations of pseudo label learning (Section 3.2). To deal with the mentioned limitations, we propose to leverage uncertainty estimation. In particular, we formulate the uncertainty as the prediction variance and provide a brief definition in Section 3.3, followed by the proposed variance regularization, which is compatible with the standard cross-entropy loss, in Section 3.4. Besides, the implementation details are provided in Section 3.5.
Given the labeled dataset $X_s = \{x_s\}$ from the source domain and the unlabeled dataset $X_t = \{x_t\}$ from the target domain, semantic segmentation adaptation intends to learn the projection function $F$, which maps the input image $x$ to the semantic segmentation map $F(x)$. $N_s$ and $N_t$ denote the number of labeled and unlabeled samples. The source-domain semantic segmentation label $y_s$ is provided for every labeled sample $x_s$, while the target-domain label $y_t$ remains unknown during training. The target of unsupervised domain adaptation is to estimate the model parameter $\theta$ that minimizes the prediction bias on the target-domain inputs:

$$\mathcal{L}_{bias} = \sum_{x_t \in X_t} \big\| p_{gt}(y_t|x_t) - p_\theta(y_t|x_t) \big\|, \quad (1)$$

where $p_{gt}(y_t|x_t)$ is the ground-truth class probability of the target data. Ideally, $p_{gt}(y_t|x_t)$ is a one-hot vector whose maximum value is $1$, and the ground-truth label is $y_t = \arg\max_c p_{gt}(c|x_t)$. In contrast, $p_\theta(y_t|x_t)$ is the predicted probability distribution of $x_t$. When we minimize the prediction bias in Equation 1, the discrepancy between the predicted results and the ground-truth probability is minimized.
Pseudo label learning leverages pseudo labels to learn from the unlabeled data. The common practice contains two stages. The first stage is to generate the pseudo labels for the unlabeled target-domain training data; they are obtained from the model trained on the source-domain data: $\hat{y}_t = \arg\max_c p_{\theta_s}(c|x_t)$, where $\theta_s$ denotes the model parameters learned from the source-domain training data. Therefore, the pseudo labels $\hat{y}_t$ are not accurate in nature, due to the different data distributions of $X_s$ and $X_t$. We denote $\hat{p}(y_t|x_t)$ as the one-hot vector of $\hat{y}_t$: the entry for class $c$ is $1$ if $c$ equals $\hat{y}_t$, and $0$ otherwise. The second stage of pseudo label learning is to minimize the prediction bias, which we can formulate in the same style as Equation 1:

$$\mathcal{L}_{bias} \leq \sum_{x_t \in X_t} \Big( \big\| \hat{p}(y_t|x_t) - p_\theta(y_t|x_t) \big\| + \big\| p_{gt}(y_t|x_t) - \hat{p}(y_t|x_t) \big\| \Big). \quad (2)$$

The first term is the difference between the prediction and the pseudo label, while the second term is the error between the pseudo label and the ground-truth label. When fine-tuning the model in the second stage, the pseudo label is fixed, so the second term is a constant. Existing methods usually optimize the first term as the pretext task, which equals treating the pseudo labels as true labels: the model parameter $\theta$ is trained to minimize the bias between the prediction and the pseudo labels. In practice, the cross-entropy loss is usually adopted Zou et al. (2018, 2019); Zheng and Yang (2019). The objective could be formulated as:

$$\mathcal{L}_{ce} = -\sum_{x_t \in X_t} \sum_{c} \hat{p}(c|x_t) \log p_\theta(c|x_t). \quad (3)$$
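As a concrete illustration, the two-stage pipeline can be sketched with NumPy on a toy softmax map; the function names, toy tensor shapes, and values below are our own illustration, not code from the paper:

```python
import numpy as np

def pseudo_labels(probs):
    """Stage 1: hard pseudo labels from a source-trained model's softmax
    output. probs: (H, W, C). Returns the label map (H, W)."""
    return probs.argmax(axis=-1)

def pseudo_ce_loss(probs, labels, eps=1e-8):
    """Stage 2: mean per-pixel cross-entropy against the (possibly noisy)
    pseudo labels, as in conventional pseudo label learning."""
    h, w, c = probs.shape
    picked = probs.reshape(-1, c)[np.arange(h * w), labels.ravel()]
    return float(-np.log(picked + eps).mean())

# Toy 2x2 prediction with 3 classes; one confident pixel, the rest uniform.
p = np.full((2, 2, 3), 1.0 / 3.0)
p[0, 0] = [0.8, 0.1, 0.1]
y_hat = pseudo_labels(p)
loss = pseudo_ce_loss(p, y_hat)
```

Note that the loss treats every pseudo label as ground truth, which is exactly the behavior the variance regularization in Section 3.4 is designed to rectify.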
Discussion. There are two advantages of pseudo label learning. First, the model is trained only on the target-domain data, so the training data distribution is close to the testing data distribution, narrowing the input distribution discrepancy. Second, despite the domain discrepancy, most pseudo labels are correct; theoretically, the fine-tuned model could approach the performance of a fully-supervised model. However, one inherent problem remains: the pseudo labels inevitably contain noise, and the wrong annotations are transferred from the source model to the final model. Noisy pseudo labels could largely compromise the training.
To address the label noise, we model the uncertainty of the pseudo label via the prediction variance. Intuitively, we could formulate the variance of the prediction as:

$$Var(x_t) = \big\| p_{gt}(y_t|x_t) - p_\theta(y_t|x_t) \big\|^2. \quad (4)$$

Since $p_{gt}(y_t|x_t)$ remains unknown, one naive way is to utilize the pseudo label $\hat{p}(y_t|x_t)$ in its place. The variance could then be approximated as:

$$Var(x_t) \approx \big\| \hat{p}(y_t|x_t) - p_\theta(y_t|x_t) \big\|^2. \quad (5)$$

However, in Equation 2 we have pushed $p_\theta(y_t|x_t)$ toward $\hat{p}(y_t|x_t)$. When optimizing the prediction bias, the variance in Equation 5 will also be minimized, so it cannot reflect the real prediction variance during training. In this paper, therefore, we adopt another approximation:

$$Var(x_t) \approx D_{kl}\big( p_\theta^{aux}(y_t|x_t) \,\big\|\, p_\theta(y_t|x_t) \big), \quad (6)$$

where $p_\theta^{aux}$ denotes the auxiliary classifier output of the segmentation model. As shown in Figure 2, we adopt the widely-used two-classifier model, which contains one primary classifier as well as one auxiliary classifier. We note that the extra auxiliary classifier could be viewed as a free lunch, since most segmentation models, including PSPNet Zhao et al. (2017) and the modified DeepLab-v2 in Tsai et al. (2018, 2019); Luo et al. (2019a); Zheng and Yang (2019), contain an auxiliary classifier to mitigate the vanishing-gradient problem He et al. (2016b) and help the training. In this paper, we further leverage the auxiliary classifier to estimate the variance. In practice, we utilize the KL-divergence of the two classifier predictions as the variance:

$$D_{kl}(x_t) = \sum_{c} p_\theta^{aux}(c|x_t) \log \frac{p_\theta^{aux}(c|x_t)}{p_\theta(c|x_t)}. \quad (7)$$

If the two classifiers provide different class predictions, the approximated variance obtains a large value, reflecting the uncertainty of the model about the prediction. Besides, it is worth noting that the proposed variance in Equation 7 is independent of the pseudo label $\hat{y}_t$.
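A minimal sketch of this pixel-wise uncertainty map follows; the KL direction (auxiliary against primary), function names, and toy values are our assumptions for illustration:

```python
import numpy as np

def prediction_variance(p_main, p_aux, eps=1e-8):
    """Pixel-wise KL divergence between the auxiliary and primary
    classifier softmax outputs, used as an uncertainty map.

    p_main, p_aux: (H, W, C) softmax maps. Returns an (H, W) map."""
    return np.sum(p_aux * (np.log(p_aux + eps) - np.log(p_main + eps)), axis=-1)

# Two toy 1x2 softmax maps with 2 classes: the classifiers agree on the
# first pixel and disagree on the second.
p_main = np.array([[[0.9, 0.1], [0.9, 0.1]]])
p_aux  = np.array([[[0.9, 0.1], [0.2, 0.8]]])
var = prediction_variance(p_main, p_aux)  # agreement -> ~0, disagreement -> large
```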
Discussion: What leads to the discrepancy between the primary classifier and the auxiliary classifier? First, the main reason is different receptive fields: as shown in Figure 2, the auxiliary classifier is located at a relatively shallow layer, while the primary classifier learns from a deeper layer. The input activations of the two classifiers differ, leading to the prediction difference. Second, neither classifier has been trained on the target-domain data, so the two classifiers may carry different biases toward it. Third, we apply the dropout function Srivastava et al. (2014) to both classifiers, which also leads to different predictions during training. This prediction discrepancy helps us to estimate the uncertainty.
In this paper, we propose the variance regularization term to rectify the learning from noisy labels. It leverages the approximated variance introduced in Section 3.3. The rectified objective could be formulated as:

$$\mathcal{L}_{rect} = \sum_{x_t \in X_t} \Big( \frac{1}{D_{kl}(x_t)} \mathcal{L}_{ce}(x_t) + D_{kl}(x_t) \Big). \quad (8)$$

It is worth noting that we do not intend to minimize the prediction bias under all conditions: if the prediction variance receives a large value, we do not punish the prediction bias $\mathcal{L}_{ce}$. Meanwhile, to prevent the model from predicting a large variance all the time, as a trade-off we introduce the regularization term by adding $D_{kl}$. Besides, since $D_{kl}$ could be zero, Equation 8 may suffer from division by zero. To stabilize the training, we adopt the policy in Kendall and Gal (2017) and replace $\frac{1}{D_{kl}}$ with $\exp(-D_{kl})$. Therefore, the loss term could be rewritten with the approximated terms as:

$$\mathcal{L}_{rect} = \sum_{x_t \in X_t} \Big( \exp\big(-D_{kl}(x_t)\big) \, \mathcal{L}_{ce}(x_t) + D_{kl}(x_t) \Big). \quad (9)$$
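The exp(−variance) re-weighting described above can be sketched per pixel as follows; the function name and toy numbers are hypothetical, chosen only to show the discounting effect:

```python
import numpy as np

def rectified_loss(ce_map, var_map):
    """exp(-var) down-weights the cross-entropy where the two classifiers
    disagree; the additive var term keeps the predicted variance from
    growing without bound."""
    return float(np.mean(np.exp(-var_map) * ce_map + var_map))

# Two pixels: the second has a high cross-entropy, i.e. a likely-wrong
# pseudo label, and a high prediction variance.
ce = np.array([1.0, 5.0])
plain = float(ce.mean())                            # conventional pseudo label loss
rect = rectified_loss(ce, np.array([0.0, 2.0]))     # high variance discounts pixel 2
zero_var = rectified_loss(ce, np.zeros(2))          # degrades to the plain CE mean
```

With zero variance the objective reduces exactly to conventional pseudo label learning, matching the scalability discussion above.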
The training procedure of the proposed method is summarized in Algorithm 1. In practice, we utilize the parameters learned on the source-domain dataset to initialize the model. In every iteration, we calculate the prediction variance as well as the cross-entropy loss for the given inputs, and use the rectified loss to update the model parameters. The training cost of the rectified objective is approximately equal to that of conventional pseudo label learning, since no extra modules are introduced.
Discussion: What are the advantages of the proposed variance regularization? First, it does not introduce extra parameters or modules to model the uncertainty. Different from Yu et al. (2019), we do not explicitly introduce Gaussian noise or extra branches; instead, we leverage the prediction variance of the model itself. Second, the proposed variance regularization has good scalability. If the variance equals zero, the optimization loss degrades to the objective of conventional pseudo label learning, and the model focuses on minimizing the prediction bias only. In contrast, when the variance is high, the model is prone to neglect the bias and skip ambiguous pseudo labels. Third, the proposed variance regularization has the same shape as the prediction, and works as a pixel-wise threshold on the pseudo label. As shown in Figure 3, we observe that the noise usually exists in areas with high variance. The proposed rectified loss assigns different thresholds to different areas: for locations with coherent predictions, the variance regularization drives the model to trust the pseudo labels; for areas with ambiguous predictions, it drives the model to neglect them. Different from existing works that set a unified threshold for all training samples, the proposed variance regularization provides a more accurate, adaptive threshold for every pixel.
Network Architecture. In this work, we utilize the widely-used DeepLab-v2 Chen et al. (2017) as the baseline model, which adopts ResNet-101 He et al. (2016a) as the backbone. We follow most existing works Tsai et al. (2018, 2019); Luo et al. (2019a, b); Zheng and Yang (2019) to add one auxiliary classifier. The auxiliary classifier has a similar structure to the primary classifier, including one Atrous Spatial Pyramid Pooling (ASPP) module Chen et al. (2017) and one fully-connected layer, and is added after the res4b22 layer. We also insert a dropout layer Srivastava et al. (2014) before the fully-connected layer.
Pseudo Label. To verify the effectiveness of the proposed method, we deploy two existing methods, i.e., AdaptSegNet Tsai et al. (2018) and MRNet Zheng and Yang (2019), to generate the pseudo labels of the target-domain dataset.
AdaptSegNet Tsai et al. (2018) is a widely-adopted baseline model, which utilizes adversarial training to align the semantic outputs.
MRNet Zheng and Yang (2019) is a recent work, which leverages a memory module to regularize the model training, especially for the target-domain data.
Specifically, MRNet achieves superior mIoU to AdaptSegNet on all three benchmarks. Therefore, unless otherwise specified, we adopt the pseudo labels generated by the stronger baseline, i.e., MRNet.
Training Details. The input image is resized with scale jittering and then randomly cropped for training; horizontal flipping is randomly applied. We train the model with mini-batches, and the parameters of the batch normalization layers are also fine-tuned. Following Zhao et al. (2017); Zhang et al. (2019a, 2020), we deploy the poly learning rate policy, multiplying the base learning rate by the factor $(1 - \frac{iter}{max\_iter})^{power}$. We adopt the early-stop strategy and stop the training after 50k iterations. At inference, we follow Zheng and Yang (2019) and combine the outputs of both classifiers as the final result. Our implementation is based on PyTorch Paszke et al. (2017).
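The poly schedule can be computed as below; the base rate of 2.5e-4 and power of 0.9 are assumed placeholder values (common for DeepLab-v2 training), not figures confirmed by this paper:

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Poly learning rate policy: scale the base rate by
    (1 - iter/max_iter)**power, decaying smoothly toward zero."""
    return base_lr * (1.0 - cur_iter / float(max_iter)) ** power

# Decay over a 50k-iteration run (assumed base_lr and power).
lrs = [poly_lr(2.5e-4, it, 50000) for it in (0, 25000, 49999)]
```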
Datasets. For brevity, we denote the test setting as A → B, where A represents the labeled source domain and B denotes the unlabeled target domain. We evaluate the proposed method on two widely-used synthetic-to-real benchmarks, i.e., GTA5 Richter et al. (2016) → Cityscapes Cordts et al. (2016) and SYNTHIA Ros et al. (2016) → Cityscapes Cordts et al. (2016). Both source datasets, i.e., GTA5 and SYNTHIA, are synthetic, so the corresponding annotations are easy to obtain. Specifically, the GTA5 dataset is collected from a video game, and the SYNTHIA dataset is rendered from a virtual city; both come with pixel-level segmentation annotations for training. The realistic dataset, Cityscapes, collects street-view scenes from different cities and provides separate training and validation splits. Besides, we also evaluate the performance on the cross-city benchmark, i.e., Cityscapes Cordts et al. (2016) → Oxford RobotCar Maddern et al. (2017). We utilize the annotations of the Cityscapes training images in this setting, and the Oxford RobotCar dataset serves as the unlabeled target domain. We note that this setting is challenging due to different weather conditions: Oxford RobotCar is collected on rainy days, while the Cityscapes dataset is mostly collected on sunny days. The differences between the datasets are listed in Table 1.
Evaluation Metric. We report the per-class IoU and the mean IoU (mIoU) over all classes. For SYNTHIA → Cityscapes, due to the limited annotated classes in the source dataset, we report the results based on 13 categories as well as on 16 categories including three additional small-scale categories. For Cityscapes → Oxford RobotCar, we follow the setting in Tsai et al. (2019) and report the per-class IoU of 9 categories as well as the mIoU accuracy.
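For reference, per-class IoU and mIoU can be computed from a confusion matrix as follows (a generic sketch of the standard metric, not code from the paper; the toy 2-class matrix is our own):

```python
import numpy as np

def per_class_iou(conf):
    """IoU per class from a (C, C) confusion matrix with rows = ground
    truth and columns = prediction: IoU = tp / (tp + fp + fn)."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp   # predicted as c but actually another class
    fn = conf.sum(axis=1) - tp   # actually c but predicted as another class
    return tp / np.maximum(tp + fp + fn, 1e-8)

# Toy 2-class confusion matrix: one ground-truth class-0 pixel is
# mispredicted as class 1.
conf = np.array([[3, 1],
                 [0, 4]])
ious = per_class_iou(conf)   # class 0: 3/4, class 1: 4/5
miou = float(ious.mean())    # 0.775
```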
Table 2: GTA5 → Cityscapes (per-class IoU and mIoU, %); class columns follow the standard Cityscapes 19-class order.

| Method | road | side. | build. | wall | fence | pole | light | sign | veg. | terr. | sky | person | rider | car | truck | bus | train | motor | bike | mIoU |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CyCADA Hoffman et al. (2018) | 79.1 | 33.1 | 77.9 | 23.4 | 17.3 | 32.1 | 33.3 | 31.8 | 81.5 | 26.7 | 69.0 | 62.8 | 14.7 | 74.5 | 20.9 | 25.6 | 6.9 | 18.8 | 20.4 | 39.5 |
| MCD Saito et al. (2018) | 90.3 | 31.0 | 78.5 | 19.7 | 17.3 | 28.6 | 30.9 | 16.1 | 83.7 | 30.0 | 69.1 | 58.5 | 19.6 | 81.5 | 23.8 | 30.0 | 5.7 | 25.7 | 14.3 | 39.7 |
| AdaptSegNet Tsai et al. (2018) | 86.5 | 36.0 | 79.9 | 23.4 | 23.3 | 23.9 | 35.2 | 14.8 | 83.4 | 33.3 | 75.6 | 58.5 | 27.6 | 73.7 | 32.5 | 35.4 | 3.9 | 30.1 | 28.1 | 42.4 |
| SIBAN Luo et al. (2019a) | 88.5 | 35.4 | 79.5 | 26.3 | 24.3 | 28.5 | 32.5 | 18.3 | 81.2 | 40.0 | 76.5 | 58.1 | 25.8 | 82.6 | 30.3 | 34.4 | 3.4 | 21.6 | 21.5 | 42.6 |
| CLAN Luo et al. (2019b) | 87.0 | 27.1 | 79.6 | 27.3 | 23.3 | 28.3 | 35.5 | 24.2 | 83.6 | 27.4 | 74.2 | 58.6 | 28.0 | 76.2 | 33.1 | 36.7 | 6.7 | 31.9 | 31.4 | 43.2 |
| APODA Yang et al. (2020) | 85.6 | 32.8 | 79.0 | 29.5 | 25.5 | 26.8 | 34.6 | 19.9 | 83.7 | 40.6 | 77.9 | 59.2 | 28.3 | 84.6 | 34.6 | 49.2 | 8.0 | 32.6 | 39.6 | 45.9 |
| PatchAlign Tsai et al. (2019) | 92.3 | 51.9 | 82.1 | 29.2 | 25.1 | 24.5 | 33.8 | 33.0 | 82.4 | 32.8 | 82.2 | 58.6 | 27.2 | 84.3 | 33.4 | 46.3 | 2.2 | 29.5 | 32.3 | 46.5 |
| AdvEnt Vu et al. (2019) | 89.4 | 33.1 | 81.0 | 26.6 | 26.8 | 27.2 | 33.5 | 24.7 | 83.9 | 36.7 | 78.8 | 58.7 | 30.5 | 84.8 | 38.5 | 44.5 | 1.7 | 31.6 | 32.4 | 45.5 |
| FCAN Zhang et al. (2018) | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 46.6 |
| CBST Zou et al. (2018) | 91.8 | 53.5 | 80.5 | 32.7 | 21.0 | 34.0 | 28.9 | 20.4 | 83.9 | 34.2 | 80.9 | 53.1 | 24.0 | 82.7 | 30.3 | 35.9 | 16.0 | 25.9 | 42.8 | 45.9 |
| MRKLD Zou et al. (2019) | 91.0 | 55.4 | 80.0 | 33.7 | 21.4 | 37.3 | 32.9 | 24.5 | 85.0 | 34.1 | 80.8 | 57.7 | 24.6 | 84.1 | 27.8 | 30.1 | 26.9 | 26.0 | 42.3 | 47.1 |
| MRNet Zheng and Yang (2019) | 89.1 | 23.9 | 82.2 | 19.5 | 20.1 | 33.5 | 42.2 | 39.1 | 85.3 | 33.7 | 76.4 | 60.2 | 33.7 | 86.0 | 36.1 | 43.3 | 5.9 | 22.8 | 30.8 | 45.5 |
Table 3: SYNTHIA → Cityscapes (per-class IoU, %). mIoU* averages the 13 main categories; mIoU averages all 16 categories (including wall, fence and pole). "-" denotes a score not reported.

| Method | road | side. | build. | wall | fence | pole | light | sign | veg. | sky | person | rider | car | bus | motor | bike | mIoU* | mIoU |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MCD Saito et al. (2018) | 84.8 | 43.6 | 79.0 | 3.9 | 0.2 | 29.1 | 7.2 | 5.5 | 83.8 | 83.1 | 51.0 | 11.7 | 79.9 | 27.2 | 6.2 | 0.0 | 43.5 | 37.3 |
| SIBAN Luo et al. (2019a) | 82.5 | 24.0 | 79.4 | - | - | - | 16.5 | 12.7 | 79.2 | 82.8 | 58.3 | 18.0 | 79.3 | 25.3 | 17.6 | 25.9 | 46.3 | - |
| PatchAlign Tsai et al. (2019) | 82.4 | 38.0 | 78.6 | 8.7 | 0.6 | 26.0 | 3.9 | 11.1 | 75.5 | 84.6 | 53.5 | 21.6 | 71.4 | 32.6 | 19.3 | 31.7 | 46.5 | 40.0 |
| AdaptSegNet Tsai et al. (2018) | 84.3 | 42.7 | 77.5 | - | - | - | 4.7 | 7.0 | 77.9 | 82.5 | 54.3 | 21.0 | 72.3 | 32.2 | 18.9 | 32.3 | 46.7 | - |
| CLAN Luo et al. (2019b) | 81.3 | 37.0 | 80.1 | - | - | - | 16.1 | 13.7 | 78.2 | 81.5 | 53.4 | 21.2 | 73.0 | 32.9 | 22.6 | 30.7 | 47.8 | - |
| APODA Yang et al. (2020) | 86.4 | 41.3 | 79.3 | - | - | - | 22.6 | 17.3 | 80.3 | 81.6 | 56.9 | 21.0 | 84.1 | 49.1 | 24.6 | 45.7 | 53.1 | - |
| AdvEnt Vu et al. (2019) | 85.6 | 42.2 | 79.7 | 8.7 | 0.4 | 25.9 | 5.4 | 8.1 | 80.4 | 84.1 | 57.9 | 23.8 | 73.3 | 36.4 | 14.2 | 33.0 | 48.0 | 41.2 |
| CBST Zou et al. (2018) | 68.0 | 29.9 | 76.3 | 10.8 | 1.4 | 33.9 | 22.8 | 29.5 | 77.6 | 78.3 | 60.6 | 28.3 | 81.6 | 23.5 | 18.8 | 39.8 | 48.9 | 42.6 |
| MRKLD Zou et al. (2019) | 67.7 | 32.2 | 73.9 | 10.7 | 1.6 | 37.4 | 22.2 | 31.2 | 80.8 | 80.5 | 60.8 | 29.1 | 82.8 | 25.0 | 19.4 | 45.3 | 50.1 | 43.8 |
| MRNet Zheng and Yang (2019) | 82.0 | 36.5 | 80.4 | 4.2 | 0.4 | 33.7 | 18.0 | 13.4 | 81.1 | 80.8 | 61.3 | 21.7 | 84.4 | 32.4 | 14.8 | 45.7 | 50.2 | 43.2 |
Table 4: Cityscapes → Oxford RobotCar (per-class IoU and mIoU, %).

| Method | road | side. | build. | light | sign | sky | person | auto. | two-wheel | mIoU |
|---|---|---|---|---|---|---|---|---|---|---|
| AdaptSegNet Tsai et al. (2018) | 95.1 | 64.0 | 75.7 | 61.3 | 35.5 | 63.9 | 58.1 | 84.6 | 57.0 | 69.5 |
| PatchAlign Tsai et al. (2019) | 94.4 | 63.5 | 82.0 | 61.3 | 36.0 | 76.4 | 61.0 | 86.5 | 58.6 | 72.0 |
| MRNet Zheng and Yang (2019) | 95.9 | 73.5 | 86.2 | 69.3 | 31.9 | 87.3 | 57.9 | 88.8 | 61.5 | 72.5 |
Synthetic-to-real. We compare the proposed method with other recent semantic segmentation adaptation methods that have reported results on, or that we could re-implement for, the three benchmarks. For a fair comparison, we mainly compare results based on the same network structure, i.e., DeepLab-v2. The competing methods cover a wide range of approaches and can be roughly categorised according to their usage of pseudo labels: CyCADA Hoffman et al. (2018), MCD Saito et al. (2018), AdaptSegNet Tsai et al. (2018), SIBAN Luo et al. (2019a), CLAN Luo et al. (2019b), APODA Yang et al. (2020) and PatchAlign Tsai et al. (2019) do not leverage pseudo labels and focus on aligning the distributions of the source domain and the target domain; CBST Zou et al. (2018), MRKLD Zou et al. (2019), and our implemented MRNet+Pseudo are based on pseudo label learning to fully exploit the unlabeled target-domain data.
First, we consider the widely-used GTA5 → Cityscapes benchmark. Table 2 shows that: (1) The proposed method achieves state-of-the-art mIoU, surpassing the competing methods, and also yields competitive per-class IoU. (2) Compared to our baseline, MRNet+Pseudo, which adopts conventional pseudo label learning, the proposed method gains a clear mIoU improvement, verifying its effectiveness in rectifying the learning from noisy pseudo labels; the variance regularization plays an important role in this result. (3) Meanwhile, the proposed method outperforms the source-domain model, MRNet, which provides the pseudo labels. This verifies the effectiveness of pseudo label learning, which pushes the model to be confident about its predictions: if most pseudo labels are correct, pseudo label learning can effectively boost the target-domain performance. (4) The proposed method also surpasses the domain alignment methods by a relatively large margin. For example, the modified AdaptSegNet, i.e., PatchAlign Tsai et al. (2019), leverages patch-level information but remains inferior to ours. (5) Without using prior knowledge, the proposed method is also superior to other pseudo label learning works, i.e., CBST Zou et al. (2018) and MRKLD Zou et al. (2019). CBST Zou et al. (2018) introduces location knowledge, e.g., the sky is always in the upper part of the image. In this work, we do not apply such prior knowledge, but we note that it is compatible with our method.
We observe a similar result on SYNTHIA -> Cityscapes (see Table 3). Following the setting in Zou et al. (2018, 2019), we report the mIoU over 13 categories as well as over 16 categories; the latter additionally includes the IoU of three small-scale object classes, i.e., Wall, Fence, and Pole. The proposed method achieves competitive mIoU under both the 16-category and the 13-category settings. Compared to the baseline, MRNet+Pseudo, we obtain clear mIoU improvements under both settings. Meanwhile, the proposed method also outperforms the second best method, APODA Yang et al. (2020).
Cross-city. We further evaluate the proposed method on the cross-city benchmark, i.e., Cityscapes -> Oxford RobotCar. Both the source-domain and target-domain datasets are collected in real-world scenarios. We follow the setting in Tsai et al. (2019) and report the IoU of the 9 categories shared between the two datasets. As shown in Table 4, the proposed method achieves competitive mIoU. Compared to the baseline, MRNet+Pseudo, the improvement on the cross-city benchmark is relatively limited. We speculate that this is because the number of wrong pseudo labels is small, so the advantages of the proposed method cannot be fully exploited. The source-domain model, MRNet, already achieves a strong mIoU; therefore, the baseline, MRNet+Pseudo, can also obtain competitive results by directly utilizing all pseudo labels. Besides, it is worth noting that the proposed method achieves the best per-class IoU on 6 of the 9 categories, including traffic sign, a small-scale object class.
Visualization. As shown in Figure 4, we provide qualitative results of the semantic segmentation adaptation on all three benchmarks. Compared to the source model, pseudo label learning significantly improves the performance. Besides, in contrast with the baseline method using conventional pseudo label learning, we observe that the proposed variance regularization scales better to small-scale objects, such as traffic signs and poles. We speculate that this is because the noisy pseudo labels often contain errors that predict rare categories as common, large-scale ones. The proposed method rectifies the learning from such mistakes, yielding more reasonable segmentation predictions.
Variance Regularization vs. Handcrafted Threshold. The proposed variance regularization is free from setting a threshold. To verify its effectiveness, we also compare it with conventional pseudo label learning under different thresholds. As shown in Table 5, the proposed regularization achieves superior performance to the handcrafted thresholds. This is because the variance regularization can be viewed as a dynamic threshold, providing a different threshold for every pixel in the same image. For coherent predictions, the model tends to learn from the pseudo label, maximizing the impact of such labels; for incoherent predictions, the model tends to neglect the pseudo label automatically, minimizing the negative effect of noisy labels. The best handcrafted threshold neglects the labels whose prediction score falls below a fixed value, yet the proposed method still yields a further mIoU increment over it.
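The contrast between a handcrafted threshold and the variance-based dynamic weighting can be sketched with toy numbers (a minimal illustration only; the confidence and variance values below are made up and do not come from the paper's experiments):

```python
import numpy as np

# Hypothetical per-pixel confidence scores and prediction variances.
conf = np.array([0.95, 0.80, 0.60, 0.40])
var = np.array([0.01, 0.10, 0.50, 2.00])

# Handcrafted threshold: one hard 0/1 mask shared by every pixel.
fixed_mask = (conf > 0.90).astype(float)

# Variance regularization: a soft, pixel-wise weight exp(-var) that is
# close to 1 for coherent predictions and near 0 for incoherent ones.
dynamic_weight = np.exp(-var)
```

Where the fixed mask either keeps or discards a pixel outright, the soft weight grades each pixel's contribution individually, which is the "dynamic threshold" behaviour described above.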
Could the proposed method work on the pseudo labels generated by other models (e.g., with more noise)? To verify the scalability of the proposed method, we adopt AdaptSegNet Tsai et al. (2018) to generate pseudo labels. AdaptSegNet is inferior to MRNet in terms of mIoU on GTA5 -> Cityscapes. As shown in Table 6, the proposed method can still learn from the labels generated by AdaptSegNet and improves the performance over the source model. Meanwhile, the proposed method is also superior to the baseline with conventional pseudo label learning.
| Source model | mIoU |
| --- | --- |
| AdaptSegNet Tsai et al. (2018) | 42.4 |
| MRNet Zheng and Yang (2019) | 45.5 |
Training Convergence. As shown in Figure 6, conventional pseudo label learning (orange line) tends to over-fit all pseudo labels, including the noisy ones; therefore, its training loss converges to zero. In contrast, the proposed method (blue line) also converges, but does not force the loss to zero. This is because the variance regularization term flexibly punishes wrong predictions on uncertain pseudo labels.
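One common way to realize such a rectified objective (a sketch of our illustrative reading, with made-up loss values; the exact formulation in the paper may differ) is to down-weight the per-pixel cross-entropy by exp(-variance) and add the variance back as a regularizer, so uncertain pixels are never forced all the way to zero loss:

```python
import numpy as np

def rectified_loss(ce, var):
    """Cross-entropy down-weighted by exp(-variance), plus the variance
    term itself, so uncertain pixels keep a bounded residual loss."""
    return np.exp(-var) * ce + var

# A consistent pixel is fitted almost fully; an uncertain pixel has its
# raw cross-entropy discounted rather than being driven to zero.
consistent = rectified_loss(ce=0.05, var=0.01)
uncertain = rectified_loss(ce=2.00, var=1.50)
```

Under this form, the only way to zero the loss is to be both correct and certain; a noisy pseudo label can instead be "paid for" with a nonzero variance, which matches the non-zero converged loss in Figure 6.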
Uncertainty Visualization. As a by-product, we can also estimate the prediction uncertainty at inference time. We provide visualization results to show the difference between the uncertainty estimation and the confidence score. As shown in Figure 5, we observe that the model tends to output low confidence scores mainly for boundary pixels, which does not provide an effective cue for ambiguous predictions. Instead, the proposed prediction variance reflects the label uncertainty, and the highlighted areas in the prediction variance map largely overlap with the wrong predictions.
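To see why variance and confidence can disagree: with two classifier heads (as in MRNet's main and auxiliary classifiers), a pixel can have the same max-softmax confidence in both views yet very different variance. Below is a hedged numpy sketch where the variance is taken as the KL divergence between the two predictions — one natural choice for this setup, though not necessarily the paper's exact formula; the probabilities are made up:

```python
import numpy as np

def kl_variance(p_main, p_aux, eps=1e-8):
    """Per-pixel prediction variance as the KL divergence between the
    main and auxiliary classifier outputs (classes on the last axis)."""
    return np.sum(p_main * (np.log(p_main + eps) - np.log(p_aux + eps)), axis=-1)

# Two pixels with identical main-classifier predictions; only the
# second one disagrees with the auxiliary classifier.
p_main = np.array([[0.7, 0.2, 0.1],
                   [0.7, 0.2, 0.1]])
p_aux = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.2, 0.7]])

confidence = p_main.max(axis=-1)       # identical for both pixels
variance = kl_variance(p_main, p_aux)  # ~0 vs. clearly positive
```

The confidence score cannot separate the two pixels, while the cross-classifier variance flags the second one — consistent with the observation that the variance map, unlike the confidence map, highlights the likely-wrong predictions.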
We identify the challenge of pseudo label learning in adaptive semantic segmentation and present a simple and effective method to estimate the prediction uncertainty during training. We incorporate the uncertainty into the optimization objective as a variance regularization to rectify the training. The regularization helps the model learn from noisy labels without introducing extra parameters or modules. As a result, we achieve competitive performance on three benchmarks, including two synthetic-to-real benchmarks and one cross-city benchmark. In the future, we will continue to investigate the usage of uncertainty and its applications to other related tasks, e.g., medical imaging.