Recent advances in domain adaptation help alleviate the labeling effort required to train fully-supervised models, which is especially helpful for tasks like semantic segmentation. Most previous works address the single-target setting, whose goal is to adapt from a source to one particular target domain of interest, e.g. a specific urban area. In practice, however, the perception system is often put to the test in various scenarios, including different cities, weather or lighting conditions. To deal with multiple test distributions, one can straightforwardly adopt single-target techniques by either (i) training multiple models, one per target domain, and adaptively activating one at test time, or (ii) merging all target data and treating them as drawn from a single target distribution. While the former strategy raises storage issues on embedded platforms and is difficult to scale up, the latter overlooks distribution shifts across the different target domains.
In this work, we address multi-target unsupervised domain adaptation (UDA) in semantic segmentation. We aim to learn a single segmenter that achieves equally good performance in all target domains, simultaneously closing the distribution gaps between labeled and unlabeled data (source-target) and among target domains (target-target). Our work is in line with recent efforts [3, 7, 15] toward more practical domain adaptation settings for real-life applications. Unlike most existing multi-target works, which specifically consider image classification, we study here the more complex task of semantic segmentation.
We propose two adversarial UDA frameworks with architectures and learning schemes designed for the multi-target setup. The multi-discriminator model explicitly reduces both source-target and target-target domain gaps via adversarial learning, aligning each target domain with its counterparts. Our second framework, called multi-target knowledge transfer (MTKT), relaxes the complexity of multi-target optimization by adopting a multi-teacher/single-student mechanism. Each target-specific teacher handles a specific source-target domain gap via adversarial training; The target-agnostic student learns from all teachers to achieve target-target alignment and to perform equally well in all target domains.
Our contributions can be summarized as follows:
We propose two multi-target UDA frameworks for semantic segmentation.
We conduct extensive experiments comparing these two models against state-of-the-art baselines on the proposed benchmarks. Our approaches report consistent improvements over all these baselines.
2 Related Works
Unsupervised Domain Adaptation for Semantic Segmentation. UDA is a setting that has received a lot of attention recently [10, 16, 22, 23, 25, 27]. The objective is to train a model on an unlabeled target domain by leveraging information from a labeled source domain, which is usually done by aligning in some way the distributions of the source and target domains. Some strategies constrain the training with regularization such as maximum mean discrepancy (MMD) or correlation alignment. Most recent works, in particular in UDA for semantic segmentation, adopt an adversarial training strategy, either at feature level or at output level [23, 25]. Some works also include a form of style transfer or image translation [10, 27, 28] to obtain target-looking source images while keeping the source annotations. Additionally, a few works resort to “pseudo-labeling” [14, 21, 31] to refine their model with the help of automatically produced annotations in the target domain.
While these methods are effective at adapting from one domain to another, their UDA setting is limited. In real-world scenarios, data may come from various domains: In urban scenes for instance, such domain variations may stem from different sensors, weather conditions or cities. While the underlying distribution is similar across domains, traditional UDA models are not robust to changes of target domain. Moreover, since they are specifically designed for single-source to single-target alignment, they fail to leverage information across multiple source or target domains.
Some recent works extend the standard UDA setting in semantic segmentation to more source or target domains. MADAN  tackles the task of multi-source domain adaptation for semantic segmentation where a model is trained using multiple labeled source domains and adapted on a single target domain. The authors first transform source images into adapted domains, similar to the target domain, then bring these new domains closer together with a sub-domain aggregation discriminator. They finally train the segmentation network by performing adversarial feature-level alignment between adapted and target domains. Closer to our setting, OCDA  addresses UDA with an open compound target domain: In this task, the target domain may be considered as a combination of multiple homogeneous target domains – for instance, similar weather conditions such as ‘sunny’, ‘foggy’, etc. – where the domain labels are not known during training. Moreover, previously unseen target domains may be encountered during inference. Unlike OCDA, our multi-target setting assumes that the domain of origin is known at training time and that no new domains are faced at test time (except in additional generalization experiments).
Multi-Target Domain Adaptation for Classification. Multi-target domain adaptation is still a fairly recent setting in the literature and mostly tackles classification tasks. Two main scenarios emerge in the works on this task. In the first one, even though the target is composed of multiple domains with gaps and misalignments, the domain labels are unknown during training and test. One approach proposes an architecture that extracts domain-invariant features by performing source-target domain disentanglement, and additionally removes class-irrelevant features with a class-disentanglement loss. In a similar setting, another work presents an adversarial meta-adaptation network that both aligns source features with mixed-target features and uses an unsupervised meta-learner to group the target inputs into clusters, which are then adversarially aligned. In the second scenario, the target identities are known for the training samples but remain unknown during inference. To handle it, one method learns a common parameter dictionary from the different target domains and extracts the target model parameters by sparse representation; another adopts a disentanglement strategy, separately capturing domain-specific private features and shared feature representations by learning a domain classifier and a class-label predictor, and trains a shared decoder to reconstruct the input sample from those disentangled representations.
In the present work, we adopt the second multi-target hypothesis: The target identities are known for the training samples but not for test ones. In fact, assuming that this information is available at test time is incompatible with some practical scenarios. More importantly, it would hinder generalization to previously-unseen domains, an important issue for autonomous systems in the wild. To the best of our knowledge, tackling semantic segmentation in this multi-target UDA scenario has only been proposed in a recently published concurrent work . This work proposes to train a fully-fledged segmentation network for each domain and to ensure consistency among these multiple networks with image stylization between domains.
3 Adversarial Adaptation to Multiple Targets
3.1 Problem Formulation
Standard Unsupervised Domain Adaptation. The standard setting addressed in most UDA works is single-source and single-target. For adaptation, the model is trained on both a source-domain set $\mathcal{X}_s$ with the associated ground-truth set $\mathcal{Y}_s$ and an unlabeled target-domain set $\mathcal{X}_t$. For semantic segmentation over $C$ classes, the sets $\mathcal{X}_s$ and $\mathcal{X}_t$ contain training images $x \in \mathbb{R}^{H \times W \times 3}$, while the annotation set $\mathcal{Y}_s$ contains, for each source image $x$, a map $y_x \in \{0,1\}^{H \times W \times C}$ of one-hot vectors indicating the ground-truth semantic classes of all pixels.
A segmentation network $F$ takes an image $x$ as input and predicts a soft-segmentation map $P_x = F(x) \in [0,1]^{H \times W \times C}$. The final segmentation map is given by the max-score class $\operatorname{argmax}_c P_x^{[h,w,c]}$ at each pixel $[h,w]$. UDA methods aim at aligning the distributions of the source-domain and target-domain training data such that, at test time, the segmenter produces satisfactory predictions for target-domain inputs, without having been trained on labeled images from this domain.
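As a concrete illustration of this formulation, here is a minimal NumPy sketch of extracting the final label map from the soft-segmentation output; the toy probability map and its shape are ours, not the paper's:

```python
import numpy as np

# Toy soft-segmentation map of shape (H, W, C) = (2, 2, 3): per-pixel
# class probabilities as produced by the segmenter (hand-crafted here).
P = np.array([[[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
              [[0.2, 0.3, 0.5], [0.6, 0.3, 0.1]]])

# Final segmentation map: the max-score class at each pixel.
pred = P.argmax(axis=-1)
print(pred.tolist())  # [[0, 1], [2, 0]]
```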
Multi-Target UDA. In this work, we consider a different UDA scenario where $T$ distinct target domains must be jointly handled. These target domains are represented by unlabeled training sets $\mathcal{X}_{t_i}$, $i = 1, \dots, T$. As in the standard setting, we assume that the annotated training examples stem from a single source domain, for instance a specific synthetic environment. The main goal is to train a single segmenter that achieves equally good results on all target-domain test sets. While the target domain of origin is known for all unlabeled training examples, we assume as in the classification approaches of [7, 29] that this information is not accessible at test time.
3.2 Revisiting Adversarial UDA Approach
Recent state-of-the-art single-target UDA approaches are based on adversarial training to align source and target distributions. In such approaches, besides the segmenter $F$ with parameters $\theta_F$, an additional network $D$ with parameters $\theta_D$, called the discriminator, is trained to play the segmenter’s “adversary”: $D$ is learned to predict the domain of an input from suitable representations extracted by $F$, such as intermediate or close-to-output features. Concurrently, $F$ tries to produce results that fool $D$ into wrong discrimination. In semantic segmentation, adversarial approaches operating on close-to-prediction representations have had the most success. AdaptSegnet proposes adversarial learning on top of the soft-segmentation predictions $P_x$. AdvEnt improves on AdaptSegnet by using instead the “weighted self-information” maps $I_x$, defined entry-wise as $I_x = -P_x \odot \log P_x$, which brings an additional entropy-minimization effect through adversarial alignment. Such single-target adversarial frameworks serve as the building block on top of which we develop our multi-target strategies. Hereafter, $Q_x$ denotes the representation in use, which stands for either $P_x$ in AdaptSegnet or $I_x$ in AdvEnt.
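For concreteness, the weighted self-information map used by AdvEnt can be computed entry-wise from the soft predictions; a small NumPy sketch (array shapes and the `eps` stabilizer are our own illustrative choices):

```python
import numpy as np

def weighted_self_information(P, eps=1e-12):
    """AdvEnt-style map I = -P * log(P), computed entry-wise on the
    soft-segmentation probabilities P of shape (H, W, C)."""
    return -P * np.log(P + eps)

# Maximally uncertain predictions: uniform over C = 3 classes.
P = np.full((2, 2, 3), 1.0 / 3.0)
I = weighted_self_information(P)
# Summing I over classes gives the per-pixel entropy, here log(3).
print(round(float(I[0, 0].sum()), 4))
```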
In practice, $D$ is a fully-convolutional binary classifier with parameters $\theta_D$. It classifies the segmenter’s output into either class 1 (source) or 0 (target). To train the discriminator, we minimize the classification loss:
$$\mathcal{L}_D(\theta_D) = \mathbb{E}_{\mathcal{X}_s}\big[\operatorname{BCE}\!\big(D(Q_x), 1\big)\big] + \mathbb{E}_{\mathcal{X}_t}\big[\operatorname{BCE}\!\big(D(Q_x), 0\big)\big], \tag{1}$$
where $\operatorname{BCE}$ stands for the binary cross-entropy loss and $\mathbb{E}$ denotes averaging over the set in subscript.
Concurrently, the segmenter is trained over its parameters $\theta_F$ not only to minimize the supervised segmentation loss $\mathcal{L}_{\text{seg}}$ on source-domain data, but also to fool the discriminator via minimizing an adversarial loss $\mathcal{L}_{\text{adv}}$. The final objective reads:
$$\min_{\theta_F}\ \mathbb{E}_{\mathcal{X}_s}\big[\mathcal{L}_{\text{seg}}(x, y_x)\big] + \lambda_{\text{adv}}\, \mathbb{E}_{\mathcal{X}_t}\big[\operatorname{BCE}\!\big(D(Q_x), 1\big)\big], \tag{2}$$
with $\lambda_{\text{adv}}$ a weight balancing the two terms; $\mathcal{L}_{\text{seg}}$ is the common cross-entropy loss. During training, one alternately minimizes the two losses $\mathcal{L}_D$ and (2).
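The two alternating objectives above can be sketched numerically. In this toy example the discriminator scores are hypothetical scalars rather than outputs of a real network, and `bce` is our own helper:

```python
import numpy as np

def bce(pred, label, eps=1e-12):
    # Binary cross-entropy between discriminator scores in (0, 1) and a
    # domain label (1: source, 0: target), averaged over all entries.
    return float(np.mean(-label * np.log(pred + eps)
                         - (1 - label) * np.log(1 - pred + eps)))

# Hypothetical discriminator outputs on source and target representations.
d_src = np.array([0.8, 0.9])   # D is pushed to output 1 on these
d_tgt = np.array([0.3, 0.2])   # D is pushed to output 0 on these

# Discriminator loss: classify source as 1 and target as 0.
loss_D = bce(d_src, 1.0) + bce(d_tgt, 0.0)
# Adversarial term: the segmenter tries to make the discriminator
# label target samples as source (label 1).
loss_adv = bce(d_tgt, 1.0)
```

In practice the two losses are minimized alternately: one step on the discriminator's parameters with `loss_D`, one step on the segmenter's parameters with the segmentation loss plus a weighted `loss_adv`.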
Figure 2 provides a high-level view of the training flow in recent adversarial UDA approaches. For more details, we refer the readers to [23, 25] for instance. To later facilitate the presentation of our proposed strategies, the segmenter $F$ is decoupled into a feature extractor $F_{\text{feat}}$ followed by a pixel-wise classifier $C$.
Discussion. Approaches like [23, 25] handle only one source domain and one target domain. In our setting with multiple target domains, a simple strategy is to merge all target datasets into a single one and then to utilize an existing single-source single-target UDA framework. Such a strategy however disregards the inherent discrepancy among target domains. As we show in the experiments, this multi-target baseline is less effective than the proposed strategies which explicitly handle inter-target domain shifts. In what follows, we describe these two novel frameworks.
3.3 Multi-Target Frameworks
Multi-Discriminator. Our first strategy for multi-target UDA, called multi-discriminator (‘Multi-Dis.’ in short), relies on two types of discriminators to align each target domain with the source (source-target discriminators) and with other targets (target-target discriminators). Figure 3 illustrates this first approach.
Source-target adversarial alignment. We introduce a discriminator $D_i$ with parameters $\theta_{D_i}$ for each target domain $i \in \{1, \dots, T\}$, learned to discriminate $\mathcal{X}_{t_i}$ from the source set $\mathcal{X}_s$. Denoting $\mathcal{L}_{D_i}$ the minimization objective of this discriminator, defined as in (1) on domain $\mathcal{X}_{t_i}$, we train these source-target discriminators with the mean objective:
$$\min_{\theta_{D_1}, \dots, \theta_{D_T}}\ \frac{1}{T} \sum_{i=1}^{T} \mathcal{L}_{D_i}(\theta_{D_i}).$$
Concurrently, the segmenter is trained to fool these discriminators with the adversarial objective:
$$\mathcal{L}_{\text{adv}}^{s\text{-}t} = \frac{1}{T} \sum_{i=1}^{T} \mathbb{E}_{\mathcal{X}_{t_i}}\big[\operatorname{BCE}\!\big(D_i(Q_x), 1\big)\big].$$
Target-target adversarial alignment. In the above source-target alignment, the source acts as an anchor for each target, “pulling” the other targets closer. However, as this alignment is imperfect, gaps remain across targets, which we propose to reduce further with additional target-target alignments. To this end, we introduce for each target domain $i$ a discriminator $D'_i$ with parameters $\theta_{D'_i}$ that classifies $\mathcal{X}_{t_i}$ (class 1) against all other target domains (class 0), resulting in $T$ 1-vs.-all discriminators. The target-target discriminator $D'_i$ is trained by minimizing the loss
$$\mathcal{L}_{D'_i}(\theta_{D'_i}) = \mathbb{E}_{\mathcal{X}_{t_i}}\big[\operatorname{BCE}\!\big(D'_i(Q_x), 1\big)\big] + \frac{1}{T-1} \sum_{j \neq i} \mathbb{E}_{\mathcal{X}_{t_j}}\big[\operatorname{BCE}\!\big(D'_i(Q_x), 0\big)\big].$$
The collective objective of all target-target discriminators now reads:
$$\min_{\theta_{D'_1}, \dots, \theta_{D'_T}}\ \frac{1}{T} \sum_{i=1}^{T} \mathcal{L}_{D'_i}(\theta_{D'_i}).$$
The segmenter tries to fool all the target-target discriminators by minimizing the adversarial loss:
$$\mathcal{L}_{\text{adv}}^{t\text{-}t} = \frac{1}{T(T-1)} \sum_{i=1}^{T} \sum_{j \neq i} \mathbb{E}_{\mathcal{X}_{t_j}}\big[\operatorname{BCE}\!\big(D'_i(Q_x), 1\big)\big].$$
To sum up, the segmenter is trained by minimizing over $\theta_F$ the objective:
$$\mathbb{E}_{\mathcal{X}_s}\big[\mathcal{L}_{\text{seg}}(x, y_x)\big] + \lambda_{\text{adv}}^{s\text{-}t}\, \mathcal{L}_{\text{adv}}^{s\text{-}t} + \lambda_{\text{adv}}^{t\text{-}t}\, \mathcal{L}_{\text{adv}}^{t\text{-}t},$$
with weights $\lambda_{\text{adv}}^{s\text{-}t}$ and $\lambda_{\text{adv}}^{t\text{-}t}$ balancing the adversarial terms.
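To make the 1-vs.-all target-target scheme concrete, here is a toy sketch of how the discriminator labels could be assigned and averaged over three targets; the scores and the exact weighting are our own illustrative choices, not necessarily the paper's:

```python
import numpy as np

def bce(pred, label, eps=1e-12):
    # Binary cross-entropy for scores in (0, 1) and a {0, 1} label.
    return float(np.mean(-label * np.log(pred + eps)
                         - (1 - label) * np.log(1 - pred + eps)))

T = 3  # number of target domains
# Hypothetical output of the i-th target-target discriminator on a batch
# from each target domain j (row i: discriminator i, column j: domain j).
scores = np.array([[0.8, 0.3, 0.4],
                   [0.2, 0.7, 0.3],
                   [0.3, 0.2, 0.9]])

# Each 1-vs.-all discriminator i is trained to output 1 on its own
# domain and 0 on every other target domain.
loss_tt = 0.0
for i in range(T):
    loss_tt += bce(scores[i, i], 1.0)
    for j in range(T):
        if j != i:
            loss_tt += bce(scores[i, j], 0.0) / (T - 1)
loss_tt /= T
```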
Multi-Target Knowledge Transfer. The main driving force in prediction-level adversarial approaches [23, 25] is the adjustment of the decision boundaries; Alignment in feature space then follows to comply with the adjusted boundaries. We thus stress the importance of classifier design in the multi-target UDA scenario. In our multi-discriminator approach, one classifier simultaneously handles multiple domain shifts, either source-target or target-target. The main challenge is the instability of adversarial training, which is amplified when several adversarial losses are jointly minimized. This issue is particularly problematic in the early training phase, when most target predictions are very noisy. To address it, we propose the multi-target knowledge transfer (MTKT) framework, whose network design and learning scheme do not rely on the joint minimization of multiple adversarial losses over the same classifier module, thereby reducing training instability. Figure 4 shows the MTKT architecture.
The classification part of the network is first re-designed with target-specific instrumental classifiers $C_{t_i}$, built on the shared feature extractor $F_{\text{feat}}$, each handling one specific source-target domain shift. Such an architecture allows separate output-space adversarial alignment for each source-target pair, alleviating the instability problem. For each target-specific classifier $C_{t_i}$, we introduce a domain discriminator $D_i$ to classify source vs. target $i$. The training objectives are similar to those used in single-target models (Eqs. 1 and 2).
We then introduce a target-agnostic classification branch $C_{\text{agn}}$ that fuses all the knowledge transferred from the target-specific classifiers. This target-agnostic classifier is the final product of the approach, i.e. the one used at test time when domain knowledge is not available.
The knowledge from the $T$ “teachers” is transferred to the target-agnostic “student” by minimizing the Kullback-Leibler (KL) divergence between the teachers’ and the student’s predictions on the target domains. In detail, for a given sample $x \in \mathcal{X}_{t_i}$, we compute the KL loss
$$\mathcal{L}_{\text{KL},i}(x) = \operatorname{KL}\big(P_x^{t_i} \,\|\, P_x^{\text{agn}}\big),$$
where $P_x^{t_i}$ and $P_x^{\text{agn}}$ are the soft-segmentation predictions coming from the target-specific classifier $C_{t_i}$ and the target-agnostic classifier $C_{\text{agn}}$ respectively. The minimization objective of the target-agnostic branch over the segmenter’s parameters (including the feature extractor’s) then reads:
$$\min_{\theta}\ \frac{1}{T} \sum_{i=1}^{T} \mathbb{E}_{\mathcal{X}_{t_i}}\big[\mathcal{L}_{\text{KL},i}(x)\big].$$
Minimizing these KL losses helps the target-agnostic classifier adjust its decision boundaries toward good behavior in all target domains. As the KL loss is back-propagated through the feature extractor, such an adjustment results in implicit alignment in the target feature space, which overall mitigates the distribution shifts between the domains.
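The teacher-to-student transfer can be sketched with a small NumPy example of the pixel-averaged KL term; the toy maps, their shapes and the `eps` stabilizer are our own illustrative choices:

```python
import numpy as np

def kl_div(p_teacher, p_student, eps=1e-12):
    """Pixel-wise KL divergence KL(teacher || student) between two
    soft-segmentation maps of shape (H, W, C), averaged over pixels."""
    ratio = (p_teacher + eps) / (p_student + eps)
    return float(np.mean(np.sum(p_teacher * np.log(ratio), axis=-1)))

teacher = np.array([[[0.9, 0.1], [0.2, 0.8]]])   # target-specific head
student = np.array([[[0.6, 0.4], [0.4, 0.6]]])   # target-agnostic head

loss_kl = kl_div(teacher, student)
# The loss vanishes only when the student matches the teacher exactly.
```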
Discussion. Unlike Multi-Dis., the multi-teacher/single-student mechanism in MTKT avoids direct alignment between unlabeled parts. The target-agnostic classifier is encouraged to adjust its decision boundaries to favor all the target-specific teachers, thus helping cross-target alignment.
Although we build our frameworks over output-space alignment [25, 23], note that they could be adapted to other adversarial feature-alignment methods . Moreover, orthogonal approaches like pseudo-labeling can also be included in our frameworks and we show some experiments with such an addition in Section 4.3.
4 Experiments
4.1 Experimental Details
Datasets. We build our experiments on four urban driving datasets, one being synthetic and the three others being recorded in various geographic locations:
GTA5  is a dataset of 24,966 labeled synthetic images generated from the eponymous video game;
Cityscapes  contains labeled urban scenes from cities around Germany, split in training and validation sets of 2,975 and 500 samples respectively;
IDD  is an Indian urban dataset having 6,993 training and 981 validation labeled scenes;
Mapillary Vistas  is a dataset collected in multiple cities around the world, which is composed of 18,000 training and 2,000 validation labeled scenes.
Though all containing urban scenes, the four datasets have different labeling policies and semantic granularities. We follow the protocol used in [13, 26] and standardize the label set with 7 super classes common to all four datasets: flat, construction, object, nature, sky, human and vehicle. The mapping from original classes to these super classes is given in the Supplementary Material.
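As an illustration only (the official per-dataset tables are in the Supplementary Material), a mapping in the spirit of the standard Cityscapes category grouping could look like the following; the dictionary and its contents are our assumption, not the paper's table:

```python
# Illustrative mapping from Cityscapes-style classes to the 7 super
# classes used in the paper; not the official per-dataset table.
CLASS_TO_SUPER = {
    "road": "flat", "sidewalk": "flat",
    "building": "construction", "wall": "construction", "fence": "construction",
    "pole": "object", "traffic light": "object", "traffic sign": "object",
    "vegetation": "nature", "terrain": "nature",
    "sky": "sky",
    "person": "human", "rider": "human",
    "car": "vehicle", "truck": "vehicle", "bus": "vehicle",
    "train": "vehicle", "motorcycle": "vehicle", "bicycle": "vehicle",
}

SUPER_CLASSES = sorted(set(CLASS_TO_SUPER.values()))
print(SUPER_CLASSES)
```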
When Cityscapes, IDD or Mapillary are used as target domain, only unlabeled images from them are used for training, by definition of the UDA problem.
Our experiments are conducted with PyTorch. The adversarial framework is based on AdvEnt’s published code.333https://github.com/valeoai/ADVENT We adopt DeepLab-V2  as the semantic segmentation model, built upon the ResNet-101 
backbone initialized with ImageNet
pre-trained weights. The segmenters are trained by Stochastic Gradient Descent with learning rate , momentum and weight decay . We train the discriminators using an Adam optimizer  with learning rate . All experiments were conducted at the resolution.
For MTKT, we “warm up” the target-specific branches for 20,000 iterations before training the target-agnostic branch. This warm-up step avoids distilling noisy target predictions in the early training phase, which helps stabilize the target-agnostic training.
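One simple way to implement such a schedule is to gate the distillation loss on the iteration counter; the constant and helper name below are ours, introduced only for illustration:

```python
WARMUP_ITERS = 20_000  # warm-up length used for the target-specific branches

def active_losses(iteration):
    """Return which MTKT losses are optimized at a given iteration:
    the target-specific adversarial branches train from the start, while
    KL distillation to the target-agnostic branch waits for the warm-up."""
    return {
        "target_specific_adv": True,
        "target_agnostic_kl": iteration >= WARMUP_ITERS,
    }
```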
4.2 Main results
We consider four setups, varying the type of domain shift (‘syn-2-real’ or ‘city-2-city’) and the number of target domains (two or three). To measure per-target segmentation performance, we use the standard mean Intersection-over-Union (mIoU) metric. For multi-target performance, we report the mIoU averaged over the target domains; Using the average helps mitigate the potential bias caused by target evaluation sets of substantially different sizes.
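The evaluation protocol can be sketched as follows: per-class IoUs averaged into an mIoU per target, then a plain average of the per-target mIoUs. The label maps are toys and the second target's score is a made-up placeholder:

```python
import numpy as np

def miou(pred, gt, num_classes):
    """Mean Intersection-over-Union between two integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                 # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0], [1, 2]])
gt   = np.array([[0, 1], [1, 2]])
per_target = [miou(pred, gt, 3), 0.5]    # one mIoU per target domain
multi_target_score = sum(per_target) / len(per_target)
```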
GTA5 → Cityscapes + Mapillary. Table 1 reports segmentation results on the two target validation sets of Cityscapes and Mapillary; GTA5 is the source domain in this setup. For comparison, we consider the single-target AdvEnt models, i.e. trained on either Cityscapes or Mapillary unlabeled images. We also have the multi-target AdvEnt model, denoted ‘Multi-Target Baseline’ in Table 1, which is trained on the merged data of the two targets. For all models, including the single-target ones, we report both per-target and average mIoUs. The two rows marked with ‘(*)’ indicate results of the single-target models on the same domains used for training, regarded as per-target baselines.
Single-target baselines achieve worse average mIoU than models trained on both domains, which indicates the benefit of having access to diverse data from multiple domains during training. Our proposed approaches outperform the multi-target baseline, with mIoU gains for both the multi-discriminator model and MTKT. Looking closer at the per-target results, we observe unfavorable performance if one directly transfers single-target models to a new domain. Indeed, testing the Cityscapes-only model on Mapillary results in a drop of mIoU compared to the reference performance, and a similarly drastic drop is seen for the Mapillary-only model on Cityscapes. In particular, we notice important degradation on safety-critical classes like human or vehicle with those single-target models. The multi-discriminator model achieves mIoUs comparable to the per-target baselines. The MTKT model improves over the per-target baselines by significant margins on both Cityscapes and Mapillary. Such results highlight the merit of the proposed strategies, especially MTKT. Note that adding adversarial training on the target-agnostic branch of MTKT hinders the alignment effect and reduces the average mIoU.
GTA5 → Cityscapes + IDD. We experiment with another syn-2-real setup in which the two target datasets have noticeably different landscapes, i.e. European cities in Cityscapes and Indian ones in IDD. Results are reported in Table 2. Here also, multi-target models outperform the single-target ones. In this setup, the performance of Multi-Dis. is comparable to the multi-target baseline’s. We conjecture that the complex and unstable optimization problem of the multi-discriminator framework makes it difficult to achieve good alignment across targets, especially when the two targets are more noticeably different. With a dedicated architecture and learning scheme that alleviate this optimization issue, the MTKT model achieves the best results, in terms of both per-target and average mIoUs.
We visualize some qualitative results in Figure 5.
GTA5 → Cityscapes + Mapillary + IDD. We consider a more challenging setup involving three target domains – Cityscapes, Mapillary and IDD – and show results in Table 3. With more target domains, the same conclusions hold. In terms of average mIoU, the multi-discriminator model marginally improves over the multi-target baseline, while the MTKT model significantly outperforms all other models. Moreover, when compared to the per-target baselines, MTKT is the only model to show improvement on every target domain.
Cityscapes → Mapillary + IDD. Finally, we experiment on a realistic city-2-city setup with Cityscapes as source and Mapillary and IDD as target domains. The results are shown in Table 4. Interestingly, on Mapillary, the single-target model trained on IDD achieves better results than the one trained only on Mapillary. We conjecture that the domain gap between Cityscapes and Mapillary is smaller than the one between Cityscapes and IDD; The extra data diversity coming from IDD improves the generalization of the single-target IDD-only model and helps mitigate the small Cityscapes-Mapillary domain gap. Another observation is that the IDD-only model outperforms the multi-target baseline. This indicates the disadvantage of the naive dataset-merging strategy: Not only complementary signals but also conflicting/negative ones get transferred. The two proposed models outperform the multi-target baseline, and MTKT obtains the best overall performance. Again in this realistic setup, we showcase the advantages of our methods, especially the multi-target knowledge transfer model.
Conclusions. These four sets of experiments demonstrate that the proposed multi-target frameworks consistently deliver competitive performance on the multiple target domains they are trained for. MTKT always gives the best performance, both in per-target and average mIoUs, compared to the baselines and to the multi-discriminator model. Note that our models are compatible with techniques such as image translation [10, 27, 28] or pseudo-labeling self-training [14, 21, 31], from which they could benefit. In particular, we show next, with additional experiments, how to use pseudo-labeling with MTKT.
Figure 5 columns: (a) Input, (b) Ground truth, (c) City. Baseline, (d) IDD Baseline, (e) MT Baseline, (f) Multi-Dis., (g) MTKT.
4.3 Further Experiments
Additional Impact of Pseudo-Labeling. Pseudo-labeling (PL) is a strategy that has become quite popular in UDA for semantic segmentation [14, 21, 31]. It can easily be combined with our multi-target frameworks. Taking for instance the recently-proposed ESL, we consider three ways to adapt its pseudo-labeling strategy to the MTKT architecture. In all of them, we collect pseudo-labels in each target domain using the corresponding target-specific classifier and use them as additional self-supervision for these target-specific heads; In the second method, we also use these pseudo-labels to restrict the back-propagation of the KL losses to pixels that are correctly classified according to them; In the third method, they are additionally used to refine the target-agnostic classifier. We report in Table 5 the results of the models trained with these three PL-based refinement strategies on GTA5 → Cityscapes + IDD, and compare them to the baseline trained with ESL. The three ways of extending MTKT with PL result in similar performance gains in average mIoU. This demonstrates that knowledge transfer is complementary to pseudo-labeling. Moreover, MTKT with ESL outperforms the baseline with ESL in average mIoU.
Direct Transfer to a New Dataset. We consider a direct-transfer setup in which the models see no images from the test domain during training: This experiment highlights how well the models generalize to previously-unseen domains. We report in Table 6 the results of such a direct transfer to a new dataset in different setups. The models are trained on GTA5 → Cityscapes + IDD (resp. GTA5 → Cityscapes + Mapillary) and tested on Mapillary (resp. IDD). In both setups, MTKT shows better mIoU than the baselines on the new domain. In the first one in particular, with Mapillary as the new test domain, MTKT clearly outperforms the multi-target baseline. Particularly noticeable in this setup is the performance on the human class: While the models reach high IoU on this class in the main domain-adaptation results on Mapillary (e.g. in Tab. 1), the direct-transfer results of the multi-target baseline and of Multi-Dis. drop drastically on it; Differently, MTKT manages to maintain a similar IoU on human. This experiment hints at the ability of MTKT to better generalize to new unseen domains.
This work addresses the new problem of unsupervised adaptation to multiple target domains in semantic segmentation. We discuss the challenges that this UDA setup raises in terms of distribution alignment and joint learning. This leads to two novel frameworks: The multi-discriminator approach extends single-target UDA to handle pair-wise domain alignment; The multi-target knowledge transfer approach alleviates the instability of multi-domain adversarial learning with a multi-teacher/single-student distillation mechanism. In the context of driving scenes, we propose four experimental setups, varying the type of source-target gaps and the number of target domains. Our approaches outperform all baselines on these four setups, which are representative of real-world applications. Further experiments additionally show that our frameworks can be combined with state-of-the-art pseudo-labeling strategies and that the proposed learning schemes help generalize to previously-unseen datasets. This work thus contributes to the recent research line in domain adaptation toward more practical use cases. With the same goal, future research directions may consider more complex mixes of source and target domains, making use of several labeled and unlabeled datasets.
Appendix A Mapping Classes to Super Classes
Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT, Cited by: §4.1.
-  (2018) DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). Cited by: §4.1.
-  (2019) Blending-target domain adaptation by adversarial meta-adaptation networks. In , Cited by: §1, §2.
The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: 2nd item, 2nd item.
-  (2009) ImageNet: a large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1.
-  (2015) Adam: a method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), Cited by: §4.1.
-  (2020) Unsupervised multi-target domain adaptation: an information theoretic approach. IEEE Transactions on Image Processing (TIP). Cited by: §1, §2, §3.1.
-  (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1.
Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Cited by: §3.3.
-  (2018) Cycada: cycle-consistent adversarial domain adaptation. Cited by: §2, §4.2.
-  (2016) FCNs in the wild: pixel-level adversarial and constraint-based adaptation. arXiv:1612.02649. Cited by: §2, §3.3.
-  (2021) Multi-target domain adaptation with collaborative consistency learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
-  (2019) SPIGAN: privileged adversarial learning from simulation. In Proceedings of the International Conference on Learning Representations (ICLR), Cited by: §4.1.
-  (2019) Bidirectional learning for domain adaptation of semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §4.2, §4.3.
-  (2020-06) Open compound domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.
-  (2015) Learning transferable features with deep adaptation networks. In Proceedings of the International Conference on Machine Learning (ICML), Cited by: §2.
-  (2017) The mapillary vistas dataset for semantic understanding of street scenes. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), External Links: Cited by: 2nd item, 4th item.
-  (2017) Automatic differentiation in pytorch. In Workshop at Advances in Neural Information Processing Systems (NIPS), Cited by: §4.1.
-  (2019) Domain agnostic learning with disentangled representations. In International Conference on Machine Learning (ICML), Cited by: §2.
-  (2016) Playing for data: ground truth from computer games. In Proceedings of the IEEE European Conference on Computer Vision (ECCV), Cited by: 2nd item, 1st item.
ESL: entropy-guided self-supervised learning for domain adaptation in semantic segmentation. In Workshop on Scalability in Autonomous Driving of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §4.2, §4.3, Table 5.
-  (2016) Deep coral: correlation alignment for deep domain adaptation. In Proceedings of the IEEE European Conference on Computer Vision (ECCV), Cited by: §2.
-  (2018) Learning to adapt structured output space for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §3.2, §3.2, §3.2, §3.3, §3.3.
-  (2019) IDD: a dataset for exploring problems of autonomous navigation in unconstrained environments. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), Cited by: 2nd item, 3rd item.
-  (2019) ADVENT: adversarial entropy minimization for domain adaptation in semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §3.2, §3.2, §3.2, §3.3, §3.3, Table 1, Table 2, Table 3, Table 4.
-  (2019) DADA: depth-aware domain adaptation in semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Cited by: §4.1.
-  (2018) Dcan: dual channel-wise alignment networks for unsupervised scene adaptation. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2, §4.2.
-  (2020) Fda: fourier domain adaptation for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §4.2.
-  (2018) Multi-target unsupervised domain adaptation without exactly shared categories. arXiv preprint arXiv:1809.00852. Cited by: §2, §3.1.
-  (2019) Multi-source domain adaptation for semantic segmentation. Advances in Neural Information Processing Systems (NeurIPS). Cited by: §2.
-  (2018) Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2, §4.2, §4.3.