Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation

08/16/2021 · by Antoine Saporta et al. · Valeo

In this work, we address the task of unsupervised domain adaptation (UDA) for semantic segmentation in the presence of multiple target domains: The objective is to train a single model that can handle all these domains at test time. Such multi-target adaptation is crucial for the variety of scenarios that real-world autonomous systems must handle. It is a challenging setup since one faces not only the domain gap between the labeled source set and the unlabeled target set, but also the distribution shifts existing within the latter among the different target domains. To this end, we introduce two adversarial frameworks: (i) multi-discriminator, which explicitly aligns each target domain to its counterparts, and (ii) multi-target knowledge transfer, which learns a target-agnostic model thanks to a multi-teacher/single-student distillation mechanism. The evaluation is done on four newly-proposed multi-target benchmarks for UDA in semantic segmentation. In all tested scenarios, our approaches consistently outperform baselines, setting competitive standards for the novel task.


1 Introduction

Recent advances in domain adaptation help alleviate the labeling effort required to train fully-supervised models, which is especially helpful for tasks like semantic segmentation. Most previous works address the single-target setting, whose goal is to adapt from the source to one particular target domain of interest, e.g. a specific urban area. In practice however, the perception system is often put to the test in various scenarios, including different cities, weather or lighting conditions. To deal with multiple test distributions, one can straightforwardly adopt single-target techniques by either (i) training one model per target domain and adaptively activating the right one at test time or (ii) merging all target data and treating them as drawn from a single target distribution. While the former strategy raises storage issues on embedded platforms and is difficult to scale up, the latter overlooks the distribution shifts across different target domains.

In this work, we address multi-target unsupervised domain adaptation (UDA) in semantic segmentation. We aim to learn a single segmenter that achieves equally good performance in all target domains, simultaneously closing the distribution gaps between labeled and unlabeled data (source-target) and among target domains (target-target). Our work is in line with recent efforts [3, 7, 15] toward more practical domain adaptation settings for real-life applications. Different from most existing multi-target works, which specifically consider image classification, we study here the more complex task of semantic segmentation.

We propose two adversarial UDA frameworks with architectures and learning schemes designed for the multi-target setup. The multi-discriminator model explicitly reduces both source-target and target-target domain gaps via adversarial learning: each target domain is aligned to its counterparts. Our second framework, called multi-target knowledge transfer (MTKT), relaxes the complexity of the multi-target optimization by adopting a multi-teacher/single-student mechanism. Each target-specific teacher handles one source-target domain gap via adversarial training; the target-agnostic student is learned from all teachers to achieve target-target alignment and to perform equally well in all target domains.

Our contributions can be summarized as follows:


  • We propose two multi-target UDA frameworks for semantic segmentation.

  • We define four evaluation benchmarks for the task, making use of existing semantic segmentation datasets: GTA5 [20], Cityscapes [4], Mapillary Vistas [17] and the India Driving Dataset (IDD) [24].

  • We conduct extensive experiments with these two models against state-of-the-art baselines on the proposed benchmarks. Our approaches show consistent improvements over all of them.

2 Related Works

Unsupervised Domain Adaptation for Semantic Segmentation.  UDA is a setting that has received a lot of attention recently [10, 16, 22, 23, 25, 27]. The objective is to train a model on an unlabeled target domain by leveraging information from a labeled source domain, which is usually done by aligning in some way the distributions of the source and target domains. Some strategies constrain the training with regularization such as maximum mean discrepancy (MMD) [16] or correlation alignment [22]. Most recent works, in particular in UDA for semantic segmentation, adopt an adversarial training strategy, either at feature level [11] or at output level [23, 25]. Some works also include a form of style transfer or image translation [10, 27, 28] to obtain target-looking source images while keeping the source annotations. Additionally, a few works resort to “pseudo-labeling” [14, 21, 31] to refine their model with the help of automatically produced annotations in the target domain.

While these methods are effective at adapting from one domain to another, their UDA setting is limited. In real-world scenarios, data may come from various domains: in urban scenes for instance, such domain variations may stem from different sensors, weather conditions or cities. While the underlying distribution is similar across domains, traditional UDA models are not robust to changes of target domain. Moreover, since they are specifically designed for single-source to single-target alignment, they fail to leverage information across multiple source or target domains.

Some recent works extend the standard UDA setting in semantic segmentation to more source or target domains. MADAN [30] tackles the task of multi-source domain adaptation for semantic segmentation where a model is trained using multiple labeled source domains and adapted on a single target domain. The authors first transform source images into adapted domains, similar to the target domain, then bring these new domains closer together with a sub-domain aggregation discriminator. They finally train the segmentation network by performing adversarial feature-level alignment between adapted and target domains. Closer to our setting, OCDA [15] addresses UDA with an open compound target domain: In this task, the target domain may be considered as a combination of multiple homogeneous target domains – for instance, similar weather conditions such as ‘sunny’, ‘foggy’, etc. – where the domain labels are not known during training. Moreover, previously unseen target domains may be encountered during inference. Unlike OCDA, our multi-target setting assumes that the domain of origin is known at training time and that no new domains are faced at test time (except in additional generalization experiments).

Multi-Target Domain Adaptation for Classification.  Multi-target domain adaptation is still a fairly recent setting in the literature and mostly tackles classification tasks. Two main scenarios emerge in the works on this task. In the first one, even though the target is considered composed of multiple domains with gaps and misalignments, the domain labels are unknown during training and test. [19] proposes an architecture that extracts domain-invariant features by performing source-target domain disentanglement; it also removes class-irrelevant features by adding a class-disentanglement loss. In a similar setting, [3] presents an adversarial meta-adaptation network that both aligns source with mixed-target features and uses an unsupervised meta-learner to group the target inputs into clusters, which are adversarially aligned. In the second scenario, the target identities are known for the training samples but remain unknown during inference. To handle it, [29] learns a common parameter dictionary from the different target domains and extracts the target model parameters by sparse representation; [7] adopts a disentanglement strategy that separately captures domain-specific private features and shared feature representations, using a domain classifier and a class-label predictor, and trains a shared decoder to reconstruct the input sample from these disentangled representations.

In the present work, we adopt the second multi-target hypothesis: The target identities are known for the training samples but not for test ones. In fact, assuming that this information is available at test time is incompatible with some practical scenarios. More importantly, it would hinder generalization to previously-unseen domains, an important issue for autonomous systems in the wild. To the best of our knowledge, tackling semantic segmentation in this multi-target UDA scenario has only been proposed in a recently published concurrent work [12]. This work proposes to train a fully-fledged segmentation network for each domain and to ensure consistency among these multiple networks with image stylization between domains.

3 Adversarial Adaptation to Multiple Targets

3.1 Problem Formulation

Standard Unsupervised Domain Adaptation.  The standard setting addressed in most UDA works is single source and single target. For adaptation, the model is trained on both a source-domain set $\mathcal{X}_s$ with the associated ground-truth set $\mathcal{Y}_s$ and an unlabeled target-domain set $\mathcal{X}_t$.

For semantic segmentation in $C$ classes, the sets $\mathcal{X}_s$ and $\mathcal{X}_t$ contain training images $x \in \mathbb{R}^{H \times W \times 3}$, while the annotation set $\mathcal{Y}_s$ contains for each $x \in \mathcal{X}_s$ a map $y_x \in \{0,1\}^{H \times W \times C}$ of one-hot vectors indicating the ground-truth semantic classes for all pixels.

A segmentation network $F$ takes an image $x$ as input and predicts a soft-segmentation map $P_x = F(x) \in [0,1]^{H \times W \times C}$ (we write $P_x$ for $F(x)$ hereafter). The final segmentation map, $\hat{y}_x$, is given by the max-score class, $\hat{y}_x[h,w] = \arg\max_c P_x[h,w,c]$, at each pixel $[h,w]$. UDA methods aim at aligning the distributions of the source-domain and target-domain training data such that, at test time, the segmenter produces satisfactory predictions for target-domain inputs, without having been trained on labeled images from this domain.

Multi-Target UDA.  In this work, we consider a different UDA scenario where $T$ distinct target domains must be jointly handled. These target domains are represented by unlabeled training sets $\mathcal{X}_{t_n}$, $n = 1, \dots, T$. As in the standard setting, we assume that the annotated training examples stem from a single source domain, e.g. a specific synthetic environment. The main goal is to train a single segmenter that achieves equally good results on all target-domain test sets. While the target domain of origin is known for all unlabeled training examples, we assume, as in the classification approaches of [7, 29], that this information is not accessible at test time.

Figure 2: Training in adversarial UDA. The segmentation model under training ingests source-domain (green) and target-domain (blue) data. The former contribute to the segmentation loss, the latter to the adversarial loss, and both to the discriminator’s loss. The three losses (dotted boxes) are defined in Eqs. (1) and (2).

3.2 Revisiting Adversarial UDA Approach

Recent state-of-the-art single-target UDA approaches are based on adversarial training to align source-target distributions. In such approaches, besides the segmenter $F$ with parameters $\theta_F$, an additional network $D$ with parameters $\theta_D$, called discriminator, is trained to play the segmenter's “adversary”: $D$ is learned to predict the domain of an input from suitable representations extracted by $F$, such as intermediate or close-to-output features. Concurrently, $F$ tries to produce results that fool $D$ into wrong discrimination. In semantic segmentation, adversarial approaches operating on close-to-prediction representations have the most success. AdaptSegnet [23] proposes adversarial learning on top of the soft-segmentation predictions $P_x$. AdvEnt [25] improves over AdaptSegnet by using instead the “weighted self-information” maps $I_x = -P_x \odot \log P_x$ (with entry-wise operations), which brings an additional entropy-minimization effect through adversarial alignment. Such single-target adversarial frameworks serve as the building block on top of which we develop our multi-target strategies. Hereafter, we denote $Q_x$ the representation used for alignment, which stands for either $P_x$ in [23] or $I_x$ in [25].

In practice, $D$ is a fully-convolutional binary classifier with parameters $\theta_D$. It classifies the segmenter's output $Q_x$ into either class 1 (source) or 0 (target). To train the discriminator, we minimize the classification loss:

$$\mathcal{L}_D(\theta_D) = \mathbb{E}_{\mathcal{X}_s}\big[\mathcal{L}_{\mathrm{BCE}}(D(Q_x), 1)\big] + \mathbb{E}_{\mathcal{X}_t}\big[\mathcal{L}_{\mathrm{BCE}}(D(Q_x), 0)\big], \tag{1}$$

where $\mathcal{L}_{\mathrm{BCE}}$ stands for the binary cross-entropy loss and $\mathbb{E}$ denotes averaging over the set in subscript.

Concurrently, the segmenter is trained over its parameters $\theta_F$ not only to minimize the supervised segmentation loss $\mathcal{L}_{\mathrm{seg}}$ on source-domain data, but also to fool the discriminator via minimizing an adversarial loss $\mathcal{L}_{\mathrm{adv}}$. The final objective reads:

$$\mathcal{L}_F(\theta_F) = \mathbb{E}_{\mathcal{X}_s}\big[\mathcal{L}_{\mathrm{seg}}(P_x, y_x)\big] + \lambda_{\mathrm{adv}}\, \mathbb{E}_{\mathcal{X}_t}\big[\mathcal{L}_{\mathrm{BCE}}(D(Q_x), 1)\big], \tag{2}$$

with $\lambda_{\mathrm{adv}}$ a weight balancing the two terms; $\mathcal{L}_{\mathrm{seg}}$ is the common cross-entropy loss. During training, one alternately minimizes the two losses $\mathcal{L}_D$ and $\mathcal{L}_F$.

Figure 2 provides a high-level view of the training flow in recent adversarial UDA approaches. For more details, we refer the reader to [23, 25] for instance. To facilitate the presentation of our proposed strategies, the segmenter $F$ is decoupled into a feature extractor $F_{\mathrm{feat}}$ followed by a pixel-wise classifier $F_{\mathrm{cls}}$.
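To make the alternating optimization concrete, here is a minimal PyTorch sketch of one training step implementing Eqs. (1) and (2). The `segmenter` and `discriminator` modules, the optimizers and the value of the balancing weight are illustrative assumptions, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def self_information(p, eps=1e-8):
    """AdvEnt-style representation Q_x: I_x = -P_x * log(P_x), entry-wise."""
    return -p * torch.log(p + eps)

bce = torch.nn.BCEWithLogitsLoss()
LAMBDA_ADV = 1e-3  # assumed value of the balancing weight in Eq. (2)

def adversarial_uda_step(segmenter, discriminator, opt_f, opt_d, x_s, y_s, x_t):
    # ---- segmenter update, Eq. (2): supervised + adversarial terms ----
    for p in discriminator.parameters():
        p.requires_grad_(False)
    opt_f.zero_grad()
    logits_s = segmenter(x_s)                        # B x C x H x W scores
    loss_seg = F.cross_entropy(logits_s, y_s)        # y_s: B x H x W class ids
    q_t = self_information(torch.softmax(segmenter(x_t), dim=1))
    d_t = discriminator(q_t)
    # fool D: push target representations toward the 'source' label (1)
    loss_f = loss_seg + LAMBDA_ADV * bce(d_t, torch.ones_like(d_t))
    loss_f.backward()
    opt_f.step()

    # ---- discriminator update, Eq. (1): source = 1, target = 0 ----
    for p in discriminator.parameters():
        p.requires_grad_(True)
    opt_d.zero_grad()
    q_s = self_information(torch.softmax(segmenter(x_s), dim=1)).detach()
    d_s, d_t = discriminator(q_s), discriminator(q_t.detach())
    loss_d = bce(d_s, torch.ones_like(d_s)) + bce(d_t, torch.zeros_like(d_t))
    loss_d.backward()
    opt_d.step()
```

Detaching the representations in the discriminator step ensures that only $\theta_D$ is updated there, matching the alternating scheme described above.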

Discussion.  Approaches like [23, 25] handle a single source domain and a single target domain. In our setting with multiple target domains, a simple strategy is to merge all target datasets into a single one and to utilize an existing single-source single-target UDA framework. Such a strategy however disregards the inherent discrepancies among target domains. As we show in the experiments, this multi-target baseline is less effective than the proposed strategies, which explicitly handle inter-target domain shifts. In what follows, we describe these two novel frameworks.

3.3 Multi-Target Frameworks

Figure 3: Multi-discriminator approach to multi-target UDA. With Multi-Dis., the segmenter is trained against two types of adversaries that discriminate respectively source vs. one target and one target vs. all other targets. The four types of adversarial losses are defined in Eqs. (3), (4), (6) and (7). Symbols and colors follow those in Figure 2.

Multi-Discriminator.  Our first strategy for multi-target UDA, called multi-discriminator (‘Multi-Dis.’ in short), relies on two types of discriminators to align each target domain with the source (source-target discriminators) and with other targets (target-target discriminators). Figure 3 illustrates this first approach.

Source-target adversarial alignment.  We introduce a discriminator $D_n$ with parameters $\theta_{D_n}$ for each target domain $n \in \{1, \dots, T\}$. It is learned to discriminate $\mathcal{X}_{t_n}$ from the source set $\mathcal{X}_s$. Denoting $\mathcal{L}_{D_n}$ the minimization objective of this discriminator, defined as in (1) on domain $n$, we train these $T$ source-target discriminators with the mean objective:

$$\mathcal{L}^{s\text{-}t}_{\mathrm{dis}} = \frac{1}{T} \sum_{n=1}^{T} \mathcal{L}_{D_n}(\theta_{D_n}). \tag{3}$$

Concurrently, the segmenter is trained to fool these discriminators with the adversarial objective:

$$\mathcal{L}^{s\text{-}t}_{\mathrm{adv}}(\theta_F) = \frac{1}{T} \sum_{n=1}^{T} \mathbb{E}_{\mathcal{X}_{t_n}}\big[\mathcal{L}_{\mathrm{BCE}}(D_n(Q_x), 1)\big]. \tag{4}$$

Target-target adversarial alignment. In the above source-target alignment, the source acts as an anchor that “pulls” all the targets closer to each other. However, as this alignment is imperfect, gaps remain across targets, which we propose to further reduce with additional target-target alignments. To this end, we introduce for each target domain $n$ a discriminator $D'_n$ with parameters $\theta_{D'_n}$ that classifies $\mathcal{X}_{t_n}$ (class 1) against all other target domains (class 0), resulting in $T$ 1-vs-all discriminators. The target-target discriminator $D'_n$ is trained by minimizing the loss

$$\mathcal{L}_{D'_n}(\theta_{D'_n}) = \mathbb{E}_{\mathcal{X}_{t_n}}\big[\mathcal{L}_{\mathrm{BCE}}(D'_n(Q_x), 1)\big] + \frac{1}{T-1} \sum_{m \neq n} \mathbb{E}_{\mathcal{X}_{t_m}}\big[\mathcal{L}_{\mathrm{BCE}}(D'_n(Q_x), 0)\big]. \tag{5}$$
Figure 4: Multi-target knowledge transfer approach to multi-target UDA. With MTKT, a set of target-specific segmenters is first trained adversarially. Their knowledge is then jointly distilled to the target-agnostic segmenter whose loss (10) is not back-propagated into the target-specific branches (as indicated by the dotted arrow). Symbols and colors follow those in Figure 2.

The collective objective of all target-target discriminators now reads:

$$\mathcal{L}^{t\text{-}t}_{\mathrm{dis}} = \frac{1}{T} \sum_{n=1}^{T} \mathcal{L}_{D'_n}(\theta_{D'_n}). \tag{6}$$

The segmenter tries to fool all the target-target discriminators by minimizing the adversarial loss:

$$\mathcal{L}^{t\text{-}t}_{\mathrm{adv}}(\theta_F) = \frac{1}{T(T-1)} \sum_{n=1}^{T} \sum_{m \neq n} \mathbb{E}_{\mathcal{X}_{t_m}}\big[\mathcal{L}_{\mathrm{BCE}}(D'_n(Q_x), 1)\big]. \tag{7}$$

To sum up, the segmenter is trained by minimizing over $\theta_F$ the objective:

$$\mathcal{L}_F(\theta_F) = \mathbb{E}_{\mathcal{X}_s}\big[\mathcal{L}_{\mathrm{seg}}(P_x, y_x)\big] + \lambda^{s\text{-}t}_{\mathrm{adv}}\, \mathcal{L}^{s\text{-}t}_{\mathrm{adv}} + \lambda^{t\text{-}t}_{\mathrm{adv}}\, \mathcal{L}^{t\text{-}t}_{\mathrm{adv}}, \tag{8}$$

with weights $\lambda^{s\text{-}t}_{\mathrm{adv}}$ and $\lambda^{t\text{-}t}_{\mathrm{adv}}$ balancing the adversarial terms.
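As a companion to the equations above, the following sketch computes the two segmenter-side adversarial terms of Eq. (8). It reuses the `self_information` helper and `bce` loss from the snippet in Section 3.2; `st_discs` and `tt_discs` are hypothetical lists holding the $T$ source-target discriminators $D_n$ and the $T$ 1-vs-all target-target discriminators $D'_n$:

```python
import torch

def multi_dis_adv_losses(segmenter, st_discs, tt_discs, target_batches):
    """Segmenter-side terms of Eqs. (4) and (7); one image batch per target."""
    T = len(target_batches)
    q = [self_information(torch.softmax(segmenter(x), dim=1))
         for x in target_batches]

    # Eq. (4): each target should look like the source (label 1) for its D_n
    loss_st = 0.0
    for n in range(T):
        d = st_discs[n](q[n])
        loss_st = loss_st + bce(d, torch.ones_like(d))
    loss_st = loss_st / T

    # Eq. (7): every other target m should look like target n (label 1) for D'_n
    loss_tt = 0.0
    for n in range(T):
        for m in range(T):
            if m != n:
                d = tt_discs[n](q[m])
                loss_tt = loss_tt + bce(d, torch.ones_like(d))
    loss_tt = loss_tt / (T * (T - 1))
    return loss_st, loss_tt
```

The total segmenter loss of Eq. (8) then combines these two terms with the supervised source loss, weighted by $\lambda^{s\text{-}t}_{\mathrm{adv}}$ and $\lambda^{t\text{-}t}_{\mathrm{adv}}$; the discriminators themselves are updated with the symmetric real/fake labels, as in the single-target step.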

Multi-Target Knowledge Transfer.  The main driving force in prediction-level adversarial approaches [23, 25] is the adjustment of the decision boundaries; alignment in feature space then follows to comply with the adjusted boundaries. We thus stress the importance of classifier design in the multi-target UDA scenario. In our multi-discriminator approach, one classifier simultaneously handles multiple domain shifts, either source-target or target-target. The main challenge is the instability of adversarial training, which is amplified when several adversarial losses are jointly minimized. Such an issue is particularly problematic in the early training phase, when most target predictions are very noisy. To address this challenge, we propose the multi-target knowledge transfer (MTKT) framework, whose network design and learning scheme do not rely on the joint minimization of multiple adversarial losses over the same classifier module, hopefully reducing the instability of the training. Figure 4 shows the MTKT architecture.

The classification part of the network is first re-designed with $T$ target-specific instrumental classifiers $F_{\mathrm{cls}_n}$, all based on the same feature extractor $F_{\mathrm{feat}}$, each handling one specific source-target domain shift. Such an architecture allows separate output-space adversarial alignment for each source-target pair, alleviating the instability problem. For each target-specific classifier $F_{\mathrm{cls}_n}$, we introduce a domain discriminator $D_n$ to classify source vs. target $n$. The training objectives are similar to those used in single-target models (Eqs. 1 and 2).

We then introduce a target-agnostic classification branch that fuses all the knowledge transferred from the target-specific classifiers. This target-agnostic classifier is the final product of the approach, i.e., the one used at test time when domain knowledge is not available.

The knowledge from the $T$ “teachers” is transferred to the target-agnostic “student” via minimizing the Kullback-Leibler (KL) divergence [9] between the teachers' and the student's predictions on target domains. In detail, for a given sample $x \in \mathcal{X}_{t_n}$, we compute the KL loss

$$\mathcal{L}_{\mathrm{KL}}(x) = \sum_{h,w,c} P^{(n)}_x[h,w,c] \log \frac{P^{(n)}_x[h,w,c]}{P^{\mathrm{agn}}_x[h,w,c]}, \tag{9}$$

where $P^{(n)}_x$ and $P^{\mathrm{agn}}_x$ are the soft-segmentation predictions coming from the target-specific classifier $F_{\mathrm{cls}_n}$ and the target-agnostic one respectively. The minimization objective of the target-agnostic classifier over the segmenter's parameters (including the feature extractor's) then reads:

$$\mathcal{L}_{\mathrm{agn}}(\theta_F) = \frac{1}{T} \sum_{n=1}^{T} \mathbb{E}_{\mathcal{X}_{t_n}}\big[\mathcal{L}_{\mathrm{KL}}(x)\big]. \tag{10}$$

Minimizing the KL losses helps the target-agnostic classifier adjust its decision boundaries toward good behavior in all target domains. As the KL loss is back-propagated through the feature extractor, this adjustment results in an implicit alignment in target feature space, which overall mitigates the distribution shifts between the domains.
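A minimal sketch of this distillation step, under the same assumptions as the previous snippets: `features`, `teacher_heads` and `student_head` are hypothetical names for $F_{\mathrm{feat}}$, the $F_{\mathrm{cls}_n}$ branches and the target-agnostic classifier. Detaching the teacher predictions mirrors the dotted arrow of Figure 4: the loss flows into the student head and the shared features, not into the target-specific branches.

```python
import torch
import torch.nn.functional as F

def mtkt_kl_loss(features, teacher_heads, student_head, target_batches):
    """KL transfer of Eqs. (9)-(10); one image batch per target domain."""
    loss, T = 0.0, len(target_batches)
    for n, x in enumerate(target_batches):
        f = features(x)                                     # shared F_feat
        p_teacher = torch.softmax(teacher_heads[n](f), dim=1).detach()
        log_p_student = torch.log_softmax(student_head(f), dim=1)
        # KL(teacher || student), summed over classes and pixels,
        # normalized by batch size
        loss = loss + F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    return loss / T
```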

Discussion.  Unlike Multi-Dis., the multi-teacher/single-student mechanism in MTKT avoids direct adversarial alignment between the unlabeled target domains. The target-agnostic classifier is encouraged to adjust its decision boundaries to accommodate all the target-specific teachers, thus promoting cross-target alignment.

Although we build our frameworks on output-space alignment [23, 25], note that they could be adapted to other adversarial feature-alignment methods [11]. Moreover, orthogonal approaches like pseudo-labeling can also be included in our frameworks; we show some experiments with such an addition in Section 4.3.

4 Experiments

4.1 Experimental Details

Datasets.  We build our experiments on four urban driving datasets, one being synthetic and the three others being recorded in various geographic locations:


  • GTA5 [20] is a dataset of 24,966 labeled synthetic images generated from the eponymous video game;

  • Cityscapes [4] contains labeled urban scenes from cities around Germany, split in training and validation sets of 2,975 and 500 samples respectively;

  • IDD [24] is an Indian urban dataset having 6,993 training and 981 validation labeled scenes;

  • Mapillary Vistas [17] is a dataset collected in multiple cities around the world, which is composed of 18,000 training and 2,000 validation labeled scenes.

Though they all contain urban scenes, the four datasets have different labeling policies and semantic granularity. We follow the protocol used in [13, 26] and standardize the label set with 7 super classes common to all four datasets: flat, construction, object, nature, sky, human and vehicle. The mapping from original classes to these super classes is given in Appendix A; an illustrative remapping helper is sketched below.
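For concreteness, here is a small sketch of such a label standardization, using the Cityscapes rows of Table 7. The lookup-table helper and the choice of 255 as ignore index are assumptions for illustration, not the released protocol code:

```python
import numpy as np

SUPER_CLASSES = ["flat", "construction", "object", "nature",
                 "sky", "human", "vehicle"]

# Original Cityscapes id -> super-class index (255 = void/ignored),
# following Table 7; the license-plate class (id -1) is omitted for brevity.
CITYSCAPES_TO_SUPER = {
    **{i: 255 for i in range(0, 7)},    # void classes 0-6
    **{i: 0 for i in range(7, 11)},     # flat: road, sidewalk, parking, rail track
    **{i: 1 for i in range(11, 17)},    # construction: building, wall, fence, ...
    **{i: 2 for i in range(17, 21)},    # object: pole, polegroup, lights, signs
    21: 3, 22: 3,                       # nature: vegetation, terrain
    23: 4,                              # sky
    24: 5, 25: 5,                       # human: person, rider
    **{i: 6 for i in range(26, 34)},    # vehicle: car, truck, bus, ...
}

def remap_labels(label_map: np.ndarray) -> np.ndarray:
    """Remap an H x W map of original Cityscapes ids to the 7 super classes."""
    lut = np.full(256, 255, dtype=np.uint8)
    for orig_id, super_id in CITYSCAPES_TO_SUPER.items():
        lut[orig_id] = super_id
    return lut[label_map]
```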

When Cityscapes, IDD or Mapillary are used as target domain, only unlabeled images from them are used for training, by definition of the UDA problem.

Implementation Details.  Our experiments are conducted with PyTorch [18]. The adversarial framework is based on AdvEnt's published code (https://github.com/valeoai/ADVENT). We adopt DeepLab-V2 [2] as the semantic segmentation model, built upon the ResNet-101 [8] backbone initialized with ImageNet [5] pre-trained weights. Following AdvEnt's configuration, the segmenters are trained by Stochastic Gradient Descent [1] with learning rate $2.5 \times 10^{-4}$, momentum $0.9$ and weight decay $5 \times 10^{-4}$, and the discriminators with an Adam optimizer [6] with learning rate $10^{-4}$. All experiments were conducted at $640 \times 320$ resolution.

For MTKT, we “warm up” the target-specific branches for 20,000 iterations before training the target-agnostic branch. The warm-up step avoids distillation of noisy target predictions in the early phase, which helps stabilize target-agnostic training.
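A sketch of this two-phase schedule, reusing the hypothetical `mtkt_kl_loss` helper from Section 3.3; the adversarial weight value is an assumption, and the per-teacher adversarial updates follow the single-target step shown earlier:

```python
WARMUP_ITERS = 20_000  # warm-up length stated above

def mtkt_total_loss(it, loss_seg, teacher_adv_losses, kl_loss, lambda_adv=1e-3):
    """Combine source segmentation, per-teacher adversarial and KL terms."""
    total = loss_seg + lambda_adv * sum(teacher_adv_losses)
    if it >= WARMUP_ITERS:
        # only start distilling into the target-agnostic head after warm-up,
        # when teacher predictions are no longer dominated by noise
        total = total + kl_loss
    return total
```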

4.2 Main results

We consider four setups, varying the type of domain shift (‘syn-2-real’ or ‘city-2-city’) and the number of targets (two or three domains). To measure per-target segmentation performance, we use the standard mean Intersection-over-Union (mIoU) metric. For multi-target performance, we report the mIoU averaged over the target domains; using the average helps mitigate the potential bias caused by target evaluation sets of substantially different sizes.
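A small sketch of the reported metrics, assuming per-domain confusion matrices have been accumulated over each target validation set; the helper names are illustrative:

```python
import numpy as np

def miou(conf: np.ndarray) -> float:
    """Mean IoU from a C x C confusion matrix (rows: ground truth, cols: prediction)."""
    inter = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    return float(np.mean(inter / np.maximum(union, 1.0)))

def miou_avg(confusions) -> float:
    """'mIoU Avg.': mean of per-target mIoUs, one confusion matrix per target."""
    return float(np.mean([miou(c) for c in confusions]))
```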

GTA5 → Cityscapes + Mapillary

| Method | Train | Target | flat | constr. | object | nature | sky | human | vehicle | mIoU | mIoU Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Single-Target Baseline [25] | Cityscapes | Cityscapes | 93.5 | 80.5 | 26.0 | 78.5 | 78.5 | 55.1 | 76.4 | 69.8 (*) | 66.6 |
|  |  | Mapillary | 86.8 | 69.0 | 30.2 | 71.2 | 91.5 | 35.3 | 59.5 | 63.4 |  |
| Single-Target Baseline [25] | Mapillary | Cityscapes | 89.3 | 79.3 | 19.5 | 76.9 | 84.6 | 47.7 | 63.0 | 65.8 | 67.7 |
|  |  | Mapillary | 89.5 | 72.6 | 31.0 | 75.3 | 94.1 | 50.7 | 73.8 | 69.6 (*) |  |
| Multi-Target Baseline [25] | both | Cityscapes | 93.1 | 80.5 | 24.0 | 77.9 | 81.0 | 52.5 | 75.0 | 69.1 | 68.9 |
|  |  | Mapillary | 90.0 | 71.3 | 31.1 | 73.0 | 92.6 | 46.6 | 76.6 | 68.7 |  |
| Multi-Dis. | both | Cityscapes | 94.5 | 80.8 | 22.2 | 79.2 | 82.1 | 47.0 | 79.0 | 69.3 | 69.5 |
|  |  | Mapillary | 89.4 | 71.2 | 29.5 | 76.2 | 93.6 | 50.4 | 78.3 | 69.8 |  |
| MTKT | both | Cityscapes | 95.0 | 81.6 | 23.6 | 80.1 | 83.6 | 53.7 | 79.8 | 71.1 | 70.9 |
|  |  | Mapillary | 90.6 | 73.3 | 31.0 | 75.3 | 94.5 | 52.2 | 79.8 | 70.8 |  |

Table 1: Semantic segmentation performance on GTA5 → Cityscapes + Mapillary. Per-class IoU (%), per-domain mean IoU (‘mIoU’) and mIoU averaged over domains (‘mIoU Avg.’); per-target baselines are marked with ‘(*)’; ‘Train’ indicates the unlabeled target data used for training.

GTA5 → Cityscapes + Mapillary.  Table 1 reports segmentation results on the two target validation sets of Cityscapes and Mapillary; GTA5 is the source domain in this setup. For comparison, we consider the single-target AdvEnt models, i.e. trained on either Cityscapes or Mapillary unlabeled images. We also include the multi-target AdvEnt model, denoted ‘Multi-Target Baseline’ in Table 1, which is trained on the two target sets merged together. For all models, including the single-target ones, we report both per-target and average mIoUs. The two rows marked with ‘(*)’ indicate results of the single-target models on the same domains they were trained on, regarded as per-target baselines.

Single-target baselines achieve worse average mIoU than models trained on both domains, which indicates the benefit of having access to diverse data from multiple domains during training. Our proposed approaches outperform the multi-target baseline, with mIoU gains of +0.6% for multi-discriminator and +2.0% for MTKT. Looking closer at the per-target results, we observe unfavorable performance when directly transferring single-target models to a new domain. Indeed, testing the Cityscapes-only model on Mapillary results in a drop of 6.2% mIoU compared to the reference performance (69.6% → 63.4%), and a similarly drastic drop is seen for the Mapillary-only model on Cityscapes (69.8% → 65.8%). In particular, we notice important degradation on safety-critical classes like human or vehicle with those single-target models. The multi-discriminator model achieves mIoUs comparable to the per-target baselines. The MTKT model improves over the per-target baselines by a significant margin, i.e. +1.3% on Cityscapes and +1.2% on Mapillary. Such results highlight the merit of the proposed strategies, especially MTKT. Note that adding adversarial training on the target-agnostic branch of MTKT hinders the alignment effect and reduces the average mIoU.

GTA5 → Cityscapes + IDD

| Method | Train | Target | flat | constr. | object | nature | sky | human | vehicle | mIoU | mIoU Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Single-Target Baseline [25] | Cityscapes | Cityscapes | 93.5 | 80.5 | 26.0 | 78.5 | 78.5 | 55.1 | 76.4 | 69.8 (*) | 66.5 |
|  |  | IDD | 91.3 | 52.3 | 13.3 | 76.1 | 88.7 | 46.7 | 74.8 | 63.3 |  |
| Single-Target Baseline [25] | IDD | Cityscapes | 78.6 | 79.2 | 24.8 | 77.6 | 83.6 | 48.7 | 44.8 | 62.5 | 63.8 |
|  |  | IDD | 91.2 | 53.1 | 16.0 | 78.2 | 90.7 | 47.9 | 78.9 | 65.1 (*) |  |
| Multi-Target Baseline [25] | both | Cityscapes | 93.9 | 80.2 | 26.2 | 79.0 | 80.5 | 52.5 | 78.0 | 70.0 | 67.4 |
|  |  | IDD | 91.8 | 54.5 | 14.4 | 76.8 | 90.3 | 47.5 | 78.3 | 64.8 |  |
| Multi-Dis. | both | Cityscapes | 94.3 | 80.7 | 20.9 | 79.3 | 82.6 | 48.5 | 76.2 | 68.9 | 67.3 |
|  |  | IDD | 92.3 | 55.0 | 12.2 | 77.7 | 92.4 | 51.0 | 80.2 | 65.7 |  |
| MTKT | both | Cityscapes | 94.5 | 82.0 | 23.7 | 80.1 | 84.0 | 51.0 | 77.6 | 70.4 | 68.2 |
|  |  | IDD | 91.4 | 56.6 | 13.2 | 77.3 | 91.4 | 51.4 | 79.9 | 65.9 |  |

Table 2: Semantic segmentation performance on GTA5 → Cityscapes + IDD. Organization as in Tab. 1.

GTA5 → Cityscapes + IDD.  We experiment with another syn-2-real setup in which the two target datasets have noticeably different landscapes, i.e. European cities in Cityscapes and Indian ones in IDD. Results are reported in Table 2. Here also, multi-target models outperform the single-target ones. In this setup, the performance of Multi-Dis. is comparable to the multi-target baseline's. We conjecture that the complex and unstable optimization problem in the multi-discriminator framework makes it difficult to achieve good alignment across targets, especially when the two targets are more markedly different. With a dedicated architecture and learning scheme that alleviate this optimization issue, the MTKT model achieves the best results, in terms of both per-target and average mIoUs.

We visualize some qualitative results in Figure 5.

GTA5 → Cityscapes + Mapillary + IDD

| Method | Train | Target | flat | constr. | object | nature | sky | human | vehicle | mIoU | mIoU Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Single-Target Baseline [25] | Cityscapes | Cityscapes | 93.5 | 80.5 | 26.0 | 78.5 | 78.5 | 55.1 | 76.4 | 69.8 (*) | 65.5 |
|  |  | Mapillary | 86.8 | 69.0 | 30.2 | 71.2 | 91.5 | 35.3 | 59.5 | 63.3 |  |
|  |  | IDD | 91.3 | 52.3 | 13.3 | 76.1 | 88.7 | 46.7 | 74.8 | 63.3 |  |
| Single-Target Baseline [25] | Mapillary | Cityscapes | 89.3 | 79.3 | 19.5 | 76.9 | 84.6 | 47.7 | 63.0 | 65.8 | 66.7 |
|  |  | Mapillary | 89.5 | 72.6 | 31.0 | 75.3 | 94.1 | 50.7 | 73.8 | 69.6 (*) |  |
|  |  | IDD | 91.7 | 54.3 | 13.0 | 77.3 | 92.3 | 47.4 | 76.8 | 64.7 |  |
| Single-Target Baseline [25] | IDD | Cityscapes | 78.6 | 79.2 | 24.8 | 77.6 | 83.6 | 48.7 | 44.8 | 62.5 | 65.5 |
|  |  | Mapillary | 88.5 | 71.2 | 32.4 | 72.8 | 92.8 | 51.3 | 73.7 | 69.0 |  |
|  |  | IDD | 91.2 | 53.1 | 16.0 | 78.2 | 90.7 | 47.9 | 78.9 | 65.1 (*) |  |
| Multi-Target Baseline [25] | all three | Cityscapes | 93.6 | 80.6 | 26.4 | 78.1 | 81.5 | 51.9 | 76.4 | 69.8 | 67.8 |
|  |  | Mapillary | 89.2 | 72.4 | 32.4 | 73.0 | 92.7 | 41.6 | 74.9 | 68.0 |  |
|  |  | IDD | 92.0 | 54.6 | 15.7 | 77.2 | 90.5 | 50.8 | 78.6 | 65.6 |  |
| Multi-Dis. | all three | Cityscapes | 94.6 | 80.0 | 20.6 | 79.3 | 84.1 | 44.6 | 78.2 | 68.8 | 68.2 |
|  |  | Mapillary | 89.0 | 72.5 | 29.3 | 75.5 | 94.7 | 50.3 | 78.9 | 70.0 |  |
|  |  | IDD | 91.6 | 54.2 | 13.1 | 78.4 | 93.1 | 49.6 | 80.3 | 65.8 |  |
| MTKT | all three | Cityscapes | 94.6 | 80.7 | 23.8 | 79.0 | 84.5 | 51.0 | 79.2 | 70.4 | 69.1 |
|  |  | Mapillary | 90.5 | 73.7 | 32.5 | 75.5 | 94.3 | 51.2 | 80.2 | 71.1 |  |
|  |  | IDD | 91.7 | 55.6 | 14.5 | 78.0 | 92.6 | 49.8 | 79.4 | 65.9 |  |

Table 3: Results on GTA5 → Cityscapes + Mapillary + IDD (three target domains). Organization as in Tab. 1.

Cityscapes → Mapillary + IDD

| Method | Train | Target | flat | constr. | object | nature | sky | human | vehicle | mIoU | mIoU Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Single-Target Baseline [25] | Mapillary | Mapillary | 87.4 | 65.9 | 28.2 | 72.8 | 92.1 | 46.9 | 72.7 | 66.6 (*) | 65.8 |
|  |  | IDD | 91.8 | 52.2 | 15.9 | 80.2 | 91.1 | 45.7 | 77.6 | 65.0 |  |
| Single-Target Baseline [25] | IDD | Mapillary | 88.2 | 70.0 | 28.5 | 75.4 | 93.6 | 49.1 | 76.7 | 68.8 | 68.0 |
|  |  | IDD | 93.2 | 53.4 | 16.5 | 83.4 | 93.4 | 51.4 | 79.5 | 67.3 (*) |  |
| Multi-Target Baseline [25] | both | Mapillary | 87.7 | 65.9 | 29.0 | 73.2 | 91.5 | 47.9 | 75.7 | 67.3 | 67.0 |
|  |  | IDD | 93.3 | 53.0 | 17.2 | 82.8 | 92.2 | 49.3 | 79.6 | 66.8 |  |
| Multi-Dis. | both | Mapillary | 88.6 | 70.9 | 29.6 | 75.8 | 94.7 | 49.2 | 76.1 | 69.3 | 67.9 |
|  |  | IDD | 92.8 | 52.8 | 17.0 | 83.1 | 94.2 | 48.5 | 77.4 | 66.5 |  |
| MTKT | both | Mapillary | 88.3 | 70.4 | 31.6 | 75.9 | 94.4 | 50.9 | 77.0 | 69.8 | 69.0 |
|  |  | IDD | 93.6 | 54.9 | 18.6 | 84.0 | 94.5 | 53.4 | 79.2 | 68.3 |  |

Table 4: Results of city-2-city multi-target UDA on Cityscapes → Mapillary + IDD. Organization as in Tab. 1.

GTA5 → Cityscapes + Mapillary + IDD.  We consider a more challenging setup involving three target domains – Cityscapes, Mapillary and IDD – and show results in Table 3. With more target domains, the same conclusions hold. In terms of average mIoU, the multi-discriminator model marginally improves over the multi-target baseline (+0.4%). The MTKT model significantly outperforms all other models with 69.1% mIoU Avg. Moreover, when compared to the per-target baselines, MTKT is the only model to show improvement on every target domain.

Cityscapes → Mapillary + IDD.  Finally, we experiment on a realistic city-2-city setup with Cityscapes as source and Mapillary and IDD as target domains. The results are shown in Table 4. Interestingly, on Mapillary, the single-target model trained on IDD achieves better results than the one trained on Mapillary itself. We conjecture that the domain gap between Cityscapes and Mapillary is smaller than the one between Cityscapes and IDD; the extra data diversity coming from IDD improves the generalization of the IDD-only model and helps mitigate the small Cityscapes-Mapillary domain gap. Another observation is that the IDD-only model outperforms the multi-target baseline. This indicates the disadvantage of the naive dataset-merging strategy: not only complementary signals but also conflicting/negative ones get transferred. The two proposed models outperform the multi-target baseline, MTKT obtaining the best performance overall. Again in this realistic setup, we showcase the advantages of our methods, especially the multi-target knowledge transfer model.

Conclusions.  These four sets of experiments demonstrate that the proposed multi-target frameworks consistently deliver competitive performance on the multiple target domains they are trained for. MTKT always gives the best performance, both in per-target and average mIoUs, compared to the baselines and to the multi-discriminator model. Note that our models are compatible with techniques such as image translation [10, 27, 28] or pseudo-labeling self-training [14, 21, 31], from which they could benefit. In particular, we show next with additional experiments how to use pseudo-labeling [21] with MTKT.


Figure 5: Qualitative results in the GTA5  Cityscapes  IDD setup. (a) Test images from Cityscapes and IDD; (b) Ground-truth segmentation maps; Results of (c) single-target baseline trained on Cityscapes target, (d) single-target baseline trained on IDD target, (e) multi-target baseline, (f) proposed Multi-Dis. and (g) proposed MTKT. Both proposed multi-target frameworks give overall cleaner segmentation maps compared to the baselines.

4.3 Further Experiments

GTA5 → Cityscapes + IDD

| Method | M-T base. | M-T base. + PL | MTKT | MTKT + PL (1) | MTKT + PL (2) | MTKT + PL (3) |
|---|---|---|---|---|---|---|
| mIoU Avg. | 67.4 | 68.9 | 68.2 | 69.8 | 69.7 | 69.9 |

Table 5: Additional impact of pseudo-labeling (PL) on GTA5 → Cityscapes + IDD. Trained models are refined with one step of ESL [21] (pseudo-labeling with predictive entropy as selection criterion). For MTKT, pseudo-labels are extracted for each target domain with the associated teacher head, and used either (1) to refine this head only, (2) to refine this head and to back-propagate the KL loss only on pixels whose predictions comply with the pseudo-labels, or (3) to refine both this head and the target-agnostic model.

Additional Impact of Pseudo-Labeling.  Pseudo-labeling (PL) is a strategy that has become quite popular in UDA for semantic segmentation [14, 21, 31]. It can easily be combined with our multi-target frameworks. Taking for instance the recently-proposed ESL [21], we consider three ways to adapt its pseudo-labeling strategy to the MTKT architecture. In all of them, we collect pseudo-labels in each target domain using the corresponding target-specific classifier and use them as additional self-supervision for these target-specific heads; in the second method, we also use these pseudo-labels to restrict the back-propagation of the KL losses to pixels that are correctly classified according to them; in the third method, they are also used to refine the target-agnostic classifier. We report in Table 5 the results of the models trained with these three PL-based refinement strategies on GTA5 → Cityscapes + IDD and compare them to the baseline trained with ESL. The three ways of extending MTKT with PL result in similar performance gains of at least +1.5% mIoU Avg. This demonstrates that knowledge transfer is complementary to pseudo-labeling. Moreover, MTKT with ESL outperforms the baseline with ESL by up to +1.0% mIoU Avg.
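As an illustration of the entropy-based selection underlying ESL [21], here is a minimal sketch of pseudo-label extraction from a teacher head's soft predictions; the fixed entropy threshold is an assumption for illustration (ESL's actual selection and thresholding scheme are described in [21]):

```python
import torch

def extract_pseudo_labels(probs: torch.Tensor, max_entropy: float = 0.5,
                          ignore_index: int = 255) -> torch.Tensor:
    """probs: B x C x H x W soft predictions from a (teacher) classifier."""
    num_classes = probs.shape[1]
    # normalized predictive entropy in [0, 1] at every pixel
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    entropy = entropy / torch.log(torch.tensor(float(num_classes)))
    labels = probs.argmax(dim=1)
    labels[entropy > max_entropy] = ignore_index  # reject uncertain pixels
    return labels
```

The retained labels then serve as additional cross-entropy supervision for the refinement strategies (1)-(3) described above.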

| Setup | Method | Test set | flat | constr. | object | nature | sky | human | vehicle | mIoU |
|---|---|---|---|---|---|---|---|---|---|---|
| G → C + I | M-T Baseline | Mapillary | 88.4 | 71.0 | 31.0 | 72.4 | 92.0 | 37.4 | 74.7 | 66.7 |
|  | Multi-Dis. | Mapillary | 89.2 | 72.1 | 21.7 | 73.8 | 94.0 | 34.8 | 75.9 | 65.9 |
|  | MTKT | Mapillary | 89.8 | 74.0 | 30.4 | 74.1 | 93.6 | 52.6 | 79.4 | 70.6 |
| G → C + M | M-T Baseline | IDD | 91.6 | 54.7 | 13.9 | 76.5 | 90.9 | 48.3 | 77.5 | 64.8 |
|  | Multi-Dis. | IDD | 91.2 | 54.6 | 12.9 | 77.7 | 92.5 | 50.3 | 78.6 | 65.4 |
|  | MTKT | IDD | 91.5 | 56.1 | 12.3 | 76.1 | 90.9 | 51.4 | 79.2 | 65.4 |

Table 6: Direct transfer to a new target. Multi-target models are tested on a new unseen domain: (Top) models trained on GTA5 → Cityscapes + IDD, tested on Mapillary; (Bottom) models trained on GTA5 → Cityscapes + Mapillary, tested on IDD.

Direct Transfer to a New Dataset.  We consider a direct-transfer setup in which the models see no images from the test domain during training: this experiment highlights how well the models generalize to previously-unseen domains. We report in Table 6 the results of such a direct transfer in two setups. The models are trained on GTA5 → Cityscapes + IDD (resp. GTA5 → Cityscapes + Mapillary) and tested on Mapillary (resp. IDD). In both setups, MTKT shows better mIoU than the baselines on the new domain. In the first one in particular, with Mapillary as the new test domain, MTKT outperforms the multi-target baseline by +3.9%. What is particularly noticeable in this setup is the performance on the human class: while we observe IoUs of around 50% on this class when models are adapted to Mapillary (e.g. in Tab. 1), the direct-transfer results of the multi-target baseline and of Multi-Dis. drop under 40%; differently, MTKT maintains similar performance with 52.6% IoU on human. This experiment hints at the ability of MTKT to better generalize to new unseen domains.

5 Conclusion

This work addresses the new problem of unsupervised adaptation to multiple target domains in semantic segmentation. We discuss the challenges that this UDA setup raises in terms of distribution alignment and joint learning. This leads to two novel frameworks: the multi-discriminator approach extends single-target UDA to handle pair-wise domain alignment; the multi-target knowledge transfer approach alleviates the instability of multi-domain adversarial learning with a multi-teacher/single-student distillation mechanism. In the context of driving scenes, we propose four experimental setups, varying the type of source-target gaps and the number of target domains. Our approaches outperform all baselines on these four setups, which are representative of real-world applications. Further experiments additionally show that our frameworks can be combined with state-of-the-art pseudo-labeling strategies and that the proposed learning schemes help generalize to previously-unseen datasets. This work thus contributes to the recent research line in domain adaptation toward more practical use cases. With the same goal, future research directions may consider more complex mixes of source and target domains, making use of several labeled and unlabeled datasets.

Appendix A Mapping Classes to Super Classes

Tables 7, 8, 9 and 10 present how the original classes of the four considered datasets are mapped to the 7 shared super classes.

  • void: unlabeled (0), ego vehicle (1), rectification border (2), out of roi (3), static (4), dynamic (5), ground (6)
  • flat: road (7), sidewalk (8), parking (9), rail track (10)
  • construction: building (11), wall (12), fence (13), guard rail (14), bridge (15), tunnel (16)
  • object: pole (17), polegroup (18), traffic light (19), traffic sign (20)
  • nature: vegetation (21), terrain (22)
  • sky: sky (23)
  • human: person (24), rider (25)
  • vehicle: car (26), truck (27), bus (28), caravan (29), trailer (30), train (31), motorcycle (32), bicycle (33), license plate (−1)

Table 7: Mapping classes of Cityscapes. Original class names, with their original ids in parentheses, are grouped under the super class they are mapped to; classes mapped to void are not used during training and test.

  • flat: road (0), sidewalk (1)
  • construction: building (2), wall (3), fence (4)
  • object: pole (5), traffic light (6), traffic sign (7)
  • nature: vegetation (8), terrain (9)
  • sky: sky (10)
  • human: person (11), rider (12)
  • vehicle: car (13), truck (14), bus (15), train (16), motorcycle (17), bicycle (18)
  • void: unlabeled (−1)

Table 8: Mapping classes of GTA5. Organization as in Table 7.

  • other: bird (0), ground animal (1)
  • construction: curb (2), fence (3), guard rail (4), barrier (5), wall (6), bridge (16), building (17), tunnel (18)
  • flat: bike lane (7), crosswalk - plain (8), curb cut (9), parking (10), pedestrian area (11), rail track (12), road (13), service lane (14), sidewalk (15), lane marking - crosswalk (23), lane marking - general (24), terrain (29)
  • human: person (19), bicyclist (20), motorcyclist (21), other rider (22)
  • nature: mountain (25), sand (26), snow (28), vegetation (30), water (31)
  • sky: sky (27)
  • object: banner (32), bench (33), bike rack (34), billboard (35), catch basin (36), cctv camera (37), fire hydrant (38), junction box (39), mailbox (40), manhole (41), phone booth (42), pothole (43), street light (44), pole (45), traffic sign frame (46), utility pole (47), traffic light (48), traffic sign (back) (49), traffic sign (front) (50), trash can (51)
  • vehicle: bicycle (52), boat (53), bus (54), car (55), caravan (56), motorcycle (57), on rails (58), other vehicle (59), trailer (60), truck (61), wheeled slow (62)
  • void: car mount (63), ego vehicle (64), unlabeled (−1)

Table 9: Mapping classes of Mapillary Vistas. Organization as in Table 7.

  • flat: road (0), parking (1), drivable fallback (2), sidewalk (3), rail track (4), non-drivable fallback (5)
  • human: person (6), rider (8)
  • other: animal (7)
  • vehicle: motorcycle (9), bicycle (10), autorickshaw (11), car (12), truck (13), bus (14), caravan (15), trailer (16), train (17), vehicle fallback (18), license plate (39)
  • construction: curb (19), wall (20), fence (21), guard rail (22), building (29), bridge (30), tunnel (31)
  • object: billboard (23), traffic sign (24), traffic light (25), pole (26), polegroup (27), obs-str-bar-fallback (28), fallback background (34)
  • nature: vegetation (32)
  • sky: sky (33)
  • void: unlabeled (35), ego vehicle (36), rectification border (37), out of roi (38)

Table 10: Mapping classes of IDD. Organization as in Table 7.

References

  • [1] L. Bottou (2010) Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT. Cited by: §4.1.
  • [2] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. Yuille (2018) DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). Cited by: §4.1.
  • [3] Z. Chen, J. Zhuang, X. Liang, and L. Lin (2019) Blending-target domain adaptation by adversarial meta-adaptation networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Cited by: §1, §2.
  • [4] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Cited by: 2nd item, 2nd item.
  • [5] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1.
  • [6] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR). Cited by: §4.1.
  • [7] B. Gholami, P. Sahu, O. Rudovic, K. Bousmalis, and V. Pavlovic (2020) Unsupervised multi-target domain adaptation: an information theoretic approach. IEEE Transactions on Image Processing (TIP). Cited by: §1, §2, §3.1.
  • [8] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1.
  • [9] G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Cited by: §3.3.
  • [10] J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell (2018) CyCADA: cycle-consistent adversarial domain adaptation. In Proceedings of the International Conference on Machine Learning (ICML). Cited by: §2, §4.2.
  • [11] J. Hoffman, D. Wang, F. Yu, and T. Darrell (2016) FCNs in the wild: pixel-level adversarial and constraint-based adaptation. arXiv:1612.02649. Cited by: §2, §3.3.
  • [12] T. Isobe, X. Jia, S. Chen, J. He, Y. Shi, J. Liu, H. Lu, and S. Wang (2021) Multi-target domain adaptation with collaborative consistency learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [13] K. Lee, G. Ros, J. Li, and A. Gaidon (2019) SPIGAN: privileged adversarial learning from simulation. In Proceedings of the International Conference on Learning Representations (ICLR), Cited by: §4.1.
  • [14] Y. Li, L. Yuan, and N. Vasconcelos (2019) Bidirectional learning for domain adaptation of semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §4.2, §4.3.
  • [15] Z. Liu, Z. Miao, X. Pan, X. Zhan, D. Lin, S. X. Yu, and B. Gong (2020) Open compound domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Cited by: §1, §2.
  • [16] M. Long, Y. Cao, J. Wang, and M. I. Jordan (2015) Learning transferable features with deep adaptation networks. In Proceedings of the International Conference on Machine Learning (ICML), Cited by: §2.
  • [17] G. Neuhold, T. Ollmann, S. Rota Bulò, and P. Kontschieder (2017) The Mapillary Vistas dataset for semantic understanding of street scenes. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). Cited by: 2nd item, 4th item.
  • [18] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. In Workshop at Advances in Neural Information Processing Systems (NIPS), Cited by: §4.1.
  • [19] X. Peng, Z. Huang, X. Sun, and K. Saenko (2019) Domain agnostic learning with disentangled representations. In International Conference on Machine Learning (ICML), Cited by: §2.
  • [20] S. R. Richter, V. Vineet, S. Roth, and V. Koltun (2016) Playing for data: ground truth from computer games. In Proceedings of the IEEE European Conference on Computer Vision (ECCV), Cited by: 2nd item, 1st item.
  • [21] A. Saporta, T. Vu, M. Cord, and P. Pérez (2020) ESL: entropy-guided self-supervised learning for domain adaptation in semantic segmentation. In Workshop on Scalability in Autonomous Driving of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Cited by: §2, §4.2, §4.3, Table 5.
  • [22] B. Sun and K. Saenko (2016) Deep coral: correlation alignment for deep domain adaptation. In Proceedings of the IEEE European Conference on Computer Vision (ECCV), Cited by: §2.
  • [23] Y. Tsai, W. Hung, S. Schulter, K. Sohn, M. Yang, and M. Chandraker (2018) Learning to adapt structured output space for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §3.2, §3.2, §3.2, §3.3, §3.3.
  • [24] G. Varma, A. Subramanian, A. Namboodiri, M. Chandraker, and C. V. Jawahar (2019) IDD: a dataset for exploring problems of autonomous navigation in unconstrained environments. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV). Cited by: 2nd item, 3rd item.
  • [25] T. Vu, H. Jain, M. Bucher, M. Cord, and P. Pérez (2019) ADVENT: adversarial entropy minimization for domain adaptation in semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §3.2, §3.2, §3.2, §3.3, §3.3, Table 1, Table 2, Table 3, Table 4.
  • [26] T. Vu, H. Jain, M. Bucher, M. Cord, and P. Pérez (2019) DADA: depth-aware domain adaptation in semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Cited by: §4.1.
  • [27] Z. Wu, X. Han, Y. Lin, M. Gokhan Uzunbas, T. Goldstein, S. Nam Lim, and L. S. Davis (2018) Dcan: dual channel-wise alignment networks for unsupervised scene adaptation. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2, §4.2.
  • [28] Y. Yang and S. Soatto (2020) Fda: fourier domain adaptation for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §4.2.
  • [29] H. Yu, M. Hu, and S. Chen (2018) Multi-target unsupervised domain adaptation without exactly shared categories. arXiv preprint arXiv:1809.00852. Cited by: §2, §3.1.
  • [30] S. Zhao, B. Li, X. Yue, Y. Gu, P. Xu, R. Hu, H. Chai, and K. Keutzer (2019) Multi-source domain adaptation for semantic segmentation. Advances in Neural Information Processing Systems (NeurIPS). Cited by: §2.
  • [31] Y. Zou, Z. Yu, B. Vijaya Kumar, and J. Wang (2018) Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2, §4.2, §4.3.