Learning to Relate Depth and Semantics for Unsupervised Domain Adaptation

05/17/2021 · by Suman Saha, et al. (ETH Zurich)

We present an approach for encoding visual task relationships to improve model performance in an Unsupervised Domain Adaptation (UDA) setting. Semantic segmentation and monocular depth estimation are shown to be complementary tasks; in a multi-task learning setting, a proper encoding of their relationships can further improve performance on both tasks. Motivated by this observation, we propose a novel Cross-Task Relation Layer (CTRL), which encodes task dependencies between the semantic and depth predictions. To capture the cross-task relationships, we propose a neural network architecture that contains task-specific and cross-task refinement heads. Furthermore, we propose an Iterative Self-Learning (ISL) training scheme, which exploits semantic pseudo-labels to provide extra supervision on the target domain. We experimentally observe improvements in both tasks' performance because the complementary information present in these tasks is better captured. Specifically, we show that: (1) our approach improves performance on all tasks when they are complementary and mutually dependent; (2) the CTRL helps to improve both semantic segmentation and depth estimation tasks performance in the challenging UDA setting; (3) the proposed ISL training scheme further improves the semantic segmentation performance. The implementation is available at https://github.com/susaha/ctrl-uda.


1 Introduction

Corresponding author: Suman Saha (suman.saha@vision.ee.ethz.ch). * Equal contribution.

Semantic segmentation and monocular depth estimation are two important computer vision tasks that allow us to perceive the world around us and enable agents’ reasoning, e.g., in an autonomous driving scenario. Moreover, these tasks have been shown to be complementary to each other, i.e., information from one task can improve the other task’s performance [kendall2018multi, maninis2019attentive, vandenhende2020mti]. Domain Adaptation (DA) [csurka2017comprehensive] refers to maximizing model performance in an environment with a smaller degree of supervision (the target domain) relative to what the model was trained on (the source domain). Unsupervised Domain Adaptation (UDA) assumes access only to unannotated samples from the target domain at train time – the setting of interest in this paper, explained in greater detail in Sec. 6.

Recent domain adaptation techniques [lee2018spigan, vu2019dada] proposed to leverage depth information available in the source domain to improve semantic segmentation on the target domain. However, they lack an explicit multi-task formulation to relate depth and semantics, that is, to model how each semantic category relates to different depth levels. The term depth levels refers to discrete ranges of depth values, e.g., “near” (1–5 m), “medium-range” (5–20 m), or “far” (beyond 20 m). This paper aims to design a model that learns explicit relationships between different visual semantic classes and depth levels within the UDA context.

To this end, we design a network architecture and a new multitask-aware feature space alignment mechanism for UDA. First, we propose a Cross-Task Relation Layer (CTRL) – a novel parameter-free differentiable module tailored to capture the task relationships given the network’s semantic and depth predictions. Second, we utilize a Semantics Refinement Head (SRH) that explicitly captures cross-task relationships by learning to predict semantic segmentation given predicted depth features. Both CTRL and SRH boost the model’s ability to effectively encode correlations between semantics and depth, thus improving predictions on the target domain. Third, we employ an Iterative Self Learning (ISL) scheme. Coupled with the model design, it further pushes the performance of semantic segmentation. As a result, our method achieves state-of-the-art semantic segmentation performance on three challenging UDA benchmarks (Sec. 4). Fig. 1 demonstrates our method’s effectiveness by comparing semantic predictions of classes underrepresented in the target domain to predictions made by the previous state-of-the-art method. The paper is organized as follows: Sec. 2 discusses the related work; Sec. 3 describes the proposed approach to UDA, the network architecture, and the learning scheme; Sec. 4 presents the experimental analysis with ablation studies; Sec. 5 concludes the paper.

2 Related Work

Semantic Segmentation. The task of assigning a semantic label to each pixel of an image has conventionally been addressed using hand-crafted features combined with classifiers such as Random Forests [shotton2008semantic], SVMs [fulkerson2009class], or Conditional Random Fields [ladicky2010and]. Powered by the effectiveness of Convolutional Neural Networks (CNNs) [lecun1998gradient], deep learning-based models have since proliferated. Long et al. [long2015fully] were among the first to use fully convolutional networks (FCNs) for semantic segmentation, and this design quickly became the state of the art for the task. The encoder-decoder design is still widely used [yu2015multi, chen2018encoder, badrinarayanan2017segnet, zhao2017pyramid, chen2017deeplab].

Cross-domain Semantic Segmentation. Training deep networks for semantic segmentation requires large amounts of labeled data, which presents a significant bottleneck in practice, as acquiring pixel-wise labels is a labor-intensive process. A common approach to address the issue is to train the model on a source domain and apply it to a target domain in a UDA context. However, this often causes a performance drop due to the domain shift. Domain Adaptation aims to solve the issue by aligning the features from different domains. DA is a highly active research field, and techniques have been developed for various applications, including image classification [ganin2015unsupervised, li2017deeper, long2015learning, lu2017unsupervised], object detection [chen2018domain], fine-grained recognition [gebru2017fine], etc.

More related to our method are several works on unsupervised domain adaptation for semantic segmentation [zhang2017curriculum, sankaranarayanan2017unsupervised, zou2018unsupervised, chen2018road, vu2019advent, iqbal2020mlsl, yang2020label, zhou2020uncertainty, paul2020domain, yang2020context]. This problem has been tackled with curriculum learning [zhang2017curriculum], GANs [sankaranarayanan2017unsupervised], adversarial training on the feature space [chen2018road], output space [tsai2018learning], or entropy maps [vu2019advent], self-learning using pseudo- or weak labels [zou2018unsupervised, paul2020domain, iqbal2020mlsl]. However, prior works typically only consider adapting semantic segmentation while neglecting any multi-task correlations. A few methods [chen2019learning, vu2019dada] model correlations between semantic segmentation and depth estimation, similarly to our work, yet – as explained in Sec. 1 – these works come with crucial limitations.

Monocular Depth Estimation. Similar to semantic segmentation, monocular depth estimation is dominated by CNN-based methods [Eigen2014, fu2018deep, laina2016deeper, li2015depth]. [Eigen2014] introduced a CNN-based architecture that regresses a dense depth map. This approach was later improved by incorporating techniques such as CRFs [Liu:2016, li2015depth] and multi-scale CRFs [xu2017multi]. In addition, improvements in the loss design itself have also led to better depth estimation; examples include the reverse Huber (berHu) loss [owen2007robust, zwald2012berhu] and the ordinal regression loss [fu2018deep].

Multi-task Learning for Semantic Segmentation and Depth Estimation. Within the context of multi-task learning, semantic segmentation is shown to be highly correlated with depth estimation, and vice versa [zamir2018taskonomy, xu2018pad, kendall2018multi, zhang2018joint, zhang2019pattern, maninis2019attentive, standley2019tasks, vandenhende2020mti, vandenhende2020revisiting, kanakis2020reparameterizing]. To leverage this correlation, some authors have proposed to learn them jointly [ramirez2018geometry, jiao2018look, chen2019towards]. In particular, [neven2017fast, jiao2018look, vandenhende2019branched, bruggemann2020automated] proposed to share the encoder and use multiple decoders, whereas a shared conditional decoder is used in [chen2019towards]. Semantic segmentation was also demonstrated to help guide the depth training process [guizilini2020semantically, jiang2019sense].

In this paper, we build upon these observations. We argue that task relationships, like the ones between depth and semantics, are not entirely domain-specific. As a result, if we correctly model these relationships in one domain, they can be transferred to another domain to help guide the DA process. The proposed method and its components are explicitly designed around this hypothesis.

(Figure 2 diagram: an input image is fed to the backbone and decoder, producing a shared feature map; the semantics head, SRH head, and depth head produce the predictions (semantics, refined semantics, continuous depth); inside the Cross-Task Relation Layer, a discretization module converts the continuous depth into discrete depth and an entropy map generator produces the fused entropy map, which is passed to the domain discriminator that predicts the domain of the input.)
Figure 2: Overview of the proposed neural architecture (Sec. 3.2) and the CTRL module (Sec. 3.4). Supervised losses (in the middle) are applied only on the source domain; the rest of the data flow is domain-agnostic. Legend: learned modules, predictions, loss functions; rounded corners denote operators, rectangles denote activations.

3 Method

In this section, we describe our approach to UDA in the autonomous driving setting. Sec. 3.1 presents an overview of the proposed approach; Sec. 3.2 explains the notation and problem formulation; Sec. 3.3 describes supervision on the source domain; Sec. 3.4 presents the CTRL module design; Sec. 3.5 describes the ISL technique; Sec. 3.6 covers the remaining network architecture details.

3.1 Overview

The primary hypothesis behind our approach is that task dependencies persist across domains, i.e., most semantic classes fall within a characteristic depth range. We can exploit this information from source samples and transfer it to the target domain using adversarial training. As our goal is to train the network in a UDA setting, we follow an adversarial training scheme [hoffman2016fcns, tsai2018learning] to learn domain-invariant representations.

Unlike [vu2019dada], which directly aligns a combination of semantics and depth features, we wish to design a joint feature space for domain alignment by fusing the task-specific and the cross-task features, and then learn to minimize the domain gap through adversarial training. To this end, we propose CTRL – a novel module that constructs the joint feature space by computing entropy maps of both the semantic label and discretized depth distributions (Fig. 2). Thus, CTRL entropy maps, generated on the source and target domains, are expected to carry similar information.

Semantic segmentation performance can be further enhanced with the Iterative Self-Learning (ISL) training scheme, which does not require expensive patch-based pseudo-label generation as in [iqbal2020mlsl]. As CTRL helps the network make high-quality predictions (Fig. 1), ISL training exploits high-confidence predictions as supervision (pseudo-labels) on the target domain.

3.2 Problem Formulation

Let $\mathcal{S}$ and $\mathcal{T}$ denote the source and target domains, with samples from them represented by tuples $(x_s, y_s, d_s)$ and $(x_t)$ respectively, where $x \in \mathbb{R}^{H \times W \times 3}$ are color images, $y \in \{0,1\}^{H \times W \times C}$ are semantic annotations with $C$ classes, and $d \in \mathbb{R}^{H \times W}$ are depth maps from a finite frustum. Furthermore, $F$ is the shared feature extractor, which includes a pretrained backbone and a decoder; $S$ and $D$ are the task-specific semantics and depth heads, respectively; $R$ is the SRH (Fig. 2).

First, $F$ extracts a shared feature map $f = F(x)$ to be used by the SRH and the task-specific semantics and depth heads. The semantics head predicts a semantic segmentation map $\hat{y} = S(f)$ with $C$ channels per pixel, denoting predicted class probabilities. The depth head predicts a real-valued depth map $\hat{d} = D(f)$, where each pixel is mapped into the finite frustum specified in the source domain. We further employ the SRH to learn the cross-task relationship between semantics and depth by making it predict semantics from the shared feature map, attenuated by the predicted depth map. Formally, the shared feature map is point-wise multiplied by the predicted depth map, and then the SRH predicts a second (auxiliary) semantic segmentation map: $\hat{y}^{r} = R(f \odot \hat{d})$.

We refer to the part of the model enclosing the $F$, $S$, $D$, and $R$ modules as the prediction network. The predictions made by the network on the source and target domains are denoted as $(\hat{y}_s, \hat{y}^{r}_s, \hat{d}_s)$ and $(\hat{y}_t, \hat{y}^{r}_t, \hat{d}_t)$, respectively. We upscale these predictions along the spatial dimensions to match the original input image size before any further processing. Given these semantics and depth predictions on the source and target domains, we optimize the network using a supervised loss on the source domain and an unsupervised domain alignment loss on the target domain within the same training process.
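To make the data flow concrete, the following PyTorch-style sketch mirrors the forward pass described above. Module internals (backbone, decoder, heads) are left as placeholders, and names such as `PredictionNetwork` are ours, not identifiers from the released code.

```python
import torch
import torch.nn as nn

class PredictionNetwork(nn.Module):
    """Minimal sketch of the prediction network data flow (Fig. 2).

    F: shared feature extractor (backbone + decoder), S: semantics head,
    D: depth head, R: semantics refinement head (SRH). Only the wiring
    follows the paper; the submodules are placeholders.
    """

    def __init__(self, feature_extractor, sem_head, dep_head, srh_head):
        super().__init__()
        self.F = feature_extractor   # x -> shared feature map f
        self.S = sem_head            # f -> C-channel class scores
        self.D = dep_head            # f -> 1-channel depth (disparity)
        self.R = srh_head            # f * d_hat -> C-channel class scores

    def forward(self, x):
        f = self.F(x)                              # shared feature map
        y_hat = torch.softmax(self.S(f), dim=1)    # semantic probabilities
        d_hat = self.D(f)                          # continuous depth map
        # SRH: attenuate the shared features with the predicted depth,
        # then predict an auxiliary segmentation map from the result.
        y_ref = torch.softmax(self.R(f * d_hat), dim=1)
        return y_hat, y_ref, d_hat
```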

3.3 Supervised Learning

Since the semantic segmentation predictions $\hat{y}_s$, $\hat{y}^{r}_s$ and the ground truth $y_s$ are represented as pixel-wise class probabilities over $C$ classes, we employ the standard cross-entropy loss for the semantic heads:

$$\mathcal{L}_{sem}(x_s, y_s) = -\frac{1}{HW} \sum_{h,w} \sum_{c=1}^{C} y_s^{(h,w,c)} \log \hat{y}_s^{(h,w,c)}, \qquad (1)$$

and analogously $\mathcal{L}_{srh}$ with $\hat{y}^{r}_s$ in place of $\hat{y}_s$.

We use the berHu loss (the reverse Huber criterion [laina2016deeper]) for penalizing depth predictions:

$$\mathcal{L}_{dep}(x_s, d_s) = \frac{1}{HW} \sum_{h,w} \mathrm{berHu}\big(\hat{d}_s^{(h,w)} - d_s^{(h,w)}\big), \quad \mathrm{berHu}(e) = \begin{cases} |e|, & |e| \le c, \\ \dfrac{e^2 + c^2}{2c}, & |e| > c, \end{cases} \qquad (2)$$

where $c$ is a threshold typically set to a fraction of the maximum absolute residual [laina2016deeper].
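As a reference, a minimal berHu implementation is sketched below; the threshold of 0.2 times the maximum absolute residual in the batch follows the common choice of [laina2016deeper] and is an assumption rather than a value stated here.

```python
import torch

def berhu_loss(pred, target, threshold_ratio=0.2):
    """Reverse Huber (berHu) loss; threshold choice follows [laina2016deeper]."""
    residual = (pred - target).abs()
    c = threshold_ratio * residual.max().detach()        # per-batch threshold
    # L1 below the threshold, scaled L2 above it (continuous at |e| = c).
    l2_branch = (residual ** 2 + c ** 2) / (2.0 * c + 1e-12)
    loss = torch.where(residual <= c, residual, l2_branch)
    return loss.mean()
```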

Following [kendall2018multi], we regress inverse depth values (normalized disparity), which is shown to improve the precision of predictions over the full range of the view frustum. The parameters $\theta_F$, $\theta_S$, $\theta_D$, $\theta_R$ (parameterizing the $F$, $S$, $D$, $R$ modules), collectively denoted as $\theta$, are learned to minimize the following supervised objective on the source domain:

$$\mathcal{L}_{sup} = \mathcal{L}_{sem} + \lambda_{srh} \mathcal{L}_{srh} + \lambda_{dep} \mathcal{L}_{dep}, \qquad (3)$$

where $\lambda_{srh}$ and $\lambda_{dep}$ are hyperparameters weighting the relative importance of the SRH and depth supervision.
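Putting the source-domain terms together, a sketch of Eq. (3) could look as follows; it reuses `berhu_loss` from above, assumes the semantic labels are given as per-pixel class indices, and the default weights are placeholders rather than the values used in the paper.

```python
import torch
import torch.nn.functional as F

def supervised_loss(pred_net, x_s, y_s, d_s, lambda_srh=1.0, lambda_dep=1.0):
    """Source-domain objective of Eq. (3): semantics + SRH + depth terms.

    y_s: (B, H, W) per-pixel class indices; d_s: depth target in the same
    space as the network output (the paper regresses inverse depth).
    """
    y_hat, y_ref, d_hat = pred_net(x_s)
    log_y = torch.log(y_hat.clamp_min(1e-12))
    log_r = torch.log(y_ref.clamp_min(1e-12))
    loss_sem = F.nll_loss(log_y, y_s)            # Eq. (1), semantics head
    loss_srh = F.nll_loss(log_r, y_s)            # Eq. (1), SRH
    loss_dep = berhu_loss(d_hat, d_s)            # Eq. (2)
    return loss_sem + lambda_srh * loss_srh + lambda_dep * loss_dep
```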

3.4 Cross-Task Relation Layer

In the absence of ground truth annotations for the target samples, we train the network on the target images using an unsupervised domain alignment loss. Existing works align the source and target domains either in a semantic space [vu2019advent] or in a depth-aware semantic space [vu2019dada] obtained by fusing the continuous depth predictions with the predicted semantic maps. Here, we argue that simply fusing the continuous depth prediction into the semantics does not enable the network to learn useful semantic features at different depth levels; explicit modeling is required to achieve this goal.

Humans relate semantic categories at each discrete depth level differently. For example, “sky” is “far away” (large depth), “vehicles” are “nearby”, and “road” appears both “far” and “nearby”. Taking inspiration from the way humans relate semantics and depth, we design CTRL (Fig. 2) to capture semantic class-specific dependencies at different discrete depth levels. Moreover, CTRL also preserves task-specific information by fusing the task-specific and task-dependent features learned by the semantics, depth, and refinement (SRH) heads. CTRL consists of a depth discretization module, an entropy map generator, and a fusion layer, described in the following subsections.

3.4.1 Depth Discretization Module

The prediction made by the depth head contains continuous depth values. We map it to a discrete probability space in order to learn visual semantic features at different depth levels. We quantize the view frustum depth range into a set of representative discrete values following the spacing-increasing discretization (SID) [fu2018deep]. Such a discretization assigns progressively larger depth sub-ranges, further away from the point of view, to separate bins, which allows us to approximate the human perception of depth relations in the scene with a finite number of categories.

Given the depth range $[d_{\min}, d_{\max}]$ and the number of depth bins $K$, SID outputs a $K$-dimensional vector of discretization bin centers $t = (t_1, \dots, t_K)$ as follows:

$$t_k = \exp\!\Big(\log d_{\min} + \frac{k}{K} \log \frac{d_{\max}}{d_{\min}}\Big), \quad k = 1, \dots, K. \qquad (4)$$

We can now assign probabilities of the predicted depth values falling into the defined bins:

(5)
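The sketch below computes SID bin centers as in Eq. (4) and shows one plausible soft assignment of continuous depth to bins via a softmax over negative distances to the bin centers; since Eq. (5) is not reproduced here, the `depth_to_bin_probs` form and its `softness` parameter are our assumptions.

```python
import torch

def sid_bin_centers(d_min, d_max, num_bins):
    """Spacing-increasing discretization (SID) of [d_min, d_max] into bin centers."""
    k = torch.arange(1, num_bins + 1, dtype=torch.float32)
    log_ratio = torch.log(torch.tensor(d_max / d_min))
    return d_min * torch.exp(k / num_bins * log_ratio)    # shape: (num_bins,)

def depth_to_bin_probs(d_hat, centers, softness=1.0):
    """Soft assignment of continuous depth to SID bins (assumed form, not Eq. 5).

    d_hat: (B, 1, H, W) continuous depth; centers: (K,) bin centers.
    Returns (B, K, H, W) probabilities over depth levels.
    """
    dist = (d_hat - centers.view(1, -1, 1, 1)).abs()       # distance to each bin center
    return torch.softmax(-dist / softness, dim=1)           # closer bins get more mass
```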

3.4.2 Joint Space for Domain Alignment

The task-dependency map (output by the SRH), alongside the task-specific semantics and depth probability maps, can be considered as discrete distributions over semantic classes and depth levels. As we do not have access to ground truth labels for the target domain, one way to train the network to make high-confidence predictions is to minimize the uncertainty (i.e., the entropy) of the predicted distributions on the target domain [vu2019advent]. Moreover, the source and target domains share similar spatial layouts, and aligning them in the structured output space is known to be effective [hoffman2018cycada].

To this end, we propose a novel UDA training scheme in which task-specific and task-dependent knowledge is transferred from the source to the target domain by constraining the target distributions to be similar to the source ones, i.e., by aligning the entropy maps of the semantic, refined semantic, and discretized depth predictions. Note that unlike [vu2019dada, vu2019advent], which constrain only the task-specific space (the semantic predictions in our case) for domain alignment, we train the network to output highly certain predictions by aligning features in both the task-specific and task-dependent spaces.

We argue that aligning the source and target distributions jointly in the task-specific and task-dependent spaces helps to bridge the domain gap for underrepresented classes, which are learned poorly in the absence of a joint representation. To encode such a joint representation, we generate entropy (weighted self-information) maps; for the semantics prediction $\hat{y}$,

$$I_{\hat{y}}^{(h,w,c)} = -\hat{y}^{(h,w,c)} \log \hat{y}^{(h,w,c)}, \qquad (6)$$

and analogously $I_{\hat{y}^{r}}$ for the refined semantics and $I_{\hat{p}}$ for the discretized depth distribution $\hat{p}$. We then concatenate these maps along the channel dimension to obtain the fused entropy map $I = [I_{\hat{y}}, I_{\hat{y}^{r}}, I_{\hat{p}}]$ and employ adversarial training on it.
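A sketch of the fused joint space, assuming the channel-wise weighted self-information form of Eq. (6); the helper names are ours.

```python
import torch

def self_information(p, eps=1e-12):
    """Channel-wise weighted self-information map: -p * log(p)."""
    return -p * torch.log(p + eps)

def fused_entropy_map(y_hat, y_ref, depth_probs):
    """CTRL joint space: concatenate the per-task self-information maps.

    y_hat, y_ref: (B, C, H, W) semantic probability maps (semantics head, SRH);
    depth_probs:  (B, K, H, W) discretized depth probabilities.
    Returns a (B, 2C + K, H, W) map fed to the domain discriminator.
    """
    maps = [self_information(p) for p in (y_hat, y_ref, depth_probs)]
    return torch.cat(maps, dim=1)
```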

For aligning the source and target domain distributions, we train the proposed segmentation and depth prediction network (parameterized by $\theta$) and the discriminator network $A$ (parameterized by $\theta_A$) following an adversarial learning scheme. More specifically, the discriminator is trained to correctly classify the sample domain as either source or target given only the fused entropy map ($I_s$ for source and $I_t$ for target samples):

$$\min_{\theta_A} \; \frac{1}{|\mathcal{S}|} \sum_{x_s} \mathcal{L}_{A}\big(I_s, 1\big) + \frac{1}{|\mathcal{T}|} \sum_{x_t} \mathcal{L}_{A}\big(I_t, 0\big), \qquad (7)$$

where $\mathcal{L}_{A}$ is the binary cross-entropy domain classification loss and the labels 1 and 0 denote the source and target domains, respectively.

At the same time, the prediction network parameters $\theta$ are learned to maximize the domain classification loss (i.e., to fool the discriminator) on the target samples using the following optimization objective:

$$\min_{\theta} \; \frac{1}{|\mathcal{T}|} \sum_{x_t} \mathcal{L}_{A}\big(I_t, 1\big). \qquad (8)$$

The hyperparameter $\lambda_{adv}$ weighs the relative importance of the adversarial loss (8) against the supervised objective (3). Our training scheme jointly optimizes the parameters of the prediction network ($\theta$) and the discriminator ($\theta_A$). Both are updated at every training iteration; however, when updating the prediction network, the discriminator parameters are kept fixed. The discriminator parameters are updated separately using the domain classification objective (Eq. 7).
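A schematic training iteration under these objectives is sketched below, assuming a binary cross-entropy domain classifier with source labelled 1 and target labelled 0; it reuses `fused_entropy_map` and `depth_to_bin_probs` from the sketches above, and `supervised_loss_fn` stands in for Eq. (3).

```python
import torch
import torch.nn.functional as F

def set_requires_grad(module, flag):
    for p in module.parameters():
        p.requires_grad_(flag)

def adversarial_step(pred_net, disc, opt_pred, opt_disc, centers,
                     batch_src, batch_tgt, lambda_adv, supervised_loss_fn):
    """One training iteration: update the prediction network, then the discriminator."""
    x_s, y_s, d_s = batch_src
    x_t = batch_tgt

    # 1) Prediction network update (discriminator frozen).
    set_requires_grad(disc, False)
    opt_pred.zero_grad()
    loss_sup = supervised_loss_fn(pred_net, x_s, y_s, d_s)       # Eq. (3), source only
    y_t, r_t, d_t = pred_net(x_t)
    fused_t = fused_entropy_map(y_t, r_t, depth_to_bin_probs(d_t, centers))
    logits_t = disc(fused_t)
    # Fool the discriminator: push target entropy maps towards the "source" label.
    loss_adv = F.binary_cross_entropy_with_logits(logits_t, torch.ones_like(logits_t))
    (loss_sup + lambda_adv * loss_adv).backward()
    opt_pred.step()

    # 2) Discriminator update on detached entropy maps from both domains.
    set_requires_grad(disc, True)
    opt_disc.zero_grad()
    y_sp, r_sp, d_sp = pred_net(x_s)
    fused_s = fused_entropy_map(y_sp, r_sp, depth_to_bin_probs(d_sp, centers)).detach()
    logits_s, logits_t = disc(fused_s), disc(fused_t.detach())
    loss_disc = (F.binary_cross_entropy_with_logits(logits_s, torch.ones_like(logits_s)) +
                 F.binary_cross_entropy_with_logits(logits_t, torch.zeros_like(logits_t)))
    loss_disc.backward()
    opt_disc.step()
```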

3.5 Iterative Self Learning

Following prior work [zou2018unsupervised], we train our network end-to-end using the ISL scheme given in Algorithm 1. We first train the prediction ($\theta$) and discriminator ($\theta_A$) networks for a fixed number of iterations. We then generate semantic pseudo-labels on the target training samples using the trained prediction network.

We further train the prediction network on the target training samples using pseudo-label supervision and a masked cross-entropy loss (Eq. 1), masking out target prediction pixels whose confidence falls below a threshold, for a fixed number of iterations. Instead of training the prediction network with SL only once, we iterate between generating high-confidence pseudo-labels and self-training several times to refine the pseudo-labels, resulting in better-quality semantic output on the target domain.

We show in the ablation studies (Sec. 4.4) that our ISL scheme outperforms simple SL. The discriminator network parameters ($\theta_A$) are kept fixed during self-training.

1: Train the prediction ($\theta$) and discriminator ($\theta_A$) networks on the source and target domains for a fixed number of iterations;
2: for each self-learning round do
3:     Generate pseudo-labels on the target training set using the trained prediction network;
4:     Train the prediction network on the target training set with pseudo-label supervision for a fixed number of iterations;
5: end for
Algorithm 1 ISL
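A sketch of one ISL ingredient, pseudo-label harvesting with confidence masking; the `ignore_index` convention and the threshold handling are our assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_pseudo_labels(pred_net, target_images, conf_threshold, ignore_index=255):
    """Keep only target pixels whose predicted class probability exceeds the threshold."""
    y_hat, _, _ = pred_net(target_images)              # (B, C, H, W) class probabilities
    conf, labels = y_hat.max(dim=1)                    # per-pixel confidence and argmax class
    labels[conf < conf_threshold] = ignore_index       # mask out low-confidence pixels
    return labels                                      # (B, H, W) pseudo-labels

def self_training_loss(pred_net, target_images, pseudo_labels, ignore_index=255):
    """Masked cross-entropy on the target domain using the pseudo-labels (Eq. 1)."""
    y_hat, _, _ = pred_net(target_images)
    log_probs = torch.log(y_hat.clamp_min(1e-12))
    return F.nll_loss(log_probs, pseudo_labels, ignore_index=ignore_index)
```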

3.6 Network Architecture

The shared part of the prediction network consists of a ResNet-101 backbone and a decoder (Fig. 2). The decoder consists of four convolutional layers; its outputs are fused with the backbone output features, and the result is denoted as the “shared feature map”. This shared feature map is then fed to the respective semantics and semantics refinement heads. Following the residual auxiliary block [mordan2018revisiting] (as in [vu2019dada]), we place the depth prediction head between the last two convolutional layers of the decoder. In the supplementary materials, we show that our proposed approach is not sensitive to the residual auxiliary block and performs equally well with a standard multi-task learning network architecture (i.e., a shared encoder followed by multiple task-specific decoders). We adopt the Deeplab-V2 [chen2017deeplab] architectural design with Atrous Spatial Pyramid Pooling (ASPP) for the prediction heads. We use a DC-GAN [radford2015unsupervised] style network as our domain discriminator for adversarial learning.
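For reference, a minimal DeepLab-v2-style ASPP prediction head is sketched below; the dilation rates (6, 12, 18, 24) are the common DeepLab-v2 choice and an assumption here, not necessarily the rates used in the paper.

```python
import torch.nn as nn

class ASPPHead(nn.Module):
    """DeepLab-v2-style ASPP head: parallel dilated 3x3 convolutions, summed."""

    def __init__(self, in_channels, num_outputs, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, num_outputs, kernel_size=3,
                      padding=r, dilation=r, bias=True)
            for r in rates
        ])

    def forward(self, x):
        # Each branch sees a different receptive field; their outputs are summed.
        return sum(branch(x) for branch in self.branches)
```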

4 Experiments

SYNTHIA → Cityscapes (16 classes)
Models Depth road sidewalk building wall* fence* pole* light sign veg sky person rider car bus mbike bike mIoU mIoU*
SPIGAN-no-PI [lee2018spigan] 69.5 29.4 68.7 4.4 0.3 32.4 5.8 15.0 81.0 78.7 52.2 13.1 72.8 23.6 7.9 18.7 35.8 41.2
SPIGAN [lee2018spigan] 71.1 29.8 71.4 3.7 0.3 33.2 6.4 15.6 81.2 78.9 52.7 13.1 75.9 25.5 10.0 20.5 36.8 42.4
AdaptSegnet [tsai2018learning] 79.2 37.2 78.8 - - - 9.9 10.5 78.2 80.5 53.5 19.6 67.0 29.5 21.6 31.3 - 45.9
AdaptPatch [tsai2019domain] 82.2 39.4 79.4 - - - 6.5 10.8 77.8 82.0 54.9 21.1 67.7 30.7 17.8 32.2 - 46.3
CLAN [luo2019taking] 81.3 37.0 80.1 - - - 16.1 13.7 78.2 81.5 53.4 21.2 73.0 32.9 22.6 30.7 - 47.8
Advent [vu2019advent] 87.0 44.1 79.7 9.6 0.6 24.3 4.8 7.2 80.1 83.6 56.4 23.7 72.7 32.6 12.8 33.7 40.8 47.6
DADA [vu2019dada] 89.2 44.8 81.4 6.8 0.3 26.2 8.6 11.1 81.8 84.0 54.7 19.3 79.7 40.7 14.0 38.8 42.6 49.8
Ours (best mIoU) 86.9 43.0 80.7 19.2 0.9 27.2 11.6 12.6 81.3 83.2 60.7 24.0 84.2 46.2 22.0 44.2 45.5 52.4
Ours (confidence) 86.4±0.6 42.5±0.8 80.4±0.2 20.0±2.0 1.0±0.06 27.7±0.3 10.5±0.9 13.3±0.7 80.6±0.4 82.6±0.5 61.0±0.4 23.7±1.2 81.8±2.2 42.9±3.8 21.0±3.2 44.7±2.4 45.0±0.3 51.5±0.4
Table 1: Semantic segmentation performance (IoU and mIoU, %) comparison to the prior art. All models are trained and evaluated using the EP1 protocol. mIoU* is computed on a subset of 13 classes, excluding those marked with *. For our method, we report the results of the run giving the best mIoU, as well as the 68% confidence interval over five runs as mean±std.
(a) SYNTHIA → Cityscapes (7 classes) (b) SYNTHIA → Mapillary (7 classes)
Res. Model Depth flat const object nature sky human vehicle mIoU flat const object nature sky human vehicle mIoU

SPIGAN-no-PI [lee2018spigan] 90.3 58.2 6.8 35.8 69.0 9.5 52.1 46.0 53.0 30.8 3.6 14.6 53.0 5.8 26.9 26.8
SPIGAN [lee2018spigan] 91.2 66.4 9.6 56.8 71.5 17.7 60.3 53.4 74.1 47.1 6.8 43.3 83.7 11.2 42.2 44.1
Advent [vu2019advent] 86.3 72.7 12.0 70.4 81.2 29.8 62.9 59.4 82.7 51.8 18.4 67.8 79.5 22.7 54.9 54.0
DADA [vu2019dada] 89.6 76.0 16.3 74.4 78.3 43.8 65.7 63.4 83.8 53.7 20.5 62.1 84.5 26.6 59.2 55.8
Ours (best mIoU) 90.8 77.5 15.7 77.1 82.9 45.3 68.6 65.4 86.6 57.4 19.7 73.0 87.5 45.1 68.1 62.5
Ours (confidence) 90.1±0.5 76.7±0.4 15.7±0.9 76.3±0.7 82.2±1.1 44.1±2.3 68.2±1.0 64.7±0.5 86.8±0.3 58.6±0.7 17.0±2.3 70.8±1.4 88.9±0.8 44.8±2.7 67.9±0.9 62.1±0.4

Full

Advent [vu2019advent] 89.6 77.8 22.1 76.3 81.4 54.7 68.7 67.2 86.9 58.8 30.5 74.1 85.1 48.3 72.5 65.2
DADA [vu2019dada] 92.3 78.3 25.0 75.5 82.2 58.7 72.4 69.2 86.7 62.1 34.9 75.9 88.6 51.1 73.8 67.6
Oracle (only-target) 97.6 87.9 46.0 87.9 88.8 69.1 88.6 80.8 95.0 84.2 54.8 87.7 97.2 70.2 87.5 82.4
Ours (best mIoU) 92.4 80.7 27.7 78.1 83.6 59.0 78.6 71.4 88.5 59.2 27.8 79.4 85.7 64.4 79.6 69.2
Ours (confidence) 92.2±0.3 80.8±0.1 27.0±0.9 78.6±0.8 84.9±1.2 54.5±3.2 78.2±1.3 70.8±0.4 88.4±0.1 58.6±0.7 29.0±0.8 79.8±0.4 85.0±0.9 63.2±1.3 79.0±0.4 69.0±0.1
The correct mean of class IoU values in Table 2 of [vu2019dada].
Table 2: Semantic segmentation performance (IoU and mIoU, %) comparison to the prior art. All models are trained and evaluated using the EP2 and EP3 protocols at different resolutions, as indicated in the resolution (“Res.”) column. For our method, we report the results of the run giving the best mIoU, as well as the 68% confidence interval over five runs as mean±std.

4.1 UDA Benchmarks

We use three standard UDA evaluation protocols (EPs) to validate our model: EP1: SYNTHIA → Cityscapes (16 classes), EP2: SYNTHIA → Cityscapes (7 classes), and EP3: SYNTHIA → Mapillary (7 classes). A detailed explanation of these settings can be found in [vu2019dada]. In all settings, the SYNTHIA dataset [ros2016synthia] is used as the synthetic source domain. In particular, we use the SYNTHIA-RAND-CITYSCAPES split consisting of 9,400 synthetic images and their corresponding pixel-wise semantic and depth annotations. For target domains, we use the Cityscapes [cordts2016cityscapes] and Mapillary Vistas [neuhold2017mapillary] datasets. Following EP1, we train models on the 16 classes common to SYNTHIA and Cityscapes; in EP2 and EP3, models are trained on the 7 classes common to SYNTHIA, Cityscapes, and Mapillary. We use intersection-over-union to evaluate segmentation: IoU (class-IoU) and mIoU (mean-IoU). To promote reproducibility and emphasize the significance of our results, we report two outcomes: the best mIoU, and the confidence interval. The latter is denoted as mean±std collected over five runs, thus describing a 68% confidence interval centered at the mean (class-IoU values of the “best mIoU” run can be lower than the mean of the class confidence interval at the expense of other classes’ performance). For depth, we use the Absolute Relative Difference (Abs Rel), Squared Relative Difference (Sq Rel), Root Mean Squared Error (RMS) and its log-variant (LRMS), and the accuracy metrics [eigen2014depth] δ1, δ2, δ3. For each metric, ↓ and ↑ denote the improvement direction.
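For completeness, the standard depth metrics of [eigen2014depth] can be computed as in the sketch below, assuming invalid (non-positive) ground-truth pixels have already been masked out.

```python
import torch

def depth_metrics(pred, gt):
    """Standard monocular depth metrics [eigen2014depth] on valid (positive) pixels."""
    abs_rel = ((pred - gt).abs() / gt).mean()
    sq_rel = ((pred - gt) ** 2 / gt).mean()
    rmse = torch.sqrt(((pred - gt) ** 2).mean())
    log_rmse = torch.sqrt(((torch.log(pred) - torch.log(gt)) ** 2).mean())
    ratio = torch.max(pred / gt, gt / pred)               # per-pixel max ratio
    delta = [(ratio < 1.25 ** k).float().mean() for k in (1, 2, 3)]
    return {"abs_rel": abs_rel, "sq_rel": sq_rel, "rmse": rmse,
            "log_rmse": log_rmse, "d1": delta[0], "d2": delta[1], "d3": delta[2]}
```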

4.2 Experimental Setup

All our experiments are implemented in PyTorch [paszke2017automatic]. The backbone network is a ResNet-101 [he2016deep] initialized with ImageNet [deng2009imagenet] weights. The prediction and discriminator networks are optimized with SGD [bottou2010large] and Adam [kingma2014adam], respectively. Throughout our experiments, we keep the loss-weighting hyperparameters fixed. For generating depth bins, we use a fixed depth range (in meters) and a fixed number of bins. In all ISL experiments, the parameters of Algorithm 1 are kept fixed across runs. A link to the project page with the source code is given in the Abstract.

Figure 3: Qualitative semantic segmentation results with EP1: SYNTHIA → Cityscapes (16 classes). (a) Images from the Cityscapes validation set; (b) ground truth annotations; (c) DADA [vu2019dada] predictions; (d) our model predictions. Our method demonstrates notable improvements over [vu2019dada] on the “bus”, “person”, “motorbike”, and “bicycle” classes, as highlighted by the yellow boxes.
Figure 4: Qualitative semantic segmentation results with EP3: SYNTHIA → Mapillary (7 classes). Top: images from the Mapillary validation set; Middle: ground truth annotations; Bottom: our model predictions.

4.3 Comparison to Prior Art

4.3.1 EP1

Table 1 reports the semantic segmentation performance of our proposed model trained and evaluated following EP1. For a fair comparison with [tsai2018learning, tsai2019domain, luo2019taking], we report results on both the 13-class and the standard 16-class settings. Our method achieves state-of-the-art performance in EP1 on both 16 and 13 classes, outperforming [vu2019dada, lee2018spigan] by large margins. We can identify the major class-specific improvements of our method over the state-of-the-art DADA [vu2019dada]. The major gains come from the following classes: “wall” (+12.4%), “motorbike” (+8%), “person” (+6%), “bicycle” (+5.4%), and “rider” (+4.7%). Moreover, our method shows consistent improvements on classes underrepresented in the target domain: “light” (+3%), “sign” (+1.5%), “bicycle” (+5.4%), and “motorbike” (+8%). Fig. 3 shows a qualitative comparison of our method with DADA [vu2019dada]. Note that our model delineates small objects like “human”, “bicycle”, and “motorbike” more accurately than DADA.

4.3.2 EP2 and EP3

Table 2 presents the semantic segmentation results on the EP2 and EP3 benchmarks. The models are evaluated on the Cityscapes and Mapillary validation sets on their common classes. We also train and evaluate our model at the lower resolution to obtain a fair comparison with the reference low-resolution models. The proposed method outperforms the prior works on the EP2 and EP3 benchmarks in both the full- and low-resolution settings. We further show in Sec. 4.5 that our approach achieves state-of-the-art performance without ISL in EP2 and EP3 in both full- and low-resolution settings. The proposed CTRL coupled with SRH demonstrates consistent improvements over three challenging benchmarks by capitalizing on the inherent semantic and depth correlations. In EP2 and EP3, our models show noticeable improvements over the state-of-the-art [vu2019dada], with mIoU gains of +2.2% (EP2-full-res), +2.0% (EP2-low-res), +1.6% (EP3-full-res), and +6.7% (EP3-low-res). Despite the challenging domain gap between SYNTHIA and Mapillary, our model shows a significant improvement (+6.7%) in the low-resolution setting, which suggests robustness to scale changes.

Conf SemSup DepSup SRHSup SemAdv DepAdv SRHAdv SL ISL mIoU (%)
30.7
35.2
33.7
33.1
40.8
40.2
39.5
42.1
44.1
42.8
45.5
Table 3: Ablation study of our method from Sec. 4.4.

4.4 Ablation Studies

A comprehensive ablation study is reported in Table 3. We trained eleven models, each having a different configuration; they correspond to the rows of Table 3 from top to bottom. We use the following shortcuts in Table 3 to represent different combinations of settings: “Sem” – semantic, “Dep” – depth, “Sup” – supervision, “Adv” – adversarial, and “Conf” – configuration. The first four configurations denote supervised learning settings without any adversarial training; these models are trained on the SYNTHIA dataset and evaluated on the Cityscapes validation set. The next configurations denote different combinations of supervised and adversarial losses on the semantics, depth, and semantics refinement heads, followed by the proposed model with CTRL but without ISL, and finally models trained with SL or ISL, with or without SRH. These configurations follow the EP1 protocol: the SYNTHIA → Cityscapes UDA training and evaluation setting.

The first configuration is trained using semantic label supervision without any depth information or adversarial learning. By enabling parts of the model and the training procedure, we observed the following tendencies: depth supervision (either direct or through SRH) improves performance; however, adding SRH on top of the depth head in the purely supervised setting does not bring further improvements. Entropy-map domain alignment in the semantics feature space [vu2019advent] is effective, whereas domain alignment in the depth or refined-semantics feature spaces alone does not bring further gains. Combining depth and SRH with task-specific semantics (i.e., our CTRL model) improves performance. SL brings further improvement, but not as much as our ISL training scheme, and the SRH contributes positively to the overall model performance. Finally, we achieve state-of-the-art segmentation results (mIoU 45.5%) by combining the proposed CTRL, SRH, and ISL.

Model S → C (FR) S → M (FR) S → C (LR) S → M (LR)
DADA [vu2019dada] 69.2 67.6 63.4 55.8
Ours (best mIoU) 70.6 67.6 64.9 58.8
Ours (confidence) 69.5±0.6 66.5±0.6 64.3±0.4 58.5±0.4
Table 4: Effectiveness of the joint feature space learned by our method (w/o ISL) for robust domain alignment. Performance in mIoU; legend for “Ours” as in Table 2.
Model Abs Rel ↓ Sq Rel ↓ RMS ↓ LRMS ↓ δ1 ↑ δ2 ↑ δ3 ↑
DADA [vu2019dada] 0.6 10.8 17.0 4.4 0.14 0.28 0.41
Ours 0.3 6.3 14.8 0.6 0.30 0.58 0.77
Table 5: Improvement over the state-of-the-art [vu2019dada] in monocular depth estimation. The models are trained following the SYNTHIA → Cityscapes (16 classes) UDA setting w/o ISL and evaluated on the Cityscapes validation set.

4.5 Additional Experimental Analysis

4.5.1 Effectiveness of the Joint UDA Feature Space

This section analyzes the effectiveness of the joint feature space learned by the CTRL for unsupervised domain alignment. We train and evaluate our CTRL model without ISL on two UDA benchmarks: (a) EP2: SYNTHIA → Cityscapes, 7 classes (S → C) and (b) EP3: SYNTHIA → Mapillary, 7 classes (S → M), in both full- and low-resolution (FR and LR) settings. In Table 4, we show the segmentation performance of our model in these four benchmark settings and compare it against the state-of-the-art DADA model [vu2019dada]. In three out of the four settings, the proposed CTRL model (w/o ISL) outperforms the DADA model, with mIoU gains of +1.4%, +1.5%, and +3.0%, attesting to the effectiveness of the joint feature space learned by the proposed CTRL.

In addition, we train both DADA and our model with ISL and notice mIoU improvements for both models. The superior quality of our model’s predictions, when used as pseudo-labels, provides better supervision for the target semantics; the same can be observed in both our quantitative (Tables 1 and 2) and qualitative results (Figs. 3 and 4).

4.5.2 Monocular Depth Estimation Results

In this section, we show that our model not only improves semantic segmentation but also learns a better representation for monocular depth estimation. This intriguing property is of great importance for multi-task learning. According to [mordan2018revisiting], paying too much attention to depth is detrimental to segmentation performance; following this observation, DADA [vu2019dada] uses depth purely as auxiliary supervision. We observed that the depth predictions of [vu2019dada] are noisy (also acknowledged by its authors), resulting in failure cases. We conjecture that a proper architectural design coupled with a robust multi-tasking feature representation (encoding task-specific and cross-task relationships) improves both semantics and depth. In Table 5, we report the depth estimation results of our method on the Cityscapes validation set and compare them against the DADA model [vu2019dada]. Training and evaluation are done following the EP1 protocol: SYNTHIA → Cityscapes (16 classes). We use Cityscapes disparity maps as ground-truth depth pseudo-labels for evaluation. Table 5 demonstrates a consistent improvement of our depth predictions over [vu2019dada].

5 Conclusion

We proposed a novel approach to semantic segmentation and monocular depth estimation within a UDA context. The main highlights of this work are: (1) a Cross-Task Relation Layer (CTRL) that learns a joint feature space for domain alignment; the joint space encodes both task-specific features and cross-task dependencies shown to be useful for UDA; (2) a Semantics Refinement Head (SRH) that aids in learning task correlations; (3) a depth discretization technique that facilitates learning the distinctive relationships between semantic classes and depth levels; (4) a simple yet effective Iterative Self-Learning (ISL) scheme that further improves the model’s performance by capitalizing on high-confidence predictions in the target domain. Our comprehensive experimental analysis demonstrates that the proposed method consistently outperforms prior works on three challenging UDA benchmarks by a large margin.

Acknowledgments. The authors gratefully acknowledge the support by armasuisse. We thank Amazon Activate for EC2 credits and the anonymous reviewers for the valuable feedback and time spent.

 

Learning to Relate Depth and Semantics for Unsupervised Domain Adaptation
Supplementary Materials


 

Figure 5: Overview of the UDA setting

In this document, we provide supplementary materials for our main paper submission. First, Sec. 6 provides a bird’s-eye view of the assumed UDA setting and how CTRL fits into it. The main paper reported our experimental results using three standard UDA evaluation protocols (EPs) where the SYNTHIA dataset [ros2016synthia] is used as the synthetic domain. To demonstrate our proposed method’s effectiveness in an entirely new UDA setting, in Sec. 7 we report semantic segmentation results of our method on a new EP: Virtual KITTI → KITTI. In this setup, we use the synthetic Virtual KITTI [gaidon2016virtual] as the source domain and the real KITTI [geiger2013vision] as the target domain. We show that our proposed method consistently outperforms the state-of-the-art DADA method [vu2019dada] when evaluated on this new EP with different synthetic and real domains. In Sec. 8, we present a t-SNE [tsne] plot comparing our method with [vu2019dada]. We also share additional qualitative results on SYNTHIA → Cityscapes (16 classes). Sec. 9 details our network design. To demonstrate that the proposed CTRL is not sensitive to a particular network design (in our case, the residual auxiliary block [mordan2018revisiting]), we train a standard multi-task learning network architecture (i.e., a shared encoder followed by multiple task-specific decoders without any residual auxiliary block) with CTRL and observe a similar improvement trend over the baselines. This set of experiments and the results are discussed in Sec. 10.

6 Overview of the UDA setting

Unsupervised Domain Adaptation (UDA) aims at training high-performance models with no label supervision on the target domain. As seen in Fig. 5, label supervision is applied only to the source domain predictions, whereas tuning the model to perform well on the target domain is the task of adversarial supervision. Since both types of supervision are applied within the same training protocol, adversarial supervision is responsible for teaching the model the specifics of the target domain by bridging the domain gap. When dealing with multi-modal predictions, it is crucial to correctly choose the joint feature space subject to adversarial supervision. CTRL provides such a rich feature space, which allows training much better models using the same training protocols. This allows us to leverage the abundance of samples in the synthetic source domain and produce high-quality predictions in the real target domain.

7 Virtual KITTI → KITTI

Following [chen2019learning], we train and evaluate our model on the 10 classes common to Virtual KITTI and KITTI. In KITTI, ground-truth labels are only available for the training set; thus, we use the official unlabelled test images for domain alignment and report results on the official training set, following [chen2019learning]. The model is trained on the annotated training samples of Virtual KITTI and the unannotated samples of KITTI. For this experiment, we train our model without (w/o) ISL. Table 6 reports the semantic segmentation performance (mIoU, %) of our approach. Our model outperforms DADA [vu2019dada], with significant gains coming from the following classes: “sign” (+8.1%), “pole” (+5.7%), “building” (+2.7%), and “light” (+1.9%). Notably, these classes are highly relevant to an autonomous driving scenario. In Figure 7, we present qualitative results of the DADA and our models trained following the new Virtual KITTI → KITTI UDA protocol.

VKITTI → KITTI (10 classes)
Models Depth road building pole light sign veg terrain sky car truck mIoU
Chen [chen2019learning] 81.4 71.2 11.3 26.6 23.6 82.8 56.5 88.4 80.1 12.7 53.5
DADA [vu2019dada] 90.9 76.2 12.4 30.3 30.8 73.5 24.1 88.4 86.8 17.2 53.0
Ours (w/o ISL) 90.9 78.9 18.1 32.2 38.9 73.7 22.0 88.2 86.2 16.7 54.6
Table 6: Semantic segmentation performance (IoU and mIoU, higher is better, %) comparison to the prior art. All models are trained and evaluated using the Virtual KITTI → KITTI UDA evaluation protocol.
Table 7: Semantic segmentation performance (mIoU) of two variants of the proposed model. Both models outperform DADA [vu2019dada], attesting to the robustness of the features learned by the proposed CTRL.
UDA Protocol DADA Ours* Ours
S → C, 16 cls 42.6 43.7±0.2 45.0±0.3
S → C (LR), 7 cls 63.4 63.8±0.5 64.7±0.5
S → M (LR), 7 cls 55.8 61.5±0.6 62.1±0.4
S → C (FR), 7 cls 69.2 71.3±0.5 70.8±0.4
S → M (FR), 7 cls 67.6 70.1±0.5 69.0±0.1
Figure 6: t-SNE comparison of features learned by DADA [vu2019dada] and CTRL. CTRL leads to a more structured feature space and better class separation in the target domain. Circled classes have better separation than in the other method.
Figure 7: Qualitative semantic segmentation results with the VKITTI → KITTI (10 classes) UDA evaluation protocol. (a) Input images from the target domain KITTI; (b) ground truth annotations; (c) DADA [vu2019dada] predictions; (d) our model predictions. We follow the Cityscapes color encoding scheme to colorize the label maps.

8 SYNTHIA → Cityscapes

This section presents a t-SNE [tsne] plot of the feature embeddings learned by the proposed model guided by CTRL and by [vu2019dada]. Fig. 6 shows the 10 top-scoring classes of each method; distinct classes are circled. As can be seen from the figure, CTRL leads to a more structured feature space, which concurs with the analysis in our main paper. Both models are trained and evaluated following the UDA protocol SYNTHIA → Cityscapes (16 classes). Furthermore, we present additional qualitative results of our model for semantic segmentation and monocular depth estimation. Figures 8 and 9 show qualitative comparisons of our method with [vu2019dada]. Note that our proposed method has higher spatial acuity in delineating small objects like “human”, “bicycle”, and “person” compared to [vu2019dada]. Figure 10 shows some qualitative monocular depth estimation results.

Figure 8: Qualitative semantic segmentation results with EP1: SYNTHIA → Cityscapes (16 classes). (a) Images from the Cityscapes validation set; (b) ground truth annotations; (c) DADA [vu2019dada] predictions; (d) our model predictions. Our method demonstrates notable improvements over [vu2019dada] on the “bus”, “person”, and “bicycle” classes, as highlighted by the yellow boxes.
Figure 9: Qualitative semantic segmentation results with EP1: SYNTHIA → Cityscapes (16 classes). (a) Images from the Cityscapes validation set; (b) ground truth annotations; (c) DADA [vu2019dada] predictions; (d) our model predictions. Our method demonstrates notable improvements over [vu2019dada] on the “bus”, “person”, and “bicycle” classes, as highlighted by the yellow boxes.
Figure 10: Qualitative monocular depth estimation results with EP1: SYNTHIA → Cityscapes (16 classes). (a) Images from the Cityscapes validation set; (b) ground truth annotations; (c) DADA [vu2019dada] predictions; (d) our model predictions.

9 Network Architecture Design

The shared part of the semantic and depth prediction network consists of a ResNet-101 backbone and a decoder. The decoder consists of four convolutional layers, each followed by a Rectified Linear Unit (ReLU). The decoder outputs a feature map that is shared among the semantics and depth heads. This shared feature map is fed forward to the respective semantic segmentation, monocular depth estimation, and semantics refinement heads. For the task-specific and task-refinement heads, we use Atrous Spatial Pyramid Pooling (ASPP) following the Deeplab-V2 [chen2017deeplab] architecture. Our DC-GAN [radford2015unsupervised] based domain discriminator takes as input a feature map with channel dimension 2C + K, where C is the number of semantic classes and K is the number of depth levels.
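A minimal fully convolutional domain discriminator in the DC-GAN spirit is sketched below; the number of layers and channel widths are assumptions, and only the 2C + K input channel count follows the text.

```python
import torch.nn as nn

def build_discriminator(in_channels, base_width=64):
    """Patch-level domain classifier over the fused (2C + K)-channel entropy map."""
    layers, ch = [], in_channels
    for width in (base_width, base_width * 2, base_width * 4, base_width * 8):
        layers += [nn.Conv2d(ch, width, kernel_size=4, stride=2, padding=1),
                   nn.LeakyReLU(0.2, inplace=True)]
        ch = width
    layers.append(nn.Conv2d(ch, 1, kernel_size=4, stride=2, padding=1))  # domain logits
    return nn.Sequential(*layers)
```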

10 Robustness to Different Network Design

Our proposed model adopts the residual auxiliary block [mordan2018revisiting] (as in [vu2019dada]), which was originally proposed to tackle a particular MTL setup in which the objective was to improve one primary task by leveraging several other auxiliary tasks. However, unlike [vu2019dada], which does not have a dedicated decoder for depth, we introduce a DeepLab-V2 decoder for depth estimation to improve the performance of both tasks. Our qualitative and quantitative experimental results show an improvement in depth estimation performance over [vu2019dada]. Furthermore, we are interested in the proposed model’s performance when used with a standard MTL architecture (a common encoder followed by multiple task-specific decoders without any residual auxiliary blocks). To this end, we make the necessary changes to our existing network design to obtain a standard MTL network design and train it following the UDA protocols. The details of our experimental analysis are given below.

For the standard MTL model (denoted as “Ours*” in Table 7), the depth head is placed after the shared feature extractor, which consists of a ResNet backbone and a decoder network (see Fig. 2). For the second model with the residual auxiliary block (denoted as “Ours”), we position the depth head after the decoder’s third convolutional layer. The semantic segmentation performance of these two variants of the proposed model is shown in Table 7. Both models are evaluated on five different UDA protocols and outperform the state-of-the-art DADA [vu2019dada] results. The results show that our proposed CTRL is not sensitive to architectural changes and can be used with standard encoder-decoder MTL frameworks. Our findings may be beneficial for the domain-adaptive MTL community, e.g., in answering the question of whether learning additional complementary tasks (surface normals, instance segmentation) helps domain alignment.

References