A Fully Convolutional Tri-branch Network (FCTN) for Domain Adaptation

11/10/2017
by   Junting Zhang, et al.
University of Southern California

A domain adaptation method for urban scene segmentation is proposed in this work. We develop a fully convolutional tri-branch network (FCTN) in which two branches assign pseudo labels to images in the unlabeled target domain, while the third branch is trained with supervision on these pseudo-labeled target images. The re-labeling and re-training processes alternate. With this design, the tri-branch network progressively learns target-specific discriminative representations and, as a result, the cross-domain capability of the segmenter improves. We evaluate the proposed network on large-scale domain adaptation experiments using both synthetic (GTA) and real (Cityscapes) images. It is shown that our solution achieves state-of-the-art performance, outperforming previous methods by a significant margin.



1 Introduction

Semantic segmentation of urban scenes is an important yet challenging task for a variety of vision-based applications, including autonomous driving and smart surveillance systems. With the success of convolutional neural networks (CNNs), numerous fully-supervised semantic segmentation solutions have been proposed in recent years [1, 2]. To achieve satisfactory performance, these methods demand a sufficiently large dataset with pixel-level labels for training. However, creating such large datasets is prohibitively expensive, as it requires human annotators to accurately trace segment boundaries. Furthermore, it is difficult to collect traffic scene images with sufficient variation in lighting conditions, weather, cities and driving routes.

To overcome the above-mentioned limitations, one can utilize modern urban scene simulators to automatically generate a large amount of synthetic images with pixel-level labels. However, this introduces another problem, i.e., the distribution mismatch between the source domain (synthesized data) and the target domain (real data). Even if we synthesize images with state-of-the-art simulators [3, 4], a visible appearance discrepancy still exists between the two domains. As a result, the testing performance in the target domain of a network trained solely on source domain images is severely degraded. The domain adaptation (DA) technique was developed to bridge this gap. It is a special case of transfer learning that leverages labeled data in the source domain to learn a robust classifier for unlabeled data in the target domain. DA methods for object classification face several challenges, such as shifts in lighting and variations in object appearance and pose. There are even more challenges in DA methods for semantic segmentation because of variations in scene layout, object scales and class distributions across images. Many successful domain-alignment-based methods work for DA-based classification but not for DA-based segmentation. Since it is not clear what comprises data instances in a deep segmenter [5], DA-based segmentation is still far from maturity.

In this work, we propose a novel fully convolutional tri-branch network (FCTN) to solve the DA-based segmentation problem. In the FCTN, two labeling branches are used to generate pseudo segmentation ground-truth for unlabeled target samples while the third branch learns from these pseudo-labeled target samples. An alternating re-labeling and re-training mechanism is designed to improve the DA performance in a curriculum learning fashion. We evaluate the proposed method using large-scale synthesized-to-real urban scene datasets and demonstrate substantial improvement over the baseline network and other benchmarking methods.

Figure 1: An overview of the proposed fully convolutional tri-branch network (FCTN). It has one shared base network followed by three branches of the same architecture. Two labeling branches assign pseudo labels to images in the unlabeled target domain, while the third, target-specific branch is trained with supervision from the pseudo-labeled target images.

2 Related Work

The current literature on visual domain adaptation mainly focuses on image classification [6]. Inspired by shallow DA methods, one common intuition behind deep DA methods is that adaptation can be achieved by matching the distribution of features across domains. Most deep DA methods follow a siamese architecture with two streams representing the source and target models. They aim to obtain domain-invariant features by jointly minimizing the divergence of features between the two domains and a classification loss [7, 8, 9, 10], where the classification loss is evaluated in the source domain with labeled data only. However, these methods assume the existence of a universal classifier that performs well on samples drawn from either domain. This assumption tends to fail since the class correspondence constraint is rarely imposed in the domain alignment process. Without such an assumption, feature distribution matching may not lead to classification improvement in the target domain. The ATDA method proposed in [11] avoids this assumption by employing asymmetric tri-training. It progressively assigns pseudo labels to unlabeled target samples and learns from them in a curriculum learning paradigm. This paradigm has been proven effective in weakly-supervised learning tasks [12] as well.

There is much less previous work on DA-based segmentation. Hoffman et al. [13] consider each spatial unit in an activation map of a fully convolutional network (FCN) as an instance, and extend the idea in [9] to achieve two objectives: 1) minimizing the global distance between the two domains using fully convolutional adversarial training, and 2) enhancing category-wise adaptation capability via multiple instance learning. The adversarial training aims to align intermediate features from the two domains. It implies the existence of a single good mapping from the domain-invariant feature space to the correct segmentation mask. To avoid this condition, Zhang et al. [5] proposed to first predict the class distribution over the entire image and over some representative superpixels in the target domain, and then use the predicted distributions to regularize network training. In this work, we avoid the single-good-mapping assumption and build on the remarkable success of the ATDA method [11]. In particular, we develop a curriculum-style method that improves cross-domain generalization for better performance in DA-based segmentation.

3 Proposed Domain Adaptation Network

The proposed fully convolutional tri-branch network (FCTN) model for cross-domain semantic segmentation is detailed in this section. The labeled source domain training set is denoted by S = {(x_i^s, y_i^s)} and the unlabeled target domain training set by T = {x_i^t}, where x_i is an image, y_i is its ground-truth segmentation mask, and N_s and N_t are the sizes of the training sets of the two domains, respectively.

3.1 Fully Convolutional Tri-branch Network Architecture

An overview of the proposed FCTN architecture is illustrated in Fig. 1. It is a fully convolutional network that consists of a shared base network followed by three branch networks. The first two are labeling branches. They accept the deep features extracted by the shared base net as input and predict the semantic label of each pixel in the input image. Although the three branches share the same architecture, their roles and functions are not identical. The two labeling branches generate pseudo labels for target images based on their predictions, and they learn from both labeled source images and pseudo-labeled target images. In contrast, the third branch is a target-specific branch that learns from pseudo-labeled target images only.

We use DeepLab-LargeFOV (also known as DeepLab v1) [14] as the reference model due to its simplicity and superior performance in the semantic segmentation task. DeepLab-LargeFOV is a re-purposed VGG-16 [15] network with dilated convolutional kernels. The shared base network contains 13 convolutional layers, while each of the three branch networks is formed by three convolutional layers converted from the fully connected layers of the original VGG-16 network. Although DeepLab-LargeFOV is adopted here, any effective FCN-based semantic segmentation framework can be used in the proposed FCTN architecture as well.

3.2 Encoding Explicit Spatial Information

Inspired by PFN [16], we attach the pixel coordinates as additional feature maps to the last layer of the shared base net. The intuition is that urban traffic scene images have a structured layout, and certain classes usually appear at similar locations in images. However, a CNN is translation-invariant by nature; that is, it makes predictions based on patch features regardless of the patch location in the original image. Assume that the last layer of the shared base net has a feature map of size H × W × D, where H, W and D are the height, width and depth of the feature map, respectively. We generate two spatial coordinate maps X and Y of size H × W, whose values for the pixel at location (x, y) are set to x/W and y/H, respectively. We concatenate the spatial coordinate maps X and Y to the original feature maps along the depth dimension, so the output feature maps are of dimension H × W × (D + 2). By incorporating the spatial coordinate maps, the FCTN can learn more location-aware representations.
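The coordinate-map construction described above can be sketched in a few lines of NumPy. This is an illustration only: the function name is ours, and the exact normalization (column index over width, row index over height) is assumed from the description.

```python
import numpy as np

def append_coordinate_maps(features):
    """Append normalized x/y coordinate maps to a feature tensor.

    `features` has shape (H, W, D); the output has shape (H, W, D + 2).
    The x map holds the column index divided by W, the y map the row
    index divided by H, so the network sees where each patch lies.
    """
    h, w, _ = features.shape
    xs = np.tile(np.arange(w, dtype=np.float32) / w, (h, 1))           # (H, W)
    ys = np.tile(np.arange(h, dtype=np.float32)[:, None] / h, (1, w))  # (H, W)
    return np.concatenate([features, xs[..., None], ys[..., None]], axis=-1)
```

In a real network, the same concatenation would be applied to the last feature map of the shared base net before the three branches.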

3.3 Assigning Pseudo Labels to Target Images

Inspired by the ATDA method [11], we generate pseudo labels by feeding images from the target domain training set through the FCTN and collecting the predictions of both labeling branches. For each input image, we assign a pseudo label to a pixel if two conditions are satisfied: 1) the classifiers of the two labeling branches agree in their predicted label for this pixel, and 2) the higher confidence score of the two predictions exceeds a preset threshold. In practice, the confidence threshold is set very high (0.95 in our implementation), because the use of many inaccurate pseudo labels tends to mislead the subsequent network training. In this way, high-quality pseudo labels for target images guide the network to learn target-specific discriminative features. The pseudo-labeled target domain training set consists of the target images paired with their partially pseudo-labeled segmentation masks. Some sample pseudo-labeled segmentation masks are shown in Fig. 2. In the subsequent training, the not-yet-labeled pixels are simply ignored in the loss computation.
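The two acceptance conditions above can be sketched as follows. This is a minimal NumPy illustration; the function name and the use of 255 as an ignore value are our own conventions, not part of the method description.

```python
import numpy as np

IGNORE = 255  # hypothetical marker for pixels left without a pseudo label

def assign_pseudo_labels(prob1, prob2, threshold=0.95):
    """Assign a pseudo label where the two labeling branches agree and the
    higher of their two confidence scores exceeds `threshold`.

    prob1, prob2: (H, W, C) per-pixel class probabilities from the branches.
    Returns an (H, W) map of class indices, with IGNORE elsewhere.
    """
    pred1, pred2 = prob1.argmax(-1), prob2.argmax(-1)
    conf = np.maximum(prob1.max(-1), prob2.max(-1))   # higher of the two scores
    accept = (pred1 == pred2) & (conf > threshold)    # both conditions hold
    labels = np.full(pred1.shape, IGNORE, dtype=np.int64)
    labels[accept] = pred1[accept]
    return labels
```

Pixels marked IGNORE would simply be masked out of the cross-entropy loss during re-training.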

Figure 2: Illustration of pseudo labels used in the two rounds of curriculum learning in the GTA-to-Cityscapes DA experiments. The first row shows the input images. The second row shows the ground-truth segmentation masks. The third and fourth rows show the pseudo labels used in the first and second rounds of curriculum learning, respectively. In the visualization of pseudo labels, white pixels indicate unlabeled pixels. Best viewed in color.

3.4 Loss Function

Weight-Constrained Loss. As suggested in the standard tri-training algorithm [17], the three classifiers must be diverse; otherwise, the training degenerates to self-training. In our case, one crucial requirement for obtaining high-quality pseudo labels from the two labeling branches is that they have different views of each sample and make decisions on their own.

Unlike the case of the co-training algorithm [18], where one can explicitly partition features into different sufficient and redundant views, it is not clear how to partition deep features effectively in our case. Instead, we encourage divergence between the weights of the convolutional layers of the two labeling branches by minimizing their cosine similarity. This yields the following filter weight-constrained loss term:

L_w = (w_1 · w_2) / (‖w_1‖ ‖w_2‖),    (1)

where w_1 and w_2 are obtained by flattening and concatenating the weights of the convolutional filters in the convolutional layers of the two labeling branches, respectively.

Weighted Pixel-wise Cross-entropy Loss. In the curriculum learning stage, each minibatch contains samples drawn half from the source domain training set and half from the pseudo-labeled target domain training set, and the segmentation losses are calculated separately for each half. For the source domain samples, we use the vanilla pixel-wise softmax cross-entropy loss as the segmentation loss function.

Furthermore, as mentioned in Sec. 3.3, we assign pseudo labels to target domain pixels based on the predictions of the two labeling branches. This mechanism tends to assign pseudo labels to prevalent and easy-to-predict classes, such as road, building, etc., especially in the early stage (as can be seen in Fig. 2). Thus, the pseudo labels can be highly imbalanced across classes. If we treat all classes equally, the gradients from challenging and relatively rare classes become insignificant, and the training is biased toward prevalent classes. To remedy this, we use a weighted cross-entropy loss for the target domain samples. We calculate the weights using the median frequency balancing scheme [19], in which the weight assigned to class c in the loss function is

w_c = median_freq / freq(c),    (2)

where freq(c) is the number of pixels of class c divided by the total number of pixels in the source domain images in which c is present, and median_freq is the median of the frequencies freq(c) over all C classes, with C the total number of classes. This scheme works well under the assumption that the global class distributions of the source domain and the target domain are similar.
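The median frequency balancing computation can be sketched as follows; a NumPy illustration under the definitions above (the function name is ours, and masks are assumed to be integer label maps, one per source image):

```python
import numpy as np

def median_frequency_weights(label_masks, num_classes):
    """Per-class loss weights via median frequency balancing [19].

    freq(c) = (#pixels of class c) / (#pixels of images where c appears);
    weight(c) = median of all class frequencies / freq(c).
    Classes never observed get weight 0.
    """
    pixels = np.zeros(num_classes)    # pixel count per class
    present = np.zeros(num_classes)   # total pixels of images containing the class
    for mask in label_masks:
        counts = np.bincount(mask.ravel(), minlength=num_classes)
        pixels += counts
        present[counts > 0] += mask.size
    freq = np.where(present > 0, pixels / np.maximum(present, 1), 0.0)
    median_freq = np.median(freq[freq > 0])
    return np.where(freq > 0, median_freq / np.maximum(freq, 1e-12), 0.0)
```

Rare classes thus receive weights above 1 and prevalent ones below 1, which keeps their gradients comparable in magnitude.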

Total Loss Function. There are two stages in our training procedure. We first pre-train the entire network using minibatches from the source domain training set so as to minimize the objective function

L = L_s + λ_w L_w.    (3)

Once the curriculum learning starts, the overall objective function becomes

L = L_s + λ_t L_t + λ_w L_w,    (4)

where the source segmentation loss L_s is evaluated on the labeled source training set and averaged over the predictions of the two labeling branches, the weighted target segmentation loss L_t is evaluated on the pseudo-labeled target training set and averaged over the predictions of all three top branches, and λ_w and λ_t are hyper-parameters determined on the validation split.

Model     | road  sidewlk bldg. wall  fence pole  t.light t.sign veg.  terr. sky   person rider car   truck bus   train mbike bike | mIoU
No Adapt  | 31.9  18.9    47.7  7.4   3.1   16.0  10.4    1.0    76.5  13.0  58.9  36.0   1.0   67.1  9.5   3.7   0.0   0.0   0.0  | 21.1
FCN [13]  | 70.4  32.4    62.1  14.9  5.4   10.9  14.2    2.7    79.2  21.3  64.6  44.1   4.2   70.4  8.0   7.3   0.0   3.5   0.0  | 27.1
No Adapt  | 18.1  6.8     64.1  7.3   8.7   21.0  14.9    16.8   45.9  2.4   64.4  41.6   17.5  55.3  8.4   5.0   6.9   4.3   13.8 | 22.3
CDA [5]   | 26.4  22.0    74.7  6.0   11.9  8.4   16.3    11.1   75.7  13.3  66.5  38.0   9.3   55.2  18.8  18.9  0.0   16.8  14.6 | 27.8
No Adapt  | 59.7  24.8    66.8  12.8  7.9   11.9  14.2    4.2    78.7  22.3  65.2  44.1   2.0   67.8  9.6   2.4   0.6   2.2   0.0  | 26.2
Round 1   | 66.9  25.6    74.7  17.5  10.3  17.1  18.4    8.0    79.7  34.8  59.7  46.7   0.0   77.1  10.0  1.8   0.0   0.0   0.0  | 28.9
Round 2   | 72.2  28.4    74.9  18.3  10.8  24.0  25.3    17.9   80.1  36.7  61.1  44.7   0.0   74.5  8.9   1.5   0.0   0.0   0.0  | 30.5
Table 1: Adaptation from GTA to Cityscapes. All numbers are measured in %. The last three rows show our results before adaptation, after one and two rounds of curriculum learning using the proposed FCTN, respectively.
Figure 3: Domain adaptation results from the Cityscapes Val set. The third column shows segmentation results using the model trained solely by the GTA dataset, and the fourth column shows the segmentation results after two rounds of the FCTN training (best viewed in color).

3.5 Training Procedure

The training process is summarized in Algorithm 1. We first pretrain the entire FCTN on the labeled source domain training set, optimizing the loss function in Eq. (3). We then use the pre-trained model to generate the initial pseudo labels for the target domain training set with the method described in Sec. 3.3, and re-train the network on the source set and the pseudo-labeled target set. At each step, we take a minibatch with half of the samples from the source set and half from the pseudo-labeled target set, optimizing the terms in Eq. (4) jointly. We repeat the re-labeling of the target set and the re-training of the network for several rounds until the model converges.

Input: labeled source domain training set S and unlabeled target domain training set T
Pretraining on S:
for i = 1 to num_pretrain_iters do
     train the whole network with minibatches from S
end for
Curriculum Learning with S and T:
for round = 1 to num_rounds do
     T_pl ← Labeling(T)    ▷ see Sec. 3.3
     for i = 1 to num_iters do
          train the shared base net and the two labeling branches with samples from S
          train the whole network with samples from T_pl
     end for
end for
return the shared base net and the target-specific branch
Algorithm 1 Training procedure for our fully convolutional tri-branch network (FCTN).

4 Experiments

We validate the proposed method in experiments on adaptation from the recently built synthetic urban scene dataset GTA [3] to the widely used real urban scene semantic segmentation dataset Cityscapes [20].

Cityscapes [20] is a large-scale urban scene semantic segmentation dataset. It provides over 5,000 finely annotated images (train/validation/test: 2,975/500/1,525) with per-pixel category labels at a high resolution of 2048 × 1024. There are 34 distinct semantic classes in the dataset, but only 19 classes are considered in the official evaluation protocol.

GTA [3] contains 24,966 high-resolution labeled frames extracted from the realistic open-world computer game Grand Theft Auto V (GTA5). All frames are vehicle-egocentric, and the class labels are fully compatible with those of Cityscapes.

We implemented our method in TensorFlow [21] and trained the model on a single NVIDIA TITAN X GPU. We initialized the weights of the shared base net with those of the VGG-16 model pretrained on ImageNet, and the hyper-parameters were determined on the validation split. We used a constant learning rate and trained the model in a pre-training stage followed by two rounds of curriculum learning.

We use the synthetic data as the labeled source training set and the Cityscapes train split as the unlabeled target domain, and evaluate our adaptation algorithm on the Cityscapes val split using the predictions of the target-specific branch. Following the official Cityscapes evaluation protocol, we measure segmentation domain adaptation performance with the per-class intersection over union (IoU) and the mean IoU over the 19 classes. The detailed results are listed in Table 1, and some qualitative results are shown in Fig. 3. We achieve state-of-the-art domain adaptation performance. Our two rounds of curriculum learning boost the mean IoU over the non-adapted baseline by 2.7% and 4.3%, respectively. In particular, the IoU improvements for small objects (e.g., pole, traffic light and traffic sign) are significant (over 10%).
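The per-class IoU and mean IoU metrics can be computed from a confusion matrix between predicted and ground-truth label maps. A minimal NumPy sketch (the function name and the 255 ignore convention are ours; the Cityscapes benchmark ships its own evaluation scripts):

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore=255):
    """Per-class intersection over union and its mean.

    pred, gt: (H, W) integer label maps; pixels labeled `ignore` in gt
    are excluded from the evaluation. Builds a confusion matrix where
    entry (i, j) counts pixels with ground truth i predicted as j.
    """
    valid = gt != ignore
    hist = np.bincount(num_classes * gt[valid] + pred[valid],
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(hist)                              # correctly labeled pixels
    union = hist.sum(0) + hist.sum(1) - inter          # predicted + actual - overlap
    iou = inter / np.maximum(union, 1)
    return iou, float(iou.mean())
```

Averaging the per-class IoU (rather than per-pixel accuracy) is what makes small but important classes such as poles and traffic signs count as much as road and sky.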

5 Conclusion

A systematic way to address the unsupervised domain adaptation problem in semantic segmentation of urban scene images was presented in this work. The FCTN architecture was proposed to generate high-quality pseudo labels for unlabeled target domain images and to learn from these pseudo labels in a curriculum learning fashion. DA experiments from a large-scale synthetic dataset to a real image dataset demonstrated that our method outperforms previous benchmarking methods by a significant margin.

There are several possible future directions worth exploring. First, it would be interesting to develop a better weight constraint for the two labeling branches so that even better pseudo labels can be generated. Second, we may impose a class distribution constraint on each individual image [5] so as to alleviate confusion between visually similar classes, e.g., road vs. sidewalk and vegetation vs. terrain. Third, we can extend the proposed method to other tasks, e.g., instance-aware semantic segmentation.

References