MLSL: Multi-Level Self-Supervised Learning for Domain Adaptation with Spatially Independent and Semantically Consistent Labeling

09/30/2019, by Javed Iqbal, et al.

Most recent deep semantic segmentation algorithms suffer from large generalization errors, even when powerful hierarchical representation models based on convolutional neural networks have been employed. This can be attributed to limited training data and a large distribution gap between the training and test domain datasets. In this paper, we propose a multi-level self-supervised learning model for domain adaptation of semantic segmentation. Exploiting the idea that an object (and most stuff, given context) should be labeled consistently regardless of its location, we generate spatially independent and semantically consistent (SISC) pseudo-labels by segmenting multiple sub-images with the base model and designing an aggregation strategy. Image-level pseudo weak-labels (PWL) are computed to guide domain adaptation by capturing global context similarity between the source and target domains at the latent-space level. This helps the latent space learn the representation even when very few pixels belong to a category (a small object, for example) compared to the rest of the image. Our multi-level self-supervised learning (MLSL) outperforms existing state-of-the-art (self- or adversarial-learning) algorithms. Specifically, keeping all settings similar and employing MLSL, we obtain an mIoU gain of 5.1 on the GTA-V to Cityscapes adaptation and a gain of 4.3 over the existing state-of-the-art method.




1 Introduction

With the evolution of deep learning methods during the last decade and the availability of densely labeled datasets [9, 22, 20], considerable attention has been devoted to improving the performance of semantic segmentation [16, 1, 3, 19, 4, 32, 10]. The significant reliance of real-time applications like autonomous vehicles [11], bio-medical imaging [21], etc., on a robust and accurate semantic segmentation step has also helped it gain prominence in current research. However, given the limited datasets for such a complex task (pixel-wise annotation), state-of-the-art models have been reported to produce large generalization errors [33, 27]. This occurs naturally, because the training data may vary from the test data (domain shift) in many aspects, like illumination, visual appearance, camera quality, etc. It is time-consuming and labor-intensive to densely label high-resolution images covering all domain variations. Modern computer graphics makes it easier to train deep models using synthetic images with computer-generated dense labels [20, 22]. However, these simulated-scene datasets differ significantly in visual appearance and object structure from real-life road-scene datasets, limiting model performance. To overcome these domain-shift issues, many techniques have been proposed to adapt to the target data distribution [15, 14, 26]. Here our focus is to adapt to the target-domain dataset, without labels, in an unsupervised manner using self-supervised learning.

Owing to its many real-world applications, unsupervised domain adaptation (UDA) is a well-studied field in the current decade; it aims to generalize to unseen data using only the labeled data of the source domain. In UDA, most algorithms try to match the source and target data distributions using an adversarial loss [13], either at the structured-output level [26] or at the latent-space feature level [6, 18, 8]. Similarly, UDA based on adversarial learning augmented with other methods has recently produced good results on adaptation of semantic segmentation [27, 31]. However, Zou et al. [33] showed that comparable performance can be achieved with an alternative to adversarial learning that requires fewer computational resources than these complex methods. They introduced a class-balanced self-supervised training method that generates pseudo-labels using the source-trained model and minimizes a single loss function. However, they failed to capture the global context of the image with respect to categories, and the generated pseudo-labels had high uncertainty.

In this work, we propose a novel multi-level self-supervised learning (MLSL) approach for UDA of semantic segmentation. The proposed approach consists of two complementary strategies. First, we propose a spatially independent and semantically consistent (SISC) pseudo-label generation process. We make the reasonable assumption that an object should be segmented with the same label regardless of its location; the same can be said about stuff such as grass, road, sky, etc., given reasonable surrounding context. Using the base model, multiple sub-images (extracted from an image) are segmented independently and the output probability volumes are aggregated. This not only generates better pseudo-labels than single-inference (SI) based ones; the assumption is also more general than the spatial-consistency assumption used in previous work.

Secondly, we enforce the preservation of global context and small-object information during adaptation by attaching a category-based image classification module at the latent-space level. For each target image, image-level labels, called pseudo weak-labels (PWL), are generated using SISC pseudo-labels and size statistics collected from the source domain. In summary, our main contributions are:

  1. A multi-level self-supervised learning strategy for UDA of semantic segmentation that generates pseudo-labels at the fine-grained pixel level and at the image level, helping identify domain-invariant features at both the latent and output levels.

  2. A strategy, based on the reasonable assumption that for most categories labels should be location-invariant given enough context, to generate spatially independent and semantically consistent pixel-wise pseudo-labels.

  3. Using category-wise size statistics to build PWL and train the latent space.

  4. State-of-the-art performance on benchmark datasets by further augmenting the class-spatial and category-distribution priors.

Figure 1: An illustration of the alternating self-supervised learning method for UDA of semantic segmentation. (a) shows pseudo-label generation and (b) shows segmentation network training on source and target images. (a) and (b) are repeated iteratively.

2 Related Work

With the evolution of deep learning methods, most computer vision tasks, including but not limited to object detection and semantic segmentation, have shifted to deep neural network-based methods [7]. In [16], the authors proposed a fully convolutional network for pixel-level dense classification for the first time. Following them, many researchers proposed state-of-the-art methods for semantic segmentation, taking performance to an acceptable level for many computer vision tasks [1, 4, 32].

Domain adaptation is a widely studied area in computer vision for segmentation, detection, and classification tasks. With the emergence of semantic segmentation algorithms [1, 16, 3], the availability of datasets [9, 22, 20], and modern applications with real-time constraints, e.g., self-driving cars, domain adaptation for semantic segmentation is in the spotlight. Many approaches have exploited an appealing direction in semantic segmentation using domain adaptation from synthetic to real-life datasets [26, 8]. The underlying ideas of UDA include matching target and source features using discrepancy minimization [31, 18], self-supervised learning with pseudo-labels [33, 29], and re-weighting the source domain to look like the target domain [25, 14]. This work thoroughly investigates unsupervised domain adaptation for semantic segmentation with a focus on the self-supervised learning approach.

Adversarial learning is the most explored method for UDA of semantic segmentation [8, 6, 26]. Adversarial-loss based training is frequently exploited for feature matching, structured-output matching, and re-weighting in UDA. The authors in [18] and [25] exploited latent-space representations and used an adversarial loss to match the latent-space features of the source and target domains. Similarly, Chen et al. [6] used the adversarial loss for UDA of semantic segmentation augmented with class-specific adversaries to enhance adaptation performance. The authors in [31] also proposed latent-space domain matching based on an adversarial loss, augmented with an appearance adaptation network at the input; they combined latent-space representation adaptation with a re-weighting process and observed a significant gain in performance. In [14], the authors adopted a similar approach: first transform the fully labeled source images into target-like images, train the segmentation model using the labeled source data, and then adapt further to the target data. Gong et al. [12] devised a domain-flow approach to transfer source images to new domains using adversarial learning at intermediate levels. In [2], the authors leveraged the spatial structure of the source and target datasets and, working in the latent space, proposed a composite architecture for UDA with a domain-independent structure and domain-specific texture. However, due to the high-dimensional feature representation at the latent space, it is hard to adapt to new data distributions with an adversarial loss because of the instability of the adversarial learning process.

In [26], the authors proposed structured-output domain adaptation based on adversarial learning. Their method does not suffer from the high-dimensional representation of the latent space and performs well owing to the well-defined structure of road-scene imagery at the output. They reported state-of-the-art results in comparison with previous methods and also provided a baseline for other methods. Zou et al. [33] proposed a comparably performing method based on iterative learning: a class-balanced self-training mechanism that obtains state-of-the-art performance using spatial priors in the pseudo-label generation process. A tri-branch UDA model for semantic segmentation is proposed in [29], where pseudo-labels are generated from two branches and the third branch is trained on those pseudo-labels alternately. The authors in [27] argued that adversarial learning at the latent space or output space alone is not enough to learn the target distribution. They used a direct entropy minimization algorithm augmented with an entropy-based adversarial loss for UDA of semantic segmentation.

In summary, existing solutions suffer from various problems: latent-space adaptation suffers from high-dimensional feature representations, output-space adaptation struggles with small and thin objects, and re-weighting alone is not enough to achieve the goal. Similarly, existing iterative methods are not capable of generating good pseudo-labels and cannot capture the global image context. In this work we propose category-based image classification using PWL, together with SISC-based self-supervised learning, for domain adaptation of semantic segmentation.

Figure 2: (a) Single-inference pseudo-label generation, (b) SISC pseudo-labels generation where, from left to right: patches are extracted randomly, segmented, recombined, normalized and pseudo-labels are generated. (c) shows the semantic segmentation and category-based image classification model, and (d) describes the PWL generation process.

3 Approach

In this section, we present the proposed self-supervised and weakly-supervised learning approaches based on SISC pseudo-labels and PWL for domain adaptation of semantic segmentation. We start with existing state-of-the-art networks in semantic segmentation [28] and self-training for domain adaptation [33] as baseline methods and plug in additional modules for the proposed approaches. Fig. 1 illustrates the iterative self-supervised learning technique for UDA.

3.1 Preliminaries

Let $X_S = \{x_s\}$ and $Y_S = \{y_s\}$, where $x_s \in \mathbb{R}^{H \times W \times 3}$ corresponds to an RGB image of the source dataset with resolution $H \times W$ and $y_s \in \{0,1\}^{H \times W \times C}$ is its ground-truth label map as $C$-class one-hot vectors with the same spatial resolution as $x_s$. Let $f_{\theta}$ be a fully convolutional network with parameters $\theta$ which predicts softmax outputs $f_{\theta}(x)$ for an input image $x$. One needs to learn the parameters $\theta$ by minimizing the cross-entropy loss given in Eq. 1 on source-domain images:

$$\mathcal{L}_{s}(x_s, y_s) = -\sum_{h=1}^{H}\sum_{w=1}^{W}\sum_{c=1}^{C} y_s^{(h,w,c)} \log f_{\theta}(x_s)^{(h,w,c)} \qquad (1)$$
If ground-truth labels for the target dataset were available, the most direct strategy would be to use Eq. 1 and fine-tune the source-trained model on the target dataset. However, labels for the target dataset are usually unavailable, especially in real-time applications, e.g., self-driving cars. Therefore, an alternative for unsupervised domain adaptation is to fine-tune the source-trained model on the most confident outputs, called "pseudo-labels", that the model produces on target-domain images. The pseudo-labels $\hat{y}_t$ have exactly the same dimensions as $y_s$. The loss function for the target-domain images is formulated as follows:

$$\mathcal{L}_{t}(x_t, \hat{y}_t) = -\sum_{h=1}^{H}\sum_{w=1}^{W} m^{(h,w)} \sum_{c=1}^{C} \hat{y}_t^{(h,w,c)} \log f_{\theta}(x_t)^{(h,w,c)} \qquad (2)$$

where $\mathcal{L}_{t}$ in Eq. 2 is the self-training loss, $\hat{y}_t$ are the pseudo-label one-hot vectors with $C$ classes, and $m$ is a binary map obtained from the pseudo-labels: $m^{(h,w)} = 1$ if a pseudo-label is assigned at location $(h,w)$ and $m^{(h,w)} = 0$ if no pseudo-label is assigned there, with $1 \le h \le H$ and $1 \le w \le W$. The map $m$ allows the loss to be back-propagated only for those pixel locations that are assigned pseudo-labels. We call this training method "self-supervised learning" or "self-training".
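To make the masking concrete, here is a minimal numpy sketch of the masked self-training loss described above; numpy is used as a stand-in for the paper's MxNet implementation, and all names and shapes are illustrative.

```python
import numpy as np

def self_training_loss(probs, pseudo_labels, mask):
    """Masked cross-entropy in the spirit of Eq. 2.

    probs:         (H, W, C) softmax outputs of the segmentation network
    pseudo_labels: (H, W) integer class indices (arbitrary where mask == 0)
    mask:          (H, W) binary map; 1 where a pseudo-label is assigned
    """
    h, w = np.indices(pseudo_labels.shape)
    nll = -np.log(probs[h, w, pseudo_labels] + 1e-12)
    # only pixels that received a pseudo-label contribute to the loss
    return float((nll * mask).sum())

# toy check: two pixels, three classes, second pixel left unlabeled
probs = np.array([[[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]])
loss = self_training_loss(probs, np.array([[0, 1]]), np.array([[1, 0]]))
```

Only the first pixel contributes, since the second has no assigned pseudo-label.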

3.2 Semantically consistent pseudo-labels

Training a network using only single-inference (SI) generated pseudo-labels can mislead the training process, as there is no guarantee of pseudo-label quality. An initial strategy is to jointly train the segmentation network using the ground-truth labels of source images and the generated pseudo-labels of target images. The joint loss function is given by Eq. 3:

$$\min_{\theta, \hat{Y}_T} \; \sum_{x_s \in X_S} \mathcal{L}_{s}(x_s, y_s) + \sum_{x_t \in X_T} \mathcal{L}_{t}(x_t, \hat{y}_t) \qquad (3)$$

where $\mathcal{L}_{s}$ is the loss on source images (Eq. 1) and $\mathcal{L}_{t}$ is the loss on target images (Eq. 2). To minimize the loss in Eq. 3, we follow the two-stage alternating process given below:

  1. Generate pseudo-labels $\hat{y}_t$ by fixing the model parameters $\theta$.

  2. Minimize the loss in Eq. 3 with respect to $\theta$ by fixing the pseudo-labels generated in the previous step.

In this work, Step 1 and Step 2 are executed alternately and repeated for multiple iterations. A work-flow of the proposed algorithm is shown in Fig. 1. Step 1 generates pseudo-labels from the output softmax probabilities of the target images, keeping only the more confident examples. Once the pseudo-labels are generated, Step 2 updates the model parameters $\theta$ using stochastic gradient descent (SGD) by minimizing the loss function given in Eq. 3.
Spatially independent and semantically consistent pseudo-labels: Instead of generating pseudo-labels using SI (i.e., segmenting the whole image at once), we generate "spatially independent and semantically consistent" (SISC) pseudo-labels. We leverage the spatial independence of our baseline semantic segmentation model to generate spatially independent yet semantically consistent predictions. To exploit semantic consistency quantitatively, we evaluate the softmax predictions under different spatial contexts and select the most consistent ones. For each target image, we select multiple partially overlapping patches. Each patch is passed through the segmentation network, which assigns pixel-wise confidence vectors via softmax outputs. The output softmax probabilities of each patch are added, at the locations the patch was extracted from, into an initially empty probability volume, generating a composite output. Each pixel of this composite output has an associated count equal to the number of patches that covered it during inference. We normalize the composite output by these counts to obtain a normalized probability map and forward it to the pseudo-label selection step, which chooses the most confident outputs as pseudo-labels. The whole process of patch-based and single-inference based pseudo-label generation is shown in Fig. 2.
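The patch-aggregation step can be sketched as follows; `segment_fn` stands in for the base segmentation network, and the patch counts and shapes are illustrative rather than the paper's exact settings.

```python
import numpy as np

def sisc_probability_map(image, segment_fn, patch_size, n_patches, n_classes, rng):
    """Aggregate softmax outputs of randomly placed, overlapping patches.

    segment_fn(patch) -> (ph, pw, n_classes) softmax volume; a stand-in
    for the base segmentation network.
    """
    H, W = image.shape[:2]
    ph, pw = patch_size
    acc = np.zeros((H, W, n_classes))   # summed probabilities
    cnt = np.zeros((H, W, 1))           # how often each pixel was covered
    for _ in range(n_patches):
        y = rng.integers(0, H - ph + 1)
        x = rng.integers(0, W - pw + 1)
        acc[y:y + ph, x:x + pw] += segment_fn(image[y:y + ph, x:x + pw])
        cnt[y:y + ph, x:x + pw] += 1
    return acc / np.maximum(cnt, 1)      # normalized probability map

# toy check: a stand-in network that always predicts the same distribution
const_fn = lambda p: np.tile([0.6, 0.4], (p.shape[0], p.shape[1], 1))
pmap = sisc_probability_map(np.zeros((8, 8, 3)), const_fn, (4, 4), 5, 2,
                            np.random.default_rng(0))
```

With a constant predictor, every covered pixel ends up with the same normalized distribution, while uncovered pixels keep zero probability.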

Unlike simple pseudo-label generation methods, which suffer from the category-imbalance problem, we use category-balanced pseudo-label selection similar to the method used in [33]. Using the obtained normalized probability map, we further normalize the category-wise probabilities and select the pixels having high probability within a specific category. For example, we select all pixel locations assigned to "road", normalize the probabilities at those locations, and then select the most confident ones. This balances the inter-category pseudo-label ratio and prevents the training process from adapting to easy examples only. The obtained pseudo-labels belong to the pixels inferred most consistently without the global view. The loss function given in Eq. 3 is minimized using the original labels for the source domain and SISC pseudo-labels for the target domain.
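A minimal sketch of the class-balanced selection idea, assuming a simple per-class confidence quantile as the threshold (the exact selection rule of [33] may differ):

```python
import numpy as np

def class_balanced_pseudo_labels(prob_map, select_frac=0.2):
    """Within each predicted class, keep only the most confident
    `select_frac` fraction of pixels. Returns labels and a binary mask."""
    labels = prob_map.argmax(-1)
    conf = prob_map.max(-1)
    mask = np.zeros(labels.shape, dtype=np.uint8)
    for c in np.unique(labels):
        sel = labels == c
        # class-wise threshold: confidence quantile within this class
        thr = np.quantile(conf[sel], 1.0 - select_frac)
        mask[sel & (conf >= thr)] = 1
    return labels, mask

# toy check: four pixels, two classes; half of each class is kept
labels, mask = class_balanced_pseudo_labels(
    np.array([[[0.9, 0.1], [0.6, 0.4], [0.1, 0.9], [0.4, 0.6]]]), 0.5)
```

Because the threshold is computed per class rather than globally, low-confidence classes still contribute pseudo-labels instead of being crowded out by easy categories.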

3.3 Pseudo weak-labels guided domain adaptation

The cross-entropy loss for an input image/label pair defined in Eq. 1 is a sum of independent pixel-wise entropies, dealing with each pixel and its label independently. It thus ignores any spatially global information and is prone to being affected by sparse erroneous pseudo-labels. Because of the unbalanced per-category pixel distribution, minimizing the sum of independent pixel entropies also ignores the global data distribution. Even when the labels are balanced [33], the low-density classes fade (for the target domain) as self-training proceeds.

We employ pseudo weak-label (PWL) guided multi-task weakly-supervised learning to regularize the pixel-wise cross-entropy loss. The PWL-based category-level cross-entropy loss is attached at the encoder level during adaptation. This forces the latent space to learn to represent target categories, even for small objects whose latent-space representation might fade if only the pixel-wise cross-entropy loss were used.

3.3.1 PWL Filtering

The pixel-wise pseudo-labels are too noisy to generate image-level pseudo-labels directly. Assuming that source and target contain similar objects and instances, we build a naive model of each category's size relationship with the image. From the source dataset we calculate $\mu_c$, the mean size (in pixels) of each class $c$:

$$\mu_c = \frac{\sum_{i=1}^{N} s_{i,c}}{\sum_{i=1}^{N} \mathbb{1}[c \in x_i]}$$

where $N$ stands for the total number of images, $s_{i,c}$ is the number of pixels of class $c$ in image $x_i$, and the indicator function $\mathbb{1}[c \in x_i]$ is 1 if image $x_i$ contains class $c$, otherwise zero. For each target image $x_t$, we compute SISC pseudo-labels and use them to compute the per-class pixel-count array $s_t$. The PWL vector for image $x_t$ is an indicator vector $w_t \in \{0,1\}^{C}$ such that $w_{t,c} = 1$ if $s_{t,c} > \alpha \, \mu_c$, otherwise zero, where $\alpha$ is a small value chosen by the user.
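The filtering above can be sketched as follows; the function names and the value of `alpha` (the small user-chosen threshold) are illustrative:

```python
import numpy as np

def mean_class_sizes(source_label_maps, n_classes):
    """Mean pixel count of each class over the source images containing it."""
    totals = np.zeros(n_classes)
    appearances = np.zeros(n_classes)
    for lab in source_label_maps:
        counts = np.bincount(lab.ravel(), minlength=n_classes)
        totals += counts
        appearances += counts > 0
    return totals / np.maximum(appearances, 1)

def pseudo_weak_labels(pseudo_label_map, mask, mean_sizes, alpha=0.1):
    """Class c is marked present if its pseudo-labeled pixel count exceeds
    alpha times the mean source size of c (alpha value illustrative)."""
    counts = np.bincount(pseudo_label_map[mask == 1], minlength=len(mean_sizes))
    return (counts > alpha * mean_sizes).astype(np.uint8)

# toy check: two tiny source label maps, one target pseudo-label map
means = mean_class_sizes([np.array([[0, 0], [1, 1]]), np.zeros((2, 2), int)], 2)
pwl = pseudo_weak_labels(np.array([[0, 1], [1, 1]]), np.ones((2, 2), int), means)
```

Raising `alpha` makes the weak labels stricter, so rarely occurring classes need proportionally more pseudo-labeled pixels to count as present.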

3.3.2 PWL Loss

Given any image $x$, an image classification module is designed to take as input the latent-space representation (in this case, of ResNet-38) and predict image-level labels (Fig. 2(c)). Instead of softmax we use sigmoid, so that multiple labels can be predicted for the image, with the binary cross-entropy loss given in Eq. 5:

$$\mathcal{L}_{cls}(x, w) = -\sum_{c=1}^{C} \Big[ w_{c} \log p_{c} + (1 - w_{c}) \log (1 - p_{c}) \Big] \qquad (5)$$

where $p_c$ is the sigmoid output for class $c$. For the source images $x_s$, the indicator vector $w_s$ represents image-level labels created from the ground-truth segmentation labels. For the images in the target domain $x_t$, image-level weak-labels are created as detailed in Sec. 3.3.1.
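A numpy sketch of the multi-label binary cross-entropy described above, with an independent sigmoid per category (logit values illustrative):

```python
import numpy as np

def pwl_bce_loss(logits, weak_labels):
    """Multi-label binary cross-entropy: independent sigmoid per class so
    several categories can be marked present at once."""
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12
    return float(-(weak_labels * np.log(p + eps)
                   + (1 - weak_labels) * np.log(1 - p + eps)).sum())

# toy check: two classes, zero logits give p = 0.5 for both
loss = pwl_bce_loss(np.array([0.0, 0.0]), np.array([1, 0]))
```

Using sigmoids rather than softmax is what allows a street scene to be labeled as containing both "car" and "person" simultaneously.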

3.4 Final Loss Function

The overall loss function for the segmentation network and the category-based image classification network on the source domain is the composition of Eq. 1 and Eq. 5, and is given by

$$\mathcal{L}_{src}(x_s, y_s, w_s) = \mathcal{L}_{s}(x_s, y_s) + \lambda \, \mathcal{L}_{cls}(x_s, w_s) \qquad (6)$$

where $\lambda$ is a scaling factor and $w_s$ is the image-level label. The combined loss function for self-supervised and weakly-supervised learning is given by:

$$\min_{\theta, \hat{Y}_T} \; \sum_{x_s \in X_S} \big[ \mathcal{L}_{s}(x_s, y_s) + \lambda \, \mathcal{L}_{cls}(x_s, w_s) \big] + \sum_{x_t \in X_T} \big[ \mathcal{L}_{t}(x_t, \hat{y}_t) + \lambda \, \mathcal{L}_{cls}(x_t, w_t) \big] \qquad (7)$$

Eq. 7 is minimized using the criteria described in Sec. 3.2.

4 Experiments

In this section, we present experimental details and discuss the main results of our proposed UDA methods.

GTA-V → Cityscapes

Method | Type | Road | Sidewalk | Building | Wall | Fence | Pole | T. Light | T. Sign | Veg. | Terrain | Sky | Person | Rider | Car | Truck | Bus | Train | Motorcycle | Bicycle | mIoU
ResNet-38 [28] | - | 70.0 | 23.7 | 67.8 | 15.4 | 18.1 | 40.2 | 41.9 | 25.3 | 78.8 | 11.7 | 31.4 | 62.9 | 29.8 | 60.1 | 21.5 | 26.8 | 7.7 | 28.1 | 12.0 | 35.4
AdaptSegNet [26] | Adv | 86.5 | 36.0 | 79.9 | 23.4 | 23.3 | 23.9 | 35.2 | 14.8 | 83.4 | 33.3 | 75.6 | 58.5 | 27.6 | 73.7 | 32.5 | 35.4 | 3.9 | 30.1 | 28.1 | 42.4
Saleh et al. [24] | ST | 79.8 | 29.3 | 77.8 | 24.2 | 21.6 | 6.9 | 23.5 | 44.2 | 80.5 | 38.0 | 76.2 | 52.7 | 22.2 | 83.0 | 32.3 | 41.3 | 27.0 | 19.3 | 27.7 | 42.5
MinEnt [27] | ST | 86.2 | 18.6 | 80.3 | 27.2 | 24.0 | 23.4 | 33.5 | 24.7 | 83.3 | 31.0 | 75.6 | 54.6 | 25.6 | 85.2 | 30.0 | 10.9 | 0.1 | 21.3 | 37.1 | 42.3
DLOW [12] | Adv | 87.1 | 33.5 | 80.5 | 24.5 | 13.2 | 29.8 | 29.5 | 26.6 | 82.6 | 26.7 | 81.8 | 55.9 | 25.3 | 78.0 | 33.5 | 38.7 | 0.0 | 22.9 | 34.5 | 42.3
CLAN [17] | Adv | 87.0 | 27.1 | 79.6 | 27.3 | 23.3 | 28.3 | 35.5 | 24.2 | 83.6 | 27.4 | 74.2 | 58.6 | 28.0 | 76.2 | 33.1 | 36.7 | 6.7 | 31.9 | 31.4 | 43.2
All Structure [2] | Adv | 91.5 | 47.5 | 82.5 | 31.3 | 25.6 | 33.0 | 33.7 | 25.8 | 82.7 | 28.8 | 82.7 | 62.4 | 30.8 | 85.2 | 27.7 | 34.5 | 6.4 | 25.2 | 24.4 | 45.4
CBST-SP [33] | ST | 88 | 56.2 | 77 | 27.4 | 22.4 | 40.7 | 47.3 | 40.9 | 82.4 | 21.6 | 60.3 | 50.2 | 20.4 | 83.8 | 35 | 51 | 15.2 | 20.6 | 37 | 46.2
Ours (SISC) | ST | 91.0 | 49.3 | 79.9 | 24.4 | 27.9 | 37.9 | 45.1 | 45.1 | 81.3 | 19.0 | 61.7 | 63.9 | 28.0 | 86.5 | 23.9 | 42.3 | 41.9 | 33.1 | 44.4 | 48.7
Ours (SISC-PWL) | ST | 89.0 | 45.2 | 78.2 | 22.9 | 27.3 | 37.4 | 46.1 | 43.8 | 82.9 | 18.6 | 61.2 | 60.4 | 26.7 | 85.4 | 35.9 | 44.9 | 36.4 | 37.2 | 49.3 | 49.0
Table 1: Semantic segmentation performance when the model trained on the GTA-V dataset is adapted to the Cityscapes dataset. We present the results of our proposed SISC pseudo-label based self-supervised learning and of PWL-augmented self-training. We use a competitive baseline model and show a thorough comparison with existing state-of-the-art methods. The abbreviations "ST" and "Adv" indicate self-training (self-supervised learning) and adversarial learning, respectively.

4.1 Experimental setup

4.1.1 Datasets

We follow the synthetic-to-real setup for unsupervised domain adaptation. We use GTA-V [20] and SYNTHIA [22] as our synthetic source-domain datasets and Cityscapes [9] as the real-world target-domain dataset. GTA-V consists of 24966 synthetic frames extracted from a video game. All 24966 frames have pixel-level labels for 33 categories, of which we use the 19 categories compatible with the real-world Cityscapes dataset. Similarly, we use the SYNTHIA-RAND-CITYSCAPES set of 9400 synthetic frames from the SYNTHIA dataset. We train and evaluate our baseline and proposed models on the 16 classes common to SYNTHIA and Cityscapes. We also report the 13-class evaluation as described in [27] and [33].

In both experiments, we use the Cityscapes training set without labels for unsupervised domain adaptation and evaluate the adapted models on the separate Cityscapes validation set of 500 images. We use standard mean Intersection-over-Union (mIoU) as our evaluation metric.
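For reference, a minimal sketch of the mIoU metric computed via a confusion matrix; the ignore-label convention is an assumption for illustration, not something the paper specifies:

```python
import numpy as np

def mean_iou(pred, gt, n_classes, ignore_label=255):
    """Per-class intersection-over-union, averaged over classes present."""
    valid = gt != ignore_label
    # confusion matrix: rows = ground truth, columns = prediction
    idx = gt[valid].astype(np.int64) * n_classes + pred[valid].astype(np.int64)
    cm = np.bincount(idx, minlength=n_classes ** 2).reshape(n_classes, n_classes)
    inter = np.diag(cm).astype(np.float64)
    union = cm.sum(0) + cm.sum(1) - inter
    iou = inter / np.maximum(union, 1)
    return float(iou[union > 0].mean())

# toy check: 2x2 prediction vs. ground truth with two classes
miou = mean_iou(np.array([[0, 1], [1, 1]]), np.array([[0, 1], [0, 1]]), 2)
```

Here class 0 has IoU 1/2 and class 1 has IoU 2/3, so the mean is 7/12.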

4.1.2 Model architecture

We use ResNet-38 [28] as our baseline semantic segmentation model. The ResNet-38, pre-trained on ImageNet [23], is trained for semantic segmentation on the GTA-V and SYNTHIA datasets. The architecture of ResNet-38 contains 7 blocks followed by two segmentation (convolution) layers and an upsampling layer. We refer to this ResNet-38 backbone as the encoder of the segmentation network and to its output as the latent-space representation. The two convolution layers comprise filters of depth 512 and C (the number of classes to segment), respectively. Finally, the upsampling layer up-scales the output using bilinear interpolation.

Similarly, the image classification part discussed in Section 3.3 is a category (object/stuff) based image classification module attached to ResNet-38. The image classification module consists of two convolution layers, followed by a global average pooling (GAP) layer applied to capture the global nature of the feature-map channels. The output of the GAP layer is passed through two fully connected layers. The ReLU activation function is applied after each layer except the last, where sigmoid is used.

4.1.3 Implementation and training details

We use the MxNet [5] deep learning framework and a single Core-i5 machine with 32GB RAM and a GTX 1080 GPU with 8GB of memory to implement the proposed methods for domain adaptation of semantic segmentation. Our model is trained with the SGD optimizer. To generate SISC pseudo-labels, 50 sub-images of each target image are selected randomly. For SISC pseudo-label based self-supervised learning, a batch size of 2 is chosen, while the weakly-supervised setup described in Section 3.3 processes a single image at a time. To optimize the joint loss function given in Eq. 7, the value of $\lambda$ is investigated thoroughly (as shown in Section 4.3) and chosen as 0.025, which prevents the image classification loss from back-propagating large gradients. $\lambda$ also controls the speed of adaptation, with a trade-off against segmentation performance, so this nominal value is used for all subsequent experiments.

SYNTHIA → Cityscapes

Method | Type | Road | Sidewalk | Building | Wall | Fence | Pole | T. Light | T. Sign | Veg. | Sky | Person | Rider | Car | Bus | Motorcycle | Bicycle | mIoU | mIoU*
ResNet-38 [28] | - | 32.6 | 21.5 | 46.5 | 4.81 | 0.03 | 26.5 | 14.8 | 13.1 | 70.8 | 60.3 | 56.6 | 3.5 | 74.1 | 20.4 | 8.9 | 13.1 | 29.2 | 33.6
Road [8] | Adv | 77.7 | 30.0 | 77.5 | 9.6 | 0.3 | 25.8 | 10.3 | 15.6 | 77.6 | 79.8 | 44.5 | 16.6 | 67.8 | 14.5 | 7.0 | 23.8 | 36.2 | 41.8
AdaptSegNet [26] | Adv | 81.7 | 39.1 | 78.4 | 11.1 | 0.3 | 25.8 | 6.8 | 9.0 | 79.1 | 80.8 | 54.8 | 21.0 | 66.8 | 34.7 | 13.8 | 29.9 | 39.6 | 45.8
MinEnt [27] | ST | 73.5 | 29.2 | 77.1 | 7.7 | 0.2 | 27.0 | 7.1 | 11.4 | 76.7 | 82.1 | 57.2 | 21.3 | 69.4 | 29.2 | 12.9 | 27.9 | 38.1 | 44.2
CLAN [17] | Adv | 81.3 | 37.0 | 80.1 | - | - | - | 16.1 | 13.7 | 78.2 | 81.5 | 53.4 | 21.2 | 73.0 | 32.9 | 22.6 | 30.7 | - | 47.8
All Structure [2] | Adv | 91.7 | 53.5 | 77.1 | 2.5 | 0.2 | 27.1 | 6.2 | 7.6 | 78.4 | 81.2 | 55.8 | 19.2 | 82.3 | 30.3 | 17.1 | 34.3 | 41.5 | 48.7
CBST [33] | ST | 53.6 | 23.7 | 75.0 | 12.5 | 0.3 | 36.4 | 23.5 | 26.3 | 84.8 | 74.7 | 67.2 | 17.5 | 84.5 | 28.4 | 15.2 | 55.8 | 42.5 | 48.4
Ours (SISC) | ST | 73.7 | 34.4 | 78.7 | 13.7 | 2.9 | 36.6 | 28.2 | 22.3 | 86.1 | 76.8 | 65.3 | 20.5 | 81.7 | 31.4 | 13.9 | 47.3 | 44.4 | 50.8
Ours (SISC+PWL) | ST | 59.2 | 30.2 | 68.5 | 22.9 | 1.0 | 36.2 | 32.7 | 28.3 | 86.2 | 75.4 | 68.6 | 27.7 | 82.7 | 26.3 | 24.3 | 52.7 | 45.2 | 51.0
Table 2: Semantic segmentation performance of Cityscapes validation set when adapted from SYNTHIA dataset. We present mIoU and mIoU* (13-categories) comparison with existing state-of-the-art methods for Cityscapes validation set.

4.2 Experimental results

This section presents the experimental results of our proposed approaches compared to the baseline ResNet-38 and existing state-of-the-art UDA methods. Our proposed approaches outperform other domain adaptation methods and produce state-of-the-art results on two benchmark datasets. We also describe in detail the behaviour of the proposed approaches under different settings and with different source datasets.

GTA-V to Cityscapes: Table 1 details the experimental results for 19 categories when adapting from GTA-V to Cityscapes. We use standard mIoU as the semantic segmentation performance measure and report results on the Cityscapes validation set. Our proposed self-supervised learning with SISC pseudo-labels shows state-of-the-art performance with the ResNet-38 segmentation model and outperforms the latest approaches for UDA of semantic segmentation. Compared to MinEnt [27], which minimizes self-entropy via direct entropy minimization, our SISC approach improves overall mIoU by 6.4 points. Similarly, compared to the self-training approach presented in [33], the proposed SISC method outperforms it by a margin of 2.5 points in mIoU.

Our weak-label guided UDA approach tries to capture the global image context through category (object/stuff) based image classification. This module helps improve overall performance and especially boosts performance for small and rarely occurring objects, as shown in Table 1. The consistency and accuracy of the pseudo weak-labels for image classification enable this approach to help the segmentation model perform better. With the ResNet-38 baseline, pseudo weak-labels combined with CBST [33] provide a boost in mIoU compared to plain CBST. Similarly, when SISC is augmented with PWL-based image classification, mIoU increases by 2.8 points over the existing state-of-the-art CBST-SP [33], as shown in Table 1. The combination of the two proposed approaches achieves 49.0 mIoU on the Cityscapes validation set, which sets a new benchmark. This boost in performance shows that both approaches are capable of extracting domain-independent representations and produce comparatively better segmentation results.

For a fairer comparison with other UDA methods, Table 3 shows the mIoU gain with respect to the specific baselines used. Compared to more complex models with very deep backbones, our approach produces a higher gain of +13.6 points over the source model, surpassing existing methods by a minimum margin of 2.8 points. Fig. 3 shows examples of semantic segmentation before and after domain adaptation. As illustrated in the figure, the segmentation results improve significantly with the SISC and SISC+PWL based approaches compared to the source and CBST-SP methods.

Method | GTA → Cityscapes: Source only | UDA Algo. | mIoU gain | SYN → Cityscapes: Source only | UDA Algo. | mIoU* gain
FCN in the wild [15] | 21.2 | 27.1 | 5.9 | 23.6 | 25.4 | 1.8
Curriculum DA [30] | 22.3 | 28.9 | 6.6 | 28.4 | 34.82 | 6.42
AdaptSegNet [26] | 36.6 | 42.4 | 5.8 | 38.6 | 46.7 | 8.1
MinEnt [27] | 36.6 | 42.3 | 5.7 | 38.6 | 44.2 | 5.6
CLAN [17] | 36.6 | 43.2 | 6.6 | 38.6 | 47.8 | 9.2
All Structure [2] | 36.6 | 45.4 | 8.8 | 38.6 | 48.7 | 10.1
CBST [33] | 35.4 | 46.2 | 10.8 | 33.6 | 48.4 | 14.8
Ours (SISC) | 35.4 | 48.7 | 13.3 | 33.6 | 50.8 | 17.2
Ours (SISC+PWL) | 35.4 | 49 | 13.6 | 33.6 | 51.0 | 17.4
Table 3: Performance (mIoU, mIoU*) gain comparison between the GTA-V and SYNTHIA trained source models and the respective adapted models from GTA-V and SYNTHIA to Cityscapes.

Target Image | Ground Truth | ResNet-38 [28] | CBST-SP [33] | Ours (SISC) | Ours (SISC+PWL)
Figure 3: Segmentation results on Cityscapes validation set when adapted from GTA to Cityscapes.

SYNTHIA to Cityscapes: SYNTHIA is a more diverse dataset, with multiple viewpoints and different spatial constraints compared to GTA-V and Cityscapes. In Table 2, we present the unsupervised adaptation results on the Cityscapes validation set when adapting from SYNTHIA. The categories in SYNTHIA and Cityscapes do not fully overlap, so we select the 16 common classes, as done in [15, 30, 33], for evaluation. We also report the performance (mIoU*) over the 13 common classes used in [33, 26, 17]. With ResNet-38 as the baseline network, our proposed SISC-based self-supervised learning method outperforms existing state-of-the-art methods, as shown in Table 2. Compared to MinEnt [27], which uses a similar entropy-minimization technique, our SISC-based UDA approach achieves gains of 6.3 points in mIoU and 6.6 points in mIoU*. Similarly, compared to CBST presented in [33], our SISC-based approach gains 1.9 and 2.4 points in mIoU and mIoU*, respectively. Our proposed PWL-guided UDA approach combined with SISC-based self-supervised learning provides boosts of 2.7 and 2.6 points in mIoU and mIoU*, respectively, compared with CBST. Compared to an ensemble method (adversarial training and self-training) [27], our composite UDA method also achieves clear gains in both mIoU and mIoU*.

For a fairer comparison with existing methods, Table 3 shows the baseline performance, the performance after adaptation, and the gain in terms of mIoU*. Our proposed methods outperform existing state-of-the-art methods, achieving a gain over the baseline with a minimum margin of 2.6 points. In Fig. 4, examples of semantic segmentation before and after UDA are shown. As illustrated, the segmentation results improve significantly with the SISC and SISC+PWL based approaches compared to the source and CBST methods.

Target Image | Ground Truth | ResNet-38 [28] | CBST-SP [33] | Ours (SISC) | Ours (SISC+PWL)
Figure 4: Segmentation results on Cityscapes validation set when adapted from SYNTHIA to Cityscapes.

4.3 Ablation experiments

Relative-frequency based pseudo-labels: Besides the methodology adopted in Section 3.2, we also generated pseudo-labels based on the relative frequency of pixel classifications. Randomly selected patches, as in SISC, are segmented and recombined into the large output map. For each pixel, a count is kept of the category assigned by each covering patch, and the relative frequency is then calculated. This relative frequency is used as the prediction probability and incorporated into pseudo-label generation. Due to the hard decisions involved, the generated pseudo-labels were not effective and led to a decline in performance.
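The hard-vote variant can be sketched as follows; interfaces and shapes are illustrative:

```python
import numpy as np

def relative_frequency_map(H, W, patches, n_classes):
    """patches: iterable of (y, x, label_patch), where label_patch is the
    integer (ph, pw) argmax map of a patch placed at row y, column x."""
    votes = np.zeros((H, W, n_classes))
    for y, x, lab in patches:
        ph, pw = lab.shape
        # one hard vote per pixel for the class the patch decided on
        np.add.at(votes, (np.arange(y, y + ph)[:, None],
                          np.arange(x, x + pw)[None, :], lab), 1)
    total = votes.sum(-1, keepdims=True)
    return votes / np.maximum(total, 1)  # relative class frequency per pixel

# toy check: two single-pixel patches disagree at (0, 0)
freq = relative_frequency_map(2, 2, [(0, 0, np.array([[0]])),
                                     (0, 0, np.array([[1]]))], 2)
```

Unlike the SISC averaging of soft probabilities, each patch contributes only its argmax decision here, which discards the confidence information and helps explain the performance decline reported above.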

GTA-V to Cityscapes
                  0.1    0.05   0.025   0.001
SISC+PWL (mIoU)   46.0   48.1   49.0    48.24
                  0.0    0.1    0.05    0.025
SISC+PWL (mIoU)   45.5   46.0   49.0    47.33
Table 4: Influence of the two weight factors on overall performance.
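For concreteness, the weight factor swept in Table 4 enters the objective as a scaling on the image-classification loss added to the segmentation loss. The names `bce`, `total_loss`, and `lam` below are hypothetical, and this is a sketch of the loss combination rather than the paper's MXNet code:

```python
import numpy as np

def bce(pred_prob, target, eps=1e-7):
    """Binary cross-entropy between predicted class-presence probabilities
    and the image-level pseudo weak-label vector."""
    p = np.clip(pred_prob, eps, 1 - eps)  # guard against log(0)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

def total_loss(seg_loss, cls_probs, pwl, lam=0.05):
    """Segmentation loss plus a down-weighted PWL classification loss.
    `lam` stands in for the weight factor swept in Table 4."""
    return seg_loss + lam * bce(cls_probs, pwl)
```

A small `lam` keeps the supporting classification module from dominating the segmentation objective, consistent with the ablation.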

Patch size selection: Our base models for semantic segmentation are, in both cases, trained on patches selected randomly from the whole image. Following that nominal size, we use the same patch size for pseudo-label generation. We also tried a smaller patch size, but on high-resolution Cityscapes images these small patches did not contribute; for larger patch sizes we ran into GPU memory limitations. Similarly, we experimented with 25, 50, and 100 randomly selected patches per image for SISC pseudo-label generation. 25 patches were not enough to cover the high-resolution Cityscapes images, while 100 patches made the process very slow with negligible gain over 50 patches. Therefore, we use 50 random patches per image in all experiments.

Category based image classification loss weight: Since image classification is added as a supporting module to the segmentation network, the loss contribution of this module should be limited. We tried multiple weight factors and selected the best-performing one (Table 4).

Pseudo-weak-label generation: For the category based image classification loss, the PWL are generated from the segmentation pseudo-labels. Since it is difficult to set a fixed minimum pixel count for a category to be labeled as present in an image, we instead exploit the category distribution of the source dataset and assign pseudo weak-labels based on it. For GTA-V to Cityscapes, a category is labeled as present in an image if its pixel count exceeds a fixed fraction of the mean per-image pixel count of the same category in the source dataset. A detailed comparison along with the respective mIoU is shown in Table 4.
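The pseudo-weak-label rule above can be sketched as follows. The function name `generate_pwl` and the `ratio` parameter are illustrative placeholders; the actual threshold fraction is a tuned hyperparameter (Table 4):

```python
import numpy as np

def generate_pwl(pseudo_label, source_mean_pixels, ratio=0.1, num_classes=19):
    """Derive an image-level pseudo weak-label (PWL) vector from a
    segmentation pseudo-label map.

    A class is marked present only if its pixel count in this image exceeds
    `ratio` times the mean per-image pixel count of that class measured on
    the source dataset, so small classes get a proportionally small bar.
    """
    pwl = np.zeros(num_classes, dtype=np.float32)
    for c in range(num_classes):
        count = int((pseudo_label == c).sum())
        if count > ratio * source_mean_pixels[c]:
            pwl[c] = 1.0
    return pwl
```

Scaling the threshold by the source-domain class statistics avoids a single global pixel-count cutoff, which would suppress inherently small categories such as poles or traffic signs.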

5 Conclusions

In this paper, we have proposed a multi-level self-supervised learning strategy (MLSL) for UDA of semantic segmentation that generates pseudo-labels at both the fine-grained pixel level and the image level, helping identify domain-invariant features in both latent and output space. Using the reasonable assumption that the labels of objects (and most stuff) should be the same regardless of their location, we generate spatially independent but semantically consistent (SISC) pseudo-labels. Image-level labels, called pseudo weak-labels (PWL), are generated by learning the pixel-wise object size distribution of the source domain images and using it as a consistency check over the SISC pseudo-labels. A binary cross-entropy loss over the PWL enforces that the latent space preserves information about the objects, helping the model adapt for small objects. This multi-level pseudo-label generation for self-supervised learning allows the network to learn domain-invariant features at different hierarchical levels. Rigorous experimentation demonstrates that the proposed SISC-based self-supervised method alone outperforms existing state-of-the-art algorithms, whether based on self-supervision or adversarial learning, improving mIoU* on both GTA-V to Cityscapes and SYNTHIA to Cityscapes. Augmented with the PWL-based image classification module, our method further improves performance, especially for small objects. The effectiveness of SISC and PWL is highlighted by the substantial improvement of mean IoU over the base model, which is significantly larger than that of previous state-of-the-art methods.


  • [1] V. Badrinarayanan, A. Kendall, and R. Cipolla (2017) SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 39(12), pp. 2481–2495.
  • [2] W. Chang, H. Wang, W. Peng, and W. Chiu (2019) All about structure: Adapting structural information across domains for boosting semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [3] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille (2015) Semantic image segmentation with deep convolutional nets and fully connected CRFs. In 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA.
  • [4] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille (2018) DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence 40(4), pp. 834–848.
  • [5] T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang (2015) MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. LearningSys Workshop, NIPS 2015. arXiv:1512.01274.
  • [6] Y. Chen, W. Chen, Y. Chen, B. Tsai, Y. F. Wang, and M. Sun (2017) No more discrimination: Cross city adaptation of road scene segmenters. In The IEEE International Conference on Computer Vision (ICCV), pp. 2011–2020.
  • [7] Y. Chen, W. Li, C. Sakaridis, D. Dai, and L. Van Gool (2018) Domain adaptive Faster R-CNN for object detection in the wild. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [8] Y. Chen, W. Li, and L. Van Gool (2018) ROAD: Reality oriented adaptation for semantic segmentation of urban scenes. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [9] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The Cityscapes dataset for semantic urban scene understanding. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [10] G. Csurka and F. Perronnin (2008) A simple high performance approach to semantic segmentation. In BMVC, pp. 1–10.
  • [11] A. Geiger, P. Lenz, and R. Urtasun (2012) Are we ready for autonomous driving? The KITTI vision benchmark suite. In 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3354–3361.
  • [12] R. Gong, W. Li, Y. Chen, and L. Van Gool (2019) DLOW: Domain flow for adaptation and generalization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  • [14] J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell (2018) CyCADA: Cycle-consistent adversarial domain adaptation. In Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden, pp. 1994–2003.
  • [15] J. Hoffman, D. Wang, F. Yu, and T. Darrell (2016) FCNs in the wild: Pixel-level adversarial and constraint-based adaptation. arXiv preprint arXiv:1612.02649.
  • [16] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431–3440.
  • [17] Y. Luo, L. Zheng, T. Guan, J. Yu, and Y. Yang (2019) Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [18] M. Mancini, L. Porzi, S. Rota Bulò, B. Caputo, and E. Ricci (2018) Boosting domain adaptation by discovering latent domains. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [19] H. Noh, S. Hong, and B. Han (2015) Learning deconvolution network for semantic segmentation. In The IEEE International Conference on Computer Vision (ICCV), pp. 1520–1528.
  • [20] S. R. Richter, V. Vineet, S. Roth, and V. Koltun (2016) Playing for data: Ground truth from computer games. In European Conference on Computer Vision (ECCV), LNCS Vol. 9906, pp. 102–118.
  • [21] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234–241.
  • [22] G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez (2016) The SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [23] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115(3), pp. 211–252.
  • [24] F. S. Saleh, M. S. Aliakbarian, M. Salzmann, L. Petersson, and J. M. Alvarez (2018) Effective use of synthetic data for urban scene semantic segmentation. In European Conference on Computer Vision (ECCV), pp. 86–103.
  • [25] S. Sankaranarayanan, Y. Balaji, A. Jain, S. N. Lim, and R. Chellappa (2018) Learning from synthetic data: Addressing domain shift for semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [26] Y. Tsai, W. Hung, S. Schulter, K. Sohn, M. Yang, and M. Chandraker (2018) Learning to adapt structured output space for semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [27] T. Vu, H. Jain, M. Bucher, M. Cord, and P. Pérez (2019) ADVENT: Adversarial entropy minimization for domain adaptation in semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2517–2526.
  • [28] Z. Wu, C. Shen, and A. Van Den Hengel (2019) Wider or deeper: Revisiting the ResNet model for visual recognition. Pattern Recognition 90, pp. 119–133.
  • [29] J. Zhang, C. Liang, and C. J. Kuo (2018) A fully convolutional tri-branch network (FCTN) for domain adaptation. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3001–3005.
  • [30] Y. Zhang, P. David, and B. Gong (2017) Curriculum domain adaptation for semantic segmentation of urban scenes. In The IEEE International Conference on Computer Vision (ICCV).
  • [31] Y. Zhang, Z. Qiu, T. Yao, D. Liu, and T. Mei (2018) Fully convolutional adaptation networks for semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6810–6818.
  • [32] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia (2017) Pyramid scene parsing network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2881–2890.
  • [33] Y. Zou, Z. Yu, B. Vijaya Kumar, and J. Wang (2018) Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 289–305.