Correlation Maximized Structural Similarity Loss for Semantic Segmentation

10/19/2019 · by Shuai Zhao et al.

Most semantic segmentation models treat semantic segmentation as a pixel-wise classification task and use a pixel-wise classification error as their optimization criterion. However, the pixel-wise error ignores the strong dependencies among the pixels in an image, which limits the performance of the model. Several ways to incorporate the structure information of the objects have been investigated, e.g., conditional random fields (CRF), image structure priors based methods, and generative adversarial networks (GAN). Nevertheless, these methods usually require extra model branches or additional memory, and some of them show limited improvements. In contrast, we propose a simple yet effective structural similarity loss (SSL) to encode the structure information of the objects, which only requires a small amount of additional computation in the training phase. Inspired by the widely-used structural similarity (SSIM) index in image quality assessment, we use the linear correlation between two images to quantify their structural similarity. The goal of the proposed SSL is to pay more attention to the positions whose associated predictions lead to a low degree of linear correlation between the corresponding regions in the ground truth map and the predicted map. The model can thus achieve a strong structural similarity between the two maps by minimizing the SSL over the whole map. The experimental results demonstrate that our method achieves substantial and consistent improvements in performance on the PASCAL VOC 2012 and Cityscapes datasets. The code will be released soon.


1 Introduction

(a) Real-world image
(b) Ground truth
(c) Prediction with cross entropy
(d) Prediction with proposed SSL
Figure 1: Compared to the regular cross entropy loss, the SSL can incorporate the structure information of the objects. Thus the model trained with SSL can better identify the pixels belonging to the horse’s leg or the horse body covered by the race number.

Semantic image segmentation is considered in practice as a pixel-wise multiclass classification problem, where the goal is to assign a semantic label to every pixel in the image. Much progress has been made with powerful convolutional neural networks (e.g., VGGNet [37], ResNet [16], Xception [11]) and sophisticated segmentation models (e.g., FCN [26], PSPNet [47], DeepLab [7, 8, 10], ExFuse [45]). Generally, these models are optimized with a pixel-wise classification error, and the most commonly used pixel-wise loss function for semantic segmentation is the softmax cross entropy loss:

$$\mathcal{L}_{ce} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c} \log p_{i,c}, \tag{1}$$

where $N$ is the number of pixels, $C$ is the number of classes, $y_{i,c}$ is the ground truth, and $p_{i,c}$ is the estimated probability. The pixel-wise cross entropy loss treats the pixels as independent samples, and the total loss used for training is the average loss over all pixels. However, strong dependencies exist among the pixels in an image, and these dependencies carry important information about the structure of the objects [40]. As the pixel-wise loss ignores the relationships between pixels, models trained with a pixel-wise loss may get poor segmentation results when the visual evidence for the foreground is weak (e.g., the horse body covered by the race number in Fig. 1) or when pixels belong to objects with small spatial structure (e.g., the pixels belonging to the horse's leg in Fig. 1) [19].
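To make Eq. (1) concrete, here is a minimal PyTorch sketch of the pixel-wise softmax cross entropy; the tensor shapes and names are illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def pixelwise_softmax_ce(logits, labels):
    """Pixel-wise softmax cross entropy, Eq. (1).

    logits: (B, C, H, W) raw class scores; labels: (B, H, W) class indices.
    Each pixel is treated as an independent sample, and the loss is the
    average over all B*H*W pixels.
    """
    return F.cross_entropy(logits, labels)

# Usage on random data: 2 images, 21 classes (PASCAL VOC), 65x65 logits.
logits = torch.randn(2, 21, 65, 65)
labels = torch.randint(0, 21, (2, 65, 65))
print(pixelwise_softmax_ce(logits, labels))
```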

Previous work has pursued three directions to utilize the structure information of the objects:

  1. Conditional Random Field (CRF). CRF can model the relationships between the pixels and enforce the predictions of pixels with similar visual appearances to be more consistent. It is typically used as a post-processing step [7, 20, 35] or a plug-in module inside the neural networks [25, 48]. Nevertheless, CRF usually has time-consuming iterative inference routines and is sensitive to visual appearance changes [19].

  2. Image structure priors, such as contour cues [1, 9] and pixel affinity [19, 24, 28]. Additional model branches are required in such approaches [1, 9, 28] to extract contours or pixel affinity from images, and these image priors are then fused into the predicted maps in a carefully designed manner. Besides, additional memory is required to hold the pixel affinity matrix [19, 24].

  3. Generative adversarial network (GAN) [17, 27, 49]. Luc et al. [27] argue that the discriminator can detect and correct high-order inconsistencies between the ground truth map and the map produced by the generator. However, GAN models are often hard to train, and their performance may become unstable or even collapse [32]. Additionally, a large amount of memory is required to hold the deep generator and discriminator networks simultaneously. Even with such efforts, the improvement may be slight; e.g., Luc et al. [27] obtain only about a 0.25% improvement in mean intersection-over-union (mIoU) score on the PASCAL VOC 2012 dataset [13].

As discussed above, most existing approaches are resource-consuming or show minor improvements. Consequently, the top-performing models (e.g., DeepLabv3+ [10], MSCI [22], ExFuse [45]) typically do not integrate these structure modeling techniques. In contrast, we propose a simple yet effective method to encode the structure information of the objects, which only requires a small amount of additional computation during the training stage.

Inspired by the widely-used structural similarity (SSIM) index [40] in image quality assessment (IQA), we use the linear correlation between two images to quantify their structural similarity. We calculate the differences between the standard normalized results of two corresponding regions in the ground truth map and the predicted map; these differences measure the degree of linear correlation between the two regions and are taken as their structural difference. According to the magnitude of the structural difference, the regular cross entropy loss is reweighted to pay more attention to the inconsistent pixels between the two regions. Meanwhile, pixels with small structural differences are abandoned to make the training more efficient, which can also be regarded as an online hard example mining (OHEM) [31, 36, 38, 42] strategy. This yields our structural similarity loss (SSL) for semantic segmentation. With the SSL, the segmentation model learns to focus more on positions whose predictions lead to a low degree of linear correlation between two corresponding regions in the two maps. By minimizing the SSL over the whole map, the predicted map can achieve a strong structural similarity with the ground truth map. It is important to note that the SSL is supervised by the statistics of regions in the maps, and thus it is a region-wise loss rather than a pixel-wise loss.

The SSL has a few appealing properties over existing approaches. First, the loss provides an intuitive way to measure the structural similarity between two images. Second, it can be implemented easily in a convolutional manner and only requires a small amount of additional computation during training, so it can be effortlessly incorporated into any segmentation framework. Last but not least, the SSL is easier to train than a GAN and more efficient than a CRF, as it requires no additional networks or inference routines during training and testing.

The experimental results demonstrate that our proposed method can achieve substantial and consistent improvements in performance on the PASCAL VOC 2012 [13] and Cityscapes [12] datasets. We also empirically compare the SSL with some existing structure modeling methods [19, 20] on the PASCAL VOC dataset, where the SSL obtains better results.

2 Related Work

2.1 Semantic segmentation

Most state-of-the-art semantic segmentation approaches (e.g., PSPNet [47], DeepLabv3 [8], DeepLabv3+ [10], SDN [14], EncNet [44], ExFuse [45]) use a pixel-wise error as their optimization criterion. However, the pixel-wise error ignores the relationships between pixels. As discussed in Sec. 1, several ways to incorporate the structure information of the objects have been investigated, e.g., CRF-based methods [7, 20, 35, 25, 48], image structure priors based methods [1, 9, 19, 24, 28], and GANs [17, 27, 49]. Nevertheless, these methods are usually resource-consuming or show minor improvements.

2.2 Structural similarity index

The SSIM index [40] is a widely used full-reference IQA measure. The goal of a full-reference IQA algorithm is to measure the similarity between two images, where one of them is the reference image. For semantic segmentation, the ground truth map is the reference image, and we need to measure the similarity between the ground truth map and the predicted map. Given two images or two image patches $x$ and $y$, the SSIM index combines three components, namely a luminance term, a contrast term, and a structure term, as follows:

$$l(x,y) = \frac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \tag{2}$$
$$c(x,y) = \frac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \tag{3}$$
$$s(x,y) = \frac{\sigma_{xy} + C_3}{\sigma_x\sigma_y + C_3}, \tag{4}$$

where $\mu_x$, $\sigma_x^2$, and $\sigma_{xy}$ are the mean of $x$, the variance of $x$, and the covariance of $x$ and $y$, respectively. The small positive constants $C_1$, $C_2$, and $C_3$ are included to stabilize each term. The mean and variance can be considered as estimates of the luminance and contrast of the image, and the covariance measures the tendency of $x$ and $y$ to vary together, and is thus an indication of structural similarity [41].

Generally, the SSIM index is expressed as

$$\text{SSIM}(x,y) = [l(x,y)]^{\alpha} \cdot [c(x,y)]^{\beta} \cdot [s(x,y)]^{\gamma}, \tag{5}$$

where the parameters $\alpha$, $\beta$, and $\gamma$ are used to adjust the relative importance of the three components. If we set $\alpha = \beta = \gamma = 1$ and $C_3 = C_2/2$, we get the commonly used simplified form of the SSIM index [39, 40, 41]:

$$\text{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}. \tag{6}$$

Here $\text{SSIM}(x,y) \leq 1$, and $\text{SSIM}(x,y) = 1$ if and only if $x = y$ [40]. The second factor in Eq. (6) is also called the contrast-structure measure.

SSIM-based optimization. The SSIM index is also widely used as an optimization criterion in image processing tasks for the sake of better visual quality, e.g., image denoising [5, 6, 46], image downscaling [30], image deblurring [29], image compression [39], and image demosaicking and super-resolution [46]. Conventionally, the maximization of the SSIM index is cast as a minimization problem:

$$\mathcal{L}_{ssim} = 1 - \text{SSIM}(x,y). \tag{7}$$

SSIM-based optimization is an instance of the perceptual optimization framework, where the objective measure models the perceptual quality of an image [3].
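For reference, here is a minimal sketch of the simplified SSIM index (Eq. (6)) and its loss form (Eq. (7)) on flattened patches. The function names are our own, the stability constants are values commonly used for unit-range images, and the paper's method instead relies on the windowed statistics of Sec. 3.3:

```python
import torch

def ssim_index(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM index, Eq. (6), between two flattened patches.

    x, y: 1-D float tensors with values in [0, 1]; c1 and c2 are the
    stability constants commonly used for unit-range images.
    """
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(unbiased=False), y.var(unbiased=False)
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    luminance = (2 * mu_x * mu_y + c1) / (mu_x ** 2 + mu_y ** 2 + c1)
    cs = (2 * cov_xy + c2) / (var_x + var_y + c2)  # contrast-structure measure
    return luminance * cs

def ssim_loss(x, y):
    """SSIM maximization cast as a minimization problem, Eq. (7)."""
    return 1.0 - ssim_index(x, y)

x = torch.rand(64)
print(ssim_loss(x, x))        # ~0 for identical patches
print(ssim_loss(x, 1.0 - x))  # larger for negatively correlated patches
```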

3 Methods

In this section, we first analyze the SSIM index. Then we propose our structural similarity loss for semantic segmentation. Finally, we discuss the estimation of the local statistic and the overall objective function used for training.

3.1 Analysis of the SSIM index

The structure term (Eq. (4)) is the key to the SSIM index's ability to measure the structural similarity between two images. Ignoring the stabilizing constant $C_3$, the term is exactly the Pearson correlation coefficient between $x$ and $y$:

$$s(x,y) = \frac{\sigma_{xy}}{\sigma_x \sigma_y} = \rho_{xy}. \tag{8}$$

This suggests that the SSIM index uses the linear correlation between two images to quantify their structural similarity. Image processing systems [5, 29, 30, 39, 46] that use the SSIM index as an optimization criterion for better visual quality are actually attempting to achieve a high positive linear correlation between the generated image and the ground truth image. However, as the SSIM index is not convex [4] and a segmentation map is not the same as a real-world image, the SSIM index is not an appropriate optimization criterion for semantic segmentation.

We can get another simplified form of the SSIM index by subtracting the mean from the pixel values [4, 6]:

$$\text{SSIM}(\hat{x},\hat{y}) = \frac{2\hat{x}^{\top}\hat{y} + NC_2}{\|\hat{x}\|_2^2 + \|\hat{y}\|_2^2 + NC_2}, \tag{9}$$

where $\hat{x} = x - \mu_x$, $\hat{y} = y - \mu_y$, and $N$ is the number of entries of $x$. Here $\mu_{\hat{x}} = \mu_{\hat{y}} = 0$, which means $l(\hat{x},\hat{y}) = 1$ because the means of $\hat{x}$ and $\hat{y}$ are both zero. Equation (9) is quasiconvex when $\hat{x}^{\top}\hat{y} \geq 0$ [4], but this condition is not guaranteed to be met in the context of semantic segmentation: as the ground truth is in $\{0,1\}$ and the predicted probability is in $(0,1)$, we can only know that the elements of $\hat{x}$ and $\hat{y}$ are in the range $(-1,1)$. Moreover, quasiconvex optimization requires carefully designed optimization methods such as the bisection method [2, 5]. For these reasons, Eq. (9) is also not an appropriate optimization criterion for a deep neural network segmentation model.

3.2 Structural similarity loss

Based on the analysis in Sec. 3.1, we propose our structural similarity loss (SSL) for semantic segmentation to achieve a high positive linear correlation between the ground truth map and the predicted map.

For a ground truth map with shape $H \times W \times C$ ($H$, $W$, and $C$ are the height, the width, and the number of channels, respectively), we consider it as $C$ binary images. The structure comparison is conducted after standard normalization:

$$e_i = \left| \frac{y_i - \mu_y}{\sigma_y + C_4} - \frac{p_i - \mu_p}{\sigma_p + C_4} \right|, \tag{10}$$

where $\mu_y$ and $\sigma_y$ are the local mean and standard deviation of the ground truth $y$, respectively, the point $y_i$ is located at the center of the local region, $p_i$ is the predicted probability ($\mu_p$ and $\sigma_p$ are its local statistics), and $C_4$ is a stability factor. The total absolute error between two image patches measures their degree of linear correlation: the smaller the total error, the more likely the two image patches achieve a positive linear correlation, which means their structures are more likely to be the same.

Then we reweight the cross entropy loss according to $e_i$ and abandon the samples with small $e_i$:

$$w_i = e_i \cdot \mathbb{1}[e_i > \beta \cdot e_{max}], \tag{11}$$
$$l_i = w_i \cdot \ell_{bce}(y_i, p_i), \tag{12}$$

where $e_{max}$ is the theoretical maximum value of $e_i$, $\mathbb{1}[\cdot]$ equals one when the condition inside holds and zero otherwise, $\beta$ is a weight factor used to select the abandoned samples, and $\ell_{bce}$ is the sigmoid cross entropy loss. The factor $\beta$ is set to 0.1 in practice, an empirical value.

The influence of the reweighting strategy is shown in Fig. 2: the inconsistent pixels between the two maps get more attention after reweighting. As the maps are standardly normalized locally, the SSL is supervised by the local statistics, and thus it is a region-wise loss. Moreover, Janocha et al. [18] experimentally demonstrate that the log loss is a very appropriate choice when training a deep neural network classifier, so we still use it as the optimization criterion, and the error $e_i$ is used as a constant weighting coefficient to make the model easy to optimize in practice.

Generally, there are millions or even tens of millions of pixels in a mini-batch. At the late stage of training, the segmentation model can usually attain a high pixel accuracy (e.g., 96%) yet a relatively low mean intersection-over-union (mIoU) score (e.g., 78%). This phenomenon indicates that the easily classified samples dominate the loss and make the training inefficient [23, 43]. Therefore, we consider the samples with a small $e_i$ as easy examples [36] and abandon them during training. The hard examples (those with a large error $e_i$), which lead to a low degree of linear correlation between the ground truth and the prediction, then get more attention. This is the OHEM strategy [31, 36, 38, 42].

The total SSL over the mini-batch is:

$$\mathcal{L}_{ssl} = \frac{1}{M} \sum_{i} l_i, \tag{13}$$

where $M$ is the number of hard examples. As the structure comparison is performed between each ground truth binary image and the corresponding probability map independently, we consider each point in a binary image as a sample, and points in different binary images are independent. These factors determine the manner in which we calculate the number of hard examples and the choice of the sigmoid cross entropy loss.
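The following sketch assembles Eqs. (10)-(13) under the reconstruction above. It assumes one-hot ground truth, computes the local statistics with a Gaussian window via depthwise convolution (see Sec. 3.3), and treats $e_{max}$ as a placeholder constant; the helper names and masking details are our own illustration, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def local_stats(x, kernel):
    """Local mean and std per channel via a depthwise Gaussian convolution."""
    c = x.shape[1]
    k = kernel.expand(c, 1, kernel.shape[-2], kernel.shape[-1])
    pad = kernel.shape[-1] // 2
    mu = F.conv2d(x, k, padding=pad, groups=c)
    var = F.conv2d(x * x, k, padding=pad, groups=c) - mu * mu
    return mu, var.clamp(min=0).sqrt()

def ssl_loss(logits, onehot, kernel, beta=0.1, c4=0.01, e_max=2.0):
    """Structural similarity loss, Eqs. (10)-(13) (illustrative sketch).

    logits: (B, C, H, W) raw scores; onehot: (B, C, H, W) binary ground truth.
    e_max is a placeholder constant here; the paper derives it from Eq. (16).
    """
    p = torch.sigmoid(logits)                       # one-vs-rest probabilities
    mu_y, sd_y = local_stats(onehot, kernel)
    mu_p, sd_p = local_stats(p, kernel)
    # Eq. (10): absolute difference of the standard-normalized maps,
    # detached so that it acts as a constant weighting coefficient.
    e = ((onehot - mu_y) / (sd_y + c4) - (p - mu_p) / (sd_p + c4)).abs().detach()
    # Eq. (11): keep only the hard examples and use e as the weight.
    hard = (e > beta * e_max).float()
    w = e * hard
    # Eq. (12): reweighted sigmoid cross entropy per position and channel.
    bce = F.binary_cross_entropy_with_logits(logits, onehot, reduction="none")
    # Eq. (13): average over the M hard examples.
    return (w * bce).sum() / hard.sum().clamp(min=1)

# 3x3 Gaussian window with sigma = 1.5, normalized to unit sum.
xs = torch.arange(3.0) - 1.0
g = torch.exp(-xs ** 2 / (2 * 1.5 ** 2))
kernel = (g[:, None] * g[None, :]).view(1, 1, 3, 3)
kernel = kernel / kernel.sum()

logits = torch.randn(2, 21, 33, 33)
onehot = F.one_hot(torch.randint(0, 21, (2, 33, 33)), 21).permute(0, 3, 1, 2).float()
print(ssl_loss(logits, onehot, kernel))
```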

(a) Ground truth
(b) Prediction
(c) Normalized ground truth
(d) Normalized prediction
Figure 2: The total binary cross entropy loss between (a) and (b) is about , and the loss of the center pixel accounts for about of the total loss. After normalization and reweighting, the total loss between (c) and (d) is about , and the loss of the center pixel accounts for about of the total loss. The inconsistent pixels between two maps get more attention after normalization and reweighting.

With the SSL, the segmentation model learns to pay more attention to predictions which lead to a low degree of linear correlation between two regions. Thus the prediction can achieve a strong structural similarity with the ground truth by minimizing the SSL over the whole map.

3.3 Estimation of the local statistic

The local statistics $\mu_x$ and $\sigma_x$ are computed within a local square window of a certain size, which moves pixel-by-pixel over the entire image. This can be easily implemented in a convolutional manner. Following the SSIM index [40], we use a circular-symmetric gaussian weighting function $\mathbf{w} = \{w_j\}$ (normalized to unit sum, $\sum_j w_j = 1$) with a standard deviation of 1.5 samples to estimate the local statistics:

$$\mu_x = \sum_j w_j x_j, \tag{14}$$
$$\sigma_x = \Big(\sum_j w_j (x_j - \mu_x)^2\Big)^{1/2}. \tag{15}$$

With the gaussian window approach, the normalized segmentation map exhibits a locally isotropic property [40], and points closer to the center of the window contribute more information. As the ground truth $y$ is in $\{0,1\}$, we have $y_j^2 = y_j$. Substituting this into Eq. (15), we get $\sigma_y = \sqrt{\mu_y - \mu_y^2}$, so the normalized ground truth value is:

$$\tilde{y} = \frac{y - \mu_y}{\sigma_y + C_4} = \frac{y - \mu_y}{\sqrt{\mu_y - \mu_y^2} + C_4}. \tag{16}$$
Figure 3: Statistics of the normalized values during a training procedure. The standard deviation of the gaussian kernel is 1.5, and the region size is 3. The data is smoothed for better visual quality. The black line and the dark green line represent the extreme values of the normalized ground truth $\tilde{y}$ and the normalized prediction $\tilde{p}$, respectively; the extreme value of $\tilde{p}$ is clearly less than the extreme value of $\tilde{y}$ at the same time. The uppermost line represents the maximum practical value of the error $e$ during training. The middle lines represent the mean and median of $e$; they are both close to 0, and the mean is larger than the median at the same time. Best viewed in color with 300% zoom.

Taking the derivative of $\tilde{y}$ with respect to $\mu_y$, we can easily find that $\tilde{y}$ is decreasing on the interval $(0,1)$. Thus, $\tilde{y}$ attains its maximum if and only if $y = 1$ and the values of all the other pixels in the window are 0; in the opposite case, $\tilde{y}$ attains its minimum. In practice, the $e_{max}$ in Eq. (11) is set to be the difference between the theoretical maximum and minimum. The distribution of the predicted probability is not as extreme as the distribution of the ground truth, so the maximum of the absolute normalized probability is usually smaller than the theoretical extremes. This is shown in Fig. 3.
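The identity $\sigma_y = \sqrt{\mu_y - \mu_y^2}$ for binary patches and the location of the extremes are easy to verify numerically; below is a small self-contained check of our own, with an illustrative 3x3 Gaussian window and $C_4 = 0.01$:

```python
import torch

# 3x3 Gaussian window, sigma = 1.5, normalized to unit sum.
xs = torch.arange(3.0) - 1.0
g = torch.exp(-xs ** 2 / (2 * 1.5 ** 2))
w = g[:, None] * g[None, :]
w = w / w.sum()

# For a binary patch, sigma equals sqrt(mu - mu^2) (Eq. (15) with y_j^2 = y_j).
y = (torch.rand(3, 3) > 0.5).float()
mu = (w * y).sum()
sigma = ((w * (y - mu) ** 2).sum()).sqrt()
assert torch.allclose(sigma, (mu - mu * mu).clamp(min=0).sqrt(), atol=1e-6)

# Eq. (16): the normalized center value is maximal when the center pixel is 1
# and every other pixel in the window is 0 (and minimal in the opposite case).
c4 = 0.01
def normalized_center(patch):
    m = (w * patch).sum()
    return (patch[1, 1] - m) / ((m - m * m).clamp(min=0).sqrt() + c4)

lone_one = torch.zeros(3, 3)
lone_one[1, 1] = 1.0
print(normalized_center(lone_one))        # theoretical maximum
print(normalized_center(1.0 - lone_one))  # theoretical minimum
```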

As pointed out by Wang et al. [40], the statistical features of an image are usually highly spatially non-stationary. Furthermore, the global mean and variance are rotation invariant. To better capture the local details of the image, we apply the SSL locally rather than globally.

Finally, we get the overall objective function:

$$\mathcal{L} = \mathcal{L}_{ce} + \lambda \mathcal{L}_{ssl}, \tag{17}$$

where $\lambda$ is a weight factor, and we simply set $\lambda = 1$ in practice. The role of the pixel-wise cross entropy loss is like that of the luminance term (Eq. (2)) of the SSIM index: both measure the similarity of the pixel intensities between two images. The role of $\mathcal{L}_{ssl}$ is like that of the structure term (Eq. (4)): both measure the structural similarity between two images.

To be consistent with the SSL, we use the sigmoid cross entropy loss rather than the multiclass softmax cross entropy loss. That is, we treat semantic segmentation as multiple one-vs.-rest binary classification problems rather than a single multi-class classification problem, and we train multiple binary classifiers jointly. Liang et al. [21] argue that the class competition introduced by the softmax cross entropy loss hinders the model's capability of learning a unified model from diverse label annotations, where only some of the concepts belonging to one super-class are visible. EncNet [44] uses the sigmoid cross entropy loss to regularize the training. In practice, we get a slight improvement with the sigmoid cross entropy loss on the PASCAL VOC 2012 dataset [13], but the loss does not work on the Cityscapes dataset [12].
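Putting the pieces together, a hedged sketch of the overall objective in Eq. (17); it reuses the ssl_loss helper and Gaussian kernel sketched after Eq. (13), and the helper name overall_loss is our own:

```python
import torch.nn.functional as F

def overall_loss(logits, labels, kernel, num_classes=21, lam=1.0):
    """Overall objective, Eq. (17): sigmoid cross entropy + lambda * SSL.

    logits: (B, C, H, W); labels: (B, H, W) integer class indices.
    Reuses the ssl_loss helper sketched after Eq. (13).
    """
    onehot = F.one_hot(labels, num_classes).permute(0, 3, 1, 2).float()
    # One-vs-rest sigmoid cross entropy instead of softmax cross entropy.
    bce = F.binary_cross_entropy_with_logits(logits, onehot)
    return bce + lam * ssl_loss(logits, onehot, kernel)
```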

4 Experiments

4.1 Experimental setup

Models. We choose DeepLabv3 [8] and DeepLabv3+ [10] as our base models. The backbone networks of the DeepLabv3 and DeepLabv3+ models are ResNet-101 [16] and Xception-65 [11], respectively. The backbone networks are pretrained on ImageNet [33] (checkpoints from https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md).

Datasets. We evaluate our method on the PASCAL VOC 2012 [13] and Cityscapes [12] datasets. The original PASCAL VOC 2012 dataset contains 1 464 (train), 1 449 (val), and 1 456 (test) images, covering 20 foreground object classes and one background class. Cityscapes is a large-scale dataset containing high quality pixel-level annotations of 5 000 images (2 975, 500, and 1 525 for the training, validation, and test sets, respectively) and covers 19 object classes.

Method backg. aero. bike bird boat bottle bus car cat chair cow d.table dog horse mbike person p.plant sheep sofa train tv mIoU (%)
CE 94.07 85.54 52.46 81.18 67.84 77.63 91.42 88.00 91.27 35.95 83.44 61.47 86.80 84.48 85.67 82.28 64.19 85.97 53.63 84.02 69.97 76.54
SSL(3) 94.43 88.58 56.32 85.52 67.22 76.07 90.53 85.92 90.68 35.92 81.80 62.76 86.80 86.47 86.42 84.32 65.25 86.65 56.36 81.80 72.55 77.26
CE(+) 94.54 91.96 58.00 88.07 66.45 78.17 93.51 88.36 91.34 37.21 84.90 60.50 87.51 82.06 84.64 84.21 60.79 84.76 56.45 87.59 73.70 77.84
SSL(+3) 95.25 93.38 63.91 82.46 70.83 76.47 94.75 88.79 93.83 35.74 87.04 62.35 88.99 88.69 87.00 86.80 64.39 88.47 60.25 90.24 70.98 79.55
Table 1: Per-class results on the PASCAL VOC 2012 test set. The CE means the base model is DeepLabv3 and the loss is softmax cross entropy. The SSL(+3) indicates that the base model is DeepLabv3+ and the loss is the SSL with region size 3.

Learning rate and training steps. We use the "poly" learning rate policy, where the initial learning rate is multiplied by $(1 - \frac{iter}{max\_iter})^{power}$ with $power = 0.9$. For the PASCAL VOC 2012 dataset, the model is first trained on the trainaug [15] set, which contains 10 582 images, with the initial learning rate 0.007; then we freeze the batch normalization parameters and finetune the model on the train set with a smaller initial learning rate of 0.0005. For the Cityscapes dataset, we train the model on the train set for about 90K iterations with the initial learning rate 0.007 and no finetuning procedure. We adopt the slow start strategy [34], where a small learning rate is used for the first 100 steps: 0.001 for the first stage and 0.0001 for the finetune stage.
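A small sketch of the "poly" schedule with the slow start strategy described above; the function itself is illustrative, while the constants follow this section:

```python
def poly_learning_rate(base_lr, step, max_steps, power=0.9,
                       slow_start_steps=100, slow_start_lr=0.001):
    """'Poly' learning rate schedule with the slow start strategy [34]."""
    if step < slow_start_steps:
        return slow_start_lr                      # small constant LR at the start
    return base_lr * (1.0 - step / max_steps) ** power

# Usage: a Cityscapes-style run with base LR 0.007 over 90K iterations.
for step in (0, 99, 100, 45_000, 89_999):
    print(step, round(poly_learning_rate(0.007, step, 90_000), 6))
```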

Crop size and output stride. For the PASCAL VOC 2012 dataset, when using DeepLabv3 as the base model, the crop size is 513 and the batch size is 12; with DeepLabv3+, we change the crop size to 393. For the Cityscapes dataset, when using DeepLabv3 as the base model, the crop size is 669 and the batch size is 8. We choose small crop sizes and batch sizes due to the limitations of our experiment platform, two GTX 1080 Ti (11GB memory) GPUs; as the DeepLabv3+ model adds a decoder module to the DeepLabv3 model, it needs more memory to hold the network. A larger crop size or batch size can yield better performance [8]. The output stride is always 16 during training and inference. It is also worth noting that we upscale the logits back to the input image resolution rather than downsampling the ground truth labels for training.

Data augmentation. We apply data augmentation by randomly scaling the input images (from 0.5 to 2.0 on PASCAL VOC, 0.75 to 1.25 on Cityscapes) and randomly left-right flipping during training. After training, we run inference on the original images without any special inference strategy. The evaluation metric is the mean intersection-over-union (mIoU) score. The remaining settings are the same as in [8].

4.2 Results on PASCAL VOC 2012 dataset

4.2.1 Effectiveness of the SSL

Model Method Size mIoU (%)
DeepLabv3 CE 76.33
BCE 76.80
SSIM - Eq. (7) 3 77.16
SSIM - Eq. (7) 11 76.25
SSIM - Eq. (9) 3 77.02
SSIM - Eq. (9) 11 76.78
SSL 1 76.66
SSL 11 77.70
SSL 9 77.87
SSL 7 78.34
SSL 5 77.69
SSL 3 78.46
DeepLabv3+ CE 78.12
BCE 78.31
SSL 1 79.04
SSL 11 79.22
SSL 7 79.88
SSL 3 80.63
Table 2: Evaluation of the SSL on the PASCAL VOC 2012 val set. CE and BCE are the softmax and sigmoid cross entropy losses respectively. The size is the region size. When using the SSIM, we apply the sigmoid operation.

We first evaluate the SSL on the PASCAL VOC 2012 val set. The results are shown in Tab. 2. With DeepLabv3 and DeepLabv3+ as base models, the SSL can improve the performance by 2.13% and 2.51% on the val set respectively. With similar settings, the DeepLabv3 and DeepLabv3+ can achieve 77.21% and 78.85% on the val set respectively in the original papers [8, 10]. They set the batch size to be 16, crop size to be 513, and the models are only trained on the trainaug set. Our batch size is 12, and the crop size is 513 or 393.

As shown in Tab. 2, the performance of the SSL with region size 3 or 7 is clearly better than that of the SSL with region size 1; when the region size is 1, the SSL degenerates into a pixel-wise loss. This demonstrates that the SSL does encode the local structure information of the objects. Compared to the SSIM (Eq. (7) and Eq. (9)), the SSL shows better performance, which is consistent with our analysis in Sec. 3.1: the SSIM index is not an appropriate optimization criterion for semantic segmentation. From Tab. 2, we can also find that the SSL with a smaller region size is more likely to get better performance. This may be because two larger image patches are more likely to have similar statistics at the late stage of the training, which leads to small differences between the two normalized patches and thus makes the training inefficient.

The per-class results on the test set are shown in Tab. 1, as given by the official evaluation server (http://host.robots.ox.ac.uk:8080/anonymous/P25MWR.html). With DeepLabv3 and DeepLabv3+ as base models, the SSL improves the performance by 0.72% and 1.71% on the test set, respectively.

Size σ β OHEM Reweight mIoU (%)
3 1.5 0.10 ✓ - 77.66
3 1.5 0.10 - ✓ 77.61
3 1.0 0.10 ✓ ✓ 77.41
3 2.0 0.10 ✓ ✓ 77.92
3 2.5 0.10 ✓ ✓ 77.99
3 3.0 0.10 ✓ ✓ 78.36
3 3.5 0.10 ✓ ✓ 77.89
3 1.5 0.06 ✓ ✓ 77.39
3 1.5 0.08 ✓ ✓ 77.84
3 1.5 0.09 ✓ ✓ 77.88
3 1.5 0.11 ✓ ✓ 78.02
3 1.5 0.12 ✓ ✓ 78.09
3 1.5 0.14 ✓ ✓ 77.22
3 1.5 0.10 ✓ ✓ 78.46
Table 3: The influence of the different components of the SSL. The base model is DeepLabv3. σ is the standard deviation of the gaussian kernel; the larger σ is, the closer the gaussian distribution is to the uniform distribution. β is the factor in Eq. (11) used to select the hard examples. A check mark indicates that the corresponding strategy is enabled.

4.2.2 Ablation study

Furthermore, we study the influence of the OHEM and reweighting strategies, the standard deviation $\sigma$ of the gaussian kernel, and the factor $\beta$ in Eq. (11) used to choose the hard examples. When studying the influence of the OHEM, we set all the weights $w_i$ to 1. When studying the effect of the reweighting strategy, we do not abandon the easy examples and only reweight the cross entropy loss. The results on the PASCAL VOC 2012 val set are shown in Tab. 3.

From Tab. 3, we find that both the OHEM and reweighting strategies are essential: when only one of the two strategies is applied, the performance does not obtain a high improvement. The SSL gets the best result when the standard deviation of the gaussian kernel is 1.5. We further record the proportion of the hard examples to the population with different $\beta$, as shown in Fig. 4. The number of hard examples is clearly sensitive to the value of $\beta$, so the choice of $\beta$ is vital to the SSL. The model gets the best performance when $\beta = 0.1$.

Figure 4: The proportion of the hard examples to the population with different $\beta$. The region size is 3, and the standard deviation of the gaussian kernel is 1.5. It is easy to see that the proportion is sensitive to the value of $\beta$. Best viewed in color with 300% zoom.

4.2.3 Comparison to prior work

In this section, we compare the SSL with the CRF [20] and the affinity field loss [19] based on the DeepLabv3 model. We also show the results of GAN [27].

CRF. Following DeepLabv2 [7], we use the fully connected CRF introduced in [20] as a post-processing step and use the negative logarithm of the predicted probability as the unary potential. We use the default settings of the official public code (http://www.philkr.net/2011/12/01/nips/) and its Python wrapper (https://github.com/lucasb-eyer/pydensecrf).

As a post-processing step, the CRF needs additional inference time. With 1, 2, and 5 CRF inference steps, the additional time for each image is about 0.4s, 0.5s, and 0.75s, respectively, which is intolerable in some real-time application scenarios.
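For illustration, a typical way to apply the fully connected CRF [20] with the pydensecrf wrapper looks like the sketch below; the pairwise parameters are common example values for the wrapper, not values reported in this paper:

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, n_iters=5):
    """Fully connected CRF post-processing [20].

    image: (H, W, 3) uint8 RGB image; probs: (C, H, W) softmax probabilities.
    """
    c, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, c)
    # Unary potential: the negative logarithm of the predicted probability.
    d.setUnaryEnergy(unary_from_softmax(probs.astype(np.float32)))
    # Smoothness and appearance (color-dependent) pairwise terms.
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=80, srgb=13, rgbim=np.ascontiguousarray(image),
                           compat=10)
    q = d.inference(n_iters)
    return np.argmax(np.array(q).reshape(c, h, w), axis=0)  # (H, W) label map
```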

Affinity field loss. The affinity field loss [19] exploits the relationships between pairs of pixels. The loss imposes a grouping force on the neighbour pixels which belong to the same class to make their predictions more consistent, and a separating force on the neighbour pixels which belong to different classes to make their predictions more inconsistent.

We reproduce the affinity field loss according to the official implementation (https://github.com/twke18/Adaptive_Affinity_Fields). However, the loss adopts an 8-neighbour strategy, so we need to hold 8 copies of the ground truth and predicted maps in memory when calculating the loss. For the PASCAL VOC dataset, which contains 21 object classes, if the value of a pixel is 32 bits long, the size of the two tensors is about 3.95GB; thus the loss is memory-consuming. Ke et al. [19] downsample the label map and use 4 GTX Titan X GPUs (12 GB memory) to hold 16 images in a batch with a crop size of 480. With only two GTX 1080 Ti GPUs, we set the crop size to 321 and the batch size to 16 for all methods. Other settings are the same as described in Sec. 4.1.

Method road swalk build. wall fence pole tlight tsign veg. terrain sky person rider car truck bus train mbike bike mIoU (%)
CE 97.97 82.97 91.16 44.91 58.15 54.19 65.25 74.23 91.36 59.81 93.73 79.36 62.34 94.40 71.32 81.87 60.97 62.30 74.52 73.73
SSL(3) 97.95 83.20 91.56 53.14 58.07 54.86 66.85 75.88 91.68 61.55 93.81 79.77 62.17 94.75 79.12 83.31 63.68 60.71 75.21 75.12
Table 4: Per-class results of the SSL with DeepLabv3 on the Cityscapes val set. SSL(3) indicates the region size is 3.
Model Method Size mIoU (%)
Luc et al. [27] Base 71.79
LargeFOV 72.04
DeepLabv3 CE 74.83
BCE 75.24
CE + CRF-1 75.65
CE + CRF-2 75.00
CE + CRF-5 73.58
Affinity 3 75.29
SSL 11 75.89
SSL 7 76.08
SSL 3 76.76
Table 5: Comparison to prior work on the PASCAL VOC 2012 val set. Luc et al. [27] use the Base as their segmentation network, and the LargeFOV is an additional network used as the discriminator when applying the adversarial training strategy. When using DeepLabv3 as the base model, the crop size is 321. CRF-X means that we run inference with X iteration steps when applying the CRF. The size of the affinity field loss is the neighbour size used to choose the pixel pairs, and 3 is the default choice in [19]. The SSL gets better performance than the affinity field loss and the CRF, while requiring less additional memory and no extra inference time.

GAN. Luc et al. [27] get only about a 0.25% improvement on the PASCAL VOC 2012 val set, which is not effective enough. Additionally, pix2pix [17] and CycleGAN [49] can only achieve about 35.00% and 16.00% mIoU scores on the Cityscapes val set, respectively. Limited by GPU memory, it is hard for us to add an additional deep discriminator network to the DeepLabv3 model, so we directly quote the experimental data from the paper [27].

Model Method Size mIoU (%)
DeepLabv3 CE 73.73
BCE 73.58
SSL 11 74.52
SSL 9 74.29
SSL 7 74.26
SSL 5 74.27
SSL 3 75.12
Table 6: Results of the SSL on Cityscapes val set.

All experimental results are shown in Tab. 5. The CRF, the affinity field loss, and the SSL improve the performance of the model by 0.82%, 0.46%, and 1.93%, respectively. Compared to the CRF and the affinity field loss, the SSL achieves better performance while being less time-consuming than the CRF and less memory-consuming than the affinity field loss. Compared to the GAN, the SSL requires no additional networks and is evidently more effective. With these appealing properties, any segmentation framework can incorporate the SSL effortlessly.

4.3 Results on Cityscapes dataset

Moreover, we evaluate the SSL on the Cityscapes dataset with the DeepLabv3 model. The images in the Cityscapes dataset have a size of 2048×1024. As we can only apply a crop size of about 489 and a batch size of 8 with the DeepLabv3+ model on two GTX 1080 Ti GPUs, the variation between mini-batches is large, which makes the training unstable. Thus we do not evaluate our methods with the DeepLabv3+ model on Cityscapes.

The results are shown in Tab. 6. With DeepLabv3 as the base model, the SSL can improve the performance by 1.39% on the Cityscapes val set. The per-class results are shown in Tab. 4. With similar settings, the DeepLabv3 can achieve 77.23% mIoU on the val set in the original paper [8]. They set the batch size to be 16, and the crop size to be 769. Our batch size is 8, and the crop size is 669.

5 Conclusion

In this work, we propose a correlation maximized structural similarity loss (SSL) for semantic segmentation. We use the linear correlation between the ground truth map and the predicted map to quantify their structural similarity. The goal of the SSL is to pay more attention to predictions that lead to a low degree of linear correlation between the two maps, so the model can achieve a strong structural similarity between them by minimizing the SSL. The SSL is simple yet effective and only requires a small amount of additional computation during the training stage. Moreover, we experimentally demonstrate that our method achieves substantial and consistent improvements in performance on standard benchmark datasets.

Acknowledgments

This work was supported in part by The National Key Research and Development Program of China (Grant Nos: 2018AAA0101400), in part by The National Nature Science Foundation of China (Grant Nos: 61936006), in part by the Alibaba-Zhejiang University Joint Institute of Frontier Technologies.

References

  • [1] G. Bertasius, J. Shi, and L. Torresani. Semantic segmentation with boundary neural fields. In CVPR, 2016.
  • [2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
  • [3] D. Brunet, S. S. Channappayya, Z. Wang, E. R. Vrscay, and A. C. Bovik. Optimizing Image Quality. 2018.
  • [4] D. Brunet, E. R. Vrscay, and Z. Wang. On the mathematical properties of the structural similarity index. TIP, 2012.
  • [5] S. S. Channappayya, A. C. Bovik, C. Caramanis, and R. W. Heath. Design of linear equalizers optimized for the structural similarity index. TIP, 2008.
  • [6] S. S. Channappayya, A. C. Bovik, and R. W. Heath Jr. A linear estimator optimized for the structural similarity index and its application to image denoising. In ICIP, 2006.
  • [7] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. PAMI, 2018.
  • [8] L. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. CoRR, abs/1706.05587, 2017.
  • [9] L.-C. Chen, J. T. Barron, G. Papandreou, K. Murphy, and A. L. Yuille. Semantic image segmentation with task-specific edge detection using cnns and a discriminatively trained domain transform. In CVPR, 2016.
  • [10] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018.
  • [11] F. Chollet. Xception: Deep learning with depthwise separable convolutions. In CVPR, 2017.
  • [12] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
  • [13] M. Everingham, L. J. V. Gool, C. K. I. Williams, J. M. Winn, and A. Zisserman. The pascal visual object classes (VOC) challenge. IJCV, 2010.
  • [14] J. Fu, J. Liu, Y. Wang, and H. Lu. Stacked deconvolutional network for semantic segmentation. CoRR, abs/1708.04943, 2017.
  • [15] B. Hariharan, P. Arbelaez, L. Bourdev, S. Maji, and J. Malik. Semantic contours from inverse detectors. In ICCV, 2011.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [17] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
  • [18] K. Janocha and W. M. Czarnecki. On loss functions for deep neural networks in classification. CoRR, abs/1702.05659, 2017.
  • [19] T.-W. Ke, J.-J. Hwang, Z. Liu, and S. X. Yu. Adaptive affinity fields for semantic segmentation. In ECCV, 2018.
  • [20] P. Krähenbühl and V. Koltun. Efficient inference in fully connected crfs with gaussian edge potentials. In Advances in Neural Information Processing Systems, 2011.
  • [21] X. Liang, H. Zhou, and E. Xing. Dynamic-structured semantic propagation network. In CVPR, 2018.
  • [22] D. Lin, Y. Ji, D. Lischinski, D. Cohen-Or, and H. Huang. Multi-scale context intertwining for semantic segmentation. In ECCV, 2018.
  • [23] T. Lin, P. Goyal, R. B. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. In ICCV, 2017.
  • [24] S. Liu, S. De Mello, J. Gu, G. Zhong, M.-H. Yang, and J. Kautz. Learning affinity via spatial propagation networks. In Advances in Neural Information Processing Systems. 2017.
  • [25] Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang. Deep learning markov random field for semantic segmentation. PAMI, 2018.
  • [26] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
  • [27] P. Luc, C. Couprie, S. Chintala, and J. Verbeek. Semantic segmentation using adversarial networks. CoRR, abs/1611.08408, 2016.
  • [28] M. Maire, T. Narihira, and S. X. Yu. Affinity CNN: learning pixel-centric pairwise relations for figure/ground embedding. In CVPR, 2016.
  • [29] D. Otero and E. R. Vrscay. Solving optimization problems that employ structural similarity as the fidelity measure. In IPCV, 2014.
  • [30] A. C. Öztireli and M. H. Gross. Perceptually based downscaling of images. TOG, 2015.
  • [31] T. Pohlen, A. Hermans, M. Mathias, and B. Leibe. Full-resolution residual networks for semantic segmentation in street scenes. In CVPR, 2017.
  • [32] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
  • [33] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F. Li. Imagenet large scale visual recognition challenge. IJCV, 2015.
  • [34] T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. In ICML, 2013.
  • [35] F. Shen, R. Gan, S. Yan, and G. Zeng. Semantic segmentation via structured patch prediction, context crf and guidance crf. In CVPR, 2017.
  • [36] A. Shrivastava, A. Gupta, and R. B. Girshick. Training region-based object detectors with online hard example mining. In CVPR, 2016.
  • [37] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
  • [38] K. K. Sung. Learning and example selection for object and pattern detection. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1995.
  • [39] Z. Wang and A. C. Bovik. Mean squared error: Love it or leave it? a new look at signal fidelity measures. IEEE Signal Processing Magazine, 2009.
  • [40] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. TIP, 2004.
  • [41] Z. Wang, E. P. Simoncelli, and A. C. Bovik. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems and Computers, 2003.
  • [42] Z. Wu, C. Shen, and A. van den Hengel. Bridging category-level and instance-level semantic image segmentation. CoRR, abs/1605.06885, 2016.
  • [43] X. Li, Z. Liu, P. Luo, C. C. Loy, and X. Tang. Not all pixels are equal: Difficulty-aware semantic segmentation via deep layer cascade. In CVPR, 2017.
  • [44] H. Zhang, K. Dana, J. Shi, Z. Zhang, X. Wang, A. Tyagi, and A. Agrawal. Context encoding for semantic segmentation. In CVPR, 2018.
  • [45] Z. Zhang, X. Zhang, C. Peng, X. Xue, and J. Sun. Exfuse: Enhancing feature fusion for semantic segmentation. In ECCV, 2018.
  • [46] H. Zhao, O. Gallo, I. Frosio, and J. Kautz. Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging, 2017.
  • [47] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In CVPR, 2017.
  • [48] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. S. Torr. Conditional random fields as recurrent neural networks. In ICCV, 2015.
  • [49] J. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.