Semantic segmentation is an important task for image understanding and object localization. With the development of the fully convolutional network (FCN) [1], there has been significant progress in the field using end-to-end trainable networks. Advances in deep convolutional neural networks (CNNs) such as the VGGNet [2], Inception Net [3], and Residual Net [4] push semantic segmentation performance even higher via comprehensive learning of high-level semantic features. Besides deeper networks, other ideas have been proposed to enhance semantic segmentation performance. For example, low-level features can be explored along with high-level semantic features for performance improvement. Holistic image understanding can also be used to boost performance [6, 7, 8]. Furthermore, one can guide network learning by generating highlighted targets [9, 19, 20, 18, wu2016high, 26]. Generally speaking, a CNN can learn the semantic segmentation task more effectively under specific guidance.
In spite of these developments, existing methods focus on understanding features and predicting the target class; there is no mechanism to explicitly teach the network to learn the difference between classes. High-level semantic features are sometimes shared across different classes (or between an object and its background) due to a certain level of visual similarity among classes in the training set. This yields confusing results in regions near the boundary between two objects (or between an object and the background), since the responses to both are equally strong. Another problem is caused by weak responses of the target object due to a complicated mixture of objects and background. It is desirable to develop a mechanism that identifies these regions and amplifies the weak responses to capture the target object. We are not aware of any effective solution to these two problems to date. In this work, we propose a new semantic segmentation architecture called the Reverse Attention Network (RAN) to achieve these two goals. A conceptual overview of the RAN system is shown in Fig. 1.
The RAN uses two separate branches to learn features and generate predictions that are and are not associated with a target class, respectively. To further highlight the knowledge learned from the reverse class, we design a reverse attention structure that generates a per-class mask to amplify the reverse-class response in the confused regions. The predictions of all three branches are finally fused to yield the final prediction for the segmentation task. We build the RAN upon the state-of-the-art DeepLabv2-LargeFOV with the ResNet-101 structure and conduct comprehensive experiments on the PASCAL VOC, PASCAL Person-Part, PASCAL Context, NYU-Depth2, and MIT ADE20K datasets. Consistent and significant improvements are observed across all datasets. We implement the proposed RAN in Caffe, and the trained network structure and models are available to the public at https://drive.google.com/drive/folders/0By2w_A-aM8Rzbllnc3JCQjhHYnM?usp=sharing.
2 Related Work
A brief review of recent progress in semantic segmentation is given in this section. Semantic segmentation combines the pixel-wise localization task [11, 12] with the high-level recognition task. Recent developments in deep CNNs [13, 2, 3] enable comprehensive learning of semantic features using large amounts of image data [14, lin2014microsoft, deng2009imagenet]. The FCN [1] allows effective end-to-end learning by converting fully-connected layers into convolutional layers.
Performance improvements have been achieved by introducing several new ideas. One is to integrate low- and high-level convolutional features in the network. This is motivated by the observation that pooling and stride operations offer a larger field of view (FOV) and extract semantic features with fewer convolutional layers, yet they decrease the resolution of the response maps and thus suffer from inaccurate localization. The combination of segmentation results from multiple layers was proposed in [1, shuai2016improving]. Fusing multi-level features before the decision gives an even better performance, as shown in [15, 6]. Another idea, presented in [16], is to adopt a dilation architecture to increase the resolution of the response maps while preserving large FOVs. In addition, both local- and long-range conditional random fields can be used to refine segmentation details, as done in [17, 21]. Recent advances in the RefineNet [6] and the PSPNet [7] show that a holistic understanding of the whole image [8] can further boost segmentation performance.
Another class of methods focuses on guiding the learning procedure with highlighted knowledge. For example, hard-case learning was adopted in [18] to guide the network to focus on less confident cases. Besides, spatial information can be explored to enhance features by considering coherence with neighboring patterns [9, 19, 20]. Other information, such as object boundaries, can also be exploited to guide the segmentation toward more accurate object shape prediction [21, huang2016object].
All the above-mentioned methods strive to improve features and decision classifiers for better segmentation performance. They attempt to capture generative object-matching templates across the training data. However, their classifiers simply look for the most likely patterns under the guidance of the cross-entropy loss in the softmax-based output layer. This methodology overlooks the characteristics of less common instances and can be confused by similar patterns from different classes. In this work, we address this shortcoming by letting the network learn what does not belong to the target class, as well as a better separation of co-existing background and objects.
3 Proposed Reverse Attention Network (RAN)
3.1 Motivation

Our work is motivated by observations on FCN learning, as given in Fig. 2, where an image is fed into an FCN. The convolutional layers of an FCN can be viewed as two parts: the convolutional feature network (usually conv1-conv5) and the class-oriented convolutional layer (CONV), which relates the semantic features to pixel-wise classification results. Without loss of generality, we use an image that contains a dog and a cat, as illustrated in Fig. 2, as an example in our discussion.
The segmentation result is shown in the lower-right corner of Fig. 2, where the dog's lower body in the circled area is misclassified as part of a cat. To explain this phenomenon, we show the heat maps (i.e., the corresponding filter responses) for the dog and cat classes, respectively. It turns out that both classifiers generate high responses in the circled area. Classification errors arise easily in such confusing areas, where two or more classes share similar spatial patterns.
To offer additional insight, we plot the normalized filter responses in the last CONV layer for both classes in Fig. 2, where the normalized response is defined as the sum of all responses of the same filter per unit area. For ease of visualization, we only show filters whose normalized responses exceed a threshold. The decision on a target class is primarily contributed by the high responses of a small number of filters, while a large number of filters are barely evoked. For example, only about 20 filters (out of a total of 2048) have high responses to the dog or cat classes. We can further divide them into three groups: those with a high response to both the dog and cat classes (in red), to the dog class only (in purple), or to the cat class only (in dark brown). On one hand, these filters, known as Grand Mother Cell (GMC) filters [22, 23], capture the most important semantic patterns of target objects (e.g., the cat face). On the other hand, some filters respond strongly to multiple object classes, making them less useful for discriminating the underlying classes.
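The normalized response described above can be sketched in a few lines, assuming the last CONV layer yields a response tensor of shape (num_filters, H, W); the shapes and the threshold here are illustrative, not values from the paper:

```python
import numpy as np

def normalized_responses(feature_maps):
    """Normalized response per filter: the sum of all responses of the
    same filter divided by the area of the map."""
    num_filters, h, w = feature_maps.shape
    return feature_maps.reshape(num_filters, -1).sum(axis=1) / (h * w)

# For visualization, keep only the filters whose normalized response
# exceeds a threshold, as done for Fig. 2.
maps = np.zeros((2048, 8, 8))
maps[7] = 1.0                      # a strongly responding filter
maps[42] = 0.1                     # a weakly responding filter
strong = np.where(normalized_responses(maps) > 0.5)[0]
```

On real features, the few indices surviving the threshold correspond to the GMC-like filters discussed above.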
Apparently, the FCN is trained on each class label without being taught the difference between confusing classes. If we can let a network learn explicitly that the confusing area is not part of a cat, it is possible to obtain a network of higher performance. This strategy, called reverse attention learning, may contribute to better discrimination of confusing classes and a better understanding of co-existing background context in the image.
3.2 Proposed RAN System
To improve the performance of the FCN, we propose the Reverse Attention Network (RAN), whose system diagram is depicted in Fig. 3. After obtaining the feature map, the RAN consists of three branches: the original branch (the lower path), the attention branch (the middle path) and the reverse branch (the upper path). The reverse branch and the attention branch merge to form the reverse attention response. Finally, the reverse attention response is subtracted from the prediction of the original branch to derive the final decision scores for semantic segmentation.
The FCN system diagram shown in Fig. 2 corresponds to the lower branch in Fig. 3 with the "original branch" label. As described earlier, its CONV layers before the feature map learn object features, and its CONV_org layers help the decision classifiers generate the class-wise probability map. Here, we use CONV_org to denote the layers obtained from the original FCN through a straightforward direct learning process. For the RAN system, we introduce two more branches: the reverse branch and the attention branch. The need for these two branches is explained below.
Reverse Branch. The upper path in Fig. 3 is the reverse branch. We train another layer, CONV_rev, to learn the reverse object class explicitly, where the reverse object class is the reversed ground truth for the object class of concern. To obtain the reversed ground truth, we set the corresponding class region to zero and the remaining region to one, as illustrated in Fig. 1. The remaining region includes the background as well as the other classes. However, this would require a specific reverse label for each object class.
There is an alternative way to implement the same idea: reverse the sign of all class-wise response values before feeding them into the softmax-based classifiers. This operation is indicated by the NEG block in the reverse branch. Such an implementation allows CONV_rev to be trained using the same original class-wise ground-truth labels.
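Both implementations of the reverse branch can be illustrated with a small sketch (the function names and toy scores below are ours, not the paper's): the explicit reversed ground truth, and the NEG trick that lets the original labels supervise the reverse class.

```python
import numpy as np

def reversed_ground_truth(label_map, target_class):
    """Reversed GT: 0 inside the target-class region, 1 everywhere else
    (background as well as the other classes)."""
    return (label_map != target_class).astype(np.int64)

def softmax(scores):
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

labels = np.array([[1, 1], [0, 2]])        # a tiny 2x2 label map
rev_gt = reversed_ground_truth(labels, target_class=1)

# NEG trick: negating the class-wise scores before the softmax swaps
# which class the classifier favors, so the reverse branch can be
# trained with the original (non-reversed) ground truth.
scores = np.array([[2.0], [0.5]])          # (num_classes, num_pixels)
p_org = softmax(scores)                     # favors class 0
p_rev = softmax(-scores)                    # favors class 1
```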
Reverse Attention Branch. One simple way to combine results of the original and the reverse branch is to directly subtract the reverse prediction from the original prediction (in terms of object class probabilities). We can interpret this operation as finding the difference between the predicted decision of the original FCN and the predicted decision due to reverse learning. For example, the lower part of the dog gives strong responses to both the dog and the cat in the original FCN. However, the same region will give a strong negative response to the cat class but almost zero response to the dog class in the reverse learning branch. Then, the combination of these two branches will reduce the response to the cat class while preserving the response to the dog class.
However, directly applying element-wise subtraction does not necessarily yield better performance. Sometimes the reverse prediction may not perform as well as the original prediction in confident areas. Therefore, we propose a reverse attention structure to further highlight the regions that are overlooked in the original prediction, including confusion and background areas. The reverse attention structure generates a class-oriented mask to amplify the reverse response map.
As shown in Fig. 3, the input to the reverse attention branch is the prediction result of CONV_org. We flip the sign of the pixel values with the NEG block, feed the result to the sigmoid function and, finally, filter the sigmoid output with an attention mask. The sigmoid function converts the response attention map to the range $[0, 1]$. Mathematically, the pixel value in the reverse attention map $I_{ra}$ can be written as

$$I_{ra}(i,j) = \mathrm{Sigmoid}\left(-F_{conv_{org}}(i,j)\right), \qquad (1)$$

where $(i,j)$ denotes the pixel location and $F_{conv_{org}}$ denotes the response map of CONV_org. Note that regions with small or negative responses will be highlighted due to the cascade of the NEG and sigmoid operations. In contrast, areas of positive responses (or confident scores) will be suppressed in the reverse attention branch.
After obtaining the reverse attention map, we combine it with the response map of CONV_rev using element-wise multiplication, as shown in Fig. 3. The multiplied response score is then subtracted from the original prediction, contributing to the final combined prediction.
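The whole pipeline of the Equation-(1)-style reverse attention followed by the fusion step can be sketched per class as below (a numpy-only illustration; the variable names are ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ran_combine(score_org, score_rev):
    """Fuse the branches for one class map.

    score_org: response of the original branch (CONV_org).
    score_rev: response of the reverse branch (CONV_rev).
    """
    # NEG + sigmoid highlights the regions the original branch
    # overlooked (small or negative responses).
    attention = sigmoid(-score_org)
    # Amplify the reverse response there, then subtract it from the
    # original prediction.
    return score_org - attention * score_rev

# Confused region: the original branch weakly claims "cat" (1.0) but the
# reverse branch strongly says "not cat" (2.0) -> the score is reduced.
confused = ran_combine(np.array([1.0]), np.array([2.0]))
# Confident region: a large original score receives near-zero attention
# and stays almost untouched.
confident = ran_combine(np.array([6.0]), np.array([2.0]))
```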
Several variants of the RAN architecture have been investigated. The following normalization strategy offers a faster convergence rate while providing similar segmentation performance:

$$I_{ra}(i,j) = \mathrm{Sigmoid}\left(\frac{1}{\mathrm{Relu}\left(\bar{F}_{conv_{org}}(i,j)\right) + a} - b\right), \qquad (2)$$

where $\bar{F}_{conv_{org}}$ is $F_{conv_{org}}$ normalized to be within $[0, 1]$, which results in a more uniform distribution before being fed into the sigmoid function. We also clip all negative scores of $\bar{F}_{conv_{org}}$ to zero by applying the Relu operation and control the inverse scores to be within the range $[-4, 4]$ using the parameters $a$ and $b$. In the experimental section, we compare the results of the reverse attention set-ups given in Equations (1) and (2). They are denoted by RAN-simple (RAN-s) and RAN-normalized (RAN-n), respectively.
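The two attention set-ups can be contrasted in code. The choice a = 0.125, b = 4 below is only one assignment consistent with the stated [-4, 4] range of the inverse scores, not a value recovered from the text:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ran_s_attention(score):
    """RAN-s, Eq. (1): a sign flip followed by a sigmoid."""
    return sigmoid(-score)

def ran_n_attention(score, a=0.125, b=4.0):
    """RAN-n, Eq. (2): normalize to [0, 1], clip negatives with a Relu,
    and keep the inverse score 1/(x + a) - b roughly within [-b, b]."""
    norm = (score - score.min()) / (score.max() - score.min() + 1e-8)
    x = np.maximum(norm, 0.0)    # Relu (a no-op after this normalization,
                                 # kept to mirror the formula)
    return sigmoid(1.0 / (x + a) - b)

scores = np.array([-2.0, 0.0, 3.0])
att_s = ran_s_attention(scores)
att_n = ran_n_attention(scores)
```

Both variants assign high attention to low original scores and low attention to high ones; RAN-n does so on a rescaled, more uniform input.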
RAN Training. To train the proposed RAN, we back-propagate the cross-entropy losses of the three branches simultaneously and adopt softmax classifiers at the three prediction outputs. All three losses are needed to ensure a balanced end-to-end learning process. The original prediction loss and the reverse prediction loss allow CONV_org and CONV_rev to learn the target classes and their reverse classes in parallel, while the loss on the combined prediction allows the network to learn the reverse attention. The proposed RAN can be effectively trained from a pre-trained FCN, which indicates that the RAN further improves the FCN by adding more relevant guidance to the training process.
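A minimal sketch of this three-loss objective follows (the helper names and toy tensors are ours; a real implementation would back-propagate all three losses through the shared features):

```python
import numpy as np

def softmax_xent(scores, labels):
    """Mean pixel-wise softmax cross-entropy.
    scores: (num_classes, num_pixels); labels: (num_pixels,)."""
    s = scores - scores.max(axis=0, keepdims=True)
    log_p = s - np.log(np.exp(s).sum(axis=0, keepdims=True))
    return -log_p[labels, np.arange(labels.size)].mean()

def ran_loss(score_org, score_rev, score_comb, labels):
    """Sum of the three branch losses, back-propagated simultaneously.
    The NEG block lets the reverse branch reuse the original labels:
    its loss is computed on the negated reverse scores."""
    return (softmax_xent(score_org, labels)
            + softmax_xent(-score_rev, labels)
            + softmax_xent(score_comb, labels))

labels = np.array([0, 1])
score_org = np.array([[2.0, -1.0], [0.0, 1.5]])
score_rev = -score_org                 # an ideal reverse branch
score_comb = score_org                 # stand-in for the fused output
total = ran_loss(score_org, score_rev, score_comb, labels)
```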
4 Experimental Results

Table 1 (excerpt): comparison on PASCAL-Context.

| Methods | Feature | Pixel Acc. | Mean Acc. | Mean IoU |
|---|---|---|---|---|
| Model A2, 2conv | Wider ResNet | 75.0 | 58.1 | 48.1 |
To show the effectiveness of the proposed RAN, we conduct experiments on five datasets: PASCAL-Context [28], PASCAL Person-Part [29], PASCAL VOC [14], NYU-Depth-v2 [30] and MIT ADE20K [31]. We implemented the RAN using the Caffe library [10] and built it upon the publicly available DeepLab-v2 repository [16]. We adopted the initial network weights provided by the repository, which were pre-trained on the COCO dataset with ResNet-101. The proposed reverse attention architecture is implemented entirely with standard Caffe layers: we utilize the Power layer to flip, shift and scale the responses, and the provided Sigmoid layer to conduct the sigmoid transformation.
We employ the "poly" learning rate policy. Momentum and weight decay are set to 0.9 and 0.0001, respectively. We adopted the DeepLab data augmentation scheme with random scaling factors of 0.5, 0.75, 1.0, 1.25 and 1.5, and with mirroring, for each training image. Following [16], we adopt multi-scale (MSC) inputs with max fusion in both training and testing. Although we did not apply atrous spatial pyramid pooling (ASPP) due to limited GPU memory, we still observe significant improvement in the mean intersection-over-union (mean IoU) score over both the baseline DeepLab-v2 LargeFOV and the ASPP set-up.
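The "poly" schedule can be written as a one-liner; the power of 0.9 below is the common DeepLab choice, and the base learning rate is a placeholder, since neither value survived in this text:

```python
def poly_lr(base_lr, iteration, max_iter, power=0.9):
    """DeepLab-style "poly" decay:
    lr = base_lr * (1 - iteration / max_iter) ** power."""
    return base_lr * (1.0 - float(iteration) / max_iter) ** power

# Sample the schedule every 5000 iterations of a 20000-iteration run.
schedule = [poly_lr(1e-3, it, max_iter=20000) for it in range(0, 20001, 5000)]
```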
Table 2 (excerpt, mean IoU on PASCAL-Context):

| Method | baseline | +dilated | +aug | +MSC | +CRF |
|---|---|---|---|---|---|
| DeepLabv2 (baseline) | 41.6 | 42.6 | 43.2 | 43.5 | 44.4 |
PASCAL-Context. We first present results on the challenging PASCAL-Context dataset [28]. The dataset has 4,995 training images and 5,105 test images, with 59 labeled categories including foreground objects and background context scenes. We compare the proposed RAN with a group of state-of-the-art methods in Table 1, where RAN-s and RAN-n use Equations (1) and (2) in the reverse attention branch, respectively. The mean IoU values of RAN-s and RAN-n improve significantly over their baseline DeepLabv2-LargeFOV. Both achieve state-of-the-art mean IoU scores (around 48.1%) that are comparable with those of the RefineNet [6] and the Wider ResNet [27].
We compare the performance of the dual-branch RAN (without reverse attention), RAN-s, RAN-n and their baseline DeepLabv2 in an ablation study in Table 2, where a sequence of techniques is employed step by step: dilated classification, data augmentation, MSC with max fusion and the fully connected conditional random field (CRF). The performance of the RANs keeps improving, and they outperform their baseline in all settings. Qualitative results are provided in Fig. 4, showing that the proposed reverse learning corrects some mistakes in the confusion areas and yields more uniform predictions for the target objects.
PASCAL Person-Part. We also conducted experiments on the PASCAL Person-Part dataset [29], which includes labels for six body parts of persons (head, torso, upper/lower arms and upper/lower legs). There are 1,716 training images and 1,817 validation images. As observed in earlier work, the dilated decision classifier provides little performance improvement here, so we adopted the MSC structure with 3-by-3 decision filters without dilation for the RANs. The mean IoU results of several benchmarking methods are shown in Table 3. Both RAN-s and RAN-n outperform the baseline DeepLabv2 and achieve state-of-the-art performance on this fine-grained dataset.
Table 3 (methods compared):

| Attention | HAZN | Graph LSTM | RefineNet | DeepLabv2 | RAN-s | RAN-n |
PASCAL VOC2012. Furthermore, we conducted experiments on the popular PASCAL VOC2012 test set [14]. We adopted the augmented ground truth from [34], with a total of 12,051 training images, and submitted our segmentation results to the evaluation website. On this dataset, our DeepLab-based network does not improve as much as specifically designed networks such as [6, 7]. However, we still observe an improvement over the baseline DeepLabv2-LargeFOV, which also outperforms DeepLabv2-ASPP.
NYUDv2. The NYUDv2 dataset [30] is an indoor scene dataset with 795 training images and 654 test images, with coalesced labels of 40 classes provided by [35]. The mean IoU results of several benchmarking methods are shown in Table 5. RAN-s and RAN-n improve their baseline DeepLabv2-LargeFOV by a large margin (around 3%). Visual comparisons of the segmentation results for two images are shown in Fig. 5.
Table 5 (methods compared):

| Gupta et al. | FCN-32s | Context | Holistic | RefineNet | DeepLabv2-ASPP | DeepLabv2-LFOV | RAN-s | RAN-n |
MIT ADE20K. The MIT ADE20K dataset [31] was released recently. It has 150 labeled classes for both objects and background scene parsing, with about 20K training images and 2K validation images. Although our baseline DeepLabv2 does not perform as well in global scene parsing as [8, 7], we still observe about a 2% improvement in the mean IoU score, as shown in Table 6.
Table 6 (methods compared):

| FCN-8s | DilatedNet | DilatedNet Cascade | Holistic | PSPNet | DeepLabv2-ASPP | DeepLabv2-LFOV | RAN-s | RAN-n |
5 Conclusion

A new network designed for reverse learning, called the RAN, was proposed in this work. The network explicitly learns what is and is not associated with a target class in its direct and reverse branches, respectively. To further enhance the reverse learning effect, the sigmoid activation function and an attention mask were introduced to build the reverse attention branch as the third branch. The three branches were integrated in the RAN to generate the final results. The RAN provides significant performance improvements over its baseline network and achieves state-of-the-art semantic segmentation performance on several benchmark datasets.
References

-  Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 3431–3440
-  Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
-  Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 1–9
-  He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 (2015)
-  Bishop, C.M.: Pattern recognition and machine learning. Springer (2006)
-  Lin, G., Milan, A., Shen, C., Reid, I.: Refinenet: Multi-path refinement networks with identity mappings for high-resolution semantic segmentation. arXiv preprint arXiv:1611.06612 (2016)
-  Zhao, H., Shi, J., Qi, X., Wang, X., Jia, J.: Pyramid scene parsing network. arXiv preprint arXiv:1612.01105 (2016)
-  Hu, H., Deng, Z., Zhou, G.T., Sha, F., Mori, G.: Recalling holistic information for semantic segmentation. arXiv preprint arXiv:1611.08061 (2016)
-  Doersch, C., Gupta, A., Efros, A.A.: Unsupervised visual representation learning by context prediction. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 1422–1430
-  Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. In: Proceedings of the ACM International Conference on Multimedia, ACM (2014) 675–678
-  Zhang, Y.J.: A survey on evaluation methods for image segmentation. Pattern recognition 29 (1996) 1335–1346
-  Shi, J., Malik, J.: Normalized cuts and image segmentation. Pattern Analysis and Machine Intelligence, IEEE Transactions on 22 (2000) 888–905
-  Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. (2012) 1097–1105
-  Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The pascal visual object classes (voc) challenge. International Journal of Computer Vision 88 (2010) 303–338
-  Chen, L.C., Yang, Y., Wang, J., Xu, W., Yuille, A.L.: Attention to scale: Scale-aware semantic image segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 3640–3649
-  Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. arXiv preprint arXiv:1606.00915 (2016)
-  Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P.H.: Conditional random fields as recurrent neural networks. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 1529–1537
-  Shrivastava, A., Gupta, A., Girshick, R.: Training region-based object detectors with online hard example mining. arXiv preprint arXiv:1604.03540 (2016)
-  Dai, J., He, K., Li, Y., Ren, S., Sun, J.: Instance-sensitive fully convolutional networks. arXiv preprint arXiv:1603.08678 (2016)
-  Dai, J., Li, Y., He, K., Sun, J.: R-fcn: Object detection via region-based fully convolutional networks. arXiv preprint arXiv:1605.06409 (2016)
-  Chen, L.C., Barron, J.T., Papandreou, G., Murphy, K., Yuille, A.L.: Semantic image segmentation with task-specific edge detection using cnns and a discriminatively trained domain transform. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 4545–4554
-  Gross, C.G.: Genealogy of the “grandmother cell”. The Neuroscientist 8 (2002) 512–518
-  Agrawal, P., Girshick, R., Malik, J.: Analyzing the performance of multilayer neural networks for object recognition. In: European Conference on Computer Vision, Springer (2014) 329–344
-  Dai, J., He, K., Sun, J.: Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 1635–1643
-  Lin, G., Shen, C., van den Hengel, A., Reid, I.: Efficient piecewise training of deep structured models for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 3194–3203
-  Wu, Z., Shen, C., Hengel, A.v.d.: Bridging category-level and instance-level semantic image segmentation. arXiv preprint arXiv:1605.06885 (2016)
-  Wu, Z., Shen, C., Hengel, A.v.d.: Wider or deeper: Revisiting the resnet model for visual recognition. arXiv preprint arXiv:1611.10080 (2016)
-  Mottaghi, R., Chen, X., Liu, X., Cho, N.G., Lee, S.W., Fidler, S., Urtasun, R., Yuille, A.: The role of context for object detection and semantic segmentation in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2014) 891–898
-  Chen, X., Mottaghi, R., Liu, X., Fidler, S., Urtasun, R., Yuille, A.: Detect what you can: Detecting and representing objects using holistic models and body parts. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2014) 1971–1978
-  Silberman, N., Hoiem, D., Kohli, P., Fergus, R.: Indoor segmentation and support inference from rgbd images. In: ECCV. (2012)
-  Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., Torralba, A.: Semantic understanding of scenes through the ade20k dataset. arXiv preprint arXiv:1608.05442 (2016)
-  Xia, F., Wang, P., Chen, L.C., Yuille, A.L.: Zoom better to see clearer: Human and object parsing with hierarchical auto-zoom net. In: European Conference on Computer Vision, Springer (2016) 648–663
-  Liang, X., Shen, X., Feng, J., Lin, L., Yan, S.: Semantic object parsing with graph lstm. In: European Conference on Computer Vision, Springer (2016) 125–143
-  Hariharan, B., Arbeláez, P., Girshick, R., Malik, J.: Simultaneous detection and segmentation. In: Computer vision–ECCV 2014. Springer (2014) 297–312
-  Gupta, S., Arbelaez, P., Malik, J.: Perceptual organization and recognition of indoor scenes from rgb-d images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2013) 564–571
-  Gupta, S., Girshick, R., Arbeláez, P., Malik, J.: Learning rich features from rgb-d images for object detection and segmentation. In: European Conference on Computer Vision, Springer (2014) 345–360