Semantic segmentation is a pixel-wise classification problem in which a class prediction is assigned to each pixel of an image. The development of deep learning has brought semantic segmentation into a new era. Starting from the Fully Convolutional Network (FCN), the research field of semantic segmentation based on Convolutional Neural Networks (CNN) has grown rapidly [25, 4, 2]. These methods have boosted the field and reached state-of-the-art performance on several semantic segmentation benchmarks.
However, understanding the context of objects at multiple scales is still a challenging problem. Several approaches have been proposed to handle it. Following the criterion presented in , we group these methods into three categories. First, image pyramid based methods: the input image is decomposed into an image pyramid, and a DCNN (Deep Convolutional Neural Network) is applied separately to every resolution level of the pyramid [9, 24, 14]. In this way, objects at different scales are captured from different levels of feature maps. Second, encoder-decoder based methods: in the encoder, convolution and pooling operations are applied hierarchically to extract features; the spatial resolution is then recovered in the decoder path by hierarchical up-sampling and convolution operations. The most representative architecture is U-Net , which has achieved promising results in the medical image processing field, and many other encoder-decoder based architectures exist [2, 13, 1, 18, 19]. Third, spatial pyramid pooling based approaches: the feature maps are aggregated by pooling operations or by atrous convolutions with multiple rates. The atrous spatial pyramid pooling (ASPP) proposed in DeepLabs [4, 5] and the pyramid pooling module (PPM) presented in PSPNet  are two representative works in this group.
The performance of DeepLabs [5, 6] and PSPNet  on several benchmarks shows the effectiveness of their pyramid pooling modules. However, the rates of ASPP and the pooling scales of PPM are manually selected, and thus cannot encode multi-scale information flexibly and in an image-dependent way. In this paper, our goal is to investigate whether there is a way to aggregate the feature maps adaptively, depending on the input image.
Rethinking ASPP and atrous convolution, we note that setting different values of the atrous rate endows the network with multiple effective fields-of-view, and thus with the ability to capture multi-scale context information. Therefore, if we can adaptively adjust the field-of-view of the convolution operations in the aggregation part, the network should be able to aggregate contextual information in an input-dependent way.
Interestingly, Deformable Convolutional Networks (DCN) were recently proposed in [8, 36]. In deformable convolution, the sampling locations are learnable, which fits our purpose well. Therefore, in this paper, we propose an adaptive context encoding (ACE) module based on deformable convolution; more precisely, we replace the ASPP module or PPM with three deformable convolution blocks.
This idea is evaluated on the Pascal-Context  and ADE20K  semantic segmentation datasets. We experimentally demonstrate that our proposed method consistently improves the segmentation result over the baseline methods, ASPP and PPM. In particular, it shows a more robust performance under different batch size settings during training. Moreover, even though our goal is to find a better multi-scale aggregation module than ASPP and PPM, our method achieves the state of the art on the Pascal-Context dataset with 53.6% mIoU and promising results on the ADE20K dataset with a final score of 0.5535. Furthermore, our ACE module can easily be embedded into other networks for further improvement.
2 Related Work
Deep-learning-based semantic segmentation has developed rapidly and great progress has been achieved. A DCNN with pooling and strided convolution operations is invariant to local image transformations and can thus extract abstractions of the data hierarchically. On one hand, this ability is beneficial for high-level vision tasks such as classification. On the other hand, it can limit the performance of pixel-wise dense prediction tasks where spatial information is important . Semantic segmentation is therefore challenging, as it needs to perform classification and localization simultaneously.
Many works have been proposed to improve semantic segmentation; they can be briefly divided into two directions: resolution enlarging and context extraction.
2.1 Resolution Enlarging
Atrous convolution, inspired by the atrous algorithm , is claimed to be useful for extracting denser feature maps, which can alleviate the loss of detail information. It is thus widely used in semantic segmentation to enlarge the receptive field and extract dense features [4, 6, 27]. Besides, encoder-decoder architectures employ a decoder to up-sample the resolution hierarchically and compensate for the information loss of the encoder [25, 13, 29]. We therefore briefly group these methods into two categories, atrous convolution based and encoder-decoder based approaches [27, 28], which are introduced in the following paragraphs.
Atrous convolution based methods Typically, for a DCNN such as ResNet , the spatial resolution of the output feature maps of the final layer is 32 times smaller than the resolution of the input image, which is harmful for pixel-wise tasks. Atrous convolution is used to enlarge the receptive field while preserving the resolution of the feature maps. DeepLabs, especially DeepLabv2  and DeepLabv3 , are a series of methods that investigate atrous convolution for semantic segmentation and are considered among the state-of-the-art techniques. A similar DCNN feature extraction backbone is also used in PSPNet, where the resolution of the final-layer feature maps is 8 times smaller. Atrous convolution is an effective solution to spatial information loss. However, the larger feature maps and larger convolution kernels make the network require more computational resources. Recently, in , Wu et al. propose Joint Pyramid Up-sampling (JPU) to replace the memory- and time-consuming atrous convolutions while keeping the ability to extract high-resolution feature maps.
Encoder-decoder based methods In an encoder-decoder network, the spatial resolution is gradually up-sampled in the decoder part. DeconvNet  uses deconvolutional layers  in a complex decoder to recover a full-resolution final prediction. U-Net  introduces skip connections from the encoder to the decoder, so that the information carried by the skip connections compensates for the information loss. RefineNet  elaborately designs an up-sampling path to fuse low-level and high-level features. DeepLabv3+  employs both skip connections and atrous convolution, reaching state-of-the-art performance on some benchmarks to date.
2.2 Context Extraction
Scene context is important for extracting semantics, and many approaches have been proposed to extract useful context information. Spatial pyramid pooling is proven to be effective for extracting context information [15, 34, 4]. Moreover, an attention mechanism is proposed to learn the object context map in . In , Peng et al. use convolution operations with large kernel sizes to extract classification information. Among these approaches, the spatial pyramid pooling based methods are popular. Spatial pyramid pooling aims at extracting multi-scale context information from feature maps. For semantic segmentation, the pyramid pooling module (PPM)  investigates pooling operations as a tool for multi-scale context aggregation, and atrous spatial pyramid pooling (ASPP)  exploits atrous convolution for pyramid pooling. These two modules are described in detail below, as they are highly related to the proposed approach.
Pyramid Pooling Module (PPM) Global Average Pooling (GAP) is used to obtain a global contextual prior in ParseNet  for semantic segmentation. However, as pointed out in , fusing one feature map into one single value may cause information loss. Thus, in , Zhao et al. propose to hierarchically apply pooling operations at four scales as illustrated in Figure 1 (a), resulting in feature maps at four levels of resolution. The coarsest level is obtained by applying GAP on the feature maps, giving a single vector output. For the other levels, the feature maps are first divided into sub-regions, then global pooling is applied to every sub-region. The numbers of sub-regions are set to 1×1, 2×2, 3×3 and 6×6 for the four levels respectively in  and illustrated in Figure 1 (a). The PPM can thus extract information at different scales for context aggregation.
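As an illustration, one PPM level amounts to averaging over an n×n grid of sub-regions. The following minimal numpy sketch (function name is ours, not from the PSPNet code) pools a single-channel feature map at two levels:

```python
import numpy as np

def pool_level(fmap, n):
    """Average-pool a (H, W) feature map into an n x n grid of sub-regions."""
    rows = np.array_split(np.arange(fmap.shape[0]), n)
    cols = np.array_split(np.arange(fmap.shape[1]), n)
    return np.array([[fmap[np.ix_(r, c)].mean() for c in cols] for r in rows])

fmap = np.arange(36, dtype=float).reshape(6, 6)
g = pool_level(fmap, 1)    # coarsest level: global average pooling, one value
p2 = pool_level(fmap, 2)   # 2x2 sub-regions, one average per region
```

In the actual PPM, each pooled map is additionally passed through a 1×1 convolution, up-sampled back to the input resolution, and concatenated with the original feature maps.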
Atrous Spatial Pyramid Pooling (ASPP) The ASPP module was first proposed in  and further revised in . In the ASPP module, as shown in Figure 1 (b), different atrous rates are used to extract multi-scale information. Besides, in order to capture a global context prior, GAP is applied, similar to ParseNet  and the PPM . In summary, one convolution block, three atrous convolution blocks with different atrous rates (6, 12 and 18 respectively), and one GAP block are employed in parallel.
While DeepLabs and PSPNet reached state-of-the-art performance on different benchmarks when they were proposed, and they still influence semantic segmentation, it is important and interesting to investigate the two following aspects. (1) The numbers of sub-regions of the PPM in PSPNet and the atrous rates of the ASPP module in DeepLabs are clearly selected empirically. The choice of these parameters needs to be adjusted according to the application; for example, in , different numbers of sub-regions are chosen for the RMP (Residual Multi-kernel Pooling) module, which is similar to the PPM. It is essential to avoid choosing those parameters manually and empirically. (2) The PPM and ASPP both extract context information by sampling from rigid rectangular regions which contain pixels from different object categories. However, for a certain pixel, the surrounding pixels that belong to the same category should contribute more. As also pointed out in , Yuan and Wang hold a similar opinion and define object context as the set of pixels belonging to the same category. While Yuan and Wang utilize a self-attention  mechanism to exploit context from pixels of the same object class, in this work, inspired by the original ASPP, we investigate the possibility of aggregating multi-scale information by adaptively adjusting the field-of-view of the convolution operation.
3 Method

In this section, we discuss our proposed ACE module in detail. The most relevant operations, atrous and deformable convolution, are first introduced; then the ACE module and the network architecture are presented.
3.1 Convolution Operation
Atrous convolution is chosen as the tool for the context aggregation module in ASPP . For two-dimensional signals such as images, atrous convolution can be written as:

$$y[i] = \sum_{k} x[i + r \cdot k] \, w[k] \quad (1)$$
where $y$ indicates the output of the atrous convolution operation, $i$ is the location, $x$ is the input signal, $r$ is the atrous rate, $w$ denotes the filter with a length of $K$, and $k$ enumerates the filter positions. When $r = 1$, the equation stands for standard convolution. The value of $r$ controls the sampling locations of the atrous convolution. In ASPP, different sizes of field-of-view are obtained by setting different values of $r$. This observation leads to our claim that a learnable field-of-view can be obtained through learnable sampling locations of the convolution operation.
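For intuition, Eq. 1 in one dimension can be sketched directly (the function name is ours; real implementations pad the input and operate in 2-D). The same $K$ weights cover a field-of-view of $r \cdot (K - 1) + 1$ samples:

```python
import numpy as np

def atrous_conv1d(x, w, r=1):
    """y[i] = sum_k x[i + r*k] * w[k], evaluated at 'valid' output locations."""
    K = len(w)
    out_len = len(x) - r * (K - 1)
    return np.array([sum(x[i + r * k] * w[k] for k in range(K))
                     for i in range(out_len)])

x = np.arange(10, dtype=float)
w = np.ones(3)
y1 = atrous_conv1d(x, w, r=1)  # standard convolution, field-of-view 3
y2 = atrous_conv1d(x, w, r=2)  # same 3 weights, field-of-view 5
```

Changing `r` leaves the number of weights fixed but stretches the sampling grid, which is exactly how ASPP obtains multiple fields-of-view at no extra parameter cost.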
It is interesting to notice that the recently proposed deformable convolution [8, 36] meets our requirement of a convolution with learnable sampling locations. Dai et al. propose Deformable Convolutional Networks (DCNv1) in  and Zhu et al. propose a revised version (DCNv2) in . Eq. 2 presents the deformable convolution operation from DCNv1:

$$y[i] = \sum_{k} x[i + k + \Delta k] \, w[k] \quad (2)$$
where $\Delta k$ denotes the offsets. The regular sampling locations $k$ are thus augmented with the irregular offsets $\Delta k$.
It is clearly observed that the main difference between Eq. 1 and Eq. 2 is that the sampling locations of atrous convolution always form a regular grid. For example, the sampling grid of a $3 \times 3$ kernel is square no matter what the value of the atrous rate $r$ is, while the offset $\Delta k$ is input dependent, without any regular shape constraint. Besides, compared to the manually set atrous rate $r$, the offset $\Delta k$ is learned by the network. In , Zhu et al. further investigate deformable convolution and find that the spatial support of the deformable convolution operation from DCNv1 can extend beyond the pertinent region. Therefore, they propose DCNv2, which lets the network better focus on relevant image content by introducing a modulation mechanism to manipulate the spatial support region. This modulated deformable convolution can be expressed as follows:

$$y[i] = \sum_{k} x[i + k + \Delta k] \, w[k] \cdot \Delta m_k \quad (3)$$
where $\Delta m_k$ is the learnable modulation value with range $[0, 1]$. This modulation value further adjusts each sampled pixel's contribution, so that the spatial support regions are better adjusted.
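A 1-D sketch of Eq. 3 makes the mechanism concrete. In the network, the offsets and modulation values are predicted per location by an extra convolution branch; here they are simply given as lists, and fractional offsets are handled by linear interpolation (the 2-D case uses bilinear interpolation):

```python
import numpy as np

def sample(x, p):
    """Linearly interpolate x at fractional position p (zero outside the signal)."""
    lo = int(np.floor(p))
    get = lambda j: x[j] if 0 <= j < len(x) else 0.0
    t = p - lo
    return (1.0 - t) * get(lo) + t * get(lo + 1)

def mod_deform_conv1d(x, w, i, offsets, mods):
    """Eq. 3 at one output location i: sum_k x[i + k + dk] * w[k] * dm_k."""
    return sum(sample(x, i + k + offsets[k]) * w[k] * mods[k]
               for k in range(len(w)))

x = np.arange(10, dtype=float)
w = np.ones(3)
y_reg = mod_deform_conv1d(x, w, 0, offsets=[0, 0, 0], mods=[1, 1, 1])        # regular grid
y_off = mod_deform_conv1d(x, w, 0, offsets=[0.5, 0.5, 0.5], mods=[1, 1, 1])  # shifted grid
```

With zero offsets and unit modulation, the operation reduces to standard convolution; non-zero offsets deform the sampling grid and modulation values below 1 attenuate the contribution of individual samples.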
Therefore, in this work, we propose to employ the deformable convolution operation from DCNv2 as the tool for context aggregation.
3.2 Network Architecture
A brief illustration of the context-extraction-based semantic segmentation pipeline is depicted in Figure 2 (a). For a given input image, a convolutional network such as ResNet  or Xception  is applied to extract feature maps, and feature aggregation is then employed to extract context information. Based on the aggregated feature information, the final prediction is made and up-sampled to the original input spatial resolution.
In this paper, we focus only on the feature aggregation part. Our proposed ACE module is shown in Figure 2 (b). After feature extraction, the input image is represented by feature maps of size $C \times H \times W$, where $H$ and $W$ are the height and width of the feature maps respectively and $C$ indicates the number of feature channels. In the ACE module, three deformable convolution blocks (DCB) are applied in cascade to aggregate the feature maps. Each block consists of "Deformable Convolution (DConv), BN (BatchNorm), ReLU (Rectified Linear Unit)" operations. After the ACE module, one convolution operation is applied for the final segmentation map prediction. The predicted result is then up-sampled directly to the original image spatial resolution by bilinear up-sampling.
4 Experiments

In this section, we validate our proposed module on two public datasets, Pascal-Context  and ADE20K . We first introduce the implementation details; the experimental results on these two datasets are then presented and analyzed. The performance of the proposed method is evaluated in terms of two common measures, namely pixel accuracy (pixAcc) and mean Intersection over Union (mIoU).
In order to illustrate the effectiveness of the proposed method, we compare it with ASPP and PPM. It is worth noticing that DeepLabv3 and PSPNet utilize atrous convolution for feature extraction, which is memory and time consuming. In , Wu et al. propose a Joint Pyramid Up-sampling (JPU) module to replace the heavy feature extraction module. Their method (FastFCN) reduces the computational complexity by more than three times and reaches slightly better performance. Due to our limited computational resources, we adopt FastFCN's feature extraction part as the backbone for comparison. In other words, only the feature aggregation (head) part is replaced with ASPP (atrous spatial pyramid pooling), PPM (pyramid pooling module) or our proposed ACE (adaptive context encoding) module.
4.1 Implementation Details
The implementation is based on the PyTorch implementation of FastFCN 111https://github.com/wuhuikai/FastFCN which is similar to 222https://github.com/zhanghang1989/PyTorch-Encoding, and on the implementations of Deformable ConvNets333https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch,444https://github.com/open-mmlab/mmdetection. For a fair comparison, we adopt the original training strategy of FastFCN . Specifically, the "poly" learning rate policy is used: $lr = base\_lr \times (1 - \frac{iter}{total\_iter})^{power}$, where $power = 0.9$. The initial base learning rate is set to 0.001 for batch size 16 on PASCAL-Context  and 0.01 on ADE20K ; it is adjusted proportionally if another batch size is chosen. The networks are trained with SGD for 80 epochs on PASCAL-Context  and 120 epochs on ADE20K . The momentum is 0.9 and the weight decay is set to 0.0001. For data augmentation, the image is randomly flipped and scaled between 0.5 and 2, then cropped to a fixed size (480×480). Pixel-wise cross-entropy loss and the auxiliary loss presented in [34, 27] are used; the weight for the auxiliary loss is set to 0.2.
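The "poly" schedule above can be written out directly; this is a minimal sketch applied per iteration, with the power fixed at 0.9 as in FastFCN's training strategy:

```python
def poly_lr(base_lr, it, total_iters, power=0.9):
    """'Poly' policy: decays from base_lr at iteration 0 to 0 at the last iteration."""
    return base_lr * (1.0 - it / total_iters) ** power

# e.g. base_lr = 0.001 for batch size 16 on PASCAL-Context
lrs = [poly_lr(0.001, i, 100) for i in range(101)]
```

The learning rate thus decreases monotonically and reaches zero exactly at the end of training.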
Due to limited access to multi-GPU computational resources, our experiments include training on a single GeForce RTX 2080 GPU for small batch sizes and on 4 GeForce GTX 1080 GPUs for batch size 16.
Dataset The Pascal-Context dataset  is based on PASCAL VOC 2010, with additional annotations that cover the whole scene. The training set (pascal-train) contains 4,998 images and the validation set (pascal-val) contains 5,105 images. Following prior work [4, 33, 27], the semantic labels used in this paper are the 59 categories together with one background class.
Experimental Results Table 1 illustrates the performance on pascal-val of ASPP-, PPM- and the proposed ACE-based FastFCN . We trained the models with different batch sizes: 4, 6 and 16 (6 is the maximum batch size for our single GPU). All methods employ ResNet-50  as the backbone. The reported results are obtained for 59 classes without multi-scale evaluation, and all methods are trained on our machine for a fair comparison. Our proposed ACE clearly outperforms ASPP and PPM under all batch size settings. It is worth mentioning that the accuracy of the ASPP-based FastFCN is severely influenced by the batch size; as also pointed out in the DeepLabv3 paper , training a DeepLabv3 model with a small batch size is inefficient. Note that our proposed method not only reaches the best result but also shows stable performance across batch sizes. The absolute mIoU improvements of ACE over ASPP are 4.45%, 2.83% and 1.31% for batch sizes 4, 6 and 16 respectively, and 2.39%, 1.04% and 0.81% over PPM.
Table 2 compares the performance with the state-of-the-art methods. For a fair comparison, the reported result of our method is calculated with the background class and multi-scale evaluation, where the network prediction is averaged over multiple scales as in [15, 34, 33, 27]. The results of the other methods are taken from the corresponding papers. Our proposed method achieves 53.6% mIoU, which outperforms the previous methods.
| Method                    | mIoU (%) |
| Deeplabv2 (Res101-COCO)   | 45.7 |
| RefineNet (Res152)        | 47.3 |
| PSPNet (Res101)           | 47.8 |
| EncNet (Res101)           | 51.7 |
| DANet (Res101)            | 52.6 |
| FastFCN (Res101, EncNet)* | 53.1 |
| ACE (ours, Res101)        | 53.6 |

*FastFCN backbone with EncNet head.
Dataset ADE20K  was used in the ImageNet Scene Parsing Challenge 2016 and contains 150 object categories. It is divided into 20K/2K/3K images for training (a-train), validation (a-val) and testing (a-test) respectively.
Experimental Results Table 3 shows the results on the a-val set without multi-scale evaluation. Note that one GeForce RTX 2080 GPU can fit a maximum batch size of 4, thus the results reported in Table 3 are obtained with batch size 4 and ResNet-50 as the backbone. Our proposed method achieves the best result, with absolute mIoU improvements of 1.4% over the ASPP-based and 0.71% over the PPM-based FastFCN.
In order to compare with the state-of-the-art methods, we further train our model with a ResNet-101 backbone on 4 GeForce GTX 1080 GPUs with batch size 16. Table 4 shows the obtained result together with the results reported in the corresponding papers of the other approaches. The proposed method provides a better result than PSPNet, with an absolute mIoU improvement of 0.52%. EncNet achieves the best result. Apart from the methodology itself, part of the performance gap could come from the training strategy; for example, EncNet is trained with an image size of 576×576 while our method is trained with 480×480.
| Method                    | pixAcc (%) | mIoU (%) |
| RefineNet (Res152)        | -     | 40.7  |
| PSPNet (Res101)           | 81.39 | 43.29 |
| EncNet (Res101)           | 81.69 | 44.65 |
| FastFCN (Res101, EncNet)* | 80.99 | 44.34 |

*FastFCN backbone with EncNet head.
Moreover, we fine-tune our trained model for another 20 epochs on the combined a-train and a-val sets with a smaller learning rate of 0.001, then submit the a-test result to the evaluation website 555http://sceneparsing.csail.mit.edu/. Our method obtains 72.99% pixAcc and 37.71% mIoU, with a final score of 0.5535, which is not the best but is an encouraging result.
5 Discussion and Conclusion
In summary, in this work we revisited the atrous convolution operation and the pyramid pooling modules, and proposed an effective feature aggregation method based on deformable convolution to adaptively extract multi-scale context for the final segmentation map prediction. Experimental validation shows that our method outperforms the ASPP module and the PPM on the Pascal-Context and ADE20K datasets. Notably, although the goal of this work is to propose a better multi-scale context aggregation module rather than to obtain the best results on the benchmarks, our approach achieves a state-of-the-art result of 53.6% mIoU on Pascal-Context and an encouraging score of 0.5535 on ADE20K.
All the experiments confirm that an adaptive context encoding (ACE) module is beneficial for semantic segmentation and deserves further research. In this work, we directly used deformable convolution as the tool for ACE and simply cascaded three deformable convolution blocks; a more sophisticatedly designed architecture is worth exploring. We believe that further exploration of the usage and improvement of our feature aggregation idea is promising and necessary for the design of efficient semantic segmentation networks.
References

-  M. Amirul Islam, M. Rochan, N. D. Bruce, and Y. Wang. Gated feedback refinement network for dense image labeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3751–3759. IEEE, 2017.
-  V. Badrinarayanan, A. Kendall, and R. Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence, 39(12):2481–2495, 2017.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. In International Conference on Learning Representations, 2015.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834–848, 2018.
-  L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
-  L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision, pages 801–818. Springer, 2018.
-  F. Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1251–1258. IEEE, 2017.
-  J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 764–773. IEEE, 2017.
-  C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. IEEE transactions on pattern analysis and machine intelligence, 35(8):1915–1929, 2013.
-  Z. Gu, J. Cheng, H. Fu, K. Zhou, H. Hao, Y. Zhao, T. Zhang, S. Gao, and J. Liu. Ce-net: Context encoder network for 2d medical image segmentation. IEEE transactions on medical imaging, 2019.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778. IEEE, 2016.
-  J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu. Dual attention network for scene segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2019.
-  G. Lin, A. Milan, C. Shen, and I. Reid. Refinenet: Multi-path refinement networks for high-resolution semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1925–1934. IEEE, 2017.
-  G. Lin, C. Shen, A. Van Den Hengel, and I. Reid. Efficient piecewise training of deep structured models for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3194–3203. IEEE, 2016.
-  W. Liu, A. Rabinovich, and A. C. Berg. Parsenet: Looking wider to see better. arXiv preprint arXiv:1506.04579, 2015.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440. IEEE, 2015.
-  S. Mallat. A wavelet tour of signal processing. Elsevier, 1999.
-  A. Mohammed, S. Yildirim, I. Farup, M. Pedersen, and Ø. Hovde. Y-net: A deep convolutional neural network for polyp detection. arXiv preprint arXiv:1806.01907, 2018.
-  A. Mohammed, S. Yildirim, I. Farup, M. Pedersen, and Ø. Hovde. Streoscennet: surgical stereo robotic scene segmentation. In Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling, volume 10951, page 109510P. International Society for Optics and Photonics, 2019.
-  H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1520–1528. IEEE, 2015.
-  A. Oliva and A. Torralba. The role of context in object recognition. Trends in cognitive sciences, 11(12):520–527, 2007.
-  A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. 2017.
-  C. Peng, X. Zhang, G. Yu, G. Luo, and J. Sun. Large kernel matters—improve semantic segmentation by global convolutional network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1743–1751. IEEE, 2017.
-  P. H. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene labeling. In International Conference on Machine Learning, 2014.
-  O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-assisted Intervention, pages 234–241. Springer, 2015.
-  A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008, 2017.
-  H. Wu, J. Zhang, K. Huang, K. Liang, and Y. Yu. Fastfcn: Rethinking dilated convolution in the backbone for semantic segmentation. arXiv preprint arXiv:1903.11816, 2019.
-  C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang. Bisenet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European Conference on Computer Vision, pages 325–341. Springer, 2018.
-  C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang. Learning a discriminative feature network for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1857–1866. IEEE, 2018.
-  F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In International Conference on Learning Representations, 2016.
-  Y. Yuan and J. Wang. Ocnet: Object context network for scene parsing. arXiv preprint arXiv:1809.00916, 2018.
-  M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on Computer Vision, pages 818–833. Springer, 2014.
-  H. Zhang, K. Dana, J. Shi, Z. Zhang, X. Wang, A. Tyagi, and A. Agrawal. Context encoding for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7151–7160. IEEE, 2018.
-  H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2881–2890. IEEE, 2017.
-  B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 633–641. IEEE, 2017.
-  X. Zhu, H. Hu, S. Lin, and J. Dai. Deformable convnets v2: More deformable, better results. arXiv preprint arXiv:1811.11168, 2018.