Non-local RoIs for Instance Segmentation

07/14/2018 ∙ by Shou-Yao Roy Tseng, et al. ∙ National Tsing Hua University, Academia Sinica

We introduce the concept of Non-Local RoI (NL-RoI) Block as a generic and flexible module that can be seamlessly adapted into different Mask R-CNN heads for various tasks. Mask R-CNN treats RoIs (Regions of Interest) independently and performs the prediction based on individual object bounding boxes. However, the correlation between objects may provide useful information for detection and segmentation. The proposed NL-RoI Block enables each RoI to refer to all other RoIs' information, and results in a simple, low-cost but effective module. Our experimental results show that generalizations with NL-RoI Blocks can improve the performance of Mask R-CNN for instance segmentation on the Robust Vision Challenge benchmarks.




1 Introduction

The current trend of deep network architectures for object detection can be categorized into two main streams: one-stage detectors and two-stage detectors. One-stage detectors perform object detection in an end-to-end, single-pass manner, e.g., YOLO [17, 18, 19] and SSD [14, 5]. On the other hand, two-stage detectors divide the task into two sub-problems that respectively focus on extracting object region proposals and classifying each of the candidate regions. Detectors such as Faster R-CNN [20] and Light-Head R-CNN [12] are both of this kind.

Mask R-CNN [9] extends Faster R-CNN by adding a branch for predicting segmentation masks on each Region of Interest (RoI), in parallel with the existing branch for classification and bounding box regression. This showcases the architectural flexibility of two-stage detectors for multitasking over their one-stage counterparts. The different branches in Mask R-CNN share the same set of high-level features extracted by a deep CNN backbone network, such as ResNet [10]. Each branch then attends to its specific RoI via RoIAlign, a simple and quantization-free layer that faithfully preserves spatial preciseness. The proposed Non-Local RoI (NL-RoI) Block can be incorporated into Mask R-CNN to achieve better performance.

The ability to capture long-range, non-local information is a key success factor of deeper CNNs. For vanilla Mask R-CNN, the only means to acquire non-local information for each RoI is to explore the high-level features extracted by the deep backbone network. However, these high-level features are shared among all RoIs across different spatial locations, semantic categories, and task branches. They are assumed to be general rather than specific to individual RoIs, so that they remain applicable to all of the above varieties; it is therefore difficult for the same set of features to also carry RoI-specific information. Besides, RoI features are extracted from the rectangular bounding boxes proposed by the Region Proposal Network (RPN). When the scene is crowded, a single bounding box is very likely to contain multiple instances. Moreover, if those instances are of the same category, it is harder for the branch network to delineate their boundaries by referring only to the local features within an RoI. This is especially true for non-rigid objects such as persons: the target object deforms in shape, and its bounding box has a higher chance of including other objects interlaced in complicated ways.

To address the above concerns, we introduce the NL-RoI Block and argue that RoI-specific non-local information can be helpful in discriminating the target instance from the others. For example, owing to the object co-occurrence prior in the real world, cars are more likely to appear alongside pedestrians than refrigerators in a street scene. Besides, mutual information between instances may also be useful. Consider a scene of group dancing: people usually pose in similar ways, so we can more confidently predict the pose of a dancer under partial occlusion by referring to the other dancers' poses.

Our NL-RoI Block module is inspired by the non-local operations proposed by Wang et al. [23]. They present non-local operations as a family of generic building blocks for capturing long-range dependencies between different locations in a data domain: a location can be a pixel for visual data or a sample for acoustic data. In the visual domain, the dependencies may span space for tasks on a single static image, or space-time for tasks with an extra temporal dimension, such as video classification. In contrast, NL-RoI focuses on long-range dependencies at a higher level, between instances rather than pixels. Specifically, our method explicitly empowers the network to model correlations and attention between RoIs. By taking all pairs of RoIs in a scene into account in an efficient way, the NL-RoI Block benefits not only from neighboring RoIs but also from spatially separated ones.

2 Non-local RoI

We first introduce the general definition of the non-local RoI operation, following the notation of [23]. We then detail the implementation of the NL-RoI Block used in Robust Vision Challenge 2018. Fig. 1 shows the basic idea of how we apply the NL-RoI Block to augment the original RoI feature blobs.

2.1 Formulation

Inspired by the non-local operation in [23], we define a generic non-local RoI operation for use in conjunction with R-CNN based models [8]:

    y_i = \frac{1}{C(x)} \sum_{\forall j} f(x_i, x_j)\, g(x_j),    (1)

where i is the index of a target RoI whose non-local information is to be computed, and j enumerates all the RoIs, including the target one. The input feature blob is denoted by x and the output feature containing non-local information by y. A pairwise function f computes a scalar that reflects the correlation between the i-th target RoI and each of the RoIs (∀j). The unary function g maps the input feature of the j-th RoI to another representation, which gives the operation the capacity to convert the input feature into one more specialized for non-local information. Finally, the response is normalized by a factor C(x).

The non-local property of Eq. (1) originates from the fact that all RoIs are associated with one another in the operation. For each RoI, the non-local RoI operation computes responses based on the correlations between different RoIs. Ideally, each RoI gradually learns to characterize a meaningful instance during training; Eq. (1) thus enables an attention mechanism between instances. Moreover, this kind of non-local operation supports a variable number of input RoIs.
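As a concrete illustration, the generic operation in Eq. (1) can be sketched in NumPy. This is a simplified sketch, not the paper's implementation: each RoI's feature blob is flattened to a single vector, and f and g are passed in as plain Python callables.

```python
import numpy as np

def non_local_roi(x, f, g):
    """Generic non-local RoI operation in the spirit of Eq. (1).

    x : (N, D) array with one (flattened) feature vector per RoI.
    f : pairwise function f(x_i, x_j) -> non-negative scalar correlation.
    g : unary function g(x_j) -> transformed representation vector.
    Returns y : (N, D') array; y[i] aggregates information from all RoIs.
    """
    n = x.shape[0]
    gx = np.stack([g(x[j]) for j in range(n)])               # (N, D')
    y = np.zeros_like(gx)
    for i in range(n):
        # Correlation of the i-th target RoI with every RoI, itself included.
        w = np.array([f(x[i], x[j]) for j in range(n)])
        y[i] = (w / w.sum()) @ gx                            # normalized response
    return y
```

With f chosen as the Embedded Gaussian and g as a learned linear map, this reduces to the attention-style aggregation described in Section 2.2.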

Figure 1: Using an NL-RoI Block to extract augmented RoI-specific features.

2.2 Implementation of NL-RoI Block

While different instantiations of f can be chosen, Wang et al. [23] show by experiments that the non-local operations are not sensitive to the specific choice. For simplicity, we adopt the Embedded Gaussian version of f:

    f(x_i, x_j) = e^{\theta(x_i)^{\top} \phi(x_j)}.    (2)

Assume that we have N RoIs and C channels of input features, and that the aligned RoI spatial size is S×S; the input feature blob thus has shape N×C×S×S. The two embedding functions θ and φ are both chosen to be 1-by-1 2D convolutions that reduce the channel dimension of the input blob to C′. The purpose of f is to calculate the correlations between RoIs, so the output of f applied to the whole input blob should be an N-by-N matrix. The output blobs from θ and φ are reshaped to N×(C′SS), and a matrix multiplication on the reshaped outputs is performed to obtain the correlation matrix. The exponential and normalization terms are implemented by taking a softmax over the rows of the correlation matrix.
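The N-by-N correlation computation can be sketched as follows. This is an illustrative NumPy sketch under our own notation: the 1-by-1 convolutions θ and φ are modeled as per-position channel projections with weight matrices `w_theta` and `w_phi` (hypothetical names).

```python
import numpy as np

def softmax_rows(z):
    """Row-wise softmax, numerically stabilized."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def roi_correlation(x, w_theta, w_phi):
    """Embedded-Gaussian correlation matrix between RoIs.

    x       : (N, C, S, S) input RoI feature blob.
    w_theta : (C, Cr) channel projection standing in for the 1x1 conv theta.
    w_phi   : (C, Cr) channel projection standing in for the 1x1 conv phi.
    Returns an (N, N) row-stochastic matrix: row i holds the normalized
    correlations of RoI i with every RoI.
    """
    n, c, s, _ = x.shape
    xf = x.reshape(n, c, s * s)                                   # flatten spatial dims
    # A 1x1 conv is a linear map over channels applied at each spatial position.
    theta = np.einsum('ncp,cr->nrp', xf, w_theta).reshape(n, -1)  # (N, Cr*S*S)
    phi = np.einsum('ncp,cr->nrp', xf, w_phi).reshape(n, -1)      # (N, Cr*S*S)
    # The row softmax implements the exponential and the normalization factor.
    return softmax_rows(theta @ phi.T)
```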

It is worth noting that this form of f is essentially the same as the self-attention module in [22] for machine translation: for a given i, \frac{1}{C(x)} f(x_i, x_j) becomes a softmax computation along dimension j, so Eq. (1) reduces to the self-attention form in [22].

The remaining part of the non-local RoI operation, g, is responsible for extracting useful non-local information from the input feature. Following the bottleneck design of [10], we first use a 1-by-1 convolution to reduce the channel dimension and then a 3-by-3 convolution to take in the spatial information; a ReLU activation function [15] is used between the two convolution layers. To further cut down the memory cost, a global 2D average pooling is applied. Finally, the pooled feature blob of shape N×C′ is tiled along the spatial dimensions and appended to the end of the input blob, as shown in Fig. 2.
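The final tiling-and-concatenation step can be sketched as follows (a minimal NumPy sketch; it assumes the g branch and the correlation-weighted aggregation have already produced one pooled non-local vector per RoI):

```python
import numpy as np

def tile_and_append(x, nl_pooled):
    """Tile pooled non-local features spatially and append them to the input.

    x         : (N, C, S, S) original RoI feature blob.
    nl_pooled : (N, Cr) non-local features after global 2D average pooling.
    Returns     (N, C + Cr, S, S) augmented RoI feature blob.
    """
    n, c, s, _ = x.shape
    cr = nl_pooled.shape[1]
    # Broadcast each pooled vector to every spatial position of its RoI.
    tiled = np.broadcast_to(nl_pooled[:, :, None, None], (n, cr, s, s))
    return np.concatenate([x, tiled], axis=1)
```

Because the non-local feature is spatially constant per RoI, tiling costs no extra computation beyond the copy, and the downstream head convolutions can consume it alongside the local features.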

Figure 2: The detailed operations of an NL-RoI Block.

3 Instance Segmentation Model

Our NL-RoI Block is plugged into Mask R-CNN to perform instance segmentation. The backbone network for image feature extraction is ResNet-50 with FPN [13]. We replace batch normalization [11] by group normalization [24] for better training stability and convergence with a smaller batch size.


The core training datasets for our method include Cityscapes [3], Kitti Instance Segmentation [1], WildDash [25], and ScanNet [4]. In addition, we use ADE20K [26] to provide more furniture samples for training. There are 76,528 valid training images in total. We train for 136K iterations, decaying the learning rate at the 56K-th, 76K-th, and 116K-th iterations. We use a weight decay of 0.0001 and a momentum of 0.9. Pre-trained weights for the corresponding Mask R-CNN architecture from Detectron [7] are loaded during initialization.


At inference time, the input image is resized to 800 pixels on the shorter side. If the longer side of the resized image would exceed 1,333 pixels, we instead resize the image so that the longer side is exactly 1,333 pixels. Soft-NMS [2] and box voting [6] are also used during inference.
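This resizing rule can be sketched as a small helper (the function name is ours):

```python
def inference_size(height, width, short_target=800, long_cap=1333):
    """Test-time image size: scale the shorter side to 800 px, but cap the
    scale so that the longer side never exceeds 1,333 px."""
    scale = short_target / min(height, width)
    if max(height, width) * scale > long_cap:
        scale = long_cap / max(height, width)
    return round(height * scale), round(width * scale)
```

For a wide frame such as a 1024x2048 Cityscapes image, the cap applies and the longer side is limited to 1,333 pixels rather than the shorter side reaching 800.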

All implementations of the proposed NL-RoI Block and the related modifications are based on the PyTorch deep learning framework [16] and the Detectron.pytorch GitHub repository [21] of the first author, Roy Tseng.

4 Benchmark Results

Table 1 summarizes the instance segmentation benchmark results of NL-RoI on the four datasets involved in Robust Vision Challenge 2018. Fig. 3 shows two sample results on the Kitti test set.

 Dataset AP50:95 AP50 AP100m AP50m Neg AP
Kitti 16.37% 34.5% - - -
Cityscapes 24.0% 45.8% 36.1% 40.8% -
WildDash 19.4% 34.0% - - 19.7%
ScanNet 11% - - - -
Table 1: ROB 2018 Instance Segmentation Benchmarks. AP50:95: average precision averaged over overlap thresholds from 0.5 to 0.95 in steps of 0.05. AP50: average precision at overlap 0.5. AP100m/AP50m: average precision on objects within 100 m/50 m distance. Neg AP: average precision on images with visual hazards such as blur, distortion, and overexposure.
Figure 3: Instance segmentation sample results on Kitti test set.


  • [1] H. A. Alhaija, S. K. Mustikovela, L. Mescheder, A. Geiger, and C. Rother. Augmented reality meets deep learning for car instance segmentation in urban scenes. In British Machine Vision Conference (BMVC), 2017.
  • [2] N. Bodla, B. Singh, R. Chellappa, and L. S. Davis. Soft-NMS: Improving object detection with one line of code. In International Conference on Computer Vision (ICCV), pages 5562–5570, 2017.
  • [3] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In Computer Vision and Pattern Recognition (CVPR), 2016.
  • [4] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Computer Vision and Pattern Recognition (CVPR), 2017.
  • [5] C. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg. DSSD : Deconvolutional single shot detector. CoRR, abs/1701.06659, 2017.
  • [6] S. Gidaris and N. Komodakis. Object detection via a multi-region and semantic segmentation-aware CNN model. In International Conference on Computer Vision (ICCV), pages 1134–1142, 2015.
  • [7] R. Girshick, I. Radosavovic, G. Gkioxari, P. Dollár, and K. He. Detectron, 2018.
  • [8] R. B. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), pages 580–587, 2014.
  • [9] K. He, G. Gkioxari, P. Dollár, and R. B. Girshick. Mask R-CNN. In International Conference on Computer Vision (ICCV), pages 2980–2988, 2017.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
  • [11] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pages 448–456, 2015.
  • [12] Z. Li, C. Peng, G. Yu, X. Zhang, Y. Deng, and J. Sun. Light-head R-CNN: in defense of two-stage object detector. CoRR, abs/1711.07264, 2017.
  • [13] T. Lin, P. Dollár, R. B. Girshick, K. He, B. Hariharan, and S. J. Belongie. Feature pyramid networks for object detection. In Computer Vision and Pattern Recognition (CVPR), pages 936–944, 2017.
  • [14] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. E. Reed, C. Fu, and A. C. Berg. SSD: single shot multibox detector. In European Conference on Computer Vision (ECCV), pages 21–37, 2016.
  • [15] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In International Conference on Machine Learning (ICML), pages 807–814, 2010.
  • [16] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. In Neural Information Processing Systems Workshop (NIPS-W), 2017.
  • [17] J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In Computer Vision and Pattern Recognition (CVPR), pages 779–788, 2016.
  • [18] J. Redmon and A. Farhadi. YOLO9000: better, faster, stronger. In Computer Vision and Pattern Recognition (CVPR), pages 6517–6525, 2017.
  • [19] J. Redmon and A. Farhadi. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018.
  • [20] S. Ren, K. He, R. B. Girshick, and J. Sun. Faster R-CNN: towards real-time object detection with region proposal networks. In Neural Information Processing Systems (NIPS), pages 91–99, 2015.
  • [21] S.-Y. R. Tseng. Detectron.pytorch, 2018.
  • [22] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In Neural Information Processing Systems (NIPS), pages 6000–6010, 2017.
  • [23] X. Wang, R. Girshick, A. Gupta, and K. He. Non-local neural networks. In Computer Vision and Pattern Recognition (CVPR), 2018.
  • [24] Y. Wu and K. He. Group normalization. arXiv preprint arXiv:1803.08494, 2018.
  • [25] O. Zendel, M. Murschitz, M. Humenberger, and W. Herzner. How good is my test data? introducing safety analysis for computer vision. International Journal of Computer Vision, 125(1-3):95–109, 2017.
  • [26] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba. Scene parsing through ade20k dataset. In Computer Vision and Pattern Recognition (CVPR), 2017.