Instance segmentation aims to assign each pixel in an image to an object category. Different from semantic segmentation [32, 10, 6, 34, 39], instance segmentation also differentiates individual object instances. Modern instance segmentation methods typically adapt object detection frameworks, where bounding-box detection is first performed, followed by segmentation inside each of the detected bounding-boxes. Instance segmentation approaches can generally be divided into two-stage [21, 31, 8, 23, 17] and single-stage [37, 13, 47, 2, 42, 36] methods, based on the underlying detection framework. Two-stage methods typically generate multiple object proposals in the first stage. In the second stage, they perform feature pooling operations on each proposal, followed by box regression, classification, and mask prediction. Different from two-stage methods, single-stage approaches do not require proposal generation or pooling operations and instead employ dense predictions of bounding-boxes and instance masks. Although two-stage methods dominate in terms of accuracy, they are generally slow, which restricts their usability in real-time applications.
As discussed above, most single-stage methods are inferior in accuracy compared to their two-stage counterparts. A notable exception is the single-stage TensorMask , which achieves accuracy comparable to two-stage methods. However, TensorMask achieves this accuracy at the cost of reduced speed. In fact, TensorMask  is slower than several two-stage methods, including Mask R-CNN . Recently, YOLACT  has been shown to achieve an optimal tradeoff between speed and accuracy. On the COCO benchmark , the single-stage YOLACT operates at real-time (33 frames per second), while obtaining competitive accuracy. YOLACT achieves real-time speed mainly by avoiding the proposal generation and feature pooling head networks that are commonly employed in two-stage methods. While operating at real-time, YOLACT still lags behind modern two-stage methods (e.g., Mask R-CNN ) in terms of accuracy.
In this work, we argue that one of the key reasons behind the sub-optimal accuracy of YOLACT is the loss of spatial information within an object (bounding-box). We attribute this loss of spatial information to the utilization of a single set of object-aware coefficients to predict the whole mask of an object. As a result, YOLACT struggles to accurately delineate spatially adjacent object instances (Fig. 1). To address this issue, we introduce an approach that comprises a novel, computationally efficient spatial preservation (SP) module to preserve spatial information within a bounding-box. Our SP module predicts object-aware spatial coefficients that split mask prediction into multiple sub-mask predictions, thereby enabling improved delineation of spatially adjacent objects (Fig. 1).
Contributions: We propose a fast anchor-free single-stage instance segmentation approach, called SipMask, with the following contributions.
We propose a novel light-weight spatial preservation (SP) module that preserves the spatial information within a bounding-box. Our SP module generates a separate set of spatial coefficients for each bounding-box sub-region, enabling improved delineation of spatially adjacent objects.
We introduce two strategies to better correlate mask prediction with object detection. First, we propose a mask alignment weighting loss that assigns higher weights to the mask prediction errors occurring at accurately detected boxes. Second, a feature alignment scheme is introduced to improve the feature representation for both box classification and spatial coefficients.
Comprehensive experiments are performed on the COCO benchmark . Our single-scale inference model based on the ResNet101-FPN backbone outperforms the state-of-the-art single-stage TensorMask  in terms of both mask accuracy (absolute gain of 1.0% on COCO test-dev) and speed (four-fold speedup). Compared with the real-time YOLACT , our SipMask provides an absolute gain of 3.0% on COCO test-dev, while operating at comparable speed.
The proposed SipMask can be extended to single-stage video instance segmentation by adding a fully-convolutional branch for tracking instances across video frames. On the YouTube-VIS dataset , our single-stage approach achieves favourable performance while operating at real-time (30 fps).
2 Related Work
Deep learning has achieved great success in a variety of computer vision tasks [20, 12, 43, 35, 45, 44, 25, 24, 53, 52]. Existing instance segmentation methods follow either bottom-up [1, 26, 30, 33, 19] or top-down [21, 31, 8, 2, 36] paradigms. Modern instance segmentation approaches typically follow the top-down paradigm, where bounding-boxes are first detected and then segmented. Top-down approaches are divided into two-stage [21, 31, 8, 23, 17] and single-stage [13, 47, 2, 42, 36] methods. Among the two-stage methods, Mask R-CNN  employs a proposal generation network (RPN) and utilizes the RoIAlign feature pooling strategy (Fig. 2(a)) to obtain fixed-size features for each proposal. The pooled features are used for box detection and mask prediction. A position-sensitive feature pooling strategy, PSRoI  (Fig. 2(b)), is proposed in FCIS . PANet  proposes an adaptive feature pooling that allows each proposal to access information from multiple layers of the FPN. MS R-CNN  introduces an additional branch to predict mask quality (mask-IoU). MS R-CNN performs mask confidence rescoring without improving mask quality. In contrast, our mask alignment loss aims to improve mask quality at accurate detections.
Different from two-stage methods, single-stage approaches [2, 13, 47, 42] typically aim at faster inference by avoiding proposal generation and feature pooling strategies. However, most single-stage approaches are generally inferior in accuracy compared to their two-stage counterparts. Recently, YOLACT  obtains an optimal tradeoff between accuracy and speed by predicting a dictionary of category-independent maps (basis masks) for an image and a single set of instance-specific coefficients per detection. Despite its real-time capability, YOLACT achieves inferior accuracy compared to two-stage methods. Different from YOLACT, which uses a single set of coefficients for each bounding-box (Fig. 2(c)), our novel SP module aims at preserving spatial information within a bounding-box. The SP module generates multiple sets of spatial coefficients that split mask prediction into different sub-regions of a bounding-box (Fig. 2(d)). Further, the SP module contains a feature alignment scheme that improves the feature representation by aligning the predicted instance mask with the detected bounding-box. Our SP module differs from feature pooling strategies, such as PSRoI , in several ways. Instead of pooling features into a fixed size ($k \times k$), we perform a simple linear combination between spatial coefficients and basis masks without any feature resizing operation. This preservation of feature resolution is especially suitable for large objects. PSRoI pooling (Fig. 2(b)) generates feature maps of $2k^{2}(C+1)$ channels, where $k \times k$ is the pooled feature size and $C$ is the number of classes. In practice, such a pooling operation is memory expensive (7938 channels for $k=7$ and $C=80$). Instead, our design is memory efficient, since the basis masks occupy only a small, fixed number of channels for the whole image and the spatial coefficients form a vector of the same small dimension for each sub-region of a bounding-box (Fig. 2(d)). Further, compared to contemporary work  using RoIPool-based feature maps, our approach utilizes fewer coefficients on the original basis masks. Moreover, our SipMask can be adapted for real-time single-stage video instance segmentation.
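For concreteness, the position-sensitive channel count quoted above works out as

$$2k^{2}(C+1) = 2 \times 7^{2} \times (80+1) = 7938 \ \text{channels},$$

whereas the channel count of the basis-mask representation does not scale with the number of classes or the pooled feature size.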
Overall Architecture: Fig. 3(a) shows the overall architecture of our single-stage anchor-free method, SipMask, named for its instance-specific spatial information preservation characteristic. Our architecture is built on the FCOS detection method , due to its flexible anchor-free design. In the proposed architecture, we replace the standard classification and regression branches of FCOS with our mask-specialized classification and regression branches, both of which are fully convolutional. Our mask-specialized classification branch predicts the classification scores of detected bounding-boxes and generates the instance-specific spatial coefficients used for instance mask prediction. The focus of our design is the introduction of a novel spatial preservation (SP) module, within the mask-specialized classification branch, to obtain improved mask predictions. Our SP module further enables better delineation of spatially adjacent objects. The SP module first performs feature alignment using the final regressed bounding-box locations. The resulting aligned features are then utilized both for box classification and for generating the spatial coefficients required for mask prediction. The spatial coefficients are introduced to preserve spatial information within an object bounding-box. In our framework, we divide the bounding-box into sub-regions and compute a separate set of spatial coefficients for each sub-region. Our mask-specialized regression branch generates both the bounding-box offsets for each instance and a set of category-independent maps, termed basis masks, for the image. Our basis masks are constructed by capturing contextual information from different prediction layers of the FPN.
The spatial coefficients predicted for each of the sub-regions within a bounding-box, along with the image-specific basis masks, are utilized in our spatial mask prediction (SMP) module (Fig. 3(b)). Our SMP module generates separate map predictions for the respective sub-regions within the bounding-box. These separate map predictions are then combined to obtain the final instance mask prediction.
3.1 Spatial Preservation Module
Besides box classification, our mask-specialized classification branch comprises a novel spatial preservation (SP) module. Our SP module performs two tasks: spatial coefficients generation and feature alignment.
The spatial coefficients are introduced to improve mask prediction by preserving spatial information within a bounding-box.
Our feature alignment scheme aims at improving the feature representation for both box classification and spatial coefficients generation.
Spatial Coefficients Generation: As discussed earlier, the recently introduced YOLACT  utilizes a single set of coefficients to predict the whole mask of an object, leading to the loss of spatial information within a bounding-box. To address this issue, we propose a simple but effective approach that splits mask prediction into multiple sub-mask predictions. We divide the spatial region within a predicted bounding-box into sub-regions. Instead of predicting a single set of coefficients for the whole bounding-box, we predict a separate set of spatial coefficients for each of its sub-regions. Fig. 3(b) shows an example where a bounding-box is divided into four sub-regions (quadrants, i.e., top-left, top-right, bottom-left, and bottom-right). In practice, we observe that this division into four quadrants provides an optimal tradeoff between speed and accuracy. Note that our spatial coefficients utilize improved features obtained through the feature alignment operation described next.
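As a rough illustration, the following PyTorch-style sketch predicts one set of coefficients per quadrant at every spatial location of the classification tower. The channel width (256) and the number of basis masks (32) are assumptions for illustration only, not values prescribed by this section.

```python
import torch
import torch.nn as nn

class SpatialCoefficientHead(nn.Module):
    """Sketch: predict a separate coefficient vector for each of the
    four sub-regions (quadrants) at every location of the head."""
    def __init__(self, channels=256, num_bases=32, num_regions=4):
        super().__init__()
        self.num_regions = num_regions
        self.num_bases = num_bases
        self.coeff = nn.Conv2d(channels, num_regions * num_bases, 3, padding=1)

    def forward(self, aligned_feat):
        # aligned_feat: (N, C, H, W) features after feature alignment
        n, _, h, w = aligned_feat.shape
        out = self.coeff(aligned_feat)
        # one set of num_bases coefficients per quadrant and per location
        return out.view(n, self.num_regions, self.num_bases, h, w)

coeffs = SpatialCoefficientHead()(torch.randn(1, 256, 40, 40))
print(coeffs.shape)  # torch.Size([1, 4, 32, 40, 40])
```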
Feature Alignment Scheme: Generally, a convolutional layer operates on a regular rectangular grid (e.g., a $3 \times 3$ kernel). Thus, the features extracted for classification and coefficient generation may fail to align with the regressed bounding-box. Our feature alignment scheme addresses this issue by aligning the features with the regressed box location, resulting in an improved feature representation. For feature alignment, we introduce a deformable convolutional layer [16, 51, 5] in our mask-specialized classification branch. The inputs to the deformable convolutional layer are the regression offsets to the left, right, top, and bottom sides of the ground-truth bounding-box, obtained from the mask-specialized regression branch (Sec. 3.2). These offsets are utilized to estimate the kernel offsets $\Delta p_k$ that augment the regular sampling grid $\mathcal{G}$ of the deformable convolution operator, resulting in an aligned feature $\hat{f}(p)$ at position $p$, as follows:

$$\hat{f}(p) = \sum_{p_k \in \mathcal{G}} w(p_k) \cdot f(p + p_k + \Delta p_k),$$

where $f$ is the input feature and $p_k$ is the original position of the convolutional weight $w(p_k)$ in the regular grid $\mathcal{G}$. Different from [51, 50], which aim to learn accurate geometric localization, our approach aims to generate better features for box classification and coefficient generation.
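A minimal sketch of this alignment step is given below (PyTorch-style). It assumes 256-channel tower features and uses a learned 1x1 convolution to map the four per-location regression offsets to deformable-convolution kernel offsets; this mapping is only one possible realization of the scheme described above, not the exact implementation.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class FeatureAlignment(nn.Module):
    """Sketch: align features with the regressed box via deformable conv."""
    def __init__(self, channels=256, kernel_size=3):
        super().__init__()
        # project the 4 regression offsets (l, t, r, b) at each location
        # to 2 * k * k kernel offsets (an assumed, learned mapping)
        self.offset_conv = nn.Conv2d(4, 2 * kernel_size * kernel_size, 1)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size,
                                        padding=kernel_size // 2)

    def forward(self, feat, box_reg):
        # feat:    (N, C, H, W) features of the classification branch
        # box_reg: (N, 4, H, W) per-location distances to the box sides
        offsets = self.offset_conv(box_reg)      # (N, 2*k*k, H, W)
        return self.deform_conv(feat, offsets)   # aligned features

feat = torch.randn(2, 256, 40, 40)
box_reg = torch.randn(2, 4, 40, 40).abs()
aligned = FeatureAlignment()(feat, box_reg)      # (2, 256, 40, 40)
```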
Next, we describe the mask-specialized regression branch.
3.2 Mask-specialized Regression Branch
Our mask-specialized regression branch performs box regression and generates a set of category-independent basis masks for the image. Note that YOLACT utilizes a single FPN prediction layer to generate the basis masks. Instead, the basis masks in our SipMask are generated by exploiting multi-layer information from different prediction layers of the FPN. The incorporation of multi-layer information helps to obtain a continuous mask (especially on large objects) and to remove background clutter. Further, it helps in scenarios such as partial occlusion and large-scale variation. Here, objects of various sizes are predicted at different prediction layers of the FPN (P3 to P7). To capture multi-layer information, the features from the P3, P4, and P5 layers of the FPN are utilized to generate the basis masks; P6 and P7 are excluded from basis mask generation to reduce the computational cost. The outputs from P4 and P5 are first upsampled to the resolution of P3 using bilinear interpolation. The resulting features from all three prediction layers (P3, P4, and P5) are concatenated, followed by a convolution to generate the basis-mask feature maps. Finally, these feature maps are upsampled four times using bilinear interpolation, resulting in the basis masks of the image. Both the spatial coefficients (Sec. 3.1) and the basis masks are utilized in our spatial mask prediction (SMP) module for the final instance mask prediction.
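The basis-mask generation described above can be sketched as follows (PyTorch-style). The FPN channel width (256), the number of basis masks (32), and the single 3x3 convolution are assumptions made for this sketch rather than the exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasisMaskHead(nn.Module):
    """Sketch: fuse P3-P5 of the FPN into image-level basis masks."""
    def __init__(self, in_channels=256, num_bases=32):
        super().__init__()
        self.reduce = nn.Conv2d(3 * in_channels, num_bases, 3, padding=1)

    def forward(self, p3, p4, p5):
        # upsample the coarser levels to the resolution of P3
        p4 = F.interpolate(p4, size=p3.shape[-2:], mode='bilinear',
                           align_corners=False)
        p5 = F.interpolate(p5, size=p3.shape[-2:], mode='bilinear',
                           align_corners=False)
        x = self.reduce(torch.cat([p3, p4, p5], dim=1))
        # upsample four times, as described above, to obtain the basis masks
        return F.interpolate(x, scale_factor=4, mode='bilinear',
                             align_corners=False)
```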
3.3 Spatial Mask Prediction Module
Given an input image, our spatial mask prediction (SMP) module takes the predicted bounding-boxes, basis masks, and spatial coefficients as inputs and predicts the final instance masks. Let $B \in \mathbb{R}^{H \times W \times m}$ represent the $m$ predicted basis masks for the whole image, $N$ be the number of predicted boxes, and $C^{i} \in \mathbb{R}^{m \times N}$ be a matrix that holds the spatial coefficients at the $i$-th sub-region (quadrant, $i \in \{1, 2, 3, 4\}$) of all predicted bounding-boxes. Note that the $j$-th column of $C^{i}$ (i.e., $c^{i}_{j}$) holds the spatial coefficients for the $j$-th bounding-box (Sec. 3.1). We perform a simple matrix multiplication between $B$ and $C^{i}$ to obtain the maps corresponding to the $i$-th quadrant of all bounding-boxes as follows:

$$M^{i} = \sigma(B \otimes C^{i}),$$

where $\sigma$ denotes sigmoid normalization and $M^{i}$ are the maps generated for the $i$-th quadrant of all bounding-boxes. Fig. 4(b) shows the procedure to obtain the final mask of an instance $j$. Let $M^{i}_{j}$ be the map generated for the $i$-th quadrant of a bounding-box $j$. Then, the response values of $M^{i}_{j}$ outside the $i$-th quadrant of the box are set to zero, generating a pruned map $\tilde{M}^{i}_{j}$. To obtain the instance map $M_{j}$ of a bounding-box $j$, we perform a simple addition of its pruned maps obtained from all four quadrants, i.e., $M_{j} = \sum_{i=1}^{4} \tilde{M}^{i}_{j}$. Finally, the instance map $M_{j}$ at the predicted bounding-box region is binarized with a fixed threshold to obtain the final mask of instance $j$.
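The complete SMP step can be summarized by the following PyTorch-style sketch. The tensor layouts, the quadrant bookkeeping, and the threshold value are assumptions made for illustration; boxes are assumed to be given (and clipped) in the coordinate frame of the basis masks.

```python
import torch

def spatial_mask_prediction(bases, coeffs, boxes, thr=0.5):
    """Sketch of SMP.
    bases:  (m, H, W)   image-level basis masks
    coeffs: (N, 4, m)   one set of m spatial coefficients per quadrant
    boxes:  (N, 4)      predicted boxes (x1, y1, x2, y2), clipped to H, W
    """
    m, H, W = bases.shape
    N = coeffs.shape[0]
    # linear combination of basis masks, per box and per quadrant
    maps = torch.sigmoid(torch.einsum('nqm,mhw->nqhw', coeffs, bases))
    masks = []
    for j in range(N):
        x1, y1, x2, y2 = boxes[j].long().tolist()
        cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
        quads = [(x1, y1, cx, cy), (cx, y1, x2, cy),   # top-left, top-right
                 (x1, cy, cx, y2), (cx, cy, x2, y2)]   # bottom-left, bottom-right
        inst = torch.zeros(H, W)
        for q, (qx1, qy1, qx2, qy2) in enumerate(quads):
            pruned = torch.zeros(H, W)
            # keep responses only inside the q-th quadrant of the box
            pruned[qy1:qy2, qx1:qx2] = maps[j, q, qy1:qy2, qx1:qx2]
            inst += pruned                     # sum the four pruned maps
        masks.append(inst > thr)               # binarize with a fixed threshold
    return torch.stack(masks)
```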
Fig. 4 shows a visual comparison of mask prediction based on a single set of coefficients, as in YOLACT, with our mask prediction based on a separate set of spatial coefficients for each sub-region. The top-left pixels of an adjacent ‘cat’ instance appear inside the top-right quadrant of the detected ‘cat’ instance bounding-box (in red). In Fig. 4(a), a linear combination of a single set of instance-specific coefficients and the image-level basis masks is used to obtain a map. The response values of the map outside the box are set to zero to produce a pruned mask, which is thresholded to obtain the final mask. Instead, our SipMask (Fig. 4(b)) generates a separate set of instance-specific spatial coefficients for each sub-region within a bounding-box. By separating the mask predictions into different sub-regions of a box, our SipMask reduces the influence of adjacent (overlapping) object instances on the final mask prediction.
3.4 Loss Function
The overall loss function of our framework contains loss terms corresponding to bounding-box detection (classification and regression) and mask generation. For box classification $L_{cls}$ and box regression $L_{reg}$, we utilize the focal loss and the IoU loss, respectively, as in FCOS . For mask generation, we introduce a novel mask alignment weighting loss $L_{mask}$ that better correlates mask predictions with high quality bounding-box detections. Different from YOLACT, which utilizes a standard pixel-wise binary cross-entropy (BCE) loss during training, our $L_{mask}$ improves the BCE loss with a mask alignment weighting scheme that assigns higher weights to the masks obtained from high quality bounding-box detections.
Mask Alignment Weighting: In our mask alignment weighting, we first compute the overlap $o_j$ between a predicted bounding-box $j$ and the corresponding ground-truth. The weighting factor $\alpha_j$ is then obtained by multiplying the overlap with the classification score $s_j$ of the bounding-box $j$, i.e., $\alpha_j = o_j \cdot s_j$. Here, a higher $\alpha_j$ indicates a good quality bounding-box detection. Consequently, $\alpha_j$ is used to weight the mask loss of the instance $j$, leading to $L_{mask} = \frac{1}{N} \sum_{j=1}^{N} \alpha_j \cdot L_{BCE}^{j}$, where $N$ is the number of bounding-boxes and $L_{BCE}^{j}$ is the pixel-wise BCE loss of instance $j$. Our weighting strategy encourages the network to predict a high quality instance mask for a high quality bounding-box detection. The proposed mask alignment weighting loss is utilized along with the loss terms corresponding to bounding-box detection (classification and regression) in our overall loss function: $L = L_{cls} + L_{reg} + L_{mask}$.
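A compact sketch of this weighting (PyTorch-style) is given below; the tensor layouts and the choice of averaging the BCE loss over pixels before weighting are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def mask_alignment_weighted_loss(pred_masks, gt_masks, box_ious, cls_scores):
    """Sketch of the mask alignment weighting loss.
    pred_masks: (N, H, W) predicted instance masks (logits)
    gt_masks:   (N, H, W) ground-truth masks in {0, 1} (float)
    box_ious:   (N,)      overlap of each predicted box with its ground-truth
    cls_scores: (N,)      classification score of each predicted box
    """
    # per-instance pixel-wise BCE, averaged over pixels
    bce = F.binary_cross_entropy_with_logits(
        pred_masks, gt_masks, reduction='none').mean(dim=(1, 2))
    # weight each instance by overlap * classification score
    weights = box_ious * cls_scores
    return (weights * bce).mean()
```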
3.5 Single-stage Video Instance Segmentation
In addition to still image instance segmentation, we investigate our single-stage SipMask for the problem of real-time video instance segmentation. In video instance segmentation, the aim is to simultaneously detect, segment, and track instances in videos.
To perform real-time single-stage video instance segmentation, we simply extend our SipMask by introducing an additional fully-convolutional branch, in parallel to the mask-specialized classification and regression branches, for instance tracking. The fully-convolutional branch consists of two convolutional layers. The output feature maps of the different prediction layers processed by this branch are then fused to obtain the tracking feature maps, similar to the basis mask generation in our mask-specialized regression branch. Different from the state-of-the-art MaskTrack R-CNN , which utilizes RoIAlign and fully-connected operations, our SipMask extracts a tracking feature vector from the tracking feature maps at the bounding-box center to represent each instance. The metric for matching instances between different frames is similar to that of MaskTrack R-CNN. Our SipMask is simple and efficient, and achieves favourable performance for video instance segmentation (Sec. 4.4).
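A simplified sketch of this tracking step is given below (PyTorch-style). The feature-similarity matching shown here is only a reduced stand-in for the full metric, which follows MaskTrack R-CNN and combines additional cues; variable names and layouts are assumptions.

```python
import torch
import torch.nn.functional as F

def extract_track_features(track_map, centers):
    """Sketch: read a tracking feature vector at each detected box center.
    track_map: (C, H, W) tracking feature map
    centers:   (N, 2)    integer (x, y) box centers on the feature map
    """
    return torch.stack([track_map[:, y, x] for x, y in centers.tolist()])

def match_instances(curr_feats, prev_feats):
    """Assign current detections to previously seen instances using the
    similarity of their tracking feature vectors (simplified matching)."""
    sim = F.normalize(curr_feats, dim=1) @ F.normalize(prev_feats, dim=1).t()
    return sim.argmax(dim=1)   # index of the best-matching previous instance
```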
4.1 Dataset and Implementation Details
Dataset: We conduct experiments on the COCO dataset , where the trainval set has about 115k images, the minival set has 5k images, and the test-dev set has about 20k images. We perform training on the trainval set, present the state-of-the-art comparison on the test-dev set, and report the ablations on the minival set.
Implementation Details: We adopt ResNet  (ResNet50/ResNet101) with FPN, pre-trained on ImageNet , as the backbone. Our method is trained on eight GPUs using SGD for optimization. During training, the initial learning rate is set to 0.01. For the ablation study, we use a 1× training scheme at a single scale to reduce the training time. For a fair comparison with the state-of-the-art single-stage methods [11, 2], we follow the 6× multi-scale training scheme. During inference, we select the top 100 bounding-boxes with the highest classification scores after NMS. For these bounding-boxes, a simple linear combination between the predicted spatial coefficients and the basis masks is used to obtain the instance masks.
|Method||Backbone||Time (ms)||AP||AP50||AP75||APS||APM||APL|
|Mask R-CNN ||ResNet101-FPN||116||35.7||58.0||37.8||15.5||38.1||52.4|
|Mask R-CNN* ||ResNet101-FPN||116||38.3||61.2||40.8||18.2||40.6||54.1|
|MS R-CNN ||ResNet101-FPN||117||38.3||58.8||41.5||17.8||40.4||54.4|
|Single-Stage: Large input size|
|Single-Stage: Small input size|
4.2 State-of-the-art Comparison
Here, we compare our method with two-stage [14, 27, 18, 9, 21, 31, 23, 8, 4] and single-stage [54, 2, 46, 11] methods on the COCO test-dev set. Tab. 1 shows the comparison in terms of both speed and accuracy. Most existing methods use a large input image size (except YOLACT , which operates on a smaller input size). Among existing two-stage methods, Mask R-CNN  and PANet  achieve overall mask AP scores of 35.7 and 36.6, respectively. The recently introduced MS R-CNN  and HTC  obtain mask AP scores of 38.3 and 39.7, respectively. Note that HTC achieves this improved accuracy at the cost of a significant reduction in speed. Further, most two-stage approaches require more than 100 milliseconds (ms) to process an image.
In the case of single-stage methods, PolarMask  obtains a mask AP of 30.4 and RDSNet  achieves a mask AP of 36.4. Among these single-stage methods, TensorMask  obtains the best results with a mask AP of 37.1. Our SipMask, under similar settings (input size and backbone), outperforms TensorMask with an absolute gain of 1.0%, while obtaining a four-fold speedup. In particular, our SipMask achieves an absolute gain of 2.7% on large objects, compared to TensorMask.
In terms of fast instance segmentation and real-time capabilities, we compare our SipMask with YOLACT  using two different backbone models (ResNet50/ResNet101-FPN). Compared to YOLACT, our SipMask achieves an absolute gain of 3.0% without any significant reduction in speed (YOLACT: 30 ms vs. SipMask: 32 ms). A recent variant of YOLACT, called YOLACT++ , utilizes a deformable backbone (ResNet101-Deform  with interval 3) and a mask scoring strategy. For a fair comparison, we also integrate these two ingredients into our SipMask, referred to as SipMask++. When using a similar input size and the same backbone, our SipMask++ achieves improved mask accuracy while operating at the same speed as YOLACT++. Fig. 5 shows example instance segmentation results of our SipMask on COCO test-dev.
4.3 Ablation Study
We perform an ablation study on the COCO minival set with the ResNet50-FPN backbone . First, we show the impact of progressively integrating our different components into the baseline: the spatial preservation (SP) module (Sec. 3.1), the contextual basis masks (CBM) obtained by integrating context information from different FPN prediction layers (Sec. 3.2), and the mask alignment weighting loss (WL) (Sec. 3.4). Note that our baseline is similar to YOLACT, obtaining the basis masks from only the highest-resolution FPN prediction layer and using a single set of coefficients for mask prediction. The results are presented in Tab. 3. The baseline achieves a mask AP of 31.2. All our components (SP, CBM, and WL) contribute towards improved mask accuracy. In particular, the largest improvement in mask accuracy over the baseline comes from our SP module. Our final SipMask, integrating all contributions, obtains an absolute gain of 3.1% in mask AP compared to the baseline. We also evaluate the impact of adding each of our components individually to the baseline, with results shown in Tab. 3. Among these components, the spatial coefficients provide the largest improvement in accuracy over the baseline. It is worth mentioning that the spatial coefficients and the feature alignment together constitute our spatial preservation (SP) module. These results suggest that each of our components individually contributes towards improving the final performance.
Fig. 6 shows example results highlighting the spatial delineation capabilities of our spatial preservation (SP) module. We show the input image with the detected bounding-box (red) together with the mask prediction based on a single set of coefficients (baseline) and our mask prediction based on a separate set of spatial coefficients. Our approach is able to provide improved delineation of spatially adjacent instances, leading to superior mask predictions.
As discussed in Sec. 3.1, our SP module generates a separate set of spatial coefficients for each sub-region within a bounding-box. Here, we perform a study by varying the number of sub-regions used to obtain the spatial coefficients. Tab. 5 shows that a large gain in performance is obtained when going from a single set of coefficients for the whole box to a separate set for each of the four sub-regions. We also observe that the performance increases only marginally when the number of sub-regions is increased further. In practice, we find that four sub-regions provide an optimal tradeoff between speed and accuracy. As discussed earlier (Sec. 3.4), our mask alignment weighting loss re-weights the pixel-level BCE loss using both classification (class scores) and localization (overlap with the ground-truth) information. Here, we analyze the effect of classification (only cls.) and localization (only loc.) on our mask alignment weighting loss in Tab. 5. Both the classification and the localization terms are useful for re-weighting the BCE loss and lead to improved mask prediction.
4.4 Video Instance Segmentation Results
In addition to image instance segmentation, we demonstrate the effectiveness of our SipMask, with the modifications described in Sec. 3.5, for real-time video instance segmentation. We conduct experiments on the recently introduced large-scale YouTube-VIS dataset . The YouTube-VIS dataset contains 2883 videos, 4883 objects, 131k instance masks, and 40 object categories. Tab. 6 shows the state-of-the-art comparison on the YouTube-VIS validation set. When using the same input size and backbone (ResNet50-FPN), our SipMask outperforms the state-of-the-art MaskTrack R-CNN  with an absolute gain of 2.2% in terms of mask accuracy (AP). Further, our SipMask achieves impressive mask accuracy while operating at real-time (30 fps) on a Titan Xp. Fig. 7 shows video instance segmentation results on example frames from the validation set.
|Method||Type||AP||AP50||AP75||AR1||AR10|
|OSMN ||mask propagation||23.4||36.5||25.7||28.9||31.1|
|FEELVOS ||mask propagation||26.9||42.0||29.7||29.9||33.4|
|MaskTrack R-CNN ||track-by-detect||30.3||51.1||32.6||31.0||35.5|
|Our SipMask ms-train||track-by-detect||33.7||54.1||35.8||35.4||40.1|
We introduce a fast single-stage instance segmentation method, SipMask, that aims at preserving spatial information within a bounding-box. A novel light-weight spatial preservation (SP) module is designed to produce a separate set of spatial coefficients for each sub-region of a box, thereby splitting the mask prediction of an object into different sub-regions. To better correlate mask prediction with object detection, a feature alignment scheme and a mask alignment weighting loss are further proposed. We also show that our SipMask is easily extended to real-time video instance segmentation. Our comprehensive experiments on the COCO dataset show the effectiveness of the proposed contributions, leading to state-of-the-art single-stage instance segmentation performance. With the same instance segmentation framework and a reduced input resolution, our SipMask operates at real-time on a single Titan Xp with a mask accuracy of 32.8 on COCO test-dev.
This work was supported by National Key R&D Program (2018AAA0102800) and National Natural Science Foundation (61906131, 61632018) of China.
-  Arnab, A., Torr, P.H.: Pixelwise instance segmentation with a dynamically instantiated network. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2017)
-  Bolya, D., Zhou, C., Xiao, F., Lee, Y.J.: Yolact: Real-time instance segmentation. Proc. IEEE International Conf. Computer Vision (2019)
-  Bolya, D., Zhou, C., Xiao, F., Lee, Y.J.: Yolact++: Better real-time instance segmentation. arXiv:1912.06218 (2020)
-  Cao, J., Cholakkal, H., Anwer, R.M., Khan, F.S., Pang, Y., Shao, L.: D2det: Towards high quality object detection and instance segmentation. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2020)
-  Cao, J., Pang, Y., Han, J., Li, X.: Hierarchical shot detector. Proc. IEEE International Conf. Computer Vision (2019)
-  Cao, J., Pang, Y., Li, X.: Triply supervised decoder networks for joint detection and segmentation. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2019)
-  Chen, H., Sun, K., Tian, Z., Shen, C., Huang, Y., Yan, Y.: Blendmask: Top-down meets bottom-up for instance segmentation. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2020)
-  Chen, K., Pang, J., Wang, J., Xiong, Y., Li, X., Sun, S., Feng, W., Liu, Z., Shi, J., Ouyang, W., Loy, C.C., Lin, D.: Hybrid task cascade for instance segmentation. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2019)
-  Chen, L.C., Hermans, A., Papandreou, G., Schroff, F., Wang, P., Adam, H.: Masklab: Instance segmentation by refining object detection with semantic and direction features. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2018)
-  Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Analysis and Machine Intelligence 40(4), 834–848 (2017)
-  Chen, X., Girshick, R., He, K., Dollár, P.: Tensormask: A foundation for dense object segmentation. Proc. IEEE International Conf. Computer Vision (2019)
-  Cholakkal, H., Sun, G., Khan, F.S., Shao, L.: Object counting and instance segmentation with image-level supervision. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2019)
-  Dai, J., He, K., Li, Y., Ren, S., Sun, J.: Instance-sensitive fully convolutional networks. Proc. European Conf. Computer Vision (2016)
-  Dai, J., He, K., Sun, J.: Instance-aware semantic segmentation via multi-task network cascades. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2016)
-  Dai, J., Li, Y., He, K., Sun, J.: R-FCN: Object detection via region-based fully convolutional networks. Proc. Advances in Neural Information Processing Systems (2016)
-  Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., Wei, Y.: Deformable convolutional networks. Proc. IEEE International Conf. Computer Vision (2017)
-  Fang, H.S., Sun, J., Wang, R., Gou, M., Li, Y.L., Lu, C.: Instaboost: Boosting instance segmentation via probability map guided copy-pasting. Proc. IEEE International Conf. Computer Vision (2019)
-  Fu, C.Y., Shvets, M., Berg, A.C.: Retinamask: Learning to predict masks improves state-of-the-art single-shot detection for free. arXiv:1901.03353 (2019)
-  Gao, N., Shan, Y., Wang, Y., Zhao, X., Yu, Y., Yang, M., Huang, K.: Ssap: Single-shot instance segmentation with affinity pyramid. Proc. IEEE International Conf. Computer Vision (2019)
-  Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2014)
-  He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. Proc. IEEE International Conf. Computer Vision (2017)
-  He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2016)
-  Huang, Z., Huang, L., Gong, Y., Huang, C., Wang, X.: Mask scoring r-cnn. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2019)
-  Jiang, X., Zhang, L., Zhang, T., Lv, P., Zhou, B., Pang, Y., Xu, M., Xu, C.: Density-aware multi-task learning for crowd counting. IEEE Trans. Multimedia (2020)
-  Khan, F.S., Xu, J., van de Weijer, J., Bagdanov, A., Anwer, R.M., Lopez, A.: Recognizing actions through action-specific person detection. IEEE Trans. Image Processing 24(11), 4422–4432 (2015)
-  Kirillov, A., Levinkov, E., Andres, B., Savchynskyy, B., Rother, C.: Instancecut: from edges to instances with multicut. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2017)
-  Li, Y., Qi, H., Dai, J., Ji, X., Wei, Y.: Fully convolutional instance-aware semantic segmentation. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2017)
-  Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2017)
-  Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. Proc. European Conf. Computer Vision (2014)
-  Liu, S., Jia, J., Fidler, S., Urtasun, R.: Sgn: Sequential grouping networks for instance segmentation. Proc. IEEE International Conf. Computer Vision (2017)
-  Liu, S., Qi, L., Qin, H., Shi, J., Jia, J.: Path aggregation network for instance segmentation. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2018)
-  Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2015)
-  Neven, D., Brabandere, B.D., Proesmans, M., Gool, L.V.: Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2019)
-  Pang, Y., Li, Y., Shen, J., Shao, L.: Towards bridging semantic gap to improve semantic segmentation. Proc. IEEE International Conf. Computer Vision (2019)
-  Pang, Y., Xie, J., Khan, M.H., Anwer, R.M., Khan, F.S., Shao, L.: Mask-guided attention network for occluded pedestrian detection. Proc. IEEE International Conf. Computer Vision (2019)
-  Peng, S., Jiang, W., Pi, H., Li, X., Bao, H., Zhou, X.: Deep snake for real-time instance segmentation. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2020)
-  Pinheiro, P.O., Lin, T.Y., Collobert, R., Dollár, P.: Learning to refine object segments. Proc. European Conf. Computer Vision (2016)
-  Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: Imagenet large scale visual recognition challenge. International Journal of Computer Vision (2015)
-  Sun, G., Wang, B., Dai, J., Gool, L.V.: Mining cross-image semantics for weakly supervised semantic segmentation. Proc. European Conf. Computer Vision (2020)
-  Tian, Z., Shen, C., Chen, H., He, T.: Fcos: Fully convolutional one-stage object detection. Proc. IEEE International Conf. Computer Vision (2019)
-  Voigtlaender, P., Chai, Y., Schroff, F., Adam, H., Leibe, B., Chen, L.C.: Feelvos: Fast end-to-end embedding learning for video object segmentation. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2019)
-  Wang, S., Gong, Y., Xing, J., Huang, L., Huang, C., Hu, W.: Rdsnet: A new deep architecture for reciprocal object detection and instance segmentation. Proc. AAAI Conf. Artificial Intelligence (2020)
-  Wang, T., Anwer, R.M., Cholakkal, H., Khan, F.S., Pang, Y., Shao, L.: Learning rich features at high-speed for single-shot object detection. Proc. IEEE International Conf. Computer Vision (2019)
-  Wang, T., Yang, T., Danelljan, M., Khan, F.S., Zhang, X., Sun, J.: Learning human-object interaction detection using interaction points. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2020)
-  Wu, J., Zhou, C., Yang, M., Zhang, Q., Li, Y., Yuan, J.: Temporal-context enhanced detection of heavily occluded pedestrians. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2020)
-  Xie, E., Sun, P., Song, X., Wang, W., Liu, X., Liang, D., Shen, C., Luo, P.: Polarmask: Single shot instance segmentation with polar representation. arXiv:1909.13226 (2019)
-  Xu, W., Wang, H., Qi, F., Lu, C.: Explicit shape encoding for real-time instance segmentation. Proc. IEEE International Conf. Computer Vision (2019)
-  Yang, L., Fan, Y., Xu, N.: Video instance segmentation. Proc. IEEE International Conf. Computer Vision (2019)
-  Yang, L., Wang, Y., Xiong, X., Yang, J., Katsaggelos, A.K.: Efficient video object segmentation via network modulation. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2018)
-  Yang, Z., Liu, S., Hu, H., Wang, L., Lin, S.: Reppoints: Point set representation for object detection. Proc. IEEE International Conf. Computer Vision (2019)
-  Yang, Z., Xu, Y., Xue, H., Zhang, Z., Urtasun, R., Wang, L., Lin, S., Hu, H.: Dense reppoints: Representing visual objects with dense point sets. Proc. European Conf. Computer Vision (2020)
-  Ye, M., Shen, J., Lin, G., Xiang, T., Shao, L., Hoi, S.C.H.: Deep learning for person re-identification: A survey and outlook. arXiv:2001.04193 (2020)
-  Ye, M., Zhang, X., Yuen, P.C., Chang, S.F.: Unsupervised embedding learning via invariant and spreading instance feature. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2019)
-  Zhou, X., Zhuo, J., Krahenbuhl, P.: Bottom-up object detection by grouping extreme and center points. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2019)
-  Zhu, X., Hu, H., Lin, S., Dai, J.: Deformable convnets v2: More deformable, better results. Proc. IEEE Conf. Computer Vision and Pattern Recognition (2019)