Instance segmentation is a fundamental and challenging computer vision task that requires locating, classifying, and segmenting each instance in an image. It therefore combines the characteristics of both object detection and semantic segmentation. State-of-the-art instance segmentation methods [12, 21, 14] are mostly built on the advances of two-stage object detectors [8, 9, 26]. Despite the popular trend of one-stage object detection [13, 25, 22, 17, 27, 30], only a few works [1, 2, 28, 7] focus on one-stage instance segmentation. In this work, we aim to design a simple one-stage and anchor-box free instance segmentation model.
Instance segmentation is much harder than object detection because the shapes of instances are more flexible than two-dimensional bounding boxes. There are two main challenges for one-stage instance segmentation: (1) how to differentiate object instances, especially when they belong to the same category. Some methods [3, 1] first extract global features of the image and then post-process them to separate different instances, but these methods struggle when objects overlap. (2) How to preserve pixel-wise location information. State-of-the-art methods represent masks as structured 4D tensors or as contours with a fixed number of points, but they still face the pixel misalignment problem, which makes the masks coarse at the boundary. TensorMask designs complex pixel-alignment operations to fix this problem, which make the network even slower than its two-stage counterparts.
To address these issues, we propose to break the mask representation into two parallel components: (1) a Local Shape representation that predicts a coarse mask for each local area, which separates different instances automatically; (2) a Global Saliency Map that segments the whole image, which provides saliency details and realizes pixel-wise alignment. To realize this, the local shape information is extracted from the point representation at each object center. Modeling an object as its center point is motivated by the one-stage CenterNet detector; hence we call our method CenterMask.
The illustration of the proposed CenterMask is shown in Figure 1. Given an input image, the object center point locations are predicted following a keypoint estimation pipeline. The feature representation at each center point is then extracted to form the local shape, represented by a coarse mask that separates the object from nearby ones. In the meantime, the fully convolutional backbone produces a global saliency map of the whole image, which separates the foreground from the background at the pixel level. Finally, the coarse but instance-aware local shapes and the precise but instance-unaware global saliency map are assembled to form the final instance masks.
To demonstrate the robustness of CenterMask and analyze the effects of its core factors, extensive ablation experiments are conducted and the performance of multiple basic instantiations is compared. Visualization shows that CenterMask with only the Local Shape branch can separate objects well, and the model with only the Global Saliency branch performs well enough when objects do not overlap. In complex scenes with overlapping objects, the combination of the two branches differentiates instances and realizes pixel-wise segmentation simultaneously. Results of CenterMask on COCO test set images are shown in Figure 2.
In summary, the main contributions of this paper are as follows:
An anchor-box free and one-stage instance segmentation method is proposed, which is simple, fast, and accurate. Trained entirely from scratch and without any bells and whistles, the proposed CenterMask achieves 34.5 mask AP at 12.3 fps on the challenging COCO dataset, showing a good speed-accuracy trade-off. Besides, the method can be easily embedded into other one-stage object detectors such as FCOS and performs well, showing the generalization ability of CenterMask.
The Local Shape representation of object masks is proposed to differentiate instances in the anchor-box free setting. Using the representation of object center points, the Local Shape branch predicts coarse masks and separates objects effectively even when they overlap.
The Global Saliency Map is proposed to realize pixel-wise feature alignment naturally. Different from previous feature-alignment operations for instance segmentation, this module is simpler, faster, and more precise. The Global Saliency generation acts similarly to semantic segmentation, and we hope this work can motivate one-stage panoptic segmentation in the future.
2 Related Work
Two-stage Instance Segmentation: Two-stage instance segmentation methods often follow the detect-then-segment paradigm: they first perform bounding box detection and then classify the pixels in each box area to obtain the final mask. Mask R-CNN extends the successful Faster R-CNN detector by adding a mask segmentation branch on each Region of Interest. To preserve exact spatial locations, it introduces the RoIAlign module to fix the pixel misalignment problem. PANet aims to improve the information propagation of Mask R-CNN by introducing bottom-up path augmentation, adaptive feature pooling, and fully-connected fusion. Mask Scoring R-CNN proposes a mask scoring module, instead of the classification score, to evaluate the mask, which improves the quality of the segmented masks.
Although two-stage instance segmentation methods achieve state-of-the-art performance, these models are often complicated and slow. Advances of one-stage object detection motivate us to develop faster and simpler one-stage instance segmentation methods.
One-stage Instance Segmentation: State-of-the-art one-stage instance segmentation methods can be roughly divided into two categories: global-area-based and local-area-based approaches. Global-area-based methods first generate intermediate and shared feature maps based on the whole image, then assemble the extracted features to form the final masks for each instance.
InstanceFCN utilizes an FCN to generate multiple instance-sensitive score maps that contain the relative positions to object instances, then applies an assembling module to output object instances. YOLACT generates multiple prototype masks of the global image, then utilizes per-instance mask coefficients to produce instance-level masks. Global-area-based methods can maintain pixel-to-pixel alignment, which makes the masks precise, but they perform worse when objects overlap. In contrast, local-area-based methods output instance masks on each local region directly. PolarMask represents a mask by its contour and utilizes rays from the center to describe that contour, but the polygon enclosed by the contour cannot depict the mask precisely and cannot describe objects that have holes in the center. TensorMask utilizes structured 4D tensors to represent masks over a spatial domain; it also introduces an aligned representation and a tensor bipyramid to recover spatial details, but these alignment operations make the network even slower than the two-stage Mask R-CNN.
Different from the above approaches, CenterMask contains both a Global Saliency generation branch and a Local Shape prediction branch, and integrates them to preserve pixel alignment and separate objects simultaneously.
3 Method
The goal of this paper is to build a one-stage instance segmentation method. One-stage means that there are no pre-defined Regions of Interest (RoIs) for mask prediction, which requires locating, classifying, and segmenting objects simultaneously. To realize this, we break instance segmentation into two simple and parallel sub-tasks, and assemble their results to form the final masks. The first branch predicts a coarse shape from the center point representation of each object, which constrains the local area of each object and differentiates instances naturally. The second branch predicts a saliency map of the whole image, which realizes precise segmentation and preserves exact spatial locations. In the end, the mask for each instance is constructed by multiplying the outputs of the two branches.
3.1 Local Shape Prediction
To differentiate instances at different locations, we choose to model the masks from their center points. The center point is defined as the center of the bounding box surrounding each object. A natural thought is to represent the mask by the image feature extracted at the center point location, but a fixed-size image feature cannot represent masks of various sizes. To address this issue, we decompose the object mask into two components: the mask size and the mask shape. The size of each mask can be represented by the object height and width, and the shape can be described by a fixed-size 2D binary array.
The above two components can be predicted in parallel using fixed-size representations of the center points. The architecture of the two heads is shown in Figure 4.
Let $F \in \mathbb{R}^{H \times W \times C}$ represent the image features extracted by the backbone network, and let $F^{shape} \in \mathbb{R}^{H \times W \times S^2}$ be the output of the Local Shape head, where $H$ and $W$ represent the height and width of the whole map and $S^2$ is the number of output channels of this head. The output of the Size head $F^{size} \in \mathbb{R}^{H \times W \times 2}$ has the same height and width, with a channel size of two.
For a center point $(x, y)$ on the feature map, the shape feature at this location is extracted as $F^{shape}_{x,y}$. This shape vector of size $S^2$ is then reshaped to a 2D shape array of size $S \times S$. The size prediction at the center point is $F^{size}_{x,y} = (h, w)$, with $h$ and $w$ the predicted object height and width. The $S \times S$ shape array is then resized to $h \times w$ to form the final local shape prediction.
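The decoding steps above can be sketched in NumPy as follows. The channel-last layout, the nearest-neighbor resize, and all names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def decode_local_shape(shape_head, size_head, cx, cy, S=32):
    """Decode a coarse instance mask from the center-point representation.

    shape_head: (H, W, S*S) output of the Shape head (channel-last).
    size_head:  (H, W, 2)   output of the Size head (height, width).
    (cx, cy):   center-point location on the feature map.
    """
    shape_vec = shape_head[cy, cx]            # fixed-size shape vector, (S*S,)
    shape_2d = shape_vec.reshape(S, S)        # reshape to a 2D shape array
    h, w = size_head[cy, cx].astype(int)      # predicted object height/width
    # Resize the S x S array to (h, w); nearest-neighbor indexing for brevity
    rows = np.arange(h) * S // h
    cols = np.arange(w) * S // w
    return shape_2d[np.ix_(rows, cols)]       # (h, w) local shape prediction

# toy usage on random features
H, W, S = 16, 16, 4
shape_head = np.random.rand(H, W, S * S)
size_head = np.full((H, W, 2), 8.0)
mask = decode_local_shape(shape_head, size_head, cx=5, cy=7, S=S)
print(mask.shape)  # (8, 8)
```

In a real model the resize would typically be a differentiable bilinear interpolation so that gradients flow back into the shape vector.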
For convenience, the Local Shape Prediction branch is used to refer to the combination of the shape and size heads. This branch produces masks from local point representation, and predicts a local area for each object, which makes it suitable for instance differentiation.
3.2 Global Saliency Generation
Although the Local Shape branch generates a mask for each instance, it is not enough for precise segmentation. As the fixed-size shape vector can only predict a coarse mask, resizing and warping it to the object size loses spatial details, which is a common problem in instance segmentation. Instead of relying on complex pixel-calibration mechanisms [12, 2], we design a simpler and faster approach.
Motivated by semantic segmentation, which makes pixel-wise predictions on the whole image, we propose to predict a Global Saliency Map to realize pixel-level feature alignment. The map aims to represent the saliency of each pixel in the whole image, i.e., whether the pixel belongs to an object area or not.
Utilizing the fully convolutional backbone, the Global Saliency branch performs segmentation on the whole image in parallel with the existing Local Shape branch. Different from semantic segmentation methods, which utilize the softmax function to realize pixel-wise competition among object classes, our approach uses the sigmoid function to perform binary classification. The Global Saliency Map can be class-agnostic or class-specific. In the class-agnostic setting, a single binary map is produced to indicate whether each pixel belongs to the foreground or not. In the class-specific setting, the head produces a binary mask for each object category.
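As a sketch, the difference between the two settings comes down to which channel of the saliency output is cropped for each instance. The shapes and names below are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def crop_saliency(saliency_logits, box, category=None):
    """Crop the Global Saliency Map for one instance.

    saliency_logits: (K, H, W) head output; K == 1 in the class-agnostic
                     setting, K == number of classes in the class-specific one.
    box:             (y0, x0, y1, x1) predicted instance location and size.
    category:        predicted class id (class-specific setting only).
    """
    channel = 0 if category is None else category
    y0, x0, y1, x1 = box
    # sigmoid, not softmax: binary saliency, no competition across classes
    return sigmoid(saliency_logits[channel, y0:y1, x0:x1])

# class-agnostic: a single map for the whole image
agnostic = np.zeros((1, 64, 64))
print(crop_saliency(agnostic, (10, 10, 20, 26)).shape)               # (10, 16)

# class-specific: one map per category; crop the predicted category's channel
specific = np.zeros((80, 64, 64))
print(crop_saliency(specific, (10, 10, 20, 26), category=17).shape)  # (10, 16)
```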
An example of a Global Saliency Map is shown at the top of Figure 3, using the class-agnostic setting for visualization convenience. As can be seen in the figure, the map highlights the salient pixels and achieves pixel-wise alignment with the input image.
3.3 Mask Assembly
In the end, the Local Shapes and the Global Saliency Map are assembled to form the final instance masks. The Local Shape predicts the coarse area for each instance, and the cropped Saliency Map realizes precise segmentation within that area. Let $S_k$ represent the Local Shape for the $k$-th object, and $G_k$ be the corresponding cropped Saliency Map; both have the same size, the predicted height $h$ and width $w$.
To construct the final mask, we first transform their values to the range $(0,1)$ using the sigmoid function $\sigma$, then compute the Hadamard product of the two matrices:
$$M_k = \sigma(S_k) \odot \sigma(G_k). \qquad (1)$$
There is no separate loss for the Local Shape and Global Saliency branches; instead, all supervision comes from the loss on the assembled mask. Let $M_k^{gt}$ denote the corresponding ground-truth mask; the loss of the final masks is:
$$L_{mask} = \frac{1}{N}\sum_{k=1}^{N} \mathrm{BCE}\left(M_k, M_k^{gt}\right), \qquad (2)$$
where $\mathrm{BCE}$ represents the pixel-wise binary cross entropy and $N$ is the number of objects.
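A minimal NumPy sketch of the assembly and mask loss described above (function names are illustrative; the real model operates on batched tensors):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def assemble_mask(shape_logits, saliency_logits):
    """Final mask: Hadamard product of the two sigmoid-activated maps."""
    return sigmoid(shape_logits) * sigmoid(saliency_logits)

def mask_loss(pred_masks, gt_masks, eps=1e-7):
    """Pixel-wise binary cross entropy averaged over the N objects."""
    total = 0.0
    for p, g in zip(pred_masks, gt_masks):
        p = np.clip(p, eps, 1.0 - eps)       # avoid log(0)
        total += -np.mean(g * np.log(p) + (1 - g) * np.log(1 - p))
    return total / len(pred_masks)

# toy usage: confident positive logits in both branches give a near-1 mask,
# so the loss against an all-ones ground truth is small
pred = assemble_mask(np.full((4, 4), 8.0), np.full((4, 4), 8.0))
gt = np.ones((4, 4))
print(mask_loss([pred], [gt]) < 0.01)  # True
```

Because supervision is applied only after the product, both branches are trained jointly through a single loss, which is what lets the coarse shape and the precise saliency specialize automatically.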
3.4 Overall pipeline of CenterMask
The object center points are predicted following a keypoint estimation pipeline. Each channel of the output is a heatmap for the corresponding category. Obtaining the center points requires searching for the peaks of each heatmap, which are defined as the local maxima within a window. The Offset head is utilized to recover the discretization error caused by the output stride.
Given the predicted center points, the Local Shapes at these points are computed from the outputs of the Shape head and the Size head at the corresponding locations, following the approach in Section 3.1. The Saliency head produces the Global Saliency Map. In the class-agnostic setting, the output has a single channel, and the saliency map for each instance is obtained by cropping it with the predicted location and size. In the class-specific setting, the channel of the predicted category is cropped instead. The final masks are constructed by assembling the Local Shapes and the Saliency Map.
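Peak search can be sketched as follows: a pixel is kept as a center point if it equals the maximum of its local window. The 3×3 window below is an assumption (CenterNet-style decoding); the names are illustrative:

```python
import numpy as np

def heatmap_peaks(heatmap, k=100):
    """Return the top-k local maxima of per-category heatmaps.

    heatmap: (C, H, W) category score maps.
    Returns a list of (score, class, y, x) tuples sorted by score.
    """
    C, H, W = heatmap.shape
    padded = np.pad(heatmap, ((0, 0), (1, 1), (1, 1)),
                    constant_values=-np.inf)
    # max over each pixel's 3x3 neighborhood (including the pixel itself)
    neigh = np.stack([padded[:, dy:dy + H, dx:dx + W]
                      for dy in range(3) for dx in range(3)]).max(axis=0)
    cs, ys, xs = np.where(heatmap >= neigh)  # peak <=> equals neighborhood max
    scored = sorted(zip(heatmap[cs, ys, xs], cs, ys, xs), reverse=True)
    return scored[:k]

hm = np.zeros((2, 8, 8))
hm[0, 3, 4] = 0.9   # a peak for class 0
hm[1, 6, 1] = 0.7   # a peak for class 1
top = heatmap_peaks(hm, k=2)
print(tuple(int(v) for v in top[0][1:]))  # (0, 3, 4)
```

This equality-with-neighborhood-max test is what replaces NMS in keypoint-based detectors; in frameworks it is usually implemented with a stride-1 max-pooling layer.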
Loss function: The overall loss function is composed of four losses: the center point loss, the offset loss, the size loss, and the mask loss. The center point loss is defined in the same way as in the Hourglass network. Let $\hat{Y}_{ijc}$ be the score at location $(i,j)$ for class $c$ in the predicted heatmaps, and $Y_{ijc}$ be the "ground-truth" heatmap. The loss is a pixel-wise logistic regression modified by the focal loss:
$$L_{c} = \frac{-1}{N}\sum_{ijc}\begin{cases}\left(1-\hat{Y}_{ijc}\right)^{\alpha}\log\left(\hat{Y}_{ijc}\right) & \text{if } Y_{ijc}=1,\\\left(1-Y_{ijc}\right)^{\beta}\left(\hat{Y}_{ijc}\right)^{\alpha}\log\left(1-\hat{Y}_{ijc}\right) & \text{otherwise,}\end{cases} \qquad (3)$$
where $N$ is the number of center points in the image, and $\alpha$ and $\beta$ are the hyper-parameters of the focal loss. The offset loss and size loss follow the same settings as CenterNet, which penalizes distances with an L1 loss. Let $\hat{O}$ represent the predicted offset map, $p$ a ground-truth center point, and $R$ the output stride; then the low-resolution equivalent of $p$ is $\tilde{p} = \lfloor p/R \rfloor$, and the offset loss is:
$$L_{off} = \frac{1}{N}\sum_{p}\left|\hat{O}_{\tilde{p}} - \left(\frac{p}{R} - \tilde{p}\right)\right|. \qquad (4)$$
Let the true object size be $s_k$ and the predicted size at the $k$-th center point be $\hat{S}_{p_k}$; then the size loss is:
$$L_{size} = \frac{1}{N}\sum_{k=1}^{N}\left|\hat{S}_{p_k} - s_k\right|, \qquad (5)$$
where $N$ is the number of center points.
The overall training objective is the combination of the four losses:
$$L = \lambda_{c} L_{c} + \lambda_{off} L_{off} + \lambda_{size} L_{size} + \lambda_{mask} L_{mask}, \qquad (6)$$
where the mask loss $L_{mask}$ is defined in Equation 2, and $\lambda_{c}$, $\lambda_{off}$, $\lambda_{size}$, and $\lambda_{mask}$ are the coefficients of the four losses, respectively.
3.5 Implementation Details
Train: Two backbone networks are used to evaluate the performance of CenterMask: Hourglass-104 and DLA-34. $S$ equals 32 for the shape vector. The four loss coefficients are set to 1, 1, 0.1, and 1, respectively. The input resolution is fixed during training. All models are trained from scratch, using Adam to optimize the overall objective. The models are trained for 130 epochs, with an initial learning rate of 2.5e-4 that is dropped by a factor of 10 at epochs 100 and 120. As our approach directly reuses the hyper-parameters of CenterNet, we argue that the performance of CenterMask can be improved further if the hyper-parameters are tuned for it specifically.
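As a sketch, the step schedule above can be written as follows (values taken from this section; the function name is ours):

```python
import math

def learning_rate(epoch, base_lr=2.5e-4, drop_epochs=(100, 120), factor=10.0):
    """Step schedule: start at 2.5e-4, divide by 10 at epochs 100 and 120."""
    lr = base_lr
    for drop in drop_epochs:
        if epoch >= drop:
            lr /= factor
    return lr

# before the first drop, between drops, and after the second drop
assert math.isclose(learning_rate(50), 2.5e-4)
assert math.isclose(learning_rate(110), 2.5e-5)
assert math.isclose(learning_rate(125), 2.5e-6)
```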
Inference: During testing, no data augmentation and no NMS are used; we simply keep the top-100 scoring points with their corresponding masks. The binarization threshold for the masks is 0.4.
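The decoding step above can be sketched as follows (the list layout and names are illustrative):

```python
import numpy as np

def select_and_binarize(scored_masks, top_k=100, thresh=0.4):
    """Keep the top-k scoring detections and binarize their soft masks.

    scored_masks: list of (score, soft_mask) pairs with soft_mask in [0, 1].
    No NMS is applied; selection is purely by detection score.
    """
    ranked = sorted(scored_masks, key=lambda sm: sm[0], reverse=True)
    return [(score, (mask >= thresh).astype(np.uint8))
            for score, mask in ranked[:top_k]]

dets = [(0.9, np.array([[0.2, 0.5], [0.7, 0.1]])),
        (0.3, np.array([[0.45, 0.35]]))]
out = select_and_binarize(dets, top_k=1)
print(out[0][1].tolist())  # [[0, 1], [1, 0]]
```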
4 Experiments
The performance of the proposed CenterMask is evaluated on the MS COCO instance segmentation benchmark. The model is trained on the 115k trainval35k images and validated on the 5k minival images. Final results are reported on the 20k test-dev images.
4.1 Ablation Study
[Table row for reference: Mask R-CNN, ResNeXt-101-FPN backbone, 800×1333 input — 8.3 fps, 37.1 AP, 60.0 AP50, 39.4 AP75, 16.9 APS, 39.9 APM, 53.5 APL]
A number of ablation experiments are performed to analyze CenterMask. Results are shown in Table 1.
Shape size selection: First, the sensitivity of our approach to the size of the Local Shape representation is analyzed in Table 1(a). A larger shape size brings more gain, but the differences are small, indicating that the Local Shape representation is robust to the feature size. The performance saturates when S equals 32, so we use this value as the default shape size.
Backbone Architecture: Results of CenterMask with different backbones are shown in Table 1(b). The larger Hourglass-104 brings about 1.4 points of gain compared with the smaller DLA-34. The model with the DLA-34 backbone reaches 32.5 mask AP at 25.2 fps, achieving a good speed-accuracy trade-off.
Local Shape branch: The comparison of CenterMask with and without the Local Shape branch is shown in Table 1(c), with the Saliency branch in the class-agnostic setting. The Shape branch brings about 10 points of gain. Moreover, CenterMask with only the Shape branch achieves 26.5 AP (first row of Table 1(d)); images generated by this model are shown in Figure 4(a). Each image contains multiple objects with dense overlaps, and the Shape branch separates them well with coarse masks. These results illustrate the effectiveness of the proposed Local Shape branch.
Global Saliency branch: The comparison of CenterMask with and without the Global Saliency branch is shown in Table 1(d); introducing the Saliency branch improves performance by 5 points compared with the model that has only the Local Shape branch.
We also visualize CenterMask with only the Saliency branch. As shown in Figure 4(b), there is no overlap between the objects in these images. The Saliency branch performs well enough in this kind of situation, predicting a precise mask for each instance and indicating the effectiveness of the branch for pixel-wise alignment.
Moreover, the two settings of the Global Saliency branch are compared in Table 1(e). The class-specific setting scores 2.4 points higher than the class-agnostic counterpart, showing that the class-specific setting helps separate instances of different categories better.
For the class-specific version of the Global Saliency branch, a binary cross-entropy loss is added to supervise the branch directly, in addition to the mask loss of Eq. (2). The comparison of CenterMask with and without this extra loss is shown in Table 1(f); the direct supervision brings 0.5 points of gain.
Combination of Local Shape and Global Saliency: Although the Saliency branch performs well in non-overlapping situations, it cannot handle more complex images. We compare Shape-only, Saliency-only, and the combination of both under challenging conditions for instance segmentation. As shown in Figure 4(c), objects overlap in these images. In the first column, the Shape branch separates the instances well, but the predicted masks are coarse. In the second column, the Saliency branch realizes precise segmentation but fails in the overlapping situations, resulting in obvious artifacts in the overlap areas. CenterMask with both branches inherits their merits and avoids their weaknesses. As shown in the last column, overlapping objects are separated well and segmented precisely at the same time, illustrating the effectiveness of our proposed model.
4.2 Comparison with state-of-the-art
In this section, we compare CenterMask with the state-of-the-art instance segmentation methods on the COCO test-dev set.
As a one-stage instance segmentation method, our model follows a simple setting to perform the comparison: totally trained from scratch without pre-trained weights for the backbone, using a single model with single-scale training and testing, and inference without any NMS.
As shown in Table 2, two models achieve higher AP than our method: the two-stage Mask R-CNN and the one-stage TensorMask, but Mask R-CNN is 4 fps slower and TensorMask is five times slower than our largest model. We believe these gaps arise from their complicated and time-consuming feature-alignment operations. Compared with the most accurate YOLACT model, CenterMask with the DLA-34 backbone achieves higher AP at a faster speed. Compared with PolarMask, CenterMask with the Hourglass-104 backbone is 1.6 points higher while also being faster.
Figure 6 shows visualizations of the results generated by state-of-the-art models, comparing only the ones with released code. Mask R-CNN detects objects well, but there are still artifacts in the masks, such as the heads of the two people in (a); we suppose this is caused by feature pooling. YOLACT segments instances precisely, but misses an object in (d) and fails in some overlapping situations, such as the two legs in (c). PolarMask can separate different instances, but its masks are not precise due to the polygon mask representation. Our CenterMask both separates overlapping objects well and segments masks precisely.
4.3 CenterMask on FCOS Detector
Besides CenterNet, the proposed Local Shape and Global Saliency branches can easily be embedded into other off-the-shelf detection models. FCOS, one of the state-of-the-art one-stage object detectors, is used for this experiment. The performance of CenterMask built on FCOS with different backbones is shown in Table 3, with training following the same settings as Mask R-CNN. With the same ResNeXt-101-FPN backbone, CenterMask-FCOS achieves 3.8 points higher AP than PolarMask in Table 2, and the best model achieves 38.5 mask AP on COCO test-dev, showing the generalization ability of CenterMask.
To show the superiority of CenterMask on precise segmentation, we evaluate the model on the higher-quality LVIS annotations. The results are shown in Table 4. Based on the same backbone, the CenterMask-FCOS achieves better performance than Mask R-CNN.
5 Conclusion
In this paper, we propose a single-shot and anchor-box free instance segmentation method that is simple, fast, and accurate. The mask prediction is decoupled into two critical modules: the Local Shape branch separates different instances effectively, and the Global Saliency branch realizes precise pixel-wise segmentation. Extensive ablation experiments and visualizations show the effectiveness of the proposed CenterMask. We hope our work can benefit more instance-level recognition tasks.
Acknowledgements This research is supported by Beijing Science and Technology Project (No. Z181100008918018).
References
[1] YOLACT: real-time instance segmentation. In ICCV, 2019.
[2] TensorMask: a foundation for dense object segmentation. In ICCV, 2019.
[3] Instance-sensitive fully convolutional networks. In ECCV, 2016.
[4] Instance-aware semantic segmentation via multi-task network cascades. In CVPR, 2016.
[5] Deformable convolutional networks. In ICCV, 2017.
[6] ImageNet: a large-scale hierarchical image database. In CVPR, 2009.
[7] RetinaMask: learning to predict masks improves state-of-the-art single-shot detection for free. arXiv preprint arXiv:1901.03353, 2019.
[8] Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[9] Fast R-CNN. In ICCV, 2015.
[10] LVIS: a dataset for large vocabulary instance segmentation. In CVPR, 2019.
[11] Simultaneous detection and segmentation. In ECCV, 2014.
[12] Mask R-CNN. In ICCV, 2017.
[13] DenseBox: unifying landmark localization with end to end object detection. arXiv preprint arXiv:1509.04874, 2015.
[14] Mask scoring R-CNN. In CVPR, 2019.
[15] Adam: a method for stochastic optimization. In ICLR, 2015.
[16] Panoptic segmentation. In CVPR, 2019.
[17] CornerNet: detecting objects as paired keypoints. In ECCV, 2018.
[18] Fully convolutional instance-aware semantic segmentation. In CVPR, 2017.
[19] Focal loss for dense object detection. In ICCV, 2017.
[20] Microsoft COCO: common objects in context. In ECCV, 2014.
[21] Path aggregation network for instance segmentation. In CVPR, 2018.
[22] SSD: single shot multibox detector. In ECCV, 2016.
[23] Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[24] Stacked hourglass networks for human pose estimation. In ECCV, 2016.
[25] You only look once: unified, real-time object detection. In CVPR, 2016.
[26] Faster R-CNN: towards real-time object detection with region proposal networks. In NIPS, 2015.
[27] FCOS: fully convolutional one-stage object detection. In ICCV, 2019.
[28] PolarMask: single shot instance segmentation with polar representation. arXiv preprint arXiv:1909.13226, 2019.
[29] Deep layer aggregation. In CVPR, 2018.
[30] Objects as points. arXiv preprint arXiv:1904.07850, 2019.
[31] Bottom-up object detection by grouping extreme and center points. In CVPR, 2019.