Instance segmentation is a fundamental task in computer vision and is important for many real-world applications such as autonomous driving and robot manipulation. As the task seeks to predict both the object location and its shape, instance segmentation methods are generally not as efficient as object detection frameworks. Forwarding each object instance through an upsampling network to obtain its shape, as mainstream instance segmentation frameworks do [12, 22, 3, 19], is computation-consuming, especially compared with object detection, which only needs to regress the bounding box, a 4D vector, for each object. Thus, if the network can also regress the object shape to a short vector and decode that vector to the shape (see Fig. 1) in a simple way, just like the bounding box, instance segmentation can reach almost the same computational efficiency as object detection. To achieve this goal, we propose a novel instance segmentation framework based on explicit shape encoding and modern object detectors, named ESE-Seg.
Shape encoding was originally developed for instance retrieval [39, 17, 37], where the object is encoded into a shape vector. Recently, a number of works encode the shape implicitly [9, 29, 38], projecting the shape content to a latent vector, typically through a black-box design such as a deep CNN. The decoding procedure under this approach must therefore also pass through a network, which requires a separate forward pass for each instance and causes large computation. In pursuit of fast decoding, we instead employ an explicit shape encoding that involves only simple numeric transformations.
However, designing a satisfactory explicit shape encoding is non-trivial. Since the CNN is known to regress with uncertainty, a preferred shape vector should be relatively short yet contain sufficient information, be robust to noise, and be efficiently decodable to reconstruct the shape. In this paper, we propose a contour-based shape signature to meet these requirements: a novel "Inner-center Radius" (IR) shape signature for instance shape representation. The IR first locates an inner center inside the object segment and, based on this inner center, transforms the contour points to polar coordinates. That is, we form a function of the contour radius with respect to the polar angle. To make the shape vector even shorter and more robust, we apply Chebyshev polynomials to approximate this function. As such, the IR signature is represented by a small number of coefficients with small error, and these coefficients are the shape vector to be predicted. Additionally, we discuss in depth the comparison with other shape signature designs. A conventional object detector (YOLOv3) is used to regress the shape vector along with the 4D bounding box vector. Note that our shape decoding can be implemented by simple tensor operations (multiplication and addition), which are extremely fast.
ESE-Seg itself is independent of any particular bounding box-based object detection framework [32, 4, 8, 20, 23]. We demonstrate this generality on Faster R-CNN, RetinaNet, YOLO, and YOLOv3-tiny, and evaluate ESE-Seg on standard public datasets, namely Pascal VOC and COCO. Our method achieves 69.3% and 48.7% mAP, respectively, at an IoU threshold of 0.5. This score is better than Mask R-CNN on Pascal VOC 2012 and competitive with it on COCO, which is decent considering our method is 7 times faster than Mask R-CNN with the same ResNet-50 backbone. The speed reaches 130 fps on a GTX 1080Ti when the base detector is switched to YOLOv3-tiny, while the mAP remains 53.2% on Pascal VOC. It is noteworthy that ESE-Seg speeds up instance segmentation not through model acceleration techniques [15, 40], but through a new mechanism that cuts out the per-instance shape prediction after object detection.
Contributions. We propose an explicit shape encoding based instance segmentation framework, ESE-Seg. It is a top-down approach but reconstructs the shapes of multiple instances in one pass, which greatly reduces computation and lets instance segmentation reach the speed of object detection with no model acceleration techniques involved.
2 Related Work
Explicit vs. Implicit Shape Representation
A previous work with a similar ideology is that of Jetley et al. They took the implicit shape representation path, first training an autoencoder on object binary masks; the encoded shape vector is decoded to a shape mask through the decoder component. In their implementation, they adopted YOLO to regress the bounding box and the shape vector for each detected object, so the YOLO structure can be viewed as both detector and encoder. The encoded vector from YOLO is then decoded by the pre-trained denoising autoencoder. The major differences between our work and theirs are:
Explicit shape representation is typically based on the contour, while implicit shape representation is typically based on the mask.
Explicit shape representation requires no additional decoder network training. Parallelizing the decoding process for all objects in an image, which is hard for a network-structured decoder, is easily achieved with explicit shape encoding. In fact, implicit decoding requires one forward pass per object, while explicit decoding obtains all the shapes in one pass.
The inputs for training the autoencoder and training YOLO (viewed as an encoder) are quite different (object scales, color patterns), which may cause trouble for the decoder, since the decoder is not further optimized during YOLO training. Such an issue does not exist for explicit shape representation.
In addition to our proposed IR shape signature, there exist various methods to represent the shape, e.g., centroid radius, complex coordinates, and cumulative angle [5, 34, 39]. While such methods sample shape-related features along the contour, only a few of them can be decoded to reconstruct the shape.
Object detection is a richly studied field. CNN-based object detection frameworks can be roughly divided into two categories, one-stage and multi-stage. The two-stage scheme is the classic multi-stage scheme: it typically learns an RPN to sample region proposals and then refines the detection with RoI pooling or its variants; representative works are Faster R-CNN and R-FCN. Recently, some works extend the two-stage scheme to multiple stages in a cascade form. On the other hand, one-stage detectors divide the input image into size-fixed grid cells and parallelize detection on each cell with fully convolutional operations; representative networks are SSD, YOLO, and RetinaNet. Recently, point-based detectors have been proposed: CornerNet directly detects the upper-left and bottom-right corner points and is a one-stage detector, while Grid R-CNN regresses 9 points to construct the bounding box and is a two-stage detector.
Our method is compatible with all bounding box-based detection networks; we experiment with Faster R-CNN, YOLO, YOLOv3, and RetinaNet to prove this generality (see Table 4). However, it is not compatible with point-based detectors, as the shape (bounding box) in that setting is not parametrized.
Instance segmentation requires not only locating the object instance but also delineating its shape. Mainstream methods can be roughly divided into top-down [12, 22, 3, 19, 28, 27, 2] and bottom-up [26, 35] approaches; ours belongs to the top-down line. Top-down approaches such as MNC, FCIS, and Mask R-CNN generally slow down when the number of objects in an image is large, as they predict the instance masks in sequence. On the contrary, our ESE-Seg avoids this cumbersome computation by regressing the object shapes to short vectors and decoding them simultaneously. It is also the first top-down instance segmentation framework whose inference time is unaffected by the number of instances in the image. Besides, works that improve instance segmentation through data augmentation [7, 36] or scale normalization can be easily integrated into our system.
We propose an explicit shape encoding based detection method to solve instance segmentation. It predicts all the instance segments in one forward pass, reaching the same efficiency as an object detection solver. Given an object instance segment, we parametrize the contour with a novel shape signature, "Inner-center Radius" (IR) (Sec. 3.2.1). Chebyshev polynomials are used to approximate the shape signature with a small number of coefficients (Sec. 3.2.2). These coefficients serve as the shape descriptor, which the network learns to regress (Sec. 3.3). Finally, we describe how to decode the shape descriptor under an ordinary object detection framework with simple tensor operations (Sec. 3.4). The overall pipeline is shown in Fig. 2.
3.1 The Advantage of Explicit Shape Encoding
In an object detection system (e.g., YOLOv3), the network regresses the bounding boxes (4D vectors), and each bounding box is decoded by tensor operations, which are light to process and easy to parallelize. By contrast, conventional instance segmentation (e.g., Mask R-CNN) requires an add-on network structure to compute the object shape. This decoding/upsampling forwarding involves a large number of parameters, which is heavy to run in parallel for multiple instances; this is why instance segmentation is normally much slower than object detection. Therefore, if we also regress the object shape into a short vector directly, the instance shape can be decoded by fast tensor operations (multiplication and addition) in a similar way, and instance segmentation can reach the speed of object detection.
3.2 Shape Signature
3.2.1 Inner-center Radius Shape Signature
In this section, we will describe the design of the “inner-center radius” shape signature and compare it to previously proposed shape signatures.
The construction of the "inner-center radius" contains two steps. First, locate an inner-center point inside the object segment as the origin of a polar coordinate system. Second, sample the contour points according to the polar angle. The signature is translation-invariant, and scale-invariant after normalization.
The inner-center point is defined as the interior point farthest from the contour, which can be obtained through a distance transform. Note that commonly used centers such as the center of mass or the center of the bounding box are not guaranteed to lie inside the object; see Fig. 3.
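To make the definition concrete, the inner-center computation can be sketched in a few lines. This is our own illustrative code (the helper name `inner_center` is hypothetical, not the authors' implementation); it uses a brute-force distance computation for clarity, whereas a practical implementation would use a linear-time Euclidean distance transform as cited above.

```python
# Sketch: locate the inner center of a binary mask as the foreground pixel
# farthest from any background pixel (brute-force distance transform).
import math

def inner_center(mask):
    """mask: 2D list of 0/1 values. Returns the (row, col) of the
    foreground pixel with maximal distance to the background."""
    h, w = len(mask), len(mask[0])
    background = [(r, c) for r in range(h) for c in range(w) if mask[r][c] == 0]
    best, best_d = None, -1.0
    for r in range(h):
        for c in range(w):
            if mask[r][c] == 1:
                # distance to the nearest background pixel
                d = min(math.hypot(r - br, c - bc) for br, bc in background)
                if d > best_d:
                    best, best_d = (r, c), d
    return best

# A 5x5 square of ones padded by zeros: the inner center is the middle pixel.
mask = [[0] * 7] + [[0] + [1] * 5 + [0] for _ in range(5)] + [[0] * 7]
```

On this toy mask, `inner_center(mask)` returns the central pixel of the square, the point farthest from the background.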
In a few cases, an object is separated into disconnected regions, resulting in multiple inner centers. To deal with such situations, we dilate the broken regions into a single one and then find the contour of the dilated shape. This contour is of course very rough, but it helps to reorder the outline points; the whole process is depicted in Fig. 4. The inner center is then computed from this completed contour.
Dense Contour Sampling
We sample the contour points at a fixed angular interval around the inner-center point, so each contour yields a fixed number of points. If a ray cast from the inner center intersects the contour more than once, we collect only the point with the largest radius. The resulting function gives the radius at each sampled angle. We are aware that sampling the contour this way is not perfect; however, after extensive experiments on Pascal VOC and COCO, we find it suitable for natural objects (see Table 2). A further discussion follows in Sec. 3.2.3.
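A minimal sketch of this sampling step, assuming the contour is given as a list of 2D points (the function name and the binning scheme are ours, chosen for illustration): points are binned by their polar angle around the inner center, and within each angular bin only the largest radius is kept, which mimics keeping the farthest intersection of each ray.

```python
# Sketch: angle-based dense sampling of a contour around the inner center.
import math

def sample_radii(contour, center, n_angles=8):
    """contour: list of (x, y) points; center: inner-center (cx, cy).
    Returns one radius per angular bin (largest radius wins)."""
    cx, cy = center
    radii = [0.0] * n_angles                     # one slot per angular bin
    bin_width = 2 * math.pi / n_angles
    for x, y in contour:
        theta = math.atan2(y - cy, x - cx) % (2 * math.pi)
        k = int(theta / bin_width) % n_angles
        r = math.hypot(x - cx, y - cy)
        radii[k] = max(radii[k], r)              # keep the farthest point per ray
    return radii
```

For a contour of points on the unit circle centered at the origin, every bin receives radius 1, as expected for this rotationally symmetric shape.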
3.2.2 Fitting the Signature to Coefficients
The IR turns the shape representation into a vector, but this vector is still too long for the network to regress. Besides, the raw shape signature is very sensitive to noise (see Fig. 7). We therefore take a further step to shorten the shape vector and resist noise through Chebyshev polynomial fitting.
The Chebyshev polynomials are defined by the recurrence T_0(x) = 1, T_1(x) = x, T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x),
which is also known as the Chebyshev polynomials of the first kind. This basis effectively mitigates Runge's phenomenon and provides a near-optimal approximation under the maximum norm (see https://en.wikipedia.org/wiki/Chebyshev_polynomials).
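The recurrence translates directly into code; the following sketch (our own helper, for illustration) evaluates T_n(x) iteratively:

```python
# Sketch: evaluate the first-kind Chebyshev polynomial T_n(x) via the
# recurrence T0(x) = 1, T1(x) = x, T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x).
def chebyshev_T(n, x):
    t_prev, t_cur = 1.0, x        # T0 and T1
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur
```

For example, T_3(0.5) = 4(0.5)^3 - 3(0.5) = -1.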
Given the IR shape signature, the Chebyshev approximation finds the coefficients of its expansion in the Chebyshev basis. Truncating the expansion to a fixed number of terms yields the approximation function, and the retained coefficients form the shape vector that represents the object.
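As a self-contained sketch of this step (our own minimal version, not the paper's exact fitting routine, which we read as a least-squares fit over the sampled signature), the truncated coefficients of a function on [-1, 1] can be obtained from the discrete orthogonality of T_k at the Chebyshev nodes:

```python
# Sketch: compute truncated Chebyshev coefficients of a function f on [-1, 1]
# using discrete orthogonality at the Chebyshev nodes x_j = cos(pi*(j+0.5)/N).
import math

def cheby_coeffs(f, n_coeffs, n_nodes=64):
    nodes = [math.cos(math.pi * (j + 0.5) / n_nodes) for j in range(n_nodes)]
    vals = [f(x) for x in nodes]
    coeffs = []
    for k in range(n_coeffs):
        # c_k = (2/N) * sum_j f(x_j) * cos(k*pi*(j+0.5)/N), with c_0 halved
        s = sum(v * math.cos(k * math.pi * (j + 0.5) / n_nodes)
                for j, v in enumerate(vals))
        coeffs.append((1.0 if k == 0 else 2.0) * s / n_nodes)
    return coeffs
```

For a signature that happens to equal 3 + 2x (i.e., 3·T_0 + 2·T_1), the routine recovers coefficients close to [3, 2, 0, 0]; the shape is then approximated by summing c_k T_k(x).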
Comparison with Other Shape Signatures
Angle-based sampling for a shape signature such as the proposed IR has rarely been adopted before, because it cannot perfectly fit every shape segment. We compared and analyzed other shape signatures in depth before choosing this solution. For example, a quite straightforward design is to sample along the contour itself, representing the contour as a set of polygon vertex coordinates. This design can almost perfectly fit the object segment, especially for non-convex shapes. However, we find its performance drops in mAP; more results are reported in Table 2. The likely reason is that our angle-based sampling produces a 1D sample sequence, whereas the contour vertex sequence is a 2D sample sequence that is more sensitive to noise. We report the reconstruction error of these two shape signatures on the Pascal VOC 2012 training set in Fig. 5 (denoted "IR" and "XY" respectively). Admittedly, XY has less reconstruction error when sampling the same number of points on the contour, but compared at the same vector dimension, IR is more accurate, with significantly less reconstruction error. Though the difference shrinks as the number of sampled points grows, a large shape vector makes training unstable, as presented in Table 2.
Other classic shape signatures, such as centroid radius and cumulative angle, cannot reconstruct the shape.
Comparison with Other Fitting Methods
Other commonly used function approximation methods, namely polynomial regression and Fourier series fitting, are also considered.
For polynomial regression, the goal is to fit a shape vector consisting of the coefficients of a fixed-degree polynomial. For Fourier series fitting, the shape vector consists of the coefficients of a truncated Fourier series. As the dimension of the shape vector can be determined in advance, we compare the methods from three aspects: the reconstruction error, the sensitivity to noise, and the numeric distribution of the coefficients.
The reconstruction errors are calculated under the same vector dimension and sampled point number in Fig. 6. We then conduct a sensitivity analysis, shown in Fig. 7: each coefficient is perturbed by noise proportional to the mean of the corresponding coefficient. As we can see, the Fourier series is extremely sensitive, which may make Fourier fitting unsuitable for CNN training, as the CNN is known to regress with uncertainty. If the most sensitive coefficient is fixed, the Fourier fit becomes less sensitive but has considerably larger reconstruction error. Besides, considering the difficulty for the network to learn, we also investigate statistics on the distribution of the fitted coefficients (see Fig. 8, Fig. 9, and Fig. 10). Chebyshev polynomials are the best choice for shape signature fitting, as they have less reconstruction error, less sensitivity to noise, and a better numeric distribution of coefficients.
3.3 Regression Under Object Detection Framework
Our network learns to predict the inner center and the shape vector, along with the object bounding box. The loss functions for bounding box regression and classification stay the same as in the original object detection frameworks; for YOLOv3, they follow the original formulation. As for the loss function for shape learning, we regress the predicted shape vector against the fitted Chebyshev coefficients,
where the regression is applied over the grid cells containing objects for one-stage detectors, and over the proposals for two-stage detectors. The overall objective function is the sum of the original detection losses and this shape loss.
3.4 Decoding Shape Vector to Shape
Given the shape vector dimension n and the predicted shape vector (c_0, ..., c_{n-1}), the fitted Chebyshev polynomial gives the radius at each angle as r(θ) = Σ_k c_k T_k(t), where t is the angle θ normalized to [-1, 1]. Applying the polar-to-Cartesian transform (cos θ, sin θ), the contour point at angle θ is recovered as (x, y) = (x_c + r(θ) cos θ, y_c + r(θ) sin θ), where (x_c, y_c) is the predicted inner center; traversing the sampled angles yields the full contour. This calculation can be written in tensor form using the Hadamard product: given the batch size, the sampled angles, the predicted shape vectors, and the predicted inner centers are arranged as tensors, and the decoded contour points are obtained through element-wise multiplications and additions.
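The whole decode can be sketched in plain Python (our own illustration; in practice this is one batched tensor expression, and the angle normalization to [-1, 1] shown here is one plausible choice):

```python
# Sketch: decode a predicted shape vector back to contour points using only
# multiplications and additions, mirroring the tensor-operation decode.
import math

def decode_contour(center, coeffs, n_points=16):
    cx, cy = center
    pts = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points
        t = theta / math.pi - 1.0              # normalize angle to [-1, 1]
        # evaluate r(theta) = sum_k c_k * T_k(t) via the Chebyshev recurrence
        t_prev, t_cur, r = 1.0, t, 0.0
        for k, c in enumerate(coeffs):
            r += c * (t_prev if k == 0 else t_cur)
            if k >= 1:
                t_prev, t_cur = t_cur, 2 * t * t_cur - t_prev
        pts.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return pts
```

With a single coefficient c_0 = 2, the decoded contour is a circle of radius 2 around the center, confirming the decode path end to end.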
In the GPU setting, the cost of such tensor operations is minor. Thanks to this extremely fast shape decoding, our instance segmentation achieves the same speed as object detection.
We conduct extensive experiments to justify the descriptor choice and the efficacy of the proposed methods. Unless specified otherwise, the base detector is YOLOv3 implemented in GluonCV with a fixed input image size. Other hyper-parameters stay the same as in the YOLOv3 implementation. We train for 300 epochs and report the performance of the best evaluation results. For model names followed by a bracketed number, the number is the dimension of the shape vector.
4.1 Explicit vs. Implicit
We first compare the explicit shape encoding with the implicit shape encoding. As the previous work provides a baseline for implicit shape representation with YOLO as the base detector, for a fair comparison we also train ESE-Seg with a YOLO base detector and the same shape vector dimension. We denote these models "YOLO-Cheby (50)" and "YOLO-Cheby (20)". The experiments are on Pascal SBD 2012 val.
Note that mainstream mask-based instance segmentation methods, namely SDS, MNC, FCIS, and Mask R-CNN, can also be viewed as implicit shape encoding. We compare them with "YOLOv3-Cheby (20)" on Pascal VOC 2012 without SBD and on COCO using their reported scores: our model outperforms Mask R-CNN (with ResNet-50) at mAP@0.5 on Pascal VOC and is close to it on COCO. Note that the input image for Mask R-CNN with ResNet50-FPN is 800 pixels on the shorter side, almost 4 times our input size. All results are reported in Table 1.
|Table 1 (excerpt)|
|SBD (5732 val images)|
|Embedding (50)|32.6|14.8|28.9|30.5|
|Embedding (20)|34.6|15.0|31.5|28.0|
|Pascal VOC 2012 val|
|+ COCO pretrained|69.3|36.7|54.2|26.0|
|COCO 2017 val|
4.2 On Explicit Descriptors
In this section, we will compare the object shape signatures and the function approximation methods quantitatively.
On Different Shape Signatures
For object shape signatures, we compare our proposed IR with the straightforward 2D-vertex representation on Pascal VOC 2012 (see Table 2). We adopt the bounding box itself ("squared boxes") as a baseline. Note that this baseline is not the object detection score: it computes the IoU between the bounding box and the instance mask.
For each shape signature, we compare regressing the signature directly and regressing it after Chebyshev polynomial fitting. For direct regression, we control the length of the shape signature by adjusting the number of sampled points per shape, and select two sampling densities to regress. Models trained on 2D vertices are denoted "XY"; their shape vectors have twice the dimension of the corresponding point counts. For Chebyshev fitting on these signatures, we fit the x-coordinates and y-coordinates separately; "XY-Cheby (10+10)" denotes that each fitted function has 10 coefficients.
On Different Function Approximation Techniques
We have already compared the function approximation techniques through offline analysis. However, it is still interesting to examine the performance of the neural network on the coefficients obtained by these methods.
All function approximations are carried out on the IR signature. Polynomial regression is denoted "Poly", Fourier series fitting "Fourier", and Chebyshev polynomial fitting "Cheby". All models are tested on Pascal VOC 2012 val; see Table 3.
4.3 On Base Object Detectors
To show the generality of the object shape detection, we also conduct shape learning on Faster R-CNN ("Faster-Cheby (20)"), RetinaNet ("Retina-Cheby (20)"), and YOLOv3-tiny ("YOLOv3-tiny-Cheby (20)"). Not only is the performance stable across these bounding box-based detectors, but the speed boost of a faster detector also carries over, as shown in Table 4.
4.4 Qualitative Results
Qualitative results are shown in Fig. 11. The predicted shape vectors indeed capture the characteristics of the contours rather than producing random noise.
5 Limitations and Future Works
Our proposed ESE-Seg achieves instance segmentation with minor time consumption and decent performance at an IoU threshold of 0.5. However, due to the inaccuracy of the shape vector and the noise that comes with CNN regression, performance at larger IoU thresholds such as 0.7 drops by a large margin. In the future, better ways to explicitly represent the shape and to train the CNN regression, which would yield higher performance at high IoU thresholds, are of great interest.
Acknowledgments. This work is supported in part by the National Key R&D Program of China, No. 2017YFA0700800, and the National Natural Science Foundation of China under Grant 61772332.
References
- (2018) Cascade R-CNN: Delving into high quality object detection. In CVPR.
- Hybrid task cascade for instance segmentation. In CVPR, pp. 4974–4983.
- (2016) Instance-aware semantic segmentation via multi-task network cascades. In CVPR.
- (2016) R-FCN: Object detection via region-based fully convolutional networks. In NIPS, pp. 379–387.
- (1997) Machine Vision: Theory, Algorithms and Practicalities. Academic Press.
- (2010) The PASCAL Visual Object Classes (VOC) challenge. International Journal of Computer Vision 88(2), pp. 303–338.
- InstaBoost: Boosting instance segmentation via probability map guided copy-pasting. In ICCV.
- (2015) Fast R-CNN. In ICCV, pp. 1440–1448.
- Deep image retrieval: Learning global representations for image search. In ECCV, pp. 241–257.
- (2011) Semantic contours from inverse detectors. In ICCV.
- (2014) Simultaneous detection and segmentation. In ECCV, pp. 297–312.
- (2017) Mask R-CNN. In ICCV.
- (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778.
- Bag of tricks for image classification with convolutional neural networks. arXiv preprint arXiv:1812.01187.
- (2017) MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
- (2017) Straight to shapes: Real-time detection of encoded shapes. In CVPR, pp. 6550–6559.
- (2000) Region-based shape descriptor invariant to rotation, scale and translation. Signal Processing: Image Communication 16(1), pp. 87–93.
- (2018) CornerNet: Detecting objects as paired keypoints. In ECCV, pp. 734–750.
- (2017) Fully convolutional instance-aware semantic segmentation. In CVPR, pp. 2359–2367.
- (2017) Focal loss for dense object detection. In ICCV, pp. 2980–2988.
- (2014) Microsoft COCO: Common objects in context. In ECCV, pp. 740–755.
- (2018) Path aggregation network for instance segmentation. In CVPR.
- (2016) SSD: Single shot multibox detector. In ECCV, pp. 21–37.
- (2018) Grid R-CNN. CoRR abs/1811.12030.
- (2003) A linear time algorithm for computing exact Euclidean distance transforms of binary images in arbitrary dimensions. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(2), pp. 265–270.
- (2017) Associative embedding: End-to-end learning for joint detection and grouping. In NIPS, pp. 2274–2284.
- (2018) Semi-convolutional operators for instance segmentation. In ECCV, pp. 86–102.
- (2019) Deep RNN framework for visual sequential applications. In CVPR, pp. 423–432.
- (2016) Visual instance retrieval with deep convolutional networks. ITE Transactions on Media Technology and Applications 4(3), pp. 251–258.
- (2016) You Only Look Once: Unified, real-time object detection. In CVPR, pp. 779–788.
- (2018) YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
- (2015) Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, pp. 91–99.
- (2018) SNIPER: Efficient multi-scale training. In NIPS.
- (1991) A Contour-Oriented Approach to Shape Analysis. Prentice Hall International (UK) Ltd.
- (2009) Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research 10, pp. 207–244.
- (2018) SRDA: Generating instance segmentation annotation via scanning, reasoning and domain adaptation. In ECCV.
- (1974) An analysis technique for biological shape, I. Information and Control 25(4), pp. 357–370.
- (2015) Exploiting local features from deep networks for image retrieval. In CVPR Workshops, pp. 53–61.
- (2002) A comparative study of Fourier descriptors for shape representation and retrieval. In Proc. 5th Asian Conference on Computer Vision, pp. 35.
- (2018) ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In CVPR, pp. 6848–6856.