Explicit Shape Encoding for Real-Time Instance Segmentation

08/12/2019, by Wenqiang Xu et al., Shanghai Jiao Tong University

In this paper, we propose a novel top-down instance segmentation framework based on explicit shape encoding, named ESE-Seg. It largely reduces the computational cost of instance segmentation by explicitly decoding multiple object shapes with tensor operations, and thus performs instance segmentation at almost the same speed as object detection. ESE-Seg is built on a novel shape signature, the Inner-center Radius (IR), Chebyshev polynomial fitting, and strong modern object detectors. ESE-Seg with YOLOv3 outperforms Mask R-CNN on Pascal VOC 2012 at mAP^r@0.5 while being 7 times faster.


1 Introduction

Instance segmentation is a fundamental task in computer vision and is important for many real-world applications such as autonomous driving and robot manipulation. Because the task requires predicting both the object location and the object shape, instance segmentation methods are generally not as efficient as object detection frameworks. Forwarding each object instance through an upsampling network to obtain its shape, as mainstream instance segmentation frameworks do [12, 22, 3, 19], is computationally expensive, especially compared with object detection, which only needs to regress the bounding box, a 4D vector, for each object. Thus, if the network can also regress the object shape to a short vector and decode that vector to the shape (see Fig. 1) in a simple way, just like the bounding box, instance segmentation can reach almost the same computational efficiency as object detection. To achieve this goal, we propose a novel instance segmentation framework based on explicit shape encoding and modern object detectors, named ESE-Seg.

Shape encoding was originally developed for instance retrieval [39, 17, 37]; it encodes an object into a shape vector. Recently, a number of works encode the shape implicitly [9, 29, 38], i.e., they project the shape content into a latent vector, typically through a black-box model such as a deep CNN. Decoding under this approach must therefore also go through a network, which requires a separate forward pass for each instance and incurs a large computational cost. In pursuit of fast decoding, we instead employ an explicit shape encoding that involves only simple numeric transformations.

Figure 1: ESE-Seg learns to estimate the shapes of the detected objects; the shapes are obtained simultaneously with the bounding boxes.

However, designing a satisfactory explicit shape encoding method is non-trivial. Since CNN regression is known to carry uncertainty, a preferred shape vector should be relatively short yet contain sufficient information, be robust to noise, and be efficiently decodable to reconstruct the shape. In this paper, we propose a contour-based shape signature that meets these requirements: a novel "Inner-center Radius" (IR) shape signature for instance shape representation. The IR first locates an inner center inside the object segment and, based on this inner center, transforms the contour points to polar coordinates. That is, we can form a function r(θ) giving the radius along the contour with respect to the angle θ. To make the shape vector even shorter and more robust, we apply Chebyshev polynomials to approximate r(θ). As such, the IR signature is represented by a small number of coefficients with small error, and these coefficients form the shape vector to be predicted. Additionally, we discuss in depth the comparison with other shape signature designs. A conventional object detector (YOLOv3 [31]) is used to regress the shape vector along with the 4D bounding box vector. Note that our shape decoding can be implemented with simple tensor operations (multiplication and addition), which are extremely fast.

ESE-Seg itself is independent of the choice of bounding box-based object detection framework [32, 4, 8, 20, 23]. We demonstrate this generality on Faster R-CNN [32], RetinaNet [20], YOLO [30] and YOLOv3-tiny [31], and evaluate ESE-Seg on standard public datasets, namely Pascal VOC [6] and COCO [21]. Our method achieves 69.3 mAP and 48.7 mAP respectively at IoU threshold 0.5. The score is better than that of Mask R-CNN [12] on Pascal VOC 2012 and competitive with its performance on COCO, which is decent considering our method is 7 times faster than Mask R-CNN with the same ResNet-50 backbone [13]. The speed can reach 130 fps on a GTX 1080Ti when the base detector is switched to YOLOv3-tiny, while the mAP remains 53.2% on Pascal VOC. Notably, ESE-Seg speeds up instance segmentation not through model acceleration techniques [15, 40], but through a new mechanism that cuts down the cost of shape prediction after object detection.

Contributions. We propose an explicit shape encoding based instance segmentation framework, ESE-Seg. It is a top-down approach but reconstructs the shapes of multiple instances in one pass, thus greatly reducing the computational cost and allowing instance segmentation to reach the speed of object detection with no model acceleration techniques involved.

2 Related Work

Explicit vs. Implicit Shape Representation

A previous work with a similar ideology was done by Jetley et al. [16]. They took the implicit shape representation path by first training an autoencoder on object binary masks. The encoded shape vector is decoded to a shape mask through the decoder component. In their implementation, they adopted YOLO [30] to regress the bounding box and the shape vector for each detected object; the YOLO structure can thus be viewed as both detector and encoder. The encoded vector from YOLO is then decoded by the pre-trained denoising autoencoder. The major differences between our work and theirs are:

  • Explicit shape representation is typically based on the contour, while implicit shape representation is typically based on the mask.

  • Explicit shape representation requires no additional decoder network training. Parallelizing the decoding process for all objects in the images, which is hard for network structured decoder, can be easily achieved by the explicit shape encoding. As a matter of fact, implicit decoding requires multiple passes for multiple objects, one for each, while explicit decoding can obtain all the shapes in one pass.

  • The inputs for training the autoencoder and for training YOLO (viewed as an encoder) are quite different (object scales, color patterns), which may cause trouble for the decoder, since the decoder is not further optimized during YOLO training. Such an issue does not exist for explicit shape representation.

In addition to our proposed IR shape signature, there exist various methods to represent a shape, to name a few: centroid radius, complex coordinates, and cumulative angle [5, 34, 39]. While such methods sample shape-related features along the contour, only a few of them can be decoded to reconstruct the shape.

Object detection

Object detection is a richly studied field. CNN-based object detection frameworks can be roughly divided into two categories, one-stage and multi-stage. The two-stage scheme is a classic multi-stage scheme, which typically learns an RPN to sample region proposals and then refines the detections with RoI pooling or its variations; representative works are Faster R-CNN [32] and R-FCN [4]. Recently, some works extend the two-stage scheme to multiple stages in a cascade form [1]. On the other hand, one-stage detectors divide the input image into fixed-size grid cells and parallelize detection on each cell with fully convolutional operations; representative networks are SSD [23], YOLO [30], and RetinaNet [20]. Recently, point-based detectors have been proposed: CornerNet [18] directly detects the upper-left and bottom-right corner points and is a one-stage detector, while Grid R-CNN [24] regresses 9 points to construct the bounding box and is a two-stage detector.

Figure 2: The pipeline of the shape detection, regression and reconstruction.

Our method is compatible with all bounding box-based detection networks. We experiment with Faster R-CNN, YOLO, YOLOv3, and RetinaNet to demonstrate this generality; see Table 4. However, it is not compatible with point-based detectors, as the bounding box in that setting is not parametrized as a regression target.

Instance Segmentation

Instance segmentation requires not only locating the object instance but also delineating its shape. The mainstream methods can be roughly divided into top-down [12, 22, 3, 19, 28, 27, 2] and bottom-up [26, 35] approaches. Ours belongs to the top-down line. Top-down approaches such as MNC [3], FCIS [19], and Mask R-CNN [12] generally slow down when the number of objects in an image is large, as they predict the instance masks in sequence. In contrast, our ESE-Seg avoids this cumbersome computation by regressing the object shapes to short vectors and decoding them simultaneously. It is also the first top-down instance segmentation framework whose inference time is not affected by the number of instances in the image. Besides, works that improve instance segmentation through data augmentation [7, 36] or scale normalization [33] can easily be integrated into our system.

3 Method

3.1 Overview

We propose an explicit shape encoding based detection framework to solve instance segmentation. It predicts all instance segments in one forward pass, reaching efficiency comparable to an object detection solver. Given an object instance segment, we parametrize the contour with a novel shape signature, "Inner-center Radius" (IR) (Sec. 3.2.1). Chebyshev polynomials are used to approximate the shape signature with a small number of coefficients (Sec. 3.2.2). These coefficients serve as the shape descriptor, which the network learns to regress (Sec. 3.3). Finally, we describe how to decode the shape descriptor under an ordinary object detection framework using simple tensor operations (Sec. 3.4). The overall pipeline is shown in Fig. 2.

The Advantage of Explicit Shape Encoding

In an object detection system (e.g., YOLOv3), the network regresses the bounding boxes (4D vectors), and each bounding box is decoded by tensor operations, which are light to process and easy to parallelize. By contrast, conventional instance segmentation (e.g., Mask R-CNN) requires an add-on network structure to compute the object shape. The decoding/upsampling forward pass involves a large number of parameters, which is heavy to run in parallel for multiple instances. This is why instance segmentation is normally much slower than object detection. Therefore, if we also regress the object shape directly into a short vector, the instance shape can be decoded by fast tensor operations (multiplication and addition) in a similar way, and instance segmentation can reach the speed of object detection.

3.2 Shape Signature

3.2.1 Inner-center Radius Shape Signature

In this section, we will describe the design of the “inner-center radius” shape signature and compare it to previously proposed shape signatures.

The construction of the "inner-center radius" contains two steps: first, locate an inner center point inside the object segment as the origin of a polar coordinate system; second, sample the contour points according to the angle θ. This signature is translation-invariant, and scale-invariant after normalization.

Inner center

The inner-center point is defined as the point inside the object farthest from the contour, which can be obtained through a distance transform [25]. Note that some commonly used centers, such as the center of mass or the center of the bounding box, are not guaranteed to lie inside the object. See Fig. 3.

Figure 3: The center points of an object. As we can see, the bounding box center and the center of mass are not guaranteed to be inside the object.
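A minimal sketch of the inner-center computation, assuming a binary instance mask as input and using the SciPy Euclidean distance transform (the authors' exact implementation is not specified here):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def inner_center(mask):
    # mask: binary (H, W) instance mask, 1 = object.
    # Distance from every object pixel to the nearest background pixel.
    dist = distance_transform_edt(mask)
    # The inner center is the object pixel farthest from the contour.
    return np.unravel_index(np.argmax(dist), dist.shape)
```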

In a few cases, an object is separated into disconnected regions, which would result in multiple inner centers. To deal with such situations, we dilate the broken areas into a single one and then find the contour of the dilated shape. This contour is of course very rough, but it helps to reorder the outline points. The whole process is depicted in Fig. 4. The inner center is then computed from the completed contour.

Figure 4: The process of completing the separated areas. An occluded object (a) has many separated areas (b). We split the contour points of each area into outline and inner points with the help of the bounding box (c), then dilate the broken areas into one and reorder the outline points according to the dilated shape contour (d); finally, we complete the instance (e).
Dense Contour Sampling

We sample the contour points at a fixed angular interval around the inner-center point, so each contour yields a fixed number of sampled points. If a ray cast from the inner center intersects the contour more than once, we keep only the point with the largest radius. The resulting signature is the radius as a function of the angle, denoted r(θ). We are aware that contour sampling in this way is not perfect; however, after extensive experiments on Pascal VOC and COCO, we find it suitable for natural objects (see Table 2). A further discussion is given in Sec. 3.2.3.
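A sketch of the angle-based sampling, assuming the contour is available as a list of (x, y) points and using a nominal resolution of 360 angular bins (the interval used in the paper is not reproduced here):

```python
import numpy as np

def ir_signature(contour, center, n_angles=360):
    # contour: (M, 2) array of (x, y) points on the instance contour.
    # center: (cx, cy) inner center; n_angles is an assumed sampling resolution.
    d = contour - np.asarray(center, dtype=np.float64)
    theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)  # angle of each point
    radius = np.hypot(d[:, 0], d[:, 1])                      # distance to the center
    # Bin points by angle; keep only the largest radius per bin, i.e. the
    # farthest intersection of the ray with the contour.
    bins = (theta / (2 * np.pi) * n_angles).astype(int) % n_angles
    r = np.zeros(n_angles)
    np.maximum.at(r, bins, radius)
    return r  # empty bins stay 0 and would need interpolation in practice
```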

3.2.2 Fitting the Signature to Coefficients

The IR turns the shape representation into a vector, but this vector is still too long for the network to regress reliably. Moreover, the raw shape signature is very sensitive to noise (see Fig. 7). Thus, we take a further step to shorten the shape vector and resist noise through Chebyshev polynomial fitting.

Chebyshev polynomials

The Chebyshev polynomials are defined by the recurrence

T_0(x) = 1,    (1)
T_1(x) = x,    (2)
T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x),    (3)

which is also known as the Chebyshev polynomials of the first kind. They effectively minimize the problem of Runge's phenomenon and provide a near-optimal approximation under the maximum norm (https://en.wikipedia.org/wiki/Chebyshev_polynomials).

Given the IR shape signature r(θ), the Chebyshev approximation finds the coefficients c_i in

r(θ) = Σ_{i≥0} c_i T_i(x),

where x is the angle θ linearly mapped to [−1, 1]. Truncating the series to the first k terms gives the approximation r̂(θ) = Σ_{i=0}^{k−1} c_i T_i(x). The coefficients (c_0, ..., c_{k−1}) form the shape vector that represents the object.
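A sketch of this fitting step using NumPy's Chebyshev least-squares routines; the 20-coefficient truncation mirrors the "Cheby (20)" models used later, while the exact fitting procedure of the authors is an assumption:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_cheby(r, n_coef=20):
    # r: sampled IR signature r(theta); the angles are mapped to the
    # Chebyshev domain [-1, 1] before least-squares fitting.
    x = np.linspace(-1.0, 1.0, len(r))
    return C.chebfit(x, r, deg=n_coef - 1)   # n_coef coefficients c_0..c_{n_coef-1}

def eval_cheby(coef, n_points=360):
    # Reconstruct the (approximate) signature from the coefficients.
    x = np.linspace(-1.0, 1.0, n_points)
    return C.chebval(x, coef)
```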

3.2.3 Discussion

Comparison with Other Shape Signatures

Angle-based sampling for a shape signature, such as the proposed IR, has rarely been adopted before because it cannot perfectly fit every shape segment. We compared and analyzed other shape signatures in depth before choosing this solution. For example, a quite straightforward design is to sample along the contour, representing the contour by a set of polygon vertex coordinates. This design can nearly perfectly fit the object segment, especially for non-convex shapes. However, we find that its performance drops noticeably in mAP; more results are reported in Table 2. A likely reason is that our angle-based sampling produces a 1D sample sequence, whereas the contour vertex sequence is a 2D sample sequence, which is more sensitive to noise. We report the reconstruction error of these two shape signatures on Pascal VOC 2012 training in Fig. 5 (denoted as "IR" and "XY" respectively). Admittedly, XY has less reconstruction error when sampling the same number of points on the contour, but for the same vector dimension, IR is more accurate: since XY stores two coordinates per point, an IR vector at a given number of sampled points has the same dimension as an XY vector with half as many points, and at equal dimension IR has significantly less reconstruction error. Though the difference shrinks as the number of sampled points grows, a long shape vector makes training unstable, as presented in Table 2.

Other classic shape signatures such as centroid radius and cumulative angle cannot reconstruct the shape.

Figure 5: The reconstruction error of IR and XY with different numbers of sampled points.
Comparison with Other Fitting Methods

Other commonly used function approximation methods, namely polynomial regression and Fourier series fitting, are also considered.

For polynomial regression, the shape vector consists of the coefficients of a fixed-degree polynomial fitted to r(θ). For Fourier series fitting, the shape vector consists of the coefficients of a Fourier series truncated at a fixed degree. In all cases the dimension of the shape vector can be determined in advance. We therefore compare the methods from three aspects: the reconstruction error, the sensitivity to noise, and the numeric distribution of the coefficients.

The reconstruction errors under the same vector dimension and number of sampled points are reported in Fig. 6. We then take a fixed dimension as an example to conduct the sensitivity analysis shown in Fig. 7: each coefficient is perturbed by noise scaled by the mean magnitude of that coefficient. As we can see, one coefficient of the Fourier series is extremely sensitive, which may make Fourier fitting unsuitable for CNN training, as CNN regression is known to carry uncertainty. If we fix that coefficient instead of regressing it, the fitting becomes less sensitive but has a considerably larger reconstruction error. Besides, considering the difficulty for the network to learn, we also investigate the statistics of the distributions of the fitted coefficients; see Fig. 8, Fig. 9 and Fig. 10. Chebyshev polynomials are the best choice for shape signature fitting as they have less reconstruction error, less sensitivity to noise, and a better numeric distribution of coefficients.
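The perturb-and-reconstruct procedure can be written down directly; the sketch below, shown for the Chebyshev case with an assumed noise scale, applies equally to the polynomial and Fourier fits with their respective evaluators:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def sensitivity(coef, n_points=360, n_trials=100, scale=0.1, rng=None):
    # coef: fitted Chebyshev coefficients (np.ndarray).
    # Perturb one coefficient at a time with zero-mean Gaussian noise whose
    # standard deviation is proportional to that coefficient's magnitude
    # (the 0.1 factor is an assumption), and measure the mean absolute change
    # of the reconstructed signature.
    rng = np.random.default_rng() if rng is None else rng
    x = np.linspace(-1.0, 1.0, n_points)
    base = C.chebval(x, coef)
    out = np.zeros(len(coef))
    for i, c in enumerate(coef):
        for _ in range(n_trials):
            noisy = coef.astype(float).copy()
            noisy[i] = c + rng.normal(0.0, scale * abs(c) + 1e-8)
            out[i] += np.mean(np.abs(C.chebval(x, noisy) - base))
    return out / n_trials
```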

Figure 6: Comparison of the reconstruction error on COCO 2017 training.
Figure 7: Comparison of the sensitivity to noise on COCO 2017 training.
Figure 8: Coefficients distribution of polynomial regression on COCO training 2017.

Figure 9: Coefficients distribution of Fourier series fitting on COCO training 2017.
Figure 10: Coefficients distribution of Chebyshev polynomial fitting on COCO training 2017.

3.3 Regression Under Object Detection Framework

Our network learns to predict the inner center and the shape vector along with the object bounding box. The loss functions for bounding box regression and classification stay the same as in the original object detection frameworks; for YOLOv3, the bounding box and classification losses can be found in [31]. The loss for shape learning is a regression loss between the predicted and ground-truth shape vectors (and inner centers), summed over the grid cells that contain objects for one-stage detectors and over the proposals for two-stage detectors. The overall objective function is the sum of the detection losses and this shape loss.
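A hedged sketch of such a shape loss, using a smooth-L1 regression over the locations that contain objects (the exact loss form and weighting used by the authors are assumptions here):

```python
import torch
import torch.nn.functional as F

def shape_loss(pred, target, obj_mask):
    # pred, target: (B, N, k) predicted / ground-truth shape vectors
    # (Chebyshev coefficients plus inner-center offsets) for N grid cells
    # or proposals; obj_mask: (B, N) indicator of locations containing objects.
    # The smooth-L1 form and the normalization are assumptions; only positive
    # locations contribute, as described in the text.
    per_loc = F.smooth_l1_loss(pred, target, reduction="none").sum(dim=-1)
    return (per_loc * obj_mask).sum() / obj_mask.sum().clamp(min=1)
```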

3.4 Decoding Shape Vector to Shape

Given the shape vector dimension k and the predicted shape vector ĉ = (ĉ_0, ..., ĉ_{k−1}), the fitted Chebyshev polynomial is r̂(θ) = Σ_{i=0}^{k−1} ĉ_i T_i(x), with x the angle θ mapped to [−1, 1]. With the polar coordinate transform factor (cos θ, sin θ), the shape can be recovered by traversing θ:

(x, y) = (x_c, y_c) + r̂(θ) ⊙ (cos θ, sin θ),

where (x_c, y_c) is the predicted inner center and ⊙ is the Hadamard product. This calculation can be written in tensor form: given the batch size, a tensor of sampled angles, a tensor of predicted shape vectors, and a tensor of predicted inner centers, the decoded contour points of all instances are obtained with a single batched multiply-and-add.

In the GPU setting, the computational cost of such tensor operations is minor. Thanks to this extremely fast shape decoding, our instance segmentation can run at essentially the same speed as object detection.
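A sketch of the batched decoding with plain tensor operations (PyTorch here; the tensor layout and names are illustrative assumptions, not the authors' exact formulation):

```python
import torch

def decode_shapes(coefs, centers, n_points=360):
    # coefs: (B, k) predicted Chebyshev coefficients; centers: (B, 2) predicted
    # inner centers in image coordinates.
    B, k = coefs.shape
    theta = torch.linspace(0, 2 * torch.pi, n_points)   # sampled angles
    x = torch.linspace(-1.0, 1.0, n_points)             # Chebyshev domain

    # Evaluate T_0..T_{k-1} with the recurrence T_{n+1} = 2x T_n - T_{n-1}.
    T = [torch.ones_like(x), x]
    for _ in range(2, k):
        T.append(2 * x * T[-1] - T[-2])
    T = torch.stack(T[:k], dim=0)                       # (k, n_points)

    r = coefs @ T                                       # (B, n_points) radii
    dirs = torch.stack((torch.cos(theta), torch.sin(theta)), dim=0)  # (2, n_points)
    # Broadcasted multiply-and-add decodes every instance in one shot.
    return centers[:, :, None] + r[:, None, :] * dirs[None]          # (B, 2, n_points)
```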

4 Experiment

We conduct extensive experiments to justify the descriptor choice and the efficacy of the proposed methods. If not specified, the base detector is YOLOv3 implemented in GluonCV [14], and other hyper-parameters stay the same as in the YOLOv3 implementation. We train for 300 epochs and report the performance of the best evaluation results. When a model name is followed by a number in brackets, the number is the dimension of the shape vector.

4.1 Explicit v.s. Implicit

We first compare explicit shape encoding with implicit shape encoding. The previous work [16] provides a baseline for implicit shape representation with YOLO [30] as the base detector; for a fair comparison, we also train ESE-Seg with the YOLO base detector and the same shape vector dimensions. We denote the models as "YOLO-Cheby (50)" and "YOLO-Cheby (20)". The experiments are on Pascal SBD 2012 val [10].

Note that the mainstream mask-based instance segmentation methods, namely SDS [11], MNC [3], FCIS [19], and Mask R-CNN [12], can also be viewed as implicit shape encoding. We compare them with "YOLOv3-Cheby (20)" on Pascal VOC 2012 without SBD and on COCO using their reported scores; our model outperforms Mask R-CNN (with ResNet-50) at mAP@0.5 on Pascal VOC and is close to it on COCO. Note that the input image size for Mask R-CNN with ResNet-50-FPN is 800 on the shorter side, which is almost 4 times larger than ours. All results are reported in Table 1.

SBD (5732 val images)
model  mAP@0.5  mAP@0.7  mAP_vol  Time (ms)
BinaryMask[16] 32.3 12.0 28.6 26.3
Radial[16] 30.0 6.5 29.0 27.1
Embedding (50) [16] 32.6 14.8 28.9 30.5
Embedding (20) [16] 34.6 15.0 31.5 28.0
YOLO-Cheby (50) 39.1 10.5 32.6 24.2
YOLO-Cheby (20) 40.7 12.1 35.3 24.0
Pascal VOC 2012 val
model  mAP@0.5  mAP@0.7  mAP_vol  Time (ms)
SDS 49.7 25.3 41.4 48k
MNC 59.1 36.0 - 360
FCIS 65.7 52.1 - 160
Mask R-CNN 68.5 40.2 - 180
YOLOv3-Cheby (20) 62.6 32.4 52.0 26.0
+ COCO pretrained 69.3 36.7 54.2 26.0
COCO 2017 val
model  mAP@0.5  mAP@0.75  mAP (all)  Time (ms)
FCIS 49.5 - 29.2 160
Mask R-CNN 51.2 31.5 30.3 180
YOLOv3-Cheby (20) 48.7 22.4 21.6 26.0
Table 1: Comparison of ESE-Seg to the previous methods on Pascal SBD 2012 val, Pascal VOC 2012 without SBD val, and COCO 2017 val.
Figure 11: Qualitative results generated by our methods.

4.2 On explicit descriptors

In this section, we will compare the object shape signatures and the function approximation methods quantitatively.

On Different Shape Signatures

For object shape signatures, we compare our proposed IR with a straightforward 2D vertex representation on Pascal VOC 2012 (see Table 2). We adopt the squared box, i.e. the bounding box itself, as the baseline. Note that the squared-box baseline scores are not the object detection scores, since the baseline computes the IoU between the bounding box and the instance mask.

For each shape signature, we compare regressing the raw signature directly and regressing the coefficients after Chebyshev polynomial fitting. For direct regression, we control the length of the shape signature by adjusting the number of sampled points for each shape. We denote models trained on 2D vertices as "XY"; since each vertex contributes two coordinates, the shape vector dimension is twice the number of sampled vertices. For the Chebyshev fitting of these signatures, we fit the x coordinates and y coordinates separately; "XY-Cheby (10+10)" means each fitted coordinate function has 10 coefficients.

model  mAP@0.5  mAP@0.7
Squared Boxes 42.3 8.6
XY (20) 46.1 10.7
XY (40) 43.5 11.2
XY-Cheby (10+10) 48.3 16.4
XY-Cheby (20+20) 53.1 20.9
IR (20) 48.8 13.5
IR (40) 52.6 19.3
IR (60) 51.7 16.4
IR-Cheby (20) 62.6 32.4
Table 2: Comparison of different choices of shape signature on Pascal VOC 2012.
On Different Function Approximation Techniques

We have already compared the function approximation techniques through offline analysis. However, it is still interesting to know how the neural network performs when regressing the coefficients obtained by these methods.

All function approximations are carried out on the IR shape signature. Polynomial regression is denoted as "Poly", Fourier series fitting as "Fourier", and Chebyshev polynomial fitting as "Cheby". All models are tested on Pascal VOC 2012 val; see Table 3.

model  mAP@0.5  mAP@0.7
Poly (20) 26.3 5.4
Fourier (20) 37.5 9.1
Fourier (40) 36.1 8.5
Cheby (20) 62.6 32.4
Cheby (40) 60.7 31.5
Table 3: Comparison of the performance of different function approximation methods on Pascal VOC 2012 val.

4.3 On base object detector

To show the generality of the object shape detection, we also conduct the shape learning on Faster R-CNN ("Faster-Cheby (20)"), RetinaNet ("Retina-Cheby (20)") and YOLOv3-tiny ("YOLOv3-tiny-Cheby (20)"). Not only is the performance stable across these bounding box-based detectors, but the speed advantages of faster detectors carry over directly, as shown in Table 4.

model  mAP@0.5  mAP@0.7  mAP_vol  Time (ms)
YOLOv3-Cheby (20) 62.6 32.4 52.0 26
Faster-Cheby (20) 63.4 32.8 54.2 180
Retina-Cheby (20) 65.9 36.5 56.7 73
YOLOv3-tiny-Cheby (20) 53.2 15.8 42.5 8
Table 4: Comparison of different base object detectors with IR shape signature and Chebyshev fitting on Pascal VOC 2012 val.

4.4 Qualitative Results

Qualitative results are shown in Fig. 11. The predicted shape vectors indeed capture the characteristics of the contours rather than producing random noise.

5 Limitations and Future Works

Our proposed ESE-Seg achieves instance segmentation with little extra time cost and decent performance at IoU threshold 0.5. However, due to the inaccuracy of the shape vector and the noise that comes with CNN regression, the performance at larger IoU thresholds such as 0.7 drops by a large margin. In the future, better ways to explicitly represent the shape and better ways to train the CNN regression, which would contribute to higher performance at high IoU thresholds, are of high interest.

Acknowledgement

This work is supported in part by the National Key R&D Program of China, No. 2017YFA0700800, and the National Natural Science Foundation of China under Grant 61772332.

References

  • [1] Z. Cai and N. Vasconcelos (2018) Cascade r-cnn: delving into high quality object detection. In CVPR, Cited by: §2.
  • [2] K. Chen, J. Pang, J. Wang, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Shi, W. Ouyang, et al. (2019) Hybrid task cascade for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4974–4983. Cited by: §2.
  • [3] J. Dai, K. He, and J. Sun (2016) Instance-aware semantic segmentation via multi-task network cascades. In CVPR, Cited by: §1, §2, §4.1.
  • [4] J. Dai, Y. Li, K. He, and J. Sun (2016) R-fcn: object detection via region-based fully convolutional networks. In Advances in neural information processing systems, pp. 379–387. Cited by: §1, §2.
  • [5] E. R. Davies (1997) Machine vision: theory, algorithms and practicalities. Academic Press. Cited by: §2.
  • [6] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman (2010) The pascal visual object classes (voc) challenge. International Journal of Computer Vision 88 (2), pp. 303–338. Cited by: §1.
  • [7] H. Fang, J. Sun, R. Wang, M. Gou, Y. Li, and C. Lu (2019) InstaBoost: boosting instance segmentation via probability map guided copy-pasting. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §2.
  • [8] R. Girshick (2015) Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448. Cited by: §1.
  • [9] A. Gordo, J. Almazán, J. Revaud, and D. Larlus (2016) Deep image retrieval: learning global representations for image search. In European conference on computer vision, pp. 241–257. Cited by: §1.
  • [10] B. Hariharan, P. Arbelaez, L. Bourdev, S. Maji, and J. Malik (2011) Semantic contours from inverse detectors. In International Conference on Computer Vision (ICCV), Cited by: §4.1.
  • [11] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik (2014) Simultaneous detection and segmentation. In European Conference on Computer Vision, pp. 297–312. Cited by: §4.1.
  • [12] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask R-CNN. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §1, §1, §2, §4.1.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1.
  • [14] T. He, Z. Zhang, H. Zhang, Z. Zhang, J. Xie, and M. Li (2018) Bag of tricks for image classification with convolutional neural networks. arXiv preprint arXiv:1812.01187. Cited by: §4.
  • [15] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam (2017) Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. Cited by: §1.
  • [16] S. Jetley, M. Sapienza, S. Golodetz, and P. H. Torr (2017) Straight to shapes: real-time detection of encoded shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6550–6559. Cited by: §2, §4.1, Table 1.
  • [17] H. Kim and J. Kim (2000) Region-based shape descriptor invariant to rotation, scale and translation. Signal Processing: Image Communication 16 (1), pp. 87 – 93. External Links: ISSN 0923-5965, Document, Link Cited by: §1.
  • [18] H. Law and J. Deng (2018) Cornernet: detecting objects as paired keypoints. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 734–750. Cited by: §2.
  • [19] Y. Li, H. Qi, J. Dai, X. Ji, and Y. Wei (2017) Fully convolutional instance-aware semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2359–2367. Cited by: §1, §2, §4.1.
  • [20] T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár (2017) Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988. Cited by: §1, §2.
  • [21] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In Computer Vision – ECCV 2014, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars (Eds.), Cham, pp. 740–755. External Links: ISBN 978-3-319-10602-1 Cited by: §1.
  • [22] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia (2018) Path aggregation network for instance segmentation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.
  • [23] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) Ssd: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §1, §2.
  • [24] X. Lu, B. Li, Y. Yue, Q. Li, and J. Yan (2018) Grid r-cnn. CoRR abs/1811.12030. Cited by: §2.
  • [25] C. R. Maurer, R. Qi, and V. Raghavan (2003) A linear time algorithm for computing exact euclidean distance transforms of binary images in arbitrary dimensions. IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (2), pp. 265–270. Cited by: §3.2.1.
  • [26] A. Newell, Z. Huang, and J. Deng (2017) Associative embedding: end-to-end learning for joint detection and grouping. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, USA, pp. 2274–2284. External Links: ISBN 978-1-5108-6096-4, Link Cited by: §2.
  • [27] D. Novotny, S. Albanie, D. Larlus, and A. Vedaldi (2018) Semi-convolutional operators for instance segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 86–102. Cited by: §2.
  • [28] B. Pang, K. Zha, H. Cao, C. Shi, and C. Lu (2019) Deep rnn framework for visual sequential applications. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 423–432. Cited by: §2.
  • [29] A. S. Razavian, J. Sullivan, S. Carlsson, and A. Maki (2016) Visual instance retrieval with deep convolutional networks. ITE Transactions on Media Technology and Applications 4 (3), pp. 251–258. Cited by: §1.
  • [30] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788. Cited by: §1, §2, §2, §4.1.
  • [31] J. Redmon and A. Farhadi (2018) Yolov3: an incremental improvement. arXiv preprint arXiv:1804.02767. Cited by: §1, §1, §3.3.
  • [32] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §1, §2.
  • [33] B. Singh, M. Najibi, and L. S. Davis (2018) SNIPER: efficient multi-scale training. NIPS. Cited by: §2.
  • [34] P. J. Van Otterloo (1991) A contour-oriented approach to shape analysis. Prentice Hall International (UK) Ltd.. Cited by: §2.
  • [35] K. Q. Weinberger and L. K. Saul (2009-06) Distance metric learning for large margin nearest neighbor classification. J. Mach. Learn. Res. 10, pp. 207–244. External Links: ISSN 1532-4435, Link Cited by: §2.
  • [36] W. Xu, Y. Li, and C. Lu (2018-09) SRDA: generating instance segmentation annotation via scanning, reasoning and domain adaptation. In The European Conference on Computer Vision (ECCV), Cited by: §2.
  • [37] I. T. Young, J. E. Walker, and J. E. Bowie (1974) An analysis technique for biological shape. i. Information and Control 25 (4), pp. 357 – 370. External Links: ISSN 0019-9958, Document, Link Cited by: §1.
  • [38] J. Yue-Hei Ng, F. Yang, and L. S. Davis (2015) Exploiting local features from deep networks for image retrieval. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 53–61. Cited by: §1.
  • [39] D. Zhang, G. Lu, et al. (2002) A comparative study of fourier descriptors for shape representation and retrieval. In Proc. 5th Asian Conference on Computer Vision, pp. 35. Cited by: §1, §2.
  • [40] X. Zhang, X. Zhou, M. Lin, and J. Sun (2018) Shufflenet: an extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6848–6856. Cited by: §1.