Object detection, 3D detection, and pose estimation using center point detection:
Detection identifies objects as axis-aligned boxes in an image. Most successful object detectors enumerate a nearly exhaustive list of potential object locations and classify each. This is wasteful, inefficient, and requires additional post-processing. In this paper, we take a different approach. We model an object as a single point --- the center point of its bounding box. Our detector uses keypoint estimation to find center points and regresses to all other object properties, such as size, 3D location, orientation, and even pose. Our center point based approach, CenterNet, is end-to-end differentiable, simpler, faster, and more accurate than corresponding bounding box based detectors. CenterNet achieves the best speed-accuracy trade-off on the MS COCO dataset, with 28.1 AP at 142 FPS, 37.4 AP at 52 FPS, and 45.1 AP with multi-scale testing at 1.4 FPS. We use the same approach to estimate 3D bounding boxes on the KITTI benchmark and human pose on the COCO keypoint dataset. Our method performs competitively with sophisticated multi-stage methods and runs in real-time.
Object detection powers many vision tasks like instance segmentation [21, 32, 7], pose estimation [3, 15, 39], tracking [27, 24], and action recognition. It has down-stream applications in surveillance, autonomous driving, and visual question answering. Current object detectors represent each object through an axis-aligned bounding box that tightly encompasses the object [19, 18, 46, 43, 33]. They then reduce object detection to image classification of an extensive number of potential object bounding boxes. For each bounding box, the classifier determines if the image content is a specific object or background. One-stage detectors [43, 33] slide a complex arrangement of possible bounding boxes, called anchors, over the image and classify them directly without specifying the box content. Two-stage detectors [19, 18, 46] recompute image features for each potential box, then classify those features. Post-processing, namely non-maxima suppression, then removes duplicated detections for the same instance by computing bounding box IoU. This post-processing is hard to differentiate and train, hence most current detectors are not end-to-end trainable. Nonetheless, over the past five years, this idea has achieved good empirical success [21, 56, 12, 47, 25, 26, 35, 63, 48, 31, 62]. Sliding window based object detectors are however a bit wasteful, as they need to enumerate all possible object locations and dimensions.
In this paper, we provide a much simpler and more efficient alternative. We represent objects by a single point at their bounding box center (see Figure 2). Other properties, such as object size, dimension, 3D extent, orientation, and pose are then regressed directly from image features at the center location. Object detection is then a standard keypoint estimation problem [3, 39, 60]. We simply feed the input image to a fully convolutional network [37, 40]
that generates a heatmap. Peaks in this heatmap correspond to object centers. Image features at each peak predict the object's bounding box height and width. The model trains using standard dense supervised learning [39, 60]. Inference is a single network forward-pass, without non-maximal suppression for post-processing.
Our method is general and can be extended to other tasks with minor effort. We provide experiments on 3D object detection and multi-person human pose estimation, by predicting additional outputs at each center point (see Figure 4). For 3D bounding box estimation, we regress to the object's absolute depth, 3D bounding box dimensions, and object orientation. For human pose estimation, we consider the 2D joint locations as offsets from the center and directly regress to them at the center point location.
The simplicity of our method, CenterNet, allows it to run at a very high speed (Figure 1). With a simple ResNet-18 and up-convolutional layers, our network runs at 142 FPS with 28.1 COCO bounding box AP. With a carefully designed keypoint detection network, DLA-34, our network achieves 37.4 COCO AP at 52 FPS. Equipped with the state-of-the-art keypoint estimation network, Hourglass-104 [30, 40], and multi-scale testing, our network achieves 45.1 COCO AP at 1.4 FPS. On 3D bounding box estimation and human pose estimation, we perform competitively with the state-of-the-art at a higher inference speed. Code is available at https://github.com/xingyizhou/CenterNet.
One of the first successful deep object detectors, RCNN, enumerates object locations from a large set of region candidates, crops them, and classifies each using a deep network. Fast-RCNN crops image features instead, to save computation. However, both methods rely on slow low-level region proposal methods.
Faster RCNN generates region proposals within the detection network. It samples fixed-shape bounding boxes (anchors) around a low-resolution image grid and classifies each into "foreground or not". An anchor is labeled foreground with a >0.7 overlap with any ground truth object, background with a <0.3 overlap, or ignored otherwise. Each generated region proposal is again classified. Changing the proposal classifier to a multi-class classification forms the basis of one-stage detectors. Several improvements to one-stage detectors include anchor shape priors [44, 45], different feature resolution, and loss re-weighting among different samples.
Our approach is closely related to anchor-based one-stage approaches [43, 36, 33]. A center point can be seen as a single shape-agnostic anchor (see Figure 3). However, there are a few important differences. First, our CenterNet assigns the "anchor" based solely on location, not box overlap. We have no manual thresholds for foreground and background classification. Second, we only have one positive "anchor" per object, and hence do not need Non-Maximum Suppression (NMS). We simply extract local peaks in the keypoint heatmap [39, 4]. Third, CenterNet uses a larger output resolution (output stride of 4) compared to traditional object detectors [22, 21] (output stride of 16). This eliminates the need for multiple anchors.
We are not the first to use keypoint estimation for object detection. CornerNet  detects two bounding box corners as keypoints, while ExtremeNet  detects the top-, left-, bottom-, right-most, and center points of all objects. Both these methods build on the same robust keypoint estimation network as our CenterNet. However, they require a combinatorial grouping stage after keypoint detection, which significantly slows down each algorithm. Our CenterNet, on the other hand, simply extracts a single center point per object without the need for grouping or post-processing.
3D bounding box estimation powers autonomous driving . Deep3Dbox  uses a slow-RCNN  style framework, by first detecting 2D objects  and then feeding each object into a 3D estimation network. 3D RCNN  adds an additional head to Faster-RCNN  followed by a 3D projection. Deep Manta  uses a coarse-to-fine Faster-RCNN  trained on many tasks. Our method is similar to a one-stage version of Deep3Dbox  or 3DRCNN . As such, CenterNet is much simpler and faster than competing methods.
Let $I \in \mathcal{R}^{W \times H \times 3}$ be an input image of width $W$ and height $H$. Our aim is to produce a keypoint heatmap $\hat{Y} \in [0, 1]^{\frac{W}{R} \times \frac{H}{R} \times C}$, where $R$ is the output stride and $C$ is the number of keypoint types. Keypoint types include $C = 17$ human joints in human pose estimation [4, 55], or $C = 80$ object categories in object detection [30, 61]. We use the default output stride of $R = 4$ in the literature [40, 4, 42]. The output stride downsamples the output prediction by a factor $R$. A prediction $\hat{Y}_{x,y,c} = 1$ corresponds to a detected keypoint, while $\hat{Y}_{x,y,c} = 0$ is background. We use several different fully-convolutional encoder-decoder networks to predict $\hat{Y}$ from an image $I$: a stacked hourglass network [40, 30], up-convolutional residual networks (ResNet) [22, 55], and deep layer aggregation (DLA).
We train the keypoint prediction network following Law and Deng. For each ground truth keypoint $p \in \mathcal{R}^2$ of class $c$, we compute a low-resolution equivalent $\tilde{p} = \lfloor \frac{p}{R} \rfloor$. We then splat all ground truth keypoints onto a heatmap $Y \in [0, 1]^{\frac{W}{R} \times \frac{H}{R} \times C}$ using a Gaussian kernel $Y_{xyc} = \exp\left(-\frac{(x - \tilde{p}_x)^2 + (y - \tilde{p}_y)^2}{2\sigma_p^2}\right)$, where $\sigma_p$ is an object size-adaptive standard deviation. If two Gaussians of the same class overlap, we take the element-wise maximum. The training objective is a penalty-reduced pixel-wise logistic regression with focal loss:
$L_k = \frac{-1}{N} \sum_{xyc} \begin{cases} (1 - \hat{Y}_{xyc})^\alpha \log(\hat{Y}_{xyc}) & \text{if } Y_{xyc} = 1 \\ (1 - Y_{xyc})^\beta (\hat{Y}_{xyc})^\alpha \log(1 - \hat{Y}_{xyc}) & \text{otherwise,} \end{cases}$
where $\alpha$ and $\beta$ are hyper-parameters of the focal loss, and $N$ is the number of keypoints in image $I$. The normalization by $N$ is chosen so as to normalize all positive focal loss instances to 1. We use $\alpha = 2$ and $\beta = 4$ in all our experiments, following Law and Deng.
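The Gaussian splat and the penalty-reduced focal loss above can be sketched in plain Python. This is a minimal illustration on nested lists, not the paper's GPU implementation; `splat_gaussian` and `center_focal_loss` are hypothetical helper names:

```python
import math

def splat_gaussian(heatmap, center, sigma):
    """Render one ground-truth keypoint onto a 2D heatmap (list of rows)
    with a Gaussian kernel, taking the element-wise maximum where
    Gaussians overlap."""
    cx, cy = center
    for y, row in enumerate(heatmap):
        for x in range(len(row)):
            g = math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
            row[x] = max(row[x], g)
    return heatmap

def center_focal_loss(pred, gt, alpha=2, beta=4):
    """Penalty-reduced pixel-wise focal loss: positives (gt == 1) use the
    standard focal term; negatives are down-weighted by (1 - gt)^beta near
    a splatted Gaussian. Normalized by the number of positives."""
    loss, num_pos = 0.0, 0
    for prow, grow in zip(pred, gt):
        for p, g in zip(prow, grow):
            if g == 1:
                num_pos += 1
                loss -= (1 - p) ** alpha * math.log(p)
            else:
                loss -= (1 - g) ** beta * p ** alpha * math.log(1 - p)
    return loss / max(num_pos, 1)
```

A confident, well-placed prediction should incur a lower loss than a uniform one, which is a quick way to check the sign conventions.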
To recover the discretization error caused by the output stride, we additionally predict a local offset $\hat{O} \in \mathcal{R}^{\frac{W}{R} \times \frac{H}{R} \times 2}$ for each center point. All classes share the same offset prediction. The offset is trained with an L1 loss
$L_{off} = \frac{1}{N} \sum_{p} \left| \hat{O}_{\tilde{p}} - \left(\frac{p}{R} - \tilde{p}\right) \right|.$
The supervision acts only at keypoint locations $\tilde{p}$; all other locations are ignored.
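The offset target is simply the fractional part lost when dividing a keypoint by the stride. A one-line sketch (`offset_target` is a hypothetical helper name):

```python
def offset_target(p, stride=4):
    """Discretization-error target: the network should predict
    p/R minus the integer low-resolution keypoint floor(p/R)."""
    px, py = p
    return (px / stride - px // stride, py / stride - py // stride)
```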
In the next section, we will show how to extend this keypoint estimator to a general purpose object detector.
Let $(x_1^{(k)}, y_1^{(k)}, x_2^{(k)}, y_2^{(k)})$ be the bounding box of object $k$ with category $c_k$. Its center point lies at $p_k = \left(\frac{x_1^{(k)} + x_2^{(k)}}{2}, \frac{y_1^{(k)} + y_2^{(k)}}{2}\right)$. We use our keypoint estimator $\hat{Y}$ to predict all center points. In addition, we regress to the object size $s_k = (x_2^{(k)} - x_1^{(k)}, y_2^{(k)} - y_1^{(k)})$ for each object $k$. To limit the computational burden, we use a single size prediction $\hat{S} \in \mathcal{R}^{\frac{W}{R} \times \frac{H}{R} \times 2}$ for all object categories. We use an L1 loss at the center point, similar to Objective 2:
$L_{size} = \frac{1}{N} \sum_{k=1}^{N} \left| \hat{S}_{p_k} - s_k \right|.$
We do not normalize the scale and directly use the raw pixel coordinates. We instead scale the loss by a constant $\lambda_{size}$. The overall training objective is
$L_{det} = L_k + \lambda_{size} L_{size} + \lambda_{off} L_{off}.$
We set $\lambda_{size} = 0.1$ and $\lambda_{off} = 1$ in all our experiments unless specified otherwise. We use a single network to predict the keypoints $\hat{Y}$, offset $\hat{O}$, and size $\hat{S}$. The network predicts a total of $C + 4$ outputs at each location. All outputs share a common fully-convolutional backbone network. For each modality, the features of the backbone are then passed through a separate $3 \times 3$ convolution, ReLU, and another $1 \times 1$ convolution. Figure 4 shows an overview of the network output. Section 5 and the supplementary material contain additional architectural details.
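The overall objective above reduces to a one-line combination of the three per-head losses, with the stated defaults. A minimal sketch (`detection_loss` is a hypothetical helper; `l_k`, `l_size`, `l_off` are assumed already-computed scalars):

```python
def detection_loss(l_k, l_size, l_off, lam_size=0.1, lam_off=1.0):
    """Overall objective L_det = L_k + lambda_size * L_size + lambda_off * L_off,
    with the paper's defaults lambda_size = 0.1 and lambda_off = 1."""
    return l_k + lam_size * l_size + lam_off * l_off
```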
At inference time, we first extract the peaks in the heatmap for each category independently. We detect all responses whose value is greater or equal to its 8-connected neighbors and keep the top 100 peaks. Let $\hat{\mathcal{P}}_c = \{(\hat{x}_i, \hat{y}_i)\}_{i=1}^{n}$ be the set of $n$ detected center points of class $c$. Each keypoint location is given by integer coordinates $(x_i, y_i)$. We use the keypoint values $\hat{Y}_{x_i y_i c}$ as a measure of detection confidence, and produce a bounding box at location
$(\hat{x}_i + \delta\hat{x}_i - \hat{w}_i/2,\ \hat{y}_i + \delta\hat{y}_i - \hat{h}_i/2,\ \hat{x}_i + \delta\hat{x}_i + \hat{w}_i/2,\ \hat{y}_i + \delta\hat{y}_i + \hat{h}_i/2),$
where $(\delta\hat{x}_i, \delta\hat{y}_i) = \hat{O}_{\hat{x}_i, \hat{y}_i}$ is the offset prediction and $(\hat{w}_i, \hat{h}_i) = \hat{S}_{\hat{x}_i, \hat{y}_i}$ is the size prediction. All outputs are produced directly from the keypoint estimation without the need for IoU-based non-maxima suppression (NMS) or other post-processing. The peak keypoint extraction serves as a sufficient NMS alternative and can be implemented efficiently on device using a $3 \times 3$ max pooling operation.
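The NMS-free decoding step can be sketched in plain Python for a single category. This is an illustration on nested lists rather than the paper's max-pool trick on GPU tensors; `decode_detections` is a hypothetical helper name:

```python
def decode_detections(heatmap, offsets, sizes, k=100):
    """Keep heatmap responses >= all 8-connected neighbours (the effect
    of a 3x3 max-pool), take the top k, and turn each peak into a box
    using the offset and size heads.
    offsets[y][x] = (dx, dy); sizes[y][x] = (w, h)."""
    h, w = len(heatmap), len(heatmap[0])
    peaks = []
    for y in range(h):
        for x in range(w):
            v = heatmap[y][x]
            neigh = max(heatmap[ny][nx]
                        for ny in range(max(0, y - 1), min(h, y + 2))
                        for nx in range(max(0, x - 1), min(w, x + 2)))
            if v >= neigh and v > 0:
                peaks.append((v, x, y))
    boxes = []
    for score, x, y in sorted(peaks, reverse=True)[:k]:
        dx, dy = offsets[y][x]
        bw, bh = sizes[y][x]
        cx, cy = x + dx, y + dy  # sub-pixel center from the offset head
        boxes.append((cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2, score))
    return boxes
```

Because each object yields exactly one positive peak, no pairwise IoU suppression is needed afterwards.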
3D detection estimates a three-dimensional bounding box per object and requires three additional attributes per center point: depth, 3D dimension, and orientation. We add a separate head for each of them. The depth $d$ is a single scalar per center point. However, depth is difficult to regress to directly. We instead use the output transformation of Eigen et al. and compute $d = 1/\sigma(\hat{d}) - 1$, where $\sigma$ is the sigmoid function. We compute the depth as an additional output channel $\hat{D} \in [0, 1]^{\frac{W}{R} \times \frac{H}{R}}$ of our keypoint estimator. It again uses two convolutional layers separated by a ReLU. Unlike the previous modalities, it uses the inverse sigmoidal transformation at the output layer. We train the depth estimator using an L1 loss in the original depth domain, after the sigmoidal transformation.
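The Eigen-style output transform above is a one-liner. A minimal sketch (`decode_depth` is a hypothetical helper name):

```python
import math

def decode_depth(d_hat):
    """Recover absolute depth from the raw head output d_hat via
    d = 1 / sigmoid(d_hat) - 1, keeping the raw output in a
    well-conditioned range while depth stays non-negative."""
    sig = 1.0 / (1.0 + math.exp(-d_hat))
    return 1.0 / sig - 1.0
```

Note that algebraically $1/\sigma(\hat{d}) - 1 = e^{-\hat{d}}$, so a raw output of 0 corresponds to a depth of 1.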
The 3D dimensions of an object are three scalars. We directly regress to their absolute values in meters using a separate head and an L1 loss.
Orientation is a single scalar by default. However, it can be hard to regress to. We follow Mousavian et al. and represent the orientation as two bins with in-bin regression. Specifically, the orientation is encoded using 8 scalars, with 4 scalars for each bin. For one bin, two scalars are used for softmax classification and the remaining two regress to an angle within the bin. Please see the supplementary material for details about these losses.
Human pose estimation aims to estimate $k$ 2D human joint locations for every human instance in the image ($k = 17$ for COCO). We consider the pose as a $k \times 2$-dimensional property of the center point, and parametrize each keypoint by an offset to the center point. We directly regress to the joint offsets (in pixels) with an L1 loss. We ignore invisible keypoints by masking the loss. This results in a regression-based one-stage multi-person human pose estimator, similar to the slow-RCNN style counterparts of Toshev et al. and Sun et al.
To refine the keypoints, we further estimate $k$ human joint heatmaps using standard bottom-up multi-human pose estimation [4, 39, 41]. We train the human joint heatmap with a focal loss and local pixel offsets analogous to the center detection discussed in Section 3.
We then snap our initial predictions to the closest detected keypoint on this heatmap. Here, our center offset acts as a grouping cue, to assign individual keypoint detections to their closest person instance. Specifically, for each detected center point, we first regress to all joint locations. We also extract all keypoint locations with a confidence above a fixed threshold for each joint type from the corresponding heatmap. We then assign each regressed location to its closest detected keypoint, considering only joint detections within the bounding box of the detected object.
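The snapping step above amounts to a nearest-neighbour assignment per joint type. A minimal sketch on plain tuples (`snap_joints` is a hypothetical helper name; the bounding-box filter is assumed done upstream):

```python
def snap_joints(regressed, candidates):
    """Snap each center-regressed joint to the closest detected joint of
    the same type; the regression acts as the grouping cue.
    regressed: one (x, y) per joint type.
    candidates: a list of detected (x, y) locations per joint type
    (possibly empty, in which case the regression is kept)."""
    snapped = []
    for (rx, ry), cands in zip(regressed, candidates):
        if cands:
            snapped.append(min(cands,
                               key=lambda c: (c[0] - rx) ** 2 + (c[1] - ry) ** 2))
        else:
            snapped.append((rx, ry))  # fall back to the regressed location
    return snapped
```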
We experiment with 4 architectures: ResNet-18, ResNet-101 , DLA-34 , and Hourglass-104 . We modify both ResNets and DLA-34 using deformable convolution layers  and use the Hourglass network as is.
The stacked Hourglass Network [40, 30] downsamples the input by 4x, followed by two sequential hourglass modules. Each hourglass module is a symmetric 5-layer down- and up-convolutional network with skip connections. This network is quite large, but generally yields the best keypoint estimation performance.
Xiao et al. augment a standard residual network with three up-convolutional networks to allow for a higher-resolution output (output stride 4). We first change the channels of the three upsampling layers to 256, 128, and 64, respectively, to save computation. We then add one 3x3 deformable convolutional layer before each up-convolution with channels 256, 128, and 64, respectively. The up-convolutional kernels are initialized as bilinear interpolation. See the supplement for a detailed architecture diagram.
Deep Layer Aggregation (DLA) is an image classification network with hierarchical skip connections. We utilize the fully convolutional upsampling version of DLA for dense prediction, which uses iterative deep aggregation to increase feature map resolution symmetrically. We augment the skip connections with deformable convolution from lower layers to the output. Specifically, we replace the original convolution with 3x3 deformable convolution at every upsampling layer. See the supplement for a detailed architecture diagram.
We add one 3x3 convolutional layer with 256 channels before each output head. A final 1x1 convolution then produces the desired output. We provide more details in the supplementary material.
We train on an input resolution of 512x512. This yields an output resolution of 128x128 for all the models. We use random flip, random scaling (between 0.6 and 1.3), cropping, and color jittering as data augmentation, and use Adam to optimize the overall objective. We use no augmentation to train the 3D estimation branch, as cropping or scaling changes the 3D measurements. For the residual networks and DLA-34, we train with a batch-size of 128 (on 8 GPUs) and learning rate 5e-4 for 140 epochs, with the learning rate dropped 10x at 90 and 120 epochs, respectively. For Hourglass-104, we follow ExtremeNet and use batch-size 29 (on 5 GPUs, with master GPU batch-size 4) and learning rate 2.5e-4 for 50 epochs, with the learning rate dropped 10x at the 40th epoch. For detection, we fine-tune the Hourglass-104 from ExtremeNet to save computation. The down-sampling layers of ResNet-101 and DLA-34 are initialized with ImageNet pretraining and the up-sampling layers are randomly initialized. ResNet-101 and DLA-34 train in 2.5 days on 8 TITAN-V GPUs, while Hourglass-104 requires 5 days.
We use three levels of test augmentation: no augmentation, flip augmentation, and flip plus multi-scale (0.5, 0.75, 1, 1.25, 1.5). For flip, we average the network outputs before decoding bounding boxes. For multi-scale, we use NMS to merge results. These augmentations yield different speed-accuracy trade-offs, as shown in the next section.
Comparison on COCO test-dev (values reported as single-scale / multi-scale testing):

| Method | Backbone | FPS | AP | AP50 | AP75 | APS | APM | APL |
|---|---|---|---|---|---|---|---|---|
| RefineDet | ResNet-101 | - | 36.4 / 41.8 | 57.5 / 62.9 | 39.5 / 45.7 | 16.6 / 25.6 | 39.9 / 45.1 | 51.4 / 54.1 |
| CornerNet | Hourglass-104 | 4.1 | 40.5 / 42.1 | 56.5 / 57.8 | 43.1 / 45.3 | 19.4 / 20.8 | 42.7 / 44.8 | 53.9 / 56.7 |
| ExtremeNet | Hourglass-104 | 3.1 | 40.2 / 43.7 | 55.5 / 60.5 | 43.2 / 47.0 | 20.4 / 24.1 | 43.2 / 46.9 | 53.1 / 57.6 |
| FSAF | ResNeXt-101 | 2.7 | 42.9 / 44.6 | 63.8 / 65.2 | 46.3 / 48.6 | 26.6 / 29.7 | 46.2 / 47.1 | 52.7 / 54.6 |
| CenterNet-DLA | DLA-34 | 28 | 39.2 / 41.6 | 57.1 / 60.3 | 42.8 / 45.1 | 19.9 / 21.5 | 43.0 / 43.9 | 51.4 / 56.0 |
| CenterNet-HG | Hourglass-104 | 7.8 | 42.1 / 45.1 | 61.1 / 63.9 | 45.9 / 49.3 | 24.1 / 26.6 | 45.5 / 47.1 | 52.8 / 57.7 |
We evaluate our object detection performance on the MS COCO dataset, which contains 118k training images (train2017), 5k validation images (val2017) and 20k hold-out testing images (test-dev). We report average precision over all IOU thresholds (AP), and AP at IOU thresholds 0.5 (AP50) and 0.75 (AP75). The supplement contains additional experiments on PascalVOC.
Table 1 compares CenterNet with other real-time detectors. The running time is tested on our local machine, with an Intel Core i7-8086K CPU, Titan Xp GPU, PyTorch 0.4.1, CUDA 9.0, and CUDNN 7.1. We download code and pre-trained models (https://github.com/facebookresearch/Detectron, https://github.com/pjreddie/darknet) to test the run time for each model on the same machine.
Hourglass-104 achieves the best accuracy at a relatively good speed, with a 42.1 AP at 7.8 FPS. On this backbone, CenterNet outperforms CornerNet (40.5 AP at 4.1 FPS) and ExtremeNet (40.2 AP at 3.1 FPS) in both speed and accuracy. The run time improvement comes from fewer output heads and a simpler box decoding scheme. Better accuracy indicates that center points are easier to detect than corners or extreme points.
Using ResNet-101, we outperform RetinaNet with the same network backbone. We only use deformable convolutions in the upsampling layers, which does not affect RetinaNet. We are more than twice as fast at the same accuracy (CenterNet 34.8 AP at 45 FPS (input 512x512) vs. RetinaNet 34.4 AP at 18 FPS (input 500x800)). Our fastest ResNet-18 model also achieves a respectable performance of 28.1 COCO AP at 142 FPS.
DLA-34 gives the best speed/accuracy trade-off. It runs at 52 FPS with 37.4 AP. This is more than twice as fast as YOLOv3 and 4.4 AP more accurate. With flip testing, our model is still faster than YOLOv3 and achieves accuracy levels of Faster-RCNN-FPN (CenterNet 39.2 AP at 28 FPS vs. Faster-RCNN 39.8 AP at 11 FPS).
We compare with other state-of-the-art detectors on COCO test-dev in Table 2. With multi-scale evaluation, CenterNet with Hourglass-104 achieves an AP of 45.1, outperforming all existing one-stage detectors. Sophisticated two-stage detectors [63, 48, 35, 31] are more accurate, but also slower. There is no significant difference between CenterNet and sliding window detectors for different object sizes or IoU thresholds. CenterNet behaves like a regular detector, just faster.
In unlucky circumstances, two different objects might share the same center, if they perfectly align. In this scenario, CenterNet would only detect one of them. We start by studying how often this happens in practice and put it in relation to missing detections of competing methods.
In the COCO training set, there are 614 pairs of objects that collide onto the same center point at stride 4. There are 860001 objects in total, hence CenterNet is unable to predict fewer than 0.1% of objects due to collisions in center points. This is much less than the ~2% that slow- or fast-RCNN miss due to imperfect region proposals, and fewer than anchor-based methods miss due to insufficient anchor placement (20.0% for Faster-RCNN with 15 anchors at a 0.5 IOU threshold). In addition, 715 pairs of objects have bounding box IoU > 0.7 and would be assigned to two anchors, hence a center-based assignment causes fewer collisions.
To verify that IoU based NMS is not needed for CenterNet,
we ran it as a post-processing step on our predictions.
For DLA-34 (flip-test), the AP improves from 39.2% to 39.7%.
For Hourglass-104, the AP stays at 42.2%.
Given the minor impact, we do not use it.
Next, we ablate the new hyperparameters of our model. All the experiments are done on DLA-34.
During training, we fix the input resolution to 512x512. During testing, we follow CornerNet to keep the original image resolution and zero-pad the input to the maximum stride of the network. For ResNet and DLA, we pad the image with up to 32 pixels; for HourglassNet, we use 128 pixels. As shown in Table 4(a), keeping the original resolution is slightly better than fixing the test resolution. Training and testing at a lower resolution (384x384) runs 1.7 times faster but drops 3 AP.
Our ablations show that L1 is considerably better than Smooth L1 for size regression. L1 yields better accuracy at fine scale, to which the COCO evaluation metric is sensitive. This is independently observed in keypoint regression [49, 50].
We analyze the sensitivity of our approach to the loss weight $\lambda_{size}$. Table 4(b) shows that $\lambda_{size} = 0.1$ gives a good result. For larger values, the AP degrades significantly, since the scale of the size loss ranges from 0 to the output size $w/R$ or $h/R$, instead of 0 to 1. However, the performance does not degrade significantly for lower weights.
By default, we train the keypoint estimation network for 140 epochs with a learning rate drop at 90 epochs. If we double the training epochs before dropping the learning rate, the performance further increases by 1.1 AP (Table 4(d)), at the cost of a much longer training schedule. To save computational resources (and polar bears), we use 140 epochs in ablation experiments, but stick with 230 epochs for DLA when comparing to other methods.
Finally, we tried a multiple “anchor” version of CenterNet by regressing to more than one object size. The experiments did not yield any success. See supplement.
We perform 3D bounding box estimation experiments on the KITTI dataset, which contains carefully annotated 3D bounding boxes for vehicles in a driving scenario. KITTI contains 7481 training images and we follow the standard training and validation splits in the literature [54, 10]. The evaluation metric is the average precision for cars at 11 recalls (0.0 to 1.0 with 0.1 increments) at IOU threshold 0.5, as in 2D object detection. We evaluate IOUs based on the 2D bounding box (AP), orientation (AOS), and bird's-eye-view bounding box (BEV AP). We keep the original image resolution and pad to 1280x384 for both training and testing. The training converges in 70 epochs, with the learning rate dropped at the 45th and 60th epoch, respectively. We use the DLA-34 backbone and set the loss weight for depth, orientation, and dimension to 1. All other hyper-parameters are the same as in the detection experiments.
Since the number of recall thresholds is quite small, the validation AP fluctuates by up to 10% AP. We thus train 5 models and report the average with standard deviation.
We compare with the slow-RCNN based Deep3DBox and the Faster-RCNN based method Mono3D, on their specific validation splits. As shown in Table 4, our method performs on par with its counterparts in AP and AOS and does slightly better in BEV. Our CenterNet is two orders of magnitude faster than both methods.
Finally, we evaluate CenterNet on human pose estimation in the MS COCO dataset . We evaluate keypoint AP, which is similar to bounding box AP but replaces the bounding box IoU with object keypoint similarity. We test and compare with other methods on COCO test-dev.
We experiment with DLA-34 and Hourglass-104, both fine-tuned from center point detection. DLA-34 converges in 320 epochs (about 3 days on 8 GPUs) and Hourglass-104 converges in 150 epochs (8 days on 5 GPUs). All additional loss weights are set to 1. All other hyper-parameters are the same as for object detection.
The results are shown in Table 5. Direct regression to keypoints performs reasonably, but not at state-of-the-art level. It struggles particularly in high IoU regimes. Projecting our output to the closest joint detection improves the results throughout, and performs competitively with state-of-the-art multi-person pose estimators [4, 39, 21, 41]. This verifies that CenterNet is general and easy to adapt to new tasks.
Figure 5 shows qualitative examples on all tasks.
In summary, we present a new representation for objects: as points. Our CenterNet object detector builds on successful keypoint estimation networks, finds object centers, and regresses to their size. The algorithm is simple, fast, accurate, and end-to-end differentiable without any NMS post-processing. The idea is general and has broad applications beyond simple two-dimensional detection. CenterNet can estimate a range of additional object properties, such as pose, 3D orientation, depth and extent, in one single forward pass. Our initial experiments are encouraging and open up a new direction for real-time object recognition and related tasks.
3d bounding box estimation using deep learning and geometry.In CVPR, 2017.
Deeppose: Human pose estimation via deep neural networks.In CVPR, 2014.
Subcategory-aware convolutional neural networks for object proposals and detection.In WACV, 2017.
See Figure 6 for diagrams of the architectures.
Our network outputs maps for depths $\hat{D} \in \mathcal{R}^{\frac{W}{R} \times \frac{H}{R}}$, 3D dimensions $\hat{\Gamma} \in \mathcal{R}^{\frac{W}{R} \times \frac{H}{R} \times 3}$, and orientation encoding $\hat{A} \in \mathcal{R}^{\frac{W}{R} \times \frac{H}{R} \times 8}$. For each object instance $k$, we extract the output values from the three output maps at the ground truth center point location: $\hat{d}_k \in \mathcal{R}$, $\hat{\gamma}_k \in \mathcal{R}^3$, $\hat{\alpha}_k \in \mathcal{R}^8$. The depth is trained with an L1 loss after converting the output to the absolute depth domain:
$L_{dep} = \frac{1}{N} \sum_{k=1}^{N} \left| \frac{1}{\sigma(\hat{d}_k)} - 1 - d_k \right|,$
where $d_k$ is the ground truth absolute depth (in meters). Similarly, the 3D dimension is trained with an L1 loss in absolute metric units:
$L_{dim} = \frac{1}{N} \sum_{k=1}^{N} \left| \hat{\gamma}_k - \gamma_k \right|,$
where $\gamma_k$ is the object height, width, and length in meters.
The orientation is a single scalar by default. Following Mousavian et al. [38, 24], we use an 8-scalar encoding to ease learning. The 8 scalars are divided into two groups, each for one angular bin. One bin is for angles in $B_1 = [-\frac{7\pi}{6}, \frac{\pi}{6}]$ and the other is for angles in $B_2 = [-\frac{\pi}{6}, \frac{7\pi}{6}]$. Thus we have 4 scalars for each bin. Within each bin, 2 of the scalars $\hat{b}_i \in \mathcal{R}^2$ are used for softmax classification (whether the orientation falls into bin $i$), and the remaining 2 scalars $\hat{a}_i \in \mathcal{R}^2$ are for the sine and cosine values of the in-bin offset (to the bin center $m_i$). That is, the classification is trained with softmax and the angular values are trained with an L1 loss:
$L_{ori} = \frac{1}{N} \sum_{k=1}^{N} \sum_{i=1}^{2} \left( \mathrm{softmax}(\hat{b}_i, c_i) + c_i \left| \hat{a}_i - a_i \right| \right),$
where $c_i = \mathbb{1}(\theta \in B_i)$, $a_i = (\sin(\theta - m_i), \cos(\theta - m_i))$, and $\mathbb{1}$ is the indicator function. The predicted orientation $\hat{\theta}$ is decoded from the 8-scalar encoding by
$\hat{\theta} = \arctan2(\hat{a}_{j1}, \hat{a}_{j2}) + m_j,$
where $j$ is the bin index with the larger classification score.
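The decoding rule above can be sketched as follows. The per-bin scalar layout (`[logit_in, logit_out, sin_offset, cos_offset]`) and the helper name `decode_orientation` are assumptions for illustration, not the paper's exact tensor layout:

```python
import math

def decode_orientation(enc, bin_centers=(-math.pi / 2, math.pi / 2)):
    """Decode an 8-scalar orientation encoding: pick the bin whose
    in-bin classification logit wins, then add the arctan2 of its
    (sin, cos) in-bin offset to that bin's center."""
    bins = (enc[:4], enc[4:])
    j = 0 if bins[0][0] - bins[0][1] >= bins[1][0] - bins[1][1] else 1
    return bin_centers[j] + math.atan2(bins[j][2], bins[j][3])
```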
We analyze the annotations of the COCO training set to show how often collision cases happen. The COCO training set (train2017) contains 118287 images and 860001 labeled objects in 80 categories. Let the $i$-th bounding box of category $c$ in image $k$ be $bb^{ki}$; its center after down-sampling by the stride $R = 4$ is $\tilde{p}^{ki} = \left(\lfloor \frac{x_1^{ki} + x_2^{ki}}{2R} \rfloor, \lfloor \frac{y_1^{ki} + y_2^{ki}}{2R} \rfloor\right)$. The number of center point collisions is calculated by counting, over all images and categories, the pairs $i \ne j$ with $\tilde{p}^{ki} = \tilde{p}^{kj}$:
$N_{center} = \sum_{k} \sum_{c} \sum_{i \ne j} \mathbb{1}\left(\tilde{p}^{ki} = \tilde{p}^{kj}\right).$
We get $N_{center} = 614$ on the dataset.
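The counting procedure above can be sketched in a few lines. This is an illustrative helper (hypothetical name `count_center_collisions`) that buckets centers and counts colliding pairs per bucket; it uses integer rounding of the center, a minor simplification of the exact floor order:

```python
from collections import Counter

def count_center_collisions(boxes, stride=4):
    """Count pairs of same-image, same-category boxes whose centers land
    on the same low-resolution pixel after down-sampling by `stride`.
    boxes: iterable of (image_id, category, x1, y1, x2, y2)."""
    centers = Counter(
        (img, cat, int((x1 + x2) / 2) // stride, int((y1 + y2) / 2) // stride)
        for img, cat, x1, y1, x2, y2 in boxes)
    # n centers on one pixel produce n*(n-1)/2 colliding pairs
    return sum(n * (n - 1) // 2 for n in centers.values())
```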
Similarly, we calculate the IoU based collisions by
$N_{IoU}^{@t} = \sum_{k} \sum_{c} \sum_{i \ne j} \mathbb{1}\left(IoU(bb^{ki}, bb^{kj}) > t\right).$
This gives $N_{IoU}^{@0.7} = 715$ pairs at IoU threshold 0.7.
RetinaNet assigns anchors to a ground truth bounding box if they have IoU > 0.5. In the case that a ground truth bounding box is not covered by any anchor with IoU > 0.5, the anchor with the largest IoU is assigned to it. We calculate how often this forced assignment happens. We use 15 anchors (5 sizes: 32, 64, 128, 256, 512, and 3 aspect ratios: 0.5, 1, 2, as in RetinaNet) at stride 16. For each image, after resizing its shorter edge to 800, we place these anchors at positions $\{(16i + 8, 16j + 8)\}$ for $i \le \lfloor W/16 \rfloor$ and $j \le \lfloor H/16 \rfloor$, where $W$ and $H$ are the image width and height (the smaller one equal to 800). This results in a set of anchors $A$. We calculate the number of forced assignments by
$N_{forced} = \sum_{k} \sum_{i} \mathbb{1}\left(\max_{a \in A} IoU(a, bb^{ki}) < 0.5\right).$
RetinaNet requires forced assignments most often for small objects, with progressively fewer for medium and large objects.
| Method | mAP | FPS |
|---|---|---|
| Faster RCNN | 76.4 | 5 |
| Faster RCNN* | 79.8 | 5 |
Pascal VOC is a popular small object detection dataset. We train on the VOC 2007 and VOC 2012 trainval sets, and test on the VOC 2007 test set. This comprises 16551 training images and 4952 testing images over 20 categories. The evaluation metric is mean average precision (mAP) at IOU threshold 0.5.
We experiment with our modified ResNet-18, ResNet-101, and DLA-34 (see main paper Section 5) at two training resolutions: 384x384 and 512x512. For all networks, we train 70 epochs with the learning rate dropped at 45 and 60 epochs, respectively. We use batch size 32 and learning rate 1.25e-4, following the linear learning rate rule. Training ResNet-101 and DLA-34 at 384x384 takes several hours on a single GPU; at 512x512, training takes the same time on two GPUs. Flip augmentation is used in testing. All other hyper-parameters are the same as in the COCO experiments. We do not use Hourglass-104 because it fails to converge in a reasonable time (2 days) when trained from scratch.
The results are shown in Table 6. Our best CenterNet-DLA model performs competitively with top-tier methods while keeping real-time speed.
| Setting | AP | AP50 | AP75 |
|---|---|---|---|
| w/ gt size | 41.9 | 56.6 | 45.4 |
| w/ gt heatmap | 54.2 | 82.6 | 58.1 |
| w/ gt heatmap+size | 83.1 | 97.9 | 90.1 |
| w/ gt hm.+size+offset | 99.5 | 99.7 | 99.6 |
We perform an error analysis by replacing each output head with its ground truth. For the center point heatmap, we use the rendered Gaussian ground truth heatmap. For the bounding box size, we use the nearest ground truth size for each detection.
The results in Table 7 show that improving the size map alone leads to a modest performance gain, while the center heatmap gains are much larger. If only the keypoint offset is not predicted, the maximum AP reaches 83.1. The entire pipeline on ground truth misses about 0.5% of objects, due to discretization and estimation errors in the Gaussian heatmap rendering.