With the rapid development of depth sensors and growing application demands in autonomous driving and robotics, point cloud-based 3D object detection has become increasingly popular. Many detection methods have been proposed, and they can be divided into two categories: point-based and tensor-based.
Point-based methods [19, 26, 2] perform 3D detection directly on raw point clouds to avoid losing geometric details. However, these architectures come with complex structures and significant computational overhead, making them unfavorable for real-time scenarios with large-scale point clouds.
Unlike point-based methods, tensor-based methods first transform point clouds into tensors such as 3D voxel grids or bird's-eye-view (BEV) images [28, 24, 22, 11, 23], then localize 3D objects with mature 2D detectors such as SSD and RetinaNet. Tensor-based methods take advantage of the compact and fast architectures of 2D detection and have achieved great success.
However, one problem remains in tensor-based methods: the performance of 2D detectors is restricted by the inconsistency between 2D and 3D data. Specifically, the distribution of point clouds is uneven, and the densest regions, together with their corresponding features (what we call dense features), which are important for localizing objects, gather in only a small part of the 3D space. A 2D detector, however, always extracts features evenly, so it has difficulty aggregating dense features for accurate detection.
One possible way to capture dense features is to enlarge the receptive field, which, however, may include more of the nearby objects and clutter. An alternative is to adopt deformable convolution, which tries to capture the most critical features by introducing an "offset" into the convolution layer. However, the offset in deformable convolution is itself learned from feature maps, which is sub-optimal because it ignores the inherent distribution of point clouds and thus can only be optimized implicitly.
Note that most points gather on the boundary of objects. To better utilize the dense features lying around the boundary, we propose a module called DENse Feature Indicator (DENFI) to capture dense features explicitly in a boundary-aware manner. It works in two steps: 1) DENFI utilizes a Dense Boundary Proposal Module (DBPM) to predict dense boundary information from the feature maps extracted by the backbone. 2) DENFI leverages DENFIConv to adaptively capture dense features from the backbone feature map under the guidance of the dense boundary information from DBPM. The refined features are then used for detection.
Extensive experiments on the challenging KITTI benchmark show that DENFI improves the single-stage 3D detector PointPillars by 2.71, 2.70, and 2.13 mAP on the easy, moderate, and hard difficulty levels. Moreover, DENFI is lightweight and allows the detectors to run at real-time speed.
It is worthwhile to highlight our contributions:
We point out the inconsistency of adopting 2D detection frameworks for 3D object detection, caused by the uneven distribution of point clouds, and show that it can be mitigated by adaptively capturing dense features from this uneven distribution in a boundary-aware manner.
We propose an efficient and universal module called DENFI, which is specially designed for 3D detection to help 3D detectors capture dense features.
We propose DENFIDet, which combines DENFI with PointPillars and achieves new state-of-the-art performance on the KITTI dataset while running at 34 FPS.
2 Related Work
Point-Based 3D Object Detection from Point Clouds Current point-based detectors rely heavily on two-stage frameworks to achieve decent performance. PointRCNN is a two-stage network that detects 3D objects directly from raw point clouds: it first performs point segmentation to localize foreground points and regresses proposals from them, then refines the proposals by gathering further local spatial features to produce the final detections. STD introduces spherical anchors at the first stage of PointRCNN to reduce the number of foreground points, and applies voxelization at the second stage for speedup, integrating the speed advantage of tensor-based methods into a point-based method. However, since this line of work processes raw point clouds directly, it is difficult to extend to large-scale point cloud scenarios requiring real-time speed.
Tensor-Based 3D Object Detection from Point Clouds Tensor-based detection first transforms the point cloud into a compact tensor and then applies off-the-shelf 2D detectors to it. Like 2D object detection, it can be divided into two categories: anchor-based and anchor-free detection. 1) Anchor-based detection relies on a set of predefined anchor boxes. VoxelNet is an end-to-end network unifying feature extraction and bounding box prediction in a single stage. SECOND explores sparse convolution [7, 6] to accelerate VoxelNet efficiently. PointPillars further learns features suitable for 2D convolution, making the network fast and accurate. 2) Anchor-free detection predicts the positions of bounding boxes directly, with no need to compute intersection-over-union (IoU) scores between anchor boxes and ground-truth boxes during training, which makes it simple and time-efficient and thus favorable for industrial application. PIXOR is an end-to-end anchor-free network designed for speed and simplicity, and HDNET further improves PIXOR by exploiting high-precision map information. Though simple and fast, anchor-free detection often lags behind anchor-based detection in accuracy.
Deformable-Convolution-Based Feature Extractors Deformable convolution introduces offsets to the regular grid sampling locations of standard convolution to enable more flexible feature extraction; the offsets are learnable parameters learned from the feature map. Instead of learning offsets implicitly from the feature map, Guided Anchoring proposes learning offsets from the shapes of learned anchors to align the anchors with the feature map. Different from these methods, our DENFI learns offsets from the predicted boundary, motivated by the characteristics of point clouds described in the introduction. Besides, we design an operator called depth-wise separable deformable convolution as an alternative to vanilla deformable convolution, which runs several times faster while delivering similar performance. We compare the different deformable-convolution-based feature extractors in Figure 2.
3 DENse Feature Indicator
To solve the problem depicted in the introduction, we propose our DENse Feature Indicator (DENFI). DENFI is a fully convolutional module that extracts dense features in a boundary-aware manner. As shown in Figure 3, DENFI consists of two parts: the dense boundary proposal module (DBPM), which predicts the boundaries of objects, and DENFIConv, which extracts dense features.
3.1 Dense Boundary Proposal Module
Since the point cloud is mostly distributed along the boundaries of objects, we resort to object boundaries to indicate dense features. We devise a simple but effective dense boundary proposal module (DBPM) for boundary proposal prediction, as shown in Figure 4. It takes the backbone feature map as its input and works in an anchor-free manner.
DBPM consists of two branches, which are the classification branch and the regression branch. The classification branch predicts the class score for each pixel of the feature map, and the regression branch performs boundary regression for each pixel of the feature map.
Classification Branch The classification branch consists of a single 1x1 convolution whose number of output channels equals the number of categories C. As a result, the classification branch produces a probability map of size H x W x C. Each pixel holds the predicted score (ranging from 0 to 1) for the corresponding category, and a larger value indicates higher confidence. We also define a ground-truth class for each pixel location.
For the definition of positive and negative samples, previous work proposed an "effective zone" for balanced sampling, which, however, only works on axis-aligned 2D bounding boxes. We extend it to rotated bounding boxes for 3D object detection in an anchor-free manner, as shown in Figure 4 (a Car example). First, we define a rotated ground-truth box from the bird's-eye-view as (cx, cy, w, l, theta), where (cx, cy) is the center of the object, (w, l) is its size, and theta is the heading angle, which lies within the range [-pi, pi). The positive area is defined as a shrunk version of the rotated ground-truth box, scaled by a positive scaling factor. Next, for the negative area, we define another shrunk version of the box with a negative scaling factor larger than the positive one; areas not included in this rotated box are defined as negative. Finally, areas that are neither positive nor negative are marked as ignore and are not considered during training.
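The effective-zone assignment above can be sketched in a few lines. This is a standalone illustration with our own function names; the box tuple follows the (cx, cy, w, l, theta) convention described above, and the pixel is rotated into the box frame before comparison:

```python
import numpy as np

def in_shrunk_rotated_box(px, py, box, sigma):
    """Check whether BEV pixel (px, py) lies inside a shrunk version of a
    rotated box (cx, cy, w, l, theta). `sigma` is the scaling factor."""
    cx, cy, w, l, theta = box
    # Rotate the pixel into the box's local frame.
    dx, dy = px - cx, py - cy
    c, s = np.cos(theta), np.sin(theta)
    lx = c * dx + s * dy   # coordinate along the box length
    ly = -s * dx + c * dy  # coordinate along the box width
    return abs(lx) <= sigma * l / 2 and abs(ly) <= sigma * w / 2

def label_pixel(px, py, box, sigma_pos, sigma_neg):
    """Return 'positive', 'ignore', or 'negative' per the effective-zone rule
    (sigma_pos < sigma_neg; between the two shrunk boxes is ignored)."""
    if in_shrunk_rotated_box(px, py, box, sigma_pos):
        return "positive"
    if in_shrunk_rotated_box(px, py, box, sigma_neg):
        return "ignore"
    return "negative"
```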
We define the pixels in the positive area as positive pixels, and do the same for the negative and ignore areas; the ground-truth class labels of negative and ignore pixels are set accordingly. Due to the imbalance between negative and positive pixels, we adopt the focal loss. The overall classification loss is the sum of the focal loss over all non-ignored pixels, normalized by the total number of positive pixels.
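For reference, the focal loss used for the classification branch can be written compactly. This is the standard binary form; the alpha and gamma values below are the common defaults, used here only for illustration:

```python
import numpy as np

def focal_loss(p, target, alpha=0.25, gamma=2.0):
    """Binary focal loss for one pixel. `p` is the predicted score in (0, 1),
    `target` is 1 for a positive pixel and 0 for a negative one. The
    (1 - pt)^gamma factor down-weights well-classified pixels."""
    pt = p if target == 1 else 1.0 - p
    a = alpha if target == 1 else 1.0 - alpha
    return -a * (1.0 - pt) ** gamma * np.log(pt)
```

A well-classified positive (p = 0.9) contributes far less loss than a misclassified one (p = 0.1), which is what keeps the many easy negatives from dominating training.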
Regression Branch The regression branch also consists of a single 1x1 convolution. It outputs a regression map; we next elaborate on the meaning of each channel.
A ground-truth 3D object from the bird's-eye-view is usually encoded by its center, size, and heading angle. However, this encoding does not directly represent the relationship between each point inside the object and its corresponding boundary. We therefore propose encoding ground-truth 3D objects as a boundary vector together with the heading angle, where the boundary vector represents the distances from a positive pixel to the four sides of the corresponding bounding box (left, top, right, bottom), as illustrated in Figure 4 (a Car example).
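The boundary-vector encoding can be sketched as follows. The function is our own illustration, assuming the (cx, cy, w, l, theta) box convention from above; the pixel is rotated into the box frame before measuring distances to the four sides:

```python
import numpy as np

def encode_boundary(px, py, box):
    """Distances (left, top, right, bottom) from pixel (px, py) to the four
    sides of a rotated BEV box (cx, cy, w, l, theta), measured in the box
    frame. left + right recovers l, and top + bottom recovers w."""
    cx, cy, w, l, theta = box
    dx, dy = px - cx, py - cy
    c, s = np.cos(theta), np.sin(theta)
    lx = c * dx + s * dy   # along the length axis
    ly = -s * dx + c * dy  # along the width axis
    left, right = lx + l / 2, l / 2 - lx
    bottom, top = ly + w / 2, w / 2 - ly
    return np.array([left, top, right, bottom])
```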
The total regression loss is computed as the average loss over positive pixels. For each positive pixel, the loss consists of two parts: an IoU loss for the boundary vector and a bin-based rotation loss for the heading angle. We add up the two weighted parts to form the final regression loss.
Specifically, for the boundary vector, we formulate its regression target in Equation 1.
For the orientation loss, we observe that directly predicting the orientation is hard. We thus use a bin-based loss, which decomposes direct orientation regression into bin classification and residual regression within the corresponding bin. Specifically, we divide the orientation range of [-pi, pi) into bins and define the bin target of the orientation in Equation 3.
Then the residual target in the corresponding bin is defined in Equation 4.
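A minimal sketch of the bin-based orientation target (bin index plus in-bin residual, normalized by the bin size) and its inverse, assuming the orientation range [-pi, pi); the bin count here is an arbitrary example, not the paper's setting:

```python
import numpy as np

def orientation_target(theta, num_bins=12):
    """Decompose an orientation in [-pi, pi) into a bin index and an in-bin
    residual (normalized to [-0.5, 0.5) by the bin size)."""
    bin_size = 2 * np.pi / num_bins
    shifted = theta + np.pi                      # map to [0, 2*pi)
    bin_idx = int(shifted // bin_size)
    residual = (shifted - (bin_idx + 0.5) * bin_size) / bin_size
    return bin_idx, residual

def orientation_decode(bin_idx, residual, num_bins=12):
    """Inverse of orientation_target: bin center plus residual, mapped back."""
    bin_size = 2 * np.pi / num_bins
    return (bin_idx + 0.5 + residual) * bin_size - np.pi
```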
Finally, we define the overall regression loss in Equation 6.
where the two weights balance the bin classification loss and the residual regression loss; we simply set both to fixed values here.
To sum up, we define our training loss function for DBPM in Equation 7:
where the first term is the focal loss and the indicator function equals 1 when its condition is true and 0 otherwise. The regression output and regression target are as defined above, the normalizer denotes the number of positive pixels, and a weight balances the regression and classification losses.
Auto Scaling We also observe that it is difficult to directly regress the targets in an anchor-free manner because of their extensive range, which can be monitored by generating anchor-free detection results from the classification and regression branches of DBPM. Hence, we introduce a trainable scaling scalar to automatically adjust the scale of the object information: the regression output is multiplied by this scalar before computing the loss, which empirically improves overall performance.
Dense Boundary Proposal As shown in Figure 4, we acquire the dense boundary proposal from the regression output of the DBPM. The classification branch of DBPM is only used in the training stage as an auxiliary task to ease the optimization of the regression branch.
Since the regression output of DBPM is encoded as the boundary vector and orientation, we decode it back into the original box representation to form the dense boundary proposal, a tensor serving as a pixel-wise indication of the boundary location. Note that we do not predict information along the z-axis, because we focus on deformable sampling in the x-y dimensions. In this way, each pixel in the dense boundary proposal is aware of the position of the boundary.
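Decoding a pixel's boundary vector back into a rotated box is the inverse of the encoding; the function below is our own illustration, not the paper's implementation:

```python
import numpy as np

def decode_boundary(px, py, bvec, theta):
    """Recover the rotated BEV box (cx, cy, w, l, theta) from a pixel location
    (px, py) and its boundary vector (left, top, right, bottom)."""
    left, top, right, bottom = bvec
    l, w = left + right, top + bottom
    # Pixel offset from the box center, expressed in the box frame.
    lx = (left - right) / 2
    ly = (bottom - top) / 2
    # Rotate the offset back into the global frame and subtract from the pixel.
    c, s = np.cos(theta), np.sin(theta)
    cx = px - (c * lx - s * ly)
    cy = py - (s * lx + c * ly)
    return cx, cy, w, l, theta
```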
3.2 Boundary-Aware Dense Feature Capture
With the dense boundary proposal from DBPM, we aim to capture dense features in a boundary-aware manner.
Motivation Deformable convolution adjusts the sampling positions of a convolution layer dynamically by introducing an offset for each pixel of the feature map, thereby allowing us to guide the convolution layer where to focus. For a k x k convolution kernel, the offset map provides 2k^2 values per pixel, specifying the displacement of each convolution sampling point along the x and y directions. However, the original deformable convolution learns the offset map directly from the feature map, which is not accurate because the feature map does not explicitly provide the locations of the dense features. We therefore resort to explicit guidance to ease the optimization process, and demonstrate the significant advantage of explicit guidance in the ablation studies (see Table 2).
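The fractional sampling that deformable convolution performs at each offset location is plain bilinear interpolation, which can be sketched for a single-channel map as:

```python
import numpy as np

def bilinear_sample(fmap, y, x):
    """Sample a single-channel feature map at a fractional location (y, x),
    as deformable convolution does for each shifted sampling point.
    Coordinates outside the map are clamped to the border."""
    H, W = fmap.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    y0, x0 = max(y0, 0), max(x0, 0)
    wy, wx = y - np.floor(y), x - np.floor(x)
    return ((1 - wy) * (1 - wx) * fmap[y0, x0] + (1 - wy) * wx * fmap[y0, x1]
            + wy * (1 - wx) * fmap[y1, x0] + wy * wx * fmap[y1, x1])
```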
DENFIConv As shown in Figure 5, we devise an operator called DENFIConv to utilize the boundary information for capturing dense features effectively. Since the dense boundary proposal contains pixel-wise boundary information, we learn the offset map by stacking a 1x1 convolution on it. The offset map is then fed into a deformable convolution for explicit feature capture on the backbone feature map. Finally, the refined feature map is used for detection. We adopt one DENFIConv each for the regression branch and the classification branch of the detection head, as shown in Figure 3.
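Since a 1x1 convolution is just a per-pixel linear map, learning the offset map from the dense boundary proposal can be sketched as below. The 5-channel proposal and the 2 x 3 x 3 = 18 offset channels for a 3x3 deformable kernel are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def conv1x1(x, weight, bias):
    """A 1x1 convolution as a per-pixel linear map: x is (C_in, H, W),
    weight is (C_out, C_in), bias is (C_out,); output is (C_out, H, W)."""
    return np.tensordot(weight, x, axes=([1], [0])) + bias[:, None, None]

# Hypothetical shapes: a 5-channel dense boundary proposal mapped to the
# 18 offset channels that a 3x3 deformable convolution consumes.
rng = np.random.default_rng(0)
proposal = rng.standard_normal((5, 4, 4))
w = rng.standard_normal((18, 5)) * 0.01
b = np.zeros(18)
offsets = conv1x1(proposal, w, b)
```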
Depth-Wise Separable Deformable Convolution The original deformable convolution work adopts a 3x3 deformable convolution for its sizeable receptive field. However, because the backbone feature map is large, a 3x3 deformable convolution brings significant computational overhead. Inspired by the design of MobileNets, we devise a depth-wise separable deformable convolution, which decomposes a 3x3 deformable convolution into a 3x3 depth-wise convolution followed by a 1x1 deformable convolution. This design delivers similar performance while running about four times faster.
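The arithmetic behind the speedup can be checked with a rough multiply-accumulate count, counting only the convolution arithmetic and ignoring the offset sampling; the feature-map size below is a made-up example, and the theoretical ratio exceeds the observed four-fold speedup because sampling and memory traffic are ignored:

```python
def conv_macs(k, c_in, c_out, h, w):
    """Multiply-accumulates for a k x k convolution on an h x w feature map."""
    return k * k * c_in * c_out * h * w

C, H, W = 384, 248, 216                 # hypothetical backbone feature size
full = conv_macs(3, C, C, H, W)         # 3x3 deformable conv (conv part only)
# Depth-wise 3x3 (one input channel per filter) plus a 1x1 deformable conv.
separable = conv_macs(3, 1, C, H, W) + conv_macs(1, C, C, H, W)
ratio = full / separable                # about 9C / (9 + C), roughly 8.8 here
```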
3.3 DENFIDet
We take PointPillars as our 3D object detection framework in DENFIDet. In short, PointPillars consists of three parts: a Pillar Feature Net that learns a compact 2D tensor representation, a 2D convolutional backbone for high-level feature learning, and an SSD detection head for 3D box classification and regression. We insert DENFI directly between the backbone and the SSD detection head to form DENFIDet.
Joint Objective The loss function of DENFIDet consists of two parts: the loss of the anchor-based detection head of the original PointPillars, and the DBPM loss defined in Equation 7. We train the network with the multi-task loss defined in Equation 8, where a weight parameter balances the two tasks.
4 Experiments
4.1 Dataset and Evaluation Protocol
Dataset We train and evaluate DENFIDet on the challenging KITTI dataset, which contains 7,481 training samples and 7,518 testing samples covering three categories (Car, Pedestrian, and Cyclist). Following the setting in [28, 11], we divide the training data into a train/val split for experimental studies: the train split contains 3,712 samples and the val split contains 3,769 samples. For the test submission, we create a mini-val split of 785 samples and train our model on the remaining 6,696 samples.
Evaluation Protocol We evaluate 3D detectors on the KITTI bird's-eye-view (BEV) benchmark and report results under the new 40-point metric (officially adopted by the KITTI benchmark on October 8, 2019), which is the primary focus of the KITTI 3D object detection task. Objects in each category are divided into easy, moderate, and hard levels according to the heights of their 2D bounding boxes, their occlusion levels, and their truncation levels. We calculate average precision (AP) with a rotated IoU threshold of 0.7 for Car and 0.5 for Cyclist and Pedestrian at each difficulty level, and additionally report mean average precision (mAP) across all categories.
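The 40-point metric averages the interpolated precision over 40 equally spaced recall positions, which can be sketched as (a simplified version that takes an already-computed precision-recall curve):

```python
import numpy as np

def ap_40(recalls, precisions):
    """40-point interpolated AP: average the interpolated precision
    (max precision among points with recall >= r) over r = 1/40, ..., 1."""
    recalls = np.asarray(recalls)
    precisions = np.asarray(precisions)
    samples = np.linspace(1 / 40, 1.0, 40)
    interp = [precisions[recalls >= r].max() if (recalls >= r).any() else 0.0
              for r in samples]
    return float(np.mean(interp))
```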
4.2 Implementation Details
Network Architecture We adopt the same settings as PointPillars for the point cloud transformation, the anchors, and the corresponding anchor matching strategy; please refer to the original paper for the details. The focal loss in both DBPM and the detection head of PointPillars uses the same hyperparameters. For DBPM, we set one pair of positive and negative scaling factors for Car and Cyclist and another pair for Pedestrian, and fix the orientation bin number, which determines the orientation bin size. We also fix the balancing weight in the joint objective. Following PointPillars, we train one network for Car and one network for Pedestrian and Cyclist.
We implement the network in PyTorch and adopt the Adam optimizer with a mini-batch size of 2, training for 160 epochs in total. The learning rate starts at 0.0002 and decays by a factor of 0.8 every 15 epochs. Considering the limited size of the KITTI training set, data augmentation is crucial to alleviate overfitting, and we adopt the same strategy as PointPillars for a fair comparison. We first create a database containing all 3D ground-truth bounding boxes and the point clouds falling inside these boxes. For each training sample, we randomly select 15, 0, and 8 samples for Car, Pedestrian, and Cyclist, respectively, from the database and place them into the current point cloud. Next, we randomly disturb the ground-truth objects by rotation and translation (x, y, and z sampled independently). Finally, for the global point cloud, we apply a random mirroring flip along the x-axis, global rotation, global scaling, and global translation.
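As one concrete augmentation step, the random mirroring flip along the x-axis negates the y coordinates and heading angles. A minimal sketch, assuming points are stored as (x, y, z, ...) rows and boxes as (cx, cy, w, l, theta) rows:

```python
import numpy as np

def flip_along_x(points, boxes):
    """Mirror a point cloud and its BEV boxes across the x-axis: negate the
    y coordinate of every point and box center, and negate the heading."""
    points, boxes = points.copy(), boxes.copy()
    points[:, 1] *= -1     # point y
    boxes[:, 1] *= -1      # box center y
    boxes[:, 4] *= -1      # heading angle theta
    return points, boxes
```

Applying the flip twice recovers the original scene, which makes the transform easy to sanity-check.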
Table 3 (partial): BEV AP on the KITTI test split (E/M/H = easy/moderate/hard difficulty levels).

| Method | Modality | Time (ms) | mAP E/M/H | Car E/M/H | Pedestrian E/M/H | Cyclist E/M/H |
|---|---|---|---|---|---|---|
| MV3D | LiDAR & Img. | 360 | - | 86.62 / 78.93 / 69.80 | - | - |
| AVOD | LiDAR & Img. | 100 | 72.96 / 65.09 / 59.23 | 89.75 / 84.95 / 78.32 | 42.58 / 33.57 / 30.14 | 64.11 / 48.15 / 42.37 |
| AVOD-FPN | LiDAR & Img. | 100 | 72.96 / 64.09 / 59.23 | 90.99 / 84.82 / 79.62 | 58.49 / 50.32 / 46.98 | 69.39 / 57.12 / 51.09 |
| F-PointNet | LiDAR & Img. | 170 | 75.19 / 65.20 / 58.01 | 91.17 / 84.67 / 74.77 | 57.13 / 49.57 / 45.48 | 77.26 / 61.37 / 53.78 |
| IPOD | LiDAR & Img. | 200 | 76.24 / 64.60 / 58.92 | 89.64 / 84.62 / 79.96 | 60.88 / 49.79 / 45.43 | 78.19 / 59.40 / 51.38 |
| F-ConvNet | LiDAR & Img. | 470 | 77.57 / 67.89 / 60.16 | 91.51 / 85.84 / 76.11 | 57.04 / 48.96 / 44.33 | 84.16 / 68.88 / 60.05 |
| UberATG-MMF | LiDAR & Img. | 80 | - | 93.67 / 88.21 / 81.99 | - | - |
| Fast PointRCNN | LiDAR | 65 | - | 90.87 / 87.84 / 80.52 | - | - |
| ContFuse | LiDAR & Img. | 60 | - | 94.07 / 85.35 / 75.88 | - | - |
| HDNET | LiDAR & Map | 50 | - | 93.13 / 87.98 / 81.23 | - | - |
| 3D IoU Loss | LiDAR | 80 | - | 91.36 / 86.22 / 81.20 | - | - |
Inference Details During inference, we first select the top 1000 detections with the highest scores from the output of the detection head, filter them with a score threshold of 0.05, and apply rotated non-maximum suppression (NMS) with an IoU overlap threshold of 0.01 to generate the final results. For fairness, the speeds of both PointPillars and DENFIDet are measured in a PyTorch environment with a 2080Ti GPU and an Intel i7 CPU.
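The inference-time filtering can be sketched as score thresholding followed by greedy NMS. For brevity this stand-in uses axis-aligned BEV IoU on (x1, y1, x2, y2) boxes, whereas the paper uses rotated IoU:

```python
import numpy as np

def iou_axis_aligned(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2); a simplified stand-in
    for the rotated IoU used in the paper."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, score_thr=0.05, iou_thr=0.01):
    """Filter by score, then greedily suppress boxes overlapping a kept one."""
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= score_thr]
    keep = []
    for i in order:
        if all(iou_axis_aligned(boxes[i], boxes[j]) <= iou_thr for j in keep):
            keep.append(i)
    return keep
```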
4.3 Analysis Experiments
In this section, we conduct detailed experimental studies to analyze the effectiveness of the introduced components. Results are evaluated on the KITTI val split since the KITTI test split can only be used for the submission of final results.
Effect of Boundary-Aware Dense Feature Capture In this part, we study the effectiveness of boundary-aware dense feature capture for objects of different scales. A straightforward baseline for our proposed module is the original depth-wise separable deformable convolution (denoted DSDC), which learns the offset map from the feature map itself, whereas our method learns the offset map from the dense boundary proposal for explicit guidance. For a fair comparison, we take the official PointPillars codebase (https://github.com/nutonomy/second.pytorch) as another baseline and add the extra deformable modules on top of it, i.e., both the original depth-wise separable deformable convolution and DENFI. As shown in Table 2, PointPillars-DSDC yields only a minimal performance improvement over PointPillars. Compared to PointPillars-DSDC, DENFIDet outperforms it by 2.24, 1.17, and 1.34 mAP on the easy, moderate, and hard levels with the same runtime. We also see more than 4, 5, and 7 times relative improvement on the easy, moderate, and hard difficulty levels compared to PointPillars-DSDC, which demonstrates the power of explicit boundary-aware feature capture, especially for hard objects with fewer points on them.
Runtime Analysis We next discuss the inference time of the different deformable modules. As shown in Table 2, they share nearly the same computational overhead, for two reasons: 1) DSDC learns the offset map from the large backbone feature map, while DENFI learns it from the much smaller dense boundary proposal; 2) both must learn two offset maps, one each for the classification and regression branches of the anchor-based detection head. So even though DENFI has one more 1x1 convolution for the regression branch of DBPM during inference, the runtime difference between DSDC and DENFI is negligible.
Results Analysis In the previous experiments, we used an IoU threshold of 0.7 for the Car category to calculate AP. However, detection of the highest possible quality is critical in practical scenarios such as autonomous driving, so we examine the effectiveness of DENFI on high-quality detection by raising the IoU threshold. The results are shown in Table 1. We observe that DENFI brings a much more significant performance improvement at higher IoU thresholds across all difficulty levels. At an IoU threshold of 0.9, the relative improvement reaches 17.97%, 19.69%, and 15.31% on the easy, moderate, and hard levels, about 5 times the improvement at the 0.8 threshold. This demonstrates the effectiveness of DENFI in aggregating localization information for accurate detection.
4.4 Comparison with the State of the Art
We compare DENFIDet with a wide range of state-of-the-art 3D object detectors and summarize the results on the test split. As shown in Table 3, DENFIDet achieves better results than PointPillars across all categories and difficulty levels, outperforming it by 2.71, 2.70, and 2.13 mAP on the easy, moderate, and hard levels, respectively. In terms of per-category improvement, DENFIDet surpasses PointPillars by 2.00, 3.32, and 2.76 AP on the most critical "moderate" level for the Car, Pedestrian, and Cyclist categories, respectively. This indicates that DENFI-guided 3D detection achieves considerable gains for objects of different scales and difficulty, especially for hard objects with fewer points on them.
With this significant improvement over PointPillars, DENFIDet outperforms all compared methods, including both two-stage and multi-sensor fusion-based ones, in terms of mean average precision (mAP), achieving new state-of-the-art performance with only point clouds as input. Compared to previous best methods, DENFIDet is 2.7 times faster than STD (the prior art on KITTI) and achieves better performance in a single-stage manner. Note that STD is a two-stage detector that needs to be trained stage-by-stage to save GPU memory, while ours is trained end-to-end. We also observe a significant performance advantage on the most difficult category, Pedestrian, outperforming AVOD-FPN (the prior art for Pedestrian, a multi-sensor two-stage method) by 2.66, 1.64, and 2.05 AP on the easy, moderate, and hard difficulty levels while running 3 times faster. This further demonstrates the power of the proposed DENFI module.
4.5 Case Study
Figure 6 shows qualitative comparisons of the baseline PointPillars and DENFIDet on the KITTI val split. As we can see, PointPillars sometimes generates inappropriate results, including inaccurate regressions, false positives, and false negatives. Most of these faults occur in areas where points are very sparse, so wrong results are easy to produce if their features cannot be captured accurately. In contrast, DENFIDet performs well on those hard objects because it always explicitly perceives where their densest features are.
5 Conclusion
In this work, we pointed out the inconsistency of adopting 2D detection frameworks for 3D detection, which is caused by the uneven distribution of point clouds, and showed that the problem can be mitigated by guiding 3D detectors to explicitly capture the densest regions in a boundary-aware manner. We proposed a dense boundary proposal module (DBPM) to predict high-quality boundary information, and designed an efficient operator called DENFIConv that takes advantage of the boundary information provided by DBPM for dense feature capture, thereby improving the quality of the features fed to the detection head. The simple and efficient design of DBPM and DENFIConv allows them to be combined with many 3D detectors, improving performance while maintaining real-time speed.
-  (2017) Multi-view 3d object detection network for autonomous driving. In , pp. 1907–1915. Cited by: Table 3.
-  (2019) Fast point r-cnn. arXiv preprint arXiv:1908.02990. Cited by: §1, Table 3.
-  (2017) Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, pp. 764–773. Cited by: §1, §2, §3.2.
-  (2012) Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3354–3361. Cited by: Figure 1, §1, §4.1.
-  (2015) Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448. Cited by: §3.1.
-  (2018) 3d semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9224–9232. Cited by: §2.
-  (2017) Submanifold sparse convolutional networks. arXiv preprint arXiv:1706.01307. Cited by: §2.
-  (2017) Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. Cited by: §3.2.
-  (2015) Densebox: unifying landmark localization with end to end object detection. arXiv preprint arXiv:1509.04874. Cited by: §3.1.
-  (2018) Joint 3d proposal generation and object detection from view aggregation. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1–8. Cited by: Figure 1, §4.4, Table 3.
-  (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12697–12705. Cited by: Figure 1, §1, §1, §2, §3.3, §3.3, §4.1, §4.1, §4.2, §4.2, §4.4, §4.4, Table 1, Table 2, Table 3.
-  (2019) Multi-task multi-sensor fusion for 3d object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7345–7353. Cited by: Table 3.
-  (2018) Deep continuous fusion for multi-sensor 3d object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 641–656. Cited by: Table 3.
-  (2017) Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988. Cited by: §1, §3.1, §3.1.
-  (2016) Ssd: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §1.
-  (2017) Automatic differentiation in pytorch. Cited by: §4.2.
-  (2018) Frustum pointnets for 3d object detection from rgb-d data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 918–927. Cited by: Figure 1, §3.1, §3.1, Table 3.
-  (2019) Generalized intersection over union: a metric and a loss for bounding box regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 658–666. Cited by: §3.1, §3.1.
-  (2019) Pointrcnn: 3d object proposal generation and detection from point cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–779. Cited by: Figure 1, §1, §2, Table 3.
-  (2019) Region proposal by guided anchoring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2965–2974. Cited by: §2, §3.2.
-  (2019) Frustum convnet: sliding frustums to aggregate local point-wise features for amodal 3d object detection. arXiv preprint arXiv:1903.01864. Cited by: Figure 1, Table 3.
-  (2018) Second: sparsely embedded convolutional detection. Sensors 18 (10), pp. 3337. Cited by: Figure 1, §1, §2, Table 3.
-  (2018) Hdnet: exploiting hd maps for 3d object detection. In Conference on Robot Learning, pp. 146–155. Cited by: §1, §2, Table 3.
-  (2018) Pixor: real-time 3d object detection from point clouds. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 7652–7660. Cited by: §1, §2, Table 3.
-  (2018) IPOD: intensive point-based object detector for point cloud. arXiv preprint arXiv:1812.05276. Cited by: Figure 1, Table 3.
-  (2019) STD: sparse-to-dense 3d object detector for point cloud. arXiv preprint arXiv:1907.10471. Cited by: Figure 1, §1, §2, §4.4, Table 3.
-  (2019) IoU loss for 2d/3d object detection. arXiv preprint arXiv:1908.03851. Cited by: Table 3.
-  (2018) Voxelnet: end-to-end learning for point cloud based 3d object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4490–4499. Cited by: §1, §2, §4.1.