With the widespread adoption of LiDAR sensors in autonomous driving Geiger et al. (2012) and augmented reality Park et al. (2008), 3D object detection from point clouds has become a mainstream research direction. Compared to RGB images from video cameras, point clouds provide accurate depth and geometric information, which can be used not only to locate an object but also to describe its shape. However, the unordered and sparse nature of point clouds, together with the strong correlation among neighbouring points, makes it challenging to use them directly for 3D object detection.
In recent years, several pioneering approaches have been proposed to tackle these challenges. The main ideas for processing point cloud data are to project the point clouds to different views Simon et al. (2019); Chen et al. (2017b); Ku et al. (2018); Liang et al. (2018); Yang et al. (2018a, b) or to divide the point clouds into equally spaced voxels Li (2017); Engelcke et al. (2017); Zhou and Tuzel (2018); Yan et al. (2018). Then convolutional neural networks and mature 2D object detection frameworks Ren et al. (2015); Redmon et al. (2016) are applied to extract features. However, because projection alone cannot capture an object's geometric information well, many methods Chen et al. (2017b); Wang and Jia (2019); Qi et al. (2018); Sindagi et al. (2019) have to incorporate RGB images into the designed network, while methods using only voxelization fail to exploit the properties of the point clouds and incur a huge computational burden Liu et al. (2019) as the resolution increases. Apart from converting point clouds into other formats, some works Shi et al. (2019b); Yang et al. (2019) take PointNets Qi et al. (2017a, b) as backbones to process point clouds directly. Although PointNets build a hierarchical network and use a symmetric function to maintain permutation invariance, they fail to construct the neighbour relationships between the grouped point sets Wang et al. (2019).
Considering the properties of point clouds, we should note the superiority of graphs in handling irregular data. Indeed, for point cloud segmentation and classification tasks, graph-based processing has been studied in depth by many works Qi et al. (2017c); Bi et al. (2019); Landrieu and Simonovsky (2018); Shen et al. (2018a); Wang et al. (2019). However, little research has used graphs for 3D object detection from point clouds. To our knowledge, Point-GNN Shi and Rajkumar (2020) may be the first to demonstrate the potential of graph neural networks as a new approach for 3D object detection. Point-GNN introduces an auto-registration mechanism to reduce translation variance and designs box merging and scoring operations to combine detection results from multiple vertices accurately. However, similar to ShapeContextNet Xie et al. (2018) and PointNet++ Qi et al. (2017b), the relationships between point sets are not well established in its feature extraction process, and the large number of matrix operations brings heavy computational and memory costs.
In this paper, we propose the sparse voxel-graph attention network (SVGA-Net) for 3D object detection. SVGA-Net is an end-to-end trainable network that takes raw point clouds as input and outputs the category and bounding-box information of the object. Specifically, SVGA-Net mainly consists of a voxel-graph network module and a sparse-to-dense regression module. Instead of normalized rectangular voxels, we divide the point clouds into 3D spherical spaces with a fixed radius. The voxel-graph network constructs a local complete graph for each voxel and a global KNN graph over all voxels. The local and global graphs serve as an attention mechanism that provides a parameter supervision factor for the feature vector of each point. In this way, the locally aggregated features can be combined with the global point-wise features. We then design the sparse-to-dense regression module to predict the category and 3D bounding box by processing features at different scales. Evaluation on the KITTI benchmark demonstrates that our proposed method achieves results comparable with state-of-the-art approaches.
Our key contributions can be summarized as follows:
We propose a new end-to-end trainable 3D object detection network from point clouds which uses graph representations without converting to other formats.
We design a voxel-graph network, which constructs the local complete graph within each spherical voxel and the global KNN graph through all voxels to learn the discriminative feature representation simultaneously.
We propose a novel 3D boxes estimation method that aggregates features at different scales to achieve higher 3D localization accuracy.
Our proposed SVGA-Net achieves results comparable with state-of-the-art methods on the challenging KITTI 3D detection dataset.
2 Related work
Projection-based methods for point clouds. To align with RGB images, a series of works process point clouds through projection Chen et al. (2017b); Ku et al. (2018); Liang et al. (2018); Yang et al. (2018b); Liang et al. (2019). Among them, MV3D Chen et al. (2017b) projects point clouds to a bird's eye view and trains a Region Proposal Network (RPN) to generate positive proposals. It extracts features from the LiDAR bird's eye view, the LiDAR front view and the RGB image for every proposal to generate refined 3D bounding boxes. AVOD Ku et al. (2018) improves MV3D by fusing image and bird's eye view features and merges features from multiple views in the RPN phase to generate positive proposals. Note that accurate geometric information may be lost in the high-level layers with this scheme.
Volumetric methods for point clouds. Another typical way of processing point clouds is voxelization. VoxelNet Zhou and Tuzel (2018) is the first network to process point clouds with voxelization, using stacked VFE layers to extract feature tensors. Following it, a large number of methods Liu et al. (2020); Zhou and Tuzel (2018); Yan et al. (2018); Shi et al. (2019a); Chen et al. (2019) divide the 3D space into regular grids and group the points in a grid as a whole. However, they often need to stack heavy 3D CNN layers to realize geometric pose inference, which brings a large computational cost.
Pointnet-based methods for point clouds. To process point clouds directly, PointNet Qi et al. (2017a) and PointNet++ Qi et al. (2017b) are the two groundbreaking works that design parallel MLPs to extract features from raw irregular data, which greatly improves accuracy. Taking them as backbones, many works Shi et al. (2019b); Qi et al. (2018); Lang et al. (2019); Shi et al. (2019c); Yang et al. (2019, 2020) design different feature extractors to achieve better performance. Although PointNets are effective at abstracting features, they still suffer from feature loss between the local and global point sets.
Graph-based methods for point clouds. Constructing graphs to learn order-invariant representations of irregular point cloud data has been explored for classification and segmentation tasks Simonovsky and Komodakis (2017); Shen et al. (2018b); Kaul et al. (2019); Wang et al. (2019). The graph convolution operation is efficient for computing features between points. DGCNN Wang et al. (2019) proposes EdgeConv on neighbouring point sets to fuse local features in a KNN graph. SAWNet Kaul et al. (2019) extends the ideas of PointNet and DGCNN to learn both local and global information for points. Surprisingly, little research has considered applying graphs to 3D object detection. Point-GNN Shi and Rajkumar (2020) may be the first work to design a GNN for 3D object detection: it is a one-stage graph neural network that predicts the category and shape of the object with an auto-registration mechanism and box merging and scoring operations, which demonstrates the potential of graph neural networks as a new approach for 3D object detection.
3 Proposed method
In this section, we detail the architecture of the proposed SVGA-Net for 3D detection from point clouds. As shown in Figure 1, our SVGA-Net architecture mainly consists of two modules: the voxel-graph network and sparse-to-dense regression.
3.1 Voxel-graph network architecture
Spherical voxel grouping. Consider the original point clouds represented as P = {p_1, p_2, ..., p_N}, where each p_i is a point in a d-dimensional metric space. In our practice, d is set to 4, so each point in 3D space is defined as p_i = (x_i, y_i, z_i, r_i), where (x_i, y_i, z_i) denote the coordinate values of the point along the X, Y, Z axes and the fourth dimension r_i is the laser reflection intensity.
Then, in order to cover the entire point set better, we use iterative farthest point sampling Qi et al. (2017b) to choose M farthest points C = {c_1, c_2, ..., c_M}. For each point c_i in C, we search for its nearest neighbours within a fixed radius R to form a local voxel sphere:

V_i = {p_j ∈ P : ||p_j - c_i|| ≤ R}

In this way, we subdivide the 3D space into M 3D spherical voxels V = {V_1, V_2, ..., V_M}.
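The grouping step above can be sketched in plain Python: iterative farthest point sampling picks M well-spread centers, then a fixed-radius ball query gathers each center's neighbours into a voxel sphere. The function names (`farthest_point_sampling`, `ball_query`) and the CPU list-based implementation are illustrative assumptions; the actual network operates on GPU tensors.

```python
import math

def dist(a, b):
    # Euclidean distance between two 3D points
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def farthest_point_sampling(points, m):
    """Greedily pick m indices, each maximising its distance to the set chosen so far."""
    chosen = [0]                                   # start from an arbitrary point
    d = [dist(points[0], p) for p in points]       # distance to nearest chosen point
    while len(chosen) < m:
        nxt = max(range(len(points)), key=lambda i: d[i])
        chosen.append(nxt)
        d = [min(d[i], dist(points[nxt], p)) for i, p in enumerate(points)]
    return chosen

def ball_query(points, center, radius):
    """Indices of all points within `radius` of `center` (one voxel sphere)."""
    return [i for i, p in enumerate(points) if dist(center, p) <= radius]

pts = [(0, 0, 0), (0.1, 0, 0), (5, 0, 0), (5, 0.2, 0), (0, 5, 0)]
centers = farthest_point_sampling(pts, 3)
spheres = [ball_query(pts, pts[c], 0.5) for c in centers]
```

Note that, unlike rectangular voxels, the resulting spheres may overlap, so one point can contribute to several local graphs.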
Local point-wise feature. As shown in Figure 1, for each spherical voxel V_i with T points (T varies across voxel spheres), the coordinate information of all points inside forms the input vector. We extract the local point-wise features for each voxel sphere by learning a mapping:

f_j = MLP(p_j), p_j ∈ V_i

Then we obtain the local point-wise feature representation F_i = {f_1, f_2, ..., f_T} for each voxel sphere V_i, which is transformed by the subsequent layers for deeper feature learning.
Local point-attention layer. Taking the features of each node as input, the local point-attention layer outputs refined features through a series of information aggregation steps. As shown in Figure 2, we construct a complete graph for each local node set and a KNN graph over all the spherical voxels. We aggregate the information of each node according to the local and global attention scores. The feature aggregation of the i-th node is represented as:

h_i^(l+1) = g^l · (h_i^l + Σ_{j ∈ N(i)} a_ij h_j^l)

where h_i^(l+1) denotes the dynamically updated feature of node i and h_i^l is the input feature of node i. N(i) denotes the indices of the other nodes inside the same sphere and h_j^l denotes the feature of the j-th node inside the same sphere. a_ij is the local attention score between node i and the other nodes inside the same sphere, and g^l is the global attention score from the global KNN graph in the l-th iteration.
As shown in Figure 2 (a), we construct a complete graph over all nodes within a voxel sphere so that the features are learned under mutual constraints. The local attention score is calculated by:

a_ij = exp(h_i^l · h_j^l) / Σ_{k ∈ N(i)} exp(h_i^l · h_k^l)
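A minimal sketch of one local point-attention update on a complete graph: each node's feature is refined by a softmax-weighted sum over the other nodes in the same sphere, then scaled by a (here fixed) global attention factor. The softmax-over-dot-products score and the scalar `global_score` are our assumptions for illustration; in the network these quantities are learned.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def local_attention_update(feats, global_score=1.0):
    """One attention update; feats is the list of feature vectors of one sphere."""
    out = []
    for i, hi in enumerate(feats):
        others = [j for j in range(len(feats)) if j != i]
        if not others:                     # single-point sphere: nothing to attend to
            out.append(list(hi))
            continue
        # attention scores from feature similarity (dot product), softmax-normalised
        scores = softmax([sum(a * b for a, b in zip(hi, feats[j])) for j in others])
        # residual-style aggregation scaled by the global factor
        agg = [global_score * (x + sum(s * feats[j][k] for s, j in zip(scores, others)))
               for k, x in enumerate(hi)]
        out.append(agg)
    return out

out = local_attention_update([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```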
Global attention layer. With the local complete graph alone, the aggregated features only describe local structure and are not integrated with global information. We therefore design the global attention layer to learn the global feature of each spherical voxel and to provide a feature factor aligned to each node.
For the points within each of the M 3D spherical voxels V, we calculate the physical centers of all voxels, denoted as O = {o_1, o_2, ..., o_M}. Each center is passed through a 3-layer MLP to obtain its initial global feature u_i. As Figure 2 (b) shows, we construct a KNN graph over the voxel spheres. For each node o_i, the attention score between node i and its k-th neighbour is calculated as follows:

g_ik^l = exp(u_i^l · u_k^l) / Σ_{k' ∈ K(i)} exp(u_i^l · u_k'^l)

where K(i) denotes the indices of the neighbours of node i and l indexes the point-attention layers.
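The global layer can be sketched the same way: voxel-sphere centers form a KNN graph, and each center receives softmax-normalised scores from its k nearest neighbours based on feature similarity. Both `knn` and the dot-product score are illustrative choices, not the paper's exact formulation.

```python
import math

def knn(centers, i, k):
    """Indices of the k nearest centers to center i (excluding itself)."""
    d = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    order = sorted((j for j in range(len(centers)) if j != i),
                   key=lambda j: d(centers[i], centers[j]))
    return order[:k]

def global_scores(centers, feats, i, k):
    """Softmax attention scores from node i's k nearest neighbours."""
    nbrs = knn(centers, i, k)
    dots = [sum(a * b for a, b in zip(feats[i], feats[j])) for j in nbrs]
    m = max(dots)
    e = [math.exp(x - m) for x in dots]
    s = sum(e)
    return nbrs, [x / s for x in e]

centers = [(0, 0, 0), (1, 0, 0), (4, 0, 0), (0.5, 0, 0)]
feats = [[1.0], [1.0], [1.0], [1.0]]      # identical features -> uniform scores
nbrs, sc = global_scores(centers, feats, 0, 2)
```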
Voxel-graph feature representation. The point-attention operation on each spherical voxel combines the parameter factors from both the local and global graphs, each of which is followed by a 2-layer MLP with a nonlinear activation to transform each updated feature. By stacking multiple point-attention layers, both locally aggregated features and global point-wise features can be learned. We then apply max pooling over the aggregated features to obtain the final feature vector. Processing all the spherical voxels yields a set of voxel-sphere features, each of which corresponds to the spatial coordinates of its voxel and is taken as input to the sparse-to-dense regression module.
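The pooling step can be shown in two lines: after the stacked attention layers, a sphere's per-point features are max-pooled channel-wise into one feature vector per voxel, which also preserves permutation invariance (a symmetric function, as in PointNet).

```python
def maxpool(feats):
    """Channel-wise max over a sphere's per-point feature vectors."""
    return [max(f[k] for f in feats) for k in range(len(feats[0]))]
```

Reordering the points leaves the pooled voxel feature unchanged, which is the property that makes the representation order-invariant.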
3.2 Sparse-to-dense regression
For each 3D bounding box in 3D space, the predicted box information is represented as (x, y, z, l, w, h, θ), where (x, y, z) is the center coordinate of the bounding box, (l, w, h) is the size along the length, width and height dimensions respectively, and θ is the heading angle. The feature map from the voxel-graph network is processed by the region proposal regression module. The architecture of the sparse-to-dense regression (SDR) module is illustrated in Figure 3.
The SDR module consists of three blocks of Conv2D(c_in, c_out, k, s, p) layers, each followed by BatchNorm and a ReLU, where c_in and c_out are the number of input and output channels, and k, s and p represent the kernel size, stride size and padding size respectively. The stride size is set to 2 for the first layer of each block to downsample the feature map by half, followed by a sequence of convolutions with stride 1. The outputs of the three blocks are denoted as F1, F2 and F3 respectively.
In order to combine high-resolution features with small receptive fields and low-resolution features with large receptive fields, we concatenate the outputs of the second and third blocks, F2 and F3, with the outputs of the first and second blocks, F1 and F2, after upsampling. In this way, the dense feature range of the lower level can be combined with the sparse feature range of the higher level. Then a series of convolution operations with an upsampling layer are performed in parallel on the three scale channels to generate three feature maps of the same scale, denoted as F1', F2' and F3'.
In addition, we consider that the output features F1', F2' and F3' fit our final goal more densely than those of the original three blocks. Therefore, to combine the original sparse feature maps with the processed dense feature maps, we fuse the upsampled original outputs F1, F2 and F3 with F1', F2' and F3' by element-wise addition. The final output F is obtained by concatenating the fused feature maps after a convolution layer, and F is taken as input to perform category classification and 3D bounding-box regression.
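The resolution bookkeeping of the SDR module can be checked with a small sketch: three blocks each halve the spatial size (stride-2 first conv), and the deeper outputs are upsampled back to a common resolution before fusion. The 256×256 input size is an assumed example; channel widths are not modelled here.

```python
def block_sizes(h, w, num_blocks=3):
    """Spatial size after each stride-2 block (F1, F2, F3 resolutions)."""
    sizes = []
    for _ in range(num_blocks):
        h, w = h // 2, w // 2
        sizes.append((h, w))
    return sizes

def upsample(size, factor):
    # nearest/bilinear upsampling only changes the spatial size here
    return (size[0] * factor, size[1] * factor)

sizes = block_sizes(256, 256)
# bring F2 and F3 back to F1's resolution so they can be fused element-wise
fused = [sizes[0], upsample(sizes[1], 2), upsample(sizes[2], 4)]
```

Element-wise addition requires identical spatial sizes, which is why the deeper maps must be upsampled by 2x and 4x respectively before fusion.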
3.3 Loss function
We use a multi-task loss to train our network. Each prior anchor and ground-truth bounding box are parameterized as (x_a, y_a, z_a, l_a, w_a, h_a, θ_a) and (x_g, y_g, z_g, l_g, w_g, h_g, θ_g) respectively. The regression residuals between anchors and ground truth are computed as:

Δx = (x_g - x_a) / d_a, Δy = (y_g - y_a) / d_a, Δz = (z_g - z_a) / h_a,
Δl = log(l_g / l_a), Δw = log(w_g / w_a), Δh = log(h_g / h_a), Δθ = θ_g - θ_a

where d_a = sqrt(l_a^2 + w_a^2) is the diagonal of the anchor box base. We use the Smooth L1 loss Girshick (2015) as our 3D bounding-box regression loss L_reg.
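A minimal sketch of this residual encoding and the Smooth L1 regression loss. The diagonal normaliser d_a = sqrt(l_a^2 + w_a^2) is the convention in this family of detectors (VoxelNet/SECOND) and is an assumption here; boxes are ordered (x, y, z, l, w, h, theta).

```python
import math

def encode_residual(gt, anchor):
    """Regression target: normalised offsets for center, log ratios for size."""
    xg, yg, zg, lg, wg, hg, tg = gt
    xa, ya, za, la, wa, ha, ta = anchor
    da = math.sqrt(la ** 2 + wa ** 2)          # anchor base diagonal
    return [(xg - xa) / da, (yg - ya) / da, (zg - za) / ha,
            math.log(lg / la), math.log(wg / wa), math.log(hg / ha), tg - ta]

def smooth_l1(pred, target):
    """Smooth L1: quadratic for small errors, linear for large ones."""
    loss = 0.0
    for p, t in zip(pred, target):
        d = abs(p - t)
        loss += 0.5 * d * d if d < 1.0 else d - 0.5
    return loss

anchor = (0.0, 0.0, 0.0, 4.0, 2.0, 1.5, 0.0)
gt = (1.0, 0.0, 0.0, 4.0, 2.0, 1.5, 0.0)       # shifted 1 m along x
r = encode_residual(gt, anchor)
loss = smooth_l1(r, [0.0] * 7)
```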
For the object classification loss, we apply the binary cross-entropy classification loss:

L_cls = α (1 / N_pos) Σ_i -log(p_i^pos) + β (1 / N_neg) Σ_j -log(1 - p_j^neg)

where N_pos and N_neg are the numbers of positive and negative anchors, p_i^pos and p_j^neg are the softmax outputs for positive and negative anchors respectively, and α and β are positive constants that balance the different anchors, set to 1.5 and 1 respectively in our practice.
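A sketch of this balanced binary cross-entropy: positive and negative anchors are averaged separately and weighted by alpha = 1.5 and beta = 1 as stated above. The function name and list-based interface are illustrative.

```python
import math

def cls_loss(pos_probs, neg_probs, alpha=1.5, beta=1.0):
    """pos_probs: predicted object probability for positive anchors;
    neg_probs: predicted object probability for negative anchors."""
    lp = -sum(math.log(p) for p in pos_probs) / max(len(pos_probs), 1)
    ln = -sum(math.log(1.0 - p) for p in neg_probs) / max(len(neg_probs), 1)
    return alpha * lp + beta * ln
```

Averaging the two groups separately keeps the many easy negatives from drowning out the few positives, and alpha > beta tilts the loss further toward the positives.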
Our total loss is composed of two parts, the classification loss L_cls and the bounding-box regression loss L_reg:

L_total = β1 L_cls + β2 L_reg(Δu, Δu*)

where Δu and Δu* are the predicted residual and the regression target respectively. The weighting parameters β1 and β2 balance the relative importance of the two parts and are set to 1 and 2 respectively.
4 Experiments
We evaluate our method on the widely used KITTI 3D object detection benchmark Geiger et al. (2012). It includes 7481 training samples and 7518 test samples with three categories: car, pedestrian and cyclist. For each category, detection results are evaluated at three levels of difficulty: easy, moderate and hard. Following Chen et al. (2017a), we divide the training data into a training set (3712 images and point clouds) and a validation set (3769 images and point clouds) at a ratio of about 1:1 (ablation studies are conducted on this split). We train our model on the train split and compare our results with state-of-the-art methods on both the val split and the test split. For evaluation, the average precision (AP) metric is used to compare different methods, and the 3D IoU thresholds for car, cyclist and pedestrian are 0.7, 0.5 and 0.5 respectively.
4.1 Implementation details
Network architecture. As shown in Figure 1, in the local point-wise feature and global attention layers, the point sets are first processed by 3-layer MLPs whose sizes are all (64, 128, 128). In the local point-attention layer, we stack three local point-attention layers to aggregate the features, each followed by a 2-layer MLP; the sizes of the three MLPs are (128, 128), (128, 256) and (512, 1024) respectively. Following Ku et al. (2018); Zhou and Tuzel (2018); Yang et al. (2019), we train two networks, one for cars and another for both pedestrians and cyclists.
| Method | Mod. | Car Easy | Car Mod. | Car Hard | Ped. Easy | Ped. Mod. | Ped. Hard | Cyc. Easy | Cyc. Mod. | Cyc. Hard |
|---|---|---|---|---|---|---|---|---|---|---|
| MV3D Chen et al. (2017b) | R+L | 71.09 | 62.35 | 55.12 | - | - | - | - | - | - |
| F-PointNet Qi et al. (2018) | R+L | 81.20 | 70.39 | 62.19 | 51.21 | 44.89 | 40.23 | 71.96 | 56.77 | 50.39 |
| AVOD-FPN Ku et al. (2018) | R+L | 81.94 | 71.88 | 66.38 | 50.80 | 42.81 | 40.88 | 64.00 | 52.18 | 46.61 |
| F-ConvNet Wang and Jia (2019) | R+L | 85.88 | 76.51 | 68.08 | 52.37 | 45.61 | 41.49 | 79.58 | 64.68 | 57.03 |
| MMF Liang et al. (2019) | R+L | 86.81 | 76.75 | 68.41 | - | - | - | - | - | - |
| VoxelNet Zhou and Tuzel (2018) | L | 77.47 | 65.11 | 57.73 | 39.48 | 33.69 | 31.51 | 61.22 | 48.36 | 44.37 |
| SECOND Yan et al. (2018) | L | 83.13 | 73.66 | 66.20 | 51.07 | 42.56 | 37.29 | 70.51 | 53.85 | 46.90 |
| PointPillars Lang et al. (2019) | L | 79.05 | 74.99 | 68.30 | 52.08 | 43.43 | 41.49 | 75.78 | 59.07 | 52.92 |
| PointRCNN Shi et al. (2019b) | L | 85.94 | 75.76 | 68.32 | 49.43 | 41.78 | 38.63 | 73.93 | 59.60 | 53.59 |
| STD Yang et al. (2019) | L | 86.61 | 77.63 | 76.06 | 53.08 | 44.24 | 41.97 | 78.89 | 62.53 | 55.77 |
| 3DSSD Yang et al. (2020) | L | 88.36 | 79.57 | 74.55 | - | - | - | - | - | - |
| SA-SSD He et al. | L | 88.75 | 79.79 | 74.16 | - | - | - | - | - | - |
| PV-RCNN Shi et al. (2019a) | L | 90.25 | 81.43 | 76.82 | - | - | - | 78.60 | 63.71 | 57.65 |
| Point-GNN Shi and Rajkumar (2020) | L | 88.33 | 79.47 | 72.29 | 51.92 | 43.77 | 40.14 | 78.60 | 63.48 | 57.08 |

Table 1: Performance comparison on KITTI 3D object detection for car, pedestrian and cyclist. The evaluation metric is the average precision (AP) on the official test set. 'R' denotes RGB image input and 'L' denotes LiDAR point cloud input.
For cars, we sample the farthest points to form the initial point sets and choose the neighbours within the fixed radius to construct the local complete graph. An anchor is considered positive if it has the highest IoU with a ground-truth box or its IoU score is over 0.6, and negative if its IoU with all ground-truth boxes is less than 0.45. To reduce redundancy, we apply an IoU threshold of 0.7 for NMS. For cyclists and pedestrians, the initial point sets and local graphs are constructed in the same way. An anchor is considered positive if it has the highest IoU with a ground-truth box or its IoU score is over 0.5, and negative if its IoU with all ground-truth boxes is less than 0.35. The IoU threshold of NMS is set to 0.6.
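The IoU thresholding and NMS described above can be sketched with axis-aligned bird's-eye-view boxes (x1, y1, x2, y2, score); rotated IoU is used in practice, so this is a simplified assumption to keep the example short.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, thresh):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones."""
    keep = []
    for b in sorted(boxes, key=lambda r: r[4], reverse=True):
        if all(iou(b, k) < thresh for k in keep):
            keep.append(b)
    return keep

boxes = [(0, 0, 2, 2, 0.9), (0, 0, 2, 2, 0.8), (5, 5, 7, 7, 0.7)]
keep = nms(boxes, 0.7)
```

The same `iou` function also serves the positive/negative anchor assignment against the ground-truth boxes.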
Training. The network is trained in an end-to-end manner on GTX 1080 GPUs. The ADAM optimizer Kingma and Ba (2014) is employed with an initial learning rate of 0.001 for the first 140 epochs, decayed by a factor of 10 every 20 epochs thereafter. We train our network for 200 epochs with a batch size of 16 on 4 GPU cards. Furthermore, we apply the same data augmentation as Lang et al. (2019); Zhou and Tuzel (2018) to prevent overfitting.
4.2 Comparing with state-of-the-art methods
Performance on the KITTI test dataset. We evaluate our method on the 3D detection and bird's eye view detection benchmarks of the KITTI test server. As shown in Table 1 and Table 2, we compare our results with state-of-the-art RGB+LiDAR and LiDAR-only methods on the 3D object detection and bird's eye view detection tasks. Our proposed method outperforms the most effective RGB+LiDAR method, MMF Liang et al. (2019), by (4.86%, 6.2%, 6.22%) and (6.33%, 4.2%, 5.79%) for the car category on the three difficulty levels of 3D detection and BEV detection respectively.
Compared with the LiDAR-only methods, our SVGA-Net still shows strong performance on all three categories. In particular, it clearly outperforms Point-GNN Shi and Rajkumar (2020), which uses the same graph representation, on all three categories. We believe this benefit comes from our construction of local and global graphs, which better captures the feature information of point clouds. The slightly lower performance at the hard difficulty level in the two detection tasks may be due to the fact that a local graph cannot be constructed for objects with an occlusion ratio exceeding 80%.
| Method | Mod. | Car Easy | Car Mod. | Car Hard | Ped. Easy | Ped. Mod. | Ped. Hard | Cyc. Easy | Cyc. Mod. | Cyc. Hard |
|---|---|---|---|---|---|---|---|---|---|---|
| MV3D Chen et al. (2017b) | R+L | 86.02 | 76.90 | 68.49 | - | - | - | - | - | - |
| F-PointNet Qi et al. (2018) | R+L | 88.70 | 84.00 | 75.33 | 58.09 | 50.22 | 47.20 | 75.38 | 61.96 | 54.68 |
| AVOD-FPN Ku et al. (2018) | R+L | 88.53 | 83.79 | 77.90 | 58.75 | 51.05 | 47.54 | 68.09 | 57.48 | 50.77 |
| F-ConvNet Wang and Jia (2019) | R+L | 89.69 | 83.08 | 74.56 | 58.90 | 50.48 | 46.72 | 82.59 | 68.62 | 60.62 |
| MMF Liang et al. (2019) | R+L | 89.49 | 87.47 | 79.10 | - | - | - | - | - | - |
| VoxelNet Zhou and Tuzel (2018) | L | 89.35 | 79.26 | 77.39 | 46.13 | 40.74 | 38.11 | 66.70 | 54.76 | 50.55 |
| SECOND Yan et al. (2018) | L | 88.07 | 79.37 | 77.95 | 55.10 | 46.27 | 44.76 | 73.67 | 56.04 | 48.78 |
| PointPillars Lang et al. (2019) | L | 88.35 | 86.10 | 79.83 | 58.66 | 50.23 | 47.19 | 79.14 | 62.25 | 56.00 |
| PointRCNN Shi et al. (2019b) | L | 89.47 | 85.58 | 79.10 | - | - | - | 81.52 | 66.77 | 60.78 |
| STD Yang et al. (2019) | L | 89.66 | 87.76 | 86.89 | 60.99 | 51.39 | 45.89 | 81.04 | 65.32 | 57.85 |
| SA-SSD He et al. | L | 95.03 | 91.03 | 85.96 | - | - | - | - | - | - |
| PV-RCNN Shi et al. (2019a) | L | 94.98 | 90.65 | 86.14 | - | - | - | 82.49 | 68.89 | 62.41 |
| Point-GNN Shi and Rajkumar (2020) | L | 93.11 | 89.17 | 83.90 | 55.36 | 47.07 | 44.61 | 81.17 | 67.28 | 59.67 |

Table 2: Performance comparison on KITTI bird's eye view detection for car, pedestrian and cyclist on the official test set.
4.3 Qualitative results
As shown in Figure 4, we illustrate some qualitative prediction results of our proposed SVGA-Net on the test split of the KITTI dataset. For better visualization, we project the 3D bounding boxes into the RGB images and the BEV of the point clouds. The figures show that our proposed network estimates accurate 3D bounding boxes in different scenes. Notably, SVGA-Net still produces accurate 3D bounding boxes under poor lighting conditions and severe occlusion.
4.4 Ablation studies
In this section, we conduct a series of extensive ablation studies on the validation split of KITTI to illustrate the role of each module in the final result and our parameter selection. All ablation studies are conducted on the car class, which contains the largest number of training examples. The evaluation metric is the average precision (AP %) on the val set.
Results on the KITTI validation dataset. For the most important car category, we first report the performance of our method on the KITTI val split; the results are shown in Table 3 and Table 4. Our proposed method achieves better or comparable results than state-of-the-art methods on all three difficulty levels, which illustrates the superiority of our method.
| Method | Mod. | Easy | Mod. | Hard |
|---|---|---|---|---|
| MV3D Chen et al. (2017b) | R+L | 71.29 | 62.68 | 56.56 |
| F-PointNet Qi et al. (2018) | R+L | 83.76 | 70.92 | 63.65 |
| AVOD-FPN Ku et al. (2018) | R+L | 84.41 | 74.44 | 68.65 |
| Cont-Fuse Liang et al. (2018) | R+L | 86.32 | 73.25 | 67.81 |
| F-ConvNet Wang and Jia (2019) | R+L | 89.02 | 78.80 | 77.09 |
| VoxelNet Zhou and Tuzel (2018) | L | 81.97 | 65.46 | 62.85 |
| SECOND Yan et al. (2018) | L | 87.43 | 76.48 | 69.10 |
| PointRCNN Shi et al. (2019b) | L | 88.88 | 78.63 | 77.38 |
| Fast Point R-CNN Chen et al. (2019) | L | 89.12 | 79.00 | 77.48 |
| STD Yang et al. (2019) | L | 89.70 | 79.80 | 79.30 |
| SA-SSD He et al. | L | 90.15 | 79.91 | 78.78 |
| 3DSSD Yang et al. (2020) | L | 89.71 | 79.45 | 78.67 |
| Point-GNN Shi and Rajkumar (2020) | L | 87.89 | 78.34 | 77.38 |

Table 3: Performance comparison of 3D object detection for the car category on the KITTI val split (AP).
| Method | Mod. | Easy | Mod. | Hard |
|---|---|---|---|---|
| MV3D Chen et al. (2017b) | R+L | 86.55 | 78.10 | 76.67 |
| F-PointNet Qi et al. (2018) | R+L | 88.16 | 84.02 | 76.44 |
| F-ConvNet Wang and Jia (2019) | R+L | 90.23 | 88.79 | 86.84 |
| VoxelNet Zhou and Tuzel (2018) | L | 89.60 | 84.81 | 78.57 |
| SECOND Yan et al. (2018) | L | 89.96 | 87.07 | 79.66 |
| Fast Point R-CNN Chen et al. (2019) | L | 90.12 | 88.10 | 86.24 |
| STD Yang et al. (2019) | L | 90.50 | 88.50 | 88.10 |
| Point-GNN Shi and Rajkumar (2020) | L | 89.82 | 88.31 | 87.16 |

Table 4: Performance comparison of bird's eye view detection for the car category on the KITTI val split (AP).
Effect of different design choices. In the local point-attention layer, we stack several local complete layers to extract aggregated features. To show the impact of the number of point-attention layers, we train our network with the number of layers varying from 1 to 4. As shown in Table 5, as local feature information is propagated through the 1st to 3rd layers, the detection accuracy improves continuously because the features are continuously aggregated towards the object itself. When the number increases to 4, the detection accuracy decreases slightly; we believe the network may be overfitting.
Furthermore, we study the importance of the global attention layer in improving the detection accuracy. As shown in Table 5, the AP values on both detection tasks are greatly reduced when we remove this module from the network, which proves the importance of this design in providing global feature information for each point.
In the last three rows of Table 5, we explore the effect of different designs in the sparse-to-dense regression module. SR removes the concatenation of F1 and F2 with the upsampled F2 and F3, and DR removes the element-wise addition of the upsampled F1, F2 and F3 with F1', F2' and F3'. The results show that the full sparse-to-dense regression design performs best in improving detection accuracy.
Our network is written in Python and implemented in PyTorch for GPU computation. The average inference time for one sample is 62 ms, including 14.5% (9 ms) for data reading and pre-processing, 66.1% (41 ms) for local and global feature aggregation and 19.4% (12 ms) for final box detection.
5 Conclusion
In this paper, we have proposed a novel sparse voxel-graph attention network (SVGA-Net) for 3D object detection from raw point clouds. We introduce graph representations to process point clouds. By constructing a local complete graph in each divided spherical voxel space, we obtain a better local representation of the point features and fuse the information between a point and its neighbourhood. By constructing a global graph, we can better supervise and learn the point features. In addition, the sparse-to-dense regression module fuses feature maps at different scales. Experiments have demonstrated the effectiveness of the design choices in our network. Future work will extend SVGA-Net to incorporate RGB images to further improve detection accuracy.
Our work aims to solve the detection of road objects, which is particularly critical in autonomous driving scenarios. Improved detection accuracy can accelerate the deployment of unmanned vehicles. Our paper introduces a new way of representing road point cloud data, which may inspire further accuracy improvements. However, we must acknowledge that the popularization of autonomous driving will bring a series of traffic safety problems, and the consequences of resulting accidents are hard to estimate.
This work is supported by a grant from the National Natural Science Foundation of China (No. 61872068, 61720106004), by a grant from Science & Technology Department of Sichuan Province of China (No.2018GZ0071, 2019YFG0426), and by a grant from the Fundamental Research Funds for the Central Universities (No.2672018ZYGX2018J014).
References
Bi et al. (2019) Graph-based object classification for neuromorphic vision sensing. ICCV, pp. 491-501.
Chen et al. (2017a) 3D object proposals using stereo imagery for accurate object class detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 40(5), pp. 1259-1272.
Chen et al. (2017b) Multi-view 3D object detection network for autonomous driving. CVPR, pp. 1907-1915.
Chen et al. (2019) Fast Point R-CNN. ICCV, pp. 9775-9784.
Engelcke et al. (2017) Vote3Deep: fast object detection in 3D point clouds using efficient convolutional neural networks. ICRA, pp. 1355-1361.
Geiger et al. (2012) Are we ready for autonomous driving? The KITTI vision benchmark suite. CVPR, pp. 3354-3361.
Girshick (2015) Fast R-CNN. ICCV, pp. 1440-1448.
He et al. Structure aware single-stage 3D object detection from point cloud.
Kaul et al. (2019) SAWNet: a spatially aware deep neural network for 3D point cloud processing. arXiv:1905.07650.
Kingma and Ba (2014) Adam: a method for stochastic optimization. arXiv:1412.6980.
Ku et al. (2018) Joint 3D proposal generation and object detection from view aggregation. IROS, pp. 1-8.
Landrieu and Simonovsky (2018) Large-scale point cloud semantic segmentation with superpoint graphs. CVPR.
Lang et al. (2019) PointPillars: fast encoders for object detection from point clouds. CVPR, pp. 12697-12705.
Li (2017) 3D fully convolutional network for vehicle detection in point cloud. IROS, pp. 1513-1518.
Liang et al. (2019) Multi-task multi-sensor fusion for 3D object detection. CVPR, pp. 7345-7353.
Liang et al. (2018) Deep continuous fusion for multi-sensor 3D object detection. ECCV, pp. 641-656.
Liu et al. (2020) TANet: robust 3D object detection from point clouds with triple attention. AAAI.
Liu et al. (2019) Point-voxel CNN for efficient 3D deep learning. NeurIPS, pp. 963-973.
Park et al. (2008) Multiple 3D object tracking for augmented reality. ISMAR, pp. 117-120.
Qi et al. (2018) Frustum PointNets for 3D object detection from RGB-D data. CVPR, pp. 918-927.
Qi et al. (2017a) PointNet: deep learning on point sets for 3D classification and segmentation. CVPR, pp. 652-660.
Qi et al. (2017b) PointNet++: deep hierarchical feature learning on point sets in a metric space. NeurIPS, pp. 5099-5108.
Qi et al. (2017c) 3D graph neural networks for RGBD semantic segmentation. ICCV, pp. 5209-5218.
Redmon et al. (2016) You only look once: unified, real-time object detection. CVPR, pp. 779-788.
Ren et al. (2015) Faster R-CNN: towards real-time object detection with region proposal networks. NeurIPS, pp. 91-99.
Shen et al. (2018a) Mining point cloud local structures by kernel correlation and graph pooling. CVPR, pp. 4548-4557.
Shen et al. (2018b) Mining point cloud local structures by kernel correlation and graph pooling. CVPR, pp. 4548-4557.
Shi et al. (2019a) PV-RCNN: point-voxel feature set abstraction for 3D object detection. arXiv:1912.13192.
Shi et al. (2019b) PointRCNN: 3D object proposal generation and detection from point cloud. CVPR, pp. 770-779.
Shi et al. (2019c) Part-A^2 Net: 3D part-aware and aggregation neural network for object detection from point cloud. arXiv:1907.03670.
Shi and Rajkumar (2020) Point-GNN: graph neural network for 3D object detection in a point cloud. CVPR.
Simon et al. (2019) Complexer-YOLO: real-time 3D object detection and tracking on semantic point clouds. CVPR Workshops.
Simonovsky and Komodakis (2017) Dynamic edge-conditioned filters in convolutional neural networks on graphs. CVPR, pp. 3693-3702.
Sindagi et al. (2019) MVX-Net: multimodal VoxelNet for 3D object detection. ICRA, pp. 7276-7282.
Wang et al. (2019) Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics 38(5), pp. 146:1-146:12.
Wang and Jia (2019) Frustum ConvNet: sliding frustums to aggregate local point-wise features for amodal 3D object detection. IROS, pp. 1742-1749.
Xie et al. (2018) Attentional ShapeContextNet for point cloud recognition. CVPR, pp. 4606-4615.
Yan et al. (2018) SECOND: sparsely embedded convolutional detection. Sensors 18(10), pp. 3337.
Yang et al. (2018a) HDNET: exploiting HD maps for 3D object detection. CoRL, pp. 146-155.
Yang et al. (2018b) PIXOR: real-time 3D object detection from point clouds. CVPR, pp. 7652-7660.
Yang et al. (2020) 3DSSD: point-based 3D single stage object detector. arXiv:2002.10187.
Yang et al. (2019) STD: sparse-to-dense 3D object detector for point cloud. ICCV, pp. 1951-1960.
Zhou and Tuzel (2018) VoxelNet: end-to-end learning for point cloud based 3D object detection. CVPR, pp. 4490-4499.