End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds

10/15/2019 ∙ by Yin Zhou, et al.

Recent work on 3D object detection advocates point cloud voxelization in birds-eye view, where objects preserve their physical dimensions and are naturally separable. When represented in this view, however, point clouds are sparse and have highly variable point density, which may cause difficulty for detectors in recognizing distant or small objects (pedestrians, traffic signs, etc.). On the other hand, the perspective view provides dense observations, which could allow more favorable feature encoding for such cases. In this paper, we aim to synergize the birds-eye view and the perspective view and propose a novel end-to-end multi-view fusion (MVF) algorithm, which can effectively learn to utilize the complementary information from both. Specifically, we introduce dynamic voxelization, which has four merits compared to existing voxelization methods: i) it removes the need to pre-allocate a fixed-size tensor; ii) it overcomes the information loss due to stochastic point/voxel dropout; iii) it yields deterministic voxel embeddings and more stable detection outcomes; iv) it establishes a bi-directional relationship between points and voxels, which potentially lays a natural foundation for cross-view feature fusion. By employing dynamic voxelization, the proposed feature fusion architecture enables each point to learn to fuse context information from different views. MVF operates on points and can be naturally extended to other approaches using LiDAR point clouds. We evaluate our MVF model extensively on the newly released Waymo Open Dataset and on the KITTI dataset and demonstrate that it significantly improves detection accuracy over the comparable single-view PointPillars baseline.


1 Introduction

Understanding the 3D environment from LiDAR sensors is one of the core capabilities required for autonomous driving. Most techniques employ some form of voxelization, either via custom discretization of the 3D point cloud (e.g., PIXOR [32]) or via learned voxel embeddings (e.g., VoxelNet [33], PointPillars [10]). The latter typically involves pooling information across points from the same voxel, then enriching each point with context information about its neighbors. These voxelized features are then projected to a birds-eye view (BEV) representation that is compatible with standard 2D convolutions. One benefit of operating in the BEV space is that it preserves the metric space, i.e., object sizes remain constant with respect to distance from the sensor. This allows models to leverage prior information about the size of objects during training. On the other hand, as the point cloud becomes sparser or as measurements get farther away from the sensor, the number of points available for each voxel embedding becomes more limited.

Recently, there has been a lot of progress on utilizing the perspective range-image, a more native representation of the raw LiDAR data (e.g., LaserNet [18]). This representation has been shown to perform well at longer ranges where the point cloud becomes very sparse, and especially on small objects. By operating on the “dense” range-image, this representation can also be very computationally efficient. Due to the perspective nature, however, object shapes are not distance-invariant and objects may overlap heavily with each other in a cluttered scene.

Many of these approaches utilize a single representation of the LiDAR point cloud, typically either BEV or range-image. As each view has its own advantages, a natural question is how to combine multiple LiDAR representations into the same model. Several approaches have looked at combining BEV laser data with perspective RGB images, either at the ROI pooling stage (MV3D [1], AVOD [9]) or at a per-point level (MVX-Net [27]). Distinct from the idea of combining data from two different sensors, we focus on how fusing different views of the same sensor can provide a model with richer information than a single view by itself.

In this paper, we make two major contributions. First, we propose a novel end-to-end multi-view fusion (MVF) algorithm that can leverage the complementary information between BEV and perspective views of the same LiDAR point cloud. Motivated by the strong performance of models that learn to generate per-point embeddings, we designed our fusion algorithm to operate at an early stage, where the network still preserves the point-level representation (e.g., before the final pooling layer in VoxelNet [33]). Each individual 3D point now becomes the conduit for sharing information across views, a key idea that forms the basis for multi-view fusion. Furthermore, the type of embedding can be tailored for each view. For the BEV encoding, we use vertical column voxelization (i.e., PointPillars [10]) that has been shown to provide a very strong baseline in terms of both accuracy and latency. For the perspective embedding, we use a standard 2D convolutional tower on the “range-image-like” feature map that can aggregate information across a large receptive field, helping to alleviate the point sparsity issue. Each point is now infused with context information about its neighbors from both BEV and perspective view. These point-level embeddings are pooled one last time to generate the final voxel-level embeddings. Since MVF enhances feature learning at the point level, our approach can be conveniently incorporated into other LiDAR-based detectors [33, 10, 25].

Our second main contribution is the concept of dynamic voxelization (DV), which offers four main benefits over traditional hard voxelization (HV) [33, 10]:


  • DV eliminates the need to sample a predefined number of points per voxel. This means that every point can be used by the model, minimizing information loss.

  • It eliminates the need to pad voxels to a predefined size, even when they have significantly fewer points. This can greatly reduce the extra space and compute overhead from HV, especially at longer ranges where the point cloud becomes very sparse. For example, previous models like VoxelNet and PointPillars allocate 100 or more points per voxel (or per equivalent 3D volume).

  • DV overcomes stochastic dropout of points/voxels and yields deterministic voxel embeddings, which leads to more stable detection outcomes.

  • It serves as a natural foundation for fusing point-level context information from multiple views.

MVF and dynamic voxelization allow us to significantly improve detection accuracy on the recently released Waymo Open Dataset and on the KITTI dataset.

2 Related Work

2D Object Detection. Starting from the R-CNN detector [5] proposed by Girshick et al., researchers have developed many modern detector architectures based on Convolutional Neural Networks (CNNs). Among them, there are two representative branches: two-stage detectors [24, 4] and single-stage detectors [22, 17, 23]. The seminal Faster R-CNN paper [24] proposes a two-stage detection system, consisting of a Region Proposal Network (RPN) that produces candidate object proposals and a second-stage network that processes these proposals to predict object classes and regress bounding boxes. On the single-stage front, SSD by Liu et al. [17] simultaneously classifies which anchor boxes among a dense set contain objects of interest and regresses their dimensions. Single-stage detectors are usually more efficient than two-stage detectors in terms of inference time, but they achieve slightly lower accuracy on public benchmarks such as MSCOCO [16], especially on smaller objects. Recently, Lin et al. demonstrated that using the focal loss function [15] in a single-stage detector can lead to performance superior to two-stage methods, in terms of both accuracy and inference time.

3D Object Detection in Point Clouds. A popular paradigm for processing a LiDAR point cloud is to project it into birds-eye view (BEV) and transform it into a multi-channel 2D pseudo-image, which can then be processed by a 2D CNN architecture for both 2D and 3D object detection. The transformation process is usually hand-crafted; representative works include Vote3D [28], Vote3Deep [2], 3DFCN [11], AVOD [8], PIXOR [32] and Complex-YOLO [26]. VoxelNet by Zhou et al. [33] divides the point cloud into a 3D voxel grid and uses a PointNet-like network [20] to learn an embedding of the points inside each voxel. PointPillars [10] builds on the idea of VoxelNet to encode the point features in pillars (i.e., vertical columns). Shi et al. [25] propose PointRCNN, a two-stage pipeline in which the first stage produces 3D bounding box proposals and the second stage refines the canonical 3D boxes. The perspective view is another widely used representation for LiDAR; representative works along this line include VeloFCN [12] and LaserNet [18].

Multi-Modal Fusion. Beyond using only LiDAR, MV3D [1] combines CNN features extracted from multiple views (front view, birds-eye view as well as camera view) to improve 3D object detection accuracy. A separate line of work, such as Frustum PointNet [19] and PointFusion [29], first generates 2D object proposals from the RGB image using a standard image detector and extrudes each 2D detection box to a 3D frustum, which is then processed by a PointNet-like network [20, 21] to predict the corresponding 3D bounding box. ContFuse [13] combines a discrete BEV feature map with image information by interpolating RGB features based on 3D point neighborhoods. HDNET [31] encodes elevation map information together with the BEV feature map. MMF [14] fuses the BEV feature map, elevation map and RGB image via multi-task learning to improve detection accuracy. Our work introduces a method for point-wise feature fusion that operates at the point level rather than the voxel or ROI level, which allows it to better preserve the original 3D structure of the LiDAR data before the points have been aggregated via ROI or voxel-level pooling.

3 Multi-View Fusion

Our Multi-View Fusion (MVF) algorithm consists of two novel components: dynamic voxelization and feature fusion network architecture. We introduce each in the following subsections.

3.1 Voxelization and Feature Encoding

Figure 1: Illustration of the differences between hard voxelization and dynamic voxelization. The space is divided into four voxels, indexed as $v_1$, $v_2$, $v_3$, $v_4$, which contain 6, 4, 2 and 1 points, respectively. Hard voxelization drops a point from an over-capacity voxel and misses one voxel entirely while leaving part of its fixed $K \times T$ buffer unused, whereas dynamic voxelization captures all four voxels with the optimal memory usage of 13 point entries.

Voxelization divides a point cloud into an evenly spaced grid of voxels, then generates a many-to-one mapping between 3D points and their respective voxels. VoxelNet [33] formulates voxelization as a two-stage process: grouping and sampling. Given a point cloud $P = \{p_1, \ldots, p_N\}$, the process assigns the points to a buffer of size $K \times T \times F$, where $K$ is the maximum number of voxels, $T$ is the maximum number of points in a voxel and $F$ represents the feature dimension. In the grouping stage, points are assigned to voxels based on their spatial coordinates. Since a voxel may be assigned more points than its fixed point capacity $T$ allows, the sampling stage sub-samples a fixed number of points from each voxel. Similarly, if the point cloud produces more voxels than the fixed voxel capacity $K$, the voxels are sub-sampled. On the other hand, when there are fewer points (voxels) than the fixed capacity $T$ ($K$), the unused entries in the buffer are zero-padded. We call this process hard voxelization [33].

Define $F_V(p_i)$ as the mapping that assigns each point $p_i$ to the voxel $v_j$ in which it resides and define $F_P(v_j)$ as the mapping that gathers the points within a voxel $v_j$. Formally, hard voxelization can be summarized as

$F_V(p_i) = \begin{cases} \emptyset & \text{if } p_i \text{ or its voxel } v_j \text{ is dropped} \\ v_j & \text{otherwise} \end{cases}$    (1)
$F_P(v_j) = \begin{cases} \emptyset & \text{if } v_j \text{ is dropped} \\ \{p_i \mid p_i \in v_j,\ p_i \text{ sampled, up to } T\} & \text{otherwise} \end{cases}$    (2)

Hard voxelization (HV) has three intrinsic limitations: (1) As points and voxels are dropped when they exceed the buffer capacity, HV forces the model to throw away information that may be useful for detection; (2) This stochastic dropout of points and voxels may also lead to non-deterministic voxel embeddings, and consequently unstable or jittery detection outcomes; (3) Voxels that are padded cost unnecessary computation, which hinders the run-time performance.
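For concreteness, the following is a minimal NumPy sketch of hard voxelization as formalized in Eqs. (1)-(2). It is an illustrative re-implementation rather than the authors' code; the grid parameters, the voxel hashing scheme and the helper name hard_voxelize are our own choices. The point is to show how a fixed $K \times T \times F$ buffer forces both zero-padding and random point/voxel dropout.

```python
import numpy as np

def hard_voxelize(points, voxel_size, grid_origin, K, T, F=3, seed=0):
    """Sketch of hard voxelization: group points into BEV pillars, then cram them
    into a fixed K x T x F buffer, randomly dropping surplus points/voxels and
    zero-padding the rest (hypothetical helper, not the paper's implementation)."""
    rng = np.random.default_rng(seed)
    # Grouping: integer pillar coordinates along x and y.
    coords = np.floor((points[:, :2] - grid_origin) / voxel_size).astype(np.int64)
    keys = coords[:, 0] * 100000 + coords[:, 1]            # hash of the 2D pillar index
    voxel_ids, point_to_voxel = np.unique(keys, return_inverse=True)

    # Sampling: keep at most K voxels and at most T points per voxel.
    kept_voxels = rng.permutation(len(voxel_ids))[:K]
    buffer = np.zeros((K, T, F), dtype=np.float32)          # pre-allocated, zero-padded
    for slot, v in enumerate(kept_voxels):
        idx = np.flatnonzero(point_to_voxel == v)
        idx = rng.permutation(idx)[:T]                       # stochastic point dropout
        buffer[slot, :len(idx)] = points[idx, :F]
    return buffer, voxel_ids[kept_voxels]

# Toy example: 10 points in a 4 m x 4 m area, 2 m pillars, K=3, T=5.
pts = np.random.default_rng(1).uniform(0, 4, size=(10, 3)).astype(np.float32)
buf, kept = hard_voxelize(pts, voxel_size=2.0, grid_origin=np.zeros(2), K=3, T=5)
print(buf.shape, kept)   # (3, 5, 3) -- some points/voxels may have been dropped
```

Running this on a toy point cloud makes the information loss visible: whichever voxel loses the random draw simply never appears in the buffer, while sparsely populated voxels carry mostly zero padding.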

Figure 2: Multi-View Fusion (MVF) Network Architecture. Given a raw LiDAR point cloud as input, the proposed MVF first embeds each point into a high dimensional feature space via one fully connected (FC) layer, which is shared across views. Then, it applies dynamic voxelization in the birds-eye view and the perspective view respectively and establishes the bi-directional mappings between points and voxels in each view ($F_V^{\text{cart}}$, $F_P^{\text{cart}}$ and $F_V^{\text{sphe}}$, $F_P^{\text{sphe}}$). Next, in each view, it employs one additional FC layer to learn view-dependent features, and by referencing $F_P$ it aggregates voxel information via max pooling. Over the voxel-wise feature map, it uses a convolution tower to further process context information within an enlarged receptive field, while still maintaining the same spatial resolution. Finally, based on $F_V$, it fuses features from three different sources for each point, i.e., the corresponding voxel features from the birds-eye view and the perspective view as well as the corresponding point feature obtained via the shared FC.

We introduce dynamic voxelization (DV) to overcome these drawbacks. DV keeps the grouping stage the same, however, instead of sampling the points into a fixed number of fixed-capacity voxels, it preserves the complete mapping between points and voxels. As a result, the number of voxels and the number of points per voxel are both dynamic, depending on the specific mapping function. This removes the need for a fixed size buffer and eliminates stochastic point and voxel dropout. The point-voxel relationships can be formalized as

$F_V(p_i) = v_j, \quad \forall i$    (3)
$F_P(v_j) = \{p_i \mid p_i \in v_j\}, \quad \forall j$    (4)

Since all the raw point and voxel information is preserved, dynamic voxelization does not introduce any information loss and yields deterministic voxel embeddings, leading to more stable detection results. In addition, $F_V$ and $F_P$ establish bi-directional relationships between every pair of $p_i$ and $v_j$, which lays a natural foundation for fusing point-level context features from different views, as will be discussed shortly.

Figure 1 illustrates the key differences between hard voxelization and dynamic voxelization. In this example, hard voxelization uses a buffer whose voxel capacity $K$ and per-voxel point capacity $T$ are chosen as a balanced trade-off between point/voxel coverage and memory/compute usage. This still leaves nearly half of the buffer empty. Moreover, it leads to point dropout in one voxel and a complete miss of another voxel, as a result of the random sampling. To have full coverage of the four voxels, hard voxelization requires a buffer of at least $K = 4$ and $T = 6$, i.e., 24 entries. Clearly, for real-world LiDAR scans with highly variable point density, achieving a good balance between point/voxel coverage and efficient memory usage is a challenge for hard voxelization. On the other hand, dynamic voxelization dynamically and efficiently allocates resources to manage all points and voxels. In our example, it ensures full coverage of the space with the minimum memory usage of $6 + 4 + 2 + 1 = 13$ point entries. Upon completing voxelization, the LiDAR points can be transformed into a high dimensional space via the feature encoding techniques reported in [20, 33, 10].
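The sketch below, again a NumPy illustration rather than the reference implementation, shows the corresponding dynamic voxelization: the point-to-voxel map $F_V$ is simply the inverse index returned by a unique operation, $F_P$ is realized by a scatter-style max pooling, and no capacity constants or padding are needed. The function names are hypothetical.

```python
import numpy as np

def dynamic_voxelize(points, voxel_size, grid_origin):
    """Sketch of dynamic voxelization: every point keeps its voxel assignment
    F_V(p_i) and every non-empty voxel keeps all of its points, with no fixed buffer."""
    coords = np.floor((points[:, :2] - grid_origin) / voxel_size).astype(np.int64)
    # Unique voxel coordinates plus the point->voxel map F_V (inverse indices).
    voxel_coords, point_to_voxel = np.unique(coords, axis=0, return_inverse=True)
    return voxel_coords, point_to_voxel.reshape(-1)

def max_pool_per_voxel(point_feats, point_to_voxel, num_voxels):
    """Voxel embedding via F_P: max-pool the features of the points in each voxel."""
    pooled = np.full((num_voxels, point_feats.shape[1]), -np.inf, dtype=point_feats.dtype)
    np.maximum.at(pooled, point_to_voxel, point_feats)      # scatter-max
    return pooled

pts = np.random.default_rng(2).uniform(0, 4, size=(10, 3)).astype(np.float32)
voxels, p2v = dynamic_voxelize(pts, voxel_size=2.0, grid_origin=np.zeros(2))
pooled = max_pool_per_voxel(pts, p2v, len(voxels))
# Every point is kept, every non-empty voxel is represented, and the
# point->voxel map p2v lets us gather voxel context back onto each point:
point_context = pooled[p2v]                                 # shape (10, 3)
print(voxels.shape, pooled.shape, point_context.shape)
```

The gather step in the last line is exactly the bi-directional relationship the text refers to: pooled voxel features can be scattered back to the points that produced them, which is what enables per-point cross-view fusion later.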

3.2 Feature Fusion

Figure 3: Convolution tower for encoding context information.

Multi-View Representations.

Our aim is to effectively fuse information from different views of the same LiDAR point cloud. We consider two views: the birds-eye view and the perspective view. The birds-eye view is defined in the Cartesian coordinate system, in which objects preserve their canonical 3D shape information and are naturally separable. The majority of current 3D object detectors with hard voxelization [33, 10] operate in this view. However, it has the downside that the point cloud becomes highly sparse at longer ranges. On the other hand, the perspective view can represent the LiDAR range image densely and corresponds to a tiling of the scene in the Spherical coordinate system. The shortcoming of the perspective view is that object shapes are not distance-invariant and objects can overlap heavily with each other in a cluttered scene. Therefore, it is desirable to utilize the complementary information from both views.

So far, we have considered each voxel as a cuboid-shaped volume in the birds-eye view. Here, we extend the conventional voxel to a more generic notion that also includes a 3D frustum in the perspective view. Given a point $p = (x, y, z)$ defined in the Cartesian coordinate system, its Spherical coordinate representation $(\varphi, \theta, \rho)$ is computed as

$\varphi = \operatorname{atan2}(y, x), \quad \theta = \arccos\!\left(\frac{z}{\rho}\right), \quad \rho = \sqrt{x^2 + y^2 + z^2}$    (5)

For a LiDAR point cloud, applying dynamic voxelization in both the birds-eye view and the perspective view exposes each point to two different local neighborhoods, a Cartesian voxel and a Spherical frustum, thus allowing each point to leverage the complementary context information. The established point/voxel mappings are ($F_V^{\text{cart}}$, $F_P^{\text{cart}}$) and ($F_V^{\text{sphe}}$, $F_P^{\text{sphe}}$) for the birds-eye view and the perspective view, respectively.
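As an illustration of the perspective-view grouping, the sketch below converts points to the Spherical coordinates of Eq. (5) and bins them into frustums by discretizing the two angular coordinates. The angular bin sizes (0.5 degrees) and the helper names are assumptions made purely for the example.

```python
import numpy as np

def spherical_coords(points_xyz):
    """Cartesian -> Spherical conversion of Eq. (5): azimuth phi, inclination theta,
    radial distance rho."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    rho = np.sqrt(x**2 + y**2 + z**2)
    phi = np.arctan2(y, x)                                   # azimuth in [-pi, pi]
    theta = np.arccos(np.clip(z / np.maximum(rho, 1e-9), -1.0, 1.0))  # inclination
    return np.stack([phi, theta, rho], axis=1)

def frustum_indices(points_xyz, phi_res=np.deg2rad(0.5), theta_res=np.deg2rad(0.5)):
    """Assign each point to a perspective-view 'voxel' (a 3D frustum) by
    discretizing the angular coordinates; the bin sizes here are assumptions."""
    sph = spherical_coords(points_xyz)
    bins = np.floor(sph[:, :2] / np.array([phi_res, theta_res])).astype(np.int64)
    frustums, point_to_frustum = np.unique(bins, axis=0, return_inverse=True)
    return frustums, point_to_frustum.reshape(-1)

pts = np.random.default_rng(3).uniform(-20, 20, size=(8, 3)).astype(np.float32)
frustums, p2f = frustum_indices(pts)
print(len(frustums), p2f)   # each point now also has a Spherical-view neighborhood
```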

Network Architecture

As illustrated in Fig. 2, the proposed MVF model takes the raw LiDAR point cloud as input. First, we compute point embeddings. For each point, we compute its local 3D coordinates within the voxel or frustum it belongs to. The local coordinates from the two views and the point intensity are concatenated before being embedded into a 128D feature space via one fully connected (FC) layer. The FC layer is composed of a linear layer, a batch normalization (BN) layer and a rectified linear unit (ReLU) layer. Then, we apply dynamic voxelization in both the birds-eye view and the perspective view and establish the bi-directional mappings ($F_V$ and $F_P$ in each view) between points and voxels. Next, in each view, we employ one additional FC layer to learn view-dependent features with 64 dimensions, and by referencing $F_P$ we aggregate voxel-level information from the points within each voxel via max pooling. Over this voxel-level feature map, we use a convolution tower to further process context information, in which the input and output feature dimensions are both 64. Finally, using the point-to-voxel mapping $F_V$, we fuse features from three different information sources for each point: 1) the point’s corresponding Cartesian voxel from the birds-eye view, 2) the point’s corresponding Spherical voxel from the perspective view, and 3) the point-wise features from the shared FC layer. The point-wise feature can optionally be transformed to a lower dimension to reduce computational cost.
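The following sketch traces the per-point data flow just described with toy NumPy tensors: a shared point embedding, per-view max pooling through $F_P$, and a gather-and-concatenate step through $F_V$. The layer widths, random weights and the hand-written point-to-voxel indices are placeholders, and batch normalization and the convolution tower are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)

def fc(x, w, b):
    """Placeholder fully connected layer (linear + ReLU); BN omitted for brevity."""
    return np.maximum(x @ w + b, 0.0)

def scatter_max(feats, index, num_groups):
    out = np.full((num_groups, feats.shape[1]), -np.inf, dtype=feats.dtype)
    np.maximum.at(out, index, feats)
    return out

# Toy inputs: N points with (x, y, z, intensity); p2v / p2f would come from the
# dynamic voxelization and frustum sketches above (hand-written indices here).
N = 6
points = rng.uniform(0, 4, size=(N, 4)).astype(np.float32)
p2v = np.array([0, 0, 1, 1, 2, 2])     # birds-eye-view voxel per point (F_V^cart)
p2f = np.array([0, 1, 1, 2, 2, 0])     # perspective-view frustum per point (F_V^sphe)

# 1) Shared point embedding (128-D in the paper; 8-D here to keep it small).
w0 = rng.normal(size=(4, 8)).astype(np.float32)
b0 = np.zeros(8, np.float32)
shared = fc(points, w0, b0)

# 2) View-dependent features, max-pooled into voxels/frustums via F_P.
w_bev = rng.normal(size=(8, 8)).astype(np.float32)
w_per = rng.normal(size=(8, 8)).astype(np.float32)
bev_vox = scatter_max(fc(shared, w_bev, b0), p2v, p2v.max() + 1)
per_vox = scatter_max(fc(shared, w_per, b0), p2f, p2f.max() + 1)
# (In the full model, the convolution tower would run over these voxel maps here.)

# 3) Gather voxel context back to each point via F_V and fuse the three sources.
fused = np.concatenate([shared, bev_vox[p2v], per_vox[p2f]], axis=1)
print(fused.shape)   # (6, 24): each point now carries context from both views
```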

The architecture of the convolution tower is shown in Figure 3. We apply two ResNet layers [6], each built from 2D convolutions with stride 2, to gradually downsample the input voxel feature map into tensors at 1/2 and 1/4 of the original spatial resolution. Then, we upsample these tensors and concatenate them to construct a feature map with the same spatial resolution as the input. Finally, this tensor is transformed to the desired feature dimension. Note that keeping the spatial resolution of the input and output feature maps consistent effectively ensures that the point/voxel correspondences remain unchanged.
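A shape-level sketch of this tower is given below. Average pooling and nearest-neighbor repetition stand in for the learned stride-2 ResNet blocks and the upsampling layers (an assumption for illustration only); the point is to show that the output keeps the input's spatial resolution, so the point/voxel correspondences stay valid.

```python
import numpy as np

def downsample2x(x):
    """Stand-in for a stride-2 block: 2x2 average pooling over a (C, H, W) tensor."""
    c, h, w = x.shape
    return x[:, :h - h % 2, :w - w % 2].reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample_to(x, h, w):
    """Stand-in for learned upsampling: nearest-neighbor repeat to (h, w)."""
    c, ch, cw = x.shape
    return np.repeat(np.repeat(x, h // ch, axis=1), w // cw, axis=2)

def conv_tower(voxel_map):
    """Shape-level sketch: downsample to 1/2 and 1/4 resolution, upsample both back,
    concatenate, and keep the input spatial resolution."""
    c, h, w = voxel_map.shape
    half = downsample2x(voxel_map)
    quarter = downsample2x(half)
    out = np.concatenate([upsample_to(half, h, w),
                          upsample_to(quarter, h, w)], axis=0)
    return out        # a final transform would map this to the desired channel count

x = np.random.default_rng(5).normal(size=(64, 16, 16)).astype(np.float32)
y = conv_tower(x)
print(x.shape[1:], y.shape[1:])   # spatial size unchanged -> point/voxel map still valid
```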

3.3 Loss Function

We use the same loss functions as in SECOND [30] and PointPillars [10]. We parametrize ground truth and anchor boxes as $(x^g, y^g, z^g, l^g, w^g, h^g, \theta^g)$ and $(x^a, y^a, z^a, l^a, w^a, h^a, \theta^a)$, respectively. The regression residuals between ground truth and anchors are defined as:

$\Delta x = \frac{x^g - x^a}{d^a}, \quad \Delta y = \frac{y^g - y^a}{d^a}, \quad \Delta z = \frac{z^g - z^a}{h^a}$    (6)
$\Delta l = \log\frac{l^g}{l^a}, \quad \Delta w = \log\frac{w^g}{w^a}, \quad \Delta h = \log\frac{h^g}{h^a}$    (7)
$\Delta \theta = \theta^g - \theta^a$    (8)

where $d^a = \sqrt{(l^a)^2 + (w^a)^2}$ is the diagonal of the base of the anchor box [33]. The overall regression loss is:

$\mathcal{L}_{reg} = \mathrm{SmoothL1}\big(\sin(\widetilde{\Delta \theta} - \Delta \theta)\big) + \sum_{u \in \{x, y, z, l, w, h\}} \mathrm{SmoothL1}\big(\widetilde{\Delta u} - \Delta u\big)$    (9)

where $\widetilde{\Delta u}$ denotes the predicted residuals. For anchor classification, we use the focal loss [15]:

$\mathcal{L}_{cls} = -\alpha (1 - p)^{\gamma} \log p$    (10)

where $p$ denotes the predicted probability of a positive anchor. We adopt the recommended configuration from [15] and set $\alpha = 0.25$ and $\gamma = 2$.
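The sketch below writes out the residual encoding of Eqs. (6)-(8), a SECOND/PointPillars-style regression loss in the spirit of Eq. (9) (with the sine-encoded angle term used by those references), and the focal loss of Eq. (10). The toy boxes and the single-anchor (rather than batched) formulation are simplifications for illustration.

```python
import numpy as np

def encode_residuals(gt, anchor):
    """Residual targets of Eqs. (6)-(8) for boxes (x, y, z, l, w, h, theta)."""
    xa, ya, za, la, wa, ha, ta = anchor
    xg, yg, zg, lg, wg, hg, tg = gt
    da = np.sqrt(la**2 + wa**2)                   # diagonal of the anchor's base
    return np.array([(xg - xa) / da, (yg - ya) / da, (zg - za) / ha,
                     np.log(lg / la), np.log(wg / wa), np.log(hg / ha),
                     tg - ta])

def smooth_l1(x):
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x**2, ax - 0.5)

def regression_loss(target, pred):
    """Eq. (9) in the style of SECOND [30]: sine-encoded angle difference plus
    smooth-L1 on the remaining residuals, for a single positive anchor."""
    loc = smooth_l1(pred[:6] - target[:6]).sum()
    angle = smooth_l1(np.sin(pred[6] - target[6]))
    return loc + angle

def focal_loss(p, alpha=0.25, gamma=2.0):
    """Eq. (10): focal loss for a positive anchor with predicted probability p."""
    return -alpha * (1.0 - p)**gamma * np.log(p)

gt = np.array([10.0, 5.0, 1.0, 4.5, 2.0, 1.6, 0.3])
anchor = np.array([9.5, 5.5, 0.9, 4.7, 2.1, 1.7, 0.0])
target = encode_residuals(gt, anchor)
pred = target + 0.05                              # a slightly-off prediction
print(regression_loss(target, pred), focal_loss(0.8))
```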

During training, we use the Adam optimizer [7] and apply cosine decay to the learning rate, with a linear ramp-up of the learning rate during the first epoch. The training finishes after 100 epochs.
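A minimal schedule implementing cosine decay with a one-epoch ramp-up could look as follows; the base learning rate value and the linear shape of the warm-up are assumptions for illustration, since only the schedule type is stated above.

```python
import math

def learning_rate(step, steps_per_epoch, total_epochs, base_lr=3e-4):
    """Cosine decay with a one-epoch linear warm-up; base_lr is an assumed
    illustrative value, not the paper's setting."""
    total_steps = steps_per_epoch * total_epochs
    if step < steps_per_epoch:                       # linear ramp-up in epoch 1
        return base_lr * (step + 1) / steps_per_epoch
    progress = (step - steps_per_epoch) / max(total_steps - steps_per_epoch, 1)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

# Example: peak at the end of epoch 1, decayed to ~0 by epoch 100.
print([round(learning_rate(s, 1000, 100), 6) for s in (0, 999, 50_000, 99_999)])
```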

4 Experimental Results

To investigate the effectiveness of the proposed MVF algorithm, we have reproduced a recently published top-performing algorithm, PointPillars [10], as our baseline. PointPillars is a LiDAR-based single-view 3D detector using hard voxelization, which we denote as HV+SV in the results. In fact, PointPillars can be conveniently summarized as three functional modules: voxelization in the birds-eye view, point feature encoding and a CNN backbone. To more directly examine the importance of dynamic voxelization, we implement a variant of PointPillars by using dynamic instead of hard voxelization, which we denote DV+SV. Finally, our MVF method features both the proposed dynamic voxelization and multi-view feature fusion network. For a fair comparison, we keep the original PointPillars network backbone for all three algorithms: we learn a 64D point feature embedding for HV+SV and DV+SV and reduce the output dimension of MVF to 64D, as well.

4.1 Evaluation on the Waymo Open Dataset

Dataset. We have tested our method on the Waymo Open Dataset, a large-scale dataset recently released for benchmarking object detection algorithms at industrial production scale.

Figure 4: Visual comparison between DV+SV and MVF on the Waymo Open Dataset. Color scheme: ground truth in yellow, DV+SV in blue, MVF in red. Missing detections by DV+SV are highlighted with green dashed circles. Best viewed in color.

The dataset provides information collected from a set of sensors on an autonomous vehicle, including multiple LiDARs and cameras. It was captured in multiple major cities in the U.S., under a variety of weather conditions and at different times of the day. The dataset provides a total of 1000 sequences. Specifically, the training split consists of 798 sequences of 20s duration each, sampled at 10Hz, containing 4.81M vehicle and 2.22M pedestrian boxes. The validation split consists of 202 sequences with the same duration and sampling frequency, containing 1.25M vehicle and 539K pedestrian boxes. The effective annotation radius is 75m for all object classes. For our experiments, we evaluate both 3D and BEV object detection metrics for vehicles and pedestrians.

Compared to the widely used KITTI dataset [3], the Waymo Open Dataset has several advantages: (1) it is more than 20 times larger than KITTI, which enables performance evaluation at a scale much closer to production; (2) it supports detection in the full 360-degree field of view (FOV), unlike the 90-degree forward FOV of KITTI; (3) its evaluation protocol considers realistic autonomous driving scenarios, including annotations within the full range and under all occlusion conditions, which makes the benchmark substantially more challenging.

Method  | BEV AP (IoU=0.7)                       | 3D AP (IoU=0.7)
        | Overall   0-30m   30-50m   50m-Inf     | Overall   0-30m   30-50m   50m-Inf
HV+SV   | 75.57     92.1    74.06    55.47       | 56.62     81.01   51.75    27.94
DV+SV   | 77.18     93.04   76.07    57.67       | 59.29     84.9    56.08    31.07
MVF     | 80.40     93.59   79.21    63.09       | 62.93     86.30   60.02    36.02
Table 1: Comparison of methods for vehicle detection on the Waymo Open Dataset.
Method  | BEV AP (IoU=0.5)                       | 3D AP (IoU=0.5)
        | Overall   0-30m   30-50m   50m-Inf     | Overall   0-30m   30-50m   50m-Inf
HV+SV   | 68.57     75.02   67.11    53.86       | 59.25     67.99   57.01    41.29
DV+SV   | 70.25     77.01   68.96    54.15       | 60.83     69.76   58.43    42.06
MVF     | 74.38     80.01   72.98    62.51       | 65.33     72.51   63.35    50.62
Table 2: Comparison of methods for pedestrian detection on the Waymo Open Dataset.

Evaluation Metrics.

We evaluate models using the standard average precision (AP) metric for both 7-degree-of-freedom (DOF) 3D boxes and 5-DOF BEV boxes, with intersection-over-union (IoU) thresholds of 0.7 for vehicles and 0.5 for pedestrians, as recommended on the dataset's official website.

Experiments Setup. We use the same voxel size and the same detection range along the X and Y axes for both classes. For vehicles and pedestrians, we define class-specific anchor dimensions, each with 0° and 90° orientations, and class-specific detection ranges along the Z axis. Using the PointPillars network backbone for both vehicles and pedestrians results in the same feature map size. As discussed in Section 3, pre-defining a proper setting of $K$ and $T$ for HV+SV is critical and requires extensive experiments. Therefore, we conducted a hyper-parameter search to choose a satisfactory configuration for this method. The chosen $K$ and $T$ accommodate the panoramic detection setting, which includes 4X more voxels and requires a 2X bigger buffer compared to [10].

Results. The evaluation results on the vehicle and pedestrian categories are listed in Table 1 and Table 2, respectively. In addition to overall AP, we give a detailed performance breakdown for three ranges of interest: 0-30m, 30-50m and beyond 50m. We can see that DV+SV consistently matches or improves upon HV+SV on both vehicle and pedestrian detection across all ranges, which validates the effectiveness of dynamic voxelization. Fusing multi-view information further enhances the detection performance in all cases, especially for small objects, i.e., pedestrians. Finally, a closer look at the distance-based results indicates that as the detection range increases, the performance improvements from MVF become more pronounced. Figure 4 shows two examples for both vehicle and pedestrian detection where multi-view fusion generates more accurate detections for occluded objects at long range. The experimental results also verify our hypothesis that perspective-view voxelization captures information complementary to BEV, which is especially useful when objects are far away and sparsely sampled.

Latency. For vehicle detection, the proposed MVF, DV+SV and HV+SV run at 65.2ms, 41.1ms and 41.1ms per frame, respectively. For pedestrian detection, the per-frame latencies are 60.6ms, 34.7ms and 36.1ms for MVF, DV+SV and HV+SV, respectively.

4.2 Evaluation on the KITTI Dataset

KITTI [3] is a popular dataset for benchmarking 3D object detectors for autonomous driving. It contains 7481 training samples and 7518 samples held out for testing; each sample contains a camera image and its associated LiDAR point cloud, with annotated ground truth boxes. Similar to [1], we divide the official training LiDAR data into a training split of 3712 samples and a validation split of 3769 samples. On these derived splits, we evaluate and compare HV+SV, DV+SV and MVF on the 3D vehicle detection task using the official KITTI evaluation tool. Our methods are trained with the same settings and data augmentations as in [10].

As listed in Table 3, using a single view, dynamic voxelization yields clearly better detection accuracy than hard voxelization. With the help of multi-view information, MVF further improves the detection performance significantly. In addition, compared to other top-performing methods [1, 33, 9, 19, 30, 25], MVF yields competitive accuracy. MVF is a general method for enriching point-level feature representations and can be applied to enhance other LiDAR-based detectors, e.g., PointRCNN [25], which we plan to explore in future work.

Method          | AP (IoU=0.7)
                | Easy    Moderate   Hard
MV3D [1]        | 71.29   62.68      56.56
VoxelNet [33]   | 81.98   65.46      62.85
AVOD-FPN [9]    | 84.41   74.44      68.65
F-PointNet [19] | 83.76   70.92      63.65
SECOND [30]     | 87.43   76.48      69.10
PointRCNN [25]  | 88.88   78.63      77.38
HV+SV           | 85.9    74.7       70.5
DV+SV           | 88.77   77.86      73.53
MVF             | 90.23   79.12      76.43
Table 3: Comparison to state-of-the-art methods on the KITTI validation split for 3D car detection. The original PointPillars paper [10] did not report results on the validation split; HV+SV is based on our implementation.

5 Conclusion

We introduce MVF, a novel end-to-end multi-view fusion framework for 3D object detection from LiDAR point clouds. In contrast to existing 3D LiDAR detectors [10, 33], which use hard voxelization, we propose dynamic voxelization that preserves the complete raw point cloud, yields deterministic voxel features and serves as a natural foundation for fusing information across different views. We present a multi-view fusion architecture that can encode point features with more discriminative context information extracted from the different views. Experimental results on the Waymo Open Dataset and on the KITTI dataset demonstrate that our dynamic voxelization and multi-view fusion techniques significantly improve detection accuracy. Adding camera data and temporal information are exciting future directions, which should further improve our detection framework.

Acknowledgement

We would like to thank Alireza Fathi, Yuning Chai, Brandyn White, Scott Ettinger and Charles Ruizhongtai Qi for their insightful suggestions. We also thank Yiming Chen and Paul Tsui for their Waymo Open Dataset and infrastructure-related help.

References

  • [1] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia (2017) Multi-view 3d object detection network for autonomous driving. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6526–6534. Cited by: §1, §2, §4.2, Table 3.
  • [2] M. Engelcke, D. Rao, D. Z. Wang, C. H. Tong, and I. Posner (2017-05) Vote3Deep: fast object detection in 3d point clouds using efficient convolutional neural networks. In 2017 IEEE International Conference on Robotics and Automation (ICRA), Vol. , pp. 1355–1361. External Links: Document, ISSN Cited by: §2.
  • [3] A. Geiger, P. Lenz, and R. Urtasun (2012-06) Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, Vol. , pp. 3354–3361. External Links: Document, ISSN 1063-6919 Cited by: §4.1, §4.2.
  • [4] R. Girshick (2015-12) Fast r-cnn. In 2015 IEEE International Conference on Computer Vision (ICCV), Vol. , pp. 1440–1448. Cited by: §2.
  • [5] R. Girshick, J. Donahue, T. Darrell, and J. Malik (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 580–587. Cited by: §2.
  • [6] K. He, X. Zhang, S. Ren, and J. Sun (2016-06) Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. , pp. 770–778. External Links: ISSN 1063-6919 Cited by: §3.2.
  • [7] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization.. CoRR. Cited by: §3.3.
  • [8] J. Ku, M. Mozifian, J. Lee, A. Harakeh, and S. L. Waslander (2018) Joint 3d proposal generation and object detection from view aggregation. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1–8. Cited by: §2.
  • [9] J. Ku, M. Mozifian, J. Lee, A. Harakeh, and S. Waslander (2018) Joint 3d proposal generation and object detection from view aggregation. IROS. Cited by: §1, §4.2, Table 3.
  • [10] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. CVPR. Cited by: §1, §1, §1, §2, §3.1, §3.2, §3.3, §4.1, §4.2, Table 3, §4, §5.
  • [11] B. Li (2017-Sep.) 3D fully convolutional network for vehicle detection in point cloud. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. , pp. 1513–1518. Cited by: §2.
  • [12] B. Li, T. Zhang, and T. Xia Vehicle detection from 3d lidar using fully convolutional network. In RSS 2016, Cited by: §2.
  • [13] M. Liang, B. Yang, S. Wang, and R. Urtasun (2018) Deep continuous fusion for multi-sensor 3d object detection. In ECCV, Cited by: §2.
  • [14] M. Liang*, B. Yang*, Y. Chen, R. Hu, and R. Urtasun (2019) Multi-task multi-sensor fusion for 3d object detection. In CVPR, Cited by: §2.
  • [15] T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar (2018) Focal loss for dense object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (), pp. 1–1. External Links: Document, ISSN 0162-8828 Cited by: §2, §3.3.
  • [16] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §2.
  • [17] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) SSD: single shot multibox detector. In Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling (Eds.), Cham, pp. 21–37. Cited by: §2.
  • [18] G. P. Meyer, A. Laddha, E. Kee, C. Vallespi-Gonzalez, and C. K. Wellington (2019) LaserNet: an efficient probabilistic 3D object detector for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.
  • [19] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas (2017) Frustum pointnets for 3d object detection from rgb-d data. arXiv preprint arXiv:1711.08488. Cited by: §2, §4.2, Table 3.
  • [20] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) PointNet: deep learning on point sets for 3d classification and segmentation. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 77–85. Cited by: §2, §2, §3.1.
  • [21] C. R. Qi, L. Yi, H. Su, and L. J. Guibas (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. pp. 5099–5108. Cited by: §2.
  • [22] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788. Cited by: §2.
  • [23] J. Redmon and A. Farhadi (2017) YOLO9000: better, faster, stronger. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517–6525. Cited by: §2.
  • [24] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.), pp. 91–99. Cited by: §2.
  • [25] S. Shi, X. Wang, and H. Li (2019) Pointrcnn: 3d object proposal generation and detection from point cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–779. Cited by: §1, §2, §4.2, Table 3.
  • [26] M. Simony, S. Milzy, K. Amendey, and H. Gross (2018) Complex-yolo: an euler-region-proposal for real-time 3d object detection on point clouds. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 0–0. Cited by: §2.
  • [27] V. A. Sindagi, Y. Zhou, and O. Tuzel (2019) MVX-net: multimodal voxelnet for 3d object detection. CoRR abs/1904.01649. Cited by: §1.
  • [28] D. Z. Wang and I. Posner (2015-07) Voting for voting in online point cloud object detection. In Proceedings of Robotics: Science and Systems, Rome, Italy. Cited by: §2.
  • [29] D. Xu, D. Anguelov, and A. Jain (2018-06) PointFusion: deep sensor fusion for 3d bounding box estimation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 244–253. External Links: ISSN 2575-7075 Cited by: §2.
  • [30] Y. Yan, Y. Mao, and B. Li (2018) SECOND: sparsely embedded convolutional detection. Sensors 18 (10), pp. 3337. Cited by: §3.3, §4.2, Table 3.
  • [31] B. Yang, M. Liang, and R. Urtasun (2018) HDNET: exploiting hd maps for 3d object detection. In 2nd Conference on Robot Learning (CoRL), Cited by: §2.
  • [32] B. Yang, W. Luo, and R. Urtasun (2018) Pixor: real-time 3d object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7652–7660. Cited by: §1, §2.
  • [33] Y. Zhou and O. Tuzel (2018-06) VoxelNet: end-to-end learning for point cloud based 3d object detection. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vol. , pp. 4490–4499. External Links: ISSN 2575-7075 Cited by: §1, §1, §1, §2, §3.1, §3.1, §3.2, §3.3, §4.2, Table 3, §5.