3D-CVF: Generating Joint Camera and LiDAR Features Using Cross-View Spatial Feature Fusion for 3D Object Detection

04/27/2020
by Jin Hyeok Yoo, et al.

In this paper, we propose a new deep architecture for fusing camera and LiDAR sensors for 3D object detection. Because the camera and LiDAR sensor signals have different characteristics and distributions, fusing these two modalities is expected to improve both the accuracy and robustness of 3D object detection. One challenge in fusing cameras and LiDAR is that the spatial feature maps obtained from each modality are represented in significantly different views, namely the camera and world coordinates; hence, combining two such heterogeneous feature maps without loss of information is not an easy task. To address this problem, we propose a method called 3D-CVF, which combines the camera and LiDAR features using a cross-view spatial feature fusion strategy. First, the method employs auto-calibrated projection to transform the 2D camera features into a smooth spatial feature map with the highest correspondence to the LiDAR features in the bird's eye view (BEV) domain. Then, a gated feature fusion network uses spatial attention maps to mix the camera and LiDAR features appropriately region by region. Camera-LiDAR feature fusion is also applied in the subsequent proposal refinement stage, where the camera feature is retrieved from the 2D camera-view domain via 3D RoI grid pooling and fused with the BEV feature for proposal refinement. Our evaluations, conducted on the KITTI and nuScenes 3D object detection datasets, demonstrate that camera-LiDAR fusion offers a significant performance gain over a single modality and that the proposed 3D-CVF achieves state-of-the-art performance on the KITTI benchmark.
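The gated feature fusion idea described above can be sketched in a few lines: per-location attention gates are computed from the concatenated camera and LiDAR BEV features, and each modality is weighted by its own gate before summation. The following NumPy sketch is illustrative only; the function names, the 1x1-convolution-as-matrix formulation, and all shapes are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(cam_bev, lidar_bev, w_cam, w_lidar):
    """Attention-gated fusion of camera and LiDAR BEV feature maps (sketch).

    cam_bev, lidar_bev: (C, H, W) feature maps aligned in a shared BEV grid.
    w_cam, w_lidar: (1, 2C) weights of a 1x1 conv producing per-pixel gates.
    """
    joint = np.concatenate([cam_bev, lidar_bev], axis=0)   # (2C, H, W)
    c2, h, w = joint.shape
    flat = joint.reshape(c2, h * w)                        # (2C, H*W)
    # Spatial attention maps in [0, 1], one scalar gate per BEV location.
    gate_cam = sigmoid(w_cam @ flat).reshape(1, h, w)
    gate_lidar = sigmoid(w_lidar @ flat).reshape(1, h, w)
    # Each modality contributes according to its learned gate at each location.
    return gate_cam * cam_bev + gate_lidar * lidar_bev

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
cam = rng.standard_normal((C, H, W))
lidar = rng.standard_normal((C, H, W))
w_c = rng.standard_normal((1, 2 * C)) * 0.1
w_l = rng.standard_normal((1, 2 * C)) * 0.1
fused = gated_fusion(cam, lidar, w_c, w_l)
print(fused.shape)  # (4, 8, 8)
```

In the actual network the gate weights would be learned end-to-end and applied by convolution layers; the sketch only shows how sigmoid gates let the detector emphasize the more informative modality per BEV region.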


Related research:

11/17/2017, Fusing Bird View LIDAR Point Cloud and Front View Camera Image for Deep Object Detection: We propose a new method for fusing a LIDAR point cloud and camera-captur...

11/24/2022, 3D Dual-Fusion: Dual-Domain Dual-Query Camera-LiDAR Fusion for 3D Object Detection: Fusing data from cameras and LiDAR sensors is an essential technique to ...

12/12/2022, PathFusion: Path-consistent Lidar-Camera Deep Feature Fusion: Fusing camera with LiDAR is a promising technique to improve the accurac...

09/13/2023, SupFusion: Supervised LiDAR-Camera Fusion for 3D Object Detection: In this paper, we propose a novel training strategy called SupFusion, wh...

02/22/2022, Enabling Efficient Deep Convolutional Neural Network-based Sensor Fusion for Autonomous Driving: Autonomous driving demands accurate perception and safe decision-making....

09/02/2020, CLOCs: Camera-LiDAR Object Candidates Fusion for 3D Object Detection: There have been significant advances in neural networks for both 3D obje...

04/19/2023, CrossFusion: Interleaving Cross-modal Complementation for Noise-resistant 3D Object Detection: The combination of LiDAR and camera modalities is proven to be necessary...
