HVDetFusion: A Simple and Robust Camera-Radar Fusion Framework

07/21/2023
by Kai Lei, et al.

In the field of autonomous driving, 3D object detection is a key perception module. Although the current state-of-the-art algorithms fuse Camera and LiDAR sensors, the high cost of LiDAR means that mainstream production systems rely on Camera-only or Camera+Radar configurations. In this study, we propose HVDetFusion, a multi-modal detection algorithm that supports pure camera data as input and can also fuse radar data with camera data. The camera stream does not depend on radar input, which addresses a shortcoming of previous methods. In the pure camera stream, we modify the BEVDet4D framework for better perception and more efficient inference, and this stream produces the complete 3D detection output on its own. To further exploit the benefits of radar signals, we use prior information about object positions to filter false positives from the raw radar data, and then use the position and radial-velocity measurements recorded by the radar sensors to supplement and fuse the BEV features generated from the camera data; the results improve further during fusion training. Finally, HVDetFusion achieves a new state-of-the-art 67.4% NDS on the challenging nuScenes test set among all camera-radar 3D object detectors. The code is available at https://github.com/HVXLab/HVDetFusion
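To make the radar handling described above concrete, the following is a minimal, hypothetical PyTorch sketch of the idea: gate raw radar returns against camera-predicted object centers to suppress false positives, then rasterize the surviving points' radial velocity and RCS onto the BEV grid and concatenate them with the camera BEV features. All names, shapes, and thresholds here (filter_radar_by_priors, radar_to_bev, gate_radius, the 128x128 grid) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of prior-based radar filtering and BEV-level fusion.
import torch

def filter_radar_by_priors(radar_points, centers, gate_radius=2.0):
    """Keep radar returns that lie near a camera-predicted object center.

    radar_points: (N, 4) tensor of [x, y, v_r, rcs] in the ego/BEV frame.
    centers:      (M, 2) tensor of predicted object centers [x, y].
    """
    if centers.numel() == 0:
        return radar_points[:0]  # nothing to gate against -> drop all returns
    # Pairwise distances between radar hits and predicted centers.
    d = torch.cdist(radar_points[:, :2], centers)        # (N, M)
    keep = d.min(dim=1).values < gate_radius             # gate by radius
    return radar_points[keep]

def radar_to_bev(radar_points, bev_shape=(128, 128), xy_range=(-51.2, 51.2)):
    """Scatter [v_r, rcs] of the filtered radar points onto a BEV grid."""
    H, W = bev_shape
    lo, hi = xy_range
    bev = torch.zeros(2, H, W)
    if radar_points.numel() == 0:
        return bev
    # Map metric x/y coordinates to integer grid indices.
    ix = ((radar_points[:, 0] - lo) / (hi - lo) * W).long().clamp(0, W - 1)
    iy = ((radar_points[:, 1] - lo) / (hi - lo) * H).long().clamp(0, H - 1)
    bev[0, iy, ix] = radar_points[:, 2]   # radial-velocity channel
    bev[1, iy, ix] = radar_points[:, 3]   # RCS channel
    return bev

# Usage: concatenate the radar BEV channels with the camera BEV features
# before the detection head (shapes are illustrative).
camera_bev = torch.randn(1, 64, 128, 128)        # from a BEVDet4D-style camera stream
radar_raw = torch.randn(200, 4)                  # dummy radar returns [x, y, v_r, rcs]
centers = torch.randn(10, 2) * 30                # camera-predicted object centers
radar_bev = radar_to_bev(filter_radar_by_priors(radar_raw, centers)).unsqueeze(0)
fused_bev = torch.cat([camera_bev, radar_bev], dim=1)   # (1, 66, 128, 128)
```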

Related research

08/20/2023 - Efficient-VRNet: An Exquisite Fusion Network for Riverway Panoptic Perception based on Asymmetric Fair Fusion of Vision and 4D mmWave Radar
Panoptic perception is essential to unmanned surface vehicles (USVs) for...

04/13/2023 - RadarGNN: Transformation Invariant Graph Neural Network for Radar-based Perception
A reliable perception has to be robust against challenging environmental...

05/27/2022 - BEVFusion: A Simple and Robust LiDAR-Camera Fusion Framework
Fusing the camera and LiDAR information has become a de-facto standard f...

09/14/2022 - CRAFT: Camera-Radar 3D Object Detection with Spatio-Contextual Fusion Transformer
Camera and radar sensors have significant advantages in cost, reliabilit...

05/30/2022 - Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection
There are two critical sensors for 3D perception in autonomous driving, ...

06/02/2023 - Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection
LiDAR and Radar are two complementary sensing approaches in that LiDAR s...

06/05/2021 - Radar-Camera Pixel Depth Association for Depth Completion
While radar and video data can be readily fused at the detection level, ...
