RODNet: A Real-Time Radar Object Detection Network Cross-Supervised by Camera-Radar Fused Object 3D Localization

02/09/2021 · by Yizhou Wang, et al.

Various autonomous and assisted driving strategies rely on accurate and reliable perception of the environment around a vehicle. Among commonly used sensors, radar is usually considered a robust and cost-effective solution even in adverse driving scenarios, e.g., weak/strong lighting or bad weather. Instead of fusing potentially unreliable information from all available sensors, perception from radar data alone is a valuable alternative worth exploring. In this paper, we propose a deep radar object detection network, named RODNet, which is cross-supervised by a camera-radar fused algorithm without laborious annotation efforts, to effectively detect objects from radio frequency (RF) images in real time. First, the raw signals captured by millimeter-wave radars are transformed into RF images in range-azimuth coordinates. Second, our proposed RODNet takes a sequence of RF images as input to predict the likelihood of objects in the radar field of view (FoV); two customized modules handle multi-chirp information and object relative motion. Instead of using human-labeled ground truth, the RODNet is cross-supervised during training by a novel 3D localization of detected objects obtained through a camera-radar fusion (CRF) strategy. Finally, we propose a method to evaluate the object detection performance of the RODNet. Because no public dataset exists for this task, we create a new dataset, named CRUW, which contains synchronized RGB and RF image sequences in various driving scenarios. Extensive experiments show that our cross-supervised RODNet achieves 86% average precision in object detection, demonstrating robustness to noisy scenarios in various driving conditions.
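As a minimal sketch of the first step described above, the snippet below illustrates one standard way raw millimeter-wave FMCW chirp data can be turned into a range-azimuth RF image via a range FFT over fast-time samples followed by an angle FFT across the receive antennas. The function name `raw_chirps_to_rf_image`, the array shapes (8 RX antennas, 255 chirps, 128 samples), the windowing, and the normalization are illustrative assumptions, not the paper's exact preprocessing pipeline.

```python
# Hypothetical sketch: raw FMCW chirps -> range-azimuth RF image (assumptions, not the paper's code).
import numpy as np

def raw_chirps_to_rf_image(adc_data: np.ndarray, num_angle_bins: int = 128) -> np.ndarray:
    """Convert raw complex ADC samples to a range-azimuth magnitude image.

    adc_data: complex array of shape (num_rx_antennas, num_chirps, num_samples)
    returns:  real array of shape (num_range_bins, num_angle_bins)
    """
    num_rx, num_chirps, num_samples = adc_data.shape

    # Range FFT along fast-time samples; a Hann window reduces sidelobes.
    window = np.hanning(num_samples)
    range_fft = np.fft.fft(adc_data * window, axis=2)        # (rx, chirp, range)

    # Average over chirps within the frame (a simple stand-in for multi-chirp handling).
    chirp_avg = range_fft.mean(axis=1)                        # (rx, range)

    # Azimuth (angle) FFT across the receive array, zero-padded for finer angle bins.
    angle_fft = np.fft.fftshift(
        np.fft.fft(chirp_avg, n=num_angle_bins, axis=0), axes=0
    )                                                          # (angle, range)

    # Magnitude in range-azimuth coordinates, normalized to [0, 1].
    rf_image = np.abs(angle_fft).T                             # (range, angle)
    return rf_image / (rf_image.max() + 1e-8)

# Example with synthetic data: 8 RX antennas, 255 chirps, 128 samples per chirp.
frame = np.random.randn(8, 255, 128) + 1j * np.random.randn(8, 255, 128)
rf_image = raw_chirps_to_rf_image(frame)
print(rf_image.shape)  # (128, 128): range bins x azimuth bins
```

A sequence of such RF images would then be stacked along time and fed to the detector, which predicts per-class object likelihood maps over the radar FoV; the actual RODNet architecture and its multi-chirp and temporal-motion modules are described in the full paper.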


