RGB-Event Fusion for Moving Object Detection in Autonomous Driving

09/17/2022
by Zhuyun Zhou, et al.

Moving Object Detection (MOD) is a critical vision task for achieving safe autonomous driving. Despite the promising results of deep learning methods, most existing approaches are frame-based only and may fail to reach reasonable performance when dealing with dynamic traffic participants. Recent advances in sensor technology, especially the event camera, can naturally complement the conventional camera approach to better model moving objects. However, event-based works often adopt a pre-defined time window for event representation and simply integrate the events within it to estimate image intensities, discarding much of the rich temporal information carried by the asynchronous event stream. Therefore, from a new perspective, we propose RENet, a novel RGB-Event fusion Network that jointly exploits the two complementary modalities to achieve more robust MOD under challenging scenarios for autonomous driving. Specifically, we first design a temporal multi-scale aggregation module to fully leverage event frames from both the RGB exposure time and larger intervals. Then we introduce a bi-directional fusion module to attentively calibrate and fuse multi-modal features. To evaluate the performance of our network, we carefully select and annotate a sub-MOD dataset from the commonly used DSEC dataset. Extensive experiments demonstrate that our proposed method performs significantly better than state-of-the-art RGB-Event fusion alternatives.
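To make the "temporal multi-scale aggregation" idea concrete, the sketch below shows one common way to turn an asynchronous event stream into a stack of event frames accumulated over several time windows (e.g., the RGB exposure time and larger intervals). This is a hedged illustration only, not RENet's actual implementation: the function name `events_to_frames`, the count-based accumulation, and the two-channel polarity layout are assumptions for demonstration.

```python
import numpy as np

def events_to_frames(events, sensor_hw, windows, t_ref):
    """Accumulate asynchronous events into one 2-channel count frame
    per temporal window ending at t_ref.

    events:   (N, 4) array of [x, y, t, p] rows, polarity p in {-1, +1}
    sensor_hw: (H, W) sensor resolution
    windows:  list of window lengths (same unit as t); several lengths
              give the multi-scale stack described in the abstract
    t_ref:    reference timestamp (e.g., end of the RGB exposure)
    """
    H, W = sensor_hw
    frames = []
    for dt in windows:
        # keep only events inside the window [t_ref - dt, t_ref]
        mask = (events[:, 2] >= t_ref - dt) & (events[:, 2] <= t_ref)
        frame = np.zeros((2, H, W), dtype=np.float32)
        for x, y, t, p in events[mask]:
            ch = 0 if p > 0 else 1            # separate polarity channels
            frame[ch, int(y), int(x)] += 1.0  # per-pixel event count
        frames.append(frame)
    return np.stack(frames)  # shape: (num_windows, 2, H, W)
```

A fusion network can then consume this `(num_windows, 2, H, W)` tensor alongside the RGB frame; shorter windows preserve fast motion detail while longer ones capture slower-moving objects.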


