Chasing Day and Night: Towards Robust and Efficient All-Day Object Detection Guided by an Event Camera

09/17/2023
by   Jiahang Cao, et al.

The ability to detect objects under all lighting conditions (i.e., normal-, over-, and under-exposed) is crucial for real-world applications such as self-driving. Traditional RGB-based detectors often fail under such varying lighting conditions. Therefore, recent works utilize novel event cameras to supplement or guide the RGB modality; however, these methods typically adopt asymmetric network structures that rely predominantly on the RGB modality, resulting in limited robustness for all-day detection. In this paper, we propose EOLO, a novel object detection framework that achieves robust and efficient all-day detection by fusing both RGB and event modalities. Our EOLO framework is built on a lightweight spiking neural network (SNN) to efficiently leverage the asynchronous property of events. Building on it, we first introduce an Event Temporal Attention (ETA) module to learn high-temporal information from events while preserving crucial edge information. Secondly, as different modalities exhibit varying levels of importance under diverse lighting conditions, we propose a novel Symmetric RGB-Event Fusion (SREF) module to effectively fuse RGB-Event features without relying on a specific modality, thus ensuring balanced and adaptive fusion for all-day detection. In addition, to compensate for the lack of paired RGB-Event datasets for all-day training and evaluation, we propose an event synthesis approach based on randomized optical flow that allows for directly generating an event frame from a single exposure image. We further build two new datasets, E-MSCOCO and E-VOC, based on the popular benchmarks MSCOCO and PASCAL VOC. Extensive experiments demonstrate that our EOLO outperforms state-of-the-art detectors, e.g., RENet, by a substantial margin (+3.74) under all lighting conditions. Our code and datasets will be available at https://vlislab22.github.io/EOLO/
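The event-synthesis idea mentioned in the abstract (generating an event frame from a single exposure image via a randomized optical flow) can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual algorithm: the random global-translation flow, the log-intensity contrast threshold, and the ON/OFF polarity rule are all assumptions made for the sketch.

```python
import numpy as np

def synthesize_event_frame(img, threshold=0.1, max_shift=3, seed=0):
    """Hypothetical sketch of event synthesis from a single image.

    Warps the image with a randomized optical flow (here simplified to a
    random global translation), then thresholds the log-intensity
    difference between the warped and original images, mimicking how an
    event camera fires ON/OFF events on brightness changes.
    """
    rng = np.random.default_rng(seed)
    # Random translational "flow" in pixels (assumption: global, integer).
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    warped = np.roll(np.roll(img, int(dy), axis=0), int(dx), axis=1)

    # Log-intensity change, as event cameras respond to log-brightness.
    eps = 1e-6  # avoid log(0)
    diff = (np.log(warped.astype(np.float64) + eps)
            - np.log(img.astype(np.float64) + eps))

    # Threshold into a polarity frame: +1 (ON), -1 (OFF), 0 (no event).
    events = np.zeros(diff.shape, dtype=np.int8)
    events[diff > threshold] = 1
    events[diff < -threshold] = -1
    return events
```

Because the flow is a pure translation of a static image, events fire only where intensity changes spatially, so the synthesized frame is concentrated on edges, which matches the intuition that event data preserves edge structure.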


Related research:

- RGB-Event Fusion for Moving Object Detection in Autonomous Driving (09/17/2022): Moving Object Detection (MOD) is a critical vision task for successfully...
- DOTIE: Detecting Objects through Temporal Isolation of Events using a Spiking Architecture (10/03/2022): Vision-based autonomous navigation systems rely on fast and accurate obj...
- Frame-Event Alignment and Fusion Network for High Frame Rate Tracking (05/25/2023): Most existing RGB-based trackers target low frame rate benchmarks of aro...
- E-CLIP: Towards Label-efficient Event-based Open-world Understanding by CLIP (08/06/2023): Contrastive Language-Image Pre-training (CLIP) has recently shown promisin...
- Learning to See Through with Events (12/05/2022): Although synthetic aperture imaging (SAI) can achieve the seeing-through...
- Residual Spatial Fusion Network for RGB-Thermal Semantic Segmentation (06/17/2023): Semantic segmentation plays an important role in widespread applications...
- SODFormer: Streaming Object Detection with Transformer Using Events and Frames (08/08/2023): DAVIS camera, streaming two complementary sensing modalities of asynchro...
