Deformable Convolutions and LSTM-based Flexible Event Frame Fusion Network for Motion Deblurring

06/01/2023
by   Dan Yang, et al.
Event cameras differ from conventional RGB cameras in that they produce asynchronous data streams. While RGB cameras capture every frame at a fixed rate, event cameras record only changes in the scene, yielding sparse, asynchronous output. Although event data carries information useful for motion deblurring of RGB images, integrating event and image information remains a challenge. Recent state-of-the-art CNN-based deblurring methods accumulate event data over a time period into multiple 2-D event frames. In most of these techniques, however, the number of event frames is fixed and predefined, which drastically reduces temporal resolution, particularly when fast-moving objects are present or longer exposure times are required. Moreover, modern cameras (e.g., mobile-phone cameras) set the exposure time dynamically, which poses an additional problem for networks designed around a fixed number of event frames. To address these challenges, a Long Short-Term Memory (LSTM)-based event feature extraction module is developed that can process a dynamically varying number of event frames. Using this module, we construct a state-of-the-art deblurring network, the Deformable Convolutions and LSTM-based Flexible Event Frame Fusion Network (DLEFNet). It is particularly useful in scenarios where exposure times vary with factors such as lighting conditions or the presence of fast-moving objects in the scene. Evaluation results demonstrate that the proposed method outperforms existing state-of-the-art networks on the deblurring task on both synthetic and real-world datasets.
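To make the accumulation step concrete, the sketch below shows one common way an asynchronous event stream can be binned into a variable number of 2-D event frames, where the frame count can be chosen per exposure rather than fixed in advance. This is a hypothetical illustration of the general technique described in the abstract (the function name and signature are assumptions, not the authors' implementation); the resulting variable-length frame sequence is what an LSTM-based module could then consume.

```python
import numpy as np

def events_to_frames(events, height, width, num_frames, t_start, t_end):
    """Accumulate an asynchronous event stream into `num_frames` 2-D frames.

    `events` is an (N, 4) array of (x, y, t, polarity) rows, a common
    event-camera representation (polarity is +1 or -1). Each event is
    binned into the frame whose time window covers its timestamp, and
    polarities are summed per pixel. Hypothetical sketch, not the
    paper's actual implementation.
    """
    frames = np.zeros((num_frames, height, width), dtype=np.float32)
    bin_width = (t_end - t_start) / num_frames
    for x, y, t, p in events:
        # Clamp the last bin so t == t_end does not index out of range.
        k = min(int((t - t_start) / bin_width), num_frames - 1)
        frames[k, int(y), int(x)] += p
    return frames

# Example: three events on a 4x4 sensor over a 100 ms exposure,
# binned into 2 frames (the count could vary with exposure time).
events = np.array([
    [0, 0, 0.010, +1],   # early event -> frame 0
    [1, 2, 0.060, -1],   # late event  -> frame 1
    [1, 2, 0.090, -1],   # late event  -> frame 1
])
frames = events_to_frames(events, height=4, width=4, num_frames=2,
                          t_start=0.0, t_end=0.100)
# frames.shape == (2, 4, 4); frames[0, 0, 0] == 1.0; frames[1, 2, 1] == -2.0
```

Because `num_frames` is a free parameter here, a downstream recurrent module can be fed however many frames a given exposure produces, which is the flexibility a fixed-size CNN input cannot offer.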


