
Matrix-LSTM: a Differentiable Recurrent Surface for Asynchronous Event-Based Data

by Marco Cannici et al.

Dynamic Vision Sensors (DVSs) asynchronously stream events at pixels that undergo brightness changes. Unlike classic vision devices, they produce a sparse representation of the scene; therefore, to apply standard computer vision algorithms, events must first be integrated into a frame, or event-surface. This is usually done with hand-crafted grids that reconstruct the frame using ad-hoc heuristics. In this paper, we propose Matrix-LSTM, a grid of Long Short-Term Memory (LSTM) cells that learns task-dependent event-surfaces end-to-end. Compared to existing reconstruction approaches, our learned event-surface is more flexible and expressive, improving the baselines on optical flow estimation on the MVSEC benchmark and the state of the art in event-based object classification on the N-Cars dataset.
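The core idea can be sketched in a few lines: a single LSTM cell, with weights shared across all pixels of the grid, integrates the sequence of events falling on each pixel into a C-channel feature, and the final hidden states form the event-surface. This is a minimal NumPy sketch under stated assumptions, not the authors' implementation; the class name `EventSurfaceLSTM` and the simple (timestamp, polarity) per-event feature are illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class EventSurfaceLSTM:
    """Sketch of a Matrix-LSTM-style event integrator (illustrative, not the paper's code)."""

    def __init__(self, in_dim, hidden, seed=0):
        rng = np.random.default_rng(seed)
        # One set of gate weights (input, forget, output, candidate),
        # shared by every pixel of the sensor grid.
        self.W = rng.standard_normal((4 * hidden, in_dim + hidden)) * 0.1
        self.b = np.zeros(4 * hidden)
        self.hidden = hidden

    def step(self, x, h, c):
        # Standard LSTM cell update for one event at one pixel.
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden
        i, f, o = sigmoid(z[:H]), sigmoid(z[H:2 * H]), sigmoid(z[2 * H:3 * H])
        g = np.tanh(z[3 * H:])
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

    def build_surface(self, events, height, width):
        """events: iterable of (x, y, t, polarity); returns an (H, W, hidden) surface."""
        surface = np.zeros((height, width, self.hidden))
        state = {}  # per-pixel (h, c), created lazily on the first event
        for x, y, t, p in sorted(events, key=lambda e: e[2]):  # process in time order
            h, c = state.get((y, x), (np.zeros(self.hidden), np.zeros(self.hidden)))
            feat = np.array([t, float(p)])  # simple (timestamp, polarity) event feature
            h, c = self.step(feat, h, c)
            state[(y, x)] = (h, c)
            surface[y, x] = h  # last hidden state becomes the pixel's feature vector
        return surface
```

Pixels that receive no events keep an all-zero feature, so the output stays as sparse as the input stream; the resulting dense surface can then be fed to any standard convolutional backbone, and because every operation is differentiable, the integration can in principle be trained end-to-end for the downstream task.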
