Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation

09/23/2020
by Dimche Kostadinov, et al.

Event-based cameras record an asynchronous stream of per-pixel brightness changes. As such, they have numerous advantages over standard frame-based cameras, including high temporal resolution, high dynamic range, and no motion blur. Due to this asynchronous nature, efficient learning of compact representations for event data is challenging, and the extent to which the spatial and temporal event "information" is useful for pattern recognition tasks remains largely unexplored. In this paper, we focus on single-layer architectures and analyze the performance of two general problem formulations, the direct and the inverse, for unsupervised feature learning from local event data (local volumes of events described in space-time). We identify and demonstrate the main advantages of each approach. Theoretically, we analyze guarantees for an optimal solution, the possibility of asynchronous, parallel parameter updates, and the computational complexity. We present numerical experiments for object recognition, evaluate the solutions obtained under the direct and the inverse formulation, and compare them with state-of-the-art methods. Our empirical results highlight the advantages of both approaches for representation learning from event data, with improvements of up to 9% over state-of-the-art methods from the same class.
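The abstract contrasts a direct (transform/analysis) and an inverse (synthesis/dictionary) formulation for unsupervised feature learning from local space-time volumes of events. Since only the abstract is shown here, the Python sketch below is a generic illustration of those two families on synthetic events, not the authors' actual objectives: the voxelization in local_volume, the thresholds, step sizes, and all variable names are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic event stream: columns are (x, y, t, polarity). A real stream
# would come from an event-camera dataset instead of random draws.
events = np.column_stack([
    rng.integers(0, 64, 5000),            # x coordinate
    rng.integers(0, 64, 5000),            # y coordinate
    np.sort(rng.uniform(0.0, 1.0, 5000)), # timestamps in seconds
    rng.choice([-1, 1], 5000),            # polarity of the brightness change
]).astype(np.float64)

def local_volume(ev, cx, cy, t0, patch=8, bins=4, dt=0.05):
    """Voxelize events in a patch x patch x bins space-time neighborhood
    of (cx, cy, t0) and return the flattened volume (hypothetical helper)."""
    x, y, t, p = ev.T
    m = (np.abs(x - cx) < patch / 2) & (np.abs(y - cy) < patch / 2) \
        & (t >= t0) & (t < t0 + dt)
    vol = np.zeros((patch, patch, bins))
    xi = (x[m] - cx + patch / 2).astype(int)
    yi = (y[m] - cy + patch / 2).astype(int)
    ti = np.minimum(((t[m] - t0) / dt * bins).astype(int), bins - 1)
    np.add.at(vol, (xi, yi, ti), p[m])    # signed event count per voxel
    return vol.ravel()

# Data matrix: each column is one local space-time volume.
X = np.stack([local_volume(events,
                           rng.integers(8, 56), rng.integers(8, 56),
                           rng.uniform(0.0, 0.9))
              for _ in range(512)], axis=1)

k = 64            # number of filters / dictionary atoms (assumed)
d, n = X.shape    # volume dimension and number of samples

# Direct (analysis) view: learn a transform W so that the code Z = W X is sparse.
W = rng.standard_normal((k, d)) / np.sqrt(d)
for _ in range(50):
    Z = W @ X
    Zs = np.where(np.abs(Z) > 0.1, Z, 0.0)          # hard-threshold the codes
    W -= 1e-3 * ((Z - Zs) @ X.T / n + 1e-2 * W)     # step on ||WX - Zs||^2 + ridge

# Inverse (synthesis) view: learn a dictionary D so that X ~= D Z with sparse Z.
D = rng.standard_normal((d, k)) / np.sqrt(k)
Z = np.zeros((k, n))
for _ in range(50):
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)        # Lipschitz step for the fit term
    Z -= step * (D.T @ (D @ Z - X))                 # gradient step on ||X - DZ||^2
    Z = np.sign(Z) * np.maximum(np.abs(Z) - 0.1 * step, 0.0)  # soft threshold (ISTA)
    D -= 1e-3 * ((D @ Z - X) @ Z.T) / n             # dictionary gradient step
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-8)

print("direct  codes: fraction zero =", np.mean(Zs == 0))
print("inverse codes: fraction zero =", np.mean(Z == 0))

Under this generic reading, the direct form produces a code with a single matrix product, which is cheap and easy to update asynchronously and in parallel, while the inverse form solves a small reconstruction problem per sample, which loosely mirrors the complexity and parallelism trade-offs the abstract analyzes.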

Related research

04/17/2019  End-to-End Learning of Representations for Asynchronous Event-Based Data
Event cameras are vision sensors that record asynchronous streams of per...

03/20/2020  Event-based Asynchronous Sparse Convolutional Networks
Event cameras are bio-inspired sensors that respond to per-pixel brightn...

08/28/2023  Graph-based Asynchronous Event Processing for Rapid Object Recognition
Different from traditional video cameras, event cameras capture asynchro...

04/30/2023  EVREAL: Towards a Comprehensive Benchmark and Analysis Suite for Event-based Video Reconstruction
Event cameras are a new type of vision sensor that incorporates asynchro...

01/31/2019  Network Parameter Learning Using Nonlinear Transforms, Local Representation Goals and Local Propagation Constraints
In this paper, we introduce a novel concept for learning of the paramete...

05/10/2021  Event-LSTM: An Unsupervised and Asynchronous Learning-based Representation for Event-based Data
Event cameras are activity-driven bio-inspired vision sensors, thereby r...

10/08/2019  Graph-based Spatial-temporal Feature Learning for Neuromorphic Vision Sensing
Neuromorphic vision sensing (NVS) allows for significantly higher event ...
