Asynchronous, Photometric Feature Tracking using Events and Frames

07/25/2018
by Daniel Gehrig et al.

We present a method that leverages the complementarity of event cameras and standard cameras to track visual features with low latency. Event cameras are novel sensors that output pixel-level brightness changes, called "events". They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency on the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: the frames provide a photometric representation that does not depend on motion direction, and the events provide low-latency updates. In contrast to previous works, which are based on heuristics, this is the first principled method that uses raw intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are both more accurate (subpixel accuracy) and longer than the state of the art, across a wide variety of scenes.
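The abstract's core idea lends itself to a short sketch. Under a generative event model, an event fires at a pixel when the log-intensity change there crosses a contrast threshold C, so accumulating signed event polarities over a short window yields a brightness-increment patch; the same increment can be predicted from the frame's spatial gradient as ΔL ≈ -∇L · v Δt and the two patches aligned in a maximum-likelihood sense (least squares under a Gaussian noise assumption). The Python sketch below is a simplified illustration, not the authors' implementation: it estimates only a per-patch flow v (the full method also optimizes a warp of the feature patch), and the function names, threshold value, patch size, and normalization details are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

C = 0.1  # assumed contrast threshold; sensor-dependent and unknown a priori


def event_increment(events, center, patch=25):
    """Accumulate signed polarities into a brightness-increment patch.

    Generative model: each event contributes +C or -C of log-intensity
    change at the pixel where it fired.
    events: iterable of (x, y, t, polarity) tuples.
    """
    dL = np.zeros((patch, patch))
    half = patch // 2
    for x, y, _t, pol in events:
        u = int(round(x)) - int(center[0]) + half
        v = int(round(y)) - int(center[1]) + half
        if 0 <= u < patch and 0 <= v < patch:
            dL[v, u] += C if pol > 0 else -C
    return dL


def predicted_increment(gx, gy, flow):
    """Predict the increment from the frame gradient: dL ~= -(grad L) . v,
    with the time window folded into the flow magnitude."""
    return -(gx * flow[0] + gy * flow[1])


def normalized(p):
    # Normalizing both patches cancels the unknown contrast threshold C.
    return p / (np.linalg.norm(p) + 1e-12)


def residuals(flow, gx, gy, dL_events):
    """Pixel-wise difference between measured and predicted increments;
    under Gaussian noise, minimizing this is the maximum-likelihood fit."""
    return (normalized(dL_events) - normalized(predicted_increment(gx, gy, flow))).ravel()


# Usage sketch: gx, gy are gradient patches of the frame at the feature,
# dL is built from the events that fell in the patch since the last update.
# A nonzero initial guess avoids the degenerate all-zero prediction:
# flow = least_squares(residuals, x0=np.array([1.0, 0.0]), args=(gx, gy, dL)).x
```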


Related research

Event-based, 6-DOF Camera Tracking from Photometric Depth Maps (07/12/2016)
Event cameras are bio-inspired vision sensors that output pixel-level br...

Standard and Event Cameras Fusion for Dense Mapping (02/06/2021)
Event cameras are a kind of bio-inspired sensors that generate data when...

Powerline Tracking with Event Cameras (08/01/2021)
Autonomous inspection of powerlines with quadrotors is challenging. Flig...

Data-driven Feature Tracking for Event Cameras (11/23/2022)
Because of their high temporal resolution, increased resilience to motio...

Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction (02/18/2021)
Event cameras are novel vision sensors that report per-pixel brightness ...

FE-Fusion-VPR: Attention-based Multi-Scale Network Architecture for Visual Place Recognition by Fusing Frames and Events (11/22/2022)
Traditional visual place recognition (VPR), usually using standard camer...

Deformable Neural Radiance Fields using RGB and Event Cameras (09/15/2023)
Modeling Neural Radiance Fields for fast-moving deformable objects from ...
