How Asynchronous Events Encode Video

06/09/2022
by Karen Adam et al.

As event-based sensing gains in popularity, theoretical understanding is needed to harness this technology's potential. Instead of recording video by capturing frames, event-based cameras have sensors that emit events when their inputs change, thus encoding information in the timing of events. This creates new challenges in establishing reconstruction guarantees and algorithms, but also provides advantages over frame-based video. We use time encoding machines (TEMs) to model event-based sensors: TEMs likewise encode their inputs by emitting events characterized by their timing, and reconstruction from time encodings is well understood. We consider the case of time encoding bandlimited video and demonstrate a dependence between spatial sensor density and overall spatial and temporal resolution. Such a dependence does not occur in frame-based video, where temporal resolution depends solely on the frame rate and spatial resolution depends solely on the pixel grid. However, this dependence arises naturally in event-based video and allows oversampling in space to provide better time resolution. As such, event-based vision encourages using more sensors that emit fewer events over time.
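To make the encoding model concrete, here is a minimal sketch of an integrate-and-fire time encoding machine, one common TEM variant: the encoder integrates the biased input and emits an event each time the integral crosses a threshold, so all information is carried by the event times. The function name, signal, and parameter values below are illustrative assumptions, not the paper's specific setup.

```python
import numpy as np

def time_encode(signal, dt, bias, threshold):
    """Integrate-and-fire time encoding machine (illustrative sketch).

    Integrates (signal + bias) over time; whenever the running integral
    reaches `threshold`, the current time is recorded as an event and
    `threshold` is subtracted (integrator reset). Returns event times.
    """
    integral = 0.0
    events = []
    for k, x in enumerate(signal):
        integral += (x + bias) * dt
        if integral >= threshold:
            events.append(k * dt)
            integral -= threshold
    return events

# Hypothetical bandlimited-style input: a slow sinusoid sampled on a fine grid.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
x = 0.5 * np.sin(2 * np.pi * 3 * t)

# Bias keeps (x + bias) positive so the integral is monotone increasing.
spikes = time_encode(x, dt, bias=1.0, threshold=0.05)
```

Note that the event rate tracks the local amplitude of the input: denser events where the signal is larger, sparser events where it is smaller, which is the sense in which timing, rather than sampled values, carries the information.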

