Sampling Based Scene-Space Video Processing

02/05/2021
by Felix Klose, et al.

Many compelling video processing effects can be achieved if per-pixel depth information and 3D camera calibrations are known. However, the success of such methods is highly dependent on the accuracy of this "scene-space" information. We present a novel, sampling-based framework for processing video that enables high-quality scene-space video effects in the presence of inevitable errors in depth and camera pose estimation. Instead of trying to improve the explicit 3D scene representation, the key idea of our method is to exploit the high redundancy of approximate scene information that arises because most scene points are visible multiple times across many frames of video. Based on this observation, we propose a novel pixel gathering and filtering approach. The gathering step is general and collects pixel samples in scene-space, while the filtering step is application-specific and computes a desired output video from the gathered sample sets. Our approach is easily parallelizable and has been implemented on the GPU, allowing us to take full advantage of large volumes of video data and facilitating practical runtimes on HD video using a standard desktop computer. Our generic scene-space formulation comprehensively describes a multitude of video processing applications such as denoising, deblurring, super-resolution, object removal, computational shutter functions, and other scene-space camera effects. We present results for various casually captured, hand-held, moving, compressed, monocular videos depicting challenging scenes recorded in uncontrolled environments.
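To make the gather-and-filter idea concrete, the sketch below shows one plausible form of the pipeline: a pixel is lifted into scene-space using its depth and camera pose, reprojected into every other frame to gather color samples, and the sample set is then reduced by an application-specific filter (here a robust median for denoising). This is only a minimal illustration under assumed inputs (per-frame depth maps and K, R, t calibrations as NumPy arrays); the function names (unproject, project, gather_samples, denoise_filter) are hypothetical and not taken from the authors' implementation.

```python
# Minimal sketch of scene-space gathering and filtering (illustrative only).
# Assumes: frames[i] is an HxWx3 image, depths[i] an HxW z-depth map,
# cams[i] = (K, R, t) with world-to-camera mapping X_cam = R @ X_world + t.
import numpy as np

def unproject(u, v, depth, K, R, t):
    """Lift pixel (u, v) with z-depth into world space."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # normalized camera ray
    cam_point = ray * depth                          # point in camera coords
    return R.T @ (cam_point - t)                     # point in world coords

def project(X, K, R, t):
    """Project a world-space point into a camera; returns (u, v, depth)."""
    cam_point = R @ X + t
    uvw = K @ cam_point
    return uvw[0] / uvw[2], uvw[1] / uvw[2], cam_point[2]

def gather_samples(u, v, f, frames, depths, cams, depth_tol=0.05):
    """Collect color samples of one scene point across all frames.

    A sample is kept only if the reprojected depth roughly agrees with the
    target frame's depth map (a simple visibility / outlier test).
    """
    K, R, t = cams[f]
    X = unproject(u, v, depths[f][v, u], K, R, t)    # scene-space position
    samples = []
    for img, dmap, (Kj, Rj, tj) in zip(frames, depths, cams):
        uj, vj, dj = project(X, Kj, Rj, tj)
        ui, vi = int(round(uj)), int(round(vj))
        if dj > 0 and 0 <= vi < img.shape[0] and 0 <= ui < img.shape[1]:
            if abs(dmap[vi, ui] - dj) < depth_tol * dj:   # visible here?
                samples.append(img[vi, ui])
    return np.asarray(samples)

def denoise_filter(samples):
    """Application-specific filter: robust per-channel median for denoising."""
    return np.median(samples, axis=0) if len(samples) else None
```

Running gather_samples for every pixel of a reference frame and applying the filter yields one processed frame; swapping the filter (for example, a temporally weighted average for a computational shutter, or excluding samples from a masked object for removal) changes the application while the gathering step stays the same.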


Related research

04/25/2019 - Learning the Depths of Moving People by Watching Frozen People
We present a method for predicting dense depth in scenarios where both a...

01/17/2017 - Computing Egomotion with Local Loop Closures for Egocentric Videos
Finding the camera pose is an important step in many egocentric video ap...

01/09/2019 - Neural RGB->D Sensing: Depth and Uncertainty from a Video Camera
Depth sensing is crucial for 3D reconstruction and scene understanding. ...

09/17/2013 - Photon counting compressive depth mapping
We demonstrate a compressed sensing, photon counting lidar system based ...

10/12/2016 - Video Depth-From-Defocus
Many compelling video post-processing effects, in particular aesthetic f...

05/18/2018 - Scanner: Efficient Video Analysis at Scale
A growing number of visual computing applications depend on the analysis...

12/05/2016 - Turning an Urban Scene Video into a Cinemagraph
This paper proposes an algorithm that turns a regular video capturing ur...
