Mononizing Binocular Videos

09/03/2020
by Wenbo Hu et al.

This paper presents the idea of mononizing binocular videos and a framework to effectively realize it. Mononize means we purposely convert a binocular video into a regular monocular video, with the stereo information implicitly encoded in a visual but nearly-imperceptible form. Hence, we can impartially distribute and show the mononized video as an ordinary monocular video. Unlike an ordinary monocular video, however, the mononized video allows us to restore the original binocular video and show it on a stereoscopic display. To start, we formulate an encoding-and-decoding framework with a pyramidal deformable fusion module to exploit long-range correspondences between the left and right views, a quantization layer to suppress restoring artifacts, and a compression noise simulation module to resist the compression noise introduced by modern video codecs. Our framework is self-supervised, as we articulate our objective function with loss terms defined on the input: a monocular term for creating the mononized video, an invertibility term for restoring the original video, and a temporal term for frame-to-frame coherence. Further, we conducted extensive experiments to evaluate our generated mononized videos and restored binocular videos on diverse types of images and 3D movies. Quantitative results on both standard metrics and user perception studies show the effectiveness of our method.
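To make the self-supervised objective concrete, here is a minimal NumPy sketch of the three loss terms the abstract names. The simple L2 forms, the stand-in encoder/decoder outputs, and the weighting factor are illustrative assumptions only; the paper's actual loss definitions and network modules (e.g. the pyramidal deformable fusion) are more involved.

```python
import numpy as np

def mono_loss(mono, left):
    # Monocular term: the mononized frame should look like an
    # ordinary monocular view (here, compared against the left view).
    return np.mean((mono - left) ** 2)

def invertibility_loss(rest_left, rest_right, left, right):
    # Invertibility term: the decoded (restored) views should
    # match the original binocular input.
    return (np.mean((rest_left - left) ** 2)
            + np.mean((rest_right - right) ** 2))

def temporal_loss(frames):
    # Temporal term: penalize frame-to-frame differences to
    # encourage temporal coherence in the mononized video.
    diffs = np.diff(frames, axis=0)
    return np.mean(diffs ** 2)

def total_loss(mono, rest_left, rest_right, left, right, w_t=0.1):
    # Weighted sum of the three terms; w_t is an assumed weight.
    return (mono_loss(mono, left)
            + invertibility_loss(rest_left, rest_right, left, right)
            + w_t * temporal_loss(mono))

# Usage with placeholder tensors of shape (frames, height, width):
left = np.random.rand(2, 8, 8)
right = np.random.rand(2, 8, 8)
mono = left.copy()                       # stand-in for the encoder output
rest_l, rest_r = left.copy(), right.copy()  # stand-in for the decoder output
loss = total_loss(mono, rest_l, rest_r, left, right)
```

In the real framework these terms are computed on network outputs and back-propagated jointly, which is what makes the training self-supervised: every target is derived from the binocular input itself.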


