Learning Blind Video Temporal Consistency

08/01/2018
by Wei-Sheng Lai, et al.

Applying image processing algorithms independently to each frame of a video often leads to undesired, temporally inconsistent results. Developing a temporally consistent video-based extension, however, requires domain knowledge for each individual task, and such extensions rarely generalize to other applications. In this paper, we present an efficient end-to-end approach based on a deep recurrent network for enforcing temporal consistency in a video. Our method takes the original unprocessed video and a per-frame processed video as inputs and produces a temporally consistent video. Consequently, our approach is agnostic to the specific image processing algorithm applied to the original video. We train the proposed network by minimizing both short-term and long-term temporal losses as well as a perceptual loss, striking a balance between temporal stability and perceptual similarity to the processed frames. At test time, our model does not require computing optical flow and therefore achieves real-time speed even on high-resolution videos. We show that a single model can handle multiple, previously unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation, and intrinsic image decomposition. Extensive objective evaluation and a subjective study demonstrate that the proposed approach performs favorably against state-of-the-art methods on various types of videos.
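To make the pipeline described above concrete, the following is a minimal, hypothetical PyTorch sketch of the idea, not the authors' released implementation. A small convolutional network (standing in for the paper's ConvLSTM-based transform network) takes the current input frame, the per-frame processed frame, and the previous output, and predicts a residual correction; during training, the output is compared against the flow-warped previous output (temporal loss) and against the processed frame (a plain L1 term stands in for the VGG perceptual loss). `TransformNet`, `temporal_loss`, `total_loss`, the layer sizes, and the weight `lambda_t` are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class TransformNet(nn.Module):
    """Stand-in for the transform network (the paper's model is a
    ConvLSTM-based recurrent network). It consumes the current input
    frame I_t, the per-frame processed frame P_t, and the previous
    output O_{t-1}, and predicts a residual on top of P_t."""
    def __init__(self, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(9, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1),
        )

    def forward(self, frame, processed, prev_output):
        # Three RGB inputs concatenated along the channel axis -> 9 channels.
        x = torch.cat([frame, processed, prev_output], dim=1)
        return processed + self.body(x)  # residual correction of P_t

def temporal_loss(output, warped_prev_output, visibility):
    """Short-term temporal loss: L1 between the current output and the
    previous output warped by optical flow, masked by a visibility map.
    Flow and mask are precomputed and needed only during training; the
    paper's long-term loss applies the same comparison against the
    first output frame of the sequence."""
    return (visibility * (output - warped_prev_output).abs()).mean()

def total_loss(output, processed, warped_prev_output, visibility,
               lambda_t=100.0):
    """Perceptual similarity plus temporal stability. A plain L1 term
    stands in for the paper's VGG-based perceptual loss; lambda_t is an
    illustrative weight, not the paper's setting."""
    content = (output - processed).abs().mean()
    return content + lambda_t * temporal_loss(
        output, warped_prev_output, visibility)
```

At test time the recurrence alone enforces consistency, so no optical flow is computed:

```python
model = TransformNet().eval()
T, B = 4, 1
frames = torch.rand(T, B, 3, 64, 64)     # original video
processed = torch.rand(T, B, 3, 64, 64)  # per-frame processed video
with torch.no_grad():
    output = processed[0]                # first frame passes through
    for t in range(1, T):
        output = model(frames[t], processed[t], output)
```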

Related research

06/08/2022 · Learning Task Agnostic Temporal Consistency Correction
Due to the scarcity of video processing methodologies, image processing ...

12/10/2019 · HyperCon: Image-To-Video Model Transfer for Video-To-Video Translation Tasks
Video-to-video translation for super-resolution, inpainting, style trans...

06/29/2023 · Training-Free Neural Matte Extraction for Visual Effects
Alpha matting is widely used in video conferencing as well as in movies,...

10/24/2021 · AuxAdapt: Stable and Efficient Test-Time Adaptation for Temporally Consistent Video Semantic Segmentation
In video segmentation, generating temporally consistent results across f...

03/31/2021 · Long-Term Temporally Consistent Unpaired Video Translation from Simulated Surgical 3D Data
Research in unpaired video translation has mainly focused on short-term ...

11/24/2020 · Is a Green Screen Really Necessary for Real-Time Human Matting?
For human matting without the green screen, existing works either requir...

03/12/2021 · Learning Long-Term Style-Preserving Blind Video Temporal Consistency
When trying to independently apply image-trained algorithms to successiv...
