Learning Temporally and Semantically Consistent Unpaired Video-to-video Translation Through Pseudo-Supervision From Synthetic Optical Flow

01/15/2022
by Kaihong Wang, et al.

Unpaired video-to-video translation aims to translate videos between a source and a target domain without paired training data, making it more feasible for real-world applications. Unfortunately, the translated videos generally suffer from temporal and semantic inconsistency. To address this, many existing works adopt spatiotemporal consistency constraints that incorporate temporal information based on motion estimation. However, inaccuracies in motion estimation degrade the quality of the guidance towards spatiotemporal consistency, leading to unstable translation. In this work, we propose a novel paradigm that regularizes spatiotemporal consistency by synthesizing motion in input videos with generated optical flow instead of estimating it. The synthetic motion can therefore be applied in the regularization paradigm to keep motions consistent across domains without the risk of motion-estimation errors. We then use an unsupervised recycle loss and an unsupervised spatial loss, guided by the pseudo-supervision provided by the synthetic optical flow, to accurately enforce spatiotemporal consistency in both domains. Experiments show that our method is versatile across scenarios and achieves state-of-the-art performance in generating temporally and semantically consistent videos. Code is available at: https://github.com/wangkaihong/Unsup_Recycle_GAN/.
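The core idea of the abstract can be illustrated with a minimal sketch: because the optical flow is synthesized rather than estimated, the exact motion between a frame and its warped pseudo-successor is known, so translating-then-warping should agree with warping-then-translating. The NumPy snippet below is an illustrative toy (the function names `warp`, `synthetic_flow`, and `unsup_spatial_loss` are assumptions, not the authors' code), using a single global translation as the simplest synthetic flow and a pixelwise stand-in for the translator network.

```python
import numpy as np

def warp(frame, flow):
    """Warp a grayscale frame (H, W) by an optical-flow field (H, W, 2)
    using nearest-neighbor sampling; flow[..., 0] is dx, flow[..., 1] is dy."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xs - flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys - flow[..., 1]), 0, h - 1).astype(int)
    return frame[src_y, src_x]

def synthetic_flow(h, w, max_shift=2.0, seed=0):
    """Sample a synthetic flow field: here a single random global
    translation, the simplest spatially consistent motion."""
    rng = np.random.default_rng(seed)
    dx, dy = rng.uniform(-max_shift, max_shift, size=2)
    flow = np.zeros((h, w, 2))
    flow[..., 0], flow[..., 1] = dx, dy
    return flow

def unsup_spatial_loss(translate, frame, flow):
    """Pseudo-supervised consistency loss: since the synthetic motion is
    known exactly, translating then warping should match warping then
    translating, with no motion-estimation error in the supervision."""
    a = warp(translate(frame), flow)   # translate, then apply the known motion
    b = translate(warp(frame, flow))   # apply the known motion, then translate
    return float(np.abs(a - b).mean())
```

For a translator that acts independently on each pixel (like the inversion `lambda x: 1.0 - x`), the two paths commute exactly and the loss is zero; a real generator would be penalized whenever its outputs drift under the known synthetic motion.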


