Video ControlNet: Towards Temporally Consistent Synthetic-to-Real Video Translation Using Conditional Image Diffusion Models

05/30/2023
by   Ernie Chu, et al.

In this study, we present an efficient and effective approach for achieving temporally consistent synthetic-to-real video translation in videos of varying lengths. Our method leverages off-the-shelf conditional image diffusion models, allowing us to perform multiple synthetic-to-real image generations in parallel. By utilizing the optical flow information readily available from the synthetic videos, our approach enforces temporal consistency among corresponding pixels across frames. This is achieved through joint noise optimization, which minimizes spatial and temporal discrepancies simultaneously. To the best of our knowledge, our proposed method is the first to accomplish diverse and temporally consistent synthetic-to-real video translation using conditional image diffusion models, and it requires no training or fine-tuning of the diffusion models. Extensive experiments on various synthetic-to-real video translation benchmarks demonstrate the effectiveness of our approach both quantitatively and qualitatively. Finally, we show that our method outperforms baseline methods in terms of both temporal consistency and visual quality.
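The core idea in the abstract can be sketched in code: given the optical flow from the synthetic source video, corresponding pixels in consecutive translated frames should agree, and the discrepancy between each frame and its flow-warped predecessor serves as the temporal term of the joint optimization objective. The following is a minimal, hedged sketch of that loss, not the authors' implementation; the nearest-neighbour `warp` helper and the `temporal_loss` name are illustrative assumptions (the paper optimizes this kind of objective over the diffusion noise, which is omitted here).

```python
import numpy as np

def warp(frame, flow):
    """Backward-warp a frame by a dense optical-flow field (nearest neighbour).

    frame: (H, W, C) array of pixel values.
    flow:  (H, W, 2) array of (dx, dy) offsets mapping each target pixel
           back to its source location in `frame`.
    """
    H, W, _ = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return frame[src_y, src_x]

def temporal_loss(frames, flows):
    """Mean squared discrepancy between each frame and its warped predecessor.

    frames: list of T arrays of shape (H, W, C), e.g. translated frames.
    flows:  list of T-1 flow fields; flows[t-1] maps frame t back to frame t-1.
    """
    return float(np.mean([
        np.mean((frames[t] - warp(frames[t - 1], flows[t - 1])) ** 2)
        for t in range(1, len(frames))
    ]))
```

In the method described above, this temporal term would be minimized jointly with a spatial fidelity term by adjusting the shared noise fed to the parallel per-frame diffusion generations, rather than by editing the output pixels directly.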

