Video-to-Video Synthesis

08/20/2018
by   Ting-Chun Wang, et al.
We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image synthesis problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often yields temporally incoherent videos of low visual quality. In this paper, we propose a novel video-to-video synthesis approach under the generative adversarial learning framework. Through carefully designed generator and discriminator architectures, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method over strong baselines. In particular, our model is capable of synthesizing 2K-resolution videos of street scenes up to 30 seconds long, which significantly advances the state of the art in video synthesis. Finally, we apply our approach to future video prediction, outperforming several state-of-the-art competing systems.
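The spatio-temporal adversarial objective mentioned above pairs a per-frame (spatial) discriminator with a per-clip (temporal) discriminator, so the generator is penalized both for unrealistic frames and for incoherent motion. The sketch below is a minimal, hypothetical illustration of combining the two loss terms; the function names, the non-saturating GAN loss form, and the weighting parameter `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    """Numerically simple logistic function for turning scores into probabilities."""
    return 1.0 / (1.0 + np.exp(-x))

def gan_loss_d(real_scores, fake_scores):
    """Non-saturating discriminator loss: push real scores toward 1, fake toward 0."""
    eps = 1e-8  # avoid log(0)
    real_term = -np.log(sigmoid(real_scores) + eps).mean()
    fake_term = -np.log(1.0 - sigmoid(fake_scores) + eps).mean()
    return real_term + fake_term

def spatio_temporal_objective(di_real, di_fake, dv_real, dv_fake, lam=1.0):
    """Combine a frame-level loss (image discriminator D_I, spatial realism)
    with a clip-level loss (video discriminator D_V, temporal coherence).
    `lam` weights the temporal term; its value here is an assumption."""
    loss_image = gan_loss_d(di_real, di_fake)  # scores on individual frames
    loss_video = gan_loss_d(dv_real, dv_fake)  # scores on short clips
    return loss_image + lam * loss_video
```

In a real training loop the score arrays would come from two discriminator networks evaluated on real and generated frames/clips; here they are plain arrays so the combination of the two terms is easy to see.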


Related research

- Generative Adversarial Networks for Image and Video Synthesis: Algorithms and Applications (08/06/2020)
- Few-shot Video-to-Video Synthesis (10/28/2019)
- Simple and Effective Synthesis of Indoor 3D Scenes (04/06/2022)
- Generative Adversarial Network for Future Hand Segmentation from Egocentric Video (03/21/2022)
- Empowering Dynamics-aware Text-to-Video Diffusion with Large Language Models (08/26/2023)
- Intrinsic Temporal Regularization for High-resolution Human Video Synthesis (12/11/2020)
- Breaking the "Object" in Video Object Segmentation (12/12/2022)
