Video-to-Video Synthesis

08/20/2018 · Ting-Chun Wang et al. · NVIDIA, MIT

We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image synthesis problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without understanding temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a novel video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generator and discriminator architectures, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our approach to future video prediction, outperforming several state-of-the-art competing systems.


Code repository: vid2vid — a PyTorch implementation of our method for high-resolution (e.g., 2048x1024) photorealistic video-to-video translation.

1 Introduction

The capability to model and recreate the dynamics of our visual world is essential to building intelligent agents. Apart from purely scientific interests, learning to synthesize continuous visual experiences has a wide range of applications in computer vision, robotics, and computer graphics. For example, in model-based reinforcement learning 

arulkumaran2017deep ; ha2018world , a video synthesis model can approximate the visual dynamics of the world, allowing an agent to be trained with less real experience data. Using a learned video synthesis model, one can generate realistic videos without explicitly specifying scene geometry, materials, lighting, and dynamics, all of which would be cumbersome but necessary when using a standard graphics rendering engine kajiya1986rendering .

The video synthesis problem exists in various forms, including future video prediction srivastava2015unsupervised ; mathieu2015deep ; finn2016unsupervised ; lotter2016deep ; xue2016visual ; walker2016uncertain ; denton2017unsupervised ; villegas2017decomposing ; liang2017dual and unconditional video synthesis vondrick2016generating ; saito2017temporal ; tulyakov2017mocogan . In this paper, we study a new form: video-to-video synthesis. At the core, we aim to learn a mapping function that can convert an input video to an output video. To the best of our knowledge, a general-purpose solution to video-to-video synthesis has not yet been explored by prior work, although its image counterpart, the image-to-image translation problem, is a popular research topic isola2017image ; taigman2016unsupervised ; bousmalis2016unsupervised ; shrivastava2016learning ; zhu2017unpaired ; liu2016unsupervised ; liu2016coupled ; huang2018munit ; zhu2017toward ; wang2017high . Our method is inspired by previous application-specific video synthesis methods schodl2000video ; shechtman2005space ; wexler2007space ; ruder2016artistic .

We cast the video-to-video synthesis problem as a distribution matching problem, where the goal is to train a model such that the conditional distribution of the synthesized videos given input videos resembles that of real videos. To this end, we learn a conditional generative adversarial model goodfellow2014generative given paired input and output videos. With carefully-designed generators and discriminators and a new spatio-temporal learning objective, our method can learn to synthesize high-resolution, photorealistic, temporally coherent videos. Moreover, we extend our method to multimodal video synthesis: conditioning on the same input, our model can produce videos with diverse appearances.

We conduct extensive experiments on several datasets on the task of converting a sequence of segmentation masks to photorealistic videos. Both quantitative and qualitative results indicate that our synthesized videos look more photorealistic than those from strong baselines. See Figure 1 for an example. We further demonstrate that the proposed approach can generate photorealistic 2K-resolution videos up to 30 seconds long. Our method also grants users flexible high-level control over the video generation results. For example, a user can easily replace all the buildings with trees in a street view video. In addition, our method works for other input video formats such as face sketches and body poses, enabling many applications from face swapping to human motion transfer. Finally, we extend our approach to future prediction and show that our method can outperform existing systems. Please visit our website for code, models, and more results.

2 Related Work

Figure 1: Generating a photorealistic video from an input segmentation map video on Cityscapes. Top left: input. Top right: pix2pixHD. Bottom left: COVST. Bottom right: vid2vid (ours). Click the image to play the video clip in a browser.

Generative Adversarial Networks (GANs). We build our model on GANs goodfellow2014generative . During GAN training, a generator and a discriminator play a zero-sum game. The generator aims to produce realistic synthetic data so that the discriminator cannot differentiate between real and synthesized data. In addition to noise distributions goodfellow2014generative ; radford2015unsupervised ; denton2015deep , various forms of data can be used as input to the generator, including images isola2017image ; zhu2017unpaired ; liu2016unsupervised , categorical labels odena2016conditional ; miyato2018cgans , and textual descriptions reed2016generative ; zhang2017stackgan . Such conditional models are called conditional GANs and allow flexible control over the output of the model. Our method belongs to the category of conditional video generation with GANs. However, instead of predicting future videos conditioned on the currently observed frames mathieu2015deep ; lee2018stochastic ; vondrick2016generating , our method synthesizes photorealistic videos conditioned on manipulable semantic representations, such as segmentation masks, sketches, and poses.

Image-to-image translation algorithms transfer an input image from one domain to a corresponding image in another domain. There exists a large body of work for this problem isola2017image ; taigman2016unsupervised ; bousmalis2016unsupervised ; shrivastava2016learning ; zhu2017unpaired ; liu2016unsupervised ; liu2016coupled ; huang2018munit ; zhu2017toward ; wang2017high . Our approach is their video counterpart. In addition to ensuring that each video frame looks photorealistic, a video synthesis model also has to produce temporally coherent frames, which is a challenging task, especially for a long duration video.

Unconditional video synthesis. Recent work vondrick2016generating ; saito2017temporal ; tulyakov2017mocogan extends the GAN framework to unconditional video synthesis, learning a generator that converts a random vector to a video. VGAN vondrick2016generating uses a spatio-temporal convolutional network. TGAN saito2017temporal projects a latent code to a set of latent image codes and uses an image generator to convert those latent image codes to frames. MoCoGAN tulyakov2017mocogan disentangles the latent space into motion and content subspaces and uses a recurrent neural network to generate a sequence of motion codes. Due to the unconditional setting, these methods often produce low-resolution and short-length videos.

Future video prediction. Conditioning on the observed frames, video prediction models are trained to predict future frames srivastava2015unsupervised ; kalchbrenner2016video ; finn2016unsupervised ; mathieu2015deep ; lotter2016deep ; xue2016visual ; walker2016uncertain ; walker2017pose ; denton2017unsupervised ; villegas2017decomposing ; liang2017dual ; lee2018stochastic . Many of these models are trained with image reconstruction losses, often producing blurry videos due to the classic regress-to-the-mean problem. Also, they fail to generate long duration videos even with adversarial training mathieu2015deep ; liang2017dual . The video-to-video synthesis problem is substantially different because it does not attempt to predict object motions or camera motions. Instead, our approach is conditional on an existing video and can produce high-resolution and long-length videos in a different domain.

Video-to-video synthesis.

While video super-resolution shechtman2005space ; shi2016real , video matting and blending bai2009video ; chen2013motion , and video inpainting wexler2004space can be considered special cases of the video-to-video synthesis problem, existing approaches rely on problem-specific constraints and designs. Hence, these methods cannot be easily applied to other applications. Video style transfer chen2017coherent ; gupta2017characterizing ; huang2017real ; ruder2016artistic , which transfers the style of a reference painting to a natural-scene video, is also related. In Section 4, we show that our method outperforms a strong baseline that combines a recent video style transfer method with a state-of-the-art image-to-image translation approach.

3 Video-to-Video Synthesis

Let s_1^T = {s_1, s_2, ..., s_T} be a sequence of source video frames. For example, it can be a sequence of semantic segmentation masks or edge maps. Let x_1^T = {x_1, x_2, ..., x_T} be the sequence of corresponding real video frames. The goal of video-to-video synthesis is to learn a mapping function that can convert s_1^T to a sequence of output video frames, x̃_1^T = {x̃_1, x̃_2, ..., x̃_T}, so that the conditional distribution of x̃_1^T given s_1^T is identical to the conditional distribution of x_1^T given s_1^T:

p(x̃_1^T | s_1^T) = p(x_1^T | s_1^T).    (1)

Through matching the conditional video distributions, the model learns to generate photorealistic, temporally coherent output sequences as if they were captured by a video camera.

We propose a conditional GAN framework for this conditional video distribution matching task. Let G be a generator that maps an input source sequence to a corresponding output frame sequence: x̃_1^T = G(s_1^T). We train the generator by solving the minimax optimization problem given by

min_G max_D  E_{(x_1^T, s_1^T)}[log D(x_1^T, s_1^T)] + E_{s_1^T}[log(1 − D(G(s_1^T), s_1^T))],    (2)

where D is the discriminator. We note that by solving (2), we minimize the Jensen–Shannon divergence between p(x̃_1^T | s_1^T) and p(x_1^T | s_1^T), as shown by Goodfellow et al. goodfellow2014generative .

Solving the minimax optimization problem in (2) is a well-known, challenging task. Careful designs of network architectures and objective functions are essential to achieve good performance as shown in the literature denton2015deep ; radford2015unsupervised ; huang2017stacked ; zhang2017stackgan ; gulrajani2017improved ; mao2017least ; karras2017progressive ; wang2017high ; miyato2018spectral . We follow the same spirit and propose new network designs and a spatio-temporal objective for video-to-video synthesis as detailed below.

Sequential generator. To simplify the video-to-video synthesis problem, we make a Markov assumption, factorizing the conditional distribution into the product form

p(x̃_1^T | s_1^T) = ∏_{t=1}^{T} p(x̃_t | x̃_{t−L}^{t−1}, s_{t−L}^{t}).    (3)

In other words, we assume the video frames can be generated sequentially, and the generation of the t-th frame only depends on three factors: 1) the current source frame s_t, 2) the past L source frames s_{t−L}^{t−1}, and 3) the past L generated frames x̃_{t−L}^{t−1}. We train a feed-forward network F to model this conditional distribution: x̃_t = F(x̃_{t−L}^{t−1}, s_{t−L}^{t}). We obtain the final output x̃_1^T by applying F in a recursive manner. We found that a small L (e.g., L = 1) causes training instability, while a large L increases training time and GPU memory with minimal quality improvement. In our experiments, we set L = 2.
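As an illustration, the recursion can be sketched as follows. The function `generate_frame` is a toy stand-in for the learned network F (the averaging rule, the bootstrap frame, and all names here are our illustrative assumptions, not the paper's architecture):

```python
import numpy as np

L = 2  # number of past frames the model conditions on

def generate_frame(src_frames, past_generated):
    """Toy stand-in for the learned network F: blends the current
    source frame with the most recent generated frame."""
    return 0.5 * src_frames[-1] + 0.5 * past_generated[-1]

def synthesize(source_frames):
    """Recursively apply F: each generated frame is fed back as input
    when generating the frames that follow."""
    generated = [source_frames[0]]  # bootstrap with the first source frame
    for t in range(1, len(source_frames)):
        lo = max(0, t - L)
        generated.append(generate_frame(source_frames[lo:t + 1],
                                        generated[lo:t]))
    return generated
```

The key design point is that `generated[lo:t]` feeds previously synthesized frames, not ground truth frames, back into the network, matching the recursive application of F at test time.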

Video signals contain a large amount of redundant information in consecutive frames. If the optical flow lucas1981iterative between consecutive frames is known, we can estimate the next frame by warping the current frame vondrick2017generating ; ohnishi2018ftgan . This estimate is largely correct except in the occluded areas. Based on this observation, we model F as

F(x̃_{t−L}^{t−1}, s_{t−L}^{t}) = (1 − m̃_t) ⊙ w̃_{t−1}(x̃_{t−1}) + m̃_t ⊙ h̃_t,    (4)

where ⊙ is the element-wise product operator and 1 is an image of all ones. The first part corresponds to pixels warped from the previous frame, while the second part hallucinates new pixels. The definitions of the other terms in Equation 4 are given below.


  • w̃_{t−1} = W(x̃_{t−L}^{t−1}, s_{t−L}^{t}) is the estimated optical flow from x̃_{t−1} to x̃_t, where W is the optical flow prediction network. We estimate the optical flow using both the input source images s_{t−L}^{t} and the previously synthesized images x̃_{t−L}^{t−1}. By w̃_{t−1}(x̃_{t−1}), we denote warping x̃_{t−1} based on w̃_{t−1}.

  • h̃_t = H(x̃_{t−L}^{t−1}, s_{t−L}^{t}) is the hallucinated image, synthesized directly by the generator H.

  • m̃_t = M(x̃_{t−L}^{t−1}, s_{t−L}^{t}) is the occlusion mask with continuous values between 0 and 1, where M denotes the mask prediction network. Our occlusion mask is soft instead of binary to better handle the “zoom in” scenario. For example, when an object moves closer to the camera, it becomes blurrier over time if we only warp previous frames. To increase the resolution of the object, we need to synthesize new texture details. By using a soft mask, we can add details by gradually blending the warped pixels and the newly synthesized pixels.
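In code, the blend in Equation 4 is simply a per-pixel convex combination of the warped and hallucinated images; the following minimal sketch (function name ours) makes the soft-mask behavior concrete:

```python
import numpy as np

def composite_next_frame(warped_prev, hallucinated, occlusion_mask):
    """Equation-4-style blend: reuse flow-warped pixels where the flow is
    reliable, and fall back to newly hallucinated pixels elsewhere.
    occlusion_mask is soft, with values in [0, 1]; a value of 1 means the
    pixel must be synthesized from scratch."""
    return (1.0 - occlusion_mask) * warped_prev + occlusion_mask * hallucinated

# A half-and-half mask blends the two sources equally.
warped = np.ones((2, 2))
new_pixels = np.zeros((2, 2))
mask = np.full((2, 2), 0.5)
blended = composite_next_frame(warped, new_pixels, mask)
```

Because the mask is continuous rather than binary, warped and hallucinated content can be mixed gradually, which is exactly what the “zoom in” discussion above requires.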

We use residual networks he2016deep for M, W, and H. To generate high-resolution videos, we adopt a coarse-to-fine generator design similar to the method of Wang et al. wang2017high .

As using multiple discriminators can mitigate the mode collapse problem during GAN training ghosh2017multi ; tulyakov2017mocogan ; wang2017high , we also design two types of discriminators, detailed below.

Conditional image discriminator D_I. The purpose of D_I is to ensure that each output frame resembles a real image given the same source image. This conditional discriminator should output 1 for a true pair (x_t, s_t) and 0 for a fake one (x̃_t, s_t).

Conditional video discriminator D_V. The purpose of D_V is to ensure that consecutive output frames resemble the temporal dynamics of a real video given the same optical flow. While D_I conditions on the source image, D_V conditions on the flow. Let w denote the optical flow for K consecutive real images x. This conditional discriminator should output 1 for a true pair (x, w) and 0 for a fake one (x̃, w).

We introduce two sampling operators to facilitate the discussion. First, let φ_I be a random image sampling operator such that φ_I(x_1^T, s_1^T) = (x_i, s_i), where i is an integer uniformly sampled from 1 to T. In other words, φ_I randomly samples a pair of images from (x_1^T, s_1^T). Second, we define φ_V as a sampling operator that randomly retrieves K consecutive frames together with the corresponding optical flow images, where the starting index is an integer uniformly sampled from 1 to T − K + 1. With φ_I and φ_V, we are ready to present our learning objective function.

Learning objective function. We train the sequential video synthesis function F by solving

min_F [ max_{D_I} L_I(F, D_I) + max_{D_V} L_V(F, D_V) + λ_W L_W(F) ],    (5)

where L_I is the GAN loss on images defined by the conditional image discriminator D_I, L_V is the GAN loss on consecutive frames defined by D_V, and L_W is the flow estimation loss. The weight λ_W is chosen by a grid search and kept fixed throughout the experiments. In addition to the loss terms in Equation 5, we use the discriminator feature matching loss larsen2015autoencoding ; wang2017high and the VGG feature matching loss johnson2016perceptual ; dosovitskiy2016generating ; wang2017high , as they improve convergence speed and training stability wang2017high . Please see the appendix for more details.

We further define the image-conditional GAN loss L_I isola2017image using the operator φ_I:

L_I(F, D_I) = E_{φ_I}[log D_I(x_i, s_i)] + E_{φ_I}[log(1 − D_I(x̃_i, s_i))].    (6)

Similarly, the video GAN loss L_V is given by

L_V(F, D_V) = E_{φ_V}[log D_V(x, w)] + E_{φ_V}[log(1 − D_V(x̃, w))].    (7)

Recall that we synthesize a video x̃_1^T by recursively applying F.

The flow loss includes two terms. The first is the endpoint error between the ground truth and the estimated flow, and the second is the warping loss incurred when the flow warps the previous frame to the next frame. Let w_t be the ground truth flow from x_t to x_{t+1}. The flow loss is given by

L_W = (1 / (T − 1)) Σ_{t=1}^{T−1} ( ||w̃_t − w_t||_1 + ||w̃_t(x_t) − x_{t+1}||_1 ).    (8)
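A minimal NumPy sketch of these two terms for a single frame pair follows. The nearest-neighbour warp and the backward-flow convention are our simplifying assumptions; a real implementation would use differentiable bilinear sampling:

```python
import numpy as np

def warp(img, flow):
    """Warp a grayscale image with a flow field using nearest-neighbour
    sampling, under the convention that pixel p of the warped image is
    read from img at p + flow(p)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return img[src_y, src_x]

def flow_loss(est_flow, gt_flow, frame_t, frame_t1):
    """One summand of Equation 8: endpoint error plus warping loss."""
    epe = np.mean(np.abs(est_flow - gt_flow))                 # endpoint error
    warp_err = np.mean(np.abs(warp(frame_t, est_flow) - frame_t1))
    return epe + warp_err
```

A perfect flow estimate on a static scene yields zero loss, while errors in either the flow field or the warped reconstruction increase it.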

Foreground-background prior. When using semantic segmentation masks as the source video, we can divide an image into foreground and background areas based on the semantics. For example, buildings and roads belong to the background, while cars and pedestrians are considered as the foreground. We leverage this strong foreground-background prior in the generator design to further improve the synthesis performance of the proposed model.

In particular, we decompose the image hallucination network H into a foreground model h̃_{F,t} and a background model h̃_{B,t}. We note that background motion can generally be modeled as a global transformation, for which optical flow can be estimated quite accurately. As a result, the background region can be generated accurately via warping, and the background hallucination network only needs to synthesize the occluded areas. On the other hand, a foreground object often has a large motion and only occupies a small portion of the image, which makes optical flow estimation difficult. The network has to synthesize most of the foreground content from scratch. With this foreground–background prior, F is then given by replacing the hallucinated image h̃_t in Equation 4 with a mask-blended version:

F(x̃_{t−L}^{t−1}, s_{t−L}^{t}) = (1 − m̃_t) ⊙ w̃_{t−1}(x̃_{t−1}) + m̃_t ⊙ (m_{B,t} ⊙ h̃_{B,t} + (1 − m_{B,t}) ⊙ h̃_{F,t}),    (9)

where m_{B,t} is the background mask derived from the ground truth segmentation mask s_t. This prior improves the visual quality by a large margin at the cost of minor flickering artifacts. In Table 2, our user study shows that most people prefer the results with foreground–background modeling.
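The foreground–background blend itself is another per-pixel mask combination; a minimal sketch (function name ours, assuming the two hallucinated images have already been produced):

```python
import numpy as np

def hallucinate(fg_image, bg_image, bg_mask):
    """Combine separately hallucinated foreground and background images.
    bg_mask is derived from the input segmentation map: 1 for background
    classes (roads, buildings, ...), 0 for foreground (cars, pedestrians)."""
    return bg_mask * bg_image + (1.0 - bg_mask) * fg_image

# The mask selects which model's output is used at each pixel.
fg = np.full((2, 2), 7.0)
bg = np.full((2, 2), 3.0)
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
out = hallucinate(fg, bg, mask)
```

Because the mask comes from the segmentation input rather than being predicted, each sub-network can specialize: the background model mostly fills occlusions, while the foreground model synthesizes content from scratch.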

Multimodal synthesis. The synthesis network F is a unimodal mapping function: given an input source video, it can only generate one output video. To achieve multimodal synthesis zhu2017toward ; wang2017high ; ghosh2017multi , we adopt a feature embedding scheme wang2017high for source videos that consist of instance-level semantic segmentation masks. Specifically, at training time, we train an image encoder E to encode the ground truth real image x_t into a low-dimensional feature map. We then apply instance-wise average pooling to the map so that all the pixels within the same object share the same feature vector. We then feed both the instance-wise averaged feature map and the input semantic segmentation mask s_t to the generator F. Once training is done, we fit a mixture of Gaussians to the feature vectors that belong to the same object class. At test time, we sample a feature vector for each object instance from the estimated distribution of that object class. Given different feature vectors, the generator F synthesizes videos with different visual appearances.
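The instance-wise average pooling step above can be sketched directly in NumPy (names ours; a real implementation would operate on encoder feature tensors):

```python
import numpy as np

def instance_average_pool(features, instance_map):
    """Replace each pixel's feature vector with the mean feature of its
    object instance, so all pixels of one instance share one vector.
    features: (H, W, C) float array; instance_map: (H, W) integer ids."""
    pooled = np.zeros_like(features)
    for inst_id in np.unique(instance_map):
        mask = instance_map == inst_id
        pooled[mask] = features[mask].mean(axis=0)
    return pooled
```

Collapsing each instance to a single vector is what makes test-time sampling possible: one feature vector drawn per instance fully determines that object's appearance.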

4 Experiments

Implementation details. We train our network in a spatio-temporally progressive manner. In particular, we start by generating low-resolution videos with few frames and progressively increase both the spatial resolution and the sequence length until we generate full-resolution videos. Our coarse-to-fine generator consists of three scales operating at increasing resolutions, the finest at 2048x1024. The mask prediction network M and the flow prediction network W share all weights except for the output layers. We use the multi-scale PatchGAN discriminator architecture isola2017image ; wang2017high for the image discriminator D_I. In addition to being multi-scale in the spatial resolution, our multi-scale video discriminator D_V also looks at different frame rates of the video to ensure both short-term and long-term consistency. See the appendix for more details.

We train our model using the ADAM optimizer kingma2014adam on an NVIDIA DGX1 machine, with the LSGAN loss mao2017least . Due to the high image resolution, even with one short video per batch, we have to use all the GPUs in the DGX1 (8 V100 GPUs, each with 16GB of memory) for training. We distribute the generator computation to one subset of the GPUs and the discriminator computation to the rest. Training takes several days for 2K resolution.

Datasets. We evaluate the proposed approach on several datasets.


  • Cityscapes Cordts2016cityscapes . The dataset consists of street scene videos captured in several German cities. Only a subset of the frames contains ground truth semantic segmentation masks. To obtain the input source videos, we use those frames to train a DeepLabV3 semantic segmentation network chen2017rethinking and apply the trained network to segment all the videos. We use the optical flow extracted by FlowNet2 ilg2017flownet as the ground truth flow w. We treat the instance segmentation masks computed by Mask R-CNN he2017mask as our instance-level ground truth. In summary, the training set contains 2975 videos and the validation set 500 videos, each a short snippet of 30 frames. Finally, we test our method on three long sequences from the Cityscapes demo videos. We will show that although trained on short videos, our model can synthesize long videos.

  • Apolloscape huang2018apolloscape consists of street scene videos captured in Beijing, with video lengths varying considerably across sequences. Like Cityscapes, Apolloscape was constructed for the image/video semantic segmentation task, but we use it for synthesizing videos from semantic segmentation masks. We split the dataset in half for training and validation.

  • Face video dataset rossler2018faceforensics . We use the real videos in the FaceForensics dataset, which contains news-briefing videos from different reporters. We use this dataset for the sketch-video-to-face-video synthesis task. To extract a sequence of sketches from a video, we first apply a face alignment algorithm dlib2009 to localize the facial landmarks in each frame. The facial landmarks are then connected to create the face sketch. For the background, we extract Canny edges outside the face regions. We split the dataset into training and validation sets.

  • Dance video dataset. We download YouTube dance videos for the pose-to-human-motion synthesis task. Each video is a few minutes long, and we crop the central regions. We extract human poses with DensePose Guler2018DensePose and OpenPose cao2017realtime and directly concatenate the results. Each training set includes a dance video from a single dancer, while the test set contains videos of other dance motions or from other dancers.

Table 1: Comparison between competing video-to-video synthesis approaches on Cityscapes.

  Fréchet Inception Dist.    I3D    ResNeXt
  pix2pixHD                  5.57   0.18
  COVST                      5.55   0.18
  vid2vid (ours)             4.66   0.15

  Human Preference Score        short seq.    long seq.
  vid2vid (ours) / pix2pixHD    0.87 / 0.13   0.83 / 0.17
  vid2vid (ours) / COVST        0.84 / 0.16   0.80 / 0.20

Table 2: Ablation study. We compare the proposed approach to its three variants.

  Human Preference Score
  vid2vid (ours) / no background-foreground prior        0.80 / 0.20
  vid2vid (ours) / no conditional video discriminator    0.84 / 0.16
  vid2vid (ours) / no flow warping                       0.67 / 0.33

Table 3: Comparison between future video prediction methods on Cityscapes.

  Fréchet Inception Dist.    I3D     ResNeXt
  PredNet                    11.18   0.59
  MCNet                      10.00   0.43
  vid2vid (ours)             3.44    0.18

  Human Preference Score
  vid2vid (ours) / PredNet    0.92 / 0.08
  vid2vid (ours) / MCNet      0.98 / 0.02

Baselines. We compare our approach to two baselines trained on the same data.


  • pix2pixHD wang2017high is the state-of-the-art image-to-image translation approach. When applying the approach to the video-to-video synthesis task, we process input videos frame-by-frame.

  • COVST is built on coherent video style transfer chen2017coherent by replacing the stylization network with pix2pixHD. The key idea in COVST is to warp high-level deep features using optical flow to achieve temporally coherent outputs. No additional adversarial training is applied. We feed ground truth optical flow to COVST, which is impractical for real applications; in contrast, our model estimates optical flow from the source videos.

Evaluation metrics. We use both subjective and objective metrics for evaluation.


  • Human preference score. We perform a human subjective test to evaluate the visual quality of the synthesized videos, using the Amazon Mechanical Turk (AMT) platform. During each test, an AMT participant is shown two videos at a time (results synthesized by two different algorithms) and asked which one looks more like a video captured by a real camera. We specifically ask the workers to check for both temporal coherence and image quality. A worker must have a lifetime task approval rate greater than 98% to participate in the evaluation. For each question, we gather answers from multiple workers. We evaluate each algorithm by the ratio of questions in which its outputs are preferred.

  • Fréchet Inception Distance (FID) heusel2017gans is a widely used metric for implicit generative models, as it correlates well with the visual quality of generated samples. The FID was originally developed for evaluating image generation. We propose a variant for video evaluation, which measures both visual quality and temporal consistency. Specifically, we use a pre-trained video recognition CNN as a feature extractor after removing the last few layers from the network. This feature extractor serves as our “inception” network. For each video, we extract a spatio-temporal feature map with this CNN. We then compute the mean μ̃ and covariance matrix Σ̃ of the feature vectors from all the synthesized videos, and the same quantities μ and Σ for the ground truth videos. The FID is then calculated as ||μ − μ̃||² + Tr(Σ + Σ̃ − 2(Σ Σ̃)^{1/2}). We use two different pre-trained video recognition CNNs in our evaluation: I3D carreira2017quo and ResNeXt xie2017aggregated .
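The Fréchet distance itself is straightforward to compute once per-video features have been extracted and flattened to one vector each. A minimal NumPy sketch (names ours; it uses the symmetric form Σ^{1/2} Σ̃ Σ^{1/2}, whose square-root trace equals that of (Σ Σ̃)^{1/2} but is friendlier to an eigendecomposition):

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(a)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def fid(real_feats, fake_feats):
    """Fréchet distance between Gaussians fitted to two feature sets,
    each of shape (num_videos, feature_dim)."""
    mu_r, mu_f = real_feats.mean(0), fake_feats.mean(0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    s = _sqrtm_psd(cov_r)
    covmean = _sqrtm_psd(s @ cov_f @ s)  # symmetric form of (cov_r cov_f)^0.5
    return float(np.sum((mu_r - mu_f) ** 2)
                 + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Identical feature sets give a distance of (numerically) zero, and a pure mean shift contributes exactly its squared norm.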

Figure 2: Apolloscape results. Left: pix2pixHD. Center: COVST. Right: proposed. The input semantic segmentation mask video is shown in the left video. Click the image to play the video clip in a browser.

Figure 3: Example multi-modal video synthesis results. These synthesized videos contain different road surfaces. Click the image to play the video clip in a browser.

Figure 4: Example results of changing input semantic segmentation masks to generate diverse videos. Left: tree→building. Right: building→tree. The original video is shown in Figure 1. Click the image to play the video clip in a browser.

Figure 5: Example face→sketch→face results. Each set shows the original video, the extracted edges, and our synthesized video. Click the image to play the video clip in a browser.

Figure 6: Example dance→pose→dance results. Each set shows the original dancer, the extracted poses, and the synthesized video. Click the image to play the video clip in a browser.

Main results. We compare the proposed approach to the baselines on the Cityscapes benchmark, applying the learned models to synthesize short video clips from the validation set. As shown in Table 1, our results have a smaller FID and are often favored by the human subjects. We also report the human preference scores on the three long test videos. Again, the videos rendered by our approach are considered more realistic by the human subjects. The human preference scores for the Apolloscape dataset are given in the appendix.

Figures 1 and 2 show the video synthesis results. Although each frame rendered by pix2pixHD is photorealistic, the resulting video lacks temporal coherence: the road lane markings and building appearances are inconsistent across frames. While improving upon pix2pixHD, COVST still suffers from temporal inconsistency. In contrast, our approach produces high-resolution, photorealistic, temporally consistent video output. We can generate 30-second-long videos, showing that our approach synthesizes convincing videos of longer lengths.

We conduct an ablation study to analyze several design choices of our method. Specifically, we create three variants. In the first variant, we do not use the foreground-background prior; that is, instead of using Equation 9, we use Equation 4. This variant is termed no background-foreground prior. The second variant is no conditional video discriminator, where we do not use the video discriminator D_V for training. In the last variant, we remove the optical flow prediction network W and the mask prediction network M from the generator in Equation 4 and only use H for synthesis. This variant is referred to as no flow warping. We use the human preference score on Cityscapes for this ablation study. Table 2 shows that the visual quality of the output videos degrades significantly without the ablated components. To evaluate the robustness of the flow estimation, we also experimented with directly using ground truth flow instead of the flow estimated by our network. We found the results visually similar, which suggests that our network is robust to errors in the estimated flow.

Multimodal results. Figure 3 shows example multimodal synthesis results. In this example, we keep the sampled feature vectors of all the object instances in the video fixed except for the road instance. The figure shows temporally smooth videos with different road appearances.

Semantic manipulation. Our approach also allows the user to manipulate the semantics of source videos. In Figure 4, we show an example of changing the semantic labels. In the left video, we replace all trees with buildings in the original segmentation masks and synthesize a new video. On the right, we show the result of replacing buildings with trees.

Figure 7: Future video prediction results. Top left: ground truth. Top right: PredNet lotter2016deep . Bottom left: MCNet villegas2017decomposing . Bottom right: ours. Click the image to play the video clip in a browser.

Sketch-to-video synthesis for face swapping. We train a sketch-to-face video synthesis model using the real face videos in the FaceForensics dataset rossler2018faceforensics . As shown in Figure 5, our model can convert sequences of sketches to photorealistic output videos. This model can be used to change the facial appearance in the original face videos bitouk2008face .

Pose-to-video synthesis for human motion transfer. We also apply our method to the task of converting sequences of human poses to photorealistic output videos. We note that the image counterpart was studied in recent works ma2017pose ; esser2018variational ; balakrishnan2018synthesizing ; ma2018disentangled . As shown in Figure 6, our model learns to synthesize high-resolution photorealistic output dance videos that contain unseen body shapes and motions. Our method can change the clothing yang2016detailed ; zheng2017image for the same dancer (Figure 6 left) as well as transfer the visual appearance to new dancers (Figure 6 right) as explored in concurrent work yang2018pose ; chan2018dance ; aberman2018deep .

Future video prediction. We show an extension of our approach to the future video prediction task: learning to predict the future video given a few observed frames. We decompose the task into two sub-tasks: 1) synthesizing future semantic segmentation masks using the observed frames, and 2) converting the synthesized segmentation masks into videos. In practice, after extracting the segmentation masks from the observed frames, we train a generator to predict future semantic masks. We then use the proposed video-to-video synthesis approach to convert the predicted segmentation masks to a future video.

We conduct both quantitative and qualitative evaluations, comparing against two state-of-the-art approaches: PredNet lotter2016deep and MCNet villegas2017decomposing . We follow the prior work vondrick2017generating ; lee2018stochastic and report the human preference score, and we also include FID scores. As shown in Table 3, our model produces smaller FIDs, and the human subjects favor our resulting videos. In Figure 7, we visualize the future video synthesis results. While the image quality of the results from the competing algorithms degrades significantly over time, ours remains consistent.

5 Discussion

We present a general video-to-video synthesis framework based on conditional GANs. Through carefully-designed generators and discriminators as well as a spatio-temporal adversarial objective, we can synthesize high-resolution, photorealistic, and temporally consistent videos. Extensive experiments demonstrate that our results are significantly better than those of state-of-the-art methods. Our method also compares favorably against competing video prediction methods.

Although our approach outperforms previous methods, our model still fails in a couple of situations. For example, our model struggles to synthesize turning cars due to insufficient information in the label maps. This could potentially be addressed by adding additional 3D cues, such as depth maps. Furthermore, our model still cannot guarantee that an object has a consistent appearance across the whole video. Occasionally, a car may change its color gradually. This issue might be alleviated if object tracking information were used to enforce that the same object shares the same appearance throughout the entire video. Finally, when we perform semantic manipulations such as turning trees into buildings, visible artifacts occasionally appear, as buildings and trees have different label shapes. This might be resolved if we train our model with coarser semantic labels, as the trained model would be less sensitive to label shapes.

Acknowledgements We thank Karan Sapra, Fitsum Reda, and Matthieu Le for generating the segmentation maps for us. We also thank Lisa Rhee and Miss Ketsuki for allowing us to use their dance videos for training. We thank William S. Peebles for proofreading the paper.

References

  • (1) K. Aberman, M. Shi, J. Liao, D. Lischinski, B. Chen, and D. Cohen-Or. Deep video-based performance cloning. arXiv preprint arXiv:1808.06847, 2018.
  • (2) K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath. Deep reinforcement learning: A brief survey. IEEE Signal Processing Magazine, 34(6):26–38, 2017.
  • (3) X. Bai, J. Wang, D. Simons, and G. Sapiro. Video snapcut: robust video object cutout using localized classifiers. ACM Transactions on Graphics (TOG), 28(3):70, 2009.
  • (4) G. Balakrishnan, A. Zhao, A. V. Dalca, F. Durand, and J. Guttag. Synthesizing images of humans in unseen poses. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • (5) D. Bitouk, N. Kumar, S. Dhillon, P. Belhumeur, and S. K. Nayar. Face swapping: automatically replacing faces in photographs. In ACM SIGGRAPH, 2008.
  • (6) K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (7) Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2D pose estimation using part affinity fields. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (8) J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (9) C. Chan, S. Ginosar, T. Zhou, and A. A. Efros. Everybody dance now. In European Conference on Computer Vision (ECCV) Workshop, 2018.
  • (10) D. Chen, J. Liao, L. Yuan, N. Yu, and G. Hua. Coherent online video style transfer. In IEEE International Conference on Computer Vision (ICCV), 2017.
  • (11) L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
  • (12) T. Chen, J.-Y. Zhu, A. Shamir, and S.-M. Hu. Motion-aware gradient domain video composition. IEEE Trans. Image Processing, 22(7):2532–2544, 2013.
  • (13) M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The Cityscapes dataset for semantic urban scene understanding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • (14) E. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
  • (15) E. L. Denton and V. Birodkar. Unsupervised learning of disentangled representations from video. In Advances in Neural Information Processing Systems (NIPS), 2017.
  • (16) A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In Advances in Neural Information Processing Systems (NIPS), 2016.
  • (17) P. Esser, E. Sutter, and B. Ommer. A variational u-net for conditional appearance and shape generation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • (18) C. Finn, I. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In Advances in Neural Information Processing Systems (NIPS), 2016.
  • (19) A. Ghosh, V. Kulharia, V. Namboodiri, P. H. Torr, and P. K. Dokania. Multi-agent diverse generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • (20) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks. In Advances in Neural Information Processing Systems (NIPS), 2014.
  • (21) I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of wasserstein GANs. In Advances in Neural Information Processing Systems (NIPS), 2017.
  • (22) A. Gupta, J. Johnson, A. Alahi, and L. Fei-Fei. Characterizing and improving stability in neural style transfer. In IEEE International Conference on Computer Vision (ICCV), 2017.
  • (23) R. A. Güler, N. Neverova, and I. Kokkinos. Densepose: Dense human pose estimation in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • (24) D. Ha and J. Schmidhuber. World models. In Advances in Neural Information Processing Systems (NIPS), 2018.
  • (25) K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In IEEE International Conference on Computer Vision (ICCV), 2017.
  • (26) K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • (27) M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems (NIPS), 2017.
  • (28) H. Huang, H. Wang, W. Luo, L. Ma, W. Jiang, X. Zhu, Z. Li, and W. Liu. Real-time neural style transfer for videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (29) X. Huang, X. Cheng, Q. Geng, B. Cao, D. Zhou, P. Wang, Y. Lin, and R. Yang. The ApolloScape dataset for autonomous driving. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • (30) X. Huang, Y. Li, O. Poursaeed, J. E. Hopcroft, and S. J. Belongie. Stacked generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (31) X. Huang, M.-Y. Liu, S. Belongie, and J. Kautz. Multimodal unsupervised image-to-image translation. In ECCV, 2018.
  • (32) E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (33) P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (34) J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision (ECCV), 2016.
  • (35) J. T. Kajiya. The rendering equation. In ACM SIGGRAPH, 1986.
  • (36) N. Kalchbrenner, A. v. d. Oord, K. Simonyan, I. Danihelka, O. Vinyals, A. Graves, and K. Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016.
  • (37) T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations (ICLR), 2018.
  • (38) D. E. King. Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 2009.
  • (39) D. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
  • (40) A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. In International Conference on Machine Learning (ICML), 2016.
  • (41) A. X. Lee, R. Zhang, F. Ebert, P. Abbeel, C. Finn, and S. Levine. Stochastic adversarial video prediction. arXiv preprint arXiv:1804.01523, 2018.
  • (42) X. Liang, L. Lee, W. Dai, and E. P. Xing. Dual motion GAN for future-flow embedded video prediction. In Advances in Neural Information Processing Systems (NIPS), 2017.
  • (43) M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems (NIPS), 2017.
  • (44) M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In Advances in Neural Information Processing Systems (NIPS), 2016.
  • (45) W. Lotter, G. Kreiman, and D. Cox. Deep predictive coding networks for video prediction and unsupervised learning. In International Conference on Learning Representations (ICLR), 2017.
  • (46) B. D. Lucas, T. Kanade, et al. An iterative image registration technique with an application to stereo vision. In International Joint Conference on Artificial Intelligence (IJCAI), 1981.
  • (47) L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool. Pose guided person image generation. In Advances in Neural Information Processing Systems (NIPS), 2017.
  • (48) L. Ma, Q. Sun, S. Georgoulis, L. Van Gool, B. Schiele, and M. Fritz. Disentangled person image generation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • (49) X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley. Least squares generative adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017.
  • (50) M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. In International Conference on Learning Representations (ICLR), 2016.
  • (51) T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations (ICLR), 2018.
  • (52) T. Miyato and M. Koyama. cGANs with projection discriminator. In International Conference on Learning Representations (ICLR), 2018.
  • (53) A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. In International Conference on Machine Learning (ICML), 2017.
  • (54) K. Ohnishi, S. Yamamoto, Y. Ushiku, and T. Harada. Hierarchical video generation from orthogonal information: Optical flow and texture. In AAAI, 2018.
  • (55) A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations (ICLR), 2015.
  • (56) S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In International Conference on Machine Learning (ICML), 2016.
  • (57) A. Rössler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner. Faceforensics: A large-scale video dataset for forgery detection in human faces. arXiv preprint arXiv:1803.09179, 2018.
  • (58) M. Ruder, A. Dosovitskiy, and T. Brox. Artistic style transfer for videos. In German Conference on Pattern Recognition, 2016.
  • (59) M. Saito, E. Matsumoto, and S. Saito. Temporal generative adversarial nets with singular value clipping. In IEEE International Conference on Computer Vision (ICCV), 2017.
  • (60) A. Schödl, R. Szeliski, D. H. Salesin, and I. Essa. Video textures. ACM Transactions on Graphics (TOG), 2000.
  • (61) E. Shechtman, Y. Caspi, and M. Irani. Space-time super-resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 27(4):531–545, 2005.
  • (62) W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • (63) A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb. Learning from simulated and unsupervised images through adversarial training. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (64) K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • (65) N. Srivastava, E. Mansimov, and R. Salakhudinov. Unsupervised learning of video representations using lstms. In International Conference on Machine Learning (ICML), 2015.
  • (66) Y. Taigman, A. Polyak, and L. Wolf. Unsupervised cross-domain image generation. In International Conference on Learning Representations (ICLR), 2017.
  • (67) S. Tulyakov, M.-Y. Liu, X. Yang, and J. Kautz. MoCoGAN: Decomposing motion and content for video generation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • (68) R. Villegas, J. Yang, S. Hong, X. Lin, and H. Lee. Decomposing motion and content for natural video sequence prediction. In International Conference on Learning Representations (ICLR), 2017.
  • (69) C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. In Advances in Neural Information Processing Systems (NIPS), 2016.
  • (70) C. Vondrick and A. Torralba. Generating the future with adversarial transformers. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (71) J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncertain future: Forecasting from static images using variational autoencoders. In European Conference on Computer Vision (ECCV), 2016.
  • (72) J. Walker, K. Marino, A. Gupta, and M. Hebert. The pose knows: Video forecasting by generating pose futures. In IEEE International Conference on Computer Vision (ICCV), 2017.
  • (73) T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • (74) Y. Wexler, E. Shechtman, and M. Irani. Space-time video completion. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004.
  • (75) Y. Wexler, E. Shechtman, and M. Irani. Space-time completion of video. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 29(3), 2007.
  • (76) S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (77) T. Xue, J. Wu, K. Bouman, and B. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In Advances in Neural Information Processing Systems (NIPS), 2016.
  • (78) C. Yang, Z. Wang, X. Zhu, C. Huang, J. Shi, and D. Lin. Pose guided human video generation. In European Conference on Computer Vision (ECCV), 2018.
  • (79) S. Yang, T. Ambert, Z. Pan, K. Wang, L. Yu, T. Berg, and M. C. Lin. Detailed garment recovery from a single-view image. arXiv preprint arXiv:1608.01250, 2016.
  • (80) H. Zhang, T. Xu, H. Li, S. Zhang, X. Huang, X. Wang, and D. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017.
  • (81) Z.-H. Zheng, H.-T. Zhang, F.-L. Zhang, and T.-J. Mu. Image-based clothes changing system. Computational Visual Media, 3(4):337–347, 2017.
  • (82) J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017.
  • (83) J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman. Toward multimodal image-to-image translation. In Advances in Neural Information Processing Systems (NIPS), 2017.

Appendix A Network Architecture

Figure 8: The network architecture for low-res videos. Our network takes in a number of semantic label maps and previously generated images, and outputs the intermediate frame as well as the flow map and the mask.
Figure 9: The network architecture for higher-resolution videos. The label maps and previous frames are downsampled and fed into the low-res network. Then, the features from the high-res network and the last layer of the low-res network are summed and fed into another series of residual blocks to output the final images.

A.1 Generators

Our network adopts a coarse-to-fine architecture. At the lowest resolution, the network takes in a number of semantic label maps and previously generated frames as input. The label maps are concatenated together and pass through several residual blocks to form intermediate high-level features. We apply the same processing to the previously generated images. Then, these two intermediate feature maps are summed and fed into two separate residual networks to output the hallucinated image as well as the flow map and the mask (Figure 8).
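As a minimal sketch of how such outputs can be combined into the final frame, assuming a soft mask in [0, 1] (this blend is illustrative, not the paper's exact implementation):

```python
import numpy as np

def compose_frame(warped_prev, hallucinated, mask):
    # mask in [0, 1]: values near 1 reuse the optical-flow-warped
    # previous frame; values near 0 take the newly hallucinated
    # pixels (e.g. for occluded or newly appearing regions).
    return mask * warped_prev + (1.0 - mask) * hallucinated

warped = np.ones((4, 4, 3))          # previous frame warped by the flow map
hallucinated = np.zeros((4, 4, 3))   # image synthesized from scratch
mask = np.full((4, 4, 1), 0.25)      # soft blending mask
frame = compose_frame(warped, hallucinated, mask)
print(frame[0, 0])  # each pixel is 0.25 * warped + 0.75 * hallucinated
```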

Next, to go from low-res results to higher-res results, we use another network on top of the low-res network (Figure 9). In particular, we first downsample the inputs and feed them into the low-res network. Then, we extract features from its last feature layer and add them to the intermediate feature layer of the high-res network. These summed features are then fed into another series of residual blocks to output the higher-resolution images.
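A rough sketch of this two-scale wiring; all callables here are hypothetical placeholders for the low-res network, the front of the high-res network, and its final residual blocks:

```python
import numpy as np

def coarse_to_fine(inputs_hi, low_res_net, high_res_head, high_res_tail):
    # Two-scale wiring: downsample inputs for the low-res branch,
    # then sum its last feature layer into the high-res branch.
    inputs_lo = inputs_hi[:, ::2, ::2]          # downsample the inputs
    feats_lo = low_res_net(inputs_lo)           # last low-res feature layer
    feats_hi = high_res_head(inputs_hi)         # intermediate high-res features
    # bring low-res features to the high-res grid and sum
    feats_up = np.repeat(np.repeat(feats_lo, 2, axis=1), 2, axis=2)
    return high_res_tail(feats_hi + feats_up)

identity = lambda x: x                           # stand-in sub-networks
x = np.ones((1, 8, 8))
out = coarse_to_fine(x, identity, identity, identity)
print(out.shape)  # (1, 8, 8)
```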

A.2 Discriminators

For our image discriminator, we adopt the multi-scale PatchGAN architecture isola2017image ; wang2017high . We also design a temporally multi-scale video discriminator by downsampling the frame rates of the real/generated videos. At the finest scale, the discriminator takes several consecutive frames of the original sequence as input. At the next scale, we subsample the video by a fixed factor (i.e., skipping the intermediate frames), and the discriminator takes consecutive frames of this new sequence as input. We do this for up to three scales in our implementation and find that this helps ensure both short-term and long-term consistency. Note that the video discriminator is also multi-scale in the spatial domain, like the image discriminator.
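The frame-rate subsampling can be sketched as follows; `step` and `clip_len` are illustrative values, not the paper's exact hyper-parameters:

```python
def temporal_subsample(frames, n_scales=3, step=3, clip_len=3):
    # Build clips at several temporal scales: at scale i we keep
    # every (step ** i)-th frame, then take clip_len consecutive
    # frames of that subsampled sequence. Coarser scales cover a
    # longer time span with the same number of frames, giving the
    # discriminator both short-term and long-term views.
    clips = []
    for i in range(n_scales):
        sub = frames[::step ** i]
        clips.append(sub[:clip_len])
    return clips

frames = list(range(30))
for clip in temporal_subsample(frames):
    print(clip)
# scale 0 sees [0, 1, 2]; scale 1 sees [0, 3, 6]; scale 2 sees [0, 9, 18]
```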

A.3 Feature matching loss

In our learning objective function, we also add a VGG feature matching loss and a discriminator feature matching loss to improve training stability. For the VGG feature matching loss, we use the VGG network simonyan2014very as a feature extractor and minimize the L1 loss between the features extracted from the real and the generated images. In particular, we add $\sum_i \frac{1}{P_i} \| \phi^{(i)}(x) - \phi^{(i)}(\tilde{x}) \|_1$ to our objective, where $\phi^{(i)}$ denotes the $i$-th layer, with $P_i$ elements, of the VGG network. Similarly, we adopt the discriminator feature matching loss to match the statistics of features extracted by the GAN discriminators. We use both the image discriminator and the video discriminator.
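A minimal sketch of such a feature matching loss over precomputed per-layer feature lists (illustrative only; in training, the features would come from VGG or the discriminators):

```python
import numpy as np

def feature_matching_loss(feats_real, feats_fake):
    # feats_* are lists of per-layer feature maps; taking the mean
    # of the absolute differences within each layer plays the role
    # of the 1/P_i normalization over that layer's P_i elements.
    loss = 0.0
    for fr, ff in zip(feats_real, feats_fake):
        loss += np.abs(fr - ff).mean()
    return loss

real = [np.ones((8, 8)), np.zeros((4, 4))]
fake = [np.zeros((8, 8)), np.zeros((4, 4))]
print(feature_matching_loss(real, fake))  # 1.0 (from the first layer only)
```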

Appendix B Evaluation for the Apolloscape Dataset

We provide both the FID and the human preference score on the Apolloscape dataset. For both metrics, our method outperforms the other baselines.

FID (lower is better)
  Inception network:           I3D      ResNeXt
  pix2pixHD                    2.33     0.128
  COVST                        2.36     0.128
  vid2vid (ours)               2.24     0.125

Human preference score (higher is better)
  vid2vid (ours) / pix2pixHD   0.61 / 0.39
  vid2vid (ours) / COVST       0.59 / 0.41

Table 4: Comparison between competing video-to-video synthesis approaches on Apolloscape.