Future Video Synthesis with Object Motion Prediction

04/01/2020 ∙ by Yue Wu, et al.

We present an approach to predict future video frames given a sequence of continuous video frames in the past. Instead of synthesizing images directly, our approach is designed to understand the complex scene dynamics by decoupling the background scene and moving objects. The appearance of the scene components in the future is predicted by non-rigid deformation of the background and affine transformation of moving objects. The anticipated appearances are combined to create a reasonable video in the future. With this procedure, our method exhibits far fewer tearing and distortion artifacts than other approaches. Experimental results on the Cityscapes and KITTI datasets show that our model outperforms the state of the art in terms of visual quality and accuracy.


1 Introduction

Can an artificial intelligence system predict a photorealistic video conditioned on past visual observations? With an accurate video prediction model, an intelligent agent can plan its motion according to the predicted video. Future video generation techniques can also be used to synthesize a long video by repeatedly extending the future of the video. Video prediction has been adopted in various applications such as sensorimotor control, autonomous driving, and video analysis [9, 33, 24, 39].

Video prediction remains unsolved, especially when frames must be synthesized over an extended period. Existing methods tend to generate blurry and distorted images in which rigid objects are bent and spread out. This issue indicates that several aspects must be considered: forecasting the motion of dynamic objects, creating new visual data for unveiled regions, and finding spatio-temporal relationships when two objects overlap, among others. Therefore, to generate realistic future video, understanding essential information such as the semantics, shape, and dynamics of the scene is necessary.

Most existing methods tackle the video prediction task by generating future video frames one by one in an unsupervised fashion [23, 48, 38, 6]. These approaches synthesize future frames at the pixel level without explicit modeling of the motion or semantics of the scene. Thus, it is difficult for such models to grasp the concept of object boundaries and create different movements for different objects. For instance, a moving car should be treated individually instead of modeling the car and the background scene as a whole. Recently, Wang et al. [42] proposed a general video-to-video translation model (vid2vid) that demonstrates future video prediction as a sub-task. The model takes semantic maps in the past and estimates future semantic maps to synthesize the next video frame. With this idea, the generated video preserves the structure of objects better, but the shape of objects still deforms unnaturally in the long term. To synthesize more realistic future videos, we find that explicit modeling of object trajectories is highly beneficial.

The key idea of our video prediction model is to synthesize future video frames conditioned on predicted object trajectories. The trajectory of an object is defined as its 2D pixel location in each video frame. In particular, we identify each dynamic object and predict its moving path, scale change, and shape in the future. The appearance of an object in the next few frames can be roughly approximated by applying an affine transformation to the object segment in the last input frame. In this way, the appearance is highly regularized, which avoids unexpected deformation. For the background and static objects, we directly predict a motion field between the last frame and each future frame, and then warp the background image with the estimated motion field. Dynamic objects are then placed onto this predicted future background. Since the future background images may contain missing regions due to occlusion, we apply refinement steps to complete the missing areas and harmonize the components. Our experiments indicate that our approach synthesizes future videos that are more photo-realistic than those of state-of-the-art video prediction methods.

Figure 1: Results of predicting future frames at three time steps on the Cityscapes dataset [5]. Columns: Voxel-Flow [22], MCNet [37], vid2vid [42], and Ours.

2 Related Work

Future frame synthesis was initially studied at the patch level [35]. Recent advances in future prediction from image sequences can be classified into three categories.

Single image prediction. This class of works synthesizes a single frame for the next time step. Patraucean et al. [29] use a convolutional version of long short-term memory (LSTM). Lotter et al. [23] introduce a predictive coding network, and Byeon et al. [3] improve image quality using parallel multi-dimensional LSTM. Liang et al. [20] apply generative adversarial losses [12] to both the predicted optical flow and the synthesized image to explicitly enforce consistency. Liu et al. [21] introduce an efficient atomic operator to predict the next frame in an unsupervised manner.

Long-term prediction. More recently, long-term video prediction has become an active research area. Srivastava et al. [33] use LSTMs to encode and decode video sequences. Denton and Fergus [6] introduce an approach that produces plausible frame predictions with stochastic latent variables to generate sharp frames. Mathieu et al. [28] propose a multi-scale approach that reduces blur artifacts by going beyond the mean squared error loss. Lee et al. [18] combine latent variables for stochastic reasoning with an adversarial loss for photo-realistic image synthesis. Wichers et al. [45] introduce a hierarchical approach without using ground-truth annotations of high-level structures. A probabilistic approach by Xue et al. [48] synthesizes various motions from a single image. Villegas et al. [37] and Reda et al. [30] involve a motion encoder to explicitly model foreground motion. Wang et al. [42] propose an advanced framework that can synthesize long-term video; the power of this approach comes from a concrete design of generative adversarial losses in both the image domain and the temporal domain. Ye et al. [50] propose a pixel-level future prediction approach that, given a single image, predicts the future states of independent entities. Castrejon et al. [4] use variational recurrent neural networks with higher-capacity likelihood models. Gao et al. [10] introduce a confidence-aware warping operator to predict occluded and disoccluded areas separately. Ho et al. [53] propose a parametric video prediction approach based on a sparse motion field.

Other tasks. In addition to next-frame or long-term image synthesis, additional tasks, including predicting scene semantics or the motion of dynamic objects, have been studied. A method by Walker et al. [41] predicts the movement of the foreground object from a single image, and recent works [1, 2, 13, 17, 26, 49, 8, 31] demonstrate that human movements or trajectories can be estimated successfully from real datasets. Vondrick et al. [40] synthesize a one-second video from weakly annotated natural videos using a network that understands the dynamics of foreground objects and motion classes. Jin et al. [16] propose a fully convolutional network that predicts the semantic labels and optical flow of the next frame.

Our approach is in line with recent works [7, 37, 25] that decouple the stationary and moving parts of the scene. We incorporate high-level semantics and instances to explicitly consider the movements of individual foreground objects. As shown in the paper, the synthesized frames are more realistic than those of the previous state of the art on complex scenes, such as real-world driving videos.

3 Model

Problem definition. Let $I_t$ be the video frame at time step $t$, $S_t$ and $M_t$ be the corresponding semantic and instance maps of $I_t$, and $F_{a \to b}$ be the motion field (optical flow) from frame $a$ to frame $b$. Our video prediction task is then formulated as follows: given input video frames $I_1, \dots, I_n$, semantic maps $S_1, \dots, S_n$, and instance maps $M_1, \dots, M_n$ from time steps $1, \dots, n$, together with the optical flows between consecutive frames, predict the future video frames $I_t$ for $t = n+1, \dots, T$, where $n$ is the index of the last input frame and $T$ is the index of the last predicted frame. To solve this problem, we propose a separate-predict-composite approach to produce realistic future frames.

Overview. To begin with, we classify objects into dynamic and static ones to trace and handle the various motions in the scene effectively. We train a moving object detection network to separate moving objects from the static scene. This idea differs from previous approaches that divide frames into foreground and background regions based on semantic classes. After obtaining dynamic and static regions, we predict the optical flow of the static scene and warp the last input video frame to obtain future frames of the static scene. Then, we use a background-aware spatial transformer network (STN) to predict the motion of dynamic objects. The holes in the warped static scenes are filled using an image inpainting method [52]. The warped images serve as the future background information for the STN. The estimated static and dynamic scenes are composed in the last stage to generate a seamless image. Fig. 2 illustrates the proposed pipeline.

Figure 2: Overview of the proposed architecture. We use a dynamic object detection model to separate moving objects from the static background. The missing foreground areas in the generated future background are filled by an inpainting model. Given the future background images, we apply a spatial transformer to predict moving objects. After that, we composite the foreground and background images and use a video inpainting module to fill the occluded areas.

3.1 Moving object detection

There are two major causes of appearance change between consecutive frames. The first is the dynamic motion of moving objects, and the second is the ego-motion of the camera. To handle such scenes effectively, we train a moving object detection network to separate moving objects from the static scene. Based on the Cityscapes-Motion and KITTI-Motion datasets [36], which provide annotations of moving areas, we build an encoder-decoder architecture with a ResNet50 backbone [14] to detect moving regions. The input of the network is the observed sequence of frames, semantic maps, instance maps, and optical flows between consecutive frames. The output of the network is a binary mask that indicates the regions of moving objects.
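As a concrete illustration of this component, the following PyTorch sketch shows one way such an encoder-decoder detector could be assembled; stacking all inputs along the channel axis, the channel counts, and the decoder depth are our assumptions for illustration rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torchvision

class MovingObjectDetector(nn.Module):
    """Minimal sketch of a moving-object detection network with a ResNet50
    encoder and a light decoder; layer sizes are illustrative assumptions."""
    def __init__(self, in_channels):
        super().__init__()
        backbone = torchvision.models.resnet50()
        # Accept stacked frames, semantic/instance maps, and flow as one tensor.
        backbone.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # drop pool/fc
        self.decoder = nn.Sequential(
            nn.Conv2d(2048, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=8, mode='bilinear', align_corners=False),
            nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        # Returns a per-pixel probability of belonging to a moving object.
        return torch.sigmoid(self.decoder(self.encoder(x)))
```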

3.2 Background prediction

With the moving objects identified, the pipeline handles the motion of the static scene, which is predicted by an optical flow network. The network predicts the forward and backward optical flow between the last observed frame and each future frame (backward optical flow is used for warping to the future because this avoids warping artifacts). Note that our pipeline does not predict frame-wise motion recursively. Instead, the batch prediction of flow maps alleviates the effect of accumulated error and possible blur artifacts.

Generative model. We propose a conditional generative adversarial network to predict the future optical flow. The pipeline has one generator $G_B$ and two types of discriminators: a frame discriminator $D_I$ that evaluates a single frame and a video discriminator $D_V$ that evaluates the temporal coherence of multiple video frames. The generator is an encoder-decoder structure with skip connections. The encoder follows the structure of ResNet50 [14], with the activation function replaced by Leaky ReLU [27]. The input of the encoder is a tensor that collates the sequential input images, the sequential semantic layouts generated by [56], the sequential instance maps computed by [46], and the sequential optical flows between consecutive input frames estimated by PWC-Net [34].

The decoder consists of several upsample modules. We employ a multi-scale strategy to predict optical flow at different spatial resolutions. The input to each module is a concatenation of the feature maps produced at the corresponding resolution by the encoder, the feature maps provided by the preceding module, and the optical flow predicted at the preceding scale. Each upsample module consists of a bilinear upsampling layer and a convolutional layer to recover the spatial resolution.
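A single upsample module of this decoder might look like the sketch below; the channel arithmetic and the separate flow-prediction head are assumptions made for illustration, not the authors' exact layer layout.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleModule(nn.Module):
    """One decoder stage: fuse encoder skip features, coarser decoder features,
    and the coarser flow estimate, then refine the flow at this resolution."""
    def __init__(self, enc_channels, dec_channels, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(enc_channels + dec_channels + 2, out_channels,
                              kernel_size=3, padding=1)  # +2 for the coarse flow
        self.predict_flow = nn.Conv2d(out_channels, 2, kernel_size=3, padding=1)

    def forward(self, enc_feat, dec_feat, coarse_flow):
        # Bilinearly upsample decoder features and coarse flow to this scale.
        dec_feat = F.interpolate(dec_feat, size=enc_feat.shape[-2:],
                                 mode='bilinear', align_corners=False)
        coarse_flow = F.interpolate(coarse_flow, size=enc_feat.shape[-2:],
                                    mode='bilinear', align_corners=False)
        x = torch.cat([enc_feat, dec_feat, coarse_flow], dim=1)
        x = F.leaky_relu(self.conv(x), 0.2)
        return x, self.predict_flow(x)
```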

The loss of the frame discriminator $D_I$ checks whether the estimated flow creates artifacts when it is used for warping:

$\mathcal{L}_{D_I} = \sum_{t=n+1}^{T} \big( \mathbb{E} [\log D_I(I_t, F_t)] + \mathbb{E} [\log(1 - D_I(\hat{I}_t, \hat{F}_t))] \big)$,   (1)

where $\hat{F}_t$ is the optical flow between the last observed frame $I_n$ and frame $t$ predicted by $G_B$, $\hat{I}_t$ is the image obtained by inversely warping $I_n$ with $\hat{F}_t$, and $F_t$ is the corresponding ground-truth optical flow. The loss on $D_V$ is defined as:

$\mathcal{L}_{D_V} = \mathbb{E} [\log D_V(\mathcal{I}, \mathcal{F})] + \mathbb{E} [\log(1 - D_V(\hat{\mathcal{I}}, \hat{\mathcal{F}}))]$,   (2)

where $\mathcal{I}$ concatenates the images $I_{n+1}, \dots, I_T$ in a channel-wise manner, $\mathcal{F}$ is the concatenated optical flow, and $\hat{\mathcal{I}}$ and $\hat{\mathcal{F}}$ are defined similarly for the predictions. In contrast to $\mathcal{L}_{D_I}$, this loss penalizes unrealistic images and motion by directly analyzing a range of image frames and flow maps. This is realized by concatenating frames to learn temporal changes, so unrealistic temporal behavior is discouraged.

Flow evaluation. We add a loss $\mathcal{L}_{\text{flow}}$ to evaluate the estimated flow. It is a linear combination of multiple criteria, $\mathcal{L}_{\text{flow}} = \lambda_{d}\mathcal{L}_{\text{data}} + \lambda_{p}\mathcal{L}_{\text{perc}} + \lambda_{s}\mathcal{L}_{\text{sm}} + \lambda_{c}\mathcal{L}_{\text{fb}}$, where $(\lambda_{d}, \lambda_{p}, \lambda_{s}, \lambda_{c})$ is empirically set to $(1.0, 15.0, 1.0, 1.0)$.

$\mathcal{L}_{\text{data}}$ is a data term that penalizes the discrepancy between the predicted flow and the flow estimated from real images:

$\mathcal{L}_{\text{data}} = \sum_{t=n+1}^{T} \| C_t \odot (\hat{F}_t - F_t) \|_1$,   (3)

where the confidence map $C_t$ indicates whether the optical flow at each pixel is valid.

We also compute a perceptual loss between the warped image and the ground-truth image. We use the VGG19 model [32] for feature extraction and define an $\ell_1$ loss between warped and ground-truth images in the feature domain:

$\mathcal{L}_{\text{perc}} = \sum_{t=n+1}^{T} \sum_{i=1}^{W} \frac{1}{N_i} \|\phi_i(\hat{I}_t) - \phi_i(I_t)\|_1$,   (4)

where $W$ is the number of VGG feature layers and $\phi_i$ denotes the feature map from the $i$-th layer of the VGG-19 network, with $N_i$ feature elements. To make the predicted optical flow coherent with the image structure, we adopt a smoothness loss for the optical flow weighted by the image gradient:

$\mathcal{L}_{\text{sm}} = \sum_{t=n+1}^{T} \|\nabla \hat{F}_t\|_1 \, e^{-\|\nabla I_t\|_1}$,   (5)

where $\nabla$ indicates the gradient operator.

To make the training more stable, we use a forward-backward consistency loss [51]:

$\mathcal{L}_{\text{fb}} = \sum_{t=n+1}^{T} \sum_{p} \delta(p)\, \Delta_t(p)$,   (6)

where $\Delta_t(p)$ is the discrepancy obtained from the forward-backward flow check at pixel location $p$. It is defined as $\Delta_t(p) = \|\hat{F}^{f}_t(p) + \hat{F}^{b}_t(p')\|^2$, where $p'$ is the location reached from $p$ by the forward flow. $\delta(p)$ is a conditional scalar for robustness: $\delta(p)$ is 1 if $\Delta_t(p) < \max(\alpha, \beta \|\hat{F}^{f}_t(p)\|^2)$ and 0 otherwise, where $(\alpha, \beta)$ is empirically set to $(3, 0.05)$. Pixels where the forward and backward flows contradict severely are regarded as possible outliers.
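The sketch below shows how the data term, the edge-aware smoothness term, and the forward-backward consistency check could be written in PyTorch; it assumes the backward flow has already been warped into the forward frame's coordinates, and the exact reductions are illustrative.

```python
import torch

def flow_data_loss(pred_flow, gt_flow, confidence):
    """L1 data term, masked by a confidence map of valid ground-truth flow."""
    return (confidence * (pred_flow - gt_flow).abs()).mean()

def edge_aware_smoothness(flow, image):
    """First-order flow smoothness, down-weighted at image edges."""
    flow_dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs()
    flow_dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs()
    img_dx = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(1, keepdim=True)
    img_dy = (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean(1, keepdim=True)
    return (flow_dx * torch.exp(-img_dx)).mean() + (flow_dy * torch.exp(-img_dy)).mean()

def forward_backward_mask(flow_fw, flow_bw_warped, alpha=3.0, beta=0.05):
    """Consistency mask: 1 where forward and (pre-warped) backward flows agree,
    0 where they contradict, mirroring the check described above."""
    diff_sq = (flow_fw + flow_bw_warped).pow(2).sum(1, keepdim=True)
    bound = torch.clamp(beta * flow_fw.pow(2).sum(1, keepdim=True), min=alpha)
    return (diff_sq < bound).float()
```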

As a result, we train the flow prediction network using a combination of the proposed losses (analogous losses are also defined for the opposite flow direction to improve consistency):

$\mathcal{L}_{B} = \lambda_{I}\mathcal{L}_{D_I} + \lambda_{V}\mathcal{L}_{D_V} + \mathcal{L}_{\text{flow}}$,   (7)

where the weights $\lambda_{I}$ and $\lambda_{V}$ for the frame discriminator and the video discriminator are empirically set to 1.0 and 2.0. Here we use a multi-scale loss, defined as the sum of the losses when images are evaluated at different resolutions: full resolution, half resolution, quarter resolution, and so on.
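The multi-scale evaluation can be realized by downsampling the predictions and targets and summing a base loss over the pyramid, as in this sketch (the pyramid depth and pooling choice are assumptions):

```python
import torch.nn.functional as F

def multiscale_loss(pred, target, loss_fn, num_scales=3):
    """Sum an image-space loss over full, half, and quarter resolutions."""
    total = 0.0
    for s in range(num_scales):
        if s > 0:
            pred = F.avg_pool2d(pred, kernel_size=2)
            target = F.avg_pool2d(target, kernel_size=2)
        total = total + loss_fn(pred, target)
    return total
```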

Background inpainting. For better future prediction, we decompose the input image into moving objects and the static scene. After extracting moving objects, the areas where moving objects were located remain blank. Such blank regions are filled with an inpainting network based on Wasserstein GANs with a contextual attention layer [52]. To further improve the inpainting network, we fine-tune it on randomly cropped patches from background classes (such as buildings, trees, or roads in traffic scenes). This procedure produces an inpainted background image from the original image with holes.

The background inpainting operation is necessary because the inpainted background serves as extra guidance for dynamic object trajectory prediction. Without background inpainting, the regions of moving objects would remain black, and the trajectory prediction module would overfit by predicting motion that matches the black pixels, which is undesirable.
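A minimal sketch of this step is shown below; `inpaint_model` stands in for the contextual-attention inpainting network [52], and its interface (masked image plus hole mask in, completed image out) is an assumption for illustration.

```python
import torch

def inpaint_background(last_frame, moving_mask, inpaint_model):
    """Remove moving objects from the last frame and fill the resulting holes."""
    holes = (moving_mask > 0.5).float()      # 1 inside dynamic regions
    masked = last_frame * (1.0 - holes)      # background with holes zeroed out
    with torch.no_grad():
        completed = inpaint_model(masked, holes)  # hypothetical interface
    # Keep original pixels outside the holes, inpainted content inside them.
    return last_frame * (1.0 - holes) + completed * holes
```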

Figure 3: Training losses for dynamic motion prediction. Our approach places the objects predicted by the spatial transformer onto the predicted background images to generate virtual images. We also use two discriminators to ensure that the locations of the predicted objects are spatially and temporally coherent.

3.3 Dynamic object motion prediction

Our approach identifies dynamic objects in the scene and handles their motion explicitly. Instead of treating a cluttered scene as a whole, this scheme helps the model understand the history of each individual object so that it can predict the future better. We presume that the motion of dynamic objects can be adequately approximated by a 2D affine transformation. Due to this rigid motion constraint, the predicted appearance does not exhibit the distortion or unrealistic textures that are common problems in previous approaches. Our model detects all moving objects, and each object is treated separately by our transformation network.

Network. The input to the motion prediction network for the $i$-th object is a sequence of binary object masks $m_{1:n}^{i}$, optical flows, semantic maps $S_{1:n}$, object appearances $o_{1:n}^{i}$, and the inpainted background images $\hat{B}_{n+1:T}$. The network produces a series of 2D affine transformations that express the predicted object motion. Note that the network takes background images as input because the location of an object is highly related to the background; for example, a car should be placed on the road, and trees should not block the road. Without background information, the network may predict unrealistic trajectories because the prediction would be based purely on past motion.

The network is an encoder architecture that outputs the parameters of a series of 2D affine transformations $A_{n+1}^{i}, \dots, A_{T}^{i}$. A grid sampler then transforms the coordinates of the object's pixels in the last observed frame using the estimated parameters. By combining the estimated background image $\hat{B}_t$ and the transformed object $\hat{o}_t^{i}$, we build a composite image $\tilde{I}_t$.
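In PyTorch, the grid sampler step can be expressed with `affine_grid` and `grid_sample`, as sketched below; packing the object's appearance and mask into a four-channel tensor is our assumption for illustration.

```python
import torch.nn.functional as F

def transform_object(obj_rgba, theta):
    """Warp an object crop (N x 4 x H x W: RGB + binary mask) with a predicted
    2D affine transform theta (N x 2 x 3)."""
    grid = F.affine_grid(theta, obj_rgba.size(), align_corners=False)
    return F.grid_sample(obj_rgba, grid, align_corners=False)

def paste_object(background, warped_obj):
    """Place the warped object onto the predicted background using its mask."""
    rgb, mask = warped_obj[:, :3], warped_obj[:, 3:4]
    return background * (1 - mask) + rgb * mask
```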

Similar to the background prediction module, the motion prediction network is equipped with two discriminators: a single object discriminator $D_O$ and an object sequence discriminator $D_S$. The input of $D_O$ is a pair of an object mask and a composed image, and it determines whether the predicted location is natural; this discriminator suppresses unreasonable placements, such as cars on a building. The composed image is made by placing a transformed object on the inpainted background. The input of $D_S$ is a sequence of masks representing the object trajectory, and it determines whether the predicted object trajectory is reasonable.

We define the discriminator loss on the single object discriminator $D_O$ and the discriminator loss on the object sequence discriminator $D_S$ as follows:

$\mathcal{L}_{D_O} = \sum_{t=n+1}^{T} \big( \mathbb{E} [\log D_O(m_t^{i}, I_t)] + \mathbb{E} [\log(1 - D_O(\hat{m}_t^{i}, \tilde{I}_t))] \big)$,   (8)
$\mathcal{L}_{D_S} = \mathbb{E} [\log D_S(m_{n+1:T}^{i})] + \mathbb{E} [\log(1 - D_S(\hat{m}_{n+1:T}^{i}))]$,   (9)

where $\mathcal{L}_{D_O}$ is the GAN loss on mask and synthetic image pairs defined by the single object discriminator $D_O$, and $\mathcal{L}_{D_S}$ is the GAN loss on sequential masks defined by the object sequence discriminator $D_S$. $\tilde{I}_t$ is the composite of the transformed object and the background information, and $\hat{m}_t^{i}$ is the binary mask of the moving object.

Another loss $\mathcal{L}_{O}$ consists of three terms and is equivalent to $\lambda_{a}\mathcal{L}_{\text{app}} + \lambda_{t}\mathcal{L}_{\text{smooth}} + \lambda_{r}\mathcal{L}_{\text{reg}}$, where $(\lambda_{a}, \lambda_{t}, \lambda_{r})$ is set to $(1.0, 1.0, 2.0)$. $\mathcal{L}_{\text{app}}$ is the difference between the appearance of the $i$-th object in the $t$-th frame and its ground truth: $\mathcal{L}_{\text{app}} = \sum_{t} \|\hat{o}_t^{i} - o_t^{i}\|_1$, where $\hat{o}_t^{i}$ is the transformed object. $\mathcal{L}_{\text{smooth}}$ is a smoothness loss that improves the temporal coherency of the predicted parameters: $\mathcal{L}_{\text{smooth}} = \sum_{t} \|A_{t+1}^{i} - A_{t}^{i}\|_1$. $\mathcal{L}_{\text{reg}}$ is a regularization term on the predicted parameters that prevents abrupt changes from the original state, i.e., the identity transform $A_{\text{id}}$: $\mathcal{L}_{\text{reg}} = \sum_{t} \|A_{t}^{i} - A_{\text{id}}\|_1$.
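These three terms could be computed as in the following sketch; the use of L1 distances and the tensor layout (a T x N x 2 x 3 stack of affine parameters) are assumptions for illustration.

```python
import torch

def motion_regularizers(theta_seq, warped_objs, gt_objs, weights=(1.0, 1.0, 2.0)):
    """Appearance, temporal-smoothness, and identity-regularization terms."""
    w_app, w_smooth, w_reg = weights
    # Appearance: warped object vs. its ground truth in each future frame.
    l_app = (warped_objs - gt_objs).abs().mean()
    # Temporal smoothness of consecutive affine parameters.
    l_smooth = (theta_seq[1:] - theta_seq[:-1]).abs().mean()
    # Penalize deviation from the identity transform to avoid abrupt changes.
    identity = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                            device=theta_seq.device)
    l_reg = (theta_seq - identity).abs().mean()
    return w_app * l_app + w_smooth * l_smooth + w_reg * l_reg
```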

As a result, for each moving object we train a separate motion estimation network, and the loss for training this network is:

$\mathcal{L}_{G_F} = \lambda_{O}\,(\mathcal{L}_{D_O} + \mathcal{L}_{D_S}) + \mathcal{L}_{O}$,   (10)

where $\lambda_{O}$ is an empirically set weight and $G_F$ is the foreground object generator.

Training data generation. The Cityscapes dataset [5] and the KITTI dataset [11] do not provide tracking information for each instance. Therefore, we employ a tracking algorithm to produce data for training the proposed network. We first generate instance masks using the approach of Xiong et al. [46]. Then, the Siamese tracking algorithm of [19] is employed to obtain bounding boxes of the tracked objects in a video sequence. After obtaining the bounding boxes, we compute the intersection of the bounding boxes and the instance maps to obtain the corresponding binary masks. We employ several strategies to remove tracking failures; for instance, we compute the SSIM [43] score between tracked objects to determine whether they are the same object.
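One such filtering strategy could look like the sketch below, which compares two tracked crops with SSIM; the grayscale conversion, common crop size, and threshold are illustrative choices rather than the authors' exact rule.

```python
from skimage.color import rgb2gray
from skimage.metrics import structural_similarity
from skimage.transform import resize

def same_object(patch_a, patch_b, threshold=0.4, size=(64, 64)):
    """Heuristic check that two tracked crops show the same object."""
    a = resize(rgb2gray(patch_a), size)
    b = resize(rgb2gray(patch_b), size)
    return structural_similarity(a, b, data_range=1.0) > threshold
```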

3.4 Background-foreground composition

After predicting the motion of the background scene and moving objects, the composition module fuses the scene components to create future video frames. We determine the relative depth order of moving objects according to the relative depth obtained by GeoNet [51], and then place the moving objects one by one onto the predicted background. Note that although we already have hole-filled background images, directly using those frames to produce the output lacks temporal coherence.
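The depth-ordered pasting can be sketched as follows; representing each object as an (rgb, mask) pair and treating larger depth values as farther away are assumptions for illustration.

```python
def compose_scene(background, objects, depths):
    """Paste moving objects onto the predicted background from far to near so
    that closer objects correctly occlude farther ones."""
    frame = background.clone()
    order = sorted(range(len(objects)), key=lambda i: depths[i], reverse=True)
    for i in order:
        rgb, mask = objects[i]
        frame = frame * (1 - mask) + rgb * mask
    return frame
```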

Therefore, we adopt a video inpainting approach to minimize flickering artifacts. Following [47], we utilize the forward and backward optical flow between consecutive frames and employ a consistency check to find valid optical flow. With valid optical flow, we build connections between pixels across consecutive frames. Pixels with valid flow are propagated bidirectionally to fill the missing regions, and this procedure is repeated to minimize the holes in the video. If missing regions still remain, the image inpainting method [52] is employed to fill them.

Figure 4: Results of predicting future frames at three time steps on the KITTI dataset [11]. Columns: Voxel-Flow [22], MCNet [37], and Ours.

4 Experiments

Cityscapes (MS-SSIM / LPIPS):
                    Next frame        Next 5 frames     Next 10 frames
PredNet [23]        0.8403 / 0.2599   0.7521 / 0.3603   0.6633 / 0.5221
MCNET [37]          0.8969 / 0.1888   0.7058 / 0.3734   0.5971 / 0.4513
Voxel Flow [22]     0.8385 / 0.1737   0.7111 / 0.2879   0.6341 / 0.3655
Vid2vid [42]        0.8816 / 0.1058   0.7513 / 0.2014   0.6690 / 0.2705
Ours-WC             0.8792 / 0.0903   0.7430 / 0.1718   0.6593 / 0.2411
Ours-WM             0.8866 / 0.0899   0.7537 / 0.1694   0.6727 / 0.2351
Ours                0.8910 / 0.0850   0.7568 / 0.1650   0.6741 / 0.2328

KITTI (MS-SSIM / LPIPS):
                    Next frame        Next 3 frames     Next 5 frames
PredNet [23]        0.5626 / 0.5535   0.5147 / 0.5866   0.4756 / 0.6295
MCNET [37]          0.7535 / 0.2405   0.6352 / 0.3171   0.5548 / 0.3739
Voxel Flow [22]     0.5393 / 0.3247   0.4699 / 0.3743   0.4262 / 0.4159
Vid2vid [42]        -                 -                 -
Ours-WC             0.6853 / 0.2252   0.5850 / 0.2897   0.5217 / 0.3482
Ours-WM             0.7634 / 0.1987   0.6504 / 0.2588   0.5839 / 0.3136
Ours                0.7928 / 0.1848   0.6765 / 0.2461   0.6077 / 0.3049
Table 1: Comparison with state-of-the-art methods on the Cityscapes and KITTI datasets in terms of the image quality of the synthesized frames. Higher MS-SSIM is better; lower LPIPS is better.

We conduct both quantitative and qualitative experiments on real-world datasets to evaluate the capability of our model to predict future video. We compare our approach with other approaches that produce the next frame or multiple future frames.

4.1 Datasets

We conducted our experiments on the Cityscapes dataset [5] and the KITTI dataset [11]. The Cityscapes dataset contains 2048x1024 image sequences of city scenes captured at 17 FPS; for a fair comparison with other approaches that do not produce such a high resolution, we experiment at a reduced resolution. The KITTI dataset contains image sequences of driving scenes captured at 10 FPS, and we likewise experiment at a reduced resolution for a fair comparison. The semantic maps are generated using the method of [56], instance maps are obtained with UPSNet [46], and optical flow fields are computed with PWC-Net [34]. We apply techniques such as random horizontal flipping to augment the data.

The Cityscapes dataset contains 2975 video sequences for training and 500 video sequences for testing. The KITTI dataset used for our training and evaluation includes 28 video sequences, from which we randomly select four sequences for evaluation.

4.2 Implementation

We use the multi-scale PatchGAN discriminator [15] architecture for all the discriminators in our framework. For the Cityscapes dataset, the input length is set to 4 frames and the prediction length to 5 frames. We first train a model at a lower resolution and then train a higher-resolution model by adding an upsampling module. By recurrently applying our model twice, we obtain future predictions for the next 10 frames.

For the KITTI dataset, the input length is set to 4 frames. Because the KITTI dataset exhibits larger motion, generating optical flow between two temporally distant frames with PWC-Net [34] is difficult, so the prediction length is set to 3 for the background prediction model and 5 for the dynamic object motion prediction model. We experiment at a reduced resolution. By recurrently applying the model twice, we obtain predictions for the next 5 frames.

All parts of our model are implemented with PyTorch 1.1.0, and we use the ADAM optimizer. For the background prediction model, we train for 200 epochs, with a learning rate of 2e-4 for the first 100 epochs, after which the learning rate is linearly decreased. For the dynamic trajectory prediction model, we train for 60 epochs with a learning rate of 3e-5. Training takes about three days for a model at our experimental resolution. The experiments are run on an Nvidia RTX 2080 Ti.
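The learning-rate schedule described above (constant for the first 100 epochs, then a linear decay) can be reproduced with a standard PyTorch scheduler, as in this sketch; the placeholder module and the absence of other hyperparameters are assumptions.

```python
import torch

model = torch.nn.Conv2d(3, 3, 3)  # placeholder for the background prediction model
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

def lr_lambda(epoch, total_epochs=200, decay_start=100):
    # Keep the base rate for the first 100 epochs, then decay linearly to zero.
    if epoch < decay_start:
        return 1.0
    return max(0.0, 1.0 - (epoch - decay_start) / (total_epochs - decay_start))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(200):
    # ... one training epoch over the background prediction data ...
    scheduler.step()
```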

4.3 Evaluation metrics

We evaluate our model using several metrics that measure the accuracy of the predicted future video frames: the multi-scale structural similarity (MS-SSIM) index [44] and the Learned Perceptual Image Patch Similarity (LPIPS) distance [54]. Higher MS-SSIM scores and lower LPIPS distances indicate better performance.
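Both metrics are available in public packages (pytorch-msssim and lpips); the sketch below shows how they could be computed, though these particular libraries are our assumption and not necessarily the implementations used in the paper.

```python
import lpips
from pytorch_msssim import ms_ssim

lpips_fn = lpips.LPIPS(net='alex')

def evaluate_pair(pred, target):
    """pred/target: N x 3 x H x W tensors with values in [0, 1]."""
    score_msssim = ms_ssim(pred, target, data_range=1.0).item()
    # LPIPS expects inputs scaled to [-1, 1].
    score_lpips = lpips_fn(pred * 2 - 1, target * 2 - 1).mean().item()
    return score_msssim, score_lpips
```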

4.4 Baselines

To evaluate our model for future prediction, we compare our model with the following baselines, where the first several are state-of-the-art approaches, and the rest are variants of our model.

PredNet [23]. PredNet is a prior approach for next-frame prediction. We fine-tune their model on our dataset, and recurrently perform next-frame prediction to get multiple-frame results.

MCNet [37]. This is a state-of-the-art approach for next-frame prediction. We re-train their model on our datasets using their public source code. Multiple frames are generated by recurrently applying the pipeline.

Voxel-Flow [22]. This is a video synthesis approach with optical flow fields across space and time. This approach can be applied for video extrapolation. We re-train their model on our dataset for evaluation.

Vid2vid [42]. This is a video-to-video translation framework. The approach can generate a video conditioned on a sequence of semantic layouts. For future prediction, their approach predicts the semantic layout and converts a sequence of semantic layouts into a real video. We directly compare our method with their provided video prediction results on the Cityscapes dataset.

Ours-WC. Our ablated model without foreground-background composition. To demonstrate the effectiveness of our foreground-background separation approach, we train a model that directly outputs the optical flow prediction for the full image, using the same architecture as the background prediction model.

Ours-WM. Our ablated model without moving object detection. For this model, we remove the moving object detection module and use an STN to predict the trajectories of all possible moving objects (cars, pedestrians) based on semantic classes.

4.5 Evaluation on Cityscapes and KITTI

We evaluate the capability of our model to predict future video frames in both next-frame and multiple-frame prediction. Our results on the Cityscapes and KITTI datasets are shown in Table 1. The frame rates of the Cityscapes and KITTI datasets are 17 FPS and 10 FPS, respectively, so predicting the next 10 frames on Cityscapes and the next 5 frames on KITTI corresponds to about 0.5 seconds in each case.

On the Cityscapes dataset, our model achieves MS-SSIM scores comparable to MCNet [37] and vid2vid [42], while its LPIPS is 20%, 18%, and 14% better than the second-best model for the next frame, the next five frames, and the next ten frames, respectively. On the KITTI dataset, our model outperforms all state-of-the-art methods on all metrics: the improvement in LPIPS for the next frame, the next three frames, and the next five frames is 23%, 22%, and 18% over the second-best result, and the improvement in MS-SSIM is 5%, 7%, and 10%, respectively. This demonstrates that our method achieves better performance in both short-term and long-term prediction, largely because our approach preserves the rigidity of objects. Current state-of-the-art methods exhibit significant distortion artifacts around object boundaries, whereas our approach alleviates this phenomenon considerably and produces more realistic results.

We also perform an ablation study with Ours-WC and Ours-WM. The results show that all the strategies in our model are helpful: the foreground-background decomposition preserves the rigidity of objects and makes background prediction easier, and the moving object detection strategy classifies objects as dynamic or static and predicts them separately according to their motion type.

As demonstrated in Figs. 1 and 4, our model produces more realistic results than state-of-the-art methods. Our method preserves the rigidity of objects even in long-term prediction, while state-of-the-art techniques suffer from distortion around motion boundaries. Our method also produces results with less blurriness because we predict the motion of multiple frames together, which alleviates the error accumulated by recurrent prediction. More visual comparisons are shown in the supplement.

4.6 Additional experiments

We also conduct experiments beyond driving scenes on the BAIR robot pushing dataset [9] and the Penn Action dataset [55]. The BAIR dataset consists of videos of a robot arm pushing multiple objects, and the Penn Action dataset contains videos of various non-rigid human actions. The results are presented in the supplement.

5 Conclusion

We have presented a separate-predict-composite model for future frame prediction. Our method produces future frames by first classifying potentially moving objects as currently moving or static. For moving objects, we employ a spatial transformer network to predict their trajectories, which helps preserve the structure of objects while producing reliable future motion. For the background, we use an optical flow prediction network to predict the background of multiple frames at once. We then integrate the foreground and background and apply a video inpainting module to alleviate composition artifacts. The experiments show that our approach outperforms prior work on future video prediction.

References

  • [1] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, F. Li, and S. Savarese (2016) Social LSTM: human trajectory prediction in crowded spaces. In CVPR, Cited by: §2.
  • [2] A. Bhattacharyya, M. Fritz, and B. Schiele (2018) Long-term on-board prediction of people in traffic scenes under uncertainty. In CVPR, Cited by: §2.
  • [3] W. Byeon, Q. Wang, R. K. Srivastava, and P. Koumoutsakos (2018) ContextVP: fully context-aware video prediction. In ECCV, Cited by: §2.
  • [4] L. Castrejon, N. Ballas, and A. Courville (2019) Improved conditional vrnns for video prediction. In ICCV, Cited by: §2.
  • [5] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The cityscapes dataset for semantic urban scene understanding. In CVPR, Cited by: Figure 1, §3.3, §4.1.
  • [6] E. Denton and R. Fergus (2018) Stochastic video generation with a learned prior. In ICML, Cited by: §1, §2.
  • [7] E. L. Denton and V. Birodkar (2017) Unsupervised learning of disentangled representations from video. In NeurIPS, Cited by: §2.
  • [8] N. Djuric, V. Radosavljevic, H. Cui, T. Nguyen, F. Chou, T. Lin, and J. Schneider (2018) Motion prediction of traffic actors for autonomous driving using deep convolutional networks. arXiv preprint arXiv:1808.05819. Cited by: §2.
  • [9] C. Finn, I. Goodfellow, and S. Levine (2016) Unsupervised learning for physical interaction through video prediction. In NeurIPS, Cited by: §1, §4.6.
  • [10] H. Gao, H. Xu, Q. Cai, R. Wang, F. Yu, and T. Darrell (2019) Disentangling propagation and generation for video prediction. In ICCV, Cited by: §2.
  • [11] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. I. J. Robotics Res.. Cited by: Figure 4, §3.3, §4.1.
  • [12] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, Warde-Farley, David, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial networks. In NeurIPS, Cited by: §2.
  • [13] A. Gupta, J. Johnson, L. Fei-Fei, S. Savarese, and A. Alahi (2018) Social GAN: socially acceptable trajectories with generative adversarial networks. In CVPR, Cited by: §2.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §3.1, §3.2.
  • [15] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In CVPR, Cited by: §4.2.
  • [16] X. Jin, H. Xiao, X. Shen, J. Yang, Z. Lin, Y. Chen, Z. Jie, J. Feng, and S. Yan (2017) Predicting scene parsing and motion dynamics in the future. In NeurIPS, Cited by: §2.
  • [17] K. M. Kitani, B. D. Ziebart, J. A. Bagnell, and M. Hebert (2012) Activity forecasting. In ECCV, Cited by: §2.
  • [18] A. X. Lee, R. Zhang, F. Ebert, P. Abbeel, C. Finn, and S. Levine (2018) Stochastic adversarial video prediction. arXiv preprint arXiv:1804.01523. Cited by: §2.
  • [19] B. Li, W. Wu, Q. Wang, F. Zhang, J. Xing, and J. Yan (2018) SiamRPN++: evolution of siamese visual tracking with very deep networks. arXiv:1812.11703. Cited by: §3.3.
  • [20] X. Liang, L. Lee, W. Dai, and E. P. Xing (2017) Dual motion gan for future-flow embedded video prediction. In ICCV, Cited by: §2.
  • [21] W. Liu, A. Sharma, O. Camps, and M. Sznaier (2018) DYAN - a dynamical atoms-based network for video prediction. In ECCV, Cited by: §2.
  • [22] Z. Liu, R. A. Yeh, X. Tang, Y. Liu, and A. Agarwala (2017) Video frame synthesis using deep voxel flow. In ICCV, Cited by: Figure 1, Figure 4, §4.4, Table 1.
  • [23] W. Lotter, G. Kreiman, and D. Cox (2017) Deep predictive coding networks for video prediction and unsupervised learning. In ICLR, Cited by: §1, §2, §4.4, Table 1.
  • [24] P. Luc, N. Neverova, C. Couprie, J. Verbeek, and Y. LeCun (2017) Predicting deeper into the future of semantic segmentation. In ICCV, Cited by: §1.
  • [25] Z. Luo, B. Peng, D. Huang, A. Alahi, and L. Fei-Fei (2017) Unsupervised learning of long-term motion dynamics for videos. In CVPR, Cited by: §2.
  • [26] Y. Ma, X. Zhu, S. Zhang, R. Yang, W. Wang, and D. Manocha (2019) TrafficPredict: trajectory prediction for heterogeneous traffic-agents. In AAAI, Cited by: §2.
  • [27] A. L. Maas, A. Y. Hannun, and A. Y. Ng (2013) Rectifier nonlinearities improve neural network acoustic models. In ICML, Cited by: §3.2.
  • [28] M. Mathieu, C. Couprie, and Y. LeCun (2016) Deep multi-scale video prediction beyond mean square error. In ICLR, Cited by: §2.
  • [29] V. Patraucean, A. Handa, and R. Cipolla (2016) Spatio-temporal video autoencoder with differentiable memory. In ICLR workshop, Cited by: §2.
  • [30] F. A. Reda, G. Liu, K. J. Shih, R. Kirby, J. Barker, D. Tarjan, A. Tao, and B. Catanzaro (2018) SDC-net: video prediction using spatially-displaced convolution. In ECCV, Cited by: §2.
  • [31] A. Sadeghian, V. Kosaraju, A. Sadeghian, N. Hirose, and S. Savarese (2018) Sophie: an attentive GAN for predicting paths compliant to social and physical constraints. arXiv preprint arXiv:1806.01482. Cited by: §2.
  • [32] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In ICLR, Cited by: §3.2.
  • [33] N. Srivastava, E. Mansimov, and R. Salakhutdinov (2015) Unsupervised learning of video representations using lstms. In ICML, Cited by: §1, §2.
  • [34] D. Sun, X. Yang, M. Liu, and J. Kautz (2018) PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In CVPR, Cited by: §3.2, §4.1, §4.2.
  • [35] I. Sutskever, G. E. Hinton, and G. W. Taylor (2009) The recurrent temporal restricted boltzmann machine. In NeurIPS, Cited by: §2.
  • [36] J. Vertens, A. Valada, and W. Burgard (2017) SMSnet: semantic motion segmentation using deep convolutional neural networks. In IROS, Cited by: §3.1.
  • [37] R. Villegas, J. Yang, S. Hong, X. Lin, and H. Lee (2017) Decomposing motion and content for natural video sequence prediction. In ICLR, Cited by: Figure 1, §2, §2, Figure 4, §4.4, §4.5, Table 1.
  • [38] R. Villegas, J. Yang, Y. Zou, S. Sohn, X. Lin, and H. Lee (2017) Learning to generate long-term future via hierarchical prediction. In ICML, Cited by: §1.
  • [39] R. Villegas (2018) High fidelity video prediction with large stochastic recurrent neural networks. In NeurIPS, Cited by: §1.
  • [40] C. Vondrick, H. Pirsiavash, and A. Torralba (2016) Generating videos with scene dynamics. In NeurIPS, Cited by: §2.
  • [41] J. Walker, C. Doersch, A. Gupta, and M. Hebert (2016) An uncertain future: forecasting from static images using variational autoencoders. In ECCV, Cited by: §2.
  • [42] T. Wang, M. Liu, J. Zhu, G. Liu, A. Tao, J. Kautz, and B. Catanzaro (2018) Video-to-video synthesis. In NeurIPS, Cited by: Figure 1, §1, §2, §4.4, §4.5, Table 1.
  • [43] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, et al. (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612. Cited by: §3.3.
  • [44] Z. Wang, E. P. Simoncelli, and A. C. Bovik (2003) Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, Vol. 2, pp. 1398–1402. Cited by: §4.3.
  • [45] N. Wichers, R. Villegas, D. Erhan, and H. Lee (2018) Hierarchical long-term video prediction without supervision. In ICML, Cited by: §2.
  • [46] Y. Xiong, R. Liao, H. Zhao, R. Hu, M. Bai, E. Yumer, and R. Urtasun (2019) UPSNet: A unified panoptic segmentation network. In CVPR, Cited by: §3.2, §3.3, §4.1.
  • [47] R. Xu, X. Li, B. Zhou, and C. C. Loy (2019-06) Deep flow-guided video inpainting. In CVPR, Cited by: §3.4.
  • [48] T. Xue, J. Wu, K. Bouman, and B. Freeman (2016) Visual dynamics: probabilistic future frame synthesis via cross convolutional networks. In NeurIPS, Cited by: §1, §2.
  • [49] T. Yagi, K. Mangalam, R. Yonetani, and Y. Sato (2018) Future person localization in first-person videos. In CVPR, Cited by: §2.
  • [50] Y. Ye, M. Singh, A. Gupta, and S. Tulsiani (2019) Compositional video prediction. In ICCV, Cited by: §2.
  • [51] Z. Yin and J. Shi (2018) GeoNet: unsupervised learning of dense depth, optical flow and camera pose. In CVPR, Cited by: §3.2, §3.4.
  • [52] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang (2018) Generative image inpainting with contextual attention. In CVPR, Cited by: §3.2, §3.4, §3.
  • [53] Y. Ho, W. Peng, and G. Jin (2019) SME-Net: sparse motion estimation for parametric video prediction through reinforcement learning. In ICCV, Cited by: §2.
  • [54] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang (2018) The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, Cited by: §4.3.
  • [55] W. Zhang, M. Zhu, and K. Derpanis (2013) From actemes to action: a strongly-supervised representation for detailed action understanding. In ICCV, Cited by: §4.6.
  • [56] Y. Zhu, K. Sapra, F. A. Reda, K. J. Shih, S. Newsam, A. Tao, and B. Catanzaro (2019) Improving semantic segmentation via video propagation and label relaxation. In CVPR, Cited by: §3.2, §4.1.