Transformation-based Adversarial Video Prediction on Large-Scale Data

by Pauline Luc, et al.

Recent breakthroughs in adversarial generative modeling have led to models capable of producing video samples of high quality, even on large and complex datasets of real-world video. In this work, we focus on the task of video prediction, where given a sequence of frames extracted from a video, the goal is to generate a plausible future sequence. We first improve the state of the art by performing a systematic empirical study of discriminator decompositions and proposing an architecture that yields faster convergence and higher performance than previous approaches. We then analyze recurrent units in the generator, and propose a novel recurrent unit which transforms its past hidden state according to predicted motion-like features, and refines it to handle dis-occlusions, scene changes and other complex behavior. We show that this recurrent unit consistently outperforms previous designs. Our final model leads to a leap in state-of-the-art performance, obtaining a test set Fréchet Video Distance of 25.7, down from 69.2, on the large-scale Kinetics-600 dataset.




1 Introduction

Figure 1: An example of video prediction on UCF-101. The 5 frames on the left are real frames, followed by 11 predicted frames.

The ability to anticipate future events is a crucial component of intelligence. Video prediction, the task of generating a plausible future sequence of frames given initial conditioning frames, has been increasingly studied as a proxy task towards developing this ability, as video is a modality that is cheap to acquire, available in abundant quantities and extremely rich in information. It has been shown to be useful for representation learning (Srivastava et al., 2015; Walker et al., 2016; Vondrick et al., 2016b) and in the context of reinforcement learning, where it has been used to define intrinsic rewards (Burda et al., 2018), and for planning and control (Wahlström et al., 2015; Finn and Levine, 2015; Watter et al., 2015).

Recently, large-scale adversarial generative modeling of images has led to highly realistic generations, even at high resolutions (Karras et al., 2018; Brock et al., 2019; Karras et al., 2019). Modeling of video (Tulyakov et al., 2018; Saito and Saito, 2018) has also seen impressive advances. DVD-GAN-FP (Clark et al., 2019) showed strong results on class-conditional generation of Kinetics-600, but the setting of video prediction was found to be surprisingly difficult in comparison. This may be due to the mode dropping behavior of GANs. Indeed, generating a video from scratch allows the network to learn just a few plausible types of sequences instead of needing to infer a plausible continuation for any sequence. In the video prediction case, the strong conditioning leaves less opportunity for the generator to focus on particular modes and forces it to attempt to generate a much wider set of videos.

In this work we focus on improving DVD-GAN-FP in two ways. First, following their observation that the architecture of the discriminator is key for efficient training on large-scale datasets, we propose alternative decompositions for the discriminator and conduct an empirical study, validating a new architecture that achieves faster convergence and yields better performance in terms of video quality.

Second, we draw inspiration from recent state-of-the-art approaches for video prediction (Hao et al., 2018; Gao et al., 2019) that predict the parameters of transformations used to warp the conditioning frames, and combine this with direct generation. On the one hand, transformation-based prediction can use information in the conditioning frames without the need to learn to compress or represent the relevant elements. On the other, in the presence of dis-occlusion, of appearing objects, or of new regions of the scene unfolding due to camera motion, warping past information seems an overly complex task in comparison with generating these pixel values directly. The two approaches are hence highly complementary. We seek to incorporate such transformations in the DVD-GAN-FP architecture, to provide sufficient flexibility to model camera and object motion in a straightforward fashion, while retaining scalability of the overall architecture. For this purpose, we introduce the tsru, a novel recurrent unit that predicts motion-like features, and uses them to warp its past input hidden state according to these predictions. It then refines the obtained predictions by generating features to account for newly visible areas of the scene and finally fuses the two streams.

To summarize, our contributions are as follows:

  • We conduct a systematic performance study of discriminator decompositions for DVD-GAN-FP and propose an architecture that achieves faster convergence and yields higher performance than previous approaches.

  • We propose a novel transformation-based recurrent unit to make the generator more expressive. This module is more efficient than existing transformation-based alternatives, and we show that it brings substantial performance improvements over classical recurrent units.

  • Our final approach, combining these two improvements, leads to a large improvement on the Kinetics-600 video prediction benchmark. We further demonstrate that our model is able to generate diverse predictions on the BAIR dataset. Qualitatively, our model yields highly realistic frames and motion patterns, as shown in Figure 1.

2 Background and related work

2.1 Video generation

Since the introduction of the next frame prediction task (Ranzato et al., 2014; Srivastava et al., 2015), video prediction and generation have become increasingly popular research topics. An important line of work has focused on extending ideas from generative modeling of images to this task (Mathieu et al., 2016; Vondrick et al., 2016b; Xue et al., 2016; Babaeizadeh et al., 2018; Denton and Fergus, 2018; Kalchbrenner et al., 2017). Several works also motivate the use of various reconstruction, ranking and perceptual losses (Mathieu et al., 2016; Jang et al., 2018; Liu et al., 2018; Li et al., 2018a; Xiong et al., 2018).

A number of works advocate for disentangling the factors of variation in the representations used by the models through careful design of the loss and network architecture. Denton and Birodkar (2017); Villegas et al. (2017a); Tulyakov et al. (2018) disentangle motion and content, or pose and appearance; Vondrick et al. (2016b) disentangle foreground and background; Kosiorek et al. (2018); Zhu et al. (2018) push this further to yield object-centric representations.

Specific architectures have also been proposed to alleviate the accumulation of errors that arises in autoregressive models and, more generally but less severely, in all recurrent settings (Oliu et al., 2018; Villegas et al., 2017b). The transformation-based architecture of Vondrick and Torralba (2017) is particularly relevant, as it is designed to predict transformation parameters for each of the predicted frames, with respect to the last input frame.

Another line of work reformulates video prediction in feature spaces other than the pixel space, such as high-level image features (Srivastava et al., 2015; Vondrick et al., 2016a), optical flow (Walker et al., 2015), dense trajectories (Walker et al., 2016) and segmentation maps (Luc et al., 2017, 2018). Jayaraman et al. (2019) operate in the pixel space, but choose not to commit to a fixed time offset between the conditioning frames and the prediction, in order to sidestep some of the difficulties arising from the low-level nature of RGB prediction.

2.2 GANs for video generation

Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are generative models which, in their simplest form, define a minimax game between a discriminator learning to distinguish real data from generated samples, and a generator learning to minimize the likelihood of the discriminator classifying generated samples as generated. In practice, a number of modifications to this objective are required for stable training, and the specific objective we use is described in the appendix.

Most GANs for video use a 3D convolutional network as the discriminator (Vondrick et al., 2016b; Saito et al., 2017; Tulyakov et al., 2018; Clark et al., 2019). Designs for the generator vary depending on the approach: some models rely heavily on 3D convolutions in the generator as well (Vondrick et al., 2016b), while others introduce recurrent units to maintain temporal consistency (Lee et al., 2018; Tulyakov et al., 2018; Saito and Saito, 2018). In this work, we base our network architecture on DVD-GAN-FP (Clark et al., 2019), where the generator uses 2D residual blocks to generate frames independently, with convolutional recurrent units interspersed between blocks at multiple resolutions to model consistency between frames. A convolutional encoder processes conditioning frames and feeds the resulting features as the initial states of the generator’s recurrent units.

Many GAN approaches decompose the discriminator into multiple subnetworks. This includes multi-resolution approaches used for images (Denton et al., 2015; Zhang et al., 2017; Huang et al., 2018), speech (Bińkowski et al., 2020; Kumar et al., 2019) and video (Tulyakov et al., 2018). Donahue and Simonyan (2019) also use multiple discriminators in a representation learning setup. Some of these methods decompose the discriminator in a way which requires the existence of multiple generators, potentially sharing some hidden layers (Saito and Saito, 2018; Karras et al., 2018). While an important research direction, for the purposes of our analysis of discriminators we do not consider variants which require multiple generators.

2.3 Kernel-based and vector-based transformations

Transformation-based models for video prediction rely on differentiable warping functions to generate future frames as a function of past frames. Following the terminology introduced by Reda et al. (2018), they fall into one of two families of methods: vector-based or kernel-based.

Vector-based approaches consist in predicting the coordinates of a grid used to differentiably sample from the input (Jaderberg et al., 2015). This kind of approach has been used in conjunction with optical flow estimation (Ranzato et al., 2014; Pătrăucean et al., 2016). Liu et al. (2017) extend this idea, predicting a 3D pseudo-optical-flow field used to sample from the entire sequence of conditioning frames using tri-linear interpolation.

Kernel-based approaches predict the parameters of dynamic, depth-wise locally connected layers (Xue et al., 2016; Finn et al., 2016; Vondrick and Torralba, 2017; Reda et al., 2018). These predicted parameters can be shared across spatial locations, forcing each spatial position to undergo the same transformation. To allow for varying motion across locations, Xue et al. (2016) use several such transformations, and combine them using a predicted vector of weights at each position, to yield per-pixel filters. Predicting a fixed number of depth-wise transformation kernels alongside a per-spatial-location weighting over transformations can be seen as a factorized version of predicting a full depth-wise transformation at each position. Finn et al. (2016) explore both and find them to perform similarly. We refer to Figure 3 for an illustration of both kernel-based warping approaches, which we refer to as factorized and pixelwise.

Finally, Reda et al. (2018) combine the kernel-based and vector-based approaches, and show high quality next frame prediction at higher resolutions, using intermediate predictions of a network trained with supervision to predict full resolution optical flow.

2.4 Spatial recurrent units for video processing

Shi et al. (2015) and Ballas et al. (2015) introduce convolutional variants of the LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Cho et al., 2014) modules. Our proposed recurrent unit builds on ConvGRU, which produces its output h_t based on the input x_t and previous output h_{t−1} as:

r = σ(W_r ⋆_k [h_{t−1}; x_t] + b_r)
h′_t = r ⊙ h_{t−1}
c = ρ(W_c ⋆_k [h′_t; x_t] + b_c)
u = σ(W_u ⋆_k [h_{t−1}; x_t] + b_u)
h_t = u ⊙ h_{t−1} + (1 − u) ⊙ c

where σ is the sigmoid function, ρ is the chosen activation function, ⋆_k represents a convolution with kernel size k, [·; ·] denotes channel-wise concatenation, and ⊙ is elementwise multiplication.
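For concreteness, the ConvGRU update above can be sketched in NumPy (a minimal illustration, not the paper's implementation; the "same"-padded convolution helper and the parameter layout are our own assumptions):

```python
import numpy as np

def conv2d(x, w, b):
    """'Same'-padded 2D convolution. x: (Cin, H, W), w: (Cout, Cin, k, k)."""
    Cout, Cin, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    H, W = x.shape[1:]
    out = np.zeros((Cout, H, W))
    for i in range(k):
        for j in range(k):
            # accumulate the contribution of kernel tap (i, j)
            out += np.einsum("oc,chw->ohw", w[:, :, i, j], xp[:, i:i + H, j:j + W])
    return out + b[:, None, None]

def convgru_step(h_prev, x, params, rho=np.tanh):
    """One ConvGRU update following the equations above. States: (C, H, W)."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    cat = lambda a, b: np.concatenate([a, b], axis=0)
    r = sig(conv2d(cat(h_prev, x), *params["r"]))    # reset gate
    h_reset = r * h_prev                             # h'_t
    c = rho(conv2d(cat(h_reset, x), *params["c"]))   # candidate content
    u = sig(conv2d(cat(h_prev, x), *params["u"]))    # update gate
    return u * h_prev + (1 - u) * c                  # h_t
```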

With similar motivations to ours, Xu et al. (2018) propose a recurrent unit for structured video prediction, relying on dynamic prediction of the weights used in the ConvLSTM update equations. To avoid over-parameterization, the authors propose a shared learned mapping of feature channel differences to corresponding kernels. This requires processing a large number of inputs, which is prohibitively expensive in our large-scale setting. We refer the interested reader to the appendix for a more detailed discussion.

Shi et al. (2017) introduce TrajGRU, which also warps the past hidden state to account for motion; and provides the result as input to all gates of a convgru. In our work, we propose a straightforward kernel-based extension of this method, called K-TrajGRU. We then formulate a novel transformation-based recurrent unit that is simpler and more directly interpretable.

3 Large-scale transformation-based video prediction

Figure 2: A visualization of the inputs each discriminator must judge. From left to right: the original DVD-GAN-FP discriminator decomposition processes full resolution frames and downsampled videos; our proposed faster decomposition processes downsampled frames and cropped videos; our proposed stronger decomposition additionally processes downsampled videos.

3.1 Discriminator decompositions

Architectures that can effectively leverage a large dataset like Kinetics-600 require careful design. While the generator must create temporally consistent sequences of plausible frames, the discriminator must also be able to assess both temporal consistency and overall plausibility.

Both Tulyakov et al. (2018) and DVD-GAN-FP (Clark et al., 2019) employ two discriminators: a Spatial Discriminator and a Temporal Discriminator, which respectively process individual frames and video clips. While Tulyakov et al. (2018) motivate the use of the spatial discriminator mainly by improved convergence, DVD-GAN-FP makes the two discriminators complementary in order to additionally decrease computational complexity. In DVD-GAN-FP, the spatial discriminator assesses single-frame content and structure by randomly sampling full-resolution frames and judging them individually, while the temporal discriminator is charged with assessing global temporal structure by processing sequences that are spatially downsampled by a fixed factor.

In contrast with DVD-GAN-FP, we propose an alternative split of the roles of the discriminators: the spatial discriminator judges per-frame global structure, while the temporal discriminator critiques local spatio-temporal structure. We achieve this by downsampling the randomly sampled frames fed to the spatial discriminator, and by cropping spatial windows (of reduced height and width, keeping all frames and channels) from the high-resolution video fed to the temporal discriminator. This further reduces the number of pixels to process per video. We call this decomposition DVD-GAN-FP (faster). In practice, in all of our experiments, we follow the corresponding choices of DVD-GAN-FP for the downsampling factor and the number of sampled frames.

The potential blind spot of our new decomposition is global spatio-temporal coherence. For instance, if the motion of two spatially distant objects is correlated, this decomposition will fail to teach the generator that one object must move based on the movement of the other. To address this limitation, we investigate a second configuration, which combines our proposed decomposition with an additional temporal discriminator assessing the quality of the downsampled generated videos, as in DVD-GAN-FP; we call this decomposition DVD-GAN-FP (stronger). We summarize these configurations in Figure 2 and Table 1.

3.2 Transformation-based Recurrent Units

Figure 3: Left: information flow for ConvGRU (top-left) and the variants of our proposed TSRU (top-right, bottom-left and bottom-right). Right: variants of kernel-based warping: factorized (top) and pixel-wise (bottom).

3.2.1 Stacked kernel-based warping for efficient, multi-scale modeling of motion

Vector-based transformations allow modeling of arbitrarily large motions with a fixed number of parameters, but they require random memory access, whereas modern accelerators are better suited for computation formulated in terms of matrix multiplications. As a result, in our work, we employ kernel-based transformations, and rely on the stacking of multiple such transformations in the feature hierarchy of the generator to allow the modeling of large motion in a multi-scale manner, while keeping the number of parameters under control.

We therefore first study a kernel-based extension of TrajGRU, whose equations we report in the appendix. Inspired by recent work that leverages complementary pixel-based generations and transformation-based predictions (Hao et al., 2018; Gao et al., 2019), we also propose a novel recurrent unit whose design is simpler and more directly interpretable, which we describe next.

3.2.2 TSRU

We begin by observing that the final ConvGRU equation already combines past information h_{t−1} with newly generated content c. Similarly, our recurrent unit combines h̃_{t−1}, obtained by warping h_{t−1}, with newly generated content c. Instead of computing the reset gate r and resetting h_{t−1}, we compute the parameters θ_{h,x} of a transformation, which we use to warp h_{t−1}. The rest of the model is unchanged, with h̃_{t−1} playing the role of h′_t in the update equation for c. Our module is described by the following equations:

θ_{h,x} = f(h_{t−1}, x_t)
h̃_{t−1} = w(h_{t−1}; θ_{h,x})
c = ρ(W_c ⋆_k [h̃_{t−1}; x_t] + b_c)
u = σ(W_u ⋆_k [h_{t−1}; x_t] + b_u)
h_t = u ⊙ h̃_{t−1} + (1 − u) ⊙ c


where f is a shallow convolutional neural network, w is a warping function described next, and the rest of the notation follows the ConvGRU equations above. We have designed this recurrent unit to closely follow the design of ConvGRU, and refer to this base variant as the convgru-like TSRU. We study two alternatives for the information flow between gates. The parallel TSRU computes θ_{h,x}, u and c in parallel given h_{t−1} and x_t, so that the candidate equation becomes c = ρ(W_c ⋆_k [h_{t−1}; x_t] + b_c).

At the other end, the sequential TSRU computes each intermediate output in a fully sequential manner: as in the convgru-like variant, c is given access to h̃_{t−1}, but additionally, u is given access to both outputs h̃_{t−1} and c, so as to make an informed decision prior to mixing. We summarize these variants, and how they relate to the ConvGRU design, in Figure 3.

The resulting modules are generic and can be used as drop-in replacements for other recurrent units, which are already widely used for generative modeling of video. Provided the conditioning frame encodings are used as initialization of its hidden representation, as in the DVD-GAN-FP architecture, this unit can predict and account for motion in the use it makes of past information, if this is beneficial. When interleaved at different levels of the feature hierarchy in the generator, like in DVD-GAN-FP, motion can naturally be modeled in a multi-scale approach. The modules are designed in such a way that a pixel-level motion representation can emerge, but we do not enforce this. In particular, we do not provide any supervision (e.g. ground truth flow fields) for the predicted motion-like features, instead allowing the network to learn representations in a data-driven way.
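The TSRU update can be sketched generically, with the parameter-prediction network f, the warping function w and the gate convolutions passed in as callables (a minimal NumPy illustration under our own assumptions, not the paper's implementation):

```python
import numpy as np

def tsru_step(h_prev, x, f, warp, conv_c, conv_u, rho=np.tanh):
    """One TSRU update (sketch). Instead of a reset gate, the past state is
    warped by parameters predicted from (h_prev, x), then fused with newly
    generated content c via the usual update gate u.

    f:      (h_prev, x) -> warping parameters theta
    warp:   (h_prev, theta) -> warped state h_tilde
    conv_c: (h_tilde, x) -> pre-activation of the candidate c
    conv_u: (h_prev, x) -> pre-activation of the update gate u
    """
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    theta = f(h_prev, x)          # motion-like features
    h_tilde = warp(h_prev, theta) # warped past state
    c = rho(conv_c(h_tilde, x))   # newly generated content
    u = sig(conv_u(h_prev, x))    # mixing weights
    return u * h_tilde + (1 - u) * c
```

With toy closures for f and warp (e.g. a constant horizontal shift applied with np.roll), the unit reduces to shifting its state, which makes the role of each component easy to inspect.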

We study both factorized and pixelwise kernel-based warping (c.f. Section 2.3). Specifically, for factorized warping, we predict a sample-dependent set of convolution kernels together with a selection map, where each spatial position holds a vector of probabilities over the warpings, which we use as coefficients to linearly combine the basis weight vectors into per-pixel weights. This map is intended to act as a pseudo-motion segmentation map, splitting the feature space into regions of homogeneous motion. Pixelwise kernel-based warping instead consists in predicting a weight vector for each spatial position, which is used locally to compute a weighted average of the surrounding values of the input. We denote this approach by pTSRU.

4 Experiments and Analysis

4.1 Experimental setup

4.1.1 Kinetics

Kinetics is a large dataset of YouTube videos intended for action recognition tasks. There are three versions of the dataset with increasing numbers of classes and samples: Kinetics-400 (Kay et al., 2017), Kinetics-600 (Carreira et al., 2018) and Kinetics-700 (Carreira et al., 2019). Here we use the second iteration of the dataset, Kinetics-600, to enable fair comparison to prior work. This dataset contains around 500,000 10-second videos at 25 FPS spread across 600 classes. Kinetics is often used in the action recognition field, but has only recently been used for generative purposes. Li et al. (2018b) and Balaji et al. (2018) used filtered and processed versions of the dataset, and more recently the entire unfiltered dataset has been used for generative modeling with GANs as well as autoregressive models (Clark et al., 2019; Weissenborn et al., 2020). Following these works, we resize all videos to 64 × 64 resolution and train on randomly selected 16-frame sequences from the training set. Specifically, the generator is trained to predict 11 frames conditioned on 5 input frames. Unfortunately, the Kinetics dataset in general is not covered by a license which would allow for showing random data samples in a paper. For this reason, for all qualitative analysis and shown samples in this work, we use conditioning frames taken from the UCF-101 dataset (Soomro et al., 2012), though the underlying models are trained completely on Kinetics-600.

4.1.2 Evaluation metrics

Traditional measures for video prediction models, such as Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM) (Wang et al., 2004), rely on the existence of ground truth continuations of the conditioning frames and judge the generated frames by their similarity to the real ones. While this works for small time scales with near-deterministic futures, these metrics fail when there are many potential future outcomes. Unterthiner et al. (2018) propose the Fréchet Video Distance (FVD), an adaptation of the Fréchet Inception Distance (FID) metric (Heusel et al., 2017), which judges a generative model by comparing summary statistics of a set of samples with a set of real examples in a relevant embedding space. For FVD, the logits of an I3D network trained on Kinetics-400 (Carreira and Zisserman, 2017) are used as the embedding space. We adopt this metric for ease of comparison with prior work.
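Underlying both FID and FVD is the closed-form Fréchet distance between two Gaussians fitted to the real and generated embeddings; a minimal sketch (assuming well-conditioned covariance estimates):

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Fréchet distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2}).
    The trace of the matrix square root is computed from the eigenvalues of
    the symmetric product sqrt(cov1) cov2 sqrt(cov1), which shares its
    spectrum with cov1 cov2.
    """
    diff = mu1 - mu2
    vals, vecs = np.linalg.eigh(cov1)
    s1 = (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T  # sqrt(cov1)
    inner = s1 @ cov2 @ s1
    ivals = np.clip(np.linalg.eigvalsh(inner), 0, None)
    covmean_trace = np.sum(np.sqrt(ivals))
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2)
                 - 2 * covmean_trace)
```

For FVD, mu and cov would be estimated from I3D embeddings of real and generated video sets; practical implementations must also handle nearly-singular covariance estimates.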

4.1.3 Training details

All models were trained on 32 to 512 TPUv3 accelerators using the Adam (Kingma and Ba, 2015) optimizer. In all our experiments, we validate the best model according to performance on the training set. Like BigGAN (Brock et al., 2019), we find it important to use cross-replica batch statistics, where the per-batch mean and variance are calculated across all accelerators at each training step. More details on our training setup are provided in the appendix.
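Cross-replica batch statistics amount to computing the moments of the union of all per-replica batches; a minimal sketch of the recombination (illustrative only, not the TPU implementation):

```python
import numpy as np

def cross_replica_moments(per_replica_batches):
    """Combine per-replica batch means/variances into global moments, as if
    all accelerators' samples formed one large batch.

    per_replica_batches: list of (B_i, C) arrays (one per replica).
    """
    counts = np.array([b.shape[0] for b in per_replica_batches], dtype=float)
    means = np.stack([b.mean(axis=0) for b in per_replica_batches])
    vars_ = np.stack([b.var(axis=0) for b in per_replica_batches])
    n = counts.sum()
    mu = (counts[:, None] * means).sum(axis=0) / n
    # law of total variance: within-replica variance + variance of means,
    # weighted by per-replica sample counts
    var = (counts[:, None] * (vars_ + (means - mu) ** 2)).sum(axis=0) / n
    return mu, var
```

In a real distributed setup the same result is obtained with all-reduce operations over per-replica sums and sums of squares, rather than by gathering the batches.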

We note that for calculating step time and maximum memory usage in Table 1, each training step corresponds to two discriminator updates and a single generator update, involving three passes of the discriminator with batch size 8 and two passes of the generator with batch size 16 alongside one with batch size 8. Furthermore, we emphasize that TPUs rely on the XLA intermediate compilation language, which automatically performs some re-materialization and optimization. As such, these numbers might change for other implementations.

4.2 Comparing Discriminator Decompositions

Method                        Sub-discriminators   IS     FVD     K     ms    GB
Clip                          clV                  10.9   59.6    758   488   3.49
Downsampled                   dV                    8.9   122.1   1000  370   3.01
Cropped                       crV                   9.4   159.9   997   371   3.10
Cropped + Downsampled         crV + dV             11.5   44.1    792   467   3.30
MoCoGAN-like (original)       F + clV              11.3   47.4    651   568   3.95
MoCoGAN-like (matched)        F + clV              11.4   46.2    561   660   3.95
DVD-GAN-FP                    F + dV               10.9   55.0    689   537   3.53
DVD-GAN-FP (faster, ours)     dF + crV             11.7   46.1    817   453   3.54
DVD-GAN-FP (stronger, ours)   dF + crV + dV        11.8   39.3    675   548   3.65

Table 1: A comparison of different discriminator decompositions for video prediction, trained for a fixed time budget and evaluated on the Kinetics-600 validation set. We describe each in terms of the types of sub-discriminators it contains: frame-level (F), downsampled frame-level (dF), clipped video (clV), downsampled video (dV) and cropped video (crV). Average step duration in milliseconds (ms) and maximum memory consumption (GB) are measured for a single forward-backward training step, averaged over 30 seconds. We report the step budget for each run in thousands of iterations (K).

We begin by comparing several alternative discriminator decompositions on the video prediction benchmark proposed by Weissenborn et al. (2020) for the Kinetics dataset. Unless otherwise specified, we fix the number of frames sampled for the spatial discriminator to the value used in DVD-GAN-FP. Because we are interested in architectures which improve real experimentation time, we fix the step budget for each method according to its average step duration, measured over 30 seconds of training, and make sure that we do not account for any lag introduced by data loading or preprocessing. For these normalized step budgets, reported for each method in Table 1, training takes approximately 103 hours, corresponding to one million iterations for the fastest method.

For MoCoGAN-like, we evaluate both their original setting and a setting matched to our default number of sampled frames. Finally, for each of the discriminator decompositions, we evaluate a baseline where only the temporal discriminator is used, denoted respectively by Clip (for MoCoGAN-like), Downsampled (for DVD-GAN-FP), Cropped (for the faster DVD-GAN-FP) and Cropped + Downsampled (for the stronger DVD-GAN-FP). We summarize results in Table 1.

For all four discriminator decompositions, the addition of a frame discriminator yields large improvements in FVD, along with improved convergence speed, as previously noted by Tulyakov et al. (2018). We also observe that the Downsampled and Cropped baselines obtain poor performance compared to the other approaches, consistent with our observation that the Downsampled baseline can only assess global spatio-temporal structure, and the Cropped baseline can only assess local structure. In contrast, a combination of the two (Cropped + Downsampled) largely improves on the Clip baseline, which has access to all conditioning and predicted frames, and obtains better performance than the two MoCoGAN-like variants while being faster to train. Next, we find that in this setting, DVD-GAN-FP yields an improvement in average step duration over the MoCoGAN-like baselines, but lags behind in prediction quality. As expected, the faster decomposition we propose has a much shorter average step duration than the MoCoGAN-like and DVD-GAN-FP baselines, and additionally closes the performance gap with MoCoGAN-like. Adding a spatial discriminator to the Cropped + Downsampled baseline yields our final proposed decomposition, DVD-GAN-FP (stronger), which obtains a substantial improvement over previous approaches.

4.3 Comparing Recurrent Units

We now report a comparison between recurrent units in Table 2. Here, we use the faster of our discriminator decompositions, DVD-GAN-FP (faster), so as to maximize the number of training steps that can be afforded for each configuration in this large-scale setting. We train all models for 500k steps. Our setup is otherwise identical to the one presented in Section 4.2. We find that all our proposed transformation-based approaches perform better than ConvGRU or ConvLSTM, suggesting that the increased expressivity of the generator is beneficial for generation quality, with the best results in this setting observed for one of the TSRU variants. Interestingly, we observe that pTSRU performs slightly worse than its TSRU counterpart (itself the least competitive of our TSRU designs), suggesting that the factorized version may be more effective at predicting structured, globally consistent motion-like features. As a result, we do not investigate other versions of the pTSRU design. Finally, we observe a small performance improvement for all TSRU variants compared with K-TrajGRU, suggesting that the simplified design facilitates learning.

Method                 IS (↑)   FVD (↓)
ConvLSTM               11.4     51.3
ConvGRU                11.4     49.4
K-TrajGRU              11.6     45.9
TSRU (convgru-like)    11.6     44.2
TSRU (parallel)        11.5     —
TSRU (sequential)      11.6     45.8
pTSRU                  11.5     47.1

Table 2: Comparison of recurrent unit designs for video prediction on the Kinetics-600 validation set. All models are trained for 500k steps.

4.4 Scaling up

Putting it all together, we combine our strongest decomposition, DVD-GAN-FP (stronger), with TSRU. We call this model TrIVD-GAN-FP, for “Transformation-based & TrIple Video Discriminator GAN”. We increase the channel multiplier from 48 to 120 (described in the appendix) and train this model for 700,000 steps. We report the results of this model evaluated on the test set of Kinetics-600 in Table 3, together with the results reported by Weissenborn et al. (2020) and Clark et al. (2019). We also provide unbiased estimates of the standard deviation over 10 training runs. In comparison with the strong DVD-GAN-FP baseline, our contributions lead to a substantial decrease in FVD.

In Figure 4, we show predictions and corresponding motion fields from our final model, TrIVD-GAN-FP, selected to demonstrate the interpretability of these motion fields for a number of samples. We obtain them by interpreting the local predicted weights for a given spatial position as a probability distribution over the flow vectors corresponding to each of the kernel grid positions, and taking the resulting expected flow value. They can be combined across resolutions by iteratively upsampling a coarse flow field, multiplying it by the upsampling factor to account for the higher resolution, and refining it with the next lower-level flow field. These predictions have been obtained on UCF-101 and also show strong transfer performance. Though this is not entirely surprising given the similarity between the classes and content, it further emphasizes the generalization properties of our model. We provide additional samples in the appendix.
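The procedure for reading off and combining these flow fields can be sketched as follows (a minimal NumPy illustration with nearest-neighbor upsampling; shapes and conventions are our assumptions):

```python
import numpy as np

def expected_flow(weights, k):
    """Interpret per-pixel kernel weights as a distribution over the k x k
    grid of displacements and return the expected flow.

    weights: (k*k, H, W), assumed non-negative and normalized over axis 0.
    """
    offs = np.arange(k) - k // 2
    dy = np.repeat(offs, k).astype(float)  # row displacement of each tap
    dx = np.tile(offs, k).astype(float)    # column displacement of each tap
    fy = np.tensordot(dy, weights, axes=(0, 0))
    fx = np.tensordot(dx, weights, axes=(0, 0))
    return np.stack([fy, fx], axis=-1)     # (H, W, 2)

def combine_flows(coarse, fine):
    """Upsample a coarse flow 2x (nearest neighbor), double its magnitude to
    account for the higher resolution, then add the finer level's flow."""
    up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1) * 2.0
    return up + fine
```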

Figure 4: Qualitative evaluation on UCF-101 test set 1. Top: predictions. Second row: motion field obtained by combining the two lower levels of predicted motion-like features. All sequences are temporally subsampled by two for visualization.

Method                FVD (↓)
Video Transformer     170 ± 5
DVD-GAN-FP            69.15 ± 1.16
TrIVD-GAN-FP (ours)   25.7

Table 3: Video prediction performance on the Kinetics-600 test set, without frame skipping. FVD is computed using test set video statistics. We predict 11 frames conditioned on 5 input frames. We provide unbiased estimates of the standard deviation over 10 runs.

4.5 BAIR Robot Pushing

The BAIR Robot Pushing dataset of robotic arm motion (Ebert et al., 2017) has been used as a benchmark in prior work on video prediction (Babaeizadeh et al., 2018; Denton and Fergus, 2018; Lee et al., 2018; Unterthiner et al., 2018; Clark et al., 2019; Weissenborn et al., 2020). We report results on this dataset in Table 4 for our stronger DVD-GAN-FP decomposition with TSRU (i.e. TrIVD-GAN-FP), using a channel multiplier of 48 in the generator and 92 in the discriminator. FVD is reported on the test set to compare to Video Transformer and DVD-GAN-FP. Like those two models, we condition on one frame and produce 15 future frames.

Method              FVD (↓)
SAVP                116.4
DVD-GAN-FP          109.8
TrIVD-GAN-FP        103.3
Video Transformer   94 ± 2
Table 4: FVD on BAIR Video Prediction

Despite substantial improvements over DVD-GAN-FP and Video Transformer (Weissenborn et al., 2020) on Kinetics, on BAIR TrIVD-GAN-FP only slightly improves over DVD-GAN-FP, and its FVD remains higher than that of Video Transformer. Qualitatively, we observe no visual difference between the models, and echo comments from both papers that FVD may not be an adequate metric on BAIR.

Finally, we report SSIM scores on BAIR, collected as in SAVP (Lee et al., 2018). This involves generating continuations of a single conditioning frame, selecting the one which maximizes the average frame SSIM, and reporting the per-frame SSIMs of all such “best continuations”. The results of this experiment are shown in Figure 5. Notably, SSIM increases consistently with the number of sampled continuations, meaning that generating more samples yields a meaningfully diverse variety of futures, such that some are closer to the ground truth than others.
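The best-of-N selection protocol can be sketched as follows (a simplified sketch: we use a single-window SSIM over the whole frame rather than the usual sliding-window variant, purely for illustration):

```python
import numpy as np

def ssim_global(a, b, data_range=1.0):
    """Simplified, single-window SSIM computed over a whole frame (standard
    implementations average over local sliding windows instead)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
            ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))

def best_of_n_ssim(ground_truth, samples):
    """ground_truth: (T, H, W); samples: (N, T, H, W). Select the sampled
    continuation maximizing average per-frame SSIM against the ground
    truth, and return its per-frame SSIM curve."""
    curves = np.array([[ssim_global(s[t], ground_truth[t])
                        for t in range(ground_truth.shape[0])]
                       for s in samples])
    best = curves.mean(axis=1).argmax()
    return curves[best]
```

Averaging such curves over many test sequences, for several values of N, yields plots of the kind shown in Figure 5.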

Figure 5: Per-frame SSIM for TrIVD-GAN-FP where SSIM is taken as the max over differing numbers of sampled continuations.

5 Conclusion

Effectively training video prediction models on large datasets requires scalable and powerful network architectures. In this work we have systematically analyzed several approaches to reducing the computational cost of GAN discriminators, and proposed discriminator decompositions that improve wall-clock time and performance over the strong DVD-GAN-FP baseline. We have further motivated and proposed a family of transformation-based recurrent units – drop-in replacements for standard recurrent units – which further improve performance. Combining these contributions led to our final model, TrIVD-GAN-FP, which obtains large performance improvements and is able to make diverse predictions. As future directions, we plan to investigate the use of these recurrent units for video processing tasks, as well as the usefulness of the representations that our models learn.


We thank Jean-Baptiste Alayrac, Lucas Smaira, Jeff Donahue, Jacob Walker, Jordan Hoffman, Carl Doersch, Joao Carreira, Andy Brock, Yusuf Aytar, Matteusz Malinowski, Karel Lenc, Suman Ravuri, Jason Ramapuram, Viorica Patraucean, Simon Osindero and Chloe Rosenberg for their help, feedback and insights.


  • M. Babaeizadeh, C. Finn, D. Erhan, R. H. Campbell, and S. Levine (2018) Stochastic variational video prediction. International Conference on Learning Representations. Cited by: §2.1, §4.5.
  • Y. Balaji, M. R. Min, B. Bai, R. Chellappa, and H. P. Graf (2018) TFGAN: improving conditioning for text-to-video synthesis. Cited by: §4.1.1.
  • N. Ballas, L. Yao, C. Pal, and A. Courville (2015) Delving deeper into convolutional networks for learning video representations. arXiv:1511.06432. Cited by: §2.4.
  • M. Bińkowski, J. Donahue, S. Dieleman, A. Clark, E. Elsen, N. Casagrande, L. C. Cobo, and K. Simonyan (2020) High fidelity speech synthesis with adversarial networks. In International Conference on Learning Representations, Cited by: §2.2.
  • A. Brock, J. Donahue, and K. Simonyan (2019) Large scale GAN training for high fidelity natural image synthesis. International Conference on Learning Representations. Cited by: §B.1, §C.1, §1, §4.1.3.
  • Y. Burda, H. Edwards, A. Storkey, and O. Klimov (2018) Exploration by random network distillation. arXiv preprint arXiv:1810.12894. Cited by: §1.
  • J. Carreira, E. Noland, A. Banki-Horvath, C. Hillier, and A. Zisserman (2018) A short note about Kinetics-600. arXiv:1808.01340. Cited by: §4.1.1.
  • J. Carreira, E. Noland, C. Hillier, and A. Zisserman (2019) A short note on the kinetics-700 human action dataset. arXiv preprint arXiv:1907.06987. Cited by: §4.1.1.
  • J. Carreira and A. Zisserman (2017) Quo vadis, action recognition? A new model and the Kinetics dataset. In Conference on Computer Vision and Pattern Recognition, Cited by: §4.1.2.
  • K. Cho, B. van Merrienboer, C. Gülçehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Empirical Methods in Natural Language Processing, Cited by: §2.4.
  • A. Clark, J. Donahue, and K. Simonyan (2019) Adversarial video generation on complex datasets. In arxiv:1907.06571, Cited by: §A.1, §B.1, §1, §2.2, §3.1, §4.1.1, §4.4, §4.5.
  • H. De Vries, F. Strub, J. Mary, H. Larochelle, O. Pietquin, and A. C. Courville (2017) Modulating early visual processing by language. In Neural Information Processing Systems, Cited by: §B.2.
  • E. Denton and R. Fergus (2018) Stochastic video generation with a learned prior. In International Conference on Machine Learning, Cited by: §2.1, §4.5.
  • E. L. Denton and V. Birodkar (2017) Unsupervised learning of disentangled representations from video. In Neural Information Processing Systems, Cited by: §2.1.
  • E. L. Denton, S. Chintala, A. Szlam, and R. Fergus (2015) Deep generative image models using a Laplacian pyramid of adversarial networks. In Neural Information Processing Systems, Cited by: §2.2.
  • J. Donahue and K. Simonyan (2019) Large scale adversarial representation learning. Neural Information Processing Systems. Cited by: §2.2.
  • V. Dumoulin, J. Shlens, and M. Kudlur (2017) A learned representation for artistic style. In International Conference on Learning Representations, Cited by: §B.2.
  • F. Ebert, C. Finn, A. X. Lee, and S. Levine (2017) Self-supervised visual planning with temporal skip connections. In Conference on Robot Learning, Cited by: §4.5.
  • C. Finn, I. Goodfellow, and S. Levine (2016) Unsupervised learning for physical interaction through video prediction. In Neural Information Processing Systems, Cited by: §2.3.
  • C. Finn and S. Levine (2015) Deep visual foresight for planning robot motion. In ICML Deep Learning Workshop, Cited by: §1.
  • H. Gao, H. Xu, Q. Cai, R. Wang, F. Yu, and T. Darrell (2019) Disentangling propagation and generation for video prediction. In International Conference on Computer Vision, Cited by: §1, §3.2.1.
  • I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Neural Information Processing Systems, Cited by: §2.2.
  • Z. Hao, X. Huang, and S. Belongie (2018) Controllable video generation with sparse trajectories. In Conference on Computer Vision and Pattern Recognition, Cited by: §1, §3.2.1.
  • M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a local nash equilibrium. In Neural Information Processing Systems, Cited by: §4.1.2.
  • S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §2.4.
  • X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In European Conference on Computer Vision, Cited by: §2.2.
  • S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, Cited by: §B.2.
  • M. Jaderberg, K. Simonyan, A. Zisserman, and k. kavukcuoglu (2015) Spatial transformer networks. In Neural Information Processing Systems, Cited by: §2.3.
  • Y. Jang, G. Kim, and Y. Song (2018) Video prediction with appearance and motion conditions. In International Conference on Machine Learning, Cited by: §2.1.
  • D. Jayaraman, F. Ebert, A. A. Efros, and S. Levine (2019) Time-agnostic prediction: predicting predictable video frames. In International Conference on Learning Representations, Cited by: §2.1.
  • N. Kalchbrenner, A. van den Oord, K. Simonyan, I. Danihelka, O. Vinyals, A. Graves, and K. Kavukcuoglu (2017) Video pixel networks. In International Conference on Machine Learning, Cited by: §2.1.
  • T. Karras, T. Aila, S. Laine, and J. Lehtinen (2018) Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations, Cited by: §1, §2.2.
  • T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. Conference on Computer Vision and Pattern Recognition. Cited by: §1.
  • W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman (2017) The Kinetics human action video dataset. arXiv:1705.06950. Cited by: §4.1.1.
  • D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations, Cited by: §B.2, §4.1.3.
  • A. R. Kosiorek, H. Kim, I. Posner, and Y. W. Teh (2018) Sequential attend, infer, repeat: generative modelling of moving objects. In Neural Information Processing Systems, Cited by: §2.1.
  • K. Kumar, R. Kumar, T. de Boissiere, L. Gestin, W. Z. Teoh, J. Sotelo, A. de Brébisson, Y. Bengio, and A. C. Courville (2019) Melgan: generative adversarial networks for conditional waveform synthesis. In Neural Information Processing Systems, Cited by: §2.2.
  • A. X. Lee, R. Zhang, F. Ebert, P. Abbeel, C. Finn, and S. Levine (2018) Stochastic adversarial video prediction. arXiv:1804.01523. Cited by: §2.2, §4.5, §4.5.
  • Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M. Yang (2018a) Flow-grounded spatial-temporal video prediction from still images. In European Conference on Computer Vision, Cited by: §2.1.
  • Y. Li, M. R. Min, D. Shen, D. Carlson, and L. Carin (2018b) Video generation from text. In AAAI Conference on Artificial Intelligence, Cited by: §4.1.1.
  • X. Liang, L. Lee, W. Dai, and E. P. Xing (2017) Dual motion GAN for future-flow embedded video prediction. In International Conference on Computer Vision, Cited by: §B.1.
  • W. Liu, W. Luo, D. Lian, and S. Gao (2018) Future frame prediction for anomaly detection – a new baseline. In Conference on Computer Vision and Pattern Recognition, Cited by: §2.1.
  • Z. Liu, R. A. Yeh, X. Tang, Y. Liu, and A. Agarwala (2017) Video frame synthesis using deep voxel flow. In International Conference on Computer Vision, Cited by: §2.3.
  • P. Luc, C. Couprie, Y. LeCun, and J. Verbeek (2018) Predicting future instance segmentation by forecasting convolutional features. In European Conference on Computer Vision, Cited by: §2.1.
  • P. Luc, N. Neverova, C. Couprie, J. Verbeek, and Y. LeCun (2017) Predicting deeper into the future of semantic segmentation. In International Conference on Computer Vision, Cited by: §2.1.
  • M. Mathieu, C. Couprie, and Y. LeCun (2016) Deep multi-scale video prediction beyond mean square error. International Conference on Learning Representations. Cited by: §2.1.
  • M. Oliu, J. Selva, and S. Escalera (2018) Folded recurrent neural networks for future video prediction. In European Conference on Computer Vision, Cited by: §2.1.
  • V. Pătrăucean, A. Handa, and R. Cipolla (2016) Spatio-temporal video autoencoder with differentiable memory. In International Conference on Learning Representations, Cited by: §2.3.
  • M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra (2014) Video (language) modeling: a baseline for generative models of natural videos. arXiv:1412.6604. Cited by: §2.1, §2.3.
  • F. A. Reda, G. Liu, K. J. Shih, R. Kirby, J. Barker, D. Tarjan, A. Tao, and B. Catanzaro (2018) SDC-net: video prediction using spatially-displaced convolution. In European Conference on Computer Vision, Cited by: §2.3, §2.3, §2.3.
  • M. Saito, E. Matsumoto, and S. Saito (2017) Temporal generative adversarial nets with singular value clipping. In International Conference on Computer Vision, Cited by: §2.2.
  • M. Saito and S. Saito (2018) TGANv2: efficient training of large models for video generation with multiple subsampling layers. arXiv:1811.09245. Cited by: §1, §2.2, §2.2.
  • A. M. Saxe, J. L. McClelland, and S. Ganguli (2014) Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In International Conference on Learning Representations, Cited by: §B.2.
  • X. Shi, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo (2015) Convolutional lstm network: a machine learning approach for precipitation nowcasting. In Neural Information Processing Systems, Cited by: §2.4.
  • X. Shi, Z. Gao, L. Lausen, H. Wang, D. Yeung, W. Wong, and W. Woo (2017) Deep learning for precipitation nowcasting: a benchmark and a new model. In Neural Information Processing Systems, Cited by: §A.2, §A.2, §A.4, §A.4, §2.4.
  • K. Soomro, A. R. Zamir, and M. Shah (2012) UCF101: a dataset of 101 human actions classes from videos in the wild. arXiv:1212.0402. Cited by: §4.1.1.
  • N. Srivastava, E. Mansimov, and R. Salakhutdinov (2015) Unsupervised learning of video representations using LSTMs. In International Conference on Machine Learning, Cited by: §1, §2.1, §2.1.
  • S. Tulyakov, M. Liu, X. Yang, and J. Kautz (2018) MoCoGAN: decomposing motion and content for video generation. In Conference on Computer Vision and Pattern Recognition, Cited by: §1, §2.1, §2.2, §2.2, §3.1, §4.2.
  • T. Unterthiner, S. van Steenkiste, K. Kurach, R. Marinier, M. Michalski, and S. Gelly (2018) Towards accurate generative models of video: a new metric & challenges. arXiv:1812.01717. Cited by: §4.1.2, §4.5.
  • R. Villegas, J. Yang, S. Hong, X. Lin, and H. Lee (2017a) Decomposing motion and content for natural video sequence prediction. In International Conference on Learning Representations, Cited by: §2.1.
  • R. Villegas, J. Yang, Y. Zou, S. Sohn, X. Lin, and H. Lee (2017b) Learning to generate long-term future via hierarchical prediction. In International Conference on Machine Learning, Cited by: §2.1.
  • C. Vondrick, P. Hamed, and A. Torralba (2016a) Anticipating the future by watching unlabeled video. In Conference on Computer Vision and Pattern Recognition, Cited by: §2.1.
  • C. Vondrick, H. Pirsiavash, and A. Torralba (2016b) Generating videos with scene dynamics. In Neural Information Processing Systems, Cited by: §1, §2.1, §2.1, §2.2.
  • C. Vondrick and A. Torralba (2017) Generating the future with adversarial transformers. In Conference on Computer Vision and Pattern Recognition, Cited by: §2.1, §2.3.
  • N. Wahlström, T. B. Schön, and M. P. Deisenroth (2015) From pixels to torques: policy learning with deep dynamical models. In ICML Deep Learning Workshop, Cited by: §1.
  • J. Walker, C. Doersch, A. Gupta, and M. Hebert (2016) An uncertain future: forecasting from static images using variational autoencoders. In European Conference on Computer Vision, Cited by: §1, §2.1.
  • J. Walker, A. Gupta, and M. Hebert (2015) Dense optical flow prediction from a static image. In International Conference on Computer Vision, Cited by: §2.1.
  • Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13 (4), pp. 600–612. Cited by: §4.1.2.
  • M. Watter, J. T. Springenberg, J. Boedecker, and M. Riedmiller (2015) A locally linear latent dynamics model for control from raw images. In NeurIPS, Cited by: §1.
  • D. Weissenborn, O. Täckström, and J. Uszkoreit (2020) Scaling autoregressive video models. In International Conference on Learning Representations, Cited by: §4.1.1, §4.2, §4.4, §4.5, §4.5.
  • W. Xiong, W. Luo, L. Ma, W. Liu, and J. Luo (2018) Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks. In Conference on Computer Vision and Pattern Recognition, Cited by: §2.1.
  • J. Xu, B. Ni, Z. Li, S. Cheng, and X. Yang (2018) Structure preserving video prediction. In Conference on Computer Vision and Pattern Recognition, Cited by: §2.4.
  • T. Xue, J. Wu, K. L. Bouman, and W. T. Freeman (2016) Visual dynamics: probabilistic future frame synthesis via cross convolutional networks. In Neural Information Processing Systems, Cited by: §2.1, §2.3.
  • H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena (2019) Self-attention generative adversarial networks. In International Conference on Machine Learning, Cited by: §B.1, §B.2.
  • H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas (2017) Stackgan: text to photo-realistic image synthesis with stacked generative adversarial networks. In International Conference on Computer Vision, Cited by: §2.2.
  • G. Zhu, Z. Huang, and C. Zhang (2018) Object-oriented dynamics predictor. In Neural Information Processing Systems, Cited by: §2.1.

Appendix A Further details on the recurrent units

a.1 Implementation details

For the ConvGRU modules in DVD-GAN-FP, Clark et al. (2019) use ReLU activation functions rather than the hyperbolic tangent (tanh). For the ConvLSTM module, we experiment with both tanh, which is traditionally used, and ReLU. We find that ReLU significantly outperforms tanh early on, reaching a train FVD of 51.8 after 180K steps, against 62.7 for tanh. As a result, for our comparison of recurrent units, we report results using the ReLU activation function.

The hyper-network used to predict the parameters of our transformations differs depending on the output dimension. In the case of PTSRU, we use a resolution-preserving CNN with two hidden layers. For both TSRU and K-TrajGRU, a single convolutional layer is followed by adaptive max-pooling, reducing the spatial dimension of the output, then by a single-hidden-layer MLP, to predict the set of kernels used across the input. The pseudo-segmentation map is obtained using a single convolutional layer applied to the input. For all three architectures, ReLU activations are interleaved between the linear layers, and we use a softmax activation on the output. Each architecture has zero or two hidden layers, and we set the number of hidden output channels (or units, in the case of linear layers) to half the number of input channels.

a.2 K-TrajGRU

Our proposed K-TrajGRU is a kernel-based extension of TrajGRU (Shi et al., 2017), which we further detail here. Formally, it is described by the following equations:

\begin{align}
\theta_{h,x} &= f(h_{t-1}, x_t), \\
\tilde{h}_{t-1} &= w(h_{t-1};\, \theta_{h,x}), \\
r &= \sigma(W_r \star_k [\tilde{h}_{t-1};\, x_t] + b_r), \\
h'_t &= r \odot \tilde{h}_{t-1}, \\
c &= \rho(W_c \star_k [h'_t;\, x_t] + b_c), \\
u &= \sigma(W_u \star_k [\tilde{h}_{t-1};\, x_t] + b_u), \\
h_t &= u \odot h_{t-1} + (1 - u) \odot c,
\end{align}

where $h_{t-1}$ and $x_t$ denote respectively the past hidden features and the input features; $f$ and $w$ are respectively the shallow convolutional neural network and the factorized warping function presented in the main paper and used by our TSRU modules; $\star_k$ denotes a convolution with kernel size $k$; $\rho$ is the activation function used (in our case, tanh); $\sigma$ denotes the sigmoid function; $\odot$ denotes elementwise multiplication; and $W_\ast$ and $b_\ast$ are the kernel and bias parameters of the convolutions used to produce each intermediate output $\ast \in \{r, c, u\}$.
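To make the update concrete, a single K-TrajGRU step can be sketched in NumPy under simplifying assumptions (biases omitted, 1x1 convolutions in place of the k x k convolutions, a single 3x3 warping grid; `params` and `weights` are hypothetical placeholders for the learned parameters and the hyper-network's output):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def warp(h, weights):
    """Warping w(h; theta): resample each position of h (C, H, W) as a
    weighted sum of its 3x3 neighbourhood, using per-pixel weights of
    shape (9, H, W) that sum to one (zero padding at the border)."""
    _, H, W = h.shape
    pad = np.pad(h, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(h)
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    for i, (dy, dx) in enumerate(offsets):
        out += weights[i] * pad[:, 1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
    return out

def k_trajgru_step(h_prev, x, weights, params):
    """One K-TrajGRU step following the equations above; biases are
    omitted and per-pixel linear maps stand in for the convolutions."""
    def conv1x1(W, a, b):  # W: (C_out, C_a + C_b)
        return np.einsum('oc,chw->ohw', W, np.concatenate([a, b]))
    h_tilde = warp(h_prev, weights)                      # ~h_{t-1}
    r = sigmoid(conv1x1(params['Wr'], h_tilde, x))       # reset gate
    c = np.tanh(conv1x1(params['Wc'], r * h_tilde, x))   # candidate state
    u = sigmoid(conv1x1(params['Wu'], h_tilde, x))       # update gate
    return u * h_prev + (1 - u) * c                      # h_t
```

Note that the final line combines the unchanged $h_{t-1}$ with the candidate $c$, whereas TSRU would combine the warped features instead.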

Conceptually, K-TrajGRU and TSRU are related, in that both warp the past hidden features to account for motion. TSRU however has a simpler design, which can be intuitively interpreted: it fuses the warped features $\tilde{h}_{t-1}$ with newly generated content $c$, using a predicted gate $u$. K-TrajGRU instead provides the warped features to the reset, candidate and update equations, but still combines the unchanged input hidden features $h_{t-1}$ with $c$.

We note that Shi et al. (2017) propose to predict $L$ warped versions of the hidden state, and that our proposed recurrent unit specifically extends the case where $L = 1$. Increasing $L$, as well as adding a read gate to the TSRU module, are potential avenues for further improving the expressivity of our transformation-based recurrent unit, which we leave for future work.

a.3 ConvLSTM

For comparison with the other recurrent units, the hidden state and memory cell of our ConvLSTM baseline each have half as many channels as the hidden state used in the other designs, which do not maintain a memory cell (ConvGRU, K-TrajGRU and the TSRU variants). The hidden state and the memory cell are initialized with respectively the first and second halves of the conditioning sequence's encoding. As for the other recurrent units, this encoding is obtained by passing the concatenated image-level features extracted by the encoder for each frame through a 3x3 convolution layer followed by a ReLU activation, to compress the representation to the required dimension. The output hidden state and memory cell for each time step are then concatenated and passed on to the subsequent residual blocks of the generator.

a.4 Dynamic ConvLSTM

The dynamic ConvLSTM proposed by Shi et al. (2017) is conceptually simple: for each sample, a set of kernels is predicted and used in the convolutional layers applied to both the recurrent unit's input features $x_t$ and its hidden features $h_{t-1}$, in the input, forget and output gate equations, as well as in the intermediate state's equation. The network predicting these kernels, denoted “Kernel CNN”, employs a channel-wise softmax operation along the input channel dimension to increase the sparsity of the predicted kernels. We refer the reader to Section 3.3 of the original paper for a more extensive presentation.

It is important to note that such dynamic convolutions require a different kernel to be predicted for each pair of input and output channels. To keep the number of parameters of the Kernel CNN small, Shi et al. (2017) propose to share weights across the input and output channel dimensions, by mapping each channel of the difference between image features extracted from the two previous frames to a corresponding 2D kernel. This relies on the fact that the generator is autoregressive, and encodes each prediction before making the next one. We adapt this to our architecture, as described later in this section for the sake of completeness.

Before going into further implementation details, however, we draw the reader's attention to the following important analysis. In contrast with dynamic convolutions, our proposed transformation-based recurrent units all essentially rely on dynamic, depth-wise, locally connected layers. Hence, while these recurrent units predict transformation parameters whose dimension scales with the spatial dimensions of the hidden features, the dimension of the Dynamic ConvLSTM transformation's parameters scales with the number of input and output channels. We find that this leads to a large increase in computational cost, even for medium-sized models. Specifically, with extensive use of parallelisation across devices, our medium-size models, which use a channel multiplier of 48, run at an average step duration of 6.385s for a batch size of 512 when using Dynamic ConvLSTMs; in the same setup, models using ConvGRU (resp. TSRU) run at 284ms (resp. 354ms) per step. The Dynamic ConvLSTM module hence permits only architectures of very limited width. In this sense, we argue that it is not scalable, and we therefore do not consider it in our comparison of recurrent units in the large-scale setting.
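The scaling argument can be made concrete with a back-of-the-envelope parameter count (the feature sizes below are illustrative, not the exact dimensions of our models):

```python
def dynamic_conv_params(c_in, c_out, k):
    """Dynamically predicted kernels scale with the number of input and
    output channels: one k x k filter per (input, output) channel pair."""
    return c_in * c_out * k * k

def depthwise_local_params(h, w, k):
    """The transformation-based units instead predict a depth-wise, locally
    connected transformation: one set of k x k weights per spatial
    position, shared across channels."""
    return h * w * k * k

# Illustrative sizes for a mid-resolution feature map.
dyn = dynamic_conv_params(c_in=512, c_out=512, k=3)   # 2,359,296 values
loc = depthwise_local_params(h=16, w=16, k=3)         #     2,304 values
```

Even at these modest sizes, the dynamically predicted kernels require three orders of magnitude more values per step than the locally connected transformation.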

Implementation details

To allow the use of their proposed recurrent unit in our architecture, at time step $t$ we treat the first $C_h$ channels of the previous and current features as the input features to the Kernel CNN, where $C_h$ denotes the number of channels of the hidden representation of the recurrent unit. We note that this requires the number of channels of the input features to be a multiple of $C_h$, so that each channel of the difference of features can be mapped to a group of kernels. For the first time step, we use the first $C_h$ channels of the sequence encoding to play the role of the previous features.

A second difference is that their proposed unit additionally fuses the past hidden features and past cell states to produce the hidden and cell states that are input to the module (an operation given a dedicated notation in their paper). This might be redundant with the intended function of the hidden state and memory cell of a ConvLSTM, and in practice the authors observe it to bring only a marginal gain in general. Additionally, it is an orthogonal contribution, which could be implemented for any of the other recurrent units we have considered. As a result, we do not implement this aspect of their recurrent unit.

Appendix B Experimental Details

b.1 Architecture Description

Except for the improvements discussed in the main text, our model is identical to DVD-GAN-FP (Clark et al., 2019), and we refer the reader there for a detailed description of the network. Here we give only a brief overview.

The generator is primarily a 2D convolutional residual network based on the BigGAN architecture (Brock et al., 2019). Beginning at a 4x4 spatial resolution, hidden features are processed by two identical BigGAN residual blocks (a functional depth of 4 convolutions, not accounting for the skip connections), then upsampled by 2x via nearest-neighbor upsampling. In order to propagate information between frames, after each upsampling (and at the very beginning) a convolutional RNN is run over the features at all timesteps in order, and its outputs are propagated to the next layer of residual blocks. The initial 4x4 features are a reshaped affine transformation of the latent vector for the sample in question. The latent vector is also used as the conditioning for the batch normalization layers within each residual block. With the exception of this linear embedding, all linear and convolutional layers in the entire model have Spectral Normalization applied to their weights (Zhang et al., 2019).

All discriminators discussed in the main text are convolutional networks almost identical to BigGAN’s discriminator except in the input they take. Per-frame discriminators are entirely unchanged. For discriminators judging temporal qualities, all convolutions in the first two residual blocks are replaced with 3D convolutions. Afterwards, all layers are applied to each frame separately until there is a sum-over-time before the final linear output layer.

In order to condition on initial frames, a network identical to the spatial discriminator network is applied to the conditioning frames. For each RNN present in the generator, the output of the residual block in this network with matching resolution is taken, the features from all frames are stacked in the channel dimension, and this representation is compressed using a single convolutional layer, to be used as the initial state for the corresponding RNN. In this way, all the recurrent units in the generator are initialized with features that have knowledge of the conditioning frames.

The discriminator and generator are jointly trained with a geometric hinge loss (Liang et al., 2017), updating the discriminators twice for each generator update. Unless otherwise specified, the width of each network is identical to that in DVD-GAN, with a channel multiplier of 48 for both the generator and discriminator.

b.2 Implementation Details

Spectral Normalization (Zhang et al., 2019) is applied to all weights, approximated using only the first singular value. All weights are initialized orthogonally (Saxe et al., 2014), and during training we maintain an auxiliary loss which penalizes weights diverging from orthogonality; this penalty is added to the GAN loss before optimization. All samples for evaluation (but not for training) use an exponential moving average of the generator's weights, which is accumulated with a fixed decay and initialized after 20,000 training steps with the value of the weights at that point. In order to disambiguate the effect of the weights' moving averages from those present in the Batch Normalization layers, we always evaluate using “standing statistics”: we run a large number of forward passes of the model (100 batches) and record the average means and variances of all Batch Normalization layers. These averages are used as the “batch statistics” for all subsequent evaluations, and the process is repeated for each evaluation. The model is optimized using Adam (Kingma and Ba, 2015), with separate learning rates for the generator and discriminator. Conditioning of the Batch Normalization layers (Ioffe and Szegedy, 2015; De Vries et al., 2017; Dumoulin et al., 2017) is performed by letting the scale and offset parameters be an affine transformation of the conditioning vector, here provided by the latent variable. We note that the final batch normalization layer in the generator is not conditional, and uses a regular learned scale and offset.
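The moving-average and standing-statistics procedures can be sketched as follows (a minimal sketch; the decay value is a placeholder, not the one used in our experiments):

```python
import numpy as np

class WeightEMA:
    """Exponential moving average of generator weights, initialized at the
    end of a warm-up with the current weights (decay is a placeholder)."""
    def __init__(self, decay=0.9999):
        self.decay, self.shadow = decay, None

    def update(self, weights):
        if self.shadow is None:  # first call: initialize with current weights
            self.shadow = {k: v.copy() for k, v in weights.items()}
        else:
            for k, v in weights.items():
                self.shadow[k] = self.decay * self.shadow[k] + (1 - self.decay) * v

def standing_statistics(feature_batches):
    """Average per-batch means and variances over many forward passes; the
    averages then replace the batch statistics at evaluation time."""
    means = np.mean([b.mean(axis=0) for b in feature_batches], axis=0)
    variances = np.mean([b.var(axis=0) for b in feature_batches], axis=0)
    return means, variances
```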

Appendix C Extended qualitative analysis

We start by showing in Section C.1 a qualitative comparison between recurrent units, using our medium-size models described in Section 4.3 of the main paper. Next, in Sections C.2 and C.3, we provide further qualitative analysis of the samples predicted by our final large-scale TrIVD-GAN-FP model, described in Section 4.4 of the main paper. We provide accompanying videos to support our analysis.

All models were trained on the Kinetics-600 train set, and all predictions were sampled randomly on UCF-101 test set 1, due to restrictions on Kinetics-600.

c.1 Qualitative comparison between ConvGRU and TSRU

We now compare our proposed recurrent unit TSRU with ConvGRU, using for each the best model obtained after training for 1M steps. We employ the truncation trick (Brock et al., 2019), as we find that it significantly improves both models' predictions. Specifically, we apply moderate truncation of the latent representation.
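The truncation trick can be sketched as resampling out-of-range latent components (a common formulation of the trick; the threshold below is arbitrary, not the value used in our experiments):

```python
import numpy as np

def truncated_normal(shape, threshold, rng):
    """Sample latents from a standard normal and resample any component
    whose magnitude exceeds the truncation threshold."""
    z = rng.standard_normal(shape)
    mask = np.abs(z) > threshold
    while mask.any():
        z[mask] = rng.standard_normal(mask.sum())
        mask = np.abs(z) > threshold
    return z
```

Lowering the threshold trades sample diversity for fidelity; in the extreme case of a zero threshold, all latents collapse to the constant zero vector used in Sections C.2 and C.3.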

We refer the reader to the accompanying video in folder C.1., showing a side-by-side comparison of randomly selected predictions, with the ConvGRU baseline on the left, and the TSRU model on the right. We find that our proposed module yields a slight but perceptible improvement over ConvGRU, in that the model using TSRU is more often able to conserve a plausible object shape across frame predictions.

c.2 Influence of the latent representation on the predictions

Here, we provide a qualitative analysis of the influence of the latent representation on the predictions, using our large-scale TrIVD-GAN-FP model. In the directory C.2., we provide a comparison of samples obtained using, on the left, constant zero latent representations (i.e. extreme truncation) and, on the right, random latent representations sampled from the same distribution as used during training (i.e. no truncation).

We find that, compared with the constant-zero latent samples, the non-truncated samples exhibit much more drastic changes in the predictions with respect to the input frames. Camera motion, zooming and object motion tend to be more pronounced in a number of samples when using the non-truncated distribution. Effects like fade-to-black or novel object appearances tend to appear regularly in the non-truncated examples, while we have not observed them in the samples that use a fixed zero latent representation. We point out examples of such effects in the annotated comparison (annotatedconstantzvsnotruncation), and also provide a non-annotated comparison (constantzvsnotruncation). This points to a significant influence of the random variable on the predictions, as also evaluated quantitatively on the BAIR dataset in Section 4.5 of the main paper.

Finally, for the interested reader, we also provide a larger batch of random samples in folder C.2.all, with the following levels of truncation: extreme truncation, where we sample all videos with the fixed zero latent representation; moderate truncation; and no truncation at all.

c.3 Analysis of success and failure cases

As a result of the findings described in Section C.2, we find that our large-scale model also benefits from the truncation trick in terms of generation quality. Focusing on the constant-zero latent representation samples, we find that our final model TrIVD-GAN-FP overall generates highly plausible motion patterns, including in highly complex cases such as intricate hand motions or water motion patterns. The model also makes plausible predictions of complex motion in a number of cases where points move with non-uniform acceleration across frames, suggesting that it has learned a good representation of physical phenomena such as gravity (e.g. for people jumping or juggling) and of some forms of object interaction (e.g. for demonstrations of knitting). We highlight some of these in the accompanying video C.3.annotatedsuccessconstantz.

We also highlight some failure cases in C.3.annotatedfailureconstantzvsnotruncation, where we provide constant-zero latent representation samples on the left, and non-truncated samples on the right. We find that when the truncation trick is not used, the model sometimes predicts very large motion, leading to severe object deformations. We also sometimes observe large artifacts across the entire scene. Occasionally, the model makes predictions of obviously implausible trajectories. In contrast, when using truncation, these failure cases largely disappear.

Across both levels of truncation, however, we find that the model struggles when globally coherent motion must be predicted for objects with fine spatial structure, e.g. hula hoops or horse legs. We also find that in certain cases motion simply seems to stop, presumably because this is an easy way for the model to yield plausible predictions in an over-conservative manner. On rare occasions, we also observe objects disappearing into the void. A final failure case arises when the input sequence is blurry, which leads to poor predictions.