PredRNN++: Towards A Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning

by   Yunbo Wang, et al.

We present PredRNN++, an improved recurrent network for video predictive learning. In pursuit of greater spatiotemporal modeling capability, our approach increases the transition depth between adjacent states by leveraging a novel recurrent unit, the causal LSTM, which re-organizes the spatial and temporal memories in a cascaded mechanism. However, there is still a dilemma in video predictive learning: increasingly deep-in-time models have been designed to capture complex variations, while introducing more difficulties in gradient back-propagation. To alleviate this undesirable effect, we propose a gradient highway architecture, which provides alternative shorter routes for gradient flows from outputs back to long-range inputs. This architecture works seamlessly with causal LSTMs, enabling PredRNN++ to capture short-term and long-term dependencies adaptively. We evaluate our model on both synthetic and real video datasets, showing that it eases the vanishing gradient problem and yields state-of-the-art prediction results even in a difficult object occlusion scenario.




1 Introduction

Spatiotemporal predictive learning aims to learn features from label-free video data in a self-supervised manner (sometimes called unsupervised) and to use them to perform a specific task. This learning paradigm has benefited, or could potentially benefit, practical applications, e.g. precipitation forecasting (Shi et al., 2015; Wang et al., 2017), traffic flow prediction (Zhang et al., 2017; Xu et al., 2018) and physical interaction simulation (Lerer et al., 2016; Finn et al., 2016).

An accurate predictive learning method requires effectively modeling video dynamics at different time scales. Consider two typical situations: (i) When sudden changes happen, future images should be generated upon nearby frames rather than distant frames, which requires the predictive model to learn short-term video dynamics; (ii) When the moving objects in the scene are frequently entangled, it is hard to separate them in the generated frames, which requires the predictive model to recall contexts from before the occlusion happens. Thus, short-term and long-term video relations should be adaptively taken into account.

(a) Stacked ConvLSTMs
(b) Deep Transition ConvLSTMs
(c) PredRNN with ST-LSTMs
Figure 1: Comparison of data flows in (a) the stacked ConvLSTM network, (b) the deep transition ConvLSTM network, and (c) PredRNN with the spatiotemporal LSTM (ST-LSTM). The two memories of PredRNN work in parallel: the red lines in subplot (c) denote the deep transition paths of the spatial memory, while horizontal black arrows indicate the update directions of the temporal memories.

1.1 Deep-in-Time Structures and Vanishing Gradients Dilemma in Spatiotemporal Modeling

In order to capture long-term frame dependencies, recurrent neural networks (RNNs) (Rumelhart et al., 1988; Werbos, 1990; Williams & Zipser, 1995) have recently been applied to video predictive learning (Ranzato et al., 2014). However, most methods (Srivastava et al., 2015a; Shi et al., 2015; Patraucean et al., 2016) followed the traditional RNN chain structure and did not fully utilize the network depth. The transitions between adjacent RNN states from one time step to the next are modeled by simple functions, though theoretical evidence shows that deeper networks can be exponentially more efficient in both spatial feature extraction (Bianchini & Scarselli, 2014) and sequence modeling (Pascanu et al., 2013). We believe that making the network deeper-in-time, i.e. increasing the number of recurrent states from the input to the output, would significantly strengthen its ability to learn short-term video dynamics.

Motivated by this, a former state-of-the-art model named PredRNN (Wang et al., 2017) applied complex nonlinear transition functions from one frame to the next, constructing a dual memory structure upon Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997). Unfortunately, this complex structure easily suffers from the vanishing gradient problem (Bengio et al., 1994; Pascanu et al., 2013): the magnitude of the gradients decays exponentially during back-propagation through time (BPTT). Herein lies a dilemma in spatiotemporal predictive learning: increasingly deep-in-time networks have been designed for complex video dynamics, while also introducing more difficulties in gradient propagation. Therefore, how to maintain a steady flow of gradients in a deep-in-time predictive model is a path worth exploring. Our key insight is to build adaptive connections among RNN states or layers, providing our model with both longer and shorter routes at the same time, from input frames to the expected future predictions.

2 Related Work

Recurrent neural networks (RNNs) are widely used in video prediction. Ranzato et al. (2014) constructed an RNN model to predict future frames. Srivastava et al. (2015a) adapted the sequence-to-sequence LSTM framework for multi-frame prediction. Shi et al. (2015) extended this model and presented the convolutional LSTM (ConvLSTM) by plugging convolutional operations into the recurrent connections. Finn et al. (2016) developed an action-conditioned predictive model that explicitly predicts a distribution over pixel motion from previous frames. Lotter et al. (2017) built a predictive model upon ConvLSTMs, mainly focusing on increasing the prediction quality of the next single frame. Villegas et al. (2017a) proposed a network that separates the information components (motion and content) into different encoder pathways. Patraucean et al. (2016) predicted intermediate pixel flow and applied the flow to predict image pixels. Kalchbrenner et al. (2017) proposed a sophisticated model combining gated CNN and ConvLSTM structures; it estimates pixel values in a video one by one using the well-established but complicated PixelCNNs (van den Oord et al., 2016), and thus severely suffers from low prediction efficiency. Wang et al. (2017) proposed a deep-transition RNN with two memory cells, in which the spatiotemporal memory flows through all RNN states across different RNN layers.

Convolutional neural networks (CNNs) are also applied to video prediction, although they only create representations for fixed-size inputs. Oh et al. (2015) defined a CNN-based autoencoder model for Atari game prediction. De Brabandere et al. (2016) adapted the filter operations of a convolutional network to the specific input samples. Villegas et al. (2017b) proposed a three-stage framework with additional annotated human-joint data to make longer predictions.

To deal with the inherent diversity of future predictions, Babaeizadeh et al. (2018) and Denton & Fergus (2018) explored stochastic variational methods in video predictive models, but it is difficult to assess the performance of these stochastic models. Generative adversarial networks (Goodfellow et al., 2014; Denton et al., 2015) have also been applied to video prediction (Mathieu et al., 2016; Vondrick et al., 2016; Bhattacharjee & Das, 2017; Denton et al., 2017; Lu et al., 2017; Tulyakov et al., 2018). These methods attempt to preserve the sharpness of the generated images by treating sharpness as a major characteristic distinguishing real from fake video frames, but their performance depends significantly on a careful training of the unstable adversarial networks.

In summary, prior video prediction models exhibit different drawbacks. CNN-based approaches predict a limited number of frames in one pass and focus on spatial appearances rather than temporal coherence in long-term motions. RNN-based approaches, in contrast, capture temporal dynamics with recurrent connections; however, their predictions suffer from the well-known vanishing gradient problem of RNNs and thus rely particularly on the closest frames. In our preliminary experiments, it was hard to preserve the shapes of moving objects in generated future frames, especially after they overlapped. In this paper, we address this problem by proposing a new gradient highway recurrent unit, which absorbs knowledge from previous video frames and effectively leverages long-term information.

3 Revisiting Deep-in-Time Architectures

A general method to increase the depth of RNNs is to stack multiple hidden layers. A typical stacked recurrent network for video prediction (Shi et al., 2015) is illustrated in Figure 1(a). The recurrent unit, the ConvLSTM, is designed to properly keep and forget past information via gated structures, and then fuse it with current spatial representations. Nevertheless, stacked ConvLSTMs add no extra modeling capability to the step-to-step recurrent state transitions.

In our preliminary observations, increasing the step-to-step transition depth of ConvLSTMs can significantly improve their modeling capability for short-term dynamics. As shown in Figure 1(b), the hidden state H_t and the memory state C_t are updated in a zigzag direction. The extended recurrence depth between horizontally adjacent states enables the network to learn complex non-linear transition functions of nearby frames in a short interval. However, it introduces vanishing gradient issues, making it difficult to capture long-term correlations from the video. Though a simplified cell structure, the recurrent highway (Zilly et al., 2017), might somewhat ease this problem, it sacrifices spatiotemporal modeling power, exactly as in the dilemma described earlier.

Based on the deep transition architecture, a well-performing predictive learning approach, PredRNN (Wang et al., 2017), added extra connections between adjacent time steps in a stacked spatiotemporal LSTM (ST-LSTM), in pursuit of both long-term coherence and short-term recurrence depth. Figure 1(c) illustrates its information flows. PredRNN leverages a dual memory mechanism and combines, by a simple gated concatenation, the horizontally updated temporal memory C_t^k with the vertically transformed spatial memory M_t^k. Despite the favorable information flows provided by the spatiotemporal memory, this parallel memory structure, followed by a concatenation operator and a convolution layer that keeps the number of channels constant, is not an efficient mechanism for increasing the recurrence depth. Besides, as a straightforward combination of the stacked recurrent network and the deep transition network, PredRNN still faces the same vanishing gradient problem as previous models.

4 PredRNN++

In this section, we give a detailed description of the improved predictive recurrent neural network (PredRNN++). Compared with the above deep-in-time recurrent architectures, our approach has two key insights. First, it presents a new spatiotemporal memory mechanism, the causal LSTM, which increases the recurrence depth from one time step to the next and thereby derives a more powerful modeling capability for strong spatial correlations and short-term dynamics. Second, it attempts to solve the gradient back-propagation issues for the sake of long-term video modeling, constructing an alternative gradient highway, a shorter route from future outputs back to distant inputs.

4.1 Causal LSTM

Figure 2: Causal LSTM, in which the temporal and spatial memories are connected in a cascaded way through gated structures. Colored parts are newly designed operations, concentric circles denote concatenation, and σ is the element-wise sigmoid function.

The causal LSTM is enlightened by the idea of adding more non-linear layers to recurrent transitions, increasing the network depth from one state to the next. A schematic of this new recurrent unit is shown in Figure 2. A causal LSTM unit contains dual memories, the temporal memory C_t^k and the spatial memory M_t^k, where the subscript t denotes the time step and the superscript k denotes the hidden layer in a stacked causal LSTM network. The current temporal memory C_t^k directly depends on its previous state C_{t-1}^k, and is controlled through a forget gate f_t, an input gate i_t, and an input modulation gate g_t. The current spatial memory M_t^k depends on M_t^{k-1} in the deep transition path. Specifically for the bottom layer (k = 1), we assign the topmost spatial memory of the previous time step, M_{t-1}^L, to M_t^0. Evidently different from the original spatiotemporal LSTM (Wang et al., 2017), the causal LSTM adopts a cascaded mechanism, where the spatial memory is particularly a function of the temporal memory via another set of gate structures. The update equations of the causal LSTM at the k-th layer can be presented as follows:


(g_t, i_t, f_t) = (tanh, σ, σ)( W_1 ∗ [X_t, H_{t-1}^k, C_{t-1}^k] )
C_t^k = f_t ⊙ C_{t-1}^k + i_t ⊙ g_t
(g_t', i_t', f_t') = (tanh, σ, σ)( W_2 ∗ [X_t, C_t^k, M_t^{k-1}] )
M_t^k = f_t' ⊙ tanh( W_3 ∗ M_t^{k-1} ) + i_t' ⊙ g_t'
o_t = tanh( W_4 ∗ [X_t, C_t^k, M_t^k] )
H_t^k = o_t ⊙ tanh( W_5 ∗ M_t^k )

where ∗ denotes convolution, ⊙ is element-wise multiplication, σ is the element-wise sigmoid function, the square brackets indicate a concatenation of the tensors, and the round brackets indicate a system of equations. W_1–W_5 are convolutional filters, where W_3 and W_5 are 1 × 1 convolutional filters for changing the number of channels. The final output H_t^k is co-determined by the dual memory states C_t^k and M_t^k.

Due to a significant increase of the recurrence depth along the spatiotemporal transition pathway, this newly designed cascaded memory is superior to the simple concatenation structure of the spatiotemporal LSTM (Wang et al., 2017). Each pixel in the final generated frame has a larger receptive field over the input volume at every time step, which endows the predictive model with greater modeling power for short-term video dynamics and sudden changes.

We also consider a spatial-to-temporal variant of the causal LSTM, in which we swap the positions of the two memories: M_t^k is updated first, and C_t^k is then calculated based on it. An experimental comparison of these two alternative structures is presented in Section 5, where we demonstrate that both of them lead to better video prediction results than the original spatiotemporal LSTM.
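To make the cascaded update concrete, here is a minimal scalar sketch of one causal LSTM step, with convolutions replaced by scalar linear maps and all weight names (`w["g1"]`, `w["w3"]`, …) purely illustrative. The point is the two-stage structure: the spatial memory is gated on the freshly updated temporal memory, not computed in parallel with it.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def lin(weights, inputs):
    # scalar stand-in for a convolution over concatenated inputs
    return sum(w * x for w, x in zip(weights, inputs))

def causal_lstm_step(x, h_prev, c_prev, m_below, w):
    """One cascaded update (scalar sketch, illustrative weight names).

    Stage 1 builds the temporal memory c from [x, h_prev, c_prev];
    stage 2 builds the spatial memory m from [x, c, m_below], i.e.
    from the *fresh* c -- the cascade distinguishing this cell from
    ST-LSTM's parallel concatenation of the two memories.
    """
    # Stage 1: temporal memory with forget / input / modulation gates
    g = math.tanh(lin(w["g1"], (x, h_prev, c_prev)))
    i = sigmoid(lin(w["i1"], (x, h_prev, c_prev)))
    f = sigmoid(lin(w["f1"], (x, h_prev, c_prev)))
    c = f * c_prev + i * g
    # Stage 2: spatial memory, gated on the new temporal memory c
    g2 = math.tanh(lin(w["g2"], (x, c, m_below)))
    i2 = sigmoid(lin(w["i2"], (x, c, m_below)))
    f2 = sigmoid(lin(w["f2"], (x, c, m_below)))
    m = f2 * math.tanh(w["w3"] * m_below) + i2 * g2
    # Output gate reads both memories; hidden state mixes them
    o = math.tanh(lin(w["o"], (x, c, m)))
    h = o * math.tanh(w["w5"] * m)
    return h, c, m
```

Because stage 2 consumes c, perturbing C_{t-1}^k changes not only C_t^k but also M_t^k and H_t^k, so each output sits at the end of a deeper transition path than in a parallel dual-memory cell.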

4.2 Gradient Highway

Beyond short-term video dynamics, causal LSTMs still suffer from gradient back-propagation difficulties over the long term. In particular, the temporal memory may forget outdated frame appearances due to the longer transitions. Such difficulties remain unsettled by this recurrent architecture alone, especially for videos with periodic motions or frequent occlusions. We need an information highway to learn skip-frame relations.

Theoretical evidence indicates that highway layers (Srivastava et al., 2015b) are able to deliver gradients efficiently in very deep feed-forward networks. We extend this idea to recurrent networks to keep long-term gradients from quickly vanishing, and propose a new spatiotemporal recurrent structure named the Gradient Highway Unit (GHU), with a schematic shown in Figure 3. The equations of the GHU can be presented as follows:

P_t = tanh( W_px ∗ X_t + W_pz ∗ Z_{t-1} )
S_t = σ( W_sx ∗ X_t + W_sz ∗ Z_{t-1} )
Z_t = S_t ⊙ P_t + (1 − S_t) ⊙ Z_{t-1}

where the W's stand for convolutional filters. S_t is named the switch gate, since it enables adaptive learning between the transformed inputs P_t and the hidden states Z_{t-1}. Equation 2 can be briefly expressed as Z_t = GHU(X_t, Z_{t-1}).
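A minimal scalar sketch of one GHU step, with the convolutions replaced by scalar weights (the weight names mirror the equations above but are otherwise illustrative):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def ghu_step(x, z_prev, w):
    """One gradient-highway update with scalar states."""
    p = math.tanh(w["px"] * x + w["pz"] * z_prev)   # transformed input P_t
    s = sigmoid(w["sx"] * x + w["sz"] * z_prev)     # switch gate S_t
    return s * p + (1.0 - s) * z_prev               # highway state Z_t
```

When S_t saturates toward 0, Z_t ≈ Z_{t-1}, so the Jacobian of Z_t with respect to Z_{t-1} stays close to the identity and gradients flowing along the highway hardly shrink; this is exactly the short route that the deep-in-time dilemma calls for.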

In pursuit of great spatiotemporal modeling capability, we build a deeper-in-time network with causal LSTMs, and then tackle the vanishing gradient problem with the GHU. The final architecture is shown in Figure 3. Specifically, we stack L causal LSTMs and inject a GHU between the 1st and the 2nd causal LSTMs. The key equations of the entire model are presented as follows (for 3 ≤ k ≤ L):

H_t^1, C_t^1, M_t^1 = CausalLSTM_1( X_t, H_{t-1}^1, C_{t-1}^1, M_{t-1}^L )
Z_t = GHU( H_t^1, Z_{t-1} )
H_t^2, C_t^2, M_t^2 = CausalLSTM_2( Z_t, H_{t-1}^2, C_{t-1}^2, M_t^1 )
H_t^k, C_t^k, M_t^k = CausalLSTM_k( H_t^{k-1}, H_{t-1}^k, C_{t-1}^k, M_t^{k-1} )
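The wiring of one time step, with the GHU sitting between layers 1 and 2 and the spatial memory zigzagging upward, can be sketched as follows. The cells here are toy scalar stand-ins (not the real causal LSTM or GHU), so the focus is the inter-layer data flow rather than the cell internals; all parameter names are illustrative.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def toy_cell(x, h, c, m, w):
    """Toy stand-in for one causal LSTM layer (scalar states)."""
    c_new = sigmoid(w) * c + math.tanh(x + h)
    m_new = math.tanh(c_new + m)
    h_new = math.tanh(c_new) * math.tanh(m_new)
    return h_new, c_new, m_new

def toy_ghu(x, z, w):
    """Toy stand-in for the gradient highway unit."""
    s = sigmoid(w * (x + z))
    return s * math.tanh(x) + (1.0 - s) * z

def predrnn_pp_step(x, hs, cs, m_top_prev, z_prev, cell_ws, ghu_w):
    """One time step of an L-layer stack with the GHU after layer 1.

    hs, cs: per-layer hidden / temporal-memory states from t-1.
    m_top_prev: topmost spatial memory of t-1, re-entering layer 1
    (the zigzag deep-transition path).
    """
    m = m_top_prev
    new_hs, new_cs = [], []
    # layer 1 reads the raw frame
    h, c, m = toy_cell(x, hs[0], cs[0], m, cell_ws[0])
    new_hs.append(h); new_cs.append(c)
    # gradient highway sits right above layer 1
    z = toy_ghu(h, z_prev, ghu_w)
    feed = z
    for k in range(1, len(hs)):
        h, c, m = toy_cell(feed, hs[k], cs[k], m, cell_ws[k])
        new_hs.append(h); new_cs.append(c)
        feed = h
    # the topmost hidden state is decoded into the next frame
    return new_hs, new_cs, m, z
```

Note how the highway state z is threaded across time steps independently of the per-layer memories: it only mixes with the deep-transition features through the switch gate inside the GHU.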
In this architecture, the gradient highway works seamlessly with the causal LSTMs to separately capture long-term and short-term video dependencies. With quickly updated hidden states Z_t, the gradient highway provides an alternative quick route from the very first to the last time step (the blue line in Figure 3). But unlike temporal skip connections, it controls the proportions of P_t and the deep transition features through the switch gate S_t, enabling an adaptive learning of the long-term and the short-term frame relations.

Figure 3: Final architecture (top) with the gradient highway unit (bottom), where concentric circles denote concatenation and σ is the element-wise sigmoid function. Blue parts indicate the gradient highway connecting the current time step directly with previous inputs, while the red parts show the deep transition pathway.

We also explore other architecture variants that inject the GHU into a different hidden layer slot, for example, between the 2nd and the 3rd causal LSTMs. Experimental comparisons are given in Section 5. The network discussed above outperforms the others, indicating the importance of modeling the characteristics of the raw inputs rather than the abstracted representations at higher layers.

As for network details, we observe that the numbers of hidden state channels, especially those in the lower layers, have a strong impact on the final prediction performance. We thus propose a 5-layer architecture, in pursuit of high prediction quality with reasonable training time and memory usage, consisting of 4 causal LSTMs with 128, 64, 64, and 64 channels respectively, as well as a 128-channel gradient highway unit on top of the bottom causal LSTM layer. We also set the convolution filter size to 5 × 5 inside all recurrent units.

5 Experiments

Model                                        MNIST-2, 10 steps    MNIST-2, 30 steps    MNIST-3, 10 steps
                                             SSIM     MSE         SSIM     MSE         SSIM     MSE
FC-LSTM (Srivastava et al., 2015a)           0.690    118.3       0.583    180.1       0.651    162.4
ConvLSTM (Shi et al., 2015)                  0.707    103.3       0.597    156.2       0.673    142.1
TrajGRU (Shi et al., 2017)                   0.713    106.9       0.588    163.0       0.682    134.0
CDNA (Finn et al., 2016)                     0.721    97.4        0.609    142.3       0.669    138.2
DFN (De Brabandere et al., 2016)             0.726    89.0        0.601    149.5       0.679    140.5
VPN* (Kalchbrenner et al., 2017)             0.870    64.1        0.620    129.6       0.734    112.3
PredRNN (Wang et al., 2017)                  0.867    56.8        0.645    112.2       0.782    93.4
Causal LSTM                                  0.882    52.5        0.685    100.7       0.795    89.2
Causal LSTM (spatial-to-temporal variant)    0.875    54.0        0.672    103.6       0.784    91.8
PredRNN + GHU                                0.886    50.7        0.713    98.4        0.790    88.9
Causal LSTM + GHU (Final)                    0.898    46.5        0.733    91.1        0.814    81.7
Table 1: Results of PredRNN++ compared with other models. We report per-frame SSIM and MSE of the generated sequences; higher SSIM or lower MSE denotes higher prediction quality. All models are trained on MNIST-2; the MNIST-3 columns measure transfer to a 3-digit test set. (*) indicates models that are not open source and were reproduced by us or others.

To measure the performance of our approach, we use two video prediction datasets in this paper: a synthetic dataset with moving digits and a real video dataset with human actions. Code and results on more datasets are available online.

We train all compared models using TensorFlow (Abadi et al., 2016) and optimize them to convergence using ADAM (Kingma & Ba, 2015) with a starting learning rate of 10^-3. Besides, we apply the scheduled sampling strategy (Bengio et al., 2015) to all of the models to bridge the discrepancy between training and inference. As for the objective function, we use the L1 + L2 loss to simultaneously enhance the sharpness and the smoothness of the generated frames.
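A minimal sketch of the two training ingredients above, operating on flat lists of pixel values; the decay rate in the scheduled-sampling helper is an illustrative assumption, not the paper's schedule.

```python
import random

def l1_l2_loss(pred, target):
    """Combined L1 + L2 objective: the L2 term smooths the prediction
    while the L1 term helps preserve sharpness."""
    l1 = sum(abs(p - t) for p, t in zip(pred, target))
    l2 = sum((p - t) ** 2 for p, t in zip(pred, target))
    return l1 + l2

def scheduled_input(true_frame, pred_frame, step, decay=1e-5):
    """Scheduled sampling: feed the ground-truth frame with probability
    eps, which decays over training, otherwise feed the model's own
    previous prediction (decay rate is illustrative)."""
    eps = max(0.0, 1.0 - step * decay)
    return true_frame if random.random() < eps else pred_frame
```

Early in training the model always sees ground-truth context; as eps decays it is increasingly forced to condition on its own outputs, matching the inference-time setting.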

5.1 Moving MNIST Dataset


We first follow the typical setup on the Moving MNIST dataset, predicting 10 future frames given 10 previous frames. Then we extend the prediction horizon from 10 to 30 time steps to explore the capability of the compared models in making long-range predictions. Each frame contains 2 handwritten digits bouncing inside a 64 × 64 grid. To ensure the trained model has never seen the test digits, we sample digits from disjoint parts of the original MNIST dataset to construct our training set and test set. The dataset volume is fixed, with 10,000 sequences for the training set, 3,000 sequences for the validation set, and 5,000 sequences for the test set. In order to measure generalization and transfer ability, we also evaluate all models trained with 2 moving digits on another 3-digit test set.
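For illustration, a Moving-MNIST-style trajectory can be generated by bouncing a digit patch inside the frame; this sketch tracks only the top-left corner of one digit, and the speed and seed parameters are assumptions rather than the dataset's exact settings.

```python
import random

def bounce_trajectory(steps, frame=64, digit=28, speed=3, seed=0):
    """Illustrative Moving-MNIST-style generator: the top-left corner
    of a 28x28 digit patch bounces inside a 64x64 frame, reversing
    its velocity at the borders."""
    rng = random.Random(seed)
    lim = frame - digit
    x, y = rng.randint(0, lim), rng.randint(0, lim)
    dx = rng.choice([-speed, speed])
    dy = rng.choice([-speed, speed])
    coords = []
    for _ in range(steps):
        coords.append((x, y))
        x, y = x + dx, y + dy
        if x < 0 or x > lim:
            dx = -dx
            x = max(0, min(x, lim))
        if y < 0 or y > lim:
            dy = -dy
            y = max(0, min(y, lim))
    return coords
```

With two independent trajectories, the digits periodically overlap, which is exactly the entanglement scenario discussed in the gradient analysis.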

Figure 4: Two prediction examples respectively with entangled digits in the input or output frames on Moving MNIST-2 test set.
(a) MNIST-2
(b) MNIST-3
Figure 5: Frame-wise MSE over the test sets. Lower curves denote higher prediction quality. All models are trained on MNIST-2.


To evaluate the performance of our model, we measure the per-frame structural similarity index measure (SSIM) (Wang et al., 2004) and the mean square error (MSE). SSIM ranges between -1 and 1, and a larger score indicates a greater similarity between the generated image and the ground truth image. Table 1 compares the state-of-the-art models using these metrics. In particular, we include the baseline version of the VPN model (Kalchbrenner et al., 2017) that generates each frame in one pass. Our model outperforms the others for predicting the next 10 frames. In order to approach its temporal limit for high-quality predictions, we extend the predicting time horizon from 10 to 30 frames. Even though our model still performs the best in this scenario, it begins to generate increasingly more blurry images due to the inherent uncertainty of the future. Hereafter, we only discuss the 10-frame experimental settings.

Figure 5 illustrates the frame-wise MSE results, and lower curves denote higher prediction accuracy. For all models, the quality of the generated images degrades over time. Our model yields a smaller degradation rate, indicating its capability to overcome the long-term information loss and learn skip-frame video relations with the gradient highway.

In Figure 4, we show examples of the predicted frames. With causal memories, our model makes the most accurate predictions of digit trajectories. We also observe that the most challenging task in future prediction is maintaining the shape of the digits after an occlusion happens, a scenario that requires the model to learn from distant previous contexts. For example, in the first case in Figure 4, two digits entangle with each other at the beginning of the target future sequence. Most prior models fail to preserve the correct shape of the digit “8”, since their outcomes mostly depend on high-level representations at nearby time steps rather than on the distant previous inputs (please see the gradient analysis below). A similar situation occurs in the second example: all compared models present various but incorrect shapes of the digit “2” in the predicted frames, while PredRNN++ maintains its appearance. It is the gradient highway architecture that enables our approach to learn more disentangled representations and predict both correct shapes and trajectories of moving objects.

Ablation Study

As shown in Table 1, it is beneficial to use causal LSTMs in place of ST-LSTMs, improving the SSIM score of PredRNN from 0.867 to 0.882. This proves the superiority of the cascaded structure over simple concatenation for connecting the spatial and temporal memories. As a control experiment, we swap the positions of the spatial and temporal memories in the causal LSTMs. This structure (the spatial-to-temporal variant) outperforms the original ST-LSTMs, with SSIM increased from 0.867 to 0.875, but yields a lower accuracy than the standard causal LSTMs.

Table 1 also indicates that the gradient highway unit (GHU) cooperates well with both ST-LSTMs and causal LSTMs, consistently boosting the performance of deep transition recurrent models. In Table 2, we discuss network variants that inject the GHU into different slots between the causal LSTMs. It turns out that setting this unit right above the bottom causal LSTM performs best. In this way, the GHU can weigh the importance of three information streams: the long-term features in the highway, the short-term features in the deep transition path, and the spatial features extracted from the current input frame.

Location               Slot    SSIM     MSE
Bottom (PredRNN++)     1, 2    0.898    46.5
Middle                 2, 3    0.894    48.1
Top                    3, 4    0.885    52.0
Table 2: Ablation study: injecting the GHU into a 4-layer causal LSTM network. The slot of the GHU is given by the indexes (k, k+1) of the causal LSTMs it is connected with.

Gradient Analysis

We observe that the moving digits are frequently entangled, in a manner similar to real-world occlusions. Once digits get tangled up, it becomes difficult to separate them apart in future predictions while maintaining their original shapes. This is probably caused by the vanishing gradient problem, which prevents deep-in-time networks from capturing long-term frame relations. We evaluate the gradients of these models in Figure 7(a), which shows the norm of the gradient of the last time-step loss function with respect to each input frame. Unlike the other models, whose gradient curves decay steeply back in time, indicating a severe vanishing gradient problem, our model has a unique bowl-shaped curve, which shows that it manages to ease vanishing gradients. We also observe that this bowl-shaped curve is in accordance with the occlusion frequencies over time shown in Figure 7(b), which demonstrates that the proposed model manages to capture the long-term dependencies.

(a) Deep Transition ConvLSTMs
(b) PredRNN
(c) PredRNN++
Figure 6: The gradient norm of the loss function at the last time step, L_T, with respect to intermediate activities in the encoder, including the hidden states, the temporal memory states, and the spatial memory states: ‖∂L_T/∂H_t^k‖, ‖∂L_T/∂C_t^k‖, ‖∂L_T/∂M_t^k‖.
(b) Occlusion Frequency
Figure 7: Gradient analysis: (a) The gradient norm of the loss function at the last time step with respect to each input frame, averaged over the test set. (b) The frequency of digit entanglement at each input frame across the test set sequences.

Figure 6 analyzes by what means our approach eases the vanishing gradient problem, illustrating the norms of the derivatives of the last time-step loss function with respect to the intermediate hidden states and memory states: ‖∂L_T/∂H_t^k‖, ‖∂L_T/∂C_t^k‖, and ‖∂L_T/∂M_t^k‖. The vanishing gradient problem causes the gradients to decrease from the top layer down to the bottom layer. For simplicity, we analyze recurrent models of the same depth. In Figure 6(a), the gradient of H_t^k vanishes rapidly back in time, indicating that previous true frames have negligible influence on the last frame prediction. With the spatiotemporal memory connections M_t^k, the PredRNN model in Figure 6(b) provides the gradient a shorter pathway from previous bottom states to the top. As the curve of ‖∂L_T/∂M_t^k‖ rises back in time, it emphasizes the representations of the more correlated hidden states. In Figure 6(c), the gradient highway states Z_t hold the largest derivatives, while ‖∂L_T/∂C_t^k‖ decays steeply back in time, indicating that the gradient highway stores the long-term dependencies and allows the causal LSTMs to concentrate on short-term frame relations. By this means, PredRNN++ disentangles video representations at different time scales with different network components, leading to more accurate predictions.
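The quantity plotted in Figure 7(a) can be probed numerically on a toy recurrence; this sketch uses a plain deep-transition chain (not the paper's model) with an assumed weight, and estimates the per-frame gradient by finite differences.

```python
import math

def rollout(frames, w=0.5):
    """Toy deep-transition recurrence: h <- tanh(w*h + x_t)."""
    h = 0.0
    for x in frames:
        h = math.tanh(w * h + x)
    return h

def grad_wrt_frame(frames, t, eps=1e-6):
    """Numeric estimate of d(last state)/d(frame t): the per-frame
    gradient magnitude plotted in Figure 7(a), here for the toy chain."""
    bumped = list(frames)
    bumped[t] += eps
    return (rollout(bumped, 0.5) - rollout(frames, 0.5)) / eps
```

On this chain each backward step multiplies the gradient by at most w · sech², so earlier frames receive exponentially smaller gradients, reproducing the steep decay of plain deep-transition models; a highway state whose Jacobian stays near 1 avoids this shrinkage.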

5.2 KTH Action Dataset

The KTH action dataset (Schuldt et al., 2004) contains 6 types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) in different scenarios: indoors and outdoors, with scale variations and different clothes. Each video clip has an average length of four seconds and was taken with a static camera at 25 fps.


The experimental setup is adopted from Villegas et al. (2017a): the video clips are divided into a training set of 108,717 and a test set of 4,086 sequences. We resize each frame to a resolution of 128 × 128 pixels. We train all of the compared models by giving them 10 frames and making them generate the subsequent 10 frames, with the mini-batch size and the number of training iterations kept the same for all models. At test time, we extend the prediction horizon to 20 future time steps.


Although few occlusions exist, due to the monotonous actions and plain backgrounds, accurately predicting a longer video sequence is still difficult for previous methods, probably as a result of the vanishing gradient problem. The key to this problem is to capture long-term frame relations; in this dataset, that means learning the human movements that repeat over the long term, such as the swinging arms and legs of a walking actor (Figure 9).

We use the quantitative metrics PSNR (peak signal-to-noise ratio) and SSIM to evaluate the predicted video frames. PSNR emphasizes the foreground appearance, and a higher score indicates a greater similarity between two images. Empirically, we find that these two metrics are complementary in some respects: PSNR is more concerned with pixel-level correctness, while SSIM is also sensitive to differences in image sharpness. In general, both of them need to be taken into account to assess a predictive model. Table 3 evaluates the overall prediction quality. For each sequence, the metric values are averaged over the 20 generated frames. Figure 8 provides a more specific frame-wise comparison. Our approach performs consistently better than the state of the art at every future time step on both PSNR and SSIM. These results are in accordance with the qualitative examples in Figure 9, which indicate that our model makes relatively accurate predictions of the human moving trajectories and generates less blurry video frames.
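PSNR has a simple closed form; the following sketch computes it over flat lists of pixel values (8-bit range assumed).

```python
import math

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio; higher means the prediction is
    closer, pixel-wise, to the ground truth."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

SSIM, in contrast, compares local luminance, contrast, and structure statistics between windows of the two images, which is why the two metrics respond differently to blur.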

Model                               PSNR     SSIM
ConvLSTM (Shi et al., 2015)         23.58    0.712
TrajGRU (Shi et al., 2017)          26.97    0.790
DFN (De Brabandere et al., 2016)    27.26    0.794
MCnet (Villegas et al., 2017a)      25.95    0.804
PredRNN (Wang et al., 2017)         27.55    0.839
PredRNN++                           28.47    0.865
Table 3: A quantitative evaluation of different methods on the KTH human action test set. The metrics are averaged over the 20 predicted frames. A higher score denotes better prediction quality.

We also notice in Figure 8 that all metric curves degrade quickly over the first 10 time steps of the output sequence, but the metric curves of our model decline most slowly from the 11th to the 20th time step, indicating its strong ability to capture long-term video dependencies. This is an important characteristic of our approach, since it significantly reduces the uncertainty of future predictions. A model that is deep-in-time but lacks a gradient highway would fail to remember the repeated human actions, leading to incorrect inferences about future moving trajectories. In general, this “amnesia” effect results in diverse future possibilities, eventually making the generated images blurry. Our model makes future predictions more deterministic.

(a) Frame-wise PSNR
(b) Frame-wise SSIM
Figure 8: Frame-wise PSNR and SSIM comparisons of different models on the KTH test set. Higher curves denote better results.

Figure 9: KTH prediction examples. We predict 20 frames into the future by observing 10 frames. Frames are shown at an interval of three frames. It is worth noting that these two sequences were also presented in (Villegas et al., 2017a).

6 Conclusions

In this paper, we presented a predictive recurrent network named PredRNN++, towards resolving the spatiotemporal predictive learning dilemma between deep-in-time structures and vanishing gradients. To strengthen its power for modeling short-term dynamics, we designed the causal LSTM with a cascaded dual memory structure. To alleviate the vanishing gradient problem, we proposed the gradient highway unit, which provides the gradients with quick routes from future predictions back to distant previous inputs. By evaluating PredRNN++ on a synthetic moving digits dataset with frequent object occlusions and a real video dataset with periodic human actions, we demonstrated that it is able to learn long-term and short-term dependencies adaptively and obtain state-of-the-art prediction results.

7 Acknowledgements

This work was supported by the National Key R&D Program of China (2017YFC1502003), by NSFC grants 61772299, 61672313, and 71690231, and by NSF grants IIS-1526499, IIS-1763325, and CNS-1626432.


  • Abadi et al. (2016) Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
  • Babaeizadeh et al. (2018) Babaeizadeh, M., Finn, C., Erhan, D., Campbell, R. H., and Levine, S. Stochastic variational video prediction. In ICLR, 2018.
  • Bengio et al. (2015) Bengio, S., Vinyals, O., Jaitly, N., and Shazeer, N. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 1171–1179, 2015.
  • Bengio et al. (1994) Bengio, Y., Simard, P., and Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
  • Bhattacharjee & Das (2017) Bhattacharjee, P. and Das, S. Temporal coherency based criteria for predicting video frames using deep multi-stage generative adversarial networks. In Advances in Neural Information Processing Systems, pp. 4271–4280, 2017.
  • Bianchini & Scarselli (2014) Bianchini, M. and Scarselli, F. On the complexity of neural network classifiers: A comparison between shallow and deep architectures. IEEE Transactions on Neural Networks and Learning Systems, 25(8):1553–1565, 2014.
  • De Brabandere et al. (2016) De Brabandere, B., Jia, X., Tuytelaars, T., and Van Gool, L. Dynamic filter networks. In NIPS, 2016.
  • Denton & Fergus (2018) Denton, E. and Fergus, R. Stochastic video generation with a learned prior. arXiv preprint arXiv:1802.07687, 2018.
  • Denton et al. (2015) Denton, E. L., Chintala, S., Fergus, R., et al. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, pp. 1486–1494, 2015.
  • Denton et al. (2017) Denton, E. L. et al. Unsupervised learning of disentangled representations from video. In Advances in Neural Information Processing Systems, pp. 4417–4426, 2017.
  • Finn et al. (2016) Finn, C., Goodfellow, I., and Levine, S. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.
  • Goodfellow et al. (2014) Goodfellow, I. J., Pougetabadie, J., Mirza, M., Xu, B., Wardefarley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial networks. NIPS, 3:2672–2680, 2014.
  • Hochreiter & Schmidhuber (1997) Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
  • Kalchbrenner et al. (2017) Kalchbrenner, N., Oord, A. v. d., Simonyan, K., Danihelka, I., Vinyals, O., Graves, A., and Kavukcuoglu, K. Video pixel networks. In ICML, 2017.
  • Kingma & Ba (2015) Kingma, D. and Ba, J. Adam: A method for stochastic optimization. In ICLR, 2015.
  • Lerer et al. (2016) Lerer, A., Gross, S., and Fergus, R. Learning physical intuition of block towers by example. In ICML, 2016.
  • Lotter et al. (2017) Lotter, W., Kreiman, G., and Cox, D. Deep predictive coding networks for video prediction and unsupervised learning. In International Conference on Learning Representations (ICLR), 2017.
  • Lu et al. (2017) Lu, C., Hirsch, M., and Schölkopf, B. Flexible spatio-temporal networks for video prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6523–6531, 2017.
  • Mathieu et al. (2016) Mathieu, M., Couprie, C., and LeCun, Y. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016.
  • Oh et al. (2015) Oh, J., Guo, X., Lee, H., Lewis, R. L., and Singh, S. Action-conditional video prediction using deep networks in atari games. In NIPS, pp. 2863–2871, 2015.
  • Pascanu et al. (2013) Pascanu, R., Mikolov, T., and Bengio, Y. On the difficulty of training recurrent neural networks. ICML, 28:1310–1318, 2013.
  • Patraucean et al. (2016) Patraucean, V., Handa, A., and Cipolla, R. Spatio-temporal video autoencoder with differentiable memory. In ICLR Workshop, 2016.
  • Ranzato et al. (2014) Ranzato, M., Szlam, A., Bruna, J., Mathieu, M., Collobert, R., and Chopra, S. Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604, 2014.
  • Rumelhart et al. (1988) Rumelhart, D. E., Hinton, G. E., and Williams, R. J. Learning representations by back-propagating errors. Cognitive modeling, 5(3):1, 1988.
  • Schuldt et al. (2004) Schuldt, C., Laptev, I., and Caputo, B. Recognizing human actions: a local svm approach. In International Conference on Pattern Recognition, pp. 32–36 Vol.3, 2004.
  • Shi et al. (2015) Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., and Woo, W.-c. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In NIPS, pp. 802–810, 2015.
  • Shi et al. (2017) Shi, X., Gao, Z., Lausen, L., Wang, H., Yeung, D.-Y., Wong, W.-k., and Woo, W.-c. Deep learning for precipitation nowcasting: A benchmark and a new model. In Advances in Neural Information Processing Systems, 2017.
  • Srivastava et al. (2015a) Srivastava, N., Mansimov, E., and Salakhutdinov, R. Unsupervised learning of video representations using lstms. In ICML, 2015a.
  • Srivastava et al. (2015b) Srivastava, R. K., Greff, K., and Schmidhuber, J. Training very deep networks. In Advances in neural information processing systems, pp. 2377–2385, 2015b.
  • Tulyakov et al. (2018) Tulyakov, S., Liu, M.-Y., Yang, X., and Kautz, J. Mocogan: Decomposing motion and content for video generation. In CVPR, 2018.
  • van den Oord et al. (2016) van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A., et al. Conditional image generation with pixelcnn decoders. In NIPS, pp. 4790–4798, 2016.
  • Villegas et al. (2017a) Villegas, R., Yang, J., Hong, S., Lin, X., and Lee, H. Decomposing motion and content for natural video sequence prediction. In International Conference on Learning Representations (ICLR), 2017a.
  • Villegas et al. (2017b) Villegas, R., Yang, J., Zou, Y., Sohn, S., Lin, X., and Lee, H. Learning to generate long-term future via hierarchical prediction. arXiv preprint arXiv:1704.05831, 2017b.
  • Vondrick et al. (2016) Vondrick, C., Pirsiavash, H., and Torralba, A. Generating videos with scene dynamics. In Advances In Neural Information Processing Systems, pp. 613–621, 2016.
  • Wang et al. (2017) Wang, Y., Long, M., Wang, J., Gao, Z., and Philip, S. Y. Predrnn: Recurrent neural networks for predictive learning using spatiotemporal lstms. In Advances in Neural Information Processing Systems, pp. 879–888, 2017.
  • Wang et al. (2004) Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. Image quality assessment: from error visibility to structural similarity. TIP, 13(4):600, 2004.
  • Werbos (1990) Werbos, P. J. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560, 1990.
  • Williams & Zipser (1995) Williams, R. J. and Zipser, D. Gradient-based learning algorithms for recurrent networks and their computational complexity. Backpropagation: Theory, architectures, and applications, 1:433–486, 1995.
  • Xu et al. (2018) Xu, Z., Wang, Y., Long, M., and Wang, J. Predcnn: Predictive learning with cascade convolutions. In IJCAI, 2018.
  • Zhang et al. (2017) Zhang, J., Zheng, Y., and Qi, D. Deep spatio-temporal residual networks for citywide crowd flows prediction. In AAAI, pp. 1655–1661, 2017.
  • Zilly et al. (2017) Zilly, J. G., Srivastava, R. K., Koutník, J., and Schmidhuber, J. Recurrent highway networks. In ICML, 2017.