Humans are remarkably capable of predicting object motions and scene dynamics over short timescales. But why do humans excel at this task? In general, we rely on extensive knowledge of the real world, accumulated through past visual experience of rich objects and their interactions across many scenes, to anticipate the future. We can apply this accumulated knowledge when predicting a new scene over a short period, or even somewhat longer. This predictive ability is also important for intelligent agents and autonomous systems, because it provides a valuable auxiliary signal for path planning and decision making. To accomplish the prediction task, a model must understand both the content of a scene and how that scene may change. This is particularly challenging for autonomous vehicles, because the diverse objects and physical dynamics of visual scenes are difficult to describe.
We are particularly inspired by recent work that predicts the motion of humans and robot manipulators in an end-to-end framework. In that work, the background is fixed and the interactions between objects are simple; moreover, a limited number of explicit states and motions cannot capture complex urban driving scenes. Therefore, in this paper we introduce optical flow into our framework, because it implicitly encodes valuable motion information for every pixel. We propose a novel predictive model that exploits the spatial-temporal appearance information of previous frames together with inter-frame optical flow, as shown in Figure 1. Our model has two independent input streams: one for the previous RGB frames and the other for the optical flow estimates. We adopt an end-to-end framework that allows the model to map directly from input pixels to the prediction of the next frame or beyond. Our model can be trained simply on sequences of RGB images and optical flow maps, without manually labeled data or camera ego-motion information.
Our approach builds on the insight that the appearance of images can be merged with the motion of pixels extracted from optical flow. The prediction model includes three core modules. The first, which we refer to as the optical flow prediction networks (OFPN), takes 4 successive optical flow maps as input and outputs a predicted optical flow map. All 5 optical flow maps then enter the second module, the motion estimation networks (MEN), which is responsible for motion estimation and outputs dense transformation maps T. The last module, which we call the spatial-temporal prediction networks (STPN), outputs the next frame or a longer sequence of predicted frames. In prior work, an action-conditioned prediction model was developed that explicitly models pixel motion. While a limited number of explicit motions can produce reasonable video predictions for certain types of scenes (e.g., those with static backgrounds), the same model fails when presented with scenes of more diverse appearance and more complex interactions between objects. We therefore use optical flow motion estimation as supplementary information for the video prediction networks. Experiments demonstrate that our method significantly outperforms prior methods across diverse and complex visual scenes.
The main contributions of our paper are summarized as follows:
We propose a novel video prediction model that uses the previous frames together with the optical flow maps between neighboring frames.
Our video prediction model achieves state-of-the-art performance on the KITTI dataset and the Cityscapes dataset.
Our model is capable of predicting both natural images and semantic segmentations.
Our model can learn jointly from the optical flow prediction loss and the frame prediction loss.
The rest of this paper is organized as follows. Section 2 reviews related work on video prediction. Section 3 introduces our framework for video prediction using optical flow. Section 4 reports and analyzes experiments on video prediction with optical flow maps. The last section concludes the paper.
II Related work
In this section, we survey the most closely related work, with a particular focus on video prediction using deep learning methods.
II-A Video prediction
Scenes of video prediction: Motivated by the great success of deep learning in machine vision (e.g., image classification and object recognition), several models based on deep learning have recently been proposed to tackle video prediction in different kinds of scenes. Early work focused on low-resolution video clips or images containing simple, predictable content without any background, such as moving digits [3, 4] and Atari game frames. Higher-resolution natural scenes with static backgrounds are more complicated but promising: prior models predict the actions, poses, or paths of humans, or the motion of robot arms. These training images generally have static backgrounds, and their visual content is not very complex. Some real-world videos contain not only moving targets but also dynamic backgrounds, as in urban traffic scenes; existing approaches either predict realistic-looking frames or predict only future semantic segmentations rather than natural frames. In contrast to these works, our model can predict complicated real-world scenes (e.g., urban scenes), and we also verify it on semantic segmentation prediction.
Methods of video prediction: There have been a number of promising approaches to video prediction. Early work introduced a generative model that used a recurrent neural network (RNN) to predict the next frame, and adapted an LSTM model for video prediction. Later work achieved sharper predictions by using a multi-scale architecture and an adversarial loss function. Rather than simply transforming pixels from previous frames, our method warps optical flow features containing rich motion information over the RGB images. Other approaches presented an unsupervised representation learning method to predict long-term 3D motions, proposed a generative adversarial network (GAN) for video prediction with a convolutional architecture that untangles the foreground of scenes from the background, and developed an action-conditioned video prediction model by predicting a distribution over pixels. In a similar spirit, a convolutional neural network (CNN) was introduced for learning the dynamics of a physical system from raw visual observations, highlighting the importance of modeling the system's dynamics. In contrast to these deterministic methods, a probabilistic approach to frame prediction has also been proposed to address uncertainty.
III Video prediction using optical flow
In this section, we introduce our framework for video prediction, as shown in Figure 2. Our ultimate goal is to make accurate predictions of future frames in complex, real-world scenes. Given the RGB frames observed at previous time steps and the corresponding optical flow maps extracted from neighboring frames, our model exploits this history of frames to predict future frames. In addition, our model can jointly train optical flow prediction and video prediction.
The core modules of our model are the optical flow prediction networks (OFPN), the motion estimation networks (MEN), and the spatial-temporal prediction networks (STPN). Formally, the model operates on the input RGB frames of the video sequence, the corresponding optical flow maps, and the sequence of predicted frames.
III-A Optical flow prediction networks (OFPN)
Optical flow is a vector field of the same size as the RGB frames, with a 2D flow vector per pixel. It represents the apparent displacement of each pixel in the horizontal and vertical directions due to the relative motion between consecutive frames; see Figure 3(b). Acquiring an optical flow field requires precise per-pixel localization, as well as finding correspondences between a pair of input RGB frames, as shown in Figure 3(a).
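Since optical flow is later used to warp RGB frames toward the next time step, a minimal sketch may help make the flow-field notion concrete. The function below is our own illustration (hypothetical names, nearest-neighbour sampling), not the paper's implementation, which would typically use bilinear sampling:

```python
import numpy as np

def warp_with_flow(frame, flow):
    """Warp `frame` (H, W) or (H, W, C) using a dense optical-flow field
    `flow` (H, W, 2) of (dx, dy) pixel displacements. Each output pixel
    samples the source frame at its flow-displaced location; nearest-
    neighbour sampling keeps this sketch short."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Flow-displaced sampling coordinates, clamped to the image bounds.
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]
```

With a zero flow field the warp is the identity; a uniform flow of one pixel in x shifts the image content accordingly.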
Since we want to predict the next RGB frame, the last optical flow map, which would be computed between the predicted frame and the frame before it, is not available. We therefore use the OFPN to predict this future optical flow. The OFPN is trained on an auxiliary task: predicting the next optical flow map from the previous optical flow maps, which yields the predicted flow map. The objective for optical flow prediction is a loss between the predicted and ground-truth optical flow fields. Minimizing this auxiliary loss produces the predicted optical flow field, which, together with the previous optical flow maps, is applied as input to the motion estimation networks.
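As an illustration of the OFPN's auxiliary task, the sketch below pairs a naive constant-velocity flow extrapolator with an L2-style flow prediction loss. Both the baseline and the exact loss form are our assumptions, since the paper's equation is not reproduced here; in the actual model, a trained network replaces the extrapolator:

```python
import numpy as np

def extrapolate_flow(flows):
    """Constant-velocity baseline: linearly extrapolate the next flow map
    from the last two observed maps (an illustrative stand-in for the
    learned OFPN)."""
    return 2.0 * flows[-1] - flows[-2]

def flow_prediction_loss(pred_flow, true_flow):
    """Mean squared endpoint error between predicted and ground-truth
    flow fields of shape (H, W, 2)."""
    return float(np.mean(np.sum((pred_flow - true_flow) ** 2, axis=-1)))
```

If the flow grows linearly over time, the constant-velocity baseline is exact and its loss against the true next flow is zero.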
III-B Motion estimation networks (MEN)
The motion estimation networks (MEN) produce a transformation feature map T, representing the dense motion information of pixels (i.e., the rotation and displacement of every pixel). The optical flow implicitly encodes this object motion. Unlike prior work, our approach uses optical flow to represent the motion field rather than a limited number of states and actions.
The MEN uses 3D convolutional layers to generate the motion transformation for each pixel.
Note that the inputs to the MEN are the predicted optical flow map from the OFPN and the previous optical flow maps, and the output is a transformation map that encodes the relative displacement and rotation between consecutive frames. The transformation maps are then used as input to the spatial-temporal prediction networks (STPN), warping the corresponding RGB frames to predict the future frame.
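To show how a 3D convolution mixes the stacked flow maps across time as well as space, here is a minimal "valid" 3D convolution in numpy. It is loop-based for clarity; in practice a library layer such as a Conv3D would be used, and like deep-learning libraries it actually computes cross-correlation:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Minimal 'valid'-mode 3D convolution. `volume` (T, H, W) stacks the
    flow maps along the temporal axis; `kernel` (t, h, w) slides over all
    three axes, so each output mixes information across time and space."""
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i + t, j:j + h, k:k + w] * kernel)
    return out
```

For 5 stacked maps and a temporal kernel extent of 3, the output has 3 temporal positions, each summarizing a window of consecutive flow maps.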
III-C Spatial-temporal prediction networks (STPN)
The STPN is a core part of the proposed model and generates the predicted frame. To produce more accurate predictions, we employ stacked convolutional LSTM (ConvLSTM) layers in the STPN. Recurrence through convolutions is well suited to multi-step video prediction because it exploits the spatial invariance of images as well as the temporal dependencies between neighboring frames.
More formally, the transformation applied to the current frame produces the next predicted frame, operating on the homogeneous coordinates of pixels in the current image and mapping them to coordinates in the next image.
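As a hedged illustration of transforming homogeneous pixel coordinates, the sketch below applies a hypothetical per-pixel 2x3 affine map to a frame; the 2x3 parameterisation and the names are our assumptions, with the paper's transformation maps T playing the analogous role:

```python
import numpy as np

def apply_pixel_transforms(frame, theta):
    """Apply a per-pixel 2x3 affine transform `theta` (H, W, 2, 3) to the
    homogeneous pixel coordinates (x, y, 1) of `frame` (H, W), then sample
    the source frame by nearest neighbour."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous coordinates (x, y, 1) for every pixel.
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)
    # Per-pixel matrix-vector product: (2, 3) @ (3,) -> (x', y').
    new = np.einsum('hwij,hwj->hwi', theta, coords)
    sx = np.clip(np.round(new[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(new[..., 1]).astype(int), 0, h - 1)
    return frame[sy, sx]
```

Setting every per-pixel transform to the identity reproduces the input frame; adding a constant translation column shifts the sampled content.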
Training the STPN comes down to minimizing the prediction error between the predicted frame and the ground-truth frame. Inspired by the multi-scale prediction loss of Mathieu et al., the STPN combines an intensity loss with a gradient difference loss (GDL), defined as

$$\mathcal{L}_{gdl}(Y, \hat{Y}) = \sum_{i,j} \left| |Y_{i,j} - Y_{i-1,j}| - |\hat{Y}_{i,j} - \hat{Y}_{i-1,j}| \right|^{\alpha} + \left| |Y_{i,j} - Y_{i,j-1}| - |\hat{Y}_{i,j} - \hat{Y}_{i,j-1}| \right|^{\alpha},$$

where $Y$ denotes the ground-truth frame, $\hat{Y}$ the predicted frame, $|\cdot|$ the absolute value function, and $\alpha \geq 1$.
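The gradient difference loss described above can be sketched in a few lines of numpy. Note that a constant intensity shift leaves it at zero, since it only compares spatial gradients; that is why it is paired with an intensity loss:

```python
import numpy as np

def gradient_difference_loss(y_true, y_pred, alpha=1):
    """Gradient difference loss in the spirit of Mathieu et al.: penalise
    mismatches between the spatial gradient magnitudes of the predicted
    and ground-truth frames, discouraging blurry predictions."""
    gy_t = np.abs(np.diff(y_true, axis=0))  # vertical gradients
    gy_p = np.abs(np.diff(y_pred, axis=0))
    gx_t = np.abs(np.diff(y_true, axis=1))  # horizontal gradients
    gx_p = np.abs(np.diff(y_pred, axis=1))
    return float(np.sum(np.abs(gy_t - gy_p) ** alpha)
                 + np.sum(np.abs(gx_t - gx_p) ** alpha))
```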
Our final objective combines the two terms,

$$\mathcal{L} = \lambda_{flow}\,\mathcal{L}_{flow} + \lambda_{pred}\,\mathcal{L}_{pred},$$

where $\lambda_{flow}$ and $\lambda_{pred}$ weight the optical flow prediction loss and the frame prediction loss, respectively.
III-D Network architecture
For the optical flow prediction networks (OFPN), we use 3 convolutional layers, each followed by batch normalization (BN) and a ReLU activation. In the motion estimation networks (MEN), 4 3D-convolutional layers are used to generate the transformation maps. We employ ConvLSTM as the backbone of the spatial-temporal prediction networks, an architecture that captures spatial-temporal correlations well. The detailed specifications of the three parts are shown in Table I.
|Layer type|Kernel size|Feature maps|
|Optical flow prediction networks (OFPN)| | |
|Motion estimation networks (MEN)| | |
|Spatial-temporal prediction networks (STPN)| | |
IV Experiments
In this section, we evaluate the performance of our model and compare it with previous approaches to video prediction. We evaluate our system on the Cityscapes dataset as well as on the KITTI dataset.
Training details: Our model is implemented with Keras [17]. For all experiments, we used batch normalization for all layers except the output layers, and the Adam optimizer [18] with the suggested hyperparameters. Our system is trained and deployed on an NVIDIA TITAN X GPU with 12 GB of memory. During the training and testing stages, we resize the image sequences of the KITTI dataset and the images of the Cityscapes dataset to fixed resolutions. Our networks are trained in an end-to-end manner.
IV-A Video prediction of natural scenes
In our first set of experiments, we train our system on real-world sequences from the KITTI dataset, about 20K frames in total; no other data augmentation is used. The dataset provides 18K images for training and 2K for testing. Training and testing sequences are given as tuples of the input raw RGB frames I, the input optical flow maps, and the frame to be predicted. We train our model at a frame rate of 1, taking 5 consecutive RGB frames and the 4 corresponding optical flow maps as input. That is, we pass in the initial RGB sequence together with the corresponding optical flow maps, then roll the model out sequentially, passing in the motion predictions derived from the previous optical flow maps.
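The sequential rollout described above can be sketched as follows, with `predict_frame` and `predict_flow` as hypothetical stand-ins for the trained STPN and OFPN; the window sizes (5 frames, 4 flows) follow the experimental setup:

```python
import numpy as np

def rollout(frames, flows, predict_frame, predict_flow, steps=3):
    """Multi-step rollout: after the first prediction, predicted frames
    and flows are fed back in as context for the next step."""
    frames, flows = list(frames), list(flows)
    predictions = []
    for _ in range(steps):
        next_flow = predict_flow(flows[-4:])                       # OFPN stand-in
        next_frame = predict_frame(frames[-5:], flows[-4:] + [next_flow])  # STPN stand-in
        predictions.append(next_frame)
        frames.append(next_frame)
        flows.append(next_flow)
    return predictions
```

Because each step's output becomes the next step's input, errors compound, which is one reason longer-horizon prediction is harder.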
In Figure 4, we compare video prediction results from different approaches on the KITTI test set. On complex real-world urban scenes, our results match the ground-truth frames significantly better than previous video prediction methods. Compared with the ground truth, PredNet produces blurry results and does not give accurate predictions, as shown in Figure 4(c); its outputs resemble the input frames rather than the ground-truth frames. Overall, the results of our approach tend to be more accurate and sharper.
To evaluate the quality of the videos generated by different models, we compute the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM) between the true frame and the predicted frame. The SSIM ranges from -1 to 1, and a larger score represents better prediction performance. Table II presents the quantitative comparison among the different methods, where we can clearly see the advantage of our method. Feeding the motion features contained in the optical flow into the STPN yields higher-quality predictions.
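For reference, PSNR and a simplified single-window SSIM can be computed as below. The standard SSIM of Wang et al. averages the statistic over local windows, so the global version here is only a sketch:

```python
import numpy as np

def psnr(y_true, y_pred, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher is better, infinite for
    a perfect prediction."""
    mse = np.mean((y_true.astype(float) - y_pred.astype(float)) ** 2)
    return float('inf') if mse == 0 else float(10.0 * np.log10(max_val ** 2 / mse))

def ssim_global(x, y, max_val=255.0):
    """SSIM computed over a single global window (simplified sketch;
    the standard metric averages over local windows)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    x = x.astype(float); y = y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Comparing an image against itself yields an SSIM of 1 and an infinite PSNR, the best possible scores.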
IV-B Video prediction of semantic segmentations
In addition to predicting natural image sequences, we also investigate predicting future semantic segmentations. Semantic segmentation is a simple form of visual understanding, where each pixel carries a class label (e.g., vehicle, road, pedestrian). Prediction at the semantic level does not require modeling detailed textures and edge boundaries, which lets the model focus on predicting the motions of different objects. We train the proposed model on the Cityscapes dataset, which has 8K training and 1K testing semantic segmentations. Here we use the loss function described above for video prediction.
We show some qualitative prediction results from our model and from a ConvLSTM baseline trained with the same loss function in Figure 5. Our model makes more accurate predictions than the ConvLSTM, whose results are blurry and resemble the input semantic frames rather than the ground-truth frames. Although our results contain some noise, the motions of pedestrians are precisely predicted. These results demonstrate that our model is an effective method for making predictions in semantic segmentation space.
Table III presents the mean per-pixel SSIM and PSNR of the different methods. A higher PSNR means better prediction performance, and our model trained with this objective achieves the better scores. The SSIM and PSNR values in the table show that our method makes more accurate predictions of future frames.
Overall, our model using optical flow is capable of accurate video prediction in large-scale scenes at both the RGB level and the semantic level.
V Conclusion
We present a novel end-to-end learning framework that utilizes the spatial-temporal information of frames to make high-quality video predictions. Our model is trained on unlabeled image sequences and applied to diverse and complex urban scenes. The pixel motion captured by optical flow is implicitly incorporated into the feature maps, enabling our model to predict more accurate future frames at both the RGB level and the semantic level. In addition, our model can be easily deployed as a predictive perception component of an autonomous driving system. Challenging problems remain for video prediction, such as longer-horizon prediction, which is difficult because the uncertainty of complex and changeable scenes grows rapidly with time.
[1] C. Finn, I. Goodfellow, and S. Levine, "Unsupervised learning for physical interaction through video prediction," in Advances in Neural Information Processing Systems, 2016, pp. 64–72.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
[3] N. Srivastava, E. Mansimov, and R. Salakhudinov, "Unsupervised learning of video representations using LSTMs," in International Conference on Machine Learning, 2015, pp. 843–852.
[4] T. Xue, J. Wu, K. Bouman, and B. Freeman, "Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks," in Advances in Neural Information Processing Systems, 2016, pp. 91–99.
[5] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh, "Action-conditional video prediction using deep networks in Atari games," in Advances in Neural Information Processing Systems, 2015, pp. 2863–2871.
[6] M. Mathieu, C. Couprie, and Y. LeCun, "Deep multi-scale video prediction beyond mean square error," arXiv preprint arXiv:1511.05440, 2015.
[7] W. Lotter, G. Kreiman, and D. Cox, "Deep predictive coding networks for video prediction and unsupervised learning," arXiv preprint arXiv:1605.08104, 2016.
[8] N. Neverova, P. Luc, C. Couprie, J. Verbeek, and Y. LeCun, "Predicting deeper into the future of semantic segmentation," arXiv preprint arXiv:1703.07684, 2017.
[9] M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra, "Video (language) modeling: a baseline for generative models of natural videos," arXiv preprint arXiv:1412.6604, 2014.
[10] Z. Luo, B. Peng, D.-A. Huang, A. Alahi, and L. Fei-Fei, "Unsupervised learning of long-term motion dynamics for videos," arXiv preprint arXiv:1701.01821, 2017.
[11] C. Vondrick, H. Pirsiavash, and A. Torralba, "Generating videos with scene dynamics," in Advances in Neural Information Processing Systems, 2016, pp. 613–621.
[12] N. Watters, A. Tacchetti, T. Weber, R. Pascanu, P. Battaglia, and D. Zoran, "Visual interaction networks," CoRR, vol. abs/1706.01433, 2017.
[13] B. K. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, no. 1-3, pp. 185–203, 1981.
[14] X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-c. Woo, "Convolutional LSTM network: A machine learning approach for precipitation nowcasting," in Advances in Neural Information Processing Systems, 2015, pp. 802–810.
[15] M. Cordts, M. Omran, S. Ramos, T. Scharwächter, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The Cityscapes dataset," in CVPR Workshop on The Future of Datasets in Vision, 2015.
[16] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[17] F. Chollet et al., "Keras," https://github.com/fchollet/keras, 2015.
[18] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[19] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.