1 Introduction
The recent exponential growth in data collection capabilities and the use of supervised deep learning approaches have enabled tremendous progress in computer vision. However, learning good representations for the analysis and understanding of dynamic scenes, with limited or no supervision, remains a challenging task. This is in no small part due to the complexity of the changes in appearance and of the motions that are observed in video sequences of natural scenes. Yet, these changes and motions provide powerful cues to understand dynamic scenes such as the one shown in Figure 1(a), and they can be used to predict what is going to happen next. Furthermore, the ability to anticipate the future is essential for making decisions and taking action in critical real-time systems such as autonomous driving. Indeed, recent approaches to video understanding [17, 22, 31] suggest that being able to accurately generate/predict future frames in video sequences can help to learn useful features with limited or no supervision.
Predicting future frames to anticipate what is going to happen next requires good generative models that can make forecasts based on the available past data. Recurrent Neural Networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, have been widely used to process sequential data and make such predictions. Unfortunately, RNNs are hard to train due to the exploding and vanishing gradient problems. As a result, they can easily learn short-term but not long-term dependencies. On the other hand, LSTMs and the related Gated Recurrent Units (GRUs) address the vanishing gradient problem and are easier to use. However, their design is ad hoc, with many components whose purpose is not easy to interpret [13].
More recent approaches [22, 37, 35, 20] advocate using generative adversarial network (GAN) learning [7]. Intuitively, this is motivated by the reasoning that the better the generative model, the better the prediction will be, and vice versa: by learning how to distinguish predictions from real data, the network will learn better models. However, GANs are also reportedly hard to train, since training requires finding a Nash equilibrium of a game, which might be hard to obtain using gradient descent techniques.
In this paper, we present a novel DYnamical Atoms-based Network, DYAN, shown in Figure 1(b). DYAN is similar in spirit to LSTMs, in the sense that it also captures short- and long-term dependencies. However, DYAN is designed using concepts from dynamic systems identification theory, which help to drastically reduce its size and provide an easy interpretation of its parameters. By adopting ideas from atom-based system identification, DYAN learns a structured dictionary of atoms to exploit dynamics-based affine invariants in video data sequences. Using this dictionary, the network is able to capture actionable information from the dynamics of the data and map it into a set of very sparse features, which can then be used in video processing tasks such as frame prediction, activity recognition, and semantic segmentation. We demonstrate the power of DYAN's autoencoding by using it to generate future frames in video sequences. Our extensive experiments on several standard video datasets show that DYAN can predict future frames more accurately and efficiently than current state-of-the-art approaches.
In summary, the main contributions of this paper are:

A novel autoencoder network that captures long- and short-term temporal information and explicitly incorporates dynamics-based affine invariants;

The proposed network is shallow, with very few parameters. It is easy to train and the learned model takes very little disk space to store.

The proposed network is easy to interpret, and what it learns is easy to visualize, since the parameters of the network have a clear physical meaning.

The proposed network can predict future frames accurately and efficiently without introducing blurriness.

The model is differentiable, so it can be fine-tuned for another task if necessary. For example, the front end (encoder) of the proposed network can be easily incorporated at the front of other networks designed for video tasks such as activity recognition and semantic video segmentation.
The rest of the paper is organized as follows. Section 2 discusses related previous work. Section 3 gives a brief summary of the concepts and procedures from dynamic systems theory that are used in the design of DYAN. Section 4 describes the design of DYAN, its components, and how it is trained. Section 5 gives more details of the actual implementation of DYAN, followed by Section 6, where we report experiments comparing its frame prediction performance against state-of-the-art approaches. Finally, Section 7 provides concluding remarks and directions for future applications of DYAN.
2 Related Work
There exists an extensive literature devoted to the problem of extracting optical flow from images [10], including recent deep learning approaches [5, 12]. Most of these methods focus on Lagrangian optical flow, where the flow field represents the displacement between corresponding pixels or features across frames. In contrast, DYAN can also work with Eulerian optical flow, where the motion is captured by the changes at individual pixels, without requiring finding correspondences or tracking features. Eulerian flow has been shown to be useful for tasks such as motion enhancement [33] and video frame interpolation [23].
State-of-the-art algorithms for action detection and recognition also exploit temporal information. Most deep learning approaches to action recognition use spatio-temporal data, starting with detections at the frame level [29, 27] and linking them across time by using very short-term temporal features such as optical flow. However, using such a short horizon misses the longer term dynamics of the action and can negatively impact performance. This issue is often addressed by following up with some costly hierarchical aggregation over time. More recently, some approaches detect tubelets [15, 11], starting with a longer temporal support than optical flow. However, they still rely on a relatively small number of frames, which is fixed a priori, regardless of the complexity of the action. Finally, most of these approaches do not provide explicit encoding and decoding of the involved dynamics, which, if available, could be useful for inference and generative problems.
In contrast to the large volume of literature on action recognition and motion detection, there are relatively few approaches to frame prediction. Recurrent Neural Networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, have been used to predict frames. Ranzato et al. [28] proposed an RNN to predict frames based on a discrete set of patch clusters, where an average of 64 overlapping tile predictions was used to avoid blockiness effects. In [31], Srivastava et al. used instead an LSTM architecture with an ℓ2 loss function. Both of these approaches produce blurry predictions due to their use of averaging. Other LSTM-based approaches include the work of Luo et al. [21], using an encoding/decoding architecture with optical flow, and the work of Kalchbrenner et al. [14], which estimates the probability distribution of the pixels.
In [22], Mathieu et al. used generative adversarial network (GAN) [7] learning together with a multi-scale approach and a new loss based on image gradients to improve the sharpness of the predictions. Zhou and Berg [37] used a similar approach to predict the future state of objects, and Xue et al. [35] used a variational autoencoder to predict future frames from a single frame. More recently, Luc et al. [20] proposed an autoregressive convolutional network to predict semantic segmentations in future frames, bypassing pixel prediction. Liu et al. [18] introduced a network that synthesizes frames by estimating voxel flow. However, it assumes that the optical flow is constant across multiple frames. Finally, Liang et al. [17] proposed a dual motion GAN architecture that combines frame and flow predictions to generate future frames. All of these approaches involve large networks that are potentially hard to train.
Lastly, DYAN’s encoder was inspired by the sparsification layers introduced by Sun et al. in [32] to perform image classification. However, DYAN’s encoder is fundamentally different since it must use a structured dictionary (see (6)) in order to model dynamic data, while the sparsification layers in [32] do not.
3 Background
3.1 Dynamics-based Invariants
The power of geometric invariants in computer vision has been recognized for a long time [25]. On the other hand, dynamics-based affine invariants have been used far less. These dynamics-based invariants, which were originally proposed for tracking [1], activity recognition [16], and chronological sorting of images [3], draw on the properties of linear time invariant (LTI) dynamical systems. As briefly summarized below, the main idea behind these invariants is that if the available sequential data (i.e. the trajectory of a target being tracked or the values of a pixel as a function of time) can be modeled as the output of some unknown LTI system, then this underlying system has several attributes/properties that are invariant to affine transformations (i.e. viewpoint or illumination changes). In this paper, as described in detail in Section 4, we propose to use this affine invariance property to reduce the number of parameters in the proposed network, by leveraging the fact that multiple observations of one motion, captured under different conditions, can be described using one single set of these invariants.
Let {y_k}, k = 1, 2, ..., be the output of an LTI system, described either by an autoregressive model or a state space model:
y_k = ∑_{i=1}^n a_i y_{k-i}    (1)
x_{k+1} = A x_k;  y_k = C x_k    (2)
where y_k is the observation at time k (for simplicity of notation, we consider here scalar observations, but the invariants also hold for vector-valued ones) and n is the (unknown a priori) order of the model (the memory of the system). Consider now a given initial condition x_o and its corresponding output sequence {y_k}. The Z-transform of a sequence {y_k} is defined as Y(z) = ∑_{k=0}^∞ y_k z^{-k}, where z is a complex variable. Taking Z-transforms on both sides of (2) yields:
Y(z) = C(zI - A)^{-1} z x_o = G(z) x_o    (3)
where G(z) is the transfer function from initial conditions to outputs. Using the explicit expression for the matrix inversion and assuming non-repeated poles leads to
Y(z) = ∑_{i=1}^n (c_i z)/(z - p_i)    (4)
where the roots of the denominator, p_i, i = 1, ..., n, are the eigenvalues of A (i.e. the poles of the system) and the coefficients c_i depend on the initial conditions. Consider now an affine transformation y'_k = Π y_k (using homogeneous coordinates). Then, substituting in (1) we have y'_k = ∑_{i=1}^n a_i y'_{k-i}. Hence, the order n and the model coefficients a_i (and hence the poles p_i) are affine invariant, since the sequence {y'_k} is explained by the same autoregressive model as the sequence {y_k}.
3.2 LTI System Identification using Atoms
Next, we briefly summarize an atoms-based algorithm [36] to identify an LTI system from a given output sequence.
First, consider a set with an infinite number of atoms, where each atom is the impulse response of an LTI first order (or second order) system with a single real pole p (or a pair of complex conjugate poles p = ρe^{jψ} and p* = ρe^{-jψ}). Their transfer functions can be written as:
G_p(z) = z/(z - p)  and  G_{p,p*}(z) = z/(z - p) + z/(z - p*)
where 0 < ψ < π, and their impulse responses are given by g_p(k) = p^{k-1} and g_{p,p*}(k) = 2ρ^{k-1} cos((k-1)ψ), k = 1, 2, ..., for first and second order systems, respectively.
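As a quick numerical illustration, the atoms above can be generated with a few lines of code. The sketch below (function names are ours) uses the 1-based convention g_p(k) = p^{k-1} and checks that a conjugate pair of first order atoms sums to the corresponding second order atom:

```python
import numpy as np

def first_order_atom(p, F):
    # g_p(k) = p**(k-1), k = 1..F, for a (possibly complex) pole p.
    k = np.arange(1, F + 1)
    return p ** (k - 1)

def second_order_atom(rho, psi, F):
    # g(k) = 2 * rho**(k-1) * cos((k-1)*psi) for the conjugate pair
    # p = rho*exp(+j*psi) and p* = rho*exp(-j*psi).
    k = np.arange(1, F + 1)
    return 2.0 * rho ** (k - 1) * np.cos((k - 1) * psi)

F, rho, psi = 8, 0.9, 0.5
p = rho * np.exp(1j * psi)
# A conjugate pair of first order atoms sums to the second order atom:
pair = first_order_atom(p, F) + first_order_atom(np.conj(p), F)
```

Note that a pole with ρ < 1 yields a decaying response, ρ = 1 a sustained oscillation, and ρ > 1 a growing one, which is why poles are sampled in and around the unit disk.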
Next, from (4), every proper transfer function can be approximated to arbitrary precision as a linear combination of the above transfer functions (provided that if a complex pole is used, then its conjugate is also used):
Y(z) ≈ ∑_i c_i G_{p_i}(z).
Hence, low order dynamical models can be estimated from output data by solving the following sparsification problem:
min_c ||c||_0  subject to  ||y - ∑_i c_i g_{p_i}||_2 ≤ η
where ||c||_0 denotes cardinality (the number of nonzero elements of c) and the constraint imposes fidelity to the data y. Finally, note that solving the above optimization is not trivial, since minimizing cardinality is an NP-hard problem and the number of poles to consider is infinite. The authors in [36] proposed to address these issues by 1) using the ℓ1 norm relaxation for cardinality, 2) using impulse responses of the atoms truncated to the length F of the available data, and 3) using a finite set of N atoms with uniformly sampled poles in the unit disk. Then, using these ideas one could solve instead:
c* = argmin_c λ||c||_1 + (1/2)||y - Dc||_2^2    (5)
where y = [y_1, ..., y_F]^T, c = [c_1, ..., c_N]^T, and D is a structured dictionary matrix with F rows and N columns:
D = [ 1          1          ...  1
      p_1        p_2        ...  p_N
      p_1^2      p_2^2      ...  p_N^2
      ...
      p_1^{F-1}  p_2^{F-1}  ...  p_N^{F-1} ]    (6)
where each column corresponds to the impulse response of a pole p_i, inside or near the unit disk in the complex plane, truncated to F samples. Note that the dictionary D is completely parameterized by the magnitudes and phases of its poles.
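A minimal sketch of building the structured dictionary in (6) from a finite set of sampled poles follows; the uniform sampling scheme and ring bounds below are illustrative assumptions, and `build_dictionary` is our name:

```python
import numpy as np

def build_dictionary(poles, F):
    # Structured dictionary (6): column i holds the impulse response of
    # pole p_i truncated to F samples, D[k, i] = p_i**k for k = 0..F-1.
    k = np.arange(F).reshape(-1, 1)
    return np.asarray(poles).reshape(1, -1) ** k

# Uniformly sample N poles in a ring around the unit circle (one of many
# possible sampling schemes; the bounds below are illustrative).
rng = np.random.default_rng(0)
N, F = 40, 10
poles = rng.uniform(0.85, 1.0, N) * np.exp(1j * rng.uniform(0, 2 * np.pi, N))
D = build_dictionary(poles, F)   # F x N, complex-valued here
```

By construction, any output sequence generated by one of the sampled poles is exactly one column of D, so such data admits a 1-sparse representation.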
4 DYAN: A dynamical atomsbased network
In this section we describe in detail the architecture of DYAN, a dynamical atoms-based network. Figure 1(b) shows its block diagram, depicting its two main components: a dynamics-based encoder and a dynamics-based decoder. Figure 2 illustrates how these two modules work together to capture the dynamics at each pixel, reconstruct the input data, and predict future frames.
The goal of DYAN is to capture the dynamics of the input by mapping them to a latent space, which is learned during training, and to provide the inverse mapping from this feature space back to the input domain. The implicit assumption is that the dynamics of the input data should have a sparse representation in this latent space, and that this representation should be enough to reconstruct the input and to predict future frames.
Following the ideas from dynamic system identification presented in Section 3, we propose to use as latent space the space spanned by a set of atoms that are the impulse responses of a set of first order (single real pole) and second order (pair of complex conjugate poles) LTI systems, as illustrated in Figure 2. However, instead of using a set of random poles in the unit disk as proposed in [36], the proposed network learns a set of "good" poles by minimizing a loss function that penalizes poor reconstruction and prediction quality.
The main advantages of the DYAN architecture are:

Compactness: Each pole in the dictionary can be used by more than one pixel, and affine invariance allows the same poles to be reused even if the data was captured under conditions different from those used in training. Thus, the total number of poles needed to have a rich dictionary, capable of modeling the dynamics of a wide range of inputs, is relatively small. Our experiments show that the total number of parameters of the dictionary, which are the magnitudes and phases of its poles, can be below two hundred while the network still produces high quality frame predictions.

Adaptiveness to the dynamics complexity: The network adapts to the complexity of the dynamics of the input by automatically deciding how many atoms it needs to explain them. The more complex the dynamics, the higher the order of the required model, i.e. the more atoms will be selected and the longer-term memory of the data will be used by the decoder to reconstruct and predict frames.

Interpretability: Similarly to CNNs, which learn sets of convolutional filters that can be easily visualized, DYAN learns a basis of very simple dynamic systems, which are also easy to visualize by looking at their poles and impulse responses.

Performance: Since pixels are processed in parallel, independently of each other (if modeling cross-pixel correlations is desired, it is easy to modify the network to jointly process local neighborhoods using a group Lasso optimization in the encoder), blurring in the predicted frames and computational time are both reduced.
4.1 DYAN’s encoder
The encoder stage takes as input a set of F consecutive frames (or features), which are flattened into a set of N pixel-wise vectors y^{(j)} ∈ R^F, j = 1, ..., N, as shown in Figure 1(b). Then, the output of the encoder is the collection of the minimizers of N sparsification optimization problems:
c^{(j)*} = argmin_c λ||c||_1 + (1/2)||y^{(j)} - D_F c||_2^2,  j = 1, ..., N    (7)
where D_F is the dictionary with the learned atoms, which is shared by all pixels, and λ is a regularization parameter. Thus, the output of the encoder stage is a set of N sparse vectors, which can be reshaped into a stack of sparse feature maps, one per atom.
In order to avoid working with complex poles, we use instead a dictionary with columns corresponding to the real and imaginary parts of increasing powers of the poles in the first quadrant (0 < ψ < π/2), of their conjugates, and of their mirror images in the other quadrants (but eliminating duplicate columns): Re(p_i^{k-1}), Im(p_i^{k-1}), Re((-p_i)^{k-1}) and Im((-p_i)^{k-1}), with k = 1, ..., F. In addition, we include a fixed atom at p = 1 to model constant inputs. The resulting real-valued encoder dictionary is:
D_F = [ 1  Re(p_i^{k-1})  Im(p_i^{k-1})  Re((-p_i)^{k-1})  Im((-p_i)^{k-1}) ],  k = 1, ..., F    (8)
Note that while (7) finds one sparse code c^{(j)} (and a corresponding set of poles) for each feature y^{(j)}, it is trivial to process all the features in parallel, with significant computational time savings. Furthermore, (7) can be easily modified to force neighboring features, or features at the same location but from different channels, to select the same poles by using a group Lasso formulation.
In principle, several sparse recovery algorithms could be used to solve problem (7), including LARS [9], ISTA and FISTA [2], and LISTA [8]. Unfortunately, the structure of the dictionary needed here does not admit a matrix factorization of its Gram kernel, making the LISTA algorithm a poor choice in this case [24]. Thus, we chose to use FISTA, shown in Algorithm 1, since very efficient GPU implementations of this algorithm are available.
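For concreteness, a plain NumPy sketch of FISTA applied to the lasso form of (7) is given below; this is not the paper's GPU implementation, and the step size is simply the inverse of the squared spectral norm of the dictionary:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(D, y, lam, n_iter=100):
    # FISTA for min_c lam*||c||_1 + 0.5*||D c - y||_2^2.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    z, t = c.copy(), 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)           # gradient of the smooth term at z
        c_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = c_new + ((t - 1) / t_new) * (c_new - c)   # momentum step
        c, t = c_new, t_new
    return c

# Sanity check: recover a 2-sparse code over 50 unit-norm random atoms.
rng = np.random.default_rng(1)
D = rng.standard_normal((30, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm columns, as in Sec. 5
c_true = np.zeros(50)
c_true[[4, 17]] = [1.5, -2.0]
y = D @ c_true
c = fista(D, y, lam=0.05, n_iter=300)
```

With a small λ, the recovered code concentrates on the two active atoms, mirroring how the encoder selects a few dynamic atoms per pixel.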
4.2 DYAN’s decoder
The decoder stage takes as input the output of the encoder, i.e. the set of N sparse vectors c^{(j)*}, and multiplies them with the decoder dictionary D_{F+1}, which is the encoder dictionary extended with one more row:
D_{F+1} = [ D_F ; 1  Re(p_i^F)  Im(p_i^F)  Re((-p_i)^F)  Im((-p_i)^F) ]    (9)
to reconstruct the F input frames and to predict the (F+1)-th frame. Thus, the output of the decoder is a set of N vectors ŷ^{(j)} = D_{F+1} c^{(j)*} ∈ R^{F+1}, which can be reshaped into F+1 frames: the reconstruction of the F input frames, followed by the prediction of the next frame.
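The decoder thus reduces to a single matrix multiplication. The following toy sketch (real poles and a hand-picked sparse code, for illustration only) shows that the extra dictionary row produces the next sample of the underlying dynamics:

```python
import numpy as np

F = 10
poles = np.array([0.95, 0.8, -0.9, 0.5])    # toy real poles
ks = np.arange(F + 1).reshape(-1, 1)
D_ext = poles.reshape(1, -1) ** ks          # (F+1) x N decoder dictionary, as in (9)
D = D_ext[:F]                               # F x N encoder dictionary

# Toy sparse code: the input is exactly one atom (the pole at 0.95),
# so the decoder output is that atom extended by one time step.
c = np.array([1.0, 0.0, 0.0, 0.0])
y_hat = D_ext @ c   # reconstruction of the F inputs + 1 predicted sample
```

The first F entries of `y_hat` reproduce the input trajectory and the last entry is the predicted next value, here the next power of the active pole.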
4.3 DYAN’s training
The parameters of the dictionary, i.e. the magnitudes ρ_i and phases ψ_i of its first quadrant poles, are learned using stochastic gradient descent (SGD) and an ℓ2 loss function. The back propagation rules for the encoder and decoder layers can be derived by taking the subgradient of the empirical loss function with respect to the magnitudes and phases of the first quadrant poles and the regularization parameter. Here, for simplicity, we give the derivation for ρ, but the one for ψ can be derived in a similar manner.
Let c be the solution of one of the minimization problems in (7), where we dropped the subscript j and the superscript * to simplify notation, and define
L(c) = (1/2)||y - D_F c||_2^2 + λ||c||_1.
Taking subgradients with respect to c at the minimizer:
D_F^T (D_F c - y) + λq = 0
where q_i = sign(c_i) if c_i ≠ 0, and q_i ∈ [-1, 1] otherwise. Then,
c_Λ = (D_Λ^T D_Λ)^{-1} (D_Λ^T y - λ sign(c_Λ))
and
∂c_Λ/∂ρ = (D_Λ^T D_Λ)^{-1} [ (∂D_Λ/∂ρ)^T (y - D_Λ c_Λ) - D_Λ^T (∂D_Λ/∂ρ) c_Λ ]
where the subscript Λ denotes the active set of the sparse code c, D_Λ is composed of the active columns of D_F, and c_Λ is the vector with the active elements of the sparse code. Using the structure of the dictionary, the entries of ∂D_Λ/∂ρ are obtained by differentiating the powers of the poles, e.g. ∂Re(p^{k-1})/∂ρ = (k-1)ρ^{k-2} cos((k-1)ψ).
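To build intuition for these updates, the following toy sketch (ours, not the paper's training code) fits a single real pole to a decaying exponential sequence by plain gradient descent on the squared reconstruction error; with only one active atom, the sparse coding step is trivial and the pole simply drifts from its initialization near the unit circle to the true value:

```python
import numpy as np

# Data: a sequence generated by a single "true" pole at 0.9.
k = np.arange(1, 11)
y = 0.9 ** k

rho = 0.99   # initial pole magnitude, on a ring near the unit circle
lr = 0.002   # small step size (the loss is quite curved in rho here)
for _ in range(500):
    r = rho ** k - y                             # reconstruction residual
    grad = np.sum(2.0 * r * k * rho ** (k - 1))  # d/d(rho) of sum(r**2)
    rho -= lr * grad
```

After a few hundred steps the pole settles at 0.9, the analogue of the large initial pole motion and subsequent slowdown seen during DYAN training.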
Figure 3 shows how a set of 160 poles, initially uniformly distributed within a ring around the unit circle, move while training DYAN with videos from the KITTI video dataset [6], using the above back propagation rules and an ℓ2 loss function. As shown in the figure, after only one epoch the poles have already moved significantly, while after 30 epochs they move more and more slowly.
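A sketch of such an initialization follows; the grid resolution and ring bounds below are our illustrative assumptions (chosen so that the four quadrants contribute 160 poles plus the fixed atom at p = 1), and `init_pole_grid` is our name:

```python
import numpy as np

def init_pole_grid(n_mag=4, n_phase=10, r_min=0.85, r_max=1.15):
    # Poles on a uniform grid in the first quadrant of a ring around the
    # unit circle, mirrored into the other quadrants, plus a fixed pole
    # at p = 1 for constant inputs.
    mags = np.linspace(r_min, r_max, n_mag)
    phases = np.linspace(0.0, np.pi / 2, n_phase + 2)[1:-1]  # exclude axes
    first_quadrant = (mags[:, None] * np.exp(1j * phases[None, :])).ravel()
    all_poles = np.concatenate([first_quadrant,            # 1st quadrant
                                np.conj(first_quadrant),   # 4th (conjugates)
                                -first_quadrant,           # 3rd (mirror)
                                -np.conj(first_quadrant)]) # 2nd (mirror)
    return np.concatenate([all_poles, [1.0 + 0j]])

poles = init_pole_grid()   # 4 * 40 poles + the fixed atom = 161 entries
```

Mirroring means only the first quadrant magnitudes and phases need to be stored and learned, which is what keeps the parameter count below two hundred.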
5 Implementation Details
We implemented DYAN using PyTorch version 0.3 (code will be made available on Github). A DYAN trained using raw pixels as input produces nearly perfect reconstructions of the input frames. However, predicted frames may exhibit small lags at edges due to changes in pixel visibility. This problem can be easily addressed by training DYAN using optical flow as input. Therefore, given the input frames of a video, we use coarse-to-fine optical flow [26] to obtain the optical flow frames between consecutive inputs. Then, we use these optical flow frames to predict with DYAN the next optical flow frame, which is used to warp the last input frame into the predicted frame. The dictionary is initialized with 40 poles, uniformly distributed on a grid in the first quadrant within a ring around the unit circle, their 3 mirror images in the other quadrants, and a fixed pole at p = 1. Hence, the resulting encoder and decoder dictionaries have 161 columns (the dictionaries do not have repeated columns; for example, conjugate poles share the columns corresponding to their real and imaginary parts, so the number of columns is equal to the number of poles) and F and F + 1 rows, respectively. Each of the columns in the encoding dictionary was normalized to have unit ℓ2 norm. The maximum number of iterations for the FISTA step was set to 100.
6 Experiments
In this section, we describe a set of experiments using DYAN to predict the next frame and compare its performance against state-of-the-art video prediction algorithms. The experiments were run on widely used public datasets and illustrate the generative and generalization capabilities of our network.
6.1 Car Mounted Camera Videos Dataset
We first evaluate our model on street view videos taken by car mounted cameras. Following the experimental settings in [17], we trained our model on the KITTI dataset [6], including 57 recording sessions (around 41k frames) from the City, Residential, and Road categories. Frames were center-cropped and resized as done in [19]. For these experiments, we trained our model with 10 input frames (F = 10) to predict frame 11. Then, we directly tested our model, without fine tuning, on the testing partition of the Caltech Pedestrian dataset [4] (4 sets of videos), which consists of 66 video sequences. At test time, each sequence was split into sequences of 10 frames, and frames were also center-cropped and resized. Also following [17], the quality of the predictions for these experiments was measured using MSE [19] and SSIM [34] scores, where lower MSE and higher SSIM indicate better prediction results.
Qualitative results on the Caltech dataset are shown in Figure 4, where it can be seen that our model accurately predicts sharp future frames. Also note that even though in this sequence there are cars moving in opposite directions and occluding each other, our model predicts all of these motions well. We compared DYAN's performance against three state-of-the-art approaches: DualMoGAN [17], BeyondMSE [22] and Prednet [19]. For a fair comparison, we normalized our image values between 0 and 1 before computing the MSE score. As shown in Table 1, our model outperforms all the other algorithms, even without fine tuning on the new dataset. This result shows the superior predictive ability of DYAN, as well as its transferability.
For these experiments, the network was trained on 2 NVIDIA TITAN XP GPUs, using one GPU for each of the optical flow channels. The model was trained for 200 epochs and it only takes 3KB to store it on disk. Training only takes 10 seconds/epoch, and it takes an average of 230ms (including warping) to predict the next frame, given a sequence of 10 input frames. In comparison, [17] takes 300ms to predict a frame.
6.2 Human Action Videos Dataset
We also tested DYAN on generic videos from the UCF101 dataset [30]. This dataset contains 13,320 videos from 101 different action categories, with an average length of 6.2 seconds. Input frames are 320x240 pixels. Following the state-of-the-art algorithms [18] and [22], we trained on the first split, using 4 frames as input to predict the 5th frame. At test time, we adopted the test set provided by [22] and the evaluation script and optical flow masks provided by [18] to mask in only the moving object(s) within each frame. There are in total 378 video sequences in the test set: every 10th video sequence was extracted from the UCF101 test list, and then 5 consecutive frames were used, 4 for input and 1 for ground truth. Quantitative results with PSNR [22] and SSIM [34] scores, where higher scores indicate better predictions, are given in Table 2, and qualitative results are shown in Figure 5. These experiments show that DYAN achieves superior PSNR and SSIM scores by identifying the dynamics of the optical flow instead of assuming it is constant, as DVF does.
Finally, we also conducted a multistep prediction experiment in which we applied our model to predict the next three future frames, where each prediction was used as a new available input frame. Figure 6 shows the results of this experiment, compared against the scores for BeyondMSE [22] and DVF [18], where it can be seen that the PSNR scores of DYAN’s predictions are consistently higher than the ones obtained using previous approaches.
For these experiments, DYAN was trained on 2 NVIDIA GeForce GTX GPUs, using one GPU for each of the optical flow channels. Training takes around 65 minutes/epoch, and predicting one frame takes 390ms (including warping). Training converged after 7 epochs. In contrast, DVF takes several days to train. DYAN's saved model only takes 3KB of disk space.
UCF101
PSNR   28.6   30.11   31.6   32.86   34.26
SSIM   0.89   0.88    0.93   0.93    0.96
7 Conclusion
We introduced a novel DYnamical Atoms-based Network, DYAN, designed using concepts from dynamic systems identification theory to capture dynamics-based invariants in video sequences and to predict future frames. DYAN has several advantages compared to architectures previously used for similar tasks: it is compact; easy to train, visualize, and interpret; fast to train; produces high quality predictions quickly; and generalizes well across domains. Finally, the high quality of DYAN's predictions shows that the sparse features learned by its encoder do capture the underlying dynamics of the input, suggesting that they will be useful for other unsupervised learning and video processing tasks such as activity recognition and video semantic segmentation.
References
 [1] Ayazoglu, M., Li, B., Dicle, C., Sznaier, M., Camps, O.I.: Dynamic subspace-based coordinated multicamera tracking. In: Computer Vision (ICCV), 2011 IEEE International Conference on. pp. 2462–2469. IEEE (2011)
 [2] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
 [3] Dicle, C., Yilmaz, B., Camps, O., Sznaier, M.: Solving temporal puzzles. In: CVPR. pp. 5896–5905 (2016)
 [4] Dollár, P., Wojek, C., Schiele, B., Perona, P.: Pedestrian detection: A benchmark. In: Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. pp. 304–311. IEEE (2009)
 [5] Dosovitskiy, A., Fischer, P., Ilg, E., Hausser, P., Hazirbas, C., Golkov, V., van der Smagt, P., Cremers, D., Brox, T.: FlowNet: Learning optical flow with convolutional networks. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2758–2766 (2015)
 [6] Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research 32(11), 1231–1237 (2013)
 [7] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in Neural Information Processing Systems. pp. 2672–2680 (2014)
 [8] Gregor, K., LeCun, Y.: Learning fast approximations of sparse coding. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10). pp. 399–406 (2010)
 [9] Hesterberg, T., Choi, N.H., Meier, L., Fraley, C.: Least angle and l1 penalized regression: A review. Statistics Surveys 2, 61–93 (2008)
 [10] Horn, B.K., Schunck, B.G.: Determining optical flow. Artificial Intelligence 17(1-3), 185–203 (1981)
 [11] Hou, R., Chen, C., Shah, M.: Tube convolutional neural network (T-CNN) for action detection in videos. arXiv preprint arXiv:1703.10664 (2017)
 [12] Ilg, E., Mayer, N., Saikia, T., Keuper, M., Dosovitskiy, A., Brox, T.: FlowNet 2.0: Evolution of optical flow estimation with deep networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). vol. 2 (2017)
 [13] Jozefowicz, R., Zaremba, W., Sutskever, I.: An empirical exploration of recurrent network architectures. In: ICML. pp. 2342–2350 (2015)
 [14] Kalchbrenner, N., Oord, A.v.d., Simonyan, K., Danihelka, I., Vinyals, O., Graves, A., Kavukcuoglu, K.: Video pixel networks. arXiv preprint arXiv:1610.00527 (2016)
 [15] Kalogeiton, V., Weinzaepfel, P., Ferrari, V., Schmid, C.: Action tubelet detector for spatio-temporal action localization. arXiv preprint arXiv:1705.01861 (2017)
 [16] Li, B., Camps, O.I., Sznaier, M.: Crossview activity recognition using hankelets. In: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. pp. 1362–1369. IEEE (2012)
 [17] Liang, X., Lee, L., Dai, W., Xing, E.P.: Dual motion GAN for future-flow embedded video prediction. arXiv preprint (2017)
 [18] Liu, Z., Yeh, R., Tang, X., Liu, Y., Agarwala, A.: Video frame synthesis using deep voxel flow. In: International Conference on Computer Vision (ICCV). vol. 2 (2017)
 [19] Lotter, W., Kreiman, G., Cox, D.: Deep predictive coding networks for video prediction and unsupervised learning. arXiv preprint arXiv:1605.08104 (2016)
 [20] Luc, P., Neverova, N., Couprie, C., Verbeek, J., LeCun, Y.: Predicting deeper into the future of semantic segmentation. In: ICCV 2017 - International Conference on Computer Vision. p. 10 (2017)
 [21] Luo, Z., Peng, B., Huang, D.A., Alahi, A., Fei-Fei, L.: Unsupervised learning of long-term motion dynamics for videos. arXiv preprint arXiv:1701.01821 2 (2017)
 [22] Mathieu, M., Couprie, C., LeCun, Y.: Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440 (2015)
 [23] Meyer, S., Wang, O., Zimmer, H., Grosse, M., Sorkine-Hornung, A.: Phase-based frame interpolation for video. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1410–1418 (2015)
 [24] Moreau, T., Bruna, J.: Understanding the learned iterative soft thresholding algorithm with matrix factorization. arXiv preprint arXiv:1706.01338 (2017)
 [25] Mundy, J.L., Zisserman, A.: Geometric invariance in computer vision, vol. 92. MIT press Cambridge, MA (1992)
 [26] Pathak, D., Girshick, R., Dollár, P., Darrell, T., Hariharan, B.: Learning features by watching objects move. In: Computer Vision and Pattern Recognition (CVPR) (2017)
 [27] Peng, X., Schmid, C.: Multi-region two-stream R-CNN for action detection. In: ECCV. pp. 744–759. Springer (2016)
 [28] Ranzato, M., Szlam, A., Bruna, J., Mathieu, M., Collobert, R., Chopra, S.: Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604 (2014)
 [29] Saha, S., Singh, G., Sapienza, M., Torr, P.H., Cuzzolin, F.: Deep learning for detecting multiple space-time action tubes in videos. arXiv preprint arXiv:1608.01529 (2016)
 [30] Soomro, K., Zamir, A.R., Shah, M.: UCF101: A dataset of 101 human action classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012)
 [31] Srivastava, N., Mansimov, E., Salakhutdinov, R.: Unsupervised learning of video representations using LSTMs. In: International Conference on Machine Learning. pp. 843–852 (2015)
 [32] Sun, X., Nasrabadi, N.M., Tran, T.D.: Supervised multilayer sparse coding networks for image classification. arXiv preprint arXiv:1701.08349 (2017)
 [33] Wadhwa, N., Rubinstein, M., Durand, F., Freeman, W.T.: Phase-based video motion processing. ACM Transactions on Graphics (TOG) 32(4), 80 (2013)
 [34] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004)
 [35] Xue, T., Wu, J., Bouman, K., Freeman, B.: Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In: Advances in Neural Information Processing Systems. pp. 91–99 (2016)
 [36] Yilmaz, B., Bekiroglu, K., Lagoa, C., Sznaier, M.: A randomized algorithm for parsimonious model identification. IEEE Transactions on Automatic Control 63(2), 532–539 (2018)
 [37] Zhou, Y., Berg, T.L.: Learning temporal transformations from time-lapse videos. In: European Conference on Computer Vision. pp. 262–277 (2016)