After significant successes in face detection, face recognition, and object detection, now commonly used in our daily lives, computer vision researchers are aiming at understanding video, which is one dimension more difficult. These successes rely on advanced machine learning techniques, mainly deep networks, and on training data, both of which require substantial computational power. Hence, the process of data acquisition may be as vital as the technique itself. Large data sets, such as millions of object and animal photos, hundreds of thousands of faces, or millions of scenes, enable complex neural networks to train successfully. However, similar results cannot be achieved with small data sets captured manually by researchers themselves. Video data sets, and specifically human action data sets, are more difficult to compile. There are two common scenarios for generating a human action data set: (1) asking subjects to perform a series of actions in front of a camera; (2) labeling existing videos from the internet. The first scenario does not scale, considering the number of subjects required and the limitations imposed by the capturing environment; data sets of this type are no longer common due to their small size. Examples of the second scenario are UCF 101, containing 101 actions in thousands of online clips; Hollywood2, containing 12 actions in around 3 thousand clips extracted from movies; and Kinetics, including 400 actions from hundreds of thousands of YouTube videos. Although these data sets are very useful for benchmarking the accuracy of different algorithms, their clips and actions are not necessarily useful for real-world action recognition tasks such as security surveillance, sport analysis, smart home devices, and health monitoring, as each scenario has its own settings and set of actions. A solution would be for researchers to collect their own data sets, which may prove costly and time consuming.
In this paper, we’ve introduced a novel way to partition an action video clip into action, subject, and context. We showed that we can manipulate each part separately and assemble them with our proposed video generation model into new clips. The actions are represented by a series of skeletons, the context is a still image or a video clip, and the subject is represented by random images of the same person. We can change an action by extracting it from an arbitrary video clip, by generating it through our proposed skeleton trajectory model, or by applying a perspective transform to an existing skeleton. Additionally, we can change the subject and the context using arbitrary video clips, enabling us to generate arbitrary action clips. This is particularly useful for action recognition models, which require large data sets to increase their accuracy. With a large set of unlabeled data and a small set of labeled data, we can synthesize a realistic training set for a deep model.
We called it DIY (do it yourself) because we can eventually build our own data set from a small one. Similar to actual data collection, not only can we add a new person or action to the data set, but we can also internally expand the data set or capture the same data from different angles, all with very little time and effort.
Lastly, to quantitatively evaluate our data generation technique, we applied it to UT Kinect , a human action data set comprised of 10 actions in 200 video clips. We generated new types of video clips by adding new subjects or actions, or by expanding the current actions and subjects. We show that the generated data, along with the existing data, can improve the performance of well-performing video representation networks, I3D  and C3D , on the action recognition task. For further investigation, we applied our method and the action recognition task to two-person actions in the SBU Interact  data set. The outline of this paper is as follows. In §2 we describe related work in action recognition, data augmentation, and video generative models. Section 3 introduces our video generation method as well as our skeleton trajectory generation method, with samples and use cases. In §4, we discuss the data sets and action recognition methods used to evaluate our work. In §5 we present extensive experimental data backing our claims. Our paper is concluded in §6.
2 Related Works
2.1 Action Recognition
Human action recognition has drawn attention for some time. Before the deep learning era of computer vision, many researchers tried to inflate successful 2D features or descriptors in order to solve this problem, such as 3D SIFT , 3D bags of features , or dense trajectories . Please refer to  for a comprehensive survey of these types of algorithms.
Deep learning networks significantly outperformed traditional approaches and are therefore the focus of this paper. Unlike image representation network architectures, video representation networks have not seen comparably satisfactory advances. There have been different approaches to this problem. Some used 2D (image-based) convolutional layers [7, 60] while others used 3D (video-based) kernels [15, 47, 4]. Input to the networks can be just RGB video , while optical flow can be used as an additional input [9, 4]. Information can propagate across frames either through LSTMs [7, 60] or through feature aggregation .
Using synthetic data or data warping for training classifiers has been proven effective [23, 63, 43]. Sato et al.  propose a method for training a neural network classifier using augmented data. Wong et al.  thoroughly investigated the benefits of data augmentation for classification tasks. In action recognition tasks, data is usually very limited, since collecting and annotating videos is difficult. Although our algorithm could be used for data augmentation by generating videos varying in background, human appearance, and type of action, this is not the purpose of our work. Unlike data augmentation, which is limited to manipulating existing data, our method is capable of generating new data with new content and visual features.
2.2 Video Generative Models
Video generation has posed a challenge for a number of years. Early work in the field focused on generating textures [8, 46, 55]. In recent years, with the success of generative models in image generation, such as GANs , VAEs [22, 35], Plug&Play Generative Networks , Moment Matching Networks , and PixelCNNs , a new window of opportunity has opened towards generating videos using generative models. In this paper, we use GANs to generate human skeleton trajectories and realistic video sequences. A GAN consists of a discriminator and a generator, trained in a two-player zero-sum game. Although GANs have shown promising results on image generation [6, 34, 62, 28, 27], they have proven difficult to train. To address this issue, Arjovsky et al.  proposed the Wasserstein GAN to combat mode collapse with more stability. Salimans et al.  introduced several tricks for training GANs. Karras et al.  proposed a novel method for training GANs by progressively adding new layers. Ronneberger et al.  proposed U-Net, a convolutional network for segmentation.
GANs have previously been used for video generation. There are two lines of work in video generation. The first is video prediction: given the first few frames of a video, the goal is to predict the future frames. Several papers focus on producing pixel values conditioned on the past observed frames [59, 45, 32, 30, 17, 58, 51]. Another group of papers aims at reordering the pixels from the previous frames to generate the new ones [49, 10].
In the second line of work, the goal is to generate a sequence of video frames conditioned on a label, a single frame, etc. Early attempts assumed video clips to be of fixed length and embedded in a latent space [52, 37]. Tulyakov et al.  proposed to decompose motion from content and generate videos using a recurrent neural net. Our work is different from , where the model learns motion and content in the same network, whereas we separate them completely. Furthermore,  is not capable of generating complex human motions, and filling gaps in the background initially occluded by the person in the input video is a difficult task for that method. Our method handles these challenges by completely separating appearance, background, and motion. Our work is somewhat similar to , which does video forecasting using pose estimation, modeling human movement with a VAE and then using a GAN to predict the pixel values of the future frames.
Our work lies in the ”video generation” category, where we focus on employing video generation techniques to generate human action videos. In our proposed method we completely separate background, skeleton motion, and appearance, allowing us to model frame generation and skeleton trajectory generation independently. Thus, skeleton trajectory generation requires labeled data, while frame generation can benefit from the unlimited unlabeled human action videos available on the internet.
3 Method

We define the problem as follows: given an action label and a small set of reference images, each containing a human subject, generate a sequence of video frames featuring a human with the same appearance as the human in the reference image set performing that action. Modeling the (human/camera) motion and generating photo-realistic video frames at the same time would be challenging, but knowing the location/motion of the human skeleton in each frame simplifies it. Hence, we subdivided the problem into two simpler tasks (inspired by [48, 51]).
The first task took the reference images, the background image, and a sequence of target skeletons, and rendered photo-realistic video frames of the person moving according to the target skeletons over the given background.
The second task produced the target skeleton sequences for the first. In other words, given an action label, it generated a sequence of skeletons of a random person performing that action.
By combining the two tasks, we created a novel algorithm that can generate an arbitrary number of human action videos with varying backgrounds, human appearances, actions, and ways each action is performed.
3.1 Video Generation from Skeleton and Reference Appearance
In this section, we explain the algorithm used to generate a video sequence of a person based on a given appearance and a series of target skeletons in an arbitrary background. In our proposed model, we use a GAN conditioned on the appearance, the target skeleton, and the background. Our proposed generator network works frame by frame, where each frame is generated independently of the others. We tried using LSTMs and RNNs to take into account the temporal smoothness of the videos. However, our experiments show that frames generated separately are sharper, as RNNs/LSTMs may introduce blurriness into the generated frames.
Our generator network needs a reference image of the person in order to generate images of the same person with arbitrary poses/backgrounds. However, one reference image may not contain all the appearance information due to occlusions in some poses (e.g., the face is not visible when the person is not facing the camera). To overcome this issue to some extent, we provided multiple reference images of the person to the network. In both training and testing, these images were selected completely at random, so that the network would be responsible for choosing the right pieces of appearance features from the set of input images. These images could be selected with a better heuristic to produce better results, though this is beyond the scope of this work.
The reference images were pre-processed before incorporation into the network. First we extracted the human skeleton from each reference image (using OpenPose ), then used an affine transform to map the RGB pixel values of each body part from the reference image onto the corresponding part of the target skeleton. A binary mask of where the transformed skeleton is located was also created. All these images, along with the background and the target skeleton, were stacked.
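As an illustration of this pre-processing step, the sketch below fits a least-squares affine map from the joints of a body part in a reference image to the corresponding joints in the target skeleton, warps the part's points, and records a binary mask of where they land. All names and the 80×80 frame size are hypothetical; only the affine-fit-and-mask idea comes from the text.

```python
import numpy as np

def part_affine(src_joints, dst_joints):
    """Least-squares 2x3 affine map sending source joints to target joints.
    src_joints, dst_joints: (k, 2) arrays, k >= 3 non-collinear points."""
    A = np.hstack([src_joints, np.ones((len(src_joints), 1))])   # homogeneous
    M, *_ = np.linalg.lstsq(A, dst_joints, rcond=None)           # (3, 2)
    return M.T                                                   # (2, 3)

def warp_points(M, pts):
    """Apply a 2x3 affine transform to (n, 2) points."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M.T

# A limb defined by 3 joints, scaled and translated in the target skeleton.
src = np.array([[10.0, 10.0], [20.0, 10.0], [10.0, 30.0]])
dst = src * 2.0 + np.array([5.0, 5.0])
M = part_affine(src, dst)
moved = warp_points(M, src)

# Binary mask marking where the transformed joints land on an 80x80 frame.
mask = np.zeros((80, 80), dtype=np.uint8)
xy = np.round(moved).astype(int)
mask[xy[:, 1], xy[:, 0]] = 1
```

In practice the transform would be applied to every pixel of the body part, not just its joints, but the fitting step is the same.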
Inspired by pix2pix , we used a U-Net style conditional GAN. The generator G is conditioned on the set of transformed images and corresponding masks, along with the background and the target skeleton. G maps this input to the target frame such that it fools the discriminator D. The discriminator D, on the other hand, is trained to discriminate between real images and the fake images generated by G. The architecture of the discriminator is illustrated in Fig. 3. The pipeline and architecture of the generator are illustrated in Fig. 2. Fig. 3(a) illustrates some of the results.
The objective function of the GAN is expressed as:

$\mathcal{L}_{GAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x}[\log(1 - D(x, G(x)))]$,

where $x$ denotes the conditioning input (transformed reference images, masks, background, and target skeleton) and $y$ the real target frame.
Following , we added an L1 loss to the objective function, which resulted in sharper generated frames:

$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}[\, \| y - G(x) \|_1 \,]$.
In initial experiments, we noticed that using only the L1 loss and the GAN loss is not enough: the output background would be sharp, but the region where the target person is supposed to be was blurry. Subsequently, we introduced a ”Regional L1 loss” with a larger weight as follows,

$\mathcal{L}_{RL1}(G) = \mathbb{E}_{x,y}[\, \| \mathrm{masked}(y) - \mathrm{masked}(G(x)) \|_1 \,]$,
where $\mathrm{masked}(\cdot)$ keeps only the region where the person is located. This mask was generated based on the target skeleton using morphological functions (erode, etc.).
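A minimal numpy sketch of the regional L1 idea, assuming the person region is obtained by morphologically growing the skeleton mask (the text mentions functions such as erosion; a naive 3×3 dilation is used here purely for illustration):

```python
import numpy as np

def dilate(mask, iterations=1):
    """Naive binary dilation with a 3x3 structuring element."""
    m = mask.astype(bool)
    for _ in range(iterations):
        p = np.pad(m, 1)
        m = (p[:-2, :-2] | p[:-2, 1:-1] | p[:-2, 2:] |
             p[1:-1, :-2] | p[1:-1, 1:-1] | p[1:-1, 2:] |
             p[2:, :-2] | p[2:, 1:-1] | p[2:, 2:])
    return m

def regional_l1(target, generated, person_mask):
    """Mean absolute error restricted to the person region."""
    m = person_mask.astype(bool)
    return np.abs(target[m] - generated[m]).mean()

# Toy example: a single wrong pixel inside a 3x3 person region.
target = np.zeros((8, 8))
generated = target.copy()
generated[4, 4] = 1.0
skeleton_mask = np.zeros((8, 8)); skeleton_mask[4, 4] = 1
person_region = dilate(skeleton_mask)      # 3x3 block around the joint
loss = regional_l1(target, generated, person_region)
```

Restricting the average to the person region makes the same pixel error count far more than it would under a full-frame L1, which is the point of weighting this term higher.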
Our final objective is as follows:

$\mathcal{L}(G, D) = \mathcal{L}_{GAN}(G, D) + \lambda_1 \mathcal{L}_{L1}(G) + \lambda_2 \mathcal{L}_{RL1}(G)$,

where $\lambda_1$ and $\lambda_2$ are the weights of the L1 and regional L1 losses (chosen empirically in our experiments), and the goal is to solve the following optimization problem:

$G^* = \arg\min_G \max_D \mathcal{L}(G, D)$.
Multi-person Video Generation In a nutshell, our algorithm merges transformed images of a person in an arbitrary pose with an arbitrary background in a natural, photo-realistic way. We managed to go beyond simple one-person human action videos and extended our method to multi-person interaction videos as well. For this purpose, we trained our model on a two-person interaction data set . The only difference from the single-person frame generation process is that, in the pre-processing phase, for each person in the input reference image we need to know the corresponding skeleton in the target frame; we then transform each person’s body parts onto his/her own body parts in the target skeleton. There are some challenges in this task, such as occlusions in certain interactions (e.g., passing by, hugging, etc.). The data set that we used contains these occlusions to some extent, and our method handles the simple occlusions that occur in such interactions relatively well. We acknowledge that there is room for improvement in this area, but that would not fit in the scope of this work. Fig. 3(b) illustrates some of the generated videos.
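The correspondence step above (knowing which target skeleton belongs to which reference person) could be sketched, for illustration only, as a greedy nearest-centroid matching; the paper does not specify how correspondences are obtained, so every name here is hypothetical:

```python
import numpy as np

def match_people(ref_centroids, tgt_centroids):
    """Greedily assign each reference person to the nearest unused target
    skeleton by centroid distance."""
    used, assignment = set(), {}
    for i, rc in enumerate(ref_centroids):
        dists = [np.inf if j in used else np.linalg.norm(rc - tc)
                 for j, tc in enumerate(tgt_centroids)]
        j = int(np.argmin(dists))
        used.add(j)
        assignment[i] = j
    return assignment

refs = np.array([[50.0, 100.0], [200.0, 100.0]])   # reference person centroids
tgts = np.array([[210.0, 105.0], [55.0, 95.0]])    # target skeletons, swapped
pairing = match_people(refs, tgts)
```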
3.2 Skeleton Trajectory Generation
In the previous section, we explained how we designed a method that enables us to generate videos of an arbitrary person in any background based on any given sequence of skeletons. Although the number of backgrounds and persons is unlimited, the number of labeled skeleton sequences is limited to those in existing data sets. We propose a novel solution to this problem: using a generative model to learn the distribution of skeleton sequences conditioned on the action labels. This allows us to generate as many skeleton sequences as needed for the actions in the data set. Fig. 6 shows a few sample generated skeleton sequences.
We used small data sets for training our model. However, due to the nature of the problem and the limited amount of data, generating long sequences of natural-looking skeletons proved challenging. Thus we aimed at generating relatively short fixed-length sequences. Having said that, training a GAN in this way is still prone to problems such as mode collapse and divergence. We took these problems into account in designing the generator and discriminator networks (e.g., we introduced batch diversity in the discriminator, created multiple discriminators, etc.).
Skeleton Trajectory Representation. Each skeleton consists of 18 joints. We represented each skeleton with a 36-dimensional vector (a flattened version of the 18×2 matrix of joint coordinates). We normalized the coordinates by dividing them by the height and width of the original image.
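The representation described here can be sketched directly; the 36-dimensional size follows from 18 joints with (x, y) coordinates, and the example image size is an assumption:

```python
import numpy as np

def encode_skeleton(joints_xy, width, height):
    """(18, 2) pixel joint coordinates -> normalized 36-dim vector."""
    assert joints_xy.shape == (18, 2)
    # Divide x by image width and y by image height, then flatten.
    return (joints_xy / np.array([width, height], dtype=float)).reshape(-1)

# Hypothetical joints spread across a 640x480 frame.
joints = np.stack([np.linspace(0, 639, 18), np.linspace(0, 479, 18)], axis=1)
vec = encode_skeleton(joints, 640, 480)
```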
Generator Network. We used a conditional GAN model to generate sequences of skeletal positions corresponding to different actions. Our generator has a ”U”-shaped architecture whose input consists of an action label and noise, and whose output is a tensor representing a human skeleton trajectory over a fixed number of time steps.
Based on our results, providing a vector of random noise for each time step helps the generator learn and generalize better. The input noise is therefore replicated and concatenated along the third dimension of the input tensor. The rest is a ”U”-shaped network with skip connections that maps this input to a skeleton sequence. Fig. 4(a) illustrates the network architecture. We also used DenseNet  blocks in our network.
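A small sketch of this conditioning, under the assumption that the action label is replicated across time steps and an independent noise vector is concatenated at each step (all sizes are hypothetical):

```python
import numpy as np

T, LABEL_DIM, NOISE_DIM = 32, 10, 16   # time steps, label size, noise size

def generator_input(label_onehot, rng):
    """Replicate the action label over all time steps and concatenate an
    independent noise vector at every step."""
    noise = rng.standard_normal((T, NOISE_DIM))
    labels = np.broadcast_to(label_onehot, (T, LABEL_DIM))
    return np.concatenate([labels, noise], axis=1)    # (T, LABEL_DIM+NOISE_DIM)

rng = np.random.default_rng(0)
x = generator_input(np.eye(LABEL_DIM)[3], rng)        # e.g. action label 3
```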
Discriminator Network. The architecture of the discriminator is three-fold. Its base is a 1D convolutional neural net along the time dimension. To allow the discriminator to distinguish ”human”-looking skeletons, we used a sigmoid layer on top of the fully-convolutional net. To discriminate ”trajectories”, we used a set of convolutions along time with stride 2, shrinking the output to one vector containing features of the whole sequence. To prevent mode collapse, we first grouped the fully-convolutional net outputs across the batch dimension. We then took min, max, and mean across the batch and provided this statistical information to the discriminator. This provides enough information about the distribution of values across the batch and allows changing the batch size during training. For the detailed discriminator architecture see Fig. 4(b).
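The batch-statistics trick can be sketched as follows: every sample's feature vector is augmented with the min, max, and mean computed across the batch, so the same augmentation works for any batch size (a simplified stand-in for the discriminator's internal feature maps):

```python
import numpy as np

def batch_stat_features(feats):
    """Append per-batch min/max/mean to every sample's feature vector so the
    discriminator sees the whole batch distribution.
    feats: (batch, d) -> (batch, 4 * d)."""
    b, d = feats.shape
    stats = np.concatenate([feats.min(0), feats.max(0), feats.mean(0)])  # (3d,)
    return np.concatenate([feats, np.broadcast_to(stats, (b, 3 * d))], axis=1)

f = np.arange(12, dtype=float).reshape(4, 3)   # toy batch of 4 samples
g = batch_stat_features(f)
```

Because each sample carries identical batch statistics, a generator that collapses to one mode produces degenerate min/max/mean columns the discriminator can detect.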
Our objective function is:

$\mathcal{L}(G, D) = \mathbb{E}_{a,s}[\log D(a, s)] + \mathbb{E}_{a,z}[\log(1 - D(a, G(a, z)))]$,

where $a$ and $s$ are the action label and skeleton trajectory, respectively, and $z$ is the input noise. We aim to solve the following:

$G^* = \arg\min_G \max_D \mathcal{L}(G, D)$.
In this work, we have shown that generative models can be adopted to learn human skeleton trajectories. We trained a conditional GAN on a very small data set (200 sequences) and managed to generate natural-looking skeleton trajectories conditioned on action labels. This can be used to generate a variety of human action sequences that don’t exist in the data set. However, our work is limited to a fixed number of frames. Thus, for future work, we’ll improve our method so that it accommodates longer sequences of varying length. We also explained that, in addition to the generated skeletons, we can use real skeleton sequences from other sources (other data sets, or different subjects in the current data set) to largely expand existing data sets.
4 Datasets and Action Recognition Methods
4.1 Data Sets
In this paper, we’ve claimed that a small set of action videos can be expanded by adding newly generated videos. We targeted smaller action recognition data sets and expanded them to meet the large data requirements of recent action recognition algorithms, which are typically trained on data sets such as UCF 101 , Kinetics , or NTU RGB+D . This eliminates the need for time- and cost-inefficient data acquisition processes.
UT Kinect : One of the data sets widely used in our experiments is UT Kinect, which includes 10 action labels: Walk, Sit-down, Stand-up, Pick-up, Throw, Push, Pull, Wave-hand, Carry, and Clap-hand. There are 10 subjects who perform each of these actions twice in front of a rig of an RGB camera and a Kinect. In total, therefore, there are 200 action clips with RGB and depth, though we ignore depth. All videos are taken in an office environment with similar lighting conditions, and the position of the camera is fixed.
For the training setup, 2 random subjects (20%) were left out for testing, and the experiments were carried out on the remaining 80% of the subjects. The reported results are the average of six individual runs. The 6 train/test splits are kept constant throughout our experiments.
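This split protocol can be sketched as follows (subject IDs and the seed are hypothetical; fixing the seed is what keeps the six train/test splits constant across experiments):

```python
import random

def subject_splits(subjects, held_out=2, runs=6, seed=0):
    """Six fixed train/test splits: each run holds out `held_out` random
    subjects (20% of 10) for testing and trains on the rest."""
    rng = random.Random(seed)
    splits = []
    for _ in range(runs):
        test = set(rng.sample(subjects, held_out))
        splits.append(([s for s in subjects if s not in test], sorted(test)))
    return splits

splits = subject_splits(list(range(10)))
```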
SBU Interact : Since our method works with multiple human subjects in a scene, we picked SBU Interact, a Kinect-captured human activity recognition data set depicting two-person interactions. It contains 294 sequences of 8 classes (Kicking, Punching, Pushing, Hugging, Shaking-hands, Approaching, Departing, and Exchanging-objects) with subject-independent 5-fold cross validation. The original data includes RGB, depth, and skeletons, but we only use RGB for our purposes. We used 5-fold cross validation throughout our experiments and report the average accuracy.
KTH : The KTH action recognition data set was commonly used in the early days of action recognition. It includes 600 low-resolution clips of 6 actions: Walk, Wave-hand, Clap-hand, Jogging, Running, and Boxing, divided into train, test, and validation sets. The first three action labels are shared with the UT data set, while the last three are new. We used this data set to add new actions to the UT data set and for cross-data-set evaluation.
4.2 Action Recognition Methods
We used the following deep learning networks which have previously shown decent performance on recent action recognition data sets.
Convolutional 3D (C3D) : a simple and efficient 3-dimensional ConvNet for spatiotemporal features that shows decent performance on video processing benchmarks such as action recognition, given a large amount of training data. We used their proposed network with 8 convolutional layers, 5 pooling layers, and 2 fully connected layers, with 16 frames of RGB input. They released a network pre-trained on UCF Sport , which we used alongside training from scratch, denoted as C3D(p) vs. C3D(s). Unfortunately, we could not get C3D to converge when training from scratch on the UT data set, but it converged successfully on SBU.
Inflated 3D ConvNets (I3D) : a more complex model recently proposed as the state of the art for action recognition. It builds upon Inception-v1 , but inflates its filters and pooling kernels into 3D. It is a two-stream network which uses both RGB and optical flow inputs; we only used RGB for simplicity. They released a network pre-trained on ImageNet  followed by Kinetics , which we used alongside training from scratch, denoted as I3D(p) vs. I3D(s).
We used data augmentation by translation and cropping, as mentioned in , for all experiments. Only original clips were used for testing, and we made sure that, in each run, no generated clips with skeletons or subjects (subject pairs) from the test data appeared in training.
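A sketch of the spatial augmentation, assuming one random translation/crop offset is drawn per clip and applied to all frames so the motion stays consistent (the 16×128×171 input and the 112 crop mirror common C3D settings and are assumptions here):

```python
import numpy as np

def random_translate_crop(clip, crop, rng):
    """Spatial augmentation for a video clip: pick one random offset and
    apply the same square crop to every frame.
    clip: (T, H, W, C) array."""
    t, h, w, c = clip.shape
    y = rng.integers(0, h - crop + 1)
    x = rng.integers(0, w - crop + 1)
    return clip[:, y:y + crop, x:x + crop, :]

rng = np.random.default_rng(1)
clip = rng.standard_normal((16, 128, 171, 3))   # C3D-style 16-frame input
patch = random_translate_crop(clip, 112, rng)
```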
5 Experiments

So far, we have introduced our video generation method, which enables us to generate new action clips for the action recognition training process. In this section, we show different scenarios for generating new data and run experiments for each to see whether adding the generated data to the training process can improve the accuracy of the action recognizer. We applied our proposed video generation models, driven by skeletons, in all the experiments. The generation models were trained using data from the UT and SBU data sets as well as 41 unannotated clips (between 10 and 30 seconds) that we captured from our colleagues. For future work, we will retrain our model using a large amount of data from the web; for the time being, we are satisfied with the current model, as higher resolution is unnecessary for action recognition. Our technique for generating new action video clips supports experiments with numerous different settings. Here, we show five experiments that can be quantitatively evaluated.
5.1 Generated Trajectory
The first experiment is a combination of our proposed video generation technique and skeleton trajectory generation. We generated around 200 random skeleton trajectories for the action labels in the UT data set using the method described in §3.2. From each of these skeleton trajectories, we generated a video by applying the proposed video generation to a person in the UT data set; our new data set is thus doubled, with half of it being generated data. We then trained I3D and C3D models using the training settings mentioned in §4.1. Table 1 shows about 3% improvement for I3D, both with and without pre-training, as well as a significant improvement (about 15%) for the less complex C3D network.
5.2 New Subjects
One common way to extend a video data set is to invite new people to perform a series of actions in front of a camera. Diversity  in body shape, clothes, and behaviour clearly helps the generalization of ML methods. In this experiment, we aimed to virtually add new subjects to the data set. Thus, we collected a small set of unannotated clips from 10 distinct persons and fed them as new subjects into our proposed video generation method. For UT, each subject was replaced by a new one for all of his/her actions, which is similar to adding 10 new subjects to UT. The same was done with SBU to double the data set, the only difference being that each subject pair was replaced with a new pair. Figure 3(b) shows a few new subjects with their generated action videos from the SBU data set. The results are presented in Table 2.
5.3 New Actions
In real computer vision problems, one might decide to add a new label class after the data collection process is complete. Adding a new action label to an existing data set could cost as much as gathering a data set from scratch, as all the subjects are needed to re-enact that single action. In this experiment, we introduced new action labels to the UT data set. As mentioned in §4.1, UT consists of 10 action labels. We used training data from a third data set, KTH , to generate 3 new actions, Running, Jogging, and Boxing, in addition to those of UT. For each subject in the UT data set and each of these 3 new actions, we randomly picked 5 action clips from the KTH training data, extracted the skeletons with OpenPose , and, together with an input background image, generated 150 new action clips for our data set. We then trained a new model with I3D from the pre-trained network, where in each run we used the training data from the original set plus all the data generated for the new set of actions. Since KTH consists of grey-scale images, we randomly grey-scaled both the original and the generated training clips during training. For each run, we computed the per-class accuracy on the UT test set (see §4.1 for the UT train/test setup) as well as the KTH test set. Table 3 shows the average per-class accuracy for both test sets. We may consider the KTH test results as a measure of cross-data-set accuracy for Walk, Wave-hand, and Clap-hand. Our trained network achieved 72.14%, 44.44%, and 63.20% on the new action labels Boxing, Running, and Jogging, respectively. This indicates that the new actions in the data set performed as well as the data captured by a camera.
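The random grey-scaling step can be sketched as below; the luma weights are the standard ITU-R BT.601 coefficients, which is an assumption, since the text does not specify the conversion:

```python
import numpy as np

BT601 = np.array([0.299, 0.587, 0.114])   # assumed luma weights

def maybe_grayscale(clip, rng, p=0.5):
    """With probability p, replace each frame's RGB by its luma in all three
    channels, so color cues cannot separate grey KTH clips from color ones."""
    if rng.random() >= p:
        return clip
    luma = clip @ BT601                    # (T, H, W)
    return np.repeat(luma[..., None], 3, axis=-1)

rng = np.random.default_rng(2)
clip = rng.random((16, 32, 32, 3))
gray = maybe_grayscale(clip, rng, p=1.0)   # force conversion for the demo
```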
Table 3: per-class accuracy, with columns Action, UTK Test, Label, KTH Test.
5.4 Data set Expansion
So far, we’ve shown that using our proposed method we can generate video clips with any number of arbitrary actions and subjects. In an action data set with S subjects available and C action clips, applying our video generation method to every subject/clip pair results in S×C action videos, comprising the C original videos while the rest are generated. This approach enabled us to expand UT Kinect from 200 clips to 4000 clips and SBU Interact from 283 clips to 5943, using only the original data set. We trained I3D and C3D on our expanded data set as described in §4.1. Table 4 shows the result of this experiment.
Figure 7 shows screenshots of clips from the UTK and SBU data sets. The first row shows skeleton clips extracted from an arbitrary action, while rows 2-4 show the generated videos for subjects from different clips performing that specific action.
5.5 Real World
In this section, we carried out 4 different experiments on 2 data sets for benchmarking. Although the generated data improved network performance in all experiments, we believe none of them shows the real strength and convenience of our proposed method in real-world scenarios. In both data sets, as with other commonly used small data sets, the environmental setup for data acquisition, such as distance from the camera  and lighting conditions, was kept as uniform as possible for both test and train clips. This is unattainable in real-life data acquisition. A way of overcoming this obstacle is to collect diverse sets of data for strong neural network models. We’ve previously shown that partitioning the video into action, subject, and context allows us to easily manipulate the background or change the camera view. In this experiment, we applied perspective transforms to the skeletons while using diverse backgrounds. Although the model trained with these data did not outperform our previous experiments, a live demo showed it to be qualitatively better on unseen cases. Figure 8 illustrates an input skeleton and its perspective transform, as well as the generated clip.
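The perspective transform on a skeleton amounts to applying a homography to its joint coordinates; a minimal sketch, where the example homography values are arbitrary stand-ins for a shifted camera view:

```python
import numpy as np

def perspective_transform(joints_xy, H):
    """Apply a 3x3 homography H to an (n, 2) array of joint coordinates."""
    homo = np.hstack([joints_xy, np.ones((len(joints_xy), 1))]) @ H.T
    return homo[:, :2] / homo[:, 2:3]   # divide out the projective scale

# A mild, arbitrary homography standing in for a changed camera view.
H = np.array([[1.0, 0.1, 5.0],
              [0.0, 1.1, 2.0],
              [1e-4, 0.0, 1.0]])
skeleton = np.array([[100.0, 50.0], [120.0, 80.0], [110.0, 200.0]])
warped = perspective_transform(skeleton, H)
```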
6 Conclusion and Future Works
In this paper, we’ve introduced a novel way to partition an action video clip into action, subject, and context. We showed that we can manipulate each part separately and reassemble them with our proposed video generation model into new clips, which can be used as input for action recognition models that require large data sets. We can change an action by extracting it from an arbitrary video clip, by generating it through our proposed skeleton trajectory model, or by applying a perspective transform to an existing skeleton. Additionally, we can change the subject and the context using arbitrary video clips.
For future work, we will replace our 2D skeletons with 3D skeletons to achieve 3D transformations and handle occlusions. Additionally, while our video generation technique demonstrated acceptable results, we believe it can be extended even further to achieve higher resolution by feeding in more unannotated data.
-  M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
-  M. Bagheri, Q. Gao, S. Escalera, A. Clapes, K. Nasrollahi, M. B. Holte, and T. B. Moeslund. Keep it accurate and diverse: Enhancing action recognition performance by ensemble learning. In CVPRW, pages 22–29, 2015.
-  Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In CVPR, 2017.
-  J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. arXiv preprint arXiv:1705.07750, 2017.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255. IEEE, 2009.
-  E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in neural information processing systems, pages 1486–1494, 2015.
-  J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, pages 2625–2634, 2015.
-  G. Doretto, A. Chiuso, Y. N. Wu, and S. Soatto. Dynamic textures. International Journal of Computer Vision, 51(2):91–109, 2003.
-  C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1933–1941, 2016.
-  C. Finn, I. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In Advances in Neural Information Processing Systems, pages 64–72, 2016.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
-  G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pages 448–456, 2015.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
-  S. Ji, W. Xu, M. Yang, and K. Yu. 3d convolutional neural networks for human action recognition. PAMI, 35(1):221–231, 2013.
-  I. N. Junejo, E. Dexter, I. Laptev, and P. Perez. View-independent action recognition from temporal self-similarities. PAMI, 33(1):172–185, 2011.
-  N. Kalchbrenner, A. v. d. Oord, K. Simonyan, I. Danihelka, O. Vinyals, A. Graves, and K. Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016.
-  A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 1725–1732, 2014.
-  T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
-  W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
-  I. Kemelmacher-Shlizerman, S. M. Seitz, D. Miller, and E. Brossard. The megaface benchmark: 1 million faces for recognition at scale. In CVPR, pages 4873–4882, 2016.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
-  W. Li, Z. Zhang, and Z. Liu. Action recognition based on a bag of 3d points. In CVPRW, pages 9–14. IEEE, 2010.
-  Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1718–1727, 2015.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, pages 740–755. Springer, 2014.
-  M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. arXiv preprint arXiv:1703.00848, 2017.
-  M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In Advances in neural information processing systems, pages 469–477, 2016.
-  M. Marszałek, I. Laptev, and C. Schmid. Actions in context. In CVPR, 2009.
-  M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.
-  A. Nguyen, J. Yosinski, Y. Bengio, A. Dosovitskiy, and J. Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016.
-  J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks in atari games. In Advances in Neural Information Processing Systems, pages 2863–2871, 2015.
-  R. Poppe. A survey on vision-based human action recognition. Image and vision computing, 28(6):976–990, 2010.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
-  D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and variational inference in deep latent gaussian models. In International Conference on Machine Learning, 2014.
-  O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234–241. Springer, 2015.
-  M. Saito and E. Matsumoto. Temporal generative adversarial nets. arXiv preprint arXiv:1611.06624, 2016.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pages 2234–2242, 2016.
-  I. Sato, H. Nishimura, and K. Yokoi. Apac: Augmented pattern classification with neural networks. arXiv preprint arXiv:1505.03229, 2015.
-  C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: a local svm approach. In ICPR, volume 3, pages 32–36. IEEE, 2004.
-  P. Scovanner, S. Ali, and M. Shah. A 3-dimensional sift descriptor and its application to action recognition. In Proceedings of the 15th ACM international conference on Multimedia, pages 357–360. ACM, 2007.
-  A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang. Ntu rgb+ d: A large scale dataset for 3d human activity analysis. In CVPR, pages 1010–1019, 2016.
-  P. Y. Simard, D. Steinkraus, J. C. Platt, et al. Best practices for convolutional neural networks applied to visual document analysis. In ICDAR, volume 3, pages 958–962, 2003.
-  K. Soomro, A. R. Zamir, and M. Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
-  N. Srivastava, E. Mansimov, and R. Salakhudinov. Unsupervised learning of video representations using lstms. In International Conference on Machine Learning, pages 843–852, 2015.
-  M. Szummer and R. W. Picard. Temporal texture modeling. In Image Processing, 1996. Proceedings., International Conference on, volume 3, pages 823–826. IEEE, 1996.
-  D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In ICCV, pages 4489–4497, 2015.
-  S. Tulyakov, M.-Y. Liu, X. Yang, and J. Kautz. Mocogan: Decomposing motion and content for video generation. arXiv preprint arXiv:1707.04993, 2017.
-  J. van Amersfoort, A. Kannan, M. Ranzato, A. Szlam, D. Tran, and S. Chintala. Transformation-based models of video sequences. arXiv preprint arXiv:1701.08435, 2017.
-  A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves, et al. Conditional image generation with pixelcnn decoders. In Advances in Neural Information Processing Systems, pages 4790–4798, 2016.
-  R. Villegas, J. Yang, S. Hong, X. Lin, and H. Lee. Decomposing motion and content for natural video sequence prediction. ICLR, 1(2):7, 2017.
-  C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. In Advances In Neural Information Processing Systems, pages 613–621, 2016.
-  J. Walker, K. Marino, A. Gupta, and M. Hebert. The pose knows: Video forecasting by generating pose futures. arXiv preprint arXiv:1705.00053, 2017.
-  H. Wang, A. Kläser, C. Schmid, and C.-L. Liu. Action recognition by dense trajectories. In CVPR, pages 3169–3176. IEEE, 2011.
-  L.-Y. Wei and M. Levoy. Fast texture synthesis using tree-structured vector quantization. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 479–488. ACM Press/Addison-Wesley Publishing Co., 2000.
-  S. C. Wong, A. Gatt, V. Stamatescu, and M. D. McDonnell. Understanding data augmentation for classification: when to warp? In Digital Image Computing: Techniques and Applications (DICTA), 2016 International Conference on, pages 1–6. IEEE, 2016.
-  L. Xia, C. Chen, and J. Aggarwal. View invariant human action recognition using histograms of 3d joints. In CVPRW, pages 20–27. IEEE, 2012.
-  T. Xue, J. Wu, K. Bouman, and B. Freeman. Probabilistic modeling of future frames from a single image. In NIPS, 2016.
-  T. Xue, J. Wu, K. Bouman, and B. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In Advances in Neural Information Processing Systems, pages 91–99, 2016.
-  J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4694–4702, 2015.
-  K. Yun, J. Honorio, D. Chattopadhyay, T. L. Berg, and D. Samaras. Two-person interaction detection using body-pose features and multiple instance learning. In CVPRW, pages 28–35. IEEE, 2012.
-  H. Zhang, T. Xu, H. Li, S. Zhang, X. Huang, X. Wang, and D. Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. arXiv preprint arXiv:1612.03242, 2016.
-  X. Zhang, Y. Fu, A. Zang, L. Sigal, and G. Agam. Learning classifiers from synthetic data using a multichannel autoencoder. arXiv preprint arXiv:1503.03163, 2015.