show that deep convolutional neural networks (CNNs) are promising for action recognition in videos. However, CNN models typically have millions of parameters [2, 15, 22], and large amounts of training data are usually needed to avoid overfitting. For this purpose, work is underway to construct datasets consisting of millions of videos. However, collecting, pre-processing, and annotating such datasets requires considerable human effort. Moreover, storing and training on such large amounts of data consumes substantial computational resources.
In contrast, collecting and processing images from the Web is much easier. For example, one may need to look through all, or most, video frames to annotate the action in a video, but often a single glance is enough to decide on the action in an image. Videos and web images also have complementary characteristics. A video of 100 frames may convey a complete temporal progression of an action. In contrast, 100 web action images may not capture the temporal progression, but they do tend to provide more variation in camera viewpoint, background, body part visibility, clothing, etc. Moreover, videos often contain many redundant and uninformative frames, e.g., standing postures, whereas action images tend to focus on discriminative portions of the action (Fig. 1). This property can further focus the learning, making action images inherently more valuable.
In this work, we ask the question: Can web action images be leveraged to train better CNN models and to reduce the burden of curating large amounts of training videos?
This is not a question with an easy yes or no answer. First, web action images are often professional, commercial, or artistic photos, which can differ significantly from video frames; this can introduce domain shift between videos and images. Second, adding web action images to training may affect different actions and different CNN models differently. Furthermore, the performance improvement as a function of the web image set size should be studied.
We start by collecting a large web action image dataset that contains 23.8K images of 101 action classes. Our dataset is more than double the size of the largest previous action image dataset, both in the number of images and the number of actions. And, to the best of our knowledge, this is the first action image dataset with one-to-one correspondence in action classes with the large-scale action recognition video benchmark, UCF101. Images of the dataset are carefully labeled and curated by human annotators; we refer to them as filtered images. Our dataset will be made publicly available for research.
For a thorough investigation, we train CNN models of different depths and analyze the effect of adding web action images to the training set of video frames for different action classes. We also train and evaluate models with varying numbers of action images to explore the marginal gain as a function of web image set size. We find that by combining web action images with video frames in training, a spatial CNN can achieve an accuracy of 83.5% on UCF101, more than 10% absolute improvement over a spatial CNN trained only on videos. When combined with motion features, we achieve 91.1% accuracy, the highest result reported to date on UCF101. We also replace videos by images to demonstrate that our performance gains are due to images providing information complementary to that available in videos, and not solely due to additional training data.
We then investigate how our approach can be made scalable. We crawl a dataset of web images corresponding to the UCF101 classes. These crawled images are not manually labeled; we refer to them as unfiltered images. We compare the performance of filtered and unfiltered images on UCF101: using more unfiltered images, we obtain performance similar to that obtained using fewer filtered images. We also crawl a dataset of web images for ActivityNet, a larger-scale action recognition video dataset, and obtain comparable performance when replacing half the training videos in ActivityNet (corresponding to 16.2M frames) by 393K unfiltered web images. Both crawled datasets will be made publicly available for research.
In summary, our contributions are:
We study the utility of filtered web action images for video-based action recognition using CNNs. By including filtered web action images in training we improve the accuracy of spatial CNN models for action recognition by 10.5%.
We study the utility of unfiltered crawled web action images, a more scalable approach, for video-based action recognition using CNNs. We obtain comparable performance when replacing half ActivityNet videos (16.2M frames) with 393K unfiltered web images.
We collect the largest web action image dataset to date. This dataset is in one-to-one correspondence with the 101 actions in the UCF101 benchmark. We also collect two crawled action image datasets corresponding to the classes of UCF101 and ActivityNet.
2 Related Work
Action recognition is an important research topic for which a large number of methods have been proposed. Among these, bag-of-words approaches that employ expertly designed local space-time features have been widely used, owing to their promising performance on realistic videos, including web videos and movies. Representative examples include space-time interest points and dense trajectories. Advanced feature encoding methods, such as Fisher vector encoding, can further improve the performance of such methods. Beyond bag-of-words approaches, other works explicitly model the space-time structure of human actions [20, 28, 29] by using, for example, HCRFs and MRFs.
CNN models learn discriminative visual features at different granularities, directly from data, which may be advantageous in large-scale problems. CNN models may implicitly capture higher-level structural patterns in the features learned at the last layers of the CNN model. In addition, CNN features may also be used within structured models like HCRFs and MRFs to further improve performance.
Some recent works propose the use of CNN models for action recognition in videos [10, 13, 18, 22]. Ji et al. use 3D convolution filters within a CNN model to learn space-time features. Karpathy et al. construct a dataset of millions of videos for training CNNs and also evaluate different temporal fusion approaches. Simonyan and Zisserman use two separate CNN streams: one CNN is trained to model spatial patterns in individual video frames, and the other is trained to model the temporal patterns of actions based on stacks of optical flow. Ng et al. use a recurrent neural network with long short-term memory (LSTM) cells. In all of these works, the CNN models are trained only on videos. Our findings regarding the use of web action images in training may help further improve the performance of these works.
3 Web Action Image Dataset
To study the usefulness of web action images for learning better CNN models for action recognition, we collect action images that correspond to the 101 action classes in the UCF101 video dataset.
For each action class, we automatically download images from the Web (Google, Flickr, etc.) using corresponding key phrases, e.g., pushup training for the class pushup, and then manually remove irrelevant images, drawings, and cartoons. We also include 2769 images of relevant actions from the Stanford40 dataset. The resulting dataset comprises 23.8K images. Because the images are automatically collected and then filtered, the number of images per category varies: each class has at least 100 images, and most classes have 150-300. We will make our dataset publicly available for research.
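Before the manual filtering pass, exact duplicates are worth removing automatically, since the same photo is often indexed by several search engines. A minimal sketch of such a de-duplication step (our illustration, not part of the authors' described pipeline) drops byte-identical downloads via content hashing:

```python
import hashlib

def deduplicate(image_blobs):
    """Drop byte-identical images, keeping the first occurrence.

    image_blobs: list of (filename, bytes) pairs as downloaded.
    Returns the list with exact duplicates removed.
    """
    seen = set()
    unique = []
    for name, blob in image_blobs:
        digest = hashlib.md5(blob).hexdigest()
        if digest not in seen:  # first time we see this exact content
            seen.add(digest)
            unique.append((name, blob))
    return unique

# Example: two distinct downloads plus one duplicate of the first.
blobs = [("a.jpg", b"\xff\xd8AAA"), ("b.jpg", b"\xff\xd8BBB"),
         ("a_copy.jpg", b"\xff\xd8AAA")]
kept = deduplicate(blobs)  # keeps "a.jpg" and "b.jpg" only
```

Near-duplicates (re-encoded or resized copies) would still pass this filter and are left to the manual curation step.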
Table 1: Comparison of action image datasets. Columns: Dataset, No. of actions, No. of images, Clutter?, Poses vary?, Visibility varies? ("Visibility varies?" refers to variance in the partial visibility of the human bodies.)
Table 1 compares existing action image datasets with our new dataset. Our dataset is more than double the scale of existing datasets, both in the number of images and the number of actions. More importantly, to the best of our knowledge, this is the first action image dataset with one-to-one action class correspondence with a large-scale action recognition video benchmark. We believe that our dataset will enable further study of the relationship between action recognition in videos and in still images.
UCF101 action classes are divided into five types: Human-Object Interaction, Body-Motion Only, Human-Human Interaction, Playing Musical Instruments, and Sports . Fig. 1 shows sample images in our dataset for five action classes, one in each of the five action types.
The action images collected from the Web were originally produced in a variety of settings: amateur and professional photos; artistic, educational, and commercial photos; etc. Within each action category, there can be wide variation in viewpoint, lighting, human pose, body part visibility, and background clutter. For example, commercial photos may have clean backgrounds, while backgrounds of amateur photos may contain much more clutter. Such variance also differs across types of actions. For example, for Sports, there is significant variance in body pose among images that capture different phases of the actions, whereas body pose variance is minimal in images of Playing Musical Instruments.
Many of the collected action images differ significantly from video frames in camera viewpoint, lighting, human pose, and background. Interestingly, action images often capture defining poses of an action that are highly discriminative, e.g., standing with both hands over head and legs spread in jumping jack (Fig. 1, row 2). In contrast, videos may have many frames containing poses that are common to many actions, e.g., in jumping jack the upright standing pose with hands down. Images also tend to have more unique content than video frames, for example more clothing variation. Clearly, there is a trade-off between the temporal information available in videos and the discriminative poses and variety of unique content in images.
4 Training CNNs with Web Action Images
Prior work observes that spatio-temporal networks show performance similar to spatial models. A spatial CNN classifies actions in individual video frames, and action classification for a video is accomplished by fusing the spatial CNN's outputs over multiple frames, via voting or an SVM. Because the spatial CNN is trained on single video frames, its parameters can be learned by fine-tuning a CNN that was trained for a different task, e.g., a CNN pre-trained on ImageNet. The fine-tuning approach is especially beneficial in training a CNN model for action classification in videos, since we often have only limited training samples; given the large number of parameters in a CNN, initializing the parameters to random values leads to overfitting and inferior performance. In this work, we study improving the spatial CNN for action recognition by using web action images as additional training data in fine-tuning. This is then combined with motion features via state-of-the-art techniques.
In our experiments and analysis, we explore the following key questions:
Is it beneficial to train CNNs with web action images in addition to video frames and, if so, which action classes benefit most?
How do different CNN architectures, in particular ones with different depths, perform when web action images are used as additional training data?
How do the performance gains change when more web action images are used in training the CNN?
Are performance gains solely due to additional training data or also due to a single image being more informative than a randomly sampled video frame?
Can we make the procedure of leveraging web images scalable by using crawled (unfiltered) web images rather than manually filtered ones?
Is adding web images beneficial? Significant performance gains are achieved when we train spatial CNNs using our web action image dataset as auxiliary training data (Table 2). For example, with the VGG19 architecture, we achieve a 5.7% absolute improvement in mean accuracy.
Most encouragingly, such improvements are easy to obtain, without introducing additional complexity into the CNN architecture or requiring significantly longer training time.
We further analyze which classes improve the most. Fig. 2 shows the 25 action classes with the largest accuracy improvements for the three different CNN architectures on UCF101 split 1. The rightmost column shows the 25 classes with the top average accuracy improvement over all three tested architectures; all of these gain at least 10% absolute accuracy, and 10 classes gain more than 20%. Some action classes improve consistently irrespective of the CNN architecture used, such as push ups, YoYo, handstand walking, brushing teeth, and jumping jack. This suggests that utilizing web action images in CNN training is widely applicable.
While accuracy improvements for relatively stationary actions such as Playing Daf and Brushing Teeth are somewhat expected, it is interesting that improvements for actions with fast body motion, such as Jumping Jack and Body Weight Squats, are also significant.
Table 2: Model, # layers, # parameters (in millions), accuracy with video only, accuracy with video + images.
Are images beneficial irrespective of CNN depth? While CNN architectures can differ in numerous ways, here we focus on one of the most important factors: depth. We evaluate the performance changes for CNNs of different depths when web action images are used in addition to video frames in training. We train spatial CNNs of three depths: 7 layers (M2048), 16 layers (VGG16), and 19 layers (VGG19). These are the prototypical choices of CNN depth in recent works [2, 15, 17, 21, 22].
Table 2 shows the mean accuracy of the three CNN models trained with and without web action images on UCF101 split 1. Using web action images in training leads to a consistent 5% to 9% absolute improvement across all three architectures. This shows the usefulness of web action images and suggests wide applicability of this approach. Furthermore, our action recognition results confirm the observation that deeper CNNs of 16-19 layers significantly outperform the shallower 7-layer architecture. However, the margin of the gain diminishes when the depth increases from 16 to 19.
Does adding more web images improve accuracy? We further explore how, for the same CNN architecture, the number of web action images used as additional training data influences the classification accuracy of the resulting model. We sample increasing fractions of the images of each action in our dataset, and for each sampled set we train the spatial CNN by fine-tuning VGG16 on both the training videos and the sampled action images. For each sample size, we repeat the experiment three times, each with a different randomly sampled set of web action images. Evaluation is performed on UCF101 split 1.
Fig. 3 summarizes the results of this experiment. The increase in classification accuracy is steepest at the beginning of the curve, when a few thousand web action images are used in training. The accuracy continues to increase, albeit more slowly, as more web action images are added. First, this indicates that using web action images in training can make a significant difference in performance by providing supervision additional to that provided by video frames. Second, it indicates that collecting a moderate number of web action images per action is a cost-effective way to boost model performance (e.g., 100 to 300 images per action for a dataset of the same scale as UCF101).
Do web images complement video frames? Although augmenting with images is more efficient than augmenting with videos, we further investigate whether the achieved performance gains are solely due to additional training data, or whether a web image provides more information to the learning algorithm than a video frame. To test this, we replace video frames by web images while keeping the total number of training samples constant. For each sample size, we repeat the experiment three times, each with a different randomly sampled set of web action images. Evaluation is performed on UCF101 split 1 with a VGG16 model.
Fig. 4 summarizes the results of this experiment. A consistent improvement in performance is achieved when half the video frames are replaced by web images, and the number of training samples (images and video frames) required to reach the maximum accuracy in Fig. 3 is much smaller (50K vs. 230K). This suggests that images augment the information learned by the classifier. We posit that discriminative poses in action images may provide implicit supervision during training that helps learn better discriminative models for classification.
Can this be made scalable? While we have demonstrated the ability to collect a filtered dataset for our desired classes, this is not scalable. Given a different dataset of the same order of magnitude as UCF101, we would have to manually label a new image dataset for its classes. For an even larger dataset with more classes and more samples per class, this becomes very cumbersome, although still easier than collecting videos. We therefore investigate using crawled (unfiltered) web images for the same purpose. We expect that more images will be required if they are unfiltered, so we crawl 207K unfiltered images from the Web corresponding to the classes of UCF101.
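Crawling without manual effort requires turning each class name into a search query automatically. Since UCF101 class identifiers are CamelCase (e.g., JumpingJack), one simple scheme (our illustration; the paper does not specify its exact query construction) splits them into lowercase key phrases, optionally appending a biasing term such as "training":

```python
import re

def class_to_query(class_name, suffix=""):
    """Turn a CamelCase class identifier into a lowercase search phrase.

    Each run of one capital letter plus following lowercase letters
    becomes a word, so 'YoYo' splits into 'yo yo'. The optional suffix
    (e.g., 'training') can bias results toward action photos.
    """
    words = re.findall(r"[A-Z]+[a-z]*|[a-z]+|\d+", class_name)
    phrase = " ".join(w.lower() for w in words)
    return f"{phrase} {suffix}".strip()

print(class_to_query("JumpingJack"))          # jumping jack
print(class_to_query("PushUps", "training"))  # push ups training
```

The resulting phrases would then be fed to an image search backend; the retrieved images remain unfiltered in the sense used above.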
Table 3 summarizes the results of this experiment. The performance of unfiltered images approaches that of manually filtered images, but the number of web images required is much larger. We further investigate whether all the crawled unfiltered images are needed to obtain this performance. We randomly select one quarter (65.5K) of the 207K unfiltered web images, repeat the selection three times, and report the average result in Table 3. The remaining three quarters of the images contribute only an additional 1% accuracy; this is consistent with the observations in Fig. 3.
Having demonstrated the feasibility of using crawled web images, we now apply this to a larger-scale dataset: ActivityNet. ActivityNet contains more classes (203) and more samples per class than UCF101, and its classes are more diverse, belonging to the categories Personal Care, Eating and Drinking, Household, Caring and Helping, Working, Socializing and Leisure, and Sports and Exercises. "ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours." Most videos are between 5 and 10 minutes long and have a 30 FPS frame rate; about 50% are in HD resolution. We crawl 393K unfiltered images from the Web corresponding to the classes of ActivityNet. Results on ActivityNet are reported in Section 5.
Table 3:
| Image Type | # Images | Accuracy (%) |
| Unfiltered (rand select) | 65.5K | 82.1* |
* average of three random sample sets.
Using insights from the experiments performed on UCF101 split 1 in Section 4, we now perform experiments following the standard evaluation protocol and report the average accuracy over the three provided splits.
We also perform experiments on ActivityNet. Following the dataset's protocol, we evaluate classification performance on both trimmed and untrimmed videos. Trimmed videos contain exactly one activity; untrimmed videos contain one or more activities. We use mean average precision (mAP) to evaluate performance. Results on ActivityNet are produced using the validation data, as the authors reserve the test data for a potential future challenge.
5.1.1 Experimental Setup for UCF101
We use the Caffe software for fine-tuning CNNs, with VGG16, VGG19, and M2048 models pre-trained on ImageNet by the corresponding authors. We test M2048 only on the first split for analysis, as it is significantly inferior to the other two architectures (Table 2). Due to hardware limitations, we use small batch sizes: 20 for M2048 and 8 for VGG16 and VGG19. Accordingly, we use smaller learning rates than those in [2, 22]. For M2048, the initial learning rate is lowered after 40K iterations, and training stops at 80K iterations. For both VGG16 and VGG19, the initial learning rate is lowered after 40K iterations and again after 80K iterations; training stops at 100K iterations. The momentum coefficient is always set to 0.9, with a fixed weight decay. In each model, all layers are fine-tuned except the last fully connected layer, which is replaced to produce 101-dimensional output, with initial parameter values sampled from a zero-mean Gaussian distribution.
We resize video frames to 256×256 and take random 224×224 crops with random horizontal flipping for training. Because web action images vary significantly in aspect ratio, we first resize the short dimension to 256 while keeping the aspect ratio, and then crop six equally spaced patches along the longer dimension. Random 224×224 cropping with random horizontal flipping is further applied to these image patches during training. Equal numbers of web images and video frames are sampled in each training batch.
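The six equally spaced crops along the longer dimension can be computed as in the following sketch (our reading of the description; the authors' exact spacing may differ). It assumes the short side has already been resized to the crop size:

```python
def six_crop_boxes(width, height, crop=256):
    """Return six (left, top, right, bottom) boxes of size crop x crop,
    equally spaced along the longer dimension. Assumes the short side
    of the image already equals `crop`."""
    if width >= height:
        # Slide horizontally: offsets equally spaced in [0, width - crop].
        step = (width - crop) / 5.0
        return [(round(i * step), 0, round(i * step) + crop, crop)
                for i in range(6)]
    # Otherwise slide vertically.
    step = (height - crop) / 5.0
    return [(0, round(i * step), crop, round(i * step) + crop)
            for i in range(6)]

boxes = six_crop_boxes(456, 256)
# First box hugs the left edge; the last box ends at the right edge.
```

Each box could then be passed to an image library's crop routine before the random 224×224 crop and flip.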
Video Classification: A video is classified by fusing the CNN outputs for its individual frames. For a test video, we select 20 frames with equal temporal spacing. From each frame, 10 samples are generated following prior work: the four corners and the center (each 224×224) are cropped from the 256×256 frame, giving 5 samples, and horizontal flipping of these yields another 5. Their classification scores are averaged to produce the frame's scores. We classify each frame to the class with the highest score, and the class of the video is then determined by voting over the frames' classes.
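The frame-level voting fusion described above can be sketched in plain Python (an illustration of the procedure; the score values here are arbitrary placeholders):

```python
from collections import Counter

def classify_video(frame_scores):
    """frame_scores: one score vector per frame (one entry per class),
    already averaged over that frame's 10 cropped/flipped samples.
    Each frame votes for its arg-max class; the video takes the
    majority vote over frames."""
    votes = [max(range(len(s)), key=s.__getitem__) for s in frame_scores]
    return Counter(votes).most_common(1)[0][0]

# Three frames over 3 classes: frames 1 and 3 vote class 1, frame 2 votes class 0.
scores = [[0.2, 0.7, 0.1], [0.6, 0.3, 0.1], [0.1, 0.8, 0.1]]
# classify_video(scores) -> 1
```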
We also test SVM fusion: we concatenate the CNN outputs for the 20 frames (averaged over the 10 cropped and flipped samples) from the second fully-connected layer (fc7), i.e., the 15th layer in VGG16 and the 18th in VGG19. This produces a vector of 81,920 (20 × 4096) dimensions, which is then L2-normalized. One-vs-rest linear SVMs are trained on these features for video classification. The SVM regularization parameter is fixed in all experiments.
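Construction of the SVM input can be sketched with NumPy (our illustration; the fc7 activations here are random placeholders, and the 4096-dim fc7 size is the standard VGG dimensionality):

```python
import numpy as np

def video_feature(fc7_per_frame):
    """fc7_per_frame: array of shape (20, 4096) -- fc7 outputs for the
    20 selected frames, each averaged over its 10 crop/flip samples.
    Returns the L2-normalized 81,920-dim concatenation used as SVM input."""
    v = np.asarray(fc7_per_frame, dtype=np.float64).reshape(-1)  # 20 * 4096 = 81920
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

rng = np.random.default_rng(0)
feat = video_feature(rng.standard_normal((20, 4096)))
# feat has shape (81920,) and unit L2 norm
```

A one-vs-rest linear SVM would then be trained on these vectors, one feature vector per training video.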
Combining with Motion Features: The output of spatial CNNs can be combined with motion features for significantly better performance. We present an alternative by combining the output of the spatial CNNs with conventional expert-designed features, namely improved dense trajectories with Fisher vector encoding (IDT-FV). We follow the same settings as prior work to compute the IDT-FV for each video, except that we do not use a space-time pyramid. The IDT-FV of each video is then combined with the concatenated fc7 outputs of the 20 frames to form the final feature vector. One-vs-rest linear SVMs are trained on these features for video classification, with the same fixed SVM regularization parameter.
Table 4: Mean accuracy (%) on UCF101.
| Method | Accuracy (%) |
| slow fusion CNN | 65.4 |
| spatial CNN | 73.0 |
| VGG16 + Images, voting | 82.5 |
| VGG16 + Images, SVM fusion on fc7 | 83.5 |
| VGG19 + Images, voting | 83.3 |
| VGG19 + Images, SVM fusion on fc7 | 83.4 |
5.1.2 Experimental Setup for ActivityNet
We use the Caffe software for fine-tuning CNNs, with a VGG19 model pre-trained on ImageNet by its authors. Due to hardware limitations, we use a small batch size of 8 and, accordingly, a smaller learning rate than in the original work. The initial learning rate is lowered after 80K iterations, and training stops at 160K iterations. The momentum coefficient is set to 0.9, with a fixed weight decay. All layers are fine-tuned except the last fully connected layer, which is replaced to produce 203-dimensional output, with initial parameter values sampled from a zero-mean Gaussian distribution.
Table 6: mAP (%) on ActivityNet.
| Method | # Web Images | Trimmed mAP (%) | Untrimmed mAP (%) |
| Ours (video frames only) | none | 52.3 | 47.7 |
| Ours (unfiltered: all) | 393K | 53.8 | 49.5 |
| Ours (unfiltered: rand select) | 103K | 53.3* | 49.3* |
* average of three random sample sets.
Resizing and cropping of images and frames are performed in the same way as previously described for UCF101. Samples in each training batch are randomly selected from web action images and video frames with equal probability.
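The equal-probability mixing of web images and video frames within a training batch can be sketched as follows (our illustration, not the authors' actual Caffe data layer):

```python
import random

def mixed_batch(video_frames, web_images, batch_size, rng=random):
    """Draw a training batch where each slot is filled from the
    web-image pool or the video-frame pool with probability 1/2."""
    batch = []
    for _ in range(batch_size):
        pool = web_images if rng.random() < 0.5 else video_frames
        batch.append(rng.choice(pool))
    return batch

frames = [f"frame_{i}" for i in range(100)]
images = [f"img_{i}" for i in range(100)]
batch = mixed_batch(frames, images, batch_size=8, rng=random.Random(0))
```

For UCF101 the paper instead samples equal numbers of images and frames per batch; sampling with probability 1/2, as here, matches the ActivityNet description and has the same composition in expectation.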
5.2.1 Experimental Results for UCF101
As seen in Table 4, all our spatial CNNs trained on both videos and images improve accuracy by more than 10% (absolute) over the baseline spatial CNN, a 7-layer model. We believe this improvement is due to two main factors: using a deeper model and using web action images in training. Comparing the baseline spatial CNN to the deeper models trained only on videos (rows 3 and 6 in Table 4), the improvements due solely to differences in CNN architecture are 4.9% and 4.8% for VGG16 and VGG19, respectively. When web action images are used in addition to videos in training (rows 4 and 7 in Table 4), these improvements roughly double, to 9.5% and 10.3%, respectively.
The results in Table 4 show that, for the models we tested, the simple approach of using web action images in training contributes at least as much as introducing significant complexity into the CNN model, e.g., adding at least 9 more layers. It is also interesting to note that, without using optical flow, our spatial CNNs already approach the performance attained by state-of-the-art expert-designed features that do use optical flow (IDT-FV, Table 5). The gains obtained by our approach are especially encouraging compared to deepening the model or incorporating motion features, as leveraging web images during training adds no computational or memory burden at test time.
The slow fusion CNN is not a spatial CNN, as it is trained on multiple video frames instead of single frames. We list it here because it represents a different approach: collecting millions of web videos for training. However, despite being pre-trained on 1M web videos, its performance is far lower than that of our models.
We further test the features learned by our spatial CNNs in combination with motion features (Fisher vector encoding on improved dense trajectories). Table 5 compares our results with state-of-the-art methods that also use motion features. Our method (VGG16 + Images + IDT-FV) outperforms them all: by 2.5% over the method that trains recurrent CNNs with long short-term memory cells; by 3.1% over the method that combines two separate CNNs trained on video frames and optical flow, respectively; and by 5.2% over Fisher vector encoding on improved dense trajectories.
5.2.2 Experimental Results for ActivityNet
Here we report the performance of our spatial CNNs on ActivityNet for the task of action classification in trimmed and untrimmed videos with and without auxiliary web images (Table 6). We then further investigate the use of web images as a substitute for many training videos (Table 7).
Table 7:
| Experiment | # Frames | # Images | mAP (%) |
| 1/2 vids + imgs | 16.2M | 393K | 46.3* |
| 1/4 vids + imgs | 8.1M | 393K | 41.7* |
* average of three random sample sets.
In Table 6 we observe that utilizing web images still helps (by 1.5%) even on a very large-scale dataset like ActivityNet. Using a random sample of approximately one quarter of the crawled web images gives nearly the same results, suggesting that performance gains diminish as the number of web action images grows large. This is consistent with the results on UCF101 (Fig. 3).
In Table 7 we observe that comparable performance is achieved when half the training videos are replaced by web images (rows 1 and 4 in Table 7). A similar pattern is observed when repeating the experiment at a smaller scale. This suggests that a relatively small number of web images can reduce the effort of curating and storing millions of video frames for training.
We have shown that utilizing web action images when training CNN models for action recognition is an effective and low-cost way to improve performance. We have also shown that while videos contain much useful temporal information for describing an action, and while using videos only is more beneficial than using web images only, web images provide information complementary to a finite set of videos, allowing a significant reduction in the video data required for training.
-  F. Caba Heilbron, V. Escorcia, B. Ghanem, and J. Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In , pages 961–970, 2015.
-  K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. BMVC, 2014.
-  C.-Y. Chen and K. Grauman. Watching unlabeled video helps learn new human actions from very few labeled snapshots. In CVPR, 2013.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
-  L. Duan, D. Xu, and S.-F. Chang. Exploiting web images for event recognition in consumer videos: A multiple source domain adaptation approach. In CVPR, 2012.
-  M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. IJCV, 88(2):303–338, 2010.
-  A. Gupta, A. Kembhavi, and L. S. Davis. Observing human-object interactions: Using spatial and functional compatibility for recognition. TPAMI, 31(10):1775–1789, 2009.
-  N. Ikizler-Cinbis, R. G. Cinbis, and S. Sclaroff. Learning actions from the web. In ICCV, 2009.
-  N. Ikizler-Cinbis and S. Sclaroff. Web-based classifiers for human action recognition. IEEE Transactions on Multimedia, 14(4):1031–1045, 2012.
-  S. Ji, W. Xu, M. Yang, and K. Yu. 3d convolutional neural networks for human action recognition. TPAMI, 35(1):221–231, 2013.
-  Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
-  Y.-G. Jiang, J. Liu, A. Roshan Zamir, I. Laptev, M. Piccardi, M. Shah, and R. Sukthankar. THUMOS challenge: Action recognition with a large number of classes. http://crcv.ucf.edu/ICCV13-Action-Workshop/, 2013.
-  A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
-  K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human action classes from videos in the wild. CRCV-TR-12-01, 2012.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
-  I. Laptev, M. Marszałek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In CVPR, 2008.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. CVPR, 2015.
-  J. Y.-H. Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. arXiv preprint arXiv:1503.08909, 2015.
-  F. Perronnin, J. Sánchez, and T. Mensink. Improving the fisher kernel for large-scale image classification. In ECCV. 2010.
-  M. Raptis and L. Sigal. Poselet key-framing: A model for human activity recognition. In CVPR, 2013.
-  K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.
-  K. Soomro, A. R. Zamir, and M. Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
-  C. Sun, S. Shetty, R. Sukthankar, and R. Nevatia. Temporal localization of fine-grained actions in videos by domain transfer from web images. arXiv preprint arXiv:1504.00983, 2015.
-  H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013.
-  H. Wang and C. Schmid. Lear-inria submission for the thumos workshop. In ICCV Workshop on Action Recognition with a Large Number of Classes, 2013.
-  H. Wang, X. Wu, and Y. Jia. Video annotation via image groups from the web. IEEE Transactions on Multimedia, 16(5):1282–1291, 2014.
-  L. Wang, Y. Qiao, and X. Tang. Video action detection with relational dynamic-poselets. In ECCV, 2014.
-  Y. Wang and G. Mori. Hidden part models for human action recognition: Probabilistic versus max margin. TPAMI, 33(7):1310–1323, 2011.
-  D. Weinland, R. Ronfard, and E. Boyer. A survey of vision-based methods for action representation, segmentation and recognition. CVIU, 115(2):224–241, 2011.
-  B. Yao and L. Fei-Fei. Grouplet: A structured image representation for recognizing human and object interactions. In CVPR, 2010.
-  B. Yao, X. Jiang, A. Khosla, A. L. Lin, L. Guibas, and L. Fei-Fei. Human action recognition by learning bases of action attributes and parts. In ICCV, 2011.