
Searching Action Proposals via Spatial Actionness Estimation and Temporal Path Inference and Tracking

by   Nannan Li, et al.

In this paper, we address the problem of searching action proposals in unconstrained video clips. Our approach starts from actionness estimation on frame-level bounding boxes, and then aggregates the bounding boxes belonging to the same actor across frames via linking, associating, tracking to generate spatial-temporal continuous action paths. To achieve the target, a novel actionness estimation method is firstly proposed by utilizing both human appearance and motion cues. Then, the association of the action paths is formulated as a maximum set coverage problem with the results of actionness estimation as a priori. To further promote the performance, we design an improved optimization objective for the problem and provide a greedy search algorithm to solve it. Finally, a tracking-by-detection scheme is designed to further refine the searched action paths. Extensive experiments on two challenging datasets, UCF-Sports and UCF-101, show that the proposed approach advances state-of-the-art proposal generation performance in terms of both accuracy and proposal quantity.





1 Introduction

Video action analysis is an important research topic for human activity understanding, and has gained wide attention in recent years. A common task of video action analysis is action recognition, which aims to identify which type of action is occurring in a video volume [25, 28, 18]. Compared to action recognition, action detection is a more difficult task, as it requires not only determining the action class, but also localizing the action in the video. Similar to the object detection task, in which reliable object proposals play a crucial role in detection performance [23], action proposal generation is also a fundamental problem in action detection.

This paper focuses on generating high-quality action proposals with both spatial compactness and temporal continuity. Existing works in the literature have made different efforts to address the problem, including segmentation-and-merging strategies [12, 13, 1, 7], dense motion features [29, 26, 24], human-centric models [14, 10], and object-proposal based approaches [30, 4]. Despite the promising results achieved in these works, video action proposal generation is still challenging due to the complex spatio-temporal relationships the task must model. The problem can be divided into two essential steps, namely spatial (i.e. frame-level) actionness estimation and temporal (i.e. video-level) action path generation. On the one hand, because of the large diversity and variation of human actions, it is difficult to generate robust frame-level actionness proposals that contain meaningful motion patterns and are clearly discriminable from the background in unconstrained videos. On the other hand, since the number of possible linkings of per-frame regions grows exponentially with the video duration [22], it is impractical to evaluate all possible connections when generating the action paths while guaranteeing that each path is associated with the same actor(s).

To tackle the above issues, we propose a novel framework based on spatial actionness estimation from multiple cues and temporal action path extraction via fast inference and tracking. Firstly, unlike previous works that use selective search [23] or edge boxes [31] to generate object proposals for actionness estimation, we employ more action-related cues, covering both human appearance and motion: a deep Faster R-CNN [15] network is trained and fine-tuned on augmented detection datasets to obtain accurate human proposals, and Gaussian Mixture Models of action motion patterns provide a motion estimate for each human proposal. Both the human and motion estimates are then fed into the proposed forward-and-backward search algorithm for video-level action path generation. Finally, we use a tracking-by-detection approach to refine each action path by supplementing the actionness proposals missed in some frames.

The key contributions of this paper are threefold: i) we construct a frame-level action detector that takes both appearance and motion cues into account, which can handle the problem of detecting humans with uncommon poses and discriminate actionness proposals containing meaningful motion patterns from the background; ii) we formulate action path generation as a maximum set coverage problem [5], propose an improved optimization objective for it, and provide a greedy search algorithm to solve it; iii) extensive experiments on the UCF-Sports and UCF-101 datasets show that the proposed method achieves state-of-the-art performance compared with existing approaches.

Figure 1: The framework of the proposed action proposal generation approach.

2 Related Work

Traditionally, action localization or detection is performed by sliding-window based approaches [20, 11, 3, 27]. For instance, Siva et al. [20] proposed a supervised model based on multiple-instance learning that slides over subvolumes both spatially and temporally for action detection. Instead of performing an exhaustive search over the whole video volume, Oneata et al. [13] put forward a branch-and-bound search approach to achieve time efficiency. The main limitation of these sliding-window based approaches is that the detection results are confined to a video subvolume, and thus cannot accurately capture the varying shape of the motion.

Some research works address the problem by employing a segmentation-and-merging strategy. Generally, these methods include three steps: i) segment the video; ii) merge the segments to generate tube proposals; iii) represent the tubes with dense motion features and construct an action classifier for recognition. For instance, in [7] action tubes are generated by hierarchically merging super-voxels. However, accurate video segmentation is a difficult problem, especially in unconstrained environments. To alleviate this difficulty, some other methods use a figure-centric model. In [14] the humans and objects are detected first and then their interactions are described. Kläser et al. [10] detect humans on each frame and track the detection results across frames using optical flow. Our approach also utilizes tracking, via a more robust tracking-by-detection approach [6, 9] based on a combined feature representation of color and shape.

Recently, some methods built upon the generation of action proposals have been presented. Gkioxari et al. [4] proposed to utilize the Selective Search method for proposing actions on each frame, scored those proposals using features extracted by a two-stream Convolutional Neural Network (CNN), and finally linked them to form action tubes. Weinzaepfel et al. [29] adopted the same feature extraction procedure, then utilized a tracking-by-detection approach, in combination with a class-specific detector, to link frame-level detections. Our method replaces the object proposal method and the two-stream CNN with the Faster R-CNN model for calculation efficiency. The work most related to ours is [30], in which an actionness score is calculated for each action path and a greedy search method is then used to generate proposals. Our work differs from theirs in three aspects: i) we train a Faster R-CNN model for human estimation, which has a stronger ability to differentiate humans from the background; ii) compared with their optimization objective, our improved objective simultaneously maximizes the actionness score and the similarity among members of a path set, and thus can effectively cluster the paths of the same actor into a group; iii) we utilize a tracking-by-detection approach to supplement the missing detections.

3 The Proposed Approach

The proposed approach takes a video clip as input and generates action proposals. The framework of our approach is illustrated in Fig. 1. The main procedure consists of two stages: spatial actionness estimation and temporal action path extraction. Firstly, frame-level bounding boxes that may contain meaningful motion are extracted by simultaneously considering appearance and motion cues; then video-level action paths corresponding to the same actor are generated and linked to obtain action proposals. The details of our method are elaborated in the following sections.

3.1 Spatial Actionness Estimation

3.1.1 Human Estimation

Detecting human proposals is an important first step for action localization. We implement human proposal detection with the Faster R-CNN [15] pipeline, using a VGG-16 model [19] pre-trained on the ILSVRC dataset [17]. Faster R-CNN introduces a Region Proposal Network (RPN) that simultaneously predicts object bounding boxes and their corresponding objectness scores at near real-time speed. As human detection is a binary classification problem, the output of the classification layer of the Faster R-CNN network is revised to a two-way softmax classifier: one way for the 'human' class and the other for the 'background' class. For action classes such as diving and swing, the appearance (especially the shape and pose) of the human changes significantly over the action duration, so a detection network fine-tuned on the standard PASCAL VOC 2007 dataset is unable to effectively detect humans under those circumstances. To handle this, we perform data augmentation by merging the training data of the human class from PASCAL VOC 2007 and 2012, and rotating each training sample by seven evenly spaced angles. Let b_t^i denote the bounding box of the i-th human proposal at the t-th frame, represented as (x, y, w, h), where w and h stand for width and height respectively and (x, y) is the center. After training, for each bounding box in a test video, a human probability p_h can be estimated by the CNN network. By setting a probability threshold, human proposals with higher probability are kept for follow-up processing. A comparison of human detection results between the original Faster R-CNN model and our refined one is shown in Fig. 2, from which it can be clearly observed that the detections from the refined model are more precise and compact.

Figure 2: Comparison of human detection results. The bounding boxes in red and green are the ground truth and the detection results respectively. The 1st and 3rd columns are from the initial Faster R-CNN [15] (there is a missed detection in the 3rd column), while the 2nd and 4th columns are from our fine-tuned model.

3.1.2 Motion Estimation

The human cue provides important prior information for generating frame-level action proposals; however, it is not sufficient to determine whether an action occurs (e.g., a human standing still). Thus we further utilize a motion cue to discard false-positive action proposals. The histogram of optical flow (HOF) [26] descriptor is used to describe the motion pattern of each human proposal. We construct two Gaussian Mixture Models (GMMs), G+ and G-, upon the HOFs, representing the positive and negative proposal class respectively, to predict the probability of a motion pattern belonging to an action or to the background. HOFs calculated within bounding boxes whose Intersection-over-Union (IoU) overlap with the ground truth is high are used as positive samples, while those with low IoU overlap serve as negative samples. Given a test proposal b and its HOF h, we define its motion score from the predictions of the two mixture models as:

p_m(b) = sigma( log P(h | G+) - log P(h | G-) ),                    (1)

where sigma(.) maps the log-likelihood ratio into the range [0, 1]. To reduce the influence of camera movement on the optical flow calculation, we adopt the approach of [28] to estimate the camera motion and subtract it, obtaining robust optical flow.
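The mapping from the two mixture likelihoods to a score can be sketched in a few lines. This is a minimal illustration assuming the two log-likelihoods are already available (e.g., from fitted mixture models); the sigmoid squashing is the only part shown:

```python
import math

def motion_score(loglik_pos: float, loglik_neg: float) -> float:
    """Map the log-likelihood ratio of the two GMMs into [0, 1].

    loglik_pos / loglik_neg are the log-likelihoods of a proposal's HOF
    descriptor under the action (G+) and background (G-) mixtures;
    how they are computed is left abstract here.
    """
    # Sigmoid of the log-ratio: 0.5 when both models are equally likely,
    # approaching 1 when the action model dominates.
    return 1.0 / (1.0 + math.exp(-(loglik_pos - loglik_neg)))
```

By construction the score is symmetric: swapping the two likelihoods maps a score p to 1 - p.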

3.1.3 Actionness Score Calculation

The actionness score of a bounding box b consists of two parts, the human detection score p_h(b) and the motion score p_m(b), and is defined as follows:

s(b) = alpha * p_h(b) + (1 - alpha) * p_m(b),                      (2)

where alpha in [0, 1] is the parameter that balances the human estimation and motion estimation scores.
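The combination is a plain convex mixture of the two cue scores; a one-line sketch (parameter names are ours):

```python
def actionness(p_human: float, p_motion: float, alpha: float = 0.5) -> float:
    """Eq. 2: convex combination of the human-detection and motion scores.

    alpha in [0, 1]; alpha = 1 uses only the appearance cue,
    alpha = 0 only the motion cue.
    """
    return alpha * p_human + (1.0 - alpha) * p_motion
```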

3.2 Temporal Action Path Extraction

3.2.1 Problem Formulation

Given action proposals on each frame, our goal is to find a set of action paths P = {P_1, ..., P_K}, where each P_k corresponds to a path that starts at frame s_k and ends at frame e_k. Yu and Yuan [30] formulate finding the action path set P as a maximum set coverage problem (MSCP) and propose an optimization objective maximizing the actionness score. Inspired by their work, we formulate it as an MSCP with an improved optimization objective that simultaneously maximizes the actionness score and the similarity among members of the path set P. Formally, our optimization objective can be presented as follows:


max_{P ⊆ C}   sum_{P_k in P} sum_{b in P_k} s(b)  +  sum_{P_i, P_j in P} Sim(P_i, P_j)
s.t.   |P| <= K,    O(P_i, P_j) <= tau   for all P_i != P_j in P,      (3)

where Sim(P_i, P_j) represents the similarity between action paths P_i and P_j, whose definition will be explained in Sec. 3.2.3; s(b) is the actionness score of bounding box b (cf. Eq. 2); C is the action-path-candidate set; and tau is a threshold. The first constraint in Eq. 3 sets the maximum number K of paths contained in P, while the second constraint prevents P from containing redundant, overlapping action paths. The overlap of two paths is evaluated by O(P_i, P_j), which is defined as follows:


O(P_i, P_j) = (1 / |Gamma|) * sum_{t in Gamma} IoU(b_t^i, b_t^j),      (4)

where Gamma is the set of frames shared by the two paths. In Eq. 4, IoU(b_i, b_j) is defined as area(b_i ∩ b_j) / area(b_i ∪ b_j), the intersection-over-union of two bounding boxes b_i and b_j.
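Both quantities are straightforward to implement. A minimal sketch, representing boxes as (x1, y1, x2, y2) tuples and a path as a dict from frame index to box (the data layout is our choice, not the authors'):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def path_overlap(path_a, path_b):
    """Eq. 4: mean per-frame IoU over the frames the two paths share."""
    shared = set(path_a) & set(path_b)
    if not shared:
        return 0.0
    return sum(iou(path_a[t], path_b[t]) for t in shared) / len(shared)
```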

3.2.2 Action Path Generation

To solve the MSCP in Eq. 3, the action-path-candidate set C needs to be obtained first. We want C to consist of spatio-temporally smooth paths whose consecutive elements satisfy the following two requirements:

IoU(b_t, b_{t+1}) > lambda_1,
theta * sim(HOC(b_t), HOC(b_{t+1})) + (1 - theta) * sim(HOG(b_t), HOG(b_{t+1})) > lambda_2,      (5)

where IoU is as defined in Eq. 4; HOC(b) and HOG(b) stand for the histogram of color (HOC) and histogram of gradient (HOG) of box b; theta is a trade-off balancing the weight of the two terms; and lambda_1 and lambda_2 are thresholds. The first requirement in Eq. 5 ensures that consecutive bounding boxes b_t and b_{t+1} are spatially continuous; the second ensures that b_t and b_{t+1} have similar appearance, so that the path is likely to follow the same actor.
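The two linking requirements reduce to a simple predicate. A sketch with illustrative threshold values (the paper's actual lambda_1, lambda_2 and theta are not given here); the histogram similarities are assumed precomputed in [0, 1]:

```python
def can_link(iou_value: float, hoc_sim: float, hog_sim: float,
             lam1: float = 0.3, lam2: float = 0.5, theta: float = 0.5) -> bool:
    """Eq. 5 linking test for two consecutive boxes.

    iou_value: spatial overlap of the two boxes (Eq. 4).
    hoc_sim / hog_sim: color- and gradient-histogram similarities.
    Both the spatial and the appearance condition must hold.
    """
    appearance = theta * hoc_sim + (1.0 - theta) * hog_sim
    return iou_value > lam1 and appearance > lam2
```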

To obtain C, we adopt the method proposed in [30] with minor modifications to avoid generating many highly overlapping paths. The algorithm includes two stages: a forward search and a backward track. The former locates the end of each path; the latter recovers the whole path. The central idea is to maintain an updating pool of the Top-N best path candidates, represented as {(s_k, b_k)}, where s_k is the score of path P_k, obtained by accumulating the actionness scores s(b) of the boxes it passes through, and b_k is the bounding box at the end of the k-th path. Given the proposals at frame t, we update the path-candidate pool in two steps: first, for each candidate (s_k, b_k), if any box b at frame t connects to b_k (i.e., satisfies the two requirements in Eq. 5), then b_k is replaced by the connecting box b that yields the largest accumulated score s_k + s(b); second, if the actionness score s(b) of a proposal b is larger than the accumulated score s_k of a candidate, i.e., s(b) > s_k, the candidate (s_k, b_k) is reset to (s(b), b). After the forward search, a backward trace is performed to recover each box b_t on every candidate path P_k, by finding at each step the predecessor whose accumulated score produced the score recorded at the following frame.

The pseudo-code of the forward-backward search is given in Algorithm 1. It takes the bounding-box scores s(b) as input and outputs the action paths of the candidate set C. Lines 1 to 8 describe the forward search and line 9 the backward trace. In line 3, n_t denotes the number of bounding boxes on frame t.

Input:  bounding-box scores s(b) for all frames
Output: action paths of the candidate set C
1:  initialize the candidate pool with the proposals of the first frame
2:  for each frame t do
3:     for i = 1 to n_t do
4:        compute the accumulated score of linking box b_t^i to each candidate
5:     end for
6:     step 1: update each candidate with the box that connects with it and has the largest accumulated score
7:     step 2: reset a candidate to (s(b), b) if s(b) exceeds its accumulated score
8:  end for
9:  backward trace to locate every box b_t of each path in C
Algorithm 1    Forward Search and Backward Track
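The forward search can be sketched in Python as follows. Storing whole paths in the pool makes the backward trace implicit; the pool size, the `can_link` predicate, and the per-frame data layout are illustrative assumptions, not the authors' implementation:

```python
def forward_backward_search(frames, can_link, top_n=2):
    """Sketch of Algorithm 1.

    frames: list with one entry per frame, each a list of (box, score)
            pairs; can_link(a, b) decides whether boxes a, b may be
            connected (the Eq. 5 test); top_n is the pool size.
    Returns up to top_n paths as lists of boxes.
    """
    pool = []  # list of (accumulated_score, path_so_far)
    for boxes in frames:
        new_pool = []
        for box, score in boxes:
            # Step 1: extend the best-scoring compatible candidate.
            best = None
            for acc, path in pool:
                if can_link(path[-1], box) and (best is None or acc > best[0]):
                    best = (acc, path)
            if best is not None:
                new_pool.append((best[0] + score, best[1] + [box]))
            else:
                # Step 2: start a new candidate from this proposal.
                new_pool.append((score, [box]))
        # Keep only the Top-N candidates (unextended survivors included).
        merged = new_pool + pool
        merged.sort(key=lambda c: c[0], reverse=True)
        pool = merged[:top_n]
    return [path for _, path in pool]
```

With boxes reduced to integer labels and `can_link` testing proximity, two parallel tracks are recovered independently, which is the behaviour the pool is designed for.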

3.2.3 Action Path Association

Once C is obtained, the MSCP in Eq. 3 can be solved. According to [5], the maximum set coverage problem is NP-hard, but a greedy-search algorithm achieves an approximation ratio of (1 - 1/e). Here, we present a greedy-search solution. In the beginning, we pick the candidate with the largest actionness score in C and add it to the path set P. Supposing that P already contains m action paths, we enumerate the remaining paths in C and select as the (m+1)-th path the one that maximizes the following equation:


P_{m+1} = argmax_{P in C \ P}  sum_{b in P} s(b) + sum_{j=1}^{m} Sim(P, P_j).      (6)

In Eq. 6, Sim(P_i, P_j) represents the similarity of action paths P_i and P_j, defined as Sim(P_i, P_j) = theta * sim(c_hoc^i, c_hoc^j) + (1 - theta) * sim(c_hog^i, c_hog^j), where c_hoc^i and c_hog^i are the cluster centers of the HOC and HOG descriptors of the bounding boxes on path P_i. The larger the value of Sim, the more likely that paths P_i and P_j follow the same actor. To reduce redundant paths in set P, a newly added path must also satisfy the overlap constraint defined via Eq. 4.
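The greedy selection above can be sketched compactly. The interfaces (a score per path id, pairwise overlap and similarity functions) are our abstraction of the procedure, not the authors' code:

```python
def greedy_path_set(candidates, overlap, similarity, k_max, tau=0.5):
    """Greedy solution of the MSCP objective (Eqs. 3 and 6).

    candidates: dict mapping path id -> actionness score.
    overlap(p, q), similarity(p, q): pairwise functions over path ids.
    Picks the top-scoring path first, then repeatedly adds the feasible
    path maximizing actionness plus similarity to the chosen set.
    """
    remaining = dict(candidates)
    chosen = [max(remaining, key=remaining.get)]  # seed with best score
    del remaining[chosen[0]]
    while remaining and len(chosen) < k_max:
        # Overlap constraint: reject paths too close to a chosen one.
        feasible = [p for p in remaining
                    if all(overlap(p, q) <= tau for q in chosen)]
        if not feasible:
            break
        best = max(feasible,
                   key=lambda p: remaining[p]
                   + sum(similarity(p, q) for q in chosen))
        chosen.append(best)
        del remaining[best]
    return chosen
```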

3.2.4 Action Path Completion

Figure 3: Examples of tracking-by-detection results. The bounding boxes in green and red are the ground truth and the detected frame-level human bounding boxes respectively, and those in blue are obtained by our tracking-by-detection strategy. The missing human targets in the earlier frames are all correctly located by the tracking approach.

As the human detection may fail in some frames, the track obtained by connecting the paths in P can have temporal gaps. To get a spatio-temporally continuous track of an actor, we fill the gaps using a tracking-by-detection approach [29]. We train a linear SVM as the frame-level detector. The initial positives consist of the bounding boxes in set P, while the negatives comprise bounding boxes excluded from set P and boxes randomly selected around positives with an IoU below 0.3. Given the detected region r_t on frame t, we want to find the most likely location on a neighboring frame where the human detection is missing. Firstly, we shift r_t by the median of the optical flow inside the region; secondly, we construct a search region by extending the height and width of the shifted box to 1.5 times their original lengths; thirdly, we scan the search region with a set of windows whose width-to-height ratio varies within a range, to adapt to possible size changes of the actor. The best region is selected as the one that maximizes the following equation:


r* = argmax_{w in W} f(w),      (7)

where W is the window set produced by scanning the search region and f is the SVM detector, whose input feature is the combination of HOC and HOG. After obtaining r*, we update the SVM detector by adding r* as a positive sample and the boxes around it with IoU less than 0.3 as negatives. An example of how the tracking approach supplements missing detections is illustrated in Fig. 3.
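One gap-filling step can be sketched as a flow-guided scan. The grid offsets and scale set are illustrative, and `detector` stands in for the linear SVM score function:

```python
def fill_gap(prev_box, flow_shift, detector, scales=(0.8, 1.0, 1.25)):
    """One step of the gap-filling tracker (Sec. 3.2.4).

    prev_box: (cx, cy, w, h) center-size box from the last tracked frame.
    flow_shift: median optical-flow displacement inside that box.
    detector: scores a candidate (cx, cy, w, h) box; higher is better.
    """
    cx, cy, w, h = prev_box
    cx, cy = cx + flow_shift[0], cy + flow_shift[1]  # flow-guided shift
    best_box, best_score = None, float("-inf")
    # Scan an enlarged search region with windows of varying aspect ratio.
    for dx in (-0.25 * w, 0.0, 0.25 * w):
        for dy in (-0.25 * h, 0.0, 0.25 * h):
            for s in scales:
                cand = (cx + dx, cy + dy, w * s, h / s)
                score = detector(cand)
                if score > best_score:
                    best_box, best_score = cand, score
    return best_box
```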

3.3 Action Proposal Generation

The spatio-temporally continuous track can be considered as an action tube that follows an actor from appearing to disappearing. For each action tube, if its duration is larger than a specified threshold (20 frames), we regard it as an action proposal, denoted as A.

4 Experiment

In this section, we describe the experimental evaluation of the proposed approach, including the datasets and evaluation metrics, implementation details, an analysis of the proposed approach, and an overall performance comparison with state-of-the-art methods.

4.1 Datasets and Evaluation Metric

We evaluate the performance of the proposed action proposal approach on two publicly available action-detection datasets: UCF-Sports [16] and UCF-101 [21].

UCF-Sports The UCF-Sports dataset consists of 150 short sports videos from 10 action classes and has been widely used for action localization. The videos are truncated to contain a single action, and bounding-box annotations are provided for each frame.

UCF-101 The UCF-101 dataset has more than 13,000 videos belonging to 101 classes. In a subset of 24 categories, human actions are annotated both spatially and temporally. Compared with UCF-Sports, only a part of the videos are trimmed to fit the motion.

Evaluation Metric To evaluate the quality of an action proposal A, we follow the metric proposed by [24]. Specifically, the evaluation is based on the mean IoU between the action proposal A and the ground truth G, defined as IoU(A, G) = (1 / |Gamma|) * sum_{t in Gamma} IoU(A_t, G_t), where A_t and G_t are the detection bounding box and the ground truth on the t-th frame respectively; IoU is the value defined in Eq. 4; and Gamma is the set of frames where either the detection result or the ground truth is not null. An action proposal is considered a true positive if IoU(A, G) >= eta, where eta is a specified threshold. In the following, eta is set to 0.5 unless otherwise specified.
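The metric averages over the union of annotated frames, so frames where only one side is present pull the mean down. A minimal sketch, with boxes as (x1, y1, x2, y2) keyed by frame index (our layout):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    inter = (max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
             * max(0.0, min(a[3], b[3]) - max(a[1], b[1])))
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def mean_path_iou(proposal, ground_truth):
    """Metric of [24]: mean IoU over frames where either side is present.

    Frames where only the proposal or only the ground truth exists
    contribute 0 to the sum but still count in the denominator.
    """
    frames = set(proposal) | set(ground_truth)
    if not frames:
        return 0.0
    total = sum(iou(proposal[t], ground_truth[t])
                for t in frames if t in proposal and t in ground_truth)
    return total / len(frames)

def is_true_positive(proposal, ground_truth, eta=0.5):
    return mean_path_iou(proposal, ground_truth) >= eta
```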

4.2 Implementation Details

The human estimation is implemented on the Caffe platform [8], based on the Faster R-CNN pipeline with a VGG-16 model for parameter initialization, as described in Sec. 3.1. We use the four-step alternating training strategy [15] to optimize the two pipelines (i.e., RPN and Fast R-CNN) of the whole network. For training the RPN pipeline, the same scale and aspect-ratio settings are used as in [15]. For training the Fast R-CNN pipeline, the mini-batch size is set to 128 and the ratio of positive to negative samples to 1:4. The network is trained with Stochastic Gradient Descent (SGD) with an initial learning rate of 0.001, decreased by a factor of 10 every 5 epochs; the momentum and weight decay are set to 0.9 and 0.0005 respectively.

Figure 4: Recall vs. the maximum number of paths K in set P under different test settings.

For the motion estimation of the bounding boxes, the number of GMM components is set equal to the number of action categories. For constructing the GMMs, we randomly select 1/3 of the video clips of UCF-101 for training when testing on UCF-Sports; when testing on UCF-101, all the video clips of UCF-Sports are used for training. This setting allows a fair comparison with existing non-learning based methods that test on the whole dataset. The number N of action paths in the candidate set C is set separately per dataset, with a larger value for UCF-101, as its action videos are longer on average and hence may contain more action-path segments. The value of K in Eq. 3 (i.e., the maximum number of paths in set P) is likewise set per dataset. For each video clip, we propose at least one path set P; once a path set P is generated, its paths are removed from the candidate set and the greedy search is performed again to find a new path set, until the duration of the longest remaining path falls below the duration threshold.

4.3 Analysis of the proposed approach

We analyze the performance of the proposed approach from different aspects, including the sensitivity of the parameter K (i.e., the maximum number of paths in set P), the influence of actionness estimation based on human appearance and motion cues, the number of generated proposals, and the runtime.

Fig. 4 shows the recall performance of our approach with different actionness estimation schemes as the value of K varies. It can be observed that the proposed approach achieves its best performance when K lies within a moderate range on UCF-Sports, and the performance degrades significantly when K is far from this range. The optimal value of K for UCF-101 is larger than that for UCF-Sports, probably because the video clips of UCF-101 are longer on average, so an action path is more likely to be separated into multiple segments.

We can also observe that using both the human appearance and motion cues for actionness estimation yields better recall than using only the human appearance cue on both datasets. This confirms our intuition that employing multiple action-related cues for actionness estimation helps to further improve action proposal generation.

As also shown in Fig. 4, at the best recall point on UCF-Sports, our approach generates only 12 action proposals on average, significantly fewer than state-of-the-art methods at the same recall level (see Table 1). This notable improvement is mainly due to the precise human estimation from our fine-tuned Faster R-CNN model and the modified forward-backward search algorithm for generating the candidate set C. Compared to [30], the improved optimization objective leverages appearance similarity among paths to effectively separate different actors. Fig. 5 illustrates this improvement: the action path generated by the proposed approach is correctly associated with a single actor. More examples of action-proposal generation results on UCF-Sports and UCF-101 are shown in Fig. 8 and Fig. 9, respectively.

Figure 5: Examples of action-path generation results. The 1st row shows the results obtained from [30] (the action path contains an irrelevant actor within the first few frames); the 2nd row shows our results, where the main actor is correctly tracked.

The runtime of the proposed approach includes three parts: (i) spatial actionness estimation: Faster RCNN for human estimation takes around 0.1 seconds per frame (s/f) and GMM-HOF for motion estimation takes around 1 s/f; (ii) temporal action path extraction: the average runtime of this step is 0.09 s/f; and (iii) action path completion takes 0.5 s/f. In summary, the average runtime of the approach is 1.69 s/f. We conduct the runtime analysis on the UCF-Sports dataset with an image resolution of 720 x 404, and based on hardware configurations of an Nvidia Tesla-K80 GPU, 3.4 GHz CPU and 4 GB memory.

Figure 6: Recall vs. IoUs on UCF-Sports and UCF-101 datasets.
Figure 7: Recall performance on each action category on UCF-Sports and UCF-101 datasets.

4.4 Overall Performance

We compare the action proposal generation performance of the proposed approach with state-of-the-art methods on UCF-Sports and UCF-101. We vary the IoU threshold eta in [0, 1] and plot recall as a function of eta. Fig. 6 shows the Recall vs. IoU curves of the different approaches. Our approach obtains a significant performance gap over the state-of-the-art methods on UCF-Sports (our recall remains high even at large values of eta, while the others drop well below it), and achieves very competitive performance on UCF-101. Fig. 7 shows the recall performance for each action category. Our approach yields superior recall on almost all action categories except a few (e.g., Walk and Kick) on UCF-Sports, and on UCF-101 it greatly outperforms the comparison methods on classes such as Biking, Surfing and TrampolineJumping.

From Fig. 7, it can also be noticed that the performance on UCF-101 is inferior to that on UCF-Sports, probably because the test video clips of UCF-101 contain actors whose size varies more dynamically and continuously, and are less trimmed than those of UCF-Sports. Finally, we report the overall performance on the two datasets in Table 1 using several common metrics, including ABO (Average Best Overlap), MABO (Mean ABO over all classes), recall, and the average number of proposals per video. The results further confirm the superior proposal generation performance of our approach. Notably, at the same recall level, our approach generates far fewer action proposals per video clip than the other methods, which is especially important for reducing the computational cost of follow-up applications such as action recognition and action interaction modeling.

5 Conclusions

UCF-Sports ABO MABO Recall #Proposals
Brox & Malik, ECCV 2010 [2] 29.84 30.90 17.02 4
Jain et al., CVPR 2014 [7] 63.41 62.71 78.72 1,642
Oneata et al., ECCV 2014 [13] 56.49 55.58 68.09 3,000
Gkioxari & Malik, CVPR 2015 [4] 63.07 62.09 87.23 100
APT, BMVC 2015 [24] 65.73 64.21 89.36 1,449
Ours 89.64 74.19 91.49 12
UCF-101 ABO MABO Recall #Proposals
Brox & Malik, ECCV 2010 [2] 13.28 12.82 1.40 3
APT, BMVC 2015 [24] 40.77 39.97 35.45 2,299
Ours 63.76 40.84 39.64 18
Table 1: Quantitative performance comparison of the action proposal generation with state-of-the-art methods with commonly used metrics.
Figure 8: Examples of action-proposal generation results on UCF-Sports. The bounding boxes with green and red color are the ground truth and the action proposal, respectively.
Figure 9: Examples of action-proposal generation results on UCF-101. The bounding boxes with green and red color are the ground truth and the action proposal, respectively.

A novel framework for action proposal generation in video has been presented in this paper. Given an unconstrained video clip as input, it generates spatio-temporally continuous action paths. The proposed approach is built upon actionness estimation leveraging both human appearance and motion cues on frame-level bounding boxes, which are produced by a Faster R-CNN network trained on augmented datasets. We then search for spatio-temporal action paths by linking, associating, and tracking the bounding boxes across frames. We formulate the association of action paths belonging to the same actor as a maximum set coverage problem and propose a greedy search algorithm to solve it. Experiments on two challenging datasets demonstrate that our approach produces more accurate action proposals with remarkably fewer proposals than the state-of-the-art approaches. Our experimental results suggest that the proposed approach is especially effective when the video clip contains only one actor. In the future, we will explore using CNNs for better actionness estimation of video frames, and consider recurrent neural networks for modeling the action paths for action recognition.


  • [1] Bergh, M., Roig, G., Boix, X., Manen, S., Gool, L.: Online video seeds for temporal window objectness. In: ICCV (2013)
  • [2] Brox, T., Malik, J.: Object segmentation by long term analysis of point trajectories. In: ECCV (2010)
  • [3] Gaidon, A., Harchaoui, Z., Schmid, C.: Temporal localization of actions with actoms. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 35(11), 2782–2795 (2013)
  • [4] Gkioxari, G., Malik, J.: Finding action tubes. In: CVPR (2015)
  • [5] Nemhauser, G.L., Wolsey, L.A., Fisher, M.L.: An analysis of approximations for maximizing submodular set functions I. Mathematical Programming 14, 265–294 (1978)
  • [6] Hare, S., Saffari, A., Torr, P.H.: Struck: Structured output tracking with kernels. In: ICCV (2011)
  • [7] Jain, M., Gemert, J., Jégou, H., Bouthemy, P., Snoek, C.: Action localization with tubelets from motion. In: CVPR (2014)
  • [8] Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)
  • [9] Kalal, Z., Mikolajczyk, K., Matas, J.: Tracking-learning-detection. IEEE Transactions on Pattern Analysis and Machine Intelligence(PAMI) 34(7), 1409–1422 (2012)
  • [10] Kläser, A., Marszałek, M., Schmid, C., Zisserman, A.: Human focused action localization in video. In: Trends and Topics in Computer Vision, pp. 219–233. Springer (2010)
  • [11] Laptev, I., Pérez, P.: Retrieving actions in movies. In: ICCV (2007)
  • [12] Ma, S., Zhang, J., Ikizler-Cinbis, N., Sclaroff, S.: Action recognition and localization by hierarchical space-time segments. In: ICCV. pp. 2744–2751 (2013)
  • [13] Oneata, D., Revaud, J., Verbeek, J., Schmid, C.: Spatio-temporal object detection proposals. In: ECCV (2014)
  • [14] Prest, A., Ferrari, V., Schmid, C.: Explicit modeling of human-object interactions in realistic videos. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 35(4), 835–848 (2013)
  • [15] Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: NIPS (2015)
  • [16] Rodriguez, M.D., Ahmed, J., Shah, M.: Action MACH: a spatio-temporal maximum average correlation height filter for action recognition. In: CVPR (2008)
  • [17] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV) 115(3), 211–252 (2015)
  • [18] Simonyan, K., Zisserman, A.: Two-stream convolutional networks for action recognition in videos. In: NIPS (2014)
  • [19] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  • [20] Siva, P., Xiang, T.: Weakly supervised action detection. In: BMVC. vol. 2, p. 6 (2011)
  • [21] Soomro, K., Zamir, A.R., Shah, M.: UCF101: A dataset of 101 human action classes from videos in the wild. CoRR (2012)
  • [22] Tran, D., Yuan, J., Forsyth, D.: Video event detection: From subvolume localization to spatiotemporal path search. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 36(2), 404–416 (2014)
  • [23] Uijlings, J.R., van de Sande, K.E., Gevers, T., Smeulders, A.W.: Selective search for object recognition. International Journal of computer vision (IJCV) 104(2), 154–171 (2013)
  • [24] Van Gemert, J.C., Jain, M., Gati, E., Snoek, C.G.: Apt: Action localization proposals from dense trajectories. In: BMVC (2015)
  • [25] Wang, H., Kläser, A., Schmid, C., Liu, C.L.: Action recognition by dense trajectories. In: CVPR (2011)
  • [26] Wang, H., Kläser, A., Schmid, C., Liu, C.L.: Dense trajectories and motion boundary descriptors for action recognition. International Journal of computer vision (IJCV) 103(1), 60–79 (2013)
  • [27] Wang, H., Oneata, D., Verbeek, J., Schmid, C.: A robust and efficient video representation for action recognition. International Journal of Computer Vision (IJCV) pp. 1–20 (2015)
  • [28] Wang, H., Schmid, C.: Action recognition with improved trajectories. In: ICCV (2013)
  • [29] Weinzaepfel, P., Harchaoui, Z., Schmid, C.: Learning to track for spatio-temporal action localization. In: ICCV (2015)
  • [30] Yu, G., Yuan, J.: Fast action proposals for human action detection and search. In: CVPR (2015)
  • [31] Zitnick, C.L., Dollár, P.: Edge boxes: Locating object proposals from edges. In: ECCV (2014)