Recognition of human actions is an important step toward fully automatic understanding of dynamic scenes. Despite significant progress in recent years, action recognition remains a difficult challenge. Common problems stem from the strong variations of people and scenes in motion and appearance. Other factors include subtle differences between fine-grained actions, for example when manipulating small objects or when assessing the quality of sports actions.
The majority of recent methods recognize actions based on statistical representations of local motion descriptors [LMSR08, Schuldt2004, Wang2013]. These approaches are very successful in recognizing coarse actions (standing up, hand-shaking, dancing) in challenging scenes with camera motion, occlusions, multiple people, etc. Such global approaches, however, lack structure and may not be optimal for recognizing subtle variations, e.g. for distinguishing correct and incorrect golf swings or for recognizing the fine-grained cooking actions illustrated in Figure LABEL:fig:MPIIqual.
Fine-grained recognition in static images highlights the importance of spatial structure and spatial alignment as a pre-processing step. Examples include the alignment of faces for face recognition [Berg13] as well as the alignment of body parts for recognizing bird species [Duan12]. In analogy to this prior work, we believe action recognition will benefit from the spatial and temporal detection and alignment of human poses in videos. In fine-grained action recognition, this will, for example, allow us to better differentiate wash hands from wash objects actions.
In this work we design a new action descriptor based on human poses. Provided with tracks of body joints over time, our descriptor combines motion and appearance features for body parts. Given the recent success of Convolutional Neural Networks (CNN) [Krizhevsky, lecun98], we explore CNN features obtained separately for each body part in each frame. We use appearance- and motion-based CNN features computed for each track of body parts, and investigate different schemes of temporal aggregation. The extraction of the proposed Pose-based Convolutional Neural Network (P-CNN) features is illustrated in Figure 1.
Pose estimation in natural images is still a difficult task [Chen_NIPS14, Tompson14, yang2011articulated]. In this paper we investigate P-CNN features both for automatically estimated and for manually annotated human poses. We report experimental results for two challenging datasets: JHMDB [jhuang:hal-00906902], a subset of HMDB [Kuehne11] for which manual annotations of human pose have been provided by [jhuang:hal-00906902], as well as MPII Cooking Activities [rohrbach12cvpr], composed of a set of fine-grained cooking actions. On both datasets our method consistently outperforms the human pose-based descriptor HLPF [jhuang:hal-00906902]. Combining our method with dense trajectory features [Wang2013] improves the state of the art on both datasets.
The rest of the paper is organized as follows. Related work is discussed in Section 2. Section 3 introduces our P-CNN features. We summarize the state-of-the-art methods used and compared to in our experiments in Section 4 and present the datasets in Section 5. Section 6 evaluates our method and compares it to the state of the art. Section 7 concludes the paper. Our implementation of P-CNN features is available from [projectwebpage].
2 Related work
Action recognition in the last decade has been dominated by local features [LMSR08, Schuldt2004, Wang2013]. In particular, Dense Trajectory (DT) features [Wang2013] combined with Fisher Vector (FV) aggregation [perronnin2010improving] have recently shown outstanding results on a number of challenging benchmarks. We use IDT-FV [Wang2013] (the improved version of DT with FV encoding) as a strong baseline and experimentally demonstrate its complementarity to our method.
Recent advances in Convolutional Neural Networks (CNN) [lecun98] have resulted in significant progress in image classification [Krizhevsky] and other vision tasks [Girshick14, Taigman14, toshev2014deeppose]. In particular, the transfer of pre-trained network parameters to problems with limited training data has shown success e.g. in [Girshick14, Oquab14, simonyan2014two]. Application of CNNs to action recognition in video, however, has shown only limited improvements so far [simonyan2014two, Ng15]. We extend previous global CNN methods and address action recognition using CNN descriptors at the local level of human body parts.
Most recent methods for action recognition deploy global aggregation of local video descriptors. Such representations provide invariance to numerous variations in the video but may fail to capture important spatio-temporal structure. For fine-grained action recognition, previous methods have represented person-object interactions by jointly tracking hands and objects [ni2014multiple] or by linking object proposals [zhou2015interaction], followed by feature pooling in selected regions. Alternative methods represent actions using positions and the temporal evolution of body joints. While reliable human pose estimation is still a challenging task, the recent study [jhuang:hal-00906902] reports significant gains from dynamic human pose features in cases where reliable pose estimation is available. We extend the work of [jhuang:hal-00906902] and design a new CNN-based representation for human actions combining positions, appearance and motion of human body parts.
Our work also builds on methods for human pose estimation in images [pishchulin2013poselet, modec13, toshev2014deeppose, yang2011articulated] and video sequences [Cherian14, sapp2011cvpr]. In particular, we build on the method of [Cherian14] and extract temporally consistent tracks of body joints from video sequences. While our pose estimator is imperfect, we use it to derive CNN-based pose features that provide significant improvements for action recognition on two challenging datasets.
3 P-CNN: Pose-based CNN features
We believe that human pose is essential for action recognition. Here, we use positions of body joints to define informative image regions. We further borrow inspiration from [simonyan2014two] and represent body regions with motion-based and appearance-based CNN descriptors. Such descriptors are extracted at each frame and then aggregated over time to form a video descriptor, see Figure 1 for an overview. The details are explained below.
To construct P-CNN features, we first compute optical flow [brox2004high] for each consecutive pair of frames. The method [brox2004high] has relatively high speed and good accuracy, and has recently been used in other flow-based CNN approaches [actiontubes, simonyan2014two]. Following [actiontubes], the values of the motion field $v_x, v_y$ are transformed to the interval $[0, 255]$ by an affine mapping $\tilde{v} = a\,v + b$; values below $0$ and above $255$ are truncated. We save the transformed flow maps as images with three channels corresponding to the motion components $v_x$, $v_y$ and the flow magnitude.
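As an illustration of this step, the following sketch maps a flow field to a truncated three-channel image. It is a minimal re-implementation, not our actual code: the default constants a and b below are illustrative placeholders (our implementation follows [actiontubes]), and the same affine mapping is applied to the magnitude channel.

```python
import numpy as np

def flow_to_image(flow, a=16.0, b=128.0):
    """Map a flow field of shape (H, W, 2) to a 3-channel uint8 image whose
    channels are the rescaled x-flow, y-flow and flow magnitude."""
    vx, vy = flow[..., 0], flow[..., 1]
    mag = np.sqrt(vx ** 2 + vy ** 2)
    # affine rescaling followed by truncation to [0, 255]
    img = np.stack([a * vx + b, a * vy + b, a * mag + b], axis=-1)
    return np.clip(img, 0, 255).astype(np.uint8)

# usage on a random flow field (placeholder data)
flow = np.random.randn(240, 320, 2).astype(np.float32)
flow_img = flow_to_image(flow)
print(flow_img.shape, flow_img.dtype)  # (240, 320, 3) uint8
```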
Given a video frame and the corresponding positions of body joints, we crop RGB image patches and flow patches for the right hand, left hand, upper body, full body and full image, as illustrated in Figure 1. Each patch is resized to $224 \times 224$ pixels to match the CNN input layer. To represent appearance and motion patches, we use two distinct CNNs with an architecture similar to [Krizhevsky]. Both networks contain 5 convolutional and 3 fully-connected layers. The output of the second fully-connected layer, with $k = 4096$ values, is used as the frame descriptor $f^p_t$ for body part $p$ in frame $t$. For RGB patches we use the publicly available "VGG-f" network from [Chatfield14] that has been pre-trained on the ImageNet ILSVRC-2012 challenge dataset [Deng09imagenet:a]. For flow patches, we use the motion network provided by [actiontubes] that has been pre-trained for the action recognition task on the UCF101 dataset [soomro2012ucf101].
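A minimal sketch of the per-part descriptor extraction is given below, assuming joint positions are available in pixel coordinates and assuming a generic cnn_forward(patch) callable that returns the 4096-dimensional second fully-connected layer activations of either network. The joint names, box sizes and OpenCV-based resizing are illustrative assumptions, not the exact settings of our implementation.

```python
import numpy as np
import cv2  # used here only for resizing

def crop_square(img, center, size):
    """Crop a size x size box centered on `center` (x, y), clamped to the image borders."""
    h, w = img.shape[:2]
    x, y = int(center[0]), int(center[1])
    half = size // 2
    x0, y0 = max(0, x - half), max(0, y - half)
    x1, y1 = min(w, x + half), min(h, y + half)
    return img[y0:y1, x0:x1]

def part_descriptors(frame, joints, cnn_forward, part_size=120):
    """Return one 4096-d CNN descriptor per region (hands, upper body, full body, full image).
    `joints` is a dict of hypothetical joint names to (x, y) positions."""
    regions = {
        'right_hand': crop_square(frame, joints['right_wrist'], part_size),
        'left_hand':  crop_square(frame, joints['left_wrist'],  part_size),
        'upper_body': crop_square(frame, joints['neck'], 2 * part_size),
        'full_body':  crop_square(frame, joints['hip'],  3 * part_size),
        'full_image': frame,
    }
    feats = {}
    for name, patch in regions.items():
        patch = cv2.resize(patch, (224, 224))   # match the CNN input layer
        feats[name] = cnn_forward(patch)        # 4096-d fc activations
    return feats
```

The same function can be applied to the three-channel flow images with the motion network to obtain the flow-based part descriptors.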
Given frame descriptors $f^p_t$ for each part $p$ and each frame $t = 1, \dots, T$ of the video, we then aggregate them over all frames to obtain a fixed-length video descriptor. We consider min and max aggregation, computing the minimum and maximum value of each descriptor dimension $i$ over the video frames:

$m^p_i = \min_{1 \le t \le T} f^p_t(i), \qquad M^p_i = \max_{1 \le t \le T} f^p_t(i).$ (1)

The static video descriptor for part $p$ is defined as the concatenation of the time-aggregated frame descriptors:

$v^p_{stat} = [m^p_1, \dots, m^p_k, M^p_1, \dots, M^p_k]^\top.$ (2)

To capture the temporal evolution of per-frame descriptors, we also consider temporal differences of the form $\Delta f^p_t = f^p_{t+\Delta t} - f^p_t$ for a fixed offset of $\Delta t$ frames. Similar to (1), we compute minimum and maximum aggregations of $\Delta f^p_t$ and concatenate them into the dynamic video descriptor

$v^p_{dyn} = [\Delta m^p_1, \dots, \Delta m^p_k, \Delta M^p_1, \dots, \Delta M^p_k]^\top.$ (3)
Finally, video descriptors for motion and appearance, for all parts and aggregation schemes, are normalized and concatenated into the P-CNN feature vector. The normalization is performed by dividing each video descriptor by the average $L_2$-norm of the corresponding $v^p_{stat}$ computed on the training set.
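A sketch of the aggregation in (1)-(3) and of the normalization above is given next, assuming the per-frame descriptors of one part are stacked into a T x k matrix; the temporal offset delta_t is left as a free parameter here.

```python
import numpy as np

def aggregate_part(frames, delta_t=4):
    """frames: (T, k) array of per-frame CNN descriptors for one body part.
    Returns v_static and v_dynamic following (1)-(3): min/max over time of the
    descriptors and of their temporal differences."""
    v_stat = np.concatenate([frames.min(axis=0), frames.max(axis=0)])   # (2)
    diffs = frames[delta_t:] - frames[:-delta_t]                        # temporal differences
    v_dyn = np.concatenate([diffs.min(axis=0), diffs.max(axis=0)])      # (3)
    return v_stat, v_dyn

def normalize_by_train_norm(descriptors, train_static):
    """Divide descriptors by the average L2-norm of v_static on the training set."""
    scale = np.mean(np.linalg.norm(train_static, axis=1))
    return descriptors / scale

# usage: 30 frames of a hypothetical 4096-d per-frame descriptor
frames = np.random.rand(30, 4096)
v_stat, v_dyn = aggregate_part(frames)
print(v_stat.shape, v_dyn.shape)  # (8192,) (8192,)
```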
In Section 6 we evaluate the effect of different aggregation schemes as well as the contributions of motion and appearance features for action recognition. In particular, we compare "Max" vs. "Max/Min" aggregation, where "Max" corresponds to the use of maximum values $M^p_i$ only, while "Max/Min" stands for the concatenation of the min and max aggregations defined in (2) and (3). Mean and Max aggregation are widely used in CNN video representations; we choose Max-aggr, as it outperforms Mean-aggr (see Section 6). We also apply Min aggregation, which can be interpreted as a "non-detection" feature. Additionally, we follow the temporal evolution of CNN features in the video by looking at their dynamics (Dyn). Dynamic features are again aggregated using Min and Max to preserve their sign, keeping the largest negative and positive differences. The concatenation of static and dynamic descriptors is denoted by "Static+Dyn".
The final dimension of our P-CNN feature is $5 \times 4 \times 4096 \times 2 = 163{,}840$, i.e., 5 body parts, 4 aggregation schemes, and a 4K-dimensional CNN descriptor for each of appearance and motion. Note that this dimensionality is comparable to the size of the Fisher vectors [Chatfield11] used to encode dense trajectory features [Wang2013]. P-CNN training is performed using a linear SVM.
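To make the dimensionality and the final classification stage concrete, the sketch below computes the feature size from the components listed above and trains a linear SVM; scikit-learn's LinearSVC is one possible implementation, and the random data and default regularization are placeholders rather than the settings of our experiments.

```python
import numpy as np
from sklearn.svm import LinearSVC

parts, schemes, k = 5, 4, 4096
dim = parts * schemes * k * 2          # appearance + flow
print(dim)                             # 163840, i.e. about 160K values

# placeholder P-CNN features for 20 videos and 5 action classes
X = np.random.rand(20, dim).astype(np.float32)
y = np.random.randint(0, 5, size=20)
clf = LinearSVC().fit(X, y)
print(clf.score(X, y))
```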
4 State-of-the-art methods
In this section we present the state-of-the-art methods used and compared to in our experiments. We first present the approach for human pose estimation in videos [Cherian14] used in our experiments. We then present state-of-the-art high-level pose features (HLPF) [jhuang:hal-00906902] and improved dense trajectories [Wang2013].
4.1 Pose estimation
To compute P-CNN features as well as HLPF features, we need to detect and track human poses in videos. We have implemented a video pose estimator based on [Cherian14]. We first extract poses for individual frames using the state-of-the-art approach of Yang and Ramanan [yang2011articulated]. Their approach is based on a deformable part model to locate positions of body joints (head, elbow, wrist…). We re-train their model on the FLIC dataset [modec13].
Following [Cherian14], we extract a large set of pose configurations in each frame and link them over time using Dynamic Programming (DP). The poses selected with DP are constrained to have a high score of the pose estimator [yang2011articulated]. At the same time, the motion of joints in a pose sequence is constrained to be consistent with the optical flow extracted at joint positions. In contrast to [Cherian14] we do not perform limb recombination. See Figure 2 for examples of automatically extracted human poses.
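For illustration, the temporal linking can be viewed as a Viterbi-style dynamic program over per-frame pose candidates. The code below is a simplified sketch, not the method of [Cherian14]: the unary term is the pose-estimator score and pairwise_cost is a user-supplied penalty for joint motion that disagrees with the optical flow at the joint positions.

```python
import numpy as np

def link_poses(scores, pairwise_cost):
    """scores[t][i]: detector score of pose candidate i in frame t.
    pairwise_cost(t, i, j): flow-consistency cost between candidate i in frame t
    and candidate j in frame t+1. Returns the selected candidate index per frame."""
    T = len(scores)
    best = [np.asarray(scores[0], dtype=float)]
    back = []
    for t in range(1, T):
        prev = best[-1]
        cur = np.asarray(scores[t], dtype=float)
        n_prev, n_cur = len(prev), len(cur)
        # total[i, j]: best score of a sequence ending at candidate j of frame t via i
        cost = np.array([[pairwise_cost(t - 1, i, j) for j in range(n_cur)]
                         for i in range(n_prev)])
        total = prev[:, None] + cur[None, :] - cost
        back.append(total.argmax(axis=0))
        best.append(total.max(axis=0))
    # backtrack the highest-scoring candidate sequence
    path = [int(np.argmax(best[-1]))]
    for t in range(T - 2, -1, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# usage with random candidate scores and a dummy (zero) pairwise cost
scores = [np.random.rand(5).tolist() for _ in range(10)]
print(link_poses(scores, lambda t, i, j: 0.0))
```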
4.2 High-level pose features (HLPF)
High-level pose features (HLPF) encode spatial and temporal relations of body joint positions and were introduced in [jhuang:hal-00906902]. Given a sequence of human poses, the positions of body joints are first normalized with respect to the person size. Then, the relative offsets to the head are computed for each pose in the sequence. We have observed that the head is more reliable than the torso used in [jhuang:hal-00906902]. Static features are then the distances between all pairs of joints, the orientations of the vectors connecting pairs of joints, and the inner angles spanned by the vectors connecting all triplets of joints.
Dynamic features are obtained from trajectories of body joints. HLPF combines temporal differences of some of the static features, i.e., differences in distances between pairs of joints, differences in orientations of the lines connecting joint pairs, and differences in inner angles. Furthermore, translations of joint positions ($dx$ and $dy$) and their orientations ($\arctan(dy/dx)$) are added.
All features are quantized using a separate codebook for each feature dimension (descriptor type), constructed using $k$-means. A video sequence is then represented by a histogram of quantized features, and training is performed using an SVM with a $\chi^2$-kernel. More details can be found in [jhuang:hal-00906902]. To compute HLPF features we use the publicly available code with minor modifications, i.e., we consider the head instead of the torso center for relative positions. We have also found that converting angles, originally in degrees, to radians and $L_2$-normalizing the HLPF features improves the performance.
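The following sketch illustrates the flavor of HLPF: static distance/orientation features for one pose and per-dimension k-means quantization into histograms. The joint indexing, the number of clusters (20 here) and the per-dimension normalization are illustrative assumptions; see [jhuang:hal-00906902] for the exact definition.

```python
import numpy as np
from itertools import combinations
from scipy.cluster.vq import kmeans2, vq

def static_hlpf(pose):
    """pose: (J, 2) joint positions, already scale-normalized and head-centered.
    Returns distances and orientations for all joint pairs."""
    feats = []
    for i, j in combinations(range(len(pose)), 2):
        d = pose[j] - pose[i]
        feats.append(np.linalg.norm(d))        # pairwise distance
        feats.append(np.arctan2(d[1], d[0]))   # pairwise orientation (radians)
    return np.array(feats)

def quantize_videos(video_feats, n_words=20):
    """video_feats: list of (T_v, D) per-frame feature matrices.
    Builds one k-means codebook per feature dimension and returns per-video histograms."""
    all_frames = np.vstack(video_feats)
    D = all_frames.shape[1]
    hists = np.zeros((len(video_feats), D * n_words))
    for d in range(D):
        centers, _ = kmeans2(all_frames[:, d:d + 1], n_words, minit='++')
        for v, feats in enumerate(video_feats):
            words, _ = vq(feats[:, d:d + 1], centers)
            h = np.bincount(words, minlength=n_words).astype(float)
            hists[v, d * n_words:(d + 1) * n_words] = h / max(h.sum(), 1.0)
    return hists
```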
4.3 Dense trajectory features
Dense Trajectories (DT) [wang:2011:inria-00583818:1] are local video descriptors that have recently shown excellent performance in several action recognition benchmarks [oneata:hal-00873662, jourdt]. The method first densely samples points, which are tracked using optical flow [Farneback:2003:TME:1763974.1764031]. For each trajectory, 4 descriptors are computed in the aligned spatio-temporal volume: HOG [DT05], HOF [LMSR08] and MBH [Dalal:2006:HDU:2168483.2168522]. A recent approach [Wang2013] removes trajectories consistent with the camera motion, which is estimated by computing a homography from optical flow and SURF [Bay:2008:SRF:1370312.1370556] point matches using RANSAC [Fischler:1981:RSC:358669.358692]. Flow descriptors are then computed from optical flow warped according to the estimated homography. We use the publicly available implementation [Wang2013] to compute the improved version of DT (IDT).
Fisher Vector (FV) encoding [perronnin2010improving] has been shown to outperform the bag-of-words approach [Chatfield11], resulting in state-of-the-art performance for action recognition when combined with DT features [oneata:hal-00873662]. FV relies on a Gaussian mixture model (GMM), computing first and second order statistics of the local descriptors with respect to the GMM components. FV encoding is performed separately for the 4 IDT descriptor types (their dimensionality is first reduced using PCA). Following [perronnin2010improving], performance is improved by post-processing the FVs with signed square-rooting and $L_2$ normalization. As in [oneata:hal-00873662] we use a spatial pyramid representation. FV encoding is performed using the Yael library [douze:hal-01020695] and classification is performed with a linear SVM.
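For reference, a compact Fisher vector sketch under simplifying assumptions is given below: a diagonal-covariance GMM fitted with scikit-learn, first and second order statistics only (no weight gradients), followed by signed square-rooting and L2 normalization; spatial pyramids and the exact number of Gaussians used in our experiments are omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descs, gmm):
    """descs: (N, D) local descriptors; gmm: fitted diagonal GaussianMixture.
    Returns first and second order statistics w.r.t. the GMM (2*K*D values)."""
    q = gmm.predict_proba(descs)                           # (N, K) soft assignments
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    N = descs.shape[0]
    diff = (descs[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]
    g_mu = (q[:, :, None] * diff).sum(0) / (N * np.sqrt(w)[:, None])
    g_var = (q[:, :, None] * (diff ** 2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])
    fv = np.concatenate([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                 # signed square-rooting
    return fv / max(np.linalg.norm(fv), 1e-12)             # L2 normalization

# usage: fit the GMM on (PCA-reduced) training descriptors, then encode each video
train = np.random.rand(1000, 32)
gmm = GaussianMixture(n_components=16, covariance_type='diag').fit(train)
video_fv = fisher_vector(np.random.rand(200, 32), gmm)
print(video_fv.shape)  # (2 * 16 * 32,) = (1024,)
```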
[Table 1: per-part classification performance of appearance (App), optical flow (OF) and combined (App + OF) features for each body part and their combination.]
5 Datasets

In our experiments we use two datasets, JHMDB [jhuang:hal-00906902] and MPII Cooking Activities [rohrbach12cvpr], as well as two subsets of these datasets, sub-JHMDB and sub-MPII Cooking. We present them below.
JHMDB [jhuang:hal-00906902] is a subset of HMDB [Kuehne11], see Figure 2 (left). It contains 21 human actions, such as brush hair, climb, golf, run or sit. Video clips are restricted to the duration of the action. There are between 36 and 55 clips per action, for a total of 928 clips. Each clip contains between 15 and 40 frames. Human pose is annotated in each of the 31,838 frames. There are 3 train/test splits for the JHMDB dataset and evaluation averages the results over these three splits. The metric used is accuracy: each clip is assigned the action label corresponding to the maximum value among the scores returned by the action classifiers.
In our experiments we also use a subset of JHMDB, referred to as sub-JHMDB [jhuang:hal-00906902]. This subset includes 316 clips distributed over 12 actions in which the human body is fully visible. Again there are 3 train/test splits and the evaluation metric is accuracy.
MPII Cooking Activities [rohrbach12cvpr] contains 64 fine-grained actions and an additional background class, see Figure 2 (right). Actions take place in a kitchen with a static background. There are 5609 action clips. Some actions are very similar, such as cut dice, cut slices and cut stripes, or wash hands and wash objects; these activities are therefore qualified as "fine-grained". There are 7 train/test splits and the evaluation is reported in terms of mean Average Precision (mAP) using the code provided with the dataset.
We have also defined a subset of MPII Cooking, referred to as sub-MPII Cooking, with the classes wash hands and wash objects. We have selected these two classes as they are visually very similar and differ mainly in the manipulated objects. To analyze the classification performance for these two classes in detail, we have annotated human pose in all frames of sub-MPII Cooking.
6 Experimental results
This section describes our experimental results and examines the effect of different design choices. First, we evaluate the complementarity of different human parts in Section 6.1. We then compare different variants for aggregating CNN features in Section 6.2. Next, we analyze the robustness of our features to errors in the estimated pose and their ability to classify fine-grained actions in Section 6.3. Finally, we compare our features to the state of the art and show that they are complementary to the popular dense trajectory features in Section 6.4.
6.1 Performance of human part features
Table 1 compares the performance of human part CNN features for both appearance and flow on JHMDB-GT (the JHMDB dataset with ground-truth pose) and MPII Cooking-Pose [Cherian14] (the MPII Cooking dataset with pose estimated by [Cherian14]). Note that for MPII Cooking we detect upper-body poses only, since full bodies are not visible in most of the frames of this dataset.
Conclusions for both datasets are similar. We can observe that all human parts (hands, upper body, full body) as well as the full image have similar performance and that their combination improves the performance significantly. Removing one part at a time from this combination results in a drop in performance (results not shown here). We therefore use all pose parts together with the full image descriptor in the following evaluation. We can also observe that flow descriptors consistently outperform appearance descriptors by a significant margin, for all parts as well as for the overall combination All. Furthermore, the combination of appearance and flow further improves the performance for all parts, including their combination All. This is the pose representation used in the rest of the evaluation.
6.2 Aggregating P-CNN features
CNN features are first extracted for each frame, and temporal aggregation then pools feature values for each feature dimension over time (see Figure 1). Results of max-aggregation for JHMDB-GT are reported in Table 1 and compared with other aggregation schemes in Table 6.2. Table 6.2 shows the impact of adding min-aggregation (Max/Min-aggr) and first-order differences between CNN features (All-Dyn). Combining per-frame CNN features and their first-order differences using max- and min-aggregation further improves results. Overall, we obtain the best results with All-(Static+Dyn)(Max/Min-aggr) for App + OF on JHMDB-GT, improving over Max-aggr alone. On MPII Cooking-Pose [Cherian14] this version of P-CNN also improves over the max-aggregation result reported in Table 1 (see Table 6.2).
We have also experimented with second-order differences and other statistics, such as mean-aggregation (last row in Table 6.2), but this did not improve results. Furthermore, we have tried temporal aggregation of classification scores obtained for individual frames. This led to a decrease in performance: for All (App) on JHMDB-GT, max-aggregation of per-frame classification scores performs clearly worse than max-aggregation of features (top row, left column in Table 6.2). This indicates that early aggregation works significantly better in our setting.
In summary, if only one aggregation scheme is used, the best performance is obtained with Max-aggr on single-frame features. Adding Min-aggr and the first-order differences Dyn provides further improvement. In the remaining evaluation we report results for this version of P-CNN, i.e., All parts, App + OF, with (Static+Dyn)(Max/Min-aggr).
6.3 Robustness of pose-based features
This section examines the robustness of P-CNN features in the presence of pose estimation errors and compares results with the state-of-the-art pose features HLPF [jhuang:hal-00906902]. We report results using the code of [jhuang:hal-00906902] with the minor modifications described in Section 4.2. Our HLPF results are overall comparable to [jhuang:hal-00906902] and slightly better on JHMDB-GT. Table 6.2 evaluates the impact of automatic pose estimation versus ground-truth pose (GT) for sub-JHMDB and JHMDB. We can observe that results for GT poses are comparable on both datasets and for both types of pose features. However, P-CNN is significantly more robust to errors in pose estimation: when switching to automatically estimated poses, the drop in performance is much smaller for P-CNN than for HLPF on both sub-JHMDB and JHMDB. For both descriptors the drop is less pronounced on sub-JHMDB, as this subset only contains clips in which the human body is fully visible, making pose easier to estimate. Overall, the performance of P-CNN features for automatically estimated poses is excellent and outperforms HLPF by a very large margin on JHMDB.
6.4 Comparison to the state of the art
In this section we compare to state-of-the-art dense trajectory features [Wang2013] encoded by Fisher vectors [oneata:hal-00873662] (IDT-FV), briefly described in Section 4.3. We use the publicly available code, which we validated by reproducing results on Hollywood2 close to those reported in [Wang2013]. Furthermore, we show that our P-CNN pose features and IDT-FV are complementary, and compare to other state-of-the-art approaches on JHMDB and MPII Cooking.
Table 5 shows that for ground-truth poses our P-CNN features significantly outperform the state-of-the-art IDT-FV descriptor. If the pose is extracted automatically, both methods are on par. Furthermore, in all cases the combination of P-CNN and IDT-FV, obtained by late fusion of the individual classification scores, significantly improves over using either feature alone. Figure 3 illustrates per-class results for P-CNN and IDT-FV on JHMDB-GT.
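Late fusion here simply combines the per-class classifier scores of the two methods before taking the per-clip argmax. A minimal sketch follows, assuming score matrices are available for both methods; the equal weighting is an illustrative choice, not necessarily the one used in our experiments.

```python
import numpy as np

def late_fusion_accuracy(scores_pcnn, scores_idtfv, labels, w=0.5):
    """scores_*: (n_clips, n_classes) classifier scores; labels: (n_clips,) ground truth.
    Combines the two score matrices with weight w and reports clip accuracy."""
    fused = w * scores_pcnn + (1.0 - w) * scores_idtfv
    pred = fused.argmax(axis=1)
    return (pred == labels).mean()

# illustrative call with random scores for 10 clips and 21 JHMDB classes
rng = np.random.default_rng(0)
acc = late_fusion_accuracy(rng.random((10, 21)), rng.random((10, 21)),
                           rng.integers(0, 21, 10))
print(acc)
```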
Table 6 compares our results to other methods on MPII Cooking. Our approach outperforms the state of the art on this dataset and is on par with the recently published work of [zhou2015interaction]. We compared our method with HLPF [jhuang:hal-00906902] on JHMDB in the previous section: P-CNN performs on par with HLPF for GT poses and significantly outperforms HLPF for automatically estimated poses. Combining P-CNN with IDT-FV further improves the performance for both GT and automatically estimated poses (see Table 5) and outperforms the state-of-the-art result reported in [jhuang:hal-00906902].
Qualitative results comparing P-CNN and IDT-FV on JHMDB-GT are presented in Figure 6.2; see Figure 3 for the quantitative comparison. To highlight improvements achieved by the proposed P-CNN descriptor, we show results for classes with a large improvement of P-CNN over IDT-FV, such as shoot_gun, wave, throw and jump, as well as for a class with a significant drop, namely kick_ball. Figure 6.2 shows two examples for each selected action class with the maximum difference in ranks obtained by P-CNN (green) and IDT-FV (red). For example, for the most significant improvement (Figure 6.2, top left) the rank of the sample improves substantially when replacing IDT-FV by P-CNN. In particular, the shoot_gun and wave classes involve small localized motion, making classification difficult for IDT-FV, while P-CNN benefits from the local human body part information. Similarly, the two samples from the action class throw also exhibit restricted and localized motion, while the action jump is very short in time. In the case of kick_ball, the significant decrease can be explained by the strong dynamics of this action, which are better captured by IDT-FV features. Note that P-CNN only captures motion information between two consecutive frames.
Figure LABEL:fig:MPIIqual presents qualitative results for MPII Cooking-Pose [Cherian14] showing samples with the maximum difference in ranks over all classes.