Skeleton-based Activity Recognition with Local Order Preserving Match of Linear Patches

11/01/2018 · by Yaqiang Yao, et al. · USTC

Human activity recognition has drawn considerable attention recently in the field of computer vision due to the development of commodity depth cameras, by which the human activity is represented as a sequence of 3D skeleton postures. Assuming human body 3D joint locations of an activity lie on a manifold, the problem of recognizing human activity is formulated as the computation of activity manifold-manifold distance (AMMD). In this paper, we first design an efficient division method to decompose a manifold into ordered continuous maximal linear patches (CMLPs) that denote meaningful action snippets of the action sequence. Then the CMLP is represented by its position (average value of points) and the first principal component, which specify the major posture and main evolving direction of an action snippet, respectively. Finally, we compute the distance between CMLPs by taking both the posture and direction into consideration. Based on these preparations, an intuitive distance measure that preserves the local order of action snippets is proposed to compute AMMD. The performance on two benchmark datasets demonstrates the effectiveness of the proposed approach.




Introduction

In the computer vision and machine learning communities, human activity recognition has become one of the most appealing research topics [Vrigkas, Nikou, and Kakadiaris.2015, Xu et al.2017] due to its wide applications. Previous RGB-based work focused on extracting local space-time features from 2D images. Recently, with the introduction of real-time depth cameras and corresponding human skeleton extraction methods [Shotton et al.2013], studies of activity recognition have been greatly advanced in terms of both depth map-based methods [Rahmani et al.2016] and skeleton-based methods [Wang et al.2014]. In particular, [Yao et al.2011] verified that skeleton data alone can outperform other low-level image features for human activity recognition. The main reason is that 3D skeleton poses are invariant to viewpoint and appearance, so that activities vary less from actor to actor. Several specially designed descriptors, such as HOJ3D [Xia, Chen, and Aggarwal.2012], Cov3DJ [Hussein et al.2013], and HOD [Gowayyed et al.2013], exploit this property and achieve decent performance.

Related Work

Different from human posture recognition, the temporal relation between adjacent frames poses a challenge to the activity recognition task. The human body represented with a 3D skeleton can be viewed as an articulated system in which rigid segments are connected by joints; a human activity can then be treated as an evolution of the spatial configuration of these segments. Based on this perspective, [Gong, Medioni, and Zhao2014] addressed human activity recognition as a problem of structured time series classification. From the view of temporal dynamics modeling, existing methods for human activity recognition fall into two categories: state space models and recurrent neural networks (RNNs). State space models, including the linear dynamic system [Chaudhry et al.2013], the hidden Markov model [Lv and Nevatia2006, Wu and Shao2014], and the conditional restricted Boltzmann machine [Taylor, Hinton, and Roweis2007], treat the action sequence as an observed output produced by a Markov process whose hidden states model the dynamic patterns. In contrast, RNNs utilize their internal state (memory) instead of explicit dynamic patterns to process an action sequence [Du, Wang, and Wang.2015, Zhu et al.2016, Song et al.2017]. However, [Elgammal and Lee2004] showed that the geometric structure of an activity is not preserved by temporal relations alone and proposed to learn the representation of an activity with manifold embedding. In particular, the authors nonlinearly embedded activity manifolds into a low-dimensional space with LLE and found that the temporal relation of the input sequence was preserved to some extent in the obtained embedding.

Manifold-based representations and related algorithms have attracted much attention in image and video analysis. Taking the temporal dimension into consideration, [Wang and Suter2007] exploited locality preserving projections to project a sequence of moving silhouettes associated with an action video into a low-dimensional space. Modeling each image set with a manifold, [Wang et al.2012] formulated image-set classification for face recognition as a problem of calculating the manifold-manifold distance (MMD). The authors extracted maximal linear patches (MLPs) to form the nonlinear manifold and integrated the distances between pairs of MLPs to compute the MMD. Similar to image sets, where each set is composed of images of the same person under varying conditions, the 3D joint locations of an activity can be viewed as a nonlinear manifold embedded in a higher-dimensional space. However, in this case the MLP is not a proper decomposition for an activity manifold, since it may disorder the geometric structure of the action sequence.

Our Contributions

In this paper, we propose a new human activity recognition approach based on the manifold representation of 3D joint locations, integrating the advantages of temporal relation modeling with manifold embedding. Rather than modeling the dynamical patterns of the sequence explicitly, manifold learning methods preserve the local geometric properties of an activity sequence by embedding it into a low-dimensional space. In this way, a human activity is denoted as a series of ordered postures residing on a manifold embedded in a high-dimensional space. To construct the sequence of meaningful low-dimensional structures on an activity manifold, we design an efficient division method that decomposes an action sequence into ordered CMLPs based on the nonlinearity degree. Different from the division method proposed in [Yao et al.2018], which divides the action sequence into two sub-sequences at a time, our division algorithm is more flexible in that an action sequence can be divided into more than two sub-sequences according to a predefined threshold.

The CMLP corresponding to an action snippet is regarded as a local maximal linear subspace. Motivated by the Cov3DJ descriptor proposed in [Hussein et al.2013], we combine the major posture of an action snippet with the main direction of its evolution to represent this subspace. In particular, the major posture and main direction are computed as the mean of the joint locations and the first principal component of the corresponding covariance matrix, respectively. Based on the intuition that a reasonable distance measure between action snippets should take both the major posture distance (MPD) and the main direction distance (MDD) into consideration, we define the activity manifold-manifold distance (AMMD) through pairwise matching of adjacent action snippets in the reference and test activity manifolds, which preserves the local order of action snippets. Our approach is evaluated on two popular benchmark datasets, the KARD dataset [Gaglio, Re, and Morana.2015] and the Cornell Activity Dataset (CAD-60) [Sung et al.2012]. Experimental results show the effectiveness and competitiveness of the proposed approach in comparison with state-of-the-art methods.

In summary, the main contributions of this paper include three aspects:

  • We design an efficient division method to decompose an activity manifold into ordered continuous maximal linear patches (CMLPs) with sequential neighbors graph.

  • A reasonable distance measure between CMLPs that takes into account both the major posture and the main direction of an action snippet is defined.

  • Based on the distance between CMLPs, an activity manifold-manifold distance (AMMD) that incorporates the sequential property of action snippets is proposed to discriminate different activities.

The Proposed Approach

This section presents the proposed approach for human activity recognition. We first describe the algorithm for the construction of continuous maximal linear patch (CMLP), which decomposes an activity manifold into a sequence of CMLPs viewed as action snippets. Next, we represent CMLP with major posture and main direction, and propose the definition of the distance measure between CMLPs based on this representation. Finally, the activity manifold-manifold distance (AMMD) is computed to discriminate the different activities.

Figure 1: The illustration of continuous maximal linear patch (CMLP) construction. The division algorithm based on the nonlinearity score divides a sequence of postures into several CMLPs. The CMLPs in dotted local patches indicate action snippets.

Continuous Maximal Linear Patch

Local linear models on a manifold are linear patches whose linear perturbation is characterized by the deviation of the geodesic distances from the Euclidean distances between points. Here the Euclidean distance and the geodesic distance are computed with the ℓ2-norm and Dijkstra's algorithm, respectively. Dijkstra's algorithm operates on the nearest neighbors graph, in which each vertex is connected to its k nearest vertices under the Euclidean metric.

We extend the previous MLP [Wang et al.2012] to a new concept termed continuous maximal linear patch (CMLP). The aim of the construction algorithm is to guarantee that each CMLP contains only meaningful successive postures, so that it can be regarded as an action snippet. In view of the rational hypothesis that adjacent postures should be close to each other in the Euclidean metric, we define a sequential neighbors graph to compute the geodesic distance between postures in an action sequence as follows:

Definition 1.

Sequential neighbors graph: a graph in which each vertex is connected to its k previous and k next vertices in temporal order.

Formally, a human activity is a sequence of human postures P = [p_1, p_2, …, p_n], where p_i is a d-dimensional column vector (d is three times the number of human joints, stacking the 3D coordinates), and n is the number of postures. Assuming these postures lie on a low-dimensional manifold composed of several subspaces, we aim to construct from P a sequence of CMLPs C = {C_1, C_2, …, C_m}, where m is the total number of CMLPs and each action snippet C_i contains n_i postures.

An efficient division method based on the nonlinearity score is proposed to construct CMLPs. Initially, the current action snippet contains only the first posture, and we keep including the next posture into the current action snippet until its nonlinearity score exceeds a predefined threshold δ. The next action snippet, initialized as an empty set, continues this process. An illustration of the constructed CMLPs is presented in Figure 1. The nonlinearity score measuring the nonlinearity degree of a CMLP is defined as in [Wang et al.2012],


β(C_i) = (1 / n_i²) Σ_{j,k} r_{jk},  with  r_{jk} = d_G(p_j, p_k) / d_E(p_j, p_k),   (1)

where r_{jk} is the ratio of the geodesic distance d_G and the Euclidean distance d_E computed on the sequential neighbors graph. We average the ratios between each pair of postures p_j and p_k in C_i to obtain a robust measurement of the nonlinearity degree, and the computation of β(C_i) can be carried out efficiently.

1:  Input:    An activity sequence P = [p_1, p_2, …, p_n];    the nonlinearity degree threshold δ;    the number of sequential neighbors k.
2:  Output:    Local linear model sequence C = {C_1, C_2, …, C_m}.
3:  Initialization:    i ← 1, j ← 1, C ← ∅, C_j ← ∅;    Euclidean distance matrix D_E ← ∅;    geodesic distance matrix D_G ← ∅;    distance ratio matrix R ← ∅;
4:  while i ≤ n do
5:     Update C_j ← C_j ∪ {p_i};
6:     Expand D_E, D_G, R to include p_i;
7:     Compute the nonlinearity score β(C_j) with Eq. (1);
8:     if β(C_j) > δ then
9:        Update C ← C ∪ {C_j}, j ← j + 1;
10:       Reset C_j ← ∅, D_E ← ∅,
11:         D_G ← ∅, R ← ∅;
12:    end if
13:    Update i ← i + 1;
14:  end while
15:  if C_j ≠ ∅ then
16:     Update C ← C ∪ {C_j};
17:  end if
18:  return C;
Algorithm 1 Construction of Continuous Maximal Linear Patch (CMLP).

The improved CMLP not only inherits the ability of the MLP to span a maximal linear patch, but also preserves the intrinsic structure of successive postures, which implies the evolution of the corresponding human action snippet. The nonlinearity degree threshold δ controls the trade-off between the accuracy of the representation and the range of a CMLP: a smaller δ leads to a more accurate representation but a shorter range, and vice versa. Obviously, to make the algorithm applicable, δ should be set to a value larger than 1 (the ratio of geodesic to Euclidean distance is at least 1) to construct a meaningful CMLP sequence. The construction of CMLPs is summarized in Algorithm 1. The indices i and j indicate the current posture and the current CMLP, respectively. After initializing the distance matrices and the distance ratio matrix, we include the current posture in the current CMLP and compute its nonlinearity score. If the nonlinearity score is greater than the threshold δ, we obtain one CMLP and reset the distance matrices and the distance ratio matrix to their initial values; otherwise, the index moves on to the next posture. This procedure continues until the entire sequence is divided into several CMLPs.
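To make the division procedure concrete, the following sketch re-implements Algorithm 1 in NumPy under stated simplifications: postures are rows of an array, Floyd–Warshall stands in for Dijkstra (snippets are short), and the posture that triggers the threshold is assumed to start the next snippet; the authors' exact boundary handling may differ.

```python
import numpy as np

def nonlinearity_score(d_geo, d_euc):
    """Eq. (1): average ratio of geodesic to Euclidean distance over all
    posture pairs (assumes distinct postures, so d_euc > 0 off-diagonal)."""
    n = d_euc.shape[0]
    if n < 2:
        return 1.0
    mask = ~np.eye(n, dtype=bool)
    return float(np.mean(d_geo[mask] / d_euc[mask]))

def geodesic_distances(X, k):
    """Shortest-path distances on the sequential neighbors graph:
    frame t is connected to frames t-k .. t+k in temporal order."""
    n = len(X)
    d = np.full((n, n), np.inf)
    np.fill_diagonal(d, 0.0)
    for t in range(n):
        for s in range(max(0, t - k), min(n, t + k + 1)):
            d[t, s] = np.linalg.norm(X[t] - X[s])
    # Floyd-Warshall; fine for short snippets (the paper uses Dijkstra)
    for m in range(n):
        d = np.minimum(d, d[:, m:m + 1] + d[m:m + 1, :])
    return d

def construct_cmlps(X, delta, k):
    """Greedy division of a posture sequence into CMLPs (Algorithm 1 sketch)."""
    snippets, start = [], 0
    for end in range(1, len(X) + 1):
        seg = X[start:end]
        d_euc = np.linalg.norm(seg[:, None] - seg[None, :], axis=-1)
        d_geo = geodesic_distances(seg, k)
        if nonlinearity_score(d_geo, d_euc) > delta:
            snippets.append(X[start:end - 1])  # close snippet before p_end
            start = end - 1                    # p_end opens the next snippet
    if start < len(X):
        snippets.append(X[start:])
    return snippets
```

The snippets partition the input sequence in temporal order, so concatenating them recovers the original posture matrix.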

Distance Measure between CMLPs

An activity manifold is decomposed into ordered CMLPs, and each CMLP can be regarded as a linear patch spanned by the continuous postures. We represent a linear patch with its center and the first principal component of the covariance matrix, which specify the major posture and main direction of the evolution of an action snippet, respectively.

For a CMLP C_i denoted by a sequence of postures [p_1, p_2, …, p_{n_i}], the major posture u_i is the average over all postures in this CMLP,

u_i = (1 / n_i) Σ_{j=1}^{n_i} p_j,

where p_j is the j-th posture of the CMLP C_i. The sample covariance matrix can be obtained with the formula

S_i = (1 / (n_i − 1)) Σ_{j=1}^{n_i} (p_j − u_i)(p_j − u_i)ᵀ.

By performing eigen-decomposition on the symmetric matrix S_i, the covariance matrix can be factorized as

S_i = Q Λ Qᵀ,

where the diagonal matrix Λ contains the real eigenvalues of S_i on its diagonal elements, and Q is the orthogonal matrix whose columns are the eigenvectors of S_i corresponding to the eigenvalues in Λ. The eigenvector associated with the largest eigenvalue of S_i is denoted by v_i.
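The CMLP descriptor above (major posture and main direction) reduces to a mean and a top eigenvector; a minimal NumPy sketch, with `cmlp_descriptor` an illustrative helper name:

```python
import numpy as np

def cmlp_descriptor(C):
    """Represent a CMLP (an n_i x d matrix of postures, n_i >= 2) by its
    major posture (mean) and main direction (first principal component)."""
    u = C.mean(axis=0)
    S = np.cov(C, rowvar=False)        # d x d sample covariance
    vals, vecs = np.linalg.eigh(S)     # eigenvalues in ascending order
    v = vecs[:, -1]                    # eigenvector of the largest eigenvalue
    return u, v
```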

For the distance measure between two subspaces, the commonly used method is principal angles [Björck and Golub.1973], defined as the minimal angles between any two vectors of the subspaces. In particular, let U_1 and U_2 be subspaces of R^d with dimensions d_1 and d_2, respectively, and r = min(d_1, d_2). The l-th principal angle θ_l between U_1 and U_2 is defined recursively as follows,

cos θ_l = max_{x ∈ U_1} max_{y ∈ U_2} xᵀy,  s.t. ‖x‖ = ‖y‖ = 1, xᵀx_m = yᵀy_m = 0, m = 1, …, l − 1.

The vector pairs (x_l, y_l) are called the l-th principal vectors. Denoting the orthonormal bases of U_1 and U_2 by M_1 and M_2, respectively, the principal angles can be computed straightforwardly from the singular value decomposition of M_1ᵀM_2. Concretely, the cosine of the l-th principal angle is the l-th singular value of M_1ᵀM_2.
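This SVD-based computation of principal angles can be sketched as follows; `principal_angles` is an illustrative helper name, not from the paper:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the subspaces spanned by the columns of A
    and B: the cosines are the singular values of M1^T M2, where M1 and M2
    are orthonormal bases obtained here via QR."""
    M1, _ = np.linalg.qr(A)
    M2, _ = np.linalg.qr(B)
    s = np.linalg.svd(M1.T @ M2, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))
```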

Various subspace distance definitions have been proposed based on principal angles. For example, max correlation and min correlation are defined using the smallest and largest principal angles, respectively, while [Edelman, Arias, and Smith1999] employed all principal angles in their subspace distance. However, these definitions fail to reflect the difference in subspace positions, since principal angles only characterize the difference in direction variation. To derive a better distance measure between CMLPs, we take both the subspace position and the direction variation into consideration, measured by the major posture distance (MPD) and the main direction distance (MDD) between the corresponding action snippets, respectively. The MPD between two CMLPs C_i and C_j is defined via the cosine similarity of the major postures u_i and u_j,

d_MPD(C_i, C_j) = √(1 − (u_iᵀu_j / (‖u_i‖ ‖u_j‖))²).

In contrast to previous work that assigns weights to each eigenvector, in our case the MDD is simply defined as the sine distance between the first eigenvectors v_i and v_j of the two CMLPs C_i and C_j,

d_MDD(C_i, C_j) = √(1 − (v_iᵀv_j)²).

The employment of the sine distance in both MPD and MDD leads to our distance definition between CMLPs,

d(C_i, C_j) = d_MPD(C_i, C_j) + d_MDD(C_i, C_j).
This distance is then used as the basis for the following distance measure between action manifolds.
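Assuming the combined distance is the sum of the two sine distances (our reading of the definition above, since the exact combination is not recoverable here), the CMLP distance can be sketched as:

```python
import numpy as np

def sin_dist(a, b):
    """Sine distance between the directions of vectors a and b."""
    c = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.sqrt(max(0.0, 1.0 - c * c)))

def cmlp_distance(desc_i, desc_j):
    """Sum of major posture distance (MPD, on means u) and main direction
    distance (MDD, on first eigenvectors v); additive form is an assumption."""
    (u_i, v_i), (u_j, v_j) = desc_i, desc_j
    return sin_dist(u_i, u_j) + sin_dist(v_i, v_j)
```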

Figure 2: Illustration of the proposed pairwise distance between Continuous Maximal Linear Patches (CMLPs). M^r and M^t denote the reference and test action manifolds, respectively; C_i^r and C_j^t are the i-th and j-th CMLPs in the reference and test action manifolds. To preserve the local order of action snippets, we match each pair of two adjacent CMLPs between the reference action manifold and the test action manifold. The distance measures are indicated by double-sided arrows in different colors.

Activity Manifold-Manifold Distance

Given the reference and test activity manifolds denoted as M^r = {C_1^r, …, C_{m_r}^r} and M^t = {C_1^t, …, C_{m_t}^t}, respectively, where C_i^r and C_j^t are CMLPs, we aim to measure the activity manifold-manifold distance (AMMD) based on the distance between CMLPs. An intuitive definition of the manifold-to-manifold distance is proposed in [Wang et al.2012],

d(M^r, M^t) = Σ_i Σ_j w_{ij} d(C_i^r, C_j^t).

This definition integrates all pairwise subspace-to-subspace distances and poses a many-to-many matching problem. The difficulty is how to determine the weight w_{ij} between subspaces C_i^r and C_j^t. Although the earth mover's distance (the 1st Wasserstein distance) [Rubner, Tomasi, and Guibas2000] can be employed to compute w_{ij}, its computational complexity is too high. In practice, all weights are set to the equal constant 1/(m_r m_t).

In the scenario of face recognition with image sets (FRIS) [Wang et al.2012], the authors believed that the closest subspace pair deserves the most emphasis and defined the manifold-to-manifold distance as the distance of the closest subspace pair from the two manifolds,

d(M^r, M^t) = min_i min_j d(C_i^r, C_j^t).

In this case, the weight of the closest pair is set to 1 and all other weights are set to 0. This best-suited subspace distance is one of the most appropriate manifold-manifold distances for the FRIS problem. However, it cannot be applied to our activity recognition problem, since it ignores the temporal relationship between action snippets. To preserve the local order of action snippets in the distance definition, we propose to match each pair of two adjacent CMLPs from the test manifold to the reference manifold, and obtain the following distance,

d(M^r, M^t) = Σ_{j=1}^{m_t − 1} min_i [ d(C_i^r, C_j^t) + d(C_{i+1}^r, C_{j+1}^t) ].
As illustrated in Figure 2, for each adjacent CMLP pair extracted from the test manifold M^t, we find the most similar adjacent pair from the reference action manifold M^r, and the sum of all pairwise distances amounts to the AMMD. Afterward, the unknown activity is assigned to the class c* with the closest AMMD over all reference action classes,

c* = argmin_c d(M^{r_c}, M^t),

where d(M^{r_c}, M^t) is the distance between the c-th class reference action manifold and the test action manifold.
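The local-order-preserving matching and the final classification rule can be sketched as follows, with `d` any CMLP distance and each manifold given as an ordered list of CMLP descriptors (the helper names are ours):

```python
def ammd(ref, test, d):
    """Local-order-preserving manifold-manifold distance (sketch):
    each adjacent CMLP pair (j, j+1) in the test manifold is matched to
    the closest adjacent pair (i, i+1) in the reference manifold, and the
    matched pair distances are summed (manifolds need >= 2 CMLPs)."""
    total = 0.0
    for j in range(len(test) - 1):
        total += min(d(ref[i], test[j]) + d(ref[i + 1], test[j + 1])
                     for i in range(len(ref) - 1))
    return total

def classify(references, test, d):
    """Assign the test manifold to the class with the smallest AMMD;
    `references` maps a class label to its reference manifold."""
    return min(references, key=lambda c: ammd(references[c], test, d))
```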


Experiments

We study the performance of our approach on two popular benchmarks, the KARD dataset [Gaglio, Re, and Morana.2015] and the Cornell Activity Dataset (CAD-60) [Sung et al.2012]. Both record the locations of 15 joints for the participating subjects. In all experiments, the hyperparameters, i.e., the number of linked sequential neighbors k and the nonlinearity degree threshold δ, are selected with cross-validation.

KARD Dataset

Subset 1 | Subset 2 | Subset 3
Horizontal arm wave | High arm wave | Draw tick
Two-hand wave | Side kick | Drink
Bend | Catch cap | Sit down
Phone call | Draw tick | Phone call
Stand up | Hand clap | Take umbrella
Forward kick | Forward kick | Toss paper
Draw X | Bend | High throw
Walk | Sit down | Horizontal arm wave
Table 1: Subset segmentation of the KARD dataset. The Actions are indicated in bold font.

The KARD dataset contains 18 activities collected by Gaglio et al. [Gaglio, Re, and Morana.2015]. These activities comprise ten gestures and eight actions and are grouped into three subsets, as listed in Table 1. The sequences are collected from 10 different subjects, each performing every activity 3 times; thus, the dataset contains 540 skeleton sequences. Following previous work [Gaglio, Re, and Morana.2015], the KARD dataset is split under three different setups and two modalities. Specifically, the three experimental setups A, B, and C utilize one-third, two-thirds, and half of the samples for training, respectively, with the rest used for testing. The activities are split into five groups: Gestures, Actions, and Activity Sets 1, 2, and 3 (the three subsets). From subset 1 to subset 3, the activities become increasingly difficult to recognize due to the growing similarity between them. Note that Actions are more complex than Gestures.

Methods | Subset 1 (A/B/C) | Subset 2 (A/B/C) | Subset 3 (A/B/C) | Gestures (A/B/C) | Actions (A/B/C)
[Gaglio, Re, and Morana.2015] | 95.1/99.1/93.0 | 89.9/94.9/90.1 | 84.2/89.5/81.7 | 86.5/93.0/86.7 | 92.5/95.0/90.1
[Cippitelli et al.2016] | 98.0/99.0/97.7 | 99.8/100/99.6 | 91.6/95.8/93.3 | 89.9/95.9/93.7 | 99.0/99.9/99.1
The Proposed Approach | 100/100/100 | 99.9/100/99.8 | 97.6/98.0/96.8 | 99.6/99.8/99.9 | 97.6/98.1/96.9
Table 2: Accuracies (%) on the KARD dataset under the three experimental setups (A/B/C) of the five different splittings.
Methods | Accuracy (%)
[Gaglio, Re, and Morana.2015] | 84.8
[Cippitelli et al.2016] | 95.1
The Proposed Approach | 99.3
Table 3: Accuracies on the KARD dataset under the “new-person” setting.
Figure 3: Confusion matrix on the KARD dataset under the “new-person” setting.

All results on this dataset are obtained with a single fixed setting of the parameters k and δ. In consideration of the randomness in the dataset splitting procedure, we run each experimental setup 10 times and report the mean performance in Table 2. The proposed approach outperforms all other methods on four of the five subsets under all experimental setups, but narrowly loses to the method in [Cippitelli et al.2016] on the Actions subset. The reason is that the CMLP representation is a linear descriptor, which may fail to capture some nonlinear features of complex activities and is consequently unable to discriminate subtle differences between similar activities.

In addition, we perform the experiment in the “new-person” scenario, i.e., a leave-one-subject-out setting, in line with [Cippitelli et al.2016]. Table 3 presents the results of the proposed approach compared with the state-of-the-art methods. Our approach achieves the best result with an accuracy of 99.3%, which exceeds the second best result by 4.2%. Figure 3 illustrates the confusion matrix, which shows that the proposed approach classifies all activities correctly except for slight confusion between two similar activities. The reason is that the representations based on 3D joint locations are almost identical for these two activities, and the proposed approach is prone to confusing them given the limited information available to the linear descriptor. This confusion directly degrades the performance under the “Actions” setup in Table 2. We believe it would be sensible to explore the addition of RGB or depth image information in future work. In summary, the proposed approach achieves impressive performance on the above human activity recognition tasks in our current experimental setting.

Cornell Activity Dataset

Methods | Accuracy (%)
[Wang et al.2014] | 74.7
[Koppula, Gupta, and Saxena.2013] | 80.8
[Hu et al.2015] | 84.1
[Cippitelli et al.2016] | 93.9
The Proposed Approach | 99.6
Table 4: Accuracies on the CAD-60 dataset under the “cross-person” setting.
Figure 4: Confusion matrix on the CAD-60 dataset under the “cross-person” setting.

Cornell Activity Dataset 60 (CAD-60) [Sung et al.2012] is a human activity dataset comprising twelve unique activities. Four human subjects (two males and two females; one left-handed, the others right-handed) are asked to perform three or four common activities in five different environments: bathroom, bedroom, kitchen, living room, and office. The leave-one-person-out cross-validation setting is adopted as in [Wang et al.2014], so that the person in the training set does not appear in the testing set for each environment. To eliminate the influence of the left-handed subject, if the x-coordinate of the right hand is smaller than that of the left hand, we interchange the coordinates of the left and right hands, ankles, and shoulders, transforming the skeleton positions of left-handed persons to those of right-handed ones.
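The handedness normalization can be sketched as below; the joint indices are hypothetical placeholders, since the skeleton layout is not specified here:

```python
import numpy as np

# Hypothetical joint-index layout; the real CAD-60 indexing may differ.
SWAP_PAIRS = {"hand": (11, 12), "ankle": (13, 14), "shoulder": (3, 4)}

def normalize_handedness(skeleton):
    """Swap left/right hands, ankles and shoulders when the right hand's
    x-coordinate is smaller than the left hand's (left-handed subject).
    `skeleton` is an (n_joints, 3) array; a normalized copy is returned."""
    s = skeleton.copy()
    left, right = SWAP_PAIRS["hand"]
    if s[right, 0] < s[left, 0]:
        for l, r in SWAP_PAIRS.values():
            s[[l, r]] = s[[r, l]]  # fancy-indexed RHS copies, so this swaps
    return s
```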

Here, the number of sequential neighbors k and the nonlinearity degree threshold δ are set to fixed values chosen by cross-validation. The recognition performance, averaged over all possible splits (20 in total), is shown in Table 4. The proposed approach achieves an accuracy of 99.6%, outperforming the comparative methods. Figure 4 shows the confusion matrix obtained by our approach. It can be observed that the proposed approach classifies all the actions correctly except for minor confusion between two actions, which is probably caused by inaccurate human skeleton information. In conclusion, the appealing recognition results demonstrate that our approach can effectively capture the evolution of human activities based solely on human 3D joint locations.

Figure 5: The recognition accuracies of different distance measures on the KARD dataset with respect to the threshold δ under the “new-person” setting.

Parameter Analysis

There are two key parameters in our approach: the nonlinearity degree threshold δ and the number of linked sequential neighbors k. The nonlinearity degree threshold determines the granularity of continuous maximal linear patches, while the number of linked sequential neighbors quantifies the topology preservation in the computation of geodesic distance. To evaluate the sensitivity of the proposed approach with respect to these two parameters, we conduct experiments with different parameter values under the leave-one-subject-out setting on the KARD dataset.

We first fix the number of neighbors k and vary the nonlinearity degree threshold δ over a range of values. Figure 5 illustrates the corresponding performance. The relatively small gap between the worst and best results under each distance measure validates that the proposed approach is quite robust with respect to the value of δ. Generally speaking, a lower δ leads to better performance, since a smaller CMLP yields a more representative action snippet. We then vary the number of neighbors k while fixing the nonlinearity degree threshold δ. The result, illustrated in Figure 6, shows that the recognition accuracy increases with k when k is small; however, when k is large, the recognition accuracy declines as k increases further.

Overall, the best recognition accuracy is usually obtained with a small δ matched with a large k, or vice versa. In some sense, the two parameters are not totally independent in determining the final performance of our approach; they cooperate with each other to construct the most representative CMLPs.

Distance Measure Methods Comparison

Figure 6: The recognition accuracy of the proposed approach on the KARD dataset with respect to the number of sequential neighbors k under the “new-person” setting.

The proposed distance measure includes two parts: the major posture distance (MPD) and the main direction distance (MDD) between action snippets. Intuitively, MDD is more discriminative than MPD, since the evolution of the main direction is more important than the position of the subspace in the activity recognition problem; however, MPD is complementary to MDD to some extent. To demonstrate the strength of the proposed distance measure, we compare different combinations of the distance measures between CMLPs and the sequence matching algorithms.

The results are shown in Figure 5, in which dynamic time warping (DTW) is a template matching algorithm that calculates an optimal match between two given sequences under certain restrictions. The curve of MPD lies almost entirely above the curve of MDD, which indicates that the major posture feature is more discriminative than the main direction feature for recognition. As expected, MPD captures the major posture representation while MDD retains the ability to describe the evolution of an action snippet, so the combination of the two distance measures performs best. In general, the combination MPD+MDD+AMMD obtains the best results in most cases.


Conclusion

In this paper, we present a novel human activity recognition approach that utilizes a manifold representation of 3D joint locations. Considering that an activity is composed of several compact sub-sequences corresponding to meaningful action snippets, the 3D skeleton sequence is decomposed into ordered continuous maximal linear patches (CMLPs) on the activity manifold. The computation of the activity manifold-manifold distance (AMMD) preserves the local order of action snippets and is based on the pairwise distance between CMLPs, which takes into account the major posture and the main direction of action snippets. Experimental results show better performance of our approach in comparison with the state-of-the-art approaches. In practice, there often exist local temporal distortions and periodic patterns in action sequences. By viewing action snippets as samples from a probability distribution, we intend to introduce the Wasserstein metric to measure the distance between action snippets for activity recognition in future work.


References

  • [Björck and Golub.1973] Björck, Å., and Golub, G. 1973. Numerical methods for computing angles between linear subspaces. Mathematics of Computation.
  • [Chaudhry et al.2013] Chaudhry, R.; Ofli, F.; Kurillo, G.; Bajcsy, R.; and Vidal, R. 2013. Bio-inspired dynamic 3d discriminative skeletal features for human action recognition. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, 471–478.
  • [Cippitelli et al.2016] Cippitelli, E.; Gasparrini, S.; Gambi, E.; et al. 2016. A human activity recognition system using skeleton data from rgbd sensors. Computational Intelligence and Neuroscience.
  • [Du, Wang, and Wang.2015] Du, Y.; Wang, W.; and Wang., L. 2015. Hierarchical recurrent neural network for skeleton based action recognition. IEEE Conference on Computer Vision and Pattern Recognition 1110–1118.
  • [Edelman, Arias, and Smith1999] Edelman, A.; Arias, T. A.; and Smith, S. T. 1999. The geometry of algorithms with orthogonality constraints. SIAM Journal on Matrix Analysis and Applications 20(2):303–353.
  • [Elgammal and Lee2004] Elgammal, A., and Lee, C.-S. 2004. Inferring 3d body pose from silhouettes using activity manifold learning. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, II–II.
  • [Gaglio, Re, and Morana.2015] Gaglio, S.; Re, G.; and Morana., M. 2015. Human activity recognition process using 3-d posture data. IEEE Transactions on Human-Machine Systems 586–597.
  • [Gong, Medioni, and Zhao2014] Gong, D.; Medioni, G.; and Zhao, X. 2014. Structured time series analysis for human action segmentation and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(7):1414–1427.
  • [Gowayyed et al.2013] Gowayyed, M.; Torki, M.; Hussein, M.; and El-Saban., M. 2013. Histogram of oriented displacements (hod): Describing trajectories of human joints for action recognition. International Joint Conference on Artificial Intelligence.
  • [Hu et al.2015] Hu, J.; Zheng, W.; Lai, J.; and et al. 2015. Jointly learning heterogeneous features for rgb-d activity recognition. IEEE Conference on Computer Vision and Pattern Recognition 5344–5352.
  • [Hussein et al.2013] Hussein, M.; Torki, M.; Gowayyed, M.; and El-Saban., M. 2013. Human action recognition using a temporal hierarchy of covariance descriptors on 3d joint locations. International Joint Conference on Artificial Intelligence 2466–2472.
  • [Koppula, Gupta, and Saxena.2013] Koppula, H.; Gupta, R.; and Saxena., A. 2013. Learning human activities and object affordances from rgb-d videos. International Journal of Robotics Research 951–970.
  • [Lv and Nevatia2006] Lv, F., and Nevatia, R. 2006. Recognition and segmentation of 3-d human action using hmm and multi-class adaboost. In European Conference on Computer Vision, 359–372. Springer.
  • [Rahmani et al.2016] Rahmani, H.; Mahmood, A.; Huynh, D.; and et al. 2016. Histogram of oriented principal components for cross-view action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 2430–2443.
  • [Rubner, Tomasi, and Guibas2000] Rubner, Y.; Tomasi, C.; and Guibas, L. J. 2000. The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision 40(2):99–121.
  • [Shotton et al.2013] Shotton, J.; Sharp, T.; Kipman, A.; and et al. 2013. Real-time human pose recognition in parts from single depth images. Communications of the ACM 116–124.
  • [Song et al.2017] Song, S.; Lan, C.; Xing, J.; Zeng, W.; and Liu, J. 2017. An end-to-end spatio-temporal attention model for human action recognition from skeleton data. In AAAI, 4263–4270.
  • [Sung et al.2012] Sung, J.; Ponce, C.; Selman, B.; et al. 2012. Unstructured human activity detection from rgbd images. IEEE International Conference on Robotics and Automation 842–849.
  • [Taylor, Hinton, and Roweis2007] Taylor, G. W.; Hinton, G. E.; and Roweis, S. T. 2007. Modeling human motion using binary latent variables. In Advances in neural information processing systems, 1345–1352.
  • [Vrigkas, Nikou, and Kakadiaris.2015] Vrigkas, M.; Nikou, C.; and Kakadiaris., I. 2015. A review of human activity recognition methods. Frontiers in Robotics and AI.
  • [Wang and Suter2007] Wang, L., and Suter, D. 2007. Learning and matching of dynamic shape manifolds for human action recognition. IEEE Transactions on Image Processing 16(6):1646–1661.
  • [Wang et al.2012] Wang, R.; Shan, S.; Chen, X.; and et al. 2012. Manifold-manifold distance and its application to face recognition with image sets. IEEE Transactions on Image Processing 4466–4479.
  • [Wang et al.2014] Wang, J.; Liu, Z.; Wu, Y.; and Yuan., J. 2014. Learning actionlet ensemble for 3d human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 11–40.
  • [Wu and Shao2014] Wu, D., and Shao, L. 2014. Leveraging hierarchical parametric networks for skeletal joints based action segmentation and recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 724–731.
  • [Xia, Chen, and Aggarwal.2012] Xia, L.; Chen, C.; and Aggarwal., J. 2012. View invariant human action recognition using histograms of 3d joints. IEEE Conference on Computer Vision and Pattern Recognition Workshops 22–27.
  • [Xu et al.2017] Xu, W.; Miao, Z.; Zhang, X.-P.; and Tian, Y. 2017. Learning a hierarchical spatio-temporal model for human activity recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing, 1607–1611.
  • [Yao et al.2011] Yao, A.; Gall, J.; Fanelli, G.; and Van Gool, L. 2011. Does human action recognition benefit from pose estimation? British Machine Vision Conference.
  • [Yao et al.2018] Yao, Y.; Liu, Y.; Liu, Z.; and Chen, H. 2018. Human activity recognition with posture tendency descriptors on action snippets. IEEE Transactions on Big Data.
  • [Zhu et al.2016] Zhu, W.; Lan, C.; Xing, J.; Zeng, W.; Li, Y.; Shen, L.; Xie, X.; et al. 2016. Co-occurrence feature learning for skeleton based action recognition using regularized deep lstm networks. In AAAI, 3697–3703.