Skeleton Based Human Action Recognition with Global Context-Aware Attention LSTM Networks

07/18/2017 · by Jun Liu, et al.

Human action recognition in 3D skeleton sequences has attracted a lot of research attention. Recently, Long Short-Term Memory (LSTM) networks have shown promising performance in this task due to their strengths in modeling the dependencies and dynamics in sequential data. As not all skeletal joints are informative for action recognition, and the irrelevant joints often bring noise which can degrade the performance, we need to pay more attention to the informative ones. However, the original LSTM network does not have explicit attention ability. In this paper, we propose a new class of LSTM network, Global Context-Aware Attention LSTM (GCA-LSTM), for skeleton based action recognition. This network is capable of selectively focusing on the informative joints in each frame of each skeleton sequence by using a global context memory cell. To further improve the attention capability of our network, we also introduce a recurrent attention mechanism, with which the attention performance of the network can be enhanced progressively. Moreover, we propose a stepwise training scheme in order to train our network effectively. Our approach achieves state-of-the-art performance on five challenging benchmark datasets for skeleton based action recognition.


I Introduction

Fig. 1: Skeleton based human action recognition with the Global Context-Aware Attention LSTM network. The first LSTM layer encodes the skeleton sequence and generates an initial global context representation for the action sequence. The second layer performs attention over the inputs by using the global context memory cell to achieve an attention representation for the sequence. Then the attention representation is used back to refine the global context. Multiple attention iterations are performed to refine the global context memory progressively. Finally, the refined global context information is utilized for classification.

Action recognition is a very important research problem owing to its relevance to a wide range of applications, such as video surveillance, patient monitoring, robotics, human-machine interaction, etc [1, 2, 3]. With the development of depth sensors, such as RealSense and Kinect [4, 5, 6], 3D skeleton based human action recognition has received much attention, and a lot of advanced methods have been proposed during the past few years [7, 8, 9, 10].

Human actions can be represented by a combination of the motions of skeletal joints in 3D space [11, 12]. However, this does not indicate all joints in the skeleton sequence are informative for action recognition. For instance, the hand joints’ motions are quite informative for the action clapping, while the movements of the foot joints are not. Different action sequences often have different informative joints, and in the same sequence, the informativeness degree of a body joint may also change over the frames. Thus, it is beneficial to selectively focus on the informative joints in each frame of the sequence, and try to ignore the features of the irrelevant ones, as the latter contribute very little for action recognition, and even bring noise which corrupts the performance [13]. This selectively focusing scheme can also be called attention, which has been demonstrated to be quite useful for various tasks, such as speech recognition [14], image caption generation [15], machine translation [16], and so on.

Long Short-Term Memory (LSTM) networks have strong power in handling sequential data [17]. They have been successfully applied to language modeling [18], RGB based video analysis [19, 20, 21, 22, 23, 24, 25, 26, 27], and also skeleton based action recognition [12, 28, 29]. However, the original LSTM does not have strong attention capability for action recognition. This limitation is mainly owing to LSTM’s restriction in perceiving the global context information of the video sequence, which is, however, often very important for the global classification problem – skeleton based action recognition.

In order to perform reliable attention over the skeletal joints, we need to assess the informativeness degree of each joint in each frame with regard to the global action sequence. This indicates that we need to have global contextual knowledge first. However, the available context information at each evolution step of LSTM is relatively local. In LSTM, the sequential data is fed to the network step by step, and the context information (hidden representation) of each step is fed to the next one. This implies that the available context at each step is the hidden representation from the previous step, which is quite local when compared to the global information.¹

¹ Though in LSTM the hidden representations of the later steps contain a wider range of context information than those of the initial steps, their context is still relatively local, as LSTM has trouble remembering information too far in the past [30].

In this paper, we extend the original LSTM model and propose a Global Context-Aware Attention LSTM (GCA-LSTM) network which has strong attention capability for skeleton based action recognition. In our method, the global context information is fed to all evolution steps of the GCA-LSTM. The network can therefore use it to measure the informativeness score of the new input at each step and adjust the attention weight accordingly: if a new input is informative with regard to the global action, the network takes advantage of more of its information at this step; on the contrary, if it is irrelevant, the network blocks the input at this step.

Our proposed GCA-LSTM network for skeleton based action recognition includes a global context memory cell and two LSTM layers, as illustrated in Fig. 1. The first LSTM layer is used to encode the skeleton sequence and initialize the global context memory cell. The representation of the global context memory is then fed to the second LSTM layer to assist the network in selectively focusing on the informative joints in each frame, and to generate an attention representation for the action sequence. The attention representation is then fed back to the global context memory cell in order to refine it. Moreover, we propose a recurrent attention mechanism for our GCA-LSTM network: since a refined global context memory is produced after each attention procedure, it can be fed to the second LSTM layer again to perform attention more reliably. We carry out multiple attention iterations to optimize the global context memory progressively. Finally, the refined global context is fed to the softmax classifier to predict the action class.

In addition, we also extend the aforementioned design of our GCA-LSTM network in this paper, and further propose a two-stream GCA-LSTM, which incorporates fine-grained (joint-level) attention and coarse-grained (body part-level) attention, in order to achieve more accurate action recognition results.

The contributions of this paper are summarized as follows:

  • A GCA-LSTM model is proposed, which retains the sequential modeling ability of the original LSTM, meanwhile promoting its selective attention capability by introducing a global context memory cell.

  • A recurrent attention mechanism is proposed, with which the attention performance of our network can be improved progressively.

  • A stepwise training scheme is proposed to more effectively train the network.

  • We further extend the design of our GCA-LSTM model, and propose a more powerful two-stream GCA-LSTM network.

  • The proposed end-to-end network yields state-of-the-art performance on the evaluated benchmark datasets.

This work is an extension of our preliminary conference paper [31]. Based on the previous version, we further propose a stepwise training scheme to train our network effectively and efficiently. Moreover, we extend our GCA-LSTM model and further propose a two-stream GCA-LSTM by leveraging fine-grained attention and coarse-grained attention. Besides, we extensively evaluate our method on more benchmark datasets. More empirical analysis of the proposed approach is also provided.

The rest of this paper is organized as follows. In Section II, we review the related works on skeleton based action recognition. In Section III, we introduce the proposed GCA-LSTM network. In Section IV, we introduce the two-stream attention framework. We provide the experimental results in Section V. Finally, we conclude the paper in Section VI.

II Related Work

In this section, we first briefly review the skeleton based action recognition methods which mainly focus on extracting hand-crafted features. We then introduce the RNN and LSTM based methods. Finally, we review the recent works on attention mechanism.

II-A Skeleton Based Action Recognition with Hand-crafted Features

In the past few years, different feature extractors and classifier learning methods for skeleton based action recognition have been proposed [32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44].

Chaudhry et al. [45] proposed to encode the skeleton sequences into spatial-temporal hierarchical models, and then use linear dynamical systems (LDSs) to learn the dynamic structures. Vemulapalli et al. [46] represented each action as a curve in a Lie group, and then utilized a support vector machine (SVM) to classify the actions. Xia et al. [47] proposed to model the temporal dynamics in action sequences with hidden Markov models (HMMs). Wang et al. [48, 49] introduced an actionlet ensemble representation to model the actions while capturing the intra-class variances. Chen et al. [50] designed a part-based 5D feature vector to explore the relevant joints of body parts in skeleton sequences. Koniusz et al. [51] introduced tensor representations for capturing the high-order relationships among body joints. Wang et al. [52] proposed a graph-based motion representation in conjunction with an SPGK-kernel SVM for skeleton based activity recognition. Zanfir et al. [53] developed a moving pose framework together with a modified k-NN classifier for low-latency action recognition.

II-B Skeleton Based Action Recognition with RNN and LSTM Models

Very recently, deep learning approaches, especially those based on recurrent neural networks (RNNs), have shown their strength in skeleton based action recognition. Our proposed GCA-LSTM network builds on the LSTM model, which is an extension of the RNN. We therefore review the RNN and LSTM based methods below, as they are most relevant to our method.

Du et al. [12] introduced a hierarchical RNN model to represent the human body structure and temporal dynamics of the joints. Veeriah et al. [54] proposed a differential gating scheme to make the LSTM network emphasize on the change of information. Zhu et al. [28] proposed a mixed-norm regularization method for the LSTM network in order to drive the model towards learning co-occurrence features of the skeletal joints. They also designed an in-depth dropout mechanism to effectively train the network. Shahroudy et al. [55] introduced a part-aware LSTM model to push the network towards learning long-term context representations of different body parts separately. Liu et al. [29, 56] designed a 2D Spatio-Temporal LSTM framework to concurrently explore the hidden sources of action related context information in both temporal and spatial domains. They also introduced a trust gate mechanism [29] to deal with the inaccurate 3D coordinates of skeletal joints provided by the depth sensors.

Besides action recognition, RNN and LSTM models have also been applied to skeleton based action forecasting [57] and detection [58, 57].

Different from the aforementioned RNN/LSTM based approaches, which do not explicitly consider the informativeness of each skeletal joint with regard to the global action sequence, our proposed GCA-LSTM network utilizes the global context information to perform attention over all the evolution steps of LSTM, selectively emphasizing the informative joints in each frame. It thereby generates an attention representation of the sequence, which can be used to improve the classification performance. Furthermore, a recurrent attention mechanism is proposed to iteratively optimize the attention performance.

II-C Attention Mechanism

Our approach is also related to the attention mechanism [14, 16, 59, 60, 61, 62, 63], which allows networks to selectively focus on specific information. Luong et al. [62] proposed a network with an attention mechanism for neural machine translation. Stollenga et al. [64] designed a deep attention selective network for image classification. Xu et al. [15] proposed to incorporate hard attention and soft attention for image caption generation. Yao et al. [65] introduced a temporal attention model for video caption generation.

Though a series of deep learning based models have been proposed for video analysis in existing works [66, 67], most of them did not consider the attention mechanism. Several works have explored attention, such as the methods in [60, 68, 69]. However, our method differs significantly from them in the following aspects. These works use the hidden state of the previous time step of LSTM, whose context information is quite local, to measure the attention scores for the next time step. For the global classification problem of action recognition, the global information is crucial for reliably evaluating the importance (informativeness) of each input. Therefore, we propose a global context memory cell for LSTM, which is utilized to measure the informativeness score of the input at each step. The informativeness score is then used as a gate (an informativeness gate, similar to the input gate and forget gate) inside the LSTM unit to adjust the contribution of the input data at each step when updating the memory cell. To the best of our knowledge, we are the first to introduce a global memory cell for the LSTM network to handle global classification problems. Moreover, a recurrent attention mechanism is proposed to iteratively promote the attention capability of our network, while the methods in [60, 68, 69] performed attention only once. In addition, a two-stream attention framework incorporating fine-grained attention and coarse-grained attention is also introduced. Owing to these new contributions, our proposed network yields state-of-the-art performance on the evaluated benchmark datasets.

III GCA-LSTM Network

In this section, we first briefly review the 2D Spatio-Temporal LSTM (ST-LSTM) as our base network. We then introduce our proposed Global Context-Aware Attention LSTM (GCA-LSTM) network in detail, which is able to selectively focus on the informative joints in each frame of the skeleton sequence by using global context information. Finally, we describe our approach to training our network effectively.

III-A Spatio-Temporal LSTM

In a generic skeleton based human action recognition problem, the 3D coordinates of the major body joints in each frame are provided. The spatial dependence of different joints in the same frame and the temporal dependence of the same joint among different frames are both crucial cues for skeleton based action analysis. Very recently, Liu et al. [29] proposed a 2D ST-LSTM network for skeleton based action recognition, which is capable of modeling the dependency structure and context information in both spatial and temporal domains simultaneously.

Fig. 2: Illustration of the ST-LSTM network [29]. In the spatial direction, the body joints in each frame are arranged as a chain and fed to the network as a sequence. In the temporal dimension, the body joints are fed over the frames.

As depicted in Fig. 2, in ST-LSTM model, the skeletal joints in a frame are arranged and fed as a chain (the spatial direction), and the corresponding joints over different frames are also fed in a sequence (the temporal direction).

Specifically, each ST-LSTM unit is fed with a new input x_{j,t} (the 3D location of joint j in frame t), the hidden representation h_{j,t-1} of the same joint at the previous time step, and also the hidden representation h_{j-1,t} of the previous joint in the same frame, where j and t denote the indices of joints and frames, respectively. The ST-LSTM unit has an input gate i_{j,t}, two forget gates corresponding to the two sources of context information (f^T_{j,t} for the temporal dimension, and f^S_{j,t} for the spatial domain), together with an output gate o_{j,t}.

The transition equations of ST-LSTM are formulated as presented in [29]:

(1)
\left( i_{j,t},\ f^S_{j,t},\ f^T_{j,t},\ o_{j,t} \right) = \sigma\!\left( W \left[ x_{j,t};\ h_{j-1,t};\ h_{j,t-1} \right] \right), \qquad u_{j,t} = \tanh\!\left( W_u \left[ x_{j,t};\ h_{j-1,t};\ h_{j,t-1} \right] \right)

(2)
c_{j,t} = i_{j,t} \odot u_{j,t} + f^S_{j,t} \odot c_{j-1,t} + f^T_{j,t} \odot c_{j,t-1}

(3)
h_{j,t} = o_{j,t} \odot \tanh\!\left( c_{j,t} \right)

where c_{j,t} and h_{j,t} denote the cell state and hidden representation of the unit at the spatio-temporal step (j, t), respectively, u_{j,t} is the modulated input, ⊙ denotes the element-wise product, and W is an affine transformation consisting of model parameters. Readers are referred to [29] for more details about the mechanism of ST-LSTM.
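To make the data flow concrete, one ST-LSTM step can be sketched in NumPy. The stacked weight layout and dimensions below are illustrative assumptions for this sketch, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def st_lstm_step(x, h_prev_t, h_prev_s, c_prev_t, c_prev_s, W, b):
    """One ST-LSTM step at spatio-temporal position (j, t).

    x                  : 3D coordinates of joint j in frame t
    h_prev_t, c_prev_t : hidden/cell state of the same joint, previous frame
    h_prev_s, c_prev_s : hidden/cell state of the previous joint, same frame
    W, b               : one stacked affine map producing all five pre-activations
    """
    d = h_prev_t.shape[0]
    z = W @ np.concatenate([x, h_prev_t, h_prev_s]) + b
    i   = sigmoid(z[0 * d:1 * d])   # input gate
    f_t = sigmoid(z[1 * d:2 * d])   # temporal forget gate
    f_s = sigmoid(z[2 * d:3 * d])   # spatial forget gate
    o   = sigmoid(z[3 * d:4 * d])   # output gate
    u   = np.tanh(z[4 * d:5 * d])   # modulated input
    c = i * u + f_t * c_prev_t + f_s * c_prev_s   # cell state update
    h = o * np.tanh(c)                            # hidden output
    return h, c
```

A full layer would sweep this step over all joints in a frame (spatial chain) and over all frames (temporal chain).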

III-B Global Context-Aware Attention LSTM

Fig. 3: Illustration of our GCA-LSTM network. Some arrows are omitted for clarity.

Several previous works [13, 50] have shown that in each action sequence, there is often a subset of informative joints which are important as they contribute much more to action analysis, while the remaining ones may be irrelevant (or even noisy) for this action. As a result, to obtain a high accuracy of action recognition, we need to identify the informative skeletal joints and concentrate more on their features, meanwhile trying to ignore the features of the irrelevant ones, i.e., selectively focusing (attention) on the informative joints is useful for human action recognition.

Human actions can be represented by a combination of skeletal joints' movements. In order to reliably identify the informative joints in an action instance, we can evaluate the informativeness score of each joint in each frame with regard to the global action sequence. To achieve this, we need to obtain the global context information first. However, the available context at each evolution step of LSTM is the hidden representation from the previous step, which is relatively local when compared to the global action.

To mitigate the aforementioned limitation, we propose to introduce a global context memory cell for the LSTM model, which keeps the global context information of the action sequence and can be fed to each step of LSTM to assist the attention procedure, as illustrated in Fig. 3. We call this new LSTM architecture the Global Context-Aware Attention LSTM (GCA-LSTM).

III-B1 Overview of the GCA-LSTM Network

We illustrate the proposed GCA-LSTM network for skeleton based action recognition in Fig. 3. Our GCA-LSTM network contains three major modules. The global context memory cell maintains an overall representation of the whole action sequence. The first ST-LSTM layer encodes the skeleton sequence, and initializes the global context memory cell. The second ST-LSTM layer performs attention over the inputs at all spatio-temporal steps to generate an attention representation of the action sequence, which is then used to refine the global context memory.

The input at the spatio-temporal step (j, t) of the first ST-LSTM layer is the 3D coordinates x_{j,t} of joint j in frame t. The inputs of the second layer are the hidden representations from the first layer.

Multiple attention iterations (recurrent attention) are performed in our network to refine the global context memory iteratively. Finally, the refined global context memory can be used for classification.

To facilitate our explanation, we use h'_{j,t} instead of h_{j,t} to denote the hidden representation at the step (j, t) in the first ST-LSTM layer, while the symbols defined in Section III-A, including i_{j,t}, f^S_{j,t}, f^T_{j,t}, o_{j,t}, c_{j,t}, and h_{j,t}, are utilized to represent the components in the second layer only.

III-B2 Initializing the Global Context Memory Cell

Our GCA-LSTM network performs attention by using the global context information; therefore, we need to obtain an initial global context memory first.

A feasible scheme is to utilize the outputs of the first layer to generate a global context representation. We can average the hidden representations at all spatio-temporal steps of the first layer to compute an initial global context memory cell F^{(0)} as follows:

(4)
F^{(0)} = \frac{1}{J T} \sum_{j=1}^{J} \sum_{t=1}^{T} h'_{j,t}

where J and T denote the numbers of joints and frames, respectively. We may also concatenate the hidden representations of the first layer and feed them to a feed-forward neural network, then use the resultant activation as F^{(0)}. We empirically observe that these two initialization schemes perform similarly.
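The averaging initialization is a single mean over the joint and frame axes. A minimal sketch, with illustrative shapes:

```python
import numpy as np

def init_global_context(H):
    """Initialize the global context memory by averaging the first
    layer's hidden representations over all spatio-temporal steps.
    H has shape (J, T, d): J joints, T frames, d hidden units."""
    return H.mean(axis=(0, 1))

# Example: 15 joints, 20 frames, 8 hidden units
H = np.random.default_rng(1).standard_normal((15, 20, 8))
F0 = init_global_context(H)   # shape (8,)
```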

III-B3 Performing Attention in the Second ST-LSTM Layer

By using the global context information, we evaluate the informativeness degree of the input at each spatio-temporal step in the second ST-LSTM layer.

In the n-th attention iteration, our network learns an informativeness score e^{(n)}_{j,t} for each input h'_{j,t} by feeding the input itself, together with the global context memory cell F^{(n-1)} generated by the previous attention iteration, to a scoring network as follows:

(5)
e^{(n)}_{j,t} = W_{e2} \tanh\!\left( W_{e1} \left[ h'_{j,t};\ F^{(n-1)} \right] \right)

(6)
r^{(n)}_{j,t} = \frac{\exp\!\left( e^{(n)}_{j,t} \right)}{\sum_{j'=1}^{J} \sum_{t'=1}^{T} \exp\!\left( e^{(n)}_{j',t'} \right)}

where r^{(n)}_{j,t} denotes the normalized informativeness score of the input at the step (j, t) in the n-th attention iteration, with regard to the global context information.

The informativeness score is then used as a gate of the ST-LSTM unit, which we call the informativeness gate. With the assistance of the learned informativeness gate, the cell state of the unit in the second ST-LSTM layer can be updated as:

(7)
c_{j,t} = r^{(n)}_{j,t} \cdot \left( i_{j,t} \odot u_{j,t} \right) + \left( 1 - r^{(n)}_{j,t} \right) \cdot \left( f^S_{j,t} \odot c_{j-1,t} + f^T_{j,t} \odot c_{j,t-1} \right)

The cell state updating scheme in Eq. (7) can be explained as follows: (1) if the input is informative (important) with regard to the global context representation, the learning algorithm updates the cell state of the second ST-LSTM layer by importing more of its information; (2) on the contrary, if the input is irrelevant, the network blocks the input gate at this step, relying more on the history information of the cell state.
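The gating described above can be sketched as follows. The tanh scoring network, the softmax normalization over all steps, and the exact blend of input and history are assumptions of this sketch; the paper's concrete formulation may differ in detail:

```python
import numpy as np

def informativeness_gates(H_in, F, W1, W2, w):
    """Score every input h'_{j,t} against the global context F and
    normalize the scores over all (j, t) steps.
    H_in: (J, T, d) inputs; F: (d,) global context;
    W1, W2: (m, d) projections; w: (m,) scoring vector."""
    e = np.einsum('k,jtk->jt', w, np.tanh(H_in @ W1.T + F @ W2.T))
    e = np.exp(e - e.max())          # numerically stable softmax
    return e / e.sum()

def gated_cell_update(r, i, u, f_s, c_prev_s, f_t, c_prev_t):
    """Blend the new input against the history according to the
    informativeness gate r (scalar in [0, 1])."""
    return r * (i * u) + (1.0 - r) * (f_s * c_prev_s + f_t * c_prev_t)
```

With r near 1 the update is dominated by the modulated input i ⊙ u; with r near 0 it falls back on the spatial and temporal history, matching the two cases described above.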

III-B4 Refining the Global Context Memory Cell

We perform attention by adopting the cell state updating scheme in Eq. (7), and thereby obtain an attention representation of the action sequence. Concretely, the output of the last spatio-temporal step in the second layer is used as the attention representation F^{(n)}_a for the action. Finally, the attention representation is fed to the global context memory cell to refine it, as illustrated in Fig. 3. The refinement is formulated as follows:

(8)
F^{(n)} = \mathrm{ReLU}\!\left( W^{(n)}_F \left[ F^{(n)}_a;\ F^{(n-1)} \right] \right)

where F^{(n)} is the refined version of F^{(n-1)}. Note that W^{(n)}_F is not shared over different iterations.

Multiple attention iterations (recurrent attention) are carried out in our GCA-LSTM network. Our motivation is that after we obtain a refined global context memory cell, we can use it to perform the attention again to more reliably identify the informative joints, and thus achieve a better attention representation, which can then be utilized to further refine the global context. After multiple iterations, the global context can be more discriminative for action classification.
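The recurrent attention procedure reduces to a simple iteration. In this sketch, attend and refine are hypothetical callables standing in for the second ST-LSTM layer and the refinement network:

```python
def recurrent_attention(F0, attend, refine, n_iters=2):
    """Iteratively refine the global context memory F.
    attend(F)        -> attention representation of the sequence
    refine(F_att, F) -> refined global context."""
    F = F0
    for _ in range(n_iters):
        F_att = attend(F)      # focus on informative joints given F
        F = refine(F_att, F)   # fold the attention result back into F
    return F
```

Each pass uses a better global context to attend more reliably, which in turn yields a better attention representation for the next refinement.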

III-B5 Classifier

The last refined global context memory cell F^{(N)} is fed to a softmax classifier to predict the class label:

(9)
\hat{y} = \mathrm{softmax}\!\left( W_c\, F^{(N)} + b_c \right)

The negative log-likelihood loss function [70] is adopted to measure the difference between the true class label y and the prediction result \hat{y}. The back-propagation algorithm is used to minimize the loss function. The details of the back-propagation process are described in Section III-C.
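The classifier and loss can be sketched as follows; the parameter names W_c and b_c are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max()               # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def predict(F, W_c, b_c):
    """Class probabilities from the refined global context F."""
    return softmax(W_c @ F + b_c)

def nll_loss(probs, label):
    """Negative log-likelihood of the true class."""
    return -np.log(probs[label])
```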

III-C Training the Network

Fig. 4: Illustration of the two network training methods. (a) Directly training the whole network. (b) Stepwise optimization of the network parameters. In this figure, the global context memory cell is unfolded over the attention iterations. The training step #n corresponds to the n-th attention iteration. The black and red arrows denote the forward and backward passes, respectively. Some passes, such as those between the two ST-LSTM layers, are omitted for clarity. Better viewed in colour.

In this part, we first briefly describe the basic training method, which directly optimizes the parameters of the whole network. We then propose a more advanced stepwise training scheme for our GCA-LSTM network.

III-C1 Directly Train the Whole Network

Since the classification is performed by using the last refined global context, to train such a network, it is natural and intuitive to feed the action label as the training output at the last attention iteration, and back-propagate the errors from the last step, i.e., directly optimize the whole network as shown in Fig. 4(a).

III-C2 Stepwise Training

Owing to the recurrent attention mechanism, there are frequent mutual interactions among different modules (the two ST-LSTM layers and the global context memory cell, see Fig. 3) in our network. Moreover, during the progress of multiple attention iterations, new parameters are also introduced. Due to these facts, it is rather difficult to simply optimize all parameters and all attention iterations of the whole network directly as mentioned above.

Therefore, we propose a stepwise training scheme for our GCA-LSTM network, which optimizes the model parameters incrementally. The details of this scheme are depicted in Fig. 4(b) and Algorithm 1.

1: Randomly initialize the parameters of the whole network with zero-mean Gaussian.
2: for n = 0 to N do   // n is the training step
3:     Feed the action label as the training output at the attention iteration #n.
4:     do
5:         Train an epoch: optimize the parameters used in the iterations #0 to #n via back-propagation.
6:     while the validation error is decreasing
7: end for
Algorithm 1 Stepwise training of the GCA-LSTM network.

The proposed stepwise training scheme is effective and efficient in optimizing the parameters and ensuring the convergence of the GCA-LSTM network. Specifically, at each training step #n, we only need to optimize the subset of parameters and modules that are used by the attention iterations #0 to #n. ²Note that #0 is not an attention iteration, but the process of initializing the global context memory cell F^{(0)}. To facilitate the explanation of the stepwise training, we here temporarily describe it as an attention iteration. Training this shrunken network is more effective and efficient than directly training the whole network. At the subsequent step #(n+1), a larger-scale network needs to be optimized. However, the training at this step is also very efficient, as most of the parameters and passes have already been optimized (pre-trained well) by the previous training steps.
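The control flow of this scheme can be sketched as follows. Here train_epoch and validate are hypothetical callbacks supplied by the training harness; the stepwise structure (supervise iteration #n, loop epochs while validation error decreases) mirrors Algorithm 1:

```python
def stepwise_train(train_epoch, validate, n_max):
    """Stepwise training: at step n, supervise attention iteration #n
    and optimize only the parameters used by iterations #0..#n,
    looping over epochs while the validation error keeps decreasing."""
    step_errors = []
    for n in range(n_max + 1):
        best = float('inf')
        while True:
            train_epoch(n)          # optimize parameters of iterations 0..n
            err = validate(n)
            if err >= best:         # validation error stopped decreasing
                break
            best = err
        step_errors.append(best)
    return step_errors
```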

IV Two-stream GCA-LSTM Network

Fig. 5: Illustration of the two-stream GCA-LSTM network, which incorporates fine-grained (joint-level) attention and coarse-grained (body part-level) attention. To perform coarse-grained attention, the joints in a skeleton are divided into five body parts, and all the joints from the same body part share the same informativeness score. In the second ST-LSTM layer for coarse-grained attention, we only show two body parts at each frame; the other body parts are omitted for clarity.

In the aforementioned design (Section III), the GCA-LSTM network performs action recognition by selectively focusing on the informative joints in each frame, i.e., the attention is carried out at the joint level (fine-grained attention). Besides fine-grained attention, coarse-grained attention can also contribute to action analysis, because some actions are often performed at the body part level. For these actions, all the joints from the same informative body part tend to have similar importance degrees. For example, the postures and motions of all the joints (elbow, wrist, palm, and fingers) of the right hand are important for recognizing the action salute in the NTU RGB+D dataset [55], i.e., we need to identify the informative body part "right hand" here. This implies that coarse-grained (body part-level) attention is also useful for action recognition.

As suggested by Du et al. [12], the human skeleton can be divided into five body parts (torso, left hand, right hand, left leg, and right leg) based on the human physical structure. These five parts are illustrated in the right part of Fig. 5. Therefore, we can measure the informativeness degree of each body part with regard to the action sequence, and then perform coarse-grained attention.

Specifically, we extend the design of our GCA-LSTM model and introduce a two-stream GCA-LSTM network, which jointly takes advantage of a fine-grained (joint-level) attention stream and a coarse-grained (body part-level) attention stream.

The architecture of the two-stream GCA-LSTM is illustrated in Fig. 5. In each attention stream, there is a global context memory cell to maintain the global attention representation of the action sequence, and also a second ST-LSTM layer to perform attention. This indicates we have two separate global context memory cells in the whole architecture: the fine-grained attention memory cell F^{(n)} and the coarse-grained attention memory cell \hat{F}^{(n)}. The first ST-LSTM layer, which is used to encode the skeleton sequence and initialize the global context memory cells, is shared by the two attention streams.

The process flow (including initialization, attention, and refinement) in the fine-grained attention stream is the same as the GCA-LSTM model introduced in Section III. The operation in the coarse-grained attention stream is also similar. The main difference is that, in the second layer, the coarse-grained attention stream performs attention by selectively focusing on the informative body parts in each frame.

Concretely, in the attention iteration n, the network learns an informativeness score \hat{e}^{(n)}_{P,t} for each body part P as:

(10)
\hat{e}^{(n)}_{P,t} = \hat{W}_{e2} \tanh\!\left( \hat{W}_{e1} \left[ h'_{P,t};\ \hat{F}^{(n-1)} \right] \right)

(11)
\hat{r}^{(n)}_{P,t} = \frac{\exp\!\left( \hat{e}^{(n)}_{P,t} \right)}{\sum_{P'} \sum_{t'=1}^{T} \exp\!\left( \hat{e}^{(n)}_{P',t'} \right)}

where h'_{P,t} is the representation of the body part P at frame t, which is calculated based on the hidden representations of all the joints that belong to P, with average pooling as:

(12)
h'_{P,t} = \frac{1}{|P|} \sum_{j \in P} h'_{j,t}

where |P| denotes the number of joints in body part P.
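The body-part pooling amounts to averaging the hidden states of each part's joints. The joint-to-part index grouping below is a hypothetical example, not the dataset's actual joint order:

```python
import numpy as np

# Hypothetical joint-index grouping for a 15-joint skeleton.
BODY_PARTS = {
    "torso": [0, 1, 2],       "left_hand": [3, 4, 5],
    "right_hand": [6, 7, 8],  "left_leg": [9, 10, 11],
    "right_leg": [12, 13, 14],
}

def part_representation(H_frame, part):
    """Average-pool the hidden states of the joints in one body part.
    H_frame: array of shape (J, d), hidden states for a single frame."""
    idx = BODY_PARTS[part]
    return H_frame[idx].mean(axis=0)
```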

To perform coarse-grained attention, we allow each joint in body part P to share the informativeness degree of P, i.e., at frame t, all the joints in P use the same informativeness score \hat{r}^{(n)}_{P,t}, as illustrated in Fig. 5. Hence, in the coarse-grained attention stream, if joint j belongs to body part P, the cell state of the second ST-LSTM layer is updated at the spatio-temporal step (j, t) in the same manner as in the fine-grained stream, with \hat{r}^{(n)}_{P,t} serving as the informativeness gate.

Multiple attention iterations are also performed in the proposed two-stream GCA-LSTM network. Finally, the refined fine-grained attention memory and coarse-grained attention memory are both fed to the softmax classifier, and the prediction scores of these two streams are averaged for action recognition.
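The late fusion of the two streams amounts to averaging their softmax scores, which can be sketched as:

```python
import numpy as np

def softmax(z):
    z = z - z.max()   # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def fuse_two_streams(logits_fine, logits_coarse):
    """Average the class probabilities of the fine-grained and
    coarse-grained attention streams, then take the argmax."""
    p = 0.5 * (softmax(logits_fine) + softmax(logits_coarse))
    return int(np.argmax(p)), p
```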

The proposed stepwise training scheme can also be applied to this two-stream GCA-LSTM network: at training step #n, we simultaneously optimize the two attention streams, both of which correspond to the n-th attention iteration.

V Experiments

We evaluate our proposed method on the NTU RGB+D [55], SYSU-3D [71], UT-Kinect [47], SBU-Kinect Interaction [72], and Berkeley MHAD [73] datasets. To investigate the effectiveness of our approach, we conduct extensive experiments with the following different network structures:

  • “ST-LSTM + Global (1)”. This network architecture is similar to the original two-layer ST-LSTM network in [29], except that the hidden representations at all spatio-temporal steps of the second layer are concatenated and fed to a one-layer feed-forward network to generate a global representation of the skeleton sequence, and the classification is performed on this global representation. In [29], by contrast, the classification is performed on the single hidden representation at each spatio-temporal step (a local representation).

  • “ST-LSTM + Global (2)”. This network structure is similar to the above “ST-LSTM + Global (1)”, except that the global representation is obtained by averaging the hidden representations of all spatio-temporal steps.

  • “GCA-LSTM”. This is the proposed Global Context-Aware Attention LSTM network. Two attention iterations are performed by this network. The classification is performed on the last refined global context memory cell. The two training methods (direct training and stepwise training) described in Section III-C are also evaluated for this network structure.

In addition, we also adopt the large scale NTU RGB+D and the challenging SYSU-3D as two major benchmark datasets to evaluate the proposed “two-stream GCA-LSTM” network.

We use the Torch7 framework [74] to perform our experiments. The stochastic gradient descent (SGD) algorithm is adopted to train our end-to-end network, with fixed settings of the learning rate, decay rate, and momentum. Dropout [75] is applied in our network, and the global context memory representation and the cell state of ST-LSTM are given the same dimension.

V-a Experiments on the NTU RGB+D Dataset

The NTU RGB+D dataset [55] was collected with Kinect (V2). It contains more than 56 thousand video samples. A total of 60 action classes were performed by 40 different subjects. To the best of our knowledge, this is the largest publicly available dataset for RGB+D based human action recognition. The large variations in subjects and viewpoints make this dataset quite challenging. There are two standard evaluation protocols for this dataset: (1) Cross subject (CS): 20 subjects are used for training, and the remaining subjects are used for testing; (2) Cross view (CV): two camera views are used for training, and one camera view is used for testing. To extensively evaluate the proposed method, both protocols are tested in our experiment.
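The cross-subject split can be realized by routing each sample to the training or testing set according to its performer ID. The sketch below assumes the dataset's usual sample-naming pattern, in which "P###" encodes the subject; the training-subject list is the one commonly used with the dataset release, but should be verified against [55].

```python
# Subject IDs conventionally used for cross-subject training on NTU RGB+D
# (assumed from the dataset release; verify against [55]).
TRAIN_SUBJECTS = {1, 2, 4, 5, 8, 9, 13, 14, 15, 16,
                  17, 18, 19, 25, 27, 28, 31, 34, 35, 38}

def subject_id(sample_name):
    """Parse the performer ID from a sample name, e.g. 'S001C002P003R001A001' -> 3."""
    i = sample_name.index('P')
    return int(sample_name[i + 1:i + 4])

def cross_subject_split(sample_names):
    """Route samples to train/test sets by performer ID."""
    train = [s for s in sample_names if subject_id(s) in TRAIN_SUBJECTS]
    test = [s for s in sample_names if subject_id(s) not in TRAIN_SUBJECTS]
    return train, test

train, test = cross_subject_split(["S001C002P001R001A001",
                                   "S001C002P003R001A001"])
```

The cross-view protocol works analogously, keying on the "C###" camera field instead of the performer field.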

We compare the proposed GCA-LSTM network with state-of-the-art approaches, as shown in TABLE I. We can observe that our proposed GCA-LSTM model outperforms the other skeleton-based methods. Specifically, our GCA-LSTM network outperforms the original ST-LSTM network in [29] by 6.9% with the cross subject protocol, and 6.3% with the cross view protocol. This demonstrates that the attention mechanism in our network brings significant performance improvement.

Both “ST-LSTM + Global (1)” and “ST-LSTM + Global (2)” perform classification on the global representations, thus they achieve slightly better performance than the original ST-LSTM [29] which performs classification on local representations. We also observe that “ST-LSTM + Global (1)” and “ST-LSTM + Global (2)” perform similarly.

The results in TABLE I also show that using the stepwise training method can improve the performance of our network in contrast to using the direct training method.

Method CS CV
Skeletal Quads [76] 38.6% 41.4%
Lie Group [46] 50.1% 52.8%
Dynamic Skeletons [71] 60.2% 65.2%
HBRNN [12] 59.1% 64.0%
Deep RNN [55] 56.3% 64.1%
Deep LSTM [55] 60.7% 67.3%
Part-aware LSTM [55] 62.9% 70.3%
JTM CNN [77] 73.4% 75.2%
STA Model [68] 73.4% 81.2%
SkeletonNet [78] 75.9% 81.2%
Visualization CNN [79] 76.0% 82.6%
ST-LSTM [29] 69.2% 77.7%
ST-LSTM + Global (1) 70.5% 79.5%
ST-LSTM + Global (2) 70.7% 79.4%
GCA-LSTM (direct training) 74.3% 82.8%
GCA-LSTM (stepwise training) 76.1% 84.0%
TABLE I: Experimental results on the NTU RGB+D dataset.

We also evaluate the performance of the two-stream GCA-LSTM network, and report the results in TABLE II. The results show that by incorporating both fine-grained and coarse-grained attention, the proposed two-stream GCA-LSTM network achieves better performance than the GCA-LSTM with fine-grained attention only. We also observe that the performance of the two-stream GCA-LSTM can be further improved with the stepwise training method.

Method CS CV
GCA-LSTM (coarse-grained only) 74.1% 81.6%
GCA-LSTM (fine-grained only) 74.3% 82.8%
Two-stream GCA-LSTM 76.2% 84.7%
Two-stream GCA-LSTM with stepwise training 77.1% 85.1%
TABLE II: Performance of the two-stream GCA-LSTM network on the NTU RGB+D dataset.

V-B Experiments on the SYSU-3D Dataset

The SYSU-3D dataset [71], which contains 480 skeleton sequences, was collected with Kinect. This dataset includes 12 action classes which were performed by 40 subjects. The SYSU-3D dataset is very challenging as the motion patterns are quite similar among different action classes, and there are lots of viewpoint variations in this dataset.

We follow the standard cross-validation protocol in [71] on this dataset, in which 20 subjects are adopted for training the network, and the remaining subjects are kept for testing. We report the experimental results in TABLE III. We can observe that our GCA-LSTM network surpasses the state-of-the-art skeleton-based methods in [80, 71, 29], which demonstrates the effectiveness of our approach in handling the task of action recognition in skeleton sequences. The results also show that our proposed stepwise training scheme is useful for our network.

Method Accuracy
LAFF (SKL) [80] 54.2%
Dynamic Skeletons [71] 75.5%
ST-LSTM [56] 76.5%
ST-LSTM + Global (1) 76.8%
ST-LSTM + Global (2) 76.6%
GCA-LSTM (direct training) 77.8%
GCA-LSTM (stepwise training) 78.6%
TABLE III: Experimental results on the SYSU-3D dataset.

Using this challenging dataset, we also evaluate the performance of the two-stream attention model. The results in TABLE IV show that the two-stream GCA-LSTM network is effective for action recognition.

Method Accuracy
GCA-LSTM (coarse-grained only) 76.9%
GCA-LSTM (fine-grained only) 77.8%
Two-stream GCA-LSTM 78.8%
Two-stream GCA-LSTM with stepwise training 79.1%
TABLE IV: Performance of the two-stream GCA-LSTM network on the SYSU-3D dataset.

V-C Experiments on the UT-Kinect Dataset

The UT-Kinect dataset [47] was recorded with a stationary Kinect. The skeleton sequences in this dataset are quite noisy. A total of 10 action classes were performed by 10 subjects, and each action was performed by the same subject twice.

We follow the standard leave-one-out-cross-validation protocol in [47] to evaluate our method on this dataset. Our approach yields state-of-the-art performance on this dataset, as shown in TABLE V.

Method Accuracy
Grassmann Manifold [81] 88.5%
Histogram of 3D Joints [47] 90.9%
Riemannian Manifold [82] 91.5%
Key-Pose-Motifs Mining [83] 93.5%
Action-Snippets and Activated Simplices [84] 96.5%
ST-LSTM [29] 97.0%
ST-LSTM + Global (1) 97.0%
ST-LSTM + Global (2) 97.5%
GCA-LSTM (direct training) 98.5%
GCA-LSTM (stepwise training) 99.0%
TABLE V: Experimental results on the UT-Kinect dataset.
Standard deviation (σ) of noise
Accuracy 100% 99.3% 98.5% 97.5% 95.6% 92.7% 80.4% 61.5%
TABLE VI: Evaluation of robustness against the input noise. Gaussian noise is added to the 3D coordinates of the skeletal joints.
 #Attention Iteration  NTU RGB+D (CS)  NTU RGB+D (CV)       UT-Kinect       SYSU-3D   Berkeley MHAD
1 72.9% 81.8% 98.0% 77.8% 100%
2 76.1% 84.0% 99.0% 78.6% 100%
TABLE VII: Performance comparison of different attention iteration numbers.
 #Attention Iteration (a) (b) (c) (d)
(a) w/o sharing within iteration, w/ sharing cross iterations; (b) w/ sharing within iteration, w/ sharing cross iterations; (c) w/o sharing within iteration, w/o sharing cross iterations; (d) w/ sharing within iteration, w/o sharing cross iterations.
1 71.0% 72.9% 71.0% 72.9%
2 73.0% 74.3% 73.4% 76.1%
3 73.1% 74.4% 69.3% 73.2%
TABLE VIII: Performance comparison of different parameter sharing schemes.

V-D Experiments on the SBU-Kinect Interaction Dataset

Method Accuracy
Yun et al. [72] 80.3%
CHARM [85] 83.9%
Ji et al. [86] 86.9%
HBRNN [12] 80.4%
Deep LSTM [28] 86.0%
Co-occurrence LSTM [28] 90.4%
SkeletonNet [78] 93.5%
ST-LSTM [29] 93.3%
GCA-LSTM (direct training) 94.1%
GCA-LSTM (stepwise training) 94.9%
TABLE IX: Experimental results on the SBU-Kinect Interaction dataset.

The SBU-Kinect Interaction dataset [72] includes 8 action classes for two-person interaction recognition. It contains 282 sequences corresponding to 6822 frames. The SBU-Kinect Interaction dataset is challenging because of (1) the relatively low accuracy of the skeletal joint coordinates recorded by Kinect, and (2) the complicated interactions between the two persons in many action sequences.

We perform 5-fold cross-validation evaluation on this dataset by following the standard protocol in [72]. The experimental results are depicted in TABLE IX. In this table, HBRNN [12], Deep LSTM [28], Co-occurrence LSTM [28], and ST-LSTM [29] are all LSTM based models for action recognition in skeleton sequences, and are very relevant to our network. We can see that the proposed GCA-LSTM network achieves the best performance among all of these methods.

V-E Experiments on the Berkeley MHAD Dataset

The Berkeley MHAD dataset was recorded by using a motion capture system. It contains 659 sequences and 11 action classes, which were performed by 12 different subjects.

We adopt the standard experimental protocol on this dataset, in which 7 subjects are used for training and the remaining 5 subjects are held out for testing. The results in TABLE X show that our method achieves very high accuracy (100%) on this dataset.

Method Accuracy
Ofli et al. [43] 95.4%
Vantigodi et al. [87] 96.1%
Vantigodi et al. [88] 97.6%
Kapsouras et al. [89] 98.2%
ST-LSTM [29] 100%
GCA-LSTM (direct training) 100%
GCA-LSTM (stepwise training) 100%
TABLE X: Experimental results on the Berkeley MHAD dataset.

As the Berkeley MHAD dataset was collected with a motion capture system rather than a Kinect, the coordinates of the skeletal joints are relatively accurate. To evaluate the robustness of our GCA-LSTM network with regard to input noise, we also investigate its performance on this dataset after adding zero-mean Gaussian noise to the skeleton sequences, and show the results in TABLE VI. We can see that even when noise with a large standard deviation (σ) is added (significant noise relative to the scale of the human body), the accuracy of our method remains high. This demonstrates that our method is quite robust against input noise.
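The noise-injection procedure described above can be sketched in a few lines. This assumes skeleton sequences stored as arrays of 3D joint coordinates; the shapes and the σ value are illustrative only, not the paper's settings.

```python
import numpy as np

def add_joint_noise(skeleton_seq, sigma, rng):
    """Add zero-mean Gaussian noise to every 3D joint coordinate.

    skeleton_seq: (num_frames, num_joints, 3) array of joint positions.
    sigma: standard deviation of the noise, in the same units as the coordinates.
    """
    noise = rng.normal(loc=0.0, scale=sigma, size=skeleton_seq.shape)
    return skeleton_seq + noise

rng = np.random.default_rng(0)
seq = np.zeros((10, 25, 3))               # toy sequence: 10 frames, 25 joints
noisy = add_joint_noise(seq, sigma=0.1, rng=rng)
```

Sweeping σ over a range of values and re-evaluating the trained model at each level reproduces a robustness curve of the kind summarized in TABLE VI.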

V-F Evaluation of Attention Iteration Numbers

We also test the effect of the number of attention iterations on our GCA-LSTM network, and show the results in TABLE VII. We can observe that increasing the iteration number strengthens the classification performance of our network (using 2 iterations obtains higher accuracies than using only 1 iteration). This demonstrates that our proposed recurrent attention mechanism is useful for the GCA-LSTM network.

Specifically, we also evaluate the performance of 3 attention iterations on the large scale NTU RGB+D dataset, and the results are shown in TABLE VIII. We find the performance of 3 attention iterations is slightly better than that of 2 iterations if we share the parameters over different attention iterations (see columns (a) and (b) in TABLE VIII). This consistently shows that using multiple attention iterations can improve the performance of our network progressively. We do not try more iterations due to GPU memory limitations.

We also find that if we do not share the parameters over different attention iterations (see columns (c) and (d) in TABLE VIII), then too many iterations can bring performance degradation (the performance of 3 iterations is worse than that of 2 iterations). In our experiments, we observe this performance degradation is caused by over-fitting (increasing the iteration number introduces new parameters if we do not share parameters). But the performance of two iterations is still significantly better than one iteration in this case. A detailed experimental analysis of the parameter sharing schemes is given in Section V-G.

V-G Evaluation of Parameter Sharing Schemes

As formulated in Eq. (5), a set of model parameters is introduced for calculating the informativeness score at each spatio-temporal step in the second layer, and multiple attention iterations are carried out in this layer. To regularize the number of parameters inside our network and improve its generalization capability, we investigate two parameter sharing strategies: (1) Sharing within iteration: the parameters are shared by all spatio-temporal steps in the same attention iteration; (2) Sharing cross iterations: the parameters are shared over different attention iterations. We investigate the effect of these two parameter sharing strategies on our GCA-LSTM network, and report the results in TABLE VIII.
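A back-of-the-envelope way to see what the two strategies do to the parameter budget: if the fully unshared case keeps one attention weight matrix per (spatio-temporal step, iteration) pair, then each sharing strategy collapses one of the two factors. This toy counter is our own illustration, not the paper's formulation.

```python
def num_attention_matrices(steps, iterations,
                           share_within_iteration, share_cross_iterations):
    """Count attention weight matrices under the two sharing strategies.

    steps: number of spatio-temporal steps (e.g. joints x frames).
    iterations: number of attention iterations.
    """
    per_iteration = 1 if share_within_iteration else steps
    copies = 1 if share_cross_iterations else iterations
    return per_iteration * copies

steps, iterations = 25 * 100, 2   # e.g. 25 joints x 100 frames, 2 iterations
shared_both = num_attention_matrices(steps, iterations, True, True)
shared_within = num_attention_matrices(steps, iterations, True, False)
unshared = num_attention_matrices(steps, iterations, False, False)
```

Sharing within each iteration removes the dominant `steps` factor, which is why it is the more important of the two strategies for controlling over-fitting.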

From TABLE VIII, we can observe that: (1) sharing parameters within each iteration is useful for enhancing the generalization capability of our network, as the performance in columns (b) and (d) of TABLE VIII is better than that in (a) and (c), respectively; (2) sharing parameters over different iterations is also helpful for mitigating over-fitting, but it may limit the representation capacity, as the network with two attention iterations that shares parameters within each iteration but does not share them over iterations achieves the best result (see column (d) of TABLE VIII). As a result, in our GCA-LSTM network, we only share parameters within each iteration, and two attention iterations are used.

V-H Evaluation of Training Methods

The previous experiments showed that the stepwise training method improves the performance of our network compared to direct training (see TABLEs I, III, V, and IX). To further investigate the two training methods, we plot the convergence curves of our GCA-LSTM network in Fig. 6.

We analyze the convergence curves (Fig. 6) of the stepwise training method as follows. With the proposed stepwise training method, at the first training step we only need to train the subnetwork for initializing the global context memory, i.e., only a subset of the parameters and modules needs to be optimized, so the training is very efficient and the loss curve converges quickly. When the validation loss stops decreasing, we start the next training step, which contains new parameters and modules for the first attention iteration; since these have not been optimized yet, the loss increases immediately at that epoch. However, most of the parameters involved at this step have already been well pre-trained in the previous step, so the training is quite effective, and the loss drops to a very low value after only one training epoch.
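The stepwise schedule can be written schematically as a nested loop: an outer loop over training steps (each activating one more attention iteration) and an inner loop that trains until the validation loss stalls. All functions below are placeholders standing in for the real training and evaluation routines, and the toy loss dynamics are invented for illustration.

```python
def stepwise_train(num_iterations, train_step, validation_loss):
    """Train the network step by step, one attention iteration at a time.

    Step 0 trains the subnetwork that initializes the global context; step n
    additionally activates the n-th attention iteration, warm-started from
    the parameters learned at step n-1.
    """
    history = []
    for n in range(num_iterations + 1):
        prev_loss = float('inf')
        while True:                                   # train one epoch at a time
            train_step(active_iterations=n)
            loss = validation_loss(active_iterations=n)
            if loss >= prev_loss:                     # loss stalled -> next step
                break
            prev_loss = loss
        history.append(prev_loss)
    return history

# Toy stand-ins: each epoch halves the loss until it hits a floor of 1.0.
state = {'loss': 8.0}
hist = stepwise_train(
    num_iterations=2,
    train_step=lambda active_iterations: state.update(
        loss=max(state['loss'] * 0.5, 1.0)),
    validation_loss=lambda active_iterations: state['loss'],
)
```

The key property is the warm start: every outer step reuses the optimized parameters of the previous step, so only the newly activated modules must be learned from scratch.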

By comparing the convergence curves of the two training methods, we find that (1) the network converges much faster with stepwise training than with directly training the whole network. We also observe that (2) the network is more prone to over-fitting with the direct training method, as the gap between the training loss and the validation loss starts to widen in the later epochs. These observations demonstrate that the proposed stepwise training scheme is quite useful for training our GCA-LSTM network effectively and efficiently.

Fig. 6: Convergence curves of the GCA-LSTM network with two attention iterations by respectively using stepwise training (in red) and direct training (in green) on the NTU RGB+D dataset. Better viewed in colour.

V-I Evaluation of Initialization Methods and Attention Designs

In Section III-B2, we introduce two methods to initialize the global context memory cell. The first averages the hidden representations of the first layer (see Eq. (4)), and the second uses a one-layer feed-forward network. We compare these two initialization methods in TABLE XI. The results show that the two methods perform similarly. In our experiments, we also find that the model converges faster with the feed-forward network, so the feed-forward scheme is used to initialize the global context memory cell in our GCA-LSTM network.
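The two initialization schemes can be sketched side by side, assuming the first-layer hidden states are stacked into an array of shape (num_steps, hidden_dim). The feed-forward weights below are random placeholders and the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=(50, 8))      # toy: 50 spatio-temporal steps, dim 8

# Scheme 1: average the hidden representations of all steps (cf. Eq. (4)).
context_avg = hidden.mean(axis=0)

# Scheme 2: a one-layer feed-forward network on the concatenated states.
# W and b are placeholder parameters; in the real network they are learned.
W = rng.normal(size=(hidden.size, 8)) * 0.01
b = np.zeros(8)
context_ff = np.tanh(hidden.reshape(-1) @ W + b)
```

Scheme 1 is parameter-free, while scheme 2 adds a learned projection over the concatenated states; the paper reports comparable accuracy but faster convergence for the learned variant.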

Method NTU RGB+D (CS) NTU RGB+D (CV)
Averaging 73.8% 83.1%
Feed-forward network 74.3% 82.8%
TABLE XI: Performance comparison of different methods of initializing the global context memory cell.

In the GCA-LSTM network, the informativeness score is used as a gate within the LSTM neuron, as formulated in Section III-B3. We also explore replacing this scheme with the soft attention method [15, 62], i.e., computing the attention representation as a weighted sum of the input hidden representations, with the normalized informativeness scores as weights. Using soft attention, the accuracy drops by about one percentage point on the NTU RGB+D dataset. This can be explained as follows: equipping the LSTM neuron with an informativeness gate gives the LSTM better insight into when to update, forget, or remember. In addition, gating preserves the sequential ordering information of the inputs, while soft attention loses ordering and positional information.
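The contrast between the two usages of the score can be made concrete. In the sketch below, (a) soft attention forms a score-weighted sum, which is invariant to permuting the inputs, while (b) a gated sequential update depends on their order. The gated update line is a simplified caricature of an LSTM-style gate, not the paper's exact ST-LSTM equations.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 4))            # 5 input hidden states, dim 4
scores = softmax(rng.normal(size=5))   # normalized informativeness scores

# (a) Soft attention: a weighted sum; reordering the inputs changes nothing.
soft_repr = scores @ h

# (b) Gating: the score modulates each sequential cell update in turn,
# so the order of the inputs still matters.
cell = np.zeros(4)
for r, h_t in zip(scores, h):
    cell = (1 - r) * cell + r * np.tanh(h_t)
```

Permuting `h` and `scores` together leaves `soft_repr` unchanged but in general changes `cell`, which is exactly the ordering information the gated formulation retains.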

V-J Visualizations

Fig. 7: Examples of qualitative results on the NTU RGB+D dataset. Three actions (taking a selfie, pointing to something, and kicking other person) are illustrated. The informativeness scores of two attention iterations are visualized. Four frames are shown for each iteration. The circle size indicates the magnitude of the informativeness score for the corresponding joint in a frame. For clarity, the joints with tiny informativeness scores are not shown.

To better understand our network, we analyze and visualize the informativeness score evaluated by using the global context information on the large scale NTU RGB+D dataset in this section.

We analyze the variations of the informativeness scores over the two attention iterations to verify the effectiveness of the recurrent attention mechanism in our method, and show the qualitative results of three actions (taking a selfie, pointing to something, and kicking other person) in Fig. 7. The informativeness scores are computed with soft attention for visualization. In this figure, we can see that the attention performance increases between the two attention iterations. In the first iteration, the network tries to identify the potential informative joints over the frames. After this attention, the network achieves a good understanding of the global action. Then in the second iteration, the network can more accurately focus on the informative joints in each frame of the skeleton sequence. We can also find that the informativeness score of the same joint can vary in different frames. This indicates that our network performs attention not only in spatial domain, but also in temporal domain.

In order to further quantitatively evaluate the effectiveness of the attention mechanism, we analyze the classification accuracies of the three action classes in Fig. 7 among all the actions. We observe if the attention mechanism is not used, the accuracies of these three classes are 67.7%, 71.7%, and 81.5%, respectively. However, if we use one attention iteration, the accuracies rise to 67.8%, 72.4%, and 83.4%, respectively. If two attention iterations are performed, the accuracies become 67.9%, 73.6%, and 86.6%, respectively.

Fig. 8: Visualization of the average informativeness gates for all testing samples. The size of the circle around each joint indicates the magnitude of the corresponding informativeness score.

To roughly explore which joints are more informative for the activities in the NTU RGB+D dataset, we also average the informativeness scores of each joint over all the testing sequences, and visualize them in Fig. 8. We can observe that, on average, more attention is assigned to the hand and foot joints. This is because, in the NTU RGB+D dataset, most of the actions involve hand and foot postures and motions. We can also find that the average informativeness score of the right hand joint is higher than that of the left hand joint. This indicates most of the subjects are right-handed.

Vi Conclusion

In this paper, we have extended the original LSTM network to construct a Global Context-Aware Attention LSTM (GCA-LSTM) network for skeleton based action recognition, which has a strong ability to selectively focus on the informative joints in each frame of a skeleton sequence with the assistance of global context information. Furthermore, we have proposed a recurrent attention mechanism for our GCA-LSTM network, in which the selective focusing capability is improved iteratively. In addition, a two-stream attention framework is also introduced. The experimental results validate the contributions of our approach by achieving state-of-the-art performance on five challenging datasets.

Acknowledgement

This work was carried out at the Rapid-Rich Object Search (ROSE) Lab at Nanyang Technological University (NTU), Singapore. The ROSE Lab is supported by the National Research Foundation, Singapore, under its Interactive Digital Media (IDM) Strategic Research Programme. We acknowledge the support of NVIDIA AI Technology Centre (NVAITC) for the donation of the Tesla K40 and K80 GPUs used for our research at the ROSE Lab. Jun Liu would like to thank Qiuhong Ke from University of Western Australia for helpful discussions.

References

  • [1] J. Zheng, Z. Jiang, and R. Chellappa, “Cross-view action recognition via transferable dictionary learning,” IEEE Transactions on Image Processing, 2016.
  • [2] F. Liu, X. Xu, S. Qiu, C. Qing, and D. Tao, “Simple to complex transfer learning for action recognition,” IEEE Transactions on Image Processing, 2016.
  • [3] Y.-G. Jiang, Q. Dai, W. Liu, X. Xue, and C.-W. Ngo, “Human action recognition in unconstrained videos by explicit motion modeling,” IEEE Transactions on Image Processing, 2015.
  • [4] J. Han, L. Shao, D. Xu, and J. Shotton, “Enhanced computer vision with microsoft kinect sensor: A review,” IEEE Transactions on Cybernetics, 2013.
  • [5] G. Zhang, J. Liu, H. Li, Y. Q. Chen, and L. S. Davis, “Joint human detection and head pose estimation via multistream networks for rgb-d videos,” IEEE Signal Processing Letters, 2017.
  • [6] G. Zhang, L. Tian, Y. Liu et al., “Robust real-time human perception with depth camera,” in ECAI, 2016.
  • [7] L. L. Presti and M. La Cascia, “3d skeleton-based human action classification: A survey,” Pattern Recognition, 2016.
  • [8] F. Han, B. Reily, W. Hoff, and H. Zhang, “Space-time representation of people based on 3d skeletal data: a review,” arXiv, 2016.
  • [9] J. K. Aggarwal and L. Xia, “Human activity recognition from 3d data: A review,” Pattern Recognition Letters, 2014.
  • [10] J. Zhang, W. Li, P. O. Ogunbona, P. Wang, and C. Tang, “Rgb-d-based action recognition datasets: A survey,” Pattern Recognition, 2016.
  • [11] M. Ye, Q. Zhang, L. Wang, J. Zhu, R. Yang, and J. Gall, “A survey on human motion analysis from depth data,” in Time-of-flight and depth imaging. sensors, algorithms, and applications, 2013.
  • [12] Y. Du, W. Wang, and L. Wang, “Hierarchical recurrent neural network for skeleton based action recognition,” in CVPR, 2015.
  • [13] M. Jiang, J. Kong, G. Bebis, and H. Huo, “Informative joints based human action recognition using skeleton contexts,” Signal Processing: Image Communication, 2015.
  • [14] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, “Attention-based models for speech recognition,” in NIPS, 2015.
  • [15] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio, “Show, attend and tell: Neural image caption generation with visual attention,” in ICML, 2015.
  • [16] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” in ICLR, 2015.
  • [17] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, 1997.
  • [18] M. Sundermeyer, R. Schlüter, and H. Ney, “Lstm neural networks for language modeling.” in INTERSPEECH, 2012.
  • [19] M. Ibrahim, S. Muralidharan, Z. Deng, A. Vahdat, and G. Mori, “A hierarchical deep temporal model for group activity recognition,” in CVPR, 2016.
  • [20] S. Yeung, O. Russakovsky, G. Mori, and L. Fei-Fei, “End-to-end learning of action detection from frame glimpses in videos,” in CVPR, 2016.
  • [21] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici, “Beyond short snippets: Deep networks for video classification,” in CVPR, 2015.
  • [22] Z. Wu, X. Wang, Y.-G. Jiang, H. Ye, and X. Xue, “Modeling spatial-temporal clues in a hybrid deep learning framework for video classification,” in ACM MM, 2015.
  • [23] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent convolutional networks for visual recognition and description,” in CVPR, 2015.
  • [24] Q. Ke, M. Bennamoun, S. An, F. Sohel, and F. Boussaid, “Leveraging structural context models and ranking score fusion for human interaction prediction,” arXiv, 2017.
  • [25] Q. Ke, M. Bennamoun, S. An, F. Bossaid, and F. Sohel, “Spatial, structural and temporal feature learning for human interaction prediction,” arXiv, 2016.
  • [26] N. Srivastava, E. Mansimov, and R. Salakhutdinov, “Unsupervised learning of video representations using lstms,” in ICML, 2015.
  • [27] S. Ma, L. Sigal, and S. Sclaroff, “Learning activity progression in lstms for activity detection and early detection,” in CVPR, 2016.
  • [28] W. Zhu, C. Lan, J. Xing, W. Zeng, Y. Li, L. Shen, and X. Xie, “Co-occurrence feature learning for skeleton based action recognition using regularized deep lstm networks,” in AAAI, 2016.
  • [29] J. Liu, A. Shahroudy, D. Xu, and G. Wang, “Spatio-temporal lstm with trust gates for 3d human action recognition,” in ECCV, 2016.
  • [30] J. Weston, S. Chopra, and A. Bordes, “Memory networks,” in ICLR, 2015.
  • [31] J. Liu, G. Wang, P. Hu, L.-Y. Duan, and A. C. Kot, “Global context-aware attention lstm networks for 3d action recognition,” in CVPR, 2017.
  • [32] J. Luo, W. Wang, and H. Qi, “Group sparsity and geometry constrained dictionary learning for action recognition from depth maps,” in ICCV, 2013.
  • [33] A. Shahroudy, T.-T. Ng, Q. Yang, and G. Wang, “Multimodal multipart learning for action recognition in depth videos,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
  • [34] M. Meng, H. Drira, M. Daoudi, and J. Boonaert, “Human-object interaction recognition by learning the distances between the object and the skeleton joints,” in FG, 2015.
  • [35] X. Yang and Y. Tian, “Effective 3d action recognition using eigenjoints,” Journal of Visual Communication and Image Representation, 2014.
  • [36] J. Wang and Y. Wu, “Learning maximum margin temporal warping for action recognition,” in ICCV, 2013.
  • [37] I. Lillo, J. Carlos Niebles, and A. Soto, “A hierarchical pose-based approach to complex action understanding using dictionaries of actionlets and motion poselets,” in CVPR, 2016.
  • [38] H. Rahmani, A. Mahmood, D. Q. Huynh, and A. Mian, “Real time action recognition using histograms of depth gradients and random decision forests,” in WACV, 2014.
  • [39] C. Chen, R. Jafari, and N. Kehtarnavaz, “Fusion of depth, skeleton, and inertial data for human action recognition,” in ICASSP, 2016.
  • [40] L. Tao and R. Vidal, “Moving poselets: A discriminative and interpretable skeletal motion representation for action recognition,” in ICCVW, 2015.
  • [41] A. Shahroudy, G. Wang, and T.-T. Ng, “Multi-modal feature fusion for action recognition in rgb-d sequences,” in ISCCSP, 2014.
  • [42] P. Wang, W. Li, P. Ogunbona, Z. Gao, and H. Zhang, “Mining mid-level features for action recognition based on effective skeleton representation,” in DICTA, 2014.
  • [43] F. Ofli, R. Chaudhry, G. Kurillo, R. Vidal, and R. Bajcsy, “Sequence of the most informative joints (smij): A new representation for human skeletal action recognition,” Journal of Visual Communication and Image Representation, 2014.
  • [44] R. Anirudh, P. Turaga, J. Su, and A. Srivastava, “Elastic functional coding of human actions: from vector-fields to latent variables,” in CVPR, 2015.
  • [45] R. Chaudhry, F. Ofli, G. Kurillo, R. Bajcsy, and R. Vidal, “Bio-inspired dynamic 3d discriminative skeletal features for human action recognition,” in CVPRW, 2013.
  • [46] R. Vemulapalli, F. Arrate, and R. Chellappa, “Human action recognition by representing 3d skeletons as points in a lie group,” in CVPR, 2014.
  • [47] L. Xia, C.-C. Chen, and J. Aggarwal, “View invariant human action recognition using histograms of 3d joints,” in CVPRW, 2012.
  • [48] J. Wang, Z. Liu, Y. Wu, and J. Yuan, “Mining actionlet ensemble for action recognition with depth cameras,” in CVPR, 2012.
  • [49] J. Wang, Z. Liu, Y. Wu, and J. S. Yuan, “Learning actionlet ensemble for 3d human action recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014.
  • [50] H. Chen, G. Wang, J.-H. Xue, and L. He, “A novel hierarchical framework for human action recognition,” Pattern Recognition, 2016.
  • [51] P. Koniusz, A. Cherian, and F. Porikli, “Tensor representations via kernel linearization for action recognition from 3d skeletons,” in ECCV, 2016.
  • [52] P. Wang, C. Yuan, W. Hu, B. Li, and Y. Zhang, “Graph based skeleton motion representation and similarity measurement for action recognition,” in ECCV, 2016.
  • [53] M. Zanfir, M. Leordeanu, and C. Sminchisescu, “The moving pose: An efficient 3d kinematics descriptor for low-latency action recognition and detection,” in ICCV, 2013.
  • [54] V. Veeriah, N. Zhuang, and G.-J. Qi, “Differential recurrent neural networks for action recognition,” in ICCV, 2015.
  • [55] A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang, “Ntu rgb+d: A large scale dataset for 3d human activity analysis,” in CVPR, 2016.
  • [56] J. Liu, A. Shahroudy, D. Xu, A. C. Kot, and G. Wang, “Skeleton-based action recognition using spatio-temporal lstm network with trust gates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
  • [57] A. Jain, A. R. Zamir, S. Savarese, and A. Saxena, “Structural-rnn: Deep learning on spatio-temporal graphs,” in CVPR, 2016.
  • [58] Y. Li, C. Lan, J. Xing, W. Zeng, C. Yuan, and J. Liu, “Online human action detection using joint classification-regression recurrent neural networks,” in ECCV, 2016.
  • [59] C. Xiong, S. Merity, and R. Socher, “Dynamic memory networks for visual and textual question answering,” in ICML, 2016.
  • [60] S. Sharma, R. Kiros, and R. Salakhutdinov, “Action recognition using visual attention,” in ICLRW, 2016.
  • [61] A. Kumar, O. Irsoy, P. Ondruska, M. Iyyer, J. Bradbury, I. Gulrajani, V. Zhong, R. Paulus, and R. Socher, “Ask me anything: Dynamic memory networks for natural language processing,” in ICML, 2016.
  • [62] M.-T. Luong, H. Pham, and C. D. Manning, “Effective approaches to attention-based neural machine translation,” in EMNLP, 2015.
  • [63] S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus, “End-to-end memory networks,” in NIPS, 2015.
  • [64] M. F. Stollenga, J. Masci, F. Gomez, and J. Schmidhuber, “Deep networks with internal selective attention through feedback connections,” in NIPS, 2014.
  • [65] L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville, “Describing videos by exploiting temporal structure,” in ICCV, 2015.
  • [66] K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” in NIPS, 2014.
  • [67] A. Shahroudy, T.-T. Ng, Y. Gong, and G. Wang, “Deep multimodal feature analysis for action recognition in rgb+ d videos,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
  • [68] S. Song, C. Lan, J. Xing, W. Zeng, and J. Liu, “An end-to-end spatio-temporal attention model for human action recognition from skeleton data.” in AAAI, 2017.
  • [69] Y. Wang, S. Wang, J. Tang, N. O’Hare, Y. Chang, and B. Li, “Hierarchical attention network for action recognition in videos,” arXiv, 2016.
  • [70] A. Graves, “Supervised sequence labelling,” in Supervised Sequence Labelling with Recurrent Neural Networks, 2012.
  • [71] J.-F. Hu, W.-S. Zheng, J. Lai, and J. Zhang, “Jointly learning heterogeneous features for rgb-d activity recognition,” in CVPR, 2015.
  • [72] K. Yun, J. Honorio, D. Chattopadhyay, T. L. Berg, and D. Samaras, “Two-person interaction detection using body-pose features and multiple instance learning,” in CVPRW, 2012.
  • [73] F. Ofli, R. Chaudhry, G. Kurillo, R. Vidal, and R. Bajcsy, “Berkeley mhad: A comprehensive multimodal human action database,” in WACV, 2013.
  • [74] R. Collobert, K. Kavukcuoglu, and C. Farabet, “Torch7: A matlab-like environment for machine learning,” in NIPSW, 2011.
  • [75] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, 2014.
  • [76] G. Evangelidis, G. Singh, and R. Horaud, “Skeletal quads: Human action recognition using joint quadruples,” in ICPR, 2014.
  • [77] P. Wang, Z. Li, Y. Hou, and W. Li, “Action recognition based on joint trajectory maps using convolutional neural networks,” in ACM MM, 2016.
  • [78] Q. Ke, S. An, M. Bennamoun, F. Sohel, and F. Boussaid, “Skeletonnet: Mining deep part features for 3-d action recognition,” IEEE Signal Processing Letters, 2017.
  • [79] M. Liu, H. Liu, and C. Chen, “Enhanced skeleton visualization for view invariant human action recognition,” Pattern Recognition, 2017.
  • [80] J.-F. Hu, W.-S. Zheng, L. Ma, G. Wang, and J. Lai, “Real-time rgb-d activity prediction by soft regression,” in ECCV, 2016.
  • [81] R. Slama, H. Wannous, M. Daoudi, and A. Srivastava, “Accurate 3d action recognition using learning on the Grassmann manifold,” Pattern Recognition, 2015.
  • [82] M. Devanne, H. Wannous, S. Berretti, P. Pala, M. Daoudi, and A. Del Bimbo, “3-d human action recognition by shape analysis of motion trajectories on Riemannian manifold,” IEEE Transactions on Cybernetics, 2015.
  • [83] C. Wang, Y. Wang, and A. L. Yuille, “Mining 3d key-pose-motifs for action recognition,” in CVPR, 2016.
  • [84] C. Wang, J. Flynn, Y. Wang, and A. L. Yuille, “Recognizing actions in 3d using action-snippets and activated simplices,” in AAAI, 2016.
  • [85] W. Li, L. Wen, M. Choo Chuah, and S. Lyu, “Category-blind human action recognition: A practical recognition system,” in ICCV, 2015.
  • [86] Y. Ji, G. Ye, and H. Cheng, “Interactive body part contrast mining for human interaction recognition,” in ICMEW, 2014.
  • [87] S. Vantigodi and R. V. Babu, “Real-time human action recognition from motion capture data,” in NCVPRIPG, 2013.
  • [88] S. Vantigodi and V. B. Radhakrishnan, “Action recognition from motion capture data using meta-cognitive rbf network classifier,” in ISSNIP, 2014.
  • [89] I. Kapsouras and N. Nikolaidis, “Action recognition on motion capture data using a dynemes and forward differences representation,” Journal of Visual Communication and Image Representation, 2014.