Spatio-Temporal Dual Affine Differential Invariant for Skeleton-based Action Recognition

04/21/2020 ∙ by Qi Li, et al. ∙ Institute of Computing Technology, Chinese Academy of Sciences

The dynamics of human skeletons carry significant information for the task of action recognition. The similarity between trajectories of corresponding joints is an indicative feature of the same action, yet this similarity may be subject to distortions that can be modeled as the combination of spatial and temporal affine transformations. In this work, we propose a novel feature called the spatio-temporal dual affine differential invariant (STDADI). Furthermore, in order to improve the generalization ability of neural networks, a channel augmentation method is proposed. On the large-scale action recognition dataset NTU-RGB+D and its extended version NTU-RGB+D 120, it achieves remarkable improvements over previous state-of-the-art methods.


I Introduction

Skeleton-based action recognition has received great attention in recent years, as the dynamics of human skeletons carry significant information for the task. Compared to other modalities of human action, such as video, depth images and optical flow, human skeletons have the advantages of compactness and high information density. The dynamics of human skeletons can be seen as a time series of human poses, or as a combination of human joint trajectories. Among all the human joints, the trajectories of the important joints that indicate the action class convey the most significant information. It is also worth noting that when the same action is performed in different attempts, the trajectories of these joints are subject to distortions. In this work, we propose a novel feature that is invariant under these distortions and utilize it to facilitate skeleton-based action recognition.

When the same action is performed, two corresponding joint trajectories should share a basic shape. However, due to individual factors, the two trajectories appear with diverse kinds of distortions, caused by both spatial and temporal factors. Spatial factors include changes of viewpoint, skeleton size and action amplitude ([29, 27]), while temporal factors refer to time scaling along the time series ([5, 6]). All the spatial factors can be modeled by an affine transformation in 3D space, while uniform time scaling, the most commonly discussed temporal distortion, can be seen as an affine transformation in 1D space. We combine these two kinds of distortions into the spatio-temporal dual affine transformation.

In this paper, we propose a general method for constructing the Spatio-Temporal Dual Affine Differential Invariant (STDADI). Specifically, we utilize rational polynomials of the derivatives of joint trajectories to obtain the invariants. By bounding the degree of the polynomials and the order of the derivatives, we generate 8 independent STDADIs and combine them into an invariant vector at each moment for each human joint.

Recently, researchers have explored the potential of data-driven methods for skeleton-based action recognition. To improve the generalization ability of neural networks under different transformations, a common practice is data augmentation. However, this additional preprocessing generates more samples and lengthens the training phase. In this paper, we propose an intuitive yet effective alternative: extending the input data with STDADI along the channel dimension for both training and evaluation, a practice we call channel augmentation. Experiments show that channel augmentation based on STDADI not only achieves stronger performance and generalization, but also provides more insights into skeleton-based action recognition.

The main contributions of this work are the following:

  1. We propose a novel feature called spatio-temporal dual affine differential invariant (STDADI).

  2. In order to improve the generalization ability of neural networks, a channel augmentation method is proposed.

  3. We validate the effectiveness of the proposed feature and method on the large-scale action recognition dataset NTU-RGB+D [23] and its extended version NTU-RGB+D 120 [15], achieving superior performance compared to previous state-of-the-art methods.

II Related Work

Skeleton-based action recognition

Before the rise of deep learning, handcrafted-feature-based methods were proposed to solve skeleton-based action recognition. Wang et al.[28] proposed to use relative locations of joints as motion features. Hussein et al.[7] exploited the covariance matrices of joint trajectories. Vemulapalli et al.[26] utilized rotations and translations between joint locations to capture the dynamics of human skeletons. However, the performance of these methods is limited, as the designed features do not cover all factors affecting recognition. Thanks to the success of deep learning, data-driven methods now achieve better performance. These methods can be further divided into RNN-based, CNN-based and GNN-based approaches. RNN-based approaches take the sequence of human joint coordinate vectors as input and predict the action label in a recursive manner ([27, 23, 25, 4, 16, 31, 18, 17]). CNN-based approaches express the skeleton data as a pseudo-image for conventional CNNs ([19, 14, 13, 12, 20]) or as a sequence of coordinate vectors for temporal CNNs ([9, 11, 10]). Compared to these two kinds of methods, GNN-based approaches are modeled on the natural connections between human joints and are thus better able to characterize the dynamics of human skeletons. Recently, Yan et al.[30] proposed the spatial temporal graph convolutional network (ST-GCN) and achieved evident improvements over previous methods.

Transformations and invariant features for skeleton-based action recognition For skeleton-based action recognition, the commonly discussed transformations are geometric, usually caused by changes of viewpoint and the magnitude of motion. Müller et al. [21] took a set of boolean features associated with four joints to describe their relative positions, which are invariant with respect to the skeleton’s position, orientation and size. Vemulapalli et al. [26] utilized rigid transformations between human joints to describe the skeleton, which are geometrically invariant. Shao et al. [24] used integral invariants as a local description of joint trajectories. Boulahia et al. [2] integrated a set of features inspired by handwriting recognition, in which moment invariants [22] with respect to similarity transformations were utilized.

Time scaling, as a transformation along the time dimension, is rarely discussed in previous works on skeleton-based action recognition. Most of these works ([29, 24, 1, 8]) use dynamic time warping for time alignment and trajectory matching. Esling and Agon [5] explicitly defined time scaling and classified it as uniform or dynamic. We adopt the definition of uniform time scaling and model it as a temporal affine transformation.

In the deep learning domain, data augmentation is a universal approach for improving generalization under various transformations. However, it is time-consuming during training and its improvements are hard to interpret. Wang et al.[27] proposed a data augmentation method for the 3D coordinates of human skeletons, including rotation, scaling and shear transformations, which benefits the training of their two-stream RNN.

III Approach

III-A Spatio-Temporal Dual Affine Transformation

Formally, we express the dynamics of a human joint as a parameterized curve taking time as the parameter:

$$\mathbf{p}(t) = (x(t), y(t), z(t))^\top \tag{1}$$
$$\tilde{\mathbf{p}}(\tilde{t}) = (\tilde{x}(\tilde{t}), \tilde{y}(\tilde{t}), \tilde{z}(\tilde{t}))^\top \tag{2}$$

where $\mathbf{p}$ and $\tilde{\mathbf{p}}$ represent joint trajectories before and after the transformation, respectively.

The dual affine transformation can be defined as

$$\tilde{\mathbf{p}}(\tilde{t}) = A\,\mathbf{p}(t) + \mathbf{b}, \qquad \tilde{t} = \lambda t + t_0 \tag{3}$$

where the matrix $A \in \mathbb{R}^{3 \times 3}$ and the vector $\mathbf{b} \in \mathbb{R}^3$ express the spatial affine transformation, and the scalars $\lambda$ and $t_0$ denote the temporal affine transformation. This can be detailed as follows:

Spatial affine transformation The matrix $A$ controls rotation and scaling while the vector $\mathbf{b}$ represents translation. Spatial affine transformations are caused by multiple factors, including coordinate system conversion, pose orientation, different skeleton sizes and action amplitude.

Temporal affine transformation The linear transformation of the time domain can be considered a 1D affine transformation. The parameter $\lambda$ represents time scaling, indicating different speeds, and $t_0$ represents phase shift, indicating different starting times. We discuss uniform time scaling here, which assumes that the time scale changes uniformly according to the same proportion [6]. We follow this assumption and express it as the temporal affine transformation.
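As a concrete sketch, the following snippet applies one such dual affine distortion to a sampled joint trajectory (the names `A`, `b`, `lam`, `t0` are our own notation for the parameters above; the trajectory itself is synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

# A sampled joint trajectory: T frames of 3D positions p(t).
T = 100
t = np.linspace(0.0, 1.0, T)
p = np.stack([np.sin(t), np.cos(t), t], axis=1)  # shape (T, 3)

# Spatial affine part: matrix A (rotation/scaling/shear) and translation b.
A = rng.normal(size=(3, 3))
b = rng.normal(size=3)

# Temporal affine part: uniform time scaling lam and phase shift t0.
lam, t0 = 2.0, 0.5

# The distorted observation of the same action: the position formerly seen
# at time t[i] is now seen at time t_new[i] = lam * t[i] + t0.
t_new = lam * t + t0
p_new = p @ A.T + b  # row-wise A p(t) + b
```

Both distortions act on the same underlying curve: the spatial part changes where each pose sits in 3D space, while the temporal part only re-labels when it is observed.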

III-B Spatio-Temporal Dual Affine Differential Invariant

We utilize rational polynomials of the derivatives of joint trajectories to construct STDADI. Specifically, based on equation 3, we can derive the relationship between the first derivatives of joint trajectories before and after the transformation:

$$\frac{d\tilde{\mathbf{p}}}{d\tilde{t}} = \frac{1}{\lambda}\, A\, \frac{d\mathbf{p}}{dt} \tag{4}$$

Similarly, by the chain rule we can obtain the relationship between their derivatives of any order:

$$\tilde{\mathbf{p}}^{(k)} = \lambda^{-k} A\, \mathbf{p}^{(k)} \tag{5}$$

where the superscript $(k)$ denotes the order of derivation.

It is worth noting that when $k$ is equal to 0, formula 5 is equivalent to formula 3 without the translation vector $\mathbf{b}$. We can eliminate the effect of the translation vector by subtracting the mean value. That is,

$$\mathbf{p}^{(0)} = \mathbf{p} - \bar{\mathbf{p}}, \qquad \tilde{\mathbf{p}}^{(0)} = \tilde{\mathbf{p}} - \bar{\tilde{\mathbf{p}}} = A\, \mathbf{p}^{(0)} \tag{6}$$

Thus, in Equation 5, we can take $k$ to be any non-negative integer.
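The centering step can be checked numerically. A minimal sketch (our own notation; the two trajectories are sampled at the same parameter values, so the means correspond exactly):

```python
import numpy as np

rng = np.random.default_rng(2)

p = rng.normal(size=(50, 3))   # samples of a joint trajectory
A = rng.normal(size=(3, 3))    # spatial affine matrix
b = rng.normal(size=3)         # translation vector

q = p @ A.T + b                # affine-transformed trajectory

# Subtracting each trajectory's mean removes b entirely:
# mean(q) = mean(p) @ A.T + b, so q - mean(q) = (p - mean(p)) @ A.T.
p0 = p - p.mean(axis=0)
q0 = q - q.mean(axis=0)
print(np.allclose(q0, p0 @ A.T))  # True
```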

Based on the relationship in equation 5, we construct a matrix using three derivatives of different orders as columns and derive their relationship:

$$\tilde{M}(i,j,k) = \big[\tilde{\mathbf{p}}^{(i)}, \tilde{\mathbf{p}}^{(j)}, \tilde{\mathbf{p}}^{(k)}\big] = A\, M(i,j,k)\, \mathrm{diag}\big(\lambda^{-i}, \lambda^{-j}, \lambda^{-k}\big) \tag{7}$$

where $i$, $j$, $k$ are all non-negative integers. To ensure that the determinant of $M(i,j,k)$ is not identically zero, $i$, $j$ and $k$ must be different from each other. We find that the determinant of $M(i,j,k)$ is a relative invariant, related to the transformation parameters $A$ and $\lambda$:

$$\det \tilde{M}(i,j,k) = \lambda^{-(i+j+k)} \det(A)\, \det M(i,j,k) \tag{8}$$

We eliminate the parameters $A$ and $\lambda$ by constructing a rational formula:

$$\frac{\prod_{n=1}^{N} \det \tilde{M}(i_n, j_n, k_n)}{\prod_{n=1}^{N} \det \tilde{M}(i'_n, j'_n, k'_n)} = \frac{\prod_{n=1}^{N} \det M(i_n, j_n, k_n)}{\prod_{n=1}^{N} \det M(i'_n, j'_n, k'_n)} \tag{9}$$

This means that

$$I = \frac{\prod_{n=1}^{N} \det M(i_n, j_n, k_n)}{\prod_{n=1}^{N} \det M(i'_n, j'_n, k'_n) + \epsilon} \tag{10}$$

is an invariant feature with respect to the spatio-temporal dual affine transformation, namely, a STDADI. In this expression, $N$ is a positive integer named the degree of the polynomials, and the degrees of the numerator and denominator must be equal to guarantee the elimination of the matrix $A$, since each determinant contributes one factor of $\det(A)$. The maximum order of the derivatives is named the order of the STDADI. To ensure the elimination of the parameter $\lambda$, the following needs to be met:

$$\sum_{n=1}^{N} (i_n + j_n + k_n) = \sum_{n=1}^{N} (i'_n + j'_n + k'_n) \tag{11}$$

To ensure that every determinant is not identically zero, it is also required that the orders within each determinant are pairwise distinct. The parameter $\epsilon$ is a small value added for computational stability.
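The cancellation of $A$ and $\lambda$ can be verified numerically. The sketch below uses one hypothetical ratio of our own choosing (not one of the paper's 8 invariants): numerator $\det M(1,2,3)\,\det M(1,3,4)$, denominator $\det M(1,2,4)^2$, which satisfies both the equal-degree and the equal-order-sum conditions.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Analytic trajectory p(t): a random quartic curve in R^3, so derivatives
# up to order 4 exist and are exactly computable.
C = rng.normal(size=(3, 5))  # coefficients of t^0 .. t^4 per axis

def p_deriv(t, k):
    """Exact k-th derivative of p at time t."""
    out = np.zeros(3)
    for n in range(k, 5):
        out += C[:, n] * (math.factorial(n) // math.factorial(n - k)) * t ** (n - k)
    return out

def det3(a, b, c):
    """Determinant of the 3x3 matrix with columns a, b, c."""
    return np.linalg.det(np.stack([a, b, c], axis=1))

def invariant(d, eps=1e-12):
    # Hypothetical STDADI-style ratio: degrees match (2 = 2) and the order
    # sums match (6 + 8 = 7 + 7), so det(A) and the powers of lambda cancel.
    num = det3(d[1], d[2], d[3]) * det3(d[1], d[3], d[4])
    den = det3(d[1], d[2], d[4]) ** 2 + eps
    return num / den

# A random spatio-temporal dual affine transformation.
A = rng.normal(size=(3, 3))
lam, t0 = 1.7, 0.3

t = 0.9
d_orig = {k: p_deriv(t, k) for k in range(1, 5)}
# Chain rule (eq. 5): the transformed curve's k-th derivative at the
# corresponding time is lam^(-k) * A @ p^(k)(t).
d_trans = {k: lam ** (-k) * (A @ p_deriv(t, k)) for k in range(1, 5)}

print(invariant(d_orig), invariant(d_trans))  # the two values agree
```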

For computational simplicity, we set the upper limits of the degree and order to 2 and 4, respectively, and obtain 55 invariants in total. We select 8 of them that are function-independent [3] from each other, which implies weaker correlation and better descriptive ability. The 8 invariants are listed as follows (the stabilizing $\epsilon$ terms are omitted here for compactness):

(12)

In practice, we approximate the derivatives of joint trajectories using a 5th-order B-spline curve. We then calculate STDADIs following formulas 10 and 12. Finally, we arrange the obtained invariants into an 8-dimensional invariant feature vector at each moment for each human joint.
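One possible implementation of this derivative estimation step, using SciPy's quintic interpolating B-spline (the synthetic trajectory here is an assumption for illustration):

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Samples of one joint trajectory: T frames, 3 coordinates.
T = 64
t = np.linspace(0.0, 1.0, T)
traj = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t ** 2], axis=1)

# Fit a quintic (degree-5) interpolating B-spline to the trajectory,
# then evaluate derivatives of orders 1..4, as the invariants require.
spline = make_interp_spline(t, traj, k=5)
derivs = {k: spline.derivative(k)(t) for k in range(1, 5)}  # each (T, 3)

# Away from the boundary the estimate is close to the true derivative,
# e.g. d/dt sin(2*pi*t) = 2*pi*cos(2*pi*t).
err = np.abs(derivs[1][10:-10, 0] - 2 * np.pi * np.cos(2 * np.pi * t[10:-10]))
print(err.max() < 1e-2)  # True
```

Spline fitting also smooths the finite-difference noise that would otherwise be amplified at higher derivative orders.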

III-C Channel Augmentation

Compared to other handcrafted features, our STDADI focuses on describing joint trajectories under the spatio-temporal dual affine transformation. As not all factors are covered, STDADI by itself is not sufficient for the recognition task. However, since the feature is beneficial for recognizing actions under different transformations, it can help improve the generalization of data-driven methods. We therefore propose an intuitive yet effective method named channel augmentation.

Specifically, we extend input data with STDADI along the channel dimension, as shown in Fig. 1. Conventional inputs are 3D coordinates of human joints, and we concatenate the coordinate vector and the STDADI vector at each joint for each frame. Before the concatenation, we apply a hyperbolic tangent function on the STDADI vector to make sure that it matches the magnitude of coordinates. Channel augmentation introduces invariant information into input data without changing the inner structure of neural networks.
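A minimal sketch of the concatenation; the (channels, frames, joints) layout follows common ST-GCN input conventions, and the STDADI values here are placeholders rather than computed invariants:

```python
import numpy as np

rng = np.random.default_rng(3)

# Skeleton input in (channels, frames, joints) layout: 3 coordinate
# channels, T frames, V joints; stdadi holds 8 invariants per frame/joint.
T, V = 300, 25
coords = rng.normal(size=(3, T, V)).astype(np.float32)
stdadi = rng.normal(size=(8, T, V)).astype(np.float32)  # placeholder values

# Squash the invariants into (-1, 1) so they match the coordinate
# magnitudes, then stack along the channel dimension.
augmented = np.concatenate([coords, np.tanh(stdadi)], axis=0)
print(augmented.shape)  # (11, 300, 25)
```

Only the first network layer needs its input-channel count adjusted; the rest of the architecture is untouched.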

In our experiments we choose to use the spatial temporal graph convolutional network (ST-GCN) [30]. This method models the skeleton data as a graph structure, considering spatial and temporal connections between human joints simultaneously. In particular, it can exploit local patterns and correlations from human skeletons, that is, the importance of joints along the action sequence, expressed as joint weights in the spatio-temporal graph. This is in line with our STDADI, since both focus on describing joint dynamics, and our features further provide an invariant expression that is not affected by the distortions.

Fig. 1: Illustration of channel augmentation for the spatio-temporal graph of human skeletons.
NTU-RGB+D NTU-RGB+D 120
Method Cross-subject Cross-view Cross-subject Cross-setup
Part-Aware LSTM [23] 62.9% 70.3% 25.5% 26.3%
Spatio-Temporal LSTM [16] 69.2% 77.7% 55.7% 57.9%
GCA-LSTM [18] 74.4% 82.8% 58.3% 59.2%
Two-Stream Attention LSTM [17] 76.1% 84.0% 61.2% 63.3%
Skeleton Visualization [19] 80.0% 87.2% 60.3% 63.2%
Body Pose Evolution Map(*) [20] 82.4% 86.7% 64.6% 66.9%
Multi-Task Learning Network [9] 79.6% 84.8% 58.4% 57.9%
Multi-Task CNN with RotClips [10] 81.1% 87.4% 62.2% 61.8%
ST-GCN [30] 81.5% 88.3% 71.7% 74.3%
ST-GCN + data augmentation 80.6% 90.5% 72.2% 79.0%
ST-GCN + channel augmentation 83.4% 91.3% 77.3% 78.8%
TABLE I: Comparisons of the validation accuracy with state-of-the-art methods on NTU-RGB+D and NTU-RGB+D 120. "*" indicates that for [20], we report the results on NTU-RGB+D using only skeleton data. Best results are labeled in bold.

IV Results

In this section we validate the effectiveness of the proposed feature and method on the large-scale action recognition dataset NTU-RGB+D [23] and its extended version NTU-RGB+D 120 [15]. In addition to the original ST-GCN, we adopted a data augmentation technique as a baseline method. As illustrated in [27], the data augmentation technique involves rotation, scaling and shear transformations of 3D skeletons during training. For all the experimental methods, we used the same training strategy and hyperparameters as suggested in [30].

IV-A Datasets & Evaluation Metrics

NTU-RGB+D and its extended version, NTU-RGB+D 120, are currently the largest action recognition datasets with 3D joint annotations, captured in a constrained indoor environment using Microsoft Kinect V2 cameras. Both provide 3D skeleton data containing the 3D locations of 25 major body joints in the camera coordinate system. NTU-RGB+D contains 56880 samples in 60 action classes performed by 40 subjects, and NTU-RGB+D 120 extends the original by adding 57600 more samples, expanding the numbers of action classes and subjects to 120 and 106, respectively. Both datasets provide a cross-subject evaluation criterion, while NTU-RGB+D 120 improves on the cross-view benchmark by introducing more factors that affect the angle of view, including the height of the cameras and their distance to the subjects, and renames this benchmark "cross-setup". We report top-1 recognition accuracy on both datasets with the corresponding evaluation metrics.

IV-B Comparison with the State-of-the-art

As shown in Table I, our method, ST-GCN + channel augmentation, outperforms most of the previous state-of-the-art methods. Compared to the two baseline approaches, ST-GCN and ST-GCN + data augmentation, our method achieves clear improvements on both benchmarks. As data augmentation mainly consists of 3D geometric transformations, it substantially improves cross-view accuracy but contributes little in the cross-subject setting. This also verifies that our spatio-temporal dual affine transformation assumption is valid under both evaluation criteria.

Method Cross-subject Cross-view
ST-GCN 81.5% 88.3%
+ derivatives 80.4% 87.6%
+ STDADI 83.4% 91.3%
TABLE II: The validation accuracy of different input settings for channel augmentation on NTU-RGB+D.
Fig. 2: Improvements of each class in validation accuracy of ST-GCN + channel augmentation over ST-GCN, on NTU-RGB+D, the cross-subject benchmark. For clarity only classes with accuracy improvement larger than 2% or less than -2% are shown.

IV-C Detailed Analysis

To validate the effectiveness of STDADI, we tried a different input setting that uses trajectory derivatives as the extended vector for channel augmentation. This vector contains the 1st, 2nd and 3rd derivatives of the joint trajectory and is thus 9-dimensional. As seen in Table II, while ST-GCN + STDADI improves accuracy, ST-GCN + derivatives decreases it on both benchmarks. This shows that the improvement comes from the invariance expressed by STDADI.

We also compare the per-class improvements of ST-GCN + channel augmentation over ST-GCN. As shown in Fig. 2, actions such as pointing to something and salute achieve the greatest performance gains, while actions like brushing hair suffer performance losses. We find that the action classes with improved accuracy have specific joint trajectory motion patterns: when performing actions like pointing to something and salute, the trajectories of the performers' wrist joints are geometrically similar. This indicates that the geometric similarity of important joint trajectories helps to recognize the action class, and our STDADI provides a representation of this similarity that is invariant under various distortions.

V Conclusion

In this paper, we propose a general method for constructing the spatio-temporal dual affine differential invariant (STDADI). We demonstrate the effectiveness of this invariant feature using a channel augmentation technique on the large-scale action recognition datasets NTU-RGB+D and NTU-RGB+D 120. The combination of handcrafted features and data-driven methods not only improves accuracy but also provides more insights. In the future, as the temporal affine transformation may not suffice to model complex transformations along the time dimension, we plan to explore invariance under nonlinear dynamic time scaling.

References

  • [1] R. Anirudh, P. Turaga, J. Su, and A. Srivastava (2016) Elastic functional coding of riemannian trajectories. IEEE transactions on pattern analysis and machine intelligence 39 (5), pp. 922–936. Cited by: §II.
  • [2] S. Y. Boulahia, E. Anquetil, R. Kulpa, and F. Multon (2016) HIF3D: handwriting-inspired features for 3d skeleton-based action recognition. In 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 985–990. Cited by: §II.
  • [3] A. B. Brown (1935) Functional dependence. Transactions of the American Mathematical Society 38 (2), pp. 379–394. Cited by: §III-B.
  • [4] Y. Du, W. Wang, and L. Wang (2015) Hierarchical recurrent neural network for skeleton based action recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1110–1118. Cited by: §II.
  • [5] P. Esling and C. Agon (2012) Time-series data mining. ACM Computing Surveys (CSUR) 45 (1), pp. 12. Cited by: §I, §II.
  • [6] X. He, C. Shao, and Y. Xiong (2014) A new similarity measure based on shape information for invariant with multiple distortions. Neurocomputing 129, pp. 556–569. Cited by: §I, §III-A.
  • [7] M. E. Hussein, M. Torki, M. A. Gowayyed, and M. El-Saban (2013) Human action recognition using a temporal hierarchy of covariance descriptors on 3d joint locations. In Twenty-Third International Joint Conference on Artificial Intelligence. Cited by: §II.
  • [8] A. Kacem, M. Daoudi, B. B. Amor, S. Berretti, and J. C. Alvarez-Paiva (2018) A novel geometric framework on gram matrix trajectories for human behavior understanding. IEEE transactions on pattern analysis and machine intelligence. Cited by: §II.
  • [9] Q. Ke, M. Bennamoun, S. An, F. Sohel, and F. Boussaid (2017) A new representation of skeleton sequences for 3d action recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3288–3297. Cited by: §II, TABLE I.
  • [10] Q. Ke, M. Bennamoun, S. An, F. Sohel, and F. Boussaid (2018) Learning clip representations for skeleton-based 3d action recognition. IEEE Transactions on Image Processing 27 (6), pp. 2842–2855. Cited by: §II, TABLE I.
  • [11] T. S. Kim and A. Reiter (2017) Interpretable 3d human action analysis with temporal convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1623–1631. Cited by: §II.
  • [12] B. Li, Y. Dai, X. Cheng, H. Chen, Y. Lin, and M. He (2017) Skeleton based action recognition using translation-scale invariant image mapping and multi-scale deep cnn. In 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), pp. 601–604. Cited by: §II.
  • [13] C. Li, Q. Zhong, D. Xie, and S. Pu (2017) Skeleton-based action recognition with convolutional neural networks. In 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), pp. 597–600. Cited by: §II.
  • [14] H. Liu, J. Tu, and M. Liu (2017) Two-stream 3d convolutional neural network for skeleton-based action recognition. arXiv preprint arXiv:1705.08106. Cited by: §II.
  • [15] J. Liu, A. Shahroudy, M. L. Perez, G. Wang, L. Duan, and A. K. Chichung (2019) NTU RGB+D 120: a large-scale benchmark for 3d human activity understanding. IEEE transactions on pattern analysis and machine intelligence. Cited by: item 3, §IV.
  • [16] J. Liu, A. Shahroudy, D. Xu, and G. Wang (2016) Spatio-temporal lstm with trust gates for 3d human action recognition. In European Conference on Computer Vision, pp. 816–833. Cited by: §II, TABLE I.
  • [17] J. Liu, G. Wang, L. Duan, K. Abdiyeva, and A. C. Kot (2017) Skeleton-based human action recognition with global context-aware attention lstm networks. IEEE Transactions on Image Processing 27 (4), pp. 1586–1599. Cited by: §II, TABLE I.
  • [18] J. Liu, G. Wang, P. Hu, L. Duan, and A. C. Kot (2017) Global context-aware attention lstm networks for 3d action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1647–1656. Cited by: §II, TABLE I.
  • [19] M. Liu, H. Liu, and C. Chen (2017) Enhanced skeleton visualization for view invariant human action recognition. Pattern Recognition 68, pp. 346–362. Cited by: §II, TABLE I.
  • [20] M. Liu and J. Yuan (2018) Recognizing human actions as the evolution of pose estimation maps. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1159–1168. Cited by: §II, TABLE I.
  • [21] M. Müller, T. Röder, and M. Clausen (2005) Efficient content-based retrieval of motion capture data. In ACM Transactions on Graphics (ToG), Vol. 24, pp. 677–685. Cited by: §II.
  • [22] F. A. Sadjadi and E. L. Hall (1980) Three-dimensional moment invariants. IEEE Transactions on Pattern Analysis and Machine Intelligence (2), pp. 127–136. Cited by: §II.
  • [23] A. Shahroudy, J. Liu, T. Ng, and G. Wang (2016) NTU RGB+D: a large scale dataset for 3d human activity analysis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1010–1019. Cited by: item 3, §II, TABLE I, §IV.
  • [24] Z. Shao and Y. Li (2015) Integral invariants for space motion trajectory matching and recognition. Pattern Recognition 48 (8), pp. 2418–2432. Cited by: §II, §II.
  • [25] V. Veeriah, N. Zhuang, and G. Qi (2015) Differential recurrent neural networks for action recognition. In Proceedings of the IEEE international conference on computer vision, pp. 4041–4049. Cited by: §II.
  • [26] R. Vemulapalli, F. Arrate, and R. Chellappa (2014) Human action recognition by representing 3d skeletons as points in a lie group. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 588–595. Cited by: §II, §II.
  • [27] H. Wang and L. Wang (2017) Modeling temporal dynamics and spatial configurations of actions using two-stream recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 499–508. Cited by: §I, §II, §II, §IV.
  • [28] J. Wang, Z. Liu, Y. Wu, and J. Yuan (2012) Mining actionlet ensemble for action recognition with depth cameras. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1290–1297. Cited by: §II.
  • [29] S. Wu and Y. F. Li (2009) Flexible signature descriptions for adaptive motion trajectory representation, perception and recognition. Pattern Recognition 42 (1), pp. 194–214. Cited by: §I, §II.
  • [30] S. Yan, Y. Xiong, and D. Lin (2018) Spatial temporal graph convolutional networks for skeleton-based action recognition. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §II, §III-C, TABLE I, §IV.
  • [31] P. Zhang, C. Lan, J. Xing, W. Zeng, J. Xue, and N. Zheng (2017) View adaptive recurrent neural networks for high performance human action recognition from skeleton data. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2117–2126. Cited by: §II.