Human action recognition is a popular topic in computer vision. In particular, skeleton-based action recognition has attracted increasing attention. Compared with RGB data, skeleton data are considered a more robust representation of human action dynamics. Meanwhile, skeleton data are extremely compact in terms of data size, which makes it possible to design more lightweight models. Skeleton data can be easily captured by depth cameras (e.g. Kinect) or estimated with human pose estimation algorithms (Chen et al., 2018; Wei et al., 2016; Cao et al., 2018).
In the task of skeleton-based action recognition, we are given a time series of human joint coordinates and expected to predict the action being performed. Given the sequential nature of the data, Recurrent Neural Networks (RNNs) are a natural choice and have been widely studied (Shahroudy et al., 2016; Zhang et al., 2019a, 2018; Liu et al., 2016; Zhang et al., 2017). On the other hand, some recent works cast a skeleton sequence as a pseudo-image and employ Convolutional Neural Networks (CNNs) to classify the image directly (Li et al., 2017, 2018), which also achieves great success. The advantage of deep neural networks (DNNs) lies in their powerful feature learning capability. Nevertheless, none of these methods explicitly exploits the skeleton topology information, which is highly informative for discriminating action categories. (Yan et al., 2018) first introduced Graph Convolutional Networks (GCNs) (Kipf and Welling, 2017) in the context of skeleton-based action recognition. They model skeleton data as a graph and extract the topology information with an adjacency matrix defined by the physical connections of the human body. However, the dependencies among skeleton joints vary across samples, especially when different actions are performed; topology information derived from such a fixed graph is relatively weak. Recently, there have been attempts (Shi et al., 2019c; Zhang et al., 2019b; Li et al., 2019a; Liu et al., 2020) to construct different graphs for different samples. They are basically inspired by the non-local operation (Buades et al., 2005), where a distance metric such as the inner product is utilized to measure the degree of dependency between two arbitrary skeleton joints. To some extent, the topology information is strengthened and the recognition performance improves. However, three issues remain for future improvement. 1) Non-local-based methods measure the dependency between two skeleton joints while ignoring the influence of all other contextual joints; they are essentially local methods. We argue that besides the two joints in question, the contextual information from the remaining joints is critical for learning a reliable and stable topology.
2) Using an arbitrary function such as the inner product to compute the dependency between two joints introduces strong prior knowledge, which may not be optimal. 3) Non-local-based methods treat the dependency between every pair of joints as undirected. Since the contextual information of each joint differs, the dependency should be directed. Moreover, for different query pairs, the similarities yielded by non-local-based methods may be almost identical (Cao et al., 2019).
In this work, we propose a hybrid GCN-CNN framework named Dynamic GCN, as shown in Figure 1. We aim to address the weaknesses of existing learning-based skeleton topology methods by leveraging the feature learning capability of CNNs. Specifically, a novel convolutional neural network named Context-encoding Network (CeN) is introduced to learn the skeleton topology automatically. It can be embedded into a graph convolutional (GConv) layer and learned end-to-end. Compared with non-local-based methods, CeN has the following advantages. 1) CeN takes full account of the contextual information of each joint from a global perspective. 2) CeN is completely data-driven and does not require any prior knowledge, which is more flexible. 3) CeN regards the dependency between each pair of joints as directed and yields directed graphs (asymmetric adjacency matrices), which can represent the dynamics of the skeleton system more accurately. 4) Compared with other topology-learning algorithms, CeN is extremely lightweight yet effective, and can be integrated into GCN-based methods easily.
Notably, CeN predicts a unique graph topology per sample as well as per GConv layer. This results in a dynamic rather than static graph topology, which enhances the capacity and expressiveness of the model.
For context modeling in CeN, various feature aggregation architectures are explored. As pointed out by (Li et al., 2018), for a two-dimensional (2D) convolutional layer, features are aggregated globally along the channel dimension and locally along the spatial dimensions. A skeleton sequence can be represented as a tensor of $C \times T \times V$, where $C$, $T$ and $V$ denote the feature, temporal and joint dimensions respectively. In the context of topology learning, we argue that the contextual information from the surrounding joints is the most important. To this end, in the proposed CeN, features are aggregated globally along the joint dimension by treating it as the channel. Ablation studies show that this choice is superior to the two alternatives, where either the temporal or the feature dimension is treated as the channel. Further discussion is given, which may guide future research on graph topology learning.
In terms of performance, CeN alone surpasses existing non-local-based topology learning methods significantly. Combining the dynamic topology predicted by CeN with static topology leads to further gains, which indicates that CeN is complementary to static topology. In terms of efficiency, CeN adds only ~7% extra FLOPs to the GCN-based baseline. With the joint-level feature aggregation mechanism, Dynamic GCN is ~ less computationally expensive than other GCN-based methods.
Moreover, by incorporating the spatial and motion modalities of skeleton sequences, our final model achieves state-of-the-art performance on three large-scale benchmarks, namely NTU-RGB+D, NTU-RGB+D 120 and Skeleton-Kinetics.
Our main contributions can be summarized as follows.
- We propose the Dynamic GCN framework, which leverages the complementary benefits of GCN’s topology learning and CNN’s feature learning capabilities.
- We introduce a lightweight context-encoding network, which learns context-enriched dynamic skeleton topology in a global way.
- We explore three alternative context modeling architectures, which may serve as a guideline for future research on graph topology learning.
- Our final model achieves state-of-the-art performance on three large-scale benchmarks for skeleton-based action recognition.
2. Related Works
Skeleton-based action recognition has been dominated by deep learning-based methods, which have proven more effective than methods based on hand-crafted features (Vemulapalli et al., 2014; Wang et al., 2012; Xia et al., 2012). We summarize recent works into two major categories, i.e. DNN-based methods and GCN-based methods.
2.1. DNN-based Methods
RNN is a straightforward model for sequence data. (Shahroudy et al., 2016) divides the LSTM cell into five part-cells corresponding to five body parts respectively. (Zhang et al., 2017) proposed a view adaptation scheme to automatically regulate observation viewpoints during the occurrence of an action. While RNNs aggregate temporal information sequentially, CNNs are able to encode spatiotemporal information jointly. (Li et al., 2017) treats a skeleton sequence as a pseudo-image, in which the 3D coordinates of each skeleton joint are regarded as the three channels of the image. A CNN then classifies the image directly into action categories. (Li et al., 2018) proposes a co-occurrence feature learning framework, which inspires the global contextual feature aggregation of the proposed CeN. The success of CNNs is attributed to their strong capability in feature representation. However, after converting the irregularly structured skeleton data into regularly structured images, the skeleton topology information is lost. Although CeN is also a CNN model, it is designed for learning the graph topology rather than final action recognition.
2.2. GCN-based Methods
GCN is able to effectively deal with irregularly structured graphs like skeleton data. Given skeleton data with $V$ joints, the graph topology can be well represented by an adjacency matrix $\mathbf{A} \in \mathbb{R}^{V \times V}$. The key to GCN-based methods lies in the design of the graph topology, i.e. $\mathbf{A}$. The most straightforward way is to define a fixed graph according to the physical connections of the human body (Figure 2(a)), as adopted in ST-GCN (Yan et al., 2018). To put learnable attention on the edges, they also create a learnable mask which is multiplied or added element-wise with the physical adjacency matrix (Figure 2(b)). Later, (Tang et al., 2018) adopted the concept of virtual connections as a supplement to the physical adjacency matrix.
In the above methods, the adjacency matrices are either pre-defined or fixed after training finishes. To make the graph topology more flexible, (Shi et al., 2019c; Liu et al., 2020) and (Li et al., 2019a) attempted to construct different graphs for different samples. Specifically, non-local-based operations are employed to infer the connectedness between two arbitrary joints. As shown in Figure 2(c), when measuring the dependency between two joints, only the features of the two joints themselves are taken into account, while the influence of the contextual joints is ignored. In contrast, in our Dynamic GCN, the features of all contextual joints are fully incorporated via the proposed CeN (Figure 2(d)). The graph learned in this way can be more robust and expressive.
3. Method
In this section, we first briefly recap GCN in the context of skeleton-based action recognition. Then we illustrate the details of the proposed Dynamic GCN framework.
3.1. GCN for Skeleton-based Action Recognition
In the context of skeleton-based action recognition, a GCN is typically composed of graph convolutional blocks (GC-blocks) and temporal convolutional blocks (TC-blocks). Given a skeleton sequence $\mathbf{X} \in \mathbb{R}^{C \times T \times V}$, GC-blocks and TC-blocks aggregate features along the joint ($V$) and temporal ($T$) dimensions respectively. Note that the number of joints $V$ is kept unchanged, while $C$ and $T$ normally differ in different GConv layers. Specifically, GC-blocks can be formulated as:

$$\mathbf{X}_{out} = \sum_{k=1}^{K} \mathbf{W}_k \mathbf{X}_{in} \mathbf{D}_k^{-1} \mathbf{A}_k, \qquad (1)$$

where $K$ denotes the number of spatial configurations according to ST-GCN (Yan et al., 2018). $\mathbf{X}_{in}$ and $\mathbf{X}_{out}$ denote the input and output feature maps respectively. $\mathbf{W}_k$ denotes the learnable kernels. For each spatial configuration $k$, $\mathbf{A}_k$ is the adjacency matrix and $\mathbf{D}_k$ is the diagonal node degree matrix for normalization. Specifically, the degree of node $i$ is computed by $\mathbf{D}_k^{ii} = \sum_j \mathbf{A}_k^{ij} + \alpha$, where $\mathbf{A}_k^{ij}$ denotes the element of the $i$-th row and $j$-th column in $\mathbf{A}_k$, and $\alpha$ is added to avoid the all-zero problem.
TC-blocks are normal convolutional layers with a kernel size of $K_t \times 1$ along the temporal dimension. To learn spatiotemporal features jointly, a GCN is typically built by stacking GC-blocks and TC-blocks alternately.
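As a concrete illustration, a GC-block as formulated above can be sketched in PyTorch as follows. This is a minimal sketch under assumed tensor shapes; the helper name `gc_block` is ours, not from the paper.

```python
import torch

def gc_block(x, A, W, alpha=1e-4):
    """Sketch of one GC-block: X_out = sum_k W_k X_in D_k^{-1} A_k.

    x: (N, C_in, T, V)  input features
    A: (K, V, V)        one adjacency matrix per spatial configuration
    W: (K, C_out, C_in) learnable 1x1 kernels, one per configuration
    alpha: small constant added to node degrees to avoid all-zero rows
    """
    out = 0.0
    for k in range(A.size(0)):
        deg = A[k].sum(dim=1) + alpha            # D_k^{ii} = sum_j A_k^{ij} + alpha
        A_norm = A[k] / deg[:, None]             # degree-normalized adjacency
        agg = torch.einsum('nctv,uv->nctu', x, A_norm)        # aggregate over joints
        out = out + torch.einsum('oc,nctv->notv', W[k], agg)  # 1x1 channel mapping
    return out
```

A TC-block would then simply be an ordinary 2D convolution applied over the temporal dimension of the `(N, C, T, V)` output.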
3.2. Dynamic GCN
In this section, we elaborately introduce the proposed Dynamic GCN framework. We first introduce the architecture of Context-encoding Network (CeN) for topology learning. Then we describe how to integrate CeN into GConv layers and build the complete framework.
3.2.1. Context-encoding Network
The adjacency matrix in GCN fully represents the graph topology, which corresponds to the dependencies among different skeleton joints. When the adjacency matrix is pre-defined with prior knowledge, the topology information is static and limited. Existing learning-based methods (Zhang et al., 2019b; Shi et al., 2019c; Li et al., 2019a; Liu et al., 2020) predict the dependency between two joints independently, and employ hand-crafted functions (e.g. the inner product) to map input features to dependencies. In contrast, we design an extremely lightweight convolutional neural network which takes the whole feature map as input and predicts the full adjacency matrix directly. Notably, contextual information along the joint, temporal and feature dimensions is well explored, yielding a more flexible and expressive graph topology.
The architecture of CeN is shown in Figure 3. Given a feature map $\mathbf{X} \in \mathbb{R}^{C \times T \times V}$ of an intermediate layer, the feature and temporal dimensions of each joint are first squeezed by two 1×1 convolutional layers named Conv-C and Conv-T. Then the joint dimension is treated as the channel of convolution, and a single 1×1 convolutional layer is utilized to map the $V$-dimensional vector into the $V \times V$ adjacency matrix. This design fully considers the impact of all other joints when measuring the dependency between every pair of joints. After that, the topology is represented as $K$ matrices with the shape of $V \times V$. In addition, L2 normalization is applied to each row of the adjacency matrices, which eases optimization.
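The squeeze-then-predict pipeline described above can be sketched as a small PyTorch module. This is a minimal sketch: the exact channel widths, the fixed-length temporal input, and the layer names are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CeN(nn.Module):
    """Sketch of the Context-encoding Network for topology learning."""

    def __init__(self, in_channels, num_frames, num_joints, num_configs=3):
        super().__init__()
        self.K, self.V = num_configs, num_joints
        # Conv-C: squeeze the feature dimension of each joint to 1
        self.conv_c = nn.Conv2d(in_channels, 1, kernel_size=1)
        # Conv-T: squeeze the temporal dimension (treated as channel) to 1
        self.conv_t = nn.Conv2d(num_frames, 1, kernel_size=1)
        # Treat the joint dimension as channel and predict K adjacency matrices
        self.conv_j = nn.Conv2d(num_joints, num_configs * num_joints * num_joints,
                                kernel_size=1)

    def forward(self, x):                    # x: (N, C, T, V)
        n = x.size(0)
        y = self.conv_c(x)                   # (N, 1, T, V)
        y = y.permute(0, 2, 1, 3)            # (N, T, 1, V): temporal as channel
        y = self.conv_t(y)                   # (N, 1, 1, V)
        y = y.permute(0, 3, 1, 2)            # (N, V, 1, 1): joint as channel
        y = self.conv_j(y)                   # (N, K*V*V, 1, 1)
        A = y.view(n, self.K, self.V, self.V)
        # L2-normalize each row of every predicted adjacency matrix
        return F.normalize(A, p=2, dim=-1)
```

Because the final layer is unconstrained, the predicted matrices are generally asymmetric, i.e. directed graphs.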
It is worth noting that CeN learns a dynamic and unique adjacency matrix per-sample as well as per-GConv layer. The graph topology is not shared among different samples even if they belong to the same action class. Rather than hand-crafted functions, the parameters in CeN are learned in a data-driven way without any prior assumption. Moreover, by treating the joint dimension as channel, global contextual information of all joints can be encoded by the trainable kernels.
The adjacency matrix predicted by CeN is fed into a GConv layer as its graph topology representation. With the help of L2 normalization, the node degree normalization in Eq. (1) is unnecessary. For simplicity, henceforth we use $\mathbf{A}$ to refer to either the normalized static adjacency matrix or the learned adjacency matrix.
3.2.2. Dynamic GConv Layer
Figure 4 shows the pipeline of the Dynamic GConv layer, which is the basic building block of the Dynamic GCN framework. Besides the dynamic graph predicted by CeN, static graphs are also incorporated. The static branch takes as input the skeleton features from the previous layer and the physical graph with a learnable parameterized mask. The dynamic branch takes as input the skeleton features and the context-enriched graph. Their outputs are then fused by element-wise summation.
In detail, according to Eq. (1), the static branch can be formulated as:

$$\mathbf{X}_{s} = \sum_{k=1}^{K} \mathbf{W}_k^{s} \mathbf{X}_{in} (\mathbf{A}_k + \mathbf{M}_k), \qquad (2)$$

where $\mathbf{A}_k$ denotes the physical graph derived from the physical connections of the human body. $\mathbf{M}_k$ denotes the parameterized mask, which serves as an attention over the physical graph. Following 2s-AGCN (Shi et al., 2019c), these two graphs are combined in an additive manner. $\mathbf{X}_{s}$ is the output feature of the static branch. The static branch extracts the static topology information of the skeleton data, which has been proven helpful for final prediction.
More importantly, the dynamic branch can be formulated as:

$$\mathbf{X}_{d} = \sum_{k=1}^{K} \mathbf{W}_k^{d} \mathbf{X}_{in} \mathbf{C}_k, \qquad (3)$$

where $\mathbf{C}_k$ is the dynamic graph predicted by CeN. $\mathbf{X}_{d}$ is the output of the dynamic branch, which extracts the global context-enriched topology of the skeleton data. Note that the learnable kernels are not shared between the static and dynamic branches.
After extracting the static and context-enriched topology features with the static and dynamic branches, a weighted summation is applied for fusion:

$$\mathbf{X}_{out} = \mathbf{X}_{s} + \lambda \mathbf{X}_{d}, \qquad (4)$$

where $\lambda$ is the weight used to balance $\mathbf{X}_{s}$ and $\mathbf{X}_{d}$, as they may differ in magnitude.
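Putting the static branch, the dynamic branch, and the weighted fusion together, a Dynamic GConv layer can be sketched as below. Function names and tensor shapes are our assumptions; the static graph is shared across samples, while the CeN-predicted graph is per-sample.

```python
import torch

def static_branch(x, A, M, W):
    """Static branch: physical graphs A plus learnable masks M, both (K, V, V)."""
    agg = torch.einsum('nctv,kuv->nkctu', x, A + M)
    return torch.einsum('koc,nkctv->notv', W, agg)

def dynamic_branch(x, C, W):
    """Dynamic branch: per-sample graphs C predicted by CeN, shape (N, K, V, V)."""
    agg = torch.einsum('nctv,nkuv->nkctu', x, C)
    return torch.einsum('koc,nkctv->notv', W, agg)

def dynamic_gconv(x, A, M, C, W_s, W_d, lam=1.0):
    """Fuse the two branches by weighted summation; kernels W_s and W_d are not shared."""
    return static_branch(x, A, M, W_s) + lam * dynamic_branch(x, C, W_d)
```

With `lam=0.0` the layer degenerates to the static branch alone, which is exactly the ablation setting compared later in the experiments.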
After the topology features are aggregated, a TC-block is appended for temporal feature aggregation. A shortcut connection is added after the TC-block. As shown in Figure 1, the complete Dynamic GCN framework is built by stacking 10 Dynamic GConv layers. A global average pooling layer and a fully-connected layer with softmax are appended after the GConv layers for final classification. The numbers of output channels in the GConv layers are kept the same as in ST-GCN (Yan et al., 2018).
3.2.3. Joint-level Feature Aggregation and Ensemble of Spatial-motion Modalities
In previous GCN-based methods (Yan et al., 2018; Shi et al., 2019c, b), the number of nodes in the graph is kept unchanged. That is, given a skeleton sequence with $V$ joints, all GConv layers share the same number of joints $V$. In contrast, we propose a very simple way to gradually aggregate features at the joint level. Specifically, we use a projection matrix $\mathbf{P} \in \mathbb{R}^{V' \times V}$ to shrink the joint dimension, where $V' = rV$ and $r < 1$ is the aggregation rate. We insert $\mathbf{P}$ into some intermediate layers of the graph convolutional network, so that subsequent layers operate on $V'$ joints. With joint-level feature aggregation, the FLOPs of the model are greatly reduced, while the performance is barely affected.
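The joint-level aggregation amounts to a single matrix product along the joint dimension. A minimal sketch follows; the shape convention for the projection is our assumption, and in the full model it would be a trainable parameter.

```python
import torch

def aggregate_joints(x, P):
    """Shrink the joint dimension: (N, C, T, V) -> (N, C, T, V') via projection P.

    P: (V', V) projection matrix with V' = round(r * V), r < 1.
    """
    return torch.einsum('nctv,uv->nctu', x, P)
```

For example, with V = 25 joints and an aggregation rate r = 0.6, each insertion of the projection reduces the joint dimension to 15, and the cost of all subsequent GConv layers shrinks accordingly.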
Moreover, to further boost the performance, we explore multiple modalities, namely joint, bone and their corresponding motion modalities, as Shi et al. (Shi et al., 2019b, c) did. The temporal motion of skeleton joints has been shown to be informative for discriminating fine-grained actions, such as “put on jacket” and “take off jacket”. For one person, each joint $i$ in frame $t$ can be denoted as $v_i^t$. The joint motion is then defined as the temporal movement of each joint, $m_i^t = v_i^{t+1} - v_i^t$.
Moreover, the bone is another type of spatial information, which has been proven important in previous works (Shi et al., 2019c). Each bone is defined as the vector pointing from a source joint to a target joint, i.e. $b_{i,j}^t = v_j^t - v_i^t$ for the bone from source joint $i$ to target joint $j$. Temporal motion can also be computed for the bone stream, in the same way as joint motion.
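Both modalities reduce to simple tensor differences. A sketch is given below; the zero-padding of the last frame and the bone pair list are our assumptions, with `pairs` standing in for the physical connections of the skeleton.

```python
import torch

def joint_motion(x):
    """Joint motion: v_{t+1} - v_t along the temporal axis of a (N, 3, T, V)
    tensor; the last frame is zero-padded to keep the sequence length."""
    m = torch.zeros_like(x)
    m[:, :, :-1, :] = x[:, :, 1:, :] - x[:, :, :-1, :]
    return m

def bones(x, pairs):
    """Bone vectors: target joint minus source joint for each (source, target)
    index pair."""
    src = x[..., [s for s, _ in pairs]]
    tgt = x[..., [t for _, t in pairs]]
    return tgt - src
```

The bone motion stream is obtained by simply applying `joint_motion` to the output of `bones`.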
For the ensemble of multiple modalities, we train one model per modality separately. The logits before softmax of the four models are then fused by summation.
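The late-fusion step is a plain sum of pre-softmax logits; a minimal sketch:

```python
import torch

def ensemble_predict(logits_list):
    """Sum the pre-softmax logits of the per-modality models, then softmax."""
    fused = torch.stack(logits_list, dim=0).sum(dim=0)
    return fused.softmax(dim=-1)
```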
4. Experiments
We evaluate Dynamic GCN on three large-scale skeleton-based action recognition benchmarks, namely NTU-RGB+D, NTU-RGB+D 120 and Skeleton-Kinetics. Extensive ablation studies are conducted to verify the impact of different components of the framework. Lastly, our final model is compared with the current state of the art.
4.1. Datasets
NTU-RGB+D (Shahroudy et al., 2016) is the most widely used dataset for skeleton-based action recognition. It contains 56,880 skeleton clips of 60 action classes. These clips were captured in a lab environment from three camera views. The annotations provide the 3D location of each joint in the camera coordinate system. There are 25 joints per subject, and each clip contains at most 2 subjects. We follow the standard evaluation protocols, namely cross-subject (C-Subject) and cross-view (C-View). In the C-Subject setting, 40,320 clips from 20 subjects are used for training, and the rest for testing. In the C-View setting, 37,920 clips captured from cameras 2 and 3 are used for training and those from camera 1 for testing.
NTU-RGB+D 120 (Liu et al., 2019a) is an extension of NTU-RGB+D, where the number of classes is expanded to 120 and the number of samples to 114,480. There are also two recommended evaluation protocols, namely cross-subject (C-Subject) and cross-setup (C-Setup). In the C-Subject setting, 63,026 clips from 53 subjects are used for training, and the remaining subjects are reserved for testing. In the C-Setup setting, 54,471 clips with even collection setup IDs are used for training, and the rest, with odd setup IDs, for testing.
Skeleton-Kinetics is derived from the Kinetics video action recognition dataset (Kay et al., 2017), which contains 300,000 video clips of 400 classes; each clip lasts around 10 seconds. Human skeletons were estimated by (Yan et al., 2018) from the RGB videos using the OpenPose toolbox (Cao et al., 2018). Each joint consists of 2D coordinates $(x, y)$ in the pixel coordinate system and a confidence score $c$, and is thus represented as a tuple $(x, y, c)$. There are 18 joints per person, and at most two subjects are considered in each frame. We follow the same train-validation split as (Yan et al., 2018): the training and validation sets contain 240,000 and 20,000 video clips respectively. Top-1 and top-5 accuracies are reported.
4.2. Implementation Details
Dynamic GCN is implemented in PyTorch (Paszke et al., 2017). To verify the effectiveness of CeN on a high-performance baseline, we follow the same data processing strategy and attention mechanism as MS-AAGCN (Shi et al., 2019b). SGD (Bottou, 2010) with a Nesterov momentum of 0.9 is used for optimization. The learning rate is initially set to 0.1 and reduced twice by a factor of 0.1, at the 35-th and 55-th epochs. On all three datasets, the model is trained for 65 epochs in total. The fusion weight $\lambda$ is set to 1. The joint aggregation rate $r$ is set to 0.6, and the projection $\mathbf{P}$ is inserted twice, after the 5-th and 8-th GConv layers. The input skeleton sequences are resized to a fixed length, i.e. 64 frames for both NTU-RGB+D and NTU-RGB+D 120, and 150 frames for Skeleton-Kinetics. For a fair comparison with methods based on ST-GCN, the number of spatial configurations $K$ is set to 3. To alleviate overfitting, weight decay is set to 0.0004. The batch size is 64, and the cross-entropy loss is employed.
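The optimization schedule above maps directly onto standard PyTorch components; the linear model below is only a placeholder for the actual network.

```python
import torch

model = torch.nn.Linear(10, 60)          # placeholder for the actual Dynamic GCN
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            nesterov=True, weight_decay=0.0004)
# Reduce the learning rate by a factor of 0.1 at the 35-th and 55-th epochs
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[35, 55], gamma=0.1)
criterion = torch.nn.CrossEntropyLoss()
```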
4.3. Ablation Studies
Ablation studies are conducted on the NTU-RGB+D dataset under the C-Subject setting. We first validate the effectiveness of the proposed CeN by comparing it with existing non-local-based methods. Next, two alternative context modeling architectures are compared. Finally, the contribution of the ensemble of spatial-motion modalities is reported.
4.3.1. Choosing the Optimal Position of Joint Aggregation
For the joint-level feature aggregation with the projection matrix , the aggregation rate is set to 0.6 empirically. To choose the appropriate GConv layers to apply the projection, we try a few configurations. As shown in Table 1, the position of applying barely affects the performance. When applying it to the 5-th & 8-th layers, it even slightly improves the accuracy from 88.9% to 89.2%.
4.3.2. Effectiveness of CeN
As shown in Figure 4, the Dynamic GConv layer is composed of a static and a dynamic branch. To measure the contribution of each component, we train the model using either the static or the dynamic branch only. To compare CeN with existing non-local-based methods, we investigate the case where all CeNs are replaced with the non-local operation following (Shi et al., 2019c). Meanwhile, to verify the importance of directed graphs, we compare CeN with a variant CeN* which predicts undirected graphs by forcing the adjacency matrices to be symmetric.
As shown in Table 2, whether or not the static branch is enabled, the directed graphs predicted by CeN achieve better performance than the undirected graphs predicted by CeN*. This clearly demonstrates the importance of directed graphs, which represent the dynamic characteristics of the skeleton more effectively. It also verifies our view that different joints prefer different contextual information, which is why learning-based dynamic graph topology is necessary.
In addition, using the static graph alone achieves an accuracy of 88.2%, while the non-local-based dynamic graph alone is inferior to the static graph; the CeN-predicted graph surpasses the non-local-based graph by 1.5%. When combining the static graph with a dynamic graph, the non-local-based graph barely improves the accuracy (by 0.2%), while the CeN-predicted graph significantly improves it from 88.2% to 89.2%, setting a new state-of-the-art for a single model.
4.3.3. Alternative Context-enriched Topology
When learning the skeleton topology, CeN enriches contextual information globally along the joint dimension. Two straightforward alternatives are to learn contextual information along the feature or temporal dimension rather than the joint dimension. As shown in Figure 5, they can be easily implemented by changing the order of the three convolutional layers. The two variants of CeN are referred to as CeN-F and CeN-T respectively.
Table 3 shows a comparison of CeN with its two variants. The performance of the model decreases by 0.7% and 0.8% for CeN-F and CeN-T respectively. The results verify our motivation that the surrounding joints are the most important for learning dynamic skeleton topology.
4.3.4. Contribution of Model Ensemble
| Spatial Fusion | Motion Fusion | Accuracy (%) |
| --- | --- | --- |
To fully exploit the skeleton data, we evaluate the ensemble of four modalities, i.e. joint, bone, joint motion and bone motion. The joint and bone modalities are combined as spatial information, and the joint motion and bone motion modalities as motion information. We evaluate spatial and motion information alone as well as the fusion of all modalities.
As shown in Table 4, with the ensemble of spatial modalities, the accuracy is significantly improved from 89.2% to 90.9%. After combining the motion information, we achieve a top-1 accuracy of 91.5% in the C-Subject setting of NTU-RGB+D, which is the new state-of-the-art for an ensemble model.
4.3.5. Comparison of FLOPs with Other GCN-based Methods
To highlight the efficiency of the proposed Dynamic GCN, we compare it with existing GCN-based methods in terms of FLOPs and accuracy. For simplicity, the FLOPs and accuracy of a single model are reported for all methods. As shown in Table 5, CeN brings only ~7% extra FLOPs to the baseline method, whose FLOPs increase from ~1.86G to ~1.99G. Compared with other methods, Dynamic GCN has a ~ advantage in terms of FLOPs, while achieving state-of-the-art performance.
| Method | FLOPs | Accuracy (%) |
| --- | --- | --- |
| ST-GCN (Yan et al., 2018) | ~3.56G | 81.5 |
| AS-GCN (Li et al., 2019b) | ~6.10G | 86.8 |
| MS-AAGCN (Shi et al., 2019b) | ~3.98G | 88.0 |
| MS-G3D Net (1 pathway) (Liu et al., 2020) | ~5.21G | 89.1 |
| MS-G3D Net (2 pathways) (Liu et al., 2020) | ~8.32G | 89.4 |
| Dynamic GCN (w/o CeN) | ~1.86G | 88.2 |
| Dynamic GCN (ours) | ~1.99G (+~7%) | 89.2 |
4.3.6. Visualization of Learned Topology
To investigate how well CeN learns the graph topology, we visualize the learned skeleton topology. Since CeN predicts a dynamic graph per sample, for clarity we compute the average adjacency matrix of the 5-th GConv block over a specific action category. If the action involves two people, only the first person is visualized.
Figure 6 visualizes the learned skeleton topologies of three actions. Response values larger than 0.4 are drawn as dark-orange lines; the blue lines are the physical connections of the human body, and the red dots are the joints. For the action Wipe Face (Figure 6(c)), the strong dependencies captured by CeN mainly involve the two hands, which is in line with intuition. For the action Jump (Figure 6(a)), CeN mainly attends to dependencies related to the knee and foot joints. For the action Walk (Figure 6(b)), CeN puts significant attention on the hands and feet. These results validate that CeN can capture reasonable potential topological patterns of different actions.
4.4. Comparison with the State-of-the-arts
We compare the proposed Dynamic GCN with other state-of-the-art methods on the NTU-RGB+D and Skeleton-Kinetics datasets in Table 6 and Table 7 respectively. For NTU-RGB+D 120, besides two recent methods, the baseline models mentioned in the original paper are also listed (Table 8).
| Method | C-Subject (%) | C-View (%) | Year |
| --- | --- | --- | --- |
| Lie Group (Vemulapalli et al., 2014) | 50.1 | 82.8 | 2014 |
| Deep LSTM (Shahroudy et al., 2016) | 60.7 | 67.3 | 2016 |
| ST-LSTM+TS (Liu et al., 2016) | 69.2 | 77.7 | 2016 |
| TCN (Kim and Reiter, 2017) | 74.3 | 83.1 | 2017 |
| VA-LSTM (Zhang et al., 2017) | 79.4 | 87.6 | 2017 |
| ST-GCN (Yan et al., 2018) | 81.5 | 88.3 | 2018 |
| DPRL (Tang et al., 2018) | 83.5 | 89.8 | 2018 |
| HCN (Li et al., 2018) | 86.5 | 91.1 | 2018 |
| GR-GCN (Gao et al., 2019) | 87.5 | 94.3 | 2019 |
| AGC-LSTM (Si et al., 2019) | 89.2 | 95.0 | 2019 |
| DGNN (Shi et al., 2019a) | 89.9 | 96.1 | 2019 |
| STGR-GCN (Li et al., 2019a) | 86.9 | 92.3 | 2019 |
| AS-GCN (Li et al., 2019b) | 86.8 | 94.2 | 2019 |
| 2s-AGCN (Shi et al., 2019c) | 88.5 | 95.1 | 2019 |
| MS-AAGCN (Shi et al., 2019b) | 90.0 | 96.2 | 2020 |
| MS-G3D Net (Liu et al., 2020) | 91.5 | 96.2 | 2020 |
| Dynamic GCN (ours) | 91.5 | 96.0 | |
As shown in Table 6, Dynamic GCN achieves state-of-the-art performance, i.e. 91.5% and 96.0% in the C-Subject and C-View settings of NTU-RGB+D respectively. Although our accuracy is on par with MS-G3D Net (Liu et al., 2020), our model is much more efficient, as analyzed in Table 5. ST-GCN (Yan et al., 2018) and HCN (Li et al., 2018) are representative GCN-based and CNN-based methods respectively; Dynamic GCN outperforms them by 10% and 5%.
| Method | Top-1 Acc. (%) | Top-5 Acc. (%) | Year |
| --- | --- | --- | --- |
| Deep LSTM (Shahroudy et al., 2016) | 16.4 | 35.3 | 2016 |
| TCN (Kim and Reiter, 2017) | 20.3 | 40.0 | 2017 |
| ST-GCN (Yan et al., 2018) | 30.7 | 52.8 | 2018 |
| STGR-GCN (Li et al., 2019a) | 33.6 | 56.1 | 2019 |
| AS-GCN (Li et al., 2019b) | 34.8 | 56.5 | 2019 |
| 2s-AGCN (Shi et al., 2019c) | 36.1 | 58.7 | 2019 |
| MS-AAGCN (Shi et al., 2019b) | 37.8 | 61.0 | 2020 |
| MS-G3D Net (Liu et al., 2020) | 38.0 | 60.9 | 2020 |
| Dynamic GCN (ours) | 37.9 | 61.3 | |
As for the Skeleton-Kinetics dataset (Table 7), our model also achieves state-of-the-art performance (37.9% top-1 and 61.3% top-5 accuracy).
For the recently released NTU-RGB+D 120 dataset, so far few state-of-the-art methods have been evaluated on it. We compare the Dynamic GCN with the baseline methods mentioned in the original paper. As shown in Table 8, our model achieves a top-1 accuracy of 87.3% and 88.6% in the C-Subject and C-Setup settings respectively, surpassing the baseline methods by a large margin. Compared with MS-G3D Net (Liu et al., 2020), we achieve slightly better accuracy.
| Method | C-Subject (%) | C-Setup (%) | Year |
| --- | --- | --- | --- |
| Dynamic Skeleton (Hu et al., 2015) | 50.8 | 54.7 | 2015 |
| PA-LSTM (Shahroudy et al., 2016) | 25.5 | 26.3 | 2016 |
| ST-LSTM+TS (Liu et al., 2016) | 55.7 | 57.9 | 2016 |
| FSNet (Liu et al., 2019b) | 59.9 | 62.4 | 2019 |
| 2s-ALSTM (Liu et al., 2017) | 61.2 | 63.3 | 2017 |
| MT CNN (Ke et al., 2018) | 62.2 | 61.8 | 2018 |
| BPEM (Liu and Yuan, 2018) | 64.6 | 66.9 | 2018 |
| 2s-AGCN (Shi et al., 2019c) | 82.9 | 84.9 | 2019 |
| MS-G3D Net (Liu et al., 2020) | 86.9 | 88.4 | 2020 |
| Dynamic GCN (ours) | 87.3 | 88.6 | |
5. Discussion
5.1. On Hybrid GCN-CNN
Compared with GCN alone, combining GCN and CNN into a hybrid GCN-CNN framework is indeed beneficial. In this work, the proposed Dynamic GCN is merely a simple form of the hybrid GCN-CNN architectures, in which CNN is used for topology learning. The capacity of GCN is greatly expanded with a flexible and expressive graph topology.
Beyond the task of skeleton-based action recognition, the hybrid GCN-CNN can be applied in many fields, such as social network modeling. Although the proposed Dynamic GCN is generally proven effective, the form of the hybrid GCN-CNN requires further exploration for different tasks. For example, normal convolutions and graph convolutions could be stacked sequentially or in parallel for better skeleton feature learning. We leave this as future work.
5.2. On Learned Graph Topology
For skeleton data, the physical graph enjoys the benefit of being definite and explainable. According to our experiments in Table 2, its performance is reasonable and competitive. Nevertheless, the graph learned by CeN is complementary to the physical graph, which indicates that there exist potential connections that are informative but missed by the physical connections. Our visualization of the learned topology in Figure 6 verifies this point. We believe the proposed context-enriched topology learning is also applicable to other graph-structured data such as social networks, where the pre-defined topology may be noisy, incomplete or unreliable.
6. Conclusion
How to effectively extract the topology of skeleton data for graph convolutional networks is the major challenge in skeleton-based action recognition. In this paper, we propose the Dynamic GCN framework, which leverages the advantages of GCN and CNN. The context-encoding network is designed to learn a global context-enriched topology. Extensive experiments on three large-scale datasets validate the superiority of the proposed method. The significant reduction of FLOPs compared with existing methods makes our model more competitive for deployment, in particular on edge devices where computing power is limited. The newly introduced hybrid GCN-CNN architecture as well as the technique of context-enriched topology learning may provide insights for future research on skeleton data analysis and beyond.
- Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, pp. 177–186. Cited by: §4.2.
- A non-local algorithm for image denoising. In CVPR, Vol. 2, pp. 60–65. Cited by: §1.
- GCNet: non-local networks meet squeeze-excitation networks and beyond. arXiv preprint arXiv:1904.11492. Cited by: §1.
- OpenPose: realtime multi-person 2d pose estimation using part affinity fields. arXiv preprint arXiv:1812.08008. Cited by: §1, §4.1.
- Cascaded pyramid network for multi-person pose estimation. In CVPR, pp. 7103–7112. Cited by: §1.
- Optimized skeleton-based action recognition via sparsified graph regression. In Proceedings of the 27th ACM International Conference on Multimedia, pp. 601–610. Cited by: Table 6.
- Jointly learning heterogeneous features for rgb-d activity recognition. In CVPR, pp. 5344–5352. Cited by: Table 8.
- The kinetics human action video dataset. arXiv preprint arXiv:1705.06950. Cited by: §4.1.
- Learning clip representations for skeleton-based 3d action recognition. IEEE Transactions on Image Processing 27 (6), pp. 2842–2855. Cited by: Table 8.
- Interpretable 3d human action analysis with temporal convolutional networks. In CVPRW, pp. 1623–1631. Cited by: Table 6, Table 7.
- Semi-supervised classification with graph convolutional networks. ICLR. Cited by: §1.
- Spatio-temporal graph routing for skeleton-based action recognition. Cited by: §1, §2.2, §3.2.1, Table 6, Table 7.
- Skeleton-based action recognition with convolutional neural networks. In ICMEW, pp. 597–600. Cited by: §1, §2.1.
- Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. IJCAI. Cited by: §1, §1, §2.1, §4.4, Table 6.
- Actional-structural graph convolutional networks for skeleton-based action recognition. In CVPR, pp. 3595–3603. Cited by: Table 5, Table 6, Table 7.
- NTU RGB+D 120: a large-scale benchmark for 3d human activity understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §4.1.
- Skeleton-based online action prediction using scale selection network. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: Table 8.
- Spatio-temporal lstm with trust gates for 3d human action recognition. In ECCV, pp. 816–833. Cited by: §1, Table 6, Table 8.
- Skeleton-based human action recognition with global context-aware attention lstm networks. IEEE Transactions on Image Processing 27 (4), pp. 1586–1599. Cited by: Table 8.
- Recognizing human actions as the evolution of pose estimation maps. In CVPR, pp. 1159–1168. Cited by: Table 8.
- Disentangling and unifying graph convolutions for skeleton-based action recognition. arXiv preprint arXiv:2003.14111. Cited by: §1, §2.2, §3.2.1, §4.4, §4.4, Table 5, Table 6, Table 7, Table 8.
- Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, Cited by: §4.2.
- NTU RGB+D: a large scale dataset for 3d human activity analysis. In CVPR, pp. 1010–1019. Cited by: §1, §2.1, §4.1, Table 6, Table 7, Table 8.
- Skeleton-based action recognition with directed graph neural networks. In CVPR, pp. 7912–7921. Cited by: Table 6.
- Skeleton-based action recognition with multi-stream adaptive graph convolutional networks. arXiv preprint arXiv:1912.06971. Cited by: §3.2.3, §3.2.3, §4.2, Table 5, Table 6, Table 7.
- Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In CVPR, pp. 12026–12035. Cited by: §1, §2.2, §3.2.1, §3.2.2, §3.2.3, §3.2.3, §3.2.3, §4.3.2, Table 6, Table 7, Table 8.
- An attention enhanced graph convolutional lstm network for skeleton-based action recognition. In CVPR, Cited by: Table 6.
- Deep progressive reinforcement learning for skeleton-based action recognition. In CVPR, pp. 5323–5332. Cited by: §2.2, Table 6.
- Human action recognition by representing 3d skeletons as points in a lie group. In CVPR, pp. 588–595. Cited by: §2, Table 6.
- Mining actionlet ensemble for action recognition with depth cameras. In CVPR, pp. 1290–1297. Cited by: §2.
- Convolutional pose machines. In CVPR, pp. 4724–4732. Cited by: §1.
- View invariant human action recognition using histograms of 3d joints. In CVPRW, pp. 20–27. Cited by: §2.
- Spatial temporal graph convolutional networks for skeleton-based action recognition. In Thirty-Second AAAI, Cited by: §1, §2.2, §3.1, §3.2.2, §3.2.3, §4.1, §4.4, Table 5, Table 6, Table 7.
- View adaptive recurrent neural networks for high performance human action recognition from skeleton data. In ICCV, pp. 2117–2126. Cited by: §1, §2.1, Table 6.
- View adaptive neural networks for high performance skeleton-based human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §1.
- Semantics-guided neural networks for efficient skeleton-based human action recognition. arXiv preprint arXiv:1904.01189. Cited by: §1, §3.2.1.
- Adding attentiveness to the neurons in recurrent neural networks. In ECCV, pp. 135–151. Cited by: §1.