
View-Invariant Skeleton-based Action Recognition via Global-Local Contrastive Learning

Skeleton-based human action recognition has been drawing more interest recently due to its low sensitivity to appearance changes and the accessibility of more skeleton data. However, even the 3D skeletons captured in practice are still sensitive to the viewpoint and direction, given the occlusion of different human-body joints and the errors in human joint localization. Such view variance of skeleton data may significantly affect the performance of action recognition. To address this issue, we propose in this paper a new view-invariant representation learning approach, without any manual action labeling, for skeleton-based human action recognition. Specifically, we leverage the multi-view skeleton data simultaneously taken for the same person in the network training, by maximizing the mutual information between the representations extracted from different views, and then propose a global-local contrastive loss to model the multi-scale co-occurrence relationships in both spatial and temporal domains. Extensive experimental results show that the proposed method is robust to the view difference of the input skeleton data and significantly boosts the performance of unsupervised skeleton-based human action recognition methods, resulting in new state-of-the-art accuracies on two challenging multi-view benchmarks, PKUMMD and NTU RGB+D.


I Introduction

Human action recognition plays an important role in video surveillance, human-machine interaction, and sports video analysis [10]. Different modalities of information, such as appearance, depth, optical flow, and body skeletons [3], have been used for human action recognition. Among them, the skeleton consists of compact positions of the major body joints [47] and can provide highly effective information on the human motion underlying different actions [15, 4]. Skeleton-based action recognition is robust to appearance inconsistencies, different environments, and varying illumination, and is becoming more accessible with the rapid development of sensor technology for capturing skeletons.

3D skeletons simultaneously captured for the same person from different views are usually different [46], as shown in Figure 1, even if we try to transform them to the same coordinates. There are many reasons for this phenomenon, such as altered reference coordinates, different occluded joints in different views, and inaccurate human pose estimation. In practice, the skeleton data used for action recognition may be captured from different and even time-varying views, and such view variance can easily lead to incorrect feature representation and recognition [46, 27]. View-invariant action recognition thus remains a challenging problem.

Fig. 1:

An illustration of the view variance of skeleton data: 3D skeleton data simultaneously captured for the same person, but from different views, are usually different due to altered reference coordinates, different occluded joints, or inaccurate human pose estimation.

A commonly used strategy to improve view invariance in skeleton-based action recognition is to perform frame-level or sequence-level pre-processing of skeleton transformation [23, 32, 24, 46]. However, frame-level pre-processing, which transforms the skeleton to the body center with the upper-body orientation aligned, results in a partial loss of relative motion information. To better preserve the motion information, sequence-level pre-processing performs the same transformation on all frames with the parameters derived from the first frame. However, since the human body is non-rigid, the body plane defined by specific joints is not always suitable for orientation alignment. Furthermore, it is almost impossible to eliminate the structural differences between viewpoints through these transformations. Manually designing a view-invariant representation of the action is another approach, such as the displacement of joints within one frame or between frames [43], the histogram of joint orientations [41], and higher-level features like Lie group representations [36] and the covariance matrix of joints [12], but such features can only deal with small view changes. While deep learning-based methods vastly outperform traditional hand-crafted feature-based methods, most of them simply rely on training on a large number of labeled samples taken from various views and cannot achieve actual view invariance in the underlying representation learning.

Most state-of-the-art methods for skeleton-based action recognition use supervised deep learning, which requires large-scale annotated data samples for training [26, 7]. To address this problem, several recent studies attempt to leverage unsupervised learning for skeleton-based action recognition [48, 21, 33]. In these studies, deep representations are learned for skeleton data sequences through tasks like human motion prediction or regeneration, without using any action labels for supervision. For algorithm evaluation, a simple linear classifier is then trained for action recognition on the learned representations using the action labels of the training data. At present, there is still a clear performance gap between the supervised and unsupervised methods for skeleton-based action recognition. One primary cause is that existing unsupervised skeleton representation learning cannot effectively exclude factors irrelevant to action recognition, such as view variation, skeleton deformation, and noise. For this reason, we propose a new approach to enhancing representation learning by tackling view variation in skeleton-based action recognition without using any manual action labels. Since the training data are unlabeled and were already formatted and associated across views during data collection and preprocessing, we follow previous studies [19, 14], in which a surrogate task is designed to exploit the inherent structure of unlabeled multi-view data for representation learning, in calling our proposed method unsupervised in this paper.

In this paper, we adopt contrastive representation learning to enhance view-invariant unsupervised skeleton-based action recognition. More specifically, we develop a multi-view spatial-temporal graph (ST-Graph) contrastive representation learning (CRL) approach, in which the training loss function is defined to maximize the mutual information between features learned from skeletons that are simultaneously captured for the same person from different views. Such multi-view representations of the same person are pulled closer to each other in the embedding space through network training. Furthermore, the proposed loss takes a global-local contrastive form, which can also model the multi-scale co-occurrence relationships between the spatial and temporal domains. In the testing stage, as in previous works, we take only one skeleton sequence captured from an unknown view as the input of the network for skeleton-based action recognition. We conduct comprehensive evaluation and analysis in our experiments to demonstrate that the proposed method can better learn view-invariant representations for improving the performance of skeleton-based human action recognition. The proposed method achieves new state-of-the-art performance for unsupervised skeleton-based action recognition on two widely used multi-view benchmarks under the linear evaluation protocol.

The main contributions of this paper are as follows:

  • A contrastive learning framework for explicit learning of view-invariant representations for skeleton-based action recognition is proposed.

  • We introduce a global-local spatial-temporal graph contrastive loss, combined with task uncertainty, to model the multi-scale co-occurrence relationship between the spatial and temporal domains.

  • Compared with existing methods that do not use ground-truth action labels in training, the proposed algorithm significantly boosts the performance on two widely used benchmarks, PKUMMD and NTU RGB+D.

The remainder of the paper is organized as follows. Section 2 gives a brief review of the related work on skeleton-based action recognition and contrastive learning. In Section 3, we describe our proposed multi-view contrastive representation learning approach. Section 4 describes the benchmark datasets and experimental setting and reports the experiment results, followed by a brief conclusion in Section 5.

II Related Work

II-A Skeleton-based Action Recognition

Skeleton-based action recognition is a very active and burgeoning area of research, due to its effective representation of motion dynamics. Much of the traditional work focuses on designing effective handcrafted features, especially joint or body-part-based features [36, 43, 41, 12]. New methods have recently emerged in the literature that address skeleton-based action representation with deep learning, including Recurrent Neural Networks (RNN), Convolutional Neural Networks (CNN), and Graph Convolutional Networks (GCN). Most of them aim to find more effective ways to model the temporal and spatial information of skeleton sequences. The structure of RNNs is suitable for processing sequential data, and prior works have shown that RNNs are especially good at handling varying-length skeleton sequences [37]. To extract discriminative spatial and temporal features of different actions, Song et al. [32] propose a spatial and temporal attention module on top of an RNN to assign different importance to each joint and frame within a sequence. CNNs have the intrinsic ability to learn structural information from 2D or 3D grids and have also been used to encode skeleton sequences as pseudo-images for spatial-temporal representation learning [16]. Liu et al. [25] first transform a skeleton sequence into a series of color images, then enhance local visual and motion patterns through mathematical morphology, and finally propose a multi-stream CNN-based model to extract and fuse deep features from the enhanced color images. GCNs generalize CNNs to graphs and can well represent joint-based skeleton data; using a GCN can automatically capture the patterns embedded in the spatial configuration of the joints as well as their temporal dynamics [42, 26, 7]. Cheng et al. [7] adopt novel shift graph operations and lightweight point-wise convolutions to replace regular graph convolutions, which reduces computation cost and provides flexible receptive fields for both the spatial and temporal graphs.

To avoid the laborious labeling of large-scale skeleton data, unsupervised skeleton-based action recognition has been studied by many researchers [21, 33]. Feature learning is performed with an encoder-decoder structure whose input is a masked or original skeleton sequence, and the goal of training is to reconstruct the skeleton sequence from the encoded features. For the same reason, in this paper we focus on enhancing view-invariant representation learning for skeleton-based action recognition, without any manual action labeling, using a GCN-based network.

II-B Contrastive Learning

Contrastive learning aims to pull together an anchor and a “positive” sample in embedding space while pushing the anchor apart from many “negative” samples [17]. Contrastive losses are therefore adopted to learn effective representations for pretext tasks in an unsupervised fashion. Closely related to contrastive learning is the family of losses based on metric distance learning or triplets, which depend on class labels to supervise the choice of positive and negative pairs [29]. The key distinction between triplet losses and contrastive losses is that the former use exactly one positive and one negative pair per anchor, with the positive pair chosen from the same class and the negative pair chosen from different classes. Contrastive learning generally uses just one positive pair for each anchor sample, selected using either co-occurrence [11, 9] or data augmentation [6]. The introduction of contrastive learning has led to a surge of interest in unsupervised visual representation learning [6]. Wu et al. [40] maximize the distinction between instances via a novel nonparametric softmax formulation and use a memory bank to store the instance representation vectors. For effective similarity measurement between samples in a low-dimensional embedding space, other work explores the use of in-batch samples for negative sampling instead of a memory bank [44, 13]. Recently, researchers have attempted to relate the success of their methods to the maximization of mutual information between latent representations [2, 9].

In probability theory and information theory, the mutual information of two random variables is a measure of their mutual dependence [39]. It has important applications to contrastive learning [6]. By maximizing the mutual information between node and graph representations, some works focusing on general graphs have achieved state-of-the-art results in unsupervised node and graph classification tasks [35, 34]. Maximizing the mutual information between features extracted from multiple views of a shared context is analogous to human learning, in which representations of observations generated by a shared cause are driven by a desire to predict other related observations [2]. Aiming at a specific graph structure and task, we introduce a multi-view spatial-temporal graph contrastive representation learning method for view-invariant skeleton-based action recognition in this paper.

Fig. 2: The overall pipeline of the proposed multi-view ST-Graph CRL for view-invariant skeleton-based action recognition. The two contrasted inputs are skeleton sequences from any two views of the same multi-view sample, while the negatives come from any view of a different multi-view sample. The approach pulls together, in the embedding space, skeletons simultaneously captured for the same person from different views, while pushing apart the others.

III Multi-View ST-Graph Contrastive Representation Learning

Inspired by recent contrastive learning algorithms, we propose an approach to learning view-invariant representations, without any manual action labeling, by maximizing the mutual information between skeleton sequences that are simultaneously taken for the same person but from different views, via a global-local contrastive loss in the latent space. The overall pipeline of the proposed approach is illustrated in Figure 2. Specifically, a stochastic data augmentation module randomly transforms any given data example to encourage learning a more robust representation for the downstream task. Then, an ST-GCN structural encoder extracts representation vectors from the augmented data examples. We maximize the agreement between the representations of samples simultaneously taken for the same person but from different views at both a global level and a local level. At the global level, a small neural network projection head maps the representations to a latent space in which a global contrastive loss is applied. At the local level, as shown in Figure 3, an ST-Graph partitioning function splits the graph structural representation of the whole skeleton sequence into multiple local subgraphs, and a projection head then maps these representations to a latent space in which a local contrastive loss is applied. Moreover, to effectively combine the global and local contrastive losses, we adjust their relative weights based on task uncertainty.

Fig. 3: A more detailed illustration of the local contrastive loss in Figure 2. ST-Graph structural representations from any two views of the same multi-view skeleton sequence form positive pairs, while representations from a different multi-view skeleton sequence serve as negatives. This loss pulls together, in the embedding space, regions of skeleton sequences simultaneously captured for the same person from different views, while pushing apart the others.

Before getting into the details of the approach, we state the main notation. Similar to previous studies [7, 47], we organize the skeleton sequence of an action sample as an undirected spatial-temporal graph $G = (V, E)$, where $V$ denotes the set of vertices, corresponding to $T$ frames with $J$ body joints per frame, and $E$ is the set of edges, indicating the connections between nodes. We then represent a multi-view skeleton sample as $x = \{x^k\}_{k=1}^{K}$, where $K$ represents the number of viewpoints, which could be as many as needed, and $k$ indicates the specific $k$-th viewpoint. For multiple multi-view skeleton samples, we also use $x_n^k$ to denote the $k$-th view of the $n$-th multi-view skeleton sample $x_n$.

III-A Multi-view skeletal data augmentation.

Data augmentation aims to create novel and realistically plausible data by applying certain transformations to the original training data without affecting their semantic meaning. It has been demonstrated that contrastive learning usually needs stronger data augmentation than supervised learning [6]. Meanwhile, for specific graphs, certain data augmentations may be more effective than others [45]. Let an augmented skeleton sequence be $\tilde{x} = \tau(x)$, where $\tau$ is the augmentation function. In this paper, we apply the temporal subgraph as the data augmentation for multi-view CRL, defined as follows: it samples a segment from $x$ along the temporal dimension. As the length of a skeleton sequence is fixed to 100 frames, we randomly sample 95 consecutive frames and then extend them to 100 frames by linear interpolation. This data augmentation increases the robustness of action recognition when the starting and ending frames of the action cannot be accurately determined and when the skeleton sequences captured from different views are not perfectly temporally aligned.
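The crop-and-resample step above can be sketched in NumPy; this is a minimal illustration, not the authors' implementation, and the function name and the per-channel `np.interp` resampling are our own choices:

```python
import numpy as np

def temporal_subgraph(seq, crop_len=95, out_len=100):
    """Randomly crop `crop_len` consecutive frames from a (T, V, C) skeleton
    sequence, then linearly interpolate back to `out_len` frames."""
    T, V, C = seq.shape
    start = np.random.randint(0, T - crop_len + 1)
    crop = seq[start:start + crop_len]              # (crop_len, V, C)
    src = np.arange(crop_len, dtype=float)          # original frame indices
    dst = np.linspace(0.0, crop_len - 1, out_len)   # resampled frame indices
    out = np.empty((out_len, V, C), dtype=seq.dtype)
    for v in range(V):
        for c in range(C):
            out[:, v, c] = np.interp(dst, src, crop[:, v, c])
    return out
```

Applied to a (100, 25, 3) sequence with the paper's settings, this returns another (100, 25, 3) sequence whose 95-frame crop has been stretched back to 100 frames.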

III-B ST-GCN structural encoder.

The ultimate goal of ST-Graph CRL is to train a powerful skeleton sequence encoder that yields view-invariant representations for skeleton-based action recognition without any manual action labeling. To effectively model the co-occurrence relationships among joints in both spatial and temporal domains, we apply an ST-GCN structural encoder $f(\cdot)$, which extracts a representation $H = f(\tilde{x})$ from the augmented skeleton sequence $\tilde{x}$. It contains two parts: spatial graph convolution and temporal graph convolution.

For spatial graph convolution, the neighbor set of joints is defined by adjacency matrices according to $E$ and is typically divided into 3 partitions: the centripetal group, containing neighboring nodes closer to the skeleton center; the node itself; and otherwise the centrifugal group. For an individual skeleton, let $f_{in} \in \mathbb{R}^{J \times C_{in}}$ and $f_{out} \in \mathbb{R}^{J \times C_{out}}$ denote the input and output features during processing, where $C_{in}$ and $C_{out}$ are the input and output feature dimensions. The graph convolution is computed as:

$$f_{out} = \sum_{p \in \mathcal{P}} \Lambda_p^{-\frac{1}{2}} A_p \Lambda_p^{-\frac{1}{2}} f_{in} W_p \qquad (1)$$

where $\mathcal{P}$ denotes the set of spatial partitions, $\Lambda_p^{-\frac{1}{2}} A_p \Lambda_p^{-\frac{1}{2}}$ is the normalized adjacency matrix with $\Lambda_p^{ii} = \sum_j A_p^{ij} + \alpha$, and $\alpha$ is set to 0.001 to avoid empty rows. $W_p$ is the weight of the convolution for each partition group. For the temporal dimension, we construct the temporal graph by connecting identical joints in consecutive frames and use regular 1D convolution along the temporal dimension as the temporal graph convolution.

The ST-GCN structural encoder comprises a series of dynamic spatial-temporal graph convolution blocks stacked one above the other. Many specific models with subtle differences exist in this form [42, 31, 7]. The proposed approach does not place any restriction on the ST-GCN structural encoder, as long as it maintains the spatial-temporal graph structure of the features. In our implementation, we adopt the network recently proposed in [7] as the ST-GCN structural encoder.
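A single partition-wise spatial graph convolution of the kind described above can be sketched in NumPy as follows; this is a minimal illustration under the assumption of symmetric degree normalization with the small constant alpha, with illustrative names rather than the paper's code:

```python
import numpy as np

def spatial_graph_conv(f_in, partitions, weights, alpha=1e-3):
    """One spatial graph convolution step over J joints.
    f_in: (J, C_in) joint features; partitions: list of (J, J) adjacency
    matrices (root / centripetal / centrifugal); weights: matching list of
    (C_in, C_out) weight matrices. alpha keeps empty rows from dividing by zero."""
    J = f_in.shape[0]
    f_out = np.zeros((J, weights[0].shape[1]))
    for A, W in zip(partitions, weights):
        deg = A.sum(axis=1) + alpha                    # Lambda diagonal per partition
        d_inv_sqrt = 1.0 / np.sqrt(deg)
        A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
        f_out += A_norm @ f_in @ W                     # aggregate neighbors, then project
    return f_out
```

Stacking such layers, interleaved with 1D temporal convolutions over each joint's frame sequence, gives the overall shape of an ST-GCN block.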

III-C ST-Graph partitioning function.

As stated in Li et al. [20], the graph convolution operation can be considered Laplacian smoothing of node features over the graph topology. Laplacian smoothing computes new node features as the weighted average of a node's own features and those of its neighbors, which helps nodes in the same cluster learn similar representations. Nevertheless, it may also lead to the over-smoothing problem and make nodes indistinguishable as the number of network layers increases. Meanwhile, it may concentrate more on node features and leave the learned embeddings lacking structural information. In short, ST-GCN can handle most simple cases but may ignore local details on a complicated graph.

Given the above problems, we enhance the representation by giving more consideration to the specific characteristics of local regions. Specifically, we include an ST-Graph partitioning function $s(\cdot)$ to split the feature of the whole skeleton sequence $H_n$ into multiple local subgraphs $\{H_{n,m}\}_{m=1}^{M}$, where $M$ represents the number of generated subgraphs, and $n$ and $m$ indicate the sample index and subgraph index, respectively. The choice of partitioning strategy has a strong impact not only on the performance of recognition networks but also on their design [8]. Several graph partitioning algorithms have been developed; they are often either edge-cut [1], which evenly partitions vertices and cuts edges, or vertex-cut [5], which evenly partitions edges by replicating vertices. There are also hybrid algorithms [18] that cut both edges and vertices. In this paper, we adopt two simple rule-based edge-cut partitioning strategies to segment the skeleton spatial-temporal feature graph. Specifically, vertices of the ST-Graph are evenly partitioned into segments along the spatial dimension or the temporal dimension by cutting edges, as shown in Figure 4.
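Because both rule-based strategies simply cut edges evenly along one axis, the partitioning amounts to an even split of the feature tensor. A minimal sketch, assuming the ST-Graph features are stored as a dense (frames, joints, channels) array and with an illustrative function name:

```python
import numpy as np

def partition_st_graph(feat, num_parts=5, axis=0):
    """Evenly split an ST-Graph feature tensor of shape (T, V, C) into
    `num_parts` subgraphs by cutting edges along axis 0 (temporal)
    or axis 1 (spatial)."""
    return np.array_split(feat, num_parts, axis=axis)
```

With `num_parts=5`, a (100, 25, C) feature graph yields five (20, 25, C) temporal subgraphs or five (100, 5, C) spatial subgraphs.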

Fig. 4: ST-Graph spatial or temporal partitioning strategies. The spatial-temporal feature graph is evenly partitioned along different dimensions by cutting edges.

III-D Projection head.

Recent work by Chen et al. [6] found that mapping features to another latent space before the contrastive loss calculation can be more effective. In this way, the features before the nonlinear projection are the learned representations, and the information loss on raw data induced by the contrastive loss is relieved. Therefore, in this paper, the representations $H_n^k$ and $H_{n,m}^k$ are each mapped to another latent space through an MLP with one hidden layer. We call this module the projection head and add it to the global and local contrastive learning subnetworks. Meanwhile, a global pooling $P(\cdot)$ is performed on $H_n^k$ and $H_{n,m}^k$ to aggregate the node features of each ST-Graph into a fixed-dimension feature vector before the projection head. Formally, the process is defined as:

$$z_n^k = g_{glb}\big(P(H_n^k)\big) = W_{glb}^{(2)}\,\sigma\big(W_{glb}^{(1)} P(H_n^k)\big), \quad u_{n,m}^k = g_{loc}\big(P(H_{n,m}^k)\big) = W_{loc}^{(2)}\,\sigma\big(W_{loc}^{(1)} P(H_{n,m}^k)\big) \qquad (2)$$

where $g_{glb}$ and $g_{loc}$ represent the global and local projection heads, $z_n^k$ and $u_{n,m}^k$ are the global and local representations in the latent space, $P$ is a global pooling function, $\sigma$ is a ReLU nonlinearity, and the $W$'s are the learned weights of the MLPs. Note that the output of $P$ is denoted $h_n^k$, which is the representation we learn for the skeleton sequence based on ST-Graph CRL.
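The pooling-then-MLP step can be sketched as follows; mean pooling for the global pooling operator and the bias-free two-layer MLP are our own illustrative assumptions, since the exact choices are implementation details:

```python
import numpy as np

def global_pool(H):
    """Mean-pool node features of an ST-Graph: (T, V, C) -> (C,)."""
    return H.mean(axis=(0, 1))

def projection_head(h, W1, W2):
    """Two-layer MLP with a ReLU hidden layer mapping the pooled
    representation h to the contrastive latent space z."""
    return W2 @ np.maximum(W1 @ h, 0.0)
```

After pre-training, the pooled vector `h` is what feeds the downstream linear classifier, while `projection_head` is discarded.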

III-E Global-Local contrastive learning.

A global representation can well capture the common knowledge of action patterns among all the regions in the skeleton sequence and hence generalizes well, while a local representation targets the personalization of individual regions. As mentioned above, we propose several ST-Graph partitioning strategies to segment the graph into multiple local subgraphs. In this section, a global-local contrastive loss is proposed to effectively model the multi-scale co-occurrence relationship between the spatial and temporal domains in the ST-Graph. For this, we define different positive pairs in the global and local scenarios and maximize the consistency between the positive pairs against the corresponding negative pairs using global and local contrastive loss functions. The two contrastive loss functions are then combined with task uncertainty to balance the trade-off between generalization and personalization of the representation.

Global contrastive loss. Given two global representations $z_n^k$ and $z_{n'}^{k'}$, we specify that they form a positive pair if $n$ is equal to $n'$ (with $k \neq k'$); otherwise they form a negative pair. This means multiple skeleton sequences, if simultaneously taken for the same person from different views, are pulled together in the embedding space and otherwise pushed apart, as shown in Figure 2. Therefore, not only can skeleton representations be effectively learned without any action label information, but their view invariance can also be enhanced during multi-view contrastive learning. To achieve this, we adopt the normalized temperature-scaled cross-entropy loss [6]. Specifically, we randomly sample a minibatch of $N$ examples and define the contrastive prediction task on pairs of skeleton sequences. Note that each example consists of $K$ skeleton sequences collected from different views, resulting in $KN$ data points. Given the positive pairs in an example, we treat the other data points within the minibatch as negative examples. To measure similarity, we let $\mathrm{sim}(z_i, z_j)$ denote the dot product between the $\ell_2$-normalized $z_i$ and $z_j$. Then, the global loss function over the positive pairs of example $n$ is defined as

$$\ell_n^{glb} = -\sum_{k=1}^{K}\sum_{k'=1}^{K} \mathbb{1}_{[k' \neq k]} \log \frac{\exp\big(\mathrm{sim}(z_n^{k}, z_n^{k'})/\tau_t\big)}{\sum_{n'=1}^{N}\sum_{k''=1}^{K} \mathbb{1}_{[(n', k'') \neq (n, k)]} \exp\big(\mathrm{sim}(z_n^{k}, z_{n'}^{k''})/\tau_t\big)} \qquad (3)$$

where $\mathbb{1}_{[k' \neq k]}$ is an indicator function evaluating to 1 if $k' \neq k$, and $\mathbb{1}_{[(n', k'') \neq (n, k)]}$ is an indicator function evaluating to 0 only when $n' = n$ and $k'' = k$ are simultaneously satisfied, and to 1 otherwise. $\tau_t$ denotes a temperature parameter. For a minibatch, the global contrastive loss is computed across all examples,

$$\mathcal{L}_{glb} = \frac{1}{N}\sum_{n=1}^{N} \ell_n^{glb} \qquad (4)$$

where $N$ is the batch size.
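For the two-view case (K = 2) used in our experiments, the normalized temperature-scaled cross-entropy loss can be sketched in NumPy as below. This is a generic NT-Xent implementation, not the authors' code, and it assumes a layout in which rows i and i+N hold the two views of sample i:

```python
import numpy as np

def nt_xent(z, tau=0.07):
    """NT-Xent loss. z: (2N, d) L2-normalized embeddings; rows i and i+N
    are the two views of sample i. Returns the mean loss over all 2N anchors."""
    n2 = z.shape[0]
    N = n2 // 2
    sim = z @ z.T / tau                        # temperature-scaled similarities
    np.fill_diagonal(sim, -np.inf)             # exclude self-similarity from softmax
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    pos = np.concatenate([np.arange(N, n2), np.arange(0, N)])  # positive per anchor
    return -log_prob[np.arange(n2), pos].mean()
```

The loss is small when each anchor is closer to its other-view positive than to every other embedding in the batch.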

Local contrastive loss. The local contrastive loss is calculated among the local representations, as illustrated in Figure 3. Given two local representations $u_{n,m}^k$ and $u_{n',m'}^{k'}$, we specify that they form a positive pair if both $n = n'$ and $m = m'$ are satisfied (with $k \neq k'$); otherwise they form a negative pair. From the composition of the positive and negative pairs, the contrastive loss achieves the same effect as the global one at the local scale when the subgraph indices of all pairs are consistent. Besides, it can also alleviate the over-smoothing and the lack of structural information by contrasting among local regions in a sequence when the sample indices of all pairs are consistent. The definition of the local contrastive loss is the same as the global one, but because of the extra subgraph dimension, the numbers of positive and negative pairs in a sample grow by a factor of $M$. Formally, the local contrastive loss function over the positive pairs of example $n$ is defined as

$$\ell_n^{loc} = -\sum_{m=1}^{M}\sum_{k=1}^{K}\sum_{k'=1}^{K} \mathbb{1}_{[k' \neq k]} \log \frac{\exp\big(\mathrm{sim}(u_{n,m}^{k}, u_{n,m}^{k'})/\tau_t\big)}{\sum_{n'=1}^{N}\sum_{m'=1}^{M}\sum_{k''=1}^{K} \mathbb{1}_{[(n', m', k'') \neq (n, m, k)]} \exp\big(\mathrm{sim}(u_{n,m}^{k}, u_{n',m'}^{k''})/\tau_t\big)} \qquad (5)$$

where $\mathbb{1}_{[(n', m', k'') \neq (n, m, k)]}$ is an indicator function that evaluates to 0 only when the three indices simultaneously coincide with the anchor's, and $M$ is the number of split subgraphs. For a minibatch, the local contrastive loss is likewise computed across all examples,

$$\mathcal{L}_{loc} = \frac{1}{N}\sum_{n=1}^{N} \ell_n^{loc} \qquad (6)$$

Corresponding to spatially or temporally partitioning the ST-Graph into multiple subgraphs, the local contrastive loss is denoted $\mathcal{L}_{loc}^{s}$ or $\mathcal{L}_{loc}^{t}$.

Global-Local contrastive loss. The global-local contrastive loss jointly optimizes the related global and local contrastive loss functions. In this paper, the popular approach of using a linear combination of the two as the total loss function is abandoned, because manually tuning the weight hyper-parameters is expensive and intractable. Instead, following the work of [38], we adjust each loss's relative weight in the total loss by deriving a multi-task loss function based on maximizing the Gaussian likelihood with task-dependent uncertainty during model training. We define the global-local contrastive loss as follows:

$$\mathcal{L} = \frac{1}{2\sigma_1^2}\mathcal{L}_{glb} + \frac{1}{2\sigma_2^2}\mathcal{L}_{loc} + \log\sigma_1 + \log\sigma_2 \qquad (7)$$

where $\sigma_1$ and $\sigma_2$ are associated with the task uncertainty and can be interpreted as the relative weights of the respective loss terms, while $\log\sigma_1$ and $\log\sigma_2$ serve as regularizers to avoid over-fitting. All network parameters and the uncertainty task weights are trainable and optimized by gradient back-propagation.
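The uncertainty-based weighting can be sketched as follows; parameterizing the learnable weights by log sigma is a common numerical-stability choice of ours, not something stated in the paper:

```python
import numpy as np

def uncertainty_weighted_loss(l_global, l_local, log_sigma1, log_sigma2):
    """Combine two losses with learned task-uncertainty weights: each loss
    is scaled by 1/(2*sigma^2) and a log(sigma) regularizer is added,
    so a task with high uncertainty contributes less but pays a penalty."""
    w1 = np.exp(-2.0 * log_sigma1) / 2.0   # equals 1 / (2 * sigma_1^2)
    w2 = np.exp(-2.0 * log_sigma2) / 2.0
    return w1 * l_global + w2 * l_local + log_sigma1 + log_sigma2
```

In training, `log_sigma1` and `log_sigma2` would be scalar parameters updated by the same optimizer as the network weights.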

The proposed multi-view ST-Graph CRL is summarized in Algorithm 1.

Input: augmentation $\tau$, global pooling $P$, ST-Graph partitioning function $s$, ST-GCN structural encoder $f$, global and local projection heads $g_{glb}$ and $g_{loc}$, training multi-view skeleton sequences $\{x_n\}$, global contrastive loss $\mathcal{L}_{glb}$, local contrastive loss $\mathcal{L}_{loc}$, similarity measurement function $\mathrm{sim}(\cdot,\cdot)$.
Parameters: learnable relative weight parameters for the global and local contrastive losses $\sigma_1$ and $\sigma_2$; number of views $K$; number of split subgraphs $M$; number of samples in one batch $N$; temperature parameter $\tau_t$.
1:  for each sampled batch $\{x_n\}_{n=1}^{N}$ do
2:     for $n = 1$ to $N$ do
3:        for $k = 1$ to $K$ do
4:           $\tilde{x}_n^k = \tau(x_n^k)$
5:           $H_n^k = f(\tilde{x}_n^k)$
6:           $z_n^k = g_{glb}(P(H_n^k))$
7:           for $m = 1$ to $M$ do
8:              $u_{n,m}^k = g_{loc}(P(s(H_n^k)_m))$
9:           end for
10:        end for
11:     end for
12:     for $n = 1$ to $N$ do
13:        compute $\ell_n^{glb}$ by Eq. (3)
14:        compute $\ell_n^{loc}$ by Eq. (5)
15:     end for
16:     compute $\mathcal{L}_{glb}$ by Eq. (4)
17:     compute $\mathcal{L}_{loc}$ by Eq. (6)
18:     compute the total loss $\mathcal{L}$ by Eq. (7)
19:     update networks $f$, $g_{glb}$, $g_{loc}$ and weights $\sigma_1$, $\sigma_2$ to minimize $\mathcal{L}$
20:  end for
21:  return encoder model $f$, and throw away projection heads $g_{glb}$ and $g_{loc}$
Algorithm 1 Multi-view spatial-temporal graph contrastive representation learning algorithm

IV Experiment

IV-A Dataset

We evaluate the proposed method on two publicly available multi-view action recognition benchmarks: NTU RGB+D [30] and PKUMMD [22]. We briefly describe them below.

NTU RGB+D (NTU). NTU is a large-scale multi-modal action recognition dataset. It is composed of 56,880 samples over 60 classes captured from 40 distinct subjects and three Kinect cameras. Each action in the samples involves one or two people. The dataset is very challenging due to the large intra-class and view variations. The original NTU paper recommends two benchmarks: 1) Cross-subject (CS): all samples from a selected group of subjects are used for training and the remaining samples for testing. 2) Cross-view (CV): the training set contains samples captured by cameras 2 and 3, and the testing set contains videos captured by camera 1. We follow this convention and report performance on both benchmarks.

PKUMMD. PKUMMD is a new large-scale benchmark for continuous multi-modality 3D human action understanding and covers a wide range of complex human activities with well-annotated information. It contains almost 20,000 action instances in 51 action categories, performed by 66 subjects and captured by three Kinect sensors from different views. PKUMMD consists of two subsets: PKUMMD-I is an easier subset for action recognition, while PKUMMD-II is more challenging, with more skeleton noise caused by large view variation. We conduct experiments under the cross-subject protocol on both subsets.

IV-B Implementation Details

IV-B1 Pre-training without any action label information

In ST-Graph CRL, an ST-GCN structural encoder $f$, a global projection head $g_{glb}$, and a local projection head $g_{loc}$ are pre-trained using multi-view skeleton sequences without any action label information. We use SGD with Nesterov momentum 0.9 to pre-train them for 40 epochs. The learning rate is set to 0.1 and divided by 10 at epochs 20, 30, and 35. The batch size is set to 16 for all experiments. The sequence length is set to 100. The temperature parameter for the global-local contrastive loss is set to 0.07. The number of subgraphs $M$ is set to 5. $K$ is set to 2, which means each example includes two skeleton sequences simultaneously taken from different views.

IV-B2 Evaluation Protocol

To validate the effectiveness of the proposed representation learning method, we follow the linear evaluation protocol [38, 6], which is commonly used to evaluate unsupervised learning methods. Under this protocol, a linear classifier attached to the frozen encoder is trained on the annotated dataset. We report Top-1 accuracy on the testing set as the quantitative evaluation metric. The classifier is trained for 45 epochs, with the learning rate divided by 10 at epochs 25, 35, and 40. The other settings remain the same as in pre-training.
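The linear evaluation protocol can be illustrated with a toy sketch: encoder outputs are treated as fixed features, and only a softmax classifier is fit on them. This NumPy version is illustrative; the function name, learning rate, and epoch count are our own, not the paper's settings:

```python
import numpy as np

def linear_evaluation(features, labels, num_classes, epochs=200, lr=0.05):
    """Linear probe: fit a softmax classifier on frozen encoder features.

    features: (N, D) outputs of the frozen, pre-trained encoder.
    labels:   (N,) integer class labels.
    Returns Top-1 accuracy of the linear classifier on the given set.
    """
    n, d = features.shape
    W = np.zeros((d, num_classes))
    onehot = np.eye(num_classes)[labels]
    for _ in range(epochs):
        logits = features @ W
        logits -= logits.max(axis=1, keepdims=True)
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        # Gradient step on the classifier only; the encoder stays frozen
        W -= lr * features.T @ (probs - onehot) / n
    preds = (features @ W).argmax(axis=1)
    return (preds == labels).mean()
```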

IV-C Comparison Experiments

To quantitatively evaluate the performance, Table I and Table II list the linear evaluation results of ST-Graph CRL and other state-of-the-art unsupervised methods on the PKUMMD and NTU benchmarks. The model that freezes the randomly initialized encoder and trains only the linear classifier is denoted ST-Graph Rand; we regard it as one of our baselines. The models implementing ST-Graph contrastive learning in the single-view and multi-view scenarios are denoted ST-Graph CRL SV and ST-Graph CRL MV, respectively. In the single-view version, we maximize the mutual information between the ST-Graph representations of two augmented instances of an identical skeleton sequence, to learn action patterns that are inherent across skeleton transformations. To evaluate P&C FW on the action recognition task, we reproduce the code of P&C FW under the linear evaluation protocol. The temporal subgraph is the default data augmentation adopted in these experiments.

Supervised Models PKUMMD-I PKUMMD-II NTU (CS) NTU (CV)
Yes ST-Graph 94.45 56.75 87.82 95.13
No ST-Graph Rand 30.14 10.58 19.55 23.23
No LongT GAN [48] 67.70 25.95 52.14 -
No P&C FW [33] 67.62 35.90 32.50 35.67
No SL [21] 64.86 27.63 52.55 -
No CAE+ [28] - - 58.50 64.80
No ST-Graph CRL SV (Ours) 68.42 31.80 60.24 59.79
No ST-Graph CRL MV (Ours) 83.62 39.89 74.71 82.62
TABLE I: Comparison of action recognition performance of the proposed ST-Graph CRL and other state-of-the-art methods.

IV-C1 Comparison with State-of-the-art

In existing studies [33, 21], pre-training and evaluation are usually conducted on the same dataset. An overall summary of the results is given in Table I, where the proposed method significantly improves upon the unsupervised methods that do not use action labels for training. As we can see, ST-Graph CRL MV far exceeds the random baseline and the other state-of-the-art unsupervised methods, and greatly reduces the gap to the models trained with action annotations. NTU(CV) is a suitable benchmark for evaluating a model's robustness to viewpoint differences. Here, the Top-1 accuracy of ST-Graph CRL MV on NTU(CV) is 82.62%, while ST-Graph Rand and P&C FW reach only 23.23% and 35.67%, respectively. Therefore, the multi-view contrastive learning significantly improves the view-invariant property of the skeleton representation. Even in the single-view scenario, under a truly unsupervised setting, ST-Graph CRL SV performs better than almost all the baselines, achieving recognition accuracies of 60.24% and 59.79% on NTU(CS) and NTU(CV), respectively, which shows that our global-local contrastive learning over augmented skeletons of the same sample also works well. Comparing ST-Graph CRL SV and MV, substantial improvements are made on every benchmark, indicating that contrastive learning between multi-view skeletons brings a large performance leap for unsupervised skeleton-based action recognition.

Supervised Models PKUMMD-I PKUMMD-II
Yes ST-Graph 90.56 55.01
No P&C FW [33] 63.31 23.61
No ST-Graph CRL SV (Ours) 76.29 39.83
No ST-Graph CRL MV (Ours) 82.21 46.98
TABLE II: Performance under the transfer learning setting with linear evaluation.

IV-C2 Transfer learning performance.

To further evaluate whether the proposed ST-Graph CRL can transfer knowledge to related tasks, we investigate its transfer learning performance [21]. As representations learned from large-scale data are more generalizable, we regard NTU as the source dataset and PKUMMD-I and PKUMMD-II as the target datasets: pre-training is conducted on the source dataset and evaluation on the target datasets. Under this setting, the samples used for pre-training and linear evaluation differ completely in viewpoints, action patterns, and so on, which is more in accordance with practical scenarios. The results are summarized in Table II: ST-Graph CRL MV achieves the best results among models pre-trained without action annotations, 82.21% on PKUMMD-I and 46.98% on PKUMMD-II. Moreover, comparing with Table I, the accuracies of P&C FW drop from 67.62% and 35.90% to 63.31% and 23.61%, respectively, whereas most ST-Graph CRL results improve when the training and testing datasets change from consistent to inconsistent: 68.42%, 31.80%, and 39.89% rise to 76.29%, 39.83%, and 46.98%, with only the MV result on PKUMMD-I decreasing slightly from 83.62% to 82.21%. Meanwhile, the models trained with action annotations also degrade in this transfer setting, from 94.45% and 56.75% to 90.56% and 55.01%, respectively. One possible reason is that ST-Graph CRL can exploit a larger training set more effectively, with less influence from the data distribution difference between datasets. It can be concluded that the representations learned by ST-Graph CRL have good generalization ability.

IV-D Ablation Experiments

For a specific ST-Graph structural encoder, the performance of ST-Graph CRL is mainly determined by the following four components: multi-view skeleton contrastive mechanism, data augmentation, projection head, and global-local contrastive loss. From the results of ST-Graph CRL SV and ST-Graph CRL MV in Table I and Table II, the performance of the multi-view skeleton contrastive mechanism is shown to be impressive in all cases. To further assess the other factors, we conduct several ablation experiments on NTU with a linear evaluation protocol.

Loss Function NTU (CS) NTU (CV)
Global only 69.69 73.70
Local (temporal) only 67.36 74.88
Local (spatial) only 61.04 65.89
Global + local (temporal) 74.71 82.62
Global + local (spatial) 73.62 79.04
Global + local (temporal + spatial) 74.21 81.54
TABLE III: Analysis of global-local contrastive learning loss function.
Viewpoint Loss combination NTU (CS) NTU (CV)
Single-view Linear 54.12 56.23
Task uncertainty 60.24 59.79
Multi-view Linear 71.06 78.68
Task uncertainty 74.71 82.62
TABLE IV: Performance by using different methods to combine the global and local contrastive learning losses.

IV-D1 The effect of global-local contrastive loss.

In this experiment, we evaluate different forms of the contrastive loss function. Experimental results are summarized in Table III, from which we make the following observations. As the accuracy of the temporal local loss is higher than that of the spatial local loss by 6.32%(CS) and 8.99%(CV), the temporal splitting method is superior to spatial splitting for the local contrastive loss in this experiment. The impacts of the global and local losses are different but complementary: compared with using only one of them, the combined global-local loss function leads to substantially better performance on both benchmarks, i.e., 74.71%(CS) and 82.62%(CV). Table III also shows that the spatial local loss alone produces poor performance. We think the reason lies in the multi-view action datasets themselves. Specifically, since most multi-view action datasets do not provide the correspondence between persons in different views, the spatial partitioning strategy is likely to create positive pairs of local parts that come from different persons when an action is performed by two people. In this case, the spatial local loss cannot fulfill its intended role of learning a fine-grained, view-irrelevant representation in the spatial dimension, and adding it to the global and temporal local losses likewise lowers the accuracy. Therefore, the default form of the global-local contrastive loss in this paper consists of the global loss and the temporal local loss, combined with task uncertainty. We also compare linear and task-uncertainty-based methods for combining the global and local contrastive losses; all weight parameters are uniformly set to 1 in the linear combination. As shown in Table IV, the task-uncertainty-based combination outperforms the linear combination in both single-view and multi-view scenarios.
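A common formulation of task-uncertainty weighting (homoscedastic uncertainty, Kendall et al.) combines per-task losses as L = Σᵢ exp(−sᵢ)·Lᵢ + sᵢ, where each sᵢ = log σᵢ² is a learnable parameter. Whether the paper uses exactly this form is an assumption; the sketch below only illustrates the weighting formula:

```python
import numpy as np

def uncertainty_weighted_loss(losses, log_vars):
    """Combine per-task losses via learned task uncertainty:
    L = sum_i exp(-s_i) * L_i + s_i, with s_i = log(sigma_i^2)
    a learnable parameter per loss term. Larger s_i down-weights
    a noisy task, while the +s_i term penalizes ignoring it."""
    losses = np.asarray(losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))
```

With all log-variances at zero, this reduces to the plain linear sum with unit weights used as the baseline combination.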

IV-D2 Analysis of the projection head.

We study the importance of including a projection head, i.e., the global and local projection heads. Table V shows the linear evaluation results using two different architectures for the head: identity mapping and a nonlinear projection with one additional hidden layer. We observe that the nonlinear projection head, regardless of its output dimension, performs better than identity mapping in terms of recognition accuracy. It can thus be concluded that the hidden layer before the projection head is a better representation than the layer after it.

Projection head Identity Mapping Nonlinear projection
Output dimension 256 32 64 128 256 512
Accuracy 66.21 74.29 74.31 74.39 74.38 74.71
TABLE V: Linear evaluation of representations with different projection heads and various dimensions of output. The representation, before projection, is 256-dimensional here.
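A nonlinear projection head with one hidden layer can be sketched as a small forward pass. This is a NumPy illustration; the activation choice and weight shapes (matching the 256-d representation) are our assumptions:

```python
import numpy as np

def nonlinear_projection_head(h, W1, b1, W2, b2):
    """Nonlinear projection g(h) = W2 @ ReLU(W1 @ h + b1) + b2,
    mapping the encoder representation h (used later for linear
    evaluation) into the space where the contrastive loss is applied."""
    hidden = np.maximum(h @ W1 + b1, 0.0)  # one hidden layer with ReLU
    return hidden @ W2 + b2
```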

IV-D3 The effects of data augmentation.

Apart from the temporal subgraph, we also explore four other popular skeleton data augmentations in experiments: node dropping, node perturbation, view rotation, and skeleton shearing, defined as follows:

Node dropping. It randomly discards body joints in the input skeleton sequence. Specifically, with a fixed probability, we randomly drop a fraction of the nodes, setting the corresponding joint coordinates to zero. It is common in practice that a subset of joints, e.g., occluded ones, cannot be detected; node dropping ensures that the crucial action patterns can still be learned from a subset of joints.
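Node dropping might be implemented as follows. This is a sketch; the drop ratio and application probability are illustrative values, since the paper's exact settings for this augmentation are not stated here:

```python
import numpy as np

def node_dropping(seq, drop_ratio=0.1, p=0.5, rng=None):
    """Randomly zero out a fraction of joints in a skeleton sequence.

    seq: (T, V, C) array of T frames, V joints, C coordinates.
    With probability p, a random drop_ratio of the V joints is zeroed
    across all frames, mimicking undetected (e.g., occluded) joints.
    """
    if rng is None:
        rng = np.random.default_rng()
    out = seq.copy()
    if rng.random() < p:
        num_drop = int(round(drop_ratio * seq.shape[1]))
        drop = rng.choice(seq.shape[1], size=num_drop, replace=False)
        out[:, drop, :] = 0.0  # drop the same joints in every frame
    return out
```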

Node perturbation. The coordinates of the joints are perturbed with Gaussian noise, with the mean set to 0 and the standard deviation set to 0.05. Detected joint locations, even without occlusion, always contain errors due to sensor and estimation inaccuracies in practice; node perturbation makes the action recognition robust to such errors.
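The perturbation step is a direct application of the stated noise parameters (mean 0, standard deviation 0.05); a minimal sketch:

```python
import numpy as np

def node_perturbation(seq, std=0.05, rng=None):
    """Add zero-mean Gaussian noise (std 0.05) to every joint coordinate
    of a (T, V, C) skeleton sequence, simulating sensor/estimation error."""
    if rng is None:
        rng = np.random.default_rng()
    return seq + rng.normal(loc=0.0, scale=std, size=seq.shape)
```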

View rotation. It randomly rotates the joint coordinates in a skeleton sequence about the three axes using a rotation matrix. Specifically, for each sequence we randomly select three angles $\alpha$, $\beta$, $\gamma$, each drawn uniformly from a fixed range. The three basic rotation matrices about the X, Y, and Z axes are

$R_X(\alpha) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix}, \quad R_Y(\beta) = \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix}, \quad R_Z(\gamma) = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix}$ (8)

Based on these three basic rotation matrices, the final rotation matrix is their product,

$R = R_X(\alpha)\, R_Y(\beta)\, R_Z(\gamma)$ (9)

We apply the rotation matrix to the original coordinates of the skeleton sequence and get the transformed coordinates. It simulates the view changes of the camera. This augmentation enables the action recognition to be robust to the camera view changes.
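The rotation augmentation can be sketched as below. The composition order of the three basic rotations and the sampling range of the angles are assumptions for illustration:

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Compose the basic rotations about the X, Y, and Z axes
    (one common composition order, assumed here)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def view_rotation(seq, max_angle=np.pi / 6, rng=None):
    """Rotate all joints of a (T, V, 3) sequence by one random rotation,
    simulating a camera view change (angle range is illustrative)."""
    if rng is None:
        rng = np.random.default_rng()
    R = rotation_matrix(*rng.uniform(-max_angle, max_angle, size=3))
    return seq @ R.T  # apply R to every 3-D joint coordinate
```

Since R is orthogonal, the rotation preserves joint distances and bone lengths, which is why it simulates a view change without deforming the skeleton.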

Skeleton shearing. It slants the shape of a skeleton at a random angle. The shearing factors are drawn from a uniform distribution over a fixed interval. The transformation matrix can be written as

$A = \begin{bmatrix} 1 & s_{12} & s_{13} \\ s_{21} & 1 & s_{23} \\ s_{31} & s_{32} & 1 \end{bmatrix}$ (10)

where $s_{12}$, $s_{13}$, $s_{21}$, $s_{23}$, $s_{31}$, $s_{32}$ are the shearing factors. All joint coordinates of the original skeleton sequence are transformed with the shearing matrix $A$. Skeleton shearing further increases the robustness of action recognition to more nonrigid transformations of the skeleton sequence.
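A shearing sketch follows, with ones on the diagonal and six random off-diagonal factors; the sampling interval for the factors is an illustrative assumption:

```python
import numpy as np

def skeleton_shear(seq, factor_range=0.5, rng=None):
    """Apply a random shear to a (T, V, 3) skeleton sequence: a matrix
    with unit diagonal and six random off-diagonal shearing factors
    (the sampling range is illustrative, not the paper's)."""
    if rng is None:
        rng = np.random.default_rng()
    s = rng.uniform(-factor_range, factor_range, size=6)
    A = np.array([[1.0, s[0], s[1]],
                  [s[2], 1.0, s[3]],
                  [s[4], s[5], 1.0]])
    return seq @ A.T  # shear every 3-D joint coordinate
```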

Augmentation NTU (CS) NTU (CV)
Original 69.85 78.35
Node dropping 69.05 77.37
Node perturbation 66.26 74.99
View rotation 70.37 77.99
Skeleton shearing 69.56 77.55
Temporal subgraph 74.71 82.62
TABLE VI: Performance of multi-view ST-Graph CRL using different augmentation strategies.

We denote the model without any data augmentation as the original. The results are shown in Table VI. Much to our surprise, compared with directly using the original sequence, only the temporal subgraph strategy significantly improves the accuracy of ST-Graph CRL, by 5.26%(CS) and 4.27%(CV). Two possible reasons are: 1) defining a precise frame-level start and end time for an action is almost impossible, and 2) it is hard to achieve strict temporal alignment of skeleton sequences captured by multiple cameras. Applying independent temporal subgraphs to different views improves robustness to these unavoidable problems without breaking the original cross-view relationships. The other data augmentations may be counterproductive because the original spatial-structure correspondences among skeletons simultaneously taken from different views are destroyed by these random transformations. For example, node perturbation, which changes joint values with Gaussian noise, is the most damaging to spatial-structure correspondences and leads to the sharpest performance drop, of 3.59%(CS) and 3.36%(CV). Thus, the temporal subgraph is the default data augmentation adopted in this work.
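One way the temporal subgraph augmentation might look is a random contiguous crop along the frame axis, resized back to the original length; this is a sketch under that assumption, and the crop ratio is an illustrative value, not the paper's sampling scheme:

```python
import numpy as np

def temporal_subgraph(seq, crop_ratio=0.9, rng=None):
    """Sample a contiguous temporal subgraph of a (T, V, C) sequence
    and stretch it back to T frames by nearest-neighbor frame indexing.
    Different views would use independent crops, which tolerates
    imprecise action boundaries and cross-camera misalignment."""
    if rng is None:
        rng = np.random.default_rng()
    T = seq.shape[0]
    length = max(1, int(round(crop_ratio * T)))
    start = rng.integers(0, T - length + 1)
    crop = seq[start:start + length]
    idx = np.linspace(0, length - 1, T).round().astype(int)  # back to T frames
    return crop[idx]
```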

IV-E Visualization of Skeleton Representation.

The superior performance of ST-Graph CRL over the existing methods is largely due to the multi-view skeleton contrastive mechanism. Hence, apart from the quantitative evaluation, we also visualize how the features change under this mechanism. We randomly select ten classes from the NTU testing set and visualize the t-SNE embeddings of the features obtained from P&C FW [33], ST-Graph CRL SV, and ST-Graph CRL MV for the same skeleton sequences in Figure 5. Even in this 2D embedding, it is evident that features of different classes are better separated by the ST-Graph CRLs than by P&C FW: points of different colors are mixed up in (a), while they are more separated in (b) and (c). Meanwhile, points of the same color in (c) are more concentrated than those in (b); for example, there is a clear boundary among points of different colors at the bottom right of (c), while they are mixed up at the bottom left of (b). This supports the conclusion that contrastive learning between multi-view skeletons makes the learned representation more discriminative.

(a) P&C FW [33]
(b) ST-Graph CRL SV
(c) ST-Graph CRL MV
Fig. 5: t-SNE embedding visualizations of the learned representations of 10 classes randomly selected from the NTU(CS) testing set.

V Conclusion

In this paper, we studied the problem of view-invariant skeleton-based action recognition by learning effective representations without any manual action labeling. Based on an ST-GCN structural encoder, a multi-view spatial-temporal graph contrastive representation learning approach was developed to maximize the mutual information between the representations extracted from skeleton data simultaneously taken from different views. Specifically, we explored five popular skeleton data augmentation methods and found that only the temporal subgraph plays a positive role in multi-view CRL. Then, to support our global-local CRL, partitioning functions were designed to segment the ST-Graph into multiple subgraphs along the spatial or temporal dimension, and projection heads were added to map the learned representations to another latent space. Moreover, we proposed a global-local spatial-temporal graph contrastive loss, combined with task uncertainty, to model the multi-scale co-occurrence relationships in the spatial and temporal domains. Experiments on two multi-view action datasets showed that our approach, in both single-view and multi-view scenarios, achieved competitive performance compared with the random baseline and other state-of-the-art unsupervised skeleton-based action recognition methods. In the future, we will explore new approaches to effectively handle multi-view multi-person scenarios.

References

  • [1] K. Andreev and H. Racke (2006) Balanced graph partitioning. Theory of Computing Systems 39 (6), pp. 929–939. Cited by: §III-C.
  • [2] P. Bachman, R. D. Hjelm, and W. Buchwalter (2019) Learning representations by maximizing mutual information across views. arXiv preprint arXiv:1906.00910. Cited by: §II-B, §II-B.
  • [3] S. Bhardwaj, M. Srinivasan, and M. M. Khapra (2019) Efficient video classification using fewer frames. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I.
  • [4] C. Bian, W. Feng, L. Wan, and S. Wang (2021) Structural knowledge distillation for efficient skeleton-based action recognition. IEEE Transactions on Image Processing 30, pp. 2963–2976. Cited by: §I.
  • [5] F. Bourse, M. Lelarge, and M. Vojnovic (2014) Balanced graph edge partition. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1456–1465. Cited by: §III-C.
  • [6] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020) A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709. Cited by: §II-B, §II-B, §III-A, §III-D, §III-E, §IV-B2.
  • [7] K. Cheng, Y. Zhang, X. He, W. Chen, and H. Lu (2020) Skeleton-based action recognition with shift graph convolutional network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I, §II-A, §III-B, §III.
  • [8] W. Fan, R. Jin, M. Liu, P. Lu, X. Luo, R. Xu, Q. Yin, W. Yu, and J. Zhou (2020) Application driven graph partitioning. In Proceedings of the 20th ACM SIGMOD International Conference on Management of Data, Cited by: §III-C.
  • [9] O. Henaff (2020) Data-efficient image recognition with contrastive predictive coding. In Proceedings of the International Conference on Machine Learning, Cited by: §II-B.
  • [10] S. Herath, M. Harandi, and F. Porikli (2017) Going deeper into action recognition: a survey. Image and vision computing 60 (1), pp. 4–21. Cited by: §I.
  • [11] R. D. Hjelm, A. Fedorov, S. Lavoie-Marchildon, K. Grewal, P. Bachman, A. Trischler, and Y. Bengio (2018) Learning deep representations by mutual information estimation and maximization. In Proceedings of the International Conference on Learning Representations, Cited by: §II-B.
  • [12] M. E. Hussein, M. Torki, M. A. Gowayyed, and M. El-Saban (2013) Human action recognition using a temporal hierarchy of covariance descriptors on 3d joint locations. In Proceedings of the Twenty-third International Joint Conference on Artificial Intelligence, Cited by: §I, §II-A.
  • [13] X. Ji, J. F. Henriques, and A. Vedaldi (2019) Invariant information clustering for unsupervised image classification and segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9865–9874. Cited by: §II-B.
  • [14] Y. Ji, Y. Yang, H. T. Shen, and T. Harada (2021) View-invariant action recognition via unsupervised attention transfer (uant). Pattern Recognition 113, pp. 107807. Cited by: §I.
  • [15] G. Johansson (1973) Visual perception of biological motion and a model for its analysis. Perception & psychophysics 14 (2), pp. 201–211. Cited by: §I.
  • [16] Q. Ke, M. Bennamoun, S. An, F. Sohel, and F. Boussaid (2017) A new representation of skeleton sequences for 3d action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §II-A.
  • [17] P. Khosla, P. Teterwak, C. Wang, A. Sarna, Y. Tian, P. Isola, A. Maschinot, C. Liu, and D. Krishnan (2020) Supervised contrastive learning. arXiv preprint arXiv:2004.11362. Cited by: §II-B.
  • [18] D. Li, Y. Zhang, J. Wang, and K. Tan (2019) TopoX: topology refactorization for efficient graph partitioning and processing. Proceedings of the VLDB Endowment 12 (8), pp. 891–905. Cited by: §III-C.
  • [19] J. Li, Y. Wong, Q. Zhao, and M. S. Kankanhalli (2018) Unsupervised learning of view-invariant action representations. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 1262–1272. Cited by: §I.
  • [20] Q. Li, Z. Han, and X. Wu (2018) Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32. Cited by: §III-C.
  • [21] L. Lin, S. Song, W. Yang, and J. Liu (2020) MS2L: multi-task self-supervised learning for skeleton based action recognition. In Proceedings of the 28th ACM International Conference on Multimedia, Cited by: §I, §II-A, §IV-C1, §IV-C2, TABLE I.
  • [22] C. Liu, Y. Hu, Y. Li, S. Song, and J. Liu (2017) PKU-mmd: a large scale benchmark for continuous multi-modal human action understanding. arXiv preprint arXiv:1703.07475. Cited by: §IV-A.
  • [23] J. Liu, A. Shahroudy, D. Xu, and G. Wang (2016) Spatio-temporal lstm with trust gates for 3d human action recognition. In Proceedings of the European Conference on Computer Vision, Cited by: §I.
  • [24] J. Liu, G. Wang, P. Hu, L. Duan, and A. C. Kot (2017) Global context-aware attention lstm networks for 3d action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I.
  • [25] M. Liu, H. Liu, and C. Chen (2017) Enhanced skeleton visualization for view invariant human action recognition. Pattern Recognition 68, pp. 346–362. Cited by: §II-A.
  • [26] Z. Liu, H. Zhang, Z. Chen, Z. Wang, and W. Ouyang (2020) Disentangling and unifying graph convolutions for skeleton-based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I, §II-A.
  • [27] Q. Nie, J. Wang, X. Wang, and Y. Liu (2019) View-invariant human action recognition based on a 3d bio-constrained skeleton model. IEEE transactions on image processing 28 (8), pp. 3959–3972. Cited by: §I.
  • [28] H. Rao, S. Xu, X. Hu, J. Cheng, and B. Hu (2021) Augmented skeleton based contrastive action learning with momentum lstm for unsupervised action recognition. Information Sciences 569, pp. 90–109. Cited by: TABLE I.
  • [29] F. Schroff, D. Kalenichenko, and J. Philbin (2015) Facenet: a unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §II-B.
  • [30] A. Shahroudy, J. Liu, T. Ng, and G. Wang (2016) NTU rgb+d: a large scale dataset for 3d human activity analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §IV-A.
  • [31] L. Shi, Y. Zhang, J. Cheng, and H. Lu (2019) Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §III-B.
  • [32] S. Song, C. Lan, J. Xing, W. Zeng, and J. Liu (2017) An end-to-end spatio-temporal attention model for human action recognition from skeleton data. In Proceedings of the AAAI Conference on Artificial Intelligence, Cited by: §I, §II-A.
  • [33] K. Su, X. Liu, and E. Shlizerman (2020) Predict & cluster: unsupervised skeleton based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I, §II-A, 5(a), §IV-C1, §IV-E, TABLE I, TABLE II.
  • [34] F. Sun, J. Hoffman, V. Verma, and J. Tang (2019) InfoGraph: unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In Proceedings of the International Conference on Learning Representations, Cited by: §II-B.
  • [35] P. Veličković, W. Fedus, W. L. Hamilton, P. Liò, Y. Bengio, and R. D. Hjelm (2018) Deep graph infomax. In Proceedings of the International Conference on Learning Representations, Cited by: §II-B.
  • [36] R. Vemulapalli, F. Arrate, and R. Chellappa (2014) Human action recognition by representing 3d skeletons as points in a lie group. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I, §II-A.
  • [37] H. Wang and L. Wang (2017) Modeling temporal dynamics and spatial configurations of actions using two-stream recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §II-A.
  • [38] M. Wang, Y. Lin, G. Lin, K. Yang, and X. Wu (2020) M2GRL: a multi-task multi-view graph representation learning framework for web-scale recommender systems. arXiv preprint arXiv:2005.10110. Cited by: §III-E, §IV-B2.
  • [39] Wikipedia (2021) Mutual information. Note: https://en.wikipedia.ahut.cf/wiki/Mutual_information Cited by: §II-B.
  • [40] Z. Wu, Y. Xiong, S. X. Yu, and D. Lin (2018) Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3733–3742. Cited by: §II-B.
  • [41] L. Xia, C. Chen, and J. K. Aggarwal (2012) View invariant human action recognition using histograms of 3d joints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Cited by: §I, §II-A.
  • [42] S. Yan, Y. Xiong, and D. Lin (2018) Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, Cited by: §II-A, §III-B.
  • [43] X. Yang and Y. L. Tian (2012) Eigenjoints-based action recognition using naive-bayes-nearest-neighbor. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Cited by: §I, §II-A.
  • [44] M. Ye, X. Zhang, P. C. Yuen, and S. Chang (2019) Unsupervised embedding learning via invariant and spreading instance feature. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6210–6219. Cited by: §II-B.
  • [45] Y. You, T. Chen, Y. Sui, T. Chen, Z. Wang, and Y. Shen (2020) Graph contrastive learning with augmentations. Advances in Neural Information Processing Systems 33. Cited by: §III-A.
  • [46] P. Zhang, C. Lan, J. Xing, W. Zeng, J. Xue, and N. Zheng (2019) View adaptive neural networks for high performance skeleton-based human action recognition. IEEE transactions on pattern analysis and machine intelligence 41 (8), pp. 1963–1978. Cited by: §I, §I.
  • [47] P. Zhang, C. Lan, W. Zeng, J. Xing, J. Xue, and N. Zheng (2020) Semantics-guided neural networks for efficient skeleton-based human action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I, §III.
  • [48] N. Zheng, J. Wen, R. Liu, L. Long, J. Dai, and Z. Gong (2018) Unsupervised representation learning with long-term dynamics for skeleton based action recognition. In Proceedings of the Thirty-Second AAAI conference on Artificial Intelligence, Cited by: §I, TABLE I.