An Attractor-Guided Neural Network for Skeleton-Based Human Motion Prediction

05/20/2021
by   Pengxiang Ding, et al.

Joint relation modeling is a crucial component of human motion prediction. Most existing methods design skeleton-based graphs to build the relations among joints, where local interactions between joint pairs are well learned. However, the global coordination of all joints, which reflects the balance property of human motion, is usually weakened because it is learned from part to whole progressively and asynchronously. As a result, the final predicted motions are sometimes unnatural. To tackle this issue, we learn a medium, called the balance attractor (BA), from the spatiotemporal features of motion to characterize the global motion features; the BA is subsequently used to build new joint relations. Through the BA, all joints are related synchronously, so the global coordination of all joints can be better learned. Based on the BA, we propose our framework, referred to as the Attractor-Guided Neural Network, which mainly includes an Attractor-Based Joint Relation Extractor (AJRE) and a Multi-timescale Dynamics Extractor (MTDE). The AJRE mainly consists of a Global Coordination Extractor (GCE) and a Local Interaction Extractor (LIE): the former presents the global coordination of all joints, and the latter encodes local interactions between joint pairs. The MTDE is designed to extract dynamic information from raw position information for effective prediction. Extensive experiments show that the proposed framework outperforms state-of-the-art methods in both short-term and long-term prediction on H3.6M, CMU-Mocap, and 3DPW.


1 Introduction

3D skeleton-based human motion prediction aims to generate future skeleton sequences given past observed ones. This technique can help machines better understand human intention and has broad prospects in scenarios such as human-robot interaction [14, 9, 11], autonomous driving [25], and pedestrian tracking [8, 2].

Figure 1: Qualitative results of short-term predictions of the motion “discussion” on H3.6M. From top to bottom, we show the ground truth and the results of LTD [21], ConvS2S [16], ResSup [24], and our approach. Compared with our approach, the predicted motions of the other methods share the same problem: the limbs are uncoordinated, which makes the predicted motion appear unnatural.

Joint relation modeling is essential for motion prediction. Prior works mainly relied on graphs to model joint relations in combination with neural networks such as RNNs [7, 24, 19, 13, 1], CNNs [16, 20], and GCNs [18, 6]. Most graphs are designed according to the kinematic structure of the human body to extract motion features. Though effective, they can hardly learn the relations between spatially separated joint pairs directly. Recently, dynamic graphs were developed by [21, 22] to model these relations explicitly, so the local interactions between joint pairs can be learned adequately. However, one drawback remains: the global coordination of all joints, which contributes to the balance of human motion, is not well learned. This is mainly because the global motion features are usually extracted by fusing the local features of different body components. In this process, the global relations of all joints are learned progressively and asynchronously, and thus these relations are usually weakened. This sometimes makes the predicted motion appear unnatural, e.g., the limbs are uncoordinated, as shown in Figure 1.

In this paper, we aim to learn the global coordination of all joints. To this end, we learn a balance attractor (BA) that acts as a medium to build new relations among all joints indirectly. Specifically, the BA is learned as a dynamic weighted aggregation of individual joint features. We then calculate the difference between the BA and each joint feature. Finally, the resulting new joint features are used to compute joint similarities, which form the final joint relations. In this way, all joints are related indirectly but synchronously through the BA. Moreover, because the new joint relations encode global motion features, the global coordination of all joints can be better learned.

Figure 2: The framework of the proposed Attractor-Guided Neural Network. In the encoder, an MTDE module is used to extract the dynamic features of motion. The AJRE module is adopted to encode the global coordination of all joints and the local interactions between joint pairs through the GCE and LIE, respectively. The AFFM is introduced to fuse features according to channel-wise attention. The whole AJRE is built on the bottleneck architecture of ResNet [10]. In the decoder, skip connections are used to offer fine-grained information, inspired by U-Net [27]. Two convolutions are then applied successively to perform the space transformation that yields the final predictions.

Additionally, enriching the dynamic representation of the raw input data is also beneficial for effective prediction. The raw skeleton sequences only include each joint's position at different time steps, which is not sufficient to convey the dynamic properties of motion. Previous works [17, 28] tended to introduce a two-stream architecture for extra velocity information, and [17, 32] enlarged the time horizon by taking three neighboring frames into account. However, this still ignores other dynamic information, such as acceleration, which is not limited to a fixed timescale. Therefore, we extract features across frames at multiple timescales to obtain an enriched dynamic representation from the raw 3D coordinates.

Based on the above two aspects, we present our framework, referred to as the Attractor-Guided Neural Network. Given observed motion sequences, we first learn an enriched dynamic representation from the raw position information adaptively through the Multi-timescale Dynamics Extractor (MTDE). Next, we introduce the Attractor-Based Joint Relation Extractor (AJRE), which includes a Local Interaction Extractor (LIE), a Global Coordination Extractor (GCE), and an Adaptive Feature Fusing Module (AFFM). The LIE encodes the local interactions between joint pairs, and the GCE presents the global coordination of all joints. These different joint relations are adaptively aggregated in the AFFM.

The main contributions of this paper are summarized as follows. (1) We propose a novel joint relation modeling module, the AJRE, which mainly includes the GCE and LIE. The GCE models the global coordination of all joints, encoding the balance property of human motion, while the LIE mines the local interactions between joint pairs. (2) We put forward an MTDE module to extract enriched dynamic information from the raw input data for effective prediction. (3) Our proposed Attractor-Guided Neural Network outperforms most state-of-the-art methods for short-term and long-term motion prediction on three standard benchmark datasets: H3.6M, CMU-Mocap, and 3DPW.

2 Related work

Skeleton-based motion prediction has attracted increasing attention. Recent works using neural networks [21, 7, 24, 19, 16, 20, 13, 1, 18, 22, 6, 5, 3, 29] have significantly outperformed traditional approaches [15, 30].

Human motion prediction. RNNs [7, 24, 19] were first applied to human motion prediction because of their strength in sequence modeling. The first attempt was made by Fragkiadaki et al. [7], who proposed an Encoder-Recurrent-Decoder (ERD) model combining an encoder and decoder with recurrent layers. They encoded the skeleton in each frame into a feature vector and built temporal correlations recursively. Martinez et al. [24] introduced a residual architecture to predict velocities and achieved better performance. However, these works all suffer from discontinuities between the observed poses and the predicted future ones. Though Gui et al. [19] proposed to generate smooth and realistic sequences through adversarial training, the error accumulation over long time horizons inherent to RNN schemes is hard to alleviate. Feedforward networks were widely adopted to address the above issues because their predictions are not recursive and thus avoid error accumulation. Li et al. [16] introduced a convolutional sequence-to-sequence model that encodes the skeleton sequence as a matrix whose columns represent the pose at each time step. However, its spatiotemporal modeling is still limited by the size of the convolutional filters. Recently, [21, 20] proposed to consider global spatial and temporal features simultaneously: both transform the temporal space into a trajectory space to take the global temporal information into account, which helps capture richer temporal correlations and thus achieves state-of-the-art results. In this paper, we follow this scheme but use a different method to model global spatial correlation.

Joint relation modeling. Previous work mainly focused on skeletal constraints to model the correlations between joints. Jain et al. [13] first introduced a Structural-RNN model to explicitly capture structural information through high-level spatiotemporal graphs. However, their graph is designed according to the kinematic structure and is not flexible for different motions. Recently, dynamic graph structures [21, 5, 18, 4] were developed to model more flexible joint relations. Mao et al. [21] used an adaptive graph to model motion, but it is still unreliable because the graph is initialized randomly without a structural prior. Cui et al. [5] further combined the kinematic structure with a dynamic graph structure. Li et al. [18] used stacked GCNs to build interactions across different structural scales in each layer, modeling the correlations of both neighboring and distant joints. However, a problem remains in existing methods: the global coordination of all joints, which reflects the balance property of human motion, is usually weakened because it is learned from part to whole progressively and asynchronously. Therefore, in this paper, we aim to encode the global coordination of all joints. Based on this intuition, we propose an Attractor-Based Joint Relation Extractor (AJRE) to better leverage the global coordination of all joints. Within this module, local interactions between joint pairs are also included as auxiliary information.

Dynamic representation of skeleton sequences. The raw skeleton sequence only represents each joint's position at each time step, which is not sufficient to convey the dynamic properties of motion. Many works [17, 32, 28] therefore proposed to extract an enriched dynamic representation from the raw data, relying on two-stream architectures to introduce velocity information. A drawback is that they only extract dynamics from neighboring frames. Though Li et al. [17] enlarged the time horizon with convolution operations, this is still insufficient because dynamics exist at different timescales. Therefore, in this paper, we extract dynamic features across frames with multi-timescale convolutions and fuse them into an enriched dynamic representation of the raw 3D coordinates.

3 Our Method

The proposed balance-attractor-guided framework, AGN, models human motion from a new perspective. It mainly includes two components, the MTDE and the AJRE. The MTDE extracts multi-timescale temporal information to obtain rich features for motion prediction. The AJRE mines balance-attractor-based dynamics from the multi-timescale input to model the spatiotemporal evolution of human motion. Finally, two convolutions are applied successively for dimension reduction to obtain the final predictions.

3.1 Problem formulation

We denote the observed historical 3D skeleton-based poses as a sequence of past frames and the future poses as the frames to be predicted, where each frame is a 3D pose consisting of a fixed number of joints with three-dimensional coordinates. Our goal is to generate the predicted future poses from the observed ones through the proposed framework.
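To make the setting concrete, the following is a minimal sketch of the tensor shapes involved, assuming the H3.6M configuration used later in the experiments (10 observed frames, 22 joints, 3D coordinates); the variable names are illustrative, not the paper's notation.

```python
import torch

T_obs, T_fut, J, D = 10, 10, 22, 3       # observed/future frames, joints, coordinate dims

observed = torch.randn(1, T_obs, J, D)   # past poses fed to the model
# The framework maps observed poses to predicted future poses of shape (1, T_fut, J, D).
predicted = torch.zeros(1, T_fut, J, D)  # placeholder for the model output
```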

3.2 Multi-timescale Dynamics Extractor (MTDE)

Dynamics is an important property of motion: it represents the patterns of the current motion and is used to anticipate future motion trends. Many previous works utilize a two-stream architecture to offer inputs of different modalities related to motion, such as velocity. While this makes sense, it is not suitable for all motions because the timescale of the dynamics varies across motions; thus, most previous works cannot obtain an efficient dynamic representation of motion. In this part, we combine motion dynamics at different timescales with the two-stream architecture to address this issue, so that more fine-grained dynamic information can be captured by the proposed Multi-timescale Dynamics Extractor.

The architecture is shown in Figure 2. We adopt a two-stream architecture: one path takes the raw input, and the other takes the differences between adjacent frames of the raw input, representing its velocity. Both paths are connected to a feature extractor that encodes dynamics at three different timescales. In particular, we model the dynamics of each joint separately to avoid interference from other joints. For motion prediction, it is beneficial for the model to extract a richer representation of each single joint before building the correlations between joints.

We take the raw-position stream as an example. Given its input, we first apply temporal convolutions with different timescales to generate new dynamic features. Formally,

(1)

where each convolution operates along the temporal dimension at one timescale and projects the input to a new channel size.

Since the outputs at different timescales contain different dynamic features of the motion, we concatenate them along the channel dimension. This operation enables the model to capture coarse and subtle detailed dynamics simultaneously. We also use a convolution to reduce the number of feature channels for efficiency. Formally,

(2)
(3)

where the features of all timescales are concatenated along the channel dimension before the reduction.

We extract the dynamics of the velocity stream with the same process to obtain its representation, where the velocity stream is computed as the differences between adjacent frames of the raw input. To make use of both kinds of features, we combine them along the temporal dimension to obtain the final dynamic representation.

(4)

where the features of the two streams are concatenated along the temporal dimension.
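As a concrete illustration, here is a minimal PyTorch sketch of the multi-timescale idea described above, assuming per-joint 1D temporal convolutions with kernel sizes 3, 5, and 7, channel concatenation followed by a 1x1 reduction, and a velocity stream formed by adjacent-frame differences; the layer sizes and names are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiTimescaleBranch(nn.Module):
    """Extracts per-joint dynamics at several temporal scales and fuses them."""
    def __init__(self, in_ch=3, out_ch=32, kernels=(3, 5, 7)):
        super().__init__()
        # One temporal convolution per timescale; padding preserves the sequence length.
        self.convs = nn.ModuleList(
            [nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernels]
        )
        # 1x1 convolution reduces the concatenated channels for efficiency.
        self.reduce = nn.Conv1d(out_ch * len(kernels), out_ch, 1)

    def forward(self, x):                            # x: (B*J, C, T), one joint at a time
        feats = [conv(x) for conv in self.convs]
        return self.reduce(torch.cat(feats, dim=1))  # (B*J, out_ch, T)

# Two-stream usage: raw positions and adjacent-frame differences (velocity).
B, J, C, T = 4, 22, 3, 10
pos = torch.randn(B, J, C, T).flatten(0, 1)          # (B*J, C, T), joints handled separately
vel = pos[..., 1:] - pos[..., :-1]                   # (B*J, C, T-1)
pos_branch, vel_branch = MultiTimescaleBranch(), MultiTimescaleBranch()
dynamics = torch.cat([pos_branch(pos), vel_branch(vel)], dim=-1)  # concatenated along time
```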

3.3 Attractor-Based Joint Relation Extractor (AJRE)

Figure 3: The overall process of GCE.

The AJRE is used to exploit richer joint relations for effective motion modeling. We propose the Global Coordination Extractor (GCE) and the Local Interaction Extractor (LIE) to separately model the global coordination of all joints and the local interactions between joint pairs. The Adaptive Feature Fusing Module (AFFM) is introduced to fuse their features according to channel-wise attention, improving the flexibility of joint relation modeling.

3.3.1 Global Coordination Extractor (GCE)

The global coordination of all joints plays an essential role in human motion: it requires all joints to coordinate synchronously and controls the balance of the human body during motion. However, it is usually weakened in previous works because the global motion features are generally learned by fusing the local features of different body components asynchronously and progressively. To tackle this issue, we learn a medium to build new joint relations indirectly. Through this medium, all joints are related synchronously, so the global coordination of all joints avoids being weakened and can be better learned.

Figure 4: The implementation of the GCE. The module has two parallel paths: one path involves a pure convolution, and the other contains the Balance Attractor Unit followed by the Cosine Similarity Unit.

As shown in Figure 3, we illustrate how to learn the global coordination of all joints through the BA. In the Balance Attractor Unit, we first learn a medium called the balance attractor (BA) by aggregating all joint features to characterize the global motion features. We then calculate the difference between the BA and each joint feature to fuse the global motion features into each joint feature. In the Cosine Similarity Unit, we generate a new joint relation graph by measuring the similarities of the joint pairs' new features. In this way, all joints are related synchronously through this medium, which reflects the global coordination of all joints. The relation graph is subsequently used to guide motion feature extraction. It is noteworthy that we learn the BA in a high-dimensional space instead of in 3D space, because the spatiotemporal features of motion in the high-dimensional space represent richer dynamics. We name the medium the balance attractor because it models the global coordination of all joints, which corresponds to the balance property of human motion.

More details are illustrated in Figure 4. In the Balance Attractor Unit, given the input features, we first transpose the dimensions so that the channel dimension indexes the joints, and the feature map in each channel represents the spatiotemporal features of one joint. Next, we adopt a simple convolution to learn the BA. Because the output of this convolution is the global response over the input channels, the BA represents the comprehensive features of all joints and reflects the global motion features. This process is a dynamic weighted aggregation of the joint features, where the weights are learned by the convolution and adapt to different motions. Formally,

(5)

where the transposition is performed between the joint dimension and the temporal dimension.

After obtaining the BA, it is used as a medium: a new representation of each joint relative to the BA is built indirectly by taking differences. The purpose is to fuse the global motion features into each joint feature.

(6)
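A hedged PyTorch sketch of the Balance Attractor Unit as described above: with joints arranged along the channel dimension, a 1x1 convolution aggregates all joint feature maps into a single attractor, and subtracting the attractor from each joint feature expresses every joint relative to it (shapes and names are assumptions, not the paper's code).

```python
import torch
import torch.nn as nn

class BalanceAttractorUnit(nn.Module):
    """Learns a balance attractor (BA) and expresses each joint relative to it."""
    def __init__(self, num_joints=22):
        super().__init__()
        # 1x1 conv over the joint "channels": a learned weighted aggregation of all joints.
        self.aggregate = nn.Conv2d(num_joints, 1, kernel_size=1)

    def forward(self, x):            # x: (B, J, C, T), spatiotemporal features per joint
        ba = self.aggregate(x)       # (B, 1, C, T), the global motion features (the BA)
        return x - ba                # (B, J, C, T), each joint relative to the attractor

relative = BalanceAttractorUnit()(torch.randn(2, 22, 32, 10))
```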

In the Cosine Similarity Unit, we focus on building new relations among all joints from these relative features. This step aims to encode the coordination of all joints into a relative joint relation graph. Specifically, we first use a convolution to learn an embedding of the relative features.

(7)

Next, we calculate the relative relations of the joints. In each feature map of the embedding, each row represents the spatiotemporal features of one joint. Therefore, we can calculate the cosine similarity between all pairs of row vectors to describe the correlation between joint pairs. We choose cosine similarity for two reasons: (1) this metric contains angle information that corresponds to the mutual influence between joints; (2) its value is limited to [-1, 1], which avoids violent variation.

Formally, denoting each row vector of the feature map at a given channel as the feature of one joint, we calculate the correlation matrix as:

(8)
(9)

where each entry represents the cosine similarity of one pair of joint features, and the resulting matrix denotes the correlation among all joints.

Notably, we calculate the correlation matrix on each channel, because each channel encodes specific spatiotemporal features and should focus on correlations different from those of other channels. Therefore, we obtain a correlation matrix for every channel.

The last step is to calculate the aggregated features according to the joint relations. Specifically, a convolution is used to extract intra-joint features, which are then combined under the guidance of the correlation matrices to obtain the final features.

(10)

where the correlation matrices are applied through channel-wise multiplication.
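A minimal sketch of the Cosine Similarity Unit under the description above, assuming the relative features from the Balance Attractor Unit are transposed back so that each channel's feature map has one joint per row; the final aggregation step, which propagates intra-joint features along the per-channel relation graph, is an assumed reading of the guidance described in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cosine_relation(feats):
    """Per-channel cosine-similarity graph between joints.

    feats: (B, C, J, T); each row feats[b, c, j] is one joint's spatiotemporal
    feature vector at channel c. Returns (B, C, J, J) with values in [-1, 1].
    """
    rows = F.normalize(feats, dim=-1)                  # unit-length row vectors
    return torch.matmul(rows, rows.transpose(-1, -2))

class CosineSimilarityUnit(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.embed = nn.Conv2d(channels, channels, 1)  # 1x1 embedding of relative features
        self.intra = nn.Conv2d(channels, channels, 1)  # intra-joint feature extraction

    def forward(self, rel):                            # rel: (B, C, J, T) relative joint features
        graph = cosine_relation(self.embed(rel))       # (B, C, J, J) per-channel relation graph
        # Assumed aggregation: propagate intra-joint features along the relation graph.
        return torch.matmul(graph, self.intra(rel))    # (B, C, J, T)

out = CosineSimilarityUnit()(torch.randn(2, 32, 22, 10))
```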

motion       | Walking             | Eating              | Smoking             | Discussion
time (ms)    | 80 160 320 400      | 80 160 320 400      | 80 160 320 400      | 80 160 320 400
ResSup [24]  | 23.8 40.4 62.9 70.9 | 17.6 34.7 71.9 87.7 | 19.7 36.6 61.8 73.9 | 31.7 61.3 96.0 103.5
ConvS2S [16] | 17.1 31.2 53.8 61.5 | 13.7 25.9 52.5 63.3 | 11.1 21.0 33.4 38.3 | 18.9 39.3 67.7 75.7
LTD [21]     | 8.9 15.7 29.2 33.4  | 8.8 18.9 39.4 47.2  | 7.8 14.9 25.3 28.7  | 9.8 22.1 39.6 44.1
LPJP [5]     | 7.9 14.5 29.1 34.5  | 8.4 18.1 37.4 45.3  | 6.8 13.2 24.1 27.5  | 8.3 21.7 43.9 48.0
TrajCNN [20] | 8.2 14.9 30.0 35.4  | 8.5 18.4 37.0 44.8  | 6.3 12.8 23.7 27.8  | 7.5 20.0 41.3 47.8
Ours         | 7.2 13.7 25.6 31.0  | 7.7 16.7 35.8 44.2  | 6.3 13.3 24.5 29.7  | 7.5 20.3 38.7 44.7

motion       | Directions          | Greeting              | Phoning             | Posing
time (ms)    | 80 160 320 400      | 80 160 320 400        | 80 160 320 400      | 80 160 320 400
ResSup [24]  | 36.5 56.4 81.5 97.3 | 37.9 74.1 139.0 158.8 | 25.6 44.4 74.0 84.2 | 27.9 54.7 131.3 160.8
ConvS2S [16] | 22.0 37.2 59.6 73.4 | 24.5 46.2 90.0 103.1  | 17.2 29.7 53.4 61.3 | 16.1 35.6 86.2 105.6
LTD [21]     | 12.6 24.4 48.2 58.4 | 14.5 30.5 74.2 89.0   | 11.5 20.2 37.9 43.2 | 9.4 23.9 66.2 82.9
LPJP [5]     | 11.1 22.7 48.0 58.4 | 13.2 28.0 64.5 77.9   | 10.8 19.6 37.6 46.8 | 8.3 22.8 65.6 81.8
TrajCNN [20] | 9.7 22.3 50.2 61.7  | 12.6 28.1 67.3 80.1   | 10.7 18.8 37.0 43.1 | 6.9 21.3 62.9 78.8
Ours         | 9.3 21.1 45.0 55.0  | 11.2 23.9 63.4 79.6   | 10.2 18.5 34.3 38.5 | 6.8 20.5 60.6 76.6

motion       | Purchasing            | Sitting               | Sitting down          | Taking photo
time (ms)    | 80 160 320 400        | 80 160 320 400        | 80 160 320 400        | 80 160 320 400
ResSup [24]  | 40.8 71.8 104.2 109.8 | 34.5 69.9 126.3 141.6 | 28.6 55.3 101.6 118.9 | 23.6 47.4 94.0 112.7
ConvS2S [16] | 29.4 54.9 82.2 93.0   | 19.8 42.4 77.0 88.4   | 17.1 34.9 66.3 77.7   | 14.0 27.2 53.8 66.2
LTD [21]     | 19.6 38.5 64.4 72.2   | 10.7 24.6 50.6 62.0   | 11.4 27.6 56.4 67.6   | 6.8 15.2 38.2 49.6
LPJP [5]     | 18.5 38.1 61.8 69.6   | 9.5 23.9 49.8 61.8    | 11.2 29.9 59.8 68.4   | 6.3 14.5 38.8 49.4
TrajCNN [20] | 17.1 36.1 64.3 75.1   | 9.0 22.0 49.4 62.6    | 10.7 28.8 55.1 62.9   | 5.4 13.4 36.2 47.0
Ours         | 17.1 38.0 65.0 73.0   | 7.8 19.9 44.9 56.4    | 9.2 23.7 47.7 59.4    | 5.6 14.3 37.6 48.9

motion       | Waiting               | Walking dog            | Walking Together    | Average
time (ms)    | 80 160 320 400        | 80 160 320 400         | 80 160 320 400      | 80 160 320 400
ResSup [24]  | 29.5 60.5 119.9 140.6 | 60.5 101.9 160.8 188.3 | 23.5 45.0 71.3 82.8 | 30.8 57.0 99.8 115.5
ConvS2S [16] | 17.9 36.5 74.9 90.7   | 40.6 74.7 116.6 138.7  | 15.0 29.9 54.3 65.8 | 19.6 37.8 68.1 80.2
LTD [21]     | 9.5 22.0 57.5 73.9    | 32.2 58.0 102.2 122.7  | 8.9 18.4 35.3 44.3  | 12.1 25.0 51.0 61.3
LPJP [5]     | 8.4 21.5 53.9 69.8    | 22.9 50.4 100.8 119.8  | 8.7 18.3 34.2 44.1  | 10.7 23.8 50.0 60.2
TrajCNN [20] | 8.2 21.0 53.4 68.9    | 23.6 52.0 98.1 116.9   | 8.5 18.5 33.9 43.4  | 10.2 23.2 49.3 59.7
Ours         | 7.7 18.8 48.0 64.7    | 22.0 49.2 90.9 110.0   | 7.8 17.3 32.1 43.3  | 9.6 22.0 46.2 57.0

Table 1: Short-term prediction on H3.6M, where "ms" denotes milliseconds.

3.3.2 Local Interaction Extractor (LIE)

Figure 5: The implementation of the Local Interaction Extractor (LIE). The left path uses a non-local block without a residual connection to learn the relations between distant joint pairs. The right path uses convolutions to learn the relations between adjacent joint pairs.

The Local Interaction Extractor (LIE) is used to learn local interactions between joint pairs, covering both adjacent and distant joints. The local connections via bones provide spatial correlation for adjacent joints. For distant joints, some pairs may be strongly correlated even if they are not directly connected, e.g., the left hand and right hand are tightly correlated during 'eating'. Therefore, these two kinds of relations are equally important for effective prediction.

As shown in Figure 5, given an input that is the same as that of the GCE, two main paths separately learn the relations between adjacent joint pairs and between distant joint pairs. For adjacent joint pairs, a pure convolution is adopted to extract their spatiotemporal features. For distant joint pairs, the self-attention non-local module [31] is used to capture their spatiotemporal features. The outputs can be described as follows. More details of this module are provided in the supplementary materials.

(11)
(12)
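A compact sketch of the two LIE paths described above: a plain convolution for adjacent-joint interactions and a non-local-style self-attention block (without a residual connection) for distant-joint interactions; all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LocalInteractionExtractor(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Path 1: plain convolution captures interactions between adjacent joints.
        self.local = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Path 2: non-local-style attention relates distant joints (no residual connection).
        self.theta = nn.Conv2d(channels, channels, 1)
        self.phi = nn.Conv2d(channels, channels, 1)
        self.g = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                             # x: (B, C, J, T)
        B, C, J, T = x.shape
        adjacent = self.local(x)
        q = self.theta(x).flatten(2).transpose(1, 2)  # (B, J*T, C)
        k = self.phi(x).flatten(2)                    # (B, C, J*T)
        v = self.g(x).flatten(2).transpose(1, 2)      # (B, J*T, C)
        attn = torch.softmax(q @ k, dim=-1)           # pairwise relations over all positions
        distant = (attn @ v).transpose(1, 2).reshape(B, C, J, T)
        return adjacent, distant

adj, dist = LocalInteractionExtractor()(torch.randn(2, 32, 22, 10))
```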

3.3.3 Adaptive Feature Fusing Module (AFFM)

Figure 6: The implementation of the Adaptive Feature Fusing Module (AFFM). The features learned from the previous block are fused with a channel attention mechanism.

Different motions have different preferences for the local interactions between joint pairs and the global coordination of all joints. Here we adopt a channel attention mechanism to fuse the features adaptively and form a more reliable representation.

As shown in Figure 6, global average pooling of the raw input summarizes the value of each feature map. After several neural network operations, we obtain the importance ratio of each channel through a sigmoid function. Finally, we perform a channel-wise multiplication between the ratios and the raw input to reweight the features. More details of this module are provided in the supplementary materials.
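A small sketch of this channel-attention fusion in the spirit of squeeze-and-excitation: global average pooling summarizes each channel, a small bottleneck followed by a sigmoid yields per-channel importance ratios, and the input is reweighted channel-wise; the reduction factor is an assumption.

```python
import torch
import torch.nn as nn

class AdaptiveFeatureFusing(nn.Module):
    """Fuses concatenated GCE/LIE features with channel-wise attention."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # global average pooling per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                              # importance ratio of each channel
        )

    def forward(self, x):                              # x: (B, C, J, T)
        ratio = self.fc(self.pool(x))                  # (B, C, 1, 1)
        return x * ratio                               # channel-wise reweighting

fused = AdaptiveFeatureFusing(64)(torch.randn(2, 64, 22, 10))
```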

motion Walking Eating Smoking Discussion Directions Greeting Phoning Posing
time(ms) 560 1000 560 1000 560 1000 560 1000 560 1000 560 1000 560 1000 560 1000
LTD [21] 42.2 51.3 56.5 68.6 32.3 60.5 70.4 103.5 85.8 109.3 91.8 87.4 65.0 113.6 113.4 220.6
TrajCNN [20] 37.9 46.4 59.2 71.5 32.7 58.7 75.4 103.0 84.7 104.2 91.4 84.3 62.3 113.5 111.6 210.9
Ours 35.5 42.7 57.3 70.3 30.9 55.0 74.3 105.7 89.7 103.5 91.1 90.5 59.1 110.5 107.3 211.9
motion Purchases Sitting Sitting down Taking photo Waiting Walking Dog Walking Together Average
time(ms) 560 1000 560 1000 560 1000 560 1000 560 1000 560 1000 560 1000 560 1000
LTD [21] 94.3 130.4 79.6 114.9 82.6 140.1 68.9 87.1 100.9 167.6 136.6 174.3 57.0 85.0 78.5 114.3
TrajCNN [20] 84.5 115.5 81.0 116.3 79.8 123.8 73.0 86.6 92.9 165.9 141.1 181.3 57.6 77.3 77.7 110.6
Ours 82.1 117.6 73.1 105.1 78.0 126.1 75.9 88.9 85.9 154.4 130.2 170.7 57.1 82.2 75.1 109.0
Table 2: Long-term prediction on H3.6M.

3.4 Loss Function

Following [20, 21], we use the Mean Per Joint Position Error (MPJPE). In particular, for one training sample, the loss is as follows:

(13)

where the loss averages, over all joints of the human pose, the distance between the predicted 3D coordinates of each joint and the corresponding ground truth.
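A direct PyTorch implementation of MPJPE for reference: the mean, over joints, frames, and samples, of the Euclidean distance between predicted and ground-truth 3D joint positions.

```python
import torch

def mpjpe(pred, target):
    """Mean Per Joint Position Error.

    pred, target: (B, T, J, 3) predicted and ground-truth 3D joint coordinates.
    Returns the mean Euclidean distance over all joints, frames, and samples.
    """
    return torch.norm(pred - target, dim=-1).mean()
```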

4 Experiments

We evaluate our model on several benchmark motion capture (mocap) datasets, including Human3.6M (H3.6M) [12], the CMU mocap dataset, and the 3DPW dataset [23]. We first introduce these datasets and the corresponding implementation details, and then compare our method with the state of the art in terms of MPJPE.

4.1 Datasets and Implementation Details

motion       | Basketball                | Basketball Signal        | Directing Traffic
time (ms)    | 80 160 320 400 1000       | 80 160 320 400 1000      | 80 160 320 400 1000
LTD [21]     | 14.0 25.4 49.6 61.4 106.1 | 3.5 6.1 11.7 15.2 53.9   | 7.4 15.1 31.7 42.2 152.4
LPJP [5]     | 11.6 21.7 44.4 57.3 90.9  | 2.6 4.9 12.7 18.7 75.8   | 6.2 12.7 29.1 39.6 149.1
TrajCNN [20] | 11.1 19.7 43.9 56.8 114.1 | 1.8 3.5 9.1 13.0 49.6    | 5.5 10.9 23.7 31.3 105.9
Ours         | 11.1 19.5 42.8 55.7 113.1 | 1.9 3.5 9.3 13.0 57.5    | 5.8 11.7 25.6 33.4 139.0

motion       | Jumping                   | Running                  | Soccer
time (ms)    | 80 160 320 400 1000       | 80 160 320 400 1000      | 80 160 320 400 1000
LTD [21]     | 16.9 34.4 76.3 96.8 164.6 | 25.5 36.7 39.3 39.9 58.2 | 11.3 21.5 44.2 55.8 117.5
LPJP [5]     | 12.9 27.6 73.5 92.2 176.6 | 23.5 34.2 35.2 36.1 43.1 | 9.2 18.4 39.2 49.5 93.9
TrajCNN [20] | 12.2 28.8 72.1 94.6 166.0 | 17.1 24.4 28.4 32.8 49.2 | 8.1 17.6 40.9 51.3 126.5
Ours         | 11.4 28.0 72.7 94.1 155.3 | 16.4 20.1 22.9 27.6 41.9 | 8.6 18.3 39.1 48.4 103.6

motion       | Walking                   | Wash Window              | Average
time (ms)    | 80 160 320 400 1000       | 80 160 320 400 1000      | 80 160 320 400 1000
LTD [21]     | 7.7 11.8 19.4 23.1 40.2   | 5.9 11.9 30.3 40.0 79.3  | 11.5 20.4 37.8 46.8 96.5
LPJP [5]     | 6.7 10.7 21.7 27.5 37.4   | 5.4 11.3 29.2 39.6 79.1  | 9.8 17.6 35.7 45.1 93.2
TrajCNN [20] | 6.5 10.3 19.4 23.7 41.6   | 4.5 9.7 29.9 41.5 89.9   | 8.3 15.6 33.4 43.1 92.8
Ours         | 5.9 9.0 17.4 21.1 38.8    | 4.6 10.0 28.6 39.0 73.1  | 8.2 15.1 32.3 41.5 90.3

Table 3: Short- and long-term prediction on CMU-Mocap.

H3.6M [12] is the most widely used benchmark for motion prediction. It involves 15 actions performed by professionals, and each human pose is represented by a 32-joint skeleton. Following [21, 20], we compute the joints' 3D coordinates by applying forward kinematics and down-sample the motion sequences to 25 frames per second. After removing the global rotation, translation, and constant 3D coordinates of each human pose, 22 joints remain. We test our method on subject 5 (S5).

3DPW. The 3D Poses in the Wild dataset (3DPW) [23] consists of challenging indoor and outdoor actions. It includes various activities such as shopping, doing sports, and hugging, with 60 sequences and more than 51k frames. For a fair comparison, we evaluate on the whole test set.

CMU-Mocap. The CMU mocap dataset mainly includes five categories. To be consistent with [21, 20], we select 8 detailed actions: "basketball", "basketball signal", "directing traffic", "jumping", "running", "soccer", "walking", and "washing window".

Network Setting. We use three timescales in the MTDE: 3, 5, and 7 frames around the target frame. The high-level feature dimension is 32. We use 5 layers in the encoder and 4 layers in the decoder to obtain a sufficient receptive field. The temporal dimension is enlarged to 64. More details can be found in the supplementary material.

Training. All training is conducted on the PyTorch platform with one 2080Ti GPU. We use the Adam [26] optimizer with an initial learning rate of 0.0005, which is decayed by a factor of 0.96 down to 0.0001. The batch size is set to 16.
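For reference, a hedged sketch of the setup implied by the settings above; the learning-rate schedule (exponential decay by 0.96 with a 1e-4 floor) is one plausible reading of the stated values, and the model is only a stand-in for the actual network.

```python
import torch

# Hyperparameters taken from the network/training settings above.
config = dict(
    timescales=(3, 5, 7),   # MTDE temporal kernel sizes
    feature_dim=32,         # high-level feature dimension
    encoder_layers=5,
    decoder_layers=4,
    temporal_dim=64,        # enlarged temporal dimension
    batch_size=16,
    lr=5e-4,
    lr_decay=0.96,          # assumed per-epoch decay factor
    lr_min=1e-4,            # assumed floor implied by the stated final rate
)

model = torch.nn.Linear(3, 3)  # stand-in for the actual network
optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=config["lr_decay"])
```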

4.2 Comparison with state-of-the-art

Here we report the prediction performance for both short-term and long-term motion prediction on H3.6M, CMU-Mocap, and 3DPW. We quantitatively evaluate the methods by the MPJPE between the generated motions and the ground truth in 3D coordinate space. To be consistent with the literature [20, 21], we report our results for short-term (<500ms) and long-term (>500ms) predictions. For all datasets, we are given 10 frames (400 milliseconds) and predict the future 10 frames (400 milliseconds) for short-term prediction and the future 25 frames (1 second) for long-term prediction. More results can be found in the supplementary material.

4.2.1 Results on H3.6M

Short-term motion prediction. Table 1 provides the short-term predictions on H3.6M for the 15 activities along with the average results. Our method outperforms all the baselines on average and on almost all motions, demonstrating that our approach learns a general representation across different movements. Specifically, for motions that require the upper body and lower body to cooperate, e.g., "Walking dog", "Phoning", and "Sitting down", our method improves the most, reflecting the efficacy of the proposed BA in joint relation modeling. Besides, the improvements at 320ms and 400ms are the largest, which shows that our method is better at capturing temporal continuity than the other methods. We also provide qualitative comparisons in Figure 1, which further show that our predictions are closer to the ground truth than those of the baselines on the above actions. More visualizations are included in the supplementary material.

Long-term motion prediction. In Table 2, we compare our results with those of the baselines for long-term prediction on H3.6M. Our method outperforms all the baselines on average. For long-term prediction, even as the uncertainty of motion increases, our method still obtains competitive performance on almost all motions. Especially for motions with more dynamics, such as "Walking Dog", our method outperforms the other competitors by the largest margin. These observations demonstrate the advantages of the proposed dynamic representation and BA.

4.2.2 Results on CMU-Mocap and 3DPW

Table 3 reports the MPJPE for short-term and long-term prediction on CMU-Mocap, and Table 4 reports the results on 3DPW. In essence, the conclusions remain unchanged: with the guidance of the BA, our method consistently outperforms the baselines for both short-term and long-term prediction.

time (ms) 200 400 600 800 1000
LTD [21] 36.0 69.0 91.0 107.6 118.6
Ours 34.7 66.7 85.6 98.0 108.4
Table 4: Short and long-term predictions on 3DPW.

5 Ablation study

In this section, we conduct several ablation experiments on H3.6M to verify the effectiveness of the different components of our proposed framework.

5.1 Effectiveness of components of MTDE

The MTDE is designed mainly to obtain enriched dynamic information from the raw input data. Table 5 shows the results: the results at 320ms and 400ms improve significantly, which shows that the MTDE encodes more temporal information and offers more meaningful guidance for prediction, especially over long time horizons.

time (ms)  80   160  320  400
w/o MTDE   9.8  22.6 48.0 58.4
w/ MTDE    9.6  22.0 46.3 57.0
Table 5: Results of the ablation experiments on the MTDE.

5.2 Effectiveness of components of GCE

The GCE is designed mainly to model the global coordination of the joints according to the human body's need to keep balance. It has two main components: the Balance Attractor Unit (BAU) and the Cosine Similarity Unit (CSU). To prove the effectiveness of the CSU, we design an experiment that replaces the cosine similarity with a common softmax function for comparison. To prove that the guidance of the BA is useful, we also design an experiment without the BAU. Table 6 shows the results, where the variants differ in whether the BAU is used and whether the CSU uses cosine similarity or softmax.

BAU  80   160  320  400
     10.2 23.4 49.5 60.6
     10.1 23.1 49.2 60.0
     9.7  22.3 47.4 58.4
     9.6  22.0 46.3 57.0
Table 6: Results of the ablation experiments on the GCE.

We have the following observations:

(1) The BAU is essential for effective prediction, especially over long horizons. This demonstrates that the BA offers useful indirect guidance and that the module extracts meaningful global motion features.

(2) Cosine similarity performs better than the softmax function used in self-attention models. This arises from two aspects. First, it avoids the violent differences produced by the softmax function, because cosine similarity limits values to [-1, 1]. Second, it carries angle information that represents both the orientation and the intensity of the correlation, while softmax only represents the intensity.

(3) The method with the proposed GCE outperforms the one without it by 0.5, 1.1, 2.9, and 3.0 at 80ms, 160ms, 320ms, and 400ms, respectively. This proves the effectiveness of the GCE module.

5.3 Effectiveness of LIE and AFFM

In Table 7, the method with only the GCE outperforms the one with only the LIE. This demonstrates that the proposed GCE is superior to modules that encode only the local interactions of joints, which indicates the importance of the proposed BA. The improved performance obtained by fusing the two paths proves that they are complementary.

The AFFM improves the results by 0.4 on average, which reflects that the channel attention enhances the overall performance. This gain is smaller than that brought by the GCE and LIE, which indicates that our model's improvement mainly comes from the design of the GCE and LIE.

Model              80   160  320  400
GCE only           9.7  22.6 48.3 58.9
LIE only           10.1 23.1 49.2 60.0
GCE + LIE          9.6  22.4 46.8 57.4
GCE + LIE + AFFM   9.6  22.0 46.3 57.0
Table 7: Results of the ablation experiments on the LIE and AFFM.

6 Conclusion

In this paper, we have proposed a simple yet effective framework, referred to as the Attractor-Guided Neural Network, to model spatiotemporal features for skeleton-based human motion prediction. We extract a dynamic representation of the raw skeleton data with an MTDE for effective prediction. To exploit richer joint relations, we propose an AJRE module that includes the GCE and LIE: the former presents the global coordination of all joints, and the latter encodes the local interactions between joint pairs. With these two fine-grained features introduced, our proposed method achieves state-of-the-art results on three benchmark datasets.

References

  • [1] E. Aksan, M. Kaufmann, and O. Hilliges (2019) Structured prediction helps 3D human motion modelling. In ICCV, pp. 7143–7152.
  • [2] A. Bhattacharyya, M. Fritz, and B. Schiele (2018) Long-term on-board prediction of people in traffic scenes under uncertainty. In CVPR, pp. 4194–4202.
  • [3] J. Butepage, M. J. Black, D. Kragic, and H. Kjellstrom (2017) Deep representation learning for human motion prediction and classification. In CVPR, pp. 6158–6166.
  • [4] Y. Cai, L. Ge, J. Liu, J. Cai, T. J. Cham, J. Yuan, and N. Thalmann (2019) Exploiting spatial-temporal relationships for 3D pose estimation via graph convolutional networks. In ICCV, pp. 2272–2281.
  • [5] Y. Cai, L. Huang, Y. Wang, T. J. Cham, J. Cai, J. Yuan, J. Liu, X. Yang, Y. Zhu, X. Shen, et al. (2020) Learning progressive joint propagation for human motion prediction. In ECCV, pp. 226–242.
  • [6] Q. Cui, H. Sun, and F. Yang (2020) Learning dynamic relationships for 3D human motion prediction. In CVPR, pp. 6519–6527.
  • [7] K. Fragkiadaki, S. Levine, P. Felsen, and J. Malik (2015) Recurrent network models for human dynamics. In ICCV, pp. 4346–4354.
  • [8] H. Gong, J. Sim, M. Likhachev, and J. Shi (2011) Multi-hypothesis motion planning for visual object tracking. In ICCV, pp. 619–626.
  • [9] L. Gui, K. Zhang, Y. Wang, X. Liang, J. M. F. Moura, and M. Veloso (2018) Teaching robots to predict human motion. In IROS, pp. 562–567.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778.
  • [11] D. A. Huang and K. M. Kitani (2014) Action-reaction: forecasting the dynamics of human interaction. In ECCV, pp. 489–504.
  • [12] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu (2014) Human3.6M: large scale datasets and predictive methods for 3D human sensing in natural environments. TPAMI 36(7), pp. 1325–1339.
  • [13] A. Jain, A. R. Zamir, S. Savarese, and A. Saxena (2016) Structural-RNN: deep learning on spatio-temporal graphs. In CVPR, pp. 5308–5317.
  • [14] H. S. Koppula and A. Saxena (2013) Anticipating human activities for reactive robotic response. In IROS, pp. 2071–2071.
  • [15] A. M. Lehrmann, P. V. Gehler, and S. Nowozin (2014) Efficient nonlinear Markov models for human motion. In CVPR, pp. 1314–1321.
  • [16] C. Li, Z. Zhang, W. S. Lee, and G. H. Lee (2018) Convolutional sequence to sequence model for human dynamics. In CVPR, pp. 5226–5234.
  • [17] C. Li, Q. Zhong, D. Xie, and S. Pu (2018) Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. In IJCAI, pp. 786–792.
  • [18] M. Li, S. Chen, Y. Zhao, Y. Zhang, Y. Wang, and Q. Tian (2020) Dynamic multiscale graph neural networks for 3D skeleton based human motion prediction. In CVPR, pp. 211–220.
  • [19] Y. Liang, X. Yu, X. Liang, and J. M. Moura (2018) Adversarial geometry-aware human motion prediction. In ECCV, pp. 823–842.
  • [20] X. Liu, J. Yin, J. Liu, P. Ding, J. Liu, and H. Liu (2020) TrajectoryCNN: a new spatio-temporal feature learning network for human motion prediction. IEEE Transactions on Circuits and Systems for Video Technology.
  • [21] W. Mao, M. Liu, M. Salzmann, and H. Li (2019) Learning trajectory dependencies for human motion prediction. In ICCV, pp. 9488–9496.
  • [22] W. Mao, M. Liu, and M. Salzmann (2020) History repeats itself: human motion prediction via motion attention. In ECCV, pp. 474–489.
  • [23] T. von Marcard, R. Henschel, M. J. Black, B. Rosenhahn, and G. Pons-Moll (2018) Recovering accurate 3D human pose in the wild using IMUs and a moving camera. In ECCV, pp. 614–631.
  • [24] J. Martinez, M. J. Black, and J. Romero (2017) On human motion prediction using recurrent neural networks. In CVPR, pp. 4674–4683.
  • [25] B. Paden, M. Čáp, S. Z. Yong, D. Yershov, and E. Frazzoli (2016) A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Transactions on Intelligent Vehicles 1(1), pp. 33–55.
  • [26] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in PyTorch.
  • [27] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: convolutional networks for biomedical image segmentation. In MICCAI, pp. 234–241.
  • [28] K. Simonyan and A. Zisserman (2014) Two-stream convolutional networks for action recognition in videos. In NeurIPS, pp. 568–576.
  • [29] B. Wang, E. Adeli, H. K. Chiu, D. A. Huang, and J. C. Niebles (2019) Imitation learning for human pose prediction. In ICCV, pp. 7124–7133.
  • [30] J. M. Wang, D. J. Fleet, and A. Hertzmann (2008) Gaussian process dynamical models for human motion. TPAMI 30(2), pp. 283–298.
  • [31] X. Wang, R. Girshick, A. Gupta, and K. He (2018) Non-local neural networks. In CVPR, pp. 7794–7803.
  • [32] S. Yan, Y. Xiong, and D. Lin (2018) Spatial temporal graph convolutional networks for skeleton-based action recognition. In AAAI, pp. 7444–7452.