3D skeleton-based human motion prediction forecasts future poses from past motions based on the human body skeleton. Motion prediction helps machines understand human behaviors and has attracted considerable attention [9, 20, 33, 5, 12, 2]. The related techniques can be widely applied in many computer vision and robotics scenarios, such as human-computer interaction [24, 23, 17, 13], autonomous driving, and pedestrian tracking [1, 15, 3].
Many methods, including conventional state-based methods [26, 45, 39, 38, 37] and deep-network-based methods [9, 33, 10, 7, 12, 14, 11, 34, 44], have been proposed for motion prediction. However, most of them do not explicitly exploit the relations or constraints between different body-components, which carry crucial information for motion prediction. A recent work built graphs across body-joints for pairwise relation modeling; however, such a graph is still insufficient to reflect a functional group of body-joints. Another work builds predefined structures to aggregate body-joint features into fixed body-parts, but that model only considers the physical constraints of the body without exploiting movement coordination and relations. For example, the action of ‘Walking’ tends to be understood from the collaborative movements of abstract arms and legs, rather than the detailed locations of fingers and toes.
To model more comprehensive relations, we propose a new representation for a human body: a multiscale graph, whose nodes are body-components at various scales and whose edges are pairwise relations between components. To model a body at multiple scales, a multiscale graph consists of two types of sub-graphs: single-scale graphs, connecting body-components at the same scale, and cross-scale graphs, connecting body-components across two scales; see Figure 1. The single-scale graphs together provide a pyramid representation of a body skeleton. Each cross-scale graph is a bipartite graph, bridging one single-scale graph to another. For example, an “arm” node in a coarse-scale graph could connect to “hand” and “elbow” nodes in a fine-scale graph. This multiscale graph is initialized by predefined physical connections and adaptively adjusted during training to be motion-sensitive. Overall, this multiscale representation provides a new possibility for modeling body relations.
Based on the multiscale graph, we propose a novel model, called dynamic multiscale graph neural networks (DMGNN), which is action-category-agnostic and follows an encoder-decoder framework to learn motion representations for prediction. The encoder contains a cascade of multiscale graph computational units (MGCU), each associated with a multiscale graph. One MGCU includes two key components: a single-scale graph convolution block (SS-GCB), leveraging single-scale graphs to extract features at individual scales, and a cross-scale fusion block (CS-FB), inferring cross-scale graphs to convert features from one scale to another and enable fusion across scales. The multiscale graph has an adaptive and trainable inbuilt topology; it is also dynamic because the topology changes from one MGCU to another; see the learned dynamic multiscale graphs in Figure 1. Notably, the cross-scale graphs in CS-FBs are constructed adaptively to input motions and reflect discriminative motion patterns for category-agnostic prediction.
As for the decoder, we adopt a graph-based gated recurrent unit (G-GRU) to sequentially produce predictions given the last estimated poses. The G-GRU utilizes trainable graphs to further enhance state propagation. We also use residual connections to stabilize the prediction. To learn richer motion dynamics, we introduce difference operators to extract multiple orders of motion differences as proxies of positions, velocities, and accelerations. The architecture of DMGNN is illustrated in Figure 2.
To verify the superiority of our DMGNN, extensive experiments are conducted on two large-scale datasets: Human 3.6M and CMU Mocap (http://mocap.cs.cmu.edu/). The experimental results show that our model outperforms most state-of-the-art works for both short-term and long-term prediction in terms of both effectiveness and efficiency. The main contributions of this paper are as follows:
We propose dynamic multiscale graph neural networks (DMGNN) to extract deep features at multiple scales and achieve effective motion prediction;
We propose two key components: a multiscale graph computational unit, which leverages a multiscale graph to extract and fuse features across multiple scales, as well as a graph-based GRU to enhance state propagation for pose generation; and
We conduct extensive experiments to show that the proposed DMGNN outperforms most state-of-the-art methods for short and long-term motion prediction on two large datasets. We further visualize the learned graphs for interpretability and reasoning.
2 Related Work
Human motion prediction:
To forecast motions, traditional methods, e.g., hidden Markov models, Gaussian processes, and random forests, were developed. Recently, deep networks have played increasingly crucial roles: some recurrent-network-based models generate future poses step-by-step [9, 20, 33, 42, 46, 11, 31, 12, 29]; some feed-forward networks [27, 32] try to reduce error accumulation for stable prediction; an imitation-learning algorithm was also proposed. However, these methods rarely consider sufficient relations across various scales, which carry comprehensive information for understanding human behaviors. In this work, we build dynamic multiscale graphs to capture rich multiscale relations and extract flexible semantics for motion prediction.
Graph deep learning:
Graphs, which express data associated with non-grid structures, preserve the dependencies among internal nodes [47, 41, 40]. Many studies have focused on graph representation learning and related applications [30, 8, 22, 16, 47, 36]. Based on fixed graph structures, previous works explored propagating node features in either the graph spectral domain [8, 22] or the graph vertex domain. Several graph-based models have been employed for skeleton-based action recognition [47, 28, 35], motion prediction, and 3D pose estimation. Different from previous works, our model considers multiscale graphs and the corresponding operations.
3 Problem Formulation
Suppose that the historical 3D skeleton-based poses and the future poses are given, where each pose, consisting of a set of joints with their feature dimensions, depicts the 3D body at a certain time. The goal of motion prediction is to generate future poses given the past observed ones; mathematically, we need a model whose predicted motion is close to the target.
To exploit rich body relations, we represent a body as a multiscale graph across multiscale body-components. Theoretically, we could use an arbitrary number of scales. Based on the nature of the human body, we specifically adopt three scales: the body-joint scale, the low-level-part scale, and the high-level-part scale. To initialize the multiscale body graphs, we merge spatially nearby joints into coarser scales based on human priors; see Figure 3. With the multiscale graphs, we propose dynamic multiscale graph neural networks (DMGNN) to predict future poses in an end-to-end fashion.
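To make the initialization concrete, here is a minimal sketch of merging spatially nearby joints into a coarser scale by averaging. The joint groupings and array shapes are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def pool_to_parts(joints, part_groups):
    """Average joint clusters into coarser body-components.

    joints: (J, 3) array of joint coordinates.
    part_groups: list of joint-index lists, one list per coarse part (assumed grouping).
    """
    return np.stack([joints[idx].mean(axis=0) for idx in part_groups])

# Toy example: 4 joints merged into 2 parts.
joints = np.arange(12, dtype=float).reshape(4, 3)
part_groups = [[0, 1], [2, 3]]          # hypothetical joint-to-part assignment
parts = pool_to_parts(joints, part_groups)
```

The same averaging applied to the low-level parts would initialize the high-level-part scale.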
4 Key Components
To construct our dynamic multiscale graph neural networks (DMGNN), we consider three basic components: a multiscale graph computational unit (MGCU), a graph-based GRU (G-GRU), and a difference operator.
4.1 Multiscale graph computational unit (MGCU)
The functionality of an MGCU is to extract and fuse features at multiple scales based on a multiscale graph, which is trained adaptively and individually. One MGCU includes two types of building blocks: single-scale graph convolution blocks, which leverage single-scale graphs to extract features at each scale, and cross-scale fusion blocks, which leverage cross-scale graphs to convert features from one scale to another and enable effective fusion across scales; see Figure 4. We now introduce each block in detail.
Single-scale graph convolution block (SS-GCB). To extract spatio-temporal features at each scale, we propose a single-scale graph convolution block (SS-GCB). The single-scale graph at each scale has a trainable adjacency matrix whose dimension equals the number of body-components at that scale. It is first initialized by a skeleton graph whose nodes are body-components and whose edges are physical connections, modeling a prior over the physical constraints; see Figure 3. During training, each element of the adjacency matrix is adaptively tuned to capture flexible body relations.
Based on the single-scale graph, SS-GCB effectively extracts deep features through two steps: 1) a graph convolution extracts spatial features of body-components; and 2) a temporal convolution extracts temporal features from motion sequences. Given the input feature at a scale, the spatial graph convolution propagates it over the trainable adjacency matrix with trainable parameters. Through (1), we extract the spatial features from correlated body-components.
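As a sketch of this step, the graph convolution can be written in the common form X' = act(A X W), where A is the trainable adjacency matrix and W the trainable weight; the paper's exact formulation (e.g., normalization or self-loops) may differ:

```python
import numpy as np

def graph_conv(A, X, W):
    """One spatial graph convolution step (illustrative form).

    A: (N, N) trainable adjacency matrix over body-components.
    X: (N, d_in) input features, one row per component.
    W: (d_in, d_out) trainable weight matrix.
    """
    return np.tanh(A @ X @ W)   # propagate over the graph, transform, activate

N, d_in, d_out = 5, 3, 8
A = np.eye(N)                   # here initialized as identity; in DMGNN, the skeleton graph
X = np.random.randn(N, d_in)
W = np.random.randn(d_in, d_out)
out = graph_conv(A, X, W)
```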
The adjacency matrix in each SS-GCB is trained individually and stays fixed during testing; the single-scale graphs in different SS-GCBs are thus dynamic, showing flexible relations. To capture motions along time, we then apply a temporal convolution on the feature sequences. Note that features extracted at various scales have different dimensionalities and reflect information with different receptive fields.
Cross-scale fusion block (CS-FB). To enable information diffusion across scales, we propose a cross-scale fusion block (CS-FB), which uses a cross-scale graph to convert features from one scale to another. A cross-scale graph is a bipartite graph that connects the nodes in one single-scale graph to the nodes in another single-scale graph. For example, the features of an “arm” node in the low-level-part scale can potentially guide the feature learning of a “hand” node in the body-joint scale. We aim to infer this cross-scale graph adaptively from data. Here we present the CS-FB from the body-joint scale to the low-level-part scale as an example.
We first infer the cross-scale graph, whose adjacency matrix models the cross-scale relations. Given the feature of each joint and each part along time, we vectorize them over a temporal window to leverage temporal information, where the window is determined by the temporal convolution kernel size and stride. We then infer the edge weight between each joint and each part through (2), where the mappings are MLPs, the softmax operator acts along the rows of the inner-product matrix, and features are combined by concatenation. (2a) and (2c) aggregate the relative features of all the components onto each component in the two scales, which are then updated by (2b) and (2d); (2e) obtains the adjacency matrix through an inner product and a softmax, so that we model the normalized effects from the whole body in one scale onto each component in the other. The intuition behind this design is to leverage global relative information to augment body-component features, and we use the inner product of two augmented features to obtain the edge weight. Figure 5 illustrates this inference. Notably, different from the single-scale graphs, which are fixed during inference, the cross-scale graphs are efficiently inferred online and are adaptive to motion features, making them flexible enough to capture distinct patterns for individual inputs.
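The following sketch illustrates the inference pipeline under simplifying assumptions: single linear maps stand in for the MLPs, and the relative-feature aggregation is reduced to subtracting the mean feature; the paper's (2a)–(2e) are richer than this:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def infer_cross_scale_graph(Fj, Fp, Wj, Wp):
    """Infer a joint-to-part adjacency matrix (illustrative).

    Fj: (J, d) joint features; Fp: (P, d) part features.
    Wj, Wp: (2d, k) linear maps standing in for the MLPs in (2b) and (2d).
    """
    rel_j = Fj - Fj.mean(axis=0, keepdims=True)   # aggregated relative features, cf. (2a)
    rel_p = Fp - Fp.mean(axis=0, keepdims=True)   # cf. (2c)
    Hj = np.concatenate([Fj, rel_j], axis=1) @ Wj # augmented joint embeddings, cf. (2b)
    Hp = np.concatenate([Fp, rel_p], axis=1) @ Wp # augmented part embeddings, cf. (2d)
    return softmax(Hj @ Hp.T, axis=1)             # inner product + row softmax, cf. (2e)

J, P, d, k = 6, 2, 4, 8
Fj, Fp = np.random.randn(J, d), np.random.randn(P, d)
Wj, Wp = np.random.randn(2 * d, k), np.random.randn(2 * d, k)
A_cross = infer_cross_scale_graph(Fj, Fp, Wj, Wp)   # (J, P), each row sums to 1
```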
We next fuse the joint features into the part scale with the inferred cross-scale graph. Given the joint features at a certain time stamp, the part-scale feature is updated by aggregating the joint features through the cross-scale adjacency matrix and a trainable weight. Thus, each body-part adaptively absorbs detailed information from the corresponding joints. The fused feature is fed into the SS-GCB of the next MGCU at the part scale. The other way around, we can define the fusion from the part scale to the joint scale with similar operations.
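A minimal sketch of this fusion step, assuming the form "parts = A_crossᵀ · joints · W" with the inferred cross-scale adjacency; the trainable weight here is illustrative:

```python
import numpy as np

def fuse_joints_to_parts(A_cross, Fj, W):
    """Aggregate joint features into part features.

    A_cross: (J, P) inferred cross-scale adjacency (joint -> part weights).
    Fj: (J, d_in) joint features at one time stamp.
    W: (d_in, d_out) trainable weight (identity here for illustration).
    """
    return A_cross.T @ Fj @ W    # each part absorbs weighted joint information

J, P, d = 6, 2, 4
A_cross = np.full((J, P), 1.0 / P)   # toy uniform weights
Fj = np.random.randn(J, d)
W = np.eye(d)
Fp = fuse_joints_to_parts(A_cross, Fj, W)
```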
4.2 Graph-based GRU
The functionality of a graph-based GRU (G-GRU) is to learn and update hidden states under the guidance of a graph. The key is to use a trainable graph to regularize the states, which are used to generate future poses. The adjacency matrix of the inbuilt graph is initialized with the skeleton graph and trained to build adaptive edges. At each time step, G-GRU takes two inputs: the previous hidden state and the online 3D skeleton-based information. Then, G-GRU works as
where the gates are parameterized by trainable linear mappings and trainable weights. Each G-GRU cell applies a graph convolution on the hidden state for information propagation and produces the state for the next frame.
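The cell can be sketched as a standard GRU whose hidden state is first propagated over the trainable graph; the exact gate parameterization in the paper may differ from this assumed form:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def g_gru_cell(x, h, A, Wz, Uz, Wr, Ur, Wh, Uh):
    """One graph-based GRU step (illustrative parameterization).

    x: (N, d) per-component input; h: (N, d) hidden state; A: (N, N) trainable graph.
    W*, U*: (d, d) trainable gate weights.
    """
    h_g = A @ h                                   # graph propagation of the hidden state
    z = sigmoid(x @ Wz + h_g @ Uz)                # update gate
    r = sigmoid(x @ Wr + h_g @ Ur)                # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h_g) @ Uh)    # candidate state
    return (1 - z) * h_g + z * h_tilde

N, d = 5, 4
rng = np.random.default_rng(0)
params = [rng.standard_normal((d, d)) * 0.1 for _ in range(6)]
x, h, A = rng.standard_normal((N, d)), np.zeros((N, d)), np.eye(N)
h_next = g_gru_cell(x, h, A, *params)
```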
4.3 Difference operator
Motion states such as velocity and acceleration carry important dynamics. To use them, we propose a difference operator that computes high-order differences of input sequences, guiding the model to learn richer dynamics. At each time step, the zero-order difference is the pose itself, and each higher-order difference is the frame-to-frame difference of the previous order.
We use zero padding after computing the differences to handle boundary conditions. Overall, the difference operator works as
Here we consider differences up to the second order; the three elements reflect positions, velocities, and accelerations.
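A minimal sketch of the operator: each order is the temporal difference of the previous one, zero-padded to keep the sequence length (padding at the tail here is an assumption):

```python
import numpy as np

def differences(X, max_order=2):
    """Compute orders 0..max_order of temporal differences, each (T, D).

    X: (T, D) pose sequence. Order 0 is the sequence itself.
    """
    outs = [X]
    for _ in range(max_order):
        d = np.diff(outs[-1], axis=0)                               # frame-to-frame difference
        d = np.concatenate([d, np.zeros((1, X.shape[1]))], axis=0)  # zero padding at boundary
        outs.append(d)
    return outs

X = np.array([[0.0], [1.0], [3.0], [6.0]])   # toy 1-D "poses"
pos, vel, acc = differences(X)               # proxies of position, velocity, acceleration
```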
5 DMGNN Framework
Here we present the architecture of our DMGNN, which contains a multiscale graph-based encoder and a recurrent graph-based decoder for motion prediction.
Capturing semantics from observed motions, the encoder aims to provide the decoder with motion states for prediction. In the encoder, for each motion sample, we first concatenate its multiple orders of differences as input. We initialize the coarser body scales by averaging joint clusters into their spatially corresponding components. For example, we average the two “right hand” joints into the “right arm” part. We then use a cascade of MGCUs to extract spatio-temporal features. Note that the multiscale graph associated with each MGCU is trained individually, so the graph topology can change dynamically from one MGCU to another. To finally combine the three scales for comprehensive semantics, the output features are summed with weights. Since the numbers of body-components differ across scales, we broadcast the coarser components to match their spatially corresponding joints. Given the broadcast output features of the three scales, the summed feature is
where is a hyper-parameter to balance different scales. We next use a temporal average pooling to remove the time dimension of and obtain , which aggregates historical information as the initial state of the decoder.
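The final fusion and pooling can be sketched as follows, assuming the hyper-parameter weights the broadcast coarse-scale features relative to the joint-scale features; shapes and the weighting pattern are illustrative:

```python
import numpy as np

def fuse_scales(F_joint, F_coarse_list, lam):
    """Weighted sum over scales followed by temporal average pooling.

    F_joint: (T, J, d) joint-scale features.
    F_coarse_list: coarse-scale features already broadcast to (T, J, d).
    lam: hyper-parameter balancing joint and coarser scales (assumed role).
    """
    fused = F_joint + lam * sum(F_coarse_list)   # weighted sum across scales, cf. (3)
    return fused.mean(axis=0)                    # temporal average pooling -> (J, d)

T, J, d = 10, 6, 4
F1 = np.random.randn(T, J, d)                    # joint scale
F2 = np.random.randn(T, J, d)                    # broadcast low-level parts
F3 = np.random.randn(T, J, d)                    # broadcast high-level parts
h0 = fuse_scales(F1, [F2, F3], lam=0.5)          # initial state for the decoder
```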
The decoder aims to predict future poses sequentially. The core of the decoder is the proposed graph-based GRU (G-GRU), which further propagates motion states for sequence regression. We first use the difference operator to extract three orders of differences as motion priors, and then feed them into the G-GRU to update the hidden state. We next generate the future pose displacement with an output function, implemented by MLPs. Finally, we add the displacement to the input pose to predict the next frame. The initial hidden state is the final output of the encoder.
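One decoder step can be sketched as below; `cell` and `out_fn` are toy stand-ins for the G-GRU and the output MLP, and the residual connection adds the predicted displacement to the last pose:

```python
import numpy as np

def decode_step(x_prev, h, cell, out_fn):
    """One recurrent decoding step with a residual connection.

    x_prev: last (estimated) pose; h: hidden state.
    cell: recurrent update (stand-in for G-GRU); out_fn: output function (stand-in for the MLP).
    """
    h_next = cell(x_prev, h)               # update hidden state from the last pose
    displacement = out_fn(h_next)          # predicted pose displacement
    return x_prev + displacement, h_next   # residual connection to the next frame

cell = lambda x, h: np.tanh(h + x)         # toy recurrent update
out_fn = lambda h: 0.1 * h                 # toy output function
x, h = np.ones(3), np.zeros(3)
x_next, h_next = decode_step(x, h, cell, out_fn)
```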
5.3 Loss function
To train our DMGNN, we adopt the ℓ1 loss. Given the predictions and the corresponding ground truths over all training samples, the loss function is the mean ℓ1 distance between them. The ℓ1 loss gives sufficient gradients to joints with small losses to promote even more precise prediction; it also gives stable gradients to joints with large losses, alleviating gradient explosion. In our experiments, the ℓ1 loss leads to more precise predictions than the ℓ2 loss. All the weights in the proposed DMGNN are trained end-to-end with stochastic gradient descent.
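As a sketch, the training objective is the mean absolute error between predicted and ground-truth poses, averaged over samples, frames, and joints:

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error between predicted and ground-truth poses."""
    return np.mean(np.abs(pred - target))

pred = np.array([[0.0, 1.0], [2.0, 3.0]])
target = np.array([[0.5, 1.0], [1.0, 3.0]])
loss = l1_loss(pred, target)   # (0.5 + 0 + 1 + 0) / 4 = 0.375
```

Unlike the squared error, this loss has a constant-magnitude gradient, matching the stability argument above.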
6 Experiments
6.1 Datasets and experimental setup
Human 3.6M (H3.6M). The H3.6M dataset contains subjects performing various classes of actions. We transform the joint positions into exponential maps and only use the joints with non-zero values. Along the time axis, we downsample all sequences by two. Following previous paradigms, the models are trained on 6 subjects and tested on the specific clips of the 5th subject.
CMU motion capture (CMU Mocap). CMU Mocap consists of five general classes of actions: ‘human interaction’, ‘interaction with environment’, ‘locomotion’, ‘physical activities & sports’, and ‘situations & scenarios’. We preserve the joints with non-zero exponential maps. To be consistent with previous work, we select eight detailed actions: ‘basketball’, ‘basketball signal’, ‘directing traffic’, ‘jumping’, ‘running’, ‘soccer’, ‘walking’, and ‘washing window’. We evaluate our model with the same approach as for H3.6M.
We implement DMGNN with PyTorch 1.0 on one GTX-2080Ti GPU. We set three scales for both datasets and use four cascaded MGCUs. In the first two MGCUs, we use both SS-GCBs and CS-FBs to extract spatio-temporal features and fuse cross-scale features; in the last two MGCUs, we only use SS-GCBs. In the decoder, we use a two-layer MLP for pose output. In training, we clip the gradients to a maximum ℓ2-norm and use the Adam optimizer. All the hyper-parameters are selected on validation sets.
Baseline methods. We compare the proposed DMGNN with many recent works that learn motion patterns from pose vectors, e.g., Res-sup., CSM, TP-RNN, AGED, and Imit-L, or from separated bodies, e.g., Skel-TNet and Traj-GCN. We reproduce Res-sup., CSM, and Traj-GCN based on their released code. We also employ a naive baseline, ZeroV, which sets all predictions to be the last observed pose.
6.2 Comparison to state-of-the-art methods
To validate the proposed DMGNN, we show the prediction performance for both short-term and long-term motion prediction on Human 3.6M (H3.6M) and CMU Mocap. We quantitatively evaluate various methods by the mean angle error (MAE) between the generated motions and ground-truths in angle space. We also illustrate the predicted samples for qualitative evaluation.
Short-term motion prediction. Short-term motion prediction aims to predict future poses within 500 milliseconds. We compare DMGNN to state-of-the-art methods for predicting poses within 400 milliseconds on the H3.6M dataset. We first test four representative actions: ‘Walking’, ‘Eating’, ‘Smoking’, and ‘Discussion’. Table 1 shows the MAEs of DMGNN and some baselines. We also present the performance of several variants of DMGNN: using fixed body-graphs in SS-GCBs (fixed), using a common GRU without a graph (no G-GRU), or using only the joint-scale bodies. We see that i) the complete DMGNN obtains the most precise predictions among all the variants; and ii) compared to the baselines, DMGNN has the lowest prediction MAEs on ‘Eating’ and ‘Smoking’, and obtains competitive results on ‘Walking’ and ‘Discussion’. Table 2 compares the proposed DMGNN with some recent baselines on the remaining actions in H3.6M. We see that DMGNN achieves the best performance on most actions (also in average MAE).
Long-term motion prediction. Long-term motion prediction aims to predict poses beyond 500 milliseconds, which is challenging due to action variation and non-linear movements. Table 3 presents the MAEs of various models and the average MAEs across actions at the future 560 ms and 1000 ms on the H3.6M dataset. We see that DMGNN outperforms the competitors on the actions ‘Eating’ and ‘Discussion’ at 560 ms, and obtains competitive performance in the other cases.
We also train our DMGNN for short-term and long-term prediction on the eight classes of actions in the CMU Mocap dataset. Table 4 shows the MAEs across the future 1000 ms. We see that DMGNN significantly outperforms the state-of-the-art methods on the actions ‘Basketball’, ‘Basketball Signal’, ‘Running’, and ‘Walking’, and obtains competitive performance on the other actions.
Predicted sample visualization. We compare the samples synthesized by DMGNN to those of Res-sup., CSM, and Traj-GCN on H3.6M. Figure 6 illustrates the future poses of ‘Taking Photo’ over 1000 ms with a frame interval of 80 ms. Compared to the baselines, DMGNN completes the action accurately and reasonably, providing significantly better predictions. Res-sup. shows a large discontinuity between the last observed pose and the first predicted one (red box); CSM and Traj-GCN show large errors after the 280th ms (blue box); all three baselines give large posture errors in the long term (yellow box). We show more prediction images and videos in the Appendix.
Effectiveness and efficiency test. We compare the running time of DMGNN to several recent models. Table 5 presents the running time of different methods for short-term and long-term motion prediction on the H3.6M dataset. We see that DMGNN achieves the shortest running time for generating future poses over both 400 and 1000 ms, compared with the other competitors [33, 27, 32]. DMGNN takes very little time to generate motions over 400 ms, indicating that DMGNN with multiscale graphs has efficient operations.
6.3 Ablation study
We now investigate some crucial elements of DMGNN.
Effects of multiple scales.
To verify the proposed multiscale representation, we employ various scales in DMGNN for 3D skeleton-based motion prediction. Besides the three scales in our model, we introduce two additional, coarser scales: one that represents a body as three parts (left limbs, right limbs, and torso), and one that contains two parts (upper body and lower body); see the illustrations in the Appendix. Table 6 presents the MAEs with various scales. We see that combining our three scales achieves the lowest prediction error. Notably, using two scales is significantly better than using only the joint scale, but involving overly abstract scales tends to hurt prediction.
Effects of the number of MGCUs.
To validate the effects of multiple MGCUs in the encoder, we tune the number of MGCUs and show the prediction errors and running time for short-term and long-term prediction on H3.6M in Table 7. We see that, with few MGCUs, the prediction MAEs fall and the time costs rise continuously as MGCUs are added; with more MGCUs, the prediction errors are stably low, but the time costs rise further. Therefore, we select four MGCUs, achieving precise prediction and high running efficiency.
Effects of CS-FBs.
Here, we evaluate 1) the effectiveness of using relative features during cross-scale graph inference in CS-FBs, and 2) different numbers of CS-FBs in the sequence of MGCUs. Without any CS-FB, the model only fuses all scales at the end of the encoder. Table 8 presents the average MAEs with different numbers of CS-FBs and relative-feature mechanisms across 400 ms on H3.6M. We see that 1) using relative features leads to lower MAEs, validating the effectiveness of such augmented features; and 2) using two CS-FBs leads to the best prediction performance. The intuition is that 0 or 1 CS-FBs fuse insufficiently, while 3 CS-FBs tend to fuse redundant information that confuses the model.
Effect of the hyper-parameter in final fusion. The hyper-parameter in the final fusion (3) balances the influence between the joint scale and the more abstract scales. Figure 7 illustrates the average MAE with different body scales and CS-FBs for short-term prediction on H3.6M. We see that the performance reaches its best when we use all three scales and hierarchical CS-FBs, and it is robust to changes of the hyper-parameter.
Effect of high-order motion differences.
We study the effects of feeding various orders of motion differences into the encoder and decoder of our model. We evaluate DMGNN with different combinations of orders of pose differences. Table 9 presents the MAEs of DMGNN with various input differences for short-term motion prediction. We see that DMGNN obtains the lowest MAEs when it adopts all three orders of motion differences, indicating that high-order differences improve the prediction performance significantly.
6.4 Analysis of category-agnostic property
Here we validate that DMGNN can learn discriminative motion features for category-agnostic prediction.
We first visualize the learned cross-scale graphs for different actions to test their discriminative power. Figure 8 shows the graphs in two CS-FBs for ‘Walking’ and ‘Directions’ in H3.6M. For each action, we show some strong relations from the detailed scales to the right arms in the coarse scales. We see that i) for each action, the CS-FBs capture diverse ranges of the human body: the graph in the first CS-FB focuses on nearby body-components, while the second CS-FB captures more global and action-related effects, e.g., hands and feet affect the arms during walking; and ii) the cross-scale graphs differ across actions, especially in the second CS-FB, capturing distinct patterns.
We next conduct action classification on the intermediate representations to test their discriminative power. We separately train a two-layer MLP to classify each dynamic cross-scale graph. We also classify the outputs from the encoders of DMGNN, Res-sup. (class-aware), and TP-RNN (class-agnostic). Table 10 presents the average classification accuracies over the categories of actions. We see that the cross-scale graph in the second CS-FB is more informative than the one in the first CS-FB for action recognition. Compared to the baselines, DMGNN obtains the highest classification accuracy on the encoder representation, indicating that DMGNN captures discriminative information for class-agnostic prediction.
7 Conclusion
We build dynamic multiscale graphs to represent a human body and propose dynamic multiscale graph neural networks (DMGNN) with an encoder-decoder framework for 3D skeleton-based human motion prediction. In the encoder, we develop multiscale graph computational units (MGCU) to extract features; in the decoder, we develop a graph-based GRU (G-GRU) for pose generation. The results show that the proposed model outperforms most state-of-the-art methods for both short-term and long-term prediction in terms of both effectiveness and efficiency.
Acknowledgement: This work is supported by the National Key Research and Development Program of China (No. 2019YFB1804304), SHEITC (No. 2018-RGZN-02046), NSFC (No. 61521062), 111 plan (No. B07022), and STCSM (No. 18DZ2270700).
-  Alexandre Alahi, Kratarth Goel, Vignesh Ramanathan, Alexandre Robicquet, Feifei Li, and Silvio Savarese. Social lstm: Human trajectory prediction in crowded spaces. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 961–971, June 2016.
-  Emad Barsoum, John Kender, and Zicheng Liu. Hp-gan: Probabilistic 3d human motion prediction via gan. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 1531–1540, June 2018.
-  Apratim Bhattacharyya, Mario Fritz, and Bernt Schiele. Long-term on-board prediction of people in traffic scenes under uncertainty. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4194–4202, June 2018.
-  Léon Bottou. Large-scale machine learning with stochastic gradient descent. In International Conference on Computational Statistics (COMPSTAT), pages 177–187, August 2010.
-  Judith Butepage, Michael Black, Danica Kragic, and Hedvig Kjellstrom. Deep representation learning for human motion prediction and classification. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1591–1599, July 2017.
-  Siheng Chen, Baoan Liu, Chen Feng, Carlos Vallespi-Gonzalez, and Carl Wellington. 3d point cloud processing and learning for autonomous driving. IEEE Signal Processing Magazine Special Issue on Autonomous Driving, 2020.
-  Hsukuang Chiu, Ehsan Adeli, Borui Wang, DeAn Huang, and Juan Niebles. Action-agnostic human pose forecasting. CoRR, abs/1810.09676, 2018.
-  Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems (NeurIPS), pages 3844–3852, December 2016.
-  Katerina Fragkiadaki, Sergey Levine, Panna Felsen, and Jitendra Malik. Recurrent network models for human dynamics. In The IEEE International Conference on Computer Vision (ICCV), pages 4346–4354, December 2015.
-  Partha Ghosh, Jie Song, Emre Aksan, and Otmar Hilliges. Learning human motion models for long-term predictions. CoRR, abs/1704.02827, 2017.
-  Anand Gopalakrishnan, Ankur Mali, Dan Kifer, Lee Giles, and Alexander Ororbia. A neural temporal model for human motion prediction. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 12116–12125, June 2019.
-  Liangyan Gui, Yuxiong Wang, Xiaodan Liang, and Jose Moura. Adversarial geometry-aware human motion prediction. In The European Conference on Computer Vision (ECCV), pages 786–803, September 2018.
-  Liangyan Gui, Kevin Zhang, Yuxiong Wang, Xiaodan Liang, Jose Moura, and Manuela Veloso. Teaching robots to predict human motion. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2018.
-  Xiao Guo and Jongmoo Choi. Human motion prediction via learning local structure representations and temporal dependencies. In AAAI Conference on Artificial Intelligence, February 2019.
-  Ankur Gupta, Julieta Martinez, James Little, and Robert Woodham. 3d pose from motion for cross-view action recognition via non-linear circulant temporal encoding. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2061–2068, June 2014.
-  Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems (NeurIPS), pages 1024–1034, December 2017.
-  Dean Huang and Kris Kitani. Action-reaction: Forecasting the dynamics of human interaction. In The European Conference on Computer Vision (ECCV), pages 489–504, July 2014.
-  Du Huynh. Metrics for 3d rotations: Comparison and analysis. Journal of Mathematical Imaging and Vision, 35(2):155–164, October 2009.
-  Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 36(7):1325–1339, July 2014.
-  Ashesh Jain, Amir Zamir, Silvio Savarese, and Ashutosh Saxena. Structural-rnn: Deep learning on spatio-temporal graphs. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5308–5317, June 2016.
-  Diederik Kingma and Jimmylei Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), pages 1–15, May 2015.
-  Thomas Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), pages 1–14, April 2017.
-  Hema Koppula and Ashutosh Saxena. Learning spatio-temporal structure from rgb-d videos for human activity detection and anticipation. In International Conference on Machine Learning (ICML), pages 792–800, June 2013.
-  Hema Koppula and Ashutosh Saxena. Anticipating human activities using object affordances for reactive robotic response. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 38(1):14–29, January 2016.
-  JogendraNath Kundu, Maharshi Gor, and RVenkatesh Babu. Bihmp-gan: Bidirectional 3d human motion prediction gan. In AAAI Conference on Artificial Intelligence, February 2019.
-  Andreas Lehrmann, Peter Gehler, and Sebastian Nowozin. Efficient nonlinear markov models for human motion. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1314–1321, June 2014.
-  Chen Li, Zhen Zhang, Wee Sun Lee, and Gim Hee Lee. Convolutional sequence to sequence model for human dynamics. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5226–5234, June 2018.
-  Maosen Li, Siheng Chen, Xu Chen, Ya Zhang, Yanfeng Wang, and Qi Tian. Actional-structural graph convolutional networks for skeleton-based action recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3595–3603, June 2019.
-  Maosen Li, Siheng Chen, Xu Chen, Ya Zhang, Yanfeng Wang, and Qi Tian. Symbiotic graph neural networks for 3d skeleton-based human action recognition and motion prediction. CoRR, abs/1910.02212, 2019.
-  Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In International Conference on Learning Representations (ICLR), pages 1–20, May 2016.
-  Zhenguang Liu, Shuang Wu, Shuyuan Jin, Qi Liu, Shijian Lu, Roger Zimmermann, and Li Cheng. Towards natural and accurate future motion prediction of humans and animals. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 10004–10012, June 2019.
-  Wei Mao, Miaomiao Liu, Mathieu Salzmann, and Hongdong Li. Learning trajectory dependencies for human motion prediction. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
-  Julieta Martinez, Michael Black, and Javier Romero. On human motion prediction using recurrent neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4674–4683, July 2017.
-  Dario Pavllo, David Grangier, and Michael Auli. Quaternet: A quaternion-based recurrent model for human motion. In British Machine Vision Conference (BMVC), pages 1–14, September 2018.
-  Lei Shi, Yifan Zhang, Jian Cheng, and Hanqing Lu. Skeleton-based action recognition with directed graph neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7912–7921, June 2019.
-  Chenyang Si, Ya Jing, Wei Wang, Liang Wang, and Tieniu Tan. Skeleton-based action recognition with spatial reasoning and temporal stack learning. In The European Conference on Computer Vision (ECCV), pages 103–118, September 2018.
-  Ilya Sutskever, Geoffrey Hinton, and Graham Taylor. The recurrent temporal restricted boltzmann machine. In Advances in Neural Information Processing Systems (NeurIPS), pages 1601–1608, December 2009.
-  Graham Taylor and Geoffrey Hinton. Factored conditional restricted Boltzmann machines for modeling motion style. In International Conference on Machine Learning (ICML), pages 1025–1032, June 2009.
-  Graham Taylor, Geoffrey Hinton, and Sam Roweis. Modeling human motion using binary latent variables. In Advances in Neural Information Processing Systems (NeurIPS), pages 1345–1352, December 2007.
-  Diego Valsesia, Giulia Fracastoro, and Enrico Magli. Learning localized generative models for 3d point clouds via graph convolution. In International Conference on Learning Representations (ICLR), pages 1–15, May 2019.
-  Nitika Verma, Edmond Boyer, and Jakob Verbeek. Feastnet: Feature-steered graph convolutions for 3d shape analysis. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2598–2606, June 2018.
-  Jacob Walker, Kenneth Marino, Abhinav Gupta, and Martial Hebert. The pose knows: Video forecasting by generating pose futures. In The IEEE International Conference on Computer Vision (ICCV), pages 3332–3341, October 2017.
-  Borui Wang, Ehsan Adeli, Hsukuang Chiu, Dean Huang, and Juan Carlos Niebles. Imitation learning for human pose prediction. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
-  He Wang, Edmond Ho, Hubert Shum, and Zhanxing Zhu. Spatio-temporal manifold learning for human motions via long-horizon modeling. IEEE Transactions on Visualization and Computer Graphics (TVCG), PP(99), August 2019.
-  Jack Wang, Aaron Hertzmann, and David Fleet. Gaussian process dynamical models. In Advances in Neural Information Processing Systems (NeurIPS), pages 1441–1448, December 2006.
-  Tianfan Xue, Jiajun Wu, Katherine Bouman, and Bill Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In Advances in Neural Information Processing Systems (NeurIPS), pages 91–99, December 2016.
-  Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In AAAI Conference on Artificial Intelligence (AAAI), pages 7444–7452, February 2018.
-  Long Zhao, Xi Peng, Yu Tian, Mubbasir Kapadia, and Dimitris N. Metaxas. Semantic graph convolutional networks for 3d human pose regression. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3425–3435, June 2019.
8 Detailed Architecture
Here we show the detailed structure of the proposed DMGNN. We first show the structure of the encoder, including the single-scale graph convolution block (SS-GCB) and cross-scale fusion block (CS-FB). We then show the structure of the decoder, including the graph-based gated recurrent unit (G-GRU).
Single-scale graph convolution block (SS-GCB). An SS-GCB consists of a graph convolution and a temporal convolution. Table 11 presents the structures of the four cascaded SS-GCBs at one scale in the encoder of DMGNN.
| Idx | Shape & Operations | Feature | Remarks |
|-----|--------------------|---------|---------|
| 1 | stride=1 | | temporal conv |
| 2 | stride=2 | | temporal conv |
| 3 | stride=2 | | temporal conv |
| 4 | stride=2 | | temporal conv |
We see that we use four SS-GCBs to extract spatio-temporal motion features. In each SS-GCB, we employ ReLU, batch normalization, and dropout operations. We use stride 2 to downsample along the temporal dimension.
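As a rough illustration of one SS-GCB (not the paper's implementation), the sketch below applies a per-frame graph convolution over body-components and then downsamples the time axis with stride 2; the strided average here is a hypothetical stand-in for the learned temporal convolution, and the ReLU stands in for the activation, batch normalization, and dropout stack:

```python
import numpy as np

def single_scale_gcb(x, adj, w_graph, stride=2):
    """SS-GCB sketch: graph convolution per frame, then a strided
    temporal average as a stand-in for the temporal convolution.

    x: (T, N, C) features over T frames, N body-components, C channels.
    adj: (N, N) single-scale adjacency (row-normalized).
    w_graph: (C, C_out) graph-convolution weights.
    """
    # Graph convolution applied frame by frame: A X W, then ReLU.
    h = np.einsum("nm,tmc->tnc", adj, x) @ w_graph
    h = np.maximum(h, 0.0)
    # Strided temporal pooling: average each window of `stride` frames.
    T = h.shape[0] - h.shape[0] % stride
    return h[:T].reshape(T // stride, stride, *h.shape[1:]).mean(axis=1)
```

With an 8-frame input and stride 2, the output has 4 frames, matching the temporal downsampling described above.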
Cross-scale fusion block (CS-FB). We use a CS-FB to fuse multiscale features. Table 12 presents the structure of the first CS-FB, which fuses the features from the first body-scale to the second.
| Step | Shape & Operations |
|------|--------------------|
| 1 | temporal conv, stride=2; vectorize |
| 2 | for both scales: MLP 800-256-relu |
| | for both scales: MLP 512-256-relu |
| 3 | compute Equation (2e) in the paper |
We first use a temporal convolution to shrink the temporal dimension and obtain a compact feature vector for each body-component; we then use four MLPs to learn the feature embeddings for the two body-scales, respectively; we finally calculate the inner product of these two embeddings and employ a softmax to obtain the corresponding edge weight in the cross-scale graph.
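The edge-weight computation above can be sketched as follows; this is a simplified illustration, with single linear-plus-ReLU maps standing in for the MLPs and hypothetical feature dimensions:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_scale_edges(feat_fine, feat_coarse, w_fine, w_coarse):
    """CS-FB edge-weight sketch: embed the per-component features of the
    two scales, take inner products, and softmax-normalize over the
    fine-scale components.

    feat_fine: (N1, D1) compact features after the temporal conv.
    feat_coarse: (N2, D2) compact features at the coarser scale.
    Returns (N2, N1) cross-scale edge weights; each coarse node's
    incoming weights sum to 1.
    """
    emb_fine = np.maximum(feat_fine @ w_fine, 0.0)        # (N1, E)
    emb_coarse = np.maximum(feat_coarse @ w_coarse, 0.0)  # (N2, E)
    scores = emb_coarse @ emb_fine.T                      # (N2, N1)
    return softmax(scores, axis=-1)
```

The softmax makes the learned cross-scale graph row-stochastic, so each coarse-scale node aggregates a convex combination of fine-scale features.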
Total architecture. In summary, we show the total architecture of the encoder, which combines SS-GCBs at multiple scales and CS-FBs across scales. Table 13 presents the structure of the encoder.
| MGCU | Structure (initialize three scales) |
|------|-------------------------------------|
| 1 | SS-GCB 1 at each of the three scales |
| | CS-FB 1 between the first & second scales and the second & third scales |
| 2 | SS-GCB 2 at each of the three scales |
| | CS-FB 2 between the first & second scales and the second & third scales |
| 3 | SS-GCB 3 at each of the three scales |
| 4 | SS-GCB 4 at each of the three scales |
| | a final SS-GCB |
| | temporal average pooling |
We see that we use four MGCUs, where the first two MGCUs use SS-GCBs and CS-FBs to learn the features from multiscale bodies, and the last two MGCUs only use SS-GCBs to extract features.
Graph-based Gated Recurrent Unit (G-GRU). The G-GRU is one of the key components in the proposed decoder for synthesizing precise and reasonable future poses. Table 14 presents the structure of the G-GRU at each time stamp.
| Gate | Operations |
|------|------------|
| reset gate | graph conv on the input and on the hidden state; sum and sigmoid |
| update gate | graph conv on the input and on the hidden state; sum and sigmoid |
| candidate state | graph conv on the input and on the hidden state; element-wise product with the reset gate; sum and tanh |
We see that we take the historical motion state and the online 3D skeleton-based information as inputs and use graph convolution to propagate the motion information, producing the motion state at the next frame. The hidden dimension of the G-GRU is 256.
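One G-GRU step can be sketched as below, assuming the standard GRU gating (reset, update, candidate) with each linear map replaced by a one-hop graph convolution A Z W; the parameter names `Wr`, `Ur`, etc. are our own illustrative labels, not the paper's notation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def graph_gru_step(x, h, adj, params):
    """G-GRU step sketch with graph convolutions in place of linear maps.

    x: (N, C) current input features per body-component.
    h: (N, H) hidden motion state.
    adj: (N, N) decoder graph adjacency.
    params: dict of weight matrices Wr, Ur, Wu, Uu, Wc, Uc.
    """
    gc = lambda z, w: adj @ z @ w  # graph conv: propagate, then transform
    r = sigmoid(gc(x, params["Wr"]) + gc(h, params["Ur"]))      # reset gate
    u = sigmoid(gc(x, params["Wu"]) + gc(h, params["Uu"]))      # update gate
    c = np.tanh(gc(x, params["Wc"]) + r * gc(h, params["Uc"]))  # candidate
    return u * h + (1.0 - u) * c  # next hidden motion state
```

Replacing the GRU's matrix multiplications with graph convolutions lets the recurrent state propagate information along the body graph at every time stamp.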
Total architecture. Here, we show the total architecture of the decoder, which combines the proposed G-GRU and an MLP-formed output function. Table 15 presents the structure of the decoder at each time stamp.
We see that, given the hidden motion state and the current input information, we use a G-GRU and an MLP-formed output function to model the displacement of motions between two consecutive frames, and we employ residual connections to obtain the estimated poses. The hidden dimensions are 256.
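The residual decoding step can be sketched as follows; a single linear map is a hypothetical stand-in for the MLP-formed output function:

```python
import numpy as np

def decode_step(pose, hidden, w_out):
    """Residual decoding sketch: read the hidden state as a per-joint
    displacement and add it to the current pose.

    pose: (N, 3) current 3D pose; hidden: (N, H); w_out: (H, 3).
    """
    displacement = hidden @ w_out  # predicted frame-to-frame motion
    return pose + displacement     # residual connection
```

Because the network only has to predict the (typically small) displacement, the residual connection keeps consecutive predicted frames close to each other.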
9 Quantitative Comparison with more Baselines
In our paper submission, we only compare DMGNN to several state-of-the-art works, while many other methods have been developed. Here we compare DMGNN to as many previous methods as possible. Table 16 presents the MAEs of these methods for short-term motion prediction on 4 representative actions of Human 3.6M.
We see that the proposed DMGNN outperforms the state-of-the-art methods on most actions. Notably, we have cited all of the baselines presented in Table 16 in our paper submission.
10 Coarser Body-scales in Ablation Studies
In the first experiment of the ablation studies (‘effects of multiple scales’), we initialize two coarser body-scales besides the three effective scales used in our DMGNN. Here we present the two coarser body-scales in detail.
To initialize the first coarser scale, we average the input features of three body-components, left-body, head-and-torso, and right-body, to form the nodes of the corresponding body-graph. We build two initial edges to connect head-and-torso with left-body and right-body, respectively. To initialize the second coarser scale, we average the input features of two body-components, upper-body and lower-body, as the graph nodes, and build an edge between these two body-components. Figure 9 illustrates the two coarser body-scales as well as the body-joint scale on Human 3.6M. We name the former the ‘Left-right-body scale’ and the latter the ‘Up-low-body scale’.
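The scale initialization above reduces to average-pooling joint features within each body-component; a minimal sketch, where the joint-to-component grouping is supplied as index lists:

```python
import numpy as np

def init_coarse_scale(joint_feats, groups):
    """Coarser-scale initialization sketch: average the input features of
    the joints belonging to each body-component (e.g. left-body,
    head-and-torso, right-body) to form the coarse-graph nodes.

    joint_feats: (N_joints, C) fine-scale features.
    groups: list of joint-index lists, one per coarse body-component.
    """
    return np.stack([joint_feats[idx].mean(axis=0) for idx in groups])
```

For the ‘Up-low-body scale’, `groups` would contain two index lists (upper-body joints and lower-body joints), yielding a two-node graph connected by a single edge.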
11 Effects of Numbers and Positions of CS-FBs
In our DMGNN, we employ CS-FBs, which aggregate relative features, at different MGCUs to fuse various levels of motion features across different scales; see Equation (2a) in the submission. Here we further investigate the effects of the numbers and positions of CS-FBs in the cascaded MGCUs. Among the four MGCUs, we use one to four CS-FBs at different positions and obtain the average prediction MAEs of the resulting model variants.
Table 17 presents the average MAEs of DMGNN with different numbers of CS-FBs at different MGCUs on H3.6M for short-term motion prediction. We also compare the performance of CS-FBs with or without aggregating relative information from all the body-components (‘with relative’ or ‘without relative’). We denote the numbers of CS-FBs in the column ‘Number’ and the CS-FB positions as MGCU indices in the column ‘Position’.
| Number | Position | MAE (without relative) | MAE (with relative) |
|--------|----------|------------------------|---------------------|
We see that 1) when we aggregate global relative information in the CS-FB, we obtain lower MAEs than with the module without relative information aggregation; 2) when we use two CS-FBs with relative information aggregation at the 1st and 2nd MGCUs, DMGNN produces the most precise predictions across the model variants; 3) fusing multiscale features in the first few MGCUs outperforms fusing in the last ones. A possible reason is that, with only one CS-FB, we cannot fuse rich enough features for comprehensive pattern learning, while with too many CS-FBs the capacity of the network becomes much larger, leading to overfitting.
12 More Generated Motion Samples
12.1 Human 3.6M Dataset
Figure 10 illustrates the predicted poses of ‘Posing’ in Human 3.6M in 1000 ms.
We see that the proposed DMGNN models the posture well, such as stretched bodies and arms; in contrast, Res-sup predicts the motion with a large discontinuity between the last observed pose and the first predicted one (red box); CSM and Traj-GCN tend to have large errors after the 400th ms (blue box); and all the baselines produce unreasonable poses at the 1000th ms (yellow box), which are far from the ground truth.
We also predict the action of ‘Waiting’ in Human 3.6M in a long term with different methods. The results are illustrated in Figure 11.
We see that, among the baselines, the motion predicted by Res-sup has a large discontinuity between the last observed pose and the first predicted one (red box) and loses the movements, which is far from the ground truth; CSM and Traj-GCN suffer from large errors after the 320th ms; all the baselines predict unreasonable poses at the 1000th ms (yellow box); in contrast, the prediction from DMGNN completes the action reasonably.
12.2 CMU Mocap Dataset
For the action of ‘Basketball’, the main challenge of motion prediction is the running legs and swaying arms. We illustrate the generated samples of three models in Figure 13.
We see that the errors of the predictions from CSM and Traj-GCN rise after the 320th ms (blue box); the two baselines give unreasonable postures at the 1000th ms in the long term (yellow box): CSM has a wrong tilt orientation of the body, and the left leg (purple) of the pose predicted by Traj-GCN has an inaccurate position; DMGNN predicts motions with smaller errors in both the short term and the long term.
For the action of ‘Washing window’, we also predict the future poses in 1000 ms and illustrate them in Figure 13.
We see that the prediction of CSM has a large discontinuity between the last observed pose and the first predicted one (red box); Traj-GCN tends to have large errors after the 400th ms, since the pose does not raise the left arm (blue box); the two baselines give poses with large errors at the 1000th ms (yellow box); in contrast, DMGNN predicts motions with smaller errors in both the short term and the long term.