
Application of Graph Convolutions in a Lightweight Model for Skeletal Human Motion Forecasting

10/10/2021
by   Luca Hermes, et al.

Prediction of movements is essential for successful cooperation with intelligent systems. We propose a model that integrates the organized spatial information given by the moving body's skeletal structure. Our model exploits this inherent structure through graph convolutions, and we demonstrate how the structured spatial information can be leveraged into competitive predictions by a lightweight model with a comparatively small number of parameters.


1 Introduction

Human motion forecasting has many useful applications. As intelligent systems increasingly interact with humans, it becomes necessary to predict human movements and actions; consider, for example, a cooperative task in which a human and a robot must collaborate safely in an assembly process. Multiple techniques have been developed to tackle such tasks, ranging from simple heuristics to learning-based approaches. Machine learning approaches for such temporal data have mostly relied on recurrent networks applied to vectorized joint representations, which use the skeletal structure only implicitly. The latest advances in geometric deep learning (GDL) provide a direct way to leverage the skeletal structure for predictions via graph convolutions (GCNs). In this work, we propose a straightforward implementation of such a model. We use spatio-temporal convolutions together with a type of GCN to extract spatio-temporal features from human motion, resulting in a simple autoregressive model. We adapt the dilated causal convolutions of [wavenet] for temporal modeling, but include local joint connectivity, which leads to a lightweight spatio-temporal operation. The code will be made publicly available at https://github.com/LucaHermes/lightweight-motion-forecasting.

2 Methods

In this section, we introduce our deep geometric model for motion forecasting. The model is based on Graph-WaveNet [graph_wavenet], a spatio-temporal extension to the original WaveNet [wavenet].

Figure 1: Model architecture, with consecutive spatio-temporal processing blocks followed by two alternating ReLU and linear layers. Rounded arrows denote a channel-split.

The input to the model is a time series of consecutive skeletal states with $N$ joints and a $d$-dimensional joint representation (Fig. 1 shows the model architecture). In our experiments, we represent joints as quaternions, i.e. $d = 4$.

First, the linear input layer is applied to each joint and acts as a trainable embedding for the $d$-dimensional joint inputs. Second, consecutive Graph-WaveNet blocks extract spatio-temporal features from the given time series. Every block produces a skip output, and the sum of these outputs is, third, passed to an MLP that is applied per joint.

Every Graph-WaveNet block performs a spatio-temporal convolution (ST-Conv) followed by a purely spatial graph convolution (K-GCN), as shown in Fig. 1. Both operations use the same output dimensionality, which we refer to as the block dimensionality. A residual connection bypasses both convolutions by adding the block input to the block output. The result is fed to the subsequent block.
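The block wiring can be summarized with the following minimal PyTorch sketch. The module and argument names (STBlock, st_conv, gcn, block_dim, skip_dim) are our own placeholder choices, and the spatio-temporal convolution and kinematic graph convolution are passed in as abstract modules; this is a sketch of the described wiring, not the authors' implementation.

```python
import torch
import torch.nn as nn


class STBlock(nn.Module):
    """One Graph-WaveNet-style processing block: ST-Conv, then K-GCN,
    with a residual connection and a per-joint skip output (a sketch)."""

    def __init__(self, st_conv: nn.Module, gcn: nn.Module,
                 block_dim: int, skip_dim: int):
        super().__init__()
        self.st_conv = st_conv                 # spatio-temporal convolution
        self.gcn = gcn                         # purely spatial graph convolution
        self.skip_proj = nn.Linear(block_dim, skip_dim)  # skip output per joint

    def forward(self, x, adjacencies):
        # x: (batch, time, joints, block_dim)
        h = self.st_conv(x)                    # temporal + local spatial context
        h = self.gcn(h, adjacencies)           # aggregation over kinematic subgraphs
        out = x + h                            # residual connection bypassing both convs
        skip = self.skip_proj(h)               # skip branch, summed over all blocks
        return out, skip
```

The skip outputs of all blocks are then summed and passed to the per-joint output MLP, as described above.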

We use the following model configuration in our experiments: The input layer consists of neurons. We use blocks with and a skip output dimensionality of . The two layers in the output MLP have and neurons, respectively. The resulting model has trainable parameters.

Spatio-Temporal Convolution       In the original Graph-WaveNet, a purely temporal convolution is used to extract temporal features from the input. In contrast, we substitute this temporal convolution, which acts on each joint individually, with a spatio-temporal convolution that integrates information from neighboring joints. This operation is inspired by the temporal extension module (TEM) [temporal_extension_module], a type of GCN placed prior to a temporal operation, so that the temporal operation acts on information from a joint's local neighborhood.

We integrate the idea of TEM directly into the spatio-temporal convolution operation. This means that, instead of applying a 1-dimensional convolution to the trajectory of a single joint, we apply a 2-dimensional spatio-temporal convolution to the trajectory of a kinematic chain of joints. The joint hierarchy is given by the kinematic tree of the skeleton. By selecting the hip as the root joint, an ordering is introduced that extends into the leaf joints, i.e. the head, hands, and feet; a small example of such a lookup is sketched below.
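The following snippet illustrates how such an ordering yields parent and grandparent lookup tables. The five-joint chain is a purely hypothetical toy skeleton, not the H3.6M kinematic tree, and the convention that the root is its own parent is our own assumption.

```python
import numpy as np

# Hypothetical toy kinematic tree: joint 0 is the hip/root.
# parents[i] gives the parent of joint i; the root points to itself.
parents = np.array([0, 0, 1, 2, 2])

grandparents = parents[parents]  # parent of the parent; the root stays the root

for joint in range(len(parents)):
    print(f"joint {joint}: parent {parents[joint]}, grandparent {grandparents[joint]}")
```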

Fig. 2 visualizes our spatio-temporal convolution. First, the parent and grandparent joint of every joint are sampled based on the kinematic tree. Afterwards, the trajectories of these three joints are stacked and convolved with a 2-dimensional kernel (blue rectangle in Fig. 2(b)). The kernel size is denoted as $k_s$ and $k_t$ in the spatial and the temporal dimension, respectively. The applied convolution is causal and dilated in the temporal dimension, following the pattern of [wavenet, graph_wavenet].
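A minimal PyTorch sketch of this operation is given below, under our own assumptions: a channels-last layout of shape (batch, time, joints, channels), a spatial kernel that always spans the joint, its parent, and its grandparent ($k_s = 3$), and left-padding of the time axis to keep the convolution causal. The class name and argument names are placeholders, not the authors' code.

```python
import torch
import torch.nn as nn


class SpatioTemporalConv(nn.Module):
    """Causal, dilated 2-D convolution over (kinematic chain, time) -- a sketch."""

    def __init__(self, channels, parents, kernel_t=2, dilation=1):
        super().__init__()
        parents = torch.as_tensor(parents, dtype=torch.long)
        self.register_buffer("parents", parents)
        self.register_buffer("grandparents", parents[parents])
        self.kernel_t = kernel_t
        self.dilation = dilation
        # Kernel spans 3 joints (grandparent, parent, joint) x kernel_t time steps.
        self.conv = nn.Conv2d(channels, channels, kernel_size=(3, kernel_t),
                              dilation=(1, dilation))

    def forward(self, x):
        # x: (batch, time, joints, channels)
        chain = torch.stack(
            [x[:, :, self.grandparents], x[:, :, self.parents], x], dim=3
        )  # (batch, time, joints, 3, channels)
        b, t, j, s, c = chain.shape
        # Fold joints into the batch; use (chain, time) as the 2-D spatial axes.
        chain = chain.permute(0, 2, 4, 3, 1).reshape(b * j, c, s, t)
        # Left-pad the time axis so the convolution stays causal.
        pad = (self.kernel_t - 1) * self.dilation
        chain = nn.functional.pad(chain, (pad, 0, 0, 0))
        out = self.conv(chain)  # (batch * joints, channels, 1, time)
        return out.squeeze(2).reshape(b, j, c, t).permute(0, 3, 1, 2)


# Toy usage with the hypothetical 5-joint chain from above.
x = torch.randn(2, 32, 5, 8)
conv = SpatioTemporalConv(channels=8, parents=[0, 0, 1, 2, 2], kernel_t=2, dilation=4)
print(conv(x).shape)  # torch.Size([2, 32, 5, 8])
```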

Figure 2: (a) Sampling strategy for the ST-Conv. (b) Computation of the spatio-temporal convolution. A 2-dimensional kernel computes the convolution of the time series of three joints. The kernel size in the spatial and the temporal domain is denoted by $k_s$ and $k_t$, respectively.

Spatial Graph-Convolution       The purely spatial convolution is computed using graph convolutions (GCNs) as defined in [kipf_welling_17_spacial_gcn]. Instead of using the full skeletal graph, we again utilize the kinematic tree described above to convert the undirected skeletal graph into three directed subgraphs: In the first subgraph, an edge from joint $i$ to joint $j$ exists if and only if $i$ lies further up in the hierarchy of the kinematic tree than $j$; this subgraph retains all edges linking a joint to its immediate child joints. The second subgraph is constructed analogously, but with inverted edge direction, and therefore retains all edges linking a joint to its immediate parent joints. The third subgraph consists only of self-loops. The subgraphs are represented by three adjacency matrices $A_1$, $A_2$, and $A_3$; note that $A_3$ corresponds to the identity matrix. The output of this operation is computed as follows:

\[
Z = \sum_{k=1}^{3} D_k^{-1} A_k X W_k,
\]

where the inputs are given as $X$, $W_k$ is the parameter matrix of the GCN operation for subgraph $k$, and $D_k$ denotes the degree matrix of subgraph $A_k$.
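The sketch below constructs the three adjacency matrices from a parent array and applies the normalized sum above in NumPy. The per-subgraph weight matrices and the row-wise degree normalization are our own reading of the equation, not the authors' code.

```python
import numpy as np


def build_subgraphs(parents):
    """Three adjacency matrices from the kinematic tree: edges to children,
    edges to parents, and self-loops (the identity matrix)."""
    n = len(parents)
    a_down = np.zeros((n, n))
    for child, parent in enumerate(parents):
        if child != parent:              # skip the root's self-reference
            a_down[parent, child] = 1.0  # edge to an immediate child joint
    a_up = a_down.T                      # edge to the immediate parent joint
    a_self = np.eye(n)                   # self-loops
    return [a_down, a_up, a_self]


def kinematic_gcn(x, adjacencies, weights):
    """x: (joints, channels); weights: one (channels, out_dim) matrix per subgraph."""
    out = 0.0
    for a, w in zip(adjacencies, weights):
        deg = np.maximum(a.sum(axis=1, keepdims=True), 1.0)  # degree, avoiding /0
        out = out + (a / deg) @ x @ w                         # D^-1 A X W per subgraph
    return out


# Toy usage with the hypothetical 5-joint chain from above.
parents = np.array([0, 0, 1, 2, 2])
x = np.random.randn(5, 8)
weights = [np.random.randn(8, 8) for _ in range(3)]
print(kinematic_gcn(x, build_subgraphs(parents), weights).shape)  # (5, 8)
```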

3 Results

In this section, we describe the experiments we conducted on the presented model. To quantify the performance, we perform an evaluation using protocols from the related literature and provide qualitative results.

Dataset and Training Setup     We train and evaluate the model on the Human3.6M (H3.6M) dataset [h36m_pami, h36m_ionescu]. The dataset consists of motion capture data from seven different human subjects performing 15 different actions. Tables 1 and 2 provide an overview of these actions. We follow the general preprocessing procedure of [quaternet], downsampling the dataset by a factor of 2 to 25 Hz and mirroring each trajectory.

We split the data by subject into a training set (subjects 1, 7, 8, 9, 11), a validation set (subject 6), and a test set (subject 5). The models are trained with a batch size of 16 trajectories for 3000 epochs on the mean absolute error in quaternion space. An epoch consists of five samples from every sequence, where a sequence is a single trial of one subject performing a single action. Each sample contains successive frames from a single downsampled or mirrored version of the trajectories. The model is trained using the Adam optimizer with an initial learning rate that is decayed exponentially after every epoch. We use a seed trajectory of 32 frames (1.28 s) to condition the model, because this exactly fills the receptive field. The target sequence has a length of 10 frames (400 ms) and is generated autoregressively.
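As a sanity check on the seed length, the receptive field of a stack of causal dilated convolutions can be computed as below. The temporal kernel size of 2 and the doubling dilation pattern are assumptions in the spirit of WaveNet, not values taken from this section, but they do reproduce the stated 32-frame receptive field.

```python
# Receptive field of stacked causal dilated convolutions (WaveNet-style).
# Assumed configuration: temporal kernel size 2, dilations doubling per block.
kernel_t = 2
dilations = [1, 2, 4, 8, 16]

receptive_field = 1 + sum((kernel_t - 1) * d for d in dilations)
print(receptive_field)  # 32 -- matches the 32-frame seed trajectory
```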

Model Evaluation     We show qualitative and quantitative results for the proposed model. For the model evaluation, we use the non-mirrored, but downsampled dataset. For the quantitative evaluation, we follow the standard evaluation protocol of [fragkiadaki2015]. This protocol first constructs an evaluation set consisting of four random samples from every sequence in the test set, i.e. the trials of subject 5. (We were able to draw the same sequences as [fragkiadaki2015] using the RandomState implementation of NumPy's random package, v. 1.19.2, with a fixed seed.) An error metric quantifies the performance of the model on this evaluation set. The metric used by [fragkiadaki2015] is the Euclidean distance between the predicted and target rotations converted to Euler angles. The following equation summarizes this metric:

\[
E(t) = \sqrt{\sum_{j=1}^{J} \sum_{d \in \{x, y, z\}} \left( \hat{e}_{t,j,d} - e_{t,j,d} \right)^2}, \qquad t = 1, \dots, T,
\]

where $T$ is the number of time steps, $J$ is the number of joints, and the sum over $d$ accumulates the error in the $x$, $y$, and $z$ dimensions of the given Euler angles. The final results correspond to the average taken over the four samples. Following [martinez2017], the results for a running average over 2 and 4 frames (Run. avg. 2/4) and for a zero-velocity model are also documented as baselines. The zero-velocity model returns the last observed frame as the prediction for all successive frames.
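A minimal NumPy/SciPy sketch of this metric is given below. The quaternion ordering (x, y, z, w, as expected by SciPy) and the "xyz" Euler angle convention are our own assumptions; they are not specified in this section.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def euler_angle_error(pred_quat, target_quat):
    """Euclidean distance between predicted and target rotations in Euler angles.

    pred_quat, target_quat: (time, joints, 4) quaternions in SciPy's
    (x, y, z, w) convention -- an assumption, not the paper's layout.
    Returns one error value per predicted time step.
    """
    t, j, _ = pred_quat.shape
    pred = Rotation.from_quat(pred_quat.reshape(-1, 4)).as_euler("xyz")
    target = Rotation.from_quat(target_quat.reshape(-1, 4)).as_euler("xyz")
    diff = (pred - target).reshape(t, j * 3)
    return np.sqrt((diff ** 2).sum(axis=1))  # Euclidean norm per time step
```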

Table 1 shows the results under the standard protocol for four actions. The running average and zero-velocity baselines are included, as well as results for multiple state-of-the-art models. Table 2 lists additional results for the remaining 11 actions (we include results from other models where available for comparison). Our forecasting model generally shows competitive results and in some cases, e.g. eating, directions, and walkingtogether, outperforms the referenced approaches. The original Graph-WaveNet architecture converges to a zero-velocity model.
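For reference, the two trivial baselines can be written as follows. This is a sketch of the usual definitions (a constant prediction of the last frame, or of the mean of the last few frames); naively averaging quaternions like this would additionally require renormalization, which we omit here.

```python
import numpy as np


def zero_velocity_baseline(seed, horizon):
    """Repeat the last observed frame for every future step."""
    return np.repeat(seed[-1:], horizon, axis=0)


def running_average_baseline(seed, horizon, k=2):
    """Predict the mean of the last k observed frames for every future step."""
    avg = seed[-k:].mean(axis=0, keepdims=True)
    return np.repeat(avg, horizon, axis=0)
```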

Walking Eating Smoking Discussion No. of
Milliseconds 80 160 320 400 80 160 320 400 80 160 320 400 80 160 320 400 Parameters
Run.-avg. 4 0.64 0.87 1.07 1.20 0.40 0.59 0.77 0.88 0.37 0.58 1.03 1.02 0.60 0.90 1.11 1.15 -
Run.-avg. 2 0.48 0.74 1.02 1.17 0.32 0.52 0.74 0.87 0.30 0.52 0.99 0.97 0.41 0.74 0.99 1.09 -
Zero-velocity 0.39 0.68 0.99 1.15 0.27 0.48 0.73 0.86 0.26 0.48 0.97 0.95 0.31 0.67 0.94 1.04 -
LSTM-3LR [fragkiadaki2015] 0.77 1.00 1.29 1.47 0.89 1.09 1.35 1.46 1.34 1.65 2.04 2.16 1.88 2.12 2.25 2.23
GRU sup. [martinez2017] 0.28 0.49 0.72 0.81 0.23 0.39 0.62 0.76 0.33 0.61 1.05 1.15 0.31 0.68 1.01 1.09
QuaterNet GRU abs./vel. [quaternet] 0.21 0.34 0.56 0.62 0.20 0.35 0.58 0.70 0.25 0.47 0.93 0.90 0.26 0.60 0.85 0.93
QuaterNet CNN abs./vel. [quaternet] 0.25 0.40 0.62 0.70 0.22 0.36 0.58 0.71 0.26 0.49 0.94 0.90 0.30 0.66 0.93 1.00
DCT-GCN short-term [dct_gcn] 0.18 0.31 0.49 0.56 0.16 0.29 0.50 0.62 0.22 0.41 0.86 0.80 0.20 0.51 0.77 0.85
DMGNN [li_multiscaleGCN2020] 0.18 0.31 0.49 0.58 0.17 0.30 0.49 0.59 0.21 0.39 0.81 0.77 0.26 0.65 0.92 0.99
Ours abs./vel. 0.23 0.37 0.61 0.69 0.18 0.31 0.54 0.66 0.23 0.46 0.93 0.90 0.31 0.70 0.97 1.07
Ours vel./vel. 0.19 0.34 0.57 0.63 0.16 0.29 0.50 0.60 0.22 0.41 0.85 0.81 0.22 0.57 0.84 0.93
Table 1: Quantitative results on action forecasting under the standard protocol of [fragkiadaki2015]. We further specify the input and output mode of some models (input/output), where either velocities (vel.) or absolute angles (abs.) are used.
Directions Greeting Phoning Posing
Milliseconds 80 160 320 400 80 160 320 400 80 160 320 400 80 160 320 400
GRU sup. 0.26 0.47 0.72 0.84 0.75 1.17 1.74 1.83 0.23 0.43 0.69 0.82 0.36 0.71 1.22 1.48
DCT-GCN st 0.26 0.45 0.71 0.79 0.36 0.60 0.95 1.13 0.53 1.02 1.35 1.48 0.19 0.44 1.01 1.24
DMGNN 0.25 0.44 0.65 0.71 0.36 0.61 0.94 1.12 0.52 0.97 1.29 1.43 0.20 0.46 1.06 1.34
Ours abs./vel. 0.32 0.47 0.68 0.80 0.42 0.72 1.14 1.36 0.54 1.00 1.34 1.47 0.27 0.55 1.05 1.27
Ours vel./vel. 0.24 0.43 0.77 0.81 0.35 0.61 1.01 1.20 0.53 1.00 1.28 1.40 0.26 0.51 1.08 1.32
Purchases Sitting Sittingdown Takingphoto
Milliseconds 80 160 320 400 80 160 320 400 80 160 320 400 80 160 320 400
GRU sup. 0.51 0.97 1.07 1.16 0.41 1.05 1.49 1.63 0.39 0.81 1.40 1.62 0.24 0.51 0.90 1.05
DCT-GCN st 0.43 0.65 1.05 1.13 0.29 0.45 0.80 0.97 0.30 0.61 0.90 1.00 0.14 0.34 0.58 0.70
DMGNN 0.41 0.61 1.05 1.14 0.26 0.42 0.76 0.97 0.32 0.65 0.93 1.05 0.15 0.34 0.58 0.71
Ours abs./vel. 0.56 0.75 1.03 1.15 0.31 0.50 0.91 1.12 0.33 0.65 0.96 1.09 0.19 0.42 0.73 0.93
Ours vel./vel. 0.42 0.61 1.08 1.15 0.30 0.49 0.90 1.09 0.29 0.65 0.97 1.08 0.15 0.34 0.58 0.72
Waiting Walkingdog Walkingtogether Average
Milliseconds 80 160 320 400 80 160 320 400 80 160 320 400 80 160 320 400
GRU sup. 0.28 0.53 1.02 1.14 0.56 0.91 1.26 1.40 0.31 0.58 0.87 0.91 0.36 0.67 1.02 1.15
DCT-GCN st 0.23 0.50 0.91 1.14 0.46 0.79 1.12 1.29 0.15 0.34 0.52 0.57 0.27 0.51 0.83 0.95
DMGNN 0.22 0.49 0.88 1.10 0.42 0.72 1.16 1.34 0.15 0.33 0.50 0.57 0.27 0.52 0.83 0.95
Ours abs./vel. 0.25 0.51 0.93 1.10 0.46 0.79 1.16 1.32 0.17 0.37 0.58 0.65 0.32 0.57 0.90 1.04
Ours vel./vel. 0.21 0.51 0.97 1.17 0.43 0.78 1.10 1.24 0.15 0.32 0.50 0.54 0.27 0.52 0.87 0.98
Table 2: Quantitative results for the remaining actions; Continuation of table 1.
Figure 3: Ground truth (dotted lines) and prediction (solid lines) for four different joints (columns) and four different actions (rows) of the second trial of subject 5. Each plot shows 32 seed frames (1.28 s) and 32 target frames.

Fig. 3 visualizes predicted and ground-truth trajectories of individual quaternion dimensions. Overall, the predictions are very similar to the ground truth, but finer details are smoothed out. This is clearly visible in the trajectory of the right knee joint during the walkingdog action. However, the model is still able to correctly predict the phase timing and thus capture major movement features even in the long-term future (1.28 s).

4 Conclusions

We combine the well-established causal dilated convolutions from WaveNet with geometric deep learning principles, resulting in a lightweight autoregressive model. In terms of trainable parameters, our model is an order of magnitude less complex than current models (cf. Table 1). Nonetheless, it shows competitive results compared to current approaches when evaluated on the H3.6M dataset for skeletal human motion forecasting. Furthermore, we show that the model is able to predict the phase timing even in a long-term forecasting setup.

References