AGVNet: Attention Guided Velocity Learning for 3D Human Motion Prediction

05/25/2020
by   Xiaoli Liu, et al.

Human motion prediction plays a vital role in human-robot interaction, with applications such as family service robots. Most existing works do not explicitly model the velocities of skeletal motion, which carry rich motion dynamics and are critical for predicting future poses. In this paper, we propose a novel feedforward network, AGVNet (Attention Guided Velocity Learning Network), that explicitly models velocities at both the Encoder and the Decoder. Specifically, a novel two-stream Encoder encodes the skeletal motion in both velocity space and position space. A new feedforward Decoder then predicts future velocities instead of poses, which enables the network to predict multiple future velocities recursively, like an RNN-based Decoder. Finally, a novel loss, ATPL (Attention Temporal Prediction Loss), pays more attention to the early predictions, which efficiently guides the recursive model toward more accurate predictions. Extensive experiments show that our method achieves state-of-the-art performance on two benchmark datasets (Human3.6M and 3DPW) for human motion prediction, which demonstrates the effectiveness of our approach. The code will be released if the paper is accepted.

I Introduction

Humans recognize and interact with the real world by relying on their ability to predict how their surroundings change over time [13]. For example, if we observe that a walking person is losing her balance, we may anticipate that she will fall in the near future and be ready to help her avoid danger immediately. Similarly, intelligent robots that perceive and interact with moving people must be able to predict the future dynamics of human motion. In this paper, as shown in Figure 1, we focus on the problem of human motion prediction from 3D joint position data, which aims to predict the future human motion sequence based on the observed motion sequence.

Fig. 1: Human motion prediction. In the top row, the left part shows the observed poses and the right part shows the predicted poses. The remaining plots show the joint trajectories of the sequence along each axis (i.e. x, y and z).

The key to human motion prediction is to model the motion dynamics of the human body. The velocities of moving joints carry rich motion dynamics, which can boost the performance of human motion prediction [21, 2]. However, most existing works ignore the velocity information hidden in a sequence of poses and model the motion dynamics of the human motion sequence only in pose space [4, 12, 19, 5]. Recently, some works have considered the velocity information of moving poses [21, 2, 16, 25, 15]. Most of them model the velocities of the future poses only implicitly, through a residual connection between the input and the output of their Decoder, and ignore the velocities of the previous poses in their Encoder [21, 16, 25, 15]. To better capture the motion dynamics, we explicitly model the velocities of the human motion sequence at both the Encoder and the Decoder.

Recursively predicting multiple future poses by feeding the current output back as the input of the next prediction is an efficient way to exploit the latest prediction for long-term forecasting [21, 14, 6]. Most sequence-to-sequence models recursively predict multiple future human poses based entirely on the recurrent unit of RNNs [5, 6, 7]. Although RNNs have shown their temporal modeling power in many tasks such as natural language processing and machine translation, they fail to capture the spatial correlations among joints of the human body. Therefore, some works address this issue with feedforward neural networks [1, 14, 8]. For example, Li et al. [14] proposed a CNN (Convolutional Neural Network) based sequence-to-sequence model to predict multiple future poses recursively. Following this strategy, we propose a new feedforward Decoder that models both the spatial and temporal structure of future poses, which enables the network to predict multiple future poses recursively.

Most prediction models are optimized simply with the $\ell_2$ loss [21, 14, 4] or MPJPE (Mean Per Joint Position Error) [20, 11], which measure the errors between the target and predicted poses and pay equal attention to all future time-steps. Naturally, the prediction errors at early time-steps are smaller than those at later time-steps, so these models implicitly pay more attention to the later predictions, whose loss is larger. This makes it inherently difficult to achieve accurate predictions, especially for recursive prediction models: an early prediction error propagates to the later time-steps, so the model easily suffers from error accumulation. To address this problem, we propose to pay more attention to the predictions at early time-steps, which leads to more accurate predictions.

Moreover, existing works ignore the differences among the x, y and z coordinates of joints [21, 14, 20]. Take the motion sequence in Figure 1 as an example: its joint trajectories evolve differently along the different axes. Ignoring these differences between coordinates may therefore prevent a model from capturing the motion dynamics well; at the same time, the different axes still interact with each other. Considering both aspects, in this paper we treat the x, y and z coordinates of the joints separately at the early stage through different branches, and the branches share the same parameters to capture the correlations between the axes.

Our main contributions can be summarized as follows.

(1) A novel network, AGVNet, is proposed to forecast the future motion sequence, which explicitly models velocities at both the Encoder and the Decoder.

(2) A new two-stream Encoder is proposed that models motion features from both the positions and the velocities of the previous poses and carefully accounts for the differences among joint coordinates, and thus better encodes the dynamics of skeletal motion.

(3) A new CNN based Decoder is built, which enables the network to predict multiple future poses recursively, like an RNN-based Encoder-Decoder framework.

(4) A novel loss, ATPL, is proposed, which guides the network toward more accurate predictions by paying more attention to the early predicted poses and less attention to the later predictions.

II Related work

Human motion prediction with mocap vectors: in this line of work, human poses are represented by a group of joint angles parameterized by the exponential map [11, 23]. Most existing works are based on this mocap representation [14, 7, 8, 9]. The key to this problem is to model the temporal dependencies of human motion. Due to the effectiveness of RNNs (Recurrent Neural Networks) in short-term temporal modeling, many works address this problem with RNNs [4, 19, 5, 6]. Fragkiadaki et al. [4] proposed an Encoder-Recurrent-Decoder (ERD) model that places a nonlinear encoder and decoder before and after recurrent layers built with LSTMs to predict future mocap vectors recursively. Due to the error accumulation inherent in RNNs, these models may converge to the mean pose [5, 6, 26]. Moreover, human movements are constrained by the physical structure of the human body, and traditional RNN models fail to capture these physical constraints. Therefore, other RNN models incorporate skeletal representations such as the Lie algebra representation to capture the spatial correlations of the human body [6, 19]. Liu et al. [19] proposed HMR (Hierarchical Motion Recurrent) to anticipate the future motion sequence; it models the global and local motion contexts hierarchically with LSTMs and captures the anatomical constraints of the human body by representing skeletal frames with the Lie algebra representation.

Human motion prediction with 3D joint position data: in these works, human poses are represented as a set of joints with 3D coordinates. Few works address human motion prediction with 3D joint position data [1, 20, 9]. Butepage et al. [1] proposed to learn a generic representation from the input Cartesian skeleton data and predicted future 3D poses with feedforward neural networks. Mao et al. [20] proposed a novel model to predict the future motion sequence from position data; they model the temporal trajectory information of the joints with the DCT (Discrete Cosine Transform) and capture the spatial structure of the human body by representing the joints as a graph processed with a GCN (Graph Convolutional Network). In [20], the authors showed that 3D joint position data describe the human pose better and do not suffer from the ambiguity of mocap vectors, where two different vectors can represent the same pose. Therefore, in this paper we focus on human motion prediction with 3D joint position data.

Modeling pose velocities in human motion prediction: most related works model the motion velocity of future poses only implicitly, through a residual connection [21, 7, 8, 6]. For example, Gui et al. [6, 7] and Martinez et al. [21] introduced a residual connection between the input and the output of their GRU-based Decoder to model the velocities of future poses. Few works model the velocity information of human motion at both the Encoder and the Decoder [2, 16, 15]. Chiu et al. [2] proposed an RNN-based model, TP-RNN (triangular-prism RNN), to predict the velocities of future poses from the velocities of previous poses. This model is built entirely on LSTMs, which fail to capture the spatial correlations among joints of the human body.

III Methodology

III-A Problem formulation

As shown in Figure 1, a human motion sequence can be represented by a sequence of 3D joint coordinates of the human body. Our goal is to predict the 3D joint coordinates of the future motion sequence given those of the observed motion sequence. In this paper, we first predict the velocities of the future poses as an intermediate result and then restore the final poses from these velocities, rather than directly predicting the future poses as done in many previous works.

Given an input human motion sequence $X_{1:T}=\{p_1, p_2, \dots, p_T\}$ of length $T$, let $P_{T+1:T+T'}=\{p_{T+1}, \dots, p_{T+T'}\}$ denote its future motion sequence of length $T'$ and $V_{T+1:T+T'}=\{v_{T+1}, \dots, v_{T+T'}\}$ the corresponding velocities of the future poses. Here, $p_t$ denotes the pose of the sequence at time-step $t$, and $v_{T+t}$ denotes the velocity of the future pose at time-step $T+t$. The process of human motion prediction can then be written as $X_{1:T} \rightarrow V_{T+1:T+T'} \rightarrow P_{T+1:T+T'}$, which is divided into two stages. The first stage predicts the velocities $V_{T+1:T+T'}$, and the second stage generates the final poses $P_{T+1:T+T'}$: each future pose is restored from the previous pose and the predicted velocity as $\hat{p}_{T+t} = \hat{p}_{T+t-1} + \hat{v}_{T+t}$ (with $\hat{p}_T = p_T$).
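To make the two-stage formulation concrete, the following is a minimal sketch (not the authors' code; the function name and array layout are assumptions of this illustration) of how future poses can be restored from the last observed pose and a sequence of predicted velocities:

```python
import numpy as np

def restore_poses(last_observed_pose, predicted_velocities):
    """Recover future poses from predicted velocities.

    last_observed_pose:    (J, 3) array, the pose p_T at the last observed step.
    predicted_velocities:  (T_future, J, 3) array of predicted velocities.
    Returns a (T_future, J, 3) array of predicted poses obtained by applying
    p_{T+t} = p_{T+t-1} + v_{T+t} recursively (i.e. a cumulative sum).
    """
    return last_observed_pose[None] + np.cumsum(predicted_velocities, axis=0)
```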

III-B Skeletal Representation

Recent works have shown that explicitly modeling the temporal information of human motion can enhance the final performance of the network [17, 22]. Therefore, to better capture the human motion, we introduce a skeletal representation that models the skeletal motion both implicitly and explicitly by representing the human motion sequence in position space and velocity space, respectively.

Take the input sequence $X_{1:T}$ defined in Section III-A as an example, where $p_t = \{p_{t,1}, \dots, p_{t,J}\}$, $p_{t,j}$ denotes the $j$-th joint of the pose at time step $t$, and $J$ is the number of joints. We represent the human motion sequence as six 2D tensors to conveniently model the differences between the joint trajectories along the x, y and z coordinates: $X^x$, $X^y$, $X^z$ in position space and $V^x$, $V^y$, $V^z$ in velocity space. Moreover, the human body can be naturally divided into five parts, namely two arms, two legs and the trunk [3, 18]. Therefore, as shown in the bottom right of Figure 2, joints of the same part are arranged in adjacent positions in our representation to capture the local characteristics of the human body.

Fig. 2: Skeletal representation. The top row shows a human motion sequence, the bottom left shows our skeletal representation, and the bottom right shows the human body, where the numbers in the circles denote the arrangement order of the joints in our skeletal representation.

In position space, $X^x$, $X^y$ and $X^z$ denote the representations of the input sequence along the x, y and z axes, respectively, which can be defined as equation 1:

$X^d = \big[p^d_{t,j}\big]_{1 \le t \le T,\ 1 \le j \le J}$   (1)

where $d$ denotes the coordinate axis of the joints, i.e. $d \in \{x, y, z\}$, and $p^d_{t,j}$ denotes the $j$-th joint of the pose at time step $t$ along axis $d$.

In velocity space, $V^x$, $V^y$ and $V^z$ denote the representations of the input sequence along the x, y and z axes, respectively. The velocity between two consecutive poses is defined as equation 2, and $V^x$, $V^y$ or $V^z$ is defined as equation 3, analogously to the position space:

$v_t = p_t - p_{t-1}, \quad 2 \le t \le T$   (2)
$V^d = \big[v^d_{t,j}\big]_{2 \le t \le T,\ 1 \le j \le J}$   (3)

where $d$ denotes the coordinate axis of the joints, i.e. $d \in \{x, y, z\}$.
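As an illustration of this representation, here is a small sketch (assuming the input is a (T, J, 3) array whose joints are already ordered by body part as in Fig. 2; the function and variable names are hypothetical) that builds the six per-axis maps in position and velocity space:

```python
import numpy as np

def skeletal_representation(sequence):
    """Build the six 2D maps X^x, X^y, X^z and V^x, V^y, V^z.

    sequence: (T, J, 3) array of 3D joint positions.
    Returns two dicts keyed by axis name: position maps of shape (T, J)
    and velocity maps of shape (T-1, J), with v_t = p_t - p_{t-1} (eq. 2).
    """
    velocities = sequence[1:] - sequence[:-1]
    position_maps = {axis: sequence[:, :, i] for i, axis in enumerate("xyz")}
    velocity_maps = {axis: velocities[:, :, i] for i, axis in enumerate("xyz")}
    return position_maps, velocity_maps
```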

III-C Architecture of AGVNet

The overall architecture of AGVNet is shown in Figure 3 and mainly includes three parts: the Encoder, the Decoder and the loss. The Encoder encodes the skeletal motion of the previous poses, and the Decoder decodes the velocities of the future poses. We describe three aspects in the following: encoding skeletal motion, decoding future velocities, and our loss.

Fig. 3: Overall architecture of AGVNet, where DCCM denotes our proposed backbone layer described in the following section, and (Conv, ·, $c$) denotes a convolutional layer with $c$ output channels.

III-D Encode skeletal motion

Backbone layer: inspired by [10], as shown in Figure 4, we propose a new backbone, the Densely Connected Convolutional Module (DCCM), to maximize the information flow from layer to layer; it mainly consists of convolutional layers. At each layer, the input receives enhanced features obtained by fusing the concatenated feature maps of all preceding layers with a $1\times1$ convolution. The point-level features are learned by the $1\times1$ convolutions of these dense connections, which allows the network to gradually enhance the features of deep layers with the point-level features of shallow layers. This can be formulated as equation 4.

Fig. 4: Densely Connected Convolutional Module (DCCM).
$x_l = H_l\big(F([x_0, x_1, \dots, x_{l-1}])\big)$   (4)

where $x_l$ ($l \ge 1$) denotes the output feature map of the $l$-th layer ($x_0$ is the input), $F(\cdot)$ denotes the fusion layer built with channel-wise concatenation followed by a $1\times1$ convolution, and $H_l(\cdot)$ denotes a convolutional layer followed by an activation function (i.e. Leaky ReLU).
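A minimal TensorFlow/Keras sketch of a DCCM-style block is given below. It follows eq. (4) and the DenseNet-style fusion described above, but the number of layers, kernel sizes and channel widths are illustrative assumptions rather than the paper's exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers

class DCCM(tf.keras.layers.Layer):
    """Densely Connected Convolutional Module (sketch of eq. (4))."""

    def __init__(self, num_layers=3, channels=64, **kwargs):
        super().__init__(**kwargs)
        # F: 1x1 fusion convolutions applied to the concatenated feature maps.
        self.fuse = [layers.Conv2D(channels, 1) for _ in range(num_layers)]
        # H: convolution followed by Leaky ReLU.
        self.conv = [layers.Conv2D(channels, 3, padding="same") for _ in range(num_layers)]
        self.act = layers.LeakyReLU()

    def call(self, x):
        features = [x]                                   # x_0 is the input
        for fuse, conv in zip(self.fuse, self.conv):
            fused = fuse(tf.concat(features, axis=-1))   # F([x_0, ..., x_{l-1}])
            features.append(self.act(conv(fused)))       # x_l = H(F(...))
        return features[-1]
```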

Encode skeletal motion: based on the skeletal representation described above, as shown on the left of Figure 3, a novel two-stream Encoder is built with DCCMs to encode the skeletal motion of the input sequence; it mainly consists of a pose branch, a velocity branch and a fusion module.

In the pose branch, $X^x$, $X^y$ and $X^z$ are fed into separate sub-branches, which lets the network focus on the joint trajectory information along each axis and thus model the differences between the x, y and z coordinates. Moreover, the sub-branches of the pose branch share weights, which reduces the model complexity and also captures the correlations among the x, y and z coordinates. Finally, the feature maps of the sub-branches are concatenated along the channel dimension and another DCCM is applied to fuse them.

In the velocity branch, similarly, $V^x$, $V^y$ and $V^z$ are fed into separate sub-branches to capture the spatial and temporal information in velocity space. The pose branch and the velocity branch also share parameters, which reduces the model complexity and potentially improves the final performance.

Finally, a fusion module fuses the motion dynamics features captured by the pose branch and the velocity branch; it is built with concatenation along the channel dimension followed by a convolutional layer and Leaky ReLU.
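The weight sharing across the axis sub-branches and across the two streams can be expressed by reusing the same module instance, as in the following sketch built on the DCCM class above (a simplification under assumed shapes: the velocity maps are taken to be zero-padded at the first time-step so that both streams share the same temporal length, and the layer sizes are illustrative):

```python
class TwoStreamEncoder(tf.keras.layers.Layer):
    """Sketch of the two-stream Encoder built on the DCCM block above."""

    def __init__(self, channels=64, **kwargs):
        super().__init__(**kwargs)
        self.branch = DCCM(channels=channels)      # shared by x/y/z and by both streams
        self.fuse_pose = DCCM(channels=channels)   # fuses the three pose sub-branches
        self.fuse_vel = DCCM(channels=channels)    # fuses the three velocity sub-branches
        self.fuse_all = tf.keras.Sequential(
            [layers.Conv2D(channels, 3, padding="same"), layers.LeakyReLU()])

    def call(self, pos_maps, vel_maps):
        # pos_maps / vel_maps: lists of three (batch, T, J, 1) tensors for x, y, z.
        pose_feat = self.fuse_pose(tf.concat([self.branch(m) for m in pos_maps], axis=-1))
        vel_feat = self.fuse_vel(tf.concat([self.branch(m) for m in vel_maps], axis=-1))
        # Final fusion of the two streams along the channel dimension.
        return self.fuse_all(tf.concat([pose_feat, vel_feat], axis=-1))
```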

III-E Decode future velocities

RNN models predict the future pose from the previous state and the current pose [21, 7]. Motivated by this, as shown on the right of Figure 3, a new Decoder built with convolutional layers is proposed to decode the future velocities recursively by conditioning on the history features of previous time-steps. We assume that the previous hidden state needs more operations to extract its spatial and temporal features. Therefore, we first apply two convolutional layers to extract the spatio-temporal features of the previous poses and one convolutional layer to extract the spatial representation of the velocity at the current time-step, and then add them element-wise. Finally, another convolutional layer and an FC layer restore the spatial information of the future velocity at the next time-step. This process can be formulated as equation 5.

$\hat{v}_{t+1} = f(h_{t-1}, \hat{v}_t)$   (5)

where $f$ denotes the mapping learned by our Decoder shown in Figure 3, $\hat{v}_t$ denotes the predicted future velocity at the $t$-th time-step, and $h_{t-1}$ denotes the predicted hidden representation of the previous poses at the $(t-1)$-th time-step.
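The recursive decoding loop can be sketched as follows. This is only an illustration of the recursion in eq. (5): the callable `decoder_step` is a placeholder for the convolutional mapping of Fig. 3 and is assumed here to return both the next velocity and an updated hidden representation, which is not necessarily the paper's exact formulation.

```python
def decode_velocities(initial_hidden, first_velocity, num_steps, decoder_step):
    """Recursively predict future velocities.

    initial_hidden:  hidden features of the observed poses from the Encoder.
    first_velocity:  velocity used to seed the recursion (e.g. the last observed one).
    decoder_step:    callable (hidden, velocity) -> (next_velocity, next_hidden),
                     playing the role of the mapping f in eq. (5).
    """
    velocities, hidden, velocity = [], initial_hidden, first_velocity
    for _ in range(num_steps):
        velocity, hidden = decoder_step(hidden, velocity)  # v_{t+1}, h_t = f(h_{t-1}, v_t)
        velocities.append(velocity)
    return velocities
```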

III-F Loss

To explicitly model the velocities of the future poses and achieve more accurate predictions, as shown in Figure 3, our final loss consists of two parts, a velocity loss and a position loss, and can be formulated as $\mathcal{L} = \lambda_v \mathcal{L}_v + \lambda_p \mathcal{L}_p$, where $\lambda_v$ and $\lambda_p$ are two hyperparameters that balance the losses on the velocities and the positions of the future poses. (1) The velocity loss $\mathcal{L}_v$ guides the network to decode the velocities of the future motion sequence; (2) the position loss $\mathcal{L}_p$ encourages the network to restore the spatial information of the future poses. In all formulations, $J$ denotes the number of joints.

In a recursive model, the current prediction is vulnerable to the predictions of previous time-steps. To address this problem, we propose an attention temporal prediction loss (ATPL) that guides the network toward more accurate predictions at early time-steps by assigning larger attention weights to them, which can be defined as equation 6.

$\mathcal{L}_{\mathrm{ATPL}} = \sum_{t=1}^{T'} w_t \, \frac{1}{J}\sum_{j=1}^{J} \big\| \hat{p}_{t,j} - p_{t,j} \big\|_2$   (6)

where $p_{t,j}$ and $\hat{p}_{t,j}$ denote the ground-truth joint and the predicted joint, respectively, and $w_t$ denotes the attention weight at the $t$-th time-step, which decreases with $t$ so that earlier predictions receive larger weights.

The velocity loss $\mathcal{L}_v$ and the position loss $\mathcal{L}_p$ are both calculated with equation 6, representing the ATPL in velocity space and in position space, respectively.
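For illustration, a small sketch of a weighted loss of this form is given below. The exact weight schedule $w_t$ is not reproduced here; the only property assumed, following the ablation discussion, is that the weights decrease over time.

```python
import numpy as np

def atpl(pred, target, weights):
    """Attention Temporal Prediction Loss (sketch of eq. (6)).

    pred, target: (T_future, J, 3) arrays of predicted / ground-truth joints
                  (poses for the position loss, velocities for the velocity loss).
    weights:      (T_future,) attention weights, larger for earlier time-steps.
    """
    per_step_error = np.linalg.norm(pred - target, axis=-1).mean(axis=-1)  # mean joint error per step
    return float(np.sum(weights * per_step_error))
```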

IV Experiments

We evaluate our model on two challenging datasets, Human3.6M (H3.6M) [11] and the 3D Poses in the Wild dataset (3DPW) [24]. In the following, we first introduce these datasets and the implementation details. Then we compare our method with the state of the art and report results both quantitatively and qualitatively. Finally, we conduct ablation experiments to analyze the effectiveness of several components of AGVNet.

IV-A Datasets and Implementation Details

Datasets: (1) H3.6M [11]: H3.6M is the largest dataset for human motion prediction. It consists of 15 actions performed by seven professional actors, such as walking, eating, smoking and discussion; each pose is represented by the 3D coordinates of the body joints. (2) 3DPW [24]: 3DPW is an in-the-wild dataset with accurate 3D poses of various activities such as shopping and doing sports. It includes 60 sequences with more than 51k frames; each pose is again represented by the 3D coordinates of the body joints.

Implementation Details: consistent with the baselines [21, 14, 20], we adopt the same training, test and validation splits for all datasets. In the experiments, we evaluate our model on 3D coordinate data, and all experimental settings and data processing are consistent with the baselines. Our model is implemented in TensorFlow. The MPJPE (Mean Per Joint Position Error) proposed in [11], measured in millimeters, is used as the metric to evaluate the performance of our method. All models are trained with the Adam optimizer.
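The evaluation metric is the standard MPJPE; a reference implementation is only a few lines (this sketch assumes predictions and ground truth are given in millimeters as (T, J, 3) arrays; the function name is ours):

```python
import numpy as np

def mpjpe(pred, target):
    """Mean Per Joint Position Error: the Euclidean distance between predicted
    and ground-truth joints, averaged over all joints and frames."""
    return float(np.linalg.norm(pred - target, axis=-1).mean())
```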

IV-B Comparison with the state of the art

Baselines: (1) RGRU [21] is a classical model for human motion prediction built with GRUs; it uses residual connections to implicitly predict the velocities of the future poses. (2) CS2S [14] is a feedforward model based on CNNs that predicts multiple future poses recursively. (3) DTraj [20] is currently the state-of-the-art method for human motion prediction from position data, built with the DCT and a GCN. For a fair comparison, the 3D errors of [21] and [14] used in this paper are those reported in [20].

ms Walking Eating Smoking Discussion
80 160 320 400 80 160 320 400 80 160 320 400 80 160 320 400
RGRU[21] 23.8 40.4 62.9 70.9 17.6 34.7 71.9 87.7 19.7 36.6 61.8 73.9 31.7 61.3 96.0 103.5
CS2S[14] 17.1 31.2 53.8 61.5 13.7 25.9 52.5 63.3 11.1 21.0 33.4 38.3 18.9 39.3 67.7 75.7
DTraj[20] 8.9 15.7 29.2 33.4 8.8 18.9 39.4 47.2 7.8 14.9 25.3 28.7 9.8 22.1 39.6 44.1
Ours 7.9 15.4 28.2 33.7 7.7 17.0 37.0 45.4 6.2 12.7 23.4 28.2 7.7 19.3 38.3 45.5
ms Directions Greeting Phoning Posing
80 160 320 400 80 160 320 400 80 160 320 400 80 160 320 400
RGRU[21] 36.5 56.4 81.5 97.3 37.9 74.1 139.0 158.8 25.6 44.4 74.0 84.2 27.9 54.7 131.3 160.8
CS2S[14] 22.0 37.2 59.6 73.4 24.5 46.2 90.0 103.1 17.2 29.7 53.4 61.3 16.1 35.6 86.2 105.6
DTraj[20] 12.6 24.4 48.2 58.4 14.5 30.5 74.2 89.0 11.5 20.2 37.9 43.2 9.4 23.9 66.2 82.9
Ours 9.9 22.3 47.1 58.7 12.6 26.3 66.1 82.6 10.4 19.1 35.8 43.0 7.5 23.2 67.8 83.6
ms Purchases Sitting SittingDown TakingPhoto
80 160 320 400 80 160 320 400 80 160 320 400 80 160 320 400
RGRU[21] 40.8 71.8 104.2 109.8 34.5 69.9 126.3 141.6 28.6 55.3 101.6 118.9 23.6 47.4 94.0 112.7
CS2S[14] 29.4 54.9 82.2 93.0 19.8 42.4 77.0 88.4 17.1 34.9 66.3 77.7 14.0 27.2 53.8 66.2
DTraj[20] 19.6 38.5 64.4 72.2 10.7 24.6 50.6 62.0 11.4 27.6 56.4 67.6 6.8 15.2 38.2 49.6
Ours 16.9 36.8 63.1 75.9 9.1 21.5 47.0 61.4 10.1 25.0 49.7 60.2 5.9 15.0 39.0 50.0
ms Waiting WalkingDog WalkingTogether Average
80 160 320 400 80 160 320 400 80 160 320 400 80 160 320 400
RGRU[21] 29.5 60.5 119.9 140.6 60.5 101.9 160.8 188.3 23.5 45.0 71.3 82.8 30.8 57.0 99.8 115.5
CS2S[14] 17.9 36.5 74.9 90.7 40.6 74.7 116.6 138.7 15.0 29.9 54.3 65.8 19.6 37.8 68.1 80.2
DTraj[20] 9.5 22.0 57.5 73.9 32.2 58.0 102.2 122.7 8.9 18.4 35.3 44.3 12.1 25.0 51.0 61.3
Ours 7.6 20.6 52.6 66.9 22.6 52.9 90.6 111.1 7.0 16.0 32.5 42.3 9.9 22.9 47.9 59.2
TABLE I: Short-term prediction on H3.6M (3D errors in mm), where “ms” denotes “milliseconds”.

Results on H3.6M: Table I reports the results for short-term prediction on H3.6M. Our method outperforms all baselines on average at all time-steps, which shows the effectiveness of our model. Specifically, compared with the RNN baseline [21], the errors of our method decrease significantly. A possible reason is that our method explicitly models the velocity information of the human motion sequence, while [21] only models the velocities of the future poses through a residual connection in its Decoder and, using GRU cells, ignores part of the spatial relations among joints of the human body. Compared with the other feedforward baselines [14, 20], our model achieves the best results in most cases. This benefits from two aspects: (1) our method models the velocities of human motion at both the Encoder and the Decoder, while the baselines [14, 20] ignore the velocity information, so our method can better capture the motion dynamics; (2) our model predicts multiple future poses recursively, while [20] predicts the future poses in a non-recursive manner, so our model can make full use of the latest predictions when predicting later poses, while [20] cannot.

Milliseconds Walking Eating Smoking Discussion Directions Greeting
560 1000 560 1000 560 1000 560 1000 560 1000 560 1000
RGRU[21] 73.8 86.7 101.3 119.7 85.0 118.5 120.7 147.6
CS2S[14] 59.2 71.3 66.5 85.4 42.0 67.9 84.1 116.9
DTraj[20] 42.2 51.3 56.5 68.6 32.3 60.5 70.4 103.5 85.8 109.3 91.8 87.4
Ours 39.1 49.8 55.2 73.7 34.2 59.0 73.6 91.1 79.7 101.5 97.0 91.7
Milliseconds Phoning Posing Purchases Sitting SittingDown TakingPhoto
560 1000 560 1000 560 1000 560 1000 560 1000 560 1000
RGRU[21]
CS2S[14]
DTraj[20] 65.0 113.6 113.4 220.6 94.3 130.4 79.6 114.9 82.6 140.1 68.9 87.1
Ours 63.3 115.2 105.6 202.2 96.5 131.5 84.1 116.2 76.9 132.4 68.3 81.6
Milliseconds Waiting WalkingDog WalkingTogether Average
560 1000 560 1000 560 1000 560 1000
RGRU[21]
CS2S[14]
DTraj[20] 100.9 167.6 136.6 174.3 57.0 85.0 78.5 114.3
Ours 84.9 157.0 138.4 180.5 52.3 90.7 76.6 111.6
TABLE II: Long-term prediction on H3.6M (3D errors in mm). The 3D errors of the “RGRU” and “CS2S” models are those reported in [20]; the 3D errors of [20] are reproduced with the publicly available pretrained model.

Table II reports the results for long-term prediction on H3.6M. Similarly, our method achieves the best average performance at both 560ms and 1000ms, which further shows the effectiveness of our proposed method.

To further illustrate the performance of our method, frame-wise results are evaluated qualitatively in Figure 5. Compared with [20], our method achieves better visual quality for both short-term and long-term prediction, which again demonstrates its effectiveness. As shown in Figure 5, our predictions are clearly closer to the ground truth than those of the baseline [20]. The main reason is that our method explicitly models the velocities of human motion at both the Encoder and the Decoder, while [20] ignores the velocities of human poses, so our model captures the motion dynamics better and achieves better results. More visualization results can be found in the supplementary material.

(a) Waiting
(b) Walking together
(c) Taking photo
Fig. 5: Visualization of frame-wise performance on H3.6M. For each group of sequences, from top to bottom, we show the results of DTraj [20] and the results of our proposed method, where the black poses denote the ground truth and the blue and red poses denote the predicted poses.
Milliseconds 200 400 600 800 1000
RGRU[21] 113.9 173.1 191.9 201.1 210.7
CS2S[14] 71.6 124.9 155.4 174.7 187.5
DTraj[20] 35.6 67.8 90.6 106.9 117.8
Ours 31.4 64.9 - - -
TABLE III: Short- and long-term prediction on 3DPW (3D errors in mm).

Results on 3DPW: Table III reports the results for short-term and long-term prediction on 3DPW. In general, our conclusion remains unchanged: our method consistently outperforms the baselines at all time-steps for both short-term and long-term prediction, which further verifies the effectiveness of our proposed method.

IV-C Ablation analysis

#   Encoder (pb, vb, sw)   Loss ($\mathcal{L}_p$, $\mathcal{L}_v$, ATPL)   80ms   160ms   320ms   400ms   Average
1   10.3  23.4  48.9  59.8  35.6
2   10.1  24.0  50.7  62.0  36.7
3   10.6  24.3  50.3  61.0  36.6
4   10.1  23.5  50.1  61.2  36.2
5   10.0  23.2  48.3  59.4  35.2
6   10.2  23.5  50.2  61.2  36.3
7   10.5  23.7  48.9  59.1  35.6
8   9.9   22.9  47.9  59.2  35.0

TABLE IV: Ablation results on H3.6M (3D errors in mm), where “ms” denotes “milliseconds”, “pb” denotes the pose branch, “vb” denotes the velocity branch, “sw” denotes that the pose branch and velocity branch share parameters, “$\mathcal{L}_p$” denotes the position loss, “$\mathcal{L}_v$” denotes the velocity loss, and “ATPL” denotes the Attention Temporal Prediction Loss. One further configuration column indicates whether the difference among the x, y and z coordinates of joints is considered.

In this section, we conduct ablation experiments to show the effectiveness of several components of AGVNet; the results are reported in Table IV. Comparing the configuration that ignores the difference among the x, y and z coordinates of joints with its counterpart that models it, the errors of the former increase at all time-steps, which shows that modeling this difference captures the motion dynamics better and improves the final performance. The experiments with only the pose branch, only the velocity branch, and both branches show that combining the two branches achieves better results, which confirms the effectiveness of our two-stream Encoder. The configuration with shared weights between the pose branch and the velocity branch outperforms its unshared counterpart, which shows that weight sharing further improves the final performance; this is consistent with common two-stream methods. Removing the position loss or the velocity loss leads to worse results; in particular, the errors without the velocity loss increase significantly, especially at the later time-steps. The main reason is that $\mathcal{L}_v$ guides the network to explicitly model the velocities of the future poses, so the network captures the motion dynamics well and obtains better results, especially at later time-steps. Finally, training with ATPL achieves lower errors on average, especially for the early predictions: the decremental weight design of ATPL guides the network to predict more accurate results at early time-steps, which further enhances the overall performance of our recursive model. In summary, all components contribute positively to the final network, and combining all of them achieves the best performance.

V Conclusion

In this paper, we propose a novel end-to-end architecture, AGVNet, for human motion prediction. Our method first predicts the velocities of the future poses as an intermediate result rather than directly predicting their positions. Different from prior works, we explicitly model the velocities of the skeletal motion at both the Encoder and the Decoder, which better captures the motion dynamics of human motion. Moreover, we propose an attention temporal prediction loss (ATPL) for the recursive prediction model, which efficiently guides the network toward more accurate predictions, especially at the early time-steps. Finally, we evaluate our model on two challenging datasets and achieve state-of-the-art performance. The experiments also show that modeling the difference among the x, y and z coordinates improves the performance of human motion prediction.

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China (Grant No. 61673192), the Fundamental Research Funds for the Central Universities (No. 2019RC27), and the BUPT Excellent Ph.D. Students Foundation (CX2019111).

References

  • [1] J. Butepage, M. J. Black, D. Kragic, and H. Kjellstrom (2017) Deep representation learning for human motion prediction and classification. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [2] H. Chiu, E. Adeli, B. Wang, D. Huang, and J. C. Niebles (2019) Action-agnostic human pose forecasting. In IEEE Winter Conference on Applications of Computer Vision (WACV).
  • [3] Y. Du, W. Wang, and L. Wang (2015) Hierarchical recurrent neural network for skeleton based action recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1110–1118.
  • [4] K. Fragkiadaki, S. Levine, P. Felsen, and J. Malik (2015) Recurrent network models for human dynamics. In The IEEE International Conference on Computer Vision (ICCV).
  • [5] A. Gopalakrishnan, A. Mali, D. Kifer, C. L. Giles, and A. G. Ororbia (2019) A neural temporal model for human motion prediction. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [6] L. Gui, Y. Wang, X. Liang, and J. M. Moura (2018) Adversarial geometry-aware human motion prediction. In European Conference on Computer Vision (ECCV).
  • [7] L. Gui, Y. Wang, D. Ramanan, and J. M. Moura (2018) Few-shot human motion prediction via meta-learning. In European Conference on Computer Vision (ECCV).
  • [8] X. Guo and J. Choi (2019) Human motion prediction via learning local structure representations and temporal dependencies. In AAAI Conference on Artificial Intelligence (AAAI).
  • [9] A. Hernandez, J. Gall, and F. Moreno-Noguer (2019) Human motion prediction via spatio-temporal inpainting. In The IEEE International Conference on Computer Vision (ICCV), pp. 7134–7143.
  • [10] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4700–4708.
  • [11] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu (2014) Human3.6M: large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (7), pp. 1325–1339.
  • [12] A. Jain, A. R. Zamir, S. Savarese, and A. Saxena (2016) Structural-RNN: deep learning on spatio-temporal graphs. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [13] Y. Kong and Y. Fu (2018) Human action recognition and prediction: a survey. arXiv preprint arXiv:1806.11230.
  • [14] C. Li, Z. Zhang, W. S. Lee, and G. H. Lee (2018) Convolutional sequence to sequence model for human dynamics. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [15] M. Li, S. Chen, X. Chen, Y. Zhang, Y. Wang, and Q. Tian (2019) Symbiotic graph neural networks for 3D skeleton-based human action recognition and motion prediction. arXiv preprint arXiv:1910.02212.
  • [16] M. Li, S. Chen, Y. Zhao, Y. Zhang, Y. Wang, and Q. Tian (2020) Dynamic multiscale graph neural networks for 3D skeleton-based human motion prediction. arXiv preprint arXiv:2003.08802.
  • [17] C. Li, Q. Zhong, D. Xie, and S. Pu (2018) Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. arXiv preprint arXiv:1804.06055.
  • [18] X. Liu, J. Yin, H. Liu, and Y. Yin (2019) PISEP^2: pseudo image sequence evolution based 3D pose prediction. arXiv preprint arXiv:1909.01818.
  • [19] Z. Liu, S. Wu, S. Jin, Q. Liu, S. Lu, R. Zimmermann, and L. Cheng (2019) Towards natural and accurate future motion prediction of humans and animals. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [20] W. Mao, M. Liu, M. Salzmann, and H. Li (2019) Learning trajectory dependencies for human motion prediction. In The IEEE International Conference on Computer Vision (ICCV), pp. 9489–9497.
  • [21] J. Martinez, M. J. Black, and J. Romero (2017) On human motion prediction using recurrent neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [22] K. Simonyan and A. Zisserman (2014) Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems (NIPS), pp. 568–576.
  • [23] G. W. Taylor, G. E. Hinton, and S. T. Roweis (2007) Modeling human motion using binary latent variables. In Advances in Neural Information Processing Systems (NIPS), pp. 1345–1352.
  • [24] T. von Marcard, R. Henschel, M. J. Black, B. Rosenhahn, and G. Pons-Moll (2018) Recovering accurate 3D human pose in the wild using IMUs and a moving camera. In The European Conference on Computer Vision (ECCV), pp. 601–617.
  • [25] H. Wang and J. Feng (2019) VRED: a position-velocity recurrent encoder-decoder for human motion prediction. arXiv preprint arXiv:1906.06514.
  • [26] H. Wang and J. Feng (2019) VRED: a position-velocity recurrent encoder-decoder for human motion prediction. arXiv preprint arXiv:1906.06514.