1 Introduction
Human and animal motion prediction has become an important research field, given its role in facilitating a broad range of applications in sports, healthcare, education, security, and virtual and augmented reality, among others. Current techniques are mature enough to predict short-term motions from 3D skeleton collection devices such as Kinect. For example, a motion like hand posing, which contains a series of hand poses, can be captured by depth sensors, with the 3D skeleton data recording the trajectories of the body joints. Each pose consists of a fixed number of bones and joints and can be inferred from a video frame (in other words, each frame contains exactly one pose). The main focus of this paper is on long-term motions, where a motion is a collection of spatiotemporally related poses.
As illustrated in Fig 1, given a sequence of ground truth frames, many existing models, such as the Encoder-Recurrent-Decoder network (ERD) [5], 3-layer LSTM (LSTM-3LR) [5], Residual Gated Recurrent Unit (ResGRU) [18] and Hierarchical Motion Recurrent network (HMR) [15], can generate short-term motions, which are normally regarded as spanning less than 400 milliseconds (e.g. the first four frames in this example). However, the predicted poses become unrecognizable or motionless in long-term prediction. It is well known that modeling motions naturally requires characterizing the spatiotemporal dependencies among poses. That is to say, a long-term motion prediction model should capture the inherent structures associated with individual poses as well as their spatiotemporal dependencies.

Despite being a very challenging problem, modeling and predicting articulated object motions has attracted rapidly growing interest in recent years. Conventional approaches leverage expert knowledge about kinematics and utilize latent-variable models such as hidden Markov models [12, 11, 29], Gaussian processes [30, 26] and implicit probabilistic models [24] to characterize motion sequences. However, the motions and their spatiotemporal relations in these models need to be manually encoded, which is rather difficult to scale up and almost impossible in many practical scenarios where the spatiotemporal relations among poses are intricate. On the other hand, the most popular modeling paradigm might be that of deep neural networks, including techniques such as the recurrent neural network (RNN), long short-term memory (LSTM) and gated recurrent unit (GRU). While these neural network-based approaches are capable of managing temporal contexts, they have difficulty capturing long-term dependencies [15]. This is because these models rely on conventional recurrent units whose hidden state sequentially reads a frame and updates its value, so the state estimate is dominated by the inputs of the most recent time steps. In particular, they suffer from first-frame discontinuity, that is, a prominent jump between the last ground truth frame and the first predicted pose. In addition, current works mainly focus on temporal information and are rather limited in characterizing the rich fine-grained spatial relationships among joints. In fact, as these models mostly focus on coarse-grained (high-level) spatial information (e.g. taking all the joints of a pose as a whole), ignoring internal joint dependencies, only the spatial relations associated with the entire body can be sufficiently captured. As a result, the predicted human poses often converge to the mean (i.e. motionless) pose [32] or drift to unrecognizable (e.g. non-human-like) motions [18] in long-term prediction, as illustrated in Fig 1 (e.g. ERD, LSTM-3LR, ResGRU and HMR). Moreover, most of the existing approaches adopt the walking activity, which only repeats a fixed style of regular leg movements, to demonstrate their superiority in long-term prediction. However, we found that these models only perform well on such simple activities and not on others (e.g. eating), especially activities without any explicit discipline.

To address these issues in long-term motion prediction, we present a spatiotemporal hierarchical recurrent neural network to explicitly model the motion context of spatiotemporal relations and predict future motions. In particular, our approach considers a principled way of dealing with the inherent structural variability in long-term motions. Briefly speaking, to describe an articulated object, we propose to introduce a set of latent vector variables generated from the Lie algebra to represent several separate kinematic chains of body-part movements, as shown in Fig 2.
Each resulting vector from the Lie algebra-based representation contains its unique set of poses together with the corresponding spatial features and temporal information. To fully characterize a cluster of instances that possess similar motions and their spatiotemporal dependencies, a hierarchical recurrent network is devised to encode the spatial relationships along with the temporal relations. Specifically, in each recurrent layer, a unit variable that represents a bone in a frame is updated by exchanging information with other unit variables along both the spatial and temporal dependencies. A global spatial state and a global temporal state are also incorporated into the unit hierarchically in each layer to capture global spatiotemporal relations. Different from traditional recurrent units, such as LSTM and GRU, all the units in our network hierarchically read the unit states from the previous step and update their values simultaneously within the current step. In this way, spatiotemporal information is maintained in each recurrent step, allowing our model to capture long-term dependencies. In addition, a structured stack LSTM-based decoder is introduced to decode the predicted poses, with a new loss function defined to quantitatively estimate the importance of a bone according to its kinematic location in the skeleton. As a result, our neural network-based model is more capable of characterizing the inherent structural variability in long-term motion prediction than existing methods, which is also verified in the empirical evaluations detailed in later sections. Our project's main page with experimental videos is at https://zanecode6574.github.io/STHierarchicalRecurrentNetwork and the code is available at https://github.com/p0werHu/articulatedobjectsmotionprediction.
2 Related work
In this section, a brief review of related topics, i.e. motion representation, modeling and prediction, is given below.
2.1 Pose and motion representation
As a fundamental issue in motion-related applications, three vision-based approaches are commonly used to represent poses, namely RGB-based representation [4, 16, 20], depth map-based representation [13] and skeleton-based representation. Here we mainly focus on the skeleton-based representation, which has attracted considerable attention because of its immunity to viewpoint changes [7] and its geometric description of the rigid body [21]. Existing approaches are roughly divided into two categories: joint-based approaches [14, 1, 25, 3], which regard the skeleton as a set of independent points, and part-based (or bone-based) approaches [6, 9, 19], which consider the skeleton as a set of rigid segments each made up of two joint points [28].
Besides, motion representation is also significant in that it should effectively capture the spatiotemporal motion characteristics of joints or bones. The two most common methods are the Euler angle representation and the unit quaternion representation. However, the Euler angle representation suffers from non-intrinsic singularities, or the gimbal lock issue, which leads to numerical and analytical difficulties, while the unit quaternion approach yields a singularity-free parametrization of rotation matrices, but at the cost of one additional parameter [23]. Recently, Lie group-based representations were proposed to solve these singularity and computational issues in manifold-based skeletal motions. Vemulapalli et al. [28] first introduced a Lie group, the Special Euclidean group SE(3), into skeletal motion representation to calculate the relative geometry between various body parts. They found that the relative geometry provides a more sensible description than the absolute locations of bones in the SE(3) representation. On the other hand, the Special Orthogonal group SO(3) [27, 8], another Lie group, was utilized to represent only rotations but not translations in motions, and obtained performance similar to SE(3). However, all these approaches treat the joints of a skeleton equally, ignoring the anatomical restrictions among joints [15]. This inspires us to divide an articulated object into several kinematic chains to retain these skeletal restrictions.
2.2 Motion modeling
Motion prediction requires a model with an efficient encoding capability on input motion sequences. Initially, linear SVMs were adopted to model human motion [28, 22], with the Lie group skeleton representation characterizing the spatial and temporal features [27]. Lv et al. [17] leveraged Hidden Markov Models (HMMs) to capture the sequential properties of poses. Recently, RNNs have become the most popular models. Du et al. [3] used an RNN by dividing the human skeleton into five parts and learning their features separately, which are integrated by a single-layer network afterwards. Huang et al. [8] incorporated the Lie group into a recurrent network structure, enabling it to learn more appropriate spatiotemporal features than SVMs and HMMs. Analogous to CNNs, this network defines a RotMap layer as the convolutional layer and a RotPooling layer as the pooling layer. Liu et al. [14] proposed a spatiotemporal LSTM for motion modeling with a novel trust gate introduced to reduce the noise caused by data collection devices. However, these models mainly focus on designing efficient encoders that produce high-level encoded features, while many significant spatiotemporal dependencies are neglected. Consequently, these encoders cannot be transplanted to motion prediction directly. Different from the previous work, we design an RNN-based encoder to capture the spatiotemporal features of the input pose sequences in one single step and use hierarchical structures to retain long-term spatiotemporal information.

2.3 Motion prediction
As mentioned in the introduction, many conventional approaches need to handcraft spatiotemporal relations, and even their weights, from domain knowledge for motion prediction. Therefore, deep neural networks have been commonly used to predict future motions in recent years. Fragkiadaki et al. [5] proposed the ERD model, which places nonlinear encoder and decoder networks before and after the recurrent layers and uses an LSTM in the recurrent layer. SRNN [9] divides the human body into three different parts (i.e. spine, arms, and legs) among which the spatial and temporal relations are learnt separately. ResGRU [18] is a sequence-to-sequence architecture that combines a GRU with a residual connection in the decoder. HMR [15] introduces a hierarchical motion recurrent network, in which each frame exchanges information with its neighboring frames to obtain the temporal features of motions. Tang et al. [25] proposed a modified highway unit (MHU) and a gram matrix loss function for long-term prediction, attempting to alleviate the motionless-pose problem. To address the problems in these models (as mentioned in the introduction), we present our hierarchical recurrent network model to explicitly capture the inherent structural variability of skeleton motions with spatiotemporal dependencies.

3 Lie algebra representation for skeletal data
It is known that the relative geometry of a pair of body parts of one skeleton can be described by representing each of them in a local coordinate system attached to the other [28]. Given two bones b_m and b_{m+1}, as shown in Fig 2, the local coordinate system of b_m is computed by rotating (with minimum rotation) and translating the global coordinate system so that b_m coincides with the position and orientation of the x-axis (i.e. its starting joint becomes the origin and the x-axis is aligned with it). After this process, we can obtain the location of b_{m+1} attached to the local system of b_m. Then, we can compute a 3D rigid transformation (R, d), where R is a rotation matrix and d is a 3D translation vector taking b_{m+1} to the position and orientation of b_m:
(1)    [e_{m+1} ; 1] = [[R, d], [0, 1]] [ℓ_{m+1}, 0, 0, 1]ᵀ
where e_{m+1} means the end joint of b_{m+1} and ℓ_{m+1} means the length of b_{m+1}. Similarly, the location of b_m attached to the local system of b_{m+1} is calculated by another transformation matrix. As a result, a total of M(M − 1) transformation matrices are obtained over all ordered pairs, where M is the number of bones. Mathematically, a 3D rigid transformation is an element of the Special Euclidean group SE(3). In the end, one skeleton is represented as a point in SE(3) × ⋯ × SE(3), and a motion as a curve in this space.
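As a concrete illustration, the homogeneous form of such a rigid transformation, and its action on the end joint of a bone of length ℓ lying along the local x-axis, can be sketched as follows (a minimal sketch; the function and variable names are ours, not from the paper):

```python
import numpy as np

def se3_transform(R, d):
    # assemble the 4x4 homogeneous matrix [R d; 0 1], an element of SE(3)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = d
    return T

def transform_end_joint(T, length):
    # end joint of a bone of the given length lying along the local x-axis,
    # written in homogeneous coordinates and mapped by T
    e = T @ np.array([length, 0.0, 0.0, 1.0])
    return e[:3]
```

For example, applying the identity transform to a bone of length 2 leaves its end joint at (2, 0, 0), while a rotation or translation moves it accordingly.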
However, different from the process in [28, 27, 15], we fix each bone to a normalized bone length, meaning that all the translation vectors are static; thus only the rotation matrices are required in our model. Meanwhile, given that a human body is described by a kinematic tree consisting of five kinematic chains (i.e. the spine, two legs, and two arms), as illustrated in Fig 2, we only need to calculate the rotation matrix between two neighbouring bones sharing the same joint instead of between two arbitrary bones. In this way, the structure of the skeletal anatomy is maintained in terms of the anatomical restrictions among bones. In addition, the number of rotation matrices in our model is reduced, which potentially decreases the computational cost compared with approaches that consider every pair of bones [28, 27]. In practice, we first compute the axis-angle representation (n, θ) by
(2)    n = (b_m × b_{m+1}) / ‖b_m × b_{m+1}‖
(3)    θ = arccos( (b_m · b_{m+1}) / (‖b_m‖ ‖b_{m+1}‖) )
where × denotes the outer (cross) product and · the inner product. Then, the rotation matrix R is calculated by the Rodrigues formula:
(4)    R = I + sin(θ) n̂ + (1 − cos(θ)) n̂²
where I is the 3 × 3 identity matrix and n̂ is the skew-symmetric matrix of n. Since the set of rotation matrices belongs to the Special Orthogonal group SO(3), the skeleton is represented as a curve in SO(3) × ⋯ × SO(3). Because regression in this curved space is non-trivial, we map the curved space to its tangent space, regarded as the Lie algebra so(3), using the approximate solution [8] of the logarithm map:
(5)    θ = arccos( (tr(R) − 1) / 2 )
(6)    ω = (θ / (2 sin θ)) (R − Rᵀ)∨,  where (·)∨ extracts the 3D vector from a skew-symmetric matrix
In the end, the skeleton is mapped to a series of Lie algebra vectors a = (a_1, …, a_K), where K denotes the number of chains (K = 5 in our model for human motion) and the number of vectors in the k-th chain equals the number of bones in that chain minus one.
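The pipeline of Eqs. (2)-(6) — the axis-angle representation between adjacent bones, the Rodrigues formula, and the logarithm map back to a Lie algebra vector — can be sketched as follows (illustrative code; function names are ours, and the logarithm map uses the standard closed form):

```python
import numpy as np

def axis_angle(b1, b2):
    # rotation axis and angle between two adjacent bone vectors (Eqs. 2-3)
    n = np.cross(b1, b2)
    n = n / np.linalg.norm(n)
    theta = np.arccos(np.dot(b1, b2) / (np.linalg.norm(b1) * np.linalg.norm(b2)))
    return n, theta

def rodrigues(n, theta):
    # Rodrigues formula (Eq. 4): R = I + sin(theta) n_hat + (1 - cos(theta)) n_hat^2
    n_hat = np.array([[0.0, -n[2], n[1]],
                      [n[2], 0.0, -n[0]],
                      [-n[1], n[0], 0.0]])
    return np.eye(3) + np.sin(theta) * n_hat + (1 - np.cos(theta)) * n_hat @ n_hat

def log_map(R):
    # logarithm map SO(3) -> so(3); returns the Lie algebra vector theta * n
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    w_hat = theta / (2 * np.sin(theta)) * (R - R.T)
    return np.array([w_hat[2, 1], w_hat[0, 2], w_hat[1, 0]])
```

Composing the three functions maps a pair of adjacent bones to one 3D Lie algebra vector; applying this along each kinematic chain yields the vectors a_1, …, a_K.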
4 Our model
Given a sequence of observed poses X_{1:T} in a motion, the goal is to predict its future poses X_{T+1:T+Δ}, where T is the number of observed frames and Δ the number of predicted frames. Our model is divided into two parts, a spatiotemporal hierarchical RNN encoder and a structured stack LSTM decoder, as shown in Fig 3. The encoder aims to model the motion efficiently: it encodes all the observed poses in X_{1:T} simultaneously with respect to their spatiotemporal dependencies. At each recurrent layer, a unit on behalf of a bone in a frame exchanges information with two neighboring units along the spatial axis and with its neighboring units along the temporal axis. Meanwhile, global states over both the temporal and spatial dependencies are incorporated into the unit to help our model maintain global information. The decoder, on the other hand, is designed to predict the future poses X_{T+1:T+Δ}. In its first layer, the decoder deciphers the overall information from the previously encoded features. Next, a spine LSTM decodes the spine pose in the second layer, and then two further LSTMs (a leg LSTM and an arm LSTM) decode the two arms and two legs from the previously decoded spine, respectively. Note that the first T frames are fed into the encoder, and the last observed frame is used to initialize the decoder.
4.1 Spatiotemporal hierarchical RNN encoder
We use t and j (where 1 ≤ t ≤ T and 1 ≤ j ≤ J) to denote the index of a frame and of an element in the Lie algebra vector a, respectively. Although not entirely accurate, we refer to an element of a as a bone for convenience. As shown in Fig 4 (a), a state h^l_{t,j} in layer l is updated by exchanging information with its neighboring states h^{l−1}_{t−1,j} and h^{l−1}_{t+1,j} along the temporal axis and h^{l−1}_{t,j−1} and h^{l−1}_{t,j+1} along the spatial axis. In this way, the model learns fine-grained features of the current bone in both the spatial and temporal aspects. In detail, as the number of recurrent steps increases, h^l_{t,j} exchanges information with more bones in different frames and more bones in the current frame t. Besides, a global spatial state g^l_t and a global temporal state q^l_j are used to incorporate global information into the state h^l_{t,j}: g^l_t represents the global feature of all bones in frame t, so that the model obtains high-level information about the current pose, while q^l_j carries global information about bone j across all frames, which enables the model to encode the movement of this bone within one state. At the first recurrent layer, the states are initialized with learnable parameters of the network.
To update h^l_{t,j}, following the design of LSTM, six different forget gates control the information from the six incoming context channels separately (i.e. the two temporal neighbors, the two spatial neighbors, the global spatial state and the global temporal state). An input gate i and an output gate o control the information flow from the input pose used to update the hidden state of this recurrent layer. The process of updating the cell state c^l_{t,j} and hidden state h^l_{t,j} can be formulated as:
(7)    (f_1, …, f_6, i, o) = σ( W(x_{t,j}, h^{l−1}_{t−1,j}, h^{l−1}_{t+1,j}, h^{l−1}_{t,j−1}, h^{l−1}_{t,j+1}, g^{l−1}_t, q^{l−1}_j) )
(8)    c^l_{t,j} = i ⊙ tanh( W_c(x_{t,j}) ) + Σ_{k=1..6} f_k ⊙ c_k,  where c_k ranges over the six context cell states
(9)    h^l_{t,j} = o ⊙ tanh( c^l_{t,j} )
where W(·) is an affine transformation consisting of parameters of the model, ⊙ means the Hadamard product, and σ refers to the sigmoid activation function. Note that for all bones j in the same frame the parameters of W are shared within layer l, and the parameters are also shared among different recurrent layers.

To update the temporal global state q^l_j and the spatial global state g^l_t, as shown in Fig 4 (b), for q^l_j we first design forget gates for the cells of all frames and then introduce a forget gate for the input. This process is formulated as:
(10)    f_k = σ( W_f(h^{l−1}_{k,j}, q^{l−1}_j) ),  k = 1, …, T
(11)    f_q = σ( W_q(q^{l−1}_j) )
(12)    c^{q,l}_j = Σ_{k=1..T} f_k ⊙ c^{l−1}_{k,j} + f_q ⊙ c^{q,l−1}_j
(13)    q^l_j = o_q ⊙ tanh( c^{q,l}_j )
Similarly, for the global spatial state g^l_t, we design forget gates for all the cells of one frame and for the input. The detailed derivations and formulations are provided on our project home page.
4.2 Structured stack LSTM decoder
The decoder aims to decipher the motion from the encoded features and to output predicted poses frame by frame. Existing methods [15, 18] utilized an LSTM or GRU to achieve this goal, which regards the different parts of the skeleton as equally important and breaks the structural principle of the skeleton. This inspires us to design a structured stack LSTM decoder with three layers. The first layer models the overall information of the motion from the encoder. Then, an LSTM predicts the spine in the second layer, and two further LSTMs produce the arms and legs in the last layer. The cell and hidden state inputs of the first layer are initialized from the encoded features, and the states of the subsequent layers are initialized from the outputs of the layer above.
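The layered decoding above can be sketched as follows (a simplified PyTorch sketch under our own assumptions about dimensions and state passing; the class and key names are ours):

```python
import torch
import torch.nn as nn

class StructuredDecoder(nn.Module):
    """Sketch of the three-layer structured decoder:
    overall motion -> spine -> arms and legs."""
    def __init__(self, feat, spine_dim, arm_dim, leg_dim):
        super().__init__()
        self.overall = nn.LSTMCell(feat, feat)       # layer 1: whole motion
        self.spine = nn.LSTMCell(feat, spine_dim)    # layer 2: spine chain
        self.arm = nn.LSTMCell(spine_dim, arm_dim)   # layer 3: both arms
        self.leg = nn.LSTMCell(spine_dim, leg_dim)   # layer 3: both legs

    def forward(self, x, states):
        # decode overall motion, then the spine, then arms/legs from the spine
        h1, c1 = self.overall(x, states["overall"])
        h2, c2 = self.spine(h1, states["spine"])
        ha, ca = self.arm(h2, states["arm"])
        hl, cl = self.leg(h2, states["leg"])
        new_states = {"overall": (h1, c1), "spine": (h2, c2),
                      "arm": (ha, ca), "leg": (hl, cl)}
        return torch.cat([h2, ha, hl], dim=-1), new_states
```

Feeding the arm and leg cells with the freshly decoded spine state mirrors the idea that limb poses are predicted conditioned on the spine rather than all at once.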
4.3 Loss function
Currently three loss functions are commonly used during network training: computing an L2 loss on the Lie algebra vectors directly, or obtaining the locations of the joints or bones by forward kinematics and computing their L2 loss. However, these functions neglect the kinematic relations among the bones in a chain and treat all bones equally. To alleviate this problem, a loss function [15] was presented that computes a weight for each element of the Lie algebra vector a. The underlying fact is that, when doing forward kinematics, the prediction of a bone is much more important than the predictions of its successive bones in the chain. However, that function cannot quantify the accumulated effect of the bones further down the chain. Here we redefine the function, allowing it to estimate this effect, as follows:
(14)    L = Σ_{t=1..Δ} Σ_{j=1..J} w_j ‖ â_{t,j} − a_{t,j} ‖₂
(15)    w_j = Σ_{k=j..J} ℓ_k
where ℓ_k denotes the length of bone k, â_{t,j} refers to the predicted bone, and J is the number of bones in the chain. w_j indicates the weight of the current bone, obtained by accumulating the lengths of its successive bones. Consequently, a bone is given a larger penalty coefficient if its subsequent bones are longer.
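The weighted loss above can be sketched for a single chain as follows (an illustrative NumPy version; we assume here that a bone's weight accumulates its own length plus those of its successors):

```python
import numpy as np

def chain_weighted_loss(pred, target, lengths):
    # pred/target: (frames, bones, 3) Lie algebra vectors for one chain
    # lengths: (bones,) normalized bone lengths along the chain
    # w_j accumulates the lengths of bone j and its successive bones, so
    # bones with longer subsequent segments receive a larger penalty.
    w = np.cumsum(lengths[::-1])[::-1]
    err = np.linalg.norm(pred - target, axis=-1)  # per-bone angle error
    return float((w * err).mean())
```

Summing (or here averaging) the weighted per-bone errors over frames and chains yields the training objective.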
5 Experiment
5.1 Datasets
5.2 Parameters
In our experiments, the dimension of the hidden states in the encoder is set to 20 and 16 for the H3.6m and mouse datasets, respectively. The number of recurrent steps is 10 and the batch size is 32. For short-term prediction, we randomly collect data samples of 60 consecutive frames from the videos, following the other comparative approaches [25, 15]. The first 50 frames are used to feed the encoder and decoder, while the remaining 10 frames are left for prediction. For long-term prediction, 50 frames are fed into the network to predict 100 frames. The Adam optimizer [10] is utilized to optimize the network, with its parameters β1 and β2 set to 0.9 and 0.999, respectively. Our model is implemented in PyTorch 1.0 and the model parameters are randomly initialized from a Gaussian distribution.
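The optimizer and initialization settings above can be sketched as follows (the learning rate and the Gaussian standard deviation are our assumptions, as this section does not state them):

```python
import torch

def make_optimizer(model):
    # Adam with the reported betas (0.9, 0.999); the learning rate is
    # an assumption, not stated in this section
    return torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

def init_weights(module):
    # random Gaussian initialization; the standard deviation is illustrative
    for p in module.parameters():
        torch.nn.init.normal_(p, mean=0.0, std=0.02)
```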
5.3 Baseline methods
The prediction performance of our approach is compared against six established RNN-based methods: ERD and LSTM-3LR [5], SRNN [9], MHU [25], ResGRU [18], and HMR [15]. The competing models are evaluated in two respects: quantitatively, via angle errors in short-term prediction, and qualitatively, via the feasibility (dynamic and human-like) of the motion in long-term prediction. In particular, for the quantitative evaluation we use the mean angle error (MAE) metric, which measures the angle difference of the bones between the prediction and the ground truth. We also take the agnostic zero-velocity baseline [18] into consideration, which always outputs the last observed pose as the new predicted pose. This baseline is important for analyzing the effectiveness of models in motion prediction.
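The two evaluation devices just mentioned can be sketched as follows (our own illustrative implementation of a per-frame mean angle error and the zero-velocity baseline; the exact averaging convention in the paper may differ):

```python
import numpy as np

def mean_angle_error(pred, gt):
    # mean absolute angle difference between predicted and ground-truth
    # bone angles, averaged separately for each predicted frame
    return np.abs(pred - gt).reshape(pred.shape[0], -1).mean(axis=1)

def zero_velocity_baseline(last_observed_pose, horizon):
    # the agnostic zero-velocity baseline: repeat the last observed pose
    # for every future frame
    return np.repeat(last_observed_pose[None], horizon, axis=0)
```

By construction, the zero-velocity baseline scores well whenever the activity changes only slightly between frames, which is why it is a meaningful sanity check.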
6 Results and discussion
6.1 Results on H3.6m dataset
Methods  Greeting  Walking  

80ms  160ms  320ms  400ms  560ms  640ms  720ms  1000ms  80ms  160ms  320ms  400ms  560ms  640ms  720ms  1000ms  
ERD [5]  1.15  1.32  1.58  1.69  1.91  1.92  1.94  2.01  1.06  1.12  1.22  1.26  1.31  1.34  1.41  1.51 
LSTM3LR [5]  0.92  1.12  1.39  1.51  1.76  1.76  1.81  1.91  0.88  0.95  1.02  1.05  1.10  1.12  1.14  1.21 
SRNN [9]  0.74  1.07  1.48  1.67  2.14  2.11  2.19  2.42  0.64  0.83  1.08  1.22  1.46  1.51  1.55  1.58 
ResGRU [18]  0.57  0.92  1.28  1.44  1.74  1.76  1.82  1.95  0.34  0.55  0.77  0.87  1.07  1.14  1.23  1.35 
Zerovelocity [18]  0.54  0.89  1.30  1.49  1.76  1.74  1.77  1.80  0.39  0.68  0.99  1.15  1.35  1.37  1.37  1.32 
MHU [25]  0.54  0.87  1.27  1.45  1.75  1.71  1.74  1.87  0.32  0.53  0.69  0.77  0.90  0.94  0.97  1.06 
HMR [15]  0.54  0.91  1.27  1.41  1.66  1.65  1.69  1.72  0.35  0.54  0.79  0.85  0.94  0.98  1.04  1.11 
Ours (remove temporal global states)  0.54  0.88  1.25  1.39  1.62  1.57  1.63  1.69  0.32  0.45  0.69  0.77  0.86  0.90  0.97  0.99 
Ours (remove spatial global states)  0.56  0.89  1.26  1.40  1.62  1.59  1.61  1.67  0.35  0.46  0.71  0.79  0.89  0.92  0.97  1.01 
Ours(Replace LSTM)  0.54  0.88  1.24  1.38  1.60  1.58  1.63  1.68  0.32  0.45  0.70  0.77  0.87  0.91  0.96  1.00 
Ours  0.54  0.86  1.23  1.37  1.58  1.55  1.60  1.66  0.30  0.42  0.68  0.76  0.85  0.89  0.94  0.98 
Methods  Posing  Purchases  
80ms  160ms  320ms  400ms  560ms  640ms  720ms  1000ms  80ms  160ms  320ms  400ms  560ms  640ms  720ms  1000ms  
ERD [5]  1.35  1.41  1.69  1.86  2.06  2.12  2.18  2.57  1.16  1.30  1.49  1.52  1.81  1.86  1.85  2.34 
LSTM3LR [5]  1.22  1.25  1.54  1.71  1.93  2.01  2.09  2.73  1.03  1.13  1.35  1.42  1.81  1.88  1.81  2.30 
SRNN [9]  0.96  1.14  1.70  2.04  2.48  2.47  2.69  3.50  0.69  1.09  1.48  1.67  1.92  1.99  1.91  2.48 
ResGRU [18]  0.4  0.74  1.39  1.66  1.98  2.12  2.23  2.67  0.54  0.79  1.10  1.20  1.61  1.69  1.71  2.16 
Zerovelocity [18]  0.28  0.57  1.13  1.37  1.81  2.14  2.23  2.78  0.62  0.88  1.19  1.27  1.64  1.68  1.62  2.45 
MHU [25]  0.33  0.64  1.22  1.47  1.82  2.11  2.17  2.51  _  _  _  _  _  _  _  _ 
HMR [15]  0.24  0.50  1.06  1.31  1.64  1.81  1.95  2.49  0.52  0.78  1.06  1.15  1.60  1.66  1.61  2.11 
Ours (remove temporal global states)  0.23  0.51  1.04  1.33  1.63  1.86  2.02  2.60  0.53  0.79  1.07  1.12  1.54  1.57  1.52  2.12 
Ours (remove spatial global states)  0.25  0.52  1.07  1.33  1.64  1.87  2.01  2.61  0.52  0.78  1.05  1.14  1.55  1.58  1.54  2.13 
Ours(Replace LSTM)  0.24  0.50  1.06  1.32  1.62  1.87  2.02  2.60  0.52  0.80  1.05  1.12  1.52  1.56  1.51  2.13 
Ours  0.22  0.49  1.03  1.30  1.60  1.84  1.99  2.58  0.50  0.77  1.04  1.10  1.49  1.54  1.49  2.11 
The quantitative results for the complex activities Greeting, Walking, Posing, and Purchases are reported in Table 1 (other activities are not shown due to the page limitation, but similar results are provided on our home page). It can be observed that our model clearly outperforms all the other models by a margin in both short-term and long-term prediction. This is mainly due to its ability to exploit the rich spatiotemporal dependency information between the bones of each chain separately. Notably, Greeting and Purchases are more challenging than the others because they contain more hand movements than leg and foot movements; our model can effectively encode hand and leg movements simultaneously. In addition, it is clear that the zero-velocity performance is better than that of ERD, LSTM-3LR and SRNN, which is consistent with the results in [18]. This might be because some activities change their motions only slightly: in such situations, zero-velocity continuously yields static poses, while the other competing models may suffer from the first-frame discontinuity issue.
For the qualitative evaluation, we assess these models by visualizing the motions with respect to two primary criteria: human-like and recognizable. Note that we only list 3 out of the 11 activities here, visualizing 4 of the first frames as the short-term motion and 12 of the remaining frames as the long-term motion. We refer interested readers to our project website for better visual effects. For instance, all models perform well in short-term prediction on walking, but for long-term prediction LSTM-3LR, ResGRU, and ERD converge to a motionless state. Our model and HMR yield human-like and recognizable poses throughout the entire prediction window, and the movement speed of our model is closer to the ground truth than that of HMR. This is mainly due to the global states, which are designed to encode integrated information in both the spatial and temporal domains. Besides, walking is relatively simple, since it only contains repetitive movements of the legs and arms. For the more complex activity eating, which contains significant motions like feeding food to the mouth with the hands, HMR only learns the foot movement while the hands remain motionless; the other comparison models cannot produce recognizable motion at all. To further complicate matters, in posing, whose motion features include standing still and making several poses with the hands, only our model captures these features and repeatedly yields dynamic and human-like poses, as shown in Fig 1.
6.2 Results on mouse dataset
Unlike the human dataset, the mouse dataset is more challenging due to its stochastic nature, which makes it difficult to categorize the motions [31]. Table 2 reports the comparison results in terms of MAE. Our model outperforms the other models on six out of eight frames. We also found that zero-velocity only surpasses the others at the 80 ms frame and falls behind by a notable margin on the remaining frames. This is because mouse movements are faster and more random than human ones. As shown in Fig 6, our model outperforms the others after 40 milliseconds, which verifies its superiority and stability.
Methods  Mouse  

80ms  160ms  320ms  400ms  560ms  640ms  720ms  1000ms  
ERD [5]  0.50  0.48  0.63  0.69  0.72  0.68  0.69  0.81 
LSTM3LR [5]  0.53  0.49  0.66  0.68  0.67  0.62  0.70  0.75 
ResGRU [18]  0.41  0.47  0.62  0.69  0.70  0.64  0.70  0.70 
Zerovelocity [18]  0.40  0.53  0.73  0.95  1.03  0.94  1.07  1.13 
HMR [15]  0.42  0.44  0.64  0.71  0.73  0.71  0.73  0.72 
Ours  0.41  0.43  0.53  0.52  0.57  0.50  0.67  0.72 
6.3 Ablation study
6.3.1 Loss function
In this section, we evaluate the effectiveness of our proposed loss function by comparing it against the L2 loss and the HMR loss [15]. We use the H3.6m dataset for this study with the same parameter settings as in Section 5.2. Table 3 reports the average MAEs over all activities. Our loss function consistently outperforms the L2 loss and the HMR loss. This is because our loss function considers the lengths of the leaf bones when estimating the root bones: it not only retains the anatomical restrictions of the chains, but also assigns each bone a quantitative weight according to its location in the pose.
Loss  H3.6m  

80ms  160ms  320ms  400ms  560ms  640ms  720ms  1000ms  
L2 loss  0.36  0.61  0.97  1.10  1.30  1.40  1.47  1.80 
HMR loss [15]  0.34  0.60  0.95  1.06  1.29  1.37  1.45  1.77 
Our loss  0.33  0.58  0.93  1.06  1.28  1.36  1.44  1.75 
6.3.2 Component effectiveness
We evaluate the effects of the different components of our network separately by removing modules or replacing them with conventional alternatives, covering two types of investigation common with neural network models: encoder component effects (removing the temporal global states and the spatial global states, respectively) and decoder component effects (replacing our structured stack LSTM decoder with a naïve two-layer LSTM). Table 1 shows that changing these components degrades the performance of our model. When removing the spatial global states, the prediction performance drops more than with the other changes, which might be due to their high-level encoding of fine-grained spatial information. Besides, the model with the naïve LSTM decoder performs worse than the one with our structured stack LSTM decoder, which indicates that predicting the spine, arms, and legs progressively is more effective than obtaining them all at the same time.
7 Conclusion
In this paper, we presented a spatiotemporal hierarchical recurrent network, in which the hierarchical model simultaneously captures the inherent spatial and temporal variability of motions. It is more efficient and flexible than existing methods for both short-term and long-term motion prediction. As future work, we will explore applications of our model to raw image videos, and we will consider predicting multiple motions with probabilities, learning a network that generates future motions under uncertainty.
This work was supported by grants from the National Natural Science Foundation of China (grant no. 61977012), the National Major Science and Technology Projects of China (grant nos. 2018AAA0100700, 2018AAA0100703), the Fundamental Research Funds for the Key Research Program of the Chongqing Science & Technology Commission (grant no. cstc2017rgznzdyf0064), the Chongqing Provincial Human Resource and Social Security Department (grant no. cx2017092), and the Central Universities in China (grant nos. 2019CDJGFDSJ001, CQU0225001104447 and 2018CDXYRJ0030).
References
 [1] Judith Butepage, Michael Black, Danica Kragic, and Hedvig Kjellström, ‘Deep representation learning for human motion prediction and classification’, in CVPR, pp. 1591–1599, (2017).
 [2] Ionescu Catalin, Papava Dragos, Olaru Vlad, and Sminchisescu Cristian, ‘Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments’, IEEE Transactions on Pattern Analysis & Machine Intelligence, 36(7), 1325–1339, (2014).
 [3] Yong Du, Wei Wang, and Liang Wang, ‘Hierarchical recurrent neural network for skeleton based action recognition’, in CVPR, (2015).

 [4] Chelsea Finn, Ian Goodfellow, and Sergey Levine, ‘Unsupervised learning for physical interaction through video prediction’, in NIPS, (2016).
 [5] Katerina Fragkiadaki, Sergey Levine, Panna Felsen, and Jitendra Malik, ‘Recurrent network models for human dynamics’, in ICCV, (2015).
 [6] F. Sebastian Grassia, ‘Practical parameterization of rotations using the exponential map’, Journal of Graphics Tools, 3(3), 29–48, (1998).
 [7] Fei Han, Brian Reily, William Hoff, and Hao Zhang, ‘Space-time representation of people based on 3d skeletal data: A review’, Computer Vision & Image Understanding, 158(C), 85–105, (2017).
 [8] Zhiwu Huang, Chengde Wan, Thomas Probst, and Luc Van Gool, ‘Deep learning on lie groups for skeleton-based action recognition’, in CVPR, (2017).
 [9] Ashesh Jain, Amir R. Zamir, Silvio Savarese, and Ashutosh Saxena, ‘Structural-RNN: Deep learning on spatio-temporal graphs’, in CVPR, (2016).
 [10] Diederik Kingma and Jimmy Ba, ‘Adam: A method for stochastic optimization’, in ICLR, (2015).
 [11] Andreas Lehrmann, Peter Gehler, and Sebastian Nowozin, ‘A non-parametric bayesian network prior of human pose’, in ICCV, pp. 1281–1288, (2013).
 [12] Andreas Lehrmann, Peter Gehler, and Sebastian Nowozin, ‘Efficient nonlinear markov models for human motion’, in CVPR, (2014).
 [13] Li Liu, Li Cheng, Ye Liu, Yongpo Jia, and David S. Rosenblum, ‘Recognizing complex activities by a probabilistic interval-based model’, in AAAI, (2016).
 [14] Jun Liu, Amir Shahroudy, Dong Xu, Alex C. Kot, and Gang Wang, ‘Skeleton-based action recognition using spatio-temporal LSTM network with trust gates’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12), 3007–3021, (2018).
 [15] Zhenguang Liu, Shuang Wu, Shuyuan Jin, Qi Liu, Shijian Lu, Roger Zimmermann, and Li Cheng, ‘Towards natural and accurate future motion prediction of humans and animals’, in CVPR, (2019).
 [16] William Lotter, Gabriel Kreiman, and David Cox, ‘Deep predictive coding networks for video prediction and unsupervised learning’, in ICLR, (2017).
 [17] Fengjun Lv and Ramakant Nevatia, ‘Recognition and segmentation of 3d human action using HMM and multi-class AdaBoost’, in ECCV, (2006).
 [18] Julieta Martinez, Michael J. Black, and Javier Romero, ‘On human motion prediction using recurrent neural networks’, in CVPR, (2017).
 [19] Julieta Martinez, Michael J. Black, and Javier Romero, ‘On human motion prediction using recurrent neural networks’, in CVPR, (2017).
 [20] Michael Mathieu, Camille Couprie, and Yann Lecun, ‘Deep multi-scale video prediction beyond mean square error’, in ICLR, (2016).
 [21] Richard M. Murray, S. Shankar Sastry, and Zexiang Li, A Mathematical Introduction to Robotic Manipulation, CRC Press, Inc., 1st edn., 1994.
 [22] Eshed Ohn-Bar and Mohan M. Trivedi, ‘Joint angles similarities and HOG2 for action recognition’, in CVPRW, (2013).
 [23] Jonghoon Park and Wan-Kyun Chung, ‘Geometric integration on euclidean group with application to articulated multibody systems’, IEEE Transactions on Robotics, 21(5), 850–863, (2005).
 [24] Hedvig Sidenbladh, Michael J. Black, and Leonid Sigal, ‘Implicit probabilistic models of human motion for synthesis and tracking’, in ECCV, pp. 784–800, (2002).
 [25] Yongyi Tang, Lin Ma, Wei Liu, and Weishi Zheng, ‘Long-term human motion prediction by modeling motion context and enhancing motion dynamic’, in IJCAI, (2018).
 [26] Graham W. Taylor, Geoffrey E. Hinton, and Sam Roweis, ‘Modeling human motion using binary latent variables’, in NIPS, (2006).
 [27] Raviteja Vemulapalli, ‘Rolling rotations for recognizing human actions from 3d skeletal data’, in CVPR, (2016).
 [28] Raviteja Vemulapalli, Felipe Arrate, and Rama Chellappa, ‘Human action recognition by representing 3d human skeletons as points in a lie group’, in CVPR, (2014).
 [29] Vladimir Pavlović, James M. Rehg, and John MacCormick, ‘Learning switching linear models of human motion’, in NIPS, pp. 981–987, (2001).
 [30] Jack M. Wang, David J. Fleet, and Aaron Hertzmann, ‘Gaussian process dynamical models for human motion’, IEEE Transactions on Pattern Analysis & Machine Intelligence, 30(2), 283–298, (2008).
 [31] Chi Xu, Lakshmi Govindarajan, Yu Zhang, and Li Cheng, ‘Lie-X: Depth image based articulated object pose estimation, tracking, and action recognition on lie groups’, International Journal of Computer Vision, (2016).
 [32] Tianfan Xue, Jiajun Wu, Katherine L. Bouman, and William T. Freeman, ‘Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks’, in NIPS, (2016).