1 Introduction
Character posing is an important step in keyframe animation. It is difficult for novices and even skilled artists because of the articulated nature of human motion. With the most prevailing input device still being the mouse, the user's input can only provide basic information such as 2D screen coordinates. Based on this limited information, it is challenging to generate satisfactory character poses efficiently. Moreover, considerable information is required to determine all of the character's degrees of freedom (DOFs). Given the 3D positional information of one or a few joints, it is useful to reposition the rest of the joints, or even the whole pose if the information provided by the user is not accurate. For example, a novice animator is likely to pose an unnatural character within a short time limit. It is then up to the algorithm to extract useful information, such as the pose style, from the unnatural character and create a new, natural one. This should be carried out interactively for synthesizing natural poses, and the process is referred to as character posing.
To solve the character posing problem, inverse kinematics (IK) is often needed to find the skeleton in the angle-space representation. Classical inverse kinematics solves an underdetermined nonlinear system to find the joint angles. One popular method is to exploit the gradient information, that is, to construct the Jacobian matrix and then solve the system iteratively starting from a random initial point. However, the mapping from 3D Euclidean space to joint angle space is one-to-many if the user's constraints are insufficient. For example, given a set of incomplete joint constraints, such as the 3D positions of some joints, the solution obtained from the Jacobian method will not be unique but depends on the initialization, not to mention that the poses resulting from the possible solutions are unlikely to all be natural. One not only has to narrow down the solution set, but must also refine the solutions so that the resulting pose is natural.
One way to help is to learn from motion capture data. Even though the space of joint configurations is large, the desirable poses span a much smaller space. For example, human beings can make a large number of poses, but the space of natural poses is smaller. By recording these poses as motion capture data and learning from them, we can provide heuristics for solving the IK problem. This is the approach recently taken by researchers and is referred to as data-driven IK. Our approach falls into this category.
The framework of our model is shown in figure 1. In our proposed model, each pose is assumed to have a sparse representation given a pose dictionary. The pose dictionary is learned from motion capture data in Euclidean space. These data cover a large number of different motion styles, such as walking, running and other sports activities. In the interactive character posing stage, our model responds to the user's inputs and constraints in real time and constructs natural poses that meet the user's intentions. We solve the pose synthesis problem by breaking the optimization problem into three components: 1) finding the sparse coefficients and rotation parameters; 2) normalizing the pose to determine the scaling parameter; and 3) building the output pose in angle space by the Jacobian method. Details on our model and how to solve the optimization problem are presented in section 3.
The rest of this paper is organized as follows. We review related work in the remainder of this section. Starting from subspace models, we present the inspiration for this work and derive our model in section 2. In section 3 we introduce our proposed model and the algorithm for solving it in detail, followed by applications and experimental results in section 4. We present some discussion and conclude our work in section 5.
1.1 Related work
Classical inverse kinematics The use of the Jacobian matrix for inverse kinematics can be traced back at least to [GM85], in which Girard et al linearised the equation x = f(q) at the current estimate q₀, yielding Δx ≈ JΔq, where f is the forward-kinematics function, which involves a set of translations and rotations and is usually implemented procedurally in some programming language, and J is the Jacobian matrix defined as J = ∂f/∂q. The Jacobian matrix is usually not full-rank and the update is given by Δq = J†Δx with J† = Jᵀ(JJᵀ + λI)⁻¹, where J† is the (damped) pseudoinverse of the Jacobian matrix and λ is a small positive number. To accommodate constraints such as angle limits or spatial relations, Zhao et al [ZB94] minimized the distance between the input pose and the forward-kinematics function subject to the constraints using nonlinear programming techniques. Rose et al
[RRP97] solved the IK problem using the BFGS optimization method [GMW81], a quasi-Newton algorithm that does not require the complicated Hessian matrix. Their work aimed at building the final motion in the angle representation using sensor data obtained from the motion capture process in Euclidean space.

Data-driven inverse kinematics In general, data-driven IK leverages the mocap data and models the IK problem as follows:
min_x E_obs(x) + λ E_prior(x)   (1)

where E_obs(x) is the energy term that measures the (negative log) observation likelihood, and E_prior(x) is the energy term that measures the (negative log) probability of the current pose under some prior distribution. This prior distribution is what makes each model distinctive from the others.
A straightforward way to model the prior is to use a Gaussian distribution. Through the covariance matrix, this approach is related to Principal Component Analysis (PCA), which restricts the solution to lie in the subspace spanned by the principal components. By imposing the Gaussian prior, we force the solution to approach the mean from the direction of one principal component or a linear combination of them. Instead of using the Gaussian model directly, we can also first partition the motion data with a clustering algorithm and then build a Gaussian prior for each cluster. This is similar to the mixture of local linear models, which has been used as a baseline model in [CH05]. Wei et al [WC11] modelled the prior using a mixture of factor analyzers (MFA) [GH96]. MFA is similar to a mixture of Gaussians, but includes a dimension-reduction component and avoids the ill-conditioning of the covariance matrix. Kallmann et al [Kal08] introduced analytical IK for the arms instead of the whole body. To construct a natural whole-body pose, a set of pre-designed key body poses is used for pose blending (interpolation). Grochow et al [GMHP04] proposed style-based IK (SIK) for modelling human motion. Their model is based on the scaled Gaussian Process Latent Variable Model (GPLVM) [Law04]. Specifically, the training samples and the target poses are mapped to low-dimensional latent variables using GPLVM. These variables are connected by a kernel function, and information passes from the training set through the correlation of latent variables to the target pose. Their model can generalize to unseen poses thanks to the good generalization ability of Gaussian processes. However, due to the limitations of Gaussian processes, the complexity of their approach is asymptotically cubic in the size of the training set. To reduce the complexity, they maintain an active set during training and synthesis. Despite the improved efficiency brought by the active set, it is still prohibitive to learn from large-scale pose data for real-time applications. To further improve efficiency, Wu et al [WTR11] considered different approximations to speed up Gaussian processes and applied them to the IK problem. Other than GPLVM and its variants, modelling motion based on dimension reduction is also very popular; examples are methods based on state models [BH00, LWS02] and PCA [SHP04], etc. Given incomplete measurements from motion capture sensors, Chai et al [CH05] construct full-body human motion using a Local PCA (LPCA) model. The basic idea is to incrementally estimate the current pose based on the previously estimated poses and a motion capture database. The prior term measures the deviation of reconstructed poses from the motion capture database. Since the database can be large and heterogeneous, they introduce an LPCA model: given an incomplete pose, they first search the database to find the nearest poses and build a Gaussian motion prior around the neighbourhood. The prior is then used for pose synthesis.
Motion data denoising and completion were considered by Lou et al [LC10]. The idea is to first construct a set of filter bases from the motion capture data and use them for motion completion or denoising. The resulting motion is the solution to a cost function that consists of the basis-representation error and the observation likelihood. The filter bases capture spatio-temporal patterns of human motion, and the denoising process also relies on these spatio-temporal patterns, which in our case do not exist. Another work in this direction was introduced by Lai et al [LYL11], in which a low-rank matrix completion algorithm was used for unsupervised motion denoising and completion. The major difference of our working environment from both of these is that we do not have temporal information available when synthesizing new poses.
Sparse coding
On the other hand, sparse representation has been widely applied to image processing and pattern recognition; examples include face recognition [WYG09] and image super-resolution [YWHM08]. For modelling human motion, [LFAJ10] considered each joint's movement as a signal that admits a sparse representation over a set of basis functions. These basis functions are learned from motion capture data. They demonstrated that the proposed model is useful for action retrieval and classification. Our work is different, as we model each pose separately and our target application is character posing instead of action retrieval and classification.

Summary Among the existing models, the numerical IK algorithms do not have access to motion capture data and thus cannot guarantee the naturalness of synthesized poses. For data-driven models, the Gaussian model introduces a large error on large datasets, as it tries to approximate the underlying complicated distribution with a single Gaussian. Imposing a clustering step before applying the Gaussian is an improvement, but leaves us the problem of choosing the number of clusters. For LPCA, searching for poses online is too slow when learning from a large training set, and the model accuracy depends on both the search result and the neighbourhood size. For SIK, the complexity is prohibitive even for moderate-scale training sets, and the introduced active-set approximation makes it difficult to capture the diversity of motion styles. For MFA, the introduction of a diagonal matrix in the covariance avoids the ill-conditioning when the number of clusters is large (and thus the number of poses in each cluster is small); however, being a variant of the Gaussian model, it still risks underfitting complicated data. Apart from the limitations mentioned above, most of these models are probabilistic and the training error is measured by likelihood, which is not intuitive: given such a measurement, it is not straightforward to determine whether the model is adequate for fitting the data. Consequently, it is hard to choose the model parameters (e.g., the number of clusters).
In contrast, our model measures the training error directly by the mean square error, which is very intuitive for determining the model parameters. Besides, our model can learn from large datasets (up to millions of poses) with an arbitrarily small training error, albeit at the expense of increasing the size of the dictionary. We found that this increase does not cause overfitting, and the denoising and completion algorithm remains efficient, as the complexity of our synthesis algorithm is linear in the size of the pose dictionary. We present several real-time applications demonstrating that our model is effective for interactive character posing. We also compare our model with the existing models on pose completion when a large proportion of the joints is missing, and on pose denoising when the pose is corrupted by dense and sparse Gaussian noise. Experimental results show that our model has lower completion and denoising errors.
Contributions Our contributions are twofold: 1) starting from the prevailing subspace models for human motion, we propose a sparse representation of poses for character posing; to our knowledge, we are the first to apply sparse coding to character posing; 2) different from previous approaches, we propose to learn from motion capture data in Euclidean space, which not only provides an intuitive measurement of the training error but also facilitates sparse coding and pose synthesis.
2 Overview of proposed model
Notation setting
In this paper, matrices, vectors and scalars are denoted by boldface uppercase, boldface lowercase and non-bold lowercase letters respectively.
‖·‖₂, ‖·‖₀ and ‖·‖_F denote the vector ℓ₂ norm, the vector ℓ₀ (pseudo)norm and the matrix Frobenius norm respectively; ‖·‖₀ measures the number of nonzero components in a vector.

From low-rank approximation of motions to sparse representation of poses As the movements of the body parts are correlated, when we represent a human motion as a matrix it will be approximately low-rank, exhibiting a fast-decaying spectrum [LYL11]. Low-rank approximation is therefore effective for modelling human motion. Our work in this paper is inspired by the low-rank motion completion approach proposed by Lai et al [LYL11], in which the rank of a motion is minimized for completing and denoising human motion. The connection between the rank function and ℓ₀-norm minimization is clear if we observe that the rank of a diagonal matrix is equal to the ℓ₀ norm of its diagonal vector.
Let {x_i}_{i=1}^N denote a set of poses and X = [x₁, …, x_N] be the motion matrix. The Singular Value Decomposition (SVD) of X gives X = UΣVᵀ, where Σ = diag(σ₁, …, σ_d). By setting the trailing singular values σ_{r+1}, …, σ_d to zero, we can approximate each pose by the first r left singular vectors: x_i ≈ Σ_{j=1}^r c_{ji} u_j, where u_j is the j-th column of U and c_{ji} is the j-th coefficient of pose x_i. The number r can be small and we still have a good approximation of human motions. In other words, each pose has a sparse representation given the set of bases {u_j}, and the support in this case lies in the first r bases.

Following this idea, to learn from a large motion capture dataset, one possible way is to first partition the whole training set into K clusters and find a set of bases B_k for each cluster k. If we then collect all the bases into a matrix, i.e., B = [B₁, …, B_K], each pose in the training set will be sparsely represented under this matrix. This approach is general: we can set K to 1 to recover the low-rank approximation of the whole dataset, or set K to the size of the dataset to use the whole dataset as bases. However, this leaves us the problem of properly partitioning the data into clusters. If the poses in a cluster are too linearly uncorrelated because of an improper partition (e.g., too many diversified poses in one cluster), the sparse approximation error will tend to be large. On the other hand, if each cluster consists of only a few poses, the sparse approximation error is small but the number of bases will be very large, with the limit being the size of the training set.
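As a concrete illustration of the truncated-SVD approximation above, the following numpy sketch (our own illustration, not code from the paper) builds a nearly rank-2 "motion" matrix whose columns play the role of poses, and reconstructs it from its first two singular vectors:

```python
import numpy as np

def truncated_svd_approx(X, r):
    """Approximate the motion matrix X by its first r singular vectors."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    C = s[:r, None] * Vt[:r]        # coefficients of each pose in the r bases
    return U[:, :r] @ C             # rank-r reconstruction

# A correlated "motion": columns (poses) lie near a 2-dimensional subspace.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 100)) \
    + 1e-3 * rng.normal(size=(60, 100))
Xr = truncated_svd_approx(X, r=2)
rel_err = np.linalg.norm(X - Xr) / np.linalg.norm(X)   # small: 2 bases suffice
```

The small relative error mirrors the observation that a small r already approximates correlated pose data well.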
Instead of determining the matrix B in the above manner, we go the other way around: we learn the matrix from the training set without worrying about the partitioning. To begin with, we return to the one-cluster case and note that, from an optimization perspective, the bases obtained from SVD are a solution to the following optimization problem with variables B and C:

min_{B,C} ‖X − BC‖_F²  s.t. BᵀB = I   (2)
As a relaxation, the orthonormality constraint on B is changed to a unit-ball constraint on its columns, and B is extended to an overcomplete matrix whose number of columns exceeds the pose dimension. In exchange, each column of C shall be sparse to reflect the sparse property of poses, with the sparsity measured by the ℓ₀ norm of each column of C. We refer to the matrix B in this case as the pose dictionary and denote it as D to be consistent with the sparse coding literature. We present this pose dictionary learning process in the next section.
Modelling the poses in Euclidean space In the analysis presented above, we made no assumption on the ambient space of the poses. Although the motion matrix is low-rank in both Euclidean space and angle space (when preprocessed properly), we choose the former for sparse representation. This is different from previous data-driven approaches, which directly model the motion in angle space. We do so for two reasons.
One reason is to avoid the periodicity of angles, which potentially corrupts the sparse representation: given two identical poses, add 2π to a component of one pose vector while leaving the other pose unchanged; the resulting two poses are still identical, but the shifted one is unlikely to have the same sparse representation as the other under the same dictionary. This is a problem especially in pose synthesis, due to the non-smoothness of the ℓ₀ norm. The same does not happen for poses represented in Euclidean space. Other parametrizations such as quaternions and the exponential map are also nonlinear, and thus inconvenient for sparse coding, which involves solving linear systems.
The other reason is that by doing so, we can directly measure the representation error, which provides an intuitive measurement during training. Specifically, we directly optimize the (mean) square error of the sparse representation of the training set without invoking the forward-kinematics mapping. Moreover, since input observations such as an edited pose or 2D/3D coordinates are in Euclidean space, the optimization for pose synthesis is more efficient because we can defer the need for the Jacobian matrix until we have found a pose represented by a full set of joint coordinates. Converting the pose into angle space in the last stage (see figure 1) is only necessary when we want to further process the pose, such as changing the skeleton configuration (e.g., joint angle limits, bone lengths).
3 Learning sparse representation of poses for character posing
The idea of modelling poses based on sparse coding is similar to subspace approaches such as PCA, except that the 'subspace' is generalized to the span of the active atoms in the pose dictionary, and the 'bases' are no longer assumed to be orthonormal or even independent. Given a set of observations and constraints, the reconstructed pose shall be a trade-off between having a sparse support under the pose dictionary and being consistent with the observations and constraints.
3.1 Learning the pose dictionary
Before applying sparse representation, we need to first determine the underlying dictionary, which should be able to capture the pose variations and be insensitive to global orientation and translation. Given a training set {x_i}_{i=1}^N, the pose dictionary is learned such that the poses in the training set are sparsely represented under this dictionary. Specifically, the learning problem is modelled as
min_{D,C} ‖X − DC‖_F²  s.t. ‖c_i‖₀ ≤ T for all i, ‖d_j‖₂ ≤ 1 for all j   (3)

where c_i is the i-th column of C, d_j is the j-th column of D, and x_i (the i-th column of X) is the i-th pose in the training set, with global orientation and translation set to zeros, since they are usually irrelevant to the pose style. Note the similarity between problem (3) and problem (2).
To solve the above learning problem, we use the K-SVD algorithm proposed by Aharon et al [AEB06]. K-SVD alternates between sparse coding and dictionary updating in every iteration. Specifically, in the sparse coding stage, Orthogonal Matching Pursuit (OMP) [PRK93] is used to find a sparse representation of the training set, while in the dictionary updating stage, the columns of the dictionary are updated sequentially by computing the singular value decomposition of the sparse-coding residual matrix. This method is reported to be better than the naive method of simply computing the least-squares solution for D with C fixed. We refer readers to [AEB06] for details.
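To make the alternation concrete, here is a compact, unoptimized sketch of a greedy OMP coder and one K-SVD iteration; the function names and the simplified per-atom update are our own illustration, not the reference implementation of [AEB06]:

```python
import numpy as np

def omp(D, x, T):
    """Greedy Orthogonal Matching Pursuit: code x with at most T atoms of D."""
    support, r = [], x.copy()
    c_sub = np.zeros(0)
    for _ in range(T):
        if np.linalg.norm(r) < 1e-12:
            break                                   # x already explained
        support.append(int(np.argmax(np.abs(D.T @ r))))
        c_sub, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        r = x - D[:, support] @ c_sub
    c = np.zeros(D.shape[1])
    c[support] = c_sub
    return c

def ksvd_step(D, X, T):
    """One K-SVD iteration: sparse-code all poses, then refit atoms one by one."""
    C = np.column_stack([omp(D, X[:, i], T) for i in range(X.shape[1])])
    for j in range(D.shape[1]):
        users = np.nonzero(C[j])[0]                 # poses that use atom j
        if users.size == 0:
            continue
        # Residual with atom j's contribution removed, on its users only.
        E = X[:, users] - D @ C[:, users] + np.outer(D[:, j], C[j, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, j] = U[:, 0]                           # best rank-1 refit of atom j
        C[j, users] = s[0] * Vt[0]
    return D, C
```

Each atom update is the rank-1 SVD of the residual restricted to the poses that use the atom, which is the essence of the dictionary-updating stage described above.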
3.2 Pose synthesis problem
Now, assuming that the pose dictionary D is given, we propose the following model for pose reconstruction:

min_{q, c, s, r} ‖Dc − s·f(q)‖₂² + λ₁‖W(R_r(x̃₀) − Dc)‖₂² + λ₂‖Λr‖₂²   (4)

s.t. s > 0,   (5)

‖c‖₀ ≤ T   (6)
In the objective (4), the first term measures the difference between the sparse representation Dc and the forward-kinematics function f(q) scaled by a positive factor s. The scale is applied to all 3 dimensions in Euclidean space to maintain the skeleton scaling ratio. The constraint (6) guarantees the sparsity of c in the solution.

The second term measures the sparse coding error of the input under a rigid-body rotation, where r is the rotation parameter. The notation R_r(·) denotes a 3D rotation of a vector that is the concatenation of a set of 3D points. x₀ is the input pose and x̃₀ is the root-shifted version of x₀. W is a diagonal matrix whose diagonal entries are either 1 or 0, indicating whether the corresponding entry of the input pose is available or not. Through the introduction of W, we allow the input observation to be incomplete while maintaining the integrity of the formula for complete observations by setting W to the identity matrix. This conveniently models the user's constraints specifying the fixed and the moving (or missing) joints.

The final term provides a prior constraint on the rotation parameters, where the diagonal matrix Λ assigns a weight to each of the 3 rotation parameters. Usually, the weight on the second rotation parameter (rotation around the y-axis) shall be larger than the other two, as this rotation is usually more common.
By solving this problem, we find a pose that on the one hand stays close to the input pose subject to a similarity transform, and on the other hand admits a sparse representation given the learned dictionary. The input pose can be incomplete or corrupted by noise, and it can also consist of 2D point clouds obtained from an image, as shown in our experiments in the next section.
The optimization variables in the problem are the output pose q, the sparse coefficients c, the rotation parameters r and the positive scaling s. To solve the problem, we first find c and r alternately: in each iteration we fix r and find c using the OMP algorithm, and then fix c and find r by gradient descent. Based on the c and r found, we then calculate the scaling by an algorithm referred to as Pose Normalization, after which we finally determine the output pose by the Jacobian method. The framework for solving the problem is shown in figure 1, and the optimization details are presented in the next subsection.
3.3 Solving the pose synthesis problem
To efficiently solve the problem, we first assume that, given c, we can find a positive s and a q such that Dc ≈ s·f(q) (approximately) holds. Substituting this into (4), we arrive at the following:

min_{c, r} ‖W(R_r(x̃₀) − Dc)‖₂² + λ₂‖Λr‖₂²   (7)

s.t. ‖c‖₀ ≤ T   (8)
We use the alternating minimization framework to solve problem (7). More specifically, we first solve the sparse coding problem by fixing r:

min_c ‖W(R_r(x̃₀) − Dc)‖₂²   (9)

s.t. ‖c‖₀ ≤ T   (10)

where R_r⁻¹(·) denotes the inverse of the rigid-body rotation R_r(·). Let y = R_r(x̃₀); then the above problem is equivalent to solving

min_c ‖y_Ω − D_Ω c‖₂²  s.t. ‖c‖₀ ≤ T   (11)

where D_Ω is the extraction of the rows of D that correspond to the nonzero diagonal entries of W, and the same goes for y_Ω. This problem is solved by OMP.
We then find the rotation parameters by fixing the sparse coefficients c. That is, we solve the following unconstrained subproblem:

min_r ‖W(R_r(x̃₀) − Dc)‖₂² + λ₂‖Λr‖₂²   (12)

The gradient information of (12) can be used to solve this problem. Note again that the notation R_r(·) denotes the operation that successively rotates the pose by the three rotation angles around the x, y and z axes. The gradient of the above objective function is then given by

∂/∂r_k = 2⟨W(R_r(x̃₀) − Dc), W(∂R_r/∂r_k)(x̃₀)⟩ + 2λ₂(ΛᵀΛr)_k, k = 1, 2, 3   (13)

where ⟨·,·⟩ denotes the inner product.
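A minimal sketch of this rotation subproblem follows. For brevity it estimates the gradient by central differences rather than the analytic form of the gradient, and uses a crude backtracking step size; the per-joint weight vector `w` stands in for the diagonal matrix W, and all names are illustrative:

```python
import numpy as np

def rot(r):
    """3x3 rotation R_z(r[2]) @ R_y(r[1]) @ R_x(r[0]), applied to every joint."""
    a, b, c = r
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(c), -np.sin(c), 0], [np.sin(c), np.cos(c), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def fit_rotation(x, target, w, lam, steps=300, lr=0.1):
    """Descend on the three global rotation angles of pose x toward target."""
    J, tgt = x.reshape(-1, 3), target.reshape(-1, 3)
    def obj(r):
        diff = w[:, None] * (J @ rot(r).T - tgt)    # w masks/weights joints
        return (diff ** 2).sum() + lam * (r ** 2).sum()
    r = np.zeros(3)
    for _ in range(steps):
        g = np.zeros(3)
        for k in range(3):                          # central-difference gradient
            e = np.zeros(3); e[k] = 1e-5
            g[k] = (obj(r + e) - obj(r - e)) / 2e-5
        trial = r - lr * g
        if obj(trial) < obj(r):
            r = trial
        else:
            lr *= 0.5                               # crude backtracking
    return r
```

An analytic implementation would replace the finite differences with the per-axis rotation derivatives from the gradient expression above.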
We alternately solve the above two subproblems (9) and (12) until convergence is reached. Once we have found the final sparse coefficients c* and rotation parameters r*, we can determine the joint angles, denoted as a vector q, by solving the IK problem:

min_{q, s} ‖Dc* − s·f(q)‖₂²  s.t. s > 0   (14)
Because of the involvement of the Jacobian matrix, the Hessian for problem (14) is difficult to obtain. Moreover, the Jacobian method, gradient descent or quasi-Newton methods seem to be less efficient for this problem because of the unknown arbitrary scaling s. Since we already know the lengths of all bones in our case, we can leverage this knowledge to determine the normalized pose x̂. This process is referred to as Pose Normalization and is presented in Algorithm 1. Note that the normalization scheme is not simply normalizing the pose vector in the ℓ₂-norm sense.
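The following sketch illustrates the idea behind pose normalization under our reading of the text: each bone direction is taken from the sparse reconstruction while its length is reset to the known value, which also removes the arbitrary global scale. The `parents`/`lengths` arrays are assumed skeleton metadata, not the paper's exact Algorithm 1:

```python
import numpy as np

def normalize_pose(y, parents, lengths):
    """Reset every bone of pose y (J x 3) to its known length, keeping directions.

    parents[j]: parent joint index of joint j (-1 for the root).
    lengths[j]: known length of the bone from parents[j] to joint j.
    Joints are assumed listed in topological order (parent before child).
    """
    out = np.zeros_like(y, dtype=float)
    for j in range(len(parents)):
        p = parents[j]
        if p < 0:
            out[j] = y[j]                        # keep the root where it is
            continue
        d = y[j] - y[p]
        d = d / np.linalg.norm(d)                # direction from the reconstruction
        out[j] = out[p] + lengths[j] * d         # known bone length, same direction
    return out
```

Because every bone is rebuilt at its true length, the output is independent of any uniform scaling of the input, consistent with the remark that this is not a simple ℓ₂ normalization.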
After finding the normalized pose x̂, we can apply the Jacobian method [GM85] to find q by solving the following nonlinear system:

f(q) = x̂   (15)
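To make the damped Jacobian iteration concrete, here is a toy version of solving f(q) = x̂ on a planar two-link chain, a stand-in for the full skeleton's forward kinematics; the numerical Jacobian and the damping factor are illustrative choices, not the paper's settings:

```python
import numpy as np

def fk(theta, l=(1.0, 1.0)):
    """Forward kinematics of a planar two-link chain: angles -> end-effector."""
    a, b = theta
    return np.array([l[0] * np.cos(a) + l[1] * np.cos(a + b),
                     l[0] * np.sin(a) + l[1] * np.sin(a + b)])

def jacobian(theta, h=1e-6):
    """Numerical Jacobian of fk by central differences."""
    J = np.zeros((2, 2))
    for k in range(2):
        e = np.zeros(2); e[k] = h
        J[:, k] = (fk(theta + e) - fk(theta - e)) / (2 * h)
    return J

def solve_ik(target, theta0, lam=1e-3, iters=200):
    """Damped pseudoinverse update: dq = J^T (J J^T + lam I)^-1 (target - fk(q))."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        r = target - fk(theta)
        J = jacobian(theta)
        theta += J.T @ np.linalg.solve(J @ J.T + lam * np.eye(2), r)
    return theta
```

With a complete, bone-length-consistent target such as x̂, this kind of iteration typically converges quickly, which matches the convergence remark below.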
Convergence and complexity By breaking the pose synthesis problem (4) into subproblems (7) and (14), we greatly reduce the problem complexity. The assumption behind this breakdown is that the residual of the first term in (4) diminishes, which corresponds to giving that term a large weight; under this setting the assumption holds and the breakdown makes sense. The convergence of problem (7) is guaranteed, as in each iteration both OMP and gradient descent decrease the objective value and the objective (7) is bounded below. The convergence of (14) is also guaranteed, as the pose normalization algorithm is deterministic with constant complexity, and the Jacobian method with a complete and normalized target pose usually converges within a small number of iterations in our experiments.
4 Applications and Experiments
4.1 Experimental setting
The training samples we used are obtained from the CMU motion capture website (http://mocap.cs.cmu.edu:8080/allasfamc.zip). We manually trim the toes and fingers from the pose data. These data are originally in a 62-dimensional angle space, and they are trimmed to 46 dimensions so that the resulting skeletons contain only significant DOFs, as in [WC11]. When converted to Euclidean space, the skeleton model has a set of joints, each with three-dimensional coordinates, so the total dimension of a pose is three times the number of joints. We pre-shift all the training samples to be rooted at the origin and set the global orientation to zeros. This corresponds to setting the first six components of the angle vector to zeros.
We determine the size of the pose dictionary by the following procedure: given a target learning error, we randomly sample a set of poses as the pose dictionary and use it to measure the sparse coding error of the training set. If the error is within a constant factor of the target, we stop and use the currently sampled poses as the initialization for the dictionary learning algorithm; otherwise, we increase the number of sampled poses and continue the search. The acceptance factor is set slightly above one because dictionary learning can usually decrease the error further, as we found in our experiments.
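A sketch of this dictionary-size search follows. The doubling schedule and the acceptance factor `rho` are assumed parameters (the paper does not state them), and the sparse-coding error routine is passed in as a function so the sketch stays self-contained:

```python
import numpy as np

def search_dict_size(X, target_err, coding_error, rho=1.5, n0=64, seed=0):
    """Double the number of sampled atoms until the coding error is acceptable.

    coding_error(D, X): mean squared sparse-coding error of X under D (e.g. OMP).
    rho > 1 accepts a slightly larger sampled error, anticipating the further
    reduction that dictionary learning achieves; rho and n0 are assumed values.
    Returns the sampled dictionary, to be used as the learning initialization.
    """
    rng = np.random.default_rng(seed)
    n = n0
    D = None
    while n <= X.shape[1]:
        cols = rng.choice(X.shape[1], size=n, replace=False)
        D = X[:, cols] / np.linalg.norm(X[:, cols], axis=0)   # unit-norm atoms
        if coding_error(D, X) <= rho * target_err:
            break                                             # size is sufficient
        n *= 2
    return D
```

Because the error is measured directly as a mean square error in Euclidean space, this stopping test is easy to interpret, which is the intuitiveness argued for earlier in the paper.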
4.2 Largescale Comparison
In the large-scale comparison, we use the whole database from CMU, which sums up to 4150384 poses, covering a variety of motion styles ranging from basic types such as walking and running to more complex types such as basketball and golf. We randomly sample a portion of the poses for training all models and use the rest for testing. In training, we set the cardinality upper bound T on the sparse coefficients. The resulting pose dictionary consists of the atoms selected by the procedure described above.
We test our model and the other models on the testing set, which consists of 2075280 poses. These models are MFA, the model with a single Gaussian prior, the model that first clusters the data and then builds a Gaussian prior for each cluster (CG), LPCA and PCA. The testing scheme is as follows.
As we mentioned, the existing models can be generalized to (1) (except for PCA, which will be discussed soon), in which the trade-off parameter is important in determining the resulting pose: its value balances the prior against the likelihood. If the noise level is high, the prior should be trusted more than the likelihood, and the parameter should thus be decreased. The same goes for the corresponding weight in our model. Therefore, the trade-off parameter for all other models and for our synthesis model is chosen by brute-force search for a fair comparison. Specifically, to find an approximately best value, we first randomly sample about 2000 poses from the training set and use them to select the best value within a proper interval. The best value for each model is then used on the whole testing set.
For MFA, we use the same setting as stated in [WC11], except for the trade-off parameter, which was not given originally and is found by brute-force search. The cluster number for CG and the neighbourhood size for LPCA are set to fixed values. For PCA, we use the first several principal components such that most of the energy of the corresponding eigenvalues is preserved. Since SIK is too computationally demanding, we have omitted it from this large-scale comparison.
We test the performance of denoising and pose completion. For denoising, we test two types of noise: dense and sparse Gaussian noise. For dense Gaussian noise, we generate standard Gaussian noise and add it to the testing set. For sparse Gaussian noise, we generate standard Gaussian noise and randomly corrupt a portion of the joints. The mean square error (MSE) for each recovered pose is calculated, and the average MSE over the whole testing set is shown in Table 1. We also test the performance of completion when only a small portion of the joints is observed. Specifically, the inputs are the 3D coordinates of joint IDs 16, 20, 19, 23, 5 and 9 (see figure 2 for the joint ID map). This missing pattern is the same as that in [WC11]. The comparison result is shown in Table 1. For a visual instance of the large-scale comparison, see figure 3.
Task  Our model  MFA  Gaussian  LPCA  CG  PCA
Dense noise  0.12  0.26  0.25  0.20  0.95  3.34
Sparse noise  0.05  0.15  0.14  1.02  0.17  1.36
Completion  0.01  0.24  0.30  0.80  0.20  4.11
Subject  No. Frames  Our model  MFA  Gaussian  LPCA  CG  SIK  PCA
07  2161  0.02  0.13  0.08  0.07  0.02  0.70  0.07
09  769  0.03  0.08  0.10  0.06  0.06  1.26  0.09
63  7529  0.07  0.41  2.44  0.35  0.34  3.90  0.41
102  4252  0.10  1.39  0.17  0.08  0.11  1.37  1.28
Subject  No. Frames  Our model  MFA  Gaussian  LPCA  CG  SIK  PCA
07  2161  0.03  0.08  0.05  0.07  0.19  0.26  0.08
09  769  0.04  0.05  0.06  0.06  0.05  0.70  0.05
63  7529  0.03  0.12  0.47  0.09  0.10  0.99  0.36
102  4252  0.05  0.08  0.09  0.07  0.08  1.34  1.09
Subject  No. Frames  Our model  MFA  Gaussian  LPCA  CG  SIK  PCA
07  2161  0.07  0.11  0.23  0.25  0.95  0.31  0.07
09  769  0.09  0.09  0.26  0.26  0.25  0.74  0.05
63  7529  0.07  0.18  0.26  0.18  0.21  3.90  0.53
102  4252  0.09  0.24  0.28  0.25  0.27  1.48  1.30
Even though none of the models considered in this paper take motion dynamics into account in the training stage, we can still compare their performance on motion completion by applying the completion algorithm to each (incomplete) pose in the motion, as this provides a good reflection of the performance of pose completion. This comparison is done on a running motion and the result is shown in figure 6. Our model outperforms the other models in that it better preserves the pose structure of the upper body (see the figure caption for more details).
4.3 Smallscale Comparison
Similar to the above large-scale comparison, we also test the performance of each model when learning from small datasets. We choose four subjects from the CMU mocap database: 07 (walking), 09 (running), 63 (golf) and 102 (basketball). These subjects are representative, as they cover different styles of motion and their sizes vary from 1538 to 15079 poses. For each subject, we randomly sample a portion of the poses for training and use the rest for testing. The training and testing schemes are the same as in the large-scale comparison. In the training stage, the setting for SIK is the same as mentioned in [GMHP04]; for the other models, the settings are similar to the above. We again test the performance of completion and denoising under the two types of noise. The results for these three tasks are shown in Tables 2, 3 and 4 respectively. As we can see, our model outperforms the other models on all three tasks even for small datasets.
4.4 Interactive character posing
We provide a real-time application of our model in interactive character posing. The user interface is implemented in C++ and we use the pose dictionary learned in the large-scale comparison for pose synthesis. We consider two kinds of input here. The free-dragging interface provides a freely-edited complete pose as input. Pose completion takes a set of 2D or 3D points as input and reconstructs the whole pose. Other inputs are possible, as long as they can fit into the model, perhaps after some necessary preprocessing.
Free-dragging A common scenario is that when the user drags one or multiple joints of the skeleton, the computer is required to respond to this drag and create a new pose. Chances are that the edited pose looks as if it were corrupted by noise if the user is a novice. To synthesize a new pose based on the corrupted pose and the pose dictionary, the synthesis problem is solved with the weight matrix set to the identity matrix. This corresponds to setting all joints of the input pose as soft constraints.
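The free-dragging step can be stated compactly as a weighted sparse coding problem; the symbols here ($\mathbf{x}$ for the edited pose, $D$ for the pose dictionary, $W$ for the diagonal joint-weight matrix, $T$ for the sparsity level) are our own shorthand for this sketch:

```latex
\min_{\mathbf{c}} \; \bigl\| W(\mathbf{x} - D\mathbf{c}) \bigr\|_2^2
\quad \text{s.t.} \quad \|\mathbf{c}\|_0 \le T
```

With $W = I$, every joint contributes equally as a soft constraint, and the synthesized pose is $D\mathbf{c}^\star$.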
As our model is trained from pose data in Euclidean space, it is well suited for interactive character posing, in which the user can arbitrarily modify the pose without worrying about bone-length constraints and angle limits. This provides great convenience for the user, who can now move the joints anywhere. After the user finishes the modification, our model synthesizes natural poses that satisfy the user's intended style and corrects all the violations. See Figure 4 for examples.
Pose completion In the pose completion problem, we infer the whole pose when only a portion of the joints is observed. It turns out that our model can be conveniently adapted to solve this problem even when the model is trained on full-body pose data. To do this, we simply set the entries of the weight matrix corresponding to the joints that we want to treat as observed (fixed) to one and the rest to zero. With this weight matrix, we can conveniently incorporate 2D inputs: given a picture containing a pose, the user can label the joints with their 2D coordinates in the picture and reconstruct a 3D pose. We give two examples in Figure 5.
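As an illustration (the function and variable names below are our own, not the paper's), completion amounts to sparse-coding the observed coordinates against the corresponding rows of the dictionary, then reconstructing with the full dictionary:

```python
import numpy as np

def complete_pose(D, x_obs, mask, k):
    """Reconstruct a full pose vector from a partial observation.
    `mask` is a boolean vector marking which entries of the pose are
    observed; the sparse code is fitted on the observed rows of the
    dictionary only, then applied to the full dictionary."""
    D_obs = D[mask]                      # rows for the observed entries
    residual = x_obs.astype(float).copy()
    support = []
    for _ in range(k):
        # greedily pick the atom most correlated with the residual
        idx = int(np.argmax(np.abs(D_obs.T @ residual)))
        if idx not in support:
            support.append(idx)
        # jointly re-fit the selected atoms by least squares
        sol, *_ = np.linalg.lstsq(D_obs[:, support], x_obs, rcond=None)
        residual = x_obs - D_obs[:, support] @ sol
    return D[:, support] @ sol           # predict all entries
```

The same routine handles 2D inputs by treating each labelled screen coordinate as an observed entry of the pose vector.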
5 Discussions and conclusions
Denoising and completion We have compared our model with existing ones on denoising and completion performance. One may think that denoising is irrelevant since motion capture data are usually 'clean'. This is perhaps true for the already-available databases. During the motion capture process, however, denoising is necessary because of measurement errors and sensor failures [RRP97, LC10]. Moreover, for interactive character posing, the pose edited by the user is usually noisy in the sense that it is inconsistent with the training set. We can view the process of pose editing as a measurement of the user's intentions, which will always introduce measurement noise. Apart from dense noise, we have also considered sparse noise. This is meaningful because in the motion capture process, noise can be sparse due to errors introduced by a few sensors. Similarly, in character posing, the user may only edit a few joints, making the measurement noise sparse. Pose completion is also useful, not only for dealing with motion capture data when the measurements are incomplete, but also in character posing to account for the user's constraints.
Sparsity The choice of the sparsity level in the pose synthesis stage depends on the noise level; it reflects our prior knowledge of the properties of the noise. If we believe that the noise level is high, we can reduce it; otherwise, we can increase it so that the synthesized pose better approximates the input. This offers a trade-off between satisfying the user's exact constraints (which may result in an invalid pose) and synthesizing a realistic and natural pose.
Combining dictionaries Our synthesis model is flexible in that it can combine dictionaries that are learned separately by simply concatenating all sub-dictionaries. This provides a convenient way to accommodate a large-scale training set in the training stage. In terms of complexity, K-SVD scales with the size of the training set. By using fast clustering algorithms such as k-means, we can cheaply divide the training set into smaller subsets before applying K-SVD to each of them. In this way, the overall training complexity is reduced. Since K-SVD is a generalization of k-means, this approximation is analogous to a hierarchical version of k-means. We use this approach for the large-scale training.
Physical constraints The only physical constraint we consider in this paper is bone length. Angle limits are not considered here because we found that solving the problem rarely violates them. However, they can be incorporated into the subproblem if necessary, and the problem can then be solved, for example, similarly to [ZB94].
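The hierarchical training scheme above (cluster with k-means, learn a sub-dictionary per cluster, concatenate) can be sketched as follows; `learn_dictionary` is a caller-supplied stand-in for K-SVD, which we do not reimplement here:

```python
import numpy as np

def kmeans_labels(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: assign each training pose to one of
    k clusters (a stand-in for any fast clustering routine)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def learn_subdictionaries(X, k, learn_dictionary):
    """Cluster the training set, learn one sub-dictionary per cluster
    (e.g. with K-SVD), and concatenate the results column-wise."""
    labels = kmeans_labels(X, k)
    subs = [learn_dictionary(X[labels == j])
            for j in range(k) if np.any(labels == j)]
    return np.hstack(subs)
```

Because synthesis only ever sparse-codes against dictionary columns, the concatenated dictionary can be used exactly like one learned in a single pass.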
Connection to subspace models By setting the sparsity level to a large number and imposing extra orthogonality constraints on the dictionary, the pose dictionary becomes the basis matrix obtained from subspace models, and the pose synthesis problem is almost the same as the PCA model, which optimizes the pose in the PCA subspace (except that we model the pose in Euclidean space). From this point of view, our model can be seen as a generalization of the PCA subspace model.
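This special case is easy to check numerically: with an orthonormal basis in place of the dictionary and an unconstrained (dense) code, synthesis reduces to projection onto the PCA subspace. The toy data and names below are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
poses = rng.standard_normal((500, 30))   # toy "training poses"
mean = poses.mean(axis=0)

# orthonormal "dictionary": the top principal components
U, _, _ = np.linalg.svd((poses - mean).T, full_matrices=False)
B = U[:, :10]

x = rng.standard_normal(30)              # an input pose
code = B.T @ (x - mean)                  # dense, unconstrained code
x_hat = mean + B @ code                  # = plain PCA projection
```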
Connection to compressed sensing In compressed sensing [Don06], the random sensing matrix plays an important role. Our model is related to compressed sensing except that we set the sensing matrix to be square. This sensing matrix then has no effect, as we can take its inverse and remove it from the model. That is, we do not perform any reduced measurement on the pose. This is because the input pose may already be incomplete, as indicated by the weight matrix. Introducing a (fat) sensing matrix would complicate the (incomplete) measurement and make it more difficult to recover the pose.
Connection to nearest-neighbour Although similar in some sense, our approach is not a nearest-neighbour (NN) algorithm. First, our model is a parametric model, while the NN algorithm is not. Second, although the OMP algorithm used in the sparse coding stage is a greedy algorithm that resembles NN, it is in fact a greedy algorithm for solving the sparse coding problem. Other algorithms such as linear programming, shrinkage and interior-point methods can also be used; however, we find that OMP is more efficient in our case.
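For concreteness, a minimal sketch of the OMP greedy loop [PRK93] (not the paper's implementation):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily approximate x with a
    k-sparse combination of the columns (atoms) of dictionary D."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # jointly re-fit all selected atoms by least squares --
        # this re-fitting step is what distinguishes OMP from a
        # simple nearest-neighbour lookup
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef
```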
Conclusion In this paper, we have proposed a model for articulated character posing. We have shown that our model can be trained to learn the pose dictionary from a large-scale training set. We have also demonstrated how to apply our model to the denoising and completion problems, and provided UI examples showing how to use our model for character posing. Experiments have shown that our model outperforms the existing models in pose denoising and completion.
One limitation of our model is that, to achieve a small learning error, the pose dictionary may need to be large when learning from a large dataset. This could be a problem for applications on devices with limited memory. Nevertheless, our model is currently designed for applications on personal computers.
Acknowledgement
This project is supported by the Faculty Research Grant of Department of Computer Science, Hong Kong Baptist University.
References
 [AEB06] Aharon M., Elad M., Bruckstein A.: K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. Signal Processing, IEEE Transactions on 54, 11 (Nov. 2006), 4311–4322.
 [BH00] Brand M., Hertzmann A.: Style machines. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques (New York, NY, USA, 2000), SIGGRAPH ’00, ACM Press/AddisonWesley Publishing Co., pp. 183–192.
 [CH05] Chai J., Hodgins J. K.: Performance animation from lowdimensional control signals. ACM Trans. Graph. 24 (July 2005), 686–696.
 [Don06] Donoho D.: Compressed sensing. Information Theory, IEEE Transactions on 52, 4 (2006), 1289–1306.
 [GH96] Ghahramani Z., Hinton G.: The em algorithm for mixtures of factor analyzers. University of Toronto Technical Report (1996).
 [GM85] Girard M., Maciejewski A.: Computational modeling for the computer animation of legged figures. ACM SIGGRAPH Computer Graphics 19, 3 (1985), 263–270.
 [GMHP04] Grochow K., Martin S. L., Hertzmann A., Popović Z.: Stylebased inverse kinematics. In ACM SIGGRAPH 2004 Papers (New York, NY, USA, 2004), SIGGRAPH ’04, ACM, pp. 522–531.
 [GMW81] Gill P., Murray W., Wright M.: Practical optimization.
 [Kal08] Kallmann M.: Analytical inverse kinematics with body posture control. Computer Animation and Virtual Worlds 19, 2 (2008), 79–91.
 [Law04] Lawrence N.: Gaussian process latent variable models for visualization of high dimensional data. Advances in Neural Information Processing Systems 16 (2004), 329–336.
 [LC10] Lou H., Chai J.: Example-based human motion denoising. Visualization and Computer Graphics, IEEE Transactions on 16, 5 (Sept.–Oct. 2010), 870–879.
 [LFAJ10] Li Y., Fermuller C., Aloimonos Y., Ji H.: Learning shiftinvariant sparse representation of actions. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on (june 2010), pp. 2630 –2637.
 [LWS02] Li Y., Wang T., Shum H.Y.: Motion texture: a twolevel statistical model for character motion synthesis. In Proceedings of the 29th annual conference on Computer graphics and interactive techniques (New York, NY, USA, 2002), SIGGRAPH ’02, ACM, pp. 465–472.
 [LYL11] Lai R. Y. Q., Yuen P. C., Lee K. K. W.: Motion capture data completion and denoising by singular value thresholding. In Avis N., Lefebvre S. (Eds.), Eurographics Association, pp. 45–48.
 [PRK93] Pati Y., Rezaiifar R., Krishnaprasad P.: Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Signals, Systems and Computers, 1993. Conference Record of The Twenty-Seventh Asilomar Conference on (1993), IEEE, pp. 40–44.
 [RRP97] Rose B., Rosenthal S., Pella J.: The process of motion capture: Dealing with the data. In Computer Animation and Simulation (1997), vol. 97.
 [SHP04] Safonova A., Hodgins J. K., Pollard N. S.: Synthesizing physically realistic human motion in lowdimensional, behaviorspecific spaces. In ACM SIGGRAPH 2004 Papers (New York, NY, USA, 2004), SIGGRAPH ’04, ACM, pp. 514–521.
 [WC11] Wei X., Chai J.: Intuitive interactive humancharacter posing with millions of example poses. Computer Graphics and Applications, IEEE 31, 4 (julyaug. 2011), 78 –88.
 [WTR11] Wu X., Tournier M., Reveret L.: Natural character posing from a large motion database. Computer Graphics and Applications, IEEE 31, 3 (mayjune 2011), 69 –77.
 [WYG09] Wright J., Yang A., Ganesh A., Sastry S., Ma Y.: Robust face recognition via sparse representation. Pattern Analysis and Machine Intelligence, IEEE Transactions on 31, 2 (feb. 2009), 210 –227.
 [YWHM08] Yang J., Wright J., Huang T., Ma Y.: Image superresolution as sparse representation of raw image patches. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on (june 2008), pp. 1 –8.
 [ZB94] Zhao J., Badler N. I.: Inverse kinematics positioning using nonlinear programming for highly articulated figures. ACM Trans. Graph. 13 (October 1994), 313–336.