1 Introduction
Humans grasp and manipulate objects with their hands. Modeling the 3D poses and surfaces of human hands is important for numerous applications such as animation, games, and augmented and virtual reality. Existing hand representations in the literature fall into two paradigms: skeleton representations [19, 40, 41, 28, 71] and mesh-based representations [51, 39]. Even though 3D skeletons are defined in Euclidean space and are easy to interface with deep neural networks, they lack surface information and are therefore not suitable for reasoning about hand-object interaction. In contrast, mesh-based hand models provide surfaces and can thus explicitly reason about physical interactions, such as hand-object manipulation [25, 59]. However, their pose and shape parametrizations are often hard to interpret directly and more difficult to learn in an end-to-end fashion than 3D keypoints.
In this work, we aim to bridge the gap between 3D keypoints and dense surface models. To this end, we propose Hand ArticuLated Occupancy (HALO), a novel hand representation that is driven by keypoint-based skeleton articulation and provides a high-fidelity, resolution-independent neural implicit surface. Importantly, the proposed representation is fully differentiable and can be trained end to end such that volume-based losses can backpropagate gradients to the 3D keypoints. Specifically, HALO makes two key innovations: (1) a fully differentiable skeleton canonicalization algorithm and (2) a shape-aware neural articulated hand representation that is driven directly by the skeleton and naturally affords differentiable reasoning about surfaces and volumetric occupancy.
Differentiable skeleton canonicalization. The core advantage of the HALO model is that its surface representation is fully differentiable and driven by the 3D joint skeleton. Even though a naive iterative fitting procedure such as inverse kinematics could provide the transformations needed for changing from the canonical pose to the target skeleton, it prevents gradient flow from the surface to the keypoints. In addition, due to ambiguities of the twist angle around the finger bones, the optimization procedure may lead to unnatural surfaces. By leveraging biomechanical constraints that ensure plausible poses, we propose a novel differentiable layer that converts 3D keypoint locations to the corresponding bone rotations and translations with respect to a canonicalized pose space in a single forward pass. We call this operation a canonicalization layer. Importantly, this layer does not rely on iterative fitting and can therefore be effectively used in backpropagation-based learning. Furthermore, this layer enables the learning of implicit hand shape representations in the canonical space. This significantly improves the generalization capability of the learned representations across different hand poses and shapes, as shown in Sec. 5.
Skeleton-driven neural articulated hand. Recently, several neural occupancy networks for human body modelling, e.g. [15, 36, 53], have been proposed. Despite demonstrating the feasibility of parameterizing articulated deformations via neural implicit surfaces, all these methods require ground-truth bone transformations as input, making them unable to interface with keypoints. In addition, it has not been demonstrated how these models behave under noisy predictions when the ground-truth transformations are unavailable. Note that making implicit representations applicable to skeleton-driven articulated hands is non-trivial. The key challenge is inferring realistic shapes of unseen hands from skeletons with highly articulated hand poses. Our solution consists of two steps. First, by leveraging biomechanical constraints, we ensure bijective mappings between 3D skeletons and hand surfaces using the canonicalization layer, which greatly simplifies learning the implicit surface for highly articulated hand skeletons. Second, in the canonical space, we learn both identity-dependent and pose-dependent deformations of the hand surface with a set of multi-layer perceptrons that generalize well across different hand shapes and poses. We systematically compare HALO with several baseline methods. The results show that HALO significantly improves accuracy, generality, and visual fidelity over the baselines.
Hand grasp synthesis. To demonstrate the utility of the keypoint-driven implicit surface representation, we deploy HALO for the conditional generation of hands grabbing 3D objects. We propose a novel generation pipeline that synthesizes hand keypoints and yields the neural occupancy of the hand via HALO. By exploiting the differentiable nature of HALO, we design a hand-object interpenetration loss to guide the training of the 3D keypoint generator in an end-to-end fashion. Our experiments show that this loss leads to convincing hand-object contact of the generated hands. Furthermore, compared to a MANO-based method (GrabNet [59]), HALO produces more physically plausible and visually convincing grasps even before refinement, suggesting that the volume-based losses are effective for learning the 3D keypoint generator.
Overall, the contributions include (1) HALO, the first neural implicit hand model that is driven by keypoint-based skeleton articulation, provides smooth neural implicit surfaces, and enables differentiable reasoning about volumetric occupancy; (2) a differentiable skeleton canonicalization layer that maps any skeleton to the canonical pose with a unique, biomechanically valid set of transformations; (3) a realistic human grasp generation framework that leverages the efficient volumetric occupancy checks enabled by HALO.
2 Related Work
Hand pose and mesh estimation.
Hand pose estimation is a long-standing problem and several learning-based approaches have been introduced. These approaches generally involve predicting 3D keypoint locations [13, 58, 43, 18, 28, 8, 40, 60, 66, 16, 57, 41, 71, 54, 64, 19], regressing MANO [51] parameters [1, 4, 26, 2, 25, 68], or directly predicting the full dense surface of the hand [20, 30, 39, 61]. The methods that directly predict 3D keypoints usually achieve better performance; however, they do not yield a dense surface, which is crucial for hand interaction. Iteratively fitting a template-based model such as MANO to the keypoints could recover the dense surface, but it also makes the process non-differentiable [46, 44, 62]. Alternatively, the dense surface can also be estimated from 3D or 2D keypoints [70, 11, 65]. However, such estimation could result in a change of hand pose from the input keypoints. In contrast, our model produces a hand surface that faithfully respects the input pose and allows surface or volumetric losses to backpropagate directly to the keypoints.

Hand representation. The surface of 3D hands can be represented explicitly or implicitly. The most common template-based approaches such as MANO [51] induce a prior over poses and shapes in the learned parameter space for regularization. However, using the learned parameters also increases the learning complexity, as these features do not correspond directly to features in the inputs, such as visible joints. In [1, 4, 25, 26, 68], the MANO parameters are predicted directly using additional weak supervision such as hand masks [1, 68] or 2D annotations [4, 68]. Another way of representing an explicit 3D hand is to directly store the dense vertex locations of the MANO template [20, 30, 38]. While being more generalizable by avoiding constraints on the parameter space, these approaches require corresponding, dense 3D annotations, which can be difficult to acquire. Our work differs in that we can recover dense hand surfaces from 3D keypoints, eliminating the need for learning model parameters or predicting dense surface points.
Implicit representation. Several works represent object shapes by learning an implicit function using neural networks [10, 14, 21, 22, 35, 47, 55, 37], which allows for the modeling of arbitrary object topologies with dynamic resolution. Many approaches for learning such implicit functions from various input types have also been proposed [32, 33, 45, 48, 52, 56]. These works focus on rigid objects and do not permit shape deformation. Recently, interest has also grown in learning articulated implicit functions for the human body [15, 36, 63, 3, 27]. NASA [15] represents human bodies using a set of implicit functions, but the model is limited to a specific body shape. LEAP [36] proposes to learn inverse linear blend skinning functions for multiple body shapes; however, it relies on ground-truth bone transformation matrices instead of 3D joint locations. To the best of our knowledge, there are no implicit hand representations that generalize well to various shapes. Grasping Field [29] learns an implicit function for hand and objects together to represent contact, but treats every posed hand as a rigid object. As a result, the complexity of learning a wide range of poses increases significantly. In this work, we leverage biomechanical constraints of the human hand to learn a novel hand model that takes only a skeleton as input and generalizes to different hand shapes and poses.
Hand-object interaction. There have been many studies of hands interacting with objects in various settings [7, 5, 12, 29, 59, 42, 17, 18, 24, 23, 9, 31]. Recently, the community has begun exploring the task of generating plausible hand grasps given an object, with notable studies including [12], [29], and [59]. GanHand [12] generates grasps suitable for each object in a given RGB image by predicting a grasp type from a grasp taxonomy [17] and its initial orientation, then optimizing for better contact with the object. GrabNet [59] uses a Basis Point Set [49] to represent 3D objects as input to generate MANO parameters. The predicted hand is then fed to a refinement model to improve the contact. Grasping Field [29] learns a signed distance field for both hand surface and object surface in one space, allowing contact to be learned as the regions where the distances to both surfaces are zero. However, the output surface cannot be articulated and requires hand model fitting. Our work differs from others in that we use the proposed hand representation to model the contact while keeping the synthesis task as simple as generating 3D keypoints.
3 HALO: Hand ArticuLated Occupancy
The HALO model is a skeleton-driven neural occupancy function, formally defined as O(p | K) ∈ [0, 1]. Parameterized by neural network weights Θ, it maps a 3D point p to its occupancy value given the hand skeleton represented by a set of 3D keypoint locations K. In this section, we first describe how to convert an arbitrary 3D joint skeleton to the reference canonical pose in a differentiable and consistent manner; then we introduce our simple yet effective neural occupancy networks for hands.
Notations. Given a hand skeleton represented by 3D keypoints K, we denote the flexion/extension and abduction/adduction angles of a bone relative to its parent by θ_f and θ_a, respectively. For simplicity, we refer to them as the flexion and abduction angles. The angle between a palmar bone and its adjacent palmar bone is denoted as θ_s. Lastly, θ_p is the palmar plane angle between the two planes spanned by adjacent palmar bones. We denote the properties of the reference canonical hand with a superscript c. We refer to the Supp. Mat. for further details.
3.1 Canonicalization of 3D Hand Skeleton
Our goal is to learn a neural representation of the surface of human hands in a canonical space. Furthermore, we want to deform this shape based on the spatial configuration of the underlying skeleton, represented by 3D keypoints. To do so, we require a mechanism that allows us to convert the 3D keypoints into valid skeletons in the canonical pose in terms of joint angles. As the keypoints have no notion of the surface, naively converting them to axis-angles does not work due to the unconstrained twist of bones. While twist does not affect the keypoints, it does affect the surface.
We take inspiration from Spurr et al. [57], who define a consistent local coordinate system for each bone to measure bone angles for semi-supervised learning. Our objective is to derive a differentiable mapping layer that (i) provides a means to convert predicted keypoints to the rest pose and back, and (ii) ensures that the skeleton is free of implausible twist that would influence the surface.
Building on [57], we represent each finger bone by two rotation angles, flexion and abduction, relative to its parent bone (Fig. 1). Each bone cannot rotate about itself; thus, there is no twist. However, such a formulation ignores the palm configuration, which is needed for defining the canonical pose. In this work, we propose a method to parameterize the pose of the palm in order to define a consistent canonical pose. We decouple the palmar bone configuration into 1) finger spreading and 2) palm arching. The spreading of fingers is captured via the angles between two adjacent palmar bones. The arching of the palm is defined by the angle between the two planes spanned by three adjacent palmar bones. The resulting palmar region then serves as a frame of reference for the remaining fingers. Please refer to Supp. Mat. Fig. 1 for a visualization.
Converting 3D Keypoints to Bone Transformations. Formally, we seek the unique set of transformations {T_b} that maps the skeleton K to the canonical pose K^c. Given a skeleton, we obtain the set by sequentially performing the following operations. First, we rotate each finger to match our canonical palm pose, which we define as a flat hand with fixed angles between palmar bones. Second, we compute the joint angles and local coordinate systems following [57] (Fig. 1b), which we use to iteratively undo the angles along the kinematic chain (Fig. 1a) to acquire the canonical pose. By combining the transformations from both steps and adjusting for the conversion from keypoints to bone vectors, we obtain a set of transformation matrices
that maps the given skeleton K to our canonical pose K^c. Formally,

K^c = (s^-1 ∘ R^-1 ∘ A ∘ R ∘ P ∘ s)(K).    (1)

Here s is a function that maps the keypoints to bone vectors by translating them to the local origin and scaling them to unit norm; P normalizes the palmar bone and palmar plane angles; R then maps the bones to their local coordinate frames; and A rotates each bone to have the same flexion and abduction angles as the canonical pose. Finally, R^-1 maps each coordinate frame back to the global coordinate system, and s^-1 reverts the bones to their original lengths and translates them to the tips of their parent bones.
This set of transformations is unique for each skeleton pose and allows only biomechanically valid transformations. For details, we kindly refer the reader to our Supp. Mat.
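The twist-free "undo" rotation at the heart of this layer can be illustrated with a small sketch (names and structure here are illustrative, not the paper's actual implementation): the minimal rotation aligning a bone with its canonical counterpart has its axis perpendicular to both directions, so it introduces no rotation about the bone itself.

```python
import numpy as np

def align_rotation(bone, canonical_bone):
    """Minimal rotation mapping `bone` onto `canonical_bone`.

    The rotation axis is perpendicular to both directions, so no
    rotation about the bone itself (i.e. no twist) is introduced --
    the property the canonicalization relies on.
    """
    a = np.asarray(bone, dtype=float)
    a = a / np.linalg.norm(a)
    b = np.asarray(canonical_bone, dtype=float)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)            # rotation axis (unnormalized), |v| = sin(angle)
    c = float(np.dot(a, b))       # cos(angle)
    if np.allclose(v, 0.0):
        if c > 0:                 # already aligned
            return np.eye(3)
        # anti-parallel: 180-degree turn about any axis perpendicular to a
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.allclose(axis, 0.0):
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis = axis / np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])      # cross-product matrix [v]x
    # Rodrigues' formula specialized to aligning two unit vectors
    return np.eye(3) + vx + vx @ vx * (1.0 - c) / float(np.dot(v, v))
```

Composing such rotations along the kinematic chain, together with the palm normalization, yields the per-bone transformations of Eq. 1; because every step is a closed-form expression of the keypoints, gradients flow through it.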
3.2 Neural Occupancy Networks for Hands
Here, we describe how to leverage the unique mapping between the posed skeleton and the canonical skeleton to learn a neural hand representation that generalizes to different shapes with highly articulated poses. We draw inspiration from NASA [15] and explore a similar neural network structure due to its simplicity and efficacy.
NASA [15]. NASA learns an implicit representation O(p | θ) of a human body, conditioned on the pose descriptor θ. Specifically, it defines the implicit surface for each body part separately. Let T_b be the transformation to the canonical pose for bone b; NASA can then be denoted by:

O(p | θ) = max_b { O_b(p | θ) },    (2)

where the pose descriptor θ is defined by the collection of transformation matrices {T_b}, and the occupancy probability of a query point p is derived as the maximum over the child occupancy functions O_b, where each O_b represents the body part of bone b. For a query point p, each child function maps p to its local coordinate system by the transformation matrix T_b, so that the local shape of each body part can be learned. A projection of the pose descriptor θ is additionally fed to each child function to provide global pose information. Essentially, by querying the occupancy value using T_b p, the NASA model learns a template shape and a correction based on the global pose. Note that the bone transformations are assumed to be given. For more details, we kindly refer the reader to [15].

Neural Occupancy Networks for Hands. A naive adaptation of the NASA model to the human hand results in erroneous surface reconstructions, as shown in Sec. 5.1 (Fig. 4). In order to represent hands with highly articulated poses and diverse shapes, we propose to learn the child occupancy functions by conditioning on a shape descriptor β, effectively learning O_b(p | θ, β). We assume that the identity-dependent deformations of the hand are highly correlated with the bones; hence, we propose the use of a collection of bone lengths as the shape descriptor. In particular, we propose a simple yet effective bone length encoder that takes the lengths of individual bones as inputs. We emphasize that under the proposed formulation, we can learn the hand surface using only the keypoints K, as the pose descriptor θ is derived from K by Eq. 1. Our final occupancy function is given by:
O(p | θ, β) = max_b { O_b(p | θ, β) },    (3)

where each implicit function O_b learns the corresponding part shape based on the hand pose descriptor θ and our bone length descriptor β.
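The union-of-parts structure of Eq. 3 can be sketched in a few lines of numpy. This is a toy stand-in, not HALO's trained architecture: the per-part networks below are single random layers, and the learned pose-descriptor projection is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BONES, HIDDEN = 16, 8

# Hypothetical per-part network weights; a trained model would learn
# these (with deeper MLPs and an additional pose-descriptor input).
W1 = rng.normal(size=(N_BONES, HIDDEN, 4))  # input: local point (3) + bone length (1)
W2 = rng.normal(size=(N_BONES, 1, HIDDEN))

def occupancy(p, transforms, bone_lengths):
    """Max over per-part occupancies, each queried in its own bone frame.

    p:            (3,) query point in global coordinates
    transforms:   (N_BONES, 4, 4) global-to-canonical bone transforms
    bone_lengths: (N_BONES,) local shape descriptor
    """
    p_h = np.append(p, 1.0)                        # homogeneous coordinates
    vals = []
    for b in range(N_BONES):
        local = (transforms[b] @ p_h)[:3]          # back-project into bone frame
        feat = np.append(local, bone_lengths[b])   # condition on bone length
        h = np.tanh(W1[b] @ feat)                  # tiny stand-in for MLP_b
        vals.append(1.0 / (1.0 + np.exp(-(W2[b] @ h)[0])))  # sigmoid occupancy
    return max(vals)                               # union of the hand parts
```

The max over parts realizes the union of per-bone shapes; querying each part in its canonical frame is what lets the learned shapes transfer across poses.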
Shape descriptor variations. We investigate two versions of the bone length encoder: a local bone encoder and a global encoder, where the MLP of the global encoder has two linear layers. We follow a similar training strategy as [15]; for more training details, please refer to the Supp. Mat.
3.3 Skeleton-driven Articulated Hand Model
To build a skeleton-driven articulated hand model, we combine the previously described canonicalization layer and the neural hand surface. Specifically, HALO takes the input 3D keypoints and computes the bone transformations for the occupancy networks using the canonicalization layer. As the canonicalization layer is differentiable, the model can be trained end to end and allows volume-based losses from the surface to backpropagate to the keypoints. The overview of HALO is shown in Fig. 2. Note that the bone lengths can also be computed from the keypoints. During inference, only 3D keypoints are needed as input to reconstruct the hand surface.
4 Human Grasps Generation
We show the applicability of the HALO model in the challenging task of grasp generation. Given an object, we aim to generate diverse grasps with natural and plausible hand-object interaction. Our grasp generation pipeline consists of two parts: a 3D keypoint generator based on a variational autoencoder (VAE) and the HALO model for obtaining the hand surface.
HALO-VAE Architecture. The architecture of the HALO-VAE model is illustrated in Fig. 3. During training, the object point cloud is first passed to the object encoder, a modification of PointNet [50] with residual connections [35], to obtain an object latent code. The object latent code is then concatenated with the 3D hand joint locations and passed to the VAE encoder. The decoder reconstructs the 3D hand joint positions conditioned on the hand and object latent representations. From the keypoints, the surface is obtained using HALO through the skeleton canonicalization layer.

The advantages of using HALO are twofold. First, we decouple the complexity of learning the pose, represented by the skeleton, from that of learning the surface that corresponds to the pose. Second, the implicit model enables fast intersection tests between hand and object, which can be used to efficiently compute an interpenetration loss. Combined with the differentiable skeleton canonicalization layer, the interpenetration loss can be used to improve the keypoint generator both in end-to-end training and in post-optimization refinement.
Our grasp generation pipeline is similar to [59] and [29], but with the following key differences. First, in [29], the output is a rigid implicit surface that cannot be articulated. To obtain an animatable hand for downstream tasks, additional MANO model fitting is required. Second, in [59], the grasp generator is trained to produce MANO parameters, which are not directly related to the Euclidean space in which the hand and the object live. The challenge of interfacing the MANO parameters with deep neural networks is reflected in the GrabNet (CoarseNet) [59] results, which will be discussed in the experiments section.
4.1 Learning and Losses
To train the VAE model, we use the following losses: a KL-divergence loss on the hand latent code, an L2 loss on the predicted keypoints, an L1 bone length loss, and bone angle losses. The bone angle losses provide additional supervision for learning the hand structure and consist of 1) the flexion and abduction angles, 2) the angles between adjacent palmar bones, and 3) the angles between palmar planes. The bone angles are the same as those used in Sec. 3. The losses are defined as the L1 angle difference between the prediction and the ground truth.
Interpenetration loss. In addition to the losses on the keypoints, we also use an interpenetration loss on the hand surface to avoid collisions between hand and object. The key idea is to penalize every point inside the object that is also occupied by the hand. Concretely, for a set of points P_o sampled inside the object and the predicted keypoints K̂, the interpenetration loss is defined as:

L_I = Σ_{p ∈ P_o} O(p | Π(K̂), β(K̂)),    (4)

where β(K̂) is the bone length vector for K̂ and Π maps the predicted keypoints to the HALO pose vector using the differentiable transformation matrices in Eq. 1.
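The loss in Eq. 4 is just a sum of occupancy values over object-interior samples, which a short sketch makes concrete (the spherical "hand" below is a toy stand-in for HALO's learned occupancy):

```python
import numpy as np

def interpenetration_loss(object_interior_pts, hand_occupancy):
    """Sum of hand occupancy over points sampled inside the object.
    Points the hand does not occupy contribute ~0, so the loss
    vanishes for collision-free grasps."""
    return float(sum(hand_occupancy(p) for p in object_interior_pts))

# Toy stand-in for HALO's occupancy: a "hand" filling the unit ball.
hand = lambda p: 1.0 if np.linalg.norm(p) < 1.0 else 0.0
```

In the real pipeline the occupancy is differentiable with respect to the keypoints through the canonicalization layer, so this loss trains the keypoint generator directly.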
4.2 Optimization-based Refinement
To demonstrate that the efficient intersection tests enabled by HALO can be used for optimization, we refine the sampled hands by changing the global translation to avoid collisions with the object. The refinement is run for 10 steps with the interpenetration loss term in Eq. 4. The optimization objective is:

t* = argmin_t L_I(K̂ + t),    (5)

where t is a global translation applied to the predicted keypoints K̂.
This simple optimization step aims at refining the contact after the initial prediction of HALO-VAE. It is analogous to the RefineNet in [59], but with an explicit objective to avoid collisions rather than a neural network denoiser.
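A minimal sketch of such a refinement loop follows. All names are illustrative; finite differences stand in for the autodiff gradients that the differentiable occupancy provides in practice.

```python
import numpy as np

def refine_translation(hand_pts, loss_fn, steps=10, lr=0.01, eps=1e-3):
    """Optimize a global translation t of the hand for a fixed number
    of steps to reduce a (smooth) interpenetration loss, mirroring the
    10-step refinement. `loss_fn` takes translated keypoints and
    returns a scalar."""
    t = np.zeros(3)
    for _ in range(steps):
        grad = np.zeros(3)
        for i in range(3):
            d = np.zeros(3)
            d[i] = eps
            # central finite difference along axis i
            grad[i] = (loss_fn(hand_pts + t + d) - loss_fn(hand_pts + t - d)) / (2 * eps)
        t -= lr * grad            # gradient step on the translation only
    return t
```

Restricting the free variable to a single global translation keeps the hand articulation fixed and makes each step a cheap occupancy query.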
5 Experiments
In this section, we assess our skeleton-driven hand model and the grasp synthesis pipeline. First, in Sec. 5.1, we validate the efficacy of HALO as a neural implicit hand model and compare it to the surface baseline [15] and keypoints-to-surface baselines [70, 11]. Second, we show in Sec. 5.2 that HALO can be used effectively in generative tasks that require surface-based reasoning, in the form of grasp synthesis. For more experiments, please see the supplementary material.
5.1 Neural Hand Model
We first evaluate the performance of the proposed implicit surface representation and analyze the effect of the keypoint-to-transformation mapping layer.
Training data. To train our neural occupancy hand model, we utilize MANO [51] hand meshes. Following [15], for each mesh we sample points with two strategies: 1) uniform sampling in the hand bounding box, and 2) sampling on the surface with additional isotropic Gaussian noise. Only the uniformly sampled points are used for evaluation. The associated occupancy value of each query point is computed by casting a ray from the sampled point and counting the number of intersections along the ray. The ground-truth bone transformation matrices are computed along the kinematic chain to transform the template MANO hand into the target pose. The skinning weights are taken from MANO. We use the YouTube3D (YT3D) hands dataset [30] in all our experiments. The YT3D training set contains 50,175 hand meshes of hundreds of subjects performing a wide variety of tasks in 102 videos. The test set covers 1,525 meshes from 7 videos.
Table 1: Surface reconstruction quality.

Method                   IoU ↑   Cham. (mm) ↓   Norm. ↑
NASA [15]                0.896   1.057          0.955
NASA+surf.               0.883   1.177          0.944
NASA+surf.+local b.      0.913   0.884          0.950
HALO (ours)              0.932   0.719          0.959
HALO keypoints (ours)    0.930   0.740          0.959
Table 2: Comparison of keypoint-to-surface methods.

Method                   IoU ↑   Cham. (mm) ↓   MPJPE (mm) ↓
Choi et al. [11]         0.43    4.651          14.1
Zhou et al. [70]         0.54    2.811          7.95
HALO keypoints (ours)    0.93    0.740          0
Evaluation metrics. For 3D surface reconstruction evaluation, we compute the mean Intersection over Union (IoU), the Chamfer-L1 distance, and the normal consistency score [35].
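The first two metrics can be sketched as follows (a simplified illustration; Chamfer conventions vary across papers, and [35] should be consulted for the exact definitions used here):

```python
import numpy as np

def iou(occ_pred, occ_gt):
    """Intersection over Union on boolean occupancy labels of
    uniformly sampled points."""
    inter = np.logical_and(occ_pred, occ_gt).sum()
    union = np.logical_or(occ_pred, occ_gt).sum()
    return inter / union

def chamfer_l1(pts_a, pts_b):
    """Symmetric Chamfer distance between two sampled point sets,
    averaged over both directions (one common variant)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```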
5.1.1 Comparison to implicit surface baseline
Here we investigate the generalization ability of the proposed HALO model to represent articulated hands with various poses and shapes. The results are summarized in Tab. 1.
Baseline. We use the NASA model [15] as our baseline. The NASA model is designed to represent an implicit function of an articulated body. However, by changing the input dimension and the number of part models to match the number of hand parts, it can also be used to represent an articulated hand. We trained the baseline model using the bone transformation matrices taken from MANO and the sampled query points. For details on the implementation and network architecture, we refer to the supplementary.
Surface vertex resampling. In [15], the surface vertices used for enforcing the part models in the skinning loss are the mesh vertices of SMPL [34]. Similarly, we use MANO surface vertices during training. However, we notice that the human-designed mesh often has many more vertices in the areas around the joints, which could cause the part models to bias toward the bone endpoints. Thus, we propose to resample the surface vertices uniformly on the mesh surface. This results in a slight performance degradation, but the bone connections are more natural, with fewer artifacts.
Local and global bone encoders. The bone lengths of a human hand greatly influence the hand shape. Therefore, for the local bone encoder, we add the bone length to the back-projected query point as input to the part model. As shown in Tab. 1, the local bone encoder improves the reconstruction quality both in terms of IoU and Chamfer-L1 distance. We further extend the local bone encoder by considering all bone lengths as input. A concatenated vector of bone lengths is first fed into a small feed-forward neural network to get the global bone feature, which is then concatenated with the query point and the local bone length as input to the part model.

Results. By combining the local and global bone encoders, HALO significantly improves the 3D surface reconstruction quality compared to NASA. As shown in Tab. 1, the IoU increases from 0.896 to 0.932 and the Chamfer-L1 distance decreases from 1.057 mm to 0.719 mm.
We provide a qualitative comparison between NASA and HALO in Fig. 4, confirming the quantitative results. The proposed HALO representation generalizes well to highly articulated poses, whereas the NASA model produces severe artifacts at the connections between parts.
5.1.2 3D keypoints to hand surface
Tab. 1 also shows results from HALO when only 3D keypoints are given as input. The keypoint model achieves surface reconstruction performance comparable to when the ground-truth transformations are given, showing the effectiveness of our method. We show qualitative results in the Supp. Mat. and Fig. 4.
In addition, to evaluate the keypoint-to-surface pipeline, we compare HALO to the equivalent components in [70] and [11], which estimate the hand surface from 3D keypoints. The evaluation is done on the same YouTube3D test set, where the ground-truth 3D keypoints are given as input. As [11] requires both 2D and 3D coordinates as inputs, the 2D keypoints are obtained by projecting the 3D keypoints perpendicular to the palm. For evaluation, we also report the 3D joint error between the predicted hand and the input joints. This metric measures whether the input keypoints are faithfully respected by the models. By design, the HALO model does not change the keypoint locations from input to output and thus has zero error. The comparison in Tab. 2 shows that [70] and [11] change the hand pose and shape in the prediction, while HALO faithfully reconstructs the hand surface according to the given keypoints.
5.2 Grasp Synthesis
To assess the utility of HALO in downstream tasks, we demonstrate our grasp generative model, HALO-VAE.
Dataset. We leverage the recently introduced GRAB dataset [6, 59] and compare our results to GrabNet [59]. We compare both to the initial (coarse) predictions of GrabNet [59] and to the refined results, which match our own two-stage generation process. The test set contains 6 unseen objects. For each object, we fix the object orientation and sample 20 hand proposals from each model.
Physics metrics. Following [69, 67], we evaluate physical plausibility (interpenetration volume and contact ratio) and diversity, and provide results from a perceptual study. To evaluate interpenetration and contact, we measure the ratio of frames in which the hand is in contact with the object and the average interpenetration volume. The volume is calculated by voxelizing the hand and object meshes with 1 mm cubes and counting the number of intersecting cubes.
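The interpenetration-volume metric can be sketched as follows. This is a simplified illustration: arbitrary vectorized indicator functions stand in for voxelized watertight meshes, and only a shared bounding box is discretized.

```python
import numpy as np

def interpenetration_volume(hand_occ, obj_occ, bbox_min, bbox_max, voxel=0.001):
    """Voxelize a shared bounding box (default 1 mm cubes) and count
    voxels occupied by both hand and object; the volume is the count
    times the voxel volume. `hand_occ`/`obj_occ` map (N, 3) arrays of
    voxel corners to boolean occupancy."""
    xs, ys, zs = (np.arange(lo, hi, voxel) for lo, hi in zip(bbox_min, bbox_max))
    grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1).reshape(-1, 3)
    both = hand_occ(grid) & obj_occ(grid)      # voxels inside both shapes
    return both.sum() * voxel ** 3
```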
User study. We asked 75 participants in a forced-alternative-choice perceptual study to 'select the grasp that is more realistic'. For each question, the user is shown 4 views per grasp and forced to select one. We compare all possible combinations on the same object. Each question is assigned to at least 2 participants, totaling 4,800 data points per pair of compared models. To ensure that the grasps from HALO-VAE and GrabNet have the exact same texture, we fit MANO to our generated keypoints for rendering.
Diversity. Following [69], we compute the diversity of the sampled grasps by performing k-means with 20 clusters on all samples, then evaluating the entropy of the cluster assignments and the average cluster size. More diversity results in higher values for both metrics. As features, we use the flattened keypoint locations of the hands after aligning the root joint and the plane spanned by the middle and index palmar bones.
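These diversity metrics can be sketched with a plain-numpy k-means. "Cluster size" is read here as the mean distance of samples to their cluster center, which is one plausible interpretation; the cluster count and exact definitions follow [69].

```python
import numpy as np

def diversity_metrics(features, k=20, iters=50, seed=0):
    """k-means on grasp features, then the entropy of the cluster
    assignment distribution and the mean within-cluster spread.
    Higher values of both indicate more diverse samples."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(iters):                       # Lloyd iterations
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = features[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    counts = np.bincount(assign, minlength=k)
    probs = counts[counts > 0] / counts.sum()
    entropy = float(-(probs * np.log(probs)).sum())
    spread = float(np.mean([np.linalg.norm(features[assign == j] - centers[j], axis=1).mean()
                            for j in range(k) if (assign == j).any()]))
    return entropy, spread
```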
Results.
We first validate the efficacy of the interpenetration loss (Eq. 4). We compare the HALO-VAE models with and without the interpenetration loss. The results show that the interpenetration loss helps in 1) reducing collisions between the objects and the generated hands (Tab. 4, col. 1-2) and 2) largely improving the user preference for the corresponding model (Tab. 4, first row), demonstrating the efficacy of the proposed neural occupancy representation of the articulated hand for reasoning about hand-object interaction.
Next, we compare HALO-VAE with GrabNet [59]. Both HALO-VAE and GrabNet-coarse are CVAE-based generative models and end-to-end trainable; the key difference is that GrabNet-coarse generates MANO model parameters whereas HALO-VAE generates 3D keypoints. As shown in Tab. 4, HALO-VAE outperforms GrabNet-coarse by a large margin in interpenetration volume and sample diversity. Moreover, the HALO-VAE model without the interpenetration loss also compares favorably to GrabNet-coarse, suggesting that the 3D keypoint-based representation is well suited to interface with deep neural networks.
Finally, we compare our optimization-based refinement with GrabNet-refine. To the best of our knowledge, the RefineNet is not trained end-to-end with GrabNet-coarse and is applied for three steps during inference. As shown in Tab. 4, our refined grasps attain a higher user score, suggesting they are more realistic and natural than the grasps refined by GrabNet-refine.
6 Discussion and Conclusion
In this work, we introduce HALO, a novel surface representation for articulated hands that generalizes to different hand poses and shapes. We address the need for ground-truth transformation matrices when inferring 3D hand occupancy by proposing a skeleton canonicalization algorithm that computes valid transformations from 3D keypoints. The experiments show that our proposed hand model outperforms the baseline and can represent a wide range of hand poses and shapes. Finally, we demonstrate that HALO can be used to train an end-to-end grasp generator conditioned on an object, producing hand grasps with natural and realistic interaction. We believe that HALO can be useful in future work attempting to reconstruct the surface of articulated hands directly from images via differentiable rendering, and for downstream tasks that need to perform surface-based computation such as collision detection and response.
7 Acknowledgement
We sincerely thank Shaofei Wang and Marko Mihajlovic for insightful discussions and help with the baselines.
References

 [1] Seungryul Baek, Kwang In Kim, and Tae-Kyun Kim. Pushing the envelope for rgb-based dense 3d hand pose estimation via neural rendering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1067–1076, 2019.
 [2] Seungryul Baek, Kwang In Kim, and Tae-Kyun Kim. Weakly-supervised domain adaptation via gan and mesh model for estimating 3d hand poses interacting objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6121–6131, 2020.

 [3] Bharat Lal Bhatnagar, Cristian Sminchisescu, Christian Theobalt, and Gerard Pons-Moll. LoopReg: Self-supervised learning of implicit surface correspondences, pose and shape for 3d human mesh registration. Advances in Neural Information Processing Systems, 33, 2020.
 [4] Adnane Boukhayma, Rodrigo de Bem, and Philip HS Torr. 3d hand shape and pose from images in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10843–10852, 2019.
 [5] Samarth Brahmbhatt, Cusuh Ham, Charles C Kemp, and James Hays. Contactdb: Analyzing and predicting grasp contact via thermal imaging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8709–8719, 2019.
 [6] Samarth Brahmbhatt, Cusuh Ham, Charles C. Kemp, and James Hays. ContactDB: Analyzing and predicting grasp contact via thermal imaging. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
 [7] Samarth Brahmbhatt, Chengcheng Tang, Christopher D. Twigg, Charles C. Kemp, and James Hays. ContactPose: A dataset of grasps with object contact and hand pose. In The European Conference on Computer Vision (ECCV), August 2020.
 [8] Yujun Cai, Liuhao Ge, Jianfei Cai, and Junsong Yuan. Weakly-supervised 3d hand pose estimation from monocular rgb images. In Proceedings of the European Conference on Computer Vision (ECCV), pages 666–682, 2018.
 [9] Yu-Wei Chao, Wei Yang, Yu Xiang, Pavlo Molchanov, Ankur Handa, Jonathan Tremblay, Yashraj S. Narang, Karl Van Wyk, Umar Iqbal, Stan Birchfield, Jan Kautz, and Dieter Fox. DexYCB: A benchmark for capturing hand grasping of objects. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
 [10] Zhiqin Chen and Hao Zhang. Learning implicit fields for generative shape modeling. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
 [11] Hongsuk Choi, Gyeongsik Moon, and Kyoung Mu Lee. Pose2mesh: Graph convolutional network for 3d human pose and mesh recovery from a 2d human pose. In European Conference on Computer Vision (ECCV), 2020.
 [12] Enric Corona, Albert Pumarola, Guillem Alenyà, Francesc Moreno-Noguer, and Grégory Rogez. Ganhand: Predicting human grasp affordances in multi-object scenes. In CVPR, 2020.
 [13] Martin de La Gorce, David J Fleet, and Nikos Paragios. Model-based 3d hand pose estimation from monocular video. IEEE transactions on pattern analysis and machine intelligence, 33(9):1793–1805, 2011.
 [14] Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey Hinton, and Andrea Tagliasacchi. Cvxnet: Learnable convex decomposition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 31–44, 2020.
 [15] Boyang Deng, JP Lewis, Timothy Jeruzalski, Gerard Pons-Moll, Geoffrey Hinton, Mohammad Norouzi, and Andrea Tagliasacchi. Neural articulated shape approximation. European Conference on Computer Vision (ECCV), 2020.
 [16] Bardia Doosti, Shujon Naha, Majid Mirbagheri, and David J Crandall. Hope-net: A graph-based model for hand-object pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6608–6617, 2020.
 [17] Thomas Feix, Javier Romero, Heinz-Bodo Schmiedmayer, Aaron M Dollar, and Danica Kragic. The grasp taxonomy of human grasp types. IEEE Transactions on Human-Machine Systems, 46(1):66–77, 2015.
 [18] Guillermo Garcia-Hernando, Shanxin Yuan, Seungryul Baek, and Tae-Kyun Kim. First-person hand action benchmark with rgb-d videos and 3d hand pose annotations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 409–419, 2018.
 [19] Liuhao Ge, Hui Liang, Junsong Yuan, and Daniel Thalmann. Robust 3d hand pose estimation in single depth images: from single-view cnn to multi-view cnns. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3593–3601, 2016.
 [20] Liuhao Ge, Zhou Ren, Yuncheng Li, Zehao Xue, Yingying Wang, Jianfei Cai, and Junsong Yuan. 3d hand shape and pose estimation from a single rgb image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 10833–10842, 2019.
 [21] Kyle Genova, Forrester Cole, Avneesh Sud, Aaron Sarna, and Thomas Funkhouser. Local deep implicit functions for 3d shape. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
 [22] Kyle Genova, Forrester Cole, Daniel Vlasic, Aaron Sarna, William T Freeman, and Thomas Funkhouser. Learning shape templates with structured implicit functions. In Proceedings of the IEEE International Conference on Computer Vision, pages 7154–7164, 2019.
 [23] Patrick Grady, Chengcheng Tang, Christopher D Twigg, Minh Vo, Samarth Brahmbhatt, and Charles C Kemp. Contactopt: Optimizing contact to improve grasps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1471–1481, 2021.
 [24] Shreyas Hampali, Mahdi Rad, Markus Oberweger, and Vincent Lepetit. Honnotate: A method for 3d annotation of hand and object poses. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3196–3206, 2020.
 [25] Yana Hasson, Bugra Tekin, Federica Bogo, Ivan Laptev, Marc Pollefeys, and Cordelia Schmid. Leveraging photometric consistency over time for sparsely supervised hand-object reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 571–580, 2020.
 [26] Yana Hasson, Gul Varol, Dimitrios Tzionas, Igor Kalevatykh, Michael J Black, Ivan Laptev, and Cordelia Schmid. Learning joint reconstruction of hands and manipulated objects. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11807–11816, 2019.
 [27] Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, and Tony Tung. Arch: Animatable reconstruction of clothed humans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3093–3102, 2020.
 [28] Umar Iqbal, Pavlo Molchanov, Thomas Breuel, Juergen Gall, and Jan Kautz. Hand pose estimation via latent 2.5d heatmap regression. In Proceedings of the European Conference on Computer Vision (ECCV), pages 118–134, 2018.
 [29] Korrawe Karunratanakul, Jinlong Yang, Yan Zhang, Michael Black, Krikamol Muandet, and Siyu Tang. Grasping field: Learning implicit representations for human grasps. arXiv preprint arXiv:2008.04451, 2020.
 [30] Dominik Kulon, Riza Alp Guler, Iasonas Kokkinos, Michael M. Bronstein, and Stefanos Zafeiriou. Weakly-supervised mesh-convolutional hand reconstruction in the wild. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
 [31] Shaowei Liu, Hanwen Jiang, Jiarui Xu, Sifei Liu, and Xiaolong Wang. Semi-supervised 3d hand-object poses estimation with interactions in time. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14687–14697, 2021.
 [32] Shichen Liu, Shunsuke Saito, Weikai Chen, and Hao Li. Learning to infer implicit surfaces without 3d supervision. In Advances in Neural Information Processing Systems, pages 8295–8306, 2019.
 [33] Shaohui Liu, Yinda Zhang, Songyou Peng, Boxin Shi, Marc Pollefeys, and Zhaopeng Cui. Dist: Rendering deep implicit signed distance function with differentiable sphere tracing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2019–2028, 2020.
 [34] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multi-person linear model. ACM Trans. Graphics (Proc. SIGGRAPH Asia), 34(6):248:1–248:16, Oct. 2015.
 [35] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4460–4470, 2019.
 [36] Marko Mihajlovic, Yan Zhang, Michael J Black, and Siyu Tang. Leap: Learning articulated occupancy of people. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10461–10471, 2021.
 [37] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision, pages 405–421. Springer, 2020.
 [38] Gyeongsik Moon and Kyoung Mu Lee. I2L-MeshNet: Image-to-lixel prediction network for accurate 3d human pose and mesh estimation from a single rgb image. In European Conference on Computer Vision (ECCV), 2020.
 [39] Gyeongsik Moon, Takaaki Shiratori, and Kyoung Mu Lee. DeepHandMesh: A weakly-supervised deep encoder-decoder framework for high-fidelity hand mesh modeling. European Conference on Computer Vision (ECCV), 2020.
 [40] Gyeongsik Moon, Ju Yong Chang, and Kyoung Mu Lee. V2V-PoseNet: Voxel-to-voxel prediction network for accurate 3d hand and human pose estimation from a single depth map. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5079–5088, 2018.
 [41] Gyeongsik Moon, Shoou-I Yu, He Wen, Takaaki Shiratori, and Kyoung Mu Lee. InterHand2.6M: A dataset and baseline for 3d interacting hand pose estimation from a single rgb image. In European Conference on Computer Vision (ECCV), 2020.
 [42] Arsalan Mousavian, Clemens Eppner, and Dieter Fox. 6dof graspnet: Variational grasp generation for object manipulation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2901–2910, 2019.
 [43] Franziska Mueller, Florian Bernard, Oleksandr Sotnychenko, Dushyant Mehta, Srinath Sridhar, Dan Casas, and Christian Theobalt. Ganerated hands for real-time 3d hand tracking from monocular rgb. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 49–59, 2018.
 [44] Franziska Mueller, Micah Davis, Florian Bernard, Oleksandr Sotnychenko, Mickeal Verschoor, Miguel A. Otaduy, Dan Casas, and Christian Theobalt. Real-time pose and shape reconstruction of two interacting hands with a single depth camera. ACM Transactions on Graphics (TOG), 38(4), 2019.
 [45] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3504–3515, 2020.
 [46] Paschalis Panteleris, Iason Oikonomidis, and Antonis Argyros. Using a single rgb frame for real time 3d hand pose estimation in the wild. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 436–445. IEEE, 2018.
 [47] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. DeepSDF: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 165–174, 2019.
 [48] Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, and Andreas Geiger. Convolutional occupancy networks. arXiv preprint arXiv:2003.04618, 2020.
 [49] Sergey Prokudin, Christoph Lassner, and Javier Romero. Efficient learning on point clouds with basis point sets. In Proceedings of the IEEE International Conference on Computer Vision Workshops, 2019.
 [50] Charles Ruizhongtai Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. CoRR, abs/1612.00593, 2016.
 [51] Javier Romero, Dimitrios Tzionas, and Michael J. Black. Embodied hands: Modeling and capturing hands and bodies together. ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), 36(6), 2017.
 [52] Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, and Hao Li. PIFu: Pixel-aligned implicit function for high-resolution clothed human digitization. In Proceedings of the IEEE International Conference on Computer Vision, pages 2304–2314, 2019.
 [53] Shunsuke Saito, Jinlong Yang, Qianli Ma, and Michael J. Black. SCANimate: Weakly supervised learning of skinned clothed avatar networks. In Proceedings IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), June 2021.
 [54] Tomas Simon, Hanbyul Joo, Iain Matthews, and Yaser Sheikh. Hand keypoint detection in single images using multiview bootstrapping. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 1145–1153, 2017.

 [55] Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33, 2020.
 [56] Vincent Sitzmann, Michael Zollhöfer, and Gordon Wetzstein. Scene representation networks: Continuous 3d-structure-aware neural scene representations. In Advances in Neural Information Processing Systems, pages 1121–1132, 2019.
 [57] Adrian Spurr, Umar Iqbal, Pavlo Molchanov, Otmar Hilliges, and Jan Kautz. Weakly supervised 3d hand pose estimation via biomechanical constraints. In European Conference on Computer Vision (ECCV), 2020.
 [58] Adrian Spurr, Jie Song, Seonwook Park, and Otmar Hilliges. Cross-modal deep variational hand pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 89–98, 2018.
 [59] Omid Taheri, Nima Ghorbani, Michael J. Black, and Dimitrios Tzionas. GRAB: A dataset of whole-body human grasping of objects. In European Conference on Computer Vision (ECCV), 2020.
 [60] Bugra Tekin, Federica Bogo, and Marc Pollefeys. H+O: Unified egocentric recognition of 3d hand-object poses and interactions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4511–4520, 2019.
 [61] Chengde Wan, Thomas Probst, Luc Van Gool, and Angela Yao. Dual grid net: Hand mesh vertex regression from single depth maps. In European Conference on Computer Vision, pages 442–459. Springer, 2020.
 [62] Jiayi Wang, Franziska Mueller, Florian Bernard, Suzanne Sorli, Oleksandr Sotnychenko, Neng Qian, Miguel A. Otaduy, Dan Casas, and Christian Theobalt. RGB2Hands: Real-time tracking of 3d hand interactions from monocular rgb video. ACM Transactions on Graphics (TOG), 39(6), 12 2020.
 [63] Shaofei Wang, Andreas Geiger, and Siyu Tang. Locally aware piecewise transformation fields for 3d human mesh registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7639–7648, 2021.
 [64] Ying Wu, John Lin, and Thomas S Huang. Analyzing and capturing articulated hand motion in image sequences. IEEE transactions on pattern analysis and machine intelligence, 27(12):1910–1922, 2005.
 [65] Lixin Yang, Jiasen Li, Wenqiang Xu, Yiqun Diao, and Cewu Lu. BiHand: Recovering hand mesh with multi-stage bisected hourglass networks. In BMVC, 2020.
 [66] Linlin Yang and Angela Yao. Disentangling latent hands for image synthesis and pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9877–9886, 2019.
 [67] Siwei Zhang, Yan Zhang, Qianli Ma, Michael J Black, and Siyu Tang. Place: Proximity learning of articulation and contact in 3d environments. In International Conference on 3D Vision (3DV), 2020.
 [68] Xiong Zhang, Qiang Li, Hong Mo, Wenbo Zhang, and Wen Zheng. End-to-end hand mesh recovery from a monocular rgb image. In Proceedings of the IEEE International Conference on Computer Vision, pages 2354–2364, 2019.
 [69] Yan Zhang, Mohamed Hassan, Heiko Neumann, Michael J Black, and Siyu Tang. Generating 3d people in scenes without people. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6194–6204, 2020.
 [70] Yuxiao Zhou, Marc Habermann, Weipeng Xu, Ikhsanul Habibie, Christian Theobalt, and Feng Xu. Monocular real-time hand shape and motion capture using multi-modal data. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
 [71] Christian Zimmermann and Thomas Brox. Learning to estimate 3d hand pose from single rgb images. In Proceedings of the IEEE international conference on computer vision, pages 4903–4911, 2017.
Appendix A More Experimental Analysis
A.1 MANO Parameters vs. 3D Keypoints
To demonstrate the applicability of the HALO hand model, we introduce the HALO-VAE model (Sec. 4) for the conditional human grasp generation task. Our model learns to generate the 3D keypoints of a hand that grasps a given object, whereas our baseline model GrabNet [59] generates MANO parameters that represent the grasping hand. As shown in Sec. 5.2, the proposed HALO-VAE model largely outperforms GrabNet in terms of physical plausibility and naturalness of the generated human grasps.
Here, we provide more details and analysis of the object encoding schemes. GrabNet [59] encodes the object using BPS features [49]. Specifically, the BPS encoder is a 4-layer feed-forward network with residual connections between layers. In our trials, we experimented with this BPS encoder instead of our PointNet encoder, using the same 4096 basis points as provided in the GRAB dataset. The rest of the architecture is identical to HALO-VAE.
However, despite a near-zero keypoint reconstruction loss on the validation set, the generated grasps for a given object are always the same, suggesting that the information from the sampled Gaussian is not used. We suspect that during training, the hand keypoints can be inferred using only the object BPS, as the 3D keypoints and the BPS features are highly correlated. Consequently, the decoder can entirely ignore the features from the hand encoder, producing the same grasp for different samples.
In order to directly compare the keypoint-based and the MANO-parameter-based grasp generation frameworks, here we use the same object encoding scheme that employs the PointNet architecture [50]. Specifically, we change the last layer of HALO-VAE (Fig. 3) from predicting the hand keypoints to predicting MANO parameters (61 dimensions: 3 global translation, 3 global rotation, 10 shape, and 45 pose parameters). Both models are trained without the interpenetration loss. The results are shown in Tab. A.1. The keypoint-based generative model produces grasps with better contact and less interpenetration, while also being more diverse than those generated by the MANO-parameter-based model, demonstrating the efficacy of the proposed HALO-VAE model.
HALO-VAE w/o interpenetration loss  Keypoint prediction  MANO parameter prediction 

Interpenetration volume (cm3) / Contact ratio (%)  
Binoculars  9.19 / 1.00  4.88 / 0.95 
Camera  3.99 / 1.00  4.00 / 1.00 
Frying pan  0.25 / 0.85  0.21 / 0.50 
Mug  3.38 / 1.00  7.22 / 1.00 
Toothpaste  6.25 / 1.00  6.94 / 1.00 
Wineglass  1.41 / 0.90  3.35 / 1.00 
Average  4.08 / 0.96  4.43 / 0.91 
Diversity  
Entropy  2.88  2.85 
Cluster size  2.25  1.13 
A.2 Comparison with Grasping Field [29] and GrabNet [59] on Other Datasets
In this section, we compare the generated grasps from the Grasping Field (GF) model, GrabNet, and HALO-VAE on the ObMan [26] and HO3D [24] test objects. Note that HALO-VAE and Grasping Field are not directly comparable, as the meshes produced by Grasping Field are not guaranteed to be valid human hands and require MANO fitting, while our HALO-VAE produces articulated implicit hand surfaces.
Nevertheless, we show qualitative and quantitative comparisons between the GF meshes after MANO fitting and the HALO hand surfaces in Fig. 1 and Tab. A.2 and A.3, respectively. Due to artifacts in the hand-designed objects used in the ObMan dataset that interfere with the interpenetration evaluation, e.g., non-watertight meshes, surfaces with holes, internal structures with wrong winding numbers, and zero-volume meshes, we perform the evaluation using the object convex hull instead. The evaluation is performed by generating 5 grasps per object for 30 randomly chosen test objects from the ObMan dataset. In total, we evaluate 150 generated grasps from each model.
The results in Tab. A.2 and A.3 show that our HALO-VAE model produces human grasps of comparable physical plausibility to Grasping Field [29] and GrabNet [59], with greater diversity.
HALO-VAE w/ (ours)  Grasping Field  GrabNet-Coarse 
Interpenetration volume* (cm3)  19.95  21.93  21.82 
Contact ratio* (%)  0.98  0.90  1.00 
Diversity  
Entropy  2.76  2.83  2.88 
Cluster size  3.34  2.87  2.68 
Appendix B Differentiable Biomechanical Canonicalization Layer
In this section, we elaborate on the method for converting 3D keypoints to bone transformation matrices. We closely follow the formulation of Spurr et al. [57] to construct the local coordinate systems . Here we provide a brief summary of the method; for more details on , we refer the readers to the supplementary material of [57].
Recall that we seek to compute the set of matrices , which is obtained by sequentially performing the following operations: 1) normalizing palmar plane angles, 2) normalizing palmar bone angles, 3) constructing local coordinate frames for each bone with respect to its parent along the kinematic chain, 4) undoing the rotation in the local frames , and 5) reverting back to the global coordinate frames. Formally,
Here, is a function that maps the keypoints to bone vectors by translating them to the local origin and scaling them to unit norm; normalizes the palmar bone and palmar plane angles; then maps the bones to their local coordinate frames; rotates each bone to have the same flexion and abduction angles as the canonical pose. Finally, maps each coordinate frame back to the global coordinate system, and reverts the bones to their original lengths and translates them to the tips of their parent bones.
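The five steps above compose per-bone rigid transforms; a minimal sketch of that composition (the ordering and homogeneous-matrix representation are assumptions):

```python
import numpy as np

def compose(transforms):
    """Compose a sequence of 4x4 homogeneous transforms applied in
    order (first element applied first), mirroring the five-step
    canonicalization pipeline described above."""
    T = np.eye(4)
    for M in transforms:
        T = M @ T  # later steps act on the result of earlier ones
    return T

# Composing a transform with its inverse recovers the identity.
M = np.eye(4)
M[:3, 3] = [1.0, 2.0, 3.0]  # pure translation
assert np.allclose(compose([M, np.linalg.inv(M)]), np.eye(4))
```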
In the following, we define the notations needed and describe the methods for constructing the local coordinate system and each matrix in .
B.1 Notation
We define all notation with respect to the right hand; without loss of generality, the same procedure can be applied to the left hand by flipping the x-axis of all joints. We denote the 3D root-aligned joint locations of a posed hand as , where is the root joint. A bone is defined as a vector pointing from a parent joint to its child , where denotes the parent of joint in the kinematic tree (see Fig. 1). We define as the normalized bone of and call the mapping from joints to normalized bones. As a shorthand, we call the palmar bones that are connected to the root joint the level-0 bones (bones ). We call a bone with k bone segments between itself and the root joint a level-k bone. Bone levels 0 to 3 are denoted by the colors black, blue, dark purple, and orange, respectively, in Fig. 1(b).
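Under an assumed 21-joint, 20-bone layout (root joint first, then four joints per finger; the exact ordering is an assumption), the mapping from joints to normalized bones can be sketched as:

```python
import numpy as np

# Assumed 21-joint layout: root joint 0, then four joints per finger.
PARENTS = [0, 1, 2, 3,  0, 5, 6, 7,  0, 9, 10, 11,
           0, 13, 14, 15,  0, 17, 18, 19]
CHILDREN = list(range(1, 21))

def joints_to_normalized_bones(joints):
    """Map 21 root-aligned joints to 20 unit bone vectors
    (child minus parent, scaled to unit norm)."""
    bones = joints[CHILDREN] - joints[PARENTS]
    lengths = np.linalg.norm(bones, axis=-1, keepdims=True)
    return bones / lengths, lengths.squeeze(-1)

rng = np.random.default_rng(0)
joints = rng.standard_normal((21, 3))
bones, lengths = joints_to_normalized_bones(joints)
assert bones.shape == (20, 3)
assert np.allclose(np.linalg.norm(bones, axis=-1), 1.0)
```

The bone lengths recovered here double as the shape descriptor used later in the appendix.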
Palmar bone rotations. Given a hand skeleton in the global coordinate frame, we denote by the angle between a palmar bone and its adjacent palmar bone , and by the angle between the plane spanned by the palmar bones , and the plane spanned by the palmar bones , . We denote the corresponding properties of the reference canonical hand with .
Non-palmar bone rotations. Given a local coordinate system , where are the axes of the coordinate system, we define for a bone its flexion and abduction angles. Since each bone is defined in the same way in its local coordinate system , we drop the subscript for brevity. The flexion angle of a bone is the angle between (the projection of onto the plane) and the axis of the coordinate frame . The abduction angle is the angle between the bone and the projection (see Fig. 2(b)).
B.2 Palmar Bone Normalization ()
Given the globally normalized hand keypoints, we first compute the transformation matrices that rotate the palmar bones to match the canonical pose. The palmar bone normalization is a combination of the palmar plane angle normalization and the palmar bone angle normalization , with .
Palmar Plane Angle (). To change the plane angle, we rotate the outer bone (with the middle finger being the center) about the shared bone until the plane angle equals the canonical angle , which we set to 0.8, 0.2, and 0.2 radians for , , and , respectively. The plane between the index and middle fingers () is fixed as the reference. The rotation applied to the ring finger bone is also propagated to the pinky finger bone .
Palmar Bone Angle (). Second, we normalize the spread of the fingers by rotating the bones in the plane spanned by two adjacent bones. Concretely, we use the middle finger as the reference and rotate on plane , on plane , on plane , and on plane . The transformations applied to and are also applied to and , respectively. We set the canonical angles to 0.4, 0.2, 0.2, and 0.2 radians for , , , and , respectively.
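Both palmar normalization steps reduce to rotating a bone about a fixed axis by a fixed angle; Rodrigues' rotation formula is the standard primitive for this (a sketch, not the paper's exact implementation):

```python
import numpy as np

def rotate_about_axis(v, axis, angle):
    """Rodrigues' rotation of vector v about a unit axis by `angle`;
    the basic primitive behind both palmar normalization steps."""
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

# Rotating the x-axis about the z-axis by 90 degrees yields the y-axis.
out = rotate_about_axis(np.array([1., 0., 0.]),
                        np.array([0., 0., 1.]), np.pi / 2)
assert np.allclose(out, [0., 1., 0.], atol=1e-12)
```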
B.3 Local Coordinate System ()
We now define the local coordinate system for each bone . Since the level-0 bones are always fixed in our formulation, the only bones that characterize the hand pose are the bones at levels 1 to 3; we therefore only describe the coordinate systems for non-zero-level bones. For each non-zero-level bone (), its coordinate frame is defined by three normalized vectors . The z-axis of for a non-zero-level bone is always defined by the normalized bone vector of its parent . We describe below how to define the x-axis; the remaining axis is then obtained by a cross product: . Note that the coordinate frame does not have a position component because we obtain the bone vectors by subtracting the parent joint; thus, all bones are aligned to the origin. The translation components are added in the final step of our formulation. However, for illustration purposes, we draw each coordinate frame with the translation in our figures.
Coordinate Systems for Level-1 Bones. Formally, we denote by the normal of the plane spanned by two adjacent level-0 bones, where
(6) 
Fig. 1(a) illustrates the normal vectors defining each plane. Using these normal vectors, we define the x-axis of the level-1 frames for as follows:
(7) 
In other words, for bones and , we define the x-axes of their coordinate systems and by and , because these bones are on the edges of the palm.
For bones , , and , the x-axes of their coordinate systems are defined by the average normal around the bones.
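A minimal sketch of the frame construction described above (z from the parent bone, x from the palmar plane normal projected to be orthogonal to z, and the last axis by a cross product):

```python
import numpy as np

def local_frame(parent_bone, plane_normal):
    """Build an orthonormal frame for a bone: z is the parent bone
    direction, x is the palmar plane normal projected to be
    orthogonal to z, and the remaining axis is their cross product."""
    z = parent_bone / np.linalg.norm(parent_bone)
    x = plane_normal - np.dot(plane_normal, z) * z  # make x orthogonal to z
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])  # rows are the frame axes

F = local_frame(np.array([0., 0., 2.]), np.array([1., 0., 0.1]))
assert np.allclose(F @ F.T, np.eye(3), atol=1e-8)  # orthonormal
```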
Coordinate Systems for Level-2 and Level-3 Bones. Given the rotation angles of the level-1 bones and the corresponding coordinate systems , we can construct the coordinate frame () for a second-level bone by rotating the coordinate frame along the kinematic chain using . Concretely, the new coordinate frame is given by
(8) 
Similarly, the coordinate systems for level-3 bones are obtained by rotating the coordinate systems of the level-2 bones using the rotation angles at level 2.
Rotations to the Canonical Pose (). Given a local coordinate system , where are the axes of the coordinate system, we can measure for a bone the flexion angle and the abduction angle that characterize the bone with respect to . Fig. 1 visualizes the local coordinate system and how the rotation angles are measured. Then, given a canonical-pose bone , we compute the rotation matrix that transforms to its canonical pose based on its angles relative to the canonical pose , in .
B.4 Angles to the Canonical Pose (, ) ()
Since the angles are measured in a consistent coordinate frame, to obtain the angles needed to rotate a bone to the canonical pose, we simply offset the angles of by the angles of in the canonical pose:
(9)  
where and are the flexion and abduction angles of the canonical pose with respect to . In our experiments, we use a canonical pose identical to that of MANO [51].
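As a sketch, with the assumed sign convention that the offset is canonical-minus-measured:

```python
import numpy as np

def angles_to_canonical(flex, abd, flex_canon, abd_canon):
    """Rotation angles that bring a bone to the canonical pose:
    the offsets of its flexion/abduction angles from the canonical
    values, since both are measured in the same local frame.
    (The sign convention here is an assumption.)"""
    return flex_canon - flex, abd_canon - abd

d_flex, d_abd = angles_to_canonical(0.7, 0.1, 0.2, 0.0)
assert np.isclose(d_flex, -0.5) and np.isclose(d_abd, -0.1)
```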
B.5 Local Coordinate Frame to Global Coordinate Frame ()
Each rotation matrix is defined locally with respect to a coordinate frame . The accumulated rotation with respect to the root of the hand for a bone is then the product of the rotation matrices along the kinematic chain:
(10) 
where are the rotations up to the parent of . With the accumulated rotation matrices encoding the joint angles relative to the palm of the hand, we need to map all matrices to global coordinates by multiplying with . Recall that encodes the mapping from the global coordinate system to the local frame; thus, its inverse brings the angles back to the global coordinate frame. We summarize the accumulation of angles and the mapping to the global coordinates with the matrix :
(11) 
With all the necessary components in place, we can compute the transformation matrices for all bones from the 3D keypoints alone, in a fully differentiable manner. In Sec. 5.1.2, we show that the matrices derived with this formulation can be used to recover hand surfaces that are very close to those obtained with the ground-truth transformation matrices from MANO.
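The accumulation along the kinematic chain and the mapping back to global coordinates can be sketched as follows (the frame-inversion step follows the text; the exact matrix bookkeeping is an assumption):

```python
import numpy as np

def global_rotation(local_rots, frame_to_local):
    """Accumulate per-bone local rotations along the kinematic chain
    (root first) and map the result back to global coordinates with
    the inverse of the global-to-local frame matrix."""
    R = np.eye(3)
    for R_local in local_rots:
        R = R_local @ R  # product along the chain
    return np.linalg.inv(frame_to_local) @ R

# With identity rotations and an identity frame, nothing changes.
out = global_rotation([np.eye(3), np.eye(3)], np.eye(3))
assert np.allclose(out, np.eye(3))
```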
Appendix C Implementation details
C.1 HALO
Training loss.
The loss used to train our articulated hand model can be written as:
(12) 
where determines the weight of the skinning loss; it is set to 0.5 in all experiments. We turn off the skinning loss after the validation IoU reaches 80%, as we observe that this allows a smoother transition between hand parts.
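A sketch of the combined loss with the skinning term toggled off late in training; binary cross-entropy for occupancy and L2 for skinning weights are assumptions, as the paper does not spell out the individual terms here:

```python
import numpy as np

def halo_loss(occ_pred, occ_gt, skin_pred, skin_gt,
              lam=0.5, use_skin=True):
    """Occupancy loss plus a weighted skinning loss (weight 0.5 in
    the paper); the skinning term is switched off late in training.
    BCE for occupancy and L2 for skinning weights are assumptions."""
    eps = 1e-7
    p = np.clip(occ_pred, eps, 1.0 - eps)
    occ_loss = -np.mean(occ_gt * np.log(p) + (1 - occ_gt) * np.log(1 - p))
    skin_loss = np.mean((skin_pred - skin_gt) ** 2) if use_skin else 0.0
    return occ_loss + lam * skin_loss

gt = np.array([0., 1., 1., 0.])
with_skin = halo_loss(gt, gt, np.ones(3), np.zeros(3))
without = halo_loss(gt, gt, np.ones(3), np.zeros(3), use_skin=False)
assert np.isclose(with_skin - without, 0.5)  # lam * unit MSE
```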
Network architecture. For a fair comparison with the baseline [15], we use the same network architecture with 4 layers of size 40 for each part model in the ablation study. For the final model used in HALO-VAE, the layer size is increased to 64, as we observed a small improvement in surface quality. LeakyReLU with a negative slope of 0.1 is used as the activation function. All layers have a residual connection and a dropout rate of 0.2. The subspace projection layers map the input to a vector of size 8. When used, the global bone-length encoder is a 2-layer feed-forward network of size 40 that maps the 16 bone lengths to an encoded vector of size 16.
We define the number of parts as 16, with one part responsible for the palm and three parts for each finger. When using the 20 transformation matrices obtained from our formulation as the pose descriptor, we disregard the transformations of the root bones and add an identity transformation for the palm, resulting in 16 transformation matrices.
Training data. To train our neural occupancy hand model, we utilize MANO [51] hand meshes. The query points for each mesh are selected using two strategies: 1) uniformly sampling points in the bounding box of the hand mesh, where the root joint is at the origin, and 2) sampling on the surface with additional isotropic Gaussian noise. For each strategy, we sample 100,000 points. The associated occupancy value of each query point is computed by casting a ray from the sampled point and counting the number of intersections along the ray. For evaluation, following [15], we use uniformly sampled points. The bone transformation matrices are computed along the kinematic chain to transform the template hand into the target pose. The shape descriptor is based on the bone lengths, defined as the Euclidean distances between adjacent joints. The skinning weights are taken from those used to pose the template mesh in MANO. We use the YouTube 3D Hands (YT3D) dataset [30] in all our experiments. The YT3D training set contains 50,175 hand meshes of hundreds of subjects performing a wide variety of tasks in 102 videos. The test set covers 1,525 meshes from 7 videos.
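The two query-point sampling strategies can be sketched as follows (sampling mesh vertices stands in for proper area-weighted surface sampling, and the occupancy labeling by ray casting is omitted):

```python
import numpy as np

def sample_query_points(verts, n_uniform=100_000, n_surface=100_000,
                        noise_std=0.01, seed=None):
    """Occupancy query points via the two strategies in the text:
    uniform in the bounding box, and near-surface with isotropic
    Gaussian noise. Sampling mesh vertices stands in for proper
    area-weighted surface sampling."""
    rng = np.random.default_rng(seed)
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    uniform = rng.uniform(lo, hi, size=(n_uniform, 3))
    idx = rng.integers(0, len(verts), size=n_surface)
    near = verts[idx] + rng.normal(0.0, noise_std, size=(n_surface, 3))
    return np.concatenate([uniform, near], axis=0)

verts = np.random.default_rng(0).standard_normal((778, 3))  # toy "mesh"
pts = sample_query_points(verts, n_uniform=1000, n_surface=1000)
assert pts.shape == (2000, 3)
```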
Training. We use the Adam optimizer with a learning rate of and a batch size of 64 in all experiments. For each mesh at each training step, we sample 2,048 points from the 200K pre-sampled query points for the occupancy loss and 2,000 out of 6,000 surface points for the skinning loss. When surface-point resampling is not used, we sample 200 of the 778 mesh vertices for the skinning loss.
C.2 HALO-VAE
Preprocessing.
Network architecture.
Our keypoint VAE consists of an object encoder, a keypoint encoder, and a decoder. The object encoder is a 4-layer PointNet encoder with a residual connection between each layer. The hand encoder and the decoder are 4-layer MLP networks with residual connections. The hand encoder takes the hand keypoints and the object latent code and produces the mean and standard deviation of a 32-dimensional Gaussian distribution. The decoder takes as inputs a noise sample from this Gaussian distribution and the object latent vector, and predicts the hand keypoint locations. All layers have size 256. To go from keypoints to a hand mesh, we use the final HALO model with the differentiable canonicalization layer that takes 3D keypoints as input.
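The sampling path of the keypoint VAE can be sketched as follows. This is a minimal NumPy sketch: the single linear maps stand in for the 4-layer residual MLPs of width 256, and the object latent size used here (64) is an assumption not stated in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS, LATENT, OBJ = 21, 32, 64  # object latent size is an assumption
W_mu = rng.normal(0, 0.05, (N_JOINTS * 3 + OBJ, LATENT))
W_logstd = rng.normal(0, 0.05, (N_JOINTS * 3 + OBJ, LATENT))
W_dec = rng.normal(0, 0.05, (LATENT + OBJ, N_JOINTS * 3))

def encode(keypoints, obj_code):
    """Hand encoder: keypoints + object code -> Gaussian mean and std."""
    h = np.concatenate([keypoints.ravel(), obj_code])
    return h @ W_mu, np.exp(h @ W_logstd)

def decode(z, obj_code):
    """Decoder: latent sample + object code -> predicted 3D keypoints."""
    return (np.concatenate([z, obj_code]) @ W_dec).reshape(N_JOINTS, 3)

keypoints = rng.normal(size=(N_JOINTS, 3))
obj_code = rng.normal(size=OBJ)
mu, std = encode(keypoints, obj_code)
z = mu + std * rng.normal(size=LATENT)  # reparameterization trick
pred = decode(z, obj_code)
```

The predicted keypoints would then be passed to HALO's canonicalization layer to produce the hand surface.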
Appendix D Limitations
HALO relies on biomechanically plausible 3D keypoints. Training the model end-to-end with HALO and angle losses alleviates this problem and results in more robust surface predictions, as highlighted in HALO-VAE. This indicates that the inductive bias of our model helps encourage a biomechanically plausible 3D hand surface. However, a severely physically implausible hand skeleton could still produce artifacts on the hand surface.
Tolerance to Noisy Keypoints.
We further analyse the impact of biomechanical violations on the reconstruction quality of hand surfaces. We uniformly sample noise with amplitude and add it to every dimension of every joint of a valid hand. As shown in Fig. 1, when the amplitude is within , HALO produces reasonable hand surfaces. When the amplitude is increased to , the reconstructed hand surface starts to show visible artifacts. Nevertheless, we reiterate that this problem can be mitigated by encouraging a biomechanically valid skeleton output from the estimator or generator, as in HALO-VAE.
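The perturbation procedure amounts to a few lines of NumPy. The amplitude value below is purely illustrative, since the actual amplitudes tested are given in the figure, not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_joints(joints, amplitude):
    """Add uniform noise in [-amplitude, amplitude] to every dimension of
    every joint, as in the robustness analysis. The amplitude used in the
    call below (0.01) is an illustrative placeholder."""
    noise = rng.uniform(-amplitude, amplitude, size=joints.shape)
    return joints + noise

joints = rng.normal(size=(21, 3))  # 21 hand joints of a valid skeleton
noisy = perturb_joints(joints, amplitude=0.01)
```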
Appendix E Qualitative Results
E.1 HALO from Keypoints
Figure 1 shows the HALO hand surfaces driven by the keypointbased skeleton articulations.
E.2 Generative Results
Figure 3 shows grasps randomly sampled from HALO-VAE, conditioned on objects from the test set of the GRAB dataset.