Reconstructing NBA Players

07/27/2020 ∙ by Luyang Zhu, et al.

Great progress has been made in 3D body pose and shape estimation from a single photo. Yet, state-of-the-art results still suffer from errors due to challenging body poses, modeling clothing, and self occlusions. The domain of basketball games is particularly challenging, as it exhibits all of these challenges. In this paper, we introduce a new approach for reconstruction of basketball players that outperforms the state-of-the-art. Key to our approach is a new method for creating poseable, skinned models of NBA players, and a large database of meshes (derived from the NBA2K19 video game), that we are releasing to the research community. Based on these models, we introduce a new method that takes as input a single photo of a clothed player in any basketball pose and outputs a high resolution mesh and 3D pose for that player. We demonstrate substantial improvement over state-of-the-art, single-image methods for body shape reconstruction.


1 Introduction

Given regular, broadcast video of an NBA basketball game, we seek a complete 3D reconstruction of the players, viewable from any camera viewpoint. This reconstruction problem is challenging for many reasons, including the need to infer hidden and back-facing surfaces, and the complexity of basketball poses, e.g., reconstructing jumps, dunks, and dribbles.

Human body modeling from images has advanced dramatically in recent years, due in large part to availability of 3D human scan datasets, e.g., CAESAR [59]. Based on this data, researchers have developed powerful tools that enable recreating realistic humans in a wide variety of poses and body shapes [43], and estimating 3D body shape from single images [61, 70]. These models, however, are largely limited to the domains of the source data – people in underwear [59], or clothed models of people in static, staged poses [56]. Adapting this data to a domain such as basketball is extremely challenging, as we must not only match the physique of an NBA player, but also their unique basketball poses.

Figure 1: Single input photo (left), estimated 3D posed model viewed from a new camera position (middle), and the same model with video game texture for visualization purposes (right). The insets show the estimated shape from the input camera viewpoint. (Court and basketball meshes are extracted from the video game.) Photo Credit: [66]

Sports video games, on the other hand, have become extremely realistic, with renderings that are increasingly difficult to distinguish from reality. The player models in games like NBA2K [68] are meticulously crafted to capture each player’s physique and appearance (Fig. 3). Such models are ideally suited as a training set for 3D reconstruction and visualization of real basketball games.

In this paper, we present a novel dataset and neural networks that reconstruct high quality meshes of basketball players and retarget these meshes to fit frames of real NBA games. Given an image of a player, we are able to reconstruct the action in 3D, and apply new camera effects such as close-ups, replays, and bullet-time effects (Fig. 1).

Our new dataset is derived from the video game NBA2K (with approval from the creator, Visual Concepts), by playing the game for hours and intercepting rendering instructions to capture thousands of meshes in diverse poses. Each mesh provides detailed shape and texture, down to the level of wrinkles in clothing, and captures all sides of the player, not just those visible to the camera. Since the intercepted meshes are not rigged, we learn a mapping from pose parameters to mesh geometry with a novel deep skinning approach. The result of our skinning method is a detailed deep net basketball body model that can be retargeted to any desired player and basketball pose.

We also introduce a system to fit our retargetable player models to real NBA game footage by solving for 3D player pose and camera parameters for each frame. We demonstrate the effectiveness of this approach on synthetic and real NBA input images, and compare with the state of the art in 3D pose and human body model fitting. Our method outperforms the state-of-the-art methods when reconstructing basketball poses and players even when these methods, to the extent possible, are retrained on our new dataset. This paper focuses on basketball shape estimation, and leaves texture estimation as future work.

Our biggest contributions are, first, a deep skinning approach that produces high quality, pose-dependent models of NBA players. A key differentiator is that we leverage thousands of poses and capture detailed geometric variations as a function of pose (e.g., folds in clothing), rather than a small number of poses which is the norm for datasets like CAESAR (1-3 poses/person) and modeling methods like SMPL (trained on CAESAR and 45 poses/person). While our approach is applicable to any source of registered 3D scan data, we apply it to reconstruct models of NBA players from NBA2K19 game play screen captures. As such, a second key contribution is pose-dependent models of different basketball players, and raw capture data for the research community. Finally, we present a system that fits these player models to images, enabling 3D reconstructions from photos of NBA players in real games. Both our skinning and pose networks are evaluated quantitatively and qualitatively, and outperform the current state of the art.

One might ask, why spend so much effort reconstructing mesh models that already exist (within the game)? NBA2K’s rigged models and in-house animation tools are proprietary IP. By reconstructing a posable model from intercepted meshes (eliminating the need for proprietary animation and simulation tools), we can provide these best-in-the-world models of basketball players to researchers for the first time (with the company’s support). These models provide a number of advantages beyond existing body models such as SMPL. In particular, they capture not just static poses, but human body dynamics for running, walking, and many other challenging activities. Furthermore, the plentiful pose-dependent data enables robust reconstruction even in the presence of heavy occlusions. In addition to producing the first high quality reconstructions of basketball from regular photos, our models can facilitate synthetic data collection for ML algorithms. Just as simulation provides a critical source of data for many ML tasks in robotics, self-driving cars, depth estimation, etc., our derived models can generate much more simulated content under any desired conditions (we can render any pose, viewpoint, combination of players, against any background, etc.).

Figure 2: Overview: Given a single basketball image (top left), we begin by detecting the target player using [10, 62], and create a person-centered crop (bottom left). From this crop, our PoseNet predicts 2D pose, 3D pose, and jump information. The estimated 3D pose and the cropped image are then passed to mesh generation networks to predict the full, clothed 3D mesh of the target player. Finally, to globally position the player on the 3D court (right), we estimate camera parameters by solving the PnP problem on known court lines and predict global player position by combining camera, 2D pose, and jump information. Blue boxes represent novel components of our method.

2 Related Work

Video Game Training Data. Recent works [58, 57, 39, 54] have shown that, for some domains, data derived from video games can significantly reduce manual labor and labeling, since ground-truth labels can be extracted automatically while playing the game. E.g., [9, 54] collected depth maps of soccer players by playing the FIFA soccer video game, showing generalization to images of real games. Those works, however, focused on low level vision data, e.g., optical flow and depth maps rather than full high quality meshes. In contrast, we collect data that includes 3D triangle meshes, texture maps, and detailed 3D body pose, which requires more sophisticated modeling of human body pose and shape.

Sports 3D reconstruction. Reconstructing 3D models of athletes playing various sports from images has been explored in both academic research and industrial products. Most previous methods use multiple camera inputs rather than a single view. Grau et al. [19, 18] and Guillemaut et al. [23, 22] used multiview stereo methods for free viewpoint navigation. Germann et al. [15] proposed an articulated billboard representation for novel view interpolation. Intel demonstrated 360-degree viewing experiences with their True View [29] technology (https://www.intel.com/content/www/us/en/sports/technology/true-view.html) by installing synchronized 5K cameras around the venue and using this multi-view input to build a volumetric reconstruction of each player. This paper aims to achieve similar reconstruction quality but from a single image.

Rematas et al. [54] reconstructed soccer games from monocular YouTube videos. However, they predicted only depth maps, and thus cannot handle occluded body parts or player visualization from all angles. Additionally, they estimated players’ global positions by assuming all players are standing on the ground, which is not a suitable assumption for basketball, where players are often airborne. Their depth maps also lack detail. We address all of these challenges by building a basketball-specific player reconstruction algorithm that is trained on meshes and accounts for complex airborne basketball poses. Our result is a detailed mesh of the player from a single view, comparable to multi-view reconstructions and viewable from any camera position.

3D human pose estimation. Large scale body pose estimation datasets [30, 45, 69] enabled great progress in 3D human pose estimation from single images [46, 44, 65, 26, 47]. We build on [46] but train on our new basketball pose data, use a more detailed skeleton (35 joints including fingers and face keypoints), and add an explicit model of jumping and camera to predict global position. Accounting for jumping is an important step that allows our method to outperform state-of-the-art pose estimation methods.

3D human body shape reconstruction. Parametric human body models [4, 43, 52, 60, 33, 49] are commonly fit to images to derive a body skeleton, and provide a framework to optimize for shape parameters [7, 33, 49, 71, 40, 28, 75]. [70] further 2D-warped the optimized parametric model to approximately account for clothing and create a rigged, animated mesh from a single photo. [34, 51, 35, 38, 50, 24, 76, 37] trained neural networks to directly regress body shape parameters from images. Most parametric-model-based methods reconstruct undressed humans, since clothing is not part of the parametric model.

Clothing can be modeled to some extent by warping SMPL [43] models, e.g., to silhouettes: Weng et al. [70] demonstrated 2D warping of depth and normal maps from a single photo silhouette, and Alldieck et al. [2, 1, 3] addressed multi-image fitting. Alternatively, given predefined garment models, [6] estimated a clothing mesh layer on top of SMPL.

Non-parametric methods [67, 48, 61, 53] proposed voxel [67] or implicit function [61] representations to model clothed humans by training on representative synthetic data. Xu et al. [73, 74] and Habermann et al. [25] assumed a pre-captured multi-view model of the clothed human, which they retarget to new poses.

We focus on single-view reconstruction of players in NBA basketball games, producing a complete 3D model of the player’s pose and shape, viewable from any camera viewpoint. Unlike prior methods that model undressed people in various poses or dressed people in a frontal pose, we focus on modeling clothed people in challenging basketball poses and provide a rigorous comparison with the state of the art.

3 The NBA2K Dataset

Figure 3: Examples from our novel NBA2K dataset, which captures 27,144 basketball poses spanning 27 subjects, extracted from the NBA2K19 video game.

Imagine having thousands of 3D body scans of NBA players, in every conceivable pose during a basketball game. Suppose that these models were extremely detailed and realistic, down to the level of wrinkles in clothing. Such a dataset would be instrumental for sports reconstruction, visualization, and analysis. This section describes such a dataset, which we call NBA2K, after the video game from which these models derive. These models of course are not literally player scans, but are produced by professional modelers for use in the NBA2K19 video game, based on a variety of data including high resolution player photos, scanned models and mocap data of some players. While they do not exactly match each player, they are among the most accurate 3D renditions in existence (Fig. 3).

Our NBA2K dataset consists of body mesh and texture data for several NBA players, each in around 1000 widely varying poses. For each mesh (vertices, faces and texture) we also provide its 3D pose (35 keypoints including face and hand finger points) and the corresponding RGB image with its camera parameters. While we used meshes of 27 famous real players to create many of the figures in this paper, we do not have permission to release models of current NBA players. Instead, we additionally collected the same kind of data for 28 synthetic players and retrained our pipeline on this data. The synthetic players have the same geometric and visual quality as the NBA models, and their data, along with trained models, will be shared with the research community upon publication of this paper. Our released meshes, textures, and models have the same quality as those shown in the paper and span a similar variety of player types, but are not named individuals. Visual Concepts [68] has approved our collection and sharing of the data.

The data was collected by playing the NBA2K19 game and intercepting calls between the game engine and the graphics card using RenderDoc [55]. The program captures all drawing events per frame, where we locate player rendering events by analyzing the hashing code of both vertex and pixel shaders. Next, triangle meshes and textures are extracted by reverse-engineering the compiled code of the vertex shader. The game engine renders players by body parts, so we perform a nearest neighbor clustering to decide which body part belongs to which player. Since the game engine optimizes the mesh for real-time rendering, the extracted meshes have different mesh topologies, making them harder to use in a learning framework. We register the meshes by resampling vertices in texture space based on a template mesh. After registration, the processed mesh has 6036 vertices and 11576 faces with fixed topology across poses and players (point-to-point correspondence), has multiple connected components (not a watertight manifold), and comes with no skinning information. We also extract the rest-pose skeleton and per-bone transformation matrix, from which we can compute forward kinematics to get full 3D pose.
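The registration step above resamples each captured mesh at a fixed set of template UV coordinates, giving point-to-point correspondence across poses and players. A minimal brute-force sketch of that idea follows; the function names and data layout are our assumptions, not the authors' implementation, and a real pipeline would use a spatial index rather than a linear scan over faces.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in UV triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def resample_in_uv(template_uv, src_uv, src_verts, src_faces):
    """For each template UV sample, find the source triangle containing it
    in texture space and barycentrically interpolate the 3D positions."""
    out = np.zeros((len(template_uv), 3))
    for i, p in enumerate(template_uv):
        for f in src_faces:
            bary = barycentric(p, *src_uv[f])
            if (bary >= -1e-9).all():          # p lies inside this UV triangle
                out[i] = bary @ src_verts[f]   # blend the three 3D vertices
                break
    return out
```

Running this over every captured mesh with the same `template_uv` yields vertices in a fixed order, which is what makes the meshes usable in a learning framework.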

4 From Single Images to Meshes

Figure 2 shows our full reconstruction system, starting from a single image of a basketball game, and ending with output of a complete, high quality mesh of the target player with pose and shape matching the image. Next, we describe the individual steps to achieve the final results.

4.1 3D Pose in World Coordinates

2D pose, jump, and 3D pose estimation. Since our input meshes are not rigged (no skeletal information or blending weights), we propose a neural network called PoseNet to estimate the 3D pose and other attributes of a player from a single image. This 3D pose information is used later to facilitate shape reconstruction. PoseNet takes a single image as input and is trained to output 2D body pose, 3D body pose, a binary jump classification (is the person airborne or not), and the jump height (vertical height of the feet above the ground). The two jump-related outputs are key for global position estimation and are our novel addition to existing generic body pose estimation.

From the input image, we first extract ResNet [72] features (from layer 4) and supply them to four separate network branches. The output of the 2D pose branch is a set of 2D heatmaps (one for each 2D keypoint) indicating where the particular keypoint is located. The output of the 3D pose branch is a set of location maps (one for each keypoint) [46]; a location map indicates the possible 3D location for every pixel. The 2D and 3D pose branches use the same architecture as [72]. The jump branch estimates a class label, and the jump height branch regresses the height of the jump. Both branches use a fully connected layer followed by two linear residual blocks [44] to get the final output.
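The location-map readout described above (a 3D position stored at every pixel, selected at the corresponding 2D heatmap peak) can be sketched as follows; shapes and names are illustrative assumptions:

```python
import numpy as np

def read_joints(heatmaps, location_maps):
    """heatmaps: (J, H, W); location_maps: (J, H, W, 3).
    Each joint's 3D position is read from its location map at the pixel
    where its 2D heatmap peaks, as in location-map-based pose estimation."""
    J, H, W = heatmaps.shape
    joints = np.zeros((J, 3))
    for j in range(J):
        y, x = np.unravel_index(np.argmax(heatmaps[j]), (H, W))
        joints[j] = location_maps[j, y, x]
    return joints
```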

The PoseNet model is trained using the following loss:

$\mathcal{L} = \lambda_{hm}\mathcal{L}_{hm} + \lambda_{loc}\mathcal{L}_{loc} + \lambda_{bl}\mathcal{L}_{bl} + \lambda_{jh}\mathcal{L}_{jh} + \lambda_{jc}\mathcal{L}_{jc}$ (1)

where $\mathcal{L}_{hm}$ is the loss between predicted and ground truth heatmaps, $\mathcal{L}_{loc}$ is the loss between predicted and ground truth 3D location maps, $\mathcal{L}_{bl}$ is the loss between predicted and ground truth bone lengths to penalize unnatural 3D poses (we pre-computed the ground truth bone lengths over the training data), $\mathcal{L}_{jh}$ is the loss between predicted and ground truth jump heights, and $\mathcal{L}_{jc}$ is the cross-entropy loss for the jump class. For all experiments, we use fixed values for the weights $\lambda_{hm}, \lambda_{loc}, \lambda_{bl}, \lambda_{jh}, \lambda_{jc}$.
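The multi-task loss can be sketched as a weighted sum of per-task terms. The dictionary keys, the choice of mean-squared error for the regression terms, and the weight values below are all illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def posenet_loss(pred, gt, w):
    """Weighted multi-task loss over heatmaps (hm), location maps (loc),
    bone lengths (bl), jump height (jh), and jump class (jc).
    pred/gt are dicts of arrays; w is a dict of scalar weights."""
    mse = lambda a, b: np.mean((a - b) ** 2)
    # softmax cross-entropy over the binary jump-class logits
    logits = pred["jump_logits"]
    p = np.exp(logits - logits.max())
    p /= p.sum()
    ce = -np.log(p[gt["jump_class"]])
    return (w["hm"] * mse(pred["hm"], gt["hm"])
            + w["loc"] * mse(pred["loc"], gt["loc"])
            + w["bl"] * mse(pred["bl"], gt["bl"])
            + w["jh"] * mse(pred["jh"], gt["jh"])
            + w["jc"] * ce)
```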

Global Position. To estimate the global position of the player, we need the camera parameters of the input image. Since NBA courts have known dimensions, we generate a synthetic 3D field and align it with the input frame. Similar to [54, 11], we use a two-step approach. First, we provide four manual correspondences between the input image and the 3D basketball court to initialize the camera parameters by solving PnP [41]. Then, we perform a line-based camera optimization similar to [54], where the projected lines from the synthetic 3D court should match the lines on the image. Given the camera parameters, we can estimate a player’s global position on (or above) the 3D court from the lowest keypoint and the jump height: we cast a ray from the camera center through the image keypoint, and the 3D location of that keypoint is the point along the ray whose height above the ground equals the estimated jump height.
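The ray-casting step above reduces to intersecting a camera ray with a horizontal plane at the estimated jump height. A minimal sketch, assuming a y-up world with the court plane at y = 0 (coordinate conventions and names are ours):

```python
import numpy as np

def player_position(cam_center, ray_dir, jump_height):
    """Intersect the camera ray with the horizontal plane y = jump_height.
    Returns the 3D point, or None if the ray is parallel to the plane."""
    d = ray_dir / np.linalg.norm(ray_dir)
    if abs(d[1]) < 1e-9:
        return None
    t = (jump_height - cam_center[1]) / d[1]
    return cam_center + t * d
```

With jump_height = 0 this places a grounded player on the court; a positive jump height places an airborne player above it.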

4.2 Mesh Generation

Figure 4: Mesh generation contains two sub networks: IdentityNet and SkinningNet. IdentityNet deforms a rest pose template mesh (average rest pose over all players in the database), into a rest pose personalized mesh given the image. SkinningNet takes the rest pose personalized mesh and 3D pose as input and outputs the posed mesh. There is a separate SkinningNet per body part, here we illustrate the arms.

Reconstruction of a complete detailed 3D mesh (including deformation due to pose, cloth, fingers and face) from a single image is a key technical contribution of our method. To achieve this we introduce two sub-networks (Fig. 4): IdentityNet and SkinningNet. IdentityNet takes as input an image of a player whose rest mesh we wish to infer, and outputs the person’s rest mesh by deforming a template mesh. The template mesh is the average of all training meshes and is the same starting point for any input. The main benefit of this network is that it allows us to estimate the body size and arm span of the player according to the input image. SkinningNet takes the rest pose personalized mesh and the 3D pose as input, and outputs the posed mesh. To reduce the learning complexity, we pre-segment the mesh into six parts: head, arms, shirt, pants, legs and shoes. We then train a SkinningNet on each part separately. Finally, we combine the six reconstructed parts into one, while removing interpenetration of garments with body parts. Details are described below.

IdentityNet. We propose a variant of 3D-CODED [20] to deform the template mesh. We first use ResNet [27] to extract features from input images. Then we concatenate template mesh vertices with image features and send them into an AtlasNet decoder [21] to predict per vertex offsets. Finally, we add this offset to the template mesh to get the predicted personalized mesh. We use the L1 loss between the prediction and ground truth to train IdentityNet.
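The deform-by-offsets idea behind IdentityNet can be illustrated with a stand-in linear decoder in place of the AtlasNet decoder: the global image feature is tiled onto every template vertex, decoded into per-vertex offsets, and added back to the template. Shapes and names here are assumptions for illustration only:

```python
import numpy as np

def identity_forward(template, img_feat, W, b):
    """template: (V, 3) rest-pose template vertices; img_feat: (F,) image
    feature. Concatenate the feature to every vertex, decode per-vertex
    offsets (here a single linear layer stands in for the AtlasNet
    decoder), and add them to the template."""
    V = template.shape[0]
    x = np.concatenate([template, np.tile(img_feat, (V, 1))], axis=1)  # (V, 3+F)
    offsets = x @ W + b
    return template + offsets
```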

SkinningNet. We propose a TL-embedding network [17] to learn an embedding space with generative capability. Specifically, the 3D keypoints are processed by the pose encoder to produce a latent code $z_{pose}$. The rest pose personalized mesh vertices $V_r \in \mathbb{R}^{N \times 3}$ (where $N$ is the number of vertices in a mesh part) are processed by the mesh encoder to produce a latent code $z_{mesh}$. Then $z_{pose}$ and $z_{mesh}$ are concatenated and fed into a fully connected layer to get $z_{pred}$. Similarly, the ground truth posed mesh vertices are processed by another mesh encoder to produce a latent code $z_{gt}$. $z_{gt}$ is sent into the mesh decoder during training, while $z_{pred}$ is sent into the mesh decoder during testing.

The pose encoder comprises two linear residual blocks [44] followed by a fully connected layer. The mesh encoders and the shared decoder are built with spiral convolutions [8]. See the supplementary material for the detailed network architecture. SkinningNet is trained with the following loss:

$\mathcal{L} = \lambda_{z}\mathcal{L}_{z} + \lambda_{v}\mathcal{L}_{v}$ (2)

where $\mathcal{L}_{z}$ forces the latent codes $z_{pred}$ and $z_{gt}$ to be similar, and $\mathcal{L}_{v}$ is the loss between the decoded mesh vertices and the ground truth vertices. The weights $\lambda_{z}$ and $\lambda_{v}$ are fixed across all experiments. See supplementary for detailed training parameters.
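The TL-embedding train/test asymmetry — decode the ground-truth code during training, the predicted code during testing — can be sketched as follows, with the encoders, fusion layer, and decoder abstracted as callables (all names illustrative):

```python
import numpy as np

def skinning_forward(pose_code, rest_code, gt_code, fuse, decode, training):
    """TL-embedding style forward pass.
    fuse: maps the concatenated [pose_code, rest_code] to z_pred.
    decode: maps a latent code to posed mesh vertices.
    During training the ground-truth code drives the decoder while a
    latent-alignment loss pulls z_pred toward gt_code; at test time
    z_pred is decoded directly."""
    z_pred = fuse(np.concatenate([pose_code, rest_code]))
    z = gt_code if training else z_pred
    return decode(z), z_pred
```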

Combining body part meshes. Direct concatenation of body parts results in interpenetration between the garment and the body. Thus, we first detect all body part vertices in collision with clothing as in [49], and then follow [63, 64] to deform the mesh by moving collision vertices inside the garment while preserving local rigidity of the mesh. This detection-deformation process is repeated until there is no collision or the number of iterations is above a threshold (10 in our experiments). See supplementary material for details of the optimization.
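The detection-deformation loop above has a simple control structure, sketched here with the collision detector and deformer abstracted as callables (the real steps follow [49] and [63, 64]):

```python
def resolve_collisions(body, garment, detect, deform, max_iters=10):
    """Alternate collision detection and deformation until no body
    vertices penetrate the garment or the iteration budget (10 in the
    paper's experiments) is exhausted."""
    for _ in range(max_iters):
        colliding = detect(body, garment)
        if not colliding:
            break
        body = deform(body, colliding)
    return body
```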

Method    HMR [34]  CMR [38]  SPIN [37]  Ours(Reg+BL)  Ours(Loc)  Ours(Loc+BL)
MPJPE     115.77    82.28     88.72      81.66         66.12      51.67
MPJPE-PA  78.17     61.22     59.85      63.70         52.73      40.91
Table 1: Quantitative comparison of 3D pose estimation to the state of the art. The metric is mean per joint position error with (MPJPE-PA) and without (MPJPE) Procrustes alignment; lower is better. Baseline methods are fine-tuned on our NBA2K dataset.
Method  HMR [34]  SPIN [37]  SMPLify-X [49]  PIFu [61]  Ours
CD      22.411    14.793     47.720          23.136     4.934
EMD     0.137     0.125      0.187           0.207      0.087
Table 2: Quantitative comparison of our mesh reconstruction to the state of the art, using Chamfer distance (CD, scaled by 1000) and Earth-mover's distance (EMD); lower is better for both. Both metrics show that our method significantly outperforms the state of the art for mesh estimation. All related works are retrained or fine-tuned on our data; see text.

5 Experiments

Dataset Preparation. We evaluate our method against the state of the art on our NBA2K dataset. We collected 27,144 meshes spanning 27 subjects performing various basketball poses (about 1000 poses per player). PoseNet must generalize to real images; thus, we augment the data to 265,765 training examples, 37,966 validation examples, and 66,442 testing examples. Augmentation is done by rendering and blending meshes into various random basketball courts. For IdentityNet and SkinningNet, we select 19,667 examples from 20 subjects as training data and test on 7,477 examples from 7 unseen players. To further evaluate generalization of our method, we also provide qualitative results on real images. Note that textures are extracted from the game and not estimated by our algorithm.

5.1 3D Pose, Jump, and Global Position Evaluation

We evaluate pose estimation by comparing to state-of-the-art SMPL-based methods that have released training code. Specifically, we compare with HMR [34], CMR [38], and SPIN [37]. For a fair comparison, we fine-tuned their models with 3D and 2D ground-truth NBA2K poses. Since NBA2K and SMPL meshes have different topology, we do not use mesh vertices and SMPL parameters as part of the supervision. Table 1 shows comparison results for 3D pose. The metric is mean per joint position error (MPJPE) with and without Procrustes alignment, computed on 14 joints as defined by the LSP dataset [32]. Our method outperforms all other methods, even when they are fine-tuned on our NBA2K dataset (lower numbers are better).
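The evaluation metric can be computed as below. This uses a standard similarity (Procrustes) alignment of the prediction to the ground truth before measuring error; the authors' exact evaluation script may differ in detail:

```python
import numpy as np

def mpjpe(pred, gt, procrustes=False):
    """Mean per-joint position error between (J, 3) joint arrays.
    With procrustes=True, pred is first aligned to gt by the optimal
    similarity transform (translation, rotation, uniform scale)."""
    if procrustes:
        P = pred - pred.mean(0)
        G = gt - gt.mean(0)
        U, S, Vt = np.linalg.svd(P.T @ G)
        R = U @ Vt
        if np.linalg.det(R) < 0:   # fix a possible reflection
            U[:, -1] *= -1
            S[-1] *= -1
            R = U @ Vt
        s = S.sum() / (P ** 2).sum()
        pred = s * P @ R + gt.mean(0)
    return np.mean(np.linalg.norm(pred - gt, axis=1))
```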

To further evaluate our design choices, we compare the location-map-based representation (used in our network) with direct regression of 3D joints, and also evaluate the effect of bone length (BL) loss on pose prediction. A direct regression baseline is created by replacing our deconvolution network with fully connected layers [44]. The effectiveness of BL loss is evaluated by running the network with and without it. As shown in Table 1, both location maps and BL loss can boost the performance. In supplementary material, we show our results on global position estimation. We can see that our method can accurately place players (both airborne and on ground) on the court due to accurate jump estimation.

Figure 5: Comparison with SMPL-based methods. Column 1 is input, columns 2-5 are reconstructions in the image view, columns 6-9 are visualizations from a novel view. Note the significant difference in body pose between ours and SMPL-based methods.
Figure 6: Comparison with PIFu[61]. Column 1 is input, columns 2-5 are reconstructions in the image viewpoint, columns 6-9 are visualizations from a novel view. PIFu significantly over-smooths shape details and produces lower quality reconstruction even when trained on our dataset (PIFu+NBA).
Figure 7: Garment details at various poses. For each input image, we show the predicted shape, close-ups from two viewpoints.

5.2 3D Mesh Evaluation

Quantitative Results. Table 2 compares our mesh reconstruction method to the state of the art on NBA2K data. We compare to both undressed (HMR [34], SMPLify-X [49], SPIN [37]) and clothed (PIFu [61]) human reconstruction methods. For a fair comparison, we retrain PIFu on our NBA2K meshes. SPIN and HMR are based on the SMPL model, for which we do not have ground-truth meshes, so we fine-tuned them with NBA2K 2D and 3D poses. SMPLify-X is an optimization method, so we directly apply it to our testing examples. The meshes generated by baseline methods and the NBA2K meshes do not have one-to-one vertex correspondence, so we use Chamfer distance (CD) and Earth-mover's distance (EMD) as metrics. Prior to distance computations, all predictions are aligned to the ground truth using ICP. Our method outperforms both undressed and clothed human reconstruction methods, even when they are trained on our data.
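Since the predicted and ground-truth meshes lack vertex correspondence, the comparison uses point-set distances. The exact CD variant is not spelled out here; the sketch below computes a common symmetric point-to-point Chamfer distance (the table's CD values are additionally scaled by 1000), brute-force for clarity:

```python
import numpy as np

def chamfer(A, B):
    """Symmetric Chamfer distance between point sets A (N, 3) and B (M, 3):
    mean nearest-neighbor distance from A to B plus from B to A."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

For large meshes a k-d tree (e.g. scipy.spatial.cKDTree) replaces the O(NM) pairwise matrix.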

Figure 8: Results on real images. For each example, column 1 is the input image, 2-3 are reconstructions rendered in different views. 4-5 are corresponding renderings using texture from the video game, just for visualization. Our technical method is focused only on shape recovery. Photo Credit: [16]

Qualitative Results. Fig. 5 qualitatively compares our results with the best performing SMPL-based methods SPIN [37] and SMPLify-X [49]. These two methods do not reconstruct clothes, so we focus on the pose accuracy of the body shape. Our method generates more accurate body shape for basketball poses, especially for hands and fingers. Fig. 6 qualitatively compares with PIFu [61], a state-of-the-art clothed human reconstruction method. Our method generates detailed geometry such as shirt wrinkles under different poses while PIFu tends to over-smooth faces, hands, and garments. Fig. 7 further visualizes garment details in our reconstructions. Fig. 8 shows results of our method on real images, demonstrating robust generalization. Please also refer to the supplementary pdf and video for high quality reconstruction of real NBA players.

5.3 Ablative Study

Figure 9: Comparison with SMPL-NBA. Column 1 is input, columns 2-4 are reconstructions in the image view, columns 5-7 are visualizations from a novel viewpoint. SMPL-NBA fails to model clothing and the fitting process is unstable.

Comparison with SMPL-NBA. We follow the idea of SMPL [43] to train a skinning model from NBA2K registered mesh sequences; the trained body model is called SMPL-NBA. Since we do not have rest pose meshes for thousands of different subjects, we cannot learn a meaningful PCA shape basis as SMPL did. Thus, we focus on the pose-dependent part and fit the SMPL-NBA model to 2000 meshes of a single player. We use the same skeleton rig as SMPL to drive the mesh. Since our mesh is comprised of multiple connected parts, we initialize the skinning weights using a voxel-based heat diffusion method [13]. The training process of SMPL-NBA is otherwise the same as the pose parameter training of SMPL. We fit the learned model to 2D and 3D keypoints predicted by PoseNet, following SMPLify [7]. Fig. 9 compares SkinningNet with SMPL-NBA, showing that SMPL-NBA has severe artifacts in garment deformation – an inherent difficulty for traditional skinning methods. It also suffers from twisted joints, a common problem when fitting per-bone transformations to 3D and 2D keypoints.

Method    CMR [38]  3D-CODED [20]  Ours
MPVPE     85.26     84.22          76.41
MPVPE-PA  64.32     63.13          54.71
Table 3: Quantitative comparison with 3D-CODED [20] and CMR [38]. The metric is mean per vertex position error in mm with (MPVPE-PA) and without (MPVPE) Procrustes alignment. All baseline methods are trained on the NBA2K data.

Comparison with Other Geometry Learning Methods. Fig. 10 compares SkinningNet with two state-of-the-art mesh-based shape deformation networks: 3D-CODED [20] and CMR [38]. The baseline methods are retrained on the same data as SkinningNet for a fair comparison. For 3D-CODED, we take 3D pose as input instead of a point cloud to deform the template mesh. For CMR, we use only their mesh regression network (no SMPL regression network) and replace images with 3D pose as input. Both methods use the same 3D pose encoder as SkinningNet. The input template mesh is set to the prediction of IdentityNet. Unlike the baselines, SkinningNet does not suffer from substantial deformation errors when the target pose is far from the rest pose. Table 3 provides further quantitative results based on mean per vertex position error (MPVPE) with and without Procrustes alignment.

Figure 10: Comparison with 3D-CODED [20] and CMR [38]. Column 1 is input, columns 2-5 are reconstructions in the image view, columns 6-9 are zoomed-in version of the red boxes. The baseline methods exhibit poor deformations for large deviations from the rest pose.

6 Discussion

We have presented a novel system for state-of-the-art, detailed 3D reconstruction of complete basketball player models from single photos. Our method includes 3D pose estimation, jump estimation, an identity network to deform a template mesh to the person in the photo (to estimate rest pose shape), and finally a skinning network that retargets the shape from rest pose to the pose in the photo. We thoroughly evaluated our method compared to prior art; both quantitative and qualitative results demonstrate substantial improvements over the state-of-the-art in pose and shape reconstruction from single images. For fairness, we retrained competing methods to the extent possible on our new data. Our data, models, and code will be released to the research community.

Limitations and future work. This paper focuses solely on high quality shape estimation of basketball players, and does not estimate texture – a topic for future work. Additionally, IdentityNet cannot model hair and facial identity due to the lack of detail in low resolution input images. Finally, the current system operates on single image input only; a future direction is to generalize to video with temporal dynamics.

Acknowledgments This work was supported by NSF/Intel Visual and Experimental Computing Award #1538618 and the UW Reality Lab funding from Facebook, Google and Futurewei. We thank Visual Concepts for allowing us to capture, process, and share NBA2K19 data for research.

References

  • [1] T. Alldieck, M. Magnor, B. L. Bhatnagar, C. Theobalt, and G. Pons-Moll (2019) Learning to reconstruct people in clothing from a single RGB camera. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [2] T. Alldieck, M. Magnor, W. Xu, C. Theobalt, and G. Pons-Moll (2018) Video based reconstruction of 3d people models. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [3] T. Alldieck, G. Pons-Moll, C. Theobalt, and M. Magnor (2019) Tex2Shape: detailed full human body geometry from a single image. In IEEE International Conference on Computer Vision (ICCV), Cited by: §2, Figure 15, §5.
  • [4] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, and J. Davis (2005) SCAPE: shape completion and animation of people. In ACM transactions on graphics (TOG), Vol. 24, pp. 408–416. Cited by: §2.
  • [5] A. Ranjan, T. Bolkart, S. Sanyal, and M. J. Black (2018) Generating 3D faces using convolutional mesh autoencoders. In European Conference on Computer Vision (ECCV), pp. 725–741. Cited by: §4.1.
  • [6] B. L. Bhatnagar, G. Tiwari, C. Theobalt, and G. Pons-Moll (2019-10) Multi-garment net: learning to dress 3d people from images. In IEEE International Conference on Computer Vision (ICCV), Cited by: §2.
  • [7] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black (2016-10) Keep it SMPL: automatic estimation of 3D human pose and shape from a single image. In Computer Vision – ECCV 2016, Lecture Notes in Computer Science. Cited by: §2, §5.3.
  • [8] G. Bouritsas, S. Bokhnyak, S. Ploumpis, M. Bronstein, and S. Zafeiriou (2019) Neural 3d morphable models: spiral convolutional networks for 3d shape representation learning and generation. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §4.1, §4.2.
  • [9] K. Calagari, M. Elgharib, P. Didyk, A. Kaspar, W. Matuisk, and M. Hefeeda (2015) Gradient-based 2-d to 3-d conversion for soccer videos. In ACM Multimedia, pp. 605–619. Cited by: §2.
  • [10] Z. Cao, G. Hidalgo, T. Simon, S. Wei, and Y. Sheikh (2018) OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields. arXiv preprint arXiv:1812.08008. Cited by: Figure 2.
  • [11] P. Carr, Y. Sheikh, and I. Matthews (2012) Pointless calibration: camera parameters from gradient-based alignment to edge images. In WACV, Cited by: §4.1.
  • [12] D. Clevert, T. Unterthiner, and S. Hochreiter (2015) Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289. Cited by: §4.1.
  • [13] O. Dionne and M. de Lasa (2013) Geodesic voxel binding for production character meshes. In Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 173–180. Cited by: §5.3.
  • [14] M. Garland and P. S. Heckbert (1997) Surface simplification using quadric error metrics. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pp. 209–216. Cited by: §4.1.
  • [15] M. Germann, A. Hornung, R. Keiser, R. Ziegler, S. Würmlin, and M. Gross (2010) Articulated billboards for video-based rendering. In Computer Graphics Forum, Vol. 29, pp. 585–594. Cited by: §2.
  • [16] Getty Images. Note: https://www.gettyimages.com Cited by: Figure 8.
  • [17] R. Girdhar, D. F. Fouhey, M. Rodriguez, and A. Gupta (2016) Learning a predictable and generative vector representation for objects. In European Conference on Computer Vision (ECCV), pp. 484–499. Cited by: §4.2.
  • [18] O. Grau, A. Hilton, J. Kilner, G. Miller, T. Sargeant, and J. Starck (2007) A free-viewpoint video system for visualization of sport scenes. SMPTE motion imaging journal 116 (5-6), pp. 213–219. Cited by: §2.
  • [19] O. Grau, G. A. Thomas, A. Hilton, J. Kilner, and J. Starck (2007) A robust free-viewpoint video system for sport scenes. In 2007 3DTV conference, pp. 1–4. Cited by: §2.
  • [20] T. Groueix, M. Fisher, V. G. Kim, B. C. Russell, and M. Aubry (2018) 3d-coded: 3d correspondences by deep deformation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 230–246. Cited by: §4.2, Figure 10, §5.3, Table 3.
  • [21] T. Groueix, M. Fisher, V. G. Kim, B. C. Russell, and M. Aubry (2018) A papier-mâché approach to learning 3d surface generation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 216–224. Cited by: §4.2.
  • [22] J. Guillemaut and A. Hilton (2011) Joint multi-layer segmentation and reconstruction for free-viewpoint video applications. IJCV. Cited by: §2.
  • [23] J. Guillemaut, J. Kilner, and A. Hilton (2009) Robust graph-cut scene segmentation and reconstruction for free-viewpoint video of complex dynamic scenes. In ICCV, Cited by: §2.
  • [24] R. A. Guler and I. Kokkinos (2019-06) HoloPose: holistic 3d human reconstruction in-the-wild. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [25] M. Habermann, W. Xu, M. Zollhoefer, G. Pons-Moll, and C. Theobalt (2019) LiveCap: real-time human performance capture from monocular video. ACM Transactions on Graphics, (Proc. SIGGRAPH). Cited by: §2.
  • [26] I. Habibie, W. Xu, D. Mehta, G. Pons-Moll, and C. Theobalt (2019-06) In the wild human pose estimation using explicit 2d features and intermediate 3d representations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [27] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §4.2.
  • [28] Y. Huang, F. Bogo, C. Lassner, A. Kanazawa, P. V. Gehler, J. Romero, I. Akhter, and M. J. Black (2017) Towards accurate marker-less human shape and pose estimation over time. In 2017 International Conference on 3D Vision (3DV), pp. 421–430. Cited by: §2.
  • [29] Intel True View. Note: www.intel.com/content/www/us/en/sports/technology/true-view.html Cited by: §2.
  • [30] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu (2013) Human3.6M: large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (7), pp. 1325–1339. Cited by: §2.
  • [31] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125–1134. Cited by: §3.
  • [32] S. Johnson and M. Everingham (2010) Clustered pose and nonlinear appearance models for human pose estimation. In Proceedings of the British Machine Vision Conference, Note: doi:10.5244/C.24.12 Cited by: §5.1.
  • [33] H. Joo, T. Simon, and Y. Sheikh (2018) Total capture: a 3d deformation model for tracking faces, hands, and bodies. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8320–8329. Cited by: §2.
  • [34] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik (2018) End-to-end recovery of human shape and pose. In Computer Vision and Pattern Regognition (CVPR), Cited by: §2, Table 1, Table 2, §5.1, §5.2.
  • [35] A. Kanazawa, J. Y. Zhang, P. Felsen, and J. Malik (2019) Learning 3d human dynamics from video. In Computer Vision and Pattern Regognition (CVPR), Cited by: §2.
  • [36] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.1.
  • [37] N. Kolotouros, G. Pavlakos, M. J. Black, and K. Daniilidis (2019) Learning to reconstruct 3d human pose and shape via model-fitting in the loop. In Proceedings of the IEEE International Conference on Computer Vision, Cited by: §2, Table 1, Table 2, §5.1, §5.2, §5.2, §5.
  • [38] N. Kolotouros, G. Pavlakos, and K. Daniilidis (2019) Convolutional mesh regression for single-image human shape reconstruction. In CVPR, Cited by: §2, Table 1, Figure 10, §5.1, §5.3, Table 3.
  • [39] P. Krähenbühl (2018) Free supervision from video games. In CVPR, Cited by: §2.
  • [40] C. Lassner, J. Romero, M. Kiefel, F. Bogo, M. J. Black, and P. V. Gehler (2017) Unite the people: closing the loop between 3d and 2d human representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6050–6059. Cited by: §2.
  • [41] V. Lepetit, F. Moreno-Noguer, and P. Fua (2009) Epnp: an accurate o (n) solution to the pnp problem. International journal of computer vision 81 (2), pp. 155. Cited by: §4.1.
  • [42] D. C. Liu and J. Nocedal (1989) On the limited memory bfgs method for large scale optimization. Mathematical programming 45 (1-3), pp. 503–528. Cited by: §4.2.
  • [43] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black (2015) SMPL: a skinned multi-person linear model. ACM transactions on graphics (TOG) 34 (6), pp. 248. Cited by: §1, §2, §2, §5.3.
  • [44] J. Martinez, R. Hossain, J. Romero, and J. J. Little (2017) A simple yet effective baseline for 3d human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2640–2649. Cited by: §2, §2, §4.1, §4.1, §4.2, §5.1.
  • [45] D. Mehta, H. Rhodin, D. Casas, P. Fua, O. Sotnychenko, W. Xu, and C. Theobalt (2017) Monocular 3d human pose estimation in the wild using improved cnn supervision. In 2017 International Conference on 3D Vision (3DV), pp. 506–516. Cited by: §2.
  • [46] D. Mehta, S. Sridhar, O. Sotnychenko, H. Rhodin, M. Shafiei, H. Seidel, W. Xu, D. Casas, and C. Theobalt (2017) Vnect: real-time 3d human pose estimation with a single rgb camera. ACM Transactions on Graphics (TOG) 36 (4), pp. 44. Cited by: §2, §2, §4.1.
  • [47] G. Moon, J. Chang, and K. M. Lee (2019) Camera distance-aware top-down approach for 3d multi-person pose estimation from a single rgb image. In The IEEE Conference on International Conference on Computer Vision (ICCV), Cited by: §2.
  • [48] R. Natsume, S. Saito, Z. Huang, W. Chen, C. Ma, H. Li, and S. Morishima (2019) Siclope: silhouette-based clothed people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4480–4490. Cited by: §2.
  • [49] G. Pavlakos, V. Choutas, N. Ghorbani, T. Bolkart, A. A. A. Osman, D. Tzionas, and M. J. Black (2019) Expressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §2, §4.2, §4.2, Table 2, §5.2, §5.2, §5.
  • [50] G. Pavlakos, N. Kolotouros, and K. Daniilidis (2019) TexturePose: supervising human mesh estimation with texture consistency. In ICCV, Cited by: §2.
  • [51] G. Pavlakos, L. Zhu, X. Zhou, and K. Daniilidis (2018) Learning to estimate 3d human pose and shape from a single color image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 459–468. Cited by: §2.
  • [52] G. Pons-Moll, J. Romero, N. Mahmood, and M. J. Black (2015-08) Dyna: a model of dynamic human shape in motion. ACM Transactions on Graphics, (Proc. SIGGRAPH) 34 (4), pp. 120:1–120:14. Cited by: §2.
  • [53] A. Pumarola, J. Sanchez, G. Choi, A. Sanfeliu, and F. Moreno-Noguer (2019) 3DPeople: Modeling the Geometry of Dressed Humans. In ICCV, Cited by: §2.
  • [54] K. Rematas, I. Kemelmacher-Shlizerman, B. Curless, and S. Seitz (2018) Soccer on your tabletop. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4738–4747. Cited by: §2, §2, §3, §4.1.
  • [55] RenderDoc. Note: https://renderdoc.org Cited by: §3.
  • [56] RenderPeople. Note: https://renderpeople.com Cited by: §1.
  • [57] S. R. Richter, Z. Hayder, and V. Koltun (2017) Playing for benchmarks. In ICCV, Cited by: §2.
  • [58] S. R. Richter, V. Vineet, S. Roth, and V. Koltun (2016) Playing for data: Ground truth from computer games. In ECCV, Cited by: §2.
  • [59] K. M. Robinette, S. Blackwell, H. Daanen, M. Boehmer, and S. Fleming (2002) Civilian American and European Surface Anthropometry Resource (CAESAR), final report, Volume 1: Summary. Technical report, Sytronics Inc., Dayton, OH. Cited by: §1.
  • [60] J. Romero, D. Tzionas, and M. J. Black (2017-11) Embodied hands: modeling and capturing hands and bodies together. ACM Transactions on Graphics, (Proc. SIGGRAPH Asia) 36 (6). Cited by: §2.
  • [61] S. Saito, Z. Huang, R. Natsume, S. Morishima, A. Kanazawa, and H. Li (2019) PIFu: pixel-aligned implicit function for high-resolution clothed human digitization. arXiv preprint arXiv:1905.05172. Cited by: §1, §2, Table 2, Figure 17, Figure 6, §5.2, §5.2, §5.
  • [62] T. Simon, H. Joo, I. Matthews, and Y. Sheikh (2017) Hand keypoint detection in single images using multiview bootstrapping. In CVPR, Cited by: Figure 2.
  • [63] O. Sorkine and M. Alexa (2007) As-rigid-as-possible surface modeling. In Symposium on Geometry processing, Vol. 4, pp. 109–116. Cited by: §4.2, §4.2.
  • [64] O. Sorkine, D. Cohen-Or, Y. Lipman, M. Alexa, C. Rössl, and H. Seidel (2004) Laplacian surface editing. In Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on Geometry processing, pp. 175–184. Cited by: §4.2, §4.2.
  • [65] X. Sun, B. Xiao, F. Wei, S. Liang, and Y. Wei (2018) Integral human pose regression. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 529–545. Cited by: §2.
  • [66] USA TODAY Network. Note: https://www.commercialappeal.com Cited by: Figure 1.
  • [67] G. Varol, D. Ceylan, B. Russell, J. Yang, E. Yumer, I. Laptev, and C. Schmid (2018) BodyNet: volumetric inference of 3D human body shapes. In ECCV, Cited by: §2.
  • [68] VISUAL CONCEPTS. Note: https://vcentertainment.com Cited by: §1, §3.
  • [69] T. von Marcard, R. Henschel, M. J. Black, B. Rosenhahn, and G. Pons-Moll (2018) Recovering accurate 3d human pose in the wild using imus and a moving camera. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 601–617. Cited by: §2.
  • [70] C. Weng, B. Curless, and I. Kemelmacher-Shlizerman (2019) Photo wake-up: 3d character animation from a single photo. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5908–5917. Cited by: §1, §2, §2.
  • [71] D. Xiang, H. Joo, and Y. Sheikh (2019) Monocular total capture: posing face, body, and hands in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §2.
  • [72] B. Xiao, H. Wu, and Y. Wei (2018) Simple baselines for human pose estimation and tracking. In European Conference on Computer Vision (ECCV), Cited by: §2, §4.1.
  • [73] F. Xu, Y. Liu, C. Stoll, J. Tompkin, G. Bharaj, Q. Dai, H. Seidel, J. Kautz, and C. Theobalt (2011-07) Video-based characters: creating new human performances from a multi-view video database. ACM Trans. Graph. 30 (4), pp. 32:1–32:10. Cited by: §2.
  • [74] W. Xu, A. Chatterjee, M. Zollhöfer, H. Rhodin, D. Mehta, H. Seidel, and C. Theobalt (2018) MonoPerfCap: human performance capture from monocular video. ACM Trans. Graph.. Cited by: §2.
  • [75] A. Zanfir, E. Marinoiu, and C. Sminchisescu (2018) Monocular 3d pose and shape estimation of multiple people in natural scenes-the importance of multiple scene constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2148–2157. Cited by: §2.
  • [76] H. Zhu, X. Zuo, S. Wang, X. Cao, and R. Yang (2019-06) Detailed human shape estimation from a single image by hierarchical mesh deformation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.

1 NBA2K Dataset Capture

In this section we provide more details on how we select frames to capture for the NBA2K dataset.

One way to decide which frames to capture is to let the game's AI play two teams against each other; however, we found that the variety of poses captured in this manner is rather limited. It yields mostly walking and running players, while we target more complex basketball moves. Instead, we have people play the game and proactively capture frames in which dunks, dribbles, shots, and other complex basketball moves occur.

2 PoseNet

In this section we provide more details for the PoseNet architecture and setup.

The input is a single, person-centered image. We extract ResNet [72] features from layer 4 and feed them to four separate network branches (2D pose, 3D pose, jump class, jump height). The 2D and 3D pose branches each consist of three Deconvolution-BatchNorm-ReLU blocks. For the jump class, we use a fully connected layer followed by two linear residual blocks [44] to get the final output, and we use the same network architecture for the jump height branch. We estimate both the jump class and the jump height because the jump class can serve as a gate to reject inaccurate jump height predictions during global position estimation.

The 2D pose branch outputs a set of 2D heatmaps, one per keypoint, indicating where that keypoint is located. Similarly, the 3D pose branch outputs a set of location maps [46], where each location map indicates the possible 3D location of its keypoint at every pixel. Each location map has 3 channels that encode the position of the keypoint with respect to the pelvis. To generate the ground truth heatmaps, we first transform the 2D pose from its original image resolution to the heatmap resolution, then generate a 2D Gaussian map centered at each joint location. For the ground truth XYZ location maps, we put the 3D joint location at the positions where the heatmap is non-zero. To obtain the final output, we take the location of the maximum value in every keypoint heatmap to get the 2D pose at heatmap resolution and use it to sample the 3D pose from the location maps. The 2D pose is then transformed back to the original resolution. The ground truth jump height is directly extracted from the game, and the jump class is set to 1 if the jump height is greater than 0.1m.
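The ground-truth generation and decoding described above can be sketched as follows (NumPy; the array shapes and the `decode` helper name are our assumptions, not the paper's code):

```python
import numpy as np

def gaussian_heatmap(size, center, sigma=2.0):
    """Ground-truth heatmap: a 2D Gaussian centered at the joint's (x, y)."""
    ys, xs = np.mgrid[0:size, 0:size]
    return np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / (2 * sigma ** 2))

def decode(heatmaps, location_maps):
    """Argmax each keypoint heatmap for the 2D pose, then sample the
    pelvis-relative 3D pose from the location maps at that pixel.

    heatmaps: (K, H, W); location_maps: (K, 3, H, W).
    """
    K, H, W = heatmaps.shape
    pose2d = np.zeros((K, 2), dtype=int)
    pose3d = np.zeros((K, 3))
    for k in range(K):
        y, x = np.unravel_index(np.argmax(heatmaps[k]), (H, W))
        pose2d[k] = (x, y)                       # 2D pose at heatmap resolution
        pose3d[k] = location_maps[k, :, y, x]    # sample XYZ at the 2D peak
    return pose2d, pose3d
```

The 2D result would then be rescaled to the original image resolution, as in the text.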

Figure 11: Court line generation on synthetic data. For every example, from left to right: input image, predicted court lines overlaid on the input image, ground truth court lines overlaid on the input image.
Figure 12: Court line generation on real data. For every example, left is input image, right is predicted court lines overlaid on the input image.
Figure 13: Global position estimation. Please zoom in to see details. From left to right: input images, two views of the estimated location (middle and right). Note the location of players with respect to court lines (marked with red boxes).

3 Global Position

In this section we describe the process of placing a 3D player in its corresponding position on (or above) the basketball court.

Since a basketball court with players typically exhibits more occlusions (and more curved lines) than a soccer field, we found that the traditional line detection method used in [54] fails. To get robust line features, we instead train a pix2pix [31] network to translate basketball images into court-line masks. For the training data, we use synthetic data from NBA2K, where the predefined 3D court lines are projected into image space using the extracted camera parameters. To demonstrate the robustness of our line feature extraction method, we provide results on synthetic data in Figure 11 and on real data in Figure 12.
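Generating those training masks reduces to a standard pinhole projection of the 3D court lines. A minimal NumPy sketch, assuming the convention X_c = R X_w + T and an intrinsics matrix K with focal lengths on the diagonal (the function name is ours):

```python
import numpy as np

def project_court_lines(points_w, K, R, T):
    """Project 3D world-space court-line points into the image.

    points_w: (N, 3) world coordinates; K: (3, 3) intrinsics;
    R, T: world-to-camera rotation and translation (X_c = R X_w + T).
    """
    X_c = points_w @ R.T + T               # world -> camera coordinates
    uv = X_c[:, :2] / X_c[:, 2:3]          # perspective divide
    f = np.array([K[0, 0], K[1, 1]])       # focal lengths
    c = np.array([K[0, 2], K[1, 2]])       # principal point
    return uv * f + c                      # (N, 2) pixel coordinates
```

Rasterizing the projected polylines per frame yields the ground-truth court-line masks for the pix2pix training pairs.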

After estimating the camera parameters, we place the player mesh in 3D by considering its 2D pose in the image and the jump height (Sec 4.1):

    X_c = z_c [ (u - c_x)/f, (v - c_y)/f, 1 ]^T    (3)
    h = r_2 · (X_c - T)    (4)

where r_2 is the second column of the extrinsic rotation matrix; T is the extrinsic translation; f is the focal length; (c_x, c_y) is the principal point; X_c is the camera-space position of the lowest joint (e.g., a foot); h is the world-coordinate height component of the lowest joint, which equals the predicted jump height; and (u, v) are the pixel coordinates of the lowest joint. Substituting Eqn. 3 into Eqn. 4, we can solve for z_c (the camera-space depth of the lowest joint), from which we can further compute the global position of the player. In Figure 13, we show our results of global position estimation. Our method accurately places players (both airborne and on the ground) on the court thanks to accurate jump estimation.
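Solving these two constraints for the depth of the lowest joint is a one-liner. The sketch below assumes the convention X_c = R X_w + T with the world height axis along the second column of R; sign conventions for image and world axes vary between implementations, so treat this as illustrative rather than the paper's exact code:

```python
import numpy as np

def lowest_joint_depth(u, v, f, c, R, T, jump_height):
    """Camera-space depth z_c of the lowest joint.

    Back-project the pixel to a ray X_c = z_c * d (Eq. 3), then require the
    world-space height of that point to equal the predicted jump height
    (Eq. 4): h = r2 . (X_c - T), where r2 is the second column of R.
    Substituting gives a linear equation in z_c.
    """
    d = np.array([(u - c[0]) / f, (v - c[1]) / f, 1.0])   # back-projected ray
    r2 = R[:, 1]                                          # world-height direction
    return (jump_height + r2 @ T) / (r2 @ d)
```

With z_c in hand, X_c = z_c * d gives the joint's camera-space position, and applying the inverse extrinsics gives the player's global court position.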

4 Mesh Generation

4.1 SkinningNet

In this section we provide more details for the SkinningNet architecture.

As we noted in the main paper, the pose encoder is composed of a linear residual block [44] followed by a fully connected layer. The linear residual block consists of four FC-BatchNorm-ReLU-Dropout blocks with a skip connection from the input to the output. For the mesh part, we denote Spiral Convolution [8] as SC, and the mesh downsampling and upsampling operators [5] as DS and US. The mesh encoder consists of four SC-ELU [12]-DS blocks followed by an FC layer. The mesh decoder consists of an FC layer, four US-SC-ELU blocks, and a final SC layer. We follow COMA [5] for the mesh sampling operations: vertices are removed by minimizing quadric errors [14] during downsampling and added back via barycentric interpolation during upsampling. Table 4 provides detailed settings for the mesh encoders and decoders of the different body parts.
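A single spiral convolution (SC) layer [8] can be sketched as a gather-then-linear operation. In the NumPy sketch below, the fixed-length spiral index table is assumed to be precomputed per template mesh, and all names and shapes are illustrative:

```python
import numpy as np

def spiral_conv(features, spiral_indices, weight, bias):
    """One spiral convolution layer (gather spiral neighbors, shared linear map).

    features:       (V, C_in) per-vertex features
    spiral_indices: (V, L) precomputed spiral neighborhoods (vertex ids),
                    fixed for the template mesh topology
    weight:         (L * C_in, C_out); bias: (C_out,)
    """
    V, C_in = features.shape
    gathered = features[spiral_indices]        # (V, L, C_in) neighbor features
    flat = gathered.reshape(V, -1)             # concatenate along the spiral
    return flat @ weight + bias                # (V, C_out)
```

Because the spiral ordering is fixed per topology, the same linear map is shared across all vertices, analogous to weight sharing in image convolutions.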

Training details. For training IdentityNet and SkinningNet, we use a batch size of 16 for 200 epochs and optimize with the Adam solver [36] with weight decay. The learning rate for IdentityNet is 0.0002, while the learning rate for SkinningNet is 0.001, decayed by a factor of 0.99 after every epoch. The weights of the different losses are fixed.
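The per-epoch learning-rate schedule stated above can be written directly (the function name is our own):

```python
def learning_rate(epoch, net):
    """Learning rate at a given epoch, per the training details above."""
    if net == "identity":
        return 0.0002                   # constant for IdentityNet
    return 0.001 * (0.99 ** epoch)      # exponential decay for SkinningNet
```

This is equivalent to an exponential-decay scheduler with gamma 0.99 stepped once per epoch in common deep learning frameworks.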

              head        arm         shoes       shirt       pant        leg
NV            348         842         937         2098        1439        372
DS Factor     (2,2,1,1)   (2,2,2,1)   (2,2,2,1)   (4,2,2,2)   (2,2,2,2)   (2,2,1,1)
NZ            32 for all body parts
Filter Size   (16,32,64,64) for encoders, (64,32,16,16,3) for decoders
Dilation      (2,2,1,1) for encoders, (1,1,2,2,2) for decoders
Step Size     (2,2,1,1) for encoders, (1,1,2,2,2) for decoders

Table 4: Network architecture of the mesh encoders and decoders for different body parts. NV is the number of vertices; DS Factor the downsampling factors; NZ the size of the latent vector; Filter Size the output channels of each SC layer; Dilation the dilation ratio for SC; Step Size the number of hops for SC.

4.2 Combining body part meshes

In this section, we provide details of the interpenetration optimization.

As we noted in the main paper, we first detect all body-part vertices in collision with clothing as in [49], and then follow [63, 64] to deform the mesh, moving collision vertices inside the garment while preserving the local rigidity of the mesh. This detection-deformation process is repeated until there are no collisions or the number of iterations exceeds a threshold (10 in our experiments). Before each mesh deformation step, collision vertices are first moved 10mm in the direction opposite their vertex normals. Then we optimize the remaining vertex positions of the body parts by minimizing the following loss:

    E = λ_v E_v + λ_lap E_lap + λ_edge E_edge    (5)

where E_v forces the optimized vertices to stay close to the SkinningNet-inferred vertices, E_lap is the Frobenius norm of the difference between the Laplacians of the optimized and inferred meshes, and E_edge encourages each optimized edge length to match the corresponding inferred edge length; λ_v, λ_lap, and λ_edge are fixed weights. Each of these losses is taken as a sum over all vertices or edges. We use an L-BFGS solver [42], running for 20 iterations. Note that detected collision vertices, after being moved inward, are kept fixed during the optimization. This hard constraint ensures the optimization will not move these vertices back outside the garments in later iterations. Figure 14 shows results before and after interpenetration optimization for two examples.
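The objective in Eq. 5 can be sketched in NumPy as follows. The weights `w` are placeholders (the paper's values are not reproduced here), and we square the norms for a smooth objective; this is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def deform_loss(V_opt, V_ref, L, edges, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three terms in Eq. 5.

    V_opt: (V, 3) vertices being optimized
    V_ref: (V, 3) SkinningNet-inferred vertices
    L:     (V, V) mesh Laplacian matrix
    edges: (E, 2) vertex-index pairs
    """
    # E_v: stay close to the inferred vertices
    E_v = ((V_opt - V_ref) ** 2).sum()
    # E_lap: Frobenius norm of the Laplacian difference (squared here)
    E_lap = np.linalg.norm(L @ V_opt - L @ V_ref) ** 2
    # E_edge: preserve the inferred edge lengths
    e_opt = np.linalg.norm(V_opt[edges[:, 0]] - V_opt[edges[:, 1]], axis=1)
    e_ref = np.linalg.norm(V_ref[edges[:, 0]] - V_ref[edges[:, 1]], axis=1)
    E_edge = ((e_opt - e_ref) ** 2).sum()
    return w[0] * E_v + w[1] * E_lap + w[2] * E_edge
```

In the actual pipeline this objective is minimized with L-BFGS while the inward-moved collision vertices are held fixed as hard constraints.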

Figure 14: Before and after interpenetration optimization. Note the garment in the red square. Ground truth textures are used to better visualize the intersection.
Figure 15: Comparison with Tex2Shape [3]. Note that Tex2Shape only predicts rough body shape compared to our reconstructions. We follow the authors' advice to select images where the person is large and fully visible.

5 Further Qualitative Evaluation

In this section, we provide additional qualitative comparisons that further demonstrate the effectiveness of our system.

Fig. 15 shows a qualitative comparison with Tex2Shape [3]. Note that Tex2Shape is trained only on their A-pose data and tested directly on NBA images. Our method generates better shirt wrinkles and body details across different poses.

Figure 16: Comparison with SMPL-based methods on real images. Column 1 is input, columns 2-4 are reconstructions in the image view, columns 5-7 are visualizations from a novel viewpoint. Note the significant difference in body pose between ours and SMPL-based methods; our results are qualitatively much more similar to what is seen in the input images. In addition, SMPL-based methods do not handle clothing.
Figure 17: Comparison with PIFu [61] on real images. Column 1 is input (red box shows the target player), columns 2-4 are reconstructions in the image view, columns 5-7 are reconstructions in a novel view. PIFu fails to reconstruct high quality human shapes from real images, even when the players are in nearly standing poses.
Figure 18: Qualitative Results on real images. Please zoom in to see details. For every example, left is input (red box shows the target player), middle is reconstruction in the image view, right is reconstruction in a novel view. Our method generalizes well on real images under a variety of poses.

In the main paper, we only provide qualitative comparisons for synthetic data with state-of-the-art methods. In Figure 16, we compare our method against the best-performing SMPL-based methods [49, 37] on real images. In Figure 17, we additionally compare with PIFu [61], the state-of-the-art method for clothed subjects, on real images. Our system generates more stable poses and more realistic, fine details for real images.

In Figure 18, we provide additional qualitative results of our method on real images. Our method can reconstruct the 3D shape of different people in a variety of poses from real images.

In Figure 19, we provide examples where our approach fails to reconstruct a correct 3D shape from single view images.

Figure 19: Typical failure cases of our approach. Column 1 is input (red box shows the target player), columns 2-3 are reconstructions in the image view, columns 4-5 are reconstructions in a novel view, columns 6-7 are zoomed-in versions of main errors. Failures include erroneous pose due to heavy occlusion in multi-person scenes (first and second example), incorrect orientation of head and hands (third and fourth example).