Learning to Regress Bodies from Images using Differentiable Semantic Rendering

by Sai Kumar Dwivedi, et al.

Learning to regress 3D human body shape and pose (e.g. SMPL parameters) from monocular images typically exploits losses on 2D keypoints, silhouettes, and/or part-segmentation when 3D training data is not available. Such losses, however, are limited because 2D keypoints do not supervise body shape and segmentations of people in clothing do not match projected minimally-clothed SMPL shapes. To exploit richer image information about clothed people, we introduce higher-level semantic information about clothing to penalize clothed and non-clothed regions of the image differently. To do so, we train a body regressor using a novel Differentiable Semantic Rendering - DSR loss. For Minimally-Clothed regions, we define the DSR-MC loss, which encourages a tight match between a rendered SMPL body and the minimally-clothed regions of the image. For clothed regions, we define the DSR-C loss to encourage the rendered SMPL body to be inside the clothing mask. To ensure end-to-end differentiable training, we learn a semantic clothing prior for SMPL vertices from thousands of clothed human scans. We perform extensive qualitative and quantitative experiments to evaluate the role of clothing semantics on the accuracy of 3D human pose and shape estimation. We outperform all previous state-of-the-art methods on 3DPW and Human3.6M and obtain on par results on MPI-INF-3DHP. Code and trained models are available for research at https://dsr.is.tue.mpg.de/.





1 Introduction

Estimating 3D human pose and shape from in-the-wild images has received great research interest [4, 13, 15, 17, 19, 29, 33, 54] because of its varied applications in animation, games, and the fashion industry. One aspect that makes this problem challenging is the difficulty of obtaining accurate 3D ground-truth annotations, which require either specialized (mostly indoor) MoCap systems or careful calibration and setup of IMU sensors [46]. Such data would facilitate training robust regressors, paving the way for estimating human-scene interaction with greater granularity.

Given the lack of in-the-wild 3D ground truth, the vast majority of previous methods focus on 2D keypoints [4, 13] with some learned 3D priors. Even though sparse 2D keypoints provide useful constraints, relying only on them leads to unrealistic poses because of depth ambiguities and occlusion. They also do not provide reliable information about body shape. On the other hand, relying too strongly on 3D priors introduces bias. To circumvent this problem, recent approaches [29, 33, 50] propose to use part-segmentations or silhouettes. However, there is a mismatch between part-segmentations/silhouettes and projected SMPL bodies, since segmentation covers clothed bodies while common 3D body models are minimally clothed. We propose an alternative approach to compensate for limited 3D supervision that leverages high-level 2D image cues.

Specifically, we propose to use more detailed clothing segmentation labels to supervise a neural network. Traditional multi-class clothing segmentation approaches cannot be directly applied, as a segmentation loss tries to exactly match the rendered SMPL body. Hence, to make use of such labels, we need to reason about which parts of the SMPL body model correspond to which clothing label. This is non-trivial, because a body part can be covered by many clothing types. Therefore, we learn a semantic clothing prior from a large-scale dataset of clothed human scans with varied subjects, poses and camera views, to which the SMPL body is fitted [30]. This prior encodes the likelihood of clothing types given a vertex on the SMPL body model, which gives the correspondence between segmentation labels and the SMPL body surface. Then, we use this prior to calculate a loss between the SMPL body and observed clothing labels in images. To achieve this we introduce Differentiable Semantic Rendering (DSR), a novel loss that supervises the training of 3D body regression with clothing semantics using weak supervision [7].

Our novel loss has two components: DSR-C for supervising the clothing region and DSR-MC for the minimally-clothed region. A high-level illustration of our idea is shown in Fig. 2. While the former ensures that the rendered SMPL mesh stays inside the observed clothing label, the latter tries to tightly match the rendered SMPL mesh to the 2D minimal-clothing mask. The loss between the rendered output and the target mask is back-propagated using a differentiable renderer.

Figure 2: DSR Idea - For more accurate human body estimation, we supervise 3D body regression training with clothed and minimal-clothed regions differently using our novel DSR loss and our learned semantic prior. The semantic prior represents a distribution over possible clothing labels for each vertex. For easier illustration, we depict the most likely labels per-vertex here.

Specifically, for the DSR-MC term, we apply pixel-level supervision for tight fitting with the minimal-clothing regions, while for DSR-C, we minimize the negative log probability of a SMPL semantic part label being inside the respective segmentation mask. For example, there will be a high penalty if rendered vertices with a high probability of being “shirt” fall on “pants” segmentation pixels. To ensure that our method is fast and differentiable, we render the semantic class probabilities computed from 3D scans as textures of the SMPL mesh.

While training, DSR can be used as an additional loss in any neural network-based human body estimator that predicts SMPL parameters. First, we examine the effect of our approach over baselines with full-body mask supervision and 3D-joint-only supervision, which verifies our hypothesis about the value of clothing semantics. Then, we perform extensive comparisons and show that DSR outperforms previous state-of-the-art methods, as shown in Fig. 1. In summary:

  1. We explore the importance of clothing semantics for 3D human body estimation by introducing a novel differentiable semantic rendering loss that distinguishes between clothed and minimally-clothed regions.

  2. We estimate a semantic clothing prior for SMPL from 3D scans of clothed people; beyond our method, it can be used in any setting that requires per-vertex clothing probabilities for a 3D SMPL body.

  3. We outperform all state-of-the-art methods on 3DPW and Human3.6M and obtain on-par results on MPI-INF-3DHP, suggesting the value of using human parsing and semantics for more accurate human body estimation.

2 Related Work

Estimating human pose and shape is a rapidly growing field using different sources of supervision and input (image, video, keypoints, etc.). Here, we focus on work that estimates 3D human pose and shape from an RGB image. We refer to recent surveys [6, 38] for more details.

2.1 Image cues and 2D/3D joints

Towards estimating 3D human pose and shape, initial attempts focus on estimating the coordinates of 3D joints or heatmaps [11, 20, 32, 39, 40, 41, 43] from images using geometric assumptions for the human body and 3D training data. However, those approaches require 3D ground-truth data, which is limited in terms of pose variation, quantity and background, and lacks generalization to in-the-wild images. The vast progress in 2D pose detection [5, 28, 42, 49], along with the introduction of parametric body models of pose and shape [2, 24], led to significant progress and high-quality in-the-wild 3D human estimation. In [4] the authors use 2D keypoints to obtain SMPL parameters with an optimization-based approach, a process later improved via human annotations on predicted fits [19]. Martinez et al. [26] show that lifting the predictions of a 2D keypoint detector provides a reasonable baseline for 3D pose. Pavlakos et al. [31] use additional ordinal depth annotations for weak 3D supervision. Kolotouros et al. [18] regress vertex locations using a sub-sampled SMPL mesh and Graph-CNNs. Xiang et al. [47] extract joint confidence maps and 3D orientation information via CNNs and pair them with a deformable body model. Furthermore, HMR [15] trains a regressor from images to SMPL parameters, using a discriminator with unpaired 3D data [25] to encourage plausible poses. Along these lines, some recent approaches that use video as input apply similar methods to predict temporal kinematics of 3D bodies [14] and estimate the body using temporal features and a motion discriminator [16]. Another approach [51] disentangles the skeleton from the 3D human mesh and pairs it with a self-attention network to ensure temporal coherence. SPIN [17] revisits optimization methods in collaboration with neural networks: a network [15] provides an initial estimate to the optimization process (SMPLify). Moreover, a regressor-based alternative suggests using the 3D neural regressor as a pose prior [13]. Although such methods produce promising results, they typically estimate average body shapes, are not robust to occlusion, and produce poses that are only approximate. Without 3D training data the problem is hard.

Figure 3: Illustration of DSR - SMPL is rendered with the semantic prior learned from RenderPeople scans. The two novel loss terms are calculated based on different semantic regions of the clothed person. DSR-MC tightly fits the minimal-clothed region, while DSR-C ensures that the rendered body lies within the clothing boundaries.

2.2 Image alignment and pixel-level supervision

Concurrently, there is a line of research that uses additional constraints, beyond image features and 2D/3D joints, to better align the body with the image, such as dense body landmarks, silhouettes, body part segmentation or pixel-aligned implicit functions. Seminal early work uses a few keypoints along with the SCAPE body model and optimizes 3D body shape with silhouettes and smooth shading [8]. Along these lines, Balan et al. [3] propose a distance function for the connected silhouette to ensure the rendered 3D model falls inside the mask. Later work uses 2D keypoints, background segmentation, and SMPL to extract 3D bodies from images [44], similar to [33] who use silhouettes for supervision. Silhouettes, although they provide supervision when keypoints fail, are often ambiguous in the case of self-occlusion. Towards a detailed alignment of the 3D human body surface and pixels, the authors in [9] introduce a dataset with image-to-surface correspondences from MS-COCO [21] and a variant of Mask-RCNN that regresses UV coordinates from images. Part-segmentation masks and IUV maps are also used as dense supervision in [29] and [54], respectively. A continuous UV map of SMPL for direct pixel correspondence between the image and the 3D mesh is introduced in [54]. A similar approach [50] exploits IUV maps as a proxy representation; it estimates SMPL parameters by minimizing errors on dense body landmarks and human part masks, and also uses a motion discriminator. While the large majority of the aforementioned work leverages a parametric 3D body model, some recent work uses voxel representations along with 2D pose and part segmentation supervision [45] or employs implicit functions with surface reconstruction techniques to reconstruct clothed humans. Although these approaches output fine-grained details, they are unable to capture the shape under clothing and are prone to occlusion [10, 36, 37]. An interesting approach is proposed in [55], where a partial UV map of the person is used and human pose estimation is formulated as an image inpainting problem. Another work explores scene semantics [34], predicting the label of an occluding object and employing this information to detect invisible joints. Finally, Zanfir et al. [52] represent the body with a normalizing-flows-based latent space and use body part segmentation supervision to estimate 3D human body pose from videos and images, unifying different previous approaches. Clothing segmentation is used in [48] for clothing deformation, penalizing the vertex offsets of the clothed body if a rendered vertex falls outside the clothing boundary.

Most of these approaches are based on joints, silhouettes and part-segmentation masks using approximate supervision for the pose of a person in clothing. We claim that there is more that an image can tell us about human pose. Our key insight is that clothing for different parts of the body conveys important information for detailed fitting. We employ an off-the-shelf 2D semantic segmentation method [7] and a semantic clothing prior to apply these labels in 3D. Given those, we supervise clothed and minimal-clothed regions separately, yielding more aligned fits of 3D humans.

3 Method

DSR uses high-level semantic information for more accurate pose and shape estimation using two additional loss terms, DSR-MC and DSR-C, as shown in Figure 3. DSR takes an image as input, which passes through a CNN. Then, the image features are fed to an iterative regressor, similar to HMR [15], to estimate the parameters Θ = (θ, β) of the SMPL body model. Given the rendered SMPL mesh, we apply our novel DSR-MC and DSR-C losses in addition to the standard loss terms used in EFT [13]. SMPL is a parametric body model that represents body pose and shape by θ ∈ R^72 and β ∈ R^10. The pose parameters θ include the global rotation and the rotations of 23 body joints in axis-angle format, and the shape parameters β consist of the first 10 coefficients of a PCA shape space. The SMPL model is a differentiable function M(θ, β) that outputs a 3D mesh with 6890 vertices according to the pose and shape parameters.
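The parameterization above can be illustrated with a toy stand-in for the SMPL function. Only the dimensions below follow SMPL; the template and shape blend-shapes are random placeholders, and the real model additionally poses the shaped mesh with linear-blend skinning driven by θ:

```python
import numpy as np

NUM_VERTICES = 6890   # SMPL mesh resolution
POSE_DIM = 72         # global rotation + 23 joint rotations, axis-angle (24 * 3)
SHAPE_DIM = 10        # first 10 PCA shape coefficients

rng = np.random.default_rng(0)
template = rng.standard_normal((NUM_VERTICES, 3))                # placeholder mean shape
shape_dirs = rng.standard_normal((NUM_VERTICES, 3, SHAPE_DIM))   # placeholder blend shapes

def toy_smpl(theta, beta):
    """Toy stand-in for M(theta, beta): returns a (6890, 3) vertex array.

    Only the input/output dimensions match SMPL; the real model also
    articulates the shaped template using the pose parameters theta.
    """
    assert theta.shape == (POSE_DIM,) and beta.shape == (SHAPE_DIM,)
    return template + shape_dirs @ beta  # shape blend-shape offsets only

verts = toy_smpl(np.zeros(POSE_DIM), np.zeros(SHAPE_DIM))
print(verts.shape)  # (6890, 3)
```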

Clothing Semantic Information. Ground-truth clothing segmentations are expensive to obtain for in-the-wild datasets, which limits the scalability of such an approach. Hence, to analyze the importance of clothing semantics for human pose and shape estimation, we employ an off-the-shelf segmentation model to generate pseudo-ground-truth clothing semantics. Graphonomy [7] is a state-of-the-art clothing segmentation model that uses inter- and intra-graph transfer learning to unify different clothing datasets, and produces clothing labels and body part segmentations. As DSR-MC reasons about the minimal-clothing region, we use a binary mask comprised of 5 labels - LeftArm, RightArm, LeftShoe, RightShoe and Face - from Graphonomy as ground truth (whenever available). For DSR-C, we use 4 labels - UpperClothes, LowerClothes, MinimalClothing and Background. We run the Universal Model of Graphonomy on all the datasets to generate pseudo-ground-truth clothing segmentations. For more details on the generation of pseudo-ground truth, cleaning of the obtained masks, and mapping of Graphonomy labels for DSR-C and DSR-MC, please refer to the Sup. Mat.
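Constructing the two supervision targets from a per-pixel segmentation can be sketched as follows; the integer label IDs here are hypothetical placeholders, not Graphonomy's real ID scheme:

```python
import numpy as np

# Hypothetical label IDs for illustration only (not Graphonomy's actual scheme).
LEFT_ARM, RIGHT_ARM, LEFT_SHOE, RIGHT_SHOE, FACE = 14, 15, 9, 10, 13
UPPER_CLOTHES_IDS = {5, 7}   # e.g. shirt, coat
LOWER_CLOTHES_IDS = {6, 8}   # e.g. pants, skirt
MINIMAL_IDS = {LEFT_ARM, RIGHT_ARM, LEFT_SHOE, RIGHT_SHOE, FACE}

def dsr_mc_target(seg):
    """Binary mask of the minimal-clothing region (DSR-MC ground truth)."""
    return np.isin(seg, list(MINIMAL_IDS)).astype(np.float32)

def dsr_c_target(seg):
    """4-class target: 0=Background, 1=UpperClothes, 2=LowerClothes, 3=MinimalClothing."""
    target = np.zeros_like(seg)
    target[np.isin(seg, list(UPPER_CLOTHES_IDS))] = 1
    target[np.isin(seg, list(LOWER_CLOTHES_IDS))] = 2
    target[np.isin(seg, list(MINIMAL_IDS))] = 3
    return target

seg = np.array([[0, 5], [6, 14]])   # background, upper, lower, left arm
print(dsr_mc_target(seg))
print(dsr_c_target(seg))
```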

Semantic Prior for SMPL. To use the semantic information obtained from Graphonomy as pseudo-ground-truth training labels, we need a semantic prior of clothing for SMPL 3D bodies. To build it, we use thousands of scans from Renderpeople [35] with varied clothing, subjects, poses and camera views, for which we have ground-truth SMPL fits from AGORA [30]. We run the Universal Model of Graphonomy on the rendered images of the scans to obtain clothing and body part segmentation labels. Next, we use the ground-truth SMPL mesh to compute the visible face triangles given the mesh and camera parameters. Then, each visible triangular face is assigned the corresponding segmentation label. We repeat this process for all the available scans and compute the probability of each vertex taking each of the Graphonomy labels. This probabilistic label for each vertex is referred to as the semantic prior. For more details refer to the Sup. Mat.
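The prior computation reduces to a normalized label histogram per vertex, sketched here on toy observations (real counts come from thousands of scans and camera views):

```python
import numpy as np

def semantic_prior(vertex_labels, num_vertices, num_labels):
    """Per-vertex probability over labels from repeated (vertex, label) observations.

    vertex_labels: list of (vertex_id, label_id) pairs, one per visible
    face-to-label assignment collected over all scans and views.
    """
    counts = np.zeros((num_vertices, num_labels))
    for v, l in vertex_labels:
        counts[v, l] += 1
    seen = counts.sum(axis=1, keepdims=True)
    seen[seen == 0] = 1.0   # never-observed vertices keep an all-zero row
    return counts / seen

# Toy example: vertex 0 is labelled "1" twice and "2" once; vertex 2 is never seen.
obs = [(0, 1), (0, 1), (0, 2), (1, 0)]
prior = semantic_prior(obs, num_vertices=3, num_labels=3)
print(prior[0])  # approximately [0, 0.667, 0.333]
```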

Differentiable Semantic Rendering. We use SoftRas [22] as the differentiable renderer to supervise the estimation of the 3D parametric model using semantic information. It uses a differentiable aggregation process for rendering, which fuses the probabilistic contributions of all mesh triangles with respect to rendered pixels. The semantic prior obtained from the AGORA fits is used as a texture. Specifically, for each semantic label, we render the probability of that label for each visible vertex. Once the semantic probabilities are rendered as images by SoftRas, the loss is computed on the 2D image output by comparing with the semantic image segmentation, and this is backpropagated to change the vertices, in turn changing the network to give more accurate SMPL parameters.
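A simplified sketch of soft aggregation in the spirit of SoftRas, for two triangles covering one pixel; the real renderer uses signed screen-space distances to triangle boundaries and includes a background term, and all values here are toy numbers. Smaller sigma sharpens spatial boundaries, while larger gamma blends more across depth so occluded geometry still receives gradient:

```python
import numpy as np

def soft_influence(signed_sq_dist, sigma):
    """Per-triangle screen-space influence: sigmoid of signed squared distance.

    Positive distance means the pixel is inside the triangle; smaller sigma
    gives sharper (less blurred) triangle boundaries.
    """
    return 1.0 / (1.0 + np.exp(-signed_sq_dist / sigma))

def aggregate(colors, influences, depths, gamma):
    """Fuse per-triangle colors with depth-softmax weights.

    Larger gamma softens the blending along depth, so triangles behind the
    front surface still contribute (and receive gradient).
    """
    w = influences * np.exp(depths / gamma)
    return float((w / w.sum() * colors).sum())

colors = np.array([1.0, 0.0])   # front triangle white, back triangle black
depths = np.array([0.9, 0.1])   # larger = closer (toy convention)
infl = soft_influence(np.array([4.0, 4.0]), sigma=1e-4)  # pixel inside both

print(aggregate(colors, infl, depths, gamma=1e-1))  # dominated by front triangle
print(aggregate(colors, infl, depths, gamma=1e1))   # nearly even blend
```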

Standard Losses. As we use the EFT [13] data for training, we use a standard supervision loss similar to EFT, defined as:

L_std = λ_2D · L_2D + λ_3D · L_3D + λ_SMPL · L_SMPL, with L_2D = ‖Π(Ĵ_3D) − J_2D‖², L_3D = ‖Ĵ_3D − J_3D‖², L_SMPL = ‖Θ̂ − Θ‖²,

where Θ̂ = (θ̂, β̂) are the estimated SMPL parameters, L_2D is the joint reprojection loss, and L_3D and L_SMPL are losses on 3D joints and SMPL parameters, respectively. Ground-truth 2D joints are represented by J_2D, 3D joints by J_3D, SMPL parameters by Θ, and the camera projection function by Π.
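A toy sketch of the standard loss under an assumed weak-perspective camera; the λ weights, squared-error form, and joint count are placeholders (the paper follows EFT's exact settings):

```python
import numpy as np

def project(joints3d, scale, trans):
    """Weak-perspective projection: drop z, then scale and translate in 2D."""
    return scale * joints3d[:, :2] + trans

def standard_loss(theta_hat, joints3d_hat, cam, joints2d_gt, joints3d_gt, theta_gt,
                  w2d=1.0, w3d=1.0, wsmpl=1.0):
    """L_std = w2d * L_2D + w3d * L_3D + wsmpl * L_SMPL (mean squared errors)."""
    scale, trans = cam
    l2d = np.mean((project(joints3d_hat, scale, trans) - joints2d_gt) ** 2)
    l3d = np.mean((joints3d_hat - joints3d_gt) ** 2)
    lsmpl = np.mean((theta_hat - theta_gt) ** 2)
    return w2d * l2d + w3d * l3d + wsmpl * lsmpl

# Sanity check: a perfect prediction incurs zero loss.
j3 = np.zeros((24, 3)); j2 = np.zeros((24, 2)); th = np.zeros(82)
print(standard_loss(th, j3, (1.0, np.zeros(2)), j2, j3, th))  # 0.0
```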

DSR - Minimal-Clothing. For minimal clothing, we choose five labels from Graphonomy, namely LeftArm, RightArm, LeftShoe, RightShoe and Face, which often appear similar in shape to the rendered SMPL body; i.e., they look roughly “naked.” For a particular image, we take the clothing segmentation mask given by Graphonomy and create a binary mask comprising the valid labels for that image from the available five labels. This forms the ground truth for DSR-MC, denoted GT - DSR-MC in Fig. 3. We render the probability distribution of vertex labels for SMPL, precomputed from RenderPeople, as textures; these are shown as Rendered Semantics in Fig. 3. We only take the probability distribution of vertices that are visible and set the others to zero. Thus, we define the DSR-MC loss to tightly match the corresponding rendered minimal-clothing region of SMPL to the available semantic binary mask, as shown in Fig. 3 (bottom).

We study two variants of the loss for DSR-MC: soft-DistM and soft-IoU. Soft-DistM is inspired by the DistM loss of Naked Truth [3], which was originally proposed for estimating body shape under clothing. Since we render the semantic probability instead of silhouettes, we call it soft-DistM. It is a distance measure that takes the rendered image and the target binary Graphonomy mask M and is defined as:

L_DistM = Σ_{p ∈ R} S(p) · d(p, M),

where R is the set of pixels inside the rendered human body, S(p) is the rendered semantic probability at pixel p, and d(p, M) is a distance function which is zero if pixel p is inside M. For points outside, it is defined as the Euclidean distance to the closest point on the boundary of M. Soft-DistM can pull the output inside the target because of the sharp difference in penalization between pixels inside the mask and pixels outside. Given a good initial estimate, the soft-DistM loss ignores spurious and scattered labels outside the region of interest because the loss is high for pixels far away. This is particularly helpful when using an off-the-shelf segmentation model without instance segmentation, which can give the wrong output for hard examples.
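A minimal sketch of soft-DistM on toy masks; a brute-force nearest-pixel search stands in for an efficient distance transform:

```python
import numpy as np

def distance_to_mask(mask):
    """d(p, M): 0 inside the mask, else Euclidean distance to the nearest mask pixel.

    Brute force over pixels, fine for illustration; in practice use a distance
    transform such as scipy.ndimage.distance_transform_edt.
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1).astype(float)   # mask pixel coordinates
    h, w = mask.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    d = np.sqrt(((grid[:, :, None, :] - pts[None, None]) ** 2).sum(-1)).min(-1)
    d[mask > 0] = 0.0
    return d

def soft_dist_m(rendered_prob, target_mask):
    """Soft-DistM: rendered probability mass weighted by distance to the target mask."""
    return float((rendered_prob * distance_to_mask(target_mask)).sum())

mask = np.zeros((5, 5), dtype=int); mask[1:4, 1:4] = 1
inside = np.zeros((5, 5)); inside[2, 2] = 1.0    # probability mass inside the mask
outside = np.zeros((5, 5)); outside[0, 0] = 1.0  # probability mass outside the mask
print(soft_dist_m(inside, mask), soft_dist_m(outside, mask))  # 0.0 vs sqrt(2)
```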

However, soft-DistM cannot fully ensure that the rendered output exactly matches the target, as it gives the same penalty for outputs with different percentages of overlap whenever the output is inside the boundary. Hence, we also study soft-IoU, which encourages tight fitting and is calculated as:

L_sIoU = 1 − Σ_p R(p)·G(p) / Σ_p (R(p) + G(p) − R(p)·G(p)),

where R(p) is the rendered vertex probability at pixel p and G(p) is the Graphonomy label for that pixel. Soft-IoU suffers from spurious and scattered labels outside the region of interest and also from the lack of instance segmentation in the off-the-shelf model. However, we choose soft-IoU as the metric for DSR-MC due to its better quantitative results in the baseline experiments in Table 1.
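The soft-IoU term can be sketched directly from the formula, with R a per-pixel rendered probability map and G a binary target mask:

```python
import numpy as np

def soft_iou_loss(rendered_prob, target_mask, eps=1e-8):
    """1 - soft IoU between rendered probabilities R and binary target G."""
    inter = (rendered_prob * target_mask).sum()
    union = (rendered_prob + target_mask - rendered_prob * target_mask).sum()
    return float(1.0 - inter / (union + eps))

g = np.zeros((4, 4)); g[1:3, 1:3] = 1.0
print(soft_iou_loss(g, g))                  # ~0: perfect overlap
print(soft_iou_loss(np.zeros((4, 4)), g))   # 1: no overlap
```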

DSR - Clothing. The rendered SMPL body mesh cannot exactly match all the target pixels for the clothing region. Hence, for a more accurate estimate of the 3D body model, we want to encourage the rendered SMPL mesh to stay inside the clothing mask. Previous methods [3] define a distance function to deal with such scenarios. However, we have higher-level semantic information than a silhouette to better address this. We have additional boundaries other than the body outline to enforce that a particular semantic part of the SMPL mesh should fall inside the corresponding semantic part of the segmentation mask. Clothing segmentation provides additional boundaries, such as between the upper and lower body or between clothing and skin.

Specifically, we define four labels, UpperClothes, LowerClothes, MinimalClothing and Background, shown as four color masks in Fig. 3 (top). We introduce a MinimalClothing label for DSR-C to avoid confusion between the background and the minimal-clothing region. Without it, the DSR-C loss would give the same penalty whether the minimal-clothing region falls on the corresponding target region or on the background. As the semantic prior learned from RenderPeople has probabilistic labels per vertex, we sum the probabilities of upper-body clothing labels for UpperClothes, lower-body clothing labels for LowerClothes, and body part segmentation labels for MinimalClothing. We define the DSR-C loss as the negative log-likelihood (NLL) of the rendered probability distribution of each vertex belonging to one of the four labels. The rendered probability distribution is first passed through a log-softmax before applying the NLL loss for numerical stability. So, L_DSR-C is defined as

L_DSR-C = −(1 / (H·W)) Σ_{i=1}^{H} Σ_{j=1}^{W} log P_ij(c_ij),

where P_ij(c_ij) is the rendered probability of the target label c_ij for the vertex at pixel (i, j), H is the height and W is the width of the image. Hence, the total loss is L = L_std + λ_MC · L_DSR-MC + λ_C · L_DSR-C, where λ_MC and λ_C are weighting parameters.
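A minimal numpy sketch of the DSR-C term, assuming the renderer outputs a 4-channel score image (the actual pipeline computes log-softmax and NLL on the differentiable renderer's output):

```python
import numpy as np

def log_softmax(x, axis):
    """Numerically stable log-softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def dsr_c_loss(rendered_scores, target):
    """Mean NLL of the rendered 4-class scores under the target label map.

    rendered_scores: (4, H, W) per-label scores; target: (H, W) ints in {0..3}.
    """
    logp = log_softmax(rendered_scores, axis=0)
    h, w = target.shape
    picked = logp[target, np.arange(h)[:, None], np.arange(w)[None, :]]
    return float(-picked.mean())

scores = np.zeros((4, 2, 2))
scores[1] = 10.0                      # label 1 strongly predicted everywhere
good = np.ones((2, 2), dtype=int)     # target agrees with the prediction
bad = np.zeros((2, 2), dtype=int)     # target disagrees
print(dsr_c_loss(scores, good) < dsr_c_loss(scores, bad))  # True
```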

Figure 4: Are 3D joints enough? We over-fit a batch of H36M samples on ground-truth (GT) joints (green) and joints with DSR (blue). The weak supervision with semantic information improves accuracy.

4 Experimental Setup

Training Procedure. Following EFT [13], we train a regressor similar to HMR [15] on mixed 3D and 2D datasets. We use the pseudo-ground-truth 3D annotations for 2D datasets from EFT. For 2D data, we only use COCO [21], as including other in-the-wild datasets did not give a performance gain; for 3D data, we use Human3.6M [12] and MPI-INF-3DHP [27]. We also use the 3DPW [46] training set for fair comparisons and the same data ratio for mixed 2D and 3D datasets as EFT. For baseline and ablation experiments, we train only on COCO-EFT [13]. For faster training, we initialize the network with SPIN pre-trained weights, use the same hyper-parameters as SPIN [17], and train the model for 100K iterations.

Evaluation Procedure. For state-of-the-art comparisons, we use 3DPW [46], Human3.6M [12] and MPI-INF-3DHP [27]. As in prior work [13, 17], we use gender information for the ground-truth meshes on 3DPW. We report results with and without 3DPW training using Procrustes-aligned mean per joint position error (PA-MPJPE), mean per joint position error (MPJPE) and per-vertex error (PVE).

Differentiable Semantic Rendering. We use SoftRas [22] to render the probability distributions for DSR-C and DSR-MC. For SoftRas, we use a higher gamma value to ensure the loss affects the occluded parts of the body and a lower sigma value to ensure the error does not significantly blur the spatial region; for more details, refer to SoftRas [22]. We render the probability distribution of each triangle face as textures and compute the loss on the RGB channels of the rendered output. We render 5 images for each sample in a batched manner: 1 for DSR-MC and 4 for DSR-C. However, the loss is calculated per individual sample to avoid computing it for samples that do not have a valid segmentation mask; in such cases, the loss is set to zero. After using heuristics to clean the mask, a valid label set is created for DSR-C and DSR-MC. The weighting parameters for the two loss components are set empirically. As DSR depends on weak supervision from an off-the-shelf clothing segmentation model and is therefore not robust for hard examples, we enable the loss only after 10K iterations of training.

Method                     PA-MPJPE  MPJPE  PVE
C-EFT                      58.5      101.0  119.3
+ DSR-FB                   59.8      102.1  120.3
+ DSR-FB (s-DistM)         58.0      100.2  117.8
+ DSR-MC (s-DistM)         58.2      100.6  118.5
+ DSR-MC (s-IoU)           58.0      100.3  118.1
+ DSR-C                    57.6      99.8   117.6
+ DSR-MVP                  58.1      100.3  117.8
+ DSR-C + DSR-MC (Ours)    57.2      99.2   116.3
Table 1: Baseline comparisons for DSR on 3DPW (errors in mm). C-EFT is the regressor trained with COCO-EFT and standard losses. DSR-FB is supervised with a full-body silhouette. DSR-MC is minimal-clothing, DSR-C is clothing and DSR-MVP is manual labelling of clothing and minimal-clothing.

                                      3DPW                    Human3.6M        MPI-INF-3DHP
Method                                PA-MPJPE MPJPE  PVE     PA-MPJPE MPJPE   PA-MPJPE MPJPE
HMR [15]                              76.7     130.0  -       56.8     88.0    89.8     124.2
NBF [29]                              -        -      -       59.9     -       -        -
Pavlakos et al. [33]                  -        -      -       75.9     -       -        -
CMR [18]                              70.2     -      -       50.1     -       -        -
SPIN [17]                             59.2     96.9   116.4   41.1     62.5    67.5     105.2
EFT [13]                              54.2     -      -       43.7     -       68.0     -
Zanfir et al. [52] (w/ 3DPW train)    57.1     90.0   -       -        -       -        -
EFT [13] (w/ 3DPW train)              52.2     -      -       43.8     -       67.0     -
DSR                                   54.1     91.7   105.8   40.3     60.9    66.7     105.3
DSR (w/ 3DPW train)                   51.7     85.7   99.5    41.4     62.0    67.0     104.7
Table 2: Evaluation of state-of-the-art models on 3DPW, Human3.6M, and MPI-INF-3DHP. DSR is our proposed model trained on monocular images similar to [17, 13, 15]. DSR outperforms all state-of-the-art models, including EFT [13], on these challenging datasets. “-” denotes results that are not available.

Method               PA-MPJPE  MPJPE  PVE
Standard Loss (SD)   47.5      73.9   99.2
SD + DSR             45.1      71.3   96.6
Table 3: Potential of DSR. We train and test on a subset of Human3.6M to evaluate the full potential of the DSR loss. SD refers to the standard joint loss.

5 Results

5.1 Baseline Comparison and Ablation Studies

We perform baseline experiments to (1) motivate the use of semantic rendering and (2) study how the different terms and design choices contribute to the final result, as shown in Table 1. As a baseline, we use an HMR [15]-based regressor trained on COCO-EFT [13] data and report results on 3DPW (C-EFT). Then, we supervise the baseline with an additional full-body silhouette (DSR-FB), a per-pixel binary classification loss guided by differentiable rendering. The results deteriorate, as the rendered SMPL body does not match the full clothed body. We further train DSR-FB with the DistM loss instead of per-pixel classification to ensure all body parts (irrespective of clothing) stay inside the silhouette. The results in Table 1 show that explicit supervision with clothing semantics (Ours) outperforms this naive cloth-agnostic approach. We also study the importance of estimating clothing semantics from scans in contrast to manual vertex painting (MVP) of semantic labels: the former gives a distribution over possible clothing labels for each vertex, whereas the latter gives only a single hard label per vertex. To quantitatively verify the benefit of the probabilistic clothing semantic prior, we take the most likely label per vertex (Fig. 2) as a proxy for MVP. Since we then have one label per vertex, we use IoU instead of s-IoU. Table 1 shows lower performance for the fixed semantic prior (MVP) compared to the probabilistic one (Ours). We also study the individual contributions of DSR-C and DSR-MC to the overall performance and find that the clothing term helps more than the minimal-clothing term. One possible explanation is that the off-the-shelf segmentation model is not robust for hands and feet, causing a smaller gain. Empirically, we observe that soft-IoU performs better than soft-DistM and hence use it as the metric for DSR-MC in all subsequent experiments.
Overall, the best accuracy is reached when both terms are used showing that supervising minimally-clothed and clothed regions differently helps improve 3D body estimation.

Figure 5: Qualitative Results on COCO. From left to right - Input image, SPIN [17], EFT [13] and DSR results.

5.2 State-of-the-art comparison

We compare our approach with state-of-the-art methods in Table 2. We use two variants of our model, with and without the 3DPW training set, to be aligned with the training data of other methods. On 3DPW, a challenging in-the-wild 3D dataset, we outperform previous work when using 3DPW training data, while performing on par with EFT [13] when it is not used. Moreover, we clearly improve accuracy on Human3.6M [12], a standard indoor benchmark, over the state-of-the-art SPIN [17] and EFT [13] methods. We also report on-par results on MPI-INF-3DHP [27]. Despite using only weak supervision, we perform significantly better than previous approaches that use ground-truth part segmentation or silhouettes [33, 29, 52]. Overall, we consistently perform better than previous approaches across different datasets, both indoors and outdoors. In Fig. 5 we show comparisons of DSR with the previous state of the art and observe that the estimated mesh is better aligned with the image evidence. These observations validate our hypothesis that clothing semantics, even when used as weak supervision, provides additional information for estimating more accurate 3D bodies.

Method Ankle Knee Hip Wrist Elbow Head
Standard Loss (SD) 99.3 54.6 20.0 109.5 81.1 81.8
SD + DSR 96.1 50.4 19.5 107.3 79.3 80.5
Table 4: Per joint error for Human3.6M subset. SD refers to standard joint loss used in 3D body estimation.

5.3 Potential of DSR

To test the significance of high-level semantics for shape and pose estimation, we use an off-the-shelf segmentation model [7]. However, such models are not robust on in-the-wild examples. Because we use the output of the model as pseudo ground truth for supervision, it is hard to determine the full potential of our approach. Hence, we experiment on the Human3.6M dataset to test the DSR loss in a more controlled setting. Human3.6M is an indoor dataset with significantly less background complexity than outdoor datasets, which makes it ideal for testing the limit of DSR. We study two cases. First, we split the training set of Human3.6M, with SMPL parameters computed by MoSh [23], into training and validation sets, with S8 in the validation set. This allows us to evaluate the per-vertex error (PVE) using the MoSh ground-truth SMPL parameters, giving insight into the shape estimation efficacy of our method. As shown in Table 3, the performance gain with the DSR loss is significantly higher than with the standard joint loss alone, which emphasizes the importance of semantic information. We also analyze the per-joint error to understand the source of the performance gain, as shown in Table 4. With DSR, the largest gains are for the Ankle, Knee and Wrist, which are common failure cases in 3D pose estimation. Second, we take a step further and examine whether ground-truth 3D joints are enough for accurate and pixel-aligned body estimation. To this end, we take a random batch of samples from Human3.6M and over-fit using only joints, and using joints with the DSR loss, for a fixed number of iterations with the same hyper-parameters used in the other experiments. The qualitative results are depicted in Fig. 4. As we can see, supervision with ground-truth 3D joints cannot always reason about all the pixels; using DSR produces more pixel-aligned fits, especially for hands and feet.

6 Conclusion

While huge progress has been made in estimating 3D human pose and shape, we are still far from estimating highly accurate 3D humans in everyday scenes. We hypothesize that clothing semantics is an under-explored cue that can benefit 3D body estimation methods. Therefore, we introduce a novel method that exploits clothing semantics as weak supervision. Specifically, we: (1) Introduce a novel differentiable loss that supervises clothed and minimally-clothed regions differently, ensuring that the body lies inside the clothes for the former while fitting tightly for the latter. (2) Learn a semantic clothing prior, i.e., a probability distribution over clothing labels for SMPL vertices, to apply our method efficiently; this prior can also be used independently. (3) Thoroughly evaluate our approach qualitatively and quantitatively, outperforming the state of the art. (4) Analyze our method's components and show that clothing semantics, even as weak supervision, is a valuable cue complementary to 3D joints that improves the estimation of 3D bodies. Our experiments show the importance of such semantics, providing new insight into 3D human body estimation.

DSR uses clothing as weak supervision, which can be unreliable in complex scenes with multiple people and occlusion. Our method can be readily extended to pipelines that account for multiple people in the scene [53]. Future work should explore modeling 3D clothing semantics, building better priors for SMPL bodies, and incorporating additional constraints to disambiguate scene semantics.

Acknowledgements: We thank Sergey Prokudin, Chun-Hao P. Huang, Vassilis Choutas, Priyanka Patel, Radek Danecek, Cornelia Kohler and all Perceiving Systems department members for their help, feedback and fruitful discussions. Disclosure: https://files.is.tue.mpg.de/black/CoI/ICCV2021.txt


  • [1] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele (2014) 2D human pose estimation: new benchmark and state of the art analysis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §5.2.
  • [2] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, and J. Davis (2005) SCAPE: shape completion and animation of people. In SIGGRAPH, Cited by: §2.1.
  • [3] A. Balan and M. J. Black (2008) The naked truth: estimating body shape under clothing,. In European Conference on Computer Vision (ECCV), Cited by: §2.2, §3, §3.
  • [4] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black (2016) Keep it SMPL: automatic estimation of 3D human pose and shape from a single image. In European Conference on Computer Vision (ECCV), Cited by: §1, §1, §2.1.
  • [5] Z. Cao, G. Hidalgo, T. Simon, S. Wei, and Y. Sheikh (2021) OpenPose: realtime multi-person 2D pose estimation using part affinity fields. In IEEE Transaction on Pattern Analysis and Machine Intelligence (TPAMI), Cited by: §2.1.
  • [6] Y. Chen, Y. Tian, and M. He (2020) Monocular human pose estimation: a survey of deep learning-based methods. In Comput. Vis. Image Underst., Cited by: §2.
  • [7] K. Gong, Y. Gao, X. Liang, X. Shen, M. Wang, and L. Lin (2019) Graphonomy: universal human parsing via graph transfer learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: Figure, Appendix A, §1, §2.2, §3, §5.3.
  • [8] P. Guan, A. Weiss, A. Balan, and M. J. Black (2009) Estimating human shape and pose from a single image. In International Conference on Computer Vision (ICCV), Cited by: §2.2.
  • [9] R. A. Güler, N. Neverova, and I. Kokkinos (2018) DensePose: dense human pose estimation in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.2.
  • [10] T. He, J. Collomosse, H. Jin, and S. Soatto (2020) Geo-pifu: geometry and pixel aligned implicit functions for single-view human reconstruction. In Advances in Neural Information Processing (NeurIPS), Cited by: §2.2.
  • [11] D. C. Hogg (1983) Model-based vision: a program to see a walking person. In Image Vis. Comput., Cited by: §2.1.
  • [12] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu (2014) Human3.6m: large scale datasets and predictive methods for 3D human sensing in natural environments. In IEEE Transaction on Pattern Analysis and Machine Intelligence (TPAMI), Cited by: Appendix C, §4, §4, §5.2.
  • [13] H. Joo, N. Neverova, and A. Vedaldi (2020) Exemplar fine-tuning for 3d human pose fitting towards in-the-wild 3d human pose estimation. In arXiv preprint arXiv:2004.03686, Cited by: §A.2, Figure I, Figure I, Appendix D, Figure 1, §1, §1, §2.1, §3, §3, Table 2, §4, §4, Figure 5, §5.1, §5.2.
  • [14] A. Kanazawa, J. Y. Zhang, P. Felsen, and J. Malik (2019) Learning 3D human dynamics from video. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [15] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik (2018) End-to-end recovery of human shape and pose. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §A.2, §A.2, §1, §2.1, §3, Table 2, §4, §5.1.
  • [16] M. Kocabas, N. Athanasiou, and M. J. Black (2020) VIBE: video inference for human body pose and shape estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [17] N. Kolotouros, G. Pavlakos, M. J. Black, and K. Daniilidis (2019) Learning to reconstruct 3D human pose and shape via model-fitting in the loop. In International Conference on Computer Vision (ICCV), Cited by: §A.2, Figure I, Figure I, Appendix D, §1, §2.1, Table 2, §4, §4, Figure 5, §5.2.
  • [18] N. Kolotouros, G. Pavlakos, and K. Daniilidis (2019) Convolutional mesh regression for single-image human shape reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1, Table 2.
  • [19] C. Lassner, J. Romero, M. Kiefel, F. Bogo, M. J. Black, and P. Gehler (2017) Unite the people: closing the loop between 3D and 2D human representations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.1.
  • [20] S. Li and A. B. Chan (2014) 3D human pose estimation from monocular images with deep convolutional neural network. In Asian Conference on Computer Vision (ACCV), Cited by: Appendix D, §2.1, §4.
  • [21] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft COCO: common objects in context. In European Conference on Computer Vision (ECCV), Cited by: Appendix D, §2.2, §4.
  • [22] S. Liu, T. Li, W. Chen, and H. Li (2019) Soft rasterizer: a differentiable renderer for image-based 3d reasoning. In International Conference on Computer Vision (ICCV), Cited by: §3, §4.
  • [23] M. M. Loper, N. Mahmood, and M. J. Black (2014) MoSh: motion and shape capture from sparse markers. In ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), Cited by: §5.3.
  • [24] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black (2015) SMPL: a skinned multi-person linear model. In ACM Trans. Graphics (Proc. SIGGRAPH Asia), Cited by: §2.1.
  • [25] N. Mahmood, N. Ghorbani, N. Troje, G. Pons-Moll, and M. J. Black (2019) AMASS: archive of motion capture as surface shapes. In International Conference on Computer Vision (ICCV), Cited by: §2.1.
  • [26] J. Martinez, R. Hossain, J. Romero, and J. Little (2017) A simple yet effective baseline for 3D human pose estimation. In International Conference on Computer Vision (ICCV), Cited by: §2.1.
  • [27] D. Mehta, H. Rhodin, D. Casas, P. Fua, O. Sotnychenko, W. Xu, and C. Theobalt (2017) Monocular 3d human pose estimation in the wild using improved cnn supervision. In International Conference on 3D Vision (3DV), Cited by: §4, §4.
  • [28] A. Newell, K. Yang, and J. Deng (2016) Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision (ECCV), Cited by: §2.1.
  • [29] M. Omran, C. Lassner, G. Pons-Moll, P. Gehler, and B. Schiele (2018) Neural body fitting: unifying deep learning and model based human pose and shape estimation. In International Conference on 3D Vision (3DV), Cited by: §1, §1, §2.2, Table 2, §5.2.
  • [30] P. Patel, C. P. Huang, J. Tesch, D. Hoffmann, S. Tripathi, and M. J. Black (2021) AGORA: avatars in geography optimized for regression analysis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: Appendix B, §1, §3, §3.
  • [31] G. Pavlakos, X. Zhou, and K. Daniilidis (2018) Ordinal depth supervision for 3D human pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [32] G. Pavlakos, X. Zhou, K. Derpanis, and K. Daniilidis (2017) Coarse-to-fine volumetric prediction for single-image 3D human pose. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [33] G. Pavlakos, L. Zhu, X. Zhou, and K. Daniilidis (2018) Learning to estimate 3D human pose and shape from a single color image. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §1, §2.2, Table 2, §5.2.
  • [34] U. Rafi, J. Gall, and B. Leibe (2015) A semantic occlusion model for human pose estimation from a single depth image. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Cited by: §2.2.
  • [35] (2020) Renderpeople. Note: https://renderpeople.com Cited by: §3.
  • [36] S. Saito, Z. Huang, R. Natsume, S. Morishima, A. Kanazawa, and H. Li (2019) PIFu: pixel-aligned implicit function for high-resolution clothed human digitization. In International Conference on Computer Vision (ICCV), Cited by: §2.2.
  • [37] S. Saito, T. Simon, J. Saragih, and H. Joo (2020) PIFuHD: multi-level pixel-aligned implicit function for high-resolution 3D human digitization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.2.
  • [38] N. Sarafianos, B. Boteanu, B. Ionescu, and I. A. Kakadiaris (2016) 3D human pose estimation: a review of the literature and analysis of covariates. In Computer Vision and Image Understanding, Cited by: §2.
  • [39] L. Sigal, S. Bhatia, S. Roth, M. J. Black, and M. Isard (2004) Tracking loose-limbed people. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [40] E. Simo-Serra, A. Quattoni, C. Torras, and F. Moreno-Noguer (2013) A joint model for 2D and 3D pose estimation from a single image. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [41] C. Stoll, N. Hasler, J. Gall, H. Seidel, and C. Theobalt (2011) Fast articulated motion tracking using a sums of gaussians body model. In International Conference on Computer Vision (ICCV), Cited by: §2.1.
  • [42] K. Sun, B. Xiao, D. Liu, and J. Wang (2019) Deep high-resolution representation learning for human pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [43] X. Sun, J. Shang, S. Liang, and Y. Wei (2017) Compositional human pose regression. In International Conference on Computer Vision (ICCV), Cited by: §2.1.
  • [44] H. F. Tung, H. Tung, E. Yumer, and K. Fragkiadaki (2017) Self-supervised learning of motion capture. In Advances in Neural Information Processing (NeurIPS), Cited by: §2.2.
  • [45] G. Varol, D. Ceylan, B. C. Russell, J. Yang, E. Yumer, I. Laptev, and C. Schmid (2018) BodyNet: volumetric inference of 3D human body shapes. In European Conference on Computer Vision (ECCV), Cited by: §2.2.
  • [46] T. von Marcard, R. Henschel, M. Black, B. Rosenhahn, and G. Pons-Moll (2018) Recovering accurate 3D human pose in the wild using imus and a moving camera. In European Conference on Computer Vision (ECCV), Cited by: §1, §4.
  • [47] D. Xiang, H. Joo, and Y. Sheikh (2019) Monocular total capture: posing face, body, and hands in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [48] D. Xiang, F. Prada, C. Wu, and J. Hodgins (2020) MonoClothCap: towards temporally coherent clothing capture from monocular rgb video. In International Conference on 3D Vision (3DV), Cited by: §2.2.
  • [49] B. Xiao, H. Wu, and Y. Wei (2018) Simple baselines for human pose estimation and tracking. In European Conference on Computer Vision (ECCV), Cited by: §2.1.
  • [50] Y. Xu, S. Zhu, and T. Tung (2019) Denserac: joint 3D pose and shape estimation by dense render-and-compare. In International Conference on Computer Vision (ICCV), Cited by: §1, §2.2.
  • [51] S. Yu, Y. Yun, L. Wu, G. Wenpeng, F. Yi-li, and M. Tao (2019) Human mesh recovery from monocular images via a skeleton-disentangled representation. In International Conference on Computer Vision (ICCV), Cited by: §2.1.
  • [52] A. Zanfir, E. G. Bazavan, H. Xu, B. Freeman, R. Sukthankar, and C. Sminchisescu (2020) Weakly supervised 3D human pose and shape reconstruction with normalizing flows. In European Conference on Computer Vision (ECCV), Cited by: §2.2, Table 2, §5.2.
  • [53] A. Zanfir, E. Marinoiu, and C. Sminchisescu (2018) Monocular 3d pose and shape estimation of multiple people in natural scenes: the importance of multiple scene constraints. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §6.
  • [54] W. Zeng, W. Ouyang, P. Luo, W. Liu, and X. Wang (2020) 3D human mesh regression with dense correspondence. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.2.
  • [55] T. Zhang, B. Huang, and Y. Wang (2020) Object-occluded human shape and pose estimation from a single color image. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.2.

Appendix A Clothing Semantic Information

It is difficult to obtain ground-truth clothing segmentation masks for in-the-wild datasets. Hence, we use Graphonomy [7], an off-the-shelf human clothing segmentation model, to generate reasonably reliable pseudo ground truth.

Figure A: Clothed Human Scans. Examples of clothed human scans with different clothing, poses and camera views (columns 1, 3, 5), along with the corresponding SMPL bodies, where each vertex is colored based on the output of the clothing segmentation model [7] applied to the respective scan images (columns 2, 4, 6). Only a subset of camera views is shown here.

A.1 Clothing Segmentation Masks

Graphonomy provides three models of differing segmentation granularity; we choose the one with 20 labels, also known as the Universal Model, which gives the best clothing segmentation performance among the Graphonomy variants. The 20 labels are: Background, Hat, Hair, Glove, Sunglasses, UpperClothes, Dress, Coat, Socks, Pants, Jumpsuits, Scarf, Skirt, Face, LeftArm, RightArm, LeftLeg, RightLeg, LeftShoe and RightShoe.

During inference, to get more accurate predictions, we follow the original implementation and run the model at several input scales to account for different image resolutions. We then merge the outputs across scales, using bilinear upsampling and downsampling to bring each prediction back to the original image resolution. For large images, we use a single scaling factor. We also flip the image horizontally and average the predictions for the flipped image with those for the original.
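The multi-scale, flip-averaged inference described above can be sketched roughly as below. This is an illustration, not the released pipeline: the scale set is hypothetical, and a complete implementation would also swap left/right labels (e.g. LeftArm and RightArm) after un-flipping the mirrored prediction.

```python
import torch
import torch.nn.functional as F

def multiscale_flip_inference(model, image, scales=(0.5, 0.75, 1.0, 1.25)):
    """Average segmentation logits over several input scales and a horizontal flip.

    image: (1, 3, H, W) tensor; model maps it to per-pixel class logits
    (1, C, h, w). The scale set here is illustrative only.
    """
    _, _, h, w = image.shape
    fused = 0.0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        # Forward pass on the scaled image and on its horizontal mirror,
        # un-flipping the mirrored logits before averaging.
        logits = model(scaled)
        logits_flip = torch.flip(model(torch.flip(scaled, dims=[3])), dims=[3])
        logits = 0.5 * (logits + logits_flip)
        # Resize back to the original resolution before fusing across scales.
        fused = fused + F.interpolate(logits, size=(h, w), mode="bilinear",
                                      align_corners=False)
    return (fused / len(scales)).argmax(dim=1)  # (1, H, W) label map
```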

A.2 Processing Pseudo Ground-Truth Masks

The generated pseudo ground truth cannot be used directly to supervise existing human body estimation networks, because Graphonomy's output is incompatible with the training procedure of 3D pose regressors [15].

Graphonomy is not an instance segmentation model, so it cannot differentiate between people in an image, whereas standard human body estimators [15, 13, 17] train on a single person at a time. To circumvent this, we use the 2D keypoints to obtain a rough estimate of the person's region in the image, expanding it by a fixed pixel offset in both the x and y directions beyond the maximum/minimum keypoint locations.
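The keypoint-based region estimate might look like the following sketch; the confidence threshold and offset value are illustrative, not taken from the paper:

```python
import numpy as np

def person_region_from_keypoints(keypoints, img_h, img_w, offset=30):
    """Estimate a person bounding box from detected 2D keypoints.

    keypoints: (K, 3) array of (x, y, confidence); low-confidence joints
    are ignored. `offset` is the margin added around the keypoint extremes
    (the value here is hypothetical).
    """
    valid = keypoints[keypoints[:, 2] > 0.3][:, :2]
    x0, y0 = valid.min(axis=0) - offset
    x1, y1 = valid.max(axis=0) + offset
    # Clamp the expanded box to the image bounds.
    x0, y0 = max(int(x0), 0), max(int(y0), 0)
    x1, y1 = min(int(x1), img_w - 1), min(int(y1), img_h - 1)
    return x0, y0, x1, y1
```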

Due to occlusion or prediction inaccuracies, the pixels assigned to a particular Graphonomy label may cover an extremely small part of the image. As DSR-MC tightly supervises the rendered SMPL body with the target binary mask, it is important that the target masks be reliable. Hence, from the predefined set of five labels (LeftArm, RightArm, LeftShoe, RightShoe, Face), we remove any label whose mask falls below a minimum pixel count.
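A minimal version of this reliability filter is shown below; the label ids and the pixel threshold are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical integer ids for the five DSR-MC labels; the actual
# Graphonomy label ids may differ.
DSR_MC_LABELS = {"Face": 14, "LeftArm": 15, "RightArm": 16,
                 "LeftShoe": 19, "RightShoe": 20}

def reliable_mc_masks(seg, min_pixels=100):
    """Keep only DSR-MC label masks covering enough pixels to be trusted.

    seg: (H, W) integer label map from the segmentation model.
    Returns {label_name: binary mask} for labels passing the size check.
    """
    masks = {}
    for name, label_id in DSR_MC_LABELS.items():
        mask = (seg == label_id)
        if mask.sum() >= min_pixels:  # drop unreliable, tiny masks
            masks[name] = mask
    return masks
```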

There is a one-to-one mapping from DSR-MC labels to Graphonomy labels. The same does not hold for DSR-C, which groups several clothing labels; consequently, for DSR-C we define the coarse mapping given in Table I.

DSR-C Label      Graphonomy Labels
Background       Background
LowerClothes     Pants, Skirt
UpperClothes     UpperClothes, Dress, Coat, Jumpsuits
MinimalClothing  Hat, Hair, Glove, Sunglasses, Socks, Scarf, Face, LeftArm, RightArm, LeftLeg, RightLeg, LeftShoe, RightShoe

Table I: Mapping of DSR-C labels to Graphonomy labels.
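The mapping in Table I can be expressed directly as a lookup table. The label names follow Table I; the dictionary itself is our own encoding of the mapping, not code from the paper:

```python
# Coarse mapping from the 20 Graphonomy labels to the 4 DSR-C labels.
GRAPHONOMY_TO_DSRC = {
    "Background": "Background",
    "Pants": "LowerClothes", "Skirt": "LowerClothes",
    "UpperClothes": "UpperClothes", "Dress": "UpperClothes",
    "Coat": "UpperClothes", "Jumpsuits": "UpperClothes",
    # All remaining labels denote minimally-clothed regions.
    "Hat": "MinimalClothing", "Hair": "MinimalClothing",
    "Glove": "MinimalClothing", "Sunglasses": "MinimalClothing",
    "Socks": "MinimalClothing", "Scarf": "MinimalClothing",
    "Face": "MinimalClothing", "LeftArm": "MinimalClothing",
    "RightArm": "MinimalClothing", "LeftLeg": "MinimalClothing",
    "RightLeg": "MinimalClothing", "LeftShoe": "MinimalClothing",
    "RightShoe": "MinimalClothing",
}
```

Applying the mapping elementwise to a Graphonomy label map yields the coarse 4-class target used by DSR-C.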
Figure B: Occlusion Failure Analysis. Qualitative failure cases under occlusion. Rows 1-2: outputs on COCO and 3DPW, respectively. Rows 3-4: similar occlusion cases present in the training samples.

Appendix B Semantic Prior for SMPL

To supervise the human body regressor network with semantic information, we need a prior probability that describes which parts of the SMPL body correspond to a particular semantic label. To this end, we use clothed human scans from the AGORA dataset [30], which vary in clothing, pose and identity. AGORA provides clothed 3D people with ground-truth SMPL-X bodies fit to the scans; we convert the SMPL-X fits to SMPL. For each scan, we render images from multiple camera views to cover different angles, and run Graphonomy on each image to obtain 2D clothing segmentations of the scan. The output of this process is illustrated in Fig. A. We also render the fitted SMPL model with the known camera parameters to obtain correspondences between SMPL body vertices and image pixels.

Given this training data, we can straightforwardly compute the prior probability of a SMPL vertex carrying each of the Graphonomy labels: we count the occurrences of a particular label at a vertex and divide by the total occurrences of all labels at that vertex, excluding the Background label. This yields the prior per-vertex probability that a SMPL vertex has a given Graphonomy label. We also assign a small probability to a vertex being background, which increases robustness to occlusion. As an additional step, we use the SMPL body part segmentation to clean the semantic prior: Graphonomy gives incorrect predictions for some clothed scan images, and these would affect downstream tasks. Hence, if a "leg" vertex (according to the SMPL part segmentation) is assigned a high probability of being a hand label, we set that probability to zero. This avoids obvious failures when Graphonomy produces incorrect predictions. Note that a more sophisticated prior could also capture spatial correlations of clothing, but we did not find this necessary.
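The counting procedure can be sketched as follows; the array layout and the amount of probability mass reserved for the background label are our own assumptions:

```python
import numpy as np

def vertex_label_prior(vertex_labels, num_labels, background_id=0, eps=0.01):
    """Estimate a per-vertex distribution over semantic labels.

    vertex_labels: (R, V) integer array; entry (r, v) is the label observed
    at SMPL vertex v in rendering r (background_id where the vertex was unseen).
    Returns (V, num_labels) probabilities. A small mass `eps` is kept on the
    background label for robustness to occlusion (the value is illustrative).
    """
    R, V = vertex_labels.shape
    counts = np.zeros((V, num_labels))
    for r in range(R):
        counts[np.arange(V), vertex_labels[r]] += 1
    # Exclude background observations from the normalization.
    counts[:, background_id] = 0
    totals = counts.sum(axis=1, keepdims=True)
    prior = np.where(totals > 0, counts / np.maximum(totals, 1), 0.0)
    # Reserve a small probability for background, renormalizing the rest.
    prior = prior * (1.0 - eps)
    prior[:, background_id] = eps
    return prior
```

The subsequent cleaning step would then zero out implausible label probabilities per SMPL body part and renormalize.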

Figure C: Multi-Person Failure Analysis. Qualitative failure cases when multiple people are present. Rows 1-2: outputs on COCO and 3DPW, respectively. Rows 3-4: similar multi-person failure cases present in the training samples.
Figure D: Additional qualitative results on 3DPW. From left to right: input image, SPIN [17], SPIN side view, EFT [13], EFT side view, DSR, and DSR side view.
Figure E: Additional qualitative results on COCO. From left to right: input image, SPIN [17], SPIN side view, EFT [13], EFT side view, DSR, and DSR side view.

Appendix C Failure Case Analysis

We qualitatively analyze the failure cases of our method and broadly categorize them into two types: occlusion failures, shown in Fig. B, and multi-person failures, shown in Fig. C. Note that these are also cases where standard 3D pose estimation methods commonly fail.

First, we observe failures when self-occlusion or scene occlusion produces unreasonable poses, so we analyzed the occluded training samples. As Fig. B shows, Graphonomy outputs a black patch (the Background class) when an object or the scene occludes the person. Since DSR-C minimizes the negative log probability of a rendered vertex carrying a particular label, and the background label has low probability, occlusion can push the pose toward incorrect configurations. More complete labeling of objects such as backpacks, or training with synthetic occlusion, could improve this. Occlusion can also hinder detailed fitting of the body where the labels associated with DSR-MC are occluded; additional occlusion-handling techniques could help our approach in such cases.
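For intuition, a DSR-C-style penalty on soft-rendered label probabilities might look like the sketch below. The actual loss operates on differentiable renderings of the SMPL mesh carrying the semantic prior; here we assume a precomputed per-pixel label distribution, so this is an illustration of the penalty form only:

```python
import torch

def dsr_c_loss(rendered_probs, clothing_labels, eps=1e-6):
    """Negative log probability that each rendered pixel's vertex carries
    the clothing label observed in the image.

    rendered_probs: (H, W, C) soft-rendered per-pixel label distributions,
    assumed to come from splatting the per-vertex semantic prior through a
    differentiable renderer (not shown here).
    clothing_labels: (H, W) integer target label map from the pseudo
    ground-truth masks.
    """
    # Pick, at each pixel, the probability of the observed label.
    probs = rendered_probs.gather(-1, clothing_labels.unsqueeze(-1)).squeeze(-1)
    return -(probs + eps).log().mean()
```

Under this formulation, occluded pixels labeled Background incur a large penalty wherever the body is rendered, which is consistent with the failure mode described above.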

Another failure mode occurs when multiple people are present in a scene. As Graphonomy is not an instance segmentation network, the pseudo ground truth may still contain other people even after the cleaning heuristics of Section A.2. This confuses training and results in misaligned bodies at inference time. Figure C shows common cases where the upper-body clothing of multiple people is merged into one segment, and where clothing masks of partially visible people in the background degrade the quality of the obtained masks. Our method would benefit from better instance-level clothing segmentation.

Higher-quality Graphonomy masks lead to larger performance gains for DSR. We demonstrate this with an ablation study on the Human3.6M [12] dataset, where the Graphonomy predictions are more reliable due to the simpler background and single subject; the quantitative results of this experiment are reported in the main paper.

Overall, our performance depends on the off-the-shelf model we use to supervise the clothing semantics of the person. Nevertheless, our improvements over the state of the art show that even weak supervision of clothing semantics is valuable for detailed 3D body fits. The success of our approach suggests that more accurate human parsing and clothing segmentation are a good investment for the community.

Appendix D Additional Qualitative Results

We show additional qualitative results comparing our method with other state-of-the-art methods [17, 13] on 3DPW [46] and COCO [21], which are challenging in-the-wild benchmarks for 3D human pose and shape estimation. The results are shown in Figures D and E; next to each example, we show the corresponding side view. Our approach produces more accurate pose and shape, better aligned with the person in the image, than current SOTA approaches.