
3DPeople: Modeling the Geometry of Dressed Humans

by Albert Pumarola et al.

Recent advances in 3D human shape estimation build upon parametric representations that model very well the shape of the naked body, but are not appropriate to represent the clothing geometry. In this paper, we present an approach to model dressed humans and predict their geometry from single images. We contribute in three fundamental aspects of the problem, namely, a new dataset, a novel shape parameterization algorithm and an end-to-end deep generative network for predicting shape. First, we present 3DPeople, a large-scale synthetic dataset with 2.5 million photo-realistic images of 80 subjects performing 70 activities and wearing diverse outfits. Besides providing textured 3D meshes for clothes and body, we annotate the dataset with segmentation masks, skeletons, depth, normal maps and optical flow. All this together makes 3DPeople suitable for a plethora of tasks. We then represent the 3D shapes using 2D geometry images. To build these images we propose a novel spherical area-preserving parameterization algorithm based on the optimal mass transportation method. We show that this approach improves upon existing spherical maps, which tend to shrink the elongated parts of full body models such as the arms and legs, making the geometry images incomplete. Finally, we design a multi-resolution deep generative network that, given an input image of a dressed human, predicts his/her geometry image (and thus the clothed body shape) in an end-to-end manner. We obtain very promising results in jointly capturing body pose and clothing shape, both on synthetic validation and on in-the-wild images.





1 Introduction

With the advent of deep learning, the problem of predicting the geometry of the human body from single images has experienced a tremendous boost. The combination of Convolutional Neural Networks with large MoCap datasets [42, 19] resulted in a substantial number of works that robustly predict the 3D position of the body joints [27, 28, 30, 34, 38, 46, 49, 52, 59].

In order to estimate the full body shape, a standard practice adopted in [11, 13, 17, 22, 50, 61] is to regress the parameters of low-rank parametric models [9, 26]. Nevertheless, while these parametric models describe very accurately the geometry of the naked body, they are not appropriate to capture the shape of clothed humans.

Current trends focus on proposing alternative representations to the low-rank models. Varol et al. [51] advocate for a direct inference of volumetric body shape, although still without accounting for the clothing geometry. Very recently, [33] used 2D silhouettes and the visual hull algorithm to recover the shape and texture of clothed human bodies. Despite very promising results, this approach still requires frontal-view input images of the person with no background, and under relatively simple body poses.

In this paper, we introduce a general pipeline to estimate the geometry of dressed humans which is able to cope with a wide spectrum of clothing outfits and textures, complex body poses and shapes, and changing backgrounds and camera viewpoints. For this purpose, we contribute in three key areas of the problem, namely, the data collection, the shape representation and the image-to-shape inference.

Concretely, we first present 3DPeople, a new large-scale dataset with 2.5 million photorealistic synthetic images of people wearing varying clothes and apparel. The dataset comprises 40 male and 40 female subjects with different body shapes and skin tones, performing 70 distinct actions (see Fig. 1). It contains the 3D geometry of both the naked and dressed body, and additional annotations including skeletons, depth and normal maps, optical flow and semantic segmentation masks. This additional data makes 3DPeople very similar to SURREAL [52], which was built for similar purposes. The key difference between SURREAL and 3DPeople is that in SURREAL the clothing is directly mapped as a texture on top of the naked body, while in 3DPeople the clothing has its own geometry.

As essential as gathering a rich dataset is the question of what is the most appropriate geometry representation for a deep network. In this paper we consider the "geometry image" proposed originally in [16] and recently used to encode rigid objects in [7, 43]. The construction of a geometry image involves two steps: first, a mapping of a genus-0 surface onto a spherical domain, and then a mapping onto a 2D grid resembling an image. Our contribution here is on the spherical mapping. We found that existing algorithms [12, 7] were not accurate, especially for the elongated parts of the body. To address this issue we devise a novel spherical area-preserving parameterization algorithm that combines and extends FLASH [12] and the optimal mass transportation method [31].

Our final contribution consists of designing a generative network to map input RGB images of a dressed human into his/her corresponding geometry image. Since we consider geometry images, learning such a mapping is highly complex. We alleviate the learning process through a coarse-to-fine strategy, combined with a series of geometry-aware losses. The full network is trained in an end-to-end manner, and the results are very promising on a variety of input data, including both synthetic and real images.

2 Related work

3D Human shape estimation. While the problem of localizing the 3D position of the joints from a single image has been extensively studied [27, 28, 30, 34, 38, 46, 49, 52, 59, 62], the estimation of the 3D body shape has received relatively little attention. This is presumably due to the well-established datasets [42, 19] being annotated only with skeleton joints.

The inherent ambiguity of estimating human shape from a single view is typically addressed using shape embeddings learned from body scan repositories like SCAPE [9] and SMPL [26]. The body geometry is described by a reduced number of pose and shape parameters, which are optimized to match image characteristics [10, 11, 25]. Dibra et al. [13] were the first to use a CNN fed with silhouette images to estimate shape parameters. In [47, 50], SMPL body parameters are predicted by incorporating differentiable renderers into the deep network to directly estimate and minimize the error of image features. On top of this, [22] introduces an adversarial loss that penalizes non-realistic body shapes. All these approaches, however, build upon low-dimensional parametric models which are only suitable to model the geometry of the naked body.

Figure 1: Annotations of the 3DPeople dataset. For each of the 80 subjects of the dataset, we generate 280 video sequences (70 actions seen from 4 camera views). The bottom of the figure shows 5 sample frames of the Running sequence. Every RGB frame is annotated with the information reported in the top of the figure. 3DPeople is the first large-scale dataset with geometric meshes of body and clothes.

Non-parametric representations for 3D objects. What is the most appropriate 3D object representation to train a deep network remains an open question, especially for non-rigid bodies. Standard non-parametric representations for rigid objects include voxels [14, 58], octrees [48, 54, 55] and point clouds [45]. [7, 43] use 2D embeddings computed with geometry images [16] to represent rigid objects; interestingly, very promising results for the reconstruction of non-rigid hands were also reported. DeformNet [36] proposes the first deep model to reconstruct the 3D shape of non-rigid surfaces from a single image. BodyNet [51] explores a network that predicts voxelized human body shape. Very recently, [33] introduced a pipeline that, given a single image of a person in frontal position, predicts the body silhouette as seen from different views, and then uses a visual hull algorithm to estimate the 3D shape.

Generative Adversarial Networks. Originally introduced by [15], GANs have been used to model human body distributions and generate novel images of a person under arbitrary poses [37]. Kanazawa et al. [22] explicitly learned the distribution of real SMPL parameters. DVP [23], paGAN [32] and GANimation [35] presented models for continuous face animation and manipulation. GANs have also been applied to edit [18, 44, 53] and generate [5] talking faces.

Datasets for body shape analysis. Datasets are fundamental in the deep-learning era. While obtaining annotations is quite straightforward for 2D poses [41, 8, 21], the 3D case requires sophisticated MoCap systems. Additionally, the datasets acquired this way [42, 19] are mostly indoors. Even more complex is the task of obtaining 3D body shape, which requires expensive setups with multi-camera rigs or 3D scanners. To overcome this situation, datasets of synthetically generated yet photo-realistic images have emerged as a tool to produce massive amounts of training data. SURREAL [52] is the largest and most complete dataset so far, with more than 6 million frames generated by projecting synthetic textures of clothes onto random SMPL body shapes. The dataset is further annotated with body masks, optical flow and depth. However, since clothes are projected onto the naked SMPL shapes just as textures, they cannot be explicitly modeled. To fill this gap, we present the 3DPeople dataset of 3D dressed humans in motion.


Figure 2: Geometry image representation of the reference mesh. (a) Reference mesh in a T-pose configuration, color coded using the xyz vertex positions. (b) Spherical parameterization; (c) Octahedral parameterization; (d) Unfolding the octahedron to a planar configuration; (e) Geometry image, resulting from the projection of the octahedron onto a plane; (f) Mesh reconstructed from the geometry image. Colored edges in the octahedron and in the geometry image represent the symmetry that is later exploited by the mesh regressor.

Figure 3: Comparison of spherical mapping methods. Shape reconstructed from a geometry image obtained with three different algorithms. Left: FLASH [12]; Center: [7]; Right: the SAPP algorithm we propose. Note that SAPP is the only method that effectively recovers the feet and hands.

3 3DPeople dataset

To facilitate future research, we introduce 3DPeople, the first dataset of dressed humans with a specific geometry representation for the clothes. In numbers, the dataset contains 2.5 million photorealistic images of 40 male and 40 female subjects performing 70 actions. Every subject-action sequence is captured from 4 camera views, and for each view the texture of the clothes, the lighting direction and the background image are randomly changed. Each frame is annotated with (see Fig. 1): a 3D textured mesh of the naked and dressed body; a 3D skeleton; body-part and cloth segmentation masks; a depth map; optical flow; and camera parameters. We next briefly describe the generation process:

Body models: We have generated fully textured triangular meshes for 80 human characters using Adobe Fuse [1] and MakeHuman [2]. The distribution of the subjects' physical characteristics covers a broad spectrum of body shapes, skin tones and hair geometry (see Fig. 1).

Clothing models: Each subject is dressed with a different outfit including a variety of garments, combining tight and loose clothes. Additional apparel like sunglasses, hats and caps are also included. The final rigged meshes of the body and clothes contain approximately 20K vertices.

MoCap sequences: We gather 70 realistic motion sequences from Mixamo [3]. These include human movements of varying complexity, from drinking and typing actions that produce small body motions to actions like breakdance or backflip that involve very complex patterns. The mean sequence length is 110 frames. While these are relatively short sequences, they have a large expressivity, which we believe also makes 3DPeople appropriate for exploring action recognition tasks.

Textures, camera, lights and background: We then use Blender [4] to apply the 70 MoCap animation sequences to each character. Every sequence is rendered from 4 camera views, yielding a total of 22,400 clips. We use a projective camera with an 800 mm focal length and a fixed pixel resolution. The 4 viewpoints correspond approximately to orthogonal directions aligned with the ground. The distance to the subject changes for every sequence to ensure a full view of the body in all frames. The textures of the clothes are randomly changed for every sequence (see again Fig. 1). The illumination is composed of an ambient light plus a light source at infinity, whose direction is changed per sequence. As in [52], we render the person on top of a static background image, randomly taken from the LSUN dataset [60].

Semantic labels: For every rendered image, we provide segmentation labels of the clothes (8 classes) and body (14 parts). Observe in Fig. 1-top-right that the former are aligned with the dressed human, while the body parts are aligned with the naked body.


Figure 4: Geometry image estimation for an arbitrary mesh. (a) Input mesh $\mathcal{M}_a$ in an arbitrary pose, color coded using the xyz position of the vertices; (b) Same mesh in a T-pose configuration $\mathcal{M}_a^{t}$. The color of the mesh is mapped from $\mathcal{M}_a$; (c) Reference T-pose $\mathcal{M}_r$. The colors again correspond to those transferred from $\mathcal{M}_a^{t}$ through the non-rigid map between $\mathcal{M}_a^{t}$ and $\mathcal{M}_r$; (d) Spherical mapping of $\mathcal{M}_r$; (e) Geometry image of $\mathcal{M}_a$; (f) Mesh reconstructed from the geometry image. Note that while being computed through a non-rigid mapping between the two T-poses, the recovered shape is a very good approximation of the input mesh $\mathcal{M}_a$.

4 Problem formulation

Given a single image of a person wearing an arbitrary outfit, we aim at designing a model capable of directly estimating the 3D shape of the clothed body. We represent the body shape through the mesh $\mathcal{M}$ associated to a geometry image $\mathbf{X} \in \mathbb{R}^{N \times N \times 3}$ with $N^2$ vertices, where each entry $\mathbf{X}[i,j] \in \mathbb{R}^3$ holds the 3D coordinates of one vertex, expressed in the camera coordinate system and centered on the root joint. This representation is a key ingredient of our design, as it maps the 3D mesh to a regular 2D grid structure that preserves the neighborhood relations, thus fulfilling the locality assumption required by CNN architectures. Furthermore, the geometry image representation allows reducing or increasing the mesh resolution by simply downsampling or upsampling the grid uniformly. This will play an important role in our strategy of designing a coarse-to-fine shape estimation approach.
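To make the representation concrete, here is a minimal numpy sketch of how a geometry image doubles as both an image-like tensor and a vertex set (the array shapes and helper names are ours, not part of the paper):

```python
import numpy as np

def geometry_image_to_vertices(gim):
    """Flatten an N x N x 3 geometry image into an (N*N, 3) vertex array.

    Each pixel stores the xyz coordinates of one mesh vertex; grid
    neighbours are (approximately) mesh neighbours, which is what lets a
    CNN operate on the shape directly.
    """
    n = gim.shape[0]
    return gim.reshape(n * n, 3)

def downsample(gim, factor=2):
    """Reduce the mesh resolution by uniformly subsampling the grid."""
    return gim[::factor, ::factor, :]

# Toy 4x4 geometry image (random vertex positions stand in for a mesh).
rng = np.random.default_rng(0)
gim = rng.normal(size=(4, 4, 3))
assert geometry_image_to_vertices(gim).shape == (16, 3)
assert downsample(gim).shape == (2, 2, 3)
```

Because neighborhood relations survive on the grid, standard convolutions and pooling apply to the shape with no special machinery.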

We next describe the two main steps of our pipeline: 1) the process of constructing the geometry images, and 2) the deep generative model we propose for predicting 3D shape.

5 Geometry image for dressed humans

The deep network we describe later will be trained using pairs of images and their corresponding geometry image. For creating the geometry images we consider two different cases: one for a reference mesh in a T-pose configuration, and another for any other mesh of the dataset.

5.1 Geometry image for a reference mesh

One of the subjects of our dataset in a T-pose configuration is chosen as the reference mesh. The process for mapping this mesh onto a planar regular grid is illustrated in Fig. 2. It involves the following steps:

Repairing the mesh. Let $\mathcal{M}_r$ be the reference mesh in a T-pose configuration (Fig. 2-a). We assume this mesh to be a manifold mesh and to be genus-0. Most of the meshes in our dataset, however, do not fulfill these conditions. In order to fix the mesh we follow the heuristic described in [7], which consists of a voxelization, a selection of the largest connected region of the $\alpha$-shape, and subsequent hole filling using a medial axis approach. We denote by $\widehat{\mathcal{M}}_r$ the repaired mesh.

Spherical parameterization. Given the repaired genus-0 mesh $\widehat{\mathcal{M}}_r$, we next compute the spherical parameterization that maps every vertex of $\widehat{\mathcal{M}}_r$ onto the unit sphere $\mathbb{S}^2$ (Fig. 2-b). Details of the algorithm we use are explained below.

Unfolding the sphere. The sphere is mapped onto an octahedron and then cut along edges to output a flat geometry image $\mathbf{X}_r$. Let us formally denote by $\Phi$ the mapping from the reference mesh to the geometry image. The unfolding process is shown in Fig. 2-(c,d,e). Colored lines in the geometry image correspond to the same edge in the octahedron, and are split by the unfolding operation. We will later enforce this symmetry constraint when predicting geometry images.
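For intuition, the sphere-to-square step can be sketched with the standard octahedral unfolding used in octahedral normal maps; the exact cut and orientation used in the paper may differ, and the helper names are ours:

```python
import numpy as np

def sphere_to_octahedron(p):
    """Radially project points (any nonzero radius) onto the octahedron |x|+|y|+|z| = 1."""
    return p / np.abs(p).sum(axis=-1, keepdims=True)

def unfold_octahedron(q):
    """Unfold octahedron points into the square [-1, 1]^2.

    The upper pyramid (z >= 0) maps to the inner diamond |u|+|v| <= 1;
    the lower pyramid is folded outwards towards the four corners.
    Points exactly on the lower cut seams are boundary-ambiguous.
    """
    x, y, z = q[..., 0], q[..., 1], q[..., 2]
    u, v = x.copy(), y.copy()
    lower = z < 0
    u[lower] = np.sign(x[lower]) * (1.0 - np.abs(y[lower]))
    v[lower] = np.sign(y[lower]) * (1.0 - np.abs(x[lower]))
    return np.stack([u, v], axis=-1)

pts = np.array([[0.0, 0.0, 1.0],     # north pole -> centre of the square
                [1.0, 0.0, 0.0],     # equator point -> diamond boundary
                [0.3, 0.3, -0.4]])   # lower hemisphere -> outside the diamond
uv = unfold_octahedron(sphere_to_octahedron(pts))
# uv ≈ [[0, 0], [1, 0], [0.7, 0.7]]
```

Rasterizing the resulting (u, v) coordinates onto a regular grid, with the vertex xyz values as pixel values, yields a geometry image.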

5.2 Spherical area-preserving parameterization

Although there exist several spherical parameterization schemes (e.g., [12, 7]), we found that they tend to shrink the elongated parts of full body models, such as the arms and legs, making the geometry images incomplete (see Fig. 3). In this work, we develop a spherical area-preserving parameterization (SAPP) algorithm for genus-0 full body models by combining and extending the FLASH method [12] and the optimal mass transportation method [31]. Our algorithm is particularly advantageous for handling models with elongated parts. The key idea is to begin with an initial parameterization onto a planar triangular domain, with a suitable rescaling that corrects its size. The area distortion of the initial parameterization is then reduced using quasi-conformal composition. Finally, the spherical area-preserving parameterization is produced using optimal mass transportation followed by the inverse stereographic projection.
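The quantity the algorithm drives toward uniformity can be made explicit: the per-triangle area distortion of a parameterization, which conformal-style maps push far below 1 on the arms and legs. A small numpy sketch (the function names and the normalization are ours):

```python
import numpy as np

def triangle_areas(verts, faces):
    """Per-triangle areas of a mesh given (V, 3) vertices and (F, 3) face indices."""
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

def area_distortion(verts, mapped_verts, faces):
    """Ratio of normalized triangle areas after a parameterization.

    A perfectly area-preserving map gives a ratio of 1 for every triangle;
    maps that shrink elongated parts give ratios far below 1 there.
    Total areas are normalized out so only relative distortion is measured.
    """
    a0 = triangle_areas(verts, faces)
    a1 = triangle_areas(mapped_verts, faces)
    return (a1 / a1.sum()) / (a0 / a0.sum())

# Toy check: uniform scaling changes no *relative* areas, so the
# distortion ratio is exactly 1 everywhere.
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
faces = np.array([[0, 1, 2], [1, 3, 2]])
ratios = area_distortion(verts, 2.0 * verts, faces)
assert np.allclose(ratios, 1.0)
```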

Figure 5: Overview of GimNet. The proposed architecture consists of two main blocks: a multi-scale geometry image regressor and a multi-scale discriminator to evaluate the local and global consistency of the estimated meshes.

5.3 Geometry image for arbitrary meshes

The approach for creating the geometry image described in the previous subsection is quite computationally demanding (it can take up to 15 minutes for meshes with complex topology). To compute the geometry image for several thousand training meshes, we have therefore devised an alternative approach. Let $\mathcal{M}_a$ be the mesh of any subject of the dataset under an arbitrary pose (Fig. 4-a), and let $\mathcal{M}_a^{t}$ be its T-pose configuration (Fig. 4-b). We assume there is a 1-to-1 vertex correspondence between both meshes, that is, a known bijective function $\psi$ maps every vertex of $\mathcal{M}_a$ to its counterpart in $\mathcal{M}_a^{t}$. (This is guaranteed in our dataset, as all meshes of the same subject have the same number of vertices.)

We then compute dense correspondences between $\mathcal{M}_a^{t}$ and the reference T-pose $\mathcal{M}_r$ using a non-rigid ICP algorithm [6]. We denote this mapping as $\chi$ (see Fig. 4-c). We can then finally compute the geometry image $\mathbf{X}_a$ for the input mesh by concatenating mappings:

$$\mathbf{X}_a = (\Phi \circ \chi \circ \psi)(\mathcal{M}_a),$$

where $\psi$ is the known bijection from $\mathcal{M}_a$ to its T-pose $\mathcal{M}_a^{t}$, and $\Phi$ is the mapping from the reference mesh to the geometry image domain estimated in Sec. 5.1. It is worth pointing out that the non-rigid ICP between the pairs of T-poses is also highly computationally demanding, but it only needs to be computed once per subject of the dataset. Once this is done, the geometry image for a new input mesh can be created in a few seconds.

An important consequence of this procedure is that all geometry images of the dataset will be semantically aligned, that is, every entry $\mathbf{X}[i,j]$ will correspond to (approximately) the same semantic part of the model. This significantly alleviates the learning task of the deep network.
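In practice, the expensive steps (mesh repair, SAPP, non-rigid ICP) collapse into a fixed per-subject lookup table from geometry-image pixels to vertex indices, after which producing the geometry image for any new pose is a single gather. A hypothetical numpy sketch (the lookup table here is random; in the real pipeline it is the output of the steps above):

```python
import numpy as np

def build_geometry_image(pose_verts, pixel_to_vertex):
    """Gather posed vertex coordinates into a geometry image.

    pixel_to_vertex: (N, N) integer map, precomputed once per subject by
    composing the T-pose correspondence with the reference geometry-image
    mapping. Any new pose of that subject then reduces to fancy indexing.
    """
    return pose_verts[pixel_to_vertex]  # -> (N, N, 3)

rng = np.random.default_rng(0)
pose_verts = rng.normal(size=(100, 3))          # subject mesh in some pose
pixel_to_vertex = rng.integers(0, 100, (8, 8))  # toy per-subject lookup table
gim = build_geometry_image(pose_verts, pixel_to_vertex)
assert gim.shape == (8, 8, 3)
```

This is why the per-mesh cost quoted above drops from minutes to seconds once the per-subject correspondences exist.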

6 GimNet

We next introduce GimNet, our deep generative network to estimate geometry images (and thus 3D shape) of dressed humans from a single image. An overview of the model is shown in Fig. 5. Given the input image, we first extract the 2D joint locations represented as heatmaps [57, 36], which are then fed into a mesh regressor trained to reconstruct the shape of the person employing a geometry image based representation. Due to the high dimensionality of both the heatmaps and the geometry image, the regressor operates in a coarse-to-fine manner, progressively reconstructing meshes at higher resolution. To further enforce the reconstruction to lie on the manifold of anthropomorphic shapes, an adversarial scheme with two discriminators is applied.

6.1 Model architecture

Mesh regressor. Given the input image $I$ and the estimated 2D body joint heatmaps $\mathbf{H}$, the mesh regressor aims to predict the geometry image $\mathbf{X}$; that is, we seek to estimate the mapping $(I, \mathbf{H}) \mapsto \mathbf{X}$. Instead of directly learning this complex mapping, we break the process into a sequence of more manageable steps. The regressor initially estimates a low-resolution mesh, and then progressively increases its resolution (see Fig. 5). This coarse-to-fine approach allows the regressor to first focus on the basic shape configuration and then shift attention to finer details, while also providing more stability compared to a network that learns the direct mapping.

As shown in Fig. 2-e, geometry images have symmetry properties derived from unfolding the octahedron into a square; specifically, each side of the geometry image is symmetric with respect to its midpoint. We enforce this property using a differentiable layer that linearly operates over the edges of the estimated geometry images.
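One simple differentiable realization of such a layer averages each border of the geometry image with its own reversal; this is linear, so gradients pass through, and it projects the prediction onto the symmetric set. The paper does not spell the layer out, so the following numpy sketch is our assumption of how it could look:

```python
import numpy as np

def symmetrize_borders(gim):
    """Enforce the octahedral boundary symmetry of a geometry image.

    Each side of the image must be symmetric about its midpoint (the two
    halves of a side come from the same cut octahedron edge). Averaging a
    border with its reversed copy is a linear, differentiable operation
    that makes every side palindromic.
    """
    g = gim.copy()
    for border in (g[0, :], g[-1, :], g[:, 0], g[:, -1]):
        border[:] = 0.5 * (border + border[::-1])  # in-place on a view of g
    return g

gim = np.random.default_rng(0).normal(size=(8, 8, 3))
sym = symmetrize_borders(gim)
# Each side now equals its own reversal.
assert np.allclose(sym[0, :], sym[0, ::-1])
assert np.allclose(sym[:, -1], sym[::-1, -1])
```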

Figure 6: Mean Error Distance on the test set. We plot the results for the 15 worst and 15 best actions. Besides the results of GimNet, we report the results obtained with the ground-truth GIM (recall that it is an approximation of the actual ground-truth mesh). We also display the results obtained by the recent parametric approach of [22]. The results of this method, however, are merely indicative, as we did not retrain the network on our dataset.

Multi-scale discriminator. Evaluating high-resolution meshes poses a significant challenge for a discriminator, as it needs to simultaneously guarantee local and global mesh consistency on very high-dimensional data. We therefore use two discriminators with the same architecture that operate at different geometry image scales: (i) a global discriminator with a large receptive field that evaluates the shape coherence as a whole; and (ii) a local discriminator that focuses on small patches and enforces the local consistency of the surface triangle faces.

6.2 Learning the model

3D reconstruction error. We first define a supervised multi-level L1 loss for 3D reconstruction as:

$$\mathcal{L}_{3D} = \mathbb{E}_{\mathbf{X} \sim p_{gt},\, \hat{\mathbf{X}} \sim p_{g}} \Big[ \sum_{s=1}^{S} \alpha_s \, \| \mathbf{X}_s - \hat{\mathbf{X}}_s \|_1 \Big],$$

where $p_{gt}$ and $p_{g}$ are the real and generated data distributions of clothed human geometry images respectively, $S$ is the number of scales, $\mathbf{X}_s$ the ground-truth reconstruction at scale $s$ and $\hat{\mathbf{X}}_s$ the estimated reconstruction. The error at each scale is weighted by $\alpha_s$, the ratio between the sizes of $\mathbf{X}_s$ and $\mathbf{X}$. During initial experimentation, the L1 loss reported better reconstructions than the mean squared error.
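A minimal numpy sketch of such a multi-level loss, with uniform subsampling standing in for the coarser scales (the subsampling scheme, weighting and names are our reading of the description, not the paper's exact implementation):

```python
import numpy as np

def multiscale_l1(gt, pred, n_scales=3):
    """Multi-level L1 reconstruction loss between geometry images.

    Each scale is obtained by uniform subsampling (trivial on a geometry
    image); each scale's error is weighted by the ratio of its size to
    the full-resolution size, so the finest resolution dominates.
    """
    total = 0.0
    for s in range(n_scales):
        step = 2 ** s
        g, p = gt[::step, ::step], pred[::step, ::step]
        weight = (g.shape[0] * g.shape[1]) / (gt.shape[0] * gt.shape[1])
        total += weight * np.abs(g - p).mean()
    return total

rng = np.random.default_rng(0)
gt = rng.normal(size=(16, 16, 3))
assert multiscale_l1(gt, gt) == 0.0       # perfect prediction costs nothing
assert multiscale_l1(gt, gt + 1.0) > 0.0  # any offset is penalized
```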

2D projection error. To encourage the mesh to correctly project onto the input image we penalize, at every scale $s$, its projection error computed as:

$$\mathcal{L}_{2D} = \mathbb{E}_{\mathbf{X} \sim p_{gt},\, \hat{\mathbf{X}} \sim p_{g}} \Big[ \sum_{s=1}^{S} \alpha_s \, \| \pi(\mathbf{X}_s) - \pi(\hat{\mathbf{X}}_s) \|_1 \Big],$$

where $\pi$ is the differentiable projection function and $\alpha_s$ is calculated as above.

Adversarial loss. In order to further enforce the mesh regressor to generate anthropomorphic shapes, we play a min-max game [15] between the regressor and the two discriminators operating at different scales. It is well known that non-overlapping support between the true and model distributions can cause severe training instabilities. As proven by [40, 29], this can be addressed by penalizing the discriminator when deviating from the Nash equilibrium, ensuring that its gradients are non-zero orthogonal to the data manifold. Formally, being $D_i$ the $i$-th discriminator, the loss is defined as:

$$\mathcal{L}_{adv}^{i} = \mathbb{E}_{\mathbf{X} \sim p_{gt}} \big[ \log D_i(\mathbf{X}) \big] + \mathbb{E}_{\hat{\mathbf{X}} \sim p_{g}} \big[ \log \big(1 - D_i(\hat{\mathbf{X}})\big) \big] + \lambda_{gp}\, \mathbb{E}_{\mathbf{X} \sim p_{gt}} \big[ \| \nabla D_i(\mathbf{X}) \|^2 \big],$$

where $\lambda_{gp}$ weighs a penalty regularization for the discriminator gradients, only considered on the true data distribution.

Feature matching loss. To improve training stability, we penalize differences in higher-level features of the discriminators [56]. Similar to a perceptual loss, the estimated geometry image is compared with the ground truth at multiple feature levels of the discriminators. Being $D_i^{(l)}$ the $l$-th layer of the $i$-th discriminator, the loss is defined as:

$$\mathcal{L}_{fm} = \mathbb{E}_{\mathbf{X} \sim p_{gt},\, \hat{\mathbf{X}} \sim p_{g}} \Big[ \sum_{i} \sum_{l=1}^{L} \frac{1}{N_{i,l}} \big\| D_i^{(l)}(\mathbf{X}) - D_i^{(l)}(\hat{\mathbf{X}}) \big\|_1 \Big],$$

where the weight $N_{i,l}$ denotes the number of elements in the $l$-th layer of the $i$-th discriminator.
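The per-layer normalization makes every feature level contribute on a comparable scale regardless of its size. A numpy sketch with precomputed activations standing in for the discriminator layers (the names are ours):

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """L1 distance between discriminator activations on real and generated
    geometry images, each layer weighted by 1 / (number of elements).

    real_feats / fake_feats: lists of per-layer activation arrays from one
    discriminator, evaluated on the ground-truth and estimated geometry
    images respectively.
    """
    loss = 0.0
    for fr, ff in zip(real_feats, fake_feats):
        loss += np.abs(fr - ff).sum() / fr.size
    return loss

rng = np.random.default_rng(0)
real = [rng.normal(size=(8, 8, 16)), rng.normal(size=(4, 4, 32))]
fake = [f + 0.1 for f in real]          # uniform offset in every layer
loss = feature_matching_loss(real, fake)
assert np.isclose(loss, 0.2)            # each layer contributes mean |0.1|
```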

Total loss. Finally, we solve the min-max problem:

$$G^{\star} = \arg\min_{G} \max_{D_1, D_2} \; \mathcal{L}_{3D} + \lambda_{2D}\, \mathcal{L}_{2D} + \lambda_{adv} \sum_{i} \mathcal{L}_{adv}^{i} + \lambda_{fm}\, \mathcal{L}_{fm},$$

where $\lambda_{2D}$, $\lambda_{adv}$ and $\lambda_{fm}$ are the hyper-parameters that control the relative importance of every loss term.

Figure 7: Qualitative results. For the synthetic images we plot our estimated results and the shape reconstructed directly from the ground truth geometry image. In all cases we show two different views. The color of the meshes encodes the vertex position.

6.3 Implementation details

For the mesh regressor we build upon the U-Net architecture [39], an encoder-decoder structure with skip connections between features at the same resolution, extended to estimate geometry images at multiple scales. The two discriminator networks operate at different mesh resolutions [56] but share the same PatchGAN [20] architecture, mapping the geometry image to a matrix in which every entry represents the probability of the corresponding patch being close to the real geometry image distribution. The global discriminator evaluates the mesh at the final resolution and the local discriminator a down-sampled version.

The model is trained for 60 epochs on 170,000 synthetic images of cropped clothed people, resized to a fixed resolution, paired with $128 \times 128$ geometry images (meshes with 16,384 vertices). As for the optimizer, we use Adam [24], decaying the learning rate by a constant factor at regular intervals. The weight coefficients for the loss terms are kept fixed for all experiments.

7 Experimental evaluation

We next present quantitative and qualitative results on synthetic images of our dataset and on images in the wild.

Synthetic Results. We evaluate our approach on 25,000 test images randomly chosen from 8 subjects (4 male / 4 female) of the test split. For each test sample we feed GimNet with the RGB image and the ground-truth 2D pose, corrupted by Gaussian noise with a 2 pixel standard deviation. For a given test sample, let $\hat{\mathcal{M}}$ be the estimated mesh, resulting from a direct reshaping of its estimated geometry image $\hat{\mathbf{X}}$. Also, let $\mathcal{M}$ be the ground-truth mesh, which need have neither the same number of vertices as $\hat{\mathcal{M}}$ nor the same topology. Since there is no direct 1-to-1 mapping between the vertices of the two meshes, we propose the following metric:

$$d(\mathcal{M}, \hat{\mathcal{M}}) = \tfrac{1}{2} \big( d_{nn}(\mathcal{M}, \hat{\mathcal{M}}) + d_{nn}(\hat{\mathcal{M}}, \mathcal{M}) \big),$$

where $d_{nn}(\mathcal{A}, \mathcal{B})$ represents the average Euclidean distance from every vertex of $\mathcal{A}$ to its nearest neighbor in $\mathcal{B}$. Note that $d_{nn}$ is not a true distance measure because it is not symmetric; this is why we compute it bidirectionally.
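A brute-force numpy sketch of this bidirectional measure, averaging the two asymmetric directions (exactly how the two directions are combined is our reading; the function names are ours):

```python
import numpy as np

def nn_distance(a, b):
    """Average Euclidean distance from each vertex in `a` to its nearest
    neighbour in `b` (brute force; fine for small vertex sets)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def symmetric_error(gt, est):
    """Bidirectional nearest-neighbour error, usable when the two meshes
    have different vertex counts or topologies."""
    return 0.5 * (nn_distance(gt, est) + nn_distance(est, gt))

gt = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
est = np.array([[0.0, 0, 0], [2, 0, 0]])  # fewer vertices than gt
# gt -> est: (0 + 1 + 0) / 3; est -> gt: 0; average = 1/6
assert np.isclose(symmetric_error(gt, est), 1.0 / 6.0)
```

The asymmetry is visible in the example: `nn_distance(gt, est)` is nonzero while `nn_distance(est, gt)` is zero, which is precisely why both directions are averaged.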

The quantitative results are summarized in Fig. 6. We report the average error (in mm) of GimNet for 30 actions (the 15 with the highest and the 15 with the lowest error). Note that the error of GimNet is bounded between 15 and 35 mm. Recall, however, that we do not consider outlier 2D detections in our experiments, but only 2D noise. We also evaluate the error of the ground-truth geometry image, as it is an approximation of the actual ground-truth mesh. This error is below 5 mm, indicating that the geometry image representation does indeed capture the true shape very accurately. Finally, we also provide the error of the recent parametric approach of [22], which fits SMPL parameters to the input images. Nevertheless, these results are just indicative and cannot be directly compared with our approach, as we did not retrain [22]. We add them here to illustrate the challenge posed by the new 3DPeople dataset. Indeed, the distance error of [22] was computed after performing a rigid ICP alignment of the estimated mesh with the ground-truth mesh (there was no need for this with GimNet).

Qualitative Results. We finally show in Fig. 7 qualitative results on synthetic images from 3DPeople and real fashion images downloaded from the Internet. Remarkably, note how our approach is able to reconstruct long dresses (top-row images), known to be a major challenge [33]. Note also that some of the reconstructed meshes have spikes. This is one of the limitations of non-parametric models: the reconstructions tend to be less smooth than when using parametric fittings. However, non-parametric models also have the advantage that, if properly trained, they can span a much larger configuration space.

8 Conclusions

In this paper we have made three contributions to the problem of reconstructing the shape of dressed humans: 1) we have presented the first large-scale dataset of 3D humans in action in which cloth geometry is explicitly modelled; 2) we have proposed a new algorithm to perform spherical parameterizations of elongated body parts, to later model rigged meshes of human bodies as geometry images; and 3) we have introduced an end-to-end network to estimate human body and clothing shape from single images, without relying on parametric models. While the results we have obtained are very promising, there are still several avenues to explore. For instance, extending the problem to video, exploring new regularization schemes on the geometry images, or combining segmentation and 3D reconstruction are all open problems that can benefit from the proposed 3DPeople dataset.


This work is supported in part by an Amazon Research Award, the Spanish Ministry of Science and Innovation under projects HuMoUR TIN2017-90086-R, ColRobTransp DPI2016-78957 and María de Maeztu Seal of Excellence MDM-2016-0656; and by the EU project AEROARMS ICT-2014-1-644271. We also thank NVidia for hardware donation under the GPU Grant Program.


  • [1] Adobe Fuse.
  • [2] MakeHuman.
  • [3] Mixamo.
  • [4] Blender - a 3d modelling and rendering package.
  • [5] Wav2pix: Speech-conditioned face generation using generative adversarial networks. In ICASSP.
  • [6] Nonrigid ICP. MATLAB Central File Exchange, 2019.
  • [7] A. Sinha, J. Bai, and K. Ramani. Deep Learning 3D Shape Surfaces using Geometry Images. In ECCV, 2016.
  • [8] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. In CVPR, June 2014.
  • [9] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, and J. Davis. SCAPE: Shape Completion and Animation of People. ACM Transactions on Graphics, 24(3):408–416, July 2005.
  • [10] A. O. Balan, L. Sigal, M. J. Black, J. E. Davis, and H. W. Haussecker. Detailed human shape and pose from images. In CVPR, 2007.
  • [11] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black. Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image. In ECCV, 2016.
  • [12] P. T. Choi, K. C. Lam, and L. M. Lui. FLASH: Fast landmark aligned spherical harmonic parameterization for genus-0 closed brain surfaces. SIAM J. Imaging Science, 8(1):67–94, 2015.
  • [13] E. Dibra, H. Jain, C. Oztireli, R. Ziegler, and M. Gross. Human Shape from Silhouettes using Generative HKS Descriptors and Cross-modal Neural Networks. In CVPR, 2017.
  • [14] R. Girdhar, D. Fouhey, M. Rodriguez, and A. Gupta. Learning a Predictable and Generative Vector Representation for Objects. In ECCV, 2016.
  • [15] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative Adversarial Nets. In NIPS, 2014.
  • [16] X. Gu, S. J. Gortler, and H. Hoppe. Geometry Images. ACM Transactions on Graphics, 21(3):355–361, July 2002.
  • [17] P. Guan, A. Weiss, A. O. Balan, and M. J. Black. Estimating Human Shape and Pose from a Single Image. In ICCV, 2009.
  • [18] Z. L. P. L. X. W. Hang Zhou, Yu Liu. Talking face generation by adversarially disentangled audio-visual representation. In AAAI, 2019.
  • [19] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments. PAMI, 36(7):1325–1339, 2014.
  • [20] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image Translation with Conditional Adversarial Networks. In CVPR, 2017.
  • [21] S. Johnson and M. Everingham. Clustered Pose and Nonlinear Appearance Models for Human Pose Estimation. In BMVC, 2010.
  • [22] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik. End-to-end Recovery of Human Shape and Pose. In CVPR, 2018.
  • [23] H. Kim, P. Garrido, A. Tewari, W. Xu, J. Thies, M. Nießner, P. Péerez, C. Richardt, M. Zollhöfer, and C. Theobalt. Deep video portraits. ACM Transactions on Graphics 2018 (TOG), 2018.
  • [24] D. Kingma and J. Ba. ADAM: A method for stochastic optimization. In ICLR, 2015.
  • [25] C. Lassner, J. Romero, M.Kiefel, F.Bogo, M.J.Black, and P.V.Gehler. Unite the People: Closing the Loop between 3D and 2D human representations. In CVPR, 2017.
  • [26] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black. SMPL: A skinned multi-person linear model. ACM Transactions on Graphics, 34(6):248:1–248:16, Oct. 2015.
  • [27] J. Martinez, R. Hossain, J. Romero, and J. Little. A simple yet effective baseline for 3d human pose estimation. In ICCV, 2017.
  • [28] D. Mehta, S. Sridhar, O. Sotnychenko, H. Rhodin, M. Shafiei, H.-P. Seidel, W. Xu, D. Casas, and C. Theobalt. VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera. ACM Transactions on Graphics, 36(4), 2017.
  • [29] L. Mescheder, A. Geiger, and S. Nowozin. Which Training Methods for GANs do actually Converge? In ICML, 2018.
  • [30] F. Moreno-Noguer. 3D Human Pose Estimation from a Single Image via Distance Matrix Regression. In CVPR, 2017.
  • [31] S. Nadeem, Z. Su, W. Zeng, A. E. Kaufman, and X. Gu. Spherical Parameterization Balancing Angle and Area Distortions. IEEE Trans. Vis. Comput. Graph., 23(6):1663–1676, 2017.
  • [32] K. Nagano, J. Seo, J. Xing, L. Wei, Z. Li, S. Saito, A. Agarwal, J. Fursund, and H. Li. pagan: real-time avatars using dynamic textures. In SIGGRAPH Asia 2018 Technical Papers, page 258. ACM, 2018.
  • [33] R. Natsume, S. Saito, Z. Huang, W. Chen, C. Ma, H. Li, and S. Morishima. SiCloPe: Silhouette-Based Clothed People. In CVPR, 2019.
  • [34] G. Pavlakos, X. Z. and. K. G. Derpanis, and K. Daniilidis. Coarse-to-fine volumetric prediction for single-image 3D human pose. In CVPR, 2017.
  • [35] A. Pumarola, A. Agudo, A. M. Martinez, A. Sanfeliu, and F. Moreno-Noguer. Ganimation: Anatomically-aware facial animation from a single image. In

    Proceedings of the European Conference on Computer Vision (ECCV)

    , pages 818–833, 2018.
  • [36] A. Pumarola, A. Agudo, L. Porzi, A. Sanfeliu, V. Lepetit, and F. Moreno-Noguer. Geometry-aware network for non-rigid shape prediction from a single view. In CVPR, 2018.
  • [37] A. Pumarola, A. Agudo, A. Sanfeliu, and F. Moreno-Noguer. Unsupervised person image synthesis in arbitrary poses. In

    Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition

    , pages 8620–8628, 2018.
  • [38] G. Rogez and C. Schmid. MoCap-guided Data Augmentation for 3D Pose Estimation in the Wild. In NIPS, 2016.
  • [39] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
  • [40] K. Roth, A. Lucchi, S. Nowozin, and T. Hofmann. Stabilizing training of generative adversarial networks through regularization. In NIPS. Curran Associates Inc., 2017.
  • [41] B. Sapp and B. Taska. Modec: Multimodal Decomposable Models for Human Pose Estimation. In CVPR, 2013.
  • [42] L. Sigal, A. O. Balan, and M. J. Black. HumanEva: Synchronized Video and Motion Capture Dataset and Baseline Algorithm for Evaluation of Articulated Human Motion. IJCV, 87(1-2), 2010.
  • [43] A. Sinha, A. Unmesh, Q. Huang, and K. Ramani. SurfNet: Generating 3D shape surfaces using deep residual networks. In CVPR, 2017.
  • [44] Y. Song, J. Zhu, X. Wang, and H. Qi. Talking face generation by conditional recurrent adversarial network. arXiv preprint arXiv:1804.04786, 2018.
  • [45] H. Su, H. Fan, and L. Guibas. A Point Set Generation Network for 3D Object Reconstruction from a Single Image. In CVPR, 2017.
  • [46] X. Sun, B. Xiao, S. Liang, and Y. Wei. Integral Human Pose Regression. In ECCV, 2018.
  • [47] J. Tan, I. Budvytis, and R. Cipolla. Indirect Deep structured Learning for 3D Human Body Shape and Pose Prediction. In BMVC, 2017.
  • [48] M. Tatarchenko, A. Dosovitskiy, and T. Brox. Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs. In ICCV, 2017.
  • [49] D. Tome, C. Russell, and L. Agapito. Lifting from the deep: Convolutional 3D pose estimation from a single image. In CVPR, 2017.
  • [50] H.-Y. Tung, H.-W. Tung, E. Yumer, and K. Fragkiadaki.

    Self-supervised learning of motion capture.

    In NIPS, 2017.
  • [51] G. Varol, D. Ceylan, B. Russell, J. Yang, E. Yumer, I. Laptev, and C. Schmid. BodyNet: Volumetric inference of 3D human body shapes. In ECCV, 2018.
  • [52] G. Varol, J. Romero, X. Martin, N. Mahmood, M. J. Black, I. Laptev, and C. Schmid. Learning from synthetic humans. In CVPR, 2017.
  • [53] K. Vougioukas, S. Petridis, and M. Pantic. End-to-end speech-driven facial animation with temporal gans. In BMVC, 2018.
  • [54] P.-S. Wang, Y. Liu, Y.-X. Guo, C.-Y. Sun, and X. Tong. O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. ACM Transactions on Graphics, 36(4), 2017.
  • [55] P.-S. Wang, C.-Y. Sun, Y. Liu, and X. Tong. Adaptive O-CNN: A Patch-based Deep Representation of 3D Shapes. ACM Transactions on Graphics, 37(6), 2018.
  • [56] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [57] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional Pose Machines. In CVPR, 2016.
  • [58] X. Yan, J. Yang, E. Yumer, Y. Guo, and H. Lee. Perspective Transformer Nets: Learning Single-view 3D object Reconstruction without 3D Supervision. In NIPS, 2016.
  • [59] W. Yang, W. Ouyang, X. Wang, J. Rena, H. Li, , and X. Wang. 3d human pose estimation in the wild by adversarial learning. In CVPR, 2018.
  • [60] F. Yu, Y. Zhang, S. Song, A. Seff, and J. Xiao. LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv:1506.03365, 2015.
  • [61] A. Zanfir, E. Marinoiu, M. Zanfir, A.-I. Popa, and C. Sminchisescu. Deep network for the integrated 3d sensing of multiple people in natural images. In Advances in Neural Information Processing Systems, pages 8420–8429, 2018.
  • [62] X. Zhou, Q. Huang, X. Sun, X. Xue, and Y. Wei. Towards 3d human pose estimation in the wild: A weakly-supervised approach. In ICCV, 2017.