Photo Wake-Up: 3D Character Animation from a Single Photo

12/05/2018 ∙ by Chung-Yi Weng, et al.

We present a method and application for animating a human subject from a single photo. E.g., the character can walk out, run, sit, or jump in 3D. The key contributions of this paper are: 1) an application of viewing and animating humans in single photos in 3D, 2) a novel 2D warping method to deform a posable template body model to fit the person's complex silhouette to create an animatable mesh, and 3) a method for handling partial self occlusions. We compare to state-of-the-art related methods and evaluate results with human studies. Further, we present an interactive interface that allows re-posing the person in 3D, and an augmented reality setup where the animated 3D person can emerge from the photo into the real world. We demonstrate the method on photos, posters, and art.







1 Introduction

Whether you come back by page or by the big screen, Hogwarts will always be there to welcome you home.

J.K. Rowling

In this paper, we propose to “wake up a photo” by bringing the foreground character to life, so that it can be animated in 3D and emerge from the photo. Related to our application are cinemagraphs and GIFs, where a small motion is introduced to a photo to visualize dominant dynamic areas. Unlike a cinemagraph, which is a 2D experience created from video, our method takes a single photo as input and results in a fully 3D experience. The output animation can be played as a video, viewed interactively on a monitor, or experienced in augmented or virtual reality, where a user with a headset can enjoy the central figure of a photo coming out into the real world.

A central challenge in delivering a compelling experience is to have the reconstructed subject closely match the silhouette of the clothed person in the photo, including self-occlusion of, e.g., the subject’s arm against the torso. Our approach begins with existing methods for segmenting a person from an image, 2D skeleton estimation, and fitting a (semi-nude) morphable, posable 3D model. The result of this first stage, while animatable, does not conform to the silhouette and does not look natural.

Our key technical contribution, then, is a method for constructing an animatable 3D model that matches the silhouette in a single photo and handles self-occlusion. Rather than deforming the 3D mesh from the first stage – a difficult problem for intricate regions such as fingers and for scenarios like abstract artwork – we map the problem to 2D, perform a silhouette-aligning warp in image space, and then lift the result back into 3D. This 2D warping approach works well for handling complex silhouettes. Further, by introducing label maps that delineate the boundaries between body parts, we extend our method to handle certain self-occlusions.

Our operating range on input and output is as follows. The person should be shown in whole (full body photo) as a fairly frontal view. We support partial occlusion, specifically of arms in front of the body. While we aim for a mesh that is sufficient for convincing animation, we do not guarantee a metrically correct 3D mesh, due to the inherent ambiguity in reconstructing a 3D model from 2D input. Finally, as existing methods for automatic detection, segmentation, and skeleton fitting are not yet fully reliable (esp. for abstract artwork), and hallucinating the appearance of the back of a person is an open research problem, we provide a user interface so that a small amount of input can correct errors and guide texturing when needed or desired.

To the best of our knowledge, our system is the first to enable 3D animation of a clothed subject from a single image. The closest related work either does not recover fully 3D models [15] or is built on video, i.e., multi-view, input [1]. We compare to these prior approaches, and finally show results for a wide variety of examples as 3D animations and AR experiences.

2 Related Work

General animation from video has led to many creative effects over the years. The seminal “Video Textures” [34] work shows how to create a video of infinite length starting from a single video. Human-specific video textures were produced from motion capture videos via motion graphs [11]. [39] explore multi-view captures for human motion animation, and [42] demonstrate that clothing can be deformed in user videos guided by body skeleton and videos of models wearing the same clothing. Cinemagraphs [35, 3] or Cliplets [17] create a still with small motion in some part of the still, by segmenting part of a given video in time and space.

Relevant also are animations created from big data sets of images, e.g., personal photo collections of a person where the animation shows a transformation of a face through years [21], or Internet photos to animate transformation of a location in the world through years [29], e.g., how flowers grow on Lombard street in San Francisco, or the change of glaciers over a decade.

Animating from a single photo, rather than videos or photo collections, has also resulted in fascinating effects. [8] animate segmented regions to create an effect of water ripples or swaying flowers. [40] predict motion cycles of animals from a still photo of a group of animals, e.g., a group of birds where each bird has a different wing pose. [22] show that it’s possible to modify the 3D viewpoint of an object in a still by matching to a database of 3D shapes, e.g., rotating a car in a street photo. [2] showed how to use a video of an actor making facial expressions and moving their head to create a similar motion in a still photo. Specific to body shapes, [41] showed that it’s possible to change the body weight and height of a person in a single image, and [16] extended this to full videos. [15] presented a user-intensive, as-rigid-as-possible 2D animation of a human character in a photo, while ours is 3D.

For 3D body shape estimation from a single photo, [6] fit the SMPL model [25], which captures diverse body shapes, and proved highly useful for 3D pose and shape estimation applications. Further, using deep networks and the SMPL model, [37, 18, 31] present end-to-end frameworks for single-view body pose and shape estimation. [36] directly infer a volumetric body shape. [13] finds dense correspondences between human subjects and UV texture maps. For multi-view input, [26] uses two views (frontal and side) to reconstruct a 3D mesh from sketches. [1] applied SMPL model fitting to video taken while walking around a stationary human subject in a neutral pose to obtain a 3D model, including mesh deformation to approximately fit clothing. Recently, the idea of parametric models has been extended from humans to animals [43, 19].

Most single-image person animation has focused on primarily 2D or pseudo-3D animation (e.g., [15]), while we aim to provide a fully 3D experience. Most methods for 3D body shape estimation focus on semi-nude body reconstruction and are not necessarily ready for animation, while we take clothing into account and seek an animatable solution. The most similar 3D reconstruction work is [1], although they take a video as input. We compare our results to [15] and [1] in Sec. 6.

Figure 2: Overview of our method. Given a photo, person detection, 2D pose estimation, and person segmentation are performed using off-the-shelf algorithms. Then, a SMPL template model is fit to the 2D pose and projected into the image as a normal map and a skinning map. The core of our system: find a mapping between the person’s silhouette and the SMPL silhouette, warp the SMPL normal/skinning maps to the output, and build a depth map by integrating the warped normal map. This process is repeated for a simulated back view of the model, and the depth and skinning maps are combined to create a complete, rigged 3D mesh. The mesh is then textured, and animated using motion capture sequences on an inpainted background.

3 Overview

Given a single photo, we propose to animate the human subject in the photo. The overall system works as follows (Fig. 2): We first apply state-of-the-art algorithms to perform person detection, segmentation, and 2D pose estimation. From the results, we devise a method to construct a rigged mesh (Section 4). Any 3D motion sequence can then be used to animate the rigged mesh.

To be more specific, we use Mask R-CNN [14] for person detection and segmentation (implementation by [30]). 2D body pose is estimated using [38], and person segmentation is refined using Dense CRF [24]. Once the person is segmented out of the photo, we apply PatchMatch [4] to fill in the regions where the person used to be.

4 Mesh Construction and Rigging

The key technical idea of this paper is how to recover an animatable, textured 3D mesh from a single photo to fit the proposed application.

We begin by fitting the SMPL morphable body model [25] to the photo, using the follow-on method [6] that fits the 3D shape to a 2D skeleton. The recovered SMPL model provides an excellent starting point, but it is semi-nude, does not conform to the underlying body shape of the person, and, importantly, does not match the clothed silhouette of the person.

One approach would be to force the SMPL model to fit the silhouette by optimizing vertex locations on the SMPL mesh, taking care to respect silhouette boundaries and to avoid pinching and self-intersection. This is challenging, especially around intricate regions such as fingers. This approach was indeed explored by [1], and we compare to their results in the experiments.

Instead, we take a 2D approach: warp the SMPL silhouette to match the person silhouette in the original image and then apply that warp to projected SMPL normal maps and skinning maps. The resulting normal and skinning maps can be constructed for both front and (imputed) back views and then lifted into 3D, along with the fitted 3D skeleton, to recover a rigged body mesh that exactly agrees with the silhouettes, ready for animation. The center box in Figure 2 illustrates our approach.

In the following, we describe how we construct a rigged mesh using 2D warping (Section 4.1), then present how to handle arm-over-body self-occlusion (Section 4.2).

4.1 Mesh Warping, Rigging, & Skinning

In this section, we describe the process for constructing a rigged mesh for a subject without self-occlusion.

We start with the 2D pose of the person and the person’s silhouette mask S. For simplicity, we refer to S both as a set and as a function, i.e., as the set of all pixels within the silhouette, and as a binary function with S(p) = 1 for a pixel p inside the silhouette and S(p) = 0 outside.

To construct a 3D mesh with skeletal rigging, we first fit a SMPL model to the 2D input pose using the method proposed by [6], which additionally recovers camera parameters. We then project this mesh into the camera view to form a silhouette mask S_SMPL. The projection additionally gives us a depth map Z_SMPL(p), a normal map N_SMPL(p), and a skinning map W_SMPL(p) for pixels p ∈ S_SMPL. The skinning map is derived from the per-vertex skinning weights in the SMPL model and is thus vector-valued at each pixel (one skinning weight per bone).

Guided by S_SMPL and the input photo’s silhouette mask S, we then warp Z_SMPL, N_SMPL, and W_SMPL to construct an output boundary depth map Z_∂ (defined at the silhouette boundary only), normal map N, and skinning map W for pixels p ∈ S. N is then integrated to recover the final depth map Z, subject to matching Z_∂ at the silhouette boundary ∂S. More concretely, we solve for a smooth inverse warp f(p) such that

    S_SMPL(f(p)) = S(p),

and then apply this warp to the depth and skinning maps:

    Z_∂(p) = Z_SMPL(f(p)) for p ∈ ∂S,    W(p) = W_SMPL(f(p)) for p ∈ S.
We experimented with setting Z(p) = Z_SMPL(f(p)) directly, but the resulting meshes were usually too flat in the z direction (see Fig. 3b). The warping procedure typically stretches the geometry in the x–y plane (the SMPL model is usually thinner than the clothed subject, often thinner than even the unclothed subject), without similarly stretching (typically inflating) the depth. We address this problem by instead warping the normals, N(p) = N_SMPL(f(p)), and then integrating them to produce Z. In particular, following [5], we solve a sparse linear system to produce a depth map Z that agrees closely with the warped normals, subject to the boundary constraint that Z(p) = Z_∂(p) for pixels p ∈ ∂S. Fig. 3 shows the difference between the two methods we experimented with.
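The normal integration step can be sketched as a sparse least-squares problem: one equation per neighboring-pixel pair matching the depth gradient implied by the warped normals, plus Dirichlet constraints at the silhouette boundary. This is a minimal sketch under our own assumptions, not the paper's implementation; the function name and signature are ours.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def integrate_normals(normals, mask, boundary_depth):
    """Least-squares depth from a unit-normal map.

    normals: (H, W, 3) with n = (nx, ny, nz), nz > 0 inside `mask`.
    mask: (H, W) bool, True inside the silhouette.
    boundary_depth: (H, W) depth values used as Dirichlet constraints
    at silhouette-boundary pixels (pixels of `mask` touching outside).
    """
    H, W = mask.shape
    idx = -np.ones((H, W), dtype=int)
    idx[mask] = np.arange(mask.sum())
    rows, cols, vals, rhs = [], [], [], []

    def add_eq(coeffs, b):
        r = len(rhs)
        for (c, v) in coeffs:
            rows.append(r); cols.append(c); vals.append(v)
        rhs.append(b)

    for y in range(H):
        for x in range(W):
            if not mask[y, x]:
                continue
            nx_, ny_, nz_ = normals[y, x]
            # gradient targets: dz/dx = -nx/nz, dz/dy = -ny/nz
            if x + 1 < W and mask[y, x + 1]:
                add_eq([(idx[y, x + 1], 1.0), (idx[y, x], -1.0)], -nx_ / nz_)
            if y + 1 < H and mask[y + 1, x]:
                add_eq([(idx[y + 1, x], 1.0), (idx[y, x], -1.0)], -ny_ / nz_)
            # Dirichlet constraint at silhouette boundary pixels
            on_boundary = (x == 0 or y == 0 or x == W - 1 or y == H - 1
                           or not (mask[y - 1, x] and mask[y + 1, x]
                                   and mask[y, x - 1] and mask[y, x + 1]))
            if on_boundary:
                add_eq([(idx[y, x], 1.0)], boundary_depth[y, x])

    A = lil_matrix((len(rhs), mask.sum()))
    for r, c, v in zip(rows, cols, vals):
        A[r, c] += v
    z_flat = lsqr(A.tocsr(), np.array(rhs))[0]
    Z = np.zeros((H, W))
    Z[mask] = z_flat
    return Z
```

On a normal map consistent with a plane, this recovers the plane up to solver tolerance, since the gradient and boundary equations are then exactly satisfiable.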

Figure 3: Comparison of different depth map constructions, after stitching front and back depth maps together (Section 4.1.3). Given (a) a reference SMPL model, we can reconstruct a mesh (b) by warping the SMPL depth maps or (c) by warping the SMPL normal maps and then integrating. Notice the flattening evident in (b), particularly around the head.

To construct the inverse warp f, many smooth warping functions are possible; we choose one based on mean-value coordinates [12] because it is well defined over the entire plane for arbitrary planar polygons without self-intersections, which fits our cases very well. In particular, given the ordered set of points (vertices) b_i on the closed polygonal boundary ∂S of the input silhouette, we can represent any point p inside of S as

    p = Σ_i λ_i(p) b_i,

where the λ_i(p) are the mean-value coordinates of p with respect to the boundary vertices b_i.

Suppose we have a correspondence function M that identifies each point b_i on the input silhouette boundary ∂S with a point on the SMPL silhouette boundary ∂S_SMPL:

    M(b_i) ∈ ∂S_SMPL.

Then, using the same mean-value coordinates λ_i(p), we define the warp function to be:

    f(p) = Σ_i λ_i(p) M(b_i).
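This mean-value warp admits a compact sketch: compute λ_i(p) with Floater's tangent formula, then take the weighted sum of the corresponding target boundary points. The helper below is our own illustration (assuming p lies strictly inside the polygon), not the paper's code.

```python
import numpy as np

def mean_value_coords(p, poly):
    """Mean-value coordinates of point p w.r.t. a closed polygon [12].
    poly: (N, 2) vertices in order; p must lie strictly inside."""
    d = poly - p                       # vectors from p to each vertex
    r = np.linalg.norm(d, axis=1)
    n = len(poly)
    ang = np.zeros(n)                  # angle at p between v_i and v_{i+1}
    for i in range(n):
        j = (i + 1) % n
        cos_a = np.dot(d[i], d[j]) / (r[i] * r[j])
        ang[i] = np.arccos(np.clip(cos_a, -1.0, 1.0))
    w = np.zeros(n)
    for i in range(n):
        im1 = (i - 1) % n
        # Floater's weight: w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / r_i
        w[i] = (np.tan(ang[im1] / 2) + np.tan(ang[i] / 2)) / r[i]
    return w / w.sum()

def mvc_warp(p, src_poly, dst_poly):
    """f(p) = sum_i lambda_i(p) * M(b_i), with M given by the vertex
    correspondence between src_poly and dst_poly."""
    lam = mean_value_coords(p, src_poly)
    return lam @ dst_poly
```

Because mean-value coordinates reproduce linear functions, the warp is exact for affine boundary correspondences (e.g., a pure translation of the polygon translates every interior point).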
Next, we describe how we compute the correspondence function M, fill holes in the normal and skinning maps, and then construct a complete mesh with texture.

4.1.1 Boundary matching

We now seek a mapping M that provides correspondence between points on ∂S and points on ∂S_SMPL. We would like each point b_i ∈ ∂S to be close to its corresponding point M(b_i) ∈ ∂S_SMPL, and, to encourage smoothness, we would like the mapping to be monotonic, without large jumps in the indexing. To this end, writing M(b_i) = s_{m(i)} for boundary points s_j ∈ ∂S_SMPL, we solve for the index assignment m to minimize an objective of the form

    E(m) = Σ_i [ ||b_i − s_{m(i)}||² + λ E_s(m(i), m(i+1)) ].

The first term is designed to encourage closeness of corresponding points, while E_s avoids generating an out-of-order sequence with big jumps. Because we are indexing over closed polygons, the index differences in E_s are taken modulo the number of boundary points. We solve for m with dynamic programming.
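A simplified version of this boundary matching can be written as dynamic programming over monotone index assignments. The sketch below uses a squared-jump smoothness term and, for clarity, ignores the closed-polygon wrap-around that the full method handles; both simplifications are ours.

```python
import numpy as np

def match_boundaries(B, S, lam=1.0):
    """Monotone boundary matching (simplified sketch): assign each input
    boundary point B[i] an index m[i] into the SMPL boundary S, minimizing
        sum_i ||B[i] - S[m[i]]||^2 + lam * (m[i] - m[i-1] - 1)^2
    subject to m being non-decreasing, via dynamic programming."""
    n, k = len(B), len(S)
    d2 = ((B[:, None, :] - S[None, :, :]) ** 2).sum(-1)  # (n, k) data costs
    cost = np.full((n, k), np.inf)
    back = np.zeros((n, k), dtype=int)
    cost[0] = d2[0]
    for i in range(1, n):
        for j in range(k):
            # previous index must not exceed j (monotone, no wrap-around)
            prev = cost[i - 1, :j + 1] + lam * (j - np.arange(j + 1) - 1) ** 2
            b = int(np.argmin(prev))
            cost[i, j] = d2[i, j] + prev[b]
            back[i, j] = b
    m = np.zeros(n, dtype=int)
    m[-1] = int(np.argmin(cost[-1]))
    for i in range(n - 1, 0, -1):
        m[i - 1] = back[i, m[i]]
    return m
```

When the two boundaries coincide, the zero-cost identity assignment is recovered, which is a useful sanity check on the data and smoothness terms.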

4.1.2 Hole-filling

In practice, holes may arise when warping by f, i.e., small regions in which the warped maps are undefined, due to the non-bijective mapping between ∂S and ∂S_SMPL. We smoothly fill these holes in the warped normal and skinning weight maps. Please refer to the supplemental material for more detail and an illustration of the results of this step.

4.1.3 Constructing the complete mesh

The method described so far recovers depth and skinning maps for the front of a person. To recover the back, we virtually render the back view of the fitted SMPL model, mirror the person mask, and then apply the warping method described previously.

We reconstruct front and back meshes in the standard way: back-project depths into 3D and construct two triangles for each 2x2 neighborhood. We assign corresponding skinning weights to each vertex. Stitching the front and back meshes together is straightforward as they correspond at the boundary. Fig. 4 illustrates the front and back meshes and the stitched model.
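This mesh construction admits a direct sketch: back-project each silhouette pixel through a pinhole camera and emit two triangles per fully covered 2×2 pixel block. The focal-length default below is a placeholder of ours, not a value from the paper.

```python
import numpy as np

def depth_to_mesh(Z, mask, f=500.0, cx=None, cy=None):
    """Back-project a depth map into a triangle mesh.
    Assumes a pinhole camera with focal length f (hypothetical default)
    and principal point at the image center. Returns (verts, faces)."""
    H, W = Z.shape
    cx = (W - 1) / 2 if cx is None else cx
    cy = (H - 1) / 2 if cy is None else cy
    idx = -np.ones((H, W), dtype=int)
    verts, vid = [], 0
    for y in range(H):
        for x in range(W):
            if mask[y, x]:
                z = Z[y, x]
                # back-projection: pixel (x, y) at depth z -> 3D point
                verts.append([(x - cx) * z / f, (y - cy) * z / f, z])
                idx[y, x] = vid
                vid += 1
    faces = []
    for y in range(H - 1):
        for x in range(W - 1):
            a, b = idx[y, x], idx[y, x + 1]
            c, d = idx[y + 1, x], idx[y + 1, x + 1]
            # two triangles per 2x2 neighborhood with valid vertices
            if min(a, b, c) >= 0:
                faces.append([a, c, b])
            if min(b, c, d) >= 0:
                faces.append([b, c, d])
    return np.array(verts), np.array(faces, dtype=int)
```

Skinning weights would be attached per vertex by sampling the warped skinning map at the same pixels; the front and back meshes share their boundary vertices, which is what makes stitching straightforward.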

Figure 4: Reconstructed mesh results. We reconstruct the front mesh (a) and the back mesh (c) separately and then combine them into one mesh, viewed from the side in (b).
Figure 5: Starting from the input image (a) and its corresponding silhouette and projected SMPL body part model, we recover an initial body part label map (b). After identifying points at occlusion boundaries, we construct an occlusion mask (lighter areas in (c)) and then refine it to construct the final body label map (d). The body part regions near occlusions have spurious boundaries, shown in red in (e). We remove these spurious boundaries (f) and replace them with transformed versions of the SMPL boundaries (g). We can then reconstruct the body part-by-part (h) and assemble the parts into the final mesh (i).

4.2 Self-occlusion

When the subject self-occludes – one body part over another – reconstructing a single depth map (e.g., for the front) from a binary silhouette will not be sufficient. To handle self-occlusion, we segment the body into parts via a body label map, complete the partially occluded segments, and then reconstruct each part using the method described in Section 4.1. Fig. 5 illustrates our approach.

We focus on self-occlusion in which the arms partially cross other body parts such that each covered part remains a single connected component. Our method does not handle all self-occlusion scenarios, but it significantly extends the operating range and shows a path toward handling more cases.

4.2.1 Body label map

The projected SMPL model provides a reference body label map L_SMPL that does not conform closely to the image. We use this label map to construct a final label map L in two stages: (1) estimate an initial label map L_0 whose per-pixel labels are as similar as possible to L_SMPL, then (2) refine L_0 at occlusion boundaries, where the label discontinuities should coincide with edges in the input image.

Initial Body Labeling. We solve for the initial (rough) body label map L_0 by minimizing a Markov Random Field (MRF) objective:

    E(L_0) = Σ_p [ E_u(L_0(p)) + λ Σ_{q ∈ N(p)} E_s(L_0(p), L_0(q)) ],

where N(p) is the 8-neighborhood of pixel p. The unary term E_u scores a label according to the distance to the nearest point in L_SMPL with the same label, thus encouraging L_0 to be similar in shape to L_SMPL, while the pairwise term E_s encourages spatially coherent labels.

We use α-expansion [7] to approximately solve for L_0. Fig. 5(b) illustrates the initial label map produced by this step.
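This initial-labeling energy can be illustrated with distance-transform unaries and a Potts smoothness term. Since α-expansion [7] requires a graph-cut solver, the sketch below substitutes a few sweeps of iterated conditional modes (ICM) as a lightweight stand-in; that substitution, and all names here, are ours.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def initial_label_map(L_smpl, mask, labels, lam=2.0, n_iter=5):
    """Rough body label map: unary cost = distance to the nearest pixel
    of L_smpl carrying that label; Potts smoothness on the 4-neighborhood.
    Solved approximately with ICM (the paper uses alpha-expansion)."""
    H, W = L_smpl.shape
    # unary[k] = Euclidean distance to nearest pixel where L_smpl == label
    unary = np.stack([distance_transform_edt(L_smpl != k) for k in labels])
    L = unary.argmin(0)  # initialize with the nearest-label assignment
    for _ in range(n_iter):
        for y in range(H):
            for x in range(W):
                if not mask[y, x]:
                    continue
                best, best_c = L[y, x], np.inf
                for ki in range(len(labels)):
                    c = unary[ki, y, x]
                    # Potts penalty for disagreeing with 4-neighbors
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < H and 0 <= xx < W and mask[yy, xx]:
                            c += lam * (L[yy, xx] != ki)
                    if c < best_c:
                        best, best_c = ki, c
                L[y, x] = best
    return L
```

ICM only finds a local minimum, which is why the paper's α-expansion solver is the better choice in practice; the energy being minimized is the same.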

Refined Body Labeling. Next, we refine the body label map to more cleanly separate occlusion boundaries.

Occlusion boundaries occur when two pixels with different part labels are neighbors in the image but are not neighbors on the 3D body surface. To identify these pixels, we first compute warp functions that map each body part of L_0 to the corresponding body part of L_SMPL, using the mean-value coordinate approach described in Section 4.1, performed part-by-part. Then, along the boundaries of the arm parts of L_0, for each pair of neighboring pixels with different labels, we determine the corresponding projected SMPL locations, back-project them onto the SMPL mesh, and check whether they are near each other on the surface. If not, these pixels are identified as occlusion pixels. Finally, we dilate around these occlusion pixels to generate an occlusion mask U. The result is shown in Fig. 5(c).

We now refine the labels within U to better follow color discontinuities in the image I, giving us the final body label map L. For this, we define another MRF:

    E(L) = Σ_p [ E_c(L(p)) + λ Σ_{q ∈ N(p)} E_s(L(p), L(q)) ],

where the color term is E_c(L(p)) = −log P(I(p) | L(p)), the probability of pixel p with color I(p) being labeled as L(p), modeled using a Gaussian Mixture Model per label. Following [33], we set the pairwise term to be contrast-sensitive:

    E_s(L(p), L(q)) = [L(p) ≠ L(q)] · exp(−β ||I(p) − I(q)||²),

where β = (2 ⟨||I(p) − I(q)||²⟩)⁻¹ and ⟨·⟩ averages over all pairs of neighboring pixels.

The problem is solved by iteratively applying α-expansion [7], where in each iteration we re-estimate the GMMs using the latest approximate labeling, initialized with L_0. Fig. 5(d) illustrates the final body label map.
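The contrast-sensitive pairwise weights follow the standard GrabCut construction [33]. A small sketch (our own helper, not the paper's code) of computing β and the per-edge weights for an RGB image:

```python
import numpy as np

def contrast_weights(I):
    """GrabCut-style contrast term: for neighboring pixels p, q,
        w(p, q) = exp(-beta * ||I(p) - I(q)||^2),
    with beta = 1 / (2 * mean ||I(p) - I(q)||^2), the mean taken over
    all neighboring pairs. Returns (horizontal, vertical) weight maps."""
    I = I.astype(float)
    dh = ((I[:, 1:] - I[:, :-1]) ** 2).sum(-1)   # horizontal pair diffs
    dv = ((I[1:, :] - I[:-1, :]) ** 2).sum(-1)   # vertical pair diffs
    mean_d = (dh.sum() + dv.sum()) / (dh.size + dv.size)
    beta = 1.0 / (2.0 * mean_d + 1e-12)          # guard against flat images
    return np.exp(-beta * dh), np.exp(-beta * dv)
```

Edges crossing a strong color discontinuity get small weights, so the MRF pays little for placing a label boundary there, which is exactly what pushes the refined labels onto image edges.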

4.2.2 Occluded region construction

We now have the shapes of the unoccluded segments; the next challenge is to recover the shapes of the partially occluded parts.

We first combine the labels of the head, torso, and legs into one region. Then we extract the region's boundary and identify its occlusion boundary segments (shown in red in Fig. 5(e)). Next, for each contiguous set of occlusion boundary points (e.g., one of the three separate red curves in Fig. 5(e)), we find the corresponding SMPL boundary segment using the boundary matching algorithm from Section 4.1.1, where the SMPL region is formed by projecting the SMPL head, torso, and legs to the image plane. We then replace each occlusion boundary segment with its corresponding SMPL segment, mapped by a similarity transform defined by the end points of the two segments, as shown in Fig. 5(f) and (g).
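The similarity transform between corresponding boundary segments is fully determined by the two endpoint pairs. A compact sketch (our illustration) using complex arithmetic, where a 2D similarity is T(z) = s·z + t with complex s and t:

```python
import numpy as np

def similarity_from_endpoints(a0, a1, b0, b1):
    """2D similarity transform (rotation + uniform scale + translation)
    mapping segment endpoints a0 -> b0 and a1 -> b1."""
    za0, za1 = complex(*a0), complex(*a1)
    zb0, zb1 = complex(*b0), complex(*b1)
    s = (zb1 - zb0) / (za1 - za0)   # complex scale encodes rotation+scale
    t = zb0 - s * za0               # translation fixes the first endpoint
    def T(pts):
        z = pts[:, 0] + 1j * pts[:, 1]
        w = s * z + t
        return np.c_[w.real, w.imag]
    return T
```

Applying T to the SMPL boundary segment transplants its shape between the endpoints of the occluded segment without shearing it.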

4.2.3 Mesh construction

Once we have completed the body labeling and recovered the occluded shapes, we project the SMPL model part-by-part to get per-part SMPL depth, normal, and skinning weight maps, then follow the approach in Section 4.1 to build part meshes (Fig. 5(h)), and assemble them together to get our final body mesh (Fig. 5(i)). Finally, we apply Laplacian smoothing to reduce jagged artifacts along the mesh boundaries due to binary silhouette segmentation.
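The Laplacian smoothing step can be sketched as repeated averaging over one-ring neighbors. The paper does not specify the weighting scheme; uniform weights and the damping factor below are our assumptions.

```python
import numpy as np

def laplacian_smooth(verts, faces, iters=10, lam=0.5):
    """Uniform Laplacian smoothing: move each vertex a fraction `lam`
    toward the average of its one-ring neighbors."""
    V = verts.astype(float).copy()
    n = len(V)
    # build vertex adjacency from triangle edges
    nbrs = [set() for _ in range(n)]
    for (a, b, c) in faces:
        nbrs[a].update((b, c))
        nbrs[b].update((a, c))
        nbrs[c].update((a, b))
    for _ in range(iters):
        Vn = V.copy()
        for i in range(n):
            if nbrs[i]:
                avg = V[list(nbrs[i])].mean(0)
                Vn[i] = V[i] + lam * (avg - V[i])
        V = Vn
    return V
```

In practice the smoothing would be restricted to vertices near the silhouette boundary (where the jagged artifacts live) so that interior geometry is not washed out.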

Figure 6: Examples of computed body label maps and meshes (input photos shown in the top right corners).

5 Final Steps

Head pose correction: Accuracy in head pose is important for good animation, while the SMPL head pose is often incorrect. Thus, as in [23, 20], we detect facial fiducials in the image and solve for the 3D head pose that best aligns the corresponding, projected 3D fiducials with the detected ones. After reconstructing the depth map for the head as before, we apply a smooth warp that exactly aligns the projected 3D fiducials to the image fiducials. Whenever the face or fiducials are not detected, this step is skipped.

Texturing: For the front of the subject, we project the image onto the geometry. Occluded, frontal body part regions are filled using PatchMatch [4]. Hallucinating the back texture is an open research problem [27, 10, 28]. We provide two options: (1) paste a mirrored copy of the front texture onto the back, or (2) inpaint with optional user guidance. For the second option, inpainting of the back is guided by the body label maps, drawing texture from regions with the same body labels. The user can easily alter these label maps to, e.g., encourage filling in the back of the head with hair texture rather than face texture. Finally, the front and back textures are stitched with Poisson blending [32].

Please refer to the supplemental material for more details and illustrations of head pose correction and texturing.

6 Results and Discussion

Below we describe our user interface, results, comparisons to related methods, and a human study. We have tested our method on 70 photos downloaded from the Internet (spanning art, posters, and graffiti that satisfied our photo specifications: full body, mostly frontal). Figs. 7 and 8 show typical animation and augmented reality results. With our UI, the user can change the viewpoint during animation and edit the human pose. With an AR headset, the user can place the artwork on the wall and walk around the animation while it is playing. Please refer to the supplementary video for dynamic versions of the results.

Figure 7: Six animation results. The input is always on the left.
Figure 8: Augmented reality results of our method for different environments (input photos inset). The floor (left) and couch (right) are real, while the people are augmented.

User interface: We have created a user interface in which the user can interactively: (1) Modify the animation: the default animation is “running”; the user can keep some body parts fixed, change the sequence (e.g., choose any sequence from [9]), or modify the pose and have the model perform an action starting from the modified pose. (2) Improve the automatic detection box, skeleton, segmentation, and body label map if they wish. (3) Choose to use mirrored textures for the back or make adjustments via editing of the body label map. The user interaction for (2) and (3) takes seconds, when needed.

Fig. 9 shows an example of the pose editing process. In our UI the mesh becomes transparent to reveal the body skeleton. By selecting and dragging the joints, the user can change the orientation of the corresponding bones. A new image with the edited pose can then be easily generated.

Figure 9: Our user interface for pose editing: (a) Input photo. (b) Editing pose by dragging joints. (c) Result.

The underlying reconstructed geometry for several examples is shown in Fig. 6. The resulting meshes do not necessarily represent the exact 3D geometry of the underlying subject, but they are sufficient for animation in this application and outperform the state of the art, as shown below.

Human study: We evaluated our animation quality via Amazon Mechanical Turk. We tested all 70 examples we produced, rendered as videos. Each result was evaluated by 5 participants, on a scale of 1-3 (1 is bad, 2 is ok, 3 is ‘nice!’). 350 people responded, and the average score was 2.76 with distribution: 1: 0.6%, 2: 22.0%, 3: 77.4%.

Figure 10: Comparison result with [15]: (a) input photo; (b) animation method proposed in [15]; (c) 3D demonstration using our method, which is not possible in [15].

Comparison with [15]: We have run our method on the only example in [15] that demonstrated substantial out-of-plane motion rather than primarily in-plane 2D motion (see Fig. 10). Our result appears much less distorted in still frames (due to actual 3D modeling) and enables 3D experiences (e.g., AR) not possible in [15]. We verified our qualitative observation with a user study on MTurk, asking users to decide which animation is “more realistic.” 103 participants responded, and 86% preferred ours.

Figure 11: Comparison with [6, 1]. (a) Input photo. (b) A fitted SMPL model [6]. (c) A deformed mesh using [1]. Notice the mesh does not fit the silhouette (green arrow) and fails to deform fingers (blue box). (d) Our mesh.

Comparison with Bogo et al. [6]: As shown in Fig. 11(b), the fitted, semi-nude SMPL model [6] does not correctly handle subject silhouettes.

Comparison with Alldieck et al. [1]: In [1], a SMPL mesh is optimized to approximately match silhouettes of a static subject in a multi-view video sequence. Their posted code uses 120 input frames, with objective weights tuned accordingly; we thus provide their method with 120 copies of the input image, in addition to the same 2D person pose and segmentation we use. The results are shown in Fig. 11(c). Their method does not fit the silhouette well; e.g., smooth SMPL parts fail to take on complex shapes (a bald head mapped to big, jagged hair), and the detailed fingers are not warped well to closed fists or abstract-art hands. Further, it does not handle self-occlusion well, since the single-image silhouette does not encode self-occlusion boundaries.

Figure 12: Examples of limitations (inputs in blue boxes). (a) Shadows not modeled. (b) Unrealistic mesh. (c) 3D pose error leads to incorrect geometry.

Limitations: We note the following limitations (see also Fig. 12): (1) Shadows and reflections are currently not modeled by our method and thus won’t move with the animation. (2) The SMPL model may produce incorrect 3D pose due to ambiguities. (3) Since the reconstructed mesh must fit the silhouette, the shape may look unrealistic, e.g., wrong shape of shoes; on the other hand, this enables handling abstract art. (4) Our method accounts for self-occlusions when arms partially occlude the head, torso, or legs. It remains future work to handle other occlusions, e.g., legs crossed when sitting. (5) We have opted for simple texture inpainting for occluded body parts, with some user interaction if needed. Using deep learning to synthesize, e.g., the appearance of the back of a person given the front, is a promising research area, but current methods that we have tested [27, 10, 28] give very blurry results.

Summary: We have presented a method to create a 3D animation of a person in a single image. Our method works on a large variety of whole-body, fairly frontal photos, ranging from sports photos to art and posters. In addition, the user is given the ability to edit the human in the image, view the reconstruction in 3D, and explore it in AR.

We believe the method not only enables new ways for people to enjoy and interact with photos, but also suggests a pathway to reconstructing a virtual avatar from a single image while providing insight into the state of the art of human modeling from a single photo.

Acknowledgements The authors thank Konstantinos Rematas for helpful discussions, Bogo et al. and Alldieck et al. for sharing their research code, and labmates from UW GRAIL lab for the greatest support. This work was supported by NSF/Intel Visual and Experimental Computing Award #1538618, the UW Reality Lab, Facebook, Google, Huawei, and a Reality Lab Huawei Fellowship.


  • [1] T. Alldieck, M. Magnor, W. Xu, C. Theobalt, and G. Pons-Moll. Video based reconstruction of 3d people models. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [2] H. Averbuch-Elor, D. Cohen-Or, J. Kopf, and M. F. Cohen. Bringing portraits to life. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2017), 36(4), 2017.
  • [3] J. Bai, A. Agarwala, M. Agrawala, and R. Ramamoorthi. Automatic cinemagraph portraits. In Computer Graphics Forum, volume 32, pages 17–25. Wiley Online Library, 2013.
  • [4] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman. Patchmatch: A randomized correspondence algorithm for structural image editing. ACM Trans. Graph., 28(3):24–1, 2009.
  • [5] R. Basri, D. Jacobs, and I. Kemelmacher. Photometric stereo with general, unknown lighting. International Journal of Computer Vision, 72(3):239–257, 2007.
  • [6] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In Computer Vision – ECCV 2016, Lecture Notes in Computer Science. Springer International Publishing, Oct. 2016.
  • [7] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Transactions on pattern analysis and machine intelligence, 23(11):1222–1239, 2001.
  • [8] Y.-Y. Chuang, D. B. Goldman, K. C. Zheng, B. Curless, D. H. Salesin, and R. Szeliski. Animating pictures with stochastic motion textures. In ACM Transactions on Graphics (TOG), volume 24, pages 853–860. ACM, 2005.
  • [9] CMU. CMU Graphics Lab Motion Capture Database, 2007.
  • [10] P. Esser, E. Sutter, and B. Ommer. A variational u-net for conditional appearance and shape generation. arXiv preprint arXiv:1804.04694, 2018.
  • [11] M. Flagg, A. Nakazawa, Q. Zhang, S. B. Kang, Y. K. Ryu, I. Essa, and J. M. Rehg. Human video textures. In Proceedings of the 2009 symposium on Interactive 3D graphics and games, pages 199–206. ACM, 2009.
  • [12] M. S. Floater. Mean value coordinates. Computer aided geometric design, 20(1):19–27, 2003.
  • [13] R. A. Güler, N. Neverova, and I. Kokkinos. Densepose: Dense human pose estimation in the wild. arXiv preprint arXiv:1802.00434, 2018.
  • [14] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. arXiv preprint arXiv:1703.06870, 2017.
  • [15] A. Hornung, E. Dekkers, and L. Kobbelt. Character animation from 2d pictures and 3d motion data. ACM Transactions on Graphics (TOG), 26(1):1, 2007.
  • [16] A. Jain, T. Thormählen, H.-P. Seidel, and C. Theobalt. Moviereshape: Tracking and reshaping of humans in videos. In ACM Transactions on Graphics (TOG), volume 29, page 148. ACM, 2010.
  • [17] N. Joshi, S. Mehta, S. Drucker, E. Stollnitz, H. Hoppe, M. Uyttendaele, and M. Cohen. Cliplets: juxtaposing still and dynamic imagery. In Proceedings of the 25th annual ACM symposium on User interface software and technology, pages 251–260. ACM, 2012.
  • [18] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik. End-to-end recovery of human shape and pose. arXiv preprint arXiv:1712.06584, 2017.
  • [19] A. Kanazawa, S. Tulsiani, A. A. Efros, and J. Malik. Learning category-specific mesh reconstruction from image collections. In ECCV, 2018.
  • [20] I. Kemelmacher-Shlizerman and S. M. Seitz. Face reconstruction in the wild. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 1746–1753. IEEE, 2011.
  • [21] I. Kemelmacher-Shlizerman, E. Shechtman, R. Garg, and S. M. Seitz. Exploring photobios. In ACM Transactions on Graphics (TOG), volume 30, page 61. ACM, 2011.
  • [22] N. Kholgade, T. Simon, A. Efros, and Y. Sheikh. 3d object manipulation in a single photograph using stock 3d models. ACM Transactions on Graphics (TOG), 33(4):127, 2014.
  • [23] D. E. King. Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 10(Jul):1755–1758, 2009.
  • [24] P. Krähenbühl and V. Koltun. Efficient inference in fully connected crfs with gaussian edge potentials. In Advances in neural information processing systems, pages 109–117, 2011.
  • [25] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black. SMPL: A skinned multi-person linear model. ACM Trans. Graphics (Proc. SIGGRAPH Asia), 34(6):248:1–248:16, Oct. 2015.
  • [26] Z. Lun, M. Gadelha, E. Kalogerakis, S. Maji, and R. Wang. 3d shape reconstruction from sketches via multi-view convolutional networks. In 3D Vision (3DV), 2017 International Conference on, pages 67–77. IEEE, 2017.
  • [27] L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool. Pose guided person image generation. In Advances in Neural Information Processing Systems, pages 405–415, 2017.
  • [28] L. Ma, Q. Sun, S. Georgoulis, L. Van Gool, B. Schiele, and M. Fritz. Disentangled person image generation. arXiv preprint arXiv:1712.02621, 2017.
  • [29] R. Martin-Brualla, D. Gallup, and S. M. Seitz. Time-lapse mining from internet photos. ACM Transactions on Graphics (TOG), 34(4):62, 2015.
  • [30] Matterport. Mask R-CNN Implementation by Matterport, Inc, 2017.
  • [31] G. Pavlakos, L. Zhu, X. Zhou, and K. Daniilidis. Learning to estimate 3d human pose and shape from a single color image. arXiv preprint arXiv:1805.04092, 2018.
  • [32] P. Pérez, M. Gangnet, and A. Blake. Poisson image editing. In ACM Transactions on graphics (TOG), volume 22, pages 313–318. ACM, 2003.
  • [33] C. Rother, V. Kolmogorov, and A. Blake. Grabcut: Interactive foreground extraction using iterated graph cuts. In ACM transactions on graphics (TOG), volume 23, pages 309–314. ACM, 2004.
  • [34] A. Schödl, R. Szeliski, D. H. Salesin, and I. Essa. Video textures. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 489–498. ACM Press/Addison-Wesley Publishing Co., 2000.
  • [35] J. Tompkin, F. Pece, K. Subr, and J. Kautz. Towards moment imagery: Automatic cinemagraphs. In Visual Media Production (CVMP), 2011 Conference for, pages 87–93. IEEE, 2011.
  • [36] G. Varol, D. Ceylan, B. Russell, J. Yang, E. Yumer, I. Laptev, and C. Schmid. BodyNet: Volumetric inference of 3D human body shapes. In ECCV, 2018.
  • [37] G. Varol, J. Romero, X. Martin, N. Mahmood, M. Black, I. Laptev, and C. Schmid. Learning from synthetic humans. arXiv preprint arXiv:1701.01370, 2017.
  • [38] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4724–4732, 2016.
  • [39] F. Xu, Y. Liu, C. Stoll, J. Tompkin, G. Bharaj, Q. Dai, H.-P. Seidel, J. Kautz, and C. Theobalt. Video-based characters: creating new human performances from a multi-view video database. ACM Transactions on Graphics (TOG), 30(4):32, 2011.
  • [40] X. Xu, L. Wan, X. Liu, T.-T. Wong, L. Wang, and C.-S. Leung. Animating animal motion from still. In ACM Transactions on Graphics (TOG), volume 27, page 117. ACM, 2008.
  • [41] S. Zhou, H. Fu, L. Liu, D. Cohen-Or, and X. Han. Parametric reshaping of human bodies in images. In ACM Transactions on Graphics (TOG), volume 29, page 126. ACM, 2010.
  • [42] Z. Zhou, B. Shu, S. Zhuo, X. Deng, P. Tan, and S. Lin. Image-based clothes animation for virtual fitting. In SIGGRAPH Asia 2012 Technical Briefs, page 33. ACM, 2012.
  • [43] S. Zuffi, A. Kanazawa, and M. J. Black. Lions and tigers and bears: Capturing non-rigid, 3D, articulated shape from images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society, 2018.