
Wish You Were Here: Context-Aware Human Generation

by Oran Gafni, et al.

We present a novel method for inserting objects, specifically humans, into existing images, such that they blend in a photorealistic manner, while respecting the semantic context of the scene. Our method involves three subnetworks: the first generates the semantic map of the new person, given the pose of the other persons in the scene and an optional bounding box specification. The second network renders the pixels of the novel person and its blending mask, based on specifications in the form of multiple appearance components. A third network refines the generated face in order to match that of the target person. Our experiments present convincing high-resolution outputs in this novel and challenging application domain. In addition, the three networks are evaluated individually, demonstrating, for example, state-of-the-art results in pose transfer benchmarks.





1 Introduction

The field of image generation has been rapidly progressing in recent years due to the advent of GANs, as well as the introduction of sophisticated architectures and training methods. However, generation is either performed with "artistic freedom" to produce attractive images, or under concrete constraints such as an approximate drawing or desired keypoints.

In other contributions, there is a set of semantic specifications, such as in image generation based on scene graphs, or on free text, yet these have not been demonstrated to generate high-fidelity human images. What seems to be missing is the middle ground between the two: a method that allows some freedom, while requiring adherence to high-level constraints that arise from the image context.

In our work, the generated image has to comply with the soft requirement of having a coherent composition. Specifically, we generate a human figure that fits into the existing scene. Unlike previous work in the domain of human placement, we do not require a driving pose or a semantic map to render a novel person; rather, we generate a semantic map independently, such that it is suitable to the image context. In addition, we provide rich control over the rendering aspects, enabling additional applications, such as individual component replacement and sketching a photorealistic person. Moreover, we provide significantly higher-resolution results than the leading pose transfer benchmarks, over images with substantial pose variation.

The application domain we focus on is the insertion of a target person into an image that contains other people. This is a challenging application domain, since it is easy to spot discrepancies between the novel person in the generated image, and the existing ones. In contrast, methods that generate images from scratch enjoy the ability to generate “convenient images”.

In addition, images that are entirely synthetic are judged less harshly, since the entire image has the same quality. In our case, the generated pixels are inserted into an existing image and can therefore stand out as being subpar with respect to the quality of the original image parts. Unlike other applications, such as face swapping, our work is far less limited in the class of objects.

Similar to face swapping and other guided image manipulation techniques, the appearance of the output image is controlled by that of an example. However, the appearance in our case is controlled by multiple components: the face, several clothing items, and hair.

Our method employs three networks. The first generates the pose of the novel person in the existing image, based on contextual cues that pertain to the other persons in the image. The second network renders the pixels of the new person, as well as a blending mask. Lastly, the third augments the face of the target person in the generated image in order to ensure artifact-free faces.

In an extensive set of experiments, we demonstrate that the first of our networks can create poses that are indistinguishable from real poses, despite the need to take into account the social interactions in the scene. The first and second networks provide a state of the art solution for the pose transfer task, and the three networks combined are able to provide convincing “wish you were here” results, in which a target person is added to an existing photograph.

The method is trained in an unsupervised manner, in the sense that unlike previous work, such as networks trained on the DeepFashion dataset, it trains on single images, which do not present the same person in different poses. However, the method does employ a set of pretrained networks, which were trained in a fully supervised way, to perform various face and pose related tasks: a human body part parser, a face keypoint detector, and a face-recognition network.

Our main contributions are: (i) as far as we can ascertain, the first method to generate a human figure in the context of the other persons in the image, (ii) a person-generating module that renders a high-resolution image and mask, given two types of conditioning, the first being the desired multi-labeled shape in the target image, and the second being various appearance components, (iii) the ability to train on a set of unlabeled images "in the wild", without any access to paired source and target images, by utilizing existing modules trained for specific tasks, (iv) unlike recent pose transfer work, which addresses a simpler task, we work with high-resolution images, (v) our results are demonstrated in a domain in which the pose, scale, viewpoint, and severe occlusion vary much more than in the pose transfer work from the literature, and (vi) demonstrating photorealistic results in a challenging and so far unexplored application domain.

Our research can be used to enable natural remote events and social interaction across locations. AR applications can also benefit from the addition of actors in context. Lastly, the exact modeling of relationships in the scene can help recognize manipulated media.

2 Related work

There is considerably more work on the synthesis of novel images than on augmenting existing views. A prominent line of work generates images of human subjects in different poses [2, 13], which can be conditioned on a specific pose [6, 28, 15]. The second network we employ (out of the three mentioned above) is able to perform this task, and we empirically compare with such methods. Much of the literature presents results on the DeepFashion dataset [17], in which a white background is used. In the application we consider, it is important to be able to smoothly integrate with a complex scene. However, for research purposes only and for comparing with the results of previous work [18, 23, 10, 32, 8], we employ this dataset.

Contributions that include both a human figure and a background scene include vid2vid [25] and the "everybody dance now" work [6]. These methods learn to map between a driver video and an output video, based on pose or on facial motion. Unlike the analogous pose-to-image generation part of our work, in [25, 6] the reference pose is extracted from a real frame, and the methods are not challenged with generated poses. Our method deals with generated poses, which carry an additional burden of artifacts. In addition, the motion-transfer work generates the entire image, including both the character and the background, resulting in artifacts near the edges of the generated pose [20, 7], and in the loss of details from the background. In our work, the generated figure is integrated with the background using a generated alpha-mask.

Novel generation of a target person based on a guiding pose was demonstrated by Esser et al., who presented two methods for mixing the appearance of a figure seen in an image with an arbitrary pose [10, 9]. Their methods result in a low-resolution output with noticeable artifacts, while we work at a higher resolution of 512p. The work of Balakrishnan et al. also provides lower-resolution outputs, which are set in a specific background [2]. In our experiments, we compare against the recent pose transfer work [18, 23, 32].

A semantic-map-based method for human generation was presented by [8]. Contrary to our method, this work was demonstrated solely on the lower-resolution, lower-pose-variation datasets DeepFashion and Market-1501. Additionally, the target encoding method in [8] relies on an additional semantic map, identical to the desired target person, requiring the target person to be of the same shape, which precludes other applications, such as component replacement. Moreover, that method requires the pose keypoints, which increases the complexity of the algorithm and limits the application scope, such as the person-drawing application that we show.

As far as we know, no literature method generates a human pose in the context of other humans in the scene.

3 Method

Given a source image I, the full method's objective is to embed an additional person into the image, such that the new person is both realistic and coherent in context. The system optionally receives a coarse position for the new person, in the form of a bounding box. This allows crude control over the new person's position and size, yet still leaves most of the positioning to the algorithm.

We employ three phases of generation, in which the inserted person becomes increasingly detailed. The Essence Generation Network (EGN) generates the semantic pose information of the target person in the new image, capturing the scene essence in terms of human interaction. The Multi-Conditioning Rendering Network (MCRN) renders a realistic person, given the generated semantic pose map and a segmented target person, the latter supplied as a multi-channel tensor. The Face Refinement Network (FRN) refines the high-level features of the generated face, which requires special attention due to the emphasis given to faces in human perception.

Figure 2: The architecture of the Essence Generation Network. Given the body and face semantic maps, and an optional bounding box, the network generates the semantic map of a novel person, correlated in context with the human interaction in the scene. The generated person is highlighted in blue.

3.1 Essence generation network

The Essence Generation Network (EGN) is trained to capture the human interaction in the image, and to generate a coherent way for a new human to join it. Given a two-channel semantic map of the input image with a varying number of persons, and an optional binary third channel containing a bounding box, the network generates the two-channel semantic map of a new person, compatible with the context of the existing persons, as seen in Fig. 2.

More precisely, both the input and output maps contain one channel for the persons' semantic labels, and one face channel, derived from facial keypoints. The input map pertains to the one or more persons in the input image, while the output map refers to the novel person. The semantic label channel is reduced to eight label groups, encoded as eight discrete values. These represent the background (0), hair, face, torso and upper limbs, upper-body wear, lower-body wear, lower limbs, and finally shoes. This reduced number of groups simplifies semantic generation, while still supporting detailed image generation.

The face channel is extracted by taking the convex hull over the detected facial keypoints, obtained by the method of [5]. The third channel is optional, and contains a bounding box indicating the approximate size and position of the new person. During training, the bounding box is taken as the minimal and maximal positions of the person's labels along the x and y axes. Both the face and bounding-box channels are binary, with values of either 0 or 1.
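The training-time bounding-box channel described above can be sketched as follows; this is a minimal illustration of taking the minimal and maximal label positions along both axes, not the paper's released code:

```python
import numpy as np

def bbox_channel(label_map: np.ndarray) -> np.ndarray:
    """Binary bounding-box channel for one person's semantic label map.

    The box spans the minimal/maximal positions of the non-background
    labels (label 0 is assumed to be background) along the y and x axes.
    """
    ys, xs = np.nonzero(label_map)
    box = np.zeros(label_map.shape, dtype=np.float32)
    if len(ys) == 0:
        return box  # no person labels present
    box[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = 1.0
    return box
```

At inference time, the same binary-channel format lets a user supply an approximate box instead of one derived from labels.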

We train two EGN models, EGN and EGN′, in parallel: EGN maps the scene's semantic maps together with the bounding-box channel to the semantic map of the novel person, while EGN′ performs the same mapping without the bounding-box channel, i.e., EGN obtains one additional input channel in comparison to EGN′. For brevity, we address EGN below. The input tensors are resized to a reduced spatial resolution; the subsequent networks employ higher resolutions, generating high-resolution images. The EGN encoder-decoder architecture is based on that of pix2pixHD [26], with two major modifications. First, the VGG feature-matching loss is disabled, as the generated person is inherently ambiguous: given a source image, there is a large number of conceivable options for a new person to be generated in the context of the other persons in the scene. These relations are captured by the discriminator loss as well as the discriminator feature-matching loss, as both losses receive the input maps together with the generated map. The second modification is the addition of a derivative regularization loss, applied over the semantic label channel of the output, which minimizes high-frequency patterns in the generated semantic map.
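The derivative regularization can be sketched as an L1 penalty on the spatial derivatives of the generated semantic channel; the paper does not spell out the exact form, so the formulation and weighting below are assumptions:

```python
import numpy as np

def derivative_regularization(sem: np.ndarray) -> float:
    """L1 penalty on the spatial derivatives of the generated semantic
    channel, discouraging high-frequency patterns in the semantic map.

    `sem` is a 2D array (the label channel of the generated map).
    The averaging and any loss weight are implementation assumptions.
    """
    dy = np.abs(sem[1:, :] - sem[:-1, :])  # vertical differences
    dx = np.abs(sem[:, 1:] - sem[:, :-1])  # horizontal differences
    return float(dy.mean() + dx.mean())
```

A constant map incurs zero penalty, while isolated high-frequency speckles are penalized, which matches the ablation observation that removing this loss yields high-frequency patterns and isolated objects.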

3.2 Multi-conditioning rendering network

The MCRN is trained to render and blend a realistic person into the input image, creating a high-resolution output image. It is given a conditioning signal in the form of a semantic pose map, and an input specifying the parts of a segmented person, see Fig. 3(a). The conditioning signal, which is generated by the EGN at inference time, is introduced to the decoder part of MCRN through SPADE blocks [19]. This conditioning signal acts as the structural foundation for the rendered person image p, and the corresponding mask m.

The segmented person is incorporated through the MCRN encoder, which embeds the target person's appearance attributes into a latent space. This input allows substantial control over the rendered person (e.g., replacing the person's hair or clothing, as seen in Fig. 7 and supplementary Figs. 1, 2, and 3). The segmented structure has an advantage over simply passing the image of the target person: it does not allow a simple duplication of the target person in the output image. This property is important, as during training we employ the same person both in the target output image and as the input to MCRN.

The segmented-person tensor consists of eighteen channels, corresponding to the six semantic segmentation classes (hair, face, upper-body wear, lower-body wear, skin, and shoes), with three RGB channels each, at a fixed spatial extent. Each of the six parts is obtained by cropping the body part using a minimal bounding box, and resizing the crop to these spatial dimensions.
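A minimal sketch of this construction follows. The label ids, the 64-pixel spatial extent, and the nearest-neighbor resize are all assumptions for illustration; the paper's exact values and parser labels are not given here:

```python
import numpy as np

# Hypothetical label ids for the six encoded classes
PART_LABELS = {"hair": 1, "face": 2, "skin": 3,
               "upper_wear": 4, "lower_wear": 5, "shoes": 7}

def nn_resize(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor resize of an HxWxC array (stand-in for a proper
    image-resize routine)."""
    h, w = img.shape[:2]
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return img[ys][:, xs]

def segmented_person_tensor(rgb: np.ndarray, labels: np.ndarray,
                            size: int = 64) -> np.ndarray:
    """Build the 18-channel MCRN input: six parts x three RGB channels.

    Each part is masked, cropped with its minimal bounding box, and
    resized to a fixed spatial extent (`size` is an assumed value).
    Missing parts become all-zero channels.
    """
    parts = []
    for lid in PART_LABELS.values():
        mask = labels == lid
        ys, xs = np.nonzero(mask)
        if len(ys) == 0:
            parts.append(np.zeros((size, size, 3), dtype=rgb.dtype))
            continue
        crop = (rgb * mask[..., None])[ys.min():ys.max() + 1,
                                       xs.min():xs.max() + 1]
        parts.append(nn_resize(crop, size, size))
    return np.concatenate(parts, axis=-1)  # H x W x 18
```

Because every part is normalized to its own crop, the encoder never sees the target person's global layout, which is what prevents the trivial copy-paste solution mentioned above.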

To preempt a crude insertion of the generated person into the output image and avoid a "pasting" effect, the network generates a learnable mask m in tandem with the rendered image of the person p. The output image is therefore generated as

I_out = m ⊙ p + (1 − m) ⊙ I,

where I is the input image and ⊙ denotes element-wise multiplication. The mask is optimized to be similar to the binary version of the pose map, denoted b, using the L1 loss ‖m − b‖₁. Additionally, the mask is encouraged to be smooth, as captured by an L1 penalty on its spatial derivatives, ‖∇m‖₁.
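The compositing step and the two mask losses can be sketched as follows; the exact form of the smoothness term is not fully specified in the text, so an L1 penalty on the mask gradients is assumed here:

```python
import numpy as np

def blend(person: np.ndarray, mask: np.ndarray,
          background: np.ndarray) -> np.ndarray:
    """Composite the rendered person into the scene with the learned
    mask: I_out = m * p + (1 - m) * I."""
    m = mask[..., None]  # broadcast the HxW mask over the RGB channels
    return m * person + (1.0 - m) * background

def mask_losses(mask: np.ndarray, binary_pose: np.ndarray):
    """L1 attachment to the binarized pose map, plus a smoothness term
    (assumed here to be the mean L1 of the mask's spatial differences)."""
    l1 = float(np.abs(mask - binary_pose).mean())
    smooth = float(np.abs(np.diff(mask, axis=0)).mean()
                   + np.abs(np.diff(mask, axis=1)).mean())
    return l1, smooth
```

Training the mask rather than thresholding the semantic map lets the blending boundary stay soft, which is what avoids the "pasting" artifact shown in the ablation.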
The MCRN encoder is composed of five consecutive (Conv2D, InstanceNorm2D [24]) layers, followed by an FC layer with a LeakyReLU activation, producing a latent code. The latent code is processed through an additional FC layer and reshaped into a spatial tensor. The decoder has seven upsample layers with interleaving SPADE blocks. It is trained using the loss terms depicted in Fig. 3(b), namely a discriminator feature-matching loss

L_FM = Σ_{i=1}^{T} (1/N_i) ‖D_i(x) − D_i(x̂)‖₁,

with T the number of discriminator layers, N_i the number of elements in layer i, D_i the activations of the discriminator at layer i, x the real image, and x̂ the generated image; and a perceptual loss

L_VGG = Σ_i (1/M_i) ‖F_i(x) − F_i(x̂)‖₁,

with M_i the number of elements in the i-th layer, and F_i the VGG classifier activations at the i-th layer.
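The feature-matching term can be sketched generically over per-layer activations; using the mean to realize the 1/N_i normalization is an implementation choice here, not necessarily the paper's:

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats) -> float:
    """pix2pixHD-style feature-matching loss: mean L1 distance between
    the discriminator (or VGG) activations of the real and generated
    images, summed over layers.

    `real_feats` / `fake_feats` are lists of per-layer activation
    arrays; `.mean()` realizes the per-layer 1/N_i normalization.
    """
    total = 0.0
    for fr, ff in zip(real_feats, fake_feats):
        total += float(np.abs(fr - ff).mean())
    return total
```

The same skeleton serves both L_FM (with discriminator activations) and L_VGG (with VGG activations).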

Figure 3: (a) MCRN's architecture. Given an input target and a conditioning semantic map, a person and a blending mask are rendered. The mask is then employed to blend the rendered person into the final image. (b) The loss terms used to train MCRN.

3.3 Face refinement network

The third network, FRN, receives as input the face crop of the novel person in the blended image, as well as a conditioning signal: the face descriptor of the target face, obtained from the original image of the target person (before it was transformed into the segmented tensor). For this purpose, the pretrained VGGFace2 [4] network is used, and the activations of its penultimate layer are concatenated to the FRN latent space.

FRN applies the architecture of [11], which employs the same two conditioning signals for a completely different goal. While in [11] the top-level perceptual features of the generated face, obtained from the embedding of the VGGFace2 network, are pushed away from those of the input face, in our case the perceptual loss encourages the two to be similar, by minimizing the distance between the two embeddings.

FRN's output is blended into the face region using a second mask m_f:

c(I_out) ← m_f ⊙ FRN(c(I_out)) + (1 − m_f) ⊙ c(I_out),

where c is the operator that crops the face bounding box.
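The face-refinement blend can be sketched as follows; the box representation and the already-computed refined face are illustrative assumptions:

```python
import numpy as np

def blend_refined_face(image: np.ndarray, refined_face: np.ndarray,
                       face_mask: np.ndarray, box) -> np.ndarray:
    """Blend the FRN output back into the face bounding box:
    the crop c(I) is replaced by m_f * FRN(c(I)) + (1 - m_f) * c(I).

    `box` = (y0, y1, x0, x1) is the face bounding box; `refined_face`
    and `face_mask` match the crop's spatial size.
    """
    y0, y1, x0, x1 = box
    out = image.copy()
    crop = image[y0:y1, x0:x1]
    m = face_mask[..., None]  # broadcast over RGB
    out[y0:y1, x0:x1] = m * refined_face + (1.0 - m) * crop
    return out
```

Only the face region is touched, so the rest of the MCRN output is preserved exactly.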

4 Experiments

Figure 4: "Wish you were here" samples. Each sample shows a source image, and three different pairs of inserted person and output image.
Figure 5: Unconstrained (no bounding-box) samples of EGN’. For each input (red), the generated pose (purple) is shown.
Figure 6: Drawing a person (DeepFashion). A semantic map is crudely drawn (row 1) utilizing the annotation tool of [31], distinguishing between the hair (orange), face (red), torso/upper-limbs (bright-green 1), T-shirt (yellow), sweat-shirt (bright-green 2), pants (green), lower-limbs (blue). The rendered person generated by the MCRN (row 2) conforms to the conditioning segmentation, despite the deviation from the original dataset. The facial keypoints (not shown here) are taken from a randomly detected image. A video depicting the drawing and generation process is attached in the supplementary.
Figure 7: Replacing the hair, shirt, and pants (DeepFashion). For each target (row 1), the hair, shirt and pants (row 2), shirt only (row 3), are replaced for the semantic map of the upper-left and upper-right person. EGN/FRN are not used. See also Fig. 2,3,4 in the supp.
(a) (b) (c) (d) (e) (f) (g)
Figure 8: MCRN ablation study. (a) Target person, (b) our result, (c) no FRN (distorted face, does not resemble target), (d) no feature-matching and perceptual losses (blurry face, distorted skin patterns), (e) mask-edge regularization not tuned (strong edge pixelization), (f) no mask (unnatural blending, "pasting" effect), (g) no segmented encoder (excessive artifacts stemming from target and label spatial difference).
(a) (b) (c) (d) (e) (f) (g)
Figure 9: EGN ablation study. (a) Semantic map input for (b)-(c), (b) our result, (c) no derivative regularization (high-frequency patterns, as well as isolated objects generated), (d) semantic map input for (e)-(g), (e) single person input (context can be less descriptive), (f) VGG feature-matching enabled (shape is matched regardless of deformation artifacts), (g) generation resolution reduced (labels are perforated, new labels generated on top of existing segmentations). Columns (b)-(c) and (e)-(g) are presented in high-contrast colors for clarity.

Both the EGN and MCRN are trained on the Multi-Human Parsing dataset [14, 31]. We choose this as our primary dataset due to its high-resolution images and diverse settings, in terms of pose, scene, ethnicity, and age, which make it suitable for our task. The images are randomly split into training and test sets. EGN is trained such that for each sample, all semantic maps are maintained except one, which becomes the generation objective. In addition, we filter out images that do not contain at least one detected set of facial keypoints. EGN is trained for 300 epochs, with a batch size of 64. MCRN is trained on each person separately; the network is trained for 200 epochs, with a batch size of 32.

Our method has a single tuning parameter: the strength of the mask edge regularization. The scale of this loss term was set during development after testing several candidate values, and the chosen value is verified in the MCRN ablation study in Fig. 8.

Context-aware generation. We provide samples for a variety of target persons in the full context-aware generation task in Fig. 4. In these experiments, EGN is given a random bounding box, with a size and y-axis location selected randomly relative to an existing person in the image, while the x-axis location is sampled uniformly across the image. EGN generates a semantic map, which is then run through MCRN for the various targets, shown for each column. FRN is then applied to refine the rendered face. As can be observed from the generated results, EGN felicitously captures the scene context, generating a semantic map of a new person that is well correlated with the human interactions in the scene. MCRN successfully renders a realistic person, as conditioned by the target, and blends the novel person well, as demonstrated over diverse targets.

The case with no specified input bounding box is demonstrated in Fig. 5. As can be observed, EGN’ selects highly relevant poses by itself.

Individual component replacement. Evaluating MCRN's ability to generalize to other tasks, we utilize it for hair, shirt, and pants replacement, demonstrated over the DeepFashion dataset [17] in Fig. 7 and supplementary Figs. 2 and 3, and over high-resolution images in supplementary Fig. 4. As the latter shows, MCRN can be successfully applied to unconstrained images, rather than only to low-variation datasets such as DeepFashion, increasing the applicability and robustness of this task. We employ the model of [16, 12] for human parsing.

Person drawing. An additional application of MCRN is free-form drawing of a person. We intentionally demonstrate this task over a set of extreme, crudely drawn sketches, demonstrating the ability to render persons outside of the dataset manifold while still producing coherent results, as seen in Fig. 6 and the supplementary video. The annotation tool presented in [31] is used to sketch the semantic map, and a video depicting the drawing and generation process is attached as supplementary.

Pose transfer evaluation. MCRN can be applied to the pose transfer task. By modifying EGN to accept as input the concatenation of a source semantic map, source pose keypoints (a stick figure, as extracted by the method of [5]), and target pose keypoints, we can generate the target semantic map , which is then fed into MCRN. A DensePose [21] representation can be used instead of the stick-figure as well.

A qualitative comparison of this pipeline to the methods of [32, 18, 10, 23] is presented in supplementary Fig. 4. The work of [8] presents visually compelling results, similar to ours in this task. We do not present a qualitative comparison to [8] due to code unavailability. However, a quantitative comparison is presented in Tab. 1 (FRN is not applied).

Providing reliable quantitative metrics for generation tasks is known to be challenging. Widely used methods, such as the Inception Score [22] and SSIM [27], do not capture perceptual similarity or human structure [3, 32]. Metrics capturing human structure, such as PCK [29] or PCKh [1], have been proposed; however, they rely on a degenerate representation of the human form (keypoints).

We therefore develop two new DensePose-based human-structure metrics (DPBS and DPIS), and provide the Python code in the supplementary. Additionally, we evaluate perceptual similarity using the LPIPS (Learned Perceptual Image Patch Similarity) metric [30]. DPBS (DensePose Binary Similarity) provides a coarse metric between the detected DensePose [21] representations of the generated and ground-truth images, computing the Intersection over Union (IoU) of the binary detections. The second novel metric, DPIS (DensePose Index Similarity), provides a finer shape-consistency measure, calculating the IoU per body-part index, as provided by the DensePose detection; the results are then averaged across the body parts.
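A minimal sketch of the two metrics, assuming part-index maps in which 0 denotes background (this is an illustration, not the paper's released code):

```python
import numpy as np

def dpbs(gen_parts: np.ndarray, gt_parts: np.ndarray) -> float:
    """DensePose Binary Similarity: IoU of the binary person detections
    (any non-zero part index counts as 'person')."""
    a, b = gen_parts > 0, gt_parts > 0
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

def dpis(gen_parts: np.ndarray, gt_parts: np.ndarray) -> float:
    """DensePose Index Similarity: per-body-part IoU, averaged over the
    part indices present in either map."""
    ious = []
    for idx in np.union1d(gen_parts[gen_parts > 0], gt_parts[gt_parts > 0]):
        a, b = gen_parts == idx, gt_parts == idx
        ious.append(np.logical_and(a, b).sum() / np.logical_or(a, b).sum())
    return float(np.mean(ious)) if ious else 1.0
```

DPBS only asks whether the silhouettes overlap, while DPIS additionally requires the correct body part in each overlapping region, which is why it is the finer of the two.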

The quantitative comparison follows the method described by [32] in terms of the dataset split into train and test pairs (101,966 pairs are randomly selected for training and 8,570 pairs for testing, with no identity overlap between train and test). Our method achieves the best results in terms of perceptual metrics among the tested methods (for both our keypoint-based and DensePose-based variants). For human-structural consistency, both our variants achieve top results on the DPBS metric, and the DensePose-based model scores highest on the DPIS metric. Our methods score well on the controversial metrics (SSIM, IS) as well.

Method | LPIPS (SqzNet) | LPIPS (VGG) | DPBS | DPIS | SSIM | IS
Ma [18] | 0.416 | 0.523 | 0.791 | 0.392 | 0.773 | 3.163
Siarohin [23] | – | – | – | – | 0.760 | 3.362
Esser [10] | – | – | – | – | 0.763 | 3.440
Zhu [32] | 0.170 | 0.299 | 0.840 | 0.463 | 0.773 | 3.209
Dong [8] | – | – | – | – | 0.793 | 3.314
Ours (DP) | 0.149 | 0.264 | 0.862 | 0.470 | 0.793 | 3.346
Ours (KP) | 0.156 | 0.271 | 0.852 | 0.448 | 0.788 | 3.189
Table 1: Pose transfer on the DeepFashion dataset. Shown are the LPIPS [30] (lower is better), DPBS, DPIS, SSIM [27], and IS [22] metrics (higher is better). Both our DensePose (DP) and keypoint (KP) based methods achieve state-of-the-art results in most metrics. FRN is not applied.
Table 2: User study. (a) Success rate in user recognition of the generated person. Shown per number of persons in an image. (b) Examples of images used. For each image, the user is given unlimited time to identify the generated person.

User Study. A user study is shown in Tab. 2, presented per number of persons in an image (including the generated person). For each image, the user selects the generated person. The user is aware that all images contain a single generated person, and, contrary to user studies commonly used for image generation, no time constraints are given. The low success rate validates EGN's ability to generate novel persons in context. Note that the success rate does not increase with the number of persons, as one might expect; perhaps the scene becomes more challenging to modify as the number of persons grows.

Ablation study. We provide qualitative ablation studies for both EGN and MCRN. As the "wish you were here" application has no ground truth, perceptual comparisons or shape-consistency quantitative methods do not capture the visual importance of each component. Other metrics that do not rely on a ground-truth image (e.g., Inception Score, FID) are unreliable here: in the pose-transfer task, higher IS appears correlated with more substantial artifacts, indicating that a higher degree of artifacts results in stronger perceived diversity under IS.

The MCRN ablation is given in Fig. 8, showcasing the importance of each component or setting. Details are given in the figure caption.

The EGN ablation is given in Fig. 9. For the generated person, there are numerous generation options that could be considered applicable in terms of context. This produces an involved ablation study, encompassing additional deviations between the tested models that are not a direct result of the component being tested. Looking beyond these minor differences, the expected deviations (as observed throughout the experiments performed to reach the final network) are detailed in the figure caption.

5 Discussion

Our method is trained on cropped human figures. The generated figure tends to be occluded by other persons in the scene, and does not occlude them. The reason is that during training, the held-out person can be occluded, in which case the foreground person(s) are complete. Alternatively, the held-out person can be complete, in which case, once removed, the occluded person(s) appear to have missing parts. At test time, the persons contain missing areas that are solely due to the existing scene. Therefore, test images appear as images in which the held-out person is occluded.

In a sense, this is exactly what the "wish you were here" application (adding a person to an existing photograph) calls for: finding a way to add a person without disturbing the persons already there. However, having control over the ordering of the persons in the scene relative to the camera plane would add another dimension of variability.

A limitation of the current method is that the generated semantic map is not conditioned on the target person or their attributes. Therefore, for example, the hair of the generated figure is not in the same style as that of the target person. This is not an inherent limitation, since one could condition EGN on more inputs; rather, it is a limitation of the way training is done. Since during training we have only one image, providing additional appearance information might impair the network's generalization capability. A partial solution may be to condition on very crude descriptors, such as the relative hair length.

6 Conclusions

We demonstrate a convincing ability to add a target person to an existing image. The method employs three networks that are applied sequentially, and progress the image generation process from the semantics to the concrete.

From a general perspective, we demonstrate the ability to modify images, adhering to the semantics of the scene, while preserving the overall image quality.


  • [1] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele (2014) 2D human pose estimation: new benchmark and state of the art analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3686–3693. Cited by: §4.
  • [2] G. Balakrishnan, A. Zhao, A. V. Dalca, F. Durand, and J. Guttag (2018) Synthesizing images of humans in unseen poses. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8340–8348. Cited by: §2, §2.
  • [3] S. Barratt and R. Sharma (2018) A note on the inception score. arXiv preprint arXiv:1801.01973. Cited by: §4.
  • [4] Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman (2017) VGGFace2: a dataset for recognising faces across pose and age. arXiv preprint arXiv:1710.08092. Cited by: §3.3.
  • [5] Z. Cao, G. Hidalgo, T. Simon, S. Wei, and Y. Sheikh (2018) OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields. arXiv preprint arXiv:1812.08008. Cited by: §3.1, §4.
  • [6] C. Chan, S. Ginosar, T. Zhou, and A. A. Efros (2018) Everybody dance now. arXiv preprint arXiv:1808.07371. Cited by: §2, §2.
  • [7] P. Chao, A. Li, and G. Swamy (2018) Generative models for pose transfer. arXiv preprint arXiv:1806.09070. Cited by: §2.
  • [8] H. Dong, X. Liang, K. Gong, H. Lai, J. Zhu, and J. Yin (2018) Soft-gated warping-gan for pose-guided person image synthesis. In Advances in Neural Information Processing Systems, pp. 474–484. Cited by: §2, §2, Table 1, §4.
  • [9] P. Esser, J. Haux, T. Milbich, and B. Ommer (2018) Towards learning a realistic rendering of human behavior. In ECCV WORKSHOP, Cited by: §2.
  • [10] P. Esser, E. Sutter, and B. Ommer (2018) A variational u-net for conditional appearance and shape generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8857–8866. Cited by: §2, §2, Table 1, §4.
  • [11] O. Gafni, L. Wolf, and Y. Taigman (2019-10) Live face de-identification in video. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §3.3.
  • [12] K. Gong, X. Liang, D. Zhang, X. Shen, and L. Lin (2017-07) Look into person: self-supervised structure-sensitive learning and a new benchmark for human parsing. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.
  • [13] A. Kanazawa, J. Y. Zhang, P. Felsen, and J. Malik (2018) Learning 3d human dynamics from video. arXiv preprint arXiv:1812.01601. Cited by: §2.
  • [14] J. Li, J. Zhao, Y. Wei, C. Lang, Y. Li, T. Sim, S. Yan, and J. Feng (2017) Multi-human parsing in the wild. arXiv preprint arXiv:1705.07206. Cited by: §4.
  • [15] Y. Li, C. Huang, and C. C. Loy (2019) Dense intrinsic appearance flow for human pose transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3693–3702. Cited by: §2.
  • [16] X. Liang, K. Gong, X. Shen, and L. Lin (2018) Look into person: joint body parsing & pose estimation network and a new benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §4.
  • [17] Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang (2016-06) DeepFashion: powering robust clothes recognition and retrieval with rich annotations. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §4.
  • [18] L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool (2017) Pose guided person image generation. In Advances in Neural Information Processing Systems, pp. 406–416. Cited by: §2, §2, Table 1, §4.
  • [19] T. Park, M. Liu, T. Wang, and J. Zhu (2019) Semantic image synthesis with spatially-adaptive normalization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §3.2.
  • [20] A. Pumarola, A. Agudo, A. Sanfeliu, and F. Moreno-Noguer (2018) Unsupervised person image synthesis in arbitrary poses. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [21] R. Alp Güler, N. Neverova, and I. Kokkinos (2018) DensePose: dense human pose estimation in the wild. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4, §4.
  • [22] T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen (2016) Improved techniques for training gans. arXiv preprint arXiv:1606.03498. Cited by: Table 1, §4.
  • [23] A. Siarohin, E. Sangineto, S. Lathuilière, and N. Sebe (2018) Deformable gans for pose-based human image generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3408–3416. Cited by: §2, §2, Table 1, §4.
  • [24] D. Ulyanov, A. Vedaldi, and V. Lempitsky (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Cited by: §3.2.
  • [25] T. Wang, M. Liu, J. Zhu, G. Liu, A. Tao, J. Kautz, and B. Catanzaro (2018) Video-to-video synthesis. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §2.
  • [26] T. Wang, M. Liu, J. Zhu, A. Tao, J. Kautz, and B. Catanzaro (2018) High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §3.1.
  • [27] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, et al. (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612. Cited by: Table 1, §4.
  • [28] C. Yang, Z. Wang, X. Zhu, C. Huang, J. Shi, and D. Lin (2018) Pose guided human video generation. arXiv preprint arXiv:1807.11152. Cited by: §2.
  • [29] Y. Yang and D. Ramanan (2012) Articulated human detection with flexible mixtures of parts. IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (12), pp. 2878–2890. Cited by: §4.
  • [30] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang (2018) The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, Cited by: Table 1, §4.
  • [31] J. Zhao, J. Li, Y. Cheng, L. Zhou, T. Sim, S. Yan, and J. Feng (2018) Understanding humans in crowded scenes: deep nested adversarial learning and a new benchmark for multi-human parsing. arXiv preprint arXiv:1804.03287. Cited by: Figure 6, §4, §4.
  • [32] Z. Zhu, T. Huang, B. Shi, M. Yu, B. Wang, and X. Bai (2019) Progressive pose attention transfer for person image generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2347–2356. Cited by: §2, §2, Table 1, §4, §4, §4.

Appendix A Additional individual component replacement samples

Figure 10: Replacing the hair, shirt, and pants (DeepFashion). For each target (row 1), the hair (row 2), shirt (row 3), and pants (row 4) are replaced in the semantic map of the upper-left person. The EGN and FRN are not used.
Figure 11: Replacing the hair, shirt, and pants (DeepFashion). For each target (row 1), the hair (row 2), shirt (row 3), and pants (row 4) are replaced in the semantic map of the upper-left person. The EGN and FRN are not used.
Figure 12: Replacing the hair, shirt, and pants for high-resolution unconstrained images. For each target (row 1), the hair (row 2), shirt (row 3), and pants (row 4) are replaced using a chosen semantic map.

Appendix B Pose-transfer qualitative comparison

Figure 13: Comparison of our method on the pose-transfer task. Even without the Face Refinement Network, our method produces photorealistic renderings of the targets.

Appendix C EGN training samples

Figure 14: Training the Essence Generation Network. Shown for each row are the (a) input semantic map and bounding box, (b) generated semantic map, (c) ground truth semantic map. The scene essence is captured, while the generated semantic map is not identical to the ground truth.

Appendix D DPBS and DPIS (Python) code

import numpy as np
import os
import cv2

gt_path = 'path/to/ground_truth_densepose'
gen_path = 'path/to/generated_densepose'

read_gen_by_order = True  # Read generated images by order, else by name
n_DP_MAX_IDX = 24  # DensePose generates I with values 0-24


def get_I_iou(img_gt, img_gen, I_idx):
    I_gt = np.zeros_like(img_gt)
    I_gen = np.zeros_like(img_gen)

    I_gt[img_gt == I_idx] = 1  # binarization of the GT image
    I_gen[img_gen == I_idx] = 1  # binarization of the generated image

    I_iou = get_iou(I_gt, I_gen)
    return I_iou


def get_iou(img_gt, img_gen):
    bin_gt = img_gt.copy()
    bin_gen = img_gen.copy()

    bin_gt[bin_gt > 0] = 1  # binarization of the GT image
    bin_gen[bin_gen > 0] = 1  # binarization of the generated image

    bin_union = bin_gt.copy()
    bin_union[bin_gen == 1] = 1  # union of gt and gen (1 where either is present)

    bin_overlap = bin_gt + bin_gen  # overlap of both
    bin_overlap[bin_overlap != 2] = 0  # overlap will be == 2
    bin_overlap[bin_overlap != 0] = 1  # binarization

    union_sum = np.sum(bin_union)
    if union_sum == 0:  # if neither the generated nor the GT part is present, mask out
        iou = -1
    else:
        iou = np.sum(bin_overlap) / union_sum

    return iou


def get_stats(metric, masked=False):
    if masked:
        return np.ma.mean(metric), np.ma.std(metric), np.ma.median(metric)
    else:
        return np.mean(metric), np.std(metric), np.median(metric)


gt_list = os.listdir(gt_path)  # get ground-truth file names
gen_list = os.listdir(gen_path)  # get generated file names

n_list = len(gt_list)
n_gen = len(gen_list)
if n_list != n_gen:
    print('Error. Ground-truth and generated folders do not contain the same number of images')
    exit(1)
print('Computing distance metrics over {} images.'.format(n_list))

DPBSs = np.zeros((n_list))  # DensePose Binary Similarity
DPISs = np.zeros((n_list))  # DensePose Index Similarity

for img_idx, filename in enumerate(gt_list):
    img_gt = cv2.imread(os.path.join(gt_path, filename), cv2.IMREAD_UNCHANGED)[:, :, 0]  # DP GT image
    if read_gen_by_order:
        img_gen = cv2.imread(os.path.join(gen_path, gen_list[img_idx]), cv2.IMREAD_UNCHANGED)[:, :, 0]  # DP generated image read by order
    else:
        img_gen = cv2.imread(os.path.join(gen_path, filename), cv2.IMREAD_UNCHANGED)[:, :, 0]  # DP generated image read by name

    max_idx = max(np.amax(img_gt), np.amax(img_gen))  # the max index is taken as the max between the generated and GT
    if max_idx > n_DP_MAX_IDX:
        print('Error. The maximum index value was {}. Should not be over 24'.format(max_idx))
        exit(1)

    DPBSs[img_idx] = get_iou(img_gt, img_gen)  # get DensePose Binary Similarity
    I_ious = np.full((n_DP_MAX_IDX), -1.0)  # DPIS indices per image (-1 marks absent parts)
    I_mask = np.ones_like(I_ious, dtype=bool)  # masking for DPIS indices per image
    for I_idx in range(1, max_idx + 1):  # iterate over the indices present
        I_ious[I_idx - 1] = get_I_iou(img_gt, img_gen, I_idx)  # index IoU (per body part)
    I_mask[I_ious != -1] = 0  # do not mask IoUs found
    masked_arr =, mask=I_mask)  # masked IoUs
    DPISs[img_idx] = np.ma.mean(masked_arr)  # DensePose Index Similarity is calculated over the present indices

    if img_idx % 1000 == 0:
        print('Done with {}/{} images.'.format(img_idx, n_list))

DPBSs_mask = np.zeros_like(DPBSs, dtype=bool)  # masking for DPBS
DPBSs_mask[DPBSs == -1] = 1
masked_DPBSs_arr =, mask=DPBSs_mask)  # masked IoUs

DPBS_mean, DPBS_SD, DPBS_median = get_stats(masked_DPBSs_arr, masked=True)
DPIS_mean, DPIS_SD, DPIS_median = get_stats(DPISs)

print('DPBS - Mean: {}, SD: {}, Median: {}'.format(DPBS_mean, DPBS_SD, DPBS_median))
print('DPIS - Mean: {}, SD: {}, Median: {}'.format(DPIS_mean, DPIS_SD, DPIS_median))