Being able to generate novel photo-realistic views of a person in an arbitrary pose from a single image would open the door to many exciting applications in different areas, including the fashion and e-commerce business, photography technologies that automatically edit and animate still images, and the movie industry, to name a few. Addressing this task without explicitly capturing the underlying processes involved in image formation, such as estimating the 3D geometry of the body, hair and clothes, and the appearance and reflectance models of the visible and occluded parts, seems an extremely complex endeavor. Nevertheless, Generative Adversarial Networks (GANs) have shown impressive results in rendering new realistic images, e.g., faces [8, 22], indoor scenes and clothes, by directly learning a generative model from data. Very recently, they have been used for the particular problem we consider in this paper, multi-view person image generation from single-view images [16, 35]. While the results shown by both these approaches are very promising, they suffer from the same fundamental limitation: they are trained in a fully supervised manner, that is, they need pairs of images of the same person wearing exactly the same clothes under two different poses. This requires specific datasets, typically in the fashion domain [15, 36]. Tackling the problem in an unsupervised manner would allow leveraging an unlimited amount of images, including datasets for which no multi-view images of people are available.
In this paper we therefore move a step forward by proposing a fully unsupervised GAN framework that, given a photo of a person, automatically generates images of that person under new camera views and distinct body postures. The generative model we build is able to synthesize novel views of the body parts and clothes that are visible in the original image, and also to hallucinate those that are not seen. As shown in Fig. 1, the generated images retain the body shape, and the new textures are consistent with the original image, even when the input and desired poses are radically different. In order to learn this model from unlabeled data (i.e., our training data consists of single images of people plus the input and desired poses), we propose a GAN architecture that combines ingredients of pose-conditional adversarial networks, Cycle-GANs, and the loss functions used in image style transfer that aim at producing new images of high perceptual quality.
More specifically, to circumvent the need for pairs of training images of the same person under different poses, we split the problem into two main stages. First, we consider a pose-conditioned bidirectional adversarial architecture which, given a single training photo, initially renders a new image under the desired pose. This synthesized image is then rendered back to the original pose, hence being directly comparable to the input image. Second, in order to assess the quality of the rendered images we devise a novel loss function, computed over the triplet of images (original, rendered in the desired pose, and back-rendered to the original pose), that incorporates content and style terms. This function is conditioned on the pose parameters and enforces the rendered image to retain the global semantic content of the original image as well as its style at the joint locations.
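Schematically, the two stages form the following rendering cycle (the symbols $I_{p_o}$, $p_f$ and $G$ are assumed notation, not taken from this excerpt):

```latex
I_{p_o} \;\xrightarrow{\;G(\,\cdot\,\mid p_f)\;}\; \hat{I}_{p_f}
        \;\xrightarrow{\;G(\,\cdot\,\mid p_o)\;}\; \hat{I}_{p_o},
```

where $I_{p_o}$ is the input image in the original pose $p_o$, $\hat{I}_{p_f}$ is the rendering under the desired pose $p_f$, and the back-rendered $\hat{I}_{p_o}$ can be compared directly against $I_{p_o}$ without any ground truth.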
2 Related Work
Rendering a person in an arbitrary pose from a single image is a severely ill-posed problem, as there are many clothing and body shape ambiguities caused by the new camera view and the changing body pose, as well as large areas of missing data due to body self-occlusions. Solving such a rendering problem thus requires introducing several sources of prior knowledge including, among others, the body shape, kinematic constraints, hair dynamics, cloth texture, reflectance models and fashion patterns.
Initial solutions to tackle this problem first built a 3D model of the object and then synthesized the target images under the desired views [1, 9, 37]. These methods, however, were constrained to rigid objects defined by either CAD models or relatively simple geometric primitives.
More recently, with the advent of deep learning, there has been a growing interest in learning generative image models from data. Several advanced models have been proposed for this purpose. These include the variational autoencoders [11, 12, 25], the autoregressive models [30, 31], and, most importantly, the Generative Adversarial Networks.
GANs are very powerful generative models based on game theory. They simultaneously train a generator network that produces synthetic samples (rendered images in our context) and a discriminator network that is trained to distinguish between the generator’s output and the true data. This idea is embodied by the so-called adversarial loss, which we shall use in this paper to train our model. GANs have been shown to produce very realistic images with a high level of detail. They have been successfully used to render faces [8, 22], indoor scenes [8, 32] and clothes.
Particularly interesting for this work are those approaches that incorporate conditions to train GANs and constrain the generation process. Several conditions have been explored so far, such as discrete labels [19, 20] and text. Images have also been used as a condition, for instance in the problems of image-to-image translation, future frame prediction [21] and face alignment. Very recently, both textual descriptions and images have been used as a condition to generate new clothing outfits. The works most related to ours are [16, 35]. They both propose GAN models for the multi-view person image generation problem. However, the two approaches use ground-truth supervision during training, i.e., pairs of images of the same person in two different poses wearing the same clothes. Tackling the problem in a fully unsupervised manner, as we do in this paper, is a much harder task that requires more elaborate network designs, especially when estimating the loss of the rendered images.
The unsupervised strategy we propose is somewhat related to that used in the Cycle-GANs [13, 14, 38] for image-to-image translation, which are also trained in the absence of paired examples. However, these approaches aim at estimating a mapping between two distributions of images, and no spatial transformation of the pixels in the input image is considered. As a result, the overall strategies and network architectures to address the two problems (image-to-image translation and multi-view generation) are essentially different.
3 Problem Formulation
Given a single-view image of a person, our goal is to train a GAN model in an unsupervised manner, allowing us to generate photo-realistic pose transformations of the input image while retaining the person's identity and clothes appearance. Formally, we seek to learn the mapping between an image of a person in a given pose and the image of the same person in a desired pose. Poses are represented by 2D skeletons, each a set of joints defined by their pixel locations in the image. The model is trained in an unsupervised manner with training samples that do not contain the ground-truth output image.
Figure 2 shows an overview of our model. It is composed of four main modules: (1) a generator that acts as a differentiable renderer, mapping one input image of a given person under a specific pose to an output image of the same person under a different pose; note that the generator is used twice in our network, first to map the input image to the desired pose and then to render the result back to the original pose; (2) a regressor responsible for estimating the 2D joint locations of a given image; (3) a discriminator that seeks to discriminate between generated and real samples; (4) a loss function, computed without ground truth, that aims to preserve the person's identity. For this purpose, we devise a novel loss function that enforces semantic content similarity between the original and back-rendered images, and style similarity between the original and rendered images.
In the following subsections we describe in detail each of these components as well as the 2D pose embedding we consider.
4.1 Pose Embedding
Drawing inspiration from prior work on pose estimation, the 2D location of each skeleton joint in an image is represented as a probability density map computed over the entire image domain. For each joint we introduce a Gaussian peak with variance 0.03 at its position in the corresponding belief map, computed over the set of all pixel locations in the input image. The full person pose is represented as the concatenation of all belief maps.
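The belief-map construction above can be sketched in NumPy. This is a minimal sketch, not the authors' implementation: the function names are illustrative, and we assume joint coordinates normalized to [0, 1], since a variance of 0.03 only makes sense at that scale.

```python
import numpy as np

def belief_map(joint_uv, height, width, sigma2=0.03):
    """Gaussian belief map for one joint.

    joint_uv: (u, v) location of the joint, assumed normalized to [0, 1]
    (u = horizontal, v = vertical; this convention is an assumption).
    sigma2: variance of the Gaussian peak (0.03 as stated in the text).
    """
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float64)
    # Normalize pixel coordinates to [0, 1] to match the joint coordinates.
    xs /= max(width - 1, 1)
    ys /= max(height - 1, 1)
    u, v = joint_uv
    d2 = (xs - u) ** 2 + (ys - v) ** 2
    return np.exp(-d2 / (2.0 * sigma2))

def pose_embedding(joints, height, width):
    """Concatenate one belief map per joint into an (N, H, W) tensor."""
    return np.stack([belief_map(j, height, width) for j in joints], axis=0)
```

The concatenated maps can then be stacked with the RGB image along the channel axis to condition the generator.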
4.2 Network Architecture
Given an input image of a person, the generator aims to render a photo-realistic image of that person in a desired pose. In order to condition the generator on the pose, we concatenate the input image with the pose belief maps and feed the result into a feed-forward network that produces an output image with the same dimensions as the input. The generator is implemented as a variation of the network from Johnson et al., as it has achieved impressive results on the image-to-image translation problem.
We implement the discriminator as a PatchGan network mapping the input image to a matrix of scores, where each entry represents the probability that the corresponding overlapping image patch is real. This discriminator contains fewer parameters than other conventional discriminators typically used for GANs, and enforces high-frequency correctness, which reduces the blurriness of the generated images.
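The size of the image patch that each score depends on is the receptive field of the discriminator's convolutional stack. A small helper makes this concrete; the specific layer configuration below is a common PatchGan setup and an assumption, as the text does not state one.

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers.

    layers: list of (kernel_size, stride) tuples, from input to output.
    Each layer grows the receptive field by (k - 1) times the product
    of the strides of all preceding layers.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# A common PatchGan configuration (assumed, not given in the text):
# three 4x4 stride-2 convs followed by two 4x4 stride-1 convs,
# which yields the classic 70x70 patch.
patchgan = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
```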
4.3 Learning the Model
The loss function we define contains three terms, namely an image adversarial loss that pushes the distribution of the generated images towards the distribution of the training images, a conditional pose loss that enforces the pose of the generated images to be similar to the desired one, and an identity loss that favors preserving the person's identity. We next describe each of these terms.
Image Adversarial Loss.
In order to optimize the generator parameters and learn the distribution of the training data, we perform a standard min-max game between the generator and the image discriminator. The two networks are jointly trained with an objective in which the discriminator tries to maximize the probability of correctly classifying real and rendered images while the generator tries to fool the discriminator. Formally, this loss is defined as:
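The formula itself was lost in extraction; a plausible reconstruction in the standard conditional-GAN form is (all symbols are assumed notation):

```latex
\mathcal{L}_{\mathrm{adv}}(G, D_I) =
  \mathbb{E}_{I_{p_o}}\!\left[\log D_I(I_{p_o})\right] +
  \mathbb{E}_{I_{p_o},\, p_f}\!\left[\log\!\left(1 - D_I\!\big(G(I_{p_o} \mid p_f)\big)\right)\right],
```

which the discriminator $D_I$ maximizes and the generator $G$ minimizes.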
Conditional Pose Loss.
While reducing the image adversarial loss, the generator must also reduce the error produced by the 2D pose regressor. In this way, the generator not only learns to produce realistic samples but also learns how to generate samples consistent with the desired pose. This loss is defined by:
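The defining formula is missing from this excerpt; a plausible reconstruction consistent with the surrounding text is (symbols assumed; $\Phi$ denotes the 2D pose regressor):

```latex
\mathcal{L}_{\mathrm{pose}}(G, \Phi) =
  \big\|\Phi\big(G(I_{p_o} \mid p_f)\big) - p_f\big\|_2^2
  + \big\|\Phi(I_{p_o}) - p_o\big\|_2^2,
```

where the first term penalizes generated images whose estimated pose deviates from the desired pose $p_f$, and the second term, applied to real images, trains the regressor itself. The exact decomposition is an assumption.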
With the two previously defined losses, the generator is enforced to generate realistic images of people in a desired position. However, without ground-truth supervision there is no constraint guaranteeing that the person's identity (e.g., body shape, hair style) in the original and rendered images is the same. In order to preserve this identity, we draw inspiration from the content-style loss previously introduced to maintain high perceptual quality in the problem of image style transfer. This loss consists of two main components, one to retain semantic similarity (‘content’) and the other to retain texture similarity (‘style’). Based on this idea, we define two sub-losses that aim at retaining the identity between the input image and the rendered image.
For the content term, we argue that the generator should be able to render back the original image given the generated image and the original pose. Nevertheless, even when using PatchGan-based discriminators, directly comparing the original and back-rendered images at the pixel level would struggle to handle high-frequency details, leading to overly smoothed images. Instead, we compare them based on their semantic content. Formally, we define the content loss to be:
where the features are the activations at the z-th layer of a pretrained network.
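The equation was stripped during extraction; a plausible reconstruction consistent with the text is (symbols assumed):

```latex
\mathcal{L}_{\mathrm{content}}(G) =
  \big\|\Phi_z(I_{p_o}) - \Phi_z\big(\hat{I}_{p_o}\big)\big\|_2^2,
\qquad
\hat{I}_{p_o} = G\big(G(I_{p_o} \mid p_f) \mid p_o\big),
```

where $\Phi_z(\cdot)$ denotes the activations at the z-th layer of the pretrained network and $\hat{I}_{p_o}$ is the back-rendered image.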
In order to retain the style of the original image in the rendered ones, we enforce the texture around the visible joints of both images to be similar. This involves a first step of extracting, in a differentiable manner, patches of features around the joints of the original and rendered images. More specifically, we take the semantic features of each image together with the down-sampled (using average pooling) probability maps associated to its pose. The pose-conditioned patches are computed as:
The style of a patch is then captured by the correlation between the different channels of its hidden representations, using the spatial extent of the feature maps as the expectation. As previously done in style transfer, this can be implemented by computing the Gram matrix for each patch, defined as the inner product between the vectorized feature maps of that patch. The Patch-Style loss is then computed as the mean square error between the Gram matrices of visible pairs of the same joint in both images:
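The Gram-matrix computation described above can be sketched in NumPy. This is a minimal illustrative sketch, not the authors' code: the function names, the spatial normalization, and the final averaging over joints are assumptions.

```python
import numpy as np

def gram_matrix(feats):
    """Gram matrix of a feature patch.

    feats: (C, H, W) array of hidden activations for one patch.
    Returns the (C, C) matrix of inner products between vectorized
    channel maps, normalized by the spatial extent.
    """
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return f @ f.T / (h * w)

def patch_style_loss(feats_a, feats_b, visible):
    """Mean squared error between Gram matrices of corresponding
    joint patches in two images, restricted to visible joints."""
    losses = [np.mean((gram_matrix(fa) - gram_matrix(fb)) ** 2)
              for fa, fb, vis in zip(feats_a, feats_b, visible) if vis]
    return float(np.mean(losses)) if losses else 0.0
```

Because the Gram matrix discards spatial layout, this term matches texture statistics around each joint without forcing pixel-exact agreement.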
Finally, we define the identity loss as the weighted sum of the content and style losses:
where the parameter controls the relative importance of the two components.
We take the full loss as a linear combination of all previous loss terms:
where one of the terms is used to train the pose regressor. Our ultimate goal is to solve:
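The combined objective was lost in extraction; a plausible reconstruction of this linear combination and of the min-max problem is (the weights and symbols are assumed notation):

```latex
\mathcal{L} = \mathcal{L}_{\mathrm{adv}}
            + \lambda_{\mathrm{p}}\,\mathcal{L}_{\mathrm{pose}}
            + \lambda_{\mathrm{i}}\,\mathcal{L}_{\mathrm{identity}},
\qquad
G^{\star} = \arg\min_{G}\,\max_{D_I}\; \mathcal{L},
```

with the pose regressor trained jointly through the pose term, as stated in the text.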
One could argue that applying the adversarial and pose terms to the recovered image is not required, because the same information is already expressed by the other terms. However, we found that these two terms improved robustness and convergence during training.
5 Implementation Details
In order to reduce model oscillation and obtain more photo-realistic results, we use the training trick of replacing the negative log likelihood of the adversarial loss by a least-squares loss. The image features are obtained from a pretrained VGG16 network. We use the Adam solver with a learning rate of 0.0002 for the generator, 0.0001 for the discriminators, and a batch size of 12. We train for 300 epochs with a linearly decreasing rate after epoch 100. The weights of the loss terms are kept fixed. To improve training stability, we update the discriminators using a buffer of previously rendered images rather than only those generated in the current iteration. During training, the desired poses are randomly sampled from those in the training set.
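The discriminator image buffer can be sketched in plain Python. This is a minimal sketch following the common practice popularized by Shrivastava et al.; the buffer size and the 50% replacement probability are assumptions, not stated in the text.

```python
import random

class ImageBuffer:
    """History of previously rendered images: with probability 0.5 the
    discriminator is shown an image from history instead of the freshly
    generated one, which stabilizes adversarial training."""

    def __init__(self, max_size=50):
        self.max_size = max_size
        self.images = []

    def query(self, image):
        # Fill the buffer first, passing images through unchanged.
        if len(self.images) < self.max_size:
            self.images.append(image)
            return image
        # Once full: half the time, swap the new image into the buffer
        # and return an older one; otherwise return the new image as-is.
        if random.random() < 0.5:
            idx = random.randrange(self.max_size)
            old = self.images[idx]
            self.images[idx] = image
            return old
        return image
```

In a training loop, the discriminator's fake batch would be drawn via `buffer.query(rendered_image)` rather than from the generator directly.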
6 Experimental Evaluation
We verify the effectiveness of our unsupervised GAN model through quantitative and qualitative evaluations. We next describe the dataset we used for evaluation and the results we obtained. Supplementary material can be found on http://www.albertpumarola.com/research/person-synthesis/.
Benchmark. We have evaluated our approach on the publicly available In-shop Clothes Retrieval Benchmark of the DeepFashion dataset, which contains a large number of clothing images with diverse person poses. Images of the dataset were initially resized to a fixed size. We then applied data augmentation with all three possible flips per image. After that, the 2D pose was computed in all images using the Convolutional Pose Machine (CPM), and images for which CPM failed were removed from the dataset. From the remaining images, we randomly selected 24,145 for training and 5,000 for test. Each test sample is also associated with a desired pose and its corresponding ground-truth image, which is used for quantitative evaluation purposes. Training images are only associated with a desired 2D pose; no ground-truth warped image is used during training.
6.1 Quantitative results
Since test samples are annotated with ground-truth images under the desired pose, we can quantitatively evaluate the quality of the synthesis. Specifically, we use the metrics considered by previous approaches to multi-view person generation [16, 35], namely the Structural Similarity (SSIM) and the Inception Score (IS). These are fairly standard metrics that focus more on the overall quality of the generated image than on the pixel-level similarity between the generated image and the ground truth. Concretely, SSIM models changes in structural information, while IS gives high scores to images with large semantic content.
| Method | SSIM | IS |
| Ma et al. NIPS’2017 | 0.762 | 3.09 |
| Zhao et al. ArXiv’2017 | 0.620 | 3.03 |
| Sohn et al. NIPS’2015 * | 0.580 | 2.35 |
| Mirza et al. ArXiv’2014 * | 0.590 | 2.45 |
In Table 1 we report these scores for our approach and the two fully supervised methods, evaluated on the DeepFashion dataset. Two additional implementations of a Variational AutoEncoder (VAE) and a Conditional GAN (CGAN) model, reported in previous work, are also included. It is worth pointing out that while all methods are evaluated on the same dataset, the test splits are not the same in each case; the results in this table should therefore be considered only indicative. In any event, the two metrics show that the quality of the synthesis obtained by our unsupervised approach is very similar to that of the most recent supervised approaches, and even outperforms previous VAE and CGAN implementations.
6.2 Qualitative results
We next present and discuss a series of qualitative results that will highlight the main characteristics of the proposed approach, including its ability to generalize to novel poses, to hallucinate image patches not observed in the original image and to render textures with high-frequency details.
In the teaser image (Fig. 1) we observe all these characteristics. First, note the ability of our GAN model to generalize to desired poses very different from that in the original image: given a frontal image of the upper body of a woman, we show generated images in which her pose is rotated by 180°. In the right-most image of this example, the network is also able to hallucinate the two legs, not seen in the original image (although it does not render the skirt). For this particular example, the network convincingly renders the high-frequency details of the blouse. This is a very important characteristic of our model, and a direct consequence of the loss function we have designed, in particular of the term in Eq. (6) that aims at retaining the texture details of the original image in the generated one. This is in contrast to most of the renders generated by other GAN models [16, 35, 39], which typically wash out texture details.
Figure 3 presents another series of results obtained with our model. In this case, each synthetically generated image is accompanied by the ground truth. Note again the number of complex examples that are successfully addressed. Several cases show the hallucination of frontal poses from original poses facing back (or vice versa). Also worth mentioning are the examples where the original image is in a side position with only one arm observed, and the desired pose is either frontal or backwards, so that both arms must be hallucinated. Some of the t-shirts have very high-frequency patterns and textures (fourth row, second column; sixth row) that are convincingly rendered under the new poses.
Failure cases. Tackling such an unconstrained problem in a fully unsupervised manner causes a number of errors. We have roughly split them into four categories, summarized in Figure 4. The first type of error (top-left) is produced when textures in the original image are not correctly mapped onto the generated image; in this case, the partially observed dark trousers are transferred to the lower leg, resembling boots. In the top-right example, the face of the original image is not fully washed out in the generated image. In the bottom-left we show a type of error which we denote a ‘geometric error’, where the pose of the original image is not properly transferred to the target image. The bottom-right image shows an example in which a part of the body in the original image (a hand) is mapped as a texture in the synthesized one.
Ablation study. Each component is crucial for the proper performance of the system: the adversarial losses constrain the system to generate realistic images; the pose losses ensure the generator conditions the image generation on the given pose; and the content and style terms force the generator to preserve the input image texture. Removing any of these elements degrades our network. For instance, Figure 5 shows the results when replacing the identity loss by the standard L1 loss used by most state-of-the-art GAN works. As can be observed in the last column of the figure, although the low-frequency texture of the original image is preserved, the person's identity is lost and all results tend to converge to a mean brunette woman with a white t-shirt and blue jeans.
Images with background. To further test the limits of our model, Figure 6 presents an evaluation of its performance when the input image contains background. Surprisingly, although the model has no loss on background consistency and was not trained on images with background, the results are still very consistent. The person is rendered quite correctly, while the background is over-smoothed. Becoming robust to background would require more complex datasets and specialized loss functions.
We have presented a novel approach for generating images of a person under arbitrary poses using a GAN model that can be trained in a fully unsupervised manner. This advances the state of the art, which so far had only addressed the problem using supervision. To tackle this challenge, we have proposed a new framework that circumvents the need for paired training data by optimizing a loss function that depends only on the input image and the rendered one, and aims at retaining the style and semantic content of the original image. Quantitative and qualitative evaluation on the DeepFashion dataset shows very promising results, even for new body poses that differ greatly from the input one and require hallucinating large portions of the image. In the future, we plan to apply our approach to other datasets (not only of humans) in the wild, for which supervision is not possible. An important issue that will need to be addressed in this case is the influence of complex backgrounds and how they interfere in the generation process. Finally, in order to improve the failure cases we have discussed, we will explore novel object- and geometry-aware loss functions.
Acknowledgments: This work is supported in part by a Google Faculty Research Award, by the Spanish Ministry of Science and Innovation under projects HuMoUR TIN2017-90086-R, ColRobTransp DPI2016-78957 and María de Maeztu Seal of Excellence MDM-2016-0656; and by the EU project AEROARMS ICT-2014-1-644271. We also thank Nvidia for hardware donation under the GPU Grant Program.
-  T. Chen, Z. Zhu, A. Shamir, S.-M. Hu, and D. Cohen-Or. 3-sweep: Extracting editable objects from a single photo. TOG, 32(6), 2013.
-  L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016.
-  R. Huang, S. Zhang, T. Li, and R. He. Beyond face rotation: Global and local perception GAN for photorealistic and identity preserving frontal view synthesis. arXiv preprint arXiv:1704.04086, 2017.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
-  J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
-  T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. arXiv:1710.10196, 2017.
-  N. Kholgade, T. Simon, A. Efros, and Y. Sheikh. 3D object manipulation in a single photograph using stock 3D models. TOG, 33(4), 2014.
-  D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
-  C. Lassner, G. Pons-Moll, and P. Gehler. A generative model of people in clothing. In ICCV, 2017.
-  M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. arXiv preprint arXiv:1703.00848, 2017.
-  M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In NIPS, 2016.
-  Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In CVPR, 2016.
-  L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool. Pose guided person image generation. arXiv preprint arXiv:1705.09368, 2017.
-  X. Mao, Q. Li, H. Xie, R. Y. Lau, and Z. Wang. Multi-class generative adversarial networks with the L2 loss function. arXiv preprint arXiv:1611.04076, 2016.
-  M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016.
-  M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
-  A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier gans. arXiv preprint arXiv:1610.09585, 2016.
-  D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
-  S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.
-  S. E. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele, and H. Lee. Learning what and where to draw. In NIPS, 2016.
-  D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In NIPS, 2016.
-  A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb. Learning from simulated and unsupervised images through adversarial training. arXiv preprint arXiv:1612.07828, 2016.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditional generative models. In NIPS, 2015.
-  A. van den Oord, N. Kalchbrenner, L. Espeholt, K. Kavukcuoglu, O. Vinyals, and A. Graves. Conditional image generation with pixelcnn decoders. In NIPS, 2016.
-  A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
-  X. Wang and A. Gupta. Generative image modeling using style and structure adversarial networks. arXiv preprint arXiv:1603.05631, 2016.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. TIP, 13(4):600–612, 2004.
-  S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In CVPR, 2016.
-  B. Zhao, X. Wu, Z.-Q. Cheng, H. Liu, and J. Feng. Multi-view image generation from a single-view. arXiv preprint arXiv:1704.04886, 2017.
-  L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. In ICCV, 2015.
-  Y. Zheng, X. Chen, M.-M. Cheng, K. Zhou, S.-M. Hu, and N. J. Mitra. Interactive images: cuboid proxies for smart image manipulation. TOG, 31(4):1–10, 2012.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
-  S. Zhu, S. Fidler, R. Urtasun, D. Lin, and C. C. Loy. Be your own prada: Fashion synthesis with structural coherence. In ICCV, 2017.