In this paper we deal with the problem of generating images in which the foreground object changes because of a viewpoint variation or a deformable motion, such as that of an articulated human body. Specifically, inspired by Ma et al., our goal is to generate a human image conditioned on two different variables: (1) the appearance of a specific person in a given image and (2) the pose of the same person in another image. The task our network needs to solve is to preserve the appearance details (e.g., the texture) contained in the first variable while deforming the structure of the foreground object according to the second variable. We focus on the human body, an articulated “object” important for many applications (e.g., computer-graphics based manipulations or re-identification dataset synthesis). However, our approach can be used with other deformable objects such as human faces or animal bodies, provided that a significant number of keypoints can be automatically extracted from the object of interest in order to represent its pose.
Pose-based generation of human images is motivated by the interest in synthesizing videos with non-trivial human movements or in generating rare poses for human pose estimation or re-identification training datasets. However, most recently proposed deep-network-based generative approaches, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs), do not explicitly deal with the problem of articulated-object generation. Common conditional methods (e.g., conditional GANs or conditional VAEs) can synthesize images whose appearance depends on some conditioning variable (e.g., a label or another image). For instance, Isola et al.
recently proposed an “image-to-image translation” framework, in which an input image is transformed into a second image represented in another “channel” (see Fig. 1a). However, most of these methods have problems when dealing with large spatial deformations between the conditioning and the target image. For instance, the U-Net architecture used by Isola et al. is based on skip connections which help preserve local information between the input and the output image. Specifically, skip connections are used to copy and then concatenate the feature maps of the generator “encoder” (where information is downsampled using convolutional layers) to the generator “decoder” (containing the upconvolutional layers). However, this architecture assumes that the input and the output are roughly aligned with each other and represent the same underlying structure. This assumption is violated when the foreground object undergoes large spatial deformations with respect to the conditioning image (see Fig. 1b): skip connections cannot reliably cope with misalignments between the two poses.
Ma et al. propose to alleviate this problem using a two-stage generation approach. In the first stage, a U-Net generator is trained using a masked loss in order to produce an intermediate image conditioned on the target pose. In the second stage, a second U-Net based generator is trained, also using an adversarial loss, in order to generate an appearance difference map which brings the intermediate image closer to the appearance of the conditioning image. In contrast, the GAN-based method we propose in this paper is trained end-to-end by explicitly taking into account pose-related spatial deformations. More specifically, we propose deformable skip connections
which “move” local information according to the structural deformations represented in the conditioning variables. These layers are used in our U-Net based generator. In order to move information according to a specific spatial deformation, we decompose the overall deformation by means of a set of local affine transformations involving subsets of joints, then we deform the convolutional feature maps of the encoder according to these transformations and we use common skip connections to transfer the transformed tensors to the decoder’s fusion layers. Moreover, we propose to use a nearest-neighbour loss as a replacement for the pixel-to-pixel losses (such as, e.g., the L1 or L2 loss) commonly used in conditional generative approaches. This loss proved helpful in generating local information (e.g., texture) similar to that of the target image, without penalizing small spatial misalignments.
We test our approach using the benchmarks and the evaluation protocols proposed by Ma et al., obtaining better qualitative and quantitative results on all the datasets. Although tested on the specific human-body problem, our approach makes few human-related assumptions and can be easily extended to other domains involving the generation of highly deformable objects. Our code and our trained models are publicly available at https://github.com/AliaksandrSiarohin/pose-gan.
2 Related work
Most common deep-network-based approaches for visual content generation can be categorized as either Variational Autoencoders (VAEs) or Generative Adversarial Networks (GANs). VAEs are based on probabilistic graphical models and are trained by maximizing a lower bound of the corresponding data likelihood. GANs are based on two networks, a generator and a discriminator, which are trained simultaneously such that the generator tries to “fool” the discriminator and the discriminator learns how to distinguish between real and fake images.
Isola et al. 
propose a conditional GAN framework for image-to-image translation problems, where a given scene representation is “translated” into another representation. The main assumption behind this framework is that there exists a spatial correspondence between the low-level information of the conditioning and the output image. VAEs and GANs have been combined to generate realistic-looking multi-view clothes images from a single-view input image. The target view is fed to the model via a viewpoint label such as front or left side, and a two-stage approach is adopted: pose integration and image refinement. Adopting a similar pipeline, Lassner et al. generate images of people with different clothes in a given pose. However, this approach is based on costly annotations (fine-grained segmentation with 18 clothing labels) and a complex 3D pose representation.
Ma et al. propose a more general approach which makes it possible to synthesize person images in any arbitrary pose. Similarly to our proposal, the input of their model is a conditioning image of the person and a target new pose defined by 18 joint locations. The target pose is described by means of binary maps in which small circles represent the joint locations. Similarly to [8, 20], the generation process is split into two different stages: pose generation and texture refinement. In contrast, in this paper we show that a single-stage approach, trained end-to-end, can be used for the same task, obtaining better qualitative results.
Jaderberg et al. propose a spatial transformer layer, which learns how to transform a feature map into a “canonical” view, conditioned on the feature map itself. However, only a global, parametric transformation can be learned (e.g., a global affine transformation), while in this paper we deal with non-parametric deformations of articulated objects which cannot be described by means of a unique global affine transformation.
Generally speaking, U-Net based architectures are frequently adopted for pose-based person-image generation tasks [8, 12, 18, 20]. However, common U-Net skip connections are not well-designed for large spatial deformations because local information in the input and in the output images is not aligned (Fig. 1). In contrast, we propose deformable skip connections to deal with this misalignment problem and “shuttle” local information from the encoder to the decoder driven by the specific pose difference. In this way, differently from previous work, we are able to simultaneously generate the overall pose and the texture-level refinement.
Similar losses have been explored in prior work. However, the perceptual loss, based on an element-by-element difference computed in the feature maps of an external classifier, does not take spatial misalignments into account. On the other hand, the patch-based similarity adopted in prior work to compute dense feature correspondences is computationally very expensive and is not used as a loss.
3 The network architectures
In this section we describe the architectures of our generator and discriminator and the proposed deformable skip connections. We first introduce some notation. At testing time, our task, similarly to Ma et al., consists in generating an image showing a person whose appearance (e.g., clothes, etc.) is similar to that of an input conditioning image, but whose body pose is similar to a target pose, described by a sequence of 2D points locating the human-body joints in a different image of the same person. In order to allow a fair comparison with Ma et al., we use the same number of joints (18) and we extract them using the same Human Pose Estimator (HPE) used in their work. Note that this HPE is used both at testing and at training time, meaning that we do not use manually-annotated poses; the extracted joint locations may therefore contain localization errors, missing detections or false positives.
At training time we use a dataset containing pairs of conditioning-target images of the same person in different poses. For each pair, a conditioning and a target pose are extracted from the corresponding images and represented using two tensors, each composed of 18 heat maps, where each heat map H_j is a 2D matrix of the same dimension as the original image. If p_j is the j-th joint location, then:

H_j(p) = exp(-||p - p_j|| / sigma),

with sigma (in pixels) chosen with cross-validation. Using this blurring instead of a binary map is useful to provide widespread information about the location p_j.
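The heat-map construction above can be sketched as follows; the exponential-decay form matches the description, but the function name, default sigma and the handling of missing joints are illustrative assumptions:

```python
import numpy as np

def pose_heat_maps(joints, height, width, sigma=6.0):
    """Build one blurred heat map per joint: each map decays
    exponentially with the distance from the joint location.

    joints: list of (row, col) joint locations, or None for a missed
    detection. Returns an array of shape (len(joints), height, width).
    """
    rows = np.arange(height).reshape(-1, 1)
    cols = np.arange(width).reshape(1, -1)
    maps = np.zeros((len(joints), height, width), dtype=np.float32)
    for j, loc in enumerate(joints):
        if loc is None:          # missed detection: leave the map at zero
            continue
        r, c = loc
        dist = np.sqrt((rows - r) ** 2 + (cols - c) ** 2)
        maps[j] = np.exp(-dist / sigma)
    return maps
```

Stacking the 18 maps along the first axis yields the pose tensor fed to the network.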
The generator is fed with: (1) a noise vector, drawn from a noise distribution and implicitly provided using dropout, and (2) the triplet composed of the conditioning image and the two pose tensors. Note that, at testing time, the target pose is known, thus its tensor can be computed. Note also that the joint locations in the conditioning image and in its pose tensor are spatially aligned (by construction), while those in the target pose tensor are not. Hence, differently from [12, 4], the target pose tensor is not concatenated with the other input tensors. Indeed, the convolutional-layer units in the encoder part of the generator have a small receptive field which cannot capture large spatial displacements. For instance, a large movement of a body limb in the target image with respect to the conditioning image is represented at different locations in the two pose tensors, which may be too far apart from each other to be captured by the receptive field of the convolutional units. This is emphasized in the first layers of the encoder, which represent low-level information. Therefore, the convolutional filters cannot simultaneously process texture-level information (from the conditioning image) and the corresponding pose information (from the target pose tensor).
For this reason we process the conditioning image and its pose independently from the target pose in the encoder. Specifically, the conditioning image and its pose tensor are concatenated and processed using one convolutional stream of the encoder, while the target pose tensor is processed by a second convolutional stream, without sharing the weights (Fig. 2). The feature maps of the first stream are then fused with the layer-specific feature maps of the second stream in the decoder, after a pose-driven spatial deformation performed by our deformable skip connections (see Sec. 3.1).
Our discriminator network is based on the conditional, fully-convolutional discriminator proposed by Isola et al. In our case, the discriminator takes as input 4 tensors: the conditioning image, a second image which is either the real target image or the generated one, and the two pose tensors (see Fig. 2). These four tensors are concatenated and then given as input to the discriminator, whose output is a scalar value indicating its confidence that the second image is real.
3.1 Deformable skip connections
As mentioned above and similarly to Isola et al., the goal of the deformable skip connections is to “shuttle” local information from the encoder to the decoder part of the generator. The local information to be transferred is, generally speaking, contained in a tensor F, which represents the feature map activations of a given convolutional layer of the encoder. However, differently from standard skip connections, we need to “pick” the information to shuttle taking into account the object-shape deformation described by the difference between the conditioning and the target pose. To do so, we decompose the global deformation into a set of local affine transformations, defined using subsets of joints in the two poses. Using these affine transformations, and local masks constructed using the specific joints, we deform the content of F; then we use common skip connections to copy the transformed tensor and concatenate it with the corresponding tensor in the destination layer (see Fig. 2). Below we describe the whole pipeline in more detail.
Decomposing an articulated body into a set of rigid sub-parts. The human body is an articulated “object” which can be roughly decomposed into a set of rigid sub-parts. We chose 10 sub-parts: the head, the torso, the left/right upper/lower arm and the left/right upper/lower leg. Each of them corresponds to a subset of the 18 joints defined by the HPE we use for extracting poses. Using these joint locations we can define rectangular regions which enclose the specific body part. In the case of the head, the region is simply chosen to be the axis-aligned rectangle enclosing all the corresponding joints. For the torso, which is the largest area, we use a region which includes the whole image, so as to shuttle texture information for the background pixels. Concerning the body limbs, each limb corresponds to only 2 joints. In this case we define the region to be a rotated rectangle whose major axis corresponds to the line between these two joints, while the minor axis is orthogonal to the major one and has a length equal to one third of the mean of the torso’s diagonals (this value is used for all the limbs). Fig. 3 shows an example. Let R_h be the set of the 4 rectangle corners defining the h-th body region in the conditioning image (h = 1, ..., 10). Note that these 4 corner points are not joint locations. Using R_h we can compute a binary mask M_h which is zero everywhere except at the points lying inside R_h. Moreover, let the corresponding rectangular region be similarly defined in the target image. Matching the corner points of the two regions, we can compute the parameters of a body-part specific affine transformation (see below). In either image, some of the body regions can be occluded, truncated by the image borders or simply miss-detected by the HPE. In this case we leave the corresponding region empty and the h-th affine transform is not computed (see below).
Note that our body-region definition is the only human-specific part of the proposed approach. However, similar regions can be easily defined using the joints of other articulated objects such as those representing an animal body or a human face.
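As an illustration of the limb-region construction described above, the following sketch computes the four corners of the rotated rectangle for a 2-joint limb; the function name, the corner ordering and the (x, y) coordinate convention are assumptions:

```python
import numpy as np

def limb_rectangle(p1, p2, minor_len):
    """Rotated rectangle enclosing a limb: the major axis is the segment
    between the two joints, the minor axis is orthogonal to it with the
    given length (one third of the mean torso diagonal in the paper).

    p1, p2: the limb's two joint locations (x, y).
    Returns the 4 corners as a (4, 2) array.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    axis = p2 - p1
    norm = np.linalg.norm(axis)
    if norm == 0:
        raise ValueError("degenerate limb: coincident joints")
    # unit vector orthogonal to the major axis
    perp = np.array([-axis[1], axis[0]]) / norm
    half = 0.5 * minor_len * perp
    return np.stack([p1 + half, p2 + half, p2 - half, p1 - half])
```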
Computing a set of affine transformations. During the forward pass (i.e., both at training and at testing time) we decompose the global deformation of the conditioning pose with respect to the target pose by means of a set of local affine transformations, one per body region. Specifically, given the corners of the h-th region in the conditioning image and the corresponding corners in the target image, we compute the 6 parameters k_h of an affine transformation f_h by minimizing the Least Squares Error over the 4 corner correspondences:

k_h = argmin_k Σ_j ||q_j - f(p_j; k)||²,

where p_j and q_j are corresponding corner points in the conditioning and target image, respectively.
The parameter vector k_h is computed using the original image resolution of the two images and then adapted to the specific resolution of each involved feature map. Similarly, we compute scaled versions of each mask M_h. If either of the two corresponding regions is empty (i.e., when any of the specific body-region joints has not been detected by the HPE, see above), then we simply set the corresponding mask to a matrix with all elements equal to 0, and k_h is not computed.
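The per-region least-squares estimation can be sketched as follows, fitting a 2x3 affine matrix to the corner correspondences of one body region; the function name and the homogeneous-coordinate layout are illustrative choices:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of the 6 affine parameters mapping the corners
    `src` (region in the conditioning image) onto `dst` (corresponding
    region in the target image).

    src, dst: arrays of shape (N, 2) with N >= 3 corresponding points.
    Returns a 2x3 matrix A with [x', y']^T = A @ [x, y, 1]^T.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                   # (N, 3) design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solves X @ A = dst
    return A.T                                   # (2, 3)
```

With 4 corner correspondences the system is over-determined, so `lstsq` returns the least-squares solution as required.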
Note that the affine transformations and their lower-resolution variants need to be computed only once per pair of real images and, during training, this can be done before starting to train the networks (although in our current implementation it is done on the fly).
Combining affine transformations to approximate the object deformation. Once the transformation parameters have been computed for the specific spatial resolution of a given tensor F, the latter can be transformed in order to approximate the global pose-dependent deformation. Specifically, for each body region h we first compute:

F_h = f_h(M_h ∘ F),

where ∘ is a point-wise multiplication and f_h is used to “move” all the channel values of the masked tensor according to the h-th affine transformation. Finally, we merge the resulting tensors using:

d(F)(x, y, c) = max_h F_h(x, y, c),   (4)

where c is a specific channel. The rationale behind Eq. 4 is that, when two body regions partially overlap each other, the final deformed tensor is obtained by picking the maximum-activation values. Preliminary experiments performed using average pooling led to slightly worse results.
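A minimal sketch of the mask-warp-merge step above; nearest-neighbour sampling for the affine warp is an implementation convenience, not necessarily the paper's exact interpolation scheme, and all names are assumptions:

```python
import numpy as np

def deform_and_merge(F, masks, affines):
    """Deformable skip connection body: for each region h, mask the
    encoder tensor F with M_h, warp it with the region's affine
    transform, then max-merge the results (Eq. 4).

    F: feature tensor (H, W, C); masks: (R, H, W) binary masks;
    affines: list of R 2x3 matrices mapping source (x, y) coordinates
    to target coordinates, or None for undetected regions.
    """
    H, W, C = F.shape
    out = np.zeros((len(masks), H, W, C), dtype=F.dtype)
    ys, xs = np.mgrid[0:H, 0:W]
    for h, A in enumerate(affines):
        if A is None:
            continue                     # missing region: all-zero tensor
        masked = F * masks[h][..., None]
        # inverse-map each target pixel back to its source location
        Ainv = np.linalg.inv(np.vstack([A, [0, 0, 1]]))[:2]
        src = Ainv @ np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
        sx = np.rint(src[0]).astype(int).reshape(H, W)
        sy = np.rint(src[1]).astype(int).reshape(H, W)
        valid = (sx >= 0) & (sx < W) & (sy >= 0) & (sy < H)
        out[h][valid] = masked[sy[valid], sx[valid]]
    return out.max(axis=0)               # Eq. 4: pointwise maximum
```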
4 Training

The generator and the discriminator are trained using a combination of a standard conditional adversarial loss with our proposed nearest-neighbour loss L_NN. Specifically, in our case the adversarial term is the standard conditional GAN objective, in which the discriminator is conditioned on the four input tensors described in Sec. 3.
Previous works on conditional GANs combine the adversarial loss with either an L1- or an L2-based loss [4, 12], which is used only for the generator. For instance, the L1 distance computes a pixel-to-pixel difference between the generated and the real image, which, in our case, is:

L_L1(x̂, x_b) = ||x̂ - x_b||_1,

where x̂ is the generated image and x_b is the target image.
However, a well-known problem with the L1 and L2 losses is the production of blurred images. We hypothesize that this is also due to the inability of these losses to tolerate small spatial misalignments between the generated image x̂ and the target image x_b. For instance, suppose that x̂, produced by the generator, is visually plausible and semantically similar to x_b, but that the texture details on the clothes of the person in the two compared images are not pixel-to-pixel aligned. Both the L1 and the L2 loss will penalize this inexact pixel-level alignment, although it is not semantically important from a human point of view. Note that these misalignments do not depend on the global deformation between the conditioning and the target image, because x̂ is supposed to have the same pose as x_b. In order to alleviate this problem, we propose to use a nearest-neighbour loss based on the following definition of image difference:

d_NN(x̂, x_b) = Σ_p min_{q ∈ N(p)} ||g(x̂, p) - g(x_b, q)||_1,   (7)
where N(p) is a local neighbourhood of point p (we use neighbourhoods of different sizes for the DeepFashion and the Market-1501 datasets, see Sec. 6) and g(x, p) is a vectorial representation of a patch around point p in image x, obtained using convolutional filters (see below for more details). Note that d_NN is not a metric because it is not symmetric. In order to efficiently compute Eq. 7, we compare patches in the two images using their representations in a convolutional map of an externally trained network. In more detail, we use VGG-19, trained on ImageNet, and specifically its second convolutional layer (called conv1_2). The first two convolutional maps of VGG-19 (conv1_1 and conv1_2) are both obtained using a convolutional stride equal to 1. For this reason, the conv1_2 feature map of an image has the same resolution as the original image. Exploiting this fact, we compute the nearest-neighbour field directly on conv1_2, without losing spatial precision. Hence, we define g(x, p) to be the vector of all the channel values of the conv1_2 feature map of x at spatial position p. A conv1_2 unit has a receptive field of 5x5 pixels in the input image, so g effectively represents a patch of dimension 5x5 using a cascade of two convolutional filters. Using this definition of g, Eq. 7 becomes:

L_NN(x̂, x_b) = Σ_p min_{q ∈ N(p)} ||g(x̂, p) - g(x_b, q)||_1.   (8)
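A naive (loop-based) sketch of the nearest-neighbour image difference of Eq. 7-8, operating on feature maps that already have the same spatial resolution as the image; the function and parameter names are assumptions:

```python
import numpy as np

def nn_loss(feat_gen, feat_tgt, n=2):
    """For every position in the generated feature map, find the
    best-matching feature vector in a (2n+1)x(2n+1) neighbourhood of
    the target feature map and sum the L1 distances.

    feat_gen, feat_tgt: arrays of shape (H, W, C), e.g. conv1_2-like
    activations at image resolution.
    """
    H, W, _ = feat_gen.shape
    total = 0.0
    for y in range(H):
        for x in range(W):
            best = np.inf
            for dy in range(-n, n + 1):
                for dx in range(-n, n + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < H and 0 <= xx < W:
                        d = np.abs(feat_gen[y, x] - feat_tgt[yy, xx]).sum()
                        best = min(best, d)
            total += best       # min over the neighbourhood, summed over p
    return total
```

This sequential version is only for clarity; Sec. A of the Appendix describes how the same computation can be parallelized.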
5 Implementation details
In the following, we denote the basic blocks of our networks as: (1) a convolution-ReLU layer with a given number of filters and stride, (2) the same as (1) with instance normalization before ReLU, and (3) the same as (2) with the addition of dropout. Differently from Isola et al., we use dropout only at training time. The encoder part of the generator is given by two streams (Fig. 2), each of which is composed of the following sequence of such layers:
The decoder part of the generator is given by:
In the last layer, ReLU is replaced with tanh.
The discriminator architecture is:
where the ReLU of the last layer is replaced with a sigmoid.
The generator for the DeepFashion dataset has one additional convolution block both in the encoder and in the decoder, because the images in this dataset have a higher resolution.
6 Experiments

The person re-identification Market-1501 dataset contains 32,668 images of 1,501 persons captured from 6 different surveillance cameras. This dataset is challenging because of the low-resolution images (128x64) and the high diversity in pose, illumination, background and viewpoint. To train our model, we need pairs of images of the same person in two different poses. As this dataset is relatively noisy, we first automatically remove those images in which no human body is detected using the HPE, leading to 263,631 training pairs. For testing, following Ma et al., we randomly select 12,000 pairs. No person is in common between the training and the test split.
The DeepFashion dataset (In-shop Clothes Retrieval Benchmark) is composed of 52,712 clothes images, matched with each other in order to form 200,000 pairs of identical clothes with two different poses and/or scales of the persons wearing these clothes. The images have a resolution of 256x256 pixels. Following the training/test split adopted by Ma et al., we create pairs of images, each pair depicting the same person with identical clothes but in different poses. After removing those images in which the HPE does not detect any human body, we finally collect 89,262 pairs for training and 12,000 pairs for testing.
Evaluation in the context of generation tasks is a problem in itself. In our experiments we adopt a redundancy of metrics and a user study based on human judgments. Following Ma et al., we use Structural Similarity (SSIM), Inception Score (IS) and their corresponding masked versions mask-SSIM and mask-IS. The latter are obtained by masking out the image background; the rationale behind this is that, since no background information of the target image is input to the generator, the network cannot guess what the target background looks like. Note that the evaluation masks we use to compute both the mask-IS and the mask-SSIM values do not correspond to the masks we use for training. The evaluation masks have been built following the procedure proposed by Ma et al. and adopted in that work for both training and evaluation. Consequently, the mask-based metrics may be biased in favor of their method. Moreover, we observe that the IS metric
, based on the entropy computed over the classification neurons of an external classifier, is not very suitable for domains with only one object class. For this reason we propose an additional metric that we call Detection Score (DS). Similarly to the classification-based metric (FCN-score) used by Isola et al., DS is based on the detection outcome of the state-of-the-art object detector SSD, trained on Pascal VOC 07 (and not fine-tuned on our datasets). At testing time, we use the person-class detection scores of SSD computed on each generated image. The per-image score corresponds to the maximum-score box output by SSD on that image, and the final DS value is computed by averaging the scores over all the generated images. In other words, DS measures the confidence of a person detector in the presence of a person in the image. Given the high accuracy of SSD on the challenging Pascal VOC 07 dataset, we believe it can be used as a good measure of how realistic (person-like) a generated image is.
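The DS computation reduces to averaging, over the generated images, the maximum person-class confidence returned by the detector. A sketch, where `person_scores` stands in for a real SSD detection call (its name and signature are assumptions, not an actual SSD API):

```python
def detection_score(images, person_scores):
    """Detection Score (DS): average over all generated images of the
    person-class confidence of the maximum-score detection box.

    images: iterable of generated images.
    person_scores: hypothetical callable returning the list of
    person-class confidences for all boxes the detector outputs on one
    image (empty list if no person is detected).
    """
    per_image = []
    for img in images:
        scores = person_scores(img)
        # maximum-score box; no detection counts as confidence 0
        per_image.append(max(scores) if scores else 0.0)
    return sum(per_image) / len(per_image)
```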
Finally, in our tables we also include the value of each metric computed using the real images of the test set. Since these values are computed on real data, they can be considered as a sort of upper bound on the results a generator can obtain. However, these values are not actual upper bounds in the strict sense: for instance, the DS metric on the real datasets is not 1 because of SSD failures.
6.1 Comparison with previous work
In Tab. 1 we compare our method with that of Ma et al. Note that there are no other works to compare with on this task yet. The mask-based metrics are not reported by Ma et al. for the DeepFashion dataset. Concerning the DS metric, we used the publicly available code and network weights released by the authors in order to generate new images according to the common testing protocol, and ran the SSD detector to obtain the DS values.
On the Market-1501 dataset our method achieves the best performance on all but the IS metric. Specifically, our DS values are much higher than those obtained by Ma et al. Conversely, on the DeepFashion dataset, our approach significantly improves the IS value but returns a slightly lower SSIM value.
6.2 User study
In order to further compare our method with the state-of-the-art approach of Ma et al., we implemented a user study following their protocol. For each dataset, we show 55 real and 55 generated images in a random order to 30 users, for one second each. Differently from Ma et al.
, who used Amazon Mechanical Turk (AMT), we used “expert” (voluntary) users: PhD students and Post-docs working in Computer Vision and belonging to two different departments. We believe that expert users, who are familiar with GAN-generated images, can more easily distinguish real from fake images; confusing our users is thus potentially a more difficult task for our GAN. The results in Tab. 2 (where R2G is the fraction of real images rated as generated, and G2R is the fraction of generated images rated as real) confirm the significant quality boost of our images with respect to the images produced by Ma et al. For instance, on the Market-1501 dataset, the human “confusion” is one order of magnitude higher than theirs.
Finally, in Sec. D we show some example images, directly comparing with Ma et al. We also show the results obtained by training different person re-identification systems after augmenting the training set with images generated by our method. These experiments indirectly confirm that the degree of realism and diversity of our images is significant.
Tab. 2 (excerpt), Ma et al.: 11.2 | 5.5 | 9.2 | 14.9
6.3 Ablation study and qualitative analysis
In this section we present an ablation study to clarify the impact of each part of our proposal on the final performance. We first describe the compared methods, obtained by “amputating” important parts of the full pipeline presented in Sec. 3-4. The discriminator architecture is the same for all the methods.
Baseline: We use the standard U-Net architecture without deformable skip connections. The inputs of the generator and the discriminator and the way pose information is represented (see the heat-map tensors defined in Sec. 3) are the same as in the full pipeline. However, the conditioning image and the two pose tensors are concatenated at the generator’s input layer. Hence, the encoder of the generator is composed of only one stream, whose architecture is the same as that of the two streams described in Sec. 5.
DSC: The generator is implemented as described in Sec. 3, introducing our Deformable Skip Connections (DSC). Both in DSC and in Baseline, training is performed using an L1 loss together with the adversarial loss.
PercLoss: This is DSC in which the L1 loss is replaced with the Perceptual loss proposed by Johnson et al. This loss is computed using a layer of VGG-19 chosen to have a receptive field as close as possible to the patches compared in Eq. 8, and computing the element-to-element difference in this layer without a nearest-neighbour search. Full: This is DSC in which the L1 loss is replaced with our proposed nearest-neighbour loss (Eq. 8).
In Tab. 3 we report a quantitative evaluation on the Market-1501 and DeepFashion datasets of the four different versions of our approach. In most cases, there is a progressive improvement from Baseline to DSC to Full. Moreover, Full usually obtains better results than PercLoss. These improvements are particularly evident for the DS metric, which we believe is strong evidence that the generated images are realistic. DS values on the DeepFashion dataset are omitted because they are all close to the same saturated value.
In Fig. 4 and Fig. 5 we show some qualitative results. These figures illustrate the progressive improvement through the four baselines that is quantitatively presented above. In fact, while pose information is usually well generated by all the methods, the texture generated by Baseline often does not correspond to the texture of the conditioning image or is blurred. In some cases, the improvement of Full with respect to Baseline is quite drastic, such as the drawing on the shirt of the girl in the second row of Fig. 5 or the stripes on the clothes of the persons in the third and fourth rows of Fig. 4. Further examples are shown in the Appendix.
7 Conclusions

In this paper we presented a GAN-based approach for generating images of persons, conditioned on appearance and pose. We introduced two novelties: deformable skip connections and a nearest-neighbour loss. The first is used to solve common problems of U-Net based generators when dealing with deformable objects. The second is used to alleviate a different type of misalignment, between the generated image and the ground-truth image.
Our experiments, based on both automatic evaluation metrics and human judgments, show that the proposed method outperforms previous work on this task. Although the proposed method was tested on the specific task of human-body generation, only a few of its assumptions refer to the human body, and we believe that our proposal can be easily extended to address other deformable-object generation tasks.
We want to thank the NVIDIA Corporation for the donation of the GPUs used in this project.
-  Z. Cao, T. Simon, S. Wei, and Y. Sheikh. Realtime multi-person 2D pose estimation using part affinity fields. In CVPR, 2017.
-  M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
-  M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In NIPS, 2015.
-  J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR, 2014.
-  C. Lassner, G. Pons-Moll, and P. V. Gehler. A generative model of people in clothing. In ICCV, 2017.
-  J. Liao, Y. Yao, L. Yuan, G. Hua, and S. B. Kang. Visual attribute transfer through deep image analogy. ACM Trans. Graph., 36(4), 2017.
-  W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In ECCV, 2016.
-  Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In CVPR, 2016.
-  L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool. Pose guided person image generation. In NIPS, 2017.
-  D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In NIPS, 2016.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv:1512.00567, 2015.
-  D. Ulyanov, A. Vedaldi, and V. S. Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv:1607.08022, 2016.
-  J. Walker, K. Marino, A. Gupta, and M. Hebert. The pose knows: Video forecasting by generating pose futures. In ICCV, 2017.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. on Image Processing, 2004.
-  B. Zhao, X. Wu, Z. Cheng, H. Liu, and J. Feng. Multi-view image generation from a single-view. arXiv:1704.04886, 2017.
-  L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. In ICCV, 2015.
-  L. Zheng, Y. Yang, and A. G. Hauptmann. Person re-identification: Past, present and future. arXiv:1610.02984, 2016.
-  Z. Zheng, L. Zheng, and Y. Yang. Unlabeled samples generated by GAN improve the person re-identification baseline in vitro. In ICCV, 2017.
-  Z. Zheng, L. Zheng, and Y. Yang. A discriminatively learned CNN embedding for person reidentification. ACM Trans. on Multimedia Computing, Communications, and Applications (TOMCCAP), 14(1):13:1–13:20, 2018.
In this Appendix we report some additional implementation details and show further quantitative and qualitative results. Specifically, in Sec. A we explain how Eq. 8 can be efficiently implemented using GPU-based parallel computing, while in Sec. B we show how the human-body symmetry can be exploited in case of missed limb detections. In Sec. C we train state-of-the-art Person Re-IDentification (Re-ID) systems using a combination of real and generated data, which, on the one hand, shows how our images can effectively boost the performance of discriminative methods and, on the other hand, indirectly shows that our generated images are realistic and diverse. In Sec. D we show a direct (qualitative) comparison of our method with the approach of Ma et al., and in Sec. E we show other images generated by our method, including some failure cases. Note that some of the images in the DeepFashion dataset have been manually cropped (after the automatic generation) to improve the overall visualization quality.
Appendix A Nearest-neighbour loss implementation
Our proposed nearest-neighbour loss is based on the definition of $\mathcal{L}_{NN}(\hat{x}, x)$ given in Eq. 8. Let $\Phi(\cdot)$ denote the feature extractor used in that equation, and let $F = \Phi(x)$ and $\hat{F} = \Phi(\hat{x})$ be the corresponding $C \times H \times W$ feature tensors. For each point $p$ in $F$, the "most similar" (in the $\Phi$-based feature space) point $q$ in $\hat{F}$ needs to be searched for in a neighborhood $\mathcal{N}(p)$ of $p$. This operation may be quite time consuming if implemented using sequential computing (i.e., using a "for-loop"). We show here how this computation can be sped up by exploiting GPU-based parallel computing, in which different tensors are processed simultaneously.

Given $\hat{F}$, we compute $(2d+1)^2$ shifted versions of it: $\hat{F}^{(i,j)}$, where $(i,j)$ is a translation offset ranging in a relative neighborhood ($-d \leq i, j \leq d$, with $d$ the radius of $\mathcal{N}$) and $\hat{F}^{(i,j)}$ is filled with a large constant value (conceptually $+\infty$) in the borders, so that out-of-image positions are never selected as best matches. Using these translated versions of $\hat{F}$, we compute corresponding difference tensors $D^{(i,j)}$, where:

$$D^{(i,j)} = \big| F - \hat{F}^{(i,j)} \big|$$

and the difference is computed element-wise. $D^{(i,j)}$ contains the channel-by-channel absolute difference between $F$ and $\hat{F}^{(i,j)}$. Then, for each $D^{(i,j)}$, we sum all the channel-based differences, obtaining:

$$S^{(i,j)}(p) = \sum_{c=1}^{C} D^{(i,j)}(c, p),$$

where $c$ ranges over all the channels and the sum is performed pointwise. $S^{(i,j)}$ is an $H \times W$ matrix of scalar values, each value representing the $L_1$ norm of the difference between a point in $F$ and a correspondingly shifted point in $\hat{F}$:

$$S^{(i,j)}(p) = \big\| F(\cdot, p) - \hat{F}(\cdot, p + (i,j)) \big\|_1 .$$

For each point $p$, we can now compute its best match in a local neighbourhood of $p$ simply using:

$$M(p) = \min_{-d \leq i,j \leq d} S^{(i,j)}(p).$$

Finally, Eq. 8 becomes:

$$\mathcal{L}_{NN}(\hat{x}, x) = \sum_{p} M(p).$$
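The parallel procedure above can be sketched in a few lines of PyTorch. This is a hedged illustration, not the released implementation: the function and variable names (`nearest_neighbour_loss`, `feat_gen`, `feat_target`, the radius `d`) are our own, and the conceptual $+\infty$ border padding is replaced by a large constant.

```python
import torch
import torch.nn.functional as F


def nearest_neighbour_loss(feat_gen, feat_target, d=2):
    """L1-based nearest-neighbour loss between two feature tensors.

    feat_gen, feat_target: (B, C, H, W) feature maps (e.g. VGG activations).
    d: search radius; the neighbourhood is (2d+1) x (2d+1).
    """
    big = 1e9  # stands in for +inf at the borders
    # Pad the generated features so that every shift stays in-bounds;
    # padded positions hold a huge value and therefore never win the min.
    padded = F.pad(feat_gen, (d, d, d, d), mode="constant", value=big)
    H, W = feat_target.shape[2:]
    best = None
    for i in range(2 * d + 1):
        for j in range(2 * d + 1):
            shifted = padded[:, :, i:i + H, j:j + W]         # shifted version of feat_gen
            diff = (feat_target - shifted).abs().sum(dim=1)  # channel-summed L1, (B, H, W)
            best = diff if best is None else torch.minimum(best, diff)
    return best.mean()  # or .sum(), depending on the normalisation used
```

Each of the $(2d+1)^2$ shifted comparisons is a single tensor operation, so the per-point search runs entirely on the GPU rather than in a Python loop over spatial positions.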
Appendix B Exploiting the human-body symmetry
As mentioned in Sec. 3.1, we decompose the human body into 10 rigid sub-parts: the head, the torso and 8 limbs (left/right upper/lower arm, etc.). When one of the joints corresponding to one of these body parts has not been detected by the Human Pose Estimator (HPE), the corresponding region and affine transformation are not computed and the region mask is filled with 0s. This can happen either because that region is not visible in the input image or because of false detections of the HPE.
However, when the missing region involves a limb (e.g., the right-upper arm) whose symmetric body part has been detected (e.g., the left-upper arm), we can "copy" information from the "twin" part. In more detail, suppose for instance that the right-upper-arm region of the conditioning image is empty because of one of the above reasons, while the corresponding region in the target image and the left-upper-arm region of the conditioning image are both non-empty. We simply copy the (non-empty) left-upper-arm region of the conditioning image into its empty right-upper-arm region, and then compute the corresponding affine transformation as usual, using the (now no longer empty) right-upper-arm region of the conditioning image together with the corresponding region in the target image.
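The symmetry-based fallback amounts to a simple lookup over the 8 limb parts. The sketch below is a hedged illustration: the part names and the `regions` dictionary (mapping each part to its region, or `None` when undetected) are our own conventions, not the notation of the main paper.

```python
# Map each limb to its symmetric ("twin") limb; the head and torso
# have no symmetric counterpart and are therefore not listed.
SYMMETRIC = {
    "right_upper_arm": "left_upper_arm",
    "left_upper_arm": "right_upper_arm",
    "right_lower_arm": "left_lower_arm",
    "left_lower_arm": "right_lower_arm",
    "right_upper_leg": "left_upper_leg",
    "left_upper_leg": "right_upper_leg",
    "right_lower_leg": "left_lower_leg",
    "left_lower_leg": "right_lower_leg",
}


def fill_missing_regions(regions):
    """Replace each missing limb region (None) with its symmetric
    counterpart, whenever that counterpart was detected."""
    filled = dict(regions)
    for part, twin in SYMMETRIC.items():
        if filled.get(part) is None and regions.get(twin) is not None:
            filled[part] = regions[twin]
    return filled
```

After this step, the affine transformations are computed as usual on the filled regions.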
Appendix C Improving person Re-ID via data augmentation
The goal of this section is to show that the synthetic images generated with our proposed approach can be used to train discriminative methods. Specifically, we use Re-ID approaches, whose task is to recognize the same person across different poses and viewpoints. The typical application of a Re-ID system is a video-surveillance scenario in which images of the same person, grabbed by cameras mounted in different locations, need to be matched to each other. Due to the low resolution of the cameras, person re-identification is usually based on the colours and the texture of the clothes. This makes our method particularly suited to automatically populating a Re-ID training dataset by generating images of a given person with identical clothes but in different viewpoints/poses.
In our experiments we use the Re-ID methods taken from [22, 24] and we refer the reader to those papers for details about the involved approaches. We employ the Market-1501 dataset, which is designed for benchmarking Re-ID methods. For each image of the Market-1501 training dataset (T), we randomly select 10 target poses, generating 10 corresponding images using our approach. Note that: (1) each generated image is labeled with the identity of the conditioning image; (2) the target pose can be extracted from an individual different from the person depicted in the conditioning image (this is different from the other experiments shown here and in the main paper). Adding the generated images to T, we obtain an augmented training set T+. In Tab. 4 we report the results obtained using either T (standard procedure) or T+ for training different Re-ID systems. The strong performance boost, orthogonal to the different Re-ID methods, shows that our generative approach can be effectively used for synthesizing training samples. It also indirectly shows that the generated images are sufficiently realistic and different from the real images contained in T.
| Model | Rank 1 (standard training set) | mAP (standard training set) | Rank 1 (augmented training set) | mAP (augmented training set) |
|---|---|---|---|---|
| IDE + Euclidean | 73.9 | 48.8 | 78.5 | 55.9 |
| IDE + XQDA | 73.2 | 50.9 | 77.8 | 57.9 |
| IDE + KISSME | 75.1 | 51.5 | 79.5 | 58.1 |
| Discriminative Embedding | 78.3 | 55.5 | 80.6 | 61.3 |
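The augmentation protocol described above can be sketched as follows. This is a hedged illustration of the bookkeeping only: `generate` and `extract_pose` are placeholders standing in for our trained generator and for the HPE, respectively, and the data-structure layout is our own.

```python
import random


def augment_training_set(train_set, generate, extract_pose, k=10):
    """Build the augmented training set from the standard one.

    train_set: list of (image, identity) pairs.
    generate(cond_img, target_pose): placeholder for the generator.
    extract_pose(img): placeholder for the Human Pose Estimator.
    k: number of generated images per real image (10 in our experiments).
    """
    # Pool of target poses; poses may come from *different* individuals
    # than the person in the conditioning image.
    all_poses = [extract_pose(img) for img, _ in train_set]
    augmented = list(train_set)
    for img, identity in train_set:
        for pose in random.sample(all_poses, k):
            fake = generate(img, pose)
            # Each generated image keeps the identity of its conditioning image.
            augmented.append((fake, identity))
    return augmented
```

The resulting augmented set is then used in place of the standard one when training the Re-ID systems, with no change to their training procedures.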
Appendix D Comparison with previous work
In this section we directly compare our method with the results generated by Ma et al. The comparison is based on the (conditioning image, target pose) pairs used by Ma et al., for which we show both their results and ours.
Figs. 6-7 show the results on the Market-1501 dataset. Comparing the images generated by our full pipeline with the corresponding images generated by the full pipeline of Ma et al., our results are most of the time more realistic, sharper, and with local details (e.g., the clothes texture or the face characteristics) closer to the details of the conditioning image. For instance, in the first and last rows of Fig. 6 and in the last row of Fig. 7, our results show human-like images, while the method of Ma et al. produced images which can hardly be recognized as human.
Figs. 8-9 show the results on the DeepFashion dataset. Also in this case, comparing our results with those of Ma et al., ours look most of the time more realistic or closer to the details of the conditioning image. For instance, the second row of Fig. 8 shows a male face generated by our method, while the approach of Ma et al. produced a female face (note that the DeepFashion dataset is strongly biased toward female subjects). Most of the time, the clothes texture generated by our method is closer to the one depicted in the conditioning image (e.g., see rows 1, 3, 4, 5 and 6 in Fig. 8 and rows 1 and 6 in Fig. 9). In row 5 of Fig. 9 the method of Ma et al. produced an image with a pose closer to the target; however, it wrongly generated pants, while our approach correctly generated the appearance of the legs according to the appearance contained in the conditioning image.
We believe that this qualitative comparison, based on the pairs selected by Ma et al., shows that the combination of the proposed deformable skip connections and the nearest-neighbour loss achieves the desired effect of "capturing" and transferring the correct local details from the conditioning image to the generated image. Transferring local information while simultaneously taking into account the global pose deformation is a difficult task, which is harder to accomplish using "standard" U-Net based generators such as those adopted by Ma et al.
| Full (ours) | Ma et al. |
Appendix E Other qualitative results
In this section we present additional qualitative results. Fig. 10 and Fig. 11 show some images generated using the Market-1501 dataset and the DeepFashion dataset, respectively. The terminology is the same as that adopted in Sec. 6.2. Note that, for the sake of clarity, we used a skeleton-based visualization of the target pose but, as explained in the main paper, only the point-wise joint locations are used in our method to represent pose information (i.e., no joint-connectivity information is used).
Similarly to the results shown in Sec. 6.2, these images show that, although the pose-related general structure is sufficiently well generated by all the different versions of our method, most of the time there is a gradual quality improvement in the detail synthesis from Baseline to DSC to PercLoss to Full.
Finally, we discuss some typical failure cases.

Ambiguity of the pose representation. For instance, in row 3 of Fig. 13, the left elbow has been detected although it is actually hidden behind the body. Since the pose representation contains only 2D information (no depth or occlusion-related information), there is no way for the system to understand whether the elbow is behind or in front of the body. In this case our model chose to generate the arm as if it were in front of the body (which corresponds to the most frequent situation in the training dataset).
Rare object appearance. For instance, the backpack in row 1 of Fig. 12 is light green, while most of the backpacks contained in the training images of the Market-1501 dataset are dark. Comparing this image with the one generated in the last row of Fig. 10 (where the backpack is black), we see that in Fig. 10 the colour of the shirt of the generated image is not blended with the backpack colour, while in Fig. 12 it is. We presume that the generator “understands” that a dark backpack is an object whose texture should not be transferred to the clothes of the generated image, while it is not able to generalize this knowledge to other backpacks.
Warping problems. This issue is specific to our approach (the deformable skip connections). The texture on the shirt of the conditioning image in row 2 of Fig. 13 is warped in the generated image. We presume this happens because the affine transformations need to strongly warp the texture details of the narrow surface of the profile-view shirt (conditioning image) in order to fit the much wider area of the frontal target pose.
| Baseline (ours) | DSC (ours) | PercLoss (ours) | Full (ours) |