Appearance and Pose-Conditioned Human Image Generation using Deformable GANs

04/30/2019 ∙ by Aliaksandr Siarohin, et al. ∙ Università di Trento

In this paper, we address the problem of generating person images conditioned on both pose and appearance information. Specifically, given an image xa of a person and a target pose P(xb), extracted from a different image xb, we synthesize a new image of that person in pose P(xb), while preserving the visual details in xa. In order to deal with pixel-to-pixel misalignments caused by the pose differences between P(xa) and P(xb), we introduce deformable skip connections in the generator of our Generative Adversarial Network. Moreover, a nearest-neighbour loss is proposed instead of the common L1 and L2 losses in order to match the details of the generated image with the target image. Quantitative and qualitative results, using common datasets and protocols recently proposed for this task, show that our approach is competitive with respect to the state of the art. Moreover, we conduct an extensive evaluation using off-the-shelf person re-identification (Re-ID) systems trained with person-generation based augmented data, which is one of the main applications for this task. Our experiments show that our Deformable GANs can significantly boost the Re-ID accuracy and are even better than data-augmentation methods specifically trained using Re-ID losses.


1 Introduction

The appearance and pose-conditioned human image generation task aims at generating a person image conditioned on two different variables: (1) the appearance of a specific person in a given image and (2) the pose of the same person in another image. Specifically, the generation process needs to preserve the appearance details (e.g., the colors of the clothes, the texture, etc.) contained in the first variable while performing a deformation on the structure (the pose) of the foreground person according to the second variable. Generally speaking, this task can be extended to the generation of images where the foreground, deformable object (e.g., a human face or an animal body) changes because of a viewpoint variation or a deformable motion. The common assumption is that the object structure can be automatically extracted using a keypoint detector.

(a) Aligned task
(b) Unaligned task
Fig. 3: (a) An example of “rigid” scene generation tasks, where the conditioning and the output image local structures are well aligned. (b) In a deformable-object generation task, the input and output images are not spatially aligned.

After the publication of the pioneering work of Ma et al. [2], there has been a quickly growing interest in this task, as witnessed by several very recent papers on this topic [7, 8, 3, 9, 10, 4]. The reason for this large interest probably lies in the many potential application scenarios, ranging from computer-graphics based manipulations [7] to data augmentation for training person re-identification (Re-ID) [11, 10] or human pose estimation [12] systems. However, most of the recently proposed, deep-network based generative approaches, such as Generative Adversarial Networks (GANs) [13] or Variational Autoencoders (VAEs) [14], do not explicitly deal with the problem of articulated-object generation. Common conditional methods (e.g., conditional GANs or conditional VAEs) can synthesize images whose appearances depend on some conditioning variables (e.g., a label or another image).

For instance, Isola et al. [15] recently proposed an "image-to-image translation" framework, in which an input image x is transformed into a second image y represented in another "channel" (see Fig. 3 (a)). However, most of these methods have problems when dealing with large spatial deformations between the conditioning appearance and the target image. For instance, the U-Net architecture used by Isola et al. [15] is based on skip connections which help preserve local information between x and y. Specifically, skip connections are used to copy and then concatenate the feature maps of the generator "encoder" (where information is downsampled using convolutional layers) to the generator "decoder" (containing the upconvolutional layers). However, the assumption used in [15] is that x and y are roughly aligned with each other and represent the same underlying structure. This assumption is violated when the foreground object in y undergoes large spatial deformations with respect to x (see Fig. 3 (b)). As shown in [2], skip connections cannot reliably cope with misalignments between the two poses. In Sec. 2 we will see that U-Net based generators are widely used in most of the recent person-generation approaches, hence this misalignment problem is common to many methods.

Ma et al. [2] propose to alleviate this problem using a two-stage generation approach. In the first stage, a U-Net generator is trained using a masked L1 loss in order to produce an intermediate image conditioned on the target pose. In the second stage, a second U-Net based generator is trained, using also an adversarial loss, to generate an appearance difference map which brings the intermediate image closer to the appearance of the conditioning image. In contrast, the U-Net based method we propose in this paper is trained end-to-end by explicitly taking into account pose-related spatial deformations. More specifically, we propose deformable skip connections which "move" local information according to the structural deformations represented in the conditioning variables. These layers are used in our U-Net based generator. In order to move information according to specific spatial deformations, we first decompose the overall deformation by means of a set of local affine transformations involving subsets of joints. After that, we deform the convolutional feature maps of the encoder according to these transformations and we use common skip connections to transfer the transformed tensors to the decoder's fusion layers. Moreover, we propose to use a nearest-neighbour loss as a replacement for the common pixel-to-pixel losses (such as the L1 or L2 losses) used in conditional generative approaches. This loss proved helpful in generating local details (e.g., texture) similar to those of the target image, without penalizing small spatial misalignments.

Part of the material presented here appeared in [4]. The current paper extends [4] in several ways. First, we present a more detailed analysis of related work, including very recently published papers dealing with pose-conditioned human image generation. Second, we show how a variant of our method can be used to introduce a third conditioning variable: the background, represented by a third input image. Third, we describe our method in more detail. Finally, we extend the quantitative and qualitative experiments by comparing our Deformable GANs with the most recent work in this area. Specifically, this comparison with the state of the art is performed using: (1) the protocols proposed by Ma et al. [2] and (2) Re-ID based experiments. The latter are motivated by the recent trend of using generative methods for data augmentation [7, 10, 16, 17, 18, 19], and show that Deformable GANs can largely improve the accuracy of different Re-ID systems. Conversely, most of the other state-of-the-art methods generate new training samples which are harmful for Re-ID systems, leading to a significantly worse performance with respect to a non-augmented training dataset.

Although tested on the specific human-body problem, our approach makes few human-related assumptions and can be easily extended to other domains involving the generation of highly deformable objects. Our code and our trained models are publicly available at https://github.com/AliaksandrSiarohin/pose-gan.

The rest of the paper is organized as follows. In Sec. 2, we analyse the related work. Our method is presented in Sec. 3 and Sec. 4. Sec. 5 presents the experimental evaluation and conclusions are drawn in Sec. 6.

2 Related work

Most common deep-network-based approaches for visual content generation can be categorized as either Variational Autoencoders (VAEs) [14] or Generative Adversarial Networks (GANs) [13]. VAEs are based on probabilistic graphical models and are trained by maximizing a lower bound of the corresponding data likelihood. GANs are based on two networks, a generator and a discriminator, which are trained simultaneously such that the generator tries to “fool” the discriminator and the discriminator learns how to distinguish between real and fake images.

Isola et al. [15] propose a conditional GAN framework for image-to-image translation problems, where a given scene representation is "translated" into another representation. The main assumption behind this framework is that there exists a spatial correspondence between the low-level information of the conditioning and the output image.

VAEs and GANs are combined in [8] to generate realistic-looking multi-view images of clothes from a single-view input image. The target view is fed to the model using a viewpoint label such as front or left side, and a two-stage approach is adopted: pose integration and image refinement. Ma et al. [2] propose a more general approach which makes it possible to synthesize person images in any arbitrary pose. Similarly to our proposal, the input of their model is a conditioning appearance image of the person and a target new pose defined by 18 joint locations. The target pose is described by means of binary maps where small circles represent the joint locations. This work has been extended in [20] by learning disentangled representations of person images. More precisely, in the generator, the pose, the foreground and the background are separately encoded in order to obtain a disentangled description of the image. The input image is then reconstructed by combining the three descriptors. The major advantage of this approach is that it does not require pairs of images of the same person at training time. However, the generated images consequently suffer from a lower level of realism.

Inspired by Ma et al. [2], several methods have been recently proposed to generate human images. In [21, 8], the generation process is split into two different stages: pose generation and texture refinement. Si et al. [9] propose multistage adversarial losses for generating images of a person in the same pose but from another camera viewpoint. Specifically, the first generation stage generates the body pose in the new viewpoint. The second and the third stages generate the foreground (i.e., the person) and the background, respectively. Similarly to our proposal, Balakrishnan et al. [22] partition the human body into different parts and separately deform each of them. Their method is based on producing a set of segmentation masks, one per body part, plus a whole-body mask which separates the human figure from the background. However, in order for the model to segment the human figure without relying on pixel-level annotations, training is based on pairs of conditioning images with the same background (e.g., frame pairs extracted from the same video with a static camera and background). This constraint prevents the use of this method in applications such as Re-ID data augmentation, in which training images are usually taken in different environments. In contrast to [21, 8, 9, 22], in this paper we show that a single-stage approach, trained end-to-end, can be used for the same task, obtaining higher-quality results, and that our method can be easily used as a black-box for Re-ID data augmentation.

Recently, Neverova et al. [23] propose to synthesize a new image of the input person by blending different generated texture maps. This method is based on a dense-pose estimation system [24] which maps pixels from images to a common surface-based coordinate framework. However, since the dense-pose estimator needs to be trained with a large-scale ground-truth dataset with image-to-surface correspondences manually annotated [24], [23] is not directly comparable with most of the other works (ours included) which rely on (sparse) keypoint detectors, whose training is based on a lower level of human supervision.

In [3] a VAE is used to represent the appearance and pose with two separate encoders. The appearance and pose descriptors are then concatenated and passed to a decoder which generates the final image. Zanfir et al. [25] estimate the human 3D pose using meshes. Then, they identify the mesh regions which can be transferred directly from the input image mesh to the target mesh. Finally, the missing surfaces are filled using a color regressor trained via Euclidean loss minimization. Despite the visually satisfying results, this method requires prior knowledge in order to obtain the 3D body meshes and the clothes segmentation used to synthesize the final image. In [10], a person generation model is specifically designed for boosting Re-ID accuracy using data augmentation. A sub-network is added to a standard U-Net GAN network in order to verify whether the identity of the person in the generated images can be distinguished from other identities.

Generally speaking, U-Net based architectures are frequently adopted for pose-based person-image generation tasks [21, 2, 7, 8, 3, 9, 10]. However, common U-Net skip connections are not well suited to large spatial deformations because local information in the input and in the output images is not aligned (Fig. 3). In contrast, we propose deformable skip connections to deal with this misalignment problem and "shuttle" local information from the encoder to the decoder driven by the specific pose difference. In this way, differently from previous work, we are able to simultaneously generate the overall pose and the texture-level refinement. Note that many of the above mentioned U-Net based methods are partially complementary to our approach, since our deformable skip connections can potentially be plugged into the corresponding U-Net, possibly increasing the final performance.

Landmark locations are exploited for other generation tasks such as face synthesis [6, 5]. However, since the human face can be considered a more rigid object than the human body, the misalignment between the input and output images is limited and high-quality images can be obtained without feature alignment.

It is worth noticing that, for discriminative tasks, other architectures have been proposed to deal with spatial deformations. For instance, Jaderberg et al. [26] propose a spatial transformer layer, which learns how to transform a feature map into a "canonical" view, conditioned on the feature map itself. However, only a global parametric transformation can be learned (e.g., a global affine transformation), while in this paper we deal with non-parametric deformations of articulated objects, which cannot be described by means of a unique global affine transformation.

Finally, our nearest-neighbour loss is similar to the perceptual loss proposed in [1] and to the style-transfer spatial-analogy approach recently proposed in [27]. However, the perceptual loss, based on an element-by-element difference computed in the feature map of an external classifier [1], does not take into account spatial misalignments. On the other hand, the patch-based similarity adopted in [27] to compute a dense feature correspondence is computationally very expensive and is not used as a loss.

3 Deformable GANs

Fig. 4: A schematic representation of our network architectures. For the sake of clarity, in this figure we depict each pose as a skeleton and each tensor H as the average of its component matrices. The white rectangles in the decoder represent the feature maps directly obtained using up-convolutional filters applied to the previous-layer maps. The reddish rectangles represent the feature maps "shuttled" by the skip connections in the target stream. Finally, the blueish rectangles represent the deformed tensors "shuttled" by the deformable skip connections in the source stream.

We start with a description of the architectures of our generator (G) and discriminator (D) and the proposed deformable skip connections. We first introduce some notation. At testing time our task, similarly to [2], consists in generating an image x̂ showing a person whose appearance (e.g., clothes, etc.) is similar to an input, conditioning appearance image xa, but whose body pose is similar to P(xb), where xb is a different image of the same person and P(x) is a sequence of 2D points describing the locations of the human-body joints in x. In order to allow a fair comparison with [2] and other works, we use the same number of joints (18) and we extract P(x) using the same Human Pose Estimator (HPE) [12] used in [2]. Note that this HPE is used both at testing and at training time, meaning that we do not use manually-annotated poses, and the extracted joint locations may contain localization errors or missing detections/false positives.

At training time we use a dataset containing pairs (xa, xb) of conditioning-target images of the same person in different poses. For each pair (xa, xb), two poses P(xa) and P(xb) are extracted from the corresponding images and represented using two tensors Ha and Hb. Each tensor is composed of 18 heat maps Hj (j = 1, ..., 18), where each Hj is a 2D matrix of the same dimension as the original image. If pj is the j-th joint location, then:

Hj(p) = exp( −‖p − pj‖² / σ² ),    (1)

with σ measured in pixels and chosen with cross-validation. Using blurring (Eq. (1)) instead of a binary map, as adopted instead in [2], is useful to provide widespread information about the location pj.
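
As an illustration, the heat-map tensor of Eq. (1) can be built with a few lines of NumPy. This is a minimal sketch: the (row, column) joint format and the default value of sigma below are assumptions, since the paper selects σ by cross-validation.

```python
import numpy as np

def pose_to_heatmaps(joints, height, width, sigma=6.0):
    """Build the heat-map tensor of Eq. (1): one blurred map per joint.

    joints: (J, 2) array of (row, col) joint locations; NaN rows mark
            joints that the HPE failed to detect.
    Returns an array of shape (height, width, J).
    """
    rows = np.arange(height, dtype=np.float32)[:, None]   # (height, 1)
    cols = np.arange(width, dtype=np.float32)[None, :]    # (1, width)
    maps = np.zeros((height, width, len(joints)), dtype=np.float32)
    for j, (r, c) in enumerate(joints):
        if np.isnan(r) or np.isnan(c):
            continue                                       # missing joint: leave the map at zero
        sq_dist = (rows - r) ** 2 + (cols - c) ** 2
        maps[..., j] = np.exp(-sq_dist / sigma ** 2)       # Eq. (1)
    return maps
```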

The generator G is fed with: (1) a noise vector z, drawn from a noise distribution and implicitly provided using dropout [15], and (2) the triplet (xa, Ha, Hb). Note that, at testing time, the target pose P(xb) is known, thus Hb can be computed. Note also that the joint locations in xa and Ha are spatially aligned (by construction), while in Hb they are different. Hence, differently from [2, 15], Hb is not concatenated with the other input tensors. Indeed, the convolutional-layer units in the encoder part of G have a small receptive field which cannot capture large spatial displacements. For instance, when there is a large movement of a body limb in xb with respect to xa, this limb is represented in different locations in Ha and Hb, which may be too far apart from each other to be captured by the receptive field of the convolutional units. This is emphasized in the first layers of the encoder, which represent low-level information. Therefore, the convolutional filters cannot simultaneously process texture-level information (from xa) and the corresponding pose information (from Hb).

For this reason we process xa and Ha independently of Hb in the encoder. Specifically, xa and Ha are concatenated and processed using the source stream of the encoder, while Hb is processed by means of the target stream, without weight sharing (Fig. 4). The feature maps of the first stream are then fused with the layer-specific feature maps of the second stream in the decoder, after a pose-driven spatial deformation performed by our deformable skip connections (see Sec. 3.1).
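
A highly simplified sketch of this two-stream forward pass is given below. Only one resolution level is shown, whereas the real generator fuses deformed source features at every decoder layer; the module names are placeholders.

```python
import torch

def generator_forward(src_stream, tgt_stream, decoder, deform, x_a, H_a, H_b):
    """Single-level sketch of the two-stream generator of Fig. 4.

    src_stream / tgt_stream: encoder stacks without weight sharing;
    deform: the deformable-skip-connection operator of Sec. 3.1, warping
            source features according to the pose pair (H_a, H_b).
    """
    f_src = src_stream(torch.cat([x_a, H_a], dim=1))  # appearance + its own pose
    f_tgt = tgt_stream(H_b)                           # target pose only
    f_src_def = deform(f_src, H_a, H_b)               # align source features to the target pose
    return decoder(torch.cat([f_tgt, f_src_def], dim=1))
```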

Our discriminator network is based on the conditional, fully-convolutional discriminator proposed by Isola et al. [15]. In our case, D takes as input 4 tensors: (xa, Ha, Hb, y), where either y = xb or y = x̂, the image generated by G (see Fig. 4). These four tensors are concatenated and then given as input to D. The discriminator's output is a scalar value indicating its confidence on the fact that y is a real image.

3.1 Deformable skip connections

As mentioned above and similarly to [15], the goal of the deformable skip connections is to "shuttle" local information from the encoder to the decoder part of G. The local information to be transferred is, generally speaking, contained in a tensor F, which represents the feature-map activations of a given convolutional layer of the encoder. However, differently from [15], we need to "pick" the information to shuttle taking into account the object-shape deformation described by the difference between P(xa) and P(xb). To do so, we decompose the global deformation into a set of local affine transformations, defined using subsets of joints in P(xa) and P(xb). Using these affine transformations together with local masks constructed from the specific joints, we deform the content of F and then we use common skip connections to copy the transformed tensor and concatenate it with the corresponding tensor in the destination layer (see Fig. 4). Below we describe the whole pipeline in more detail.

Decomposing an articulated body into a set of rigid sub-parts. The human body is an articulated "object" which can be roughly decomposed into a set of rigid sub-parts. We chose 10 sub-parts: the head, the torso, the left/right upper/lower arm and the left/right upper/lower leg. Each of them corresponds to a subset of the 18 joints defined by the HPE [12] we use for extracting P(x). Using these joint locations we can define rectangular regions which enclose the specific body part. In the case of the head, the region is simply chosen to be the axis-aligned enclosing rectangle of all the corresponding joints. For the torso, which is the largest area, we use a region which includes the whole image, in such a way to shuttle texture information for the background pixels. Note that in Sec. 3.3 we present an alternative way to generate background information. Concerning the body limbs, each limb corresponds to only 2 joints. In this case we define the region to be a rotated rectangle whose major axis corresponds to the line between these two joints, while the minor axis is orthogonal to the major axis and has a length equal to one third of the mean of the torso's diagonals (this value is used for all the limbs). In Fig. 5 we show an example. Let Rh^a = {pk^a, k = 1, ..., 4} be the set of the 4 rectangle corners in P(xa) defining the h-th body region (h = 1, ..., 10). Note that these 4 corner points are not joint locations. Using Rh^a, we can compute a binary mask Mh, which is zero everywhere except at the points lying inside the rectangle area corresponding to Rh^a.

Moreover, let Rh^b be the corresponding rectangular region in P(xb). Matching the points in Rh^a with the corresponding points in Rh^b we can compute the parameters of a body-part specific affine transformation fh (see below). In either P(xa) or P(xb), some of the body regions can be occluded, truncated by the image borders or simply mis-detected by the HPE. In this case we leave the corresponding region empty and the h-th affine transform is not computed (see below).

Note that our body-region definition is the only human-specific part of the proposed approach. However, similar regions can be easily defined using the joints of other articulated objects such as those representing an animal body or a human face.
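
For concreteness, the rotated rectangle used for a limb can be computed as follows. This is a sketch assuming (x, y) joint coordinates; the corner ordering is arbitrary.

```python
import numpy as np

def limb_region(p1, p2, torso_diag_mean):
    """Corners of the rotated rectangle enclosing a limb (Sec. 3.1).

    p1, p2: the two joint locations (x, y) delimiting the limb.
    torso_diag_mean: mean length of the torso's diagonals; the minor axis
    of the rectangle is one third of this value.
    Returns a (4, 2) array with the rectangle corners.
    """
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    major = p2 - p1                              # major axis along the limb
    length = np.linalg.norm(major)
    if length == 0:
        raise ValueError("degenerate limb: coincident joints")
    unit = major / length
    ortho = np.array([-unit[1], unit[0]])        # unit vector of the minor axis
    half_minor = (torso_diag_mean / 3.0) / 2.0
    return np.stack([p1 + ortho * half_minor,
                     p1 - ortho * half_minor,
                     p2 - ortho * half_minor,
                     p2 + ortho * half_minor])
```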

Computing a set of affine transformations. During the forward pass (i.e., both at training and at testing time) we decompose the global deformation of the conditioning pose with respect to the target pose by means of a set of local affine transformations, one per body region. Specifically, given Rh^a in P(xa) and Rh^b in P(xb) (see above), we compute the 6 parameters ah of an affine transformation fh using a Least Squares Error criterion:

ah = arg min_a Σ_{k=1..4} ‖ pk^b − fh(pk^a; a) ‖²    (2)

where pk^a and pk^b (k = 1, ..., 4) are the corresponding corners of Rh^a and Rh^b. The parameter vector ah is computed using the original image resolution of xa and xb and then adapted to the specific resolution of each involved feature map F. Similarly, we compute scaled versions of each mask Mh. In case either Rh^a or Rh^b is empty (i.e., when any of the specific body-region joints has not been detected by the HPE, see above), we simply set Mh to be a matrix with all elements equal to 0 (fh is not computed).

Note that the transformations fh, the masks Mh and their lower-resolution variants need to be computed only once for each pair of real images and, in the case of the training phase, this can be done before starting to train the networks (although in our current implementation this is done on the fly).
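
The least-squares fit of Eq. (2) and its adaptation to lower-resolution feature maps can be sketched as follows (NumPy). Here the transform is assumed to map conditioning-pose corners onto target-pose corners; depending on the warping convention (forward vs. inverse) the opposite direction may be needed.

```python
import numpy as np

def fit_affine(src_corners, dst_corners):
    """Least-squares estimate of the 6 affine parameters of Eq. (2).

    src_corners, dst_corners: (4, 2) arrays with the corners of Rh^a and Rh^b.
    Returns a 2x3 matrix A such that dst ≈ A @ [x, y, 1]^T.
    """
    ones = np.ones((len(src_corners), 1))
    X = np.hstack([src_corners, ones])                     # (4, 3) design matrix
    A_t, *_ = np.linalg.lstsq(X, dst_corners, rcond=None)  # solve X @ A^T = dst
    return A_t.T                                           # (2, 3)

def rescale_affine(A, scale):
    """Adapt a full-resolution transform to a feature map downscaled by `scale`.

    With a uniform rescaling of both images, the linear part is unchanged
    and only the translation has to be scaled.
    """
    A_scaled = A.copy()
    A_scaled[:, 2] *= scale
    return A_scaled
```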

Fig. 5: For each specific body part, an affine transformation is computed. This transformation is used to “move” the feature-map content corresponding to that body part.

Combining affine transformations to approximate the object deformation. Once fh and Mh are computed for the specific spatial resolution of a given tensor F, the latter can be transformed in order to approximate the global pose-dependent deformation. Specifically, we first compute, for each body region h:

Fh' = fh(Mh ⊙ F),    (3)

where ⊙ is a point-wise multiplication and fh is used to "move" all the channel values of F corresponding to a given point p. Finally, we merge the resulting tensors F1', ..., F10', treating each feature channel independently:

F'(p, c) = max_{h=1,...,10} Fh'(p, c).    (4)

In other words, for each channel c and each feature location p, we select the maximum value over the ten feature maps corresponding to the ten considered body parts. The rationale behind Eq. (4) is that, when two body regions partially overlap each other, the final deformed tensor F' is obtained by picking the maximum-activation values. We experimentally show the benefit of this hard-decision formulation over softer combinations such as averaging in Sec. 5.7. Note that the background is not modeled in Eqs. (3) and (4), since there is no need to preserve it within the source stream. This point is further discussed in Sec. 3.3.
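
A NumPy/SciPy sketch of Eqs. (3)-(4) is given below; scipy's affine_transform expects the inverse (output-to-source) mapping and works in (row, column) coordinates, which is assumed here.

```python
import numpy as np
from scipy.ndimage import affine_transform

def deform_feature_map(F, masks, affines):
    """Sketch of Eqs. (3)-(4): mask, warp and max-merge a source feature tensor.

    F:       feature map of shape (H, W, C) from the source stream.
    masks:   list of 10 binary masks of shape (H, W), one per body region,
             already rescaled to the resolution of F.
    affines: list of 10 items, each either None (region missing) or a 2x3
             matrix mapping output (target) coordinates back to source ones.
    """
    warped = []
    for M, A in zip(masks, affines):
        if A is None:                      # missing body region: contributes zeros
            warped.append(np.zeros_like(F))
            continue
        masked = F * M[..., None]          # Eq. (3): keep only the region content
        out = np.empty_like(masked)
        for c in range(F.shape[-1]):       # warp every channel with the same transform
            out[..., c] = affine_transform(masked[..., c], A[:, :2],
                                           offset=A[:, 2], order=1)
        warped.append(out)
    return np.max(np.stack(warped, axis=0), axis=0)   # Eq. (4): point-wise maximum
```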

3.2 Training

G and D are trained using a combination of a standard conditional adversarial loss LcGAN with our proposed nearest-neighbour loss LNN. Specifically, in our case LcGAN is given by:

LcGAN(G, D) = E_{(xa,xb)} [ log D(xa, Ha, Hb, xb) ] + E_{(xa,xb),z} [ log (1 − D(xa, Ha, Hb, x̂)) ],    (5)

where x̂ = G(z, xa, Ha, Hb).

Previous works on conditional GANs combine the adversarial loss with either an L2 [28] or an L1-based loss [15, 2], which is used only for G. For instance, the L1 distance computes a pixel-to-pixel difference between the generated and the real image, which, in our case, is:

L1(x̂, xb) = ‖ x̂ − xb ‖_1.    (6)

However, a well-known problem behind the use of L1 and L2 is the production of blurred images. We hypothesize that this is also due to the inability of these losses to tolerate small spatial misalignments between x̂ and xb. For instance, suppose that x̂, produced by G, is visually plausible and semantically similar to xb, but the texture details on the clothes of the person in the two compared images are not pixel-to-pixel aligned. Both the L1 and the L2 loss will penalize this inexact pixel-level alignment, although it is not semantically important from the human point of view. Note that these misalignments do not depend on the global deformation between xa and xb, because x̂ is supposed to have the same pose as xb. In order to alleviate this problem, we propose to use a nearest-neighbour loss based on the following definition of image difference:

dNN(x̂, xb) = Σ_{p ∈ x̂} min_{q ∈ N(p)} ‖ g(x̂, p) − g(xb, q) ‖_1,    (7)

where N(p) is a local neighbourhood of point p (we use small, dataset-specific neighbourhoods for the DeepFashion and the Market-1501 datasets, see Sec. 5). g(x, p) is a vectorial representation of a patch around point p in image x, obtained using convolutional filters (see below for more details). Note that dNN() is not a metric because it is not symmetric. In order to efficiently compute Eq. (7), we compare patches in x̂ and xb using their representations in a convolutional map of an externally trained network. In more detail, we use VGG-19 [29], trained on ImageNet, and, specifically, its second convolutional layer (called conv1_2). The first two convolutional maps in VGG-19 (conv1_1 and conv1_2) are both obtained using a convolutional stride equal to 1. For this reason, the conv1_2 feature map Cx of an image x has the same resolution as the original image x. Exploiting this fact, we compute the nearest-neighbour field directly on Cx, without losing spatial precision. Hence, we define g(x, p) = Cx(p), which corresponds to the vector of all the channel values of Cx at the spatial position p. Cx(p) has a receptive field of 5×5 pixels in x, thus effectively representing a 5×5 patch using a cascade of two convolutional layers interspersed by a non-linearity. Using g(x, p) = Cx(p), Eq. (7) becomes:

dNN(x̂, xb) = Σ_{p ∈ x̂} min_{q ∈ N(p)} ‖ Cx̂(p) − Cxb(q) ‖_1.    (8)

In Sec. 4.1, we show how Eq. (8) can be efficiently implemented using GPU-based parallel computing. The final dNN-based loss is:

LNN(G) = E_{(xa,xb),z} [ dNN(x̂, xb) ].    (9)

Combining Eq. (5) and Eq. (9) we obtain our objective:

G* = arg min_G max_D LcGAN(G, D) + λ LNN(G),    (10)

with the same (small) value of λ used in all our experiments. The value of λ is small because it acts as a normalization factor in Eq. (8) with respect to the number of channels in Cx and the number of pixels in x̂ (more details in Sec. 4.1).

3.3 Conditioning on the Background

We now introduce a third (optional) conditioning variable: the background. Controlling the background generation has two main practical interests. First, in the context of data augmentation, we can increase the diversity of the generated data by using different background images. Second, when we aim at generating image sequences (e.g., short videos), a temporally coherent background is helpful. Consequently, in this section we show how to extend the proposed method in order to generate the background area conditioned on a given, third input image.

Formally speaking, the output image x̂ should be conditioned (also) on a target background image. In the generator, this is simply obtained by concatenating the background image with Hb (see Fig. 4). In this way, background information can be provided to the decoder using the target stream. Note that, since the background image is part of the target stream (in red in Fig. 4), the deformable skip connections defined in Eq. (3) are not applied to the image features extracted from the background image. Therefore, there is no need to specifically modify Eq. (3) to handle background conditioning.

Similarly, the discriminator network takes the background image as an additional input, which is concatenated with the other input images. Therefore, the discriminator can detect whether the background of x̂ corresponds to the conditioning background image and force the generator to output images with the desired background.

Training is performed using video sequences from which a background image can be easily extracted. In more detail, we use the PRW dataset [30], which contains a set of videos annotated with the bounding box of each tracked person. When we train our networks with background conditioning information, we extract xa and xb from a person track in two different frames, which may come from different videos. Then, the target background image is obtained by choosing at random a frame in the same video as xb with no bounding box overlapping the area corresponding to xb. Note that in the PRW dataset the cameras are static and the background objects do not move during the video (except for a few objects such as, for instance, bikes).

4 Implementation details

In this section we provide additional technical details of our proposed method. We first show how the proposed nearest-neighbour loss can be efficiently computed exploiting optimized matrix-multiplications typically used in GPU-based programming. Second, we show how to use the symmetry of the human body in order to handle possible missing/non-detected body parts. Finally, we report the details of the architectures and the training procedure used in our experiments.

4.1 Nearest-neighbour loss implementation

Our proposed nearest-neighbour loss is based on the definition of dNN given in Eq. (8). In that equation, for each point p in x̂, the "most similar" (in the conv1_2 feature space) point in xb needs to be searched for in a neighbourhood of p. This operation may be quite time consuming if implemented using sequential computing (i.e., using a "for loop"). We show here how this computation can be sped up by exploiting GPU-based parallel computing, in which different tensors are processed simultaneously.

Given Cxb, we compute a set of shifted versions of it, Cxb^i, where i indexes a translation offset ranging over the relative neighbourhood N() and the values at the borders are filled by padding. Using these translated versions of Cxb, we compute the corresponding difference tensors Di, where:

Di(p, c) = | Cx̂(p, c) − Cxb^i(p, c) |,    (11)

and the difference is computed element-wise. Di contains the channel-by-channel absolute difference between Cx̂ and the i-th shifted version of Cxb. Then, for each Di, we sum all the channel-based differences, obtaining:

Si(p) = Σ_c Di(p, c),    (12)

where c ranges over all the channels and the sum is performed pointwise. Si is a matrix of scalar values, each value representing the L1 norm of the difference between a point in Cx̂ and the corresponding point in Cxb^i:

Si(p) = ‖ Cx̂(p) − Cxb^i(p) ‖_1.    (13)

For each point p, we can now compute its best match in the local neighbourhood of p simply using:

m(p) = min_i Si(p).    (14)

Finally, Eq. (8) becomes:

dNN(x̂, xb) = Σ_{p ∈ x̂} m(p).    (15)

Since we do not normalize Eq. (12) by the number of channels nor Eq. (15) by the number of pixels, the final value of dNN is usually very high. For this reason we use a small value of λ in Eq. (10) when weighting LNN with respect to LcGAN.
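
The whole procedure can be condensed into a few lines of PyTorch. The sketch below assumes replicate padding at the borders and inputs already normalized for VGG-19; both are implementation assumptions, since the text only states that the shifted tensors are filled at the borders.

```python
import torch
import torch.nn.functional as F
import torchvision

# conv1_1 -> ReLU -> conv1_2 -> ReLU of VGG-19; both convolutions have stride 1,
# so the feature map keeps the resolution of the input image.
vgg_conv1_2 = torchvision.models.vgg19(
    weights=torchvision.models.VGG19_Weights.IMAGENET1K_V1).features[:4].eval()
for p in vgg_conv1_2.parameters():
    p.requires_grad_(False)

def nn_loss(x_hat, x_b, n=2):
    """Shifted-tensor implementation of Eqs. (11)-(15).

    x_hat, x_b: image batches of shape (B, 3, H, W), VGG-normalized.
    n: neighbourhood radius, i.e. the search window is (2n+1) x (2n+1).
    """
    c_hat = vgg_conv1_2(x_hat)                       # keeps gradients w.r.t. the generator
    with torch.no_grad():
        c_b = vgg_conv1_2(x_b)
    c_b_pad = F.pad(c_b, (n, n, n, n), mode="replicate")
    H, W = c_hat.shape[-2:]
    costs = []
    for dy in range(2 * n + 1):                      # loop over all translation offsets
        for dx in range(2 * n + 1):
            shifted = c_b_pad[:, :, dy:dy + H, dx:dx + W]
            # Eqs. (11)-(12): channel-wise absolute difference, summed over channels.
            costs.append((c_hat - shifted).abs().sum(dim=1))
    costs = torch.stack(costs, dim=0)                # (offsets, B, H, W)
    best = costs.min(dim=0).values                   # Eq. (14): best match per location
    return best.sum()                                # Eq. (15): sum over locations (and batch)
```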

4.2 Exploiting the human-body symmetry

As mentioned in Sec. 3.1, we decompose the human body into 10 rigid sub-parts: the head, the torso and 8 limbs (left/right upper/lower arm, etc.). When one of the joints corresponding to one of these body parts has not been detected by the HPE, the corresponding region and affine transformation are not computed and the region mask is filled with 0. This can happen either because that region is not visible in the input image or because of false detections of the HPE.

However, when the missing region involves a limb (e.g., the right-upper arm) whose symmetric body part has been detected (e.g., the left-upper arm), we can "copy" information from the "twin" part. In more detail, suppose for instance that Rh^a, the region corresponding to the right-upper arm in the conditioning appearance image, is empty because of one of the above reasons. Moreover, suppose that Rh^b is the corresponding (non-empty) region in P(xb) and that Rk^a is the (non-empty) left-upper arm region in P(xa). We simply set Rh^a := Rk^a and we compute fh as usual, using the (no longer empty) region Rh^a together with Rh^b.
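
A small sketch of this symmetry-based fall-back follows; the part names and the region table are hypothetical, only the logic follows the text.

```python
# Hypothetical part names; `regions` maps each part to its 4 corners,
# or to None when the corresponding joints were not detected by the HPE.
TWIN = {"r_upper_arm": "l_upper_arm", "l_upper_arm": "r_upper_arm",
        "r_lower_arm": "l_lower_arm", "l_lower_arm": "r_lower_arm",
        "r_upper_leg": "l_upper_leg", "l_upper_leg": "r_upper_leg",
        "r_lower_leg": "l_lower_leg", "l_lower_leg": "r_lower_leg"}

def fill_missing_limbs(regions_a, regions_b):
    """If a limb region is missing in the conditioning pose, but both its target
    counterpart and its symmetric ("twin") limb exist, borrow the twin region."""
    for part, twin in TWIN.items():
        if (regions_a[part] is None and regions_b[part] is not None
                and regions_a[twin] is not None):
            regions_a[part] = regions_a[twin]   # copy the twin corners in x_a
    return regions_a
```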

4.3 Network and Training details

We train G and D for 90k iterations with the Adam optimizer. Following [15] we use instance normalization [31]. In the following we denote with: (1) C_k a convolution-ReLU layer with k filters, (2) CN_k the same as C_k with instance normalization before the ReLU and (3) CD_k the same as CN_k with the addition of dropout. Differently from [15], we use dropout only at training time.

The encoder part of the generator is given by two streams (Fig. 4), each of which is composed of the same sequence of down-sampling blocks of the types defined above. The decoder part of the generator is composed of a corresponding sequence of up-convolutional blocks, whose inputs are fused with the (deformed) encoder feature maps through the skip connections; in the last layer, the ReLU is replaced with tanh.

The discriminator is a stack of down-sampling convolutional blocks in which the ReLU of the last layer is replaced with a sigmoid.

The generator for the DeepFashion dataset has one additional convolution block both in the encoder and in the decoder, because images in this dataset have a higher resolution.
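
The block structure described above can be sketched in PyTorch as follows. The kernel size, stride, dropout rate and filter counts are placeholders rather than the values used in the paper (the released code has the actual architecture); only one encoder stream is shown.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, norm=True, dropout=0.0):
    """One down-sampling block: convolution (+ instance norm) + ReLU (+ dropout)."""
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)]
    if norm:
        layers.append(nn.InstanceNorm2d(out_ch))
    layers.append(nn.ReLU(inplace=True))
    if dropout > 0:
        layers.append(nn.Dropout(dropout))      # active only in train() mode
    return nn.Sequential(*layers)

# Source stream of the encoder: x_a (3 channels) concatenated with the 18 heat
# maps of H_a. The number of blocks and the filter counts are placeholders.
source_stream = nn.Sequential(
    conv_block(3 + 18, 64, norm=False),
    conv_block(64, 128),
    conv_block(128, 256),
    conv_block(256, 512),
    conv_block(512, 512),
)
```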

5 Experiments

In this section we compare our method with other state-of-the-art person generation approaches, both qualitatively and quantitatively, and we present an ablation study. Since quantitative evaluation of generative methods is still an open research problem, we adopt different criteria, which can be summarized as: (1) the evaluation protocols suggested by Ma et al. [2], (2) human judgements and (3) experiments based on Re-ID training with data augmentation. Note that in all but the qualitative experiments shown in Sec. 5.6 we do not use background conditioning information. Indeed, since most of the methods we compare with do not use additional background conditioning information, we also avoid it for a fair comparison.

5.1 Datasets

The person Re-ID Market-1501 dataset [32] contains 32,668 images of 1,501 persons captured from 6 different surveillance cameras. This dataset is challenging because of the low-resolution images (128×64) and the high diversity in pose, illumination, background and viewpoint. To train our model, we need pairs of images of the same person in two different poses. As this dataset is relatively noisy, we first automatically remove those images in which no human body is detected using the HPE, leading to 263,631 training pairs. For testing, following [2], we randomly select 12,000 pairs. No person is in common between the training and the test split.

The DeepFashion dataset (In-shop Clothes Retrieval Benchmark) [33] is composed of 52,712 clothes images, matched with each other to form 200,000 pairs of identical clothes with two different poses and/or scales of the persons wearing these clothes. The images have a resolution of 256×256 pixels. Following the training/test split adopted in [2], we create pairs of images, each pair depicting the same person with identical clothes but in different poses. After removing those images in which the HPE does not detect any human body, we finally collect 89,262 pairs for training and 12,000 pairs for testing.

5.2 Metrics

Evaluation in the context of generation tasks is a problem in itself. In our experiments we adopt a redundancy of metrics and, following [2], we use: Structural Similarity (SSIM) [34], Inception Score (IS) [35] and their corresponding masked versions mask-SSIM and mask-IS [2]. The latter are obtained by masking out the image background, and the rationale behind this is that, since no background information of the target image is input to G, the network cannot guess what the target background looks like (remember that we do not use background conditioning in these experiments, see above). Note that the evaluation masks we use to compute both the mask-IS and the mask-SSIM values do not correspond to the masks Mh we use for training. The evaluation masks have been built following the procedure proposed in [2] and adopted in that work for both training and evaluation. Consequently, the mask-based metrics may be biased in favour of their method. Moreover, we observe that the IS metric [35], based on the entropy computed over the classification neurons of an external classifier [36], is not very suitable for domains with only one object class (the person class in this case). For this reason we propose to use an additional metric that we call Detection Score (DS). Similarly to the classification-based FCN-score used in [15], DS is based on the detection outcome of the state-of-the-art object detector SSD [37], trained on Pascal VOC [38] (and not fine-tuned on our datasets). At testing time, we use the person-class detection scores of SSD computed on each generated image x̂. The DS value of x̂ corresponds to the maximum person-class score obtained by SSD on x̂, and the final DS value is computed by averaging the scores over all the generated images. In other words, DS measures the confidence of a person detector about the presence of a person in the image. Given the high accuracy of SSD on the challenging Pascal VOC dataset [37], we believe it can be used as a good measure of how realistic (person-like) a generated image is.
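
As an illustration of how DS is computed, the sketch below uses torchvision's SSD300 as a stand-in detector; it is trained on COCO rather than being the Pascal-VOC model used in the paper, so the absolute values it produces would differ.

```python
import torch
import torchvision

detector = torchvision.models.detection.ssd300_vgg16(
    weights=torchvision.models.detection.SSD300_VGG16_Weights.COCO_V1).eval()
PERSON_LABEL = 1   # "person" index in the COCO label map used by torchvision

def detection_score(images):
    """DS of Sec. 5.2: mean, over the generated images, of the highest
    person-class confidence returned by the detector.

    images: list of (3, H, W) tensors with values in [0, 1].
    """
    scores = []
    with torch.no_grad():
        for out in detector(images):
            person = out["scores"][out["labels"] == PERSON_LABEL]
            scores.append(person.max().item() if len(person) else 0.0)
    return sum(scores) / len(scores)
```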

Finally, in our tables we also include the value of each metric computed using the real images of the test set. Since these values are computed on real data, they can be considered as a sort of upper bound on the results a generator can obtain. However, these values are not actual upper bounds in the strict sense: for instance, the DS metric on the real data is not 1 because of SSD failures.

Market-1501 DeepFashion
Model SSIM IS mask-SSIM mask-IS DS SSIM IS DS
Ma et al. [2]
Ma et al. [20] -
Esser et al. [3]
Ours
Real-Data
TABLE I: Comparison with the state of the art.

5.3 Comparison with previous work

In this section we qualitatively and quantitatively compare our method with state-of-the-art person generation approaches.

Qualitative comparison. Fig. 6 shows the results on the Market-1501 dataset. Comparing the images generated by our full-pipeline with the corresponding images generated by the full-pipeline presented in [2], most of the time our results are more realistic, sharper and with local details (e.g., the clothes texture or the face characteristics) more similar to the details of the conditioning appearance image. In all the examples of Fig. 6 the method proposed in [2] produces images that either contain more artefacts or are more blurred than our corresponding images. Concerning the approach proposed in [3], we observe that the generated images are sometimes more realistic than ours (e.g., rows 1 and 3). However, the approach proposed in [3] is less effective in preserving the specific details of the conditioning appearance image. For instance, in the last row, our method better preserves the blue color of the shorts. Similarly, in the fourth row, the stripes of the dress are well generated by our approach but not by [3].

Fig. 7 shows the results on the DeepFashion dataset. Also in this case, comparing our results with [2], most of the time ours look more realistic or closer to the details of the conditioning appearance image. For instance, the third row of Fig. 7 shows many artefacts in the image generated by [2]. Additionally, we see in the first two rows that our method more effectively transfers the details of the corresponding clothes textures. Concerning [3], and similarly to the Market-1501 dataset, the generated images are usually smoother and sometimes more realistic than ours. However, the appearance details look more generic and less conditioned on the specific details contained in the conditioning appearance image xa. For instance, the shape of the shorts in the second row or the color of the trousers in the third row are less similar to the details of the corresponding appearance image than in our results.

We believe that this qualitative comparison shows that the combination of the proposed deformable skip connections and the nearest-neighbour loss produces the desired effect of "capturing" and transferring the correct local details from the conditioning appearance image to the generated image. Transferring local information while simultaneously taking into account the global pose deformation is a difficult task which is harder to implement using "standard" U-Net based generators such as those adopted in [2, 3]. We also believe that this comparison shows that our method is better able than [3] to transfer person-specific details. This observation, based on a qualitative comparison, is confirmed by the quantitative experiments using different Re-ID systems in Sec. 5.5: the significant, drastic decrease of the accuracy of all the tested Re-ID systems obtained when using [3] for data augmentation indirectly shows that the generated images are too generic to be used to populate a Re-ID training set.

Full (ours) Esser et al. [3] Ma et al. [2]
Fig. 6: A qualitative comparison on the Market-1501 dataset between our approach and [3] and [2]. Columns 1 and 2 show the (testing) conditioning appearance and pose image, respectively, which are used as reference by all methods. Columns 3, 4 and 5 respectively show the images generated by our full-pipeline and by [3] and [2].
Full (ours) Esser et al. [3] Ma et al. [2]
Fig. 7: A qualitative comparison on the DeepFashion dataset between our approach and the results obtained by [3] and [2].

Quantitative comparison. Using the metrics presented in Sec. 5.2 we perform the quantitative evaluation shown in Tab. I. Since the background in the DeepFashion dataset is uniform and trivial to reproduce, the mask-based metrics are not reported in the papers of the competitor methods for this dataset. Concerning the DS metric, we used the publicly available code and network weights released by the authors of [2, 20, 3] in order to generate new images according to the common testing protocol, and ran the SSD detector to get the DS values. Note that the DS metric is not reported for [20] because the authors have released neither the code nor the generated images for the DeepFashion dataset. On the Market-1501 dataset our method reports the highest performance according to the mask-SSIM and the mask-IS metrics. Note that, except [20], none of the methods, including ours, is explicitly conditioned on background information, thus the mask-based metrics purely compare the region under conditioning. Ranking the methods according to the non-masked metrics is less easy. Specifically, our DS values are much higher than those obtained by [2] but lower than the scores obtained by [3]. A bit surprisingly, the DS scores obtained using [3] are even higher than the values obtained using real data. We presume this is due to the fact that the images generated by [3] look very realistic but are probably relatively easy for a detector to recognize, lacking sufficient inter-person variability. The experiments performed in Sec. 5.5 indirectly confirm this interpretation. Conversely, on the DeepFashion dataset, our approach ranks first with respect to the IS and the DS metrics and third with respect to the SSIM metric. This inconsistency in the rankings illustrates that no final conclusion can be drawn using only the metrics presented in Sec. 5.2. For this reason, we extend the comparison by performing two user studies (Sec. 5.4) and experiments based on person Re-ID (Sec. 5.5).

5.4 User study

In order to further compare our approach with state-of-the-art methods, we implement two different user studies. On the one hand, we follow the protocol of Ma et al. [2]. For each dataset, we show 55 real and 55 generated images in a random order to 30 users, for one second each. Differently from Ma et al. [2], who used Amazon Mechanical Turk (AMT), we used "expert" (voluntary) users: PhD students and Post-docs working in Computer Vision and belonging to two different departments. We believe that expert users, who are familiar with GAN-generated images, can more easily distinguish real from fake images; thus confusing our users is potentially a more difficult task for our GAN. In Tab. II we show our results (R2G: real images rated as generated / total real images; G2R: generated images rated as real / total generated images) together with the results reported in [2]. We believe these results can be compared to each other because they were obtained using the same experimental protocol, although with different sets of users. No user study is reported in [3, 20]. Tab. II confirms the significant quality boost of our images with respect to the images produced in [2]. For instance, on the Market-1501 dataset, the human "confusion" is one order of magnitude higher than in [2].

Market-1501 DeepFashion
Model R2G G2R R2G G2R
Ma et al. [2] 11.2 5.5 9.2 14.9
Ours 22.67 50.24 12.42 24.61
TABLE II: User study (values in %). The results of Ma et al. [2] are those reported in [2] and refer to a similar study with AMT users.

On the other hand, we propose to directly compare the images generated by our method and by the methods of [2, 3]. Specifically, we randomly choose a source image and a target pose. We then show the user the source image and three generated images (one per method). The user is asked to select the most realistic image of the person in the source image. By displaying the source image, we aim at evaluating both realism and appearance transfer. Using AMT, we ask 10 users to repeat this evaluation on 50 different source images for each dataset. Results are reported in Tab. III. We observe that, for both datasets, our method is chosen most frequently (in about 45% of the cases). When comparing with the performance of [2], our approach reaches a preference percentage that is about twice as high. This result is well in line with the first user study reported in Tab. II. Overall, it shows again that our approach outperforms the other methods on both datasets.

Model Market-1501 DeepFashion
Ma et al. [2] 23.8 19.4
Esser et al. [3] 30.0 35.8
Ours 46.2 44.8
TABLE III: User study based on direct comparisons: we report user preference in %.

5.5 Person generation for Re-ID data-augmentation

The experiments of this section are motivated by the importance of using generative methods as a data-augmentation tool which provides additional labeled samples for training discriminative methods (see Sec. 1). Specifically, we show here that the synthetic images generated by our Deformable GANs can be used to train different Re-ID networks. The typical Re-ID task consists in recognizing the identity of a human person in different poses, viewpoints and scenes. The common application of a Re-ID system is a video-surveillance scenario in which images of the same person, grabbed by cameras mounted in different locations, need to be matched to each other. Due to the low-resolution of the cameras, person Re-ID is usually based on the colours and the texture of the clothes [39]. This makes our method particularly suited to automatically populate a Re-ID training dataset by generating images of a given person with identical clothes but in different viewpoints/poses.

In our experiments we use different Re-ID methods, taken from [39, 40]. First, IDE [39] is an approach that consists in regarding Re-ID training as an image classification task, where each class corresponds to a person identity. At test time, the identity is assigned based on the image feature representation obtained before the classification layer of the network. Each query image is associated with the identity of the closest image in the gallery. Different metrics can be employed at this stage to determine the closest gallery image. In our experiments, we consider three metrics: the Euclidean distance, a metric based on Cross-view Quadratic Discriminant Analysis (XQDA [41]) and a Mahalanobis-based distance (KISSME [42]). Second, as opposed to the IDE approach, which predicts identity labels, in [40] a siamese network predicts whether the identities of the two input images are the same. For all approaches, we use a ResNet-50 backbone pre-trained on ImageNet. Here, these Re-ID methods are used as black boxes and trained with or without data augmentation. We refer the reader to the corresponding articles for additional details about the involved approaches.

For training and testing we use the Market-1501 dataset, which is designed for Re-ID benchmarking. Since [3, 20] cannot be explicitly conditioned on a background image, for a fair comparison we also test our Deformable GANs without background conditioning (Sec. 3.3). For each of the tested person-generation approaches, we use the following data-augmentation procedure. In order to augment the Market-1501 training dataset T by a factor N, for each image in T we randomly select N − 1 target poses and generate the corresponding images using a person-generation approach (as sketched below). Note that: (1) each generated image is labeled with the identity of the conditioning appearance image; (2) the target pose can be extracted from an individual different from the person depicted in the conditioning appearance image. Adding the generated images to T we obtain an augmented training set.
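
A minimal sketch of this augmentation loop follows, under the reading that a factor of N corresponds to N − 1 generated images per real image (so that the augmented set is N times larger than T); the function and variable names are illustrative only.

```python
import random

def augment_reid_trainset(train_set, generate, pose_bank, factor=2):
    """Populate a Re-ID training set with generated images (Sec. 5.5).

    train_set: list of (image, identity) pairs;
    generate:  person-generation model used as a black box,
               called as generate(appearance_image, target_pose);
    pose_bank: pool of poses extracted by the HPE, possibly from other identities.
    """
    augmented = list(train_set)
    for image, identity in train_set:
        for _ in range(factor - 1):
            target_pose = random.choice(pose_bank)   # may come from a different person
            fake = generate(image, target_pose)
            augmented.append((fake, identity))       # keep the conditioning identity
    return augmented
```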

In Tab. IV we report the results obtained using either T (standard procedure, N = 1) or the augmented dataset for training different Re-ID systems. Each row of the table corresponds to a different generative method used for data augmentation. Specifically, the results corresponding to our Deformable GANs are presented using two variants of our method: the Full pipeline, as described in Sec. 3, and a Baseline generator architecture with a heat-map based pose representation (Eq. (1)) but without deformable skip connections and nearest-neighbour loss (see Sec. 5.7 for more details).

The other person-generation approaches used for data augmentation are [3, 20, 10]. Note that in [20] the data-augmentation procedure is slightly different from the one used by all the other methods (ours included). Indeed, in [20], new person appearances are synthesized by sampling appearance descriptors in a previously learned embedding. Moreover, [10] is the only method which is specifically designed and trained for Re-ID data augmentation, using a Re-ID based loss to specifically drive the person-generation task (see Sec. 2), while all the other tested approaches, including ours, generate images independently of the Re-ID task which is used in this section for testing. Liu et al. [10] report slightly better results (Rank 1 = 79.7 and mAP = 57.9) obtained using Label Smoothing Regularization (LSR) [36] when training the final Re-ID system. However, LSR is based on a confidence value used to weight the generated samples differently from the real samples, and this hyper-parameter needs to be manually tuned depending on how trustworthy the generator is. For a fair comparison with the other methods which do not adopt LSR (including ours), in Tab. IV, column "IDE + Euclidean", we report the results obtained by [10] without LSR when training the final Re-ID system.

Tab. IV shows a significant accuracy boost when using our full model with respect to using only T. This dramatic performance boost, consistent across different Re-ID methods, shows that our generative approach can be effectively used for synthesizing training samples. It also indirectly shows that the generated images are sufficiently realistic and different from the real images contained in T. Importantly, we notice that there is no boost when using the Baseline model. Conversely, the Baseline-based results are even lower than without data augmentation, and this accuracy decrement is even more drastic when N = 10 (higher data-augmentation factor). The comparison between Full and Baseline shows the importance of the proposed method for obtaining sufficiently realistic images. Interestingly, we observe a similarly significant negative accuracy difference when data augmentation is performed using either [20] or [3]. Our Full results are even slightly better than [10], even though the latter method is specifically designed for person Re-ID and its training data are generated driven by a Re-ID loss. These results indirectly confirm that our Deformable GANs can effectively capture person-specific details which are important to identify a person.

Interestingly, we observe that with a high augmentation factor, such as N = 10, the performance is lower than with a factor of 2. This indicates that increasing the number of generated images may harm Re-ID performance. This observation may seem counterintuitive since, in standard scenarios, the more data the better. A possible explanation for this drop in performance is that the proportion of real, and thus artefact-free, data is reduced when the augmentation factor increases.

Augmentation factor (N) | IDE + Euclidean [39] (Rank 1 / mAP) | IDE + XQDA [39] (Rank 1 / mAP) | IDE + KISSME [39] (Rank 1 / mAP) | Discriminative Embedding [40] (Rank 1 / mAP)
No augmentation, N = 1 | 73.9 / 48.8 | 73.2 / 50.9 | 75.1 / 51.5 | 78.3 / 55.5
Ma et al. [20]* | 66.9 / 41.7 | 69.9 / 47.4 | 71.9 / 47.7 | 73.9 / 51.6
Esser et al. [3], N = 2 | 58.1 / 33.7 | 68.9 / 46.1 | 67.8 / 46.1 | 63.1 / 40.3
Liu et al. [10]**, N = 2 | 77.9 / 56.62 | - | - | -
Ours (Baseline), N = 2 | 68.1 / 42.82 | 69.57 / 46.43 | 69.45 / 45.88 | 70.69 / 46.58
Ours (Full), N = 2 | 78.9 / 56.9 | 78.2 / 57.9 | 79.7 / 58.3 | 81.4 / 60.3
Ours (Baseline), N = 10 | 59.8 / 34.5 | 60.9 / 38.2 | 61.9 / 37.8 | 61.6 / 39.4
Ours (Full), N = 10 | 78.5 / 55.9 | 77.8 / 57.9 | 79.5 / 58.1 | 80.6 / 61.3
TABLE IV: Influence of person-generation based data augmentation on the accuracy of different Re-ID methods on the Market-1501 test set (Rank 1 / mAP, in %). *Uses a different data-augmentation strategy (see details in the text). **These results have been provided by the authors of [10] via personal communication and are slightly different from those reported in [10], the latter being obtained by training the Re-ID network using LSR (see the text for more details).

5.6 Qualitative evaluation of the background-based conditioning

We provide in this section a qualitative evaluation of the background-conditioning variant of our method presented in Sec. 3.3. As mentioned above, for a fair comparison, we have not used background conditioning information in the experiments in Secs. 5.3, 5.4 and 5.5, since all the other methods we compare with do not use additional background information in their generation process.

In Fig. 8, we show some qualitative results combining different triplets of conditioning variables (appearance image, target pose, background image). For each input pair, we employ seven different target background images. The first five background images are extracted from the PRW dataset and the last two were gathered from the internet in order to be visually very different from the background images of the PRW dataset. When we use background images visually similar to what the network saw at training time, we observe that our approach is able to naturally integrate the foreground person into the corresponding background. Interestingly, the generated images do not correspond to a simple pixel-to-pixel superimposition of the foreground image on top of the conditioning background. For instance, by comparing the third and fourth background columns, we see that the network adapts the brightness of the foreground to the brightness of the background. The image contrast and the blurring level in the generated images depend on the conditioning background. In the fifth column, second row, the network generated the bike wheel in front of the person's leg. A similar effect can be observed in the second background column, in which the legs of the people are partially occluded by the bikes. However, when we use images that are far from the background-image distribution of the training set, our model fails to generate natural images. The backgrounds are correctly generated, but the persons are partially transparent.



Fig. 8: Qualitative results on the PRW dataset when conditioning on the background. We use three different pairs of conditioning appearance image and target pose. For each pair, we use five target background images extracted from the PRW dataset and two background images that are visually very different from the PRW backgrounds (last two columns).

5.7 Ablation study and qualitative analysis

Fig. 9: Qualitative results on the Market-1501 dataset. Columns 1, 2 and 3 represent the input of our model. We plot the target pose as a skeleton for the sake of clarity, but no joint-connectivity relation is actually exploited in our approach. Column 4 corresponds to the ground truth. The last four columns show the output of the Baseline, DSC, PercLoss and Full variants of our method.
Fig. 10: Qualitative results on the DeepFashion dataset with respect to the Baseline, DSC, PercLoss and Full variants of our method. Some images have been cropped to improve the visualization.
Model       | Market-1501: SSIM / IS / mask-SSIM / mask-IS / DS | DeepFashion: SSIM / IS
Baseline
DSC
PercLoss
Full
Real-Data
TABLE V: Quantitative ablation study on the Market-1501 and DeepFashion datasets.

In this section we present an ablation study to clarify the impact of each part of our proposal on the final performance. We first describe the compared methods, obtained by “amputating” important parts of the full-pipeline presented in Sec. 3. The discriminator architecture is the same for all the methods.

  • Baseline: We use the standard U-Net architecture [15] without deformable skip connections. The inputs of G and D, and the way pose information is represented (see the definition of tensor H in Sec. 3), are the same as in the full-pipeline. However, in G, the appearance image xa and the two pose tensors Ha and Hb are concatenated at the input layer. Hence, the encoder of G is composed of a single stream, whose architecture is the same as each of the two streams described in Sec. 4.3 (a minimal sketch of this single-stream input is given after this list).

  • DSC: The generator G is implemented as described in Sec. 3, introducing our Deformable Skip Connections (DSC). Both in DSC and in Baseline, training is performed using an L1 loss together with the adversarial loss.

  • PercLoss: This is DSC in which the L1 loss is replaced with the Perceptual loss proposed in [1]. This loss is computed using a layer of [29] chosen to have a receptive field as close as possible to the neighbourhood used in Eq. 8, and by computing the element-to-element difference in this layer without nearest-neighbour search.

  • Full: This is the full-pipeline whose results are reported in Tabs. I-IV, and in which we use the proposed nearest-neighbour loss (see Sec. 3.2).
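The difference between the Baseline and the two-stream generator of the full-pipeline can be illustrated with the following sketch of the single-stream input. The channel counts, the number of joints (18) and the layer size are assumptions made for illustration only; the actual encoder follows Sec. 4.3.

```python
import torch
import torch.nn as nn

N_JOINTS = 18       # assumed number of keypoints returned by the HPE
IMG_CHANNELS = 3

class SingleStreamInput(nn.Module):
    """Baseline variant: xa, Ha and Hb are concatenated along the channel axis
    and processed by a single encoder stream (no deformable skip connections)."""

    def __init__(self, out_channels=64):
        super().__init__()
        in_channels = IMG_CHANNELS + 2 * N_JOINTS   # xa + Ha + Hb
        self.first_conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, xa, Ha, Hb):
        x = torch.cat([xa, Ha, Hb], dim=1)          # concatenation at the input layer
        return self.first_conv(x)
```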

In Tab. V we report a quantitative evaluation on the Market-1501 and the DeepFashion datasets with respect to the four different versions of our approach. In most cases, there is a progressive improvement from Baseline to DSC to Full. Moreover, Full usually obtains better results than PercLoss. These improvements are particularly evident when looking at the DS metric. DS values on the DeepFashion dataset are omitted because they are all close to 1.

In Fig. 9 and Fig. 10 we show some qualitative results. These figures show the progressive improvement through the four variants which was quantitatively presented above. In fact, while pose information is usually well generated by all the methods, the texture generated by Baseline often does not correspond to the texture of the conditioning image or is blurred. In some cases, the improvement of Full with respect to Baseline is quite drastic, such as the drawing on the shirt of the girl in the second row of Fig. 10 or the stripes on the clothes of the persons in the third and fourth rows of Fig. 9.

Finally, Fig. 11 and Fig. 12 show some failure cases (badly generated images) of our method on the Market-1501 dataset and the DeepFashion dataset, respectively. Some common failure causes are:

  • Errors of the HPE [12]. For instance, see rows 2, 3 and 4 of Fig. 11 or the wrong right-arm localization in row 2 of Fig. 12.

  • Ambiguity of the pose representation. For instance, in row 3 of Fig. 12, the left elbow has been detected in the target image although it is actually hidden behind the body. Since the pose representation contains only 2D information (no depth or occlusion-related information), there is no way for the system to understand whether the elbow is behind or in front of the body. In this case our model generated the arm as if it were in front of the body, which corresponds to the most frequent situation in the training dataset.

  • Rare poses. For instance, row 1 of Fig. 12 shows a girl in an unusual rear view with a sharp 90-degree profile face. The generator mistakenly synthesized a neck where it should have “drawn” a shoulder. Note that rare poses are a difficult issue also for other methods (e.g., [2]).

  • Rare object appearance. For instance, the backpack in row 1 of Fig. 11 is light green, while most of the backpacks contained in the training images of the Market-1501 dataset are dark. Comparing this image with the one generated in the last row of Fig. 9 (where the backpack is black), we see that in Fig. 9 the colour of the shirt of the generated image is not blended with the backpack colour, while in Fig. 11 it is. We presume that the generator “understands” that a dark backpack is an object whose texture should not be transferred to the clothes of the generated image, while it is not able to generalize this knowledge to other backpacks.

  • Warping problems. This is an issue related to our specific approach (the deformable skip connections). The texture on the shirt of the conditioning image in row 2 of Fig. 12 is warped in the generated image. We presume this is due to the fact that in this case the affine transformations need to largely warp the texture details of the narrow surface of the profile shirt (conditioning image) in order to fit the much wider area of the target frontal pose.

Fig. 11: Examples of badly generated images on the Market-1501 dataset (the last four columns correspond to the Baseline, DSC, PercLoss and Full variants). See the text for more details.
Fig. 12: Examples of badly generated images on the DeepFashion dataset (the last four columns correspond to the Baseline, DSC, PercLoss and Full variants).

Combining the affine transformations. In our approach, we combine the different local affine transformations by selecting the maximum response, as specified in Eq. 4. Alternatively, we could use the average operator to combine the local affine transformations, leading to a soft combination. A third approach consists in combining the transformations via a linear combination. In that case, denoting by F_k(p) the feature map produced by the k-th local affine transformation at location p, the features are combined following:

\hat{F}(p) = \sum_{k=1}^{K} w_k(p) F_k(p),     (16)

where w_k(p) are the weights of the linear combination. The intuition behind this formulation is that, at each location p, the weights are used to select in a soft manner the relevant features among the F_k(p). In order to condition the weights on the feature maps, we feed the concatenation of the K tensors F_k along the channel axis into a 1x1 convolution layer that returns the K weight maps, which we normalize to sum to one at each location p. We compare these two soft approaches with our max-based approach in Tab. VI on the Market-1501 dataset. These results show that both the average and the linear-combination approaches perform well, but slightly worse than the max-based formulation of Eq. 4. The performance differences are especially clear when looking at the mask-based metrics and the detection scores. In addition, we observe that the linear-combination model under-performs both the average and the maximum approaches.

Market-1501
Combination SSIM IS mask-SSIM mask-IS DS
Average
Linear Comb.
Max
TABLE VI: Combining the affine transformations: quantitative ablation study on the Market-1501.
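The three combination strategies compared in Tab. VI can be summarized with the sketch below. The tensor shapes, the use of a softmax for the location-wise normalization and the 1x1 convolution size are illustrative assumptions; the sketch only shows how the K warped feature maps would be merged under each strategy.

```python
import torch
import torch.nn as nn

def combine_max(feature_maps):
    """Max-based combination (Eq. 4): keep the strongest response at each location."""
    return torch.stack(feature_maps, dim=0).max(dim=0).values

def combine_average(feature_maps):
    """Soft combination obtained by simply averaging the warped feature maps."""
    return torch.stack(feature_maps, dim=0).mean(dim=0)

class LinearCombination(nn.Module):
    """Soft combination of Eq. 16: location-wise weights predicted by a 1x1
    convolution and normalized to sum to one at each location."""

    def __init__(self, n_parts, channels):
        super().__init__()
        self.weight_conv = nn.Conv2d(n_parts * channels, n_parts, kernel_size=1)

    def forward(self, feature_maps):                 # list of K tensors (B, C, H, W)
        stacked = torch.stack(feature_maps, dim=1)   # (B, K, C, H, W)
        weights = self.weight_conv(torch.cat(feature_maps, dim=1))  # (B, K, H, W)
        weights = torch.softmax(weights, dim=1)      # sum to one at each location
        return (weights.unsqueeze(2) * stacked).sum(dim=1)          # (B, C, H, W)
```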

Robustness to HPE errors.

We now evaluate the robustness of our model to HPE errors, using the following protocol. First, we train a model in the standard way. Then, at test time, we simulate HPE errors by randomly perturbing the limb positions predicted by the HPE. More precisely, the source poses are perturbed by adding isotropic zero-mean Gaussian noise to the arm and leg landmarks, with standard deviations varying from 0 to 25 pixels. We add noise to the arm and leg landmarks because these correspond to the most frequent HPE errors. The performances are reported in Fig. 13 in terms of mask-SSIM and mask-IS.
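A minimal sketch of this perturbation protocol is given below. The landmark indexing (which joints count as arm and leg keypoints) is a hypothetical 18-joint layout and should be adapted to the actual HPE output.

```python
import numpy as np

# indices of the arm and leg keypoints in a hypothetical 18-joint layout
ARM_LEG_INDICES = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]

def perturb_pose(landmarks, sigma, rng=None):
    """Simulate HPE errors: add isotropic zero-mean Gaussian noise with standard
    deviation `sigma` (in pixels) to the arm and leg landmarks only.

    landmarks: array of shape (n_joints, 2) with (x, y) coordinates.
    """
    rng = rng or np.random.default_rng()
    noisy = landmarks.astype(float)  # astype returns a copy, the input is untouched
    noisy[ARM_LEG_INDICES] += rng.normal(0.0, sigma, size=(len(ARM_LEG_INDICES), 2))
    return noisy

# e.g., evaluate the trained model with sigma ranging from 0 to 25 pixels
```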

Fig. 13: Robustness to HPE errors: at test time we randomly perturb the estimated limb positions with Gaussian noise of a given standard deviation (in pixels).

We observe that mask-SSIM decreases consistently as the HPE error increases. Since mask-SSIM measures the reconstruction quality, this shows that an accurate HPE helps to reconstruct the details of the person in the input image. Conversely, we observe that the mask-IS values are not much affected by the HPE errors. This can be explained by the fact that IS-based metrics measure image quality and diversity but not reconstruction. This experiment shows that HPE errors impact the ability of the model to preserve the appearance of the conditioning image, but do not significantly affect the quality and diversity of the generated images. Nevertheless, even in the case of a very noisy HPE (e.g., a standard deviation of 25 pixels), our model performs similarly to the Baseline model in Tab. V, which does not use deformable skip connections. This shows that our proposed deformable skip connections are robust to HPE errors.

Choice of the auxiliary function g. Our nearest-neighbour loss uses an auxiliary function g. In order to measure the impact of the choice of g, in Table VII we compare the scores obtained when using different layers of the VGG-19 network to implement it. We observe that, when our nearest-neighbour loss is computed directly in the pixel space, we obtain good mask-SSIM and mask-IS scores but very poor detection scores. This shows that assessing the reconstruction quality in the pixel space leads to images with a poor general structure. Conversely, when we use a higher network layer to implement g, we obtain lower mask-SSIM and mask-IS scores but a better detection score. Note that using a higher layer for g also increases the training computation. Finally, our proposed choice of g reaches the highest detection score and the best trade-off among the other metrics.

Market-1501
function SSIM IS mask-SSIM mask-IS DS
Pixel space
Block
Block
Block
TABLE VII: Choice of the auxiliary function g: quantitative ablation study on the Market-1501.
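To make the role of g concrete, the following sketch computes a nearest-neighbour reconstruction term on feature maps produced by an arbitrary g (the identity for the pixel space, or a truncated VGG-19 for a convolutional block). The neighbourhood size and the use of an L1 distance are assumptions made for illustration, not a reproduction of Eq. 8.

```python
import torch
import torch.nn.functional as F

def nearest_neighbour_term(gen_feat, target_feat, neighbourhood=5):
    """gen_feat, target_feat: (B, C, H, W) maps obtained by applying g to the
    generated and target images. For each location of gen_feat, take the minimum
    distance to the target features inside a neighbourhood x neighbourhood window."""
    pad = neighbourhood // 2
    # unfold gathers, for every location, the target features of its neighbourhood
    patches = F.unfold(target_feat, kernel_size=neighbourhood, padding=pad)
    B, C, H, W = gen_feat.shape
    patches = patches.view(B, C, neighbourhood * neighbourhood, H * W)
    gen = gen_feat.view(B, C, 1, H * W)
    dists = (gen - patches).abs().sum(dim=1)     # L1 distance to every neighbour
    return dists.min(dim=1).values.mean()        # min over the neighbourhood
```

Using the raw images as gen_feat and target_feat corresponds to the "Pixel space" row of Table VII, while feeding the activations of a VGG-19 block corresponds to the other rows.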

Sensitivity to the λ parameter. As training objective, we use a combination of a reconstruction loss and an adversarial loss, as specified in Eq. (10). We now evaluate the impact of the parameter λ that controls the balance between the two losses. Quantitative results are reported in Table VIII. We observe that a low λ value leads to better SSIM scores but lower IS values. Indeed, since SSIM measures reconstruction quality, a low λ value reduces the impact of the adversarial loss and therefore generates images with a high reconstruction quality; consequently, it also reduces diversity and realism, leading to poorer IS values. Conversely, with a high λ value, we obtain high IS values but low SSIM scores. We further investigate the impact of λ by reporting a qualitative comparison in Fig. 14. Consistently with the quantitative comparison, we observe that lower λ values result in smoother images without texture details, whereas higher values generate detailed images but with more artifacts.

Market-1501
λ SSIM IS mask-SSIM mask-IS DS
0.1
0.01
0.001 0.245 3.566 0.779
TABLE VIII: Quantitative ablation study on the Market-1501: sensitivity to the λ parameter.
Fig. 14: Qualitative ablation study: impact of the λ parameter.
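The balancing role of λ can be written compactly as below. The exact form of Eq. (10) is not reproduced here; the weighting in this sketch (λ scaling the adversarial term) is an assumption that merely matches the behaviour described above.

```python
def generator_objective(reconstruction_term, adversarial_term, lam):
    """Weighted training objective: with this (assumed) weighting, a small lam
    down-weights the adversarial term and favours smooth, faithful reconstructions
    (higher SSIM), while a large lam yields sharper but more artefact-prone images
    (higher IS, lower SSIM)."""
    return reconstruction_term + lam * adversarial_term

# e.g., the values compared in Table VIII: lam in (0.1, 0.01, 0.001)
```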

Exploiting symmetry.  In Sec. 4.2, we explained how we exploit human-body symmetry to improve generation: when a missing region involves a limb whose symmetric body part is detected, we copy the features from the twin part. We now evaluate the impact of this strategy. For comparison, we introduce a model that does not use symmetry: the regions and the affine transformations corresponding to undetected body parts are not computed, and the corresponding region masks are filled with 0. Results are reported in Table IX. We observe that exploiting human-body symmetry as proposed consistently improves all the metrics. Although limited, these consistent gains clearly show the benefit of this strategy.

Market-1501
Symmetry SSIM IS mask-SSIM mask-IS DS
Without
With
TABLE IX: Quantitative ablation study on the Market-1501: Exploiting symmetry.
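The symmetry fallback described above can be sketched as follows; the part names and the left/right pairing are illustrative assumptions, and the actual regions and affine transformations are computed as described in Secs. 3 and 4.2.

```python
# symmetric ("twin") body parts used as a fallback when a part is undetected
SYMMETRIC_PART = {
    "left_upper_arm": "right_upper_arm", "right_upper_arm": "left_upper_arm",
    "left_lower_arm": "right_lower_arm", "right_lower_arm": "left_lower_arm",
    "left_upper_leg": "right_upper_leg", "right_upper_leg": "left_upper_leg",
    "left_lower_leg": "right_lower_leg", "right_lower_leg": "left_lower_leg",
}

def resolve_part(part, detected, affine_transforms, region_masks, zero_mask):
    """Return the affine transformation and region mask to use for `part`.

    With symmetry: if the part is undetected but its twin is detected, reuse the
    twin's transformation and mask. Without symmetry (the ablated variant), an
    undetected part gets no transformation and an all-zero region mask."""
    if part in detected:
        return affine_transforms[part], region_masks[part]
    twin = SYMMETRIC_PART.get(part)
    if twin is not None and twin in detected:
        return affine_transforms[twin], region_masks[twin]
    return None, zero_mask
```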

6 Conclusions

In this paper we presented a GAN-based approach for generating images of persons conditioned on both appearance and pose. We introduced two novelties: deformable skip connections and a nearest-neighbour loss. The former are used to solve common problems in U-Net based generators when dealing with deformable objects, while the latter alleviates a different type of misalignment between the generated image and the ground-truth image.

Our experiments, based on both automatic evaluation metrics and human judgements, show that the proposed method outperforms or is comparable with previous work on this task. Importantly, we show that, contrary to other generic person-generation methods, our Deformable GANs can be used to significantly improve the accuracy of different Re-ID systems via data augmentation, and that the obtained performance boost is even higher than that of a state-of-the-art Re-ID-specific data-augmentation approach.

Although we tested our Deformable GANs on the specific task of human image generation, only a few assumptions refer to the human body, and we believe that our proposal can easily be extended to other deformable-object generation tasks.

Acknowledgments

We want to thank the NVIDIA Corporation for the donation of the GPUs used in this project.

References

  • [1] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in ECCV, 2016.
  • [2] L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool, “Pose guided person image generation,” in NIPS, 2017.
  • [3] P. Esser, E. Sutter, and B. Ommer, “A variational u-net for conditional appearance and shape generation,” in CVPR, 2018, pp. 8857–8866.
  • [4] A. Siarohin, E. Sangineto, S. Lathuilière, and N. Sebe, “Deformable gans for pose-based human image generation,” in CVPR, 2018.
  • [5] W. Wang, X. Alameda-Pineda, D. Xu, P. Fua, E. Ricci, and N. Sebe, “Every smile is unique: Landmark-guided diverse smile generation,” in CVPR, Jun 2018.
  • [6] Q. Sun, L. Ma, S. Joon Oh, L. V. Gool, B. Schiele, and M. Fritz, “Natural and effective obfuscation by head inpainting,” in CVPR, Jun 2018.
  • [7] J. Walker, K. Marino, A. Gupta, and M. Hebert, “The pose knows: Video forecasting by generating pose futures,” in ICCV, 2017.
  • [8] B. Zhao, X. Wu, Z. Cheng, H. Liu, and J. Feng, “Multi-view image generation from a single-view,” arXiv:1704.04886, 2017.
  • [9] C. Si, W. Wang, L. Wang, and T. Tan, “Multistage adversarial losses for pose-based human image synthesis,” in CVPR, 2018, pp. 118–126.
  • [10] J. Liu, B. Ni, Y. Yan, P. Zhou, S. Cheng, and J. Hu, “Pose transferrable person re-identification,” in CVPR, 2018, pp. 4099–4108.
  • [11] Z. Zheng, L. Zheng, and Y. Yang, “Unlabeled samples generated by GAN improve the person re-identification baseline in vitro,” in ICCV, 2017.
  • [12] Z. Cao, T. Simon, S. Wei, and Y. Sheikh, “Realtime multi-person 2D pose estimation using part affinity fields,” in CVPR, 2017.
  • [13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in NIPS, 2014.
  • [14] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” in ICLR, 2014.
  • [15] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in CVPR, 2017.
  • [16] B. Hariharan and R. B. Girshick, “Low-shot visual recognition by shrinking and hallucinating features,” in ICCV, 2017, pp. 3037–3046.
  • [17] Y. Zhu, M. Elhoseiny, B. Liu, and A. M. Elgammal, “Imagine it for me: Generative adversarial approach for zero-shot learning from noisy texts,” arXiv:1712.01381, 2017.
  • [18] Y. Xian, T. Lorenz, B. Schiele, and Z. Akata, “Feature generating networks for zero-shot learning,” CVPR, 2018.
  • [19] Y. Wang, R. B. Girshick, M. Hebert, and B. Hariharan, “Low-shot learning from imaginary data,” CVPR, 2018.
  • [20] L. Ma, Q. Sun, S. Georgoulis, L. Van Gool, B. Schiele, and M. Fritz, “Disentangled person image generation,” in CVPR, 2018, pp. 99–108.
  • [21] C. Lassner, G. Pons-Moll, and P. V. Gehler, “A generative model of people in clothing,” in ICCV, 2017.
  • [22] G. Balakrishnan, A. Zhao, A. V. Dalca, F. Durand, and J. Guttag, “Synthesizing images of humans in unseen poses,” in CVPR, 2018.
  • [23] N. Neverova, R. Alp Guler, and I. Kokkinos, “Dense pose transfer,” in ECCV, 2018.
  • [24] R. A. Guler, N. Neverova, and I. Kokkinos, “Densepose: Dense human pose estimation in the wild,” in CVPR, 2018.
  • [25] M. Zanfir, A.-I. Popa, A. Zanfir, and C. Sminchisescu, “Human appearance transfer,” in CVPR, 2018, pp. 5391–5399.
  • [26] M. Jaderberg, K. Simonyan, A. Zisserman et al., “Spatial transformer networks,” in NIPS, 2015.
  • [27] J. Liao, Y. Yao, L. Yuan, G. Hua, and S. B. Kang, “Visual attribute transfer through deep image analogy,” ACM Trans. Graph., vol. 36, no. 4, 2017.
  • [28] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” CVPR, 2016.
  • [29] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, 2014.
  • [30] L. Zheng, H. Zhang, S. Sun, M. Chandraker, and Q. Tian, “Person re-identification in the wild,” arXiv:1604.02531, 2016.
  • [31] D. Ulyanov, A. Vedaldi, and V. S. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv:1607.08022, 2016.
  • [32] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian, “Scalable person re-identification: A benchmark,” in ICCV, 2015.
  • [33] Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang, “Deepfashion: Powering robust clothes recognition and retrieval with rich annotations,” in CVPR, 2016.
  • [34] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE TIP, 2004.
  • [35] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training gans,” in NIPS, 2016.
  • [36] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in CVPR, 2016.
  • [37] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in ECCV, 2016.
  • [38] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results.”
  • [39] L. Zheng, Y. Yang, and A. G. Hauptmann, “Person re-identification: Past, present and future,” arXiv:1610.02984, 2016.
  • [40] Z. Zheng, L. Zheng, and Y. Yang, “A discriminatively learned CNN embedding for person reidentification,” TOMCCAP, vol. 14, no. 1, pp. 13:1–13:20, 2018.
  • [41] S. Liao, Y. Hu, X. Zhu, and S. Z. Li, “Person re-identification by local maximal occurrence representation and metric learning,” in CVPR, 2015, pp. 2197–2206.
  • [42] M. Koestinger, M. Hirzer, P. Wohlhart, P. M. Roth, and H. Bischof, “Large scale metric learning from equivalence constraints,” in CVPR, 2012, pp. 2288–2295.