The person image generation task, as proposed by Ma et al. , consists of generating “person images in arbitrary poses, based on an image of that person and a novel pose”. This task has recently attracted considerable interest in the community because of its potential applications, such as computer-graphics-based manipulations or data augmentation for training person re-identification [41, 16] or human pose estimation systems. Previous work in this field [19, 15, 39, 26, 3, 25] assumes that the generation task is conditioned on two variables: the appearance image of a person (we call this variable the source image) and a target pose, automatically extracted from a different image of the same person using a Human Pose Estimator (HPE).
Using abundant person-specific data, the quality of the generated images can potentially be improved. For instance, a training dataset specific to each target person can be recorded . Another solution is to build a full 3D model of the target person . However, these approaches lack flexibility and require expensive data collection.
In this work we propose a different direction which relies on a small, variable number of source images (e.g., from 2 to 10). We call the corresponding task multi-source human image generation. As far as we know, no previous work has investigated this direction yet. We believe this generalization of the person-image generation task is interesting because multiple source images, when available, can provide richer appearance information. This data redundancy can be exploited by the generator to compensate for partial occlusions, self-occlusions or noise in the source images. More formally, we define our multi-source human image generation task as follows. We assume that a set of () source images is given and that these images depict the same person with the same overall appearance (e.g., the same clothes, haircut, etc.). In addition, a single target body pose is provided, typically extracted from a target image not contained in . The multi-source human image generation task consists of generating a new image with an appearance similar to the general appearance pattern represented in but in the pose (see Fig. 1). Note that is not fixed a priori; we believe these task characteristics are important for practical applications, in which the same dataset can contain multi-source images of the same person with unknown and variable cardinalities.
Most previous methods for single-source human image generation [26, 15, 19, 34, 39, 9, 25, 16] are based on variants of the U-Net generator proposed by Isola et al. . A common, general idea in these methods is that the conditioning information (e.g., the source image and/or the target pose) is transformed into the desired synthetic image using the U-Net skip connections, which shuttle information between encoder and decoder layers of corresponding resolution (see Sec. 3). However, when the cardinality
of the source images is not fixed a priori, as in our proposed task, a “plain” U-Net architecture cannot be used, since the number of input neurons is fixed a priori. For this reason, we propose to modify the U-Net generator by introducing an attention mechanism. Attention is widely used to represent a variable-length input in a deep network [2, 36, 33, 32, 10, 31] and, without loss of generality, it can be thought of as a mechanism in which multiple input representations are averaged (i.e., summed) using some saliency criterion emphasizing the importance of specific representations with respect to the others. In this paper we propose to use attention to let the generator decide which specific image locations of each source image are the most trustworthy and informative at different convolutional layer resolutions. Specifically, we keep the standard encoder-decoder partition typical of the U-Net (see Sec. 3) but we propose three novelties. First, we introduce an attention-based decoder () which fuses the feature representations of each source. Second, we encode the target pose and each source image with an encoder () which processes each source image independently of the others and locally deforms each , performing a target-pose driven geometric “normalization” of . Once normalized, the source images can be compared with each other in , assigning location- and source-specific saliency weights which are used for fusion. Finally, we use a multi-source adversarial loss that employs a single conditional discriminator to handle an arbitrary number of source images.
2 Related work
Most of the image generation approaches are based either on Variational Autoencoders (VAEs) or on Generative Adversarial Networks (GANs) . GANs have been extended to conditional GANs , where the image generation depends on some input variable. For instance, in , an input image is “translated” into a different representation using a U-Net generator.
The person generation task (Sec. 1) is a specific case of a conditioned generation process, where the conditioning variables are the source and the target images. Most previous works use conditional GANs and a U-Net architecture. For instance, Ma et al.  propose a two-step training procedure: pose generation and texture refinement, both obtained using a U-Net architecture. Recently, this work has been extended in  by learning disentangled representations of the pose, the foreground and the background. Following , several methods for pose-guided image generation have recently been proposed [15, 39, 26, 3, 25]. All these approaches are based on the U-Net. However, the original U-Net, having a fixed number of input images, cannot be directly used for multi-source image generation as defined in Sec. 1. Siarohin et al.  modify the U-Net using deformable skip connections which align the input image features with the target pose. In this work we use an encoder similar to their proposal in order to align the source images with the target pose, but we introduce a pose stream which measures the similarity between the source and the target pose. Moreover, similarly to the aforementioned works,  is also single-source and uses a “standard” U-Net decoder .
Other works on image generation rely on strong supervision during training or testing. For instance, Neverova et al.  use a dense-pose estimator  trained using image-to-surface correspondences . Dong et al.  use an externally trained model for image segmentation in order to improve the generation process. Zanfir et al.  estimate the human 3D pose using meshes and identify the mesh regions that can be transferred directly from the input image mesh to the target mesh. However, these methods cannot be directly compared with most other works, including ours, which rely only on a sparse keypoint detection. Hard data-collection constraints are also used in , where a person-specific and a background-specific model are learned for video generation. This approach requires that the target person moves for several minutes, covering all the possible poses, and that a new model is trained specifically for each target person. Similarly, Liu et al.  compute the 3D human model by combining several minutes of video. In contrast with these works, our approach is based on fusing only a few source images, in random poses and in variable number, which we believe is important because it makes it possible to exploit existing datasets where multiple images are available for the same person. Moreover, our network does not need to be trained for each specific person.
Sun et al.  propose a multi-source image generation approach whose goal is to generate a new image according to a target-camera position. Note that this task is different from what we address in this paper (Sec. 1), since a human pose describes an articulated object by means of a set of joint locations, while a camera position describes a viewpoint change but does not deal with source-to-target object deformations. Specifically, Sun et al.  represent the camera pose with either a discrete label (e.g., left, right, etc.) or a 6DoF vector, and then generate a pixel-flow which estimates the “movement” of each source-image pixel. Multiple images are integrated using a Convolutional LSTM and confidence maps. Most of the reported results concern 3D synthetic (rigid) objects, while a few real scenes are also used but only with a limited viewpoint change.
3 Attention-based U-Net
We first introduce some notation and provide a general overview of the proposed method. Referring to the multi-source human image generation task defined in Sec. 1, we assume a training set is given, with each sample , where is a set of source images of the same person sharing a common appearance and is the target image. Every sample image has the same size . Note that the source-set size is variable and depends on the person identity . Given an image depicting a person, we represent the body pose as a set of 2D keypoints , where each is the pixel location of a body joint in . The body pose can be estimated from an image using an external HPE. The target pose is denoted by .
Our method is based on a conditional GAN approach, where the generator follows a general U-Net architecture  composed of an encoder and a decoder. A U-Net encoder is a sequence of convolutional and pooling layers, which progressively decrease the spatial resolution of the input representation. As a consequence, a specific activation in a given encoder layer has a receptive field that progressively increases with the layer depth, gradually encoding “contextual” information. Conversely, the decoder is composed of up-convolution layers and, importantly, each decoder layer is connected to the corresponding layer in the encoder by means of skip connections, which concatenate the encoder-layer feature maps with the decoder-layer feature maps . Finally, Isola et al.  use a conditional discriminator in order to discriminate between real and fake “image transformations”.
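As a shape-level illustration of this encoder-decoder pattern (a minimal sketch, not the actual network: average pooling and nearest-neighbour up-sampling stand in for the learned convolutions), the skip connections can be seen as concatenations of same-resolution feature maps:

```python
import numpy as np

def down(x):
    # 2x average pooling: stand-in for a strided encoder convolution
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def up(x):
    # nearest-neighbour 2x up-sampling: stand-in for an up-convolution
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.random.rand(8, 8, 4)              # input image features
e1 = down(x)                             # encoder block 1: (4, 4, 4)
e2 = down(e1)                            # encoder block 2: (2, 2, 4)
d2 = up(e2)                              # decoder block 2: back to (4, 4, 4)
d1 = np.concatenate([d2, e1], axis=-1)   # skip connection: concat same-resolution maps
```

Each decoder tensor thus carries both coarse contextual information (from below) and fine-grained detail (from the encoder at the same resolution).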
We modify the aforementioned framework in three main aspects. First, we use replicas of the same encoder in order to encode the geometrically normalized source images together with the target pose. Second, we propose an attention-based decoder that fuses the feature maps provided by the encoders. Finally, we propose a multi-source adversarial loss .
Fig. 2 shows the architecture of . Given a set of source images, encodes each source image together with the target pose. Similarly to the standard U-Net, for a given source image , each encoder outputs feature maps for different-resolution blocks. Each is aligned with the target pose (Sec. 3.3). This alignment acts as a geometric “normalization” of each with respect to and makes it possible to compare with (). Finally, each tensor jointly represents pose and appearance information at resolution .
3.2 The Attention-based Decoder
is composed of blocks. Similarly to the standard U-Net, the spatial resolution increases symmetrically with respect to the blocks in . Therefore, to highlight this symmetry, the decoder blocks are indexed from R to 1. In the -th block, the image to be generated is represented by a tensor . This representation is progressively refined in the subsequent blocks using an attention-based fusion of . We call the latent representation of at resolution , and is recursively defined starting from until as follows:
The initial latent representation is obtained by averaging the output tensors of the last layer of (Fig. 2):
Note that each spatial position in corresponds to a large receptive field in the original image resolution which, if is sufficiently large, may include the whole initial image. As a consequence, we can think of as encoding general contextual information on .
For each subsequent block , is computed as follows. Given , we first perform an up-sampling on , followed by a convolution layer, in order to obtain a tensor . is then fed to an attention mechanism in order to estimate how the different tensors should be fused into a single final tensor :
where denotes the element-wise product and is the proposed attention module.
In order to reduce the number of weights involved in computing Eq. (2), we factorize using a spatial attention (which is channel-independent) and a channel-attention vector (which is spatially independent). Specifically, at each spatial coordinate , compares the current latent representation with and assigns a saliency weight to which represents how significant/trustworthy is with respect to . The function is implemented by taking the concatenation of and as input and then applying a convolution layer. Similarly, is implemented by means of global average pooling on the concatenation of and , followed by two fully-connected layers. We employ sigmoid activations on both and . Combining and , we obtain:
Importantly, is not spatially or channel-wise normalized. This is because a normalization would enforce that, overall, each source image is used in the same proportion. Conversely, without normalization, given, for instance, a non-informative source (e.g., completely black), the attention module can correspondingly produce a null saliency tensor . Nevertheless, the final attention tensor in Eq. (2) is normalized in order to assign a relative importance to each source:
Finally, the new latent representation at resolution is obtained by concatenating with :
where is the tensor concatenation along the channel axis.
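The full decoder block, combining the factorized attention, the source-axis normalization and the final concatenation, can be sketched as follows (a NumPy illustration with hypothetical shapes; random matrices stand in for the learned convolutional and fully-connected weights):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical shapes: K sources, an H x W grid, C channels.
rng = np.random.default_rng(0)
K, H, W, C = 3, 4, 4, 8
phi = rng.standard_normal((H, W, C))        # current latent representation
feats = rng.standard_normal((K, H, W, C))   # aligned encoder tensors, one per source
w_sp = rng.standard_normal(2 * C)           # spatial attention weights (channel-independent)
w_ch = rng.standard_normal((2 * C, C))      # channel attention weights (spatially independent)

att = np.empty((K, H, W, C))
for k in range(K):
    cat = np.concatenate([phi, feats[k]], axis=-1)   # (H, W, 2C)
    a_sp = sigmoid(cat @ w_sp)[..., None]            # per-location saliency, (H, W, 1)
    a_ch = sigmoid(cat.mean(axis=(0, 1)) @ w_ch)     # global pooling + FC, (C,)
    att[k] = a_sp * a_ch                             # factorized saliency tensor

att_norm = att / (att.sum(axis=0, keepdims=True) + 1e-8)  # normalize over sources only
fused = (att_norm * feats).sum(axis=0)                    # element-wise product + sum
new_phi = np.concatenate([phi, fused], axis=-1)           # skip-style concatenation
```

Note that the per-source saliencies stay unnormalized until the final step, so a non-informative source can simply receive near-zero weight everywhere.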
3.3 The Pose-based Encoder
Rather than using a generic convolutional encoder as in , we use a task-specific encoder designed to work synergistically with our proposed attention model. Our pose-based encoder is similar to the encoder proposed in , but it also contains a dedicated stream which is used to compare the source and the target pose with each other.
In more detail, is composed of two streams (see Fig. 3). The first stream, referred to as the pose stream, is used to represent pose information and to compare the target pose with the pose of the person in the source image. Specifically, the target pose is represented using a tensor composed of heatmaps . For each joint , a heatmap is computed using a Gaussian kernel centered at . Similarly, given , we extract the pose using  and describe it using a tensor . The tensors and are concatenated and input to the pose stream, which is composed of a sequence of convolutional and pooling layers. The purpose of the pose stream is twofold. First, it provides the target pose to the decoder. Second, it encodes the similarity between the -th source pose and the target pose. This similarity is of crucial importance for our attention mechanism (Sec. 3.2), since a source image with a pose similar to the target pose is likely more trustworthy for transferring appearance information to the final generated image. For instance, a leg in with a pose closer to than the corresponding leg in should most likely be preferred for encoding the leg appearance.
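The pose tensor described above can be built as follows (a minimal sketch; the joint coordinates and the Gaussian bandwidth `sigma` are hypothetical):

```python
import numpy as np

def keypoint_heatmaps(keypoints, height, width, sigma=6.0):
    # One Gaussian heatmap per 2D joint; peak value 1 at the joint location.
    ys, xs = np.mgrid[0:height, 0:width]
    maps = []
    for (x, y) in keypoints:
        d2 = (xs - x) ** 2 + (ys - y) ** 2
        maps.append(np.exp(-d2 / (2.0 * sigma ** 2)))
    return np.stack(maps, axis=-1)  # (height, width, num_joints)

# e.g., two joints on a 128x64 Market-1501-sized image (made-up coordinates)
hm = keypoint_heatmaps([(30, 20), (34, 60)], height=128, width=64)
```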
The second stream, called source stream, takes as input the concatenation of the RGB image and its pose representation . is provided as input to the source stream in order to guide the source-stream convolutional layers in extracting relevant information which may depend on the joint locations. The output of each convolutional layer of the source stream is a tensor (green blocks in Fig. 3). This tensor is then deformed according to the difference between and (the circles in Fig. 3). Specifically, we use body part-based affine deformations as in  to locally deform the source-stream feature maps at each given layer and then concatenate the obtained tensor with the corresponding-layer pose-stream tensor. In this way we get a final tensor for each of the different layers in (). Each is a representation of aligned with and it is obtained independently of .
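The part-based deformations reduce, for each body part, to an affine warp of the feature maps; a simplified single-affine, nearest-neighbour version can be sketched as follows (hypothetical shapes; the actual model warps each body part with its own affine transform):

```python
import numpy as np

def affine_warp(feat, A, t):
    # Inverse-mapping warp: destination pixel (x, y) samples the source
    # location A @ (x, y) + t, with nearest-neighbour rounding.
    H, W, _ = feat.shape
    out = np.zeros_like(feat)
    ys, xs = np.mgrid[0:H, 0:W]
    src = np.stack([xs, ys], axis=-1) @ A.T + t
    sx = np.round(src[..., 0]).astype(int)
    sy = np.round(src[..., 1]).astype(int)
    valid = (sx >= 0) & (sx < W) & (sy >= 0) & (sy < H)
    out[valid] = feat[sy[valid], sx[valid]]   # out-of-range locations stay zero
    return out

feat = np.arange(24, dtype=float).reshape(4, 3, 2)
same = affine_warp(feat, np.eye(2), np.zeros(2))  # identity transform leaves feat unchanged
```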
Given a set of source images, we apply replicas of the encoder to each producing the set of output tensors that are input to the decoder described in Sec.3.2.
We train the whole network in an end-to-end fashion, combining a reconstruction loss with an adversarial loss. For the reconstruction loss, we use the nearest-neighbour loss introduced in , which exploits the convolutional maps of an external network (VGG-19, trained on ImageNet) at the original image resolution in order to compare each location of the generated image with a local neighbourhood of the ground-truth image . This reconstruction loss is more robust to small spatial misalignments between and than other common losses such as the loss.
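The nearest-neighbour reconstruction loss can be illustrated as follows (a simplified sketch: an L1 distance on raw feature maps stands in for the VGG-19-based version, and the neighbourhood radius `n` is hypothetical):

```python
import numpy as np

def nn_loss(gen, gt, n=1):
    # For every location of `gen`, take the minimum L1 distance to `gt`
    # within an n-pixel neighbourhood, then average over locations.
    H, W, _ = gen.shape
    pad = np.pad(gt, ((n, n), (n, n), (0, 0)), mode='edge')
    total = 0.0
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * n + 1, j:j + 2 * n + 1]   # local neighbourhood
            d = np.abs(patch - gen[i, j]).sum(axis=-1)      # L1 per offset
            total += d.min()                                # best-matching offset
    return total / (H * W)
```

In this sketch, content shifted by up to `n` pixels with respect to the ground truth is largely not penalized, unlike a plain pixel-wise loss.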
On the other hand, in our multi-source problem, the employed adversarial loss has to handle a varying number of sources. We use a single-source discriminator conditioned on only one source image . More precisely, we use discriminators that share their parameters and independently process each . Each takes as input the concatenation of four tensors: , where is either the ground-truth real image or the generated image . Differently from other multi-source losses [37, 1, 22], we employ a conditional discriminator in order to exploit the information contained in the source image and the pose heatmaps. The GAN loss for the source image is defined as:
where and, with a slight abuse of notation, means the expectation computed over pairs of single-source and target image extracted at random from the training set . Using Eq. (6), the multi-source adversarial loss () is defined as:
Putting all together, the final training loss is given by:
where the weight is set to in all our experiments.
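As a toy illustration of the multi-source adversarial loss, the per-source conditional GAN losses can be averaged over the sources; the discriminator outputs below are made-up numbers:

```python
import numpy as np

def bce_gan_loss(d_real, d_fake):
    # Standard conditional GAN discriminator loss for one source image
    eps = 1e-8
    return -(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

# hypothetical discriminator outputs for K = 3 source images
d_real = np.array([0.9, 0.8, 0.85])   # D(real target | k-th source)
d_fake = np.array([0.2, 0.3, 0.25])   # D(generated image | k-th source)

# multi-source loss: average of the K single-source losses
loss = bce_gan_loss(d_real, d_fake).mean()
```

Because the single-source losses are simply averaged, the same shared-parameter discriminator handles any number of sources.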
[Table 1: quantitative comparison with the single-source methods of Ma et al., Ma et al., Esser et al. and Siarohin et al.]
In this section we evaluate our method both qualitatively and quantitatively, adopting the evaluation protocol proposed by Ma et al. . We train and for 60k iterations, using the Adam optimizer (learning rate: , , ). We use instance normalization  as recommended in . The networks used for and have the same convolutional-layer dimensions and normalization parameters as in . The up-convolutional layers of also have the same dimensions as the corresponding decoder used in . Finally, the number of hidden-layer neurons used to implement (Sec. 3.2) is . For a fair comparison with single-source person generation methods [19, 20, 9, 26], we adopt the HPE proposed in .
Although there is no constraint on the cardinality of the source images , in order to simplify the implementation, we train and test our networks in different steps, each step having fixed for all in . Specifically, we initially train , and with . Then, we fine-tune the model with the desired value, except for the single-source experiments, where (see Sec. 4.4).
The person re-identification Market-1501 dataset  is composed of 32,668 images of 1,501 different persons captured by 6 surveillance cameras. This dataset is challenging because of the high diversity in pose, background, viewpoint and illumination, and because of the low-resolution images (128×64). To train our model, we need tuples of images of the same person in different poses. As this dataset is relatively noisy, we follow the preprocessing described in . Images in which no human body is detected by the HPE are removed. Other methods [19, 20, 9, 26] generate all the possible pairs for each identity. However, in our approach, since we consider tuples of size ( sources and 1 target image), considering all the possible tuples is computationally infeasible. In addition, Market-1501 suffers from a high person-identity imbalance, and computing all the possible tuples would exponentially increase this imbalance. Hence, we generate tuples randomly in such a way that we obtain the same identity distribution as obtained when sampling all the possible pairs. This solution also allows for a fair comparison with single-source methods, which sample pairs. In total, we obtain 263K tuples for training. For testing, following , we randomly select 12K tuples, with no person in common between the training and the test splits.
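The tuple-sampling protocol described above can be sketched as follows (hypothetical function and variable names; weighting identities by their number of ordered pairs mimics pair-based sampling):

```python
import random

def sample_tuples(images_by_id, k, num_tuples, seed=0):
    # Sample (k sources + 1 target) tuples so that each identity appears
    # in proportion to its number of ordered (source, target) pairs.
    rng = random.Random(seed)
    ids = list(images_by_id)
    weights = [len(images_by_id[i]) * (len(images_by_id[i]) - 1) for i in ids]
    tuples = []
    for _ in range(num_tuples):
        pid = rng.choices(ids, weights=weights)[0]
        imgs = rng.sample(images_by_id[pid], k + 1)   # distinct images, same person
        tuples.append((imgs[:k], imgs[k]))            # (sources, target)
    return tuples
```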
The DeepFashion dataset (In-shop Clothes Retrieval Benchmark)  consists of 52,712 clothes images with a resolution of 256×256 pixels. For each outfit, we have about 5 images with different viewpoints and poses. Thus, we only perform experiments using up to sources. Following the training/test split adopted in , we create tuples of images following the same protocol as for the Market-1501 dataset. After removing the images where the HPE does not detect any human body, we finally collect about 89K tuples for training and 12K tuples for testing.
Choosing evaluation metrics for generation tasks is a problem in itself. In our experiments we adopt the evaluation metrics proposed in , which are used by most of the single-source methods. Specifically, we use: Structural Similarity (SSIM) , Inception Score (IS)  and their corresponding masked versions, mask-SSIM and mask-IS . The masked versions of the metrics are obtained by masking out the image background. The motivation behind the use of masked metrics is that no background information is given to the network and, therefore, the network cannot guess the correct background of the target image. For a fair comparison, we adopt the masks as defined in .
It is worth noting that the SSIM-based metrics compare the generated image with the ground truth. Thus, they measure how well the model transfers the appearance of the person from the source image. Conversely, IS-based metrics evaluate the distribution of generated images, jointly assessing the degree of realism and diversity of the generated outcomes, but do not take into account any similarity with the conditioning variables. These two metrics are complementary  and should be interpreted jointly.
4.3 Comparison with previous work
Quantitative comparison. In Tab. 1 we show a quantitative comparison with state-of-the-art single-source methods. Note that, except for , none of the compared methods, including ours, is conditioned on background information. On the other hand, the mask-based metrics focus only on the region of interest (i.e., the foreground person) and are not biased by the randomly generated background. For these reasons, we believe the mask-based metrics are the most informative ones. However, on the DeepFashion dataset, following , we do not report the masked values, since the background is uniform in most of the images. On both datasets, we observe that the SSIM and masked-SSIM increase when we input more images to our model. This confirms the idea that multi-source image generation is an effective direction to improve the generation quality. Furthermore, it illustrates that the proposed model is able to combine the information provided by the different source images. Interestingly, our method reaches high SSIM scores while keeping high IS values, thus showing that it is able to transfer the appearance better without losing image quality and diversity.
Concerning the comparison with the state of the art, our method reports the highest performance according to both the mask-SSIM and the mask-IS metrics on the Market-1501 dataset when we use 10 source images. When we employ fewer images, only Siarohin et al.  obtain a better masked-SSIM, but at the cost of a significantly lower IS. Similarly, we observe that  achieves a very high SSIM score, but again at the cost of a drastically lower IS, meaning that we can generate more diverse and higher-quality images. Moreover, we notice that  obtains a lower masked-SSIM. This seems to indicate that their high SSIM score is mostly due to a better background generation. Similar conclusions can be drawn for the DeepFashion dataset. We obtain the best IS and rank second in SSIM. Only  outperforms our model in terms of SSIM, at the cost of a much lower IS value. The gain in performance is smaller than on the Market-1501 dataset, probably due to the lower pose diversity of the DeepFashion dataset.
Qualitative comparison. Fig. 4 shows some images obtained using the Market-1501 dataset. We compare our results with the images generated by three methods for which the code is publicly available [9, 19, 26]. The source images are shown in the first column. Note that the single-source methods use only the leftmost image. The target pose is extracted from the ground-truth target image. We display the generated images varying . We also show the corresponding saliency tensors (see Sec. 3.2) at the highest resolution . Specifically, we use and, at each location in , we average the values over the channel axis (), using a color scale from dark blue (value 0) to orange (value 1).
The qualitative results confirm the quantitative evaluation, since we clearly obtain better images when we increase the number of source images. The images become sharper, show more detail and contain fewer artifacts. By looking at the saliency maps, we observe that our model mostly uses the source images in which the human pose is similar to the target pose. For instance, in rows 1 and 4, the model has high attention values for the two frontal images but very low values for the back-view images. Interestingly, in row 1, among the two source images with a pose similar to the target pose, the saliency values are lower for the blurrier image. This illustrates that, between two images with similar poses, our attention model favours the image with the highest quality. Concerning the comparison with the state of the art, we observe that our model better preserves the details of the source images. In general, we obtain higher-quality details and fewer artifacts. For instance, in row 3, the three other methods generate neither the white hat nor the small logo on the shirt. In particular, the V-UNet architecture proposed in  generates realistic images but with less accurate details. This can be easily observed in the last two rows, where the colors of the clothes are wrongly generated.
4.4 Ablation study and qualitative analysis
In this section we present an ablation study to clarify the impact of each part of our proposal on the final performance. We first describe the compared methods, obtained by “amputating” important parts of the full pipeline presented in Sec. 3. The discriminator architecture is the same for all the methods.
Avg No-d: In this baseline version of our method we use the encoder described in Sec. 3.3 without the deformation-based alignment of the features with the target pose. For the decoder, we use a standard U-Net decoder without the attention module. More precisely, the tensors provided by the skip connections of each encoder are simply averaged and concatenated with the decoder tensors, as in the original U-Net. In other words, Eq. (2) is replaced by the average over each convolution layer of the decoder, similarly to Eq. (1).
Avg: We use the encoder described in Sec. 3.3 and the same decoder of Avg No-d.
Full: This is the full-pipeline as described in Sec. 3.
Tab. 2 shows a quantitative evaluation. First, we notice that our method without spatial deformation performs poorly on both datasets. This is particularly evident with the SSIM-based scores. This confirms the importance of source-target alignment before computing a position-dependent attention. Interestingly, when using only two source images, Avg, Att. 2D and Full perform similarly to each other on the Market-1501 dataset. However, when more source images are available, we clearly observe the benefit of using our proposed attention approach. Avg performs consistently worse than our Full pipeline. The 2D attention model outputs images with higher SSIM-based scores but with lower IS values. Concerning the DeepFashion dataset, our attention model outperforms the simpler approach with 2 and 5 source images.
In Fig. 5 we compare Avg with Full using . The advantage of using Full is clearly illustrated by the fact that Avg mostly performs an average of the front and back images. In the second row, Full reduces the amount of artifacts. Interestingly, in the last row, Full fails to correctly generate the new viewpoint, but we see that it chooses to focus on the back view in order to generate the collar.
In this work we introduced a generalization of the person-image generation problem. Specifically, a human image is generated conditioned on a target pose and a set of source images. This makes it possible to exploit multiple and possibly complementary images. We introduced an attention-based decoder which extends the U-Net architecture to a multiple-input setting. Our attention mechanism selects relevant information from different sources and image regions. We experimentally validate our approach on two different datasets. We expect that the practical advantages of the multi-source approach, as demonstrated in this work, will attract the interest of the community.
We thank the NVIDIA Corporation for the donation of the GPUs used in this work. This project has received funding from the European Research Council (ERC) (Grant agreement No.788793-BACKUP).
-  S. Azadi, M. Fisher, V. Kim, Z. Wang, E. Shechtman, and T. Darrell. Multi-content gan for few-shot font style transfer. In , 2018.
-  D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473, 2014.
-  G. Balakrishnan, A. Zhao, A. V. Dalca, F. Durand, and J. Guttag. Synthesizing images of humans in unseen poses. In CVPR, 2018.
-  Y. Blau and T. Michaeli. The perception-distortion tradeoff. In CVPR, pages 6228–6237, 2018.
-  Z. Cao, T. Simon, S. Wei, and Y. Sheikh. Realtime multi-person 2D pose estimation using part affinity fields. In CVPR, 2017.
-  C. Chan, S. Ginosar, T. Zhou, and A. A. Efros. Everybody dance now. arXiv:1808.07371, 2018.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009.
-  H. Dong, X. Liang, K. Gong, H. Lai, J. Zhu, and J. Yin. Soft-gated warping-gan for pose-guided person image synthesis. arXiv:1810.11610, 2018.
-  P. Esser, E. Sutter, and B. Ommer. A variational u-net for conditional appearance and shape generation. In CVPR, pages 8857–8866, 2018.
-  S. Gidaris and N. Komodakis. Dynamic few-shot visual learning without forgetting. In CVPR, 2018.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
-  R. A. Guler, N. Neverova, and I. Kokkinos. Densepose: Dense human pose estimation in the wild. In CVPR, 2018.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. CVPR, 2017.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR, 2014.
-  C. Lassner, G. Pons-Moll, and P. V. Gehler. A generative model of people in clothing. In ICCV, 2017.
-  J. Liu, B. Ni, Y. Yan, P. Zhou, S. Cheng, and J. Hu. Pose transferrable person re-identification. In CVPR, pages 4099–4108, 2018.
-  L. Liu, W. Xu, M. Zollhoefer, H. Kim, F. Bernard, M. Habermann, W. Wang, and C. Theobalt. Neural animation and reenactment of human actor videos. arXiv:1809.03658, 2018.
-  Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In CVPR, 2016.
-  L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool. Pose guided person image generation. In NIPS, 2017.
-  L. Ma, Q. Sun, S. Georgoulis, L. Van Gool, B. Schiele, and M. Fritz. Disentangled person image generation. In CVPR, pages 99–108, 2018.
-  N. Neverova, R. Alp Guler, and I. Kokkinos. Dense pose transfer. In ECCV, 2018.
-  H. Park, Y. Yoo, and N. Kwak. Mc-gan: Multi-conditional generative adversarial network for image synthesis. In BMVC, 2018.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In NIPS, 2016.
-  X. Shi, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In NIPS, 2015.
-  C. Si, W. Wang, L. Wang, and T. Tan. Multistage adversarial losses for pose-based human image synthesis. In CVPR, pages 118–126, 2018.
-  A. Siarohin, E. Sangineto, S. Lathuilière, and N. Sebe. Deformable gans for pose-based human image generation. In CVPR, 2018.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
-  K. Sricharan, R. Bala, M. Shreve, H. Ding, K. Saketh, and J. Sun. Semi-supervised conditional GANs. arXiv:1708.05789, 2017.
-  S.-H. Sun, M. Huh, Y.-H. Liao, N. Zhang, and J. J. Lim. Multi-view to novel view: Synthesizing novel views with self-learned confidence. In ECCV, pages 155–171, 2018.
-  D. Ulyanov, A. Vedaldi, and V. S. Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv:1607.08022, 2016.
-  O. Vinyals, S. Bengio, and M. Kudlur. Order matters: Sequence to sequence for sets. arXiv:1511.06391, 2015.
-  O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra. Matching networks for one shot learning. In NIPS, 2016.
-  O. Vinyals, M. Fortunato, and N. Jaitly. Pointer networks. In NIPS, 2015.
-  J. Walker, K. Marino, A. Gupta, and M. Hebert. The pose knows: Video forecasting by generating pose futures. In ICCV, 2017.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE TIP, 2004.
-  J. Weston, S. Chopra, and A. Bordes. Memory networks. arXiv:1410.3916, 2014.
-  G. Yildirim, C. Seward, and U. Bergmann. Disentangling multiple conditional inputs in gans. In ACM SIGKDD Workshop, 2018.
-  M. Zanfir, A.-I. Popa, A. Zanfir, and C. Sminchisescu. Human appearance transfer. In CVPR, pages 5391–5399, 2018.
-  B. Zhao, X. Wu, Z. Cheng, H. Liu, and J. Feng. Multi-view image generation from a single-view. arXiv:1704.04886, 2017.
-  L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. In ICCV, 2015.
-  Z. Zheng, L. Zheng, and Y. Yang. Unlabeled samples generated by GAN improve the person re-identification baseline in vitro. In ICCV, 2017.