Pose Guided Fashion Image Synthesis Using Deep Generative Model

06/17/2019 ∙ by Wei Sun, et al. ∙ NC State University ∙ JD.com, Inc.

Generating a photorealistic image with intended human pose is a promising yet challenging research topic for many applications such as smart photo editing, movie making, virtual try-on, and fashion display. In this paper, we present a novel deep generative model to transfer an image of a person from a given pose to a new pose while keeping fashion item consistent. In order to formulate the framework, we employ one generator and two discriminators for image synthesis. The generator includes an image encoder, a pose encoder and a decoder. The two encoders provide good representation of visual and geometrical context which will be utilized by the decoder in order to generate a photorealistic image. Unlike existing pose-guided image generation models, we exploit two discriminators to guide the synthesis process where one discriminator differentiates between generated image and real images (training samples), and another discriminator verifies the consistency of appearance between a target pose and a generated image. We perform end-to-end training of the network to learn the parameters through back-propagation given ground-truth images. The proposed generative model is capable of synthesizing a photorealistic image of a person given a target pose. We have demonstrated our results by conducting rigorous experiments on two data sets, both quantitatively and qualitatively.


1. Introduction

Figure 1. Some examples of generated images produced by our proposed model. The first column shows the input images, and the top row shows the target poses. Given an input image of a person and a target pose, the proposed model replaces the person's current pose with the target pose. Best viewed in color.

Over the past few years, the online fashion industry has been shaped by technological innovations such as augmented reality, virtual reality, wearable tech, and connected fitting rooms. In order to attract online shoppers and to deliver a rich and intuitive online experience, retailers strive to provide high-quality, informative pictures of their products. Online shoppers usually expect to see multiple photos of a garment from different viewpoints, or multiple photos of a fashion model wearing the same garment from different angles or in different poses. In such scenarios, image synthesis techniques can be exploited to enhance the shopping experience for shoppers and to reduce cost for retailers. In computer vision, image generative models (Goodfellow et al., 2014; Yoo et al., 2016; van den Oord et al., 2016; Van Den Oord et al., 2016), which are capable of generating high-quality photorealistic images, have been successfully applied in numerous applications. In this paper, our main objective is to develop an image generative model that transfers a person from the current pose to an intended target pose.

Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are among the most prominent and widely used approaches for image synthesis. For fashion applications, there have been prior works that utilize generative models in conditional settings. In (Ma et al., 2017), a reference image is utilized to transfer a person from a given pose to an intended pose. Shape information is incorporated in (Esser et al., 2018) to aid the image generation process. Unlike these two methods, which use one discriminator for the pose-guided image generation task, we utilize two specific discriminators: one differentiates between real and generated images, and the other enhances the consistency between the generated image and the target pose. For virtual try-on, Han et al. propose the VITON network (Han et al., 2018), which virtually dresses a person with a different fashion item. The objectives of VITON and our work differ: VITON allows a user to virtually try on different garments, while our work allows an online retailer to easily generate various display photos. Moreover, online retailers usually provide multiple photos. In such a scenario, it is advantageous to utilize multiple photos as input in order to extract visual-semantic features, both for training and for image generation. Unlike most image generation approaches (Han et al., 2018; Ma et al., 2017), we exploit a set of images of the same fashion item, either the garment itself or a fashion model wearing the garment, from which a meaningful representation is learned.

In this paper, we aim to develop a novel generative model to produce photorealistic images of a person in a new pose different from the current one. The proposed framework exploits a bi-directional convolutional LSTM network (Xingjian et al., 2015; Donahue et al., 2015) and a U-Net architecture (Ronneberger et al., 2015) for the image generation process. The LSTM network is utilized to discover common attributes across multiple images by observing the change in various semantic image features, such as colors, textures, and shapes. The network is also capable of distinguishing background or noise from the variation in semantic features. A U-Net encoder is used to learn a compact representation of appearance. The representations learned from the convolutional LSTM and the U-Net encoder are then exploited to synthesize a new image. Two discriminators are designed and deployed to guide the image generation process. We perform end-to-end training of the generator and discriminator networks. We present both quantitative and qualitative analyses to evaluate the performance of our image generative model on two datasets.

Main Contributions. Our major contributions are as follows.

  • We present a novel generative model which employs two encoders (an image encoder and a pose encoder) and one decoder to generate a new image. The representations learned by the two encoders from multiple images of the same fashion item are compact and meaningful, and can be applied to other tasks such as image search and garment parsing.

  • The proposed framework exploits two discriminators: one enforces the photorealism of the generated images, and the other enhances the consistency between the generated image and the target pose.

  • Using multiple images (e.g., images of a person wearing the same garment item in different poses) allows the convolutional LSTM network to learn richer visual-semantic context that helps guide the image generation process.

2. Related Works

Recently, image generative modeling has gained a lot of attention from both the scientific community and the fashion industry. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are the most popular generative models for the tasks of image synthesis and image modification. Several works (van den Oord et al., 2016; Isola et al., 2017) exploit GANs in conditional settings. In (Denton et al., 2015; van den Oord et al., 2016), generative models are conditioned upon class labels. Text (Reed et al., 2016; Zhu et al., 2017a) and images (Ma et al., 2017; Isola et al., 2017; Yoo et al., 2016; Wang et al., 2018; Lassner et al., 2017) have also been used as conditions to build image generative models.

Computer Vision in Fashion. Recent state-of-the-art approaches demonstrate promising performance in several computer vision tasks such as object detection (Ren et al., 2015; Redmon et al., 2016), semantic segmentation (Long et al., 2015), pose estimation (Papandreou et al., 2017), and image synthesis (Goodfellow et al., 2014; Esser et al., 2018). Such approaches have been applied to fashion-related tasks, for instance, visual search (Yang et al., 2017) and cloth parsing or segmentation (Yamaguchi et al., 2015; Tangseng et al., 2017). In (Hadi Kiapour et al., 2015), the authors present a deep-learning-based matching algorithm to solve the street-to-shop problem. In (Liu et al., 2015b), a parametric matching convolutional neural network (M-CNN) and a non-parametric KNN approach are proposed for human parsing. In (Liu et al., 2015a), pose estimation and fashion parsing are performed, where SIFT flow and super-pixel matching are used to find correspondences across frames. In the garment retrieval task, fine-grained attribute prediction (Wei et al., 2013), parsing (Yamaguchi et al., 2012), and cross-scenario retrieval (Fu et al., 2012) have been utilized to improve performance. Most approaches (Yoo et al., 2016; Ma et al., 2017; Lassner et al., 2017; Zhu et al., 2017a) exploit deep-learning-based encoders to generate new images due to their superior performance. In (Yoo et al., 2016), an image-conditional image generation model is proposed to transfer an input domain to a target domain at the semantic level. There have also been a few previous efforts to generate images of a fashion model with new poses. An image synthesis technique conditioned upon text is portrayed in (Yoo et al., 2016).

Image Synthesis in Fashion. Image generative models have also been applied in fashion technology (Han et al., 2018; Zhu et al., 2017a; Ma et al., 2017). An image-based virtual try-on network is proposed in (Han et al., 2018), where the generative model transfers a desired clothing item onto the corresponding region of a person using a coarse-to-fine strategy. A novel approach is presented in (Zhu et al., 2017a) for generating new clothing on a wearer through generative adversarial learning by utilizing textual information. In (Esser et al., 2018), a conditional U-Net is used to generate an image guided by shape information and conditioned on the output of a variational autoencoder for appearance. In (Ma et al., 2017), the authors present a generative model conditioned upon pose to manipulate a person in an image into an arbitrary pose. Chan et al. (2018) study a similar task to ours, but they transfer poses to a target person from video in a frame-by-frame manner.

Even though we aim at solving a similar problem to (Ma et al., 2017), our work differs in terms of the architectural choices in both the generator and the discriminator. Unlike most image generative approaches, we exploit multiple images of a fashion item as input, which are usually available on e-commerce shopping platforms.

3. Proposed Model

Given a set of images and pose maps as input, our objective is to generate a photorealistic image of a person with a new pose different from the current one. The proposed framework has two basic components: (a) a generator and (b) two discriminators. Fig. 2 shows the overall architecture.

Figure 2. Overview of our image generative framework. The image and pose encoders exploit a bi-directional convolutional LSTM network and a U-Net encoder, respectively. The fusion of the encoded features from both encoders is utilized by the U-Net decoder to synthesize an image. During the training phase, two discriminators are utilized in order to generate a photorealistic image of a person while maintaining shape consistency with the intended pose. Best viewed in color.

3.1. Generator

In this paper, we develop a generator to produce a photorealistic image of a person with a target pose. Our generator has three parts: (a) an image encoder, (b) a pose encoder, and (c) a decoder. Fig. 2 illustrates how the different components are combined to form the image generator. The generator exploits visual-semantic context and pose information obtained from the image and pose encoders, respectively, which are then fed into a decoder in order to generate a new image.

3.1.1. Image Encoder

The objective of the image encoder is to learn a semantic representation from a set of images or from a single image. To extract visual features from images, we use the ResNet (He et al., 2016) architecture, which includes several residual blocks. At each layer, the network learns different features, e.g., texture, color, edges, and contours. Next, these features are fed to a bi-directional convolutional LSTM (bC-LSTM). While LSTMs have been used in several recognition tasks to extract sequential information, the main motivation for using the bC-LSTM network in our work is to connect the common features of the same person wearing the same fashion item seen from different viewpoints. The bC-LSTM network observes the transition of various semantic image features, such as colors, textures, and shapes, from one image to another. As a result, the network can also distinguish background and noise from the variation in features.

After training, the image encoder is able to learn a useful visual-semantic representation of the images. The learned representation, or ‘codes’, matches the concepts of different aspects of fashion items, such as semantic components of the garment (e.g., sleeves, neck) and certain textural information of the fashion item. This representative visual code will be utilized by the decoder to generate a new image.
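To make the role of the bi-directional recurrence concrete, the following sketch runs a plain LSTM forward and backward over per-image feature vectors and concatenates the two final hidden states into one visual code. This is a deliberate simplification for illustration: the actual model uses a convolutional LSTM over spatial feature maps, and all names and sizes here are toy assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    # One LSTM step; gates are stacked as [input, forget, cell, output].
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[0:H]))          # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))        # forget gate
    g = np.tanh(z[2*H:3*H])                # candidate cell state
    o = 1 / (1 + np.exp(-z[3*H:4*H]))      # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def bidirectional_encode(features, W, U, b):
    """Run an LSTM over the image features forward and backward and
    concatenate the two final hidden states into one visual code."""
    H = b.shape[0] // 4
    h_f, c_f = np.zeros(H), np.zeros(H)
    for x in features:                     # forward pass over the sequence
        h_f, c_f = lstm_step(x, h_f, c_f, W, U, b)
    h_b, c_b = np.zeros(H), np.zeros(H)
    for x in reversed(features):           # backward pass
        h_b, c_b = lstm_step(x, h_b, c_b, W, U, b)
    return np.concatenate([h_f, h_b])      # visual code of size 2H

rng = np.random.default_rng(0)
D, H = 16, 8                               # toy feature and hidden sizes
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
features = [rng.normal(size=D) for _ in range(3)]  # stand-ins for ResNet features
code = bidirectional_encode(features, W, U, b)
print(code.shape)  # (16,)
```

Because both directions see every image, the resulting code summarizes attributes shared across the whole set rather than any single view.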

3.1.2. Pose Encoder

Fig. 2 also shows the pose encoder used in our framework. We use a U-Net architecture (Ronneberger et al., 2015) to encode the pose information. We provide pose feature maps with three channels (R, G, B) as input to the network. A human pose estimation method (Cao et al., 2017) is used to generate the locations of keypoints. We create a pose map by joining the keypoints with straight lines of different colors, as shown in Fig. 2.
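The keypoints-to-pose-map step can be sketched as follows. The skeleton connectivity (`LIMBS`), the colors, and the keypoint coordinates below are hypothetical stand-ins, since the text only specifies that keypoints are joined by straight lines of different colors.

```python
import numpy as np

# Hypothetical skeleton: pairs of keypoint indices to connect with a line.
LIMBS = [(0, 1), (1, 2), (1, 3)]
COLORS = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]  # one color per limb

def draw_pose_map(keypoints, height, width):
    """Rasterize 2D keypoints into an RGB pose map by joining connected
    keypoints with straight line segments of distinct colors."""
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    for (a, b), color in zip(LIMBS, COLORS):
        (r0, c0), (r1, c1) = keypoints[a], keypoints[b]
        n = max(abs(r1 - r0), abs(c1 - c0)) + 1   # enough samples for a solid line
        rows = np.linspace(r0, r1, n).round().astype(int)
        cols = np.linspace(c0, c1, n).round().astype(int)
        canvas[rows, cols] = color
    return canvas

kps = [(10, 10), (20, 20), (30, 10), (30, 30)]    # toy (row, col) keypoints
pose_map = draw_pose_map(kps, 64, 64)
print(pose_map.shape)  # (64, 64, 3)
```

The colored map gives the encoder both joint locations and limb identity in a form a convolutional network can consume directly.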

The pose map is then used by the U-Net encoder to aggregate geometrical features. The U-Net encoder includes two 3×3 convolutions, each followed by a rectified linear unit (ReLU) and a 2×2 max pooling operation. We also double the number of feature channels at each downsampling step, as in (Ronneberger et al., 2015). Each layer of the U-Net encoder is connected to the corresponding later layer of the U-Net decoder by a skip connection in order to produce high-level features. Finally, we obtain a pose representation. In the following section, we discuss how the outputs of the image and pose encoders are further utilized in the decoder network.

3.1.3. Decoder

The primary focus of the decoder is to generate a new image by decoding the representative codes obtained from the image and pose encoders. The encoded features are concatenated at an intermediate stage and taken as input to the decoder. Fig. 2 shows the steps of the image synthesis process. For the decoder, we use the convolutional decoder from the U-Net architecture with skip connections. The advantage of using skip connections with the encoder is that they allow the network to align the visual-semantic features with the appearance context learned in the U-Net encoder.

We fuse the visual and pose features computed by the image and pose encoders, respectively. The fused feature maps are fed to the U-Net decoder. At each layer of the decoder, we first aggregate the feature maps obtained from the previous layer with the feature maps precomputed at the corresponding early stage, delivered by a skip connection. Next, we upsample the feature map, which is followed by an up-convolution; this operation also halves the number of channels. The up-convolution is followed by convolution and ReLU operations. Finally, we obtain a synthesized image as the output of the U-Net decoder.
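One decoder stage can be sketched as follows. Nearest-neighbour upsampling stands in for the learned up-convolution, and the shapes are toy values; only the upsample-then-concatenate pattern of the skip connection reflects the text.

```python
import numpy as np

def upsample_2x(x):
    """Nearest-neighbour upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def decoder_stage(x, skip):
    """One decoder step: upsample, then concatenate the encoder feature
    map delivered by the skip connection along the channel axis.
    (The real decoder follows this with up-convolution, convolution,
    and ReLU, which also halve the channel count.)"""
    up = upsample_2x(x)
    assert up.shape[:2] == skip.shape[:2], "skip connection must match spatially"
    return np.concatenate([up, skip], axis=2)

x = np.ones((8, 8, 128))       # coarse decoder features
skip = np.ones((16, 16, 64))   # matching encoder features via skip connection
out = decoder_stage(x, skip)
print(out.shape)  # (16, 16, 192)
```

The concatenation is what lets fine spatial detail from the encoder bypass the bottleneck and reach the synthesized image.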

3.2. Discriminator

The main objective of the discriminators is to guide the image generation process toward photorealism by comparing synthesized images against genuine ones. During the training of the network, we apply two discriminators: one classifying whether an image is real or fake (generated), and another estimating whether a pair, i.e., an image of a person and a pose, is consistent. The architectures of the two discriminators are shown at the bottom right of Fig. 2.

Similar to traditional GAN models, we use a discriminator network to guide the generation of an image. This image discriminator distinguishes between real images and fake (generated) ones. Sometimes a generated image looks ‘real’ but is not consistent with the provided pose. We therefore propose a second, pose-consistency discriminator, which distinguishes between a generated image-pose pair and a real image-pose pair by checking the consistency between image and pose. This discriminator plays a vital role in aligning a person with the target pose. Thus, by enforcing consistency, our model can also generate images with complicated poses. Exploiting two discriminators makes our image generation process more robust, consistent, and photorealistic.
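A plausible way to realize the pair-conditioned input and a PatchGAN-style decision, sketched under the assumption that the image and its pose map are simply stacked along the channel axis (the text does not spell out this detail):

```python
import numpy as np

def pair_input(image, pose_map):
    """Build the input to the pose-consistency discriminator by stacking
    the image and its pose map along the channel axis (3 + 3 = 6 channels)."""
    return np.concatenate([image, pose_map], axis=2)

def patch_scores_to_decision(score_map):
    """PatchGAN-style output: the discriminator emits one realism score per
    patch; the final decision averages all patch scores."""
    return float(score_map.mean())

img = np.zeros((64, 64, 3))
pose = np.zeros((64, 64, 3))
x = pair_input(img, pose)
print(x.shape)  # (64, 64, 6)

scores = np.array([[1.0, 0.5], [0.5, 1.0]])  # toy per-patch scores
print(patch_scores_to_decision(scores))       # 0.75
```

Because the pair discriminator only ever sees the image jointly with a pose map, it can score consistency rather than realism alone.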

3.3. Training

During the training of the generator, we define the loss function so that the generated image is judged as ‘real’ and as ‘consistent’ with the provided pose by the discriminators. In contrast, the loss functions for the discriminators are chosen so that they predict a newly generated image as fake or inconsistent with high confidence. We use this adversarial interplay to train the whole network simultaneously. After optimization of the parameters, the proposed generator is able to generate photorealistic images similar to the training images that cannot be distinguished from real images by the two discriminators.

Let us denote a set of images of the same person wearing the same fashion garment in different poses as X = {x_1, …, x_N}, and the corresponding pose maps as P = {p_1, …, p_N} (for simplicity, we often omit the subscript), where N is the number of images. Given X, P, and a target pose p, the generator G produces a new image ŷ. Here, G is the combination of the image encoder, the pose encoder, and the decoder, and it learns the mapping G: (X, P, p) → ŷ. Using the ground-truth images, we can write the loss function for the generator as

L_gen = Σ_l λ_l ‖φ_l(ŷ) − φ_l(y)‖_1 + ‖ŷ − y‖_1        (1)

Our goal is to generate an image ŷ that resembles the ground truth y. The first term of Eqn. 1 is a perceptual loss: φ_l denotes the feature maps of an image at the l-th layer of a visual perception network, for which we use the VGG network (Simonyan and Zisserman, 2014) trained on the ImageNet dataset, and λ_l is a hyperparameter representing the importance of the l-th layer in the loss function. The second term in Eqn. 1 measures the pixel-level similarity between the generated image ŷ and the ground-truth image y; we refer to it as the reconstruction loss.
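The two terms of the generator loss can be sketched as follows. The `perceptual_features` function is a hypothetical stand-in for VGG feature extraction (here, simple average-pooled copies of the image), so only the structure of the loss, per-layer weighted L1 distances between feature maps plus a pixel-wise L1 term, reflects the text.

```python
import numpy as np

def perceptual_features(img, num_layers=3):
    """Stand-in for VGG feature maps: progressively average-pooled
    copies of the (H, W, C) image. Hypothetical, for illustration only."""
    feats, x = [], img
    for _ in range(num_layers):
        feats.append(x)
        H, W, C = x.shape
        x = x.reshape(H // 2, 2, W // 2, 2, C).mean(axis=(1, 3))
    return feats

def generator_loss(generated, target, layer_weights):
    """Weighted per-layer L1 distance between feature maps (perceptual
    term) plus a pixel-wise L1 reconstruction term."""
    perceptual = sum(
        w * np.abs(fg - ft).mean()
        for w, fg, ft in zip(layer_weights,
                             perceptual_features(generated),
                             perceptual_features(target)))
    reconstruction = np.abs(generated - target).mean()
    return perceptual + reconstruction

a = np.zeros((8, 8, 3))
b = np.ones((8, 8, 3))
print(generator_loss(a, a, [1.0, 1.0, 1.0]))  # 0.0 for identical images
print(generator_loss(a, b, [1.0, 1.0, 1.0]))  # 4.0
```

The loss is zero exactly when the generated image matches the ground truth at every feature level and every pixel.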

In order to train the discriminators, we also consider additional poses taken from different fashion items, as shown in Fig. 2. With these additional poses as input, the generator produces new images ŷ. The image discriminator, denoted D_I, aims to identify generated images as ‘fake’. In order to learn the parameters of D_I, we adopt adversarial training as presented in (Goodfellow et al., 2014). The loss function can be written as

L_DI = E[log D_I(y)] + E[log(1 − D_I(ŷ))]        (2)

Similarly, the pose-consistency discriminator, denoted D_P, distinguishes between real and fake pairs by checking the consistency between a given image and pose. The loss function for D_P can be written as

L_DP = E[log D_P(y, p)] + E[log(1 − D_P(ŷ, p))] + E[log(1 − D_P(y′, p))]        (3)

where y′ represents a real image sampled from the training set, different from the input image, whose corresponding pose map differs from p. We formulate our full objective as

L = L_gen + λ_I L_DI + λ_P L_DP        (4)

λ_I and λ_P are the weights on the loss functions of the two discriminators.
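The full objective then reduces to a weighted sum of the generator loss and the two adversarial terms; the function and weight names below are illustrative assumptions, not the paper's notation.

```python
def full_objective(gen_loss, d_image_loss, d_pose_loss, w_image, w_pose):
    """Combine the generator loss with the two adversarial terms, each
    scaled by its own hyperparameter. In practice, the generator and the
    two discriminators are updated in alternation on this objective."""
    return gen_loss + w_image * d_image_loss + w_pose * d_pose_loss

total = full_objective(4.0, 0.5, 0.25, w_image=1.0, w_pose=1.0)
print(total)  # 4.75
```

Raising either weight tilts training toward the corresponding discriminator's notion of realism or pose consistency.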

4. Experimental Results

In this section, we present experimental results for generating photorealistic images of a person guided by a target pose. We evaluate the proposed network on two datasets: DeepFashion (Liu et al., 2016) and Market-1501 (Zheng et al., 2015). We show both qualitative and quantitative results and compare our approach against recent state-of-the-art pose-guided image generation methods.

| Methods | DeepFashion SSIM | DeepFashion IS | Market-1501 SSIM | Market-1501 IS |
|---|---|---|---|---|
| Real Data | — | — | — | — |
| pix2pix (Isola et al., 2017) | — | — | — | — |
| PG (G1) (Ma et al., 2017) | — | — | — | — |
| PG (Ma et al., 2017) | — | — | — | — |
| Variational U-Net (Esser et al., 2018) | 0.786 | — | 0.353 | — |
| Ours-S | — | — | — | — |
| Ours-M | — | 3.147 | — | 3.514 |

Table 1. Structural similarity (SSIM) and inception score (IS) of our proposed model and other state-of-the-art methods on the DeepFashion (Liu et al., 2016) and Market-1501 (Zheng et al., 2015) datasets; — denotes values not available here. Ours-S denotes the proposed model with a single input image, while Ours-M denotes the proposed model that takes multiple images as input.

4.1. Dataset

In our experiments, the DeepFashion (Liu et al., 2016) and Market-1501 (Zheng et al., 2015) datasets are used for evaluation. We use the In-shop Clothes Retrieval benchmark from the DeepFashion dataset, which includes multiple images of a person in different poses and contains 52,712 in-shop clothes images. We use the same training and testing sets as presented in (Ma et al., 2017). The resolution of each image is 256×256. We also demonstrate our experiments on the Market-1501 dataset. This dataset is very challenging due to the variety of pose, illumination, and background. It has 32,668 images, with a resolution of 128×64, of 1,501 persons captured from six different viewpoints. For a fair comparison, we follow PG (Ma et al., 2017) in splitting the training and testing sets.

Figure 3. Some image generation results from our proposed framework. The first and second columns show input images. The third and fourth columns show the target pose and target image, respectively. Synthetic images produced by the Single-Image (Ours-S) and Multi-Image (Ours-M) models are shown in the fifth and sixth columns, respectively. Please see Sec. 4.4 for more details.

4.2. Implementation Details

Our U-Net encoder and decoder follow the network architecture presented in (Zhu et al., 2017b). The network contains two stride-2 convolution layers, several residual blocks, and two fractionally-strided convolutions with stride 1/2. Each layer of the image encoder contains only convolutional residual blocks. In order to train the two discriminators, we adopt the training procedure of PatchGAN (Isola et al., 2017): the discriminator scores patches by sliding across the image and averages all patch scores to determine the final output. This allows us to capture high-frequency structure. To optimize the network parameters, we use the Adam optimizer (Kingma and Ba, 2014), with the learning rate decayed periodically during training. The batch size corresponds to one SKU, which includes multiple images of a fashion item. These images are further used for data augmentation: we randomly crop images, flip them left-right, and randomly rotate them to increase the training set.

4.3. Quantitative Results

In order to evaluate our proposed model, we consider two metrics to measure the quality of image synthesis: Structural Similarity (SSIM) (Wang et al., 2004) and the Inception Score (IS) (Salimans et al., 2016). Table 1 shows the quantitative results on the DeepFashion and Market-1501 datasets. Below, we compare against several baseline and state-of-the-art methods.
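For intuition about the SSIM metric, a simplified single-window version can be computed as below; the standard metric of Wang et al. (2004) averages this quantity over local sliding windows rather than computing it once globally.

```python
import numpy as np

def global_ssim(x, y, L=1.0):
    """Simplified, single-window SSIM for two images with values in [0, L].
    Combines a luminance term (means) with a contrast/structure term
    (variances and covariance); both are at most 1, with equality only
    for identical statistics."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
print(global_ssim(img, img))              # 1.0 for identical images
print(global_ssim(img, 1.0 - img) < 1.0)  # True
```

SSIM rewards structural agreement with the ground truth, while IS instead scores how confidently and diversely a pretrained classifier labels the generated images.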

4.3.1. Impact of Two Discriminators

Unlike most pose-guided image generation methods (Ma et al., 2017; Esser et al., 2018), we take advantage of adversarial training with two discriminators: the image discriminator and the pose-consistency discriminator. In order to analyze the effect of the two discriminators, we remove one discriminator at a time and evaluate the performance. If we remove the image discriminator from the network, its loss term in Eqn. 4 has no impact; in other words, the corresponding mapping function makes no contribution to the network. To verify the effectiveness of the two discriminators, we run the framework with different configurations on the DeepFashion dataset. Furthermore, we provide the results of our proposed model with two discriminators on the Market-1501 dataset, as shown in Table 1. As can be seen in Table 1, after removal of the image discriminator, both SSIM and IS scores drop significantly compared with the full model. Since the image discriminator distinguishes whether an image is real or generated, the model without it cannot generate photorealistic images with high SSIM and IS scores.

Similarly, we consider the model with the pose-consistency discriminator removed. This discriminator helps the model generate photorealistic images of a person in the target pose by comparing real image-pose pairs against generated image-pose pairs. From Table 1, we observe a large drop in SSIM score without it compared with the full proposed model; the SSIM score is improved by exploiting the pose-consistency discriminator, as shown in Table 1.

4.3.2. Effect of Using Multiple Images

In our proposed architecture, we exploit multiple photos of the same fashion item to extract visual-semantic features. During training, the bi-directional convolutional LSTM (bC-LSTM) network learns common attributes by observing the transitions between multiple images. These attributes, or visual-semantic features, are utilized to generate photorealistic images. The proposed model is also capable of taking a single image as input. Table 1 shows the SSIM and IS scores of both variants on DeepFashion and Market-1501.

The Multi-Image model outperforms the Single-Image model by a large margin in terms of IS score on both the DeepFashion and Market-1501 datasets. The Multi-Image model also achieves a better SSIM score than the Single-Image model on the Market-1501 dataset, while on the DeepFashion dataset both models achieve similar SSIM scores. From Table 1, we conclude that the bC-LSTM in the generator learns visual-semantic contextual details by exploiting multiple images as input.

4.3.3. Compare against State-of-the-art

In this section, we compare our proposed model against other state-of-the-art deep generative models. We choose several recent works, the two variants of PG (Ma et al., 2017), pix2pix (Isola et al., 2017), and the variational U-Net (Esser et al., 2018), to evaluate the performance of our proposed model. From Table 1, we can see that the proposed method achieves results comparable to existing works in terms of SSIM score. To measure the quality of the image generation process, we also compare the proposed approach with these state-of-the-art models using the IS score, as shown in Table 1. The proposed model outperforms both variants of PG (Ma et al., 2017), pix2pix (Isola et al., 2017), and the variational U-Net (Esser et al., 2018) by a large margin in IS score on the DeepFashion and Market-1501 datasets. The Market-1501 (Zheng et al., 2015) dataset has images with varied backgrounds, which are very difficult to predict since the input provides no information about the background of the target image. Nevertheless, our model is able to generate photorealistic images with high SSIM and IS scores on the Market-1501 dataset, as presented in Table 1. Our model achieves state-of-the-art performance in terms of Inception Score, which indicates that it not only generates realistic images but also outputs a high diversity of images, with a lower probability of mode collapse. As for SSIM, we achieve an improvement over (Ma et al., 2017) and comparable results with (Esser et al., 2018).

4.4. Qualitative Analysis

Given one or multiple images of a fashion item along with a target pose, our proposed model is able to transfer a person’s current pose to the intended pose. Fig. 3 illustrates some image synthesis results produced by the Single-Image and Multi-Image models, with examples taken from the DeepFashion and Market-1501 datasets. The Single-Image model takes a single image (first column of Fig. 3) and a pose map as input; its synthesis results are shown in the fifth column. Fig. 3 also shows the generation results obtained from the Multi-Image model, which takes a couple of input images, as illustrated in the first and second columns; the sixth column exhibits the images it generates. From Fig. 3, we can see the high resemblance between the synthetic images (fifth and sixth columns) and the target ground-truth images (fourth column). Furthermore, the proposed model is also able to predict reasonable facial details, such as the mouth, eyes, and nose of a person, as illustrated in Fig. 3.

5. Conclusion

In this paper, we present a novel generative model to produce photorealistic images of a person transferred to a target pose. We utilize a convolutional LSTM and a U-Net architecture to develop the generator, in which we 1) exploit multiple images of the same person in order to learn semantic visual context with the convolutional LSTM network; 2) apply a U-Net encoder to learn the appearance/geometrical information; and 3) use a U-Net decoder to generate an image by exploiting the visual and appearance context. In order to better guide the image generation process, we apply two discriminators specifically designed for image authenticity and pose consistency. Our experimental results show that the proposed model produces high-quality images by both qualitative and quantitative measures. As a future direction, we will explore the usage of visual and appearance context for human parsing.

References

  • Cao et al. (2017) Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. 2017. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 7291–7299.
  • Chan et al. (2018) Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A Efros. 2018. Everybody dance now. arXiv preprint arXiv:1808.07371 (2018).
  • Denton et al. (2015) Emily L Denton, Soumith Chintala, Rob Fergus, et al. 2015. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in neural information processing systems. 1486–1494.
  • Donahue et al. (2015) Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE conference on computer vision and pattern recognition. 2625–2634.
  • Esser et al. (2018) Patrick Esser, Ekaterina Sutter, and Björn Ommer. 2018. A Variational U-Net for Conditional Appearance and Shape Generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 8857–8866.
  • Fu et al. (2012) Jianlong Fu, Jinqiao Wang, Zechao Li, Min Xu, and Hanqing Lu. 2012. Efficient clothing retrieval with semantic-preserving visual phrases. In Asian conference on computer vision. Springer, 420–431.
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems. 2672–2680.
  • Hadi Kiapour et al. (2015) M Hadi Kiapour, Xufeng Han, Svetlana Lazebnik, Alexander C Berg, and Tamara L Berg. 2015. Where to buy it: Matching street clothing photos in online shops. In Proceedings of the IEEE international conference on computer vision. 3343–3351.
  • Han et al. (2018) Xintong Han, Zuxuan Wu, Zhe Wu, Ruichi Yu, and Larry S Davis. 2018. Viton: An image-based virtual try-on network. In CVPR.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. 770–778.
  • Isola et al. (2017) Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. 2017. Image-to-image translation with conditional adversarial networks. arXiv preprint (2017).
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
  • Lassner et al. (2017) Christoph Lassner, Gerard Pons-Moll, and Peter V Gehler. 2017. A generative model of people in clothing. In Proceedings of the IEEE International Conference on Computer Vision, Vol. 6.
  • Liu et al. (2015a) Si Liu, Xiaodan Liang, Luoqi Liu, Ke Lu, Liang Lin, Xiaochun Cao, and Shuicheng Yan. 2015a. Fashion parsing with video context. IEEE Transactions on Multimedia 17, 8 (2015), 1347–1358.
  • Liu et al. (2015b) Si Liu, Xiaodan Liang, Luoqi Liu, Xiaohui Shen, Jianchao Yang, Changsheng Xu, Liang Lin, Xiaochun Cao, and Shuicheng Yan. 2015b. Matching-cnn meets knn: Quasi-parametric human parsing. In Proceedings of the IEEE conference on computer vision and pattern recognition. 1419–1427.
  • Liu et al. (2016) Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. 2016. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of the IEEE conference on computer vision and pattern recognition. 1096–1104.
  • Long et al. (2015) Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition. 3431–3440.
  • Ma et al. (2017) Liqian Ma, Xu Jia, Qianru Sun, Bernt Schiele, Tinne Tuytelaars, and Luc Van Gool. 2017. Pose guided person image generation. In Advances in Neural Information Processing Systems. 406–416.
  • Papandreou et al. (2017) George Papandreou, Tyler Zhu, Nori Kanazawa, Alexander Toshev, Jonathan Tompson, Chris Bregler, and Kevin Murphy. 2017. Towards accurate multi-person pose estimation in the wild. In CVPR.
  • Redmon et al. (2016) Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. 2016. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition. 779–788.
  • Reed et al. (2016) Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396 (2016).
  • Ren et al. (2015) Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems. 91–99.
  • Ronneberger et al. (2015) Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention. Springer, 234–241.
  • Salimans et al. (2016) Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. 2016. Improved techniques for training gans. In Advances in Neural Information Processing Systems. 2234–2242.
  • Simonyan and Zisserman (2014) Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).
  • Tangseng et al. (2017) Pongsate Tangseng, Zhipeng Wu, and Kota Yamaguchi. 2017. Looking at outfit to parse clothing. arXiv preprint arXiv:1703.01386 (2017).
  • Van Den Oord et al. (2016) Aäron Van Den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W Senior, and Koray Kavukcuoglu. 2016. WaveNet: A generative model for raw audio.. In SSW. 125.
  • van den Oord et al. (2016) Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. 2016. Conditional image generation with pixelcnn decoders. In Advances in Neural Information Processing Systems. 4790–4798.
  • Wang et al. (2018) Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. 2018. High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 8798–8807.
  • Wang et al. (2004) Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. 2004. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13, 4 (2004), 600–612.
  • Di et al. (2013) Wei Di, Catherine Wah, Anurag Bhardwaj, Robinson Piramuthu, and Neel Sundaresan. 2013. Style finder: Fine-grained clothing style recognition and retrieval. In Computer Vision and Pattern Recognition Workshops. 8–13.
  • Xingjian et al. (2015) Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. 2015. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in neural information processing systems. 802–810.
  • Yamaguchi et al. (2012) Kota Yamaguchi, M Hadi Kiapour, Luis E Ortiz, and Tamara L Berg. 2012. Parsing clothing in fashion photographs. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 3570–3577.
  • Yamaguchi et al. (2015) Kota Yamaguchi, M Hadi Kiapour, Luis E Ortiz, and Tamara L Berg. 2015. Retrieving similar styles to parse clothing. IEEE transactions on pattern analysis and machine intelligence 37, 5 (2015), 1028–1040.
  • Yang et al. (2017) Fan Yang, Ajinkya Kale, Yury Bubnov, Leon Stein, Qiaosong Wang, Hadi Kiapour, and Robinson Piramuthu. 2017. Visual search at ebay. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2101–2110.
  • Yoo et al. (2016) Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S Paek, and In So Kweon. 2016. Pixel-level domain transfer. In European Conference on Computer Vision. Springer, 517–532.
  • Zheng et al. (2015) Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jingdong Wang, and Qi Tian. 2015. Scalable person re-identification: A benchmark. In Proceedings of the IEEE International Conference on Computer Vision. 1116–1124.
  • Zhu et al. (2017b) Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. 2017b. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint (2017).
  • Zhu et al. (2017a) Shizhan Zhu, Sanja Fidler, Raquel Urtasun, Dahua Lin, and Chen Change Loy. 2017a. Be your own prada: Fashion synthesis with structural coherence. arXiv preprint arXiv:1710.07346 (2017).