End-to-End Learning of Geometric Deformations of Feature Maps for Virtual Try-On

06/04/2019 ∙ by Thibaut Issenhuth, et al. ∙ Criteo

The 2D virtual try-on task has recently attracted a lot of interest from the research community, both for its direct potential applications in online shopping and for its inherent, still unaddressed scientific challenges. The task requires fitting an in-shop cloth image onto the image of a person. It is highly challenging because the cloth must be warped onto the target person while preserving its patterns and characteristics, and the item must be composed with the person in a realistic manner. Current state-of-the-art models generate images with visible artifacts, due either to a pixel-level composition step or to the geometric transformation. In this paper, we propose WUTON: a Warping U-net for a virtual Try-On system. It is a siamese U-net generator whose skip connections are geometrically transformed by a convolutional geometric matcher. The whole architecture is trained end-to-end with a multi-task loss that includes an adversarial term. This enables our network to generate and use realistic spatial transformations of the cloth to synthesize images of high visual quality, and allows us to advance towards a detail-preserving and photo-realistic 2D virtual try-on system. Our method outperforms the current state-of-the-art on visual results as well as on the Learned Perceptual Image Patch Similarity (LPIPS) metric.


1 Introduction

A photo-realistic virtual try-on system would be a significant improvement for online shopping. Whether used to create catalogs of new products or to offer shoppers an immersive environment, it could impact online retail and open the door to new, easy image-editing possibilities. The training data we consider consists of paired images: the picture of a cloth and the same cloth worn by a model. At inference, given an unpaired tuple of images, one picture of a cloth and one picture of a model wearing a different cloth, we aim to replace the cloth worn by the model.

An early line of work addressed this challenge using 3D measurements and model-based methods guan2012drape ; hahn2014subspace ; pons2017clothcap . However, these are by nature computationally intensive and require expensive material, which would not be acceptable at scale for shoppers. Recent works aim to leverage deep generative models to tackle the virtual try-on problem jetchev2017conditional ; han2018viton ; wang2018toward ; dong2019towards . CAGAN jetchev2017conditional proposes a U-net based Cycle-GAN isola2017image approach. However, this method fails to generate realistic results since these networks cannot handle large spatial deformations. In VITON han2018viton , the authors use the shape context matching algorithm belongie2002shape to warp the cloth on a target person and learn an image composition with a U-net generator. To improve this model, CP-VTON wang2018toward incorporates a convolutional geometric matcher rocco2017convolutional which learns the parameters of geometric deformations (i.e., thin-plate spline transforms bookstein1989principal ) to align the cloth with the target person. In MG-VTON dong2019towards , the task is extended to a multi-pose try-on system, which requires modifying the pose as well as the upper-body cloth of the person.

In this second line of work, a common practice is to use what we call a human parser: a pre-trained system able to segment the area to replace on the model picture, i.e., the upper-body cloth as well as the neck and arms. In the rest of this work, we also assume this parser to be given.

The recent methods for virtual try-on struggle to generate realistic spatial deformations, which are necessary to warp and render clothes with complex patterns. With solid-color tees, unrealistic deformations are not an issue because they are not visible. For a tee-shirt with stripes or patterns, however, they produce unappealing images with spurious curves, compressions and decompressions. Figure 2 shows these kinds of unrealistic geometric deformations generated by CP-VTON wang2018toward .

To alleviate this issue, we propose an end-to-end model composed of two modules: a convolutional geometric matcher rocco2017convolutional and a siamese U-net generator. We train end-to-end, so the geometric matcher benefits from the losses computed on the final synthesized images. Our architecture removes the need for a final image-composition step and generates images of high visual quality with realistic geometric deformations. The main contributions of this work are:

  • We propose a simple end-to-end architecture able to generate realistic deformations that preserve complex patterns on clothes such as stripes. This is achieved by back-propagating the loss computed on the final synthesized images to a learnable geometric matcher.

  • We remove the need for the final composition step present in the best current approaches such as wang2018toward by using an adversarially trained generator. This performs better on the borders of the replaced object and renders shadows and contrasts more naturally.

  • We show that our approach significantly outperforms the state-of-the-art with visual results and with a quantitative metric measuring image similarity, LPIPS zhang2018unreasonable .

  • We identify the contribution of each part of our network through an ablation study. Moreover, we show a good robustness to a low-quality human parser at inference time.

2 Problem statement and related work

Given the 2D images p of a person and c of a clothing item, we want to generate the image p̃ where the person from p wears the cloth from c. The task can be separated in two parts: the geometric deformation required to align c with p, and the refinement that fits the aligned cloth c̃ on p. These two sub-tasks can be modelled with learnable neural networks, i.e., spatial transformer networks jaderberg2015spatial ; rocco2017convolutional that output the parameters θ of a geometric deformation T_θ, with c̃ = T_θ(c), and a conditional generative network G that gives p̃.

Because it would be costly to construct a dataset of (p, c, p̃) triplets, previous works han2018viton ; wang2018toward propose to use an agnostic person representation a_p, where the clothing items in p are hidden but the identity and shape of the person are preserved. a_p is built from p with pre-trained human parsers and pose estimators. The resulting (a_p, c, p) triplets allow training for reconstruction: (a_p, c) are the inputs, p̃ the output and p the ground-truth. We finally have the conditional generative process:

p̃ = G(a_p, c)   (1)

Conditional image generation.

Generative models for image synthesis have shown impressive results since the arrival of adversarial training goodfellow2014generative . Combined with deep networks radford2015unsupervised , this approach has been extended to conditional image generation in mirza2014conditional and performs increasingly well on a wide range of tasks, from image-to-image translation isola2017image ; zhu2017unpaired to video editing shetty2018adversarial . However, these models cannot handle large spatial deformations and fail to modify the shape of objects mejjati2018unsupervised , which is necessary for a virtual try-on system.

Image composition.

Recent approaches for image composition combine STNs jaderberg2015spatial with GANs to align and merge two images. In lin2018st , Lin et al. use a sequence of warps generated by an STN to place a foreground object in a background image. Recently, SF-GAN zhan2018spatial separated the task in two stages: an STN warping the object, and a refinement network adapting the texture and appearance of the object.

Geometric deformations in generative models.

The problem of handling large spatial deformations in generative models has mainly been studied in the context of pose-guided person image generation. This task consists of generating a person image, given a reference person and a target pose. Some approaches use disentanglement to separate pose and shape information from appearance, which allows reconstructing a reference person in a different pose ma2018disentangled ; lorenz2019unsupervised ; esser2018variational . However, recent state-of-the-art approaches for pose-guided person generation include explicit spatial transformations in their architecture, whether learnt jaderberg2015spatial or not. In balakrishnan2018synthesizing , the different body parts of a person are segmented and moved to the target pose by part-specific learnable affine transformations, which are applied at the pixel level. The deformable GAN from siarohin2018deformable is a U-net ronneberger2015u generator whose skip connections are deformed by part-specific affine transformations; these transformations are computed from the source and target pose information. Instead, Dong et al. dong2018soft use the convolutional geometric matcher from rocco2017convolutional to learn a thin-plate-spline (TPS) transform between the source human parsing and a synthesized target parsing, and align the deep feature maps of an encoder-decoder generator.

Appearance transfer.

Close to the virtual try-on task, there is a body of work on human appearance transfer: given two images of different persons, the goal is to transfer the appearance of a part of person A onto person B. The disentanglement approaches mentioned in the previous section lorenz2019unsupervised ; ma2018disentangled fit this task, but others are specifically designed for it. SwapNet raj2018swapnet proposes a dual-path network to generate a new human parsing of the reference person and uses region-of-interest pooling to transfer the texture. In wu2018m2e , the method relies on DensePose information alp2018densepose , which provides a 3D surface estimation of a human body, to warp and align the two persons. The transfer is then done with segmentation masks and refinement networks.

Virtual try-on.

Most of the approaches for a virtual try-on system come from computer graphics and rely on 3D measurements or representations. Drape guan2012drape learns a deformation model to render clothes on 3D bodies of different shapes. In hahn2014subspace , Hahn et al. use subspace methods to accelerate physics-based simulations and generate realistic wrinkles. ClothCap pons2017clothcap aligns a 3D cloth template to each frame of a sequence of 3D scans of a person in motion. However, these methods target the dressing of virtual avatars, e.g., for the gaming or movie industry.

The task we are interested in is the one introduced in CAGAN jetchev2017conditional and further studied by VITON han2018viton and CP-VTON wang2018toward , which we defined in the problem statement. In CAGAN jetchev2017conditional , Jetchev et al. propose a cycle-GAN approach that requires three images as input: the reference person, the cloth worn by the person and the target in-shop cloth, which limits its practical use. VITON han2018viton proposes to learn a generative composition between the warped cloth and a coarse result. The warping is done with a non-parametric geometric transform belongie2002shape . To improve this model, CP-VTON wang2018toward incorporates a learnable geometric matcher rocco2017convolutional which aligns the in-shop cloth with the reference person.

3 Our approach

Figure 1: WUTON: our proposed end-to-end warping U-net architecture. Dotted arrows correspond to the forward pass only performed during training. Green arrows denote the human parser. The geometric transforms share the same parameters θ but do not operate on the same spaces. The different training procedure for paired and unpaired pictures is explained in section 3.2.

Our task is to build a virtual try-on system able to fit a given in-shop cloth on a reference person. We propose a novel architecture, trainable end-to-end and composed of two existing modules: a convolutional geometric matcher T_θ rocco2017convolutional and a U-net ronneberger2015u generator G whose skip connections are deformed by T_θ. The joint training of T_θ and G allows us to generate realistic deformations that help to synthesize high-quality images. Also, we use an adversarial loss to make the training procedure closer to the actual use of the system, which is to replace clothes in the unpaired situation. In previous works han2018viton ; wang2018toward ; dong2019towards , the generator is only trained to reconstruct images with supervised triplets (a_p, c, p) extracted from the paired data. Thus, when generating images in the test setting, it can struggle to generalize and to warp clothes different from the one worn by the reference person. The adversarial training allows us to train our network in the test setting, where one wants to fit a cloth on a reference person wearing another cloth.

3.1 Warping U-net

Our warping U-net is composed of two connected modules, as shown in Fig. 1. The first one is a convolutional geometric matcher, which has an architecture similar to rocco2017convolutional ; wang2018toward . It outputs the parameters θ of a geometric transformation, a TPS transform in our case. This geometric transformation aligns the in-shop cloth image with the reference person. However, in contrast to previous work han2018viton ; wang2018toward ; dong2019towards , we apply the geometric transformation to the feature maps of the generator rather than at the pixel level. Thus, we learn to deform the feature maps that pass through the skip connections of the second module, a U-net ronneberger2015u generator which synthesizes the output image p̃.

The architecture of the convolutional geometric matcher is taken from CP-VTON wang2018toward , which reuses the generic geometric matcher from rocco2017convolutional . It is composed of two feature extractors F_1 and F_2, which are standard convolutional neural networks. The local feature vectors F_1(i, j) and F_2(m, n) are L2-normalized, and a correlation map C is computed as follows:

C(i, j, k) = F_1(i, j)^T F_2(m_k, n_k)   (2)

where k is the index for the position (m_k, n_k). This correlation map captures dependencies between distant locations of the two feature maps, which is useful to align the two images. C is the input of a regression network, which outputs the parameters θ and allows performing the geometric transformation T_θ. We use TPS transformations bookstein1989principal , which generate smooth sampling grids given control points. Since we transform deep feature maps of a U-net generator, we generate a sampling grid for each scale of the U-net with the same parameters θ.
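As an illustration, the normalized correlation map of Eq. (2) can be sketched in a few lines of NumPy. This is a hedged sketch, not the authors' code; the function name and the fixed (H, W, C) layout are assumptions:

```python
import numpy as np

def correlation_map(f1, f2, eps=1e-8):
    """Correlation map between two feature maps of shape (H, W, C).

    Each local feature vector is L2-normalized, then every position (i, j)
    of f1 is correlated with every position (m, n) of f2, giving a map of
    shape (H, W, H*W) whose last axis indexes the flattened positions of f2.
    """
    h, w, c = f1.shape
    f1n = f1 / (np.linalg.norm(f1, axis=-1, keepdims=True) + eps)
    f2n = f2 / (np.linalg.norm(f2, axis=-1, keepdims=True) + eps)
    # (H*W, C) @ (C, H*W) -> (H*W, H*W), then reshape to (H, W, H*W)
    corr = f1n.reshape(h * w, c) @ f2n.reshape(h * w, c).T
    return corr.reshape(h, w, h * w)
```

Since the vectors are unit-normalized, every entry is a cosine similarity in [-1, 1], and correlating a map with itself peaks on the diagonal.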

The input of the U-net generator is also the tuple of pictures (a_p, c). Since these two images are not spatially aligned, we cannot simply concatenate them and feed a standard U-net. To alleviate this, we use two different encoders processing each image independently, with non-shared parameters. The feature maps of the in-shop cloth are transformed at each scale by T_θ. Then, the feature maps of the two encoders are concatenated and fed to the decoder. With aligned feature maps, the generator is able to compose them and produce realistic results. Because we simply concatenate the feature maps and let the U-net decoder compose them instead of enforcing a pixel-level composition, experiments will show that the network has more flexibility and can produce more natural results. We use instance normalization in the U-net generator, which is more effective than batch normalization ioffe2015batch for image generation ulyanov2017improved .
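To make the skip-connection deformation concrete, here is a minimal NumPy sketch of warping a feature map with a sampling grid. All names are hypothetical, and nearest-neighbor sampling is used for brevity; a real, differentiable implementation would use bilinear sampling (e.g. `torch.nn.functional.grid_sample`):

```python
import numpy as np

def warp_features(feat, grid):
    """Warp a feature map with a sampling grid (nearest-neighbor sketch).

    feat: (H, W, C) feature map from the cloth encoder.
    grid: (H, W, 2) sampling coordinates in [-1, 1], e.g. produced by a
          TPS transform; grid[i, j] says where output (i, j) samples from.
    """
    h, w, _ = feat.shape
    # map normalized coordinates [-1, 1] to pixel indices
    ys = np.clip(np.rint((grid[..., 1] + 1) / 2 * (h - 1)), 0, h - 1).astype(int)
    xs = np.clip(np.rint((grid[..., 0] + 1) / 2 * (w - 1)), 0, w - 1).astype(int)
    return feat[ys, xs]

def identity_grid(h, w):
    """Sampling grid that leaves the feature map unchanged."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    return np.stack([xs, ys], axis=-1)
```

In the architecture described above, one such grid would be generated per U-net scale from the same TPS parameters θ, so the cloth features entering every skip connection are consistently aligned with the person encoder's features.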

3.2 Training procedure

Along with a new architecture for the virtual try-on task (Fig. 1), we also propose a new training procedure, i.e. a different data representation and an adversarial loss for unpaired images.

While previous works use a rich person representation with more than 20 channels representing human pose, body shape and the RGB image of the head, we only mask the upper-body of the reference person. Our agnostic person representation a_p is thus a 3-channel RGB image with a masked area. We compute the upper-body mask from pose and body-parsing information provided by a pre-trained neural network from liang2019look . Precisely, we mask the areas corresponding to the arms, the upper-body cloth and a fixed bounding box around the neck keypoint. However, we show in an ablation study that our method is not sensitive to inaccurate masks at inference time, since it can generate satisfying images with simple bounding-box masks.
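A minimal sketch of how such an agnostic representation could be assembled from parser and pose-estimator outputs follows; the label ids, box size and gray fill value are illustrative assumptions, not the exact ones used here:

```python
import numpy as np

def agnostic_person(image, parsing, neck_xy, box=16):
    """Build a 3-channel agnostic person representation (sketch).

    image:   (H, W, 3) RGB person image.
    parsing: (H, W) integer label map from a human parser; the label ids
             below are hypothetical, chosen for illustration only.
    neck_xy: (x, y) neck keypoint from a pose estimator.
    Masks the arms, the upper-body cloth and a fixed box around the neck
    with a constant gray value, leaving identity and shape visible.
    """
    ARM_L, ARM_R, UPPER = 14, 15, 5  # hypothetical parser label ids
    out = image.copy()
    mask = np.isin(parsing, [ARM_L, ARM_R, UPPER])
    x, y = neck_xy
    h, w = parsing.shape
    mask[max(0, y - box):min(h, y + box), max(0, x - box):min(w, x + box)] = True
    out[mask] = 128  # gray fill for the masked area
    return out
```

The bounding-box-only variant used in the ablation study amounts to setting the mask from a single rectangle instead of the parsed regions.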

Using the dataset from dong2019towards , we have pairs of an in-shop cloth image c and a person p wearing the same cloth. Using a human parser and a human pose estimator, we generate a_p. From the parsing information, we can also isolate the cloth on the person image and get c_p, the cloth worn by the reference person. Moreover, we get the image of another in-shop cloth, c̃. The inputs of our network are the two tuples (a_p, c) and (a_p, c̃). The outputs are, respectively, the reconstruction p̃ and the unpaired synthesis p̃_c̃.

The cloth worn by the person, c_p, allows us to guide the geometric matcher directly with an L1 loss:

L_warp = ‖ T_θ(c) − c_p ‖_1   (3)

The image of the reference person p provides a supervision for the whole pipeline. Similarly to CP-VTON wang2018toward , we use two different losses to guide the generation of the final image p̃: the pixel-level L1 loss ‖ p̃ − p ‖_1 and the perceptual loss johnson2016perceptual . We focus on L1 losses since they are known to generate less blur than L2 for image generation zhao2016loss . The perceptual loss consists of using the features extracted with a pre-trained neural network, VGG simonyan2014very in our case. Specifically, our perceptual loss is:

L_perceptual = Σ_i ‖ φ_i(p̃) − φ_i(p) ‖_1   (4)

where φ_i(I) are the feature maps of an image I extracted at the i-th layer of the VGG network. Furthermore, we exploit adversarial training to train the network to fit c̃ on the agnostic person representation a_p, which is extracted from a person wearing a different cloth. This is only feasible with an adversarial loss, since there is no available ground-truth for the pair (a_p, c̃). Thus, we feed the discriminator with the synthesized image p̃_c̃ and real images of persons from the dataset. This adversarial loss L_adv is also back-propagated to the convolutional geometric matcher, which allows generating much more realistic spatial transformations. We use the relativistic adversarial loss jolicoeur-martineau2018 with gradient penalty gulrajani2017improved ; arjovsky2017wasserstein , which trains the discriminator to predict the relative realness of real images compared to synthesized ones. Finally, the objective function of our network is:

L = L_warp + L_L1 + L_perceptual + λ L_adv   (5)

We use the Adam optimizer kingma2014adam to train our network.
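Given feature maps from a pre-trained extractor, the perceptual term of Eq. (4) reduces to a per-layer L1 sum. A minimal sketch (the feature extractor itself is outside the snippet, and the per-layer mean is a normalization choice made here for illustration, not necessarily the one used in the paper):

```python
import numpy as np

def perceptual_l1(feats_a, feats_b):
    """L1 perceptual distance between two images, given their feature maps.

    feats_a, feats_b: lists of (H_i, W_i, C_i) arrays, one per layer of a
    pre-trained network such as VGG. Returns the sum over layers of the
    mean absolute difference.
    """
    return sum(np.abs(fa - fb).mean() for fa, fb in zip(feats_a, feats_b))
```

The pixel-level L1 term is the same computation applied to the raw images, i.e. a single-layer list containing p̃ and p.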

4 Experiments and analysis

Figure 2: Comparison of our method with CP-VTON wang2018toward , the state-of-the-art for the virtual try-on task. More examples and higher resolution pictures are provided in Appendix.
Figure 3: Our unpaired adversarial loss function improves the performance of our generator in the case of significant shape changes from the source cloth to the target cloth. Specifically, when going from short sleeves to long sleeves, it tends to erase the shape of the short sleeves. With the paired adversarial loss, we do not observe this behavior, since that case never occurs during training.

Figure 4: Left: Our method can handle low-quality masks, at the cost of a generic arm pose. Right: Some common failure cases of our method. Detection of the initial cloth can fail beyond the capacity of our U-net generator (first row), and uncommon poses are not properly rendered (second row).

We first describe the dataset. Then, we compare our approach with CP-VTON wang2018toward (we use the public implementation from https://github.com/sergeywong/cp-vton), the current state-of-the-art for the virtual try-on task. We present visual and quantitative results showing that WUTON significantly outperforms the current state-of-the-art. Finally, we describe the impact of each main component of our approach in an ablation study and show that WUTON can also generate high-quality images with an inaccurate mask at inference time.

4.1 Dataset

We use the Image-based Multi-pose Virtual try-on dataset from MG-VTON dong2019towards (available at http://sysu-hcp.net/lip/overview.php). This dataset contains 35,687 person images and 13,524 cloth images at 256x192 resolution. For each in-shop cloth image, there are multiple images of a model wearing the given cloth, from different views and in different poses. We remove images tagged as back views, since the in-shop cloth image is only shown from the front. We process the images with a neural human parser and pose estimator, specifically the joint body parsing and pose estimation (JPP) network liang2019look ; gong2017look (open-source implementation at https://github.com/Engineering-Course/LIP_JPPNet).

4.2 Visual results

Visual results of our method and CP-VTON are shown in Fig. 2. CP-VTON has trouble realistically deforming and rendering complex patterns like stripes or flowers. The control points of the TPS transform are visible and lead to unrealistic curves and deformations on the clothes. Also, the edges of cloth patterns and body contours are blurred.

Our method surpasses the previous state-of-the-art on several challenges. In the first two rows, our method generates spatial transformations of a much higher visual quality, which is especially visible for stripes (2nd row). It preserves complex visual patterns of clothes and presents less blur than CP-VTON on the edges. Also, it can distinguish the relevant parts of the in-shop cloth image (3rd row). Generally, our method generates results of high visual quality while preserving the characteristics of the target cloth. We also show some failure cases in Fig. 4. Problems happen when the human parser does not properly detect the original cloth or when models have uncommon poses.

4.3 LPIPS metric

To further evaluate our method, we use the Learned Perceptual Image Patch Similarity (LPIPS) metric developed in zhang2018unreasonable . This metric is very similar to the perceptual loss we use in training (see Section 3.2), since the idea is to use the feature maps extracted by a pre-trained neural network to quantify the perceptual difference between two images. Differently from the basic perceptual loss, they first unit-normalize each layer in the channel dimension and then rescale by learned weights w_l:

d(x, x_0) = Σ_l 1/(H_l W_l) Σ_{h,w} ‖ w_l ⊙ (ŷ_{hw}^l − ŷ_{0,hw}^l) ‖_2^2   (6)

where ŷ_{hw}^l is the unit-normalized feature vector at the l-th layer and spatial location (h, w), extracted by a neural network, AlexNet krizhevsky2014one in their case.
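A sketch of this computation, assuming the per-layer feature maps and learned channel weights are given (the weights here are placeholders, not the ones released with LPIPS):

```python
import numpy as np

def lpips_distance(feats_a, feats_b, weights, eps=1e-8):
    """LPIPS-style distance between two images (sketch of Eq. 6).

    feats_a, feats_b: lists of (H_l, W_l, C_l) feature maps per layer.
    weights: list of (C_l,) learned per-channel weights (placeholders here).
    """
    d = 0.0
    for fa, fb, w in zip(feats_a, feats_b, weights):
        # unit-normalize each spatial location along the channel axis
        na = fa / (np.linalg.norm(fa, axis=-1, keepdims=True) + eps)
        nb = fb / (np.linalg.norm(fb, axis=-1, keepdims=True) + eps)
        diff = (w * (na - nb)) ** 2      # rescale, then squared difference
        d += diff.sum(axis=-1).mean()    # average over the H_l x W_l grid
    return d
```

With all weights set to one, this reduces to an (unlearned) averaged squared distance between unit-normalized deep features.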

We evaluate the LPIPS on the test set. We can only use this method in the paired setting, since there is no available ground-truth in the unpaired setting. Thus, it does not exactly evaluate the real task we aim for. Results are shown in Table 1. Our approach significantly outperforms the state-of-the-art on this metric. Here, the best model uses the adversarial loss on paired data, but visual inspection suggests that the unpaired adversarial loss is better in the real use case of our work. We evaluate CP-VTON wang2018toward both on their agnostic person representation (20 channels with the RGB image of the head and shape/pose information) and on ours.

Method                                   LPIPS
CP-VTON on their person representation   0.182 ± 0.049
CP-VTON on our a_p                       0.131 ± 0.058
WUTON                                    0.101 ± 0.047

Impact of loss functions on WUTON:
W/o adv. loss                            0.107 ± 0.049
W. paired adv. loss                      0.099 ± 0.046
Not end-to-end                           0.112 ± 0.053

Impact of composition on WUTON:
W. composition                           0.105 ± 0.047

Impact of mask quality (bounding-box-masked person):
CP-VTON                                  0.185 ± 0.078
WUTON                                    0.151 ± 0.069

Table 1: LPIPS metric in the paired setting. Lower is better; ± reports std. dev.

4.4 Ablation studies

To prove the effectiveness of our approach, we perform several ablation studies. In Fig. 3, we show visual comparisons of different variants of our approach: our WUTON with the unpaired adversarial loss; with an adversarial loss on paired data (i.e., the adversarial loss is computed on the same synthesized image as the L1 and VGG losses); without the adversarial loss; and without back-propagating the losses of the synthesized images to the geometric matcher.

The adversarial loss generates sharper images and improves the contrast. This is confirmed by the LPIPS metric in Table 1 and by the visual results in Fig. 3. With the unpaired adversarial setting, the system better handles large variations between the shape of the cloth worn by the person and the shape of the new cloth. The results in Fig. 3 as well as the LPIPS scores in Table 1 show the importance of our end-to-end learning of geometric deformations. When the geometric matcher only benefits from the warping loss, it only learns to align the cloth with the masked area in a_p; it does not preserve the inner structure of the cloth. Back-propagating the losses computed on the synthesized images alleviates this issue. Finally, our approach removes the need for learning a composition between the warped cloth and a coarse result. To prove it, we re-design our U-net to generate a coarse result and a composition mask; the synthesized image is then the composition between the coarse result and the warped cloth. With this configuration, the LPIPS score slightly degrades.
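For reference, the pixel-level composition step that this ablation re-introduces can be written in one line; a sketch, where `mask` is the predicted composition mask in [0, 1] and the arrays are assumed broadcastable:

```python
import numpy as np

def pixel_composition(coarse, warped_cloth, mask):
    """Pixel-level composition used by prior work such as CP-VTON, which
    WUTON removes: the output mixes a coarse generated result and the
    warped cloth through a predicted per-pixel composition mask."""
    return mask * warped_cloth + (1.0 - mask) * coarse
```

Letting the U-net decoder fuse aligned feature maps instead of enforcing this hard pixel-level mix is what gives the generator its extra flexibility on borders, shadows and contrasts.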

We also show that our method can generate realistic results when the human parser is not accurate at inference time. Hence, we train and test our method with the upper-body of the person masked by a gray bounding box. It is to be noted that we still require accurate human parsing during training for the warping loss L_warp.

We also tried to learn the architecture without using the human parsing information, but this led to major difficulties with the convergence of the networks. On the one hand, a possible future direction is to reduce the dependency on the human parser by learning the segmentation in a self-supervised way (providing several pictures of the same cloth on different models, or using videos). On the other hand, the presence of this parser can ease the handling of pictures with complex backgrounds.

5 Conclusion

In this work, we propose an architecture trainable end-to-end which combines a U-net with a convolutional geometric matcher and significantly outperforms the state-of-the-art for the virtual try-on task. The end-to-end training procedure with an unpaired adversarial loss allows generating realistic geometric deformations, synthesizing sharp images with proper contrast, and preserving complex patterns, including stripes and logos.

References

  • [1] Peng Guan, Loretta Reiss, David A Hirshberg, Alexander Weiss, and Michael J Black. Drape: Dressing any person. ACM Trans. Graph., 31(4):35–1, 2012.
  • [2] Fabian Hahn, Bernhard Thomaszewski, Stelian Coros, Robert W Sumner, Forrester Cole, Mark Meyer, Tony DeRose, and Markus Gross. Subspace clothing simulation using adaptive bases. ACM Transactions on Graphics (TOG), 33(4):105, 2014.
  • [3] Gerard Pons-Moll, Sergi Pujades, Sonny Hu, and Michael J Black. Clothcap: Seamless 4d clothing capture and retargeting. ACM Transactions on Graphics (TOG), 36(4):73, 2017.
  • [4] Nikolay Jetchev and Urs Bergmann. The conditional analogy gan: Swapping fashion articles on people images. In Proceedings of the IEEE International Conference on Computer Vision, pages 2287–2292, 2017.
  • [5] Xintong Han, Zuxuan Wu, Zhe Wu, Ruichi Yu, and Larry S Davis. Viton: An image-based virtual try-on network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7543–7552, 2018.
  • [6] Bochao Wang, Huabin Zheng, Xiaodan Liang, Yimin Chen, Liang Lin, and Meng Yang. Toward characteristic-preserving image-based virtual try-on network. In Proceedings of the European Conference on Computer Vision (ECCV), pages 589–604, 2018.
  • [7] Haoye Dong, Xiaodan Liang, Bochao Wang, Hanjiang Lai, Jia Zhu, and Jian Yin. Towards multi-pose guided virtual try-on network. arXiv preprint arXiv:1902.11026, 2019.
  • [8] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1125–1134, 2017.
  • [9] Serge Belongie, Jitendra Malik, and Jan Puzicha. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2002.
  • [10] Ignacio Rocco, Relja Arandjelovic, and Josef Sivic. Convolutional neural network architecture for geometric matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6148–6157, 2017.
  • [11] Fred L. Bookstein. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Transactions on pattern analysis and machine intelligence, 11(6):567–585, 1989.
  • [12] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586–595, 2018.
  • [13] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in neural information processing systems, pages 2017–2025, 2015.
  • [14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
  • [15] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations, 2016.
  • [16] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
  • [17] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2223–2232, 2017.
  • [18] Rakshith R Shetty, Mario Fritz, and Bernt Schiele. Adversarial scene editing: Automatic object removal from weak supervision. In Advances in Neural Information Processing Systems, pages 7717–7727, 2018.
  • [19] Youssef Alami Mejjati, Christian Richardt, James Tompkin, Darren Cosker, and Kwang In Kim. Unsupervised attention-guided image-to-image translation. In Advances in Neural Information Processing Systems, pages 3693–3703, 2018.
  • [20] Chen-Hsuan Lin, Ersin Yumer, Oliver Wang, Eli Shechtman, and Simon Lucey. St-gan: Spatial transformer generative adversarial networks for image compositing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9455–9464, 2018.
  • [21] Fangneng Zhan, Hongyuan Zhu, and Shijian Lu. Spatial fusion gan for image synthesis. arXiv preprint arXiv:1812.05840, 2018.
  • [22] Liqian Ma, Qianru Sun, Stamatios Georgoulis, Luc Van Gool, Bernt Schiele, and Mario Fritz. Disentangled person image generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 99–108, 2018.
  • [23] Dominik Lorenz, Leonard Bereska, Timo Milbich, and Björn Ommer. Unsupervised part-based disentangling of object shape and appearance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
  • [24] Patrick Esser, Ekaterina Sutter, and Björn Ommer. A variational u-net for conditional appearance and shape generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8857–8866, 2018.
  • [25] Guha Balakrishnan, Amy Zhao, Adrian V Dalca, Fredo Durand, and John Guttag. Synthesizing images of humans in unseen poses. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8340–8348, 2018.
  • [26] Aliaksandr Siarohin, Enver Sangineto, Stéphane Lathuilière, and Nicu Sebe. Deformable gans for pose-based human image generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3408–3416, 2018.
  • [27] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
  • [28] Haoye Dong, Xiaodan Liang, Ke Gong, Hanjiang Lai, Jia Zhu, and Jian Yin. Soft-gated warping-gan for pose-guided person image synthesis. In Advances in Neural Information Processing Systems, pages 472–482, 2018.
  • [29] Amit Raj, Patsorn Sangkloy, Huiwen Chang, Jingwan Lu, Duygu Ceylan, and James Hays. Swapnet: Garment transfer in single view images. In Proceedings of the European Conference on Computer Vision (ECCV), pages 666–682, 2018.
  • [30] Zhonghua Wu, Guosheng Lin, Qingyi Tao, and Jianfei Cai. M2e-try on net: Fashion from model to everyone. arXiv preprint arXiv:1811.08599, 2018.
  • [31] Rıza Alp Güler, Natalia Neverova, and Iasonas Kokkinos. Densepose: Dense human pose estimation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7297–7306, 2018.
  • [32] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456, 2015.
  • [33] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6924–6932, 2017.
  • [34] Xiaodan Liang, Ke Gong, Xiaohui Shen, and Liang Lin. Look into person: Joint body parsing & pose estimation network and a new benchmark. IEEE transactions on pattern analysis and machine intelligence, 41(4):871–885, 2019.
  • [35] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision, pages 694–711. Springer, 2016.
  • [36] Hang Zhao, Orazio Gallo, Iuri Frosio, and Jan Kautz. Loss functions for image restoration with neural networks. IEEE Transactions on computational imaging, 3(1):47–57, 2016.
  • [37] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
  • [38] Alexia Jolicoeur-Martineau. The relativistic discriminator: a key element missing from standard GAN. In International Conference on Learning Representations, 2019.
  • [39] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, pages 5767–5777, 2017.
  • [40] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pages 214–223, 2017.
  • [41] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
  • [42] Ke Gong, Xiaodan Liang, Dongyu Zhang, Xiaohui Shen, and Liang Lin. Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 932–940, 2017.
  • [43] Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.

Appendix A Appendix

a.1 Implementation details

Convolutional geometric matcher.

To extract the feature maps, we apply five blocks, each made of one standard convolution layer followed by a 2-strided convolution layer that downsamples the maps. The depths of the feature maps at the successive scales are (16, 32, 64, 128, 256). The correlation map is then computed and fed to a regression network composed of two 2-strided convolution layers, two standard convolution layers and one final fully-connected layer predicting the vector of spatial transformation parameters. We use batch normalization [32] and ReLU activations. The parameters of the two feature-map extractors are not shared.
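The correlation layer between the two feature maps can be sketched in numpy. This is a minimal illustration, not the paper's implementation: the function name `correlation_map`, the L2 normalization of each spatial feature vector, and the output layout are our assumptions.

```python
import numpy as np

def correlation_map(f_a, f_b):
    """Dense correlation between two feature maps.

    f_a, f_b: arrays of shape (C, H, W). Each spatial location's feature
    vector is L2-normalized, then every location of f_b is correlated
    with every location of f_a. Returns an array of shape (H*W, H, W)
    where entry (k, i, j) is the dot product between the feature vector
    of f_b at flattened location k and that of f_a at location (i, j).
    """
    c, h, w = f_a.shape
    # Normalize each spatial location's feature vector to unit length
    a = f_a / (np.linalg.norm(f_a, axis=0, keepdims=True) + 1e-8)
    b = f_b / (np.linalg.norm(f_b, axis=0, keepdims=True) + 1e-8)
    a = a.reshape(c, h * w)   # (C, HW)
    b = b.reshape(c, h * w)   # (C, HW)
    corr = b.T @ a            # (HW, HW): all pairwise dot products
    return corr.reshape(h * w, h, w)
```

The resulting map is what feeds the regression network that predicts the transformation parameters.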

Siamese U-net generator. We use the same encoder architecture as in the convolutional geometric matcher, but we store the feature maps at each scale. The decoder is symmetric to the encoder: five blocks, each made of one standard convolution layer followed by a 2-strided deconvolution layer that upsamples the feature maps. After each deconvolution, the feature maps are concatenated with those passed through the skip connections. In the generator, we use instance normalization with ReLU activations, since it gives better results for image and texture generation [33].
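Instance normalization differs from batch normalization in that statistics are computed per sample and per channel, over the spatial dimensions only. A minimal numpy sketch (the function name and `eps` value are illustrative, without the learnable affine parameters a full implementation would carry):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization for a batch of feature maps.

    x: array of shape (N, C, H, W). Each (sample, channel) map is
    normalized independently over its spatial dimensions.
    """
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

Because each sample is normalized independently of the rest of the batch, per-image contrast information is discarded, which is the property [33] found beneficial for texture generation.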

Discriminator. We adopt the fully convolutional discriminator from Pix2Pix [8], but with five downsampling layers instead of the three in the original version. Each of these is a 2-strided convolution layer with batch normalization and a leaky ReLU activation.
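Since the discriminator is fully convolutional, its output is a spatial map of realism scores rather than a single scalar, and the map's size follows directly from the five stride-2 layers. A small helper illustrating the arithmetic (assuming 'same' padding; the 256x192 input resolution in the example is a hypothetical value, not stated in this section):

```python
def downsampled_size(h, w, n_downsamples=5):
    """Spatial size of the discriminator output map after
    n_downsamples stride-2 convolutions with 'same' padding."""
    return h // 2 ** n_downsamples, w // 2 ** n_downsamples
```

For a 256x192 input, `downsampled_size(256, 192)` gives an 8x6 grid of patch scores.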

Adversarial loss. We use the relativistic formulation of the adversarial loss [38]. In this formulation, the discriminator is trained to predict that real images are more real than synthesized ones, rather than trained to predict that real images are real and synthesized images are synthesized.
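The paper does not spell out which relativistic variant it uses; as an illustration, the relativistic standard GAN (RSGAN) losses of [38] can be sketched as follows, where `c_real` and `c_fake` are the discriminator's raw (pre-sigmoid) outputs on real and synthesized images:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rsgan_d_loss(c_real, c_fake):
    """Discriminator loss: real images should be rated MORE realistic
    than synthesized ones, i.e. -log sigmoid(C(x_r) - C(x_f))."""
    return -np.mean(np.log(sigmoid(c_real - c_fake) + 1e-12))

def rsgan_g_loss(c_real, c_fake):
    """Generator loss is symmetric: synthesized images should be rated
    more realistic than real ones."""
    return -np.mean(np.log(sigmoid(c_fake - c_real) + 1e-12))
```

When the discriminator already separates real from fake by a wide margin, its loss vanishes while the generator's grows, which matches the intuition stated above.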

Optimization. We use the Adam optimizer [41] with , , a learning rate of and a batch size of 8. Also, we use .
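For reference, a single Adam update [41] can be written as below. This is a generic sketch using Adam's default hyperparameters, not necessarily the values used in the paper (which are omitted in this extracted version):

```python
import numpy as np

def adam_step(p, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameter p with gradient g.

    m, v: running estimates of the first and second gradient moments.
    t: 1-based step count, used for bias correction.
    Returns the updated (p, m, v).
    """
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)   # bias-corrected second moment
    return p - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

In practice one such update is applied to every parameter tensor of the generator, geometric matcher and discriminator at each mini-batch.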

Hardware. We use an NVIDIA Tesla V100 with 16GB of memory. Training takes around 3 days. At inference, WUTON processes a mini-batch of 4 images in 0.31s.