E-commerce means not being able to try on a product, which is difficult for fashion consumers. Sites now routinely put up photoshoots of models wearing products, but volume and turnover make doing so very expensive and time-consuming. There is a need to generate realistic and accurate images of fashion models wearing different sets of clothing. One could use 3D models of posture [8, 14]. The alternative – synthesizing product-model images without 3D measurements [17, 45, 39, 11, 15]
– is known as virtual try-on. These methods usually consist of two components: 1) a spatial transformer to warp the product image using some estimate of the model’s pose and 2) an image generation network that combines the coarsely aligned, warped product with the model image to produce a realistic image of the model wearing the product.
Transfer is much easier with simple garments like t-shirts, which are emphasized in the literature. General garments (unlike t-shirts) might open in front; have sophisticated drapes; have shaped structures like collars and cuffs; have buttons; and so on. These effects severely challenge existing methods (examples in Supplementary Materials). Warping is significantly improved if one uses the product image to choose a model image that is suited to that garment (Figure 1).
At least in part, this is a result of how image generation networks are trained. We train using paired images – a product and a model wearing that product [17, 45, 53]. This means that the generation network always expects the target image to be appropriate for the product (so it is not trained to, for example, put a sweater onto a model wearing a dress, Figure 1). An alternative is to use adversarial training [11, 12, 38, 13, 37]; but it is hard to preserve specific product details (for example, a particular style of buttons; a decal on a t-shirt) in this framework. To deal with this difficulty, we learn an embedding space for choosing product-model pairs that will result in high-quality transfers (Figure 2). The embedding learns to predict what shape a garment in a model image would take if it were in a product image. Products are then matched to models wearing similarly shaped garments. Because models typically wear many garments, we use a spatial attention visual encoder to parse each category (top, bottom, outerwear, all-body, etc.) of garment and embed each separately.
Another problem arises when a garment is open (for example, an unbuttoned coat). In this case, the target of the warp might have more than one connected component. Warpers tend to react by fitting one region well and the other poorly, resulting in misaligned details (the buttons of Figure 1). Such errors may make little contribution to the training loss, but they are very apparent and are considered severe problems by real users. We show that using multiple coordinated, specialized warps produces substantial quantitative and qualitative improvements in warping. Our warper produces multiple warps, trained to coordinate with each other. An inpainting network combines the warps with the masked model and creates a synthesized image. The inpainting network essentially learns to choose between the warps, while also providing guidance to the warper, as the two are trained jointly. Qualitative evaluation confirms that an important part of the improvement results from better predictions of buttons, pockets, labels, and the like.
We show large-scale quantitative evaluations of virtual try-on. We collected a new dataset of 422,756 pairs of product images and studio photos by mining fashion e-commerce sites. The dataset contains multiple product categories. We compare with prior work on the established VITON dataset, both quantitatively and qualitatively. Quantitative results show that choosing the product-model pairs using our shape embedding yields significant improvements for all image generation pipelines (table 2). Using multiple warps also consistently outperforms the single-warp baseline, as demonstrated by both quantitative (table 2, figure 5) and qualitative (figure 7) results. Qualitative comparison with prior work shows that our system preserves the details of both the to-change garment and the target model more accurately. We conducted a user study simulating the cost for e-commerce of replacing real model photos with synthesized ones. Results show that 40% of our synthesized images are thought to show real models.
As a summary of our contributions:
we introduce a matching procedure that results in significant qualitative and quantitative improvements in virtual try-on, whatever warper is used.
we introduce a warping model that learns multiple coordinated warps and consistently outperforms baselines on all test sets.
our generated results preserve details accurately and realistically enough to make shoppers think that some of the synthesized images are real.
2 Related Work
Image synthesis: Spatial transformer networks estimate geometric transformations using neural networks. Subsequent work [28, 39] shows how to warp one object to another. Warping can be used to produce images of rigid objects [26, 30] and non-rigid objects (e.g., clothing) [17, 12, 45]. In contrast to prior work, we use multiple spatial warpers.
Our warps must be combined into a single image, and our U-Net for producing this image follows trends in inpainting (methods that fill in missing portions of an image, see [48, 31, 49, 50]). Han et al. [16, 52] show inpainting methods can complete missing clothing items on people.
In our work, we use FID∞ to evaluate our method quantitatively. This is based on the Fréchet Inception Distance (FID), a common metric in generative image modelling [5, 54, 29]. Chong et al. recently showed that FID is biased; extrapolation removes the bias, yielding an unbiased score (FID∞).
Generating clothed people: Zhu et al. used a conditional GAN to generate images based on a pose skeleton and a text description of the garment. SwapNet learns to transfer clothes from person A to person B by disentangling clothing and pose features. Hsiao et al. learned a fashion model synthesis network using per-garment encodings to enable convenient minimal edits to specific items. In contrast, we warp products onto real model images.
Shape matching underlies our method for matching products to models. Hsiao et al. built a shape embedding to enable matching between human bodies and well-fitting clothing items. Prior work estimated the shape of the human body [4, 27], of clothing items [10, 25], and of both [35, 40] from 2D images. The DensePose descriptor helps model the deformation and shading of cloth, and has therefore been adopted by recent work [36, 13, 47, 51, 7, 52].
Virtual try-on (VTO) maps a product to a model image. VITON uses a U-Net to generate a coarse synthesis and a mask on the model where the product is present. A mapping from the product mask to the on-model mask is learned through a thin plate spline (TPS) transformation. The learned mapping is applied to the product image to create a warp. Following their work, Wang et al. improved the architecture using a Geometric Matching Module to estimate the TPS transformation parameters directly from pairs of product image and target person. They train a separate refinement network to combine the warp and the target image. VTNFP extends the work by incorporating body segment prediction, and later works follow a similar procedure [37, 24, 42, 23, 2]. However, TPS transformation fails to produce reasonable warps on our dataset, due to the noisiness of the generated masks, as shown in Figure 6 right. Instead, we adopt affine transformations, which we have found to be more robust to these imperfections. A group of later works extended the task to multiple poses. Warping-GAN combined adversarial training with GMM, generating pose and texture separately using a two-stage network. MG-VTON further refined the generation method using a three-stage generation network. Other work [21, 55, 51, 7, 46] followed a similar procedure. Han et al. argued that TPS transformations have too few degrees of freedom and proposed a flow-based method to create the warp.
Much existing virtual try-on work [17, 12, 21, 47, 55, 53, 24, 37] is evaluated on datasets that only have tops (t-shirt, shirt, etc.). Having only tops largely reduces the likelihood of shape mismatch, as tops have simple and similar shapes. In our work, we extend the problem to include clothing items of all categories (t-shirt, shirt, pants, shorts, dress, skirt, robe, jacket, coat, etc.), and propose a method for matching the shape between the source product and the target model. Evaluation shows that using pairs that match in shape significantly increases the generation quality for both our method and prior work (table 2).
In addition, real studio outfits are often covered by unzipped or unbuttoned outerwear, a case not present in prior work [17, 12, 21, 47, 55, 53, 37]. Open outerwear can partition or severely occlude the garment underneath; prior work does not handle this, as shown in Figure 6. We show that our multi-warp generation module ameliorates these difficulties.
3 Proposed Method
Our method has two components. A Shape Matching Net (SMN; Figures 2 and 3) learns an embedding for choosing shape-wise compatible garment-model pairs to perform transfer. Product and model images are matched by finding product (resp. model) images that are nearby in the embedding space. A Multi-warp Try-on Net (MTN; Figure 4) takes in a garment image, a model image, and a mask covering the to-change garment on the model, and generates a realistic synthesized image of the model wearing the provided garment. The network consists of a warper and an inpainting network, trained jointly. The warper produces warps of the product image, each specialized to certain features. The inpainting network learns to combine warps by choosing which features to look for in each warp. The SMN and MTN are trained separately.
For the rest of the paper, we use the following terms: a product image of a given garment type, the model image, and the corresponding product mask on the model image. The model image serves as the ground-truth image of a model wearing the product.
3.1 Shape Matching Net
Given an arbitrary product, our goal is to retrieve a set of model images that are compatible with the shape of that product, and vice versa. To support such queries, we train a Shape Matching Net that maps model and product images of similar shapes close together in an embedding space. We perform a k-nearest-neighbors search in this embedding space to retrieve product-model pairs for creating synthesized images.
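To make the retrieval step concrete, here is a minimal numpy sketch of brute-force k-nearest-neighbor search in an embedding space. The function name and the toy data are illustrative only, not taken from the paper (a production system would use an approximate index for 400k+ items).

```python
import numpy as np

def knn_retrieve(query, embeddings, k=5):
    """Return indices of the k nearest rows of `embeddings` to `query`,
    by Euclidean distance (brute force)."""
    d = np.linalg.norm(embeddings - query, axis=1)
    return np.argsort(d)[:k]

# toy example: 100 random 8-D embeddings
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 8))
idx = knn_retrieve(emb[0], emb, k=3)
# the query itself is its own nearest neighbor (distance 0)
assert idx[0] == 0
```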
We use product images to learn a shape embedding, because product images follow a similar geometrical layout. From every product image, we create a contour image by converting it to grayscale and applying a mean filter, Gaussian adaptive thresholding, and a contour-finding algorithm. The contour images preserve shape information and discard unimportant information (e.g., color, pattern, material). A shape auto-encoder is trained to reconstruct the contour image, using mean squared error as the reconstruction loss and an L2 regularization on the embedding space.
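The filtering and thresholding steps above can be sketched in plain numpy. This is a simplified stand-in for standard image-processing routines (e.g., OpenCV's `adaptiveThreshold` and `findContours`); the filter size and offset are arbitrary choices, not values from the paper.

```python
import numpy as np

def mean_filter(img, size=5):
    """Box (mean) filter via an integral image; `img` is a 2-D float array."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))            # zero row/col for window sums
    h, w = img.shape
    s = (c[size:size + h, size:size + w] - c[:h, size:size + w]
         - c[size:size + h, :w] + c[:h, :w])   # sum over each size x size window
    return s / (size * size)

def adaptive_threshold(img, size=5, offset=2.0):
    """Binarize: a pixel is 1 where it is darker than its local mean
    by more than `offset` (a rough stand-in for adaptive thresholding)."""
    local_mean = mean_filter(img, size)
    return (img < local_mean - offset).astype(np.uint8)
```

A contour-finding pass (e.g., border following) would then extract the outline from this binary image.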
When parsing a fashion model image, we need to retrieve product information conditioned on garment type. As our dataset contains pairs of product images and model images, we exploit these cues and use spatial attention layers to identify the subset of features corresponding to each type of product on a model image. The garment visual encoder outputs one embedding vector for a product image and one embedding vector per type for a model image. We embed pairs of product and model images such that a product of a given type is embedded closer to the corresponding garment embedding of its paired model than to a different product or to a different garment on the same model, using a triplet loss. Negatives are sampled uniformly at random from items of the same type. Additionally, we minimize the squared distance between paired embeddings, and an L2 regularization is enforced on the embedding space. We refer to the combination of these terms as the attention loss.
The embedding loss captures the feature correspondence between the two domains and helps enforce the attention mechanism embedded in the network architecture. Details about the spatial attention architecture are in the Supplementary Materials.
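A standard form consistent with this description – a triplet term, a paired-distance term, and an L2 regularizer – can be sketched as follows. The symbols $e_p$ (product embedding), $e_m^t$ (type-$t$ garment embedding of the paired model), $e_n$ (sampled negative), margin $\alpha$, and weight $\beta$ are hypothetical notation for illustration, not taken from the source:

```latex
\mathcal{L}_{\text{attn}}
  = \max\!\bigl(0,\; \|e_p - e_m^{t}\|_2^{2} - \|e_p - e_n\|_2^{2} + \alpha\bigr)
  \;+\; \|e_p - e_m^{t}\|_2^{2}
  \;+\; \beta\,\bigl(\|e_p\|_2^{2} + \|e_m^{t}\|_2^{2}\bigr)
```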
To perform shape matching, we are only interested in the shape information extracted from the model image, rather than the full visual information. Therefore, we map the visual embedding into the shape embedding using a two-layer fully connected network. We reconstruct the contour image from the transferred embedding and compute a reconstruction loss. Additionally, we compute a triplet loss among a pair of original and transferred shape embeddings and the embedding of a different item; together these form the shape-transfer loss.
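The visual-to-shape mapping can be sketched as a plain two-layer fully connected network. The layer sizes, ReLU nonlinearity, and parameter names here are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def two_layer_fc(e, W1, b1, W2, b2):
    """Map a visual embedding `e` into the shape embedding space with a
    two-layer fully connected network (ReLU hidden layer, linear output)."""
    h = np.maximum(0.0, e @ W1 + b1)  # hidden layer
    return h @ W2 + b2                # shape embedding
```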
The full training loss for the Shape Matching Net is the sum of the shape reconstruction, attention, and shape-transfer losses.
3.2 Multi-warp Try-on Net
At train time, the network takes a paired product image and model image and learns to reconstruct the model image. At test time, the paired product is replaced with a different product, and the network generates an image of the model wearing it. This transfer works well when the product and the garment on the model follow a similar geometric layout, which the shape matching process ensures.
As with prior work [17, 45], our system consists of two modules: (a) a warper that creates multiple specialized warps by aligning the product image with the mask; and (b) an inpainting module that combines the warps with the masked model to produce the synthesized image. Unlike prior work [17, 45], the two modules are trained jointly rather than separately, so the inpainter guides the warper.
The warper consists of a spatial transformer network that takes the product image and the mask as input and outputs several sets of affine transformation parameters. Applying each predicted affine transformation to the product image generates a warp. The warps are optimized to match the pixels in the masked region of the target person using a per-pixel loss.
The loss is computed over all pixel locations and normalized by the image width and height; a balancing parameter controls the ratio of the loss enforced on the background relative to the mask region. This is necessary because the majority of the masks we use are noisy, as they are produced by a pre-trained segmentation model. A balanced ratio encourages the warp to match the pixel values in the masked region while attempting to keep all pixels within the mask region (examples in Supplementary Materials). This loss is sufficient to train a single-warp baseline model.
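A minimal numpy sketch of such a masked per-pixel loss, assuming an L1 penalty and a background weight `lam` (both assumptions; the paper's exact form is not reproduced here):

```python
import numpy as np

def masked_warp_loss(warp, target, mask, lam=0.1):
    """Per-pixel L1 loss between a warp and the target image, weighted so
    the masked garment region dominates. `mask` is 1 on the garment region
    and 0 elsewhere; `lam` (value assumed) weights the background."""
    err = np.abs(warp - target)
    weight = mask + lam * (1.0 - mask)   # full weight inside mask, lam outside
    return (weight * err).sum() / err.size
```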
Cascade Loss: With multiple warps, each warp is trained to address the mistakes made by the previous warps. For each warp after the first, we compute, at every pixel, the minimum loss among all the previous warps. The cascade loss averages the resulting losses over all warps. An additional regularization term is enforced on the transformation parameters, so that all the later warps stay close to the first warp.
The cascade loss enforces a hierarchy among the warps, making it more costly for an earlier warp to make a mistake than for a later warp. This prevents possible oscillation during training (multiple warps competing for the optimum). The idea is comparable to boosting, yet different, because all the warps share gradients, making it possible for earlier warps to adjust according to later warps.
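A numpy sketch of how such a cascade loss could be computed; the exact form in the paper may differ, and `weight` here stands in for any per-pixel weighting (e.g., mask-based). The running per-pixel minimum means a later warp is only penalized where all earlier warps already fail, which is what makes earlier mistakes more costly.

```python
import numpy as np

def cascade_loss(warps, target, weight):
    """Cascade loss over an ordered list of warps: warp i's per-pixel
    loss is the minimum error among warps 1..i, averaged over warps."""
    running_min = None
    total = 0.0
    for w in warps:
        err = weight * np.abs(w - target)   # per-pixel error of this warp
        running_min = err if running_min is None else np.minimum(running_min, err)
        total += running_min.mean()         # this warp's contribution
    return total / len(warps)               # average over all warps
```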
The Inpainting Module concatenates all the warps and the masked target image, and learns to inpaint the masked region of the target image. This differs from a standard inpainting task because the content of the masked region has been provided through the warps; rather, the Inpainting Module learns to combine the different warps to synthesize a final realistic and accurate image. We use a U-Net architecture with skip connections to help learn the identity, and adopt the inpainting losses proposed by Liu et al. We also experimented with adding an adversarial loss and a conditional adversarial loss during training; neither yielded an improvement.
The total loss for the Multi-warp Try-on Net is the sum of the cascade loss, the parameter regularization, and the inpainting losses.
4 Experiments
4.1 Datasets
The VITON dataset contains pairs of product images (front-view, lying flat, white background) and studio images, 2D pose maps, and pose key-points. It has been used by many works [45, 11, 15, 53, 24, 23, 2, 37]. Some works [47, 15, 13, 51] on multi-pose matching used DeepFashion, MVC, or other self-collected datasets [12, 21, 47, 55]. These datasets have the same product worn by multiple people, but do not include a product image, and are therefore not suitable for our task.
The VITON dataset only has tops. This likely biases performance upward, because (for example): the drape of trousers is different from the drape of tops; some garments (robes, jackets, etc.) are often unzipped and open, creating warping issues; and the drape of skirts is highly variable, depending on details like pleating and the orientation of the fabric grain. To emphasize these real-world problems, we collected a new dataset of 422,756 fashion products by web-scraping fashion e-commerce sites. Each product contains a product image (front-view, lying flat, white background), a model image (single person, mostly front-view), and other metadata. We use all categories except shoes and accessories, and group them into four types (top, bottom, outerwear, or all-body). Type details appear in the Supplementary Materials.
We randomly split the data into 80% for training and 20% for testing. Because the dataset does not come with segmentation annotations, we use DeepLab v3 pre-trained on the ModaNet dataset to obtain segmentation masks for model images. A large portion of the segmentation masks are noisy, which further increases the difficulty (see Supplementary Materials).
4.2 Training Process
We train our model on our newly collected dataset and on the VITON dataset to facilitate comparison with prior work. When training on the VITON dataset, we extract only the part of the 2D pose map that corresponds to the product to obtain the segmentation mask, and discard the rest. The details of the training procedure are in the Supplementary Materials.
4.3 Quantitative Evaluation
Quantitative comparison with the state of the art is difficult. Reporting the FID values from other papers is meaningless, because FID is biased and the bias depends on the parameters of the network used [9, 37]. We use the FID∞ score, which is unbiased. We cannot compute FID∞ for most other methods, because their results have not been released; in fact, recent methods (e.g., [15, 53, 24, 42, 23, 2]) have not released an implementation. CP-VTON has, and we use it as a point of comparison.
Prior work also computed the FID score on the original test set of VITON, which consists of only 2,032 synthesized pairs. Because the dataset is small, this FID score is not meaningful: the variance of the estimate is high, which leads to a large bias in the FID score, rendering it inaccurate. To ensure an accurate comparison, we created a larger test set of 50,000 synthesized pairs through random matching, following the procedure of the original work. We also created new test sets using our shape matching model, by selecting the top 25 nearest neighbors in the shape embedding space for every item in the original test set. We produce two such datasets, each of 50,000 pairs, using color and grayscale images respectively to compute the shape embedding. The grayscale ablation tells us whether the shape embedding attends to color features.
The number of warps k is chosen by computing the per-pixel error and the perceptual error (using VGG19 pre-trained on ImageNet) for warpers with different k on the test set of our dataset. Here the warper is evaluated by mapping a product to a model wearing that product. As shown in figure 5, k=2 consistently outperforms k=1. However, having more than two warps reduces performance under the current training configuration, possibly due to overfitting.
We choose the mask-ratio parameter by training single-warp models with different values on 10% of the dataset, then evaluating on the test set. Table 1 shows that a value that is too large or too small causes performance to drop; we adopt the best-performing value. Qualitative comparisons are available in the Supplementary Materials.
| Perceptual Test Error | 0.774 | 0.722 | 0.745 | 0.810 |
With this data, we can compare CP-VTON, our method using a single warp (k=1), our method using two warps (k=2), and a two-warp blended variant. The blended model takes the average of the two warps instead of their concatenation. Results appear in Table 2. We find:
for all methods, choosing the model gets better results;
there is little to choose between color and grayscale matching, so the match attends mainly to garment shape;
having two warpers is better than having one;
combining with a U-Net is much better than blending.
We believe that quantitative results understate the improvement of using more warpers, because the quantitative measure is relatively crude. Qualitative evidence supports this (figure 7).
| Test set | Random | Match (color) | Match (grayscale) |
| Ours k=2 (blended) | 15.4 | 15.26 | 15.37 |
4.4 Qualitative Results
We looked carefully for matching examples in [15, 24, 53, 37] to produce qualitative comparisons. Comparison against MG-VTON is not applicable, as that work did not include any fixed-pose qualitative examples. Note that the comparison favors prior work, because our model trains and tests using only the region corresponding to the garment in the 2D pose map, while prior work uses the full 2D pose map and key-point pose annotations.
Generally, garment transfer is hard, but modern methods now mainly fail on details, so evaluating transfer requires careful attention to detail. Figure 6 shows some comparisons. In particular, attending to image detail around boundaries, textures, and garment details exposes some of the difficulties of the task. As shown in Figure 6 left, our method handles complicated textures robustly (col. a, c) and preserves logo details accurately (col. b, e, f, g, i). The examples also show a clear difference between our inpainting-based method and prior work: our method only modifies the area where the original garment is present. This property allows us to preserve the details of the limbs (col. a, d, f, g, h, j) and of other clothing items (col. a, b) better than most prior work. Some of our results (col. c, g) show color artifacts from the original garment on the boundary, because the edge of the pose map is slightly misaligned (imperfect segmentation mask). This confirms that our method relies on fine-grained segmentation masks to produce high-quality results. Some pairs are slightly mismatched in shape (col. d, h). This rarely occurs when the test set is constructed using the shape embedding, so our method does not attempt to address it.
Two warps are very clearly better than one (Figure 7), likely because the second warp can fix alignment and details that a single-warp model fails to address. Particular improvements occur for unbuttoned or unzipped outerwear and for product images with tags. These improvements may not be easily captured by quantitative evaluation, because the differences in pixel values are small.
4.5 User Study
We used a user study to check how often users could identify synthesized images. A user is asked whether an image of a model wearing a product (which is shown) is real or synthesized. Images are displayed at the highest available resolution (512x512), as in figure 8.
| Participants | Accuracy | False Positive | False Negative |
We used examples where the mask is good, giving a fair representation of the top 20th percentile of our results. Users are primed with two real-vs-fake pairs before the study. Each participant is then tested on 50 pairs, 25 real and 25 fake, without repeating products. We test two populations of users (vision researchers, and randomly selected participants).
Mostly, users are fooled by our images; there is a very high false-positive rate (i.e., synthesized images marked real by a user; table 3). Figure 8 shows two examples of synthesized images that 70% of the general population reported as real. They are hard outerwear examples with region partition and complex shading; nevertheless, our method manages to generate high-quality syntheses. See the Supplementary Materials for all questions and the complete results of the user study.
5 Conclusion
In this paper, we propose two general modifications to the virtual try-on framework: (a) carefully choosing the product-model pair for transfer using a shape embedding, and (b) combining multiple coordinated warps using inpainting. Our results show that both modifications lead to significant improvements in generation quality. Qualitative examples demonstrate our ability to accurately preserve garment details, which makes it difficult for shoppers to distinguish between real and synthesized model images, as shown by our user study.
-  (2018-06) DensePose: dense human pose estimation in the wild. In , Cited by: §2.
-  (2019-10) Powering virtual try-on via auxiliary human segmentation learning. In The IEEE International Conference on Computer Vision (ICCV) Workshops, Cited by: §2, §4.1, §4.3.
-  (2002) Shape matching and object recognition using shape contexts. PAMI. Cited by: §2.
-  (2016) Keep it SMPL: automatic estimation of 3D human pose and shape from a single image. In ECCV, Cited by: §2.
-  (2018) Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096. Cited by: §2.
-  (2018) Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, Cited by: §4.1.
-  (2019) Improving fashion landmark detection by dual attention feature enhancement. In ICCV Workshops, Cited by: §2, §2, §4.2.
-  (2015) Synthesizing training images for boosting human 3d pose estimation. Cited by: §1.
-  (2019) Effectively unbiased fid and inception score and where to find them. arXiv preprint arXiv:1911.07023. Cited by: §2, §4.3.
-  (2017) DeepGarment : 3d garment shape estimation from a single image. Comput. Graph. Forum. Cited by: §2.
-  (2018) Soft-gated warping-gan for pose-guided person image synthesis. In NeurIPS, Cited by: §1, §1, §2, §4.1, §4.2, §4.4.
-  (2019) Towards multi-pose guided virtual try-on network. In ICCV, Cited by: §1, §2, §2, §2, §2, §4.1, §4.4.
-  (2019) Coordinate-based texture inpainting for pose-guided human image generation. CVPR. Cited by: §1, §2, §4.1, §4.2.
-  (2012) Drape: dressing any person. ACM Transactions on Graphics - TOG. Cited by: §1.
-  (2019) ClothFlow: a flow-based model for clothed person generation. In ICCV, Cited by: §1, §2, §4.1, §4.2, §4.3, §4.4.
-  (2019) Compatible and diverse fashion image inpainting. In ICCV, Cited by: §2.
-  (2018) VITON: an image-based virtual try-on network. In CVPR, Cited by: §1, §1, §1, §2, §2, §2, §2, §3.2, §4.1, §4.2, §4.2, §4.3, §4.4.
-  (2017) Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in neural information processing systems, pp. 6626–6637. Cited by: §2.
-  (2019) Dressing for diverse body shapes. ArXiv. Cited by: §2.
-  (2019) Fashion++: minimal edits for outfit improvement. In In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Cited by: §2.
-  (2019) FashionOn: semantic-guided image-based virtual try-on with detailed human and clothing information. In MM ’19, Cited by: §2, §2, §2, §4.1.
-  (2015) Spatial transformer networks. In NeurIPS, Cited by: §2, §3.2.
-  (2019) LA-viton: a network for looking-attractive virtual try-on. In ICCV Workshops, Cited by: §2, §4.1, §4.2, §4.3.
-  (2020) SieveNet: a unified framework for robust image-based virtual try-on. In WACV, Cited by: §2, §2, §4.1, §4.2, §4.3, §4.3, §4.4.
-  (2015) Garment capture from a photograph. Journal of Visualization and Computer Animation. Cited by: §2.
-  (2017) Deep view morphing. In CVPR, Cited by: §2.
-  (2018) End-to-end recovery of human shape and pose. CVPR. Cited by: §2.
-  (2016) WarpNet: weakly supervised matching for single-view reconstruction. In CVPR, Cited by: §2.
-  (2019) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4401–4410. Cited by: §2.
-  (2018) ST-gan: spatial transformer generative adversarial networks for image compositing. In CVPR, Cited by: §2.
-  (2018) Image inpainting for irregular holes using partial convolutions. In ECCV, Cited by: §2, §3.2.
-  (2016) MVC: a dataset for view-invariant clothing retrieval and attribute prediction. In ICMR, Cited by: §4.1.
-  (2016) DeepFashion: powering robust clothes recognition and retrieval with rich annotations. In CVPR, Cited by: §4.1.
-  (2019) State of the fashion industry 2019. Cited by: §1.
-  (2019) SiCloPe : silhouette-based clothed people – supplementary materials. In CVPR, Cited by: §2.
-  (2018) Dense pose transfer. In ECCV, Cited by: §2.
-  (2020) GarmentGAN: photo-realistic adversarial fashion transfer. Cited by: §1, §2, §2, §2, §4.1, §4.2, §4.3, §4.3, §4.4.
-  (2018) SwapNet: image based garment transfer. In ECCV, Cited by: §1, §2.
-  (2017) Convolutional neural network architecture for geometric matching. In CVPR, Cited by: §1, §2, §2.
-  (2019) PIFu: pixel-aligned implicit function for high-resolution clothed human digitization. ICCV. Cited by: §2.
-  (2015) FaceNet: a unified embedding for face recognition and clustering. In CVPR, Cited by: §3.1.
-  (2019) SP-viton: shape-preserving image-based virtual try-on network. Multimedia Tools and Applications. Cited by: §2, §4.3.
-  (1985) Topological structural analysis of digitized binary images by border following. Computer Vision, Graphics, and Image Processing. Cited by: §3.1.
-  (2018) Designing the future of personal fashion. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Cited by: §1.
-  (2018) Toward characteristic-preserving image-based virtual try-on network. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §1, §1, §2, §2, §3.2, §4.1, §4.2, §4.3, §4.4.
-  (2019) Down to the last detail: virtual try-on with detail carving. ArXiv. Cited by: §2.
-  (2018) M2E-try on net: fashion from model to everyone. In MM ’19, Cited by: §2, §2, §2, §4.1, §4.2.
-  (2017) High-resolution image inpainting using multi-scale neural patch synthesis. In CVPR, Cited by: §2.
-  (2018) Generative image inpainting with contextual attention. In CVPR, Cited by: §2.
-  (2019) Free-form image inpainting with gated convolution. In ICCV, Cited by: §2.
-  (2019) Inpainting-based virtual try-on network for selective garment transfer. IEEE Access. Cited by: §2, §2, §4.1, §4.2.
-  (2019) VTNFP: an image-based virtual try-on network with body and clothing feature preservation. In ICCV, Cited by: §1, §2, §2, §2, §4.1, §4.2, §4.3, §4.4.
-  (2018) Self-attention generative adversarial networks. arXiv preprint arXiv:1805.08318. Cited by: §2.
-  (2019) Virtually trying on new clothing with arbitrary poses. In MM ’19, Cited by: §2, §2, §2, §4.1.
-  (2018) ModaNet: a large-scale street fashion dataset with polygon annotations. In ACM Multimedia, Cited by: §4.1.
-  (2017) Be your own prada: fashion synthesis with structural coherence. In CVPR, Cited by: §2.