Compatible and Diverse Fashion Image Inpainting

02/04/2019 ∙ by Xintong Han, et al.

Visual compatibility is critical for fashion analysis, yet is missing in existing fashion image synthesis systems. In this paper, we propose to explicitly model visual compatibility through fashion image inpainting. To this end, we present Fashion Inpainting Networks (FiNet), a two-stage image-to-image generation framework that is able to perform compatible and diverse inpainting. Disentangling the generation of shape and appearance to ensure photorealistic results, our framework consists of a shape generation network and an appearance generation network. More importantly, for each generation network, we introduce two encoders interacting with one another to learn latent codes in a shared compatibility space. The latent representations are jointly optimized with the corresponding generation network to condition the synthesis process, encouraging a diverse set of generated results that are visually compatible with existing fashion garments. In addition, our framework is readily extended to clothing reconstruction and fashion transfer, with impressive results. Extensive experiments and comparisons with state-of-the-art approaches on the fashion synthesis task quantitatively and qualitatively demonstrate the effectiveness of our method.




1 Introduction

Recent breakthroughs in deep generative models, especially Variational Autoencoders (VAEs) [25], Generative Adversarial Networks (GANs) [12], and their variants [20, 38, 7, 26], open a new door to a myriad of fashion applications in computer vision, including fashion design [23, 48], language-guided fashion synthesis [72, 46, 13], virtual try-on systems [15, 58, 5], clothing-based appearance transfer [43, 68], etc. Unlike generating images of rigid objects, fashion synthesis is more complicated, as it involves multiple clothing items that form a compatible outfit. Items in the same outfit might have drastically different appearances in texture and color (e.g., cotton shirts, denim pants, leather shoes), yet they are complementary to one another when assembled together, constituting a stylish ensemble for a person. Therefore, exploring compatibility among different garments, as an integral collection rather than isolated elements, to synthesize a diverse set of fashion images is critical for producing satisfying virtual try-on experiences and stunning fashion design portfolios.

In light of this, we propose to explicitly explore visual compatibility relationships for fashion image synthesis. In particular, we formulate this problem as image inpainting, which aims to fill in a missing region in an image based on its surrounding pixels. It is worth noting that generating an entire outfit while modeling visual compatibility among different garments at the same time is extremely challenging, as it requires rendering clothing items varying in both shape and appearance onto a person. Instead, we take the first step to model visual compatibility by narrowing it down to image inpainting, using images with people in clothing. The goal is to render a diverse set of photorealistic clothing items to fill in the region of a missing item in an image, while matching the style of existing garments. This can be readily used for a number of fashion applications like fashion recommendations, garment transfer, and fashion design.

However, this is still a challenging problem in deep generative models. On one hand, we wish to encourage diversity in generated images, i.e. clothing items of various shape and appearance, such that customers and designers can select from a diverse set of results. On the other hand, the synthesized garments are expected to be compatible with one another. For example, given sweat pants, different types of shoes can be generated, but only sneakers rather than slippers or leather shoes are compatible.

Visual compatibility, usually learned from the co-occurrence or co-purchase of clothing items [14, 57], is similar in spirit to contextual relationships among objects [42]. Recent work demonstrates that, for image synthesis, deep generative models can effectively exploit context to inpaint missing regions, producing a unique result consistent with its surroundings [41, 64, 67]. Generalizing this idea to fashion synthesis, while appealing, is challenging since fashion synthesis is essentially a multi-modal problem: given a fashion image with one missing garment, various items, different in both shape and appearance, can be generated to be compatible with the existing set. For instance, in the second example in Figure 1, one can have different types of bottoms in shape (e.g., shorts or pants), and each bottom type may have various colors in visual appearance (e.g., blue, gray or black). Thus, synthesizing a missing fashion item requires modeling both shape and appearance. However, coupling their generation simultaneously usually fails to handle clothing shapes and boundaries, creating unsatisfactory results [27, 72].

To address these issues, we propose FiNet, a two-stage framework that fills in a missing fashion item in an image at the pixel level by generating a set of realistic and compatible fashion items with diversity. In particular, we utilize a shape generation network and an appearance generation network to generate shape and appearance sequentially. Each generation network contains an encoder-decoder generator that synthesizes new images through reconstruction, and two encoder networks that interact with each other to encourage diversity while preserving visual compatibility. Specifically, one encoder learns a latent representation of the missing item, which is constrained by the latent code from the second encoder, whose inputs are the neighboring garments of the missing item. The latent representations are jointly learned with the corresponding generator to condition the generation process. This allows both generation networks to learn high-level compatibility correlations among different garments, enabling our framework to produce synthesized fashion items with meaningful diversity (multi-modal outputs) and strong compatibility, as shown in Figure 1. We provide extensive experimental results on the DeepFashion [33] dataset, with comparisons to state-of-the-art approaches on fashion synthesis, confirming the effectiveness of our method.

2 Related Work

Visual Compatibility Modeling. Visual compatibility plays an essential role in fashion recommendation and retrieval [32, 51, 53, 54]. Metric learning based methods have been adopted, projecting two compatible fashion items close to each other in a style space [36, 57, 56]. Recently, beyond modeling pairwise compatibility, sequence models [14, 29] and subset selection algorithms [17] capable of capturing the compatibility among a collection of garments have also been introduced. Unlike these approaches, which attempt to estimate fashion compatibility, we incorporate compatibility information into an image inpainting framework that generates a fashion image containing complementary garments. Furthermore, most existing systems rely heavily on manual labels for supervised learning. In contrast, we train our networks in a self-supervised manner, assuming that the multiple fashion items in an outfit presented in the original catalog image are compatible with each other, since such catalogs are usually designed carefully by fashion experts. Thus, jointly minimizing a reconstruction loss while learning to inpaint teaches the network to generate compatible fashion items.

Image Synthesis. There has been a growing interest in image synthesis with GANs [12] and VAEs [25]. To control the quality of generated images with desired properties, various supervised knowledge or conditions like class labels [40, 2], attributes [49, 63], text [44, 69, 62], images [20, 59], etc., are used. In the context of generating fashion images, existing fashion synthesis methods often focus on rendering clothing conditioned on poses [34, 39, 27, 50], textual descriptions [72, 46], textures [61], a clothing product image [15, 58, 66, 21], clothing on another person [68, 43], or multiple disentangled conditions [7, 35, 65]. In contrast, we make our generative model aware of fashion compatibility, which has not been explored previously. To make our method more applicable to real-world applications, we formulate the modeling of fashion compatibility as a compatible inpainting problem that captures high-level dependencies among various fashion items or fashion concepts.

Furthermore, fashion compatibility is a many-to-many mapping problem, since one fashion item can match with multiple related items of various shapes and appearances. Therefore, our method is related to multi-modal generative models [71, 28, 9, 18, 59, 7]. In this work, we propose to learn a compatibility latent space, where the compatible fashion items are encouraged to have similar distributions.

Image Inpainting. Our method is also closely related to image inpainting [41, 64, 19, 67], which synthesizes missing regions in an image given contextual information. Compared with traditional image inpainting, our task is more challenging—we need to synthesize realistic fashion items with meaningful diversity in shape and appearance, and at the same time ensure that the inpainted clothing items are compatible in fashion style to existing garments in the current image. This requires explicitly encoding the compatibility by learning inherent relationships between various garments, rather than simply modeling the context itself. Another significant difference is that people expect multi-modal outputs in fashion image synthesis, whereas traditional image inpainting is typically a uni-modal problem.

Figure 2: FiNet framework. The shape generation network (Sec. 3.1) aims to fill a missing segmentation map given shape compatibility information, and the appearance generation network (Sec. 3.2) uses the inpainted segmentation map and appearance compatibility information for generating the missing clothing regions. Both shape and appearance compatibility modules carry uncertainty, allowing our network to generate diverse and compatible fashion items as in Figure 1.
Figure 3: Our shape generation network.

3 FiNet: Fashion Inpainting Networks

Given an image with a missing fashion item (e.g., with the pixels in the corresponding area deleted), our task is to explicitly explore visual compatibility among the neighboring fashion garments to fill in the region, synthesizing a diverse set of photorealistic clothing items varying in both shape (e.g., maxi, midi, mini dresses) and appearance (e.g., solid color, floral, dotted, etc.). Each synthesized result is expected not only to blend seamlessly with the existing image but also to be compatible with the style of the current garments (see Figure 1). As a result, these generated images can be readily used for tasks like fashion recommendation. Further, in contrast to rigid objects, clothing items are usually subject to severe deformations, making it difficult to simultaneously synthesize both shape and appearance without introducing unwanted artifacts. To this end, we propose a two-stage framework named Fashion Inpainting Networks (FiNet), which contains a shape generation network (Sec 3.1) and an appearance generation network (Sec 3.2), to encourage diversity while preserving visual compatibility when filling in the missing region. Figure 2 illustrates an overview of the proposed framework. In the following, we present the components of FiNet in detail.

3.1 Shape Generation Network

Figure 3 shows an overview of the shape generation network. It contains an encoder-decoder based generator that synthesizes a new image through reconstruction, and two encoders, working collaboratively to condition the generation process, producing compatible synthesized results with diversity. More formally, the goal of the shape generation network is to learn a mapping G_s that projects a shape context S_c with a missing region, as well as a person representation p, to a complete shape map Ŝ, conditioned on the shape information captured by a shape encoder E_s.

To obtain the shape maps for training the generator, we leverage an off-the-shelf human parser [10] pretrained on the Look Into Person dataset [11]. In particular, given an input image I, we first obtain its segmentation maps with the parser, and then re-organize the parsing results into 8 categories: face and hair, upper body skin (torso + arms), lower body skin (legs), hat, top clothes (upper-clothes + coat), bottom clothes (pants + skirt + dress), shoes (we only consider four garment types, namely hat, top, bottom, and shoes, in this paper, but our method is generic and can be extended to more fine-grained categories if segmentation masks are accurately provided), and background (others). The 8-category parsing results are then transformed into an 8-channel binary map S, which is used as the ground truth of the reconstructed segmentation maps for the input. The input shape map S_c with a missing region is generated by masking out the area of a specific fashion item in the ground truth maps. For example, in Figure 3, when synthesizing top clothes, the shape context S_c is produced by masking out the plausible top region, represented by a bounding box covering the regions of the top and upper body skin.
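As a concrete illustration, the 8-channel shape map and the masking of one garment region might be sketched as follows (a hypothetical NumPy sketch; the category ordering and function names are our assumptions, not the paper's):

```python
import numpy as np

# 8 parser categories assumed in this sketch: 0 face/hair, 1 upper skin,
# 2 lower skin, 3 hat, 4 top, 5 bottom, 6 shoes, 7 background.
NUM_CATEGORIES = 8

def to_shape_map(labels):
    """Convert an (H, W) integer label map into an 8-channel binary map."""
    return np.stack([(labels == c).astype(np.float32)
                     for c in range(NUM_CATEGORIES)])

def mask_item(shape_map, box):
    """Produce a shape context by zeroing a bounding box (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = box
    masked = shape_map.copy()
    masked[:, y0:y1, x0:x1] = 0.0
    return masked
```

In practice the box would cover, e.g., the top and upper-body-skin regions, as described above.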

In addition, to preserve the pose and identity information in shape reconstruction, we employ clothing-agnostic features similar to those described in [15, 58], which include a pose representation and the hair and face layout. More specifically, the pose representation is an 18-channel heatmap extracted by an off-the-shelf pose estimator [4] trained on the COCO keypoints detection dataset [30], and the face and hair layout is computed from the same human parser [10], represented by a binary mask whose pixels in the face and hair regions are set to 1. Both representations are then concatenated to form the person representation p (19 channels in total: 18 pose heatmaps plus the face and hair mask).
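The pose heatmap channels could be formed by placing a Gaussian peak at each detected keypoint, roughly as below (our own sketch; the sigma value and dense-grid approach are assumptions):

```python
import numpy as np

def keypoint_heatmaps(keypoints, h, w, sigma=6.0):
    """keypoints: list of 18 (x, y) tuples or None -> (18, h, w) heatmaps."""
    ys, xs = np.mgrid[0:h, 0:w]
    maps = np.zeros((len(keypoints), h, w), dtype=np.float32)
    for i, kp in enumerate(keypoints):
        if kp is None:  # keypoint not detected: leave the channel all-zero
            continue
        x, y = kp
        maps[i] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return maps
```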

Directly using S_c and p to reconstruct S, i.e., Ŝ = G_s(S_c, p), with standard image-to-image translation networks [20, 34, 15], although feasible, will lead to a unique output without diversity. We draw inspiration from variational autoencoders and further condition the generation process on a latent vector z_s, which encourages diversity through sampling during inference. As our goal is to produce various shapes of clothing items to fill in a missing region, we train a shape encoder E_s to encode shape information into z_s. Given an input shape s (the ground truth binary segmentation map of the missing fashion item obtained from S), the shape encoder outputs a mean μ_s and a standard deviation σ_s, leveraging the re-parameterization trick [71, 6] to enable a differentiable loss function, i.e., z_s = μ_s + σ_s ⊙ ε with ε ~ N(0, I).

z_s is usually forced to follow a Gaussian distribution N(0, I) during training, which enables stochastic sampling at test time when s is unknown:

L_KL = D_KL(E_s(s) || N(0, I)),    (1)

where D_KL is the KL divergence. The learned latent code z_s, together with the shape context S_c and person representation p, is input to the generator to produce a complete shape map with missing regions filled: Ŝ = G_s(S_c, p, z_s). Further, the shape generator is optimized by minimizing the cross-entropy segmentation loss between Ŝ and S:

L_seg = − Σ_{k=1..K} Σ_i S_k(i) log Ŝ_k(i),    (2)

where K is the number of channels of the segmentation map and i indexes spatial locations. The shape encoder E_s and the generator G_s can be optimized jointly by minimizing:

L = L_seg + λ_KL · L_KL,    (3)

where λ_KL is a weight balancing the two loss terms. At test time, one can directly sample z_s from N(0, I) to generate Ŝ, enabling the reconstruction of a diverse set of results with G_s.
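A minimal NumPy sketch of the re-parameterization trick and the KL terms involved, covering both the standard-normal prior and the general diagonal-Gaussian case used later for the compatibility prior of Eqn. 4 (function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    """z = mu + sigma * eps, with eps ~ N(0, I), so gradients can flow."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def kl_to_standard_normal(mu, logvar):
    """Special case of the above with a N(0, I) prior, as in Eqn. 1."""
    return kl_diag_gaussians(mu, logvar,
                             np.zeros_like(mu), np.zeros_like(logvar))
```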

Although the shape generator is now able to synthesize different garment shapes, it fails to consider visual compatibility relationships. Consequently, many generated results are visually unappealing (as will be shown in the experiments). To mitigate this problem, we constrain the sampling process by modeling the visual compatibility relationships using existing fashion garments in the current image, which we refer to as contextual garments, denoted as C. To this end, we introduce a shape compatibility encoder E_s^c, with the goal of learning the correlations between the shapes of synthesized garments and contextual garments. The intuition is that if a fashion garment is compatible with the contextual garments, its shape can be largely determined by looking at the context. For instance, given a men's tank top among the contextual garments, the synthesized shape of the missing garment is more likely to be a pair of men's shorts than a skirt. The idea is conceptually similar to two popular models in the text domain, continuous bag-of-words (CBOW) and skip-gram [37], which learn to predict the representation of a word given the representations of the contextual words around it, and vice versa.

We first extract the image segments of contextual garments using the segmentation map S. Then, we form the visual representation C of the contextual garments by concatenating these image segments from hat to shoes. The compatibility encoder E_s^c then projects C into a compatibility latent vector z_c, i.e., z_c ~ E_s^c(C). In order to use z_c as a prior for generating z_s, we posit that a target garment and its contextual garments should share the same latent space. This is similar to the shared latent space assumption applied in unpaired image-to-image translation [31, 18, 28]. Thus, the KL divergence in Eqn. 1 can be modified as

L_KL^c = D_KL(E_s(s) || E_s^c(C)),    (4)
which penalizes the distribution of z_s encoded by E_s for being too far from the compatibility distribution encoded by E_s^c. The shared latent space of E_s and E_s^c can also be considered a compatibility space, which is similar in spirit to modeling pairwise compatibility using metric learning [57, 56]. Instead of reducing the distance between two compatible samples, we minimize the difference between two distributions, as we need randomness for generating diverse multi-modal results. By optimizing Eqn. 4, the generation of z_s not only becomes aware of the inherent compatibility information embedded in contextual garments, but also enables compatibility-aware sampling during inference when s is not available: we can simply sample z_s from E_s^c(C) and compute the final synthesized shape map as Ŝ = G_s(S_c, p, z_s). Consequently, the generated clothing layouts should be visually compatible with the existing contextual garments. The final objective function of our shape generation network is

L_shape = L_seg + λ_KL · L_KL^c.    (5)
3.2 Appearance Generation Network

Figure 4: Our appearance generation network.

As illustrated in Figure 2, the generated compatible shapes of the missing item are input into the appearance generation network to generate compatible appearances. The network has an almost identical structure to our shape generation network, consisting of an encoder-decoder generator G_a for reconstruction, an appearance encoder E_a that encodes the desired appearance into a latent vector z_a, and an appearance compatibility encoder E_a^c that projects the appearances of the contextual garments into a latent appearance compatibility vector. Nevertheless, the appearance generation network differs from the one modeling shapes in the following aspects. First, as shown in Figure 4, the appearance encoder E_a takes the appearance a of the missing clothing item as input instead of its shape, producing a latent appearance code z_a that is input to G_a for appearance reconstruction.

In addition, unlike G_s, which reconstructs a segmentation map by minimizing a cross-entropy loss, the appearance generator G_a focuses on reconstructing the original image I in RGB space, given the appearance context I_c, in which the fashion item of interest is missing. Further, the person representation input to G_a consists of the ground truth segmentation map S (at test time, we use the segmentation map Ŝ generated by our first stage, as S is not available), as well as a face and hair RGB segment. The segmentation map contains richer information about the person's configuration and body shape than keypoints alone, and the face and hair image constrains the network to preserve the person's identity in the reconstructed image Î. To reconstruct I with G_a, we adopt the losses widely used in style transfer [22, 60, 61], which contain a perceptual loss that minimizes the distance between the corresponding feature maps of Î and I in a perceptual neural network, and a style loss that matches their style information:

L_r = Σ_l λ_l ||φ_l(Î) − φ_l(I)||_1 + Σ_l γ_l ||G(φ_l(Î)) − G(φ_l(I))||_1,    (6)

where φ_l(I) is the l-th feature map of image I in a VGG-19 [52] network pre-trained on ImageNet. For l ≥ 1, we use the conv1_2, conv2_2, conv3_2, conv4_2, and conv5_2 layers of the network, while φ_0(I) = I. In the second term, G(·) is the Gram matrix [8], which calculates the inner products between vectorized feature maps:

G(φ_l(I)) = φ̃_l(I) φ̃_l(I)^T,    (7)

where φ̃_l(I) is φ_l(I) reshaped into a C_l × (H_l W_l) matrix, φ_l is the same as in the perceptual loss term, and C_l is its channel dimension. λ_l and γ_l in Eqn. 6 are hyper-parameters balancing the contributions of different layers and are set automatically following [15, 3]. By minimizing Eqn. 6, we encourage the reconstructed image to have similar high-level contents as well as detailed textures and patterns as the original image.
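The Gram-matrix computation for the style term could be sketched like this (our NumPy sketch; normalizing by the number of spatial positions is a common convention and an assumption here, as is the function naming):

```python
import numpy as np

def gram_matrix(feat):
    """feat: (C, H, W) feature map -> (C, C) Gram matrix of inner products
    between vectorized channel maps, normalized by H * W."""
    c = feat.shape[0]
    f = feat.reshape(c, -1)          # (C, H*W)
    return f @ f.T / f.shape[1]

def style_distance(feat_a, feat_b):
    """L1 distance between Gram matrices, as in the style term of Eqn. 6."""
    return float(np.abs(gram_matrix(feat_a) - gram_matrix(feat_b)).sum())
```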

In addition, to encourage diversity in the synthesized appearance (i.e., different textures, colors, etc.), we also leverage the appearance compatibility encoder E_a^c, which takes the contextual garments C as input to condition the generation through a KL divergence term. The objective function of our appearance generation network is:

L_app = L_r + λ_KL · D_KL(E_a(a) || E_a^c(C)).    (8)

Similar to the shape generation network, our appearance generation network, by modeling appearance compatibility, can render a diverse set of visually compatible appearances conditioned on the generated clothing segmentation map Ŝ and the latent appearance code z_a during inference: Î = G_a(I_c, Ŝ, z_a), where z_a ~ E_a^c(C).

3.3 Discussion

While sharing the exact same network architecture and inputs, the shape compatibility encoder and the appearance compatibility encoder model different aspects of compatibility; therefore, their weights are not shared. During training, we use ground truth segmentation maps as inputs to the appearance generator to reconstruct the original image. During inference, we first generate a set of diverse segmentations using the shape generation network. Then, conditioned on these generated semantic layouts, the appearance generation network renders textures onto them, resulting in compatible synthesized fashion images with both diverse shapes and diverse appearances. Some examples are presented in Figure 1. In addition to compatibly inpainting missing regions with meaningful diversity trained with reconstruction losses, our framework can also render garments onto people with different poses and shapes, as will be demonstrated in Sec 4.5.

Note that our framework does not involve adversarial training [20, 31, 34, 72] (which is hard to stabilize) or bidirectional reconstruction losses [28, 18] (which require carefully designed loss functions and selected hyper-parameters), making training easier and faster. We would expect more realistic results if adversarial training were involved, as well as more diverse synthesis if the mapping between the output and the latent code were invertible.

4 Experiments

4.1 Experimental Settings

Dataset. We conduct our experiments on the DeepFashion (In-shop Clothes Retrieval Benchmark) dataset [33], which originally consists of 52,712 person images with fashion clothes. In contrast to previous pose-guided generation approaches that use image pairs containing a person in the same clothes in two different poses for training and testing, we do not need paired data, but rather images with multiple fashion items, in order to model the compatibility among them. As a result, we filter the data and select 13,821 images that contain more than 3 fashion items for our experiments. We randomly select 12,615 images as our training data and the remaining 1,206 for testing, while ensuring that there is no overlap in fashion items between the two splits.

Network Structure. Our shape generator G_s and appearance generator G_a share a similar network structure. Both are built upon a U-Net [45] structure with 2 residual blocks [16] in each encoding and decoding layer. We use convolutions with a stride of 2 to downsample the feature maps in the encoding layers, and nearest-neighbor interpolation to upscale the feature map resolution in the decoding layers. Symmetric skip connections [45] are added between the encoder and decoder to enforce spatial correspondence between input and output. Based on the observations in [71], we set the length of all latent vectors to 8, and concatenate the latent vector to each intermediate layer in the U-Net after spatially replicating it to match that layer's spatial resolution. E_s, E_s^c, E_a, and E_a^c all have a structure similar to the U-Net encoder, except that their inputs are the garment or contextual-garment representations, and a fully-connected layer is employed at the end to output the mean μ and standard deviation σ for sampling the Gaussian latent vectors. The number of channels in each layer is identical to [20]. The detailed network structure is visualized in the supplementary material.
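The latent-vector concatenation described above (spatially replicating the latent before concatenating it with a feature map) might look like this (our own sketch, not the paper's code):

```python
import numpy as np

def concat_latent(feat, z):
    """feat: (B, C, H, W) feature map; z: (B, D) latent vector.
    Replicates z over the spatial grid and concatenates along channels,
    returning a (B, C + D, H, W) tensor."""
    b, _, h, w = feat.shape
    z_tiled = np.broadcast_to(z[:, :, None, None], (b, z.shape[1], h, w))
    return np.concatenate([feat, z_tiled], axis=1)
```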

Training Setup. Similar to recent encoder-decoder based generative networks, we use the Adam [24] optimizer with a fixed learning rate of 0.0001. We train the compatible shape generator for 20K steps and the appearance generation network for 60K steps, both with a batch size of 16. The same λ_KL is used for both the shape and appearance generators.

Figure 5: Inpainting comparison of different methods conditioned on the same input.

4.2 Compared Methods

To validate the effectiveness of our method, we compare FiNet with the following methods:

FiNet w/o two-stage. We use a one-step generator to directly reconstruct the image without the proposed two-stage framework. The one-step generator has the same network structure and loss function as our compatible appearance generator; the only difference is that it takes the pose heatmap, the face and hair segment, and the appearance context as input (i.e., merging the inputs of the two stages into one).

FiNet w/o comp. Our method without the compatibility encoders, i.e., minimizing the standard-Gaussian KL term of Eqn. 1 instead of the compatibility-constrained KL term of Eqn. 4 in both the shape and appearance generation networks.

FiNet w/o two-stage w/o comp. Our full method without two-stage training and compatibility encoder, which reduces FiNet to a one-stage conditional VAE [25].

pix2pix + noise [20]. The original image-to-image translation framework is not designed for synthesizing missing clothing, so we modify its input to match that of FiNet w/o two-stage. We add a noise vector to produce diverse results as in [71]. Due to the inpainting nature of our problem, it can also be considered a variant of a conditional context encoder [41].

BicycleGAN [71]. Because pix2pix can only generate a single output, we also compare with BicycleGAN, which can be trained on paired data and produces multimodal results. Note that we do not consider multimodal unpaired image-to-image translation methods [18, 28], since they usually produce worse results.

VUNET [7]. A variational U-Net that models the interplay of shape and appearance. We make a similar modification to the network input such that it models shape based on the same input as FiNet w/o two-stage and models appearance using the target clothing appearance a.

ClothNet [27]. We replace the SMPL [1] condition in ClothNet with our pose heatmap and reconstruct the original image. Note that ClothNet can generate diverse segmentation maps, but only outputs a single reconstructed image per segmentation map.

Compatibility Loss. Since most of the compared methods do not model the compatibility among fashion items, we also inject the contextual garments C into these frameworks and design a compatibility loss to ensure that the generated clothing matches its contextual garments. It is similar to the loss of matching-aware discriminators in text-to-image synthesis [44, 69], and can be easily injected into a generative model framework and trained end-to-end. Adding this loss gives these baselines the same compatibility information for a fair comparison.

All these methods are trained with similar setup and hyper-parameters. To further guarantee fair comparison, we modify the main network structures for all these methods to be the same as ours instead of using their original ones, which usually have worse performance than U-Net with residual blocks that we use.

Method | Compatibility | Diversity | Human | IS
Random Real Data | 1.000 | 0.676 ± 0.086 | 50.0% | 3.907 ± 0.051
pix2pix [20] | 0.628 | 0.057 ± 0.037 | 13.3% | 3.629 ± 0.046
BicycleGAN [71] | 0.548 | 0.419 ± 0.153 | 16.4% | 3.596 ± 0.038
VUNET [7] | 0.652 | 0.128 ± 0.096 | 30.7% | 3.559 ± 0.041
ClothNet [27] | 0.621 | 0.212 ± 0.127 | 15.9% | 3.573 ± 0.058
FiNet w/o two-stage w/o comp | 0.513 | 0.417 ± 0.128 | 12.8% | 3.681 ± 0.050
FiNet w/o comp | 0.528 | 0.424 ± 0.125 | 12.3% | 3.688 ± 0.030
FiNet w/o two-stage | 0.666 | 0.261 ± 0.144 | 25.6% | 3.570 ± 0.046
FiNet (Ours full) | 0.683 | 0.297 ± 0.141 | 36.6% | 3.564 ± 0.043
Table 1: Quantitative comparisons in terms of compatibility, diversity and realism (± denotes standard deviation).

4.3 Qualitative Results

In Figure 5, we show 3 generated images from each method conditioned on the same input. We can see that FiNet generates visually compatible bottoms with different shapes and appearances. Without generating the semantic layout as intermediate guidance, FiNet w/o two-stage cannot properly determine the clothing boundaries. FiNet w/o two-stage w/o comp also produces boundary artifacts, and the generated appearances do not match the contextual garments. pix2pix [20] + noise only generates results with limited diversity: it tends to learn the average shape and appearance based on the distribution of the training data. BicycleGAN [71] improves diversity, but the synthesized images are incompatible and suffer from artifacts introduced by adversarial training. We found that VUNET suffers from overfitting and only generates similar shapes. ClothNet [27] can generate diverse shapes but with similar appearances, because it also uses a pix2pix-like structure for appearance generation.

We show more results of our proposed FiNet in Figure 1, which further illustrates the effectiveness of FiNet for generating different types of garments with high compatibility and diversity. Note that FiNet is also able to generate fashion items that do not exist in the original image as shown in the last example in Figure 1.

Figure 6: Conditioning on different inputs, FiNet can achieve clothing reconstruction (left), and clothing transfer (right).

4.4 Quantitative Comparisons

We compare different methods in terms of compatibility, diversity and realism.

Compatibility. To properly evaluate the compatibility of generated images, we trained a compatibility predictor adopted from [57]. The training labels come from the same weakly-supervised compatibility assumption: if two fashion items co-occur in the same catalog image, we regard them as a positive pair; otherwise, they are considered negative [53, 14]. We fine-tune an Inception-V3 [55] pre-trained on ImageNet on the DeepFashion training data for 100K steps, with an embedding dimension of 512 and default hyper-parameters. We use the RGB clothing segments as input to the network.

Following [7, 43], which measure visual similarity using feature distances in a pretrained VGG-16 network, we measure the compatibility between a generated clothing segment and the ground truth clothing segment by their cosine similarity in the learned 512-D compatibility embedding space.
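The compatibility score is then a plain cosine similarity between the two 512-D embeddings; a minimal sketch (function name ours):

```python
import numpy as np

def compatibility_score(emb_generated, emb_ground_truth):
    """Cosine similarity between embeddings of the generated and the
    ground-truth clothing segments in the learned compatibility space."""
    num = float(np.dot(emb_generated, emb_ground_truth))
    den = float(np.linalg.norm(emb_generated) * np.linalg.norm(emb_ground_truth))
    return num / den
```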

Diversity. Besides compatibility, diversity is also a key performance metric for our task. Thus, we utilize LPIPS [70] to measure the diversity of generated images (only inpainted regions) as in [71, 28, 18]. 2,000 image pairs generated from 100 fashion images are used to calculate LPIPS.

Realism. We conduct a user study to evaluate the realism of generated images. Following [4, 34, 71], we perform a time-limited (0.5s) real-or-synthetic test with 10 human raters. The human fooling rate (chance is 50%; higher is better) indicates the realism of a method. As a popular choice, we also report the Inception Score [47] (IS) for evaluating realism.

We make the following observations based on the comparisons presented in Table 1:

(1) FiNet yields the highest compatibility score with meaningful diversity. Without the compatibility-aware KL regularization, BicycleGAN and our baseline w/o comp fail to inpaint compatible clothes. These methods learn to project all potential garments into the same distribution independent of the contextual garments, resulting in incompatible but highly diverse images. In contrast, VUNET and ClothNet give higher compatibility while sacrificing diversity. pix2pix ignores the injected noise and cannot generate diverse outputs. The quantitative performance of each method matches its qualitative behavior in Figure 5.

(2) Our method achieves the highest human fooling rate. We find that the Inception Score does not correlate well with the human perceptual study, making it unsuitable for our task: IS tends to reward image content with rich textures and large diversity. For example, colorful shoes usually look inharmonious yet receive a high IS. Similar observations have been made in [43, 15].

(3) Images with low compatibility scores usually have lower human fooling rates. This confirms that incompatible garments also look unrealistic to humans.

4.5 Clothing Reconstruction and Transfer

Trained with a reconstruction loss, FiNet can also serve as a two-stage clothing transfer framework. More specifically, given an arbitrary target garment, the shape generation network generates the target garment's shape in the missing region of the input image, and the appearance generation network then synthesizes an image filled with the target garment's appearance. This produces promising results for clothing reconstruction (when the target garment is the original missing garment) and garment transfer (when it is a different garment), as shown in Figure 6. FiNet inpaints the shape and appearance of the target garment naturally onto a person, which further demonstrates its ability to generate realistic fashion images.

5 Conclusion

We introduce FiNet, a two-stage generation network for synthesizing compatible and diverse fashion images. By decomposing generation into shape and appearance stages, FiNet can inpaint garments in a target region with diverse shapes and appearances. Moreover, we integrate a compatibility module that encodes compatibility information into the network, constraining the generated shapes and appearances to be close to the existing clothing pieces in a learned latent style space. The superior performance of FiNet suggests that it can potentially be used for compatibility-aware fashion design and new fashion item recommendation.


Larry S. Davis and Zuxuan Wu are partially supported by the Office of Naval Research under Grant N000141612713.


  • [1] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black. Keep it smpl: Automatic estimation of 3d human pose and shape from a single image. In ECCV, 2016.
  • [2] A. Brock, J. Donahue, and K. Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
  • [3] Q. Chen and V. Koltun. Photographic image synthesis with cascaded refinement networks. In ICCV, 2017.
  • [4] Y. Chen, Z. Wang, Y. Peng, Z. Zhang, G. Yu, and J. Sun. Cascaded pyramid network for multi-person pose estimation. In CVPR, 2018.
  • [5] C.-T. Chou, C.-H. Lee, K. Zhang, H. C. Lee, and W. H. Hsu. Pivtons: Pose invariant virtual try-on shoe with conditional image completion. 2018.
  • [6] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016.
  • [7] P. Esser, E. Sutter, and B. Ommer. A variational u-net for conditional appearance and shape generation. In CVPR, 2018.
  • [8] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
  • [9] A. Ghosh, V. Kulharia, V. Namboodiri, P. H. Torr, and P. K. Dokania. Multi-agent diverse generative adversarial networks. 2018.
  • [10] K. Gong, X. Liang, Y. Li, Y. Chen, M. Yang, and L. Lin. Instance-level human parsing via part grouping network. In ECCV, 2018.
  • [11] K. Gong, X. Liang, X. Shen, and L. Lin. Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing. In CVPR, 2017.
  • [12] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
  • [13] M. Günel, E. Erdem, and A. Erdem. Language guided fashion image manipulation with feature-wise transformations. 2018.
  • [14] X. Han, Z. Wu, Y.-G. Jiang, and L. S. Davis. Learning fashion compatibility with bidirectional lstms. In ACM Multimedia, 2017.
  • [15] X. Han, Z. Wu, Z. Wu, R. Yu, and L. S. Davis. Viton: An image-based virtual try-on network. In CVPR, 2018.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [17] W.-L. Hsiao and K. Grauman. Creating capsule wardrobes from fashion images. In CVPR, 2018.
  • [18] X. Huang, M.-Y. Liu, S. Belongie, and J. Kautz. Multimodal unsupervised image-to-image translation. In ECCV, 2018.
  • [19] S. Iizuka, E. Simo-Serra, and H. Ishikawa. Globally and locally consistent image completion. ACM TOG, 2017.
  • [20] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
  • [21] N. Jetchev and U. Bergmann. The conditional analogy gan: Swapping fashion articles on people images. In ICCVW, 2017.
  • [22] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
  • [23] W.-C. Kang, C. Fang, Z. Wang, and J. McAuley. Visually-aware fashion recommendation and design with generative image models. In ICDM, 2017.
  • [24] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [25] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  • [26] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. In ICML, 2016.
  • [27] C. Lassner, G. Pons-Moll, and P. V. Gehler. A generative model of people in clothing. In ICCV, 2017.
  • [28] H.-Y. Lee, H.-Y. Tseng, J.-B. Huang, M. Singh, and M.-H. Yang. Diverse image-to-image translation via disentangled representations. In ECCV, 2018.
  • [29] Y. Li, L. Cao, J. Zhu, and J. Luo. Mining fashion outfit composition using an end-to-end deep learning approach on set data. IEEE TMM, 2016.
  • [30] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
  • [31] M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. In NIPS, 2017.
  • [32] S. Liu, J. Feng, Z. Song, T. Zhang, H. Lu, C. Xu, and S. Yan. Hi, magic closet, tell me what to wear! In ACM Multimedia, 2012.
  • [33] Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In CVPR, 2016.
  • [34] L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool. Pose guided person image generation. In NIPS, 2017.
  • [35] L. Ma, Q. Sun, S. Georgoulis, L. Van Gool, B. Schiele, and M. Fritz. Disentangled person image generation. In CVPR, 2018.
  • [36] J. McAuley, C. Targett, Q. Shi, and A. Van Den Hengel. Image-based recommendations on styles and substitutes. In ACM SIGIR, 2015.
  • [37] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
  • [38] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
  • [39] N. Neverova, R. A. Guler, and I. Kokkinos. Dense pose transfer. In ECCV, 2018.
  • [40] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier gans. In ICML, 2017.
  • [41] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
  • [42] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie. Objects in context. In ICCV. IEEE, 2007.
  • [43] A. Raj, P. Sangkloy, H. Chang, J. Lu, D. Ceylan, and J. Hays. Swapnet: Garment transfer in single view images. In ECCV, 2018.
  • [44] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In ICML, 2016.
  • [45] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
  • [46] N. Rostamzadeh, S. Hosseini, T. Boquet, W. Stokowiec, Y. Zhang, C. Jauvin, and C. Pal. Fashion-gen: The generative fashion dataset and challenge. arXiv preprint arXiv:1806.08317, 2018.
  • [47] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In NIPS, 2016.
  • [48] O. Sbai, M. Elhoseiny, A. Bordes, Y. LeCun, and C. Couprie. Design: Design inspiration from generative networks. arXiv preprint arXiv:1804.00921, 2018.
  • [49] W. Shen and R. Liu. Learning residual images for face attribute manipulation. CVPR, 2017.
  • [50] A. Siarohin, E. Sangineto, S. Lathuilière, and N. Sebe. Deformable gans for pose-based human image generation. In CVPR, 2018.
  • [51] E. Simo-Serra, S. Fidler, F. Moreno-Noguer, and R. Urtasun. Neuroaesthetics in fashion: Modeling the perception of fashionability. In CVPR, 2015.
  • [52] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  • [53] X. Song, F. Feng, X. Han, X. Yang, W. Liu, and L. Nie. Neural compatibility modeling with attentive knowledge distillation. In SIGIR, 2018.
  • [54] X. Song, F. Feng, J. Liu, Z. Li, L. Nie, and J. Ma. Neurostylist: Neural compatibility modeling for clothing matching. In ACM MM, 2017.
  • [55] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
  • [56] M. I. Vasileva, B. A. Plummer, K. Dusad, S. Rajpal, R. Kumar, and D. Forsyth. Learning type-aware embeddings for fashion compatibility. In ECCV, 2018.
  • [57] A. Veit, B. Kovacs, S. Bell, J. McAuley, K. Bala, and S. Belongie. Learning visual clothing style with heterogeneous dyadic co-occurrences. In CVPR, 2015.
  • [58] B. Wang, H. Zheng, X. Liang, Y. Chen, L. Lin, and M. Yang. Toward characteristic-preserving image-based virtual try-on network. In ECCV, 2018.
  • [59] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In CVPR, 2018.
  • [60] Z. Wu, X. Han, Y.-L. Lin, M. G. Uzunbas, T. Goldstein, S. N. Lim, and L. S. Davis. Dcan: Dual channel-wise alignment networks for unsupervised scene adaptation. In ECCV, 2018.
  • [61] W. Xian, P. Sangkloy, J. Lu, C. Fang, F. Yu, and J. Hays. Texturegan: Controlling deep image synthesis with texture patches. In CVPR, 2018.
  • [62] T. Xu, P. Zhang, Q. Huang, H. Zhang, Z. Gan, X. Huang, and X. He. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In CVPR, 2018.
  • [63] X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2image: Conditional image generation from visual attributes. In ECCV, 2016.
  • [64] R. A. Yeh, C. Chen, T.-Y. Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do. Semantic image inpainting with deep generative models. In CVPR, 2017.
  • [65] G. Yildirim, C. Seward, and U. Bergmann. Disentangling multiple conditional inputs in gans. arXiv preprint arXiv:1806.07819, 2018.
  • [66] D. Yoo, N. Kim, S. Park, A. S. Paek, and I. S. Kweon. Pixel-level domain transfer. In ECCV, 2016.
  • [67] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang. Generative image inpainting with contextual attention. In CVPR, 2018.
  • [68] M. Zanfir, A.-I. Popa, A. Zanfir, and C. Sminchisescu. Human appearance transfer. 2018.
  • [69] H. Zhang, T. Xu, H. Li, S. Zhang, X. Huang, X. Wang, and D. Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017.
  • [70] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
  • [71] J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman. Toward multimodal image-to-image translation. In NIPS, 2017.
  • [72] S. Zhu, S. Fidler, R. Urtasun, D. Lin, and C. L. Chen. Be your own prada: Fashion synthesis with structural coherence. In ICCV, 2017.

Appendix A Network Details

We illustrate the detailed network structures of our shape generation network and appearance generation network in Figure 7 and Figure 8, respectively. There are some details to be noted:

Our residual block module contains two residual blocks [16], with one convolution at the beginning to make the number of input and output channels consistent after concatenation with the latent vector. All other operations in our network are standard convolutions.
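The module described above can be sketched as follows; the 1x1 channel-mixing convolution and the layer widths are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def channel_conv(x, w):
    """Channel-mixing (1x1) convolution -- the kernel size here is an
    illustrative assumption. x: (C_in, H, W), w: (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

def residual_block(x, w1, w2):
    """One residual block [16]: two convolutions plus a skip connection."""
    h = np.maximum(channel_conv(x, w1), 0.0)  # ReLU, as in the network
    return x + channel_conv(h, w2)

def block_module(x, z, w_match, w1a, w2a, w1b, w2b):
    """Tile the latent vector spatially, concatenate it with the features,
    restore the channel count with one convolution, then apply two
    residual blocks."""
    c, h, w = x.shape
    z_tiled = np.broadcast_to(z[:, None, None], (z.shape[0], h, w))
    y = channel_conv(np.concatenate([x, z_tiled], axis=0), w_match)
    return residual_block(residual_block(y, w1a, w2a), w1b, w2b)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 5, 5))      # toy feature map
z = rng.standard_normal(4)              # toy latent vector
w_match = rng.standard_normal((8, 12))  # 8 + 4 -> 8 channels
ws = [rng.standard_normal((8, 8)) for _ in range(4)]
print(block_module(x, z, w_match, *ws).shape)  # -> (8, 5, 5)
```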

We use the softmax function at the end of our shape generation network to generate segmentation maps, and the tanh activation is applied when our appearance generation network outputs synthesized images. All other activations are ReLU. No batch normalization is used in our network.

To create layouts/images with a missing fashion item (i.e., the shape context and appearance context), we need to mask out the pixels of a fashion item, determined by the plausible region of its garment category. Given the human parsing results generated by [10], for a top item we mask out the bounding box covering the regions of the top and the upper-body skin; for a bottom item, the plausible region contains the bottom and the lower-body skin; for hats and shoes, we use the bounding box of the corresponding fashion item to decide which region to mask out. The bounding boxes are slightly enlarged to ensure full coverage of these regions.
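A minimal sketch of this masking step, assuming hypothetical parsing label ids (the real label set comes from [10]) and an illustrative 10% enlargement factor:

```python
import numpy as np

# Hypothetical parsing label ids; the real label set comes from the human
# parsing network of [10].
TOP, UPPER_SKIN = 1, 2

def plausible_mask(parsing, labels, enlarge=0.1):
    """Mask the bounding box covering the parsing regions of `labels`
    (e.g. top + upper-body skin for a top item), slightly enlarged; the
    10% enlargement factor is an illustrative assumption."""
    ys, xs = np.where(np.isin(parsing, labels))
    h, w = parsing.shape
    dy, dx = int(enlarge * h), int(enlarge * w)
    y0, y1 = max(ys.min() - dy, 0), min(ys.max() + dy + 1, h)
    x0, x1 = max(xs.min() - dx, 0), min(xs.max() + dx + 1, w)
    mask = np.zeros((h, w), dtype=bool)
    mask[y0:y1, x0:x1] = True
    return mask

parsing = np.zeros((10, 10), dtype=int)
parsing[3:6, 4:7] = TOP                       # a toy "top" region
mask = plausible_mask(parsing, [TOP, UPPER_SKIN])
print(mask.sum())  # -> 25 (a 5x5 enlarged box)
```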

For both shape and appearance, the inpainted region is first resized to a fixed resolution, and the reconstruction losses are computed only over this resized inpainted region. Finally, we paste this region back into the input image to obtain the final result.

For constructing the contextual garments, we first utilize the human parsing results generated by [10] to extract an image segment for each garment in its corresponding plausible region. We then resize these extracted image segments to a fixed resolution and concatenate them in the order hat, top, bottom, shoes, with the segment of the target garment category set to a constant value (the target category in Figures 7 and 8). This not only encodes the information of all contextual garments but also tells the network which category is missing. The result is the contextual garment representation.
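A sketch of this construction; the resize resolution, blank value, and dictionary interface are illustrative assumptions rather than the paper's actual settings:

```python
import numpy as np

CATEGORIES = ('hat', 'top', 'bottom', 'shoes')

def contextual_representation(segments, target, size=32, blank=0.0):
    """Concatenate the resized RGB garment segments in a fixed category
    order, replacing the target category's segment with a constant so the
    network knows which garment is missing. `size` and `blank` are
    placeholder settings, not the paper's actual values."""
    stacked = []
    for cat in CATEGORIES:
        if cat == target:
            stacked.append(np.full((3, size, size), blank))
        else:
            stacked.append(segments[cat])  # assumed pre-resized to (3, size, size)
    return np.concatenate(stacked, axis=0)  # (12, size, size)

segments = {c: np.ones((3, 32, 32)) for c in ('hat', 'bottom', 'shoes')}
rep = contextual_representation(segments, target='top')
print(rep.shape)  # -> (12, 32, 32)
```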

Figure 7: Network structure of our shape generation network.
Figure 8: Network structure of our appearance generation network.

Appendix B Latent Space Visualization

Shape latent space. In Figure 9, we visualize the generated segmentation maps of an image when varying different dimensions of the shape latent vector. In the left example, the shape generation network generates different top garment layouts as we change the values of three selected dimensions. Different dimensions control different characteristics of the generated layouts: one mainly controls the sleeve length (long sleeve → middle sleeve → short sleeve → sleeveless); another determines the length of the clothing as well as the sleeves; and a third determines whether the top opens in the middle. In the right example, in which we generate a bottom garment, one dimension is related to how the bottom interacts with the top; the length of the pants changes as we vary another; and a third correlates with the exposure of the knee. Note that, for different garment categories, the same dimension of the latent vector controls different characteristics: for example, a given dimension may relate to the length of bottoms but not the length of tops.

Figure 9: Generated layouts by our shape generation network when we change the values in different dimensions of the learned latent vector.

Further, in Figures 10 and 11, we show the compatibility space for two images by projecting the generated layouts onto a 2D plane whose x and y axes correspond to the sleeve-length and clothing-length dimensions of the shape latent vector used to generate these layouts. We also mark the mean and standard deviation of the latent distribution in each dimension. Consequently, layouts corresponding to latent vectors that are far from the mean are unlikely to be generated.
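The claim that codes far from the mean are effectively never sampled can be illustrated numerically; the per-dimension statistics below are toy values, not the learned ones:

```python
import numpy as np

# Toy per-dimension statistics standing in for the learned mean and standard
# deviation of the shape latent distribution.
rng = np.random.default_rng(0)
mu, sigma = np.zeros(8), np.ones(8)
z = mu + sigma * rng.standard_normal((10000, 8))

# Nearly all sampled codes fall within three standard deviations of the mean,
# so layouts whose codes lie outside that region are effectively never drawn.
within = np.all(np.abs(z - mu) <= 3 * sigma, axis=1).mean()
print(within > 0.95)  # -> True
```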

In Figure 10, as we want to generate a compatible top for a man wearing a pair of long pants, the generated top layouts usually have long or short sleeves; sleeveless tops (images in the lower right corner) are less compatible and realistic, and are thus unlikely to be generated (their latent vectors fall far from the mean). In contrast, when we generate layouts for a girl wearing a pair of shorts, as shown in Figure 11, the generated layouts tend to have shorter sleeves and clothing lengths because these are more compatible with shorts. Comparing Figure 10 with Figure 11 shows that our shape generation network effectively models compatibility and can generate compatible garment shapes according to contextual information.

Figure 10: Shape compatibility space visualization. The x and y axes correspond to the sleeve-length and clothing-length dimensions of the shape latent vector, respectively.
Figure 11: Shape compatibility space visualization. The x and y axes correspond to the sleeve-length and clothing-length dimensions of the shape latent vector, respectively.

Appearance latent space. As shown in Figure 12, we present a similar visualization for the appearance latent vector. Note that for visualization purposes, we use the ground-truth segmentation map to generate appearance for simplicity. Unlike shape, the same dimension of the appearance latent vector correlates with the same appearance characteristic across garment categories: three dimensions correspond to the brightness, color, and texture of the generated images, respectively. This also indicates the importance of learning a compatibility space; otherwise, if we project all appearances into the same latent space, as FiNet w/o comp or BicycleGAN [71] do without conditioning on the contextual garments, incompatible and visually unappealing shoes (shoes of all different colors, as on the right side of Figure 12) may be generated.

Figure 12: Generated images by our appearance generation network when we change the values in different dimensions of the learned latent vector.

We further plot the appearance compatibility space for three exemplar images in Figures 13, 14, and 15 to better understand our appearance generation network. The generated appearances are arranged according to the brightness and color dimensions of the appearance latent vector. In particular, in Figure 13, given the gray top, our network considers dark bottoms of black or blue as compatible, and the ground-truth pants indeed exhibit these visual characteristics. In Figure 14, since the white graphic T-shirt is more compatible with lighter bottoms, our generation network creates such shorts accordingly. Unlike these two cases, in Figure 15 there is no strong constraint on the color of a compatible dress, so the appearance generation network outputs dresses with various colors. The results in these figures again validate that injecting compatibility information into our network ensures diverse and compatible image inpainting results.

Figure 13: Appearance compatibility space visualization. The x and y axes correspond to the color and brightness dimensions of the appearance latent vector, respectively.
Figure 14: Appearance compatibility space visualization. The x and y axes correspond to the color and brightness dimensions of the appearance latent vector, respectively.
Figure 15: Appearance compatibility space visualization. The x and y axes correspond to the color and brightness dimensions of the appearance latent vector, respectively.

Appendix C Clothing Reconstruction and Transfer

Although clothing reconstruction and transfer are not our main contribution, we show more transfer results in Figures 16 and 17 to demonstrate that our method, by reconstructing a target garment and filling it into the missing region of an input image, can transfer the target garment naturally to the input image. Note that FiNet transfers shape and appearance by inpainting a specific clothing item, which differs from most existing approaches that generate the full person as a whole [34, 7, 15]. This potentially provides a new solution for applications like virtual try-on [15] and generating people in diverse clothes [27].

Figure 16: Clothing transfer results of tops. Each row corresponds to an input image whose top garment is transferred from different target tops. The diagonal images are reconstruction results, since the input and target images are the same. FiNet can naturally render the shape and appearance of the target garment onto other people with various poses and body shapes.
Figure 17: Clothing transfer results of bottoms. Each row corresponds to an input image whose bottom garment is transferred from different target bottoms. The diagonal images are reconstruction results, since the input and target images are the same. FiNet can naturally render the shape and appearance of the target garment onto other people with various poses and body shapes.