The fashion domain is fast becoming a confluence of research ideas for computer vision. New problems involving key concepts of computer vision are emerging in tandem with the fashion industry's rapid evolution towards an online, social, and personalized fulfillment model. Recent developments in this field, namely style models (Hsiao and Grauman, 2017; Kiapour et al., 2014; Lee et al., 2017; Matzen et al., 2017; Simo-Serra and Ishikawa, 2016), trend forecasting (Al-Halah et al., 2017), interactive search (Kovashka et al., 2012; Zhao et al., 2017), and recommendation (Hu et al., 2015; Lin et al., 2018; Liu et al., 2012; Veit et al., 2015), all require visual understanding of fashion data with rich detail and subtle context. Research in this area has the promise to directly influence what people buy, how they shop, and how the fashion industry changes its paradigms and grows along with these research directions.
Fashion recommendation plays an increasingly important role in the online retail market. The purpose of fashion recommendation is to promote people's interest and participation in online shopping by recommending fashionable outfits they may be interested in. For now, information retrieval (IR) based recommendation dominates the fashion market, where similar users buying similar items together is considered the strongest signal for recommending an item. Early studies on fashion recommendation are based on small but expert-annotated datasets (Iwata et al., 2011; Liu et al., 2012), which cannot directly support the development of complex models that need large amounts of training data (e.g., deep learning (DL) based models). One prominent push towards the creation of such large, web scale data is the proliferation of fashion-oriented online communities, e.g., Polyvore (http://www.polyvore.com/) and Chictopia (http://www.chictopia.com/), where people can not only consume recent fashion trends but also share and comment on outfit and accessory compositions. In addition to a large number of outfit compositions, such crowd-sourced data also contains valuable information (e.g., user comments) for building more accurate and intelligent recommender systems.
In this paper, we address the task of complementary outfit recommendation based on a conditional representation of the clothing items. Given a top (i.e., an upper garment), we need to recommend a selection of bottoms (e.g., trousers). We move away from the more commonly attempted IR based solutions and instead draw samples from the conditional distribution of the lower garment given the upper garment. The samples are trained to look realistic, and can then be used to fetch purchasable items from any image search / e-commerce platform. We perform the conditional distribution estimation and sampling with a conditional generative adversarial network (GAN) with realistic generation constraints. The proposed GAN is built by incorporating multiple enhancements over the original cGAN method of Mirza and Osindero (Mirza and Osindero, 2014). The GAN generates new samples from the learned conditional distribution while maintaining strong correspondence with the underlying patterns, thereby making it easier to explain why the top and the bottom match.
2. Related Work
To the best of our knowledge, no previous work has proposed a similar model to generate fashion items at scale with minimal manual intervention. We briefly survey related work on fashion recommendation and on explainable recommendation, respectively, and then move on to defining the integral parts of the work proposed in this paper.
2.1. Fashion recommendation
Given a photograph of a fashion item (e.g., a top), a fashion recommender system attempts to recommend a photograph of other fashion items (e.g., bottoms). This simple definition does not specify whether the photograph contains only the apparel or a model wearing the apparel; the second scenario is more difficult than the first. We tackle the second scenario, while many of the large scale systems tackle the first. Iwata et al. (Iwata et al., 2011) propose a probabilistic topic model to recommend tops for bottoms by learning information about coordinates from visual features in each fashion item. Liu et al. (Liu et al., 2012) study both outfit and item recommendation problems. They propose a latent Support Vector Machine (lSVM) based model for occasion-oriented fashion recommendation, i.e., given a user-input occasion, suggesting the most suitable clothing, or recommending items to pair with the reference clothing. Jagadeesh et al. (Jagadeesh et al., 2014) propose two classes of fashion recommenders, namely deterministic and stochastic, while mainly focusing on color modeling for fashion recommendation.
Several recent publications have moved away from smaller manually annotated datasets and resorted to other sources where rich data can be harvested automatically. One such effort in the personalized whole-outfit recommendation space is by Hu et al. (Hu et al., 2015), who propose a functional tensor factorization method to model interactions between users and fashion items over a dataset collected from Polyvore. McAuley et al. (McAuley et al., 2015) employ a general framework to model human visual preference for pairs of objects in the Amazon co-purchase dataset. Their primary idea is to extract visual features using deep convolutional neural nets (CNNs) and then introduce a similarity metric to measure visual relationships. Similarly, He and McAuley (He and McAuley, 2016) introduce a scalable matrix factorization approach that incorporates visual signals into predictors of people's opinions. To take contextual information (such as titles and categories) into consideration, Li et al. (Li et al., 2017) classify a given outfit as popular or non-popular through a multi-modal and multi-instance deep learning system. To aggregate multi-modal data of fashion items and contextual information, Song et al. (Song et al., 2017) first employ an auto-encoder to encode the latent compatibilities between the two spaces, and then employ Bayesian personalized ranking to exploit pairwise preferences between tops and bottoms. Han et al. (Han et al., 2017) propose to jointly learn a visual-semantic embedding and the compatibility relationships among fashion items in an end-to-end manner; they train a bidirectional LSTM model to sequentially predict the next item conditioned on previous ones. Lin et al. (Lin et al., 2018) propose a neural fashion recommender (NFR) model that simultaneously provides fashion recommendations and generates abstractive comments.
The problem we want to solve in this paper is illustrated in Fig. 1. We posit that it is easy to find matching pieces from a database when the user has some idea what she is looking for; often, however, knowing what one is looking for is the difficult part. A conditional GAN can be trained to fill in this missing piece by generating possible clothing choices, which can subsequently be used for searching over an e-commerce or fashion search platform. The proposed model learns to sample from the conditional distribution of the bottom given the top (or the reverse) to generate plausible samples from this distribution. These samples are then used as search queries to match the user's search intention.
In this work we present an end to end system to generate fashion recommendations for bottom apparels given a particular top clothing item. More specifically:
- We propose a novel perceptual loss function using a simple discrete cosine transform (DCT), which is extremely fast and captures most of the high resolution details needed for image reconstruction.
- We propose a novel discriminator cost function called randomized label flipping, which is similar in spirit to the lens network proposed in (Sajjadi et al., 2018), but uses a single parameter and requires no additional training data.
- We propose a scalable model for fashion recommendation with data gathered from the web. We use existing web based tools for data cleaning, which makes it extremely simple for other researchers to reproduce our work.
3. GAN Model
Conditional GANs were introduced by Mirza and Osindero (Mirza and Osindero, 2014), who feed the data on which they want to condition the generation process to both the generator and the discriminator. The basic GAN proposed by Goodfellow et al. (Goodfellow et al., 2014) can be extended to a conditional model if both the generator and discriminator are conditioned on some extra information y. The extra information y could be any kind of auxiliary information, such as class labels or data from other modalities. We can perform the conditioning by feeding y into both the discriminator and the generator as an additional input layer. In the generator, the prior input noise z (drawn from p_z(z)) and y are combined in a joint hidden representation, and the adversarial training framework allows considerable flexibility in how this hidden representation is composed. In the discriminator, x and y are presented as inputs to a discriminative function. The objective function of the two-player minimax game is shown in Eq. 3:

min_G max_D V(D, G) = E_{x ~ p_data(x)}[log D(x | y)] + E_{z ~ p_z(z)}[log(1 − D(G(z | y)))]
In our work the fashion images, i.e., images in which a human model is visible with both top and bottom apparel, are segmented to produce the two inputs to the system, x and y. Since x represents the actual image to be generated, it is encoded as a fixed-size RGB image. The condition input y is encoded by passing the top image through a deep network, ResNet-50 in our case, to generate a 1000-dimensional vector. The noise dimension is a hyperparameter in our model. Our goal for the generator is to train a function G that estimates, for a given conditional image feature y as input, its corresponding fashion image counterpart as output. To achieve this, we train a generator network as a feed-forward CNN G_{θ_G}, parameterized by θ_G. Here θ_G denotes the weights and biases of an L-layer deep network and is obtained by optimizing a conditional fashion image generation specific loss function L_G. For training pairs (x_n, y_n), n = 1, …, N, we solve

θ̂_G = argmin_{θ_G} (1/N) Σ_{n=1}^{N} L_G(G_{θ_G}(y_n, z), x_n)
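As a concrete illustration of the input construction, the following sketch assembles the generator input by concatenating sampled noise with a conditioning feature vector. The 100-dimensional noise is an illustrative choice, since the text only states that the noise dimension is a hyperparameter.

```python
import numpy as np

def make_generator_input(top_feature, noise_dim=100, rng=None):
    """Concatenate sampled noise z with the conditioning feature y.

    top_feature: 1-D feature vector of the top garment (e.g. ResNet-50).
    noise_dim:   illustrative value; the text only says it is a
                 hyper-parameter of the model.
    """
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(noise_dim)           # prior noise z
    return np.concatenate([z, top_feature])      # joint input [z; y] to G

y = np.random.rand(1000)                         # stand-in ResNet-50 feature
g_in = make_generator_input(y)
print(g_in.shape)                                # (1100,)
```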
In this work we will specifically design a joint loss function as a weighted combination of several loss components that model distinct desirable characteristics of the recovered fashion image. The individual loss functions are described in more detail in the following sections.
3.1. Generator Loss Function
The generator loss function can be defined as a weighted combination of the individual components:

L_G = w_MSE · L_MSE + w_perc · L_DCT + w_adv · L_adv

where the weights w are hyper-parameters balancing the MSE, perceptual, and adversarial terms described below.
3.1.1. MSE Loss

The pixel-wise MSE loss between the target image x and the generated image G(y, z) is

L_MSE = (1 / WHC) Σ (x − G(y, z))²

where the sum runs over all W × H × C pixel values.
3.1.2. Perceptual Loss
Ledig et al. (Ledig et al., 2017) propose a perceptual loss in which the ground truth image and its reconstruction are passed through a static VGG-19 network (Simonyan and Zisserman, 2014), and the difference between the feature maps of the pair of images is computed at different layers. We claim that this method essentially computes differences between the filter responses learned by the network. Since these responses are obtained from a static network, we replace it by a linear set of filters encoded by the two-dimensional discrete cosine transform (DCT) of the images. The DCT has been used for JPEG compression for a long time and has been shown to be efficient in preserving the high frequency components of images. The DCT based perceptual loss essentially guarantees that the reconstructed image and the original input match each other under several levels of smoothing and down-sampling operations. The simplified DCT based loss function can be written as

L_DCT = || D x Dᵀ − D G(y, z) Dᵀ ||²_F

where D is the 2D DCT matrix. The DCT is applied channel-wise to the zero-padded images in our implementation.
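A minimal NumPy sketch of the DCT based perceptual loss described above. The DCT matrix D is built explicitly; the zero-padding and any coefficient weighting used in the actual implementation are not specified here, so this sketch uses the plain squared difference of the channel-wise DCT coefficients.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n (the '2D DCT matrix' D)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    d = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * m + 1) / (2 * n))
    d[0, :] = np.sqrt(1.0 / n)
    return d

def dct_perceptual_loss(target, generated):
    """Squared difference of 2-D DCT coefficients, averaged over channels.

    target, generated: H x W x C image arrays. The 2-D DCT of a channel X
    is computed as D_h @ X @ D_w.T. Zero-padding from the paper is omitted
    in this sketch.
    """
    h, w = target.shape[:2]
    dh, dw = dct_matrix(h), dct_matrix(w)
    loss = 0.0
    for c in range(target.shape[-1]):
        ct = dh @ target[..., c] @ dw.T      # DCT coefficients of target
        cg = dh @ generated[..., c] @ dw.T   # DCT coefficients of generation
        loss += np.sum((ct - cg) ** 2)
    return loss / target.shape[-1]
```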
3.1.3. Adversarial Loss
In addition to the content losses described so far, we also add the adversarial component of our GAN to the perceptual loss. This encourages our network to favor solutions that reside on the manifold of natural images, by trying to fool the discriminator network. The adversarial loss L_adv is defined based on the probabilities of the discriminator D over all training samples as:

L_adv = Σ_n −log D(G(y_n, z))

Here, D(G(y, z)) is the probability that the reconstructed image is a valid reconstruction of the fashion element. For better gradient behavior we minimize −log D(G(y, z)) instead of log(1 − D(G(y, z))) (Goodfellow et al., 2014; Ledig et al., 2017).
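The components of Section 3.1 can be combined into the joint generator objective. In this sketch the weights are illustrative hyperparameters, not values from the paper, and the MSE and perceptual terms are taken as precomputed scalars.

```python
import numpy as np

def adversarial_loss(d_prob):
    """-log D(G(y, z)), minimized instead of log(1 - D(G(y, z)))
    for better gradient behavior."""
    return -np.log(np.clip(d_prob, 1e-12, 1.0))

def generator_loss(l_mse, l_perc, d_prob,
                   w_mse=1.0, w_perc=1.0, w_adv=1e-3):
    """Weighted combination of the MSE, perceptual, and adversarial terms.

    l_mse, l_perc: precomputed scalar losses; the weights are illustrative.
    """
    return w_mse * l_mse + w_perc * l_perc + w_adv * adversarial_loss(d_prob)
```

A lower discriminator probability yields a larger adversarial term, so the combined loss pushes the generator towards images the discriminator accepts.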
3.2. Discriminator Loss Function
In recent work by Sajjadi et al. (Sajjadi et al., 2018), the authors propose a lens network which shifts the distribution of the true images towards the generated images, at least at the beginning of training, thereby leading to smoother training behavior for GANs. As training reaches higher epochs, the effect of the lens network is slowly removed to let the network learn the true distribution of the real images.
Based on a similar idea, we introduce a novel randomized label flipping concept, built on the following set of equations. Let an intermediate image be generated as a random convex combination of a real image x and a generated image G(y, z),

x̂ = α x + (1 − α) G(y, z), α ~ U(0, 1),

and let the label for the intermediate image be generated as

label(x̂) = real if α ≥ τ, fake otherwise,

where τ is the threshold for the process and is treated as a hyper-parameter. We call this the Randomized Label Flip (RLF) based training.
Lower values of τ force the network to treat modified fake images as real in the discriminator, and vice-versa. At lower values the generator favors all the modes present in the data, due to less weight being given to the fake images. As the threshold is raised, the generator starts latching onto single modes and starts matching the real data closely. The thresholding scheme, along with the random sampling, confuses the discriminator in two ways. The threshold controls, on average, what fraction of the blended images of the form shown in Eq. 7 are labeled as real or fake. The randomness in sampling ensures that even when a blend falls on one side of the threshold, there is still some randomness left to confuse the discriminator. Our experiments show that starting with low values of the threshold leads to stable generator training; during the later epochs the value is raised high enough that this behavior diminishes.
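A hedged sketch of RLF label generation for one discriminator batch. The exact blending form is not spelled out in the text, so a convex combination with a uniform mixing coefficient is assumed here, with the label decided by comparing the coefficient to the threshold τ (tau).

```python
import numpy as np

def rlf_batch(real, fake, tau, rng=None):
    """Randomized Label Flip: blend each real image with its generated
    counterpart and label the blend by thresholding the mixing coefficient.

    real, fake: (B, H, W, C) image batches.
    tau:        threshold hyper-parameter; a low tau labels most blends
                as real, pushing the discriminator to accept fake-ish images.
    The convex-combination form is an assumption of this sketch.
    """
    rng = rng or np.random.default_rng()
    b = real.shape[0]
    alpha = rng.uniform(size=(b, 1, 1, 1))                 # alpha ~ U(0, 1)
    mixed = alpha * real + (1.0 - alpha) * fake            # intermediate images
    labels = (alpha.reshape(b) >= tau).astype(np.float32)  # 1 = "real"
    return mixed, labels
```

With tau near 0 almost every blend is labeled real; raising tau towards 1 labels an increasing fraction as fake, matching the schedule described above.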
4. Data Generation
Numerous fashion datasets have been published recently, increasing the enthusiasm for research in this field. DeepFashion (Liu et al., 2016) and the very recent ModaNet (Shuai Zheng and Piramuthu, 2018) are the largest fashion datasets. For our work, the freshness of the style trends is the most important aspect, and hence we gather our own data by crawling the web for the most recent trends. We also need to make sure that the dataset can be easily updated to reflect the fashion trends of coming seasons. We describe an end to end pipeline to generate data for learning the GAN model proposed in the previous sections.
4.1. Fashion Image Scraping
We start by gathering a large amount of images from the Bing images portal (https://www.bing.com/images/). The searches are seeded with around 1000 fashion oriented queries like men's fall fashion, men's fall fashion 2018, women's fall fashion, women's latest fall fashion, etc., similar to (Liu et al., 2016). These searches yield the first set of raw images, which are fed to the next process in the pipeline. We collect around 100K images through this method.
4.2. Fashion Model Images
For our model to train properly, we need fashion images which show a person/model from head to toe, without any missing part. This is a crucial requirement, and we train a Faster R-CNN model (Ren et al., 2015) to detect full human bodies in these images. We segment all the correct detection results and pass them on to the next process in the pipeline. At this point we are left with around 80K images.
4.3. Segmentation to Top and Bottom Components
We provide the segmented full body images to crowd-sourced workers, who segment the images into tops and corresponding bottoms. They are instructed to ignore the shoes as much as possible. The segmentation boundaries are simple rectangular axis-aligned boxes for now. These components are then passed through resolution and aspect ratio checks. Note that many mistakes, such as repetitions and wrong segmentations introduced by the crowd, still persist in the dataset. We let these persist to make replication of this effort easier. We also remove non-pants candidates such as skirts, frocks, etc. for the results reported in this paper. Although including them would be a simple addition to the data collection model, their distribution is very different from that of the pants and trousers we model in this paper. The methods described in this paper can be used to learn separate models for these candidates as well. These filtering operations remove more than 50% of the data, and we are left with around 35K image pairs of tops and corresponding bottoms. Some example images from our dataset are shown in Fig. 2. Note that research datasets used for GANs have been heavily normalized and rectified to align samples with each other; the eye co-ordinates in the CelebA dataset (Liu et al., 2015) are aligned to within a couple of pixels of each other. The mean and variance of the aligned face features for the CelebA dataset are shown in Fig. 3. Note that most GAN based face generation works report results on the aligned faces and not on the faces-in-the-wild dataset. We perform no such alignment, and the data is used as-is to learn the conditional GAN model. The large amount of variability in the dataset used for our experiments is a real challenge for current day techniques, and we plan to release this data to the community to further improve the state of the art in this field. We perform a simple K-means based clustering on the intensity field of the images to get the dominant modes present in the data. A few of the dominant clusters are shown in Fig. 4. Note that clustering seems to identify the pose of the two legs of the pants to some degree, but does not cluster across shades. Some of the dominant clusters capture artifacts of the loosely defined labeling process, such as the low hanging coat in the fifth row from the top.
The problem we are trying to address in this work is extremely hard: the scale at which data is collected, as well as the seasonality and timeliness requirements, make human annotation and rectification impractical. Our experiments to generate pants for given tops using the basic cGAN method proposed by Mirza and Osindero (Mirza and Osindero, 2014) did not give the expected results. Empirically, we observed that the generator for the original cGAN method latches onto various combinations of the dominant modes present in the data, as shown in Fig. 5, and does not converge to the ideal set of images. We hypothesize that this happens because the generator does not find any strong low variance regions, such as the eye corners in the CelebA dataset, to latch onto during the initial iterations. Two possible ways to break this deadlock are either to pre-align all the training data to an average output and then generate that, or to learn a warping functional in addition to the generation cost. In this work, we take a third approach, motivated by the tempered adversarial networks proposed by Sajjadi et al. (Sajjadi et al., 2018): we move the real images closer to the generated distribution, leading to a more stable workload for the discriminator. We also perform a pre-clustering of the data to create semantically similar batches for each training epoch. More details of these techniques are elucidated in the following sections.
5.1. Experimental Setup
We use the conditional GAN network described above for generating images. As shown in Eq. 3, the network combines the noise vector z with the feature vector y extracted from the top apparel. Both z and y are concatenated and fed to the generator network. The generator and discriminator network architectures used in our work are shown in Fig. 6.
Since we do not perform any alignment of the training data, the generator finds multiple modes suitable for generation and hence keeps generating combinations of those. To counter this behavior, we convert all the input images into gray scale, so that only pose variations are clustered, featurize the gray scale images, and perform K-Means clustering in the feature space. This separates some of the very distinct poses in the pants image dataset. Once this is done, the training batches are selected such that images belonging to one unique shape cluster are fed to the GAN in each training batch.
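The clustering-driven batch construction can be sketched as follows, with a minimal K-Means written out so the sketch is self-contained; raw gray-scale intensities serve as stand-in features here.

```python
import numpy as np

def kmeans_assign(feats, k, iters=20, rng=None):
    """Minimal K-Means, returning a cluster index per sample."""
    rng = rng or np.random.default_rng(0)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        for c in range(k):
            if np.any(assign == c):        # skip empty clusters
                centers[c] = feats[assign == c].mean(0)
    return assign

def cluster_batches(images, k=4, batch_size=8):
    """Yield training batches drawn from a single pose cluster each.

    images: (N, H, W, C) RGB arrays. Clustering runs on gray-scale
    intensities so that pose, not color, drives the grouping.
    """
    gray = images.mean(axis=-1)                  # drop color information
    feats = gray.reshape(len(images), -1)        # stand-in intensity features
    assign = kmeans_assign(feats, k)
    for c in range(k):
        idx = np.where(assign == c)[0]
        for start in range(0, len(idx), batch_size):
            yield images[idx[start:start + batch_size]]
```

Each yielded batch contains images from one cluster only, so every generator update sees a single dominant shape mode.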
Initially the generator used only the adversarial loss as proposed by Mirza and Osindero (Mirza and Osindero, 2014). Empirically, this does not work very well, since the bottom apparel data consists of various poses, colors, and designs. To accommodate all the variations, we added the MSE loss and the perceptual loss to the generator loss function step by step. We observed that the MSE loss helped with color matching, while the perceptual loss helped sharpen the images. The various stages of generation are shown in Fig. 7. With only the cGAN objective as proposed in (Mirza and Osindero, 2014), the images were largely inadequate to be called apparel, since the generated images had almost no texture. Adding the MSE loss brings color matching and texture, but high frequency details were still missing. Finally, adding the perceptual loss generates the best quality images, both in variation of color and texture and in detail.
5.2. Generation using RLF training
The generated images for a lower threshold, as shown in Fig. 8, provide a clear picture of how the generator is learning. Our aim for the generator is to choose the most prominent mode out of the multiple modes appearing in the generated images. Once the model has trained for about 200 epochs with randomly permuted batches, we switch the training regime to clustering-driven batches, where each batch contains only one dominant mode of the training data. Our target pants data has a lot of pose variation on top of the existing color and design variation; in the end we need to generate pants in any pose, but with correct texture and design. Switching to the clustering-driven training batch regime with a high value of the threshold leads to more stable generations, as shown in Fig. 8, bottom panel.
We baseline our method against a simple IR based solution, where the test clothing item, a shirt in our case, is mapped to its closest shirt in the dataset, and the corresponding pants for the match are shown as the complementary recommendation. The matching algorithm used to get the closest match can be applied successively to generate the top k matches. For the results mentioned in this work, we generate ResNet-50 features for the images and then find approximate nearest neighbors by using cosine similarity over an index created with the non-metric spaces library (NMSLib, https://pypi.org/project/nmslib/).
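A brute-force stand-in for the NMSLib retrieval step: cosine similarity over toy, randomly generated feature vectors in place of real ResNet-50 features. A real deployment would use the approximate-nearest-neighbor index instead.

```python
import numpy as np

def cosine_topk(query, index_feats, k=5):
    """Brute-force cosine top-k: a minimal stand-in for the NMSLib
    (cosine-similarity) index used for the IR baseline."""
    q = query / np.linalg.norm(query)
    idx = index_feats / np.linalg.norm(index_feats, axis=1, keepdims=True)
    sims = idx @ q                       # cosine similarity to every item
    top = np.argsort(-sims)[:k]          # indices of the k best matches
    return top, sims[top]

rng = np.random.default_rng(42)
catalogue = rng.random((100, 2048))              # toy stand-in features
query = catalogue[7] + 0.01 * rng.random(2048)   # query close to item 7
top, scores = cosine_topk(query, catalogue, k=3)
print(top[0])   # item 7 ranks first
```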
Note that once the matching metric and the dataset are fixed, an IR based algorithm returns the same set for a given query. The GAN framework, on the other hand, has the appealing property of generating different results for the same input, depending on the noise vector appended to the input query. This can be further exploited to correlate the noise vector with seasonality, gender, color, and other such dimensions, which we have not studied in this work but which remain possible extensions to our method. Comparative examples of the IR baseline against the GAN framework are shown in Fig. 9.
6.2. Objective Evaluation
To objectively measure the performance of our proposed method, we take inspiration from the Inception Score (IS) proposed by Salimans et al. (Salimans et al., 2016). IS tries to evaluate the outcome of GANs on two separate axes.
6.2.1. Image Quality
The most important criterion for the generated images is whether they represent valid pants objects. We do not train a separate classifier for the pants category, as the IS computation method requires. Instead, we submit the generated images as search queries to Bing image search: if the retrieved results contain jeans or pants, we count the generation as good; otherwise we count it as a bad quality generation. We call this the search engine criterion (SEC) and report it independently.
6.2.2. Diversity

We do not want all the generated images to look similar, which would point to the generator favoring one mode over the others. We evaluate diversity by measuring the color diversity in the generated batches: we create a joint channel-wise color histogram of all the images in a batch and compute the entropy of this distribution. Higher entropy points towards higher diversity in the generated results. In the experiments we call this the diversity criterion (DC). The histograms for each of the variants mentioned in this work are shown in Fig. 10. The combined results for the two scores for all the variants of the proposed GAN network are shown in Table 1. Note that the MSE loss improves the SEC score, meaning that more results look like jeans and pants, while the addition of the perceptual loss also improves the diversity of the results. Finally, the addition of the RLF scheme improves both metrics over the baseline method as well as over the individual improvements. Some of the images generated by our method are shown in Fig. 11.
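The diversity criterion can be sketched as follows; the joint channel-wise histogram is interpreted here as per-channel histograms over the whole batch concatenated together, and the bin count is an illustrative choice.

```python
import numpy as np

def diversity_score(batch, bins=32):
    """Entropy of the joint channel-wise color histogram of a batch.

    batch: (N, H, W, 3) images with values in [0, 1]. Per-channel
    histograms over the whole batch are concatenated and normalized;
    higher entropy means more color diversity in the generated batch.
    """
    hists = []
    for c in range(batch.shape[-1]):
        h, _ = np.histogram(batch[..., c], bins=bins, range=(0.0, 1.0))
        hists.append(h)
    p = np.concatenate(hists).astype(float)
    p /= p.sum()
    p = p[p > 0]                        # drop empty bins before the log
    return float(-np.sum(p * np.log2(p)))
```

A batch of identical, single-color images collapses to a handful of occupied bins and a low score, while a color-diverse batch spreads mass over many bins and scores high.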
| Generator Loss Method | SEC Score | DC Score |
| --- | --- | --- |
| Adversarial only | 29.68% | 156.02 |
| Adversarial + MSE | 67.18% | 164.85 |
| Adversarial + MSE + Perceptual | 64.06% | 193.62 |
| RLF + Adversarial + MSE + Perceptual | 81.25% | 212.92 |
7. Deployment Status
The proposed method described in this work powers the backend of the feature called ‘Goes Well With’ in the Bing Shopping vertical. The outputs generated by our method are matched against the current ads corpus and then framed into a re-query for the user. A screenshot of the experience is shown in Fig. 12. The live experience can be seen at the www.bing.com/shop vertical for the USA and UK markets, for search queries such as sweatshirts, t-shirts, etc.
8. Conclusion and Future Work
In this work we start with a simple idea: generating relevant bottom item recommendations based on an input top item. We propose an enhanced conditional GAN model. The proposed model modifies the generator with a classical MSE loss and a simplified perceptual loss computed from the DCT coefficients of the generated and target images. We introduce a simplified lensing technique in the discriminator to stabilize generator training. We cluster the data with a very simple K-Means clustering technique, which is then used to enforce mode normalization across training batches. The final outcome of all these methods is a simple yet powerful technique which can be used to generate very meaningful fashion items, which can then be used to search for similar products on e-commerce portals. Though we tackle only pants in this work, similar techniques can be used to generate varied fashion items such as jackets, skirts, etc. In the future, we would like to continue building such models to capture more categories of fashion items and also investigate the commonalities hidden in these generative models.
- Al-Halah et al. (2017) Ziad Al-Halah, Rainer Stiefelhagen, and Kristen Grauman. 2017. Fashion Forward: Forecasting Visual Style in Fashion. ICCV (2017), 388–397.
- Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In NIPS. 2672–2680.
- Han et al. (2017) Xintong Han, Zuxuan Wu, Yu-Gang Jiang, and Larry S. Davis. 2017. Learning Fashion Compatibility with Bidirectional LSTMs. In ACM Multimedia.
- He and McAuley (2016) Ruining He and Julian McAuley. 2016. VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback. In AAAI.
- Hsiao and Grauman (2017) Wei-Lin Hsiao and Kristen Grauman. 2017. Learning the Latent ”Look”: Unsupervised Discovery of a Style-Coherent Embedding from Fashion Images. CoRR abs/1707.03376 (2017). arXiv:1707.03376 http://arxiv.org/abs/1707.03376
- Hu et al. (2015) Yang Hu, Xi Yi, and Larry S. Davis. 2015. Collaborative Fashion Recommendation: A Functional Tensor Factorization Approach.. In ACM Multimedia, Xiaofang Zhou, Alan F. Smeaton, Qi Tian, Dick C. A. Bulterman, Heng Tao Shen, Ketan Mayer-Patel, and Shuicheng Yan (Eds.). ACM, 129–138. http://dblp.uni-trier.de/db/conf/mm/mm2015.html#HuYD15
- Iwata et al. (2011) Tomoharu Iwata, Shinji Wanatabe, and Hiroshi Sawada. 2011. Fashion Coordinates Recommender System Using Photographs from Fashion Magazines. In IJCAI.
- Jagadeesh et al. (2014) Vignesh Jagadeesh, Robinson Piramuthu, Anurag Bhardwaj, Wei Di, and Neel Sundaresan. 2014. Large Scale Visual Recommendations from Street Fashion Images. In KDD. 1925–1934.
- Kiapour et al. (2014) M. Hadi Kiapour, Kota Yamaguchi, Alexander C. Berg, and Tamara L. Berg. 2014. Hipster Wars: Discovering Elements of Fashion Styles. European Conference on Computer Vision (ECCV).
- Kovashka et al. (2012) Adriana Kovashka, Devi Parikh, and Kristen Grauman. 2012. WhittleSearch: Image Search with Relative Attribute Feedback. 2012 IEEE Conference on Computer Vision and Pattern Recognition (2012), 2973–2980.
- Ledig et al. (2017) Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. 2017. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), 105–114.
- Lee et al. (2017) Hanbit Lee, Jinseok Seol, and Sang goo Lee. 2017. Style2Vec: Representation Learning for Fashion Items from Style Sets. CoRR abs/1708.04014 (2017).
- Li et al. (2017) Yuncheng Li, Liangliang Cao, Jiang Zhu, and Jiebo Luo. 2017. Mining Fashion Outfit Composition Using an End-to-End Deep Learning Approach on Set Data. IEEE Transactions on Multimedia 19 (2017), 1946–1955.
- Lin et al. (2018) Yujie Lin, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Jun Ma, and Maarten de Rijke. 2018. Explainable Fashion Recommendation with Joint Outfit Matching and Comment Generation. CoRR abs/1806.08977 (2018).
- Liu et al. (2012) Si Liu, Jiashi Feng, Zheng Song, Tianzhu Zhang, Hanqing Lu, Changsheng Xu, and Shuicheng Yan. 2012. Hi, Magic Closet, Tell Me What to Wear!. In Proceedings of the 20th ACM International Conference on Multimedia (MM ’12). ACM, New York, NY, USA, 619–628. https://doi.org/10.1145/2393347.2393433
- Liu et al. (2016) Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. 2016. DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations. In CVPR.
- Liu et al. (2015) Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. 2015. Deep Learning Face Attributes in the Wild. In Proceedings of International Conference on Computer Vision (ICCV).
- Matzen et al. (2017) Kevin Matzen, Kavita Bala, and Noah Snavely. 2017. StreetStyle: Exploring world-wide clothing styles from millions of photos. CoRR abs/1706.01869 (2017). arXiv:1706.01869 http://arxiv.org/abs/1706.01869
- McAuley et al. (2015) Julian J. McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-Based Recommendations on Styles and Substitutes.. In SIGIR. 43–52.
- Mirza and Osindero (2014) Mehdi Mirza and Simon Osindero. 2014. Conditional Generative Adversarial Nets. CoRR abs/1411.1784 (2014). http://dblp.uni-trier.de/db/journals/corr/corr1411.html#MirzaO14
- Pathak et al. (2016) Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei Efros. 2016. Context Encoders: Feature Learning by Inpainting. In CVPR.
- Ren et al. (2015) Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.. In NIPS, Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett (Eds.). 91–99. http://dblp.uni-trier.de/db/conf/nips/nips2015.html#RenHGS15
- Sajjadi et al. (2018) Mehdi S. M. Sajjadi, Giambattista Parascandolo, Arash Mehrjou, and Bernhard Schölkopf. 2018. Tempered Adversarial Networks. In ICML. 4451–4459.
- Salimans et al. (2016) Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. 2016. Improved Techniques for Training GANs. CoRR abs/1606.03498 (2016). arXiv:1606.03498 http://arxiv.org/abs/1606.03498
- Shuai Zheng and Piramuthu (2018) Shuai Zheng, Fan Yang, M. Hadi Kiapour, and Robinson Piramuthu. 2018. ModaNet: A Large-Scale Street Fashion Dataset with Polygon Annotations. In ACM Multimedia Conference (ACM MM).
- Simo-Serra and Ishikawa (2016) Edgar Simo-Serra and Hiroshi Ishikawa. 2016. Fashion Style in 128 Floats: Joint Ranking and Classification Using Weak Data for Feature Extraction. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), 298–307.
- Simonyan and Zisserman (2014) Karen Simonyan and Andrew Zisserman. 2014. Very Deep Convolutional Networks for Large-Scale Image Recognition. CoRR abs/1409.1556 (2014). arXiv:1409.1556 http://arxiv.org/abs/1409.1556
- Song et al. (2017) Xuemeng Song, Fuli Feng, Jinhuan Liu, Zekun Li, Liqiang Nie, and Jun Ma. 2017. NeuroStylist: Neural Compatibility Modeling for Clothing Matching. In ACM Multimedia.
- Veit et al. (2015) Andreas Veit, Balazs Kovacs, Sean Bell, Julian McAuley, Kavita Bala, and Serge Belongie. 2015. Learning Visual Clothing Style with Heterogeneous Dyadic Co-Occurrences. In ICCV. 4642–4650.
- Xu et al. (2017) Xiangyu Xu, Deqing Sun, Jinshan Pan, Yujin Zhang, Hanspeter Pfister, and Ming-Hsuan Yang. 2017. Learning to Super-Resolve Blurry Face and Text Images. In ICCV. IEEE Computer Society, 251–260.
- Zhao et al. (2017) Bo Zhao, Jiashi Feng, Xiao Wu, and Shuicheng Yan. 2017. Memory-Augmented Attribute Manipulation Networks for Interactive Fashion Search. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), 6156–6164.