Attention-based Fusion for Outfit Recommendation

Katrien Laenen, et al. · 08/28/2019

This paper describes an attention-based fusion method for outfit recommendation which fuses the information in the product image and description to capture the most important, fine-grained product features in the item representation. We experiment with different kinds of attention mechanisms and demonstrate that attention-based fusion improves item understanding. We improve over state-of-the-art outfit recommendation results on three benchmark datasets.


1. Introduction

With the explosive growth of e-commerce content on the Web, recommendation systems are essential to overcome consumer over-choice and to improve user experience. Often users shop online to buy a full outfit or to buy items matching other items in their closet. Webshops currently offer only limited support for these kinds of searches. Some webshops offer a "people also bought" feature as a source of suggestions for compatible clothing items. However, items that are bought together by others are not necessarily compatible with each other, nor do they necessarily correspond with the taste and style of the current user. Another feature some webshops provide is "shop the look", which lets the user buy all clothing items worn together with the viewed item in an outfit that is usually put together by a fashion stylist. However, this scenario does not provide alternatives that might appeal more to the user.

In this work, we tackle the problem of outfit recommendation. The goal of this task is to compose a fashionable outfit either from scratch or starting from an incomplete set of items. Outfit recommendation has two main challenges. The first is item understanding. Fine details in the garments can be important for making combinations. For example, the items in Figure 1 match nicely because of the red heels of the sandals, the red flowers on the dress and the red pendants of the bracelet. These fine-grained product details should be captured in the item representations. Moreover, usually there is also a short text description associated with the product image. These descriptions point out certain product features and contain information which is useful for making combinations as well. Hence, there is a need to effectively integrate the visual and textual item information into the item representations. The second challenge in outfit recommendation is item matching. Item compatibility is a complex relation. For instance, assume items A and B are both compatible with item C. In that case items A and B can be, but are not necessarily, visually similar. Moreover, items A and B can be, but are not necessarily, also compatible with each other. Furthermore, different product features can play a role in determining compatibility depending on the types of items being matched, as illustrated in (Vasileva et al., 2018).

This work focuses on item understanding. Our outfit recommendation system operates on region-level and word-level representations to bring product features which are important for making item combinations to the forefront as needed. The contributions of our work are threefold. Firstly, our approach works on the finer level of image regions and words, whereas previous approaches to outfit recommendation work on the coarser level of full images and sentences. Secondly, we explore different attention mechanisms and propose an attention-based fusion method which fuses the visual and textual information to capture the most relevant product features in the item representations. Attention mechanisms have not yet been explored in outfit recommendation systems to improve item understanding. Thirdly, we improve state-of-the-art outfit recommendation results on three datasets.

The remainder of this paper is structured as follows. In Section 2 we review other works on outfit recommendation. Then, Section 3 describes our model architecture. Next, Section 4 contains our experimental setup. The results of the conducted experiments are analysed in Section 5. Finally, Section 6 provides our conclusions and directions for future work.

2. Related Work

The task of outfit fashionability prediction requires uncovering which items go well together based on item style, color and shape. This can be learned from visual data, language data or a combination of the two. Currently, two approaches are common to tackle outfit fashionability prediction. The first one is to infer a feature space where visually compatible clothing items are close together. (Veit et al., 2015) use a Siamese convolutional neural network (CNN) architecture to infer a compatibility space of clothing items. Instead of only one feature space, multiple feature spaces can also be learned to focus on certain compatibility relationships.

(He et al., 2016b) propose to learn a compatibility space for different types of relatedness (e.g., color, texture, brand) and weight these spaces according to their relevance for a particular pair of items. (Vasileva et al., 2018) infer a compatibility space for each pair of item types (e.g., tops and bottoms, tops and handbags) and demonstrate that the embeddings specialize to features that dominate the compatibility relationship for that pair of types. Moreover, their approach also uses the textual descriptions of items to further improve the results. The second common approach to outfit fashionability prediction is to obtain outfit representations and to train a fashionability predictor on these representations. In (Simo-Serra et al., 2015) a conditional random field scores the fashionability of a picture of a person's outfit based on a bag-of-words representation of the outfit and visual features of both the scenery and the person. Their method also provides feedback on how to improve the fashionability score. In (Li et al., 2017) neural networks are used to acquire multimodal representations of items based on the item image, category and title, to pool these into one outfit representation and to score the outfit's fashionability. Other approaches to outfit fashionability prediction also exist. In (Han et al., 2017) an outfit is treated as an ordered sequence and a bidirectional long short-term memory (LSTM) model is used to learn the compatibility relationships among the fashion items. In (Hsiao and Grauman, 2018) the visual compatibility of clothing items is captured with a correlated topic model to automatically create capsule wardrobes. (Lin et al., 2019) build an end-to-end learning framework that improves item recommendation with co-supervision of item generation. Given an image of a top and a description of the requested bottom (or vice versa), their model composes outfits consisting of one top piece and one bottom piece.

None of the above approaches works on region-level and word-level representations or makes use of an attention mechanism. In contrast, we infer which product features are most important for the outfit recommendation task through the use of an attention mechanism on regions and words.

3. Methodology

Section 3.1 describes our baseline model, which fuses the visual and textual information with standard common space fusion. Next, Section 3.2 elaborates our model architecture which fuses the visual and textual information through attention.

In all formulas, matrices are written with capital letters and vectors are bolded. We use $W$ and $\mathbf{b}$ to refer to respectively the weights and bias in linear and non-linear transformations.

3.1. Baseline

Our baseline model is the method of (Vasileva et al., 2018). The model receives two triplets as input: a triplet of image embeddings $(\mathbf{i}_u, \mathbf{i}_v, \mathbf{i}_w)$ of dimension $d_i$ and a triplet of corresponding sentence embeddings $(\mathbf{t}_u, \mathbf{t}_v, \mathbf{t}_w)$ of dimension $d_t$. How these image and sentence embeddings are obtained is detailed in Section 4.4. Embeddings $\mathbf{i}_u$ and $\mathbf{i}_v$ represent images of respectively type $u$ and type $v$ which are compatible. Compatible means that the images represented by $\mathbf{i}_u$ and $\mathbf{i}_v$ appear together in some outfit. Meanwhile $\mathbf{i}_w$ represents a randomly sampled image of the same type as $\mathbf{i}_v$ that has not been seen in an outfit with $\mathbf{i}_u$ and is therefore considered to be incompatible with $\mathbf{i}_u$.

The triplets are first projected to a common, semantic space of dimension $d$. The purpose of the common space is to better capture the notions of image similarity, text similarity and image-text similarity. Therefore, three losses are defined on the common space. A visual-semantic loss $\mathcal{L}_{vse}$ enforces that each image should be closer to its own description than to the descriptions of the other images in the triplet:

$$\mathcal{L}_{vse} = \mathcal{L}_{vse,u} + \mathcal{L}_{vse,v} + \mathcal{L}_{vse,w} \qquad (1)$$
$$\mathcal{L}_{vse,u} = \tfrac{1}{2}\sum_{x \in \{v, w\}} \ell\big(P_i\,\mathbf{i}_u,\; P_t\,\mathbf{t}_u,\; P_t\,\mathbf{t}_x\big) \qquad (2)$$
with
$$\ell(\mathbf{x}, \mathbf{y}, \mathbf{z}) = \max\!\big(0,\; m - s(\mathbf{x}, \mathbf{y}) + s(\mathbf{x}, \mathbf{z})\big) \qquad (3)$$
and
$$s(\mathbf{x}, \mathbf{y}) = \frac{\mathbf{x}^{\top}\mathbf{y}}{\lVert\mathbf{x}\rVert\,\lVert\mathbf{y}\rVert} \qquad (4)$$

with $P_i$ and $P_t$ the projections to the common space, $\ell$ the standard triplet loss, $m$ the margin, and $s$ the cosine similarity.

$\mathcal{L}_{vse,v}$ and $\mathcal{L}_{vse,w}$ are computed analogous to Eq. 2. A visual similarity loss $\mathcal{L}_{vsim}$ enforces that an image of type $v$ should be closer to an image of the same type $v$ than to an image of another type $u$:

$$\mathcal{L}_{vsim} = \ell\big(P_i\,\mathbf{i}_v,\; P_i\,\mathbf{i}_w,\; P_i\,\mathbf{i}_u\big) \qquad (5)$$

with $P_i$ the image projection to the common space and $\ell$ the standard triplet loss of Eq. 3. Finally, a textual similarity loss $\mathcal{L}_{tsim}$ is defined analogous to Eq. 5.

Next, a type-specific compatibility space of dimension $d_c$ is inferred for each pair of types $u$ and $v$. In this space a compatibility loss $\mathcal{L}_{comp}$ enforces that compatible images are closer together than non-compatible images:

$$\mathcal{L}_{comp} = \ell\big(M^{(u,v)} P_i\,\mathbf{i}_u,\; M^{(u,v)} P_i\,\mathbf{i}_v,\; M^{(u,v)} P_i\,\mathbf{i}_w\big) \qquad (6)$$

with $P_i$ the image projection to the common space, $M^{(u,v)}$ the projection associated with the compatibility space of the type pair $(u, v)$, and $\ell$ the standard triplet loss of Eq. 3.

The final training loss is:

$$\mathcal{L} = \mathcal{L}_{comp} + \lambda_1\,\mathcal{L}_{vse} + \lambda_2\,\mathcal{L}_{vsim} + \lambda_3\,\mathcal{L}_{tsim} \qquad (7)$$

with $\lambda_1$, $\lambda_2$ and $\lambda_3$ scalar parameters.
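To make the training objective concrete, the following PyTorch sketch implements the standard triplet loss of Eq. 3 with cosine similarity and combines the losses of Eqs. 1-7. The projection modules, the margin value of 0.2 and the assignment of the loss weights are illustrative placeholders, not the published settings.

```python
import torch
import torch.nn.functional as F


def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss of Eq. 3 with the cosine similarity of Eq. 4."""
    s_pos = F.cosine_similarity(anchor, positive, dim=-1)
    s_neg = F.cosine_similarity(anchor, negative, dim=-1)
    return torch.clamp(margin - s_pos + s_neg, min=0.0).mean()


def training_loss(i_u, i_v, i_w, t_u, t_v, t_w,
                  proj_img, proj_txt, proj_comp,
                  lambdas=(5e-5, 5e-5, 5e-3), margin=0.2):
    """Combined loss of Eq. 7 for one triplet of items.

    i_u and i_v are compatible images of types u and v, i_w is the sampled
    negative; t_* are the corresponding sentence embeddings. proj_img and
    proj_txt project to the common space, proj_comp to the type-specific
    compatibility space of (u, v). The margin and the assignment of the
    weights in `lambdas` are illustrative placeholders.
    """
    ci_u, ci_v, ci_w = proj_img(i_u), proj_img(i_v), proj_img(i_w)
    ct_u, ct_v, ct_w = proj_txt(t_u), proj_txt(t_v), proj_txt(t_w)

    def vse(ci, ct_own, ct_others):
        # Eq. 2: an image should be closer to its own description than to
        # the descriptions of the other images in the triplet.
        return sum(triplet_loss(ci, ct_own, ct_o, margin)
                   for ct_o in ct_others) / len(ct_others)

    # Visual-semantic loss (Eq. 1).
    l_vse = (vse(ci_u, ct_u, [ct_v, ct_w])
             + vse(ci_v, ct_v, [ct_u, ct_w])
             + vse(ci_w, ct_w, [ct_u, ct_v]))

    # Visual similarity loss (Eq. 5): same-type images closer than cross-type.
    l_vsim = triplet_loss(ci_v, ci_w, ci_u, margin)
    # Textual similarity loss, defined analogous to Eq. 5.
    l_tsim = triplet_loss(ct_v, ct_w, ct_u, margin)

    # Compatibility loss (Eq. 6) in the type-specific space of (u, v).
    l_comp = triplet_loss(proj_comp(ci_u), proj_comp(ci_v), proj_comp(ci_w),
                          margin)

    l1, l2, l3 = lambdas
    return l_comp + l1 * l_vse + l2 * l_vsim + l3 * l_tsim
```

In practice proj_img, proj_txt and proj_comp would be learned linear layers, with one proj_comp per pair of item types.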

3.2. Attention-based Fusion for Outfit Recommendation

The downside of the baseline model is that the item representations are quite coarse and the interaction between the visual and textual modality is quite limited. Instead, we would like to highlight certain parts of an image or words in a description which correspond to important product features for making fashionable item combinations, and integrate this into a multimodal item representation. Therefore we propose an attention-based fusion model, which we obtain by making a few adjustments to the baseline model.

Firstly, one input to the attention-based fusion model is a triplet of region-level image features $(I_u, I_v, I_w)$, each of dimension $r \times d_i$, where $r$ denotes the number of regions. Depending on the attention mechanism used, the other input is either a triplet of description-level features $(\mathbf{t}_u, \mathbf{t}_v, \mathbf{t}_w)$ as before or a triplet of word-level features $(T_u, T_v, T_w)$, each of dimension $n \times d_t$, where $n$ denotes the number of words. Details on how these features are obtained can be found in Section 4.4. Since the losses defined on full images are formulated at the image level, we obtain image-level representations by simply taking the average of the region-level representations, i.e., $\mathbf{i} = \frac{1}{r}\sum_{j=1}^{r} \mathbf{i}_j$ with $\mathbf{i}_j$ the $j$'th row of $I$. In the same way we obtain description-level representations from the word-level representations for the losses defined on full descriptions.

Secondly, we use an attention mechanism to fuse the visual and textual information and obtain a triplet of multimodal item representations. These multimodal item representations are more fine-grained and allow more complex interactions between the vision and language data. Finally, we project these multimodal item representations to the type-specific compatibility spaces.

How we identify important product features depends on the attention mechanism used. Section 3.2.1 describes visual dot product attention. Section 3.2.2 describes stacked visual attention. Finally, Section 3.2.3 discusses a co-attention mechanism. Furthermore, we also experimented with self-attention (Vaswani et al., 2017) on the image regions and words, and some other co-attention and multimodal attention mechanisms (Lu et al., 2016; Seo et al., 2017; Nam et al., 2017), but these did not improve performance.

3.2.1. Visual Dot Product Attention

Given region-level image features $I$ and description-level features $\mathbf{t}$, visual dot product attention produces attention weights based on the dot product of the representation of the description and each region:

$$\alpha_j = \mathbf{i}_j^{\top}\,\mathbf{t} \qquad (8)$$

with $\mathbf{i}_j$ the $j$'th row of $I$. Next, the attention weights are normalized and used to compute the visual context vector:

$$\mathbf{c} = \sum_{j=1}^{r} \tilde{\alpha}_j\,\mathbf{i}_j \qquad (9)$$

with $\tilde{\alpha}_j$ the normalized attention weight of the $j$'th row of $I$. The visual context vector $\mathbf{c}$ is concatenated with description $\mathbf{t}$, i.e., $\mathbf{m} = [\mathbf{c}; \mathbf{t}]$ with $[\cdot;\cdot]$ the concatenation operator, to obtain a multimodal item representation.
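For illustration, a minimal PyTorch sketch of this fusion step is given below; the use of a softmax for the normalization of the attention weights is an assumption (the text only states that the weights are normalized).

```python
import torch


def visual_dot_product_attention(regions, description):
    """Fuse region-level image features with a description-level feature.

    regions:     (r, d) tensor, one row per image region.
    description: (d,)   tensor for the full description.
    Returns the concatenation of the visual context vector and the description.
    """
    # Eq. 8: one attention score per region from the dot product with t.
    scores = regions @ description                     # (r,)
    # Eq. 9: normalize the scores (softmax assumed here) and take a
    # weighted sum of the region features.
    weights = torch.softmax(scores, dim=0)             # (r,)
    context = (weights.unsqueeze(1) * regions).sum(0)  # (d,)
    # Multimodal item representation: [c; t].
    return torch.cat([context, description], dim=0)
```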

3.2.2. Stacked Visual Attention

Given region-level image features $I$ and description-level features $\mathbf{t}$, stacked visual attention (Yang et al., 2016) produces a multimodal context vector in multiple attention hops, each extracting more fine-grained visual information. In the $k$'th attention hop, the attention weights and context vector are calculated as:

$$\boldsymbol{\alpha}^{(k)} = \operatorname{softmax}\!\Big(\tanh\big(W_I\,I^{\top} \oplus (W_Q\,\mathbf{q}^{(k-1)} + \mathbf{b})\big)\Big) \qquad (10)$$
$$\mathbf{c}^{(k)} = \sum_{j=1}^{r} \alpha^{(k)}_j\,\mathbf{i}_j \qquad (11)$$

with $W_I$ and $W_Q$ learnable weights, $\mathbf{b}$ the bias vector, $\mathbf{q}^{(k-1)}$ the query vector from the previous hop, and $\oplus$ the elementwise sum operator. The query vector is initialized to the description representation $\mathbf{t}$. At the $k$'th hop, the query vector is updated as:

$$\mathbf{q}^{(k)} = \mathbf{q}^{(k-1)} + \mathbf{c}^{(k)} \qquad (12)$$

This process is repeated $K$ times, with $K$ the number of attention hops. Afterwards, the final query vector $\mathbf{q}^{(K)}$ is concatenated with description $\mathbf{t}$, i.e., $\mathbf{m} = [\mathbf{q}^{(K)}; \mathbf{t}]$ with $[\cdot;\cdot]$ the concatenation operator, to obtain a multimodal item representation.
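A PyTorch sketch of the hop structure follows; the exact scoring function inside each hop is an assumption in the spirit of (Yang et al., 2016), while the query initialization and the query update of Eq. 12 follow the description above.

```python
import torch
import torch.nn as nn


class StackedVisualAttention(nn.Module):
    """Sketch of stacked visual attention with K hops (cf. Eqs. 10-12)."""

    def __init__(self, dim, hops=2):
        super().__init__()
        self.hops = hops
        self.w_i = nn.Linear(dim, dim, bias=False)  # learnable weight W_I
        self.w_q = nn.Linear(dim, dim, bias=True)   # learnable weight W_Q and bias b
        self.w_p = nn.Linear(dim, 1, bias=False)    # maps each region to a score

    def forward(self, regions, description):
        # regions: (r, dim), description: (dim,)
        query = description                          # query initialized to t
        for _ in range(self.hops):
            # Attention weights over the regions, conditioned on the query.
            hidden = torch.tanh(self.w_i(regions) + self.w_q(query))      # (r, dim)
            weights = torch.softmax(self.w_p(hidden).squeeze(-1), dim=0)  # (r,)
            # Visual context vector of this hop (Eq. 11).
            context = (weights.unsqueeze(1) * regions).sum(0)             # (dim,)
            # Query update (Eq. 12).
            query = query + context
        # Multimodal item representation: [q_K; t].
        return torch.cat([query, description], dim=0)
```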

3.2.3. Co-attention

The co-attention mechanism of (Yu et al., 2017) attends to both the representations of the image regions and the representations of the description words as follows.

First, the description words are attended independent of the image regions. The assumption here is that the most relevant words of the description can be inferred independent of the image content, i.e., words referring to color, shape, style and brand can be considered relevant independent of whether they are displayed in the image or not. Given word-level features $T$, the textual attention weights $\boldsymbol{\beta}$ and textual context vector $\mathbf{c}_t$ are obtained as:

$$\boldsymbol{\beta} = \operatorname{softmax}\!\big(\mathrm{Convolution1D}(d_t, 1, 1)(T)\big) \qquad (13)$$
$$\mathbf{c}_t = \sum_{j=1}^{n} \beta_j\,\mathbf{t}_j \qquad (14)$$

where $\mathrm{Convolution1D}(in, out, k)$ refers to the 1D-convolution operation with $in$ input channels, $out$ output channels and kernel size $k$, and $\mathbf{t}_j$ is the $j$'th row of $T$.

Next, the image regions are attended in $K$ attention hops. In the $k$'th attention hop, the textual context vector $\mathbf{c}_t$ is merged with each of the region-level image features in $I$ using multimodal factorized bilinear pooling (MFB). MFB consists of an expand stage, where the unimodal representations are projected to a higher-dimensional space of dimension $f \cdot d_o$ (with $f$ a hyperparameter) and then merged with elementwise multiplication, followed by a squeeze stage, where the merged feature is transformed back to the lower dimension $d_o$. For a detailed explanation of MFB the reader is referred to (Yu et al., 2017). The MFB operation results in a multimodal feature matrix $F^{(k)}$. Then, the visual attention weights $\boldsymbol{\alpha}^{(k)}$ and context vector $\mathbf{c}_v^{(k)}$ are calculated based on this merged multimodal feature matrix $F^{(k)}$:

$$\boldsymbol{\alpha}^{(k)} = \operatorname{softmax}\!\big(\mathrm{Convolution1D}(d_o, 1, 1)(F^{(k)})\big) \qquad (15)$$
$$\mathbf{c}_v^{(k)} = \sum_{j=1}^{r} \alpha^{(k)}_j\,\mathbf{i}_j \qquad (16)$$

The visual context vectors of all hops are concatenated and transformed to obtain the final visual context vector $\mathbf{c}_v$:

$$\mathbf{c}_v = W_c\,[\mathbf{c}_v^{(1)}; \ldots; \mathbf{c}_v^{(K)}] \qquad (17)$$

with $W_c$ a learnable weight matrix and $[\cdot;\cdot]$ the concatenation operator. Finally, the final visual context vector $\mathbf{c}_v$ is merged with the textual context vector $\mathbf{c}_t$ using MFB to acquire a multimodal item representation.
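Since MFB is the central fusion operation in this mechanism, a minimal sketch of its expand and squeeze stages follows; the grouping used for sum pooling is an assumption, and the dropout and power/L2 normalization of (Yu et al., 2017) are omitted. The attention steps themselves follow the same weighted-sum pattern as the earlier sketches.

```python
import torch
import torch.nn as nn


class MFB(nn.Module):
    """Sketch of multimodal factorized bilinear pooling (expand + squeeze).

    Both inputs are projected to a space of dimension f * d_o, merged with
    elementwise multiplication, and squeezed back to d_o by sum pooling over
    groups of f units.
    """

    def __init__(self, dim_x, dim_y, dim_out, factor=2):
        super().__init__()
        self.factor = factor
        self.dim_out = dim_out
        self.expand_x = nn.Linear(dim_x, factor * dim_out)
        self.expand_y = nn.Linear(dim_y, factor * dim_out)

    def forward(self, x, y):
        # Expand stage: project both modalities and merge elementwise.
        merged = self.expand_x(x) * self.expand_y(y)                  # (..., f * d_o)
        # Squeeze stage: sum-pool groups of `factor` units back to d_o.
        merged = merged.view(*merged.shape[:-1], self.dim_out, self.factor)
        return merged.sum(dim=-1)                                     # (..., d_o)
```

In the co-attention mechanism above, such a module would be applied once per hop to merge the textual context vector with the region features (broadcasting over the rows of $I$), and once more to merge the final visual and textual context vectors.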

4. Experimental Setup

4.1. Experiments and Evaluation

All models are evaluated on two tasks. In the fashion compatibility (FC) task, a candidate outfit is scored based on how compatible its items are with each other. More precisely, the outfit compatibility score is computed as the average compatibility score across all item pairs in the outfit. Since the compatibility of two items is measured with cosine similarity, the outfit compatibility score lies in the interval [-1, 1]. Performance on the FC task is evaluated using the area under the ROC curve (AUC). In the fill-in-the-blank (FITB) task the goal is to select, from a set of four candidate items, the item which is most compatible with the remainder of the outfit. More precisely, the most compatible candidate item is the one with the highest total compatibility score with the items in the remainder of the outfit. Performance on this task is evaluated with accuracy.
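As an illustration of the two evaluation protocols, the sketch below scores an outfit and answers a FITB question directly from item embeddings; for simplicity it assumes a single shared compatibility space, whereas the actual model uses the type-specific spaces of Section 3.1.

```python
import itertools

import torch
import torch.nn.functional as F


def outfit_compatibility(item_embeddings):
    """FC score: average pairwise cosine similarity over all item pairs.

    item_embeddings: list of 1-D item vectors, assumed here to live in one
    shared compatibility space. The resulting score lies in [-1, 1].
    """
    sims = [F.cosine_similarity(a, b, dim=0)
            for a, b in itertools.combinations(item_embeddings, 2)]
    return torch.stack(sims).mean()


def fill_in_the_blank(partial_outfit, candidates):
    """FITB: index of the candidate with the highest total compatibility."""
    totals = [sum(F.cosine_similarity(c, item, dim=0) for item in partial_outfit)
              for c in candidates]
    return max(range(len(candidates)), key=lambda j: totals[j])
```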

FC questions and FITB questions that consist of images without a description are discarded to keep evaluation fair for all models. Also note that if a pair of items has a type combination that was never seen during training, the model has not learned a type-specific compatibility space for that pair. Such pairs are ignored during evaluation. Hence, we also use the training set to determine which pairs of types do not affect outfit fashionability.
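For illustration, the sketch below refines the previous snippet with this filtering; comp_spaces, a mapping from an unordered type pair to its learned projection, is a hypothetical name introduced for this example.

```python
import torch.nn.functional as F


def pair_score(x, y, type_x, type_y, comp_spaces):
    """Compatibility of two items in their type-specific space.

    comp_spaces is a hypothetical dict mapping an unordered type pair to its
    learned projection; pairs without a learned space are skipped by
    returning None, as described above.
    """
    key = tuple(sorted((type_x, type_y)))
    if key not in comp_spaces:
        return None
    proj = comp_spaces[key]
    return F.cosine_similarity(proj(x), proj(y), dim=0)
```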

4.2. Datasets

We evaluate all models on three different datasets: Polyvore68K-ND, Polyvore68K-D and Polyvore21K.

4.2.1. Polyvore68K

The Polyvore68K dataset (Vasileva et al., 2018), available at https://github.com/mvasil/fashion-compatibility, originates from Polyvore. Two different train-test splits are defined for the dataset. Polyvore68K-ND contains 53,306 outfits for training, 10,000 for testing, and 5,000 for validation. It consists of 365,054 items, some of which occur in both the training and test set. However, no outfit appearing in one of the three sets is seen in the other two. The other split, Polyvore68K-D, contains 32,140 outfits, of which 16,995 are used for training, 15,145 for testing and 3,000 for validation. It has 175,485 items in total, and no item seen during training appears in the validation or test set. Both splits have their own FC questions and FITB questions.

Each item in the dataset is represented by a product image and a short description. Items have one of 11 coarse types (see Table 2 in Appendix A).

4.2.2. Polyvore21K

Another dataset collected from Polyvore is the Polyvore21K dataset (Han et al., 2017), available at https://github.com/xthan/polyvore-dataset. It contains items of 380 different types, not all of which are fashion related (e.g., furniture, toys, skincare, food and drinks). We delete all items with types unrelated to clothing, clothing accessories, shoes and bags. The remaining 180 types are all fashion related, but some of them are very fine-grained. We therefore make the item types more coarse to avoid an abundance of type-specific compatibility spaces (more than 5,000), which would be infeasible. The resulting 37 types can be found in Table 2 in Appendix A. Eventually, this leaves 16,919 outfits for training, 1,305 for validation and 2,701 for testing. There are no overlapping items between the three sets. Each item has an associated image and description.

During evaluation we use the FC questions and FITB questions supplied by (Vasileva et al., 2018) for the Polyvore21K dataset, after removal of the items unrelated to fashion.

4.3. Comparison with Other Works

This work uses a slightly different setup than the work of (Vasileva et al., 2018) and therefore our results are not exactly comparable with theirs. Firstly, we do not evaluate our models on the same set of FC and FITB questions. This is because we discard questions consisting of images without a description as explained in Section 4.1. Secondly, the item types used for the Polyvore21K dataset are different. It is unclear from (Vasileva et al., 2018) how they obtain and use the item types of the Polyvore21K dataset, as these have only been made public recently. In this work, we used the publicly available item types after cleaning as detailed in Section 4.2.2.

4.4. Training Details

Figure 2. Examples of fill-in-the-blank questions on the Polyvore68K-ND dataset and answers generated by the baseline model and our attention-based fusion model based on stacked visual attention.

All images are represented with the ResNet18 architecture (He et al., 2016a) pretrained on ImageNet. More precisely, as in (Vasileva et al., 2018) we take the output of the res4b_relu layer. For the models operating on image regions this results in 49 regions for every image, each with a dimension of 256. For the models working with full images, we use an additional average pooling layer to obtain one image-level representation, also with a dimension equal to 256. The text descriptions are represented with a bidirectional LSTM of which the forward and backward hidden state at timestep $n$ are concatenated, with $n$ the number of words in the description. For models operating on the level of words instead of full descriptions, we concatenate the forward and backward hidden state of the bidirectional LSTM at timestep $j$ to obtain the representation of the $j$'th word. The parameters of the ResNet18 architecture and the bidirectional LSTM are finetuned on our dataset during training. Dimensions $d_t$, $d$, $d_c$ and $d_o$ are equal to 512. Hyperparameters are set based on the validation set. For the attention mechanisms, the number of attention hops $K$ is set to 2 and the hyperparameter $f$ for MFB is set to 2.
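For concreteness, the sketch below shows one way to derive region-level, image-level and word-level features from a convolutional feature map and a bidirectional LSTM; the embedding sizes and the use of the final hidden states of both directions for the description-level representation are illustrative assumptions rather than the exact extraction pipeline.

```python
import torch
import torch.nn as nn


def to_regions(feature_map):
    """Reshape a convolutional feature map (C, H, W) into H*W region vectors.

    For the setting described above this would yield 49 regions of dimension
    256 from a 256 x 7 x 7 map; the exact backbone layer is not reproduced here.
    """
    c, h, w = feature_map.shape
    return feature_map.view(c, h * w).t()      # (H*W, C)


def to_image_level(regions):
    """Image-level representation as the average of the region vectors."""
    return regions.mean(dim=0)


class DescriptionEncoder(nn.Module):
    """Bidirectional LSTM over word embeddings.

    The word-level representation concatenates the forward and backward
    states at each timestep; using the final states of both directions for
    the description-level representation is an assumption.
    """

    def __init__(self, emb_dim=300, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, word_embeddings):                     # (1, n, emb_dim)
        word_states, (h_n, _) = self.lstm(word_embeddings)
        # word_states: (1, n, 2 * hidden) -- word-level features.
        description = torch.cat([h_n[0], h_n[1]], dim=-1)   # (1, 2 * hidden)
        return word_states, description
```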

All models are trained for a fixed number of epochs using the ADAM optimizer with a learning rate of 5e-5 and a batch size of 128. In the loss functions, two of the scalar weights are set to 5e-5 and the remaining one to 5e-3; the margin $m$ is likewise set based on the validation set. All models are trained for 5 runs. We do this to counteract the effect of the negative sampling, which is done at random during training. To compute performance, we take the average performance on the FC task and FITB task across these 5 runs. In qualitative results, we use a voting procedure over the runs to determine the final answer on FC and FITB questions.

5. Results

                                      Polyvore68K-ND        Polyvore68K-D         Polyvore21K
                                      FC (AUC)  FITB (acc)  FC (AUC)  FITB (acc)  FC (AUC)  FITB (acc)
Common space fusion
  baseline (Vasileva et al., 2018)    85.62     56.55       85.07     56.91       86.28     58.35
Attention-based fusion
  visual dot product attention        89.43     61.55       86.85     60.12       88.59     63.11
  stacked visual attention            89.68     61.92       87.25     60.48       88.89     62.52
  co-attention                        89.58     61.20       86.25     59.00       85.04     58.20

Table 1. Results on the fashion compatibility and fill-in-the-blank tasks for the Polyvore68K dataset versions and the Polyvore21K dataset.

Table 1 shows the results of the discussed models on the Polyvore68K dataset versions and the Polyvore21K dataset. We outperform standard common space fusion on all three datasets for both the FC and FITB tasks. On the Polyvore68K dataset versions the best results for both tasks are achieved with the fusion method based on stacked visual attention. For the Polyvore21K dataset the best results for the FC task are obtained with the fusion method based on stacked visual attention and for the FITB task with the fusion method based on visual dot product attention. Generally, we observe that a basic attention mechanism such as visual dot product attention achieves results comparable to those of more complex attention mechanisms such as stacked visual attention or co-attention.

When focusing on the separate tasks, our attention-based fusion models seem better at distinguishing randomly generated outfits from human-generated outfits than the standard common space fusion models. This is especially apparent on the Polyvore68K-ND dataset. Furthermore, our attended multimodal item representations enable the generation of more fashionable outfits, as can be seen from the results on the FITB task. Figure 2 shows some FITB questions and the answers generated by the standard common space fusion model and by our fusion model based on stacked visual attention for the Polyvore68K-ND dataset. For each of these FITB questions, the ground truth item needs to be selected because of small details in other items of the outfit which are picked up by our model but not by the baseline. More precisely, for the first example the light blue handbag matches especially well with the light blue clasp of the pump. In the second example, the striped pattern of the handbag returns in the slippers and the yellow of the flower on the handbag returns in the sunglasses. In the third example, the green belt matches well with the green accents in the handbag and mules. In the last example, the elephant on the T-shirt looks nice in combination with the elephant-shaped earrings.

Hence, both quantitative and qualitative results demonstrate that highlighting certain product features in the item representations for making outfit combinations is meaningful and can be achieved with attention.

6. Conclusion

In this work we showed that attention-based fusion integrates visual and textual information in a more meaningful way than standard common space fusion. Attention over region-level image features and word-level text features makes it possible to bring certain product features to the forefront in the multimodal item representations, which benefits the outfit recommendation results. We demonstrated this on three datasets, improving over state-of-the-art results on an outfit compatibility prediction task and an outfit completion task.

As future work, and to further improve the results, we would like to investigate neural architectures that recognise fine-grained fashion attributes in images even better, so as to benefit more from the attention-based fusion. Furthermore, we would like to design novel co-attention mechanisms that integrate fine-grained visual and textual attributes more effectively.

Acknowledgements.
The first author is supported by a grant of the Research Foundation - Flanders (FWO) no. 1S55420N.

References

  • X. Han, Z. Wu, Y. Jiang, and L. S. Davis (2017) Learning fashion compatibility with bidirectional LSTMs. In ACM Multimedia, Cited by: §2, §4.2.2.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016a) Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. Cited by: §4.4.
  • R. He, C. Packer, and J. McAuley (2016b) Learning compatibility across categories for heterogeneous item recommendation. In International Conference on Data Mining, Cited by: §2.
  • W. Hsiao and K. Grauman (2018) Creating capsule wardrobes from fashion images. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 7161–7170. Cited by: §2.
  • Y. Li, L. Cao, J. Zhu, and J. Luo (2017) Mining fashion outfit composition using an end-to-end deep learning approach on set data. IEEE Transactions on Multimedia 19, pp. 1946–1955. Cited by: §2.
  • Y. Lin, P. Ren, Z. Chen, Z. Ren, J. Ma, and M. de Rijke (2019) Improving outfit recommendation with co-supervision of fashion generation. In The World Wide Web Conference, WWW ’19, pp. 1095–1105. Cited by: §2.
  • J. Lu, J. Yang, D. Batra, and D. Parikh (2016) Hierarchical question-image co-attention for visual question answering. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, pp. 289–297. Cited by: §3.2.
  • H. Nam, J. Ha, and J. Kim (2017) Dual attention networks for multimodal reasoning and matching. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §3.2.
  • M. J. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi (2017) Bidirectional attention flow for machine comprehension. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, Cited by: §3.2.
  • E. Simo-Serra, S. Fidler, F. Moreno-Noguer, and R. Urtasun (2015) Neuroaesthetics in fashion: modeling the perception of fashionability. In CVPR, pp. 869–877. Cited by: §2.
  • M. I. Vasileva, B. A. Plummer, K. Dusad, S. Rajpal, R. Kumar, and D. A. Forsyth (2018) Learning type-aware embeddings for fashion compatibility. In ECCV, Cited by: §1, §2, §3.1, §4.2.1, §4.2.2, §4.3, §4.4, Table 1.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), pp. 5998–6008. Cited by: §3.2.
  • A. Veit, B. Kovacs, S. Bell, J. McAuley, K. Bala, and S. Belongie (2015) Learning visual clothing style with heterogeneous dyadic co-occurrences. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV ’15, pp. 4642–4650. Cited by: §2.
  • Z. Yang, X. He, J. Gao, L. Deng, and A. J. Smola (2016) Stacked attention networks for image question answering. In CVPR, pp. 21–29. Cited by: §3.2.2.
  • Z. Yu, J. Yu, J. Fan, and D. Tao (2017) Multi-modal factorized bilinear pooling with co-attention learning for visual question answering. IEEE International Conference on Computer Vision (ICCV), pp. 1839–1848. Cited by: §3.2.3, §3.2.3.

Appendix A Dataset Item Types

Table 2 gives an overview of the different item types in the Polyvore68K dataset and the types that remain in the Polyvore21K dataset after cleaning.

Item Types
Polyvore68K: Accessories, All body, Bags, Bottoms, Hats, Jewellery, Outerwear, Scarves, Shoes, Sunglasses, Tops
Polyvore21K: Accessories, Activewear, Baby, Bags and Wallets, Belts, Boys, Cardigans and Vests, Clothing, Costumes, Cover-ups, Dresses, Eyewear, Girls, Gloves, Hats, Hosiery and Socks, Jeans, Jewellery, Jumpsuits, Juniors, Kids, Maternity, Outerwear, Pants, Scarves, Shoes, Shorts, Skirts, Sleepwear, Suits, Sweaters and Hoodies, Swimwear, Ties, Tops, Underwear, Watches, Wedding Dresses

Table 2. Item types kept in the Polyvore68K and Polyvore21K datasets.