Fashion Image Retrieval with Capsule Networks

08/26/2019 ∙ by Furkan Kınlı, et al.

In this study, we investigate the in-shop clothing retrieval performance of densely-connected Capsule Networks with dynamic routing. To achieve this, we propose a Triplet-based design of the Capsule Network architecture with two different feature extraction methods. In our design, Stacked-convolutional (SC) and Residual-connected (RC) blocks are used to form the input of the capsule layers. Experimental results show that both of our designs outperform all variants of the baseline study, namely FashionNet, without relying on landmark information. Moreover, when compared to the SOTA architectures for clothing retrieval, our proposed Triplet Capsule Networks achieve comparable recall rates with only about half of the parameters used in the SOTA architectures.


1 Introduction

Fashion has recently become one of the most featured topics of interdisciplinary studies in Computer Science. With the emergence of deep learning based solutions, fashion-related research has started to yield promising results on various subjects, including clothing recognition, attribute prediction, clothing retrieval, body segmentation, and style prediction. Retrieving a desired clothing image from a collection is one of the most challenging tasks in the fashion domain, and it is typically attacked with a mechanism that learns to capture different notions of similarity between images in a common subspace.

There have been numerous studies [3, 4, 7, 1, 12, 13, 9, 2, 6] that employ Convolutional Neural Networks (CNNs) in their solutions. However, CNNs, by their nature, have some limitations, such as losing the hierarchical spatial information of the objects and not being robust to affine transformations. Recently, an alternative deep learning architecture, namely Capsule Networks, and a novel dynamic routing algorithm have been proposed by Sabour et al. [10]. In this design, with the help of the routing-by-agreement algorithm, it is possible to learn more descriptive information about the objects without losing the intrinsic spatial relationship between the object and its parts. Therefore, Capsule Networks have the capacity to recognize images regardless of the visual angle and without requiring different transformations, since this architecture can inherently learn the higher dimensional pose configuration of the images.

Figure 1: Some examples of retrieved images by our architectures. Blue: query, Green: correct, Red: wrong.

In this study, we apply Capsule Networks to the clothing retrieval problem by extending their capabilities with several improvements. First, we extract the features of larger-sized clothing images with more powerful methods (stacked or residual-connected convolutional layers), and forward these features to fully-connected capsules. Next, we introduce a Triplet-based design of Capsule Networks that learns the similarity between the members of a triplet. Lastly, we train our proposed architectures on the in-shop partition of the DeepFashion data set [7], and compare our results with the baseline study, namely FashionNet [7], and other SOTA methods.

2 Related Work

Figure 2: Illustration of our proposed architectures containing different feature extraction blocks.

Clothing retrieval has become more important after major developments in Computer Science and the emergence of e-commerce. Recent studies generally attack this task with deep convolutional networks. [3] introduces a highly challenging task, namely Exact Street to Shop, where the goal is to match the exact same item in photos captured by users to online shopping photos. [4] proposes the Dual Attribute-aware Network (DARN) to address the cross-domain image matching problem. [7] introduces a new data set, namely DeepFashion, which contains a vast number of large-scale clothing images annotated with numerous attributes, landmark information, and cross-domain image correspondences. [1] demonstrates that integrating a bag-of-words approach into a weakly-supervised learning process can achieve promising results on the clothing retrieval task. [12] proposes a Visual Attention Model (VAM), and introduces a novel Dropout-like connection after the attention layers. [13] addresses the issues of defining a model with the right complexity and choosing hard samples carefully during training. [9] shows how to improve the robustness of feature embeddings by exploiting the independence within ensembles. [2] introduces a hierarchical triplet loss (HTL) to address the random sampling issue when training with a triplet loss. [6] proposes a multiple-way attention-based ensemble architecture that learns feature embeddings with multiple attention masks.

3 Methodology

3.1 Capsules

Capsules are groups of neurons that convey higher dimensional information throughout the network in a more refined way. This information is interpreted as the pose configuration and the existence probability of an instance. Each capsule in a higher level is formed by routing the incoming votes from the capsules in the lower level, where these votes are calculated by a linear transformation of the pose configuration. During dynamic routing [10], the linear combination of the incoming votes weighted by their coupling coefficients forms the non-activated outputs of the higher level capsules. At each iteration, the weights of these votes are updated with respect to the dot product of the incoming votes and the outputs of the higher level capsules; this is called the agreement between capsules. Finally, the output of each capsule is determined by the squashing function proposed in [10].
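To make the routing procedure concrete, the following is a minimal sketch of the squashing function and routing-by-agreement described above (written in PyTorch; the tensor shapes are illustrative assumptions, not the exact configuration of our networks):

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Squashing non-linearity from [10]: short vectors shrink toward zero,
    # long vectors saturate toward unit length, orientation is preserved.
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iterations=3):
    # u_hat: votes from lower-level capsules for higher-level capsules,
    # shape (batch, num_lower, num_higher, dim_higher).
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits
    for _ in range(num_iterations):
        c = F.softmax(b, dim=2)                       # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)      # weighted sum of votes
        v = squash(s)                                 # higher-level capsule outputs
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)  # agreement update
    return v
```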

3.2 Proposed Architectures

In our design, we adjust the original Capsule Network structure into a Triplet-based version, so that the network can learn the similarity between images by feeding the objective function with the embedded representations extracted by the capsules. Our Capsule Network design aims to minimize the Triplet loss shown in Equation 1, where d is the Euclidean distance metric, α is the distance margin, and v_a, v_p, v_n are the latent capsule embeddings extracted from the anchor image x_a, the positive image x_p and the negative image x_n, respectively. While forming these embeddings, we normalize the latent capsules by their L2-norm, and then mask all capsules but the one belonging to the correct class to zero.

L(x_a, x_p, x_n) = max(0, d(v_a, v_p) - d(v_a, v_n) + α)    (1)
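A small sketch of how such masked embeddings and the loss in Equation 1 could be computed is given below (PyTorch; the margin value and tensor shapes are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def capsule_embedding(class_capsules, labels):
    # class_capsules: (batch, num_classes, 16) latent capsules from the Class Capsule layer.
    # L2-normalize each capsule, zero out every capsule except the one of the
    # ground-truth class, and flatten the result into a single embedding vector.
    caps = F.normalize(class_capsules, p=2, dim=-1)
    mask = F.one_hot(labels, num_classes=caps.size(1)).unsqueeze(-1).to(caps.dtype)
    return (caps * mask).flatten(start_dim=1)

def triplet_loss(v_a, v_p, v_n, margin=0.2):  # margin value is an assumption
    # Equation 1: pull the anchor-positive pair together and push the
    # anchor-negative pair at least `margin` apart in Euclidean distance.
    d_ap = F.pairwise_distance(v_a, v_p)
    d_an = F.pairwise_distance(v_a, v_n)
    return F.relu(d_ap - d_an + margin).mean()
```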

As illustrated in Figure 2, Capsule Networks essentially contain two main blocks: a feature extraction block and the capsule layers. In the original design proposed by Sabour et al. [10], there is only one feature extraction block, composed of a single convolutional layer with 256 filters. Extracting the features with such a shallow structure may be sufficient for one-channel handwritten digit images of size 28×28 [10]. However, fully-connected capsules need more complex features to achieve better results on more complicated image-related problems. Therefore, we design two different feature extraction blocks to form more powerful features as the input of the capsules: in the first, a number of convolutional layers are stacked without any pooling operation between them, and in the second, these layers are connected with residual connections. In both designs, the leaky form of the linear rectifier [8] is used as the activation function, and batch normalization [5] is applied between the convolutional layers.
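The two feature extraction blocks can be sketched roughly as follows (PyTorch; the channel widths, kernel sizes, strides, and the 1×1 projection used on the residual path are our own illustrative assumptions, not the exact configuration):

```python
import torch.nn as nn

def conv_bn_lrelu(in_ch, out_ch, stride=2):
    # Basic unit shared by both blocks: convolution, batch normalization [5],
    # and the leaky rectifier [8]; no pooling is used anywhere.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )

class StackedConvBlock(nn.Module):
    # SC block: convolutional layers simply stacked back-to-back.
    def __init__(self, channels=(3, 64, 128, 256)):
        super().__init__()
        self.body = nn.Sequential(
            *[conv_bn_lrelu(i, o) for i, o in zip(channels[:-1], channels[1:])])

    def forward(self, x):
        return self.body(x)

class ResidualConnectedBlock(nn.Module):
    # RC block: the same stack, but every stage also receives a projected
    # skip connection from its input.
    def __init__(self, channels=(3, 64, 128, 256)):
        super().__init__()
        self.stages = nn.ModuleList(
            [conv_bn_lrelu(i, o) for i, o in zip(channels[:-1], channels[1:])])
        self.skips = nn.ModuleList(
            [nn.Conv2d(i, o, kernel_size=1, stride=2, bias=False)
             for i, o in zip(channels[:-1], channels[1:])])

    def forward(self, x):
        for stage, skip in zip(self.stages, self.skips):
            x = stage(x) + skip(x)
        return x
```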

Furthermore, the capsule layers are kept identical in both designs. There are two fully-connected capsule layers, namely the Primary Capsule and the Class Capsule. The Primary Capsule is the layer where the extracted features are grouped with respect to the capsule dimensionality. In our designs, this layer has 32 channels of 16-dimensional capsules that are fully-connected to the Class Capsule. Next, there are N 16-dimensional capsules in the Class Capsule layer, where N is the number of classes in the data set. The activations and the latent capsule vectors of the Class Capsule are calculated via dynamic routing with 3 iterations. No reconstruction method (as in [10]) is applied in our Capsule Network designs.
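Reusing the squash and dynamic_routing helpers sketched in Section 3.1, the capsule layers could look roughly like the following (PyTorch; the input feature size, primary grid size, and number of classes are placeholders, not the exact values used in our experiments):

```python
import torch
import torch.nn as nn

class CapsuleHead(nn.Module):
    # Primary Capsule: groups the extracted feature maps into 32 channels of
    # 16-dimensional capsules. Class Capsule: N 16-dimensional capsules
    # (N = number of classes), obtained by dynamic routing with 3 iterations.
    def __init__(self, in_channels=256, prim_grid=6, num_classes=10,
                 prim_channels=32, caps_dim=16, routing_iters=3):
        super().__init__()
        self.prim_channels, self.caps_dim = prim_channels, caps_dim
        self.routing_iters = routing_iters
        self.primary = nn.Conv2d(in_channels, prim_channels * caps_dim,
                                 kernel_size=3, stride=2, padding=1)
        # prim_grid must match the spatial size of the primary feature map.
        num_primary = prim_channels * prim_grid * prim_grid
        # One learned transformation matrix per (primary capsule, class capsule) pair.
        self.W = nn.Parameter(
            0.01 * torch.randn(num_primary, num_classes, caps_dim, caps_dim))

    def forward(self, features):
        u = self.primary(features)                                # (B, 32*16, H, W)
        B = u.size(0)
        u = u.view(B, self.prim_channels, self.caps_dim, -1)
        u = u.permute(0, 1, 3, 2).reshape(B, -1, self.caps_dim)   # (B, num_primary, 16)
        u = squash(u)                                             # see Section 3.1 sketch
        u_hat = torch.einsum('ijdk,bik->bijd', self.W, u)         # votes (B, num_primary, N, 16)
        return dynamic_routing(u_hat, self.routing_iters)         # (B, N, 16)
```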

4 Experiments

The experiments for the proposed Stacked-convolutional (SCCapsNet) and Residual-connected (RCCapsNet) architectures are conducted on the in-shop partition of the DeepFashion data set [7]. Both are trained on 25k training images, and tests are performed using 14k query and 12k gallery images. Since this is an information retrieval task, performance is measured by the Recall@K metric, where K is 1 or a multiple of 10 up to 50. Moreover, as noted by Schroff et al. [11], a hard negative sampling strategy significantly improves the convergence behavior of the model. Following this strategy, the negative image is picked as the closest image to the anchor, provided that they belong to different categories, whereas every possible positive image in the data set is used as the positive one.
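For reference, the evaluation protocol can be expressed compactly as below (PyTorch; the embeddings and item identifiers are assumed to be precomputed tensors):

```python
import torch

def recall_at_k(query_emb, query_ids, gallery_emb, gallery_ids,
                ks=(1, 10, 20, 30, 40, 50)):
    # A query counts as a hit at K if at least one gallery image of the same
    # item appears among its K nearest gallery neighbours (Euclidean distance).
    dists = torch.cdist(query_emb, gallery_emb)      # (num_query, num_gallery)
    ranked_ids = gallery_ids[dists.argsort(dim=1)]   # gallery ids ordered by distance
    recalls = {}
    for k in ks:
        hits = (ranked_ids[:, :k] == query_ids.unsqueeze(1)).any(dim=1)
        recalls[k] = hits.float().mean().item()
    return recalls
```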

As shown in Table 1, SCCapsNet and RCCapsNet achieve better retrieval performance than all variants of the baseline study (FashionNet) by a wide margin. It is important to note that both of our proposed architectures use only the images themselves during training, in contrast to the baseline study, where the network is additionally supported by varying numbers of attributes and by landmark information. These experiments demonstrate that our Capsule Network designs can inherently learn the pose configuration of the objects without requiring the pose information to be explicitly recovered.

Models Top-20 (%) Top-50 (%)
FashionNet+100A+L 57.3 62.5
FashionNet+500A+L 64.6 69.5
FashionNet+1000A+J 68.0 73.5
FashionNet+1000A+P 70.0 75.0
FashionNet+1000A+L 76.4 80.0
SCCapsNet (ours) 81.8 90.9
RCCapsNet (ours) 84.6 92.6
Table 1: Recall@K performance of the variants of the baseline study [7] and our proposed models. The FashionNet variants are built with different numbers of attributes (A) (i.e. 100, 500 and 1000), and with fashion landmarks (L) replaced by human joints (J) or poselets (P). SCCapsNet and RCCapsNet do not use any extra side information during training.

Table 2 summarizes the in-shop clothing retrieval results of SCCapsNet, RCCapsNet, and the SOTA methods. These figures indicate how successful our proposed designs are, and what their main limitations are when compared to the SOTA CNN-based architectures. First, both of our designs outperform the earlier methods (i.e. WTBI [3] and DARN [4]), which both exploit semantic attributes to improve the overall performance but neglect the pose configurations of the images during training. According to the Top-20 Recall@K scores, SCCapsNet improves on WTBI and DARN by 31% and 14%, while RCCapsNet performs even better, with margins of 34% and 17%, respectively. Another approach whose performance falls behind ours is the method of Corbière et al. [1], which leverages weakly-annotated textual descriptors of the images. In this design, the textual descriptors (i.e. bag-of-words) represent different coarse semantic concepts such as texture information, color and shape. Capsules can learn these concepts directly from the images in a more sophisticated way, and hence SCCapsNet and RCCapsNet achieve higher Recall@K scores than this approach without taking advantage of bag-of-words descriptors.

Models # of Params (M) Top-1 (%) Top-10 (%) Top-20 (%) Top-30 (%) Top-40 (%) Top-50 (%)
WTBI [3] 60 35.0 47.0 50.6 51.5 53.0 54.5
DARN [4] 105 38.0 56.0 67.5 70.0 72.0 72.5
FashionNet [7] 134 53.2 72.5 76.4 77.0 79.0 80.0
Corbiére et al. [1] 25 39.0 71.8 78.1 81.6 83.8 85.6
SCCapsNet (ours) 2.5 32.1 72.4 81.8 86.3 89.2 90.9
RCCapsNet (ours) 4.5 33.9 75.2 84.6 88.6 91.0 92.6
HDC [13] 5 62.1 84.9 89.0 91.2 92.3 93.1
VAM [12] 6 66.6 88.7 92.3 - - -
BIER [9] 5 76.9 92.8 95.2 96.2 96.7 97.1
HTL [2] 5 80.9 94.3 95.8 97.2 97.4 97.8
A-BIER [9] 5 83.1 95.1 96.9 97.5 97.8 98.0
ABE [6] 10 87.3 96.7 97.9 98.2 98.5 98.7
Table 2: Experimental results of the in-shop clothing retrieval task on the DeepFashion data set. "-": not reported.

Despite these results, our proposed architectures cannot yet match the performance of the more advanced CNN-based architectures. These designs apply various techniques to CNNs to boost the overall performance, namely alternative hard sampling strategies [13], more advanced objective functions [2, 9], network ensembling [9, 6] and attention-based mechanisms [12, 6]. Although these techniques can significantly improve the overall performance of CNNs, in principle they increase the model complexity by a wide margin, or increase the training time considerably. The numbers of trainable parameters in SCCapsNet and RCCapsNet are 2.5 and 4.5 million respectively, while the SOTA methods have roughly twice as many trainable parameters in their models. Capsule Networks also need more training time than CNNs, since the dynamic routing algorithm is relatively slow when compared to pooling variants. Therefore, given our limited computational resources, these techniques have not yet been applied to our models to boost the overall performance of our Capsule Network designs, and are left as future research directions.

5 Conclusion

In this study, we present two different Triplet-based designs of Capsule Networks with more powerful feature extraction blocks, and apply them to the clothing retrieval task. Experiments show promising results: both of our designs outperform all FashionNet variants without using any extra information besides the images. Moreover, when compared to the SOTA methods, our designs perform comparably well with only about half the number of parameters used in those methods, which shows the potential of the Capsule idea once its computational burdens are lightened.

References

  • [1] C. Corbière, H. Ben-younes, A. Ramé, and C. Ollion (2017) Leveraging weakly annotated data for fashion image retrieval and label prediction. In 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), pp. 2268–2274.
  • [2] W. Ge (2018) Deep metric learning with hierarchical triplet loss. In The European Conference on Computer Vision (ECCV).
  • [3] M. Hadi Kiapour, X. Han, S. Lazebnik, A. C. Berg, and T. L. Berg (2015) Where to Buy It: Matching street clothing photos in online shops. In 2015 IEEE International Conference on Computer Vision (ICCV), pp. 3343–3351.
  • [4] J. Huang, R. Feris, Q. Chen, and S. Yan (2015) Cross-domain image retrieval with a dual attribute-aware ranking network. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1062–1070.
  • [5] S. Ioffe and C. Szegedy (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pp. 448–456.
  • [6] W. Kim, B. Goyal, K. Chawla, J. Lee, and K. Kwon (2018) Attention-based ensemble for deep metric learning. In The European Conference on Computer Vision (ECCV).
  • [7] Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang (2016) DeepFashion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [8] A. L. Maas, A. Y. Hannun, and A. Y. Ng (2013) Rectifier nonlinearities improve neural network acoustic models. In ICML Workshop on Deep Learning for Audio, Speech and Language Processing.
  • [9] M. Opitz, G. Waltner, H. Possegger, and H. Bischof (2017) BIER: Boosting independent embeddings robustly. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 5199–5208.
  • [10] S. Sabour, N. Frosst, and G. E. Hinton (2017) Dynamic routing between capsules. In Advances in Neural Information Processing Systems 30, pp. 3856–3866.
  • [11] F. Schroff, D. Kalenichenko, and J. Philbin (2015) FaceNet: A unified embedding for face recognition and clustering. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 815–823.
  • [12] Z. Wang, Y. Gu, Y. Zhang, J. Zhou, and X. Gu (2017) Clothing retrieval with visual attention model. In 2017 IEEE Visual Communications and Image Processing (VCIP), pp. 1–4.
  • [13] Y. Yuan, K. Yang, and C. Zhang (2017) Hard-aware deeply cascaded embedding. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 814–823.