Improving the Annotation of DeepFashion Images for Fine-grained Attribute Recognition

07/31/2018 · by Roshanak Zakizadeh, et al.

DeepFashion is a widely used clothing dataset with 50 categories and more than 200k images overall, where each image is annotated with fine-grained attributes. Although the dataset is often used for clothes recognition and provides comprehensive annotations, the attribute distribution is unbalanced and repetitive, especially for training fine-grained attribute recognition models. In this work, we tailor DeepFashion to the fine-grained attribute recognition task by treating each category separately. After selecting the categories with a sufficient number of images for training, we remove very scarce attributes and merge the duplicate ones in each category, and then clean the dataset according to the new list of attributes. We use a bilinear convolutional neural network with a pairwise ranking loss function for multi-label fine-grained attribute recognition and show that the new annotations improve the results for this task. The detailed annotations for each of the selected categories are provided for public use.




1 Introduction

Multi-label attribute recognition of an item (such as a single instance of a bird, a car or a dress) at a fine-grained level consists of retrieving the detailed attributes which describe that item; for instance, a dress can be labelled for its pattern (e.g. striped), length (e.g. maxi), fabric (e.g. chiffon), etc. There are very few publicly available datasets that provide such detailed annotations, including CUB-200 Birds [1], DeepFashion [2] and some face datasets such as LFW [3].

The DeepFashion dataset is mostly used for the clothes recognition task; it contains over 200k images from 50 categories of clothes, including Dress, Kimono, Shorts, etc. There are 1000 attributes overall describing images at a fine-grained level. However, not all of these categories, by themselves, contain enough images to train a fine-grained attribute recognition model. Further, not all of the 1000 attributes apply to images in every category. To this end, in this paper, we focus on nine relatively large categories of the DeepFashion dataset and, by removing the scarce attributes and merging the visually similar ones, make these nine categories of clothes more suitable for the fine-grained attribute recognition task. We make the training, test and validation sets with their new list of annotations per category available for public use (the nine categories with updated annotations are available for download here).

2 DeepFashion for Fine-grained Attribute Recognition

Figure 1: Examples of repetitive attributes in DeepFashion dataset

The following steps explain the process of amending the annotations of the DeepFashion dataset per category of clothes:

  1. Fine-grained categories with sufficient training samples: Each fine-grained category in the DeepFashion dataset contains a different number of images. Out of the 50 categories, 35 have only a few hundred images or less; of the remaining categories, six more contain around 5000 samples or fewer. Through empirical experiments with the convolutional neural network we use for fine-grained attribute recognition, we concluded that at least 6000 samples are required for the loss to converge. After removing the categories with too few images, nine of the original 50 categories remain. These categories and their image counts are: dress (50837), tee (24956), blouse (17095), shorts (13637), tank (10692), skirt (10568), cardigan (9050), sweater (8901) and top (7053).

  2. Available attributes and enough attributes: There are in total 1000 attributes annotated for the DeepFashion dataset. However, not all of these attributes apply to all 50 fine-grained categories, so we had to check within each category of clothing whether any samples are available for each attribute. For instance, only 690 attributes are assigned to the images in the cardigan category. This means that if the number of classes for fine-grained attribute recognition is set to the original 1000 attributes, the remaining 310 attributes have no samples in the cardigan category, which has to be accounted for when designing the training model. Further, we need to consider the attribute imbalance problem: the ratio of the attributes for some fine-grained categories is about 1:10000. To mitigate this problem, we set a threshold on how rare an attribute may be and discard attributes that appear in less than a small fraction of the images within each category. Even after discarding such scarce attributes most categories are still unbalanced, but setting the threshold any higher would leave very few attributes per category. Following this step, a second pass is required to remove images that are left with no attributes. The final training size of the nine categories is as follows: dress (45869), tee (20216), blouse (15030), shorts (10641), skirt (9943), tank (8601), cardigan (7861), sweater (7273) and top (6083).

  3. Merging duplicate annotations: After close investigation of the attributes, we realized that some annotated attributes are very close in definition. This is especially true for the texture-descriptive attributes such as striped, printed, dot, etc. An example of this can be seen in Figure 1, where the three dresses are annotated as polkadot, dot or even both (see the last image on the right), although all three dresses are visually recognized as the same pattern. There are several examples of this kind in the DeepFashion dataset, and such overlapping attributes contribute to the imbalance of the dataset. We have improved the annotations by merging the visually similar attributes and have removed the duplicates. After removing the duplicate and scarce attributes, the number of attributes per category is as follows: blouse (35), cardigan (36), sweater (31), tank (31), tee (23), top (29), shorts (31), skirt (32) and dress (35).
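Step 1 above can be sketched as a simple frequency filter. This is a minimal illustration, assuming the DeepFashion annotation files have already been parsed into (image, category) pairs; the variable names and the example paths are ours, not part of the dataset format:

```python
from collections import Counter

MIN_SAMPLES = 6000  # empirical threshold below which the loss did not converge

def select_categories(pairs, min_samples=MIN_SAMPLES):
    """Keep only the categories with at least `min_samples` images.

    pairs: iterable of (image_path, category_name) tuples.
    Returns the set of category names that survive the cut.
    """
    counts = Counter(category for _, category in pairs)
    return {category for category, n in counts.items() if n >= min_samples}

# Illustrative toy input (real paths/counts come from the annotation files):
pairs = [("img/Dress/0001.jpg", "dress")] * 3 + [("img/Kimono/0002.jpg", "kimono")]
kept = select_categories(pairs, min_samples=2)  # keeps only "dress" here
```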
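Step 2, pruning scarce attributes and then removing images left with no labels, can be sketched on a per-category binary label matrix. The `min_fraction` threshold here is illustrative, not the exact value used in the paper:

```python
import numpy as np

def prune_attributes(labels, min_fraction=0.01):
    """Prune scarce attributes for one category, then empty images.

    labels: (num_images, num_attributes) binary matrix.
    min_fraction: illustrative threshold on the fraction of images an
    attribute must appear in to be kept.
    Returns the pruned matrix plus boolean masks of kept attributes/images,
    so the same pruning can be applied to attribute names and image lists.
    """
    keep_attrs = labels.mean(axis=0) >= min_fraction   # attribute frequency filter
    labels = labels[:, keep_attrs]
    keep_imgs = labels.sum(axis=1) > 0                 # second pass: drop label-less images
    return labels[keep_imgs], keep_attrs, keep_imgs
```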
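Step 3 amounts to a logical OR over the columns of the label matrix that belong to the same visual concept. The duplicate groups below are examples taken from the attributes discussed in this paper; the full per-category mapping is in the released annotations:

```python
import numpy as np

# Example duplicate groups (merged attribute name -> members); illustrative subset.
DUPLICATE_GROUPS = {
    "polkadot_m": ["dot", "polkadot"],
    "printed_m": ["print", "printed"],
    "striped_m": ["stripe", "striped"],
}

def merge_duplicates(labels, attr_names, groups=DUPLICATE_GROUPS):
    """Merge duplicate attribute columns with a logical OR.

    labels: (num_images, num_attributes) binary matrix.
    attr_names: list of attribute names, one per column.
    Returns the new label matrix and the new attribute name list.
    """
    name_to_idx = {name: i for i, name in enumerate(attr_names)}
    merged_cols, merged_names, used = [], [], set()
    for new_name, members in groups.items():
        idx = [name_to_idx[m] for m in members if m in name_to_idx]
        if idx:
            # An image has the merged attribute if it had any of the members.
            merged_cols.append(labels[:, idx].any(axis=1).astype(labels.dtype))
            merged_names.append(new_name)
            used.update(idx)
    keep = [i for i in range(labels.shape[1]) if i not in used]
    new_labels = np.column_stack([labels[:, keep]] + merged_cols)
    new_names = [attr_names[i] for i in keep] + merged_names
    return new_labels, new_names
```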

3 Experiments and Results

For fine-grained attribute recognition we chose the model shown in Figure 2 (which we call FineTag). The model is a VGG16-based [4] fully convolutional architecture for multi-label attribute recognition at the fine-grained level. It has a bilinear-pool layer [5] and uses a pairwise ranking loss function [6]. We use VGG16 weights pre-trained on the ImageNet dataset [7, 8] only for initializing the convolutional layers. We then truncate the model at the last convolutional layer, after the non-linearities, and generate a second feature map by projecting a copy of the feature map into a 20-dimensional ICA projection space [9] (computed beforehand from the feature maps of the same dataset). The sum over all spatial locations of the outer product of the feature map and its projection is then calculated and passed through a fully connected layer. The loss function used is the smooth pairwise ranking loss proposed by [6]. The number of labels depends on the category; for instance, there are 35 attributes for the blouse category after merging the duplicates. The reason for choosing the FineTag architecture for our experiment is that it requires far fewer parameters to train than very deep networks like VGG16, yet produces results with the same precision score. Depending on the number of attributes, this model is considerably smaller than VGG16 in terms of parameters.
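The two core operations of this architecture, bilinear pooling over spatial locations and the smooth (log-sum-exp) pairwise ranking loss of [6], can be sketched in NumPy. This is a reference sketch of the math only, not the actual TensorFlow implementation; shapes and the projection matrix are our assumptions:

```python
import numpy as np

def bilinear_pool(feat, proj):
    """Bilinear pooling of a feature map with its low-dimensional projection.

    feat: (H, W, C) feature map from the last convolutional layer.
    proj: (C, d) projection matrix (the paper uses a 20-dim ICA projection
          computed beforehand from the dataset's feature maps).
    Returns the (C * d,) descriptor: the sum over spatial locations of the
    outer product of each local feature with its projected copy.
    """
    H, W, C = feat.shape
    f = feat.reshape(-1, C)                 # (H*W, C) local features
    g = f @ proj                            # (H*W, d) projected copy
    return (f[:, :, None] * g[:, None, :]).sum(axis=0).ravel()

def smooth_pairwise_ranking_loss(scores, labels):
    """Smooth pairwise ranking loss, log(1 + sum exp(s_neg - s_pos)).

    Penalizes every negative attribute scored above a positive one,
    with a log-sum-exp in place of the hard hinge, as in [6].
    scores: (K,) predicted attribute scores; labels: (K,) binary targets.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diffs = neg[:, None] - pos[None, :]     # all (negative, positive) pairs
    return float(np.log1p(np.exp(diffs).sum()))
```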

Figure 2: The bilinear network with pairwise ranking loss for extracting the attributes at fine-grained level in an image.

We trained the model for each of the nine categories mentioned above twice: before and after merging the repetitive attributes. The model is built on the TensorFlow framework and the experiments are run on an NVIDIA Tesla V100 GPU. For each category, the model is trained on average for 30 epochs with a batch size of 40, using the Adam optimizer [10] with a learning rate of 0.000001.

In Table 1, for each category, we report the ranking-based average precision [11] and the weighted mean average precision [12, 13] (weighted by the frequency of instances per label), before and after removing the duplicates (denoted AvgPrec_b, AvgPrec_a, wmap_b and wmap_a respectively). We can see that there are significant improvements in the results just by merging duplicated attributes similar to those in Figure 1.

category AvgPrec_b AvgPrec_a wmap_b wmap_a
blouse 0.49 0.51 0.31 0.33
cardigan 0.48 0.49 0.27 0.28
sweater 0.50 0.54 0.31 0.35
tank 0.50 0.53 0.30 0.33
tee 0.60 0.63 0.37 0.41
top 0.54 0.58 0.34 0.38
shorts 0.56 0.57 0.38 0.39
skirt 0.65 0.66 0.48 0.49
dress 0.59 0.61 0.40 0.42
Table 1: Ranking-based average precision (AvgPrec) over all images and weighted mean average precision (wmap) before and after removing the duplicate attributes
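The ranking-based average precision reported above can be computed as follows. This is a minimal NumPy sketch of the metric of [11] (each true label is credited with the fraction of true labels among all labels ranked at or above it), not the evaluation code used in the experiments:

```python
import numpy as np

def ranking_average_precision(scores, labels):
    """Ranking-based average precision over a set of images [11].

    scores: (N, K) predicted attribute scores.
    labels: (N, K) binary ground-truth attribute matrix.
    For each image, each true label is scored by the fraction of true
    labels among all labels ranked at or above it; these fractions are
    averaged per image, then over all images with at least one label.
    """
    per_image = []
    for s, y in zip(scores, labels):
        true_idx = np.flatnonzero(y)
        if true_idx.size == 0:
            continue  # images with no labels do not contribute
        fractions = []
        for i in true_idx:
            at_or_above = s >= s[i]                       # labels ranked >= label i
            fractions.append((at_or_above & (y == 1)).sum() / at_or_above.sum())
        per_image.append(np.mean(fractions))
    return float(np.mean(per_image))
```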

Table 2 shows the weighted mean average precision score for two selected pairs of duplicate attributes per category (the number of duplicate pairs varies by category; here we have chosen two pairs per category for demonstration). The categories are grouped based on the common attributes (the shorts category has only one pair of repetitive attributes), and the merged attribute for each pair is indicated by the suffix _m and shown in bold. We can see that the results per attribute improve significantly by merging the duplicate annotations in the dataset. Further, before merging the duplicates, one of the attributes in the duplicate pair is often learned and extracted very poorly by the network, which results in a low mean average precision score over the whole dataset. For instance, the printed label for the tee category in Table 2 is recognized by the network with a very low score of 0.06 and improves significantly (to 0.54 as printed_m) after being merged with the print attribute. It is important to notice that per-attribute precision scores are particularly important for retrieval applications where the query targets a specific attribute of interest. For instance, when querying a striped dress, the improved score of striped_m could enhance the search results significantly.

category attributes
print printed printed_m stripe striped striped_m
tee 0.49 0.06 0.54 0.40 0.42 0.75

0.50 0.06 0.54 0.24 0.45 0.64

0.57 0.10 0.62 0.33 0.42 0.71

0.60 0.12 0.65 - - -

crop cropped cropped_m stripe striped striped_m
sweater 0.10 0.31 0.41 0.33 0.55 0.78

0.30 0.12 0.42 0.28 0.52 0.69

dot polkadot polkadot_m print printed printed_m
blouse 0.60 0.61 0.65 0.52 0.09 0.59

dot polkadot polkadot_m stripe striped striped_m
skirt 0.39 0.53 0.53 0.27 0.46 0.64

fringe fringed fringed_m stripe striped striped_m
cardigan 0.18 0.18 0.32 0.16 0.55 0.62

Table 2: Weighted mean average precision (wmap) over all labels before and after merging the duplicate attributes

4 Conclusions

In this paper, we extracted nine categories of clothes from the DeepFashion dataset which provide sufficient samples and comprehensive annotations for fine-grained attribute recognition. Further, we showed merging duplicate attributes for DeepFashion improves the attribute recognition results over the samples and per attribute. This is mainly because duplicate attributes contribute to poor annotation of the images and one of the two repetitive attributes is always under-sampled which results in poor overall results.


  • [1] Wah, C., Branson, S., Welinder, P., Perona, P., Belongie, S.: The caltech-ucsd birds-200-2011 dataset. (2011)
  • [2] Liu, Z., Luo, P., Qiu, S., Wang, X., Tang, X.: Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2016) 1096–1104

  • [3] Learned-Miller, E., Huang, G.B., RoyChowdhury, A., Li, H., Hua, G.: Labeled faces in the wild: A survey. In: Advances in face detection and facial image analysis. Springer (2016) 189–248
  • [4] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  • [5] Lin, T.Y., RoyChowdhury, A., Maji, S.: Bilinear cnn models for fine-grained visual recognition. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 1449–1457
  • [6] Li, Y., Song, Y., Luo, J.: Improving pairwise ranking for multi-label image classification. (2017)
  • [7] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE (2009) 248–255
  • [8] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. (2012) 1097–1105
  • [9] Hyvärinen, A.: Survey on independent component analysis.

  • [10] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  • [11] Fürnkranz, J., Hüllermeier, E., Mencía, E.L., Brinker, K.: Multilabel classification via calibrated label ranking. Machine learning 73(2) (2008) 133–153
  • [12] Schütze, H., Manning, C.D., Raghavan, P.: Introduction to information retrieval. Volume 39. Cambridge University Press (2008)
  • [13] Zhao, F., Huang, Y., Wang, L., Tan, T.: Deep semantic ranking based hashing for multi-label image retrieval. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE (2015) 1556–1564