Adversarially Trained Deep Neural Semantic Hashing Scheme for Subjective Search in Fashion Inventory

06/30/2019 ∙ by Saket Singh, et al. ∙ IIT Kharagpur

The simple approach of retrieving the closest match for a query image from a gallery compares each image pair using the sum of absolute differences in pixel or feature space. The process is computationally expensive, sensitive to illumination, background composition and pose variation, and too inefficient to deploy on gallery sets with more than 1000 elements. Hashing is a faster alternative which involves representing images in reduced-dimensional feature spaces. Encoding images into binary hash codes enables similarity comparison within an image pair using the Hamming distance measure. The challenge, however, lies in encoding the images using a semantic hashing scheme that places subjective neighbors within a tolerable Hamming radius. This work presents a solution employing adversarial learning of a deep neural semantic hashing network for fashion inventory retrieval. It consists of a feature-extracting convolutional neural network (CNN) learned to (i) minimize the error in classifying the type of clothing, (ii) minimize the Hamming distance between semantic neighbors and maximize the distance between semantically dissimilar images, and (iii) maximally scramble a discriminator's ability to identify the corresponding hash code-image pair when processing a semantically similar query-gallery image pair. Experimental validation for fashion inventory search yields a mean average precision (mAP) of 90.65, compared to 53.26 obtained with prior-art retrieval.




1. Introduction

Retrieval of subjectively similar results, such as in images, becomes difficult in the era of big data, when it is computationally prohibitive to employ pixel-wise or feature-wise image-pair difference measures for comparison in very large datasets. The concept of semantic hashing (Salakhutdinov and Hinton, 2009) was introduced in this regard to enable subjectively similar search for retrieval. With the growth of e-commerce this has gained center stage with the demand for retrieving subjectively similar fashion inventory. The concept is to represent an image in terms of binary hash codes so that inexpensive similarity measures can be computed for fast pair-wise matching during retrieval. The caveat, though, is to design hash codes where subjectively similar entries are within a tolerable radius of each other, as expected in Fig. 1.

Figure 1. Approach of semantic hashing based retrieval: the CNN generates a continuous-valued feature vector corresponding to the query, which is subsequently binarized to yield the hash code. The Hamming distance is computed to measure similarity with an image from the gallery set. The process yields subjectively similar results from the gallery, independent of pose and other variations.

Hashing was originally used in cryptography to encode high-dimensional data into smaller compact codes, sequences or strings using a derived hashing function. In image retrieval, hashing involves encoding images into a fixed-length vector representation. The code vector is typically binary, enabling use of the computationally inexpensive normalized Hamming distance for fast pair-wise comparison between a query and an image from the gallery. The challenge, however, is to achieve codes which place subjectively similar images within a tolerable search neighbourhood; the Cauchy probability function has been presented in earlier works for this purpose (Cao et al., 2018). Similarity learning, in simple words, aims at generating similar hash codes for similar data, whereas dissimilar data should show considerable variation in their hash codes (Li et al., 2015). Here similar may refer to visually, semantically or subjectively similar. Inspired by the robustness of convolutional neural networks (CNN) (LeCun et al., 1989) in solving several computer vision tasks, in this paper we propose to train a CNN framework to produce binary hash codes under constraints defining the Hamming distance neighbourhood and the pair-wise relationships expected during comparison.
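The pair-wise comparison described above reduces to counting differing bits between two codes. A minimal sketch, where the function name `normalized_hamming` and the 8-bit toy codes are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def normalized_hamming(b_a, b_b):
    """Fraction of differing bits between two binary hash codes."""
    b_a, b_b = np.asarray(b_a), np.asarray(b_b)
    return float(np.mean(b_a != b_b))

# Two hypothetical 8-bit hash codes for a query and a gallery image.
query   = np.array([1, 0, 1, 1, 0, 0, 1, 0])
gallery = np.array([1, 0, 1, 0, 0, 0, 1, 1])
d = normalized_hamming(query, gallery)  # 2 differing bits out of 8 -> 0.25
```

Because the comparison is a bit-wise XOR followed by a popcount, it scales to large galleries far more cheaply than pixel- or feature-space differencing.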

The paper is organized to detail related prior work in Sec. 2, our proposed method in Sec. 3, experiments and results obtained thereof in Sec. 4, followed by discussion of the results obtained in Sec. 4.4, and conclusion of the work in Sec. 5.

2. Related Work

Supervised learning of CNNs for hashing of images has proven better at generating hash codes. Such methods typically incorporate the class label information of an image to learn features characteristic of each class of objects; in the case of search in fashion databases, different clothing types have characteristic features, e.g., shirts have features characteristically different from trousers or skirts. Recent works employ pair-wise image labels for generating effective hash functions. Methods employing pair-wise similarity learning generally perform better (Cao et al., 2018; Liu et al., 2016b) than non-similarity based hashing (Lin et al., 2015), which is simpler since it does not require any label information for understanding similarity.

Earlier approaches employing non-similarity matched hashing used image classification models such as CNNs, modified to generate binary codes from features extracted in the penultimate layers, with functions like sigmoid or tanh used to binarize the continuous-valued data. The retrieval task is typically performed in two stages, coarse and fine (Lin et al., 2015). The coarse stage retrieves a large set of candidates using inexpensive distance measures like the Hamming distance. In the fine stage, distance measures like the Euclidean distance are employed on the continuous-valued features to find the closest match.
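The two-stage scheme can be sketched as follows; the function name `coarse_to_fine`, the radius value and the arrays are illustrative assumptions, not the cited implementation:

```python
import numpy as np

def coarse_to_fine(query_code, query_feat, codes, feats, radius=0.25, top=5):
    """Coarse: keep gallery items within a Hamming radius of the query code.
    Fine: re-rank the survivors by Euclidean distance in feature space."""
    ham = np.mean(codes != query_code, axis=1)   # normalized Hamming distances
    cand = np.flatnonzero(ham <= radius)         # coarse candidate set
    eu = np.linalg.norm(feats[cand] - query_feat, axis=1)
    return cand[np.argsort(eu)][:top]            # fine-ranked gallery indices
```

The coarse stage prunes most of the gallery with cheap bit comparisons, so the expensive Euclidean re-ranking only touches a small candidate set.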

Recent approaches in line with similarity matched hashing have employed deep Cauchy hashing. This approach predicts the similarity label using the Cauchy function and also uses a quantization loss to compensate for the relaxation introduced by the binary hash code generating function (Cao et al., 2018). The Cauchy function has proved more effective than the sigmoid in estimating optimal values of the similarity index and penalizing the losses obtained. The quantization loss ensures that the generated hash codes are close to the exact limits of binary values (Cao et al., 2017), the limitation being the large number of epochs required to train these networks.

Although supervised hashing methods, especially those employing deep learnt hash functions, have shown remarkable performance in representing input data using binary codes, they require costly-to-acquire human-annotated labels for training. In the absence of large annotated datasets, their performance degrades significantly. Unsupervised hashing methods, on the other hand, address this issue by providing learning frameworks that do not require any labelled input. Semantic hashing is one of the early studies, which adopts the restricted Boltzmann machine (RBM) as a deep hash function (Hinton and Salakhutdinov, 2006).

3. Hashing Method for Subjectively Similar Search

Figure 2. Categorization of the dataset into pair types.
(a) Overview of the learning scheme for semantic hashing.
(b) Stage 1: Learning of clothing item features.
(c) Stage 2: Learning with Cauchy similarity measure.
(d) Stage 3: Adversarial learning of relational similarity.
Figure 3. Framework for learning of the deep neural semantic hashing scheme for subjective search across images. Blocks in gray represent units with non-learnable parameters.

In pair-wise similarity based training, the input is a pair of images along with their similarity index calculated based on the shared attributes obtained from their annotations. In this approach, the Cauchy probability function (Cao et al., 2018) is used to predict the similarity label. Given a pair of images (I_a, I_b), when they belong to the same class they are regarded as similar, indicated with the similarity index s = 1, and when they belong to different classes they are regarded as dissimilar, with s = 0. As can be seen in Fig. 2, this relationship is described in terms of view and pose variations across different types of clothes, indexed by a type identifier t. Type 0 (t = 0) indicates all images of the same item under different poses or background variations. Type 1 (t = 1) indicates the same class of clothing item, viz. only shirts, but each of a different color. Type 2 (t = 2) represents different classes of clothing items, viz. shirts and shorts, etc. Subjective similarity is defined as s = 1 when t ∈ {0, 1} and s = 0 when t = 2. On the other hand, a relational similarity within a class can be defined as s_r = 1 when t = 0 and s_r = 0 when t = 1; s_r is not defined for t = 2. The complete approach is presented in Fig. 3(a) and described subsequently.
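Assuming the pair-type-to-label mapping described above (same-item pairs similar both subjectively and relationally, same-class pairs subjectively similar only, cross-class pairs dissimilar with relational similarity undefined), the labelling can be sketched as follows; the function name is illustrative:

```python
def similarity_labels(t):
    """Map a pair's type t in {0, 1, 2} to (subjective similarity s,
    relational similarity s_r); s_r is None for cross-class pairs (t = 2)."""
    s = 1 if t in (0, 1) else 0       # same item or same class -> subjectively similar
    s_r = {0: 1, 1: 0}.get(t)         # same item -> 1, same class different item -> 0
    return s, s_r
```

During training, a sampled pair would carry these labels into the respective Cauchy losses.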

3.1. Architecture of feature representation learning and associated networks

A CNN, represented as F(·), is employed to learn the feature representation of an image. We employ a network similar to the one used in (Cao et al., 2018), which is a modified version of AlexNet (Krizhevsky et al., 2012). The first 7 learnable layers are preserved, and the output obtained is represented as the feature tensor f. This is fed subsequently to a classifier C(·), which predicts the class of the clothing item as a one-hot vector; C(·) consists of 3 fully connected layers, the last of which has as many output units as the number of classes of clothes being looked into. The tensor f is also fed through a fully-connected layer H(·) for generating the K-element long hashing tensor, which is subsequently binarized to generate the binary hash code b corresponding to an image I. The discriminator network D(·) consists of 1 convolutional layer followed by 4 fully connected layers, with a sigmoid activation function used in the last layer.

3.2. Learning of the semantic hashing network

The approach for learning this network consists of the following 3 stages, executed in sequence in each epoch.

Stage 1: Given an image I and its corresponding class label y, the objective is to minimize the classification loss L_c with respect to the prediction ŷ obtained from C(·), thereby updating the parameters in F(·) and C(·) as illustrated in Fig. 3(b). This stage assists in learning features characteristic of different clothes. L_c is evaluated using the cross entropy (CE) loss between y and ŷ.

Stage 2: Given a pair of images I_a and I_b and their corresponding type identifier t, the learnable parameters in F(·) and H(·) are updated to minimize the Cauchy losses L_cc1 and L_cc2 as illustrated in Fig. 3(c). The subjective similarity is predicted as ŝ using the Cauchy probability function (Cao et al., 2018)

ŝ = γ / (γ + d_H(b_a, b_b))

where ŝ is the predicted subjective similarity index, γ is a scale parameter and d_H(·, ·) is the Hamming distance measure. Binary cross entropy (BCE) extended with the Cauchy probability function is used to calculate the loss, termed the Cauchy cross entropy loss (Cao et al., 2018)

L_cc = − Σ [ s log ŝ + (1 − s) log(1 − ŝ) ]
where L_cc is the Cauchy cross entropy loss and γ is a hyper-parameter. The normalized Hamming distance between two code vectors b_a and b_b is defined as

d_H(b_a, b_b) = (1/K) Σ_k 𝟙[b_a(k) ≠ b_b(k)],  k = 1, …, K

where K denotes the bit length of the binary hash code. The loss L_cc1 is computed with the subjective similarity s and minimized over all possible image pairs. The relational loss L_cc2 is computed with s_r and minimized for image pairs with t ∈ {0, 1}, and is not assessed for t = 2. Learnable parameters of only F(·) and H(·) are updated in the process, with λ1 and λ2 being the relative weights associated with L_cc1 and L_cc2 respectively.
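The Cauchy prediction and its cross-entropy extension can be sketched numerically; the value of `GAMMA`, the function names, and the per-pair (rather than batch-summed) form of the loss are illustrative assumptions:

```python
import numpy as np

GAMMA = 1.0  # scale hyper-parameter gamma (illustrative value)

def cauchy_similarity(d_h, gamma=GAMMA):
    """Predicted similarity from a Hamming distance via the Cauchy function."""
    return gamma / (gamma + d_h)

def cauchy_cross_entropy(s, d_h, gamma=GAMMA, eps=1e-7):
    """Binary cross entropy on the Cauchy-predicted similarity for one pair."""
    p = np.clip(cauchy_similarity(d_h, gamma), eps, 1 - eps)
    return float(-(s * np.log(p) + (1 - s) * np.log(1 - p)))
```

Note how the loss pulls similar pairs (s = 1) toward zero Hamming distance and pushes dissimilar pairs (s = 0) apart, with γ controlling how sharply the penalty decays with distance.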

A smooth binarizing function is used during training to generate near-binary hash codes. It is not used during inference, where it is replaced directly with a sign-based binarizer.

Stage 3: Following Fig. 3(d), the hash codes b_a and b_b generated corresponding to an input image pair I_a and I_b are concatenated, with channel shuffling in place. Given (b_a, b_b) as the channel ordering at the input of the shuffler, when shuffling takes place the channel ordering at the output is (b_b, b_a), else it remains (b_a, b_b). The task of D(·) is to identify whether the shuffler performed a shuffling operation, and learning of the parameters in D(·) minimizes the discrimination loss L_d. Since this stage is invoked only when t = 0, and the objective is to have b_a and b_b as closest Hamming distance neighbours, the learnable parameters in F(·) and H(·) are updated adversarially to maximally confuse D(·) and increase L_d, which is evaluated with BCE. λ3 denotes the relative weight of the adversarial update of F(·) and H(·).
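The shuffler that prepares the discriminator's input can be sketched as follows; the `coin` argument is an illustrative hook for deterministic testing and is not part of the described method:

```python
import random

def shuffled_pair(b_a, b_b, coin=None):
    """Randomly swap the order of a hash-code pair before concatenation;
    the discriminator must guess whether a swap occurred (label 1) or not (0)."""
    coin = random.random() if coin is None else coin
    if coin < 0.5:
        return (b_b, b_a), 1   # shuffled ordering
    return (b_a, b_b), 0       # original ordering
```

If b_a and b_b are nearly identical, the two orderings are indistinguishable and the discriminator is reduced to guessing, which is exactly the state the adversarial update of F(·) and H(·) drives toward.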

3.3. Retrieval as an inference problem

On completion of the training process, every image in the gallery set is converted to a corresponding K-bit binary hash code on being processed through F(·), H(·) and the binarizer. Given a query image I_q, it is first converted to obtain a binary hash code b_q. The normalized Hamming distance d_H is then calculated for each query-gallery pair, and the images in the gallery set are ranked in ascending order of d_H. The images in the gallery set that have the least Hamming distance from the query image constitute the top retrievals, as illustrated in Fig. 1.
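The inference-time ranking can be sketched as follows; `rank_gallery` and the 4-bit toy codes are illustrative assumptions:

```python
import numpy as np

def rank_gallery(query_code, gallery_codes):
    """Rank gallery images by ascending normalized Hamming distance to the query."""
    d = np.mean(gallery_codes != query_code, axis=1)
    order = np.argsort(d, kind="stable")       # stable sort keeps ties deterministic
    return order, d[order]
```

The first indices in `order` are the top retrievals returned to the user.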

The performance of retrieval is evaluated based on the standard metric of mean average precision (mAP). Given a query set with images I_q, the average precision (AP) is calculated based on the top-k retrievals, which correspond to the set of k closest neighbours of b_q, evaluated as

AP@k = ( Σ_r P(r) δ(r) ) / ( Σ_r δ(r) ),  r = 1, …, k

where δ(r) is an indicator function holding the value 1 if the r-th ranked retrieved image and the query image constitute a pair with t = 0 or t = 1, and 0 otherwise. P(r) is the precision value for the top-r retrieved images

P(r) = (1/r) Σ_i rel(i),  i = 1, …, r

where rel(i) denotes the ground truth relevance between the query image and the i-th retrieved image from the gallery up to the k closest neighbours; rel(i) = 1 when t ∈ {0, 1} and 0 otherwise. The mean of AP@k over all queries is reported as the mAP@k value of retrieval.

Mean AP for the top-k retrievals is calculated for a query only if at least one image in the top-k retrieved results from the gallery belongs to the same class as the query; such a query counts as a hit, and otherwise not. mAP@top-k is calculated as the mean over all such queries I_q.
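The AP and mAP computation can be sketched as follows; the function names are illustrative, and each `relevant` list is the per-rank relevance sequence defined above:

```python
import numpy as np

def average_precision(relevant, k):
    """AP over the top-k retrievals; relevant[r] = 1 if the (r+1)-th ranked
    result matches the query (pair type t = 0 or t = 1), else 0."""
    rel = np.asarray(relevant[:k], dtype=float)
    if rel.sum() == 0:
        return 0.0
    precision_at = np.cumsum(rel) / (np.arange(len(rel)) + 1)  # P(r), r = 1..k
    return float((precision_at * rel).sum() / rel.sum())

def mean_average_precision(all_relevant, k):
    """mAP: mean of AP@k over all queries."""
    return float(np.mean([average_precision(r, k) for r in all_relevant]))
```

For example, a query whose top-3 relevance sequence is [1, 0, 1] has precisions [1, 1/2, 2/3] at the relevant ranks 1 and 3, giving AP = (1 + 2/3) / 2 = 5/6.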

4. Experiments

(a) Distribution of images in men’s inventory.
(b) Distribution of images in women’s inventory.
Figure 4. Distribution of various classes of clothing items in men’s and women’s inventory in the MVC dataset.

4.1. Dataset

The performance of our scheme is experimentally validated using the MVC dataset (Liu et al., 2016a), which is popularly used for benchmarking the performance of view-invariant clothing item retrieval and clothing attribute prediction. The dataset is provided as two subsets, for men's and women's clothing items. The images were further manually filtered to remove wrong labelling and corrupted files, yielding the retained sets of men's and women's clothing items. Men's clothing items constitute 8 classes, viz. coats, pants, jeans, sleep wear, sweaters, swim wear, shirts/tops, and underwear. Women's clothing items constitute 9 classes, viz. coats, jeans, pants, dresses, sleep wear, sweaters, swimwear, underwear, and tops. The distribution of these items is detailed in Fig. 4. The images are distributed into Train, Test, Gallery and Query sets; the Train and Test sets together are used during the training process. Performance validation is performed on the Query and Gallery sets, where alternate poses of a clothing item present in the Query set make up the Gallery set, but there are no common images between these sets, and all 4 sets are mutually disjoint, as illustrated in Fig. 5.

Figure 5. An example of images of the same clothing item under different pose variations. During training, any pair of images taken from this set would have t = 0. During validation of retrieval performance, if any one of these images constitutes a part of the Query set, then the remaining are part of the Gallery set.

The training was carried out on men's and women's clothing items separately, and on both combined. For the men's clothing item experiments, the network is trained using randomly selected images from different classes paired with other randomly selected images, with pairs of Type 2, Type 1 and Type 0 created from the training dataset. The loss functions are defined to be able to handle the resulting data imbalance across pair types. The women's clothing item experiments and the combined clothing item experiments are similarly performed using pairs of Type 2, Type 1 and Type 0 created from their respective training datasets.

Figure 6. Men's inventory retrieval results.
Figure 7. Women's inventory retrieval results.
Model mAP@10 mAP@top-1 mAP@top-3 mAP@top-5 mAP@top-15 ( hits) mAP@top-15 ( hits)
DMC-CD 90.65 95.20 98.17 98.63 97.94 86.98
DMC-C 90.11 93.97 97.53 98.08 94.52 84.38
DMC 84.13 87.44 96.11 98.6 93.83 75.34
Vanilla (Cao et al., 2018) 53.26 42.46 68.9 81.85 65.29 33.49
Table 1. Performance evaluation of the retrieval task for the men's clothing inventory.
Model mAP@10 mAP@top-1 mAP@top-3 mAP@top-5 mAP@top-15 ( hits) mAP@top-15 ( hits)
DMC-CD 82.67 85.55 96.11 97.22 91.11 74.44
DMC-C 82.04 85.05 95.27 97.5 93.16 68.5
DMC 80.44 84.16 95.55 97.44 90.66 61.18
Vanilla (Cao et al., 2018) 30.48 19.04 41.42 56.67 25.55 5.77
Table 2. Performance evaluation of the retrieval task for the women's clothing inventory.
Model mAP@10 mAP@top-1 mAP@top-3 mAP@top-5 mAP@top-15 ( hits) mAP@top-15 ( hits)
DMC-CD 83.88 86.56 97.2 99.2 95.65 73.91
DMC-C 83.73 88.14 96.44 98.44 96.04 75.09
DMC 76.03 78.46 91.89 96.34 86.06 54.0
Vanilla (Cao et al., 2018) 25.12 13.04 32.46 47.56 13.27 1.01
Table 3. Performance evaluation of the retrieval task for the combined MVC dataset.

4.2. Training

Pretrained weights of AlexNet (Krizhevsky et al., 2012) used for the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) (Russakovsky et al., 2015) task are used to initialize F(·); C(·), H(·) and D(·) were initialized with random weights. The input images were resized using bilinear interpolation to match the input size requirement of F(·), and horizontally flipped at random during training to induce view invariance in the learned model. The Adam optimizer (Kingma and Ba, 2014) was used for learning the parameters in F(·), C(·), H(·) and D(·), and the training continued till the losses and accuracy trends across epochs were observed to saturate. We observed the best performance for the chosen hyper-parameters γ, λ1, λ2, λ3 and the hash code length K by varying γ following (Cao et al., 2018). Experiments were performed on a server with 2x Intel Xeon 4110 CPUs, 12x8 GB DDR4 ECC Regd. RAM, 4 TB HDD, 4x Nvidia GTX 1080Ti GPUs each with 11 GB DDR5 RAM, and Ubuntu 16.04 LTS OS. The algorithms were implemented in Anaconda Python 3.7 with PyTorch 1.0.

4.3. Results

The experimental validation was performed separately for men's clothing items, women's clothing items, and both combined. Qualitative comparison of the performance in retrieving men's clothing items is presented in Fig. 6, where each row corresponds to a class in the dataset, the first column in each row indicates a representative query image, and the subsequent 7 columns present the retrieved images. The results are quantitatively summarized in Table 1 as per the measures detailed in Sec. 3.3. In the case of mAP@top-15, a successful hit is considered both when at least the stated number of hits occurs within the top retrieved results and when exactly that many hits occur. The different baselines considered include the following. Vanilla (Cao et al., 2018) is directly implemented as per prior art. Deep multi-stage Cauchy (DMC) is implemented with only F(·) and H(·), learning to minimize only L_cc1 and L_cc2. DMC-C includes the classifier C(·) along with the configuration of DMC and also minimizes L_c. DMC-CD includes the discriminator D(·) along with DMC-C; while the optimizer on D(·) works to minimize L_d, the optimization of F(·) and H(·) maximizes L_d as an adversarial learning approach.

Similarly, the qualitative performance in retrieving women's clothing items is presented in Fig. 7 and quantitatively summarized in Table 2, and the retrieval performance on the combined clothing items is summarized in Table 3. Across each set of experiments it can be clearly observed that the inclusion of a classifier, the Cauchy cross entropy losses, and finally a discriminator for adversarial learning significantly improves retrieval performance by enabling the generation of characteristic binary hash codes.

(a) Vanilla Cauchy Hashing
(b) DMC Hashing
Figure 8. Relation between the Hamming distance and the number of training epochs.
(a) Vanilla(Men)
(b) DMC(Men)
(c) DMC-C(Men)
(d) DMC-CD(Men)
(e) Vanilla(Women)
(f) DMC(Women)
(g) DMC-C(Women)
(h) DMC-CD(Women)
(i) Vanilla(MVC)
(j) DMC(MVC)
(k) DMC-C(MVC)
Figure 9. t-SNE visualizations of the hash codes generated by the proposed architecture and its variants on the MVC dataset.

4.4. Discussion

4.4.1. Learning with two Cauchy cross entropy losses

As compared to learning with only L_cc1, which is similar to the Vanilla (Cao et al., 2018) approach of increasing the Hamming distance based separation margin between samples of Type 2, learning that also includes L_cc2 increases the separation margin between the hash codes of samples of Type 1. This can be clearly observed in Fig. 8(a), where across epochs of training the separation between samples of Type 2 is very high using only L_cc1, but no significant difference is observed between samples of Type 1 and Type 0; this separation emerges with the inclusion of L_cc2, as can be observed in Fig. 8(b). This is attributed to the increase in the spread of the generated hash codes, as can be observed in the t-SNE plots in Fig. 9: use of DMC forces an increase in spread, away from the focal concentration around the manifold distribution observed in the vanilla implementation.

4.4.2. Learning with a classifier

The feature learning network F(·) is generally initialized with weights from a network used to perform the ImageNet classification task and is suited to representing natural image characteristics. While the features f so obtained may not be characteristic enough to discriminate the different classes of images present, including the classifier C(·) and optimizing its weights along with those of F(·) while minimizing L_c helps to obtain features characteristic of the different classes of clothing items. This improves performance by producing characteristic features for each class of clothing item, and these features tend to exhibit clustering behaviour, as seen with DMC-C in Fig. 9.

4.4.3. Adversarial learning with a discriminator

One of the desirable aspects of the generated hash codes is that they are pose and view invariant for the same item; essentially, all images in Fig. 5 should have the same hash code. We achieve this by using the discriminator D(·), whose purpose is to identify whether the first channel corresponds to b_a and the second to b_b, or vice-versa. The purpose of adversarial learning is to optimize the weights in F(·) and H(·) so as to maximize confusion for D(·), leading to an increase in L_d. This leads to the assignment of similar binary hash codes b_a and b_b for items of Type 0. The t-SNE plots in Fig. 9 exhibit the close clustering achieved with DMC-CD.

5. Conclusion

This work presents a Deep Multi Cauchy Hashing framework and its variants to perform view-invariant, fast, subjective search in fashion inventory with high accuracy. In this direction, the work establishes a comparison between the baseline DMC model and its variants in Tables 1, 2 and 3. The proposed scheme maximizes the Hamming distance between semantically dissimilar images and minimizes it between semantically similar images; the formation of discriminative clusters shown in Fig. 9 supports this claim. Extensive experiments show that the model achieves state-of-the-art performance, as seen in the results obtained on the MVC dataset in Figs. 6 and 7. With the rapid expansion of e-commerce, the proposed technique can be essential for retrieval tasks not limited to the fashion industry.