Retrieval of subjectively similar results, such as in images, becomes challenging in the era of big data, when it is computationally prohibitive to employ pixel-wise or feature-wise image-pair difference measures for comparison over very large datasets. The concept of semantic hashing (Salakhutdinov and Hinton, 2009) was introduced in this regard to enable subjectively similar search for retrieval. With the growth of e-commerce this has gained centre stage, with demand for retrieving subjectively similar fashion inventory. The idea is to represent an image in terms of binary hash codes so that inexpensive similarity measures can be computed for fast pair-wise matching during retrieval. The caveat, though, is to design hash codes where subjectively similar entries lie within a tolerable radius of each other, as expected in Fig. 1.
Hashing was originally used in cryptography to encode high dimensional data into smaller compact codes, sequences or strings using a derived hashing function. In image retrieval, hashing involves encoding images into a fixed length vector representation. The code vector is typically binary, enabling use of the computationally inexpensive normalized Hamming distance for fast pair-wise comparison between a query and an image from the gallery. The challenge, however, is to obtain codes for which subjectively similar images fall within a tolerable search neighbourhood; towards this, the Cauchy probability function has been employed in earlier works (Cao et al., 2018). Similarity learning, in simple words, aims at generating similar hash codes for similar data, whereas dissimilar data should show considerable variation in their hash codes (Li et al., 2015). Here, similar may refer to visually, semantically or subjectively similar. Inspired by the robustness of convolutional neural networks (CNN) (LeCun et al., 1989) in solving several computer vision tasks, in this paper we propose to train a CNN framework to produce binary hash codes under constraints defining the Hamming distance neighbourhood and pair-wise relationships expected during comparison.
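As a concrete illustration of the pair-wise comparison described above, the normalized Hamming distance between two fixed-length binary codes can be computed as follows (a minimal NumPy sketch; the code length and bit values are illustrative, not from the paper):

```python
import numpy as np

def normalized_hamming(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of differing bits between two equal-length binary hash codes."""
    assert a.shape == b.shape
    return np.count_nonzero(a != b) / a.size

# Two illustrative 8-bit codes in {-1, +1}; 2 of the 8 bits differ.
query = np.array([1, -1, 1, 1, -1, -1, 1, -1])
item = np.array([1, -1, -1, 1, -1, -1, 1, 1])
```

Because the comparison reduces to an element-wise inequality and a popcount, it is far cheaper than pixel-wise or feature-wise difference measures over a large gallery.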
2. Related Work
Supervised learning of CNNs for hashing of images has proven better at generating hash codes. Such methods typically incorporate the class label information of an image to learn features characteristic of each class of objects; viz. in the case of search in fashion databases, different clothing types have characteristic features, such that shirts have features characteristically different from trousers or skirts, etc. Recent works employ pair-wise image labels for generating effective hash functions. Methods employing pair-wise similarity learning generally perform better (Cao et al., 2018; Liu et al., 2016b) than non-similarity based hashing (Lin et al., 2015), which is simpler and does not require any label information for understanding similarity.
Earlier approaches employing non-similarity matched hashing used image classification models, such as CNNs modified to generate binary codes from features extracted in the penultimate layers, with functions like sigmoid or tanh producing binary codes from continuous valued data. The retrieval task is typically performed in two stages, coarse and fine (Lin et al., 2015). The coarse stage retrieves a large set of candidates using inexpensive distance measures like the Hamming distance. In the fine stage, distance measures like the Euclidean distance are employed on the continuous valued features for finding the closest match.
Recent approaches in line with similarity matched hashing have employed deep Cauchy hashing. This approach predicts the similarity label using the Cauchy function and also uses a quantization loss to compensate for the relaxation introduced by the binary hash code generating function (Cao et al., 2018). The Cauchy function has proved more effective than the sigmoid in estimating optimal values of the similarity index and penalizing the losses obtained. The quantization loss ensures that the generated hash codes are close to the exact binary values (Cao et al., 2017), the limitation being the large number of epochs required to train these networks.
Although supervised hashing methods, especially those employing deep learnt hash functions, have shown remarkable performance in representing input data using binary codes, they require costly human-annotated labels for training. In the absence of large annotated datasets, their performance degrades significantly. Unsupervised hashing methods, on the other hand, address this issue by providing learning frameworks that do not require any labelled input. Semantic hashing is one of the early studies, which adopts a restricted Boltzmann machine (RBM) as a deep hash function (Hinton and Salakhutdinov, 2006).
3. Hashing Method for Subjectively Similar Search
In pairwise similarity based training, the input is a pair of images along with their similarity index, calculated from the shared attributes obtained from their annotations. In this approach, the Cauchy probability function (Cao et al., 2018) is used to predict the similarity label. Given a pair of images, when they belong to the same class they are regarded as similar, and when they belong to different classes they are regarded as dissimilar, as indicated by the similarity index. As can be seen in Fig. 2, this relationship is described in terms of view and pose variations across different types of clothes. Type 0 indicates all images of the same item under different poses or background variations. Type 1 indicates the same class of clothing item, viz. only shirts, but each of a different color. Type 2 represents different classes of clothing items, viz. shirts and shorts, etc. Subjective similarity holds for pairs of Type 0 and Type 1 and does not hold for pairs of Type 2, while relational similarity within a class holds for pairs of Type 0, does not hold for pairs of Type 1, and is not defined for Type 2. The complete approach is presented in Fig. 3(a) and described subsequently.
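The mapping from a pair's type to the two similarity labels can be sketched as follows. The symbols for the similarity indices are elided in the text above, so the 1/0/undefined labels here are an interpretation consistent with the description of Types 0, 1 and 2, not the paper's exact notation:

```python
def similarity_labels(pair_type: int):
    """Map a pair's type to (subjective, relational) similarity labels.

    Type 0: same item under pose/background variation -> similar under both notions.
    Type 1: same class, different item -> subjectively similar, relationally not.
    Type 2: different classes -> subjectively dissimilar; relational undefined.
    """
    if pair_type == 0:
        return 1, 1
    if pair_type == 1:
        return 1, 0
    if pair_type == 2:
        return 0, None
    raise ValueError("pair_type must be 0, 1 or 2")
```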
3.1. Architecture of feature representation learning and associated networks
A CNN is employed to learn the feature representation of an image. We employ a network similar to the one used in (Cao et al., 2018), which is a modified version of AlexNet (Krizhevsky et al., 2012). The first 7 learnable layers are preserved, and the resulting feature tensor is fed to a classifier which predicts the class of the clothing item as a one-hot vector. The classifier consists of 3 fully connected layers, with the output size matching the number of classes of clothes being looked into. The feature tensor is also fed through a fully-connected layer to generate the hashing tensor, which is subsequently passed through a squashing function to generate the binary hash code corresponding to the image. The discriminator network consists of 1 convolutional layer followed by 4 fully connected layers, with a sigmoid activation function used in the last layer.
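A minimal PyTorch sketch of the hashing branch described above (PyTorch being the framework used later in the experiments). The feature dimension and code length here are illustrative placeholders, not the paper's (elided) values:

```python
import torch
import torch.nn as nn

class HashingHead(nn.Module):
    """Fully-connected layer mapping a feature tensor to a K-element hashing
    tensor, squashed by tanh into (-1, 1) during training.
    feat_dim=4096 and k_bits=48 are assumed placeholder sizes."""
    def __init__(self, feat_dim: int = 4096, k_bits: int = 48):
        super().__init__()
        self.fc = nn.Linear(feat_dim, k_bits)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.fc(features))

def binarize(h: torch.Tensor) -> torch.Tensor:
    """Sign-based binarizer used at inference in place of the squashing function."""
    return torch.where(h >= 0, torch.ones_like(h), -torch.ones_like(h))
```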
3.2. Learning of the semantic hashing network
The approach for learning this network consists of the following 3 stages, executed in sequence within each epoch.
Stage 1: Given an image and its corresponding class label, the objective is to minimize the classification loss with respect to the prediction obtained from the classifier, thereby updating parameters in the feature network and the classifier, as illustrated in Fig. 3(b). This stage assists in learning features characteristic of different clothes. The classification loss is evaluated using the cross entropy (CE) loss between the predicted and ground-truth class labels.
Stage 2: Given a pair of images and their corresponding type identifier, the learnable parameters of the feature network and the hashing layer are updated to minimize the Cauchy losses, as illustrated in Fig. 3(c). The subjective similarity index is predicted using the Cauchy probability function (Cao et al., 2018), which maps the Hamming distance between the pair's hash codes through a scale parameter. Binary cross entropy (BCE) extended with the Cauchy probability function is used to calculate the loss, and is termed the Cauchy cross entropy loss (Cao et al., 2018).
The Cauchy cross entropy loss involves a scale hyper-parameter, and the normalized Hamming distance between two code vectors is computed over the bit length of the binary hash code. The subjective similarity loss is minimized over all possible image pairs. The relational similarity loss is minimized for image pairs of Type 0 and Type 1, and is not assessed for Type 2. Only the learnable parameters of the feature network and the hashing layer are updated in the process, with relative weights associated with the two Cauchy losses.
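The equations for the Cauchy formulation are not reproduced in the text, so the following NumPy sketch follows the formulation of (Cao et al., 2018): the predicted probability of a pair being similar decays with the Hamming distance through a scale parameter, and BCE is applied to that probability. Variable names and the default value of `gamma` are assumptions for illustration:

```python
import numpy as np

def cauchy_similarity(d_h: float, gamma: float = 1.0) -> float:
    """Cauchy probability of a pair being similar, given Hamming distance d_h;
    gamma is the scale parameter (default here is a placeholder)."""
    return gamma / (gamma + d_h)

def cauchy_cross_entropy(s: int, d_h: float, gamma: float = 1.0,
                         eps: float = 1e-8) -> float:
    """BCE on the Cauchy-predicted similarity; s is the ground-truth label."""
    p = cauchy_similarity(d_h, gamma)
    return -(s * np.log(p + eps) + (1 - s) * np.log(1 - p + eps))
```

As expected, a similar pair is penalized more the farther apart its codes are, and a dissimilar pair is penalized more the closer they are.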
The squashing function is used during training to generate near-binary hash codes. It is not used during inference, however, and is replaced directly with a sign-based binarizer.
Stage 3: Following Fig. 3(d), the hash codes generated for an input image pair are concatenated, with channel shuffling in place. When shuffling takes place, the channel ordering at the output of the shuffler is the reverse of that at its input; otherwise it remains the same. The task of the discriminator is to identify whether the shuffler performed a shuffling operation, and learning of its parameters minimizes the corresponding loss, evaluated with BCE. Since this stage is invoked only for Type 0 pairs, the objective being to have the two hash codes as closest Hamming distance neighbours, the learnable parameters of the feature network and the hashing layer are updated adversarially to maximally confuse the discriminator and increase its loss, scaled by a relative weight for the adversarial update.
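The shuffling operation of Stage 3 can be sketched as follows (an illustrative NumPy sketch; the discriminator's target label is simply the shuffle flag):

```python
import numpy as np

def shuffler(h_i: np.ndarray, h_j: np.ndarray, shuffle: bool):
    """Stack a Type 0 pair's hash codes as a 2-channel discriminator input,
    swapping channel order when shuffle is True. Returns the stacked input
    and the discriminator's target label (the shuffle flag)."""
    channels = np.stack([h_j, h_i]) if shuffle else np.stack([h_i, h_j])
    return channels, int(shuffle)
```

If the two codes become identical, shuffled and unshuffled inputs are indistinguishable, which is exactly the state the adversarial update drives towards.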
3.3. Retrieval as an inference problem
On completion of the training process, every image in the gallery set is converted to a corresponding binary hash code by processing it through the feature network, the hashing layer and a binarizer. Given a query image, it is first converted to its binary hash code. The normalized Hamming distance is then calculated for each query-gallery pair, and the images in the gallery set are ranked in ascending order of this distance. The images in the gallery set that have the least Hamming distance to the query image constitute the top retrievals, as illustrated in Fig. 1.
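The retrieval step above amounts to sorting the gallery by normalized Hamming distance to the query, for example (an illustrative NumPy sketch with {-1, +1} codes):

```python
import numpy as np

def rank_gallery(query_code: np.ndarray, gallery_codes: np.ndarray):
    """Rank gallery items by ascending normalized Hamming distance to the query.
    Returns the gallery indices in ranked order and the sorted distances."""
    dists = (np.count_nonzero(gallery_codes != query_code, axis=1)
             / query_code.size)
    order = np.argsort(dists, kind="stable")
    return order, dists[order]
```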
The performance of retrieval is evaluated based on the standard metric of mean average precision (mAP). Given the query set, the average precision (AP) for a query is calculated over its top-ranked retrievals, corresponding to its set of closest neighbours in the gallery. An indicator function takes the value 1 if the corresponding ranked retrieved image is relevant to the query image, and 0 otherwise, and the precision for the top retrieved images is computed from the ground truth relevance between the query image and the retrieved images from the gallery, up to the closest neighbours considered. The mean of the AP values over the query set is reported as the mAP value of retrieval. Mean AP over the top retrievals is calculated for a query if at least one image in the top retrieved results from the gallery belongs to the same class as the query, and mAP is computed as the mean over all such queries.
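The AP and mAP computation described above can be sketched as follows, assuming a 0/1 relevance flag per ranked retrieval (an interpretation of the elided indicator-function definition; function names are illustrative):

```python
import numpy as np

def average_precision(relevance) -> float:
    """AP over a ranked list of 0/1 relevance flags for the top-n retrievals."""
    relevance = np.asarray(relevance, dtype=float)
    hits = np.cumsum(relevance)                    # running count of relevant hits
    ranks = np.arange(1, len(relevance) + 1)       # 1-based rank positions
    if hits[-1] == 0:
        return 0.0
    # precision@k averaged over the positions of relevant retrievals
    return float(np.sum((hits / ranks) * relevance) / hits[-1])

def mean_average_precision(relevance_lists) -> float:
    """mAP: mean of per-query AP values."""
    return float(np.mean([average_precision(r) for r in relevance_lists]))
```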
The performance of our scheme is experimentally validated using the MVC Dataset (Liu et al., 2016a), which is popularly used for benchmarking view-invariant clothing item retrieval and clothing attribute prediction. The version of the dataset used here consists of fixed-size images and is provided as two subsets, for Men's and Women's clothing items. The images are further manually filtered to remove wrong labelling and corrupted files. Men's clothing items constitute 8 classes, viz. coats, pants, jeans, sleep wear, sweaters, swim wear, shirts tops, and underwear. Women's clothing items constitute 9 classes, viz. coats, jeans, pants, dresses, sleep wear, sweaters, swimwear, underwear, and tops. The distribution of these items is detailed in Fig. 4. The images are distributed into Train, Test, Gallery and Query sets; the Train and Test sets together are used during the training process. Performance validation is performed on the Query and Gallery sets, where alternate poses of a clothing item present in the Query set make up the Gallery set, but there are no common images between these sets, and all 4 sets are non-intersecting, as illustrated in Fig. 5.
The training was carried out on men's clothing items and women's clothing items separately, and on both combined. In the men's clothing item experiments, the network is trained using randomly selected images from different classes paired with other randomly selected images, yielding pairs of Type 2, Type 1 and Type 0 created from the training dataset. The loss functions are defined to be able to handle the resulting data imbalance. The women's clothing item experiments and the combined clothing item experiments similarly use pairs of Type 2, Type 1 and Type 0 created from their respective training datasets.
|Model|mAP@10|mAP@top-1|mAP@top-3|mAP@top-5|mAP@top-15 ( hits)|mAP@top-15 ( hits)|
|---|---|---|---|---|---|---|
|Vanilla (Cao et al., 2018)|53.26|42.46|68.9|81.85|65.29|33.49|
|Model|mAP@10|mAP@top-1|mAP@top-3|mAP@top-5|mAP@top-15 ( hits)|mAP@top-15 ( hits)|
|---|---|---|---|---|---|---|
|Vanilla (Cao et al., 2018)|30.48|19.04|41.42|56.67|25.55|5.77|
|Model|mAP@10|mAP@top-1|mAP@top-3|mAP@top-5|mAP@top-15 ( hits)|mAP@top-15 ( hits)|
|---|---|---|---|---|---|---|
|Vanilla (Cao et al., 2018)|25.12|13.04|32.46|47.56|13.27|1.01|
Pretrained weights of AlexNet (Krizhevsky et al., 2012) trained for the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) (Russakovsky et al., 2015) task are used to initialize the feature network; the classifier, hashing layer and discriminator were initialized with random weights. The input images were resized using bilinear interpolation to match the input size requirement of the network, and were horizontally flipped at random during training to induce view invariance in the learned model. The Adam optimizer (Kingma and Ba, 2014) was used for learning the parameters of all the networks. Training continued until the losses and accuracy trends across epochs were observed to saturate. The relative loss weights were selected by observing the best performance while varying them following (Cao et al., 2018).
Experiments were performed on a server with 2x Intel Xeon 4110 CPUs, 12x8 GB DDR4 ECC registered RAM, a 4 TB HDD, 4x Nvidia GTX 1080Ti GPUs with 11 GB RAM each, and Ubuntu 16.04 LTS OS. The algorithms were implemented in Anaconda Python 3.7 with PyTorch 1.0.
The experimental validation was performed separately for men's clothing items, women's clothing items and combined clothing items. A qualitative comparison of the performance in retrieving men's clothing items is presented in Fig. 6, where each row corresponds to a class in the dataset, the first column in each row indicates a representative query image, and the subsequent 7 columns present the retrieved images. The results are quantitatively summarized in Table 1 as per the measures detailed in Sec. 3.3. For mAP@top-15, a successful hit is considered only if hits occur within the top retrieved results. The different baselines considered include the following. Vanilla (Cao et al., 2018) is directly implemented as per prior art. Deep multi-stage Cauchy (DMC) is implemented with only the feature network and the hashing layer, learning to minimize only the two Cauchy losses. DMC-C includes the classifier along with the configuration of DMC and also minimizes the classification loss. DMC-CD includes the discriminator along with DMC-C; while the optimizer on the discriminator works to minimize its BCE loss, the optimization of the feature network and hashing layer maximizes it as an adversarial learning approach.
Similarly, the qualitative performance in retrieving women's clothing items is presented in Fig. 7 and quantitatively summarized in Table 2, and the retrieval performance on combined clothing items is summarized in Table 3. Across each of the sets of experiments it can be clearly observed that the inclusion of a classifier, the Cauchy cross entropy losses and finally a discriminator for adversarial learning significantly improves the performance of retrieval by enabling the generation of characteristic binary hash codes.
4.4.1. Learning with two Cauchy cross entropy losses
Compared to learning with only the subjective similarity loss, which is similar to the Vanilla (Cao et al., 2018) approach of increasing the Hamming distance based separation margin between samples of Type 2, additionally including the relational similarity loss increases the separation margin between the hash codes of samples of Type 1. This can be clearly observed in Fig. 8(a), where across epochs of training the separation between samples of Type 2 is very high using only the subjective loss, but no significant difference is observed between samples of Type 1 and Type 0; the latter becomes possible with the inclusion of the relational loss, as observed in Fig. 8(b). This is due to the increase in the spectral spread of the generated hash codes, as can be observed in the t-SNE plots in Fig. 9. Use of DMC forces an increase in spectral spread, away from the focally concentrated manifold distribution observed in the vanilla implementation.
4.4.2. Learning with a classifier
The feature learning network is generally initialized with weights from a network trained on the ImageNet classification task and is suited to represent natural image characteristics. While the features so obtained may not be characteristic enough to discriminate the different classes of images present, including the classifier and optimizing its weights along with those of the feature network while minimizing the classification loss helps to obtain features characteristic of different classes of clothing items. This improves performance by yielding characteristic features for each class of clothing item, and these features tend to exhibit clustering behaviour, as seen with DMC-C in Fig. 9.
4.4.3. Adversarial learning with a discriminator
One of the desirable aspects of the generated hash codes is that they are pose and view invariant for the same item. Essentially this implies that all images in Fig. 5 should have the same hash code. We achieve this by using the discriminator to identify whether the first channel corresponds to the first image's hash code and the second to the second's, or vice-versa. The purpose of the adversarial learning is to optimize the weights of the feature network and the hashing layer such that the discriminator is maximally confused, leading to an increase in its loss. This leads to the assignment of similar binary hash codes for items of Type 0. The t-SNE plot in Fig. 9 exhibits the close clustering achieved with DMC-CD.
This work presents a Deep Multi Cauchy Hashing framework and its variants to perform view invariant, fast, subjective search in fashion inventory with high accuracy. In this direction, the work establishes a comparison between the baseline DMC model and its variants in Tables 1, 2 and 3. The proposed scheme maximizes the Hamming distance between semantically dissimilar images and minimizes the same between semantically similar images; the formation of discriminative clusters shown in Fig. 9 justifies this claim. Extensive experiments show that the model achieves state-of-the-art performance, as seen in the results obtained on the MVC Dataset in Figs. 6 and 7. With the rapid expansion of e-commerce, the proposed technique can be essential in retrieval tasks not limited to just the fashion industry.
- Cao et al. (2018) Yue Cao, Mingsheng Long, Bin Liu, and Jianmin Wang. 2018. Deep Cauchy Hashing for Hamming Space Retrieval. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Cao et al. (2017) Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Philip S. Yu. 2017. HashNet: Deep Learning to Hash by Continuation. CoRR abs/1702.00758 (2017). arXiv:1702.00758
- Hinton and Salakhutdinov (2006) G. E. Hinton and R. R. Salakhutdinov. 2006. Reducing the Dimensionality of Data with Neural Networks. Science 313, 5786 (2006), 504–507. https://doi.org/10.1126/science.1127647
- Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. CoRR abs/1412.6980 (2014). arXiv:1412.6980
- Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1 (NIPS’12). Curran Associates Inc., USA, 1097–1105.
- LeCun et al. (1989) Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. 1989. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput. 1, 4 (Dec. 1989), 541–551. https://doi.org/10.1162/neco.1989.1.4.541
- Li et al. (2015) Wu-Jun Li, Sheng Wang, and Wang-Cheng Kang. 2015. Feature Learning based Deep Supervised Hashing with Pairwise Labels. CoRR abs/1511.03855 (2015). arXiv:1511.03855
- Lin et al. (2015) K. Lin, H. Yang, J. Hsiao, and C. Chen. 2015. Deep learning of binary hash codes for fast image retrieval. In 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 27–35. https://doi.org/10.1109/CVPRW.2015.7301269
- Liu et al. (2016b) H. Liu, R. Wang, S. Shan, and X. Chen. 2016b. Deep Supervised Hashing for Fast Image Retrieval. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2064–2072. https://doi.org/10.1109/CVPR.2016.227
- Liu et al. (2016a) Kuan-Hsien Liu, Ting-Yen Chen, and Chu-Song Chen. 2016a. MVC: A Dataset for View-Invariant Clothing Retrieval and Attribute Prediction. In ICMR.
- Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115, 3 (2015), 211–252. https://doi.org/10.1007/s11263-015-0816-y
- Salakhutdinov and Hinton (2009) Ruslan Salakhutdinov and Geoffrey Hinton. 2009. Semantic hashing. International Journal of Approximate Reasoning 50, 7 (2009), 969–978.