Deep Saliency Hashing

07/04/2018 ∙ by Sheng Jin, et al.

In recent years, hashing methods have proved efficient for large-scale Web media search. However, existing general hashing methods have limited discriminative power for describing fine-grained objects that share similar overall appearance but have subtle differences. To solve this problem, we introduce, for the first time, an attention mechanism into the learning of hashing codes. Specifically, we propose a novel deep hashing model, named deep saliency hashing (DSaH), which automatically mines salient regions and simultaneously learns semantic-preserving hashing codes. DSaH is a two-step end-to-end model consisting of an attention network and a hashing network. Our loss function contains three basic components: the semantic loss, the saliency loss, and the quantization loss. The saliency loss guides the attention network to mine discriminative regions from pairs of images. We conduct extensive experiments on both fine-grained and general retrieval datasets for performance evaluation. Experimental results on Oxford Flowers-17 and Stanford Dogs-120 demonstrate that DSaH performs best on the fine-grained retrieval task, beating the previous best method (DPSH) by approximately 12%, and that DSaH also outperforms existing hashing methods on general datasets, including CIFAR-10 and NUS-WIDE.




1 Introduction

Searching for content-relevant images in a large-scale dataset is widely required in practical applications. Such retrieval tasks remain challenging because of the large computational cost and the high accuracy requirement. To address both efficiency and effectiveness, a great number of hashing methods have been proposed to map images to binary codes. These hashing methods fall into two categories: data-independent [1] and data-dependent [3, 7]. Since data-dependent methods preserve the semantic structure of the data, they usually achieve better performance.

Figure 1: The main idea of our method. We propose a deep hashing method for the retrieval of fine-grained objects which share similar appearances. To produce more discriminative hashing codes, our method highlights the discriminative regions of input images using the attention network.

Data-dependent methods can be further divided into three categories: unsupervised methods, semi-supervised methods, and supervised methods. Compared to the former two categories, supervised methods use semantic information in the form of reliable class labels to improve performance. Many representative works have been proposed along this direction, e.g., Binary Reconstruction Embedding [14], Column Generation Hashing [17], Kernel-based Supervised Hashing [22], Minimal Loss Hashing [27], Hamming Distance Metric Learning [28], and Semantic Hashing [30]. The success of supervised methods demonstrates that class information can dramatically improve the quality of hashing codes. However, these shallow hashing methods use hand-crafted features to represent images and generate the hashing codes. Thus, the quality of the hashing codes depends heavily on feature selection, which is the most crucial limitation of such methods.

It is hard to ensure that these hand-crafted features preserve sufficient semantic information. To overcome this limitation, Xia et al. [37] introduce deep learning to hashing (CNNH), which performs feature learning and hashing code learning simultaneously. Following this work, many deep hashing methods have been proposed, including Deep Cauchy Hashing [3], Deep Triplet Quantization [20], Deep Supervised Hashing [21], and Deep Semantic Ranking Hashing [45]. Extensive experiments demonstrate that deep hashing methods achieve significant improvements.

Nevertheless, existing deep hashing methods are mostly studied and validated on general datasets, e.g., CIFAR-10 [12] and NUS-WIDE [6]. These datasets contain only a few categories with a large number of images per class. Besides, different classes have significant differences in appearance, which makes the problem simpler than real-world cases. To support practical applications, two crucial issues still need to be considered. First, a robust hashing method should be able to distinguish fine-grained objects. The major challenge is that fine-grained objects share similar overall appearance, so the subtle inter-class differences matter more than the intra-class variance. Second, hashing methods should be able to support a large number of categories. Different from CIFAR-10, existing fine-grained datasets consist of many more categories with a small number of images per class. This raises another challenge: how to generate hashing codes for many more categories with relatively less data.

Similar challenges exist in other fields of computer vision, e.g., fine-grained classification [36] and person re-id [46]. In these fields, representative methods deal with the above challenges by mining discriminative parts for each category, either manually [36] or automatically [8, 46]. The deep methods proposed by [8] and [46] automatically mine salient regions and achieve remarkable improvements over traditional state-of-the-art methods. Inspired by such methods, we propose a novel deep hashing method for fine-grained retrieval, termed Deep Saliency Hashing (DSaH), to solve the two above-mentioned challenges jointly.

The main idea of the proposed saliency hashing is depicted in Fig. 1. Specifically, DSaH is an end-to-end CNN model that simultaneously mines discriminative parts and learns hashing codes. DSaH contains three components. The first component, named the attention module, is a fully convolutional network that generates a saliency image from the original image. The second component, named the hashing module, is a deep network based on the VGG-16 model that maps images to hashing codes. The third component is the loss function, which contains 1) a novel semantic loss to measure the semantic quality of hashing codes based on pairwise labels; 2) a saliency loss over image quadruples to mine discriminative regions; and 3) a quantization loss to learn the optimal hashing codes from the binary-like codes. All the components are integrated into a unified framework. We iteratively update the attention network and the hashing network to learn better hashing codes. The main contributions of DSaH are three-fold:

  • We propose a deep hashing method that integrates a saliency mechanism into hashing. To the best of our knowledge, DSaH is the first attentional deep hashing model specifically designed for fine-grained tasks.

  • A novel saliency loss over image quadruples is proposed to guide the attention network in the automatic mining of discriminative regions. Experimental results demonstrate that fine-grained categories can be better distinguished based on these salient regions.

  • Experimental results on both general hashing datasets and fine-grained retrieval datasets demonstrate the superior performance of our method in comparison with many state-of-the-art hashing methods.

Figure 2: The proposed framework for deep saliency hashing (DSaH). DSaH is comprised of three components: (1) an attention network based on FCN-16s for learning a saliency image, (2) a hashing network based on VGG-16 for learning hashing codes, (3) a set of loss functions including a semantic loss (Sem-L), a saliency loss (Sal-L), and a quantization loss (Qua-L) for optimization. In the training stage, the attention network and the hashing network are trained iteratively to mine discriminative regions (Step 1) and learn semantic-preserving hashing codes (Step 2). For the attention network, the whole set of loss functions is used. For the hashing network, we only use the semantic loss and the quantization loss.

2 Related Work

We introduce the most related works from two aspects: hashing code learning and discriminative part mining.

Deep Hashing

In the last few years, deep convolutional neural networks [37] have been employed for supervised hashing. Following [37], some representative works have been proposed [16, 19, 21, 45]. These deep methods have proved effective for general object retrieval, where different categories have significant visual differences (e.g., CIFAR-10). Recently, deep hashing methods have emerged as a promising solution for efficient person re-id [43, 48]. Different from general object retrieval, human bodies share similar appearance with subtle differences in some salient regions. Zhang et al. [43] propose DRSCH, which introduces hashing into person re-id. DRSCH is a triplet-based model and encodes the entire person image into hashing codes without considering part-level semantics. PDH [48] integrates a part-based model into the triplet model and achieves significant improvements. However, the part partition strategy of PDH is specified based on human body structure. Since typical fine-grained datasets (e.g., CUB Bird [35]) exhibit huge variations in scale, multiple instances, etc., the part partitioning strategy of PDH is not suitable for non-human fine-grained objects. In this paper, we introduce hashing into fine-grained retrieval, where an attention network is embedded to mine salient regions for accurate code learning.

Mining Discriminative Regions The key challenge in learning accurate hashing codes for fine-grained objects is to locate the discriminative regions in images. Facing a similar challenge, researchers have proposed various salient region localization approaches [34, 39, 47] to boost the performance of fine-grained image classification. Previous works locate the salient regions either by unsupervised methods [24, 38] or by leveraging manual part annotations [40, 41]. Following these works, recent hashing methods [2, 32] locate salient regions in an unsupervised manner to improve performance. DPH [2] uses GBVS [10] to calculate a saliency score for each pixel, and a series of salient regions is then generated by increasing the threshold values. Shen et al. propose a cross-modal hashing method, named TVDB [32], which adopts RPN [29] to detect salient regions and encodes the regional information of the image as well as the semantic dependencies and cues between words by two modality-specific networks. However, these hashing methods use off-the-shelf models to locate salient regions, which might not be accurate for new images or specific tasks. Instead, our model trains a saliency prediction network jointly with the hashing network, and the two modules are optimized together toward a unified objective.

Recent methods [24, 44] try to discover discriminative regions automatically with deep networks. These deep methods do not require labeling information such as part masks or boxes, but only use class label information. Zhao et al. [46] use similarity information (whether a pair of person images depicts the same person or not) to train a part model specifically for person matching. Similar intuitions can be found in recent fine-grained classification methods. Fu et al. [8] propose a novel recurrent attention convolutional neural network, named RA-CNN, to discover salient regions and learn region-based feature representations recursively. The basic idea of this method is that salient region localization and fine-grained feature learning are mutually correlated and can thus reinforce each other. Motivated by [8, 46], we adopt a novel attention network to automatically mine the salient region for learning better hashing codes. To the best of our knowledge, this is the first time that an attention mechanism has been formally employed for fine-grained hashing.

3 Deep Saliency Hashing

Fig. 2 shows the proposed DSaH framework. Our method includes three main components. The first component is an attention network, which maps each input image into a saliency image. The second component is a hashing network, which learns binary-like codes from an original image and its saliency image. The third component is a set of loss terms, including a pairwise semantic loss, a saliency loss over image quadruples, and a quantization loss. The semantic loss requires the hashing codes learned from each image pair to preserve semantic information. The saliency loss guides the attention network to highlight the salient regions of original images. The quantization loss measures the gap between the binary-like codes and the hashing codes after binarization. The whole cost function is written as below:

$$\mathcal{L} = \mathcal{L}_{sem} + \alpha \mathcal{L}_{sal} + \beta \mathcal{L}_{qua},$$

where $\mathcal{L}_{sem}$ represents the semantic loss, $\mathcal{L}_{sal}$ represents the saliency loss, $\mathcal{L}_{qua}$ represents the quantization loss, and $\alpha$, $\beta$ are weighting hyper-parameters.

3.1 The Attention Network

The attention network is proposed to map the original image $x$ to the saliency image $x^s$: $x^s = \mathcal{A}(x)$. This module includes two stages. In the first stage, we assign a saliency value to each pixel of the original image. We then obtain the saliency image by highlighting the salient pixels.

As described above, a dense prediction problem needs to be solved in the first stage. A pixel location in image $x$ is denoted as $(i, j)$, and the learned saliency value of that pixel is denoted as $M(i, j)$. Long et al. [25] prove that FCN is effective for dense prediction, mapping each pixel of an image to a label vector.

Motivated by FCN [25], we propose an FCN-based attention network, as illustrated in Fig. 2, to discover the salient region automatically. Different from semantic segmentation approaches, our method does not predict a label vector but assigns a saliency value to each pixel.

In the first stage, the proposed FCN-based attention network maps each pixel of the image to a saliency value:

$$M(i, j) = \mathcal{A}(x)(i, j).$$

To regularize the output, we further normalize the saliency map so its values lie between 0 and 1:

$$\hat{M}(i, j) = \frac{M(i, j) - \min_{i,j} M}{\max_{i,j} M - \min_{i,j} M}.$$

Then the saliency image is computed as the element-wise (Hadamard) product of the original image and its saliency map:

$$x^s = x \odot \hat{M}.$$
We then encode the saliency image with the hashing network and obtain the saliency loss defined in Eq. 13. By iteratively updating the parameters with the saliency loss, the attention network is gradually fine-tuned to mine discriminative regions automatically.
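The two stages above can be sketched in a few lines of NumPy. This is an illustration, not the authors' implementation: the paper only states that saliency values are normalized to [0, 1], and min-max scaling is assumed here as one plausible choice.

```python
import numpy as np

def saliency_image(image, saliency_map):
    # Min-max normalize the predicted saliency values to [0, 1]
    # (assumption: the paper does not specify the normalization).
    m = saliency_map.astype(float)
    m = (m - m.min()) / (m.max() - m.min() + 1e-8)
    # Element-wise product highlights salient pixels; the 2-D map
    # is broadcast across the colour channels of the H x W x 3 image.
    return image * m[..., None]

img = np.ones((4, 4, 3))        # toy "image"
sal = np.zeros((4, 4))
sal[1:3, 1:3] = 5.0             # high response in the centre
out = saliency_image(img, sal)  # background pixels are suppressed to ~0
```

The broadcastable 2-D map means the same attention weight is applied to every colour channel of a pixel, which is exactly what "highlighting" a region requires.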

3.2 The Hashing Network

As shown in Fig. 2, we directly adopt a pre-trained VGG-16 [33] as the base model of the hashing network. Raw image pixels, from either the original image or the saliency image, are the input of the hashing model. The output layer of VGG-16 is replaced by a hashing layer whose dimension is set to the length of the required hashing code. The hashing network is trained with the semantic loss (Eq. 6) and the quantization loss (Eq. 10).

3.3 Loss Functions

Semantic Loss Similar to other hashing methods, our goal is to learn efficient binary codes for images: $b_i \in \{-1, +1\}^k$, where $k$ denotes the number of hashing bits. Since discrete optimization is difficult for deep networks, we first ignore the binary constraint and train the network on binary-like codes, denoted as $u_i$. We then obtain the optimal hashing codes from $u_i$. We define a pairwise semantic loss to ensure the binary-like codes preserve relevant semantic information. Since each image in the dataset has a unique class label, image pairs can be labeled as similar or dissimilar:

$$s_{ij} = \begin{cases} 1, & \text{if } x_i \text{ and } x_j \text{ share the same label}, \\ 0, & \text{otherwise}, \end{cases} \qquad (5)$$

where $s_{ij}$ denotes the pairwise label of images $x_i$, $x_j$. To preserve semantic information, the binary-like codes of similar images should be as close as possible, while those of dissimilar images should be as far apart as possible. Since hashing methods use the Hamming distance to measure similarity in the testing phase, we use the inner product $u_i^{\top} u_j$ to measure the distance between images $x_i$ and $x_j$. Given $u_i \in [-1, +1]^k$, the inner product of $u_i$ and $u_j$ lies in the range $[-k, k]$. Thus we adopt the linear transformation $\frac{1}{2}\left(1 + \frac{1}{k} u_i^{\top} u_j\right)$ to map the inner product to $[0, 1]$. The result of this linear transformation is regarded as an estimate of the pairwise label. The semantic loss of original image pairs is written as below:

$$\mathcal{L}_{sem}^{o} = \sum_{i,j} \left( s_{ij} - \frac{1}{2}\left(1 + \frac{1}{k} u_i^{\top} u_j\right) \right)^2. \qquad (6)$$
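A minimal NumPy sketch of this pairwise semantic loss, assuming codes with entries in [-1, 1] and labels in {0, 1} (illustrative only, not the authors' implementation):

```python
import numpy as np

def semantic_loss(U, S):
    """U: (n, k) binary-like codes with entries in [-1, 1].
    S: (n, n) pairwise labels, 1 for same class, 0 otherwise.
    The inner product u_i^T u_j lies in [-k, k]; (1 + u_i^T u_j / k) / 2
    rescales it to [0, 1] so it can be compared with the pairwise label."""
    n, k = U.shape
    est = (1.0 + (U @ U.T) / k) / 2.0   # estimated pairwise similarity
    return float(np.sum((S - est) ** 2))

# Identical codes for the similar pair, opposite codes for the
# dissimilar pairs: the estimate matches the labels exactly.
U = np.array([[1.0, 1.0], [1.0, 1.0], [-1.0, -1.0]])
S = np.array([[1.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
loss = semantic_loss(U, S)   # zero for this perfectly separated toy case
```

Note how the rescaling makes the inner product directly comparable to the 0/1 label, so the squared error simultaneously pulls similar codes together and pushes dissimilar codes apart.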


Our method also requires hashing codes to learn semantic information from the salient region. To achieve this, the hashing codes of the saliency images must also preserve semantic information. Specifically, we use the attention network to map the original image pairs $(x_i, x_j)$ into saliency image pairs $(x_i^s, x_j^s)$. Similar to Eq. 6, the semantic loss of saliency image pairs is defined as below:

$$\mathcal{L}_{sem}^{s} = \sum_{i,j} \left( s_{ij} - \frac{1}{2}\left(1 + \frac{1}{k} (u_i^{s})^{\top} u_j^{s}\right) \right)^2, \qquad (7)$$

where $u_i^{s}$ denotes the binary-like code of saliency image $x_i^s$.
To learn the saliency image, we propose an attention network. The key idea is that the hashing codes learned from saliency images should be more discriminative. A saliency loss is defined to guide the attention model to highlight the salient regions of the original image. First, the proposed attention network outputs the saliency image $x^s$ from the original image $x$: $x^s = \mathcal{A}(x)$. Then the original image and its saliency image are mapped to binary-like codes by the hashing model, denoted as $u$ and $u^s$, respectively.

Saliency Loss Similar to the semantic loss, we use image pairs to define the saliency loss. The original images $x_i$ and $x_j$ form the original image pair; their saliency images $x_i^s$ and $x_j^s$ form the saliency image pair. The original image pair and its saliency image pair together constitute an image quadruple. The binary-like codes of saliency images $x_i^s$ and $x_j^s$ should be more similar (or more dissimilar) than those of the original pair $x_i$ and $x_j$, according to whether images $x_i$ and $x_j$ share the same label or not. As in Eq. 6, the linear transformation of the inner product is used to approximate the pairwise label. We denote the discrepancy between the pairwise label and the value estimated from the original (saliency) image pair as $d^{o}_{ij}$ ($d^{s}_{ij}$):

$$d^{o}_{ij} = \left| s_{ij} - \frac{1}{2}\left(1 + \frac{1}{k} u_i^{\top} u_j\right) \right|, \qquad d^{s}_{ij} = \left| s_{ij} - \frac{1}{2}\left(1 + \frac{1}{k} (u_i^{s})^{\top} u_j^{s}\right) \right|. \qquad (8)$$

As described before, the saliency loss with respect to the quadruples of images is written as:

$$\mathcal{L}_{sal} = \sum_{i,j} \max\left(0,\; m - \left(d^{o}_{ij} - d^{s}_{ij}\right)\right), \qquad (9)$$

where $m$ is a margin threshold. When the value of $d^{o}_{ij} - d^{s}_{ij}$ is below the margin threshold $m$, the saliency loss penalizes the attention network so that it better highlights the salient regions.
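The hinge behaviour of the saliency loss for one quadruple can be sketched as follows (an assumed form consistent with the description above; the summation over pairs is omitted for clarity):

```python
def saliency_loss_pair(d_orig, d_sal, margin):
    """Hinge term for one image quadruple. d_orig (d_sal) is the discrepancy
    between the pairwise label and the similarity estimated from the original
    (saliency) image pair. The saliency codes should fit the label better
    than the original codes by at least `margin`; otherwise a penalty
    equal to the shortfall is paid."""
    return max(0.0, margin - (d_orig - d_sal))

# Saliency codes are much better than the original codes: no penalty.
ok = saliency_loss_pair(0.9, 0.1, 0.5)
# Improvement (0.1) falls below the margin (0.5): the loss is positive,
# which pushes the attention network to find more salient regions.
bad = saliency_loss_pair(0.4, 0.3, 0.5)
```

The margin prevents the degenerate solution in which the saliency image is identical to the original image, since then d_orig - d_sal would be zero and the hinge would stay active.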

Quantization Loss The binary constraint on hashing codes makes it intractable to train an end-to-end deep model with the backpropagation algorithm. As discussed in [13], widely-used relaxation schemes based on non-linear functions, such as the sigmoid function, inevitably slow down or even prevent the convergence of the network. To overcome this limitation, we adopt a similar regularizer on the real-valued network outputs to approach the desired binary codes. The quantization loss is written as:

$$\mathcal{L}_{qua} = \sum_{i} \left\| b_i - u_i \right\|_2^2, \qquad (10)$$

where $b_i \in \{-1, +1\}^k$. Since $b_i$ only appears in the quantization loss, we minimize this loss to obtain the optimal hashing codes. Obviously, the sign of $b_i$ should be the same as that of the binary-like code $u_i$. Thus the hashing codes can be directly obtained as:

$$b_i = \mathrm{sign}(u_i). \qquad (11)$$
3.4 Alternating Optimization

The overall training objective of DSaH integrates the pairwise semantic loss defined in Eq. 6, the saliency loss over image quadruples defined in Eq. 9, and the quantization loss defined in Eq. 10. DSaH is a two-stage end-to-end deep model consisting of an attention network, i.e. $\mathcal{A}$, for automatic saliency image estimation, and a shared hashing network, i.e. $\mathcal{H}$, for discriminative hashing code generation. As shown in Algorithm 1, we train the attention network and the hashing network iteratively.

In particular, for the shared hashing model, we update its parameters according to the following overall loss:

$$\mathcal{L}_{hash} = \mathcal{L}_{sem}^{o} + \mathcal{L}_{sem}^{s} + \beta \mathcal{L}_{qua}. \qquad (12)$$

By minimizing this term, the shared hashing model is trained to preserve the relative similarity of original image pairs and that of saliency image pairs.

The attention network is trained with the following loss:

$$\mathcal{L}_{att} = \mathcal{L}_{sem}^{s} + \alpha \mathcal{L}_{sal} + \beta \mathcal{L}_{qua}. \qquad (13)$$

By minimizing this term, the attention network is trained to mine salient and semantic-preserving regions of the input image, leading to more discriminative hashing codes.


  Input: training set with corresponding class labels; total epochs $T$ of deep optimization.
0:  Output: hashing function $\mathcal{H}$; attention function $\mathcal{A}$.
1:  For the entire training set, construct the pairwise label matrix according to Eq. 5.
2:  for $t = 1, \dots, T$ do
3:     Compute the hashing codes according to Eq. 16.
4:     Update the attention network $\mathcal{A}$ according to Eq. 13.
5:     Update the hashing network $\mathcal{H}$ according to Eq. 12.
6:  end for
7:  return $\mathcal{H}$, $\mathcal{A}$.
Algorithm 1 Deep Saliency Hashing
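The alternating scheme of Algorithm 1 can be summarized in a short schematic loop (not the authors' code; `attention_step` and `hashing_step` are hypothetical callables that perform one optimization step on the corresponding network and return its loss):

```python
# Alternating optimization: each epoch updates the attention network
# with its full loss, then the shared hashing network with the
# semantic + quantization losses only.
def train_alternating(epochs, attention_step, hashing_step):
    history = []
    for _ in range(epochs):
        att_loss = attention_step()    # Step 1: semantic + saliency + quantization
        hash_loss = hashing_step()     # Step 2: semantic + quantization only
        history.append((att_loss, hash_loss))
    return history

# Toy usage with constant-loss stand-ins for the two update steps.
log = train_alternating(3, lambda: 1.0, lambda: 2.0)
```

Keeping the two updates in this fixed order mirrors the paper's observation that the attention network must be trained first, so the hashing network never sees semantically irrelevant saliency images.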

3.5 Out-of-Sample Extension

After the model is trained, we can use it to encode an input image into a $k$-bit binary code. Since the deep saliency hashing model consists of two networks, the image $x$ is first mapped to the saliency image $x^s$:

$$x^s = \mathcal{A}(x). \qquad (14)$$

Then the hashing network maps $x^s$ to a binary-like code:

$$u = \mathcal{H}(x^s). \qquad (15)$$

As discussed for the quantization loss (Eq. 11), we adopt the sign function to produce the hashing code:

$$b = \mathrm{sign}(u). \qquad (16)$$
4 Experiments

In order to test the performance of our proposed DSaH

method, we conduct experiments on two widely used image retrieval datasets, i.e. CIFAR-10 and NUS-WIDE, to verify the effectiveness of our method for the general hashing task. Then we conduct experiments on three fine-grained datasets: Oxford Flower-17, Stanford Dogs-120 and CUB Bird to prove that (1) the discriminative region of images can improve retrieval performance of hashing codes on fine-grained cases, and (2) the attention model can effectively mine the saliency region of images.

4.1 Dataset and Evaluation Metric

CIFAR-10 [12] consists of 60,000 32×32 images in 10 classes (6,000 images per class). We randomly select 100 images per class as the test set and 500 images per class from the remaining images as the training set.

NUS-WIDE [6] is a multi-label dataset including nearly 270k images with 81 semantic concepts. Following [23] and [37], we select the 21 most frequent concepts, each of which is associated with at least 5,000 images. We sample 100 images per concept to form the test set and 500 images per concept from the remaining images to form the training set.

Oxford Flower-17 [11] consists of 1,360 images belonging to 17 mutually exclusive classes, with 80 images per class. The dataset is divided into three parts: a train set, a test set, and a validation set of 40, 20, and 20 images per class, respectively. We combine the validation set and the train set to form the new training set.

Stanford Dogs-120 [26] consists of 20,580 images in 120 classes, each containing about 150 images. The dataset is divided into a train set (100 images per class) and a test set (8,580 images in total).

CUB Bird [35] includes 11,788 images in 200 mutually exclusive classes. The dataset is divided into a train set (5,794 images) and a test set (5,994 images).

We mainly use Mean Average Precision (MAP) and Precision-Recall curves for quantitative evaluation.
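For reference, MAP under Hamming ranking can be computed as follows. This is an illustrative NumPy sketch of the standard metric; evaluation details such as tie-breaking may differ from the authors' protocol.

```python
import numpy as np

def mean_average_precision(query_codes, db_codes, query_labels, db_labels):
    """MAP under Hamming ranking for ±1 hash codes. For k-bit ±1 codes,
    the Hamming distance relates to the inner product by d = (k - q·b) / 2."""
    k = db_codes.shape[1]
    aps = []
    for q, ql in zip(query_codes, query_labels):
        dist = (k - db_codes @ q) / 2.0
        order = np.argsort(dist, kind="stable")      # rank database by distance
        rel = (db_labels[order] == ql).astype(float) # 1 where the label matches
        if rel.sum() == 0:
            continue
        prec_at = np.cumsum(rel) / np.arange(1, len(rel) + 1)
        aps.append(float((prec_at * rel).sum() / rel.sum()))
    return float(np.mean(aps))

# Toy example: the only same-class item is ranked first, so MAP is perfect.
queries = np.array([[1, 1]])
db = np.array([[1, 1], [-1, -1], [1, -1]])
m = mean_average_precision(queries, db, np.array([0]), np.array([0, 1, 0]))
```

The inner-product shortcut avoids explicit bit counting and is why deep hashing methods train on inner products of relaxed codes.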

4.2 Comparative Methods

For the general datasets, CIFAR-10 and NUS-WIDE, we compare our method (DSaH) with six deep hashing methods: CNNH [37], DNNH [15], DSH [21], DQN [5], DVSQ [4], and DPSH [16], and three shallow methods: ITQ-CCA [9], KSH [22], and SDH [31]. For the shallow hashing methods, we use deep features extracted by VGG-16 to represent each image. For a fair comparison, we replace the VGG-F base model of DPSH [16], which achieves the best retrieval performance among the comparative methods, with the VGG-16 model; the resulting variant is named DPSH++ and is our main point of comparison.

For the fine-grained datasets, our method (DSaH) is compared with five deep methods: DSH [21], DQN [5], DPSH [16], DCH [3], and DTQ [20], and four shallow methods: SDH [31], LFH [42], KSH [22], and FastH [18]. For a fair comparison, we first fine-tune VGG-16 for classification on each fine-grained dataset. The non-deep hashing methods then use CNN features extracted from the second fully-connected layer (fc7) of the fine-tuned VGG-16 network. To be even more fair, we also replace the base model (VGG-16) of our method with AlexNet, the same base model as DCH [3] and DTQ [20]; this variant is named DSaH-.

| Method | 12 bits | 24 bits | 36 bits | 48 bits | 12 bits | 24 bits | 36 bits | 48 bits |
|---|---|---|---|---|---|---|---|---|
| ITQ-CCA [9] | 0.435 | 0.435 | 0.435 | 0.435 | 0.526 | 0.575 | 0.572 | 0.594 |
| KSH [22] | 0.556 | 0.572 | 0.581 | 0.588 | 0.618 | 0.651 | 0.672 | 0.682 |
| SDH [31] | 0.558 | 0.596 | 0.607 | 0.614 | 0.645 | 0.688 | 0.704 | 0.711 |
| CNNH [37] | 0.439 | 0.476 | 0.472 | 0.489 | 0.611 | 0.618 | 0.628 | 0.608 |
| DNNH [15] | 0.552 | 0.566 | 0.558 | 0.581 | 0.674 | 0.697 | 0.713 | 0.715 |
| DPSH [16] | 0.713 | 0.727 | 0.744 | 0.757 | 0.794 | 0.822 | 0.838 | 0.851 |
| DQN [5] | 0.554 | 0.558 | 0.564 | 0.580 | 0.768 | 0.776 | 0.783 | 0.792 |
| DSH [21] | 0.6157 | 0.6512 | 0.6607 | 0.673 | 0.695 | 0.713 | 0.732 | 0.6755 |
| DVSQ [4] | 0.715 | 0.730 | 0.749 | 0.760 | 0.788 | 0.792 | 0.795 | 0.803 |
| DPSH++ [16] | 0.7834 | 0.8183 | 0.8294 | 0.8317 | 0.8271 | 0.8508 | 0.8592 | 0.8649 |
| DSaH | 0.8003 | 0.8457 | 0.8476 | 0.8478 | 0.838 | 0.854 | 0.864 | 0.873 |
Table 1: Mean Average Precision (MAP) results for different numbers of bits on the general datasets (left block: CIFAR-10, right block: NUS-WIDE)
| Method | 12 bits | 24 bits | 36 bits | 48 bits | 12 bits | 24 bits | 36 bits | 48 bits | 12 bits | 24 bits | 36 bits | 48 bits |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DQN [5] | 0.476 | 0.537 | 0.562 | 0.573 | 0.0089 | 0.0127 | 0.0347 | 0.0531 | - | - | - | - |
| DSH [21] | 0.566 | 0.614 | 0.637 | 0.680 | 0.0119 | 0.0115 | 0.0117 | 0.0119 | 0.0108 | 0.0107 | 0.0108 | 0.0109 |
| DCH [3] | 0.9023 | 0.9117 | 0.9449 | 0.9534 | 0.0287 | 0.1971 | 0.3090 | 0.3073 | 0.0198 | 0.0725 | 0.1112 | 0.1676 |
| DTQ [20] | 0.9077 | 0.9155 | 0.9203 | 0.9324 | 0.0253 | 0.0273 | 0.0268 | 0.0271 | 0.0198 | 0.0233 | 0.0241 | 0.0228 |
| DSaH- | 0.9273 | 0.9354 | 0.9471 | 0.9565 | 0.2442 | 0.2874 | 0.3628 | 0.4075 | 0.0912 | 0.2087 | 0.2318 | 0.2847 |
| SDH [31] | 0.1081 | 0.1399 | 0.1169 | 0.1446 | 0.0091 | 0.0176 | 0.090 | 0.0365 | 0.0148 | 0.0151 | 0.0154 | 0.0156 |
| LFH [42] | 0.1887 | 0.4755 | 0.6363 | 0.8137 | 0.0249 | 0.0247 | 0.0211 | 0.0244 | 0.0064 | 0.0064 | 0.0065 | 0.0067 |
| KSH [22] | 0.2431 | 0.5012 | 0.2530 | 0.3553 | 0.0136 | 0.1228 | 0.1343 | 0.1930 | - | - | - | - |
| FastH [18] | 0.4018 | 0.5244 | 0.5281 | 0.5355 | 0.0434 | 0.2231 | 0.3643 | 0.3927 | 0.0228 | 0.0372 | 0.0423 | 0.0564 |
| DPSH++ [16] | 0.6578 | 0.8295 | 0.8605 | 0.8982 | 0.2778 | 0.4409 | 0.5054 | 0.5247 | 0.0723 | 0.0764 | 0.0838 | 0.0792 |
| DSaH | 0.9325 | 0.9467 | 0.9692 | 0.9756 | 0.3976 | 0.5283 | 0.5950 | 0.6452 | 0.1408 | 0.2817 | 0.3428 | 0.4313 |
Table 2: MAP results for different numbers of bits on the three fine-grained datasets (left block: Oxford Flower-17, middle block: Stanford Dogs-120, right block: CUB Bird)

4.3 Implementation Details

The DSaH method is implemented in PyTorch and the deep model is trained by batch gradient descent. As shown in Fig. 2, our model consists of an attention network and a hashing network. We use VGG-16 as the base model of the hashing network; it is worth mentioning that VGG-16 is not fine-tuned on each dataset. The fully convolutional network [25] is adopted as the base model of the attention network. As discussed in [25], FCN is improved with multi-resolution layer combinations, so we use the fusing method of FCN-16s to improve performance. In practice, we train the attention network before the hashing network. If we trained the hashing network first, the attention network might output a semantically irrelevant saliency image, which would be a bad sample and steer the training of the hashing model in a wrong direction.

Since we propose a novel attention model for hashing, we conduct additional experiments on three fine-grained datasets, Oxford Flower-17, Stanford Dogs-120, and CUB Bird, to further prove its effectiveness. Finally, we also show some typical examples of the saliency images learned by the proposed attention network.

Additionally, we conduct analytical experiments to discuss: (1) the sensitivity to hyper-parameters, (2) the convergence of the two networks, (3) the effectiveness of each loss, and (4) the learned salient regions.

Network Parameters In our method, the two weighting hyper-parameters are set to 30 and 40. We use mini-batch stochastic gradient descent with momentum 0.9. The margin parameter is set in proportion to $k$, the number of hashing bits. The mini-batch size is fixed at 32 and the weight decay parameter at 0.0005.

4.4 Experimental Results for Retrieval

Performance on general hashing datasets The Mean Average Precision (MAP, %) results of different methods for different numbers of bits on the NUS-WIDE and CIFAR-10 datasets are shown in Table 1. Results on CIFAR-10 show that DSaH outperforms the best existing baseline (DPSH [16]) at every code length. As with other hashing methods, we also conduct experiments for large-scale image retrieval. For NUS-WIDE, we follow the setting in [23] and [37]: if two images share at least one label, they are considered similar. The NUS-WIDE results in Table 1 show that our proposed method again outperforms the best retrieval baseline (DPSH [16]) at every code length. These results clearly show that DSaH is also effective for the traditional hashing task. To ensure fairness, we further compare hashing methods built on the same base model: Table 1 shows that our method still outperforms DPSH++ across all code lengths on both NUS-WIDE and CIFAR-10.

Performance on fine-grained datasets The MAP results of different methods on the fine-grained datasets are shown in Table 2, and the precision curves in Fig. 3. Results on Oxford Flower-17 show that DSaH outperforms the best existing baseline by a very large margin at every code length. We also conduct experiments on larger fine-grained datasets: Stanford Dogs-120 and CUB Bird contain more categories and have smaller inter-class variations. The MAP results in Table 2 show that the proposed DSaH method substantially outperforms all the comparison methods on these datasets as well. To ensure fairness, DSaH- uses the same base model as DTQ [20] and DCH [3]; DSaH- still outperforms these methods by about 10% on the Stanford Dogs-120 and CUB Bird datasets. Compared with the results on the traditional hashing task, our method thus achieves a markedly larger improvement in fine-grained retrieval.

Figure 3: Comparison of retrieval performance of the DSaH method and the other hashing methods on Stanford Dogs-120

4.5 Exploration Experiment

Hyper-Parameters Analysis In this subsection, we study the effect of the hyper-parameters. The experiments are conducted on Oxford Flower-17. The quantization penalty parameter and the saliency penalty parameter are selected by cross-validation from 1 to 100 with an additive step size of 10. Fig. 4(a) shows that DSaH is not sensitive to these two hyper-parameters over a large range. As shown in Fig. 4(b), the value of the margin parameter should be neither too large nor too small. This is because, according to Eq. 9, if the margin is too large, the saliency loss degenerates to the semantic loss, while if it is too small, the saliency loss merely pushes the saliency image to be similar to the original image.

Convergence of Networks Since our method trains the attention network and the hashing network iteratively, we study the convergence of the two networks on the CIFAR-10 dataset. As shown in Fig. 4, both the attention network and the hashing network converge after a few epochs, which shows the efficiency of our solution.

Component Analysis of the Loss Function Our loss function consists of two major components: the semantic loss and the saliency loss. To evaluate the contribution of each loss, we study the effect of different loss combinations on retrieval performance. The experimental results are shown in Table 3. An interesting observation is that, for the attention network, combining the semantic and saliency losses achieves better performance than the semantic loss alone. This is understandable because we use the attention network to highlight the most discriminative regions, while the semantic loss only encourages it to locate semantic-preserving regions. Another interesting finding is that, for the hashing network, combining the semantic and saliency losses can perform even worse than the semantic loss alone. A possible reason is that when the saliency loss is applied to the hashing network, the binary codes learned from saliency images are required to be more discriminative than those from the original images; this may force the hashing codes of the original images to degrade and make the saliency loss less effective in guiding the attention network to highlight salient regions. As shown in Table 3, the best performance is achieved when we use the combination of the two components for the attention network and only the semantic loss for the hashing network.

Figure 4: Sensitivity to hyper-parameters (a, b) and the convergence of the attention and hashing networks.
Hashing-Net loss      | Attention-Net loss    | 12 bits | 24 bits | 48 bits
semantic              | semantic              | 0.3864  | 0.5032  | 0.6355
semantic + saliency   | semantic              | 0.3374  | 0.4738  | 0.5931
semantic + saliency   | semantic + saliency   | 0.3756  | 0.5051  | 0.6275
semantic              | semantic + saliency   | 0.3976  | 0.5283  | 0.6452
Table 3: The MAP of DSaH on Stanford Dog-120 using different combinations of loss components.

Learned Salient Regions Fig. 5 shows some typical samples, covering multiple objects, occlusion, and other challenging conditions. Each row of Fig. 5 corresponds to a single category. We make several observations about the learned salient regions. Most importantly, the learned regions almost always cover the heads of the dogs, because the head region is crucial for distinguishing the breed. The typical samples are detailed as follows:

(1) The first image in Fig. 5(a) has a complex background. (2) The first image in Fig. 5(b) shows a dog whose body is partially obscured. (3) Comparing the first images in Fig. 5(c) and Fig. 5(d), the dogs are in different poses (sitting or lying on the grass); the head region is accurately mined regardless of whether the face is frontal. (4) In the second image of each row in Fig. 5, the human body is correctly treated as background. (5) The third image of each row in Fig. 5 contains more than one dog, and the discriminative regions of both dogs are detected; comparing the third images in Fig. 5(c) and Fig. 5(d), the distance between the two dogs does not affect the saliency results. (6) The object scales in the first image of Fig. 5(a) and the second image of Fig. 5(b) differ significantly, yet the heads of the dogs are still correctly highlighted in the saliency images.

Figure 5: Examples of the salient regions learned by the attention network on the Stanford Dogs-120 dataset. As the most important part, the heads of the dogs are correctly highlighted in the saliency images under various conditions.

5 Conclusion

In this paper, we propose a novel supervised deep hashing method for fine-grained retrieval, named deep saliency hashing (DSaH). To distinguish fine-grained objects, our method pairs an attention network, which automatically mines discriminative regions, with a hashing network that learns semantic-preserving hashing codes. We train the attention model and the hashing model alternately: the attention model is trained with the semantic loss, the quantization loss, and the saliency loss, while the hashing model learns semantic-preserving hashing codes from the semantic loss and the quantization loss. Extensive experiments on the CIFAR-10 and NUS-WIDE datasets demonstrate that our method is comparable to the state-of-the-art methods for the traditional hashing retrieval task, and experiments on the Oxford Flower-17, Stanford Dogs-120, and CUB Bird datasets show that it achieves a significant improvement for fine-grained retrieval.


  • [1] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In Foundations of Computer Science, 2006. FOCS’06. 47th Annual IEEE Symposium on, pages 459–468. IEEE, 2006.
  • [2] J. Bai, B. Ni, M. Wang, Y. Shen, H. Lai, C. Zhang, L. Mei, C. Hu, and C. Yao. Deep progressive hashing for image retrieval. In Proceedings of the 2017 ACM on Multimedia Conference, pages 208–216. ACM, 2017.
  • [3] Y. Cao, M. Long, L. Bin, and J. Wang. Deep cauchy hashing for hamming space retrieval. In CVPR, 2018.
  • [4] Y. Cao, M. Long, J. Wang, and S. Liu. Deep visual-semantic quantization for efficient image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1328–1337, 2017.
  • [5] Y. Cao, M. Long, J. Wang, H. Zhu, and Q. Wen. Deep quantization network for efficient image retrieval. In AAAI, pages 3457–3463, 2016.
  • [6] T.-S. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y. Zheng. Nus-wide: a real-world web image database from national university of singapore. In Proceedings of the ACM international conference on image and video retrieval, page 48. ACM, 2009.
  • [7] Y. Duan, J. Lu, Z. Wang, J. Feng, and J. Zhou. Learning deep binary descriptor with multi-quantization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • [8] J. Fu, H. Zheng, and T. Mei. Look closer to see better: recurrent attention convolutional neural network for fine-grained image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [9] Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12):2916–2929, 2013.
  • [10] J. Harel, C. Koch, and P. Perona. Graph-based visual saliency. In Advances in neural information processing systems, pages 545–552, 2007.
  • [11] A. Khosla, N. Jayadevaprakash, B. Yao, and F.-F. Li. Novel dataset for fine-grained image categorization: Stanford dogs. In Proc. CVPR Workshop on Fine-Grained Visual Categorization (FGVC), volume 2, page 1, 2011.
  • [12] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
  • [13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • [14] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. In Advances in neural information processing systems, pages 1042–1050, 2009.
  • [15] H. Lai, Y. Pan, Y. Liu, and S. Yan. Simultaneous feature learning and hash coding with deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3270–3278, 2015.
  • [16] W.-J. Li, S. Wang, and W.-C. Kang. Feature learning based deep supervised hashing with pairwise labels. arXiv preprint arXiv:1511.03855, 2015.
  • [17] X. Li, G. Lin, C. Shen, A. Hengel, and A. Dick. Learning hash functions using column generation. In International Conference on Machine Learning, pages 142–150, 2013.
  • [18] G. Lin, C. Shen, Q. Shi, A. van den Hengel, and D. Suter. Fast supervised hashing with decision trees for high-dimensional data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1963–1970, 2014.
  • [19] K. Lin, H.-F. Yang, J.-H. Hsiao, and C.-S. Chen. Deep learning of binary hash codes for fast image retrieval. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 27–35, 2015.
  • [20] B. Liu, Y. Cao, M. Long, J. Wang, and J. Wang. Deep triplet quantization. MM, ACM, 2018.
  • [21] H. Liu, R. Wang, S. Shan, and X. Chen. Deep supervised hashing for fast image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2064–2072, 2016.
  • [22] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2074–2081. IEEE, 2012.
  • [23] W. Liu, J. Wang, S. Kumar, and S.-F. Chang. Hashing with graphs. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 1–8, 2011.
  • [24] X. Liu, T. Xia, J. Wang, and Y. Lin. Fully convolutional attention localization networks: Efficient attention localization for fine-grained recognition. arXiv preprint arXiv:1603.06765, 2016.
  • [25] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
  • [26] M.-E. Nilsback and A. Zisserman. A visual vocabulary for flower classification. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 2, pages 1447–1454. IEEE, 2006.
  • [27] M. Norouzi and D. M. Blei. Minimal loss hashing for compact binary codes. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 353–360, 2011.
  • [28] M. Norouzi, D. J. Fleet, and R. R. Salakhutdinov. Hamming distance metric learning. In Advances in neural information processing systems, pages 1061–1069, 2012.
  • [29] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015.
  • [30] R. Salakhutdinov and G. Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969–978, 2009.
  • [31] F. Shen, C. Shen, W. Liu, and H. Tao Shen. Supervised discrete hashing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 37–45, 2015.
  • [32] Y. Shen, L. Liu, L. Shao, and J. Song. Deep binaries: Encoding semantic-rich cues for efficient textual-visual cross retrieval. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 4117–4126. IEEE, 2017.
  • [33] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [34] S. Singh, A. Gupta, and A. A. Efros. Unsupervised discovery of mid-level discriminative patches. In Computer Vision–ECCV 2012, pages 73–86. Springer, 2012.
  • [35] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical report, 2011.
  • [36] Y. Wang, J. Choi, V. Morariu, and L. S. Davis. Mining discriminative triplets of patches for fine-grained classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1163–1172, 2016.
  • [37] R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan. Supervised hashing for image retrieval via image representation learning. In AAAI, volume 1, pages 2156–2162, 2014.
  • [38] T. Xiao, Y. Xu, K. Yang, J. Zhang, Y. Peng, and Z. Zhang. The application of two-level attention models in deep convolutional neural network for fine-grained image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 842–850, 2015.
  • [39] Y. Xu, L. Lin, W.-S. Zheng, and X. Liu. Human re-identification by matching compositional template with cluster sampling. In proceedings of the IEEE International Conference on Computer Vision, pages 3152–3159, 2013.
  • [40] H. Zhang, T. Xu, M. Elhoseiny, X. Huang, S. Zhang, A. Elgammal, and D. Metaxas. Spda-cnn: Unifying semantic part detection and abstraction for fine-grained recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1143–1152, 2016.
  • [41] N. Zhang, J. Donahue, R. Girshick, and T. Darrell. Part-based r-cnns for fine-grained category detection. In European conference on computer vision, pages 834–849. Springer, 2014.
  • [42] P. Zhang, W. Zhang, W.-J. Li, and M. Guo. Supervised hashing with latent factor models. In Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval, pages 173–182. ACM, 2014.
  • [43] R. Zhang, L. Lin, R. Zhang, W. Zuo, and L. Zhang. Bit-scalable deep hashing with regularized similarity learning for image retrieval and person re-identification. IEEE Transactions on Image Processing, 24(12):4766–4779, 2015.
  • [44] B. Zhao, X. Wu, J. Feng, Q. Peng, and S. Yan. Diversified visual attention networks for fine-grained object classification. arXiv preprint arXiv:1606.08572, 2016.
  • [45] F. Zhao, Y. Huang, L. Wang, and T. Tan. Deep semantic ranking based hashing for multi-label image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1556–1564, 2015.
  • [46] L. Zhao, X. Li, J. Wang, and Y. Zhuang. Deeply-learned part-aligned representations for person re-identification. arXiv preprint arXiv:1707.07256, 2017.
  • [47] L. Zheng, Y. Huang, H. Lu, and Y. Yang. Pose invariant embedding for deep person re-identification. arXiv preprint arXiv:1701.07732, 2017.
  • [48] F. Zhu, X. Kong, L. Zheng, H. Fu, and Q. Tian. Part-based deep hashing for large-scale person re-identification. IEEE Transactions on Image Processing, 2017.
  • [49] H. Zhu, M. Long, J. Wang, and Y. Cao. Deep hashing network for efficient similarity retrieval. In AAAI, pages 2415–2421, 2016.