Deep Policy Hashing Network with Listwise Supervision

04/03/2019 ∙ by Shaoying Wang, et al. ∙ Sun Yat-sen University

Deep-network-based hashing has become a leading approach for large-scale image retrieval: it learns a similarity-preserving network that maps similar images to nearby hash codes. The pairwise and triplet losses are two widely used similarity-preserving manners for deep hashing. These manners ignore the fact that hashing is a prediction task over a list of binary codes. However, learning deep hashing with listwise supervision is challenging in two respects: 1) how to obtain the rank list of the whole training set when the batch size of a deep network is necessarily small, and 2) how to utilize the listwise supervision. In this paper, we present a novel deep policy hashing architecture in which two systems are learned in parallel: a query network and a shared, slowly changing database network. The following three steps are repeated until convergence: 1) the database network encodes all training samples into binary codes to obtain a whole rank list, 2) the query network is trained with policy learning to maximize a reward that measures the performance of the whole ranking list of binary codes, e.g., mean average precision (MAP), and 3) the database network is updated to match the query network. Extensive evaluations on several benchmark datasets show that the proposed method brings substantial improvements over state-of-the-art hashing methods.




1. Introduction

In the big data era, the number of images in social networks and search engines has increased rapidly. To guarantee high quality and computational efficiency, hashing methods (Wang et al., 2017b, 2016), which map high-dimensional media data to compact binary codes such that similar images are mapped to similar binary hash codes, have received considerable attention for their retrieval efficiency.

Figure 1. Illustration of three similarity preserving manners: (a) pairwise approaches take image pairs as instances in learning; (b) triplet approaches preserve relative similarities among three images, in which the query image is more similar to the positive image than to the negative one; (c) our listwise approach takes the query to retrieve whole ranked lists of data in learning.

Many hashing methods have been proposed in the literature. Shallow architectures were proposed first to learn the codes. For example, Locality Sensitive Hashing (LSH) (Gionis et al., 1999), Iterative Quantization (ITQ) (Gong et al., 2013), Fast Supervised Discrete Hashing (FSDH) (Gui et al., 2018) and Spectral Hashing (SH) (Weiss et al., 2009) use hand-crafted features. Recently, deep hashing methods (Cao et al., 2017; Zhang et al., 2018a) have achieved impressive results in image retrieval due to the powerful features extracted by deep networks. Pairwise and triplet losses are the two widely used manners to learn similarity-preserving hash functions. The pairwise methods take two images as input and characterize the relationship between them, i.e., if the two images are similar, the Hamming distance between their learned binary codes should be small; otherwise, it should be large. Representative works include Deep Asymmetric Pairwise Hashing (DAPH) (Shen et al., 2017), Deep Supervised Hashing (DSH) (Liu et al., 2016), Deep Supervised Hashing with Pairwise Bit Loss (Wang et al., 2017a), and so on. The triplet approaches (Lai et al., 2015; Zhuang et al., 2016) preserve the relative similarities of three images. For instance, given three images (x, x+, x−) in which x is more similar to x+ than to x−, the goal of the triplet loss is to preserve this ordering in the learned binary codes of the three images.

Figure 2. Overview of the structure of our proposed deep policy hashing network, which consists of two sub-networks, a query network and a fixed database network, that share weights periodically. The database network encodes all images in the training set into binary codes, which are regarded as the database for retrieval; the query network maps query images into binary codes and searches the database to obtain the average precision (AP) and rewards. The gradients of the rewards are then backpropagated to update the query network and continually adjust the policy. The weights of the query network are assigned to the database network periodically to update the database.

Although the pairwise and triplet approaches offer several advantages, e.g., easy training, these two manners ignore the fact that lists of samples, rather than pairs or triplets, are returned when an image is used as a query to retrieve relevant data. Ignoring the whole rank list may result in sub-optimal solutions. For example, suppose that q is a query code, {c1, c2, c3, c4} are the database codes, and q is similar to c1 but dissimilar to the other three codes. Taking q as the query, two ranking models return two rank lists, say L1 = (c1, c2, c3, c4) and L2 = (c2, c1, c3, c4). Which model performs better? From the pairwise perspective, L1 can be worse than L2, e.g., when the Hamming distances yield 3 wrong pairs under L1's model but only 2 wrong pairs under L2's. However, when precision@1, i.e., considering only the first returned image, is used as the evaluation metric, L1 is better than L2. In such a case, the pairwise loss does not yield the optimal rank list.

In this paper, we propose to employ listwise supervision, in which the whole set of binary codes is used in learning, as illustrated in Figure 1. Since deep networks always have a large number of parameters, it is impossible to load all training data into one batch. The significant questions are then 1) how to obtain the binary codes of the whole training set and 2) how to define a listwise loss function that uses the entire set of binary codes.

We propose a Deep Policy Hashing Network (DPHN) to address these problems. To solve the first problem, inspired by asynchronous reinforcement learning, e.g., the asynchronous advantage actor-critic (A3C) (Mnih et al., 2016), we execute two hashing networks asynchronously in parallel. These networks have the same architecture, which maps an input image to an approximate hash code; they are referred to as the query network and the database network. The database network is a copy of the query network. Similar to asynchronous Q-learning, the parameters of the query network are updated on each mini-batch while the database network is updated later. In training, the database network encodes all training data into binary codes, and the whole set of binary codes generated by the database network is used as the retrieval database. Second, to learn the query network with all training codes as the listwise supervision, we adopt policy gradients to update the query network by performing approximate gradient ascent directly. Specifically, each bit takes one of two binary values, 0 or 1, so the query network can be viewed as a policy network that decides which action to take (move to '0' or '1') for each bit. This policy network can be trained with reinforcement learning to maximize a reward that measures search accuracy when the query is used to retrieve against the whole set of training codes. Since mean average precision (MAP) is a widely used evaluation measure, we adopt it as the reward of the deep policy hashing network; note that other evaluation measures could also serve as the reward. We conduct extensive experiments on three widely used datasets, demonstrating that our proposed approach yields better performance than other state-of-the-art methods.
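The alternating scheme above can be sketched as a short training loop. This is a minimal illustration, not the paper's implementation: `encode_db` and `policy_update` are hypothetical placeholders standing in for the real database encoding and the policy-gradient step, and `sync_every` stands in for the (unspecified) update interval.

```python
import copy

def train_dphn(query_params, train_batches, epochs, sync_every,
               encode_db, policy_update):
    """Skeleton of DPHN's alternating scheme (callables are placeholders).

    encode_db(params, batches)            -> binary codes of the whole training set
    policy_update(params, batch, database) -> updated query-network parameters
    """
    db_params = copy.deepcopy(query_params)             # database network starts as a copy
    for epoch in range(epochs):
        database = encode_db(db_params, train_batches)  # 1) encode all training samples
        for batch in train_batches:                     # 2) policy-gradient updates per mini-batch
            query_params = policy_update(query_params, batch, database)
        if (epoch + 1) % sync_every == 0:               # 3) refresh the database network
            db_params = copy.deepcopy(query_params)
    return query_params, db_params
```

The key design point is that the database codes stay fixed within an interval, so every mini-batch is scored against a consistent whole rank list.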

The main contributions of our proposed method can be summarized as follows:

  1. The whole ranking list information is utilized when training the policy network. The whole training set is regarded as a retrieval database during training, which ensures that we consider the whole rank list instead of only the image pairs or triplets used as instances in learning.

  2. A deep policy hashing network is proposed to learn binary hash codes with listwise supervision directly. The hashing network is viewed as a policy network that generates binary hash codes. Moreover, we use an evaluation measure, e.g., MAP, as the reward, which encourages the deep policy hashing network to improve the evaluation measure directly.

2. Related Works

Existing image hashing methods can be divided into two categories: unsupervised hashing and supervised hashing. The unsupervised methods learn hash networks that map images to binary codes without label information. Locality Sensitive Hashing (LSH) (Gionis et al., 1999) is a typical unsupervised method, which generates binary codes by random linear projection. Spectral Hashing (SH) (Weiss et al., 2009) uses the relationships between image pairs to preserve similarity. Iterative Quantization (ITQ) (Gong et al., 2013) learns a hashing function by reducing the quantization gap between the real feature space and the binary Hamming space. Topology Preserving Hashing (TPH) (Zhang et al., 2013) tries to preserve consistent neighbourhood rankings of the data in the learned Hamming space.

Supervised hashing methods are proposed to take advantage of label information. Predictable Discriminative Binary Code (DBC) (Rastegari et al., 2012) learns hash functions via hyperplanes, and Minimal Loss Hashing (MLH) (Norouzi and Blei, 2011) learns a hash network by optimizing a hinge-like loss. To deal with linearly inseparable data, Supervised Hashing with Kernels (KSH) (Liu et al., 2012) and Binary Reconstructive Embedding (BRE) (Kulis and Darrell, 2009) were proposed. Supervised Discrete Hashing (SDH) (Shen et al., 2015) improves retrieval accuracy by integrating classification and binary code generation during training. Recently, many deep hashing methods have been proposed (Zhu et al., 2016; Yang et al., 2018; Li et al., 2015; Zhang et al., 2018a; Zhuang et al., 2016; Gui et al., 2018; Erin Liong et al., 2015). According to the form of similarity preserving, two kinds of supervision are widely used: 1) pairwise-based methods and 2) triplet-based methods. The pairwise-based methods take image pairs as input. For example, Wang et al. (Wang et al., 2017a) proposed a deep supervised hashing network with pairwise labels. The triplet methods consider the relationships among three images; Network in Network Hashing (NINH) (Lai et al., 2015) learns a hash network by optimizing a triplet loss. Deep hashing methods improve retrieval quality to some extent. Nevertheless, they still have a shortcoming that limits retrieval quality: they only consider the relationships within image pairs or triplets. Our proposed approach overcomes this deficiency by exploiting whole ranking list information in training.

Very recently, Zhang et al. (Zhang et al., 2018b) proposed deep reinforcement learning for image hashing, which learns the correlation between different hashing functions. The main difference is that they seek to learn each hashing function sequentially, while our method aims at learning from whole ranking list information. Listwise methods have also been proposed to learn hash codes. In (Wang et al., 2013), the listwise supervision is represented by a set of triplets; in contrast, we directly consider the whole set of binary codes.

3. Deep Policy Hashing Network

Given an image x, our goal is to learn a hashing function that encodes x into a q-bit compact binary code b. To preserve semantic similarity, the hashing function should map similar images to similar compact binary codes in Hamming space, and vice versa. In this paper, we propose a deep policy hashing network as shown in Figure 2. The proposed method consists of two "parallel" networks: a query network and a database network. These networks have the same architecture, which maps an input image to a hash code. The database network encodes all training images into binary codes, which are used to obtain the whole list of codes. To utilize the listwise supervision, the query network is learned with all training codes via reinforcement learning. We present the details of these two networks in the following parts.

3.1. Database Network

As shown in Figure 2, the database network consists of a basic deep network and a threshold function. The deep network includes stacked convolutional and fully-connected layers, followed by a sigmoid function that generates the intermediate features. To extract powerful feature representations, we adopt the VGG-19 network (Simonyan and Zisserman, 2014) as the basic architecture. The first 18 layers follow the same settings as in the VGG-19 network, while the last fully-connected layer (fch) is replaced by a q-dimensional fully-connected layer, where q is the number of bits of the binary codes. Then, a sigmoid function is added to restrict the values to the range (0, 1). We denote the output of the basic deep network as h(x), where x is an input image.

Further, we also add a threshold function to generate the binary code of an image as

b_k = 1 if h_k(x) ≥ 0.5, and b_k = 0 otherwise,

where h_k(x) is the k-th entry of h(x), and b_k is the k-th bit of the binary code of the image x.

Given n training samples {x_1, ..., x_n}, we can obtain their binary codes B = {b_1, ..., b_n}. These codes are used as the retrieval database when training the query network, as described in Subsection 3.2.
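The database encoding step is simple to sketch. The following is a minimal illustration, assuming `basic_net` is any callable that returns the sigmoid activations h(x) in (0, 1):

```python
def binarize(activations, threshold=0.5):
    # b_k = 1 if h_k(x) >= 0.5 else 0, matching the threshold function above
    return [1 if h >= threshold else 0 for h in activations]

def encode_database(basic_net, images):
    # encode every training image into a binary code; the full list is the
    # retrieval database B used while training the query network
    return [binarize(basic_net(x)) for x in images]
```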

3.2. Query Network

Given a query, taking all retrieval samples into account has not been well explored in most existing deep hashing methods. In this paper, we exploit the advantages of reinforcement learning and design a policy network that learns with listwise supervision.

We bring the whole training set into the training phase, which aims to learn the hashing network shown in Figure 2. The query network takes two inputs: 1) the query image and 2) the retrieval database of binary codes B (generated by the database network). Formally, for each mini-batch in the training phase, an image goes through the query network and is encoded as a query code u. Note that the query network uses the same basic deep network as the database network, which is also built on VGG-19. The code u is taken as the query to retrieve the relevant data in the retrieval database B. Our query network works as follows: we generate query codes to retrieve from the database, aiming at good retrieval performance, and a policy learning scheme is proposed to train the network. The states, actions and reward of our proposed model are detailed as follows.

Actions: Since each bit takes only two values, i.e., '0' or '1', the number of actions for each bit is two. For a q-bit binary code, there are 2q actions in total. The deep hashing network can be viewed as deciding whether each bit should be '0' or '1'.

State: The input image is considered as an observing state, which provides enough information for the agent to determine the actions.

Reward: Average precision (AP) is a widely used evaluation measure (MAP is the mean of AP over all query images; since each reward involves a single query, AP rather than MAP is used). Hence, AP is applied to guide the agent in learning the policy network.

1: Initialize the first 18 layers of the Basic Deep Network (BDN) with the weights of the pre-trained VGG-19 network, and initialize the last layer randomly
2: repeat
3:     Perform a gradient descent step on the loss w.r.t. the weights of the BDN
4: until the loss is convergent
5: Initialize the Query Network (QN) with weights θ
6: Initialize the Database Network (DN) with weights θ′ = θ
7: Regard all training data as database images for retrieval
8: Generate the database binary codes B with the DN
9: for each training iteration do
10:     Generate query binary codes with the QN
11:     Retrieve from the database B with the query codes
12:     Compute the AP and the reward
13:     Compute the overall loss with Equation 9
14:     Update the QN with the gradients computed with Equation 11
15:     if the database update interval is reached then
16:         reset θ′ = θ
17:         update B with the new DN
18:     end if
19: end for
Algorithm 1 The pseudo code for our approach DPHN

3.3. Optimization

In training, the database network encodes all training data into binary codes B, after which the query network is learned via policy gradients. The database network is then updated to match the query network periodically. Thus, we only need to optimize the parameters of the query network. In this section, we show how to train the query network.

Unlike standard reinforcement learning algorithms, we train the policy to predict all actions at once, which can be viewed as a single-step Markov Decision Process (MDP) (Sutton and Barto, 2011). Given an image x, inspired by the idea in BlockDrop (Wu et al., 2018), we define the policy over binarization behaviour as a q-dimensional Bernoulli distribution:

π(a | x; θ) = ∏_{k=1}^{q} h_k(x)^{a_k} (1 − h_k(x))^{1 − a_k},

where q is the number of bits of the binary codes, θ denotes the parameters of the query network, h_k(x) is the k-th value of the intermediate feature, and a_k is the action for the k-th bit, which is also the k-th bit of the binary code for image x.
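Sampling from this factorized Bernoulli policy, and evaluating its log-probability, can be sketched as follows (a minimal illustration; function names are ours, not the paper's):

```python
import math
import random

def sample_code(probs, rng=random):
    """Sample an action/binary code a ~ pi(a|x) from per-bit probabilities h(x)."""
    return [1 if rng.random() < p else 0 for p in probs]

def log_policy(probs, code):
    """log pi(a|x; theta) = sum_k [a_k log h_k(x) + (1 - a_k) log(1 - h_k(x))]."""
    return sum(a * math.log(p) + (1 - a) * math.log(1 - p)
               for p, a in zip(probs, code))
```

Because the distribution factorizes over bits, the log-probability is a simple sum, which is what makes the policy-gradient update below cheap to compute.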

Given a query image x, the reward function is

R(a) = AP(a) − σ,

where a is the sampled binary code of the image x, and AP(a) is the evaluation value obtained by taking a as a query to retrieve from the database B, defined as

AP(a) = (1 / R₊) ∑_{k=1}^{N} (N_k / k) · rel(k),

where N is the number of images in the database, R₊ is the total number of similar images in the database w.r.t. the query, N_k is the number of similar images among the top k returned images, and rel(k) = 1 if the k-th returned image is similar to the query and rel(k) = 0 otherwise. Note that the data in the database are binary codes, so it is efficient to calculate the AP. The threshold σ determines whether the agent is rewarded or punished: if AP(a) > σ, the agent receives a positive reward; otherwise, it receives a negative one, and the smaller the AP, the greater the penalty. Our goal is to maximize the following expected reward:

J(θ) = E_{a ∼ π(·|x; θ)} [R(a)].
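The AP computation over a ranked list, and the thresholded reward, can be sketched directly from the definitions above (the reconstructed reward form R = AP − σ is assumed; the paper does not print σ's value here):

```python
def average_precision(relevance):
    """AP over a ranked list; relevance[k-1] is 1 if the k-th returned item is similar."""
    total = sum(relevance)          # R_+: total number of similar items
    if total == 0:
        return 0.0
    hits, ap = 0, 0.0
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1               # N_k: similar items among the top k
            ap += hits / k          # contributes N_k / k at each relevant rank
    return ap / total

def reward(ap, sigma):
    # positive reward when AP exceeds the threshold sigma, negative otherwise
    return ap - sigma
```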
By utilizing policy gradients (Williams, 1992), we can calculate the gradients of the expected reward and update the parameters of the query network with the backpropagation algorithm. The gradients are

∇_θ J(θ) = E_{a ∼ π} [ R(a) ∇_θ log π(a | x; θ) ].
We adopt a Monte-Carlo algorithm to sample data and replace the expected gradients with estimated gradients. The estimated gradients are unbiased but have high variance. A self-critical baseline is utilized to reduce the variance, and the gradients become

∇_θ J(θ) ≈ (1 / M) ∑_{i=1}^{M} ( R(a_i) − R(ã_i) ) ∇_θ log π(a_i | x_i; θ),

where M is the batch size, and the baseline code ã_i is obtained from h(x_i) with a threshold function: ã_{i,k} = 1 if h_k(x_i) ≥ 0.5 and ã_{i,k} = 0 otherwise. For unity of expression, we define the overall policy loss as

L_policy = −(1 / M) ∑_{i=1}^{M} ( R(a_i) − R(ã_i) ) log π(a_i | x_i; θ).
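The self-critical policy loss can be sketched as below. This is an illustration of the reconstructed formula, not the paper's code; rewards are assumed to be precomputed by retrieving with the sampled code a_i and with the thresholded baseline code ã_i:

```python
import math

def self_critical_policy_loss(samples):
    """L_policy = -(1/M) * sum_i (R(a_i) - R(a~_i)) * log pi(a_i | x_i).

    Each sample is (probs, sampled_code, sampled_reward, baseline_reward),
    where the baseline code a~ comes from thresholding probs at 0.5.
    """
    total = 0.0
    for probs, code, r_sample, r_baseline in samples:
        logp = sum(a * math.log(p) + (1 - a) * math.log(1 - p)
                   for p, a in zip(probs, code))
        # sampled codes that beat the greedy baseline are reinforced,
        # codes that do worse are suppressed
        total -= (r_sample - r_baseline) * logp
    return total / len(samples)
```

Subtracting the baseline reward leaves the gradient estimate unbiased while shrinking its variance, since only the advantage over the greedy (thresholded) code drives the update.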
However, it is difficult to guarantee the convergence of the policy network. We therefore combine the policy loss with a triplet ranking loss to accelerate convergence. The overall loss function can be written as

L = L_policy + λ L_triplet,

where λ is a trade-off parameter and the triplet ranking loss is defined as

L_triplet = max( 0, m + || h(x) − h(x⁺) ||² − || h(x) − h(x⁻) ||² ),

where m is a margin hyperparameter and h(x), h(x⁺), h(x⁻) are the intermediate features of the training images x, x⁺, x⁻, with x more similar to x⁺ than to x⁻. The overall gradients can then be calculated as

∇_θ L = ∇_θ L_policy + λ ∇_θ L_triplet,

where the detailed calculation of the triplet-loss gradients is explained in (Lai et al., 2015) and is not repeated here. Algorithm 1 shows the complete training process of our approach DPHN.
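The auxiliary triplet term can be sketched as a standard squared-distance triplet ranking loss (assumed form; the paper's exact equation is not printed in this extraction):

```python
def triplet_ranking_loss(h_anchor, h_pos, h_neg, margin):
    """max(0, margin + ||h(x) - h(x+)||^2 - ||h(x) - h(x-)||^2) over intermediate features."""
    d_pos = sum((a - p) ** 2 for a, p in zip(h_anchor, h_pos))
    d_neg = sum((a - n) ** 2 for a, n in zip(h_anchor, h_neg))
    return max(0.0, margin + d_pos - d_neg)
```

The loss is zero once the negative is farther than the positive by at least the margin, so it only shapes the feature space early in training while the policy loss takes over later.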

4. Experiments

In this section, we present our experiments on three image retrieval datasets and compare the results with several state-of-the-art hashing methods, including DRLIH (Zhang et al., 2018b), HashNet (Cao et al., 2017), NINH (Lai et al., 2015), CNNH (Xia et al., 2014), DSH (Liu et al., 2016), LSH (Gionis et al., 1999), SDH (Shen et al., 2015) and SH (Weiss et al., 2009).

4.1. Datasets

The experiments are conducted on three benchmark image retrieval datasets: CIFAR-10, NUS-WIDE (Chua et al., 2009) and MIRFlickr (Huiskes and Lew, 2008).

  • CIFAR-10 consists of 60,000 32×32 color images from 10 classes, with 6,000 images per class. To compare fairly with other methods, we randomly select a fixed number of images per class as the query set and 500 images per class as the training set, as suggested by (Zhang et al., 2018b). All images except those in the query set form the retrieval database.

  • NUS-WIDE is a public web image dataset consisting of 269,648 images, each associated with one or multiple labels from 81 categories. Following (Zhang et al., 2018b), we keep only the most common categories, each of which contains at least a minimum number of images. A fixed number of images per category are selected as the query set, and a further fixed number per category as the training set. All images except the query set are regarded as the database.

  • MIRFlickr contains 25,000 images collected from Flickr, each labelled with one or more semantic categories. Following the setting of (Zhang et al., 2018b), subsets of images are selected as the query set and the training set, respectively. Apart from the images in the query set, the remaining images are considered the database set.

Methods       |        CIFAR-10         |        NUS-WIDE         |        MIRFlickr
              | 12bit 24bit 32bit 48bit | 12bit 24bit 32bit 48bit | 12bit 24bit 32bit 48bit
DPHN (ours)   | 0.844 0.862 0.868 0.878 | 0.834 0.849 0.850 0.854 | 0.827 0.840 0.837 0.847
HashNet       | 0.765 0.823 0.840 0.843 | 0.812 0.833 0.830 0.840 | 0.777 0.782 0.785 0.785
DSH           | 0.708 0.712 0.751 0.720 | 0.793 0.804 0.815 0.800 | 0.651 0.681 0.684 0.686
NINH          | 0.792 0.818 0.832 0.830 | 0.808 0.827 0.827 0.827 | 0.772 0.756 0.760 0.778
CNNH          | 0.683 0.692 0.667 0.623 | 0.768 0.784 0.790 0.740 | 0.763 0.757 0.758 0.755
SDH-VGG19     | 0.430 0.652 0.653 0.665 | 0.730 0.797 0.819 0.830 | 0.762 0.739 0.737 0.747
SH-VGG19      | 0.224 0.213 0.213 0.209 | 0.712 0.697 0.689 0.682 | 0.618 0.604 0.598 0.595
LSH-VGG19     | 0.133 0.171 0.178 0.198 | 0.518 0.567 0.618 0.651 | 0.575 0.584 0.604 0.614
SDH           | 0.255 0.330 0.344 0.360 | 0.460 0.510 0.519 0.525 | 0.595 0.601 0.608 0.605
SH            | 0.124 0.125 0.125 0.126 | 0.452 0.445 0.443 0.437 | 0.561 0.562 0.563 0.562
LSH           | 0.116 0.121 0.124 0.131 | 0.436 0.414 0.432 0.442 | 0.557 0.564 0.562 0.569
Table 1. MAP scores with different lengths of binary codes on the CIFAR-10, NUS-WIDE and MIRFlickr datasets. The MAP scores on NUS-WIDE are calculated based on the top 5000 returned images, while the MAP scores on the other two datasets are based on all returned images. The best results are shown in bold.

4.2. Experiment Settings and Evaluation Metrics

The implementation of our proposed method is based on the deep learning framework PyTorch. We use stochastic gradient descent with momentum as the optimizer. The parameters of the first 18 layers are initialized with the VGG-19 model pre-trained on the ImageNet dataset (Russakovsky et al., 2015). The inputs of our network are raw images and their corresponding labels. We set an initial learning rate and decrease it by a constant factor at fixed epoch intervals; the weight decay is fixed. The threshold σ in the reward function, which controls the balance between reward and punishment, is fixed as well. The margin m in the triplet ranking loss is set per code length. The batch size is fixed, and the database network is updated every epoch.

We compare our method with several state-of-the-art hashing methods, including deep supervised hashing methods and unsupervised methods. For the deep hashing methods, raw images are used as input. For a fair comparison, the same VGG-19 architecture is used as the basic structure for all deep hashing methods. We also report results for the shallow hashing methods with two kinds of features: deep features and hand-crafted features. The deep features are 4096-dimensional vectors extracted from pre-trained VGG-19 models. Following (Zhang et al., 2018b), 512-dimensional GIST features are used for CIFAR-10 and MIRFlickr, and 500-dimensional bag-of-words features for NUS-WIDE. For a fair comparison, the results of all other hashing methods are cited directly from (Zhang et al., 2018b).

We evaluate image retrieval performance with three standard evaluation metrics: mean average precision (MAP), precision within Hamming distance 2 (P@H≤2) and precision at top K (P@K). MAP is the mean of the average precision (AP) computed in Equation 1. Precision within Hamming distance 2 is the precision over the returned images whose Hamming distance to the query is within 2 under hash lookup; this metric is significant for retrieval effectiveness since such a lookup requires only constant time per query. Precision at top K is the precision with respect to different numbers of top returned samples from the ranking list.
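The P@H≤2 metric can be sketched directly from its definition (a minimal illustration; names are ours):

```python
def hamming(a, b):
    # Hamming distance between two equal-length binary codes
    return sum(x != y for x, y in zip(a, b))

def precision_within_radius(query_code, database_codes, is_relevant, radius=2):
    """Precision over database items within Hamming distance `radius` of the query."""
    hit_idx = [i for i, code in enumerate(database_codes)
               if hamming(query_code, code) <= radius]
    if not hit_idx:
        return 0.0        # no image returned within the radius
    return sum(1 for i in hit_idx if is_relevant[i]) / len(hit_idx)
```

The "queries fail to return images within Hamming radius 2" behaviour discussed below corresponds exactly to the empty `hit_idx` branch.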

Figure 3. Precision within Hamming distance 2 using hashing lookup on three datasets.
Figure 4. Precision at top K returned results on three datasets.

4.3. Comparison with State-of-the-art Methods

In this set of experiments, we evaluate and compare the proposed method with several state-of-the-art methods.

Table 1 shows the comparison of MAP results on the three databases. It is divided into three parts: deep hashing methods, shallow methods with deep features extracted by VGG-19 models, and shallow methods with hand-crafted features. All of these methods are pairwise or triplet approaches, except ours. It can be observed that the proposed DPHN performs significantly better than all baselines. Specifically, on the CIFAR-10 dataset, DPHN achieves a MAP of 0.844 on 12 bits, an improvement of about 8% over the second best method, HashNet. Similarly, DPHN also performs better than the other methods on the NUS-WIDE dataset, achieving a MAP of 0.834 on 12 bits versus 0.812 for HashNet. On the MIRFlickr dataset, the MAP of our approach DPHN is 0.847 on 48 bits, improvements of 6% over HashNet, 10% over SDH-VGG19 and 24% over SDH.

Figure 3 shows the results of precision within Hamming distance 2 on the three datasets. DPHN achieves the best performance on almost all settings. Even with short binary codes, such as 12 bits, DPHN performs better than the other methods, demonstrating that our approach learns more compact information. The precision of some methods decreases with longer binary codes, since many queries fail to return any images within Hamming radius 2, whereas our approach achieves higher precision with longer binary codes; the reason may be that our method optimizes the evaluation measure directly.

Figure 4 shows the results of precision at top K. Again, DPHN achieves the best performance compared with the state-of-the-art methods. Notably, the average precision at top K of DPHN is higher than 0.9 on the MIRFlickr dataset, which outperforms the other methods.

Methods       |        CIFAR-10         |        NUS-WIDE         |        MIRFlickr
              | 12bit 24bit 32bit 48bit | 12bit 24bit 32bit 48bit | 12bit 24bit 32bit 48bit
DPHN (ours)   | 0.844 0.862 0.868 0.878 | 0.834 0.849 0.850 0.854 | 0.827 0.840 0.837 0.847
DRLIH         | 0.816 0.843 0.855 0.853 | 0.823 0.846 0.845 0.853 | 0.796 0.811 0.810 0.814
Table 2. Comparison of MAP scores between the baseline and our proposed approach with different lengths of binary codes on the CIFAR10, NUS-WIDE and MIRFlickr datasets. The best results are shown in bold.
Figure 5. Precision within Hamming distance 2 using hashing lookup compared with baseline method on three datasets.
Figure 6. Precision at top K returned results compared with baseline method on three datasets.

4.4. Comparison with the Policy Learning Method

In this set of experiments, we conduct an ablation study to clarify the impact of our policy learning on the final performance. DRLIH, a reinforcement learning method with a different policy from ours, is selected as the baseline. Our strategy learns from whole ranking list information and improves the evaluation measure directly, while DRLIH proposes a policy that captures ranking errors and learns each hashing function sequentially. This comparison tells us whether our policy learning contributes to accuracy.

Table 2 shows the comparison of MAP results. On the CIFAR10 dataset, the average MAP of DPHN is 0.863, an improvement of about 2% over the baseline's average MAP of 0.842. Our proposed approach improves the average MAP from 0.842 to 0.847 on the NUS-WIDE dataset and from 0.808 to 0.838 on MIRFlickr. These comparison results confirm the effectiveness of our policy learning for improving the quality of image retrieval.

Figure 5 shows the precision within Hamming distance 2 compared with the baseline method. DPHN performs better than DRLIH, especially on the MIRFlickr dataset. We observe the same trend in precision at top K returned samples, shown in Figure 6. These results demonstrate that our policy performs better than that of the baseline method.

Figure 7. The top 10 images returned by DPHN using Hamming ranking on 48bits binary codes.

In Figure 7, we visualize the top 10 returned images of three query images on CIFAR10, NUS-WIDE, MIRFlickr with our approach DPHN. It shows that our approach can obtain satisfactory results.

Two observations can be made from the comparisons above: 1) our proposed method with listwise supervision achieves better performance than the pairwise techniques, e.g., DSH, and the triplet methods, e.g., NINH, so it is desirable to use listwise supervision in training; and 2) our deep policy network also performs better than the other reinforcement learning method, DRLIH.

5. Conclusion

In this paper, we proposed DPHN, a hashing method that encodes images into binary codes for effective retrieval. First, a database network was proposed to encode all images in the training set into binary codes. Second, a policy network was proposed to take the whole set of binary codes into account. Since the reward of the policy network is based on AP, we can improve MAP directly. Experiments conducted on three widely used datasets show that our approach achieves higher retrieval quality than existing hashing methods.


  • Cao et al. (2017) Zhangjie Cao, Mingsheng Long, Jianmin Wang, and S Yu Philip. 2017. HashNet: Deep Learning to Hash by Continuation. In Proceedings of the International Conference on Computer Vision. 5609–5618.
  • Chua et al. (2009) Tat-Seng Chua, Jinhui Tang, Richang Hong, Haojie Li, Zhiping Luo, and Yantao Zheng. 2009. NUS-WIDE: A Real-world Web Image Database from National University of Singapore. In Proceedings of the ACM International Conference on Image and Video Retrieval. Article 48, 9 pages.
  • Erin Liong et al. (2015) Venice Erin Liong, Jiwen Lu, Gang Wang, Pierre Moulin, and Jie Zhou. 2015. Deep hashing for compact binary codes learning. In Proceedings of the IEEE conference on computer vision and pattern recognition. 2475–2483.
  • Gionis et al. (1999) Aristides Gionis, Piotr Indyk, and Rajeev Motwani. 1999. Similarity Search in High Dimensions via Hashing. In Proceedings of the 25th International Conference on Very Large Data Bases. 518–529.
  • Gong et al. (2013) Yunchao Gong, Svetlana Lazebnik, Albert Gordo, and Florent Perronnin. 2013. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence 35, 12 (2013), 2916–2929.
  • Gui et al. (2018) Jie Gui, Tongliang Liu, Zhenan Sun, Dacheng Tao, and Tieniu Tan. 2018. Fast supervised discrete hashing. IEEE transactions on pattern analysis and machine intelligence 40, 2 (2018), 490–496.
  • Huiskes and Lew (2008) Mark J. Huiskes and Michael S. Lew. 2008. The MIR Flickr Retrieval Evaluation. In Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval (MIR ’08). ACM, New York, NY, USA, 39–43.
  • Kulis and Darrell (2009) Brian Kulis and Trevor Darrell. 2009. Learning to hash with binary reconstructive embeddings. In Proceedings of the Advances in neural information processing systems. 1042–1050.
  • Lai et al. (2015) Hanjiang Lai, Yan Pan, Ye Liu, and Shuicheng Yan. 2015. Simultaneous feature learning and hash coding with deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition. 3270–3278.
  • Li et al. (2015) Wu-Jun Li, Sheng Wang, and Wang-Cheng Kang. 2015. Feature learning based deep supervised hashing with pairwise labels. arXiv preprint arXiv:1511.03855 (2015).
  • Liu et al. (2016) Haomiao Liu, Ruiping Wang, Shiguang Shan, and Xilin Chen. 2016. Deep supervised hashing for fast image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2064–2072.
  • Liu et al. (2012) Wei Liu, Jun Wang, Rongrong Ji, Yu-Gang Jiang, and Shih-Fu Chang. 2012. Supervised hashing with kernels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2074–2081.
  • Mnih et al. (2016) Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning. 1928–1937.
  • Norouzi and Blei (2011) Mohammad Norouzi and David M Blei. 2011. Minimal loss hashing for compact binary codes. In Proceedings of the 28th International Conference on Machine Learning. 353–360.
  • Rastegari et al. (2012) Mohammad Rastegari, Ali Farhadi, and David Forsyth. 2012. Attribute discovery via predictable discriminative binary codes. In Proceedings of the European Conference on Computer Vision. Springer, 876–889.
  • Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115, 3 (2015), 211–252.
  • Shen et al. (2017) Fumin Shen, Xin Gao, Li Liu, Yang Yang, and Heng Tao Shen. 2017. Deep Asymmetric Pairwise Hashing. In Proceedings of the 25th ACM International Conference on Multimedia (MM ’17). ACM, New York, NY, USA, 1522–1530.
  • Shen et al. (2015) Fumin Shen, Chunhua Shen, Wei Liu, and Heng Tao Shen. 2015. Supervised discrete hashing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 37–45.
  • Simonyan and Zisserman (2014) Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).
  • Sutton and Barto (2011) Richard S Sutton and Andrew G Barto. 2011. Reinforcement Learning: An Introduction.
  • Wang et al. (2017a) Jiabao Wang, Yang Li, Xiancai Zhang, Zhuang Miao, and Gang Tao. 2017a. Deep Supervised Hashing with Pairwise Bit Loss. In Proceedings of the 2017 International Conference on Deep Learning Technologies (ICDLT ’17). ACM, New York, NY, USA, 70–74.
  • Wang et al. (2016) Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. 2016. Learning to hash for indexing big data: a survey. Proc. IEEE 104, 1 (2016), 34–57.
  • Wang et al. (2013) Jun Wang, Wei Liu, Andy X Sun, and Yu-Gang Jiang. 2013. Learning hash codes with listwise supervision. In Proceedings of the IEEE International Conference on Computer Vision. 3032–3039.
  • Wang et al. (2017b) Jingdong Wang, Ting Zhang, Nicu Sebe, Heng Tao Shen, et al. 2017b. A survey on learning to hash. IEEE Transactions on Pattern Analysis and Machine Intelligence (2017).
  • Weiss et al. (2009) Yair Weiss, Antonio Torralba, and Rob Fergus. 2009. Spectral hashing. In Advances in Neural Information Processing Systems. 1753–1760.
  • Williams (1992) Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning 8, 3-4 (1992), 229–256.
  • Wu et al. (2018) Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S Davis, Kristen Grauman, and Rogerio Feris. 2018. Blockdrop: Dynamic inference paths in residual networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 8817–8826.
  • Xia et al. (2014) Rongkai Xia, Yan Pan, Hanjiang Lai, Cong Liu, and Shuicheng Yan. 2014. Supervised hashing for image retrieval via image representation learning. In Proceedings of the Association for the Advancement of Artificial Intelligence, Vol. 1. 2.
  • Yang et al. (2018) Huei-Fang Yang, Kevin Lin, and Chu-Song Chen. 2018. Supervised learning of semantics-preserving hash via deep convolutional neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 40, 2 (2018), 437–451.
  • Zhang et al. (2018b) Jian Zhang, Yuxin Peng, and Zhaoda Ye. 2018b. Deep Reinforcement Learning for Image Hashing. arXiv preprint arXiv:1802.02904 (2018).
  • Zhang et al. (2013) Lei Zhang, Yongdong Zhang, Jinhui Tang, Xiaoguang Gu, Jintao Li, and Qi Tian. 2013. Topology Preserving Hashing for Similarity Search. In Proceedings of the 21st ACM International Conference on Multimedia (MM ’13). ACM, New York, NY, USA, 123–132.
  • Zhang et al. (2018a) Xi Zhang, Hanjiang Lai, and Jiashi Feng. 2018a. Attention-Aware Deep Adversarial Hashing for Cross-Modal Retrieval. In Proceedings of the European Conference on Computer Vision. 614–629.
  • Zhu et al. (2016) Han Zhu, Mingsheng Long, Jianmin Wang, and Yue Cao. 2016. Deep Hashing Network for Efficient Similarity Retrieval. In Proceedings of the Association for the Advancement of Artificial Intelligence. 2415–2421.
  • Zhuang et al. (2016) Bohan Zhuang, Guosheng Lin, Chunhua Shen, and Ian Reid. 2016. Fast training of triplet-based deep binary embedding networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 5955–5964.