1 Introduction
Approximate Nearest Neighbour (ANN) search has attracted ever-increasing attention in the era of big data. Thanks to the extremely low cost of computing Hamming distances, binary coding/hashing has been appreciated as an efficient solution to ANN search. Like other feature learning schemes, hashing techniques can typically be subdivided into supervised and unsupervised ones. Supervised hashing [11, 23, 25, 38, 44, 47], which depends heavily on labels, is not always preferable since large-scale data annotation is unaffordable. Conversely, unsupervised hashing [12, 16, 15, 20, 33, 46, 45], which is the focus of this paper, provides a cost-effective solution for more practical applications. To exploit data similarities, existing unsupervised hashing methods [29, 30, 42, 35, 40] have extensively employed graph-based paradigms. Nevertheless, these methods usually suffer from the 'static graph' problem. More concretely, they often adopt explicitly pre-computed graphs, introducing biased prior knowledge of data relevance. Moreover, such graphs cannot be adaptively updated to better model the intrinsic data structure. In other words, there is no interaction between hash function learning and graph construction. The 'static graph' problem greatly hinders the effectiveness of graph-based unsupervised hashing mechanisms.
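As an aside, the low cost of Hamming distance computation can be made concrete with a tiny sketch (illustrative only, not part of the proposed method): codes packed into machine integers are compared with a single XOR followed by a population count, with no floating-point arithmetic involved.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary codes packed as integers."""
    # XOR marks the differing bit positions; the popcount counts them.
    return bin(a ^ b).count("1")

d = hamming(0b10110100, 0b10011100)  # these two 8-bit codes differ in 2 positions
```

This is why Hamming ranking over millions of items remains practical even on a single CPU core.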
In this work, we tackle the above long-standing challenge by proposing a novel adaptive graph that is directly driven by the learned binary codes. The graph is then seamlessly embedded into a generative network, which has recently been verified effective for learning reconstructive binary codes [8, 10, 39, 49]. In general, our network can be regarded as a variant of the Wasserstein Auto-Encoder [41] with two kinds of bottlenecks (i.e., latent variables). Hence, we call the proposed method Twin-Bottleneck Hashing (TBH). Fig. 1 illustrates the differences between TBH and the related models. As shown in Fig. 1 (c), the binary bottleneck (BinBN) contributes to constructing the code-driven similarity graph, while the continuous bottleneck (ConBN) mainly guarantees the reconstruction quality. Furthermore, Graph Convolutional Networks (GCNs) [19] are leveraged as a 'tunnel' between the graph and the ConBN to fully exploit data relationships. As a result, similarity-preserving latent representations are fed into the decoder for high-quality reconstruction. Finally, as a reward, the updated network setting is back-propagated through the ConBN to the encoder, which better fulfills our ultimate goal of binary coding.
More concretely, TBH tackles the 'static graph' problem by directly leveraging the latent binary codes to adaptively capture the intrinsic data structure. To this end, an adaptive similarity graph is computed directly from the Hamming distances between binary codes, and is used to guide the ConBN through neural graph convolution [14, 19]. This design provides an optimal mechanism for efficient retrieval tasks by directly incorporating the Hamming distance into training. On the other hand, as a side benefit of the twin-bottleneck module, TBH also overcomes another important limitation of generative hashing models [5, 8], i.e., directly feeding the BinBN to the decoder leads to poor data reconstruction capability. For simplicity, we refer to this problem as 'deficient BinBN'. In particular, we address it by leveraging the ConBN, which is believed to have higher encoding capacity, for decoding. In this way, one can expect the continuous latent variables to preserve more entropy than binary ones. Consequently, the reconstruction procedure in the generative model becomes more effective.
In addition, during optimization, existing hashing methods often employ alternating iteration over auxiliary binary variables [34] or even discard the binary constraints using relaxation techniques [9]. In contrast, our model employs the distributional derivative estimator [8] to compute gradients across the binary latent variables, ensuring that the binary constraints are never violated. Therefore, the whole TBH model can be conveniently optimized by the standard Stochastic Gradient Descent (SGD) algorithm.
The main contributions of this work are summarized as follows:

We propose a novel unsupervised hashing framework by incorporating twin bottlenecks into a unified generative network. The binary and continuous bottlenecks work collaboratively to generate discriminative binary codes without much loss of reconstruction capability.

A code-driven adjacency graph is proposed with efficient computation in the Hamming space. The graph is updated adaptively to better fit the inherent data structure. Moreover, GCNs are leveraged to further exploit the data relationships.

The auto-encoding framework is leveraged in a novel way to determine the reward of the encoding quality on top of the code-driven graph, shaping the idea of learning similarity by decoding.

Extensive experiments show that the proposed TBH model massively boosts state-of-the-art retrieval performance on four large-scale image datasets.
2 Related Work
Learning to hash, in both the supervised and unsupervised scenarios [5, 6, 9, 12, 24, 27, 29, 30], has been studied for years. This work is mostly related to graph-based approaches [42, 29, 30, 35] and those based on deep generative models [5, 8, 10, 39, 49].
Unsupervised hashing with graphs. As a well-known graph-based approach, Spectral Hashing (SpH) [42] determines pairwise code distances according to the graph Laplacian of the data similarity affinity in the original feature space. Anchor Graph Hashing (AGH) [30] successfully defines a small set of anchors to approximate this graph. These approaches assume that the original or mid-level data feature distances reflect the actual data relevance. As discussed for the 'static graph' problem, this is not always realistic. Additionally, the pre-computed graph is isolated from the training process, making it hard to obtain optimal codes. Although this issue has already been considered in [35] via an alternating code-updating scheme, its similarity graph still depends only on real-valued features during training. We instead build the batch-wise graph directly upon Hamming distances, so that the learned graph is automatically optimized by the neural network.
Unsupervised generative hashing. Stochastic Generative Hashing (SGH) [8] is closely related to our model in its use of the auto-encoding framework and discrete stochastic neurons. However, SGH [8] simply utilizes the binary latent variables as the encoder-decoder bottleneck. This design does not fully consider code similarity and may lead to high reconstruction error, which harms the training effectiveness (the 'deficient BinBN' problem). While auto-encoding schemes impose a deterministic decoding error, we are also aware that some existing models [10, 39, 49] are built on implicit reconstruction likelihoods, such as the discriminator of a Generative Adversarial Network (GAN) [13]. Note that TBH also involves adversarial training, but only for regularization purposes.
3 Proposed Model
TBH produces binary features of the given data for efficient ANN search. Given a data collection X = {x_i}_{i=1}^N, the goal is to learn an encoding function f(·; Θ): R^D → (0, 1)^M. Here N refers to the set size, D indicates the original data dimensionality, and M is the target code length. Traditionally, the code of a data point, e.g., an image or a feature vector, is obtained by applying an element-wise sign function (i.e., sgn(·)) to the encoding function:

b = (sgn(f(x; Θ) - 0.5) + 1) / 2,   (1)

where b ∈ {0, 1}^M is the binary code. Some auto-encoding hashing methods [8, 37] introduce stochasticity on the encoding layer (see Eq. (2)) to estimate the gradient across b. We also adopt this paradigm to make TBH fully trainable with SGD, while during test, Eq. (1) is used to encode out-of-sample data points.
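As a toy illustration of this deterministic binarization (a numpy sketch, assuming as in Sec. 3.2.1 that the encoder's last activation is a sigmoid with outputs in (0, 1); the function name is ours):

```python
import numpy as np

def binarize(scores: np.ndarray) -> np.ndarray:
    """Threshold sigmoid outputs at 0.5 to obtain {0, 1} bits."""
    # Scores exactly at 0.5 would map to 0.5 via np.sign; sigmoid outputs
    # essentially never hit the threshold exactly in practice.
    return (np.sign(scores - 0.5) + 1) / 2

codes = binarize(np.array([0.9, 0.2, 0.51, 0.4]))  # -> [1., 0., 1., 0.]
```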
3.1 Network Overview
The network structure of TBH is illustrated in Fig. 2. It involves a twin-bottleneck auto-encoder for our unsupervised feature learning task and two WAE [41] discriminators for latent variable regularization. The network setting, including the numbers of layers and hidden units, is also provided in Fig. 2.
An arbitrary datum x is first fed to the encoder to produce two sets of latent variables, i.e., the binary code b and the continuous feature z. Note that back-propagatable discrete stochastic neurons are introduced to obtain the binary code b; this procedure is explained in Sec. 3.2.1. Subsequently, a similarity graph within a training batch is built according to the Hamming distances between binary codes. As shown in Fig. 2, we use an adjacency matrix A to represent this batch-wise similarity graph. The continuous latent variable z is then tweaked by a Graph Convolutional Network (GCN) [19] with the adjacency A, resulting in the final latent variable u (see Sec. 3.2.2) for reconstruction. Following [41], two discriminators are utilized to regularize the latent variables, producing informative, 0-1-balanced codes.
3.1.1 Why Does This Work?
Our key idea is to utilize the reconstruction loss on the auxiliary decoder side as a reward/critic that scores the encoding quality through the GCN layer and the encoder. Hence, TBH directly addresses both the 'static graph' and 'deficient BinBN' problems. First of all, the utilization of continuous latent variables mitigates the information loss at the binary bottleneck in [5, 8], as more detailed data information can be kept. This design promotes the reconstruction quality and training effectiveness. Secondly, a direct back-propagation pathway from the decoder to the binary encoder is established through the GCN [19]. The GCN layer selectively mixes and tunes the latent data representations based on code distances, so that data with similar binary representations have stronger influence on each other. Therefore, the binary encoder is effectively rewarded when it successfully detects relevant data for reconstruction.
3.2 Auto-Encoding Twin-Bottleneck
3.2.1 Encoder: Learning Factorized Representations
Different from conventional auto-encoders, TBH involves a twin-bottleneck architecture. Apart from the M-bit binary code b, a continuous latent variable z is introduced to capture detailed data information. Here L refers to the dimensionality of z. As shown in Fig. 2, two encoding layers, one for b and one for z, are built on top of an identical fully-connected layer which receives the original data representation x. We denote these two encoding functions, i.e., f_b(x; θ_b) and f_z(x; θ_z), as follows:

b = α(f_b(x; θ_b); ε),   z = f_z(x; θ_z),   (2)
where θ_b and θ_z indicate the network parameters. Note that θ_b overlaps with θ_z w.r.t. the weights of the shared fully-connected layer. The first layer of f_b and f_z comes with a ReLU [32] nonlinearity. The activation function of the second layer of f_b is the sigmoid function, restricting its output values to the interval (0, 1), while f_z uses a ReLU [32] nonlinearity again on its second layer. More importantly, α(·; ε) in Eq. (2) is the element-wise discrete stochastic neuron activation [8] with a set of random variables ε = (ε^(1), ..., ε^(M)), each ε^(k) ~ U(0, 1), which enables back-propagation through the binary variable b. A discrete stochastic neuron is defined as:

b^(k) = (sgn(f_b^(k)(x; θ_b) - ε^(k)) + 1) / 2,   (3)

where the superscript (k) denotes the k-th element of the corresponding vector. During the training phase, this operation preserves the binary constraints and allows gradient estimation through the distributional derivative [8] with Monte Carlo sampling, which will be elaborated later.
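A minimal numpy sketch of this sampling step (the function name and shapes are ours, for illustration): in expectation each bit equals its sigmoid activation, yet every sampled code is strictly binary.

```python
import numpy as np

def stochastic_neuron(probs: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Sample b^(k) = 1 if probs^(k) >= eps^(k) with eps^(k) ~ U(0, 1), else 0."""
    eps = rng.uniform(size=probs.shape)
    return (probs >= eps).astype(np.float64)

rng = np.random.default_rng(0)
p = np.array([0.9, 0.1, 0.5])
samples = np.stack([stochastic_neuron(p, rng) for _ in range(20000)])
# Every sample is strictly 0/1, while the sample mean approaches p.
```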
3.2.2 Bottlenecks: Code-Driven Hamming Graph
Different from existing graph-based hashing approaches [29, 30, 47], where graphs are basically fixed during training, TBH automatically detects relevant data points in a graph and mixes their representations for decoding via a back-propagatable scheme.
The outputs of the encoder, i.e., b and z, are utilized to produce the final input to the decoder. For simplicity, we use batch-wise notation with capitalized letters. In particular, Z and B respectively refer to the continuous and binary latent variables of a batch of data points. The input to the decoder is denoted as U. We construct the graph on the whole training batch with each datum as a vertex, and the edges are determined by the Hamming distances between the binary codes. The normalized graph adjacency A is computed by:

A = 1 - (1/M) (B (1 - B)^T + (1 - B) B^T),   (4)

where 1 is a matrix full of ones. Eq. (4) is an equivalent of a_ij = 1 - d_H(b_i, b_j) / M for each entry of A, with d_H(·,·) the Hamming distance. Then this adjacency, together with the continuous variables Z, is processed by the GCN layer [19], which is defined as:
U = σ(Ã Z W_G),   (5)

Here W_G is a set of trainable projection parameters and Ã = D̃^(-1/2) A D̃^(-1/2), with D̃ the degree matrix of A.
As the batch-wise adjacency A is constructed directly from the codes, a trainable pathway is established from the decoder back to the binary encoder. Intuitively, the reconstruction penalty scales up when unrelated data are located close together in the Hamming space. Ultimately, only relevant data points with similar binary representations are linked during decoding. Although GCNs [19] are also utilized in [37, 47], those works generally use pre-computed graphs and hence cannot handle the 'static graph' problem.
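The batch-wise graph construction and a single graph convolution can be sketched in numpy as follows (a simplified illustration with made-up shapes and an assumed ReLU activation, not the paper's TensorFlow implementation):

```python
import numpy as np

def hamming_adjacency(B: np.ndarray) -> np.ndarray:
    """a_ij = 1 - hamming(b_i, b_j) / M for a batch B of 0/1 codes (N x M)."""
    M = B.shape[1]
    dist = B @ (1 - B).T + (1 - B) @ B.T  # pairwise Hamming distances
    return 1.0 - dist / M

def gcn_layer(A: np.ndarray, Z: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One symmetrically normalized GCN layer: relu(D^(-1/2) A D^(-1/2) Z W)."""
    d = A.sum(axis=1)
    A_norm = A / np.sqrt(np.outer(d, d))
    return np.maximum(A_norm @ Z @ W, 0.0)

B = np.array([[1, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 0]], dtype=float)
A = hamming_adjacency(B)  # codes 0 and 1 differ in one of four bits -> A[0, 1] = 0.75
rng = np.random.default_rng(1)
U = gcn_layer(A, rng.normal(size=(3, 2)), rng.normal(size=(2, 5)))
```

Items with identical codes obtain adjacency 1 and thus mix their continuous features most strongly, which is exactly the trainable pathway described above.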
3.2.3 Decoder: Rewarding the Hashing Results
The decoder is an auxiliary component of the model, determining the quality of the codes produced by the encoder. As shown in Fig. 2, the decoder of TBH consists of two fully-connected layers on top of the GCN [19] layer. We impose a ReLU [32] nonlinearity on the first layer and an identity activation on the second. Therefore, the decoding output is represented as x̂ = f_dec(u; θ_dec), where θ_dec refers to the network parameters within the scope of the decoder and u is a row vector of U generated by the GCN [19]. We elaborate the detailed loss of the decoding side in Sec. 3.4.
To keep the content concise, we do not employ a large convolutional network that receives and generates raw images, since our goal is to learn compact binary features. The decoder provides a deterministic reconstruction penalty, e.g., the ℓ2 norm, back to the encoders during optimization. This ensures a more stable and controllable training procedure than implicit generation penalties, e.g., the discriminators in GAN-based hashing [10, 39, 49].
3.3 Implicit Bottleneck Regularization
The latent variables in the bottlenecks are regularized to avoid wasting bits and to align representation distributions. Different from deterministic regularization terms such as bit decorrelation [9, 27] and entropy-like losses [8], TBH mimics WAE [41] to adversarially regularize the latent variables with auxiliary discriminators. The detailed settings of the two discriminators, D_b and D_z, are illustrated in Fig. 2; each involves two fully-connected layers, successively with ReLU [32] and sigmoid nonlinearities.
In order to balance zeros and ones in a binary code, we assume that b is priored by a binomial distribution B(M, 0.5), which maximizes the code entropy. Meanwhile, regularization is also applied to the continuous variables after the GCN for decoding. We expect u to obey a uniform distribution so as to fully explore the latent space. To that end, we employ the following two discriminators, D_b for b and D_z for u, respectively:

D_b(·; θ_{D_b}): {0, 1}^M → (0, 1),   D_z(·; θ_{D_z}): R^L → (0, 1),   (6)

where β and γ are random signals sampled from the target distributions, serving as the 'real' inputs for implicitly regularizing b and u, respectively.
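The 'real' samples fed to the two discriminators, and the loss they minimize, can be sketched as follows (variable names and shapes are illustrative assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(42)
batch, M, L = 8, 16, 32

beta = rng.integers(0, 2, size=(batch, M)).astype(float)  # fair-coin bits: max entropy
gamma = rng.uniform(0.0, 1.0, size=(batch, L))            # uniform continuous targets

def discriminator_loss(d_real: np.ndarray, d_fake: np.ndarray) -> float:
    """Standard GAN discriminator loss: score prior samples high, encoder outputs low."""
    eps = 1e-8
    return float(-np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)))
```

A confident, correct discriminator (d_real near 1, d_fake near 0) yields a small loss; its gradient, in turn, pushes the encoder outputs toward the priors.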
The WAE-like regularizers focus on minimizing the distributional discrepancy between the produced feature space and the target one. This design fits TBH better than deterministic regularizers [8, 9], since such regularizers (e.g., bit decorrelation) impose direct penalties on each sample, which may heavily skew the similarity graph built upon the codes and consequently degrade the training quality. Experiments also support this insight (see Table 3).
3.4 Learning Codes and Similarity by Decoding
As TBH involves adversarial training, two learning objectives, i.e., L_AE for the auto-encoding step and L_D for the discriminating step, are respectively introduced.
3.4.1 Auto-Encoding Objective
The auto-encoding objective is written as follows:

L_AE = E_x E_b [ ||x - x̂||_2^2 - λ (log D_b(b) + log D_z(u)) ],   (7)

where λ is a hyper-parameter controlling the penalty of the discriminators according to [41]. Here b is obtained from Eq. (3), u is computed according to Eq. (5), and the decoding result x̂ is obtained from the decoder. L_AE is used to optimize the network parameters within the scope of the auto-encoder. Eq. (7) comes with an expectation term over the latent binary code, since b is generated by a sampling process.
Inspired by [8], we estimate the gradient through the binary bottleneck with distributional derivatives by utilizing the set of random signals ε. The gradient of L_AE w.r.t. θ_b is estimated by:

∂L_AE/∂θ_b ≈ E_ε [ (∂L_AE/∂b) (∂f_b(x; θ_b)/∂θ_b) ],   (8)

We refer the reader to [8] and our Supplementary Material for the details of Eq. (8). Notably, a similar sampling-based gradient estimation approach for discrete variables was employed in [2], which has been proved to be a special case of the REINFORCE algorithm [43].
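In spirit, such estimators pass a surrogate gradient through the non-differentiable sampling step. The sketch below shows the simplest member of this family, a straight-through pass as in [2]; it is a simplification for intuition only, not the exact distributional-derivative form of [8]:

```python
import numpy as np

def forward(probs: np.ndarray, eps: np.ndarray) -> np.ndarray:
    """Non-differentiable sampling of binary bits (cf. Eq. (3))."""
    return (probs >= eps).astype(float)

def backward(grad_wrt_b: np.ndarray) -> np.ndarray:
    """Straight-through surrogate: treat db/dprobs as the identity, so the
    upstream gradient reaches the encoder logits unchanged."""
    return grad_wrt_b

b = forward(np.array([0.7, 0.3]), np.array([0.5, 0.5]))  # -> [1., 0.]
g = backward(np.array([0.2, -0.1]))                      # gradient to the encoder
```

The forward pass stays strictly binary, which is exactly the property that keeps the Hamming graph valid during training.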
3.4.2 Discriminating Objective
The discriminating objective is defined by:

L_D = -λ E [ log D_b(β) + log(1 - D_b(b)) + log D_z(γ) + log(1 - D_z(u)) ],   (9)

Here λ refers to the same hyper-parameter as in Eq. (7). Similarly, L_D optimizes the network parameters within the scope of the two discriminators. As the discriminating step does not propagate error back to the auto-encoder, there is no need to estimate the gradient through the non-differentiable binary bottleneck. Thus the expectation over the sampled binary codes in Eq. (7) is not needed in Eq. (9).
The training procedure of TBH is summarized in Algorithm 1, where the adaptive gradient scaler is realized by the Adam optimizer [18]. Monte Carlo sampling is performed on the binary bottleneck once a data batch is fed to the encoder. Thereafter, the learning objectives can be computed from the network outputs.
3.5 Out-of-Sample Extension
After TBH is trained, we can obtain the binary code of any out-of-sample data point as follows:

b_q = (sgn(f_b(x_q; θ_b) - 0.5) + 1) / 2,   (10)

where x_q denotes a query data point. During the test phase, only f_b is required, which considerably eases the binary coding process. Since only forward propagation is involved for test data, the stochasticity on the encoder used for training in Eq. (2) is not needed.
CIFAR-10  NUS-WIDE  MS-COCO  
Method  Reference  16 bits  32 bits  64 bits  16 bits  32 bits  64 bits  16 bits  32 bits  64 bits 
LSH [6]  STOC02  0.106  0.102  0.105  0.239  0.266  0.266  0.353  0.372  0.341 
SpH [42]  NIPS09  0.272  0.285  0.300  0.517  0.511  0.510  0.527  0.529  0.546 
AGH [30]  ICML11  0.333  0.357  0.358  0.592  0.615  0.616  0.596  0.625  0.631 
SpherH [17]  CVPR12  0.254  0.291  0.333  0.495  0.558  0.582  0.516  0.547  0.589 
KMH [15]  CVPR13  0.279  0.296  0.334  0.562  0.597  0.600  0.543  0.554  0.592 
ITQ [12]  PAMI13  0.305  0.325  0.349  0.627  0.645  0.664  0.598  0.624  0.648 
DGH [29]  NIPS14  0.335  0.353  0.361  0.572  0.607  0.627  0.613  0.631  0.638 
DeepBit [27]  CVPR16  0.194  0.249  0.277  0.392  0.403  0.429  0.407  0.419  0.430 
SGH [8]  ICML17  0.435  0.437  0.433  0.593  0.590  0.607  0.594  0.610  0.618 
BGAN [39]  AAAI18  0.525  0.531  0.562  0.684  0.714  0.730  0.645  0.682  0.707 
BinGAN [49]  NIPS18  0.476  0.512  0.520  0.654  0.709  0.713  0.651  0.673  0.696 
GreedyHash [40]  NIPS18  0.448  0.473  0.501  0.633  0.691  0.731  0.582  0.668  0.710 
HashGAN [10]*  CVPR18  0.447  0.463  0.481             
DVB [36]  IJCV19  0.403  0.422  0.446  0.604  0.632  0.665  0.570  0.629  0.623 
DistillHash [45]  CVPR19  0.284  0.285  0.288  0.667  0.675  0.677       
TBH  Proposed  0.532  0.573  0.578  0.717  0.725  0.735  0.706  0.735  0.722 
*Note the duplicate naming of HashGAN, i.e., the unsupervised one [10] and the supervised one [3].
4 Experiments
We evaluate the performance of the proposed TBH on three large-scale image benchmarks, i.e., CIFAR-10, NUS-WIDE, and MS COCO. We additionally present results for image reconstruction on the MNIST dataset.
4.1 Implementation Details
The proposed TBH model is implemented with the popular deep learning toolbox TensorFlow [1]. The hidden layer sizes and the activation functions used in TBH are all provided in Fig. 2. The gradient estimation of Eq. (8) can be implemented with a single TensorFlow decorator in Python, following [8]. TBH only involves two hyper-parameters, i.e., the adversarial loss scaler λ and the continuous bottleneck size, whose effects are analyzed in Sec. 4.4.3. For all of our experiments, the fc_7 features of the AlexNet [22] network are utilized for data representation. The Adam optimizer [18] is adopted with its default decay rates. We fix the training batch size to 400. Our implementation can be found at https://github.com/ymcidence/TBH.
4.2 Datasets and Setup
CIFAR-10 [21] consists of 60,000 images from 10 classes. We follow the common setting [10, 40] and select 10,000 images (1,000 per class) as the query set. The remaining 50,000 images are regarded as the database.
NUS-WIDE [7] is a collection of nearly 270,000 images of 81 categories. Following the settings in [44], we adopt the subset of images from the 21 most frequent categories. 100 images of each class are utilized as the query set and the remaining images form the database. For training, we employ 10,500 images uniformly selected from the 21 classes.
MS COCO [28] is a benchmark for multiple tasks. We adopt the pruned set as in [4] with 122,218 images from 80 categories. We randomly select 5,000 images as queries, with the remaining ones forming the database, from which 10,000 images are chosen for training.
Standard metrics [4, 40] are adopted to evaluate our method and other state-of-the-art methods, i.e., Mean Average Precision (MAP), Precision-Recall (PR) curves, Precision curves within Hamming radius 2 (P@H=2), and Precision w.r.t. the top 1,000 returned samples (P@1000). We adopt MAP@1000 for CIFAR-10, and MAP@5000 for MS COCO and NUS-WIDE according to [4, 48].
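For concreteness, MAP over a Hamming ranking can be computed as in the single-label sketch below (a generic illustration, not the paper's evaluation script; multi-label datasets such as NUS-WIDE instead count any shared label as relevant):

```python
import numpy as np

def map_at_k(query_codes, db_codes, query_labels, db_labels, k):
    """Mean Average Precision over the top-k Hamming neighbours."""
    aps = []
    for q, ql in zip(query_codes, query_labels):
        dist = np.count_nonzero(db_codes != q, axis=1)  # Hamming distances
        order = np.argsort(dist, kind="stable")[:k]     # rank database by distance
        rel = (db_labels[order] == ql).astype(float)    # relevance indicator
        if rel.sum() == 0:
            aps.append(0.0)
            continue
        precision = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append(float((precision * rel).sum() / rel.sum()))
    return float(np.mean(aps))

db = np.array([[0, 0], [0, 1], [1, 1]])
db_y = np.array([0, 0, 1])
score = map_at_k(np.array([[0, 0]]), db, np.array([0]), db_y, k=3)  # -> 1.0
```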
CIFAR-10  MS-COCO  

Method  16 bits  32 bits  64 bits  16 bits  32 bits  64 bits 
KMH  0.242  0.252  0.284  0.557  0.572  0.612 
SpherH  0.228  0.256  0.291  0.525  0.571  0.612 
ITQ  0.276  0.292  0.309  0.607  0.637  0.662 
SpH  0.238  0.239  0.245  0.541  0.548  0.567 
AGH  0.306  0.321  0.317  0.602  0.635  0.644 
DGH  0.315  0.323  0.324  0.623  0.642  0.650 
HashGAN  0.418  0.436  0.455       
SGH  0.387  0.380  0.367  0.604  0.615  0.637 
GreedyHash  0.322  0.403  0.444  0.603  0.624  0.675 
TBH (Ours)  0.497  0.524  0.529  0.646  0.698  0.701 
4.3 Comparison with Existing Methods
We compare TBH with several state-of-the-art unsupervised hashing methods, including LSH [6], ITQ [12], SpH [42], SpherH [17], KMH [15], AGH [30], DGH [29], DeepBit [27], BGAN [39], HashGAN [10], SGH [8], BinGAN [49], GreedyHash [40], DVB [36] and DistillHash [45]. For fair comparisons, all methods are reported with identical training and test sets. Additionally, the shallow methods are evaluated with the same deep features as the ones we use.
4.3.1 Retrieval results
The MAP and P@1000 results of TBH and other methods are respectively provided in Tables 1 and 2, while the respective PR curves and P@H=2 results are illustrated in Fig. 3. The performance gap between TBH and existing unsupervised methods can be clearly observed. In particular, TBH obtains remarkable MAP gains with 16-bit codes. Among the unsupervised baselines, GreedyHash [40] performs closest to TBH. It bases the produced code similarity on pairwise feature distances. As discussed in Sec. 1, this design is straightforward but suboptimal, since the original feature space does not fully reveal data relevance. On the other hand, as a generative model, HashGAN [10] significantly underperforms TBH, as the binary constraints are violated during its adversarial training. TBH differs from SGH [8] by leveraging the twin-bottleneck scheme. Since SGH [8] only considers the reconstruction error within the auto-encoder, it generally does not produce convincing retrieval results.
4.3.2 Extremely short codes
Inspired by [40], we illustrate the retrieval performance with extremely short code lengths in Fig. 4 (a). TBH works well even at very small bit lengths, with a significant performance gain over SGH. This is because, during training, the continuous bottleneck complements the information discarded by the binary one.
Baseline  16 bits  32 bits  64 bits  

1  Single bottleneck  0.435  0.437  0.433 
2  Swapped bottlenecks  0.466  0.471  0.475 
3  Explicit regularization  0.524  0.559  0.560 
4  Without regularization  0.521  0.535  0.547 
5  Without stochastic neuron  0.408  0.412  0.463 
6  Fixed graph  0.442  0.464  0.459 
7  Attention equilibrium  0.477  0.503  0.519 
TBH (full model)  0.532  0.573  0.578 
4.4 Ablation Study
In this subsection, we validate the contribution of each component of TBH, and also show some empirical analysis. Different baseline network structures are visualized in the Supplementary Material for better understanding.
4.4.1 Component Analysis
We compare TBH with the following baselines. (1) Single bottleneck. This baseline coheres with SGH: we remove the twin-bottleneck structure and directly feed the binary codes to the decoder. (2) Swapped bottlenecks. We swap the functionalities of the two bottlenecks, i.e., using the continuous one for adjacency building and the binary one for decoding. (3) Explicit regularization. The WAE regularizers are replaced by conventional regularization terms: an entropy loss similar to SGH regularizes the binary bottleneck, while an ℓ2 norm is applied to the continuous one. (4) Without regularization. The regularization terms on both bottlenecks are removed. (5) Without stochastic neuron. The discrete stochastic neuron is removed from the top of the binary encoding layer, and a bit quantization loss [9] is appended instead. (6) Fixed graph. The adjacency is pre-computed using feature distances; the continuous bottleneck is removed and the GCN is applied to the binary bottleneck with this fixed graph. (7) Attention equilibrium. This baseline performs a weighted average on the continuous variables according to the adjacency, instead of employing the GCN in between.
Table 3 shows the performance of the baselines. We can observe that the model undergoes a significant performance drop when the twin-bottleneck structure is modified. Specifically, our trainable adaptive Hamming graph plays an important role in the network: when removing it (i.e., baseline 6), the performance decreases by 9%. This accords with our motivation in dealing with the 'static graph' problem. In practice, we also experience training perturbations when applying different regularization and quantization penalties to the model.
4.4.2 Reconstruction Error
As mentioned in the 'deficient BinBN' problem, decoding from a single binary bottleneck is less effective. This is illustrated in Fig. 4 (b), where the normalized reconstruction errors of TBH, baseline 1 and baseline 4 are plotted. TBH produces a lower decoding error than the single-bottleneck baseline. Note that baseline 1 structurally coincides with SGH [8].
4.4.3 HyperParameter
Only two hyper-parameters are involved in TBH. The effect of the adversarial loss scaler λ is illustrated in Fig. 4 (c): a large regularization penalty only slightly influences the overall retrieval performance. The results w.r.t. different sizes of the continuous bottleneck on CIFAR-10 are shown in Fig. 4 (d). Typically, no dramatic performance drop is observed when squeezing the bottleneck, as data relevance is not solely reflected by the continuous bottleneck. Even when its dimensionality is set to 64, TBH still outperforms most existing unsupervised methods, which further endorses the twin-bottleneck mechanism.
4.5 Qualitative Results
We provide some intuitive results to further justify the design. The implementation details are given in the Supplementary Material to keep the content concise.
4.5.1 The Constructed Graph by Hash Codes
We show the effectiveness of the code-driven graph learning process in Fig. 5. 20 random samples are selected from a training batch to plot the adjacency. The twin-bottleneck mechanism automatically tunes the codes, constructing the adjacency according to Eq. (4). Though TBH has no access to labels, the constructed adjacency closely resembles the label-based one. Here brighter colors indicate closer Hamming distances.
4.5.2 Visualization
Fig. 6 (a) shows the t-SNE [31] visualization results of TBH. Most of the data from different categories are clearly scattered. Interestingly, TBH successfully locates visually similar categories within short Hamming distances, e.g., Automobile/Truck and Deer/Horse. Some qualitative image retrieval results w.r.t. 32-bit codes are shown in Fig. 6 (b).
4.5.3 Toy Experiment on MNIST
Following [8], another toy experiment on image reconstruction with the MNIST [26] dataset is conducted. For this task, we directly use the flattened image pixels as the network input. The reconstruction results are reported in Fig. 6 (c). Some badly written digits are falsely decoded into wrong numbers by SGH [8], while this phenomenon is rarely observed with TBH. This also supports our insight in addressing the 'deficient BinBN' problem.
5 Conclusion
In this paper, a novel unsupervised hashing model was proposed with an auto-encoding twin-bottleneck mechanism, namely Twin-Bottleneck Hashing (TBH). The binary bottleneck explored the intrinsic data structure by adaptively constructing the code-driven similarity graph. The continuous bottleneck interactively adopted data relevance information from the binary codes for high-quality decoding and reconstruction. The proposed TBH model was fully trainable with SGD and required no empirical assumption on data similarity. Extensive experiments revealed that TBH remarkably boosted state-of-the-art unsupervised hashing performance in image retrieval.
References
 [1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467, 2016.
 [2] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
 [3] Yue Cao, Bin Liu, Mingsheng Long, and Jianmin Wang. Hashgan: Deep learning to hash with pair conditional wasserstein gan. In CVPR, 2018.
 [4] Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Philip S Yu. Hashnet: Deep learning to hash by continuation. In ICCV, 2017.

 [5] Miguel A Carreira-Perpinán and Ramin Raziperchikolaei. Hashing with binary autoencoders. In CVPR, 2015.
 [6] Moses S Charikar. Similarity estimation techniques from rounding algorithms. In STOC, 2002.
 [7] Tat-Seng Chua, Jinhui Tang, Richang Hong, Haojie Li, Zhiping Luo, and Yantao Zheng. NUS-WIDE: a real-world web image database from National University of Singapore. In CIVR, 2009.
 [8] Bo Dai, Ruiqi Guo, Sanjiv Kumar, Niao He, and Le Song. Stochastic generative hashing. In ICML, 2017.
 [9] Venice Erin Liong, Jiwen Lu, Gang Wang, Pierre Moulin, and Jie Zhou. Deep hashing for compact binary codes learning. In CVPR, 2015.
 [10] Kamran Ghasedi Dizaji, Feng Zheng, Najmeh Sadoughi, Yanhua Yang, Cheng Deng, and Heng Huang. Unsupervised deep generative adversarial hashing network. In CVPR, 2018.
 [11] Yunchao Gong, Svetlana Lazebnik, Albert Gordo, and Florent Perronnin. Iterative quantization: A procrustean approach to learning binary codes for largescale image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12):2916–2929, 2013.
 [12] Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A procrustean approach to learning binary codes for largescale image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12):2916–2929, 2013.
 [13] Ian Goodfellow, Jean PougetAbadie, Mehdi Mirza, Bing Xu, David WardeFarley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS, 2014.
 [14] David K Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.
 [15] Kaiming He, Fang Wen, and Jian Sun. Kmeans hashing: An affinitypreserving quantization method for learning binary compact codes. In CVPR, 2013.
 [16] Xiangyu He, Peisong Wang, and Jian Cheng. Knearest neighbors hashing. In CVPR, 2019.
 [17] JaePil Heo, Youngwoon Lee, Junfeng He, ShihFu Chang, and SungEui Yoon. Spherical hashing. In CVPR, 2012.
 [18] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[19] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
[20] Weihao Kong and Wu-Jun Li. Isotropic hashing. In NeurIPS, 2012.
 [21] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical Report, University of Toronto, 2009.
 [22] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NeurIPS, 2012.
 [23] Brian Kulis and Trevor Darrell. Learning to hash with binary reconstructive embeddings. In NeurIPS, 2009.
[24] Brian Kulis and Kristen Grauman. Kernelized locality-sensitive hashing for scalable image search. In ICCV, 2009.
 [25] Hanjiang Lai, Yan Pan, Ye Liu, and Shuicheng Yan. Simultaneous feature learning and hash coding with deep neural networks. In CVPR, 2015.
[26] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, Nov 1998.
[27] Kevin Lin, Jiwen Lu, Chu-Song Chen, and Jie Zhou. Learning compact binary descriptors with unsupervised deep neural networks. In CVPR, 2016.
[28] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[29] Wei Liu, Cun Mu, Sanjiv Kumar, and Shih-Fu Chang. Discrete graph hashing. In NeurIPS, 2014.
[30] Wei Liu, Jun Wang, Sanjiv Kumar, and Shih-Fu Chang. Hashing with graphs. In ICML, 2011.
[31] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
[32] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
 [33] Ruslan Salakhutdinov and Geoffrey Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7), 2009.
 [34] Fumin Shen, Chunhua Shen, Wei Liu, and Heng Tao Shen. Supervised discrete hashing. In CVPR, 2015.
[35] Fumin Shen, Yan Xu, Li Liu, Yang Yang, Zi Huang, and Heng Tao Shen. Unsupervised deep hashing with similarity-adaptive and discrete optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12):3034–3044, 2018.
[36] Yuming Shen, Li Liu, and Ling Shao. Unsupervised binary representation learning with deep variational networks. International Journal of Computer Vision, 127(11-12):1614–1628, 2019.
[37] Yuming Shen, Li Liu, Fumin Shen, and Ling Shao. Zero-shot sketch-image hashing. In CVPR, 2018.
 [38] Yuming Shen, Jie Qin, Jiaxin Chen, Li Liu, Fan Zhu, and Ziyi Shen. Embarrassingly simple binary representation learning. In ICCV Workshops, 2019.
 [39] Jingkuan Song, Tao He, Lianli Gao, Xing Xu, Alan Hanjalic, and Heng Tao Shen. Binary generative adversarial networks for image retrieval. In AAAI, 2018.
 [40] Shupeng Su, Chao Zhang, Kai Han, and Yonghong Tian. Greedy hash: Towards fast optimization for accurate hash coding in cnn. In NeurIPS, 2018.
 [41] Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. Wasserstein autoencoders. In ICLR, 2018.
 [42] Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral hashing. In NeurIPS, 2009.
[43] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
[44] Rongkai Xia, Yan Pan, Hanjiang Lai, Cong Liu, and Shuicheng Yan. Supervised hashing for image retrieval via image representation learning. In AAAI, 2014.
 [45] Erkun Yang, Tongliang Liu, Cheng Deng, Wei Liu, and Dacheng Tao. Distillhash: Unsupervised deep hashing by distilling data pairs. In CVPR, 2019.
[46] Mengyang Yu, Li Liu, and Ling Shao. Structure-preserving binary representations for RGB-D action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(8):1651–1664, 2016.
 [47] X. Zhou, F. Shen, L. Liu, W. Liu, L. Nie, Y. Yang, and H. T. Shen. Graph convolutional network hashing. IEEE Transactions on Cybernetics, pages 1–13, 2018.
 [48] Han Zhu, Mingsheng Long, Jianmin Wang, and Yue Cao. Deep hashing network for efficient similarity retrieval. In AAAI, 2016.
[49] Maciej Zieba, Piotr Semberecki, Tarek El-Gaaly, and Tomasz Trzcinski. BinGAN: Learning compact binary descriptors with a regularized GAN. In NeurIPS, 2018.