1 Introduction

The need to model and compute similarity between objects is central to many applications, ranging from medical imaging to biometric security. Problems in different fields require comparing objects as diverse as functions, images, geometric shapes, probability distributions, or text documents, and each such problem has its own notion of data similarity.
A particularly challenging case of similarity arises in applications dealing with multimodal data, which have different representations, dimensionality, and structure. Data of this kind are encountered prominently in medical imaging (e.g., fusion of different imaging modalities such as PET and CT) and in multimedia retrieval (e.g., querying image databases by text keywords). Such data are as incomparable as apples and oranges by means of standard metrics and require the notion of multimodal similarity.
While such multimodal similarity is difficult to model, in many cases it is easy to learn from examples. For instance, in Internet vision applications we can easily obtain multiple examples of visual objects together with a binary similarity function telling whether two objects are similar or not. Learning and representing such similarities in a convenient way is a significant challenge.
A particular setting of the similarity representation problem is similarity-sensitive hashing [7]. In [4], we extended the boosting-based similarity-sensitive hashing (SSH) method to the multimodal setting (referred to as cross-modality SSH, or CM-SSH). This is, to the best of our knowledge, the first and only multimodal similarity-preserving hashing algorithm in the literature.
The purpose of this paper is to develop a different, simpler, and more efficient multimodal hashing algorithm. The rest of the paper is organized as follows. In Section 2, we formulate the problem of multimodal hashing. In Section 3, we overview the CM-SSH algorithm. In Section 4, we propose our new method (cross-modality diff-hash, or CM-DIF), and in Section 5 we discuss its extension (multimodal kernel diff-hash, or MM-kDIF) using kernelization. Section 6 shows some experimental results.
2 Problem formulation

Let $X \subseteq \mathbb{R}^{n_1}$ and $Y \subseteq \mathbb{R}^{n_2}$ be two spaces representing data belonging to different modalities (e.g., $X$ are images and $Y$ are text descriptions). Note that even though we assume that the data can be represented in Euclidean spaces, the similarity of the data is not necessarily Euclidean and in general can be described by some metrics $d_X : X \times X \to \mathbb{R}$ and $d_Y : Y \times Y \to \mathbb{R}$, to which we refer as intra-modal dissimilarities. Furthermore, we assume that there exists some inter-modal dissimilarity $d_{XY} : X \times Y \to \mathbb{R}$ quantifying the “distance” between points in different modalities. The ensemble of intra- and inter-modal structures is not necessarily a metric in the strict sense. In order to deal with these structures in a more convenient way, we try to represent them in a common metric space.
The broader problem of multimodal hashing is to represent the data from the different modalities in a common space $\mathbb{H}^n = \{\pm 1\}^n$ of $n$-dimensional binary vectors with the Hamming metric

$d_{\mathbb{H}^n}(a, b) = \frac{n}{2} - \frac{1}{2} \sum_{i=1}^{n} a_i b_i$

by means of two embeddings, $\xi : X \to \mathbb{H}^n$ and $\eta : Y \to \mathbb{H}^n$, mapping similar points as close as possible to each other and dissimilar points as distant as possible from each other, such that $d_{\mathbb{H}^n} \circ (\xi \times \xi) \approx d_X$, $d_{\mathbb{H}^n} \circ (\eta \times \eta) \approx d_Y$, and $d_{\mathbb{H}^n} \circ (\xi \times \eta) \approx d_{XY}$. In a sense, the embeddings act as a metric coupling, trying to construct a single metric that preserves the intra- and inter-modal similarities.
A simplified setting of the multimodal hashing problem is cross-modality hashing, in which only the inter-modal dissimilarity $d_{XY}$ is taken into consideration, while $d_X$ and $d_Y$ are ignored.
For simplicity, in the following discussion we assume the inter-modal dissimilarity to be binary, $d_{XY} \in \{0, 1\}$, i.e., a pair of points can be either similar or dissimilar. This dissimilarity is usually unknown and hard to model; however, it should be possible to sample it on some subset of the data $X' \times Y' \subseteq X \times Y$. This sample can be represented as a set of similar pairs of points (positives), $\mathcal{P} = \{(x, y) \in X' \times Y' : d_{XY}(x, y) = 0\}$, and a set of dissimilar pairs of points (negatives), $\mathcal{N} = \{(x, y) \in X' \times Y' : d_{XY}(x, y) = 1\}$.
The problem of cross-modality hashing thus boils down to finding two embeddings $\xi$ and $\eta$ such that $d_{\mathbb{H}^n} \circ (\xi \times \eta) \approx d_{XY}$. Alternatively, this can be expressed as having $\mathbb{E}\{d_{\mathbb{H}^n}(\xi(x), \eta(y)) \,|\, \mathcal{P}\} \approx 0$ (i.e., the hash has high collision probability on the set of positives) and $\mathbb{E}\{d_{\mathbb{H}^n}(\xi(x), \eta(y)) \,|\, \mathcal{N}\} \approx n$. The former can be interpreted as the false negative rate (FNR) and the latter as the false positive rate (FPR).
3 Cross-modality similarity-sensitive hashing (CM-SSH)
In [4], we introduced the cross-modality similarity-sensitive hashing (CM-SSH) method, which is, to the best of our knowledge, the first and only multimodal hashing algorithm existing to date. The idea closely follows the similarity-sensitive hashing (SSH) method [7], considering the hash construction as boosted binary classification, where each hash dimension acts as a weak binary classifier. Parametrizing the $i$-th dimension as $\xi_i(x) = \mathrm{sign}(\mathbf{p}_i^{\mathrm{T}} x + a_i)$ and $\eta_i(y) = \mathrm{sign}(\mathbf{q}_i^{\mathrm{T}} y + b_i)$, AdaBoost is used to minimize the following loss function,

$L_i = -\sum_{(x, y) \in X' \times Y'} w_i(x, y)\, s(x, y)\, \xi_i(x)\, \eta_i(y), \qquad (1)$

where $s(x, y) = \pm 1$ is the binary inter-modal similarity and $w_i(x, y)$ is the AdaBoost weight of the pair $(x, y)$ at the $i$-th iteration. Since the minimization problem (1) is difficult, it is relaxed in the following way: first, removing the non-linearity and setting $a_i = b_i = 0$, find the projection vectors $\mathbf{p}_i$ and $\mathbf{q}_i$; then, fixing the projections, find the thresholds $a_i$ and $b_i$.
The disadvantages of the boosting-based CM-SSH are, first, its high computational complexity and, second, its tendency to produce unnecessarily long hashes (the latter problem can be partially resolved by using sequential probability testing [1], which creates hashes of minimum expected length).
4 Cross-modality diff-hash (CM-DIF)
In [8], we proposed a different and simpler approach (dubbed diff-hash) to create similarity-sensitive hash functions in the unimodal setting. We adopt similar ideas here to develop multimodal similarity-sensitive hashing algorithms.
The optimal cross-modality hashing can be found by minimizing the loss

$L(\xi, \eta) = \mathbb{E}\{d_{\mathbb{H}^n}(\xi(x), \eta(y)) \,|\, \mathcal{P}\} - \alpha\, \mathbb{E}\{d_{\mathbb{H}^n}(\xi(x), \eta(y)) \,|\, \mathcal{N}\} \qquad (2)$

with respect to the embedding functions, which we parametrize as $\xi(x) = \mathrm{sign}(\mathbf{P} x + \mathbf{a})$ and $\eta(y) = \mathrm{sign}(\mathbf{Q} y + \mathbf{b})$. Up to constants, this is equivalent to minimizing the correlations

$L(\mathbf{P}, \mathbf{Q}, \mathbf{a}, \mathbf{b}) = \alpha\, \mathbb{E}\{\xi(x)^{\mathrm{T}} \eta(y) \,|\, \mathcal{N}\} - \mathbb{E}\{\xi(x)^{\mathrm{T}} \eta(y) \,|\, \mathcal{P}\} \qquad (3)$

w.r.t. the $n \times n_1$ and $n \times n_2$ projection matrices $\mathbf{P}$ and $\mathbf{Q}$ and the threshold vectors $\mathbf{a}$ and $\mathbf{b}$. The first and second terms in (3) can be thought of as the FPR and FNR, respectively. The parameter $\alpha \ge 0$ controls the tradeoff between the FPR and the FNR; the limit case $\alpha = 0$ effectively considers only the positive pairs, ignoring the negative set.
Problem (3) is a highly non-convex, non-linear optimization problem that is difficult to solve straightforwardly. Similarly to [8, 4], we simplify it in the following way: first, ignoring the thresholds and removing the sign non-linearity, we solve a simplified problem for the projection matrices $\mathbf{P}$ and $\mathbf{Q}$ (Section 4.1); second, fixing the projections, we find the optimal thresholds $\mathbf{a}$ and $\mathbf{b}$ (Section 4.2).
4.1 Projection computation
Dropping the sign function and the thresholds, the loss function (3) becomes

$\hat{L}(\mathbf{P}, \mathbf{Q}) = \alpha\, \mathbb{E}\{x^{\mathrm{T}} \mathbf{P}^{\mathrm{T}} \mathbf{Q} y \,|\, \mathcal{N}\} - \mathbb{E}\{x^{\mathrm{T}} \mathbf{P}^{\mathrm{T}} \mathbf{Q} y \,|\, \mathcal{P}\} = -\operatorname{tr}\left(\mathbf{P}\, \Sigma_{\mathrm{D}}\, \mathbf{Q}^{\mathrm{T}}\right), \qquad (4)$

where $\Sigma_{\mathcal{P}} = \mathbb{E}\{x y^{\mathrm{T}} \,|\, \mathcal{P}\}$ and $\Sigma_{\mathcal{N}} = \mathbb{E}\{x y^{\mathrm{T}} \,|\, \mathcal{N}\}$ denote the covariance matrices of the positive and negative multimodal data, respectively, and $\Sigma_{\mathrm{D}} = \Sigma_{\mathcal{P}} - \alpha \Sigma_{\mathcal{N}}$ is the weighted difference of these covariances. The name of the algorithm, cross-modality diff-hash (CM-DIF), in fact refers to this covariance difference matrix. Note that in order to avoid a trivial solution, we must constrain the projection matrices to be unitary, i.e., $\mathbf{P} \mathbf{P}^{\mathrm{T}} = \mathbf{I}$ and $\mathbf{Q} \mathbf{Q}^{\mathrm{T}} = \mathbf{I}$.
The difference of covariance matrices has a singular value decomposition of the form $\Sigma_{\mathrm{D}} = \mathbf{U} \mathbf{S} \mathbf{V}^{\mathrm{T}}$, where $\mathbf{U}$ and $\mathbf{V}$ are unitary matrices of singular vectors of size $n_1 \times n_1$ and $n_2 \times n_2$, respectively ($\mathbf{U}^{\mathrm{T}} \mathbf{U} = \mathbf{I}$, $\mathbf{V}^{\mathrm{T}} \mathbf{V} = \mathbf{I}$), and $\mathbf{S}$ is an $n_1 \times n_2$ diagonal matrix of singular values.
It can be easily shown that this relaxed loss is minimized by setting the projection matrices to the $n$ largest left and right singular vectors of the matrix $\Sigma_{\mathrm{D}}$, respectively: $\mathbf{P}^{*} = [\mathbf{u}_1, \ldots, \mathbf{u}_n]^{\mathrm{T}}$ and $\mathbf{Q}^{*} = [\mathbf{v}_1, \ldots, \mathbf{v}_n]^{\mathrm{T}}$. From this result it also follows that the problem is separable, and each dimension can be treated independently.
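As an illustration, the projection step above can be sketched in a few lines of NumPy (the function name, the paired-rows data layout, and the default $\alpha$ are our own illustrative assumptions, not part of the method's specification):

```python
import numpy as np

def cmdif_projections(X_pos, Y_pos, X_neg, Y_neg, n_bits, alpha=0.5):
    """CM-DIF projections: rows of P and Q are the largest singular vectors
    of the weighted covariance difference of positive/negative pairs.

    X_pos, Y_pos: (N_pos, n1) and (N_pos, n2) arrays, row i of each forming
    a positive cross-modal pair; X_neg, Y_neg likewise for negatives.
    """
    # Empirical cross-modal covariances E{x y^T | P} and E{x y^T | N}.
    sigma_pos = X_pos.T @ Y_pos / len(X_pos)      # n1 x n2
    sigma_neg = X_neg.T @ Y_neg / len(X_neg)      # n1 x n2
    sigma_d = sigma_pos - alpha * sigma_neg       # covariance difference
    U, s, Vt = np.linalg.svd(sigma_d, full_matrices=False)
    # Keep the n_bits largest left/right singular vectors (n_bits <= min(n1, n2)).
    return U[:, :n_bits].T, Vt[:n_bits, :]        # P: n_bits x n1, Q: n_bits x n2
```

Note that `np.linalg.svd` returns singular values in decreasing order, so truncation automatically keeps the largest singular vectors.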
4.2 Threshold selection
Having the projection matrices fixed, the loss function (3) can be written, up to constants, as

$L(\mathbf{a}, \mathbf{b}) = \sum_{i=1}^{n} \left(\alpha\, \mathrm{FPR}_i(a_i, b_i) + \mathrm{FNR}_i(a_i, b_i)\right). \qquad (5)$

The problem is separable and can be solved independently in each dimension $i = 1, \ldots, n$. We express the false positive and false negative rates as functions of the thresholds,

$\mathrm{FPR}_i(a_i, b_i) = \Pr\left(\operatorname{sign}(\mathbf{p}_i^{\mathrm{T}} x + a_i) = \operatorname{sign}(\mathbf{q}_i^{\mathrm{T}} y + b_i) \,\big|\, \mathcal{N}\right),$
$\mathrm{FNR}_i(a_i, b_i) = \Pr\left(\operatorname{sign}(\mathbf{p}_i^{\mathrm{T}} x + a_i) \neq \operatorname{sign}(\mathbf{q}_i^{\mathrm{T}} y + b_i) \,\big|\, \mathcal{P}\right).$

These probabilities can be estimated from histograms (cumulative distributions) of the projections $\mathbf{p}_i^{\mathrm{T}} x$ and $\mathbf{q}_i^{\mathrm{T}} y$ on the positive and negative sets. The optimal thresholds,

$(a_i^{*}, b_i^{*}) = \underset{a_i, b_i}{\operatorname{argmin}}\ \alpha\, \mathrm{FPR}_i(a_i, b_i) + \mathrm{FNR}_i(a_i, b_i),$

are obtained by means of exhaustive search. To reduce the complexity of this search, we define a grid on the threshold parameter space.
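The per-dimension threshold search can be sketched as follows (a minimal NumPy illustration; the grid construction and the default parameters are our own assumptions):

```python
import numpy as np

def select_thresholds(px_pos, qy_pos, px_neg, qy_neg, alpha=0.5, grid_size=32):
    """Exhaustive grid search for the thresholds (a_i, b_i) of one hash dimension.

    px_pos, qy_pos: projections p_i^T x and q_i^T y over the positive pairs;
    px_neg, qy_neg: the same over the negative pairs (1-D arrays).
    """
    # Candidate thresholds on a grid spanning the observed projection values.
    px_all = np.concatenate([px_pos, px_neg])
    qy_all = np.concatenate([qy_pos, qy_neg])
    a_grid = np.linspace(-px_all.max(), -px_all.min(), grid_size)
    b_grid = np.linspace(-qy_all.max(), -qy_all.min(), grid_size)
    best = (np.inf, 0.0, 0.0)
    for a in a_grid:
        for b in b_grid:
            # FNR: positive pairs hashed to different bits.
            fnr = np.mean(np.sign(px_pos + a) != np.sign(qy_pos + b))
            # FPR: negative pairs hashed to the same bit.
            fpr = np.mean(np.sign(px_neg + a) == np.sign(qy_neg + b))
            if alpha * fpr + fnr < best[0]:
                best = (alpha * fpr + fnr, a, b)
    return best[1], best[2]
```

In practice the rates would be read off precomputed cumulative histograms rather than recomputed per grid node, but the objective being minimized is the same.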
4.3 Hash function application
Once the projections and thresholds are computed, given new data points $x \in X$ and $y \in Y$, we construct the corresponding $n$-dimensional binary hash vectors as $\xi(x) = \operatorname{sign}(\mathbf{P}^{*} x + \mathbf{a}^{*})$ and $\eta(y) = \operatorname{sign}(\mathbf{Q}^{*} y + \mathbf{b}^{*})$.
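Applying the hash then amounts to a matrix product, a shift, and a sign (illustrative helper names; the $\pm 1$ encoding matches the Hamming metric of Section 2):

```python
import numpy as np

def apply_hash(Z, proj, thresh):
    # sign(P z + a) for each row z of Z, encoded with +/-1 entries.
    return np.where(Z @ proj.T + thresh >= 0, 1, -1)

def hamming(h1, h2):
    # d_H(a, b) = n/2 - (1/2) a^T b for +/-1-valued hash vectors.
    return (len(h1) - h1 @ h2) / 2
```

At retrieval time, a query $x$ is hashed with $(\mathbf{P}^{*}, \mathbf{a}^{*})$, database points $y$ are hashed with $(\mathbf{Q}^{*}, \mathbf{b}^{*})$, and matching is performed purely in Hamming space.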
5 Multimodal kernel diff-hash (MM-kDIF)
An obvious disadvantage of diff-hash (and spectral methods in general) compared to AdaBoost-based methods is that it must be dimensionality-reducing: since we compute the projections $\mathbf{P}$ and $\mathbf{Q}$ from the singular vectors of a covariance matrix of size $n_1 \times n_2$, the dimensionality of the embedding space must satisfy $n \le \min(n_1, n_2)$. In some cases, such a dimensionality may be too low and may not allow correct separation of the data. A second disadvantage, inherent to the cross-modality hashing problem in general, is that it considers only the inter-modal similarity $d_{XY}$, ignoring the intra-modal similarities $d_X$ and $d_Y$.
A standard way to cope with the first problem is the kernel trick [6], which maps the data into some feature space that is never dealt with explicitly (only inner products in this space, referred to as the kernel, are required). A kernel version of the unimodal diff-hash was described in [3]. Here, we show that the use of kernels also allows incorporating the intra-modal similarities into the problem.
Since the problem is separable (as we have seen, the projection in each dimension corresponds to a singular vector of the covariance difference matrix), we consider for simplicity one-dimensional projections.
The whole method is summarized in Algorithm 2. Since it considers (though implicitly) the intra-modal dissimilarities in addition to the inter-modal dissimilarity, we refer to it as multimodal kernel diff-hash (MM-kDIF).
5.1 Projection computation
Let $k_X : X \times X \to \mathbb{R}$ be a positive semi-definite kernel, and let $\phi : X \to \mathcal{H}$ be the associated map. The map $\phi$ takes the data into some feature space, which we represent here as a Hilbert space $\mathcal{H}$ (possibly of infinite dimension) with an inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$; the kernel satisfies $k_X(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{H}}$. In the same way, we define a kernel $k_Y$ and the associated map $\psi : Y \to \mathcal{G}$ into some other Hilbert space $\mathcal{G}$ for the second modality.
The idea of kernelization is to replace the original data points $x$ and $y$ with the corresponding feature vectors $\phi(x)$ and $\psi(y)$, replacing the linear projections $\mathbf{p}^{\mathrm{T}} x$ and $\mathbf{q}^{\mathrm{T}} y$ with

$p(x) = \sum_{l=1}^{k_1} u_l\, k_X(x_l, x) \quad \text{and} \quad q(y) = \sum_{l=1}^{k_2} v_l\, k_Y(y_l, y),$

respectively. Here, $u_1, \ldots, u_{k_1}$ and $v_1, \ldots, v_{k_2}$ are unknown linear combination coefficients, and $x_1, \ldots, x_{k_1} \in X$ and $y_1, \ldots, y_{k_2} \in Y$ denote some representative points of each modality, acting as respective bases of the subspaces used for the representation of the data in each modality.
In this formulation, the approximate loss becomes

$\hat{L}(\mathbf{u}, \mathbf{v}) = \mathbf{u}^{\mathrm{T}} \left( \frac{\alpha}{|\mathcal{N}|}\, \mathbf{K}_X^{\mathcal{N}} (\mathbf{K}_Y^{\mathcal{N}})^{\mathrm{T}} - \frac{1}{|\mathcal{P}|}\, \mathbf{K}_X^{\mathcal{P}} (\mathbf{K}_Y^{\mathcal{P}})^{\mathrm{T}} \right) \mathbf{v} = -\mathbf{u}^{\mathrm{T}} \mathbf{K}_{\mathrm{D}}\, \mathbf{v}, \qquad (6)$

where $\mathbf{K}_X^{\mathcal{P}}$ and $\mathbf{K}_X^{\mathcal{N}}$ denote $k_1 \times |\mathcal{P}|$ and $k_1 \times |\mathcal{N}|$ matrices, and $\mathbf{K}_Y^{\mathcal{P}}$ and $\mathbf{K}_Y^{\mathcal{N}}$ denote $k_2 \times |\mathcal{P}|$ and $k_2 \times |\mathcal{N}|$ matrices, with elements $k_X(x_l, x)$ and $k_Y(y_l, y)$ evaluated on the positive and negative pairs, respectively. The optimal projection coefficients minimizing (6) are given by the largest left and right singular vectors of the $k_1 \times k_2$ matrix $\mathbf{K}_{\mathrm{D}}$; taking the $n$ largest singular vectors yields an $n$-dimensional hash.
The kernels can be selected so as to incorporate the intra-modal similarities, which are not accounted for in the previously discussed cross-modality hashing problem. For example, a classical choice is the Gaussian kernel, $k_X(x, x') = e^{-d_X^2(x, x') / 2\sigma_X^2}$ and $k_Y(y, y') = e^{-d_Y^2(y, y') / 2\sigma_Y^2}$. This way, we account both for the inter-modal similarity (through the definition of the positive set $\mathcal{P}$) and for the intra-modal similarities (through the definition of the kernels $k_X$ and $k_Y$). Furthermore, the dimensionality of the hash is now bounded by the number of basis vectors, $n \le \min(k_1, k_2)$, which can be arbitrary and in practice is limited only by the training set size and the computational complexity. Finally, the use of kernels generalizes the embeddings to a more generic, rather than affine, form.
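A sketch of the kernelized projection computation (Gaussian kernels as above; the basis selection, the normalization by set sizes, and the parameter defaults are our own illustrative choices):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # k(a, b) = exp(-||a - b||^2 / (2 sigma^2)); returns a |A| x |B| matrix.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmkdif_coefficients(X_pos, Y_pos, X_neg, Y_neg, bases_x, bases_y,
                        n_bits, alpha=0.5, sigma=1.0):
    """MM-kDIF coefficients U (k1 x n_bits) and V (k2 x n_bits) from the SVD
    of the kernel-space covariance difference."""
    KxP = gaussian_kernel(bases_x, X_pos, sigma)   # k1 x |P|
    KxN = gaussian_kernel(bases_x, X_neg, sigma)   # k1 x |N|
    KyP = gaussian_kernel(bases_y, Y_pos, sigma)   # k2 x |P|
    KyN = gaussian_kernel(bases_y, Y_neg, sigma)   # k2 x |N|
    # Kernel-space covariance difference K_D (k1 x k2) and its SVD.
    Kd = KxP @ KyP.T / KxP.shape[1] - alpha * KxN @ KyN.T / KxN.shape[1]
    U, s, Vt = np.linalg.svd(Kd, full_matrices=False)
    return U[:, :n_bits], Vt[:n_bits, :].T
```

Here the basis points could be, e.g., a random subset or cluster centers of the training data; the intra-modal metrics enter through the kernel's distance term.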
5.2 Threshold selection
As previously, the thresholds should be selected to minimize the weighted false positive and false negative rates in each dimension of the projection,

$\mathrm{FPR}_i(a_i, b_i) = \Pr\left(\operatorname{sign}(p_i(x) + a_i) = \operatorname{sign}(q_i(y) + b_i) \,\big|\, \mathcal{N}\right),$
$\mathrm{FNR}_i(a_i, b_i) = \Pr\left(\operatorname{sign}(p_i(x) + a_i) \neq \operatorname{sign}(q_i(y) + b_i) \,\big|\, \mathcal{P}\right).$

The optimal thresholds are obtained, as in Section 4.2, by exhaustive search over a grid,

$(a_i^{*}, b_i^{*}) = \underset{a_i, b_i}{\operatorname{argmin}}\ \alpha\, \mathrm{FPR}_i(a_i, b_i) + \mathrm{FNR}_i(a_i, b_i).$
5.3 Hash function application
Once the linear combination coefficients and thresholds are computed, given new data points $x \in X$ and $y \in Y$, we construct the corresponding $n$-dimensional binary hash vectors as $\xi(x) = \operatorname{sign}(\mathbf{U}^{\mathrm{T}} \mathbf{k}_X(x) + \mathbf{a}^{*})$ and $\eta(y) = \operatorname{sign}(\mathbf{V}^{\mathrm{T}} \mathbf{k}_Y(y) + \mathbf{b}^{*})$, where $\mathbf{k}_X(x) = (k_X(x_1, x), \ldots, k_X(x_{k_1}, x))^{\mathrm{T}}$ and $\mathbf{k}_Y(y) = (k_Y(y_1, y), \ldots, k_Y(y_{k_2}, y))^{\mathrm{T}}$ denote the kernel vectors of the new points against the respective bases.
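Hashing a new point then requires only its kernel vector against the basis. A minimal sketch (the Gaussian kernel is repeated here so the snippet is self-contained; names are illustrative):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # k(a, b) = exp(-||a - b||^2 / (2 sigma^2)).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_hash(Z, bases, coeffs, thresh, sigma=1.0):
    # xi(z) = sign(U^T k(z) + a), with k(z)_l = k(z_l, z); rows of Z are points.
    K = gaussian_kernel(Z, bases, sigma)       # |Z| x k matrix of kernel vectors
    return np.where(K @ coeffs + thresh >= 0, 1, -1)
```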
6 Results

To test the performance of the algorithms, we created simulated multimodal data of dimensionality $n_1$ and $n_2$. In each modality, the data were created as follows: first, random vectors were generated as “centers”; then, to each center ($n_1$- or $n_2$-dimensional, respectively), i.i.d. Gaussian noise with a different standard deviation in each dimension was added. The binary inter-modal similarity partitioned the dataset into classes. As the intra-modal dissimilarity in each modality, we used the Mahalanobis metric with the respective diagonal covariance matrix.
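The data generation described above can be sketched as follows; since the exact dimensions, class counts, and noise levels used in the experiments are not reproduced here, all numeric defaults below are illustrative placeholders only:

```python
import numpy as np

def make_multimodal_data(n_classes=10, per_class=100, n1=16, n2=32, seed=0):
    """Per class: one random 'center' in each modality, plus i.i.d. Gaussian
    noise with a different standard deviation in each dimension.  Points
    sharing a class are inter-modally similar (positives)."""
    rng = np.random.default_rng(seed)
    cx = rng.standard_normal((n_classes, n1))        # modality-X centers
    cy = rng.standard_normal((n_classes, n2))        # modality-Y centers
    sx = rng.uniform(0.05, 0.3, (n_classes, n1))     # per-dimension noise std
    sy = rng.uniform(0.05, 0.3, (n_classes, n2))
    labels = np.repeat(np.arange(n_classes), per_class)
    X = cx[labels] + rng.standard_normal((labels.size, n1)) * sx[labels]
    Y = cy[labels] + rng.standard_normal((labels.size, n2)) * sy[labels]
    return X, Y, labels
```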
We compared the boosting-based CM-SSH [4] with our CM-DIF and MM-kDIF methods. Hashes of varying dimension $n$ were used for CM-SSH and MM-kDIF; for CM-DIF, the maximum admissible dimension $n = \min(n_1, n_2)$ was used. The tradeoff parameter $\alpha$ was fixed in all experiments. For MM-kDIF, we used bases of size $k_1$ and $k_2$ and Gaussian kernels of the form described in Section 5. For CM-SSH, the settings were according to [4].
The training set consisted of positive and negative pairs sampled from the simulated data. The training time of CM-SSH was significantly higher than that of CM-DIF and MM-kDIF. Testing was performed on a held-out set of pairs, using data from one modality as a query and data from the other modality as the database. Performance was measured as mean average precision (mAP) and equal error rate (EER); ideal performance is mAP $= 100\%$ and EER $= 0$.
Figures 1–2 show the performance of the different multimodal hashing algorithms as a function of $n$ for datasets with different numbers of classes. For comparison, we also show the performance of unimodal retrieval (Euclidean distance). Our methods clearly outperform CM-SSH both in accuracy and in training time. Moreover, the performance of CM-SSH falls dramatically with increasing complexity of the dataset (more classes), while our methods continue to perform well.
References

[1] A. M. Bronstein, M. M. Bronstein, M. Ovsjanikov, and L. J. Guibas. WaldHash: sequential similarity-preserving hashing. Technical Report CIS-2010-03, Technion, Israel, 2010.
[2] A. M. Bronstein, M. M. Bronstein, M. Ovsjanikov, and L. J. Guibas. Shape Google: geometric words and expressions for invariant shape retrieval. ACM Trans. Graphics (TOG), 30(1):1–20, 2011.
[3] M. M. Bronstein. Kernel diff-hash. Technical Report arXiv:1111.0466v1, 2011.
[4] M. M. Bronstein, A. M. Bronstein, F. Michel, and N. Paragios. Data fusion through cross-modality metric learning using similarity-sensitive hashing. In Proc. CVPR, 2010.
[5] F. Michel, M. M. Bronstein, A. M. Bronstein, and N. Paragios. Boosted metric learning for 3D multi-modal deformable registration. In Proc. ISBI, 2011.
[6] B. Schölkopf, A. Smola, and K.-R. Müller. Kernel principal component analysis. In Proc. ICANN, pages 583–588, 1997.
[7] G. Shakhnarovich. Learning task-specific similarity. PhD thesis, MIT, 2005.
[8] C. Strecha, A. M. Bronstein, M. M. Bronstein, and P. Fua. LDAHash: improved matching with smaller descriptors. Trans. PAMI, 2011.