Graph-based Multi-view Binary Learning for Image Clustering

12/11/2019 ∙ by Guangqi Jiang, et al. ∙ 0

Hashing techniques, also known as binary code learning, have recently gained increasing attention in large-scale data analysis and storage. Generally, most existing hash clustering methods are single-view ones, which lack the complete structure or complementary information available from multiple views. For clustering tasks, most prior research focuses on learning discrete hash codes, while few works take the original data structure into consideration. To address these problems, we propose a novel binary code algorithm for clustering, called Graph-based Multi-view Binary Learning (GMBL), which adopts graph embedding to preserve the original data structure. GMBL focuses on encoding the information of multiple views into compact binary codes, exploring the complementary information among the views. In particular, in order to maintain the graph-based structure of the original data, we adopt a Laplacian matrix to preserve the local linear relationships of the data and map them into the Hamming space. Considering that different views make distinctive contributions to the final clustering results, GMBL adopts a strategy of automatically assigning weights to each view to better guide the clustering. Finally, an alternating iterative optimization method is adopted to optimize the discrete binary codes directly, instead of relaxing the binary constraint in two steps. Experiments on five public datasets demonstrate the superiority of our proposed method over previous approaches in terms of clustering performance.




1 Introduction

With the development of computer vision applications, hash technology has become an indispensable step in the processing of large-scale data wang2015learning , bernabe2019efficient . In data analysis, organization, and storage, there is a pressing need to use effective hash codes to cluster data from big databases. Besides, most existing digital devices are based on binary codes, which can effectively save computing time and storage space. In general, the similarity between the original data can be effectively preserved by encoding the original high-dimensional data with a set of compact binary codes dean2013fast , fuentes2019topic . These advantages have led to wide application in computer vision tasks, such as image clustering sudharshan2019multiple , image retrieval ahmed2019hash and multi-view learning yang2017discrete , etc.

Nowadays, binary coding methods have been well investigated in many fields. Locality Sensitive Hashing (LSH) datar2004locality pioneered hash research by indexing similar data with hash codes and achieved large-scale search in constant time. Commonly, hashing methods can be roughly divided into two major categories: supervised models and unsupervised models. Supervised hashing generates discrete, efficient and compact hash codes by using the label information of the data; examples include Minimal Loss Hashing (MLH) norouzi2011minimal , Supervised Discrete Hashing (SDH) shen2015supervised , Supervised Discrete Hashing With Relaxation (SDHR) gui2016supervised and Fast Supervised Discrete Hashing (FSDH) gui2017fast . However, manually labeling large-scale data is very expensive. Thus, unsupervised hash methods were proposed to address this problem and have also obtained good performance in binary code learning. Unsupervised hash models include, but are not limited to, Spectral Hashing (SH) weiss2009spectral , Iterative Quantization (ITQ) gong2012iterative , Discrete Graph Hashing (DGH) liu2014discrete , and Inductive Hashing on Manifolds shen2015hashing . Because discrete hash codes reduce the quantization error, Discrete Graph Hashing (DGH) liu2014discrete and Supervised Discrete Hashing (SDH) shen2015supervised achieve significant improvements in hash coding performance.

Up to now, most methods use a single view to learn binary code representations, which fails to exploit the complementary features and diversity of multiple views. In many visual applications wang2017effective , wu2018deep , wang2015unsupervised , wu2016exploiting , data is usually collected from datasets in various fields or from different feature extractors wu2018deep1 , wang2016iterative , such as histograms dalal2005histograms , Local Binary Patterns (LBP) ojala2002multiresolution and Scale Invariant Feature Transform (SIFT) rublee2011orb . Compared with single-view information, multi-view data may include more comprehensive latent information. Therefore, multi-view learning has attracted more and more attention in many applications. Xia et al. xia2010multiview introduced a spectral-embedding algorithm to explore the complementary information of different views, which has proved effective for image clustering and retrieval. Zhang et al. zhang2016flexible explicitly produced low-dimensional projections for different views, which could be applied to out-of-sample data. Wang et al. wang2015robust effectively maintained well-encapsulated individual views while studying subspace clustering for multiple views. Therefore, gathering information from multiple views and exploring the underlying structure of data is a key issue in data analysis. In addition, since hash methods can efficiently encode high-dimensional data, adopting multi-view binary codes to improve clustering performance is a promising research field.

Recently, some efforts have been made to learn effective hash codes from multi-view data wu2018cycle . There are two types of research areas: cross-view hashing and multi-view hashing. Song et al. proposed a novel Inter-Media Hashing (IMH) method, which can explore the relevance among different media types from various data sets to achieve large-scale inter-media retrieval. Besides, Zhu et al. zhu2013linear proposed Linear Cross-Modal Hashing (LCMH), which obtained good performance in cross-view retrieval tasks. Ding et al. ding2014collective learned unified hash codes by collective matrix factorization of the latent factor models of different modalities. Composite Hashing with Multiple Information Sources (CHMIS) zhang2011composite is the first work in the multi-view hash field. More recently, Multiview Alignment Hashing (MAH) liu2015multiview , based on nonnegative matrix factorization, can respect the distribution of the data and reveal the hidden semantic representation. Then many multi-view hash methods were proposed, such as Discrete Multi-view Hashing (DMVH) yang2017discrete and Multi-view Discrete Hashing (MvDH) shen2018multiview . Most of these hashing works focus on mutual retrieval tasks between different views, ignoring the potential cluster structure and the distribution of information in multi-view data. Therefore, hash technology is of vital significance for multi-view clustering and has aroused attention from researchers in the computer vision community. Table 1 summarizes current multi-view hash methods in terms of model learning paradigms, hash optimization strategies, and categories.

Table 1: Comparison of several multi-view hash algorithms

In this paper, we introduce a novel framework for graph-based multi-view binary code clustering. In order to learn efficient binary codes, our method attempts to learn discrete binary codes directly while maintaining the manifold structure in Hamming space for multi-view clustering tasks. To learn discriminative binary codes, the key design is to generate similar binary codes for similar data without destroying the inherent attributes of the original space, sharing information between the multiple views as much as possible. By jointly learning the hash codes and the graph, clustering performance and coding efficiency are significantly improved. Since directly optimizing binary codes is a difficult problem, an effective alternating iterative optimization strategy is developed to solve for the hash codes. The construction process of GMBL is shown in Fig.1. The main contributions of this paper are as follows:

  • We propose an innovative unsupervised hashing method to learn compact binary codes from multi-view data. To preserve the original structure of the input data, our proposed method combines hash code learning and graph clustering through locally linear embedding. Joint learning ensures that the generated hash codes improve clustering performance.

  • Inspired by graph learning, the locally similar structure of the original information is embedded into the Hamming space to learn compact hash codes. View-specific information is shared across the multiple views by projecting the original features of different views into a common subspace through locally linear embedding.

  • In order to obtain accurate clustering, we assign different weights to the various views according to their contributions to clustering. In addition, we introduce an alternating optimization algorithm with a strict convergence proof in a new discrete optimization scheme to solve for the hash codes.

Figure 1: The Construction Process of GMBL

2 Related work

Most hashing algorithms are based on single-view data to generate binary codes. In this section, we first introduce the theory and notation of multi-view binary code learning. Then, we review a classical spectral embedding method that uses the graph Laplacian matrix to preserve the data similarity structure. We will present how to learn binary codes from multiple views, and then study complementary hash codes with similarity preservation in the next section.

2.1 Binary Code Learning

Assume we are given a dataset of $n$ examples with $m$ views. The multi-view matrix in the $v$-th view can be represented as $X^{(v)} = [x_1^{(v)}, \ldots, x_n^{(v)}] \in \mathbb{R}^{d_v \times n}$, where $d_v$ is the dimension of the $v$-th view. Unsupervised hashing maps the high-dimensional data into binary codes $B \in \{-1,+1\}^{l \times n}$. Therefore, binary code generation is to learn a set of hash projection functions to produce the corresponding set of $l$-bit binary codes. For the $i$-th sample of the $v$-th view, the hash function is $b_i^{(v)} = \mathrm{sgn}(f(x_i^{(v)}))$, where $\mathrm{sgn}(\cdot)$ is a binary mapping. Such functions are usually constructed by combining dimension reduction and binary quantization. Since the Hamming distance represents the similarity between binary codes, the hash objective function in the $v$-th view can be constructed. Then, the binary codes of the $v$-th view dataset can be written as:

$$B^{(v)} = \mathrm{sgn}\big(f(X^{(v)})\big) \in \{-1,+1\}^{l \times n}$$

where $B^{(v)}$ is the corresponding hash code matrix of the whole dataset. In the process of binary code mapping, it is necessary to minimize the loss of data and the destruction of the original structure.
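The mapping above can be sketched with a linear projection followed by the sign quantizer. This is a minimal illustration of the generic hash function, not the authors' code; the shapes and the `hash_codes` helper are assumptions.

```python
import numpy as np

def hash_codes(X, W):
    """Map d-dimensional columns of X to l-bit codes via B = sgn(W^T X)."""
    B = np.sign(W.T @ X)
    B[B == 0] = 1          # define sgn(0) = +1 so codes stay in {-1, +1}
    return B

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 100))   # 100 samples with 16-dim features
W = rng.standard_normal((16, 8))     # projection to 8 bits
B = hash_codes(X, W)
# Hamming distance between two codes follows from their inner product:
# d_H(b_i, b_j) = (l - b_i . b_j) / 2
d01 = (8 - B[:, 0] @ B[:, 1]) / 2
```

The inner-product identity is why short binary codes make similarity search cheap: Hamming distance reduces to bit operations.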

2.2 Single-view Graph Learning

The main purpose of similarity preservation is to preserve the geometric structure of manifold data through the local neighborhood of each point, which can be efficiently approximated by the point's nearest neighbors. Generally, it has two steps, i.e., discovering similar neighbors and constructing the weight matrix.

Let $X = [x_1, \ldots, x_n] \in \mathbb{R}^{d \times n}$ denote the features of the samples and $Y = [y_1, \ldots, y_n] \in \mathbb{R}^{r \times n}$ denote the low-dimensional vectors mapped from $X$. Firstly, each sample is approximated by its $k$-nearest neighbor samples. Then the reconstruction error in the original space is minimized as follows:

$$\min_{S} \sum_{i=1}^{n} \Big\| x_i - \sum_{j} s_{ij}\, x_j \Big\|^2, \quad \text{s.t.}\ \sum_{j} s_{ij} = 1$$

where $s_{ij} = 0$ if $x_i$ and $x_j$ are not neighbours. Locally linear embedding assumes that such linear combinations should remain unchanged even when the manifold structure is mapped to a lower-dimensional space. Then, the low-dimensional representation minimizes the reconstruction error as follows:

$$\min_{Y} \sum_{i=1}^{n} \Big\| y_i - \sum_{j} s_{ij}\, y_j \Big\|^2 = \min_{Y}\ \mathrm{tr}\big(Y L Y^{\top}\big)$$

where $Y$ is the low-dimensional matrix mapped from $X$, $L = (I - S)^{\top}(I - S)$ is the graph Laplacian matrix, and $\mathrm{tr}(\cdot)$ is the trace of a matrix.
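The two steps above (solve for reconstruction weights, then form the Laplacian) can be sketched as follows. The `lle_weights` helper and its regularization constant are assumptions for numerical stability, not part of the paper; the final check confirms the identity between the pairwise reconstruction error and the trace form.

```python
import numpy as np

def lle_weights(X, k=5, reg=1e-3):
    """Reconstruction weights S: each x_i as an affine combination of its
    k nearest neighbours (rows of S sum to 1, zeros elsewhere)."""
    d, n = X.shape
    S = np.zeros((n, n))
    for i in range(n):
        dist = np.linalg.norm(X - X[:, [i]], axis=0)
        nbrs = np.argsort(dist)[1:k + 1]           # skip the point itself
        Z = X[:, nbrs] - X[:, [i]]                 # neighbours centered on x_i
        C = Z.T @ Z                                # local covariance
        C += reg * np.trace(C) * np.eye(k)         # regularize for stability
        w = np.linalg.solve(C, np.ones(k))
        S[i, nbrs] = w / w.sum()                   # enforce sum-to-one
    return S

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 60))
S = lle_weights(X, k=5)
L = (np.eye(60) - S).T @ (np.eye(60) - S)          # Laplacian L = (I - S)^T (I - S)
Y = rng.standard_normal((2, 60))                   # any low-dimensional embedding
# tr(Y L Y^T) equals the low-dimensional reconstruction error
err = sum(np.linalg.norm(Y[:, i] - Y @ S[i]) ** 2 for i in range(60))
```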

3 Graph-based Multi-view Binary Learning

In this section, we propose a novel clustering method called Graph-based Multi-view Binary Learning (GMBL), which maps the data to Hamming space and implements clustering tasks with efficient binary codes. Firstly, the anchor points of the data are selected randomly, and the different views are mapped to the same dimension by a nonlinear kernel mapping in Section 3.1. Then, we propose a method of mapping to hash codes, which can learn efficient binary codes with a balanced code distribution, in Section 3.2. Furthermore, similarity preservation across different views means that similar data will be mapped to binary codes with short Hamming distances; to this end, our proposed method preserves the locally similar structure of the data through a similarity matrix in Section 3.3. Finally, the overall objective is formed in Section 3.4, and an alternating iterative optimization strategy is applied to search for the optimal solution.

Suppose our multi-view dataset can be represented as $X = \{X^{(1)}, \ldots, X^{(m)}\}$, where $X^{(v)} \in \mathbb{R}^{d_v \times n}$ contains all features from the $v$-th view, $d_v$ is the corresponding feature dimension and $n$ is the total number of samples. The aim of our method is to learn hash codes $B \in \{-1,+1\}^{l \times n}$ to represent the multi-view data, where $l$ is the binary code length. Some important formula symbols are summarized in Table 2.

Notation Description
$X^{(v)}$ Feature matrix of the $v$-th view data
$\phi(x^{(v)})$ Nonlinear embedding of each feature vector of the $v$-th view
$W^{(v)}$ Mapping matrix for features in the $v$-th view
$B$ Collaborative binary code matrix
$b_i$ The hash code representation of the $i$-th sample
$A^{(v)}$ The anchor samples from the $v$-th view
$\alpha_v$ The weighting factor for the $v$-th view
$L$ The Laplacian matrix of all features
$s_i^{(v)}$ The sparse reconstruction relationships for the $i$-th feature in the $v$-th view
$d_v$ The dimension of features in the $v$-th view
$N_k(x^{(v)})$ The $k$-nearest points in $X^{(v)}$ within the $v$-th view
Table 2: The description of important formula symbols

3.1 Kernelization from Multiple Views

We normalize the data from each view to maintain balance across the data. Since the dimensions of different views may vary, we need an effective method to embed the multi-view data into a common low-dimensional representation space.

In order to obtain a low-dimensional representation, GMBL adopts a nonlinear kernel mapping for each view. Inspired by zhang2018binary , the simple nonlinear RBF kernel mapping method is used to encode each feature vector. GMBL adopts this technique to explore the information of each view as follows:

$$\phi(x^{(v)}) = \Big[ \exp\Big(-\frac{\|x^{(v)} - a_1^{(v)}\|^2}{2\sigma^2}\Big), \ldots, \exp\Big(-\frac{\|x^{(v)} - a_q^{(v)}\|^2}{2\sigma^2}\Big) \Big]^{\top}$$

where $\sigma$ is the kernel width and $a_1^{(v)}, \ldots, a_q^{(v)}$ are anchor points randomly selected from the $v$-th view. In the algorithm, we choose the number of anchor points for the mapping based on the size of the dataset. Besides, projecting data into the kernel space avoids the problem of uneven dimensions. $\phi(x^{(v)})$ represents the $q$-dimensional nonlinear embedding of the data features from the $v$-th view.
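A minimal sketch of the RBF anchor embedding, assuming anchors are sampled from the data itself and a fixed kernel width; the dimensions and `rbf_embedding` name are illustrative.

```python
import numpy as np

def rbf_embedding(X, anchors, sigma):
    """phi(x) = [exp(-||x - a_1||^2 / (2 sigma^2)), ..., exp(-||x - a_q||^2 / (2 sigma^2))]."""
    # squared distances between every sample (column of X) and every anchor
    d2 = (np.sum(X ** 2, axis=0)[None, :]
          - 2 * anchors.T @ X
          + np.sum(anchors ** 2, axis=0)[:, None])
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(2)
X = rng.standard_normal((32, 200))                        # one view: 32-dim, 200 samples
idx = rng.choice(200, size=50, replace=False)             # 50 random anchor points
Phi = rbf_embedding(X, X[:, idx], sigma=1.0)              # 50-dim embedding per sample
```

Every view ends up with an embedding of the same dimension (the number of anchors), which is what makes the later per-view projections comparable.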

3.2 Common Discrete Binary Code representation

The features of different views are mapped into hash codes in Hamming space by projection matrices. The representation of the hash code is $b_i = \mathrm{sgn}\big((W^{(v)})^{\top}\phi(x_i^{(v)})\big)$, where $b_i \in \{-1,+1\}^{l}$ is the common binary code representation of the $i$-th sample from different views, $\mathrm{sgn}(\cdot)$ is the sign operator function, and $W^{(v)}$ is the projection matrix of the $v$-th view. GMBL combines the different views by embedding them simultaneously into a common Hamming subspace. The purpose of our method is to learn efficient projection matrices $\{W^{(v)}\}_{v=1}^{m}$ which map all samples in the original space into binary codes. Therefore, we construct a minimizing loss function as follows:

$$\min_{B,\,W^{(v)}} \sum_{v=1}^{m} \big\| B - (W^{(v)})^{\top}\phi(X^{(v)}) \big\|_F^2, \quad \text{s.t.}\ B \in \{-1,+1\}^{l \times n}$$

Here $b_i$ is the binary code for the $i$-th sample. By optimizing the above formula, we can get efficient binary codes. It is important to note that balanced and stable binary codes are learned by using regularized constraints. In general, using the maximum entropy principle, the equation can be rewritten as:

$$\min_{B,\,W^{(v)},\,\alpha} \sum_{v=1}^{m} \alpha_v^{\gamma}\, \big\| B - (W^{(v)})^{\top}\phi(X^{(v)}) \big\|_F^2, \quad \text{s.t.}\ B \in \{-1,+1\}^{l \times n},\ B\mathbf{1} = 0,\ BB^{\top} = nI_l$$

where $\alpha = [\alpha_1, \ldots, \alpha_m]$ is a nonnegative normalized weighting vector assigned according to the contributions of the different views, and $\gamma > 1$ is the weight parameter, which ensures all views have a specific contribution to the final low-dimensional representation. The first term of the equation ensures efficient binary codes are learned for multi-view data. The last two constraints are for learning binary codes: adding them on $B$ ensures a balanced partition and reduces the redundancy of the binary codes.

3.3 Graph-based binary code learning

This section introduces the method of similarity preservation for mapping data to binary codes. Due to the existence of similar underlying structures in different views, the structural features of the original data should also be considered when learning the binary code projection matrices. Keeping the similarity of the data is one of the key problems of hashing algorithms: similar data should be mapped to binary codes with short Hamming distances. To address this problem, we propose a method to construct a similarity matrix, which preserves both the local structure of the data and the similarity between data points. Then, we introduce the similarity preservation method for mapping data into binary codes.

In many graph-based hash methods, a key step in similarity preservation is to build neighborhood graphs on the data. For each data point $x_i^{(v)}$ of the $v$-th view, we pick the point set $N_k(x_i^{(v)})$ from $X^{(v)}$ to reconstruct $x_i^{(v)}$, where $N_k(x_i^{(v)})$ is the set of its $k$-nearest points. Thus, the optimization equation can be obtained as follows:

$$\min_{s_i^{(v)}} \Big\| x_i^{(v)} - \sum_{j:\, x_j^{(v)} \in N_k(x_i^{(v)})} s_{ij}^{(v)}\, x_j^{(v)} \Big\|^2, \quad \text{s.t.}\ \sum_{j} s_{ij}^{(v)} = 1 \tag{7}$$

By solving Eq.(7), we get

$$s_i^{(v)} = \frac{(C_i^{(v)})^{-1}\mathbf{1}}{\mathbf{1}^{\top}(C_i^{(v)})^{-1}\mathbf{1}} \tag{8}$$

where $C_i^{(v)}$ is the local covariance matrix of the neighbors of $x_i^{(v)}$. The weights $s_{ij}^{(v)}$ describe the relationship between data points, which we can use to define the similarity matrix as:

$$S_{ij}^{(v)} = \begin{cases} s_{ij}^{(v)}, & x_j^{(v)} \in N_k(x_i^{(v)}) \\ 0, & \text{otherwise} \end{cases} \tag{9}$$

In order to ensure the symmetry of matrix $S^{(v)}$, we operate with $S^{(v)} \leftarrow \frac{1}{2}\big(S^{(v)} + (S^{(v)})^{\top}\big)$. We consider setting weights for the similarity matrices from different views rather than simply accumulating them, i.e., $S = \sum_{v=1}^{m} \alpha_v S^{(v)}$, where $\alpha$ is a weight vector. Therefore, the similarity preservation part can be calculated as follows:

$$\min_{B} \sum_{i,j=1}^{n} S_{ij}\, \| b_i - b_j \|^2, \quad \text{s.t.}\ B\mathbf{1} = 0,\ BB^{\top} = nI_l \tag{10}$$

where $b_i$ is the hash code representation of the $i$-th sample and $l$ is the length of the hash code. The last two constraints force the binary codes to be uncorrelated and balanced, respectively. Eq.(10) can be organized as:

$$\min_{B} \mathrm{tr}\big(B L B^{\top}\big) \tag{11}$$

where $L = D - S$, $D$ is a diagonal matrix given by $D_{ii} = \sum_{j} S_{ij}$, and $L$ is the graph Laplacian matrix.
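The fused-graph construction and the equivalence between the pairwise objective and the trace form can be checked numerically. The random symmetric matrices below are placeholders for the per-view LLE similarities, and the weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, l, m = 40, 8, 3
# per-view similarity matrices (random stand-ins, symmetrized as S <- (S + S^T)/2)
views = [rng.random((n, n)) for _ in range(m)]
views = [(Sv + Sv.T) / 2 for Sv in views]
alpha = np.array([0.5, 0.3, 0.2])                  # view weights, sum to 1
S = sum(a * Sv for a, Sv in zip(alpha, views))     # fused similarity S = sum_v alpha_v S^(v)
D = np.diag(S.sum(axis=1))
L = D - S                                          # graph Laplacian
B = np.sign(rng.standard_normal((l, n)))           # some binary codes
B[B == 0] = 1
# identity: sum_ij S_ij ||b_i - b_j||^2 == 2 * tr(B L B^T)
lhs = sum(S[i, j] * np.sum((B[:, i] - B[:, j]) ** 2)
          for i in range(n) for j in range(n))
rhs = 2 * np.trace(B @ L @ B.T)
```

The factor of 2 is absorbed into the regularization parameter in practice, which is why papers state the two forms interchangeably.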

3.4 Overall Objective Function

In order to learn binary codes associated with the clustering task, we find that binary code representation learning and discrete similarity preservation are both crucial. Finally, we combine similarity preservation with binary code learning into a common framework as follows:

$$\min_{B,\,W^{(v)},\,\alpha} \sum_{v=1}^{m} \alpha_v^{\gamma} \Big( \big\| B - (W^{(v)})^{\top}\phi(X^{(v)}) \big\|_F^2 + \lambda\, \mathrm{tr}\big(B L^{(v)} B^{\top}\big) \Big) + \mu \sum_{v=1}^{m} \big\| W^{(v)} \big\|_F^2$$
$$\text{s.t.}\ B \in \{-1,+1\}^{l \times n},\ \sum_{v=1}^{m} \alpha_v = 1,\ \alpha_v \ge 0$$

where $\lambda$, $\mu$ and $\gamma$ are regularization parameters to balance the effects of the different terms. To optimize this complex discrete coding problem, an alternating optimization algorithm is proposed in the next section.

4 Optimization Algorithm

We have constructed a general framework named GMBL which combines discrete hashing representation and structured binary clustering for multi-view data. We apply an alternating iterative optimization strategy to optimize the proposed objective function. The problem is separated into several sub-problems: we update one variable while fixing the remaining variables until convergence. The proposed GMBL method is summarized in Algorithm 1.

Updating $W^{(v)}$: When fixing the other variables, we update the projection matrix by:

$$\min_{W^{(v)}} \alpha_v^{\gamma}\, \big\| B - (W^{(v)})^{\top}\phi(X^{(v)}) \big\|_F^2 + \mu\, \big\| W^{(v)} \big\|_F^2 \tag{12}$$

Its closed-form solution can be obtained by setting the partial derivative with respect to $W^{(v)}$ to zero, whose optimal solution is $W^{(v)} = \big(\phi(X^{(v)})\phi(X^{(v)})^{\top} + \frac{\mu}{\alpha_v^{\gamma}} I\big)^{-1} \phi(X^{(v)}) B^{\top}$.
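The closed-form update can be verified by checking that the gradient of the sub-problem vanishes at the solution. Shapes and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
q, n, l = 20, 100, 8
Phi = rng.standard_normal((q, n))                  # kernelized features of one view
B = np.sign(rng.standard_normal((l, n)))           # current binary codes
B[B == 0] = 1
mu, alpha_g = 0.1, 0.4                             # regularizer mu and alpha_v^gamma
# W = (Phi Phi^T + (mu / alpha_v^gamma) I)^{-1} Phi B^T  -- ridge-regression form
W = np.linalg.solve(Phi @ Phi.T + (mu / alpha_g) * np.eye(q), Phi @ B.T)
# gradient of alpha_v^gamma ||B - W^T Phi||_F^2 + mu ||W||_F^2 with respect to W
grad = -2 * alpha_g * Phi @ (B - W.T @ Phi).T + 2 * mu * W
```

Solving the linear system directly (rather than forming an explicit inverse) is the standard numerically stable way to implement this step.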

Updating $B$: We next move on to update $B$; the sub-problem with respect to $B$ is defined as follows:

$$\min_{B \in \{-1,+1\}^{l \times n}} \sum_{v=1}^{m} \alpha_v^{\gamma} \Big( \big\| B - (W^{(v)})^{\top}\phi(X^{(v)}) \big\|_F^2 + \lambda\, \mathrm{tr}\big(B L^{(v)} B^{\top}\big) \Big) \tag{13}$$

We design an effective algorithm that maintains the discrete constraints during the optimization process, and through this method we can obtain more efficient binary codes shen2015supervised , shen2016fast . According to the DPLM algorithm, we can get $B$ as follows:

$$B_{t+1} = \mathrm{sgn}\Big( B_t - \frac{1}{\eta}\, \nabla f(B_t) \Big) \tag{14}$$

where $\nabla f(B)$ is the gradient of the objective $f$ with respect to $B$ and $\eta$ is the step-size parameter. We update the variable $B_t$ to $B_{t+1}$ in each iteration.

Updating $\alpha$: According to the attributes of the different views, the optimization of $\alpha$ is equivalent to the following problem:

$$\min_{\alpha} \sum_{v=1}^{m} \alpha_v^{\gamma} \Big( \big\| B - (W^{(v)})^{\top}\phi(X^{(v)}) \big\|_F^2 + \lambda\, \mathrm{tr}\big(B L^{(v)} B^{\top}\big) \Big), \quad \text{s.t.}\ \sum_{v=1}^{m} \alpha_v = 1,\ \alpha_v \ge 0 \tag{15}$$

Let $h_v = \big\| B - (W^{(v)})^{\top}\phi(X^{(v)}) \big\|_F^2 + \lambda\, \mathrm{tr}\big(B L^{(v)} B^{\top}\big)$; then we can rewrite (15) as

$$\min_{\alpha} \sum_{v=1}^{m} \alpha_v^{\gamma} h_v, \quad \text{s.t.}\ \sum_{v=1}^{m} \alpha_v = 1,\ \alpha_v \ge 0 \tag{16}$$

We can solve the constrained equation by the Lagrange multiplier method; the Lagrange function of (16) is

$$\mathcal{L}(\alpha, \zeta) = \sum_{v=1}^{m} \alpha_v^{\gamma} h_v - \zeta \Big( \sum_{v=1}^{m} \alpha_v - 1 \Big) \tag{17}$$

By setting the partial derivatives of $\mathcal{L}$ with respect to $\alpha_v$ and $\zeta$ to zero, we can get

$$\gamma\, \alpha_v^{\gamma - 1} h_v - \zeta = 0, \quad \sum_{v=1}^{m} \alpha_v = 1 \tag{18}$$

Therefore, we can get $\alpha_v$ as

$$\alpha_v = \frac{(1 / h_v)^{1/(\gamma - 1)}}{\sum_{u=1}^{m} (1 / h_u)^{1/(\gamma - 1)}} \tag{19}$$

In order to obtain the local optimal solution, we update the three variables iteratively until convergence.
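The weight update in Eq.(19) is a one-liner; the per-view losses below are illustrative numbers, chosen so the behavior is visible: a smaller loss (a better view) receives a larger weight, and the weights always sum to one.

```python
import numpy as np

def update_alpha(h, gamma=2.0):
    """alpha_v = (1/h_v)^{1/(gamma-1)} / sum_u (1/h_u)^{1/(gamma-1)}."""
    p = (1.0 / h) ** (1.0 / (gamma - 1.0))
    return p / p.sum()

h = np.array([4.0, 1.0, 2.0])       # per-view losses; view 1 fits best
alpha = update_alpha(h, gamma=2.0)  # -> [1/7, 4/7, 2/7]
```

The exponent $\gamma$ controls how aggressively good views are favored: as $\gamma \to 1^{+}$ all weight collapses onto the single best view, while large $\gamma$ flattens the weights toward uniform.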

1: Data set $\{X^{(v)}\}_{v=1}^{m}$; anchor samples $\{A^{(v)}\}$; parameters $\lambda$, $\mu$ and $\gamma$;
2: binary code $B$;
3: Initialize binary code $B$; nonlinear embeddings $\phi(X^{(v)})$; weights of the different views $\alpha$; projection matrices $W^{(v)}$; binary code length $l$;
4: Construct the Laplacian matrix $L$;
5: repeat
6:     Update $W^{(v)}$ by solving the equation in Eq.(12);
7:     Update binary code $B$ according to Eq.(14);
8:     Update $\alpha$ by solving problem Eq.(15);
9: until convergence;
Algorithm 1: Framework of the proposed GMBL method.
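The alternating loop above can be sketched end to end. Everything here is an illustrative assumption (function name, step size, iteration count), and the per-view Laplacian weighting is simplified to a single shared `L`; it shows the shape of the three updates, not the authors' implementation.

```python
import numpy as np

def gmbl_sketch(Phis, L, l, gamma=2.0, lam=1.0, mu=0.1, iters=10, seed=0):
    """Alternate the three updates: W^(v) (closed form), B (one DPLM-style
    gradient + sign step), and the view weights alpha."""
    rng = np.random.default_rng(seed)
    m, n = len(Phis), Phis[0].shape[1]
    B = np.sign(rng.standard_normal((l, n)))
    B[B == 0] = 1
    alpha = np.full(m, 1.0 / m)
    for _ in range(iters):
        # 1) per-view projections, ridge-regression closed form
        Ws = [np.linalg.solve(P @ P.T + (mu / alpha[v] ** gamma) * np.eye(P.shape[0]),
                              P @ B.T) for v, P in enumerate(Phis)]
        # 2) discrete codes: gradient of the smooth part, then sign
        grad = sum(alpha[v] ** gamma * -2 * (B - Ws[v].T @ Phis[v])
                   for v in range(m)) + 2 * lam * B @ L
        Bn = np.sign(B - 0.01 * grad)
        Bn[Bn == 0] = 1
        B = Bn
        # 3) view weights from per-view fitting losses
        h = np.array([np.linalg.norm(B - Ws[v].T @ Phis[v]) ** 2 for v in range(m)])
        p = (1.0 / h) ** (1.0 / (gamma - 1.0))
        alpha = p / p.sum()
    return B, alpha

rng = np.random.default_rng(5)
n = 60
Phis = [rng.standard_normal((10, n)), rng.standard_normal((15, n))]   # two views
S = rng.random((n, n)); S = (S + S.T) / 2
L = np.diag(S.sum(1)) - S
B, alpha = gmbl_sketch(Phis, L, l=8)
```

After this loop, clustering (e.g. k-means) runs directly on the columns of `B`.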

5 Experimental Evaluation

In this section, extensive experiments are conducted to evaluate the clustering performance of the proposed binary clustering method. All experiments are conducted in Matlab 2018b on a standard Windows PC with an Intel 3.4 GHz CPU.

5.1 Experimental Settings

In this section, we describe the datasets and comparison methods. We evaluated the clustering performance of GMBL by comparing it with several classical hash methods on multi-view datasets. In addition, the effectiveness of the GMBL algorithm is evaluated by comparison with real-valued multi-view methods. Finally, we compared the single-view low-dimensional embeddings within the framework against the original GMBL low-dimensional embedding, to verify that our method can exploit and supplement the complementary information between different views.

5.1.1 Datasets

In most practical applications, images are generally represented by multiple features, which constitute the multiple views in our experiments. Without loss of generality, we evaluated our image clustering method on five popular datasets. Some sample images from the datasets are presented in Fig.2. The details of the datasets are listed as follows:

(a) Some images from Caltech256, which contains 256 object categories in total.
(b) Some images from Caltech101. Caltech101 is an image dataset which contains 101 classes and 1 background class.
(c) Some images from Coil-100. There are 100 classes in this dataset.
Figure 2: Some sample images of these image datasets for various applications.


Caltech101 contains 9144 images associated with 101 objects and a background category. It is a benchmark image dataset for image clustering and retrieval tasks, and each example is associated with exactly one class label. For this dataset, six publicly available features are used for the experiments, i.e. a 48-dim Gabor feature, 928-dim LBP feature, 512-dim GIST feature, 254-dim CENTRIST feature, 40-dim wavelet moments and 1984-dim HOG feature.

Caltech256 contains 30,607 images of 256 object categories, each of which contains more than 80 images. We use a 729-dim color histogram feature, 1024-dim Gist feature and 1152-dim HOG feature, i.e., three different types of features.

NUS-WIDE-obj contains 30,000 images in 31 categories. The features of the dataset can be found on the contributor's home page, including a 65-dim color histogram (CH), 226-dim color moments (CM), 74-dim edge distribution (ED), 129-dim wavelet texture (WT) and 145-dim color correlation (CORR).

Coil-100 is the abbreviation of the Columbia object image library dataset, which consists of 7200 images in 100 object categories. Each category contains 72 images and all images are of size 32×32. Intensity, 220-dim DSD, 512-dim HOG and 768-dim Gist features are extracted for representation.


CiteSeer consists of 3,312 documents on scientific publications. These documents can be further classified into six categories: Agent, AI, DB, IR, ML and HCI. For our multi-view clustering, we construct a 3703-dimensional vector representing the keywords of the text view and a 3279-dimensional vector representing the citation relationships between documents. All the dataset features described above are briefly summarized in Table 3.


Datasets Caltech101 Caltech256 NUS-WIDE-obj Coil-100 CiteSeer
Samples 9144 30608 30000 7200 3312
Classes 102 175 31 100 6
Views 6 3 5 3 2
Table 3: Summarization of each dataset

5.1.2 Compared Methods

We compared our approach with the following state-of-the-art methods, including hash-based multi-view and real-valued multi-view methods for clustering. As for the hash methods, we utilized several well-known single-view hash algorithms and two multi-view hash clustering algorithms as baselines, including LSH gionis1999similarity , MFH song2013effective , SH weiss2009spectral , DSH jin2013density , SP xia2015sparse , SGH jiang2015scalable , BPH, ITQ gong2012iterative , BMVC zhang2018binary and HSIC zhang2018highly . For single-view hash methods, we report the best result over the individual feature clusterings. As for the real-valued multi-view methods, we adopted the following algorithms as baselines: k-means jetsadalak2018algorithm , SC von2007tutorial , Co-regularized spectral clustering kumar2011co , AMGL nie2016parameter , Mul-NMF liu2013multi and MLAN nie2017multi . It is noteworthy that the k-means method concatenates the multi-view data into one vector for evaluation. The length of the hash code used in this experiment is 128 bits. We use the source code from the authors' homepages for the comparative experiments.

5.1.3 Evaluation Metrics

To comprehensively evaluate clustering performance, we report experimental results using the four most widely used evaluation metrics: accuracy (ACC), normalized mutual information (NMI), Purity and F-score gao2015multi , cao2015diversity . For all algorithms, a higher metric value indicates better performance. For the hashing methods, five different code lengths are used on all datasets.
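Of these metrics, Purity is the simplest to state: each predicted cluster votes for its majority ground-truth class, and Purity is the fraction of samples covered by those majorities. A minimal sketch with a hypothetical toy labeling:

```python
import numpy as np

def purity(labels_true, labels_pred):
    """Each cluster votes for its majority class; return fraction correct."""
    total = 0
    for c in np.unique(labels_pred):
        members = labels_true[labels_pred == c]
        total += np.bincount(members).max()   # size of the majority class
    return total / len(labels_true)

y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 1, 2, 2])
p = purity(y_true, y_pred)   # clusters {0,0}, {0,1,1,1}, {2,2} -> (2+3+2)/8
```

Note that Purity is not penalized for over-clustering (one cluster per sample scores 1.0), which is why it is reported alongside NMI and ACC rather than alone.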

5.2 Hash Method Experimental Results and Analysis

In this section, we conducted hash clustering experiments on 5 datasets (including Caltech101, Caltech256, NUS-WIDE-obj, Coil-100 and CiteSeer) to evaluate the performance of our proposed method. With all methods, we projected the multi-view features into hash codes of five different lengths and adopted k-means to finish the image clustering task. The results with different code lengths on the four benchmark image datasets are reported in Figures 3, 4, 5 and 6. Table 4 shows the results when the hash code length is 128 bits for text clustering from two views. We have the following observations:

(a) ACC
(b) NMI
(c) Purity
Figure 3: Experiment results on caltech101. It is clear that our proposed GMBL can achieve the best performance in most situations.

For the Caltech101 dataset, we adopted six views to complete the clustering task, which is the dataset with the most views in the experiments. The view with the best clustering performance is used to evaluate the single-view hash methods. It is clear that GMBL achieves better performance than the other hash methods at different binary code lengths. Generally, the results of the multi-view algorithms are better than those of the single-view hash ones. For GMBL, the result improves as the hash code length increases, and GMBL obtains better results than the other multi-view hash methods. Because GMBL constructs a similarity matrix to capture the nearest-neighbor relations of the data, the optimal result can be obtained when the length of the hash code increases. It can be found from Fig.3 that when the hash code length is short, our method cannot obtain the best performance. The reason may be that with a short hash code the nearest-neighbor relationships of the data are not well preserved. In our algorithm, several parameters of the overall objective have a significant influence on the experiment. Generally speaking, the larger the values of the regularization parameters, the lower the experimental results tend to be; it is possible that large regularization terms restrict the efficient learning of hash codes. Many experiments show that increasing the regularization weight reduces the clustering performance, while increasing the nearest-neighbor parameter improves the clustering results of GMBL. However, the running time is affected by the number of nearest neighbors selected for clustering. If better strategies are adopted to select anchor points, the clustering performance can be improved. Besides, when the number of anchor points is greater than 1000, the clustering performance is not significantly improved.

(a) ACC
(b) NMI
(c) Purity
Figure 4: Performance of different clustering methods vs. different code lengths of clusters on Caltech256.

For the Caltech256 dataset, we randomly selected 175 categories of images as the experimental data, with a total of 20222 images. It is clear from Fig.4 that our approach obtains the best results in terms of ACC, NMI, and Purity among all compared hash methods with 128-bit binary codes. From Fig.4, we can observe that GMBL outperforms the other hash methods when the code length is relatively large (i.e., greater than 32). As the length of the binary code increases, the performance of all algorithms improves. On the Caltech256 dataset, our model achieved an NMI of 0.29 when the code length was 128 bits, while the second-highest NMI was 0.26.

(a) ACC
(b) NMI
(c) Purity
Figure 5: ACC, NMI and Purity results on Coil-100.

Fig.5 illustrates the experimental results on the Coil-100 dataset. Our method outperforms all other methods on the NMI and Purity evaluation metrics. For ACC, the HSIC method obtains the best result, from which we can deduce that using individual information and shared information to capture the hidden correlations of multiple views is necessary. We can also find that BMVC and HSIC achieve the best performance at short hash code lengths. It can be observed from Fig.5 that the results of roughly all single-view hash methods are significantly poorer than those of the multi-view hash methods under different hash code lengths.

(a) ACC
(b) NMI
(c) Purity
Figure 6: ACC, NMI and Purity results on NUS-WIDE-obj. Multi-view methods achieve ideal performance.

As is well known, NUS-WIDE-obj is a widely used dataset that includes 30000 images from Flickr. The dataset consists of 31 categories; each image is marked by at least one label, and multiple labels can be assigned to each image. The dataset was divided into a 12072-image test set and a 17928-image training set by the provider. In this algorithm, we use the test set to evaluate the clustering task. For samples with multiple labels, we automatically assign the corresponding ground-truth label after the clustering task of each algorithm is completed. In Fig.6, the ACC, NMI, and Purity of all hash algorithms under 128-bit binary codes are reported. The results corroborate the advantages of our GMBL relative to the compared alternatives.

Experiments demonstrate that single-view hash methods can also obtain satisfactory performance; in particular, both SGH and ITQ achieve excellent results. In general, multi-view methods exploit multi-view information and achieve better results than single-view methods. Our method obtains a similarity matrix reflecting the local structure of the original data, so more of the original data structure is preserved during the construction of the binary codes. Therefore, graph-based similarity matrix construction plays an important role in our method, which can effectively capture the structural relationship between the initial input and adjacent data. GMBL has been verified to largely improve clustering performance. We can notice that GMBL achieves much better results than BMVC on almost all datasets. The primary reason is that in the process of binary coding, we keep the local structure of the data to further explore its internal relationships, so as to obtain better clustering results.

ACC 0.1413 0.1887 0.1954 0.2017 0.2126 0.1730 0.2292 0.2343 0.2560 0.2766
NMI 0.0105 0.0432 0.0079 0.0431 0.0026 0.0124 0.0353 0.0298 0.0298 0.0517
Purity 0.2542 0.3222 0.2225 0.3206 0.2183 0.2292 0.2497 0.2844 0.2844 0.3273
Table 4: The clustering results on the CiteSeer dataset

To demonstrate the robustness of GMBL, we consider a clustering experiment on a text dataset. There are 3312 documents in the CiteSeer dataset, which are divided into six categories. We use keywords and references between documents as two views for the clustering experiments. Table 4 compares ACC, NMI, and Purity when the length of the hash code is 128 bits. We have the following observations: GMBL obtained higher values on the three indexes of the clustering task, consistently outperforming the other methods by large margins in all situations. Compared with the single-view hashing methods, ACC, NMI and Purity were improved by 24%-49%, 17%-90% and 3%-33%; compared with the multi-view hashing methods, ACC, NMI and Purity were improved by 7%-16%, 42%-60% and 13%-17%. The results demonstrate that the proposed multi-view algorithm is effective in utilizing the graph-based method.

To verify this more clearly, Fig.7 shows the clustering performance of GMBL on single views, evaluated by ACC, NMI and Purity on the Caltech101 and Coil-100 datasets. Our multi-view method obtains higher results than GMBL run on any single view, even when the clustering result of a particular view is remarkably good. In particular, with 128 bits, the multi-view method exceeds the best single-view GMBL by more than 24% and 67% in terms of ACC and NMI, respectively. Thus, multi-view methods that explore a common cluster structure work better than single-view methods.

5.3 Comparison with state-of-the-art multi-view methods

In this section, we present the detailed clustering results on three datasets in Tables 5, 6 and 7; in each table, bold values indicate the best clustering performance. The tables show that GMBL achieves excellent performance on all four evaluation indexes and is superior to the other methods on the Caltech101, Caltech256 and NUS-WIDE-obj datasets. By learning discrete codes rather than real-valued representations for multi-view clustering, GMBL obtains encouraging results. In addition, even though k-means concatenates all views into a single vector, it cannot achieve efficient clustering performance, because k-means is essentially a single-view clustering method. In the experiments, the hash code length is set to 128 bits when comparing with the real-valued multi-view clustering methods.

Figure 7: Multi-view vs. single-view clustering performance of GMBL
Methods k-means SC Co-re-c Co-re-s AMGL Mul-NMF MLAN BMVC HSIC GMBL
ACC 0.1331 0.1365 0.2670 0.2425 0.1350 0.2018 0.1807 0.2930 0.2578 0.3070
NMI 0.3056 0.3269 0.4691 0.4683 0.2645 0.4089 0.2686 0.4900 0.3511 0.4982
Purity 0.2909 0.3187 0.4600 0.4694 0.1569 0.2300 0.3286 0.4907 0.3492 0.5008
F-score 0.1895 0.0955 0.2295 0.1867 0.0319 0.1705 0.0481 0.2466 0.2502 0.2586
Table 5: The clustering results on Caltech101 dataset

Table 5 shows the clustering results on the Caltech101 dataset. GMBL outperforms all the other methods on ACC, NMI, Purity and F-score, and improves on the k-means baseline by more than 50%. k-means performs worst because the views are directly concatenated, which introduces more noise. The co-regularization method couples different views through co-regularized spectral clustering to pursue better clustering indexes; it is suited to experiments with few views and takes longer to run. Compared with the real-valued multi-view methods, GMBL improves significantly: like BMVC, it learns binary codes for the different views in Hamming space, which raises computational efficiency, whereas the real-valued multi-view methods compute distances in Euclidean space with low efficiency and high time cost. Although pursuing a similarity matrix through graph-based clustering is itself time-consuming, our computation time remains shorter than that of the compared multi-view hashing methods.

Methods k-means SC Co-re-c Co-re-s AMGL Mul-NMF MLAN BMVC HSIC GMBL
ACC 0.1001 0.0924 0.1030 0.0738 0.0467 0.0713 0.0693 0.1028 0.0971 0.1049
NMI 0.1184 0.2764 0.2856 0.2467 0.1070 0.2272 0.0794 0.2915 0.2503 0.2949
Purity 0.1018 0.1339 0.1602 0.1070 0.0415 0.1119 0.0922 0.1428 0.1184 0.1475
F-score 0.0804 0.0628 0.0727 0.0415 0.0466 0.0458 0.0224 0.0781 0.0719 0.0878
Table 6: The clustering results on Caltech256 dataset

For the Caltech256 dataset, we randomly selected 20222 samples covering 175 categories as experimental data, with three features extracted per image. The clustering results of the different methods are reported in Table 6. GMBL outperforms all the other methods on the four evaluation metrics. Compared with the real-valued multi-view methods, hashing methods have obvious advantages on large datasets and take the least time for clustering. Compared with the other multi-view hashing methods, the clustering performance of GMBL is improved, which shows the importance of maintaining the original spatial structure while learning binary codes.

Methods k-means SC Co-re-c Co-re-s AMGL Mul-NMF MLAN BMVC HSIC GMBL
ACC 0.1459 0.1360 0.1521 0.1625 0.1281 0.1183 0.1554 0.1508 0.1621 0.1682
NMI 0.1415 0.1289 0.1505 0.1604 0.1362 0.1029 0.1199 0.1527 0.1625 0.1649
Purity 0.2576 0.2460 0.2816 0.2826 0.1484 0.1975 0.2604 0.2855 0.2790 0.2968
F-score 0.1105 0.0840 0.1038 0.1018 0.1125 0.1128 0.1136 0.1090 0.1190 0.1126
Table 7: The clustering results on NUS-WIDE-obj dataset

We used the test set of the NUS-WIDE-obj dataset for the clustering task reported in Table 7. Since some images in this dataset carry multiple labels, the most representative label was adopted as the ground truth in our comparative experiment. Our method is superior to the other methods on three of the indicators. Adopting the similarity matrix is conducive to mining the hidden structure of the data, whereas hashing alone mainly improves running time without markedly improving the evaluation results. In addition, GMBL obtains a lower F-score than HSIC.

5.4 Visualization

(a) Original data
(b) Binary codes of multi-view
(c) Original data
(d) Binary codes of multi-view
Figure 8: Visualization of the original features and the binary codes for clustering on the Caltech101 and Coil-100 datasets

In Fig.8, panels (a) and (b) show t-SNE maaten2008visualizing visualizations of the original data and of the binary codes on the Caltech101 dataset (we randomly selected 5 classes); the original data concatenates all six features into one vector as input. Panels (c) and (d) show the corresponding t-SNE visualizations of the concatenated raw data and of the binary codes on the Coil-100 dataset (we randomly selected 10 classes). In Fig.8, geometric marks of different colors belong to different categories, and clustering is good when samples of the same category lie adjacent to each other. We observe that the visualization of the binary codes is more discriminative than that of the original data, because the categories are better separated in the binary-code visualizations.
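The visualization procedure can be sketched with scikit-learn's t-SNE. The toy arrays below are random stand-ins for the real inputs (concatenated view features and learned ±1 binary codes); sizes and perplexity are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.RandomState(0)
# Stand-ins: 60 samples, 32-d "features" and 16-bit "binary codes".
features = rng.randn(60, 32)
codes = np.sign(rng.randn(60, 16))

# Embed both representations to 2-D for side-by-side scatter plots.
emb_feat = TSNE(n_components=2, perplexity=10,
                random_state=0).fit_transform(features)
emb_code = TSNE(n_components=2, perplexity=10,
                random_state=0).fit_transform(codes.astype(float))
```

Plotting `emb_feat` and `emb_code` colored by ground-truth label reproduces the kind of comparison shown in Fig.8: tighter, better-separated color groups indicate a more discriminative representation.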

All the experiments above verify the excellent performance of the proposed GMBL. It extends Euclidean-space measurement to binary codes in Hamming space. The results show that GMBL outperforms the real-valued multi-view methods in most situations. Compared with the other hashing methods, preserving the local structure of the data by constructing a similarity matrix effectively improves clustering performance, making GMBL stronger than most hashing algorithms.
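The efficiency of Hamming-space measurement comes from the fact that the distance between two binary codes reduces to XOR plus a bit count. A NumPy sketch, assuming ±1 codes and a toy 8-bit code length:

```python
import numpy as np

def hamming_distances(codes, query):
    """Hamming distances from one query code to a matrix of codes.

    Codes are packed into uint8 bit-planes so the comparison is a
    single XOR followed by a popcount, instead of float arithmetic.
    """
    packed = np.packbits((codes > 0).astype(np.uint8), axis=1)
    q = np.packbits((query > 0).astype(np.uint8))
    xor = np.bitwise_xor(packed, q)
    # unpackbits recovers the differing bits; summing counts them.
    return np.unpackbits(xor, axis=1).sum(axis=1)

codes = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                  [1,  1, 1,  1, 1,  1, 1,  1]])
query = np.array([1, -1, 1, -1, 1, -1, 1, -1])
d = hamming_distances(codes, query)  # exact match, then 4 flipped bits
```

On real hardware the XOR-and-popcount pattern is why hashing methods measure similarity far faster than Euclidean distance on real-valued vectors of the same dimensionality.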

5.5 Convergence analysis

(a) Caltech101
Figure 9: Convergence Curve on Caltech101

We adopt an alternating iterative optimization method to update all of the parameter matrices in our optimization problem. Fig.9 shows the objective function values on the Caltech101 dataset. The objective value decreases rapidly over the iterations and converges to a fixed point, confirming that the constructed objective is monotonically decreasing, convergent, and attains a minimum.
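The monotone decrease shown in Fig.9 is characteristic of alternating block minimization in general. The sketch below uses a stand-in objective (a rank-3 least-squares factorization, not GMBL's objective): each block update is an exact least-squares solve, so the recorded objective can never increase, which is the behavior a convergence curve like Fig.9 illustrates.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(30, 10)      # stand-in data matrix
r = 3                      # illustrative rank
U = rng.randn(30, r)
V = rng.randn(10, r)

history = []
for it in range(20):
    # Alternate exact minimizers of ||X - U V^T||_F^2 over each block;
    # an exact block update cannot increase the objective.
    U = X @ V @ np.linalg.pinv(V.T @ V)
    V = X.T @ U @ np.linalg.pinv(U.T @ U)
    history.append(np.linalg.norm(X - U @ V.T, 'fro') ** 2)
```

Tracking `history` and checking that it is non-increasing is the standard way to produce and verify a convergence curve for such alternating schemes.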

6 Conclusion

In this paper, we propose a discrete hash coding algorithm based on graph clustering, named GMBL. GMBL learns efficient binary codes that fully explore the original information of multi-view data and reduce information loss. Through the Laplacian similarity matrix, the proposed algorithm preserves the local linear relationships of the original data, and the multi-view binary clustering task can be well optimized. In addition, since different views contribute differently to the clustering task, we adaptively assign weights to the views according to their contributions. To optimize the binary codes, we adopt an alternating iteration method that optimizes them directly instead of relaxing the binary constraint. Unlike traditional real-valued multi-view clustering methods, the hashing-based clustering method effectively reduces the running time. We evaluated the proposed framework on five multi-view datasets, and the experiments demonstrated the superiority of our method.

7 Acknowledgements

This work was supported in part by the National Natural Science Foundation of China Grant 61370142 and Grant 61272368, by the Fundamental Research Funds for the Central Universities Grant 3132016352, by the Fundamental Research of Ministry of Transport of P. R. China Grant 2015329225300, by Chinese Postdoctoral Science Foundation 3620080307, by the Dalian Science and Technology Innovation Fund 2018J12GX037 and Dalian Leading talent Grant, by the Foundation of Liaoning Key Research and Development Program.


  • (1) J. Wang, W. Liu, S. Kumar, S.-F. Chang, Learning to hash for indexing big data—a survey, Proceedings of the IEEE 104 (1) (2015) 34–57.
  • (2) J. A. Bernabé-Díaz, M. del Carmen Legaz-García, J. M. García, J. T. Fernández-Breis, Efficient, semantics-rich transformation and integration of large datasets, Expert Systems with Applications 133 (2019) 198–214.
  • (3) T. Dean, M. A. Ruzon, M. Segal, J. Shlens, S. Vijayanarasimhan, J. Yagnik, Fast, accurate detection of 100,000 object classes on a single machine, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 1814–1821.
  • (4) G. Fuentes-Pineda, I. V. Meza-Ruiz, Topic discovery in massive text corpora based on min-hashing, Expert Systems with Applications.
  • (5) P. Sudharshan, C. Petitjean, F. Spanhol, L. E. Oliveira, L. Heutte, P. Honeine, Multiple instance learning for histopathological breast cancer image classification, Expert Systems with Applications 117 (2019) 103–111.
  • (6) T. Ahmed, M. Sarma, Hash-based space partitioning approach to iris biometric data indexing, Expert Systems with Applications 134 (2019) 1–13.
  • (7) R. Yang, Y. Shi, X.-S. Xu, Discrete multi-view hashing for effective image retrieval, in: Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval, ACM, 2017, pp. 175–183.
  • (8) M. Datar, N. Immorlica, P. Indyk, V. S. Mirrokni, Locality-sensitive hashing scheme based on p-stable distributions, in: Proceedings of the twentieth annual symposium on Computational geometry, ACM, 2004, pp. 253–262.
  • (9) M. Norouzi, D. M. Blei, Minimal loss hashing for compact binary codes, in: Proceedings of the 28th international conference on machine learning (ICML-11), Citeseer, 2011, pp. 353–360.
  • (10) F. Shen, C. Shen, W. Liu, H. Tao Shen, Supervised discrete hashing, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 37–45.
  • (11) J. Gui, T. Liu, Z. Sun, D. Tao, T. Tan, Supervised discrete hashing with relaxation, IEEE Transactions on Neural Networks and Learning Systems 29 (3) (2016) 608–617.
  • (12) J. Gui, T. Liu, Z. Sun, D. Tao, T. Tan, Fast supervised discrete hashing, IEEE transactions on pattern analysis and machine intelligence 40 (2) (2017) 490–496.
  • (13) Y. Weiss, A. Torralba, R. Fergus, Spectral hashing, in: Advances in neural information processing systems, 2009, pp. 1753–1760.
  • (14) Y. Gong, S. Lazebnik, A. Gordo, F. Perronnin, Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (12) (2012) 2916–2929.
  • (15) W. Liu, C. Mu, S. Kumar, S.-F. Chang, Discrete graph hashing, in: Advances in neural information processing systems, 2014, pp. 3419–3427.
  • (16) F. Shen, C. Shen, Q. Shi, A. Van den Hengel, Z. Tang, H. T. Shen, Hashing on nonlinear manifolds, IEEE Transactions on Image Processing 24 (6) (2015) 1839–1851.
  • (17) Y. Wang, X. Lin, L. Wu, W. Zhang, Effective multi-query expansions: Collaborative deep networks for robust landmark retrieval, IEEE Transactions on Image Processing 26 (3) (2017) 1393–1404.
  • (18) L. Wu, Y. Wang, X. Li, J. Gao, Deep attention-based spatially recursive networks for fine-grained visual recognition, IEEE transactions on cybernetics 49 (5) (2018) 1791–1802.
  • (19) Y. Wang, W. Zhang, L. Wu, X. Lin, X. Zhao, Unsupervised metric fusion over multiview data by graph random walk-based cross-view diffusion, IEEE transactions on neural networks and learning systems 28 (1) (2015) 57–70.
  • (20) L. Wu, Y. Wang, S. Pan, Exploiting attribute correlations: A novel trace lasso-based weakly supervised dictionary learning method, IEEE transactions on cybernetics 47 (12) (2016) 4497–4508.
  • (21) L. Wu, Y. Wang, J. Gao, X. Li, Deep adaptive feature embedding with local sample distributions for person re-identification, Pattern Recognition 73 (2018) 275–288.
  • (22) Y. Wang, W. Zhang, L. Wu, X. Lin, M. Fang, S. Pan, Iterative views agreement: An iterative low-rank based structured optimization method to multi-view spectral clustering, in: International Joint Conference on Artificial Intelligence (IJCAI), 2016.
  • (23) N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, 2005.
  • (24) T. Ojala, M. Pietikäinen, T. Mäenpää, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Transactions on Pattern Analysis & Machine Intelligence (7) (2002) 971–987.
  • (25) E. Rublee, V. Rabaud, K. Konolige, G. R. Bradski, Orb: An efficient alternative to sift or surf., in: ICCV, Vol. 11, Citeseer, 2011, p. 2.
  • (26) T. Xia, D. Tao, T. Mei, Y. Zhang, Multiview spectral embedding, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 40 (6) (2010) 1438–1446.
  • (27) C. Zhang, H. Fu, Q. Hu, P. Zhu, X. Cao, Flexible multi-view dimensionality co-reduction, IEEE Transactions on Image Processing 26 (2) (2016) 648–659.
  • (28) Y. Wang, X. Lin, L. Wu, W. Zhang, Q. Zhang, X. Huang, Robust subspace clustering for multi-view data by exploiting correlation consensus, IEEE Transactions on Image Processing 24 (11) (2015) 3939–3949.
  • (29) L. Wu, Y. Wang, L. Shao, Cycle-consistent deep generative hashing for cross-modal retrieval, IEEE Transactions on Image Processing 28 (4) (2018) 1602–1612.
  • (30) X. Zhu, Z. Huang, H. T. Shen, X. Zhao, Linear cross-modal hashing for efficient multimedia search, in: Proceedings of the 21st ACM international conference on Multimedia, ACM, 2013, pp. 143–152.
  • (31) G. Ding, Y. Guo, J. Zhou, Collective matrix factorization hashing for multimodal data, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 2075–2082.
  • (32) D. Zhang, F. Wang, L. Si, Composite hashing with multiple information sources, in: Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, ACM, 2011, pp. 225–234.
  • (33) L. Liu, M. Yu, L. Shao, Multiview alignment hashing for efficient image search, IEEE Transactions on image processing 24 (3) (2015) 956–966.
  • (34) X. Shen, F. Shen, L. Liu, Y.-H. Yuan, W. Liu, Q.-S. Sun, Multiview discrete hashing for scalable multimedia search, ACM Transactions on Intelligent Systems and Technology (TIST) 9 (5) (2018) 53.
  • (35) Z. Zhang, L. Liu, F. Shen, H. T. Shen, L. Shao, Binary multi-view clustering, IEEE transactions on pattern analysis and machine intelligence 41 (7) (2018) 1774–1782.
  • (36) F. Shen, X. Zhou, Y. Yang, J. Song, H. T. Shen, D. Tao, A fast optimization method for general binary code learning, IEEE Transactions on Image Processing 25 (12) (2016) 5610–5621.
  • (37) A. Gionis, P. Indyk, R. Motwani, et al., Similarity search in high dimensions via hashing, in: Vldb, Vol. 99, 1999, pp. 518–529.
  • (38) J. Song, Y. Yang, Z. Huang, H. T. Shen, J. Luo, Effective multiple feature hashing for large-scale near-duplicate video retrieval, IEEE Transactions on Multimedia 15 (8) (2013) 1997–2008.
  • (39) Z. Jin, C. Li, Y. Lin, D. Cai, Density sensitive hashing, IEEE transactions on cybernetics 44 (8) (2013) 1362–1371.
  • (40) Y. Xia, K. He, P. Kohli, J. Sun, Sparse projections for high-dimensional binary codes, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 3332–3339.
  • (41) Q.-Y. Jiang, W.-J. Li, Scalable graph hashing with feature transformation, in: Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
  • (42) Z. Zhang, L. Liu, J. Qin, F. Zhu, F. Shen, Y. Xu, L. Shao, H. Tao Shen, Highly-economized multi-view binary compression for scalable image clustering, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 717–732.
  • (43) J. A. Hartigan, M. A. Wong, Algorithm AS 136: A k-means clustering algorithm, Journal of the Royal Statistical Society: Series C (Applied Statistics) 28 (1) (1979) 100–108.
  • (44) U. Von Luxburg, A tutorial on spectral clustering, Statistics and computing 17 (4) (2007) 395–416.
  • (45) A. Kumar, P. Rai, H. Daume, Co-regularized multi-view spectral clustering, in: Advances in neural information processing systems, 2011, pp. 1413–1421.
  • (46) F. Nie, J. Li, X. Li, et al., Parameter-free auto-weighted multiple graph learning: A framework for multiview clustering and semi-supervised classification., in: IJCAI, 2016, pp. 1881–1887.
  • (47) J. Liu, C. Wang, J. Gao, J. Han, Multi-view clustering via joint nonnegative matrix factorization, in: Proceedings of the 2013 SIAM International Conference on Data Mining, SIAM, 2013, pp. 252–260.
  • (48) F. Nie, G. Cai, X. Li, Multi-view clustering and semi-supervised classification with adaptive neighbours, in: Thirty-First AAAI Conference on Artificial Intelligence, 2017.
  • (49) H. Gao, F. Nie, X. Li, H. Huang, Multi-view subspace clustering, in: Proceedings of the IEEE international conference on computer vision, 2015, pp. 4238–4246.
  • (50) X. Cao, C. Zhang, H. Fu, S. Liu, H. Zhang, Diversity-induced multi-view subspace clustering, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 586–594.
  • (51) L. v. d. Maaten, G. Hinton, Visualizing data using t-sne, Journal of machine learning research 9 (Nov) (2008) 2579–2605.