Multi-view data has become common nowadays because data can be collected from different sources or represented by different features. For example, the same news can be reported in articles from different news sources, one document can be translated into different languages, and one image can be represented with different kinds of features. Learning and analyzing multi-view data has become a hot research topic in recent years and has attracted many researchers in, to name a few, the areas of data mining, machine learning, information retrieval and cybersecurity. Many multi-view learning approaches based on different strategies, including co-training (Blum & Mitchell, 1998), multiple kernel learning (Lanckriet et al., 2004) and subspace learning (Jia et al., 2010), have been proposed in the literature (Xu et al., 2013). In (Xu et al., 2015), a multi-view learning approach based on subspace learning was proposed to discover a latent intact representation of the data. In (Wang et al., 2015),
deep neural networks were used to learn representations (features) for multi-view data. Multi-view learning approaches can be divided into supervised and unsupervised learning approaches. In this paper, we focus on one of the unsupervised learning techniques, namely clustering, for multi-view data analysis. As a promising data analysis tool, clustering is able to find the pattern structure and information underlying unlabelled data. Clustering algorithms based on different theories have been proposed for various applications in the literature (Jain, 2010; Filippone et al., 2008; Xu & Wunsch, 2005). Multi-view clustering approaches are able to mine the valuable information underlying different views of the data and integrate it to improve clustering performance, which has wide applications. For example, in news article categorization, each article may be written in different languages or collected from different news sources. In an e-learning education system, students' behaviour and performance may be analysed based on features collected from various sources. Students may be clustered into different groups based on several sets of features, for example how they approach the exercises and how they interact with the tutorial videos, which form two different sets of features.
Many multi-view clustering approaches have been proposed in the literature. Roughly three strategies are applied among the existing approaches for clustering multi-view data. The first strategy is to integrate the multi-view data into a single objective function which is optimized directly during the clustering process. The consensus clustering result is generated directly, without an additional step to combine the clustering results of the individual views. For example, in (Kumar et al., 2011)
, two co-regularized multi-view spectral clustering algorithms were proposed. A pairwise disagreement term and a centroid-based disagreement term for different views are added to the objective function of spectral clustering. Clustering results which are consistent across the views are achieved after the optimization process. In (Tzortzis & Likas, 2012), a kernel-based weighted multi-view clustering approach was presented. In particular, each view is expressed by a kernel matrix. The weight of each view and the consensus clustering result are learned by minimizing the disagreement between the views. In (Cai et al., 2013)
, a multi-view clustering approach based on K-means was proposed. The consensus cluster indicator is integrated in the objective function directly. The second strategy includes two steps as follows. First, a unified representation (view) is generated based on the multiple views. Then an existing clustering algorithm such as K-means (MacQueen, 1967) or spectral clustering (Ng et al., 2002) is applied to achieve the final clustering result. For example, in (Huang et al., 2012)
, Huang et al. proposed affinity aggregation spectral clustering, in which an aggregated affinity matrix is first found by seeking the optimal combination of different affinity matrices. Then spectral clustering is applied on the new affinity matrix to get the final clustering result. In (Guo, 2013), a common subspace representation of the data shared across multiple views is first learned. Then K-means is applied on the learned subspace representation matrix to generate the clustering result. In the third strategy, each view of the data is processed independently and an additional step is needed to generate the consensus clustering result based on the result of each view. For example, in (Bruno & Marchand-Maillet, 2009) and (Greene & Cunningham, 2009), the consensus clustering result is achieved by integrating the previously generated clusters of the individual views based on latent modeling of cluster-cluster relationships and matrix factorization, respectively.
The above multi-view clustering approaches are all based on hard clustering, in which each object can only belong to one cluster. Since real-world data sets may not be well separated, different approaches have been proposed based on soft or fuzzy clustering algorithms (Aparajeeta et al., 2016; Kannan et al., 2015; Anderson et al., 2013), in which each object can belong to all the clusters with various degrees of membership. The memberships used in soft clustering help to describe the data better and have many potential applications in the real world. For example, soft clustering approaches can better capture the topics of a document which belongs to several topics to different degrees. In (Liu et al., 2013), Liu et al. proposed a joint Nonnegative Matrix Factorization (NMF) (Lee & Seung, 1999) approach for multi-view clustering in which a disagreement term is introduced in the objective function. Besides NMF based multi-view clustering approaches, several multi-view fuzzy clustering algorithms based on the well-known Fuzzy C-Means (FCM) algorithm (Bezdek, 1981) have been developed. For example, in (Cleuziou et al., 2009), CoFKM was proposed to handle multi-view data by minimizing the objective function of FCM for each view and penalizing the disagreement between any pair of views. In (Jiang et al., 2015), a multi-view fuzzy clustering with weighted views called WV-Co-FCM was proposed. In WV-Co-FCM, the clustering process is based on optimizing an objective function which highlights the fuzzy partition, and the weight of each view is obtained by introducing an entropy regularization term.
Both the hard and soft approaches discussed above formulate multi-view clustering as an optimization problem in which the disagreement between the views is minimized. In (Wang et al., 2014), a minimax optimization based multi-view spectral clustering approach was proposed to handle multi-view relational data. However, as pointed out in (Cai et al., 2013), spectral clustering based multi-view clustering approaches have two drawbacks. One is that the clustering performance is sensitive to the choice of the kernel used to build the graph. The other is that they are not suitable for large-scale data clustering because of the high computational cost of kernel construction and eigen decomposition. Fuzzy C-Means (FCM) is widely applied in many applications because of its effectiveness and low time complexity. To combine the advantages of minimax optimization and FCM, in this paper we propose MinimaxFCM for multi-view data clustering. In MinimaxFCM, the goal is to achieve the consensus clustering result of multi-view data by minimizing the maximum disagreement of the weighted views. Apart from the fuzzifier, which is a parameter in all FCM based approaches, there is only one extra parameter in MinimaxFCM, which controls the weight distribution across the views. Moreover, the time complexity of MinimaxFCM is similar to that of FCM. Experiments with MinimaxFCM on nine real-world data sets, including image and document data sets, show that MinimaxFCM achieves better clustering performance than the related clustering approaches.
The rest of the paper is organized as follows: in the next section, the highlights of the related multi-view clustering approaches reported in the literature are given. In Section 3, the details of the proposed multi-view fuzzy clustering approach MinimaxFCM are described. Experiments on real-world data sets are conducted and the results are analyzed in Section 4. Finally, conclusions are drawn in Section 5.
2 Related work
In this section, five related multi-view clustering approaches, two hard clustering approaches and three soft clustering approaches, are reviewed. The two hard clustering approaches are a K-means based multi-view clustering and the minimax optimization based multi-view spectral clustering. The three soft approaches include one Nonnegative Matrix Factorization based approach and two fuzzy clustering based approaches.
Throughout this paper, the following notations are used unless otherwise stated: we denote the data set, which has N objects and K classes, as X = {x_1, x_2, ..., x_N}. The data set is represented by V different views, such that the i-th object in the v-th view is denoted as x_i^v. We use u_ij^v to denote the fuzzy membership which represents the degree to which object i belongs to cluster j in view v, and u_ij to denote the consensus membership of object i to cluster j shared across the different views. The centroid of cluster j of the v-th view is denoted as c_j^v. d_ij^v is used to denote the distance between centroid c_j^v and object x_i^v in view v, and m is used to denote the fuzzifier.
RMKMC (Cai et al., 2013) is a multi-view clustering approach based on K-means. The first strategy, as discussed in Section 1, is used by RMKMC, in which a single objective function is formulated and the consensus clustering result is generated directly after the algorithm converges. In RMKMC, the objective function of K-means is reformulated based on the fact that G-orthogonal non-negative matrix factorization (NMF) is equivalent to relaxed K-means clustering (Ding et al., 2005)
. To make the approach more robust to outliers, the L2,1-norm is applied in the objective function as follows.
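The elided objective can be reconstructed from (Cai et al., 2013) roughly as follows (a reconstruction; the paper's exact notation may differ):

```latex
\min_{F^{(v)},\,G,\,\alpha^{(v)}} \;\;
\sum_{v=1}^{V} \bigl(\alpha^{(v)}\bigr)^{\gamma}\,
\bigl\| X^{(v)T} - G\,F^{(v)T} \bigr\|_{2,1}
\qquad
\text{s.t.}\;\; G_{ik}\in\{0,1\},\;\;
\sum_{k} G_{ik} = 1,\;\;
\sum_{v=1}^{V} \alpha^{(v)} = 1
```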
Here G is the coefficient matrix, which is considered as the cluster indicator matrix. F^(v) is the basis matrix of the v-th view, which can be considered as the cluster centroid matrix. As shown in the objective function, the weighted sum of the reconstruction errors of the views is minimized, and the consensus clustering result is achieved directly after the algorithm converges. Moreover, the weight of each view is updated automatically. The higher the value of the weight, the more important the view is.
MinimaxMVSC (Wang et al., 2014) is a multi-view spectral clustering approach based on minimax optimization. In MinimaxMVSC, the first strategy is used to formulate the objective function as follows:
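One plausible shape of the elided formulation, consistent with the description below (the exact terms and any regularization constants in (Wang et al., 2014) may differ), is:

```latex
\min_{U^{*},\,\{U^{(v)}\}} \;\max_{w} \;\;
\sum_{v=1}^{V} w_{v}\Bigl( Q^{(v)} + \lambda \sum_{t\neq v} D^{(v,t)} \Bigr)
\qquad
\text{s.t.}\;\; \sum_{v=1}^{V} w_{v} = 1,\;\; w_{v}\ge 0
```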
where U* is the final consensus cluster indicator matrix and U^(v) is the Laplacian embedding of the v-th view. Q^(v) is the standard objective function of spectral clustering for the v-th view, and D^(v,t) measures the disagreement between view v and view t. The aim of MinimaxMVSC is to minimize the maximum of the weighted summation of these terms in order to achieve the consensus cluster indicator matrix U*. Then K-means is applied on U* to get the final clustering results.
In (Liu et al., 2013), a multi-view clustering approach based on joint Nonnegative Matrix Factorization (MultiNMF) is proposed. In MultiNMF, using the first strategy, the objective function of joint nonnegative matrix factorization is formulated as follows to find the consensus clustering result.
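The joint objective can be reconstructed from (Liu et al., 2013) roughly as follows; in the original, the second term also contains a diagonal normalization matrix that we omit here for readability:

```latex
\min_{U^{(v)},\,V^{(v)},\,V^{*}} \;\;
\sum_{v=1}^{V} \bigl\| X^{(v)} - U^{(v)} V^{(v)T} \bigr\|_{F}^{2}
\;+\;
\sum_{v=1}^{V} \lambda_{v} \bigl\| V^{(v)} - V^{*} \bigr\|_{F}^{2}
\qquad
\text{s.t.}\;\; U^{(v)}\ge 0,\; V^{(v)}\ge 0
```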
where ||.||_F is the Frobenius norm. The first term measures the standard NMF reconstruction error for the individual views. The second term measures the disagreement between each cluster indicator matrix V^(v) and the consensus cluster indicator matrix V*. The parameter lambda_v is set by the user to control the relative weight among different views and between the two terms. To keep the disagreement across different views meaningful and comparable, a novel normalization strategy was proposed by exploring the relation between NMF and Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999). Specifically, normalization is conducted with respect to the basis vectors in U^(v) during the optimization.
CoFKM (Cleuziou et al., 2009) is a multi-view fuzzy clustering approach developed based on FCM. To handle multi-view data, CoFKM combines the first and third strategies. For the first strategy, a term for the average disagreement between any pair of views is integrated into the objective function. By minimizing the sum of the standard FCM objective function of each view and the pairwise disagreement term, the membership of each view is achieved. Then the third strategy is applied, in which the final consensus fuzzy membership u_ij
is generated by calculating the geometric mean of the memberships of all views as follows:
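With u_ij^v the membership of object i to cluster j in view v, the elided geometric-mean formula would read (our reconstruction):

```latex
u_{ij} = \Bigl( \prod_{v=1}^{V} u_{ij}^{v} \Bigr)^{1/V}
```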
A parameter is used in the objective function to control the weight of the disagreement term; however, in CoFKM each view is treated equally and the weight of each view is not considered. As described in (Jiang et al., 2015), this may degrade the clustering performance in scenarios where some views are noisy and unreliable.
In (Jiang et al., 2015), based on strategies similar to those applied in Co-FKM, WV-Co-FCM is proposed to handle multi-view data. As in Co-FKM, the fuzzy membership for each object in each view is first calculated in WV-Co-FCM. Then an additional step is needed to calculate the final consensus membership. There are mainly three differences between the two approaches. First, instead of using standard FCM, WV-Co-FCM is based on GIFP-FCM (Zhu et al., 2009), in which a membership-enhancing term is added. Second, the weight of each view is considered in WV-Co-FCM, and an entropy regularization term on the weights is introduced into the objective function. Third, instead of using the geometric mean as in Co-FKM, the final consensus membership is generated based on the weight of each view as follows:
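The elided weighted combination plausibly takes the following form, where w_v is the weight of view v (an assumption consistent with the description, not the paper's verbatim equation):

```latex
u_{ij} = \sum_{v=1}^{V} w_{v}\, u_{ij}^{v}
```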
As discussed above, both Co-FKM and WV-Co-FCM use an additional step to achieve the final consensus clustering results. For Co-FKM, the weight of each view is not considered, which may degrade the clustering accuracy. For WV-Co-FCM, the weights are only used in the final step. In other words, the membership of each object and the cluster centroids of each view are updated independently, without considering the influence of the weights. In our method, similar to the strategy used in RMKMC, MinimaxMVSC and MultiNMF, we formulate the final consensus membership directly into the objective function. Moreover, inspired by MinimaxMVSC, minimax optimization instead of direct minimization of the objective function is used in our approach. The maximum of the weighted summation of the per-view objective functions is minimized. In other words, the view with the larger cost, as measured by the MinimaxFCM objective function, is given a higher weight, so the cost from that view is suppressed more vigorously than the costs from the other views. Hence better consensus results can be achieved. The appropriate consensus membership and the weight of each view are obtained simultaneously in the proposed MinimaxFCM clustering process. Next, we present our new multi-view fuzzy clustering approach called MinimaxFCM, including the detailed formulation, derivation and an in-depth analysis.
3 The proposed approach
In this section, the objective function of the proposed approach MinimaxFCM is first formulated. The updating rules are derived by applying the Lagrange multiplier method. Next, we introduce the MinimaxFCM algorithm with detailed steps. The time complexity of the algorithm is discussed as well.
3.1 Formulation of MinimaxFCM
We formulate the multi-view fuzzy clustering as a minimax optimization as follows:
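The elided formulation can be reconstructed from the description that follows; this is our reconstruction using the notation introduced earlier, not the paper's verbatim equations (6)-(11):

```latex
\min_{U,\;C^{1},\dots,C^{V}} \;\max_{w} \quad
\sum_{v=1}^{V} w_{v}^{\gamma}\, Q^{v},
\qquad
Q^{v} = \sum_{i=1}^{N}\sum_{j=1}^{K} u_{ij}^{m}\,
\bigl\| x_{i}^{v} - c_{j}^{v} \bigr\|^{2},
\qquad
\text{s.t.}\quad u_{ij}\in[0,1],\;\;
\sum_{j=1}^{K} u_{ij} = 1,\;\;
w_{v}\ge 0,\;\;
\sum_{v=1}^{V} w_{v} = 1 .
```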
In the formulation, U is the membership matrix whose element in row i and column j is u_ij. C^v is the centroid matrix of the v-th view, whose j-th column c_j^v is the centroid of cluster j in view v. Here d_v is the dimension of the objects in view v. Q^v can be considered as the cost of view v, which is the standard objective function of Fuzzy C-Means (FCM). w_v is the weight of view v. The parameter gamma controls the distribution of the weights across the views. m is the fuzzifier for fuzzy clustering, which controls the fuzziness of the membership.
The clustering goal is to conduct a minimax optimization on the objective function subject to the constraints in (8), (9), (10) and (11). In this new minimax formulation for multi-view fuzzy clustering, the consensus clustering result integrating heterogeneous views of data is generated directly based on the consensus membership u_ij. In addition, the weight of each view is automatically determined by the minimax optimization, without the user having to specify the weights. Moreover, by using minimax optimization, the different views are integrated harmonically by weighting each cost term differently.
It is difficult to solve for the variables U, C^v and w in (6) directly because (6) is nonconvex. However, since the objective function is convex w.r.t. U and C^v and concave w.r.t. w, alternating optimization (AO), as in FCM, can be used to solve the optimization problem by solving for one variable with the others fixed.
3.2.1 Minimization: Fixing c_j^v and w_v, updating u_ij
The Lagrange multiplier method is applied to solve the constrained optimization problem of the objective function. The Lagrangian function incorporating the constraints is given as follows:
where L represents the Lagrangian of the MinimaxFCM objective function, and lambda_i and beta are the Lagrange multipliers. The condition for solving u_ij is as follows:
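Setting the derivative of the Lagrangian with respect to u_ij to zero under the constraint that the memberships of each object sum to one yields an update of the following form (our reconstruction, consistent with standard FCM derivations):

```latex
u_{ij} \;=\;
\frac{\Bigl(\sum_{v=1}^{V} w_{v}^{\gamma}\,(d_{ij}^{v})^{2}\Bigr)^{-\frac{1}{m-1}}}
     {\sum_{k=1}^{K}\Bigl(\sum_{v=1}^{V} w_{v}^{\gamma}\,(d_{ik}^{v})^{2}\Bigr)^{-\frac{1}{m-1}}}
```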
As shown in (14), the weight of each view is taken into account when updating u_ij.
3.2.2 Minimization: Fixing u_ij and w_v, updating c_j^v
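Since the weight w_v^gamma multiplies every term of view v, it cancels when the derivative with respect to c_j^v is set to zero, leaving the familiar FCM centroid update (our reconstruction):

```latex
c_{j}^{v} \;=\;
\frac{\sum_{i=1}^{N} u_{ij}^{m}\, x_{i}^{v}}
     {\sum_{i=1}^{N} u_{ij}^{m}}
```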
3.2.3 Maximization: Fixing u_ij and c_j^v, updating w_v
Based on the Lagrange multiplier method, the condition for solving w_v is as follows:
Here the cost term Q^v is the membership-weighted sum of the distances of all data points in the v-th view to their corresponding centroids. The larger the value of Q^v, the larger the cost this view contributes to the objective function. From (18), we can see that the larger the cost of a view, the higher the value assigned to its weight, which yields the maximum of the weighted cost. This maximum is minimized with respect to the memberships and centroids in order to suppress the high-cost views and achieve a harmonic consensus clustering result. Next, we present the details of the multi-view fuzzy clustering algorithm based on the minimax optimization.
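As a concrete illustration, maximizing the weighted sum over the simplex of view weights (for 0 < gamma < 1) gives the closed form w_v proportional to (Q^v)^(1/(1-gamma)). The sketch below assumes that closed form; the function name and normalization are ours, not the paper's:

```python
import numpy as np

def update_view_weights(costs, gamma):
    """Solve max_w sum_v w_v**gamma * Q_v subject to sum_v w_v = 1.

    Setting the Lagrangian derivative to zero gives w_v proportional
    to Q_v**(1 / (1 - gamma)), so the view with the larger cost
    receives the larger weight (assuming 0 < gamma < 1).
    """
    costs = np.asarray(costs, dtype=float)
    w = costs ** (1.0 / (1.0 - gamma))
    return w / w.sum()
```

For example, with costs [1, 4] and gamma = 0.5 the weights are proportional to [1, 16], so the costlier view dominates; as gamma approaches 1 the largest-cost view takes almost all of the weight.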
3.3 MinimaxFCM Algorithm
The details of the proposed MinimaxFCM approach are outlined in Algorithm 1 as follows. First, the data set is represented by multiple views computed from different features. The centroids c_j^v and the weight w_v of each view are initialized; w_v is initialized as 1/V so that the weight is uniform across the views. Then, the consensus membership u_ij, the centroids c_j^v for each view, and the weight w_v for each view are updated by using (14), (16) and (17), respectively. Steps 4-16 are repeated until the convergence condition is satisfied. In the final step, the cluster indicator q is determined for each object; q_i is the number of the cluster to which object i belongs. This is achieved by assigning object i to the cluster with the largest consensus membership u_ij.
|Algorithm 1: MinimaxFCM|
|Input: Data set X of V views with size N|
|Cluster number K, stopping criterion epsilon, fuzzifier m, parameter gamma|
|Output: Cluster indicator q|
|Cluster centroids c_j^v for each view|
|The weight w_v for each view|
|1 Initialize centroids c_j^v for each view.|
|2 Initialize w_v = 1/V for each view|
|3 repeat|
|4 for i = 1 to N do|
|5 for j = 1 to K do|
|6 Update u_ij using equation (14);|
|7 end for|
|8 end for|
|9 for v = 1 to V do|
|10 for j = 1 to K do|
|11 Update c_j^v using equation (16);|
|12 end for|
|13 end for|
|14 for v = 1 to V do|
|15 Update w_v using equation (17);|
|16 end for|
|until the convergence condition is satisfied|
|Assign each object to the cluster with the largest consensus membership u_ij|
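A compact, hypothetical Python sketch of Algorithm 1 follows. The update formulas are the reconstructions discussed above (combined weighted distances for u_ij, per-view FCM centroids, and w_v proportional to (Q^v)^(1/(1-gamma))); the maximin initialization here is a simplification of the paper's scheme, and all names are ours:

```python
import numpy as np

def minimax_fcm(views, K, m=1.5, gamma=0.5, eps=1e-6, max_iter=200):
    """Hypothetical sketch of MinimaxFCM.

    views : list of (N, d_v) arrays, one feature matrix per view.
    Returns consensus memberships U (N, K), hard labels, view weights w.
    """
    V, N = len(views), views[0].shape[0]
    # Deterministic maximin initialization on the first view.
    X0 = views[0]
    idx = [0]
    for _ in range(K - 1):
        dmin = ((X0[:, None] - X0[idx][None]) ** 2).sum(-1).min(axis=1)
        idx.append(int(np.argmax(dmin)))
    centroids = [X[idx] for X in views]
    w = np.full(V, 1.0 / V)
    prev = np.inf
    for _ in range(max_iter):
        # Squared distances d2[v] with shape (N, K).
        d2 = [((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
              for X, C in zip(views, centroids)]
        # Consensus membership from the combined weighted distance.
        D = sum(wv ** gamma * dv for wv, dv in zip(w, d2))
        U = np.maximum(D, 1e-12) ** (-1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
        Um = U ** m
        # Per-view centroid update (the view weight cancels here).
        centroids = [(Um.T @ X) / Um.sum(axis=0)[:, None] for X in views]
        # Per-view costs and minimax weight update.
        costs = np.array([(Um * dv).sum() for dv in d2])
        w = costs ** (1.0 / (1.0 - gamma))
        w /= w.sum()
        obj = (w ** gamma * costs).sum()
        if abs(prev - obj) < eps:
            break
        prev = obj
    return U, U.argmax(axis=1), w
```

Note that the centroid update does not involve w_v, because the weight multiplies every term of a view uniformly and therefore cancels within that view.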
There are two parameters, the fuzzifier m and the parameter gamma, that need to be set before running the algorithm. The parameter gamma controls the distribution of the weights across the views. It can be seen from (18) that when gamma is small, the weights of the views are distributed more evenly; as gamma approaches 1, the weight of the view whose cost term is the largest among all views approaches 1, while the weights of the other views approach 0.
The time complexity of MinimaxFCM is linear in the number of iterations, the number of objects N, the number of clusters K and the number of views V, since each iteration updates u_ij, c_j^v and w_v with one pass over the data. Note that, different from graph-based multi-view algorithms such as (Kumar et al., 2011) and (Wang et al., 2014), the time complexity of our multi-view fuzzy clustering is similar to that of FCM. The graph construction and eigen decomposition used in graph-based algorithms, with time complexities on the order of N^2 and N^3 respectively, are time consuming.
4 Experimental results
In this section, experimental studies of the proposed approach are conducted on different kinds of multi-view data sets, including image and document data sets. In the experiments, we compare the performance of MinimaxFCM with six related approaches for multi-view data clustering. The experiments, implemented in Matlab, were conducted on a PC with a four-core Intel I5-2400 and 8 gigabytes of memory.
4.1 Data sets
Nine data sets as summarized in Table. 1 were used for the experimental study and comparisons.
|Data Sets||No. of views||No. of classes||No. of objects||No. of dimensions|
|Reuters multilingual data||5||6||1500||107783|
Multiple features (MF) (available at https://archive.ics.uci.edu/ml/datasets/Multiple+Features): This data set consists of 2000 handwritten digit images (0-9) extracted from a collection of Dutch utility maps. It has 10 classes and each class has 200 images. Each object is described by 6 different views (Fourier coefficients, profile correlations, Karhunen-Loève coefficients, pixel averages, Zernike moments, morphological features).
Image segmentation (IS) data set (available at https://archive.ics.uci.edu/ml/datasets/Image+Segmentation): This data set is composed of 2310 outdoor images belonging to 7 classes. Each image is represented by 19 features. The features can be considered as two views: a shape view and an RGB view. The shape view consists of 9 features which describe the shape information of each image. The RGB view consists of 10 features which describe the RGB values of each image.
Corel image data set (available at http://www.cs.virginia.edu/ xj3a/research/CBIR/Download.htm): This data set is a part of the popular Corel image collection and consists of 34 classes, each with 100 images. Some image examples are shown in Fig. 1. Each image is represented by 7 different views, including three color-related views and four texture-related views. Table 2 shows the details of the 7 views. We extracted several four-class subsets and tested the representative ones as shown in Table 3.
|View Categories||View name||Dimension|
|Color Views||Color Histogram||64|
|Texture Views||Coarseness of Tamura Texture||10|
|Directionality of Tamura Texture||8|
3-Sources document data set (available at http://mlg.ucd.ie/datasets/3sources.html): This data set consists of 948 news articles covering 416 distinct news stories, collected from three online news sources: BBC, Guardian and Reuters. We selected the 169 news stories that are reported in all three sources. It has 6 topic classes: business, entertainment, health, politics, sport and technology. Each article is described by 3 views, one per source.
Reuters multilingual data set: This data set contains documents originally written in five different languages (English, French, German, Spanish and Italian) and their translations (Amini et al., 2009). This multilingual data set covers a common set of six classes. We use documents originally in English as the first view and their four translations as the other four views. We randomly sample 1500 documents from this collection with each of the 6 classes having 250 documents.
4.2 Experimental settings
We first compare the performance of the proposed MinimaxFCM with its corresponding single-view counterpart. In addition, we compare the results of our method with those of a baseline, naive multi-view fuzzy clustering, implemented by simply using the concatenated features of all views as input to the FCM clustering algorithm. To demonstrate the effectiveness of MinimaxFCM, different kinds of multi-view clustering approaches are also compared: two fuzzy clustering based approaches, Co-FKM (Cleuziou et al., 2009) and WV-Co-FCM (Jiang et al., 2015); the K-means based RMKMC; the nonnegative matrix factorization based MultiNMF; and the spectral clustering based approach using minimax optimization, MinimaxMVSC. The six compared approaches and their parameter settings are summarized as follows:
FCM on Single View: We apply standard FCM on each single view of the data sets and report the worst and best clustering results among different views.
FCM on Concatenated View: We first concatenate the features of all views and then apply standard FCM on the concatenated data.
Multiview Fuzzy Clustering: As in WV-Co-FCM (Jiang et al., 2015), a grid search strategy is adopted to find the best parameters. For Co-FKM, the disagreement parameter is searched over the range recommended in (Cleuziou et al., 2009), which depends on the number of views, with step 0.01. For WV-Co-FCM, the parameters are searched in the same way as described in (Jiang et al., 2015). We select the first updating equation (case (a) in (Jiang et al., 2015)) for WV-Co-FCM, as the results of the four updating equations are very similar.
Multiview Spectral Clustering: The multi-view spectral clustering approach based on minimax optimization (MinimaxMVSC) (Wang et al., 2014) is compared. Its parameter is searched in the range [0.1, 0.9] with step 0.1.
The parameter setting for MinimaxFCM is similar to that in (Cleuziou et al., 2009). The parameter gamma is searched from [0.1, 0.9] with step 0.1. For all fuzzy clustering based approaches, the fuzzifier m is set by searching from [1.1, 2] with step 0.05. The results reported are the values obtained with the best searched parameters for each approach.
4.3 Evaluation criterion
Three popular external criteria, Accuracy (Mei & Chen, 2012), F-measure (Larsen & Aone, 1999), and Normalized Mutual Information (NMI) (Strehl & Ghosh, 2003), are used to evaluate the clustering results; they measure the agreement between the clustering results produced by an algorithm and the ground truth. If we refer to a class as the ground truth and a cluster as the result of a clustering algorithm, the NMI is calculated as follows:
where n is the total number of objects, n_j and n_c are the numbers of objects in cluster j and class c respectively, and n_{j,c} is the number of common objects in class c and cluster j. For the F-measure, the calculation based on precision and recall is as follows:
Accuracy is calculated as follows after obtaining a one-to-one match between clusters and classes:
where n_{j,c} is the number of common objects in cluster j and its matched class c. The higher the values of the three criteria, the better the clustering result. A value of 1 is reached only when the clustering result is identical to the ground truth.
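As an illustration, NMI and matching-based accuracy can be sketched as follows (a minimal implementation assuming the standard definitions; function names are ours, and the brute-force matching assumes equal numbers of clusters and classes):

```python
import numpy as np
from itertools import permutations

def nmi(labels, truth):
    """Normalized mutual information, I(C;T) / sqrt(H(C) * H(T))."""
    labels, truth = np.asarray(labels), np.asarray(truth)
    n = len(labels)
    cl, tl = np.unique(labels), np.unique(truth)
    # Contingency table: objects in cluster j and class c.
    M = np.array([[np.sum((labels == j) & (truth == c)) for c in tl]
                  for j in cl], dtype=float)
    pj, pc = M.sum(1) / n, M.sum(0) / n
    mi = 0.0
    for j in range(len(cl)):
        for c in range(len(tl)):
            if M[j, c] > 0:
                mi += M[j, c] / n * np.log(M[j, c] / n / (pj[j] * pc[c]))
    hj = -np.sum(pj * np.log(pj))
    hc = -np.sum(pc * np.log(pc))
    return mi / np.sqrt(hj * hc)

def accuracy(labels, truth):
    """Best one-to-one cluster-to-class matching (brute force over K!)."""
    labels, truth = np.asarray(labels), np.asarray(truth)
    cl, tl = np.unique(labels), np.unique(truth)
    best = 0.0
    for perm in permutations(tl):
        mapping = dict(zip(cl, perm))
        best = max(best, np.mean([mapping[l] == t
                                  for l, t in zip(labels, truth)]))
    return best
```

A library implementation (for example scikit-learn's normalized_mutual_info_score) would normally be preferred in practice; the brute-force matching above is only viable for small numbers of clusters.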
To make MinimaxFCM more robust to initialization, we initialize the K centroids for each view based on the method used in (Krishnapuram et al., 2001). K objects in each view are selected as the initial centroids. For each view, we select the object with the minimum total distance to all the other objects as the first centroid. The remaining centroids are chosen consecutively by selecting the objects that maximize their minimal distance to the existing centroids. With this selection mechanism, convergence to a bad local optimum may be avoided because the centroids are distributed evenly in the data space. The detailed steps of the initialization of MinimaxFCM are as follows.
|Initialization for MinimaxFCM|
|Set the number of clusters K|
|1 for v = 1 to V do|
|2 Calculate the first centroid: the object with the minimum total distance to all other objects;|
|3 Centroids set C^v = {first centroid};|
|4 for k = 2 to K do|
|5 Centroids set C^v = C^v with the object maximizing its minimal distance to the centroids in C^v added;|
|6 end for|
|7 end for|
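The selection procedure described above can be sketched per view as follows (a minimal implementation of the medoid-first, maximin-continuation scheme; the function name is ours):

```python
import numpy as np

def init_centroids(X, K):
    """Deterministic centroid selection in the style of
    (Krishnapuram et al., 2001).

    The first centroid is the object with the minimum total distance
    to all other objects; each further centroid maximizes its minimal
    distance to the centroids already chosen.
    """
    # Pairwise Euclidean distances between all objects.
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) ** 0.5
    idx = [int(np.argmin(d.sum(axis=1)))]   # medoid first
    for _ in range(K - 1):
        dmin = d[:, idx].min(axis=1)        # distance to nearest chosen
        idx.append(int(np.argmax(dmin)))
    return X[idx], idx
```

Run once per view, this yields the same initial centroids on every run, which is why the experiments below report a single result per approach rather than an average over restarts.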
For a fair comparison, the same initialization method is applied in standard FCM, Co-FKM and WV-Co-FCM to initialize the centroids. For RMKMC and MultiNMF, we initialize the cluster centroid matrix, which is composed of the centroids selected by the same method. The same method is also applied in the K-means step used as the final step of MinimaxMVSC.
4.5 Results on image data sets
For the image data sets Multiple features (MF), Image segmentation (IS) and the five subsets of the Corel data set, to obtain comparable costs across views, we adopt the method used in Co-FKM (Cleuziou et al., 2009)
to normalize each view. We normalize each feature to unit variance and apply a dimension-dependent weight to the data of view v, where d_v is the dimension of view v. The Euclidean distance measure is used to calculate the distances. For MultiNMF, the data is preprocessed with the normalization described in (Liu et al., 2013). For MinimaxMVSC, as described in (Wang et al., 2014), the similarity matrix is constructed with the Gaussian kernel, using the Euclidean distance. Note that the above initialization method generates the same set of initial centroids, hence the clustering results of each run are the same for each approach. The accuracy, NMI and F-measure results for the MF, IS and Corel data sets are shown in Tables 4, 5 and 6, respectively.
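A sketch of this per-view preprocessing follows. The exact weight value is elided in the text, so the 1/sqrt(d_v) factor below is an assumption, chosen because it equalizes the expected squared norm across views of different dimensionality:

```python
import numpy as np

def normalize_view(X):
    """Scale each feature to unit variance, then apply a per-view
    weight of 1 / sqrt(d_v) (an assumed value; the paper's exact
    weight is not recoverable from the text)."""
    std = X.std(axis=0)
    std[std == 0] = 1.0                     # guard constant features
    Xn = (X - X.mean(axis=0)) / std
    return Xn / np.sqrt(X.shape[1])
```

After this step, the squared Euclidean norm of a typical object is comparable across views, so no single high-dimensional view dominates the per-view costs Q^v.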
(Tables 4, 5 and 6: each table reports rows for the Worst Single View and Best Single View baselines alongside the compared approaches; the numeric entries are not recoverable here.)
From the tables we can see that all the multi-view clustering approaches perform better than the best single view and the concatenated view. We also observe that the concatenating method, in which the features of all views are concatenated directly, does not guarantee better clustering results. For example, the results of the concatenated view of the Multiple features (MF) data set are better than its best single view, while the results of the concatenated view of the Image segmentation (IS) data set are worse than its best single view. The reason for this phenomenon may be that the different views are not compatible with each other. MinimaxFCM is based on minimax optimization, which helps to find harmonic consensus clustering results for data with either compatible or incompatible views. As we can see from the tables, MinimaxFCM performs the best on almost all the data sets. Note that MinimaxFCM also performs better than MinimaxMVSC, in which minimax optimization is likewise used.
4.6 Results on document data sets
For the document data sets (3-Sources and Reuters multilingual data), the bag-of-words representation of documents generates features which are very sparse and high-dimensional, and standard distance measures such as the Euclidean distance are often unreliable in high dimensions. Therefore, for the 3-Sources data, we adopt a normalization method similar to that used in (Liu et al., 2013): each document vector in view v is normalized to unit norm. Moreover, the cosine distance is used for the 3-Sources data set. For the Reuters multilingual data set, following the experimental setting in (Kumar et al., 2011), Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999) is applied to project the data to a 100-dimensional space, and the clustering approaches are run on the low-dimensional data. Table 7 shows the accuracy, F-measure and NMI results for the two data sets. As we can see, MinimaxFCM consistently performs better than the other approaches.
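The elided normalization is assumed here to be unit L2 norm per document vector, which is consistent with the use of cosine distance (for unit vectors, cosine distance reduces to one minus a dot product). This is an assumption, not the paper's verbatim choice:

```python
import numpy as np

def l2_normalize_rows(X):
    """Scale each document (row) vector to unit L2 norm."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                 # leave empty documents as zero
    return X / norms

def cosine_distance(a, b):
    """1 - cos(a, b); equals 1 - a.b for unit-normalized vectors."""
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

With rows normalized this way, the distance computations inside the clustering loop can swap the squared Euclidean distance for the cosine distance without any other change.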
4.7 Parameter Analysis
The formulation of the objective function of MinimaxFCM has two parameters (gamma and the fuzzifier m), as shown in (6) and (7). To show the impact of the two parameters on the performance of MinimaxFCM, we plot the NMI performance curve w.r.t. each parameter for each data set in Fig. 2 and Fig. 3, respectively. Here we only show the NMI results; the accuracy and F-measure results have a similar pattern. Fig. 2 and Fig. 3 are generated as follows. First, for Fig. 2 the value of the fuzzifier m which produces the results in Tables 4, 5 and 6 is fixed. Then, the NMI results are plotted w.r.t. the parameter gamma with values from [0.1, 0.9] with step 0.1. Fig. 3 is plotted in the same way, with the NMI results plotted w.r.t. the fuzzifier m with values from [1.1, 2] with step 0.05. As shown in Fig. 2 and Fig. 3, the NMI results are not very sensitive to the parameter gamma for each data set. Compared to gamma, the NMI results are more sensitive to the fuzzifier m. We observe that the values of the fuzzifier are always in the range [1.1, 1.7] when the best NMI results are achieved. Moreover, for document data a smaller fuzzifier achieves better NMI results. Therefore, we recommend setting gamma to a value from [0.1, 0.9] and m from [1.1, 1.7] in practice. In addition, if the data set is document data, a smaller m is more suitable.
We have proposed a new multi-view fuzzy clustering approach called MinimaxFCM for multi-view data analysis, and applied it to seven image data sets and two document data sets to demonstrate its effectiveness and potential. MinimaxFCM combines minimax optimization with standard FCM to obtain a harmonic consensus clustering: the maximum of the weighted cost over the views is minimized. Experimental results show that MinimaxFCM achieves more accurate clustering results than related multi-view clustering algorithms. Moreover, the time complexity of MinimaxFCM is similar to that of FCM, which indicates great potential for clustering large multi-view data. In the future, MinimaxFCM may be extended to handle the scenario where the entire data set is too large to be stored in memory for clustering.
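Since MinimaxFCM builds on standard FCM (its own updates, eqs. (6) and (7), are not reproduced in this section), a minimal single-view FCM can serve as a reference point for the building block; this is a sketch of the classical algorithm (Bezdek, 1981), not of MinimaxFCM itself.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    # Minimal standard fuzzy c-means: alternate the membership update
    # and the weighted-centroid update for n_iter rounds.
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m                         # fuzzified memberships
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)               # avoid division by zero
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=0)
    return U.argmax(axis=0), centers
```

MinimaxFCM runs this kind of update per view while reweighting the views so that the worst (maximum) weighted view cost is driven down.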
- Amini et al. (2009) Amini, M., Usunier, N., & Goutte, C. (2009). Learning from multiple partially observed views-an application to multilingual text categorization. In Proceedings of Advances in Neural Information Processing Systems (pp. 28–36). Vancouver.
- Anderson et al. (2013) Anderson, D. T., Zare, A., & Price, S. (2013). Comparing fuzzy, probabilistic, and possibilistic partitions using the earth mover's distance. IEEE Transactions on Fuzzy Systems, 21, 766–775.
- Aparajeeta et al. (2016) Aparajeeta, J., Nanda, P. K., & Das, N. (2016). Modified possibilistic fuzzy c-means algorithms for segmentation of magnetic resonance image. Applied Soft Computing, 41, 104–119.
- Bezdek (1981) Bezdek, J. C. (1981). Pattern Recognition with Fuzzy Objective Function Algorithms. Norwell, MA: Kluwer Academic Publishers.
- Blum & Mitchell (1998) Blum, A., & Mitchell, T. (1998). Combining labeled and unlabeled data with co-training. In Proceedings of the eleventh annual conference on Computational learning theory (pp. 92–100).
- Bruno & Marchand-Maillet (2009) Bruno, E., & Marchand-Maillet, S. (2009). Multiview clustering: A late fusion approach using latent models. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval (pp. 736–737). Boston, MA.
- Cai et al. (2013) Cai, X., Nie, F., & Huang, H. (2013). Multi-view k-means clustering on big data. In Proceedings of the 23rd international joint conference on Artificial Intelligence (pp. 2598–2604). Beijing, China.
- Cleuziou et al. (2009) Cleuziou, G., Exbrayat, M., Martin, L., & Sublemontier, J. (2009). Cofkm: A centralized method for multiple-view clustering. In Proceedings of the 9th IEEE International Conference on Data Mining (pp. 752–757). Miami, FL.
- Ding et al. (2005) Ding, C., He, X., & Simon, H. D. (2005). Nonnegative lagrangian relaxation of k-means and spectral clustering. In Proceedings of the 16th European Conference on Machine Learning (pp. 530–538). Porto, Portugal.
- Filippone et al. (2008) Filippone, M., Camastra, F., Masulli, F., & Rovetta, S. (2008). A survey of kernel and spectral methods for clustering. Pattern Recognition, 41, 176–190.
- Greene & Cunningham (2009) Greene, D., & Cunningham, P. (2009). A matrix factorization approach for integrating multiple data views. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 423–438). Bled, Slovenia.
- Guo (2013) Guo, Y. (2013). Convex subspace representation learning from multi-view data. In Proceedings of the 27th AAAI Conference on Artificial Intelligence (pp. 387–393). Washington.
- Hofmann (1999) Hofmann, T. (1999). Probabilistic latent semantic indexing. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval (pp. 50–57). Berkeley, CA.
- Huang et al. (2012) Huang, H.-C., Chuang, Y.-Y., & Chen, C.-S. (2012). Affinity aggregation for spectral clustering. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (pp. 773–780). Providence, RI.
- Jain (2010) Jain, A. (2010). Data clustering: 50 years beyond k-means. Pattern Recognition Letters, 31, 651–666.
- Jia et al. (2010) Jia, Y., Salzmann, M., & Darrell, T. (2010). Factorized latent spaces with structured sparsity. In Advances in Neural Information Processing Systems (pp. 982–990).
- Jiang et al. (2015) Jiang, Y., Chung, F.-L., Wang, S., Deng, Z., Wang, J., & Qian, P. (2015). Collaborative fuzzy clustering from multiple weighted views. IEEE Transactions on Cybernetics, 45, 688–701. doi:10.1109/TCYB.2014.2334595.
- Kannan et al. (2015) Kannan, S., Devi, R., Ramathilagam, S., Hong, T.-P., & Ravikumar, A. (2015). Robust fuzzy clustering algorithms in analyzing high-dimensional cancer databases. Applied Soft Computing, 35, 199–213.
- Krishnapuram et al. (2001) Krishnapuram, R., Joshi, A., Nasraoui, O., & Yi, L. (2001). Low-complexity fuzzy relational clustering algorithms for web mining. IEEE Transactions on Fuzzy Systems, 9, 595–607.
- Kumar et al. (2011) Kumar, A., Rai, P., & Daume, H. (2011). Co-regularized multi-view spectral clustering. In Proceedings of Advances in Neural Information Processing Systems (pp. 1413–1421). Granada, Spain.
- Lanckriet et al. (2004) Lanckriet, G. R., Cristianini, N., Bartlett, P., Ghaoui, L. E., & Jordan, M. I. (2004). Learning the kernel matrix with semidefinite programming. The Journal of Machine Learning Research, 5, 27–72.
- Larsen & Aone (1999) Larsen, B., & Aone, C. (1999). Fast and effective text mining using linear-time document clustering. In Proceedings of the 5th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 16–22). San Diego, CA.
- Lee & Seung (1999) Lee, D. D., & Seung, H. S. (1999). Learning the parts of objects by non-negative matrix factorization. Nature, 401, 788–791.
- Liu et al. (2013) Liu, J., Wang, C., Gao, J., & Han, J. (2013). Multi-view clustering via joint nonnegative matrix factorization. In Proceedings of the 2013 SIAM International Conference on Data Mining (pp. 252–260). Austin, TX.
- MacQueen (1967) MacQueen, J. (1967). Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, Volume 1 (pp. 281–297).
- Mei & Chen (2012) Mei, J.-P., & Chen, L. (2012). A fuzzy approach for multitype relational data clustering. IEEE Transactions on Fuzzy Systems, 20, 358–371.
- Ng et al. (2002) Ng, A. Y., Jordan, M. I., & Weiss, Y. (2002). On spectral clustering: Analysis and an algorithm. In Proceedings of Advances in Neural Information Processing Systems (pp. 849–856). Vancouver.
- Strehl & Ghosh (2003) Strehl, A., & Ghosh, J. (2003). Cluster ensembles—a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research, 3, 583–617.
- Tzortzis & Likas (2012) Tzortzis, G., & Likas, A. (2012). Kernel-based weighted multi-view clustering. In Proceedings of the 12th IEEE International Conference on Data Mining (pp. 675–684). Brussels, Belgium.
- Wang et al. (2014) Wang, H., Weng, C., & Yuan, J. (2014). Multi-feature spectral clustering with minimax optimization. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (pp. 4106–4113). Columbus, OH.
- Wang et al. (2015) Wang, W., Arora, R., Livescu, K., & Bilmes, J. (2015). On deep multi-view representation learning. In Proceedings of the 32nd International Conference on Machine Learning (pp. 1083–1092).
- Xu et al. (2013) Xu, C., Tao, D., & Xu, C. (2013). A survey on multi-view learning. arXiv preprint arXiv:1304.5634.
- Xu et al. (2015) Xu, C., Tao, D., & Xu, C. (2015). Multi-view intact space learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37, 2531–2544.
- Xu & Wunsch (2005) Xu, R., & Wunsch, D. (2005). Survey of clustering algorithms. IEEE Transactions on Neural Networks, 16, 645–678.
- Zhu et al. (2009) Zhu, L., Chung, F.-L., & Wang, S. (2009). Generalized fuzzy c-means clustering algorithm with improved fuzzy partitions. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 39, 578–591.