
One-Pass Incomplete Multi-view Clustering

Real data often come with multiple modalities or from multiple heterogeneous sources, forming so-called multi-view data, which is receiving more and more attention in machine learning. Multi-view clustering (MVC) has become an important paradigm for such data. In real-world applications, some views often suffer from missing instances. Clustering on such multi-view datasets is called incomplete multi-view clustering (IMC) and is quite challenging. To date, though many approaches have been developed, most of them are offline and have high computational and memory costs, especially for large-scale datasets. To address this problem, in this paper we propose a One-Pass Incomplete Multi-view Clustering framework (OPIMC). With the help of regularized matrix factorization and weighted matrix factorization, OPIMC can handle this problem with relative ease. Different from the existing, sole online IMC method, OPIMC can directly obtain clustering results and effectively determine the termination of the iteration process by introducing two global statistics. Finally, extensive experiments conducted on four real datasets demonstrate the efficiency and effectiveness of the proposed OPIMC method.


Introduction

With the proliferation of diverse data acquisition devices, real data often come with multiple modalities or from multiple heterogeneous sources [Blum and Mitchell1998], forming so-called multi-view data [Son et al.2017]. For example, a web document can be represented by its URL and by the words on the page, and images of a 3D object can be taken from different viewpoints [Sun2013]. In multi-view datasets, the consistency and complementary information among different views need to be exploited for the learning task at hand, such as classification and clustering [Zhao, Ding, and Fu2017]. Nowadays, multi-view learning has been widely studied in areas such as machine learning, data mining and artificial intelligence [Xing et al.2017, Tulsiani et al.2017, Nie et al.2018].

Multi-view Clustering (MVC), as one of the most important tasks of multi-view learning, has attracted considerable attention because it avoids the expensive requirement of data labeling [Bickel and Scheffer2004, Fan et al.2017]. The goal of MVC is to make full use of both the consistency and the complementary information among multi-view data to obtain better clustering results. To date, a variety of related methods have been proposed; they can roughly be divided into two main categories: subspace approaches [Ding and Fu2014, Cao et al.2015, Li2016] and spectral approaches [Kumar and Daumé2011, Tao et al.2017, Ren et al.2018]. The former try to learn a shared latent subspace in which views of different dimensionalities become comparable, whereas the latter aim to learn a unified similarity matrix among multi-view data by extending single-view spectral clustering approaches.

A common assumption of most of the above methods is that all the views are complete, meaning that every instance appears in every view and instances correspond to each other across views. However, in real-world applications some views often suffer from missing instances, so an instance in one view does not necessarily have a counterpart in another view. Such incompleteness brings great difficulty to MVC. Clustering on such incomplete multi-view datasets is called incomplete multi-view clustering (IMC). So far, many approaches have been developed for it [Li, Jiang, and Zhou2014, Shao, He, and Philip2015, Zhao, Liu, and Fu2016, Liu et al.2017, Wen et al.2018, Hu and Chen2018]. Nevertheless, almost all of these approaches are offline and can hardly handle large-scale datasets because of their high time and space complexities.

In the age of data explosion, the size of each view's data is often huge. For example, hundreds of hours of video are uploaded to YouTube every minute, and such video appears in multiple modalities or views, namely audio, text and visual views. As another example, in Web-scale data mining one may encounter billions of Web pages, and the feature dimensionality can be equally enormous. Data at such a scale can hardly be stored in memory or processed in an offline way. To the best of our knowledge, only one method, OMVC, has been proposed for the large-scale IMC problem [Shao et al.2016]. However, OMVC still suffers from problems in several aspects, such as normalizing the data matrix, handling missing instances, and determining convergence. Therefore, solving the large-scale IMC problem remains a pressing need.

In this paper, we propose a One-Pass Incomplete Multi-view Clustering framework (OPIMC) for large-scale multi-view datasets based on subspace learning. OPIMC addresses the IMC problem with the help of Regularized Matrix Factorization (RMF) [Gunasekar et al.2017, Qi et al.2017] and Weighted Matrix Factorization (WMF) [Kim and Choi2009]. Furthermore, OPIMC directly obtains clustering results and effectively determines the termination of iteration by introducing two global statistics, which yields a prominent reduction in clustering time.

In the following, we first give a brief review of related work. We then detail our OPIMC approach and its optimization, report the experimental results, and finally conclude the paper.

Related Work

Multi-view Clustering. As mentioned in the introduction, a variety of multi-view clustering methods have been proposed; they can roughly be divided into two categories: subspace approaches [Li2016] and spectral approaches [Ren et al.2018]. In contrast with the spectral approaches, the subspace approaches have become a main paradigm due to their lower time and space complexities; they try to learn a latent subspace in which views of different dimensionalities are close to each other. Among the subspace approaches, nonnegative matrix factorization (NMF) [Lee and Seung1999] has become a dominating technique because it can conveniently be applied to clustering, and many NMF-based methods and variants have subsequently been proposed. For example, [Liu et al.2013] establishes a joint NMF model for multi-view clustering, which performs NMF on each view and pushes the low-dimensional representation of each view towards a common consensus. Manifold learning has also been considered for the multi-view clustering problem: by imposing a manifold regularization on the NMF objective function of each individual view [Wang, Yang, and Li2016, Zong et al.2017], such methods achieve relatively better results. These are just a few examples; for more related work on MVC, please refer to [Chao, Sun, and Bi2017, Sun2013].
Incomplete Multi-view Clustering. Most previous studies on multi-view clustering assume that all instances are present in all views. However, this assumption does not always hold in real-world applications. For example, in a camera network a camera may temporarily fail or be blocked by some object, leaving instances missing and thus making the multi-view data incomplete. Recently, some incomplete multi-view clustering methods have been proposed. For instance, [Li, Jiang, and Zhou2014] proposes PVC, which establishes a latent subspace where the instances corresponding to the same object in different views are close to each other and similar instances within the same view are well grouped, by utilizing instance alignment information. Besides, a method for clustering more than two incomplete views, MIC, is proposed in [Shao, He, and Philip2015]: it first fills the missing instances with the average feature values in each incomplete view, then handles the problem with the help of weighted NMF and ℓ2,1-norm regularization [Kong, Ding, and Huang2011, Wu et al.2018]. Moreover, [Hu and Chen2018] proposes DAIMC, which extends PVC to the multi-view case by utilizing instance missing information and aligning the clustering centers among different views simultaneously.
Online Incomplete Multi-view Clustering. In the age of data explosion, multi-view data tend to be large-scale. However, the above approaches for incomplete multi-view data are almost all offline and can hardly handle large-scale datasets due to their high time and space complexities. Online learning, as an efficient strategy for building large-scale learning systems, has attracted much attention in the past years [Nguyen et al.2015, Wan, Wei, and Zhang2018]. As a special case of online learning, one-pass learning (OPL) [Zhu, Ting, and Zhou2017] has the benefit of requiring only one pass over the data and is particularly useful and efficient for streaming data. To the best of our knowledge, only one method, OMVC [Shao et al.2016], extends MIC to the online case by combining online learning and incomplete multi-view clustering. Nevertheless, OMVC still suffers from problems in the following aspects:
1. Normalization of the dataset: OMVC normalizes the multi-view datasets by summing all elements of the data, which is unreasonable in online learning.

2. Imputation of missing instances: Due to the mechanism of online learning, it is difficult to obtain the average feature values of each incomplete view for filling the missing instances.

3. Efficiency: OMVC works by learning a consensus latent feature matrix across all the views and then applying K-means to this matrix to get the clustering results, which brings high computational cost when both the instance number and the category number are large.

4. Termination determination for iterative convergence: OMVC terminates the iteration process by using all the scanned instances, which is not only unreasonable but also time-consuming.

Considering these disadvantages of OMVC, we propose a more general and feasible incomplete multi-view clustering algorithm, which deals with large-scale incomplete multi-view data efficiently and effectively.

Proposed Approach

Preliminaries

Given an input data matrix $X \in \mathbb{R}^{D \times N}$, where each column of X is an instance, Regularized Matrix Factorization (RMF) aims to approximately factorize X into two matrices U and V under Frobenius-norm regularization on U and V, leading to the following minimization problem:

\min_{U,V} \|X - UV\|_F^2 + \lambda \left( \|U\|_F^2 + \|V\|_F^2 \right)    (1)

where $U \in \mathbb{R}^{D \times K}$ and $V \in \mathbb{R}^{K \times N}$ are the low-rank regularized factor matrices, K denotes the dimension of the subspace, and λ is a nonnegative parameter. Obviously, this is a biconvex problem, so we can easily derive the following updating rules to find a locally optimal solution:
Update U (while fixing V) using the rule

U \leftarrow X V^{\top} \left( V V^{\top} + \lambda I \right)^{-1}    (2)

Update V (while fixing U) using

V \leftarrow \left( U^{\top} U + \lambda I \right)^{-1} U^{\top} X    (3)
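For concreteness, a minimal NumPy sketch of these alternating updates might look as follows (the function name, default values and random initialization are our own choices, not prescribed by the paper):

```python
import numpy as np

def rmf(X, K, lam=0.1, iters=50, seed=0):
    """Alternating RMF updates: X (D x N) ~= U (D x K) @ V (K x N)."""
    rng = np.random.default_rng(seed)
    D, N = X.shape
    V = rng.random((K, N))
    I = np.eye(K)
    for _ in range(iters):
        # Eq. (2): U <- X V^T (V V^T + lam*I)^{-1}
        U = X @ V.T @ np.linalg.inv(V @ V.T + lam * I)
        # Eq. (3): V <- (U^T U + lam*I)^{-1} U^T X
        V = np.linalg.inv(U.T @ U + lam * I) @ U.T @ X
    return U, V
```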

Weighted Matrix Factorization (WMF), as one of the most commonly used methods for matrices with missing entries, is widely used in recommender systems [Xue et al.2017]. The WMF optimization problem is formulated as:

\min_{U,V} \left\| W \odot \left( X - UV \right) \right\|_F^2    (4)

where $\odot$ denotes the element-wise product, W contains entries only in $\{0, 1\}$, and $W_{ij} = 0$ when the entry $X_{ij}$ is missing.
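Under this reconstruction of (4), the masked objective is straightforward to evaluate; a small sketch, with names of our own choosing:

```python
import numpy as np

def wmf_loss(X, U, V, W):
    """Weighted MF objective: only observed entries (W_ij = 1) contribute."""
    R = W * (X - U @ V)   # element-wise mask zeroes out missing entries
    return np.sum(R ** 2)
```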

One-Pass Incomplete Multi-view Clustering

Given a set of input incomplete multi-view data matrices $\{X^{(v)} \in \mathbb{R}^{D_v \times N}\}_{v=1}^{n_v}$, where $D_v$ and N represent the dimensionality of the v-th view and the instance number respectively. For convenience of description, the missing instances of individual views are filled with 0. We introduce an indicator matrix $M \in \{0,1\}^{n_v \times N}$ for this incomplete multi-view dataset:

M_{vi} = \begin{cases} 1, & \text{if the } i\text{-th instance is present in the } v\text{-th view} \\ 0, & \text{otherwise} \end{cases}    (5)

where each row of M represents the instance presence or absence for the corresponding view. From the matrix M, we can easily obtain the missing information of individual views and the alignment information across different views.
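As an illustration, a toy sketch of building M from per-view presence masks (the helper name is ours; the last line anticipates the diagonal weight matrix defined in (8) below):

```python
import numpy as np

def build_indicator(present):
    """Stack per-view presence masks (0/1, length N) into M (n_v x N)."""
    return np.vstack([np.asarray(p, dtype=int) for p in present])

# Toy example: 2 views, 4 instances; instance 3 misses view 1, instance 2 misses view 2.
M = build_indicator([[1, 1, 0, 1],
                     [1, 0, 1, 1]])
W1 = np.diag(M[0])   # diagonal weight matrix of view 1, cf. Eq. (8)
```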

For the v-th view, inspired by Regularized Matrix Factorization, we factorize the data matrix $X^{(v)}$ into two matrices $U^{(v)}$ and V, where $U^{(v)} \in \mathbb{R}^{D_v \times K}$, $V \in \mathbb{R}^{K \times N}$, and K denotes the dimension of the subspace, set equal to the number of categories of the dataset. Furthermore, in order to avoid the third problem of OMVC, we apply a 1-of-K coding constraint to V, which makes $\|V\|_F^2 = N$ a constant so that the regularization term on V can be dropped. Thus we get the following model:

\min_{U^{(v)}, V} \left\| X^{(v)} - U^{(v)} V \right\|_F^2 + \lambda \left\| U^{(v)} \right\|_F^2, \quad \text{s.t. } V \in \{0,1\}^{K \times N}, \; \textstyle\sum_k V_{kj} = 1    (6)

For a multi-view dataset, (6) does not consider the consistency information across different views. To address this issue, we assume that different views have distinct matrices $U^{(v)}$ but share the same matrix V. Meanwhile, we handle the incompleteness of each view by incorporating the instance missing information with the help of Weighted Matrix Factorization. Thus, (6) is rewritten as:

\min_{\{U^{(v)}\}, V} \sum_{v=1}^{n_v} \left\| \left( X^{(v)} - U^{(v)} V \right) W^{(v)} \right\|_F^2 + \lambda \sum_{v=1}^{n_v} \left\| U^{(v)} \right\|_F^2, \quad \text{s.t. } V \in \{0,1\}^{K \times N}, \; \textstyle\sum_k V_{kj} = 1    (7)

where the weight matrix $W^{(v)} \in \mathbb{R}^{N \times N}$ is the diagonal matrix defined as:

W^{(v)} = \mathrm{diag}\left( M_{v,1}, M_{v,2}, \ldots, M_{v,N} \right)    (8)

In real-world applications, the data matrices may be too large to fit into memory. We therefore propose to solve the above optimization problem in an online fashion with low computational and storage complexities. We assume that the data of each view arrive chunk by chunk, each chunk being of size s. Thus the objective function can be decomposed as:

\min_{\{U^{(v)}\}, \{V_t\}} \sum_{t=1}^{N/s} \sum_{v=1}^{n_v} \left\| \left( X_t^{(v)} - U^{(v)} V_t \right) W_t^{(v)} \right\|_F^2 + \lambda \sum_{v=1}^{n_v} \left\| U^{(v)} \right\|_F^2    (9)

where $X_t^{(v)}$ is the t-th data chunk in the v-th view, $V_t$ is the clustering indicator matrix for the t-th data chunk, and $W_t^{(v)}$ is the diagonal weight matrix for the t-th data chunk.

Optimization

From (9), we can see that the objective is biconvex with respect to $U^{(v)}$ and $V_t$ at each time t, so we update $U^{(v)}$ and $V_t$ in an alternating way. Before that, we first describe the normalization of the dataset.
Normalization:

In multi-view data, there are scaling differences among views. In order to reduce these differences and improve the clustering results, appropriate normalization is necessary. However, due to the mechanism of online learning, it is difficult to normalize the dataset using global information such as the mean and variance. In this paper, we instead map all the instances onto a unit hypersphere, i.e., each instance x is rescaled to $x / \|x\|_2$ so that $\|x\|_2 = 1$.
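A one-line sketch of this column-wise normalization (the guard against zero-filled missing columns is our addition):

```python
import numpy as np

def normalize_chunk(X_t):
    """Map each instance (column) onto the unit hypersphere: x <- x / ||x||_2."""
    norms = np.linalg.norm(X_t, axis=0, keepdims=True)
    return X_t / np.maximum(norms, 1e-12)   # avoid dividing zero-filled columns by 0
```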

Next, we describe the following subproblems for the OPIMC optimization problem.
Subproblem of $U^{(v)}$. With $V_t$ fixed, for each view v, the partial derivative of the objective in (9) with respect to $U^{(v)}$ is

\frac{\partial \mathcal{L}}{\partial U^{(v)}} = 2 \sum_{i=1}^{t} \left( U^{(v)} V_i - X_i^{(v)} \right) W_i^{(v)} \left( W_i^{(v)} \right)^{\top} V_i^{\top} + 2 \lambda U^{(v)}    (10)

From the definition of $W_i^{(v)}$, we can see that $W_i^{(v)} (W_i^{(v)})^{\top} = W_i^{(v)}$. Meanwhile, due to the zero filling of the dataset, $X_i^{(v)} W_i^{(v)} = X_i^{(v)}$. Setting $\partial \mathcal{L} / \partial U^{(v)} = 0$, we get the following updating rule:

U^{(v)} = \left( \sum_{i=1}^{t} X_i^{(v)} V_i^{\top} \right) \left( \sum_{i=1}^{t} V_i W_i^{(v)} V_i^{\top} + \lambda I \right)^{-1}    (11)

Here, for the sake of convenience, we introduce two global statistics $R_t^{(v)}$ and $T_t^{(v)}$ as below:

R_t^{(v)} = \sum_{i=1}^{t} X_i^{(v)} V_i^{\top}, \qquad T_t^{(v)} = \sum_{i=1}^{t} V_i W_i^{(v)} V_i^{\top}    (12)

Consequently, (11) can be rewritten as:

U^{(v)} = R_t^{(v)} \left( T_t^{(v)} + \lambda I \right)^{-1}    (13)

Then, when a new chunk arrives, the matrices $R_t^{(v)}$ and $T_t^{(v)}$ can be updated easily as follows:

R_t^{(v)} = R_{t-1}^{(v)} + X_t^{(v)} V_t^{\top}, \qquad T_t^{(v)} = T_{t-1}^{(v)} + V_t W_t^{(v)} V_t^{\top}    (14)
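A compact sketch of the incremental updates (12)-(14) and the resulting U-update (13) for a single view, under the shapes defined above (helper names are ours):

```python
import numpy as np

def update_stats(R, T, X_t, V_t, w_t):
    """Eq. (14) for one view: accumulate the two global statistics.
    X_t: D_v x s chunk (missing columns zero-filled); V_t: K x s one-hot
    indicators; w_t: length-s 0/1 presence vector (the diagonal of W_t)."""
    R = R + X_t @ V_t.T            # X_t W_t = X_t thanks to the zero filling
    T = T + (V_t * w_t) @ V_t.T    # V_t diag(w_t) V_t^T
    return R, T

def update_U(R, T, lam):
    """Eq. (13): U = R (T + lam*I)^{-1}."""
    K = T.shape[0]
    return R @ np.linalg.inv(T + lam * np.eye(K))
```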

Subproblem of $V_t$. With $\{U^{(v)}\}$ fixed and inspired by K-means, we introduce a matrix $D \in \mathbb{R}^{s \times K}$ to record the distances between all the instances (the columns of $X_t^{(v)}$) and all the clustering centers (the columns of $U^{(v)}$) accumulated over all the views:

D_{jk} = \sum_{v=1}^{n_v} M_{v,j} \left\| x_j^{(v)} - u_k^{(v)} \right\|_2^2    (15)

where $x_j^{(v)}$ denotes the j-th instance of $X_t^{(v)}$, $u_k^{(v)}$ denotes the k-th column of $U^{(v)}$, and, with slight abuse of notation, $M_{v,j}$ indicates the presence of the j-th instance of the current chunk in view v. Note that the indexes of the row-minimum values of D give the clustering indicators of the corresponding instances. Thus, we get the following updating rule for $V_t$:

\left( V_t \right)_{kj} = \begin{cases} 1, & \text{if } k = \arg\min_{k'} D_{jk'} \\ 0, & \text{otherwise} \end{cases}    (16)

Note that (16) can be implemented with just two Matlab instructions.
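In NumPy, the same row-minimum assignment can be sketched as follows (a hedged equivalent of the two Matlab instructions; the function name and mask layout are our assumptions):

```python
import numpy as np

def assign_chunk(X_views, U_views, M_t):
    """Eqs. (15)-(16): per-instance distances to all K centers, summed over
    the views where the instance is present, then 1-of-K assignment.
    X_views[v]: D_v x s chunk; U_views[v]: D_v x K centers; M_t: n_v x s mask."""
    s = X_views[0].shape[1]
    K = U_views[0].shape[1]
    D = np.zeros((s, K))
    for v, (X, U) in enumerate(zip(X_views, U_views)):
        diff = X[:, :, None] - U[:, None, :]          # D_v x s x K
        D += M_t[v][:, None] * np.sum(diff ** 2, axis=0)
    idx = D.argmin(axis=1)                            # row-minimum indices
    V_t = np.zeros((K, s))
    V_t[idx, np.arange(s)] = 1.0                      # 1-of-K coding
    return V_t
```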

With the above procedure, we have solved the first three problems of OMVC [Shao et al.2016]. In the following we present the solution to OMVC's fourth problem, termination determination for iterative convergence. By unfolding the objective function (9), we get

\mathcal{L}_t = \sum_{v=1}^{n_v} \left[ (1 - ratio)\, ts - 2\, \mathrm{tr}\!\left( (U^{(v)})^{\top} R_t^{(v)} \right) + \mathrm{tr}\!\left( (U^{(v)})^{\top} U^{(v)} T_t^{(v)} \right) + \lambda \left\| U^{(v)} \right\|_F^2 \right]    (17)

where ratio denotes the incomplete rate of the dataset (so that $(1-ratio)ts$ counts the squared norms of the observed, unit-normalized instances) and $\mathrm{tr}(\cdot)$ denotes the matrix trace. From (17), by recording the statistics R and T, we can easily obtain the loss of all the scanned instances. Moreover, the memory required by this operation is very small, i.e., $O((D_v + K)K)$ per view, independent of the number of scanned instances.
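Assuming the reconstruction of (17) above, the loss of all scanned instances can be computed from R and T alone; a sketch (names are ours):

```python
import numpy as np

def scanned_loss(U_views, R_views, T_views, lam, n_scanned, ratio):
    """Loss over all scanned instances from the global statistics, cf. Eq. (17).
    n_scanned: instances seen so far (t*s); ratio: incomplete rate."""
    loss = 0.0
    for U, R, T in zip(U_views, R_views, T_views):
        loss += ((1.0 - ratio) * n_scanned        # ||X||_F^2 of unit-norm instances
                 - 2.0 * np.trace(U.T @ R)
                 + np.trace(U.T @ U @ T)
                 + lam * np.sum(U ** 2))
    return loss
```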

It is worth noting that for the first chunk, because of the random initialization of U and V and the small size of the chunk, some clustering centers are likely to degenerate while updating U. To prevent this, in the iterative update of the first chunk we fill the degenerate clustering centers with the chunk's average values, while in the iterative updates of the other chunks we fill them with the last corresponding values. The experimental results verify the effectiveness of this operation.

The entire optimization procedure for OPIMC is summarized in Algorithm 1.

0:  Data matrices of the incomplete views $\{X^{(v)}\}_{v=1}^{n_v}$, weight matrices $\{W^{(v)}\}_{v=1}^{n_v}$, parameter λ, number of clusters K.
1:  $R_0^{(v)} = 0$, $T_0^{(v)} = 0$ for each view v.
2:  for $t = 1, \ldots, N/s$ do
3:     Draw the chunk $X_t^{(v)}$ for all the views.
4:     if $t = 1$ then
5:        Initialize $U^{(v)}$ and $V_1$ with random values.
6:     else
7:        Initialize $V_t$ according to Eq.(15)-(16).
8:     end if
9:     repeat
10:        for $v = 1, \ldots, n_v$ do
11:           Update $U^{(v)}$ according to Eq.(11)-(13).
12:        end for
13:        Fill the degenerate clustering centers.
14:        Update $V_t$ according to Eq.(15)-(16).
15:     until $V_t$ converges
16:     Update $R_t^{(v)}$ and $T_t^{(v)}$ according to Eq.(14).
17:  end for
18:  Get the clustering results according to V.
19:  return $\{U^{(v)}\}$ and the clustering results.
Algorithm 1 One-Pass Incomplete Multi-view Clustering
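Putting the pieces together, a minimal one-pass driver in the spirit of Algorithm 1 might read as follows (it reuses the assign_chunk and update_stats sketches above; the random initialization of $V_1$ and the degenerate-center filling are simplified away, so this is a sketch rather than a faithful implementation):

```python
import numpy as np

def opimc(chunk_stream, K, lam=0.01, inner_iters=10, seed=0):
    """One-pass OPIMC sketch. chunk_stream yields (X_views, M_t):
    zero-filled D_v x s chunks per view and the n_v x s presence mask."""
    rng = np.random.default_rng(seed)
    labels = []
    for t, (X_views, M_t) in enumerate(chunk_stream, start=1):
        M_t = np.asarray(M_t)
        if t == 1:
            n_v = len(X_views)
            R = [np.zeros((X.shape[0], K)) for X in X_views]
            T = [np.zeros((K, K)) for _ in range(n_v)]
            U = [rng.random((X.shape[0], K)) for X in X_views]
        V_t = assign_chunk(X_views, U, M_t)
        for _ in range(inner_iters):                  # the repeat-until loop
            for v in range(n_v):                      # Eqs. (11)-(13)
                U[v] = (R[v] + X_views[v] @ V_t.T) @ np.linalg.inv(
                    T[v] + (V_t * M_t[v]) @ V_t.T + lam * np.eye(K))
            V_t = assign_chunk(X_views, U, M_t)       # Eqs. (15)-(16)
        for v in range(n_v):                          # Eq. (14)
            R[v], T[v] = update_stats(R[v], T[v], X_views[v], V_t, M_t[v])
        labels.append(V_t.argmax(axis=0))             # clustering from V_t
    return U, np.concatenate(labels)
```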

Convergence

The convergence of the OPIMC can be proved by the following theorem.
Theorem 1. The objective function value of Eq.(9) is nonincreasing under the optimization procedure in Algorithm 1.
Proof of Theorem 1: As shown in Algorithm 1, the optimization of OPIMC can be divided into two subproblems, each of which is convex w.r.t. one variable. Thus, by finding the optimal solution of each subproblem alternately, our algorithm can at least find a locally optimal solution.

Complexity

Time Complexity: The computational complexity of the OPIMC algorithm is dominated by matrix multiplications and inversions. We discuss it in two parts: optimizing $U^{(v)}$ and optimizing $V_t$. Assuming the subspace dimension K is small relative to the chunk size s and the view dimensionalities $D_v$, the time complexities of updating $U^{(v)}$ and $V_t$ on one chunk are both $O(D_v s K)$. Letting T and D denote the iteration number of the inner loop and the largest dimensionality over all the views respectively, and accounting for the $N/s$ chunks, the overall computational complexity is $O(n_v T D K N)$. It is worth noting that through experiments we find that OPIMC converges quickly, so a small number of inner iterations is enough.
Space Complexity: The proposed OPIMC algorithm only requires memory for the current chunk together with the factor matrices and global statistics, i.e., $O\left(\sum_v (D_v s + D_v K + K^2)\right)$, independent of the total instance number N. By recording the two global statistics R and T, OPIMC can easily update U and V and determine convergence over the scanned instances.

Experiment

DataSets

In this paper, we conduct experiments on four real-world multi-view datasets, comprising two small datasets and two large datasets, where Reuters and Youtube are known to be the largest benchmark datasets currently used for multi-view clustering experiments. The important statistics of these datasets are given in Table 1.

Dataset   Instance   View (dimensionality)   Cluster
WebKB [http://vikas.sindhwani.org/manifoldregularization.html]   1051   Content (3,000), Anchor text (1,840)   2
Digit [http://archive.ics.uci.edu/ml/datasets/Multiple+Features]   2000   Fourier (76), Profile (216), Karhunen-Loeve (64), Pixel (240), Zernike (47)   10
Reuters [http://archive.ics.uci.edu/ml/machine-learning-databases/00259/]   111740   English (21,531), French (24,893), German (34,279), Spanish (15,506), Italian (11,547)   6
Youtube [https://archive.ics.uci.edu/ml/datasets/YouTube+Multiview+Video+Games+Dataset]   92457   Vision (512), Audio (2,000), Text (1,000)   31
Table 1: Statistics of the datasets

Compared Methods

We compare OPIMC with several state-of-the-art methods.
OPIMC: the one-pass incomplete multi-view clustering method proposed in this paper. The parameter λ is tuned by grid search over a candidate set.
IMC: as shown in (7), IMC is the offline counterpart of OPIMC.
OMVC: an online incomplete multi-view clustering method proposed in [Shao et al.2016]. To facilitate comparison, we set the same weight value for all the views, and tune its two regularization parameters by grid search over candidate sets.
MultiNMF: a classic offline method for multi-view clustering proposed in [Liu et al.2013]. Its regularization parameter is likewise tuned by grid search.
ONMF: an online document clustering algorithm for a single view using NMF [Wang et al.2011]. In order to apply ONMF, we simply concatenate all the normalized views into one big single view. We compare two versions of ONMF from the original paper: ONMFI is the original algorithm that calculates the exact inverse of the Hessian matrix, while ONMFDA uses a diagonal approximation of the inverse of the Hessian matrix.

Setup

To simulate the incomplete-view setting, we randomly remove some instances from each view. On the WebKB and Digit datasets, we set the incomplete rate to 0.3 and 0.4 for two groups of experiments, and on the Reuters and Youtube datasets we set the incomplete rate to 0.4. Meanwhile, we shuffle the order of the samples to simulate a more realistic online scenario. The chunk size s for the online methods is set to 50 for the small datasets and 2000 for the large datasets. It is worth mentioning that MultiNMF and ONMF can only deal with complete multi-view datasets; for the completeness of the experiments, we first fill the missing instances in each incomplete view with average feature values.

The normalized mutual information (NMI) and clustering accuracy (AC) evaluation measures are used in this paper. For the online and one-pass methods, in order to compare more comprehensively with OMVC and ONMF, we also run the experiments for 10 passes and report both NMI and AC for the different passes. The experimental results are shown in Figure 1.

Figure 1: Clustering performance (AC and NMI) on WebKB and Digit with incomplete rates 0.3 and 0.4, and on Reuters and Youtube with incomplete rate 0.4, for different numbers of passes.

Results

Figure 1 reports the performance of clustering on WebKB, Digit, Reuters and Youtube datasets for different passes with different incomplete rates. From Figure 1, we can get the following results.

From Figures 1(a) and 1(b), we can see that on the WebKB dataset the offline method IMC achieves the best performance; the proposed OPIMC gets close to it after just two passes and outperforms the other four comparison methods. The same phenomena can be observed in Figures 1(c), 1(d), 1(g) and 1(h) on the Digit dataset.

From Figures 1(e) and 1(f), we can see that OPIMC performs poorly on the WebKB dataset in the first few passes at the incomplete rate of 0.4. The main reasons are the large incomplete rate and the small chunk size, which make the matrices hard to learn. However, after a few passes, through continuous correction by the global information, the clustering performance on WebKB grows rapidly.

On the large-scale Reuters dataset, Figures 1(i) and 1(j) show that OPIMC gets the best results after only one pass, but the clustering performance decreases as the pass number increases.

From Figures 1(k) and 1(l), we find that on the Youtube dataset OPIMC produces excellent results, much better than those of the other methods. This fully demonstrates the effectiveness of OPIMC.
Complexity Study: All the experiments are run on a computer with an Intel(R) Core(TM) i5-3470 @ 3.20GHz CPU and 16.0 GB RAM, using Matlab R2013a. The complexity study results are reported in Table 2.

Method   WebKB   Digit   Reuters   Youtube
OPIMC/Pass   0.25   0.56   27.89   26.76
OMVC/Pass   23.37   34.76   3753.02   2064.83
ONMFI/Pass   18.69   31.16   2887.12   1657.22
ONMFDA/Pass   20.09   30.63   2224.44   1307.14
IMC   2.91   6.31   /   /
MultiNMF   149.7   647.2   /   /
Table 2: Run time (in seconds) for the different methods
Figure 2: Parameter studies on the WebKB, Digit, Reuters and Youtube datasets, where the incomplete rate is set to 0.3 for the WebKB and Digit experiments and to 0.4 for the Reuters and Youtube experiments.

From Table 2, we can make some observations. Firstly, OMVC gets better results than ONMFI and ONMFDA, but the latter two methods run faster than OMVC. Secondly, the offline method IMC runs faster than all the other methods except OPIMC. Thirdly, compared with OMVC, OPIMC takes much less running time (only about 1% of OMVC's running time in Table 2) while obtaining relatively better clustering results. All these observations demonstrate the efficiency and effectiveness of our model.
Parameter Study: We conduct the parameter experiments on the four aforementioned datasets for just one pass. We set the incomplete rate to 0.3 for the small datasets and 0.4 for the large-scale datasets, and report the clustering performance of OPIMC as λ ranges over a set of candidate values. The results are shown in Figure 2.

From Figure 2, we can see that the value of λ giving the best clustering results differs across the WebKB, Digit, Reuters and Youtube datasets.
Convergence Study: The convergence experiments are conducted on the four aforementioned datasets for 20 passes, with the incomplete rate set to 0.4 for all the datasets. According to the definitions of $R_t^{(v)}$ and $T_t^{(v)}$, and inspired by ONMF and OMVC, for the first pass the average loss is defined as follows:

\mathrm{AvgLoss}_t = \frac{1}{ts} \sum_{v=1}^{n_v} \ell_t^{(v)}    (18)

where

\ell_t^{(v)} = -2\, \mathrm{tr}\!\left( (U^{(v)})^{\top} R_t^{(v)} \right) + \mathrm{tr}\!\left( (U^{(v)})^{\top} U^{(v)} T_t^{(v)} \right) + \lambda \left\| U^{(v)} \right\|_F^2    (19)

And for the other passes, since we can easily count the loss of all the scanned instances, we define the average loss as follows:

\mathrm{AvgLoss} = \frac{1}{N} \sum_{v=1}^{n_v} \ell^{(v)}    (20)

where $\ell^{(v)}$ is computed as in (19) with the statistics accumulated over all N instances.

Figure 3: Convergence studies on the WebKB, Digit, Reuters and Youtube datasets, where the incomplete rate is set to 0.4, the experiments are run for 20 passes, and the corresponding average loss is recorded. It is worth mentioning that since we ignore the constant loss term contributed by $\|X^{(v)}\|_F^2$, the average loss can be negative.

We concatenate the losses of all passes and obtain the results shown in Figure 3.

From Figure 3, we can see that, as the training goes on, the average loss converges gradually. Corresponding to Figure 1, we can observe that when the average loss converges, both NMI and AC reach stable values.
Block Size Study: In OPIMC, the size of the data chunk is a vitally important parameter. In order to study the performance of OPIMC with different chunk sizes, we conduct a block size study on the Digit dataset. We set the incomplete rate to 0.4 and report the clustering performance of OPIMC as the chunk size s ranges over a set of candidate values. We run the experiment for 10 passes; the results are shown in Figure 4.

From Figure 4, we can see that, in general, the bigger the block size, the better the clustering results; once the chunk size is sufficiently large, NMI and AC reach high values. However, a larger chunk size incurs a larger space complexity.
Clustering Center Degradation Study: In this experiment, we verify the validity of filling the degraded cluster centers. We conduct the experiment on the Digit dataset with an incomplete rate of 0.4. We do not shuffle the instance order of the Digit dataset, and we implement OPIMC with (OPIMC-F) and without (OPIMC-NF) filling the degraded cluster centers. We run the experiment for 10 passes; the results, shown in Figure 5, directly illustrate the effect of filling the degenerate cluster centers.

Conclusion

In this paper, we propose an efficient and effective method for the large-scale incomplete multi-view clustering problem by adequately exploiting the instance missing information with the help of regularized matrix factorization and weighted matrix factorization. By introducing two global statistics, OPIMC directly obtains clustering results and effectively determines the termination of the iteration process. Experimental results on four real-world multi-view datasets demonstrate the efficiency and effectiveness of our method. In the future, we will focus on the emergence of new classes and on the robustness of the algorithm.

Figure 4: Block size study on the Digit dataset, where the incomplete rate is set to 0.4 and the experiment is run for 10 passes.
Figure 5: Clustering center degradation study on the Digit dataset, where OPIMC-F and OPIMC-NF denote OPIMC with and without filling the degraded cluster centers, respectively. The incomplete rate is set to 0.4 and the experiment is run for 10 passes.

Acknowledgments

This work is supported in part by the NSFC under Grant No. 61672281, and the Key Program of NSFC under Grant No. 61732006.

References

  • [Bickel and Scheffer2004] Bickel, S., and Scheffer, T. 2004. Multi-view clustering. In ICDM, 19–26.
  • [Blum and Mitchell1998] Blum, A., and Mitchell, T. 1998. Combining labeled and unlabeled data with co-training. In COLT, 92–100.
  • [Cao et al.2015] Cao, X.; Zhang, C.; Fu, H.; Liu, S.; and Zhang, H. 2015. Diversity-induced multi-view subspace clustering. In CVPR, 586–594.
  • [Chao, Sun, and Bi2017] Chao, G.; Sun, S.; and Bi, J. 2017. A survey on multi-view clustering. arXiv preprint arXiv:1712.06246.
  • [Ding and Fu2014] Ding, Z., and Fu, Y. 2014. Low-rank common subspace for multi-view learning. In ICDM, 110–119.
  • [Fan et al.2017] Fan, Y.; Liang, J.; He, R.; Hu, B.-G.; and Lyu, S. 2017. Robust localized multi-view subspace clustering. arXiv preprint arXiv:1705.07777.
  • [Gunasekar et al.2017] Gunasekar, S.; Woodworth, B. E.; Bhojanapalli, S.; Neyshabur, B.; and Srebro, N. 2017. Implicit regularization in matrix factorization. In NIPS, 6151–6159.
  • [Hu and Chen2018] Hu, M., and Chen, S. 2018. Doubly aligned incomplete multi-view clustering. In IJCAI, 2262–2268.
  • [Kim and Choi2009] Kim, Y.-D., and Choi, S. 2009. Weighted nonnegative matrix factorization. In ICASSP, 1541–1544.
  • [Kong, Ding, and Huang2011] Kong, D.; Ding, C.; and Huang, H. 2011. Robust nonnegative matrix factorization using ℓ2,1-norm. In CIKM, 673–682.
  • [Kumar and Daumé2011] Kumar, A., and Daumé, H. 2011. A co-training approach for multi-view spectral clustering. In ICML, 393–400.
  • [Lee and Seung1999] Lee, D. D., and Seung, H. S. 1999. Learning the parts of objects by non-negative matrix factorization. Nature 401(6755):788–791.
  • [Li, Jiang, and Zhou2014] Li, S.-Y.; Jiang, Y.; and Zhou, Z.-H. 2014. Partial multi-view clustering. In AAAI, 1968–1974.
  • [Li2016] Li, Y. 2016. Advances in multi-view matrix factorizations. In IJCNN, 3793–3800.
  • [Liu et al.2013] Liu, J.; Wang, C.; Gao, J.; and Han, J. 2013. Multi-view clustering via joint nonnegative matrix factorization. In SDM, 252–260.
  • [Liu et al.2017] Liu, X.; Li, M.; Wang, L.; Dou, Y.; Yin, J.; and Zhu, E. 2017. Multiple kernel k-means with incomplete kernels. In AAAI, 2259–2265.
  • [Nguyen et al.2015] Nguyen, T. D.; Le, T.; Bui, H.; and Phung, D. Q. 2015. Large-scale online kernel learning with random feature reparameterization. In IJCAI, 2750–2756.
  • [Nie et al.2018] Nie, F.; Cai, G.; Li, J.; and Li, X. 2018. Auto-weighted multi-view learning for image clustering and semi-supervised classification. IEEE Transactions on Image Processing 27(3):1501–1511.
  • [Qi et al.2017] Qi, M.; Wang, T.; Liu, F.; Zhang, B.; Wang, J.; and Yi, Y. 2017. Unsupervised feature selection by regularized matrix factorization. Neurocomputing 593–610.
  • [Ren et al.2018] Ren, P.; Xiao, Y.; Xu, P.; Guo, J.; Chen, X.; Wang, X.; and Fang, D. 2018. Robust auto-weighted multi-view clustering. In IJCAI, 2644–2650.
  • [Shao et al.2016] Shao, W.; He, L.; Lu, C.-T.; and Yu, P. S. 2016. Online multi-view clustering with incomplete views. In ICBDA, 1012–1017.
  • [Shao, He, and Philip2015] Shao, W.; He, L.; and Philip, S. Y. 2015. Multiple incomplete views clustering via weighted nonnegative matrix factorization with ℓ2,1 regularization. In ECML PKDD, 318–334.
  • [Son et al.2017] Son, J. W.; Jeon, J.; Lee, A.; and Kim, S.-J. 2017. Spectral clustering with brainstorming process for multi-view data. In AAAI, 2548–2554.
  • [Sun2013] Sun, S. 2013. A survey of multi-view machine learning. Neural Computing and Applications 23(7-8):2031–2038.
  • [Tao et al.2017] Tao, Z.; Liu, H.; Li, S.; Ding, Z.; and Fu, Y. 2017. From ensemble clustering to multi-view clustering. In IJCAI, 2843–2849.
  • [Tulsiani et al.2017] Tulsiani, S.; Zhou, T.; Efros, A. A.; and Malik, J. 2017. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In CVPR, 209–217.
  • [Wan, Wei, and Zhang2018] Wan, Y.; Wei, N.; and Zhang, L. 2018. Efficient adaptive online learning via frequent directions. In IJCAI, 2748–2754.
  • [Wang et al.2011] Wang, F.; Tan, C.; Li, P.; and König, A. C. 2011. Efficient document clustering via online nonnegative matrix factorizations. In SDM, 908–919.
  • [Wang, Yang, and Li2016] Wang, H.; Yang, Y.; and Li, T. 2016. Multi-view clustering via concept factorization with local manifold regularization. In ICDM, 1245–1250.
  • [Wen et al.2018] Wen, J.; Zhang, Z.; Xu, Y.; and Zhong, Z. 2018. Incomplete multi-view clustering via graph regularized matrix factorization. arXiv preprint arXiv:1809.05998.
  • [Wu et al.2018] Wu, B.; Wang, E.; Zhu, Z.; Chen, W.; and Xiao, P. 2018. Manifold NMF with ℓ21 norm for clustering. Neurocomputing 273:78–88.
  • [Xing et al.2017] Xing, J.; Niu, Z.; Huang, J.; Hu, W.; Zhou, X.; and Yan, S. 2017. Towards robust and accurate multi-view and partially-occluded face alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence 40:987–1001.
  • [Xue et al.2017] Xue, H.-J.; Dai, X.; Zhang, J.; Huang, S.; and Chen, J. 2017. Deep matrix factorization models for recommender systems. In IJCAI, 3203–3209.
  • [Zhao, Ding, and Fu2017] Zhao, H.; Ding, Z.; and Fu, Y. 2017. Multi-view clustering via deep matrix factorization. In AAAI, 2921–2927.
  • [Zhao, Liu, and Fu2016] Zhao, H.; Liu, H.; and Fu, Y. 2016. Incomplete multi-modal visual data grouping. In IJCAI, 2392–2398.
  • [Zhu, Ting, and Zhou2017] Zhu, Y.; Ting, K. M.; and Zhou, Z.-H. 2017. New class adaptation via instance generation in one-pass class incremental learning. In ICDM, 1207–1212.
  • [Zong et al.2017] Zong, L.; Zhang, X.; Zhao, L.; Yu, H.; and Zhao, Q. 2017. Multi-view clustering via multi-manifold regularized non-negative matrix factorization. Neural Networks 88:74–89.