1 Introduction
Discovering clusters in unlabeled data is one of the most fundamental scientific tasks, with an endless list of practical applications in data mining, pattern recognition, and machine learning jain1999data ; yang2017discrete ; huang2018self ; chen2012fgkm ; ren2019semi . It is well known that labels are expensive to obtain, so clustering techniques are useful tools for processing data and revealing its underlying structure. Over the past decades, a number of clustering techniques have been developed xu2005survey ; huang2019auto ; yang2018fast ; peng2018integrate .
One main class of clustering methods is K-means and its various extensions. To some extent, these techniques are distance-based methods. K-means has been extensively investigated since its introduction in 1957 by Lloyd lloyd1982least , due to its simplicity and effectiveness. However, it is only suitable for data points that are evenly spread around some centroids yang2017towards ; chen2013twkm . To make it work under general circumstances, much effort has been spent on mapping data to a certain space. One representative approach is the kernel method. The first kernel K-means algorithm was proposed in 1998 scholkopf1998nonlinear . Although some data points cannot be separated in the original data representation, they become linearly separable in kernel space. Recently, the robust kernel K-means (RKKM) method has been developed du2015robust . Different from other K-means algorithms, RKKM uses the $\ell_{2,1}$-norm to evaluate the fidelity term. Consequently, RKKM can alleviate the adverse effects of noise and outliers considerably, and it achieves superior performance on a number of real-world data sets. However, its performance still depends on the choice of the kernel function.
Graph-based algorithms, as another main category of clustering methods, have been drawing growing attention. Among them, spectral clustering is a leading and highly popular method due to its ability to incorporate manifold information while achieving good performance ng2002spectral ; kang2018unified . In particular, it embeds the data into the eigenspace of the Laplacian matrix, derived from the pairwise similarities between data points peluffo2016relationship . A commonly used similarity measure is the Gaussian kernel von2007tutorial . Nevertheless, it is challenging to select an appropriate scaling factor zelnik2005self . Kernel spectral clustering (KSC) alzate2010multiway and its variants langone2016kernel have also been proposed. Recently, a novel approach which models graph construction as an optimization problem has been proposed nie2014clustering ; Cheng2010 ; elhamifar2013sparse ; liu2013robust ; tang2018learning . It works by either performing adaptive local structure learning or representing each data point as a weighted combination of other data points. The second approach can capture the global structure information and can be easily extended to kernel space kang2017kernel ; kang2019low . That is to say, one seeks to learn a high-quality graph from an artificially constructed kernel matrix. These methods are free of similarity metrics or kernel parameters, and thus are more appealing for real-world applications.
Although the above approach has shown much better performance than traditional methods, it also causes some information loss. In particular, it learns the similarity graph from the data itself without considering other prior information. Consequently, some similarity information, which should be helpful for graph learning, might get lost haeffele2017structured ; Kang2019aa . On the other hand, preserving similarity information has been shown to be important for feature selection zhao2013similarity . In zhao2013similarity , a new feature vector $f$ is obtained by maximizing $f^T \hat{K} f$, where $\hat{K}$ is the refined similarity matrix derived from the original kernel matrix $K$. In this paper, we propose a way to preserve the similarity information between samples when we learn the graph and cluster labels. To the best of our knowledge, this is the first work that develops a similarity preserving strategy for graph learning. It is necessary to point out that the key point of this paper is the similarity preserving concept. Though there are many similarity learning methods in the literature, they often fail to explicitly retain the structure information of the original data. Concretely, we expect our learned similarity matrix $Z$ to approximate the predefined kernel matrix $K$ to some extent. The quality of the similarity matrix is crucial to many tasks, such as graph embedding cai2018comprehensive , where the low-dimensional representation is expected to respect the neighborhood relation characterized by $Z$.
In addition, most existing graph-based clustering methods perform clustering in two separate steps ng2002spectral ; liu2013robust ; elhamifar2013sparse ; kang2019robust . Specifically, they first construct a graph; then, the obtained graph is fed to the spectral clustering algorithm. In this approach, the quality of the graph is not guaranteed, so it might not be suitable for subsequent clustering nie2014clustering ; kang2017twin . In this paper, the structure information of the graph is explicitly considered in our model, so that the number of connected components in the learned graph equals the number of clusters. Then, we can directly obtain cluster indicators from the graph itself without performing further graph cut or K-means clustering steps. Extensive experimental results validate the effectiveness of our proposed method.
The contributions of this paper are twofold:

Our proposed model has the capability of similarity preserving. This is the first attempt to preserve the samples' similarity information when constructing the similarity graph. Consequently, the quality of the learned graph is enhanced.

Cluster structure is seamlessly incorporated into our objective function. As a result, the number of connected components in the learned graph equals the number of clusters, such that the vertices in each connected component of the graph are partitioned into one cluster. Therefore, we directly obtain cluster indicators from the graph itself without performing further graph cut or K-means clustering steps.
Notations. Given a data matrix $X \in \mathbb{R}^{m \times n}$ with $m$ features and $n$ samples, we denote its $(i,j)$-th element and $i$-th column as $x_{ij}$ and $x_i$, respectively. The $\ell_2$-norm of a vector $x$ is represented by $\|x\|_2 = \sqrt{x^T x}$, where $x^T$ is the transpose of $x$. The squared Frobenius norm is defined as $\|X\|_F^2 = \sum_{ij} x_{ij}^2$. $I$ represents the identity matrix of the proper size. $Z \ge 0$ means all the elements of $Z$ are nonnegative. $\langle A, B \rangle = \mathrm{Tr}(A^T B)$ denotes the inner product of two matrices.
2 Preliminaries
In this section, we give a brief overview of two popular similarity learning techniques which have been developed recently.
2.1 Adaptive Local Structure Learning
For each data point $x_i$, it can be connected to data point $x_j$ with probability $s_{ij}$. Closer points should have a larger probability, thus $s_{ij}$ characterizes the similarity between $x_i$ and $x_j$ niyogi2004locality ; nie2014clustering . Since $s_{ij}$ is negatively correlated with the distance between $x_i$ and $x_j$, the determination of $s_{ij}$ can be achieved by optimizing the following problem:
$\min_{S} \sum_{i,j=1}^{n} \left( \|x_i - x_j\|_2^2\, s_{ij} + \alpha s_{ij}^2 \right), \quad \text{s.t.} \;\; \sum_j s_{ij} = 1,\; 0 \le s_{ij} \le 1,$   (1)
where $\alpha$ is the trade-off parameter. Here, $S$ is adaptively learned from the data. This idea has recently been applied to a number of problems: nonnegative matrix factorization dacheng2017 ; huang2018adaptive , feature selection du2015unsupervised , and multi-view learning nie2017multi , just to name a few. One limitation of this method is that it can only capture the local structure information, which might deteriorate performance.
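Problem (1) decomposes over the rows of the similarity matrix, and each row subproblem is a Euclidean projection onto the probability simplex. The following is a minimal numpy sketch under that reading; the function names, the self-loop handling, and the default trade-off value are our illustrative assumptions, not prescriptions from the paper:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {s : s >= 0, sum(s) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def adaptive_neighbors(X, alpha=1.0):
    """Row-wise solution of problem (1): s_i = Proj_simplex(-d_i / (2*alpha)),
    where d_ij is the squared Euclidean distance between samples i and j."""
    n = X.shape[0]
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    d[np.arange(n), np.arange(n)] = d.max() + 1.0  # discourage self-loops (assumption)
    return np.vstack([project_simplex(-d[i] / (2.0 * alpha)) for i in range(n)])
```

Each row of the returned matrix sums to one and assigns larger weights to closer points, matching the local-structure interpretation above.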
2.2 Adaptive Global Structure Learning
To explore the global structure information, methods based on self-expression have become increasingly popular in recent years Cheng2010 ; yang2014data . The basic idea is to encode each datum as a weighted combination of other samples, i.e., its direct neighbors and reachable indirect neighbors. If $x_i$ is quite similar to $x_j$, the coefficient $z_{ij}$, which denotes the contribution of $x_j$ to the reconstruction of $x_i$, should be large. From this point of view, $z_{ij}$ can be viewed as the similarity between the data points. The corresponding optimization problem can be formulated as:
$\min_{Z} \|X - XZ\|_F^2 + \alpha \|Z\|_F^2.$   (2)
This has drawn significant attention and achieved impressive performance in a number of applications, including face recognition zhang2011sparse , motion segmentation liu2013robust ; elhamifar2013sparse , and semi-supervised learning zhuang2017label . As a matter of fact, Eq. (2) is related to some dimension reduction methods. For example, in Locally Linear Embedding (LLE), $k$ nearest neighbors are first identified for each data point roweis2000nonlinear . Then each data point is reconstructed by a linear combination of its $k$ nearest neighbors. By contrast, Eq. (2) uses all data points and determines the neighbors automatically according to the optimization result. Thus, it is supposed to capture the global structure information. Eq. (2) also differs from Locality Preserving Projections (LPP), which tries to preserve the neighborhood structure during the dimension reduction process he2004locality : LPP uses a predefined similarity matrix to characterize the neighbor relations, while Eq. (2) learns this similarity matrix automatically from the data. For Laplacian Eigenmaps (LE), a similarity graph matrix is also predefined belkin2002laplacian . On the other hand, Principal Component Analysis (PCA) aims to find a projection that maximizes the variance in the low-dimensional space, which is less relevant to similarity learning methods.
To capture the nonlinear structure information of data, Eq. (2) can be easily extended to kernel space, which gives
$\min_{Z} \mathrm{Tr}(K - 2KZ + Z^T K Z) + \alpha \|Z\|_F^2,$   (3)
where $\mathrm{Tr}(\cdot)$ is the trace operator and $K$ is the kernel matrix of $X$. This model recovers the linear relations among the data in kernel space, and thus the nonlinear relations in the original space. Eq. (3) is more general than Eq. (2) and reduces to Eq. (2) when a linear kernel $K = X^T X$ is applied.
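Without the nonnegativity constraint used later, setting the gradient of Eq. (3) to zero gives the closed-form minimizer $(K + \alpha I)Z = K$. A small sketch (the function name and default $\alpha$ are ours):

```python
import numpy as np

def kernel_self_expression(K, alpha=0.1):
    """Unconstrained minimizer of Eq. (3): solve (K + alpha*I) Z = K."""
    return np.linalg.solve(K + alpha * np.eye(K.shape[0]), K)
```

With a linear kernel K = X.T @ X this coincides with the ridge-regularized solution of Eq. (2), illustrating the claimed reduction.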
3 Similarity Preserving Clustering
The aforementioned two learning mechanisms lead to much better performance than traditional similarity measure based techniques in many realworld applications. However, they ignore some important information. Specifically, as they operate on the data itself, some data relation information might get lost haeffele2017structured . Since we seek to learn a highquality similarity graph, data relation information would be crucial to our task. In this paper, we aim to retain this information.
Because the kernel matrix $K$ itself contains similarity information of data points, we expect $Z$ to be close to $K$. To this end, we optimize the following objective function
$\max_{Z} \; \mathrm{Tr}(KZ).$   (4)
Although we claim similarity preserving, Eq. (4) also keeps dissimilarity information. For example, if points $x_i$ and $x_j$ are from different clusters, $K_{ij} \approx 0$, then $z_{ij} \approx 0$ would hold. Note that we already have the term $-2\mathrm{Tr}(KZ)$ in Eq. (3). Hence we can combine Eq. (4) and Eq. (3) by introducing a coefficient $\gamma \ge 1$, and we have
$\min_{Z} \mathrm{Tr}(K - 2\gamma KZ + Z^T K Z) + \alpha \|Z\|_F^2, \quad \text{s.t.} \;\; Z \ge 0.$   (5)
Although we just make a small modification to Eq. (3), it makes a lot of sense in practice. By tuning the parameter $\gamma$, we can control how much relation information we want to keep from the original kernel matrix. In particular, $\gamma$ can avoid conflicts between the precomputed similarity $K$ and the learned similarity $Z$. If $K$ is not suitable to reveal the underlying relationships among samples, we can simply set $\gamma = 1$, which means that there is no similarity preserving effect. The influence of the parameter $\gamma$ is elaborated in Sec. 5.2.4.
Eq. (5) provides a framework to learn the graph matrix $Z$ with similarity preservation. Further clustering is achieved by applying spectral clustering and K-means clustering to the learned graph. These separate steps often lead to suboptimal solutions nie2014clustering , and K-means is sensitive to the initialization of cluster centers. To this end, we propose to unify clustering with graph learning, so that the two tasks can be achieved simultaneously. Specifically, if there are $c$ clusters in the data, we hope to learn a graph with exactly $c$ connected components. Obviously, Eq. (5) can hardly satisfy such a constraint. To this end, we leverage the following theorem:
Theorem 1.
mohar1991laplacian The number of connected components of the graph associated with $Z$ is equal to the multiplicity of zero as an eigenvalue of its Laplacian matrix $L$.
Since $L$ is positive semidefinite, it has nonnegative eigenvalues $\sigma_1 \le \sigma_2 \le \dots \le \sigma_n$. Theorem 1 indicates that if $\sum_{i=1}^{c} \sigma_i = 0$ is satisfied, the graph would be ideal and the data points are already clustered into $c$ clusters. According to Fan's theorem fan1949theorem , we obtain
$\sum_{i=1}^{c} \sigma_i = \min_{F \in \mathbb{R}^{n \times c},\, F^T F = I} \mathrm{Tr}(F^T L F),$   (6)
where the Laplacian matrix $L = D - (Z^T + Z)/2$, and $D$ is a diagonal matrix whose elements are the column sums of $(Z^T + Z)/2$. Combining Eq. (6) with Eq. (5), our proposed Similarity Preserving Clustering (SPC) is formulated as
$\min_{Z, F} \mathrm{Tr}(K - 2\gamma KZ + Z^T K Z) + \alpha \|Z\|_F^2 + \lambda \mathrm{Tr}(F^T L F), \quad \text{s.t.} \;\; Z \ge 0,\; F^T F = I.$   (7)
By solving Eq. (7), we can obtain a structured graph $Z$ which has exactly $c$ connected components. By running the Matlab built-in function graphconncomp on $Z$, we can obtain the component each sample belongs to.
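Theorem 1 is easy to check numerically, using scipy's connected-component routine in place of Matlab's graphconncomp: for a block-diagonal similarity graph with three components, the Laplacian has exactly three (near-)zero eigenvalues.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# A block-diagonal similarity graph Z with three connected components
Z = np.zeros((6, 6))
for a, b in [(0, 2), (2, 4), (4, 6)]:
    Z[a:b, a:b] = 1.0

L = np.diag(Z.sum(axis=0)) - Z                       # graph Laplacian L = D - Z
zero_multiplicity = int(np.sum(np.linalg.eigvalsh(L) < 1e-8))

n_components, labels = connected_components(csr_matrix(Z), directed=False)
```

Reading off cluster labels then amounts to reading off `labels`, exactly as described above for graphconncomp.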
3.1 Optimization
The problem (7) can be easily solved with an alternating optimization approach. When $Z$ is fixed, Eq. (7) becomes
$\min_{F} \mathrm{Tr}(F^T L F), \quad \text{s.t.} \;\; F^T F = I.$   (8)
It is quite standard to obtain $F$, which is formed by the $c$ eigenvectors of $L$ corresponding to the $c$ smallest eigenvalues.
When $F$ is fixed, Eq. (7) can be written column-wise as
$\min_{z_i} \; z_i^T (K + \alpha I) z_i - \Big( 2\gamma k_i - \frac{\lambda}{2} p_i \Big)^T z_i,$   (9)
where $k_i$ is the $i$-th column of $K$, $p_i \in \mathbb{R}^n$ with $p_{ij} = \|f^i - f^j\|_2^2$ ($f^i$ denoting the $i$-th row of $F$), and we have used the equality $\mathrm{Tr}(F^T L F) = \frac{1}{2} \sum_{i,j} \|f^i - f^j\|_2^2 z_{ij}$. It is easy to obtain the closed-form solution
$z_i = (K + \alpha I)^{-1} \Big( \gamma k_i - \frac{\lambda}{4} p_i \Big).$   (10)
Once the parameter $\alpha$ is given, $(K + \alpha I)^{-1}$ becomes a constant. Therefore, we only need to perform the matrix inversion once. We summarize the steps in Algorithm 1. Our algorithm stops when the maximum iteration number 200 is reached or the relative change of $Z$ falls below a given tolerance.
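The alternating scheme of Algorithm 1 can be sketched as follows. The initialization, the clipping used to keep the graph nonnegative, and the default parameter values (alpha, gamma, lam in the code, for the trade-off, similarity-preserving, and structure weights) are our illustrative assumptions, not prescriptions from the paper:

```python
import numpy as np

def spc(K, c, alpha=1.0, gamma=2.0, lam=0.1, max_iter=200, tol=1e-3):
    """Alternating optimization for Eq. (7): F-step (Eq. 8), Z-step (Eq. 10)."""
    n = K.shape[0]
    inv = np.linalg.inv(K + alpha * np.eye(n))   # matrix inversion done only once
    Z = np.maximum(inv @ (gamma * K), 0)         # init: Eq. (5) minimizer, clipped
    for _ in range(max_iter):
        Z_old = Z.copy()
        W = (Z + Z.T) / 2.0
        L = np.diag(W.sum(axis=0)) - W
        # F-step: c eigenvectors of L for the c smallest eigenvalues
        _, vecs = np.linalg.eigh(L)
        F = vecs[:, :c]
        # Z-step: stacked column-wise closed form; P_ij = ||f^i - f^j||^2
        P = ((F[:, None, :] - F[None, :, :]) ** 2).sum(axis=-1)
        Z = np.maximum(inv @ (gamma * K - lam * P / 4.0), 0)  # clip for Z >= 0
        if np.linalg.norm(Z - Z_old) < tol * max(np.linalg.norm(Z_old), 1e-12):
            break
    return Z
```

On a kernel with clear block structure, the learned graph concentrates its weight within clusters, which is what the subsequent connected-component step relies on.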
4 Multiple Kernel Learning
Different kernels correspond to different notions of similarity and lead to different results, which makes the choice of kernel unreliable in practical applications. Multiple Kernel Learning (MKL) offers a principled way to encode complementary information: instead of heuristic kernel selection, an optimal combination of distinct kernels is learned automatically sonnenburg2006large ; kang2018self .
Specifically, suppose there are $r$ kernels in total; we introduce a weight $w_i$ for the $i$-th kernel $K^{(i)}$. We denote the combined kernel as $K_w = \sum_{i=1}^{r} w_i K^{(i)}$, where the weight distribution satisfies $\sum_i w_i = 1,\; w_i \ge 0$ gonen2011multiple . Finally, our multiple kernel learning based similarity preserving clustering (mSPC) method can be formulated as
$\min_{Z, F, w} \mathrm{Tr}(K_w - 2\gamma K_w Z + Z^T K_w Z) + \alpha \|Z\|_F^2 + \lambda \mathrm{Tr}(F^T L F), \quad \text{s.t.} \;\; Z \ge 0,\; F^T F = I,\; \sum_i w_i = 1,\; w_i \ge 0.$   (11)
The problem (11) can be solved in a similar way as (7). Specifically, we alternately update $F$, $Z$, and the kernel weights $w$, each with the other variables fixed.
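Under the convex-combination constraint stated above, the combined kernel stays symmetric positive semidefinite, so the single-kernel updates apply unchanged with the kernel replaced by the weighted combination. A minimal sketch (the uniform default weights are an assumption):

```python
import numpy as np

def combine_kernels(kernels, w=None):
    """K_w = sum_i w_i * K_i with the weights w on the probability simplex."""
    r = len(kernels)
    w = np.full(r, 1.0 / r) if w is None else np.asarray(w, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    return sum(wi * Ki for wi, Ki in zip(w, kernels))
```

Because each summand is PSD and the weights are nonnegative, the combination is PSD as well, which is what allows it to be treated as an ordinary kernel in the Z- and F-steps.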
5 Experiment
In this section, we perform extensive experiments to demonstrate the effectiveness of our proposed models.
5.1 Experiment on Synthetic Data
We generate a synthetic data set with 300 points. The data points are distributed in the pattern of two moons, and each moon is considered as a cluster. In Figure 1, we present the clustering results of our proposed SPC and standard k-means. A Gaussian kernel is used in our SPC model. We can observe that our method performs much better than k-means. We quantitatively assess the clustering performance in terms of accuracy (Acc), normalized mutual information (NMI), and Purity. For SPC, Acc, NMI, and Purity are 93%, 63.49%, and 93%, respectively. Correspondingly, k-means produces 73.67%, 16.87%, and 73.67%.
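Since cluster labels are only defined up to a permutation, the Acc metric is computed by taking the best agreement over relabelings of the predicted cluster ids. A small numpy helper illustrating this (suitable for the small cluster counts used here; for larger numbers of clusters the Hungarian algorithm would be used instead, and we assume predicted ids come from the same id set as the ground truth):

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(y_true, y_pred):
    """Acc: best agreement over all permutations of the predicted cluster ids."""
    classes = np.unique(y_true)
    best = 0.0
    for perm in permutations(classes):
        mapping = dict(zip(classes, perm))
        remapped = np.array([mapping[c] for c in y_pred])
        best = max(best, float(np.mean(remapped == y_true)))
    return best
```

For example, a prediction that swaps the two moon labels everywhere still scores an accuracy of 1.0.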
5.2 Experiment on Real Data
5.2.1 Data sets
Table 1: Description of the data sets.

Data set    # instances    # features    # classes
YALE            165            1024          15
JAFFE           213             676          10
ORL             400            1024          40
YEAST          1484            1470          10
USPS           1854             256          20
TR11            414            6429           9
TR41            878            7454          10
TR45            690            8261          10
We conduct our experiments with eight benchmark data sets, which are widely used in clustering experiments. We show the statistics of these data sets in Table 1.
These data sets are from different fields. Specifically, YALE (http://vision.ucsd.edu/content/yalefacedatabase), JAFFE (http://www.kasrl.org/jaffe.html), and ORL (http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html) are three face databases. Each image represents a different facial expression or configuration due to time, illumination conditions, and glasses/no glasses. Figures 2(a) and 2(b) show some example images from the JAFFE and YALE databases, respectively. YEAST is a microarray data set. The USPS data set (http://www-stat.stanford.edu/~tibs/ElemStatLearn/data.html) is obtained from the scanning of handwritten digits from envelopes by the U.S. Postal Service; some sample digits are shown in Figure 2(c). The last three data sets in Table 1 are text data (http://www-users.cs.umn.edu/~han/data/tmdata.tar.gz).
We manually construct 12 kernels: seven Gaussian kernels of the form $k(x_i, x_j) = \exp(-\|x_i - x_j\|_2^2 / (2 (t\, d_{max})^2))$, where $d_{max}$ denotes the maximal distance between data points and $t$ varies over a set of scale factors; four polynomial kernels of the form $k(x_i, x_j) = (a + x_i^T x_j)^b$ with varying $a$ and $b$; and a linear kernel $k(x_i, x_j) = x_i^T x_j$. Furthermore, all kernel matrices are normalized to a common range to avoid numerical inconsistency.
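The kernel pool can be constructed along these lines; the specific bandwidth scales and polynomial parameters below are typical choices rather than necessarily the ones used in the experiments, and rescaling each kernel by its largest absolute entry is one plausible normalization:

```python
import numpy as np

def build_kernel_pool(X, ts=(0.01, 0.05, 0.1, 1, 10, 50, 100)):
    """Gaussian, polynomial, and linear kernels for samples in the rows of X.
    Bandwidth scales `ts` and the (a, b) grid are assumed typical values."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    dmax = np.sqrt(d.max())                                      # max pairwise distance
    pool = [np.exp(-d / (2.0 * (t * dmax) ** 2)) for t in ts]    # 7 Gaussian kernels
    lin = X @ X.T
    pool += [(a + lin) ** b for a in (0, 1) for b in (2, 4)]     # 4 polynomial kernels
    pool.append(lin)                                             # 1 linear kernel
    # One plausible normalization: rescale by the largest absolute entry
    return [K / np.abs(K).max() for K in pool]
```

This yields 12 symmetric kernel matrices whose entries are on a comparable scale, ready to be fed to the multiple kernel model.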



5.2.2 Comparison Methods
To fully investigate the performance of our method on clustering, we choose a good set of methods to compare. In general, they can be classified into two categories: graphbased and kernelbased clustering methods.

Spectral Clustering (SC) ng2002spectral . We use the kernel matrix as its graph input, whereas our SPC learns the graph from the kernel.

Robust Kernel K-means (RKKM) du2015robust (https://github.com/csliangdu/RMKKM). As an extension of the classical K-means clustering method, RKKM has the capability of dealing with nonlinear structures, noise, and outliers in the data. We also compare with its multiple kernel learning version, RMKKM.

Simplex Sparse Representation (SSR) huang2015new . Based on selfexpression, SSR achieves satisfying performance in numerous data sets.

Clustering with Adaptive Neighbor (CAN) nie2014clustering . Based on adaptive local structure learning, CAN constructs the similarity graph by Eq. (1).

Kernel Spectral Clustering (KSC) alzate2010multiway . Based on a weighted kernel principal component analysis strategy, KSC performs multiway spectral clustering. Moreover, Balanced Line Fit (BLF) is proposed to obtain model parameters.

Our proposed SPC and mSPC (https://github.com/sckangz/SPC): our single kernel and multiple kernel learning based similarity preserving clustering methods.

SPC1 and mSPC1. To observe the effect of similarity preserving, we let $\gamma = 1$ in SPC and name this method SPC1; similarly, we have mSPC1. They are equivalent to the methods in kang2017twin .

Multiple Kernel K-means (MKKM) huang2012multiple (http://imp.iis.sinica.edu.tw/IVCLab/research/Sean/mkfc/code). MKKM extends K-means to the multiple kernel setting. It imposes a different constraint on the kernel weight distribution.

Affinity Aggregation for Spectral Clustering (AASC) huang2012affinity (http://imp.iis.sinica.edu.tw/IVCLab/research/Sean/aasc/code). AASC is an extension of spectral clustering to the situation where multiple affinities exist.
For a fair comparison, we either use the recommended parameter settings from the respective papers or tune each method to obtain its best performance. In fact, the optimal performance for the SC, RKKM, MKKM, AASC, and RMKKM methods can be easily obtained by running the package provided in du2015robust . SC, SSR, and CAN are parameter-free models. KSC selects parameters based on the Balanced Line Fit principle.
5.2.3 Results
All results are summarized in Table 2. We can see that our methods SPC and mSPC outperform the others in most cases. In particular, we have the following observations: 1) The improvement of SPC over SPC1 is considerable. Note that the only difference between SPC and SPC1 is that SPC explicitly considers the similarity preserving effect. In other words, SPC adds the proposed term in Eq. (4), which aims to keep the learned graph matrix $Z$ close to the kernel matrix $K$, so that the similarity information carried by the kernel matrix is transferred to the learned graph matrix. This demonstrates the significance of similarity preserving in graph learning. 2) For multiple kernel methods, mSPC also performs better than mSPC1 in most experiments, which once again confirms the importance of similarity preserving. 3) Compared to the self-expression based method SSR, our advantage is also obvious. For example, on TR11, SPC improves the accuracy from 41.06% to 78.26%. Note that our basic objective function Eq. (3) is also derived from the self-expression idea; however, our method is a kernel method. 4) With respect to traditional spectral clustering, kernel spectral clustering, the recently proposed robust kernel K-means method, and the adaptive local structure graph learning method, the improvement is very promising. 5) In terms of multiple kernel learning approaches, mSPC also achieves much better performance than other state-of-the-art techniques.
To better illustrate the effect of similarity preserving, we visualize the results on the YALE data in Figure 3. Specifically, Figure 3(a) plots the histogram of the entries of $K_w - Z$, i.e., the difference between the learned kernel in Eq. (11) and the learned similarity matrix. We can see that they are quite close for most elements, and the difference is the refinement brought by our learning algorithm. A manually constructed kernel matrix often fails to reflect the underlying relationships among samples due to the inherent noise or the inappropriate use of a metric function. This is validated by the experimental results: for the SC method, we directly treat the kernel matrix as the similarity matrix, while for our proposed SPC method, we use the learned similarity matrix $Z$ to perform clustering, and the results of SPC are much better than those of SC.
Figure 3(b) displays the difference between the original data and the reconstructed data $XZ$. A good reconstruction means that $Z$ represents the similarity well; the reconstruction error accounts for noise or outliers in the original data. As shown in Figure 3(b), our learned $Z$ reconstructs the original data with a small error. Therefore, our proposed approach can achieve a high-quality similarity matrix.
5.2.4 Parameter Analysis
As shown in Eq. (11), there are three parameters in our model: $\gamma$, $\alpha$, and $\lambda$. As we mentioned previously, $\gamma$ is bigger than one. Taking the YALE data set as an example, we demonstrate the sensitivity of our model mSPC in Figure 4. We can see that it works well over a wide range of values. Note that the case $\gamma = 1$ has been covered by the SPC1 and mSPC1 methods in Table 2; when $\gamma = 1$, Eqs. (7) and (11) do not possess the similarity preserving capability.
6 Conclusion
In this paper, we propose a clustering algorithm which exploits the similarity information of raw data. Furthermore, the structure information of the graph is also considered in our objective function. Comprehensive experimental results on real data sets demonstrate the superiority of the proposed method on the clustering task. Since the performance of a single-kernel method is largely determined by the choice of the kernel function, we also develop a multiple kernel learning method, which is capable of automatically learning an appropriate kernel from a pool of candidate kernels. In the future, we will examine the effectiveness of our framework on the semi-supervised learning task.
Acknowledgment
This paper was in part supported by Grants from the Natural Science Foundation of China (Nos. 61806045, 61572111, and 61872062), three Fundamental Research Fund for the Central Universities of China (Nos. ZYGX2017KYQD177, A03017023701012, and ZYGX2016J086), and a 985 Project of UESTC (No. A1098531023601041).
References
 (1) A. K. Jain, M. N. Murty, P. J. Flynn, Data clustering: a review, ACM computing surveys (CSUR) 31 (3) (1999) 264–323.
 (2) Y. Yang, F. M. Shen, Z. Huang, H. T. Shen, X. L. Li, Discrete nonnegative spectral clustering, IEEE Transactions on Knowledge and Data Engineering 29 (9) (2017) 1834–1845.
 (3) S. Huang, Z. Kang, Z. Xu, Selfweighted multiview clustering with soft capped norm, KnowledgeBased Systems 158 (2018) 1–8.

 (4) X. Chen, Y. Ye, X. Xu, J. Z. Huang, A feature group weighting method for subspace clustering of high-dimensional data, Pattern Recognition 45 (1) (2012) 434–446.
 (5) Y. Ren, K. Hu, X. Dai, L. Pan, S. C. Hoi, Z. Xu, Semisupervised deep embedded clustering, Neurocomputing 325 (2019) 121–130.

 (6) R. Xu, D. Wunsch, Survey of clustering algorithms, IEEE Transactions on Neural Networks 16 (3) (2005) 645–678.
 (7) S. Huang, Z. Kang, I. W. Tsang, Z. Xu, Autoweighted multiview clustering via kernelized graph learning, Pattern Recognition 88 (2019) 174–184.
 (8) X. Yang, W. Yu, R. Wang, G. Zhang, F. Nie, Fast spectral clustering learning with hierarchical bipartite graph for largescale data, Pattern Recognition Letters.
 (9) C. Peng, Z. Kang, S. Cai, Q. Cheng, Integrate and conquer: Doublesided twodimensional kmeans via integrating of projection and manifold construction, ACM Transactions on Intelligent Systems and Technology (TIST) 9 (5) (2018) 57.
 (10) S. Lloyd, Least squares quantization in pcm, IEEE transactions on information theory 28 (2) (1982) 129–137.

 (11) B. Yang, X. Fu, N. D. Sidiropoulos, M. Hong, Towards k-means-friendly spaces: Simultaneous deep learning and clustering, in: International Conference on Machine Learning, 2017, pp. 3861–3870.
 (12) X. Chen, X. Xu, Y. Ye, J. Z. Huang, TWkmeans: Automated Twolevel Variable Weighting Clustering Algorithm for Multiview Data, IEEE Transactions on Knowledge and Data Engineering 25 (4) (2013) 932–944.
 (13) B. Schölkopf, A. Smola, K.R. Müller, Nonlinear component analysis as a kernel eigenvalue problem, Neural computation 10 (5) (1998) 1299–1319.

 (14) L. Du, P. Zhou, L. Shi, H. Wang, M. Fan, W. Wang, Y.-D. Shen, Robust multiple kernel k-means using $\ell_{2,1}$-norm, in: Proceedings of the 24th International Conference on Artificial Intelligence, AAAI Press, 2015, pp. 3476–3482.

 (15) A. Y. Ng, M. I. Jordan, Y. Weiss, et al., On spectral clustering: Analysis and an algorithm, Advances in Neural Information Processing Systems 2 (2002) 849–856.
 (16) Z. Kang, C. Peng, Q. Cheng, Z. Xu, Unified spectral clustering with optimal graph, in: ThirtySecond AAAI Conference on Artificial Intelligence, 2018.
 (17) D. H. PeluffoOrdóñez, M. A. Becerra, A. E. CastroOspina, X. BlancoValencia, J. C. AlvaradoPérez, R. Therón, A. AnayaIsaza, On the relationship between dimensionality reduction and spectral clustering from a kernel viewpoint, in: Distributed Computing and Artificial Intelligence, 13th International Conference, Springer, 2016, pp. 255–264.
 (18) U. Von Luxburg, A tutorial on spectral clustering, Statistics and computing 17 (4) (2007) 395–416.
 (19) L. ZelnikManor, P. Perona, Selftuning spectral clustering, in: Advances in neural information processing systems, 2005, pp. 1601–1608.
 (20) C. Alzate, J. A. Suykens, Multiway spectral clustering with outofsample extensions through weighted kernel pca, IEEE transactions on pattern analysis and machine intelligence 32 (2) (2010) 335–347.

 (21) R. Langone, R. Mall, C. Alzate, J. A. Suykens, Kernel spectral clustering and applications, in: Unsupervised Learning Algorithms, Springer, 2016, pp. 135–161.
 (22) F. Nie, X. Wang, H. Huang, Clustering and projected clustering with adaptive neighbors, in: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, ACM, 2014, pp. 977–986.
 (23) B. Cheng, J. Yang, S. Yan, Y. Fu, T. S. Huang, Learning with l1graph for image analysis, Trans. Img. Proc. 19 (4) (2010) 858–866. doi:10.1109/TIP.2009.2038764.
 (24) E. Elhamifar, R. Vidal, Sparse subspace clustering: Algorithm, theory, and applications, IEEE transactions on pattern analysis and machine intelligence 35 (11) (2013) 2765–2781.
 (25) G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, Y. Ma, Robust recovery of subspace structures by lowrank representation, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (1) (2013) 171–184.
 (26) C. Tang, X. Zhu, X. Liu, M. Li, P. Wang, C. Zhang, L. Wang, Learning joint affinity graph for multiview subspace clustering, IEEE Transactions on Multimedia.
 (27) Z. Kang, C. Peng, Q. Cheng, Kerneldriven similarity learning, Neurocomputing 267 (2017) 210–219.
 (28) Z. Kang, L. Wen, W. Chen, Z. Xu, Lowrank kernel learning for graphbased clustering, KnowledgeBased Systems 163 (2019) 510–517.
 (29) B. D. Haeffele, R. Vidal, Structured lowrank matrix factorization: Global optimality, algorithms, and applications, arXiv preprint arXiv:1708.07850.
 (30) Z. Kang, Y. Lu, Y. Su, C. Li, Z. Xu, Similarity learning via kernel preserving embedding, in: Proceedings of the ThirtyThird AAAI Conference on Artificial Intelligence (AAAI19). AAAI Press, 2019.
 (31) Z. Zhao, L. Wang, H. Liu, J. Ye, On similarity preserving feature selection, IEEE Transactions on Knowledge and Data Engineering 25 (3) (2013) 619–632.
 (32) H. Cai, V. W. Zheng, K. C.C. Chang, A comprehensive survey of graph embedding: Problems, techniques, and applications, IEEE Transactions on Knowledge and Data Engineering 30 (9) (2018) 1616–1637.
 (33) Z. Kang, H. Pan, S. C. Hoi, Z. Xu, Robust graph learning from noisy data, IEEE transactions on cybernetics.
 (34) Z. Kang, C. Peng, Q. Cheng, Twin learning for similarity and clustering: A unified kernel approach, in: Proceedings of the ThirtyFirst AAAI Conference on Artificial Intelligence (AAAI17). AAAI Press, 2017.
 (35) X. Niyogi, Locality preserving projections, in: Neural information processing systems, Vol. 16, MIT, 2004, p. 153.
 (36) L. Zhang, Q. Zhang, B. Du, J. You, D. Tao, Adaptive manifold regularized matrix factorization for data clustering, in: Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017, pp. 3399–3405.
 (37) S. Huang, Z. Xu, J. Lv, Adaptive local structure learning for document coclustering, KnowledgeBased Systems 148 (2018) 74–84.
 (38) L. Du, Y.D. Shen, Unsupervised feature selection with adaptive structure learning, in: Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2015, pp. 209–218.
 (39) F. Nie, G. Cai, X. Li, Multiview clustering and semisupervised classification with adaptive neighbours., in: AAAI, 2017, pp. 2408–2414.
 (40) Y. Yang, Z. Wang, J. Yang, J. Wang, S. Chang, T. S. Huang, Data clustering by laplacian regularized l1graph., in: AAAI, 2014, pp. 3148–3149.

 (41) L. Zhang, M. Yang, X. Feng, Sparse representation or collaborative representation: Which helps face recognition?, in: 2011 IEEE International Conference on Computer Vision (ICCV), IEEE, 2011, pp. 471–478.
 (42) L. Zhuang, Z. Zhou, S. Gao, J. Yin, Z. Lin, Y. Ma, Label information guided graph construction for semisupervised learning, IEEE Transactions on Image Processing.
 (43) S. T. Roweis, L. K. Saul, Nonlinear dimensionality reduction by locally linear embedding, science 290 (5500) (2000) 2323–2326.
 (44) X. He, P. Niyogi, Locality preserving projections, in: Advances in neural information processing systems, 2004, pp. 153–160.
 (45) M. Belkin, P. Niyogi, Laplacian eigenmaps and spectral techniques for embedding and clustering, in: Advances in neural information processing systems, 2002, pp. 585–591.
 (46) B. Mohar, Y. Alavi, G. Chartrand, O. Oellermann, The laplacian spectrum of graphs, Graph theory, combinatorics, and applications 2 (871898) (1991) 12.

 (47) K. Fan, On a theorem of Weyl concerning eigenvalues of linear transformations I, Proceedings of the National Academy of Sciences 35 (11) (1949) 652–655.
 (48) S. Sonnenburg, G. Rätsch, C. Schäfer, B. Schölkopf, Large scale multiple kernel learning, Journal of Machine Learning Research 7 (Jul) (2006) 1531–1565.
 (49) Z. Kang, X. Lu, J. Yi, Z. Xu, Selfweighted multiple kernel learning for graphbased clustering and semisupervised classification, in: Proceedings of the 27th International Joint Conference on Artificial Intelligence, AAAI Press, 2018, pp. 2312–2318.
 (50) M. Gönen, E. Alpaydın, Multiple kernel learning algorithms, Journal of machine learning research 12 (Jul) (2011) 2211–2268.
 (51) J. Huang, F. Nie, H. Huang, A new simplex sparse learning model to measure data similarity for clustering., in: IJCAI, 2015, pp. 3569–3575.
 (52) H.C. Huang, Y.Y. Chuang, C.S. Chen, Multiple kernel fuzzy clustering, IEEE Transactions on Fuzzy Systems 20 (1) (2012) 120–134.
 (53) H.C. Huang, Y.Y. Chuang, C.S. Chen, Affinity aggregation for spectral clustering, in: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, IEEE, 2012, pp. 773–780.