Semi-Supervised Information-Maximization Clustering

04/30/2013 · by Daniele Calandriello, et al. · Politecnico di Milano

Semi-supervised clustering aims to introduce prior knowledge in the decision process of a clustering algorithm. In this paper, we propose a novel semi-supervised clustering algorithm based on the information-maximization principle. The proposed method is an extension of a previous unsupervised information-maximization clustering algorithm based on squared-loss mutual information to effectively incorporate must-links and cannot-links. The proposed method is computationally efficient because the clustering solution can be obtained analytically via eigendecomposition. Furthermore, the proposed method allows systematic optimization of tuning parameters such as the kernel width, given the degree of belief in the must-links and cannot-links. The usefulness of the proposed method is demonstrated through experiments.


1 Introduction

The objective of clustering is to classify unlabeled data into disjoint groups based on their similarity, and clustering has been extensively studied in statistics and machine learning.

K-means [12] is a classic algorithm that clusters data so that the sum of within-cluster scatters is minimized. However, its usefulness is rather limited in practice because k-means only produces linearly separated clusters.

Kernel k-means [5] overcomes this limitation by performing k-means in a feature space induced by a reproducing kernel function [15]. Spectral clustering [17, 13] first unfolds non-linear data manifolds based on sample-sample similarity by a spectral embedding method, and then performs k-means in the embedded space.

These non-linear clustering techniques are capable of handling highly complex real-world data. However, they lack objective model selection strategies, i.e., tuning parameters included in kernel functions or similarity measures need to be manually determined in an unsupervised manner. Information-maximization clustering [1, 7, 19] can address this model selection issue: it learns a probabilistic classifier so that some information measure between feature vectors and cluster assignments is maximized in an unsupervised manner. In the information-maximization approach, tuning parameters included in kernel functions or similarity measures can be systematically determined based on the information-maximization principle. Among the information-maximization clustering methods, the algorithm based on squared-loss mutual information (SMI) was demonstrated to be promising [19], because it gives the clustering solution analytically via eigendecomposition.

In practical situations, additional side information regarding the clustering solution is often provided, typically in the form of must-links and cannot-links: a set of sample pairs that should belong to the same cluster and a set of sample pairs that should belong to different clusters, respectively. Such semi-supervised clustering (also known as clustering with side information) has been shown to be useful in practice [23, 6, 22]. Spectral learning [9] is a semi-supervised extension of spectral clustering that modifies the similarity using the side information, so that sample pairs tied by must-links have higher similarity and sample pairs tied by cannot-links have lower similarity. On the other hand, constrained spectral clustering [24] incorporates the must-links and cannot-links as constraints in the optimization problem.

However, in the same way as in unsupervised clustering, the above semi-supervised clustering methods suffer from the lack of objective model selection strategies, and thus tuning parameters included in similarity measures need to be determined manually. In this paper, we extend the unsupervised SMI-based clustering method to the semi-supervised clustering scenario. The proposed method, called semi-supervised SMI-based clustering (3SMIC), gives the clustering solution analytically via eigendecomposition, together with a systematic model selection strategy. Through experiments on real-world datasets, we demonstrate the usefulness of the proposed 3SMIC algorithm.

2 Information-Maximization Clustering with Squared-Loss Mutual Information

In this section, we formulate the problem of information-maximization clustering and review an existing unsupervised clustering method based on squared-loss mutual information.

2.1 Information-Maximization Clustering

The goal of unsupervised clustering is to assign class labels to data instances so that similar instances share the same label and dissimilar instances have different labels. Let $\{\boldsymbol{x}_i\}_{i=1}^{n}$ be feature vectors of data instances, which are drawn independently from a probability distribution with density $p^*(\boldsymbol{x})$. Let $\{y_i\}_{i=1}^{n}$, $y_i \in \{1,\dots,c\}$, be the class labels that we want to obtain, where $c$ denotes the number of classes, which we assume to be known throughout the paper.

The information-maximization approach tries to learn the class-posterior probability $p^*(y\,|\,\boldsymbol{x})$ in an unsupervised manner so that some "information" measure between feature $\boldsymbol{x}$ and label $y$ is maximized. Mutual information (MI) [16] is a typical information measure for this purpose [1, 7]:

$$\mathrm{MI} := \int \sum_{y=1}^{c} p^*(\boldsymbol{x}, y) \log \frac{p^*(\boldsymbol{x}, y)}{p^*(\boldsymbol{x})\, p^*(y)}\, \mathrm{d}\boldsymbol{x}. \qquad (1)$$

An advantage of the information-maximization formulation is that tuning parameters included in clustering algorithms, such as the Gaussian width and the regularization parameter, can be objectively optimized based on the same information-maximization principle. However, MI is known to be sensitive to outliers [3] due to the strongly non-linear log function. Furthermore, unsupervised learning of the class-posterior probability $p^*(y\,|\,\boldsymbol{x})$ under MI is highly non-convex, and finding a good local optimum is not straightforward in practice [7].

To cope with this problem, an alternative information measure called squared-loss MI (SMI) has been introduced [20]:

$$\mathrm{SMI} := \frac{1}{2}\int \sum_{y=1}^{c} p^*(\boldsymbol{x})\, p^*(y) \left(\frac{p^*(\boldsymbol{x}, y)}{p^*(\boldsymbol{x})\, p^*(y)} - 1\right)^{2} \mathrm{d}\boldsymbol{x}. \qquad (2)$$

Ordinary MI is the Kullback-Leibler (KL) divergence [10] from $p^*(\boldsymbol{x}, y)$ to $p^*(\boldsymbol{x})\, p^*(y)$, while SMI is the Pearson (PE) divergence [14] between the same distributions. Both the KL and PE divergences belong to the class of Ali-Silvey-Csiszár divergences [2, 4], which are also known as $f$-divergences. Thus, MI and SMI share many common properties; for example, both are non-negative and equal to zero if and only if the feature vector $\boldsymbol{x}$ and the label $y$ are statistically independent. Information-maximization clustering based on SMI was shown to be computationally advantageous [19]. Below, we review the SMI-based clustering (SMIC) algorithm.

2.2 SMI-Based Clustering

In unsupervised clustering, it is not straightforward to approximate SMI (2) because labeled samples are not available. To cope with this problem, let us expand the squared term in Eq.(2). Then SMI can be expressed as

$$\mathrm{SMI} = \frac{1}{2}\int \sum_{y=1}^{c} \frac{p^*(\boldsymbol{x}, y)}{p^*(\boldsymbol{x})\, p^*(y)}\, p^*(\boldsymbol{x}, y)\, \mathrm{d}\boldsymbol{x} - \frac{1}{2}. \qquad (3)$$

Suppose that the class-prior probability $p^*(y)$ is uniform, i.e., $p^*(y) = 1/c$ for $y = 1,\dots,c$. Then we can express Eq.(3) as

$$\mathrm{SMI} = \frac{c}{2}\int \sum_{y=1}^{c} p^*(y\,|\,\boldsymbol{x})\, p^*(\boldsymbol{x})\, p^*(y\,|\,\boldsymbol{x})\, \mathrm{d}\boldsymbol{x} - \frac{1}{2}. \qquad (4)$$

Let us approximate the class-posterior probability $p^*(y\,|\,\boldsymbol{x})$ by the following kernel model:

$$p(y\,|\,\boldsymbol{x}; \boldsymbol{\alpha}) := \sum_{i=1}^{n} \alpha_{y,i}\, K(\boldsymbol{x}, \boldsymbol{x}_i), \qquad (5)$$

where $\boldsymbol{\alpha} = (\alpha_{1,1},\dots,\alpha_{c,n})^{\top}$ is the parameter vector, $^{\top}$ denotes the transpose, and $K(\boldsymbol{x}, \boldsymbol{x}')$ denotes a kernel function. Let $\boldsymbol{K}$ be the kernel matrix whose $(i,j)$ element is given by $K(\boldsymbol{x}_i, \boldsymbol{x}_j)$, and let $\boldsymbol{\alpha}_y = (\alpha_{y,1},\dots,\alpha_{y,n})^{\top}$. Approximating the expectation over $p^*(\boldsymbol{x})$ in Eq.(4) with the empirical average of samples $\{\boldsymbol{x}_i\}_{i=1}^{n}$ and replacing the class-posterior probability $p^*(y\,|\,\boldsymbol{x})$ with the kernel model $p(y\,|\,\boldsymbol{x}; \boldsymbol{\alpha})$, we have the following SMI approximator:

$$\widehat{\mathrm{SMI}} := \frac{c}{2n} \sum_{y=1}^{c} \boldsymbol{\alpha}_y^{\top} \boldsymbol{K}^{2} \boldsymbol{\alpha}_y - \frac{1}{2}. \qquad (6)$$

Under the orthonormality of $\{\boldsymbol{\alpha}_y\}_{y=1}^{c}$, a global maximizer is given by the normalized eigenvectors $\boldsymbol{\phi}_1,\dots,\boldsymbol{\phi}_c$ associated with the $c$ largest eigenvalues $\lambda_1 \geq \cdots \geq \lambda_c$ of $\boldsymbol{K}$. Because the sign of each eigenvector $\boldsymbol{\phi}_y$ is arbitrary, we set the sign as

$$\widetilde{\boldsymbol{\phi}}_y = \boldsymbol{\phi}_y \times \mathrm{sign}\big(\boldsymbol{\phi}_y^{\top} \boldsymbol{1}_n\big),$$

where $\mathrm{sign}(\cdot)$ denotes the sign of a scalar and $\boldsymbol{1}_n$ denotes the $n$-dimensional vector with all ones. On the other hand, since

$$p^*(y) = \int p^*(y\,|\,\boldsymbol{x})\, p^*(\boldsymbol{x})\, \mathrm{d}\boldsymbol{x} \approx \frac{1}{n}\sum_{i=1}^{n} p(y\,|\,\boldsymbol{x}_i; \boldsymbol{\alpha}) = \frac{1}{n}\, \boldsymbol{\alpha}_y^{\top} \boldsymbol{K} \boldsymbol{1}_n,$$

and the class-prior probability was set to be uniform, we have the following normalization condition:

$$\boldsymbol{\alpha}_y^{\top} \boldsymbol{K} \boldsymbol{1}_n = \frac{n}{c}.$$

Furthermore, negative outputs are rounded up to zero to ensure that the outputs are non-negative.

Taking these post-processing issues into account, the cluster assignment $\widehat{y}_i$ for $\boldsymbol{x}_i$ is determined as the maximizer of the approximation of $p(y\,|\,\boldsymbol{x}_i)$:

$$\widehat{y}_i = \operatorname*{argmax}_{y \in \{1,\dots,c\}} \frac{\big[\max(\boldsymbol{0}_n, \widetilde{\boldsymbol{\phi}}_y)\big]_i}{\max(\boldsymbol{0}_n, \widetilde{\boldsymbol{\phi}}_y)^{\top} \boldsymbol{1}_n},$$

where $\boldsymbol{0}_n$ denotes the $n$-dimensional vector with all zeros, the max operation for vectors is applied in the element-wise manner, and $[\cdot]_i$ denotes the $i$-th element of a vector. Note that $\boldsymbol{K}\widetilde{\boldsymbol{\phi}}_y = \lambda_y \widetilde{\boldsymbol{\phi}}_y$ is used in the above derivation.

For out-of-sample prediction, the cluster assignment $\widehat{y}$ for a new sample $\boldsymbol{x}$ may be obtained by evaluating the learned model at $\boldsymbol{x}$:

$$\widehat{y} = \operatorname*{argmax}_{y \in \{1,\dots,c\}} \frac{\max\!\Big(0,\ \sum_{i=1}^{n} K(\boldsymbol{x}, \boldsymbol{x}_i)\, \big[\widetilde{\boldsymbol{\phi}}_y\big]_i\Big)}{\lambda_y \max(\boldsymbol{0}_n, \widetilde{\boldsymbol{\phi}}_y)^{\top} \boldsymbol{1}_n}. \qquad (7)$$

This clustering algorithm is called SMI-based clustering (SMIC).
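As an illustration of the procedure just described, the following is a minimal NumPy sketch of the SMIC steps, assuming the kernel matrix has already been computed: eigendecomposition, sign adjustment, rounding negative entries up to zero, and cluster assignment. Function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def smic(K, c):
    """Minimal SMIC sketch: cluster the n samples behind kernel matrix K into c groups.

    K : (n, n) symmetric kernel matrix
    c : number of clusters (assumed known)
    """
    eigvals, eigvecs = np.linalg.eigh(K)            # eigh returns ascending eigenvalues
    phi = eigvecs[:, ::-1][:, :c]                   # c leading (normalized) eigenvectors
    signs = np.sign(phi.sum(axis=0))                # resolve the sign ambiguity
    signs[signs == 0] = 1.0
    phi = phi * signs
    P = np.maximum(phi, 0.0)                        # round negative outputs up to zero
    P = P / np.maximum(P.sum(axis=0, keepdims=True), 1e-12)   # per-cluster normalization
    return P.argmax(axis=1)                         # cluster assignment for each sample
```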

SMIC may include a tuning parameter, say $t$, in the kernel function, and the clustering results of SMIC depend on the choice of $t$. A notable advantage of information-maximization clustering is that such a tuning parameter can be systematically optimized by the same information-maximization principle. More specifically, cluster assignments $\{\widehat{y}_i\}_{i=1}^{n}$ are first obtained for each candidate value of $t$. Then the quality of the clustering is measured by the SMI value estimated from the paired samples $\{(\boldsymbol{x}_i, \widehat{y}_i)\}_{i=1}^{n}$. For this purpose, the method of least-squares mutual information (LSMI) [20] is useful because LSMI was theoretically proved to be the optimal non-parametric SMI approximator [21]; see Appendix 0.A for the details of LSMI. Thus, we compute LSMI as a function of $t$, and the tuning parameter value that maximizes LSMI is selected as the most suitable one:

$$\widehat{t} = \operatorname*{argmax}_{t}\ \mathrm{LSMI}(t).$$
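The model-selection loop can be sketched as follows, assuming a kernel builder `local_scaling_kernel(X, t)` (such as the one described in Section 4) and an LSMI estimator `lsmi(X, labels)` (as reviewed in Appendix 0.A); both names are placeholders for illustration.

```python
def select_kernel_parameter(X, c, candidate_ts):
    """Pick the kernel parameter t whose SMIC clustering maximizes the LSMI score."""
    best = None
    for t in candidate_ts:
        K = local_scaling_kernel(X, t)     # assumed kernel builder (see Section 4)
        labels = smic(K, c)                # unsupervised SMIC solution for this t
        score = lsmi(X, labels)            # assumed supervised SMI estimator (Appendix 0.A)
        if best is None or score > best[0]:
            best = (score, t, labels)
    return best  # (best LSMI value, chosen t, corresponding cluster assignment)
```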

3 Semi-Supervised SMIC

In this section, we extend SMIC to the semi-supervised clustering scenario where a set of must-links and a set of cannot-links are provided. A must-link between $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ means that they are encouraged to belong to the same cluster, while a cannot-link between $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ means that they are encouraged to belong to different clusters. Let $\boldsymbol{M}$ be the must-link matrix with $M_{i,j} = 1$ if a must-link between $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ is given and $M_{i,j} = 0$ otherwise. In the same way, we define the cannot-link matrix $\boldsymbol{C}$. We assume that $M_{i,i} = 1$ for all $i$ and $C_{i,i} = 0$ for all $i$. Below, we explain how must-link constraints and cannot-link constraints are incorporated into the SMIC formulation.

3.1 Incorporating Must-Links in SMIC

When there exists a must-link between $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$, we want them to share the same class label. Let

$$\boldsymbol{p}_i = \big(p(y=1\,|\,\boldsymbol{x}_i; \boldsymbol{\alpha}),\dots,p(y=c\,|\,\boldsymbol{x}_i; \boldsymbol{\alpha})\big)^{\top}$$

be the soft-response vector for $\boldsymbol{x}_i$. Then the inner product $\boldsymbol{p}_i^{\top}\boldsymbol{p}_j$ is maximized if and only if $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ belong to the same cluster with perfect confidence, i.e., $\boldsymbol{p}_i$ and $\boldsymbol{p}_j$ are the same vector that has $1$ in one common element and $0$ elsewhere. Thus, the must-link information may be utilized by increasing $\boldsymbol{p}_i^{\top}\boldsymbol{p}_j$ whenever $M_{i,j} = 1$. We implement this idea by adding to the SMIC objective a term that rewards large $\boldsymbol{p}_i^{\top}\boldsymbol{p}_j$ for must-linked pairs, weighted by a parameter $\gamma \geq 0$ that determines how strongly we encourage the must-links to be satisfied.

Let us further utilize the following fact: if $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ belong to the same class and $\boldsymbol{x}_j$ and $\boldsymbol{x}_k$ belong to the same class, then $\boldsymbol{x}_i$ and $\boldsymbol{x}_k$ also belong to the same class (i.e., a friend's friend is a friend). Letting $\boldsymbol{M}' = \boldsymbol{M}\boldsymbol{M}$, we can incorporate this second-order must-link information into SMIC in the same way. By setting the weight of this second-order term appropriately, we obtain a simpler form of the must-link term, which will be used later.

3.2 Incorporating Cannot-Links in SMIC

We may incorporate cannot-links in SMIC in the opposite way to must-links, i.e., by pushing the inner product $\boldsymbol{p}_i^{\top}\boldsymbol{p}_j$ toward zero for cannot-linked pairs. This may be implemented by subtracting from the objective a term that grows with $\boldsymbol{p}_i^{\top}\boldsymbol{p}_j$ whenever $C_{i,j} = 1$, weighted by a parameter $\eta \geq 0$ that determines how strongly we encourage the cannot-links to be satisfied.

In binary clustering problems where $c = 2$, if $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ belong to different classes and $\boldsymbol{x}_j$ and $\boldsymbol{x}_k$ belong to different classes, then $\boldsymbol{x}_i$ and $\boldsymbol{x}_k$ actually belong to the same class (i.e., an enemy's enemy is a friend). Letting $\boldsymbol{C}' = \boldsymbol{C}\boldsymbol{C}$, we can take this second-order information into account as additional must-links in the same way as above. By setting the corresponding weight appropriately, we again obtain a simpler form, which will be used later.
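The second-order link information described above can be computed directly from the link matrices; a small sketch under the stated assumptions on $\boldsymbol{M}$ and $\boldsymbol{C}$ (the function name and the clipping are illustrative choices):

```python
import numpy as np

def augment_links(M, C, c):
    """Add second-order link information to the must-link matrix.

    M, C : (n, n) binary must-link / cannot-link matrices (M_ii = 1, C_ii = 0)
    c    : number of clusters
    """
    # A friend's friend is a friend: M @ M marks pairs joined through a shared must-link.
    M_aug = np.clip(M + (M @ M > 0), 0, 1)
    if c == 2:
        # An enemy's enemy is a friend (binary case only): treat C @ C as extra must-links.
        M_aug = np.clip(M_aug + (C @ C > 0), 0, 1)
    return M_aug
```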

3.3 Kernel Matrix Modification

Another approach to incorporating must-links and cannot-links is to modify the kernel matrix $\boldsymbol{K}$. More specifically, $K_{i,j}$ is increased if there exists a must-link between $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$, and decreased if there exists a cannot-link between them. In this paper, we assume $0 \leq K_{i,j} \leq 1$, and set $K_{i,j} = 1$ if there exists a must-link between $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ and $K_{i,j} = 0$ if there exists a cannot-link between them. Let us denote the modified kernel matrix by $\boldsymbol{K}'$:

$$K'_{i,j} = \begin{cases} 1 & \text{if } M_{i,j} = 1, \\ 0 & \text{if } C_{i,j} = 1, \\ K_{i,j} & \text{otherwise.} \end{cases}$$

This modification idea has been employed in spectral clustering [9] and demonstrated to be promising.
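In code, this modification amounts to overwriting the linked entries; a minimal sketch, assuming the kernel entries already lie in $[0, 1]$:

```python
import numpy as np

def modify_kernel(K, M, C):
    """Spectral-learning-style kernel modification (Section 3.3)."""
    K_mod = K.copy()
    K_mod[M == 1] = 1.0    # must-linked pairs become maximally similar
    K_mod[C == 1] = 0.0    # cannot-linked pairs become maximally dissimilar
    return K_mod
```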

3.4 Semi-Supervised SMIC

Finally, we combine the above three ideas: we replace the kernel matrix $\boldsymbol{K}$ in the SMIC objective with the modified kernel matrix $\boldsymbol{K}'$, and add the must-link and cannot-link terms of Sections 3.1 and 3.2, weighted by $\gamma$ and $\eta$, respectively (8). When $c \neq 2$, the enemy's-enemy term is not applicable and its weight is fixed at zero.

This is the learning criterion of semi-supervised SMIC (3SMIC), whose global maximizer can be analytically obtained, under the orthonormality of $\{\boldsymbol{\alpha}_y\}_{y=1}^{c}$, from the $c$ leading eigenvectors of the resulting matrix in Eq.(8). Then the same post-processing as in the original SMIC is applied to obtain the cluster assignments. Out-of-sample prediction is also possible in the same way as in the original SMIC.
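The computational pattern of 3SMIC can be sketched as follows: build a symmetric matrix that blends the modified kernel with the weighted link information, take its $c$ leading eigenvectors, and post-process exactly as in SMIC. The particular blend used below is only an illustrative stand-in for Eq.(8), not the authors' exact criterion.

```python
import numpy as np

def three_smic(K, M, C, c, gamma=1.0, eta=1.0):
    """Illustrative 3SMIC-style solver; the blend below stands in for Eq.(8)."""
    # Kernel modification of Section 3.3.
    K_mod = K.copy()
    K_mod[M == 1] = 1.0
    K_mod[C == 1] = 0.0
    # Assumed blend of the three ingredients: kernel agreement, must-link reward,
    # cannot-link penalty (the exact weighting of Eq.(8) is not reproduced here).
    U = K_mod @ K_mod + gamma * (K_mod @ M @ K_mod) - eta * (K_mod @ C @ K_mod)
    U = 0.5 * (U + U.T)                                     # symmetrize for a stable eigh
    _, eigvecs = np.linalg.eigh(U)
    phi = eigvecs[:, ::-1][:, :c]                           # c leading eigenvectors
    phi = phi * np.where(phi.sum(axis=0) >= 0, 1.0, -1.0)   # fix the sign ambiguity
    P = np.maximum(phi, 0.0)                                # round negatives up to zero
    P = P / np.maximum(P.sum(axis=0, keepdims=True), 1e-12)
    return P.argmax(axis=1)                                 # cluster assignments
```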

3.5 Tuning Parameter Optimization in 3SMIC

In the original SMIC, the SMI approximator LSMI is used for tuning parameter optimization (see Appendix 0.A). However, this is not suitable in semi-supervised scenarios because the 3SMIC solution is biased toward satisfying the must-links and cannot-links. Here, we propose using

$$\mathrm{LSMI}(\theta) - \mathrm{Penalty}(\theta),$$

where $\theta$ indicates the tuning parameters in 3SMIC; in the experiments, $\gamma$, $\eta$, and the parameter $t$ included in the kernel function are optimized. "Penalty" is the penalty for violating must-links and cannot-links, and its scale is the only tuning factor in the proposed algorithm.
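A sketch of this tuning-parameter search is given below; it reuses the hypothetical `local_scaling_kernel`, `three_smic`, and `lsmi` helpers from the other sketches in this paper, and `count_violations` is an illustrative way to measure how strongly the links are violated.

```python
import numpy as np

def count_violations(labels, M, C):
    """Count must-links split across clusters plus cannot-links inside a single cluster."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    must_violated = np.triu((M == 1) & ~same, k=1).sum()
    cannot_violated = np.triu((C == 1) & same, k=1).sum()
    return int(must_violated + cannot_violated)

def select_3smic_parameters(X, M, C, c, grid, penalty):
    """Grid search over (t, gamma, eta) maximizing LSMI minus the link-violation penalty."""
    best = None
    for t, gamma, eta in grid:
        K = local_scaling_kernel(X, t)               # assumed kernel builder (Section 4 sketch)
        labels = three_smic(K, M, C, c, gamma, eta)  # assumed solver (Section 3.4 sketch)
        score = lsmi(X, labels) - penalty * count_violations(labels, M, C)
        if best is None or score > best[0]:
            best = (score, (t, gamma, eta), labels)
    return best  # (best score, chosen parameters, corresponding cluster assignment)
```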

4 Experiments

In this section, we experimentally evaluate the performance of the proposed 3SMIC method in comparison with popular semi-supervised clustering methods: Spectral Learning (SL) [9] and Constrained Spectral Clustering (CSC) [24]. Both methods first perform semi-supervised spectral embedding and then k-means to obtain clustering results. However, we observed that the post k-means step is often unreliable, so for CSC we instead use simple thresholding [17] in the case of binary clustering.

In all experiments, we use a sparse version of the local-scaling kernel [25] as the similarity measure:

$$K(\boldsymbol{x}_i, \boldsymbol{x}_j) = \begin{cases} \exp\!\left(-\dfrac{\|\boldsymbol{x}_i - \boldsymbol{x}_j\|^2}{2\sigma_i \sigma_j}\right) & \text{if } \boldsymbol{x}_i \in \mathcal{N}_t(\boldsymbol{x}_j) \text{ or } \boldsymbol{x}_j \in \mathcal{N}_t(\boldsymbol{x}_i), \\ 0 & \text{otherwise,} \end{cases}$$

where $\mathcal{N}_t(\boldsymbol{x})$ denotes the set of $t$ nearest neighbors of $\boldsymbol{x}$ ($t$ is the kernel parameter), $\sigma_i$ is a local scaling factor defined as $\sigma_i = \|\boldsymbol{x}_i - \boldsymbol{x}_i^{(t)}\|$, and $\boldsymbol{x}_i^{(t)}$ is the $t$-th nearest neighbor of $\boldsymbol{x}_i$. For SL and CSC, we test several values of $t$ (note that there is no systematic way to choose the value of $t$), except for one value on the spam dataset that caused numerical problems in the eigensolver when testing SL. On the other hand, in 3SMIC, we choose the value of $t$ from a set of candidate values based on the following criterion:

the LSMI value minus a penalty for the violated links, (9)

where the penalty grows with the number of must-links and cannot-links that the obtained clustering does not satisfy. Both the LSMI value and the penalty are normalized so that they fall into the range $[0, 1]$. The $\gamma$ and $\eta$ parameters in 3SMIC are also chosen based on Eq.(9).
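For concreteness, the sparse local-scaling kernel above can be transcribed as follows (a direct but illustrative implementation; the treatment of the diagonal is a convention choice, since the formula does not specify it):

```python
import numpy as np

def local_scaling_kernel(X, t):
    """Sparse local-scaling similarity: nonzero only between t-nearest-neighbor pairs."""
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)   # squared Euclidean distances
    order = np.argsort(sq, axis=1)                             # order[i, 0] is i itself
    sigma = np.sqrt(sq[np.arange(n), order[:, t]])             # distance to the t-th neighbor
    sigma = np.maximum(sigma, 1e-12)
    nn = np.zeros((n, n), dtype=bool)                          # nn[i, j]: j is a t-NN of i
    nn[np.repeat(np.arange(n), t), order[:, 1:t + 1].ravel()] = True
    K = np.exp(-sq / (2.0 * sigma[:, None] * sigma[None, :]))
    K[~(nn | nn.T)] = 0.0        # zero out pairs that are not neighbors in either direction
    np.fill_diagonal(K, 1.0)     # keep self-similarity at exp(0) = 1 (a convention choice)
    return K
```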

We use the following real-world datasets:

parkinson: A UCI dataset consisting of voice recordings from patients suffering from Parkinson's disease and from healthy individuals. From each voice recording, 22 features are extracted.

spam: A UCI dataset consisting of e-mails categorized as spam and non-spam. 48 word-frequency features and 9 other frequency features, such as specific characters and capitalization, are extracted.

sonar: A UCI dataset consisting of sonar returns from a metal object or a rock. The features represent the energy in each frequency band.

digits500: The USPS digits dataset consisting of gray-scale images (256 pixels) of handwritten digits from 0 to 9. We randomly sampled 50 images for each digit and normalized the pixel intensities of each image.

digits5k: The same USPS digits dataset, but with 500 images for each class.

faces100: The Olivetti Face dataset consisting of gray-scale images (4096 pixels) of human faces. We randomly selected 10 persons and used 10 images for each person.

Must-links and cannot-links are generated from the true labels by randomly sampling pairs of points and setting the corresponding entry of the $\boldsymbol{M}$ or $\boldsymbol{C}$ matrix to $1$, depending on whether the labels of the chosen pair agree. CSC is excluded from digits5k and spam because it needs to solve a complete eigenvalue problem, and its computational cost was too high on these larger datasets.
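Link generation as described above can be sketched as follows (illustrative code, not the authors' experimental script):

```python
import numpy as np

def sample_links(true_labels, n_links, rng=None):
    """Randomly sample pairs and record them as must-links or cannot-links
    according to whether their true labels agree."""
    rng = np.random.default_rng() if rng is None else rng
    true_labels = np.asarray(true_labels)
    n = len(true_labels)
    M = np.eye(n, dtype=int)           # must-link matrix (diagonal fixed to 1)
    C = np.zeros((n, n), dtype=int)    # cannot-link matrix
    for _ in range(n_links):
        i, j = rng.choice(n, size=2, replace=False)
        if true_labels[i] == true_labels[j]:
            M[i, j] = M[j, i] = 1
        else:
            C[i, j] = C[j, i] = 1
    return M, C
```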

We evaluate the clustering performance by the Adjusted Rand Index (ARI) [8] between the learned and true labels. Larger ARI values mean better clustering performance, and an ARI value of zero means that the clustering result is equivalent to random assignment. We investigate the ARI score as a function of the number of links used. Averages and standard deviations of ARI over repeated runs with different random seeds are plotted in Figure 1.
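For the evaluation itself, a standard ARI implementation can be used; for example, scikit-learn provides `adjusted_rand_score` (shown here as a convenience, not as the authors' evaluation code):

```python
from sklearn.metrics import adjusted_rand_score

labels_true = [0, 0, 1, 1, 2, 2]   # ground-truth classes
labels_pred = [1, 1, 0, 0, 2, 2]   # cluster assignments (label permutation does not matter)
print(adjusted_rand_score(labels_true, labels_pred))   # prints 1.0: perfect agreement
```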

Figure 1: Experimental results.

We can separate the datasets into two groups. For digits500, digits5k, and faces100, the baseline performance without links is already reasonable, and the introduction of links further increases the performance significantly.

For parkinson, spam, and sonar, where the baseline performance without links is poor, the introduction of links quickly allows the clustering algorithms to find better solutions. In particular, only 3% of the links (relative to all possible pairs) was sufficient for parkinson to achieve reasonable performance, and surprisingly only 0.1% for spam.

As shown in Figure 1, the performance of SL depends heavily on the choice of $t$, but there is no systematic way to choose $t$ for SL. It is important to note that 3SMIC, with $t$ chosen systematically based on Eq.(9), performs as well as SL with $t$ tuned optimally in hindsight. On the other hand, CSC performs rather stably for different values of $t$, and it works particularly well for binary problems with a small number of links. However, it performs very poorly for multi-class problems; we observed that the post k-means step is highly unreliable and that poor locally optimal solutions are often produced. For the binary problems, simply performing thresholding [17] instead of using k-means was found to be useful, but there seems to be no simple alternative in the multi-class case. The performance of CSC drops on parkinson and sonar when the number of links is increased, although such a phenomenon was not observed for SL and 3SMIC.

Overall, the proposed 3SMIC method was shown to be a promising semi-supervised clustering method.

5 Conclusions

In this paper, we proposed a novel information-maximization clustering method that can utilize side information provided in the form of must-links and cannot-links. The proposed method, named semi-supervised SMI-based clustering (3SMIC), allows us to compute the clustering solution analytically. This is a strong advantage over conventional approaches such as constrained spectral clustering (CSC), which requires a post k-means step that can be unreliable and cause significant performance degradation in practice. Furthermore, 3SMIC allows us to systematically determine tuning parameters such as the kernel width based on the information-maximization principle, given the degree of belief in the provided side information. Through experiments, we demonstrated that automatically tuned 3SMIC performs as well as spectral learning (SL) tuned optimally in hindsight.

The focus of our method in this paper was to inherit the analytical treatment of the original unsupervised SMIC in semi-supervised learning scenarios. Although this analytical treatment was demonstrated to be highly useful in experiments, our future work will explore more efficient use of must-links and cannot-links.

In previous work [11], negative eigenvalues were found to contain useful information. Because the must-link and cannot-link matrices can possess negative eigenvalues, it would be interesting to investigate the role and effect of negative eigenvalues in the context of information-maximization clustering.

Acknowledgements

This work was carried out while DC was visiting Tokyo Institute of Technology under the YSEP program. GN was supported by the MEXT scholarship, and MS was supported by MEXT KAKENHI 25700022 and AOARD.

Appendix 0.A Least-Squares Mutual Information

The solution of SMIC depends on the choice of the kernel parameter included in the kernel function $K(\boldsymbol{x}, \boldsymbol{x}')$. Since SMIC was developed in the framework of SMI maximization, it is natural to determine the kernel parameter so as to maximize SMI. A direct approach is to use the SMI estimator $\widehat{\mathrm{SMI}}$ given by Eq.(6) also for kernel parameter choice. However, this direct approach is not favorable because $\widehat{\mathrm{SMI}}$ is an unsupervised SMI estimator, i.e., SMI is estimated only from the unlabeled samples $\{\boldsymbol{x}_i\}_{i=1}^{n}$. On the other hand, at the model selection stage, we have already obtained labeled samples $\{(\boldsymbol{x}_i, \widehat{y}_i)\}_{i=1}^{n}$, and thus supervised estimation of SMI is possible. For supervised SMI estimation, a non-parametric SMI estimator called least-squares mutual information (LSMI) [20] was proved to achieve the optimal convergence rate to the true SMI. Here we briefly review LSMI.

The key idea of LSMI is to learn the following density-ratio function [18],

$$r^*(\boldsymbol{x}, y) := \frac{p^*(\boldsymbol{x}, y)}{p^*(\boldsymbol{x})\, p^*(y)},$$

without going through probability density/mass estimation of $p^*(\boldsymbol{x}, y)$, $p^*(\boldsymbol{x})$, and $p^*(y)$. More specifically, let us employ the following density-ratio model:

$$r(\boldsymbol{x}, y; \boldsymbol{\theta}) := \sum_{i : \widehat{y}_i = y} \theta_i\, L(\boldsymbol{x}, \boldsymbol{x}_i), \qquad (10)$$

where $\boldsymbol{\theta} = (\theta_1,\dots,\theta_n)^{\top}$ and $L(\boldsymbol{x}, \boldsymbol{x}')$ is a kernel function. In practice, we use the Gaussian kernel

$$L(\boldsymbol{x}, \boldsymbol{x}') = \exp\!\left(-\frac{\|\boldsymbol{x} - \boldsymbol{x}'\|^2}{2\sigma^2}\right),$$

where the Gaussian width $\sigma$ is the kernel parameter. To reduce the computation cost, we limit the number of kernel bases by using randomly selected kernel centers.

The parameter $\boldsymbol{\theta}$ in the above density-ratio model is learned so that the following squared error is minimized:

$$\min_{\boldsymbol{\theta}}\ \frac{1}{2}\int \sum_{y=1}^{c} \big(r(\boldsymbol{x}, y; \boldsymbol{\theta}) - r^*(\boldsymbol{x}, y)\big)^{2}\, p^*(\boldsymbol{x})\, p^*(y)\, \mathrm{d}\boldsymbol{x}. \qquad (11)$$

Let $\boldsymbol{\theta}^{(y)}$ be the parameter vector corresponding to the kernel bases $\{L(\boldsymbol{x}, \boldsymbol{x}_i)\}_{i:\widehat{y}_i = y}$, i.e., $\boldsymbol{\theta}^{(y)}$ is the sub-vector of $\boldsymbol{\theta}$ consisting of the indices $\{i \mid \widehat{y}_i = y\}$. Let $n_y$ be the number of samples in class $y$, which is the same as the dimensionality of $\boldsymbol{\theta}^{(y)}$. Then an empirical and regularized version of the optimization problem (11) is given for each $y$ as follows:

$$\min_{\boldsymbol{\theta}^{(y)}}\ \left[\frac{1}{2}\, \boldsymbol{\theta}^{(y)\top} \widehat{\boldsymbol{H}}^{(y)} \boldsymbol{\theta}^{(y)} - \boldsymbol{\theta}^{(y)\top} \widehat{\boldsymbol{h}}^{(y)} + \frac{\delta}{2}\, \boldsymbol{\theta}^{(y)\top} \boldsymbol{\theta}^{(y)}\right], \qquad (12)$$

where $\delta\ (\geq 0)$ is the regularization parameter, and $\widehat{\boldsymbol{H}}^{(y)}$ is the $n_y \times n_y$ matrix and $\widehat{\boldsymbol{h}}^{(y)}$ is the $n_y$-dimensional vector defined as

$$\widehat{H}^{(y)}_{\ell,\ell'} := \frac{n_y}{n^2} \sum_{i=1}^{n} L\big(\boldsymbol{x}_i, \boldsymbol{x}^{(y)}_{\ell}\big)\, L\big(\boldsymbol{x}_i, \boldsymbol{x}^{(y)}_{\ell'}\big), \qquad
\widehat{h}^{(y)}_{\ell} := \frac{1}{n} \sum_{i : \widehat{y}_i = y} L\big(\boldsymbol{x}_i, \boldsymbol{x}^{(y)}_{\ell}\big),$$

where $\boldsymbol{x}^{(y)}_{\ell}$ is the $\ell$-th sample in class $y$ (which corresponds to $\theta^{(y)}_{\ell}$).

A notable advantage of LSMI is that the solution $\widehat{\boldsymbol{\theta}}^{(y)}$ can be computed analytically as

$$\widehat{\boldsymbol{\theta}}^{(y)} = \big(\widehat{\boldsymbol{H}}^{(y)} + \delta \boldsymbol{I}_{n_y}\big)^{-1}\, \widehat{\boldsymbol{h}}^{(y)}.$$

Then a density-ratio estimator is obtained analytically as follows:

$$\widehat{r}(\boldsymbol{x}, y) = \sum_{\ell=1}^{n_y} \widehat{\theta}^{(y)}_{\ell}\, L\big(\boldsymbol{x}, \boldsymbol{x}^{(y)}_{\ell}\big).$$

The accuracy of the above least-squares density-ratio estimator depends on the choice of the kernel parameter $\sigma$ included in $L(\boldsymbol{x}, \boldsymbol{x}')$ and the regularization parameter $\delta$ in Eq.(12). These tuning parameter values can be systematically optimized by cross-validation as follows. First, the samples $\mathcal{Z} = \{(\boldsymbol{x}_i, \widehat{y}_i)\}_{i=1}^{n}$ are divided into $T$ disjoint subsets $\{\mathcal{Z}_t\}_{t=1}^{T}$ of approximately the same size. Then a density-ratio estimator $\widehat{r}_t(\boldsymbol{x}, y)$ is obtained using $\mathcal{Z} \setminus \mathcal{Z}_t$ (i.e., all samples except those in $\mathcal{Z}_t$), and its out-of-sample error (which corresponds to Eq.(11) without the irrelevant constant) for the hold-out samples $\mathcal{Z}_t$ is computed as

$$\mathrm{CV}_t := \frac{1}{2|\mathcal{Z}_t|^2} \sum_{\boldsymbol{x}, y \in \mathcal{Z}_t} \widehat{r}_t(\boldsymbol{x}, y)^2 \;-\; \frac{1}{|\mathcal{Z}_t|} \sum_{(\boldsymbol{x}, y) \in \mathcal{Z}_t} \widehat{r}_t(\boldsymbol{x}, y),$$

where $\sum_{\boldsymbol{x}, y \in \mathcal{Z}_t}$ denotes the summation over all combinations of $\boldsymbol{x}$ and $y$ in $\mathcal{Z}_t$ (and thus $|\mathcal{Z}_t|^2$ terms), while $\sum_{(\boldsymbol{x}, y) \in \mathcal{Z}_t}$ denotes the summation over all pairs $(\boldsymbol{x}, y)$ in $\mathcal{Z}_t$ (and thus $|\mathcal{Z}_t|$ terms). This procedure is repeated for $t = 1,\dots,T$, and the average of the hold-out error over all $t$ is computed as

$$\mathrm{CV} := \frac{1}{T}\sum_{t=1}^{T} \mathrm{CV}_t.$$

Then the kernel parameter $\sigma$ and the regularization parameter $\delta$ that minimize the average hold-out error $\mathrm{CV}$ are chosen as the most suitable ones.

Finally, given that SMI (2) can be expressed as

$$\mathrm{SMI} = \frac{1}{2}\int \sum_{y=1}^{c} r^*(\boldsymbol{x}, y)\, p^*(\boldsymbol{x}, y)\, \mathrm{d}\boldsymbol{x} - \frac{1}{2},$$

an SMI estimator based on the above density-ratio estimator, called least-squares mutual information (LSMI), is given as follows:

$$\mathrm{LSMI} := \frac{1}{2n} \sum_{i=1}^{n} \widehat{r}(\boldsymbol{x}_i, \widehat{y}_i) - \frac{1}{2},$$

where $\widehat{r}(\boldsymbol{x}, y)$ is the density-ratio estimator obtained above.
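Putting the appendix together, here is a compact sketch of the LSMI computation for a fixed Gaussian width and regularization parameter, following the per-class ridge solution above; the cross-validation loop over the two parameters is omitted, and all names and default values are illustrative.

```python
import numpy as np

def lsmi(X, labels, sigma=1.0, delta=1e-3):
    """LSMI estimate for paired samples (X, labels) with fixed Gaussian width and regularizer."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    L = np.exp(-sq / (2.0 * sigma ** 2))             # Gaussian kernel between all samples
    r_hat = np.zeros(n)
    for y in np.unique(labels):
        idx = np.where(labels == y)[0]               # samples of class y double as kernel centers
        n_y = len(idx)
        Phi = L[:, idx]                              # (n, n_y) kernel values to class-y centers
        H = (n_y / n ** 2) * (Phi.T @ Phi)           # empirical H^(y)
        h = L[np.ix_(idx, idx)].sum(axis=0) / n      # empirical h^(y)
        theta = np.linalg.solve(H + delta * np.eye(n_y), h)   # analytic ridge solution
        r_hat[idx] = Phi[idx] @ theta                # density-ratio estimates r_hat(x_i, y_i)
    return 0.5 * r_hat.mean() - 0.5
```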

References

  • [1] F. Agakov and D. Barber (2006) Kernelized infomax clustering. In Advances in Neural Information Processing Systems 18, Y. Weiss, B. Schölkopf, and J. Platt (Eds.), Cambridge, MA, USA, pp. 17–24. Cited by: §1, §2.1.
  • [2] S. M. Ali and S. D. Silvey (1966) A general class of coefficients of divergence of one distribution from another. Journal of the Royal Statistical Society, Series B 28 (1), pp. 131–142. Cited by: §2.1.
  • [3] A. Basu, I. R. Harris, N. L. Hjort, and M. C. Jones (1998) Robust and efficient estimation by minimising a density power divergence. Biometrika 85 (3), pp. 549–559. Cited by: §2.1.
  • [4] I. Csiszár (1967) Information-type measures of difference of probability distributions and indirect observation. Studia Scientiarum Mathematicarum Hungarica 2, pp. 229–318. Cited by: §2.1.
  • [5] M. Girolami (2002) Mercer kernel-based clustering in feature space. IEEE Transactions on Neural Networks 13 (3), pp. 780–784. Cited by: §1.
  • [6] A. B. Goldberg (2007) Dissimilarity in graph-based semi-supervised classification. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS2007), pp. 155–162. Cited by: §1.
  • [7] R. Gomes, A. Krause, and P. Perona (2010) Discriminative clustering by regularized information maximization. In Advances in Neural Information Processing Systems 23, J. Lafferty, C. K. I. Williams, R. Zemel, J. Shawe-Taylor, and A. Culotta (Eds.), pp. 766–774. Cited by: §1, §2.1.
  • [8] L. Hubert and P. Arabie (1985) Comparing partitions. Journal of Classification 2 (1), pp. 193–218. Cited by: §4.
  • [9] S. D. Kamvar, D. Klein, and C. D. Manning (2003) Spectral learning. In Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI2003), pp. 561–566. Cited by: §1, §3.3, §4.
  • [10] S. Kullback and R. A. Leibler (1951) On information and sufficiency. The Annals of Mathematical Statistics 22, pp. 79–86. Cited by: §2.1.
  • [11] J. Laub and K.-R. Müller (2004-Jul.) Feature discovery in non-metric pairwise data. Journal of Machine Learning Research 5, pp. 801–818. Cited by: §5.
  • [12] J. B. MacQueen (1967) Some methods for classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, Berkeley, CA, USA, pp. 281–297. Cited by: §1.
  • [13] A. Y. Ng, M. I. Jordan, and Y. Weiss (2002) On spectral clustering: analysis and an algorithm. In Advances in Neural Information Processing Systems 14, T. G. Dietterich, S. Becker, and Z. Ghahramani (Eds.), Cambridge, MA, USA, pp. 849–856. Cited by: §1.
  • [14] K. Pearson (1900) On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Philosophical Magazine Series 5 50 (302), pp. 157–175. Cited by: §2.1.
  • [15] B. Schölkopf and A. J. Smola (2002) Learning with kernels. MIT Press, Cambridge, MA, USA. Cited by: §1.
  • [16] C. Shannon (1948) A mathematical theory of communication. Bell Systems Technical Journal 27, pp. 379–423. Cited by: §2.1.
  • [17] J. Shi and J. Malik (2000) Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (8), pp. 888–905. Cited by: §1, §4.
  • [18] M. Sugiyama, T. Suzuki, and T. Kanamori (2012) Density ratio estimation in machine learning. Cambridge University Press, Cambridge, UK. Cited by: Appendix 0.A.
  • [19] M. Sugiyama, M. Yamada, M. Kimura, and H. Hachiya (2011-Jun. 28–Jul. 2) On information-maximization clustering: Tuning parameter selection and analytic solution. In Proceedings of 28th International Conference on Machine Learning (ICML2011), L. Getoor and T. Scheffer (Eds.), Bellevue, Washington, USA, pp. 65–72. Cited by: §1, §2.1.
  • [20] T. Suzuki, M. Sugiyama, T. Kanamori, and J. Sese (2009) Mutual information estimation reveals global associations between stimuli and biological processes. BMC Bioinformatics 10 (1), pp. S52 (12 pages). Cited by: Appendix 0.A, §2.1, §2.2.
  • [21] T. Suzuki and M. Sugiyama (2013) Sufficient dimension reduction via squared-loss mutual information estimation. Neural Computation 25 (3), pp. 725–758. Cited by: §2.2.
  • [22] K. Wagstaff, C. Cardie, S. Rogers, and S. Schrödl (2001) Constrained k-means clustering with background knowledge. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML2001), pp. 577–584. Cited by: §1.
  • [23] K. Wagstaff and C. Cardie (2000) Clustering with instance-level constraints. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML2000), pp. 1103–1110. Cited by: §1.
  • [24] X. Wang and I. Davidson (2010) Flexible constrained spectral clustering. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD2010), pp. 563–572. Cited by: §1, §4.
  • [25] L. Zelnik-Manor and P. Perona (2005) Self-tuning spectral clustering. In Advances in Neural Information Processing Systems 17, L. K. Saul, Y. Weiss, and L. Bottou (Eds.), Cambridge, MA, USA, pp. 1601–1608. Cited by: §4.