A Novel Bayesian Cluster Enumeration Criterion for Unsupervised Learning

10/22/2017 ∙ by Freweyni K. Teklehaymanot, et al. ∙ Technische Universität Darmstadt

The Bayesian Information Criterion (BIC) has been widely used for estimating the number of data clusters in an observed data set for decades. The original derivation, referred to as the classic BIC, does not include information about the specific model selection problem at hand, which renders it generic. However, very little effort has been made to check its appropriateness for cluster analysis. In this paper, we derive the BIC from first principles by formulating the problem of estimating the number of clusters in a data set as maximization of the posterior probability of candidate models given observations. We provide a general BIC expression which is independent of the data distribution, provided that some mild assumptions are satisfied. This serves as an important milestone when deriving the BIC for specific data distributions. Along this line, we provide a closed-form BIC expression for multivariate Gaussian distributed observations. We show that incorporating the clustering problem during the derivation of the BIC results in an expression whose penalty term differs from the penalty term of the classic BIC. We propose a two-step cluster enumeration algorithm that utilizes a model-based unsupervised learning algorithm to partition the observed data according to each candidate model, and the proposed BIC to select the model with the optimal number of clusters. The performance of the proposed criterion is tested using synthetic and real-data examples. Simulation results show that our proposed criterion outperforms the existing BIC-based cluster enumeration methods. Our proposed criterion is particularly powerful in estimating the number of data clusters when the observations contain unbalanced and overlapping clusters.


I Introduction

Statistical model selection is concerned with choosing a model that adequately explains the observations from a family of candidate models. Many methods have been proposed in the literature, see for example [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] and the review in [26]. Model selection problems arise in various applications, such as the estimation of the number of signal components [15, 23, 24, 25, 18, 20, 19], the selection of the number of non-zero regression parameters in regression analysis [4, 22, 14, 5, 6, 11, 12, 21], and the estimation of the number of data clusters in unsupervised learning problems [27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45]. In this paper, our focus lies on the derivation of a Bayesian model selection criterion for cluster analysis.
The estimation of the number of clusters, also called cluster enumeration, has been intensively researched for decades [27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45] and a popular approach is to apply the Bayesian Information Criterion (BIC) [31, 37, 32, 33, 29, 38, 39, 40, 41, 44]. The BIC is the large-sample limit of the Bayes estimator and leads to the selection of the model that is a posteriori most probable. It is consistent if the true data generating model belongs to the family of candidate models under investigation. The BIC was originally derived by Schwarz in [8] assuming that (i) the observations are independent and identically distributed (iid), (ii) they arise from an exponential family of distributions, and (iii) the candidate models are linear in their parameters. Despite these rather restrictive assumptions, the BIC has been used in a much larger scope of model selection problems. A justification of the widespread applicability of the BIC was provided in [16] by generalizing Schwarz's derivation. In [16], the authors drop the first two assumptions made by Schwarz, given that some regularity conditions are satisfied. The BIC is a generic criterion in the sense that it does not incorporate information regarding the specific model selection problem at hand. As a result, it penalizes two structurally different models the same way if they have the same number of unknown parameters.
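For reference, the classic BIC referred to above has the following well-known form (stated here in our own notation: q is the number of free parameters of the candidate model, θ̂ their maximum likelihood estimate, X the observed data set, and N the number of observations):

BIC = ln f(X | θ̂) − (q/2) ln N,

and the candidate model with the largest BIC is selected (equivalently, one minimizes −2 ln f(X | θ̂) + q ln N). Since the penalty (q/2) ln N depends only on q and N, any two models with the same number of unknown parameters receive exactly the same penalty, which is the genericness discussed above.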
The works in [46, 15] have shown that model selection rules that penalize for model complexity have to be examined carefully before they are applied to specific model selection problems. Nevertheless, despite the widespread use of the BIC for cluster enumeration [31, 37, 32, 33, 29, 38, 39, 40, 41, 44], very little effort has been made to check the appropriateness of the original BIC formulation [16] for cluster analysis. One noticeable work in this direction was made in [38] by providing a more accurate approximation to the marginal likelihood for small sample sizes. This derivation was made specifically for mixture models assuming that they are well separated. The resulting expression contains the original BIC term plus some additional terms that are based on the mixing probability and the Fisher Information Matrix (FIM) of each partition. The method proposed in [38] requires the calculation of the FIM for each cluster in each candidate model, which is computationally very expensive and impractical in real-world applications with high-dimensional data. This greatly limits the applicability of the cluster enumeration method proposed in [38]. Other than the above-mentioned work, to the best of our knowledge, no one has thoroughly investigated the derivation of the BIC for cluster analysis using large sample approximations.
We derive a new BIC by formulating the problem of estimating the number of partitions (clusters) in an observed data set as maximization of the posterior probability of the candidate models. Under some mild assumptions, we provide a general BIC expression which is applicable to a broad class of data distributions. This serves as a starting point when deriving the BIC for specific data distributions in cluster analysis. Along this line, we simplify the general expression by imposing an assumption on the data distribution. A closed-form expression is derived assuming that the data set is distributed as a multivariate Gaussian. The derived model selection criterion is based on large sample approximations and does not require the calculation of the FIM. This renders our criterion computationally cheap and practical compared to the criterion presented in [38].

Standard clustering methods, such as the Expectation-Maximization (EM) and K-means algorithms, can be used to cluster data only when the number of clusters is supplied by the user. To mitigate this shortcoming, we propose a two-step cluster enumeration algorithm which provides a principled way of estimating the number of clusters by utilizing existing clustering algorithms. The proposed two-step algorithm uses a model-based unsupervised learning algorithm to partition the observed data into the number of clusters specified by the candidate model prior to the calculation of the proposed criterion for that particular model. We use the EM algorithm, which is a model-based unsupervised learning algorithm, because it is suitable for Gaussian mixture models and thus complies with the Gaussianity assumption made by the proposed criterion. However, the model selection criterion that we propose is more general and can be used as a wrapper around any clustering algorithm; see [47] for a survey of clustering methods.
The paper is organized as follows. Section II formulates the problem of estimating the number of clusters given data. The proposed generic Bayesian cluster enumeration criterion is introduced in Section III. Section IV presents the proposed Bayesian cluster enumeration algorithm for multivariate Gaussian data in detail. A brief description of the existing BIC-based cluster enumeration methods is given in Section V. Section VI provides a comparison of the penalty terms of different cluster enumeration criteria. A detailed performance evaluation of the proposed criterion and comparisons to existing BIC-based cluster enumeration criteria using simulated and real data sets are given in Section VII. Finally, concluding remarks are drawn and future directions are briefly discussed in Section VIII. A detailed proof is provided in Appendix A, whereas Appendix B contains the vector and matrix differentiation rules that we used in the derivations.


Notation: Lower- and upper-case boldface letters stand for column vectors and matrices, respectively; Calligraphic letters denote sets with the exception of which is used for the likelihood function; represents the set of real numbers; denotes the set of positive integers; Probability density and mass functions are denoted by and , respectively; represents a Gaussian distributed random variable with mean and covariance matrix ; stands for the estimator (or estimate) of the parameter ; denotes the natural logarithm; iid stands for independent and identically distributed; (A.) denotes an assumption, for example (A.1) stands for the first assumption; represents Landau's term which tends to a constant as the data size goes to infinity; stands for an identity matrix; denotes an all zero matrix; represents the cardinality of the set ; stands for vector or matrix transpose; denotes the determinant of the matrix ; represents the trace of a matrix; denotes the Kronecker product; refers to the stacking of the columns of an arbitrary matrix into one long column vector.

II Problem Formulation

Given a set of -dimensional vectors , let be a partition of into clusters for . The subsets (clusters) are independent, mutually exclusive, and non-empty. Let be a family of candidate models that represent a partitioning of into subsets, where . The parameters of each model are denoted by , which lies in a parameter space . Let denote the probability density function (pdf) of the observation set given the candidate model and its associated parameter matrix . Let be the discrete prior of the model over the set of candidate models and let denote a prior on the parameter vectors in given .

According to Bayes’ theorem, the joint posterior density of and given the observed data set is given by

(1)

where is the pdf of . Our objective is to choose the candidate model , where , which is most probable a posteriori assuming that

  (A.1) the true number of clusters in the observed data set satisfies the constraint .

Mathematically, this corresponds to solving

(2)

where is the posterior probability of given the observations . can be written as

(3)

where is the likelihood function. can also be determined via

(4)

instead of Eq. (2) since is a monotonic function. Hence, taking the logarithm of Eq. (3) results in

(5)

where is replaced by (a constant) since it is not a function of and thus has no effect on the maximization of over . Since the partitions (clusters) are independent, mutually exclusive, and non-empty, and can be written as

(6)
(7)

Substituting Eqs. (6) and (7) into Eq. (5) results in

(8)

Maximizing over all candidate models involves the computation of the logarithm of a multidimensional integral. Unfortunately, the solution of the multidimensional integral does not possess a closed analytical form for most practical cases. This problem can be solved using either numerical integration or approximations that allow a closed-form solution. In the context of model selection, closed-form approximations are known to provide more insight into the problem than numerical integration [15]. Following this line of argument, we use Laplace’s method of integration [15, 46, 48] and provide an asymptotic approximation to the multidimensional integral in Eq. (8).
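As a brief reminder of the technique (written in our own notation, since the original symbols are not reproduced in this copy): for a candidate model with parameter vector θ of dimension q, log-likelihood ln f(X | θ) maximized at an interior point θ̂, and prior p(θ), Laplace's method of integration yields

∫ f(X | θ) p(θ) dθ ≈ f(X | θ̂) p(θ̂) (2π)^(q/2) |Ĵ|^(−1/2),

where Ĵ denotes the negative Hessian of the log-likelihood evaluated at θ̂. Taking the logarithm of this approximation produces the familiar BIC-type terms.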

III Proposed Bayesian Cluster Enumeration Criterion

In this section, we derive a general BIC expression for cluster analysis. Under some mild assumptions, we provide a closed-form expression that is applicable to a broad class of data distributions.
In order to provide a closed-form analytic approximation to Eq. (8), we begin by approximating the multidimensional integral using Laplace's method of integration, which makes the following assumptions.

  (A.2) The log-likelihood function has first- and second-order derivatives which are continuous over the parameter space.

  (A.3) The log-likelihood function has a global maximum at its maximizing parameter value, which is an interior point of the parameter space.

  (A.4) The prior on the parameter vectors is continuously differentiable and its first-order derivatives are bounded on the parameter space.

  (A.5) The negative of the Hessian matrix of the log-likelihood,

    (9)

    is positive definite; that is, each of its eigenvalues is bounded from below by a small positive constant.

The first step in Laplace’s method of integration is to write the Taylor series expansion of and around . We begin by approximating by its second-order Taylor series expansion around as follows:

(10)

where . The first derivative of evaluated at vanishes because of assumption (A.3). With

(11)

substituting Eq. (10) into Eq. (11) and approximating by its Taylor series expansion yields

(12)

where HOT denotes higher order terms and is a Gaussian kernel with mean and covariance matrix . The second term in the first line of Eq. (12) vanishes because it simplifies to , where is a constant (see [48, p. 53] for more details). Consequently, Eq. (12) reduces to

(13)

where stands for the determinant, given that . Using Eq. (13), we are thus able to provide an asymptotic approximation to the multidimensional integral in Eq. (8). Now, substituting Eq. (13) into Eq. (8), we arrive at

(14)

where

(15)

is the Fisher Information Matrix (FIM) of data from the th partition.
So far in the derivation, we have made no distributional assumption on the data set except that the log-likelihood function and the prior on the parameter vectors should satisfy some mild conditions under each candidate model. Hence, Eq. (14) is a general expression of the posterior probability of a candidate model for a general class of data distributions that satisfy assumptions (A.2)-(A.5). The BIC is concerned with the computation of the posterior probability of candidate models and thus Eq. (14) can also be written as

(16)

After calculating for each candidate model , the number of clusters in is estimated as

(17)

However, calculating using Eq. (16) is a computationally expensive task as it requires the estimation of the FIM, , for each cluster in the candidate model . Our objective is to find an asymptotic approximation for in order to simplify the computation of . We solve this problem by imposing specific assumptions on the distribution of the data set . In the next section, we provide an asymptotic approximation for assuming that contains iid multivariate Gaussian data points.
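The asymptotic simplification sought here follows a standard argument, sketched in our own notation: for N_m iid data vectors in the m-th cluster, the FIM grows linearly with the sample size, Ĵ_m ≈ N_m J̄_m, where J̄_m is the FIM of a single observation. Hence

ln |Ĵ_m| ≈ q ln N_m + ln |J̄_m| = q ln N_m + O(1),

with q denoting the number of unknown parameters per cluster. Dropping the O(1) term removes the explicit FIM computation and is what makes the per-cluster sample sizes N_m appear in the penalty term of the resulting criterion.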

IV Proposed Bayesian Cluster Enumeration Algorithm for Multivariate Gaussian Data

We propose a two-step approach to estimate the number of partitions (clusters) in and provide an estimate of cluster parameters, such as cluster centroids and covariance matrices, in an unsupervised learning framework. The proposed approach consists of a model-based clustering algorithm, which clusters the data set according to each candidate model , and a Bayesian cluster enumeration criterion that selects the model which is a posteriori most probable.

IV-A Proposed Bayesian Cluster Enumeration Criterion for Multivariate Gaussian Data

Let denote the observed data set which can be partitioned into clusters . Each cluster contains data vectors that are realizations of iid Gaussian random variables , where and represent the centroid and the covariance matrix of the th cluster, respectively. Further, let denote a set of Gaussian candidate models and let there be a clustering algorithm that partitions into independent, mutually exclusive, and non-empty subsets (clusters) by providing parameter estimates for each candidate model , where and . Assume that (A.1)-(A.7) are satisfied.

Theorem 1.

The posterior probability of given can be asymptotically approximated as

(18)

where is the cardinality of the subset and it satisfies . The term sums the logarithms of the number of data vectors in each cluster .

Proof.

Proving Theorem 1 requires finding an asymptotic approximation to in Eq. (16) and, based on this approximation, deriving an expression for . A detailed proof is given in Appendix A. ∎

Once the Bayesian Information Criterion, , is computed for each candidate model , the number of partitions (clusters) in is estimated as

(19)
Remark.

The proposed criterion, , and the original BIC as derived in [8, 16] differ in terms of their penalty terms. A detailed discussion is provided in Section VI.

The first step in calculating for each model is the partitioning of the data set into clusters and the estimation of the associated cluster parameters using an unsupervised learning algorithm. Since the approximations in are based on maximizing the likelihood function of Gaussian distributed random variables, we use a clustering algorithm that is based on the maximum likelihood principle. Accordingly, a natural choice is the EM algorithm for Gaussian mixture models.

IV-B The Expectation-Maximization (EM) Algorithm for Gaussian Mixture Models

The EM algorithm finds maximum likelihood solutions for models with latent variables [49]. In our case, the latent variables are the cluster memberships of the data vectors in , given that the -component Gaussian mixture distribution of a data vector can be written as

(20)

where represents the -variate Gaussian pdf and is the mixing coefficient of the th cluster. The goal of the EM algorithm is to maximize the log-likelihood function of the data set with respect to the parameters of interest as follows:

(21)

where and . Maximizing Eq. (21) with respect to the elements of results in coupled equations. The EM algorithm solves these coupled equations using a two-step iterative procedure. The first step (E step) evaluates , which is an estimate of the probability that data vector belongs to the th cluster at the th iteration, for and . is calculated as follows:

(22)

where and represent the centroid and covariance matrix estimates, respectively, of the th cluster at the previous iteration (). The second step (M step) re-estimates the cluster parameters using the current values of as follows:

(23)
(24)
(25)

The E and M steps are performed iteratively until either the cluster parameter estimates or the log-likelihood estimate converges.
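For concreteness, the E and M steps described above can be sketched as follows. This is a minimal NumPy implementation written by us for illustration (it is not the authors' MATLAB code); variable names are our own and numerical safeguards are kept to a minimum.

import numpy as np

def em_gmm(X, K, n_iter=100, tol=1e-6, seed=None):
    """Minimal EM for a K-component Gaussian mixture (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    N, r = X.shape
    # Crude initialization: random data points as centroids, shared covariance, equal weights.
    mu = X[rng.choice(N, K, replace=False)]                   # (K, r) centroids
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(r)] * K)    # (K, r, r) covariances
    tau = np.full(K, 1.0 / K)                                 # mixing coefficients
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E step: responsibilities gamma[n, k] = P(cluster k | x_n).
        log_pdf = np.empty((N, K))
        for k in range(K):
            diff = X - mu[k]
            _, logdet = np.linalg.slogdet(Sigma[k])
            maha = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(Sigma[k]), diff)
            log_pdf[:, k] = np.log(tau[k]) - 0.5 * (r * np.log(2 * np.pi) + logdet + maha)
        log_lik = np.logaddexp.reduce(log_pdf, axis=1)        # per-sample log-likelihood
        gamma = np.exp(log_pdf - log_lik[:, None])
        # M step: re-estimate mixing coefficients, centroids, and covariance matrices.
        Nk = gamma.sum(axis=0)                                # effective cluster sizes
        tau = Nk / N
        mu = (gamma.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (gamma[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(r)
        # Convergence check on the total log-likelihood.
        if np.abs(log_lik.sum() - prev_ll) < tol:
            break
        prev_ll = log_lik.sum()
    return mu, Sigma, tau, gamma

In contrast to the random initialization used in this sketch, Algorithm 1 initializes the centroids with K-means++ (Step 1.1), which is considerably more robust in practice.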
A summary of the estimation of the number of clusters in an observed data set using the proposed two-step approach is provided in Algorithm 1. Note that the computational complexity of is only , which can easily be ignored during the run-time analysis of the proposed approach. Hence, since the EM algorithm is run for all candidate models in , the computational complexity of the proposed two-step approach is , where is a fixed stopping threshold of the EM algorithm.

Inputs: data set ; set of candidate models
for  do
     Step 1: Model-based clustering
     Step 1.1: The EM algorithm
     for  do
          Initialize the cluster centroid using K-means++ [50, 51]
          Initialize the cluster covariance matrix and mixing coefficient estimates
     end for
     for  do
         E step:
         for  do
              for  do
                  Calculate using Eq. (22)
              end for
         end for
         M step:
         for  do
              Determine , , and via Eqs. (23)-(25)
         end for
         Check for convergence of either or
         if convergence condition is satisfied then
              Exit for loop
         end if
     end for
     Step 1.2: Hard clustering
     for each data vector do
          Assign the data vector to the cluster with the largest membership probability
     end for
     for each cluster do
          Determine the number of data vectors assigned to the cluster
     end for
     Step 2: Calculate via Eq. (18)
end for
Estimate the number of clusters, , in via Eq. (19)
Algorithm 1 Proposed two-step cluster enumeration approach
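A minimal sketch of the two-step procedure of Algorithm 1 is given below, wrapping scikit-learn's EM implementation (GaussianMixture). The scoring function uses a per-cluster penalty of the form (q/2)·Σ ln N_m, which matches the verbal description of the proposed penalty in Sections IV and VI, but it is our paraphrase rather than a verbatim transcription of Eq. (18); the authors' MATLAB code linked in Section VII implements the exact criterion.

import numpy as np
from sklearn.mixture import GaussianMixture

def enumeration_score(X, labels, gmm):
    """BIC-type score with a per-cluster ln(N_m) penalty (hedged sketch, not Eq. (18) verbatim)."""
    N, r = X.shape
    q = r + r * (r + 1) // 2                      # parameters per cluster: mean + covariance
    data_fidelity = gmm.score(X) * N              # maximized mixture log-likelihood
    counts = np.bincount(labels, minlength=gmm.n_components)
    penalty = 0.5 * q * np.sum(np.log(np.maximum(counts, 1)))
    return data_fidelity - penalty

def estimate_num_clusters(X, L_min=1, L_max=10, seed=0):
    """Two-step cluster enumeration: fit EM for every candidate model, keep the best score."""
    scores = {}
    for L in range(L_min, L_max + 1):
        gmm = GaussianMixture(n_components=L, covariance_type='full',
                              n_init=5, random_state=seed).fit(X)
        labels = gmm.predict(X)                   # Step 1.2: hard clustering
        scores[L] = enumeration_score(X, labels, gmm)   # Step 2: evaluate the criterion
    return max(scores, key=scores.get), scores

The data-fidelity term grows with the number of clusters L, while the per-cluster penalty discourages overfitting, so the maximizing L serves as the cluster number estimate.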

V Existing BIC-Based Cluster Enumeration Methods

As discussed in Section I, existing cluster enumeration algorithms that are based on the original BIC use the criterion as it is known from parameter estimation tasks without questioning its validity for cluster analysis. Nevertheless, since these criteria have been widely used, we briefly review them to provide a comparison to the proposed criterion given by Eq. (18).
The original BIC, as derived in [16], evaluated at a candidate model is written as

(26)

where denotes the likelihood function and . In Eq. (26), the first term is the data-fidelity term, while the second is the penalty term. Under the assumption that the observed data is Gaussian distributed, the data-fidelity term of the original BIC and that of our proposed criterion are exactly the same. The only difference between the two is the penalty term. Hence, we use a similar procedure as in Algorithm 1 to implement the original BIC as a wrapper around the EM algorithm.

Moreover, the original BIC is commonly used as a wrapper around K-means by assuming that the data points that belong to each cluster are iid Gaussian and that all clusters are spherical with an identical variance, i.e., the covariance matrix of each cluster equals the common cluster variance times the identity matrix [29, 32, 33]. Under these assumptions, the original BIC is given by

(27)

where denotes the original BIC of the candidate model derived under the assumptions stated above and is the number of estimated parameters in . Ignoring the model independent terms, can be written as

(28)

where

(29)

is the maximum likelihood estimator of the common variance.
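As a worked detail (our own notation, since Eq. (29) is not reproduced in this copy): with r-dimensional data, hard cluster assignments X_1, ..., X_l, and centroid estimates μ̂_1, ..., μ̂_l, the maximum likelihood estimator of the common variance pools the squared distances over all clusters,

σ̂² = (1 / (N r)) Σ_{m=1}^{l} Σ_{x_n ∈ X_m} ||x_n − μ̂_m||²,

since each of the N r scalar observations contributes one term to the spherical Gaussian likelihood. Some implementations replace N by N − l in the denominator to obtain an approximately unbiased variant.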
In our experiments, we implement this criterion as a wrapper around the K-means++ algorithm [51]. The implementation of the proposed BIC as a wrapper around the K-means++ algorithm is given by Eq. (37).

VI Comparison of the Penalty Terms of Different Bayesian Cluster Enumeration Criteria

Comparing Eqs. (18), (26), and (27), we notice that they have a common form [46, 11], that is,

(30)

but with different penalty terms, where

(31)
(32)
(33)
Remark.

The criteria in Eqs. (26) and (27) carry information about the structure of the data only in their data-fidelity term, which is the first term in Eq. (30). On the other hand, as shown in Eq. (18), both the data-fidelity and penalty terms of our proposed criterion contain information about the structure of the data.

The penalty terms of and depend linearly on , while the penalty term of our proposed criterion depends on in a non-linear manner. Comparing the penalty terms in Eqs. (31)-(33), has the weakest penalty term. In the asymptotic regime, the penalty terms of and coincide. However, in the finite-sample regime, for values of , the penalty term of is stronger than the penalty term of . Note that the penalty term of our proposed criterion depends on the number of data vectors in each cluster of each candidate model, while the penalty term of the original BIC depends only on the total number of data vectors in the data set. Hence, the penalty term of our proposed criterion might be sensitive to the initialization of the cluster parameters and the associated number of data vectors per cluster.

VII Experimental Evaluation

In this section, we compare the cluster enumeration performance of our proposed criterion, given by Eq. (18), with the cluster enumeration methods discussed in Section V, namely and given by Eqs. (26) and (28), respectively, using synthetic and real data sets. We first describe the performance measures used for comparing the different cluster enumeration criteria. Then, the numerical experiments performed on synthetic data sets and the results obtained from real data sets are discussed in detail. For all simulations, we assume that a family of candidate models is given with and , where is the true number of clusters in the data set . All simulation results are an average of Monte Carlo experiments unless stated otherwise. The compared cluster enumeration criteria are based on the same initial cluster parameters in each Monte Carlo experiment, which allows for a fair comparison.
The MATLAB© code that implements the proposed two-step algorithm and the Bayesian cluster enumeration methods discussed in Section V is available at:
https://github.com/FreTekle/Bayesian-Cluster-Enumeration

VII-A Performance Measures

The empirical probability of detection , the empirical probability of underestimation , the empirical probability of selection, and the Mean Absolute Error (MAE) are used as performance measures. The empirical probability of detection is defined as the probability with which the correct number of clusters is selected and it is calculated as

(34)

where MC is the total number of Monte Carlo experiments, is the estimated number of clusters in the th Monte Carlo experiment, and is an indicator function which is defined as

(35)

The empirical probability of underestimation is the probability that . The empirical probability of selection is defined as the probability with which the number of clusters specified by each candidate model in is selected. The last performance measure, which is the Mean Absolute Error (MAE), is computed as

(36)
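These performance measures are straightforward to compute from the Monte Carlo outputs; a small helper (ours, not part of the paper's published code) could look as follows, where K_true denotes the true number of clusters and K_hat collects the estimates from the individual Monte Carlo runs:

import numpy as np

def performance_measures(K_true, K_hat):
    """Empirical detection/underestimation/overestimation probabilities and MAE over MC runs."""
    K_hat = np.asarray(K_hat)
    p_det = np.mean(K_hat == K_true)          # empirical probability of detection
    p_under = np.mean(K_hat < K_true)         # empirical probability of underestimation
    p_over = np.mean(K_hat > K_true)          # empirical probability of overestimation
    mae = np.mean(np.abs(K_hat - K_true))     # mean absolute error
    return {'p_det': p_det, 'p_under': p_under, 'p_over': p_over, 'mae': mae}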

VII-B Numerical Experiments

VII-B1 Simulation Setup

We consider two synthetic data sets, namely Data-1 and Data-2, in our simulations. Data-1, shown in Fig. 1(a), contains realizations of the random variables , where , with cluster centroids and covariance matrices

The first cluster is linearly separable from the others, while the remaining clusters are overlapping. The number of data vectors per cluster is specified as , , and , where is a constant.
The second data set, Data-2, contains realizations of the random variables , where , with ten cluster centroids and corresponding covariance matrices

where . As depicted in Fig. 1(b), Data-2 contains eight identical spherical clusters and two elliptical clusters. Two of the clusters overlap, while the remaining clusters are well separated. All clusters in this data set have the same number of data vectors.

Fig. 1: Synthetic data sets. (a) Data-1. (b) Data-2.
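The exact centroids and covariance matrices of Data-1 and Data-2 are not recoverable from this copy; the snippet below merely illustrates how an unbalanced, partially overlapping Gaussian data set in the spirit of Data-1 can be generated (all numerical values are placeholders of our own choosing, not the paper's parameters):

import numpy as np

def make_unbalanced_gaussians(scale=10, seed=0):
    """Illustrative 2-D data set with one separated cluster and two overlapping clusters."""
    rng = np.random.default_rng(seed)
    # Placeholder centroids and covariances (NOT the paper's Data-1 values).
    mus = [np.array([0.0, 0.0]), np.array([6.0, 6.0]), np.array([7.5, 6.5])]
    covs = [np.diag([2.0, 0.5]), np.eye(2), 0.5 * np.eye(2)]
    sizes = [50 * scale, 20 * scale, 10 * scale]              # unbalanced cluster sizes
    X = np.vstack([rng.multivariate_normal(m, c, n) for m, c, n in zip(mus, covs, sizes)])
    y = np.repeat(np.arange(3), sizes)
    return X, y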

VII-B2 Simulation Results

Data-1 is particularly challenging for cluster enumeration criteria because it has not only overlapping but also unbalanced clusters. Cluster unbalance refers to the fact that different clusters contain different numbers of data vectors, which might result in some clusters dominating the others. The impact of cluster overlap and unbalance on and MAE is displayed in Table I. This table shows and MAE as a function of , where is allowed to take values from the set . The cluster enumeration performance of is lower than that of the other methods because it is designed for spherical clusters with identical variance, while Data-1 has one elliptical and two spherical clusters with different covariance matrices. Our proposed criterion performs best in terms of and MAE for all values of . As increases, which corresponds to an increase in the number of data vectors in the data set, the cluster enumeration performance of and greatly improves, while the performance of deteriorates because of an increase in overestimation. Fig. 2 depicts the total criterion (BIC) and the penalty term of the different Bayesian cluster enumeration criteria as a function of the number of clusters specified by the candidate models for a fixed value of . The BIC plot in Fig. 2(a) is the result of one Monte Carlo run. It shows that and have a maximum at the true number of clusters, while overestimates the number of clusters. As shown in Fig. 2(b), our proposed criterion has the second strongest penalty term. Note that the penalty term of our proposed criterion shows a curvature at the true number of clusters, while the penalty terms of and are uninformative on their own.

Table I: Empirical probability of detection and MAE for Data-1 as a function of the constant that scales the number of data vectors per cluster.