Online and Distributed Learning of Gaussian Mixture Models by Bayesian Moment Matching

09/19/2016, by Priyank Jaini et al.

The Gaussian mixture model is a classic technique for clustering and data modeling that is used in numerous applications. With the rise of big data, there is a need for parameter estimation techniques that can handle streaming data and distribute the computation over several processors. While online variants of the Expectation Maximization (EM) algorithm exist, their data efficiency is reduced by a stochastic approximation of the E-step and it is not clear how to distribute the computation over multiple processors. We propose a Bayesian learning technique that lends itself naturally to online and distributed computation. Since the Bayesian posterior is not tractable, we project it onto a family of tractable distributions after each observation by matching a set of sufficient moments. This Bayesian moment matching technique compares favorably to online EM in terms of time and accuracy on a set of data modeling benchmarks.


1 Introduction

Gaussian mixture models (GMMs) (Murphy, 2012) are simple yet expressive distributions that are often used for soft clustering and, more generally, data modeling. Traditionally, the parameters of GMMs are estimated by batch Expectation Maximization (EM) (Dempster et al., 1977). However, as datasets grow too large to fit in memory or arrive as continuous streams, several online variants of EM have been proposed (Titterington, 1984; Neal & Hinton, 1998; Cappé & Moulines, 2009; Liang & Klein, 2009). They process the data in one sweep by updating sufficient statistics in constant time after each observation; however, this update is approximate and stochastic, which slows down the learning rate. Furthermore, it is not clear how to distribute the computation over several processors given the sequential nature of those updates.

We propose a new Bayesian learning technique that lends itself naturally to online and distributed computation. As pointed out by Broderick et al. (2013), Bayes' theorem can be applied after each observation to update the posterior in an online fashion. Moreover, a dataset can be partitioned into subsets that are processed by different processors to compute partial posteriors, which can then be combined into a single exact posterior by taking the product of the partial posteriors divided by their respective priors.

The main issue with Bayesian learning is that the posterior may not be tractable to compute and represent. If we start with a prior that consists of the product of a Dirichlet with several Normal-Wisharts (one per Gaussian component) over the parameters of the GMM, the posterior becomes a mixture of products of Dirichlets with Normal-Wisharts in which the number of mixture components grows exponentially with the number of observations. To keep the computation tractable, we project the posterior onto a single product of a Dirichlet with Normal-Wisharts by matching a set of moments of the approximate posterior with the moments of the exact posterior. While moment matching is a popular frequentist technique that estimates the parameters of a model by matching the moments of the empirical distribution of a dataset (Anandkumar et al., 2012), here we use moment matching in a Bayesian setting to project a complex posterior onto a simpler family of distributions. This type of Bayesian moment matching has been used, for instance, in Expectation Propagation (Minka & Lafferty, 2002).

Despite the approximation induced by the moment matching projection, the approach compares favorably to online EM in terms of time and accuracy. Online EM requires several passes through the data before converging, so when it is restricted to a single pass (the streaming setting) it necessarily incurs a loss in accuracy; Bayesian moment matching (BMM), in contrast, converges in a single pass. The approximation due to moment matching also induces a loss in accuracy, but the empirical results suggest that it is smaller than the loss incurred by online EM. Finally, BMM lends itself naturally to distributed computation, which is not the case for online EM.

The rest of the paper is structured as follows. Section 2 discusses the problem statement and the motivation for an online Bayesian moment matching algorithm. In Section 3, we give a brief background on the method of moments and describe the family of distributions (Dirichlet, Normal-Wishart and Normal-Gamma) used as priors in this work. We also review online EM, an alternative online algorithm for parameter estimation of Gaussian mixture models. Section 4 presents the Bayesian moment matching algorithm for approximate Bayesian learning. Section 5 demonstrates the effectiveness of online BMM and online distributed moment matching over online EM through empirical results on both synthetic and real data sets. Finally, Section 6 concludes the paper and discusses future work.

2 Motivation

Given a set of data instances, where each data instance is assumed to be sampled independently and identically from a Gaussian mixture model, we want to estimate the parameters of the Gaussian mixture model in an online setting.

More precisely, let $\mathbf{X} = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ be a set of $n$ data points, where each data point is sampled from a Gaussian mixture model with $M$ components. Let the parameters of this underlying Gaussian mixture model be denoted by $\Theta = \{\theta_1, \theta_2, \ldots, \theta_M\}$, where each $\theta_j = (w_j, \boldsymbol{\mu}_j, \Sigma_j)$ is a tuple consisting of the weight $w_j$, the mean $\boldsymbol{\mu}_j$ and the covariance matrix $\Sigma_j$ of the $j^{th}$ component of the Gaussian mixture model. This can be expressed as

$$\Pr(\mathbf{x} \mid \Theta) = \sum_{j=1}^{M} w_j \, \mathcal{N}_d(\mathbf{x}; \boldsymbol{\mu}_j, \Sigma_j),$$

where $\mathcal{N}_d$ denotes a $d$-dimensional Gaussian distribution and $\sum_{j=1}^{M} w_j = 1$. The aim is to find an estimate $\hat{\Theta}$ of $\Theta$ in an online manner given the data $\mathbf{X}$.

One way to find this estimate is to compute the posterior over $\Theta$ by using Bayes' theorem recursively:

$$\Pr(\Theta \mid \mathbf{x}_{1:t}) = \frac{1}{k_t} \Pr(\Theta \mid \mathbf{x}_{1:t-1}) \sum_{j=1}^{M} w_j \, \mathcal{N}_d(\mathbf{x}_t; \boldsymbol{\mu}_j, \Sigma_j), \qquad (1)$$

where $k_t = \Pr(\mathbf{x}_t \mid \mathbf{x}_{1:t-1})$ is a normalization constant and the recursion starts from a prior $\Pr(\Theta)$ over the parameter set $\Theta$.

However, a major limitation of the approach above is that with each new data point $\mathbf{x}_t$, the number of terms in the posterior given by Eq. 1 increases by a factor of $M$ due to the summation over the components. Hence, after $N$ data points, the posterior consists of a mixture of $M^N$ terms, which is intractable. In this paper, we describe a Bayesian moment matching technique that circumvents this problem.

The Bayesian Moment Matching (BMM) algorithm approximates the posterior obtained after each observation in a manner that prevents the exponential growth of mixture terms in Eq. 1. This is achieved by approximating the exact posterior $\Pr(\Theta \mid \mathbf{x}_{1:t})$ with another distribution $\tilde{P}(\Theta)$ that belongs to the same family of distributions as the prior, by matching a set of sufficient moments $S$ of $\tilde{P}(\Theta)$ with those of $\Pr(\Theta \mid \mathbf{x}_{1:t})$. We will make this idea more concrete in the following sections.
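To get a sense of the scale of this blow-up, the short sketch below simply counts the number of Dirichlet/Normal-Wishart product terms in the exact posterior of Eq. 1; the values of $M$ and $N$ are arbitrary choices for illustration.

```python
# Illustration only: each Bayes update in Eq. 1 multiplies the number of
# product terms in the exact posterior by M (one term per mixture component),
# so after N observations there are M**N terms.
M, N = 5, 100                      # hypothetical: 5 components, 100 observations
terms = M ** N
print(f"exact posterior after {N} points: {terms:.3e} terms")   # ~7.9e+69
```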

3 Background

3.1 Moment Matching

A moment is a quantitative measure of the shape of a distribution or a set of points. Let $f$ be a probability distribution over a $d$-dimensional random variable $\mathbf{x} = (x_1, \ldots, x_d)$. The $j^{th}$-order moments of $f$ are the expectations $E_f[g(\mathbf{x})]$, where $g(\mathbf{x}) = \prod_{i=1}^{d} x_i^{n_i}$ is a monomial of $\mathbf{x}$ of degree $j = \sum_i n_i$.

For some distributions $f$, there exists a set of monomials $S(f)$ such that knowing the expectations $E_f[g(\mathbf{x})]$ for all $g \in S(f)$ allows us to calculate the parameters of $f$. For example, for a Gaussian distribution $f = \mathcal{N}(\mu, \sigma^2)$, a set of sufficient moments is $S(f) = \{x, x^2\}$: knowing $E[x]$ and $E[x^2]$ allows us to estimate the parameters $\mu$ and $\sigma^2$ that characterize the distribution. We use this concept, called the method of moments, in our algorithm.
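As a concrete illustration of the method of moments for this Gaussian example, the sketch below recovers $\mu$ and $\sigma^2$ from the empirical first and second moments of synthetic data; the data-generating values are arbitrary.

```python
import numpy as np

# Minimal sketch of the method of moments for a univariate Gaussian N(mu, sigma^2):
# the sufficient moments are E[x] and E[x^2], since mu = E[x] and
# sigma^2 = E[x^2] - E[x]^2.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=100_000)   # synthetic data (assumed values)

m1 = x.mean()          # empirical E[x]
m2 = (x ** 2).mean()   # empirical E[x^2]

mu_hat = m1
sigma2_hat = m2 - m1 ** 2
print(mu_hat, sigma2_hat)   # close to 2.0 and 2.25
```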

The method of moments is a popular frequentist technique used to estimate the parameters of a probability distribution from the empirical moments of a dataset. It has previously been used to estimate the parameters of latent Dirichlet allocation, mixture models and hidden Markov models (Anandkumar et al., 2012). Moment matching can also be used in a Bayesian setting by computing a subset of the moments of the intractable posterior distribution given by Eq. 1. Subsequently, a tractable distribution that matches this set of moments can be selected from a chosen family as an approximation of the intractable posterior. For Gaussian mixture models, we use a Dirichlet as the prior over the weights of the mixture and a Normal-Wishart distribution as the prior over each Gaussian component. We next give details about the Dirichlet and Normal-Wishart distributions, including their sets of sufficient moments.

3.2 Family of Prior Distributions

In Bayesian moment matching, we project the posterior onto a tractable family of distributions by matching a set of sufficient moments. To ensure scalability, it is desirable to start with a family of distributions that is conjugate to the likelihood, i.e., to the multinomial distribution (for the set of weights) and to the Gaussian distribution with unknown mean and covariance matrix. The product of a Dirichlet distribution over the weights with a Normal-Wishart distribution over the mean and precision matrix of each Gaussian component ensures that the posterior is a mixture of products of Dirichlet and Normal-Wishart distributions. Subsequently, we can approximate this mixture in the posterior with a single product of Dirichlet and Normal-Wishart distributions by using moment matching. We explain this in greater detail in Section 4, but first we briefly describe the Dirichlet and Normal-Wishart distributions along with sets of sufficient moments for them.

3.2.1 Dirichlet Distribution

The Dirichlet distribution is a family of continuous multivariate probability distributions over vectors whose entries lie in the interval [0,1] and sum to one. It is the conjugate prior of the multinomial distribution and hence a natural choice of prior over the set of weights $\mathbf{w} = (w_1, \ldots, w_M)$ of a Gaussian mixture model. A set of sufficient moments for the Dirichlet distribution is $S = \{w_i, w_i^2 : 1 \leq i \leq M\}$. Let $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_M)$ be the parameters of the Dirichlet distribution over $\mathbf{w}$; then

$$E[w_i] = \frac{\alpha_i}{\sum_{j} \alpha_j}, \qquad E[w_i^2] = \frac{\alpha_i (\alpha_i + 1)}{\left(\sum_{j} \alpha_j\right)\left(\sum_{j} \alpha_j + 1\right)}. \qquad (2)$$
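The following sketch evaluates these two moments for an arbitrary choice of $\boldsymbol{\alpha}$ and then inverts them, which is exactly the computation needed when projecting a posterior back onto a Dirichlet; the parameter values are hypothetical.

```python
import numpy as np

# Sketch: sufficient moments of a Dirichlet(alpha) and their inversion (Eq. 2).
alpha = np.array([2.0, 5.0, 3.0])             # hypothetical parameters
a0 = alpha.sum()

m1 = alpha / a0                               # E[w_i]
m2 = alpha * (alpha + 1) / (a0 * (a0 + 1))    # E[w_i^2]

# Recover the parameters from the moments: solving the two equations for
# component 0 gives the precision a0, then alpha_i = E[w_i] * a0 for every i.
a0_hat = (m1[0] - m2[0]) / (m2[0] - m1[0] ** 2)
alpha_hat = m1 * a0_hat
print(alpha_hat)                              # ~[2., 5., 3.]
```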

3.2.2 Normal-Wishart Prior

The Normal-Wishart distribution is a multivariate distribution with four parameters. It is the conjugate prior of a multivariate Gaussian distribution with unknown mean and covariance matrix (Degroot, 1970). This makes a Normal-Wishart distribution a natural choice for the prior over the unknown mean and precision matrix for our case.

Let $\boldsymbol{\mu}$ be a $d$-dimensional random vector and $\Lambda$ be a $d \times d$ symmetric positive definite random matrix. A Normal-Wishart distribution over $(\boldsymbol{\mu}, \Lambda)$ with parameters $(\boldsymbol{\mu}_0, \kappa, \mathbf{W}, \nu)$ is such that $\boldsymbol{\mu} \mid \Lambda \sim \mathcal{N}\big(\boldsymbol{\mu}_0, (\kappa \Lambda)^{-1}\big)$, where $\kappa > 0$ is real, and $\Lambda$ has a Wishart distribution $\Lambda \sim \mathcal{W}(\mathbf{W}, \nu)$, where $\mathbf{W}$ is a positive definite scale matrix and $\nu > d - 1$ is real. The marginal distribution of $\boldsymbol{\mu}$ is a multivariate t-distribution, i.e., $\boldsymbol{\mu} \sim t_{\nu - d + 1}\big(\boldsymbol{\mu}_0, \mathbf{W}^{-1} / (\kappa(\nu - d + 1))\big)$. The univariate equivalent of the Normal-Wishart distribution is the Normal-Gamma distribution.

In Section 3.1, we defined $S$, a set of sufficient moments that characterizes a distribution. In the case of the Normal-Wishart distribution, we require at least four different moments to estimate the four parameters that characterize it. A set of sufficient moments in this case is $S = \{\boldsymbol{\mu}, \boldsymbol{\mu}\boldsymbol{\mu}^T, \Lambda_{ij}, \Lambda_{ij}^2\}$, where $\Lambda_{ij}$ is the $(i,j)^{th}$ element of the matrix $\Lambda$. The expressions for these sufficient moments are given by

$$E[\boldsymbol{\mu}] = \boldsymbol{\mu}_0, \qquad E\big[(\boldsymbol{\mu} - \boldsymbol{\mu}_0)(\boldsymbol{\mu} - \boldsymbol{\mu}_0)^T\big] = \frac{\mathbf{W}^{-1}}{\kappa(\nu - d - 1)},$$
$$E[\Lambda] = \nu \mathbf{W}, \qquad \mathrm{Var}(\Lambda_{ij}) = \nu\big(W_{ij}^2 + W_{ii} W_{jj}\big). \qquad (3)$$
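The snippet below evaluates these moments numerically for an arbitrary choice of $(\boldsymbol{\mu}_0, \kappa, \mathbf{W}, \nu)$, using the standard Wishart and multivariate-t identities stated above; the parameter values are hypothetical.

```python
import numpy as np

# Sketch: sufficient moments of a Normal-Wishart(mu0, kappa, W, nu) prior,
# computed from the identities in Eq. 3 (E[Lambda] = nu*W for the Wishart part,
# multivariate-t marginal for mu). Parameter values are hypothetical.
d = 2
mu0 = np.zeros(d)
kappa, nu = 2.0, d + 3.0
W = np.eye(d)

E_mu = mu0                                            # E[mu]
Cov_mu = np.linalg.inv(W) / (kappa * (nu - d - 1))    # E[(mu - mu0)(mu - mu0)^T]
E_Lambda = nu * W                                     # E[Lambda]
Var_Lambda = nu * (W ** 2 + np.outer(np.diag(W), np.diag(W)))  # Var(Lambda_ij)
print(Cov_mu, E_Lambda, Var_Lambda, sep="\n")
```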

3.3 Online Expectation Maximization

Batch Expectation Maximization (Dempster et al., 1977) is often used in practice to learn the parameters of the underlying distribution from which the given data is assumed to be drawn. A first online variant of EM was proposed by Titterington (1984), and it was later modified and improved in several variants (Neal & Hinton, 1998; Sato & Ishii, 2000; Cappé & Moulines, 2009; Liang & Klein, 2009) that are closer to the original batch EM algorithm. In online EM, an updated parameter estimate is produced after each observed data instance. This is done by replacing the expectation step with a stochastic approximation, while the maximization step is left unchanged. In the limit, online EM converges to the same estimate as batch EM when it is allowed to perform several iterations over the data. Hence, a loss in accuracy is incurred when it is restricted to a single pass over the data, as required in the streaming setting.
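For concreteness, here is a minimal sketch of one such online EM update for a univariate GMM, in the spirit of Cappé & Moulines (2009): the running sufficient statistics are interpolated toward the statistics of the new point (stochastic E-step) and the M-step is recomputed in closed form. The function name and the step-size schedule are our own illustrative assumptions, not the exact choices of the paper.

```python
import numpy as np

def online_em_step(x, w, mu, var, stats, t, beta=0.75):
    """One online-EM update for a univariate GMM (sketch).

    w, mu, var and the running statistics in `stats` are arrays of length M."""
    eta = (t + 2) ** (-beta)                       # step size (assumed schedule)
    # E-step: responsibilities of the new point x under the current parameters.
    resp = w * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp /= resp.sum()
    # Stochastic approximation of the expected sufficient statistics.
    s0, s1, s2 = stats
    s0 = (1 - eta) * s0 + eta * resp
    s1 = (1 - eta) * s1 + eta * resp * x
    s2 = (1 - eta) * s2 + eta * resp * x ** 2
    # M-step: closed-form update of the parameters from the statistics.
    w = s0 / s0.sum()
    mu = s1 / s0
    var = np.maximum(s2 / s0 - mu ** 2, 1e-6)      # guard against degeneracy
    return w, mu, var, (s0, s1, s2)
```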

  Input: Data $\mathbf{x}_1, \ldots, \mathbf{x}_n$
  Let $\mathcal{F}$ be a family of probability distributions with parameters $\boldsymbol{\theta}$
  Initialize a prior $P_0(\Theta) \in \mathcal{F}$
  for $t = 1$ to $n$ do
     Compute the exact posterior $P(\Theta \mid \mathbf{x}_{1:t})$ from $P_{t-1}(\Theta)$ and $\mathbf{x}_t$ using Eq. 1
     For all $g \in S(\mathcal{F})$, evaluate the moment $E_{P(\Theta \mid \mathbf{x}_{1:t})}[g(\Theta)]$
     Compute the parameters $\boldsymbol{\theta}_t$ from these moments
     Approximate $P(\Theta \mid \mathbf{x}_{1:t})$ with $P_t(\Theta) \in \mathcal{F}$ parameterized by $\boldsymbol{\theta}_t$
  end for
  Return the estimate $\hat{\Theta}$ computed from $P_n(\Theta)$
Algorithm 1 Generic Bayesian Moment Matching

4 Bayesian Moment Matching

We now discuss the Bayesian Moment Matching (BMM) algorithm in detail. BMM approximates the posterior after each observation with a distribution that has fewer terms in order to prevent the number of terms from growing exponentially.

In Algorithm 1, we first describe a generic procedure that approximates the posterior after each observation with a simpler distribution by moment matching. More precisely, a set of moments sufficient to define a distribution in the chosen family $\mathcal{F}$ is matched with the corresponding moments of the exact posterior $P(\Theta \mid \mathbf{x}_{1:t})$. At every iteration, we first calculate the exact posterior $P(\Theta \mid \mathbf{x}_{1:t})$. Then, we evaluate the set of moments $S(f)$ that are sufficient to define a distribution in the family $\mathcal{F}$. Next, we compute the parameter vector of the member of $\mathcal{F}$ that exhibits this set of sufficient moments. This determines the specific distribution $P_t(\Theta) \in \mathcal{F}$ that we use to approximate $P(\Theta \mid \mathbf{x}_{1:t})$. Note that the moments in the sufficient set $S(f)$ of the approximate posterior are the same as those of the exact posterior; however, the moments outside this set are not necessarily preserved.

In the next section (4.1), we illustrate Algorithm 1 for learning the parameters of a univariate Gaussian mixture model. Subsequently, we will give the BMM algorithm for general multivariate Gaussian mixture models.

4.1 BMM for univariate Gaussian mixture model

In this section, we illustrate the Bayesian moment matching algorithm for Gaussian mixture models. Let $\mathbf{X} = \{x_1, x_2, \ldots, x_n\}$ be a dataset of $n$ points drawn from a univariate Gaussian mixture model with density $f(x) = \sum_{j=1}^{M} w_j \, \mathcal{N}(x; \mu_j, \lambda_j^{-1})$, where $\lambda_j$ is the precision of the $j^{th}$ component and $\sum_{j=1}^{M} w_j = 1$.

The first step is to choose an appropriate family of distributions for the prior $P_0(\Theta)$. A family that is conjugate to the likelihood is desirable. We further assume that, a priori, the parameters of each component of the GMM are independent of those of all other components; this independence assumption simplifies the expressions for the posterior. Hence, the prior is chosen as a product of a Dirichlet distribution over the weights and a Normal-Gamma distribution over each tuple $(\mu_j, \lambda_j)$, where $1 \leq j \leq M$. More precisely,

$$P_0(\Theta) = Dir(w_1, \ldots, w_M; \alpha_1, \ldots, \alpha_M) \prod_{j=1}^{M} \mathcal{NG}(\mu_j, \lambda_j; \mu_{0j}, \kappa_j, a_j, b_j).$$

Given the prior $P_0(\Theta)$, the posterior after observing the first data point $x_1$ is given by

$$P(\Theta \mid x_1) = \frac{1}{k} P_0(\Theta) \sum_{j=1}^{M} w_j \, \mathcal{N}(x_1; \mu_j, \lambda_j^{-1}). \qquad (4)$$

Since the Normal-Gamma distribution is a conjugate prior for a Normal distribution with unknown mean and precision,

$$\mathcal{N}(x_1; \mu_j, \lambda_j^{-1}) \, \mathcal{NG}(\mu_j, \lambda_j; \mu_{0j}, \kappa_j, a_j, b_j) = c_j \, \mathcal{NG}(\mu_j, \lambda_j; \mu_{0j}^*, \kappa_j^*, a_j^*, b_j^*),$$

where $c_j$ is a constant independent of $\mu_j$ and $\lambda_j$. Similarly, $w_j \, Dir(\mathbf{w}; \alpha_1, \ldots, \alpha_M) = c'_j \, Dir(\mathbf{w}; \alpha_1, \ldots, \alpha_j + 1, \ldots, \alpha_M)$, where $c'_j$ is some constant, and

$$\mu_{0j}^* = \frac{\kappa_j \mu_{0j} + x_1}{\kappa_j + 1}, \quad \kappa_j^* = \kappa_j + 1, \quad a_j^* = a_j + \frac{1}{2}, \quad b_j^* = b_j + \frac{\kappa_j (x_1 - \mu_{0j})^2}{2(\kappa_j + 1)}. \qquad (5)$$

Therefore, Eq. 4 can now be expressed as

$$P(\Theta \mid x_1) = \frac{1}{k} \sum_{j=1}^{M} c_j c'_j \, Dir(\mathbf{w}; \alpha_1, \ldots, \alpha_j + 1, \ldots, \alpha_M) \, \mathcal{NG}(\mu_j, \lambda_j; \mu_{0j}^*, \kappa_j^*, a_j^*, b_j^*) \prod_{i \neq j} \mathcal{NG}(\mu_i, \lambda_i; \mu_{0i}, \kappa_i, a_i, b_i), \qquad (6)$$

where $c_j$ and $c'_j$ are the constants defined above and $k$ is the normalization constant. Eq. 6 shows that the posterior is a mixture of products of distributions in which each product term has the same form as the prior family. It is also evident from Eq. 6 that the number of terms in the posterior grows by a factor of $M$ at each iteration, which is problematic. In the next step, we approximate this mixture with a single product of Dirichlet and Normal-Gamma distributions $\tilde{P}(\Theta)$ by matching all the sufficient moments of $\tilde{P}(\Theta)$ with those of $P(\Theta \mid x_1)$, i.e.

$$\tilde{P}(\Theta) = Dir(\mathbf{w}; \tilde{\alpha}_1, \ldots, \tilde{\alpha}_M) \prod_{j=1}^{M} \mathcal{NG}(\mu_j, \lambda_j; \tilde{\mu}_{0j}, \tilde{\kappa}_j, \tilde{a}_j, \tilde{b}_j). \qquad (7)$$

We evaluate the parameters of $\tilde{P}(\Theta)$ by matching a set of sufficient moments of $\tilde{P}(\Theta)$ with those of $P(\Theta \mid x_1)$. The set of sufficient moments for the posterior is $S = \{w_j, w_j^2, \mu_j, \mu_j^2, \lambda_j, \lambda_j^2 : 1 \leq j \leq M\}$. For any $g \in S$, the moment of the exact posterior is a mixture of the moments of its product terms,

$$E_{P(\Theta \mid x_1)}[g] = \frac{1}{k} \sum_{j=1}^{M} c_j c'_j \, E_j[g], \qquad (8)$$

where $E_j[\cdot]$ denotes the expectation with respect to the $j^{th}$ product term of Eq. 6. The parameters of $\tilde{P}(\Theta)$ can then be computed from the following moment expressions of the Dirichlet and Normal-Gamma distributions:

$$E[w_j] = \frac{\tilde{\alpha}_j}{\sum_i \tilde{\alpha}_i}, \quad E[w_j^2] = \frac{\tilde{\alpha}_j (\tilde{\alpha}_j + 1)}{\left(\sum_i \tilde{\alpha}_i\right)\left(\sum_i \tilde{\alpha}_i + 1\right)}, \quad E[\mu_j] = \tilde{\mu}_{0j},$$
$$\mathrm{Var}(\mu_j) = \frac{\tilde{b}_j}{\tilde{\kappa}_j(\tilde{a}_j - 1)}, \quad E[\lambda_j] = \frac{\tilde{a}_j}{\tilde{b}_j}, \quad \mathrm{Var}(\lambda_j) = \frac{\tilde{a}_j}{\tilde{b}_j^2}. \qquad (9)$$

Equating these expressions with the corresponding moments of the exact posterior and solving for the parameters yields the approximation $\tilde{P}(\Theta)$ of the exact posterior $P(\Theta \mid x_1)$. This approximate posterior becomes the prior for the next iteration, and following the steps above iteratively yields a distribution $P_n(\Theta)$ after observing the stream of data $x_1, \ldots, x_n$. The estimate $\hat{\Theta}$ derived from $P_n(\Theta)$ is returned.
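To make the above update concrete, here is a hedged Python sketch of a single BMM update for the univariate case. It follows Eqs. 4-9 using standard Dirichlet and Normal-Gamma moment identities; the function and variable names are ours, and minor details (e.g., which component's second moment is used to recover the Dirichlet precision) are illustrative choices rather than the paper's exact conventions.

```python
import numpy as np
from scipy.stats import t as student_t

def ng_moments(m, k, a, b):
    """E[mu], E[mu^2], E[lambda], E[lambda^2] of a Normal-Gamma(m, k, a, b)."""
    var_mu = b / (k * (a - 1.0))                 # requires a > 1
    E_lam = a / b
    E_lam2 = a * (a + 1.0) / b ** 2
    return m, var_mu + m ** 2, E_lam, E_lam2

def ng_from_moments(E_mu, E_mu2, E_lam, E_lam2):
    """Invert the four sufficient moments back to Normal-Gamma parameters."""
    var_lam = E_lam2 - E_lam ** 2
    a = E_lam ** 2 / var_lam
    b = E_lam / var_lam
    k = b / ((E_mu2 - E_mu ** 2) * (a - 1.0))
    return E_mu, k, a, b

def bmm_step(x, alpha, m, k, a, b):
    """One BMM update for a univariate GMM; all arguments except x are
    length-M arrays holding the Dirichlet / Normal-Gamma prior parameters."""
    a0 = alpha.sum()
    # Conjugate Normal-Gamma update of component j, assuming x came from j (Eq. 5).
    k1 = k + 1.0
    m1 = (k * m + x) / k1
    a1 = a + 0.5
    b1 = b + k * (x - m) ** 2 / (2.0 * k1)
    # Weight of each product term in the exact posterior (Eq. 6): prior mixing
    # weight times the Student-t predictive density of x under component j.
    scale = np.sqrt(b * k1 / (a * k))
    c = (alpha / a0) * student_t.pdf(x, df=2.0 * a, loc=m, scale=scale)
    c /= c.sum()
    # Moment matching for the weights (Dirichlet factor, Eq. 2).
    alpha_j = alpha + np.eye(len(alpha))          # row j: alpha with alpha_j + 1
    E_w = c @ (alpha_j / (a0 + 1.0))
    E_w2 = c @ (alpha_j * (alpha_j + 1.0) / ((a0 + 1.0) * (a0 + 2.0)))
    prec = (E_w[0] - E_w2[0]) / (E_w2[0] - E_w[0] ** 2)   # recovered sum of alphas
    alpha_new = E_w * prec
    # Moment matching for each (mu_j, lambda_j): marginally the pair is updated
    # with probability c_j and left at its prior otherwise, so mix the moments.
    Emu0, Emu20, El0, El20 = ng_moments(m, k, a, b)
    Emu1, Emu21, El1, El21 = ng_moments(m1, k1, a1, b1)
    E_mu = c * Emu1 + (1 - c) * Emu0
    E_mu2 = c * Emu21 + (1 - c) * Emu20
    E_lam = c * El1 + (1 - c) * El0
    E_lam2 = c * El21 + (1 - c) * El20
    m_new, k_new, a_new, b_new = ng_from_moments(E_mu, E_mu2, E_lam, E_lam2)
    return alpha_new, m_new, k_new, a_new, b_new
```

Running this step over a stream of observations, starting from a broad prior, and reading off the posterior means at the end gives the online estimator; Algorithm 2 below is the multivariate analogue with Normal-Wishart factors.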

Here, we have assumed that the number of components $M$ is known. In practice, this may not be the case. The problem can be addressed by choosing a large enough value of $M$ when learning the model. Although such an approach might lead to overfitting for maximum likelihood techniques such as online EM, it is reasonable in our case since Bayesian learning is fairly robust to overfitting.

4.2 BMM for multivariate Gaussian mixture model

In the previous section, we illustrated the Bayesian moment matching algorithm for a univariate Gaussian mixture model in detail. In this section, we briefly discuss the general case of a multivariate Gaussian mixture model. The family of distributions for the prior in this case becomes

$$P_0(\Theta) = Dir(\mathbf{w}; \alpha_1, \ldots, \alpha_M) \prod_{j=1}^{M} \mathcal{NW}(\boldsymbol{\mu}_j, \Lambda_j; \boldsymbol{\mu}_{0j}, \kappa_j, \mathbf{W}_j, \nu_j),$$

where $\boldsymbol{\mu}_j$ and $\Lambda_j$ are the mean vector and precision matrix of the $j^{th}$ component. The algorithm works in the same manner as before; however, the update equations in (5) are replaced by their Normal-Wishart counterparts.

The set of sufficient moments for the posterior in this case is $S = \{w_j, w_j^2, \boldsymbol{\mu}_j, \boldsymbol{\mu}_j\boldsymbol{\mu}_j^T, (\Lambda_j)_{ik}, (\Lambda_j)_{ik}^2 : 1 \leq j \leq M\}$, where $(\Lambda_j)_{ik}$ is the $(i,k)^{th}$ element of the matrix $\Lambda_j$. Notice that, since $\Lambda_j$ is a symmetric matrix, we only need to consider the moments of the elements on and above its diagonal.

In Eq. 3 of Section 3.2.2, we presented the expressions for a set of sufficient moments of a Normal-Wishart distribution. Using those expressions, we can again approximate the mixture of products of Dirichlet and Normal-Wishart distributions in the posterior with a single product of Dirichlet and Normal-Wishart distributions, as we did in the previous section. Finally, the estimate $\hat{\Theta}$ is obtained after observing the data $\mathbf{x}_1, \ldots, \mathbf{x}_n$. Algorithm 2 summarizes Bayesian moment matching for Gaussian mixture models.

  Input: Data $\mathbf{x}_1, \ldots, \mathbf{x}_n$
  Let $\mathcal{F}$ be the family of probability distributions given by a product of a Dirichlet and Normal-Wishart distributions.
  Initialize the prior $P_0(\Theta)$ as $Dir(\mathbf{w}; \boldsymbol{\alpha}) \prod_{j=1}^{M} \mathcal{NW}(\boldsymbol{\mu}_j, \Lambda_j; \boldsymbol{\mu}_{0j}, \kappa_j, \mathbf{W}_j, \nu_j)$
  for $t = 1$ to $n$ do
     Compute the exact posterior $P(\Theta \mid \mathbf{x}_{1:t})$ from $P_{t-1}(\Theta)$ and $\mathbf{x}_t$ using Eq. (1)
     For all $g \in S(\mathcal{F})$, evaluate the moment $E_{P(\Theta \mid \mathbf{x}_{1:t})}[g(\Theta)]$
     Compute the parameters of $P_t(\Theta) \in \mathcal{F}$ from these moments
     Approximate $P(\Theta \mid \mathbf{x}_{1:t})$ with $P_t(\Theta)$
  end for
  Return the estimate $\hat{\Theta}$ computed from $P_n(\Theta)$
Algorithm 2 Bayesian Moment Matching for Gaussian mixtures
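The component-wise conjugate update inside Algorithm 2 (the multivariate analogue of Eq. 5) can be sketched as follows, assuming the $(\boldsymbol{\mu}_0, \kappa, \mathbf{W}, \nu)$ parameterization of Section 3.2.2 with $\mathbf{W}$ the Wishart scale matrix; this is a standard single-observation Normal-Wishart update, not code from the paper.

```python
import numpy as np

def nw_update(x, mu0, kappa, W, nu):
    """Single-observation conjugate update of a Normal-Wishart prior (sketch)."""
    kappa1 = kappa + 1.0
    nu1 = nu + 1.0
    mu1 = (kappa * mu0 + x) / kappa1                 # shrink the mean toward x
    diff = x - mu0
    W1_inv = np.linalg.inv(W) + (kappa / kappa1) * np.outer(diff, diff)
    return mu1, kappa1, np.linalg.inv(W1_inv), nu1
```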

4.3 Distributed Bayesian Moment Matching

One of the major advantages of Bayes' theorem is that the computation of the posterior can be distributed over several machines, each of which processes a subset of the data. The posterior can likewise be computed in a distributed manner with the Bayesian moment matching algorithm. For example, assume that we have $T$ machines and a data set with $TN$ data points. Each machine $i \in \{1, \ldots, T\}$ can run Algorithm 2 over its own block of $N$ data points to compute an approximate partial posterior $P^{(i)}(\Theta) \approx \Pr(\Theta \mid \mathbf{x}_{(i-1)N+1:iN})$. These partial posteriors can be combined to obtain a posterior over the entire data set according to the following equation:

$$\Pr(\Theta \mid \mathbf{x}_{1:TN}) \propto P_0(\Theta) \prod_{i=1}^{T} \frac{P^{(i)}(\Theta)}{P_0(\Theta)}. \qquad (10)$$

Subsequently, the estimate $\hat{\Theta}$ is obtained from the combined posterior over the whole data set. Therefore, the Bayesian moment matching algorithm can perform Bayesian learning in an online and distributed fashion. We will show in Section 5 that distributed Bayesian moment matching performs favorably in terms of accuracy and results in large speed-ups in running time.
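Because the approximate partial posteriors and the prior all belong to the same exponential family, the product in Eq. 10 reduces to combining their natural parameters. The sketch below shows this combination for the Dirichlet factor (the pseudo-counts simply add up, minus $T-1$ copies of the prior); the Normal-Gamma and Normal-Wishart factors combine in the same way in their natural parameterization. All numbers are hypothetical.

```python
import numpy as np

def combine_dirichlet(alpha_partials, alpha_prior):
    """Combine T partial Dirichlet posteriors according to Eq. 10 (sketch)."""
    alpha_partials = np.asarray(alpha_partials)        # shape (T, M)
    T = alpha_partials.shape[0]
    return alpha_partials.sum(axis=0) - (T - 1) * alpha_prior

# Example: three machines, prior alpha = [1, 1, 1].
print(combine_dirichlet([[5.0, 3.0, 2.0], [4.0, 4.0, 2.0], [6.0, 2.0, 3.0]],
                        np.array([1.0, 1.0, 1.0])))    # -> [13.  7.  5.]
```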

5 Experiments

We performed experiments on both synthetic and real datasets to evaluate the performance of the online Bayesian moment matching algorithm (oBMM). We used the synthetic datasets to verify whether oBMM converges to the true model given enough data. We then compared the performance of oBMM with the online Expectation Maximization algorithm (oEM) described in (Cappé & Moulines, 2009); we chose this version of oEM since it has been shown to perform best among the various variants of oEM (Liang & Klein, 2009). We now discuss the experiments on both kinds of datasets in detail.

Synthetic Data sets

We evaluate the performance of oBMM on 9 different synthetic data sets. Each data set was generated from a Gaussian mixture model with between 2 and 6 components and between 3 and 10 attributes (dimensions). For each data set, we sampled 200,000 data points, divided into a training set of 170,000 instances and a test set of 30,000 instances. To evaluate the performance of oBMM, we calculated the average log-likelihood of the model learned by oBMM after each data instance is observed. Figure 1 compares oBMM against the true model: each subplot shows the average log-likelihood on the vertical axis and the number of observations on the horizontal axis. It is clear from the plots that oBMM converges to the true model likelihood in each of the nine cases, given a large enough data set.


Figure 1: Performance of the online Bayesian moment matching algorithm for Gaussian mixture models on synthetic datasets with 170,000 training instances and 30,000 test instances. Each subplot shows the average log-likelihood of the model learned by oBMM as a function of the number of observed data instances, converging to the log-likelihood of the true model.

Next, we discuss the performance of oBMM against oEM and show through experiments on real data sets that oBMM performs better than oEM in terms of both accuracy and running time.

Real Data sets

We evaluated the performance of oBMM on two sets of real datasets: 10 small-to-moderate size datasets and 4 large datasets, all publicly available from the UCI machine learning repository and the Function Approximation repository (Guvenir & Uysal, 2000). The datasets span diverse domains, with the number of attributes (or dimensions) ranging from 4 to 91.

In order to evaluate the performance of oBMM, we compare it to oEM. We measure both the quality of the two algorithms, in terms of average log-likelihood scores on held-out test data, and their scalability, in terms of running time. We use the Wilcoxon signed rank test (Wilcoxon, 1950) to assess the statistical significance of the results and report significance at a p-value below 0.05. We computed the parameters of each algorithm for a number of components varying from 2 to 10 and report, for both oEM and oBMM, the model for which the log-likelihood over the test data stabilized and showed no further significant improvement. For oEM, the step size of the stochastic approximation in the E-step was set as a decreasing function of the number of observations n, following (Liang & Klein, 2009). We evaluate the performance of online Distributed Moment Matching (oDMM) by dividing each training dataset into 5 smaller subsets and processing each subset on a different machine. The outputs from the machines are then collected and combined to give a single estimate of the parameters of the learned model.


Data set Instances oEM oBMM
Abalone 4177 -2.65 -1.82
Banknote 1372 -9.74 -9.65
Airfoil 1503 -15.86 -16.53
Arabic 8800 -15.83 -14.99
Transfusion 748 -13.26 -13.09
CCPP 9568 -16.53 -16.51
Comp. Activity 8192 -132.04 -118.82
Kinematics 8192 -10.37 -10.32
Northridge 2929 -18.31 -17.97
Plastic 1650 -9.4694 -9.01
Table 1: Average log-likelihood scores of oEM and oBMM on 10 data sets. The statistical significance of the differences is assessed with the Wilcoxon signed rank test at a p-value of 0.05.

Table 1 shows the average log-likelihood on the test sets for oBMM and oEM. oBMM outperforms oEM on 9 of the 10 datasets, and on several of them the difference in log-likelihood is statistically significant. Table 2 and Table 3 show the log-likelihood scores and running times of each algorithm on the large datasets. In terms of log-likelihood, oBMM outperforms oEM and oDMM on all 4 datasets. While oDMM is expected to perform worse than oBMM, it is worth noticing that the degradation is not very large, which is encouraging in light of the huge gains in running time of oDMM over oEM and oBMM. Table 3 shows the running time of each algorithm: oDMM is significantly faster than the other algorithms, and oBMM is faster than oEM on 3 out of the 4 datasets.


Data (Attributes) Instances oEM oBMM oDMM
Heterogeneity (16) 3930257 -176.2 -174.3 -180.7
Magic 04 (10) 19000 -33.4 -32.1 -35.4
Year MSD (91) 515345 -513.7 -506.5 -513.8
MiniBooNe (50) 130064 -58.1 -54.7 -60.3
Table 2: Average log-likelihood scores of oEM, oBMM and oDMM on 4 large data sets.

Data (Attributes) Instances oEM oBMM oDMM
Heterogeneity (16) 3930257 77.3 81.7 17.5
Magic 04 (10) 19000 7.3 6.8 1.4
Year MSD (91) 515345 336.5 108.2 21.2
MiniBooNe (50) 130064 48.6 12.1 2.3
Table 3: Running times in seconds of oEM, oBMM and oDMM on 4 large datasets.

6 Conclusion

Large data sets are now generated in almost all fields - scientific, social and commercial - spanning areas as diverse as physics, molecular biology, social networks, health care and trading markets. It has therefore become imperative to develop algorithms that can process such large data sets quickly and in an online fashion. In this paper, we explored online algorithms for learning the parameters of Gaussian mixture models. We proposed an online Bayesian Moment Matching algorithm for parameter learning and demonstrated how it can be used in a distributed manner, leading to substantial gains in running time. We further showed through empirical analysis that the online Bayesian Moment Matching algorithm converges to the true model and outperforms online EM in terms of both accuracy and running time. We also demonstrated that distributing the algorithm over several machines results in faster running times without significantly compromising accuracy, which is particularly advantageous when running time is a major bottleneck.

In the future, we would like to extend the online Bayesian Moment Matching algorithm to also learn the number of components of a mixture model in an online fashion. Some work has already been done in this direction with Dirichlet process mixtures (Wang & Blei, 2012; Lin, 2013), and it would be desirable to explore how the BMM algorithm can be adapted to learn the number of components. Furthermore, the proposed online BMM for Gaussian mixture models could be used to learn Sum-Product Networks with continuous variables in an online manner.

References

  • Rashwan et al. (2016) Rashwan, Abdullah, Zhao, Han, and Poupart, Pascal. Online and distributed Bayesian moment matching for sum-product networks. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.
  • Anandkumar et al. (2012) Anandkumar, Animashree, Hsu, Daniel, and Kakade, Sham M. A method of moments for mixture models and hidden markov models. Journal of Machine Learning Research - Proceedings Track, 23:33.1–33.34, 2012.
  • Broderick et al. (2013) Broderick, Tamara, Boyd, Nicholas, Wibisono, Andre, Wilson, Ashia C, and Jordan, Michael I. Streaming variational bayes. In Advances in Neural Information Processing Systems, pp. 1727–1735, 2013.
  • Cappé & Moulines (2009) Cappé, Olivier and Moulines, Eric. On-line expectation–maximization algorithm for latent data models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(3):593–613, 2009.
  • Degroot (1970) Degroot, Morris H. Optimal Statistical Decisions. McGraw-Hill Book Company, New York, St Louis, San Francisco, 1970. ISBN 0-07-016242-5. URL http://opac.inria.fr/record=b1080767.
  • Dempster et al. (1977) Dempster, Arthur P, Laird, Nan M, and Rubin, Donald B. Maximum likelihood from incomplete data via the em algorithm. Journal of the royal statistical society. Series B (methodological), pp. 1–38, 1977.
  • Guvenir & Uysal (2000) Guvenir, H. Altay and Uysal, I. Bilkent university function approximation repository. 2000. URL http://funapp.cs.bilkent.edu.tr.
  • Liang & Klein (2009) Liang, Percy and Klein, Dan. Online em for unsupervised models. In Proceedings of human language technologies: The 2009 annual conference of the North American chapter of the association for computational linguistics, pp. 611–619. Association for Computational Linguistics, 2009.
  • Lin (2013) Lin, Dahua. Online learning of nonparametric mixture models via sequential variational approximation. In Advances in Neural Information Processing Systems, pp. 395–403, 2013.
  • Minka & Lafferty (2002) Minka, Thomas and Lafferty, John. Expectation-propagation for the generative aspect model. In Proceedings of the Eighteenth conference on Uncertainty in artificial intelligence, pp. 352–359. Morgan Kaufmann Publishers Inc., 2002.
  • Murphy (2012) Murphy, Kevin P. Machine learning: a probabilistic perspective. MIT press, 2012.
  • Neal & Hinton (1998) Neal, Radford M and Hinton, Geoffrey E. A view of the em algorithm that justifies incremental, sparse, and other variants. In Learning in graphical models, pp. 355–368. Springer, 1998.
  • Sato & Ishii (2000) Sato, Masa-Aki and Ishii, Shin. On-line em algorithm for the normalized gaussian network. Neural computation, 12(2):407–432, 2000.
  • Titterington (1984) Titterington, D. M. Recursive parameter estimation using incomplete data. Journal of the Royal Statistical Society: Series B (Methodological), 46(2):257–267, 1984.
  • Wang & Blei (2012) Wang, Chong and Blei, David M. Truncation-free online variational inference for bayesian nonparametric models. In Advances in neural information processing systems, pp. 413–421, 2012.
  • Wilcoxon (1950) Wilcoxon, Frank. Some rapid approximate statistical procedures. Annals of the New York Academy of Sciences, pp. 808–814, 1950.