Bayesian Inference for Gaussian Process Classifiers with Annealing and Pseudo-Marginal MCMC

11/28/2013, by Maurizio Filippone et al., University of Glasgow

Kernel methods have revolutionized the fields of pattern recognition and machine learning. Their success, however, critically depends on the choice of kernel parameters. Using Gaussian process (GP) classification as a working example, this paper focuses on Bayesian inference of covariance (kernel) parameters using Markov chain Monte Carlo (MCMC) methods. The motivation is that, compared to standard optimization of kernel parameters, MCMC methods have been systematically demonstrated to be superior in quantifying uncertainty in predictions. Recently, the Pseudo-Marginal MCMC approach has been proposed as a practical inference tool for GP models. In particular, it amounts to replacing the analytically intractable marginal likelihood with an unbiased estimate obtainable by approximate methods and importance sampling. After discussing the potential drawbacks of employing importance sampling, this paper proposes the application of annealed importance sampling. The results empirically demonstrate that, compared to importance sampling, annealed importance sampling can reduce the variance of the estimate of the marginal likelihood exponentially in the number of data at a computational cost that scales only polynomially. The results on real data demonstrate that employing annealed importance sampling in the Pseudo-Marginal MCMC approach represents a step forward in the development of fully automated exact inference engines for GP models.


I Introduction

Kernel methods have revolutionized the fields of pattern recognition and machine learning due to their nonlinear and nonparametric modeling capabilities [1]. Their success, however, critically depends on the choice of kernel parameters. In applications where accurate quantification of uncertainty in predictions is of primary interest, it has been argued that optimization of kernel parameters may not be desirable, and that inference using Bayesian techniques represents a much more reliable alternative [2, 3, 4, 5, 6].

This paper focuses in particular on the problem of inferring covariance (kernel) parameters of Gaussian Process (GP) classification models using Markov chain Monte Carlo (MCMC) techniques. GP classification is used as a working example because these models are formulated in probabilistic terms and are therefore particularly suitable candidates for carrying out Bayesian inference of their kernel parameters. MCMC-based inference techniques are employed because, for general GP models and general kernels, they offer an all-purpose solution to do so up to a given precision [7], as discussed in [4, 8, 9]. The formulation of GP classifiers, and that of GP models in general, makes use of a set of latent variables f that are assumed to be distributed according to a GP prior with covariance parameterized by a set of parameters θ. Applying MCMC to directly draw samples from the posterior distribution over covariance parameters p(θ | y) would require evaluating the so-called marginal likelihood p(y | θ), namely the likelihood with the latent variables integrated out of the model, which is analytically intractable.

Recently, the Pseudo-Marginal (PM) MCMC approach has been proposed as a practical way to efficiently infer covariance parameters in Gaussian process classifiers exactly [3]. In this approach, computations do not rely on the actual marginal likelihood, but on an unbiased estimate obtained by approximate methods and Importance Sampling (IS). While the sampling of covariance parameters using PM MCMC improves on previous approaches for inferring covariance parameters, a large variance in the estimate of the marginal likelihood can negatively impact the PM MCMC approach, making convergence slow and sampling efficiency low. In [3], IS was based on an importance distribution obtained by Gaussian approximations to the posterior over latent variables [10, 11, 12]. For certain values of the covariance parameters, the posterior over latent variables can be strongly non-Gaussian and the approximation can be poor, thus leading to a large variance in the IS estimate of the marginal likelihood [11]. This effect is exacerbated by the dimensionality of the problem, which makes the variance of IS grow exponentially large [13]. In the case of GP classification, estimating the marginal likelihood entails an integration in as many dimensions as the number of data, so this effect might be problematic in the case of large data sets.

This paper presents the application of Annealed Importance Sampling (AIS) [13] to obtain a low-variance unbiased estimate of the marginal likelihood. (The code to reproduce all the results in this paper can be found at www.dcs.gla.ac.uk/maurizio/pages/code_ea_mcmc_ais/.) This paper empirically demonstrates that, compared to IS, AIS can reduce the variance of the estimate of the marginal likelihood exponentially in the number of data at a computational cost that scales only polynomially. Finally, two versions of the PM MCMC approach, employing AIS and IS respectively, are compared on five real data sets. The results on these data demonstrate that employing AIS in the PM MCMC approach represents a step forward in the development of fully automated exact Bayesian inference engines for GP classifiers.

The remainder of this paper is organized as follows. Sections II and III review GP models and their fully Bayesian treatment using the PM MCMC approach. Section IV presents AIS to obtain an unbiased estimate of the marginal likelihood in GP models that can be used in the PM MCMC approach. Section V reports results on synthetic and real data, and Section VI reports the conclusions.

II Bayesian Inference for GP Classification

Let X = {x_1, …, x_n} be a set of n input vectors, where x_i ∈ R^d, and let y = {y_1, …, y_n} be the set of associated observed binary responses, with y_i ∈ {−1, +1}. GP classification models are a class of hierarchical models where the labels y_i are modeled as being independently distributed according to a Bernoulli distribution. The probability of class +1 for an input x_i is based on a latent variable f_i and is defined as p(y_i = +1 | f_i) = Φ(f_i), where Φ is the cumulative distribution function of the standard normal distribution, so that p(y_i | f_i) = Φ(y_i f_i). The latent variables f = (f_1, …, f_n) are assumed to be distributed according to a GP prior, where a GP is a set of random variables characterized by the fact that any finite subset of them is jointly Gaussian. GPs are specified by a mean function and a covariance function; for the sake of simplicity, in the remainder of this paper we will employ zero mean GPs. The covariance function k(x_i, x_j | θ) gives the covariance between latent variables at inputs x_i and x_j, and it is assumed to be parameterized by a set of parameters θ. This specification results in a multivariate Gaussian prior over the latent variables, p(f | θ) = N(f | 0, K), with K defined as an n × n matrix with entries K_ij = k(x_i, x_j | θ).

A GP can be viewed as a prior over functions, and it is appealing in situations where it is difficult to specify a parametric form for the function mapping the inputs into the probabilities of class labels. The covariance plays the role of the kernel in kernel machines, and in the remainder of this paper it will be assumed to be the Radial Basis Function (RBF) covariance

k(x_i, x_j | θ) = σ exp( − Σ_{r=1}^{d} (x_ir − x_jr)² / (2 τ_r²) ).    (1)

There can be one length-scale parameter τ_r for each feature, which is a suitable modelling assumption for Automatic Relevance Determination (ARD) [14], or there can be one global length-scale parameter τ such that τ_1 = … = τ_d = τ. The parameter σ represents the variance of the marginal distribution of each latent variable. A complete specification of a fully Bayesian GP classifier requires a prior p(θ) over the covariance parameters θ.
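As a concrete illustration, the following is a minimal sketch (not code from the paper) of how the ARD RBF covariance matrix in eq. 1 could be computed; function and variable names are illustrative, and the 1/(2τ_r²) scaling matches the form given above.

```python
import numpy as np

def rbf_ard_covariance(X1, X2, sigma, tau):
    """ARD RBF covariance (eq. 1): sigma * exp(-sum_r (x_ir - x_jr)^2 / (2 tau_r^2)).

    X1: (n1, d) array, X2: (n2, d) array,
    sigma: marginal variance, tau: (d,) array of length-scales.
    """
    # Scale each feature by its length-scale, then compute squared distances.
    Z1 = X1 / tau
    Z2 = X2 / tau
    sq_dists = (
        np.sum(Z1**2, axis=1)[:, None]
        + np.sum(Z2**2, axis=1)[None, :]
        - 2.0 * Z1 @ Z2.T
    )
    return sigma * np.exp(-0.5 * sq_dists)

# Example: prior covariance K over the latent variables for random 2-D inputs.
# X = np.random.rand(100, 2)
# K = rbf_ard_covariance(X, X, sigma=1.0, tau=np.array([0.5, 0.5]))
```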

When predicting the label y_* for a new input x_*, it is necessary to estimate or infer all unobserved quantities in the model, namely f and θ. An appealing way of calculating predictive distributions is as follows:

p(y_* | y, x_*) = ∫ p(y_* | f_*) p(f_* | f, θ, x_*) p(f, θ | y) df_* df dθ.    (2)

In the last expression predictions are no longer conditioned on latent variables and covariance parameters, as they are integrated out of the model. Crucially, such an integration accounts for the uncertainty in latent variables and covariance parameters based on their posterior distribution p(f, θ | y).

In order to compute the predictive distribution in eq. 2, a standard way to proceed is to approximate it using a Monte Carlo estimate,

p(y_* | y, x_*) ≈ (1/N) Σ_{i=1}^{N} ∫ p(y_* | f_*) p(f_* | f^(i), θ^(i), x_*) df_*,    (3)

provided that N samples (f^(i), θ^(i)) from the posterior p(f, θ | y) are available. Note that in the case of GP classification, the remaining integral has a closed form solution [10].
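As an illustration of how eq. 3 can be evaluated, here is a hedged sketch (not from the paper) that uses the standard closed-form probit integral ∫ Φ(f_*) N(f_* | μ_*, s_*²) df_* = Φ(μ_* / sqrt(1 + s_*²)); the helper names and the `cov_fn` argument are assumptions.

```python
import numpy as np
from scipy.stats import norm

def probit_predictive(mu_star, var_star):
    # Closed-form integral: ∫ Φ(f*) N(f* | mu, var) df* = Φ(mu / sqrt(1 + var)).
    return norm.cdf(mu_star / np.sqrt(1.0 + var_star))

def mc_predictive(X, x_star, posterior_samples, cov_fn):
    """Monte Carlo estimate of eq. 3.

    posterior_samples: list of (f, theta) pairs drawn from p(f, theta | y).
    cov_fn(X1, X2, theta): covariance function, e.g. the ARD RBF of eq. 1.
    """
    probs = []
    for f, theta in posterior_samples:
        K = cov_fn(X, X, theta)                       # n x n prior covariance
        k_star = cov_fn(X, x_star[None, :], theta)    # n x 1 cross-covariance
        k_ss = cov_fn(x_star[None, :], x_star[None, :], theta)[0, 0]
        alpha = np.linalg.solve(K, k_star).ravel()    # K^{-1} k_*
        mu_star = alpha @ f                           # GP predictive mean of f_*
        var_star = k_ss - k_star.ravel() @ alpha      # GP predictive variance of f_*
        probs.append(probit_predictive(mu_star, var_star))
    return np.mean(probs)
```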

As it is not possible to directly draw samples from p(f, θ | y), alternative ways to characterize this posterior have been proposed. A popular way to do so employs deterministic approximations to integrate out latent variables [11, 12], but there is no way to quantify the error introduced by these approximations. Also, quadrature is usually employed to integrate out covariance parameters, thus limiting the applicability of GP models to problems with few covariance parameters [5]. Such limitations might not be acceptable in some pattern recognition applications, so we propose Markov chain Monte Carlo (MCMC) based inference as a general framework for tackling inference problems exactly in GP models. The idea underpinning MCMC methods for GP models is to set up a Markov chain with p(f, θ | y) as invariant distribution.

To date, most MCMC approaches applied to GP models alternate updates of latent variables and covariance parameters. All these approaches, however, face the difficulty of having to decouple latent variables and covariance parameters, whose posterior dependence makes convergence to the posterior distribution slow. Reparameterization techniques are a popular way to attempt to decouple the two groups of variables [15, 16, 17]. Also, jointly sampling f and θ has been attempted in [18, 19], based on approximations to the posterior over latent variables. Despite these efforts, a satisfactory way of sampling the covariance parameters for general GP models is still missing, as demonstrated in a recent comparative study [8].

At this point it is useful to notice that samples from the posterior distribution over latent variables and covariance parameters p(f, θ | y) can be obtained by alternately sampling from p(θ | y) and p(f | y, θ). Obtaining samples from p(θ | y) is obviously difficult, as it requires the marginal likelihood p(y | θ) = ∫ p(y | f) p(f | θ) df; except for the case of a Gaussian likelihood, evaluating the marginal likelihood entails an integration which cannot be computed analytically [10]. In the next section we will focus on the PM MCMC approach as a practical way of dealing with this problem.

Obtaining samples from p(f | y, θ), instead, can be done efficiently using Elliptical Slice Sampling (Ell-SS) [20]. Ell-SS defines a transition operator T(f' | f, y, θ), and is a variant of Slice Sampling [21] adapted to the sampling of latent variables in GP models. Ell-SS begins by randomly choosing a threshold η for the log-likelihood,

η = log p(y | f) + log u,    u ∼ U[0, 1],    (4)

and by drawing a set of latent variables ν from the prior p(f | θ). Then, a combination of f and ν is sought such that the log-likelihood of the resulting combination is larger than the threshold η. Such a combination is defined by means of the sine and cosine of an auxiliary variable α, namely f' = f cos(α) + ν sin(α), which makes the resulting combinations span an ellipse in the latent variable space. The search procedure is based on slice sampling of α starting from the interval [0, 2π]. Because Ell-SS does not require any tuning and has been shown to be very efficient for several GP models [8], it is the operator that will be used in the remainder of this paper to sample latent variables. However, note that latent variables can also be efficiently sampled by means of a variant of Hybrid Monte Carlo [8].
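For concreteness, here is a minimal sketch of one Ell-SS transition, following the standard algorithm of [20] rather than code from the paper; the function signature and variable names are illustrative.

```python
import numpy as np

def elliptical_slice_step(f, K_chol, log_lik, rng):
    """One Elliptical Slice Sampling transition leaving p(y | f) N(f | 0, K) invariant.

    f: current latent vector, K_chol: Cholesky factor of the prior covariance K,
    log_lik(f): log p(y | f), rng: numpy random generator.
    """
    n = f.shape[0]
    nu = K_chol @ rng.standard_normal(n)          # auxiliary draw from the GP prior
    threshold = log_lik(f) + np.log(rng.uniform())  # eq. 4

    # Initial angle and bracket on the ellipse.
    alpha = rng.uniform(0.0, 2.0 * np.pi)
    alpha_min, alpha_max = alpha - 2.0 * np.pi, alpha

    while True:
        f_prop = f * np.cos(alpha) + nu * np.sin(alpha)
        if log_lik(f_prop) > threshold:
            return f_prop
        # Shrink the bracket towards alpha = 0 and try again.
        if alpha < 0.0:
            alpha_min = alpha
        else:
            alpha_max = alpha
        alpha = rng.uniform(alpha_min, alpha_max)
```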

III Pseudo-Marginal Inference for GP models

For the sake of simplicity, this work will focus on the Metropolis-Hastings (MH) algorithm [9, 22] to obtain samples from the posterior distribution over covariance parameters p(θ | y). The MH algorithm is based on the iteration of the following two steps: (i) proposing a new set of parameters θ' drawn from a user-defined proposal distribution q(θ' | θ), and (ii) evaluating the Hastings ratio

z = [ p(y | θ') p(θ') q(θ | θ') ] / [ p(y | θ) p(θ) q(θ' | θ) ]    (5)

to accept or reject θ'. As previously discussed, the marginal likelihood p(y | θ) cannot be computed analytically, except for the case of a Gaussian likelihood.

The PM approach in [3] builds upon a remarkable theoretical result [23, 24] stating that it is possible to plug an unbiased estimate p̃(y | θ) of the marginal likelihood into the Hastings ratio,

z̃ = [ p̃(y | θ') p(θ') q(θ | θ') ] / [ p̃(y | θ) p(θ) q(θ' | θ) ],    (6)

and still obtain an MCMC algorithm sampling from the correct posterior distribution p(θ | y). In [3] an unbiased estimate of the marginal likelihood was obtained as follows. First, an approximation q(f | y, θ) of the posterior over latent variables p(f | y, θ) was obtained by means of approximate methods, such as for example the Laplace Approximation (LA) or Expectation Propagation. Second, based on q(f | y, θ), it was proposed to get an unbiased estimate of the marginal likelihood using IS. In particular, this was achieved by drawing N_imp samples f^(1), …, f^(N_imp) from the approximating distribution q(f | y, θ). Defining the importance weights

w^(i) = p(y | f^(i)) p(f^(i) | θ) / q(f^(i) | y, θ),    (7)

the marginal likelihood was approximated by

p̃(y | θ) = (1 / N_imp) Σ_{i=1}^{N_imp} w^(i).    (8)

Such an estimate is unbiased, and the closer q(f | y, θ) is to p(f | y, θ), the lower the variance of the estimate [13].
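A hedged sketch of this IS estimator on the log scale, assuming a Gaussian approximation q(f | y, θ) = N(m, S) has already been computed (for example by the LA algorithm); names are illustrative and this is not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def is_log_marginal_likelihood(m, S, K, log_lik, n_imp, rng):
    """IS estimate of log p(y | theta) (eqs. 7-8).

    m, S: mean and covariance of the Gaussian approximation q(f | y, theta),
    K: prior covariance, log_lik(f): log p(y | f), n_imp: number of importance samples.
    """
    q = multivariate_normal(mean=m, cov=S)
    prior = multivariate_normal(mean=np.zeros_like(m), cov=K)
    log_w = np.empty(n_imp)
    for i in range(n_imp):
        f = rng.multivariate_normal(m, S)
        # log weight: log p(y|f) + log p(f|theta) - log q(f)
        log_w[i] = log_lik(f) + prior.logpdf(f) - q.logpdf(f)
    # Average the weights in log space for numerical stability (log-sum-exp).
    return np.logaddexp.reduce(log_w) - np.log(n_imp)
```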

In the experiments shown in [3] this estimate was adequate for the problems that were analyzed, especially when accurate approximations based on Expectation Propagation were used. However, the variance of the IS estimate grows exponentially with the dimensionality of the integral [13], and this might represent a limitation when applying PM MCMC to large data sets. In particular, a large variance in the estimate of p(y | θ) can eventually lead to the acceptance of a proposal θ' because the corresponding marginal likelihood is overestimated. If the overestimation is severe, it is unlikely that any new proposal will be accepted, resulting in slow convergence and low efficiency. The aim of this paper is to present a methodology based on AIS [13] which is capable of mitigating this effect.
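To make the role of the unbiased estimate in eq. 6 explicit, here is a minimal sketch of one PM MH step under the simplifying assumption of a symmetric Gaussian random-walk proposal on log θ (so the proposal terms cancel); `estimate_log_marginal` stands for either the IS estimator above or the AIS estimator of Section IV, and all names are illustrative.

```python
import numpy as np

def pm_mh_step(log_theta, log_p_tilde, log_prior, estimate_log_marginal, step_size, rng):
    """One Pseudo-Marginal Metropolis-Hastings update of the covariance parameters.

    log_theta: current parameters on the log scale, log_p_tilde: current estimate of
    log p(y | theta), log_prior(log_theta): log p(theta) including the Jacobian of the
    log transform, estimate_log_marginal(log_theta): unbiased estimate (IS or AIS) of
    the marginal likelihood, returned on the log scale.
    """
    # Symmetric Gaussian random-walk proposal, so the q terms cancel in the ratio.
    log_theta_prop = log_theta + step_size * rng.standard_normal(log_theta.shape)
    log_p_tilde_prop = estimate_log_marginal(log_theta_prop)

    log_ratio = (log_p_tilde_prop + log_prior(log_theta_prop)
                 - log_p_tilde - log_prior(log_theta))
    if np.log(rng.uniform()) < log_ratio:
        # Crucially, the estimate is carried along with the accepted parameters.
        return log_theta_prop, log_p_tilde_prop, True
    return log_theta, log_p_tilde, False
```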

IV Marginal Likelihood Estimation with Annealed Importance Sampling

Fig. 1: Illustration of the annealing strategies studied in this work. The figure was generated as follows. The input data comprise two data points in two dimensions with corresponding binary labels, and the covariance is the one in eq. 1. The leftmost plots show the multiplication of the GP prior (grey) and the likelihood (light blue), resulting in the posterior distribution over the two latent variables (blue). The first row of the figure shows the annealing procedure from the GP prior to the posterior. The leftmost plot in the second row shows prior, likelihood and posterior as before, along with the Gaussian approximation given by the LA algorithm (red). The remaining plots in the second row show the annealing procedure from the approximating Gaussian distribution to the posterior. In both cases, a geometric spacing of the β_s was assumed. Three samples drawn from the initial distribution and propagated using the transition operators T_s (one iteration of Ell-SS) have also been added to the plots.

AIS is an extension of IS where the weights in eq. 7 are computed based on a sequence of distributions going from one that is easy to sample from to the posterior distribution of interest. Following the derivation in [13], define g_0(f) as the unnormalized density of a distribution which is easy to sample from; in the next section we will study two such distributions. Also, define the unnormalized posterior density

g_S(f) = p(y | f) p(f | θ).    (9)

AIS defines a sequence of intermediate unnormalized distributions

g_s(f) = g_S(f)^{β_s} g_0(f)^{1 − β_s},    s = 1, …, S − 1,    (10)

with 0 = β_0 < β_1 < … < β_S = 1. The AIS sampling procedure begins by drawing one sample f_1 from g_0. After that, for s = 2, …, S, a new f_s is obtained from f_{s−1} by iterating a transition operator T_{s−1} that leaves the normalized version of g_{s−1} invariant. Finally, computing the average of the following weights,

w^(i) = Π_{s=1}^{S} [ g_s(f_s^(i)) / g_{s−1}(f_s^(i)) ],    (11)

yields an unbiased estimate of the ratio of the normalizing constants of g_S and g_0, which immediately yields an unbiased estimate of p(y | θ) (in the two cases considered here g_0 is a normalized density, so its normalizing constant is one). For numerical reasons, it is safer to implement the calculations using logarithm transformations. Also, note that although the annealing strategy is inherently serial, the computations with respect to multiple importance samples can be parallelized. We now analyze two ways of implementing AIS for GP models, which are visually illustrated in fig. 1.
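The following is a hedged sketch of the resulting AIS estimator on the log scale, written for a generic initial distribution g_0 that can be sampled and evaluated, the unnormalized target g_S(f) = p(y | f) p(f | θ), a user-supplied transition operator (for example one Ell-SS step adapted to the tempered density), and a given β schedule; the interface is an assumption, not the paper's code.

```python
import numpy as np

def ais_log_marginal_likelihood(sample_g0, log_g0, log_gS, transition, betas, n_imp, rng):
    """AIS estimate of log p(y | theta) (eqs. 9-11).

    sample_g0(): draws f from the (normalized) initial distribution g_0,
    log_g0(f), log_gS(f): log densities of g_0 and of the unnormalized posterior,
    transition(f, log_g): one MCMC step (e.g. Ell-SS) leaving exp(log_g) invariant,
    betas: increasing schedule 0 = beta_0 < ... < beta_S = 1.
    """
    def log_g(f, beta):
        # Intermediate unnormalized log-density, eq. 10 (geometric bridge).
        return beta * log_gS(f) + (1.0 - beta) * log_g0(f)

    log_w = np.zeros(n_imp)
    for i in range(n_imp):                      # independent annealing runs (parallelizable)
        f = sample_g0()
        for s in range(1, len(betas)):
            # Accumulate the incremental log-weight log g_s(f) - log g_{s-1}(f).
            log_w[i] += log_g(f, betas[s]) - log_g(f, betas[s - 1])
            if s < len(betas) - 1:
                # Move f with a transition that leaves g_s invariant.
                f = transition(f, lambda x, b=betas[s]: log_g(x, b))
    return np.logaddexp.reduce(log_w) - np.log(n_imp)
```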

IV-A Annealing from the prior

When annealing from the prior, the intermediate distributions are between g_0(f) = p(f | θ) and g_S(f) = p(y | f) p(f | θ), namely

g_s(f) = p(y | f)^{β_s} p(f | θ).    (12)

Employing Ell-SS as a transition operator T_s for the intermediate unnormalized distributions is straightforward, as the log-likelihood is simply scaled by β_s. Annealing from the prior was proposed in [11], where it was reported that a large number of annealed distributions was employed. This is because the prior and the posterior can look very different (see fig. 1), and the only way to ensure a smooth transition from the prior to the posterior is to use several intermediate distributions. This is problematic from a computational perspective, as the calculation of the marginal likelihood has to be carried out at each iteration of the PM approach to sample from the posterior distribution over θ. We therefore propose an alternative starting distribution that leads to a reduction in the number of intermediate distributions, while obtaining estimates of the marginal likelihood that are accurate enough to ensure good sampling efficiency when used in the PM MCMC approach.

Iv-B Annealing from an approximating distribution

Several Gaussian-based approximation schemes to integrate out latent variables have been proposed for GP models [25, 26]. When an approximation q(f | y, θ) to the posterior over latent variables is available, it might be reasonable to construct the sequence of intermediate distributions in AIS starting from it rather than from the prior. When annealing from an approximating Gaussian distribution, the intermediate distributions are between g_0(f) = q(f | y, θ) and g_S(f) = p(y | f) p(f | θ). In order to employ Ell-SS as a transition operator T_s, it is useful to write the unnormalized intermediate distributions as

g_s(f) = p(f | θ) [ p(y | f)^{β_s} ( q(f | y, θ) / p(f | θ) )^{1 − β_s} ].    (13)

In this way, the model can be interpreted as having prior p(f | θ) and a likelihood given by the term in square brackets; applying Ell-SS to this formulation is straightforward.
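Concretely, this means Ell-SS can be run against the GP prior using the logarithm of the bracketed term in eq. 13 as a surrogate log-likelihood; a one-function sketch (names are illustrative):

```python
def annealed_log_lik(f, beta, log_lik, log_q, log_prior):
    """Log of the bracketed term in eq. 13, used as the 'likelihood' in Ell-SS.

    beta: current inverse temperature, log_lik(f) = log p(y | f),
    log_q(f) = log q(f | y, theta), log_prior(f) = log p(f | theta).
    """
    return beta * log_lik(f) + (1.0 - beta) * (log_q(f) - log_prior(f))
```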

V Experimental results

The first part of this section compares the behavior of IS and AIS in the case of synthetic data. The second part reports an analysis of IS and AIS when employed in the PM MCMC approach applied to real data. In all experiments, the approximation q(f | y, θ) was based on the Laplace Approximation (LA) algorithm. Also, we imposed Gamma priors on the parameters of both the ARD and the isotropic covariance. Following the recommendations in [13, 27], intermediate distributions were defined based on a geometric spacing of the β_s. In particular, this was implemented by choosing the β_s uniformly spaced on a logarithmic scale over two consecutive ranges, with the final value set to β_S = 1. In AIS, the transitions T_s involved one iteration of Ell-SS.
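A β schedule of the kind described above could be constructed, for instance, as follows; the ranges and counts are illustrative placeholders, not the values used in the experiments.

```python
import numpy as np

# Hypothetical geometric (log-uniform) beta schedule over two consecutive ranges;
# the endpoints and counts below are placeholders only.
betas = np.concatenate([
    [0.0],                                          # beta_0 = 0 (initial distribution)
    np.logspace(-4, -1, num=20, endpoint=False),    # e.g. 20 values in [1e-4, 1e-1)
    np.logspace(-1, 0, num=10),                     # e.g. 10 values in [1e-1, 1]
])
```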

V-A Synthetic data

The aim of this section is to highlight the potential inefficiency of employing IS to obtain an unbiased estimate of the marginal likelihood and to demonstrate the effectiveness of AIS in dealing with this problem. In particular, IS can be problematic in large dimensions, namely when analyzing large amounts of data. In order to show this effect, we generated data sets with an increasing number of data n in two dimensions with a balanced class distribution. Data were generated by drawing input vectors uniformly in the unit square and by drawing a latent function from a GP with the covariance in eq. 1 with fixed σ and a single global length-scale τ. This combination of covariance parameters leads to a strongly non-Gaussian posterior distribution over the latent variables, making IS perform poorly when n is large.

In order to obtain a measure of variability of the IS and AIS estimators of the marginal likelihood, we analyze the standard deviation of the estimator of the logarithm of the marginal likelihood,

ψ(θ) = Std[ log_10 p̃(y | θ) ].    (14)

In the experiments, ψ(θ) was estimated based on repeated evaluations of the estimators; fig. 2 shows the distribution of ψ based on draws of θ from the posterior p(θ | y) obtained from a preliminary run of an MCMC algorithm. Ideally, a perfect estimator of the marginal likelihood would yield a distribution of ψ over posterior samples of θ that is degenerate at zero. In practice, the distribution of ψ indicates the variability (across posterior samples of θ) around an average value of the standard deviation of the estimator of the logarithm of the marginal likelihood. The representation on the log_10 scale is helpful to get an idea of the order of magnitude of such variability. For instance, a distribution of ψ across posterior samples of θ concentrated around one would mean that, on average, the estimates of the marginal likelihood span roughly two orders of magnitude.

Fig. 2: A measure of the quality of the IS and AIS (annealing from the prior and from an approximating distribution obtained by the LA algorithm) estimators of the marginal likelihood. The boxplots summarize the distribution of ψ in eq. 14 for values of θ drawn from the posterior p(θ | y).

Fig. 2 shows the distribution of ψ for AIS when annealing from the prior and from an approximating distribution, along with the distribution of ψ for IS as in [3]. In all methods we used the same number of importance samples N_imp. The results confirm that annealing from the prior offers much poorer estimates of the marginal likelihood than annealing from an approximating distribution, and it will not be considered further. The analysis of the results in fig. 2 reveals that, when annealing from an approximating distribution, the reduction in the variance of the estimate of the marginal likelihood compared to IS is exponential in n. When comparing the computational cost of running IS and AIS, instead, we notice that AIS increases it by a factor which scales only polynomially with n. This is because, after approximating the posterior over f (which typically costs O(n^3) operations), drawing the initial importance samples, iterating Ell-SS, and computing the weights in AIS costs O(n^2) operations; this needs to be done as many times as the number of intermediate distributions S. In IS, drawing the importance samples and computing the weights requires O(n^2) operations.

V-B Real data

TABLE I: Comparison between the average acceptance rate (in %) obtained by the PM MCMC approach using IS and AIS on the Glass, Thyroid, Breast, Pima, and Banknote data sets, for the isotropic and ARD covariances and for N_imp = 1 and N_imp = 10 importance samples. The number in parentheses represents the standard deviation of the average acceptance rate across five parallel chains.

This section reports an analysis of the PM MCMC approach applied to five UCI data sets [28] when the marginal likelihood is estimated using AIS and IS. The Glass data set is multi-class, and we turned it into a two-class data set by considering the data labelled as "window glass" as one class and the data labelled as "non-window glass" as the other class. In all data sets, features were normalized to have zero mean and unit standard deviation. All experiments were repeated varying the number of importance samples N_imp and employing the isotropic and ARD RBF covariance functions in eq. 1.

In order to tune the MH proposal, we ran a preliminary MCMC algorithm. This was initialized from the prior, and the marginal likelihood in the Hastings ratio was approximated by the LA algorithm. The proposal was then adapted to obtain an acceptance rate within a standard target range. This set-up was useful in order to avoid problems in tuning the proposal mechanism when a noisy version of the marginal likelihood is used, which may lead to a poor acceptance rate independently of the proposal mechanism. Tab. I reports the average acceptance rate when switching, after the adaptive phase, to an unbiased version of the marginal likelihood obtained by IS or AIS for different values of N_imp. The average acceptance rate was computed over the iterations collected after discarding a burn-in phase, and across five parallel chains.

The results are variable across data sets and covariance types, but the general trend is that employing AIS in the PM MCMC approach improves the acceptance rate compared to IS. In a few cases, it is striking to see how replacing an approximate marginal likelihood with an unbiased estimate in the Hastings ratio does not affect the acceptance rate, thus confirming the merits of the PM MCMC approach. In general, however, PM MCMC is affected by the use of an estimate of the marginal likelihood. In cases where this happens, AIS consistently offers a way to reduce the variance of the estimate of the marginal likelihood compared to IS, and this improves the acceptance rate.

VI Conclusions

This paper presented the application of annealed importance sampling to obtain an unbiased estimate of the marginal likelihood in GP classifiers. Annealed importance sampling for GP classifiers was previously proposed in [11], where the sequence of distributions was constructed from the prior to the posterior over latent variables. Given the difference between these two distributions, this annealing strategy requires the use of a large number of intermediate distributions, thus making the methodology impractical. This paper studied the possibility of constructing the sequence of distributions from an approximating distribution rather than from the prior, and empirically demonstrated that, compared to importance sampling, this reduces the variance of the estimator of the marginal likelihood exponentially in the number of data. Crucially, this reduction comes at a cost that is only polynomial in the number of data. Also, annealed importance sampling can be easily parallelized.

The motivation for studying this problem was to plug the unbiased estimate of the marginal likelihood into the Hastings ratio in order to obtain an MCMC approach sampling from the correct posterior distribution over covariance parameters. The results on real data show that employing importance sampling within the pseudo-marginal MCMC approach can be satisfactory in many cases. However, in general, annealed importance sampling leads to a lower-variance estimator of the marginal likelihood, and the resulting pseudo-marginal MCMC approach significantly improves the average acceptance rate. These results suggest a promising direction of research towards the development of MCMC methods where the likelihood is estimated in an unbiased fashion, but the acceptance rate is as if the likelihood were known exactly. Given that the computational overhead scales with less than the third power of the number of data, the results indicate that this can be achieved at an acceptable computational cost.

This paper considered GP classification as a working example, and the Laplace approximation algorithm to obtain the importance distribution. A matter of current investigation is the application of the proposed methodology to other GP models and other approximation schemes. Furthermore, this paper focused on the case of full covariance matrices. These results can be extended to deal with sparse inverse covariance matrices, which are popular when modeling spatio-temporal data, thus opening the possibility of processing massive amounts of data through sparse algebra routines. Finally, this paper did not attempt to optimize the annealing scheme, but it would be sensible to do so in order to minimize the variance of the annealed importance sampling estimator of the marginal likelihood [29].

References