Gaussian process (GP) regression has emerged as a powerful yet practical class of non-parametric Bayesian models that quantify the uncertainty of the underlying process using Gaussian distributions (Rasmussen and Williams, 2006). Gaussian processes are commonly applied to time-series interpolation, regression and classification, where the GP provides full predictive distributions (Rasmussen and Williams, 2006).
The standard GP model assumes that the model parameters stay constant over the input space. This includes the observational noise variance, as well as the signal variance and the lengthscale of the covariance function. The signal variance determines the signal amplitude, while the characteristic lengthscale defines the local ‘support’ neighborhood of the function. In many real-world problems either the noise variance or the signal smoothness, or both, vary over the input space, implying a heteroscedastic noise model or nonstationary function dynamics, respectively (Le et al., 2005; see also Wang and Neal, 2012). In both cases, the analytical posterior of the GP becomes intractable (Tolvanen et al., 2014). For instance, in biological studies, rapid signal changes are often observed shortly after perturbations, with the signal becoming smoother over time (Heinonen et al., 2015).
Several authors have proposed extending GPs by modelling a latent noise variance as another GP, and by inferring the unknown function and the noise model in a maximum likelihood (ML) (Kersting et al., 2007) or maximum a posteriori (MAP) fashion (Quadrianto et al., 2009). Fully Bayesian inference methods include MCMC sampling (Goldberg, 1998) and variational and expectation propagation approximations of the posterior (Lazaro-Gredilla and Titsias, 2011; Tolvanen et al., 2014). Non-stationarities can also be included in the signal variance or the lengthscale with Gaussian process priors. A nonstationary lengthscale was introduced by Gibbs (1997) and further extended by Paciorek and Schervish (2004) with MCMC inference. Recently, Tolvanen et al. (2014) introduced a non-stationary signal variance using expectation propagation and approximate variational inference.
In this paper we introduce the first fully non-stationary and heteroscedastic GP regression framework, in which all three main components (the noise variance, the signal variance and the lengthscale) can be simultaneously input-dependent, with GP priors (a Matlab implementation is available from github.com/markusheinonen/adaptivegp). We propose an inference method for the exact joint posterior of the underlying signal and all three latent functions, avoiding the need for variational or expectation propagation approximations (Lazaro-Gredilla and Titsias, 2011; Tolvanen et al., 2014). We use HMC-NUTS, which can effectively sample the posterior guided by the model gradients, which we derive analytically. Furthermore, an exact MAP solution arises as a simple gradient ascent of the posterior. We enhance both approaches by posterior whitening using Cholesky decompositions of the latent function priors. Our experiments demonstrate the necessity of non-stationary GP regression for modelling realistic input-dependent dynamics, while the proposed method performs comparably to conventional stationary or previous non-stationary GP regression models otherwise.
In Section 2 we introduce the fully nonstationary GP model. In its subsections we first introduce MAP and HMC inference, discuss model whitening and finally define the predictive distributions. Section 3 presents experimental results on several synthetic datasets and one real biological dataset, and we conclude in Section 4.
2 Heteroscedastic nonstationary GP model
Let $\mathbf{y} = (y_1, \ldots, y_n)^\top$ be an observation vector over inputs $\mathbf{x}_1, \ldots, \mathbf{x}_n$. We assume an additive regression model
$$y_i = f(\mathbf{x}_i) + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \omega(\mathbf{x}_i)^2), \tag{1}$$
where both the underlying signal $f(\mathbf{x})$ and the zero-mean observation noise variance $\omega(\mathbf{x})^2$ are unknown functions to be learned. We proceed by first placing a zero-mean GP prior on the unknown function $f$,
which assumes that the output covariances depend on the inputs through a kernel function. We use a nonstationary generalisation of the squared exponential kernel (Gibbs, 1997)
$$K(\mathbf{x}, \mathbf{x}') = \sigma(\mathbf{x})\,\sigma(\mathbf{x}') \sqrt{\frac{2\,\ell(\mathbf{x})\,\ell(\mathbf{x}')}{\ell(\mathbf{x})^2 + \ell(\mathbf{x}')^2}} \exp\!\left( -\frac{(\mathbf{x} - \mathbf{x}')^2}{\ell(\mathbf{x})^2 + \ell(\mathbf{x}')^2} \right), \tag{2}$$
where $\sigma(\mathbf{x})$ and $\ell(\mathbf{x})$ are the input-dependent signal variance and lengthscale functions, respectively. The kernel reduces to the standard squared exponential kernel if both are constant. We show that the kernel (2) is positive definite in the Supplementary Material.
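For illustration, the nonstationary kernel above can be sketched in a few lines of NumPy for 1-D inputs. This is our own sketch, not the authors' Matlab implementation; the argument names (`sig1`, `ell1`, etc. holding the latent function values at each input) are assumptions.

```python
import numpy as np

def gibbs_kernel(x1, x2, sig1, sig2, ell1, ell2):
    """Nonstationary squared exponential kernel (Gibbs, 1997) for 1-D inputs.

    sig* and ell* are the signal-variance and lengthscale functions evaluated
    at x1 and x2. Rows of the returned matrix index x1, columns index x2."""
    L2 = ell1[:, None] ** 2 + ell2[None, :] ** 2          # ell(x)^2 + ell(x')^2
    pre = np.sqrt(2.0 * ell1[:, None] * ell2[None, :] / L2)
    sq = (x1[:, None] - x2[None, :]) ** 2
    return sig1[:, None] * sig2[None, :] * pre * np.exp(-sq / L2)
```

With constant `sig` and `ell` the prefactor collapses to 1 and the expression reduces to the standard squared exponential kernel, as stated in the text.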
We model the lengthscale, signal variance and noise variance with latent functions. We are interested in smoothly varying latent functions and thus we place separate GP priors on them as well:
where we set the priors on the logarithms to ensure their positivity. We select a separate standard squared exponential covariance for each,
$$K_c(\mathbf{x}, \mathbf{x}') = \alpha_c^2 \exp\!\left( -\frac{(\mathbf{x} - \mathbf{x}')^2}{2\beta_c^2} \right), \qquad c \in \{\ell, \sigma, \omega\}.$$
The model has nine hyper-parameters, a mean $\mu_c$, a scale $\alpha_c$ and a lengthscale $\beta_c$ for each of the three latent functions $\ell(\mathbf{x})$, $\sigma(\mathbf{x})$ and $\omega(\mathbf{x})$. The $\mu$'s determine the latent function means, while the $\alpha$'s are scaling terms. The $\beta$'s are the characteristic lengthscales of the priors. In practice, the $\mu$'s and $\alpha$'s have a small effect on the models, whereas the $\beta$'s determine the smoothness of the latent functions, and can be set based on prior knowledge.
Given a dataset $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^n$, the model can be equivalently written as $\mathbf{f} \,|\, \boldsymbol{\ell}, \boldsymbol{\sigma} \sim \mathcal{N}(\mathbf{0}, K_f)$, where $\mathbf{f} = (f(\mathbf{x}_1), \ldots, f(\mathbf{x}_n))^\top$ is the latent function vector at the observed points and $K_f$ has elements computed using eq. (2) with the signal variances $\boldsymbol{\sigma} = (\sigma(\mathbf{x}_1), \ldots, \sigma(\mathbf{x}_n))^\top$ and lengthscales $\boldsymbol{\ell} = (\ell(\mathbf{x}_1), \ldots, \ell(\mathbf{x}_n))^\top$. Finally, the data likelihood is $\mathbf{y} \,|\, \mathbf{f}, \boldsymbol{\omega} \sim \mathcal{N}(\mathbf{f}, \Omega)$, where $\Omega = \operatorname{diag}(\omega(\mathbf{x}_1)^2, \ldots, \omega(\mathbf{x}_n)^2)$ is a diagonal noise matrix and the $\omega(\mathbf{x}_i)$ are the noise standard deviations.
We note that by placing a GP prior on just the noise and restricting the other two components to constants, we arrive at the heteroscedastic model studied in several earlier works (Goldberg, 1998; Kersting et al., 2007; Quadrianto et al., 2009). Setting a prior on only the lengthscale retrieves the models of Gibbs (1997) and Paciorek and Schervish (2004), and setting a prior on both the signal variance and the noise gives the model of Tolvanen et al. (2014). Our method is the first to combine heteroscedastic noise and a nonstationary lengthscale, while also allowing the signal variance to vary over the inputs.
To infer the latent functions from the full posterior we introduce two approaches in the next two sections (in the following we omit the hyperparameters $\theta$ for notational clarity). We propose either to learn the MAP estimate, or to infer the full posterior using HMC sampling. Both approaches are based on the analytical gradients with respect to the latent functions.
2.1 Maximum a posteriori estimation
In MAP estimation we maximise the posterior of the three latent vectors, from which the function $\mathbf{f}$ has been marginalised out. Using Bayes' theorem this is equivalent to maximizing the product of the marginal likelihood and the latent priors,
$$p(\mathbf{y} \,|\, \boldsymbol{\ell}, \boldsymbol{\sigma}, \boldsymbol{\omega})\, p(\boldsymbol{\ell})\, p(\boldsymbol{\sigma})\, p(\boldsymbol{\omega}) = \mathcal{N}(\mathbf{y} \,|\, \mathbf{0}, K_f + \Omega)\, p(\boldsymbol{\ell})\, p(\boldsymbol{\sigma})\, p(\boldsymbol{\omega}), \tag{3}$$
whose logarithm we denote as the marginal log likelihood (MLL).
The partial derivatives of the marginal log likelihood (3) with respect to the latent function values are analytical:
$$\frac{\partial \log p}{\partial \lambda_i} = \frac{1}{2} \operatorname{tr}\!\left( (\boldsymbol{\alpha}\boldsymbol{\alpha}^\top - K_y^{-1}) \frac{\partial K_y}{\partial \lambda_i} \right) + \frac{\partial \log p(\boldsymbol{\lambda})}{\partial \lambda_i}, \qquad \lambda_i \in \{\ell_i, \sigma_i, \omega_i\},$$
where $K_y = K_f + \Omega$, $\boldsymbol{\alpha} = K_y^{-1} \mathbf{y}$, and $\frac{\partial K_y}{\partial \lambda_i}$ is given in the Supplementary Material.
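As a sanity check, the trace-form gradient of the Gaussian marginal log likelihood can be verified against finite differences. The sketch below uses our own notation (a generic kernel-derivative matrix `dKy`), not the paper's code.

```python
import numpy as np

def log_marginal(y, Ky):
    # log N(y | 0, Ky), computed via a Cholesky factorisation for stability.
    L = np.linalg.cholesky(Ky)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.sum(np.log(np.diag(L)))
            - 0.5 * len(y) * np.log(2.0 * np.pi))

def grad_log_marginal(y, Ky, dKy):
    # 0.5 * tr((alpha alpha^T - Ky^{-1}) dKy), with alpha = Ky^{-1} y.
    alpha = np.linalg.solve(Ky, y)
    return 0.5 * np.trace((np.outer(alpha, alpha) - np.linalg.inv(Ky)) @ dKy)
```

The prior term of the gradient is omitted here; it only adds the derivative of the latent log prior.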
We perform gradient ascent over the MLL to obtain the MAP estimate $(\hat{\boldsymbol{\ell}}, \hat{\boldsymbol{\sigma}}, \hat{\boldsymbol{\omega}})$. The solution is only guaranteed to converge to a local optimum, and hence we perform multiple restarts from random initial conditions. The MAP solution is adequate when the posterior is close to unimodal.
Given the MAP solution, the function posterior is a Gaussian
$$\mathbf{f} \,|\, \mathbf{y} \sim \mathcal{N}\!\left( K_f (K_f + \Omega)^{-1} \mathbf{y},\; K_f - K_f (K_f + \Omega)^{-1} K_f \right),$$
where $K_f$ has been computed with eq. (2) using the MAP latent vectors $\hat{\boldsymbol{\ell}}$ and $\hat{\boldsymbol{\sigma}}$, and $\Omega = \operatorname{diag}(\hat{\boldsymbol{\omega}}^2)$.
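The MAP function posterior is a standard Gaussian conditional; a minimal sketch, assuming a precomputed kernel matrix and noise variances (our own helper, not from the paper):

```python
import numpy as np

def gp_posterior(y, K_f, omega2):
    # f | y ~ N(K_f (K_f + Omega)^{-1} y, K_f - K_f (K_f + Omega)^{-1} K_f)
    Ky = K_f + np.diag(omega2)
    mean = K_f @ np.linalg.solve(Ky, y)
    cov = K_f - K_f @ np.linalg.solve(Ky, K_f)
    return mean, cov
```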
2.2 HMC inference
As a second approach we sample the full posterior using Hamiltonian Monte Carlo (HMC) (Hoffman and Gelman, 2014; Neal, 2011). In HMC, we introduce an additional momentum variable for each model variable and interpret the extended model as a Hamiltonian system. We simulate the time evolution of the Hamiltonian dynamics to produce proposals for the Metropolis algorithm. This simulation step makes use of the gradient of the log joint density.
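The Hamiltonian simulation at the heart of HMC is the leapfrog integrator; below is a generic sketch (not the NUTS variant used in the paper) for a target whose log-density gradient is `grad_logp`, an assumed callable of our own naming.

```python
import numpy as np

def leapfrog(q, p, grad_logp, step, n_steps):
    # Leapfrog integration of Hamiltonian dynamics: a half momentum step,
    # alternating full position/momentum steps, and a final half momentum step.
    q, p = np.copy(q), np.copy(p)
    p = p + 0.5 * step * grad_logp(q)
    for _ in range(n_steps - 1):
        q = q + step * p
        p = p + step * grad_logp(q)
    q = q + step * p
    p = p + 0.5 * step * grad_logp(q)
    return q, p
```

The symplectic leapfrog nearly conserves the Hamiltonian, which is what makes the Metropolis acceptance rates of the resulting proposals high.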
In GP regression, however, the function posterior, with the latent functions marginalised out, can be integrated conveniently by
$$p(\mathbf{f} \,|\, \mathbf{y}) = \int p(\mathbf{f} \,|\, \boldsymbol{\ell}, \boldsymbol{\sigma}, \boldsymbol{\omega}, \mathbf{y})\, p(\boldsymbol{\ell}, \boldsymbol{\sigma}, \boldsymbol{\omega} \,|\, \mathbf{y})\, d\boldsymbol{\ell}\, d\boldsymbol{\sigma}\, d\boldsymbol{\omega} \approx \frac{1}{S} \sum_{s=1}^{S} p(\mathbf{f} \,|\, \boldsymbol{\ell}_s, \boldsymbol{\sigma}_s, \boldsymbol{\omega}_s, \mathbf{y}),$$
where $(\boldsymbol{\ell}_s, \boldsymbol{\sigma}_s, \boldsymbol{\omega}_s)_{s=1}^S$ are HMC samples of the latent posterior. The function posterior for each HMC sample is a Gaussian
$$\mathbf{f} \,|\, \boldsymbol{\ell}_s, \boldsymbol{\sigma}_s, \boldsymbol{\omega}_s, \mathbf{y} \sim \mathcal{N}\!\left( K_s (K_s + \Omega_s)^{-1} \mathbf{y},\; K_s - K_s (K_s + \Omega_s)^{-1} K_s \right),$$
where $K_s$ is the nonstationary kernel matrix computed using $\boldsymbol{\ell}_s$ and $\boldsymbol{\sigma}_s$, and $\Omega_s = \operatorname{diag}(\boldsymbol{\omega}_s^2)$ is the diagonal noise covariance matrix.
Hence, we only need to perform HMC sampling over the three latent vectors, and the posterior of $\mathbf{f}$ follows analytically as a mixture of Gaussians. The latent posterior is proportional to the marginal likelihood in eq. (3), and thus the HMC sampling of the latent vectors uses the same gradients from eq. (2.1) as the MAP solution.
2.3 Posterior whitening
The posterior of the latent vectors is by definition highly correlated due to the Gaussian priors, leading to inefficient Monte Carlo sampling. To ease the sampling, we perform the sampling over the whitened latent vectors (Kuss and Rasmussen, 2005)
$$\tilde{\boldsymbol{\ell}} = L_\ell^{-1} \boldsymbol{\ell}, \qquad \tilde{\boldsymbol{\sigma}} = L_\sigma^{-1} \boldsymbol{\sigma}, \qquad \tilde{\boldsymbol{\omega}} = L_\omega^{-1} \boldsymbol{\omega},$$
with Cholesky decompositions $K_c = L_c L_c^\top$ of the corresponding GP prior covariances, which are fixed by the hyperparameters $\theta$. The derivative for e.g. the lengthscale becomes $\frac{\partial \log p}{\partial \tilde{\boldsymbol{\ell}}} = L_\ell^\top \frac{\partial \log p}{\partial \boldsymbol{\ell}}$, where the last term is the standard gradient of the non-whitened model defined in eq. (2.1).
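Whitening and its inverse amount to a triangular solve and a multiply against the prior Cholesky factor. A sketch with our own function names; the small jitter term added for numerical stability is our assumption, not part of the paper's model.

```python
import numpy as np

def whiten(v, K_prior, jitter=1e-8):
    # v_tilde = L^{-1} v, where K_prior = L L^T (lower Cholesky factor).
    L = np.linalg.cholesky(K_prior + jitter * np.eye(len(v)))
    return np.linalg.solve(L, v), L

def unwhiten(v_tilde, L):
    # Recover the correlated vector: v = L v_tilde.
    return L @ v_tilde
```

The whitened gradient is then `L.T @ grad`, matching the chain rule stated in the text.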
2.4 Making predictions
Both the MAP solution and the HMC sampler infer values of the latent functions only at the observed inputs. To extrapolate the values of the unknown function and the latent functions over arbitrary target points $X_*$, we define the predictive distribution, where we extrapolate the latent functions to $X_*$, and then express the function posterior with them (Goldberg, 1998). With the MAP solution we have
$$p(\mathbf{f}_* \,|\, \mathbf{y}) = \int p(\mathbf{f}_* \,|\, \boldsymbol{\ell}_*, \boldsymbol{\sigma}_*, \boldsymbol{\omega}_*, \mathbf{y})\, p(\boldsymbol{\ell}_* \,|\, \hat{\boldsymbol{\ell}})\, p(\boldsymbol{\sigma}_* \,|\, \hat{\boldsymbol{\sigma}})\, p(\boldsymbol{\omega}_* \,|\, \hat{\boldsymbol{\omega}})\, d\boldsymbol{\ell}_*\, d\boldsymbol{\sigma}_*\, d\boldsymbol{\omega}_*,$$
where we approximate the integral by drawing samples of the latent vectors at the target points from the conditional Gaussians (see Supplementary Material). This results in a mixture of the corresponding Gaussians, whose kernel matrices are computed with eq. (2) over the latent vectors at the observed inputs and at the target inputs $X_*$. The simplest approximation is to take only a single sample from the conditionals by choosing the conditional means (see Supplementary Material). This is a sufficient approximation if the observed inputs are sufficiently dense.
The predictive distribution given the HMC samples is derived analogously. We average over the HMC samples instead of a single MAP solution, and over the latent vectors drawn from the conditionals, resulting in a mixture of Gaussians whose kernel matrices are computed using the sampled latent vectors at the observed and target inputs.
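Averaging the per-sample Gaussian predictives gives a mixture whose pointwise mean and variance follow from the law of total variance. A sketch under our own naming assumptions, with each sample contributing its kernel pieces `(K, Ks, Kss, omega2)`:

```python
import numpy as np

def predictive_mixture(y, samples):
    # samples: list of (K, Ks, Kss, omega2) kernel pieces, one per draw.
    # Mixture mean = average of per-sample means; mixture variance combines
    # per-sample variances and means via the law of total variance.
    means, variances = [], []
    for K, Ks, Kss, omega2 in samples:
        Ky = K + np.diag(omega2)
        A = np.linalg.solve(Ky, Ks)                   # Ky^{-1} K_*
        means.append(A.T @ y)                         # K_*^T Ky^{-1} y
        variances.append(np.diag(Kss) - np.sum(Ks * A, axis=0))
    means, variances = np.array(means), np.array(variances)
    m = means.mean(axis=0)
    v = (variances + means ** 2).mean(axis=0) - m ** 2
    return m, v
```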
We note that a slower but perhaps more elegant alternative is to model the latent functions jointly over the concatenated observed and target inputs, and analogously for the other functions. In this case the function posterior already contains the predictive posterior, but the latent vector sizes increase accordingly.
3 Experiments

We assess the performance of the proposed method on several synthetic and real datasets. We experiment with 8 synthetic datasets and a gene expression time series dataset (Heinonen et al., 2015). Of the synthetic datasets, three are from the literature: the motorcycle dataset M (Silverman, 1985), the ‘jump’ dataset J (Paciorek and Schervish, 2004) and a nonstationary dataset T from GPstuff (demo_epinf in Vanhatalo et al. (2013)). We also generated five synthetic datasets with different types of nonstationarities (see Table 1). We expect datasets exhibiting specific types of input-dependent characteristics to require a model with the corresponding input-dependencies.
|M|133|67|Motorcycle dataset (Silverman, 1985)|
|J|N/A|101|50|Jump dataset (3rd) (Paciorek and Schervish, 2004)|
|T|500|250|From GPstuff demo_epinf (Vanhatalo et al., 2013)|
All dataset outputs and inputs are normalised to fixed ranges. For each dataset, we use half of the data as training data and the rest as test data. We assess the performance against the test data with the mean squared error (MSE) and the negative log probability density (NLPD) values. For consistency, we model stationary parameters as constant vectors of length $n$.
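The two test metrics can be computed directly from a model's predictive mean and variance. A minimal sketch of our reading of the metrics, with NLPD taken pointwise under a Gaussian predictive:

```python
import numpy as np

def mse(y_true, mu):
    # Mean squared error of the predictive mean.
    return np.mean((y_true - mu) ** 2)

def nlpd(y_true, mu, var):
    # Average negative log density of test points under N(mu, var).
    return np.mean(0.5 * np.log(2.0 * np.pi * var)
                   + 0.5 * (y_true - mu) ** 2 / var)
```

Unlike MSE, NLPD rewards well-calibrated predictive variances, which is why it separates the nonstationary models in the experiments below.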
We run MAP optimisation from 10 different initial conditions and choose the solution with the highest MLL value. We run 10 chains of 1000 samples of HMC-NUTS sampling using model whitening (Algorithm 3 of Hoffman and Gelman (2014), with fixed target acceptance rate and maximum tree depth). The hyperparameters and the remaining parameters are set to sensible fixed values.
3.1 Regression performance
Table 2 shows the MSE and NLPD performance on the test folds of the synthetic datasets using various MAP GP models. For each dataset, the model with the smallest or second smallest NLPD value is the one whose nonstationarities match those of the dataset. For instance, the dataset T contains heteroscedastic noise and input-dependent dynamics; accordingly, the best performance is obtained with a matching heteroscedastic GP and with the fully nonstationary GP as well.
In terms of MSE the stationary GP fares slightly better, albeit still worse than the optimal non-stationary method; this holds for every dataset.
Adding ‘unnecessary’ nonstationarities retains or only slightly worsens the performance, with the major exception of the ‘jump’ dataset J. Here, the lengthscale is clearly input-dependent and the nonstationary-lengthscale model is optimal in NLPD, while in contrast the nonstationary signal variance is unable to model the data. Adding heteroscedastic noise to any of the models of this dataset weakens the model.
3.2 HMC performance
We explored the difference between the MAP solution and the HMC sampling. In practice we found MAP to be slightly better on average in terms of the MSE and NLPD values (see Figure 1a). However, the sampling solution is robust against multimodality in the latent posterior. Figure 1b shows the test errors of the individual HMC samples in comparison to the MAP solution on the dataset D using the nonstationary GP model. The HMC solution includes numerous samples that are better than MAP, while being slightly worse on average.
The dataset D contains several latent modes (see Figure 2, bottom), which the HMC sampler captures. These modes include latent functions that imply a ‘shortcut’ or a ‘zigzag’ signal around two timepoints, or both. The HMC samples are centered mostly around the shortcut profile at the earlier timepoint, while only a few samples with the shortcut profile exist at the later timepoint. The MAP solution has chosen both zigzags. The latent posterior shows the largest variance in the signal variance component, while the lengthscale and noise variance have tighter distributions.
3.3 Biological dataset
We showcase the method with a biological dataset of gene expression time series measurements of human endothelial cells after irradiation. Due to the irradiation the dataset exhibits nonstationary dynamics as the cells try to repair themselves and revert back to steady states. The gene expressions are measured over 8 days in three replicates (Heinonen et al., 2015). The goal is to construct a realistic model of the underlying gene expression process and its dynamics with no knowledge of the ‘true’ expression levels, given only a small number of sparse measurements.
We modeled the dataset using a stationary GP, a heteroscedastic GP and three nonstationary GP variants, and found the performance of the three nonstationary GPs to be similar. Figure 3 indicates the MLL, MSE and NLPD values of the 205 time series under the stationary, heteroscedastic and fully nonstationary models. The addition of heteroscedasticity greatly improves the model fits, while also improving the data likelihoods against the function posterior. Finally, the fully nonstationary GP improves the model fits further, while consistently improving the NLPD values, with MSE performance similar to the heteroscedastic GP. Figure 4 compares the three models learned from an example gene expression time series.
3.4 Latent function reconstruction
Finally, we highlight the method's potential in reconstructing the latent parameter processes. We simulate a noisy sample for which the true generating latent parameter functions are known. We compute both the MAP solution and the HMC posterior of the latent space and of the unknown function, and showcase the results in Figure 5. The latent function reconstruction error against an increasing number of noisy points shows that the regression error, the lengthscale and the noise variance converge to their true values (Figure 5, top), while the signal variance shows a small bias but is still well estimated.
Figure 5, bottom, highlights the MAP and HMC solutions given the datapoints, and compares them to the state-of-the-art nonstationary model of Tolvanen et al. (2014). We are able to accurately estimate the latent functions.
4 Discussion

In this paper, we have proposed a fully non-stationary Gaussian process regression framework, where all three key components can be input-dependent. Our approach uses analytical gradients to perform inference with HMC sampling and MAP estimation, allowing us to effectively sample from the exact posterior of the latent functions. We have shown that the method is able to infer the underlying latent functions and improve regression performance when the datasets truly are nonstationary, and to achieve performance equivalent to a stationary model when they are not.
The interplay between the signal variance and the lengthscale is an interesting topic (Diggle et al., 1998; Zhang, 2004). When modeling the ‘jump’ dataset, the signal variance was unable to capture the dynamics, while a nonstationary lengthscale produced a good model. This is natural, since the signal variance serves as a linear amplitude, while the lengthscale has a possibly non-linear effect on the function model.
The gradient-based HMC is a powerful inference tool for Gaussian processes, and could be further enhanced by utilizing natural gradients or position-dependent mass matrices with Riemannian Manifold HMC (Girolami and Calderhead, 2011). We note that the method could be extended by also inferring the hyperparameters using HMC. However, proper care has to be taken to set their priors.
Comparison between vanilla and nonstationary GP

A comparison between the stationary (vanilla) GP and the nonstationary, heteroscedastic GP is shown in Supplemental Figure 6. The nonstationary, heteroscedastic GP (right) fits the nonstationary dataset better than the vanilla GP (left). The vanilla GP overestimates the confidence intervals at stationary regions, as a result of having to choose a lengthscale that fits the nonstationary regions, and for the same reason it produces spurious oscillations in regions where there are no observations. The dataset is taken from Table 2.
Kernel positive definiteness proof
The conditional distributions of the latent functions at the target timepoints, given the latent functions at the observed timepoints, are of the standard Gaussian conditional form, e.g. for the lengthscale
$$\boldsymbol{\ell}_* \,|\, \boldsymbol{\ell} \sim \mathcal{N}\!\left( \mu_\ell + K_*^\top K^{-1} (\boldsymbol{\ell} - \mu_\ell),\; K_{**} - K_*^\top K^{-1} K_* \right),$$
where $K = K_\ell(X, X)$, $K_* = K_\ell(X, X_*)$ and $K_{**} = K_\ell(X_*, X_*)$ are standard Gaussian kernels computed using the hyperparameters $\theta$ (Rasmussen and Williams, 2006).
The partial derivatives of the unconstrained latent functions against the marginal log likelihood are analytically derived. The partial derivative for the lengthscale value $\ell_i$ involves the kernel derivative matrix $\partial K_f / \partial \ell_i$, which becomes a ‘plus’-shaped matrix where only the $i$'th column and row are nonzero.
The derivatives for the signal variance and the noise variance are derived analogously. The partial derivatives of the parameters in the stationary formulation are standard (Rasmussen and Williams, 2006).
- Diggle et al. (1998) P. Diggle, J. Tawn, and R. Moyeed. Model-based geostatistics. Journal of the Royal Statistical Society, Series C, 47(3):299–350, 1998.
- Gibbs (1997) M. N. Gibbs. Bayesian Gaussian Processes for Regression and Classification. PhD thesis, Department of Physics, University of Cambridge, 1997.
- Girolami and Calderhead (2011) M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society, Series B, 73(2):123–214, 2011.
- Goldberg (1998) P. Goldberg. Regression with input-dependent noise: A Gaussian process treatment. In NIPS, 1998.
- Heinonen et al. (2015) M. Heinonen, O. Guipaud, F. Milliat, V. Buard, B. Micheau, G. Tarlet, M. Benderitter, F. Zehraoui, and F. d’Alche Buc. Detecting time periods of differential gene expression using Gaussian processes: An application to endothelial cells exposed to radiotherapy dose fraction. Bioinformatics, 2015.
- Hoffman and Gelman (2014) M. Hoffman and A. Gelman. The No-U-Turn Sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15:1593–1623, 2014.
- Kersting et al. (2007) K. Kersting, C. Plagemann, P. Pfaff, and W. Burgard. Most likely heteroscedastic Gaussian process regression. In ICML, pages 393–400, 2007.
- Kuss and Rasmussen (2005) M. Kuss and C.E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679–1704, 2005.
- Lazaro-Gredilla and Titsias (2011) M. Lazaro-Gredilla and M. K. Titsias. Variational heteroscedastic Gaussian process regression. In ICML, 2011.
- Le et al. (2005) Q. Le, A. Smola, and S. Canu. Heteroscedastic Gaussian process regression. In ICML, pages 489–496, 2005.
- Neal (2011) R. Neal. MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo, chapter 5. CRC Press, 2011.
- Paciorek and Schervish (2004) C. Paciorek and M. J. Schervish. Nonstationary covariance functions for Gaussian process regression. In Advances in Neural Information Processing Systems, volume 16. MIT Press, 2004.
- Quadrianto et al. (2009) N. Quadrianto, K. Kersting, M. Reid, T. Caetano, and W. Buntine. Kernel conditional quantile estimation via reduction revisited. In IEEE International Conference on Data Mining, 2009.
- Rasmussen and Williams (2006) C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
- Shawe-Taylor and Cristianini (2004) J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
- Silverman (1985) B. Silverman. Some aspects of the spline smoothing approach to non-parametric regression curve fitting. Journal of the Royal Statistical Society, 47:1–52, 1985.
- Tolvanen et al. (2014) V. Tolvanen, P. Jylänki, and A. Vehtari. Approximate inference for nonstationary heteroscedastic Gaussian process regression. In Machine Learning for Signal Processing, 2014.
- Vanhatalo et al. (2013) J. Vanhatalo, J. Riihimäki, J. Hartikainen, P. Jylänki, V. Tolvanen, and A. Vehtari. GPstuff: Bayesian modeling with Gaussian processes. Journal of Machine Learning Research, 14:1175–1179, 2013.
- Wang and Neal (2012) C. Wang and R. Neal. Gaussian process regression with heteroscedastic or non-Gaussian residuals. arXiv, 2012.
- Zhang (2004) H. Zhang. Inconsistent estimation and asymptotically equal interpolations in model-based geostatistics. Journal of the American Statistical Association, 99(465):250–261, 2004.