Scalable Gaussian Processes for Characterizing Multidimensional Change Surfaces

11/13/2015 · William Herlands et al. · Carnegie Mellon University · University of Oxford · Parallels IP Holdings GmbH

We present a scalable Gaussian process model for identifying and characterizing smooth multidimensional changepoints, and automatically learning changes in expressive covariance structure. We use Random Kitchen Sink features to flexibly define a change surface in combination with expressive spectral mixture kernels to capture the complex statistical structure. Finally, through the use of novel methods for additive non-separable kernels, we can scale the model to large datasets. We demonstrate the model on numerical and real world data, including a large spatio-temporal disease dataset where we identify previously unknown heterogeneous changes in space and time.


1 Introduction

In human systems we are often confronted with changes or perturbations which may not immediately disrupt an entire system. Instead, changes such as policy interventions take time to affect deeply held habits or trickle through a complex bureaucracy. The dynamics of these changes are non-trivial, with sophisticated spatial distributions, rates, and intensity functions. Using expressive models to fully characterize such changes is essential for making accurate predictions and yielding scientifically relevant results.

Typically, changepoint methods (Chernoff and Zacks, 1964) model system perturbations as discrete, or near-discrete, changepoints. These points are either identified sequentially using online algorithms, or retrospectively. Here we consider retrospective analysis (Brodsky and Darkhovsky, 2013; Chen and Gupta, 2011).

Gaussian processes have been used for changepoint modeling to provide a nonparametric framework. Saatçi et al. (2010) extend the sequential Bayesian Online Changepoint Detection algorithm (Adams and MacKay, 2007) by using a Gaussian process to model temporal covariance within a particular regime. Similarly, Garnett et al. (2009) provide Gaussian processes for sequential changepoint detection with mutually exclusive regimes. These models focus on discrete changepoints, where regimes defined by distinct Gaussian processes change instantaneously at a changepoint. While such models may be appropriate for mechanical systems, they do not permit modeling of the complex changes common to many human systems.

A small collection of pioneering work has briefly considered the possibility of non-discrete Gaussian process change-points (Wilson, 2014; Lloyd et al., 2014). Yet these models rely on sigmoid transformations of linear functions which are restricted to fixed rates of change, and are demonstrated exclusively on small, one-dimensional time series data. They cannot expressively characterize non-linear changes or feasibly operate on large multidimensional data.

Applying changepoints to multiple dimensions, such as spatio-temporal data, is theoretically and practically non-trivial, and has thus been seldom attempted. Notable exceptions include Majumdar et al. (2005), who consider discrete spatio-temporal changepoints with three additive Gaussian processes: one for space, one for time, and one for space-time interaction. Alternatively, Nicholls and Nunn (2010) use a Bayesian onset-field process on a lattice to model the spatio-temporal distribution of human settlement on the Fiji islands.

The limitations of these models reflect a common criticism that Gaussian processes are unable to convincingly respond to changes in covariance structure. We propose addressing this deficiency with an expressive, flexible, and scalable change surface model.

Throughout the paper we refer to change surfaces as the multidimensional generalization of changepoints. Unlike the discrete notion of changepoints, a change surface can have a variable rate of change and non-monotonicity in the transition between functional regimes. Additionally, changes can occur heterogeneously across the input dimensions. We formalize the notion of a change surface through our model specification in Section 3.

1.1 Main contributions

We introduce a scalable Gaussian process model, which is capable of automatically learning expressive covariance functions, including a sophisticated continuous change surface. We derive scalable inference procedures leveraging Kronecker structure, and a lower bound on the marginal likelihood using the Weyl inequality, as a principled means for scalable kernel learning. Our contributions include:

  1. A non-discrete Gaussian process change surface model over multiple input dimensions. Our model specification learns the change surface from data, enabling it to approximate discrete changes or gradual shifts between regimes. The input can have arbitrary dimension, though we primarily focus our attention on spatio-temporal modeling over 2D space and 1D time.

  2. The first scalable Gaussian process changepoint model, achieved through novel Kronecker methods. Modern datasets require methods which can scale to hundreds of thousands of instances.

  3. A novel method for estimating the log determinant of additive positive semidefinite matrices using the Weyl inequality. This enables scalable additive Gaussian process models with non-separable kernels in space and time.

  4. Random Kitchen Sink features to sample from a Gaussian process change surface. This flexibility permits arbitrary changes which can adapt to heterogeneous effects over multiple dimensions. It also allows us to analytically optimize the entire model.

  5. We use logistic functions to normalize the weights on all latent functions (one per regime), thereby providing a very interpretable model. Additionally, we permit arbitrary specification of the change surface parameterization, allowing experts to specify interpretable models for how the change surface behaves over the input space.

  6. A novel initialization method for spectral mixture kernels by fitting a Gaussian mixture model to the Fourier transform of the data. This provides good starting values for hyperparameters of expressive stationary kernels, allowing for proper optimization over a multimodal parameter space.

  7. A nonparametric Bayesian framework for discovering and characterizing continuous changes in large observational data. We demonstrate our approach on numerical and real world data, including a recently developed public health dataset, showing how the effect of the measles vaccine introduced in the US in 1963 varied across space and time. Our model discovers the time frame in which the measles vaccine was introduced, and accurately represents the change in dynamics before and after the introduction, thus providing new insights into the spatial and temporal dynamics of reported disease incidence.

1.2 Outline

In the remainder of the paper, Section 2 provides background on Gaussian processes. Section 3 describes our change surface model, including the weighting, warping, and kernel functions. Section 4 introduces our novel algorithm for approximating the log determinant of additive kernels. Section 5 details our initialization procedure, including our new approach for spectral mixture hyperparameter initialization. Section 6 describes our numerical and real-world experiments. Finally, we conclude with summary remarks in Section 7.

2 Gaussian Processes

Given data $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{n}$, where $y_i \in \mathbb{R}$ are outputs or response variables, and $x_i \in \mathbb{R}^D$ are inputs or covariates, we assume that the responses are generated from the inputs by a latent function with a Gaussian process prior and Gaussian noise, such that $y = f(x) + \epsilon$, $f \sim \mathcal{GP}$, $\epsilon \sim \mathcal{N}(0, \sigma^2)$. A Gaussian process is a nonparametric prior over functions completely specified by mean and covariance functions:

$f(x) \sim \mathcal{GP}(\mu(x), k(x, x'))$ (1)
$\mu(x) = \mathbb{E}[f(x)]$ (2)
$k(x, x') = \mathrm{cov}(f(x), f(x'))$ (3)

Any finite collection of function values is normally distributed,

$[f(x_1), \ldots, f(x_n)]^\top \sim \mathcal{N}(\mu, K)$

where $\mu_i = \mu(x_i)$ and matrix $K_{ij} = k(x_i, x_j)$.

In order to learn hyperparameters, we often desire to optimize the marginal likelihood of the data $y$, conditioned on kernel hyperparameters $\theta$ and inputs $X$,

$p(y \mid \theta, X) = \int p(y \mid f, X)\, p(f \mid \theta, X)\, df$ (4)

In the case of a Gaussian observation model we can express the log marginal likelihood as,

$\log p(y \mid \theta, X) = -\tfrac{1}{2} y^\top (K + \sigma^2 I)^{-1} y - \tfrac{1}{2} \log\lvert K + \sigma^2 I \rvert - \tfrac{n}{2} \log 2\pi$ (5)

We assume familiarity with the basics of Gaussian processes as described by Rasmussen and Williams (2006).
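
To make Eq. (5) concrete, the following is a minimal sketch of the computation for a single RBF kernel, using a Cholesky factorization for numerical stability. The kernel choice, function names, and parameter values are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of Eq. (5), the GP log marginal likelihood, for an RBF
# kernel with noise variance sigma^2, computed via a Cholesky factorization.
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """k(x, x') = s^2 exp(-||x - x'||^2 / (2 l^2)) on row-stacked inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def log_marginal_likelihood(X, y, lengthscale=1.0, variance=1.0, noise=0.1):
    n = len(y)
    K = rbf_kernel(X, X, lengthscale, variance) + noise**2 * np.eye(n)
    L = np.linalg.cholesky(K)                            # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    # -1/2 y^T K^{-1} y - 1/2 log|K| - (n/2) log(2 pi)
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2 * np.pi))
```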

3 Smooth Change Surface Model

Change surface data consists of $r$ latent functions defining regimes in the data. The transition between any two functions is considered a change surface. Were these functions not mutually exclusive, we could consider an input dependent mixture model such as that of Wilson et al. (2011),

$y(x) = \sum_{i=1}^{r} w_i(x) f_i(x) + \epsilon$ (6)

where the weighting functions, $w_i(x)$, describe the mixing proportions over the input domain. However, for data with changing regimes we are particularly interested in latent functions that exhibit some amount of mutual exclusivity.

We induce this partial discretization with a warping function, $\sigma(\cdot)$, which has support over the entire real line but a range concentrated towards 0 and 1. Additionally, we choose $\sigma$ such that it produces a convex combination over the weighting functions, $\sum_{i=1}^{r} \sigma(w_i(x)) = 1$. In this way, each $w_i(x)$ defines the strength of latent function $f_i$ over the domain, while $\sigma$ normalizes these weights to induce weak mutual exclusivity.

A natural choice for flexible, smooth change surfaces is the softmax function since it can approximate a Heaviside step function or gradual changes. For $r$ latent functions, the resulting warping function is

$\sigma(w_i(x)) = \frac{\exp(w_i(x))}{\sum_{j=1}^{r} \exp(w_j(x))}$ (7)

Our model is thus,

$y(x) = \sum_{i=1}^{r} \sigma(w_i(x)) f_i(x) + \epsilon$ (8)

If we assume Gaussian process priors on all latent functions we can define $f(x) = \sum_{i=1}^{r} \sigma(w_i(x)) f_i(x)$, where $f$ has a Gaussian process prior with covariance function,

$k(x, x') = \sum_{i=1}^{r} \sigma(w_i(x))\, k_i(x, x')\, \sigma(w_i(x'))$ (9)

This assumption does not limit the expressiveness of Eq. 8 since each Gaussian process may be defined with different mean and covariance functions. Indeed, where the data exhibits latent functional change we expect that the latent functions will have correspondingly different hyperparameters even if the kernel forms are identical.

The warped weighting functions $\sigma(w_i(x))$ induce nonstationarity since they depend on the input $x$. Thus, even if we use stationary kernels for all $k_i$, our model results in a flexible, nonstationary kernel.

Each $\sigma(w_i(x))$ defines how the coverage of $f_i$ varies over the input domain. Where $\sigma(w_i(x)) \approx 1$, $f_i$ dominates and primarily describes the relationship between $x$ and $y$; in cases where there is no $i$ such that $\sigma(w_i(x)) \approx 1$, a number of functions are jointly dominant in defining the relationship between $x$ and $y$. Since $\sigma$ pushes values towards 1 or 0, the regions with multiple dominant functions are transitory and thus considered change regions. Therefore, we can interpret how the change surface develops and where different regimes dominate by evaluating $\sigma(w_i(x))$ over the input domain.
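
To make this construction concrete, below is a minimal sketch of evaluating the covariance of Eq. (9) on a set of inputs. The component kernels and weighting functions are placeholders supplied by the caller; none of the names are from the paper's code.

```python
# A minimal sketch of the change surface covariance of Eq. (9):
# k(x, x') = sum_i sigma(w_i(x)) k_i(x, x') sigma(w_i(x')).
import numpy as np

def softmax_weights(W):
    """Row-wise softmax over r weighting-function values; W has shape (n, r)."""
    E = np.exp(W - W.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def change_surface_cov(X, kernels, weight_fns):
    """Evaluate Eq. (9) on inputs X (n, d) given r kernels and r weight functions."""
    W = np.column_stack([w(X) for w in weight_fns])  # (n, r) weighting values
    S = softmax_weights(W)                           # sigma(w_i(x)); rows sum to 1
    K = np.zeros((len(X), len(X)))
    for i, k_i in enumerate(kernels):
        # sigma(w_i(x)) k_i(x, x') sigma(w_i(x')) as an outer scaling
        K += np.outer(S[:, i], S[:, i]) * k_i(X, X)
    return K
```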

3.1 Design choices for $w(x)$

The functional form of $w(x)$ determines how changes can occur in the data, and how many can occur. For example, a linear parametric weighting function,

$w(x) = \beta_0 + \beta^\top x$ (10)

only permits a single linear change surface in the data. Yet even this simple model is more expressive than discrete changepoints since it permits flexibility in the rate of change and extends changepoints to change regions in $\mathbb{R}^D$.

In order to develop a general framework we do not require any prior knowledge about the functional form of $w(x)$, and instead assume a Gaussian process prior on $w(x)$. While in principle we could sample from the full Gaussian process prior, this would lead to a non-conjugate model which would be less computationally attractive and significantly constrain the "plug and play" nature of choices for $\sigma$, $w$, and $k_i$. Instead, we approximate the Gaussian process with Random Kitchen Sink (RKS) features and analytically derive inference procedures using the log marginal likelihood (Lázaro-Gredilla et al., 2010).

Rahimi and Recht (2007) demonstrate that if we consider the vector of RKS features which maps the $D$ dimensional input $x$ to an $m$ dimensional feature space,

$\phi(x) = \sqrt{2/m}\,\big[\cos(\omega_1^\top x + b_1), \ldots, \cos(\omega_m^\top x + b_m)\big]^\top$ (11)

then we can approximate any stationary kernel, $k(x, x') \approx \phi(x)^\top \phi(x')$, by taking the Fourier transform of $k$,

$p(\omega) = \frac{1}{(2\pi)^D} \int e^{-i \omega^\top \delta}\, k(\delta)\, d\delta$ (12)

and putting priors over the parameters of the RKS feature mapping,

$\omega_j \sim p(\omega)$ (13)
$b_j \sim \mathrm{U}[0, 2\pi]$ (14)

For an RBF kernel $k(x, x') = \exp\!\big(-\tfrac{1}{2}(x - x')^\top \Lambda^{-1} (x - x')\big)$, where $\Lambda$ is a diagonal matrix of length-scales, we sample,

$\omega_j \sim \mathcal{N}(0, \Lambda^{-1})$ (15)

Therefore, if we want to place a Gaussian process prior over our weighting functions, $w(x)$, we can use RKS features to create a compact representation of the kernel (Lázaro-Gredilla et al., 2010). For any finite input $X$ we know that,

$w(X) \sim \mathcal{N}(0, K_w)$ (16)

Equivalently, we can define parameters $\tilde{a}$ such that,

$w(x) \approx \tilde{a}^\top \phi(x)$ (17)
$\tilde{a} \sim \mathcal{N}(0, \sigma_a^2 I)$ (18)

which we can write in the explicit RKS feature space representation,

$w(x) \approx \sqrt{2/m} \sum_{j=1}^{m} \tilde{a}_j \cos(\omega_j^\top x + b_j)$ (19)

allowing us to sample from $w(x)$ with a finite sum of RKS features. Initialization of the hyperparameters $\omega$, $b$, and $\tilde{a}$ is discussed in Section 5.
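
As an illustration, here is a minimal sketch of drawing one weighting function from this approximate prior via Eqs. (11)-(19); the length-scales, feature count, and names are illustrative assumptions rather than values from the paper.

```python
# A minimal sketch of sampling w(x) from an approximate GP prior with RKS
# features: omega ~ N(0, Lambda^{-1}) (Eq. 15), b ~ U[0, 2pi] (Eq. 14),
# a ~ N(0, sigma_a^2 I) (Eq. 18), and w(x) the cosine sum of Eq. (19).
import numpy as np

rng = np.random.default_rng(0)
D, m = 2, 5                        # input dimension, number of RKS features
lengthscales = np.ones(D)          # RBF length-scales (diagonal of Lambda)

omega = rng.normal(0.0, 1.0 / lengthscales, size=(m, D))
b = rng.uniform(0.0, 2 * np.pi, size=m)
a = rng.normal(0.0, 1.0, size=m)

def w(X):
    """Evaluate the sampled weighting function at inputs X of shape (n, D)."""
    return np.sqrt(2.0 / m) * np.cos(X @ omega.T + b) @ a
```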

Experts with domain knowledge can specify a parametric form for $w(x)$ other than RKS features. Such specification can be advantageous, requiring relatively few, highly interpretable parameters to optimize. Additionally, specifying the functional form of $w(x)$ does not require prior knowledge about whether, where, or how rapidly changes occur.

3.2 Design choices for $k_i$

Each latent function $f_i$ is specified by a kernel $k_i$ with unique hyperparameters. By design, each $k_i$ may be of a different form. For example, one function may have a Matérn kernel, another a periodic kernel, and a third an exponential kernel. Such specification is useful when domain knowledge provides insight into the covariance structure of the various regimes.

In order to maintain maximal generality and expressivity, we develop the model using spectral mixture kernels (Wilson and Adams, 2013),

$k(x, x') = \sum_{q=1}^{Q} \varphi_q \exp\!\big(-2\pi^2 \tau^\top \Sigma_q \tau\big) \cos\!\big(2\pi \mu_q^\top \tau\big)$

where $\tau = x - x'$ and $\Sigma_q$ is a diagonal covariance matrix for multidimensional inputs. With a sufficiently large $Q$, spectral mixture kernels can approximate any stationary kernel, providing the flexibility to capture complex patterns over multiple dimensions. These kernels have been used in pattern prediction, outperforming complex combinations of standard stationary kernels (Wilson et al., 2014).
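
For concreteness, a minimal sketch of a one-dimensional spectral mixture kernel follows; the multidimensional version used in the model multiplies such kernels across dimensions. All parameter values are placeholders.

```python
# A minimal sketch of a 1D spectral mixture kernel (Wilson and Adams, 2013):
# k(tau) = sum_q w_q exp(-2 pi^2 tau^2 v_q) cos(2 pi mu_q tau), tau = x - x'.
import numpy as np

def spectral_mixture_kernel(x1, x2, weights, means, variances):
    """Evaluate k on all pairs of 1D inputs x1 (n,) and x2 (m,)."""
    tau = x1[:, None] - x2[None, :]
    K = np.zeros_like(tau)
    for w_q, mu_q, v_q in zip(weights, means, variances):
        K += w_q * np.exp(-2 * np.pi**2 * tau**2 * v_q) \
                 * np.cos(2 * np.pi * mu_q * tau)
    return K
```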

Using spectral mixture kernels extends previous work on Gaussian process changepoint modeling, which has been restricted in practice to RBF (Saatçi et al., 2010; Garnett et al., 2009) or exponential kernels (Majumdar et al., 2005). Expressive covariance functions are particularly important with multidimensional and spatio-temporal data where the dynamics are complex and unknown a priori. While most Gaussian process models provide the theoretical flexibility to choose any kernel, the practical mechanics of initializing and fitting more expressive kernels is a challenging problem. We describe an initialization procedure in Section 5 which we hope can enable other models to exploit expressive kernels as well.

4 Scalable inference

Analytic optimization and inference requires computation of the log marginal likelihood (Eq. 5). Yet calculating the inverse and log determinant of $n \times n$ covariance matrices requires $\mathcal{O}(n^3)$ computations and $\mathcal{O}(n^2)$ memory (Rasmussen and Williams, 2006), which is impractical for large datasets. Recent advances in scalable Gaussian processes have reduced this computational burden by exploiting Kronecker structure under two assumptions. One, the inputs lie on a grid formed by a Cartesian product, $X = X^{(1)} \times \cdots \times X^{(D)}$. Two, the kernel is multiplicative across each dimension. The assumption of separable, multiplicative kernels is commonly employed in spatio-temporal Gaussian process modeling (Martin, 1990; Majumdar et al., 2005; Flaxman et al., 2015). Under these assumptions, the covariance matrix decomposes as $K = K^{(1)} \otimes \cdots \otimes K^{(D)}$, where each $K^{(d)}$ is the kernel evaluated over the $d$-th input dimension.

Using efficient Kronecker algebra, Saatçi (2012) performs the inverse and log determinant calculations in $\mathcal{O}(D n^{(D+1)/D})$ operations using $\mathcal{O}(D n^{2/D})$ memory. Furthermore, Wilson et al. (2014) extend the Kronecker methods to incomplete grids.

Yet for an additive kernel such as that needed for change surface modeling (Eq. 9), calculating the inverse and log determinant is no longer feasible using Kronecker algebra as in Saatçi (2012) because the sum of the matrix Kronecker products does not decompose as a single Kronecker product. Instead, calculations involving the inverse can be efficiently carried out using linear conjugate gradients as in Flaxman et al. (2015) because the key subroutine is matrix-vector multiplication and the sum of Kronecker products can be efficiently multiplied by a vector.
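
To illustrate why conjugate gradients remains efficient, below is a minimal sketch of this key subroutine: multiplying a vector by a sum of Kronecker products without ever forming the full $n \times n$ matrix. The function names are ours, not from the referenced implementations.

```python
# A minimal sketch of matrix-vector multiplication with a sum of Kronecker
# products, the workhorse of linear conjugate gradients (Flaxman et al., 2015).
import numpy as np

def kron_matvec(Ks, v):
    """Compute (K_1 kron ... kron K_D) v using only the small square factors."""
    x, N = v, v.size
    for K in reversed(Ks):
        g = K.shape[0]
        # Peel off the trailing factor dimension, apply K, rotate the axes.
        x = (x.reshape(N // g, g) @ K.T).T.ravel()
    return x

def additive_kron_matvec(kron_terms, v):
    """Matrix-vector product with sum_i (K_i1 kron ... kron K_iD)."""
    return sum(kron_matvec(Ks, v) for Ks in kron_terms)
```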

However, there is no exact method for efficient computation of the log determinant of the sum of Kronecker products. Instead, Flaxman et al. (2015) upper bound the log determinant using the Fiedler bound (Fiedler, 1971), which says that for $n \times n$ Hermitian matrices $A$ and $B$ with sorted eigenvalues $\lambda_1 \geq \cdots \geq \lambda_n$ and $\nu_1 \geq \cdots \geq \nu_n$ respectively,

$\log\lvert A + B \rvert \leq \sum_{i=1}^{n} \log(\lambda_i + \nu_{n-i+1})$ (20)

While this yields fast computation, the Fiedler bound does not generalize to more than two matrices.

Instead, we bound the log determinant of the sum of multiple covariance matrices using Weyl's inequality (Weyl, 1912), which states that for Hermitian matrices $M$, $N$, and $M + N$ with sorted eigenvalues $\mu_1 \geq \cdots \geq \mu_n$, $\nu_1 \geq \cdots \geq \nu_n$, and $\rho_1 \geq \cdots \geq \rho_n$ respectively,

$\rho_{i+j-1} \leq \mu_i + \nu_j$ (21)

Since any valid pair $(i, j)$ with $i + j - 1 = k$ bounds $\rho_k$, we can bound the log determinant by $\log\lvert M + N \rvert \leq \sum_{k=1}^{n} \log(\mu_{i_k} + \nu_{j_k})$. Furthermore, we can use the Weyl bound iteratively over pairs of matrices to bound the sum of more than two covariance matrices.

As the bound indicates, there is flexibility in the choice of which eigenvalue pair to sum in order to bound each $\rho_k$. One might be tempted to minimize over all possible pairs for each of the $n$ eigenvalues of $M + N$ in order to obtain the tightest bound on the log determinant. Unfortunately, this requires $\mathcal{O}(n^2)$ computations. Instead we explore two possible alternatives (see the sketch following this list):

  1. For each $k$ we choose the "middle" pair $i = j = (k+1)/2$ when possible, and $i = k/2$, $j = k/2 + 1$ otherwise. This heuristic requires $\mathcal{O}(n)$ computations.

  2. We employ a greedy search that uses the previous pair $(i, j)$ to choose the minimum over a limited window of neighboring eigenvalue pairs for the next index $k$. When the window contains only the middle pair this reduces to the middle heuristic; when it spans all valid pairs it recovers the exact Weyl bound. The greedy search requires computations proportional to the window size times $n$.
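
As a concrete illustration of the middle heuristic, the sketch below returns surrogate eigenvalues rather than the bound itself, so that it can be applied iteratively over pairs of matrices as described above. The code is our sketch under these assumptions, not the paper's implementation.

```python
# A minimal sketch of the "middle" Weyl heuristic (Eq. 21). Each surrogate
# value mu_i + nu_j with i + j - 1 = k upper-bounds the k-th eigenvalue of
# M + N, so the sum of their logs upper-bounds log|M + N|.
import numpy as np

def weyl_middle_eigs(mu, nu):
    """Surrogate eigenvalues for M + N; mu, nu sorted descending. O(n)."""
    n = len(mu)
    ks = np.arange(1, n + 1)
    i = (ks + 1) // 2          # i = j for odd k; j = i + 1 for even k
    j = ks + 1 - i
    return mu[i - 1] + nu[j - 1]

def weyl_logdet_bound(eig_list):
    """Iterate the heuristic over several additive PSD kernels, given a list
    of descending eigenvalue arrays, and return the log determinant bound."""
    rho = eig_list[0]
    for nu in eig_list[1:]:
        rho = weyl_middle_eigs(rho, nu)
    return np.log(rho).sum()
```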

In addition to bounding the sum of kernels, we must also deal with the scaling functions, $\sigma(w_i(x))$. We can rewrite Eq. 9 in matrix notation,

$K = \sum_{i=1}^{r} S_i K_i S_i$ (22)

where $S_i = \mathrm{diag}(\sigma(w_i(X)))$ and $K_i$ is the covariance matrix of latent function $f_i$. Employing the bound on eigenvalues of products of positive semidefinite matrices (Bhatia, 2013),

$\lambda_k(S_i K_i S_i) \leq \lambda_1(S_i)^2\, \lambda_k(K_i)$ (23)

we can bound the log determinant of $K$ in Eq. 22 with a Weyl approximation over the scaled eigenvalues $\lambda_1(S_i)^2 \lambda_k(K_i)$, where $\lambda_1(S_i)$ is the largest eigenvalue of $S_i$ and $\lambda_k(K_i)$ are the eigenvalues of $K_i$.
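
Under these bounds, the scaling can be folded into the same machinery. A hedged sketch follows, reusing weyl_logdet_bound from the previous sketch, with dense eigendecompositions standing in for the cheap Kronecker eigenvalues available in the actual model.

```python
# A minimal sketch of bounding log|K| for Eq. (22), K = sum_i S_i K_i S_i,
# by scaling each kernel's eigenvalues as in Eq. (23) and then applying the
# iterative Weyl middle heuristic.
import numpy as np

def change_surface_logdet_bound(S_diags, K_list):
    """S_diags: diagonal entries of each S_i (softmax weights in [0, 1]);
    K_list: the corresponding PSD covariance matrices K_i."""
    scaled_eigs = []
    for s, K in zip(S_diags, K_list):
        lam = np.sort(np.linalg.eigvalsh(K))[::-1]  # descending eigenvalues
        scaled_eigs.append(s.max() ** 2 * lam)      # Eq. (23): lambda_1(S)^2 lambda_k(K)
    return weyl_logdet_bound(scaled_eigs)
```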

We empirically evaluate the exact Weyl bound, middle heuristic, and greedy search for our model using synthetic data (generated according to the procedure in Section 6.1). We compare these results against the Fiedler bound (in the case of two kernels), and a recently proposed method for estimating the log determinant using Chebyshev polynomials coupled with a stochastic Hutchinson trace approximation (Han et al., 2015).

Figure 1: Left plot shows the ratio of approximations to the true log determinant of 2 additive kernels. Right plot shows the time to compute each approximation and the true log determinant of 2 additive kernels.
Figure 2: Left plot shows the ratio of approximations to the true log determinant of 3 additive kernels. Right plot shows the time to compute each approximation and the true log determinant of 3 additive kernels.

Figures 1 and 2 depict the ratio of each approximation to the true log determinant, and the time to compute each approximation, over an increasing number of observations for 2 and 3 kernels. We note that all Weyl and Fiedler approximations converge to a stable ratio of the true log determinant, which was negative in the experiments. While the exact Weyl bound scales poorly, as expected, both approximate Weyl bounds scale well. In practice, we use the middle heuristic since it provides the fastest results.

5 Initialization

Since our model uses expressive spectral mixture kernels and flexible RKS features, the parameter space is highly multimodal. Therefore, it is essential to initialize the model hyperparameters appropriately. Below we present a method where we first initialize the RKS features and then use those values in a novel initialization method for the spectral mixture kernels.

To initialize $w(x)$ we simplify our model and assume that each $k_i$ is an RBF kernel. Using the procedure in Algorithm 1 we test candidate weighting functions by drawing the hyperparameters $\omega$, $b$, and $\tilde{a}$ from their respective prior distributions (Section 3.1). We set the remaining prior parameters to reasonable values.

For each candidate $w(x)$, we sample possible sets of hyperparameters for the RBF kernels and select the best set via maximum marginal likelihood. Then we run an abbreviated optimization procedure over each of the candidate sets of $w(x)$ and RBF hyperparameters and select the best set via marginal likelihood. Finally, we optimize all the resulting parameters until convergence.

1:  for each candidate initialization do
2:     Draw $\omega$, $b$, $\tilde{a}$ for the RKS features in $w(x)$
3:     Draw random values for the RBF kernels; choose the best with maximum marginal likelihood
4:     Partially optimize $w(x)$ and the RBF kernels
5:  end for
6:  Choose the best set of hyperparameters with maximum marginal likelihood
7:  Optimize all hyperparameters until convergence
Algorithm 1 Initialize RKS $w(x)$ by optimizing a simplified model with RBF kernels

In order to initialize the spectral mixture kernels, we use the optimized $\sigma(w_i(x))$ from above to define the subset of data where each latent function $f_i$ is dominant. For each $f_i$ we then take a Fourier transform of the data over each dimension of $x$ to obtain the empirical spectrum in that dimension. Note that we consider each dimension of $x$ individually since we have a multiplicative Q-component spectral mixture kernel over each dimension. Since spectral mixture kernels model the spectral density with Gaussians, we fit a 1D Gaussian mixture model,

$p(s) = \sum_{q=1}^{Q} \varphi_q\, \mathcal{N}(s; \mu_q, \sigma_q^2)$ (24)

to the empirical spectrum for each dimension. Using the learned mixture model we initialize the parameters of our spectral mixture kernels as detailed in line 8 of Algorithm 2.

Finally, we take the initialized $w(x)$ and spectral mixture kernel hyperparameters and jointly optimize the model using marginal likelihood and standard gradient techniques (Rasmussen and Nickisch, 2010).

1:  Use the output $w(x)$ from Algorithm 1
2:  for each latent function $f_i$ do
3:     for each dimension $d$ of the input do
4:        for each slice of data where $f_i$ is dominant do
5:           Sample the empirical spectrum for dimension $d$
6:        end for
7:        Fit a Q component 1D GMM to all samples
8:        Initialize the spectral mixture means, variances, and weights from the fitted GMM parameters
9:     end for
10:  end for
Algorithm 2 Initialize spectral mixture kernels
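
As an illustration of lines 5-8 for a single dimension, here is a minimal sketch that fits a Gaussian mixture to the empirical spectrum of a regularly sampled signal; sklearn's GaussianMixture and all parameter choices are our illustrative assumptions, not the paper's code.

```python
# A minimal sketch of spectral mixture initialization: treat the normalized
# empirical spectrum as a density over frequency, sample from it, fit a
# Q-component 1D GMM, and read off initial SM kernel hyperparameters.
import numpy as np
from sklearn.mixture import GaussianMixture

def init_sm_hypers(y, dt=1.0, Q=3, n_samples=2000, seed=0):
    """Return initial (weights, means, variances) for a Q-component 1D
    spectral mixture kernel from a signal y sampled at spacing dt."""
    spectrum = np.abs(np.fft.rfft(y - y.mean())) ** 2
    freqs = np.fft.rfftfreq(len(y), d=dt)
    probs = spectrum / spectrum.sum()
    samples = np.random.default_rng(seed).choice(freqs, size=n_samples, p=probs)
    gmm = GaussianMixture(n_components=Q).fit(samples.reshape(-1, 1))
    weights = gmm.weights_ * y.var()      # scale mixture weights by signal variance
    means = gmm.means_.ravel()            # initial spectral means mu_q
    variances = gmm.covariances_.ravel()  # initial spectral variances sigma_q^2
    return weights, means, variances
```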

6 Experiments

We test our model with both numerical and real world data. There do not exist standard datasets for evaluating spatio-temporal changepoint models. For example, Majumdar et al. (2005) used simulations to demonstrate the effectiveness of their model. Therefore, we apply our method on a standard 1D changepoint dataset, synthetic data, and a newly available spatio-temporal disease dataset.

6.1 Numerical Experiments

We generate a two-dimensional grid of synthetic data by drawing independently from two latent functions. Each function is characterized by a 2D RBF kernel with different length-scales and variances. The synthetic change surface between the functions is defined by $\sigma(w(x))$ for a fixed weighting function $w(x)$, where $\sigma$ is the logistic warping function.

We apply our change surface model with two latent functions, spectral mixture kernels, and $w(x)$ defined by 5 RKS features. We do not provide the model prior information about the change surface or latent functions. Figures 3 and 4 depict typical results using the initialization procedure followed by analytic optimization. The model captures the change surface and produces an appropriate regression over the data.

Figure 3: Numerical data experiment. The top-left depicts the data; the bottom-left shows the true change surface, with the range from blue to red depicting $\sigma(w(x))$ from 0 to 1. The top-right depicts the predicted output; the bottom-right shows the predicted change surface.
Figure 4: Numerical data experiment. The top-left depicts the data; the bottom-left shows the true change surface, with the range from blue to red depicting $\sigma(w(x))$ from 0 to 1. The top-right depicts the predicted output; the bottom-right shows the predicted change surface.

Using synthetic data, we create a predictive test by splitting the data into training and testing sets. We compare our smooth change surface model to three other expressive, scalable methods: a sparse spectrum Gaussian process with 500 basis functions (Lázaro-Gredilla et al., 2010), a sparse spectrum Gaussian process with 500 fixed spectral points (Lázaro-Gredilla et al., 2010), and a Gaussian process with multiplicative spectral mixture kernels in each dimension. For each method we average the results over 10 random restarts. Table 1 shows the normalized mean squared error (NMSE) of each method, where $\bar{y}$ is the mean of the training data,

$\text{NMSE} = \frac{\sum_{i} (\hat{y}_i - y_i)^2}{\sum_{i} (\bar{y} - y_i)^2}$ (25)
Method NMSE
Smooth change surface 0.00078
SSGP 0.01530
SSGP fixed 0.02820
Spectral mixture 0.00200
Table 1: Comparison of prediction using flexible, scalable Gaussian process methods on synthetic multidimensional change-surface data.

Our change surface model performed best due to the expressive nonstationary covariance function that fits to the different functional regimes in the data. Although the alternate methods can flexibly adapt to the data, they must account for the change in covariance structure by setting an effectively shorter length-scale over the data. Thus their predictive accuracy is reduced compared to the change surface model.

6.2 British Coal Mining Data

British coal mining accidents from 1851 to 1962 have been well studied in the point process and changepoint literature (Raftery and Akman, 1986; Adams and MacKay, 2007). We use yearly counts of accidents from Carlin et al. (1992). Domain knowledge suggests that the Coal Mines Regulation Act of 1887 affected the underlying process of coal mine accidents. This act limited child labor in mines, detailed inspection procedures, and regulated construction standards.

We apply our change surface model with two latent functions, spectral mixture kernels, and $w(x)$ defined by 5 RKS features. We do not provide the model with prior information about the 1887 legislation date. Figure 5 depicts the cumulative data and predicted change surface. The red line marks the year 1887 and the magenta line marks the change surface midpoint, where $\sigma(w(x)) = 0.5$. Our algorithm correctly identified the change region and suggests a gradual change that took 11.3 years to transition between the two regimes.

Figure 5: British coal mining accidents from 1851 to 1962. The blue line depicts cumulative annual accidents, the green line plots $\sigma(w(x))$, the vertical red line marks the Coal Mines Regulation Act of 1887, and the vertical magenta line indicates the change surface midpoint, $\sigma(w(x)) = 0.5$.

6.3 US Disease Data

Measles was nearly eradicated in the United States following the introduction of the measles vaccine in 1963. We analyze monthly incidence data for measles from 1935 to 2003 in each of the continental United States and the District of Columbia, made publicly available by Project Tycho (van Panhuis et al., 2013). We fit the model to the full dataset, with two spatial dimensions representing the centroids of each state and one temporal dimension.

We apply our change surface model with two latent functions, spectral mixture kernels, and $w(x)$ defined by 5 RKS features. We do not provide prior information about the 1963 vaccination date.

Results for three states are shown in Figure 6 along with the predicted change surface. The red line marks the vaccine year of 1963, while the magenta line marks the points where $\sigma(w(x)) = 0.5$. Our algorithm correctly identified the time frame when the measles vaccine was released in the US.

Figure 6: Measles incidence levels from 3 states, 1935 - 2003. The green line plots $\sigma(w(x))$, the vertical red line indicates the vaccine in 1963, and the magenta line indicates where $\sigma(w(x)) = 0.5$.
Figure 7: US states colored by the date where $\sigma(w(x)) = 0.5$. Red indicates earlier dates, with California being the earliest. Blue indicates later dates, with North Dakota being the latest. Grayed out states were missing in the dataset.

Additionally, the model suggests that the effect of the measles vaccine varied both temporally and spatially. In Figure 7 we depict the midpoint of the change, where $\sigma(w(x)) = 0.5$, for each state. We discover that there is an approximately 6 year difference in midpoint between states. In Figure 8 we depict the slope of the change surface through its transition region for each state to estimate the rate of change. Here we find that some states had approximately twice the rate of change of others.

Figure 8: US states colored by the slope of $\sigma(w(x))$ through the transition region. Red indicates flatter slopes, with Arizona being the lowest. Blue indicates steeper slopes, with Maine being the highest. Grayed out states were missing in the dataset.

These variations in the change surface illustrate how the measles vaccine affected states heterogeneously over space and time. They suggest that further scientific research is warranted to understand the underlying causes of this heterogeneity in order to provide insight for future vaccination programs.

7 Conclusions

We presented a scalable, multidimensional Gaussian process model with expressive kernel structure which can learn a complex change surface from data. Using the Weyl inequality, we perform efficient inference with additive kernel structure using Kronecker methods, enabling a multidimensional non-separable kernel. Additionally, we introduce a novel initialization algorithm for learning the RKS features and spectral mixture kernels. Finally, we apply our model to numerical and real world data, illustrating how it can characterize heterogeneous spatio-temporal change surfaces, yielding scientifically relevant insights.

The work on changepoint modeling is extensive and the current work cannot address all facets of the literature. Future work can extend our retrospective analysis to address sequential change surface detection. Additionally, the current method can be extended to automatically determine the number of latent functions using an automatic model discovery approach such as that of Lloyd et al. (2014).

Acknowledgements

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE 1252522 and the National Science Foundation award No. IIS-0953330.

References

  • Adams and MacKay (2007) Adams, R. P. and MacKay, D. J. (2007). Bayesian online changepoint detection. arXiv preprint arXiv:0710.3742.
  • Bhatia (2013) Bhatia, R. (2013). Matrix analysis, volume 169. Springer Science & Business Media.
  • Brodsky and Darkhovsky (2013) Brodsky, E. and Darkhovsky, B. S. (2013). Nonparametric methods in change point problems, volume 243. Springer Science & Business Media.
  • Carlin et al. (1992) Carlin, B. P., Gelfand, A. E., and Smith, A. F. (1992). Hierarchical Bayesian analysis of changepoint problems. Applied Statistics, pages 389–405.
  • Chen and Gupta (2011) Chen, J. and Gupta, A. K. (2011). Parametric statistical change point analysis: With applications to genetics, medicine, and finance. Springer Science & Business Media.
  • Chernoff and Zacks (1964) Chernoff, H. and Zacks, S. (1964). Estimating the current mean of a normal distribution which is subjected to changes in time. The Annals of Mathematical Statistics, pages 999–1018.
  • Fiedler (1971) Fiedler, M. (1971). Bounds for the determinant of the sum of Hermitian matrices. Proceedings of the American Mathematical Society, pages 27–31.
  • Flaxman et al. (2015) Flaxman, S. R., Wilson, A. G., Neill, D. B., Nickisch, H., and Smola, A. J. (2015). Fast Kronecker inference in Gaussian processes with non-Gaussian likelihoods. International Conference on Machine Learning 2015.
  • Garnett et al. (2009) Garnett, R., Osborne, M. A., and Roberts, S. J. (2009). Sequential Bayesian prediction in the presence of changepoints. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 345–352. ACM.
  • Han et al. (2015) Han, I., Malioutov, D., and Shin, J. (2015). Large-scale log-determinant computation through stochastic Chebyshev expansions. arXiv preprint arXiv:1503.06394.
  • Lázaro-Gredilla et al. (2010) Lázaro-Gredilla, M., Quiñonero-Candela, J., Rasmussen, C. E., and Figueiras-Vidal, A. R. (2010). Sparse spectrum Gaussian process regression. The Journal of Machine Learning Research, 11:1865–1881.
  • Lloyd et al. (2014) Lloyd, J. R., Duvenaud, D., Grosse, R., Tenenbaum, J. B., and Ghahramani, Z. (2014). Automatic construction and natural-language description of nonparametric regression models. arXiv preprint arXiv:1402.4304.
  • Majumdar et al. (2005) Majumdar, A., Gelfand, A. E., and Banerjee, S. (2005). Spatio-temporal change-point modeling. Journal of Statistical Planning and Inference, 130(1):149–166.
  • Martin (1990) Martin, R. (1990). The use of time-series models and methods in the analysis of agricultural field trials. Communications in Statistics-Theory and Methods, 19(1):55–81.
  • Nicholls and Nunn (2010) Nicholls, G. K. and Nunn, P. D. (2010). On building and fitting a spatio-temporal change-point model for settlement and growth at Bourewa, Fiji Islands. arXiv preprint arXiv:1006.5575.
  • Raftery and Akman (1986) Raftery, A. and Akman, V. (1986). Bayesian analysis of a Poisson process with a change-point. Biometrika, pages 85–89.
  • Rahimi and Recht (2007) Rahimi, A. and Recht, B. (2007). Random features for large-scale kernel machines. In Advances in neural information processing systems, pages 1177–1184.
  • Rasmussen and Nickisch (2010) Rasmussen, C. E. and Nickisch, H. (2010). Gaussian processes for machine learning (GPML) toolbox. The Journal of Machine Learning Research, 11:3011–3015.
  • Rasmussen and Williams (2006) Rasmussen, C. E. and Williams, C. K. (2006). Gaussian Processes for Machine Learning. MIT Press.
  • Saatçi (2012) Saatçi, Y. (2012). Scalable inference for structured Gaussian process models. PhD thesis, University of Cambridge.
  • Saatçi et al. (2010) Saatçi, Y., Turner, R. D., and Rasmussen, C. E. (2010). Gaussian process change point models. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 927–934.
  • van Panhuis et al. (2013) van Panhuis, W. G., Grefenstette, J., Jung, S. Y., Chok, N. S., Cross, A., Eng, H., Lee, B. Y., Zadorozhny, V., Brown, S., Cummings, D., et al. (2013). Contagious diseases in the United States from 1888 to the present. The New England Journal of Medicine, 369(22):2152.
  • Weyl (1912) Weyl, H. (1912). Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung). Mathematische Annalen, 71(4):441–479.
  • Wilson et al. (2014) Wilson, A., Gilboa, E., Cunningham, J. P., and Nehorai, A. (2014). Fast kernel learning for multidimensional pattern extrapolation. In Advances in Neural Information Processing Systems, pages 3626–3634.
  • Wilson (2014) Wilson, A. G. (2014). Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes. PhD thesis, University of Cambridge.
  • Wilson and Adams (2013) Wilson, A. G. and Adams, R. P. (2013). Gaussian process kernels for pattern discovery and extrapolation. arXiv preprint arXiv:1302.4245.
  • Wilson et al. (2011) Wilson, A. G., Knowles, D. A., and Ghahramani, Z. (2011). Gaussian process regression networks. arXiv preprint arXiv:1110.4411.