Bayesian views of generalized additive modelling

02/04/2019
by David L Miller, et al.

Links between frequentist and Bayesian approaches to smoothing were highlighted early on in the smoothing literature, and power much of the machinery that underlies the modern generalized additive modelling framework (implemented in software such as the R package mgcv), but they tend to be unknown or underappreciated. This article aims to highlight useful links between Bayesian and frequentist approaches to smoothing, and their practical applications (with a somewhat mgcv-centric viewpoint).


1 Introduction

This paper is about Generalized Additive Models (GAMs; e.g., Wood, 2017)[1]. That is, models of the form:

$$g(\mu_i) = A_i\gamma + f_1(x_{1i}) + f_2(x_{2i}) + \ldots \qquad (1)$$

where $\mu_i = \mathbb{E}(Y_i)$, $Y_i$ is the response and $Y_i \sim \mathrm{EF}(\mu_i, \phi)$ indicates an exponential family distribution with mean $\mu_i$ and scale parameter $\phi$. $A_i$ is a row vector of strictly parametric components (i.e., things that look like regular GLM components) and $\gamma$ are their associated coefficients. The $f_j$s are “smooth” functions of one or more of the covariates ($x_1$, $x_2$, etc). Smooth terms are in turn constructed from sums of simple basis functions (e.g., DeBoor, 1978). In general, for some smooth $f$ of covariate $x$ we have the following decomposition:

$$f(x) = \sum_{k=1}^{K} \beta_k b_k(x) \qquad (2)$$

where the $b_k$ are fixed basis functions and the $\beta_k$ are coefficients to be estimated (one can always augment the design matrix, $X$, so the parametric and smooth components can be written as a single matrix-coefficient-vector multiplication, i.e., $X\beta$). Smooth terms tend to be flexible so, to avoid overfitting, we penalize the flexibility of each smooth term according to its wiggliness. Generally such a penalty will be an integral (sometimes a sum) of integrated, squared derivatives of $f$. This penalty can be written in the form $\sum_m \lambda_m \beta^\top S_m \beta$, where $\beta$ is a vector of all the coefficients for all terms, each $S_m$ is a matrix of the fixed parts of the penalty (integrated, squared derivatives of the $b_k$s; see examples below, padded with zeroes to form a block diagonal matrix) and the $\lambda_m$ are smoothing parameters to be estimated that control the influence of the penalty[2]. More than one term of the summation might correspond to a single $f_j$ in our model, i.e., there may be multiple penalty terms corresponding to a single smooth. We want to estimate:

$$\hat{\beta} = \arg\max_\beta \; \ell(\beta) - \frac{1}{2}\sum_{m=1}^{M} \lambda_m \beta^\top S_m \beta \qquad (3)$$

where $\ell$ is the log-likelihood and there are $M$ smoothing parameters to estimate. Conditional on the smoothing parameters ($\lambda$), estimation of $\beta$ is relatively straightforward and the problem can be attacked with penalized iteratively re-weighted least squares (PIRLS; e.g., Wood, 2017, section 6.1.1). However, estimating both $\beta$ and $\lambda$ is more complicated. Several strategies have been suggested, including: Akaike’s Information Criterion (AIC)/UnBiased Risk Estimator (UBRE)/Mallows’ $C_p$ (Craven and Wahba, 1978), Generalized Cross Validation (GCV; Wahba and Wold, 1975), REstricted Maximum Likelihood (REML; known as “generalized maximum likelihood” by Wahba, 1985) and Marginal Likelihood (ML)[3]. These methods essentially split into two classes: those that minimize prediction error (AIC, UBRE, Mallows’ $C_p$ and GCV) and likelihood-based approaches, which cast the smooth functions as random effects and the smoothing parameters as variance parameters (REML and ML) (Wood, 2011). This paper will only address REML estimation: Reiss and Ogden (2009) showed theoretically, and Wood (2011) in practice, that GCV can overfit (undersmooth) at finite sample sizes and can be subject to issues with multiple optima (though GCV is still asymptotically optimal as $n \to \infty$).

[1] Well, it’s also about Generalized Additive Mixed Models (GAMMs) and Generalized Linear Mixed Models (GLMMs) too.
[2] Note that more general forms are possible (where, e.g., smoothing parameters are shared or transformed), but I restrict myself to this form here.
[3] Backfitting (Hastie and Tibshirani, 1986) could be included in this list but doesn’t itself include a way of estimating smoothing parameters, though e.g., GCV could be used.
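To make the estimation problem concrete, here is a minimal R sketch (my own illustration, not taken from the paper) fitting a model of the form (1) with mgcv, using REML to estimate the smoothing parameters; the simulation settings are illustrative assumptions.

```r
library(mgcv)

# simulate data from the Gu & Wahba test functions (as used in the examples
# later in the paper); n and the noise level are illustrative choices
set.seed(1)
dat <- gamSim(eg = 1, n = 400, dist = "normal", scale = 2)

# two smooth terms plus an exponential family (here Gaussian) response;
# method = "REML" estimates the smoothing parameters lambda by restricted
# maximum likelihood, as discussed above
b <- gam(y ~ s(x1) + s(x2), data = dat, family = gaussian(), method = "REML")

summary(b)  # estimated coefficients, effective degrees of freedom, etc.
b$sp        # the estimated smoothing parameters (lambda)
```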

GAMs are often taught as “an extension of the linear model” – adding wiggles to make a (G)LM more flexible (often as a more principled step forward from adding polynomial terms into a linear model). When we talk about adding these smooth functions to our model, we tend to concentrate on equations like (1), looking at the mean effects of including smooths rather than thinking about the penalty. One might think of the penalty as a way of constraining our fit not to be too wiggly, ensuring that our model isn’t overfitting. There are two additional (non-exclusive) perspectives on the basis and penalty that will be useful to consider:

  1. Basis functions are “solutions” to objective functions like (3); in that sense the basis functions are derived from the objective and are “optimal” given that choice of objective (i.e., the form of the penalty). We can therefore think of the bases as consequences of the problem definition (specifically in terms of wiggliness and dependency in the data).

  2. Ruppert et al. (2003) and Verbyla et al. (1999) (see references therein, in particular Speed (1991)) show that smooths can be formulated in a way such that they can be fitted using standard mixed model approaches. In that case we view the coefficients of the smooths ($\beta$) as random effects and the penalties as their associated (prior) precision matrices. Though this is primarily seen as a computational “trick”, the interpretation will be useful below.

Most often folks think of splines (DeBoor, 1978) when they hear “smooths”, but there are many possible model terms that can be specified as a basis with a (sum of) penalties. These include (Gaussian) Markov random fields (Rue and Held, 2005), Gaussian processes (Rasmussen and Williams, 2006), random effects (Speed, 1991) and varying coefficient models (Hastie and Tibshirani, 1993). Here I use the word smooth to include all these possible flexible model terms.
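In mgcv all of these term types are specified through the same s() syntax by changing the basis (bs); the sketch below shows a few of the options mentioned above. The data frame df, its variables and the neighbourhood structure nb are hypothetical stand-ins for illustration, not objects from this paper.

```r
library(mgcv)

# illustrative only: assumes df contains y, numeric x and t, a factor site,
# a factor region, and a numeric covariate z; nb is a neighbourhood list

# a Gaussian process smooth of x
m_gp  <- gam(y ~ s(x, bs = "gp"), data = df)

# an i.i.d. random effect of site (a ridge penalty, i.e. a normal prior)
m_re  <- gam(y ~ s(site, bs = "re"), data = df)

# a (Gaussian) Markov random field over discrete regions
m_mrf <- gam(y ~ s(region, bs = "mrf", xt = list(nb = nb)), data = df)

# a varying coefficient term: the effect of z varies smoothly with t
m_vc  <- gam(y ~ s(t, by = z), data = df)
```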

2 Bayesian interpretations

Bayesian interpretations of the smoothing process date back to at least Whittle (1958) (Silverman, 1985). Thinking about a single smooth in the model, which we’ll call $f$, the approaches fall into two categories (Hastie and Tibshirani, 1990, section 3.6):

  1. infinite-dimensional: $f$ is a sum of a random linear function and an integrated Wiener process; combining this with a likelihood for the data gives a posterior mean for $f$, which is a smoothing spline with an appropriate smoothing parameter.

  2. finite-dimensional: use a basis expansion as in (2) and set up priors on the smoother’s coefficients (the $\beta$s). In combination with a likelihood for the data, a normal prior on $\beta$ will lead to the posterior modes of $\beta$ coinciding with their (penalized) maximum likelihood estimates.

The infinite-dimensional approach is problematic as a prior (or priors) must be set up over the complete space of possible smooth functions for each smooth term in the model; this is conceptually tricky (Wahba, 1983; Silverman, 1985). One can think of the infinite-dimensional problem proposed in 1. as having a finite-dimensional solution in 2. (a basis expansion); this is known as the “kernel property” in the support vector machine literature (Hastie et al., 2009). These differing Bayesian approaches give the same posterior mean (point estimate) but different posterior uncertainty (Hastie and Tibshirani, 1990) (see section 3.2 below).

Here we’ll think about 2. as the more practical alternative (and the one that has been explored more in the literature). From this we then have two choices as to how to do estimation:

  1. “Full Bayes” (FB): in which we formulate a likelihood and attach priors to the smoothing parameters, as well as the model coefficients. This is easily achievable via mgcv::jagam (which implements translation between mgcv syntax and JAGS syntax; Wood, 2016), greta[4] or BayesX (Fahrmeir and Lang, 2001; Fahrmeir et al., 2004; Brezger et al., 2005).

  2. “Empirical Bayes” (EB): in which we treat the smoothing parameters as fixed effects and estimate them from the data. This is what REML estimation does (Wood, 2011) and can be interpreted as a Best Linear Unbiased Predictor (BLUP; Speed, 1991). The REML criterion has exactly the same form as the Bayesian marginal likelihood; for computational efficiency a Laplace approximation is often used (which is exact in the Gaussian case).

[4] Using the same trick as jagam with Hamiltonian Monte Carlo to speed up sampling; Golding and Miller, https://greta-dev.github.io/greta/.

We explore these two approaches below in the examples. Section 3.4 shows an example of using JAGS/jagam to fit a GAM.

Note that we can get to the Bayesian approach conveniently by exponentiating the penalized estimation problem in (3), which gives us:

$$\exp\left\{\ell(\beta) - \frac{1}{2}\sum_m \lambda_m \beta^\top S_m \beta\right\} = \exp\{\ell(\beta)\}\,\exp\left\{-\frac{1}{2}\beta^\top S_\lambda \beta\right\}.$$

We can then appeal to Bayes’ theorem and think of the second exponential term as proportional to a multivariate normal distribution with mean zero. Hence we can think of $S_\lambda$ (defined as $\sum_m \lambda_m S_m$) as a prior precision (inverse variance) matrix. This formulation is more one of convenience (and equivalence with the penalized likelihood) than of rigorous application of Bayesian ideology though; as noted by Hastie and Tibshirani (2000) (see the comment from Green in particular), other, more “principled” priors could be formulated. On this note, Wahba (1978) notes that “since Gauss-Markov estimation is equivalent to Bayesian estimation with a prior diffuse over the coefficients of the regression functions, [generalized spline smoothing’s equivalence to Bayesian estimation with a (partially improper) prior] leads us to the conclusion that spline smoothing is a (the?) natural extension of the Gauss-Markov regression with [some number of] specified regression functions”.

2.1 Priors

If we set up a penalty for some smooth term in our model, what are we really telling the model to do? We are, in some sense, saying “this is how we want this term to act”: we are specifying that observations which are close to each other in covariate space have similar values, that the response varies smoothly and that the true function we seek to estimate is more likely to be smooth than wiggly (hence penalizing wiggliness). The Bayesian formulation allows us to be more explicit about these beliefs (Wood, 2017, section 5.8). In general, if we want to fit a model there is no unique solution unless some restriction is put on the form of $f$ (Watson, 1984). Fahrmeir et al. (2010) expand on the idea of Bayesian regularisation and its interpretation: techniques such as ridge regression, the lasso, the elastic net and related regularization methods are conditionally Gaussian, in the sense that they can be expressed as Gaussian priors with different variance specifications leading to rather different models.

Following on from the above, the priors for the coefficients of smooth components of the model are specified as $\beta \sim N(\mathbf{0}, S_\lambda^{-})$, where $S_\lambda^{-}$ is the pseudoinverse of the penalty matrix (Kimeldorf and Wahba, 1970; Wahba, 1978, 1990). A pseudoinverse is required as $S_\lambda$ is usually rank-deficient. This is due to some basis functions not having derivatives (of the order of the penalty), so they don’t get penalized; we refer to these terms as being “in the nullspace” of the penalty, which for most 1D splines consists of the slope and intercept. Basis functions in the nullspace of the penalty have infinite prior variance (improper uniform priors) (Wood, 2006); Hastie and Tibshirani (1990, section 3.6) state “one is assuming complete prior uncertainty about the linear and constant functions, and decreasing uncertainty about the higher order functions.” As the penalty will have larger values for more wiggly components (we penalize those more strongly), the corresponding variance component will have less uncertainty (as we don’t think that the function should be “too” wiggly); this makes explicit our belief that smooth functions are more likely than wiggly ones (Wood, 2006).

If we opt for a fully Bayesian approach we put (“hyper”)priors on the smoothing parameters (which are effectively variance components), using a diffuse but proper prior such as a gamma distribution. Specifying priors on the precision of components of “multilevel”/“hierarchical” models is a tricky business (Gelman, 2006; Simpson et al., 2017)[5]; this is especially the case for smoothing parameters, as the “true” values of the smoothing parameter(s) could be infinite (or at least numerically infinite for computing purposes), e.g., for components that are truly linear terms.

[5] See also http://andrewgelman.com/2018/04/03/justify-my-love/.
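The rank deficiency behind these improper priors is easy to see numerically; the following sketch (my own illustration, with an arbitrary covariate grid) builds a cubic regression spline penalty with mgcv::smoothCon and inspects its eigenvalues.

```r
library(mgcv)

# construct a rank-10 cubic regression spline smoother without fitting a model
xx <- data.frame(x = seq(0, 1, length.out = 200))
sm <- smoothCon(s(x, bs = "cr", k = 10), data = xx)[[1]]

S <- sm$S[[1]]             # the penalty matrix for this smooth
round(eigen(S)$values, 6)  # the trailing (near-)zero eigenvalues correspond to
                           # the penalty nullspace: those directions receive no
                           # penalty, i.e. an improper flat prior
```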

There are penalties (and hence priors) which lead to completely penalized basis functions (e.g., the P-spline approach in Lang and Brezger, 2004), in which case the prior for $\beta$ is proper. This can be done for any smooth by using the methods in section 3.3. It is also worth noting that the identifiability constraints (Wood, 2017, section 5.4.1) that need to be imposed on the model may lead to proper priors, so these more complex methods may not be necessary in some situations (Marra and Wood, 2011).

When using splines we must also decide on knot placement/number and basis dimension ($K$ above; these are usually linked). Exactly how this is done depends on the basis that is used, but it is clear that since $S$ involves the basis functions (or at least their derivatives), the number of basis functions (and/or number of knots) and knot placement will affect the posterior. Though the knot selection problem can be avoided via simple even grid spacing (which may be computationally demanding) or with eigen-based approaches like thin plate regression splines (Wood, 2003), there are often reasons to involve humans in knot placement (or at least let them dictate a rule for knot placement). Using more basis functions/knots than “necessary” and spacing them evenly allows wiggliness to be dictated by the smoothing parameter, rather than making results sensitive to multiple “parameters” in the model (Wood, 2017, section 5.9).
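In mgcv the basis dimension is the k argument to s(); a quick sketch (illustrative, using the simulated data assumed earlier) of the “use a generous k and let the penalty do the work” advice, with gam.check as a post-fit diagnostic.

```r
# a deliberately generous basis dimension; wiggliness is then controlled by
# the smoothing parameter rather than by the exact choice of k
b_k <- gam(y ~ s(x1, bs = "cr", k = 20), data = dat, method = "REML")

# gam.check reports (among other things) k' versus the effective degrees of
# freedom, a rough diagnostic for whether k was set too small
gam.check(b_k)
```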

2.2 Posteriors

Turning the crank on the Bayesian sausage machine, we can get to the posterior for $\beta$:

$$\beta \mid \mathbf{y} \sim N(\hat{\beta}, V_\beta),$$

where $V_\beta = (X^\top X + S_\lambda)^{-1}\sigma^2$ for the Gaussian likelihood case and $V_\beta = (X^\top W X + S_\lambda)^{-1}\phi$ for the exponential family case (with $W$ the iterative weights), where $\sigma^2$ is a variance parameter and $\phi$ is a scale parameter (Wood, 2017, section 6.10).

For FB, $V_\beta$ will include uncertainty about the smoothing parameter(s). For EB we only have information conditional on the value of the smoothing parameter(s). Wood et al. (2016) propose a correction to $V_\beta$ to account for the uncertainty in the smoothing parameter(s), using a Taylor expansion to approximate the extra variability and adapting the methods of Kass and Steffey (1989). These three versions of $V_\beta$ (FB, EB and EB corrected) are explored in section 3.4.
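For a model fitted with mgcv by REML or ML, both empirical Bayes versions of $V_\beta$ are available from vcov.gam; the unconditional argument applies the Wood et al. (2016) smoothing parameter uncertainty correction. A small sketch, using the fit b from the earlier example:

```r
Vb  <- vcov(b)                        # EB: conditional on the estimated lambda
Vbc <- vcov(b, unconditional = TRUE)  # EB corrected for lambda uncertainty

# the corrected standard errors are never smaller than the uncorrected ones
summary(sqrt(diag(Vbc)) - sqrt(diag(Vb)))
```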

3 Some examples

The Bayesian results above lead to some useful applications. Here I highlight a couple of the more commonly-used ones. We’ll investigate these properties using cubic regression splines (Wood, 2017, section 5.3.1). The penalty for this basis is:

$$\int_{x_{\min}}^{x_{\max}} f''(x)^2 \, \mathrm{d}x,$$

where $x_{\min}$ and $x_{\max}$ are the first and last knots. This leads us to define the $(i,j)$th element of the penalty matrix $S$ as:

$$S_{ij} = \int_{x_{\min}}^{x_{\max}} b_i''(x)\, b_j''(x) \, \mathrm{d}x.$$

The 10 basis functions are shown in the left panel of Figure 1 and the right panel shows the estimates (basis functions multiplied by their estimated coefficients and, in red, the final estimated smooth). Figure 2 shows the penalty matrix (left) and corresponding prior variance-covariance matrix (right) for such a model; both are highly structured. Note that these matrices are smaller than the full basis dimension, as in this case the linear term (the flat line in the large right plot in Figure 1) is unpenalized. The penalty shows the largest values (most yellow) on the diagonal, with smaller values off the diagonal and further from the diagonal (the smallest being the very dark blue off-diagonal band). This emphasizes the local nature of the basis (Hastie and Tibshirani, 1990, section 2.10).
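Figure 2 can be reproduced (approximately) from the penalty matrix S constructed in the smoothCon sketch above; the prior covariance is the Moore-Penrose pseudoinverse, available via MASS::ginv.

```r
library(MASS)

S_prior <- ginv(S)  # pseudoinverse of the penalty = implied prior covariance

# image plots analogous to Figure 2 (orientation and colour scale will
# differ from the paper's figure)
image(S,       main = "penalty matrix")
image(S_prior, main = "implied prior covariance")
```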

Data used below were generated using the gamSim function in the package mgcv, using the test functions from Gu and Wahba (1991); in each case noise was added.

Figure 1: The cubic spline basis used as an example. Left, small plots: nine of the ten basis functions (one is omitted for uninterestingness) with coefficients set to 1. Right: in grey, the basis functions from the left plot multiplied by their estimated coefficients; the black line is the sum of the basis functions (the estimated smooth). See Lancaster and Šalkauskas (1986, chapter 4) for further information on the mathematical properties and definition of this basis.
Figure 2: Left: image plot of the penalty matrix for the cubic spline model in section 3. Right: the corresponding prior variance-covariance matrix for the same model (calculated by taking the Moore-Penrose pseudoinverse of the penalty matrix). In both cases, yellow indicates high values and dark blue indicates low values.

3.1 Posterior simulation

Since we know the posterior distribution of the coefficients (in the EB case conditional on the smoothing parameter(s)), we can use this result to simulate from the posterior and look at possible smooths that the model can generate. This can be useful to visualize the uncertainty in the fitted smooth, but can also be useful for calculating uncertainty about summary statistics of predictions from the fitted model.

We can do this in general by following this algorithm (Wood, 2017, section 7.2.7):

  1. Let $N_s$ be the number of samples to generate.

  2. Form $L_p$, the matrix that maps the model covariates to the linear predictor (the prediction equivalent of the design matrix).

  3. For $s$ in $1, \ldots, N_s$:

    1. Simulate $\beta_s \sim N(\hat{\beta}, V_\beta)$.

    2. Calculate the linear predictor $\eta_s = L_p \beta_s$.

    3. Apply the inverse link function, $g^{-1}$, so $\mu_s = g^{-1}(\eta_s)$.

    4. Calculate and store the required summary over $\mu_s$.

  4. Perform inference on the summaries (e.g., calculating empirical variance, percentile intervals, etc).

These results can be used to calculate credible intervals for functions of the predictions (Wood, 2017, section 6.10). For example, this technique can be useful for calculating time series summaries (e.g., the count in a given year) for complex spatiotemporal models (e.g., Augustin et al., 2009; Marra et al., 2012; Augustin et al., 2013). Examples of simulated smooths are shown in figure 3.
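A sketch of this algorithm in R for the model b fitted earlier: predict(..., type = "lpmatrix") returns the $L_p$ matrix and mgcv::rmvn draws from the (approximate) posterior of $\beta$; the prediction grid and the particular summary computed here are illustrative choices.

```r
n_sims <- 200
newd   <- data.frame(x1 = seq(0, 1, length.out = 100), x2 = 0.5)

Lp    <- predict(b, newdata = newd, type = "lpmatrix")        # covariates -> linear predictor
betas <- rmvn(n_sims, coef(b), vcov(b, unconditional = TRUE)) # posterior draws of beta

etas <- tcrossprod(betas, Lp)   # each row is one simulated linear predictor
mus  <- b$family$linkinv(etas)  # apply the inverse link g^{-1}

# an example summary per draw: the mean of the curve over the prediction grid
summ <- rowMeans(mus)
quantile(summ, c(0.025, 0.975)) # an empirical 95% interval for that summary
```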

Figure 3: Comparison of 200 posterior samples from a fitted model (black lines), created using the algorithm given in section 3.1, to the 95% (Nychka-type) confidence interval (blue dashed lines) around the same smooth, constructed using the procedure in section 3.2. The 95% pointwise quantiles of the black lines are given by the red dashed lines. Note the difference in the behaviour at the peaks and troughs of the function.

3.2 Confidence intervals

It is useful to construct confidence intervals around the smooths in a model, not only as a visual check but also for hypothesis testing. Using the results in section 2.2 for the posterior of $\beta$, we can construct estimates of uncertainty about our fitted model. We can build pointwise “confidence intervals” (to use the scare quotes of Wahba, 1990) as $\hat{f}(x) \pm z_{1-\alpha/2}\sqrt{\mathrm{Var}(\hat{f}(x))}$, where $\hat{f}$ is our estimated smooth, $\mathrm{Var}(\hat{f}(x))$ is the variance of the smooth at point $x$ and $z_{1-\alpha/2}$ is the usual appropriate value from a normal CDF. These intervals have good frequentist across-the-function properties: that is, a 95% credible interval has close to 95% coverage when averaged over the whole function (there may be over- and under-coverage at the peaks and troughs of the function). Justification for these intervals was developed in Nychka (1988) and expanded to the GAM case in Marra and Wood (2012). The Bayesian perspective is important here, as frequentist uncertainty measures will be subject to smoothing bias in the model terms (Wahba, 1990, chapter 5); the Bayesian formulation includes a term that accounts for this bias and hence the intervals have good coverage properties (Wood, 2017, section 6.10.1). Intervals for the cubic spline model fitted in the previous section are shown in figure 3. Hodges and Clayton (2011, section 4.1.1) give a good example of why taking a simplistic frequentist view of constructing the confidence intervals leads to incorrect inference.
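These are the intervals that plot.gam draws by default; a brief sketch constructing them directly from predict’s standard errors (which are derived from the Bayesian $V_\beta$), along a grid of x1 values (holding x2 fixed) for the earlier fit b.

```r
pr <- predict(b, newdata = newd, se.fit = TRUE)

# pointwise 95% interval on the linear predictor scale
fit   <- pr$fit
upper <- fit + qnorm(0.975) * pr$se.fit
lower <- fit - qnorm(0.975) * pr$se.fit

plot(newd$x1, fit, type = "l", xlab = "x1", ylab = "linear predictor")
lines(newd$x1, upper, lty = 2)
lines(newd$x1, lower, lty = 2)
```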

Since these intervals have good coverage and tell us about the whole function (by the “across-the-function” property), we can use them to test the hypothesis that a given smooth is zero, i.e., whether a term should be dropped from the model. Considering the block of $V_\beta$ which relates to a given smooth in the model, we can construct such a test. See Wood (2017, section 6.12.1) and references therein for more details.

3.3 Term selection, proper priors

An alternative method to hypothesis testing for term selection in the model is to use shrinkage/penalty-type methods to remove terms during fitting. Many approaches are possible (see Marra and Wood (2011) for some examples) but here I focus on two approaches implemented in mgcv.

As described in section 2.1, the prior placed on $\beta$ can be improper due to rank deficiency in $S_\lambda$; this is caused by some basis functions having no derivatives (of the order of the penalty) and as such they are not penalized. We can make our priors proper by simply adding an extra penalty term to the model for the nullspace components of each term (the “double penalty” approach of Marra and Wood (2011)). This is achieved by eigendecomposing the penalty matrix, $S = U \Lambda U^\top$, then forming the extra penalty matrix $S^* = U_0 U_0^\top$, where $U_0$ is the matrix of eigenvectors corresponding to the zero entries on the diagonal of $\Lambda$ (the zero eigenvalues). This approach is implemented as the select=TRUE option in mgcv::gam, and includes one additional smoothing parameter for each smooth term in the model. Alternatively, one can form a basis where the nullspace terms have a shrinkage penalty applied to them by simply adding a small quantity $\epsilon$ to the diagonal entries of $\Lambda$, so that the resulting penalty matrix is not rank-deficient (the “shrinkage” approach of Marra and Wood (2011); implemented as the "cs" and "ts" bases in mgcv).

These two approaches lead to rather different interpretations of how wiggliness should be penalized: the double penalty approach makes no assumption about how much to smooth the nullspace in the penalty relative to the other parts of the smooth (and determines this during fitting); the shrinkage approach, however, assumes that the nullspace should be penalized less than the other parts of the smooth. These are of course not the only options (simply those which are implemented in software) and there may be reasons for penalizing different parts of the nullspace in different ways (and to different degrees).

Looking at the cubic splines, if we add additional smooths of (nonsense) covariates to the model, using either of these techniques should remove those terms during model fitting. The bottom right panel of Figure 4 shows the term being removed from the model using the cubic regression spline with shrinkage. The top right shows the result when shrinkage is not applied.
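In mgcv both routes are a single argument away; a sketch using the simulated data from earlier (there, x3 has no effect on the response, playing the role of the nonsense covariate).

```r
# double penalty approach: an extra smoothing parameter per smooth penalizes
# each term's nullspace, so whole terms can be shrunk to zero
b_dp <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3),
            data = dat, method = "REML", select = TRUE)

# shrinkage basis approach: the "cs" (shrinkage cubic regression spline)
# basis gives the nullspace a small fixed penalty instead
b_cs <- gam(y ~ s(x0, bs = "cs") + s(x1, bs = "cs") +
                s(x2, bs = "cs") + s(x3, bs = "cs"),
            data = dat, method = "REML")

# in both cases the effective degrees of freedom for s(x3) should be ~0
summary(b_dp)$s.table
summary(b_cs)$s.table
```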

Figure 4: Comparison using the shrinkage version of cubic splines for a model with a nonsense predictor (x3). The final term in the model (x3) has no effect, but in the top row (when a cubic spline basis is used) a small amount of curvature is estimated for the term. In the bottom row, when the shrinkage approach is used, the smooth is correctly estimated as having zero effect (zero effective degrees of freedom); the other terms show minimal changes.

3.4 Fully Bayesian fitting

The adaptation of the model using the extra penalty described in Marra and Wood (2011) allows us to construct proper priors for any term in our model. We can then do fully Bayesian fitting via Gibbs sampling using JAGS (Wood, 2016), and compare the uncertainty estimates obtained using the empirical Bayes approach (implied flat prior on the smoothing parameters) and the fully Bayesian approach (proper gamma priors are put on the smoothing parameters). Note that this approach is relatively inefficient (sampling-wise) and much work has been done on efficient MCMC schemes (Fahrmeir and Lang, 2001; Brezger et al., 2005; Rue and Held, 2005). Alternatively, one could use exactly the same approach with Hamiltonian Monte Carlo schemes, like those implemented in Stan (Carpenter et al., 2017) and greta. One of the big advantages of this type of approach is the ability to build larger, more flexible, hierarchical models within the MCMC samplers provided by these general software frameworks.

As an example we set up a model with smooth terms for the covariates, using the same data as the previous example (see figure 4). A cubic spline basis was again used, with a multivariate normal prior on the $\beta$s and a vague gamma prior on the smoothing parameters (of which there are four, two for each penalty); we can then use a Gibbs sampler to obtain posterior samples. When estimated from simulation (as here), $V_\beta$ includes a component accounting for uncertainty in the smoothing parameters (Wood, 2016). Plotting the $V_\beta$ matrices for the empirical Bayes estimator (both uncorrected and corrected for smoothing parameter uncertainty) and for the fully Bayesian case, we see that the corrected and fully Bayesian estimates have greater uncertainty, especially in the top left corner of the matrix. This corresponds to the smooth of x3, which contains no signal (as in the previous example). In the uncorrected case we don’t take into account the uncertainty we have in the smoothing parameter, and the model is certain that the term should be flat.
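A sketch of the jagam workflow (this requires JAGS and the rjags package; the formula, file name and MCMC settings here are illustrative choices, not necessarily those used for the figures).

```r
library(mgcv)
library(rjags)

# write a JAGS model file plus the data/initial values needed to run it
jd <- jagam(y ~ s(x1, bs = "cr") + s(x3, bs = "cr"),
            data = dat, file = "gam.jags")

# compile the model and draw posterior samples of the coefficients (b) and
# log smoothing parameters (rho); iteration counts are illustrative
jm  <- jags.model("gam.jags", data = jd$jags.data, inits = jd$jags.ini,
                  n.chains = 1, n.adapt = 1000)
sam <- jags.samples(jm, c("b", "rho"), n.iter = 10000, thin = 10)

# convert the samples back to something gam-like for plotting/prediction
jam <- sim2jam(sam, jd$pregam)
plot(jam)
```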

Figure 5: Comparison of the posterior variance-covariance matrix entries between smoothing uncertainty uncorrected and corrected empirical Bayes (“EB”) and full Bayesian methods. Note the increase in uncertainty moving from left to right between the plots.

4 Discussion

This article has highlighted the Bayesian interpretation of generalized additive models (specifically as implemented in mgcv), which are often thought of as a “frequentist only” method. These results and the connections between the two approaches have been known for some time (at least as far back as Whittle (1958)). Though the connections are old, recent work has shown that there are many aspects which are only just starting to be exploited by practitioners.

The random effects interpretation of the models described here is very useful, though there is considerable confusion over what “random effects” really mean (leading at least one statistician (Gelman, 2005) to propose that the term be abolished). Hodges and Clayton (2011) characterize “old” and “new” random effects. The models discussed here are firmly in the “new” camp as, in their words: “the levels of the effect are not draws from a population because there is no population. The mathematical form of a random effect is used for convenience only” and they are “formal devices to facilitate smoothing or shrinkage, interpreting those terms broadly”.

Computationally, the interpretation of REML estimation as either “classical” or Bayesian has been covered in the literature (Robinson, 1991); the equivalence of REML to marginalization over the fixed effects to give an “approximately Bayesian” estimate goes back to Harville (1974). Intuitively, REML samples from the prior implied by the penalty and evaluates the “average likelihood” given that draw: poor values of the smoothing parameter lead to curves too far from the data, either by being too smooth or not smooth enough (Wood et al., 2016, section 6.2.6).

Probably the closest relatives of the kind of models described above are those fitted using INLA (Rue et al., 2009; Lindgren et al., 2011). These are Bayesian additive models, restricted to the subset of models which are latent Gaussian. From Rue et al. (2009): “Latent Gaussian models are a subset of all Bayesian additive models with a structured additive predictor [(3) above], namely those which assign a Gaussian prior to [their parameters] … the vector of hyperparameters which are not necessarily Gaussian”. This is exactly the case we have here. The major point of divergence between INLA and the GAMs specified here is the structure of the prior variance-covariance matrices: those used here do not lead to correspondingly sparse precision/penalty matrices.

The MCMC approach starts to become more useful when smooth components are just one part of a complex model, especially those with large, hierarchical random effects structures (Wood, 2016). This ability to embed GAMs within larger models should facilitate some interesting developments for combining different types of data across models. For example, when building species distribution models of biological populations, there are often many different data sources available: some include only whether the animal was present at a location, some whether it was present or absent, and others include a count of the animals or even an estimate of the probability that an animal is detected. One can imagine setting up a model for presence-only data, which then gives a posterior estimate of the probability of presence; if the same spatial basis functions are used for a subsequent model built using the presence/absence data, the presence-only parameter estimates can be used as a prior. The higher-quality data (in the sense of being more information rich, as well as potentially more reliable due to better field methodology) can then be used to update the species distribution.

Bayesian interpretations of GAMs fitted “via frequentist methods” have been helpful for constructing more reasonable estimates of uncertainty (including smoothing parameter uncertainty) and for understanding how to construct confidence intervals that have good coverage properties (and what those properties really mean). These links can surely be used further to develop other new methodology and enhance our understanding of the models that we fit. In the author’s view, it is a shame that these conceptual links have not been better recognized and exploited further; even a very popular textbook (Ruppert et al., 2003) describes the mixed model representation of the GAM as a “convenient fiction” and considers the priors “reasonable (but not compelling).” Moving beyond mere computational convenience and harnessing the broader Bayesian framework implicit in this modelling strategy seems like fertile ground for future work.

Acknowledgements

This work was funded by OPNAV N45 and the SURTASS LFA Settlement Agreement, and was managed by the U.S. Navy’s Living Marine Resources program under Contract No. N39430-17-C-1982. I would like to thank the following people for useful discussions/provocation: Jay ver Hoef, Steve Buckland, Len Thomas, David Borchers, Mark Bravington, Richard Glennie, Andrew Seaton, Daniel Simpson and Finn Lindgren.

References

  • Augustin et al. (2009) N. H. Augustin, M. Musio, K. von Wilpert, E. Kublin, S. N. Wood, and M. Schumacher. Modeling Spatiotemporal Forest Health Monitoring Data. Journal of the American Statistical Association, 104(487):899–911, Sept. 2009. ISSN 0162-1459, 1537-274X. doi: 10.1198/jasa.2009.ap07058. URL http://www.tandfonline.com/doi/abs/10.1198/jasa.2009.ap07058.
  • Augustin et al. (2013) N. H. Augustin, V. M. Trenkel, S. N. Wood, and P. Lorance. Space-time modelling of blue ling for fisheries stock management: space-time modelling of blue ling. Environmetrics, 24(2):109–119, Mar. 2013. ISSN 11804009. doi: 10.1002/env.2196. URL http://doi.wiley.com/10.1002/env.2196.
  • Brezger et al. (2005) A. Brezger, T. Kneib, and S. Lang. BayesX : Analyzing Bayesian Structured Additive Regression Models. Journal of Statistical Software, 14(11), 2005. ISSN 1548-7660. doi: 10.18637/jss.v014.i11. URL http://www.jstatsoft.org/v14/i11/.
  • Carpenter et al. (2017) B. Carpenter, A. Gelman, M. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. Brubaker, J. Guo, P. Li, and A. Riddell. Stan: A Probabilistic Programming Language. Journal of Statistical Software, Articles, 76(1):1–32, 2017. ISSN 1548-7660. doi: 10.18637/jss.v076.i01. URL https://www.jstatsoft.org/v076/i01.
  • Craven and Wahba (1978) P. Craven and G. Wahba. Smoothing noisy data with spline functions. Numerische mathematik, 31(4):377–403, 1978.
  • DeBoor (1978) C. DeBoor. A Practical Guide to Splines. Springer New York, 1978. ISBN 978-0-387-98922-8.
  • Fahrmeir and Lang (2001) L. Fahrmeir and S. Lang. Bayesian inference for generalized additive mixed models based on Markov random field priors. Journal of the Royal Statistical Society: Series C (Applied Statistics), 50(2):201–220, 2001.
  • Fahrmeir et al. (2004) L. Fahrmeir, T. Kneib, and S. Lang. Penalized structured additive regression for space-time data: a Bayesian perspective. Statistica Sinica, pages 731–761, 2004.
  • Fahrmeir et al. (2010) L. Fahrmeir, T. Kneib, and S. Konrath. Bayesian regularisation in structured additive regression: a unifying perspective on shrinkage, smoothing and predictor selection. Statistics and Computing, 20(2):203–219, Apr. 2010. ISSN 0960-3174, 1573-1375. doi: 10.1007/s11222-009-9158-3. URL http://link.springer.com/10.1007/s11222-009-9158-3.
  • Gelman (2005) A. Gelman. Analysis of variance — why it is more important than ever. The Annals of Statistics, 33(1):1–53, Feb. 2005. ISSN 0090-5364. doi: 10.1214/009053604000001048. URL http://projecteuclid.org/euclid.aos/1112967698.
  • Gelman (2006) A. Gelman. Prior distributions for variance parameters in hierarchical models (comment on article by Browne and Draper). Bayesian Analysis, 1(3):515–534, Sept. 2006. ISSN 1936-0975. doi: 10.1214/06-BA117A. URL http://projecteuclid.org/euclid.ba/1340371048.
  • Gu and Wahba (1991) C. Gu and G. Wahba. Minimizing GCV/GML Scores with Multiple Smoothing Parameters via the Newton Method. SIAM Journal on Scientific and Statistical Computing, 12(2):383–398, Mar. 1991. ISSN 0196-5204, 2168-3417. doi: 10.1137/0912021. URL http://epubs.siam.org/doi/10.1137/0912021.
  • Harville (1974) D. A. Harville. Bayesian Inference for Variance Components Using Only Error Contrasts. Biometrika, 61(2):383, Aug. 1974. ISSN 00063444. doi: 10.2307/2334370. URL http://www.jstor.org/stable/2334370?origin=crossref.
  • Hastie and Tibshirani (1986) T. Hastie and R. Tibshirani. Generalized Additive Models. Statistical Science, 1(3):297–318, 1986.
  • Hastie and Tibshirani (1990) T. Hastie and R. Tibshirani. Generalized Additive Models. Number 43 in Monographs on Statistics and Applied Probability. Chapman and Hall, 1990.
  • Hastie and Tibshirani (1993) T. Hastie and R. Tibshirani. Varying-Coefficient Models. Journal of the Royal Statistical Society. Series B (Methodological), 55(4):757–796, 1993. URL http://www.jstor.org/stable/2345993.
  • Hastie et al. (2009) T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer series in statistics New York, 2nd edition, 2009.
  • Hastie and Tibshirani (2000) T. J. Hastie and R. Tibshirani. Bayesian backfitting. Statistical Science, 15(3):196–223, 2000.
  • Hodges and Clayton (2011) J. S. Hodges and M. K. Clayton. Random Effects Old and New. Technical Report, University of Minnesota; Minneapolis, MN, 2011.
  • Kass and Steffey (1989) R. E. Kass and D. Steffey. Approximate Bayesian Inference in Conditionally Independent Hierarchical Models (Parametric Empirical Bayes Models). Journal of the American Statistical Association, 84(407):717, Sept. 1989. ISSN 01621459. doi: 10.2307/2289653. URL http://www.jstor.org/stable/2289653?origin=crossref.
  • Kimeldorf and Wahba (1970) G. S. Kimeldorf and G. Wahba. A Correspondence Between Bayesian Estimation on Stochastic Processes and Smoothing by Splines. The Annals of Mathematical Statistics, 41(2):495–502, 1970.
  • Lancaster and Šalkauskas (1986) P. Lancaster and K. Šalkauskas. Curve and Surface Fitting: An Introduction. Computational mathematics and applications. Academic Press, 1986. ISBN 978-0-12-436061-7. URL https://books.google.co.uk/books?id=8VyvQgAACAAJ.
  • Lang and Brezger (2004) S. Lang and A. Brezger. Bayesian P-Splines. Journal of Computational and Graphical Statistics, 13(1):183–212, Mar. 2004. ISSN 1061-8600, 1537-2715. doi: 10.1198/1061860043010. URL http://www.tandfonline.com/doi/abs/10.1198/1061860043010.
  • Lindgren et al. (2011) F. Lindgren, H. Rue, and J. Lindström. An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(4):423–498, Sept. 2011. ISSN 13697412. doi: 10.1111/j.1467-9868.2011.00777.x. URL http://doi.wiley.com/10.1111/j.1467-9868.2011.00777.x.
  • Marra and Wood (2011) G. Marra and S. N. Wood. Practical variable selection for generalized additive models. Computational Statistics & Data Analysis, 55(7):2372–2387, July 2011. ISSN 01679473. doi: 10.1016/j.csda.2011.02.004. URL http://linkinghub.elsevier.com/retrieve/pii/S0167947311000491.
  • Marra and Wood (2012) G. Marra and S. N. Wood. Coverage Properties of Confidence Intervals for Generalized Additive Model Components: Coverage properties of GAM intervals. Scandinavian Journal of Statistics, 39(1):53–74, Mar. 2012. ISSN 03036898. doi: 10.1111/j.1467-9469.2011.00760.x. URL http://doi.wiley.com/10.1111/j.1467-9469.2011.00760.x.
  • Marra et al. (2012) G. Marra, D. L. Miller, and L. Zanin. Modelling the spatiotemporal distribution of the incidence of resident foreign population: Spatiotemporal Smoothing of Resident Foreign Population. Statistica Neerlandica, 66(2):133–160, May 2012. ISSN 00390402. doi: 10.1111/j.1467-9574.2011.00500.x. URL http://doi.wiley.com/10.1111/j.1467-9574.2011.00500.x.
  • Nychka (1988) D. Nychka. Bayesian Confidence Intervals for Smoothing Splines. Journal of the American Statistical Association, 83(404):1134, Dec. 1988. ISSN 01621459. doi: 10.2307/2290146. URL http://www.jstor.org/stable/2290146?origin=crossref.
  • Rasmussen and Williams (2006) C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
  • Reiss and Ogden (2009) P. T. Reiss and T. R. Ogden. Smoothing parameter selection for a class of semiparametric linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(2):505–523, 2009.
  • Robinson (1991) G. K. Robinson. That BLUP is a good thing: the estimation of random effects. Statistical Science, 6(1):15–51, 1991.
  • Rue and Held (2005) H. Rue and L. Held. Gaussian Markov Random Fields: Theory and Applications, volume 104 of Monographs on Statistics and Applied Probability. Chapman & Hall, London, 2005.
  • Rue et al. (2009) H. Rue, S. Martino, and N. Chopin. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(2):319–392, Apr. 2009. ISSN 13697412, 14679868. doi: 10.1111/j.1467-9868.2008.00700.x. URL http://doi.wiley.com/10.1111/j.1467-9868.2008.00700.x.
  • Ruppert et al. (2003) D. Ruppert, M. Wand, and R. Carroll. Semiparametric Regression. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2003. ISBN 978-0-521-78516-7. URL https://books.google.co.uk/books?id=D46M2Lmli9QC.
  • Silverman (1985) B. W. Silverman. Some Aspects of the Spline Smoothing Approach to Non-Parametric Regression Curve Fitting. Journal of the Royal Statistical Society. Series B (Methodological), 47(1):1–52, 1985.
  • Simpson et al. (2017) D. Simpson, H. Rue, A. Riebler, T. G. Martins, and S. H. Sørbye. Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors. Statistical Science, 32(1):1–28, Feb. 2017. ISSN 0883-4237. doi: 10.1214/16-STS576. URL http://projecteuclid.org/euclid.ss/1491465621.
  • Speed (1991) T. P. Speed. Comment on That BLUP is a good thing: the estimation of random effects (by G. K. Robinson). Statistical Science, 6(1):42–44, 1991.
  • Verbyla et al. (1999) A. P. Verbyla, B. R. Cullis, M. G. Kenward, and S. J. Welham. The analysis of designed experiments and longitudinal data by using smoothing splines. Journal of the Royal Statistical Society: Series C (Applied Statistics), 48(3):269–311, 1999.
  • Wahba (1978) G. Wahba. Improper priors, spline smoothing and the problem of guarding against model errors in regression. Journal of the Royal Statistical Society. Series B (Methodological), pages 364–372, 1978.
  • Wahba (1983) G. Wahba. Bayesian "Confidence Intervals" for the Cross-Validated Smoothing Spline. Journal of the Royal Statistical Society. Series B (Methodological), 45(1):133–150, 1983. URL http://www.jstor.org/stable/2345632.
  • Wahba (1985) G. Wahba. A Comparison of GCV and GML for Choosing the Smoothing Parameter in the Generalized Spline Smoothing Problem. The Annals of Statistics, 13(4):1378–1402, 1985. doi: 10.1214/aos/1176349743. URL https://doi.org/10.1214/aos/1176349743.
  • Wahba (1990) G. Wahba. Spline Models for Observational Data. Society for Industrial and Applied Mathematics, 1990. URL http://dx.doi.org/10.1137/1.9781611970128.
  • Wahba and Wold (1975) G. Wahba and S. Wold. A completely automatic french curve: fitting spline functions by cross validation. Communications in Statistics, 4(1):1–17, Jan. 1975. ISSN 0090-3272. doi: 10.1080/03610927508827223. URL http://www.tandfonline.com/doi/abs/10.1080/03610927508827223.
  • Watson (1984) G. S. Watson. Smoothing and interpolation by kriging and with splines. Journal of the International Association for Mathematical Geology, 16(6):601–615, Aug. 1984. ISSN 0020-5958, 1573-8868. doi: 10.1007/BF01029320. URL http://link.springer.com/10.1007/BF01029320.
  • Whittle (1958) P. Whittle. On the Smoothing of Probability Density Functions. Journal of the Royal Statistical Society. Series B (Methodological), 20(2):334–343, 1958. URL http://www.jstor.org/stable/2983894.
  • Wood (2003) S. N. Wood. Thin plate regression splines. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(1):95–114, 2003.
  • Wood (2006) S. N. Wood. On Confidence Intervals For Generalized Additive Models Based On Penalized Regression Splines. Australian & New Zealand Journal of Statistics, 48(4):445–464, Dec. 2006. ISSN 1369-1473, 1467-842X. doi: 10.1111/j.1467-842X.2006.00450.x. URL http://doi.wiley.com/10.1111/j.1467-842X.2006.00450.x.
  • Wood (2011) S. N. Wood. Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(1):3–36, 2011.
  • Wood (2016) S. N. Wood. Just Another Gibbs Additive Modeler: Interfacing JAGS and mgcv. Journal of Statistical Software, 75(7), 2016. ISSN 1548-7660. doi: 10.18637/jss.v075.i07. URL http://www.jstatsoft.org/v75/i07/.
  • Wood (2017) S. N. Wood. Generalized Additive Models. An Introduction with R. Texts in Statistical Science. CRC Press, 2nd edition, 2017.
  • Wood et al. (2016) S. N. Wood, N. Pya, and B. Säfken. Smoothing Parameter and Model Selection for General Smooth Models. Journal of the American Statistical Association, 111(516):1548–1563, Oct. 2016. ISSN 0162-1459, 1537-274X. doi: 10.1080/01621459.2016.1180986. URL https://www.tandfonline.com/doi/full/10.1080/01621459.2016.1180986.