INLA goes extreme: Bayesian tail regression for the estimation of high spatio-temporal quantiles

This work has been motivated by the challenge of the 2017 conference on Extreme-Value Analysis (EVA2017), with the goal of predicting daily precipitation quantiles at the 99.8% level for each month at observed and unobserved locations. We here develop a Bayesian generalized additive modeling framework tailored to estimate complex trends in marginal extremes observed over space and time. Our approach is based on a set of regression equations linked to the exceedance probability above a high threshold and to the size of the excess, the latter being modeled using the generalized Pareto (GP) distribution suggested by Extreme-Value Theory. Latent random effects are modeled additively and semi-parametrically using Gaussian process priors, which provides high flexibility and interpretability. Fast and accurate estimation of posterior distributions may be performed thanks to the Integrated Nested Laplace approximation (INLA), efficiently implemented in the R-INLA software, which we also use for determining a nonstationary threshold based on a model for the body of the distribution. We show that the GP distribution meets the theoretical requirements of INLA, and we then develop a penalized complexity prior specification for the tail index, which is a crucial parameter for extrapolating tail event probabilities. This prior concentrates mass close to a light exponential tail while allowing heavier tails by penalizing the distance to the exponential distribution. We illustrate this methodology through the modeling of spatial and seasonal trends in daily precipitation data provided by the EVA2017 challenge. Capitalizing on R-INLA's fast computation capacities and large distributed computing resources, we conduct an extensive cross-validation study to select model parameters governing the smoothness of trends. Our results outperform simple benchmarks and are comparable to the best-scoring approach.






1 Introduction

At its origins, Extreme-Value Theory focused on describing the asymptotic behavior of sample maxima (see, e.g., the monograph of Gumbel.1958), and equivalent theoretical statements were later established for threshold exceedances (Balkema.deHaan.1974; Pickands.1975). Statistical methodology for threshold exceedances, based on a limiting point process representation or on the related generalized Pareto (GP) distribution, was developed by Hosking.Wallis.1987 and Davison.Smith:1990 among others, and numerous applications of threshold-based approaches to environmental extremes observed over space and/or time have since been published; see, e.g., Huser.Davison:2014, Thibaud.Opitz.2015 and Bacro.etal:2017 for recent applications to precipitation data, ChavezDemoulin.Davison.2005 for temperature data, and Northrop.Jonathan:2011 and Jonathan.etal:2014 for oceanographic data. In contrast with the block maximum approach, the use of threshold exceedances allows detailed modeling of trends, seasonality and extremal clustering characteristics due to short-term dependence, while also giving more flexibility for balancing bias and variance. However, choosing a good threshold remains challenging (Scarrott.MacDonald:2012), which has led some authors to implement Bayesian algorithms accounting for threshold uncertainty.

Fully Bayesian modeling approaches for spatial and/or temporal extremes often rely on latent processes embedded into the GP parameters to capture trends and dependence (Bopp.Shaby.2017). In particular, earlier work uses Gaussian processes to capture spatial dependence and covariate-driven trends in precipitation data, taking advantage of simulation-based Markov chain Monte Carlo (MCMC) methods for the estimation of posterior distributions. Here, we adopt a similar model structure for capturing spatio-temporal trends in precipitation data and for providing extreme quantile predictions. However, our statistical inference approach relies on the Integrated Nested Laplace Approximation (INLA) of posterior distributions, which is much faster and sidesteps the convergence issues of simulation-based techniques, while providing highly accurate results (Martins.etal:2013). Furthermore, our choice of prior distributions allows us to appropriately smooth predicted quantiles over space and time, while borrowing strength across nearby space-time points, which is especially important when predicting rare events. We also derive the penalized complexity (PC) prior distribution for the GP tail index, which provides a principled prior choice (Simpson.etal:2017) for this crucial parameter by penalizing the distance to a baseline model possessing a light, exponentially decaying tail.

Our strategy for modeling space-time trends in marginal precipitation extremes can be decomposed into three stages, each consisting of a suitable univariate response distribution combined with a regression equation capturing systematic temporal and spatial effects. The latter are described semi-parametrically in terms of appropriate Gaussian process priors. In the first modeling stage, we fit a Gamma regression to the positive precipitation intensities, in order to fix a suitable high spatio-temporal threshold. Next, we estimate the overall excess probability above this threshold through Bernoulli regression. Finally, we fit a GP distribution to the observed threshold exceedances, assuming a constant tail index and a scale parameter varying in space and time. We then predict extreme spatio-temporal quantiles by combining the posterior mean predictions from the Bernoulli and GP stages. For reasons of modeling and computational complexity, our Bayesian regression models are based on the working assumption that the data are conditionally independent with respect to the latent spatio-temporal trend components. By imposing this model simplification, the computational efficiency of INLA allows us to conduct an extensive cross-validation study for selecting certain crucial model parameters.

This new methodology, whose development was motivated by the challenge organized for the 10th Extreme Value Analysis conference (EVA2017) held in June 2017 in Delft, Netherlands, has been applied to Dutch precipitation data, with the ultimate goal of predicting spatio-temporal quantiles at the 99.8% level; see Wintenberger:2018 for more details on the dataset, the evaluation criterion, and the results of this competition.

In the remainder of the paper, Section 2 presents the dataset and some preprocessing steps. The Bayesian space-time regression framework for tail modeling is developed in Section 3, where the derivation of the penalized complexity prior for the tail index is presented in Section 3.3. We explain the Bayesian estimation approach using INLA in Section 4. The cross-validation study and our final results for the Dutch precipitation application are reported in Section 5. Some concluding remarks with an outlook towards possible future developments are summarized in Section 6.

2 Data

The complete dataset consists of daily precipitation accumulations (in inches) measured at 40 stations during the period 1972–2016. The data were divided into a training set (1972–1995) made available to the teams participating in the EVA2017 challenge, and a validation set (1996–2016) used to assess the high quantile predictions of the different teams, and therefore revealed only afterwards. Some of the stations were active during most of the training period, but many were not, resulting in a mixed dataset comprising a few time-rich stations and many time-poor ones; the available sample size varies considerably, with the smallest samples at Stations 6–10 and 37 and the largest at Station 39; see Figure 1 and Wintenberger:2018 for more details on the data. In terms of modeling, this implies that it is crucial to build a spatial model that borrows strength across nearby stations to obtain robust and reliable predictions at unobserved, or scarcely observed, locations.

Figure 1: Original precipitation time series measured at 40 stations during the training period. All time series are displayed using a common x-axis (time) and y-axis (precipitation amounts in inches). For each station, the sample size (largest for Station 39) is indicated in the top left corner of the corresponding panel.

The exact coordinates of the stations were not disclosed before the end of the competition to prevent reverse engineering. However, shifted coordinates were provided and are displayed in Figure 2. As shown in Wintenberger:2018, the region of study, revealed only afterwards, covers the Netherlands almost entirely. Although this country is quite small and mostly flat, which suggests that a stationary (or mildly non-stationary) process over space might be reasonable in this specific example, we opted to build a flexible Bayesian model for extreme events able to capture potentially complex trends in space and time; in this way, our general modeling and estimation approach may be applied to a wide range of scenarios. As Figure 2 shows, time-rich stations are scattered across the study region, which is essential for borrowing strength across locations and efficiently estimating spatial trends.

Figure 2: Map of monitoring stations, plotted using the shifted coordinates provided by the EVA2017 challenge, and colored according to the estimated spatial random effect for Stage 1 (Gamma distribution) in our “best model”; see §3.2 and §5.2 for further details. Red circles indicate time-rich stations with long observation records over the training period. Wintenberger:2018 displays the true locations and geographical map.

Further exploratory plots (not shown) reveal that the distribution of precipitation varies slightly over time, with the strongest precipitation usually occurring during the summer. To capture this, our model described in §3 features a seasonal effect defined over weeks.

Figure 1 also reveals that some parts of the data still contain erroneous measurements. For example, Stations 1 and 2 are geographically very close to each other, but their time series are very dissimilar; the data recorded in Station 1 appear in fact to be extremely small compared to the rest of the stations. Furthermore, Station 32 has a very long series of zeros from 1977 to 1995. Finally, Stations 4, 5, 11, 19, 23, 26, 28, 35 and 39 have a series of constant measurements during the period 1977–1978. We therefore removed all these highly suspicious data before proceeding further with the model fitting.

3 Modeling

3.1 Asymptotic tail models

In classical Extreme-Value Theory, the Fisher–Tippett–Gnedenko theorem characterizes the convergence of renormalized maxima taken over a sequence of increasingly large blocks of independent and identically distributed (i.i.d.) random variables; see Coles.2001, Chapter 3. The class of possible limit distributions may be summarized in terms of a single parametric family known as the generalized extreme-value (GEV) distribution. Equivalently, the generalized Pareto (GP) distribution may be used to model high threshold exceedances; see Coles.2001, Chapter 4, and Davison.Smith:1990. The block maximum and threshold exceedance approaches can be unified using point process theory (Coles.2001, Chapter 7; Davison.Huser:2015). One assumes in practice that the asymptotic distribution provides a reasonable model for sample maxima or threshold exceedances for a fixed and finite block size or threshold, respectively. Provided that the asymptotic GP distribution holds exactly to describe the tail of a random variable X above a predefined high threshold u, the upper tail may be represented in terms of the exceedance probability p_u = Pr(X > u) and the GP distribution of the positive threshold exceedance Y = X − u, i.e.,


Pr(X > u + y) = p_u {1 + ξ y / σ}₊^(−1/ξ), y ≥ 0,   (1)

where ξ ∈ ℝ is the tail index, σ > 0 is a scale parameter, and a₊ = max(a, 0). When ξ = 0, the expression in (1) is interpreted as the limit ξ → 0 and corresponds to the exponential survivor function exp(−y/σ), y ≥ 0. When ξ > 0, the GP distribution has a power-law decay with infinite upper endpoint, and when ξ < 0, it has a finite and parameter-dependent upper endpoint σ/|ξ|, which leads to certain complications with likelihood-based inference. As our main goal is to model precipitation extremes, which are usually heavy-tailed, we restrict ourselves in this paper to ξ ≥ 0 and thus exclude the case ξ < 0 below.
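To make the tail behavior concrete, the GP survivor function and its exponential limit as ξ → 0 can be sketched numerically. The snippet below is an illustration, not part of the original analysis:

```python
import numpy as np

def gp_survivor(y, sigma, xi):
    """Survivor function 1 - H(y) of the generalized Pareto distribution.

    For xi > 0 the tail decays like a power law; xi = 0 is understood
    as the limit xi -> 0, i.e. an exponential survivor function.
    """
    y = np.asarray(y, dtype=float)
    if xi == 0.0:
        return np.exp(-y / sigma)
    return np.maximum(1.0 + xi * y / sigma, 0.0) ** (-1.0 / xi)

# The power-law form approaches the exponential one as xi -> 0:
y = np.linspace(0.0, 10.0, 5)
print(gp_survivor(y, sigma=2.0, xi=1e-8) - gp_survivor(y, sigma=2.0, xi=0.0))
```

The printed differences are numerically negligible, illustrating that the exponential case is the continuous limit of the power-law family.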

To avoid confounding problems due to the correlation between estimated GP parameters, it may be convenient to reparametrize the GP distribution using the tail index ξ and a specific α-quantile q_α for some fixed probability of interest α ∈ (0, 1), i.e., σ = ξ q_α / {(1 − α)^(−ξ) − 1}, so that the survivor function in (1) becomes

Pr(X > u + y | X > u) = [1 + {(1 − α)^(−ξ) − 1} y / q_α]^(−1/ξ), y ≥ 0.   (2)
In our Bayesian spatio-temporal analysis of extreme precipitation described in §5, we use the parametrization (2) in terms of the median (α = 0.5).
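For concreteness, the quantile-based parametrization can be inverted numerically; a minimal sketch (assuming ξ > 0, illustrative values only):

```python
import numpy as np

def gp_scale_from_quantile(q_alpha, xi, alpha=0.5):
    """Recover the GP scale sigma from the alpha-quantile q_alpha and tail index xi > 0."""
    return xi * q_alpha / ((1.0 - alpha) ** (-xi) - 1.0)

def gp_survivor(y, sigma, xi):
    """GP survivor function for xi > 0."""
    return (1.0 + xi * y / sigma) ** (-1.0 / xi)

# With the median parametrization (alpha = 0.5), the survivor function
# evaluated at q_alpha returns 1 - alpha by construction:
sigma = gp_scale_from_quantile(q_alpha=4.0, xi=0.2, alpha=0.5)
print(gp_survivor(4.0, sigma, 0.2))  # -> 0.5
```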

In practice, the equality between the left- and right-hand sides of (1) should only be interpreted as an approximation for large u, and a careful bias-variance assessment must be performed to fix a suitable threshold above which observations are deemed to be extreme; too low a threshold may cause sub-asymptotic bias, while too high a threshold implies large estimation variance due to the small number of threshold exceedances; see Davison.Smith:1990, Northrop.Jonathan:2011 and Scarrott.MacDonald:2012 for standard diagnostics. In our application, we select the threshold by cross-validation; see §5.

We now present our general three-stage modeling strategy for estimating high quantiles, relaxing the above description to account for spatio-temporal trends in marginal extremes.

3.2 Bayesian tail regression framework

The modeling of univariate extremes using the GP distribution was popularized by Davison.Smith:1990, who advocate capturing systematic variation in extreme events by including fixed covariate effects in the GP scale σ (and potentially, if strongly supported by the data, also the tail index ξ). Similarly, using the equivalent point process representation of high threshold exceedances, Coles.Tawn:1996 propose to include spatial covariates into the extreme-value location and scale parameters, while treating the tail index as constant. With a different objective in mind, other authors formulate an exponential regression model for the (positive) tail index, and suggest that it may be adapted to incorporate exogenous covariate information. To flexibly handle non-stationarity at high levels, Davison.Ramesh:2000 advocate a local likelihood approach for time series extremes, while ChavezDemoulin.Davison.2005 propose a generalized additive modeling (GAM) framework, whereby smoothing splines are incorporated into the GP parameters. In the Bayesian framework, Casson.Coles.1999 and others build hierarchical models for high threshold exceedances, which include latent spatial random effects fitted using expensive simulation-based MCMC methods.

The spatio-temporal Bayesian hierarchical approach that we develop here is similar in spirit to earlier purely spatial hierarchical models; however, the fast and accurate inference method based on INLA presented in §4 makes it possible to consider very high-dimensional data and complex space-time models, comprising fixed and random effects that may potentially be organized hierarchically within the regression equations.

Let Y(s, t) denote daily precipitation observed at location s ∈ S and time t, where S is our study region. Our modeling and estimation strategy relies on the following three stages:

  1. Positive precipitation intensities are first modeled assuming a Gamma distribution with mean μ(s, t) varying in space and time and constant shape parameter κ, i.e., with density

    f(y) = {κ/μ(s, t)}^κ y^(κ−1) exp{−κ y/μ(s, t)} / Γ(κ), y > 0,   (3)

    where Γ(·) is the Gamma function. For the purpose of using INLA, our parametrization of the Gamma distribution in terms of its mean and shape is slightly changed with respect to the classical one. A high space-time threshold u(s, t) is then chosen as the β-quantile of positive precipitation intensities, deduced after fitting model (3) to the data, where β ∈ (0, 1) is a fixed probability. The parameter β defines the level of the threshold at each space-time location, and therefore controls the accuracy of the asymptotic GP approximation and the number of threshold exceedances in the following stages. As β involves a crucial bias-variance trade-off, we select it by cross-validation in §5.

  2. Using the space-time threshold u(s, t) obtained in Stage 1, exceedance indicators are modeled as Bernoulli random variates, i.e., setting Z(s, t) = I{Y(s, t) > u(s, t)},

    Z(s, t) ~ Bernoulli{p(s, t)},   (4)

    where p(s, t) denotes the overall probability of exceeding the threshold, i.e., taking into account both the positive and zero parts of the precipitation distribution.

  3. Using the space-time threshold u(s, t) obtained in Stage 1, positive threshold exceedances Y(s, t) − u(s, t) > 0 are assumed to follow the GP distribution parametrized in terms of its α-quantile q_α(s, t) and tail index ξ; recall (1) and (2). As the data usually contain too little information for the tail index to be accurately estimated as a function of space and time, we treat it as constant. Moreover, assuming that the non-stationary patterns in the bulk and the tail of the distribution might be quite similar to each other, we further specify log q_α(s, t) = log μ(s, t) plus a spatio-temporal correction, where μ(s, t) is the Gamma mean estimated in the first stage using a rich training dataset. This allows us to borrow strength from the bulk for more accurate tail estimation. Since we expect the correction to be close to stationary in practice, estimation is simplified in a Bayesian framework, where informative priors can be used.
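On toy data, the three stages can be mimicked as follows. This is a hypothetical, scipy-based sketch with the Gamma shape κ treated as known (in the paper, μ(s, t) and κ come from the Bayesian Gamma fit, and all stages involve latent spatio-temporal effects):

```python
import numpy as np
from scipy.stats import gamma

def gamma_threshold(mu, kappa, beta):
    """Stage 1: beta-quantile of a Gamma(mean mu, shape kappa) fit,
    used as a space-time varying threshold u(s, t)."""
    # mean = shape * scale  =>  scale = mu / kappa
    return gamma.ppf(beta, a=kappa, scale=mu / kappa)

# Toy data: positive rainfall intensities at one site
rng = np.random.default_rng(1)
y = gamma.rvs(a=0.8, scale=5.0, size=2000, random_state=rng)

kappa_hat = 0.8            # stand-in for the fitted Gamma shape
mu_hat = y.mean()          # stand-in for the fitted space-time mean mu(s, t)
u = gamma_threshold(mu_hat, kappa_hat, beta=0.95)

# Stage 2 response: exceedance indicators; Stage 3 response: positive excesses
exceed = (y > u).astype(int)
excess = y[y > u] - u
print(u, exceed.mean(), excess.size)
```

With β = 0.95 and a well-calibrated Stage 1 fit, roughly 5% of the observations exceed the threshold and feed the Bernoulli and GP stages.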

The Gamma distribution in Stage 1 has been widely used in hydrology as a complete statistical model for (positive) precipitation intensities (Wilks:2006). However, its tail decays exponentially fast, and is therefore often found to be too light and to underestimate probabilities associated with extreme events (Katz.etal:2002); by contrast, the GP model used in Stage 3 is motivated by asymptotic theory, and can capture heavy tails. Therefore, Stage 1 is only used here to select a suitable threshold in a potentially highly non-stationary context and to get a reasonable prior for the size and shape of non-stationarities in the tail, while Stages 2 and 3 provide a complete characterization of the tail in terms of the threshold exceedance probability p(s, t), the GP α-quantile q_α(s, t) and the tail index ξ. Note that Stage 1 is “context-dependent”, in the sense that the Gamma distribution might need to be replaced by another distribution if the data were different, while Stages 2 and 3 are “context-independent”, as Extreme-Value Theory holds under general assumptions. The overall q-quantile y_q(s, t) of the precipitation distribution, for probabilities q large enough that y_q(s, t) exceeds the threshold u(s, t), is then


y_q(s, t) = u(s, t) + Ḡ^(−1){(1 − q)/p(s, t)},   (5)

where Ḡ denotes the GP survivor function and Ḡ^(−1) its inverse.
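This combination of the exceedance probability with the GP quantile can be sketched numerically (illustrative values only, with ξ > 0):

```python
import numpy as np

def gp_survivor_inv(s, sigma, xi):
    """Inverse of the GP survivor function: the excess exceeded with probability s."""
    return sigma * (s ** (-xi) - 1.0) / xi

def overall_quantile(q, u, p_u, sigma, xi):
    """q-quantile of the full precipitation distribution: solve
    p_u * (1 - H(y - u)) = 1 - q for y, valid when 1 - q < p_u."""
    return u + gp_survivor_inv((1.0 - q) / p_u, sigma, xi)

# Example: threshold exceeded 5% of the time; 99.8% quantile
y_q = overall_quantile(q=0.998, u=30.0, p_u=0.05, sigma=8.0, xi=0.2)
# Round trip: Pr(Y > y_q) = p_u * (1 + xi*(y_q - u)/sigma)**(-1/xi) = 1 - q
print(0.05 * (1 + 0.2 * (y_q - 30.0) / 8.0) ** (-1 / 0.2))  # -> 0.002
```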

In order to capture systematic space-time variations in the data’s tail behavior, we use a regression formulation with suitable link functions and an additive structure in the predictor components. To keep the model fairly simple and robust while allowing for high flexibility, we assume that each predictor includes (in addition to the intercept) a spatial random effect and a temporal random effect chosen to be separable, i.e.,

η^(k)(s, t) = β₀^(k) + f_spat^(k)(s) + f_week^(k){w(t)}, k = 1, 2, 3,   (6)

where β₀^(k) denote fixed intercepts, f_spat^(k) are spatial random effects defined at each station, and f_week^(k) are temporal random effects defined on a weekly basis and cyclic with a yearly period. The superscripts k = 1, 2, 3 refer to each model stage. In our model, we include the logarithm of the Gamma mean μ(s, t) (or, in practice, of its estimated posterior mean) as an additive offset in the GP α-quantile, such that the Stage 3 predictor can be interpreted as a residual effect after having removed the scaling implied by the Gamma distribution for positive precipitation. If the residual space and time effects f_spat^(3) and f_week^(3), respectively, are not significantly nonzero, we can conclude that such trends in threshold exceedances are already well explained by the Gamma model. Since we adopt a Bayesian framework with Gaussian process priors centered at zero, including this offset also prevents an overly strong influence of the prior specification on posterior distributions when only few exceedances are observed. Routine calculations show that the GP density with the above link function is log-concave with respect to the predictor η, which is beneficial for INLA and optimization routines in general.

Each spatial effect, denoted by f_spat for simplicity, is assumed to be driven by a zero-mean Gaussian process prior with precision τ and Matérn correlation function (Stein:1999; Lindgren.etal:2011; Lindgren.Rue:2015), i.e.,

c(h) = {2^(1−ν) / Γ(ν)} (h/ρ)^ν K_ν(h/ρ),   (7)

where K_ν denotes the modified Bessel function of the second kind of order ν, ρ > 0 is the spatial range parameter, and h = ‖s₁ − s₂‖ is the distance between two locations s₁, s₂. The spatial effect allows us to borrow strength across locations, its main purpose being to make predictions at unobserved (or scarcely observed) stations. Its small-scale behavior related to sample path regularity is of relatively minor importance here, and so we fix the smoothness parameter ν. The range parameter ρ, however, controls the amount of spatial smoothing, and we select it by cross-validation in §5 to optimize spatial prediction. Unlike ρ, the precision τ is estimated, rather than post-selected by cross-validation.

Each temporal effect, denoted by f_week for simplicity, is assumed to be driven by a second-order Gaussian random walk prior defined over weeks with precision τ_w for its weekly innovations (Lindgren.Rue:2008 and Rue.Held:2005, Chapter 3). Let w(t) denote the week associated with time t; one has f_week(t) = f{w(t)}, where, simultaneously for all weeks w,

f(w − 1) − 2 f(w) + f(w + 1) ~ N(0, τ_w^(−1)),

and the Gaussian process f is restricted to sum to zero for identifiability and to be cyclic over a period of one full year (i.e., 52 “weeks” with an approximate length of 7 days each). Because the precision τ_w controls the smoothness of the weekly effect, we select it by cross-validation in §5 to optimize temporal prediction.

Although the inference approach based on INLA may be exploited with additional fixed effects and more complicated random effects, potentially including space-time interactions, we restricted ourselves to the above structure, which was found to be flexible enough and to provide robust and interpretable results in our application. For the shape κ of the Gamma distribution, we fix a moderately informative Gamma prior distribution, and due to the large sample size we can expect a negligible influence of the prior distribution on posterior estimates. The intercepts β₀^(k) have noninformative Gaussian priors centered at 0. The prior choice for the tail index ξ is discussed in §3.3. Moreover, the priors for the various hyperparameters and additive regression components are mutually independent.

3.3 Penalized complexity prior for the tail index

The tail index ξ plays a crucial role in high quantile estimation. Light exponential tails arise with ξ = 0, while heavier power-law tails arise with ξ > 0. When ξ ≥ 1, the mean is infinite, and when ξ ≥ 1/2, the variance is infinite. Because of this, too large values of ξ are unrealistic for many data types and typically lead to uncertainty inflation for high quantiles. It is therefore important to choose a suitable prior distribution for ξ, giving priority to light and moderately heavy tails while properly downweighting unrealistically heavy tails.

An elegant approach to choosing priors in a principled way when little expert knowledge is available consists in using the penalized complexity (PC) priors introduced by Simpson.etal:2017, which penalize the “distance” from a base model at constant rate, independently of its parametrization. These priors take the geometry induced by the choice of model parametrization into account, therefore avoiding intricate interpretation problems that arise otherwise. Furthermore, these priors are designed to allow for shrinkage towards a simpler reference model, which helps to prevent over-fitting by penalizing complex models. They also provide an “objective” (i.e., “automatic”) way of choosing the prior distribution family, while keeping some degree of subjectivity in selecting the penalization rate parameter.

Following Simpson.etal:2017, a natural definition of the “distance” of a model f with respect to a reference model f₀ (here assumed to be in the same family for simplicity) may be based on the Kullback-Leibler divergence (KLD) by setting

d(f) = {2 KLD(f ‖ f₀)}^(1/2), KLD(f ‖ f₀) = ∫ f(y) log{f(y)/f₀(y)} dy.   (8)

For the GP distribution (1) with ξ ≥ 0, a natural choice of base model is the exponential distribution which arises when ξ = 0. In this way, a prior assuming a constant decay rate in terms of the distance based on the Kullback-Leibler divergence (8) penalizes departure from an exponential tail, which usually makes sense for environmental data; the heavier the tail, the stronger the penalty. Let f_ξ,σ(y) = σ^(−1)(1 + ξy/σ)^(−1/ξ−1), y ≥ 0, be the GP density and f₀,σ(y) = σ^(−1) exp(−y/σ), y ≥ 0, be the exponential density. Knowing that the GP distribution (1) with parameters ξ, σ has mean σ/(1 − ξ) when ξ < 1, and using the change of variable z = 1 + ξy/σ, one obtains


KLD(f_ξ,σ ‖ f₀,σ) = ξ²/(1 − ξ), 0 ≤ ξ < 1,   (9)

which does not depend on the scale parameter σ, as expected. To define a PC prior for the tail index ξ, we assume that a given model for the data is penalized at constant rate λ̃ in terms of its “distance” to the reference model with ξ = 0, therefore involving the exponential distribution in the “metric” space defined through (8). Because the KLD (9) converges to infinity as ξ → 1, such a prior will put zero mass on ξ ≥ 1, hence preventing infinite-mean models from occurring. This can be seen as an advantage for applications in hydrology, where the tail index is usually quite small. We here propose two possible prior choices, which are based on (i) Equation (9) and (ii) an approximation of (9) as ξ → 0:

  1. The first possibility uses the exact formula (9), which yields d(ξ) = ξ {2/(1 − ξ)}^(1/2) and implies that the corresponding PC prior for ξ with support [0, 1) is

    π(ξ) = λ (1 − ξ/2) (1 − ξ)^(−3/2) exp{−λ ξ (1 − ξ)^(−1/2)}, 0 ≤ ξ < 1,   (10)

    where the penalization rate parameter is λ = 2^(1/2) λ̃.

  2. The second possibility is to approximate the KLD (9) by ξ², as ξ → 0. This yields d(ξ) ≈ 2^(1/2) ξ and implies that the corresponding approximate PC prior for ξ, with support [0, ∞), is exponential with rate λ = 2^(1/2) λ̃, i.e.,

    π(ξ) = λ exp(−λ ξ), ξ ≥ 0.   (11)
Our two proposed priors are illustrated in Figure 3.
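The closed-form divergence underlying both priors is easy to verify by numerical integration. The standalone sketch below (unit scale; not part of the paper's code) checks that the KLD between the GP and exponential densities with common scale equals ξ²/(1 − ξ):

```python
import numpy as np
from scipy.integrate import quad

def kld_gp_vs_exp(xi, sigma=1.0):
    """Kullback-Leibler divergence of a GP(xi, sigma) density from the
    exponential density with the same scale, computed by quadrature."""
    def integrand(y):
        log_f = -np.log(sigma) - (1.0 / xi + 1.0) * np.log1p(xi * y / sigma)
        log_g = -np.log(sigma) - y / sigma
        return np.exp(log_f) * (log_f - log_g)
    return quad(integrand, 0.0, np.inf)[0]

# Closed form used for the PC priors: KLD(xi) = xi^2 / (1 - xi), 0 <= xi < 1;
# the two columns agree to quadrature accuracy.
for xi in (0.1, 0.3, 0.5):
    print(kld_gp_vs_exp(xi), xi ** 2 / (1.0 - xi))
```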

Figure 3: PC priors for the GP tail index ξ using the exact KLD formula in (10) (left) or the approximate KLD formula in (11) (right), i.e., using an exponential prior, for several values of the penalization rate parameter λ (light blue, dark blue, purple and red curves), together with the value used in our application (thick black), which was combined with (11).

These priors are very similar to each other when the penalization rate parameter λ is large (giving strong preference to ξ ≈ 0), but they differ significantly when λ is small. The PC prior using the exact KLD formula in (10) gives zero probability to ξ ≥ 1, while the prior (11) gives exponentially decreasing but positive probability to all ξ ≥ 0. This bounded/unbounded support distinction between the two proposed priors is more apparent for small λ; the PC prior (10) becomes concentrated around ξ = 1 as λ → 0, which is not very intuitive. In practice, the shrinkage rate λ may be elicited from prior information in specific applications. For example, when the tail is known to be relatively light, a fairly large λ seems to be a reasonable choice, and both priors should give similar results. In this case, both densities resemble an exponential density, which shrinks the tail index towards 0; large values of ξ will only arise if the data are heavy-tailed and provide strong evidence for it. In our precipitation example in §5, we use a fairly informative exponential prior (11) for ξ whose rate leaves only a small prior probability for large tail indices; see Figure 3.

4 Bayesian inference

4.1 Posterior distributions

The three regression models of our application are fitted separately, and the following exposition concentrates on this approach. It would be possible to have some of the fixed and random effects in common between the GP exceedance distribution and the Bernoulli distribution of exceedance indicators; in this case, we would fit those two regressions as a single model with two types of data/likelihood combinations. However, the Gamma regression must always be fitted separately in a first step, since it determines the data used for the other two regressions. By putting prior distributions on the fixed and random effects of the regression and on the various hyperparameters, we can apply Bayes’ formula and infer the posterior distributions of model components of interest.

In what follows, the notation is independent of the data/likelihood combination, given in each of the three stages as (1) positive precipitation intensities/Gamma, (2) exceedance indicators/Bernoulli and (3) threshold exceedances/GP, respectively. For the observed data vector with components y_i, i = 1, …, n, we write y = (y_1, …, y_n)′ in short, and the vector of latent Gaussian predictors is denoted η = (η_1, …, η_n)′, such that y_i depends on the latent structure only through η_i. We denote hyperparameters related to the likelihood by the vector θ₁, in our case given by the Gamma shape parameter and the GP tail index, while the Bernoulli has no hyperparameter. The vector x regroups all variables arising in the fixed and random effects, which includes the intercept, the weekly effects, which are based on a discretization of the year using 52 variables, and the spatial effects for observation and prediction sites. Hyperparameters related to the Gaussian priors of predictor components are collected in a vector θ₂; they include the Matérn precision τ and the precision τ_w of the random walk innovations. The predictor vector η is linearly related to the latent effects through an observation matrix A determined by the structure of observation points and latent components, such that η = Ax. The prior distribution on x is Gaussian with zero mean and precision matrix (i.e., inverse covariance matrix) Q(θ₂). Owing to prior independence between the predictor components, Q(θ₂) is block-diagonal with three blocks. For parameters in θ₁ and θ₂ that are not fixed to a specific value, we define a prior distribution π(θ) with θ = (θ₁′, θ₂′)′, which is assumed to factorize over the components of θ. The Gaussian density of x is written π(x | θ₂).

Using π(y_i | η_i, θ₁) to denote the likelihood of a data point conditional on its predictor and the likelihood hyperparameters θ₁, the joint density of x, θ and y may be expressed as

π(x, θ, y) = π(θ) π(x | θ₂) ∏_{i=1}^n π(y_i | η_i, θ₁).   (12)

For notational convenience, we now suppose that the predictor vector η is a subvector of x, with x_i = η_i, i = 1, …, n. We are mainly interested in the posterior distributions of the variables in x and θ, whose joint posterior density conditional on the data is calculated through the Bayes formula as

π(x, θ | y) = π(x, θ, y) / π(y) ∝ π(θ) π(x | θ₂) ∏_{i=1}^n π(y_i | η_i, θ₁),   (13)

where π(y) is a normalizing constant. Specifically, we are interested in the univariate posterior distributions of predictors and components of θ, which requires integrating the density (13) with respect to all components of x and θ except the one of interest. Since we cannot provide an exact analytical solution of this high-dimensional integral, we could resort to simulation-based MCMC techniques for producing a large representative sample of x and θ. Instead, we propose using INLA, which exploits suitably chosen variants of analytical Laplace approximations of integrand functions (Tierney.Kadane.1986), such that integrals are over Gaussian densities and their evaluation becomes straightforward.

4.2 Integrated Nested Laplace Approximation (INLA)

INLA (see, e.g., Opitz.2017b) has become a cutting-edge tool for fast and accurate inference in complex and hierarchically structured GAM-like models with a latent Gaussian structure and sparse precision matrix, such as our three models described above. For a vector $\boldsymbol{\theta}$ whose $k$th component has been removed, we write $\boldsymbol{\theta}_{-k}$. INLA provides accurate approximations of the following two types of univariate posterior densities:

$$\tilde\pi(x_j \mid \boldsymbol{y}) = \int \tilde\pi(x_j \mid \boldsymbol{\theta}, \boldsymbol{y})\, \tilde\pi(\boldsymbol{\theta} \mid \boldsymbol{y})\, \mathrm{d}\boldsymbol{\theta}, \qquad \tilde\pi(\theta_k \mid \boldsymbol{y}) = \int \tilde\pi(\boldsymbol{\theta} \mid \boldsymbol{y})\, \mathrm{d}\boldsymbol{\theta}_{-k}, \quad k = 1, \ldots, m,$$
where $m$ is the length of $\boldsymbol{\theta}$. If a hyperparameter has been fixed to a specific value, then its prior density can be interpreted as a Dirac mass at this value, and the related integral vanishes. In general, the number of estimated hyperparameters should be kept small, since integration with respect to components of $\boldsymbol{\theta}$ is carried out numerically through an astute choice of discretization points. The main obstacle remains integration with respect to the Gaussian components $\boldsymbol{x}$, for which the Laplace approximation is applied in a nested way: first, to approximate the posterior of the hyperparameters $\boldsymbol{\theta}$, and second, to integrate with respect to the Gaussian vector $\boldsymbol{x}$. The log-concavity of the likelihood with respect to the predictor is crucial since it ensures that a useful Gaussian approximation to $\pi(\boldsymbol{x} \mid \boldsymbol{\theta}, \boldsymbol{y})$ in (13) can be calculated by matching the mode and the curvature around it using an iterative Newton–Raphson scheme based on second-order Taylor expansions. Indeed, owing to the conditional independence of the observations given $\boldsymbol{x}$, the precision matrix of the approximation is of the form "prior precision plus a diagonal matrix", where the components of the added diagonal matrix must be positive and are closely related to the second derivative of the log-likelihood with respect to the predictor; hence the requirement of log-concavity. Notice that a simplified approximation scheme skips the second Laplace approximation by simply using a conditional Gaussian distribution based on the first Laplace approximation, but slight accuracy problems may arise with non-Gaussian univariate likelihoods. For more details on the approximation mechanism and its implementation, see Opitz.2017b.
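To illustrate the core idea of mode-and-curvature matching, here is a standalone Python sketch of the generic Newton–Raphson/Laplace step (not R-INLA's actual implementation; the Poisson toy model is our own assumption):

```python
import math

def gaussian_approx(grad, hess, x0=0.0, tol=1e-10, max_iter=100):
    """Newton-Raphson search for the mode of a log-concave log-density;
    returns (mean, variance) of the Gaussian matching mode and curvature."""
    x = x0
    for _ in range(max_iter):
        step = grad(x) / hess(x)
        x -= step
        if abs(step) < tol:
            break
    return x, -1.0 / hess(x)

# Toy latent Gaussian model: y ~ Poisson(exp(x)) with prior x ~ N(0, 1), so
# log pi(x | y) = const + y*x - exp(x) - x**2 / 2, which is log-concave in x.
y = 3
grad = lambda x: y - math.exp(x) - x     # first derivative of log pi(x | y)
hess = lambda x: -math.exp(x) - 1.0      # second derivative: strictly negative
mean, var = gaussian_approx(grad, hess)
```

The strictly negative Hessian is exactly the log-concavity requirement discussed above: it guarantees a positive added diagonal term and a well-defined Newton step.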

5 Results for the Dutch precipitation data application

5.1 Cross-validation study

The goal of the EVA2017 challenge was to predict the overall $\tau$-quantile of daily precipitation, with $\tau = 0.998$, for each month and station, over two different sets of stations (the so-called Challenges 1 and 2, respectively; see Wintenberger:2018). The target quantile is so high that fully non-parametric methods would likely perform poorly and be outperformed by parametric alternatives; furthermore, we expect extreme-value models (such as the GP distribution (1)) to provide a good fit at this level. The evaluation criterion used to rank the predictions submitted by all teams was based on the quantile loss function defined as

$$\ell_\tau(\hat q; y) = \begin{cases} \tau\,(y - \hat q), & y \ge \hat q,\\ (1 - \tau)\,(\hat q - y), & y < \hat q, \end{cases} \tag{16}$$

where $y$ and $\hat q$ represent observations and predicted quantiles, respectively. As the $\tau$-quantile of a random variable $Y$ minimizes the expected risk $\mathrm{E}\,\ell_\tau(q; Y)$ over $q$ (Koenker:2005), this justifies using (16) for assessing quantile predictions, although alternative risk measures may also be used (Daouia.etal:2018). To optimize our spatio-temporal prediction performance at stations with poor time coverage, we select the most crucial hyperparameters, i.e., the Matérn range parameter in (6), the temporal precision parameter in (7), and the threshold probability, using three different cross-validation (CV) criteria based on (16). An alternative would have been to estimate such parameters directly in the Bayesian model by specifying suitable prior distributions, but since our models do not capture stochastic dependence between observations at nearby space-time points, overfitting could have been an issue with such an approach. As these hyperparameters determine the effective amount of information that is used or borrowed from neighboring space-time locations, they may strongly affect our predictions and thus need to be selected with care; recall that the threshold probability controls the bias-variance trade-off of the asymptotic GP approximation, the Matérn range controls the amount of spatial smoothing, and the temporal precision controls the amount of temporal smoothing of the corresponding random effects. Let $q_\tau(s, t)$ denote the true $\tau$-quantile for station $s$ and time $t$, and $\hat q_\tau(s, t)$ our estimate of it based on Equation (5), obtained by fitting the three-stage model described in §3 to the entire dataset using INLA as explained in §4. Let also $\hat q_\tau^{[-s]}(s, t)$ and $\hat q_\tau^{[-y]}(s, t)$ denote the predicted quantiles obtained after removing station $s$ and year $y$, respectively. We consider the following CV criteria, designed to optimize spatial prediction, temporal prediction, or a combination of both:


here, denotes the set of time points (days) within a specific year. The lower the CV criterion, the better the model. As the target probability is extreme, it may be better in practice to evaluate the CV criteria at a slightly lower quantile level in order to prevent overfitting. We tried this, but found no major difference in the results. The results presented below and in §5.2 are based on . The final monthly quantile predictions were then extracted from our selected “best model” by averaging the daily predictions for that month.
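As a minimal illustration of the loss in (16) and its use in a CV criterion (a Python sketch; the authors' pipeline is in R):

```python
def quantile_loss(y, q, tau=0.998):
    """Pinball loss: under-predicting a high quantile costs tau/(1 - tau)
    times more per unit than over-predicting it."""
    return tau * (y - q) if y >= q else (1.0 - tau) * (q - y)

def cv_score(obs, preds, tau=0.998):
    """Average quantile loss over held-out (observation, prediction) pairs,
    the building block of the CV criteria above."""
    return sum(quantile_loss(y, q, tau) for y, q in zip(obs, preds)) / len(obs)
```

With tau = 0.998, under-predicting by one unit costs 499 times as much as over-predicting by one unit, which is why this criterion rewards well-calibrated high-quantile predictions.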

After projecting the (shifted) longitude–latitude coordinates to a proper metric system, we considered km, with , and , and fitted our three-stage hierarchical Bayesian tail regression model for all combinations of these hyperparameters. Notice that (for positive precipitation intensities) roughly corresponds to an overall probability of , as there were about of zeros observed at each station. Although model estimation based on INLA is much faster than classical simulation-based MCMC methods and bypasses convergence-assessment issues, the total number of fits required to compute the cross-validation criteria (17), (18) and (19) equals ( models times hold-out samples, corresponding to the sum of the numbers of stations and years). In our case, a single fit (for all three stages) took about hours overall on cores, and we used the KAUST Supercomputer Shaheen II to compute them all in parallel. The total number of core-hours used was about half a million, which corresponds roughly to years of computation on a single-core machine. This cross-validation study underlines the importance of fast and accurate inference methods coupled with large distributed computing resources.

Figure 4: Space-time cross-validation scores (19) with , displayed on a common logarithmic color scale. Each panel displays the CV scores in terms of and for specific values of (see panel titles). Lower values (i.e., darker blue cells) indicate better models.

Figure 4 displays the space-time cross-validation scores (19) for all models, while Table 1 presents a summary of the five best models. Overall, it appears that smaller probabilities yield much better predictions, thanks to the reduced variability resulting from the larger effective sample size of threshold exceedances. As for the other hyperparameters, it is difficult to draw general conclusions from the rather “noisy” diagnostics shown in Figure 4. Nonetheless, the model that provides the best space-time predictions according to (19) has , and (Model 122), and it yields reasonable results; hence, we decided to proceed with Model 122 as our “best model”. As the models reported in Table 1 mostly yield a similar interpretation in terms of the estimated random effects, the next section showcases the results obtained for Model 122.

Model ID | Spatial range | Temp. SD | Proba. | CV space (17) | CV time (18) | CV space-time (19)
Table 1: Five “best models” ranked according to the space-time CV criterion (19) with . Columns report the model ID, the Matérn spatial range, the temporal standard deviation (SD), the threshold probability for positive precipitation intensities, and the CV scores (17), (18) and (19), respectively.

5.2 Final results and interpretation

Here, we provide further results and interpretation for our selected model.

Figure 5: Spatial (top) and weekly (bottom) effects displayed for the three stages (left to right) of our selected “best model” (Model 122 in Table 1). Black dots and curves show the posterior means, while blue segments and curves show pointwise credible intervals.

Figure 5 shows the posterior means and pointwise credible intervals estimated for the spatial and weekly random effects in each of the three stages (i.e., the Gamma, Bernoulli and GP models), while Figure 2 maps the spatial effect estimated in the first stage (Gamma). Overall, the uncertainty in the Gamma model is much lower than in the Bernoulli and GP models, as it uses more information. Although these results neglect model selection uncertainty and model misspecification due to the conditional independence assumption of the data given the latent parameters, the weekly effect for the Gamma model appears, overall, highly significantly different from zero, while the GP model fitted to threshold exceedances (i.e., to the top of the positive precipitation data) seems close to stationary in space and time when taking the uncertainty into account. As the GP scale parameter involves the Gamma mean as a multiplicative offset, this confirms that the shape and size of non-stationary patterns are similar in the bulk and the tail of the distribution. The spatial patterns look quite stationary overall.

As expected, the credible intervals for the spatial effect in the Gamma model at time-rich stations are much narrower than those at time-poor stations; for example, Station 12, with observations and surrounded by many other stations in the middle of the study region, has the narrowest credible interval, while Station 9, with no observations and located on the northern border of the study region, has the widest credible interval. Interestingly, Stations 21 and 22, which have very poor observational records but are surrounded by Stations 19, 23 and 26 with rich time series, have significant (or almost significant) positive effects, which confirms that our model succeeds in borrowing strength across nearby locations. Furthermore, the flexibility of our model is demonstrated by Station 27, located in the sea and far from the other stations, which stands out with a strong and highly significant negative effect; the corresponding time series shown in Figure 1 indeed reveals that this particular station is less exposed to intense precipitation than stations inland.

The weekly effect of the Gamma model is quite smooth and well estimated. It shows that precipitation intensity tends to be quite mild in April and stronger during summer, especially in July and September–October. The weekly effect of the Bernoulli model, however, has an opposite pattern and shows that the threshold is more frequently exceeded from September to February than during the other months, which suggests that there are more wet days during the fall and winter, as expected. The GP weekly effect finally reveals that the largest exceedances over the threshold (and hence, the most intense precipitation events) tend to occur during the summer, though the associated uncertainty is high.

The GP tail index has posterior mean , with credible interval , which shows a significant departure from an exponential tail (arising when the tail index equals zero) and reveals that precipitation data in the Netherlands are heavy-tailed with finite first and second moments. This also suggests that the posterior distribution of the tail index is quite far from its prior displayed in Figure 3; although the prior was chosen to be quite informative, with strong shrinkage towards light tails, the thousands of threshold exceedances available to fit the model provided strong evidence for heavier tails.
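For intuition on how the fitted tail index drives extrapolation, here is a hedged Python sketch of the standard GP quantile formula (the threshold, scale, shape, and exceedance probability below are made-up illustrative values, not our posterior estimates):

```python
import math

def gp_quantile(tau, u, sigma, xi, zeta_u):
    """tau-quantile implied by a GP(sigma, xi) fit to excesses above u,
    where zeta_u = P(Y > u); requires tau > 1 - zeta_u."""
    if abs(xi) < 1e-12:                   # exponential tail in the limit xi -> 0
        return u + sigma * math.log(zeta_u / (1.0 - tau))
    return u + (sigma / xi) * ((zeta_u / (1.0 - tau)) ** xi - 1.0)

# Illustrative comparison: a heavier tail pushes the 99.8% quantile far higher.
q_exp = gp_quantile(0.998, u=20.0, sigma=5.0, xi=0.0, zeta_u=0.02)
q_heavy = gp_quantile(0.998, u=20.0, sigma=5.0, xi=0.2, zeta_u=0.02)
```

Even a moderately positive tail index markedly inflates the extrapolated quantile relative to the exponential (zero tail index) case, which is why this parameter must be estimated and regularized with care.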

Figure 6: Boxplots of the cube root of precipitation intensities for each month and station. Red curves are our model-based predictions of the $\tau$-quantile, with $\tau = 0.998$, while blue curves are the empirical $\tau$-quantiles obtained by pooling all stations together. Stations where prediction was required are indicated by (Challenge 1) and (Challenges 1 & 2).

To conclude our analysis, Figure 6 displays monthly boxplots of the data together with our final $\tau$-quantile predictions for each month and station; the results are also compared to empirical quantiles computed for each month by pooling all stations together under the assumption of spatial stationarity. Our estimated weekly effect is weak but clearly visible, while the difference between stations appears almost negligible, which suggests that we could perhaps have simplified the model by removing the spatial effect. Nevertheless, our Bayesian approach based on penalized complexity priors is designed to shrink the model towards a simple reference model, which helps in obtaining robust estimates and avoiding overfitting. Figure 6 also illustrates the ability of our spatio-temporal modeling approach based on Extreme-Value Theory to extrapolate predictions to extreme quantiles, even at locations where no data have been recorded.

6 Conclusion

Estimating extreme spatio-temporal quantiles from non-stationary data is not an easy task, and we think that the Extreme-Value Analysis conference 2017 challenge, which has motivated this work, has positively contributed to advancing the existing methods and the current literature.

In this paper, we have combined the generalized Pareto distribution from Extreme-Value Theory with a flexible Bayesian latent Gaussian modeling approach. Our proposed three-stage model features spatial and temporal random effects that can capture systematic variations in the data. Our proposed generalized additive structures embedded in each stage provide interesting insight into the data’s bulk and tail behaviors and could be made more complex if required by the context. Although the dataset studied for this competition is fairly small and weakly non-stationary, our very fast and accurate inference approach based on INLA could be applied to extremely high-dimensional and highly non-stationary space-time data with additional hierarchical levels. Moreover, the combination of INLA and access to large distributed computing resources made it possible to run an extensive cross-validation analysis to select crucial hyperparameters and optimize our space-time quantile predictions.

Despite the model complexity, our approach based on penalized complexity (PC) priors guarantees stable and robust quantile predictions and helps to avoid overfitting, which is especially important when predicting extremes. In this paper, we have derived the PC prior for the tail index, which controls the tail decay rate and therefore strongly impacts extrapolations to high levels. Our proposed prior favors light exponential-like tail decay rates while penalizing unrealistically heavy tails.
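The general mechanics can be sketched as follows (a Python illustration; the distance function d below is a placeholder assumption chosen for demonstration, not the distance derived in the paper): a PC prior places an exponential density with rate lambda on the distance to the base model and maps it back to the parameter scale via the Jacobian.

```python
import math

def pc_prior_density(xi, lam, d, eps=1e-6):
    """Generic PC prior: pi(xi) = lam * exp(-lam * d(xi)) * |d'(xi)|,
    with the Jacobian |d'(xi)| computed by central differences."""
    dprime = (d(xi + eps) - d(xi - eps)) / (2.0 * eps)
    return lam * math.exp(-lam * d(xi)) * abs(dprime)

# Placeholder distance (illustrative only): zero at the exponential base
# model xi = 0 and diverging as xi -> 1, so very heavy tails are ruled out.
d = lambda xi: xi / math.sqrt(1.0 - xi)
```

Because d is monotone from 0 to infinity on [0, 1), the resulting density integrates to one, and a larger lambda concentrates more prior mass near the light-tailed base model.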

Although the distribution in the first stage might need to be modified for other types of environmental data, our framework based on Extreme-Value Theory is very general, and the methodology could easily be adapted to different contexts. Model estimation is executed using the R package R-INLA, which is convenient and easy to use. For reproducibility purposes, our code is illustrated on

Although the quantiles predicted by our model were quite good overall, certain aspects could have been improved. In particular, excluding months instead of years in our cross-validation study might have improved the estimation of the temporal effect, which is defined on a weekly basis. Furthermore, it would have been possible to cross-validate the tail index as well, which is a crucial parameter for predicting extreme quantiles. However, because of our limited (though large) computational resources, we opted to perform our cross-validation study on parameters that control bias-variance trade-offs relevant for optimizing spatio-temporal prediction. Finally, we could have considered a more flexible model by allowing, for example, the tail index to vary over seasons or months, instead of being constant over space and time. However, this would increase the number of hyperparameters in the model, whose estimation might be trickier and more computationally demanding to perform using INLA.

In future work, it would be interesting to replace the generalized Pareto distribution by more flexible sub-asymptotic tail models (see, e.g., Papastathopoulos.Tawn:2013; Naveau.etal:2016), which could potentially be applied at much lower thresholds. Furthermore, as our model assumes that the data are conditionally independent given the latent parameters, it is not well suited to properly capture space-time dependence, and it would be interesting to extend our methodological framework to more complex spatial (Wadsworth.Tawn:2012; Thibaud.etal:2013; Thibaud.Opitz.2015; Huser.etal:2017; Castro.Huser:2017; Huser.Wadsworth:2018) and spatio-temporal (Davis.etal:2013; Huser.Davison:2014; Bacro.etal:2017) models for extremes. However, such extremal models are much more difficult and expensive to fit, and when the primary goal is to estimate high marginal quantiles, as in this work, the exact characterization of the dependence structure is a secondary issue, or perhaps even a nuisance.


The research reported in this publication was supported by funding from King Abdullah University of Science and Technology (KAUST). Support from the KAUST Supercomputing Laboratory and access to Shaheen II is gratefully acknowledged.