
Aggregating distribution forecasts from deep ensembles

04/05/2022
by   Benedikt Schulz, et al.

The importance of accurately quantifying forecast uncertainty has motivated much recent research on probabilistic forecasting. In particular, a variety of deep learning approaches has been proposed, with forecast distributions obtained as output of neural networks. These neural network-based methods are often used in the form of an ensemble based on multiple model runs from different random initializations, resulting in a collection of forecast distributions that need to be aggregated into a final probabilistic prediction. With the aim of consolidating findings from the machine learning literature on ensemble methods and the statistical literature on forecast combination, we address the question of how to aggregate distribution forecasts based on such deep ensembles. Using theoretical arguments, simulation experiments and a case study on wind gust forecasting, we systematically compare probability- and quantile-based aggregation methods for three neural network-based approaches with different forecast distribution types as output. Our results show that combining forecast distributions can substantially improve the predictive performance. We propose a general quantile aggregation framework for deep ensembles that shows superior performance compared to a linear combination of the forecast densities. Finally, we investigate the effects of the ensemble size and derive recommendations for aggregating distribution forecasts from deep ensembles in practice.



1 Introduction

Probabilistic forecasts in the form of predictive probability distributions over future quantities or events aim to quantify the uncertainty in the predictions and are essential to optimal decision making in applications (Gneiting and Katzfuss, 2014; Kendall and Gal, 2017). Motivated by their superior performance on a wide variety of machine learning tasks, much recent research interest has focused on the use of deep neural networks (NNs) for probabilistic forecasting. Different approaches for obtaining a forecast distribution as the output of a NN have been proposed over the past years, including parametric methods where the NN outputs parameters of a parametric probability distribution (Lakshminarayanan et al., 2017; D’Isanto and Polsterer, 2018; Rasp and Lerch, 2018), semi-parametric approximations of the quantile function of the forecast distribution (Bremnes, 2020) and nonparametric methods where the forecast density is modeled as a histogram (Gasthaus et al., 2019; Li et al., 2021). To account for the randomness of the training process based on stochastic gradient descent methods, NNs are often run several times from different random initializations. Lakshminarayanan et al. (2017) refer to this simple-to-implement and readily parallelizable approach as deep ensembles. Deep ensembles of NN models for probabilistic forecasting thus yield an ensemble of predictive probability distributions. To provide a final probabilistic forecast, the ensemble of predictive distributions needs to be aggregated to obtain a single forecast distribution.

The problem of combining predictive distributions has been studied extensively in the statistical literature, see Gneiting and Ranjan (2013) and Petropoulos et al. (2022, Section 2.6) for overviews. Combining probabilistic forecasts from different sources has been successfully used in a wide variety of applications including economics (Aastveit et al., 2019), epidemiology (Cramer et al., 2021; Taylor and Taylor, 2021), finance (Berkowitz, 2001), signal processing (Koliander et al., 2022) and weather forecasting (Baran and Lerch, 2016, 2018), and constitutes one of the typical components of winning submissions to forecasting competitions (Bojer and Meldgaard, 2021; Januschowski et al., 2021). On the other hand, forecast combination also forms the theoretical framework of some of the most prominent techniques in machine learning such as boosting (Freund and Schapire, 1996), bagging (Breiman, 1996) or random forests (Breiman, 2001), which are based on the idea of building ensembles of learners and combining the associated predictions. Generally, the individual component models (or ensemble members) can be based on entirely distinct modeling approaches, or on a common modeling framework where the model training is subject to different input datasets or other sources of stochasticity. The latter is the case for deep ensembles, where the main sources of uncertainty in the estimation are the random initialization of the network parameters and the stochastic gradient descent algorithm used for the optimization. For general reviews on ensemble methods in machine learning, we refer to Dietterich (2000), Zhou et al. (2002) and Ren et al. (2016).

While the arithmetic mean is a powerful and widely accepted method for aggregating single-valued point forecasts, the question of how probabilistic forecasts should be combined is more involved and has been a focus of research interest in the literature on statistical forecasting (Gneiting and Ranjan, 2013; Lichtendahl et al., 2013; Petropoulos et al., 2022). We will focus on readily applicable aggregation methods for the combination of probabilistic forecasts from deep ensembles. A widely used approach is the linear aggregation of the forecast distributions, often referred to as the linear (opinion) pool (LP). In the context of deep ensembles, Lakshminarayanan et al. (2017) apply the LP and linearly combine density forecasts. An alternative is given by aggregating the forecast distributions on the scale of quantiles by linearly combining the corresponding quantile functions, an approach that is commonly referred to as Vincentization (VI; see, for example, Genest, 1992).

The main aim of our work is to consolidate findings from the statistical and machine learning literature on forecast combination and ensembling for probabilistic forecasting. Using theoretical arguments, simulation experiments and a case study on probabilistic wind gust forecasting, we systematically investigate and compare aggregation methods for probabilistic forecasts based on deep ensembles, with different ways to characterize the corresponding forecast distributions. This study is motivated by and based on our work in Schulz and Lerch (2022), where we use ensembles of NNs to statistically postprocess probabilistic forecasts for the speed of wind gusts and propose a common framework of NN-based probabilistic forecasting methods with different types of forecast distributions. In the following, we apply a two-step procedure by first generating an ensemble of probabilistic forecasts and then aggregating them into a single final forecast, which matches the typical workflow of forecast combination from a forecasting perspective. Alternatively, it is also possible to incorporate the aggregation procedure directly into the model estimation (Kim et al., 2021).

The remainder of the paper is organized as follows. Section 2 introduces relevant metrics for evaluating probabilistic forecasts and the forecast aggregation methods. Three NN-based methods for probabilistic forecasting are presented in Section 3 along with a discussion of how the different aggregation methods can be used to combine the corresponding predictive distributions of an ensemble of such forecasts. In Section 4, we conduct a comprehensive simulation study that is followed up by a case study on probabilistic weather prediction in Section 5. Section 6 concludes with a discussion. R (R Core Team, 2020) code with implementations of all methods is available online (https://github.com/benediktschulz/agg_distr_deep_ens).

2 Combining probabilistic forecasts

Probabilistic forecasts given in the form of predictive probability distributions for future quantities or events aim to quantify the uncertainty inherent to the prediction. In the following, we first summarize how such distribution forecasts can be evaluated, and then formally introduce the LP and VI methods for aggregating probabilistic forecasts.

2.1 Assessing predictive performance

In our evaluation of predictive performance, we will follow the principle of Gneiting et al. (2007) that a probabilistic forecast should aim to maximize sharpness subject to calibration. Calibration refers to the statistical consistency between the forecast distribution and the observation, whereas sharpness is a property of the forecast alone and refers to the degree of forecast uncertainty. A forecast is said to be sharper, the smaller the associated uncertainty.

Quantitatively, calibration and sharpness can be assessed simultaneously using proper scoring rules (Gneiting and Raftery, 2007). A scoring rule S assigns a penalty S(F, y) to a pair of a probabilistic forecast F and corresponding observation y, and is called proper if the underlying true distribution G scores lowest in expectation, that is, E_{Y∼G}[S(G, Y)] ≤ E_{Y∼G}[S(F, Y)] for all forecast distributions F. Our forecast evaluation in the following will mainly focus on the widely used continuous ranked probability score (CRPS; Matheson and Winkler, 1976)

CRPS(F, y) = ∫_{−∞}^{∞} ( F(z) − 𝟙{y ≤ z} )² dz,    (2.1)

where F is a forecast distribution with finite first moment and 𝟙 is the indicator function. Proper scoring rules such as the CRPS are not only used for forecast evaluation but also provide valuable tools for estimating model parameters. In the case of the CRPS, the estimation typically relies on closed-form analytical expressions of the integral in (2.1) (see, for example, Jordan et al., 2019) and is referred to as optimum score estimation (Gneiting and Raftery, 2007).

To compare competing forecasting methods based on a proper scoring rule with respect to a benchmark and an optimal forecast, we calculate the associated skill score, for example, the continuous ranked probability skill score (CRPSS). Let S̄_F denote the mean score of the forecasting method of interest over a given dataset, S̄_ref the corresponding mean score of the benchmark forecast, and S̄_opt that of the (typically hypothetical) optimal forecast. The associated skill score is then calculated via

S_skill = ( S̄_F − S̄_ref ) / ( S̄_opt − S̄_ref ),    (2.2)

and simplifies to 1 − S̄_F / S̄_ref if S̄_opt = 0. In contrast to proper scoring rules, skill scores are positively oriented, with 1 indicating optimal predictive performance, 0 no improvement over the benchmark, and a negative skill a decrease in performance.

Further, we assess calibration qualitatively via histograms of the probability integral transform (PIT) Z_F = F(y), that is, the predictive CDF evaluated at the observation. (Technically, we here use the unified PIT, a generalization proposed in Vogel et al. (2018), due to the format of some of the aggregated forecast distributions.) A probabilistic forecast is (well-)calibrated if the PIT is uniformly distributed, resulting in a flat histogram. A U-shaped PIT histogram indicates underdispersion (or overconfidence), that is, a lack of spread in the forecast distribution, whereas a hump-shaped histogram indicates overdispersion (or underconfidence), that is, too much spread. In addition, we will generate quantile-based prediction intervals (PIs) to assess the calibration of the forecast distributions via the empirical coverage, and the sharpness via the length of the PIs. If a forecast is well-calibrated, the empirical coverage should resemble the nominal coverage, and a forecast is the sharper, the smaller the length of the PI. The nominal level of the PIs is a tuning parameter for the evaluation; we here adopt the specific level used in the application in Schulz and Lerch (2022), which forms the basis of our case study in Section 5. Finally, we measure accuracy based on the mean forecast error of the median of the predictive distribution, that is, the mean of median(F) − y over the evaluation set, which is positive in case of overforecasting and negative for underforecasting. For further background and details on the assessment of probabilistic forecasts, we refer to Schulz and Lerch (2022, Appendix A) and the references therein.
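A hedged R sketch of the remaining evaluation quantities for a forecast given by its CDF or quantile function; the interface and the nominal level of 0.9 are illustrative choices, not those of the paper:

    pit_value <- function(cdf, y) cdf(y)  # PIT; uniform for a calibrated forecast

    pi_metrics <- function(q_fun, y, level = 0.9) {
      # Central prediction interval from the quantile function q_fun, its
      # coverage indicator, its length (sharpness), and the error of the median
      lower <- q_fun((1 - level) / 2)
      upper <- q_fun(1 - (1 - level) / 2)
      c(covered      = as.numeric(lower <= y & y <= upper),
        pi_length    = upper - lower,
        median_error = q_fun(0.5) - y)
    }

    # Example: normal forecast N(1, 2^2), observation y = 0.5
    pit_value(function(x) pnorm(x, mean = 1, sd = 2), y = 0.5)
    pi_metrics(function(p) qnorm(p, mean = 1, sd = 2), y = 0.5)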

2.2 Combining predictive distributions

In the following, given n individual probabilistic forecasts that we aim to aggregate, we will denote their cumulative distribution functions (CDFs) by F_1, …, F_n and their quantile functions by Q_1, …, Q_n. The aggregation methods introduced below will typically assign weights w_1, …, w_n to the individual forecast distributions.

2.2.1 Linear pool (LP)

The most widely used approach for forecast combination is the LP, which is the arithmetic mean of the individual forecasts (Stone, 1961). For probabilistic forecasts, the LP is calculated as the (in our case equally) weighted average of the predictive CDFs and results in a mixture distribution. Equivalently, the LP can be calculated by averaging the probability density functions (PDFs). We define the predictive CDF of the LP via

F_LP(x) = Σ_{i=1}^{n} w_i F_i(x),    x ∈ ℝ,    (2.3)

where w_i ≥ 0 for i = 1, …, n with Σ_{i=1}^{n} w_i = 1. Note that the weights need to sum up to one to ensure that F_LP yields a valid CDF.

The LP has some appealing theoretical properties (for example, Lichtendahl et al. (2013) and Abe et al. (2022) show that the score of the LP forecast is at least as good as the average score of the individual components in terms of different proper scoring rules) and has been the prevalent forecast aggregation method over the last decades. For example, Lakshminarayanan et al. (2017) use the LP to combine density forecasts of multiple NNs, introducing the term deep ensembles.

However, the LP also has disadvantages and is known to have suboptimal properties when aggregating probabilities, since a linear combination of calibrated probability forecasts results in less sharp and underconfident forecasts (Ranjan and Gneiting, 2010). Gneiting and Ranjan (2013) extend this result to the general case of predictive distributions by showing that for distribution forecasts, sharpness decreases and dispersion increases. In particular, a (non-trivial) combination of calibrated forecasts is no longer calibrated. In the context of deep ensembles, these downsides have also been observed in recent studies (Rahaman and Thiery, 2020; Wu and Gales, 2021).

In our simulation and case study conducted in the following, we apply the aggregation methods to forecasts produced by the same data-generating mechanism based on an ensemble of NNs, which differ only in the random initialization. Therefore, we do not expect systematic differences between the individual forecasts and only consider equally weighted averages. In the following, we will refer to the LP as the equally weighted average given by w_i = 1/n for i = 1, …, n in (2.3). Figure 1 illustrates the effect of forecast combination via the LP.
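As an illustration, a short R sketch (ours, not the repository code) of the equally weighted LP in (2.3) and of the two-step sampling from the resulting mixture distribution:

    lp_cdf <- function(cdfs) {
      # Equally weighted linear pool (2.3): average of the component CDFs
      function(x) Reduce(`+`, lapply(cdfs, function(F) F(x))) / length(cdfs)
    }

    lp_sample <- function(rngs, size = 1000) {
      # Draw from the mixture: pick a member uniformly at random, then draw from it
      idx <- sample(length(rngs), size, replace = TRUE)
      vapply(idx, function(i) rngs[[i]](1), numeric(1))
    }

    # Two normal components (values illustrative, in the spirit of Figure 1)
    cdfs <- list(function(x) pnorm(x, -1, 1), function(x) pnorm(x, 1, 1.5))
    rngs <- list(function(n) rnorm(n, -1, 1), function(n) rnorm(n, 1, 1.5))
    F_lp <- lp_cdf(cdfs)
    F_lp(0)                  # mixture CDF at x = 0
    x_lp <- lp_sample(rngs)  # sample used for sample-based evaluation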

Figure 1: PDF, CDF and quantile function of two normally distributed forecasts F_1 and F_2, together with forecasts aggregated via the methods presented in Section 2. V_a and V_{a,w} use a non-zero intercept a, V_w and V_{a,w} a weight w different from 1/n.

2.2.2 Vincentization (VI)

While the LP aggregates the forecasts on a probability scale, VI performs a quantile-based linear aggregation (Vincent, 1912; Ratcliff, 1979; Genest, 1992). We extend the standard VI framework by defining the VI quantile function via

Q_VI(p) = a + Σ_{i=1}^{n} w_i Q_i(p),    p ∈ (0, 1),    (2.4)

where a ∈ ℝ and w_i ≥ 0 for i = 1, …, n. (To the best of our knowledge, VI is usually only applied with standardized weights w_i = 1/n and without the intercept a. Exceptions include Wolffram (2021) and related, unpublished simulation experiments by Anja Mühlemann, University of Bern, 2020, personal communication.) In contrast to the LP, the weights do not need to sum to one and only their non-negativity is required to ensure the monotonicity of the resulting quantile function Q_VI. (Note that in general Q_VI is not the quantile function corresponding to the CDF of the LP, even for a = 0 and equal weights.) Further, the real-valued intercept a is added to the aggregated quantile functions to correct for systematic biases.

As for the LP, we only consider equally weighted averages for VI, that is, w_i = w for i = 1, …, n. Given equal weights, we consider four different variants of VI. First, with weights that sum up to 1 and no intercept, that is, w = 1/n and a = 0, which is referred to as V. Similar to the LP, V does not require the estimation of any parameters. Further, we consider VI variants where we estimate the parameters a and w both independently (while the other is fixed at the value used for V) and also simultaneously, resulting in the three variants V_a (where w = 1/n and a is estimated), V_w (where a = 0 and w is estimated) and V_{a,w} (where both a and w are estimated). The parameters are estimated by minimizing the CRPS, following the optimum score estimation principle. The standard procedure for training machine learning models, where the available data is split into training, validation and test datasets, offers a natural choice for estimating the combination parameters. Given NN models estimated based on the training set (where the validation set is used to determine hyperparameters), we estimate the coefficients of the VI approaches separately in a second step based on the validation set, which can be seen as a post-hoc calibration step (Guo et al., 2017). During this second step, the component models with quantile functions Q_1, …, Q_n are considered fixed and we only vary the combination parameters in (2.4). In the following, we will restrict our attention to fixed training and validation sets, but an extension of the approach described here to a cross-validation setting is straightforward. Table 1 provides an overview of the abbreviations and important characteristics of the different forecast aggregation methods we will consider below.
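The following hedged R sketch outlines this two-step estimation for forecasts represented by matrices of equidistant component quantiles; variable names, the quantile grid and the optimization setup are illustrative assumptions rather than the authors' implementation:

    vi_quantiles <- function(Q, a = 0, w = 1 / nrow(Q)) {
      # Q: matrix of component quantiles (members x quantile levels);
      # returns the aggregated quantiles a + w * sum_i Q_i(p) as in (2.4)
      a + w * colSums(Q)
    }

    crps_from_quantiles <- function(q, y) {
      # Approximate CRPS of a forecast represented by equidistant quantiles q
      mean(abs(q - y)) - 0.5 * mean(abs(outer(q, q, "-")))
    }

    fit_vi <- function(Q_val, y_val, estimate = c("a", "w")) {
      # Q_val: array [validation case, member, quantile level] of component
      # quantiles; y_val: validation observations; estimate: which parameters
      # of (2.4) are fitted by CRPS minimization (optimum score estimation)
      n <- dim(Q_val)[2]
      obj <- function(par) {
        a <- if ("a" %in% estimate) par[match("a", estimate)] else 0
        w <- if ("w" %in% estimate) par[match("w", estimate)] else 1 / n
        mean(vapply(seq_along(y_val), function(k) {
          crps_from_quantiles(vi_quantiles(Q_val[k, , ], a, w), y_val[k])
        }, numeric(1)))
      }
      start <- c(a = 0, w = 1 / n)[estimate]
      if (length(start) == 1) {
        optim(start, obj, method = "Brent", lower = -5, upper = 5)$par
      } else {
        optim(start, obj)$par
      }
    }

    # Toy example: 3 members, 99 equidistant quantile levels, 50 validation cases
    set.seed(1)
    p <- (1:99) / 100
    Q_val <- array(NA, dim = c(50, 3, 99))
    for (i in 1:3) Q_val[, i, ] <- matrix(qnorm(p, mean = 0.8 + 0.1 * i),
                                          nrow = 50, ncol = 99, byrow = TRUE)
    y_val <- rnorm(50, mean = 1)
    fit_vi(Q_val, y_val, estimate = "a")  # V_a: estimate the intercept only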

Abbr.      Scale        Formula                                 Parameters   Estimation
LP         Probability  F_LP(x) = (1/n) Σ_i F_i(x)              -            -
V          Quantile     Q_V(p) = (1/n) Σ_i Q_i(p)               -            -
V_a        Quantile     Q_{V_a}(p) = a + (1/n) Σ_i Q_i(p)       a            CRPS
V_w        Quantile     Q_{V_w}(p) = w Σ_i Q_i(p)               w            CRPS
V_{a,w}    Quantile     Q_{V_{a,w}}(p) = a + w Σ_i Q_i(p)       a, w         CRPS

Table 1: Overview of the aggregation methods for probabilistic forecasts, with F_1, …, F_n and Q_1, …, Q_n denoting the predictive CDFs and quantile functions of the individual component models. The column ‘Parameters’ indicates which parameters are estimated based on data, following the procedure described in Section 2.2.2.

VI (in the form of V) has recently received more research interest in the machine learning literature and has for example been used by Kirkwood et al. (2021) and Kim et al. (2021) to aggregate probabilistic predictions. Related work in the statistical literature includes comparisons to the LP which demonstrate that VI tends to perform better than the LP (Lichtendahl et al., 2013; Busetti, 2017).

Regarding the different NN-based methods for probabilistic forecasting that will be introduced in Section 3, we now consider the special case of VI for location-scale families. Given a CDF F_0, a distribution is said to be an element of a location-scale family if its CDF F satisfies

F(x) = F_0( (x − μ) / σ ),    x ∈ ℝ,

where μ ∈ ℝ denotes the location and σ > 0 the scale parameter. Popular examples of location-scale families include the normal and logistic distributions. Unlike the LP, which results in a wide-spread, multi-modal distribution, VI is shape-preserving for location-scale families (Thomas and Ross, 1980). Shape-preserving here means that if the individual forecasts are elements of the same location-scale family, the aggregated forecast is as well. Further, the parameters μ_VI and σ_VI of the aggregated forecast are given by the weighted averages of the individual parameters μ_i and σ_i, i = 1, …, n, together with the intercept in case of the location parameter, that is,

μ_VI = a + Σ_{i=1}^{n} w_i μ_i,    σ_VI = Σ_{i=1}^{n} w_i σ_i.

Here, we will only consider the case of w_i = 1/n for i = 1, …, n. Lichtendahl et al. (2013), who compare the theoretical properties of the LP and V, note that the aggregated predictive distributions both yield the same mean but the VI forecasts are sharper, that is, the VI predictive distribution has a variance smaller than or equal to that of the LP.

To highlight the effects of the individual VI parameters, we note that the intercept a only has an effect on the location of the resulting aggregated distribution, while the weight w has an effect on both the location and the spread. If w is larger than the standardized weight 1/n, the spread increases compared to the average spread of the individual forecasts, and it decreases for smaller values. However, a weight different from 1/n in general also shifts the location of the distribution. Figure 1 illustrates this in the exemplary case of two normal distributions.
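The shape-preservation property can be verified directly in a few lines of R; the parameter values below are made up for illustration:

    # Vincentization of two normal forecasts: V reduces to averaging the
    # location and scale parameters, so the aggregated forecast is again normal
    mu <- c(-1, 1.5); sigma <- c(1, 1.5); a <- 0; w <- 1 / 2
    p  <- c(0.1, 0.5, 0.9)
    q_vi  <- a + w * (qnorm(p, mu[1], sigma[1]) + qnorm(p, mu[2], sigma[2]))
    q_par <- qnorm(p, mean = a + w * sum(mu), sd = w * sum(sigma))
    all.equal(q_vi, q_par)  # TRUE: shape preservation for location-scale families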

3 Neural network methods for probabilistic forecasting

In the context of probabilistic wind gust prediction, Schulz and Lerch (2022) propose a framework for NN-based probabilistic forecasting that encompasses different approaches to obtain distribution forecasts as the output of a NN. The general framework is illustrated in Figure 2 and forms the basis of our work here. In the following, we briefly introduce three network variants and refer to Schulz and Lerch (2022) for details.

While the three variants differ in their characterization of the forecast distribution and the loss function employed in the NN, their use in practice shares a common methodological feature that constitutes the main motivation for our work here. As discussed in the introduction, extant practice in NN-based forecasting often relies on an ensemble of NN models trained based on randomly initialized weights and batches to account for the randomness of the stochastic gradient descent methods applied in the training process. This raises the question of how the distribution forecasts from the three network variants can be combined using the aggregation methods described in Section 2.2, which we will discuss below.

Figure 2: Graphical illustration of the general framework for NN-based probabilistic forecasting.

3.1 Distributional regression network (DRN)

In the distributional regression network (DRN) approach, the forecasts are issued in the form of a parametric distribution. Under the parametric assumption F_θ, the predictive distribution is characterized by the distribution parameter (vector) θ ∈ Θ, where Θ is the parameter space. Different variants of the DRN approach have been proposed over the past years and can be traced back to at least Bishop (1994). Lakshminarayanan et al. (2017) and Rasp and Lerch (2018) use a normal distribution with θ = (μ, σ), Schulz and Lerch (2022) use a zero-truncated logistic distribution with θ = (μ, σ), where for both distributions μ is the location and σ the scale parameter, and Bishop (1994) and D’Isanto and Polsterer (2018) use a mixture of normal distributions. To estimate the parameters of the NN, proper scoring rules such as the CRPS (Rasp and Lerch, 2018; D’Isanto and Polsterer, 2018; Schulz and Lerch, 2022) or the negative log-likelihood (Lakshminarayanan et al., 2017) serve as custom loss functions. Extensions of the DRN approach to other parametric families are generally straightforward provided that analytical closed-form expressions of the selected loss function are available (for example, Ghazvinian et al., 2021; Chapman et al., 2022).

Both Lakshminarayanan et al. (2017) and Rasp and Lerch (2018) generate an ensemble of networks based on random initialization. While Lakshminarayanan et al. (2017) propose to use the LP to aggregate the forecast distributions, Rasp and Lerch (2018) instead combine the forecasts by averaging the distribution parameters. Since the normal distribution (which we will also employ in the simulation study below) is a location-scale family, parameter averaging is equivalent to V. Although the logistic distribution also forms a location-scale family, the truncated variant used in Schulz and Lerch (2022) does not, and parameter averaging is not equivalent to V. However, in the context of the case study we found the differences between parameter averaging and V to be negligibly small and, in this particular case, therefore approximated the VI approaches by the corresponding parameter averages. To evaluate the LP forecasts, we draw a random sample of size 1,000 from the mixture distribution by first randomly choosing an ensemble member and then generating a random draw from the corresponding distribution.
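A hedged R sketch of the two aggregation routes for DRN forecasts described above, using a normal predictive distribution and made-up parameter values (the helper name is ours):

    sample_drn_lp <- function(mu, sigma, size = 1000) {
      # Two-step sampling from the LP mixture of parametric member forecasts:
      # first choose an ensemble member at random, then draw from its distribution
      # (a normal distribution here; the case study uses a truncated logistic)
      idx <- sample(length(mu), size, replace = TRUE)
      rnorm(size, mean = mu[idx], sd = sigma[idx])
    }

    # Three DRN ensemble members with illustrative parameter values
    mu <- c(0.8, 1.1, 0.9); sigma <- c(1.0, 1.2, 0.9)
    x_lp <- sample_drn_lp(mu, sigma)        # sample used to evaluate the LP

    # Parameter averaging, equivalent to V for the normal distribution
    mu_v <- mean(mu); sigma_v <- mean(sigma)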

3.2 Bernstein quantile network (BQN)

Bremnes (2020) proposes a semi-parametric extension of the DRN approach that we refer to as Bernstein quantile network (BQN). The probabilistic forecast is given in the form of the quantile function Q, which is modeled as a linear combination of Bernstein polynomials, that is,

Q(p) = Σ_{l=0}^{d} α_l B_{l,d}(p),    p ∈ [0, 1],

where B_{l,d}(p) = (d choose l) p^l (1 − p)^{d−l} is the l-th basis Bernstein polynomial of degree d, l = 0, …, d. The basis coefficients α_0, …, α_d, which define the predictive distribution, are obtained as output of the NN. The parameters of the NN are estimated by minimizing the quantile loss evaluated at pre-defined quantile levels. Note that the support of the forecast distribution is given by [α_0, α_d].

To aggregate ensembles of BQN forecasts, Bremnes (2020) and Schulz and Lerch (2022) average the individual basis coefficient values across ensemble members. This is equivalent to V, which is obvious from the quantile function of the general case of VI for BQN forecasts,

Q_VI(p) = a + Σ_{i=1}^{n} w_i Σ_{l=0}^{d} α_{l,i} B_{l,d}(p) = a + Σ_{l=0}^{d} ( Σ_{i=1}^{n} w_i α_{l,i} ) B_{l,d}(p),

where α_{l,i} is the coefficient of the l-th basis polynomial of the i-th ensemble member, l = 0, …, d, i = 1, …, n.

Since a closed form of the CDF or density of a BQN forecast is not readily available, the LP cannot be expressed in a similar fashion. Analogous to DRN, the evaluation of the LP forecasts will therefore be based on a random sample of size 1,000 drawn from the aggregated distribution; here, the inversion method allows us to sample from the individual BQN forecasts. Further, the VI forecasts are evaluated based on a sample of 100 equidistant quantiles. (The numbers of samples and quantiles were chosen based on simulation experiments and theoretical considerations. Compared to random samples from the forecast distributions, a smaller number of equidistant quantiles is required to achieve approximations of the same accuracy, see Krüger et al. (2021) and references therein for a discussion of sample-based estimation of the CRPS.)
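A small R illustration of the coefficient-averaging argument; the coefficient values are made up and the function name is ours:

    bern_quantile <- function(alpha, p) {
      # Quantile function Q(p) = sum_l alpha_l * B_{l,d}(p) of a BQN forecast
      d <- length(alpha) - 1
      B <- sapply(0:d, function(l) choose(d, l) * p^l * (1 - p)^(d - l))
      drop(B %*% alpha)
    }

    # Two ensemble members with degree-4 coefficient vectors (illustrative values)
    alpha <- rbind(c(0, 1, 2, 4, 6), c(0.5, 1, 3, 4, 7))
    p <- seq(0.05, 0.95, by = 0.05)
    q_v    <- 0.5 * (bern_quantile(alpha[1, ], p) + bern_quantile(alpha[2, ], p))
    q_mean <- bern_quantile(colMeans(alpha), p)
    all.equal(q_v, q_mean)  # TRUE: averaging the coefficients is equivalent to V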

3.3 Histogram estimation network (HEN)

The last method considered here is the histogram estimation network (HEN), which divides the support of the target variable into bins and assigns each bin the probability of the observation falling into that bin. Variants of this approach have been proposed in a variety of applications (for example, Gasthaus et al., 2019; Li et al., 2021). Mathematically, the HEN forecast is given by a piecewise uniform distribution. Let b_0 < b_1 < ⋯ < b_m denote the edges of the m bins with probabilities p_1, …, p_m, where it holds that Σ_{j=1}^{m} p_j = 1. The CDF of a HEN forecast is then given by the piecewise linear function

F(x) = 0 for x < b_0,    F(x) = Σ_{j=1}^{k−1} p_j + p_k (x − b_{k−1}) / (b_k − b_{k−1}) for b_{k−1} ≤ x < b_k, k = 1, …, m,    F(x) = 1 for x ≥ b_m.

We here follow Schulz and Lerch (2022) by considering fixed bins and estimate only the corresponding probabilities as output of the NN. In the simulation study, the edges are given by 50 equidistant empirical quantiles of the training data (unique to the second digit), and for the case study, we use a semi-automated procedure specific to the application that is described in detail in Schulz and Lerch (2022). As for DRN, the parameters can be estimated via CRPS minimization or maximum likelihood. Here, we use the latter, which corresponds to minimizing the categorical cross-entropy, a standard approach for classification tasks in machine learning.

Regarding the aggregation of an ensemble of HEN forecasts in the case of fixed bins, the LP is equivalent to averaging the bin probabilities, since

F_LP(x) = Σ_{i=1}^{n} w_i F_i(x) is again a piecewise linear CDF on the same bins, with bin probabilities p̄_j = Σ_{i=1}^{n} w_i p_{j,i},

where p_{j,i} is the probability of the j-th bin for the i-th ensemble member, j = 1, …, m, i = 1, …, n. An exemplary application of the LP for an approach akin to HEN forecasts in a stacked NN can be found in Clare et al. (2021).

By contrast to the LP, the VI approach exhibits a particular advantage for HEN forecasts in that it results in a finer binning than the individual HEN models. To illustrate this effect, we note that the quantile function of the HEN forecast is a piecewise linear function with edges depending on the accumulated bin probabilities, that is, P_k = Σ_{j=1}^{k} p_j, k = 1, …, m. In mathematical terms, the quantile function is given for P_{k−1} ≤ p < P_k by

Q(p) = b_{k−1} + (p − P_{k−1}) (b_k − b_{k−1}) / p_k,    with P_0 = 0.

Therefore, the resulting VI quantile function is a piecewise linear function with one edge for each accumulated probability of the individual forecasts. As the forecast probabilities differ for each member of the deep ensemble, the associated quantile functions are subject to a different binning. Since the set of edges of the aggregated VI forecast is given by the union of all individual edges, this leads to a smoothed final forecast distribution with a finer binning than the individual model runs that differs for every forecast case, and eliminates the potential downside of too coarse fixed bin edges. Figure 3 illustrates the effects of the LP and V for two exemplary HEN forecasts.

Figure 3: PDF, CDF and quantile function of two HEN forecasts F_1 and F_2 together with forecasts aggregated via the LP and V. The dashed vertical lines indicate the binning with respect to the bin edges in the CDF plot and with respect to the accumulated bin probabilities in the quantile function plot.
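A hedged R sketch of the HEN representation and of the two aggregation routes discussed above; bin edges and probabilities are toy values:

    hen_cdf <- function(edges, probs) {
      # Piecewise linear CDF of a HEN forecast with fixed bin edges
      function(x) approx(x = edges, y = c(0, cumsum(probs)), xout = x,
                         yleft = 0, yright = 1)$y
    }

    hen_quantile <- function(edges, probs) {
      # Piecewise linear quantile function; its edges are the accumulated
      # bin probabilities
      function(p) approx(x = c(0, cumsum(probs)), y = edges, xout = p,
                         ties = "ordered")$y
    }

    edges <- c(0, 1, 2, 4)                # fixed bin edges (illustrative)
    P <- rbind(c(0.2, 0.5, 0.3),          # member 1 bin probabilities
               c(0.4, 0.4, 0.2))          # member 2 bin probabilities

    F_lp <- hen_cdf(edges, colMeans(P))   # LP: average the bin probabilities
    Q_v  <- function(p)                   # V: average the member quantile functions
      0.5 * (hen_quantile(edges, P[1, ])(p) + hen_quantile(edges, P[2, ])(p))

    F_lp(1.5)                 # LP CDF at x = 1.5
    Q_v(c(0.25, 0.5, 0.75))   # V quantiles; kinks at the union of accumulated probs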

4 Simulation study

We compare the performance of the five aggregation methods for each of the three network variants in a simulation study. The simulation setting is adopted from Li et al. (2021), who investigate a variant of the HEN approach. From the six models they propose for the data-generating process, we consider two here and report results for the remaining four in the supplemental material.

We do not tune the specific architectures and hyperparameters of the individual NN models in each of the scenarios of the simulation study, but instead use the configurations that have proven to work well in Schulz and Lerch (2022). This is done intentionally in order to also generate forecasts that are not well-calibrated or subject to systematic biases, which allows us to assess the performance of the aggregation methods in situations when the forecasts are not optimal, see, in particular, the results for Scenario 2 reported below.

For each scenario, we generate training sets of size 6,000 and test sets of size 10,000. The simulations are repeated 50 times. We generate a series of 40 individual network ensemble members for each of the three network variants and consider aggregation of the first n members for increasing ensemble sizes n. As benchmark, we will consider an optimal probabilistic forecast based on the inherent uncertainty of the data-generating process, denoted by the noise term ε. (Note that the simulations are based on a finite sample, so even the optimal forecast might result in a small bias or an empirical coverage not equal to the nominal value.) In the following, the CRPSS in (2.2) will be calculated using the average CRPS of the individual networks as the benchmark score and the CRPS of the optimal forecast as the optimal score. (Note that this does not correspond to the mean improvement over the individual forecasts. However, averaging the median skill scores of the individual ensemble member predictions over the repetitions of the simulations yields qualitatively analogous results, not shown.)

4.1 Scenario 1

As our first simulation scenario, we consider a linear model with normally distributed errors. Based on a random vector of predictors X, which serves as the input of the networks, and random coefficient vectors, which are fixed for each run of the simulation and unknown to the forecaster, the target variable is calculated as a linear combination of the predictors plus additive Gaussian noise ε (the exact specification follows Li et al., 2021). The optimal forecast is then given by the corresponding conditional normal distribution.

Figure 4: Different evaluation metrics, the estimated intercept a and the relative weight difference for the three network variants in Scenario 1 of the simulation study, where DE denotes the average score of the deep ensemble members. Note the different scales on the y-axis.
Figure 5: PIT histograms for the three network variants of the deep ensemble (DE) and aggregated forecasts for an ensemble of size 2 in Scenario 1.

The key results for this simulation scenario are summarized in Figure 4, which shows different evaluation metrics averaged over the 50 repetitions of the simulation study. We start by comparing the aggregation methods for DRN, where the CRPSS indicates that aggregation via the VI approaches improves on the network average by up to 12.5%, while the LP improves the forecast performance by at most 2.5%. Here, the best VI approach is to fix the intercept and weights instead of estimating them from the training data. In Figure 4, the relative difference between the estimated weight w and the standardized weight 1/n for an ensemble of size n illustrates that the estimated weights are not equal to standardized weights. The flat PIT histograms in Figure 5 indicate that the individual component forecasts are already well-calibrated and corrections via coefficient estimation are not necessary. The average PI length of the network forecasts, which is identical to that of V and V_a, is smaller than that of the optimal forecast. Note that having sharper forecasts than the optimal forecast comes at the cost of a lack of calibration. Comparing the aggregation methods, we find that the LP increases the PI length, as expected due to its theoretical properties. V_w and V_{a,w} here increase the PI length because their estimated weights are larger than standardized ones. All aggregation methods increase the empirical coverage, which improves the predictive performance, because the coverage of the network average is smaller than the nominal value. In terms of accuracy, all methods are unbiased, since they are approximately as accurate as the optimal forecast.

For BQN, the results are qualitatively similar; however, since the BQN forecasts are not as well-calibrated as those of DRN, there are some differences, which we highlight in the following. The estimated weights of V_w and V_{a,w} are larger than standardized ones and result in a smaller CRPSS difference to V, which still performs best. The V_w and V_{a,w} forecasts are therefore less sharp than the network average and as sharp as the LP. The empirical coverage of the individual BQN forecasts is larger than the nominal value, and thus so is that of the forecasts aggregated via the VI approaches. Interestingly, the LP decreases the coverage and is closest to the nominal value. Further, the VI forecasts are positively biased, while the LP is close to being unbiased. Although the LP performs favorably in terms of the empirical coverage and accuracy, it performs worse than the VI approaches, even though the difference is smaller than in the case of DRN.

In contrast to DRN and BQN, the HEN forecasts are not well-calibrated but instead overdispersed, as indicated by the hump-shaped histograms in Figure 5. In addition to the lack of calibration, the forecasts are also not sharp, since the PIs are more than twice as large as those of the optimal forecast. These deficiencies result in a substantially worse CRPS compared to DRN and BQN. While the LP, V and V_a are unable to correct the systematic miscalibration, V_w and V_{a,w} result in well-calibrated forecasts, as indicated by the flat PIT histograms in Figure 5. The estimated weights are smaller than standardized ones for all ensemble sizes, and therefore the forecasts become sharper. The PI coverage of the overdispersed forecasts is, as expected, 2.5% larger than the nominal value. The corrections of V_w and V_{a,w} result in coverages closer to and even smaller than the nominal value. Further, note that V_{a,w} estimates smaller weights than V_w, but also a positive intercept larger than that of V_a in order to balance the effect of the weights on the location of the aggregated distribution. However, the positive intercepts estimated by V_a and V_{a,w} result in a larger bias. The correction of the overdispersion is also reflected in the CRPSS, where V_{a,w}, followed by V_w, outperforms the other approaches by a wide margin. Note that the LP improves the predictive performance and performs equally well as V and V_a. However, although all aggregation methods correct systematic errors and improve predictive performance, the aggregated forecasts are still not competitive with those of DRN and BQN in terms of the CRPS.

Figure 6: Boxplots over the CRPSS values of the 50 runs in Scenario 1 of the simulation study. Note that some outliers and the boxes of the LP for BQN and DRN are cut off to improve readability.

Finally, we investigate the effect of the ensemble size on the performance of the aggregation methods, in particular on the CRPSS, considering the variability over the 50 runs (Figure 6). For all network variants and aggregation methods, most of the improvement is obtained up to ensembles of size 10. The median CRPSS increases up to a size of 20, after which only minor further improvements can be observed. Interestingly, the variability over the runs does not decrease for larger ensemble sizes. Comparing the aggregation methods, more outliers are observed for methods that estimate parameters. For DRN, we see that parameter estimation may result in forecasts worse than the network average in a few cases; on the other hand, the same is observed for the V forecasts in the case of HEN. Regarding systematic differences between the network variants, we find that the variation across simulation runs is notably lower for HEN. This can partly be explained by the fact that the DRN and BQN forecasts are much closer to the optimal forecasts and thus small absolute deviations in the CRPS result in larger differences in the skill.

4.2 Scenario 2

In the second scenario, we consider a skewed distribution with a nonlinear mean function. The target variable follows a skewed normal distribution whose mean is a nonlinear function of the predictors (the exact specification again follows Li et al., 2021), and the optimal forecast is given by the corresponding conditional distribution.

Figure 7: PIT histograms for the three network variants of the deep ensemble (DE) and aggregated forecasts for an ensemble of size 2 in Scenario 2.
Figure 8: Evaluation metrics for the three network variants in Scenario 2, where DE denotes the average score of the deep ensemble members. Note the different scales on the y-axis.

PIT histograms of the individual and aggregated forecasts for the different network variants are shown in Figure 7. In contrast to the first scenario, none of the network variants produces calibrated forecasts and their PIT histograms indicate systematic deviations from uniformity. As to be expected due to the wrong distributional assumption, DRN based on a normal distribution is not able to yield calibrated forecasts for an underlying skewed normal distribution, but also the semi-parametric BQN and HEN forecasts fail to provide calibrated forecasts. The HEN forecasts are strongly overdispersed and again result in the worst CRPS among the network variants, see Figure 8. None of the aggregation methods is able to correct the systematic lack of calibration for all of the network variants. That said, aggregation still improves the overall predictive performance in terms of the CRPS.

For DRN, we find that the LP outperforms the VI approaches. Interestingly, this is the case even though the LP forecasts are the least sharp and have a higher PI coverage that is farther away from the nominal value. In contrast to the first scenario, even the aggregated DRN forecasts perform notably worse than BQN. The VI approaches perform equally well and increase the coverage of the forecasts such that it is closer to the nominal value. While V_a and V_w estimate coefficients close to the nominal values a = 0 and w = 1/n, V_{a,w} estimates larger weights, and therefore yields larger PIs, together with a negative intercept in order to balance the shift in the location. Still, V_{a,w} does not outperform the other VI approaches.

The results are again qualitatively similar for BQN. The main difference is that the LP does not outperform the VI approaches, as all aggregation methods result in an improvement in terms of the mean CRPS of up to 16%. Further, the LP again yields the least sharp forecasts and all methods increase the PI coverage.

Next, we consider the HEN forecasts. In contrast to Scenario 1, weight estimation via V_w and V_{a,w} is not able to correct the systematic errors and does not outperform the other approaches. All VI approaches perform equally well and outperform the LP, which still improves on the network average. The LP yields the least sharp forecasts, followed by V_{a,w}, which estimates weights larger than standardized ones together with a negative intercept, as for DRN and BQN. The negative intercepts of V_a and V_{a,w} improve the accuracy, as they decrease the forecast bias.

Figure 9: Boxplots over the CRPSS values of the 50 runs in Scenario 2 of the simulation study.

Regarding the effect of the ensemble size, the largest improvements of the aggregation methods are again obtained for up to 10 ensemble members and only minor improvements can be observed for sizes larger than 20, see Figure 9. In contrast to Scenario 1, it can be noted that the variability over the runs decreases as the ensemble size increases, and that the degree of variability is similar for all aggregation methods within one network variant. A direct comparison of the network variants indicates that the variability generally increases with the overall skill of the aggregated forecast.

5 Case study

Modern weather forecasts are usually based on ensemble simulations from numerical weather prediction (NWP) models, consisting of a set of deterministic forecasts that differ in their initial conditions (Bauer et al., 2015). These ensemble weather predictions continue to be subject to systematic errors and require corrections via distributional regression models, a process which is referred to as postprocessing. Over the past years, much research interest has been focused on modern machine learning approaches for postprocessing, where NN models enable the incorporation of arbitrary input predictors and yield flexible forecast distributions. Exemplary NN-based postprocessing methods include the approaches introduced in Section 3, see Vannitsem et al. (2021) for an overview of recent developments.

Our case study focuses on the application of the aggregation methods to probabilistic wind gust forecasting over Germany using forecast distributions obtained as the output of NN methods for ensemble postprocessing, and is based on Schulz and Lerch (2022). We use the same dataset and consider 4 of the 22 available forecast lead times (0, 6, 12, 18h). In the following, we will typically evaluate the predictive performance aggregated over those lead times and note that while there are minor differences across lead times, the results are qualitatively similar and all the main conclusions are valid for all the considered lead times.

In our implementation of DRN, BQN and HEN for probabilistic forecasting via ensemble postprocessing, we follow Schulz and Lerch (2022) and refer to the descriptions there for implementation details including minor technical adjustments to the general descriptions in Section 3. For each of the lead times, we generate an ensemble of 100 models for each of the network variants based on random initialization, which form the basis for our study of the different aggregation methods. We randomly draw a subset of these 100 models for each of the considered ensemble sizes and repeat this procedure 20 times to account for uncertainties. Therefore, 20 aggregated forecasts based on a pool of 100 network ensemble members are generated for each model variant and each ensemble size.

Note that the underlying distribution of the target variable is of course unknown in the case study, and following common practice the observed value is used as the (hypothetical) optimal forecast resulting in a CRPS of 0 to calculate the CRPSS in (2.2). The magnitude of the CRPSS values of the simulation and case study is thus not directly comparable.

Figure 10: Evaluation metrics for the three network variants aggregated over all lead times considered in the case study, where DE denotes the average score of the deep ensemble members. Note the different scales on the vertical axis.

Figure 10 summarizes the key results of the case study. Applying the aggregation methods to the DRN and BQN forecasts leads to similar results as in the simulation study. Although the LP improves predictive performance with a skill of up to 1.6%, the VI approaches are superior to the LP. Among the considered VI approaches, coefficient estimation leads to better predictions, with V_{a,w} and V_w performing best, followed by V_a and V.

As expected, the LP yields less sharp forecasts than the network average, indicated by larger PIs, which are the least sharp for DRN. V_{a,w} also issues less sharp forecasts than the network average, whose PI length is identical to that of V and V_a, while V_w produces the sharpest forecasts. This is due to the fact that V_w estimates weights smaller, and V_{a,w} weights larger, than the standardized value used by V. As in the simulation study, V_{a,w} estimates a more extreme intercept than V_a, which balances the effect of the weight estimation. The PI coverage increases for all aggregation methods and both network variants. For both network variants, the V_w forecasts have the smallest coverage, closest to the nominal value, whereas V_{a,w} results in a coverage larger than the other VI approaches. Only for DRN does the LP have a larger PI coverage.

The results of the aggregated HEN forecasts are again qualitatively different from those of DRN and BQN. While still outperforming benchmark methods from statistics and machine learning, the HEN method does not perform as well as DRN and BQN in the comparison in Schulz and Lerch (2022) and is subject to more systematic errors. Although the ranking of the aggregation methods is identical to that of DRN and BQN, the magnitude of the differences in the CRPSS for the superior method is notably larger, and since V_{a,w} is able to correct some of the systematic errors, it clearly outperforms the other approaches. The most significant difference to the other aggregation methods is that V_{a,w} estimates more extreme coefficients. As for BQN, this results in the largest PIs and largest coverage, in both cases followed by the LP.

Regarding the accuracy of the forecasts produced by the different aggregation methods, the results are qualitatively similar for all three network variants. The two methods that estimate an intercept have the largest absolute biases. This is a somewhat counter-intuitive result, since it can be expected that including an intercept should enable the correction of systematic biases. As noted in Schulz and Lerch (2022), there are minor structural differences in the distribution of the observed values in the test and validation datasets. Due to the average observed values in the test data being somewhat smaller than those in the validation dataset, the data the coefficients are estimated on is not fully representative of the test data and overfitting may occur.

Figure 11: Boxplots over the CRPSS values of the 20 draws for each ensemble size in the case study for a lead time of 18h.

To assess the effect of the ensemble size on the predictive performance in Figure 11, we pick one specific lead time, namely 18h, to avoid distortions in the boxplots caused by the minor variations in the magnitude of the improvements over lead times. The results coincide with the corresponding main conclusions of the simulation study in that we observe most of the improvement up to ensembles of size 10 and only minor improvements for ensembles larger than 20. In the case study, the improvement up to size 10 is more pronounced than in the simulation study and strongly suggests that a network ensemble should include at least 10 members. Finally, we note that the variability of the CRPSS decreases for larger ensemble sizes.

6 Discussion and conclusions

We have conducted a systematic comparison of aggregation methods for the combination of distribution forecasts from ensembles of neural networks based on different random initializations, so-called deep ensembles. In doing so, our work aims to reconcile and consolidate findings from the statistical literature on forecast combination and the machine learning literature on ensemble methods. Specifically, we propose a general Vincentization framework in which quantile functions of the forecast distributions can be flexibly combined, and compare it to the widely used linear pool, where the probabilistic forecasts are linearly combined on the scale of probabilities. For deep ensembles of three variants of NN-based models for probabilistic forecasting that differ in the characterization of the output distribution, aggregation with both the LP and VI improves the predictive performance. The VI approaches show superior performance compared to the LP. For example, given ensemble members that are already calibrated, V preserves the calibration and improves the predictive accuracy, while the LP decreases sharpness and yields more dispersed forecasts. If the individual forecast distributions are subject to systematic errors such as biases and dispersion errors, coefficient estimation via V_a, V_w and V_{a,w} is able to correct these errors and improve the predictive performance considerably; otherwise, V should be preferred. While these combination approaches require the estimation of additional combination coefficients, the computational costs are negligible compared to the generation of the NN-based probabilistic forecasts, and the estimation can be performed on the validation data without restricting the estimation of the NNs.

Even though forecast combination generally improves the predictive performance, Scenario 2 of the simulation study demonstrates that, for example, the lack of calibration of the severely misspecified individual forecast distributions cannot be corrected by the aggregation methods considered here. In the context of NNs and deep ensembles, the calibration of (ensemble) predictions and re-calibration procedures have been a focus of much recent research interest (Guo et al., 2017; Ovadia et al., 2019). For example, in line with the results of Gneiting and Ranjan (2013), deep ensemble predictions based on the LP were found to be miscalibrated and should be re-calibrated after the aggregation step (Rahaman and Thiery, 2020; Wu and Gales, 2021). A wide range of re-calibration methods, which simultaneously aggregate and calibrate the ensemble predictions (such as the V_a, V_w and V_{a,w} approaches presented in Section 2.2.2 for VI), have been proposed in order to correct the systematic errors introduced by the LP in the context of probability forecasting for binary events (Allard et al., 2012). For example, the beta-transformed LP composites the CDF of a Beta distribution with the LP (Ranjan and Gneiting, 2010), and Satopää et al. (2014) propose to aggregate probabilities on a log-odds scale. Some of these approaches can be readily extended to the case of forecast distributions considered here (Gneiting and Ranjan, 2013). For VI, more sophisticated approaches that allow the weights to depend on the quantile levels might improve the predictive performance (Kim et al., 2021). Further, moving from a linear combination function towards more complex transformations allowing for non-linearity might help to correct more involved calibration errors.

We have restricted our attention to ensembles of NN-based probabilistic forecasts generated based on random initializations. While such deep ensembles have been demonstrated to work well in many settings (Lee et al., 2015; Fort et al., 2019; Ovadia et al., 2019), a variety of alternative approaches for uncertainty estimation in NNs has been proposed, including Bayesian NNs (Neal, 2012) or generative models (Mohamed and Lakshminarayanan, 2016). A particularly prominent approach to deal with the uncertainty in the estimation of NNs is dropout (Srivastava et al., 2014; Gal and Ghahramani, 2016). Dropout can not only be used as a regularization method during estimation but also for prediction, which results in an ensemble of forecasts and is readily applicable for the different variants of NN methods considered here. Compared to deep ensembles based on random initialization, a potential advantage of dropout-based ensembles is that the lower computational costs make the generation of larger ensembles more feasible. The aggregation methods we investigated are agnostic to the generation of the ensembles of distribution forecasts provided that they can be considered as realizations of the same basic type of model, and are thus readily applicable to dropout-based ensembles. (In experiments with dropout ensembles in the context of the case study, not shown, we found that aggregating forecast distributions improves the predictions, but the overall performance of both the individual and combined dropout-based forecasts is substantially worse compared to the ensembles considered here.) Therefore, an interesting avenue for future work is to investigate the performance of the combination methods for different approaches to generate NN-based probabilistic forecasts, for example within the framework of comprehensive simulation testbeds (Osband et al., 2021).

Finally, we summarize three key recommendations for aggregating distribution forecasts from deep ensembles based on our results. First, to optimize the final predictive performance of the aggregated forecast, the individual component forecasts should be optimized as much as possible. (Abe et al. (2022) find that deep ensembles do not offer benefits compared to single larger, that is, more complex, NNs. Our results do not contradict their findings, since we address a conceptually different question and argue that, given the generation of a deep ensemble, the individual members’ forecasts should be optimized as much as possible. In this situation, a single NN will generally not be able to match the predictive performance of the associated deep ensemble.) While forecast combination improves predictive performance, it generally did not affect the ranking of the different NN variants for generating probabilistic forecasts, and can be unable to fix substantial systematic errors. Second, generating an ensemble with a size of at least 10 appears to be a sensible choice, with only minor improvements being observed for more than 20 members. This corresponds to the results in Fort et al. (2019) and ensemble sizes typically chosen in the literature (Lakshminarayanan et al., 2017; Rasp and Lerch, 2018), but the benefits of generating more ensemble members need to be balanced against the computational costs, and sometimes smaller ensembles have been suggested (Ovadia et al., 2019; Abe et al., 2022). Third, aggregating forecast distributions via VI is often superior to the LP. Thereby, the choice of the specific variant within the general framework depends on potential misspecifications of the individual component distributions, as discussed above. Note that these conclusions, in particular the superiority of the quantile aggregation approaches, refer to the specific situation of deep ensembles considered here. The property of shape preservation justifies the use of VI from a theoretical perspective in a setting where the ensemble members are based on the same model and data. If the ensemble members differ in terms of the model used to generate the forecast distribution or the input data they are based on, shape preservation might not be desired. Instead, a model selection approach based on the LP, which allows for obtaining a multi-modal forecast distribution, might better represent the possible scenarios that may materialize.

Acknowledgments

The research leading to these results has been done within the project C5 “Dynamical feature-based ensemble postprocessing of wind gusts within European winter storms” of the Transregional Collaborative Research Center SFB/TRR 165 “Waves to Weather” funded by the German Research Foundation (DFG). Sebastian Lerch gratefully acknowledges support by the Vector Stiftung through the Young Investigator Group “Artificial Intelligence for Probabilistic Weather Forecasting”. We thank Daniel Wolffram, Eva-Maria Walz, Anja Mühlemann, Alexander Jordan and Tilmann Gneiting for helpful comments and discussions.

References

  • Aastveit et al. (2019) Aastveit, K. A., Mitchell, J., Ravazzolo, F. and Van Dijk, H. K. (2019). The evolution of forecast density combinations in economics. Oxford University Press.
  • Abe et al. (2022) Abe, T., Buchanan, E. K., Pleiss, G., Zemel, R. and Cunningham, J. P. (2022). Deep Ensembles Work, But Are They Necessary? Preprint, available at https://doi.org/10.48550/arXiv.2202.06985.
  • Allard et al. (2012) Allard, D., Comunian, A. and Renard, P. (2012). Probability aggregation methods in geoscience. Mathematical Geosciences, 44, 545–581.
  • Baran and Lerch (2016) Baran, S. and Lerch, S. (2016). Mixture EMOS model for calibrating ensemble forecasts of wind speed. Environmetrics, 27, 116–130.
  • Baran and Lerch (2018) Baran, S. and Lerch, S. (2018). Combining predictive distributions for the statistical post-processing of ensemble forecasts. International Journal of Forecasting, 34, 477–496.
  • Bauer et al. (2015) Bauer, P., Thorpe, A. and Brunet, G. (2015). The quiet revolution of numerical weather prediction. Nature, 525, 47–55.
  • Berkowitz (2001) Berkowitz, J. (2001). Testing density forecasts, with applications to risk management. Journal of Business & Economic Statistics, 19, 465–474.
  • Bishop (1994) Bishop, C. M. (1994). Mixture density networks. Technical report, available at https://publications.aston.ac.uk/id/eprint/373/1/NCRG_94_004.pdf.
  • Bojer and Meldgaard (2021) Bojer, C. S. and Meldgaard, J. P. (2021). Kaggle forecasting competitions: An overlooked learning opportunity. International Journal of Forecasting, 37, 587–603.
  • Breiman (1996) Breiman, L. (1996). Bagging predictors. Machine Learning, 24, 123–140.
  • Breiman (2001) Breiman, L. (2001). Random forests. Machine Learning, 45, 5–32.
  • Bremnes (2020) Bremnes, J. B. (2020). Ensemble postprocessing using quantile function regression based on neural networks and Bernstein polynomials. Monthly Weather Review, 148, 403–414.
  • Busetti (2017) Busetti, F. (2017). Quantile aggregation of density forecasts. Oxford Bulletin of Economics and Statistics, 79, 495–512.
  • Chapman et al. (2022) Chapman, W. E., Monache, L. D., Alessandrini, S., Subramanian, A. C., Ralph, F. M., Xie, S.-P., Lerch, S. and Hayatbini, N. (2022). Probabilistic predictions from deterministic atmospheric river forecasts with deep learning. Monthly Weather Review, 150, 215–234.
  • Clare et al. (2021) Clare, M. C., Jamil, O. and Morcrette, C. J. (2021). Combining distribution-based neural networks to predict weather forecast probabilities. Quarterly Journal of the Royal Meteorological Society, 147, 4337–4357.
  • Cramer et al. (2021) Cramer, E. Y., Ray, E. L., Lopez, V. K., Bracher, J., Brennen, A., Rivadeneira, A. J. C., Gerding, A., Gneiting, T., House, K. H., Huang, Y. et al. (2021). Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the US. Preprint, available at https://www.medrxiv.org/content/10.1101/2021.02.03.21250974v3.
  • Dietterich (2000) Dietterich, T. G. (2000). Ensemble methods in machine learning. In Lecture Notes in Computer Science. Springer, Berlin, Heidelberg, 1–15.
  • D’Isanto and Polsterer (2018) D’Isanto, A. and Polsterer, K. L. (2018). Photometric redshift estimation via deep learning-generalized and pre-classification-less, image based, fully probabilistic redshifts. Astronomy & Astrophysics, 609, A111.
  • Fort et al. (2019) Fort, S., Hu, H. and Lakshminarayanan, B. (2019). Deep ensembles: A loss landscape perspective. Preprint, available at https://doi.org/10.48550/arXiv.1912.02757.
  • Freund and Schapire (1996) Freund, Y. and Schapire, R. E. (1996). Experiments with a new boosting algorithm. In Proceedings of the 13th International Conference on Machine Learning. 148–156.
  • Gal and Ghahramani (2016) Gal, Y. and Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on Machine Learning. 1050–1059.
  • Gasthaus et al. (2019) Gasthaus, J., Benidis, K., Wang, Y., Rangapuram, S. S., Salinas, D., Flunkert, V. and Januschowski, T. (2019). Probabilistic forecasting with spline quantile function RNNs. In Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics. 1901–1910.
  • Genest (1992) Genest, C. (1992). Vincentization revisited. The Annals of Statistics, 20, 1137–1142.
  • Ghazvinian et al. (2021) Ghazvinian, M., Zhang, Y., Seo, D.-J., He, M. and Fernando, N. (2021). A novel hybrid artificial neural network - parametric scheme for postprocessing medium-range precipitation forecasts. Advances in Water Resources, 151, 103907.
  • Gneiting et al. (2007) Gneiting, T., Balabdaoui, F. and Raftery, A. E. (2007). Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society. Series B: Statistical Methodology, 69, 243–268.
  • Gneiting and Katzfuss (2014) Gneiting, T. and Katzfuss, M. (2014). Probabilistic forecasting. Annual Review of Statistics and Its Application, 1, 125–151.
  • Gneiting and Raftery (2007) Gneiting, T. and Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102, 359–378.
  • Gneiting and Ranjan (2013) Gneiting, T. and Ranjan, R. (2013). Combining predictive distributions. Electronic Journal of Statistics, 7, 1747–1782.
  • Guo et al. (2017) Guo, C., Pleiss, G., Sun, Y. and Weinberger, K. Q. (2017). On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning. 1321–1330.
  • Januschowski et al. (2021) Januschowski, T., Wang, Y., Torkkola, K., Erkkilä, T., Hasson, H. and Gasthaus, J. (2021). Forecasting with trees. International Journal of Forecasting, in press.
  • Jordan et al. (2019) Jordan, A., Krüger, F. and Lerch, S. (2019). Evaluating probabilistic forecasts with scoringRules. Journal of Statistical Software, 90, 1–37.
  • Kendall and Gal (2017) Kendall, A. and Gal, Y. (2017). What uncertainties do we need in Bayesian deep learning for computer vision? In Proceedings of the 31st International Conference on Neural Information Processing Systems. 5580–5590.
  • Kim et al. (2021) Kim, T., Fakoor, R., Mueller, J., Smola, A. J. and Tibshirani, R. J. (2021). Deep quantile aggregation. Preprint, available at https://doi.org/10.48550/arXiv.2103.00083.
  • Kirkwood et al. (2021) Kirkwood, C., Economou, T., Odbert, H. and Pugeault, N. (2021). A framework for probabilistic weather forecast post-processing across models and lead times using machine learning. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 379, 20200099.
  • Koliander et al. (2022) Koliander, G., El-Laham, Y., Djurić, P. M. and Hlawatsch, F. (2022). Fusion of probability density functions. Preprint, available at https://doi.org/10.48550/arXiv.2202.11633.
  • Krüger et al. (2021) Krüger, F., Lerch, S., Thorarinsdottir, T. and Gneiting, T. (2021). Predictive inference based on Markov chain Monte Carlo output. International Statistical Review, 89, 274–301.
  • Lakshminarayanan et al. (2017) Lakshminarayanan, B., Pritzel, A. and Blundell, C. (2017). Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems. 6403–6414.
  • Lee et al. (2015) Lee, S., Purushwalkam, S., Cogswell, M., Crandall, D. and Batra, D. (2015). Why M heads are better than one: Training a diverse ensemble of deep networks. Preprint, available at https://doi.org/10.48550/arXiv.1511.06314.
  • Li et al. (2021) Li, R., Reich, B. J. and Bondell, H. D. (2021). Deep distribution regression. Computational Statistics and Data Analysis, 159, 107203.
  • Lichtendahl et al. (2013) Lichtendahl, K. C., Grushka-Cockayne, Y. and Winkler, R. L. (2013). Is it better to average probabilities or quantiles? Management Science, 59, 1594–1611.
  • Matheson and Winkler (1976) Matheson, J. E. and Winkler, R. L. (1976). Scoring rules for continuous probability distributions. Management Science, 22, 1087–1096.
  • Mohamed and Lakshminarayanan (2016) Mohamed, S. and Lakshminarayanan, B. (2016). Learning in implicit generative models. NIPS 2016 Workshop on Adversarial Training, available at https://doi.org/10.48550/arXiv.1610.03483.
  • Neal (2012) Neal, R. M. (2012). Bayesian learning for neural networks. Springer Science & Business Media.
  • Osband et al. (2021) Osband, I., Wen, Z., Asghari, S. M., Dwaracherla, V., Hao, B., Ibrahimi, M., Lawson, D., Lu, X., O’Donoghue, B. and Van Roy, B. (2021). Evaluating predictive distributions: Does Bayesian deep learning work? Preprint, available at https://doi.org/10.48550/arXiv.2110.04629.
  • Ovadia et al. (2019) Ovadia, Y., Fertig, E., Ren, J., Nado, Z., Sculley, D., Nowozin, S., Dillon, J. V., Lakshminarayanan, B. and Snoek, J. (2019). Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift. In Advances in Neural Information Processing Systems. 12.
  • Petropoulos et al. (2022) Petropoulos, F., Apiletti, D., Assimakopoulos, V., Babai, M. Z., Barrow, D. K., Taieb, S. B., Bergmeir, C., Bessa, R. J., Bijak, J., Boylan, J. E. et al. (2022). Forecasting: theory and practice. International Journal of Forecasting, in press.
  • R Core Team (2020) R Core Team (2020). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.
  • Rahaman and Thiery (2020) Rahaman, R. and Thiery, A. H. (2020). Uncertainty Quantification and Deep Ensembles. In Advances in Neural Information Processing Systems.
  • Ranjan and Gneiting (2010) Ranjan, R. and Gneiting, T. (2010). Combining probability forecasts. Journal of the Royal Statistical Society. Series B: Statistical Methodology, 72, 71–91.
  • Rasp and Lerch (2018) Rasp, S. and Lerch, S. (2018). Neural networks for postprocessing ensemble weather forecasts. Monthly Weather Review, 146, 3885–3900.
  • Ratcliff (1979) Ratcliff, R. (1979). Group reaction time distributions and an analysis of distribution statistics. Psychological Bulletin, 86, 446–461.
  • Ren et al. (2016) Ren, Y., Zhang, L. and Suganthan, P. N. (2016). Ensemble classification and regression-recent developments, applications and future directions. IEEE Computational Intelligence Magazine, 11, 41–53.
  • Satopää et al. (2014) Satopää, V. A., Baron, J., Foster, D. P., Mellers, B. A., Tetlock, P. E. and Ungar, L. H. (2014). Combining multiple probability predictions using a simple logit model. International Journal of Forecasting, 30, 344–356.
  • Schulz and Lerch (2022) Schulz, B. and Lerch, S. (2022). Machine learning methods for postprocessing ensemble forecasts of wind gusts: A systematic comparison. Monthly Weather Review, 150, 235–257.
  • Srivastava et al. (2014) Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15, 1929–1958.
  • Stone (1961) Stone, M. (1961). The opinion pool. The Annals of Mathematical Statistics, 32, 1339–1342.
  • Taylor and Taylor (2021) Taylor, J. W. and Taylor, K. S. (2021). Combining probabilistic forecasts of COVID-19 mortality in the United States. European Journal of Operational Research, in press.
  • Thomas and Ross (1980) Thomas, E. A. and Ross, B. H. (1980). On appropriate procedures for combining probability distributions within the same family. Journal of Mathematical Psychology, 21, 136–152.
  • Vannitsem et al. (2021) Vannitsem, S., Bremnes, J. B., Demaeyer, J., Evans, G. R., Flowerdew, J., Hemri, S., Lerch, S., Roberts, N., Theis, S., Atencia, A., Bouallègue, Z. B., Bhend, J., Dabernig, M., Cruz, L. D., Hieta, L., Mestre, O., Moret, L., Plenković, I. O., Schmeits, M., Taillardat, M., den Bergh, J. V., Schaeybroeck, B. V., Whan, K. and Ylhaisi, J. (2021). Statistical postprocessing for weather forecasts: Review, challenges, and avenues in a big data world. Bulletin of the American Meteorological Society, 102, E681 – E699.
  • Vincent (1912) Vincent, S. B. (1912). The functions of the Vibrissae in the behavior of the white rat. Animal Behavior Monographs, 1.
  • Vogel et al. (2018) Vogel, P., Knippertz, P., Fink, A. H., Schlueter, A. and Gneiting, T. (2018). Skill of global raw and postprocessed ensemble predictions of rainfall over northern tropical Africa. Weather and Forecasting, 33, 369–388.
  • Wolffram (2021) Wolffram, D. (2021). Building and Evaluating Forecast Ensembles for COVID-19 Deaths. M.Sc. thesis, Karlsruhe Institute of Technology.
  • Wu and Gales (2021) Wu, X. and Gales, M. (2021). Should ensemble members be calibrated? Preprint, available at https://doi.org/10.48550/arXiv.2101.05397.
  • Zhou et al. (2002) Zhou, Z.-H., Wu, J. and Tang, W. (2002). Ensembling neural networks: many could be better than all. Artificial Intelligence, 137, 239–263.

Supplementary material

S1 Further simulation results

Here, we present additional results for the remaining scenarios proposed in Li et al. (2021). Their models 1 and 4 correspond to our scenarios 1 and 2 considered in the main text. The results for their models 5 and 6 are almost identical to those of their model 2, and are thus not included here.

S1.1 Additional simulation settings

Scenario 3

This scenario is based on model 2 of Li et al. (2021) and uses a mixture distribution with a nonlinear mean function; we refer to Li et al. (2021) for the exact specification of the data-generating process and its parameters. The optimal forecast is given by the conditional distribution of the response given the covariates, that is, the true mixture distribution.

Scenario 4

This scenario is based on model 3 of Li et al. (2021) and also uses a mixture distribution with a nonlinear mean function; again, we refer to Li et al. (2021) for the exact specification of the data-generating process and its parameters. The optimal forecast is given by the conditional distribution of the response given the covariates, that is, the true mixture distribution.
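Both additional scenarios rely on mixture distributions whose component means depend nonlinearly on the covariates, with the optimal forecast given by the true conditional mixture. As a generic illustration of this type of data-generating process, the following R sketch simulates from a two-component Gaussian mixture with hypothetical placeholder parameters that do not correspond to Scenario 3 or 4; see Li et al. (2021) for the exact specifications.

```r
# Hypothetical illustration only: a two-component Gaussian mixture with a
# nonlinear mean function. The parameters are placeholders and do NOT
# reproduce Scenario 3 or 4; see Li et al. (2021) for the exact models.
set.seed(42)
n    <- 5000
x    <- runif(n)                          # covariate
m1   <- sin(2 * pi * x)                   # nonlinear mean of component 1 (placeholder)
m2   <- 2 * x^2                           # nonlinear mean of component 2 (placeholder)
w    <- 0.5                               # mixing weight (placeholder)
comp <- rbinom(n, size = 1, prob = w)     # component indicator
y    <- ifelse(comp == 1, rnorm(n, m1, 0.3), rnorm(n, m2, 0.5))

# The optimal (true) forecast at covariate value x is the conditional mixture
# w * N(m1(x), 0.3^2) + (1 - w) * N(m2(x), 0.5^2).
```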

S1.2 Results

Analogous to the presentation of the results for the first two simulation scenarios, results for Scenario 3 are shown in Figures S1–S3 and results for Scenario 4 in Figures S4–S6. The overall results and main conclusions are qualitatively similar to those discussed for the first two simulation settings in the main text; we thus limit the discussion here to some noteworthy differences. In both additional settings, the magnitude of the improvements achieved by forecast aggregation is smaller, in particular for Scenario 4. Scenario 4 further provides the only case where the HEN forecasts are better calibrated than those of DRN and BQN, and where substantially larger improvements by forecast combination are observed for HEN.
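As a rough, self-contained illustration of how the metrics in Figures S1–S6 can be computed, the following R sketch evaluates Gaussian predictive distributions on synthetic data via PIT values and the CRPS, and forms a skill score (CRPSS) relative to a climatological reference forecast. The data and the choice of reference are illustrative assumptions of this sketch; in practice, closed-form CRPS formulas are provided by the scoringRules package (Jordan et al., 2019).

```r
# Illustrative evaluation sketch (synthetic data, not the simulation scenarios):
# PIT values and CRPS(S) for Gaussian predictive distributions.

# Closed-form CRPS of N(mu, sigma^2) at observation y (Gneiting and Raftery, 2007).
crps_norm <- function(y, mu, sigma) {
  z <- (y - mu) / sigma
  sigma * (z * (2 * pnorm(z) - 1) + 2 * dnorm(z) - 1 / sqrt(pi))
}

set.seed(1)
n  <- 1000
y  <- rnorm(n, mean = 1, sd = 1)         # synthetic observations
mu <- rep(1, n); sigma <- rep(1, n)      # forecast means and standard deviations

pit <- pnorm(y, mu, sigma)               # PIT values; uniform for a calibrated forecast
hist(pit, breaks = 20, main = "PIT histogram")

crps_fc  <- mean(crps_norm(y, mu, sigma))        # mean CRPS of the forecast
crps_ref <- mean(crps_norm(y, mean(y), sd(y)))   # climatological reference forecast
crpss    <- 1 - crps_fc / crps_ref               # skill score relative to the reference
```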

Figure S1: Evaluation metrics for the three network variants in Scenario 3, where DE denotes the average score of the deep ensemble members. Note the different scales on the vertical axes.
Figure S2: PIT histograms for the three network variants of the deep ensemble (DE) and aggregated forecasts for an ensemble of size 2 in Scenario 3.
Figure S3: Boxplots over the CRPSS values of the 50 runs in Scenario 3 of the simulation study.
Figure S4: Evaluation metrics for the three network variants in Scenario 4, where DE denotes the average score of the deep ensemble members. Note the different scales on the vertical axes.
Figure S5: PIT histograms for the three network variants of the deep ensemble (DE) and aggregated forecasts for an ensemble of size 2 in Scenario 4.
Figure S6: Boxplots over the CRPSS values of the 50 runs in Scenario 4 of the simulation study.