Evaluating epidemic forecasts in an interval format

by Johannes Bracher et al.

For practical reasons, many forecasts of case, hospitalization and death counts in the context of the current COVID-19 pandemic are issued in the form of central predictive intervals at various levels. This is also the case for the forecasts collected in the COVID-19 Forecast Hub run by the UMass-Amherst Influenza Forecasting Center of Excellence. Forecast evaluation metrics like the logarithmic score, which has been applied in several infectious disease forecasting challenges, are then not available as they require full predictive distributions. This note provides an overview of how established methods for the evaluation of quantile and interval forecasts can be applied to epidemic forecasts. Specifically, we discuss the computation and interpretation of the weighted interval score, which is a proper score that approximates the continuous ranked probability score. It can be interpreted as a generalization of the absolute error to probabilistic forecasts and allows for a simple decomposition into a measure of sharpness and penalties for over- and underprediction.






1 Introduction

There is a growing consensus in infectious disease epidemiology that epidemic forecasts should be probabilistic in nature, i.e. should not only state one predicted outcome, but also quantify their own uncertainty. This is reflected in recent forecasting challenges like the US CDC FluSight Challenge (McGowan et al., 2019) and the Dengue Forecasting Project (Johansson et al., 2019), which required participants to submit forecast distributions for binned disease incidence measures. Storing forecasts in this way enables the evaluation of standard scoring rules like the logarithmic score (Gneiting and Raftery, 2007), which has been used in both of the aforementioned challenges. This approach, however, requires that a simple yet meaningful binning system can be defined and is followed by all forecasters. In acute outbreak situations like the current COVID-19 outbreak, where the range of observed outcomes varies considerably across space and time and forecasts are generated under time pressure, it may not be practically feasible to define a reasonable binning scheme.

An alternative is to store forecasts in the form of predictive quantiles or intervals. This is the approach used in the COVID-19 Forecast Hub (UMass-Amherst Influenza Forecasting Center of Excellence, 2020). The Forecast Hub serves to aggregate COVID-19 death and hospitalization forecasts in the United States and is the data source for the official CDC COVID-19 Forecasting page (https://www.cdc.gov/coronavirus/2019-ncov/covid-data/forecasting-us.html). Contributing teams are asked to report the predictive median and central prediction intervals with nominal levels 10%, 20%, …, 90%, 95% and 98%, meaning that the (0.01, 0.025, 0.05, 0.10, …, 0.95, 0.975, 0.99) quantiles of predictive distributions have to be made available. Using such a format, predictive distributions can be stored in reasonable detail independently of the expected range of outcomes. However, suitably adapted scoring methods are required, as e.g. the logarithmic score cannot be evaluated based on quantiles alone. This note provides an introduction to established quantile and interval-based scoring methods (Gneiting and Raftery, 2007, Section 6) with a focus on their application to epidemiological forecasts.

2 Common scores to evaluate full predictive distributions

Proper scoring rules (Gneiting and Raftery, 2007) are today the standard tools to evaluate probabilistic forecasts. Propriety is a desirable property of a score as it encourages honest forecasting, meaning that forecasters have no incentive to report forecasts differing from their true belief about the future. We start by providing a brief overview of scores which can be applied when the full predictive distribution is available.

A widely used proper score is the logarithmic score. In the case of a discrete set of possible outcomes (as is the case for binned measures of disease activity), it is defined as (Gneiting and Raftery, 2007)

\[ \text{logS}(F, y) = \log p_y. \]

Here p_y is the probability assigned to the observed outcome y by the forecast F. The log score is positively oriented, meaning that larger values are better. A potential disadvantage of this score is that it degenerates to −∞ if p_y = 0. In the FluSight Challenge the score is therefore truncated at a value of −10 (Centers for Disease Control and Prevention, 2019).
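As a concrete illustration, the truncated log score takes only a few lines; a minimal Python sketch (the function name and dict-based forecast format are ours, not from the paper):

```python
import math

def log_score(probs, y, floor=-10.0):
    """Truncated logarithmic score for a binned forecast.

    probs: dict mapping each bin to its forecast probability.
    y: the observed bin.
    Returns log(p_y), truncated below at `floor` (as in FluSight),
    so that zero probability on the observation does not yield -inf.
    """
    p = probs.get(y, 0.0)
    if p <= 0.0:
        return floor
    return max(math.log(p), floor)
```

A forecast that assigns no mass to the observed bin thus receives the floor value of −10 rather than −∞.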

Until the 2018/2019 edition, a variation of the logarithmic score called the multibin logarithmic score was used in the FluSight Challenge. For discrete and ordered outcomes it is defined as (Centers for Disease Control and Prevention, 2018)

\[ \text{MBlogS}(F, y) = \log \sum_{k=-d}^{d} p_{y+k}, \]

i.e. the score also counts probability mass within a tolerance range of d ordered categories around the observed outcome. The goal of this score is to measure “accuracy of practical significance” (Reich et al., 2019, p. 3153). It thus offers a more accessible interpretation to practitioners, but has the disadvantage of being improper (Bracher, 2019; Reich et al., 2019a).
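The multibin variant differs from the plain log score only in summing neighbouring bins before taking the log; a sketch under the same assumed dict-based format (bins indexed by consecutive integers):

```python
import math

def multibin_log_score(probs, y, d=1, floor=-10.0):
    """Multibin logarithmic score for ordered, integer-indexed bins.

    Sums forecast probability over all bins within d categories of the
    observed bin y before taking the (truncated) log. This makes the
    score more 'generous' than the log score, but improper.
    """
    mass = sum(probs.get(k, 0.0) for k in range(y - d, y + d + 1))
    if mass <= 0.0:
        return floor
    return max(math.log(mass), floor)
```

With d = 0 this reduces to the ordinary truncated log score.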

An alternative score which is considered more robust than the logarithmic score (Gneiting et al., 2007) is the continuous ranked probability score (CRPS; note that in the case of integer-valued outcomes the CRPS simplifies to the ranked probability score, compare Czado et al. (2009) and Kolassa (2016)):

\[ \text{CRPS}(F, y) = \int_{-\infty}^{\infty} \left( F(x) - \mathbf{1}(x \geq y) \right)^2 \, dx, \]

where F is interpreted as a cumulative distribution function (CDF). The CRPS represents a generalization of the absolute error to probabilistic forecasts (implying that it is negatively oriented) and has been commonly used to evaluate epidemic forecasts (Held et al., 2017; Funk et al., 2019). The CRPS does not diverge even if a forecast assigns zero probability to the eventually observed outcome, making it less sensitive to occasional misguided forecasts. It depends on the application setting whether an extreme penalization of such “missed” forecasts is desirable or not, and in certain contexts the CRPS may seem lenient. A practical advantage, however, is that there is no need for thresholding it at an arbitrary value.
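For integer-valued outcomes the integral reduces to a sum over the support, so the CRPS can be computed exactly from the CDF; a minimal sketch (`crps_discrete` and the dict-based format are our own):

```python
def crps_discrete(probs, y):
    """CRPS for a forecast on integer support, via the CDF form
    sum_x (F(x) - 1{x >= y})^2 (the ranked probability score).

    probs: dict mapping integer outcomes to probabilities (summing to 1).
    y: observed integer outcome.
    """
    lo = min(min(probs), y)
    hi = max(max(probs), y)
    cdf, score = 0.0, 0.0
    for x in range(lo, hi + 1):
        cdf += probs.get(x, 0.0)
        # beyond hi both F and the indicator equal 1, contributing nothing
        score += (cdf - (1.0 if x >= y else 0.0)) ** 2
    return score
```

For a point forecast the score equals the absolute error, e.g. `crps_discrete({3: 1.0}, 5)` gives 2.0, matching |5 − 3|.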

3 Scores for forecasts provided in an interval format

Neither the logS nor the CRPS can be evaluated directly if forecasts are provided in an interval format. If many intervals are provided, approximations may be feasible to some degree, but problems arise if observations fall in the tails of predictive distributions (see Discussion section). It is therefore advisable to apply scoring rules designed specifically for forecasts in a quantile/interval format. A simple proper score which requires only a central (1 − α) prediction interval (in the following: PI) is the interval score (Gneiting and Raftery, 2007, Section 6.2 and references therein)

\[ \text{IS}_\alpha(F, y) = (u - l) + \frac{2}{\alpha} \cdot (l - y) \cdot \mathbf{1}(y < l) + \frac{2}{\alpha} \cdot (y - u) \cdot \mathbf{1}(y > u), \]

where 𝟙 is the indicator function and l and u are the α/2 and 1 − α/2 quantiles of F. It consists of three intuitively meaningful quantities:

  • The width u − l of the central PI, which describes the sharpness of F.

  • A penalty term (2/α)(l − y)·𝟙(y < l) for observations falling below the lower endpoint l of the PI. The penalty is proportional to the distance between y and the lower end of the interval, with the strength of the penalty depending on the level α (the higher the nominal level of the PI, the more severe the penalty).

  • An analogous penalty term for observations falling above the upper endpoint u of the PI.
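The three components can be written down directly; a minimal Python sketch (function and variable names are ours):

```python
def interval_score(lower, upper, y, alpha):
    """Interval score of the central (1 - alpha) PI [lower, upper].

    Width of the interval (sharpness), plus penalties scaled by
    2/alpha whenever the observation y falls outside the interval.
    Negatively oriented: smaller is better.
    """
    width = upper - lower                        # sharpness component
    below = (2.0 / alpha) * max(lower - y, 0.0)  # y below the PI
    above = (2.0 / alpha) * max(y - upper, 0.0)  # y above the PI
    return width + below + above
```

For an 80% PI (α = 0.2) of [10, 20], an observation inside the interval scores just the width 10, while y = 25 adds a penalty of 10 × 5 for a total of 60.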

To provide more detailed information on the predictive distribution it is common to report not just one, but several central PIs at different levels (1 − α_1), …, (1 − α_K), along with the predictive median m. The latter can informally be seen as a central prediction interval at level (1 − α_0) → 0, i.e. α_0 = 1. To take all of these into account, a weighted interval score can be evaluated:

\[ \text{WIS}_{\alpha_{0:K}}(F, y) = \frac{1}{K + 1/2} \left( w_0 \cdot |y - m| + \sum_{k=1}^{K} w_k \cdot \text{IS}_{\alpha_k}(F, y) \right). \]

This score is a special case of the more general quantile score (Gneiting and Raftery, 2007, Section 6.1, Corollary 1) and proper for any set of non-negative (un-normalized) weights w_0, …, w_K. A natural choice is to set

\[ w_k = \frac{\alpha_k}{2} \quad \text{for } k = 1, \dots, K, \tag{1} \]

with w_0 = 1/2, as for large K and equally spaced values of α_k (stretching over the unit interval) it can be shown that under this choice of weights

\[ \text{WIS}_{\alpha_{0:K}}(F, y) \approx \text{CRPS}(F, y). \tag{2} \]

This follows directly from known properties of the quantile score and the CRPS (Laio and Tamea, 2007; Gneiting and Ranjan, 2011); see Appendix A. Consequently the score can be interpreted heuristically as a measure of distance between the predictive distribution and the true observation, where the units are those of the absolute error, on the natural scale of the data. Indeed, in the case K = 0, where only the predictive median is used, the WIS is simply the absolute error |y − m|. Furthermore, the CRPS reduces to the absolute error when F is a point forecast (Gneiting and Raftery, 2007). We will use the specification (1) of the weights in the remainder of the article, but remark that different weighting schemes may be reasonable depending on the application context.

In practice, evaluation of forecasts submitted to the COVID-19 Forecast Hub will be done based on the predictive median and K = 11 prediction intervals with α = 0.02, 0.05, 0.10, 0.20, 0.30, …, 0.90 (implying nominal coverages of 98%, 95%, 90%, 80%, 70%, …, 10%). This corresponds to the quantiles teams are required to report in their submissions and implies that, relative to the CRPS, slightly more emphasis is given to intervals with high nominal coverage.
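Under the weights w_0 = 1/2 and w_k = α_k/2, the whole computation fits in a short function; a self-contained sketch (names and the dict-based interval format are ours):

```python
def weighted_interval_score(median, intervals, y):
    """Weighted interval score with weights w0 = 1/2 and w_k = alpha_k/2,
    the choice under which the WIS approximates the CRPS.

    median: predictive median m.
    intervals: dict mapping alpha_k to the central (1 - alpha_k) PI
        as a (lower, upper) tuple, e.g. {0.2: (10, 20)}.
    y: observed value.
    """
    total = 0.5 * abs(y - median)  # w0 * |y - m|
    for alpha, (lower, upper) in intervals.items():
        is_k = (upper - lower) \
            + (2.0 / alpha) * max(lower - y, 0.0) \
            + (2.0 / alpha) * max(y - upper, 0.0)
        total += (alpha / 2.0) * is_k
    return total / (len(intervals) + 0.5)
```

With an empty `intervals` dict (K = 0) the function returns the absolute error of the median, matching the interpretation given above; the Forecast Hub setting corresponds to K = 11 intervals with the α values listed in the text.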

Note that a score corresponding to one half of what we refer to as the WIS was used in the 2014 Global Energy Forecasting Competition (Hong et al., 2016, Section 3.2). The score was framed as an average of pinball losses for the predictive 1st through 99th percentiles. Here we preferred to motivate the score through central predictive intervals at different levels, which are a commonly used concept in epidemiology.

Similarly to the interval score, the weighted interval score can be decomposed into weighted sums of the widths of PIs and penalty terms, including the absolute error. These two components represent the sharpness and calibration of the forecasts, respectively, and can be used in graphical representations of obtained scores (see Section 5.1).

Figure 1: Example: Logarithmic score, absolute error, interval score (with α = 0.2), CRPS and two versions of the weighted interval score. These are denoted by WIS* (with K = 1, α_1 = 0.2) and WIS (K = 11, α = 0.02, 0.05, 0.10, 0.20, …, 0.90). Scores are shown as a function of the observed value y. The predictive distribution F is negative binomial with expectation 60 and size 4. Note that the top left panel shows the negative logS, i.e. −logS(F, y), which like the other scores is negatively oriented (smaller values are better).

4 Qualitative comparison for different scores

We now compare various scores using simple examples, covering scores for point predictions, prediction intervals and full predictive distributions.

4.1 Illustration for an integer-valued outcome

Figure 1 illustrates the behaviour of five different scores for a negative binomial predictive distribution with expectation 60 and size parameter 4 (standard deviation ≈ 31). We consider the logarithmic score, absolute error, interval score with α = 0.2 (IS_0.2), CRPS, and two versions of the weighted interval score. Firstly, we consider a score with K = 1 and α_1 = 0.2, which we denote by WIS*. Secondly, we consider a more detailed score with K = 11 and α = 0.02, 0.05, 0.10, 0.20, …, 0.90, denoted by WIS (as this is the version used in the COVID-19 Forecast Hub we will focus on it in the remainder of the article). The resulting scores are shown as a function of the observed value y. Note that the top left panel shows the negative logarithmic score, i.e. −logS, so that the curve is oriented the same way as for the other scores (lower values are better). Qualitatively all curves look similar. However, some differences can be observed. The best (lowest) negative logS is achieved if the observation coincides with the predictive mode. For the interval-based scores, AE and CRPS, the best value results if y equals the median (for the IS_0.2 in the middle right panel there is a plateau, as it does not distinguish between values falling into the 80% PI). The negative logS curve is smooth and increases the more steeply the further away the observed y is from the predictive mode. The curve shows some asymmetry, which is absent or less pronounced in the other plots. The IS and WIS curves are piecewise linear, with more modest slopes closer to the median and more pronounced ones towards the tails. Both versions of the WIS represent a good approximation to the CRPS. For the more detailed version with 11 intervals plus the absolute error, slight differences to the CRPS can only be seen in the extreme upper tail. When comparing the CRPS and WIS*/WIS scores to the absolute error, it can be seen that the probabilistic scores are larger in the immediate surroundings of the median (and always greater than zero), but lower towards the tails. This is because they also take into account the uncertainty in the forecast distribution.

4.2 Differing behaviour if agreement between predictions and observations is poor

Qualitative differences between the logarithmic and interval-based scores occur predominantly if observations fall into the tails of predictive distributions. We illustrate this with a second example. Consider two negative binomial forecasts: F with expectation 60 and size 4 (standard deviation ≈ 31) as before, and G with expectation 80 and size 10 (standard deviation ≈ 27). G thus has a higher expectation than F and is sharper. If we now observe y = 150, i.e. a count considerably higher than suggested by either F or G, the two scores yield different results, as illustrated in Figure 2.

  • The logS favours F over G, as the former is more dispersed and has slightly heavier tails. Therefore y = 150 is considered somewhat more “plausible” under F than under G.

  • The WIS (with K = 11 and α as in the previous section), on the other hand, favours G, as its quantiles are generally closer to the observed value.

This behaviour of the WIS is referred to as sensitivity to distance (Gneiting and Raftery, 2007). In contrast, the logS is a local score which ignores distance. Winkler (1996, pp. 16–17) argues that local scoring rules can be more suitable for inferential problems, while sensitivity to distance is reasonable in many decision making settings. In the public health context, say a prediction of hospital bed need on a certain day in the future, it could be argued that for y = 150 the forecast G was indeed more useful than F. While a pessimistic scenario under G (defined as the 95% quantile of the predictive distribution) implies 128 beds needed and thus fell considerably short of y = 150, it still suggested more adequate need for preparation than F, which has a 95% quantile of 118.

Figure 2: Example: Negative logarithmic score and weighted interval score (with K = 11 and α as in the previous section) as a function of the observed value y. The predictive distributions F (green) and G (red) are negative binomials with expectations 60 and 80 and sizes 4 and 10, respectively. The black dashed line shows y = 150 as discussed in the text.

We argue that poor agreement between forecasts and observations is more likely to occur for COVID-19 deaths than e.g. for seasonal ILI intensity, which due to larger amounts of historical data is more predictable. Sensitivity to distance then leads to more robust scoring with respect to decision making, without the need to truncate at an arbitrary value (as required for the log score).

5 Application to FluSight forecasts

In this section some additional practical aspects are discussed using historical forecasts from the 2016/2017 edition of the FluSight Challenge for illustration.

5.1 An easily interpretable graphic display of the WIS

The straightforward decomposition of the WIS into the average width of PIs and average penalty for observations outside the various PIs (see Section 3) enables an intuitive graphical display to compare different forecasts and understand why one forecast outperforms another. Distinguishing also between penalties for over- and underprediction can be informative about systematic biases or asymmetries. Note that decompositions of quantile or interval scores for visualization purposes have been suggested before, see e.g. Bentzien and Friederichs (2014).

Figure 3: Comparison of one-week-ahead forecasts by KCDE and SARIMA over the course of the 2016/2017 FluSight season. The top row shows the interval score with α = 0.2, the bottom row the weighted interval score with K = 11 and α as in the previous sections. The panels at the right show mean scores over the course of the season. All bars are decomposed into the contribution of interval widths (i.e. a measure of sharpness; blue) and penalties for over- and underprediction (orange and red, respectively). Note that the absolute values of the two scores are not directly comparable as the WIS involves re-scaling the included interval scores.

Figure 3 shows a comparison of the IS_0.2 and WIS (with K = 11 as before) obtained for one-week-ahead forecasts by the KCDE and SARIMA models during the 2016/2017 FluSight Challenge (obtained from https://github.com/FluSightNetwork/cdc-flusight-ensemble, Reich et al. 2019, Ray et al. 2017). It can be seen that, while KCDE and SARIMA issued forecasts of similar sharpness (average widths of PIs, blue bars), SARIMA is more strongly penalized for PIs not covering the observations (orange and red bars). Broken down to a single number, the bottom right panel shows that predictions from KCDE and SARIMA were on average off by 0.25 and 0.35 percentage points, respectively (after taking into account the uncertainty implied by their predictions). Both methods are somewhat conservative, with 80% PIs covering 88% (SARIMA) and 100% (KCDE) of the observations. When comparing the plots for IS_0.2 and WIS, it can be seen that the former strongly punishes larger discrepancies between forecasts and observations while ignoring smaller differences. The latter translates discrepancies to penalties in a smoother fashion, as could already be seen in Figure 1.

5.2 Empirical agreement between different scores

To explore the agreement between different scores, we applied several of them to one- through four-week-ahead forecasts from the 2016/2017 edition of the FluSight Challenge. We compare the negative logarithmic score, the negative multibin logarithmic score with a tolerance of 0.5 percentage points (both with truncation at −10), the CRPS (to evaluate the CRPS and the interval-based scores we simply identified each bin with its central value, to which a point mass was assigned), the absolute error of the median, the interval score with α = 0.2 and the weighted interval score with K = 11 and α as in the previous sections. Figure 4 shows scatterplots of mean scores achieved by 26 models (averaged over weeks, forecast horizons and geographical levels; the naïve uniform model was removed as it clearly performs worst under almost all metrics).

As expected, the three interval-based scores correlate more strongly with the CRPS and the absolute error than with the logarithmic score. Correlation between the WIS and CRPS is almost perfect, meaning that in this example the approximation (2) works quite well based on the 23 available quantiles. Agreement between the interval-based scores and the logS is mediocre, in part because the models FluOutlook_Mech and FluOutlook_MechAug receive comparatively good interval-based scores (as well as CRPS, absolute errors and even MBlogS), but exceptionally poor logS. The reason is that while having a rather accurate central tendency, they are too sharp, with tails that are too light. This is sanctioned severely by the logarithmic score, but much less so by the other scores (this is related to the discussion in Section 4.2). The WIS (and thus also the CRPS) shows remarkably good agreement with the MBlogS, indicating that distance-sensitive scores may be able to formalize the idea of a score which is slightly more “generous” than the logS while maintaining propriety. Interestingly, all scores agree that the three best models are LANL_DBMplus, Protea_Cheetah and Protea_Springbok.

Figure 4: Comparison of 26 models participating in the 2016/2017 FluSight Challenge under different scoring rules. Shown are mean scores averaged over one- through four-week-ahead forecasts, geographical levels and weeks. Compared scores: negative logarithmic score and multibin logarithmic score, continuous ranked probability score, interval score (α = 0.2), weighted interval score (K = 11). Plots comparing the WIS to CRPS and AE, respectively, also show the diagonal in grey as these three scores operate on the same scale. All shown scores are negatively oriented. The models FluOutlook_Mech and FluOutlook_MechAug are highlighted in orange as they rank very differently under different scores.

6 A brief remark on evaluating point forecasts

While the main focus of this note is on the evaluation of interval forecasts, we also briefly address how point forecasts submitted to the COVID-19 Forecast Hub will be evaluated. As in the FluSight Challenge (Centers for Disease Control and Prevention, 2019), the absolute error (AE) will be applied. This implies that teams should report the predictive median as a point forecast (Gneiting, 2011). Using the absolute error in combination with the WIS is appealing as both can be reported on the same scale (that of the observations). Indeed, as mentioned before, the absolute error is the same as the WIS (and CRPS) of a distribution putting all probability mass on the point forecast.

The absolute error, when averaged across time and space, is dominated by forecasts from larger states and weeks with high activity (this also holds true for the CRPS and WIS). One may thus be tempted to use a relative measure of error instead, such as the mean absolute percentage error (MAPE). We argue, however, that emphasizing forecasts of targets with higher expected values is meaningful. For instance, there should be a larger penalty for forecasting 200 deaths if 400 are eventually observed than for forecasting 2 deaths if 4 are observed. Relative measures like the MAPE would treat both the same. Moreover, the MAPE does not encourage reporting predictive medians nor means, but rather obscure and difficult to interpret types of point forecasts (Gneiting, 2011; Kolassa, 2016). It should therefore be used with caution.
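The point about relative errors can be made concrete in a few lines; a sketch with the numbers from the text (the function names are ours):

```python
def abs_error(forecast, observed):
    """Absolute error of a point forecast."""
    return abs(observed - forecast)

def abs_percentage_error(forecast, observed):
    """Absolute percentage error, the building block of the MAPE."""
    return abs(observed - forecast) / observed

# Forecasting 200 deaths when 400 occur and 2 deaths when 4 occur
# yield the same 50% percentage error ...
assert abs_percentage_error(200, 400) == abs_percentage_error(2, 4) == 0.5
# ... but very different absolute errors:
assert abs_error(200, 400) == 200
assert abs_error(2, 4) == 2
```

Averaging percentage errors would thus weight both misses equally, while the absolute error puts more weight on the larger target, as argued above.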

7 Discussion

In this paper we have provided a practical and hopefully intuitive introduction to the evaluation of epidemic forecasts provided in an interval or quantile format. It is worth emphasizing that the concepts underlying the suggested procedure are by no means new or experimental. Indeed, they can be traced back to Dunsmore (1968) and Winkler (1972). As mentioned before, a special case of the WIS was used in the 2014 Global Energy Forecasting Competition (Hong et al., 2016). A scaled version of the interval score was used in the 2018 M4 forecasting competition (Makridakis et al., 2020).

Note that we restrict attention to the case of central prediction intervals, so that each prediction interval is clearly associated with two quantiles. The evaluation of prediction intervals which are not restricted to be central is conceptually challenging (Askanazi et al., 2018), and we refrain from adding this complexity.

The method advocated in this note corresponds to an approximate CRPS computed from prediction intervals at various levels. A natural question is whether such an approximation would also be feasible for the logarithmic score, leading to an evaluation metric closer to that from the FluSight Challenge. We see two principal difficulties with such an approach. Firstly, some sort of interpolation method would be needed to obtain an approximate density or probability mass function within the provided intervals. While the best way to do this is not obvious, a pragmatic solution could likely be found. A second problem, however, would remain: for observations outside of the prediction interval with the highest nominal coverage (98% for the COVID-19 Forecast Hub) there is no easily justifiable way of approximating the logarithmic score, as the analyst necessarily has to make strong assumptions on the tail behaviour of the forecast. As such poor forecasts typically have a strong impact on the average log score, they cannot be neglected. And given that forecasts are often evaluated for many locations (e.g., over 50 US states and territories), even for a perfectly calibrated model there will on average be one such observation falling in the far tail of a predictive distribution every week. One could think about including even more extreme quantiles to remedy this, but forecasters may not be comfortable issuing these and the conceptual problem would remain. This, of course, is linked to the general problem of low robustness of the logarithmic score. We therefore argue that especially in contexts with low predictability, such as the current COVID-19 pandemic, distance-sensitive scores like the CRPS or WIS are an attractive option.

Appendix A Relationship between quantile score, interval score and CRPS

The standard piecewise linear quantile score (Gneiting and Raftery 2007, Section 6.1; Gneiting 2011) for the level τ ∈ (0, 1) is defined as

\[ \text{QS}_\tau(F, y) = 2 \cdot \{\mathbf{1}(y \leq q_\tau) - \tau\} \cdot (q_\tau - y), \]

where q_τ is the τ quantile of the forecast F and y is the observed outcome. It can be shown by some simple re-ordering of terms that the interval score of a central (1 − α) PI can be computed from the quantile scores at the levels α/2 and 1 − α/2 as

\[ \text{IS}_\alpha(F, y) = \frac{\text{QS}_{\alpha/2}(F, y) + \text{QS}_{1-\alpha/2}(F, y)}{\alpha}. \tag{3} \]

Moreover, it is known (Laio and Tamea, 2007; Gneiting and Ranjan, 2011) that

\[ \text{CRPS}(F, y) = \int_0^1 \text{QS}_\tau(F, y) \, d\tau, \tag{4} \]

meaning that the CRPS can be approximated by an average of quantile scores at a large number of equally spaced levels. Combining (3) and (4) one then gets

\[ \text{CRPS}(F, y) \approx \frac{1}{2K + 1} \left( |y - m| + \sum_{k=1}^{K} \alpha_k \cdot \text{IS}_{\alpha_k}(F, y) \right) = \text{WIS}_{\alpha_{0:K}}(F, y), \]

where the levels τ_1, …, τ_{2K+1} = α_1/2, …, α_K/2, 1/2, 1 − α_K/2, …, 1 − α_1/2 are a large number of equally spaced values stretching over the unit interval.
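The identity relating the interval score to the two quantile scores is easy to verify numerically; a sketch with an arbitrary 80% PI (function names are ours):

```python
def quantile_score(q, y, tau):
    """Pinball quantile score QS_tau = 2 * (1{y <= q} - tau) * (q - y)."""
    return 2.0 * ((1.0 if y <= q else 0.0) - tau) * (q - y)

def interval_score(lower, upper, y, alpha):
    """Interval score of a central (1 - alpha) PI."""
    return (upper - lower) \
        + (2.0 / alpha) * max(lower - y, 0.0) \
        + (2.0 / alpha) * max(y - upper, 0.0)

# IS_alpha = (QS_{alpha/2} + QS_{1 - alpha/2}) / alpha, checked for an
# observation inside (y = 15) and above (y = 25) the 80% PI [10, 20]:
for y in (15.0, 25.0):
    lhs = interval_score(10.0, 20.0, y, 0.2)
    rhs = (quantile_score(10.0, y, 0.1)
           + quantile_score(20.0, y, 0.9)) / 0.2
    assert abs(lhs - rhs) < 1e-9
```

The same re-ordering, applied to all K intervals plus the median (whose quantile score at τ = 1/2 is the absolute error), yields the displayed approximation of the CRPS.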


Code to reproduce Figures 1–4 has been made available at


We thank Ryan Tibshirani and Sebastian Funk for their insightful comments. The work of Johannes Bracher was supported by the Helmholtz Foundation via the SIMCARD Information & Data Science Pilot Project. Tilmann Gneiting is grateful for support by the Klaus Tschira Foundation. Evan L. Ray and Nicholas G. Reich were supported by the US Centers for Disease Control and Prevention (1U01IP001122). The content is solely the responsibility of the authors and does not necessarily represent the official views of the CDC.


  • Askanazi et al. (2018) Askanazi, R., F. X. Diebold, F. Schorfheide, and M. Shin (2018). On the comparison of interval forecasts. Journal of Time Series Analysis 39(6), 953–965.
  • Bentzien and Friederichs (2014) Bentzien, S. and P. Friederichs (2014). Decomposition and graphical portrayal of the quantile score. Quarterly Journal of the Royal Meteorological Society 140(683), 1924–1934.
  • Bracher (2019) Bracher, J. (2019). On the multibin logarithmic score used in the FluSight competitions. Proceedings of the National Academy of Sciences 116(42), 20809–20810.
  • Centers for Disease Control and Prevention (2018) Centers for Disease Control and Prevention (2018). Forecast the 2018–2019 Influenza Season Collaborative Challenge. Accessible online at https://predict.cdc.gov/api/v1/attachments/flusight%202018%E2%80%932019/flu_challenge_2018-19_tentativefinal_9.18.18.docx, retrieved on 23 April 2019.
  • Centers for Disease Control and Prevention (2019) Centers for Disease Control and Prevention (2019). Forecast the 2019–2020 Influenza Season Collaborative Challenge. Accessible online at https://predict.cdc.gov/api/v1/attachments/flusight_2019-2020/2019-2020_flusight_national_regional_guidance_final.docx.
  • Czado et al. (2009) Czado, C., T. Gneiting, and L. Held (2009). Predictive model assessment for count data. Biometrics 65(4), 1254–1261.
  • Dunsmore (1968) Dunsmore, I. R. (1968). A Bayesian approach to calibration. Journal of the Royal Statistical Society: Series B (Methodological) 30(2), 396–405.
  • Funk et al. (2019) Funk, S., A. Camacho, A. J. Kucharski, R. Lowe, R. M. Eggo, and W. J. Edmunds (2019, 02). Assessing the performance of real-time epidemic forecasts: A case study of Ebola in the Western Area region of Sierra Leone, 2014-15. PLOS Computational Biology 15(2), 1–17.
  • Gneiting (2011) Gneiting, T. (2011). Making and evaluating point forecasts. Journal of the American Statistical Association 106(494), 746–762.
  • Gneiting et al. (2007) Gneiting, T., F. Balabdaoui, and A. E. Raftery (2007). Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 69(2), 243–268.
  • Gneiting and Raftery (2007) Gneiting, T. and A. E. Raftery (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association 102(477), 359–378.
  • Gneiting and Ranjan (2011) Gneiting, T. and R. Ranjan (2011). Comparing density forecasts using threshold- and quantile-weighted scoring rules. Journal of Business and Economic Statistics 29(3), 411–422.
  • Held et al. (2017) Held, L., S. Meyer, and J. Bracher (2017). Probabilistic forecasting in infectious disease epidemiology: the 13th Armitage lecture. Statistics in Medicine 36(22), 3443–3460.
  • Hong et al. (2016) Hong, T., P. Pinson, S. Fan, H. Zareipour, A. Troccoli, and R. J. Hyndman (2016). Probabilistic energy forecasting: Global energy forecasting competition 2014 and beyond. International Journal of Forecasting 32(3), 896 – 913.
  • Johansson et al. (2019) Johansson, M. A., K. M. Apfeldorf, S. Dobson, J. Devita, A. L. Buczak, B. Baugher, L. J. Moniz, T. Bagley, S. M. Babin, E. Guven, T. K. Yamana, J. Shaman, T. Moschou, N. Lothian, A. Lane, G. Osborne, G. Jiang, L. C. Brooks, D. C. Farrow, S. Hyun, R. J. Tibshirani, R. Rosenfeld, J. Lessler, N. G. Reich, D. A. T. Cummings, S. A. Lauer, S. M. Moore, H. E. Clapham, R. Lowe, T. C. Bailey, M. García-Díez, M. S. Carvalho, X. Rodó, T. Sardar, R. Paul, E. L. Ray, K. Sakrejda, A. C. Brown, X. Meng, O. Osoba, R. Vardavas, D. Manheim, M. Moore, D. M. Rao, T. C. Porco, S. Ackley, F. Liu, L. Worden, M. Convertino, Y. Liu, A. Reddy, E. Ortiz, J. Rivero, H. Brito, A. Juarrero, L. R. Johnson, R. B. Gramacy, J. M. Cohen, E. A. Mordecai, C. C. Murdock, J. R. Rohr, S. J. Ryan, A. M. Stewart-Ibarra, D. P. Weikel, A. Jutla, R. Khan, M. Poultney, R. R. Colwell, B. Rivera-García, C. M. Barker, J. E. Bell, M. Biggerstaff, D. Swerdlow, L. Mier-y Teran-Romero, B. M. Forshey, J. Trtanj, J. Asher, M. Clay, H. S. Margolis, A. M. Hebbeler, D. George, and J.-P. Chretien (2019). An open challenge to advance probabilistic forecasting for dengue epidemics. Proceedings of the National Academy of Sciences 116(48), 24268–24274.
  • Kolassa (2016) Kolassa, S. (2016). Evaluating predictive count data distributions in retail sales forecasting. International Journal of Forecasting 32(3), 788–803.
  • Laio and Tamea (2007) Laio, F. and S. Tamea (2007). Verification tools for probabilistic forecasts of continuous hydrological variables. Hydrology and Earth System Sciences Discussions 11(4), 1267–1277.
  • Makridakis et al. (2020) Makridakis, S., E. Spiliotis, and V. Assimakopoulos (2020). The M4 competition: 100,000 time series and 61 forecasting methods. International Journal of Forecasting 36(1), 54–74.
  • McGowan et al. (2019) McGowan, C., M. Biggerstaff, M. Johansson, K. Apfeldorf, M. Ben-Nun, L. Brooks, M. Convertino, M. Erraguntla, D. Farrow, J. Freeze, S. Ghosh, S. Hyun, S. Kandula, J. Lega, Y. Liu, N. Michaud, H. Morita, J. Niemi, N. Ramakrishnan, E. Ray, N. Reich, P. Riley, J. Shaman, R. Tibshirani, A. Vespignani, Q. Zhang, C. Reed, and The Influenza Forecasting Working Group (2019). Collaborative efforts to forecast seasonal influenza in the United States, 2015–2016. Scientific Reports, paper no. 683.
  • Ray et al. (2017) Ray, E. L., K. Sakrejda, S. A. Lauer, M. A. Johansson, and N. G. Reich (2017). Infectious disease prediction with kernel conditional density estimation. Statistics in Medicine 36(30), 4908–4929.
  • Reich et al. (2019) Reich, N. G., L. C. Brooks, S. J. Fox, S. Kandula, C. J. McGowan, E. Moore, D. Osthus, E. L. Ray, A. Tushar, T. K. Yamana, M. Biggerstaff, M. A. Johansson, R. Rosenfeld, and J. Shaman (2019). A collaborative multiyear, multimodel assessment of seasonal influenza forecasting in the United States. Proceedings of the National Academy of Sciences 116(8), 3146–3154.
  • Reich et al. (2019a) Reich, N. G., D. Osthus, E. L. Ray, T. K. Yamana, M. Biggerstaff, M. A. Johansson, R. Rosenfeld, and J. Shaman (2019a). Reply to Bracher: Scoring probabilistic forecasts to maximize public health interpretability. Proceedings of the National Academy of Sciences 116(42), 20811–20812.
  • UMass-Amherst Influenza Forecasting Center of Excellence (2020) UMass-Amherst Influenza Forecasting Center of Excellence (2020). COVID-19 Forecast Hub. Accessible online at https://github.com/reichlab/covid19-forecast-hub.
  • Winkler (1972) Winkler, R. L. (1972). A decision-theoretic approach to interval estimation. Journal of the American Statistical Association 67(337), 187–191.
  • Winkler (1996) Winkler, R. L. (1996). Scoring rules and the evaluation of probabilities (with discussion). Test 4, 1–60.