Forecast evaluation with imperfect observations and imperfect models

06/10/2018
by Philippe Naveau, et al.

The field of statistics has become one of the mathematical foundations of forecast evaluation studies, especially with regard to computing scoring rules. The classical paradigm of proper scoring rules is to discriminate between two different forecasts by comparing them with observations. The probability density function of the observed record is assumed to provide a perfect verification benchmark. In practice, however, observations are almost always tainted by errors. These may be due to homogenization problems, instrumental deficiencies, the need for indirect reconstructions from other sources (e.g., radar data), model errors in gridded products like reanalyses, or other data-recording issues. If the yardstick used to compare forecasts is imprecise, one may wonder whether such errors strongly influence decisions based on classical scoring rules. Building on the recent work of Ferro (2017), we propose a new scoring rule scheme that incorporates errors in the verification data, compare it with existing methods, and apply it to various setups, mainly a Gaussian additive noise model and a gamma multiplicative noise model. In addition, we frame the problem of verification-data error as scoring a model that jointly couples the forecast and observation distributions. This is strongly connected to the so-called errors-in-variables models in statistics.
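To make the issue concrete, here is a minimal, self-contained sketch, not the paper's method: it verifies two Gaussian forecasts against both the true state and observations contaminated by additive Gaussian noise, using the standard closed-form CRPS for a Gaussian predictive distribution. The noise scale `tau`, the forecast spreads, and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at observation y
    (Gneiting & Raftery, 2007)."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(0.0, 1.0, n)        # unobserved "true" state
tau = 0.5                          # assumed observation-error scale, for illustration only
y = x + rng.normal(0.0, tau, n)    # imperfect observations: additive Gaussian noise

# Two competing forecasts, both centred on the truth: a sharp one and a wide one.
for name, sigma in [("sharp (sigma=0.3)", 0.3), ("wide (sigma=0.8)", 0.8)]:
    print(f"{name}: mean CRPS vs truth = {crps_gaussian(x, sigma, x).mean():.3f}, "
          f"vs noisy obs = {crps_gaussian(x, sigma, y).mean():.3f}")
```

Run as-is, both forecasts receive an inflated average CRPS when scored against the noisy observations, and the gap between the sharp and the wide forecast shrinks markedly. This is precisely the kind of distortion that a scoring scheme modeling verification-data errors aims to correct.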


