# Forecast evaluation with imperfect observations and imperfect models

The field of statistics has become one of the mathematical foundations of forecast evaluation studies, especially with regard to computing proper scoring rules. The classical paradigm of proper scoring rules is to discriminate between two different forecasts by comparing them with observations. The probability density function of the observed record is assumed to be a perfect verification benchmark. In practice, however, observations are almost always tainted by errors. These may be due to homogenization problems, instrumental deficiencies, the need for indirect reconstructions from other sources (e.g., radar data), model errors in gridded products like reanalysis, or other data-recording issues. If the yardstick used to compare forecasts is imprecise, one can wonder whether such errors may have a strong influence on decisions based on classical scoring rules. Building on the recent work of Ferro (2017), we propose a new scoring rule scheme in the context of models that incorporate errors in the verification data, compare it to existing methods, and apply it to various setups, mainly a Gaussian additive noise model and a gamma multiplicative noise model. In addition, we frame the problem of error in verification datasets as scoring a model that jointly couples the forecast and observation distributions. This is strongly connected to the so-called error-in-variables models in statistics.


## 1 Introduction

In verification and recalibration of ensemble forecasts, an essential step is to find data that precisely identify the forecast events of interest, the so-called verification dataset (see, e.g., Jolliffe04). Verification data are of different types, for example reanalysis data, and, in practice, the true process of interest is rarely directly observed. Jolliffe04, in Section 1.3 of their book, discussed the uncertainty associated with verification data (such as sampling uncertainty, direct measurement uncertainty, or changes in locations in the verification dataset). In the field of data assimilation, uncertainty in verification data is routinely taken into account in assimilation schemes (see, e.g., Daley93; Waller14; Janjic17). This issue is also known within the forecast verification community but, to our knowledge, had not been strongly put forward until recently (Ferro17). A distribution-based verification approach was proposed by Murphy87, where the joint distribution of forecasts and observations accounts for the information in, and interaction between, both datasets. Various approaches have been proposed in the literature. Some methods attempt to correct the verification data and then use regular scoring metrics to evaluate forecasts, such as perturbed ensemble methods (Anderson96; Hamill01); see also Bowler08 and Gorgas12 for other approaches. Other methods treat observations as random variables (see, e.g., Candille08; Pappenberger09; Pinson12) and express scoring metrics in that context. Weijs11, following the decomposition of a score into its reliability, resolution, and uncertainty components, accounted for the uncertainty relative to the truth or to the climatology. Siegert16 proposed a Bayesian framework to jointly model the prediction and observation in a signal-plus-noise model that embeds the variability between the two datasets. These studies highlight the benefit of accounting for observation errors; however, many of them focus on categorical variables (Bowler06) or are computationally challenging for scalar variables (see, e.g., Candille08; Pappenberger09; Pinson12).

Particularly relevant to our work is the recent study of Ferro17, who framed in precise mathematical terms the problem of error in verification data. His setup relies on modeling the probabilistic distribution of the verification data given the underlying physical process that is not observed. In terms of assessment, this framework is rooted in the concept of proper scoring rules, now commonly used in weather forecasting centers to compare different forecasts (see, e.g., Gneiting07). Ferro17 also proposed a solution to take verification error into account when computing a scoring rule. Building on this solid foundation, we extend and improve Ferro's work. By taking full advantage of the information assumed in the probability model at hand, we show that a new correction of proper scoring rules provides a smaller variance than the one proposed by Ferro, while keeping the same mean with respect to the true but unobserved reference; see Section 2 and Proposition 1. Another improvement is the range of examples treated; see Sections 3-5. In addition to revisiting the univariate Gaussian additive error model, we treat other models, in particular the gamma error model, and we explore multivariate setups. Similarly, in terms of scoring rules, we not only treat the logarithmic score but also investigate the continuous ranked probability score (CRPS), a well-known score in the forecast community. Another added value of our work is that, in Section 5, we view the issue of error in verification datasets within the framework of jointly modeling forecast and observation distributions (see, e.g., Ehm16). This coupling can be described, in a statistical sense, as a multivariate error-in-variables (EIV) model (see, e.g., Fuller87) that generalizes the classical Gaussian additive error model. One key element of our work is that, in contrast to most past studies that focused on the mean (with respect to observations) of a given score, we view all scores as random variables (driven by observational draws). In some cases, we show that one can derive explicit expressions of the corrected score probability density function (pdf); see Sections 3 and 4. We conclude in Section 6 by highlighting the main ideas of our work and discussing a few possible directions to go further in the analysis of imperfect observations.

## 2 Proper scoring rules under uncertainty

In this section, we briefly describe the framework of proper scoring rules and discuss both the class of scores proposed by Ferro17 to account for error in verification data and the framework that we propose. Verification data are assumed to be a realization of an unobserved process tainted with error. In this work, the error will be modeled in two cases, one additive and one multiplicative. In the following, we denote the verification data by y and its generating random process by Y, and the underlying true random process by X and its realization by x. Distinguishing between the unobserved truth (X) and the observed but imperfect verification data (Y) is fundamental to understanding the impact of imperfect observations on forecast scoring.

### 2.1 Proper scores

A probabilistic scoring rule, say s, is a function with two inputs, a forecast distribution f and a realization y, which stands as a verification datapoint. The score provides one scalar output indicating how far the forecast distribution f is from the verification data y: the smaller s(f,y), the better. Specific mathematical properties are required for consistent ranking of forecast distributions. For instance, the property of properness, defined by Gneiting07, prevents a scoring rule from favoring any probabilistic distribution over the distribution of the verification data. Properness is expressed as

$$\mathbb{E}_Y\big[s(f_Y, Y)\big] \;\le\; \mathbb{E}_Y\big[s(f, Y)\big],$$

where f_Y corresponds to the pdf of Y and f represents any forecast pdf. This mathematical property means that, on average, a forecast f can never be better than the observational distribution f_Y.
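As a quick numerical illustration of properness, the following minimal Monte Carlo sketch (with arbitrary Gaussian choices, not part of the original study) checks that the forecast equal to the verification distribution attains the smallest average logarithmic score:

```python
import math
import random

random.seed(0)

def log_score(mu, sigma, y):
    # Logarithmic score of a Gaussian forecast N(mu, sigma^2) at observation y
    return math.log(sigma) + (y - mu)**2 / (2 * sigma**2) + 0.5 * math.log(2 * math.pi)

ys = [random.gauss(0.0, 1.0) for _ in range(100_000)]    # verification draws, Y ~ N(0, 1)
ideal = sum(log_score(0.0, 1.0, y) for y in ys) / len(ys)   # forecast equals the pdf of Y
other = sum(log_score(0.5, 1.5, y) for y in ys) / len(ys)   # an arbitrary competing forecast
print(ideal, other)  # properness: ideal <= other on average
```

Any other competing forecast could be substituted for the second one; the inequality holds on average for all of them.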

An incorrect selection of the best forecast can occur if the forecasts are ranked differently depending on the reference used, i.e. the verification data or the true underlying process. Ferro17 showed that forecasts can be misleading, in terms of score values, in the presence of noise in the verification data. Frameworks that embed the observational error are proposed and compared in the following subsections.

### 2.2 Corrected proper scores

Ferro17 proposed a mathematical scheme to correct a score when error is present in the verification data. This modified score, denoted s∧ in this work, is derived from a classical proper score, say s0. With these notations, s∧ and s0 are linked through the conditional expectation of s∧(f,Y) given X=x, that is,

$$s_0(f,x) = \mathbb{E}\big[s^{\wedge}(f,Y) \,\big|\, X=x\big]. \qquad (1)$$

This ensures that $\mathbb{E}_Y[s^{\wedge}(f,Y)] = \mathbb{E}_X[s_0(f,X)]$. In other words, the score s∧ computed from the Y's provides the same value on average as the proper score s0 computed from the unobserved X's. In terms of assumptions, we note that the conditional law of Y given X=x needs to be known in order to compute s∧ from s0; see Definitions 2 and 3 in Ferro17. We also note that s∧ represents one way to correct the given proper score s0, but it may not be unique. One can find a different strategy by walking in Ferro's footsteps, but backward. By using the conditional pdf of the random variable Y given X=x, Equation (1) allows one to build s∧ from a given s0. We propose to follow the opposite path: given s0, we construct a new score, say s∨, from the conditional pdf of X given Y=y, such that $\mathbb{E}_Y[s^{\vee}(f,Y)] = \mathbb{E}_X[s_0(f,X)]$. This leads us to define

$$s^{\vee}(f,y) = \mathbb{E}\big[s_0(f,X) \,\big|\, Y=y\big]. \qquad (2)$$

This strategy appears practical in the sense that we start with any classical score s0 and transform it to take into account the observational error. The drawback is that we need to compute the pdf of X given Y=y from the pdfs of the two random variables X and Y. Still, this is just an application of Bayes' theorem, and it is easily feasible in many practical cases. Additionally, s∨ is automatically a proper score with respect to the hidden vector X, in the sense that

$$\mathbb{E}_Y\big[s^{\vee}(f,Y)\big] = \mathbb{E}_X\big[s_0(f,X)\big].$$

At this stage, we have two possible ways to build a corrected score, and it is natural to compare their properties. By construction, the random variables s∧(f,Y) and s∨(f,Y) have the same mean,

$$\mathbb{E}_Y\big[s^{\wedge}(f,Y)\big] = \mathbb{E}_X\big[s_0(f,X)\big] = \mathbb{E}_Y\big[s^{\vee}(f,Y)\big] \quad \text{for any forecast pdf } f,$$

and therefore another criterion is necessary in order to compare these two scoring rules. It seems natural to investigate second-order features, and the following proposition enables us to rank these two scores with respect to their variances.

###### Proposition 1

Let f be the pdf of any given forecast. Let s0 be a proper scoring rule relative to the unobserved truth represented by the variable X. The scoring rules defined above, s∧ and s∨, verify the following variance inequality:

$$\mathbb{V}_Y\big[s^{\wedge}(f,Y)\big] \;\ge\; \mathbb{V}_Y\big[s^{\vee}(f,Y)\big]. \qquad (3)$$

All proofs can be found in the Appendix.

Proposition 1 indicates that if one is ready to view scores as random variables and prefers scores with smaller variances, then s∨ should be favored over s∧. Still, Proposition 1 is general, and treating specific cases will be useful in order to illustrate our strategy.

## 3 Additive Gaussian observation error

In this section, N(μ, σ²) denotes a Gaussian distribution with mean μ and variance σ². Ferro's simplest example is the following:

$$\text{Model (A)} \quad \begin{cases} X \sim \mathcal{N}(\mu_0, \sigma_0^2), \\ Y = X + \mathcal{N}(0, \omega^2), \end{cases}$$

where all Gaussian variables are assumed to be independent; Y is observed, but X is not, X being the hidden truth.

### 3.1 Corrected log scores

For a Gaussian predictive pdf f with mean μ and variance σ², the logarithmic score, defined by

$$s_0(f,x) = \log\sigma + \frac{(x-\mu)^2}{2\sigma^2} + \frac{1}{2}\log 2\pi, \qquad (4)$$

is well suited and has been widely used in the literature. For the additive error Model (A), Ferro's paper clearly showed that the classical logarithmic score is misleading and has to be modified before being computed from the noisy observations. Applying (1) yields the corrected score

$$s^{\wedge}(f,y) = \log\sigma + \frac{(y-\mu)^2 - \omega^2}{2\sigma^2} + \frac{1}{2}\log 2\pi, \qquad (5)$$

which satisfies the important property

$$\mathbb{E}_Y\big[s^{\wedge}(f,Y)\big] = \mathbb{E}_X\big[s_0(f,X)\big] = \log\sigma + \frac{(\mu_0-\mu)^2 + \sigma_0^2}{2\sigma^2} + \frac{1}{2}\log 2\pi. \qquad (6)$$

Since s0 is a proper score with respect to X, the new score s∧ provides the same expectation as if X were observed. Note that the score s∧ never takes advantage of the full knowledge of the distribution of X present in Model (A): only the noise level ω² is used in Equation (5). The reason is that s∧ was built from the conditional law of Y given X.

Applying basic properties of Gaussian conditioning (see the Appendix for details), our score defined by (2) can be written as

$$s^{\vee}(f,y) = \log\sigma + \frac{1}{2\sigma^2}\left\{\frac{\omega^2\sigma_0^2}{\sigma_0^2+\omega^2} + (\bar{y}-\mu)^2\right\} + \frac{1}{2}\log 2\pi, \qquad (7)$$

where $\bar{y} = \frac{\omega^2}{\sigma_0^2+\omega^2}\,\mu_0 + \frac{\sigma_0^2}{\sigma_0^2+\omega^2}\,y$. Note that μ0 and σ0² now appear in (7). This new corrected score integrates the knowledge contained in the conditional pdf of X given Y. As shown in Proposition 1, this added information reduces the variance of this score with respect to s∧.
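The behavior of the two corrections can be checked by simulation. The sketch below (all parameter values arbitrary) simulates Model (A), evaluates s0 from Equation (4) on the hidden truth and s∧ and s∨ from Equations (5) and (7) on the observations, and verifies that the three empirical means agree while the variances are ordered as in Proposition 1:

```python
import math
import random
from statistics import fmean, pvariance

random.seed(1)
mu0, sig0, om = 1.0, 1.0, 1.0   # hidden truth X ~ N(mu0, sig0^2), noise level om (arbitrary)
mu, sig = 0.0, 1.0              # Gaussian forecast f = N(mu, sig^2)
c = 0.5 * math.log(2 * math.pi)

s0, s_hat, s_vee = [], [], []
for _ in range(100_000):
    x = random.gauss(mu0, sig0)                 # hidden truth
    y = x + random.gauss(0.0, om)               # noisy verification
    ybar = (om**2 * mu0 + sig0**2 * y) / (sig0**2 + om**2)   # E[X | Y = y]
    s0.append(math.log(sig) + (x - mu)**2 / (2 * sig**2) + c)                # Eq. (4)
    s_hat.append(math.log(sig) + ((y - mu)**2 - om**2) / (2 * sig**2) + c)   # Eq. (5)
    s_vee.append(math.log(sig)
                 + (om**2 * sig0**2 / (sig0**2 + om**2) + (ybar - mu)**2) / (2 * sig**2)
                 + c)                                                        # Eq. (7)

print(fmean(s0), fmean(s_hat), fmean(s_vee))              # same mean for all three
print(pvariance(s_hat), pvariance(s0), pvariance(s_vee))  # decreasing variances
```

The variance ordering observed here is the chain established in the Appendix: V[s∧] ≥ V[s0] ≥ V[s∨].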

Besides their variances, one may wonder how the distributions of s∧(f,Y) and s∨(f,Y) differ. The following proposition summarizes the distributional features of both scores.

###### Proposition 2

Under the Gaussian additive Model (A), the random variables associated with the corrected log scores defined by (5) and (7) can be written as

$$s^{\wedge}(f,Y) \stackrel{d}{=} a^{\wedge} + b^{\wedge}\chi^2_{\wedge} \quad\text{and}\quad s^{\vee}(f,Y) \stackrel{d}{=} a^{\vee} + b^{\vee}\chi^2_{\vee},$$

where $\stackrel{d}{=}$ means equality in distribution and χ²∧ and χ²∨ represent noncentral chi-squared random variables with one degree of freedom and respective noncentrality parameters

$$\lambda^{\wedge} = \frac{(\mu_0-\mu)^2}{\sigma_0^2+\omega^2} \quad\text{and}\quad \lambda^{\vee} = \frac{(\mu_0-\mu)^2}{\sigma_0^2+\omega^2}\left(\frac{\sigma_0^2+\omega^2}{\sigma_0^2}\right)^2.$$

The explicit expressions of the constants a∧, b∧, a∨, and b∨ can be found in the Appendix. The score s∨ also admits an explicit density; see the Appendix.

Since the variance of a noncentral chi-squared random variable with one degree of freedom and noncentrality parameter λ equals 2(1+2λ), one can check that

$$\frac{\mathbb{V}_Y\big[s^{\wedge}(f,Y)\big]}{\mathbb{V}_Y\big[s^{\vee}(f,Y)\big]} = \frac{1+2\lambda^{\wedge}}{p_0^2 + 2p_0\lambda^{\wedge}}, \quad \text{with } p_0 = \left(\frac{\sigma_0^2}{\sigma_0^2+\omega^2}\right)^2.$$

As already known from Proposition 1, the variance of s∧ is always greater than that of s∨. This explicit variance formula highlights the key role of p0 in comparing the two corrected scores. For cases with a large noise error ω², s∨ should be preferred over s∧ in terms of variance reduction.

Having explicit expressions of the score distributions in terms of chi-squared densities, one can visually compare their densities. In Figure 1, the parameters of the forecast pdf f and of the hidden truth X are fixed arbitrarily, and different values are chosen for the noise variance ω² of Y. The blue, red, and green curves represent the density of the three variables s∧(f,Y), s∨(f,Y), and s0(f,X), respectively. The vertical orange line corresponds to the common mean of the three plotted densities,

$$\mathbb{E}_Y\big[s^{\wedge}(f,Y)\big] = \mathbb{E}_Y\big[s^{\vee}(f,Y)\big] = \mathbb{E}_X\big[s_0(f,X)\big] = \log\sigma + \frac{(\mu_0-\mu)^2+\sigma_0^2}{2\sigma^2} + \frac{1}{2}\log 2\pi;$$

see Equation (6). As expected from Proposition 1, our proposed score (red pdf) is more concentrated around the orange vertical line than is Ferro's score (blue pdf). We also note that properness is a concept based on averaging scores, but the asymmetry of these three noncentral chi-squared densities may challenge an ordering with respect to the mean: the average may not be the best feature to characterize noncentral chi-squared pdfs.

### 3.2 Corrected CRPS

Besides the logarithmic score, the CRPS is another classical proper scoring rule used in weather forecast centers. It is defined as

$$c_0(f,x) = \mathbb{E}|Z-x| - \frac{1}{2}\mathbb{E}|Z-Z'|, \qquad (8)$$

where Z and Z' are iid random variables with continuous pdf f. The CRPS can be rewritten as

$$c_0(f,x) = x + 2\,\mathbb{E}(Z-x)^{+} - 2\,\mathbb{E}\big(Z\,\bar{F}(Z)\big),$$

where (Z−x)⁺ represents the positive part of Z−x and F̄ corresponds to the survival function associated with the cumulative distribution function (cdf) F. For example, the CRPS for a Gaussian forecast with parameters μ and σ is equal to

$$c_0(f,x) = x + 2\sigma\left[\phi\left(\frac{x-\mu}{\sigma}\right) - \frac{x-\mu}{\sigma}\,\bar{\Phi}\left(\frac{x-\mu}{\sigma}\right)\right] - \left[\mu + \frac{\sigma}{\sqrt{\pi}}\right], \qquad (9)$$

where φ and Φ̄ are the pdf and survival function of a standard normal distribution (Gneiting05; Taillardat16).
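The closed form (9) can be checked against the kernel definition (8) by simulation; the following sketch does so for a Gaussian forecast with arbitrary parameter values:

```python
import math
import random

random.seed(2)

def phi(t):
    return math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi)   # standard normal pdf

def Phi_bar(t):
    return 0.5 * math.erfc(t / math.sqrt(2))                 # standard normal survival

def crps_gauss(mu, sigma, x):
    # Closed form (9) for a Gaussian forecast N(mu, sigma^2)
    z = (x - mu) / sigma
    return x + 2 * sigma * (phi(z) - z * Phi_bar(z)) - (mu + sigma / math.sqrt(math.pi))

# Monte Carlo check of the kernel representation (8): E|Z - x| - 0.5 E|Z - Z'|
mu, sigma, x = 0.3, 1.2, 0.8
zs = [random.gauss(mu, sigma) for _ in range(200_000)]
zs2 = [random.gauss(mu, sigma) for _ in range(200_000)]
mc = (sum(abs(z - x) for z in zs) / len(zs)
      - 0.5 * sum(abs(u - v) for u, v in zip(zs, zs2)) / len(zs))
print(crps_gauss(mu, sigma, x), mc)  # the two estimates agree closely
```

The same check can be repeated for any (μ, σ, x) triple; only Monte Carlo noise separates the two values.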

###### Proposition 3

Under the Gaussian additive Model (A), the random variable associated with the corrected CRPS defined by (2) can be written as

$$c^{\vee}(f,\bar{Y}) = \bar{Y} + 2\sigma_\omega\left[\phi\left(\frac{\bar{Y}-\mu}{\sigma_\omega}\right) - \frac{\bar{Y}-\mu}{\sigma_\omega}\,\bar{\Phi}\left(\frac{\bar{Y}-\mu}{\sigma_\omega}\right)\right] - \left[\mu + \frac{\sigma}{\sqrt{\pi}}\right], \qquad (10)$$

where $\sigma_\omega^2 = \sigma^2 + \frac{\omega^2\sigma_0^2}{\sigma_0^2+\omega^2}$ and the random variable Ȳ follows a Gaussian pdf with mean μ0 and variance σ0⁴/(σ0²+ω²).

In Figure 2, the distribution of the CRPS is plotted in three cases. The first corresponds to the ideal case where the underlying true process X is used as verification data. The second case is the realistic one, with the observation process Y used as verifying data. The third case corresponds to the score corrected under the conditional distribution of X given Y, from Equation (10). Density estimates are shown for different values of the noise variance ω². The CRPS evaluated on Y is less centered than the two others. One can notice the benefit of the proposed correction in terms of centering on the mean value and also in reducing the variance of the score. Indeed, the corrected CRPS shows the narrowest distribution around its mean.

## 4 Multiplicative gamma distributed observation error

The Gaussian assumption is appropriate when dealing with averages, for example, mean temperatures; however, the normal hypothesis cannot be justified for positive and skewed variables such as precipitation intensities. An often-used alternative in such cases is the gamma distribution, which works fairly well in practice to represent the bulk of rainfall intensities. Hence, we assume in this section that the true but unobserved X now follows a gamma distribution with parameters α0 and β0:

$$f_X(x) = \frac{\beta_0^{\alpha_0}}{\Gamma(\alpha_0)}\, x^{\alpha_0-1}\exp(-\beta_0 x), \quad \text{for } x>0.$$

In the Gaussian case, an additive model was used to link the hidden truth and the observational vector. For positive random variables such as precipitation, Gaussian additive models cannot be used to introduce noise. Instead, we prefer a multiplicative model of the type

$$\text{Model (B)} \quad \begin{cases} X \sim \Gamma(\alpha_0, \beta_0), \\ Y = X \times \epsilon, \end{cases} \qquad (11)$$

where ε is a positive random variable independent of X. To make computations feasible, we model the error ε with an inverse gamma pdf with parameters a and b:

$$f_\epsilon(u) = \frac{b^{a}}{\Gamma(a)}\, u^{-a-1}\exp(-b/u), \quad \text{for } u>0.$$

The basic conjugacy properties of such gamma and inverse gamma distributions allow us to easily derive the conditional pdf of X given Y=y, namely a gamma distribution with parameters α0+a and β0+b/y.

### 4.1 Corrected log score

To compute a log score, we need to model the forecast pdf. In this section we consider a gamma distribution with parameters α and β for the prediction. With obvious notations, the logarithmic score for this forecast becomes

$$s_0(f,x) = (1-\alpha)\log x + \beta x - \alpha\log\beta + \log\Gamma(\alpha). \qquad (12)$$
###### Proposition 4

Under the gamma multiplicative Model (B), the random variable associated with the corrected log score defined by (2) and computed from (12) can be expressed as

$$s^{\vee}(f,y) = (1-\alpha)\Big(\psi(\alpha_0+a) - \log(\beta_0 + b/y)\Big) + \beta\,\frac{\alpha_0+a}{\beta_0+b/y} - \alpha\log\beta + \log\Gamma(\alpha), \qquad (13)$$

where ψ represents the digamma function, defined as the logarithmic derivative of the gamma function, namely, ψ(x) = Γ′(x)/Γ(x).
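As a sanity check of (13), the Monte Carlo sketch below (arbitrary parameter values) simulates Model (B) and verifies that the corrected score evaluated on the noisy Y's has the same mean as the uncorrected score evaluated on the hidden X's. Since the digamma function is not in the Python standard library, it is approximated by a centered difference of lgamma, which is adequate for this check:

```python
import math
import random

random.seed(3)
a0, b0 = 2.0, 1.0     # truth X ~ Gamma(shape a0, rate b0)        (arbitrary)
a, b = 3.0, 2.0       # error eps ~ InvGamma(a, b)                (arbitrary)
al, be = 2.0, 1.0     # gamma forecast f with shape al and rate be (arbitrary)

def digamma(x, h=1e-5):
    # Numerical digamma via centered difference of lgamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def s0(x):
    # Log score (12) for the gamma forecast
    return (1 - al) * math.log(x) + be * x - al * math.log(be) + math.lgamma(al)

def s_vee(y):
    # Corrected log score (13); posterior X | Y=y ~ Gamma(a0 + a, b0 + b/y)
    rate = b0 + b / y
    return ((1 - al) * (digamma(a0 + a) - math.log(rate))
            + be * (a0 + a) / rate - al * math.log(be) + math.lgamma(al))

n = 100_000
m0 = m_vee = 0.0
for _ in range(n):
    x = random.gammavariate(a0, 1 / b0)        # gammavariate takes (shape, scale)
    eps = 1 / random.gammavariate(a, 1 / b)    # inverse gamma draw
    y = x * eps                                # Model (B)
    m0 += s0(x) / n
    m_vee += s_vee(y) / n
print(m0, m_vee)  # both Monte Carlo means target E_X[s0(f, X)]
```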

In the multiplicative case, the variance of the verification data can decrease and hence affect the performance of the associated score. Figure 3 shows the distributions of the three log scores presented in this section. One can see the issue with the uncorrected score, whose distribution is not centered around the target mean and whose variance decreases. Conclusions similar to those of the previous section can be drawn in terms of the benefit of the correction of the score.

### 4.2 Corrected CRPS

For a gamma forecast with parameters α and β, whose log score is given by (12), the corresponding CRPS (see, e.g., Taillardat16; Scheuerer15) is equal to

$$c_0(f,x) = x - \frac{\alpha}{\beta} - \frac{1}{\beta\, B(1/2,\alpha)} + 2\left[\frac{x}{\beta}\, f(x) + \left(\frac{\alpha}{\beta}-x\right)\bar{F}(x)\right], \qquad (14)$$

where B denotes the beta function and f and F̄ are the pdf and survival function of the gamma forecast.
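As a numerical check, the sketch below implements a closed form of the gamma CRPS, algebraically equivalent to the formula of Scheuerer15, and compares it with the kernel representation (8). The shape is set to 2 (an arbitrary choice) so that the gamma survival function has an elementary expression:

```python
import math
import random

random.seed(4)
al, be = 2.0, 1.0   # gamma forecast, shape al = 2, rate be = 1 (arbitrary)

def pdf(x):
    return be**al / math.gamma(al) * x**(al - 1) * math.exp(-be * x)

def F_bar(x):
    # Survival function of Gamma(2, be): valid for shape 2 only
    return math.exp(-be * x) * (1 + be * x)

def crps_gamma(x):
    # Closed form: x - al/be - 1/(be*B(1/2, al)) + 2[(x/be) f(x) + (al/be - x) F_bar(x)]
    B = math.gamma(0.5) * math.gamma(al) / math.gamma(0.5 + al)
    return x - al / be - 1 / (be * B) + 2 * ((x / be) * pdf(x) + (al / be - x) * F_bar(x))

x = 1.0
zs = [random.gammavariate(al, 1 / be) for _ in range(200_000)]
zs2 = [random.gammavariate(al, 1 / be) for _ in range(200_000)]
mc = (sum(abs(z - x) for z in zs) / len(zs)
      - 0.5 * sum(abs(u - v) for u, v in zip(zs, zs2)) / len(zs))
print(crps_gamma(x), mc)  # closed form vs kernel representation (8)
```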
###### Proposition 5

Under the gamma multiplicative Model (B), the random variable associated with the corrected CRPS defined by (2) can be expressed from (8) as

$$\begin{aligned} c^{\vee}(f,y) ={}& \frac{\alpha_0+a}{\beta_0+b/y} - \frac{\alpha}{\beta} - \frac{1}{\beta\, B(1/2,\alpha)} + \frac{2\,\beta^{\alpha-1}\,(\beta_0+b/y)^{\alpha_0+a}}{B(\alpha,\alpha_0+a)\,\big(\beta+\beta_0+b/y\big)^{\alpha+\alpha_0+a}} \\ &+ \frac{2\,(\beta_0+b/y)^{\alpha_0+a}}{\Gamma(\alpha)\,\Gamma(\alpha_0+a)} \int_0^{+\infty} \left(\frac{\alpha}{\beta}-x\right)\Gamma(\alpha,\beta x)\, x^{\alpha_0+a-1}\, e^{-(\beta_0+b/y)x}\, dx, \end{aligned}$$

where Γ(α, ·) denotes the upper incomplete gamma function. Details of the computations are found in the Appendix.

## 5 Joint distribution of errors in forecasts and observations

Model (A) was useful for understanding the role of observational errors, but its simplicity limits its application in practice. In particular, it does not incorporate the fundamental idea that the forecast and observational vectors are both driven by some hidden, indirectly observed process, for example, the state of the atmosphere. To illustrate this joint distribution of forecasts and observations, we recall Table 2 (p. 516) of Ehm16, who studied the following type of Gaussian additive model:

$$\text{Model (C)} \quad \begin{cases} X \sim \mathcal{N}(\mu_0, \sigma_0^2), \\ Y = X + \mathcal{N}(0, \omega^2), \\ Z = X + \mathcal{N}(0, \sigma_Z^2), \end{cases}$$

where all Gaussian variables are assumed to be independent; Y and Z are observed, but X is not, X being the hidden truth. In the study by Ehm16, the variance parameters were set equal to one. If the hidden variable X (one can think of "the state of the atmosphere") were perfectly known, then Z would contain (stochastically) the same information as X, which happens when σ_Z = 0; the term ideal forecaster was introduced to describe Z in this case. The climatological forecaster corresponds to the unconditional distribution, that is, a centered normal distribution whose variance is the unconditional variance of Z. The main differences between Model (A) and Model (C) are that the forecast Z is now linked to the hidden truth X and that an error term is added to the forecast.

Model (C) introduces the fact that Y and Z can both be imperfect, that is, affected by error terms, and that both depend on the hidden X. In such a context, one can wonder whether scores can be modified to handle this joint information, not only for Model (C) but in the general context where the conditional distributions of Y and Z given X are assumed to be known. Our strategy is to extend Definition (2) by incorporating the information about X contained in Z. This leads us to define

$$s^{\vee}(f,(y,z)) = \mathbb{E}\big[s_0(f,X) \,\big|\, Y=y, Z=z\big]. \qquad (16)$$

By construction, the random variable s∨(f,(Y,Z)) satisfies

$$\mathbb{E}_{(Y,Z)}\big[s^{\vee}(f,(Y,Z))\big] = \mathbb{E}_X\big[s_0(f,X)\big] \quad \text{for any forecast pdf } f.$$
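This mean-matching property can be verified by simulation under Model (C). The sketch below (unit variances, an arbitrary choice) computes s∨(f,(y,z)) from the Gaussian posterior of X given (Y, Z), whose precision is the sum of the prior and observation precisions, and compares its empirical mean with that of s0(f,X):

```python
import math
import random

random.seed(5)
mu0, sig0, om, sigz = 0.0, 1.0, 1.0, 1.0   # Model (C) with unit variances (arbitrary)
mu, sig = 0.0, 1.0                          # Gaussian forecast f = N(mu, sig^2)
c = 0.5 * math.log(2 * math.pi)

prec = 1 / sig0**2 + 1 / om**2 + 1 / sigz**2   # posterior precision of X given (Y, Z)
pvar = 1 / prec                                # posterior variance

n = 100_000
m0 = m_joint = 0.0
for _ in range(n):
    x = random.gauss(mu0, sig0)          # hidden truth
    y = x + random.gauss(0.0, om)        # observation
    z = x + random.gauss(0.0, sigz)      # forecast draw
    pmean = pvar * (mu0 / sig0**2 + y / om**2 + z / sigz**2)   # E[X | Y=y, Z=z]
    m0 += (math.log(sig) + (x - mu)**2 / (2 * sig**2) + c) / n           # s0 on truth
    m_joint += (math.log(sig) + (pvar + (pmean - mu)**2) / (2 * sig**2) + c) / n  # Eq. (16)
print(m0, m_joint)  # both estimate E_X[s0(f, X)]
```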

At this stage, we highlight two points.

First, our goal is to forecast X, the hidden truth, and consequently the forecast pdf f used in s∨ is ideal when it is equal to the pdf of X. In practice, whether the distributional forecast produced by a weather prediction center aims at forecasting X or Y may not be clear. In data assimilation, however, this distinction is clear in the sense that the object of interest in Kalman filtering techniques is the hidden state X.

Second, the definition in (16) depends on the forecast random variable. If two weather prediction centers produce two different forecast variables, say Z⁽¹⁾ and Z⁽²⁾, then three corrected scores could be computed:

$$\begin{aligned} s^{\vee}(f,(y,z^{(1)})) &= \mathbb{E}\big[s_0(f,X)\,\big|\, Y=y,\, Z^{(1)}=z^{(1)}\big],\\ s^{\vee}(f,(y,z^{(2)})) &= \mathbb{E}\big[s_0(f,X)\,\big|\, Y=y,\, Z^{(2)}=z^{(2)}\big],\\ s^{\vee}(f,(y,z^{(1)},z^{(2)})) &= \mathbb{E}\big[s_0(f,X)\,\big|\, Y=y,\, Z^{(1)}=z^{(1)},\, Z^{(2)}=z^{(2)}\big]. \end{aligned}$$

These three scores, as well as s∨(f,y), will be proper with respect to the hidden variable X. This leads to the question of how to compare them. The following proposition gives some suggestions.

###### Proposition 6

Let f be the pdf of any given forecast. Let s0 be a proper scoring rule relative to the unobserved truth represented by the variable X. The random variables defined from the scoring rules s∨(f,Y) and s∨(f,(Y,Z)) verify the following variance inequality:

$$\mathbb{V}_Y\big[s^{\vee}(f,Y)\big] \;\ge\; \mathbb{V}_{(Y,Z)}\big[s^{\vee}(f,(Y,Z))\big]. \qquad (17)$$

Basically, this proposition tells us that whenever the variable Z contains some information about X, the conditional score "squeezes out" such knowledge about X and consequently reduces the variance of the corrected score. If Z does not bring information about X, that is, if Z is independent of X given Y, then s∨(f,(y,z)) = s∨(f,y). If two different forecast variables, say Z⁽¹⁾ and Z⁽²⁾, are available, then, using the same type of argument, one can show that the corrected score based on the concatenation of Z⁽¹⁾ and Z⁽²⁾ has a smaller variance:

$$\mathbb{V}_{(Y,Z^{(i)})}\big[s^{\vee}(f,(Y,Z^{(i)}))\big] \;\ge\; \mathbb{V}_{(Y,Z^{(1)},Z^{(2)})}\big[s^{\vee}(f,(Y,Z^{(1)},Z^{(2)}))\big], \quad \text{for } i\in\{1,2\}.$$

To illustrate this proposition, we connect Model (C) and the EIV models (see, e.g., Fuller87), and we derive the corrected log score in a multivariate Gaussian additive context.

### 5.1 Error-in-variable context

In this section, we combine and extend Model (A) and Model (C) into a multivariate context:

$$\text{EIV} \quad \begin{cases} X \sim \mathcal{N}(\boldsymbol{\mu}_X, \Sigma_X), \\ Y = X + \boldsymbol{\epsilon}_Y, \\ Z = X + \boldsymbol{\epsilon}_Z, \end{cases} \qquad (18)$$

where ε_Y ∼ N(α, Δ) and ε_Z ∼ N(β, Ω) represent Gaussian model errors, and these error terms are independent of X and of each other. Basically, this system of equations tells us that the forecast Z aims at mimicking the truth described by X; but the forecast is imperfect, and the forecast error ε_Z has to be added. The same interpretation can be made for the observation Y. Besides the multivariate aspect, the main difference between Model (C) and Model (18) is that the forecast may not be ideal, because the means and covariances of ε_Y and ε_Z can differ. This implies that a corrected score based on Y and Z has to take these discrepancies into account. Model (18) can be viewed as a member of the EIV class, which encompasses a wide range of models (see, e.g., Fuller87). Our specific model is close to the models studied in the context of climate detection and attribution (see, e.g., Hannart14; Ribes16).

Given the observational vector Y and the forecast Z, one can compute the conditional distribution of X given (Y, Z) (this can also be interpreted in a Bayesian framework). With respect to the EIV system defined by (18), the solution is

$$[X \mid Y, Z] \;\sim\; \mathcal{N}\Big(\bar{X},\ \big(\Delta^{-1} + \Omega^{-1} + \Sigma_X^{-1}\big)^{-1}\Big), \qquad (19)$$

where the mean X̄ is simply a weighted average that takes into account the information on Y and Z:

$$\bar{X} = \big(\Delta^{-1}+\Omega^{-1}+\Sigma_X^{-1}\big)^{-1}\Big[\Delta^{-1}(Y-\boldsymbol{\alpha}) + \Omega^{-1}(Z-\boldsymbol{\beta}) + \Sigma_X^{-1}\boldsymbol{\mu}_X\Big]. \qquad (20)$$
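Equations (19)-(20) amount to precision-weighted averaging of the observation, the forecast, and the prior. A minimal numpy sketch (all parameter values hypothetical) is:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_X = np.array([1.0, -0.5])                    # prior mean of the hidden truth X
Sig_X = np.array([[1.0, 0.3], [0.3, 1.0]])      # prior covariance of X
alpha, Delta = np.zeros(2), 0.5 * np.eye(2)     # eps_Y ~ N(alpha, Delta)
beta, Omega = np.zeros(2), 0.8 * np.eye(2)      # eps_Z ~ N(beta, Omega)

def posterior(y, z):
    # Eqs. (19)-(20): combine Y, Z, and the prior through their precisions
    prec = np.linalg.inv(Delta) + np.linalg.inv(Omega) + np.linalg.inv(Sig_X)
    cov = np.linalg.inv(prec)
    mean = cov @ (np.linalg.inv(Delta) @ (y - alpha)
                  + np.linalg.inv(Omega) @ (z - beta)
                  + np.linalg.inv(Sig_X) @ mu_X)
    return mean, cov

x = rng.multivariate_normal(mu_X, Sig_X)        # hidden truth
y = x + rng.multivariate_normal(alpha, Delta)   # observation
z = x + rng.multivariate_normal(beta, Omega)    # forecast
m, C = posterior(y, z)
print(m, np.diag(C))  # posterior mean and (shrunken) posterior variances
```

Note that the posterior covariance does not depend on the realized (y, z); conditioning always shrinks the marginal variances below those of the prior.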
###### Proposition 7

In the context of the EIV model (18), the corrected version of the log score, generalizing (7), is

 s∨(f, X̄) =

where tr denotes the trace operator and f is any forecast density.

Similar to previous sections, one could investigate the analytical distribution of the score s∨(f, X̄), which corresponds to the trace of a noncentral Wishart distribution. However, the general expression of this distribution is hardly tractable (Mathai82), although it can be expressed as a generalized chi-squared distribution under particular conditions (Pham15).

## 6 Discussion and conclusion

Building on Ferro17's elegant representation of imperfect observations, we have quantified, in terms of variances and even distributions, the need to account for the error associated with the verification observation when evaluating probabilistic forecasts with scores. An additive framework and a multiplicative framework have been studied in detail to account for the error associated with verification data, and an additional setup is proposed to account for both observational and forecast errors. Both setups involve a probabilistic model for the errors and a probabilistic description of the underlying non-observed physical process. Although we look only at idealized cases where the parameters of the involved distributions are assumed to be known, this approach enables us to understand the importance of accounting for the error associated with the verification data. Moreover, the study raises the important point of investigating the distribution of scores when the verification data are considered to be a random variable. Indeed, investigating the means of scores may not provide sufficient information to compare the discriminative capabilities of scores. One could also choose to take into account the uncertainty associated with the inference of distribution parameters. In this case, a Bayesian setup could elegantly integrate the different hierarchies of knowledge and a priori information.

## Acknowledgments

We thank Aurélien Ribes for his helpful comments and discussions. We also thank the Office of Science and Technology of the Embassy of France in the United States, Washington, DC, for supporting our collaboration through the initiative Make Our Planet Great Again. This material is based in part on work supported by the U.S. Department of Energy, Office of Science, under contract DE-AC02-06CH11357.


## Proof of Proposition 1:

For any random variable, say U, its mean can be written conditionally on the random variable Y in the following way:

$$\mathbb{E}[U] = \mathbb{E}\big[\mathbb{E}[U \mid Y=y]\big].$$

In our case, the variable U = s0(f,X), and E[U | Y=y] = s∨(f,y). This gives $\mathbb{E}_X[s_0(f,X)] = \mathbb{E}_Y[s^{\vee}(f,Y)]$. To show inequality (3), we use the classical variance decomposition

$$\mathbb{V}[U] = \mathbb{V}\big[\mathbb{E}[U\mid Y=y]\big] + \mathbb{E}\big[\mathbb{V}[U\mid Y=y]\big].$$

With our notations, we have

$$\begin{aligned} \mathbb{V}\big[s_0(f,X)\big] &= \mathbb{V}\big[\mathbb{E}[s_0(f,X)\mid Y=y]\big] + \mathbb{E}\big[\mathbb{V}[s_0(f,X)\mid Y=y]\big] \\ &= \mathbb{V}\big[s^{\vee}(f,Y)\big] + \text{a non-negative term}, \end{aligned}$$

and hence

$$\mathbb{V}\big[s_0(f,X)\big] \ge \mathbb{V}\big[s^{\vee}(f,Y)\big]. \qquad (21)$$

To finish our proof, we use the same variance decomposition but in the following form:

$$\mathbb{V}[V] = \mathbb{V}\big[\mathbb{E}[V\mid X=x]\big] + \mathbb{E}\big[\mathbb{V}[V\mid X=x]\big].$$

Taking V = s∧(f,Y), and recalling from (1) that E[s∧(f,Y) | X=x] = s0(f,x), gives

$$\begin{aligned} \mathbb{V}\big[s^{\wedge}(f,Y)\big] &= \mathbb{V}\big[\mathbb{E}[s^{\wedge}(f,Y)\mid X=x]\big] + \mathbb{E}\big[\mathbb{V}[s^{\wedge}(f,Y)\mid X=x]\big] \\ &= \mathbb{V}\big[s_0(f,X)\big] + \text{a non-negative term}. \end{aligned}$$

Coupling this with inequality (21) provides

$$\mathbb{V}\big[s^{\wedge}(f,Y)\big] \;\ge\; \mathbb{V}\big[s_0(f,X)\big] \;\ge\; \mathbb{V}\big[s^{\vee}(f,Y)\big].$$

Inequality (3) follows.

## Proof of Equation (7):

To express the score proposed in (2), one needs to derive the conditional distribution of X given Y from Model (A). More precisely, the Gaussian conditional distribution of X given Y=y is equal to

$$[X \mid Y=y] \;\sim\; \mathcal{N}\left(\bar{y},\ \frac{\omega^2\sigma_0^2}{\sigma_0^2+\omega^2}\right),$$

where ȳ is a weighted sum that updates the prior information about X with the observation y,

$$\bar{Y} = \frac{\omega^2}{\sigma_0^2+\omega^2}\,\mu_0 + \frac{\sigma_0^2}{\sigma_0^2+\omega^2}\,Y \;\sim\; \mathcal{N}\left(\mu_0,\ \sigma_0^2 \times \frac{\sigma_0^2}{\sigma_0^2+\omega^2}\right).$$

Combining this information with Equations (2) and (4) leads to

$$\begin{aligned} s^{\vee}(f,y) &= \log\sigma + \frac{1}{2\sigma^2}\,\mathbb{E}\big[(X-\mu)^2 \mid Y=y\big] + \frac{1}{2}\log 2\pi \\ &= \log\sigma + \frac{1}{2\sigma^2}\Big\{\mathbb{V}[X\mid Y=y] + \big(\mathbb{E}[X\mid Y=y]-\mu\big)^2\Big\} + \frac{1}{2}\log 2\pi \\ &= \log\sigma + \frac{1}{2\sigma^2}\left\{\frac{\omega^2\sigma_0^2}{\sigma_0^2+\omega^2} + \left(\frac{\omega^2}{\sigma_0^2+\omega^2}\,\mu_0 + \frac{\sigma_0^2}{\sigma_0^2+\omega^2}\,y - \mu\right)^2\right\} + \frac{1}{2}\log 2\pi. \end{aligned}$$

By construction, we have

$$\mathbb{E}_Y\big[s^{\vee}(f,Y)\big] = \mathbb{E}_{\bar{Y}}\big[s^{\vee}(f,\bar{Y})\big] = \mathbb{E}_X\big[s_0(f,X)\big].$$

This means that, to obtain the right score value, we can first compute ȳ as the best estimator of the unobserved x and then use it in the corrected score s∨.

## Proof of Proposition 2:

For Model (A), both random variables Y and Ȳ are normally distributed with the same mean μ0 but different variances, σ0²+ω² and σ0⁴/(σ0²+ω²), respectively. Since a noncentral chi-squared distribution can be defined from the square of a Gaussian random variable, it follows from (5) and (7) that

$$s^{\wedge}(f,Y) \stackrel{d}{=} a^{\wedge} + b^{\wedge}\chi^2_{\wedge} \quad\text{and}\quad s^{\vee}(f,Y) \stackrel{d}{=} a^{\vee} + b^{\vee}\chi^2_{\vee},$$

where $\stackrel{d}{=}$ means equality in distribution,

$$a^{\wedge} = \log\sigma - \frac{\omega^2}{2\sigma^2} + \frac{1}{2}\log 2\pi \quad\text{and}\quad a^{\vee} = \log\sigma + \frac{1}{2\sigma^2}\,\frac{\omega^2\sigma_0^2}{\sigma_0^2+\omega^2} + \frac{1}{2}\log 2\pi,$$

$$b^{\wedge} = \frac{\sigma_0^2+\omega^2}{2\sigma^2} \quad\text{and}\quad b^{\vee} = \frac{\sigma_0^2+\omega^2}{2\sigma^2}\left(\frac{\sigma_0^2}{\sigma_0^2+\omega^2}\right)^2,$$

and χ²∧ and χ²∨ represent noncentral chi-squared random variables with one degree of freedom and respective noncentrality parameters

$$\lambda^{\wedge} = \frac{(\mu_0-\mu)^2}{\sigma_0^2+\omega^2} \quad\text{and}\quad \lambda^{\vee} = \frac{(\mu_0-\mu)^2}{\sigma_0^2+\omega^2}\left(\frac{\sigma_0^2+\omega^2}{\sigma_0^2}\right)^2.$$

## Proof of Proposition 3:

To compute the corrected CRPS, one needs to calculate the conditional expectation of c0(f,X) under the distribution of X given Y=y. We first compute this expectation for a generic X ∼ N(a, b²) and then substitute a by ȳ, with Ȳ following its distribution with mean μ0. From Equation (9) we obtain

$$\mathbb{E}\big[c_0(f,X)\big] = \mathbb{E}[X] + 2\sigma\,\mathbb{E}\left[\phi\left(\frac{X-\mu}{\sigma}\right) - \frac{X-\mu}{\sigma}\,\bar{\Phi}\left(\frac{X-\mu}{\sigma}\right)\right] - \left[\mu + \frac{\sigma}{\sqrt{\pi}}\right].$$

If X follows a normal distribution with mean a and variance b², that is, X = a + bZ with Z a standard normal random variable, then we can define the continuous function h by h(z) = Φ̄(λ + (b/σ)z), with λ = (a−μ)/σ. We then apply Stein's lemma (Stein81), which states that E[Z h(Z)] = E[h′(Z)] because Z is a standard normal random variable. It follows, with these notations, that

$$\begin{aligned} \mathbb{E}\left[\frac{X-\mu}{\sigma}\,\bar{\Phi}\left(\frac{X-\mu}{\sigma}\right)\right] &= \lambda\,\mathbb{E}\Big[\bar{\Phi}\big(\lambda + \tfrac{b}{\sigma}Z\big)\Big] + \tfrac{b}{\sigma}\,\mathbb{E}\big[Z\,h(Z)\big] \\ &= \lambda\,\mathbb{E}\Big(\mathbb{P}\big[Z' > \lambda + \tfrac{b}{\sigma}Z\big]\Big) + \tfrac{b}{\sigma}\,\mathbb{E}\big[h'(Z)\big], \quad \text{where } Z' \text{ has a standard normal distribution}, \\ &= \lambda\,\mathbb{P}\big[Z' - \tfrac{b}{\sigma}Z > \lambda\big] - \tfrac{b^2}{\sigma^2}\,\mathbb{E}\left[\phi\left(\frac{a+bZ-\mu}{\sigma}\right)\right], \end{aligned}$$

with

$$\lambda\,\mathbb{P}\Big[Z' - \tfrac{b}{\sigma}Z > \lambda\Big] = \lambda\,\bar{\Phi}\left[\frac{\lambda}{\sqrt{1+b^2/\sigma^2}}\right] = \frac{a-\mu}{\sigma}\,\bar{\Phi}\left[\frac{a-\mu}{\sqrt{\sigma^2+b^2}}\right]. \qquad (23)$$

Then

$$\mathbb{E}\left[\frac{X-\mu}{\sigma}\,\bar{\Phi}\left(\frac{X-\mu}{\sigma}\right)\right] = \frac{a-\mu}{\sigma}\,\bar{\Phi}\left[\frac{a-\mu}{\sqrt{\sigma^2+b^2}}\right] - \frac{b^2}{\sigma^2}\,\mathbb{E}\left[\phi\left(\frac{a+bZ-\mu}{\sigma}\right)\right],$$

and

$$\mathbb{E}\left[\phi\left(\frac{a+bZ-\mu}{\sigma}\right)\right] = \frac{1}{\sqrt{2\pi}}\,\mathbb{E}\left[\exp\left(-\frac{1}{2}\left(\frac{a+bZ-\mu}{\sigma}\right)^2\right)\right] = \frac{1}{\sqrt{2\pi}}\,\mathbb{E}\left[\exp\left(-\frac{b^2}{2\sigma^2}\left(Z+\frac{a-\mu}{b}\right)^2\right)\right],$$

where $\big(Z + \frac{a-\mu}{b}\big)^2$ follows a noncentral chi-squared distribution with one degree of freedom and noncentrality parameter λ = ((a−μ)/b)², whose moment generating function is

$$G\Big(t;\, k=1,\, \lambda=\big(\tfrac{a-\mu}{b}\big)^2\Big) = \frac{\exp\left(\frac{\lambda t}{1-2t}\right)}{(1-2t)^{k/2}}.$$

It follows that

$$\mathbb{E}\left[\phi\left(\frac{a+bZ-\mu}{\sigma}\right)\right] = \frac{1}{\sqrt{2\pi}}\,G\Big(t=-\tfrac{b^2}{2\sigma^2};\, k=1,\, \lambda=\big(\tfrac{a-\mu}{b}\big)^2\Big) = \frac{\sigma}{\sqrt{\sigma^2+b^2}}\,\phi\left(\frac{a-\mu}{\sqrt{\sigma^2+b^2}}\right).$$

Substituting these two expectations into the expansion of E[c0(f,X)] above, and setting a = ȳ and b² = ω²σ0²/(σ0²+ω²), so that σ² + b² = σ_ω², yields Equation (10).