Robust Scenario Interpretation from Multi-model Prediction Efforts

08/09/2022
by Yuanhao Lu et al.

Multi-model prediction efforts in infectious disease and climate modeling involve multiple teams independently producing projections under a set of scenarios. Often these scenarios correspond to the presence or absence of a future decision, e.g., no vaccinations (scenario A) vs. vaccinations available (scenario B). Each model submits probabilistic projections for each scenario. Obtaining a confidence interval on the impact of the decision (e.g., the number of deaths averted) is important for decision making. However, obtaining tight bounds from the probabilistic projections for the individual scenarios alone is difficult, because the joint probability distribution across scenarios is not known. Further, the models may be unable to generate this joint distribution for various reasons, including the need to rewrite simulations and the associated storage and transfer requirements. Without requiring additional work from the submitting teams, we aim to estimate a non-trivial bound on the outcomes attributable to the decision variable. We first prove, under a key assumption, that an α-confidence interval on the difference between scenario predictions can be obtained from the quantiles of the individual predictions alone. We then show how to estimate a confidence interval after relaxing that assumption. We apply our approach to estimate confidence intervals on the reduction in cases, deaths, and hospitalizations due to vaccination, based on model submissions to the US Scenario Modeling Hub.
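
The abstract does not spell out the construction, but the sketch below illustrates the underlying idea under one plausible reading of the "key assumption": a comonotone coupling, in which both scenario outcomes are driven by the same underlying randomness. All function names, the coupling choice, and the synthetic quantiles here are illustrative assumptions, not the paper's actual method; the second function shows the assumption-free (but wider) union-bound interval that quantile submissions always support.

```python
import numpy as np
from scipy.stats import lognorm

def difference_interval(q_levels, q_a, q_b, alpha=0.05):
    """Interval on the scenario difference A - B from marginal quantiles.

    Assumes the two scenario outcomes are comonotone: driven by the same
    underlying randomness U, with A = Q_A(U) and B = Q_B(U).  Then
    A - B = Q_A(U) - Q_B(U), and when that difference is monotone in U,
    matched-quantile differences trace the distribution of A - B.
    """
    q_levels = np.asarray(q_levels)
    diff = np.asarray(q_a) - np.asarray(q_b)      # matched-quantile differences
    lo = np.interp(alpha / 2, q_levels, diff)
    hi = np.interp(1 - alpha / 2, q_levels, diff)
    return min(lo, hi), max(lo, hi)               # guard if Q_A - Q_B decreases

def difference_interval_union_bound(q_levels, q_a, q_b, alpha=0.05):
    """Assumption-free but wider interval via a union bound: if A lies in
    I_A with probability >= 1 - alpha/2 and B lies in I_B with probability
    >= 1 - alpha/2, then A - B lies in [min I_A - max I_B, max I_A - min I_B]
    with probability >= 1 - alpha, for any joint distribution."""
    q_levels = np.asarray(q_levels)
    a_lo, a_hi = np.interp([alpha / 4, 1 - alpha / 4], q_levels, q_a)
    b_lo, b_hi = np.interp([alpha / 4, 1 - alpha / 4], q_levels, q_b)
    return a_lo - b_hi, a_hi - b_lo

# Toy usage with synthetic lognormal quantiles at the 23 levels used by
# Scenario Modeling Hub submissions (0.01, 0.025, 0.05, ..., 0.95, 0.975, 0.99).
levels = np.array([0.01, 0.025] + [0.05 * k for k in range(1, 20)] + [0.975, 0.99])
q_a = lognorm.ppf(levels, s=0.5, scale=1000)  # e.g., deaths without vaccination
q_b = lognorm.ppf(levels, s=0.5, scale=600)   # e.g., deaths with vaccination
print(difference_interval(levels, q_a, q_b))              # tighter, assumption-based
print(difference_interval_union_bound(levels, q_a, q_b))  # wider, assumption-free
```

The gap between the two intervals is exactly what is at stake in the paper: the union-bound interval holds for any joint distribution but is often too loose to be decision-relevant, while the coupled interval is tight but rests on an assumption about how the scenarios co-vary.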
