On the role of data, statistics and decisions in a pandemic

by Beate Jahn, et al.
The University of Göttingen

A pandemic poses particular challenges to decision-making with regard to the types of decisions and the geographic levels involved, ranging from regional and national to international. As decisions should be underpinned by evidence, several steps are necessary: First, data collection in terms of the time dimension as well as representativity with respect to all necessary geographical levels is of particular importance. Aspects such as data quality, data availability and data relevance must be considered. These data can be used to develop statistical, mathematical and decision-analytical models enabling prediction and simulation of the consequences of interventions. We especially discuss the role of data in the different models. With respect to reporting, transparent communication to different stakeholders is crucial. This includes methodological aspects (e.g. the choice of model type and input parameters), the availability, quality and role of the data, the definition of relevant outcomes and tradeoffs, and dealing with uncertainty. In order to understand the results, statistical literacy must be taken into account more strongly. In particular, an understanding of risks is necessary for the general public and for health policy decision makers. Finally, the results of these predictions and decision analyses can be used to reach decisions about interventions, which in turn have an immediate influence on the further course of the pandemic. A central aim of this paper is to incorporate aspects from different research backgrounds and review the relevant literature in order to improve and foster interdisciplinary cooperation.




1 Introduction

In December 2019, the first cases of coronavirus disease 2019 (COVID-19) were reported in Wuhan, China, and the outbreak of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was declared a pandemic in March 2020 by the World Health Organization. In order to control the spread of the virus and limit the negative consequences of the pandemic, important decisions had and still have to be taken. These concern the spread of the disease, its impact on health, the utilization of health care resources or the potential effects of countermeasures, to name some examples. In the course of the pandemic, the availability and quality of data, the application of models and interpretations of results, as well as contradictory statements by scientists caused confusion and fostered intense debate. The role, use and misuse of modeling for infectious disease policy making have been critically discussed (james2021use; holmdahl2020wrong). Furthermore, the CODAG reports (codag-reports) clarify why models can lead to conflicting conclusions and discuss the purposes of modeling and the validity of the results. For instance, policies to contain the pandemic were mainly guided by the 7-day incidence for a long period of time. Measures such as curfews, limited numbers of guests at events and restricted opening hours of stores were driven by this figure. However, considering the 7-day incidence alone does not provide a meaningful view of the overall picture, as discussed by codag. As mentioned in the series “Unstatistik” (Unstatistik202010), a value of 50 cases per 100,000 inhabitants in October 2020 in Germany had an entirely different meaning than six months earlier due to changes in testing strategies, improved treatments, etc. Concerning expected intensive care patients and deaths, a value of, e.g., 50 in October is likely to correspond to a value of 15 to 20 in April, possibly even less (Unstatistik202010).

As known from the fields of evidence-based medicine and health data and decision science, decisions should be underpinned by the best available evidence. For evidence-based decision making, three components are important: data; statistical, mathematical and decision-analytic models (which reduce the amount and the complexity of the data to meaningful indices, visualizations, and/or predictions); and a set of available decisions, interventions or strategies with their consequences described through a utility or loss function (decision-making framework) and the related tradeoffs. Scientists of the German Network Evidence-Based Medicine raised the question “COVID-19: Where is the evidence?” (EBM2020), which motivated, among other things, a discussion about the need for randomized controlled trials to investigate the effectiveness of preventive measures, the feasibility of such studies, and longitudinal, representative data generation.

From a statistician's perspective, the process of knowledge gain starts with a research question and continues with the acquisition of data, which then enters into the statistical model; see Figure 1 in DAGStat2020 for an illustration. Data acquisition here might either refer to the design of an adequate experiment or to the use of so-called secondary data, which has been collected for a different purpose. Also in the latter case, statistical principles of design are relevant (rubin2008). In Bayesian statistics, other information can be incorporated as a priori information (e.g. ohagan2004bayesian). This might stem from previous studies or might be based on expert opinions. The prior information combined with the data (likelihood) then results in a posterior distribution. In modeling contexts outside statistics, data and/or information is used in different ways. Simulation models use prior information (again either based on data or on other sources such as expert opinion or beliefs) to determine the parameters of interest and are usually validated on a data set. The formal representation of the mathematical or decision-analytic model makes assumptions about the system that generates the data, and the (mis)match between data and model then provides insights that can be used as a basis for decisions. In contrast to statistical models, the order of data and modeling is thus reversed. An illustration is depicted in Figure 1. For the purpose of illustration, Figure 1 depicts the process as a sequence of steps. As the pandemic progresses, however, some steps such as data capturing and modeling might be iterated. Throughout this manuscript, we will refer to data if we are talking about a data set as used in statistics, and we will use the term information for other (prior) information that does not necessarily originate from a data set.
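To fix ideas, the prior-to-posterior update described above can be sketched in a few lines of Python. This is a minimal conjugate Beta-Binomial example with purely illustrative numbers; it is not an analysis from this paper.

```python
# A conjugate Beta-Binomial sketch of combining prior information with
# data (likelihood) to obtain a posterior. All numbers are illustrative.

def posterior_beta(prior_a, prior_b, successes, failures):
    """Beta(a, b) prior combined with binomial data gives a
    Beta(a + successes, b + failures) posterior."""
    return prior_a + successes, prior_b + failures

# Prior information, e.g. from an earlier study: prior mean 2/(2+18) = 0.10
a0, b0 = 2, 18
# Newly observed data (likelihood): 30 positives out of 200 tests
a1, b1 = posterior_beta(a0, b0, 30, 170)
posterior_mean = a1 / (a1 + b1)
print(round(posterior_mean, 4))   # (2 + 30) / (2 + 18 + 200) = 0.1455
```

The posterior mean lies between the prior mean (0.10) and the observed proportion (0.15), with the data dominating as the sample grows.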

Figure 1: For evidence-based decision making, a complex process is necessary. In both statistical and mathematical/decision-analytic modeling, the path starts with a research question and ends in guidance for decision making. In statistics (upper path) using data as the basis for modeling is common. In mathematical and decision-analytic modeling (lower path), models are based on subject matter knowledge and validated or calibrated using data sets.

As mentioned above, modeling is an integral part of evidence-based decision making. Here, we distinguish three purposes of modeling, which are summarized in Table 1. The first category contains models which aim at explaining patterns and trends in the data. The second category aims at predicting the present (so-called now-casting) or the future (forecasting). Finally, decision-analytic models aim at informing decision makers by simulating the consequences of interventions and their related tradeoffs (e.g. benefit-harm tradeoff, cost-effectiveness tradeoff).

| Purpose | Explanation | Prediction | Decision analysis |
|---|---|---|---|
| Goal | Explain patterns, trends and interactions | Predict the present (e.g. the R value) or the future (e.g. ICU bed capacity) | Evaluate predefined alternative interventions, actions or technologies (e.g. vaccination strategies) |
| Focus | Dynamic patterns | Absolute numbers | Ranking of size and direction of effect |
| Statistical approaches | Factor analysis, cluster analysis, regression, contingency analyses | Time series analysis, repeated measures analysis, machine learning | Statistical decision theory |
| Simulation models | ABM, SD, differential equations | Differential equations, ABM | Differential equations, ABM, MSM, SD, DES, state transition models, decision trees |
| Importance to correctly assess … | Minor role (+) | Major role (+++) | Intermediate (++) |
| Importance to correctly assess … | Intermediate (++) | Minor role (+) | Major role (+++) |
| Challenges | High risk of overinterpretation w.r.t. causality | Highly sensitive to context | Highly dependent on comprehensive choice of options and outcomes |

ABM agent-based model, DES discrete event simulation, SD system dynamics, ICU intensive care unit, MSM microsimulation modeling

Table 1: Overview of modeling purposes and approaches. For each group of models (columns) we describe goals, approaches and challenges.

In medical decision making and health economics, this leads into a formal decision framework, which relates to statistical decision theory (siebert2003should; siebert2005using). This framework includes the following elements (parmigiani2009decision). The decision maker has to choose among a set of different actions. The consequences of these actions depend on an unknown “state of the world”. The basis for decision making now depends on the quantitative assessment of these consequences. To this end, a loss or utility function must be defined which allows the quantification of benefits, risk, cost or other consequences of different actions. Minimizing the loss function (or maximizing the utility function) then leads to optimal decisions.
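The elements of this decision framework can be made concrete with a small sketch: a set of actions, a distribution over unknown states of the world, a loss function, and the action minimizing expected loss. All probabilities and loss values below are hypothetical, chosen only to illustrate the computation.

```python
# Sketch of statistical decision theory: choose the action that
# minimizes expected loss over unknown states of the world.
# All numbers are illustrative assumptions, not estimates.

actions = ["intervene", "wait"]
states = {"low_spread": 0.7, "high_spread": 0.3}   # P(state of the world)

# loss[action][state]: consequences on an arbitrary common scale
loss = {
    "intervene": {"low_spread": 4.0, "high_spread": 2.0},
    "wait":      {"low_spread": 1.0, "high_spread": 10.0},
}

def expected_loss(action):
    """Probability-weighted loss of an action over all states."""
    return sum(p * loss[action][s] for s, p in states.items())

best = min(actions, key=expected_loss)
print(best, {a: round(expected_loss(a), 2) for a in actions})
```

Here "wait" is best if spread stays low, but its expected loss (3.7) exceeds that of "intervene" (3.4) once the high-spread scenario is weighted in, so the optimal decision is to intervene.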

Data scientists such as statisticians, computer scientists, mathematicians, biologists, physicists and psychologists play an important role in processing the data and contributing to the interdisciplinary research starting from data acquisition and data processing through statistical analysis and development of models to interpretation of results and the communication with different stakeholders including the scientific community, decision makers and the general public. With this publication, we aim to provide an overview of the most important aspects and key concepts to initiate further interdisciplinary exchange. A key lesson that should be learned from the SARS-CoV-2 pandemic is that individual and political decision making relies on data from a variety of sources. Generation and analysis of these data requires a coordinated multidisciplinary effort. Digital infrastructures provide us with the means to federate and integrate data for statistical and mathematical modeling. In the future, they should also be used to coordinate teams of data scientists such as statisticians, mathematicians, physicists and computer scientists in interdisciplinary collaborations with epidemiologists, virologists and immunologists.

The paper is organized as follows. In Section 2, we discuss requirements of data quality and why this is fundamental for the entire process. We then move on to the modeling part. Section 3 deals with the different purposes of modeling. In Section 4, we explain how decisions can be informed based on these models. We discuss aspects of the reporting and communication of results in Section 5, and Section 6 deals with the last step of the process: political decisions. We conclude with some final remarks in Section 7.

2 Data availability and quality

An essential basis for evidence-based policy, but also for research, is high quality data. The mere presence of data is not enough, as the process of data definition, collection, and processing determines how well the data reflect the phenomena on which to provide evidence. Poor definition of data concepts and variables as well as bad choices in their collection and processing can lead to misleading data, that is, data with severe and incurable bias, or to an unacceptably large remaining level of uncertainty about the phenomena of interest, so that any results generated with that data form an inadequate basis for decision making. Below, we show which quality characteristics have to be considered when planning a data collection or when assessing the quality of already existing data for a task at hand. The underlying concepts are general and well-known. Despite that, they are regrettably often neglected, and thus we summarize them here in short form in the context of policy making as eight characteristics. For a thorough treatment, we refer to the Principles in the European Statistics Code of Practice (european).

  1. Suitability for a target: Data in itself is neither good nor bad, but only more or less suitable for achieving a certain goal. In order to assess data, it is first necessary to understand, agree on, and describe the goal that the data are supposed to support.

  2. Relevance: Data must provide relevant information to achieve the goal. To do this, the data must measure the characteristics needed (e.g. how to measure population immunity?) on the right individuals (e.g. representative sample for generalization, or high-resolution data for local action?).

  3. Transparency: The data collection process must be transparent in terms of origin, time of data collection and nature of the data. Transparency is a requirement for peer-review processes to ensure correctness of results, and for an adequate modeling of uncertainties.

  4. Quality standards: Data are well suited for policy making requiring general overviews and spatio-temporal trends if local data collection follows a clear and uniform definition of what is recorded and how it has been recorded. Standardization includes, for example, harmonization of data processes, adequate training of the persons involved in the collection, and monitoring of the processes.

  5. Truthfulness and trustworthiness: To place trust in the data, these must be collected and processed independently, impartially and objectively. In particular, conflicts of interest should be avoided in order not to jeopardize their credibility.

  6. Sources of error: Most data contain errors, such as measurement errors, input errors, transmission errors or errors that occur due to non-response. With a good description of data collection and data processing (see transparency above), possible sources of error can be assessed and incorporated into the modeling for the quantification of uncertainty and the interpretation of results.

  7. Timeliness and accuracy: Ideally, data used for policy-making should meet all quality criteria. However, information derived from data must additionally be up-to-date and some decisions (e.g. contact restrictions) cannot be postponed to wait until standardized processes have been defined and implemented and optimal data has been collected. The greater uncertainty in the data associated with this must be met with transparency and with great care in its interpretation.

  8. Access to data for science: In order to achieve the overall goal of evidence-based policy making, it is important to make good data available as a resource to a wide scientific public. This allows for the data to be analyzed in different contexts and with different methods, and enables the data to be interpreted from the perspective of different social groups and scientific disciplines.

The preceding items mainly refer to a primary data generating process, that is, when the data are directly generated to provide information on a pre-defined target. Especially in the context of COVID-19, information from available sources often has to be used, where the data generation does not necessarily coincide with the aim of the study. One prominent example is the number of infections, which is collected by the local health authorities but used for comparing regional incidences, which in turn are the basis of several political actions. Particular attention in this case has to be paid to selection biases. One way to assess (and thus address) selection bias would be accompanying information on asymptomatically infected persons gained through representative studies. Other information to mitigate selection bias comes from the number of tests and the reasons for testing, but these are not appropriately reported in Germany. Both problems yield biased regional incidences. Hence, modeling based on these data may produce misleading results and has to be considered carefully. Additionally, the data generating process may be subject to informative sampling (cf. pfeffermann2009inference).

The above aspects always have to be seen in light of the research question. Estimating incidences and understanding infection patterns require very different data. Available data are often inappropriate or must be complemented by additionally gathered data sources. Due to the highly volatile character of COVID-19 infections, data gathering, especially via additional samples, must be planned very carefully to ensure the quality necessary to provide a foundation for policy actions.

Representativity implies drawing adequate conclusions from the sample about the population or parameters of the population. To achieve this, known inclusion probabilities on a complete list of elements must be given in order to allow statistical inference. Nowadays, the term representativity is generalized to cover smaller regional granularity as well as the time scale. Further details, especially for subgroup representativity, can be drawn from gabler2013reprasentativitat. In household or business surveys, the term representativity has to be seen in the context of non-response and its compensation (cf. schnell2019survey). In practice, representativity is often equated with a sufficiently high-quality sample. This is entirely misleading: statistical properties, and especially accuracy, have to be considered in addition (cf. munnich2020qualitat). Finally, it has to be pointed out that these aspects have to be considered separately for each variable or target of interest.
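The role of known inclusion probabilities can be illustrated with the classical Horvitz-Thompson estimator, which weights each sampled value by the inverse of its inclusion probability to obtain a design-unbiased estimate of a population total. The numbers below are purely illustrative.

```python
# Sketch: design-based estimation with known inclusion probabilities
# (Horvitz-Thompson estimator of a population total). Toy numbers.

def horvitz_thompson_total(values, incl_probs):
    """Each sampled value is weighted by 1 / (its inclusion probability)."""
    return sum(y / pi for y, pi in zip(values, incl_probs))

# Sampled case counts and the known probability each unit had of
# being included in the sample:
y = [12, 5, 30]
pi = [0.1, 0.5, 0.25]
print(horvitz_thompson_total(y, pi))   # 12/0.1 + 5/0.5 + 30/0.25 = 250.0
```

Without the inclusion probabilities (e.g. for self-selected or convenience samples), no such design-based correction is possible, which is exactly why representativity cannot simply be asserted.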

3 From data to insights – the purposes of modeling

Data and decisions are often linked using statistical models or simulations. As discussed in the introduction, modeling can serve three purposes. Each of them can be approached from either a statistical or a mathematical modeling perspective. As already noted, data can play different roles in these models. While statistical models use data as the basis for the model itself, simulations are based on parameters according to prior information and predictions based on the simulation can be checked against real data to assess the precision and validity of the constructed model.

An important aspect is handling and communicating uncertainty. In statistical models, different types of uncertainty occur: sampling variation, model uncertainty, incomplete data, applicability of information and confounding are common examples (e.g. altman2014uncertainty; abadie2020sampling; chatfield1995model). In mathematical and decision-analytic models, there are usually alternative approaches to determine the values for key parameters used in simulations. For decision-making purposes, it is, therefore, important to compare different methods for determining the indicators. Consequently, in most cases, not a single number but an interval or distribution has to be considered.
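One simple way to report an interval rather than a single number is a nonparametric bootstrap. The sketch below computes a percentile interval for a mean from a small illustrative data set; it stands in for any of the uncertainty quantification methods mentioned above.

```python
# Sketch: a nonparametric bootstrap percentile interval, illustrating
# the point that results should be reported as intervals or
# distributions rather than single numbers. Illustrative data only.
import random

random.seed(1)
data = [3, 7, 4, 6, 5, 9, 4, 5]

def bootstrap_ci(x, n_boot=2000, alpha=0.05):
    """Percentile interval from means of resampled (with replacement) data."""
    means = sorted(
        sum(random.choices(x, k=len(x))) / len(x) for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_ci(data)
print(round(lo, 2), round(hi, 2))
```

The width of the resulting interval, not just its center, is the decision-relevant quantity.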

3.1 Modeling for explanation

The main goal of these models is to explain patterns, trends or interactions. Statistical models for this purpose include, for example, regression models (fahrmeir2007regression) as well as factor, cluster or contingency analyses (e.g. fabrigar2011exploratory; duran2013cluster). In this context, associations are often misinterpreted as causal relationships. The discovery of correlations and associations, however, cannot be equated with establishing causal claims. In statistics and clinical epidemiology, for example, the Bradford Hill criteria (hill1965) can be used to assess whether an observed association is causal.

From a statistical perspective, there are two possibilities to tackle this issue. The gold standard is to design a randomized experiment, which enables causal conclusions. In the context of the Corona pandemic, randomized controlled trials were used for the assessment of COVID-19 treatments, including the RECOVERY platform trial, leading to publications such as horby2020lopinavir; abani2021convalescent; recovery2020effect. Randomized controlled trials also played a vital role in the development of vaccines (e.g. baden2021nejm; nejm2021vaccine).

Where randomized experiments are not possible and observational data is used instead, causal conclusions are harder to draw. In order to obtain valid estimates in this situation, a common approach is the counterfactual framework by Rubin1974. For simplicity, assume that we are interested in the effect of a binary “treatment” A (for example, “lockdown” vs. “no lockdown”) on some outcome Y (e.g. the number of infections with COVID-19). Then we denote by Y^{a=1} the outcome that would have been observed under treatment and by Y^{a=0} the outcome that would have been observed under no treatment. A causal effect of A on Y is present if Y^{a=1} ≠ Y^{a=0} for an individual. In practice, however, only one outcome can be observed for each individual. Thus, it is only possible to estimate an average causal effect, i.e. to compare E[Y^{a=1}] with E[Y^{a=0}] (Hernan2020causal). Different possibilities for estimating a causal effect have been proposed, for example propensity score methods (Cochran1973), the parametric g-formula (robins2004effects), marginal structural models (robins2000marginal), structural nested models (robins1998structural) and graphical models (didelez2007graphical). Recent works have shown that these methods have difficulties when it comes to small-sample studies as in the context of COVID-19 (friedrich2020causal).
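As a schematic illustration of one such method, the sketch below computes an inverse-probability-weighted (propensity-score based) estimate of an average causal effect E[Y^{a=1}] − E[Y^{a=0}]. The treatment indicators, outcomes and propensity scores are toy values; in practice the propensity scores would themselves be estimated from covariates.

```python
# Sketch: inverse probability weighting (IPW) with (assumed known)
# propensity scores to estimate an average causal effect. Toy data.

# a: binary treatment, y: outcome, ps: propensity score P(A=1 | covariates)
a  = [1, 1, 0, 0, 1, 0]
y  = [3.0, 4.0, 5.0, 6.0, 2.0, 7.0]
ps = [0.5, 0.8, 0.4, 0.5, 0.6, 0.2]

n = len(a)
# Weight treated units by 1/ps and control units by 1/(1 - ps)
mean_treated = sum(ai * yi / pi for ai, yi, pi in zip(a, y, ps)) / n
mean_control = sum((1 - ai) * yi / (1 - pi) for ai, yi, pi in zip(a, y, ps)) / n
ate = mean_treated - mean_control   # estimated E[Y^{a=1}] - E[Y^{a=0}]
print(round(ate, 3))
```

The weighting creates a pseudo-population in which treatment is independent of the measured covariates; unmeasured confounding, of course, remains untouched.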

Mathematical models and simulations can also be used to understand and explain dynamic patterns. Examples are simulation studies for public health interventions such as lockdown and exit strategies, where general consequences of different measures can be compared. Seemingly simple simulation models have played an important role in communicating the dynamics during a pandemic. In these models, assumptions about the system that generates the data and (causal) relationships are made. The (mis)match between data and model then provides insights that can be used as basis for decisions. However, this procedure does not establish causal relationships in the statistical sense described above.

The main challenge in modeling for explanation is good communication, irrespective of whether the model is based on statistical or mathematical approaches. Anticipating the human bias for interpreting results causally, clear statements need to be made as to what extent (from “not at all” to “plausible”) specific detected associations allow some causal interpretation and why. The two extremes, 1) the simple disclaimer that “correlation is not causation” and 2) unreflected causal interpretation, do not do justice to the complexity of the problem as outlined by the methods above.

3.2 Modeling for prediction

In statistical prediction models, the modeler can choose from large toolboxes in (spatio-)time-series analysis as well as statistics and machine learning (ML). Examples cover simple but interpretable ARIMA models (benvenuto2020application; roy2021spatial), support vector machines (rustam2020covid), joint hierarchical Bayes approaches (flaxman2020estimating) or state-of-the-art ML methods such as long short-term memory (LSTM) networks or extreme gradient boosting (XGBoost) (luo2021time). A comprehensive overview is also given by kristjanpoller2021causal.
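As a minimal illustration of this family of models, the sketch below fits an AR(1) model, the simplest autoregressive member of the ARIMA class, by least squares (without an intercept, for brevity) and produces a one-step-ahead forecast. The case counts are invented for illustration.

```python
# Sketch: an AR(1) model y_t = b * y_{t-1}, fitted by least squares,
# with a one-step-ahead forecast. Illustrative counts only.

cases = [100, 120, 150, 180, 210, 260, 310]

# Least-squares slope through the origin for (y_{t-1}, y_t) pairs
num = sum(cases[t - 1] * cases[t] for t in range(1, len(cases)))
den = sum(cases[t - 1] ** 2 for t in range(1, len(cases)))
b = num / den

forecast = b * cases[-1]   # one-step-ahead prediction
print(round(b, 3), round(forecast, 1))
```

Real applications would of course include differencing, intercepts, higher-order terms and prediction intervals; the point here is only the mechanics of an autoregressive forecast.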

For predictions based on such models, one can distinguish two different aims: now-casting and forecasting. For now-casting, information up to the current date and state is used to estimate or predict key figures such as the R value, which estimates during a pandemic how many people an infected person infects on average. In forecasting, spatio-temporal predictions or simulations are used to look ahead in time, as in weather forecasts or to estimate the required number of ICU beds. In forecasting models, a causal relationship between the predictors and the outcome is required, while now-casting can also be achieved with predictive variables that do not necessarily have a causal effect on the outcome. Several statistical models have been proposed for now-casting, for example hierarchical Bayesian models (gunther2021nowcasting) or trend regression models (kuchenhoff2021analysis); see also, among others, altmejd2020nowcasting; schneble2021nowcasting; salas2021improving for related concepts.

Dynamic Models/Time-variant Dynamics

A unique feature of pandemic assessment is the dynamic nature of the event. By this we refer not only to the explosive (exponential) growth that may occur but also to the fact that the properties of the processes that describe spatio-temporal changes are themselves a function of time. This is because the behavior of the people continuously changes the properties of the system that we are trying to understand and make predictions for. This contrasts with other natural systems, like the current weather, and most systems in the engineering and physical sciences.

Simple infectious disease compartmental models can be described by the stock of susceptible individuals S, the stock of infected individuals I, the stock of removed individuals R (either by death or recovery), the probability of disease transmission in a contact β, the recovery rate γ, the death rate μ and the birth rate ν (kermack1927contribution; hethcote2000mathematics; andersson2012stochastic; Grassly2008). Here,

  dS/dt = ν N − β S I / N − μ S,
  dI/dt = β S I / N − γ I − μ I,
  dR/dt = γ I − μ R,

where t denotes the time point and N is the total number of individuals in the population, i.e. N = S + I + R.

Assuming that susceptible individuals first go through a latent period after infection before becoming infectious, adapted models such as SEI, SEIR or SEIRS can be applied, depending on whether the acquired immunity is permanent or not (jit2011modelling). For modeling the COVID-19 pandemic, applications include further approaches such as SIR-X, which accounts for the removal (quarantine) of symptomatic infected individuals, and various other extensions and applications (dehning2020inferring; lehr).

In this deterministic compartment model, predictions are determined entirely by their initial conditions, the set of underlying equations, and the input parameter values. Deterministic compartmental models have the advantage of being conceptually simple and easy to implement, but they lack, for example, the stochasticity inherent in infectious disease transmission. In stochastic compartment models, the occurrence of events like transmission of infection or recovery is determined by probability distributions. Therefore, the chain of events (like an outbreak) is not exactly predictable. However, there are many possible types of stochastic epidemic models (BRITTON201024; Kretzschmar2020).
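One simple stochastic counterpart is a discrete-time chain-binomial (Reed-Frost type) simulation, in which new infections and recoveries are binomial random draws rather than deterministic flows. The sketch below uses invented parameter values and a small population purely for illustration.

```python
# Sketch: a chain-binomial (Reed-Frost type) stochastic SIR simulation.
# Each day, every susceptible is infected with probability
# 1 - (1 - beta/N)^I, and every infected recovers with probability gamma.
import random

random.seed(42)

def stochastic_sir(S, I, R, beta, gamma, N, days):
    history = [(S, I, R)]
    for _ in range(days):
        p_inf = 1 - (1 - beta / N) ** I      # per-susceptible infection prob.
        new_inf = sum(random.random() < p_inf for _ in range(S))
        new_rec = sum(random.random() < gamma for _ in range(I))
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append((S, I, R))
    return history

hist = stochastic_sir(S=990, I=10, R=0, beta=0.3, gamma=0.1, N=1000, days=60)
print(hist[-1])
```

Rerunning with different seeds yields different outbreak trajectories, including occasional early extinctions, which is exactly the behavior deterministic models cannot represent.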

The main objective of forecasting in this context is numerically precise predictions, such as the number of ICU beds required. For this objective, the reliability or accuracy of the predictions highly depends on the availability and quality of the data used to estimate the values of the parameters in the underlying mathematical or statistical models.

It should be noted that these models are highly sensitive to the context in the following sense: changes in the underlying system in variables that are not part of the model can lead to changes in the relationship between the selected predictors and the predictions, rendering the predictions and their assumed uncertainty meaningless.

While it is widely appreciated that, for example, weather forecasts are only reliable for a couple of days, forecasting during a pandemic is even more complicated since the behavior of people influences the process that is being modeled. Forecasting during pandemics is, therefore, itself a continuous process with time-varying parameters. For this reason, such modeling effort is a complex undertaking requiring a range of data and expertise. Such activities should therefore be realized and coordinated through cross-disciplinary teams. To account for regional differences, one would expect a collective of modeling groups that support decision making for different parts of a country.

3.3 Decision-analytic modeling

Depending on the research question, different modeling approaches are used for decision-analytic modeling and development of computer simulations (IQWIG2020; Roberts2012). These include decision tree models, state transition models, discrete event simulation models, agent-based models, and dynamic transmission models.

The selection of the model type depends on the decision problem and the disease. In general, decision trees are applied for simple problems without time-dependent parameters and with a fixed and comparatively short time horizon. If the decision problem requires evaluation over a longer time period and if parameters are time or age dependent, state-transition cohort (Markov) models (STM) can be applied. STM allow for modeling of different health states and transitions between these states, and therefore also for repeated events. They are applied when time to event is important. If the decision problem can be represented in an STM “with a manageable number of health states that incorporate all characteristics relevant to the decision problem, including the relevant history, a cohort simulation should be chosen because of its transparency, efficiency, ease of debugging, and ability to conduct specific value of information analyses” (Siebert2012). If the representation of the decision problem would lead to an unmanageable number of states, then an individual-level state-transition model is recommended (Siebert2012). Especially in situations where interactions of individuals with each other or with the health-care system need to be considered, that is, when we are confronted with scarce physical resources, queuing problems and waiting lines (e.g., limited testing capacities), discrete event simulation (DES) is an appropriate modeling technique. DES allows the incorporation of time-to-event data (e.g., time to progression), and physical resources are explicitly defined (Karnon2012). Model types such as differential equation systems, agent-based models and system dynamics account for the specific features of infectious diseases such as the transmissibility from infected to susceptible individuals and the uncertainties arising from complex natural history and epidemiology (Pitman2012; Grassly2008; jit2011modelling).

| Decision-analytic models | Decision trees | State transition models | Discrete event simulation | Compartment models | Agent-based models | Microsimulation models |
|---|---|---|---|---|---|---|
| Mathematical methods | Operations research, decision analysis, machine learning | Markov chains, Monte Carlo methods | Monte Carlo methods | Differential equations (deterministic or stochastic) | Monte Carlo methods | Monte Carlo methods |
| Components | Decision nodes, chance nodes, end nodes, paths | (Health) states, transition probabilities | Individuals (entities), resources, event states | Compartments, transition rates | Individuals (agents), attributes, learning rules, environment | Individuals, attributes, learning rules, environment |
| Time | Not relevant | Time-dependent parameters | Discrete time intervals | Continuous | Continuous | Continuous |
| Role and type of data | Model building / parameter estimation / validation | Parameter estimation / validation | Parameter estimation / validation | Parameter estimation / validation | Parameter estimation / validation | Survey data |
| Other information | Expert opinion | Expert opinion | Expert opinion | Expert opinion | Expert opinion / beliefs | |
| Prominent application areas | Engineering, law | Health care strategies | Queues | Infectious diseases | Biological processes | Traffic flow |
| SARS-CoV-2 / COVID-19 examples | Testing strategies for university campuses (Pelt2021) | Evaluation of vaccination strategies (Kohli2021) | Balancing scarce resources (melman2021balancing) | Contact tracing strategies (Kretzschmar2020) | Vaccination strategies (Jahn2021) | Herd immunity (bock2020mitigation) |

Table 2: Overview of differences and similarities of simulation models commonly used as the basis for health decision sciences.

Decision tree models

In a decision tree model, the consequences of alternative interventions or health technologies are described by possible paths. Decision trees start with decision nodes, followed by the alternative choices (interventions, technologies, etc.) available to the decision maker. For each alternative, the patients’ paths, which are determined by chance and are outside the decision maker’s control, are described by chance nodes. At the end of the paths, the respective consequences of each path are shown. Consequences or outcomes may include symptoms, survival, quality of life, number of deaths or costs. Finally, the expected outcome of each alternative choice is calculated by averaging (i.e. taking the probability-weighted average over all pathways) (Hunink2001; Rochau2015). Pelt2021, for example, evaluated COVID-19 testing strategies for repopulating university campuses in a decision tree analysis.
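
The averaging step can be sketched in a few lines of Python. All probabilities and outcomes below are illustrative placeholders, not estimates from Pelt2021 or any other study:

```python
# Expected outcome of a chance node: probability-weighted average over its branches.
def expected_outcome(branches):
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9  # probabilities must sum to 1
    return sum(p * outcome for p, outcome in branches)

# Two hypothetical testing strategies; outcomes are infections per 1,000 students.
strategy_a = [(0.02, 50.0), (0.98, 5.0)]   # (path probability, outcome)
strategy_b = [(0.10, 50.0), (0.90, 5.0)]

ev_a = expected_outcome(strategy_a)   # 0.02*50 + 0.98*5 = 5.9
ev_b = expected_outcome(strategy_b)   # 0.10*50 + 0.90*5 = 9.5
best = "A" if ev_a < ev_b else "B"    # fewer expected infections is better
```

In a real tree, chance nodes are nested, so this averaging is applied recursively from the end nodes back to the root.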

Figure 2: Example: A Cost-Effectiveness Framework for COVID-19 Treatments for Hospitalized Patients in the United States, doi: 10.1007/s12325-021-01654-5

State transition models

A state transition model is conceptualized in terms of a set of (health) states and transitions between these states. Time is represented in discrete time intervals (cycles). Transition probabilities, cycle length, state values (“rewards”) and termination criteria are defined in advance. During the simulation, individuals can only be in one state in each cycle. Paths of individuals determined by events during a cycle are modeled with a Markov cycle tree that uses a set of random nodes. The average number of cycles that individuals spend in each state can be used in conjunction with the rewards (e.g. life years, health-related quality of life or costs) to estimate the consequences in terms of life expectancy, quality-adjusted life expectancy and expected costs of alternative interventions or health technologies. There are two common types of analyses of state transition models: cohort models (“Markov” models) (Beck1983; Sonnenberg1993) and individual-level models (“first-order Monte Carlo” models) (Spielauer2007; GrootKoerkamp2010; Weinstein2006).

Simple cohort models are defined in the mathematical literature as discrete-time Markov chains. A discrete-time Markov chain is a sequence of random variables $X_0, X_1, X_2, \ldots$ representing health states with the Markov property, namely that the probability of moving to the next health state depends only on the present state and not on the previous states:

$$P(X_{n+1} = x \mid X_n = x_n, \ldots, X_0 = x_0) = P(X_{n+1} = x \mid X_n = x_n).$$

Generalized models such as continuous time Markov chains with finite or infinite state space are not commonly applied in health-decision science. Applications of state transition models in the pandemic include evaluation of treatments (Sheinson2021) and vaccination strategies (Kohli2021).
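
A discrete-time cohort simulation of this kind can be sketched as follows; the three states and all transition probabilities are illustrative assumptions, not parameters from the cited studies:

```python
# Cohort (Markov) simulation with states Healthy, Ill, Dead (illustrative numbers).
P = [
    [0.90, 0.08, 0.02],  # transitions from Healthy
    [0.10, 0.70, 0.20],  # transitions from Ill
    [0.00, 0.00, 1.00],  # Dead is absorbing
]

cohort = [1.0, 0.0, 0.0]  # the whole cohort starts in Healthy
life_years = 0.0
for _ in range(50):  # 50 one-year cycles
    # Markov property: the next distribution depends only on the current one.
    cohort = [sum(cohort[i] * P[i][j] for i in range(3)) for j in range(3)]
    life_years += cohort[0] + cohort[1]  # reward: one life-year per cycle alive
```

Because each row of P sums to 1, the cohort distribution always sums to 1; accumulating the rewards over cycles yields the (undiscounted) life expectancy.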

Discrete Event Simulation (DES)

Discrete event simulation is an individual-level simulation technique (Pidd2004; Karnon2012; Jun1999; Zhang2018). The core concepts of DES are entities (e.g. patients), attributes (e.g. patient characteristics), events, resources (i.e. physical resources such as medical staff and medical equipment), queues and time (Pidd2004; Banks2005; Jahn2010a). Similar to decision trees and state transition models, health outcomes and costs of alternative health technologies can be assessed. In addition to these outcomes, performance measures such as resource use or waiting times can be calculated, as physical resources (e.g. hospital beds) can be explicitly modeled (Jahn2010; Jahn2010a). The term discrete refers to the fact that DES moves forward in time at discrete intervals (i.e. the model jumps from the time of one event to the time of the next) and that events are discrete (mutually exclusive) (Karnon2012). Model applications in COVID-19 include the optimization of processes with scarce resources, such as bed capacities (melman2021balancing), testing stations (saidani2021designing) and laboratory processes (gralla2020discrete).
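
As a minimal illustration of the event-driven logic, the following sketch simulates a single-server testing station in continuous time; the arrival and service rates are illustrative assumptions, and a full DES engine would manage an event calendar for many entities and resources:

```python
import random

# Single-queue, single-server simulation: time jumps from one event to the next.
random.seed(1)
ARRIVAL_RATE, SERVICE_RATE, N = 1.0, 1.2, 1000  # illustrative rates (per hour)

t, server_free_at, waits = 0.0, 0.0, []
for _ in range(N):
    t += random.expovariate(ARRIVAL_RATE)  # next arrival event (continuous time)
    start = max(t, server_free_at)         # queue if the server is still busy
    waits.append(start - t)                # performance measure: waiting time
    server_free_at = start + random.expovariate(SERVICE_RATE)

mean_wait = sum(waits) / N
```

Because the physical resource (here, the single server) is explicit, performance measures such as waiting times fall out of the simulation directly.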

Dynamic Models/Time-variant Dynamics

The dynamic SIR type models explained in Section 3.2 can also be used in the context of decision-analytic modeling. This model type can be extended by further compartments such as Death (D) and other states (X), reflecting, for example, quarantine or other states relevant to the research question. Such SIRDX models have been used frequently to model the effects of non-pharmaceutical interventions during the COVID-19 pandemic (nussbaumer2020quarantine). As in Markov state-transition models, a deterministic cohort simulation approach is used to model the distribution of compartments over time. Deterministic compartment models are useful for modeling the average behavior of disease epidemics in larger populations. When stochastic effects (e.g., the extinction of the disease in smaller populations), more complex interactions between disease and individual behavior, or distinctly nonrandom mixing patterns (e.g., the spread of the disease in different networks) are relevant, stochastic agent-based approaches can be used (see next section).
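
A deterministic SIRD cohort simulation can be written in a few lines using forward-Euler steps; the parameter values below are illustrative choices, not fitted estimates:

```python
# Forward-Euler integration of a deterministic SIRD compartment model.
N = 1_000_000                                # population size
S, I, R, D = N - 100.0, 100.0, 0.0, 0.0
beta, gamma, mu, dt = 0.3, 0.09, 0.01, 0.1   # illustrative rates; time step in days

for _ in range(int(200 / dt)):               # simulate 200 days
    new_inf = beta * S * I / N * dt          # S -> I: mass-action transmission
    new_rec = gamma * I * dt                 # I -> R: recovery
    new_dead = mu * I * dt                   # I -> D: death
    S -= new_inf
    I += new_inf - new_rec - new_dead
    R += new_rec
    D += new_dead
```

Since every outflow of one compartment is an inflow of another, S + I + R + D stays equal to N (up to floating-point error); further compartments, e.g. quarantine, add analogous flow terms.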

Agent-based models (ABM)

Agent-based modeling is an individual-level simulation technique (Karnon2012). ABMs have been used to model biological processes, ecological systems, traffic management, customer flow management or stock markets, and in recent years increasingly for cost-effectiveness analyses in health care (Marshall2015; Bonabeau2002; Macal2008). ABMs represent complex systems in which individual “agents” act autonomously and are capable of interactions (Miksch2019). These agents can represent the heterogeneity of individuals, and the behavior of individuals can be described by simple rules. Such rules include how agents interact, move between geographical zones, form households or consume (Chhatwal2015; Bruch2015; Hunter2017). ABMs are often applied to study “emergent behavior” arising from these predefined rules. In infectious disease modeling, agent behaviors combined with transmission patterns and disease progression lead to population-wide dynamics, such as disease outbreaks (Macal2010). ABMs are also used in public health studies to model noncommunicable diseases (nianogo2015agent). Comparisons of ABM, DES and system dynamics can be found in Marshall2015, Marshall2015a and Pitman2012. ABMs are increasingly applied for COVID-19 evaluations (Bicher2021; Jahn2021; Hoffrage2261). In agent-based models, either all affected individuals are simulated individually, or specific networks of individuals are integrated into the simulation.
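
The following toy ABM illustrates how population-wide outbreak dynamics emerge from simple individual rules; the number of agents, contact rate, and transmission and recovery probabilities are all illustrative assumptions:

```python
import random

# Toy agent-based infection model: agents meet random contacts each day.
random.seed(0)
N_AGENTS, CONTACTS, P_TRANSMIT, P_RECOVER, DAYS = 500, 5, 0.05, 0.1, 60

state = ["S"] * N_AGENTS            # all agents start susceptible ...
for i in random.sample(range(N_AGENTS), 5):
    state[i] = "I"                  # ... except five initially infected agents

for _ in range(DAYS):
    infected = [i for i, s in enumerate(state) if s == "I"]
    for i in infected:
        # Simple local rule: each infected agent meets CONTACTS random agents.
        for j in random.sample(range(N_AGENTS), CONTACTS):
            if state[j] == "S" and random.random() < P_TRANSMIT:
                state[j] = "I"
        if random.random() < P_RECOVER:
            state[i] = "R"          # recovered agents are immune

counts = {s: state.count(s) for s in ("S", "I", "R")}
```

Replacing the uniform random contacts with a contact network or household structure turns this sketch into the network-based variant mentioned above.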


Microsimulation

Microsimulation methods, introduced by Orcutt.1957, are used to simulate policy actions on real populations. Li.2013 describe microsimulations as “a tool to generate synthetic micro-unit based data, which can then be used to answer many ‘what-if’ questions that, otherwise, cannot be answered”. The main difficulty for microsimulation is finding an appropriate data source on which the simulations can be conducted. Often, survey data are used. Nowadays, the first step of a microsimulation is the realistic generation of data at the necessary geographic depth (e.g. Li.2013); a full-population approach is described in mda2021.03. Thereafter, the scenario-based microsimulation analysis yields the information necessary to draw conclusions for policy support.
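
The data-generation step can be sketched as drawing synthetic micro-units from marginal distributions; the margins below are illustrative, and real applications calibrate the synthetic population against official statistics:

```python
import random

# Generate a synthetic micro-level population from (illustrative) marginals.
random.seed(42)
age_margins = {"0-19": 0.18, "20-64": 0.60, "65+": 0.22}
region_margins = {"urban": 0.75, "rural": 0.25}

def draw(margins):
    """Inverse-CDF draw from a categorical marginal distribution."""
    r, acc = random.random(), 0.0
    for category, share in margins.items():
        acc += share
        if r < acc:
            return category
    return category  # guard against floating-point rounding

population = [{"age": draw(age_margins), "region": draw(region_margins)}
              for _ in range(10_000)]
share_65 = sum(p["age"] == "65+" for p in population) / len(population)
```

Scenario analyses then modify the attributes or behavior of these synthetic units and compare the resulting outcome distributions.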

In microsimulation methods, we distinguish between static and dynamic models. The latter can be divided into time-continuous and time-discrete models. An overview of microsimulation methods is given in Li.2013 and the references therein. For modeling COVID-19, dynamic models have to be considered. bock2020mitigation present a continuous-time SIR microsimulation approach as an example of a dynamic transmission model. In contrast to ABM, other microsimulations are often based on survey data, or on realistic, synthetically extended survey data. The above-mentioned cohort simulations are usually deterministic simulations, in which an initial cohort of interest is followed over different paths over time, leading to a distribution of outcomes after the analytic time horizon. Recently, the dividing line between these methods and the related terminology has become blurred, and which method is ultimately used often depends on the background of the research team.

As is apparent from the list above, a very wide range of approaches is used for decision-analytic modeling. General guidance on how to choose among the basic approaches for a given problem does exist (IQWIG2020; Roberts2012; Siebert2012).

4 Decision analysis

The models described in Section 3 can now be used to inform decision making. To this end, the so-called decision-analytic framework is used. Decision analysis aims at supporting decisions under uncertainty by means of systematic, explicit and quantitative methods. In particular, computer simulations and prediction models as described above are used to calculate the short-term and long-term benefits and harms (as well as the costs) of alternative interventions, technologies, or measures in health care (Schoeffski2011). The decision-analytic framework includes, among other things, the relevant health states and events considered to describe possible disease trajectories, the type of analysis (e.g., benefit-harm, cost-benefit, budget-impact analyses (Drummond)), and the simulation method (cohort- or individual-based). In addition to the base-case analysis (using the most likely parameters), scenario and sensitivity analyses (Briggs2012) should be performed to show the robustness or uncertainty of the results. Value of information analysis can be applied to assess the value of future research to reduce uncertainty (Fenwick2020; siebert2013enough).

4.1 Decision trade-offs

A central idea in decision analysis is that trade-offs in the outcomes of alternative choices are formalized and, if possible, quantified. The trade-off between such outcomes is usually expressed explicitly in the form of an incremental trade-off ratio. In the context of a benefit-harm analysis, for example, this relates to quantifying the benefits of COVID-19 vaccination in terms of (incremental) deaths avoided and the harms of vaccination in terms of (incremental) potential side effects. In general, two or more interventions can be compared in a stepwise incremental fashion (keeney1976). Benefit-harm analyses are often applied in screening evaluations. To detect efficient strategies, so-called strongly dominated strategies are excluded first. These are strategies that result in higher harms (e.g. due to testing or invasive diagnostic work-up) and lower benefits (e.g., cancer cases avoided, life-years gained) than other strategies. Second, weakly dominated strategies are excluded, that is, strategies that result in higher harms per additional benefit compared with the next most harmful strategy, or in other words, strategies that are strongly dominated by a linear combination of two other strategies. Third, the incremental harm-benefit ratios (IHBRs) are calculated for the non-dominated strategies.
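
The exclusion of strongly dominated strategies and the computation of IHBRs can be sketched as follows; the three strategies and their harm-benefit values are purely hypothetical, and weak (extended) dominance is omitted for brevity:

```python
# Hypothetical screening strategies: (harm = examinations, benefit = life-years gained).
strategies = {
    "A": (1000.0, 8.0),
    "B": (1500.0, 7.5),   # more harm and less benefit than A -> strongly dominated
    "C": (2000.0, 9.0),
}

def strongly_dominated(name):
    h, b = strategies[name]
    return any(h2 <= h and b2 >= b and (h2, b2) != (h, b)
               for h2, b2 in strategies.values())

# Non-dominated strategies, ordered by increasing harm (the efficiency frontier).
efficient = sorted((s for s in strategies if not strongly_dominated(s)),
                   key=lambda s: strategies[s][0])

# Incremental harm-benefit ratio between consecutive frontier strategies.
ihbrs = {}
for prev, cur in zip(efficient, efficient[1:]):
    (h1, b1), (h2, b2) = strategies[prev], strategies[cur]
    ihbrs[cur] = (h2 - h1) / (b2 - b1)   # extra harm per extra unit of benefit
```

Here strategy B is removed as strongly dominated by A, and the remaining frontier consists of A and C with one incremental ratio between them.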

There is no general benchmark for how much additional harm individuals are willing to accept for one additional unit of benefit. Strategies are therefore explored as a function of willingness-to-accept thresholds and displayed as harm-benefit acceptability curves along the efficiency frontier (Neumann2016).

Figure 3: Example: efficiency frontier for harms (average number of screening examinations) and benefits (life years gained) for breast cancer screening strategy, starting age of screening 40, 45, 50; A annual screening, B biennial screening, H hybrid strategies of annual screening starting at age 40 or 45, followed by biennial screening at age 50; black labeled strategies are efficient; doi: 10.7326/M15-1536

In Figure 3, screening strategies on the line (the efficiency frontier) are considered efficient because they achieve the greatest gain per mammography compared with the strategy immediately below them. Strategies below the line are dominated. The slope of the efficiency frontier represents the additional life-years gained per additional mammography (Mandelblatt2016).

In this context, the choice of measures that are presented and discussed also influences decision behavior (Ariely2008). The same applies to changes in decision-making due to alternatives that are presented. Regarding optimization of vaccination interventions, temporal aspects of availability and effectiveness of vaccinations can be considered. Alternative strategies (for example, immediate vaccination with lower vaccine protection compared with later vaccination with expected higher effectiveness but risk of intermediate infection) can be evaluated.

4.2 Statistical decision theory

As a more general framework, statistical decision theory can help to make decisions on a formal basis. In this framework, the decision maker has to choose among a set of different actions $a \in \mathcal{A}$ by quantitatively assessing the consequences of these actions. To this end, we consider a loss function $L(\theta, a)$, where the unknown parameter $\theta \in \Theta$ refers to the “state of the world”. The interesting question for a statistician now is how to use data in order to make optimal decisions. Thus, assume we observe an experimental outcome $x$ with possible values in a set $\mathcal{X}$, which depends on the unknown parameter $\theta$. Furthermore, let $f(x \mid \theta)$ be the corresponding likelihood function. Then, we define a decision function $\delta: \mathcal{X} \to \mathcal{A}$ which turns data into actions (parmigiani2009decision). To choose between decision functions, we measure their performance by a risk function

$$R(\theta, \delta) = \operatorname{E}_{\theta}\bigl[L(\theta, \delta(X))\bigr].$$

Risk functions can be approached from either a frequentist perspective (e.g. the minimax decision rule) or a Bayesian perspective, where the risk is averaged with respect to a prior distribution $\pi(\theta)$. For a more thorough treatment of these concepts, we refer to parmigiani2009decision.
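
As a toy numerical illustration of comparing decision functions by their prior-averaged risk, consider a binary testing problem with two actions; all loss, likelihood and prior values below are invented for the example:

```python
# theta in {healthy, ill}; observation x in {neg, pos}; actions {no_treat, treat}.
loss = {("healthy", "no_treat"): 0.0, ("healthy", "treat"): 1.0,
        ("ill", "no_treat"): 10.0,    ("ill", "treat"): 2.0}
lik = {"healthy": {"neg": 0.95, "pos": 0.05},   # likelihood f(x | theta)
       "ill":     {"neg": 0.20, "pos": 0.80}}
prior = {"healthy": 0.9, "ill": 0.1}            # prior pi(theta)

def bayes_risk(rule):
    """Prior-weighted expected loss of the decision function delta = rule."""
    return sum(prior[t] * sum(lik[t][x] * loss[(t, rule[x])] for x in ("neg", "pos"))
               for t in prior)

ignore_data = {"neg": "no_treat", "pos": "no_treat"}  # delta that discards the data
follow_test = {"neg": "no_treat", "pos": "treat"}     # delta that acts on the test
# Using the data lowers the Bayes risk: 0.405 vs. 1.0 for ignoring the test.
```

The comparison shows, in miniature, why decision functions that exploit the data can outperform fixed actions under the chosen loss and prior.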

Examples for loss functions in the context of COVID-19 include negative reward functions in Markovian decision models (eftekhari2020markovian) or the social loss function as proposed in a recent discussion paper of the European Commission (buelens2021lockdown).

5 Reporting and Communication

For data analysis and modeling to have an impact as a component of decision making, appropriate reporting and communication are key. There are numerous standards for statistical reporting in the application areas, such as the ESS standard for quality reporting and the CONSORT, PRISMA and CHEERS guidelines, among others (see https://equator-network.org). These standards are based on commonly accepted core quality principles and values such as accuracy, relevance, timeliness, clarity, coherence, and reproducibility. For measures to restrain and overcome an epidemic effectively, communication among experts that follows the highest professional and ethical standards is not sufficient. In a democratic society, policy measures can only be implemented if they are based on the acceptance of the wider population. This puts high demands on the skills associated with communicating statistical evidence on the side of scientists, governments and the media, and on a citizenry able to understand statistical messages.

In recent decades, there have been numerous publications, initiatives, and ideas to improve the communication of quantitative and statistical information; see Hoffrage2261; Tufte2001; Rosling2011; otavamylona2020, to name only a few. Data journalism has recently taken off as an innovative component of news publishing, and COVID-19 provides numerous excellent examples, often using an interactive visual format on the Internet, such as dashboards. A fundamental problem in assessing probabilities lies, for example, in the intuitive conflation of subjective risks (“how likely am I to become infected”) and general risks (“how likely is it that some person will become infected”). Another issue is that of equating the sensitivity of a diagnostic test with its positive predictive value (Eddy1982; Gigerenzer2007; McDowell2019; Binder2020). In particular, the prevalence (or base rate) is often neglected, leading to this confusion. Fact boxes combined with icon arrays are recommended for the presentation of test results. Both representations are based on natural frequencies ([62]; Krauss2020) and present case numbers as simply and concretely as possible. Many scientific studies show that icon arrays help people understand numbers and risks more easily (e.g. McDowell2019). The Harding Center for Risk Literacy shows many other examples of transparent communication of risks, including COVID-19 (https://www.hardingcenter.de/de/mrna-schutzimpfung-gegen-covid-19-fuer-aeltere-menschen).
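
The gap between sensitivity and positive predictive value is easy to demonstrate with natural frequencies; the test characteristics and prevalences below are illustrative numbers only:

```python
# Positive predictive value via natural frequencies (counts in a cohort of n people).
def ppv(sensitivity, specificity, prevalence, n=100_000):
    ill = prevalence * n
    healthy = n - ill
    true_pos = sensitivity * ill               # ill people who test positive
    false_pos = (1 - specificity) * healthy    # healthy people who test positive
    return true_pos / (true_pos + false_pos)

# The same sensitive test, applied at low vs. high prevalence:
low = ppv(sensitivity=0.95, specificity=0.97, prevalence=0.001)  # about 3%
high = ppv(sensitivity=0.95, specificity=0.97, prevalence=0.10)  # about 78%
```

In natural-frequency terms: at a prevalence of 0.1%, 95 of the 100 ill people test positive, but so do roughly 3,000 of the 99,900 healthy people, so only about 3% of positive results are truly ill despite the 95% sensitivity.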

While human thinking tends towards pattern simplification and political communication also prefers a simple cause-effect relationship, real phenomena are often multivariate. Thus, when studying COVID-19 and predicting its spread, it is not only important to consider its symptomatology, the incidence and geographic distribution of diseases, population behavior patterns, government policies and impacts on the economy, on schools, on people in nursing homes and on social life as a whole, but to integrate these into the data analyses and the communication of results. Associations observed in the data can often be caused by third-party variables (confounders). In addition, much of the data comes from observational studies, which usually makes a robust causal attribution problematic. As many of these phenomena cannot be studied other than by observation (for ethical and feasibility reasons) causal attribution might be achieved as a scientific consensus opinion among scientists from the relevant disciplines that understand the complexity of the models and the subject matter studied.

Visual representations take a central position in public communication and aim to represent the corresponding dynamics and contents in a quickly understandable way. Usually, either time-dependent parameters or data with a spatial reference are visualized. For spatially distributed data, choropleth maps are predominantly used, in which administrative regions defined by the responsible health authorities are colored according to the distribution density of the infection figures or variables derived from them (see Figure 4). Their visual perception problems, such as the visual dominance of the area of administrative regions, which has no direct relation to infection events, are well known but still widespread. In addition, the use of ordinance thresholds as the basis for color scaling is often at odds with color schemes that emphasize real spatial distributional differences.

Figure 4: Choropleth map of the incidence figures for Germany by district. Source: Robert-Koch-Institute https://app.23degrees.io/export/oCRP768wQ3mCswE7-choro-corona-faelle-pro-100-000/image.

For time-dependent parameters, different variants of time series diagrams are used, predominantly line and column diagrams. The use of logarithmic scales in time series diagrams should be evaluated with caution. On the one hand, they tempt superficial readers to underestimate dynamic growth processes; on the other hand, they increase the demands on the mathematical and statistical literacy of the readership without corresponding advantages of visual representation. Figure 5 shows the time course of the 7-day incidence per 100,000 people between 24 January and 4 February 2021 for some selected countries. While the differences appear relatively small on the logarithmic scale, the linear scale shows considerable differences.

Figure 5: The 7-day incidence for different countries over the course of time. On the logarithmic scale (left graphic), differences seem small. The linear scale (right graphic), however, shows considerable differences. Source: Our World in Data, https://ourworldindata.org/covid-cases?country=IND USA GBR CAN DEU FRA.

6 Decision making in politics

We now briefly discuss some aspects inherent to political decision making.

6.1 Political accountability and performance measurement

The essential element of democracies is that governments are elected. Ideally, “good” government should lead to re-election and “bad” government should lead to the respective government being voted out of office (Przeworski2019). The existing statistical analyses on the pandemic thus serve two purposes. First, the analyses are of fundamental importance to citizens, as they provide data and metrics (e.g. infection rates, death rates, utilization of intensive care capacity, excess deaths in non-Corona contexts due to Corona-related resource shortages), which citizens can use to evaluate the performance of the government. This judgment then serves as an important cue for the voting decision. Second, in anticipation of this “threat” of being held accountable for its performance, the government makes its decisions so as to maximize its probability of getting re-elected. Statistical information is thus fundamental for the actions of both key actors in a democracy.

6.2 The specific nature of political decisions

The set of options out of which the government can choose consists of interventions. These interventions (measures) have short-term and long-term consequences (outcomes), which can be evaluated with the help of the methods of health decision science. To estimate what kind of intervention will have what particular effect on the relevant indicators, the government uses statistical analyses and scientific models. This requires knowledge of the causal relationships between the measures and both the “physical” effects (e.g. contact reduction) and the psychologically induced behavioral changes they bring about. The latter, however, rest on complex models of action: certain information and attitudes (trust in the government, willingness to comply with government orders, etc.), which in turn interact with situational perceptions (variation in the perception of one’s own risk of infection among different contacts, e.g. family members and good acquaintances vs. strangers), lead to the formation of certain behavioral dispositions and thus ultimately to the behavior itself. As all models rest on assumptions, the results produced by the models are inherently uncertain and depend on the accuracy of those assumptions. Strictly speaking, this “knowledge” is therefore a kind of justified belief and describes what it is reasonable to expect.

But even knowing what will be the case depending on whether intervention A or intervention B is chosen would not mean knowing whether the government should choose A or B. The results of the scientific models deliver expected outcomes, which can be represented by a vector of statistical key figures. But no infection, death or reproduction figures, no matter how high, can be used to directly deduce what should be done or how the relationship between benefit and risk, as well as economic and ethical aspects, should be evaluated.

To reach a decision, the government has to take into account the expected values of all indicators in the multidimensional outcome vector that are relevant for gaining acceptability, and to construct from these an overall evaluative ordering of the possible interventions. To achieve this, it is necessary to consider the trade-offs (Section 4.1) between different goals in the comparison of different interventions. The relevant decision is therefore not a banal maximization problem (maximizing life expectancy or minimizing the number of deaths); rather, the government must make trade-offs regarding the extent to which it wants to restrict opportunities and freedoms in order to achieve, for example, an increase in the probability of survival. These trade-offs are to be mapped and evaluated in the decision-analytic model.

It is not only about the trade-off between years of life gained and economic disadvantages of a lockdown, but also about many other aspects such as quality of life and aspects of morally significant value (Singer1972), for example, opportunities to visit our elderly relatives, social value of interpersonal contacts per se, freedom of persons, value of preserving a cultural scene, etc. These latter trade-offs can often only be partially represented in simulation models. Often trade-offs exist between benefits (gratifications) now and benefits delayed into the future. In this regard the “lockdown” vs. “Unlocked Economic Activity” decision is not simply a single decision where static payoffs can be weighed against each other. Rather, it also involves decisions that open other and new decision paths or close certain decision paths. In this respect, the constraints of a lockdown, for example, are in a good case “just” the price one pays for getting incidence levels down to the point where one can thereafter keep levels stable with far less severe measures. The main purpose of choosing such measures may be to prevent situations in which decisions lead to moral dilemmas (such as triage decisions, see e.g.  Gelinsky). Therefore, in addition to the model results, policy-makers must include social, ethical/moral, legal and wider aspects such as acceptability and feasibility in the overall policy consideration.

7 Conclusions and discussion

Reaching a decision based on data requires several steps, which we have illustrated in this paper: Data provide the basis for different kinds of models, which can be used for prediction, explanation and decision making. This forms the basis for making decisions within a formal framework. The results of these models must be communicated to non-scientists in order to gain acceptance of and compliance with political decisions. Each of these steps comes with its own caveats and requires sound statistical knowledge. It is important to note that the considerations described here not only apply to the current pandemic, but also extend to future pandemics (https://www.statnews.com/2021/05/18/luck-is-not-a-strategy-the-world-needs-to-start-preparing-now-for-the-next-pandemic/) and other (political) challenges such as the climate change debate. Recently, international recommendations have been developed to prepare for the future, discussing issues such as the development of data infrastructure, the use and misuse of statistics (rss), and the role of modeling in time-critical decisions on mitigation measures (EGE2020). The German Consortium in Statistics (DAGStat), a network of 13 statistical associations and the German Federal Office of Statistics (https://www.dagstat.de/en/about-us/cooperating-societies), initiated a collaboration of scientists with backgrounds in all areas of statistics as well as epidemiology, decision analysis and political science to critically discuss the role of data and statistics as a basis for decision-making, motivated by the COVID-19 pandemic. Results are published in (i) a white paper (DAGStat2021), (ii) an upcoming publication on data and data infrastructure in Germany, and (iii) this publication.

In the wake of the COVID-19 pandemic, the reporting of infection numbers and derived epidemiological indicators boomed, demonstrating with dramatic clarity the knowledge gap between experts, policymakers, and the public. Members of the public need at least an approximate understanding of quantities such as the rate of spread (see, e.g., Gigerenzer2007; gigerenzer2003simple). The Corona crisis brought into general public awareness that our social interaction and political decisions are essentially based on data, modeling, the weighing of risks and benefits, and thus on probability estimates and expected values. Clearly, we need additional efforts to promote statistical or data literacy at all levels of society. This issue has been raised by the DAGStat before; see DAGStat2020. Equally important, we as the scientific community need to increase our efforts to make ourselves more understandable.

A pandemic poses particular challenges to society as a whole. In order to tackle these as efficiently as possible, interdisciplinary cooperation, as fostered e.g. by the DAGStat, is essential. In this paper, we have therefore incorporated and combined aspects from different disciplines. In doing so, we found that similar concepts are often considered in different areas, but that different notations and wordings can hinder transferability. In this sense, this paper also aims at bridging the gaps between disciplines and widening the research focus of the statistical disciplines.


Conflict of interest

The authors declare that they have no conflict of interest.

Availability of data and material

Not applicable.

Code availability

Not applicable.

Authors’ contributions