1 Introduction
The objective of most statistical analyses is to make inferences about the data generating process underlying a randomized trial or an observational study. In practice, statistical inference is concerned with purely data-driven tasks, such as prediction, estimation (parameter fitting), and hypothesis testing. In recent decades, the advent of causal inference has triggered a shift in focus, particularly within the data analysis community, toward a territory that has traditionally evaded statistical reach: the causal mechanism underlying a data generating process. Statistical inference relies on patterns present in the observed data (i.e. statistical correlation) and is therefore unable, alone, to answer questions of a causal nature (Pearl, 2010; Pearl et al., 2016). Nonetheless, questions about cause and effect are of prime importance in all scientific fields.
Causal inference augments statistical inference by clearly delineating the possible questions a practitioner can answer from the observed data. Causal inference requires an a priori specified set of (untestable) assumptions about the data generating mechanism. Once the practitioner posits a causal model, encoded in the language of graphical models or structural equations, the initial steps of the causal inference process provide an algorithmic framework for a transparent and explicit translation of the desired causal quantity into a statistical estimand (i.e. a function of the observed data distribution).
This translation is not guaranteed, as it depends on the underlying scientific question, the structure of the causal model, and the observed data. (Identifiability of causal effects is discussed in more detail in Section 2.4.) Formal frameworks for causal inference, however, do provide guidance on further data collection efforts and additional assumptions needed for such translation. Altogether these frameworks yield a statistical parameter that as closely as possible matches the underlying scientific question and thereby ensure that the researcher’s objective is driving the statistical analysis (as opposed to letting the statistical analysis determine the question asked and answered).
In this paper, we review a general framework for causal inference in Data Science. As presented previously (van der Laan and Rose, 2011; Petersen and van der Laan, 2014; Balzer et al., 2016), the “Causal Roadmap” involves (1) specifying the scientific question; (2) building an accurate causal model of our knowledge; (3) defining the target causal quantity; (4) linking the observed data to the causal model; (5) assessing the identifiability of our causal parameter; (6) estimating the resulting statistical parameter; and finally (7) interpreting the resulting estimates. Each step of the Roadmap is illustrated with an example from HIV prevention and treatment.
2 The Roadmap for Causal Inference
2.1 Specify the Scientific Question
The first step of the Roadmap is to specify our scientific question. This helps frame our objective in a more detailed way, while incorporating knowledge about the study. In particular, we need to specify the target population, the exposure, and the outcome of interest. Consider, for example, the potential impact of pregnancy on clinical outcomes among HIV-positive women. While optimizing virologic outcomes is essential to preventing mother-to-child transmission of HIV, the prenatal period could plausibly disrupt or enhance HIV care for a pregnant woman (Joint United Nations Programme on HIV/AIDS (UNAIDS), 2014). Thus, as our running example, we ask: what is the effect of pregnancy on HIV RNA viral suppression (<500 copies/mL) among HIV-positive women of childbearing age (15-49 years) in East Africa?
This question provides a clear definition of the study variables and the objective of our research. It also makes explicit that the study only makes claims about the effect of a specific exposure (pregnancy), outcome (HIV RNA viral suppression), and target population (HIV-positive women of childbearing age in East Africa). Any claims outside the study context (e.g. a different exposure, outcome, or target population) represent distinct questions and thus separate trips down the Roadmap. The temporal cues present in the research question are of particular importance. They separate the putative cause (pregnancy) from the effect of interest (viral suppression), and they are frequently used as a basis for specifying the causal model, our next step.
2.2 Specify the Causal Model
One of the appealing features of causal modeling, and perhaps the reason behind its success, is the rich and flexible language used to encode mechanisms underlying a data generating process. Here, we focus on Pearl (2000)’s structural causal models, which bring together causal graphs and structural equations in one unified framework (Pearl, 1988; Goldberger, 1972; Duncan, 1975). Structural causal models formalize our knowledge (however limited) of the study, including the relationships between variables and the role of unmeasured factors. The Neyman-Rubin potential outcomes framework is an alternative, but is not covered here (Neyman, 1923; Rubin, 1974; Holland, 1986; Rubin, 1990).
Let us consider again our running example of the impact of pregnancy on HIV viral suppression among women in East Africa. Denote as W the set of baseline (pre-exposure) covariates, including demographics and past HIV care (e.g. use of antiretroviral therapy). The exposure A is a binary variable indicating that the woman is known to be pregnant. The outcome Y is a binary indicator of currently having suppressed HIV viral replication: <500 copies per mL. These constitute the set of endogenous variables, denoted X = (W, A, Y). They are believed to be the most relevant to the research question. Endogenous variables can either be observed or latent (i.e. unobserved but believed to be essential to the scientific question).
Each endogenous variable is associated with a latent (or unobserved) background factor; together, these factors U = (U_W, U_A, U_Y) are called the set of exogenous variables. These variables can be thought of as error terms associated with each endogenous variable. They account for all other unobserved sources that might influence each of the endogenous variables and can share common components. In our example, unmeasured background factors might include the date of HIV infection, the date of conception, her partner’s HIV status, and her genetic profile.
Causal Graphs:
The “causal story” of the data can be conveyed using the language of graphs (Pearl et al., 2016). Graphical models consist of a set of nodes (or vertices) representing the exogenous and endogenous variables, and a set of directed or undirected edges connecting these nodes. Here, we are interested in graphical models that are directed and acyclic (i.e. all edges are directed and no directed path leads from a node back to itself). This particular representation provides for an equivalence between causal assumptions and graphical structure.
In Directed Acyclic Graphs (DAGs), two nodes are adjacent if there exists an edge between them, and a path between two nodes V_1 and V_2 is a sequence of adjacent nodes starting from V_1 and ending in V_2. If an edge is directed from node V_1 to node V_2, then V_1 is the parent of V_2, and V_2 is the child of V_1. More generally, for any directed path that starts from node V_1, the nodes included in this path are descendants of V_1, and V_1 is an ancestor of all the nodes in this set.
We can now use the language of graphical models to explicitly encode our causal assumptions about the underlying data generating process. Specifically, a variable V_1 is a direct cause of another variable V_2 if V_2 is a child of V_1 in the causal graph. More generally, V_1 is a cause of V_2 if V_2 is a descendant of V_1 in the causal graph (Pearl, 2000).
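This graph vocabulary is easy to operationalize. The sketch below (our own illustration, not from the paper) encodes the running example's graph (baseline covariates W, pregnancy A, suppression Y, and shared unmeasured factors U) as a parent-to-children adjacency map and recovers children and descendants:

```python
# Hypothetical encoding of the running example's causal graph:
# each key maps a node to the list of its children.
dag = {
    "U": ["W", "A", "Y"],  # unmeasured background factors
    "W": ["A", "Y"],       # baseline covariates
    "A": ["Y"],            # pregnancy (exposure)
    "Y": [],               # HIV viral suppression (outcome)
}

def children(graph, node):
    """Nodes with an edge directed from `node` into them."""
    return set(graph[node])

def descendants(graph, node):
    """All nodes reachable from `node` along directed edges."""
    found, stack = set(), [node]
    while stack:
        for child in graph[stack.pop()]:
            if child not in found:
                found.add(child)
                stack.append(child)
    return found

# A is a direct cause of Y, and every endogenous variable descends from U.
assert children(dag, "A") == {"Y"}
assert descendants(dag, "U") == {"W", "A", "Y"}
```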
To illustrate, Figure 1(a) provides a DAG corresponding to our running example. From this graph, we can make the following statements:

- Baseline characteristics W may affect whether a woman is pregnant A and her HIV viral suppression status Y.

- Being pregnant A may affect her HIV viral suppression status Y.

- Unmeasured factors U may affect a woman’s baseline characteristics, her fertility, and her suppression outcome.
In Figure 1(a), a single node U represents all the common, unmeasured factors that could impact the baseline covariates, exposure, and outcome. In an alternative representation in Figure 1(b), we have explicitly shown each exogenous variable (U_W, U_A, U_Y) as a separate node and as a parent to its corresponding endogenous variable (W, A, Y), respectively. Dashed double-headed arrows denote shared unmeasured sources.
Both representations make explicit that there could be unmeasured common causes of the covariates W and the exposure A, of the exposure A and the outcome Y, and of the covariates W and the outcome Y. In other words, both measured confounding and unmeasured confounding are present in this study. Altogether, we have not made any unsubstantiated assumptions about the causal relationships between the variables. Instead, we have only relied on the assumed time-ordering between the variables.
Causal graphs can be extended to accommodate more complicated data structures. Suppose, for example, plasma HIV RNA viral levels are missing for some women in our population of interest. Then we could modify our causal model to account for incomplete measurement. Specifically, we retain the exposure node for pregnancy and introduce a new intervention node: an indicator that the plasma HIV RNA level is measured. The resulting causal graph is represented in Figure 2. We refer the readers to Mohan et al. for a detailed discussion of formulating a causal model for the missingness mechanism and to Petersen et al. for a real-world application handling missingness on both HIV status and viral loads (Mohan et al., 2013; Petersen et al., 2017; Balzer et al., 2017).
Other extensions can also be made to account for common complexities, such as longitudinal exposures, censoring, effect mediation, and dynamic (i.e. personalized) treatment regimes. In subsequent steps, we discuss how altering the causal graph, particularly by removing edges, is equivalent to making additional assumptions about the data generating process. Before doing so, however, we present the causal model in its structural form.
Non-Parametric Structural Equations:
Structural causal models also encode information about the data generating process with a nonparametric set of equations. This structural form facilitates the isolation of the causal effect of interest by decoupling the unmeasured factors from the endogenous variables at each step of the data generating process. Intuitively, a structural causal model describes the relevant features of the world that allow nature to assign values to variables of interest (Pearl, 2000; Pearl et al., 2016).
Formally, we define a structural causal model, denoted M, by the set of exogenous variables U, the set of endogenous variables X, and a set of functions F that deterministically assign a value to each variable in X, given as input the values of other variables in X and U. These nonparametric structural equations allow us to expand our definition of causal assumptions (Pearl, 2000; Pearl et al., 2016). A variable V_1 is considered to be a direct cause of a variable V_2 if V_1 appears in the function assigning a value to V_2. Variable V_1 is also a cause of variable V_2 if V_1 is a direct cause of V_2 or of any of the causes (parents) of V_2.
In our HIV viral suppression example, the corresponding structural equations are

    W = f_W(U_W)
    A = f_A(W, U_A)
    Y = f_Y(W, A, U_Y)

where the set of functions F = (f_W, f_A, f_Y) encodes the mechanism deterministically generating the value of each endogenous variable. The set U = (U_W, U_A, U_Y) includes the exogenous variables associated with each endogenous variable in X = (W, A, Y). The unmeasured variables U have a joint probability distribution P_U, which, coupled with the set of structural equations F, gives rise to a particular data generating process that is compatible with the causal assumptions implied by the structural causal model M.

In our example, for a given probability distribution P_U and set of structural equations F, the structural causal model M describes the following data generating process. For each woman,

1. Draw the exogenous variables U from the joint probability distribution P_U. Intuitively, when we sample a woman from the population, we obtain all the unmeasured variables that could influence her baseline covariates, pregnancy status, and suppression outcome.

2. Generate the baseline covariates W deterministically, using U_W as input to the function f_W; the measured baseline characteristics include demographic and clinical factors, such as age, education attained, socioeconomic status, use of antiretroviral therapy, and prior suppression status.

3. Generate pregnancy status A deterministically, using W and U_A as inputs to the function f_A. Recall A is an indicator equaling 1 if the woman is known to be pregnant and 0 otherwise.

4. Generate the HIV suppression outcome Y deterministically, using W, A, and U_Y as inputs to the function f_Y. Recall Y is an indicator equaling 1 if her HIV RNA viral level is less than 500 copies per mL and 0 otherwise.
It is important to note that the set of structural equations is nonparametric. In other words, the explicit relationships between the system variables, as captured by the set of equations F, are left unspecified. If knowledge is available regarding a relationship of interest, it can be readily incorporated into the structural equations. For instance, in a two-armed randomized trial with equal allocation, the function that assigns a value to the exposure variable can be explicitly encoded as f_A(W, U_A) = I(U_A < 0.5), where I(·) is an indicator function and U_A can be taken as uniformly distributed on [0, 1].
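To make the data generating process concrete, the following sketch simulates one compatible structural causal model. The particular distributions and functional forms (uniform exogenous draws, the thresholds inside each f) are hypothetical choices for illustration, not specified by the Roadmap itself:

```python
import random

def draw_woman(rng):
    # 1. Draw the exogenous variables U = (U_W, U_A, U_Y) from P_U
    #    (here, independent uniforms; in general they may be dependent).
    u_w, u_a, u_y = rng.random(), rng.random(), rng.random()
    # 2. W = f_W(U_W): a binary baseline covariate, e.g. prior ART use.
    w = int(u_w < 0.6)
    # 3. A = f_A(W, U_A): pregnancy status, allowed to depend on W.
    a = int(u_a < 0.3 + 0.2 * w)
    # 4. Y = f_Y(W, A, U_Y): viral suppression (<500 copies/mL).
    y = int(u_y < 0.4 + 0.3 * w - 0.1 * a)
    return w, a, y

rng = random.Random(0)
sample = [draw_woman(rng) for _ in range(5)]
```

In a two-armed randomized trial with equal allocation, step 3 would instead read `a = int(u_a < 0.5)`, encoding f_A(W, U_A) = I(U_A < 0.5).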
2.3 Define the Target Causal Quantity
Once the causal model is specified, we are licensed to ask questions of a causal nature. The rationale comes from the observation that the structural causal model is not restricted to the particular setting of the study, but can also describe the same system under (carefully) changed conditions. Since the structural equations are autonomous, we can modify our causal model to evaluate hypothetical, counterfactual scenarios that would otherwise never be realized, but that correspond to our underlying scientific questions of interest.
In our running example, we are interested in the effect of pregnancy on HIV RNA viral suppression. To translate this scientific question into a well-defined causal quantity, we can intervene on the exposure (pregnancy) to deterministically set A = 1 for each woman in the target population in one scenario, and then set A = 0 in another, while keeping everything else constant (i.e. the same population of women over the same time frame).
The post-intervention causal graph is given in Figure 3, and the structural equations become

    W = f_W(U_W)
    A = a
    Y_a = f_Y(W, a, U_Y)

These interventions generate the counterfactual outcomes Y_a for a ∈ {0, 1}. These causal quantities represent a participant’s HIV viral suppression status if, possibly contrary to fact, her pregnancy status were A = a (1: yes, 0: no).
Using our causal model M, endowed with a joint probability distribution over the exogenous and endogenous variables (U, X), we can define the target causal parameter in terms of the distribution of counterfactual outcomes. One common choice is the Average Treatment Effect (ATE), given by

    Psi^F = E(Y_1) - E(Y_0)    (1)
where the expectation is taken over the target population of interest. The target causal parameter represents the difference between the expected counterfactual outcome (HIV viral suppression) if all women in the target population (HIV-positive, of childbearing age) were pregnant and the expected counterfactual outcome if the same women were not pregnant. In this example, it is both physically impossible and unethical to design an experiment to measure these counterfactual outcomes. Nonetheless, counterfactuals provide a language to address pressing questions in many data-driven fields, including Public Health, Medicine, Education, Economics, and Political Science.
Before discussing how these causal quantities can be identified with the observed data, we emphasize that for simplicity we have focused on a binary exposure, occurring deterministically at a single time point. Scientific questions corresponding to categorical, continuous, stochastic, and longitudinal exposures are also encompassed in this framework, but beyond the scope of this primer. We also note that other summary measures are possible (e.g. sample average, ratios, marginal structural models) and may better capture the researcher’s scientific question.
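Because a simulated structural causal model is fully known, we can carry out the intervention do(A = a) in code and approximate the ATE directly. The structural equations and parameters below are hypothetical, chosen only for illustration:

```python
import random

def counterfactuals(rng):
    # Draw the exogenous variables for one woman.
    u_w, u_y = rng.random(), rng.random()
    w = int(u_w < 0.6)                                # W = f_W(U_W)
    f_y = lambda w_, a, u: int(u < 0.4 + 0.3 * w_ - 0.1 * a)
    y1 = f_y(w, 1, u_y)   # Y_1: suppression had she been pregnant
    y0 = f_y(w, 0, u_y)   # Y_0: suppression had she not been pregnant
    return y1, y0

rng = random.Random(1)
draws = [counterfactuals(rng) for _ in range(100_000)]
# Monte Carlo approximation of Psi^F = E(Y_1) - E(Y_0); the true value
# built into this simulation is -0.1.
ate = sum(y1 - y0 for y1, y0 in draws) / len(draws)
```

Note that Y_1 and Y_0 share the same exogenous draw U_Y: the intervention changes only the exposure, keeping everything else constant.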
2.4 Link the Observed Data to the Causal Model
Thus far, we have discussed the structural causal model M, capturing the causal assumptions about the study variables, and used counterfactuals to translate our study question into a clearly defined causal parameter. The next step is to provide an explicit link between the observed data and the specified structural causal model.
Returning to our running example, suppose we have a simple random sample of n women from our target population. On each woman, we measure her baseline covariates W (e.g. demographic and clinical factors), pregnancy status A, and suppression outcome Y. These measurements constitute the observed data O = (W, A, Y) for each woman in our sample. Therefore, we have n independent, identically distributed copies of O, which are drawn from some probability distribution P_O. (As before, we note that other sampling schemes (e.g. case-control) are accommodated by this framework, but are beyond the scope of this primer.)
If we believe that our causal model accurately describes the data generating process, we can assume that the observed data are generated by sampling repeatedly from a distribution compatible with the structural causal model. In other words, the structural causal model provides a description of the study under existing conditions (i.e. the real world) and under specific interventions (i.e. counterfactual worlds). As a result, the observed outcome Y equals the counterfactual outcome Y_a when the observed exposure equals the exposure of interest (i.e. Y = Y_a when A = a).
In our example, all the endogenous variables are observed (i.e. O = X); therefore, we can write

    P(O = o) = Σ_u P(W = w, A = a, Y = y | U = u) P_U(U = u)    (2)

where an integral replaces the summation for continuous variables. From this equation, we see that the structural causal model, defined by the collection of all possible joint distributions of the exogenous and endogenous variables (U, X), implies the statistical model, defined as the collection of all possible joint distributions for the observed data O. The structural causal model rarely implies restrictions on the resulting statistical model, which is thereby often nonparametric. The D-separation criteria of Pearl (2000) can be used to evaluate what (if any) statistical assumptions are implied by the causal model. The true observed data distribution P_O is an element of the statistical model.

2.5 Assessing Identifiability
In the previous section, we established a bridge between our structural causal model and our statistical model. However, we have not yet discussed the conditions under which causal assumptions and observed data can be combined to answer causal questions. Structural causal models provide general tools that allow the practitioner to determine whether a target causal quantity can be translated into a statistical estimand, i.e. a well-defined function of the observed data distribution P_O.
Recall from Section 2.3 that we defined our target causal quantity as the average treatment effect Psi^F = E(Y_1) - E(Y_0): the difference in the expected viral suppression status if all women were pregnant versus if none were. If, given a causal model and its link to the observed data, the target causal parameter can be expressed as a function of the observed data distribution P_O, then the causal parameter is called identifiable. If not, we can still explicitly state and evaluate the assumptions needed to render the target causal parameter identifiable from the observed data distribution.
One of the main tools for assessing the identifiability of causal quantities is a set of criteria based on causal graphs. In general, these criteria provide a systematic approach to identifying an appropriate adjustment set for the target causal quantity. Here, we focus on identifiability for the effect of a single intervention (a.k.a. a “point-treatment” effect) and present the backdoor criterion and the frontdoor criterion. For a detailed presentation of graphical methods for assessing identifiability in causal graphs, the reader is referred to Pearl (2000); Pearl et al. (2016).
Formally, given an ordered pair of variables (A, Y) in a directed acyclic graph, a set of variables Z is said to satisfy the backdoor criterion with respect to (A, Y) if (1) no node in Z is a descendant of A, and (2) Z blocks every backdoor path (i.e. a path that contains an arrow into A) between A and Y. The rationale behind this criterion is that, for Z to be an appropriate adjustment set that isolates the causal effect of A on Y, we must block all spurious paths between A and Y, while leaving directed paths from A to Y unblocked. This criterion does not, however, cover all possible graph structures.

Alternatively, a set of variables Z satisfies the frontdoor criterion with respect to an ordered pair of variables (A, Y) if (1) all directed paths from A to Y are blocked by Z, (2) there are no unblocked backdoor paths from A to Z, and (3) all paths from Z to Y containing an arrow into Z are blocked by A. One notices that the frontdoor criterion is more involved than its backdoor counterpart, in the sense that it requires more stringent conditions to hold for a given adjustment set to satisfy identifiability of the causal effect of interest. In practice, it is often the case that the backdoor criterion is enough to identify the needed adjustment set, especially in point-treatment settings (i.e. when the exposure occurs at a single point in time).
In our running example, the set of baseline covariates W (the proposed adjustment set) will satisfy the backdoor criterion with respect to the effect of pregnancy A on HIV viral suppression Y if the following two conditions hold:

1. No node in W is a descendant of A.

2. All backdoor paths from A to Y are blocked by W.
Looking at the posited causal graph in Figure 1(a), we see that the first condition holds, but the second is violated: there exists a backdoor path from A to Y through the unmeasured background factors U. Therefore, our target causal quantity is not identifiable under the current causal model. Intuitively, the unmeasured common causes (a.k.a. confounders) of pregnancy and HIV viral suppression obstruct the isolation of the causal effect of interest.
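The backdoor criterion can also be checked mechanically. The sketch below (our own illustration; the graph encoding and function names are assumptions) enumerates paths in the skeleton of a small DAG and applies the standard d-separation blocking rules, confirming that W fails as an adjustment set in the Figure 1(a) graph but suffices once U sends no arrow into A:

```python
def _children(edges, node):
    return {b for (a, b) in edges if a == node}

def _neighbors(edges, node):
    return _children(edges, node) | {a for (a, b) in edges if b == node}

def descendants(edges, node):
    found, stack = set(), [node]
    while stack:
        for c in _children(edges, stack.pop()):
            if c not in found:
                found.add(c)
                stack.append(c)
    return found

def simple_paths(edges, x, y):
    # All simple paths from x to y, ignoring edge direction.
    paths, stack = [], [[x]]
    while stack:
        path = stack.pop()
        if path[-1] == y:
            paths.append(path)
            continue
        for nxt in _neighbors(edges, path[-1]):
            if nxt not in path:
                stack.append(path + [nxt])
    return paths

def blocked(edges, path, Z):
    # d-separation: a path is blocked by Z if some non-collider on it is
    # in Z, or some collider (with all its descendants) lies outside Z.
    for i in range(1, len(path) - 1):
        a, m, b = path[i - 1], path[i], path[i + 1]
        if (a, m) in edges and (b, m) in edges:        # collider a -> m <- b
            if m not in Z and not (descendants(edges, m) & Z):
                return True
        elif m in Z:                                   # chain or fork
            return True
    return False

def satisfies_backdoor(edges, x, y, Z):
    # (1) Z holds no descendant of x; (2) Z blocks every path from x to y
    # that starts with an arrow into x.
    if Z & descendants(edges, x):
        return False
    return all(blocked(edges, p, Z)
               for p in simple_paths(edges, x, y) if (p[1], p[0]) in edges)

# Figure 1(a): shared unmeasured factors U point into W, A, and Y.
fig1 = {("U", "W"), ("U", "A"), ("U", "Y"),
        ("W", "A"), ("W", "Y"), ("A", "Y")}
ok_fig1 = satisfies_backdoor(fig1, "A", "Y", {"W"})  # False: A <- U -> Y is open

# Without an arrow from U into A (a Figure 4-style world), W suffices.
fig4 = fig1 - {("U", "A")}
ok_fig4 = satisfies_backdoor(fig4, "A", "Y", {"W"})  # True
```

This brute-force path enumeration is only practical for small graphs; dedicated tools implement d-separation checks efficiently.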
Nonetheless, we can explicitly state and consider the plausibility of the causal assumptions needed for identifiability. In particular, the following independence assumptions are enough to satisfy the backdoor criterion and thus identify the causal effect in this point-treatment setting:

1. There must not be any unmeasured common causes of the exposure (pregnancy) and the outcome (viral suppression): U_A ⊥ U_Y; together with either

  a. no unmeasured common causes of the exposure (pregnancy) and the baseline covariates: U_A ⊥ U_W, or

  b. no unmeasured common causes of the baseline covariates and the outcome (viral suppression): U_W ⊥ U_Y.

These assumptions are reflected in the causal graphs shown in Figure 4. When the backdoor criterion holds, we can be sure that all of the observed association between the exposure and the outcome is due to the causal effect of interest (as opposed to spurious sources of correlation).
The independence assumptions in 1.a (U_A ⊥ U_Y and U_A ⊥ U_W) hold by design in a (stratified) randomized trial, where the unmeasured factors determining the exposure assignment (e.g. a coin flip) are independent of all other unmeasured factors. As a result, these independence assumptions (1.a and/or 1.b) are sometimes called the randomization assumption and are equivalently expressed as Y_a ⊥ A | W for a ∈ {0, 1}.
With these assumptions, we can express the distribution of the counterfactual outcomes in terms of the distribution of the observed data:

    E(Y_a) = Σ_w E(Y_a | W = w) P(W = w)
           = Σ_w E(Y_a | A = a, W = w) P(W = w)
           = Σ_w E(Y | A = a, W = w) P(W = w)

where the summation generalizes to an integral for continuous covariates. The first equality is by the law of iterated expectations; the second by the randomization assumption; and the final by the established link between the causal and statistical models (Section 2.4).
Likewise, we can translate our target causal parameter, the average treatment effect Psi^F, into a statistical estimand, often called the G-computation identifiability result (Robins, 1986):

    Psi(P_O) = Σ_w [ E(Y | A = 1, W = w) - E(Y | A = 0, W = w) ] P(W = w)    (3)

where the summation generalizes to an integral for continuous covariates. Thus, our statistical target is the difference between the expected outcome, given the exposure and covariates, and the expected outcome, given no exposure and covariates, averaged with respect to the distribution of the baseline covariates W. In our example, Psi(P_O) is the difference in the strata-specific probabilities of viral suppression, given pregnant or not, standardized with respect to the distribution of adjustment strata W.
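The identification result can be checked numerically. In the sketch below (a hypothetical data generating process with a single binary covariate and independent exogenous draws, so that the randomization assumption holds), the standardized estimand of Eq. (3) recovers the true average treatment effect computed from the simulated counterfactuals:

```python
import random

rng = random.Random(2)
units = []
for _ in range(200_000):
    w = int(rng.random() < 0.6)
    a = int(rng.random() < 0.3 + 0.2 * w)      # exposure, confounded by W only
    u_y = rng.random()
    y1 = int(u_y < 0.4 + 0.3 * w - 0.1)        # counterfactual Y_1
    y0 = int(u_y < 0.4 + 0.3 * w)              # counterfactual Y_0
    y = y1 if a == 1 else y0                   # observed outcome: Y = Y_A
    units.append((w, a, y, y1, y0))

# Causal quantity E(Y_1) - E(Y_0): available only because we simulated.
true_ate = sum(y1 - y0 for (_, _, _, y1, y0) in units) / len(units)

# Statistical estimand, Eq. (3), using only the observed (W, A, Y).
def mean_y(a, w):
    ys = [y for (w_, a_, y, _, _) in units if w_ == w and a_ == a]
    return sum(ys) / len(ys)

p_w = {w: sum(1 for u in units if u[0] == w) / len(units) for w in (0, 1)}
psi = sum((mean_y(1, w) - mean_y(0, w)) * p_w[w] for w in (0, 1))
# psi agrees with true_ate (about -0.1) up to Monte Carlo error.
```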
The same quantity can be expressed in inverse probability weighting form:

    Psi(P_O) = E[ ( I(A = 1) / P(A = 1 | W) - I(A = 0) / P(A = 0 | W) ) Y ]    (4)

The latter representation highlights an additional data support condition, known as positivity: each exposure level of interest must occur with a positive probability within all strata of the adjustment set W, i.e. P(A = a | W = w) > 0 for each exposure level a and all w with P(W = w) > 0. We refer the reader to Petersen et al. (2012) for a discussion of this assumption and approaches when it is theoretically or practically violated.
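Eq. (4) and the positivity condition can likewise be illustrated on simulated data (the data generating process below is hypothetical). The empirical exposure mechanism doubles as a positivity check before it is inverted as a weight:

```python
import random

rng = random.Random(5)
data = []
for _ in range(200_000):
    w = int(rng.random() < 0.6)
    a = int(rng.random() < 0.3 + 0.2 * w)
    y = int(rng.random() < 0.4 + 0.3 * w - 0.1 * a)
    data.append((w, a, y))

# Estimated exposure mechanism g(1 | w) = P(A = 1 | W = w).
g1 = {}
for w in (0, 1):
    stratum = [a for (w_, a, _) in data if w_ == w]
    g1[w] = sum(stratum) / len(stratum)

# Positivity: both exposure levels occur within every stratum of W.
assert all(0 < g1[w] < 1 for w in (0, 1))

# IPW estimate of Psi(P_O), Eq. (4); about -0.1 in this simulation.
psi_ipw = sum((a / g1[w] - (1 - a) / (1 - g1[w])) * y
              for (w, a, y) in data) / len(data)
```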
Overall, the identifiability step is essential to specifying the needed adjustment set and linking our wished-for causal effect to a parameter estimable from the observed data distribution. Even in a point-treatment setting, there are many sources of association between our exposure and outcome of interest, including measured confounding, unmeasured confounding, selection bias, and common statistical paradoxes (e.g. Berkson’s bias and Simpson’s Paradox). Furthermore, in the setting of longitudinal exposures with time-dependent confounding, the needed adjustment set may not be intuitive, and the shortcomings of traditional approaches become more pronounced (Robins et al., 2000; Robins and Hernán, 2009; Pearl et al., 2016). Indeed, methods to distinguish between correlation and causation are crucial in the era of “Big Data”, where the number of variables is growing with increasing volume, variety, and velocity (Rose, 2012; Marcus and Davis, 2014; Balzer et al., 2016).
Nonetheless, it is important to note that specifying a causal model (Step 2) does not guarantee the identification of a causal effect (Step 5). Causal frameworks do, however, provide insight into the limitations and the full extent of the questions that can be answered given the data at hand. They further facilitate the discussion of modifications to the study design, the measurement of additional variables, and sensitivity analyses (Robins et al., 1999; Imai et al., 2010; VanderWeele and Arah, 2011; Díaz and van der Laan, 2013).
In fact, even if the causal effect is not identifiable (e.g. Figure 1), the Causal Roadmap still provides us with a statistical parameter (e.g. Eq. 3) that comes as close as possible to the causal effect of interest, given the limitations of the observed data. In the next sections, we discuss estimation of this statistical parameter and use the identifiability results (or lack thereof) to inform the strength of our interpretations.
2.6 Estimate the Target Statistical Parameters
Once the statistical model and estimand have been defined, the causal framework returns to traditional statistical inference to estimate parameters of a given observed data distribution. Popular methods for estimation and inference for Psi(P_O), which would equal the average treatment effect if the identifiability assumptions held, include parametric G-computation, Inverse Probability Weighting (IPW), and Targeted Maximum Likelihood Estimation (TMLE) (e.g. Robins (1986); Robins et al. (2000); Bodnar et al. (2004); van der Laan and Rubin (2006); Hernán and Robins (2006); Cole and Hernán (2008); Taubman et al. (2009); Robins and Hernán (2009); Young et al. (2011); Westreich et al. (2012); van der Laan and Rose (2011); Petersen et al. (2014); Schnitzer et al. (2014); Gruber and van der Laan (2015); Benkeser et al. (2017); Zhang et al. (2017)).
Here, we focus on substitution estimators based on the G-computation identifiability result defined in Eq. (3). Specifically, we can implement parametric G-computation (a.k.a. a simple substitution estimator) with the following steps.

1. Regress the outcome (HIV viral suppression) Y on the exposure (pregnancy) A and the covariate adjustment set W to estimate the conditional expectation E(Y | A, W).

2. Based on the estimates from Step 1, generate the predicted outcomes (viral suppression status) for each woman in the target population, while deterministically setting the value of the exposure (pregnancy) to the desired level (1: yes, 0: no), but keeping the covariates the same:

    Ê(Y | A = 1, W_i) and Ê(Y | A = 0, W_i)

For a binary outcome, this step corresponds to generating the predicted probabilities of suppression under each exposure level a ∈ {0, 1}.

3. Obtain a point estimate by taking the sample average of the differences in the predicted outcomes from Step 2:

    Ψ̂ = (1/n) Σ_{i=1}^{n} [ Ê(Y | A = 1, W_i) - Ê(Y | A = 0, W_i) ]

where the sample average serves as the nonparametric maximum likelihood estimator of the covariate distribution P(W) under the empirical distribution.
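The three steps can be sketched on simulated data (the data generating process is hypothetical). With a single binary covariate, a saturated regression in Step 1 reduces to stratum-specific empirical means; with many covariates, a parametric model or machine learning would be substituted:

```python
import random

rng = random.Random(3)
data = []
for _ in range(100_000):
    w = int(rng.random() < 0.6)
    a = int(rng.random() < 0.3 + 0.2 * w)
    y = int(rng.random() < 0.4 + 0.3 * w - 0.1 * a)
    data.append((w, a, y))

# Step 1: "regress" Y on (A, W); a saturated fit = stratum-specific means.
Q = {}
for a in (0, 1):
    for w in (0, 1):
        ys = [y for (w_, a_, y) in data if a_ == a and w_ == w]
        Q[(a, w)] = sum(ys) / len(ys)

# Step 2: predicted outcomes for every woman under A = 1 and A = 0,
# holding her covariates fixed.
pred1 = [Q[(1, w)] for (w, _, _) in data]
pred0 = [Q[(0, w)] for (w, _, _) in data]

# Step 3: sample average of the differences in predictions.
psi_hat = sum(p1 - p0 for p1, p0 in zip(pred1, pred0)) / len(data)
# psi_hat is about -0.1, the true effect built into this simulation.
```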
The performance of this simple substitution estimator depends on consistent estimation of the conditional expectation of the outcome, given the exposure and covariates: E(Y | A, W). If sufficient background knowledge is available, a parametric regression model can be used. (Technically, if this knowledge were available, it should already be encoded in the causal model, i.e. yielding parametric structural equations in Step 2.) This, however, is usually not the case in real-world studies with a large number of covariates and potentially complicated relationships. Furthermore, we want to avoid introducing new (and unsubstantiated) assumptions during the estimation step. While nonparametric methods (e.g. stratification) can break down due to sparsity, recent advances in machine learning can help us learn these complex relationships without introducing new assumptions.
Targeted Maximum Likelihood Estimation (TMLE) provides a general approach to constructing doubly robust, efficient substitution estimators (van der Laan and Rubin, 2006; van der Laan and Rose, 2011). TMLE employs Super Learner, an ensemble machine learning algorithm, to provide an initial estimator of the conditional mean outcome E(Y | A, W) (van der Laan et al., 2007). This initial estimator is then updated using information from the exposure mechanism P(A | W), also estimated with Super Learner. Targeted predictions of the outcome under each exposure level of interest are generated, and finally a point estimate is obtained by taking the average difference in these targeted predictions.
The updating step in TMLE serves to solve the efficient score equation (a.k.a. the efficient influence curve equation) and to reduce statistical bias for the target estimand Psi(P_O). As a result, TMLE, under standard regularity conditions, is asymptotically linear, and the Central Limit Theorem can be used for statistical inference (i.e. the construction of Wald-type 95% confidence intervals), despite the use of machine learning for the initial estimation of the conditional mean outcome E(Y | A, W) and the exposure mechanism P(A | W) (van der Laan and Rose, 2011). Furthermore, the estimator is double robust in that it will be consistent if either the conditional mean outcome or the exposure mechanism is consistently estimated. (Collaborative TMLE further improves upon this robustness result (van der Laan and Gruber, 2010; Gruber and van der Laan, 2015).) If both are estimated consistently and at reasonable rates, the estimator will be asymptotically efficient, achieving the lowest possible variance over a large class of estimators.
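A minimal version of the targeting step can be written out for a binary outcome. Everything below is an illustrative sketch: the data generating process is hypothetical, the deliberately crude initial outcome fit (a constant that ignores W) stands in for the Super Learner fit, and the exposure mechanism is estimated by empirical proportions. Updating along the logistic fluctuation with the "clever covariate" still recovers the true effect because the exposure mechanism is estimated well, a small demonstration of double robustness:

```python
import math
import random

rng = random.Random(4)
n = 20_000
data = []
for _ in range(n):
    w = int(rng.random() < 0.6)
    a = int(rng.random() < 0.3 + 0.2 * w)
    y = int(rng.random() < 0.4 + 0.3 * w - 0.1 * a)
    data.append((w, a, y))

# Deliberately crude initial estimator of Q(A, W) = E(Y | A, W).
q_init = sum(y for (_, _, y) in data) / n

# Estimated exposure mechanism g(1 | w) = P(A = 1 | W = w).
g1 = {}
for w in (0, 1):
    stratum = [a for (w_, a, _) in data if w_ == w]
    g1[w] = sum(stratum) / len(stratum)

def clever(a, w):
    # The "clever covariate" H(A, W) used in the fluctuation.
    return a / g1[w] - (1 - a) / (1 - g1[w])

logit = lambda p: math.log(p / (1 - p))
expit = lambda x: 1.0 / (1.0 + math.exp(-x))

# Targeting step: fit eps in logit Q* = logit Q + eps * H by solving the
# score equation sum_i H_i (Y_i - Q*_i) = 0 with Newton-Raphson.
eps = 0.0
for _ in range(15):
    score, deriv = 0.0, 0.0
    for (w, a, y) in data:
        h = clever(a, w)
        p = expit(logit(q_init) + eps * h)
        score += h * (y - p)
        deriv -= h * h * p * (1 - p)
    eps -= score / deriv

# Plug-in (substitution) estimate from the targeted predictions.
def q_star(a, w):
    return expit(logit(q_init) + eps * clever(a, w))

psi_tmle = sum(q_star(1, w) - q_star(0, w) for (w, _, _) in data) / n
# psi_tmle lands near the true effect of -0.1 despite the poor initial fit.
```

In practice one would use an established TMLE implementation (e.g. the tmle or ltmle R packages) with Super Learner for both nuisance fits, rather than this hand-rolled fluctuation.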
2.7 Interpretation of Results
The final step of the Roadmap involves the interpretation of the results. We have seen that the causal inference framework clearly delineates the assumptions made from domain knowledge (Step 2) and the ones desired for identifiability (Step 5). In other words, this framework ensures that the assumptions needed to augment the statistical results with a causal interpretation are made explicit. In this regard, Petersen and van der Laan argue that the Roadmap provides a hierarchy of interpretations with “increasing strength of assumptions”, ranging from purely statistical to replicating the results that would have been seen in a randomized trial (Petersen and van der Laan, 2014).
In our running example, the causal model shown in Figure 1 represents our knowledge of the data generating process: there are measured as well as unmeasured common causes of the exposure (pregnancy) and the outcome (viral suppression). This lack of identifiability prevents any causal interpretation of the effect estimate; we would only be licensed to make causal claims if the assumptions encoded in the causal graphs in Figure 4 held. Instead, we can interpret the point estimate as the difference in the probability of HIV RNA viral suppression associated with pregnancy, after controlling for the measured demographic and clinical confounders.
3 Conclusion
We have presented an overview of one framework for causal inference. We emphasized how the Causal Roadmap helps ensure consistency and transparency between the imperfect nature of real-world data and the complexity of questions of a causal nature. Of course, this work serves only as a primer to causal inference in Data Science, and we have presented only the fundamental concepts and tools in the causal inference arsenal.
Indeed, this framework can be extended to richer and more complicated data structures. For instance, our running example of the average treatment effect only considered an exposure at a single time point. However, as demonstrated in Tran et al. (2016), the Causal Roadmap can also handle multiple intervention nodes with time-dependent confounding. Other recent avenues of research include longitudinal treatment effects, mediation, dynamic (personalized) regimes, stochastic interventions, incomplete measurement, clustered data structures, and the generalizability/transportability of effects (e.g. Robins et al. (2000); Hernán et al. (2000); van der Laan and Robins (2003); Bodnar et al. (2004); Bang and Robins (2005); Petersen et al. (2006); Hernán et al. (2006); van der Laan and Petersen (2007); Bembom and van der Laan (2007); Robins et al. (2008); Cole and Hernán (2008); Taubman et al. (2009); Kitahata et al. (2009); Hernán and Robins (2009); VanderWeele (2009); Cain et al. (2010); Petersen and van der Laan (2011); Hernán and VanderWeele (2011); Petersen (2011); van der Laan and Gruber (2012); Westreich et al. (2012); Díaz and van der Laan (2012); Zheng and van der Laan (2012); Mohan et al. (2013); Bareinboim and Pearl (2013); van der Laan (2014); Schnitzer et al. (2014); Petersen et al. (2014, 2017); Balzer et al. (2017); Lesko et al. (2017); Benkeser et al. (2017); Balzer et al. (2018)).
As a final note, a Data Scientist may debate the usefulness of applying the causal inference machinery to her own research. We hope to have clarified that, if appropriately followed, the Causal Roadmap forces us to think carefully about the goal of our research and the context in which the data were collected, and to explicitly define and justify any assumptions. It is our belief that conforming to the rigors of this causal inference framework will improve the quality and reproducibility of all scientific endeavors that rely on real data to understand how nature works.
References
 Balzer et al. (2016) L. Balzer, M. Petersen, and M. van der Laan. Tutorial for causal inference. In P. Buhlmann, P. Drineas, M. Kane, and M. van der Laan, editors, Handbook of Big Data. Chapman & Hall/CRC, 2016.
 Balzer et al. (2017) L. Balzer, J. Schwab, M. van der Laan, and M. Petersen. Evaluation of progress towards the UNAIDS 90-90-90 HIV care cascade: A description of statistical methods used in an interim analysis of the intervention communities in the SEARCH study. Technical Report 357, University of California at Berkeley, 2017. URL http://biostats.bepress.com/ucbbiostat/paper357/.
 Balzer et al. (2018) L. Balzer, W. Zheng, M. van der Laan, M. Petersen, and the SEARCH Collaboration. A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure. Stat Meth Med Res, OnlineFirst, 2018.
 Bang and Robins (2005) H. Bang and J. Robins. Doubly robust estimation in missing data and causal inference models. Biometrics, 61:962–972, 2005.
 Bareinboim and Pearl (2013) E. Bareinboim and J. Pearl. A general algorithm for deciding transportability of experimental results. Journal of Causal Inference, 1(1):107–134, 2013. doi: 10.1515/jci-2012-0004.
 Bembom and van der Laan (2007) O. Bembom and M. van der Laan. A practical illustration of the importance of realistic individualized treatment rules in causal inference. Electronic Journal of Statistics, 1:574–596, 2007.
 Benkeser et al. (2017) D. Benkeser, M. Carone, M. van der Laan, and P. Gilbert. Doubly robust nonparametric inference on the average treatment effect. Biometrika, 104(4):863–880, 2017.
 Bodnar et al. (2004) L. Bodnar, M. Davidian, A. Siega-Riz, and A. Tsiatis. Marginal Structural Models for Analyzing Causal Effects of Time-dependent Treatments: An Application in Perinatal Epidemiology. American Journal of Epidemiology, 159(10):926–934, 2004.
 Cain et al. (2010) L. Cain, J. Robins, E. Lanoy, R. Logan, D. Costagliola, and M. Hernán. When to start treatment? A systematic approach to the comparison of dynamic regimes using observational data. The International Journal of Biostatistics, 6(2):Article 18, 2010.
 Cole and Hernán (2008) S. Cole and M. Hernán. Constructing inverse probability weights for marginal structural models. American Journal of Epidemiology, 168(6):656–664, 2008.
 Díaz and van der Laan (2012) I. Díaz and M. van der Laan. Population intervention causal effects based on stochastic interventions. Biometrics, 68(2):541–549, 2012.
 Díaz and van der Laan (2013) I. Díaz and M. van der Laan. Sensitivity analysis for causal inference under unmeasured confounding and measurement error problems. Int J Biostat, 9:149–160, 2013.
 Duncan (1975) O. Duncan. Introduction to Structural Equation Models. Academic Press, New York, 1975.
 Goldberger (1972) A. Goldberger. Structural equation models in the social sciences. Econometrica: Journal of the Econometric Society, 40:979–1001, 1972.
 Gruber and van der Laan (2015) S. Gruber and M. van der Laan. Consistent causal effect estimation under dual misspecification and implications for confounder selection procedures. Stat Methods Med Res, 24(6):1003–1008, 2015. PMID: 22368176.
 Hernán and Robins (2006) M. Hernán and J. Robins. Estimating causal effects from epidemiological data. J Epidemiol Community Health, 60(7):578–586, 2006. PMCID: PMC2652882.
 Hernán and Robins (2009) M. Hernán and J. Robins. Comment on: Early versus deferred antiretroviral therapy for HIV on survival. New England Journal of Medicine, 361(8):823–824, 2009.
 Hernán and VanderWeele (2011) M. Hernán and T. VanderWeele. Compound treatments and transportability of causal inference. Epidemiology, 22:368–377, 2011.
 Hernán et al. (2000) M. Hernán, B. Brumback, and J. Robins. Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. Epidemiology, 11(5):561–570, 2000.
 Hernán et al. (2006) M. Hernán, E. Lanoy, D. Costagliola, and J. Robins. Comparison of dynamic treatment regimes via inverse probability weighting. Basic & Clinical Pharmacology & Toxicology, 98(3):237–242, 2006.
 Holland (1986) P. Holland. Statistics and causal inference. Journal of the American Statistical Association, 81(396):945–960, 1986.
 Imai et al. (2010) K. Imai, L. Keele, and T. Yamamoto. Identification, inference, and sensitivity analysis for causal mediation effects. Statistical Science, 25:51–71, 2010.
 Joint United Nations Programme on HIV/AIDS (UNAIDS) (2014) Joint United Nations Programme on HIV/AIDS (UNAIDS). The gap report. Geneva, Switzerland, 2014.
 Kitahata et al. (2009) M. Kitahata, S. Gange, A. Abraham, B. Merriman, M. Saag, A. Justice, et al. Effect of early versus deferred antiretroviral therapy for HIV on survival. New England Journal of Medicine, 360(18):1815–1826, 2009.
 Lesko et al. (2017) C. Lesko, A. Buchanan, D. Westreich, J. Edwards, M. Hudgens, and S. Cole. Generalizing study results: a potential outcomes perspective. Epidemiology, 2017.
 Marcus and Davis (2014) G. Marcus and E. Davis. Eight (no, nine!) problems with big data. The New York Times, 2014. URL http://www.nytimes.com/2014/04/07/opinion/eightnonineproblemswithbigdata.html.
 Mohan et al. (2013) K. Mohan, J. Pearl, and J. Tian. Graphical models for inference with missing data. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 1277–1285. Curran Associates, Inc., 2013. URL http://papers.nips.cc/paper/4899graphicalmodelsforinferencewithmissingdata.pdf.
 Neyman (1923) J. Neyman. Sur les applications de la theorie des probabilites aux experiences agricoles: Essai des principes (In Polish). English translation by D.M. Dabrowska and T.P. Speed (1990). Statistical Science, 5:465–480, 1923.
 Pearl (1988) J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, CA, 1988.
 Pearl (2000) J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, New York, 2000. Second ed., 2009.
 Pearl (2010) J. Pearl. An introduction to causal inference. The International Journal of Biostatistics, 6(2):Article 7, 2010.
 Pearl et al. (2016) J. Pearl, M. Glymour, and N. Jewell. Causal Inference in Statistics: A Primer. John Wiley and Sons Ltd, Chichester, West Sussex, UK, 2016.
 Petersen (2011) M. Petersen. Compound treatments, transportability, and the structural causal model: the power and simplicity of causal graphs. Epidemiology, 22:378–381, 2011.
 Petersen and van der Laan (2011) M. Petersen and M. van der Laan. Case Study: Longitudinal HIV Cohort Data. In M. van der Laan and S. Rose, editors, Targeted Learning: Causal Inference for Observational and Experimental Data. Springer, New York Dordrecht Heidelberg London, 2011.
 Petersen and van der Laan (2014) M. Petersen and M. van der Laan. Causal models and learning from data: Integrating causal modeling and statistical estimation. Epidemiology, 25(3):418–426, 2014.
 Petersen et al. (2006) M. Petersen, S. Sinisi, and M. van der Laan. Estimation of direct causal effects. Epidemiology, 17(3):276–284, 2006.
 Petersen et al. (2012) M. Petersen, K. Porter, S. Gruber, Y. Wang, and M. van der Laan. Diagnosing and responding to violations in the positivity assumption. Statistical Methods in Medical Research, 21(1):31–54, 2012. doi: 10.1177/0962280210386207.
 Petersen et al. (2014) M. Petersen, J. Schwab, S. Gruber, N. Blaser, M. Schomaker, and M. van der Laan. Targeted maximum likelihood estimation for dynamic and static longitudinal marginal structural working models. Journal of Causal Inference, 2(2), 2014. doi: 10.1515/jci-2013-0007.
 Petersen et al. (2017) M. Petersen, L. Balzer, D. Kwarsiima, N. Sang, et al. Association of implementation of a universal testing and treatment intervention with HIV diagnosis, receipt of antiretroviral therapy, and viral suppression among adults in East Africa. JAMA, 317(21):2196–2206, 2017. doi: 10.1001/jama.2017.5705.
 Robins (1986) J. Robins. A new approach to causal inference in mortality studies with sustained exposure periods–application to control of the healthy worker survivor effect. Mathematical Modelling, 7:1393–1512, 1986. doi: 10.1016/0270-0255(86)90088-6.
 Robins and Hernán (2009) J. Robins and M. Hernán. Estimation of the causal effects of timevarying exposures. In G. Fitzmaurice, M. Davidian, G. Verbeke, and G. Molenberghs, editors, Longitudinal Data Analysis, chapter 23. Chapman & Hall/CRC, Boca Raton, FL, 2009.
 Robins et al. (1999) J. Robins, A. Rotnitzky, and D. Scharfstein. Sensitivity analysis for selection bias and unmeasured confounding in missing data and causal inference models. In M. Halloran and D. Berry, editors, Statistical Models in Epidemiology: The Environment and Clinical Trials. Springer, New York, 1999.
 Robins et al. (2000) J. Robins, M. Hernán, and B. Brumback. Marginal structural models and causal inference in epidemiology. Epidemiology, 11(5):550–560, 2000.
 Robins et al. (2008) J. Robins, L. Orellana, and A. Rotnitzky. Estimation and extrapolation of optimal treatment and testing strategies. Statistics in Medicine, 27(23):4678–4721, 2008.
 Rose (2012) S. Rose. Big data and the future. Significance, 9(4):47–48, 2012.
 Rubin (1974) D. Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5):688–701, 1974. doi: 10.1037/h0037350.
 Rubin (1990) D. B. Rubin. Comment: Neyman (1923) and causal inference in experiments and observational studies. Statistical Science, 5(4):472–480, 1990.
 Schnitzer et al. (2014) M. Schnitzer, M. van der Laan, E. Moodie, and R. Platt. Effect of breastfeeding on gastrointestinal infection in infants: a targeted maximum likelihood approach for clustered longitudinal data. Annals of Applied Statistics, 8(2):703–725, 2014.
 Taubman et al. (2009) S. Taubman, J. Robins, M. Mittleman, and M. Hernán. Intervening on risk factors for coronary heart disease: an application of the parametric g-formula. International Journal of Epidemiology, 38(6):1599–1611, 2009.
 Tran et al. (2016) L. Tran, C. Yiannoutsos, B. Musick, K. Wools-Kaloustian, A. Siika, S. Kimaiyo, M. van der Laan, and M. Petersen. Evaluating the impact of a HIV low-risk express care task-shifting program: A case study of the targeted learning roadmap. Epidemiologic Methods, 5(1):69–91, 2016.
 van der Laan (2014) M. van der Laan. Causal inference for a population of causally connected units. Journal of Causal Inference, 0(0):1–62, 2014. doi: 10.1515/jci-2013-0002.
 van der Laan and Gruber (2010) M. van der Laan and S. Gruber. Collaborative double robust targeted maximum likelihood estimation. The International Journal of Biostatistics, 6(1), 2010. doi: 10.2202/1557-4679.1181.
 van der Laan and Gruber (2012) M. van der Laan and S. Gruber. Targeted minimum loss based estimation of causal effects of multiple time point interventions. The International Journal of Biostatistics, 8(1), 2012.
 van der Laan and Petersen (2007) M. van der Laan and M. Petersen. Causal effect models for realistic individualized treatment and intention to treat rules. The International Journal of Biostatistics, 3(1):Article 3, 2007.
 van der Laan and Robins (2003) M. van der Laan and J. Robins. Unified Methods for Censored Longitudinal Data and Causality. Springer-Verlag, New York Berlin Heidelberg, 2003.
 van der Laan and Rose (2011) M. van der Laan and S. Rose. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer, New York Dordrecht Heidelberg London, 2011.
 van der Laan and Rubin (2006) M. van der Laan and D. Rubin. Targeted maximum likelihood learning. The International Journal of Biostatistics, 2(1):Article 11, 2006. doi: 10.2202/1557-4679.1043.
 van der Laan et al. (2007) M. van der Laan, E. Polley, and A. Hubbard. Super learner. Statistical Applications in Genetics and Molecular Biology, 6(1):25, 2007. doi: 10.2202/1544-6115.1309.
 VanderWeele (2009) T. VanderWeele. Marginal structural models for the estimation of direct and indirect effects. Epidemiology, 20(1):18–26, 2009.
 VanderWeele and Arah (2011) T. VanderWeele and O. Arah. Bias formulas for sensitivity analysis of unmeasured confounding for general outcomes, treatments, and confounders. Epidemiology, 22:42–52, 2011.
 Westreich et al. (2012) D. Westreich, S. R. Cole, J. G. Young, F. Palella, P. C. Tien, L. Kingsley, S. J. Gange, and M. A. Hernán. The parametric g-formula to estimate the effect of highly active antiretroviral therapy on incident AIDS or death. Statistics in Medicine, 31(18):2000–2009, 2012. doi: 10.1002/sim.5316.
 Young et al. (2011) J. Young, L. Cain, J. Robins, E. O’Reilly, and M. Hernán. Comparative effectiveness of dynamic treatment regimes: An application of the parametric g-formula. Stat Biosci, 3:119–143, 2011.
 Zhang et al. (2017) Y. Zhang, J. Young, M. Thamer, and M. Hernán. Comparing the effectiveness of dynamic treatment strategies using electronic health records: An application of the parametric g-formula to anemia management strategies. Health Serv Res, May, 2017.
 Zheng and van der Laan (2012) W. Zheng and M. van der Laan. Targeted maximum likelihood estimation for natural direct effects. The International Journal of Biostatistics, 8(1):1–40, 2012.