1 Introduction
Experimental studies such as randomized controlled trials (RCTs) are considered the gold standard in hypothesis testing. For safety, efficacy, and regulatory purposes, most new drugs or treatments are studied through RCTs (greene2012reform). RCTs provide the best mechanism to identify the causal effect of treatments or interventions by adjusting for observed and unobserved confounders under the rubric of the potential outcome framework (fisher1960design). Despite the clear advantages of RCTs in drug trials, in practice they are expensive, time-consuming, and in many cases infeasible for ethical reasons. Other issues with RCTs include low recruitment rates, loss to follow-up, insufficient sample sizes, and proneness to selection bias (nichol2010challenging; frieden2017evidence). While RCTs remain the best way to establish causation, the large amounts of data captured with new technologies during routine healthcare (e.g., electronic health records (EHRs) or wearable devices), colloquially termed big health data, have the potential to reveal causal effects from observational studies that complement RCTs. With proper methodological considerations, observational studies can emulate RCTs and go beyond statistical correlation (hernan2008observational; hernan2017observational).
In the 1970s, the potential outcome framework was extended to observational studies to identify causal relationships from observational data through the Rubin Causal Model (RCM) (rubin1974estimating; rosenbaum1983central; holland1986statistics). Recent advances in structural causal models (SCMs) provide the methodological framework, under the potential outcome framework, for graphically formalizing the identification of causal effects from observational and experimental data (pearl2009causality; pearl2016causal). SCMs can be used to emulate RCTs from observational data in many cases if the graphical model is identifiable (bareinboim2016causal), which signifies the capability of estimating the interventional distribution $P(y \mid do(x))$ from the available data with the assumptions incorporated in the model.
Experimental studies (including RCTs) frequently explore and report survival analysis measures. Survival analysis is the branch of statistics that analyzes the expected duration of time-to-event with outcome statistics such as the hazard ratio, odds ratio, and risk ratio. Survival analysis has been well studied under the potential outcome framework for experimental studies and with the RCM for observational studies (cole2004adjusted; hernan2010hazards). Recent research has also studied survival analysis with the RCM for observational studies, considering the data generating mechanism or the study design to estimate outcome statistics such as the hazard ratio, odds ratio, risk ratio, and risk difference (didelez2007mendelian; hernan2004definition). Commonly reported outcomes from survival analysis in experimental clinical studies include the survival curve and the hazard ratio (HR). The survival curve graphically reports the hazard in a population and represents the fraction of the population that survived in the treatment and control groups over time. The HR describes the comparative hazard between the treatment and control groups. The hazard function, or simply hazard, signifies the rate of events of interest (e.g., death) at time $t$, conditional on survival until time $t$ and beyond (spruance2004hazard). Even though the HR is widely used in practice as a standard tool for comparative evaluation of the outcome between treatment and control groups, it depends on the length of the study and, by definition, has an inherent selection bias (since only the population that survived until time $t$ is selected at time $t$) (hernan2010hazards). In addition, neither the survival curve nor the HR considers the study design, that is, RCT versus observational study, in its formalization. Consequently, it is difficult to interpret the results of an intervention from only the reported hazard ratio (hernan2010hazards) and to compare studies with varying designs and time lengths. The researcher has to consider the design of the study and its length, as well as the hazard ratio, to understand the effectiveness of the treatment. Structural causal models (SCMs) provide a framework to explicitly define the design of the study, the assumptions for the study, as well as the length of the study. However, to the best of our knowledge, a framework to compute the hazard ratio with SCMs does not exist.
Previous approaches for adjusted survival curves under the rubric of the RCM used inverse probability weighting (IPW) to adjust for confounders in the estimand (cole2004adjusted). However, this approach relies on a strong assumption, namely ignorability (rubin1978bayesian; angrist1996identification). The ignorability assumption states that there are no unobserved confounders in the model and that the variables considered for IPW satisfy the backdoor criterion. Although an instrumental-variable approach can be used when the treatment assignment is nonignorable (angrist1996identification), in practice this is a rather strong assumption, and a variable can instead be a mediator, a collider, an M-bias variable, or a confounder (lederer2019control). In this paper, we formulate the estimation of the hazard ratio from observational studies under the rubric of SCMs, which does not depend on the ignorability assumption. We provide a principled approach to define observational studies using SCMs, redefine them with time-specific survival as the outcomes (instead of survival time as the only outcome), mathematically transform the observational studies to the corresponding experimental studies by adjusting for confounders with the backdoor criterion, and then sample from the experimental studies to estimate hazard ratios. We provide the mathematical formalization of the approach with a simple causal graph and detailed mathematical derivation, and validate the results with a simulated dataset and a benchmark dataset on Ewing's sarcoma.
1.1 Clinical Relevance
Most clinical research reports HR with survival analysis. However, the reported HR and its process of calculation do not take into account the study design (e.g., RCT vs. observational study) and corresponding assumptions (e.g., ignorability). This makes it harder to compare the results of different studies with different study designs, sample populations, study lengths and assumptions. Our proposed method with SCMs estimates HR by explicitly describing the study designs and assumptions for a better clinical understanding of the effect of the treatment of interest.
1.2 Technical Significance
We propose a novel approach to estimate the HR from observational studies with SCMs, taking the causal relationship between treatment and outcome into account. Our review of the literature identifies a lack of causal interpretation in HR calculations for the survival analysis of observational studies. Our proposed approach first develops a time-invariant causal model and estimates the survival times after adjusting for the confounders in the SCM using backdoor adjustment and do-calculus. The development of an SCM enables us to identify the confounding variables, unlike with the ignorability assumption where we adjust for every variable available (except treatment and outcome), and to adjust properly using the minimal set, thus reducing computational requirements. The computed survival times are considered "as-if" they were sampled from an RCT. The newly adjusted survival times are capable of expressing the true causal effect of the treatment on the outcome through the survival curve and the HR. We validate the proposed method on both simulated experiments and observational data.
1.3 Generalizable Insights
We propose a novel method of estimating the HR for observational studies under the rubric of SCMs. The method can be used for any observational study with survival data, after defining the SCM. Our method of estimating the HR through SCMs clearly defines the study design and assumptions in the model. All the source code for this study is shared with the research community through a Git repository. A Python-based library has been released that takes the data, the graph, and the length of the study as input, and provides the adjusted survival curve with backdoor adjustment and the hazard ratio as output. Our approach is limited in cases where i) the SCM is not defined or ii) the SCM is not identifiable through the adjustment formula or backdoor adjustment (i.e., there is no backdoor set).
2 Related Work
Survival analysis (kleinbaum2010survival) is a methodological approach for modeling and comparing the time-to-event between two populations. The event, called a hazard, can be death, an adverse clinical event, or a mechanical failure for physical systems. Survival analysis compares the condition of survival in the treatment versus the control group, and reports outcomes with statistical measures such as the HR. Frequently used approaches in survival analysis include the Kaplan-Meier survival curve, the Cox proportional hazards model, life tables, and survival trees. We review the nonparametric approach of the Kaplan-Meier survival curve and the semiparametric approach of the Cox proportional hazards model.
The Kaplan-Meier survival curve (kaplan1958nonparametric) is a nonparametric statistic representing the survival function and the HR in the treatment and control groups. It provides a visual comparison between survival functions in different treatment or control groups; it does not differentiate between RCTs and observational studies. Data from both approaches can be plotted as a Kaplan-Meier curve, and it is up to the individual researcher to interpret and explain the curve based on the study design. The Cox PH model, on the other hand, is computationally complex. However, it is a commonly used approach for survival analysis and is widely used to compute the HR in epidemiological studies. Its key aspect is the underlying proportional hazards assumption (cox1972regression), stating that the hazards of the treatment and control groups are proportional and are a function of the covariates. It is a semiparametric model since no assumption is made about the baseline hazard function (i.e., the hazard function with no covariates). In general, it is effective in estimating both the regression coefficients ($\beta$) and the HR (kleinbaum2010survival). Further, it is unbiased (when estimated considering all possible covariates).
We review existing approaches to compute the HR for observational and experimental studies. Previous work on survival analysis for observational data with the RCM under the potential outcome framework used IPW to adjust for confounders (cole2004adjusted). However, the RCM requires the ignorability assumption that all variables considered for adjustment with IPW satisfy the backdoor criterion. In reality, a variable can also be a mediator, and in those cases adjusting for the mediator will result in an inaccurate analysis. It has also been shown that the HR estimation approach has an inherent selection bias (hernan2004structural; hernan2010hazards), as only the patients who survived until time $t$ were sampled at time $t$ to be considered for the estimation. SCMs provide the mathematical machinery to identify the backdoor variables given a causal graph. We used the same Ewing's sarcoma dataset as studied in (cole2004adjusted) with the same assumptions (i.e., all the covariates satisfy the backdoor criterion) to arrive at the same result as a validation strategy for our approach.
For survival analysis, it has been shown that in some cases the Kaplan-Meier curve may show no difference between the treatment and control groups when, in reality, there is a statistically significant difference in the HR if it is adjusted properly (makuch1982adjusted). The rationale behind this phenomenon is that a nonparametric approach is used to plot the survival curve, whereas a semiparametric method is used to calculate the HR. The authors (makuch1982adjusted) presented an approach to construct a plot of the survival curve consistent with the calculated HR. In that work, the adjusted survival curve for a specific treatment group was estimated by calculating a mixture of the estimated survival functions for separate strata, weighted based on the distribution of the covariate in the sample dataset. However, the approach does not consider the design of the study in the survival analysis.
To extend the existing definition of the Cox PH model, the marginal structural Cox PH model was introduced and used to find the effect of Zidovudine on the survival of HIV-positive men (hernan2000marginal). Statistical analysis in the presence of time-dependent confounders is commonly done through a standard Cox PH model. However, Robins (robins1997causal) has previously shown that this approach cannot adjust for all biases. Similar to previous work under the RCM, the authors used the conditional ignorability assumption. This is a much stronger assumption than using the SCM to identify the confounding variables opening a backdoor path. Several other researchers (satten2001kaplan; rotnitzky2014inverse) have used the IPW approach, although without using SCMs.
The existing literature on computing the HR does not consider the study design and might lead to misinterpretation if the data were not sampled correctly or adjusted for the right confounding variables. While previous research alludes to this problem, it does not provide the mathematical machinery for survival analysis. Although the traditional Cox PH model can minimize the effects of biases, this is not the same as adjustment for confounding variables: the bias is reduced by fitting the Cox PH regression model until convergence (cox1972regression), which considers neither the study design nor the data generating mechanism. The model fitting approach does not generate a causally meaningful interpretation despite the reduction in biases. Our goal is to formulate an approach that estimates the HR through a causal formulation considering the data generating mechanism with an SCM, portraying the direct causal effect of treatment on outcome in terms of the HR metric. Treating the variables that open backdoor paths in the SCM as confounders, and adjusting the dataset based on them, enables a more causally interpretable estimation of the HR.
3 Background
3.1 Hazard Ratio
To define the HR, we use the hazard function (spruance2004hazard) in the Cox proportional hazards model:
$$h(t, X) = h_0(t) \exp\left(\sum_{i=1}^{p} \beta_i X_i\right) \quad (1)$$
Based on this, the hazard ratio (HR) is defined (kleinbaum2010survival) as:
$$HR = \frac{h(t, X^*)}{h(t, X)} \quad (2)$$
Here, $h(t, X)$ represents the hazard function at time $t$ and $X = (X_1, \ldots, X_p)$ the vector with the covariates of the model. $X$ can also be written as $(A, L, C)$, where $A$ is the treatment, $L$ are the confounders, and $C$ are the other associated covariates. $X^*$ represents the value of the covariate vector with the value of the treatment set to 1 ($A = 1$), making $X^* = (1, L, C)$. $\beta = (\beta_1, \ldots, \beta_p)$ represents the maximum likelihood estimates (MLE) for each covariate. In other words, $\beta_i$ is the corresponding coefficient for each covariate that fits the data into a converging model for the Cox regression. As expressed in Equation 1, the proportional hazards assumption defines the hazard function as the baseline hazard function $h_0(t)$ (i.e., the hazard when all covariates are set to 0), multiplied by the exponential of the sum of each $\beta_i$ multiplied by the corresponding covariate.
Since we have defined the HR and the hazard function, and taking the treatment $A$ as the first covariate $X_1$ (with all other covariates held fixed), we can simplify the equation of the HR to:
$$HR = \frac{h_0(t) \exp\left(\beta_1 \cdot 1 + \sum_{i=2}^{p} \beta_i X_i\right)}{h_0(t) \exp\left(\beta_1 \cdot 0 + \sum_{i=2}^{p} \beta_i X_i\right)} = e^{\beta_1} \quad (3)$$
In other words, the HR is equivalent to the exponential of the treatment's regression coefficient $\beta_1$. However, computing $\beta_1$ is nontrivial since, in practice, one does not know the baseline hazard function $h_0(t)$. We can only estimate the HR using the maximum likelihood function, iterating until the model converges to a predefined error bound (kleinbaum2010survival).
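The cancellation of the baseline hazard in Equation 3 can be checked numerically: under the proportional hazards form, the ratio equals $e^{\beta_1}$ at every time point. A minimal sketch, where the coefficients and the baseline hazard function are illustrative assumptions rather than values from this paper:

```python
import math

# Cox PH form: h(t, X) = h0(t) * exp(sum_i beta_i * X_i)
def hazard(t, covariates, betas, h0=lambda t: 0.1 * t + 0.5):
    return h0(t) * math.exp(sum(b * x for b, x in zip(betas, covariates)))

betas = [0.7, -0.3]   # beta_1 for the treatment A, beta_2 for another covariate
x_treated = [1, 1]    # A = 1, other covariate fixed
x_control = [0, 1]    # A = 0, same other covariate

# The baseline hazard h0(t) cancels, so HR = exp(beta_1) at every t
for t in (1.0, 5.0, 30.0):
    hr = hazard(t, x_treated, betas) / hazard(t, x_control, betas)
    assert abs(hr - math.exp(0.7)) < 1e-9
```

Any choice of `h0` gives the same ratio, which is exactly why the HR can be estimated without knowing the baseline hazard.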
Although the HR is an important outcome, it has limitations in explaining causal relationships: no causal mechanism can be understood from the HR alone. This is because the HR is calculated from the convergence of regression models, and confounding and other such biases are handled by simply including the covariates in the model. It is then up to the individual researcher to make sure that the right data are used to measure the HR and to interpret it accordingly. For example, an HR calculated from an RCT provides the causally linked hazard for the intervention, whereas the same HR calculated from an observational study provides only a correlated hazard. The existing approach simplifies the calculation and reduces the burden on the researcher. However, we frequently find differences between the survival curve and the HR. This difference, or bias, arises because of the inherent definitions of the survival curve and the Cox PH model.
3.2 Structural Causal Models
Structural causal models (SCMs), developed on the foundations of probabilistic graphical models, draw inferences that explain the causal relationships between variables. With an SCM, a causal model is defined first and is expressed with a graphical representation. Definition 1 gives the formal description of an SCM (bareinboim2016causal; pearl2009causality).
[Structural Causal Model]
A structural causal model $M$ is a 4-tuple
$\langle U, V, F, P(U) \rangle$ where:

$U$ is a set of background (exogenous) variables that are determined by factors outside of the model,

$V = \{V_1, \ldots, V_n\}$ is a set of observable (endogenous) variables that are determined by variables in the model (i.e., determined by variables in $U \cup V$),

$F = \{f_1, \ldots, f_n\}$ is a set of functions such that each $f_i$ is a mapping from the respective domains of $U_i \cup PA_i$ to $V_i$, where $U_i \subseteq U$ and $PA_i \subseteq V \setminus V_i$, and the entire set $F$ forms a mapping from $U$ to $V$. In other words, each $f_i$ in $v_i = f_i(pa_i, u_i)$ assigns a value to $V_i$ that depends on the values of the select set of variables $(U_i, PA_i)$, and

$P(U)$ is a probability distribution over the exogenous variables.
An SCM is often expressed by a causal graph $G$. Each node in $G$ represents an observed or unobserved variable, and each directed edge represents a causal relationship between them. To find the causal effect of a variable $X$ on a variable $Y$, do-calculus is introduced (pearl2016causal). Do-calculus is used to map the observational reality to the corresponding experimental reality with the identifiability equation by adjusting for different kinds of biases (e.g., confounding bias), if they exist. The backdoor criterion provides a powerful tool to identify the variables that need to be adjusted for this transformation (in other words, to adjust for confounding bias) and is defined in Definitions 2 and 3.
[Backdoor Criterion] Given an ordered pair of variables $(X, Y)$ in a directed acyclic graph $G$, a set of variables $Z$ satisfies the backdoor criterion relative to $(X, Y)$ if no node in $Z$ is a descendant of $X$, and $Z$ blocks every path between $X$ and $Y$ that contains an arrow into $X$.
[Backdoor Adjustment] If a set of variables $Z$ satisfies the backdoor criterion relative to $(X, Y)$, then the causal effect of $X$ on $Y$ is identifiable and is given by the formula:
$$P(y \mid do(x)) = \sum_{z} P(y \mid x, z) P(z)$$
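As a concrete numerical sketch of the backdoor adjustment, consider binary variables where $Z$ confounds $X$ and $Y$ ($X \leftarrow Z \rightarrow Y$, $X \rightarrow Y$). All conditional probabilities below are illustrative assumptions, not values from this paper:

```python
# Illustrative conditional probabilities for a graph X <- Z -> Y, X -> Y
p_z = {0: 0.6, 1: 0.4}                          # P(Z = z)
p_y1_given_xz = {(0, 0): 0.2, (0, 1): 0.5,      # P(Y = 1 | X = x, Z = z)
                 (1, 0): 0.4, (1, 1): 0.7}

def backdoor(x):
    # P(Y = 1 | do(X = x)) = sum_z P(Y = 1 | x, z) P(z)
    return sum(p_y1_given_xz[(x, z)] * p_z[z] for z in (0, 1))

risk_treated = backdoor(1)   # 0.4 * 0.6 + 0.7 * 0.4 = 0.52
risk_control = backdoor(0)   # 0.2 * 0.6 + 0.5 * 0.4 = 0.32
```

Note that the adjustment weights each stratum by the marginal $P(z)$, not by $P(z \mid x)$, which is what distinguishes the interventional distribution from simple conditioning.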
3.3 Problem Definition
Our research problem is to develop a method to compute the HR for observational studies by leveraging the SCM, explicitly declaring our assumptions and adjusting for the right confounders. The goal is to acknowledge the defined roles of the variables in the SCM and to use a minimal set of confounders for backdoor adjustment, thus building a computationally efficient and more accurate model for objective estimation and comparison. The algorithm takes three inputs: (1) an observational dataset consisting of the treatment, the outcome as survival time, and other covariates, (2) an SCM supporting the causal mechanism and the dataset, and (3) the length of the trial. At the completion of the algorithm, the output is: (1) the adjusted survival curve (nonparametric estimation), and (2) the hazard ratio of the treatment (semiparametric estimation) (Figure 1). The assumption in our approach is that the observational data are available and the SCM is fully specified.
4 Methods
In this section, we formalize our approach to mathematically transform the time-dependent observational data to the corresponding experimental data by leveraging the SCM. We then use the adjusted dataset to estimate the HR using the Cox PH model. Our proposed approach focuses on the causal effect of the treatment on the outcome to measure the HR. We start with an observational study scenario and define all related assumptions. The schematic diagram for the proposed approach is shown in Figure 1.
4.1 Assumptions
We assume a simple observational study for a population, consisting of a treatment $A$, a confounding variable $L$, and an outcome as survival time $T$. In this scenario, the treatment is a dichotomous variable ($A = 1$ signifying treatment and $A = 0$ signifying control). The outcome is the survival time from the beginning of the study and is a continuous variable in time units. Although a confounding variable can be categorical or continuous, for simplicity we assume the confounder to be a dichotomous variable. This observational study can be expressed as an SCM and in graphical form through a causal directed acyclic graph (causal DAG) in Figure 2, where the treatment, confounder, and outcome are expressed by the nodes $A$, $L$, and $T$, respectively.
From the definition of the SCM, we can express the underlying functions defining the causal relationships between the variables by: $L = f_L(U_L)$, $A = f_A(L, U_A)$, $T = f_T(A, L, U_T)$. Here, $U = \{U_L, U_A, U_T\}$ is the set of exogenous variables, $V = \{L, A, T\}$ is the set of endogenous variables, and $F = \{f_L, f_A, f_T\}$ is the set of structural functions.

$f_L$ shows that the confounder $L$ is independent of any other endogenous variables.

$f_A$ expresses the dependency of $A$ on $L$. Since $L$ is a parameter of both functions $f_A$ and $f_T$, $L$ imposes a confounding bias on the model ($A \leftarrow L \rightarrow T$), and the function $f_A$ defines whether the bias is strong or weak.

$f_T$ defines the effect of $A$ and $L$ on the survival time $T$. This function also depends on the baseline hazard function, since this defines the rate of decline in survival.
We also assume that the sample size of the population, $N$, and a maximum length of survival time, $T_{max}$, are known.
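An SCM of this shape can be sampled directly from its structural functions. The sketch below is consistent with the graph $A \leftarrow L \rightarrow T$, $A \rightarrow T$, but every functional form and parameter in it is an illustrative assumption of ours, not the simulation used later in this paper:

```python
import random

def sample_scm(n, seed=0):
    """Draw n individuals (l, a, t) from an illustrative SCM:
    L = f_L(U_L), A = f_A(L, U_A), T = f_T(A, L, U_T)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        l = 1 if rng.random() < 0.5 else 0                     # confounder L
        a = 1 if rng.random() < (0.75 if l else 0.25) else 0   # L biases A
        rate = 0.05 * (2.0 ** a) * (1.5 ** l)                  # hazard depends on A and L
        t = rng.expovariate(rate)                              # survival time in days
        data.append((l, a, t))
    return data

cohort = sample_scm(1000)
```

The exogenous noise enters through the random draws; the endogenous variables are deterministic functions of their parents and that noise, matching Definition 1.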
4.2 Approach
4.2.1 Transformation of single study to multiple studies
Experimental studies commonly have different study time lengths, e.g., a different number of days as the outcome endpoint (e.g., 30-day survival, 90-day survival, etc.). This endpoint is a dichotomous variable describing a patient's status of survival at the end of the study. While analyzing such a study, we do not take into account survival at each day, or survival after the end-of-trial day, since we do not have the opportunity to do so. In our problem definition, we only have the survival times of individuals; there is no defined end time for the trial. From the individual survival time, we can easily derive the $i$th-day survival of every individual in the dataset, $i$ being the number of days from the beginning of the study. We use days as the smallest unit of time, since we assume the dataset reports survival in units of days. However, it could be any other unit of time (e.g., minutes or weeks), depending on the problem domain and dataset.
Since our observational study has a maximum survival time $T_{max}$ over all individuals, we calculate the variables $S_i$, signifying survival at the $i$th day, with $i$ ranging from $1$ to $T_{max}$. Conversion of the continuous variable $T$ describing the survival time into multiple variables $S_i$, each describing survival at the $i$th day, essentially breaks down the single observational study into $T_{max}$ observational studies with variables $A$, $L$, and $S_i$, where each outcome $S_i$ is now a dichotomous variable.
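The expansion of a continuous survival time into day-level dichotomous outcomes can be sketched as follows; the function and variable names are ours, chosen to mirror the $S_i$ notation above:

```python
import math

def to_daily_survival(survival_times, t_max=None):
    """Expand each survival time into day-level indicators S_1..S_Tmax,
    where S_i = 1 iff the individual was still alive at day i."""
    if t_max is None:
        t_max = math.ceil(max(survival_times))
    return [[1 if t >= i else 0 for i in range(1, t_max + 1)]
            for t in survival_times]

rows = to_daily_survival([2.5, 5.0, 1.0], t_max=5)
# a survival time of 2.5 days gives S_1 = 1, S_2 = 1, S_3 = S_4 = S_5 = 0
```

Each row of the result is one individual's dichotomized outcome series, i.e., one record for each of the $T_{max}$ derived observational studies.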
Through the transformation, from a single SCM $M$ we end up with $T_{max}$ different SCMs, each with the same treatment $A$ and confounder $L$, but a different outcome $S_i$ (survival at the $i$th day). Note that, in our assumption, the causal graph is time-invariant, i.e., the functional relationships between the variables do not change over time. This conversion is represented by $T_{max}$ different SCMs (transformed graphs A, Figure 3 (a)), where $i \in \{1, \ldots, T_{max}\}$.
An important point to note here is that the single confounder $L$ and treatment $A$ from the original observational study are not being transformed; only the outcome is distributed into multiple variables. In other words, we assume a point intervention, and the confounding variables are invariant in time. And since we are transforming a single trial into multiple trials, the outcomes $S_i$ of these separate trials are not conditionally dependent on each other (e.g., an RCT with 30-day survival as the outcome does not analyze whether any patient died on the 20th or the 29th day).
However, when extracting this information from observational data, there is a dependency between them. Specifically, $S_i$ has a causal effect on $S_j$ (where $j > i$), since if $S_i$ is 0 (e.g., the patient died on the $i$th day), every $S_j$ (where $j > i$) is 0 (e.g., the patient remains dead on all consecutive days). Also, $S_i$ only has a direct causal effect on $S_{i+1}$; every other corresponding effect is mediated through the intermediate variables. If $S_i$ has a causal effect on $S_j$ (where $j > i + 1$), it is mediated through $S_{i+1}, \ldots, S_{j-1}$. For example, in the absence of any backdoor variables, $P(S_{i+2} \mid do(S_i)) = \sum_{S_{i+1}} P(S_{i+2} \mid S_{i+1}) P(S_{i+1} \mid S_i)$. The relationships between the $S_i$s are reflected through a single transformed SCM (transformed graph B, Figure 3 (b)), where $i \in \{1, \ldots, T_{max}\}$. The similarity between transformed graphs A and transformed graph B is that they both have the same $A$ and $L$; the dissimilarities are:

Transformed graphs A portray $T_{max}$ different trials with different outcomes, whereas transformed graph B is a single trial.

For transformed graphs A, $S_i \perp S_j$ (where $i \neq j$); however, for transformed graph B, $S_i \rightarrow S_{i+1} \rightarrow \cdots \rightarrow S_j$ (where $i < j$).

Since the two causal DAGs are different, transformed graphs A and transformed graph B have two different equations for $P(S_i \mid do(A))$.
Figure 3: (a) Converted causal DAGs with no dependency between the $S_i$s; (b) converted single causal DAG with dependencies between the $S_i$s.
In summary, we transform the single observational study into multiple different trials expressed through two different transformations (transformed graphs A and transformed graph B, Figure 3), each with the same treatment $A$ and confounder $L$, but with different survival statuses as the outcomes, as death (or failure) increases over time. These outcomes are the status of survival (or death) at the $i$th day, where $i$ ranges from 1 to $T_{max}$.
4.2.2 Generation of Survival Curve
Applying the backdoor adjustment formula in transformed graphs A, the causal effect of $A$ on $S_i$ (for all $T_{max}$ causal graphs) is:
$$P(S_i \mid do(A)) = \sum_{L} P(S_i \mid A, L) P(L)$$
In transformed graph B, the causal effect of $A$ on $S_i$ is:
$$P(S_i \mid do(A)) = \sum_{L} \sum_{S_{i-1}} P(S_i \mid A, L, S_{i-1}) P(S_{i-1} \mid A, L) P(L)$$
Since $P(S_i \mid A, L, S_{i-1}) P(S_{i-1} \mid A, L) = P(S_i, S_{i-1} \mid A, L)$ (using the rules of conditional probability), we can reduce this equation to:
$$P(S_i \mid do(A)) = \sum_{L} \sum_{S_{i-1}} P(S_i, S_{i-1} \mid A, L) P(L)$$
Finally, for $S_i = 1$ we must have $S_{i-1} = 1$ (e.g., if a person is alive on the 30th day, he has been alive for the previous 29 days as well), so $P(S_i = 1, S_{i-1} = 1 \mid A, L) = P(S_i = 1 \mid A, L)$, which reduces our equation down to the same as that of transformed graphs A:
$$P(S_i = 1 \mid do(A)) = \sum_{L} P(S_i = 1 \mid A, L) P(L)$$
This signifies that whether we use transformed graphs A or transformed graph B, we end up with the same adjustment formula.
For each of the newly transformed causal DAGs, we can now adjust for the confounder using the backdoor adjustment formula. We calculate the adjusted probabilities $P(S_i \mid do(A))$, and thus the adjusted counts, for each of the causal graphs. Using these values, we generate the survival curve with a Kaplan-Meier fitter.
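The per-day backdoor adjustment over records of (confounder, treatment, survival time) can be sketched as follows; the record layout and function name are our assumptions, and censoring is ignored for brevity:

```python
def adjusted_survival_curve(data, t_max):
    """Estimate P(S_i = 1 | do(A = a)) for i = 1..t_max via the backdoor
    formula sum_l P(S_i = 1 | A = a, L = l) P(L = l).
    data: list of (l, a, t) tuples with binary l, a and survival time t."""
    n = len(data)
    p_l = {l: sum(1 for r in data if r[0] == l) / n for l in (0, 1)}
    curves = {}
    for a in (0, 1):
        curve = []
        for i in range(1, t_max + 1):
            prob = 0.0
            for l in (0, 1):
                stratum = [r for r in data if r[0] == l and r[1] == a]
                if stratum:  # empirical P(S_i = 1 | A = a, L = l)
                    surv = sum(1 for r in stratum if r[2] >= i) / len(stratum)
                    prob += surv * p_l[l]
            curve.append(prob)
        curves[a] = curve
    return curves

curves = adjusted_survival_curve([(0, 0, 10), (0, 1, 2), (1, 0, 4), (1, 1, 8)], t_max=5)
```

The two resulting per-day probability series are the adjusted survival curves for the treated and control arms, ready for Kaplan-Meier-style plotting.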
4.2.3 Calculation of Hazard Ratio
Since we calculated $P(S_i \mid do(A))$ for each of the $T_{max}$ causal graphs, we know the number of adjusted individuals alive at each unit (day) of time. This lets us rebuild the adjusted survival times for individuals, as in the original dataset. The newly calculated survival times are adjusted for the confounding bias, as if they were sampled from an RCT. We measure the HR using the Cox PH model with the adjusted survival times as the outcome.
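Rebuilding individual-level survival times from the adjusted per-day probabilities can be sketched as below; the rounding and the end-of-study censoring convention are simplifying assumptions of ours:

```python
def rebuild_survival_times(adjusted_curve, n):
    """Turn adjusted P(S_i = 1 | do(A)) values into n survival times: the drop
    in adjusted counts between consecutive days yields individuals whose
    adjusted survival time is that day; remaining survivors are censored."""
    counts = [round(p * n) for p in adjusted_curve]
    times, prev = [], n
    for day, alive in enumerate(counts, start=1):
        times.extend([day] * max(prev - alive, 0))  # deaths attributed to this day
        prev = min(prev, alive)
    times.extend([len(counts) + 1] * prev)          # censored past study end
    return times

times = rebuild_survival_times([1.0, 0.5, 0.25], n=4)  # -> [2, 2, 3, 4]
```

These reconstructed times are what would then be passed to a standard Cox PH fitter to obtain the adjusted HR.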
4.2.4 Algorithm
Algorithm 1 generates the adjusted Kaplan-Meier curve as well as the HR from the Cox PH model on the adjusted dataset. The input for the algorithm is the dataset, specifically, the confounder $L$, treatment $A$, survival time $T$, and event status $E$. In the algorithm, variables in uppercase letters signify vectors, and variables in lowercase signify single variables. The internal procedure convert_single_to_multiple_trials (Algorithm 2) is shown separately.
5 Experiments and Applications
We evaluated the proposed approach for computing the HR and visualizing survival curves with an experimental and an observational dataset: (1) a synthetic dataset derived from a linear acyclic model with Gaussian noise, and (2) a real-world dataset on disease-free survival in patients with Ewing's sarcoma (makuch1982adjusted). The rationale for considering these two datasets is that: (1) both of the underlying causal models have a backdoor path through confounders, and (2) both datasets have treatment and control groups that satisfy the proportional hazards assumption.
5.1 Experimental Data
We simulate an observational study with $N$ patients. A subgroup of the patients received a treatment ($A = 1$), and the remaining patients did not ($A = 0$). We generate data on survival time (in days), defined as the outcome $T$. The treatment assignment is confounded by sex ($L = 1$ for female, $L = 0$ for others). The scenario has the causal model depicted in Figure 2.
For the simulation, we generated the outcome variable, survival time, by defining a baseline hazard function. We defined the survival time to vary exponentially with time, as an exponential function of the confounder $L_j$, the treatment $A_j$, and a noise/error term $\epsilon_j$, with $j$ being the index of the patient. The remaining parameters were selected such that the HR remains close to 1, while the bias injected through $L$ portrays a different outcome in the survival curves.
We simulate the study with a strong biased effect from the confounder $L$. We define the strength of the bias by an imbalance of the conditional probabilities in each stratum of $L$ through the function $f_A$. For the defined strong-bias case, $P(A = 1 \mid L = 1) = 0.75$ and $P(A = 1 \mid L = 0) = 0.25$. This translates to: if $L = 1$ stands for females in this trial, 75% received the drug, whereas in the $L = 0$ stratum, or others, only 25% received the drug. In a randomized controlled trial, under a no-confounding-bias scenario, we should have $P(A = 1 \mid L = 1) = P(A = 1 \mid L = 0)$.
After generating the experimental data, we applied Algorithm 1. We compare the survival curve from the existing approach and the survival curve from the adjusted dataset side-by-side in Figure 4.
Table 1 presents the HR found by the fitted Cox PH model in three different processes:

(1) using only the treatment and outcome from the original dataset,

(2) using data on all covariates (treatment, outcome, and confounder) from the original dataset, and

(3) using only the treatment and outcome from the adjusted dataset following our proposed approach.
The first approach represents scenarios where: 1) we ignore confounding, assuming it does not impact the treatment, or 2) we do not possess data on the confounding variable (unmeasured confounding). This approach, however, results in an incorrect approximation of the HR. The second approach represents the existing way to calculate the HR. The third shows our approach; it eliminates the need to include the confounder in the model fitting, since we have already adjusted for it.
Table 1:
                           Existing model                    Proposed model
                           Treatment and    All covariates   Transformed and
                           outcome only                      adjusted data
Hazard Ratio               1.66             0.8              1.00
(95% Confidence Interval)  (1.25-2.20)      (0.57-1.12)      (0.76-1.33)
Here, in Figure 4, the difference in the unadjusted survival curves is similar to fitting the Cox PH model with only $A$ and $T$, leaving out $L$, as in the first approach, which generates HR=1.66. On the other hand, the overlapping adjusted survival curves are validated by the calculated HRs, following both the existing approach with the Cox PH model (HR=0.8) and our algorithm (HR=1.0).
5.2 Ewing’s Sarcoma Data
We also applied the proposed method to a real-world dataset of patients with Ewing's sarcoma (makuch1982adjusted). The dataset was selected based on its survival data and a known causal DAG consisting of confounders. The dataset consists of a total of 76 Ewing's sarcoma patients, with disease-free survival days as the outcome. 47 of the patients received a novel treatment (S4), and 29 received one of three standard treatments (S1-S3).
Table 2:
                           Existing model                    Proposed model
                           Treatment and    All covariates   Transformed and
                           outcome only                      adjusted data
Hazard Ratio               0.53             1.12             1.04
(95% Confidence Interval)  (0.30-0.96)      (0.59-2.11)      (0.57-1.87)
The level of serum lactic acid dehydrogenase (LDH) acted as the confounder, since a high LDH level indicated a lower likelihood of treatment assignment along with an impact on survival time. In our analysis, we marked patients receiving S4 as the treatment group ($A = 1$) and patients receiving S1-S3 as the control group ($A = 0$). We applied our algorithm to this dataset alongside the survival curve from the existing approach; the results of our algorithm are shown in Figure 5. The adjusted survival curve shows results similar to those demonstrated in Makuch et al. (makuch1982adjusted). We also present the HR calculated following the three processes described in the earlier subsection. In Table 2, the HR calculated by our approach (HR=1.04) differs from the HR calculated in the traditional way (HR=1.12), presenting the drug as slightly less hazardous. However, the 95% confidence intervals for these two estimates largely coincide, signifying that the true value lies within this range.
6 Discussion and Conclusion
We propose a novel method to estimate the HR using the Cox PH model through the transformation of observational data to corresponding experimental data, leveraging an underlying SCM. The transformed data are mathematically guaranteed to be adjusted for confounding bias under the assumption that the SCM represents the data-generating mechanism. Previous approaches under the RCM that estimate the survival curve use the ignorability assumption, and will not work when the selected variables do not satisfy the backdoor criterion. The ignorability assumption states that the distribution of the potential outcomes $(Y(0), Y(1))$ is independent of the treatment variable when treatment is randomly assigned: $(Y(0), Y(1)) \perp T$. An extension of this idea, conditional ignorability, states that the distribution of the potential outcomes $(Y(0), Y(1))$ is independent of the treatment variable $T$ conditional on the covariates $Z$: $(Y(0), Y(1)) \perp T \mid Z$. Using conditional ignorability for adjustment on covariates allowed researchers to draw inferences from observational studies as well; however, adjusting for all covariates irrespective of their causal relationship with treatment and outcome can introduce additional bias and lead to incorrect estimation of causal effects.
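Adjustment under conditional ignorability amounts to standardization (the g-formula): average the stratum-specific contrasts over the covariate distribution. A minimal sketch on a toy joint distribution, where all parameter values are illustrative assumptions and the binary covariate $Z$ is a genuine confounder:

```python
# Sketch: standardization (g-formula) with one binary confounder Z.
# Z raises both the chance of treatment and the baseline risk, while
# the treatment itself has no true effect. All numbers are illustrative.
from itertools import product

p_z1 = 0.4
p_t1_z = {0: 0.3, 1: 0.7}                  # P(T=1 | Z=z)
p_y1_tz = {(0, 0): 0.2, (0, 1): 0.5,       # P(Y=1 | T=t, Z=z): depends on Z,
           (1, 0): 0.2, (1, 1): 0.5}       # not on T (null treatment effect)

def bern(p, v):
    return p if v == 1 else 1 - p

# Joint distribution P(z, t, y) by enumeration
joint = {(z, t, y): bern(p_z1, z) * bern(p_t1_z[z], t)
         * bern(p_y1_tz[(t, z)], y)
         for z, t, y in product([0, 1], repeat=3)}

def pr(pred):
    return sum(v for k, v in joint.items() if pred(*k))

# Naive contrast P(Y=1|T=1) - P(Y=1|T=0): biased by the confounder Z
naive = (pr(lambda z, t, y: t == 1 and y == 1) / pr(lambda z, t, y: t == 1)
         - pr(lambda z, t, y: t == 0 and y == 1) / pr(lambda z, t, y: t == 0))

# Standardized contrast: sum_z [P(Y=1|T=1,z) - P(Y=1|T=0,z)] P(z)
adjusted = sum((p_y1_tz[(1, z)] - p_y1_tz[(0, z)]) * bern(p_z1, z)
               for z in (0, 1))

print(round(naive, 4), round(adjusted, 4))
```

Here the naive contrast is nonzero purely because of confounding, while standardization over $Z$ recovers the true null effect. The next paragraph shows why applying this same adjustment blindly, when $Z$ is not a confounder, goes wrong.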
We present two scenarios as examples in Figure 6. In the first scenario (Figure 6, left), we show an SCM with treatment $T$ and outcome $Y$ along with a third covariate $Z$. Here $Z$ acts as a mediator between $T$ and $Y$; thus the backdoor criterion gives a null adjustment set, meaning no adjustment is needed. The do-calculus formula would be $P(y \mid do(t)) = P(y \mid t)$. Adjusting on $Z$ based on conditional ignorability will therefore produce an incorrect estimation of the causal effect. In the second scenario (Figure 6, right), we discuss a setting called front-door adjustment, where we identify the variables to be adjusted through two applications of the backdoor adjustment (pearl2009causality). In this SCM, the covariate $Z$ does not satisfy the backdoor criterion and acts as a mediator between treatment $T$ and outcome $Y$. Thus, adjusting with $Z$ irrespective of its role as a mediator will produce a biased estimate of the HR. The accurate front-door adjustment formula (with $Z$ as mediator) is $P(y \mid do(t)) = \sum_{z} P(z \mid t) \sum_{t'} P(y \mid t', z) P(t')$. However, adjustment that instead treats $Z$ as a confounder gives the incorrect formula $P(y \mid do(t)) = \sum_{z} P(y \mid t, z) P(z)$. Our approach (with the backdoor criterion) can correctly identify the variables to be adjusted for estimating the HR using the SCM and do-calculus.
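The front-door scenario can be checked numerically on a toy SCM. The sketch below assumes a hidden confounder $U$ of $T$ and $Y$, with $T \to Z \to Y$ and no direct $T \to Y$ edge; all parameter values and helper names are our illustrative assumptions. It compares the ground-truth interventional distribution against the front-door formula and against the incorrect confounder-style adjustment on $Z$:

```python
# Sketch: front-door adjustment on a toy SCM U -> T, U -> Y, T -> Z -> Y,
# with U unobserved. Parameter values are illustrative assumptions.
from itertools import product

p_u = 0.5
p_t_u = {0: 0.2, 1: 0.8}                   # P(T=1 | U=u)
p_z_t = {0: 0.3, 1: 0.7}                   # P(Z=1 | T=t)
p_y_zu = {(0, 0): 0.1, (0, 1): 0.5,        # P(Y=1 | Z=z, U=u)
          (1, 0): 0.3, (1, 1): 0.9}

def bern(p, v):
    return p if v == 1 else 1 - p

# Full joint P(u,t,z,y); in practice only (t,z,y) would be observed.
joint = {(u, t, z, y): bern(p_u, u) * bern(p_t_u[u], t)
         * bern(p_z_t[t], z) * bern(p_y_zu[(z, u)], y)
         for u, t, z, y in product([0, 1], repeat=4)}

NAMES = ("u", "t", "z", "y")

def pr(**fix):
    """Probability of the event fixing some variables by name."""
    return sum(p for k, p in joint.items()
               if all(k[NAMES.index(n)] == v for n, v in fix.items()))

def cond(num, given):
    """Conditional probability P(num | given) from the joint."""
    return pr(**{**num, **given}) / pr(**given)

def truth(t):
    """P(Y=1 | do(T=t)) computed directly from the structural model."""
    return sum(bern(p_u, u) * bern(p_z_t[t], z) * p_y_zu[(z, u)]
               for u in (0, 1) for z in (0, 1))

def frontdoor(t):
    """sum_z P(z|t) sum_t' P(Y=1|t',z) P(t'), observed quantities only."""
    return sum(cond({"z": z}, {"t": t})
               * sum(cond({"y": 1}, {"t": tp, "z": z}) * pr(t=tp)
                     for tp in (0, 1))
               for z in (0, 1))

def naive(t):
    """Incorrect: treats the mediator Z as a backdoor confounder."""
    return sum(cond({"y": 1}, {"t": t, "z": z}) * pr(z=z) for z in (0, 1))

print(round(truth(1), 4), round(frontdoor(1), 4), round(naive(1), 4))
```

The front-door estimate matches the interventional truth exactly, while the confounder-style adjustment on the mediator does not, mirroring the bias discussed above.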
Both the survival curve and the HR help build a strong interpretation of the survival analysis of an experiment. The HR is most frequently reported, as it summarizes the overall effect of treatment. However, the survival curve encodes information on changes in survival over time (hernan2010hazards), which in certain cases gives better insight. Our proposed method is capable of generating both the survival curve and the HR, along with proper backdoor adjustment based on the underlying SCM. The HR calculated from the adjusted dataset requires only the treatment and outcome variables, and thus relies on the direct causal relationship between treatment and outcome. For this purpose, we assume knowledge of the true causal model, the absence of unmeasured confounders, time-invariant functional relationships in the SCM, and proportionality of hazards in the outcome. In reality, defining the causal graph of an SCM, that is, causal structure learning, requires a principled approach. The development of statistical and computational algorithms for causal structure learning is an active research area (heinze2018causal; rottman2012causal) and is not yet well established in the current literature. We are currently working on a methodological framework to develop the causal graph with structure-learning algorithms and domain expertise.
Acknowledgments
This work is partially supported by a number of grants from the Regenstrief Center for Healthcare Engineering at Purdue University and Ubicomp Lab at Marquette University.