1 Notation and models
Conventionally, the right-censored framework assumes the sample $\{(Y_i, \delta_i)\}_{i=1}^n$, where $Y_i = \min\{T_i, C_i\}$ and $\delta_i = I(T_i \leq C_i)$ ($I$ stands for the usual indicator function, taking value $1$ if its argument holds and $0$ otherwise), and $C_i$ and $T_i$ are the censoring and the survival times for the $i$th subject ($1 \leq i \leq n$), respectively. Let $S$ denote the survival function of $T$. The Cox proportional hazards model is given by

(1)  $\lambda(t \mid Z, X, U) = \lambda_0(t)\, e^{\beta Z + \gamma^\top X + \alpha^\top U},$

where $\lambda_0$ is the baseline hazard function, $Z$ is a random variable representing the study treatment, $X$ is a random vector of exogenous measured predictors, and $U$ is a random vector of unmeasured predictors. The goal is to estimate the value of $\beta$; that is, the average change in the risk of an individual caused by a change in the received treatment.
We assume that an individual's received treatment is the result of the selective process depicted by the equation,

(2)  $Z = \theta_W W + \theta_X^\top X + \theta_V^\top V + \varepsilon,$

with $W$ a measured random vector and $V$ a random vector of unmeasured variables that may be correlated with unmeasured variables in $U$. The term $\varepsilon$ is an independent random component representing the difference between the variable $Z$ and the predicted values obtained in the linear model. Note that the uncontrolled relationship between the unmeasured covariates affecting survival and treatment selection operates through the relationship between $U$ and $V$. If $U$ is independent of $V$, the survival model just contains unmeasured covariates, $U$, that are not unmeasured confounders.
If the random vector $W$ satisfies:

(i) $W$ is associated with the treatment, $Z$ (relevance assumption),

(ii) $W$ affects the survival time, $T$, only through its effect on $Z$ (exclusion restriction assumption),

(iii) $W$ is independent of the unmeasured variables $U$ and $V$ (randomization assumption),

then $W$ can be considered an instrumental variable. The strength of the instrument reflects the strength of the relationship between $W$ and $Z$. In a randomized trial with perfect compliance, assigned treatment is a perfect instrument. Assumptions (i), (ii) and (iii) can be reformulated and combined with the stable unit treatment value assumption (SUTVA) and the monotonicity assumption (Hernán and Robins, 2006) to complete the conditions under which the IV identifies the causal effect of the treatment nonparametrically (i.e., without relying on (1) and (2)). Common instruments include prior institutional affinity for using a particular procedure, geographic region of residence, an individual's differential access to certain treatments, and an individual's genes (aka Mendelian randomization, Thanassoulis and O'Donnell (2009)).
2 Proposed methodology
The proposed methodology considers a standard first stage in which the relationship among the treatment, $Z$, the instrumental variable, $W$, and the measured confounders, $X$, is estimated by any consistent method. We use simple linear regression models with standard least-squares estimation, but more flexible procedures could also be implemented. The first-stage procedure is:

1. From the data, estimate the parameters of the linear model to obtain the predicted values $\widehat{Z} = \widehat{\theta}_W W + \widehat{\theta}_X^\top X$.

2. Then, compute the residuals, $\widehat{F} = Z - \widehat{Z}$.
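As a concrete illustration, the first stage can be sketched with ordinary least squares in a few lines of Python; all coefficient values here are hypothetical, and the variable roles follow the text (instrument, measured covariate, unmeasured covariate):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated data following a selection model like (2); the coefficient
# values (1.0, 0.5, 0.8) are illustrative assumptions, not the paper's.
W = rng.standard_normal(n)      # instrument
X = rng.standard_normal(n)      # measured covariate
U = rng.standard_normal(n)      # unmeasured covariate
eps = rng.standard_normal(n)    # independent noise
Z = 1.0 * W + 0.5 * X + 0.8 * U + eps   # treatment

# First stage: regress Z on (1, W, X) by ordinary least squares.
D = np.column_stack([np.ones(n), W, X])
theta_hat, *_ = np.linalg.lstsq(D, Z, rcond=None)
Z_hat = D @ theta_hat

# Residuals: everything in Z not explained by (W, X), i.e. 0.8*U + eps
# up to estimation error.
F_hat = Z - Z_hat
```

The residuals recover the unmeasured component (plus noise) up to estimation error, which is exactly the information the second stage feeds into the outcome model.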
It is worth noting that, under model (2) and standard conditions, the least-squares coefficient estimates are consistent, so that $\widehat{F}$ consistently estimates $F = \theta_V^\top V + \varepsilon$. Almost sure convergence of $\widehat{F}$ to $F$ is not guaranteed. The residual contains all the information about the unmeasured vector $V$ related with the treatment assignment and unrelated with $(W, X)$; that is, all the available information about unmeasured confounding is contained in the residuals provided by the model (2). However, $\widehat{F}$ also contains white noise pertaining to idiosyncratic or purely random factors affecting an individual's treatment selection, which corresponds to the difference between the unmeasured covariates in the two models, $U$ and $V$, and to the independent random term $\varepsilon$ in model (2). We conjecture that the component of $\widehat{F}$ due to white noise can be handled by specifying an individual frailty in the outcome model, allowing $\widehat{F}$ to more fully perform its intended task of controlling for the unmeasured confounding. The proposed second stage is:
Estimate the Cox proportional hazards regression with individual frailty:

$\lambda(t \mid Z, X, \widehat{F}, \omega) = \omega\, \lambda_0(t)\, e^{\beta Z + \gamma^\top X + \mu \widehat{F}},$

where $\omega$ is the individual frailty term. A distribution should be specified for $\omega$ (e.g., log-Gaussian, Gamma). The parameter estimate of $\beta$ that results from this procedure is denoted $\widehat{\beta}_F$.
Standard algorithms for estimating Cox models with frailties may be used to implement the procedure. For example, Therneau et al. (2003) proved that maximum likelihood estimation for the Cox model with a Gamma frailty can be accomplished using a general penalized routine, and Ripatti and Palmgren (2000) derived a similar argument for the Cox model with a Gaussian frailty.
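For intuition, the following self-contained Python sketch fits a Cox model by Newton-Raphson on the partial likelihood (no ties, no censoring, and no frailty term, which in practice would be added via the penalized routines cited above); the simulated design and coefficients (0.7, 0.4) are illustrative only:

```python
import numpy as np

def cox_partial_likelihood_fit(time, covariates, n_iter=20):
    """Newton-Raphson for the Cox partial likelihood (no ties, no censoring)."""
    order = np.argsort(-time)             # descending: risk sets accumulate
    x = covariates[order]
    n, p = x.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        score = np.zeros(p)
        info = np.zeros((p, p))
        s0, s1, s2 = 0.0, np.zeros(p), np.zeros((p, p))
        for i in range(n):                # subject i joins the risk set
            r = np.exp(x[i] @ beta)
            s0 += r
            s1 += r * x[i]
            s2 += r * np.outer(x[i], x[i])
            xbar = s1 / s0
            score += x[i] - xbar          # every subject is an event here
            info += s2 / s0 - np.outer(xbar, xbar)
        beta = beta + np.linalg.solve(info, score)
    return beta

# In a second-stage fit, the columns would be (Z, X, F_hat); here we just
# check the fitter on two covariates with known effects.
rng = np.random.default_rng(1)
n = 800
Z = rng.standard_normal(n)
X = rng.standard_normal(n)
lp = 0.7 * Z + 0.4 * X
T = rng.exponential(size=n) / np.exp(lp)  # exponential PH survival times

beta_hat = cox_partial_likelihood_fit(T, np.column_stack([Z, X]))
```

The estimates should land near the true coefficients; a production analysis would instead rely on validated software with frailty support.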
2.1 Asymptotic properties of the 2SRI-frailty estimator
We derive the asymptotic distribution of the 2SRI-frailty estimator for the case in which the unmeasured covariates in the survival and selection models coincide. Otherwise, a similar derivation can be performed by decomposing $U$ and $V$ into common and orthogonal terms and making standard reliability assumptions on the distributions of these terms.
By adapting the convergence results for Cox’s partial likelihood (see, for instance, Theorem 5.3.1 in Kalbfleisch and Prentice (2002)) we obtain the following convergence results.
Theorem. Assume the causal models

(3)  $\lambda(t \mid Z, X, U) = \lambda_0(t)\, e^{\beta Z + \gamma X + \alpha U},$

(4)  $Z = \theta_W W + \theta_X X + \theta_U U + \varepsilon,$

with the random variables $Z$ (the treatment), $X$ (measured covariate) and $U$ (unmeasured covariate). In addition, assume that $U$ is normally distributed, that $\varepsilon$ is independently normally distributed random noise, and that $W$ (the instrument) is a random variable satisfying $W \perp (U, \varepsilon)$. Then, if the censoring time, $C$, satisfies the usual conditional independence assumption, we have the weak convergence,

(5)  $\sqrt{n}\,(\widehat{\beta}_F - \beta) \rightsquigarrow N(0, \sigma^2),$

where the asymptotic variance $\sigma^2$ can be consistently estimated from the survival and at-risk counting processes.
Proof. In the absence of the frailty term, it is well-known that the estimator of the coefficient vector $\theta$ that maximizes the Cox-model partial likelihood function obeys the asymptotic law

(6)  $\sqrt{n}\,(\widehat{\theta} - \theta) \rightsquigarrow N\bigl(0, \mathcal{I}^{-1}(\theta)\bigr),$

where $\mathcal{I}(\theta)$ is the $d \times d$ ($d$ stands for the dimension of the vector $\theta$) information matrix (Kalbfleisch and Prentice, 2002), which can be consistently estimated by $\mathcal{I}(\widehat{\theta})$.
In the presence of an individual frailty, different estimation methods have been proposed. In particular, Ripatti and Palmgren (2000) and Vaida and Xu (2000) studied the case of multiplicative log-normal distributed frailties. The proposed methodology obtains maximum-likelihood estimates of the regression parameters, the variance components and the baseline hazard, as well as empirical Bayes estimates of the random effects (frailties). Therefore, it suffices to prove that the second stage of our proposed 2SRI-frailty procedure is a Cox proportional hazards model with a Gaussian frailty term in which the coefficient related with the treatment is unchanged from the original model (i.e., the coefficient of $Z$ remains $\beta$).

Given the causal equation in (4) and $F = Z - \theta_W W - \theta_X X$, it follows that $F = \theta_U U + \varepsilon$. Hence, $U = (F - \varepsilon)/\theta_U$ whenever $\theta_U \neq 0$. If $\theta_U \neq 0$, the linear part of the model in (3) can be rewritten as,

$\beta Z + \gamma X + \alpha U = \beta Z + \gamma X + \mu F + \eta,$

where $\mu = \alpha/\theta_U$ and $\eta = -(\alpha/\theta_U)\,\varepsilon$. Due to $\varepsilon$ being an independent normally distributed variable, $\eta$ is also asymptotically independent and normally distributed. If $\theta_U = 0$, then

$\beta Z + \gamma X + \alpha U = \beta Z + \gamma X + \mu F + \eta,$

with, in this case, $\mu = 0$ and $\eta = \alpha U$. Therefore, $\eta$ is again asymptotically independent and normally distributed due to $U$ being independent and normally distributed. Hence, the survival model is given by

$\lambda(t \mid Z, X, F, \eta) = \bigl(e^{\eta}\bigr)\, \lambda_0(t)\, e^{\beta Z + \gamma X + \mu F},$

which has the form of a Cox proportional hazards model with frailty $\omega = e^{\eta}$. Therefore, if $\widehat{\beta}_F$ is the estimator resulting from the second stage, invoking the censoring time assumptions and using the convergence of the partial maximum-likelihood method given a consistent method of estimating the product of the baseline risk and the frailty (Ripatti and Palmgren, 2000; Vaida and Xu, 2000), it follows that

(7)  $\sqrt{n}\,(\widehat{\beta}_F - \beta) \rightsquigarrow N(0, \sigma^2),$

with $\sigma^2$ the component in the matrix $\mathcal{I}^{-1}$ corresponding to $\beta$.
Remark. Normality of $U$ is required only when $\theta_U = 0$. In this case, the survival model does not contain unmeasured confounders, just unmeasured covariates. Such white noise can be omitted in standard linear models, but not in Cox regression models, where its omission underestimates the treatment effect. The key point is that the first-stage residual adds individual variability (a frailty) to the Cox model estimated in the second stage.
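The attenuation mentioned in the remark can be made explicit. A sketch, using a Gamma frailty with unit mean and variance $\theta$ for tractability (the Gaussian case behaves analogously):

```latex
% Conditional model: \lambda(t \mid Z, \omega) = \omega\,\lambda_0(t)\,e^{\beta Z},
% with E[\omega] = 1 and V(\omega) = \theta.
% Integrating the frailty out gives the marginal survival and hazard:
S(t \mid Z) = \bigl(1 + \theta\, e^{\beta Z}\, \Lambda_0(t)\bigr)^{-1/\theta},
\qquad
\lambda(t \mid Z) = \frac{\lambda_0(t)\, e^{\beta Z}}{1 + \theta\, e^{\beta Z}\, \Lambda_0(t)}.
```

For a binary treatment the marginal hazard ratio starts at $e^{\beta}$ and is dragged toward $1$ as $\Lambda_0(t)$ grows, because high-frailty subjects leave the risk set earlier in the higher-hazard arm; fitting a frailty-free Cox model to such data therefore attenuates $\widehat{\beta}$.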
3 Monte Carlo Simulation Study
To evaluate the behavior of the proposed methodology in finite samples, we conducted a range of Monte Carlo simulations. We found that, beyond the expected effect on the precision of estimation, neither the baseline shape of the survival times nor the censoring distribution have any meaningful effect on the observed results. Therefore, we only show results in which the baseline survival times follow a Weibull distribution with shape parameter two and scale parameters coherent with the proportional hazards assumption; that is, the scale parameter equates to the exponential of minus one half of the linear predictor. Here the treatment coefficient $\beta$ is the target. We center the treatment, as opposed to drawing it from a distribution with mean 0, to ensure that we obtain realistic survival times with a binary exposure without needing to alter the intercept. $X$ and $U$ denote the measured and unmeasured confounders, respectively. Both the measured ($X$) and unmeasured ($U$) covariates follow independent standard normal distributions. Censoring was independently drawn from a Weibull distribution such that the expected censoring was 20%. Treatment assignment is based on the linear equation $Z^* = \theta_W W + \theta_U U + \varepsilon$, where $W$ is the instrument, $U$ is the unmeasured covariate, and $\varepsilon$ is the random noise. We take the exposure to be $Z^*$ itself in the continuous case and a dichotomized version of it in the binary case. All of $W$, $U$, and $\varepsilon$ are drawn from independent standard normal distributions. Notice that, after fixing the rest of the parameters, increasing the noise variance yields an instrumental variable of lesser quality. Sample size was held fixed across simulations.
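A sketch of one such data-generating process in Python; the coefficients, the censoring scale, and the continuous-exposure choice are illustrative assumptions rather than the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Illustrative coefficients (not the paper's settings).
beta, gamma, alpha = 0.5, 0.5, 0.5

W = rng.standard_normal(n)            # instrument
X = rng.standard_normal(n)            # measured covariate
U = rng.standard_normal(n)            # unmeasured covariate
eps = rng.standard_normal(n)          # selection noise
Z = W + U + eps                       # continuous treatment

lp = beta * Z + gamma * X + alpha * U # linear predictor of the Cox model

# Weibull(shape=2) baseline under proportional hazards: the cumulative
# hazard is t^2 * exp(lp), so T = exp(-lp/2) * sqrt(E) with E ~ Exp(1);
# the per-subject scale exp(-lp/2) matches the description in the text.
T = np.exp(-lp / 2.0) * np.sqrt(rng.exponential(size=n))

# Independent Weibull(shape=2) censoring; the scale 2.5 was tuned to give
# roughly 20% censoring under these particular coefficients.
C = 2.5 * np.sqrt(rng.exponential(size=n))

Y = np.minimum(T, C)                  # observed time
delta = (T <= C).astype(int)          # event indicator
```

Feeding `(Y, delta, Z, X, W)` into the two-stage procedure then allows the bias of each estimator of `beta` to be tracked across Monte Carlo replications.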
3.1 All Omitted Covariation is Unmeasured Confounding
We first suppose a continuous exposure in which the unmeasured covariates affecting survival and treatment selection follow possibly correlated standard normal distributions. That is, the endogenous variable is continuous and there are no unmeasured predictors of survival time unrelated with treatment selection (i.e., while the true Cox model of the survival times may include a shared covariate with the treatment selection process, it does not include a frailty).
Figure 3 shows the median of the bias observed in 2000 Monte Carlo iterations for a stronger and a weaker instrument. The naive Cox model, which ignores the presence of omitted covariates, only performed well when there were no omitted covariates (the case in which the Cox model is correct). The proposed algorithm, 2SRI-frailty (2SRIF), reduced bias the most when the omitted covariates had strong effects. When the presence of unmeasured confounding was weaker (effects close to zero), and when there was no effect of the treatment, both 2SRI and 2SRIF obtained similar results. Median bias appeared to be invariant to the strength of the instrument.
Naive  2SRI  2SRIF  Naive  2SRI  2SRIF  
0.466  0.671  0.871  0.019  0.867  0.891  
0.000  0.931  0.931  0.000  0.917  0.922  
0.000  0.670  0.884  0.000  0.868  0.898  
0.952  0.951  0.912  0.952  0.851  0.912  
0.940  0.948  0.949  0.940  0.948  0.949  
0.943  0.947  0.915  0.943  0.947  0.915  
0.000  0.617  0.857  0.000  0.861  0.874  
0.000  0.935  0.940  0.000  0.902  0.907  
0.396  0.685  0.992  0.012  0.878  0.898  
Table 1 reports the coverage of the 95% confidence intervals computed from the standard asymptotic variance obtained from the second-stage Cox regression models of the respective procedures, ignoring the first-stage variability. The proposed algorithm achieved coverage close to the nominal level in all cases, suggesting that it can be implemented easily in practice. In contrast, the coverage of the naive Cox method and the basic 2SRI algorithm was poor. As expected, the naive method performed correctly in the case when the model is correct. As is well-known, two-stage instrumental variable methods lead to variance-inflated estimators, with the degree of variance inflation depending on the strength of the instrumental variable. We found that the length of the 95% confidence intervals ranged between 0.09 and 0.17 for the naive models, between 0.20 and 0.23 for 2SRI, and between 0.21 and 0.26 for the proposed 2SRIF method, for both the stronger and the weaker instrument. The fact that the 2SRIF procedure imposes a greater amount of inflation compared to 2SRI is helpful in terms of its ability to maintain the nominal level of coverage. Crucially, it appears that the 2SRIF's assumption on the distribution of the frailty is helpful in addressing bias and does not suffer from the excessive and inappropriate gains in precision that accompany many procedures with parametric components.
3.2 Omitted Covariation is a Mixture of Unmeasured Confounding and a Pure Individual Frailty
Figure 4 depicts the observed median bias for the previous scenario as the correlation between the unmeasured covariates in the survival and selection models varies. Note that a correlation of zero implies that the survival model does not contain unmeasured confounders, only unmeasured covariates, while a correlation of one implies that all omitted covariation manifests as unmeasured confounding (i.e., it is also related to treatment assignment). In the zero-correlation case, it is reassuring that the 2SRI and the 2SRIF procedures perform nearly as well as the naive Cox regression model, the true model in this scenario. The advantage of using 2SRIF versus 2SRI was larger for correlations close to zero, which makes sense as the pure frailty variation is then at its maximum, whereas at correlations close to one the frailty has all but disappeared.
In order to check the robustness of the recommended Gaussian frailty with respect to the distribution of the unmeasured covariates, we study the case where the unmeasured covariates, $U$ and $V$, are not normally distributed. In particular, we considered the following scenarios:
MI. .
MII. .
MIII. .
MIV. .
where the Gamma components follow independent centered Gamma(1,1) distributions and the remaining components follow independent standard normal distributions. Note that the mixing parameter determines both the distribution and the relationship between $U$ and $V$, and it is chosen in order to keep the marginal variance constant. Figure 5 shows the median of the bias of the 2SRIF algorithm observed in 2000 Monte Carlo iterations under the previous models. Results suggest minimal impact of the distribution of the unmeasured covariates. As expected, when the frailty has the assumed distribution the bias is smaller but, crucially, observed biases were always smaller than under the 2SRI procedure.
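The exact mixture definitions MI–MIV are not legible in this copy, but a variance-preserving Gamma–Gaussian mixture of the kind described can be sketched as follows, with the mixing weight `a` as a hypothetical stand-in for the parameter in the text:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

def mixed_unmeasured(a, size, rng):
    """Mix a centered Gamma(1,1) component (variance 1) with an independent
    standard normal so that the marginal variance stays equal to 1."""
    g = rng.gamma(shape=1.0, scale=1.0, size=size) - 1.0   # centered Gamma(1,1)
    z = rng.standard_normal(size)
    return a * g + np.sqrt(1.0 - a**2) * z

# a = 0 gives a pure normal; a = 1 gives a pure (skewed) centered Gamma.
U = mixed_unmeasured(0.6, n, rng)
```

Because the two components are independent with unit variance, the marginal variance is $a^2 + (1 - a^2) = 1$ for every mixing weight, isolating the effect of the distributional shape.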
3.3 Nonlinear Treatment Selection Model: Binary Exposure
The second scenario supposes a binary exposure. Because other parameter values produced similar results, we only report results in which the correlation between the unmeasured covariates was fixed at a single representative value.
Figure 6 depicts the median bias over 2000 Monte Carlo iterations when the treatment effect was directly estimated from a Cox regression with Gaussian frailty (due to the presence of an unmeasured covariate unrelated with treatment assignment, this model is correct in the absence of unmeasured confounding), the 2SRI procedure, and 2SRIF. A stronger and a weaker scenario was considered for the instrument. Not surprisingly, ignoring the presence of the frailty and estimating a standard Cox regression model results in larger bias, even in the absence of an unmeasured confounder. The 2SRI algorithm controls only the part of the bias related with the treatment assignment but fails to handle the frailty, and its performance suffers as a result. The naive Cox model with a frailty performs much better than both the naive Cox model with no frailty and 2SRI, implying that accounting for the frailty may be more important than dealing with unmeasured confounding. However, the proposed 2SRIF methodology produces a yet further reduction in bias, to close to zero in all scenarios, which we conjecture is due to separating the idiosyncratic and confounding effects of the residual. These results reveal that there is a clear benefit to be gained in practice from using 2SRIF as an IV procedure for time-to-event data.
NaiveF  2SRI  2SRIF  NaiveF  2SRI  2SRIF  
0.325  0.885  0.943  0.102  0.916  0.947  
0.189  0.936  0.944  0.041  0.944  0.947  
0.158  0.885  0.938  0.037  0.925  0.946  
0.941  0.954  0.949  0.936  0.944  0.943  
0.946  0.943  0.947  0.950  0.949  0.947  
0.936  0.946  0.944  0.939  0.905  0.947  
0.158  0.883  0.936  0.036  0.943  0.942  
0.212  0.940  0.946  0.038  0.927  0.942  
0.321  0.880  0.933  0.099  0.914  0.948  
Both the 2SRI and the proposed algorithms achieved coverage close to the nominal level in all scenarios; the naive Cox model with Gaussian frailty (NaiveF) understandably yields good results only in the absence of unmeasured confounding (Table 2). As expected, the strength of the instrument affects the confidence interval widths. The width of the 95% confidence interval ranged between 0.44 and 0.57 for the NaiveF models. Under the stronger instrument, interval estimator widths ranged between 0.86 and 0.88 for 2SRI and between 0.91 and 1.07 for 2SRIF. Under the weaker instrument, the widths ranged between 1.23 and 1.31 and between 1.30 and 1.53 for 2SRI and 2SRIF, respectively. These results are consistent with the results for continuous exposures; the variance inflation under the 2SRIF procedure exceeds that under 2SRI, which in turn exceeds that under NaiveF.
4 Real-world application: The Vascular Quality Initiative dataset
We apply 2SRI-frailty to nationwide data from the Vascular Quality Initiative (VQI) (http://www.vascularqualityinitiative.org) on patients diagnosed with carotid artery disease (carotid stenosis). These data contain comprehensive information on all patients suffering from carotid stenosis and are continually updated over time to facilitate determination of the best procedure or treatment approach to use on average and to determine which type of patients benefit the most from each procedure. However, the data are exposed to a plethora of selection biases, raising concerns that naive analyses will yield biased results. Because the outcomes of most interest are events such as stroke or death that can occur at any point during follow-up, these data are ideal for application of the 2SRI-frailty procedure.
We employed 2SRIF to estimate the comparative effectiveness of carotid endarterectomy (CEA) versus carotid angioplasty and stenting (CAS), the two surgical procedures used to intervene on patients with carotid stenosis. The data consist of 28,712 patients who received CEA and 8,117 who received CAS, between 15 and 89 years of age, over 2003–2015. During follow-up, there were 3,955 and 807 deaths in the CEA and CAS groups, respectively. Table 3 shows descriptive statistics for the measured covariates by procedure.
CEA  CAS  
n=28,712  n=8,117  
Age, meansd  70.29.4  69.110.3  
Male, n (%)  17,180 (59.8)  5,119 (63.1)  
Race, n (%)  
White  27,033 (94.2)  7,382 (90.9)  
Black  922 (3.2)  433 (5.3)  
Other  757 (2.6)  302 (3.8)  
Elective, n (%)  24,906 (86.7)  6,587 (81.1)  
Symptomatic, n (%)  11,168 (38.9)  4,282 (52.7)  
TIA or amaurosis, n (%)  6,405 (22.3)  2,001 (24.6)  
Stroke, n (%)  4,763 (16.6)  2,281 (28.1)  
Hypertension, n (%)  25,452 (88.6)  7,235 (89.1)  
Smoking History, n (%)  22,098 (77.0)  6,168 (76.0)  
Positive Stress Test, n (%)  2,655 (9.2)  677 (8.3)  
Coronary Disease, n (%)  8,586 (29.9)  2,790 (34.4)  
Heart Failure, n (%)  2,669 (9.3)  1,215 (15.0)  
Diabetes, n (%)  9,749 (33.9)  2,942 (36.2)  
COPD, n (%)  6,229 (21.7)  2,083 (25.7)  
Renal Insufficiency, n (%)  1,649 (5.7)  446 (5.5)  
HD, n (%)  263 (0.9)  114 (1.4)  
Prior ipsilateral CEA, n (%)  4,472 (15.6)  2,857 (35.2)  
Antiplatelet therapy, n (%)  
Aspirin  23,960 (83.4)  6,932 (85.4)  
P2Y12 inhibitor  6,980 (24.3)  6,173 (76.0)  
Betablocker, n (%)  18,269 (63.6)  4,602 (56.7)  
Statin, n (%)  22,418 (78.1)  6,408 (78.9)  
Figure 7 depicts two Kaplan-Meier survival curves: crude (left) and adjusted (right) by patient age, gender, ethnicity, race, type of surgery (elective/not elective), symptoms (yes/no), hypertension, smoking history (yes/no), positive stress test, coronary disease, heart failure, diabetes, COPD, renal insufficiency, dialysis status (HD), prior ipsilateral CEA, and the use of antiplatelet therapy, beta-blockers and statins, using the inverse propensity weighting procedure of MacKenzie et al. (2012). The crude hazard ratio (HR) comparing CEA to CAS was 0.719 (95% CI of (0.666; 0.777)). The adjusted HR was 0.693 (0.633; 0.760). The latter HR is slightly modified when a frailty term is included: 0.685 (0.624; 0.753) and 0.676 (0.613; 0.745) for the Gaussian and Gamma cases, respectively.
As the instrumental variable we used the center-level frequency of CEA versus CAS procedures over the twelve months prior to the current patient; that is, total CEA divided by the total of CEA and CAS procedures in the twelve months prior to the current patient. This variable is justified as an instrument because: 1) hospitals that performed a high relative number of a certain procedure in the past are likely to keep doing so; 2) there should be no effect of the relative frequency of CEA vs. CAS on a patient's outcome except through its effect on treatment choice for that patient; and 3) we know of no factors that would influence both this frequency and a patient's outcome. Reasons 2) and 3) are contingent on adjusting for the total number of CEA and CAS procedures performed at the center over the past 12 months.
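The instrument can be computed with a single pass over date-sorted records; this helper is an illustrative sketch (the function name and record layout are assumptions, not the VQI implementation):

```python
from datetime import date, timedelta

def iv_prior_year_fraction(records):
    """For each record (center_id, procedure_date, is_cea), sorted by date,
    return the fraction of CEA among all CEA+CAS procedures performed at
    the same center in the 365 days before that patient (None if no history)."""
    history = {}                      # center_id -> list of (date, is_cea)
    out = []
    for center, day, is_cea in records:
        hist = history.setdefault(center, [])
        cutoff = day - timedelta(days=365)
        while hist and hist[0][0] < cutoff:   # drop procedures older than a year
            hist.pop(0)
        total = len(hist)
        cea = sum(1 for _, c in hist if c)
        out.append(cea / total if total else None)
        hist.append((day, is_cea))
    return out
```

Patients with no prior-year history at their center get an undefined instrument value and would be handled separately (e.g., excluded or given a fallback window) in a real analysis.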
On the VQI data the IV is highly associated with treatment choice. The probability that a randomly selected subject undergoing CEA has a larger value of the instrument than a randomly selected subject undergoing CAS was 0.881 (95% confidence interval of (0.876; 0.885)). This IV was unrelated to all of the measured confounders, suggesting anecdotally that it may also be uncorrelated with any unmeasured confounders. Hence, it is reasonable to assume that the relationship of the instrument with mortality is solely due to its relationship with the treatment. Figure 8 (left side) shows the histogram of the instrument in both the CEA and CAS groups; at right, we show the boxplot for the IV by surgical procedure.
HR (95% CI)  
Crude  0.719 (0.666; 0.777)  
Cox model (Naive)  0.693 (0.633; 0.760)  
Naive  frailty (gaussian)  0.685 (0.624; 0.753)  
Naive  frailty (gamma)  0.676 (0.613; 0.745)  
2SRI  0.901 (0.737; 1.100)  
2SRI  frailty (gaussian)  0.887 (0.724; 1.087)  
2SRI  frailty (gamma)  0.882 (0.716; 1.086)  
The treatment effect almost disappears when 2SRI is applied to the dataset: an HR of 0.901 with a 95% confidence interval of (0.737; 1.100). When the proposed 2SRI-frailty algorithm is used, a similar result obtains: an HR of 0.887 with a 95% confidence interval of (0.724; 1.087). Similar results were also obtained under a Gamma distributed frailty instead of the Gaussian frailty. Table 4 shows the hazard ratios and 95% confidence intervals.

5 Discussion
Instrumental variables methods are often used to account for unmeasured confounders. Although these methods have been widely studied in a variety of situations, their suitability for estimating Cox proportional hazards models is unclear. It is well-known that, in this case, model misspecification can produce bias even when the omitted variables are unrelated with the treatment assignment (Aalen et al., 2015); that is, when they only affect the survival time. As suggested by our structural argument in the Introduction, an individual frailty appears able to solve this problem. We showed that the presence of idiosyncratic variation affecting treatment selection may induce a frailty in the instrumented survival time model even if there is no frailty in the original survival model. In practice, the most likely scenario is that both a true frailty and unmeasured confounding factors affect survival. For these reasons, we were motivated to develop and evaluate an IV procedure that, in the second stage, incorporates a frailty term.
Because the Cox model is nonlinear, our base strategy for dealing with unmeasured confounders was to use the two-stage residual inclusion algorithm, 2SRI, adapted to the Cox model. As noted above, even when the true survival model does not contain omitted covariates, the 2SRI procedure induces a frailty in the second-stage Cox regression model through the inclusion of the residuals computed in the first stage. To account for this phenomenon, we added an individual frailty to the second-stage (instrumented) statistical model. Under standard reliability conditions, we proved the asymptotic consistency of the estimator defined under our 2SRIF procedure for the case when the univariate frailty distribution is correctly assumed to be Gaussian.
Monte Carlo simulations suggested that the proposed methodology (2SRIF) produces an important bias reduction and is superior to the 2SRI, particularly in the presence of an individual frailty due to unmeasured covariates unrelated with the treatment assignment. A very important finding is that the bias of the 2SRIF method was always close to zero even when the residuals from the treatment selection equation were not normally distributed. The Gaussian distribution can be directly justified when each individual frailty is the sum of different independent sources of heterogeneity. Furthermore, because the procedure with the Gaussian frailty was surprisingly robust to erroneously assumed frailty distributions, we recommend using a Gaussian frailty.
A controversial feature of our procedure is the inclusion of the individual frailty term. Although there exists a vast literature for the case where the frailty is common to a group of individuals (shared frailty), the number of references dealing with individual frailties is minimal. Consistency properties of the common estimation algorithm for Cox models with frailties were proved previously (Nielsen et al., 1992; Murphy, 1995). We adapted these theoretical results in deriving the consistency results presented herein. By specifying a distribution for its values, the individual frailty accounts for the omitted covariates unrelated with the treatment assignment (the extra variability introduced into the survival model by the first stage of the algorithm, and the portion of the residual independent of the unmeasured confounders), freeing the augmented first-stage residual (the control function) to deal with unmeasured confounding. The resulting procedure estimates the average treatment effect conditional on the unmeasured confounder and the frailty (Yashin et al., 2001). Because in practice the specification of the distribution of the frailty can be arbitrary, the observed results should be handled with caution (Hougaard, 1995) and should be supported by sensitivity analyses considering different frailty distributions. In the VQI application, the empirical results were found to have only a slight dependence on the distribution of the frailty (see Figure 5). This is a key finding that justifies the use of the 2SRIF procedure and represents a major advance in the instrumental variables literature for the analysis of time-to-event outcomes in observational settings.
In the real-world application, a small but significant (at the 0.05 level) effect of the treatment is detected when the presence of omitted covariates in the Cox model is ignored. This effect almost disappears under 2SRI. When the 2SRIF method is used, the estimated effect of CEA over CAS is slightly larger. This result confirms that the effect of the procedure a patient receives is underestimated when unmeasured confounding is ignored (Chamberlain, 1985; Gail et al., 1984). It is worth noting that different patient enrollment rates were observed by treatment: while CAS patient censoring is constant across the follow-up, most of the CEA patients have follow-up above two years, with an important number censored between the second and the fourth years. To the extent these differences are caused by an unmeasured confounder, this can introduce additional bias into the standard naive estimates and strongly motivates the use of an adequate instrumental variable procedure.
The method we developed conditions on all omitted covariates and assumes they have multiplicative effects on the hazard function under the Cox model, unlike recently developed methods that make unusual additive hazard assumptions in order to more simply account for unmeasured confounding (MacKenzie et al., 2014; Tchetgen Tchetgen et al., 2015). Therefore, we anticipate that our proposed and proven procedure will hold extensive appeal and be widely used in practice.
While it is encouraging that our null results cohere with those of recent RCTs (Rosenfield et al., 2016; Brott et al., 2016), thereby overcoming the unfavorable CAS results of the non-IV analyses, an effect close to 0 makes it difficult to distinguish our proposed IV procedure from the incumbent two-stage residual inclusion method. However, when the true effect is 0 (HR of 1), the bias from ignoring the frailty is 0, because omitting a frailty shrinks the true coefficient towards 0. Therefore, the differences between the 2SRIF and the standard 2SRI procedure for the Cox model are expected to converge to 0 as the true treatment effect approaches 0. In this sense, the lack of extensive differences between the various 2SRI (frailty and standard) procedures is a real-data endorsement that our proposed 2SRIF procedure for the Cox model performs as it should by not rejecting the null hypothesis when the RCT results and the 2SRI results suggest that the true effect is close to 0.
Acknowledgement
This work was supported by a Patient-Centered Outcomes Research Institute (PCORI) Award ME150328261. All statements in this paper, including its findings and conclusions, are solely those of the authors and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute (PCORI), its Board of Governors or Methodology Committee. The authors are most sincerely grateful to the PCORI Patient Engagement and Governance Committee, and especially to Jon Skinner for reading a draft of the manuscript, for their efforts in the revision of the manuscript and their help in developing the research proposal. The authors have no conflicts of interest to report.
References
 Aalen et al. (2015) Aalen, O. O., R. J. Cook, and K. Røysland (2015). Does Cox analysis of a randomized survival study yield a causal treatment effect? Lifetime Data Analysis 21(4), 579–593.
 Anderson (2005) Anderson, T. (2005). Origins of the limited information maximum likelihood and two-stage least squares estimators. Journal of Econometrics 127(1), 1–16.
 Angrist et al. (1996) Angrist, J., G. Imbens, and D. Rubin (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91(434), 444–455.
 Brott et al. (2016) Brott, T. G., G. Howard, G. S. Roubin, J. F. Meschia, A. Mackey, W. Brooks, W. S. Moore, M. D. Hill, V. A. Mantese, W. M. Clark, C. H. Timaran, D. Heck, P. P. Leimgruber, A. J. Sheffet, V. J. Howard, S. Chaturvedi, B. K. Lal, J. H. Voeks, and R. W. I. Hobson (2016). Longterm results of stenting versus endarterectomy for carotidartery stenosis. New England Journal of Medicine 374(11), 1021–1031.
 Cai et al. (2011) Cai, B., D. S. Small, and T. R. Ten Have (2011). Two-stage instrumental variable methods for estimating the causal odds ratio: Analysis of bias. Statistics in Medicine 30(15), 1809–1824.
 Chamberlain (1985) Chamberlain, G. (1985). Heterogeneity, omitted variable bias, and duration dependence. In J. J. Heckman and B. S. Singer (Eds.), Longitudinal Analysis of Labor Market Data:. Cambridge: Cambridge University Press.
 Cox (1972) Cox, D. R. (1972). Regression models and life-tables. Journal of the Royal Statistical Society, Series B 34(2), 187–220.
 Gail et al. (1984) Gail, M. H., S. Wieand, and S. Piantadosi (1984). Biased estimates of treatment effect in randomized experiments with nonlinear regressions and omitted covariates. Biometrika 71(3), 431–444.
 Greene and Zhang (2003) Greene, W. and G. Zhang (2003). Econometric analysis. Prentice Hall, New Jersey, USA.
 Hausman (1978) Hausman, J. A. (1978). Specification tests in econometrics. Econometrica 46(6), 1251–1271.
 Hernán (2010) Hernán, M. (2010). The hazards of hazard ratios. Epidemiology 21(1), 13–15.
 Hernán and Robins (2006) Hernán, M. and J. Robins (2006). Instruments for causal inference: an epidemiologist's dream? Epidemiology 17(4), 360–372.
 Hougaard (1995) Hougaard, P. (1995). Frailty models for survival data. Lifetime Data Analysis 1(3), 255–273.
 Kalbfleisch and Prentice (2002) Kalbfleisch, J. D. and R. L. Prentice (2002). The statistical analysis of failure time data. John Wiley & Sons, Inc.
 Klungel et al. (2015) Klungel, O. H., A. de Boer, S. V. Belitser, R. H. Groenwold, and K. Roes (2015). Instrumental variable analysis in epidemiologic studies: an overview of the estimation methods. Pharmaceutica Analytica Acta 6(4), 1–9.
 Li et al. (2015) Li, J., J. Fine, and A. Brookhart (2015). Instrumental variable additive hazards models. Biometrics 71(1), 122–130.
 MacKenzie et al. (2012) MacKenzie, T. A., J. R. Brown, D. S. Likosky, Y. Wu, and G. L. Grunkemeier (2012). Review of casemix corrected survival curves. The Annals of Thoracic Surgery 93(5), 1416 – 1425.
 MacKenzie et al. (2016) MacKenzie, T. A., M. Lberg, and A. J. O’Malley (2016). Patient centered hazard ratio estimation using principal stratification weights: application to the norccap randomized trial of colorectal cancer screening. Observational Studies 2, 29–50.
 MacKenzie et al. (2014) MacKenzie, T. A., T. D. Tosteson, N. E. Morden, T. A. Stukel, and A. J. O’Malley (2014). Using instrumental variables to estimate a cox’s proportional hazards regression subject to additive confounding. Health Services and Outcomes Research Methodology 14(1), 54–68.
 Martens et al. (2006) Martens, E. P., W. R. Pestman, A. de Boer, S. V. Belitser, and O. H. Klungel (2006). Instrumental variables: Application and limitations. Epidemiology 17(3), 261–267.
 Murphy (1995) Murphy, S. A. (1995). Asymptotic theory for the frailty model. The Annals of Statistics 23(1), 182–198.
 Nielsen et al. (1992) Nielsen, G. G., R. D. Gill, P. K. Andersen, and T. I. A. SÃ¸rensen (1992). A counting process approach to maximum likelihood estimation in frailty models. Scandinavian Journal of Statistics 19(1), 25–43.
 Normand et al. (2011) Normand, S.L. T., R. G. Frank, and A. J. O’Malley (2011). Estimating costoffsets of new medications: Use of new antipsychotics and mental health costs for schizophrenia. Statistics in Medicine 30(16), 1971–1988.
 Pearl (1995) Pearl, J. (1995). Causal diagrams for empirical research. Biometrika 82(4), 669.
 Ripatti and Palmgren (2000) Ripatti, S. and J. Palmgren (2000). Estimation of multivariate frailty models using penalized partial likelihood. Biometrics 56(4), 1016–1022.
 Robins and Tsiatis (1991) Robins, J. M. and A. A. Tsiatis (1991). Correcting for noncompliance in randomized trials using rank preserving structural failure time models. Communications in Statistics  Theory and Methods 20(8), 2609–2631.
 Rosenfield et al. (2016) Rosenfield, K., J. S. Matsumura, S. Chaturvedi, T. Riles, G. M. Ansel, D. C. Metzger, L. Wechsler, M. R. Jaff, and W. Gray (2016). Randomized trial of stent versus surgery for asymptomatic carotid stenosis. New England Journal of Medicine 374(11), 1011–1020.
 Schmoor and Schumacher (1997) Schmoor, C. and M. Schumacher (1997). Effects of covariate omission and categorization when analysing randomized trials with the cox model. Statistics in Medicine 16(3), 225–237.
 Tchetgen Tchetgen et al. (2015) Tchetgen Tchetgen, E., S. Walter, S. Vansteelandt, T. Martinussen, and M. Glymour (2015). Instrumental variable estimation in a survival context. Epidemiology 26(3), 402–410.
 Terza et al. (2008) Terza, J. V., A. Basu, and P. J. Rathouz (2008). Twostage residual inclusion estimation: Addressing endogeneity in health econometric modeling. Journal of Health Economics 27(3), 531 – 543.
 Terza et al. (2008) Terza, J. V., W. D. Bradford, and C. E. Dismuke (2008). The use of linear instrumental variables methods in health services research and health economics: a cautionary note. Health Research and Educational Trust 43(3), 1102–1120.
 Thanasassoulis and O’Donnell (2009) Thanasassoulis, P. and T. O’Donnell (2009). Mendelian randomization. Journal of American Medical Association 301(22), 2386–2388.
 Therneau et al. (2003) Therneau, T. M., P. M. Grambsch, and V. S. Pankratz (2003). Penalized survival models and frailty. Journal of Computational and Graphical Statistics 12(1), 156–175.
 Vaida and Xu (2000) Vaida, F. and R. Xu (2000). Proportional hazards model with random effects. Statistics in Medicine 19(24), 3309–3324.
 Wan et al. (2015) Wan, F., D. Small, J. E. Bekelman, and N. Mitra (2015). Bias in estimating the causal hazard ratio when using twostage instrumental variable methods. Statistics in Medicine 34(14), 2235–2265.
 Wienke (2010) Wienke, A. (2010). Frailty models in survival analysis. Florida: Chapman & Hall/CRC Biostatistics Series.
 Yashin et al. (2001) Yashin, A., I. MAchine, A. Begun, and J. Vaupel (2001). Hidden frailty: myths and reality. Research Report, Department of Statistics and Demography, Odense University.