1 Introduction
Semi-competing risks (Fine et al., 2001) occur in studies where observation of a nonterminal event (e.g., progression) may be pre-empted by a terminal event (e.g., death), but not vice versa. In randomized clinical trials to evaluate treatments of life-threatening diseases, patients are often observed for specific types of disease progression and survival. Often, the primary outcome is patient survival, resulting in data analyses focusing on the terminal event using standard survival analysis tools (Ibrahim et al., 2005). However, there may also be interest in understanding the effect of treatment on nonterminal outcomes such as progression, readmission, etc. An example is a randomized trial for the treatment of malignant brain tumors, where one of the important progression endpoints is based on deterioration of the cerebellum. An important feature of this progression endpoint is that it is biologically plausible that a patient could die without cerebellar deterioration. Thus, analyzing the effect of treatment on progression needs to account for the fact that progression is not well-defined after death.
Varadhan et al. (2014) review models that have been proposed for analyzing semi-competing risks data. These models can be classified into two broad categories: models for the distribution of the observable data, e.g., cause-specific hazards and sub-distribution functions (Fix and Neyman, 1951; Hougaard, 1999; Xu et al., 2010; Lee et al., 2015), and models for the distribution of the latent failure times (Robins, 1995a, b; Lin et al., 1996; Wang, 2003; Peng and Fine, 2007; Ding et al., 2009; Peng and Fine, 2012; Chen, 2012; Hsieh and Huang, 2012). In these approaches, inference has focused on specific model parameters (e.g., regression coefficients). With the exception of Robins (1995a, b) and, more recently, Comment et al. (2019), none of these approaches have discussed the causal interpretability of the target parameters. The latter work uses different models (parametric frailty models) and causal assumptions (latent ignorability) than our approach.
In this paper, we are interested in estimating the causal effect of treatment on the non-terminal endpoint from a randomized trial generating semi-competing risks data. Using the potential outcomes framework (Rubin, 1974), we propose a principal stratification estimand (Frangakis and Rubin, 2002) to quantify the causal effect. We then introduce assumptions that utilize baseline covariates to identify this estimand from the distribution of the observable data and propose a Bayesian nonparametric (BNP) approach for modeling this distribution. An important feature of BNP models is their large support, allowing us to approximate essentially arbitrary distributions (Ishwaran and James, 2001). To handle covariates, our approach is based on the dependent Dirichlet process (DDP) prior introduced by MacEachern (1999).
The paper is outlined as follows: Section 2 introduces the motivating brain tumor study. The formal definition of the causal estimand is introduced in Section 3. We introduce the BNP model in Section 4. A simulation study is summarized in Section 5. We analyze the brain tumor data in Section 6 and conclude with a brief discussion in Section 7.
2 Motivating Brain Tumor Study
The methodology is motivated by a randomized, placebo-controlled phase II trial of 222 patients with recurrent malignant gliomas who were scheduled for tumor resection (Brem et al., 1995). Eligible patients had a single focus of tumor in the cerebrum, had a Karnofsky score greater than 60, had completed radiation therapy, had not taken nitrosoureas within 6 weeks of enrollment, and had not had systemic chemotherapy within 4 weeks of enrollment. The data include 11 baseline prognostic measures and a baseline evaluation of cerebellar function. The former include age, race, Karnofsky performance score, local vs. whole brain radiation, percent of tumor resection, previous use of nitrosoureas, and tumor histology (glioblastoma, anaplastic astrocytoma, oligodendroglioma, or other) at implantation. Patients were randomized to receive surgically implanted biodegradable polymer discs with or without 3.85% carmustine. The follow-up duration was 1 year. Of the 219 patients with complete baseline measures, 204 were observed to die and 100 were observed to progress prior to death. Of the 15 patients who did not die, 4 were observed to have cerebellar progression. Our goal is to estimate the causal effect of treatment on time to cerebellar progression.
3 Causal Estimand and Identification Assumptions
3.1 Potential Outcomes and Causal Estimand
Let $T_1(z)$, $T_2(z)$ and $C(z)$ denote progression time, death time and censoring time under treatment $z$. Here $z = 0$ and $z = 1$ represent the control and treatment group, respectively. All event times are log-transformed. Fundamental to our setting is that $T_1(z) \leq T_2(z)$ (i.e., progression cannot happen after death).
The causal estimand of interest is the function
$$\theta(t) = \frac{P\{T_1(1) \leq t \mid T_2(0) > t, T_2(1) > t\}}{P\{T_1(0) \leq t \mid T_2(0) > t, T_2(1) > t\}},$$
where $\theta(t)$ is a smooth function of $t$. Among patients who survive to time $t$ under both treatments, this estimand contrasts the risk of progression prior to time $t$ for treatment 1 relative to treatment 0. This estimand is an example of a principal stratum causal effect (Frangakis and Rubin, 2002).
3.2 Observed Data
Let $Z$ denote treatment assignment and $X$ denote a vector of the baseline covariates. Let $T_1 = T_1(Z)$, $T_2 = T_2(Z)$ and $C = C(Z)$. Let $Y_1 = \min(T_1, T_2, C)$, $Y_2 = \min(T_2, C)$, $\Delta_1 = I(Y_1 = T_1)$, and $\Delta_2 = I(Y_2 = T_2)$ denote the observed event times and event indicators. The observed data for each patient are $O = (X, Z, Y_1, \Delta_1, Y_2, \Delta_2)$. We assume that we observe $n$ i.i.d. copies of $O$. Throughout, variables subscripted by $i$ will denote data specific to patient $i$.
3.3 Identification Assumptions
We introduce the following four assumptions that are sufficient for identifying our causal estimand.
Assumption 1: Treatment is randomized, i.e., $Z \perp \{X, T_1(z), T_2(z), C(z) : z = 0, 1\}$.
This obviously holds by design in randomized trials as considered here.
Assumption 2: Censoring is non-informative in the sense that
$$C(z) \perp (T_1(z), T_2(z)) \mid X$$
for all $z \in \{0, 1\}$.
Let $\lambda_z(t \mid x)$ and $F_{2,z}(t \mid x)$ denote the conditional hazard function and conditional distribution function of $T_2(z)$ given $X = x$, respectively. Under Assumptions 1 and 2, $\lambda_z(t \mid x)$ and $F_{2,z}(t \mid x)$ are identified via the following formulae:
$$\lambda_z(t \mid x) = \lim_{h \to 0} \frac{1}{h} P(t \leq Y_2 < t + h \mid Y_2 \geq t, Z = z, X = x), \qquad F_{2,z}(t \mid x) = 1 - \exp\left\{-\int_{-\infty}^{t} \lambda_z(u \mid x)\, du\right\}.$$
Furthermore, the conditional sub-distribution function of $T_1(z)$ given $T_2(z) = t_2$ and $X = x$, $F_{1,z}(t_1 \mid t_2, x) = P(T_1(z) \leq t_1 \mid T_2(z) = t_2, X = x)$, is identified via the following formula:
$$F_{1,z}(t_1 \mid t_2, x) = P(Y_1 \leq t_1, \Delta_1 = 1 \mid Y_2 = t_2, \Delta_2 = 1, Z = z, X = x),$$
where $t_1 < t_2$. Together, $F_{2,z}$ and $F_{1,z}$ identify the joint sub-distribution of $(T_1(z), T_2(z))$ given $X$.
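For illustration, the standard hazard-to-distribution mapping $F(t) = 1 - \exp\{-\int \lambda(u)\, du\}$ underlying this identification can be checked numerically. The sketch below (function names illustrative) converts a hazard evaluated on a grid into a distribution function; with a constant hazard it reproduces the exponential c.d.f.

```python
import numpy as np

def distribution_from_hazard(hazard, t_grid):
    """Convert a hazard function on a grid into a distribution function
    via F(t) = 1 - exp(-cumulative hazard up to t)."""
    lam = hazard(t_grid)
    # trapezoidal cumulative integral of the hazard
    cum_haz = np.concatenate(
        [[0.0], np.cumsum(0.5 * (lam[1:] + lam[:-1]) * np.diff(t_grid))])
    return 1.0 - np.exp(-cum_haz)

t_grid = np.linspace(0.0, 5.0, 2001)
# constant hazard 0.7, so F(t) = 1 - exp(-0.7 t)
F = distribution_from_hazard(lambda t: np.full_like(t, 0.7), t_grid)
```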
Assumption 3: The conditional joint distribution function of $(T_2(0), T_2(1))$ given $X = x$ follows a Gaussian copula model, i.e.,
$$P(T_2(0) \leq t_0, T_2(1) \leq t_1 \mid X = x) = \Phi_2\left\{\Phi^{-1}(F_{2,0}(t_0 \mid x)), \Phi^{-1}(F_{2,1}(t_1 \mid x)); \rho\right\},$$
where $\Phi$ is a standard normal c.d.f. and $\Phi_2(\cdot, \cdot; \rho)$ is a bivariate normal c.d.f. with mean 0, marginal variances 1, and correlation $\rho$. For fixed $\rho$, the joint distribution is identified since $F_{2,0}$ and $F_{2,1}$ are identified. Similar assumptions have been used in the causal mediation literature (Daniels et al., 2012).
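The Gaussian copula is straightforward to evaluate numerically. A minimal sketch, assuming the two marginal c.d.f. values at $t$ and a correlation $\rho$ are given (function names are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula_cdf(u0, u1, rho):
    """C(u0, u1; rho) = Phi_2(Phi^{-1}(u0), Phi^{-1}(u1); rho)."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = np.array([norm.ppf(u0), norm.ppf(u1)])
    return multivariate_normal(mean=np.zeros(2), cov=cov).cdf(z)

def joint_survival(F0_t, F1_t, rho):
    """P(T2(0) > t, T2(1) > t) from the marginal c.d.f. values at t,
    via inclusion-exclusion on the copula."""
    return 1.0 - F0_t - F1_t + gaussian_copula_cdf(F0_t, F1_t, rho)

# with rho = 0 the two survival events are independent: 0.7 * 0.6 = 0.42
p = joint_survival(0.3, 0.4, 0.0)
```

Positive $\rho$ increases the joint survival probability relative to independence, which is the sense in which $\rho$ acts as a sensitivity parameter.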
Assumption 4: Progression time under treatment $z$ is conditionally independent of death time under treatment $z'$ ($z' \neq z$) given death time under treatment $z$ and covariates $X$, i.e.,
$$T_1(z) \perp T_2(z') \mid T_2(z), X, \qquad z \neq z'.$$
Lemma: Under Assumptions 1-4, $\theta(t)$ is identified from the distribution of the observed data as follows:
$$\theta(t) = \frac{\int P\{T_1(1) \leq t, T_2(0) > t, T_2(1) > t \mid X = x\}\, d\widehat{F}_X(x)}{\int P\{T_1(0) \leq t, T_2(0) > t, T_2(1) > t \mid X = x\}\, d\widehat{F}_X(x)},$$
where each conditional probability is a functional of $F_{1,z}$, $F_{2,z}$ and the copula in Assumption 3, and $\widehat{F}_X$ is the empirical distribution of $X$.
4 Bayesian Regression Model
4.1 Dependent Dirichlet Process - Gaussian Process Prior
We start with a review of the Dirichlet process as a prior for an unknown distribution and step by step extend it to the Dependent Dirichlet Process - Gaussian Process prior.
The Dirichlet process (DP) prior has been widely used as a prior model for a random unknown probability distribution. We write $G \sim \mathrm{DP}(\alpha, G_0)$ if a random distribution $G$ of a $p$-dimensional random vector follows a DP prior, where $\alpha$ is known as the total mass parameter and $G_0$ is known as the base measure. Sethuraman (1994) provides a constructive definition of a DP, $G(\cdot) = \sum_{l=1}^{\infty} w_l \delta_{\theta_l}(\cdot)$, where $w_l = v_l \prod_{h < l}(1 - v_h)$, $v_l \stackrel{iid}{\sim} \mathrm{Beta}(1, \alpha)$, and $\theta_l \stackrel{iid}{\sim} G_0$. In many applications, the discrete nature of $G$ is not appropriate. A DP mixture model extends the DP model by replacing each point mass with a continuous kernel. For example, a DP mixture of normals takes the form $f(y) = \sum_{l=1}^{\infty} w_l\, \phi(y \mid \mu_l, \Sigma)$, where $\phi(\cdot \mid \mu, \Sigma)$ is the density function of a multivariate normal random vector with mean vector $\mu$ and variance-covariance matrix $\Sigma$.
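The stick-breaking construction can be sketched directly. In the sketch below, `base_sampler` and the numerical settings are illustrative:

```python
import numpy as np

def stick_breaking(alpha, base_sampler, L, rng):
    """Truncated stick-breaking draw from a DP: weights
    w_l = v_l * prod_{h<l} (1 - v_h) with v_l ~ Beta(1, alpha),
    and atoms drawn i.i.d. from the base measure."""
    v = rng.beta(1.0, alpha, size=L)
    w = v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    atoms = base_sampler(L, rng)
    return w, atoms

rng = np.random.default_rng(0)
# illustrative: standard normal base measure, truncation at L = 50 atoms
w, atoms = stick_breaking(alpha=1.0,
                          base_sampler=lambda L, r: r.normal(0.0, 1.0, size=L),
                          L=50, rng=rng)
```

With a moderate truncation level the weights sum to nearly 1, since the leftover mass decays geometrically in expectation.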
To introduce a prior on the conditional (on covariates $x$) distribution $f_x$ of $y$, the DP mixture model has been extended to a dependent DP (DDP) by replacing $\mu_l$ in each term with $\mu_l(x)$, which is a multivariate stochastic process indexed by $x$. A DDP mixture of normals takes the form:
$$f_x(y) = \sum_{l=1}^{\infty} w_l\, \phi(y \mid \mu_l(x), \Sigma).$$
To complete the prior specification, we need to posit a stochastic process prior for $\mu_l(\cdot)$. A common specification uses independent Gaussian process (GP) priors (MacEachern, 1999) on the $\mu_l(\cdot)$. A GP prior is specified such that, for all $J$ and $(x_1, \ldots, x_J)$, the distribution of $(\mu_l(x_1), \ldots, \mu_l(x_J))$ is multivariate normal with mean vector $(m(x_1), \ldots, m(x_J))$ and covariance matrix $K$, where the $(j, j')$ entry is $k(x_j, x_{j'})$. We write $\mu_l(\cdot) \sim \mathrm{GP}(m, k)$. For an extensive review of GP priors, see Rasmussen and Williams (2006) and MacKay (1999). We model the mean function $m(x)$ as a linear regression on covariates $x$, $m(x) = x^{\top}\beta_l$, with covariance process specified as
$$k(x, x') = \exp\left\{-\sum_{d=1}^{p} (x_d - x'_d)^2\right\} + \epsilon\, I(x = x'),$$
where $p$ is the dimension of the covariate vector and $\epsilon$ is a small positive constant used to ensure that the covariance function is positive definite. To ensure a reasonable covariance structure, continuous covariates should be standardized to have mean 0 and variance 1. More flexible covariance functions can be considered if desired. Additional priors are introduced on the $\beta_l$'s and $\Sigma$, the details of which are discussed in Appendix A1. We write $f_x \sim \text{DDP-GP}$.
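This covariance specification can be sketched as follows. The ridge value `eps` and all settings are illustrative; the text only requires a small positive constant on the diagonal:

```python
import numpy as np

def sq_exp_cov(X, eps=1e-2):
    """Covariance matrix with entries exp(-sum_d (x_d - x'_d)^2), plus a
    small ridge eps on the diagonal to guarantee positive definiteness."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2) + eps * np.eye(len(X))

def draw_gp(X, beta, eps, rng):
    """One GP draw of (mu(x_1), ..., mu(x_J)) with linear mean X @ beta
    and the squared-exponential covariance above."""
    return rng.multivariate_normal(X @ beta, sq_exp_cov(X, eps))

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))   # 20 standardized covariate vectors
mu = draw_gp(X, beta=np.array([0.5, -1.0]), eps=1e-2, rng=rng)
```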
4.2 Application to Semi-competing Risks Data
Separately for each treatment group $z$, we posit independent DDP-GP priors on the unknown conditional (on $X$) probability measure of $(T_1(z), T_2(z))$. Since $F_{1,z}$ and $F_{2,z}$ are functionals of this measure, the prior induces priors on $F_{1,z}$ and $F_{2,z}$ (identified under Assumptions 1 and 2) and, together with the Gaussian copula for $(T_2(0), T_2(1))$, implies a prior on the estimand $\theta(t)$. The prior also induces priors on non-identified quantities, which have no impact on our analysis. More specifics about our prior are presented in Appendix A1.
Before transitioning to the posterior sampling algorithm, note that the relevant portion of the observed data likelihood for individual $i$ with data $(y_{1i}, \delta_{1i}, y_{2i}, \delta_{2i}, x_i, z_i)$ is
$$L_i = \{f(y_{1i}, y_{2i} \mid x_i)\}^{\delta_{1i}\delta_{2i}} \left\{\int_{y_{2i}}^{\infty} f(t_1, y_{2i} \mid x_i)\, dt_1\right\}^{(1-\delta_{1i})\delta_{2i}} \left\{\int_{y_{2i}}^{\infty} f(y_{1i}, t_2 \mid x_i)\, dt_2\right\}^{\delta_{1i}(1-\delta_{2i})} \left\{\int_{y_{1i}}^{\infty}\int_{y_{1i}}^{\infty} f(t_1, t_2 \mid x_i)\, dt_1\, dt_2\right\}^{(1-\delta_{1i})(1-\delta_{2i})},$$
where $f(\cdot, \cdot \mid x)$ denotes the conditional density of the log event times under the assigned treatment. This factorization allows us to see that, using data augmentation to replace the integrals, the joint full data likelihood is $\prod_i f(t_{1i}, t_{2i} \mid x_i)$. This will allow us to use existing posterior simulation techniques for DDP-GP models.
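As an illustration of the data augmentation step, suppose the currently selected mixture component for patient $i$ is normal and that the relevant (conditional) mean and standard deviation for a censored coordinate have already been computed; the censored time is then imputed from the corresponding truncated normal. The sketch below is a simplified, hypothetical version of that step:

```python
import numpy as np
from scipy.stats import truncnorm

def impute_censored(y_cens, mu, sigma, rng):
    """Impute a latent event time from N(mu, sigma^2) truncated to
    (y_cens, infinity), replacing the integral over the censoring
    region in the likelihood with an imputed value."""
    a = (y_cens - mu) / sigma   # standardized lower truncation point
    return truncnorm.rvs(a, np.inf, loc=mu, scale=sigma, random_state=rng)

rng = np.random.default_rng(2)
# illustrative values: log censoring time 1.3, component mean 1.0, sd 0.5
t2_imputed = impute_censored(y_cens=1.3, mu=1.0, sigma=0.5, rng=rng)
```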
4.3 Posterior Simulation
The details of the MCMC algorithm are presented in Appendix A2. Here we focus on individuals assigned to treatment $z$ and suppress the dependence of the notation on $z$. As noted above, the MCMC implementation is based on the full data likelihood. While $f_x$ is an infinite mixture of normals, we approximate it by a finite mixture with $L$ components. This finite mixture model can be replaced by a hierarchical model where (1) $s_i$ is a latent variable that selects mixture component $l$ ($l = 1, \ldots, L$) with probability $w_l$ (properly normalized to handle the finite number of mixture components) and (2) given $s_i = l$, the pair $(T_{1i}, T_{2i})$ follows a multivariate normal distribution with mean $\mu_l(x_i)$ and variance $\Sigma$.
Posterior simulation is based on this hierarchical model characterization. Importantly, all of the full conditionals in the MCMC algorithm have a closed form representation. Details of the Markov chain Monte Carlo posterior simulation can be found in Appendix A2.
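One of these full conditionals, the update of the latent indicator $s_i$, can be sketched as follows. Here `mus[l]` stands for the $l$-th mixture mean already evaluated at patient $i$'s covariates, and all numerical values are illustrative:

```python
import numpy as np
from scipy.stats import multivariate_normal

def sample_indicator(y_i, w, mus, Sigma, rng):
    """Sample the latent component indicator s_i with probability
    proportional to w_l * N(y_i | mus[l], Sigma)."""
    logp = np.log(w) + np.array(
        [multivariate_normal(mus[l], Sigma).logpdf(y_i) for l in range(len(w))])
    p = np.exp(logp - logp.max())   # stabilize before normalizing
    p /= p.sum()
    return rng.choice(len(w), p=p)

rng = np.random.default_rng(3)
w = np.array([0.5, 0.5])
mus = [np.zeros(2), np.array([5.0, 5.0])]
# observation near the second component mean, so s_i should select component 1
s = sample_indicator(np.array([5.1, 4.9]), w, mus, np.eye(2), rng)
```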
5 Simulation Studies
5.1 Simulation Setup
We considered three simulation scenarios to evaluate the performance of our proposed approach, with 500 repeated simulations for each scenario. We generated treatment assignment $Z$. Independently of $Z$, we generated two independent covariates $X_1$ and $X_2$, where $X_1$ followed a truncated normal distribution with mean 4.5 and variance 1. For the first two simulation scenarios, we simulated progression time and death time on the log scale as follows:
In Scenario 1, we assumed the error vector followed a bivariate normal distribution. In Scenario 2, we assumed the error vector followed a scaled multivariate $t$ distribution. Scenario 3 explored performance under a nonlinear covariate effect specification for the progression and death times, with the error vector following the same distribution as in Scenario 1.
In all scenarios, the censoring time on the log scale was generated independently. For the joint distribution of $T_2(0)$ and $T_2(1)$ in (4), we set $\rho = 0.5$ in the Gaussian copula as the truth. We generated data for $n$ independent patients and then coarsened to the observed data $(Y_1, \Delta_1, Y_2, \Delta_2)$.
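A data-generation step of this kind can be sketched as follows. The numeric settings below are illustrative stand-ins, not the paper's exact simulation parameters; the point is the coarsening of the latent times to $(Y_1, \Delta_1, Y_2, \Delta_2)$:

```python
import numpy as np

def simulate(n, rng):
    """Generate semi-competing risks data on the log scale and coarsen it
    to (y1, d1, y2, d2). All numeric settings here are illustrative."""
    z = rng.binomial(1, 0.5, n)                         # randomized treatment
    x = rng.normal(4.5, 1.0, n)                         # baseline covariate
    eps = rng.multivariate_normal([0.0, 0.0],
                                  [[0.5, 0.25], [0.25, 0.5]], n)
    t1 = 0.2 * x + 0.5 * z + eps[:, 0]                  # latent log progression time
    t2 = 0.4 * x + 0.3 * z + eps[:, 1]                  # latent log death time
    c = rng.normal(3.0, 0.7, n)                         # independent log censoring time
    y2 = np.minimum(t2, c)
    d2 = (t2 <= c).astype(int)
    y1 = np.minimum(t1, y2)        # progression observed only before death/censoring
    d1 = (t1 <= y2).astype(int)
    return y1, d1, y2, d2, z, x

y1, d1, y2, d2, z, x = simulate(500, np.random.default_rng(4))
```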
To explore sensitivity of inference with respect to $\rho$, we conducted inference for $\theta(t)$ under several values of $\rho$. For all three scenarios, we specified hyperparameters as described in Appendix A1.
For comparative purposes, we implemented a naive Bayesian (Naive) model by assuming that the conditional probability measure of $(T_1, T_2)$ given $X$ follows a multivariate normal distribution with mean $\mu(X)$ and variance-covariance matrix $\Sigma$, with conjugate multivariate normal priors on the mean parameters and an inverse Wishart prior on $\Sigma$.
For each analysis, we ran 5,000 MCMC iterations with an initial burn-in of 2,000 iterations and a thinning factor of 10.
5.2 Simulation Results
We first report on the performance in terms of recovering the true treatment-specific marginal survival functions for time to death. For the BNP approach, Figure 1 shows, for each of the three simulation scenarios and by treatment group (first and second rows refer to treatments 0 and 1, respectively), the true survival functions (solid line), the posterior mean survival functions averaged over simulated datasets (dashed line), and 95% point-wise credible intervals (computed using quantiles) averaged over simulated datasets (dotted lines) on the original time scale (days). As another metric of performance, we computed, for each simulated dataset, the root mean squared error (RMSE), taken as the square root of the average of the squared errors at 34 equally spaced grid points in the log-scaled time interval. For each scenario, Table 1 summarizes the mean and standard deviation of the RMSE across the 500 simulated datasets. Both Figure 1 and Table 1 show that our proposed BNP procedure performs well, for each of the three scenarios, in terms of recovering the true survival function.
| Scenario | BNP ($z = 0$) | Naive ($z = 0$) | BNP ($z = 1$) | Naive ($z = 1$) |
| --- | --- | --- | --- | --- |
| 1 | 0.012 (0.007) | 0.013 (0.007) | 0.012 (0.006) | 0.013 (0.007) |
| 2 | 0.042 (0.022) | 0.088 (0.032) | 0.019 (0.007) | 0.073 (0.035) |
| 3 | 0.012 (0.006) | 0.013 (0.007) | 0.012 (0.007) | 0.014 (0.007) |

Table 1: Mean (standard deviation) of RMSE for the treatment-specific survival functions across the 500 simulated datasets.
Table 1 also shows the mean and standard deviation of the RMSE for the Naive model. In Scenario 1, the Naive model matches the true simulation model, thereby yielding results comparable to the proposed BNP model. In contrast, the Naive model performs worse than the BNP model in Scenario 2, where the fitted model does not match the simulation truth. In Scenario 3, the BNP model performs slightly better than the Naive model. Overall, the proposed BNP model is more robust than the Naive model.
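The RMSE metric described above is simple to compute; a sketch (the grid and curves are illustrative):

```python
import numpy as np

def rmse(true_curve, est_curve):
    """Root mean squared error between two curves on a common grid."""
    true_curve = np.asarray(true_curve)
    est_curve = np.asarray(est_curve)
    return np.sqrt(np.mean((est_curve - true_curve) ** 2))

grid = np.linspace(0.0, 3.0, 34)      # 34 equally spaced grid points
# an estimate offset from the truth by a constant 0.01 has RMSE 0.01
err = rmse(np.exp(-grid), np.exp(-grid) + 0.01)
```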
Evaluation of $\theta(t)$ requires evaluation of the distribution of the death time as the second marginal under the fitted joint probability measure. Expression (5) allows us to estimate $\theta(t)$: both the numerator and denominator can be evaluated as functionals of the currently imputed random probability measure of time to log progression and time to log death under treatment $z$, marginalizing with respect to the empirical distribution of $X$. Each iteration of the posterior MCMC simulation evaluates a point-wise estimate, and we estimate the posterior mean of $\theta(t)$ by averaging across iterations. We also report the mean RMSE in estimating $\theta(t)$, averaged over the 500 repeated simulations, under the proposed BNP model and the Naive model. Table 2 summarizes the results.
| Scenario | BNP ($\rho = 0.2$) | Naive ($\rho = 0.2$) | BNP ($\rho = 0.5$) | Naive ($\rho = 0.5$) | BNP ($\rho = 0.8$) | Naive ($\rho = 0.8$) |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.286 (0.087) | 0.328 (0.126) | 0.059 (0.035) | 0.073 (0.051) | 0.185 (0.037) | 0.207 (0.047) |
| 2 | 0.277 (0.128) | 0.493 (0.250) | 0.090 (0.062) | 0.199 (0.169) | 0.261 (0.070) | 0.243 (0.111) |
| 3 | 0.106 (0.032) | 0.105 (0.038) | 0.033 (0.016) | 0.035 (0.021) | 0.086 (0.028) | 0.097 (0.034) |

Table 2: Mean (standard deviation) of RMSE in estimating $\theta(t)$ across the 500 simulated datasets.
Figure 2 shows the estimated $\theta(t)$ versus $t$ in the three scenarios using $\rho = 0.2$, $0.5$ and $0.8$. As shown in Figure 2, in all three scenarios, when $\rho = 0.5$ (the true value), the estimates under the proposed BNP model reliably recover the simulated true $\theta(t)$ and avoid the excessive bias seen with the other values. This agrees with the results reported in Table 2, where $\rho = 0.5$ always yields the smallest mean RMSE in all three scenarios. Furthermore, when $\rho = 0.5$, the proposed BNP model has smaller mean RMSE than the Naive model. When $\rho = 0.2$ or $\rho = 0.8$, the BNP model performs better than or comparably to the Naive model in terms of mean RMSE and the variability of the RMSE across simulations.
6 Brain Tumor Data Analysis
An initial analysis of the brain tumor death outcome using Kaplan-Meier estimation is given in Figure 3, indicating that the treatment group has higher estimated survival probabilities. The estimated difference at 365 days is 2.6% (95% CI: -8.1% to 13.3%). Figure 3 also plots the estimated posterior survival curves for the treatment and control groups with 95% credible intervals; panels (a) and (b) display the results for the BNP and Naive approaches, respectively. Using the BNP approach, the estimated posterior difference in survival at 365 days is 6.2% (95% CI: -1.2% to 13.3%). For the Naive approach, the estimated posterior difference in survival at 365 days is 8.4% (95% CI: 0.2% to 17.9%). The BNP approach produces comparable or higher treatment-specific estimates of survival and greater treatment differences than Kaplan-Meier. In contrast, the Naive approach produces a comparable or lower (higher) estimate of survival for the control (treatment) group than Kaplan-Meier. Comparatively speaking, the Naive approach produces lower treatment-specific posterior estimates of survival than the BNP approach. When we compare the fit to the observed survival data of the BNP and Naive approaches using the log-pseudo marginal likelihood (LPML) (Geisser and Eddy, 1979), a leave-one-out cross-validation statistic, we see that the BNP approach performs better. Specifically, the LPML for the treatment arm is -144 and -161 for the BNP and Naive approaches, respectively. The corresponding numbers for the control arm are -137 and -174.
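The LPML can be computed from observation-level log-likelihoods saved across MCMC draws; the conditional predictive ordinate (CPO) for each observation is commonly estimated by the harmonic mean of its likelihoods across draws. This is a sketch of that standard estimator, not necessarily the authors' exact implementation:

```python
import numpy as np

def lpml(loglik_draws):
    """loglik_draws: (S, n) array of per-observation log-likelihoods over
    S posterior draws. CPO_i is the harmonic mean of the likelihoods
    across draws, and LPML = sum_i log CPO_i, computed stably in log space."""
    S = loglik_draws.shape[0]
    m = (-loglik_draws).max(axis=0)
    # log CPO_i = log S - logsumexp_s(-loglik_draws[s, i])
    log_cpo = np.log(S) - (m + np.log(np.exp(-loglik_draws - m).sum(axis=0)))
    return log_cpo.sum()

# degenerate check: every draw assigns log-likelihood -1 to each of 3 observations
val = lpml(np.full((10, 3), -1.0))
```

Larger (less negative) LPML indicates better out-of-sample predictive fit, which is the sense in which the BNP model outperforms the Naive model here.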
(a) BNP  (b) Naive
For the BNP (see panel (a)) and Naive (see panel (b)) approaches, Figure 4 plots the posterior estimates (along with point-wise 95% credible intervals) of the causal estimand $\theta(t)$ versus $t$ for three choices of $\rho$: 0.2, 0.5 and 0.8. Except near zero, there are no appreciable differences between the two approaches. In addition, the results are insensitive to the choice of $\rho$. Overall, this analysis shows that there is a lower estimated risk of progression for treatment versus control at all time points, except near zero. However, there is appreciable uncertainty, characterized by wide posterior credible intervals, that precludes more definitive conclusions about the difference between treatment groups with regard to progression. When we compare the fit to the observed survival and progression data of the BNP and Naive approaches using LPML, we see that the approaches perform comparably. Specifically, the LPML for the treatment arm is -227 and -232 for the BNP and Naive approaches, respectively. The corresponding numbers for the control arm are -215 and -214.
(a) BNP  (b) Naive
7 Discussion
In this paper, we proposed a causal estimand for characterizing the effect of treatment on progression in randomized trials with a semi-competing risks data structure. We introduced a set of identification assumptions, indexed by a non-identifiable sensitivity parameter $\rho$ that quantifies the correlation between survival under treatment and survival under control. Selecting a range for the sensitivity parameter in a specific trial will depend on clinical considerations. For example, in a trial of a biomarker-targeted therapy, one might expect weaker correlation, since survival under control might be primarily determined by co-morbidities while survival under treatment might be determined more by the presence of the targeted molecular aberration. In contrast, for some chemotherapies, the same factors that impact survival under control may equally impact survival under treatment, e.g., co-morbidities, social support, etc. Fortunately, the sensitivity parameter is bounded between -1 and 1 and, in most settings, should be positive; a range should be selected in close collaboration with subject matter experts.
We proposed a flexible Bayesian nonparametric approach for modeling the distribution of the observed data. Since the causal estimand is a functional of the distribution of the observed data and $\rho$, we draw inference about it using posterior summarization. Our procedure can easily be extended to accommodate a prior distribution on $\rho$, which will allow for integrated inference. Our procedure also allows for posterior inferences about other identified causal contrasts, such as the distribution of survival under treatment versus under control. The procedure can also be used for predictive inference for patients with specific covariate profiles.
Acknowledgments
This research is supported by NIH CA183854 and GM 112327. The authors would like to thank Drs. Henry Brem and Steven Piantadosi for providing access to data from the brain cancer trial.
References
- Brem et al. (1995) Brem, H., Piantadosi, S., Burger, P., Walker, M., Selker, R., Vick, N., Black, K., Sisti, M., Brem, S., Mohr, G., et al. (1995). Placebo-controlled trial of safety and efficacy of intraoperative controlled delivery by biodegradable polymers of chemotherapy for recurrent gliomas. The Lancet 345, 1008–1012.
- Chen (2012) Chen, Y.-H. (2012). Maximum likelihood analysis of semicompeting risks data with semiparametric regression models. Lifetime data analysis 18, 36–57.
- Comment et al. (2019) Comment, L., Mealli, F., Haneuse, S., and Zigler, C. (2019). Survivor average causal effects for continuous time: a principal stratification approach to causal inference with semicompeting risks. arXiv preprint arXiv:1902.09304 .
- Daniels et al. (2012) Daniels, M. J., Roy, J. A., Kim, C., Hogan, J. W., and Perri, M. G. (2012). Bayesian inference for the causal effect of mediation. Biometrics 68, 1028–1036.
- Ding et al. (2009) Ding, A., Shi, G., Wang, W., and Hsieh, J.-J. (2009). Marginal regression analysis for semi-competing risks data under dependent censoring. Scandinavian Journal of Statistics 36, 481–500.
- Fine et al. (2001) Fine, J. P., Jiang, H., and Chappell, R. (2001). On semi-competing risks data. Biometrika 88, 907–919.
- Fix and Neyman (1951) Fix, E. and Neyman, J. (1951). A simple stochastic model of recovery, relapse, death and loss of patients. Human Biology pages 205–241.
- Frangakis and Rubin (2002) Frangakis, C. E. and Rubin, D. B. (2002). Principal stratification in causal inference. Biometrics 58, 21–29.
- Geisser and Eddy (1979) Geisser, S. and Eddy, W. F. (1979). A predictive approach to model selection. Journal of the American Statistical Association 74, 153–160.
- Hougaard (1999) Hougaard, P. (1999). Multi-state models: a review. Lifetime data analysis 5, 239–264.
- Hsieh and Huang (2012) Hsieh, J.-J. and Huang, Y.-T. (2012). Regression analysis based on conditional likelihood approach under semi-competing risks data. Lifetime data analysis 18, 302–320.
- Ibrahim et al. (2005) Ibrahim, J. G., Chen, M.-H., and Sinha, D. (2005). Bayesian survival analysis. Wiley Online Library.
- Ishwaran and James (2001) Ishwaran, H. and James, L. F. (2001). Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association 96, 161–173.
- Lee et al. (2015) Lee, K. H., Haneuse, S., Schrag, D., and Dominici, F. (2015). Bayesian semiparametric analysis of semicompeting risks data: investigating hospital readmission after a pancreatic cancer diagnosis. Journal of the Royal Statistical Society: Series C (Applied Statistics) 64, 253–273.
- Lin et al. (1996) Lin, D., Robins, J., and Wei, L. (1996). Comparing two failure time distributions in the presence of dependent censoring. Biometrika 83, 381–393.
- MacEachern (1999) MacEachern, S. N. (1999). Dependent nonparametric processes. In ASA Proceedings of the Section on Bayesian Statistical Science, pages 50–55. American Statistical Association, Alexandria, VA.
- MacKay (1999) MacKay, D. (1999). Introduction to Gaussian processes. Technical report, Cambridge University, http://wol.ra.phy.cam.ac.uk/mackay/GP/.
- Peng and Fine (2007) Peng, L. and Fine, J. P. (2007). Regression modeling of semicompeting risks data. Biometrics 63, 96–108.
- Peng and Fine (2012) Peng, L. and Fine, J. P. (2012). Rank estimation of accelerated lifetime models with dependent censoring. Journal of the American Statistical Association .
- Rasmussen and Williams (2006) Rasmussen, C. and Williams, C. (2006). Gaussian Processes for Machine Learning. MIT Press.
- Robins (1995a) Robins, J. M. (1995a). An analytic method for randomized trials with informative censoring: Part 1. Lifetime Data Analysis 1, 241–254.
- Robins (1995b) Robins, J. M. (1995b). An analytic method for randomized trials with informative censoring: Part ii. Lifetime data analysis 1, 417–434.
- Rubin (1974) Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66, 688–701.
- Sethuraman (1994) Sethuraman, J. (1994). A constructive definition of Dirichlet priors. Statistica Sinica 4, 639–650.
- Varadhan et al. (2014) Varadhan, R., Xue, Q.-L., and Bandeen-Roche, K. (2014). Semicompeting risks in aging research: methods, issues and needs. Lifetime data analysis 20, 538–562.
- Wang (2003) Wang, W. (2003). Estimating the association parameter for copula models under dependent censoring. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 65, 257–273.
- Xu et al. (2010) Xu, J., Kalbfleisch, J. D., and Tai, B. (2010). Statistical analysis of illness–death processes and semicompeting risks data. Biometrics 66, 716–725.
A1: Determining Prior Hyperparameters
As priors for the regression coefficients $\beta_l$ in the GP mean function, we assume multivariate normal distributions. For $\Sigma$, we assume an inverse Wishart prior. The precision parameter $\alpha$ in the DDP is assumed to follow a gamma distribution.
In applications of Bayesian inference with small to moderate sample sizes, a critical step is to fix values for all hyperparameters. Inappropriate numerical values could introduce misleading information, leading to inaccurate posterior inference. We use an empirical Bayes method to obtain these values by fitting a bivariate normal distribution to the responses of patients under treatment $z$. For the prior covariance of the regression coefficients, we assume a diagonal matrix with diagonal values equal to 10. After an empirical estimate of $\Sigma$ is computed, we tune the inverse Wishart hyperparameters so that the prior mean of $\Sigma$ matches the empirical estimate.
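The empirical Bayes centering step can be sketched as follows (function and variable names are illustrative):

```python
import numpy as np

def empirical_prior_center(Y):
    """Fit a bivariate normal to the (log progression, log death) responses
    and return its mean and covariance, used to center the prior
    hyperparameters on empirically plausible values."""
    Y = np.asarray(Y)
    return Y.mean(axis=0), np.cov(Y, rowvar=False)

# toy responses: three patients' (log progression, log death) pairs
Y = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
mu_hat, Sigma_hat = empirical_prior_center(Y)
```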
A2: MCMC Computational Details
Unless required for clarity, we suppress dependence of the notation on treatment $z$. Here $j$ is used to denote the endpoint ($j = 1$ for progression and $j = 2$ for death). We define
Let and , , (), is an matrix where the th row contains the -dimensional covariate vector for the -th patient, is an matrix where the entry is , is an matrix where the th row refers to the th patient in , the th column refers to patient and element is the indicator that the patient in th row is the same as the patient in the th column, is an identity matrix, where and , and .
For each MCMC iteration, we iterate through the following six updating steps:
where is the number of observations such that . Then and .
Assuming that ,
where is generated from step 1.
Update , where .
We write as where includes .
is point mass at .
If (i.e., ),