Bayesian Survival Analysis Using Gamma Processes with Adaptive Time Partition

by Yi Li, et al.

In Bayesian semi-parametric analyses of time-to-event data, non-parametric process priors are adopted for the baseline hazard function or the cumulative baseline hazard function for a given finite partition of the time axis. However, it would be controversial to suggest a general guideline to construct an optimal time partition. While a great deal of research has been done to relax the assumption of the fixed split times for other non-parametric processes, to our knowledge, no methods have been developed for a gamma process prior, which is one of the most widely used in Bayesian survival analysis. In this paper, we propose a new Bayesian framework for proportional hazards models where the cumulative baseline hazard function is modeled a priori by a gamma process. A key feature of the proposed framework is that the number and position of interval cutpoints are treated as random and estimated based on their posterior distributions.





1 Introduction

Non-parametric and semi-parametric Bayesian analyses of time-to-event data are well accepted in practice because they enable more flexible modeling strategies with fewer assumptions. While the semi-parametric Cox proportional hazards model (Cox, 1972) avoids modeling the baseline hazard function by maximizing the partial likelihood, the Bayesian paradigm requires explicit parametrization of the baseline hazard function. To this end, various types of non-parametric prior processes have been developed, including the gamma process (Kalbfleisch, 1978), the beta process (Hjort et al., 1990), and the Dirichlet process (Gelfand and Mallick, 1995), which can be used to model the cumulative baseline hazard or distribution functions. A broad overview can be found in Ibrahim et al. (2005).

The gamma process is one of the most widely used non-parametric process priors for the cumulative baseline hazard in Bayesian proportional hazards models (Kalbfleisch, 1978; Clayton, 1991; Sinha et al., 1999, 2003, 2015). It has been adopted in many applications, including multivariate methods (Dunson and Chen, 2004; Sen et al., 2010; Cai, 2010; Sreedevi and Sankaran, 2012; Ouyang et al., 2013), variable selection methods (Lee et al., 2011; Savitsky et al., 2011; Gu et al., 2013; Zhao et al., 2014; Lee et al., 2015a), and other survival models (Gelfand and Kottas, 2003; Cho et al., 2009; Zhao et al., 2015). In the absence of prior information, in survival models where a gamma process prior is used for the cumulative baseline hazard function (Burridge, 1981; Lee et al., 2011; Savitsky et al., 2011; Gu et al., 2013; Zhao et al., 2014; Lee et al., 2015a), the time partition is specified either based on the uniquely ordered failure times or on equi-length intervals conditional on a prespecified number of interval cutpoints. The use of equally spaced intervals may lead to a time partition in which some intervals contain few failures, and this becomes more likely as the number of splits grows. On the other hand, setting the partition such that each interval contains the same number of failures may result in oversimplified estimation of the baseline hazard over a long time period. Thus, fixing the partition of the baseline hazard timescale may impose unreasonable structure on the model, and standard error estimates may consequently fail to reflect the additional uncertainty associated with the number and positions of the split times (Haneuse et al., 2008). It is therefore important to specify an appropriate time partition in Bayesian semi-parametric analyses of time-to-event data.

While there have been studies on methods to relax the assumption of fixed time partition for models with other non-parametric processes (Arjas and Gasbarra, 1994; Haneuse et al., 2008; Lee et al., 2015b), to our knowledge, no such methods have been developed for the gamma process prior for cumulative baseline hazard function. In this paper, we propose a Bayesian framework that allows the number and position of interval splits to be determined by the data. We present an efficient computational scheme to fit the proposed models based on a reversible jump Markov chain Monte Carlo (MCMC) algorithm (Green, 1995).

The remainder of the paper is organized as follows. Section 2 describes the proposed Bayesian analysis framework, including specification of prior distributions. Section 3 provides a detailed computational algorithm for obtaining samples from the joint posterior and two metrics for comparing goodness of fit across model specifications. Section 4 presents simulation studies under four different scenarios to assess the performance of our proposed methods. In Section 5, we apply the proposed framework to four real-life survival datasets. Section 6 concludes with a discussion.

2 A Bayesian Framework for Proportional Hazards Models

2.1 Model Specification and Observed Data Likelihood

Let $t_i$, $i = 1, \dots, n$, denote the survival times of $n$ individuals in a population. Under the Cox proportional hazards model (Cox, 1972), the hazard at time $t$ for an individual whose covariate vector is $x_i$ is given by

$$h(t \mid x_i) = h_0(t)\exp(x_i^\top \beta), \qquad (1)$$

where $h_0(t)$ is the baseline hazard function and $\beta$ is a vector of regression coefficients. In the Bayesian paradigm, one is required to specify the baseline hazard function. We consider a non-parametric specification by taking the baseline hazard function to be a flexible mixture of piecewise constant functions (McKeague and Tighiouart, 2000). To this end, we construct a finite partition of the time axis, $0 = s_0 < s_1 < \dots < s_J$, with $s_J > t_i$ for all $i = 1, \dots, n$. The survival time of the $i$th subject falls in one of the $J$ disjoint intervals $I_j = (s_{j-1}, s_j]$. Given the partition $(J, \mathbf{s} = (s_1, \dots, s_J))$, we assume

$$h_0(t) = \lambda_j / \Delta_j \quad \text{for } t \in I_j, \qquad (2)$$

where $\Delta_j = s_j - s_{j-1}$ and $\lambda_j$ is the increment in the cumulative baseline hazard in the $j$th interval, that is, $\lambda_j = H_0(s_j) - H_0(s_{j-1})$. This specification is referred to as a piecewise exponential model (PEM) (Ibrahim et al., 2005). Let $c_i$ denote the right censoring time for the $i$th subject, and let $y_i = \min(t_i, c_i)$ and $\nu_i = 1$ if $t_i \le c_i$ and $\nu_i = 0$ otherwise. Furthermore, let $D = \{(y_i, \nu_i, x_i) : i = 1, \dots, n\}$ denote the observed data. For a given specification of (1) and (2), the observed data likelihood as a function of the unknown parameters $(\beta, \boldsymbol{\lambda}, J, \mathbf{s})$ is given by

$$L(\beta, \boldsymbol{\lambda} \mid D) = \prod_{j=1}^{J} \left[ \prod_{i \in \mathcal{D}_j} \frac{\lambda_j}{\Delta_j}\, e^{x_i^\top \beta} \right] \exp\left\{ -\sum_{i \in \mathcal{R}_j} \frac{\lambda_j}{\Delta_j}\, \big(\min(y_i, s_j) - s_{j-1}\big)\, e^{x_i^\top \beta} \right\},$$

where $\mathcal{R}_j = \{i : y_i > s_{j-1}\}$ is the set of subjects at risk in the $j$th interval, $\mathcal{D}_j = \{i : y_i \in I_j,\ \nu_i = 1\}$ is the set of subjects having failures in the $j$th interval, $\Delta_j = s_j - s_{j-1}$, and $\boldsymbol{\lambda} = (\lambda_1, \dots, \lambda_J)$.
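For concreteness, the likelihood can be evaluated numerically. The following is a minimal Python sketch, assuming the PEM parametrization in which each $\lambda_j$ is the increment of the cumulative baseline hazard on the $j$th interval (so the constant hazard rate there is $\lambda_j/\Delta_j$); all function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def pem_loglik(beta, lam, s, y, nu, X):
    """Piecewise-exponential log-likelihood.

    s   : split times s_1 < ... < s_J (with s_J >= max(y); s_0 = 0 implicit)
    lam : increments of the cumulative baseline hazard on each interval
    y   : observed times, nu: event indicators, X: covariate matrix
    """
    s = np.concatenate(([0.0], s))        # prepend s_0 = 0
    delta = np.diff(s)                    # interval lengths Delta_j
    rate = lam / delta                    # constant hazard on each interval
    eta = X @ beta                        # linear predictor x_i' beta
    # time each subject spends in each interval: min(y_i, s_j) - s_{j-1}, >= 0
    expo = np.clip(np.minimum(y[:, None], s[None, 1:]) - s[None, :-1], 0.0, None)
    H0 = expo @ rate                      # cumulative baseline hazard at y_i
    j_fail = np.searchsorted(s[1:], y)    # interval containing y_i
    log_h = np.log(rate[j_fail]) + eta    # log hazard at the observed time
    return np.sum(nu * log_h) - np.sum(np.exp(eta) * H0)
```

With a single interval this reduces to the exponential-model log-likelihood, which provides a convenient sanity check.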

2.2 Hyperparameters and Prior Distributions

In the proposed Bayesian framework, we use a gamma process prior for the cumulative baseline hazard function (Kalbfleisch, 1978):

$$H_0 \sim \mathcal{GP}(c\,H^*,\ c), \qquad (3)$$

where $H^*(t)$ is an increasing function with $H^*(0) = 0$, and $c$ is a positive constant. The specified $H^*$ can be viewed as an initial guess of $H_0$, and $c$ is a specification of the weight or confidence attached to that guess. In general, $H^*$ is taken to be a known parametric function, such as the cumulative hazard of a Weibull distribution. Specifically, we consider $H^*(t) = \eta t^\kappa$, where $(\eta, \kappa)$ are hyperparameters. The gamma process prior (3) implies that the $\lambda_j$'s follow independent gamma distributions:

$$\lambda_j \sim \text{Gamma}\big(c\,[H^*(s_j) - H^*(s_{j-1})],\ c\big), \quad j = 1, \dots, J.$$

Thus, the hyperparameters for $\boldsymbol{\lambda}$ consist of a specified parametric cumulative hazard function $H^*$ evaluated at the splits of the time partition, and a positive scalar $c$ quantifying the degree of prior confidence in $H^*$.
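Simulating from this prior is straightforward. The sketch below draws the increments from the implied independent gamma distributions with the Weibull-form guess $H^*(t) = \eta t^\kappa$; the parameter values are arbitrary illustrations, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def weibull_cum_hazard(t, eta, kappa):
    """Parametric initial guess H*(t) = eta * t^kappa (Weibull form)."""
    return eta * np.power(t, kappa)

def sample_increments(s, c, eta, kappa, rng):
    """Draw lambda_j ~ Gamma(c * [H*(s_j) - H*(s_{j-1})], rate = c)."""
    s_full = np.concatenate(([0.0], s))
    shape = c * np.diff(weibull_cum_hazard(s_full, eta, kappa))
    return rng.gamma(shape, scale=1.0 / c)  # numpy parametrizes by scale = 1/rate

s = np.array([0.5, 1.0, 1.5, 2.0])
lam = sample_increments(s, c=10.0, eta=1.0, kappa=1.5, rng=rng)
```

Each increment has prior mean $H^*(s_j) - H^*(s_{j-1})$: a large $c$ concentrates the draws around the initial guess, while a small $c$ yields a diffuse prior.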

We note that it is often challenging to determine the number of split times $J$ and their positions $\mathbf{s}$; in practice, the choice is often based on heuristic considerations. We avoid reliance on a fixed partition of the time axis by permitting the partition $(J, \mathbf{s})$ to vary and to be updated via a reversible jump MCMC scheme (Green, 1995; McKeague and Tighiouart, 2000; Haneuse et al., 2008). Specifically, $J$ is drawn a priori from a Poisson distribution with rate $\alpha$, restricted to not exceed a preassigned upper limit $J_{\max}$. Conditional on the number of splits, we take the positions $\mathbf{s}$ to be a priori distributed as the even-numbered order statistics of uniformly distributed points on the time axis (Green, 1995). This helps decrease the probability of creating a new interval containing no events, which could make estimation of the baseline hazard in that interval difficult. Finally, we adopt a noninformative flat prior on the real line for the regression parameters $\beta$. To summarize, our prior specification is as follows:

$$\pi(\beta) \propto 1, \quad J \sim \text{Poisson}(\alpha)\,\mathbb{1}(J \le J_{\max}), \quad \mathbf{s} \mid J \sim \text{even-numbered order statistics}, \quad \lambda_j \mid J, \mathbf{s} \sim \text{Gamma}\big(c\,[H^*(s_j) - H^*(s_{j-1})],\ c\big).$$
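The prior on the partition can be sampled directly. The sketch below assumes, following Green's (1995) construction, that the positions are the even-numbered order statistics of $2J + 1$ uniform points on $(0, s_{\max})$; the rate and truncation bound are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_partition_prior(alpha, j_max, s_max, rng):
    """Draw (J, s) from the prior: J ~ Poisson(alpha) truncated to 1..j_max,
    then, given J, the split positions are the even-numbered order statistics
    of 2J + 1 uniform points on (0, s_max) (Green, 1995)."""
    while True:
        J = rng.poisson(alpha)
        if 1 <= J <= j_max:
            break
    pts = np.sort(rng.uniform(0.0, s_max, size=2 * J + 1))
    s = pts[1::2]  # even-numbered order statistics (the 2nd, 4th, ... points)
    return J, s

J, s = sample_partition_prior(alpha=5.0, j_max=20, s_max=10.0, rng=rng)
```

The even-numbered order statistics discourage split times from clustering too closely together, which is the stated motivation for this prior.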

3 Posterior Inference and Model Comparison

3.1 Markov Chain Monte Carlo

To perform posterior estimation and inference, we use a random-scan Gibbs sampling algorithm to generate samples from the joint posterior distribution. Since updating the time partition (, ) requires a change in the dimension of the parameter space, we develop our computational scheme based on a Metropolis-Hastings-Green (MHG) step (Green, 1995). A detailed description of the complete algorithm, together with all necessary full conditional posterior distributions, is provided.

In the first step of the random-scan Gibbs sampling scheme, we select a move from the following four possible choices:

move RP: updating regression parameters
move BH: updating parameters for the cumulative baseline hazard
move BI: creating a new split time
move DI: removing a split time

If $J$ is the number of time splits at the current iteration, the probabilities of proposing moves BI and DI are determined so that a birth is never proposed when $J = J_{\max}$ and a death is never proposed when $J = 1$, where $J_{\max}$ is the preassigned upper limit on the number of time splits. The remaining probability is divided between moves RP and BH.

Move RP

The full conditional posterior distribution for $\beta_r$, $r = 1, \dots, p$, is given by

$$\pi(\beta_r \mid \beta_{(-r)}, \boldsymbol{\lambda}, J, \mathbf{s}, D) \propto L(\beta, \boldsymbol{\lambda} \mid D),$$

where $\beta_{(-r)}$ is $\beta$ without the $r$th element. Since the full conditional does not have a standard form, we update each regression coefficient using a random walk Metropolis-Hastings (MH) algorithm.
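A single component update of this kind can be sketched as follows; `log_post` stands in for the unnormalized log full conditional, and the demo target (a standard normal) is chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def rw_mh_update(beta, r, log_post, step, rng):
    """One random-walk Metropolis-Hastings update of the r-th coefficient,
    holding the others fixed."""
    prop = beta.copy()
    prop[r] = beta[r] + step * rng.normal()           # Gaussian random walk
    if np.log(rng.uniform()) < log_post(prop) - log_post(beta):
        return prop, True                             # accept proposal
    return beta, False                                # reject, keep current

# demo: sample a single coefficient whose full conditional is N(0, 1)
log_post = lambda b: -0.5 * np.sum(b ** 2)
beta, draws, accepts = np.zeros(1), [], 0
for _ in range(5000):
    beta, acc = rw_mh_update(beta, 0, log_post, step=1.0, rng=rng)
    accepts += acc
    draws.append(beta[0])
draws = np.asarray(draws)
```

The step size is a tuning parameter; it is typically chosen so that the empirical acceptance rate falls in a moderate range, consistent with the 30-40% rates reported in Section 5.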

Move BH

The full conditional posterior distribution for $\lambda_j$, $j = 1, \dots, J$, is given by

$$\pi(\lambda_j \mid \beta, \boldsymbol{\lambda}_{(-j)}, J, \mathbf{s}, D) \propto L(\beta, \boldsymbol{\lambda} \mid D)\ \lambda_j^{\,c[H^*(s_j) - H^*(s_{j-1})] - 1}\, e^{-c\lambda_j}.$$

Since the full conditional does not have a standard form, we update the $\lambda_j$'s using a random walk MH algorithm.

Move BI

For a given partition $\mathbf{s}$, the cumulative baseline hazard at the splits is $H_0(s_j) = \sum_{g=1}^{j} \lambda_g$. Updating this specification requires generating a proposal split and then deciding whether or not to accept the proposal. First, we select a proposal split time $s^*$ uniformly from among the observed event times that are not included in the current partition. Suppose $s^*$ falls in the interval $(s_j, s_{j+1}]$ of the current partition, so that the induced proposal partition can be written as $\mathbf{s}' = (s_1, \dots, s_j, s^*, s_{j+1}, \dots, s_J)$.

Second, we calculate the heights of the two new intervals created by the split at time $s^*$. To ensure that the old height is a compromise between the two new heights, the former is taken to be the weighted mean of the latter. We introduce a multiplicative perturbation of the two new heights through $u$, where $u \sim \text{Uniform}(0, 1)$; the new heights then follow from the weighted-mean constraint together with the perturbation, and the Jacobian of this transformation enters the acceptance probability.

Finally, the acceptance probability in the MHG step is computed as the product of the likelihood ratio, prior ratio, proposal ratio, and Jacobian:

  1. Likelihood ratio:

  2. Prior ratio:

  3. Proposal ratio:

  4. Jacobian:
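The height calculation in the birth move can be sketched as follows, under two assumptions that the text above does not fully resolve: that the weighted mean is arithmetic with weights proportional to the two new interval lengths, and that the multiplicative perturbation enters through the ratio of the new heights as $(1-u)/u$. Both are labeled assumptions of this sketch, not the paper's exact formulas.

```python
def birth_proposal(lam, s_left, s_right, s_star, u):
    """Split a single piecewise-constant level `lam` on (s_left, s_right]
    at s_star. The proposed levels (l1, l2) satisfy a length-weighted
    arithmetic-mean constraint, with ratio l1 / l2 = (1 - u)/u (assumption)."""
    w1, w2 = s_star - s_left, s_right - s_star   # new interval lengths
    W = s_right - s_left
    r = (1.0 - u) / u                            # multiplicative perturbation
    l2 = lam * W / (w1 * r + w2)                 # solves (w1*l1 + w2*l2)/W = lam
    l1 = r * l2
    return l1, l2

l1, l2 = birth_proposal(1.0, 0.0, 2.0, 0.5, 0.3)
```

By construction, the old level is recovered as the length-weighted mean of the two proposed levels, so the reverse (death) move simply merges the pair back.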

Move DI

The acceptance probability for the corresponding reverse move has the same form, with the appropriate relabelling of the partitions and variables and with the ratio terms inverted. Suppose we remove a randomly selected split time $s_{j+1}$. The proposal partition of the time axis then consists of $J - 1$ split times, $\mathbf{s}' = (s_1, \dots, s_j, s_{j+2}, \dots, s_J)$. Analogously to move BI, the two old heights are merged into a single new height via the weighted-mean relation. Therefore, the four components of the acceptance probability can be written as follows:

  1. Likelihood ratio:

  2. Prior ratio:

  3. Proposal ratio:

  4. Jacobian:

3.2 Model Comparison Criteria

In practice, analysts must balance model complexity against the limitations of the available information in the data. To this end, we develop two model comparison metrics based on the deviance information criterion (DIC) (Spiegelhalter et al., 2002) and the log-pseudo marginal likelihood (LPML) statistic (Geisser and Eddy, 1979). Various DIC constructions and extensions have been compared and tested in the case of mixtures of distributions and random effects models (Celeux et al., 2006). Since our proposed framework involves the PEM structure, we consider the DIC measure suggested in Celeux et al. (2006) and estimate the quantity using the following Monte Carlo approximation:

where $\theta^{(g)}$ denotes the value of the parameters $\theta$ at the $g$th MCMC iteration, $g = 1, \dots, G$. Note that a model with a smaller DIC indicates a better fit to the data. A general rule of thumb for model comparison is to consider differences of less than 2 to be negligible, differences between 2 and 6 to be indicative of positive support for the model with the lower value, and differences greater than 6 to be strong support in favor of the model with the lower value (Spiegelhalter et al., 2002).
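Celeux et al. (2006) discuss several DIC variants for latent-variable models, and the specific variant adopted here is not recoverable from this text; the sketch below therefore shows the classical construction of Spiegelhalter et al. (2002), $\text{DIC} = \bar{D} + p_D$ with $p_D = \bar{D} - D(\bar{\theta})$, estimated from MCMC output.

```python
import numpy as np

def dic(log_liks, log_lik_at_mean):
    """Monte Carlo DIC in the classical Spiegelhalter et al. (2002) form.

    log_liks        : log-likelihood evaluated at each MCMC draw
    log_lik_at_mean : log-likelihood at the posterior mean of the parameters
    """
    d_bar = -2.0 * np.mean(log_liks)            # posterior mean deviance
    p_d = d_bar - (-2.0 * log_lik_at_mean)      # effective number of parameters
    return d_bar + p_d
```

When the chain is degenerate at a single point, $p_D = 0$ and DIC reduces to the plug-in deviance, which provides a simple sanity check.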

The LPML criterion is computed as $\text{LPML} = \sum_{i=1}^{n} \log(\text{CPO}_i)$, where the subject-specific conditional predictive ordinate (CPO) is given by $\text{CPO}_i = f(y_i \mid D_{(-i)})$, the predictive density of the $i$th outcome given the data with the $i$th subject removed. Following Chen et al. (2012), $\text{CPO}_i$ can be approximated via a Monte Carlo estimator:

$$\widehat{\text{CPO}}_i = \left[ \frac{1}{G} \sum_{g=1}^{G} \frac{1}{f(y_i \mid \theta^{(g)})} \right]^{-1}.$$

Since $\text{CPO}_i$ is the marginal posterior predictive density of the outcome for the $i$th subject given a dataset excluding the $i$th subject, a larger value of LPML indicates a better fit to the data. For practical interpretation, one can compute the so-called pseudo-Bayes factor (PBF) for two models by exponentiating the difference in their LPML values (Hanson, 2006).
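The Monte Carlo CPO estimator is the harmonic mean of the per-draw likelihood contributions, which is prone to numerical underflow; a log-space implementation avoids this. The sketch below assumes the per-subject log-likelihood contributions have already been stored for each MCMC draw.

```python
import numpy as np

def lpml(loglik):
    """LPML from a (G x n) matrix of log f(y_i | theta^(g)) values, using the
    harmonic-mean CPO estimator, computed stably in log space."""
    neg = -loglik                                 # log of 1 / f(y_i | theta^(g))
    m = neg.max(axis=0)                           # per-subject log-sum-exp shift
    log_mean_inv = m + np.log(np.mean(np.exp(neg - m), axis=0))
    log_cpo = -log_mean_inv                       # log CPO_i for each subject
    return log_cpo.sum()
```

If every draw assigns the same likelihood to a subject, the CPO equals that common value, which provides a direct check of the estimator.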

Scenario   n     Censoring rate   Distribution of the baseline hazard function
1          300   30%              Weibull
2          300   30%              Piecewise linear
3          300   50%              Weibull
4          100   30%              Weibull
Table 1: A summary of the four simulation scenarios explored in Section 4.
Figure 1: Induced baseline survival function based on the hyperparameters considered in Sections 4 and 5.

4 Simulation Studies

We refer to the proposed Bayesian framework as a PEM-gamma process model with a reversible jump MCMC scheme (GP-RJ). The overarching goals of the simulation studies are to investigate the small-sample operating characteristics of the proposed GP-RJ framework and to compare its performance with that of a PEM-gamma process model with a fixed time partition of equally spaced intervals (GP-EQ). The GP-EQ model can be implemented by running GP-RJ with the time partition held fixed at its prespecified value.

4.1 Setting

We consider four data scenarios that vary in terms of sample size, censoring rate, and the true underlying baseline hazard distribution. The simulation setting is similar to that in Lee et al. (2016), and a summary of the scenarios is provided in Table 1. In Scenarios 1, 3, and 4, the baseline hazard function is set to correspond to the hazard of a Weibull distribution. To evaluate the performance of the model when the true baseline hazard function is not monotone, unlike the Weibull hazard, we consider a piecewise linear baseline hazard function in Scenario 2. To investigate the effect of sample size and of the amount of information available in the data, we consider a higher censoring rate (50%) in Scenario 3 and a smaller sample size (n = 100) in Scenario 4.

For each of the four scenarios, we generated 500 simulated datasets under the models outlined in Table 1. We specified that the hazard function depended on three covariates, including Normal(0, 1) and Bernoulli(0.5) variables, with regression coefficients (0.5, 0.8, -0.5). We used the simSurv function in the SemiCompRisks R package (Alvares et al., 2019) to simulate the survival data.
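A data-generation step of this kind can be sketched directly via inverse-transform sampling from a Weibull-baseline proportional hazards model. This is an illustrative stand-in for `SemiCompRisks::simSurv`, not its implementation, and the parameter values below are arbitrary rather than those of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def sim_weibull_ph(n, beta, eta, kappa, cens_rate, rng):
    """Simulate PH data with Weibull baseline H0(t) = eta * t^kappa,
    via inverse transform, plus independent exponential censoring."""
    X = np.column_stack([rng.normal(size=n), rng.binomial(1, 0.5, size=n)])
    lin = X @ beta
    u = rng.uniform(size=n)
    # solve H0(t) * exp(x'beta) = -log(u) for the event time t
    t = (-np.log(u) / (eta * np.exp(lin))) ** (1.0 / kappa)
    c = rng.exponential(1.0 / cens_rate, size=n)   # censoring times
    y = np.minimum(t, c)                           # observed time
    nu = (t <= c).astype(int)                      # event indicator
    return y, nu, X

y, nu, X = sim_weibull_ph(300, np.array([0.5, -0.5]), eta=0.5, kappa=1.5,
                          cens_rate=0.2, rng=rng)
```

The censoring rate can be tuned by adjusting `cens_rate` until the empirical proportion of censored observations matches the target (30% or 50% in Table 1).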

4.2 Analyses and Specification of Hyperparameters

For each of the 500 datasets generated under each of the four scenarios, we fit the GP-RJ and GP-EQ models. For the GP-RJ model, we placed a Poisson prior with rate $\alpha$ on the number of split times. The time partition for the GP-EQ model was constructed with 11 equi-length intervals. For both models, the remaining hyperparameters were held fixed; the induced baseline survival function based on this choice of hyperparameters is given in Figure 1. We ran two independent chains for a total of 1 million scans each, with the first half taken as burn-in. We computed the Gelman-Rubin potential scale reduction factors (PSRF) (Gelman et al., 2013) to assess convergence, specifically requiring the statistic to be less than 1.05 for all model parameters.
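The PSRF compares between-chain and within-chain variability; a minimal implementation for a single scalar parameter is sketched below (the split-chain and rank-normalized refinements of the diagnostic are omitted for brevity).

```python
import numpy as np

def psrf(chains):
    """Gelman-Rubin potential scale reduction factor for one scalar parameter.

    chains : array of shape (m, n) -- m chains of n post-burn-in draws each.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)
```

Values near 1 indicate the chains are mixing over the same distribution; the 1.05 threshold used above is a common convergence criterion.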

Scenario   beta    PB            CP            RW
                   GP-RJ  GP-EQ  GP-RJ  GP-EQ  GP-RJ  GP-EQ
1          0.5     1.8    2.5    0.95   0.94   1.00   1.00
1          0.8     2.8    3.6    0.94   0.93   1.00   1.00
1          -0.5    2.2    2.7    0.95   0.96   1.00   1.00
2          0.5     1.7    2.0    0.94   0.95   1.00   1.00
2          0.8     2.5    2.8    0.94   0.93   1.00   0.99
2          -0.5    -0.4   -0.6   0.95   0.96   1.00   1.00
3          0.5     1.7    2.5    0.94   0.93   1.00   1.00
3          0.8     3.0    3.8    0.94   0.94   1.00   1.00
3          -0.5    2.6    3.3    0.95   0.95   1.00   1.00
4          0.5     6.2    6.2    0.95   0.94   1.00   1.00
4          0.8     3.3    3.3    0.94   0.94   1.00   1.00
4          -0.5    0.9    0.8    0.95   0.95   1.00   1.00
Table 2: Estimated percent bias (PB), coverage probability (CP), and average relative width of 95% credible intervals (RW) for the regression coefficients, for the two analyses described in Section 4, across the four simulation scenarios given in Table 1. For RW, GP-RJ was taken as the referent. Throughout, values are based on results from 500 simulated datasets.
Figure 2: Simulation studies: (a)-(d) estimated baseline hazard function for the two analyses described in Section 4; (e) the posterior distribution of the number of split times from the analysis under the GP-RJ model.

4.3 Results

In Figure 2 (a)-(d), we present the mean estimated baseline hazard function under the four scenarios using the GP-RJ and GP-EQ models. Both models performed well in terms of estimation of the baseline hazard function. In Scenarios 2 and 3, the baseline hazard tended to be underestimated over later time periods; this was due to relatively fewer events being generated for the later time period under those two simulation scenarios. Note that the proposed GP-RJ model successfully smoothed out the process, while GP-EQ resulted in a piecewise constant baseline hazard, as in the conventional PEM model. In Figure 2 (e), we present the distribution of the estimated number of time splits for GP-RJ. While providing a smooth estimated baseline hazard function close to the truth, GP-RJ used a smaller number of split times than GP-EQ across the four scenarios. In addition, the more complicated the underlying hazard function (Scenario 2) and the less information available from the data (Scenarios 3 and 4), the more split times GP-RJ tended to create.

Table 2 indicates that the proposed GP-RJ also performed well in terms of estimation and inference for the regression coefficients. Percent bias was small across the board, and the estimated coverage probabilities were all close to the nominal 0.95. While GP-EQ yielded point estimates that were slightly more biased than those of GP-RJ, the difference between the two models was not substantial, owing to the nature of the proportional hazards assumption.
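The reported operating characteristics can be computed from the 500 replicate fits as follows; this is a generic sketch in which `estimates`, `lowers`, and `uppers` stand for the per-dataset posterior point estimates and 95% credible interval endpoints.

```python
import numpy as np

def pb_cp(estimates, lowers, uppers, truth):
    """Percent bias and coverage probability across simulated datasets."""
    pb = 100.0 * (np.mean(estimates) - truth) / truth   # percent bias
    cp = np.mean((lowers <= truth) & (truth <= uppers)) # CI coverage
    return pb, cp
```

The relative width metric in Table 2 would similarly be the mean interval width of one model divided by that of the referent (GP-RJ).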

5 Applications

In this section, we demonstrate the practical utility of our proposed models using four different time-to-event datasets. The four datasets are Dutch male laryngeal cancer data (DLC) (Kardaun, 1983), German breast cancer data (GBC) (Schumacher et al., 1994), Netherlands Cancer Institute breast cancer data (NBC) (Van De Vijver et al., 2002), and American Association for Cancer Research breast cancer data (ABC) (Shapiro et al., 1993). We provide a summary of the four datasets in Table 3.

Data ID      Description                        n     Covariates   Censoring
Data 1. DLC  Dutch male laryngeal cancer data   90    3            44%
Data 2. GBC  German breast cancer data          686   8            56%
Data 3. NBC  NCI breast cancer data             144   5            67%
Data 4. ABC  AACR breast cancer data            255   2            60%
Table 3: A summary of the datasets analyzed in Section 5.

5.1 Analyses and Specification of Hyperparameters

We fit the proposed GP-RJ models to the four real-life survival datasets. In the absence of further information, we set the Poisson rate parameter to 5, 10, and 20, reflecting an a priori expectation of 6, 11, and 21 intervals on the hazard timescale, respectively. For illustration, we also present analyses based on GP-EQ models in which the time partition was prespecified with the number of split times set to 5, 10, and 20. Additional analyses were conducted with a fixed time partition constructed from the uniquely ordered event times, which we refer to as GP-UQ. For all of GP-RJ, GP-EQ, and GP-UQ, the remaining hyperparameters were set as in Section 4.

Results were based on samples from the joint posterior distribution obtained from two independent MCMC chains. Each chain was run for 1 million iterations, with the first half taken as burn-in. We assessed convergence of the Markov chains through calculation of the PSRF, requiring the factors to be less than 1.05 for all model parameters. The overall acceptance rates for the MH and MHG steps in the MCMC scheme ranged between 30% and 40%, indicating that the algorithm was relatively efficient.

              DLC             GBC              NBC             ABC
Model         DIC    LPML     DIC     LPML     DIC    LPML     DIC    LPML
GP-EQ (5)     38.5   -19.9    510.9   -256.6   187.3  -93.3    207.2  -103.6
GP-EQ (10)    38.4   -19.8    508.0   -255.2   179.4  -90.1    206.5  -103.3
GP-EQ (20)    39.2   -20.3    509.6   -255.7   187.5  -92.9    207.0  -103.5
GP-UQ         39.9   -20.5    515.9   -259.0   189.1  -93.5    207.1  -103.6
GP-RJ (5)     38.1   -19.6    509.7   -255.9   180.9  -89.4    205.9  -103.0
GP-RJ (10)    38.1   -19.6    506.8   -254.4   178.0  -88.2    205.6  -102.8
GP-RJ (20)    37.0   -19.1    506.6   -254.4   179.7  -89.3    206.1  -103.0
For GP-UQ, the time partition is set to the uniquely ordered event times.
Table 4: DIC and LPML for the seven models fit to the four datasets.
Figure 3: Estimated baseline hazard function for the two models fit to the four real-life survival datasets outlined in Table 3.
Figure 4: Analyses of the four real-life survival datasets outlined in Table 3: (a)-(d) the posterior distribution of the number of split times; (e)-(h) the posterior probabilities associated with the positions of the split times.

5.2 Results

Table 4 presents DIC and LPML for the models we compared. Focusing on the analyses in which the number of split times for GP-EQ and the Poisson rate for GP-RJ were set to the same value, the GP-RJ model fit the four datasets as well as or better than the corresponding GP-EQ model on both criteria. This implies that GP-RJ generally provides an equal or better model fit in the absence of prior information on the timescale.

In Figure 3, we present estimates of the baseline hazard function, showing the results from the GP-RJ and GP-EQ analyses that had the best model fits based on DIC and LPML for each dataset. As seen in Figure 3, the estimated baseline hazard functions were substantially smoother in the GP-RJ analyses. In terms of computation speed, GP-RJ ran faster than the corresponding GP-EQ in the analyses of the DLC, GBC, and ABC datasets, even though GP-RJ required the extra MHG steps to update the partition. This was not surprising, because the posterior medians of the number of split times from GP-RJ were 12, 17, and 8 for these three datasets (see Figure 4 (a), (b), and (d)), smaller than the numbers of intervals implied by the prior Poisson rates set for the analyses. In Figure 4 (e)-(h), we present the posterior probabilities associated with the positions of the split times. Note that the shape of the distribution of these posterior probabilities was generally consistent with the smooth or abrupt changes in the pattern of the baseline hazard (Figure 3).

6 Discussion

We have described a new Bayesian framework for proportional hazards models in which the cumulative baseline hazard function is modeled a priori by a gamma process. The proposed approach avoids the difficult task of specifying the number of time splits and their positions by employing a random split-times model and a reversible jump MCMC scheme.

When compared with the model with a fixed time partition (GP-EQ), the proposed GP-RJ characterized the baseline hazard function as a notably smoother mixture of piecewise constant functions (Figures 2 and 3). Although it requires extra steps to estimate the partition, GP-RJ generally performed well with a smaller number of split times (Figures 2 (e) and 4 (a)-(d)), exhibited smaller bias for the regression coefficients (Table 2), and yielded a better model fit (Table 4) than the corresponding GP-EQ.

To the best of our knowledge, this work is the first attempt to relax the assumption of a fixed time partition for the proportional hazards model in which a gamma process prior is used for the cumulative baseline hazard function. The proposed framework provides researchers with valid statistical approaches to overcome a major barrier to the practical use of gamma process models in the analysis of survival data.


  • Alvares et al. (2019) Alvares, D., Haneuse, S., Lee, C., and Lee, K. H. (2019). SemiCompRisks: An R package for the analysis of independent and cluster-correlated semi-competing risks data. The R Journal, 11(1):376.
  • Arjas and Gasbarra (1994) Arjas, E. and Gasbarra, D. (1994). Nonparametric Bayesian inference from right censored survival data, using the Gibbs sampler. Statistica Sinica, pages 505–524.
  • Burridge (1981) Burridge, J. (1981). Empirical Bayes analysis of survival time data. Journal of the Royal Statistical Society: Series B, pages 65–75.
  • Cai (2010) Cai, B. (2010). Bayesian semiparametric frailty selection in multivariate event time data. Biometrical Journal: Journal of Mathematical Methods in Biosciences, 52(2):171–185.
  • Celeux et al. (2006) Celeux, G., Forbes, F., Robert, C. P., Titterington, D. M., et al. (2006). Deviance information criteria for missing data models. Bayesian Analysis, 1(4):651–673.
  • Chen et al. (2012) Chen, M.-H., Shao, Q.-M., and Ibrahim, J. G. (2012). Monte Carlo methods in Bayesian computation. Springer Science & Business Media.
  • Cho et al. (2009) Cho, H., Ibrahim, J. G., Sinha, D., and Zhu, H. (2009). Bayesian case influence diagnostics for survival models. Biometrics, 65(1):116–124.
  • Clayton (1991) Clayton, D. G. (1991). A Monte Carlo method for Bayesian inference in frailty models. Biometrics, pages 467–485.
  • Cox (1972) Cox, D. (1972). Regression models and life-tables. Journal of the Royal Statistical Society: Series B, pages 187–220.
  • Dunson and Chen (2004) Dunson, D. B. and Chen, Z. (2004). Selecting factors predictive of heterogeneity in multivariate event time data. Biometrics, 60(2):352–358.
  • Geisser and Eddy (1979) Geisser, S. and Eddy, W. F. (1979). A predictive approach to model selection. Journal of the American Statistical Association, 74(365):153–160.
  • Gelfand and Kottas (2003) Gelfand, A. E. and Kottas, A. (2003). Bayesian semiparametric regression for median residual life. Scandinavian Journal of Statistics, 30(4):651–665.
  • Gelfand and Mallick (1995) Gelfand, A. E. and Mallick, B. K. (1995). Bayesian analysis of proportional hazards models built from monotone functions. Biometrics, pages 843–852.
  • Gelman et al. (2013) Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2013). Bayesian data analysis. CRC press.
  • Green (1995) Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82:711–732.
  • Gu et al. (2013) Gu, X., Yin, G., and Lee, J. J. (2013). Bayesian two-step lasso strategy for biomarker selection in personalized medicine development for time-to-event endpoints. Contemporary clinical trials, 36(2):642–650.
  • Haneuse et al. (2008) Haneuse, S., Rudser, K. D., and Gillen, D. L. (2008). The separation of timescales in Bayesian survival modeling of the time-varying effect of a time-dependent exposure. Biostatistics, 9(3):400–410.
  • Hanson (2006) Hanson, T. E. (2006). Inference for mixtures of finite polya tree models. Journal of the American Statistical Association, 101(476):1548–1565.
  • Hjort et al. (1990) Hjort, N. L. et al. (1990). Nonparametric Bayes estimators based on beta processes in models for life history data. The Annals of Statistics, 18(3):1259–1294.
  • Ibrahim et al. (2005) Ibrahim, J., Chen, M., and Sinha, D. (2005). Bayesian survival analysis. Wiley Online Library.
  • Kalbfleisch (1978) Kalbfleisch, J. (1978). Non-parametric Bayesian analysis of survival time data. Journal of the Royal Statistical Society. Series B (Methodological), pages 214–221.
  • Kardaun (1983) Kardaun, O. (1983). Statistical survival analysis of male larynx-cancer patients - a case study. Statistica Neerlandica, 37(3):103–125.
  • Lee et al. (2011) Lee, K. H., Chakraborty, S., and Sun, J. (2011). Bayesian variable selection in semiparametric proportional hazards model for high dimensional survival data. The International Journal of Biostatistics, 7(1):21.
  • Lee et al. (2015a) Lee, K. H., Chakraborty, S., and Sun, J. (2015a). Survival prediction and variable selection with simultaneous shrinkage and grouping priors. Statistical Analysis and Data Mining, 8(2):114–127.
  • Lee et al. (2016) Lee, K. H., Dominici, F., Schrag, D., and Haneuse, S. (2016). Hierarchical models for semicompeting risks data with application to quality of end-of-life care for pancreatic cancer. Journal of the American Statistical Association, 111(515):1075–1095.
  • Lee et al. (2015b) Lee, K. H., Haneuse, S., Schrag, D., and Dominici, F. (2015b). Bayesian semiparametric analysis of semicompeting risks data: investigating hospital readmission after a pancreatic cancer diagnosis. Journal of the Royal Statistical Society, Series C, 64(2):253–273.
  • McKeague and Tighiouart (2000) McKeague, I. W. and Tighiouart, M. (2000). Bayesian estimators for conditional hazard functions. Biometrics, 56(4):1007–1015.
  • Ouyang et al. (2013) Ouyang, B., Sinha, D., Slate, E. H., and Van Bakel, A. B. (2013). Bayesian analysis of recurrent event with dependent termination: an application to a heart transplant study. Statistics in medicine, 32(15):2629–2642.
  • Savitsky et al. (2011) Savitsky, T., Vannucci, M., and Sha, N. (2011). Variable selection for nonparametric Gaussian process priors: models and computational strategies. Statistical science: a review journal of the Institute of Mathematical Statistics, 26(1):130.
  • Schumacher et al. (1994) Schumacher, M., Bastert, G., Bojar, H., Huebner, K., Olschewski, M., Sauerbrei, W., Schmoor, C., Beyerle, C., Neumann, R., and Rauschecker, H. (1994). Randomized 2 x 2 trial evaluating hormonal treatment and the duration of chemotherapy in node-positive breast cancer patients. German Breast Cancer Study Group. Journal of Clinical Oncology, 12(10):2086–2093.
  • Sen et al. (2010) Sen, A., Banerjee, M., Li, Y., and Noone, A.-M. (2010). A Bayesian approach to competing risks analysis with masked cause of death. Statistics in medicine, 29(16):1681–1695.
  • Shapiro et al. (1993) Shapiro, C. L., Gelman, R. S., Hayes, D. F., Osteen, R., Obando, A., Canellos, G. P., Frei III, E., and Henderson, I. C. (1993). Comparison of adjuvant chemotherapy with methotrexate and fluorouracil with and without cyclophosphamide in breast cancer patients with one to three positive axillary lymph nodes. JNCI: Journal of the National Cancer Institute, 85(10):812–817.
  • Sinha et al. (2015) Sinha, A., Chi, Z., and Chen, M.-H. (2015). Bayesian inference of hidden gamma wear process model for survival data with ties. Statistica Sinica, 25(4):1613.
  • Sinha et al. (1999) Sinha, D., Chen, M.-H., and Ghosh, S. K. (1999). Bayesian analysis and model selection for interval-censored survival data. Biometrics, 55(2):585–590.
  • Sinha et al. (2003) Sinha, D., Ibrahim, J. G., and Chen, M.-H. (2003). A Bayesian justification of Cox’s partial likelihood. Biometrika, 90(3):629–641.
  • Spiegelhalter et al. (2002) Spiegelhalter, D. J., Best, N. G., Carlin, B. P., and Van Der Linde, A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society, Series B, 64(4):583–639.
  • Sreedevi and Sankaran (2012) Sreedevi, E. and Sankaran, P. (2012). A semiparametric Bayesian approach for the analysis of competing risks data. Communications in Statistics-Theory and Methods, 41(15):2803–2818.
  • Van De Vijver et al. (2002) Van De Vijver, M. J., He, Y. D., Van’t Veer, L. J., Dai, H., Hart, A. A., Voskuil, D. W., Schreiber, G. J., Peterse, J. L., Roberts, C., Marton, M. J., et al. (2002). A gene-expression signature as a predictor of survival in breast cancer. New England Journal of Medicine, 347(25):1999–2009.
  • Zhao et al. (2014) Zhao, L., Feng, D., Bellile, E. L., and Taylor, J. M. (2014). Bayesian random threshold estimation in a Cox proportional hazards cure model. Statistics in Medicine, 33(4):650–661.
  • Zhao et al. (2015) Zhao, L., Shi, J., Shearon, T. H., and Li, Y. (2015). A Dirichlet process mixture model for survival outcome data: assessing nationwide kidney transplant centers. Statistics in medicine, 34(8):1404–1416.