On null hypotheses in survival analysis

07/01/2018 · Mats Julius Stensrud, et al.

The conventional nonparametric tests in survival analysis, such as the log-rank test, assess the null hypothesis that the hazards are equal at all times. However, hazards are hard to interpret causally, and other null hypotheses are more relevant in many scenarios with survival outcomes. To allow for a wider range of null hypotheses, we present a generic approach to define test statistics. This approach utilizes the fact that a wide range of common parameters in survival analysis can be expressed as solutions of differential equations. Thereby, we can test hypotheses based on survival parameters that solve differential equations driven by hazards, and the tests are easy to implement on a computer. We present simulations, suggesting that our generic approach performs well for several hypotheses in a range of scenarios. Finally, we extend the strategy to allow for testing conditional on covariates.


1. Introduction

The notion of hazards has been crucial for the development of modern survival analysis. Hazards are perhaps the most natural parameters to use when fitting statistical models to time-to-event data subject to censoring, and hazard functions were essential for the development of popular methods like Cox regression and rank tests, which are routinely used in practice.

In the field of causal inference, however, there is concern that many statisticians just do advanced ’curve fitting’ without being careful about the interpretation of the parameters that are reported [1, 2, 3]. This criticism can be directed to several areas in statistics. In this spirit, we think that statisticians in general should pay particular attention to effect measures with clear-cut causal interpretations.

In survival analysis, it has been acknowledged that interpreting hazards as effect measures is delicate, see e.g. [4] and [5]. This contrasts with the more traditional opinion, in which the proportional hazards model is motivated by the ’simple and easily understood interpretation’ of hazard ratios [6, 4.3.a]. A key issue arises because the hazard, by definition, is conditioned on previous survival. If we consider causal diagrams [3, 2], it is clear that we condition on a ’collider’ that opens a non-causal pathway from the exposure through any unobserved heterogeneity into the event of interest, see [4, 7, 8]. Since unobserved heterogeneity is present in most practical scenarios, even in randomized trials, this conditioning means that hazards are fundamentally hard to interpret causally [4, 5].

Although we must be careful about assigning causal interpretations to hazards, we do not claim that hazards are worthless. On the contrary, hazards are key elements in the modelling of other parameters that are easier to interpret, serving as building blocks. This point of view is also found in [2, 17.1]: “…, the survival analyses in this book privilege survival/risk over hazard. However, that does not mean that we should ignore hazards. The estimation of hazards is often a useful intermediate step for the estimation of survivals and risks.”

Indeed, we have recently suggested a generic method to estimate a range of effect measures in survival analysis, utilizing differential equations driven by cumulative hazards [9].

Nevertheless, the conventional hypothesis tests in survival analysis are still based on hazards. In particular, the rank tests [10], including the log-rank test, are based on the null hypothesis

$H_0: \alpha_1(t) = \alpha_2(t)$ for all $t$,  (1)

where $\alpha_i$ is the hazard in group $i$. Formulating such hypotheses in a practical setting will often imply that we assign causal interpretations to these hazard functions. In the simplest survival setting this is not a problem, as there is a one-to-one relationship between hazards and survival curves, and a null hypothesis comparing two or more survival curves is straightforward. In more advanced settings, e.g. scenarios with competing risks, hypotheses like (1) are less transparent, leading to issues with interpretation [11]. For example, in competing risks settings where competing events are treated as censoring events, the null hypothesis in (1) is based on cause-specific hazards, which are often not the target of inference [11].

We aimed to develop new hypothesis tests for time-to-event outcomes with two key characteristics: First, the tests should be rooted in explicit null hypotheses that are easy to interpret. Second, the testing strategy should be generic, such that the scientist can apply the test to their estimand of interest.

1.1. Survival parameters as solutions of differential equations

We will consider survival parameters that are functions solving differential equations on the form

$X_t = X_0 + \int_0^t F(X_{s-}) \, dA_s,$  (2)

where $A$ is a $k$-dimensional vector of cumulative hazards, and $F = (F_1, \dots, F_k)$ is Lipschitz continuous with bounded and continuous first and second derivatives, and satisfies a linear growth bound. The class of parameters also includes several quantities that are Lebesgue integrals, obtained by letting one component of $A$ be time itself, i.e. $A_t^j = t$ for some $j$. Here, $X_t$ is a vector that includes our estimand of interest, but it may also contain additional nuisance parameters that are needed to formulate the estimand of interest.

Many parameters in survival analysis solve equations on the form (2). In particular, the survival function can be expressed on the form (2) as $S_t = 1 - \int_0^t S_{s-} \, dA_s$, where $A$ is the cumulative hazard for death. Other examples include the cumulative incidence function, the restricted mean survival function, and the prevalence function. We will present these parameters in detail in Section 3. Nonparametric plugin estimators have been thoroughly studied in [9]. The strategy assumes that $A$ can be consistently estimated by

$\hat A_t = \int_0^t G_s \, dN_s,$  (3)

where $G$ is a predictable process and $N$ is a counting process, of suitable dimensions. Furthermore, we assume that $\hat A$, the residuals $W = \sqrt{n}(\hat A - A)$, and their quadratic variation $[W]$ are so-called predictably uniformly tight. When the estimator is a counting process integral, a relatively simple condition ensures predictable uniform tightness [9, Lemma 1]. Moreover, we suppose that $W$ converges weakly (with respect to the Skorohod metric) to a mean zero Gaussian martingale with independent increments, see [9, Lemma 1, Theorem 1 & 2] for details. Examples of estimators on the form (3) that satisfy these criteria are the Nelson-Aalen estimator and, more generally, Aalen’s additive hazard estimator; if Aalen’s additive hazards model is a correct model for the hazard, then the estimator satisfies these criteria, in particular predictable uniform tightness.

Our suggested plugin estimator of $X$ is obtained by replacing $A$ with $\hat A$, giving estimators $\hat X$ that solve the system

$\hat X_t = \hat X_0 + \int_0^t F(\hat X_{s-}) \, d\hat A_s,$  (4)

where $\hat X_0$ is a consistent estimator of $X_0$. When the estimand is the survival function, this plugin estimator reduces to the Kaplan-Meier estimator. Ryalen et al. [9] identified the asymptotic distribution of $\sqrt{n}(\hat X - X)$ to be a mean zero Gaussian martingale with a covariance $V$ solving a linear differential equation [9, eq. (17)]. The covariance can also be consistently estimated by inserting the estimates $\hat X$ and $\hat A$, giving rise to the system

$\hat V_t = \int_0^t \sum_{j=1}^{k} \big( \nabla F_j(\hat X_{s-}) \hat V_{s-} + \hat V_{s-} \nabla F_j(\hat X_{s-})^{\intercal} \big) \, d\hat A_s^j + \int_0^t F(\hat X_{s-}) \, d[\hat W]_s \, F(\hat X_{s-})^{\intercal},$  (5)

where $\nabla F_1, \dots, \nabla F_k$ are the Jacobian matrices of the columns of $F$ from (2), and $[\hat W]$ is a $k \times k$ matrix containing the optional covariations of the residual processes, which can be computed directly from the counting process integrals in (3).

The variance estimator (5), as well as the parameter estimator (4), can be expressed as difference equations, and they are therefore easy to calculate generically in computer programs. To be explicit, let $t_1 < t_2 < \cdots$ denote the ordered jump times of $\hat A$. Then $\hat X_{t_{k+1}} = \hat X_{t_k} + F(\hat X_{t_k}) \Delta \hat A_{t_{k+1}}$, while $\hat X_t = \hat X_{t_k}$ as long as $t_k \le t < t_{k+1}$. Similarly, the plugin variance equation (5) may be written as a difference equation and solved jointly at the same jump times.
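To illustrate the difference-equation scheme, the following is a minimal sketch in plain R (not the authors’ transform.hazards package) for the survival function, where $F(S) = -S$ and the integrator is the Nelson-Aalen estimate; each step is $S \leftarrow S + F(S)\,\Delta\hat A$. The function and variable names are illustrative.

```r
plugin_survival <- function(time, status) {
  jump_times <- sort(unique(time[status == 1]))   # jump times of the Nelson-Aalen estimate
  S <- 1                                          # X_0: everyone is event-free at time zero
  S_path <- numeric(length(jump_times))
  for (k in seq_along(jump_times)) {
    tk <- jump_times[k]
    d  <- sum(time == tk & status == 1)           # events at tk
    Y  <- sum(time >= tk)                         # number at risk just before tk
    dA <- d / Y                                   # Nelson-Aalen increment
    S  <- S + (-S) * dA                           # Euler step of dS = -S dA
    S_path[k] <- S
  }
  data.frame(time = jump_times, surv = S_path)
}

# Since S + (-S) * dA = S * (1 - d/Y), the scheme reproduces the Kaplan-Meier
# estimator, as stated above. Simulated data for illustration:
set.seed(1)
event <- rexp(200); cens <- rexp(200, 0.3)
obs <- pmin(event, cens); status <- as.numeric(event <= cens)
head(plugin_survival(obs, status))
```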

2. Hypothesis testing

The null hypothesis is not explicitly expressed in many research reports. Rather, the null hypothesis is often stated informally, e.g. by vaguely indicating that a difference between two groups is assessed. Even if the null hypothesis is perfectly clear to a statistician, this is a problem: the applied scientist, who frames the research question based on subject-matter knowledge, may not have a formal understanding of the null hypothesis.

In particular, we are not convinced that scientists faced with time-to-event outcomes profoundly understand null hypotheses based on hazard functions. Hence, using null hypotheses based on hazard functions, such as (1), may be elusive: in many scenarios, the scientist’s primary interest is not to assess whether the hazard functions are equal at all follow-up times. Indeed, the research question is often more focused, and the scientist’s main concern may be contrasts of other survival parameters at a prespecified time $t_0$ or in a prespecified time interval [12]. For example, we may aim to assess whether cancer survival differs between two treatment regimens five years after diagnosis. Or we may aim to assess whether a drug increases the average time to relapse in subjects with a recurring disease. We will highlight that the rank tests are often not suitable for such hypotheses.

Hence, instead of assessing hazards, let us study tests of (survival) parameters $\theta_1$ and $\theta_2$ in groups 1 and 2 at a prespecified time $t_0$. The null hypothesis is

$H_0: \theta_1^j(t_0) = \theta_2^j(t_0)$ for $j = 1, \dots, p$,  (6)

where $p$ is the dimension of $\theta_1$ and $\theta_2$. We emphasize that the null hypothesis in (6) is different from the null hypothesis in (1), as (6) is defined for any parameter at a prespecified $t_0$. We will consider parameters $\theta_1$ and $\theta_2$ that solve (2); this is a broad class of important parameters, including (but not limited to) the survival function, the cumulative incidence function, the time dependent sensitivity and specificity functions, and the restricted mean survival function [9].

2.1. Test statistics

We consider two groups 1 and 2 with population sizes $n_1$ and $n_2$, and let $n = n_1 + n_2$. We can estimate parameters and covariance matrices using the plugin method described in Section 1.1. The contrast

$\sqrt{n}\,\big(\hat\theta_1(t_0) - \hat\theta_2(t_0)\big)$

has an asymptotic mean zero normal distribution under the null hypothesis. If the groups are independent, we may then use the statistic

$Z(t_0) = n\,\big(\hat\theta_1(t_0) - \hat\theta_2(t_0)\big)^{\intercal}\, \hat\Sigma^{-1}\, \big(\hat\theta_1(t_0) - \hat\theta_2(t_0)\big)$  (7)

to test for differences at $t_0$, where $\hat\Sigma = \frac{n}{n_1}\hat\Sigma_1 + \frac{n}{n_2}\hat\Sigma_2$, and where $\hat\Sigma_1$ and $\hat\Sigma_2$ are calculated using the covariance matrix estimator (5). Then, the quantity (7) is asymptotically $\chi^2$ distributed with $p$ degrees of freedom under the null hypothesis, which is a corollary of the results in [9]: we know from [9, Theorem 2] that $\sqrt{n_i}(\hat\theta_i - \theta_i)$ converges weakly to a mean zero Gaussian martingale whose covariance matrix can be consistently estimated using (5). Therefore, under the null hypothesis (6), the root difference of the estimates, $\sqrt{n}\,(\hat\theta_1(t_0) - \hat\theta_2(t_0))$, will converge to a multivariate mean zero normal distribution with a covariance matrix that can be estimated by $\hat\Sigma$. Due to the continuous mapping theorem, the statistic (7) has an asymptotic $\chi^2$ distribution.
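A minimal sketch of the Wald-type statistic follows, assuming that the supplied covariance matrices estimate the covariance of $\sqrt{n_i}(\hat\theta_i - \theta_i)$ at $t_0$, as produced by (5); the computation below is algebraically the same as (7). All argument names are illustrative.

```r
chisq_test_t0 <- function(theta1, theta2, Sigma1, Sigma2, n1, n2) {
  diff <- theta1 - theta2
  V    <- Sigma1 / n1 + Sigma2 / n2               # estimated covariance of the difference
  stat <- as.numeric(t(diff) %*% solve(V) %*% diff)
  p    <- length(diff)
  list(statistic = stat, df = p,
       p.value = pchisq(stat, df = p, lower.tail = FALSE))
}

# One-dimensional example: survival at t0 in two groups of sizes 300 and 310
chisq_test_t0(theta1 = 0.62, theta2 = 0.55,
              Sigma1 = matrix(4.1), Sigma2 = matrix(3.8),
              n1 = 300, n2 = 310)
```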

Sometimes we may be interested in testing e.g. only the first $q < p$ components of $\theta_1$ and $\theta_2$, under the null hypothesis $\theta_1^j(t_0) = \theta_2^j(t_0)$ for $j = 1, \dots, q$. It is straightforward to adjust the hypothesis (6) and the test statistic accordingly, yielding the same asymptotic $\chi^2$ distribution with $q$ degrees of freedom.

3. Examples of test statistics

We derive test statistics for some common effect measures in survival and event history analysis. By expressing the test statistics explicitly, our tests may be compared with the tests based on conventional approaches.

3.1. Survival at $t_0$

In clinical trials, the primary outcome may be survival at a prespecified time $t_0$, e.g. cancer survival 5 years after diagnosis. Testing if survival at $t_0$ is equal in two independent groups can be done in several ways [12], e.g. by estimating the variance of Kaplan-Meier curves using Greenwood’s formula. However, we will highlight that our generic tests also immediately deal with this scenario: it is straightforward to use the null hypothesis in (6), where $\theta_1$ and $\theta_2$ are the survival functions in groups 1 and 2 at time $t_0$. Using the results in Section 2.1, we find that the plugin estimators of $\theta_1$ and $\theta_2$ are the standard Kaplan-Meier estimators $\hat S_1$ and $\hat S_2$. The plugin variance in group $i$ solves

$\hat\sigma_{i,t}^2 = -2 \int_0^t \hat\sigma_{i,s-}^2 \, d\hat A_{i,s} + n_i \int_0^t \frac{\hat S_{i,s-}^2}{Y_{i,s}^2} \, dN_{i,s}$  (8)

for $i = 1, 2$, where $Y_{i,s}$ is the number at risk in group $i$ just before time $s$ and $N_i$ counts the deaths in group $i$. Assuming that the groups are independent, the final variance estimator can be expressed as $\hat\sigma^2 = \hat\sigma_{1,t_0}^2/n_1 + \hat\sigma_{2,t_0}^2/n_2$, and the statistic (7) becomes $(\hat S_1(t_0) - \hat S_2(t_0))^2 / \hat\sigma^2$, which is approximately $\chi^2$ distributed with 1 degree of freedom.
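For comparison, the conventional fixed-time test of [12] can be sketched with Kaplan-Meier estimates and Greenwood standard errors from the survival package. The data set, rates, and the choice $t_0 = 5$ below are made up for illustration.

```r
library(survival)

set.seed(2)
d <- data.frame(
  time   = c(rexp(150, 0.10), rexp(150, 0.15)),
  status = rbinom(300, 1, 0.9),
  group  = rep(c("A", "B"), each = 150)
)

fit <- survfit(Surv(time, status) ~ group, data = d)
s5  <- summary(fit, times = 5)               # KM estimate and Greenwood SE at t0 = 5

diff_hat <- s5$surv[1] - s5$surv[2]
se_diff  <- sqrt(s5$std.err[1]^2 + s5$std.err[2]^2)
z        <- diff_hat / se_diff               # z^2 is chi-squared with 1 df
c(difference = diff_hat, z = z, p.value = 2 * pnorm(-abs(z)))
```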

3.2. Restricted mean survival until $t_0$

As an alternative to the hazard ratio, the restricted mean survival has been advocated: it can be calculated without parametric assumptions and it has a clear causal interpretation [13, 14, 15]. The plugin estimator of the restricted mean survival difference between groups 1 and 2 is $\hat R_1(t_0) - \hat R_2(t_0)$, where $\hat R_i(t_0) = \int_0^{t_0} \hat S_i(s) \, ds$ and $\hat S_i$ is the Kaplan-Meier estimator in group $i$. The plugin estimator for the variance is obtained from a two-dimensional system of the form (5) for $(\hat S_i, \hat R_i)$, driven by the survival plugin variance $\hat\sigma_i^2$ given in (8). The statistic (7) can then be used to perform a test, with 1 degree of freedom.
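A simplified stand-in for the plug-in calculation is to integrate the Kaplan-Meier curve up to $t_0$ and use the standard large-sample variance of the restricted mean (as in e.g. [15]); this is not the plug-in system above, and all names and data below are illustrative.

```r
library(survival)

rmst_km <- function(time, status, t0) {
  fit  <- survfit(Surv(time, status) ~ 1)
  keep <- fit$time <= t0
  tt <- c(0, fit$time[keep], t0)                  # grid on [0, t0]
  ss <- c(1, fit$surv[keep])                      # KM value on each step
  areas <- diff(tt) * ss                          # area of each step
  rmst  <- sum(areas)
  tail_area <- rev(cumsum(rev(areas)))[-1]        # area from each jump time to t0
  d <- fit$n.event[keep]; Y <- fit$n.risk[keep]
  ok <- d > 0 & (Y - d) > 0
  var_rmst <- sum(tail_area[ok]^2 * d[ok] / (Y[ok] * (Y[ok] - d[ok])))
  c(rmst = rmst, se = sqrt(var_rmst))
}

# Two-group comparison at t0 = 5, a one-degree-of-freedom analogue of (7)
set.seed(3)
e1 <- rexp(200, 0.10); c1 <- runif(200, 0, 12)
e2 <- rexp(200, 0.14); c2 <- runif(200, 0, 12)
r1 <- rmst_km(pmin(e1, c1), as.numeric(e1 <= c1), t0 = 5)
r2 <- rmst_km(pmin(e2, c2), as.numeric(e2 <= c2), t0 = 5)
z  <- (r1["rmst"] - r2["rmst"]) / sqrt(r1["se"]^2 + r2["se"]^2)
unname(2 * pnorm(-abs(z)))
```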

3.3. Cumulative incidence at $t_0$

Many time-to-event outcomes are subject to competing risks. The Gray test is a competing risk analogue of the log-rank test: the null hypothesis is that the subdistribution hazards $\gamma_i(t) = -\frac{d}{dt}\log\{1 - C_i(t)\}$, where $C_i$ is the cumulative incidence of the event of interest in group $i$, are equal at all $t$ [16]. Analogous to the log-rank test, the Gray test has low power if the subdistribution hazard curves are crossing [17]. However, we are often interested in evaluating the cumulative incidence at a time $t_0$, without making assumptions about the subdistribution hazards, which are even harder to interpret causally than standard hazard functions. By expressing the cumulative incidence on the form (2), we use our transformation procedure to obtain a test statistic for the cumulative incidence at $t_0$. The plugin estimator for the cumulative incidence difference is

$\hat C_1(t_0) - \hat C_2(t_0) = \int_0^{t_0} \hat S_1(s-) \, d\hat A_{1,s} - \int_0^{t_0} \hat S_2(s-) \, d\hat A_{2,s},$

where $\hat A_i$ is the cumulative cause-specific hazard for the event of interest, and $\hat S_i$ is the Kaplan-Meier estimate within group $i$. The groupwise plugin variances solve a linear system of the form (5), where $N_i^1$ counts the event of interest in group $i$.
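The plug-in (Aalen-Johansen) form of the cumulative incidence can be sketched directly: at each event time, $\hat C$ is incremented by $\hat S(s-)\,d\hat A_1(s)$ and the all-cause survival is updated. The sketch below omits the variance machinery, and the data and names are illustrative.

```r
cuminc_at_t0 <- function(time, cause, t0) {
  # cause: 0 = censored, 1 = event of interest, 2 = competing event
  times <- sort(unique(time[cause > 0 & time <= t0]))
  S <- 1; C <- 0
  for (tk in times) {
    Y    <- sum(time >= tk)                    # at risk just before tk
    d1   <- sum(time == tk & cause == 1)       # events of interest at tk
    dAll <- sum(time == tk & cause > 0)        # events of any type at tk
    C    <- C + S * d1 / Y                     # dC = S(s-) dA1(s)
    S    <- S * (1 - dAll / Y)                 # all-cause survival update
  }
  C
}

set.seed(4)
n <- 400
t_int <- rexp(n, 0.08); t_cmp <- rexp(n, 0.05); cens <- runif(n, 0, 15)
obs   <- pmin(t_int, t_cmp, cens)
cause <- ifelse(cens < pmin(t_int, t_cmp), 0, ifelse(t_int <= t_cmp, 1, 2))
cuminc_at_t0(obs, cause, t0 = 5)
```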

3.4. Frequency of recurrent events

Many time-to-event outcomes are recurrent events. For example, time to hospitalization is a common outcome in medical studies, such as trials on cardiovascular disease. Often, recurrent events are analysed with conventional methods, in particular the Cox model, restricting the analysis to include only the first event in each subject. A better solution may be to study the mean frequency function, i.e. the marginal expected number of events until time $t$, acknowledging that a subject cannot experience events after death [18]. We let $R_i$ and $A_i$ be the cumulative hazards for the recurrent event and death in group $i$, respectively, and let $\mu_i$ and $S_i$ be the mean frequency function and survival, respectively. Then, the plugin estimator of the difference is

$\hat\mu_1(t_0) - \hat\mu_2(t_0) = \int_0^{t_0} \hat S_1(s-) \, d\hat R_{1,s} - \int_0^{t_0} \hat S_2(s-) \, d\hat R_{2,s}.$

The plugin variances solve a system of the form (5), where $N_i^r$ counts the recurrent events in group $i$, and where $\hat\sigma_i^2$ is the survival plugin variance in group $i$, as displayed in (8).
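A sketch of the point estimate described above (Ghosh and Lin [18]) follows: the Nelson-Aalen increments for the recurrent event are weighted by the Kaplan-Meier estimate of survival from death. The data layout and all names are illustrative, and the plug-in variance is omitted.

```r
mean_frequency <- function(followup, died, recurrent_times, t0) {
  # followup: censoring or death time per subject; died: 1 if death at followup
  # recurrent_times: list with one vector of recurrent event times per subject
  rec_times <- unlist(recurrent_times)
  grid <- sort(unique(c(rec_times, followup[died == 1])))
  grid <- grid[grid <= t0]
  S <- 1; mu <- 0
  for (tk in grid) {
    Y  <- sum(followup >= tk)                    # at risk just before tk
    dR <- sum(rec_times == tk) / Y               # recurrent-event increment
    dD <- sum(followup == tk & died == 1) / Y    # death hazard increment
    mu <- mu + S * dR                            # d(mu) = S(s-) dR(s)
    S  <- S * (1 - dD)                           # Kaplan-Meier update for death
  }
  mu
}

set.seed(5)
n <- 100
death_t <- rexp(n, 0.05); cens_t <- runif(n, 2, 10)
fu <- pmin(death_t, cens_t); died <- as.numeric(death_t <= cens_t)
rec <- lapply(fu, function(stop_t) sort(runif(rpois(1, 2), 0, stop_t)))
mean_frequency(fu, died, rec, t0 = 5)
```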

3.5. Prevalence in an illness-death model

The prevalence denotes the proportion of individuals with a condition at a specific time, which is e.g. useful for evaluating the burden of a disease. We consider a simple Markovian illness-death model with three states: healthy (0), ill (1), and dead (2). The population is assumed to be healthy initially, but individuals may get ill or die as time goes on. We aim to study the prevalence of the illness in group $i$ as a function of time. Here, we assume that the illness is irreversible, but we could extend this to a scenario in which recovery from the illness is possible, similar to Bluhmki et al. [19]. Let $A_i^{jk}$ be the cumulative hazard for transitioning from state $j$ to state $k$ in group $i$. Then, the prevalence $P_i$ solves the system

$dH_{i,t} = -H_{i,t-} \, \big(dA_{i,t}^{01} + dA_{i,t}^{02}\big), \qquad dP_{i,t} = H_{i,t-} \, dA_{i,t}^{01} - P_{i,t-} \, dA_{i,t}^{12},$

where $H_i$ is the probability of being healthy. The plugin estimator for the difference $\hat P_1(t_0) - \hat P_2(t_0)$ is obtained by solving (4) with Nelson-Aalen estimates of the $A_i^{jk}$ inserted. The variance estimator for group $i$ solves a system of the form (5). Here, $Y_i^0$ and $Y_i^1$ are the numbers of individuals at risk in states 0 and 1, while $N_i^{jk}$ counts the transitions from state $j$ to state $k$ in group $i$. By calculating the estimates and plugin variances for $i = 1, 2$, we can find the statistic (7). Here, the prevalence is measured as the proportion of affected individuals relative to the initial population. We could use a similar approach to consider the proportion of affected individuals relative to the surviving population at $t_0$, or we could record the cumulative prevalence until $t_0$ to evaluate the cumulative disease burden.
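The prevalence system can be solved as a difference equation in the same way as (4); the sketch below uses Nelson-Aalen increments computed from individual state histories, with no censoring for simplicity. All names and rates are made up, and the plug-in variance is omitted.

```r
set.seed(6)
n <- 500
t01 <- rexp(n, 0.10)                               # healthy -> ill
t02 <- rexp(n, 0.05)                               # healthy -> dead
ill      <- t01 < t02
t_ill    <- ifelse(ill, t01, Inf)                  # time of illness onset (Inf if never ill)
t_death  <- ifelse(ill, t01 + rexp(n, 0.15), t02)  # death after onset, or directly from healthy

prevalence_at <- function(t_ill, t_death, t0) {
  jump <- sort(unique(c(t_ill[is.finite(t_ill)], t_death)))
  jump <- jump[jump <= t0]
  H <- 1; P <- 0
  for (tk in jump) {
    Y0 <- sum(pmin(t_ill, t_death) >= tk)                   # at risk in state 0
    Y1 <- sum(t_ill < tk & t_death >= tk)                    # at risk in state 1
    dA01 <- if (Y0 > 0) sum(t_ill == tk) / Y0 else 0
    dA02 <- if (Y0 > 0) sum(t_death == tk & !is.finite(t_ill)) / Y0 else 0
    dA12 <- if (Y1 > 0) sum(t_death == tk & t_ill < tk) / Y1 else 0
    P <- P + H * dA01 - P * dA12                             # dP = H dA01 - P dA12
    H <- H - H * (dA01 + dA02)                               # dH = -H (dA01 + dA02)
  }
  P
}

prevalence_at(t_ill, t_death, t0 = 5)
```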

4. Performance

In this section, we present power functions under several scenarios for the test statistics that were presented in Section 3. The scenarios were simulated by defining different relations between the hazard functions in the two exposure groups: (i) constant hazards in both groups, (ii) hazards that were linearly crossing, and (iii) hazards that were equal initially before diverging at a later time.

For each hazard relation (i)-(iii), we defined several hazard configurations such that the true parameter difference was equal to $\Delta$ at the prespecified time point $t_0$, i.e. $\theta_1(t_0) - \theta_2(t_0) = \Delta$. For each combination of target parameter, difference $\Delta$, and hazard scenario, we replicated the simulations to obtain realizations of (7), and we artificially censored 10% of the subjects in each simulation. In the Supplementary material, we show additional simulations with different sample sizes (50, 100 and 500) and fractions of censoring (10%-40%). We have provided an overview of the simulations in figure 2, in which parameters of interest (solid lines) and hazard functions (dashed lines) are displayed in scenarios where $\Delta$ is held fixed.

In each scenario, we rejected at the 5% significance level. Thus, we obtained Bernoulli trials in which the success probability is the power function evaluated at $\Delta$. The estimated power functions, i.e. the estimates of the Bernoulli probabilities, are displayed in figure 3 (solid lines). The power functions are not affected by the structure of the underlying hazards, as desired: our tests are only defined at $t_0$, and the particular parameterization of the hazard has minor impact on the power function.
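A simplified sketch of this power-estimation procedure follows: repeat the simulation, test survival at $t_0 = 5$ in each replicate (here, for brevity, with the conventional Greenwood-based test rather than the generic plug-in statistic), and take the rejection rate at the 5% level as the estimated power. The rates, sample sizes, and censoring distribution are made up.

```r
library(survival)

reject_once <- function(rate1, rate2, n = 200, t0 = 5) {
  ev   <- c(rexp(n, rate1), rexp(n, rate2))
  cens <- runif(2 * n, 0, 15)
  d <- data.frame(time   = pmin(ev, cens),
                  status = as.numeric(ev <= cens),
                  group  = rep(1:2, each = n))
  s <- summary(survfit(Surv(time, status) ~ group, data = d), times = t0)
  z <- (s$surv[1] - s$surv[2]) / sqrt(s$std.err[1]^2 + s$std.err[2]^2)
  abs(z) > qnorm(0.975)                         # reject at the 5% level?
}

set.seed(8)
mean(replicate(500, reject_once(0.10, 0.14)))   # estimated power at this difference
```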

The dashed lines in figure 3 show power functions of alternative nonparametric test statistics that are already found in the literature, tailor-made for the scenario of interest. In particular, for the survival at $t_0$, we obtained a test statistic using Greenwood’s variance formula (and a cloglog transformation in the Supplementary Material) [12]. For the restricted mean survival at $t_0$, we used the statistic suggested in [15]. For the cumulative incidence at $t_0$, we used the statistic suggested in [20]. For the mean frequency function we used the estimators derived in [18], and in the prevalence example we used the variance formula in [21, p. 295], as implemented in the etm package in R. Our generic strategy gave power similar to the conventional methods for each particular scenario.

4.1. Comparisons with the log-rank test

We have argued that our tests are fundamentally different from the rank tests, as the null hypotheses are different. Nevertheless, since rank tests are widely used in practice, also when the primary interest seems to be other hypotheses than (1), we aimed to compare the power of our test statistics with the log-rank test under different scenarios. In table 3, we compare the power of our test statistic and the rank test, using the scenarios in figure 2. In the first column, the proportional hazards assumption is satisfied (constant hazards), and therefore the power of the log-rank test is expected to be optimal (assuming no competing risks). Our tests of the survival function and the restricted mean survival function show only slightly reduced power compared to the log-rank test. For the cumulative incidence function at a time $t_0$, our test is less powerful than the Gray test and the log-rank test of the cause-specific hazards. However, the type one error rate of the cause-specific hazard test is not nominal, an issue we return to at the end of this section.

The second column displays the power of the tests under scenarios with crossing hazards. For the survival function, it may seem surprising that the log-rank test attained higher power than our test, despite the crossing hazards. However, in this particular scenario the hazards cross close to the end of the study (dashed lines in Figure 2), and therefore the crossing has little impact on the power of the log-rank test. In contrast, the power of the log-rank test is considerably reduced in the scenarios where we study the restricted mean survival function and the cumulative incidence function, in which the hazards cross at earlier points in time.

The third column shows the power under hazards that are deviating. For the survival function, our test provides higher power. Intuitively, the log-rank test has less power in this scenario because the hazards are equal or close to equal during a substantial fraction of the follow-up time. For the restricted mean survival, however, the log-rank test has more power. This is not surprising [15], and it is due to the particular simulation scenario: late events have relatively little impact on the restricted mean survival time, and in this scenario a major difference between the hazards was required to obtain the difference $\Delta$ at $t_0$. Since the log-rank test is sensitive to major differences in the hazards, it has more power in this scenario. For the cumulative incidence, in contrast, the power of the log-rank test is lower than the power of our test.

The results in table 3 illustrate that power depends strongly on the hazard scenario for the log-rank test, but this dependence is not found for our tests.

To highlight the basic difference between the log-rank test and our tests, we have studied scenarios where the null hypothesis (6) is true (figure 4). That is, at $t_0$ the difference $\theta_1(t_0) - \theta_2(t_0) = 0$, but for earlier times the equality does not hold. In these scenarios, the log-rank test got high rates of type 1 errors. Heuristically, this is expected because the hazards are different at most (if not all) times before $t_0$. Indeed, figure 4 confirms that the log-rank test does not have the correct type 1 error rate under null hypotheses as in (6), and it should not be used for such tests, even if the power sometimes is adequate (as in table 3).

5. Example: Adjuvant chemotherapy in patients with colon cancer

To illustrate how our tests can be applied in practice, we assessed the effectiveness of two adjuvant chemotherapy regimens in patients with stage III colon cancer, using publicly available data [22, 23]. The analysis is performed as a worked example in the supplementary data, using the R package transform.hazards. After undergoing surgery, 929 patients were randomly assigned to observation only, levamisole (Lev), or levamisole plus fluorouracil (Lev+5FU) [22]. We restricted our analysis to the 614 subjects who received Lev or Lev+5FU. All-cause mortality and the cumulative incidence of cancer recurrence were lower in subjects receiving Lev+5FU, as displayed in Figure 1.

We formally assessed the comparative effectiveness of Lev and Lev+5FU after 1 and 5 years of follow-up, using the parameters from Section 3. After 1 year, both the overall survival and the restricted mean survival were similar in the two treatment arms (Table 1). However, the cumulative incidence of recurrence was reduced in the Lev+5FU group, and the number of subjects alive with recurrent disease was lower in the Lev+5FU group. Also, the mean time spent alive and without recurrence was longer in the Lev+5FU group (Table 1, Restricted mean recurrence free survival). These results suggest that Lev+5FU has a beneficial effect on disease recurrence after 1 year of follow-up compared to Lev.

After 5 years of follow-up, overall survival and restricted mean survival were improved in the Lev+5FU group (Table 2). Furthermore, the cumulative incidence of recurrence was reduced, and the prevalence of patients with recurrence was lower in the Lev+5FU group. These results suggest that Lev+5FU reduces overall mortality and recurrence after 5 years of follow-up compared to Lev.

In conclusion, our analysis indicates that treatment with Lev+5FU improves several clinically relevant outcomes in patients with stage III colon cancer. We also emphasise that a conventional analysis using a proportional hazards model would not be ideal here, as the plots in Figure 1 indicate violations of the proportional hazards assumption.

6. Covariate adjustments

Our approach allows us to conduct tests conditional on baseline covariates, using the additive hazard model; by letting the cumulative hazards that drive (2) be conditional on specific covariates, we can test for conditional differences between groups, assuming that the underlying hazards are additive.

In more detail, we can test for differences between groups 1 and 2 at the covariate level $z$ by evaluating the cumulative hazards in each group at that level, yielding $A_1(t \mid z)$ and $A_2(t \mid z)$. Estimates $\hat A_1(t \mid z)$ and $\hat A_2(t \mid z)$ can be found using standard software. This allows us to estimate parameters with covariances using (4) and (5), and to test the null hypothesis of no group difference within covariate level $z$ using the test statistic (7), again assuming that the groups are independent.
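A bare-bones sketch of obtaining such a covariate-conditional cumulative hazard under Aalen’s additive model, $A(t \mid z) = B_0(t) + z\,B_1(t)$, is to accumulate least-squares increments of the cumulative regression functions at each event time. This is not the implementation in standard packages such as timereg; all names, rates, and the covariate level are made up.

```r
aalen_cumhaz_at_z <- function(time, status, z, z0, t0) {
  event_times <- sort(unique(time[status == 1 & time <= t0]))
  B <- c(0, 0)                              # cumulative intercept and z-coefficient
  for (tk in event_times) {
    at_risk <- time >= tk
    X  <- cbind(1, z[at_risk])              # design matrix for subjects at risk
    dN <- as.numeric(time[at_risk] == tk & status[at_risk] == 1)
    dB <- solve(t(X) %*% X, t(X) %*% dN)    # least-squares increment of B(t)
    B  <- B + as.numeric(dB)
  }
  sum(c(1, z0) * B)                         # estimated A(t0 | z = z0)
}

set.seed(7)
n <- 300
z  <- rbinom(n, 1, 0.5)
ev <- rexp(n, 0.05 + 0.05 * z); cens <- runif(n, 0, 20)
aalen_cumhaz_at_z(pmin(ev, cens), as.numeric(ev <= cens), z, z0 = 1, t0 = 5)
```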

7. Discussion

By expressing survival parameters as solutions of differential equations, we provide generic hypothesis tests for survival analysis. In contrast to the conventional approaches that are based on hazard functions [24, Section 3.3], our null hypotheses are defined with respect to explicit parameters at a prespecified time $t_0$. Our strategy also allows for covariate adjustment under additive hazard models.

We have presented some examples of parameters, and our simulations suggest that the test statistics are well-behaved in a range of scenarios. Indeed, for common parameters such as the survival function, the restricted mean survival function and the cumulative incidence function, our tests obtain power similar to conventional tests that are tailor-made for a particular parameter. Importantly, our examples do not comprise a comprehensive set of relevant survival parameters, and several other effect measures for event histories may be described on the differential equation form (2), allowing for immediate implementation of hypothesis tests, for example using the R package transform.hazards [9, 8]. The fact that our derivations are generic and easy to implement for customized parameters is a major advantage.

Our tests differ from the rank tests, as the rank tests assess the equality of the hazards during the entire follow-up. However, our strategy is intended to be different: we aimed to provide tests that apply to scenarios where the null hypothesis of the rank tests is not the primary interest.

Restricting the primary parameter to a single time $t_0$ is sometimes considered a limitation. In particular, we ignore the time-dependent profile of the parameters before and after $t_0$. For some parameters, such as the survival function or the cumulative incidence function, this may be a valid objection in principle. However, even if our primary parameter is assessed at $t_0$, this parameter may account for the whole history of events until $t_0$. One example is the restricted mean survival, which accumulates the history of events until $t_0$. Indeed, the restricted mean survival has been suggested as an alternative effect measure to the hazard ratio, because it is easier to interpret causally and it does not rely on the proportional hazards assumption [14]. An empirical analysis of RCTs showed that tests of the restricted mean survival function yield results that are concordant with log-rank tests under the conventional null hypothesis in (1) [14], and similar results were found in an extensive simulation study [15].

Furthermore, the time-dependent profile before $t_0$ is not our primary interest in many scenarios. In medicine, for example, we may be interested in comparing different treatment regimens, such as radiation and surgery for a particular cancer. Then, time to treatment failure is expected to differ in the shorter term due to the fundamental difference between the treatment regimens, but the study objective is to assess longer-term treatment differences [25]. Similarly, in studies of cancer screening, it is expected that more cancers are detected early in the screening arm, but the scientist’s primary aim is often to assess long-term differences between the screened and the non-screened. In such scenarios, testing at a prespecified $t_0$ is more desirable than the null hypothesis of the rank tests.

Nevertheless, we must ensure that cherry-picking of $t_0$ is avoided. In practice, there will often be a natural value of $t_0$. For example, $t_0$ (or multiple values of $t_0$) can be prespecified in the protocol of a clinical trial; in cancer studies, a common outcome is e.g. five year survival. Alternatively, $t_0$ can be selected based on when a certain proportion is lost to censoring. Furthermore, using confidence bands, rather than pointwise confidence intervals and p-values, is an appealing alternative when considering multiple points in time. There exist methods to estimate confidence bands based on the wild bootstrap for competing risks settings [26], which were recently extended to reversible multistate models, allowing for illness-death scenarios with recovery [19]. We aim to develop confidence bands for our estimators in future research.

Finally, we are often interested in assessing causal parameters using observational data, under explicit assumptions that ensure no confounding and no selection bias. Such parameters may be estimated after weighting the original data [8, 27]. Indeed, weighted point estimators are consistent when our approach is used [8], but we would also like to identify the asymptotic root residual distribution, allowing us to estimate covariance matrices that are appropriate for the weighted parameters. We consider this to be an important direction for future work. Currently, such covariance matrices can only be obtained from bootstrap samples.

8. Software

We have implemented a generic procedure for estimating parameters and covariance matrices in an R package, available for anyone to use at github.com/palryalen/transform.hazards. It allows for hypothesis testing at prespecified time points using (6). Worked examples can be found in the package vignette, or in the supplementary material here.

9. Funding

The authors were all supported by the research grant NFR239956/F20 - Analyzing clinical health registries: Improved software and mathematics of identifiability.

References

  • [1] Judea Pearl. The new challenge: From a century of statistics to the age of causation. In Computing Science and Statistics, pages 415–423. Citeseer, 1997.
  • [2] Miguel A Hernán and James M Robins. Causal Inference. Boca Raton, FL: CRC, forthcoming, 2018.
  • [3] Judea Pearl. Causality: Models, Reasoning and Inference 2nd Edition. Cambridge University Press, 2000.
  • [4] Odd O Aalen, Richard J Cook, and Kjetil Røysland. Does cox analysis of a randomized survival study yield a causal treatment effect? Lifetime data analysis, 21(4):579–593, 2015.
  • [5] Miguel A Hernán. The hazards of hazard ratios. Epidemiology (Cambridge, Mass.), 21(1):13, 2010.
  • [6] David R Cox and David Oakes. Analysis of Survival Data. Chapman & Hall, London, 1984.
  • [7] Mats J Stensrud, Morten Valberg, Kjetil Røysland, and Odd O Aalen. Exploring selection bias by causal frailty models: The magnitude matters. Epidemiology, 28(3):379–386, 2017.
  • [8] Pål C Ryalen, Mats J Stensrud, and Kjetil Røysland. The additive hazard estimator is consistent for continuous time marginal structural models. arXiv preprint arXiv:1802.01946, 2018.
  • [9] Pål C Ryalen, Mats J Stensrud, and Kjetil Røysland. Transforming cumulative hazard estimates. accepted in Biometrika, arXiv preprint arXiv:1710.07422, 2017.
  • [10] Odd Aalen. Nonparametric inference for a family of counting processes. The Annals of Statistics, pages 701–726, 1978.
  • [11] Jessica G Young, Eric J Tchetgen Tchetgen, and Miguel A Hernán. The choice to define competing risk events as censoring events and implications for causal inference. arXiv preprint arXiv:1806.06136, 2018.
  • [12] John P Klein, Brent Logan, Mette Harhoff, and Per Kragh Andersen. Analyzing survival curves at a fixed point in time. Statistics in medicine, 26(24):4505–4519, 2007.
  • [13] Patrick Royston and Mahesh KB Parmar. The use of restricted mean survival time to estimate the treatment effect in randomized clinical trials when the proportional hazards assumption is in doubt. Statistics in medicine, 30(19):2409–2421, 2011.
  • [14] Ludovic Trinquart, Justine Jacot, Sarah C Conner, and Raphaël Porcher. Comparison of treatment effects measured by the hazard ratio and by the ratio of restricted mean survival times in oncology randomized controlled trials. Journal of Clinical Oncology, 34(15):1813–1819, 2016.
  • [15] Lu Tian, Haoda Fu, Stephen J Ruberg, Hajime Uno, and Lee-Jen Wei. Efficiency of two sample tests via the restricted mean survival time for analyzing event time observations. Biometrics, 2017.
  • [16] Robert J Gray. A class of k-sample tests for comparing the cumulative incidence of a competing risk. The Annals of statistics, pages 1141–1154, 1988.
  • [17] A Latouche and R Porcher. Sample size calculations in the presence of competing risks. Statistics in medicine, 26(30):5370–5380, 2007.
  • [18] Debashis Ghosh and DY Lin. Nonparametric analysis of recurrent events and death. Biometrics, 56(2):554–562, 2000.
  • [19] Tobias Bluhmki, Claudia Schmoor, Dennis Dobler, Markus Pauly, Juergen Finke, Martin Schumacher, and Jan Beyersmann. A wild bootstrap approach for the aalen–johansen estimator. Biometrics, 2018.
  • [20] Mei-Jie Zhang and Jason Fine. Summarizing differences in cumulative incidence functions. Statistics in Medicine, 27(24):4939–4949, 2008.
  • [21] Per Kragh Andersen, Ørnulf Borgan, Richard D. Gill, and Niels Keiding. Statistical models based on counting processes. Springer Series in Statistics. Springer-Verlag, New York, 1993.
  • [22] Charles G Moertel, Thomas R Fleming, John S Macdonald, Daniel G Haller, John A Laurie, Tangen, et al. Fluorouracil plus levamisole as effective adjuvant therapy after resection of stage iii colon carcinoma: a final report. Annals of internal medicine, 122(5):321–326, 1995.
  • [23] T. M. Therneau and P. M. Grambsch. Modeling Survival Data: Extending the Cox Model. Springer, New York, 2000.
  • [24] Odd Aalen, Ornulf Borgan, and Hakon Gjessing. Survival and event history analysis: a process point of view. Springer Science & Business Media, 2008.
  • [25] Pål C Ryalen, Mats J Stensrud, and Kjetil Røysland. Causal inference in continuous time: An example on prostate cancer therapy. Accepted in Biostatistics, 2018.
  • [26] DY Lin. Non-parametric inference for cumulative incidence functions in competing risks studies. Statistics in medicine, 16(8):901–910, 1997.
  • [27] Miguel Angel Hernán, Babette Brumback, and James M Robins. Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. Epidemiology, 11(5):561–570, 2000.