The aim of this study is to develop and evaluate a novel methodology for regression analysis of interval-censored data. Interval-censored data arise naturally in many epidemiological and biomedical studies in which the exact time to an event of interest cannot be measured directly but rather is known to lie within an interval. For example, in a retrospective breast cancer study (Beadle et al., 1984), the rate of deterioration of the cosmetic result was compared between patients treated with primary radiation therapy plus adjuvant chemotherapy and patients treated with radiotherapy alone. Patients were scheduled for clinic visits every 4 to 6 months, at which the cosmetic state was inspected. The time until one of the endpoints, the appearance of breast retraction (which is highly associated with overall cosmetic deterioration), was only known to lie between two adjacent clinic visits.
The regression analysis of interval-censored data has been extensively investigated in the literature. In particular, much work has focused on the proportional hazards (PH) model, for example, Finkelstein (1986), Satten (1996), Pan (1999), Betensky et al. (2002), Zhang et al. (2010), and Wang et al. (2016), among many others. It is noted that the restrictive proportionality assumption made in the above work is not always justifiable. To provide a more general and flexible modeling framework, considerable effort has been devoted to the development of estimation methodology under semi-parametric linear transformation models, which include the PH model as a special case; these methods can be classified as estimating equation methods (Zhang et al., 2005; Zhang and Zhao, 2013), fully semiparametric maximum likelihood approaches (Zeng et al., 2016), and monotone spline techniques (Zhou et al., 2017, 2018). For a comprehensive review of the analysis of interval-censored data, see Sun (2006), Zhang and Sun (2010), and the references therein.
The aforementioned methods under transformation models can be either computationally intensive and unstable or complicated to implement. Thus, in this work we develop a flexible and computationally efficient penalized spline estimation approach for fitting semi-parametric transformation models to interval-censored data. In particular, we use monotone B-splines to model the unknown transformation function and employ a penalization methodology to facilitate the spline estimation. To accomplish the model fitting, a nested iterative EM-based algorithm is developed for estimation, and a simple and consistent variance-covariance estimation approach is proposed to provide accurate and reliable inference for the regression coefficients. As demonstrated via extensive numerical studies, the proposed penalized approach is computationally efficient and robust to model misspecification under a variety of settings. Theoretically, large-sample properties such as the optimal rate of convergence of the functional estimator and the asymptotic normality and efficiency of the regression parameter estimators are established. These appealing numerical and theoretical properties demonstrate that the proposed penalized procedure is a useful addition to the existing methodology for interval-censored data.
The remainder of this paper is organized as follows. The methodological details of the penalized approach, including the development of a nested iterative EM-based algorithm and the selection of the smoothing parameter and spline knots, are presented in Section 2. The large-sample properties of the penalized estimators and a consistent variance-covariance estimation approach are provided in Section 3. The numerical performance of the penalized method is assessed via extensive Monte Carlo studies and compared with an existing approach (Wang et al., 2016) under the PH model in Section 4. The proposed method is further illustrated by analyzing data from a retrospective breast cancer study and the prospective Signal Tandmobiel study in Section 5. A summary of the study and future work are discussed in Section 6. Finally, proofs of the asymptotic results and additional simulation results are available in the Supplementary Materials.
2 Model and numerical algorithm
2.1 Model and penalized likelihood
Consider a linear transformation model in which the conditional distribution function of the true failure time T given the covariate vector X takes the form

F(t | X) = 1 − exp{−G(Λ(t) exp(Xᵀβ))},  (1)
where β is a p-dimensional vector of regression coefficients corresponding to the covariate vector X. In this specification, it is assumed that the link function G is a known, strictly increasing function on [0, ∞) and that the unknown transformation function Λ is smooth and monotone increasing on [0, ∞) with Λ(0) = 0. The above formulation provides an immensely flexible model for censored data and includes a group of commonly used models as special cases. For instance, model (1) reduces to the proportional hazards (PH) model for G(x) = x, while taking G(x) = log(1 + x)
leads to the proportional odds (PO) model.
In this work we are interested in fitting model (1) to interval-censored data. The fundamental feature of interval-censored data is that the exact event time T cannot be ascertained but rather is censored by two observation times, say L and R with L < R. Let δ₁ = 1(T ≤ L), δ₂ = 1(L < T ≤ R), and δ₃ = 1(T > R) denote the censoring indicators, subject to δ₁ + δ₂ + δ₃ = 1, where 1(·) is the indicator function. The event time is left-, interval-, or right-censored if δ₁ = 1, δ₂ = 1, or δ₃ = 1, respectively. Consequently, the observed data comprise O_i = {L_i, R_i, δ_{1i}, δ_{2i}, δ_{3i}, X_i}, i = 1, …, n. We assume that the covariates are time-invariant and that the event time and the censoring process are independent given the covariates. Under these assumptions, the log-likelihood of the observed data for θ = (β, Λ) is given by

ℓ(θ) = Σ_{i=1}^n [ δ_{1i} log F(L_i | X_i) + δ_{2i} log{F(R_i | X_i) − F(L_i | X_i)} + δ_{3i} log{1 − F(R_i | X_i)} ].  (2)
Penalized estimation is well known for its ability to control the smoothness of a nonparametric estimator and to avoid overfitting the model; see Ma and Kosorok (2005) and Lu and Li (2017) and the references therein. Proceeding in this manner, we make use of the following penalized log-likelihood to estimate the parameters in model (1):

ℓ_p(θ) = ℓ(θ) − λ J(Λ),  (3)

where J(Λ) = ∫ {Λ″(t)}² dt is the penalty term and λ > 0 is the smoothing parameter that controls the smoothness of the fitted function.
In general, it is an extremely challenging task to estimate the infinite-dimensional nonparametric function Λ directly from the penalized likelihood. Following the work of Lu and Li (2017), to tackle this difficulty we propose to approximate the unknown transformation function via monotone B-splines. In particular,

Λ_n(t) = Σ_{l=1}^{q_n} γ_l B_l(t)

under the constraints γ₁ ≤ γ₂ ≤ ⋯ ≤ γ_{q_n}, where B_l(t), l = 1, …, q_n, are B-spline basis functions. The nondecreasing coefficients guarantee that the resulting spline function is nondecreasing; see Theorem 5.9 of Schumaker (1981). Thus, via spline approximation, we obtain the spline model

F(t | X) = 1 − exp{−G(Λ_n(t) exp(Xᵀβ))}.  (4)
Under the above spline model (4), the set of unknown parameters to be estimated is θ_n = (β, γ), where γ = (γ₁, …, γ_{q_n})ᵀ. Moreover, to further relieve the computational burden, we approximate the penalty term by a discrete difference penalty; see Eilers and Marx (1996). With the discrete penalty, the estimation can be easily carried out via standard software. Under these specifications, the observed penalized spline log-likelihood for θ_n is given by

ℓ_p(θ_n) = ℓ(θ_n) − λ ‖D₂γ‖²,  (5)

where ‖·‖ is the usual Euclidean norm and D₂ is the matrix representation of the discrete difference operator of order 2; see Wood (2017, p. 168).
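The monotonicity device above is easy to check numerically. The following Python sketch (the paper's implementation is in R, so this is only an illustration; the knot locations and coefficient values are hypothetical) builds a cubic B-spline with nondecreasing coefficients and verifies that the resulting function is nondecreasing, as guaranteed by Theorem 5.9 of Schumaker (1981):

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, knots, degree):
    """Evaluate all B-spline basis functions at the points x."""
    n_basis = len(knots) - degree - 1
    return np.column_stack([
        BSpline(knots, np.eye(n_basis)[l], degree)(x) for l in range(n_basis)
    ])

degree = 3                                  # cubic B-splines
inner = np.array([0.25, 0.5, 0.75])         # inner knots (hypothetical)
knots = np.r_[[0.0] * (degree + 1), inner, [1.0] * (degree + 1)]
gamma = np.array([0.0, 0.1, 0.3, 0.3, 0.8, 1.2, 2.0])  # nondecreasing coefficients

x = np.linspace(0.0, 1.0, 500)
B = bspline_basis(x, knots, degree)         # n x q_n spline design matrix
Lambda_n = B @ gamma                        # spline transformation function

# Nondecreasing coefficients imply a nondecreasing spline (Schumaker, Thm 5.9).
assert np.all(np.diff(Lambda_n) >= -1e-10)
```

With a clamped knot sequence, the spline equals its first coefficient at the left boundary, which is a convenient way to enforce Λ_n(0) = 0 by fixing γ₁ = 0.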
2.2 The EM algorithm
To accomplish model fitting, motivated by Wang et al. (2016), we develop a reliable and efficient EM-based optimization procedure to identify the penalized estimator of θ_n. In particular, let N_i(·) denote a latent nonhomogeneous Poisson process for subject i with cumulative mean function μ_i(t) = G(Λ_n(t) exp(X_iᵀβ)), i = 1, …, n, and let T_i denote the time to the occurrence of the first event of this process. Then T_i indeed follows the spline transformation model (4) with cumulative distribution function F(t | X_i) = 1 − exp{−μ_i(t)}. To implement the EM algorithm for the proposed model, two latent variables need to be specified. In particular, let Z_i = N_i(L_i) and, when δ_{1i} = 0, define W_i = N_i(R_i) − N_i(L_i). Under these specifications, if T_i is left-censored, then Z_i ≥ 1, and in the case of interval censoring or right censoring, Z_i = 0 and W_i ≥ 1 or Z_i = 0 and W_i = 0, respectively. Observe that Z_i ~ Poisson(μ_i(L_i)) and, when δ_{1i} = 0, W_i ~ Poisson(μ_i(R_i) − μ_i(L_i)); see Wang et al. (2016) for further information. Thus, ignoring the terms not involving θ_n, the complete penalized spline log-likelihood for θ_n involving the latent Z_i's and W_i's as well as the observed data is given by

ℓ_c(θ_n) = Σ_{i=1}^n [ Z_i log μ_i(L_i) − μ_i(L_i) + (1 − δ_{1i}){ W_i log(μ_i(R_i) − μ_i(L_i)) − μ_i(R_i) + μ_i(L_i) } ] − λ‖D₂γ‖².
In the E-step of the EM algorithm, conditional on the observed data O = {O_i, i = 1, …, n} and the current estimate θ_n^{(d)} of θ_n, taking the expectation of ℓ_c(θ_n) with respect to all the latent variables Z_i's and W_i's yields the penalized objective function

Q(θ_n; θ_n^{(d)}) = E{ℓ_c(θ_n) | O, θ_n^{(d)}},

where the required conditional expectations are E(Z_i | O, θ_n^{(d)}) and E(W_i | O, θ_n^{(d)}), with μ_i(t) = G(Λ_n(t) exp(X_iᵀβ)) as above. Under the specification of Z_i, Z_i ≥ 1 if and only if δ_{1i} = 1, and, given δ_{1i} = 1 and O, Z_i follows a zero-truncated Poisson distribution. Thus,

E(Z_i | O, θ_n^{(d)}) = δ_{1i} μ_i^{(d)}(L_i) / {1 − exp(−μ_i^{(d)}(L_i))},

where μ_i^{(d)} represents μ_i evaluated at θ_n^{(d)}. Similarly, under the specification of W_i, we have

E(W_i | O, θ_n^{(d)}) = δ_{2i} {μ_i^{(d)}(R_i) − μ_i^{(d)}(L_i)} / [1 − exp{−(μ_i^{(d)}(R_i) − μ_i^{(d)}(L_i))}].
The M-step involves maximizing Q(θ_n; θ_n^{(d)}) with respect to θ_n under the constraints γ₁ ≤ ⋯ ≤ γ_{q_n} for a fixed λ. The optimization can be easily accomplished by applying standard linearly constrained optimization routines such as the R function constrOptim, which is based on an adaptive barrier algorithm.
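The paper performs this constrained M-step with R's constrOptim; the following Python sketch mimics the same idea with scipy (a toy quadratic objective stands in for the expected complete-data log-likelihood, so the target values and the smoothing parameter are hypothetical), imposing the ordering constraints on the spline coefficients:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
target = np.sort(rng.normal(size=6))       # stand-in for an unconstrained update
D2 = np.diff(np.eye(6), n=2, axis=0)       # order-2 difference penalty matrix
lam = 0.5                                  # smoothing parameter (fixed in the M-step)

def neg_objective(gamma):
    # toy surrogate for -Q(theta; theta^(d)): fit to target plus roughness penalty
    return np.sum((gamma - target) ** 2) + lam * np.sum((D2 @ gamma) ** 2)

# ordering constraints gamma_1 <= ... <= gamma_q, as in the M-step
cons = {"type": "ineq", "fun": lambda g: np.diff(g)}
res = minimize(neg_objective, x0=np.zeros(6), method="SLSQP", constraints=cons)

gamma_hat = res.x
assert res.success and np.all(np.diff(gamma_hat) >= -1e-8)
```

Any linearly constrained optimizer works here; the essential point is that the monotonicity constraints are linear in γ, so standard routines apply directly.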
By taking advantage of the B-spline approximation, the proposed EM algorithm simplifies the alternative of Wang et al. (2016) for the PH and PO models and extends it to general transformation models.
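The closed-form E-step expectations rest on the zero-truncated Poisson identity E(Z | Z ≥ 1) = μ/(1 − e^{−μ}), which the following Python snippet verifies numerically (the value of μ is arbitrary):

```python
import math

mu = 1.7  # Poisson mean for the latent count; the value here is arbitrary

# E(Z | Z >= 1) computed directly from the truncated Poisson pmf
num = sum(k * math.exp(-mu) * mu**k / math.factorial(k) for k in range(1, 60))
truncated_mean = num / (1.0 - math.exp(-mu))

closed_form = mu / (1.0 - math.exp(-mu))   # the identity used in the E-step
assert abs(truncated_mean - closed_form) < 1e-12
```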
2.3 Nested optimization procedure
In what follows, we summarize a nested iterative optimization procedure that updates estimates of both the parameters and the smoothing parameter. The outer process updates the smoothing parameter λ, while the inner process solves the constrained penalized optimization problem for a fixed value of λ, which is achieved by applying the efficient EM algorithm discussed in Section 2.2. At this point, we identify the maximizer θ̂_n^{(λ)} of the penalized spline log-likelihood function for a fixed λ. Once the inner process is finished, we implement the generalized Fellner-Schall method (Wood and Fasiolo, 2017) to update the estimate of the smoothing parameter in the outer process. In particular, the updated estimate of λ is given by

λ_new = {tr(S⁻S)/λ − tr(H⁻S)} / (γ̂ᵀSγ̂),  (6)

where S⁻ is the generalized inverse of S = D₂ᵀD₂ and H⁻ is the generalized inverse of H, the negative Hessian matrix of the observed penalized spline log-likelihood evaluated at θ̂_n^{(λ)}. Because H − λS is non-negative definite, the numerator in (6) is guaranteed to be non-negative; see Theorem 4 of Wood and Fasiolo (2017) for further information. Essentially, formula (6) updates the estimate of the smoothing parameter each time after θ̂_n^{(λ)} becomes available. Once the updated estimate λ_new of the smoothing parameter is available, the inner process is restarted to identify θ̂_n^{(λ_new)}. The two processes iterate in turn until convergence, i.e., until the absolute difference between successive estimates of λ falls below a pre-specified tolerance. The implementation of the nested iterative algorithm is outlined as follows:
- Step 1 (inner process).
Identify θ̂_n^{(λ^{(k)})} for the current λ^{(k)} through the EM algorithm discussed in Section 2.2.
- Step 2 (outer process).
Update λ^{(k)} to λ^{(k+1)} via equation (6) and go back to Step 1 to identify θ̂_n^{(λ^{(k+1)})}. The two processes iterate until |λ^{(k+1)} − λ^{(k)}| < ε for a pre-specified tolerance ε.
When the algorithm converges, θ̂_n = (β̂, γ̂) is defined as the penalized spline estimator of θ_n. Accordingly, Λ̂_n(t) = Σ_{l=1}^{q_n} γ̂_l B_l(t) is the penalized spline estimator of Λ.
It is noted that once the smoothing parameter is specified, the regression parameters and the spline coefficients can be estimated simultaneously using existing routines. Moreover, as demonstrated via the extensive simulation studies in Section 4, the proposed algorithm usually converges in a few steps and is robust to the selection of the initial values and spline knots. Further, divergence is very rare. These findings reinforce the assertion that the penalized spline estimation approach is computationally stable and efficient.
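The nested iteration can be sketched on a toy Gaussian P-spline problem. In the sketch below, ordinary penalized least squares with unit error variance stands in for the EM inner step, and the outer step applies a Fellner-Schall-type update; the data, basis, and update are purely illustrative and may differ in detail from the paper's exact scheme:

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, knots, degree):
    n_basis = len(knots) - degree - 1
    return np.column_stack([
        BSpline(knots, np.eye(n_basis)[l], degree)(x) for l in range(n_basis)
    ])

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(size=n))
y = np.log1p(5 * x) + rng.normal(scale=0.3, size=n)   # smooth increasing truth

inner = np.quantile(x, [0.2, 0.4, 0.6, 0.8])
knots = np.r_[[0.0] * 4, inner, [1.0] * 4]
B = bspline_basis(x, knots, 3)                        # cubic B-spline design
q = B.shape[1]
D2 = np.diff(np.eye(q), n=2, axis=0)
S = D2.T @ D2
rank_S = np.linalg.matrix_rank(S)                     # = q - 2

lam = 1.0
for _ in range(200):
    H = B.T @ B + lam * S                 # negative Hessian of penalized criterion
    gamma = np.linalg.solve(H, B.T @ y)   # inner step: penalized fit for fixed lam
    # outer step: Fellner-Schall-type smoothing-parameter update
    lam_new = (rank_S / lam - np.trace(np.linalg.solve(H, S))) / (gamma @ S @ gamma)
    converged = abs(lam_new - lam) < 1e-6 * (1 + lam)
    lam = lam_new
    if converged:
        break

assert np.isfinite(lam) and lam > 0
```

The structure mirrors the algorithm above: each outer update of λ triggers a fresh inner fit, and the loop stops when successive λ values agree to a tolerance.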
2.4 Selection of knots
A sensitivity analysis displayed in Table 1 reveals that the proposed penalized procedure is robust to the selection of knots. Thus, we choose the number of inner knots to be ⌈n^{1/3}⌉, the least integer greater than or equal to n^{1/3}. Once the number of knots is fixed, the locations of the knots are determined by equally-spaced quantiles of the observation times. This empirical rule was applied in the simulation study and the real data applications.
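The empirical rule above amounts to a two-line computation. In Python (the observation times here are simulated and purely illustrative; n = 94 matches the size of the breast cosmesis data analyzed later):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
obs_times = rng.exponential(scale=2.0, size=94)   # pooled observation times (hypothetical)

n = len(obs_times)
n_knots = math.ceil(n ** (1.0 / 3.0))             # ceil(94^(1/3)) = 5 inner knots
probs = np.linspace(0, 1, n_knots + 2)[1:-1]      # interior, equally-spaced quantile levels
inner_knots = np.quantile(obs_times, probs)

assert n_knots == 5
assert np.all(np.diff(inner_knots) > 0)
```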
3 Asymptotic results
Let θ₀ = (β₀, Λ₀) and θ̂_n = (β̂_n, Λ̂_n) denote the true value and the penalized estimator of θ, respectively. Define the L₂-norm ‖Λ‖₂ = {∫ Λ(t)² dt}^{1/2} and the distance

d(θ₁, θ₂) = {‖β₁ − β₂‖² + ‖Λ₁ − Λ₂‖₂²}^{1/2}.

We establish the asymptotic properties of θ̂_n with respect to the distance d.
Write the log-likelihood of a single observation O for θ = (β, Λ) as ℓ(θ; O). The score function for β takes the form

ℓ̇_β(θ; O) = ∂ℓ(θ; O)/∂β.

Differentiating ℓ along a parametric smooth submodel s ↦ Λ_s with Λ₀ = Λ and ∂Λ_s/∂s|_{s=0} = h, we obtain the score operator for Λ,

ℓ̇_Λ(θ; O)[h] = ∂ℓ(β, Λ_s; O)/∂s|_{s=0}.

Let h* denote the least favorable direction that minimizes the distance E‖ℓ̇_β(θ₀; O) − ℓ̇_Λ(θ₀; O)[h]‖² over h, i.e., ℓ̇_Λ(θ₀; O)[h*] is the orthogonal projection of ℓ̇_β(θ₀; O) onto the closed linear span of the scores for Λ. The information matrix for β at θ₀ is then defined as

I(β₀) = E{ℓ*(θ₀; O) ℓ*(θ₀; O)ᵀ},

where ℓ*(θ₀; O) = ℓ̇_β(θ₀; O) − ℓ̇_Λ(θ₀; O)[h*] is the efficient score for β at θ₀. The following regularity conditions are sufficient to establish the asymptotic results.
A1. The true regression parameter vector β₀ is an interior point of a compact subset of R^p.
A2. The true transformation function Λ₀ is strictly increasing on [a, b], and its rth derivative satisfies a Lipschitz condition on [a, b] for some r ≥ 1.
A3. The first derivative G′ of the link function is bounded away from 0 and infinity, and the second derivative G″ is bounded, on any compact interval.
A4. (a) There exists a positive number η such that P(R − L ≥ η) = 1, and (b) the union of the supports of L and R is contained in [a, b].
A5. (a) The covariate vector X has bounded support, and (b) P(aᵀX ≠ c) > 0 for any nonzero a ∈ R^p and any constant c.
A6. The joint density of (L, R, X) is bounded away from 0 and infinity.
A7. The smoothing parameter λ_n is assumed to be of order O_p(n^{−2r/(1+2r)}).
A8. The information matrix I(β₀) is positive definite.
Theorem 1 (Consistency and rate of convergence). Under conditions A1–A7, the penalized estimator θ̂_n is consistent for θ₀ and d(θ̂_n, θ₀) = O_p(n^{−r/(1+2r)}).
Theorem 2 (Asymptotic normality and efficiency). Under conditions A1–A8, √n(β̂_n − β₀) converges in distribution to N(0, I(β₀)⁻¹).
Theorem 1 suggests that, for a choice of λ_n of the order given in condition A7, Λ̂_n attains the optimal rate of convergence in the semi-parametric setting. Theorem 2 states that the penalized estimator β̂_n achieves the information bound and hence is efficient; see Bickel et al. (1993) for more discussion of semiparametric efficiency.
As discussed in Huang and Wellner (1997) and Zhang et al. (2010), the least favorable direction h* is the solution of a Fredholm integral equation of the second kind and does not have a closed form. Therefore, it is very challenging to estimate h*, and hence the information matrix I(β₀), directly. Following the proposal of Zhang et al. (2010), by taking advantage of the orthogonality property of the least favorable direction and the B-spline approximation, we put forward a least-squares-based approach to tackle the problem. In particular, in view of the definition of h*, define

ĥ* = argmin_{h ∈ S_n} ‖ℓ̇_β(θ̂_n; O) − ℓ̇_Λ(θ̂_n; O)[h]‖_n²,

where ‖·‖_n is the usual empirical norm and S_n is an approximating B-spline space for h*. Following the arguments in Zhang et al. (2010), it can be shown that ĥ* is well defined and can be used to consistently estimate h*. It follows that

Î(β̂_n) = P_n{ℓ̂*(θ̂_n; O) ℓ̂*(θ̂_n; O)ᵀ}, with ℓ̂*(θ̂_n; O) = ℓ̇_β(θ̂_n; O) − ℓ̇_Λ(θ̂_n; O)[ĥ*],

is a consistent estimator of I(β₀), where P_n is the empirical measure. It is worthwhile to point out that Î(β̂_n) is the outer product version of the observed information for β without the penalty term. Let Î_p(β̂_n) denote the outer product version of the penalized observed information for β. The following theorem results from the consistency of Î(β̂_n) and the assumption on the smoothing parameter.
Theorem 3 (Variance estimation). Under conditions A1–A8, Î_p(β̂_n) is a consistent estimator of I(β₀).
The finite-sample performance of the proposed variance-covariance estimation method was evaluated in the Monte Carlo study and employed for the inference on regression parameters in the real application.
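The outer-product construction is simple to compute once per-subject score vectors are available. The Python sketch below uses simulated standard-normal vectors as a stand-in for the estimated efficient scores (so the true information is the identity, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 5000, 2
scores = rng.standard_normal((n, p))      # stand-in for per-subject efficient scores

# outer-product estimate of the information matrix: (1/n) * sum_i s_i s_i^T
I_hat = scores.T @ scores / n
# estimated variance-covariance matrix of the regression estimator: I_hat^{-1} / n
V_hat = np.linalg.inv(I_hat) / n

assert np.allclose(I_hat, np.eye(p), atol=0.1)   # true information is identity here
```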
4 Simulation studies

In this section we assess the proposed methodology under a variety of simulation settings and compare it with an existing approach (Wang et al., 2016) implemented in the R package ICsurv. The true failure time T was generated from the class of transformation models

F(t | X) = 1 − exp{−G_r(Λ(t) exp(Xᵀβ))},
where X₁ ~ Bernoulli(0.5) and X₂ ~ N(0, 1). To demonstrate the broad applicability of the proposed approach, we assume that the link function belongs to a class of smooth and monotone increasing functions indexed by a parameter r:

G_r(x) = log(1 + rx)/r for r > 0, and G₀(x) = x.
It is noted that the family of models presented above includes the most commonly used survival models, i.e., r = 0 or r = 1 corresponding to the PH or PO model, respectively. To evaluate the performance of the proposed methodology across a wide range of settings, we considered three simulation configurations, I–III (C1–C3), that differ in the regression parameters (β₁, β₂) and the unknown transformation function Λ. Under each configuration, three different models were considered, i.e., r = 0 (PH), r = 0.5, and r = 1 (PO). To generate interval-censored data, the number of observations for each individual was simulated as 1 plus a Poisson random variable with expectation 1, to avoid the case of zero observations, and the gap times between two consecutive observations follow an exponential distribution with mean 0.5. These specifications provide a diversity of right-censoring rates, i.e., for r = 0, 0.5, and 1, the right-censoring rates are 74%, 76%, and 78%, respectively, under C1; 41%, 46%, and 51%, respectively, under C2; and 14%, 21%, and 27%, respectively, under C3. For each configuration, 1,000 datasets were generated under each model at each of the two sample sizes considered. Cubic B-splines with the knots placed at equally-spaced percentiles were employed to model the unknown transformation function Λ.
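The data-generating scheme just described can be sketched in Python. The true Λ and β used in C1–C3 are not reproduced here, so Λ(t) = t and β = (0.5, −0.5) below are placeholders, and the second covariate is taken to be standard normal as an assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

def G_inv(y, r):
    """Inverse of the link family G_r(x) = log(1 + r x)/r (G_0(x) = x)."""
    return y if r == 0 else np.expm1(r * y) / r

def simulate_one(r=1.0, beta=(0.5, -0.5)):
    x = np.array([rng.binomial(1, 0.5), rng.standard_normal()])
    eta = np.exp(x @ np.array(beta))
    # invert F(t|x) = 1 - exp(-G_r(Lambda(t) * eta)) with Lambda(t) = t (placeholder)
    t = G_inv(rng.exponential(), r) / eta
    k = 1 + rng.poisson(1.0)                                # visits: 1 + Poisson(1)
    visits = np.cumsum(rng.exponential(scale=0.5, size=k))  # gap times Exp(mean 0.5)
    if t <= visits[0]:
        L, R, d = visits[0], visits[0], (1, 0, 0)           # left-censored
    elif t > visits[-1]:
        L, R, d = visits[-1], np.inf, (0, 0, 1)             # right-censored
    else:
        j = np.searchsorted(visits, t)                      # visits[j-1] < t <= visits[j]
        L, R, d = visits[j - 1], visits[j], (0, 1, 0)       # interval-censored
    return L, R, d

data = [simulate_one() for _ in range(500)]
assert all(sum(d) == 1 for _, _, d in data)
```

Because each subject has only a couple of visits on average, a sizable fraction of the simulated event times fall beyond the last visit, which is how the design produces the range of right-censoring rates quoted above.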
To demonstrate the robustness of the penalized method to the specification of the number of knots, a sensitivity analysis was conducted with the number of inner knots selected to be 3, 5, and 7. Table 1 presents a summary of the simulation results for the regression parameters β₁ and β₂ under C1, across the three considered transformation models (r = 0, 0.5, and 1) and the two sample sizes. The summaries of the simulation results for C2 and C3 are provided in Web Tables 1a and 1b, respectively. One observes that, under the three considered simulation settings, the estimation and inference results are almost identical across all considered numbers of knots, suggesting the robustness of the penalized spline method to the selection of the number of knots. Therefore, it is suitable in our simulation study and real data applications to choose the number of inner knots to be ⌈n^{1/3}⌉. These results also reveal that the penalized approach exhibits desirable numerical properties across all considered settings. In particular, the empirical biases are small and approach 0 as the sample size increases, providing numerical support for asymptotic consistency. Moreover, the standard deviations (SD) of the point estimates are in accordance with the averages of the estimated standard errors (ASE) derived from the proposed variance-covariance estimation method, and the corresponding empirical coverage probabilities of 95% Wald-type confidence intervals are close to the nominal level. These findings indicate that large-sample inference based on the proposed variance-covariance estimation procedure is feasible under all considered models, even with a relatively small sample size. Finally, the power of the Wald-type test for the regression parameters based on the proposed variance-covariance estimation procedure was examined. The change of power under C1 of the PO model displayed in Figure 1 suggests that the proposed procedure can be employed to accurately estimate the power of the Wald-type test, i.e., the estimated powers are symmetric around the true values and increase as the sample size or the effect size increases.
We compared the proposed methodology with the competing monotone spline-based sieve semi-parametric approach of Wang et al. (2016) under the PH model. There are several reasons to choose this approach for a direct comparison. In particular, this competing method is the most recent contribution to the existing methodology for interval-censored data and is implemented in the R package ICsurv. For purposes of comparison, cubic splines were used for both methods. For a particular data set, as discussed above, the penalized approach selects the number of knots as the cubic root of the sample size. In contrast, the competing method makes use of varying configurations of the number of knots, and the sieve estimator is identified via a model selection criterion such as Akaike's information criterion (AIC); see Wang et al. (2016) for more details. Table 2 summarizes the comparison of the estimates of the regression parameters under C1. The results of the comparisons under C2 and C3 are provided in Web Tables 2a and 2b, respectively. From this summary, the proposed penalized estimation method exhibits smaller mean squared errors and better empirical coverage probabilities than the competing approach. Further, the standard error estimates obtained from the proposed variance-covariance estimation procedure display less variability than those based on Louis's method implemented in the R package ICsurv. It was also found that the competing approach suffers from algorithm divergence and negative variance estimates, especially when the number of knots is large or the right-censoring rate is high. Therefore, it is evident that the proposed methodology outperforms the approach of Wang et al. (2016) under the PH model across all considered settings, even though the competing sieve estimator was selected as the best from multiple knot configurations.
The results for the estimation of the transformation function Λ under C1 of the PH (r = 0) and the PO (r = 1) models for all considered sample sizes are summarized in Figure 2. Included in the figure are the pointwise means and the corresponding 2.5% and 97.5% percentiles of the 1,000 estimates, together with the true function. Web Figures 2a and 2b exhibit similar results under C2 and C3, respectively. It is observed that the spline curve fits the true function very well and that the pointwise lower and upper limits of the 95% intervals are fairly close, indicating negligible bias and little variability of the spline estimates. Further, both bias and variability decrease with the sample size, which provides numerical verification of the asymptotic consistency established in Section 3.
Finally, we investigated the robustness of the proposed methodology when the assumption of PH or PO is not valid, i.e., fitting the PH model (r = 0) to data generated under the PO model and fitting the PO model (r = 1) to data generated under the PH model, under C1. As displayed in Table 3, the proposed methodology still exhibits satisfactory finite-sample performance in terms of bias, standard deviation, average of estimated standard errors, and coverage probability, i.e., the approach is robust to the specification of the link function to a certain extent. The analogous results under C2 and C3 are available in Web Tables 3a and 3b, respectively. In summary, the proposed penalized method displays desirable numerical properties such as accurate estimation and reliable inference, even for a small sample size, under a variety of simulation settings and models for interval-censored data.
5 Real data application
5.1 Breast cosmesis study
For purposes of illustration, we applied the proposed methodology to the breast cosmesis study (Beadle et al., 1984). The aim of this retrospective study was to assess the effect of adjuvant chemotherapy on the cosmetic state of early breast cancer patients who received primary radiation therapy. Two treatment regimens were assigned: 48 patients received radiation therapy plus adjuvant chemotherapy, and 46 received radiation therapy alone. Each patient was scheduled to visit the clinic every 4 to 6 months for an inspection of overall cosmetic appearance. For this analysis, the event of interest is the appearance of breast retraction, which is highly correlated with overall cosmetic deterioration. It is noted that the exact time until the appearance of breast retraction could not be observed owing to the design of the study; it was only known that the event occurred within an interval, i.e., the event time is interval-censored. Further, as discussed in Finkelstein and Wolfe (1985), the time between clinic visits was unlikely to be related to the development of cosmetic deterioration, i.e., the non-informative censoring assumption is appropriate for the analysis of this data set.
For this analysis, we were interested in comparing the rate of cosmetic deterioration between the two treatments. A cubic B-spline with inner knots placed at equally-spaced percentiles of the follow-up times was utilized to model the transformation function. The nested algorithm outlined in Section 2.3 was employed for model fitting, and the proposed variance-covariance estimation method discussed in Section 3 was applied to estimate the standard errors of the regression parameter estimators. Table 4 summarizes the results, including the regression parameter estimates and the estimated standard errors, along with the corresponding 95% confidence intervals, under the PH and PO models. Both models suggest that adjuvant chemotherapy is a significant risk factor for cosmetic deterioration. In particular, under the PH model, patients who received chemotherapy in addition to primary radiation therapy had a higher hazard of developing breast retraction than those who received primary radiation therapy alone. A similar result is observed for the PO model, i.e., chemotherapy increases the odds of breast retraction among patients who received primary radiation therapy. These results support the findings of Finkelstein and Wolfe (1985), Finkelstein (1986), and Zhang et al. (2010). The estimated transformation functions along with 95% pointwise confidence intervals under the PH and PO models are displayed in Figure 3. The lower and upper limits of the 95% pointwise confidence intervals were calculated as the 2.5% and 97.5% percentiles of 1,000 pointwise bootstrap estimates, respectively.
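The percentile bootstrap limits used for Figure 3 amount to taking pointwise quantiles across the bootstrap curves. In Python, with simulated stand-in curves (the grid and the curves here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
grid = np.linspace(0.1, 3.0, 50)
# stand-in for 1,000 bootstrap estimates of Lambda evaluated on a time grid
boot_curves = np.log(grid)[None, :] + rng.normal(scale=0.05, size=(1000, grid.size))

# pointwise 2.5% and 97.5% percentiles give the lower and upper limits
lower, upper = np.percentile(boot_curves, [2.5, 97.5], axis=0)
assert np.all(lower <= upper)
```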
5.2 Signal tandmobiel study
To further illustrate the proposed approach, we analyzed data from the Signal Tandmobiel study, a longitudinal prospective dental study conducted in Northern Belgium from 1996 to 2001. In this study, a total of 4,468 randomly selected children (2,315 boys and 2,153 girls) born in 1989 were examined annually (at most 6 times) by one of 16 trained dentists. At each clinic visit, information on the emergence of each permanent tooth (i.e., "emerged" versus "not emerged") was recorded. In addition, the health status of each erupted tooth was inspected and classified into two categories: "sound" versus "caries experience" (i.e., presence of caries, or filled or extracted for reasons of caries). Because the participants were examined on an annual basis, the caries experience data are interval-censored. It is noted that clinic visits were scheduled independently of the time of caries experience, i.e., it is reasonable to assume non-informative censoring for this study. For this analysis, we aimed to detect the association between caries experience and three potential risk factors: gender, type of education system (free school, community school, or province/council school), and the starting age of toothbrushing. In particular, we are interested in the 4 first molar teeth, whose right-censoring rates are around 25%. In our analysis, we analyzed the health status of one of the maxillary first molars (i.e., tooth 26; see p. 29 of Bogaerts et al. (2017) for more information on tooth identification numbers) by using the following model:

F(t | X) = 1 − exp{−G(Λ(t) exp(β₁X₁ + β₂X₂ + β₃X₃ + β₄X₄))},

where G is a known link function. In the above specification, X₁ = 1 for a boy and X₁ = 0 for a girl; X₂ = 1 if the child attended a community school (X₂ = 0 otherwise); X₃ = 1 if the child attended a province/council school (X₃ = 0 otherwise); and X₄ indicates the starting age of toothbrushing (as reported by the parents), i.e., the nearest integer greater than the true age. The unknown transformation function Λ was approximated by cubic B-splines. The selection of knots and the smoothing parameter follows the discussion in Sections 2.3 and 2.4. Model fitting was accomplished through the nested algorithm summarized in Section 2.3, and inference on the regression parameters was based on the proposed variance-covariance approach summarized in Section 3. Presented in Table 5 are the regression parameter estimates and estimated standard errors, along with the corresponding 95% confidence intervals, under the PH and PO models. Both models suggest that starting to brush the teeth at a later age is a significant risk factor for caries experience, while gender and school attendance are not. In particular, participants who started brushing their teeth at a later age experienced significantly earlier caries in the erupted tooth. Further, the estimates of the transformation function under the PH and PO models, along with 95% pointwise confidence intervals, are shown in Figure 4. The lower and upper limits of the 95% pointwise confidence intervals are determined by the 2.5% and 97.5% percentiles of 1,000 bootstrap estimates.
6 Summary and discussion
In this work a computationally efficient penalized approach has been developed to analyze interval-censored data under flexible semi-parametric linear transformation models. The proposed nested iterative algorithm is straightforward to implement via existing software. The proposed penalized estimation method exhibits desirable numerical properties, as validated through extensive Monte Carlo studies. Further, a simple and consistent variance-covariance estimation approach was developed to provide reliable inference for the regression parameters. In addition to these appealing numerical properties, the nonparametric estimator achieves the optimal rate of convergence, and the regression parameter estimators are shown to be asymptotically normal and efficient.
Building on the success of the proposed penalized approach, future work will target extending the methodology to a partially linear additive transformation model for interval-censored data and to a linear transformation model for interval-censored data with a high right-censoring rate. Another desirable direction for future research is the development of an approach to identifying the link function. One option is to apply single-index methodology to transformation models, i.e., to treat the monotone link function as unknown and model it via penalized monotone splines. Proceeding in this way would allow us to fully capture the nonparametric feature of the link function. Owing to the complex nature of interval-censored data and single-index models, this is an extremely challenging topic: substantial effort will be required to address the identifiability of the model and to develop the numerical algorithm and the large-sample properties of the penalized estimators.
- Beadle, G. F., Silver, B., Botnick, L., Hellman, S., and Harris, J. R. (1984). Cosmetic results following primary radiation therapy for early breast cancer. Cancer 54, 2911–2918.
- Betensky, R., Lindsey, J., Ryan, L., and Wand, M. (2002). A local likelihood proportional hazards model for interval-censored data. Statistics in Medicine 21, 263–275.
- Bogaerts, K., Komarek, A., and Lesaffre, E. (2017). Survival Analysis with Interval-Censored Data: A Practical Approach with Examples in R, SAS, and BUGS. Chapman and Hall/CRC, Boca Raton.
- Eilers, P. H. and Marx, B. D. (1996). Flexible smoothing with B-splines and penalties. Statistical Science 11, 89–102.
- Finkelstein, D. M. and Wolfe, R. A. (1985). A semiparametric model for regression analysis of interval-censored failure time data. Biometrics 41, 933–945.
- Finkelstein, D. M. (1986). A proportional hazards model for interval-censored failure time data. Biometrics 42, 845–854.
- Huang, J. and Wellner, J. A. (1997). Interval censored survival data: a review of recent progress. In Proceedings of the First Seattle Symposium in Biostatistics, pp. 123–169. Springer, New York.
- Kosorok, M. R. (2008). Introduction to Empirical Processes and Semiparametric Inference. Springer, New York.
- Lu, M. and Li, C. S. (2017). Penalized estimation for proportional hazards models with current status data. Statistics in Medicine 36, 4893–4907.
- Ma, S. and Kosorok, M. R. (2005). Penalized log-likelihood estimation for partly linear transformation models with current status data. The Annals of Statistics 33, 2256–2290.
- Pan, W. (1999). Extending the iterative convex minorant algorithm to the Cox model for interval-censored data. Journal of Computational and Graphical Statistics 8, 109–120.
- Satten, G. A. (1996). Rank-based inference in the proportional hazards model for interval censored data. Biometrika 83, 355–370.
- Schumaker, L. (1981). Spline Functions: Basic Theory. Wiley, New York.
- Sun, J. (2006). The Statistical Analysis of Interval-Censored Failure Time Data. Springer, New York.
- van de Geer, S. A. (2000). Empirical Processes in M-Estimation. Cambridge University Press, Cambridge.
- van der Vaart, A. W. (2000). Asymptotic Statistics. Cambridge University Press, Cambridge.
- van der Vaart, A. W. and Wellner, J. A. (1996). Weak Convergence and Empirical Processes. Springer, New York.
- Wang, L., McMahan, C. S., Hudgens, M. G., and Qureshi, Z. P. (2016). A flexible, computationally efficient method for fitting the proportional hazards model to interval-censored data. Biometrics 72, 222–231.
- Wood, S. N. (2017). Generalized Additive Models: An Introduction with R (2nd Edition). Chapman and Hall/CRC, Boca Raton.
- Wood, S. N. and Fasiolo, M. (2017). A generalized Fellner-Schall method for smoothing parameter optimization with application to Tweedie location, scale and shape models. Biometrics 73, 1071–1081.
- Zeng, D., Mao, L., and Lin, D. Y. (2016). Maximum likelihood estimation for semiparametric transformation models with interval-censored data. Biometrika 103, 253–271.
- Zhang, Y., Hua, L., and Huang, J. (2010). A spline-based semiparametric maximum likelihood estimation method for the Cox model with interval-censored data. Scandinavian Journal of Statistics 37, 338–354.
- Zhang, Z. and Sun, J. (2010). Interval censoring. Statistical Methods in Medical Research 19, 53–70.
- Zhang, Z., Sun, L., Zhao, X., and Sun, J. (2005). Regression analysis of interval-censored failure time data with linear transformation models. Canadian Journal of Statistics 33, 61–70.
- Zhang, Z. and Zhao, Y. (2013). Empirical likelihood for linear transformation models with interval-censored failure time data. Journal of Multivariate Analysis 116, 398–409.
- Zhou, J., Zhang, J., and Lu, W. (2017). An expectation maximization algorithm for fitting the generalized odds-rate model to interval censored data. Statistics in Medicine 36, 1157–1171.
- Zhou, J., Zhang, J., and Lu, W. (2018). Computationally efficient estimation for the generalized odds rate mixture cure model with interval-censored data. Journal of Computational and Graphical Statistics 27, 48–58.
Let and be the distributions of for and , respectively, with respect to a -finite measure , and let and be the corresponding densities. The empirical process evaluated at a measurable function is defined as , where is the expectation of under the empirical measure . Accordingly, define for a class of measurable functions . Let and denote the covering number and the bracketing number, with for the -metric with ( reduces to the supremum norm for ). In the sequel, the expressions and represent and , respectively, up to a constant. We apply Theorem 25.81 of van der Vaart (2000) to derive the consistency and the rate of convergence of . Define the criterion function in the above theorem as
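As a concrete illustration of the empirical-process notation (not part of the formal development), the sketch below computes the empirical expectation of a fixed function and the corresponding centered, root-n-scaled empirical process; the distribution and the function are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_process(f, x, mean_f):
    """Return P_n f (the empirical mean of f) and G_n f = sqrt(n) (P_n - P) f."""
    n = len(x)
    pn_f = np.mean(f(x))                 # expectation under the empirical measure
    gn_f = np.sqrt(n) * (pn_f - mean_f)  # centered and scaled empirical process
    return pn_f, gn_f

# Illustrative example: f(x) = x^2 with X ~ Uniform(0, 1), so P f = 1/3.
x = rng.uniform(size=100_000)
pn, gn = empirical_process(lambda t: t**2, x, mean_f=1.0 / 3.0)
print(pn, gn)  # P_n f is close to 1/3; G_n f stays bounded in probability
```

By the central limit theorem, G_n f converges in distribution to a mean-zero normal with variance P f^2 - (P f)^2, which is what makes the sqrt(n) scaling the natural one in the notation above.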
7.2 Technical lemmas
Lemma 1. If conditions A1–A7 hold, then
for in a neighborhood of .
Lemma 2. If conditions A1–A7 hold, then, for a sufficiently small , whenever .
Lemma 3. If conditions A1–A7 hold, then, for ,
Suppose that the following conditions hold:
7.3 Proof of Lemma 1
Let and . The density function is bounded away from a sufficiently small positive constant and from infinity and, hence, is also bounded away from a positive constant and from infinity, for varying in a small neighborhood of . It follows that and are bounded. Then, by the mean value theorem, is bounded below by , up to a constant, and, hence, . Let and denote and evaluated at , respectively. By some straightforward algebra, we have
where for in a neighborhood of 1. It follows that
where and . The last inequality holds because the first derivative of the link function is bounded away from 0 and infinity. Invoking the law of total expectation and the Cauchy-Schwarz inequality, and then applying the orthogonality property of conditional expectations, we conclude that there exists such that . It then follows from Lemma 25.86 of van der Vaart (2000) that
The above inequality also holds for by replacing by . Lemma 1 follows.
7.4 Proof of Lemma 2
For any , by a Taylor expansion, for ,
where is a th degree polynomial and for by the Cauchy-Schwarz inequality. To simplify the presentation, assume , i.e., . Write for and with , for . Under the assumption , we have for . Then
The non-singularity of the matrix implies that and, hence, because is defined on a bounded set. The lemma follows from the inequality .
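The non-singularity argument used above can be illustrated numerically: the Gram matrix of a B-spline basis on a bounded interval has a strictly positive smallest eigenvalue, so the L2 norm of a spline controls the size of its coefficients. A minimal sketch follows; the degree, knots, and integration grid are illustrative choices, not those of the paper.

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline basis on [0, 1] with illustrative interior knots.
degree = 3
interior = np.array([0.25, 0.5, 0.75])
knots = np.concatenate([np.zeros(degree + 1), interior, np.ones(degree + 1)])
n_basis = len(knots) - degree - 1  # 7 basis functions here

# Midpoint grid for numerical integration on [0, 1].
m = 2000
grid = (np.arange(m) + 0.5) / m
B = np.column_stack([
    np.nan_to_num(BSpline.basis_element(knots[j:j + degree + 2], extrapolate=False)(grid))
    for j in range(n_basis)
])  # nan outside a basis element's support is replaced by its true value, 0

# Gram matrix G_{jk} = int_0^1 B_j(t) B_k(t) dt, via the midpoint rule.
G = (B.T @ B) / m
eigmin = np.linalg.eigvalsh(G).min()
print(eigmin > 0)  # True: the basis is linearly independent on [0, 1]
```

A strictly positive smallest eigenvalue gives exactly the coefficient bound used in the proof: the squared L2 norm of a spline with coefficient vector c is c'Gc, which is at least eigmin times the squared Euclidean norm of c.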
7.5 Proof of Lemma 3
As shown in Lemma 2, for any , can be written as , where is a polynomial of degree and satisfies and . Let be a class of functions with and . According to Lemma 2.4 of van de Geer (2000), the entropy number is of the order . For a fixed function , because is one-to-one, by Example 3.7.4d of van de Geer (2000) and Lemma 9.7 of Kosorok (2008), the class of bounded functions is a Vapnik-Chervonenkis subgraph class with index bounded by . Lemma 19.15 of van der Vaart (2000) then applies and shows that the uniform covering number of the above class for the -norm is of order . Because implies and , it follows that the uniform covering number of the class of functions with is of order and, hence,
Because is essentially a transformation of