In the study of a disease history, the time from the onset of an initiating event to the disease event of interest (or failure) is usually of interest in epidemiological and biomedical research. One common source of such data is the prevalent sampling design, under which only individuals who have experienced the initiating event but not the failure event before the recruitment time are sampled. Under this sampling scheme, individuals who experience the failure event before the recruitment time cannot be observed. Such a phenomenon, caused by delayed entry, is called left-truncation and tends to produce a biased sample. Meanwhile, individuals who are recruited into the study may drop out or may not have experienced the failure event by the end of the study; this is called right-censoring. To be specific, let $\xi$ be the calendar time of recruitment, and let $u$ and $v$ denote the calendar times of the initiating event (or disease incidence) and the failure event, respectively, where $u \leq \xi$ and $u \leq v$. Let $T = v - u$ be the failure time, and let $A = \xi - u$ denote the truncation time. Assume that $T$ and $A$ are independent. Let $X$ be the associated covariate of dimension $p$. Let $f(t \mid x)$ and $S(t \mid x)$ be the density function and the survivor function of the failure time $T$, respectively. Let $A^*$, $T^*$, and $X^*$ represent, respectively, the truncation time, the failure time, and the covariates for those subjects who are recruited into the study. That is,
$(A^*, T^*, X^*)$ has the same joint distribution as $(A, T, X)$ given $T \geq A$. If $T < A$, then such an individual is not included in the study and contributes no information.
Specifically, in the case of a stable disease, the occurrence of disease onset follows a stationary Poisson process. To see this, we first impose two assumptions (e.g., Huang et al. 2012): (a) the variables $(T, X)$ are independent of the calendar time of disease incidence $u$, and (b) disease incidence occurs over calendar time at a constant rate. Based on these two assumptions, the joint density function of $(A^*, T^*)$ given $X^* = x$ is given by (Lancaster 1990, Chapter 3)

$$f_{A^*, T^*}(a, t \mid x) = \frac{f(t \mid x)}{\mu(x)}, \quad 0 \leq a \leq t, \qquad (1)$$

where $\mu(x) = \int_0^\infty t f(t \mid x)\, dt$. Moreover, by (1), the failure time $T^*$ given $X^* = x$ has the length-biased density function $t f(t \mid x) / \mu(x)$. It implies that the truncation time $A^*$, conditional on $T^* = t$, follows the uniform distribution on $(0, t)$, and the survival time in the prevalent cohort has a length-biased sampling distribution, since the probability of observing a survival time is proportional to its length.
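To make the length-biased mechanism concrete, the following sketch simulates prevalent sampling by rejection; the Uniform(0, 1) failure-time distribution and the recruitment window are illustrative assumptions, not part of the model above. Under this choice, the accepted failure times have density $2t$ on $(0, 1)$ (mean $2/3$ rather than $1/2$), and $A^*/T^*$ behaves like a Uniform(0, 1) draw.

```python
import numpy as np

rng = np.random.default_rng(2024)

def prevalent_sample(n_draws, tau=1.0):
    """Rejection scheme mimicking prevalent sampling:
    keep (A, T) only when the subject has not failed before recruitment."""
    T = rng.uniform(0.0, 1.0, size=n_draws)   # underlying failure times, f = Uniform(0, 1)
    A = rng.uniform(0.0, tau, size=n_draws)   # truncation times under stationary incidence
    keep = T >= A                             # left-truncation: earlier failures are unseen
    return A[keep], T[keep]

A_obs, T_obs = prevalent_sample(400_000)
# Observed T has length-biased density 2t on (0, 1): mean 2/3 instead of 1/2.
print(T_obs.mean())
# A | T = t is Uniform(0, t), so the ratio A/T should look Uniform(0, 1).
print((A_obs / T_obs).mean())
```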
In addition, we let $C$ denote the censoring time for a recruited subject. Let $Y = \min(T^*, C)$ be the observed survival time and let $\delta = I(T^* \leq C)$ be the indicator of a failure event. Figure 1 gives an illustration of the relationship among these variables.
In the literature, several methods for the analysis of biased samples have been developed. To name a few, Huang and Qin (2013) and Chen (2018+) developed estimation procedures for the additive hazards model for general left-truncated data where the distribution of the truncation time is unspecified. Qin and Shen (2010) and Huang et al. (2012) studied the Cox model under length-biased sampling. In addition, the transformation model is also widely used in survival analysis; it is formulated as
$$H(T) = -\beta^{\top} X + \varepsilon, \qquad (2)$$

where $H(\cdot)$ is an unknown increasing function, $\varepsilon$ is a random variable with a known distribution, and $\beta$ is the vector of parameters of main interest. The transformation model encompasses a broad class of frequently used models in survival analysis. To be more specific, when $\varepsilon$ has an extreme value distribution, then $T$ follows the proportional hazards (PH) model, whereas when $\varepsilon$ has a logistic distribution, then $T$ follows the proportional odds (PO) model. When biased samples occur, Shen et al. (2009) and Wang and Wang (2015) proposed valid methods to estimate the parameter of main interest under length-biased sampling. In this paper, we mainly focus on the development of an estimation method for the transformation model.
On the other hand, another important feature in survival analysis is measurement error in covariates. Assume that $X$ is an unobserved covariate and $W$ is a surrogate version of $X$. The classical additive model is frequently used, which is given by

$$W = X + e, \qquad (3)$$

where $e$ follows the normal distribution with mean zero and covariance matrix $\Sigma_e$. Assume that $e$ and $X$ are independent and that $\Sigma_e$ is known, for ease of discussion.
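The bias induced by model (3) is easy to see in a toy setting. The sketch below is an illustration with ordinary linear regression, not the survival model of this paper; it shows the classical attenuation of a naive slope estimate when the surrogate $W$ replaces $X$.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
beta = 1.0
sigma_e2 = 1.0                       # known error variance, as assumed above

X = rng.normal(0.0, 1.0, n)          # true but unobserved covariate
e = rng.normal(0.0, np.sqrt(sigma_e2), n)
W = X + e                            # surrogate from the classical additive model
y = beta * X + rng.normal(0.0, 1.0, n)

slope_true = np.cov(X, y)[0, 1] / np.var(X)      # OLS slope using the true X
slope_naive = np.cov(W, y)[0, 1] / np.var(W)     # naive OLS slope using W
# Classical attenuation: E[slope_naive] = beta * var(X) / (var(X) + sigma_e2), i.e. 0.5 here.
print(round(slope_true, 2), round(slope_naive, 2))
```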
In the absence of left-truncation or length-biased sampling, there are many contributions on correcting for mismeasurement. For example, Nakamura (1992) proposed an approximately corrected partial likelihood method for the Cox model. Yi and Lawless (2007) developed a conditional expectation approach to correct for the error effect in the Cox model. More discussion can be found in Yi (2017, Chapter 3).
However, little work is available for settings where these two features occur simultaneously. Chen and Yi (2018) proposed a corrected pseudo-likelihood estimation procedure for the Cox model subject to left-truncated and right-censored data with covariate measurement error. However, other survival models, such as the transformation model, have not been fully explored when these two complex features occur in the data. In this paper, our main purpose is to develop a valid procedure to estimate the parameters of the transformation model under length-biased sampling and covariate measurement error.
Our work is partially motivated by the Worcester Heart Attack Study (WHAS500) data (Hosmer et al. 2008), which involve biased and incomplete observations. Basically, three types of times are recorded: the time of hospital admission, the time of hospital discharge, and the time of the last follow-up (which is either the death or censoring time). The total follow-up length is defined as the time between hospital admission and the last follow-up, and the hospital stay is defined as the time between hospital admission and hospital discharge. Data can only be collected for those individuals whose total follow-up length is larger than the hospital stay, creating a biased sample (e.g., Kalbfleisch and Prentice 2002, Section 1.3; Lawless 2003, Section 2.4). It is of interest to study how risk factors are associated with survival times after patients are discharged from the hospital. In addition, 'blood pressure' and 'body mass index' are variables in the data set, and they may be measured with error. To conduct sensible analyses, it is imperative to account for possible measurement error effects induced by error-prone covariates.
The remainder of this paper is organized as follows. In Section 2, we first present the proposed method to correct for the error effects and derive the estimators; we then develop the theoretical results for the proposed method. Numerical results, including simulation studies and a real data analysis, are provided in Sections 3 and 4, respectively. Finally, we conclude the paper with discussions in Section 5.
2 Main Results
In this section, we propose a method to deal with measurement error in covariates and then derive the estimators of $\beta$ and $H(\cdot)$. The theoretical results for these estimators are also established in this section.
2.1 Construction of The Estimating Function
In this section, we review the method proposed by Wang and Wang (2015), assuming that the covariate $X$ is observed without mismeasurement.
Suppose we have a sample of $n$ subjects, where for $i = 1, \ldots, n$, $(A_i, T_i, X_i, C_i)$ has the same distribution as $(A^*, T^*, X^*, C)$. Let $Y_i = \min(T_i, C_i)$ and $\delta_i = I(T_i \leq C_i)$ for $i = 1, \ldots, n$. Define
for $i = 1, \ldots, n$, where $\lambda(t \mid x)$ is the conditional hazard function of $T^*$ given $X^* = x$ and $S_C(t)$ is the survivor function of the censoring variable $C$. It can be shown that (4) is a square-integrable martingale (e.g., Andersen et al. 1993). Let $\hat{S}_C$ denote the Kaplan-Meier estimator of $S_C$. Therefore, based on (4) and counting process theory (e.g., Andersen et al. 1993), we can construct the following estimating equations
Consequently, let $\hat{\beta}$ denote the solution of the estimating equations; it is the estimator of $\beta$ based on the true covariate $X$. Moreover, let $\hat{H}(\cdot)$ denote the corresponding estimator of $H(\cdot)$.
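The Kaplan-Meier estimator appearing in the construction above can be sketched as a minimal product-limit implementation; the data below are toy values for illustration. To estimate the survivor function of the censoring time rather than the failure time, one passes the flipped indicator $1 - \delta$ as the event indicator.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survivor estimate S(t) at each distinct event time.

    times  : observed times Y_i = min(T_i, C_i)
    events : 1 if the time is an event, 0 if it is censored
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    t_uniq = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in t_uniq:
        at_risk = np.sum(times >= t)              # risk-set size just before t
        d = np.sum((times == t) & (events == 1))  # events occurring at t
        s *= 1.0 - d / at_risk                    # product-limit update
        surv.append(s)
    return t_uniq, np.array(surv)

# Toy data; to estimate S_C instead, pass 1 - delta as the event indicator.
y = [2.0, 3.0, 3.0, 5.0, 8.0]
delta = [1, 1, 0, 1, 0]
t, s = kaplan_meier(y, delta)
print(t, s)
```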
Based on the true covariates $X_i$, we have the following proposition, which is given in Theorem 1 of Wang and Wang (2015).
Let $\beta_0$ be the true parameter. Under the regularity conditions in Wang and Wang (2015), $\hat{\beta}$ is a consistent estimator, and $\sqrt{n}(\hat{\beta} - \beta_0)$ converges weakly to a normal distribution with mean zero and a variance-covariance matrix; an analogous weak convergence result holds for the estimator of $H(\cdot)$. The exact forms of the limiting variance-covariance matrices can be found in Wang and Wang (2015).
2.2 Corrected Estimating Function
It is well known that the naive estimator, obtained by directly replacing $X$ with the surrogate $W$, can incur substantial bias (e.g., Carroll et al. 2006; Yi 2017). To correct for mismeasurement and reduce the bias of the estimator, we employ the simulation-extrapolation (SIMEX) method (Cook and Stefanski 1994). The proposed procedure consists of the following stages:
- Stage 1
Let $B$ be a given positive integer and let $\{\zeta_1, \ldots, \zeta_M\}$ be a sequence of pre-specified values with $0 = \zeta_1 < \zeta_2 < \cdots < \zeta_M$, where $M$ is a positive integer and $\zeta_M$ is a pre-specified positive number such as 2.

For a given subject $i$ and $b = 1, \ldots, B$, we generate $e_{b,i}$ from $N(0, \Sigma_e)$. Then, for the observed vector of covariates $W_i$, we define $W_i(b, \zeta)$ as

$$W_i(b, \zeta) = W_i + \sqrt{\zeta}\, e_{b,i}$$

for every $\zeta \in \{\zeta_1, \ldots, \zeta_M\}$. Therefore, the conditional distribution of $W_i(b, \zeta)$ given $X_i$ is $N(X_i, (1 + \zeta) \Sigma_e)$.
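The remeasurement step can be checked numerically. In this sketch (scalar covariate, illustrative values), the total error variance of the remeasured surrogate around the true covariate grows as $(1 + \zeta)\Sigma_e$:

```python
import numpy as np

rng = np.random.default_rng(11)
n, sigma_e = 100_000, 0.5            # scalar error standard deviation, assumed known
X = rng.normal(1.0, 1.0, n)          # true covariate (unobserved in practice)
W = X + rng.normal(0.0, sigma_e, n)  # observed surrogate

def remeasure(W, zeta, rng):
    """Stage-1 SIMEX remeasurement: W(b, zeta) = W + sqrt(zeta) * e_b."""
    return W + np.sqrt(zeta) * rng.normal(0.0, sigma_e, W.shape)

for zeta in [0.0, 0.5, 1.0, 2.0]:
    Wz = remeasure(W, zeta, rng)
    # var(Wz - X) should be close to (1 + zeta) * sigma_e**2
    print(zeta, round(np.var(Wz - X), 3))
```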
- Stage 2
Replacing $X_i$ with $W_i(b, \zeta)$ in the estimating equations of Section 2.1, we solve them for every $b = 1, \ldots, B$ and $\zeta \in \{\zeta_1, \ldots, \zeta_M\}$. Let $\hat{\beta}(b, \zeta)$ denote the resulting solution. Moreover, we define

$$\hat{\beta}(\zeta) = \frac{1}{B} \sum_{b=1}^{B} \hat{\beta}(b, \zeta).$$
- Stage 3
By (13), we have the sequence $\{(\zeta, \hat{\beta}(\zeta)) : \zeta \in \{\zeta_1, \ldots, \zeta_M\}\}$. Then we fit a regression model to this sequence,

$$\hat{\beta}(\zeta) = g(\zeta, \Gamma) + \text{noise},$$

where $g(\cdot, \cdot)$ is a user-specified regression function, $\Gamma$ is the associated parameter, and the noise term has mean zero. The parameter $\Gamma$ can be estimated by the least squares method, and we let $\hat{\Gamma}$ denote the resulting estimate of $\Gamma$.
Finally, we calculate the predicted value

$$\hat{\beta}_{\text{SIMEX}} = g(-1, \hat{\Gamma})$$

and take $\hat{\beta}_{\text{SIMEX}}$ as the SIMEX estimator of $\beta$.
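The full simulation-extrapolation cycle can be illustrated on a toy linear model; here, ordinary least squares stands in for the paper's estimating equations, and a quadratic extrapolant is evaluated at $\zeta = -1$. The corrected slope moves from the naive value toward the truth, although the quadratic extrapolant removes only part of the bias.

```python
import numpy as np

rng = np.random.default_rng(3)
n, B = 50_000, 20
beta, sigma_e = 1.0, 1.0
X = rng.normal(0.0, 1.0, n)                  # true covariate
W = X + rng.normal(0.0, sigma_e, n)          # surrogate
y = beta * X + rng.normal(0.0, 1.0, n)

def ols_slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w)

zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for z in zetas:
    # Stages 1-2: remeasure B times and average the resulting estimates.
    est = [ols_slope(W + np.sqrt(z) * rng.normal(0.0, sigma_e, n), y)
           for _ in range(B)]
    slopes.append(np.mean(est))

# Stage 3: quadratic extrapolation of slope(zeta) back to zeta = -1.
coef = np.polyfit(zetas, slopes, deg=2)
simex_est = np.polyval(coef, -1.0)
print(round(slopes[0], 3), round(simex_est, 3))  # naive vs SIMEX-corrected
```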
- Stage 4
Furthermore, we can also derive the estimator of the unknown function $H(\cdot)$. To do this, we first replace $X_i$ by $W_i(b, \zeta)$ and $\beta$ by $\hat{\beta}(b, \zeta)$ in the estimator of $H(\cdot)$, which gives $\hat{H}(\cdot; b, \zeta)$. For every $\zeta$ and $t$, taking the average with respect to $b$ gives $\hat{H}(t; \zeta) = B^{-1} \sum_{b=1}^{B} \hat{H}(t; b, \zeta)$. Finally, similar to Stage 3 above, fitting a regression model to the sequence $\{(\zeta, \hat{H}(t; \zeta))\}$ and taking the predicted value at $\zeta = -1$ yields the final estimator, denoted $\hat{H}_{\text{SIMEX}}(t)$.
2.3 Theoretical Results
In this section, we present the consistency and the asymptotic distributions of the proposed estimators of $\beta$ and $H(\cdot)$; the proofs are available in Appendix B.
Let $\beta_0$ be the true parameter and let $H_0(\cdot)$ denote the true increasing function. Under the regularity conditions in Appendix A, the proposed estimators have the following properties: they are consistent, $\sqrt{n}(\hat{\beta}_{\text{SIMEX}} - \beta_0)$ converges weakly to a normal distribution with mean zero, and the normalized estimator of $H_0(\cdot)$ converges weakly to a Gaussian process with mean zero and a covariance function, where the exact formulations of the asymptotic variance and the covariance function are given in Appendix B.
3 Numerical Study
3.1 Simulation Setup
We examine settings where $\varepsilon$ is generated from the extreme value distribution and from the logistic distribution, and the truncation time $A$ is generated from a uniform distribution. Let $\beta = (\beta_1, \beta_2)^{\top}$ denote a two-dimensional vector of parameters, and let $\beta_0$ denote the vector of true parameters. We consider a scenario where the covariates $X = (X_1, X_2)^{\top}$ are generated from a bivariate normal distribution with mean zero and a given variance-covariance matrix. Given $A$, $X$, and $\varepsilon$, the failure time is generated from model (2).
Based on our two settings of $\varepsilon$, the failure time follows the PH model and the PO model, respectively. The observed data are then obtained by conditioning on $T \geq A$, and we repeat these steps until we obtain a sample of the required size $n$. For the measurement error process, we consider model (3) with error $e \sim N(0, \Sigma_e)$, where $\Sigma_e$ is a diagonal matrix whose common diagonal element is taken to be one of three increasing values, reflecting increasing degrees of measurement error.
We consider two censoring rates, namely 25% and 50%, and let the censoring time be generated from a uniform distribution whose upper limit is determined by the given censoring rate. Consequently, $Y$ and $\delta$ are determined by $T$ and $C$. In implementing the proposed method, we partition the interval $[0, \zeta_M]$ into subintervals of equal width and let the resulting cutpoints be the values of $\zeta$. We take the regression function to be a quadratic polynomial, which is widely used in practice (e.g., Cook and Stefanski 1994; Carroll et al. 2006). Finally, 1000 simulations are run for each parameter setting.
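For reference, failure times satisfying transformation model (2) can be generated as in the following sketch, where the choice $H(t) = \log t$ and the parameter values are illustrative placeholders rather than the settings used in our study. With a standard extreme value error, $T$ given $x$ is exponential with rate $e^{\beta^{\top} x}$, which gives a simple check.

```python
import numpy as np

rng = np.random.default_rng(5)

def gen_failure(x, beta, dist, rng):
    """Generate T from H(T) = -beta'x + eps with the illustrative H(t) = log(t)."""
    u = rng.uniform(size=len(x))
    if dist == "extreme":            # eps ~ standard extreme value -> PH model
        eps = np.log(-np.log(u))
    else:                            # eps ~ standard logistic -> PO model
        eps = np.log(u / (1.0 - u))
    return np.exp(-x @ beta + eps)

beta = np.array([1.0, -0.5])         # placeholder true values
n = 200_000
X = rng.normal(0.0, 1.0, size=(n, 2))
T = gen_failure(X, beta, "extreme", rng)
# Under the PH specification above, T | x is exponential with rate exp(beta'x),
# so E[T * exp(beta'X)] = 1.
print(round(np.mean(T * np.exp(X @ beta)), 2))
```

In a full simulation, one would then draw truncation times, keep subjects with $T \geq A$, and impose censoring as described above.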
3.2 Simulation Results
We mainly examine the three estimators discussed in Sections 2.1 and 2.2, labeled Wang-Wang (based on the true covariate $X$), Naive (directly using the surrogate $W$), and Chen (the proposed estimator). We report the biases of the estimates (Bias), the empirical variances (Var), the mean squared errors (MSE), and the coverage probabilities (CP) of these three estimators under measurement error model (3). The results are reported in Table 1.
First, the censoring rate and the degree of measurement error have a noticeable impact on each estimation method. As expected, biases and variance estimates increase as the censoring rate increases. When the degree of measurement error increases, the biases of both the naive and the proposed estimators increase, and the impact of the measurement error degree appears more pronounced for the naive estimator.
Within a setting with a given censoring rate and measurement error degree, the naive method and the proposed method perform differently. When measurement error occurs, the proposed method performs better than the naive method. The naive method produces considerable finite-sample biases, with coverage rates of 95% confidence intervals departing significantly from the nominal level. The proposed method produces satisfactory estimates with small finite-sample biases and reasonable coverage rates of 95% confidence intervals. Compared to the variance estimates produced by the naive approach, the proposed method, which accounts for measurement error effects, yields larger variance estimates; this is the price paid to remove the bias of the point estimator, a phenomenon typical in the measurement error literature. However, the mean squared errors produced by the proposed method tend to be much smaller than those obtained from the naive method. Finally, we also present the numerical results of the estimator computed with the true covariate $X$. In general, it gives the smallest bias and is an efficient estimator with small variance. This result also verifies the validity of the method proposed by Wang and Wang (2015).
[Table 1: simulation results by model, censoring rate (cr), and method, for the estimators of $\beta_1$ and $\beta_2$.]
4 Data Analysis
In this section, we apply the proposed method to analyze data arising from the Worcester Heart Attack Study (WHAS500), described in Section 1. As discussed by Hosmer et al. (2008), the survival time is defined as the time since a subject was admitted to the hospital. We are interested in studying the survival times of patients who were discharged alive from the hospital. Hence, a selection criterion was imposed: only those subjects who were discharged alive were eligible for inclusion in the analysis. That is, individuals were excluded if they died before being discharged from the hospital, and hence a biased sample occurs. With this criterion, a sample of size 461 was available. In this data set, the censoring rate is 61.8%. To be more specific, the total length of follow-up (lenfol) is the survival time (i.e., $Y$), the length of hospital stay (los) is the truncation time (i.e., $A$), and the vital status at last follow-up (fstat) is the failure indicator $\delta$. In our analysis, the covariates include the body mass index (BMI) and the blood pressure (BP) of a patient. Supposing that BMI and BP are subject to measurement error, we let $X = (X_1, X_2)^{\top}$, where $X_1$ and $X_2$ denote BMI and BP, respectively, and consider measurement error model (3). Let $\beta = (\beta_1, \beta_2)^{\top}$, where $\beta_j$ is the parameter associated with $X_j$ for $j = 1, 2$. We fit the regression model (2) with $\varepsilon$ following the extreme value distribution and the logistic distribution, respectively.
In this data set, there is no additional data source, such as a validation subsample or replicated measurements, which is often required to characterize the measurement error process (e.g., Carroll et al. 2006; Yi 2017). To get around this and to understand the impact of measurement error on estimation, we carry out sensitivity analyses. Specifically, let $\hat{\Sigma}_W$ be the sample covariance matrix of the $W_i$, and for the sensitivity analyses we consider $\Lambda \hat{\Sigma}_W$ to be the covariance matrix for measurement error model (3), where $\Lambda$ is a diagonal matrix with diagonal elements being a common value $\eta$.
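The construction of the sensitivity-analysis covariance can be sketched as follows; the data, the function name, and the three $\eta$ levels are hypothetical illustrations, not values taken from the study.

```python
import numpy as np

rng = np.random.default_rng(9)
W = rng.normal(size=(461, 2))          # stand-in for the observed (BMI, BP) pairs
Sigma_W = np.cov(W, rowvar=False)      # sample covariance of the surrogates

def error_cov(eta, Sigma_W):
    """Assumed error covariance for the sensitivity analysis: eta * Sigma_W."""
    return eta * Sigma_W

for eta in [0.15, 0.5, 0.75]:          # illustrative increasing degrees of error
    Sigma_e = error_cov(eta, Sigma_W)
    print(eta, np.round(np.diag(Sigma_e), 3))
```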
In Table 2, we report the point estimates (EST), the standard errors, and the p-values of the estimators for three increasing values of the common diagonal element $\eta$, corresponding to minor, moderate, and large measurement error, and we also report the results of the naive estimator obtained by directly using $W$ in the analysis. All the point estimates produced by our proposed method are fairly close to one another, and the results are fairly stable regardless of the degree of measurement error. Both our proposed method and the naive estimator suggest that the two covariates are significant regardless of the specification of the distribution of $\varepsilon$.
5 Discussion

In this article, we focus on the transformation model under length-biased sampling, develop a valid method to correct for covariate measurement error, and derive an efficient estimator. We also establish the large sample properties, and the numerical results show that the proposed method outperforms the naive approach. Although we focus on a simple structure for the measurement error model, our method can be extended to more complex measurement error models or to settings with additional information, such as repeated measurements or validation data. In addition, there remain many challenges in this topic, such as analysis without specifying the distribution of the truncation time, or the handling of mismeasured time-dependent covariates. These topics are left for future research.
Appendix A Regularity Conditions
(C1) The parameter space of $\beta$ is a compact set, and the true parameter value $\beta_0$ is an interior point of the parameter space.
(C2) Let $\tau$ denote the finite maximum support of the failure time.
(C3) The observations are independent and identically distributed for $i = 1, \ldots, n$.
(C4) The covariate $X$ is bounded.
(C5) Conditional on the covariates, the truncation time is independent of the failure time.
(C6) The censoring time is non-informative; that is, the failure time and the censoring time are independent, given the covariates $X$.
(C7) The extrapolation function is correctly specified, and its first-order derivative exists.
Condition (C1) is a basic condition used to derive the maximizer of the target function. Conditions (C2) to (C6) are standard in survival analysis; they allow us to express the estimators as sums of i.i.d. random variables and hence to derive the asymptotic properties of the estimators. Condition (C7) is a common assumption in the SIMEX method.
Appendix B Proof of Theorem 2.2
Before presenting the proof, we first define some notation. For any , , and , let
where is the derivative of . We further define
Proof of Theorem 2.2 (1):
The Kaplan-Meier estimator $\hat{S}_C$ is uniformly consistent for $S_C$ over the interval $[0, \tau]$, in the sense that $\sup_{t \in [0, \tau]} |\hat{S}_C(t) - S_C(t)| \to 0$ almost surely as $n \to \infty$ (Pollard 1990; van der Vaart 1998). It implies that as $n \to \infty$,
Furthermore, by a derivation similar to that in Step 1 of Wang and Wang (2015), we have
and the uniform law of large numbers (van der Vaart 1998), we obtain the uniform convergence of the sample quantity to its limit. Then we have that as $n \to \infty$,
for every $b$ and $\zeta$. By (B.5), we can show that as $n \to \infty$,
Therefore, by the continuous mapping theorem, we have that as $n \to \infty$,
Proof of Theorem 2.2 (2):
By (B.7), we have the convergence for every $t$, $b$, and $\zeta$. Taking the average with respect to $b$ gives the analogous result for the averaged estimator. On the other hand, by the uniform law of large numbers and derivations similar to those in Wang and Wang (2015), we have that the convergence holds for all $t$ as $n \to \infty$. Combining these facts yields the conclusion as $n \to \infty$.
Proof of Theorem 2.2 (3):
For every $b$ and $\zeta$, applying a Taylor series expansion of (12) around the true parameter gives
On the other hand, by derivations similar to those in Wang and Wang (2015) and standard algebra, we can express the quantity above as a sum of i.i.d. random functions. The exact formulation is given by
, and is the cumulative hazard function of .