Semiparametric Estimation for the Transformation Model with Length-Biased Data and Covariate Measurement Error

12/27/2018, by Li-Pang Chen et al., University of Waterloo

Analysis of survival data with biased samples caused by left-truncation or length-biased sampling has received extensive interest, and many inference methods have been developed for various survival models. These methods, however, break down when the survival data are error-contaminated. Although error-prone survival data commonly arise in practice, little work is available in the literature for handling length-biased data with measurement error. The transformation model is one of the frequently used models in survival analysis, yet methods for analyzing it in the presence of these complex features have not been fully explored. In this paper, we study this important problem and develop a valid inference method under the transformation model. We establish asymptotic results for the proposed estimators. The proposed method enjoys appealing features in that there is no need to specify the distribution of the covariates or the increasing function in the transformation model. Numerical studies are reported to assess the performance of the proposed method.




1 Introduction

In the study of a disease history, the time from the onset of an initiating event to the failure event of interest is usually the target of epidemiological and biomedical research. One common source of such data is the prevalent sampling design, under which only individuals who have experienced the initiating event, but not yet the failure event, before the recruitment time are enrolled. Under this sampling scheme, individuals who experience the failure event before the recruitment time cannot be observed. This phenomenon, caused by delayed entry, is called left-truncation and tends to produce a biased sample. Meanwhile, individuals recruited into the study may drop out or may not have experienced the failure event by the end of the study; this feature is called right-censoring. To be specific, let ξ be the calendar time of recruitment, and let c0 and c1 denote the calendar times of the initiating event (or the disease incidence) and the failure event, respectively, where c0 ≤ ξ ≤ c1. Let T = c1 - c0 be the failure time, and let A = ξ - c0 denote the truncation time. Assume that T and A are independent. Let X be the associated covariate vector of dimension p. Let f(·) and S(·) be the density function and the survivor function of the failure time T, respectively. Let A*, T*, and X* represent, respectively, the truncation time, the failure time, and the covariates for those subjects who are recruited into the study. That is,

(A*, T*, X*) has the same joint distribution as (A, T, X)

given T ≥ A. If T < A, then the individual is not included in the study and does not contribute any information.

Specifically, in the case of a stable disease, the occurrence of disease onset follows a stationary Poisson process. To see this, we first impose two assumptions (e.g., Huang et al. 2012): (a) the variables (A, T, X) are independent of the disease incidence time c0, and (b) the disease incidence occurs over calendar time at a constant rate. Under these two assumptions, the joint density function of (A*, T*) given X* = x is given by (Lancaster 1990, Chapter 3)


where μ(x) denotes the conditional mean of the failure time given X = x. Moreover, by (1), the failure time T* given X* = x has the length-biased density function t f(t|x)/μ(x). It implies that the truncation time A* follows a uniform distribution, and that the survival time in the prevalent cohort has a length-biased sampling distribution, since the probability of observing a survival time is proportional to its length.
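As a concrete illustration of this mechanism, the following sketch generates a prevalent cohort by rejection sampling. The exponential failure times and the rate and calendar-window values are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prevalent_cohort(n, rate=1.0, window=10.0):
    """Illustrative rejection sampler for a prevalent (length-biased) cohort.

    Disease onsets occur uniformly over a calendar window (a stationary
    Poisson process), so the truncation time A is uniform; a subject is
    observed only when the failure time T is at least A.
    """
    kept_T, kept_A = [], []
    while len(kept_T) < n:
        A = rng.uniform(0.0, window)        # onset-to-recruitment gap
        T = rng.exponential(1.0 / rate)     # underlying failure time
        if T >= A:                          # left-truncation: kept only if T >= A
            kept_T.append(T)
            kept_A.append(A)
    return np.array(kept_T), np.array(kept_A)

T_star, A_star = sample_prevalent_cohort(5000)
# Observed failure times are stochastically larger than the underlying
# ones: E[T*] is close to 2 here while E[T] = 1 (length bias).
print(T_star.mean())
```

The sample mean of the retained failure times is roughly twice the underlying mean, which is exactly the length-bias effect described above.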

In addition, we let C denote the censoring time for a recruited subject. Let Y = min(T*, A* + C) be the observed survival time and let δ = I(T* ≤ A* + C) be the indicator of a failure event. Figure 1 gives an illustration of the relationship among these variables.

Figure 1: Schematic depiction of length-biased sampling.

Several models for the analysis of biased samples have been developed in the literature. To name a few, Huang and Qin (2013) and Chen (2018+) developed estimation procedures for the additive hazards model with general left-truncated data, where the distribution of the truncation time is unspecified. Qin and Shen (2010) and Huang et al. (2012) studied the Cox model under length-biased sampling. In addition, the transformation model is also widely used in survival analysis; it is formulated as

H(T) = -β'X + ε,     (2)

where H(·) is an unknown increasing function, ε is a random variable with a known distribution, and β is the vector of parameters of main interest. The transformation model encompasses a broad class of frequently used models in survival analysis. To be more specific, when ε has an extreme value distribution, T follows the proportional hazards (PH) model, whereas when ε has a logistic distribution, T follows the proportional odds (PO) model. When biased samples occur, Shen et al. (2009) and Wang and Wang (2015) proposed valid methods to estimate the parameter of main interest under length-biased sampling. In this paper, we mainly focus on developing an estimation method for the transformation model.

On the other hand, another important feature in survival analysis is measurement error in covariates. Assume that X is an unobserved covariate and that W is a surrogate version of X. The classical additive model is frequently used, given by

W = X + e,     (3)

where the error term e follows a normal distribution with mean zero and covariance matrix Σ_e. Assume that e and (A, T, X, C) are independent, and that Σ_e is known for ease of discussion.
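A minimal sketch of the classical additive error model (3); the dimension, sample size, and error covariance below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 1000, 2
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)  # true covariates
Sigma_e = 0.5 * np.eye(p)                                    # known error covariance
e = rng.multivariate_normal(np.zeros(p), Sigma_e, size=n)    # measurement error
W = X + e                                                    # observed surrogate

# Under the classical additive model, Cov(W) = Cov(X) + Sigma_e, so each
# surrogate component is strictly noisier than its true counterpart.
print(np.cov(W, rowvar=False).round(2))
```

The printed covariance of W is close to the identity plus Σ_e, which is what drives the attenuation of naive estimators discussed below.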

In the absence of left-truncation or length-biased sampling, many methods have been developed to correct for mismeasurement. For example, Nakamura (1992) proposed an approximately corrected partial likelihood method for the Cox model. Yi and Lawless (2007) developed a conditional expectation approach to correct for the error effects in the Cox model. More discussion can be found in Yi (2017, Chapter 3).

However, little work is available for the case where these two features occur simultaneously. Chen and Yi (2018) proposed a corrected pseudo-likelihood estimation method for the Cox model subject to left-truncation, right-censoring, and covariate measurement error. Other survival models, such as the transformation model, have not been fully explored when these two complex features occur in the data. In this paper, our main purpose is to develop a valid estimation procedure for the transformation model with length-biased sampling and measurement error in covariates.

Our work is partially motivated by the Worcester Heart Attack Study (WHAS500) data (Hosmer et al. 2008), which involve biased and incomplete observations. Three types of time are recorded: the time of hospital admission, the time of hospital discharge, and the time of the last follow-up (which is either the death or censoring time). The total follow-up length is defined as the time between hospital admission and the last follow-up, and the hospital stay is defined as the time between hospital admission and hospital discharge. Data can only be collected for those individuals whose total follow-up length is larger than the hospital stay, creating a biased sample (e.g., Kalbfleisch and Prentice 2002, Section 1.3; Lawless 2003, Section 2.4). It is of interest to study how risk factors are associated with survival times after patients are discharged from the hospital. In addition, 'blood pressure' and 'body mass index' are variables in the data set that may be measured with error. To conduct sensible analyses, it is imperative to account for the possible measurement error effects induced by error-prone covariates.

The remainder of this paper is organized as follows. In Section 2, we present the proposed method to correct for the error effects and derive the estimators; we then develop the theoretical results for the proposed method. Numerical results, including a simulation study and a real data analysis, are provided in Sections 3 and 4, respectively. Finally, we conclude the paper with a discussion in Section 5.

2 Main Results

In this section, we propose a method to deal with measurement error in the covariates, and then derive the estimators of β and H. The theoretical results for the estimators are also established in this section.

2.1 Construction of The Estimating Function

In this subsection, we review the method proposed by Wang and Wang (2015), assuming for now that the covariate X is observed without mismeasurement.

Suppose we have a sample of n subjects where, for i = 1, ..., n, (A_i, T_i, X_i) has the same distribution as (A*, T*, X*). Let Y_i = min(T_i, A_i + C_i) and δ_i = I(T_i ≤ A_i + C_i) for i = 1, ..., n. Define


for i = 1, ..., n, where the integrand involves the conditional hazard function of T* given X* = x and the survivor function of the censoring variable C. It can be shown that (4) is a square-integrable martingale (e.g., Andersen et al. 1993). The survivor function of C is estimated by the Kaplan-Meier estimator. Therefore, based on (4) and counting process theory (e.g., Andersen et al. 1993), we can construct the following estimating equations

or equivalently,




where Λ(·) is the cumulative hazard function associated with H. When β is fixed, let Λ̂(·; β) denote the unique solution of (5). Next, replacing Λ in (6) by Λ̂(·; β) gives


Consequently, let β̂_X denote the solution of (7); it is the estimator of β based on the true covariate X. Moreover, plugging β̂_X into Λ̂(·; β) gives the estimator of Λ.

Based on the true covariates X, we have the following proposition, which is given in Theorem 1 of Wang and Wang (2015).

Proposition 2.1.

Let β0 be the true parameter. Under the regularity conditions in Wang and Wang (2015), the estimator based on the true covariate X is consistent for β0 and, suitably normalized, converges weakly to a normal distribution with mean zero and a variance-covariance matrix whose exact form can be found in Wang and Wang (2015).

However, as described in Section 1, we only observe W instead of X. Therefore, replacing X by W in (7) gives the naive estimating function


where the involved cumulative hazard estimate is the solution of (5) with X replaced by W. We let the naive estimator be defined as the solution of this naive estimating function.

2.2 Corrected Estimating Function

It is well known that the naive estimator can incur substantial bias (e.g., Carroll et al. 2006; Yi 2017). To correct for the mismeasurement and reduce the bias of the estimator, we employ the simulation-extrapolation (SIMEX) method (Cook and Stefanski 1994). The proposed procedure proceeds in the following stages:

Stage 1

Let B be a given positive integer, and let Λ = {λ1, ..., λM} be a sequence of pre-specified values with 0 = λ1 < λ2 < ... < λM, where M is a positive integer and λM is a pre-specified positive number.

For a given subject i and replicate b = 1, ..., B, we generate e_ib from N(0, Σ_e). Then for the observed vector of covariates W_i, we define W_i(b, λ) as

W_i(b, λ) = W_i + √λ e_ib     (9)

for every λ ∈ Λ. Therefore, the conditional distribution of W_i(b, λ) given X_i is N(X_i, (1 + λ)Σ_e).
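Stage 1 can be sketched as follows; the function name, the array layout, and the stand-in surrogate data are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def simex_remeasure(W, Sigma_e, lam, B):
    """SIMEX Stage 1 (sketch): for each replicate b, form the remeasured
    covariate W(b, lam) = W + sqrt(lam) * e_b with e_b ~ N(0, Sigma_e).
    Given the true X, W(b, lam) then has error covariance (1 + lam) * Sigma_e."""
    n, p = W.shape
    out = np.empty((B, n, p))
    for b in range(B):
        e_b = rng.multivariate_normal(np.zeros(p), Sigma_e, size=n)
        out[b] = W + np.sqrt(lam) * e_b
    return out

W = rng.normal(size=(500, 2))       # stand-in surrogate data
Sigma_e = 0.5 * np.eye(2)
W_remeasured = simex_remeasure(W, Sigma_e, lam=1.0, B=50)
print(W_remeasured.shape)           # (50, 500, 2)
```

The added noise has variance λΣ_e, so the total error variance of the remeasured covariate grows linearly in λ, which is the handle the extrapolation step exploits.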

Stage 2

In this stage, replacing X in (5) and (6) by W_i(b, λ) defined in (9) gives the new estimating equations



Similar to the procedure in Section 2.1, we first fix β and solve (10) for the cumulative hazard. After that, substituting this solution into the second equation gives the estimating equation


for every b = 1, ..., B and λ ∈ Λ. Let β̂(b, λ) denote the solution of (12). Moreover, we define the average over the replicates,

β̂(λ) = B⁻¹ Σ_{b=1}^{B} β̂(b, λ).     (13)

Stage 3

By (13), we have the sequence of averaged estimates {(λ, β̂(λ)) : λ ∈ Λ}, where β̂(λ) denotes the average of β̂(b, λ) over b. We then fit a regression model to this sequence,

β̂(λ) = g(λ; Γ) + noise,

where g(·; Γ) is a user-specified regression function, Γ is the associated parameter, and the noise term captures the residual variation. The parameter Γ can be estimated by the least squares method, and we let Γ̂ denote the resulting estimate of Γ.

Finally, we calculate the predicted value β̂ = g(-1; Γ̂) at λ = -1, where Γ̂ is the least squares estimate obtained from the fitted regression model, and take β̂ as the SIMEX estimator of β.
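The extrapolation step can be sketched with a quadratic regression function (the common default, and the choice used in the simulations below); the toy numbers are purely illustrative.

```python
import numpy as np

def simex_extrapolate(lambdas, estimates):
    """SIMEX Stage 3 (sketch): fit a quadratic in lambda to the averaged
    estimates and extrapolate back to lambda = -1, the value that
    corresponds to zero measurement error."""
    coefs = np.polyfit(np.asarray(lambdas), np.asarray(estimates), deg=2)
    return np.polyval(coefs, -1.0)

# Toy illustration with a known quadratic trend in lambda: the fitted
# quadratic reproduces the trend exactly, so the extrapolant at -1 is
# 0.8 + 0.3 + 0.05 = 1.15.
lams = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
ests = 0.8 - 0.3 * lams + 0.05 * lams ** 2
print(simex_extrapolate(lams, ests))
```

In practice the attenuated estimates at λ ≥ 0 only approximate such a smooth trend, so the quality of the SIMEX correction depends on how well the chosen g(·; Γ) extrapolates to λ = -1.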

Stage 4

Estimation of H
Furthermore, we can also derive an estimator of the unknown function H. To do this, we first replace β by β̂(b, λ) in the solution of (10), which gives an estimate of the cumulative hazard for each b and λ. For every λ, taking the average with respect to b gives an averaged estimate. Finally, similar to Stage 3 above, fitting a regression model in λ and taking the predicted value at λ = -1 yields the final estimator, denoted Ĥ.

2.3 Theoretical Results

In this section, we present the consistency and the asymptotic distributions of the proposed estimators β̂ and Ĥ; the proofs are available in Appendix B.

Theorem 2.2.

Let β0 be the true parameter and let H0 denote the true increasing function. Under the regularity conditions in Appendix A, the estimators β̂ and Ĥ have the following properties:

  • as ;

  • as ;

  • as ;

  • converges to the Gaussian process with mean zero and covariance function ,

where the exact formulations of the asymptotic covariance quantities are given in Appendix B.

3 Numerical Study

3.1 Simulation Setup

We examine the settings where ε is generated from the extreme value distribution and from the logistic distribution, and the truncation time A is generated from a uniform distribution. Let β = (β1, β2)' denote the two-dimensional vector of parameters, and let β0 be the vector of true parameters. We consider a scenario where the covariates X are generated from a bivariate normal distribution with mean zero and a given variance-covariance matrix. Given A, X, and ε, the failure time is generated from model (2):

Based on our two settings of ε, the failure time follows the PH model and the PO model, respectively. The observed data are then collected by conditioning on T ≥ A. We repeat these steps until we obtain a sample of the required size n. For the measurement error process, we consider model (3), where Σ_e is a diagonal matrix whose common diagonal element is taken as 0.01, 0.50, and 0.75, respectively.

We consider two censoring rates, 25% and 50%, and generate the censoring time from a uniform distribution whose upper bound is determined by the given censoring rate. Consequently, Y and δ are determined by T, A, and C. In implementing the proposed method, we partition the interval [0, λM] into subintervals of equal width and let the resulting cutpoints be the values of λ. We take the regression function to be a quadratic polynomial, which is widely used in practice (e.g., Cook and Stefanski 1994; Carroll et al. 2006). Finally, 1000 simulation runs are performed for each parameter setting.
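The data-generation steps above can be sketched as follows. Since the paper leaves the increasing function unspecified in data generation, this sketch assumes H(t) = log(t) purely for illustration, so that an extreme value ε yields the PH model and a logistic ε yields the PO model; the truncation window is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(3)

def generate_biased_sample(n, beta, Sigma_x, Sigma_e, model="PH", a_max=5.0):
    """Hedged sketch of the simulation design.  With H(t) = log(t),
    model (2) gives T = exp(-beta'X + eps): an extreme value eps gives
    the PH model, a logistic eps the PO model.  Subjects are kept only
    when T >= A (length-biased sampling), and the surrogate is W = X + e."""
    kept = []
    while len(kept) < n:
        X = rng.multivariate_normal(np.zeros(len(beta)), Sigma_x)
        if model == "PH":
            eps = np.log(rng.exponential(1.0))   # extreme value error
        else:
            eps = rng.logistic()                 # logistic error -> PO
        T = np.exp(-beta @ X + eps)
        A = rng.uniform(0.0, a_max)              # uniform truncation time
        if T >= A:                               # retain untruncated subjects
            W = X + rng.multivariate_normal(np.zeros(len(beta)), Sigma_e)
            kept.append((A, T, W))
    return kept

data = generate_biased_sample(200, beta=np.array([1.0, 1.0]),
                              Sigma_x=np.eye(2), Sigma_e=0.5 * np.eye(2))
print(len(data))
```

Under the PH branch, T = exp(-β'X) E with E exponential, so the survivor function is exp(-t e^{β'X}), i.e., a proportional hazards model, matching the claim in Section 1.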

3.2 Simulation Results

We mainly examine the three estimators discussed in Sections 2.1 and 2.2, labeled Wang-Wang (based on the true covariate X), Naive, and Chen (the proposed method). We report the biases of the estimates (Bias), the empirical variances (Var), the mean squared errors (MSE), and the coverage probabilities (CP) of 95% confidence intervals for these three estimators under measurement error model (3). The results are reported in Table 1.

First, the censoring rate and the degree of measurement error have a noticeable impact on each estimation method. As expected, biases and variance estimates increase as the censoring rate increases. When the degree of measurement error increases, the biases of both the naive and the proposed estimators increase, and the impact seems more pronounced for the naive estimator.

Within a given setting of censoring rate and measurement error degree, the naive method and the proposed method perform differently. When measurement error occurs, the proposed method outperforms the naive method. The naive method produces considerable finite-sample biases, with coverage rates of 95% confidence intervals departing significantly from the nominal level. The proposed method yields satisfactory estimates with small finite-sample biases and reasonable coverage rates of 95% confidence intervals. Compared with the variance estimates produced by the naive approach, the proposed method, which accounts for measurement error effects, yields larger variance estimates; this is the price paid to remove the bias of the point estimator, a phenomenon typical in the measurement error literature. However, the mean squared errors produced by the proposed method tend to be much smaller than those obtained from the naive method. Finally, we also present the numerical results of the estimator computed with the true covariate X. In general, it gives the smallest bias and is an efficient estimator with small variance. This result also verifies the validity of the method proposed by Wang and Wang (2015).

Model cr Degree Method Estimator of β1 Estimator of β2
Bias Var MSE CP(%) Bias Var MSE CP(%)
PH 25% 0.01 Naive -0.230 0.007 0.059 21.3 -0.749 0.014 0.626 14.9
Chen 0.017 0.013 0.014 94.7 0.009 0.028 0.028 94.2
0.50 Naive -0.343 0.006 0.123 1.6 -0.606 0.015 0.432 30.0
Chen 0.025 0.023 0.026 94.5 0.011 0.027 0.028 94.5
0.75 Naive -0.347 0.005 0.125 0.3 -0.636 0.016 0.465 23.8
Chen 0.025 0.023 0.023 94.8 0.019 0.025 0.025 93.9
Wang-Wang 0.009 0.011 0.011 95.6 0.008 0.013 0.013 94.1
50% 0.01 Naive -0.248 0.016 0.267 9.1 -0.742 0.016 0.565 0.1
Chen 0.017 0.014 0.014 94.4 0.016 0.021 0.021 94.3
0.50 Naive -0.375 0.015 0.145 0.2 -0.600 0.016 0.376 0.4
Chen 0.024 0.036 0.039 95.2 0.019 0.025 0.025 95.0
0.75 Naive -0.360 0.014 0.134 0.1 -0.630 0.014 0.413 0.2
Chen 0.025 0.033 0.033 94.6 0.026 0.025 0.025 94.8
Wang-Wang 0.008 0.013 0.013 95.3 0.006 0.014 0.014 95.7
PO 25% 0.01 Naive -0.250 0.009 0.072 23.0 -0.729 0.015 0.557 0.4
Chen 0.010 0.019 0.020 94.2 0.009 0.024 0.024 94.5
0.50 Naive -0.377 0.008 0.150 1.5 -0.588 0.017 0.369 3.6
Chen 0.012 0.018 0.040 94.5 0.011 0.024 0.025 93.7
0.75 Naive -0.362 0.007 0.138 1.1 -0.619 0.014 0.405 1.4
Chen 0.016 0.018 0.018 94.3 0.015 0.022 0.022 94.6
Wang-Wang 0.005 0.015 0.015 94.6 0.005 0.013 0.014 94.7
50% 0.01 Naive -0.268 0.016 0.273 10.1 -0.842 0.016 0.574 1.3
Chen 0.016 0.024 0.024 94.6 0.016 0.027 0.027 94.5
0.50 Naive -0.388 0.016 0.168 1.4 -0.600 0.016 0.376 1.4
Chen 0.027 0.036 0.037 94.2 0.021 0.026 0.026 95.1
0.75 Naive -0.410 0.017 0.185 1.9 -0.630 0.018 0.413 1.2
Chen 0.028 0.036 0.036 94.6 0.025 0.027 0.027 94.6
Wang-Wang 0.010 0.013 0.013 95.1 0.008 0.014 0.014 95.4
Table 1: Numerical results for the simulation study. 'Wang-Wang' denotes the estimator using the true covariate X; cr is the censoring rate; Degree is the common diagonal element of the error covariance Σ_e; Bias is the difference between the empirical mean and the true value; Var is the empirical variance; MSE is the mean squared error; and CP is the coverage probability of 95% confidence intervals.

4 Data Analysis

In this section, we apply the proposed method to analyze the data arising from the Worcester Heart Attack Study (WHAS500), described in Section 1. As discussed by Hosmer et al. (2008), a survival time was defined as the time since a subject was admitted to the hospital. We are interested in studying the survival times of patients who were discharged alive from the hospital. Hence, a selection criterion was imposed: only subjects who were discharged alive were eligible for the analysis. That is, individuals were not enrolled if they died before being discharged from the hospital, and hence a biased sample occurs. With this criterion, a sample of size 461 was available. In this data set, the censoring rate is 61.8%. To be more specific, the total length of follow-up (lenfol) is the survival time, the length of hospital stay (los) is the truncation time, and the vital status at last follow-up (fstat) is the failure indicator. In our analysis, the covariates include the body mass index (BMI) and the blood pressure (BP) of a patient. Supposing that BMI and BP are subject to measurement error, we let X = (X1, X2)', where X1 and X2 denote BMI and BP, respectively, and consider measurement error model (3). Let β = (β1, β2)' denote the associated parameters, with βj corresponding to Xj for j = 1, 2. We fit model (2) with ε following the extreme value distribution and the logistic distribution, respectively.

In this data set, there is no additional data source, such as a validation subsample or replicated measurements, which is often required to characterize the measurement error process (e.g., Carroll et al. 2006; Yi 2017). To get around this and to understand the impact of measurement error on estimation, we carry out sensitivity analyses. Specifically, let Σ̂_W be the sample covariance matrix of W; for the sensitivity analyses we take the covariance matrix Σ_e in measurement error model (3) to be Σ̂_W scaled by a diagonal matrix whose diagonal elements share a common value.
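The sensitivity-analysis specification can be sketched as below; the simulated BMI/BP-like columns are placeholders, since the WHAS500 data are not reproduced here, and the diagonal form of the scaled covariance is our own illustrative simplification.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hedged sketch: without validation data, take the error covariance to be
# a common fraction rho of the (diagonal of the) sample covariance of the
# observed surrogates.  The columns below are simulated placeholders with
# BMI/BP-like scales, not the actual WHAS500 measurements.
W = rng.multivariate_normal([27.0, 120.0], [[16.0, 5.0], [5.0, 225.0]],
                            size=461)
Sigma_W = np.cov(W, rowvar=False)              # sample covariance of W

for rho in (0.15, 0.50, 0.75):                 # minor, moderate, large error
    Sigma_e = rho * np.diag(np.diag(Sigma_W))  # assumed error covariance
    print(rho, np.round(np.diag(Sigma_e), 1))
```

Each value of rho fixes one candidate Σ_e, and the SIMEX analysis is then rerun unchanged for each candidate, which is exactly how the sensitivity results in Table 2 are organized.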

In Table 2, we report the point estimates (EST), the standard errors (S.E.), and the p-values for the cases with the common scaling value set to 0.15, 0.50, and 0.75, corresponding to minor, moderate, and large measurement error, respectively; we also report the results obtained by directly using W in the analysis. All the point estimates produced by our proposed method are fairly close to one another, and the results are fairly stable regardless of the degree of measurement error. Both our proposed method and the naive estimator suggest that the two covariates are significant regardless of the specification of the distribution of ε.

Model Degree Estimator of β1 Estimator of β2
EST S.E. p-value EST S.E. p-value
PH 0.15 -3.297 0.011 0.000 -1.448 0.174 8.661e-17
0.50 -1.370 0.011 0.000 -1.405 0.179 4.188e-15
0.75 -2.091 0.016 0.000 -1.382 0.152 9.714e-20
-1.694 0.010 0.000 -0.834 0.109 1.988e-14
PO 0.15 -1.538 0.015 1.169e-22 -1.570 0.174 6.765e-100
0.50 -1.444 0.012 2.374e-33 -1.250 0.174 2.816e-33
0.75 -1.933 0.012 1.540e-56 -1.102 0.162 1.120e-70
-0.894 0.011 4.391e-16 -0.326 0.107 4.029e-12
Table 2: Sensitivity analysis results for the Worcester Heart Attack Study data. Degree is the common value scaling the error covariance; rows without a degree report the analysis using W directly.

5 Discussion

In this article, we focus on the transformation model with length-biased sampling and develop a valid method that corrects for covariate measurement error and yields an efficient estimator. We also establish the large sample properties of the proposed estimators, and the numerical results show that the proposed method performs well. Although we only consider a simple structure of the measurement error model, our method can easily be extended to more complex measurement error models or to settings with additional information, such as repeated measurements or validation data. In addition, many challenges remain in this topic, such as analysis without specifying the distribution of the truncation time, or handling time-dependent covariates with mismeasurement. These topics are left for future research.

Appendix A Regularity Conditions

  • (C1) The parameter space of β is a compact set, and the true parameter value is an interior point of it.

  • (C2) The failure time has a finite maximum support, denoted τ.

  • (C3) The observations are independent and identically distributed across subjects.

  • (C4) The covariate is bounded.

  • (C5) Conditional on the covariates, the truncation time is independent of the failure time.

  • (C6) The censoring time is non-informative. That is, the failure time and the censoring time are independent, given the covariates.

  • (C7) The regression function used in the extrapolation step is correctly specified, and its first order derivative exists.

Condition (C1) is a basic condition used to derive the maximizer of the target function. Conditions (C2) to (C6) are standard in survival analysis; they allow us to express the estimating function as a sum of i.i.d. random variables and hence to derive the asymptotic properties of the estimators. Condition (C7) is a common assumption for the SIMEX method.

Appendix B Proof of Theorem 2.2

Before presenting the proofs, we first define some notation. For any β, t, and λ, let

where is the derivative of . We further define

where .

Proof of Theorem 2.2 (1):
The Kaplan-Meier estimator is uniformly consistent for the survivor function of the censoring time, in the sense that the supremum of the absolute difference converges to zero in probability as n → ∞ (Pollard 1990; van der Vaart 1998). It implies that as n → ∞,




since the remainder term vanishes as n → ∞. Furthermore, by a derivation similar to Step 1 of Wang and Wang (2015), we have


Let the limiting value be defined as a solution of the limiting equation, and recall that the estimator is a solution of the sample version. By (B.1), (B.2), (B.3), and the Uniform Law of Large Numbers (van der Vaart 1998), the sample-based estimating function converges uniformly to its limiting counterpart. Then we have that as n → ∞,


By definition (13), taking the average with respect to b on both sides of (B.4) gives that as n → ∞,


for every λ ∈ Λ. By (B.5), we can show that as n → ∞,


Therefore, by the continuous mapping theorem, we have that as n → ∞,


Proof of Theorem 2.2 (2):
By (B.7), the stated convergence holds for every t, b, and λ. Taking the average with respect to b gives the corresponding result for the averaged estimator. On the other hand, by the Uniform Law of Large Numbers and derivations similar to those in Wang and Wang (2015), the remaining uniform convergence holds for all t as n → ∞. Combining these facts gives the conclusion.

Proof of Theorem 2.2 (3):
For fixed b and λ, applying a Taylor series expansion to (12) around the true parameter gives

or equivalently,


By (B.1), (B.2), (B.3), and the Uniform Law of Large Numbers, we have that as n → ∞,



On the other hand, by derivations similar to those in Wang and Wang (2015) and standard algebra, we can express the estimating function as a sum of i.i.d. random functions. The exact formulation is given by



and the last quantity is the cumulative hazard function of the failure time.

Combining (B.10) and (B.9) with (B.8) yields