The task of predicting the occurrence of future clinical events is central to many areas of medical practice. “Risk calculators”, which make personalized risk predictions based on individual characteristics, are now commonly employed for outcomes such as heart attack, stroke, and diabetes. A common feature of many of these risk calculators is that their predictions are based on data from cohort studies. For example, the Framingham Risk Score (FRS) [1, 10] for predicting the occurrence of cardiovascular disease (CVD) is based on data from the Framingham Heart Study, a cohort study which has been ongoing since 1948. Many calculators, including the FRS, focus on predicting five- and ten-year risk, i.e., the probability of having a first CVD-defining event (coronary heart disease, stroke, or other CVD-related death) in the next five or ten years. Unfortunately, the cost of subject acquisition and long-term follow-up restricts the size of these studies; for example, total enrollment in the Framingham Heart Study across three generational cohorts is slightly less than 15,000, and a recent study of a subset of 8,491 FHS participants reported 1,174 incident CVD events over 12 years of follow-up. As a result, longitudinal cohort studies may be limited in their ability to describe unique features of contemporary patient populations, and in their capacity to predict risk over shorter time horizons (e.g., 1 year, 5 years) when the event rate is low.
An alternative data source for risk prediction is electronic health record (EHR) data, which is increasingly available on entire populations under care within health insurance systems. A typical EHR data capture covers a fixed time period, often several years, during which subjects may enroll in and/or disenroll from the health insurance plan. EHR data often contain hundreds of thousands of records on enrolled subjects from multiple sources including electronic medical records, claims and prescription data, and state and national databases. Since the data are recorded longitudinally, they are well-suited to obtaining risk estimates, but their complexity, dimensionality, and (often) high levels of missing data challenge standard statistical methods.
Risk engines for a variety of clinical outcomes have traditionally been developed by fitting survival regression models such as the Cox proportional hazards model  or the accelerated failure time model 
to censored time-to-event data from the aforementioned longitudinal studies. Because regression-based models yield estimates of the effects of each covariate in the model, they are useful not only as a predictive tool, but also for understanding the relative influence of covariates on risk. However, the drawback of fitting simple regression models is that they often lack the flexibility to capture non-linear covariate effects and interactions. If predictive performance and not model interpretability is the central concern, then more flexible machine learning techniques may yield improved risk engines by capturing complex relationships between covariates and risk.
In this paper, we present Censored Naive Bayes (CNB), a machine learning approach to predicting risk using time-to-event data subject to censoring. CNB is an extension of Naive Bayes (NB), a popular technique typically applied to classification problems with categorical outcomes. CNB is nonparametric with respect to the underlying distribution of event times, and models the marginal covariate distributions in a flexible manner. We begin by describing our setting of interest, then present an extension of the usual Naive Bayes technique which allows estimation of the conditional survivor function. The estimated survivor function can be used to predict risk over any specified time interval. We compare our proposed CNB technique to a Cox proportional hazards (CPH) based approach in a variety of simulated scenarios, and illustrate the application of our method to cardiovascular risk prediction using longitudinal electronic health record data from a large Midwest integrated healthcare system.
2 Naive Bayes for binary outcomes
Naive Bayes is a machine learning technique that has been widely applied to classification problems [13, 12, 14, 27, 19]. In this section, we describe the basic NB technique for estimating the probability of a binary outcome. In the next section, we discuss our extension of the NB to time-to-event data with censored outcomes.
Consider the task of estimating the probability of occurrence of an event over a fixed time period $t$, based on individual characteristics $X = (X_1, \dots, X_p)$ which are measured at some well-defined baseline time. For the moment, we will assume that we have data on $n$ subjects who have had $X$ measured at baseline and been followed for $t$ time units to observe whether or not the event occurred ($Y = 1$ indicating event occurrence). The target of estimation is $P(Y = 1 \mid X)$.
While regression-based approaches derive a likelihood or estimating function by directly modeling $P(Y = 1 \mid X)$ as a function of $X$, an alternative is to proceed indirectly by using Bayes Rule, writing
$$P(Y = 1 \mid X) = \frac{f(X \mid Y = 1)\,P(Y = 1)}{f(X \mid Y = 1)\,P(Y = 1) + f(X \mid Y = 0)\,P(Y = 0)}. \quad (1)$$
In general, it may be difficult to model the joint distribution of $X$, particularly if the number of covariates $p$ is large. Researchers have therefore proposed to simplify estimation by assuming parsimonious dependence structures for $f(X \mid Y)$. The Bayesian network approach consists of partitioning $X$ into low-dimensional, mutually independent subsets on the basis of a directed acyclic graph (DAG) encoding conditional independence relationships between covariates. The most extreme version of this approach, termed Naive Bayes, makes the dramatic simplifying assumption that the covariates constituting $X$ are independent given $Y$, so that
$$f(X \mid Y) = \prod_{j=1}^{p} f_j(X_j \mid Y).$$
The marginal densities $f_j$ may be modeled parametrically; one common assumption in classification problems is that the individual covariate distributions can be represented using mixtures of Normal distributions. Estimation proceeds by computing the maximum likelihood parameter estimates of each covariate distribution (conditional on $Y$), and plugging the resulting estimates into (1). $P(Y = 1)$ is typically estimated as the sample proportion of subjects with $Y = 1$.
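As a concrete illustration, the plug-in computation described above can be sketched in a few lines. This is a minimal sketch, not the paper's implementation; the function names are ours, and we assume a single Normal distribution (rather than a mixture) per covariate.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def naive_bayes_prob(x, params1, params0, p1):
    """Estimate P(Y=1 | X=x) under the Naive Bayes factorization with
    Normal marginal covariate densities, as in equation (1).

    x       : list of covariate values
    params1 : list of (mean, sd) per covariate, conditional on Y = 1
    params0 : list of (mean, sd) per covariate, conditional on Y = 0
    p1      : marginal estimate of P(Y = 1)
    """
    lik1 = p1
    lik0 = 1.0 - p1
    for xj, (m1, s1), (m0, s0) in zip(x, params1, params0):
        lik1 *= normal_pdf(xj, m1, s1)
        lik0 *= normal_pdf(xj, m0, s0)
    return lik1 / (lik1 + lik0)
```

In practice the `(mean, sd)` pairs would be the maximum likelihood estimates computed within each outcome class, and `p1` the sample proportion of events.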
Even though the Naive Bayes assumption is often incorrect, NB has been found to perform quite well in a variety of classification tasks, including when covariates are highly correlated [13, 12, 29, 20, 25]. NB is also known to be poorly calibrated in some settings, i.e., conditional probability estimates are biased, though they are highly correlated with the true probabilities so that classification performance remains acceptable. NB easily accommodates problems with large numbers of covariates, can be computed quickly for large samples, and implicitly handles datasets with missing covariate values. The problem of NB calibration is addressed at greater length in the Simulations and Discussion sections.
3 Censored Naive Bayes
3.1 Limitations of the binary-outcomes approach
In many studies, some participants will have incomplete data because of loss-to-follow-up or censoring. The EHR dataset motivating this work consists of patient records captured between 2002 and 2009; over this time period, some patients who contributed data in 2002 disenrolled prior to 2009, and other patients enrolled and began contributing data after 2002. If $Y$ denotes the indicator that a subject experiences a cardiovascular event within 5 years of his/her “index visit” (a term we define more formally in the next section), then subjects who were followed for less than five years and who did not experience an event have $Y$ undefined. One approach is to discard such subjects (see, e.g., [26, 4, 18]), but when a substantial fraction of subjects do not have five years of follow-up this strategy may lead to biased probability estimates since the underlying failure times are right-truncated. The typical classification-based approach suffers from other drawbacks as well: for example, changing the time horizon over which one wants to make predictions (e.g., from 5 years to 3 years) requires recalculating the outcomes and refitting the model, which may be impractical if the NB is to be used as the basis of a risk calculator in clinical practice. Several authors have proposed approaches which impute the event times of censored observations in a reasonable but somewhat ad hoc manner. A more principled likelihood-based approach to imputing event times has also been proposed, but that imputation technique may perform poorly if the assumed parametric distribution of event times is incorrect.
3.2 Notation and setup
Consider a longitudinal study of $\tau$ years duration, from which data on $n$ independent subjects are available. Subjects may enter the study at any time between the beginning of the study and year $\tau$, and follow-up can end when an event occurs, when the study ends in year $\tau$, or when a subject drops out of the study prior to year $\tau$. For each subject $i$, define $T_i$ as the time between some index time $t_0$ and the occurrence of an event (event time), and $C_i$ as the time between $t_0$ and the end of follow-up due to study termination or drop-out (censoring time). In our data application example, the index time corresponds to the first clinic visit following the initial 12 months of continuous enrollment in the healthcare system. For all subjects $i = 1, \dots, n$, we observe data $(\tilde{T}_i, \Delta_i, X_i)$, where $\tilde{T}_i = \min(T_i, C_i)$ is the observed time, $\Delta_i = I(T_i \le C_i)$, and $X_i$ is a $p$-vector of continuous-valued covariates available at $t_0$. The target of inference is now $S(t \mid X) = P(T > t \mid X)$, the survivor function conditional on $X$. In what follows, we will use $f(x \mid \cdot)$ as shorthand for the conditional pmf or density of $X$, given the conditioning event.
By Bayes Theorem, we can write
$$S(t \mid X) = \frac{f(X \mid T > t)\,S(t)}{f(X \mid T > t)\,S(t) + f(X \mid T \le t)\,\{1 - S(t)\}}, \quad (2)$$
where $S(t) = P(T > t)$ is the marginal survivor function. In this context, our “naiveness” assumption is that covariates are independent conditional on the events $\{T > t\}$ and $\{T \le t\}$, for all $t$. We therefore rewrite (2) as
$$S(t \mid X) = \frac{S(t) \prod_{j=1}^{p} f_j(X_j \mid T > t)}{S(t) \prod_{j=1}^{p} f_j(X_j \mid T > t) + \{1 - S(t)\} \prod_{j=1}^{p} f_j(X_j \mid T \le t)}. \quad (3)$$
The key components which require estimation are therefore the marginal survivor function $S(t)$ and the covariate densities $f_j(\cdot \mid T > t)$ and $f_j(\cdot \mid T \le t)$. The following sections describe an approach to estimating these components and hence the resulting conditional survivor function $S(t \mid X)$.
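Once estimates of these components are available, evaluating (3) is mechanical. The sketch below uses our own notation and assumes Normal covariate densities, anticipating the parametric model of Section 3.4:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def cnb_survival(x, s_t, surv_params, fail_params):
    """Plug-in estimate of S(t | X = x) via equation (3).

    x           : covariate vector
    s_t         : marginal survivor estimate S(t), e.g. from Kaplan-Meier
    surv_params : (mean, sd) per covariate, conditional on T > t
    fail_params : (mean, sd) per covariate, conditional on T <= t
    """
    num = s_t          # S(t) * prod_j f_j(x_j | T > t)
    den_fail = 1.0 - s_t  # {1 - S(t)} * prod_j f_j(x_j | T <= t)
    for xj, (m1, s1), (m0, s0) in zip(x, surv_params, fail_params):
        num *= normal_pdf(xj, m1, s1)
        den_fail *= normal_pdf(xj, m0, s0)
    return num / (num + den_fail)
```

When the conditional covariate densities coincide, the estimate reduces to the marginal survivor probability, as expected from (3).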
3.3 Estimation of the marginal survivor function
We propose to estimate $S(t)$ non-parametrically via the Kaplan-Meier estimator. Let the ordered distinct observed event times be $t_1 < t_2 < \dots < t_K$. We estimate
$$\hat{S}(t) = \prod_{i:\, t_i \le t} \left(1 - \frac{d_i}{n_i}\right),$$
where $d_i = \sum_l I(\tilde{T}_l = t_i,\, \Delta_l = 1)$ and $n_i = \sum_l I(\tilde{T}_l \ge t_i)$ respectively give the number of events and the number of individuals at risk at time $t_i$.
Since the Kaplan-Meier estimator makes no assumptions about the form of $S(t)$, it is ideally suited to a machine learning situation where little information may be available to guide parametric assumptions about the failure time distribution. Furthermore, since it is the nonparametric maximum likelihood estimator (NPMLE) of $S(t)$, it is a natural choice among the set of all nonparametric estimators. Most standard scientific and/or statistical software (MATLAB, Mathematica, R, etc.) possesses built-in functions for calculating the Kaplan-Meier estimate from censored data, making implementation trivial. In situations where the distribution of failure times follows a known parametric distribution, a parametric approach to estimating the marginal survivor function may be chosen instead.
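For readers implementing the method from scratch, the Kaplan-Meier product-limit estimate is also straightforward to compute directly. The following self-contained sketch (our own code, not the authors') returns the estimated survivor function at each distinct failure time:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the marginal survivor function S(t).

    times  : observed times (min of failure and censoring time)
    events : 1 if a failure was observed at that time, 0 if censored
    Returns a list of (t_i, S(t_i)) pairs at each distinct failure time.
    """
    n = len(times)
    data = sorted(zip(times, events))
    s = 1.0
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i   # subjects with observed time >= t
        d = 0             # failures at time t
        while i < n and data[i][0] == t:
            d += data[i][1]
            i += 1
        if d > 0:
            s *= 1.0 - d / at_risk
            curve.append((t, s))
    return curve
```

With no censoring this reduces to one minus the empirical CDF; censored observations simply leave the risk set without contributing a factor.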
The Kaplan-Meier estimator may be biased if censoring is informative, i.e., if the failure and censoring times are dependent. In this paper, where we aim to describe a simple extension of NB to time-to-event data, we will operate under the assumption that censoring occurs at random; formally, we assume that $T \perp C$. The assumption can be slightly weakened if there is a known set of discrete-valued variables $Z$ such that the failure and censoring times are conditionally independent given $Z$, in which case one could consider estimating $S(t)$ as $\sum_z \hat{S}(t \mid Z = z)\,\hat{P}(Z = z)$, where $\hat{S}(t \mid Z = z)$ can be consistently estimated via Kaplan-Meier within the subset of individuals with $Z = z$. Random censoring may be a reasonable assumption in EHR datasets where most censoring is induced either by the end of the data capture window or by subjects disenrolling from the healthcare plan, often due to an employer switching plans. Even in cases where random censoring does not strictly hold, the NB approach can be viewed as trading off some bias in estimation of $S(t)$ for additional flexibility in estimation of $S(t \mid X)$ via modeling of the conditional covariate densities $f_j(\cdot \mid T > t)$ and $f_j(\cdot \mid T \le t)$.
3.4 Estimation of the covariate densities
If there are $p$ total covariates, then the Naive Bayes assumption requires estimation of the densities $f_j(x \mid T > t)$ and $f_j(x \mid T \le t)$ for $j = 1, \dots, p$ at every $t$. For fixed $t$, one could estimate these univariate conditional densities quite flexibly using kernel density methods, but one is then left with the challenge of combining the resulting density estimates across adjacent values of $t$ to yield relatively smooth functions. In preliminary work, we discovered that the kernel density estimation approach was highly variable and performed poorly, so instead we propose much simpler parametric models yielding density estimates which vary smoothly in $t$. Specifically, we assume that
$$f_j(x \mid T > t) = \frac{1}{\sigma_j(t)}\,\phi\!\left(\frac{x - \mu_j(t)}{\sigma_j(t)}\right), \qquad f_j(x \mid T \le t) = \frac{1}{\tilde{\sigma}_j(t)}\,\phi\!\left(\frac{x - \tilde{\mu}_j(t)}{\tilde{\sigma}_j(t)}\right),$$
where $\phi$ is the standard Normal PDF. The assumption that changes in the covariate distributions with $t$ are restricted to location-scale shifts within the Normal family is a strong one, but our simulations demonstrate that it need not hold exactly for NB to perform well.
With observed times $\tilde{T}_i = \min(T_i, C_i)$ and event indicators $\Delta_i = I(T_i \le C_i)$, consider the following non-parametric estimators of $\mu_j(t)$, $\sigma_j^2(t)$, $\tilde{\mu}_j(t)$, and $\tilde{\sigma}_j^2(t)$:
$$\hat{\mu}_j(t) = \frac{\sum_i X_{ij}\, I(\tilde{T}_i > t)}{\sum_i I(\tilde{T}_i > t)}, \quad (4) \qquad \hat{\sigma}_j^2(t) = \frac{\sum_i \{X_{ij} - \hat{\mu}_j(t)\}^2\, I(\tilde{T}_i > t)}{\sum_i I(\tilde{T}_i > t)}, \quad (5)$$
$$\hat{\tilde{\mu}}_j(t) = \frac{\sum_i X_{ij}\, \Delta_i\, I(\tilde{T}_i \le t)}{\sum_i \Delta_i\, I(\tilde{T}_i \le t)}, \quad (6) \qquad \hat{\tilde{\sigma}}_j^2(t) = \frac{\sum_i \{X_{ij} - \hat{\tilde{\mu}}_j(t)\}^2\, \Delta_i\, I(\tilde{T}_i \le t)}{\sum_i \Delta_i\, I(\tilde{T}_i \le t)}. \quad (7)$$
In (4) and (5), all individuals with observation times $\tilde{T}_i > t$ contribute to estimation of $\mu_j(t)$ and $\sigma_j^2(t)$, since $\tilde{T}_i > t$ implies that $T_i > t$. In (6) and (7), only observations with observed failures prior to $t$ contribute to estimation of $\tilde{\mu}_j(t)$ and $\tilde{\sigma}_j^2(t)$. The procedure excludes censored observations with $\tilde{T}_i \le t$, for which either $T_i > t$ or $T_i \le t$ is possible. We show in Appendix A that, for event times yielding non-zero denominators, (4)-(7) are consistent estimators of their respective parameters under the following conditions:
Condition (1) is the standard assumption of independence of failure and censoring times, conditional on $X$. Condition (2) requires that the censoring time be independent of each $X_j$. This assumption may be plausible in settings (such as EHR data) where censoring is largely administrative in nature, induced by the fact that some subjects enrolled near the end of the study period, or occurring due to factors unlikely to be related to failure time (e.g., a subject’s employer switching health plans). Together, Conditions (1) and (2) imply that $T \perp C$, i.e., failure and censoring times are marginally independent.
3.4.1 Smoothing of covariate densities
The estimators (4)-(5) may fluctuate substantially between adjacent failure times as outlying covariate values are either excluded from the risk set or acquire more influence as less extreme values are removed. Furthermore, these estimators leave the densities undefined for values of $t$ beyond the range of the observed failure times. We therefore propose to apply nonparametric smoothing techniques to estimate the functions $\mu_j(t)$, $\sigma_j^2(t)$, $\tilde{\mu}_j(t)$, and $\tilde{\sigma}_j^2(t)$. One popular smoothing technique is loess, which combines local linear fits across a moving window. Since the precision of the estimates of $\mu_j(t)$ and $\sigma_j^2(t)$ decreases as $t$ increases, and the precision of $\hat{\tilde{\mu}}_j(t)$ and $\hat{\tilde{\sigma}}_j^2(t)$ decreases as $t$ decreases, we propose to fit the loess curves to observations weighted according to their estimated precision. For the mean curve $\mu_j(t)$, we fit a loess curve to the data $\{(t_k, \hat{\mu}_j(t_k))\}$ with weights inversely proportional to the estimated variance of $\hat{\mu}_j(t_k)$, i.e., $w_k = n_k / s_j^2(t_k)$, where $n_k$ is the number of individuals at risk at $t_k$ and $s_j^2(t_k)$ is the empirical variance of $X_j$ among those individuals. $\tilde{\mu}_j(t)$ and $\tilde{\sigma}_j^2(t)$ are weighted by corresponding formulae, with $n_k$ replaced by the number of observed failures prior to $t_k$ and $s_j^2(t_k)$ by the empirical variance of $X_j$ among subjects with failure times prior to $t_k$.
The final algorithm for estimating the conditional survivor function is therefore as follows. Given a set of failure times $t_1 < \dots < t_K$:

1. For every covariate $j = 1, \dots, p$, do the following:
   (a) Evaluate $\hat{\mu}_j(t_k)$, $\hat{\sigma}_j^2(t_k)$, $\hat{\tilde{\mu}}_j(t_k)$, and $\hat{\tilde{\sigma}}_j^2(t_k)$ for $k = 1, \dots, K$.
   (b) Fit a weighted loess smoother to plots of the resulting estimates versus $t_k$, for $k = 1, \dots, K$. Let the resulting estimated nonparametric regression functions be known as $\check{\mu}_j(t)$, $\check{\sigma}_j^2(t)$, $\check{\tilde{\mu}}_j(t)$, and $\check{\tilde{\sigma}}_j^2(t)$.
2. Plug in $\check{\mu}_j(t)$ and $\check{\sigma}_j^2(t)$ to estimate $f_j(x \mid T > t)$, and $\check{\tilde{\mu}}_j(t)$ and $\check{\tilde{\sigma}}_j^2(t)$ to estimate $f_j(x \mid T \le t)$, assuming that both densities are Normal.
3. Estimate the marginal survivor function using the Kaplan-Meier estimator.
4. Estimate the conditional survivor function by plugging the estimates from steps 2 and 3 into (3). If the plug-in estimates yield an estimated probability greater than 1, report a predicted probability of 1.
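The steps above can be prototyped compactly. The sketch below is our own simplified illustration: it skips the loess smoothing of step 1(b) and plugs the raw estimators (4)-(7) directly into (3), so it should not be read as the authors' implementation.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def km_survival(times, events, t):
    """Kaplan-Meier estimate of the marginal survivor function at t."""
    s = 1.0
    for u in sorted(set(u for u, d in zip(times, events) if d == 1 and u <= t)):
        at_risk = sum(1 for v in times if v >= u)
        deaths = sum(1 for v, d in zip(times, events) if v == u and d == 1)
        s *= 1.0 - deaths / at_risk
    return s

def moments(values):
    """Sample mean and standard deviation (variance floored for stability)."""
    m = sum(values) / len(values)
    v = sum((x - m) ** 2 for x in values) / len(values)
    return m, math.sqrt(max(v, 1e-12))

def cnb_predict(times, events, X, x_new, t):
    """Censored-Naive-Bayes style estimate of S(t | x_new), without smoothing."""
    s_t = km_survival(times, events, t)
    num = s_t
    den = 1.0 - s_t
    for j in range(len(x_new)):
        # Estimators (4)-(5): subjects still under observation beyond t.
        survivors = [X[i][j] for i in range(len(X)) if times[i] > t]
        # Estimators (6)-(7): subjects with an observed failure by t.
        failures = [X[i][j] for i in range(len(X)) if times[i] <= t and events[i] == 1]
        m1, s1 = moments(survivors)
        m0, s0 = moments(failures)
        num *= normal_pdf(x_new[j], m1, s1)
        den *= normal_pdf(x_new[j], m0, s0)
    return num / (num + den)
```

On a toy dataset in which failing subjects have systematically larger covariate values, a new subject resembling the survivors receives a survival probability near 1, and one resembling the failures a probability near 0.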
Software implementing this extended Naive Bayes technique is available on the website of the first author (Wolfson). In the following sections, we illustrate the application of our method to simulated data, compare performance with the Cox proportional hazards model, and use the NB to build a cardiovascular risk prediction model using electronic health record data.
4 Simulations

One theoretical advantage of the proposed Bayesian network approach is that it does not depend on correct specification of a regression model for the hazard function (as in the Cox model). Hence, NB might be better suited to modeling data which do not follow a proportional hazards model, or for which the functional relationship between covariates and the hazard is misspecified in the regression model. In this section, we compare the performance of our Naive Bayes method with the Cox proportional hazards model (CPH) when 1) failure times follow a proportional hazards model, 2) failure times follow a log-logistic accelerated failure time (AFT) model such that the hazards are non-proportional, and 3) the manner in which covariates influence the failure time is misspecified in the regression model. In all cases, censoring times were generated independently of the covariates and failure times, thereby guaranteeing the conditional and unconditional independence of censoring and failure times. Observed event times and censoring indicators were defined in the usual way: $\tilde{T}_i = \min(T_i, C_i)$ and $\Delta_i = I(T_i \le C_i)$. We consider predicting the probability of surviving beyond 7 years, $S(7 \mid X)$, though we emphasize that our approach yields estimates of the entire conditional survivor function, so that $S(t \mid X)$ could be predicted for any $t$ with a trivial amount of additional computation.
4.1 Performance metrics
In our simulation, we sought to evaluate two characteristics of the NB and CPH techniques: model calibration, the degree to which predicted and true event probabilities coincide; and model discrimination, the degree to which the model can identify low- and high-risk individuals. In all cases, calibration and discrimination were assessed on separately simulated validation datasets of equal size. All tables are based on 1000 simulations.
4.1.1 Model calibration.
To assess model calibration, bias and mean squared error (MSE) of the predicted probabilities were computed with respect to the true event probabilities calculated from the various simulation models. We report “conditional” bias and MSE within specific population subgroups, as well as “marginal” measures which average across the entire simulated population.
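Both quantities are simple averages over the validation set; a sketch (helper name ours):

```python
def calibration_metrics(pred, truth):
    """Marginal bias and mean squared error of predicted event
    probabilities relative to the true event probabilities."""
    n = len(pred)
    bias = sum(p - q for p, q in zip(pred, truth)) / n
    mse = sum((p - q) ** 2 for p, q in zip(pred, truth)) / n
    return bias, mse
```

The "conditional" versions reported in the tables apply the same formulas within population subgroups.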
4.1.2 Model discrimination.
To compare the ability of the CPH and NB models to predict events (in this case, a failure prior to time $t = 7$), we used the net reclassification improvement (NRI). The NRI, also referred to as the net reclassification index, compares the number of “wins” for each model among discordant predictions. We defined risk quartiles according to the true event probabilities; risk predictions for an individual were considered discordant between the NB and CPH models if the predictions fell in different quartiles. The NRI for comparing the NB and CPH models is defined by
$$\text{NRI} = \frac{U_E - D_E}{N_E} + \frac{D_{NE} - U_{NE}}{N_{NE}}, \quad (8)$$
where $U_E$ and $D_E$ are the numbers of individuals who experienced events prior to 7 years and were placed in a higher ($U_E$) or lower ($D_E$) risk category by the NB than by the CPH algorithm, and $U_{NE}$ and $D_{NE}$ count the corresponding “up-classified” and “down-classified” individuals who did not experience an event. $N_E$ and $N_{NE}$ are the total numbers of events and non-events, respectively. Event indicators were defined by whether the simulated failure time was less than 7, regardless of whether the failure was observed or not. A positive NRI means better reclassification performance for the NB model, while a negative NRI favors the CPH model. While NRIs can theoretically range from -1 to 1, in the risk prediction setting they do not typically exceed the range [-0.25, 0.25]. For example, one study calculated the effects of omitting various risk factors from the Reynolds Risk Score model for prediction of 10-year cardiovascular risk; the estimated NRIs ranged from -0.195 (omitting age) to -0.032 (omitting total cholesterol or parental history of MI), and all were statistically significant at the 0.05 level. In addition to the NRI, the tables of simulation results also display the reclassification improvement rates for events and non-events (i.e., the summands in (8)) separately. In our simulations, the true event times of each individual in the test set were used to calculate the NRI. In the real data analyzed in Section 5, the test data are subject to censoring, hence we use a modified version of the NRI which accounts for censoring.
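Equation (8) reduces to simple counting over the discordant pairs; the helper below (our own naming) computes the NRI from the reclassification counts:

```python
def nri(up_e, down_e, up_ne, down_ne, n_events, n_nonevents):
    """Net reclassification improvement of model A over model B,
    per equation (8).

    up_e / down_e   : events moved to a higher / lower risk category by A
    up_ne / down_ne : non-events moved to a higher / lower category by A
    """
    return (up_e - down_e) / n_events + (down_ne - up_ne) / n_nonevents
```

A positive value indicates that model A (here, NB) reclassifies subjects more appropriately on net: events upward and non-events downward.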
4.2 Proportional hazards
Table 1 summarizes the performance of the CPH and NB models when data are generated from a proportional hazards model via a standard inversion method. Covariates were generated from a $p$-variate Normal distribution with mean zero, variance one, and exchangeable correlation with pairwise correlation $\rho$; different values of $\rho$ were considered in the various simulation settings. The final design matrix was formed by augmenting the simulated covariates with an $n$-vector of ones. Because each covariate was marginally standard normal, no covariate scaling was performed before applying the computational methods. Given observed covariates $x_i$ for subject $i$, a failure time was generated as
$$T_i = \left(\frac{-\log U_i}{\lambda \exp(\beta^\top x_i)}\right)^{1/\nu},$$
where $U_i \sim \text{Uniform}(0, 1)$, which corresponds to generating failure times from a Weibull distribution with shape parameter $\nu$ and scale parameter $\{\lambda \exp(\beta^\top x_i)\}^{-1/\nu}$, so that the hazard for subject $i$ is proportional to the baseline hazard with proportionality factor $\exp(\beta^\top x_i)$. The censoring distribution was varied across simulations, yielding censoring rates of 55%, 78%, and 90%, and event rates of 40%, 18%, and 7%, respectively.
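Generating proportional-hazards failure times by inverting the Weibull survivor function can be sketched as follows; the baseline parameter values below are illustrative defaults of ours, not the values used in the paper.

```python
import math
import random

def weibull_ph_time(beta_x, lam=0.1, nu=2.0, rng=random):
    """Failure time with hazard proportional to exp(beta_x), generated
    by inverse transform: T = (-log U / (lam * exp(beta_x)))**(1/nu).
    lam (baseline rate) and nu (shape) are illustrative assumptions."""
    u = rng.random()
    return (-math.log(u) / (lam * math.exp(beta_x))) ** (1.0 / nu)
```

Larger linear predictors yield higher hazards and hence stochastically shorter failure times, which is the defining feature of the proportional hazards mechanism.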
Figures 1(a) and 1(b) plot predicted vs. true probabilities across quantiles of the first covariate for 5000 points sampled from 10 simulation runs under two of the correlation settings.
When covariates are independent ($\rho = 0$), NB and CPH have similar calibration and discrimination performance. When covariates are strongly correlated, NB has an anti-conservative bias in which large survival probabilities are overestimated and small survival probabilities are underestimated (see, e.g., panels 1 and 4 of Figure 1(b)). Interestingly, this bias does not degrade the discrimination ability of NB as measured by the NRI. In fact, the NRI favors NB over CPH more strongly in cases where the bias is larger. This finding is not entirely surprising; it has been shown that NB tends to drive predicted probabilities towards either 0 or 1 while retaining an approximately correct ranking of observations with respect to their risk.
4.3 Nonproportional hazards
Table 2 compares the performance of the NB with the CPH model when data follow a log-logistic accelerated failure time (AFT) model. Covariates were generated as described above. Given observed covariates $x_i$, failure times were generated from a Log-logistic distribution with a fixed shape parameter and a scale parameter depending on the linear predictor $\beta^\top x_i$, as in the usual AFT formulation. The censoring distribution was varied across simulations, yielding censoring rates of 60%, 74%, and 90%, and event rates of 40%, 25%, and 8%, respectively.
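Log-logistic failure times are also easy to generate by inverse transform of the CDF $F(t) = \{1 + (t/\alpha)^{-\beta}\}^{-1}$; a sketch with illustrative parameters (note that the scale $\alpha$ equals the median of the distribution):

```python
import random

def log_logistic_time(alpha, beta, rng=random):
    """Log-logistic failure time with scale alpha (= the median) and
    shape beta, via inverse transform of F(t) = 1 / (1 + (t/alpha)**-beta).
    Inverting F gives t = alpha * (u / (1 - u))**(1/beta)."""
    u = rng.random()
    return alpha * (u / (1.0 - u)) ** (1.0 / beta)
```

An AFT model is obtained by letting `alpha` depend on the covariates, e.g. `alpha = exp(beta_x)`; the shape parameter controls how sharply the hazard rises and then falls, producing the non-proportionality exploited in this simulation setting.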
As in the proportional hazards case, bias was small for both methods when covariates were independent, and was higher for NB when covariates were correlated. NB performed better than CPH in terms of risk reclassification, with the advantage being particularly dramatic when the underlying event rate was higher. In general, the NB yielded better risk classification for subjects who experienced events, while CPH performed better for subjects without events.
4.4 Regression model misspecification
Table 3 summarizes the performance of NB and CPH models when data are generated from a Log-logistic model and the CPH linear predictor is misspecified. Our simulation is motivated by our applied data example, where the effects of age and systolic blood pressure (SBP) on the probability of experiencing a nonfatal cardiovascular event appear to be non-linear. The simulation model encodes a scenario where subjects at middle ages are at highest risk, while those at low and high ages are at lower risk.
For this set of simulations, the total sample size was held fixed across runs. We started by generating an ‘age’ covariate from a Log-logistic distribution with shape parameter 10, scale parameter 50, and rate parameter 0.4; this yielded a distribution of ages with a median of 50 and an interquartile range of 10. A small fraction of simulated ages exceeded 100; these ages were set to 100. Given age, systolic blood pressure values were simulated from a Normal distribution with mean depending on age. Based on the simulated ages and SBPs, failure times were simulated from a Log-logistic distribution with fixed scale parameter and shape parameter depending on the linear predictor
$$\eta_i = \beta_1\,\text{Age}_i + \beta_2\,\text{SBP}_i + \beta_3\, I(\text{Age}_i \ge 60) + \beta_4\,(\text{Age}_i \times \text{SBP}_i) + \beta_5\,\{\text{Age}_i \times I(\text{Age}_i \ge 60)\}, \quad (9)$$
where $I(\text{Age}_i \ge 60)$ is a binary indicator, $\text{Age}_i \times \text{SBP}_i$ is an age-by-SBP interaction, and the final term is an interaction between continuous age and the binary indicator of age $\ge 60$. All columns of the design matrix implied by this linear predictor were standardized to have mean zero and variance one before estimation was performed, so that the coefficients represent standardized effect sizes for each covariate. The left panel of Figure 2 (plot labeled ‘True’) shows a set of survival probabilities (i.e., probabilities of having a failure time $> 7$ years) simulated from this model, plotted vs. age.
We consider the performance of the NB and CPH models for different specifications of the linear predictor in the Cox model. There is no explicit regression model associated with the NB, and hence it is provided with (standardized) vectors of ages and SBPs. We consider varying degrees of misspecification of the regression model, from a model incorporating only linear terms for age and SBP (omitting the (age ) indicator and higher-order interactions) to the full model including all the covariate terms from (9).
| CPH regression terms | Bias(CPH) | Bias(NB) | MSE(CPH) | MSE(NB) | RI(E) | RI(NE) | NRI |
|---|---|---|---|---|---|---|---|
| Age, SBP, Age × SBP | 0.008 | 0.000 | 0.615 | 0.422 | 0.166 | -0.059 | 0.107 |
| Age, SBP, Age² | 0.009 | 0.001 | 0.498 | 0.421 | 0.127 | -0.067 | 0.060 |
| Age, SBP, I(Age ≥ 60) | 0.011 | 0.000 | 0.268 | 0.422 | 0.078 | -0.107 | -0.029 |

Performance of CPH and NB models for data simulated from a Log-logistic regression model including the variables from (9). The first column of the table specifies which terms are included in the CPH regression model; the NB is provided with age and SBP values only. The final row shows the results when the CPH model includes all relevant covariates. RI(E) and RI(NE) are the reclassification improvement rates for events and non-events, respectively. Mean squared errors are multiplied by 100.
Table 3 and Figure 2 illustrate how the additional flexibility of the NB model allows more accurate predictions to be obtained when relevant covariates are omitted from the regression model. The NB model has smaller MSE and positive NRI when compared to a CPH model which includes only main effects for age (or log(Age)) and SBP. Figure 2 displays predicted survival probabilities from a typical simulation run using the NB and the main-effects-only CPH model. The overlaid smoothers show clearly that the NB captures the non-linear effects of age and SBP observed in the true probabilities (left panel) better than CPH. Returning to Table 3, adding an age × SBP interaction term does not markedly improve the performance of CPH. Incorporating a quadratic effect for age, a standard strategy when faced with apparent non-linearity in the effect of a covariate, yields some improvement in CPH, but the NB retains an advantage in MSE and NRI. When the I(Age ≥ 60) indicator term is included, the CPH model outperforms NB both in terms of MSE and NRI.
5 Data application
We illustrate our Naive Bayes method by applying it to the task of predicting the risk of cardiovascular events from longitudinal electronic health record data. The dataset includes medical information on a cohort of patient-members age 18 years and older from a large Midwest integrated healthcare system. Data were available on members who attended a primary care clinic between 2002 and 2009, and who were continuously enrolled for at least twelve consecutive months during that period.
We consider the task of predicting the individual risk of experiencing a cardiovascular event, including death where cardiovascular disease was listed as the primary cause; procedures such as thrombectomy, stent placement, or coronary artery bypass graft; and diagnoses such as stroke or myocardial infarction. For both event definitions, failure and censoring times of subjects were measured from an index visit, corresponding to the first clinic visit following 12 months of continuous enrollment in the healthcare system. Only first events were recorded; subjects were censored if they disenrolled prior to experiencing an event or were event-free when the dataset was locked and prepared for analysis in 2009. Those whose primary cause of death was not cardiovascular disease were coded as censored. In this data application example, we did not adjust either the CPH or NB methods for competing risks.
Information available for predicting the risk of cardiovascular events included age, blood pressure, and LDL cholesterol. We restricted our analysis to subjects who had values for these risk factors recorded prior to the index visit; these factors were treated as baseline predictors. The one exception was cholesterol, which was often missing for subjects under the age of 50. Due to low rates of dyslipidemia in this group, subjects without a measured LDL cholesterol within the first year of enrollment were assigned a normal LDL cholesterol value of 120 mg/dl. With these restrictions, the analysis cohort consisted of 274,964 patients with a mean follow-up of 4.8 years. The cohort was randomly split into a training dataset consisting of 75% of the observations (206,223 subjects) and a test dataset consisting of the remaining 25% (68,741 subjects). After standardizing the covariate values, we applied both the CPH and NB methods to the training data and obtained predictions for the test data. The CPH model regression equation included linear terms for each of the three risk factors (age, blood pressure, and LDL cholesterol), consistent with existing CPH-based techniques such as the Framingham risk equations.
Figure 3 shows five-year cardiovascular risk predicted by NB and CPH, by age, for the two event definitions. Overlaid on the two plots is the observed event rate for five-year age ranges, computed within each range via Kaplan-Meier. The figure illustrates how the flexibility of the NB method allows it to capture a marked uptick in event rate after the age of 50; in contrast, the linear predictor of the Cox model restricts the set of age-risk relationships that can be well approximated.
5.1 Reclassification with censored observations
As noted earlier, the Net Reclassification Improvement statistic cannot be applied directly to data where the outcome status of some subjects is unknown. In our setting, omitting subjects with less than five years of follow-up (or treating them as non-events) would result in biased estimates of the NRI. To evaluate risk reclassification on our test data, which are subject to censoring, we use a “censoring-adjusted” NRI (cNRI). Briefly, the expected number of events overall and among up-classified and down-classified individuals is computed from event probabilities estimated via Kaplan-Meier. Uncertainty intervals are obtained by computing the cNRIs on the test set for predictions from NB and Cox models fitted to 500 bootstrap resamples of the training data.
Using the test data, we computed the cNRI comparing the two models. We considered clinically relevant risk categories for five-year risk of cardiovascular events: 0-5%, 5-10%, and >10%. Table 4 summarizes the reclassification improvements and net reclassification improvements for subjects with and without events. The NRI across all subjects was positive (NRI = 0.10, 95% CI 0.084 to 0.120), indicating that NB had significantly better reclassification performance than the Cox model. The reclassification improvements behave similarly to our simulations, with Naive Bayes outperforming the Cox model in terms of predicting the occurrence of events among those who actually experienced them, and the Cox model apparently more accurate among subjects who did not experience events.
Table 4: cNRI comparing NB with Cox on the test set. Columns give the number of events, cRI (events), cRI (non-events), and cNRI, each with a 95% Wald confidence interval. Reclassification improvement scores significantly different from zero are indicated in bold (favoring Naive Bayes) or italics (favoring Cox).
We have presented an extension of the Naive Bayes technique to accommodate time-to-event data subject to censoring, and illustrated its application to predicting cardiovascular risk using electronic health record data. Like many machine learning techniques, our method is ‘model-agnostic’ and hence can capture complex relationships between demographics, biomarkers, and clinical risk without requiring a priori specification of a regression model. A more complex Cox regression model or a set of stratified Kaplan-Meier estimates might capture the observed non-linearities in risk, but considering multiple candidate regression models or covariate strata increases the risk of overfitting, even when an independent test set is used. In contrast, our relatively simple Naive Bayes approach captures non-linear effects without requiring any model tuning, scoring, or comparisons. The NB is straightforward to implement and runs quickly: in our simulation studies, unoptimized R code implementing NB had running times similar to those of the coxph command from the survival package. In our data application, we successfully applied the NB method to a dataset with observations, with a total running time of several minutes using a simple implementation in MATLAB. The assumption of covariate independence also allows our NB algorithm to be easily parallelized, which is not true of procedures for maximizing the Cox partial likelihood.
The proposed NB method assumes that the covariates are independent and normally distributed conditional on survival status at each time point. This is unlikely to hold in practice, though the NB has been shown to be effective in other settings where distributional assumptions regarding the covariates are not satisfied. Indeed, while most covariates in our simulation study were generated from marginal normal distributions, no constraint was placed on their conditional distributions. Regarding calibration, we found that predictions from NB were mostly unbiased, though they often (but not always) had higher MSE than those from the Cox model. The respectable calibration of the NB predictions may be due to the smoothing of conditional covariate means and variances across time, which may attenuate and stabilize probability estimates; the overall effect may be similar to the smoothing and binning approaches described in  for improving calibration of NB classifiers. Perhaps unsurprisingly given its origins as a classifier, NB outperformed the Cox model in terms of Net Reclassification Improvement, a metric which evaluates how successfully models classify subjects into clinically relevant risk groups. In clinical practice, the ability to discriminate between low- and high-risk patients may be more important than obtaining a precise risk estimate, which may recommend the use of Naive Bayes in settings such as the cardiovascular risk prediction task considered here.
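For concreteness, the Gaussian Naive Bayes posterior underlying this discussion can be sketched as follows on fully observed binary outcomes (this omits both the censoring adjustment and the time-smoothing of conditional means and variances that the proposed method adds):

```python
import numpy as np

def nb_risk(X_train, y_train, x_new):
    """Gaussian Naive Bayes posterior P(event | x): each covariate is modeled
    as conditionally normal given the class, independently of the others.
    y_train[i] = 1 if subject i experienced an event by the horizon."""
    logpost = np.zeros(2)
    for c in (0, 1):
        Xc = X_train[y_train == c]
        mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-9   # small ridge for stability
        # Log-likelihood of x_new under independent normals, summed over covariates
        loglik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x_new - mu) ** 2 / var)
        logpost[c] = np.log(np.mean(y_train == c)) + loglik
    logpost -= logpost.max()    # stabilize before exponentiating
    p = np.exp(logpost)
    return p[1] / p.sum()
```

Because each covariate contributes only its own mean and variance per class, the per-covariate estimates can be computed independently, which is what makes the parallelization noted above straightforward.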
The covariate independence assumption can be relaxed by modeling the covariates as arising from a multivariate normal distribution, but it is unclear how to extend this approach to our setting, where parameter values are smoothed over time. Specifically, the key challenge is that covariance matrices must be positive definite, a constraint which is difficult to incorporate into a smoothing procedure. The assumption of conditionally normal distributions is also somewhat limiting, in that it restricts NB to cases where covariates are continuous and have distributions that can be approximated by a normal. In practice, separate NB models could be fit at each level of a categorical risk factor (e.g., smokers and non-smokers), but this could be time-consuming for factors with more than two levels and inefficient for ordered factors (e.g., tumor staging). Furthermore, many risk factors for clinical disease have skewed distributions, either marginally or among subjects with survival times beyond a certain threshold. In ongoing work, we are developing techniques which accommodate discrete and skewed covariates in the NB framework.
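The stratified-fitting workaround mentioned above can be sketched as follows (hypothetical helper names, uncensored binary outcomes for simplicity): one Gaussian NB parameter set is estimated per level of the categorical factor.

```python
import numpy as np

def fit_nb(X, y):
    """Gaussian NB parameters: (class prior, per-covariate means, variances) per class."""
    params = {}
    for c in (0, 1):
        Xc = X[y == c]
        params[c] = (np.mean(y == c), Xc.mean(axis=0), Xc.var(axis=0) + 1e-9)
    return params

def fit_stratified_nb(X, y, strata):
    """One NB model per level of a categorical factor (e.g., smoking status):
    a simple way to admit a discrete covariate into the Gaussian NB framework."""
    return {level: fit_nb(X[strata == level], y[strata == level])
            for level in np.unique(strata)}
```

As the text notes, this scales poorly: each additional level splits the data further, and for ordered factors it discards the ordering information entirely.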
Acknowledgments

This work was partially supported by NHLBI grant R01HL102144-01 and AHRQ grant R21HS017622-01.
References
- Anderson et al.  K M Anderson, P M Odell, P W Wilson, and W B Kannel. Cardiovascular disease risk profiles. American Heart Journal, 121(1 Pt 2):293–8, January 1991. ISSN 0002-8703. URL http://www.ncbi.nlm.nih.gov/pubmed/1985385.
- Arruda  Heather Arruda. About the Framingham Heart Study Participants. URL http://www.framinghamheartstudy.org/participants/index.html.
- Bender et al.  Ralf Bender, Thomas Augustin, and Maria Blettner. Generating survival times to simulate Cox proportional hazards models. Statistics in Medicine, 24(11):1713–23, June 2005. ISSN 0277-6715. doi: 10.1002/sim.2059. URL http://www.ncbi.nlm.nih.gov/pubmed/16680804.
- Blanco et al.  R. Blanco, I. Inza, M. Merino, J. Quiroga, and P. Larrañaga. Feature selection in Bayesian classifiers for the prognosis of survival of cirrhotic patients treated with TIPS. Journal of Biomedical Informatics, 38(5):376–388, 2005. URL http://www.sciencedirect.com/science/article/pii/s1532-0464(05)00047-x.
- Buckley and James  J Buckley and I James. Linear regression with censored data. Biometrika, 66(3):429–436, 1979.
- Chia et al.  Chih-Chun Chia, Ilan Rubinfeld, Benjamin M. Scirica, Sean McMillan, Hitinder S. Gurm, and Zeeshan Syed. Looking Beyond Historical Patient Outcomes to Improve Clinical Models. Science Translational Medicine, 4(131):131ra49, April 2012. ISSN 1946-6234. doi: 10.1126/scitranslmed.3003561. URL http://www.ncbi.nlm.nih.gov/pubmed/22539773.
- Cleveland et al.  William S. Cleveland, Susan J. Devlin, and Eric Grosse. Regression by local fitting. Journal of Econometrics, 37(1):87–114, January 1988. ISSN 03044076. doi: 10.1016/0304-4076(88)90077-2. URL http://dx.doi.org/10.1016/0304-4076(88)90077-2.
- Cook and Ridker  NR Cook and PM Ridker. The use and magnitude of reclassification measures for individual predictors of global cardiovascular risk. Annals of Internal Medicine, 150(11):795–802, 2009. URL http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2782591/.
- Cox  D. R. Cox. Regression Models and Life-Tables. Journal of the Royal Statistical Society. Series B (Methodological), 34(2):187–220, 1972. URL http://links.jstor.org/sici?sici=0035-9246(1972)34:2<187:RMAL>2.0.CO;2-6.
- D’Agostino et al.  R B D’Agostino, S Grundy, L M Sullivan, and P Wilson. Validation of the Framingham coronary heart disease prediction scores: results of a multiple ethnic groups investigation. Journal of the American Medical Association, 286(2):180–7, July 2001. ISSN 0098-7484. URL http://www.ncbi.nlm.nih.gov/pubmed/11448281.
- D’Agostino et al.  Ralph B D’Agostino, Ramachandran S Vasan, Michael J Pencina, Philip A Wolf, Mark Cobain, Joseph M Massaro, and William B Kannel. General cardiovascular risk profile for use in primary care: the Framingham Heart Study. Circulation, 117(6):743–53, February 2008. ISSN 1524-4539. doi: 10.1161/CIRCULATIONAHA.107.699579. URL http://www.ncbi.nlm.nih.gov/pubmed/18212285.
- Domingos and Pazzani  Pedro Domingos and M Pazzani. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29:103–130, 1997. URL http://link.springer.com/article/10.1023/A:1007413511361.
- Domingos and Pazzani  Pedro Domingos and Michael Pazzani. Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier. In Machine Learning, pages 105–112. Morgan Kaufmann, 1996. URL http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.4134.
- Hand and Yu  David J. Hand and Keming Yu. Idiot's Bayes - Not So Stupid After All? International Statistical Review, 69(3):385–398, December 2001. ISSN 0306-7734. doi: 10.1111/j.1751-5823.2001.tb00465.x. URL http://doi.wiley.com/10.1111/j.1751-5823.2001.tb00465.x.
- John and Langley  George H John and Pat Langley. Estimating Continuous Distributions in Bayesian Classifiers. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, 1995.
- Kagan et al.  A Kagan, T R Dawber, W B Kannel, and N Revotskie. The Framingham study: a prospective study of coronary heart disease. Federation Proceedings, 21(4 Pt 2):52–7, 1962. ISSN 0014-9446. URL http://www.ncbi.nlm.nih.gov/pubmed/14453051.
- Kaplan and Meier  E. L. Kaplan and Paul Meier. Nonparametric estimation from incomplete observations. Journal of the American Statistical Association, 53(282):457–481, November 1958. URL http://www.jstor.org/stable/2281868.
- Larranaga et al.  P. Larranaga, Basilio Sierra, M. Gallego, M. Michelena, and J. Picaza. Learning Bayesian networks by genetic algorithms: a case study in the prediction of survival in malignant skin melanoma. Artificial Intelligence in Medicine, 1211:261–272, 1997. URL http://www.springerlink.com/index/w31m361t2v5q1357.pdf.
- Maimon and Rokach  Oded Z. Maimon and Lior Rokach. Decomposition Methodology For Knowledge Discovery And Data Mining: Theory And Applications. World Scientific, 2005. ISBN 9812560793. URL http://books.google.com/books?id=RARx4rjp_7AC&pgis=1.
- Mani and Pazzani  Subramani Mani and Michael J. Pazzani. Knowledge Discovery from a Breast Cancer Database. In Artificial Intelligence in Medicine, pages 130 – 133. 1997. URL http://www.researchgate.net/publication/221450627_Knowledge_Discovery_from_a_Breast_Cancer_Database.
- Pearl  Judea Pearl. Fusion, Propagation, and Structuring in Belief Networks. Artificial Intelligence, 29(3):241–288, 1986. URL http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.84.8016.
- Pencina et al.  Michael J Pencina, Ralph B D’Agostino, and Ewout W Steyerberg. Extensions of net reclassification improvement calculations to measure usefulness of new biomarkers. Statistics in Medicine, 30(1):11–21, January 2011. ISSN 1097-0258. doi: 10.1002/sim.4085. URL http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3341973&tool=pmcentrez&rendertype=abstract.
- Pencina et al.  Michael J Pencina, Ralph B D’Agostino, and Ramachandran S Vasan. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Statistics in Medicine, 27(2):157–172, 2008. doi: 10.1002/sim.2929. URL http://onlinelibrary.wiley.com/doi/10.1002/sim.2929/abstract.
- Pepe  Margaret S Pepe. Problems with risk reclassification methods for evaluating prediction models. American Journal of Epidemiology, 173(11):1327–35, June 2011. ISSN 1476-6256. doi: 10.1093/aje/kwr013. URL http://aje.oxfordjournals.org/content/173/11/1327.short.
- Russek et al.  E Russek, R A Kronmal, and L D Fisher. The effect of assuming independence in applying Bayes’ theorem to risk estimation and classification in diagnosis. Computers and Biomedical Research, 16(6):537–52, December 1983. ISSN 0010-4809. URL http://www.ncbi.nlm.nih.gov/pubmed/6653089.
- Sierra and Larranaga  B. Sierra and P. Larranaga. Predicting survival in malignant skin melanoma using Bayesian networks automatically induced by genetic algorithms. An empirical comparison between different approaches. Artificial Intelligence in Medicine, 14(1-2):215–230, 1998. URL http://www.sciencedirect.com/science/article/pii/S0933365798000244.
- Singh  Sandeep Singh. Using Naive Bayes for Data Clustering. University of New Brunswick (Canada), 2005. ISBN 0494253711. URL http://books.google.com/books?id=LuO576rEXCIC&pgis=1.
- Štajduhar and Dalbelo-Bašić  Ivan Štajduhar and Bojana Dalbelo-Bašić. Learning Bayesian networks from survival data using weighting censored instances. Journal of Biomedical Informatics, 43(4):613–22, August 2010. ISSN 1532-0480. doi: 10.1016/j.jbi.2010.03.005. URL http://dx.doi.org/10.1016/j.jbi.2010.03.005.
- Titterington et al.  D. M. Titterington, G. D. Murray, L. S. Murray, D. J. Spiegelhalter, A. M. Skene, J. D. F. Habbema, and G. J. Gelpke. Comparison of Discrimination Techniques Applied to a Complex Data Set of Head Injured Patients. Journal of the Royal Statistical Society. Series A (General), 144(2):145, 1981. ISSN 00359238. doi: 10.2307/2981918. URL http://www.citeulike.org/user/wkretzsch/article/9414776.
- Štajduhar and Dalbelo-Bašić  Ivan Štajduhar and Bojana Dalbelo-Bašić. Uncensoring censored data for machine learning: A likelihood-based approach. Expert Systems with Applications, January 2012. ISSN 09574174. doi: 10.1016/j.eswa.2012.01.054. URL http://dx.doi.org/10.1016/j.eswa.2012.01.054.
- Zadrozny and Elkan  B Zadrozny and C Elkan. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. ICML, 1:609–616, 2001. URL http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.29.3039&rep=rep1&type=pdf.
- Zupan et al.  B. Zupan, J. Demsar, M. W. Kattan, J. R. Beck, and I. Bratko. Machine learning for survival analysis: a case study on recurrence of prostate cancer. Artificial Intelligence in Medicine, 20(1):59–75, August 2000. ISSN 0933-3657. URL http://www.ncbi.nlm.nih.gov/pubmed/11185421.
Appendix
Let be an event time for which and . Then
are consistent for and , and
are consistent for , , provided that the following conditions hold:
Condition (2). .
We begin by showing that , where we have replaced by for clarity. By writing
it is clear that the numerator and denominator converge to and respectively. Let ; then the expression in the denominator is (which is since ) and the numerator can be written as
Converting back to our original notation, we conclude that
as required. Similar arguments yield the results that , , and . Since , the desired results and are immediate. We therefore focus on and .
Having shown that , we now show that under the above Conditions. Using to denote the density function, we have:
Hence, we have the desired result on the expectation of , yielding the consistency of . The same argument gives the result on the expectation of , yielding the consistency of .