A Naive Bayes machine learning approach to risk prediction using censored, time-to-event data

04/08/2014
by Julian Wolfson, et al.
University of Minnesota

Predicting an individual's risk of experiencing a future clinical outcome is a statistical task with important consequences for both practicing clinicians and public health experts. Modern observational databases such as electronic health records (EHRs) provide an alternative to the longitudinal cohort studies traditionally used to construct risk models, bringing with them both opportunities and challenges. Large sample sizes and detailed covariate histories enable the use of sophisticated machine learning techniques to uncover complex associations and interactions, but observational databases are often "messy," with high levels of missing data and incomplete patient follow-up. In this paper, we propose an adaptation of the well-known Naive Bayes (NB) machine learning approach for classification to time-to-event outcomes subject to censoring. We compare the predictive performance of our method to the Cox proportional hazards model which is commonly used for risk prediction in healthcare populations, and illustrate its application to prediction of cardiovascular risk using an EHR dataset from a large Midwest integrated healthcare system.


1 Introduction

The task of predicting the occurrence of future clinical events is central to many areas of medical practice. “Risk calculators”, which make personalized risk predictions based on individual characteristics, are now commonly employed for outcomes such as heart attack, stroke, and diabetes. A common feature of many of these risk calculators is that their predictions are based on data from cohort studies. For example, the Framingham Risk Score (FRS) [1, 10] for predicting the occurrence of cardiovascular disease (CVD) is based on data from the Framingham Heart Study [16], a cohort study which has been ongoing since 1948. Many calculators, including the FRS, focus on predicting five- and ten-year risk, i.e., the probability of having a first CVD-defining event (coronary heart disease, stroke, or other CVD-related death) in the next five or ten years. Unfortunately, the cost of subject acquisition and long-term follow-up restricts the size of these studies; for example, total enrollment in the Framingham Heart Study across three generational cohorts is slightly less than 15,000 [2], and a recent study of a subset of 8,491 FHS participants [11] reported 1,174 incident CVD events over 12 years of follow-up. As a result, longitudinal cohort studies may be limited in their ability to describe unique features of contemporary patient populations, and in their capacity to predict risk over shorter time horizons (e.g., 1 year, 5 years) when the event rate is low.

An alternative data source for risk prediction is electronic health record (EHR) data, which is increasingly available on entire populations under care within health insurance systems. A typical EHR data capture covers a fixed time period, often several years, during which subjects may enroll in and/or disenroll from the health insurance plan. EHR data often contain hundreds of thousands of records on enrolled subjects from multiple sources including electronic medical records, claims and prescription data, and state and national databases. Since the data are recorded longitudinally, they are well-suited to obtaining risk estimates, but their complexity, dimensionality, and (often) high levels of missing data challenge standard statistical methods.

Risk engines for a variety of clinical outcomes have traditionally been developed by fitting survival regression models such as the Cox proportional hazards model [9] or the accelerated failure time model [5] to censored time-to-event data from the aforementioned longitudinal studies. Because regression-based models yield estimates of the effects of each covariate in the model, they are useful not only as a predictive tool, but also for understanding the relative influence of covariates on risk. However, the drawback of fitting simple regression models is that they often lack the flexibility to capture non-linear covariate effects and interactions. If predictive performance and not model interpretability is the central concern, then more flexible machine learning techniques may yield improved risk engines by capturing complex relationships between covariates and risk [6].

In this paper, we present Censored Naive Bayes (CNB), a machine learning approach to predicting risk using time-to-event data subject to censoring. CNB is an extension of Naive Bayes (NB), a popular technique typically applied to classification problems with categorical outcomes. CNB is nonparametric with respect to the underlying distribution of event times, and models the marginal covariate distributions in a flexible manner. We begin by describing our setting of interest and then describe an extension of the usual Naive Bayes technique that allows estimation of the conditional survivor function. The estimated survivor function can be used to predict risk over any specified time interval. We compare our proposed CNB technique to a Cox proportional hazards (CPH) based approach in a variety of simulated scenarios, and illustrate the application of our method to cardiovascular risk prediction using longitudinal electronic health record data from a large Midwest integrated healthcare system.

2 Naive Bayes for binary outcomes

Naive Bayes is a machine learning technique that has been widely applied to classification problems [13, 12, 14, 27, 19]. In this section, we describe the basic NB technique for estimating the probability of a binary outcome. In the next section, we discuss our extension of the NB to time-to-event data with censored outcomes.

Consider the task of estimating the probability of occurrence of an event $E$ over a fixed time period of length $\tau$, based on individual characteristics $X = (X_1, \dots, X_p)$ which are measured at some well-defined baseline time. For the moment, we will assume that we have data on $n$ subjects who have had $X$ measured at baseline and been followed for $\tau$ time units to observe whether or not $E$ occurred (with $Y = 1$ indicating event occurrence and $Y = 0$ otherwise). The target of estimation is $P(Y = 1 \mid X)$.

While regression-based approaches derive a likelihood or estimating function by directly modeling $P(Y = 1 \mid X)$ as a function of $X$, an alternative is to proceed indirectly by using Bayes Rule, writing

$$P(Y = 1 \mid X) = \frac{P(X \mid Y = 1)\,P(Y = 1)}{P(X \mid Y = 1)\,P(Y = 1) + P(X \mid Y = 0)\,P(Y = 0)}.$$

In general, it may be difficult to model the joint distribution of $X$ given $Y$, particularly if $p$ is large. Researchers have therefore proposed to simplify estimation by assuming parsimonious dependence structures for $X \mid Y$. The Bayesian network [21] approach consists of partitioning $X$ into low-dimensional, mutually independent subsets on the basis of a directed acyclic graph (DAG) encoding conditional independence relationships between covariates. The most extreme version of this approach, termed Naive Bayes, makes the dramatic simplifying assumption that the covariates constituting $X$ are independent, given $Y$, so that

$$P(X \mid Y = y) = \prod_{j=1}^{p} P(X_j \mid Y = y). \qquad (1)$$

A number of parametric and semi-parametric approaches to modeling the univariate covariate distributions are possible and have been described elsewhere [12, 15]; one common assumption in classification problems is that the individual covariate distributions can be represented using mixtures of Normal distributions. Estimation proceeds by computing the maximum likelihood parameter estimates of each covariate distribution (conditional on $Y = y$), and plugging the resulting estimates into (1). $P(Y = 1)$ is typically estimated as the sample proportion of subjects with $Y = 1$.
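To make this recipe concrete, here is a minimal sketch in R of a Gaussian Naive Bayes fit and prediction for a binary outcome. It is our illustration, not the authors' code; it uses a single Normal working model per covariate rather than the Normal mixtures mentioned above, and the inputs are hypothetical.

    # Gaussian Naive Bayes for a binary outcome y (0/1) and an n x p covariate matrix X.
    gnb_fit <- function(X, y) {
      list(prior1 = mean(y == 1),
           mu1 = colMeans(X[y == 1, , drop = FALSE]),
           sd1 = apply(X[y == 1, , drop = FALSE], 2, sd),
           mu0 = colMeans(X[y == 0, , drop = FALSE]),
           sd0 = apply(X[y == 0, , drop = FALSE], 2, sd))
    }

    gnb_predict <- function(fit, xnew) {
      # xnew: p-vector; returns the estimated P(Y = 1 | X = xnew) via Bayes Rule and (1)
      l1 <- sum(dnorm(xnew, fit$mu1, fit$sd1, log = TRUE)) + log(fit$prior1)
      l0 <- sum(dnorm(xnew, fit$mu0, fit$sd0, log = TRUE)) + log(1 - fit$prior1)
      1 / (1 + exp(l0 - l1))
    }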

Even though the Naive Bayes assumption is often incorrect, NB has been found to perform quite well in a variety of classification tasks, including when covariates are highly correlated [13, 12, 29, 20, 25]. NB is also known to be poorly calibrated in some settings, i.e., conditional probability estimates are biased though they are highly correlated with the true probabilities so that classification performance remains acceptable [31]. NB easily accommodates problems with large numbers of covariates, can be computed quickly for large samples, and implicitly handles datasets with missing covariate values. The problem of NB calibration is addressed at greater length in the Simulations and Discussion sections.

3 Censored Naive Bayes

3.1 Limitations of the binary-outcomes approach

In many studies, some participants will have incomplete data because of loss to follow-up or censoring. The EHR dataset motivating this work consists of patient records captured between 2002 and 2009; over this time period, some patients who contributed data in 2002 disenrolled prior to 2009, and other patients enrolled and began contributing data after 2002. If $Y$ denotes the indicator that a subject experiences a cardiovascular event within 5 years of his/her “index visit” (a term we define more formally in the next section), then subjects who were followed for less than five years and who did not experience an event have $Y$ undefined. One approach is to discard such subjects (see, e.g., [26, 4, 18]), but when a substantial fraction of subjects do not have five years of follow-up this strategy may lead to biased probability estimates since the underlying failure times are right-truncated. The typical classification-based approach suffers from other drawbacks as well: for example, changing the time horizon over which one wants to make predictions (e.g., from 5 years to 3 years) requires recalculating the outcomes and refitting the model, which may be impractical if the NB is to be used as the basis of a risk calculator in clinical practice. [32] and [28] have proposed approaches which impute the event times of censored observations in a reasonable but somewhat ad hoc manner. The authors of [30] adopt a more principled likelihood-based approach to imputing event times, but their imputation technique may perform poorly if the assumed parametric distribution of event times is incorrect.

3.2 Notation and setup

Consider a longitudinal study of $\tau$ years' duration, from which data on $n$ independent subjects are available. Subjects may enter the study at any time between the beginning of the study and year $\tau$, and follow-up can end when an event occurs, when the study ends in year $\tau$, or when a subject drops out of the study prior to year $\tau$. For each subject $i$, define $T_i$ as the time between some index time $t_{0i}$ and the occurrence of an event (event time), and $C_i$ as the time between $t_{0i}$ and the end of follow-up due to study termination or drop-out (censoring time). In our data application example, the index time corresponds to the first clinic visit following the initial 12 months of continuous enrollment in the healthcare system. For all subjects $i = 1, \dots, n$, we observe data $(\tilde{T}_i, \delta_i, X_i)$, where $\tilde{T}_i = \min(T_i, C_i)$ is the observed time, $\delta_i = I(T_i \le C_i)$, and $X_i$ is a $p$-vector of continuous-valued covariates available at $t_{0i}$. The target of inference is now $S(t \mid X) = P(T > t \mid X)$, the survivor function conditional on $X$. In what follows, we will use $f(\cdot \mid \cdot)$ as shorthand for the conditional pmf or density of its first argument, given the second.

By Bayes Theorem, we can write

$$S(t \mid X) = \frac{f(X \mid T > t)\,S(t)}{f(X \mid T > t)\,S(t) + f(X \mid T \le t)\,\bigl(1 - S(t)\bigr)}. \qquad (2)$$

In this context, our “naiveness” assumption is that the covariates $X_1, \dots, X_p$ are independent conditional on the events $\{T > t\}$ and $\{T \le t\}$, for all $t$. We therefore rewrite (2) as

$$S(t \mid X) = \frac{\Bigl[\prod_{j=1}^{p} f_j(X_j \mid T > t)\Bigr]\,S(t)}{\Bigl[\prod_{j=1}^{p} f_j(X_j \mid T > t)\Bigr]\,S(t) + \Bigl[\prod_{j=1}^{p} f_j(X_j \mid T \le t)\Bigr]\,\bigl(1 - S(t)\bigr)}. \qquad (3)$$

The key components which require estimation are therefore the marginal survivor function $S(t)$ and the covariate densities $f_j(x \mid T > t)$ and $f_j(x \mid T \le t)$. The following sections describe an approach to estimating these components and hence the resulting conditional survivor function $S(t \mid X)$.

3.3 Estimation of the marginal survivor function

We propose to estimate $S(t)$ non-parametrically via the Kaplan-Meier estimator [17]. Let the observed event times be $t_1 < t_2 < \dots < t_K$. For $t \ge 0$, we estimate

$$\hat{S}(t) = \prod_{k:\, t_k \le t} \left(1 - \frac{d_k}{n_k}\right),$$

where $d_k$ and $n_k$ respectively give the number of events and the number of individuals at risk at time $t_k$.

Since the Kaplan-Meier estimator makes no assumptions about the form of $S(t)$, it is ideally suited to a machine learning situation where little information may be available to guide parametric assumptions about the failure time distribution. Furthermore, since it is the nonparametric maximum likelihood estimator (NPMLE) of $S(t)$, it is a natural choice among the set of all nonparametric estimators. Most standard scientific and/or statistical software (MATLAB, Mathematica, R, etc.) possesses built-in functions for calculating the Kaplan-Meier estimate from censored data, making implementation trivial. In situations where the distribution of failure times follows a known parametric distribution, a parametric approach to estimating the marginal survivor function may be chosen instead.
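For example, using the survival package in R (a sketch with toy data; the variable names are ours):

    library(survival)

    set.seed(1)
    Tfail <- rexp(200); Ccens <- rexp(200)      # toy failure and censoring times
    ttilde <- pmin(Tfail, Ccens)                # observed time, min(T, C)
    delta  <- as.numeric(Tfail <= Ccens)        # event indicator I(T <= C)

    km_fit <- survfit(Surv(ttilde, delta) ~ 1)  # Kaplan-Meier fit
    S_hat  <- stepfun(km_fit$time, c(1, km_fit$surv))
    S_hat(1)                                    # estimated marginal P(T > 1)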

The Kaplan-Meier estimator may be biased if censoring is informative, i.e., if the failure and censoring times are dependent. In this paper, where we aim to describe a simple extension of NB to time-to-event data, we will operate under the assumption that censoring occurs at random. Formally, we assume that $T \perp C$. The assumption can be slightly weakened if there is a known set of discrete-valued variables $W$ such that the failure and censoring times are conditionally independent given $W$, in which case one could consider estimating $S(t)$ as $\sum_{w} P(W = w)\,S(t \mid W = w)$, where $S(t \mid W = w)$ can be consistently estimated via Kaplan-Meier within the subset of individuals with $W = w$. Random censoring may be a reasonable assumption in EHR datasets where most censoring is induced either by the end of the data capture window or by subjects disenrolling from the healthcare plan, often due to an employer switching plans. Even in cases where random censoring does not strictly hold, the NB approach can be viewed as trading off some bias in estimation of $S(t)$ for additional flexibility in estimation of $S(t \mid X)$ via modeling of the conditional covariate densities $f_j(x \mid T > t)$ and $f_j(x \mid T \le t)$.

3.4 Estimation of the covariate densities $f_j(x \mid T > t)$ and $f_j(x \mid T \le t)$

If there are $p$ total covariates, then the Naive Bayes assumption requires estimation of the densities $f_j(x \mid T > t)$ and $f_j(x \mid T \le t)$ for $j = 1, \dots, p$ at every $t$. For fixed $t$, one could estimate these univariate conditional densities quite flexibly using kernel density methods, but one is then left with the challenge of combining the resulting density estimates across adjacent values of $t$ to yield relatively smooth functions. In preliminary work, we discovered that the kernel density estimation approach was highly variable and performed poorly, so instead we propose much simpler parametric models yielding density estimates which vary smoothly in $t$. Specifically, we assume that

$$f_j(x \mid T > t) = \frac{1}{\sigma_j^+(t)}\,\phi\!\left(\frac{x - \mu_j^+(t)}{\sigma_j^+(t)}\right), \qquad f_j(x \mid T \le t) = \frac{1}{\sigma_j^-(t)}\,\phi\!\left(\frac{x - \mu_j^-(t)}{\sigma_j^-(t)}\right),$$

where $\phi$ is the standard Normal PDF. The assumption that changes in the covariate distributions with $t$ are restricted to location-scale shifts within the Normal family is a strong one, but our simulations demonstrate that it need not hold exactly for NB to perform well.

For the observed failure times $t_1 < \dots < t_K$, consider the following non-parametric estimators of $\mu_j^+(t_k)$, $\sigma_j^{+2}(t_k)$, $\mu_j^-(t_k)$, and $\sigma_j^{-2}(t_k)$:

$$\hat{\mu}_j^+(t_k) = \frac{\sum_{i=1}^{n} X_{ij}\,I(\tilde{T}_i > t_k)}{\sum_{i=1}^{n} I(\tilde{T}_i > t_k)} \qquad (4)$$

$$\hat{\sigma}_j^{+2}(t_k) = \frac{\sum_{i=1}^{n} \bigl(X_{ij} - \hat{\mu}_j^+(t_k)\bigr)^2\,I(\tilde{T}_i > t_k)}{\sum_{i=1}^{n} I(\tilde{T}_i > t_k)} \qquad (5)$$

$$\hat{\mu}_j^-(t_k) = \frac{\sum_{i=1}^{n} X_{ij}\,I(\tilde{T}_i \le t_k,\ \delta_i = 1)}{\sum_{i=1}^{n} I(\tilde{T}_i \le t_k,\ \delta_i = 1)} \qquad (6)$$

$$\hat{\sigma}_j^{-2}(t_k) = \frac{\sum_{i=1}^{n} \bigl(X_{ij} - \hat{\mu}_j^-(t_k)\bigr)^2\,I(\tilde{T}_i \le t_k,\ \delta_i = 1)}{\sum_{i=1}^{n} I(\tilde{T}_i \le t_k,\ \delta_i = 1)} \qquad (7)$$

In (4) and (5), all individuals with observation times $\tilde{T}_i > t_k$ contribute to estimation of $\mu_j^+(t_k)$ and $\sigma_j^+(t_k)$, since $\tilde{T}_i > t_k$ implies that $T_i > t_k$. In (6) and (7), only observations with observed failures prior to $t_k$ contribute to estimation of $\mu_j^-(t_k)$ and $\sigma_j^-(t_k)$. The procedure excludes censored observations with $\tilde{T}_i \le t_k$ and $\delta_i = 0$, for which it is possible that either $T_i > t_k$ or $T_i \le t_k$. We show in Appendix A that, for event times yielding non-zero denominators, (4)-(7) are consistent estimators of their respective parameters under the following conditions:

Condition (1). $T \perp C \mid X$.
Condition (2). $C \perp X_j$ for $j = 1, \dots, p$.

Condition (1) is the standard assumption of independence of failure and censoring times, conditional on the covariates $X$. Condition (2) requires that the censoring time be independent of each covariate $X_j$. This assumption may be plausible in settings (such as EHR data) where censoring is largely administrative in nature, induced by the fact that some subjects enrolled near the end of the study period, or occurring due to factors unlikely to be related to failure time (e.g., a subject's employer switching health plans). Together, Conditions (1) and (2) imply that $T \perp C$, i.e., failure and censoring times are marginally independent.
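As a concrete sketch (ours, not the authors' released software), the restricted-sample versions of (4)-(7) written above can be computed at each observed failure time as follows:

    # ttilde: observed times; delta: event indicators; X: n x p matrix of covariates
    cond_moments <- function(ttilde, delta, X) {
      tk <- sort(unique(ttilde[delta == 1]))                 # observed failure times t_1 < ... < t_K
      res <- lapply(tk, function(t) {
        at_risk <- ttilde > t                                # subjects contributing to (4)-(5)
        failed  <- ttilde <= t & delta == 1                  # subjects contributing to (6)-(7)
        if (sum(at_risk) < 2 || sum(failed) < 2) return(NULL)  # skip times with near-empty groups
        data.frame(t = t, j = seq_len(ncol(X)),
                   mu_plus  = colMeans(X[at_risk, , drop = FALSE]),
                   sd_plus  = apply(X[at_risk, , drop = FALSE], 2, sd),
                   mu_minus = colMeans(X[failed,  , drop = FALSE]),
                   sd_minus = apply(X[failed,  , drop = FALSE], 2, sd),
                   n_plus = sum(at_risk), n_minus = sum(failed))
      })
      do.call(rbind, res)
    }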

3.4.1 Smoothing of covariate densities

The estimators (4)-(7) may fluctuate substantially between adjacent failure times as outlying covariate values are either excluded from the risk set or acquire more influence as less extreme values are removed. Furthermore, these estimators leave the densities undefined for values of $t$ other than the observed failure times. We therefore propose to apply nonparametric smoothing techniques to estimate the functions $\mu_j^{\pm}(t)$ and $\sigma_j^{\pm}(t)$. One popular smoothing technique is loess [7], which combines local linear fits across a moving window. Since the precision of the estimates of $\mu_j^+(t)$ and $\sigma_j^+(t)$ decreases as $t$ increases, and the precision of $\hat{\mu}_j^-(t)$ and $\hat{\sigma}_j^-(t)$ decreases as $t$ decreases, we propose to fit the loess curves to observations weighted according to their estimated precision. For the mean curve $\mu_j^+(t)$, we fit a loess curve to the weighted data $\{(t_k, \hat{\mu}_j^+(t_k), w_k^+)\}_{k=1}^{K}$, where $w_k^+ = n_k^+ / s_{jk}^{+2}$, with $s_{jk}^{+2}$ the empirical variance of the $X_{ij}$ among the $n_k^+ = \sum_i I(\tilde{T}_i > t_k)$ subjects at risk at $t_k$. For $\hat{\sigma}_j^+(t_k)$, the weights are taken to be inversely proportional to the estimated sampling variance of the corresponding sample variance. $\hat{\mu}_j^-(t_k)$ and $\hat{\sigma}_j^-(t_k)$ are weighted by the corresponding formulae, with $n_k^-$ and $s_{jk}^{-2}$ the number and empirical variance of the $X_{ij}$ among subjects with observed failure times prior to $t_k$.
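In R, this precision-weighted smoothing can be carried out with the built-in loess function. The sketch below builds on the hypothetical cond_moments() helper above and, for simplicity, reuses the mean-curve weights for the standard-deviation curves:

    smooth_curves <- function(est_j, span = 0.75) {
      # est_j: rows of cond_moments() output for a single covariate j
      w_plus  <- est_j$n_plus  / est_j$sd_plus^2         # precision weights for the "+" curves
      w_minus <- est_j$n_minus / est_j$sd_minus^2        # precision weights for the "-" curves
      list(mu_plus  = loess(mu_plus  ~ t, data = est_j, weights = w_plus,  span = span),
           sd_plus  = loess(sd_plus  ~ t, data = est_j, weights = w_plus,  span = span),
           mu_minus = loess(mu_minus ~ t, data = est_j, weights = w_minus, span = span),
           sd_minus = loess(sd_minus ~ t, data = est_j, weights = w_minus, span = span))
    }
    # Smoothed values at an arbitrary time t0: predict(curves$mu_plus, newdata = data.frame(t = t0))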

The final algorithm for estimating the conditional survivor function $S(t \mid X)$ is therefore as follows.

Given the set of observed failure times $t_1 < t_2 < \dots < t_K$:

  1. For every covariate $j = 1, \dots, p$, do the following:

    1. Evaluate $\hat{\mu}_j^+(t_k)$, $\hat{\sigma}_j^+(t_k)$, $\hat{\mu}_j^-(t_k)$, and $\hat{\sigma}_j^-(t_k)$ for $k = 1, \dots, K$.

    2. Fit a weighted loess smoother to plots of the resulting estimates versus $t_k$, for $k = 1, \dots, K$. Let the resulting estimated nonparametric regression functions be known as $\tilde{\mu}_j^+(t)$, $\tilde{\sigma}_j^+(t)$, $\tilde{\mu}_j^-(t)$, and $\tilde{\sigma}_j^-(t)$.

    3. Plug in $\tilde{\mu}_j^+$ and $\tilde{\sigma}_j^+$ to estimate $f_j(x \mid T > t)$, and $\tilde{\mu}_j^-$ and $\tilde{\sigma}_j^-$ to estimate $f_j(x \mid T \le t)$, assuming that $X_j \mid T > t \sim N\bigl(\tilde{\mu}_j^+(t), \tilde{\sigma}_j^+(t)^2\bigr)$ and $X_j \mid T \le t \sim N\bigl(\tilde{\mu}_j^-(t), \tilde{\sigma}_j^-(t)^2\bigr)$.

    4. Estimate the marginal survivor function $S(t)$ using the Kaplan-Meier estimator.

  2. Estimate the conditional survivor function $S(t \mid X)$ by plugging the estimates from steps 3 and 4 into (3). If the plug-in estimates yield an estimated probability greater than 1, report a predicted probability of 1.

Software implementing this extended Naive Bayes technique is available on the website of the first author (Wolfson). In the following sections, we illustrate the application of our method to simulated data, compare performance with the Cox proportional hazards model, and use the NB to build a cardiovascular risk prediction model using electronic health record data.
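Putting the pieces together, here is a minimal sketch of the plug-in estimate (3) for a new covariate vector at a given time, built on the hypothetical helpers sketched above and a Kaplan-Meier fit from the survival package; it is an illustration of the recipe, not the released software:

    library(survival)

    predict_cnb <- function(xnew, t0, curves_by_cov, km_fit) {
      # xnew: p-vector of covariates; curves_by_cov: list of smooth_curves() outputs, one per covariate
      S_t <- summary(km_fit, times = t0)$surv            # Kaplan-Meier estimate of S(t0)
      log_num <- log(S_t); log_alt <- log(1 - S_t)
      for (j in seq_along(xnew)) {
        cj <- curves_by_cov[[j]]
        log_num <- log_num + dnorm(xnew[j],
                                   predict(cj$mu_plus,  data.frame(t = t0)),
                                   predict(cj$sd_plus,  data.frame(t = t0)), log = TRUE)
        log_alt <- log_alt + dnorm(xnew[j],
                                   predict(cj$mu_minus, data.frame(t = t0)),
                                   predict(cj$sd_minus, data.frame(t = t0)), log = TRUE)
      }
      1 / (1 + exp(log_alt - log_num))                   # plug-in estimate of S(t0 | xnew)
    }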

4 Simulations

One theoretical advantage of the proposed Bayesian network approach is that it does not depend on correct specification of a regression model for the hazard function (as in the Cox model). Hence, NB might be better suited to modeling data which do not follow a proportional hazards model, or for which the functional relationship between covariates and the hazard is misspecified. In this section, we compare the performance of our Naive Bayes method with the Cox proportional hazards model (CPH) when 1) failure times follow a proportional hazards model, 2) failure times follow a log-logistic accelerated failure time (AFT) model such that the hazards are non-proportional, and 3) the manner in which covariates influence the failure time is misspecified in the regression model. In all cases, censoring times were generated independently of the covariates and failure times, thereby guaranteeing the conditional and unconditional independence of censoring and failure times. Observed event times and censoring indicators were defined in the usual way: $\tilde{T}_i = \min(T_i, C_i)$ and $\delta_i = I(T_i \le C_i)$. We consider predicting the probability of surviving beyond 7 years, $S(7 \mid X)$, though we emphasize that our approach yields estimates of the entire conditional survivor function, so that $S(t \mid X)$ could be predicted for any $t$ with a trivial amount of additional computation.

4.1 Performance metrics

In our simulation, we sought to evaluate two characteristics of the NB and CPH techniques: model calibration, the degree to which predicted and true event probabilities coincide; and model discrimination, the degree to which the model can identify low- and high-risk individuals. In all cases, calibration and discrimination were assessed on separately simulated validation datasets of equal size. All tables are based on 1000 simulations.

4.1.1 Model calibration.

To assess model calibration, bias and mean squared error (MSE) of the predicted probabilities were computed with respect to the true event probabilities calculated from the various simulation models. We report “conditional” bias and MSE within specific population subgroups, as well as “marginal” measures which average across the entire simulated population.

4.1.2 Model discrimination.

To compare the ability of the CPH and NB models to predict events (in this case, a failure prior to time $t = 7$), we used the net reclassification improvement (NRI) [23]. The NRI, also referred to as the net reclassification index, compares the number of “wins” for each model among discordant predictions. We defined risk quartiles according to the true event probabilities; risk predictions for an individual were considered discordant between the NB and CPH models if the predictions fell in different quartiles. The NRI for comparing the NB and CPH models is defined by

$$\mathrm{NRI} = \frac{U_E - D_E}{N_E} + \frac{D_{NE} - U_{NE}}{N_{NE}}, \qquad (8)$$

where $U_E$ is the number of individuals who experienced events prior to 7 years and were placed in a higher risk category by the NB than the CPH algorithm (the opposite change in risk categorization yields $D_E$), $D_{NE}$ and $U_{NE}$ count the number of individuals who did not experience an event and were “down-classified” and “up-classified” by NB relative to CPH, respectively, and $N_E$ and $N_{NE}$ are the total number of events and non-events, respectively. Event indicators were defined by whether the simulated failure time was less than 7, regardless of whether the failure was observed or not. A positive NRI means better reclassification performance for the NB model, while a negative NRI favors the CPH model. While NRIs can theoretically range from -1 to 1, in the risk prediction setting they do not typically exceed the range [-0.25, 0.25]. For example, [8] calculated the effects of omitting various risk factors from the Reynolds Risk Score model for prediction of 10-year cardiovascular risk. The estimated NRIs ranged from -0.195 (omitting age) to -0.032 (omitting total cholesterol or parental history of MI), and all were statistically significant at the 0.05 level. In addition to the NRI, the tables of simulation results also display the reclassification improvement rates for events and non-events (i.e., the summands in (8)) separately. In our simulations, the true event times of each individual in the test set were used to calculate the NRI. In the real data analyzed in Section 5, the test data are subject to censoring; hence we use a modified version of the NRI due to [22] which accounts for censoring.
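A small sketch of the (uncensored) NRI in (8) as used in our simulations, written in R with hypothetical inputs:

    # event: 0/1 indicator of failure before year 7 (known in the simulation setting)
    # p_nb, p_cph: predicted 7-year event probabilities from the two models
    # cutpoints: risk-category boundaries, e.g. quartiles of the true event probabilities
    nri <- function(event, p_nb, p_cph, cutpoints) {
      cat_nb  <- findInterval(p_nb,  cutpoints)
      cat_cph <- findInterval(p_cph, cutpoints)
      up   <- cat_nb > cat_cph                    # NB assigns a higher risk category than CPH
      down <- cat_nb < cat_cph
      ri_e  <- (sum(up & event == 1) - sum(down & event == 1)) / sum(event == 1)
      ri_ne <- (sum(down & event == 0) - sum(up & event == 0)) / sum(event == 0)
      c(RI_E = ri_e, RI_NE = ri_ne, NRI = ri_e + ri_ne)
    }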

4.2 Proportional hazards

Table 1 summarizes the performance of the CPH and NB models when data are generated from a proportional hazards model using the method suggested in [3]. Covariates were generated from a $p$-variate Normal distribution with mean zero, variance one, and an exchangeable correlation structure with pairwise correlation $\rho$. Different values of $\rho$ were considered in the various simulation settings. The final design matrix was defined as $X = [\,\mathbf{1}_n \; Z\,]$, where $\mathbf{1}_n$ is an $n$-vector of ones and $Z$ is the matrix of simulated covariates. Because each covariate was marginally standard normal, no covariate scaling was performed before applying the computational methods. Given observed covariates $x_i$ for subject $i$, a failure time was generated as

$$T_i = \left(\frac{-\log U_i}{\lambda\,\exp(x_i'\beta)}\right)^{1/\nu},$$

where $U_i \sim \mathrm{Uniform}(0, 1)$, which corresponds to generating failure times from a Weibull distribution whose scale parameter depends on $x_i'\beta$, so that the hazard for subject $i$ is proportional to the baseline hazard with proportionality factor $\exp(x_i'\beta)$. For this set of simulations, the Weibull shape parameter and the non-intercept coefficients were held fixed, and the intercept $\beta_0$ was set to 0, -1, and -2 across simulations, yielding censoring rates of 55%, 78%, and 90%, and event rates of 40%, 18%, and 7%, respectively.
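For reference, data with this proportional-hazards structure can be generated along the lines of [3] using the inverse-CDF method; the parameter values below are illustrative placeholders, not those used in the paper.

    # Illustrative generator in the spirit of [3]; all numeric values are placeholders.
    simulate_ph <- function(n, p = 5, rho = 0.7, beta = c(-1, rep(0.5, p)), shape = 1.5) {
      Sigma <- matrix(rho, p, p); diag(Sigma) <- 1          # exchangeable correlation
      Z <- matrix(rnorm(n * p), n, p) %*% chol(Sigma)       # correlated standard-normal covariates
      eta <- drop(cbind(1, Z) %*% beta)                     # linear predictor with intercept
      U <- runif(n)
      Tfail <- (-log(U) / exp(eta))^(1 / shape)             # Weibull failure times (inverse-CDF method)
      Ccens <- runif(n, 0, quantile(Tfail, 0.9))            # illustrative independent censoring
      data.frame(ttilde = pmin(Tfail, Ccens), delta = as.numeric(Tfail <= Ccens), Z)
    }

    dat <- simulate_ph(n = 1000)
    mean(dat$delta)                                          # empirical event rate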

n β₀ ρ Bias(CPH) Bias(NB) MSE(CPH) MSE(NB) RI(E) RI(NE) NRI
1000 0 0.0 0.020 -0.002 0.160 0.280 0.150 -0.160 -0.006
1000 0 0.7 0.020 -0.020 0.160 4.300 0.081 -0.041 0.040
1000 -1 0.0 0.006 -0.000 0.088 0.210 0.048 -0.069 -0.021
1000 -1 0.7 0.004 -0.057 0.087 3.500 0.000 0.045 0.045
1000 -2 0.0 0.001 0.000 0.042 0.110 -0.032 -0.004 -0.037
1000 -2 0.7 0.002 -0.043 0.041 1.900 0.010 0.035 0.045
5000 0 0.0 0.020 0.000 0.067 0.055 0.150 -0.150 0.001
5000 0 0.7 0.020 -0.019 0.068 4.300 0.078 -0.041 0.036
5000 -1 0.0 0.005 0.000 0.020 0.042 0.052 -0.059 -0.007
5000 -1 0.7 0.005 -0.056 0.020 3.500 -0.021 0.050 0.030
5000 -2 0.0 0.001 0.000 0.009 0.022 0.002 -0.016 -0.014
5000 -2 0.7 0.001 -0.046 0.008 2.000 -0.061 0.095 0.033
Table 1: Performance of CPH and NB models on data simulated from Weibull (proportional hazards) model. Mean squared errors are computed over the entire validation set and multiplied by 100. RI(E) and RI(NE) are the reclassification improvement rates for events and non-events, and their sum yields the NRI. Positive reclassification improvement scores favor NB, and negative scores favor CPH.

Figures 1(a) and 1(b) plot predicted vs. true probabilities across quantiles of a single covariate for 5000 points sampled from 10 simulation runs under the settings where $\rho = 0$ and $\rho = 0.7$, respectively ($\beta_0$ is fixed at -1).

Figure 1: Predicted vs. true probabilities across quantiles of a single covariate for 5000 points sampled from 10 simulation runs under the settings where (a) $\rho = 0$ and (b) $\rho = 0.7$; $\beta_0$ is fixed at -1. Dark gray points are predictions from CPH, light gray from NB. Dashed line is a lowess smoother applied to CPH predictions, dotted line a smoother applied to NB predictions.

When covariates are independent ($\rho = 0$), NB and CPH have similar calibration and discrimination performance. When covariates are strongly correlated ($\rho = 0.7$), NB has an anti-conservative bias in which large survival probabilities are overestimated and small survival probabilities are underestimated (see, e.g., panels 1 and 4 of Figure 1(b)). Interestingly, this bias does not degrade the discrimination ability of NB as measured by the NRI. In fact, the NRI favors NB over CPH more strongly in cases where the bias is larger. This finding is not entirely surprising; it has been shown that NB tends to drive predicted probabilities towards either 0 or 1 [31] while retaining an approximately correct ranking of observations with respect to their risk.

4.3 Nonproportional hazards

Table 2 compares the performance of the NB with the CPH model when data follow a log-logistic accelerated failure time (AFT) model. Covariates were generated as described above. Given observed covariates $x_i$, failure times were generated from a Log-logistic distribution whose scale parameter depended on the linear predictor $x_i'\beta$, with the shape parameter held fixed. Censoring times were again generated independently of the covariates and failure times. The intercept $\beta_0$ was set to 1, 0, and -1 across simulations, yielding censoring rates of 60%, 74%, and 90%, and event rates of 40%, 25%, and 8%, respectively.
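As a point of reference, log-logistic failure times with a covariate-dependent scale can be generated in R by exponentiating logistic draws; this is our sketch of the general recipe, and the shape value is an illustrative placeholder:

    # eta: linear predictor x_i' beta; shape: log-logistic shape parameter (illustrative value)
    r_loglogistic <- function(eta, shape = 2) {
      # If log(T) ~ Logistic(location = eta, scale = 1/shape), then T is log-logistic
      # with scale parameter exp(eta) and shape parameter `shape`.
      exp(rlogis(length(eta), location = eta, scale = 1 / shape))
    }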

n β₀ ρ Bias(CPH) Bias(NB) MSE(CPH) MSE(NB) RI(E) RI(NE) NRI
1000 1 0.0 0.002 0.000 0.100 0.140 0.024 0.180 0.210
1000 1 0.7 0.004 -0.099 0.097 5.800 0.018 0.270 0.290
1000 0 0.0 0.007 0.001 0.220 0.260 0.170 -0.130 0.042
1000 0 0.7 0.007 -0.058 0.200 4.100 0.160 -0.091 0.064
1000 -1 0.0 0.005 -0.001 0.230 0.310 0.150 -0.160 -0.011
1000 -1 0.7 0.005 -0.012 0.210 1.800 0.180 -0.160 0.015
5000 1 0.0 0.003 0.001 0.056 0.037 0.053 0.180 0.230
5000 1 0.7 0.002 -0.100 0.052 6.000 -0.005 0.310 0.300
5000 0 0.0 0.005 0.001 0.110 0.068 0.190 -0.140 0.050
5000 0 0.7 0.005 -0.059 0.110 4.000 0.140 -0.087 0.057
5000 -1 0.0 0.004 0.000 0.100 0.071 0.200 -0.200 -0.003
5000 -1 0.7 0.004 -0.012 0.096 1.500 0.200 -0.190 0.009
Table 2: Performance of CPH and NB models on data simulated from Log-logistic (non-proportional hazards) model. Mean squared errors are multiplied by 100. RI(E) and RI(NE) are the reclassification improvement rates for events and non-events, and their sum yields the NRI. Positive reclassification improvement scores favor NB, and negative scores favor CPH.

As in the proportional hazards case, bias was small for both methods when covariates were independent, and was higher for NB when covariates were correlated. NB performed better than CPH in terms of risk reclassification, with the advantage being particularly dramatic (net reclassification improvement in excess of 0.2) when the underlying event rate was higher. In general, the NB yielded better risk classification for subjects who experienced events, while CPH performed better for subjects without events.

4.4 Regression model misspecification

Table 3 summarizes the performance of NB and CPH models when data are generated from a Log-logistic model and the CPH linear predictor is misspecified. Our simulation is motivated by our applied data example, where the effects of age and systolic blood pressure (SBP) on the probability of experiencing a nonfatal cardiovascular event appear to be non-linear. The simulation model encodes a scenario where subjects at middle ages are at highest risk, while those at low and high ages are at lower risk.

For this set of simulations, the total sample size was held fixed across simulation runs. We started by generating an ‘age’ covariate from a Log-logistic distribution with shape parameter 10, scale parameter 50, and rate parameter 0.4; this yielded a distribution of ages with a median of 50 and an interquartile range of 10. A small fraction of simulated ages exceeded 100; these ages were set to 100. Given age, systolic blood pressure (SBP) values were simulated from a Normal distribution whose mean and standard deviation depended on age. Based on the simulated ages and SBPs, failure times were simulated from a Log-logistic distribution with fixed scale parameter and with shape parameter determined by the linear predictor

(9)

which includes main effects for age and SBP, a binary indicator of age > 60, an age-by-SBP interaction, and an interaction between continuous age and the binary age > 60 indicator. All columns of the design matrix implied by this linear predictor were standardized to have mean zero and variance one before estimation was performed, so that the coefficients represent standardized effect sizes for each covariate. The left panel of Figure 2 (plot labeled ‘True’) shows a set of survival probabilities (i.e., probabilities of having a failure time beyond 7 years) simulated from this model and plotted vs. age.

Figure 2: True and predicted risks of experiencing an event based on a Log-logistic model with shape parameter defined by (9), plotted versus age (left panel) and SBP (right panel). On each plot, the points show (for a typical simulation run) true simulated risks and the solid line is a lowess scatterplot smooth showing the trend in risk across age and SBP respectively. The dashed lines in each panel show the trend in risk predictions obtained from the Cox Proportional Hazards model incorporating linear main effects for age and SBP (first row of Table 3), while the dotted lines show the trend in predictions by Naive Bayes.

We consider the performance of the NB and CPH models for different specifications of the linear predictor in the Cox model. There is no explicit regression model associated with the NB, and hence it is simply provided with (standardized) vectors of ages and SBPs. We consider varying degrees of misspecification of the regression model, from a model incorporating only linear terms for age and SBP (omitting the age > 60 indicator and the higher-order interactions) to the full model including all the covariate terms from (9).

CPH regression terms Bias(CPH) Bias(NB) MSE(CPH) MSE(NB) RI(E) RI(NE) NRI
Age, SBP 0.008 0.000 0.623 0.419 0.175 -0.061 0.113
log(Age), SBP 0.008 0.000 0.634 0.421 0.179 -0.060 0.118
Age, SBP, Age x SBP 0.008 0.000 0.615 0.422 0.166 -0.059 0.107
Age, SBP, (Age)^2 0.009 0.001 0.498 0.421 0.127 -0.067 0.060
Age, SBP, (Age > 60) 0.011 0.000 0.268 0.422 0.078 -0.107 -0.029
All 0.011 0.001 0.272 0.422 0.076 -0.109 -0.032
Table 3: Performance of CPH and NB models for data simulated from a Log-logistic regression model including the variables from (9). The total sample size was the same for each simulation. The first column of the table specifies which terms are included in the CPH regression model; the NB is provided with age and SBP values only. The final row shows the results when the CPH model includes all relevant covariates. RI(E) and RI(NE) are the reclassification improvement rates for events and non-events, respectively. Mean squared errors are multiplied by 100.

Table 3 and Figure 2 illustrate how the additional flexibility of the NB model allows more accurate predictions to be obtained when relevant covariates are omitted from the regression model. The NB model has smaller MSE and a positive NRI when compared to a CPH model which includes only main effects for age (or log(Age)) and SBP. Figure 2 displays predicted survival probabilities from a typical simulation run using the NB and the main-effects-only CPH model. The overlaid smoothers show clearly that the NB captures the non-linear effects of age and SBP observed in the true probabilities (left panel) better than CPH. Returning to Table 3, adding an age x SBP interaction term does not markedly improve the performance of CPH. Incorporating a quadratic effect for age, a standard strategy when faced with apparent non-linearity in the effect of a covariate, yields some improvement in CPH, but the NB retains an advantage in MSE and NRI. When the (Age > 60) indicator term is included, the CPH model outperforms NB both in terms of MSE and NRI.

5 Data application

We illustrate our Naive Bayes method by applying it to the task of predicting the risk of cardiovascular events from longitudinal electronic health record data. The dataset includes medical information on a cohort of patient-members age 18 years and older from a large Midwest integrated healthcare system. Data were available on members who attended a primary care clinic between 2002 and 2009, and who were continuously enrolled for at least twelve consecutive months during that period.

We consider the task of predicting the individual risk of experiencing a cardiovascular event, including death where cardiovascular disease was listed as the primary cause; procedures such as thrombectomy, stent placement, or coronary artery bypass graft; and diagnoses such as stroke or myocardial infarction. For both event definitions, failure and censoring times of subjects were measured from an index visit, corresponding to the first clinic visit following 12 months of continuous enrollment in the healthcare system. Only first events were recorded; subjects were censored if they disenrolled prior to experiencing an event or were event-free when the dataset was locked and prepared for analysis in 2009. Those whose primary cause of death was not cardiovascular disease were coded as censored. In this data application example, we did not adjust either the CPH or NB methods for competing risks.

Information available for predicting the risk of cardiovascular events included age, blood pressure, and LDL cholesterol. We restricted our analysis to subjects who had values for these risk factors recorded prior to the index visit; these factors were treated as baseline predictors. The one exception was cholesterol, which was often missing for subjects under the age of 50. Due to low rates of dyslipidemia in this group, subjects without a measured LDL cholesterol within the first year of enrollment were assigned a normal LDL cholesterol value of 120 mg/dl. With these restrictions, the analysis cohort consisted of 274,964 patients with a mean follow up of 4.8 years. The cohort was randomly split into a training dataset consisting of 75% of the observations (206,223 subjects), and a test dataset consisting of the remaining 25% (68,741 subjects). After standardizing the covariate values, we applied both the CPH and NB methods to the training data and obtained predictions for the test data. The CPH model regression equation included linear terms for each of the three risk factors (age, blood pressure, and LDL cholesterol), consistent with existing CPH-based techniques such as the Framingham risk equations.
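For orientation, the Cox benchmark could be fit along the following lines in R; the data frame ehr and its column names (futime, cvd_event, age, sbp, ldl) are hypothetical stand-ins for the study database, not the authors' code.

    library(survival)

    # ehr: hypothetical data frame, one row per subject, with follow-up time,
    # event indicator, and baseline covariates
    set.seed(2014)
    train_idx <- sample(nrow(ehr), floor(0.75 * nrow(ehr)))  # 75% / 25% split
    train <- ehr[train_idx, ]; test <- ehr[-train_idx, ]

    vars <- c("age", "sbp", "ldl")
    ctr <- colMeans(train[vars]); sds <- sapply(train[vars], sd)
    train[vars] <- scale(train[vars], center = ctr, scale = sds)   # standardize using training data
    test[vars]  <- scale(test[vars],  center = ctr, scale = sds)

    cph <- coxph(Surv(futime, cvd_event) ~ age + sbp + ldl, data = train)

    # Predicted 5-year risk = 1 - exp(-H0(5) * exp(linear predictor))
    H0 <- basehaz(cph, centered = TRUE)
    H5 <- approx(H0$time, H0$hazard, xout = 5, rule = 2)$y
    risk5_cph <- 1 - exp(-H5 * exp(predict(cph, newdata = test, type = "lp")))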

Figure 3 shows five-year cardiovascular risk predicted by NB and CPH, by age, for the two event definitions. Overlaid on the two plots is the observed event rate for five-year age ranges, computed within each range via Kaplan-Meier. The figure illustrates how the flexibility of the NB method allows it to capture a marked uptick in event rate after the age of 50; in contrast, the linear predictor of the Cox model restricts the set of age-risk relationships that can be well approximated.

Figure 3: Observed and predicted 5-year cardiovascular event rates vs. age.

5.1 Reclassification with censored observations

As noted earlier, the Net Reclassification Improvement statistic cannot be applied directly to data where the outcome status of some subjects is unknown. In our setting, omitting subjects with less than five years of follow-up (or treating them as non-events) will result in biased estimates of the NRI. To evaluate risk reclassification on our test data which are subject to censoring, we use a “censoring-adjusted” NRI (cNRI) which takes the form

(10)

Full details appear in [22]; briefly, the expected number of events overall and among up-classified and down-classified individuals is computed from event probabilities estimated via Kaplan-Meier. Uncertainty intervals are obtained by computing the cNRIs on the test set for predictions from NB and Cox models fitted to 500 bootstrap resamples of the training data.
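The bootstrap step can be sketched generically as follows; cnri_fun below is a placeholder for a routine that refits both models on a resampled training set and returns the censoring-adjusted NRI of [22] evaluated on the fixed test set, a calculation we do not reproduce here.

    boot_cnri <- function(train, test, cnri_fun, B = 500) {
      # cnri_fun(train, test) should return a single cNRI value
      stats <- replicate(B, {
        boot_train <- train[sample(nrow(train), replace = TRUE), ]   # resample training data
        cnri_fun(boot_train, test)                                   # refit NB and Cox, compute cNRI
      })
      est <- cnri_fun(train, test)
      c(estimate = est,
        lower = est - 1.96 * sd(stats),                              # Wald-type interval from bootstrap SE
        upper = est + 1.96 * sd(stats))
    }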

Using the test data (n = 68,741), we computed the cNRI comparing the two models. Following the recommendations of [24], we considered clinically relevant risk categories for five-year risk of cardiovascular events: 0-5%, 5-10%, and >10%. Table 4 summarizes the reclassification improvements and net reclassification improvements for subjects with and without events. The NRI across all subjects was positive (NRI = 0.10, 95% CI 0.084 to 0.120), indicating that NB had significantly better reclassification performance than the Cox model. The reclassification improvements behave similarly to our simulations, with Naive Bayes outperforming the Cox model in terms of predicting the occurrence of events among those who actually experienced them, and the Cox model apparently more accurate among subjects who did not experience events.

n (test set) # of events cRI (events) cRI (non-events) cNRI
68,741 1,382
Table 4: Censoring-adjusted Reclassification Improvement and Net Reclassification Improvement (cRIs and cNRIs) comparing Naive Bayes and Cox models for predicting cardiovascular events from patients' electronic health record data. Positive cRIs and cNRIs favor Naive Bayes, and negative values favor the Cox model. Each entry gives the estimated cRI/cNRI with its 95% Wald confidence interval. Reclassification improvement scores that are significantly different from zero are indicated in bold (favoring Naive Bayes) and italics (favoring Cox).

6 Discussion

We have presented an extension of the Naive Bayes technique to accommodate time-to-event data subject to censoring, and illustrated its application to predicting cardiovascular risk using electronic health record data. Our method, like many machine learning techniques, is ‘model-agnostic’ and hence can capture complex relationships between demographics, biomarkers, and clinical risk without requiring a priori specification of a regression model. While it is true that a more complex Cox regression model or a set of stratified Kaplan-Meier estimates might capture observed non-linearities in risk, considering multiple candidate regression models or covariate strata increases the risk of overfitting, even when an independent test set is used. In contrast, our relatively simple Naive Bayes approach captures non-linear effects without requiring any model tuning, scoring, or comparisons. The NB is straightforward to implement and runs quickly; in our simulation studies, unoptimized R code for implementing NB had a running time similar to that of the coxph command from the survival package. In our data application, we successfully applied the NB method to a dataset with approximately 275,000 observations, where the total running time was several minutes using a simple implementation in MATLAB. The assumption of covariate independence allows our NB algorithm to be easily parallelized, which is not true of procedures for maximizing the Cox partial likelihood.

The proposed NB method assumes that covariates are independent and normally distributed, conditional on both $\{T > t\}$ and $\{T \le t\}$ for all $t$. This is unlikely to hold in practice, though the NB has been shown to be effective in other settings where distributional assumptions regarding the covariates are not satisfied. Indeed, while most covariates in our simulation study were generated from marginal normal distributions, no constraint was placed on their conditional distributions. Regarding the issue of calibration, we found that predictions from NB were mostly unbiased and often (but not always) had higher MSE than those from the Cox model. The respectable calibration of the NB predictions may be due to the smoothing of conditional covariate means and variances across time, which may attenuate and stabilize probability estimates. The overall effect may be similar to the smoothing and binning approaches described in [31] for improving calibration of NB classifiers. Perhaps unsurprisingly given its origins, NB outperformed the Cox model in terms of Net Reclassification Improvement, a metric which evaluates how successfully models classify subjects into clinically relevant risk groups. In clinical practice, the ability to discriminate between low- and high-risk patients may be more important than obtaining a precise risk estimate, which may recommend the use of Naive Bayes in settings such as the cardiovascular risk prediction task we consider here.

The covariate independence assumption can be relaxed by modeling the covariates as arising from a multivariate normal distribution, but it is unclear how to extend this approach to our setting where parameter values are smoothed over time. Specifically, the key challenge is that covariance matrices must be positive definite, a constraint which is difficult to incorporate into a smoothing procedure. The assumption of conditional normal distributions (given $\{T > t\}$ and $\{T \le t\}$) is also somewhat limiting in that it restricts NB to cases where covariates are continuous and have distributions which can be approximated by a normal. In practice, separate NB models could be fit at each level of a categorical risk factor (e.g., smokers and non-smokers), but this could be time-consuming for factors with more than two levels and inefficient for ordered factors (e.g., tumor staging). Furthermore, many risk factors for clinical disease have skewed distributions, either marginally or among subjects with survival times beyond a certain threshold. In ongoing work, we are developing techniques which accommodate discrete and skewed covariates in the NB framework.

Acknowledgments

This work was partially supported by NHLBI grant #R01HL102144-01 and AHRQ grant #R21HS017622-01.

References

Appendix A

Theorem.

Let $t$ be an event time for which $P(\tilde{T} > t) > 0$ and $P(\tilde{T} \le t, \delta = 1) > 0$. Then

$$\hat{\mu}_j^+(t) = \frac{\sum_{i=1}^{n} X_{ij}\,I(\tilde{T}_i > t)}{\sum_{i=1}^{n} I(\tilde{T}_i > t)} \qquad (11)$$

$$\hat{\sigma}_j^{+2}(t) = \frac{\sum_{i=1}^{n} \bigl(X_{ij} - \hat{\mu}_j^+(t)\bigr)^2\,I(\tilde{T}_i > t)}{\sum_{i=1}^{n} I(\tilde{T}_i > t)} \qquad (12)$$

are consistent for $\mu_j^+(t) = E[X_j \mid T > t]$ and $\sigma_j^{+2}(t) = \mathrm{Var}(X_j \mid T > t)$, and

$$\hat{\mu}_j^-(t) = \frac{\sum_{i=1}^{n} X_{ij}\,I(\tilde{T}_i \le t,\ \delta_i = 1)}{\sum_{i=1}^{n} I(\tilde{T}_i \le t,\ \delta_i = 1)} \qquad (13)$$

$$\hat{\sigma}_j^{-2}(t) = \frac{\sum_{i=1}^{n} \bigl(X_{ij} - \hat{\mu}_j^-(t)\bigr)^2\,I(\tilde{T}_i \le t,\ \delta_i = 1)}{\sum_{i=1}^{n} I(\tilde{T}_i \le t,\ \delta_i = 1)} \qquad (14)$$

are consistent for $\mu_j^-(t) = E[X_j \mid T \le t]$ and $\sigma_j^{-2}(t) = \mathrm{Var}(X_j \mid T \le t)$, provided that the following conditions hold:

Condition (1). $T \perp C \mid X$.
Condition (2). $C \perp X_j$ for $j = 1, \dots, p$.

Proof.

We begin by showing that $\hat{\mu}_j^+(t) \to_p E[X_j \mid T > t]$, where we have written $t$ in place of $t_k$ for clarity. By writing

$$\hat{\mu}_j^+(t) = \frac{\frac{1}{n}\sum_{i=1}^{n} X_{ij}\,I(\tilde{T}_i > t)}{\frac{1}{n}\sum_{i=1}^{n} I(\tilde{T}_i > t)}, \qquad (15)$$

it is clear from the law of large numbers that the numerator and denominator converge in probability to $E[X_j\,I(\tilde{T} > t)]$ and $P(\tilde{T} > t)$, respectively. Let $G(t) = P(C > t)$; then, since $\tilde{T} > t$ if and only if both $T > t$ and $C > t$, the expression in the denominator is $P(T > t)\,G(t)$ (which is positive since $P(\tilde{T} > t) > 0$), where the factorization uses the marginal independence of $T$ and $C$ implied by Conditions (1) and (2), and the numerator can be written as

$$E\bigl[X_j\,I(T > t)\,I(C > t)\bigr] \qquad (16)$$

$$= E\bigl\{X_j\,P(T > t \mid X)\,P(C > t \mid X)\bigr\} \qquad (17)$$

$$= E\bigl[X_j\,I(T > t)\bigr]\,G(t), \qquad (18)$$

where (17) follows from Condition (1) and (18) from Condition (2). Converting back to our original notation, we conclude that

$$\frac{1}{n}\sum_{i=1}^{n} X_{ij}\,I(\tilde{T}_i > t) \to_p E\bigl[X_j\,I(T > t)\bigr]\,G(t) \qquad (19)$$

$$\frac{1}{n}\sum_{i=1}^{n} I(\tilde{T}_i > t) \to_p P(T > t)\,G(t), \qquad (20)$$

as required; taking the ratio gives $\hat{\mu}_j^+(t) \to_p E[X_j \mid T > t]$. Similar arguments apply to the restricted second moments, and since $\hat{\sigma}_j^{+2}(t)$ is a continuous function of such sample averages, the desired results for (11) and (12) are immediate. We therefore focus on (13) and (14).

Having shown how the censoring distribution factors out of averages restricted to $\{\tilde{T}_i > t\}$, we now show that an analogous factorization holds for averages restricted to $\{\tilde{T}_i \le t, \delta_i = 1\}$ under the above Conditions. Using $f$ to denote the joint density of $(X_j, T)$, we have

$$E\bigl[X_j\,I(\tilde{T} \le t,\ \delta = 1)\bigr] = E\bigl[X_j\,I(T \le t,\ C \ge T)\bigr] = \int_0^{t}\!\!\int x\,P(C \ge s)\,f(x, s)\,dx\,ds,$$

with the analogous expression (omitting $x$) holding for the denominator of (13). Hence, we have the desired result on the expectation of the numerator of (13), yielding the consistency of $\hat{\mu}_j^-(t)$. The same argument gives the result on the expectation of the numerator of (14), yielding the consistency of $\hat{\sigma}_j^{-2}(t)$.