Identifying Diabetic Patients with High Risk of Readmission

02/12/2016 ∙ by Malladihalli S Bhuvan, et al.

Hospital readmissions are expensive and reflect inadequacies in the healthcare system. In the United States alone, the treatment of readmitted diabetic patients exceeds 250 million dollars per year. Early identification of patients facing a high risk of readmission can enable healthcare providers to conduct additional investigations and possibly prevent future readmissions. This not only improves the quality of care but also reduces the medical expense of readmission. Machine learning methods have been leveraged on public health data to build a system for identifying diabetic patients facing a high risk of future readmission. The number of inpatient visits, discharge disposition and admission type were identified as strong predictors of readmission. Further, it was found that the number of laboratory tests and the discharge disposition together predict whether the patient will be readmitted shortly after being discharged from the hospital (i.e. <30 days) or after a longer period of time (i.e. >30 days). These insights can help healthcare providers improve inpatient diabetic care. Finally, the cost analysis suggests that $252.76 million can be saved across 98,053 diabetic patient encounters by incorporating the proposed cost-sensitive analysis model.


I Introduction

A survey conducted by the Agency for Healthcare Research and Quality (AHRQ) found that in the year 2011 more than 3.3 million patients were readmitted in the United States within 30 days of being discharged [3]. The need for readmission indicates that inadequate care was provided to the patient at the time of first admission. Inadequate care poses a threat to patients' lives, and the treatment of readmitted patients leads to increased healthcare costs. Over 41 billion dollars were spent on the treatment of readmitted patients in 2011 [3]. Diabetes is the seventh leading cause of death and affects about 23.6 million people in the United States. With hospital readmission being a major concern in diabetes care, over 250 million dollars was spent on the treatment of readmitted diabetic patients in 2011 [3].

Patients facing a high risk of readmission need to be identified at the time of discharge from the hospital, so that improved treatment can reduce the chances of their readmission. Readmission of patients within 30 days of being discharged (short-term readmission) has been a widely used metric for studying readmissions [3]. However, a significant number of diabetic patients are readmitted more than 30 days after being discharged (long-term readmission). As opposed to previous work in the domain, we consider both short-term and long-term readmission scenarios. In addition to an effective prediction model, identifying risk factors (features in the medical record) that correlate with readmission will help practitioners consider these factors with greater care and document them better in future medical records, thereby developing more efficient medical protocols.

The core idea is to provide a comprehensive data solution to the readmission problem that healthcare institutions can implement to bring about a significant improvement in inpatient diabetic care. This solution provides all-round information to the implementing healthcare institution, along with a cost analysis model.

Addressing these critical problems involves several data challenges which are considered throughout the research. The main contributions of this work are:

  • Prediction of diabetic patients with a high risk of readmission, by modeling multivariate patient medical records using machine learning classifiers, and incorporating a conservative prediction model with higher recall, as suits healthcare institutions.

  • Analysis of the characteristics of short-term (within 30 days) and long-term (after 30 days) readmissions with different classifiers.

  • Identifying the critical risk factors (features) using an ablation study.

  • Identifying associations across critical risk factors using association rule mining.

  • Cost analysis to determine the effective cost saved by implementing this work in the real world.

The rest of the paper is organized as follows: Section 2 presents a brief overview of past work. Section 3 describes the dataset used and the proposed methodology covering all the points enumerated above. Results and discussions are presented in Section 4 with respect to each part, followed by the conclusion and future work in Section 5.

II Related Work

Numerous previous studies have analyzed the risk factors that predict readmission rates of diabetic patients [3, 25, 22, 24, 7, 8, 16, 14, 20, 18, 19], of which the significant ones are discussed here. [24] found that acute and chronic glycemic control influenced readmission risk in a dataset of more than 29,000 patients over the age of 65. [8] analyzed the readmission risk for a dataset of more than 52,000 patients in the Humedica network. [19] studied demographic and socioeconomic factors which influence readmission rates. [25] studied the impact of HbA1c on readmissions. [17] analyzed the predictability of hospital readmissions in general, without targeting any specific disease. The dataset considered in that case covers demographic, clinical procedure-related and diagnostic-related features, along with drug information, for patients above 65 years of age, to predict readmission within 30 days (short-term readmission prediction). It contains comprehensive results on feature reduction methods, but the performance of the prediction models is modest. Cost analysis and grouped feature-importance mining were not considered, and the analysis was not targeted towards a specific disease like diabetes.

To the best of our knowledge, our work is the first to analyze diabetic patients facing both short-term and long-term readmission risk, along with feature analysis and cost analysis. We use a bigger and more balanced dataset (i.e. data across all age groups and across 130 hospitals) than previous works. Consequently, our results are more reflective of the problem of readmissions among diabetic patients.

Previous works have not documented the performance of different machine learning classifiers. Moreover, both short-term and long-term readmission scenarios were not considered together. Though each solved a specific problem, they do not provide a single comprehensive solution to the readmission problem that can readily be implemented.

In addition to addressing the above gaps in the research, this work covers methods to identify the critical risk factors in predicting readmission rates. Knowledge of such factors is likely to be useful in developing protocols for better inpatient diabetes care. We hope that the results presented in this work will serve as a good baseline for future work to compare against.

III Methodology

A dataset containing the medical records of 101,765 patients diagnosed with diabetes, collected over a period of 10 years (1999-2008) from 130 hospitals in the USA, was used for all analyses presented in this work [25, 22]. The medical record of each patient included 50 potential risk factors and a label indicating whether the patient was readmitted within 30 days, after 30 days, or was never readmitted. The distribution is as follows: 11% of patients were readmitted within 30 days, 35% after 30 days, and 54% were never readmitted.

Fig. 1: Overview of the methodology. The data was preprocessed according to the method described in section 3.A. Using the preprocessed data we built models for identifying high-risk patients (classification, section 3.B) and identifying groups of features that were important for predicting readmission rates (feature analysis, section 3.C).

An overview of our method is provided in Figure  1. We first preprocess the data according to the method described in section 3.A. Using this preprocessed data we build models for predicting readmission rates (classification, section 3.B) and identifying groups of features that are important for predicting readmission rates (feature analysis, section 3.C) which in turn uncover the critical risk factors.

No. Feature                              No. Feature
1   Race                                 12  Number of Emergency Visits (NE)
2   Gender                               13  Number of Inpatient Visits (NI)
3   Age                                  14  Diagnosis 1 (Primary) (PD)
4   Admission Type (AT)                  15  Diagnosis 2 (Secondary) (SD)
5   Discharge Disposition (DD)           16  Diagnosis 3 (Tertiary) (TD)
6   Admission Source (AS)                17  Number of Diagnoses (ND)
7   Time in Hospital (Days)              18  Glucose Serum Test (GST)
8   Number of Lab Procedures             19  A1C Test Result
9   Number of Procedures                 20  Insulin
10  Number of Medications                21  Change of Medication
11  Number of Outpatient Visits (NO)     22  Diabetic Medication (DM)
TABLE I: The list of risk factors considered for predicting readmission rates [25]

III-A Risk Factors

Prior to performing any analysis we processed the data in the following way. The primary, secondary and tertiary medical diagnoses were indicated by ICD9 codes [26]. Each ICD9 code indicates a unique diagnostic condition, and the codes took more than 1000 unique values. For many diagnostic conditions (ICD9 codes) data were available for only a few patients, so determining the effect of each individual diagnostic condition on readmission rates was not feasible. Consequently, we grouped ICD9 codes representing similar diagnoses into a total of 10 groups [26]. Without grouping, each ICD9 code would be present in a smaller number of samples; individually, each code would not significantly represent the data and might receive a lower importance weight. Moreover, many of these codes are very closely related, so they can be grouped meaningfully as explained in [26]. Since each ICD9 code indicates a single medical complication, grouping all related complications (e.g. all complications related to the respiratory system, codes 450-519) into one nominal feature value is legitimate and meaningful in a healthcare scenario, and is crucial for determining the effect of the diagnostic condition on the readmission rate.

The dataset also provided details of the medication administered to each patient. We found that the only medication that varied across patients was the delivery of insulin, while other medications remained common among all patients; hence only insulin medication was considered as a feature. For some factors, such as weight, payer code and medical specialty, the data were missing for 97%, 40% and 49% of the patients respectively. Consequently, we ignored these factors in our analysis. Future surveys and data collections should capture this information to aid such analyses. The race of the patient and the type of diagnoses were missing for 2% and 1% of the patients considered in the study; we removed such patients from the dataset. After this preprocessing we were left with the 22 factors listed in Table I. Detailed definitions of these factors can be found in [25].

Some factors, such as the type of diagnoses and admission type, took nominal values, whereas others, such as the number of inpatient visits and time in hospital, took numerical values. While some algorithms for predicting the risk of readmission naturally deal with both nominal and numerical data (e.g. random forests), other algorithms such as neural networks cannot deal with nominal values. For algorithms that only operate on numerical data, nominal values were converted into binary features. For example, the factor 'Admission Type', which took one out of nine distinct values, was represented by a 9-D binary vector; each dimension was set to 1 or 0 depending on the value that Admission Type took.
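As a sketch of this conversion, nominal factors can be expanded into indicator columns with pandas (the admission-type values below are made up for illustration, not taken from the dataset):

```python
import pandas as pd

# Hypothetical toy records: 'admission_type' is nominal, 'num_inpatient' is numerical.
records = pd.DataFrame({
    "admission_type": ["Emergency", "Elective", "Urgent", "Emergency"],
    "num_inpatient": [2, 0, 1, 3],
})

# Expand each nominal factor into binary indicator columns, one per distinct
# value, so that numeric-only models (e.g. neural networks) can consume it.
encoded = pd.get_dummies(records, columns=["admission_type"])
print(sorted(encoded.columns))
```

The numerical column is passed through unchanged, while each distinct nominal value becomes its own 0/1 column.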

III-B Classification

Identification of high-risk diabetic patients was posed as the problem of classifying whether a patient would be readmitted within 30 days of being discharged, after 30 days of being discharged, or never readmitted. Since different classification algorithms are apt for different kinds of data, we experimented with and compared results from five different algorithms. Prior to training, we randomly split the dataset into two distinct sets: a training set and a test set, consisting of 75% and 25% of the data respectively. The parameters of each algorithm were chosen based on classification performance evaluated by five-fold cross-validation on the training set. The performance of all algorithms was evaluated on the test set.
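This split-and-validate protocol can be sketched with scikit-learn on synthetic stand-in data (an illustration only; the paper used Weka for at least the neural-network experiments, and the data below is random):

```python
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the patient feature matrix and readmission labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = (X[:, 0] > 0).astype(int)

# 75% / 25% train/test split, mirroring the protocol above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Five-fold cross-validation on the training set only, for model selection.
cv_scores = cross_val_score(GaussianNB(), X_tr, y_tr, cv=5)

# Final evaluation happens once, on the held-out test set.
model = GaussianNB().fit(X_tr, y_tr)
test_accuracy = model.score(X_te, y_te)
```

The key point is that the test set is never touched during parameter selection.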

A short description of all classification algorithms considered in this work is provided below.

III-B1 Naive Bayes

The Naive Bayes algorithm [27] is a probabilistic model for classification. It assumes that, given the class, the features are statistically independent of each other.

III-B2 Bayesian Networks

Bayesian networks [4] estimate the probability distribution of a class by modeling the relationships between features with an acyclic undirected graph (in general, Bayesian networks can be directed and cyclic, but for our experiments we only considered acyclic and undirected models).

III-B3 Random Forest

A random forest is composed of a set of decision trees. Each decision tree acts as a weak classifier, and pooling the responses from multiple decision trees leads to a strong classifier [2]. Each decision tree is trained independently and determines the class of an input by evaluating a series of greedily learned binary questions. A random forest consisting of 250 trees, each of depth at most 5, was used, as this was found to be optimal in experiments varying the number of trees and their depth.
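Assuming a scikit-learn implementation (the paper does not state the library used for the forest), the reported configuration of 250 trees with depth capped at 5 looks like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data; the real input is the preprocessed patient records.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# 250 trees of depth at most 5 -- the configuration found optimal above.
forest = RandomForestClassifier(n_estimators=250, max_depth=5, random_state=1)
forest.fit(X, y)
n_trees = len(forest.estimators_)
```

Capping the depth keeps each tree a weak learner, so the ensemble's strength comes from averaging many de-correlated trees rather than from any single deep tree.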

III-B4 Adaboost

Adaboost constructs a strong classifier by sequentially combining a set of weak classifiers [10]. In the first iteration, a single classifier is learnt to minimize the classification error. In each subsequent iteration, a new classifier is learnt which seeks to minimize the error of the ensemble composed of the classifiers learnt in the previous iterations. In all our experiments, decision trees were used as weak classifiers.

III-B5 Neural Networks

Neural networks [11] are powerful classifiers that have established state-of-the-art results in speech processing, computer vision [21], and a wide variety of other tasks. We used a MultiLayer Perceptron (MLP), a feed-forward artificial neural network [12] that maps sets of input data onto a set of appropriate outputs. An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next. Except for the input nodes, each node is a neuron (or processing element) with a non-linear activation function. The MLP is a modification of the standard linear perceptron and can distinguish data that are not linearly separable. It was trained with one hidden layer by minimizing the squared error plus a quadratic penalty with the BFGS method [23]. We experimented with one hidden layer of 2, 4 and 8 nodes and observed only a very small, insignificant increase in performance with more nodes. Hence, considering the training time of these networks, we used a neural network with one hidden layer of 2 nodes for further experiments. We used Weka 3.7 [13] for training neural networks with MLPClassifier [9].
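The Weka setup described above can be approximated in scikit-learn (an analogue, not the exact implementation: `solver="lbfgs"` is a quasi-Newton method in the BFGS family, and `alpha` plays the role of the quadratic penalty):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data for illustration.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

# One hidden layer of 2 nodes, quasi-Newton (L-BFGS) training with an L2
# penalty -- mirroring the small-MLP configuration chosen in the paper.
mlp = MLPClassifier(hidden_layer_sizes=(2,), solver="lbfgs", alpha=1e-2,
                    max_iter=500, random_state=2).fit(X, y)
train_accuracy = mlp.score(X, y)
```

With only two hidden units the model is cheap to train, which is the trade-off the paper accepted given the negligible gain from larger hidden layers.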

III-C Feature Analysis

Various features in the medical records were analyzed to gain insight into their importance in predicting the readmission status of a patient encounter. The following subsections explain two methods of carrying out this feature analysis to discover critical risk factors.

III-C1 Ablation Study of Risk Factors

The importance of individual risk factors can be judged by performing an ablation study. An ablation study involves removing one factor at a time and comparing the accuracy of predicting readmission with this reduced set of features against the accuracy obtained with all features. Intuitively, removing a more important feature should lead to a larger decrease in accuracy.
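A minimal sketch of such an ablation loop on synthetic data (feature indices and data are illustrative; here feature 0 carries almost all of the signal by construction):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # feature 0 matters most

def cv_accuracy(features):
    """Cross-validated accuracy using only the given feature columns."""
    clf = RandomForestClassifier(n_estimators=50, random_state=3)
    return cross_val_score(clf, X[:, features], y, cv=5).mean()

baseline = cv_accuracy(list(range(4)))

# Remove one feature at a time; a larger accuracy drop marks a more
# important factor.
importance = {f: baseline - cv_accuracy([g for g in range(4) if g != f])
              for f in range(4)}
```

Ranking the factors by `importance` reproduces the logic behind Figures 4 and 5, where removing the dominant factors causes the largest error increases.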

III-C2 Associative Rule Mining (ARM)

Along with predictive modeling, the main theme is to identify the risk factors and give medical practitioners more insight. One objective is to help practitioners and researchers understand why certain patients get readmitted and which factors lead to it, thereby indirectly increasing the efficiency of diagnosis in order to prevent further readmissions. The ablation study helps in understanding the risk factors, and is further supported by significant association rules.

Groups of consistently co-occurring factors that influence readmission rates can be revealed by association rule mining (ARM; the Apriori algorithm [1]). ARM aims at identifying rules of the form A => B, where A is a conjunction of an arbitrary number of factors and B is a factor predicted to occur when A is true. The tuple (A, B) forms a set of factors which commonly appear together in the dataset. Discovering the groups of factors that commonly occur among readmitted patients, or among patients who are never readmitted, can further our understanding of the causes of readmission. If factors A and B were both present for a patient, we deemed that the patient followed the rule (A, B). We determined such rules by mining the medical records of all patients in our dataset. Next, we performed class-sensitive ARM, taking examples either from readmitted patients or from patients who were never readmitted. Then, for each rule, we determined the number of patients in the entire dataset who followed the rule. Within this set, we computed the fraction of patients who were readmitted within 30 days and the fraction who were readmitted at any time in the future. All the rules were then sorted by the fraction of patients who followed the rule and were readmitted within 30 days. The rules with the highest fraction of readmissions indicate factors that are strong predictors of high readmission risk; the rules with the lowest fraction indicate factors that are strong predictors of low readmission risk.
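The support and class-wise readmission fractions behind such rules can be computed as follows (a toy pure-Python sketch with fabricated transactions, not the study's data; `AT` and `DD` stand in for admission type and discharge disposition):

```python
# Toy transactions: each patient record as a set of factor=value items,
# plus a readmission label. All values here are fabricated for illustration.
records = [
    ({"AT=1", "DD=3"}, "<30"), ({"AT=1", "DD=3"}, ">30"),
    ({"AT=1", "DD=3"}, "NO"),  ({"AT=2", "DD=1"}, "NO"),
    ({"AT=1", "DD=1"}, "NO"),  ({"AT=1", "DD=3"}, "<30"),
]

def support(itemset):
    """Fraction of all records containing every item in the itemset."""
    return sum(itemset <= items for items, _ in records) / len(records)

def readmit_fraction(itemset, label):
    """Among records matching the itemset, the fraction with this label."""
    matches = [lab for items, lab in records if itemset <= items]
    return matches.count(label) / len(matches)

rule = {"AT=1", "DD=3"}
print(support(rule), readmit_fraction(rule, "<30"))
```

Sorting candidate itemsets by `readmit_fraction(rule, "<30")` yields the class-sensitive ranking described above; a full Apriori implementation would additionally prune itemsets below a minimum support threshold.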

III-D Evaluation Criteria

A system for predicting high-risk patients is only useful if a large fraction of patients at high risk are correctly identified (i.e. high recall) without raising a large number of false alarms (i.e. high precision). The Receiver Operating Characteristic (ROC) is widely regarded as one of the best metrics for evaluating classification models. However, [5] shows that a deep connection exists between ROC space and Precision-Recall (PR) space, such that a curve dominates in ROC space if and only if it dominates in PR space, and that although ROC curves are commonly used to present results for binary decision problems in machine learning, PR curves give a more informative picture of an algorithm's performance on highly skewed datasets such as ours. Therefore, we chose to present PR curves. Moreover, PR curves provide sufficient information to select the threshold needed to tune for the higher recall necessary in a (conservative) healthcare scenario. All methods presented in this paper were evaluated based on recall and precision, defined below:

  • Precision (P): the fraction of ground-truth positives among all the examples that are predicted to be positive.

    P = TP / (TP + FP)    (1)

    where TP and FP stand for True Positives and False Positives respectively.

  • Recall (R): the fraction of ground-truth positives that are predicted to be positive.

    R = TP / (TP + FN)    (2)

    where FN denotes False Negatives.

  • Precision-Recall Curve [6]: The tradeoff between precision and recall can be studied by plotting how precision changes as a function of recall. This plot is known as the Precision-Recall (PR) curve, with precision on the y-axis and recall on the x-axis. The area under the PR curve is a cost-effective metric for evaluating classifiers: a higher value indicates better performance. The area under the PR curve is the preferred way of studying classifier performance on skewed (i.e. imbalanced) datasets such as the one considered in this work [6].
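For reference, the PR curve and its area can be computed with scikit-learn (the scores below are hypothetical; they happen to rank every positive above every negative, so the area comes out to 1.0):

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

# Hypothetical classifier scores on a skewed label distribution (3 of 10 positive).
y_true   = np.array([0, 0, 0, 0, 0, 0, 1, 1, 0, 1])
y_scores = np.array([0.1, 0.2, 0.15, 0.3, 0.2, 0.4, 0.8, 0.6, 0.55, 0.9])

# Precision/recall at every score threshold, then the area under the PR curve.
precision, recall, thresholds = precision_recall_curve(y_true, y_scores)
pr_auc = auc(recall, precision)
```

The `(precision, recall)` pairs also make threshold selection direct: to favor recall, one simply picks the threshold of the rightmost point whose precision is still acceptable.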

III-E Cost Sensitive Analysis

The model needs to be tuned to maximize the cost saved by the hospital using the analysis model. This can be done by selecting an appropriate threshold for the machine learning algorithms. Let α be the cost incurred per readmission, and β be the cost per special diagnosis for patients predicted as Yes (<30 or >30 day readmissions). Let the patient encounter instances of the test set be distributed according to the confusion matrix below. A cost matrix is one in which the cost or penalty of classification is specified for each element, as in the confusion matrix.

    M = | TP  FP |
        | FN  TN |        (3)

Where TP, FP, FN and TN correspond to True Positives, False Positives, False Negatives and True Negatives respectively.

Without the prediction model, all the instances where patients actually get readmitted (True Positives and False Negatives) incur a cost of α, as specified by the per-instance cost matrix C below.

    C = | α  0 |
        | α  0 |        (4)

With the prediction model, all patients who are predicted to get readmitted (True Positives and False Positives) are examined with a special diagnosis costing β, which in turn prevents their readmission. We thus make the realistic hypothesis that the special diagnosis provided for predicted instances prevents their readmission (here the prediction helps determine who should receive the special diagnosis). Patient encounters who are predicted to be readmitted but who do not actually get readmitted also incur the cost β for the special diagnosis, which simply serves as a preventive measure in a conservative hospital scenario. A cost of α is incurred for patients who are predicted not to get readmitted but who actually do. Patients who are predicted as not readmitted and who indeed are never readmitted (True Negatives) contribute no cost, but may help the hospital with capacity planning. Hence, with the predictive model we obtain the cost matrix C' shown below. In our research, the special diagnosis is considered to be one extra day during the initial admission, during which the doctor can conduct another diagnosis that might reveal new complications in the patient. Hence the cost of the special diagnosis is taken to be the cost of one day of admission in the hospital.

    C' = | β  β |
         | α  0 |        (5)

The difference between the cost matrices without and with the prediction model gives the saved-cost matrix shown below.

    S = C - C' = | α-β  -β |
                 |  0    0 |        (6)

The prediction model needs to be tuned to an appropriate threshold to maximize the total saved cost (α-β)·TP - β·FP implied by this saved-cost matrix, hence maximizing the cost saved by the analysis model.
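A sketch of this threshold tuning on synthetic scores, using the saved-cost expression (α−β)·TP − β·FP from the matrix above (the α and β values are the ones derived later in Section 4.C; the labels and scores are fabricated):

```python
import numpy as np

ALPHA, BETA = 10591.0, 2409.0  # cost per readmission / per extra diagnostic day

def saved_cost(y_true, scores, threshold):
    """Saved cost (alpha - beta)*TP - beta*FP at a given decision threshold."""
    pred = scores >= threshold
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return (ALPHA - BETA) * tp - BETA * fp

# Synthetic stand-in labels and model scores.
rng = np.random.default_rng(4)
y = (rng.random(1000) < 0.4).astype(int)
scores = 0.6 * y + 0.4 * rng.random(1000)

# Sweep thresholds and keep the one that maximizes the saved cost.
thresholds = np.linspace(0.0, 1.0, 101)
best = max(thresholds, key=lambda t: saved_cost(y, scores, t))
```

Because a true positive saves α−β while a false positive costs β, the optimal threshold is more permissive than an accuracy-optimal one, which matches the conservative, recall-oriented tuning described above.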

IV Results and Discussions

This section consolidates results from the various experiments described in Section 3. The effectiveness of various classifiers at predicting readmission status is discussed in Section 4.A. Section 4.B describes the critical risk factors identified by feature analysis, conducted by two methods: the ablation study and association rule mining. Finally, the results of the cost-effectiveness analysis are discussed in Section 4.C.

IV-A Analysis of Classifiers

Fig. 2: Comparing the accuracy of different methods for identifying high-risk patients that are readmitted within 30 days of being discharged. The dotted line shows the performance of a classifier at chance accuracy.
Fig. 3: Comparing the accuracy of different methods for identifying high-risk patients that are readmitted in the future. The dotted line shows the performance of a classifier at chance accuracy.

The distribution of patient encounters is skewed because most patients in the dataset were never readmitted (54%). Only 11% of patients were readmitted within 30 days (<30), while the rest (35%) were readmitted after 30 days. The greater-than-30-day readmission class (>30) is ambiguous, as it could mean 31 days or even a couple of years. In the first case, hypothesizing that '>30 day readmission' instances might have patterns similar to 'NO readmission' instances, we combined the >30 and NO readmission instances and built a binary classification model to classify <30 versus (>30 & NO), i.e., identifying high-risk patients readmitted within 30 days of being discharged (Figure 2). In the second case, we hypothesized that readmissions within 30 days and after 30 days might have similar patterns, combined the <30 and >30 readmission instances, and built a binary classification model to classify Readmission (<30 & >30) versus 'NO readmission', i.e., identifying high-risk patients readmitted at any time in the future (Figure 3). In this case, the distribution across the two classes is almost even.

We experimented with both cases. As Table II shows, the area under the PR curve for the best-performing algorithm was higher in the latter case, i.e., when identifying high-risk patients readmitted at any time in the future. The performance of identifying high-risk patients readmitted within 30 days is therefore lower than that of identifying high-risk patients readmitted at any time in the future.

Classifier        Area Under Precision-Recall Curve
                  Class <30    Class <30 + Class >30
Naive Bayes       0.214        0.630
Bayes Network     0.208        0.637
Random Forest     0.242        0.650
Adaboost Trees    0.167        0.569
Neural Networks   0.233        0.654
TABLE II: Comparing the performance of different algorithms for identifying high risk patients

IV-B Identifying the critical risk factors

The feature analysis described in Section 3.C leads to estimates of the importance of the risk factors by the two methods presented in the following subsections.

IV-B1 Using Ablation Study

Fig. 4: Analyzing the importance of individual risk factors for identifying high-risk patients. The importance of each risk factor was estimated by computing the out of bag error (Section 3.C.1). Larger out of bag error indicates that the risk factor is more important. The number of inpatient incidents, the discharge disposition and admission type are most important for identifying high risk patients.
Fig. 5: Analyzing the importance of individual risk factors in differentiating risk factors influencing short-term and long-term readmissions. The importance of each risk factor was estimated by computing the out of bag error (Section 3.C.1). Larger out of bag error indicates that the risk factor is more important. The number of laboratory procedures and discharge disposition are found to be most important for differentiating short-term readmissions from long-term readmissions.

Random forests were the most accurate at identifying patients with a high risk of readmission, suggesting that they could be used to gauge the importance of each risk factor in identifying high-risk patients. These factors were identified using an ablation study (Section 3.C.1), which compares the performance of classifiers trained first with all features and then with one feature removed; the difference in accuracy is used as the estimate of that feature's importance. Figure 4 shows the increase in out-of-bag error (i.e. increase in inaccuracy) for each factor. We observed that the number of inpatient visits, the discharge disposition and the admission type are the most important for identifying high-risk patients.

Further, to gain additional insight into the causes of readmission, we analyzed whether there were any differences between the factors that led to readmission within 30 days and after 30 days. For this we trained a random forest to differentiate patients readmitted within 30 days from those readmitted after 30 days. We then performed an ablation study to identify the factors that differed between short-term and long-term readmissions. The results of our analysis are presented in Figure 5. We found that the number of lab tests (i.e. laboratory procedures) is useful in differentiating between short- and long-term readmissions. We also found that patients who are discharged to home are more likely to be readmitted within 30 days than after 30 days.

IV-B2 Association Rule Interpretation

The Apriori algorithm retrieved several general and class association rules. These rules are in themselves a valuable resource that can help doctors improve the efficiency of the initial diagnosis. Prominent association rules are given in Table III with their respective support and confidence; more of these rules appear in Appendix A. The rules can be interpreted by inference from the antecedent (LHS) to the consequent (RHS).

Rule 3 in Table III indicates that Caucasian female patients who were primarily diagnosed with Dyspnea and Respiratory Abnormalities (ICD9-786.0) and had Malignant Hypertension (ICD9-401.0) as a secondary diagnosis are less likely (2.27%) to be readmitted within 30 days, as only 5 such patients out of 220 were readmitted within 30 days during the survey. Similarly, rule 6 indicates that patients admitted through emergency (admission source id=7) and discharged or transferred to a Skilled Nursing Facility (discharge disposition id=3) are slightly more likely (52.16%) to get readmitted within or after a month. Likewise, rule 1 implies that a patient encounter diagnosed with Cellulitis/abscess of face (ICD-9: 682.0) and Diabetes mellitus without mention of special complication (ICD-9: 250.0) as primary and secondary diagnoses respectively has a low chance of being readmitted within 30 days (1.88%) and would mostly never be readmitted at any time in the future (71.83%).

When the prediction model predicts that a particular patient will not get readmitted, a medical practitioner can use the matched rule to understand, statistically, that similar patients (with similar diagnoses) were not readmitted in the past. This can provide further insight and assist in subsequent diagnosis, showing that such patients are less prone to risk. Such interpretations of the rules can be very useful for making informed decisions and gauging risk. We observe that, due to the biased distribution of the dataset, most of these rules only help in deciding the probability of patients not being readmitted within 30 days.

Total number of instances by readmission status: <30: 11,357; >30: 35,545; NO: 54,863; TOTAL: 101,765

No.  Association Rule (Antecedent)                                          Class-Wise Matches (in %)    Total
                                                                            <30      >30      NO        Matches
1    diag_1=682.0; diag_2=250.0                                             1.88     26.29    71.83     213
2    diag_1=786.0; diag_2=250.0; diag_3=401.0                               2.16     22.51    75.32     231
3    race=Caucasian; gender=Female; diag_1=786.0; diag_3=401.0              2.27     28.64    69.09     220
4    admission_type_id=1; discharge_disposition_id=3                        14.76    34.35    50.89     7813
5    admission_type_id=1; discharge_disposition_id=3;
     admission_source_id=7                                                  15.09    35.41    49.50     6645
6    discharge_disposition_id=3; admission_source_id=7                      15.19    36.97    47.84     8290
TABLE III: Prominent Association Rules (refer to Tables VI and VII in the appendix for ICD-9 code and id mappings respectively)

IV-C Cost Analysis

The cost analysis was carried out as explained in Section 3.E. The article [15] specifies the cost of readmission for diabetes mellitus and its complications to be $251 million for 23,700 total readmissions. Hence the cost per readmission is approximately $10,591 ($251,000,000 / 23,700). In our research, the special diagnosis is considered to be one extra day of admission, during which the doctor can conduct another diagnosis that might reveal new complications in the patient; hence its cost is taken to be the cost of one day of admission in the hospital. From our dataset we find that the average time in hospital across diabetic patient encounters is 4.396 days, so the cost of a one-day admission is taken to be $2,409 ($10,591 / 4.396). Hence the values of α and β are $10,591 and $2,409 respectively.
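The unit-cost arithmetic above can be checked directly (a trivial verification using the figures from [15] and the dataset's mean stay):

```python
# Cost figures for diabetes readmissions from [15].
total_readmission_cost = 251_000_000  # dollars
total_readmissions = 23_700

alpha = total_readmission_cost / total_readmissions  # cost per readmission
avg_stay_days = 4.396                                # mean time in hospital
beta = alpha / avg_stay_days                         # cost of one extra day

print(round(alpha), round(beta))  # -> 10591 2409
```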

Hence the saved cost matrix (saving per encounter, by actual versus predicted readmission status) would be:

                        Predicted: Readmit           Predicted: Not Readmit
Actual: Readmit         $10,591 − $2,409 = $8,182    $0
Actual: Not Readmit     −$2,409                      $0                        (7)

Models built with different machine learning algorithms were tuned to maximize the saved cost by selecting an appropriate threshold. The cost saved by each model is given in Table IV.

Algorithm        Cost Saved for Test Set (Million USD)   Cost Saved for Total Set (Million USD)
Naive Bayes      58.726                                  249.783563
Bayes Network    58.808                                  250.1323396
Random Forest    59.425                                  252.7566705
Adaboost-Trees   58.298                                  247.9631195
Neural Network   58.963                                  250.7916123
TABLE IV: Comparison of cost saved by different machine learning models.

We observe that the Random Forest model saves the maximum cost of $59.425 million on the 23,053 diabetic patient encounter instances of the test set. Extrapolating to all 98,053 diabetic patient encounter instances yields savings of $252.76 million.
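The threshold tuning described above can be sketched as follows. Here `y_true` and `probs` are synthetic stand-ins for a trained model's test-set labels and predicted readmission probabilities, and the bookkeeping assumes every flagged patient incurs the $2,409 special-diagnosis cost while every correctly flagged readmission saves the $10,591 readmission cost.

```python
import numpy as np

C_READMIT, C_DIAGNOSIS = 10591, 2409   # costs derived in the cost analysis

# Synthetic stand-ins for test-set labels (1 = readmitted) and model scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
probs = np.clip(y_true * 0.3 + rng.uniform(0, 0.7, size=1000), 0, 1)

def saved_cost(y, p, threshold):
    pred = p >= threshold
    tp = np.sum(pred & (y == 1))       # readmissions prevented by special diagnosis
    flagged = np.sum(pred)             # each flagged patient incurs the diagnosis cost
    return tp * C_READMIT - flagged * C_DIAGNOSIS

# Sweep candidate thresholds and keep the one maximizing total saved cost.
thresholds = np.linspace(0.05, 0.95, 19)
best = max(thresholds, key=lambda t: saved_cost(y_true, probs, t))
print(best, saved_cost(y_true, probs, best))
```

Lowering the threshold flags more patients, trading extra diagnosis costs for more prevented readmissions; the sweep finds the balance point for a given score distribution.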

V Conclusion and Future Work

In this work we presented a scheme to identify high-risk patients and evaluated different machine learning algorithms. In contrast to previous work, we considered both short-term and long-term readmissions and focused on a specific disease, diabetes. We found that random forests were optimal for this task. The dataset of readmissions is often skewed and consequently the performance of identifying high-risk patients is modest. We conducted preliminary experiments to modify the loss function of the classifiers to account for the skewed nature of the dataset. A slight improvement over previous works was observed, but it was not significant. Larger datasets containing medical records of readmitted patients are likely to be helpful for future research. Moreover, this work uncovers the features that are critical in identifying high risk of readmission and compares them in short-term and long-term analyses. In addition, statistically significant associations that exist among these features are presented. To complement the prediction and feature analysis model, a cost sensitive model has been proposed.
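One common way to modify the loss for a skewed class distribution, sketched below, is to weight each example inversely to its class frequency; the paper's exact modification is not specified, so this is an illustrative substitute on synthetic stand-in data, using a weighted logistic-regression loss.

```python
import numpy as np

# Synthetic stand-in data with a skewed label distribution (~12% positives,
# comparable to the <30-day readmission class).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (X @ w_true > 3.0).astype(int)

# Inverse-frequency class weights, normalized so the mean weight is 1.
freq = np.bincount(y) / len(y)
sample_w = (1 / freq)[y]
sample_w /= sample_w.mean()

# Gradient descent on the class-weighted log-loss.
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
    grad = X.T @ (sample_w * (p - y)) / len(y)
    w -= 0.5 * grad
print(w.round(2))
```

Upweighting the minority class shifts the decision boundary so that the rare readmitted class is no longer drowned out by the majority, at the price of more false alarms, which the cost-sensitive model above prices explicitly.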

From the cost analysis, we observe that $252.76 million can be saved across 98,053 instances of diabetic patient encounters. Saving such a large amount is essential for the healthcare system. The model never suggests that healthcare personnel give less attention to patients predicted not to be readmitted; rather, it prompts extra attention for those predicted to be readmitted. In this sense, the designed model is conservative and safe to use in healthcare institutions, as it enhances preventive strategy while saving the cost associated with readmission. Moreover, patients predicted to be readmitted would receive a special diagnosis at an earlier stage, which might save many lives. Hence we believe that the model could be incorporated in healthcare institutions to evaluate its effectiveness.

Our research rests on the hypothesis that the special diagnosis would prevent the actual readmission. We have taken the cost of the special diagnosis to be the cost of one day of admission, during which the doctor can run another diagnosis to discover other possible complications. Though these hypotheses seem legitimate, they need to be tested by deploying the research model in real healthcare systems. An extensive study of feature importance is also needed, which would help healthcare institutions prioritize their healthcare data documentation systems. Correctly predicting the patients who do not get readmitted might save additional cost through optimal resource planning, but this cost needs to be statistically determined, and the model should then be re-tuned taking it into account.

Our research targets diabetic patients only. Similar analyses need to be carried out for other major health conditions such as heart disease and schizophrenia. Future studies should also consider scheduled and unscheduled readmissions [20]. Several critical features in the medical records, such as age, were found to be missing, so a better data collection drive is needed for future research. Features worth collecting include age, date of admission (to find the season of the year), number of patients with the same disease at the time of admission (to correlate with epidemics), and family history (to capture hereditary information). Conversations between doctor and patient could also be collected; text-mining techniques might extract features corresponding to a patient's willpower and attitude, which in turn might improve the intelligent models that identify patients with high risk of readmission.

Acknowledgment

We extend our immense gratitude to Prof. Bhiksha Raj, Prof. Rita Singh (Language Technology Institute, School of Computer Science, Carnegie Mellon University) and Pulkit Agrawal (Dept. of Computer Science, University of California, Berkeley) for their keen guidance enabling the project. We would like to thank M. Lichman [22] for making the dataset available in the UCI machine learning repository.

Appendix A Significant Association Rules

This section contains Table V, presenting a selected list of association rules. They can be interpreted as explained in Section IV-B2. These rules can be used as suggestions by a doctor. Tables VI and VII describe the ICD-9 code mappings and the Id mappings for the Ids used in Table III respectively. Complete mappings can be found in [22].

Total Number of Instances — <30: 11357; >30: 35545; NO: 54863; TOTAL: 101765

No.  Association Rules (Consequents)  <30 (%)  >30 (%)  NO (%)  Total Matches
1 diag_1=682.0; diag_2=250.0 1.88 26.29 71.83 213
2 diag_1=786.0; diag_2=250.0; diag_3=401.0 2.16 22.51 75.32 231
3 race=Caucasian; gender=Female; diag_1=786.0; diag_3=401.0 2.27 28.64 69.09 220
4 race=Caucasian; diag_1=386.0 2.48 42.15 55.37 121
5 race=AfricanAmerican; gender=Female; diag_1=786.0; diag_3=250.0 2.68 34.90 62.42 149
6 diag_2=493.0; diag_3=250.0 2.70 36.49 60.81 148
7 diag_2=411.0; diag_3=V45 2.83 48.11 49.06 106
8 diag_1=414.0; diag_2=411.0; diag_3=V45 2.97 46.53 50.50 101
9 race=Caucasian; gender=Female; diag_2=414.0; diag_3=V45 3.60 44.14 52.25 111
10 race=Caucasian; diag_1=414.0; diag_3=414.0 3.61 39.69 56.70 194
11 gender=Female; diag_2=414.0; diag_3=V45 3.68 42.65 53.68 136
12 race=Caucasian; gender=Male; diag_3=413.0 3.70 40.74 55.56 135
13 race=Caucasian; gender=Female; diag_1=486.0; diag_3=428.0 4.26 50.35 45.39 141
14 gender=Female; diag_1=486.0; diag_3=428.0 4.65 51.16 44.19 172
15 discharge_disposition_id=1; admission_source_id=1 8.76 32.46 58.79 18812
16 admission_type_id=1; discharge_disposition_id=1 9.60 37.31 53.09 31695
17 admission_type_id=2; discharge_disposition_id=1 9.69 34.68 55.63 11700
18 admission_type_id=1; discharge_disposition_id=1; admission_source_id=7 9.78 38.44 51.79 28726
19 discharge_disposition_id=1; admission_source_id=7 9.93 39.13 50.93 33981
20 A1Cresult=None; insulin=No 10.23 34.19 55.58 39978
21 admission_type_id=3; admission_source_id=1 10.24 30.39 59.37 16187
22 insulin=No; change=No 10.25 33.46 56.28 37338
23 A1Cresult=None; insulin=No; change=No 10.35 33.67 55.98 32786
24 A1Cresult=None; change=No 10.87 34.04 55.08 45674
25 race=Caucasian; gender=Male 11.07 34.70 54.22 36410
26 race=AfricanAmerican; gender=Female 11.08 34.98 53.94 11728
27 admission_type_id=2; admission_source_id=1 11.16 33.14 55.70 9224
28 diag_2=H; A1Cresult=None 11.25 36.93 51.81 25870
29 race=Caucasian; gender=Female 11.49 36.50 52.01 39689
30 diag_1=H; A1Cresult=None 11.62 36.15 52.23 24397
31 A1Cresult=None; insulin=Steady 11.69 34.97 53.34 24269
32 A1Cresult=None; change=Ch 12.35 37.22 50.43 36186
33 discharge_disposition_id=6; admission_source_id=7 14.42 44.15 41.43 6840
34 admission_type_id=1; discharge_disposition_id=3 14.76 34.35 50.89 7813
35 admission_type_id=1; discharge_disposition_id=3; admission_source_id=7 15.09 35.41 49.50 6645
36 discharge_disposition_id=3; admission_source_id=7 15.19 36.97 47.84 8290
TABLE V: Prominent Association Rules helpful for identifying patient encounters which are Not <30 day readmissions
Diagnoses (ICD9 Codes) Diseases
682.0 Cellulitis/abscess, face
250.0 Diabetes mellitus without mention of complication
786.0 Dyspnea and respiratory abnormalities
401.0 Hypertension, malignant
TABLE VI: Prominent ICD9 Code Mappings
Identifiers Mappings
admission_type_id=1 Emergency
admission_source_id=7 Emergency Room
discharge_disposition_id=3; Discharged or Transferred to Skilled Nursing Facility (SNF)
TABLE VII: Prominent Admission Source, Type and Discharge Disposition Id Mappings

References

  • [1] R. Agrawal, R. Srikant, et al. Fast algorithms for mining association rules. In Proc. 20th int. conf. very large data bases, VLDB, volume 1215, pages 487–499, 1994.
  • [2] L. Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
  • [3] T. D. Briefing. Ahrq: The conditions that cause the most readmissions. The Daily Briefing. Web, 2014.
  • [4] G. F. Cooper and E. Herskovits. A bayesian method for constructing bayesian belief networks from databases. In Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence, pages 86–94. Morgan Kaufmann Publishers Inc., 1991.
  • [5] J. Davis and M. Goadrich. The relationship between precision-recall and roc curves. In Proceedings of the 23rd international conference on Machine learning, pages 233–240. ACM, 2006.
  • [6] J. Davis and M. Goadrich. The relationship between precision-recall and roc curves. In Proceedings of the 23rd international conference on Machine learning, pages 233–240. ACM, 2006.
  • [7] K. M. Dungan. The effect of diabetes on hospital readmissions. Journal of diabetes science and technology, 6(5):1045–1052, 2012.
  • [8] E. Eby, C. Hardwick, M. Yu, S. Gelwicks, K. Deschamps, J. Xie, and T. George. Predictors of 30 day hospital readmission in patients with type 2 diabetes: a retrospective, case-control, database study. Current Medical Research & Opinion, 31(1):107–114, 2014.
  • [9] E. Frank. Mlp classifier. Web, 2012.
  • [10] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences, 55(1):119–139, 1997.
  • [11] K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological cybernetics, 36(4):193–202, 1980.
  • [12] K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological cybernetics, 36(4):193–202, 1980.
  • [13] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten. The weka data mining software: an update. ACM SIGKDD explorations newsletter, 11(1):10–18, 2009.
  • [14] M. Hasan. Readmission of patients to hospital: still ill defined and poorly understood. International Journal for Quality in Health Care, 13(3):177–179, 2001.
  • [15] B. Herman. The costs of 10 top medicaid readmission conditions. Becker’s Hospital Review, 2014.
  • [16] B. B. H. Herman. The costs of 10 top medicaid readmission conditions. Web, 2014.
  • [17] A. Hosseinzadeh, M. T. Izadi, A. Verma, D. Precup, and D. L. Buckeridge. Assessing the predictability of hospital readmission using machine learning. In H. Muñoz-Avila and D. J. Stracuzzi, editors, IAAI. AAAI, 2013. 978-1-57735-615-8.
  • [18] S. Howell, M. Coory, J. Martin, and S. Duckett. Using routine inpatient data to identify patients at risk of hospital readmission. BMC Health Services Research, 9(1):96, 2009.
  • [19] H. J. Jiang, D. Stryer, B. Friedman, and R. Andrews. Multiple hospitalizations for patients with diabetes. Diabetes care, 26(5):1421–1426, 2003.
  • [20] H. Kim, J. S. Ross, G. D. Melkus, Z. Zhao, and K. Boockvar. Scheduled and unscheduled hospital readmissions among diabetes patients. The American journal of managed care, 16(10):760, 2010.
  • [21] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • [22] M. Lichman. UCI machine learning repository, 2013.
  • [23] D. C. Liu and J. Nocedal. On the limited memory bfgs method for large scale optimization. Mathematical programming, 45(1-3):503–528, 1989.
  • [24] M. D. Silverstein, H. Qin, S. Q. Mercer, J. Fong, and Z. Haydar. Risk factors for 30-day hospital readmission in patients ≥ 65 years of age. Proceedings (Baylor University. Medical Center), 21(4):363, 2008.
  • [25] B. Strack, J. P. DeShazo, C. Gennings, J. L. Olmo, S. Ventura, K. J. Cios, and J. N. Clore. Impact of hba1c measurement on hospital readmission rates: analysis of 70,000 clinical database patient records. BioMed research international, 2014, 2014.
  • [26] List of ICD-9 codes. Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc. Web, 2014.
  • [27] H. Zhang. The optimality of naive bayes. AA, 1(2):3, 2004.