I. Introduction
The Support Vector Machine (SVM) [1] is a binary classifier, widely used in practice due to its tractability for large-scale problems. To obtain an SVM classifier, one needs to solve a convex quadratic problem with linear constraints, which can be done for large problem instances involving thousands of samples and hundreds of variables per sample. The goal of this paper is to exploit the SVM framework to move beyond predictions and attempt to “control” future outcomes by appropriately modifying some of the key predictive variables. To that end, we develop a new method we call Prescriptive Support Vector Machine (PSVM).
We will apply the new method to an important problem in health care: preventing hospital readmissions. The need for systematic, quantitative methods for addressing health care problems is compelling. An estimated $3 trillion is spent annually on health care in the U.S., a value that exceeds 17% of the U.S. Gross Domestic Product (GDP) – by far the largest among the 13 high-income Organization for Economic Cooperation and Development (OECD) countries. The Centers for Medicare and Medicaid Services have identified hospital readmissions, defined as an additional admission to address the same issue within 30 days after discharge, as an important and potentially preventable source of excessive resource utilization and increased cost of care [2]. An analysis of 2005 Medicare claims demonstrated that about 75% of 30-day readmissions, representing about $12 billion in Medicare spending, were potentially preventable [3]. As a result, through the enactment of the Readmissions Reduction Program section of the Affordable Care Act of 2012, readmissions have been increasingly used as a quality-of-care metric, and their reduction is mandated for certain diseases [2]. In this context, many surgical departments in the U.S. are establishing processes aimed at reducing 30-day readmissions. We refer to [4] for a general discussion of the benefits and some potential risks associated with the application of health analytics.
Several works exploit classical machine learning approaches, such as random forests, gradient tree boosting, logistic regression, linear and kernelized SVM, and related methods for predicting 30-day readmissions in patients with heart failure [5, 6, 7]. Recently, the authors developed an interpretable classification approach to predict chronic disease hospitalizations based on past Electronic Health Records (EHRs), establishing convergence, sample complexity, and generalization guarantees [8, 9, 10]. Interpretability is indeed critical for medicine and health informatics, as well as other areas, e.g., safety and security management. Without interpretable models, physicians may not use “black box” predictions even if they are highly accurate. In this paper, we augment earlier SVM-based predictive analytics along three directions. First, we use a sparsity-inducing ℓ1-norm constraint to obtain sparse classifiers which can generalize better out-of-sample and provide interpretability. Second, we leverage our work in [8, 10] to solve a joint clustering and classification problem and discover hidden clusters in the positive class, along with corresponding per-cluster SVM-based classifiers. The third direction is the development of prescriptive analytics. In our setting, this consists of a method which leverages the SVM-based predictive model to devise personalized interventions with the potential to prevent a readmission by controlling/optimizing the values of some of the variables characterizing the patient.
Only a few works have focused on so-called prescriptive analytics. Examples include [11, 12], which develop a data-driven framework to prescribe an optimal decision in a setting where the cost depends on uncertain problem parameters that need to be learned from data.
We apply our methods to a data set containing over 2.28 million patients who had surgeries during 2011–2014 in the U.S. The data are collected as part of the American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) [13]. Earlier work studied risk factors for 30-day readmissions for categories of surgical patients, e.g., orthopaedic trauma injuries [14], knee and hip arthroplasty [15], and ventral hernia repair [16]. A simple readmission score using a few variables was developed in [17], based only on 2011 NSQIP data. To the best of our knowledge, our work is the first to develop analytics for 30-day readmissions after general surgery using millions of NSQIP records.
The remainder of this paper is organized as follows. Sec. II reviews SVMbased classification and the joint clustering and classification method [10]. These are key building blocks for the prescriptive method which is presented in Sec. III. The data and preprocessing steps are outlined in Sec. IV. Experimental results are in Sec. V and conclusions in Sec. VI.
Notation: All vectors are column vectors and are denoted by bold lowercase letters. For economy of space, we write x = (x_1, …, x_n) to denote the column vector x, where n is the dimension of x. We use prime to denote the transpose of a vector. Unless otherwise specified, ‖·‖ denotes the ℓ2 norm and ‖·‖₁ the ℓ1 norm. We will use ‖·‖_p to denote the ℓp norm, where p ≥ 1. We will also use the notation [N] for the set {1, …, N}.
II. SVM-Based Predictive Analytics
The SVM algorithm [1] seeks a separating hyperplane in the variable space, so that data samples from the two classes reside on opposite sides of the hyperplane. The minimum over all distances from the input data samples to the hyperplane is called the margin. The goal of SVM is to find the hyperplane with maximum margin. In cases where the data samples are not linearly or not perfectly separable, the soft-margin SVM tolerates misclassification errors and can leverage kernel functions to map the features into a higher-dimensional space where linear separability is possible (kernelized SVMs) [1].

II-A. SLSVM: Sparse Linear SVM
Following [8, 10] and our interest in interpretable classifiers, we formulate a Sparse version of Linear SVM (SLSVM) as follows. We are given training data x_i ∈ ℝ^d and labels y_i ∈ {−1, +1}, i = 1, …, N, where x_i is the vector of variables characterizing the i-th patient and y_i = +1 (resp., y_i = −1) indicates that the patient is (resp., is not) readmitted. We will refer to the class with labels equal to +1 as the positive class and the other class as the negative class.
We seek to find a hyperplane orthogonal to some vector β that passes from β_0, i.e., the hyperplane {x : x′β = β_0}, which can be done by solving the following quadratic programming problem:

(1)  min_{β, β_0, ξ}  (1/2)‖β‖₂² + C Σ_{i=1}^N ξ_i
     s.t.  y_i (x_i′β − β_0) ≥ 1 − ξ_i,  i = 1, …, N,
           ξ_i ≥ 0,  i = 1, …, N,
           ‖β‖₁ ≤ s.
In the above formulation, the first term is proportional to one over the minimum distance between the hyperplane x′β = β_0 + 1 and the hyperplane x′β = β_0 − 1, i.e., one over the thickness of a band (margin) in which we would like to avoid placing any data points so as to increase the separability between the two classes. The parameter C is tunable and ξ_i is a misclassification penalty for each data point i. The constraint on ‖β‖₁ imposes sparsity on the variable vector β, thus allowing only a sparse subset of features to be selected for the classification decision. The parameter s is also tunable and controls the level of sparsity. There exist close connections to previous work, such as elastic net regularization [18], ℓ1-norm SVM [19], and a robust optimization approach for obtaining appropriate regularizers for learning problems [20]. A drawback of formulation (1) is that it is difficult to kernelize, although a kernelized elastic net has been proposed in [21].
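As a rough illustration (not the solver used in the paper, which relies on quadratic programming), the SLSVM objective can be attacked with plain subgradient descent after replacing the ℓ1 constraint by an ℓ1 penalty (its Lagrangian form). All names, step sizes, and tolerances below are arbitrary choices for the sketch:

```python
def train_slsvm(X, y, C=1.0, lam=0.1, lr=0.01, epochs=500):
    """Sparse linear SVM sketch: minimize
        0.5*||beta||^2 + C * sum_i hinge_i + lam*||beta||_1
    by (sub)gradient descent. The l1 *constraint* of (1) is replaced here
    by an l1 *penalty*, an assumption made for simplicity of the sketch."""
    d = len(X[0])
    beta, b0 = [0.0] * d, 0.0
    for _ in range(epochs):
        g_beta, g_b0 = [b for b in beta], 0.0      # gradient of 0.5*||beta||^2
        for xi, yi in zip(X, y):
            score = sum(b * v for b, v in zip(beta, xi)) - b0
            if yi * score < 1:                      # hinge active: subgradient
                for k in range(d):
                    g_beta[k] -= C * yi * xi[k]
                g_b0 += C * yi
        for k in range(d):                          # l1 subgradient, then update
            g_beta[k] += lam * (1 if beta[k] > 0 else -1 if beta[k] < 0 else 0)
            beta[k] -= lr * g_beta[k]
        b0 -= lr * g_b0
    return beta, b0

def predict(beta, b0, x):
    """Classify by the side of the hyperplane x'beta = b0."""
    return 1 if sum(b * v for b, v in zip(beta, x)) - b0 > 0 else -1
```

On toy data separable along the first coordinate, the ℓ1 penalty drives the weight of the uninformative second coordinate toward zero, illustrating the interpretability benefit discussed above.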
II-B. JCC: Joint Clustering and Classification
In [8, 10] the authors have proposed a Joint Clustering and Classification (JCC) problem based on the Sparse Linear Support Vector Machine (SLSVM) framework. The SLSVM method we saw in Sec. II-A can in fact be seen as the special case of JCC in which only one cluster is used. The classification problem under consideration satisfies the following assumptions. The negative class samples are assumed to be i.i.d., drawn from a single cluster with distribution D₀. The positive class samples belong to L clusters, with distributions D₁, …, D_L. Different positive clusters have different features that separate them from the negative samples (see Fig. 1 for an example).
Let x_i, i = 1, …, N⁺, and x̄_j, j = 1, …, N⁻, be the d-dimensional positive and negative samples, with corresponding labels y_i = +1 and ȳ_j = −1. Assuming L hidden clusters in the positive class, we try to discover: the hidden clusters (denoted by a mapping function ℓ(·) that assigns each positive sample to a cluster) and classifiers (β_l, β_{0,l}), l = 1, …, L, as the solution to the following Joint Clustering and Classification (JCC) problem:

(2)  min_{ℓ(·), {β_l, β_{0,l}}, ξ, ζ}  Σ_{l=1}^L [ (1/2)‖β_l‖₂² + C⁺ Σ_{i: ℓ(i)=l} ξ_i + C⁻ Σ_{j=1}^{N⁻} ζ_{jl} ]
     s.t.  x_i′β_l − β_{0,l} ≥ 1 − ξ_i,  ∀ i with ℓ(i) = l,
           −(x̄_j′β_l − β_{0,l}) ≥ 1 − ζ_{jl},  ∀ j, ∀ l,
           ξ_i ≥ 0, ζ_{jl} ≥ 0,  ∀ i, j, l,
           ‖β_l‖₁ ≤ s_l,  l = 1, …, L,
where s_l is a parameter controlling the sparsity of the classifier in cluster l.
In formulation (2), we have introduced different misclassification penalties, C⁺ and C⁻, for positive and negative samples. In fact, the misclassification costs of the negative samples are counted L times, since these samples are drawn from a single distribution and are not clustered but simply copied into each cluster. The parameters C⁺ and C⁻ control the relative weight of the misclassification costs from negative and positive samples and should be selected to negate this overcounting; specifically, we set C⁻ = C⁺/L. The constraint ‖β_l‖₁ ≤ s_l is an ℓ1 relaxation of the sparsity requirement on the local classifiers, which is essential to align the formulation with the problem assumptions and to estimate more robust local classifiers. The selection of the tuning parameters is discussed in more detail in [8, 10].
Two different approaches have been proposed for (2) [8, 10]. The first transforms the problem into a Mixed-Integer Programming (MIP) problem but can only handle small-scale instances. The second is an alternating optimization approach which applies to large-scale problems and also gives rise to theoretical performance guarantees. It is shown in [8, 10] that it is better to perform joint clustering and classification than to separate the two tasks.
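A minimal sketch of the alternating idea follows: alternate between (i) fitting one classifier per cluster on its assigned positives versus all negatives, and (ii) reassigning each positive sample to the cluster whose classifier incurs the smallest hinge loss. For brevity the sketch uses plain soft-margin linear SVMs (no ℓ1 constraint) trained by subgradient descent; this simplification, and all names and constants, are assumptions of the sketch, not the authors' exact algorithm:

```python
def fit_linear_svm(P, N, C=1.0, lr=0.01, epochs=500):
    """Plain soft-margin linear SVM via subgradient descent.
    P: positive samples of one cluster; N: all negative samples."""
    d = len(P[0]) if P else len(N[0])
    beta, b0 = [0.0] * d, 0.0
    data = [(x, 1) for x in P] + [(x, -1) for x in N]
    for _ in range(epochs):
        g, g0 = list(beta), 0.0
        for x, y in data:
            if y * (sum(b * v for b, v in zip(beta, x)) - b0) < 1:
                for k in range(d):
                    g[k] -= C * y * x[k]
                g0 += C * y
        beta = [b - lr * gb for b, gb in zip(beta, g)]
        b0 -= lr * g0
    return beta, b0

def hinge(beta, b0, x, y):
    return max(0.0, 1 - y * (sum(b * v for b, v in zip(beta, x)) - b0))

def jcc(P, N, L, rounds=5):
    """Alternating optimization for JCC: fit per-cluster SVMs, then
    reassign each positive to the cluster with smallest hinge loss."""
    assign = [i % L for i in range(len(P))]   # arbitrary initial clustering
    for _ in range(rounds):
        models = []
        for l in range(L):
            Pl = [x for x, a in zip(P, assign) if a == l]
            models.append(fit_linear_svm(Pl, N))
        assign = [min(range(L), key=lambda l: hinge(*models[l], x, 1)) for x in P]
    return models, assign
```

On toy data with two well-separated positive clusters, each positive sample ends up correctly classified by its assigned cluster's hyperplane, while all negatives lie on the negative side of every hyperplane.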
III. SVM-Based Prescriptive Analytics
Prescriptive Support Vector Machine (PSVM) is the prescriptive method we introduce in this paper; it builds on top of SLSVM and JCC.
Suppose we have generated the per-cluster optimal predictive hyperplanes using the JCC approach we described in Sec. II-B. Let A ⊂ {1, …, d} be an index set of the variables we can control/modulate for each patient by applying certain interventions/therapies. For each patient in the positive class, with variable vector x, we are interested in optimizing the values of the controllable variables x_j, for j ∈ A, so that the patient is predicted to belong to the negative class.
There is, however, a cost for large changes to the values of the controllable variables, which introduces a trade-off between “flipping” the patient to the negative class and implementing interventions that lead to large changes in the controllable variables (see Fig. 2). The following formulation optimizes a linear combination of the corresponding two terms in the objective. Specifically, consider a patient in cluster l, let x be the vector of variables characterizing the patient, and let x̃ be the patient’s variables after applying the prescription/intervention. Let (β_l, β_{0,l}) be the coefficients associated with the predictive hyperplane discovered by JCC in the l-th cluster. To determine x̃ we solve the following convex optimization problem:

(3)  min_{x̃}  λ max(0, 1 + x̃′β_l − β_{0,l}) + ‖x̃ − x‖_p
     s.t.  x̃_j = x_j,  ∀ j ∉ A,
           L_j ≤ x̃_j ≤ U_j,  ∀ j ∈ A,
where L_j and U_j are bounds on the controllable variables for each patient. The parameter λ trades off the failure to flip the patient to the negative side of the hyperplane against the required change in the patient’s characteristics, measured by the term ‖x̃ − x‖_p. The higher the value of λ, the more attention is given to the goal of preventing a readmission. To select an appropriate value for λ we can use cross-validation, based on a cost function that accounts for both the cost of readmissions and the cost of prescriptions.
Notice that problem (3) can be solved independently for each patient who is predicted to belong to the positive class (readmitted). Thus, it is naturally distributed and a prescription for each at-risk patient can be obtained with only local computations. The form of problem (3) depends on the selection of the norm; for instance, when p = 2 we obtain a quadratic programming problem, and when p = 1 a linear programming problem.
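For a single controllable variable and an ℓ1 change penalty, problem (3) reduces to a one-dimensional piecewise-linear minimization that can be solved by inspecting a handful of candidate points: the bounds, the current value, and the point at which the hinge term vanishes. The hinge form of the "failure to flip" term below is one natural convex choice, and all names and numbers are illustrative assumptions:

```python
def prescribe_1d(x, j, beta, b0, lam, lo, hi):
    """One-dimensional instance of problem (3) with an l1 change penalty:
        minimize over v in [lo, hi]:
            lam * max(0, 1 + score(x with x_j = v)) + |v - x_j|,
    where score(x) = x.beta - b0 and only coordinate j is controllable.
    The objective is piecewise linear, so the optimum is attained at one
    of a few candidate points."""
    score_rest = sum(b * v for k, (b, v) in enumerate(zip(beta, x)) if k != j) - b0
    def obj(v):
        return lam * max(0.0, 1 + score_rest + beta[j] * v) + abs(v - x[j])
    cands = [lo, hi, min(max(x[j], lo), hi)]
    if beta[j] != 0:
        flip = (-1 - score_rest) / beta[j]   # value of x_j making score = -1
        if lo <= flip <= hi:
            cands.append(flip)
    best = min(cands, key=obj)
    x_new = list(x)
    x_new[j] = best
    return x_new
```

In the hypothetical example below, coordinate 1 plays the role of a normalized HCT value with a negative hyperplane coefficient (low HCT associated with readmission): a large λ pushes the prescription to the upper bound, while a tiny λ leaves the patient unchanged, illustrating the trade-off controlled by λ.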
IV. Data and Preprocessing
In this section we describe the data set we use to test and validate our methods.
IV-A. NSQIP Dataset Description
The ACS-NSQIP was created to improve surgical techniques and outcomes; it catalogs over 300 variables on comorbidities, intraoperative events, and 30-day outcomes using prospective random sampling [13]. It contains no protected health information.
The NSQIP dataset contains variables such as:

Demographic and health care status characteristics, such as age, gender, race, body mass index, smoking, diabetes, hypertension requiring medication, and admittance from the emergency room.

Preoperative, intraoperative, and postoperative variables, including hospital length of stay information, superficial/deep/organ space surgical site infections, and existence/description of complications (e.g., pneumonia, infections, bleeding, thromboembolic events, etc.).

Laboratory values, both preoperative and postoperative.
After the data preprocessing steps we describe below, there remained a total of more than 2.28 million de-identified patients, a fraction of whom (5.85% in our test set; cf. Table II) were readmitted within 30 days. A total of 230 variables were available for analysis, most of which were binary or integer, with the remainder being continuous.
IV-B. NSQIP Dataset Preprocessing
The data preprocessing steps we applied were as follows:

Patients who died within 30 days from discharge were not included in the patient total, as these events compete with readmission.

Categorical variables (e.g., race, discharge destination, insurance type, CPT code [22], ICD-9 code [23]) were numerically encoded by what is typically referred to as one-hot encoding, which amounts to introducing a new indicator variable for each category.

Missing values of categorical variables were treated as new categories, and missing values of numerical variables were replaced via nearest-neighbors imputation.

Features with very small standard deviation were removed.
One of every two features that were highly linearly correlated (absolute value of correlation above a chosen threshold) was removed.

Feature scaling was applied to all features to bring all values into the [0, 1] range; specifically, all variables were normalized by subtracting the minimum and dividing by the range.
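Two of the steps above, one-hot encoding (with missing values as their own category) and min-max scaling, can be sketched in a few lines of pure Python; the `"MISSING"` label is an illustrative assumption:

```python
def one_hot(values):
    """One-hot encode a categorical column; missing values (None) become
    their own category, as described in the preprocessing steps above."""
    cats = sorted({("MISSING" if v is None else v) for v in values}, key=str)
    index = {c: k for k, c in enumerate(cats)}
    rows = []
    for v in values:
        row = [0] * len(cats)           # one indicator variable per category
        row[index["MISSING" if v is None else v]] = 1
        rows.append(row)
    return cats, rows

def min_max_scale(column):
    """Normalize a numeric column to [0, 1]: subtract min, divide by range."""
    lo, hi = min(column), max(column)
    rng = hi - lo
    return [0.0 if rng == 0 else (v - lo) / rng for v in column]
```

For example, encoding a hypothetical insurance-type column `["medicare", "private", None, "medicare"]` yields three indicator columns, with the missing entry mapped to its own category.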
The variables were further separated into two classes: preoperative variables and postoperative variables. Preoperative variables are those that can be known before or during the main surgical procedure while postoperative variables, including complications, can only be determined after the surgery has been completed. The reason for considering these two classes of variables is that some postoperative variables may be affected by the controllable variables which may be modulated using our prescriptive method.
IV-C. Controllable Variables
We consider three types of controllable variables on which to intervene using prescriptive analytics:

Preoperative lab tests: sodium, Blood Urea Nitrogen (BUN), serum creatinine, serum albumin, bilirubin, SGOT (Serum GlutamicOxaloacetic Transaminase), alkaline phosphatase, White Blood Cell count (WBC), hematocrit (HCT), platelet count, Partial Thromboplastin Time (PTT), Prothrombin Time (PT), and International Normalized Ratio (INR) of PT values.

Length of stay at the hospital: total length of stay, days from admission to operation, days from operation to discharge.

SSI (Surgical Site Infection) or Infection: occurrences of deep incisional SSI, occurrences of organ space SSI, and postoperative occurrences of Urinary Tract Infection (UTI).
Preoperative lab values could be altered through appropriate medications and treatment before the operation, to bring them closer to levels not associated with readmission. The length of stay at the hospital could be shortened or lengthened as appropriate. Recommendations can also target the tightening of infection control measures that affect the variables described in the third item above. In the work we report in this paper, we focus on the preoperative hematocrit (HCT), as it is a variable that can be directly impacted (increased) through blood transfusion. The predictive models also suggest that preoperative HCT is one of the most important controllable variables.
V. Performance Evaluation and Experimental Results
V-A. Prediction Results
V-A.1. Prediction Accuracy
In the readmission prediction problem, one typically considers two distinct performance metrics computed out-of-sample, i.e., over a test set not seen during training. These metrics are the false positive rate (or false alarm rate, or one minus the specificity of the test) and the detection rate (or true positive rate, or sensitivity of the test). A Receiver Operating Characteristic (ROC) curve evaluates the performance of a binary classifier as the decision threshold is varied; it is created by plotting the true positive rate against the false positive rate at different threshold settings. To have a single metric with which to compare different ROC curves, we will consider the Area Under the ROC Curve (AUC). An ideal prediction model has an AUC equal to 1, whereas a random prediction would yield an AUC of 0.5. A model with an AUC well above 0.5 (e.g., greater than 0.7) is considered a moderately good predictive model.
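The AUC admits a simple probabilistic computation equivalent to integrating the ROC curve: it equals the probability that a randomly chosen positive sample receives a higher score than a randomly chosen negative one (ties counting one half). A small sketch:

```python
def auc(scores_pos, scores_neg):
    """AUC via its probabilistic interpretation: the fraction of
    (positive, negative) pairs where the positive sample scores higher,
    with ties counted as 1/2. Equivalent to sweeping the decision
    threshold along the ROC curve."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

A classifier whose positives always outscore its negatives attains AUC 1, and identical score distributions give 0.5, matching the ideal and random cases described above.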
We randomly chose a fraction of the patients in the dataset to form the training and validation sets and kept the remaining patients as a test set.
We compared the methods we presented in Sec. II with some standard machine learning methods, namely Random Forest (RF) [24] and Logistic Regression (LR) [25]. A random forest [24] is a large collection of decision trees; it classifies by averaging the decisions of the individual trees. Logistic regression is widely used as a base for comparison in medical machine learning studies. In this work, the logistic regression model was fitted with an additional regularization term: an ℓ2-norm penalty (similar to ridge regression) [25]. In Table I, we compare the performance of the various classification methods: Random Forests (RF) [24], SLSVM, and ℓ2-regularized logistic regression (L2LR) [25], with preoperative and postoperative variables. JCC was also applied to the problem but resulted in a single positive cluster, in which case it is identical to SLSVM. In Table I, the 2nd column reports the AUC using only preoperative variables and the 3rd column lists the corresponding AUC using all (preoperative and postoperative) variables.
Methods were implemented in Python (Python Software Foundation, https://www.python.org/) [26] and Matlab (MathWorks, Natick, MA). For random forests, the number of trees grown was 500. Cross-validation was used to tune the parameters of all methods, e.g., the number of variables randomly sampled as candidates at each split for RF, and the regularization strength for SLSVM and L2LR.
TABLE I: Out-of-sample AUC of the predictive models.

Method   preop AUC   postop AUC
L2LR     72.32%      83.53%
RF       73.11%      84.91%
SLSVM    72.28%      83.48%
Based on the results of Table I, using postoperative variables results in substantially better performance. The AUCs of all the methods were similar, perhaps because the NSQIP dataset contains a large amount of data and a sufficient number of highly predictive features. It is interesting that 30-day readmissions can be predicted with such high accuracy. In fact, this information alone can be extremely useful, as the health care system can target at-risk patients and monitor them post-discharge to reduce the risk of readmission.
V-A.2. Important Variables
For each variable, we computed a two-tailed p-value using Welch’s t-test, where the null hypothesis was that the two cohorts (readmitted and non-readmitted patients) have equal means. We found a large number of variables with a statistically significant p-value. Using this analysis, the variables with the most statistically significant differences between the two patient cohorts (readmitted and non-readmitted) were: return to the Operating Room (OR) after the main surgery and before discharge, length of stay, occurrences of Surgical Site Infection (SSI, either organ/space SSI, superficial SSI, or deep incisional SSI), occurrences of urinary tract infection, occurrences of Deep Vein Thrombosis (DVT)/thrombophlebitis, occurrences of pulmonary embolism, pneumonia occurrences, estimated probability of morbidity, occurrences of sepsis, occurrences of myocardial infarction, occurrences of progressive renal insufficiency, stroke with neurological deficit, disseminated cancer, patient currently on dialysis (preop), preoperative HCT, total operation time (in minutes), and Body Mass Index (BMI).

V-B. Prescriptive Results
In this section we evaluate the effectiveness of the prescriptions obtained by solving problem (3) for each patient. We focus on optimizing the patient’s preoperative hematocrit (HCT) through a blood transfusion. Transfusions infuse blood into the patient’s bloodstream and, typically, a discrete number of bags, each containing 100cc of blood, is prescribed. We will limit the number of bags of blood given to a patient to 3, corresponding to 300cc of blood, which can be considered a safe upper limit for blood transfusion. Each bag of blood given to a patient increases HCT by roughly 3%. We thus define 4 possible treatments as follows:

Treatment 1: No transfusion.

Treatment 2: 1 bag of blood transfusion.

Treatment 3: 2 bags of blood transfusion.

Treatment 4: 3 bags of blood transfusion.
Since the NSQIP data do not contain information on whether a blood transfusion was performed, we assume a baseline treatment depending on the patient’s HCT as follows:

For female patients, if HCT<37, 0 bags of blood are assumed to have been given; if 37<HCT<40, 1 bag of blood is assumed to have been given; if 40<HCT<43, 2 bags of blood are assumed to have been given; and, if HCT>43, 3 bags of blood are assumed to have been given.

For male patients, if HCT<41, 0 bags of blood are assumed to have been given; if 41<HCT<44, 1 bag of blood is assumed to have been given; if 44<HCT<47, 2 bags of blood are assumed to have been given; and, if HCT>47, 3 bags of blood are assumed to have been given.
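The baseline rules above can be encoded compactly. Note that the stated ranges leave the exact boundary values (e.g., HCT exactly 37) unspecified, so treating each threshold as inclusive on the right is an assumption of this sketch, as is the `"F"`/`"M"` encoding of sex:

```python
def baseline_bags(sex, hct):
    """Assumed baseline transfusion (in 100cc bags) as a function of sex
    and preoperative HCT, per the rules above. Each threshold crossed adds
    one bag; boundary handling (>= at each cutoff) is an assumption, since
    the rules leave exact boundary values unspecified."""
    cuts = (37, 40, 43) if sex == "F" else (41, 44, 47)
    return sum(hct >= c for c in cuts)
```

For instance, a female patient with HCT 38 falls in the 37-40 band and is assumed to have received 1 bag, while a male patient with HCT 48 is assumed to have received the maximum of 3 bags.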
We then use formulation (3) to obtain a prescription for each patient in a test dataset, using the ‖x̃ − x‖_p penalty on the prescribed change in the patient’s HCT (with a single controllable variable, the choice of norm is immaterial). In the absence of ground truth, we then evaluate the effect of the prescriptions using a variety of predictive models. We will compare the readmission rate when prescriptions are implemented with a baseline rate set equal to the actual readmission rate of 5.85% for patients in the test set. To be able to compare the readmission rate with and without the prescriptions, we calibrate each predictive model by selecting a decision threshold (i.e., a point on the ROC curve corresponding to the model) so that the model yields the same 5.85% readmission rate in the absence of any prescriptions.
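The calibration step, choosing a decision threshold so that each model flags patients at the observed readmission rate, can be sketched as follows (an illustrative quantile-based procedure, not necessarily the authors' exact implementation):

```python
def calibrate_threshold(scores, target_rate):
    """Pick a decision threshold so that the model flags (approximately) a
    target_rate fraction of patients as positive, i.e., select the point
    on the ROC curve matching the observed readmission rate."""
    n_flag = round(target_rate * len(scores))
    if n_flag <= 0:
        return max(scores) + 1.0          # threshold above all scores: flag nobody
    ranked = sorted(scores, reverse=True)
    return ranked[n_flag - 1]             # flag the n_flag highest-scoring patients

def flagged_rate(scores, threshold):
    """Fraction of patients at or above the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)
```

With tied scores the achieved rate can slightly exceed the target, which is why the match is only approximate.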
Table II reports the results. The first column lists the predictive model used to evaluate the effects of the prescriptions. The second column lists the readmission rate after the optimal prescription is applied to each patient in the test set. The third column lists the readmission rate in the absence of any prescriptions. The average readmission rate across the three predictive models after the prescriptions is 4.61%, a reduction of 1.24 percentage points, or about a 21% decrease compared to the baseline readmission rate of 5.85%. The average percentage change of HCT due to the prescriptions was modest.
TABLE II: Readmission rates with and without prescriptions.

Method   Prescriptive rate   Baseline rate
L2LR     4.95%               5.85%
RF       4.70%               5.85%
SLSVM    4.18%               5.85%
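As a sanity check, the average rates implied by the entries of Table II can be reproduced directly:

```python
# Readmission rates from Table II (in percent).
baseline = 5.85
prescriptive = {"L2LR": 4.95, "RF": 4.70, "SLSVM": 4.18}

avg_rate = sum(prescriptive.values()) / len(prescriptive)   # ~4.61
avg_reduction = baseline - avg_rate                         # ~1.24 points
relative_drop = 100 * avg_reduction / baseline              # ~21.2%
```

This is the arithmetic behind the average reduction discussed in the text: roughly 1.24 percentage points, about a 21% relative decrease from the 5.85% baseline.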
VI. Conclusions
We developed a new framework to decide on prescriptions or other interventions that reduce the rate of an undesirable event. We build this prescriptive capability on top of an SVM-based predictive model. Decisions decompose by subject (patient) and can be computed independently for each.
We applied this new framework to a large dataset of 2.28 million patients tracked by the ACSNSQIP over a four year period (2011–2014). We considered personalized decisions to potentially increase the preoperative HCT for each patient through a blood transfusion. The objective of the prescriptive method is to prevent 30day readmissions and reduce the corresponding readmission rate.
Our results show that our prescriptive SVM approach reduces the readmission rate by an average of 1.24 percentage points from the 5.85% readmission rate observed in the absence of prescriptions. This amounts to a relative decrease of about 21%. Considering that more than $12 billion were spent in 2005 by Medicare on potentially preventable readmissions, readmission rate reductions of this magnitude can lead to dramatic savings on an annual basis. Future work will consider kernelized methods and ways of speeding them up for large-scale datasets.
Acknowledgments
We thank Dr. George Kasotakis, Dr. Dimitris Bertsimas and Michael Lingzhi Li for useful discussions.
References
 [1] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
 [2] “Readmissions reduction program,” https://www.cms.gov/Medicare/MedicareFeeforServicePayment/AcuteInpatientPPS/ReadmissionsReductionProgram.html, accessed: 20180222.
 [3] J. James, Medicare Hospital Readmissions Reduction Program: To Improve Care and Lower Costs, Medicare Imposes a Financial Penalty on Hospitals with Excess Readmissions. Project HOPE, 2013.
 [4] I. C. Paschalidis, “How machine learning is helping us predict heart disease and diabetes,” https://hbr.org/2017/05/howmachinelearningishelpinguspredictheartdiseaseanddiabetes, accessed: 20180222.
 [5] R. Amarasingham, B. J. Moore, Y. P. Tabak, M. H. Drazner, C. A. Clark, S. Zhang, W. G. Reed, T. S. Swanson, Y. Ma, and E. A. Halm, “An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data,” Medical Care, vol. 48, no. 11, pp. 981–988, 2010.
 [6] J. D. Frizzell, L. Liang, P. J. Schulte, C. W. Yancy, P. A. Heidenreich, A. F. Hernandez, D. L. Bhatt, G. C. Fonarow, and W. K. Laskey, “Prediction of 30-day all-cause readmissions in patients hospitalized for heart failure: comparison of machine learning and other statistical approaches,” JAMA Cardiology, vol. 2, no. 2, pp. 204–209, 2017.
 [7] S. B. Golas, T. Shibahara, S. Agboola, H. Otaki, J. Sato, T. Nakae, T. Hisamitsu, G. Kojima, J. Felsted, S. Kakarmath et al., “A machine learning model to predict the risk of 30-day readmissions in patients with heart failure: a retrospective analysis of electronic medical records data,” BMC Medical Informatics and Decision Making, vol. 18, no. 1, p. 44, 2018.
 [8] T. S. Brisimi, T. Xu, T. Wang, W. Dai, W. G. Adams, and I. C. Paschalidis, “Predicting chronic disease hospitalizations from electronic health records: an interpretable classification approach,” Proceedings of the IEEE, vol. 106, no. 4, pp. 690–707, 2018.
 [9] T. S. Brisimi, T. Xu, T. Wang, W. Dai, and I. C. Paschalidis, “Predicting diabetesrelated hospitalizations based on electronic health records,” Statistical methods in medical research, p. 0962280218810911, 2018.
 [10] T. Xu, T. S. Brisimi, T. Wang, W. Dai, and I. C. Paschalidis, “A joint sparse clustering and classification approach with applications to hospitalization prediction,” in Decision and Control (CDC), 2016 IEEE 55th Conference on. IEEE, 2016, pp. 4566–4571.
 [11] D. Bertsimas and N. Kallus, “From predictive to prescriptive analytics,” arXiv preprint arXiv:1402.5481, 2014.
 [12] D. Bertsimas and B. Van Parys, “Bootstrap robust prescriptive analytics,” arXiv preprint arXiv:1711.09974, 2017.
 [13] A. M. Ingraham, K. E. Richards, B. L. Hall, and C. Y. Ko, “Quality improvement in surgery: the american college of surgeons national surgical quality improvement program approach,” Advances in surgery, vol. 44, no. 1, pp. 251–267, 2010.
 [14] V. Sathiyakumar, C. S. Molina, R. V. Thakore, W. T. Obremskey, and M. K. Sethi, “ASA score as a predictor of 30-day perioperative readmission in patients with orthopaedic trauma injuries: an NSQIP analysis,” Journal of Orthopaedic Trauma, vol. 29, no. 3, pp. e127–e132, 2015.
 [15] A. J. Pugely, J. J. Callaghan, C. T. Martin, P. Cram, and Y. Gao, “Incidence of and risk factors for 30-day readmission following elective primary total joint arthroplasty: analysis from the ACS-NSQIP,” The Journal of Arthroplasty, vol. 28, no. 9, pp. 1499–1504, 2013.
 [16] F. Lovecchio, R. Farmer, J. Souza, N. Khavanin, G. A. Dumanian, and J. Y. Kim, “Risk factors for 30-day readmission in patients undergoing ventral hernia repair,” Surgery, vol. 155, no. 4, pp. 702–710, 2014.
 [17] D. J. Lucas, A. Haider, E. Haut, R. Dodson, C. L. Wolfgang, N. Ahuja, J. Sweeney, and T. M. Pawlik, “Assessing readmission after general, vascular, and thoracic surgery using ACS-NSQIP,” Annals of Surgery, vol. 258, no. 3, p. 430, 2013.
 [18] H. Zou and T. Hastie, “Regularization and variable selection via the elastic net,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67, no. 2, pp. 301–320, 2005.
 [19] J. Zhu, S. Rosset, R. Tibshirani, and T. J. Hastie, “1-norm support vector machines,” in Advances in Neural Information Processing Systems, 2004, pp. 49–56.
 [20] R. Chen and I. C. Paschalidis, “A robust learning approach for regression models based on distributionally robust optimization,” The Journal of Machine Learning Research, vol. 19, no. 1, pp. 517–564, 2018.
 [21] Y. Feng, S.G. Lv, H. Hang, and J. A. Suykens, “Kernelized elastic net regularization: Generalization bounds, and sparse recovery,” Neural computation, vol. 28, no. 3, pp. 525–562, 2016.
 [22] American Medical Association, Current Procedural Terminology: CPT. American Medical Association, 2007.
 [23] U.S. Department of Health and Human Services, The International Classification of Diseases: 9th Revision: Clinical Modification. US Government Printing Office, 1989.
 [24] L. Breiman, “Random forests,” Machine learning, vol. 45, no. 1, pp. 5–32, 2001.
 [25] J. Friedman, T. Hastie, and R. Tibshirani, The elements of statistical learning. Springer series in statistics New York, 2001, vol. 1.
 [26] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikitlearn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.