Cerebrovascular accident (CVA) or cerebrovascular insult (CVI), commonly referred to as “stroke”, is the second leading cause of death, after ischaemic heart disease, and the leading cause of adult disability worldwide [f1, f2]. Globally, 15 million people suffer a stroke every year; of these, a third die and half of the remainder are left with permanent disabilities [f5]. The statistics are no different for developing countries like India [f6, f9, f8]. In Trivandrum, the capital of the Indian state of Kerala, the incidence rate of stroke is 135.0 in the urban community and 138.0 in the rural community [f9]. Clearly, stroke has transformed from a disease of developed nations into a global hazard.
The steps taken by major health organisations such as the WHO, the AHA (American Heart Association) and the Indian Stroke Association to reduce stroke fatalities and permanent disabilities can be broadly classified into two:
Improvements in stroke treatment methods, development and optimisation of early diagnosis techniques, and training of personnel to handle stroke cases effectively.
Establishing stroke risk factors through scientific research and analysis, which can be used to prevent the incidence of stroke by creating awareness in the public and by administering treatments that reduce the risk of having a stroke.
The former is not entirely applicable to low-income and middle-income countries due to resource constraints, both in trained personnel and in money. The results of the latter, however, can be put to work irrespective of the availability of money. Still, this approach requires trained personnel who are aware of the stroke risk factors and of how much each risk factor contributes to the total risk, both individually and collectively. For instance, in India, a lower-middle-income country according to The World Bank [f11], there is only one neurologist for every 3 million people [f12, f13, f14]. Moreover, these neurologists are heavily laden with the treatment of non-stroke disorders [f12].
In this work, we propose a two-tier system for the prevention of stroke. The first tier makes use of stroke risk factors in much the same way as mentioned earlier, except that machine learning is used instead of trained professionals. An ANN (Artificial Neural Network) is trained using the stroke risk parameter values of subjects who had a stroke and of normal subjects. This can be viewed as a regression problem [f15, f16], with the output of the system giving a score that indicates the net stroke risk of the patient.
In tier 2 of the proposed system, we make use of neuroimaging, feature extraction and classification techniques to provide additional information on the risk of a person having a stroke. A multilevel Support Vector Machine (SVM), trained on T2-weighted MRI images of subjects who had a stroke and of normal subjects, is used to classify a given T2-weighted MRI image into two classes: either having a high risk of stroke or a low risk of stroke. Similar systems for predicting neurological disorders such as Alzheimer’s disease can be found in the literature [40, a4]. In our work, we used Haralick features [a6] and Non-negative Matrix Factorisation (NMF) for feature extraction. To the best of our knowledge, this is a maiden work in the field of stroke prediction. We also propose a novel multilevel classification system which can combine two non-linear features in such a way that their combination gives better classification efficiency than can be achieved if they are used individually or by simple concatenation.
2 Prior Art
As such, there are not many works in the literature where stroke is predicted using neuroimaging. Nevertheless, there are some works, for instance the one by the National Stroke Association, USA, which has developed a score card as shown in Fig. 1. A score of or more on this scorecard means that the person has a high probability of suffering a brain attack.
Lloyd et al. in  have proposed a method for predicting IS (ischaemic stroke) by considering non-traditional factors such as body mass index, waist-to-hip ratio, high-density lipoprotein cholesterol, albumin, von Willebrand factor, alcohol consumption, peripheral arterial disease and carotid artery wall thickness. However, they could achieve only a modest improvement in prediction capability compared to models that use only traditional risk factors.
2.1 Stroke risk factors
The Stroke Association of the UK classifies stroke risk factors into two groups [n10]:
Lifestyle risk factors
Medical conditions risk factors
2.1.1 Lifestyle risk factors
The following are included in lifestyle risk factors:
Overweight and obesity
Smoking habits: Smoking has a significant effect on the stroke risk of a subject. According to [82, 83, n10], smoking doubles the risk of stroke. A person smoking 20 cigarettes a day has six times the risk of stroke of a non-smoker. A smoker with high blood pressure has five times the risk of stroke of a smoker with normal blood pressure, and twenty times that of a non-smoker with normal blood pressure. It is reported that 10% of deaths from stroke are due to smoking.
Diet: High amounts of fruits and vegetables in the diet can reduce the risk of stroke by up to 30%. The higher the salt intake, the higher the risk of stroke: there is a 23% increase in risk for every additional 5 g of salt consumed a day.
Others: An overweight person has a 22% higher risk of ischaemic stroke, whereas an obese person has a 64% higher risk. While moderate physical activity reduces the risk of stroke by up to 27%, physical inactivity can increase the risk by 50% [101, 102]. Regular alcohol consumption can result in a threefold increase in the risk of stroke.
2.1.2 Medical Conditions
Various medical condition risk factors include:
High blood pressure
Atrial fibrillation: Atrial fibrillation (AF) is an irregular or abnormal heart rhythm. AF can increase the risk of stroke by a factor of five. Treatment of AF can prevent stroke to a great extent, since a fourth of the people who have a stroke also have AF.
Others: High blood pressure is a leading cause of stroke; half of all strokes are caused by high blood pressure. A diabetic person has twice the risk of having a stroke compared to a normal person. High cholesterol in conjunction with smoking or physical inactivity has a significant effect on a person’s risk of stroke. Reducing cholesterol can reduce the risk of stroke by 21%.
 gives a slightly different set of stroke risk factors, which are:
The authors have also given a result which supports this work: more than 94% of the subjects who had a stroke had at least one of these risk factors above the normal value.
3 Dimensionality reduction techniques
Quite often, we are faced with the task of handling very high-dimensional data in various applications. (It may be noted that the term “high” is not objective but subjective; it depends on several factors such as the hardware specifications, the application, the required response time, etc. For instance, a few megabytes of data can be “high” for a system that has only an FDD (floppy disk drive) as its storage system, e.g. the Yamaha PSR-1100, an arranger workstation of the previous decade, but will be “low” for a system that has an HDD (hard disk drive).) This problem can be posed mathematically as follows. The given observation vector, which is “high”-dimensional in nature, is denoted x. The vector can be viewed as a member of an N-dimensional vector space and hence can be written as
x ∈ ℝ^N. (1)
Now, our task is to create an M-dimensional vector y, expressed mathematically as
y ∈ ℝ^M, (2)
subject to the following conditions:
M < N, i.e., the dimension of y is less than the dimension of x
y, in some criterion, represents x
This high dimensionality of the data may make the task at hand too complex, for several reasons:
Hardware limitations, including memory limitations, processor power limitations, etc. These limitations can also manifest as power limitations, especially for mobile devices, since more memory and processor usage means more power consumption.
There is a finite probability that, as the number of features increases, the classification error initially goes down but at some point starts to grow [f20] (the so-called “peaking” phenomenon), which is definitely bad and needs to be avoided.
Dimensionality reduction is possible because of the following [f30, f31]:
All practical measurements are corrupted by measurement noise. In a given dataset, large variations might be due to the measurement noise, and hence many of the variables might be irrelevant.
Some of the variables in the observation vector given in (1) can be expressed as linear combinations of other variables in the same vector. In other words, the variables are correlated with each other, and the task is to find a new vector, given in (2), in which the variables are, hopefully, uncorrelated. The variables in the new vector are often referred to as “latent variables” or “hidden variables” [f30, f32, f33, f34].
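The latent-variable view above can be made concrete with PCA, which the following subsection contrasts with NMF. The sketch below (Python/NumPy, although the original experiments used MATLAB) projects correlated 10-dimensional observations onto two uncorrelated latent variables via the singular value decomposition; the function name, data sizes and random data are purely illustrative.

```python
import numpy as np

def pca_reduce(X, m):
    """Project N-dimensional observations onto the first m principal axes.

    X : (n_samples, N) data matrix; m : target dimension (m < N).
    Returns the (n_samples, m) matrix of latent variables.
    """
    Xc = X - X.mean(axis=0)               # centre each variable
    # Right singular vectors of the centred data are the principal axes
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:m].T                  # scores on the first m axes

rng = np.random.default_rng(0)
# 100 observations of a 10-D vector whose variables are correlated,
# generated from only 2 true latent variables plus small noise:
z = rng.normal(size=(100, 2))
X = z @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(100, 10))
Y = pca_reduce(X, 2)
print(Y.shape)                            # (100, 2)
```

By construction, the columns of `Y` are uncorrelated, which is exactly the “latent variable” property sought above.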
3.2 Non-negative Matrix Factorisation (NMF)
One of the major drawbacks of PCA is that negative values in the basis vectors are quite difficult to interpret in many practical applications. One remedy is a representation in which non-negativity is imposed by some means. Given a non-negative matrix V, the task is to decompose it into two non-negative matrices W and H, subject to the condition that the Frobenius norm of the difference between V and the product WH is minimised.
This decomposition is known as Non-Negative Matrix Factorisation (NMF). (A comprehensive review of NMF can be found in [f35].) The matrix W is called the basis matrix or the mixing matrix, and H represents the unknown or hidden sources. The beauty of this decomposition lies in viewing each column of V as a linear combination of the columns of W, with the weights given by the components of the corresponding column of H [nn1, nn2, nn3]. The number of columns in W is much smaller than the number of rows in V. Though NMF seems to be a computationally complex operation, several algorithms have been developed in the last decade for its computationally efficient implementation.
The maiden work in the field of NMF (Non-negative Matrix Factorisation) can be traced back to a 1994 paper by Paatero and Tapper , in which they performed factor analysis on environmental data [f35]. Their aim was to find the common latent features or latent variables that explained a given set of observation vectors. Some elementary variables combine positively to give each factor. A factor can either be present, in which case it has a positive effect, or absent, in which case it has null influence. Clearly, there is no room for a “negative” influence, and hence it often makes sense to constrain the factors to be non-negative.
Given a non-negative matrix X, each of whose columns corresponds to a different variable of an observation matrix, the task is to decompose X into two other matrices, subject to the condition that both factors are also non-negative in nature. The columns of the first factor can be considered the factors, and the rows of the second are the influences of these factors. A weight is associated with each element of X, indicating the level of confidence in that measurement. Paatero and Tapper advocate minimising the weighted sum of squared residuals between X and the product of the two factors.
Independently of Paatero, Lee and Seung introduced the concept of NMF in a 1997 paper on convex and conic coding [19, f35]. They begin by considering the following encoding problem. Suppose that the columns of a matrix W are fixed feature vectors and that v is an input vector to be encoded by a vector h. The task is to minimise the reconstruction error ‖v − Wh‖.
Depending on the constraints on h, different learning algorithms can be developed. Unconstrained minimisation yields PCA. In contrast, Vector Quantisation (VQ) requires h to equal one of the canonical basis vectors (i.e. a single unit component with the remaining entries zero). Lee and Seung proposed a convex coding scheme, which requires the entries of h to be non-negative numbers that sum to one; the encoded vector is then the best approximation to the input from the convex hull of the feature vectors. This was one of the two techniques they put forward. The second was conic coding, which requires the entries of h to be non-negative, as in convex coding, but without the constraint that they sum to unity; the encoded vector is then the best approximation to the input from the cone generated by the feature vectors.
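The efficient algorithms alluded to above can be sketched with the standard Lee–Seung multiplicative updates for the Frobenius-norm objective. This Python/NumPy sketch is illustrative (the matrix sizes, iteration count and seed are arbitrary) and is not the exact implementation used in this work; each update provably does not increase the reconstruction error and preserves non-negativity.

```python
import numpy as np

def nmf(V, r, n_iter=200, seed=0):
    """Factor a non-negative matrix V (n x m) as V ~ W H, with
    W (n x r) and H (r x m) non-negative, via Lee-Seung
    multiplicative updates for the Frobenius-norm objective."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3         # strictly positive init
    H = rng.random((r, m)) + 1e-3
    eps = 1e-9                            # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((20, 12))
W, H = nmf(V, r=4)
err = np.linalg.norm(V - W @ H)           # Frobenius reconstruction error
```

Because the updates are multiplicative, W and H stay non-negative automatically, which is exactly the constraint that distinguishes NMF from PCA.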
4 Textural features
Haralick textural features were developed by R. M. Haralick in 1973 [a6]. The basis for these features is the gray-level co-occurrence matrix.
This is a square matrix of dimension N_g × N_g, where N_g is the number of gray levels in the image. Element [i, j] of the matrix is generated by counting the number of times a pixel with value i is adjacent to a pixel with value j, and then dividing the entire matrix by the total number of such comparisons made. Each entry can therefore be considered the probability that a pixel with value i will be found adjacent to a pixel with value j.
Since adjacency can be defined to occur in each of four directions in a 2D, square pixel image (horizontal, vertical, left and right diagonals - see Fig. 3), four such matrices can be calculated.
Haralick then described 14 statistics that can be calculated from the co-occurrence matrix with the intent of describing the texture of the image:
Angular Second Moment:
Sum of Squares: Variance:
Inverse Difference Moment:
Information Measure of Correlation 1:
Information Measure of Correlation 2:
Maximal Correlation Coefficient:
where μx, σx and μy, σy are the means and standard deviations of the marginal distributions px and py of the co-occurrence matrix.
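The co-occurrence construction described above can be sketched for a single direction. The Python/NumPy sketch below builds a symmetric GLCM for horizontally adjacent pixels and computes the first Haralick statistic (Angular Second Moment); it is illustrative only and ignores the distance and multi-direction options used in the actual experiments.

```python
import numpy as np

def glcm_horizontal(img, levels):
    """Symmetric gray-level co-occurrence matrix for horizontally
    adjacent pixels, normalised so the entries sum to one."""
    P = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[a, b] += 1
        P[b, a] += 1          # count both directions (symmetry)
    return P / P.sum()

def angular_second_moment(P):
    """Haralick's first statistic: the sum of squared GLCM entries."""
    return float((P ** 2).sum())

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 2, 2]])
P = glcm_horizontal(img, levels=3)
asm = angular_second_moment(P)
```

The other three directions (vertical and the two diagonals) are built the same way with different pixel offsets, giving the four matrices mentioned above.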
5 Database
Two datasets were used in this work: one containing the neuroimages of 30 subjects and the other containing the stroke risk parameter values of 30 subjects. These datasets are explained below.
6 Neuroimages dataset
This dataset mainly includes images from “The Whole Brain Atlas” (available at www.med.harvard.edu/aanlib/home.html), developed by Keith Johnson, MD, and Alex Becker, PhD, with the support of the Brigham and Women’s Hospital Departments of Radiology and Neurology, Harvard Medical School, the Countway Library of Medicine, and the American Academy of Neurology. The database includes MRI images of neurological diseases such as:
Neoplastic Disease (brain tumor)
Metastatic bronchogenic carcinoma
Motor neuron disease
Inflammatory or Infectious Disease
Cardiovascular Accident (CVA)
Acute stroke: Speech arrest
Acute stroke: “alexia without agraphia”
Subacute stroke: “transcortical aphasia”
Chronic subdural hematoma
Hypertensive encephalopathy, and
Cerebral hemorrhage
T2-weighted MRI images of 30 subjects were used for this work. Of these 30 images, 14 were of subjects who had a stroke and 16 were of normal subjects. A part of the database is given in Fig. 4. Images 10 to 15 correspond to stroke images: 10: Acute stroke: speech arrest; 11: Acute stroke: “alexia without agraphia”; 12: Subacute stroke: “transcortical aphasia”; 13: Chronic subdural hematoma; 14: Hypertensive encephalopathy; and 15: Cerebral hemorrhage.
Prior to feature extraction, the images were subjected to both anatomic and intensity normalisation. Normalising the images with respect to the maximum pixel intensity value may introduce noise [40, lbs1], so the maximum pixel intensity was taken to be the mean of the highest pixel intensity values, as proposed in  and [a7]. Original and preprocessed images are shown in Fig. 5. This database will hereafter be referred to as the “TBA” database.
7 Risk parameter database
This database was compiled from data obtained from a private medical college in Trivandrum, Kerala. It contains the stroke risk parameter values of 30 subjects, 15 of whom had a stroke; the rest have a very low stroke risk score according to the stroke risk card of the American Stroke Association. The stroke risk factors considered were:
Blood pressure- in mmHg
Atrial fibrillation- yes/no
Cholesterol levels- mg/dL
Exercise habits- yes/no
Stroke in family- yes/no
This database will be hereafter referred to as the “SRP” database.
8 Classification of structural MRIs
As a predecessor to the proposed CAD (Computer Aided Diagnosis) tool for prediction, we first developed a CAD tool for differentiating brain MRIs from the TBA database of subjects who have suffered a stroke (hereafter referred to as “stroke MRIs”) from the MRIs of subjects suffering from other neural disorders (hereafter referred to as “non-stroke MRIs”) which can be diagnosed from structural MRIs [panachakel2017multi]. For this, we made use of the Haralick features [a6] and Non-negative Matrix Factorisation for feature extraction, and SVM for classification. We also introduced a computationally efficient method for combining feature vectors that are linear in two different kernel spaces. For this, we used the distance of the feature vector from the hyperplane as a measure of the confidence of classification, thus improving on the classification efficiency obtainable by using either one of the two feature vectors alone or by mere concatenation of the two.
The features used were NMF and Haralick features. For NMF, the number of basis vectors was set to 14. The Haralick features used were the range and mean of the 14 statistical parameters described in [a6] along the four directions, with unity distance between neighbours, as the first set of features. 8-bpp images were scaled down to 4-bpp images to reduce the computation time.
8.2 Multi-level SVM
In the proposed multi-level classification, two support vector models are created, one using the NMF features and one using the Haralick features. Now, given a feature vector x in the feature space, we compute a score based on its distance from the decision-boundary hyperplane as [n3]
s(x) = Σᵢ αᵢ yᵢ K(xᵢ, x) + b,
where the αᵢ are the weights of the support vectors xᵢ, the yᵢ are their class labels, K is the kernel function and b is the bias.
The scores provide an estimate of how good the classification is: the larger the score, the larger the distance from the hyperplane and hence the higher the probability that the sample lies in that class [n3]. In our work, for a given test sample, we compute the scores under both support vector models. The model that gives the larger absolute score is assumed to have classified the sample correctly. This approach improves the classification accuracy.
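The selection rule above can be sketched as follows. The toy “models” (support vectors, weights αᵢyᵢ, bias, kernel) are hand-made stand-ins rather than trained SVMs, and the linear kernel is chosen only for readability; the sketch shows just the larger-magnitude-score decision.

```python
import numpy as np

def svm_score(x, sv, alpha_y, b, kernel):
    """SVM score s(x) = sum_i alpha_i*y_i*K(x_i, x) + b; its magnitude
    reflects the distance of x from the decision hyperplane."""
    return sum(ay * kernel(s, x) for s, ay in zip(sv, alpha_y)) + b

def multilevel_predict(x_nmf, x_har, model_nmf, model_har):
    """Score the sample under both models (NMF-feature SVM and
    Haralick-feature SVM) and trust whichever score has the larger
    magnitude, i.e. the more confident model."""
    s_nmf = svm_score(x_nmf, *model_nmf)
    s_har = svm_score(x_har, *model_har)
    s = s_nmf if abs(s_nmf) >= abs(s_har) else s_har
    return 1 if s >= 0 else -1

# Hand-made stand-in models: (support vectors, alpha_i*y_i, bias, kernel)
lin = lambda a, b: float(np.dot(a, b))
model_nmf = ([np.array([1.0]), np.array([-1.0])], [0.5, -0.5], 0.0, lin)
model_har = ([np.array([1.0]), np.array([-1.0])], [0.1, -0.1], 0.0, lin)

# The NMF model scores +2.0, the Haralick model -0.4; the NMF model
# is more confident, so its (positive-class) decision wins.
label = multilevel_predict(np.array([2.0]), np.array([-2.0]),
                           model_nmf, model_har)
```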
8.3 Performance Metrics
Three performance metrics were used in this work:
Sensitivity, which is a measure of the system’s ability to identify stroke MRIs.
Specificity, which is a measure of the system’s ability to identify non-stroke MRIs.
Accuracy, which is a measure of the system’s net classification efficiency.
Before defining these metrics mathematically, we introduce the following terms:
TP: True Positive, a stroke MRI identified as a stroke MRI.
TN: True Negative, a non-stroke MRI identified as a non-stroke MRI.
FP: False Positive, a non-stroke MRI identified as a stroke MRI.
FN: False Negative, a stroke MRI identified as a non-stroke MRI.
Now, we can mathematically define sensitivity, specificity and accuracy as:
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
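These metrics can be sketched in a few lines. The counts in the example are hypothetical, but chosen to be consistent with the figures reported later for the structural-MRI classifier (14 stroke and 16 non-stroke MRIs).

```python
def sensitivity(tp, fn):
    """Ability to identify stroke MRIs: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Ability to identify non-stroke MRIs: TN / (TN + FP)."""
    return tn / (tn + fp)

def accuracy(tp, tn, fp, fn):
    """Net classification efficiency: (TP + TN) / all samples."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts for 14 stroke and 16 non-stroke MRIs:
# TP = 11, FN = 3, TN = 15, FP = 1
print(round(100 * sensitivity(11, 3), 2))      # 78.57
print(round(100 * specificity(15, 1), 2))      # 93.75
print(round(100 * accuracy(11, 15, 1, 3), 2))  # 86.67
```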
For cross-validation, LOOCV (Leave-One-Out Cross Validation) was used. As the name suggests, LOOCV uses a single observation as the validation set and the remaining observations as the training set, repeating this so that every observation serves exactly once as the validation set.
A sample confusion matrix which will be used for providing the results is shown in Fig. 6.
Feature extraction and classification were done in MATLAB® 2013a. The SVM functions in the statistical toolbox were used in this work. Different kernel functions with different parameter values were tried in the classification process to obtain the highest possible classification accuracy.
For classification using Haralick features with the RBF (Radial Basis Function) kernel, the highest accuracy was obtained for an “rbf_sigma” of 60. For the MLP (Multilayer Perceptron) kernel, the parameter was [10000 -100]. The results are given in TABLE 1. When using NMF, the “rbf_sigma” value giving the highest efficiency was 40 and the MLP parameter was [10 -100]. The results are shown in TABLE 2.
The corresponding confusion matrices for the best classification outputs (when the classification accuracies were equal, the next parameter compared was sensitivity) are shown in Fig. 7.
Fig. 8 shows the confusion matrices when the features are considered simultaneously. Fig. 8(a) shows the confusion matrix when the two features are simply concatenated. RBF kernel with “rbf_sigma” of 40 was used since it gave the best performance. Fig. 8(b) shows the confusion matrix when the proposed multi-level SVM is used.
Fig. 9 shows a graphical comparison of sensitivity, specificity and accuracy when the features are used individually and simultaneously.
Clearly, the multi-level SVM outperforms the ordinary SVM. The developed system has a sensitivity of 78.57% at a specificity of 93.75%, yielding an accuracy of 86.67%, as given in TABLE 3. The analysis showed that false detections occurred mainly for neoplastic disorders.
10 Prediction of CVA
10.1 Tier 1
Tier 1 makes use of the SRP database. For classification, it uses an FFNN (Feed-Forward Neural Network) with nine input nodes, one hidden layer with six neurons and two output neurons. The training algorithm used is the Levenberg-Marquardt backpropagation algorithm; the learning rate was 0.1, with the factor for decreasing the learning rate set to 0.5. These parameters were found to be the most appropriate after extensive training and cross-validation tests with several parameter combinations. This neural network is shown in Fig. 10.
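The forward pass of the 9-6-2 network described above can be sketched as follows (in Python/NumPy rather than MATLAB). The weights here are random stand-ins, since in the actual system they are learnt by Levenberg-Marquardt backpropagation, and the tanh hidden-layer activation is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the text: 9 inputs, 6 hidden neurons, 2 outputs.
W1, b1 = rng.normal(size=(6, 9)), np.zeros(6)
W2, b2 = rng.normal(size=(2, 6)), np.zeros(2)

def forward(x):
    """One forward pass of the 9-6-2 network. The weights above are
    random stand-ins; the paper trains them with Levenberg-Marquardt."""
    h = np.tanh(W1 @ x + b1)       # hidden layer (tanh assumed)
    return W2 @ h + b2             # two output neurons (risk scores)

risk = forward(rng.random(9))      # one subject's 9 risk parameters
print(risk.shape)                  # (2,)
```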
The confusion matrix obtained for tier-1 is shown in Fig. 11.
10.2 Tier 2
Tier 2 makes use of neuroimaging and machine learning for predicting CVA. The ideal dataset for testing this tier would have been brain MRI images of subjects who had a stroke, taken before the onset of the stroke. Since it is very difficult to obtain such a dataset, the brain MRI images of patients who had a stroke were instead taken from the TBA database, and all visible lesions due to stroke were removed with the help of a group consisting of a general physician and a radiologist. The removed portions were assigned “NaN” (Not a Number) as the pixel intensity value, and the MATLAB® routine for evaluating the Haralick features was modified so that pixels with NaN intensity values were completely neglected in the formation of the co-occurrence matrix. The 28 features described in Section 9 were used for the classification.
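The NaN-skipping modification can be sketched as follows for a single horizontal direction; this is an illustrative Python/NumPy re-implementation, not the modified MATLAB routine itself.

```python
import numpy as np

def glcm_ignore_nan(img, levels):
    """Horizontal co-occurrence matrix that skips any pixel pair in
    which either intensity is NaN (the removed-lesion pixels)."""
    P = np.zeros((levels, levels))
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
    for a, b in zip(left, right):
        if np.isnan(a) or np.isnan(b):
            continue                      # neglect pairs touching a lesion
        P[int(a), int(b)] += 1
        P[int(b), int(a)] += 1            # symmetric counting
    return P / P.sum() if P.sum() else P

# A 3x3 toy image in which two "lesion" pixels were replaced by NaN:
img = np.array([[0.0, 1.0, np.nan],
                [2.0, np.nan, 1.0],
                [0.0, 0.0, 2.0]])
P = glcm_ignore_nan(img, levels=3)
```

Only the three pairs untouched by NaN contribute, so the matrix is still a valid probability distribution over the surviving gray-level pairs.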
During the training stage, normal brain MRIs from the TBA database were used, but for the testing and cross-validation stages, the brain MRIs with lesions removed were used. Due to the removal of lesions, NMF could not be used, and hence this stage relies entirely on Haralick features.
As with the classification of structural MRIs described in Section 8, different SVMs with different kernel functions were used to “predict” stroke. This is called “prediction” because, once the dead brain cells are removed, the MRI does not have any lesions of a stroke and, as such, corresponds to a normal MRI. Since the MRI is a “stroke MRI” (cf. Section 8), it is reasonable to assume that it corresponds to a subject who is going to have a stroke, i.e., the stage before any typical stroke signs are visible in the MRI. This is reasonable because a stroke is usually a focal brain ischaemia and results in the death of brain cells in the particular area where the blood flow is blocked. If the proposed classifier for predicting stroke can classify a “stroke MRI” with the lesions removed, it means that there are some latent features in the brain MRI of a stroke patient, other than the stroke lesions, which the features used have successfully extracted.
The classifier performance for various kernel functions is given in TABLE 4. The various parameters were:
MLP: [1 -2.54]
The confusion matrix when the kernel function was MLP is given in Fig. 12.
Even though a classification efficiency of 70% may not seem very high, only Haralick features were used, and the sensitivity is greater than 85% at a specificity greater than 55%. The performance can definitely be improved by incorporating other features as well; in that case, this can be considered a promising result.
This work introduced two CAD systems: one for classifying structural MRI images and another for predicting CVA. A novel technique for classification using separate sets of features that are linear in different kernels was also introduced. This technique, named “multi-level SVM”, improves the classification accuracy by 18% over a classifier that uses the two features simultaneously by concatenating them together.
The accuracy for classifying structural MRI images using NMF, Haralick features and multi-level SVM was 86.67%. This acted as a prelude to a CAD tool for predicting CVAs. The CAD tool for prediction is two-tier in architecture. The first tier makes use of an ANN and stroke risk factor parameters to create a system that can predict the probability of a person having a stroke, given his/her stroke risk parameter values.
The second tier makes use of neuroimaging for the prediction. T2-weighted structural MRI images were used for training the system. The term “prediction” is used because the input to the system during the testing stage has the stroke lesions removed. The classification accuracy obtained using Haralick features alone was 70%. This shows that there are changes in the brain other than the lesions caused by oncosis. The classification efficiency can be improved by using other features or by improving the classifier. These CAD tools can be used for predicting the stroke risk of a person, thus reducing the incidence rate of CVA, which is the second leading cause of death.