The prediction of reduced life expectancy in individuals is a public health priority and is central to personalised medical decision making. Previous attempts to predict reduced life expectancy in the elderly have used invasive (e.g., blood samples) and non-invasive (e.g., self-reported survey results, clinical examination) tests. These approaches achieved a classification accuracy between 60% and 80% [1, 2], although patient age alone has shown a predictive accuracy above 65%. Compared to these previous attempts, the use of chest CT for the prediction of reduced life expectancy is advantageous because these scans potentially offer information on multiple organs and tissues from a single non-invasive test. Hence, the aim of this paper is to show that chest CT alone (i.e., excluding previously used invasive and non-invasive tests) can produce accurate predictions of reduced life expectancy.
Typically, prognostic models in medical image analysis have been designed for the prediction of disease specific outcomes [3, 4, 5, 6, 7], where the methodology requires hand-crafted features. These features are selected/extracted based on their correlation with the prognosis, followed by modelling of the desired outcome using survival models or predictive classifiers. This multi-stage process of feature design and selection/extraction, followed by modelling has many disadvantages, such as the hand-crafting of the image features requiring medical expertise and being useful only for the particular prognosis being addressed, and the independence between feature selection/extraction and modelling potentially introducing redundant features and removing complementary features for the classification process.
In this paper, we propose two new approaches for the prediction of 5-year all-cause mortality in elderly individuals using chest CT and the segmentation maps of the following anatomies: aorta, spinal column, epicardial fat, body fat, heart, lungs and muscle. We have chosen chest CTs because they are commonly performed and widely available from hospitals, which facilitates dataset acquisition, and the segmentation maps are informed by previous biomarker research, which has demonstrated predictive and detectable changes in these tissues [5, 6, 7]. The approaches developed in this paper are the following (Fig. 1): 1) a unified framework based on deep learning, where the features and classifier are automatically learned in a single optimisation; and 2) a multi-stage framework based on the hand-crafting and selection/extraction of radiomics features, followed by a classifier learning process. Experiments based on 48 annotated chest CT volumes show that the deep learning model produces a mean classification accuracy of 68.5%, while radiomics produces a mean accuracy that varies between 56% and 66% (depending on the feature selection/extraction method and classifier). Even though these results show comparable classification accuracy, deep learning models have an important advantage compared to radiomics: the fully automated way of designing features, without requiring the assistance of a medical expert. This advantage also means that future similar problems can be addressed in a more automated way, facilitating progress in this field of research.
2 Literature Review
This paper is related to radiomics and deep learning for medical image analysis. Radiomics methods are a recent development in medical image analysis and are currently the state of the art in clinical studies. These methods are concerned with the design of hand-crafted features and their association with subtle variations in disease processes (e.g., genetic variations). Usually, radiomics methods are applied to imaging studies of patients with active tumours, but the application of these techniques to a general population of radiology patients for the prediction of important medical outcomes (e.g., mortality) is novel. The hand-crafting of features in these methods is inefficient because the process requires medical expertise; alternatively, if the features are task-agnostic (i.e., not informed by domain knowledge), it is not possible to know in advance which features will be effective, so many candidate features must be generated. This often requires a feature selection/extraction step to reduce the training complexity of the final classifier, and this step is based on a search heuristic that is not necessarily linked to the classification target. For every new problem addressed by radiomics, these two inefficient steps must be repeated, which represents the major disadvantage of these methods.
Deep learning models are defined by a network composed of several layers of non-linear transformations that represent features of different levels of abstraction, extracted directly from the input data [8, 9]. In medical image analysis, deep learning can significantly improve segmentation and classification results [10, 11, 12], but its application to routinely collected medical images to predict important medical outcomes (e.g., mortality) has yet to be demonstrated. Our main references are the multi-view classification of mammograms [13], which classifies breast exams into normal, benign and malignant, and the chest pathology classification using X-rays [10]; these works use deep learning methods for the high-level classification of medical images, but both produce a diagnosis, which is conceptually different from our prognostic output.
The dataset is represented by $\mathcal{D} = \{ (\mathbf{x}_i, \mathbf{m}_i, y_i) \}_{i=1}^{|\mathcal{D}|}$, where $\mathbf{x}: \Omega \to \mathbb{R}$ denotes the chest CT, with $\Omega$ representing the volume lattice, $\mathbf{m}$ represents the segmentation map for the anatomies in {muscle, body fat, aorta, spinal column, epicardial fat, heart, lungs}, and $y \in \{0, 1\}$ denotes whether the patient is dead ($y = 1$) or alive ($y = 0$) at the time to censoring (time of death or time of last follow-up).
This approach comprises the following stages: 1) hand-crafting a large pool of features, 2) feature selection/extraction, and 3) classifier training. The hand-crafting process involves medical expertise to extract intensity, texture and shape information from particular image regions that are relevant for the final prognosis/diagnosis task. The feature extraction is denoted by
$$\mathbf{f} = \phi(\mathbf{x}, \mathbf{m}),$$
where $\phi(\cdot)$ represents a function that extracts the features $\mathbf{f}$. Intensity features are based on the histogram of grey values per anatomy, with each feature defined by statistics of this histogram, such as the mean, median, range, skewness and kurtosis. In addition to these task-agnostic intensity-based features, we also include task-specific features that are related to the problem of estimating chronic disease burden, such as approximations of bone mineral density (BMD) scoring [6], emphysema scoring [7], and the coronary (and aortic) artery calcification score [14].
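To make the intensity statistics concrete, the sketch below computes a few of them for one anatomy, assuming a flattened volume and a binary mask as synthetic stand-ins for a segmented CT (the data and function names are illustrative, not the authors' implementation):

```python
import statistics

def skewness(values):
    """Fisher-Pearson sample skewness of a list of grey values."""
    n = len(values)
    mu = sum(values) / n
    m2 = sum((v - mu) ** 2 for v in values) / n
    m3 = sum((v - mu) ** 3 for v in values) / n
    return m3 / (m2 ** 1.5) if m2 > 0 else 0.0

def intensity_features(volume, mask):
    """Histogram statistics of the grey values inside one anatomy mask."""
    values = [v for v, inside in zip(volume, mask) if inside]
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "range": max(values) - min(values),
        "skewness": skewness(values),
    }

# Synthetic flattened "volume" with a binary anatomy mask.
vol = [10, 40, 42, 45, 200, 41, 43]
msk = [0, 1, 1, 1, 0, 1, 1]
feats = intensity_features(vol, msk)
```

In a real pipeline these statistics would be computed per anatomy, yielding one such feature group for each of the seven segmentation maps.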
The texture-based features use first and second-order matrix statistics. The grey level co-occurrence matrix (GLCM) for anatomy $a$ is denoted by $G_a(i, j \,|\, d, \theta)$, where the row $i$ and column $j$ represent the number of times that grey levels $i$ and $j$ co-occur in two voxels separated by the distance $d$ in the direction $\theta$ within the segmentation map provided by $\mathbf{m}$. The grey level run-length matrix (GLRLM) for anatomy $a$ is defined by $R_a(i, l \,|\, \theta)$, where the row $i$ and column $l$ denote the number of times a run of length $l$ occurs with grey level $i$ in direction $\theta$ within the segmentation. The grey level size-zone matrix (GLSZM) for anatomy $a$ is represented by $Z_a(i, s)$, where the row $i$ and column $s$ denote the number of times $s$ voxels of grey level $i$ are contiguous in an 8-connected neighbourhood within the segmentation. Finally, the multiple grey level size-zone matrix (MGLSZM) for anatomy $a$ is computed as a weighted average of several GLSZMs, each estimated with a different number of possible grey levels. The features computed from these matrices are based on several statistics, such as energy, mean, entropy, variance, kurtosis, skewness and correlation. Each of the intensity and texture features is defined in a spatial context, using weighted mean positions and spatial quartile means in all three dimensions, to identify local variations across the tissues and organs. Finally, the shape-based features are based on the volume of each anatomy, computed from the segmentation map $\mathbf{m}$.
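As an illustration of the co-occurrence statistics above, the following pure-Python sketch builds a GLCM for a tiny quantised 2-D slice and computes its energy (a toy example, not the radiomics package used in the experiments):

```python
def glcm(image, levels, offset=(0, 1)):
    """Grey level co-occurrence matrix: entry (i, j) counts how often grey
    levels i and j co-occur in pixel pairs separated by `offset`."""
    rows, cols = len(image), len(image[0])
    dr, dc = offset
    G = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                G[image[r][c]][image[r2][c2]] += 1
    return G

def glcm_energy(G):
    """Energy statistic: sum of squared normalised co-occurrence frequencies."""
    total = sum(sum(row) for row in G)
    return sum((v / total) ** 2 for row in G for v in row)

# Tiny 2-level image standing in for a quantised slice inside one anatomy mask.
img = [[0, 0, 1],
       [1, 1, 0]]
G = glcm(img, levels=2)
```

The GLRLM, GLSZM and MGLSZM follow the same pattern: accumulate counts of a spatial grey-level event into a matrix, then summarise the matrix with scalar statistics.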
The feature selection/extraction step forms a low-dimensionality vector $\tilde{\mathbf{f}}$ using a heuristic that aims to reconstruct $\mathbf{f}$ under some constraints [15, 16]. This vector is used for training the classifier, as in:
$$\theta^* = \arg\min_{\theta} \sum_{(\tilde{\mathbf{f}}, y) \in \mathcal{T}} \ell\big(y, c(\tilde{\mathbf{f}}; \theta)\big),$$
where $\mathcal{T}$ represents the training set, $c(\cdot)$ denotes a classifier that returns a value in $[0, 1]$ indicating the confidence in the 5-year mortality prediction, $\theta$ represents the classifier parameters, and $\ell(\cdot)$ denotes the loss function that penalises classification errors.
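The classifier training stage, minimising a loss over the training set, can be sketched with plain gradient descent on the logistic (cross-entropy) loss for a linear classifier. This is a hedged stand-in on synthetic one-dimensional features, not the SVM and random forest classifiers used later in the experiments:

```python
import math

def train_linear(features, labels, lr=0.5, epochs=200):
    """Minimise sum_i l(y_i, c(f_i; theta)) by stochastic gradient descent,
    with c a logistic (sigmoid) linear classifier and l the cross-entropy."""
    n_feat = len(features[0])
    w, b = [0.0] * n_feat, 0.0
    for _ in range(epochs):
        for f, y in zip(features, labels):
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # confidence in [0, 1]
            g = p - y                        # gradient of the cross-entropy loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, f):
    """Classifier confidence c(f; theta) in [0, 1]."""
    z = sum(wi * fi for wi, fi in zip(w, f)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy selected features: one dimension separates mortality (1) from survival (0).
F = [[0.1], [0.2], [0.8], [0.9]]
Y = [0, 0, 1, 1]
w, b = train_linear(F, Y)
```

The point of the sketch is the shape of the optimisation, a loss summed over (feature vector, label) pairs, which is shared by the SVM (hinge loss) and other classifiers discussed in the paper.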
3.0.3 Deep Learning
The deep learning model used in this work is the Convolutional Neural Network (ConvNet) [17, 9], defined as follows:
$$c(\mathbf{x}, \mathbf{m}; \theta) = f_{\text{out}} \circ f_L \circ \cdots \circ f_2 \circ f_1(\mathbf{x}, \mathbf{m}), \tag{3}$$
where $\circ$ denotes the composition operator, $\theta$ represents the ConvNet parameters (i.e., weights and biases), and the output is a value in $[0, 1]$ indicating the confidence in the 5-year mortality prediction. Each network layer $f_l$ in (3) contains a set of filters, with each filter being defined by
$$\mathbf{h} = \sigma(\mathbf{W} * \mathbf{v} + \mathbf{b}),$$
where $\sigma(\cdot)$ represents a non-linearity, $\mathbf{W}$ and $\mathbf{b}$ denote the weight and bias parameters, $*$ denotes the convolution operator, and $\mathbf{v}$ is the layer input. The last layer $f_L$ of the model in (3) produces a response $\mathbf{v}_L$, which is the input for $f_{\text{out}}$ that contains two output nodes (denoting the probability of 5-year mortality or survival), where layers $f_L$ and $f_{\text{out}}$ are fully-connected. The training of the model in (3) minimises the binary cross-entropy loss on the training set $\mathcal{T}$, as follows:
$$\theta^* = \arg\min_{\theta} \; - \sum_{(\mathbf{x}, \mathbf{m}, y) \in \mathcal{T}} \Big[ y \log c(\mathbf{x}, \mathbf{m}; \theta) + (1 - y) \log\big(1 - c(\mathbf{x}, \mathbf{m}; \theta)\big) \Big].$$
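Written out, the binary cross-entropy loss sums the negative log-confidence assigned to each true label; the sketch below evaluates it on hypothetical confidence values standing in for ConvNet outputs:

```python
import math

def binary_cross_entropy(confidences, labels, eps=1e-12):
    """Binary cross-entropy: -sum_i [ y_i log c_i + (1 - y_i) log(1 - c_i) ]."""
    total = 0.0
    for c, y in zip(confidences, labels):
        c = min(max(c, eps), 1.0 - eps)  # clamp to avoid log(0)
        total -= y * math.log(c) + (1 - y) * math.log(1.0 - c)
    return total

# A confident, correct prediction incurs a small loss; a confident, wrong
# prediction incurs a large one.
low = binary_cross_entropy([0.9], [1])
high = binary_cross_entropy([0.1], [1])
```

Because the loss grows without bound as the confidence on the true class approaches zero, training strongly penalises confident mistakes, which is the intended behaviour for a mortality classifier.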
4.0.1 Materials and Methods
The dataset has 24 cases (mortality) and 24 matched controls (survival), forming 48 annotated chest CTs. Inclusion criteria for the mortality cases are: age $\geq$ 60, mortality in 2014, and a chest CT performed in the 3 to 5 years preceding death. Exclusion criteria are: acute disease identified on the chest CT, mortality unrelated to chronic disease (e.g., trauma), and an active cancer diagnosis. Controls were matched on age, gender, time to censoring (death or end of follow-up), and source of imaging referral (emergency, inpatient or outpatient departments). Images were obtained using 3 types of scanners (GE Picker PQ 6000, Siemens AS plus, and Toshiba Aquilion 16) using standard protocols. The chest CTs were obtained in the late arterial phase, following a 30-second delay after the administration of intravenous contrast (Omnipaque 350/Ultravist 370), and were annotated by a radiologist using semi-automated segmentation tools contained in the Vitrea software suite (Vital Images, Toshiba), where the following anatomies have been segmented: muscle, body fat, aorta, spinal column, epicardial fat, heart, and lungs.
The evaluation of the methodologies is based on a 6-fold cross-validation experiment, where each fold contains 20 cases and 20 matched controls for training and 4 cases and 4 matched controls for testing. The classification performance is measured using the mean accuracy over the six experiments, with accuracy computed by $\text{Acc} = \frac{TP + TN}{TP + TN + FP + FN}$, where $TP$ represents a correct mortality prediction, $TN$ denotes a correct survival prediction, $FP$ means an incorrect mortality prediction, and $FN$ an incorrect survival prediction. We also show the receiver operating characteristic (ROC) curve and the area under the curve (AUC) using the classifier confidence on the 5-year mortality classification.
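The accuracy computation above is a one-liner; the counts below are illustrative, not results from the paper:

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

# One hypothetical test fold of 4 cases and 4 controls.
acc = accuracy(tp=3, tn=3, fp=1, fn=1)
```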
For the radiomics method, we hand-crafted 16210 features, where 2506 features come from the aorta, 2506 from the heart, 2236 from the lungs, 2182 from the epicardial fat, 2182 from the body fat, 2182 from the muscle, and 2416 from the spinal column (most of these features are hand-crafted with the methodology provided by J. Carlson, https://cran.r-project.org/web/packages/radiomics/), and where 936 represent domain knowledge features [6, 7, 14] (see Sec. 3). For the feature selection/extraction, we tried an identity linear feature extraction (i.e., the original features), LASSO [15] and PCA [16]
learned with the training set for each fold. Finally, we tried different classifiers, such as linear (L) and non-linear (NL) support vector machines (SVM) [19] and random forests (RF) [20]. Based on the experimental results, we show the performance of the following models: 1) features extracted with LASSO and an NLSVM classifier; 2) the original features and an RF classifier trained with 900 trees, a minimum node size of 5 (the minimum number of training samples per node) and an mtry of 3 (the number of variables sampled as candidates for each node split); and 3) features extracted with LASSO and an LSVM classifier.
The ConvNet has four convolutional layers, where the input has 8 channels (the chest CT and the 7 segmentation maps), the first layer has 50 filters and the second to fourth layers have 100 filters each (these are 3-D filters). The first convolutional layer has ReLU activation [21], the fifth layer contains 6000 nodes, and the output layer has two nodes. For training, dropout [22] is applied to all layers; the learning rate starts at its initial value for epochs 1 to 10 and is then continuously reduced, reaching its final value for epochs 60 to 120; and we use RMSprop [23]. These network and training parameters were selected based on experimental results.
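For reference, one RMSprop update divides each gradient by a running root-mean-square of its history; the sketch below applies it to a toy quadratic with a constant-then-decaying learning-rate schedule (the decay constants and learning rate are illustrative, since the paper's exact values are not reproduced here):

```python
import math

def rmsprop_step(theta, grad, cache, lr, rho=0.9, eps=1e-8):
    """One RMSprop update: scale each gradient by a running RMS of its history."""
    new_cache = [rho * c + (1.0 - rho) * g * g for c, g in zip(cache, grad)]
    new_theta = [t - lr * g / (math.sqrt(c) + eps)
                 for t, g, c in zip(theta, grad, new_cache)]
    return new_theta, new_cache

# Minimise f(t) = t^2 (gradient 2t): constant learning rate for the first 10
# epochs, then a continuous exponential decay.
theta, cache = [5.0], [0.0]
for epoch in range(120):
    lr = 0.05 if epoch < 10 else 0.05 * 0.97 ** (epoch - 10)
    grad = [2.0 * t for t in theta]
    theta, cache = rmsprop_step(theta, grad, cache, lr)
```

Because RMSprop normalises each step by the gradient's recent magnitude, the effective step size is governed mostly by the learning rate itself, which is why the decaying schedule controls convergence directly.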
We show the mean and the standard deviation of the ROC curves on the testing set for the deep learning and radiomics (with the NLSVM, RF and LSVM classifiers) models in Fig. 2, which also shows a table with the mean and standard deviation of the AUC and accuracy on the testing set for the deep learning and radiomics models. Using the t-test for paired samples, we note that there is no significant difference between any pair of models in terms of accuracy and AUC results on the testing set. When all models are compared to the null hypothesis that the true mean accuracy is 0.5 (i.e., chance) on the testing set, both the deep learning model and the radiomics model with the NLSVM classifier show a significant p-value; for the AUC on the testing set, only the deep learning model shows a significant p-value. Finally, in Fig. 3, we show two chest CT examples with the output from both models.
Fig. 3: (a) Case (Mortality); (b) Control (Survival).
5 Discussion and Conclusions
The experiments demonstrate promising results, with prediction accuracy from routinely obtained chest CTs similar to the current state-of-the-art clinical risk scores, despite our small dataset and our exclusion of highly predictive covariates such as age and gender. Furthermore, expert review of the correctly classified images (such as the example cases in Fig. 3) suggests that our models may be identifying medically plausible imaging biomarkers. The comparison between deep learning and radiomics models shows that they produce comparable classification results, but the deep learning model offers several advantages, such as automatic feature learning, and unified feature and classifier learning.
These advantages mitigate the issues of hand-crafting features, which requires expert domain knowledge, and the complicated multi-stage learning process of radiomics. It is in fact remarkable that a deep learning model implemented with relative simplicity could produce competitive results compared to the radiomics method, which uses features that have been heavily tuned for the task at hand [6, 7, 14] and relies on an extensive set of initial features (e.g., we have 16210 features). This hand-crafting task would need to be re-tuned for every new problem in radiomics, unlike the ConvNet approach. Finally, we believe that the deep learning results can be improved with the use of pre-training and data augmentation [8, 9], and both models would benefit significantly from the integration of predictive epidemiological information (e.g., gender and age).
In this paper, we show the first proof of concept experiments for a system that is capable of predicting 5-year mortality in elderly individuals from chest CTs alone. The widespread use of medical imaging suggests that our methods will be clinically useful after being successfully tested in large scale problems (in fact, we are in the process of acquiring larger annotated datasets), as the only required inputs are already highly utilised: the medical images. We also note that the proposed deep learning model can be easily extended to other important medical outcomes, and other imaging modalities.
-  Ganna, A., Ingelsson, E.: 5 year mortality predictors in 498 103 uk biobank participants: a prospective population-based study. The Lancet 386(9993) (2015) 533–540
-  Yourman, L.C., Lee, S.J., Schonberg, M.A., Widera, E.W., Smith, A.K.: Prognostic indices for older adults: a systematic review. JAMA 307(2) (2012) 182–192
-  Aerts, H.J., Velazquez, E.R., Leijenaar, R.T., Parmar, C., Grossmann, P., Cavalho, S., Bussink, J., Monshouwer, R., Haibe-Kains, B., Rietveld, D., et al.: Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nature communications 5 (2014)
-  Lambin, P., Rios-Velazquez, E., Leijenaar, R., Carvalho, S., van Stiphout, R.G., Granton, P., Zegers, C.M., Gillies, R., Boellard, R., Dekker, A., et al.: Radiomics: extracting more information from medical images using advanced feature analysis. European Journal of Cancer 48(4) (2012) 441–446
-  Kumar, V., Gu, Y., Basu, S., Berglund, A., Eschrich, S.A., Schabath, M.B., Forster, K., Aerts, H.J., Dekker, A., Fenstermacher, D., et al.: Radiomics: the process and the challenges. Magnetic resonance imaging 30(9) (2012) 1234–1248
-  Bauer, J.S., Henning, T.D., Müeller, D., Lu, Y., Majumdar, S., Link, T.M.: Volumetric quantitative ct of the spine and hip derived from contrast-enhanced mdct: conversion factors. American Journal of Roentgenology 188(5) (2007) 1294–1301
-  Haruna, A., Muro, S., Nakano, Y., Ohara, T., Hoshino, Y., Ogawa, E., Hirai, T., Niimi, A., Nishimura, K., Chin, K., et al.: Ct scan findings of emphysema predict mortality in copd. CHEST Journal 138(3) (2010) 635–640
-  Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786) (2006) 504–507
-  Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. (2012) 1097–1105
-  Bar, Y., Diamant, I., Wolf, L., Greenspan, H.: Deep learning with non-medical training used for chest pathology identification. In: SPIE Medical Imaging, International Society for Optics and Photonics (2015) 94140V–94140V
-  Ciresan, D., Giusti, A., Gambardella, L.M., Schmidhuber, J.: Deep neural networks segment neuronal membranes in electron microscopy images. In: Advances in neural information processing systems. (2012) 2843–2851
-  Dhungel, N., Carneiro, G., Bradley, A.P.: Deep learning and structured prediction for the segmentation of mass in mammograms. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015. Springer (2015) 605–612
-  Carneiro, G., Nascimento, J., Bradley, A.P.: Unregistered multiview mammogram analysis with pre-trained deep learning models. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015. Springer (2015) 652–660
-  Nasir, K., Rubin, J., Blaha, M.J., Shaw, L.J., Blankstein, R., Rivera, J.J., Khan, A.N., Berman, D., Raggi, P., Callister, T., et al.: Interplay of coronary artery calcification and traditional risk factors for the prediction of all-cause mortality in asymptomatic individuals. Circulation: Cardiovascular Imaging 5(4) (2012) 467–473
-  Tibshirani, R.: Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological) (1996) 267–288
-  Jolliffe, I.: Principal component analysis. Wiley Online Library (2002)
-  LeCun, Y., Bengio, Y.: Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks 3361(10) (1995)
-  Hastie, T., Tibshirani, R., Friedman, J., Franklin, J.: The elements of statistical learning: data mining, inference and prediction. The Mathematical Intelligencer 27(2) (2005) 83–85
-  Cortes, C., Vapnik, V.: Support-vector networks. Machine learning 20(3) (1995) 273–297
-  Breiman, L.: Random forests. Machine learning 45(1) (2001) 5–32
-  Nair, V., Hinton, G.E.: Rectified linear units improve restricted boltzmann machines. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10). (2010) 807–814
-  Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1) (2014) 1929–1958
-  Dauphin, Y.N., de Vries, H., Chung, J., Bengio, Y.: Rmsprop and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390 (2015)