Prediction of Overall Survival of Brain Tumor Patients

Automated brain tumor segmentation plays an important role in the diagnosis and prognosis of the patient. In addition, features from the tumorous brain help in predicting patients' overall survival. The main focus of this paper is to segment tumors from the BraTS 2018 benchmark dataset and to use age, shape, and volumetric features to predict the overall survival of patients. The random forest classifier achieves an overall survival accuracy of 59% on the test dataset and 67% on the dataset with gross total resection status. The approach uses fewer features but achieves better accuracy than state-of-the-art methods.


I Introduction

The medical fraternity considers brain tumors among the most fatal types of cancer [1]. Brain tumors are categorized by origin and by malignancy. Based on origin, tumors are classified as primary or secondary: a primary tumor develops in the brain, whereas a secondary tumor spreads to the brain from another part of the body. According to the World Health Organization (WHO), tumors can be classified by malignancy into grades I to IV in order of increasing aggressiveness [2]. High-Grade Glioma (HGG, grade III and grade IV tumors) needs immediate treatment [2] and may lead to the patient's death in less than two years, whereas Low-Grade Glioma (LGG) is a benign tumor which grows slowly and leaves the patient several years of life expectancy.

Magnetic Resonance Imaging (MRI) is a preferred technique for capturing tumors in the brain as it provides good soft-tissue contrast [3]. MRI sequences are also acquired after injecting Gadolinium to enhance and improve the quality of the MRI images [4]. Usually, a human expert uses MRI images for tumor diagnosis; the task is quite challenging due to the large data volume [5]. This motivates the need for automated or semi-automated brain tumor segmentation. Automated brain tumor segmentation methods are divided into three categories: basic, generative, and discriminative [11], [30]. With the evolution of deep learning, state-of-the-art methods use Convolutional Neural Networks (CNN) for semantic segmentation of the tumor [6].

Many methods further segment the tumor into its substructures: necrosis, enhancing tumor, and edema. The size of the tumor and of its substructures plays a major role in predicting overall survival (OS). In [13], a 3D U-net based model is used for tumor segmentation, and radiomics based features are used for overall survival prediction. The tumor is characterized by image-based features computed from the segmentation masks. These features are then used to train a Random Forest Regressor (RFR) with 1000 trees and an ensemble of small multilayer perceptrons (MLP). The reported accuracy for overall survival is 52.6% on the test dataset, with a Spearman correlation coefficient of 0.496.

In another attempt at survival prediction [14], the authors use a pre-trained AlexNet to segment the brain tumor. The features from the segmentations are used to train a linear discriminant for survival prediction. The texture features resulted in an accuracy of 46%, and histogram features achieved an accuracy of 68.5% on the test dataset. The authors of [15] developed a fully automated model for segmentation of LGG and HGG in multimodal MRIs. Their prediction of patient overall survival is based on support vector machine (SVM) learning algorithms; they reported 100% accuracy for overall survival prediction on a set of 16 test samples. In [29], the authors use the Dense-Res-Inception Net (DRINet) for biomedical image segmentation. The paper reported a Dice Similarity Coefficient (DSC) of 83.47%, 73.41%, and 64.98% for whole tumor, tumor core, and enhancing tumor respectively.

In [16], a fully convolutional neural network (FCNN) architecture is used for tumor segmentation, and the extracted features are fed to an SVM classifier for OS prediction. A preprocessing step applies Z-score normalization to the MR scans to overcome multi-center data variation and magnetic field inhomogeneities. Post-processing uses connected components to remove components below a size threshold. The features extracted from the segmented regions are fed to an SVM with a linear kernel. The reported accuracy for OS prediction is 60%. In [19], the authors created an ensemble of 19 variations of DeepMedic and 7 variations of 3D U-net. Various features, namely age and spatial, volumetric, morphological, and tractographic descriptors, are extracted, and their combination is used to train an SVM classifier. The authors reported an accuracy of 70% for features from the ground truth and 63% for features from network segmentation. Both accuracies are reported on the data of 59 patients with resection status of gross total resection (GTR).
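To make the preprocessing and post-processing steps concrete, the following is a minimal sketch of Z-score normalization over brain voxels and connected-component filtering, in the spirit of [16] and of the preprocessing also used later in this paper. The array contents and the 1000-voxel threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def zscore_normalize(volume):
    """Normalize non-zero (brain) voxels to zero mean and unit variance."""
    brain = volume[volume > 0]
    normalized = volume.astype(np.float32).copy()
    normalized[volume > 0] = (brain - brain.mean()) / (brain.std() + 1e-8)
    return normalized

def remove_small_components(mask, min_voxels=1000):
    """Drop connected components smaller than a voxel-count threshold."""
    labeled, num = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, range(1, num + 1))
    keep_labels = np.where(sizes >= min_voxels)[0] + 1  # labels are 1-based
    return np.isin(labeled, keep_labels).astype(mask.dtype)
```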

In [20], the authors implemented the DeepMedic CNN architecture for tumor segmentation and a Cox model for OS prediction. They achieved 80%, 68%, and 67% DSC for whole tumor, tumor core, and enhancing tumor respectively. The OS prediction accuracy is 44.5% on the training dataset and 38.2% on the test dataset.

The authors of [21] implemented the PixelNet architecture for tumor segmentation and achieved 88% whole tumor DSC. An artificial neural network (ANN) is trained on the mean, skewness, and location of the tumor for OS prediction; they reported an accuracy of 54.5%. In [22], the authors implemented a densely connected convolutional neural network for segmentation and an MLP based regressor for OS prediction, reporting 50% accuracy on training data. The authors of [23] implemented an ensemble of three convolutional networks with a hybrid loss function and extracted radiomics features to train a random forest classifier, reporting an accuracy of 52.6% on the validation set. In [24], the authors modified the U-net architecture with bottleneck layers and dense layers and applied an elastic net for OS prediction, reporting an accuracy of 67% on the training data.

The authors of [25] implemented an extended U-net architecture for tumor segmentation and XGBoost regression for OS prediction, achieving 65% accuracy on training data. In [26], a residual U-net is implemented for tumor segmentation, and an ensemble of a regression network and a random forest classifier is used for survival prediction. The paper reported an accuracy of 47.5%.

The above methods either use segmentation models with a large number of network parameters or use many features to train the classifier. The literature suggests that the U-net architecture provides good semantic segmentation. Therefore, this paper uses the U-net architecture proposed in [17] with modifications. The proposed work reduces the network depth to minimize the number of network parameters. Inductive transfer learning [28] is used for substructure segmentation: the whole tumor segmentation weights are transferred to the networks that train on the substructures. The weight transfer substantially reduces the problem of training failure and allows the networks to learn from small amounts of annotated data. Volumetric and shape features are extracted from the segmentation results and, along with age, are used to train a random forest classifier for OS prediction.

This paper is organized as follows. Section II presents preliminaries about the brain tumor segmentation (BraTS) dataset [6] used in the proposed work. Section III covers the CNN used for brain tumor segmentation and the random forest classifier for overall survival prediction. Section IV discusses the experimental results. Finally, Section V concludes the paper with suggestions to further improve the OS prediction.

II Multimodal Brain Tumor Segmentation Challenge

The multimodal brain tumor segmentation challenge invites researchers to develop robust brain tumor segmentation techniques for MRI scans [8, 6]. The dataset providers handle all ethical issues with care. The BraTS 2018 challenge has two tasks: segmentation of the gliomas and prediction of the patient's OS. The dataset [8, 9, 10, 6] comprises clinically-acquired 3T multimodal MRI scans, and all ground truth labels were manually revised by expert board-certified neuro-radiologists. Annotations are the Gd-enhancing tumor (ET, label 4), the peritumoral edema (ED, label 2), and the necrotic and non-enhancing tumor (NCR/NET, label 1) [6]. The dataset is co-registered to the same anatomical template, interpolated to the same resolution (1 mm³), and skull-stripped [17]. The dataset has 210 HGG samples and 75 LGG samples, with each sample having four MRI modalities (T1, T1Gd, T2, and FLAIR) along with the ground truth. Each sample has 155 slices with 240×240 pixels per slice. Features related to patients' OS are also provided: the number of survival days, the resection status (GTR / subtotal resection (STR)), and the age. The suggested classes for the prediction of OS are long-survivors (more than 15 months), short-survivors (less than 10 months), and mid-survivors (between 10 and 15 months). The overall survival of patients in days is shown in Fig. 1. The age and OS day distributions among the three survival classes are shown in Table I. The number of short-survivors is higher than that of the other classes; the mean age of these patients is also high, with a median age of 66.55, and their OS days are fewer in comparison to the other classes. Long-survivors are fewer in number but have a higher OS span than the other two classes. One can also observe high variability in the data of the long-survivors.

Fig. 1: Patients’ OS days.
Survival class     # Patients   Age (mean ± std)   OS days (mean ± std)
Short-survivors    65           65.44 ± 10.68      147.44 ± 83.08
Mid-survivors      50           58.70 ± 11.26      394.00 ± 49.32
Long-survivors     48           55.11 ± 12.19      826.23 ± 370.91
TABLE I: Distribution of dataset features in survival classes.
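For reference, a small helper that maps OS days onto the three survival classes defined above; the days-per-month conversion factor is an assumption for illustration.

```python
def survival_class(os_days, days_per_month=30.44):
    """Bin OS days into short (<10 months), mid (10-15), long (>15)."""
    months = os_days / days_per_month
    if months < 10:
        return "short"
    elif months <= 15:
        return "mid"
    return "long"

# Class means from Table I fall into the expected bins:
print(survival_class(147.44))  # short
print(survival_class(394.00))  # mid
print(survival_class(826.23))  # long
```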

III Implementation Details

The U-net architecture provides good semantic segmentation for biological images, as shown in [17]. The authors of [18] also adopted the U-net architecture for brain tumor segmentation on 2D images. The proposed work considers the architecture of [17], [18] with minor modifications: it uses three down-sampling and two up-sampling modules in the network instead of the five down-sampling and four up-sampling modules of [17] and [18]. It is found that the reduction in network depth reduces the number of parameters, speeding up processing without compromising accuracy. Each up/down-sampling module has two convolution layers; the ReLU activation function is applied after each convolution operation, and the Dice loss function is used to calculate the network loss after each epoch. The network is trained on the whole tumor as well as on each substructure, i.e., edema, enhancing tumor, and necrosis. A Keras sketch of this reduced architecture is given below.
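The following is a minimal sketch of such a reduced-depth U-net, assuming two 3×3 convolutions per module, 2×2 max pooling/up-sampling, and a standard soft Dice loss. The filter counts, input size, and optimizer are illustrative assumptions; the paper does not specify them here.

```python
from tensorflow.keras import layers, models, backend as K

def conv_block(x, filters):
    """One module: two 3x3 convolutions with ReLU activations."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def dice_loss(y_true, y_pred, smooth=1.0):
    """Soft Dice loss: 1 - DSC between prediction and ground truth."""
    y_true_f, y_pred_f = K.flatten(y_true), K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    dsc = (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    return 1.0 - dsc

def reduced_unet(input_shape=(240, 240, 4)):
    """Three encoder modules, two decoder modules, skip connections."""
    inputs = layers.Input(input_shape)
    c1 = conv_block(inputs, 32)                      # encoder module 1
    c2 = conv_block(layers.MaxPooling2D()(c1), 64)   # encoder module 2
    c3 = conv_block(layers.MaxPooling2D()(c2), 128)  # encoder module 3
    u1 = layers.concatenate([layers.UpSampling2D()(c3), c2])
    c4 = conv_block(u1, 64)                          # decoder module 1
    u2 = layers.concatenate([layers.UpSampling2D()(c4), c1])
    c5 = conv_block(u2, 32)                          # decoder module 2
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c5)  # binary mask
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss=dice_loss)
    return model
```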

Fig. 2 shows the architecture used for tumor segmentation in this paper. The labeled data is highly imbalanced, as the negative class dominates the positive class. For example, if the whole tumor covers 30% of the slices, then necrosis and enhancing tumor are found in only 10% of the brain slices. This scarcity of the positive class makes network training difficult: the network may get stuck in a local minimum, which requires re-initiation of the training, and such training also produces a large number of false positives. The concept of inductive transfer learning suggested in [28] helps resolve the issue. Source domain (whole tumor) network parameters are transferred to the target domain (substructure) network, and these parameters are used to initialize the target network training. The parameter transfer serves three purposes: 1) it deals with the scarcity of labeled data; 2) it provides localization for the substructure area; and 3) it reduces the number of false positives. Weight transfer has improved the network training performance; a sketch of the transfer and augmentation steps is given after Fig. 2. Data preprocessing includes Z-score normalization and data augmentation by applying rotation, flipping, elastic transformation, shear, shift, and zoom to the MRI slices.

Fig. 2: U-net architecture with reduced depth.
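Building on the reduced_unet sketch above, the weight transfer and the listed augmentations might look as follows in Keras. The augmentation magnitudes are assumptions, and elastic transformation would require a custom preprocessing function.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Build and (in practice) train the source-domain network first:
whole_tumor_model = reduced_unet()
# ... whole_tumor_model.fit(x_slices, y_whole_tumor, epochs=50) ...

# Inductive transfer: initialize the target (substructure) network
# with the source (whole tumor) network's learned parameters.
edema_model = reduced_unet()
edema_model.set_weights(whole_tumor_model.get_weights())
# ... edema_model.fit(x_slices, y_edema, epochs=50) ...

# Augmentation covering rotation, flipping, shear, shift, and zoom;
# the ranges below are illustrative assumptions.
augment = ImageDataGenerator(
    rotation_range=15, horizontal_flip=True, vertical_flip=True,
    shear_range=0.1, width_shift_range=0.1, height_shift_range=0.1,
    zoom_range=0.1)
```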

The following features are extracted from the whole tumor and the substructures for training the random forest classifier:

  • Volumetric features include the volume of the tumor relative to the brain; the volumes of necrosis, edema, and enhancing tumor relative to the whole tumor; and the extent of the tumor.

  • Shape features include elongation, flatness, minor axis length, major axis length, maximum 2D diameter, maximum 3D diameter, mesh volume, and sphericity.

Five volumetric features, fourteen shape features, and age are used to train the random forest classifier with 5-fold cross-validation, as sketched below.
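A minimal sketch of this classifier setup with scikit-learn follows; the feature matrix, tree count, and random labels are stand-ins for illustration, not the paper's actual data or hyperparameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((163, 20))          # 163 patients x (5 volumetric + 14 shape + age)
y = rng.integers(0, 3, size=163)   # short / mid / long survival labels (stand-in)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print("mean CV accuracy:", scores.mean())
```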

IV Experimental Results

The work uses NVIDIA Quadro K5200 and Quadro P5000 GPUs for the training and testing of the CNN and for the random forest training. Python 3.6, along with all the necessary packages, is used for development. The U-net is trained on 80% (228) of the images and tested on 20% (57). The training data is further divided into training (204 images) and validation (24 images) sets.

The dataset is highly imbalanced, as the tumor occupies a small portion of the brain, and the substructures of the tumor occupy an even smaller volume than the whole tumor. Initially, the network is trained to segment the whole tumor, with parameters randomly initialized from a normal distribution. The obtained weights are in turn transferred to the substructure networks for parameter initialization. During each run the network is trained for 50 epochs. Fig. 3 shows segmentation results for the whole tumor and the three substructures (with and without inductive transfer learning) for a sample 2D slice.

Fig. 3: Segmentation results: a) T2 image (yellow: edema, blue: enhancing tumor, green: necrosis/non-enhancing tumor), b) whole tumor ground truth, c) whole tumor segmentation, d) edema ground truth, e) edema segmentation without weight initialization, f) edema segmentation with weight initialization, g) enhancing tumor ground truth, h) enhancing tumor segmentation without weight initialization, i) enhancing tumor segmentation with weight initialization, j) necrosis ground truth, k) necrosis segmentation without weight initialization, l) necrosis segmentation with weight initialization.

Table II shows the Dice similarity coefficient, sensitivity, and positive predictive value (PPV) for the test dataset of 57 patients (42 HGG, 15 LGG). One can observe that inductive transfer learning improves the segmentation results. A sketch of how these metrics are computed from binary masks follows the table.

Region         DSC           Sensitivity   PPV
Whole Tumor    0.78          0.76          0.91
               A      B      A      B      A      B
Necrosis       0.58   0.65   0.56   0.67   0.69   0.70
Enhancing      0.58   0.60   0.56   0.54   0.69   0.74
Edema          0.63   0.71   0.56   0.66   0.79   0.83
TABLE II: DSC, sensitivity, and positive predictive value (PPV) for the test dataset. A: without weight transfer, B: with weight transfer.
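For clarity, the reported metrics can be computed from binary masks as below. This is a generic sketch, not the authors' evaluation code.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Return DSC, sensitivity, and PPV for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # true positives
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    dsc = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    return dsc, sensitivity, ppv
```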

Once the model is ready, segmentation is applied to the data of the 163 patients whose survival expectancy is provided. Volumetric and shape-based features are extracted from the segmentation results; the shape-based features are extracted using pyradiomics [27], as sketched below.
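A hedged sketch of shape feature extraction with pyradiomics [27] (recent versions) follows; the file paths are illustrative assumptions.

```python
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("shape")  # 3D shape descriptors

# Image and mask as NIfTI paths (or SimpleITK images); paths assumed.
features = extractor.execute("patient_flair.nii.gz", "patient_seg.nii.gz")
shape_features = {k: v for k, v in features.items() if "shape" in k}
```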

Overall survival prediction using the random forest classifier with five-fold cross-validation is shown in Table III. The results are shown for two datasets: 1) the test dataset (of the 163 patients, images of 130 patients are used for training and 33 for testing), and 2) the set with GTR status (59 patients).

Table IV compares the accuracy of the proposed method with other state-of-the-art techniques. The methods proposed in [13, 14] use the BraTS 2017 dataset, whereas the other methods [19, 20, 21, 22, 25, 26] use the BraTS 2018 dataset. In [19], the classifier training set is made up of the images with GTR status, whereas the other methods use training sets with resection status GTR, STR, or NA. The proposed method achieves better OS accuracy due to: 1) the use of a U-net architecture with fewer parameters, and 2) inductive transfer learning for substructure segmentation.

Feature                    Test dataset   GTR dataset
Age + Volumetric + Shape   59%            67%
Age + Volumetric           46%            64%
Shape + Volumetric         50%            65%
Age                        31%            52%
TABLE III: Overall survival prediction accuracy.
Ref.       Classifier(s)                                         Accuracy (%)
[13]       Ensemble of random forest and multilayer perceptron   52.6
[14]       Linear discriminant                                   46
[19]       Linear SVM (GTR set)                                  63
[20]       Neural network and random forest                      38
[21]       Artificial neural network                             54.5
[22]       Multilayer perceptron                                 50.8
[25]       XGBoost                                               65
[26]       Ensemble of random forest and regression network      47.5
Proposed   Random forest                                         59 / 67 (GTR)
TABLE IV: Comparison with state-of-the-art methods.

IV-A Feature Analysis

It can be observed from Table III that the combination of age, volumetric, and shape features is best suited for the model. Additionally, other features such as first-order statistics, the gray level co-occurrence matrix (GLCM), gray level difference matrix (GLDM), and gray level run length matrix (GLRLM) can be extracted from the various modalities to improve the life-expectancy results; a hedged sketch follows. However, such higher-order features are not useful in the present work, as they require near-perfect segmentation. It must nonetheless be noted that improvements in the segmentation results can increase the accuracy of the OS prediction.
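Should such higher-order features be added, pyradiomics [27] exposes the corresponding feature classes directly; a sketch under that assumption (file paths are illustrative):

```python
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
# Enable first-order and the texture classes discussed above.
for cls in ("firstorder", "glcm", "gldm", "glrlm"):
    extractor.enableFeatureClassByName(cls)
texture_features = extractor.execute("patient_flair.nii.gz", "patient_seg.nii.gz")
```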

V Conclusion

The paper uses a modified U-net architecture with the depth reduced to three layers. The model is first trained on the whole tumor with random parameter initialization, and the substructure networks are initialized using the weights of the whole-tumor network. After completion of the substructure network training, segmentation results are generated for the test dataset, and a random forest classifier is trained on the extracted features for OS prediction. The proposed work achieves better accuracy compared to state-of-the-art methods. The accuracy can be enhanced by improving segmentation, either by refining the network or by implementing post-processing on the segmentation results. Future work will focus on improving the segmentation and on using features from the MRI modalities to improve overall survival prediction.

Acknowledgment

The authors would like to thank NVIDIA Corporation for donating the Quadro K5200 and Quadro P5000 GPUs used for this research, and Dr. Krutarth Agravat (Medical Officer, Essar Ltd) and Mr. Abhishek Shah (L.G. Medical College, Ahmedabad) for clarifying our doubts related to medical concepts. The authors acknowledge the continuous support of Professor Sanjay Chaudhary and Mr. Himanshu Budhia for this work.

References

  • [1] DeAngelis, Lisa M. "Brain Tumors." New England Journal of Medicine 344.2 (2001): pp. 114-123.
  • [2] Kleihues, Paul, Peter C. Burger, and Bernd W. Scheithauer. "The New WHO Classification of Brain Tumours." Brain Pathology 3.3 (1993): 255-268.
  • [3] Liang, Zhi-Pei, and Paul C. Lauterbur. Principles of Magnetic Resonance Imaging: A Signal Processing Perspective. SPIE Optical Engineering Press, 2000.
  • [4] S. Bauer, R. Wiest, L.-P. Nolte, and M. Reyes, "A survey of MRI-based medical image analysis for brain tumor studies," Phys. Med. Biol., vol. 58, no. 13, pp. 97-129, 2013.
  • [5] Bankman, Isaac, ed. Handbook of Medical Image Processing and Analysis. Elsevier, 2008.
  • [6] Menze B.H., Jakab A., Bauer S., Kalpathy-Cramer J., Farahani K., Kirby J., et al., "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)," IEEE Transactions on Medical Imaging 34(10), 1993-2024 (2015). DOI: 10.1109/TMI.2014.2377694
  • [7] Braintumorsegmentation.org (2018). MICCAI BRATS - The Multimodal Brain Tumor Segmentation Challenge. [online] Available at: http://braintumorsegmentation.org/.
  • [8] Bakas S., Akbari H., Sotiras A., Bilello M., Rozycki M., Kirby J.S., et al., "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features," Nature Scientific Data, 4:170117 (2017). DOI: 10.1038/sdata.2017.117
  • [9] Bakas S., Akbari H., Sotiras A., Bilello M., Rozycki M., Kirby J., et al., "Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-GBM Collection," The Cancer Imaging Archive, 2017. DOI: 10.7937/K9/TCIA.2017.KLXWJJ1Q
  • [10] Bakas S., Akbari H., Sotiras A., Bilello M., Rozycki M., Kirby J., et al., "Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-LGG Collection," The Cancer Imaging Archive, 2017. DOI: 10.7937/K9/TCIA.2017.GJQ7R0EF
  • [11] Agravat, Rupal R., and Mehul S. Raval. "Deep Learning for Automated Brain Tumor Segmentation in MRI Images." Soft Computing Based Medical Image Analysis, 2018, pp. 183-201.
  • [12] Tustison N., Avants B., Cook P., Zheng Y., Egan A., Yushkevich P., et al., "N4ITK: Improved N3 Bias Correction," IEEE Transactions on Medical Imaging 29.6 (2010): 1310-1320.
  • [13] Isensee F., Kickingereder P., Wick W., Bendszus M., and Maier-Hein K., "Brain Tumor Segmentation and Radiomics Survival Prediction: Contribution to the BRATS 2017 Challenge," 2017 International MICCAI BraTS Challenge (2017).
  • [14] Chato, Lina, and Shahram Latifi. "Machine Learning and Deep Learning Techniques to Predict Overall Survival of Brain Tumor Patients using MRI Images." Bioinformatics and Bioengineering (BIBE), 2017 IEEE 17th International Conference on. IEEE, 2017.
  • [15] Osman, Alexander F.I. "Automated Brain Tumor Segmentation on Magnetic Resonance Images and Patient's Overall Survival Prediction Using Support Vector Machines." International MICCAI Brainlesion Workshop. Springer, Cham, 2017.
  • [16] Varghese Alex, Mohammed Safwan, and Ganapathy Krishnamurthi, "Brain Tumor Segmentation from Multi Modal MR Images using Fully Convolutional Neural Network," BRATS Proceedings, MICCAI 2017.
  • [17] Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-net: Convolutional Networks for Biomedical Image Segmentation." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2015.
  • [18] Dong H., Yang G., Liu F., Mo Y., and Guo Y., "Automatic Brain Tumor Detection and Segmentation Using U-Net Based Fully Convolutional Networks." In: Valdés Hernández M., González-Castro V. (eds) Medical Image Understanding and Analysis. MIUA 2017. Communications in Computer and Information Science, vol 723. Springer, Cham, 2017.
  • [19] Kao P.Y., Ngo T., Zhang A., Chen J.W., and Manjunath B.S., "Brain Tumor Segmentation and Tractographic Feature Extraction from Structural MR Images for Overall Survival Prediction." In: Crimi A., Bakas S., Kuijf H., Keyvan F., Reyes M., van Walsum T. (eds) Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2018. Lecture Notes in Computer Science, vol 11384. Springer, Cham, 2019.
  • [20] Gates E., Pauloski J.G., Schellingerhout D., and Fuentes D., "Glioma Segmentation and a Simple Accurate Model for Overall Survival Prediction." In: Crimi A., Bakas S., Kuijf H., Keyvan F., Reyes M., van Walsum T. (eds) Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2018. Lecture Notes in Computer Science, vol 11384. Springer, Cham, 2019.
  • [21] Islam M., Jose V.J.M., and Ren H., "Glioma Prognosis: Segmentation of the Tumor and Survival Prediction Using Shape, Geometric and Clinical Information." In: Crimi A., Bakas S., Kuijf H., Keyvan F., Reyes M., van Walsum T. (eds) Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2018. Lecture Notes in Computer Science, vol 11384. Springer, Cham, 2019.
  • [22] Kori A., Soni M., Pranjal B., Khened M., Alex V., and Krishnamurthi G., "Ensemble of Fully Convolutional Neural Network for Brain Tumor Segmentation from Magnetic Resonance Images." In: Crimi A., Bakas S., Kuijf H., Keyvan F., Reyes M., van Walsum T. (eds) Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2018. Lecture Notes in Computer Science, vol 11384. Springer, Cham, 2019.
  • [23] Ren, Zhang L., Shen D., and Wang Q., "Ensembles of Multiple Scales, Losses and Models for Brain Tumor Segmentation and Overall Survival Time Prediction Task." International MICCAI Brainlesion Workshop. Springer, Cham, 2018.
  • [24] Hyung Eun Shin and Moo Sung Park, "Brain Tumor Segmentation using 2D U-net." International MICCAI Brainlesion Workshop. Springer, Cham, 2018.
  • [25] Xu X., Kong X., Sun G., Lin F., Cui X., Sun S., et al., "Brain Tumor Segmentation and Survival Prediction Based On Extended U-Net Model and XGBoost." International MICCAI Brainlesion Workshop. Springer, Cham, 2018.
  • [26] Yang, Hao-Yu, and Junlin Yang. "Automatic Brain Tumor Segmentation with Contour Aware Residual Network and Adversarial Training." International MICCAI Brainlesion Workshop. Springer, Cham, 2018.
  • [27] van Griethuysen J.J., Fedorov A., Parmar C., Hosny A., Aucoin N., Narayan V., Beets-Tan R.G., Fillion-Robin J.-C., Pieper S., and Aerts H.J., "Computational Radiomics System to Decode the Radiographic Phenotype," Cancer Research 77, e104-e107 (2017).
  • [28] Pan, Sinno Jialin, and Qiang Yang. "A Survey on Transfer Learning." IEEE Transactions on Knowledge and Data Engineering 22.10 (2010): 1345-1359.
  • [29] Chen L., Bentley P., Mori K., Misawa K., Fujiwara M., and Rueckert D., "DRINet for Medical Image Segmentation," IEEE Transactions on Medical Imaging 37.11 (2018): 2453-2462.
  • [30] Agravat, Rupal R., and Mehul S. Raval. "Brain Tumor Segmentation." Computer Society of India (2016): 31-35.