A Survey on Recent Advancements for AI Enabled Radiomics in Neuro-Oncology

10/16/2019 ∙ by Syed Muhammad Anwar, et al.

Artificial intelligence (AI) enabled radiomics has evolved immensely, especially in the field of oncology. Radiomics provides assistance in the diagnosis of cancer, planning of treatment strategy, and prediction of survival. Radiomics in neuro-oncology has progressed significantly in the recent past. Deep learning has outperformed conventional machine learning methods in most image-based applications. Convolutional neural networks (CNNs) have seen some popularity in radiomics, since they do not require hand-crafted features and can automatically extract features during the learning process. In this regard, it is observed that CNN based radiomics could provide state-of-the-art results in neuro-oncology, similar to the recent success of such methods in a wide spectrum of medical image analysis applications. Herein we present a review of the most recent best practices and establish the future trends for AI enabled radiomics in neuro-oncology.


1 Introduction

Brain and other central nervous system (CNS) tumors are the second most common cancer affecting children, and the third most common cancer affecting adolescents and young adults [5][12]. There are approximately 700,000 people with primary brain or CNS tumors in the United States alone [5]. Treatment depends on multiple factors including age, gender, and tumor size and location. The standard approach in most cases is to surgically remove the tumor via craniotomy [51]. However, some tumors cannot be surgically removed, and treatment then relies on radiation therapy. Rigorous planning is necessary to determine the exact tumor volume and a buffer region surrounding the tumor that must be treated to prevent growth from leftover malignant cells. The accurate planning of the resection and radiation area is challenging owing to the difficulty in determining the exact tumor dimensions. For manual segmentation (delineation), radiologists need to carefully analyze a large amount of radiology images. To ease the load on radiologists, computational methods to automatically extract quantitative features (aka radiomics) from radiological scans have been proposed.

Radiomics comprises numerous significant disciplines, including radiology, computer vision, and machine learning. The objective is the recognition of quantitative imaging features that anticipate significant clinical outcomes in prognosis and in the analysis of certain treatment strategies [61]. The information provided by radiology scans is processed with the help of quantitative image analysis (QIA) to identify patterns in the scans in a way that the human eye may not achieve. The different steps include: image acquisition and storage, segmentation and identification of regions of interest (ROIs), feature extraction, model building and validation, and integration of these processes into a clinical decision support system. The resultant units of data from QIA may be called quantitative imaging bio-markers, depending on their predictive powers. A huge amount of information is captured during clinical imaging, but the underlying data have, in most cases, been reported in subjective and qualitative terms. Specifically, radiomics in neuro-oncology aims to revamp the brain tumor treatment paradigm by extracting quantitative features from brain scans (MRI). Data are mined via multiple machine learning algorithms and can potentially be used as imaging bio-markers to distinguish intra-tumoral dynamics during treatment [19]. With the increase in the number of reported cancer cases, analytic methods for imaging have revealed new understandings about initial treatment response, risk factors, and optimal treatment approaches [29] [60]. Image-based models are becoming a significant enabling innovation that allows the investigation and validation of selected quantitative features.

The recent advancements, particularly in Artificial Intelligence (AI), are impacting major technological and scientific fields. To keep up with these advancements, medical science is adopting new methodologies for improving the diagnosis and treatment of various clinical conditions [31]. In the clinical setting, imaging has played a vital role for a long time by helping physicians in diagnostic and treatment related decision making [2]. However, over time, medical imaging has evolved from being just a diagnostic tool and is now beginning to take a critical role in precision medicine for tasks such as screening, diagnosis, guided treatment, and assessing the likelihood of disease recurrence [17]. The emerging field of radiomics in oncology has helped develop a latent solution for tumor characterization by extracting a large number of features from medical images [30] [43]. Attributes that radiologists use to assess tissue appearance are of great importance and can be used in the development of medical image analysis techniques. Common examples of such attributes include texture, intensity, and morphology. Texture can be defined as the spatial variation of pixel intensities within an image, and is known to be particularly sensitive for the assessment of pathology [18]. Visual assessment of texture is, however, particularly subjective. Additionally, it is known that human observers possess limited sensitivity to textural patterns, whereas computational texture analysis techniques can be significantly more sensitive to such changes [15]. For image classification, numerous computer vision algorithms depend on extracting native characteristics from images. These features are handcrafted with an eye toward resolving explicit issues such as occlusions and variations in scale and brightness. The design of handcrafted features often involves finding the right trade-off between accuracy and computational efficiency [39].
In contrast, deep learning (DL) methods have a huge potential to replace conventional machine learning methods: they automatically extract imaging features, are more efficient, and already give state-of-the-art performance in a large number of applications. In the following, we present a review of methods relying on handcrafted features and those using DL, and analyze the future direction for AI enabled radiomics in neuro-oncology.

2 Radiomics using Handcrafted Features

A general pipeline for radiomics in neuro-oncology is shown in Figure 1. Different radiomics features are extracted from medical images, and machine learning classifiers are then used to detect diseases such as brain tumors. These radiomics features are extracted either in a handcrafted manner or through DL. The top layer of Figure 1 shows how handcrafted features are used with different radiology image inputs. The feature extraction stage (also known as the conventional radiomics approach) relies on selecting features from various domains such as texture, intensity/density, and frequency (e.g., wavelet). Different machine learning classifiers (e.g., support vector machines (SVMs) and logistic regression (LR)) are used for the analysis of these features, and results are evaluated using performance parameters such as accuracy and receiver operating characteristics (ROC). In deep learning, by contrast, the model chooses the appropriate features, allowing feature learning, and the learned representation can be used directly for classification/regression. These learned features can also be used with other classifiers such as SVMs.
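The conventional branch of this pipeline (handcrafted features fed to an SVM) can be sketched in a few lines. The feature choices and the synthetic "scans" below are illustrative stand-ins rather than features from any of the surveyed studies, with scikit-learn's SVC as the classifier:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def handcrafted_features(img):
    """Toy hand-crafted descriptor: mean intensity, intensity variance,
    and a crude edge-energy measure as a texture surrogate."""
    gy, gx = np.gradient(img)
    return [img.mean(), img.var(), np.mean(gx ** 2 + gy ** 2)]

# Synthetic "scans": smooth images (class 0) vs. noisy, textured ones (class 1).
smooth = [100.0 + rng.normal(0.0, 1.0, (32, 32)) for _ in range(20)]
textured = [100.0 + rng.normal(0.0, 20.0, (32, 32)) for _ in range(20)]

X = np.array([handcrafted_features(im) for im in smooth + textured])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])        # train on every other image
acc = (clf.predict(X[1::2]) == y[1::2]).mean()     # evaluate on the rest
```

In a real study, the feature extractor would be replaced by texture, intensity, and frequency descriptors of the kinds reviewed below, and performance would be reported via accuracy, sensitivity, specificity, or ROC curves.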

One of the approaches employed to extract radiomic features is local binary patterns (LBPs) [1] [16] [20] [44], where binary word encoding is used to incorporate the relationship between pixels and their neighbours. This enables LBP to detect patterns in an image irrespective of contrast variations. The LBP feature extractor is known for its computational efficiency, but its effectiveness reduces as image noise increases [42]. Another commonly used method to extract radiomics features is the histogram of oriented gradients (HOG) [49] [57], where the number of oriented gradient occurrences in certain image regions is counted to create a histogram. Depending on the application, different regions can be used to capture local shape and edge information from the images, which is then converted into a feature vector using the HOG descriptor. It has been found that operating on a larger neighbourhood is better when using HOG for MR images, due to their low intensity variance [40].
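As a concrete illustration (not drawn from any cited study), a basic 3x3 LBP code can be computed as follows; note that the code is unchanged under a uniform brightness shift, which is the contrast robustness mentioned above:

```python
import numpy as np

def lbp_code(img, r, c):
    """Basic 3x3 LBP: threshold the 8 neighbours of pixel (r, c) against
    the centre pixel and pack the results into an 8-bit code, going
    clockwise from the top-left neighbour."""
    center = img[r, c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr, c + dc] >= center:
            code |= 1 << bit
    return code

img = np.array([[5, 9, 1],
                [4, 6, 7],
                [2, 3, 8]])
print(lbp_code(img, 1, 1))  # -> 26
```

Because the code depends only on comparisons against the centre pixel, `lbp_code(img + 50, 1, 1)` produces the same value, illustrating the invariance to contrast (brightness) changes. A feature vector is typically the histogram of these codes over an image region.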

Figure 1: A pipeline of steps for radiomics in radiology using handcrafted and DL based features
Method (year) | Features | Classifier | Accuracy/Specificity/Sensitivity (%)
[44] (2019) | LBP | SVM | 97.02/94.28/98.48
[8] (2019) | Discrete wavelet transform / bag of words | SVM | 100/-/-
[20] (2019) | Fusion of LBP, GLCM, and GLRL | Ensemble | 97.14/-/-
[49] (2019) | GLCM + PHOG + intensity-based + modified CLBP | PSO-SVM | 98.36/97.83/99.17
[45] (2019) | GLCM + GLRL + gray level size zone matrix + first order statistics + shape descriptors | SVM + LASSO | 90/-/-
[57] (2018) | GLCM + GLRL + HOG + neighbourhood grey-tone difference matrix | RUSBoost ensemble classifier | 73.2/-/-
[16] (2018) | LBP | SVM | 95/94/96
[59] (2018) | GLCM + GLRLM features + Gabor descriptor | SVM | 71.02/-/-
[35] (2018) | Multiple hand crafted | Radiomics nomogram + ROC | 81.52/-/-
[25] (2018) | Radiomics signature (LASSO-Cox regression) | Thresholding | 95/-/-
[36] (2018) | Statistical + histogram features + GLCM + GLRLM + GLZLM | Logistic regression | 89/96/85
[47] (2018) | GLCM + GLRL + fractal dimensions + wavelet-filtered GLCM | Logistic regression | 95/-/-
[33] (2018) | Statistical + shape-based + texture + wavelet | LASSO Cox regression model | 82.3/-/-
[18] (2017) | Gabor texture descriptor | SVM | 97.5/92/99
[1] (2017) | LBP + HOG | Random forest | 83.77/-/-
Table 1: Radiomics in neuro-oncology using handcrafted features.

The first use of the gray level co-occurrence matrix (GLCM), a statistical method for texture analysis that examines the spatial relationship between pixels, was recorded in 1973, when Haralick [21] used it to generate state-of-the-art results in image classification. It works by counting how often pairs of pixels with specific gray-level values occur in a given spatial relationship in an image. Recently, GLCM has been widely used for extracting features for disease classification [20] [33] [35] [36] [47] [49] [55] [57] [9] [11]. Another commonly used method, the Gray Level Run Length Matrix (GLRL) [48], works on the principle of connectivity and extracts quantitative information (run lengths) about connected pixels in a specific direction. GLRL has also been widely used for feature extraction in radiomics studies [59].
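The co-occurrence counting that GLCM performs can be sketched directly in NumPy; the tiny image and the single horizontal offset below are illustrative choices:

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Grey-level co-occurrence matrix: count how often grey level i
    co-occurs with grey level j at the given pixel offset (here, the
    pixel immediately to the right)."""
    m = np.zeros((levels, levels), dtype=int)
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r, c], img[r2, c2]] += 1
    return m

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
m = glcm(img, levels=3)

# Haralick-style texture features follow from the normalized matrix,
# e.g. contrast = sum over (i, j) of (i - j)^2 * p(i, j).
p = m / m.sum()
i, j = np.indices(p.shape)
contrast = ((i - j) ** 2 * p).sum()
```

In practice, several offsets (distances and directions) are used, and statistics such as contrast, energy, and homogeneity computed from each matrix become the radiomic feature vector.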

As a special class of frequency- and structure-based approaches, the Gabor filter has proven to be a popular texture analysis technique, and it has been employed to extract texture-based features from MR scans such as smoothness, kurtosis, entropy, contrast, mean, and homogeneity. The Gabor filter works especially well for images with uniform patterns. Medical images usually possess pixels with low variance of intensity levels and uniform orientation; hence, the Gabor filter may outperform other texture-based descriptors given its capability to encode narrow bands of frequencies and orientations. The Gabor filter is also good at examining the structural differentiation caused by cancerous cells in MR images, making it well suited to medical imaging data. For these reasons, these filters have been used for extracting radiomic features in multiple studies [18] [38] [59]. Radiomics has also been applied successfully in other diagnostic applications; some recent works are summarized in Table 1, highlighting the features and classifiers used. After radiomics features are extracted using the various descriptors, a classifier assigns a particular class to the patient image. Most methods (Table 1) use the support vector machine as a classifier. Other methods include the least absolute shrinkage and selection operator (LASSO), random forests, and logistic regression. It is important to observe that a wide array of descriptors is available, and hence a lot of handcrafting is required to choose the most appropriate features. An automated system that learns features from raw input data could provide more generalized results for the increasing number of radiology studies.
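A minimal sketch of a Gabor filter bank, assuming the standard real-valued Gabor formulation (a Gaussian envelope modulating an oriented sinusoid); the kernel size and tuning parameters below are arbitrary illustrative values:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lambd, gamma=1.0, psi=0.0):
    """Real part of a Gabor filter: a Gaussian envelope (width sigma,
    aspect ratio gamma) times a sinusoid of wavelength lambd oriented
    at angle theta, with phase offset psi."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

# A small bank tuned to four orientations; statistics of the filtered
# image (mean, variance, energy, ...) serve as texture features.
bank = [gabor_kernel(15, sigma=3.0, theta=t, lambd=6.0)
        for t in np.linspace(0.0, np.pi, 4, endpoint=False)]
```

Each kernel responds most strongly to image structure at its own orientation and spatial frequency, which is exactly the narrow-band selectivity noted above.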

3 Radiomics using Deep Learning

Recently, the most widely used machine learning techniques are based on deep learning, where various functions are used to transform the data into a hierarchical representation [46]. DL has gained wide attention in image categorization, image recognition, speech recognition, natural language processing, and medical image analysis [6] [56]. One major advantage of DL is that features are extracted directly from raw data, allowing feature learning [26]. DL has also been found successful in solving complex problems with limited data using transfer learning, wherein a model trained on one type of data is adapted to a different, complex task [53]. On the flip side, DL is generally known to be more successful in solving problems where large data sets are available [26], although methods that work for limited data are emerging [54].
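The transfer learning idea described above, reusing a frozen pretrained feature extractor and training only a small task-specific head, can be sketched as follows. The random projection standing in for pretrained CNN layers and the toy dataset are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained feature extractor (in practice,
# the convolutional layers of a CNN trained on a large source dataset).
W_frozen = rng.normal(size=(16, 4))
def extract_features(x):
    return np.tanh(x @ W_frozen)

# Small "target task" dataset: two classes along one input direction.
x = rng.normal(size=(64, 16))
y = (x[:, 0] > 0).astype(float)

# Only the new classification head is trained; W_frozen never changes.
feats = extract_features(x)
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid output
    grad_w = feats.T @ (p - y) / len(y)          # logistic-loss gradient
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

acc = np.mean(((feats @ w + b) > 0) == (y == 1))
```

The key point is that only the few head parameters are fit on the small target dataset, which is why transfer learning can work where training a full network from scratch would overfit.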

There are two popular approaches (Figure 1) used in DL: training a network and extracting the features to use with a simple machine learning classifier, and training an end-to-end network that incorporates the classification/regression task in its learning. An example of the former is the work by Nie et al. [41]. The authors proposed multi-channel AI enabled radiomics for predicting a patient's survival time in neuro-oncological applications. First, the proposed technique used three-dimensional convolutional neural networks to extract high-level features from multi-modal MR images. In the second step, those features, along with the patient's personal details and medical history, were fed to an SVM for predicting the survival time. The proposed method achieved state-of-the-art results with an accuracy of 90.66% (Table 2). Chang et al. [13] proposed an end-to-end trained residual convolutional network to diagnose isocitrate dehydrogenase (IDH) mutations in people suffering from grade II-IV gliomas. The diagnosis of IDH mutations could assist radiologists in the treatment of patients suffering from gliomas. The network was trained on multi-institutional clinical MRI data, and techniques such as random rotation and zooming were used to reduce over-fitting; it reached an accuracy of 89.1% with an AUC of 0.95 on testing data (Table 2). This AI based radiomics work is currently considered the largest study for the prediction of IDH mutations. In [32], the authors proposed deep learning enabled radiomics for the survival prediction of patients suffering from glioblastoma multiforme. The proposed technique used transfer learning for predicting a patient's survival. Features were extracted from MR images using both conventional and deep learning methods, and the deep features were fed to a LASSO Cox model for predicting the patient's survival.
The technique also required demographic information such as age and Karnofsky Performance Score. It has some limitations: it was designed for a small dataset, and the relation between the features and the patient's genetic details was not investigated. The results showed that deep learning based radiomics achieved better prognosis than conventional machine learning based radiomics.

There are various methods reported in the literature related to brain diseases that are based on both conventional features and DL based methods [4] [23] [24] [14] [7]. In [58], the authors combined a fully convolutional neural network with a conditional random field (CRF). The technique used image patches to train the fully convolutional neural network, and 2D image slices (coronal, sagittal, and axial) to train the CRF as a recurrent neural network. Image slices were then used to fine-tune both networks. The experiments were carried out on the BraTS 2013, 2015, and 2016 data sets [37] [10]. This study trained three segmentation models using both image patches and slices, and it was observed that slice-by-slice segmentation was computationally more effective than segmentation using image patches. This method worked well for 2D images but did not perform well for 3D volumes. Cascaded anisotropic convolutional neural networks were employed to segment multi-class brain tumors [52]. The developed technique treated all three classes (core, enhancing, and whole) separately, and three different network architectures were designed and concatenated. The anisotropic network was designed to resolve the model complexity arising from the use of large receptive fields. Residual connections were employed for robust training and segmentation performance. The model was tested on the BraTS 2017 dataset [10] and achieved dice scores (DSC) of 0.7831, 0.8739, and 0.7748 for the enhancing, whole, and core tumor regions, respectively. The experiments showed that this setup made training easier and reduced false positives. However, the technique is not end-to-end and consumes more time in training and testing than other techniques.
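The dice similarity coefficient (DSC) reported throughout these segmentation studies is straightforward to compute from binary masks; the convention used for two empty masks below is one common choice:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
print(round(dice_score(pred, truth), 3))  # -> 0.667
```

A DSC of 1 indicates a perfect overlap between the predicted and ground-truth tumor regions, while 0 indicates no overlap; the per-region scores in Table 2 are computed this way for the whole, core, and enhancing tumor masks.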

Method (Year) | Dataset | Architecture | Task | Performance
Nie et al. (2019) | Clinical (glioma) images | CNN + SVM | Survival prediction | Accuracy: 90.66%
Chang et al. (2018) | Clinical MR images | ResNet | IDH phenotype prediction | Accuracy: 89.1% (AUC = 0.95)
Zhao et al. (2018) | BraTS | CNN + CRF-RNN | Tumor segmentation | DSC: whole 0.82, core 0.72, enhanced 0.62
Wang et al. (2017) | BraTS 2017 | Cascaded CNN | Tumor segmentation | DSC: whole 0.87, core 0.77, enhanced 0.78
Alex et al. (2017) | BraTS 2017 | CNN + texture features | Tumor segmentation | DSC: whole 0.83, core 0.69, enhanced 0.72
Havaei et al. (2017) | BraTS 2013 | Cascaded CNN | Tumor segmentation | DSC: whole 0.81, core 0.72, enhanced 0.58
Lao et al. (2017) | Clinical data | CNN + LASSO Cox | Overall survival | C-index = 0.739
Liu et al. (2017) | BraTS 2015 | CNN | Tumor segmentation | DSC: core 0.75, enhanced 0.81
Kamnitsas et al. (2017) | BraTS 2015 | 3D-CNN + CRF | Tumor segmentation | DSC: whole 0.75, core 0.72, enhanced 0.898
Table 2: DL based radiomics approaches in neuro-oncology.

A 23-layer fully convolutional neural network was proposed for the segmentation of gliomas from MRI [3]. Texture analysis, including first-order texture features and shape-based features, was used for the prediction of a patient's survival. The algorithm was trained on 2D slices extracted from patients' MRI volumes and evaluated for survival prediction on both the BraTS 2017 validation and testing datasets. The achieved DSC on BraTS 2017 for the whole tumor, tumor core, and enhanced region was 0.83, 0.69, and 0.72, respectively. A novel CNN architecture was proposed which incorporated both a dual pathway and a cascaded architecture for radiomics in neuro-oncology [22]. Feeding the output of the cascaded architecture to the dual pathway network improved prediction accuracy. A convolutional neural network predicts labels independently of neighboring pixels, which limits its capability to produce accurate results; the cascaded architecture's output made it possible for the proposed CNN to incorporate the influence of neighboring pixels. This variation of the convolutional neural network increased the speed forty-fold and incorporated both local and global features. The fully connected layer of the proposed network architecture was designed in a convolutional manner. A two-phase training technique was used for accurate delineation of the brain tumor, and the method was tested on the BraTS 2013 dataset. The proposed architecture worked well for two-dimensional data but slows down for three-dimensional data.

An algorithm was devised using a convolutional neural network for the segmentation of brain metastases from MRI [34]. Image patches were fed to the network for voxel-wise classification, which made the setup efficient for segmenting small lesions. Although the network was designed for mono-modality imaging, it was also tested on a multi-modality dataset (BraTS), where DSC values of 0.75 and 0.81 were achieved on the core and enhanced tumors, respectively. The network was trained on pre-defined parameters, which made it more robust, although its performance could be improved by readjusting the patch size and hyper-parameters. This AI-enabled radiomics in neuro-oncology could help in treatment strategy planning for brain metastases. In [28], the authors proposed the DeepMedic platform, a dual pathway network incorporating local and global features, for segmenting brain tumors. Conditional random fields were used as a post-processing step to reduce the number of false positives. An improvement to DeepMedic was proposed using residual connections, and performance was evaluated on a small dataset (BraTS 2015) to make the approach more flexible [27]. This simplified approach also achieved good results on BraTS 2016, where the DSC using 75% of the data was 91.4, 83.1, and 79.4 for the whole tumor, core, and enhanced tumor regions, respectively. Table 2 gives a summary of the methods in segmentation and prediction. Although the results of DL are promising, the methodology suffers from the black-box problem: the feature learning process is still not transparent, and generalization remains an open goal.

4 Discussion and Conclusion

AI-enabled radiomics is making significant progress in neuro-oncology and similar applications, with performance better than conventional approaches. It aids radiologists in making an accurate prognosis, leading to better treatment strategies. An important consideration is finding the right hand-crafted features, as the results have shown that these features can significantly affect the overall outcome of a method. A possible solution to this impediment is to use DL, which is known to learn the right features in an automated fashion when a reasonable amount of training data is present. It has been observed that DL based methods are able to produce state-of-the-art results. Both radiomics and DL are currently developing at a very fast pace. It is believed that they will work together in the future, resulting in AI enabled radiomics that will transform not only prognosis and diagnosis, but also how treatment planning and the analysis of disease recurrence work in oncology.

Various tumor types may appear similar on radiology images, but the molecular characteristics of different malignant parts vary. Moreover, the tumor phenotype changes with the passage of time, hence biopsies alone cannot provide complete information. Personalized medicine therefore promises more accurate predictions and more effective treatments in the light of improved serum, tissue, and imaging bio-markers [50]. Radiomics can assist by evaluating imaging bio-markers that clearly identify the tumor signature and hence reveal tumor function and evolution. These statistics will help multi-disciplinary oncology teams develop a highly personalized curative plan for individuals, based on how that specific patient's cancer is expected to behave. Interpretable DL will help in identifying the right radiomic features, improving upon methods based on hand-crafted features. For precision and accuracy in this challenging area, more interpretation and explainability are required of the underlying DL-based models.

References

  • [1] S. Abbasi and F. Tajeripour (2017) Detection of brain tumor in 3d mri images using local binary patterns and histogram orientation gradient. Neurocomputing 219, pp. 526–535. Cited by: Table 1, §2.
  • [2] H. J. Aerts, E. R. Velazquez, R. T. Leijenaar, C. Parmar, P. Grossmann, S. Carvalho, J. Bussink, R. Monshouwer, B. Haibe-Kains, D. Rietveld, et al. (2014) Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nature communications 5, pp. 4006. Cited by: §1.
  • [3] V. Alex, M. Safwan, and G. Krishnamurthi (2017) Automatic segmentation and overall survival prediction in gliomas using fully convolutional neural network and texture analysis. In International MICCAI Brainlesion Workshop, pp. 216–225. Cited by: §3.
  • [4] T. Altaf, S. M. Anwar, N. Gul, M. N. Majeed, and M. Majid (2018) Multi-class alzheimer’s disease classification using image and clinical features. Biomedical Signal Processing and Control 43, pp. 64–74. Cited by: §3.
  • [5] American brain tumor association. Note: http://abta.pub30.convio.net/about-us/news/brain-tumor-statistics/ Accessed: 07/01/2019. Cited by: §1.
  • [6] S. M. Anwar, M. Majid, A. Qayyum, M. Awais, M. Alnowami, and M. K. Khan (2018) Medical image analysis using convolutional neural networks: a review. Journal of medical systems 42 (11), pp. 226. Cited by: §3.
  • [7] T. Ateeq, M. N. Majeed, S. M. Anwar, M. Maqsood, Z. Rehman, J. W. Lee, K. Muhammad, S. Wang, S. W. Baik, and I. Mehmood (2018) Ensemble-classifiers-assisted detection of cerebral microbleeds in brain mri. Computers & Electrical Engineering 69, pp. 768–781. Cited by: §3.
  • [8] W. Ayadi, W. Elhamzi, I. Charfi, and M. Atri (2019) A hybrid feature extraction approach for brain mri classification based on bag-of-words. Biomedical Signal Processing and Control 48, pp. 144–152. Cited by: Table 1.
  • [9] U. Bagci, J. Yao, K. Miller-Jaster, X. Chen, and D. J. Mollura (2013) Predicting future morphological changes of lesions from radiotracer uptake in 18f-fdg-pet images. PLoS One 8 (2), pp. e57105. Cited by: §2.
  • [10] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. S. Kirby, J. B. Freymann, K. Farahani, and C. Davatzikos (2017) Advancing the cancer genome atlas glioma mri collections with expert segmentation labels and radiomic features. Scientific data 4, pp. 170117. Cited by: §3.
  • [11] M. Buty, Z. Xu, M. Gao, U. Bagci, A. Wu, and D. J. Mollura (2016) Characterization of lung nodule malignancy using hybrid shape and appearance features. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 662–670. Cited by: §2.
  • [12] Cancer.net. Note: https://www.cancer.net/cancer-types/brain-tumor/statistics Accessed: 07/01/2019. Cited by: §1.
  • [13] K. Chang, H. X. Bai, H. Zhou, C. Su, W. L. Bi, E. Agbodza, V. K. Kavouridis, J. T. Senders, A. Boaro, A. Beers, et al. (2018) Residual convolutional neural network for the determination of idh status in low-and high-grade gliomas from mr imaging. Clinical Cancer Research 24 (5), pp. 1073–1081. Cited by: §3.
  • [14] A. Farooq, S. Anwar, M. Awais, and M. Alnowami (2017) Artificial intelligence based smart diagnosis of alzheimer’s disease and mild cognitive impairment. In 2017 International Smart cities conference (ISC2), pp. 1–4. Cited by: §3.
  • [15] A. E. Fetit, J. Novak, D. Rodriguez, D. P. Auer, C. A. Clark, R. G. Grundy, A. C. Peet, and T. N. Arvanitis (2018) Radiomics in paediatric neuro-oncology: a multicentre study on mri texture analysis. NMR in Biomedicine 31 (1), pp. e3781. Cited by: §1.
  • [16] M. Giacalone, P. Rasti, N. Debs, C. Frindel, T. Cho, E. Grenier, and D. Rousseau (2018) Local spatio-temporal encoding of raw perfusion mri for the prediction of final lesion in stroke. Medical image analysis 50, pp. 117–126. Cited by: Table 1, §2.
  • [17] A. Giardino, S. Gupta, E. Olson, K. Sepulveda, L. Lenchik, J. Ivanidze, R. Rakow-Penner, M. J. Patel, R. M. Subramaniam, and D. Ganeshan (2017) Role of imaging in the era of precision medicine. Academic radiology 24 (5), pp. 639–649. Cited by: §1.
  • [18] G. Gilanie, U. I. Bajwa, M. M. Waraich, Z. Habib, H. Ullah, and M. Nasir (2018) Classification of normal and abnormal brain mri slices using gabor texture and support vector machines. Signal, Image and Video Processing 12 (3), pp. 479–487. Cited by: §1, Table 1, §2.
  • [19] R. J. Gillies, P. E. Kinahan, and H. Hricak (2015) Radiomics: images are more than pictures, they are data. Radiology 278 (2), pp. 563–577. Cited by: §1.
  • [20] N. Gupta, P. Bhatele, and P. Khanna (2019) Glioma detection on brain mris using texture and morphological features with ensemble learning. Biomedical Signal Processing and Control 47, pp. 115–125. Cited by: Table 1, §2, §2.
  • [21] R. M. Haralick, K. Shanmugam, et al. (1973) Textural features for image classification. IEEE Transactions on systems, man, and cybernetics (6), pp. 610–621. Cited by: §2.
  • [22] M. Havaei, A. Davy, D. Warde-Farley, A. Biard, A. Courville, Y. Bengio, C. Pal, P. Jodoin, and H. Larochelle (2017) Brain tumor segmentation with deep neural networks. Medical image analysis 35, pp. 18–31. Cited by: §3.
  • [23] S. Hussain, S. M. Anwar, and M. Majid (2017) Brain tumor segmentation using cascaded deep convolutional neural network. In 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 1998–2001. Cited by: §3.
  • [24] S. Hussain, S. M. Anwar, and M. Majid (2018) Segmentation of glioma tumors in brain using deep convolutional neural network. Neurocomputing 282, pp. 248–261. Cited by: §3.
  • [25] Y. Jiang, C. Chen, J. Xie, W. Wang, X. Zha, W. Lv, H. Chen, Y. Hu, T. Li, J. Yu, et al. (2018) Radiomics signature of computed tomography imaging for prediction of survival and chemotherapeutic benefits in gastric cancer. EBioMedicine 36, pp. 171–182. Cited by: Table 1.
  • [26] A. Kamilaris and F. X. Prenafeta-Boldú (2018) Deep learning in agriculture: a survey. Computers and Electronics in Agriculture 147, pp. 70–90. Cited by: §3.
  • [27] K. Kamnitsas, E. Ferrante, S. Parisot, C. Ledig, A. V. Nori, A. Criminisi, D. Rueckert, and B. Glocker (2016) DeepMedic for brain tumor segmentation. In International workshop on Brainlesion: Glioma, multiple sclerosis, stroke and traumatic brain injuries, pp. 138–149. Cited by: §3.
  • [28] K. Kamnitsas, C. Ledig, V. F. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert, and B. Glocker (2017) Efficient multi-scale 3d cnn with fully connected crf for accurate brain lesion segmentation. Medical image analysis 36, pp. 61–78. Cited by: §3.
  • [29] A. Kotrotsou, P. O. Zinn, and R. R. Colen (2016) Radiomics in brain tumors: an emerging technique for characterization of tumor environment. Magnetic Resonance Imaging Clinics 24 (4), pp. 719–729. Cited by: §1.
  • [30] V. Kumar, Y. Gu, S. Basu, A. Berglund, S. A. Eschrich, M. B. Schabath, K. Forster, H. J. Aerts, A. Dekker, D. Fenstermacher, et al. (2012) Radiomics: the process and the challenges. Magnetic resonance imaging 30 (9), pp. 1234–1248. Cited by: §1.
  • [31] P. Lambin, R. T. Leijenaar, T. M. Deist, J. Peerlings, E. E. De Jong, J. Van Timmeren, S. Sanduleanu, R. T. Larue, A. J. Even, A. Jochems, et al. (2017) Radiomics: the bridge between medical imaging and personalized medicine. Nature Reviews Clinical Oncology 14 (12), pp. 749. Cited by: §1.
  • [32] J. Lao, Y. Chen, Z. Li, Q. Li, J. Zhang, J. Liu, and G. Zhai (2017) A deep learning-based radiomics model for prediction of survival in glioblastoma multiforme. Scientific reports 7 (1), pp. 10353. Cited by: §3.
  • [33] X. Liu, Y. Li, Z. Qian, Z. Sun, K. Xu, K. Wang, S. Liu, X. Fan, S. Li, Z. Zhang, et al. (2018) A radiomic signature as a non-invasive predictor of progression-free survival in patients with lower-grade gliomas. NeuroImage: Clinical 20, pp. 1070–1077. Cited by: Table 1, §2.
  • [34] Y. Liu, S. Stojadinovic, B. Hrycushko, Z. Wardak, S. Lau, W. Lu, Y. Yan, S. B. Jiang, X. Zhen, R. Timmerman, et al. (2017) A deep convolutional neural network-based automatic delineation strategy for multiple brain metastases stereotactic radiosurgery. PloS one 12 (10), pp. e0185844. Cited by: §3.
  • [35] Z. Liu, Y. Wang, X. Liu, Y. Du, Z. Tang, K. Wang, J. Wei, D. Dong, Y. Zang, J. Dai, et al. (2018) Radiomics analysis allows for precise prediction of epilepsy in patients with low-grade gliomas. NeuroImage: Clinical 19, pp. 271–278. Cited by: Table 1, §2.
  • [36] P. Lohmann, M. Kocher, G. Ceccon, E. K. Bauer, G. Stoffels, S. Viswanathan, M. I. Ruge, B. Neumaier, N. J. Shah, G. R. Fink, et al. (2018) Combined FET PET/MRI radiomics differentiates radiation injury from recurrent brain metastasis. NeuroImage: Clinical 20, pp. 537–542. Cited by: Table 1, §2.
  • [37] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, Y. Burren, N. Porz, J. Slotboom, R. Wiest, et al. (2014) The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging 34 (10), pp. 1993–2024. Cited by: §3.
  • [38] N. Nabizadeh and M. Kubat (2015) Brain tumors detection and segmentation in MR images: Gabor wavelet vs. statistical features. Computers & Electrical Engineering 45, pp. 286–301. Cited by: §2.
  • [39] L. Nanni, S. Ghidoni, and S. Brahnam (2017) Handcrafted vs. non-handcrafted features for computer vision classification. Pattern Recognition 71, pp. 158–172. Cited by: §1.
  • [40] L. Nanni, C. Salvatore, A. Cerasa, I. Castiglioni, A. D. N. Initiative, et al. (2016) Combining multiple approaches for the early diagnosis of Alzheimer's disease. Pattern Recognition Letters 84, pp. 259–266. Cited by: §2.
  • [41] D. Nie, J. Lu, H. Zhang, E. Adeli, J. Wang, Z. Yu, L. Liu, Q. Wang, J. Wu, and D. Shen (2019) Multi-channel 3D deep feature learning for survival time prediction of brain tumor patients using multi-modal neuroimages. Scientific Reports 9 (1), pp. 1103. Cited by: §3.
  • [42] T. Ojala, M. Pietikäinen, and T. Mäenpää (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis & Machine Intelligence (7), pp. 971–987. Cited by: §2.
  • [43] C. Parmar, E. R. Velazquez, R. Leijenaar, M. Jermoumi, S. Carvalho, R. H. Mak, S. Mitra, B. U. Shankar, R. Kikinis, B. Haibe-Kains, et al. (2014) Robust radiomics feature quantification using semiautomatic volumetric segmentation. PLoS ONE 9 (7), pp. e102107. Cited by: §1.
  • [44] S. Polepaka, C. S. Rao, and M. C. Mohan (2019) IDSS-based two stage classification of brain tumor using SVM. Health and Technology, pp. 1–10. Cited by: Table 1, §2.
  • [45] Z. Qian, Y. Li, Y. Wang, L. Li, R. Li, K. Wang, S. Li, K. Tang, C. Zhang, X. Fan, et al. (2019) Differentiation of glioblastoma from solitary brain metastases using radiomic machine-learning classifiers. Cancer Letters 451, pp. 128–135. Cited by: Table 1.
  • [46] J. Schmidhuber (2015) Deep learning in neural networks: an overview. Neural Networks 61, pp. 85–117. Cited by: §3.
  • [47] C. Shen, Z. Liu, Z. Wang, J. Guo, H. Zhang, Y. Wang, J. Qin, H. Li, M. Fang, Z. Tang, et al. (2018) Building CT radiomics based nomogram for preoperative esophageal cancer patients lymph node metastasis prediction. Translational Oncology 11 (3), pp. 815–824. Cited by: Table 1, §2.
  • [48] K. H. R. Singh (2016) A comparison of gray-level run length matrix and gray-level co-occurrence matrix towards cereal grain classification. International Journal of Computer Engineering & Technology (IJCET) 7 (6), pp. 9–17. Cited by: §2.
  • [49] G. Song, Z. Huang, Y. Zhao, X. Zhao, Y. Liu, M. Bao, J. Han, and P. Li (2019) A noninvasive system for the automatic detection of gliomas based on hybrid features and PSO-KSVM. IEEE Access 7, pp. 13842–13855. Cited by: Table 1, §2, §2.
  • [50] M. Subramanyam and J. Goyal (2016) Translational biomarkers: from discovery and development to clinical practice. Drug Discovery Today: Technologies 21, pp. 3–10. Cited by: §4.
  • [51] UCSF Health: brain tumor treatment. Note: https://www.ucsfhealth.org/conditions/brain_tumor/treatment.html. Accessed: 07/01/2019. Cited by: §1.
  • [52] G. Wang, W. Li, S. Ourselin, and T. Vercauteren (2017) Automatic brain tumor segmentation using cascaded anisotropic convolutional neural networks. In International MICCAI Brainlesion Workshop, pp. 178–190. Cited by: §3.
  • [53] K. Weiss, T. M. Khoshgoftaar, and D. Wang (2016) A survey of transfer learning. Journal of Big Data 3 (1), pp. 9. Cited by: §3.
  • [54] K. C. Wong, T. Syeda-Mahmood, and M. Moradi (2018) Building medical image classifiers with very limited data using segmentation networks. Medical Image Analysis 49, pp. 105–116. Cited by: §3.
  • [55] S. Wu, J. Zheng, Y. Li, Z. Wu, S. Shi, M. Huang, H. Yu, W. Dong, J. Huang, and T. Lin (2018) Development and validation of an MRI-based radiomics signature for the preoperative prediction of lymph node metastasis in bladder cancer. EBioMedicine 34, pp. 76–84. Cited by: §2.
  • [56] K. Yasaka, H. Akai, A. Kunimatsu, S. Kiryu, and O. Abe (2018) Deep learning with convolutional neural network in radiology. Japanese Journal of Radiology 36 (4), pp. 257–272. Cited by: §3.
  • [57] Z. Zhang, J. Yang, A. Ho, W. Jiang, J. Logan, X. Wang, P. D. Brown, S. L. McGovern, N. Guha-Thakurta, S. D. Ferguson, et al. (2018) A predictive model for distinguishing radiation necrosis from tumour progression after Gamma Knife radiosurgery based on radiomic features from MR images. European Radiology 28 (6), pp. 2255–2263. Cited by: Table 1, §2, §2.
  • [58] X. Zhao, Y. Wu, G. Song, Z. Li, Y. Zhang, and Y. Fan (2018) A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Medical Image Analysis 43, pp. 98–111. Cited by: §3.
  • [59] H. Zhou, D. Dong, B. Chen, M. Fang, Y. Cheng, Y. Gan, R. Zhang, L. Zhang, Y. Zang, Z. Liu, et al. (2018) Diagnosis of distant metastasis of lung cancer: based on clinical and radiomic features. Translational Oncology 11 (1), pp. 31–36. Cited by: Table 1, §2, §2.
  • [60] M. Zhou, B. Chaudhury, L. O. Hall, D. B. Goldgof, R. J. Gillies, and R. A. Gatenby (2017) Identifying spatial imaging biomarkers of glioblastoma multiforme for survival group prediction. Journal of Magnetic Resonance Imaging 46 (1), pp. 115–123. Cited by: §1.
  • [61] M. Zhou, J. Scott, B. Chaudhury, L. Hall, D. Goldgof, K. W. Yeom, M. Iv, Y. Ou, J. Kalpathy-Cramer, S. Napel, et al. (2018) Radiomics in brain tumor: image assessment, quantitative feature descriptors, and machine-learning approaches. American Journal of Neuroradiology 39 (2), pp. 208–216. Cited by: §1.