Pulmonary adenocarcinoma is the most common histopathological subtype of lung cancer. Identifying the pathological invasiveness of pulmonary adenocarcinomas early via computed tomography (CT) is clinically important and can guide clinical decision making [maeshima2010histological; tsutani2013prognostic; yanagawa2017radiological; han2018ct]. According to the degree of invasiveness, adenocarcinomas are classified as atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IA) [travis2011international]. Owing to noise and artifacts in CT imaging, it is challenging for radiologists to differentiate these degrees of invasiveness, so a reliable invasiveness prediction system is increasingly needed in clinical practice.
Existing works fall mainly into two groups: methods that extract conventional radiomics features and identify the invasiveness type with a statistical classifier such as a support vector machine or random forest [xue2018use; mei2018predicting; zhao2019development], and methods that use a convolutional neural network (CNN) to learn abstract features for predicting invasiveness [zhao20183dcnn; yanagawa2019applicationcnn]. Radiomics features contain hand-crafted, low-level information, whereas deep-learning features are data-driven, high-level representations.
In this paper, we investigate the combination and complementarity of radiomics and deep-learning features. Specifically, we propose an effective deep feature fusion network that merges the two feature types and is further optimized in a data-driven manner. Our model incorporates the strengths of these two different representations to make accurate predictions.
To the best of our knowledge, this is the first attempt to construct an end-to-end deep learning framework that combines radiomics features and deep-learning features to address the invasiveness prediction problem. Extensive experimental results on our private dataset demonstrate the effectiveness of the proposed framework and clearly indicate that complementarity modeling between different feature representations is valuable for identifying invasiveness.
In this section, we present the proposed invasiveness prediction framework, which is built on an effective feature fusion network. The network architecture is shown in Fig. 1. Our model consists of two streams: the first is a convolutional neural network that takes 3D nodule-centered patches of lung CT slices as input and computes deep-learning features; the second extracts and selects radiomics features from the same patches. Each feature set then passes through a new conversion layer, and the two are concatenated in a new fusion layer to produce an enhanced representation. We argue that the radiomics stream regularizes the network and serves as deep supervision during learning. Finally, a softmax function is applied in the last layer to predict labels, and the cross-entropy loss is minimized to train the network.
Deep-learning features. The ResNet50 [he2016deep] architecture is used to extract deep-learning features. Specifically, to capture spatial context across neighboring CT slices, we use 3D convolutions with "same" padding instead of 2D convolution layers. The output of the average pooling layer is a 2048-dimensional vector, which we regard as the deep-learning features.
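As an illustration of how the 2048-dimensional vector arises, the sketch below applies global average pooling to a stand-in for the last ResNet50 feature map. This is a minimal NumPy sketch, not the paper's MXNet implementation; the depth/height/width sizes are arbitrary placeholders, and only the 2048-channel count comes from ResNet50.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the final ResNet50 feature map on a 3D nodule patch:
# (channels, depth, height, width); 2048 channels as in ResNet50.
# The spatial sizes here are placeholder assumptions.
fmap = rng.normal(size=(2048, 2, 4, 4))

# Global average pooling over the depth/spatial axes yields the
# 2048-dimensional deep-learning feature vector.
deep_features = fmap.mean(axis=(1, 2, 3))
```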
Radiomics features. We first extract approximately 4000 radiomics features from the CT slices, which fall into four types: intensity, shape, texture, and wavelet [van2017computational]. Next, we remove radiomics features with low variance (below 0.8) on the training set. Finally, the remaining features are processed by the K-best method and a Lasso model, which select roughly 100 features.
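The three-stage selection above can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the paper's pipeline: the feature counts, the K-best score function, and the Lasso regularization strength are placeholder assumptions.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 400))   # stand-in for the ~4000-D radiomics features
y = rng.integers(0, 4, size=200)  # four invasiveness classes (AAH/AIS/MIA/IA)

# Step 1: drop low-variance features (threshold 0.8, as in the paper),
# fitted on the training set only.
vt = VarianceThreshold(threshold=0.8)
X_vt = vt.fit_transform(X)

# Step 2: univariate K-best filtering (f_classif is an assumed score function).
kb = SelectKBest(f_classif, k=200)
X_kb = kb.fit_transform(X_vt, y)

# Step 3: Lasso-based selection; keep features with nonzero coefficients
# (alpha is an illustrative value, not from the paper).
lasso = Lasso(alpha=0.05).fit(X_kb, y)
selected = np.flatnonzero(lasso.coef_)
X_sel = X_kb[:, selected]
```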
Feature fusion. The fusion layer uses a fully connected structure to provide self-adaptation in combining the radiomics features (f_r) and the deep-learning features (f_d). Before fusion, each feature vector passes through a new conversion layer, namely a fully connected layer with a 512-dimensional output, which bridges the dimensional gap between the two kinds of features and improves the convergence of our feature fusion network. Specifically, the input of the fusion layer is the concatenation x = [g_r(f_r); g_d(f_d)], where g_r and g_d denote the two conversion layers; the output of the fusion layer is then computed as y = Wx + b, where W and b are the learnable weight matrix and bias of the fully connected fusion layer.
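The conversion-and-fusion step can be traced numerically as below. This NumPy sketch is not the paper's MXNet implementation: the ReLU activation and the small random weight initialization are illustrative assumptions, and only the feature dimensions (2048-D deep features, ~100-D radiomics features, 512-D conversion layers, four classes) follow the text.

```python
import numpy as np

def dense_relu(x, W, b):
    # Fully connected layer followed by ReLU (activation is an assumption).
    return np.maximum(x @ W + b, 0.0)

rng = np.random.default_rng(0)
f_d = rng.normal(size=(1, 2048))  # deep-learning features from ResNet50
f_r = rng.normal(size=(1, 100))   # selected radiomics features

# Conversion layers: map both feature types into a common 512-D space.
W_d, b_d = rng.normal(scale=0.01, size=(2048, 512)), np.zeros(512)
W_r, b_r = rng.normal(scale=0.01, size=(100, 512)), np.zeros(512)
x = np.concatenate([dense_relu(f_d, W_d, b_d),
                    dense_relu(f_r, W_r, b_r)], axis=1)  # fusion input, 1024-D

# Fusion layer + softmax over the four invasiveness classes.
W_f, b_f = rng.normal(scale=0.01, size=(1024, 4)), np.zeros(4)
logits = x @ W_f + b_f
exp = np.exp(logits - logits.max())
probs = exp / exp.sum()
```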
Dataset. To evaluate the performance of our framework, we collected lung CT scans of 676 patients from a collaborating hospital. Each CT scan was manually segmented to obtain a 3D nodule-centered patch. The invasiveness of these nodules was pathologically proven and graded as AAH (158 patients), AIS (136 patients), MIA (53 patients), and IA (329 patients). We randomly selected 80% of the patients as the training set and 20% as the testing set. Examples are shown in Fig. 2.
Training details and settings. The MXNet platform [chen2015mxnet] was used to construct the proposed deep architecture. The model parameters were initialized from a pre-trained ResNet50 and then fine-tuned for around 20 epochs under an early stopping criterion. The stochastic gradient descent (SGD) optimizer was used with a learning rate of 0.001.
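The early stopping criterion mentioned above can be sketched as a small helper: stop when the validation loss has not improved for a fixed number of epochs. The `patience` value and the sample loss curve are illustrative assumptions, not values reported in the paper.

```python
def early_stopping(val_losses, patience=3):
    """Return the epoch index at which training stops: the first epoch
    after which the validation loss has failed to improve for
    `patience` consecutive epochs (or the last epoch otherwise)."""
    best, wait = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0  # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses) - 1

# Best loss at epoch 2; three non-improving epochs later, training stops.
stop_epoch = early_stopping([1.0, 0.8, 0.7, 0.72, 0.71, 0.75])  # -> 5
```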
Results and analysis. The proposed model was evaluated on our private dataset against three representative methods, denoted 'RF+SVM', 'CNN', and 'RF+SVM+CNN'. The results are shown in Table 1. Specifically, 'RF+SVM' denotes using radiomics features alone with a support vector machine classifier; 'CNN' denotes using a deep convolutional neural network (ResNet50) alone; and 'RF+SVM+CNN' denotes combining the class probabilities of 'RF+SVM' and 'CNN'.
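The probability-level combination used by the 'RF+SVM+CNN' baseline can be sketched as a simple average of the two posteriors followed by an argmax; the probability values below are hypothetical, and averaging is one common late-fusion choice rather than a detail confirmed by the paper.

```python
import numpy as np

# Hypothetical class posteriors over (AAH, AIS, MIA, IA).
p_rf_svm = np.array([0.10, 0.20, 0.30, 0.40])  # radiomics + SVM
p_cnn    = np.array([0.05, 0.25, 0.20, 0.50])  # ResNet50

# Simple late fusion: average the posteriors, then take the argmax.
p_combined = (p_rf_svm + p_cnn) / 2.0
pred = int(p_combined.argmax())  # -> 3, i.e. IA
```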
Fig. 3: Confusion matrices of the different methods.
As shown in Table 1, the deeply fused features not only outperform radiomics features and deep-learning features used alone, but also surpass the simple fusion strategy. Note that 'RF+SVM+CNN' performs second best and outperforms both 'RF+SVM' and 'CNN', which supports our assumption that radiomics features and CNN features are complementary. Our model in turn outperforms 'RF+SVM+CNN' because the features are fused at the deep-feature level and further optimized, whereas in 'RF+SVM+CNN' the predicted probabilities from the CNN are simply combined with those from the radiomics classifier, which may not be optimal.
We also visualized the confusion matrices of the different methods in Fig. 3. The proposed fused features outperformed or matched the individual features in all classes, whereas the simple combination strategy decreased performance on the AIS class and fell to middling performance on the IA class, suggesting that the feature fusion network effectively exploits different feature representations to improve discriminative power.
In this work, we proposed an effective framework for invasiveness prediction of pulmonary adenocarcinoma, which jointly exploits radiomics and deep-learning features by training a feature fusion network. The model uses radiomics features to regularize the convolutional neural network so that the learned deep features become efficiently complementary to the radiomics features. Extensive experiments on a private dataset verify the feasibility of our method. The results suggest that combining information from different feature representations helps train a superior model, which is a promising avenue for invasiveness prediction.
-  Akiko Miyagi Maeshima, Naobumi Tochigi, Akihiko Yoshida, Hisao Asamura, Koji Tsuta, and Hitoshi Tsuda. Histological scoring for small lung adenocarcinomas 2 cm or less in diameter: a reliable prognostic indicator. Journal of Thoracic Oncology, 5(3):333–339, 2010.
-  Yasuhiro Tsutani, Yoshihiro Miyata, Takahiro Mimae, Kei Kushitani, Yukio Takeshima, Masahiro Yoshimura, and Morihito Okada. The prognostic role of pathologic invasive component size, excluding lepidic growth, in stage I lung adenocarcinoma. The Journal of Thoracic and Cardiovascular Surgery, 146(3):580–585, 2013.
-  Masahiro Yanagawa, Takeshi Johkoh, Masayuki Noguchi, Eiichi Morii, Yasushi Shintani, Meinoshin Okumura, Akinori Hata, Maki Fujiwara, Osamu Honda, and Noriyuki Tomiyama. Radiological prediction of tumor invasiveness of lung adenocarcinoma on thin-section CT. Medicine, 96(11), 2017.
-  L Han, P Zhang, Y Wang, Z Gao, H Wang, X Li, and Zhaoxiang Ye. CT quantitative parameters to predict the invasiveness of lung pure ground-glass nodules (pGGNs). Clinical Radiology, 73(5):504.e1, 2018.
-  William D Travis, Elisabeth Brambilla, Masayuki Noguchi, Andrew G Nicholson, Kim R Geisinger, Yasushi Yatabe, David G Beer, Charles A Powell, Gregory J Riely, Paul E Van Schil, et al. International association for the study of lung cancer/american thoracic society/european respiratory society international multidisciplinary classification of lung adenocarcinoma. Journal of thoracic oncology, 6(2):244–285, 2011.
-  Xing Xue, Yong Yang, Qiang Huang, Feng Cui, Yuqing Lian, Siying Zhang, Linpeng Yao, Wei Peng, Xin Li, Peipei Pang, et al. Use of a radiomics model to predict tumor invasiveness of pulmonary adenocarcinomas appearing as pulmonary ground-glass nodules. BioMed research international, 2018, 2018.
-  Xueyan Mei, Rui Wang, Wenjia Yang, Fangfei Qian, Xiaodan Ye, Li Zhu, Qunhui Chen, Baohui Han, Timothy Deyer, Jingyi Zeng, et al. Predicting malignancy of pulmonary ground-glass nodules and their invasiveness by random forest. Journal of thoracic disease, 10(1):458, 2018.
-  Wei Zhao, Ya’nan Xu, Zhiming Yang, Yingli Sun, Cheng Li, Liang Jin, Pan Gao, Wenjie He, Peijun Wang, Hongli Shi, et al. Development and validation of a radiomics nomogram for identifying invasiveness of pulmonary adenocarcinomas appearing as subcentimeter ground-glass opacity nodules. European journal of radiology, 112:161–168, 2019.
-  Wei Zhao, Jiancheng Yang, Yingli Sun, Cheng Li, Weilan Wu, Liang Jin, Zhiming Yang, Bingbing Ni, Pan Gao, Peijun Wang, et al. 3D deep learning from CT scans predicts tumor invasiveness of subcentimeter pulmonary adenocarcinomas. Cancer Research, 78(24):6881–6889, 2018.
-  Masahiro Yanagawa, Hirohiko Niioka, Akinori Hata, Noriko Kikuchi, Osamu Honda, Hiroyuki Kurakami, Eiichi Morii, Masayuki Noguchi, Yoshiyuki Watanabe, Jun Miyake, et al. Application of deep learning (3-dimensional convolutional neural network) for the prediction of pathological invasiveness in lung adenocarcinoma: A preliminary study. Medicine, 98(25):e16119, 2019.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
-  Joost JM Van Griethuysen, Andriy Fedorov, Chintan Parmar, Ahmed Hosny, Nicole Aucoin, Vivek Narayan, Regina GH Beets-Tan, Jean-Christophe Fillion-Robin, Steve Pieper, and Hugo JWL Aerts. Computational radiomics system to decode the radiographic phenotype. Cancer research, 77(21):e104–e107, 2017.
-  Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274, 2015.