Invasiveness Prediction of Pulmonary Adenocarcinomas Using Deep Feature Fusion Networks

09/21/2019 ∙ by Xiang Li, et al. ∙ Technische Universität München ∙ Sun Yat-sen University

Early diagnosis of pathological invasiveness of pulmonary adenocarcinomas using computed tomography (CT) imaging would alter the course of treatment of adenocarcinomas and subsequently improve the prognosis. Most of the existing systems use either conventional radiomics features or deep-learning features alone to predict the invasiveness. In this study, we explore the fusion of the two kinds of features and claim that radiomics features can be complementary to deep-learning features. An effective deep feature fusion network is proposed to exploit the complementarity between the two kinds of features, which improves the invasiveness prediction results. We collected a private dataset that contains lung CT scans of 676 patients categorized into four invasiveness types from a collaborating hospital. Evaluations on this dataset demonstrate the effectiveness of our proposal.







1 Introduction

Pulmonary adenocarcinoma is the most common histopathological subtype of lung cancer. Early identification of the pathological invasiveness of pulmonary adenocarcinomas by computed tomography (CT) is clinically important and could guide clinical decision making [1, 2, 3, 4].

According to the degree of invasiveness, adenocarcinomas are classified as atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IA) [5]. Due to the noise and artifacts in CT imaging, it is challenging for radiologists to differentiate these degrees of invasiveness, and a reliable invasiveness prediction system is increasingly needed in clinical practice.

Existing works can be mainly categorized into two groups: methods that extract conventional radiomics features and identify the invasiveness types with a statistical classifier such as a support vector machine or random forest [6, 7, 8], and methods that use a convolutional neural network (CNN) to learn abstract features for predicting invasiveness [9, 10]. Radiomics features encode hand-crafted, low-level information, while deep-learning features are data-driven, high-level representations.

In this paper, we investigate the combination and complementarity of radiomics features and deep-learning features. Specifically, we propose an effective deep feature fusion network which merges the radiomics and deep-learning features and is further optimized in a data-driven manner. Our model can incorporate the strength of these two different features to make an accurate prediction.

To the best of our knowledge, this is the first attempt to construct an end-to-end deep learning framework that combines radiomics features and deep-learning features to address the invasiveness prediction problem. Extensive experimental results on our private dataset demonstrate the effectiveness of the proposed framework and clearly indicate that complementarity modeling between different feature representations is valuable for identifying invasiveness.

2 Methodology

Figure 1: The architecture of our invasiveness prediction framework. The radiomics features and deep-learning features are aggregated in feature level and optimized in an end-to-end manner.

In this section, we present the proposed invasiveness prediction framework, which is built on an effective feature fusion network. The network architecture is shown in Fig. 1. Our model consists of two streams: the first is a convolutional neural network that takes 3D nodule-centered patches of lung CT slices as input and computes deep-learning features; the second extracts and selects radiomics features from the same patches. Each feature stream passes through a new conversion layer, and the outputs are concatenated in a new fusion layer to produce an enhanced representation. We argue that the radiomics stream regularizes the network and serves as deep supervision during learning. Finally, a softmax function is applied in the last layer to produce label predictions, and the cross-entropy loss is minimized to train the network.

Deep-learning features. The ResNet50 [11] architecture is used to extract deep-learning features. Specifically, in order to capture spatial context information across neighboring CT slices, we replace the 2D convolution layers with 3D convolutions using 'same' padding. The output of the average pooling layer is a 2048-dimensional vector, which we regard as the deep-learning features.
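As a rough illustration of the deep stream's output (a sketch, not the authors' code; the 3D ResNet50 itself is omitted and the spatial sizes are assumptions), global average pooling collapses the spatial axes of the final 3D feature map into the 2048-dimensional deep-feature vector:

```python
import numpy as np

# Hypothetical final feature map of a 3D ResNet50: (channels, depth, height, width).
# The channel count (2048) matches the text; the spatial sizes are made up.
fmap = np.random.rand(2048, 2, 4, 4)

# Global average pooling over the three spatial axes yields the deep-learning features.
deep_feat = fmap.mean(axis=(1, 2, 3))
assert deep_feat.shape == (2048,)
```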

Radiomics features. We first extract around 4000 radiomics features from the CT slices, which fall into four types: intensity, shape, texture, and wavelet [12]. Next, we remove the radiomics features with low variance (below 0.8) on the training set. Finally, the remaining radiomics features are processed by a K-best method and a Lasso model, which select around 100 features.
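The variance-filtering stage can be sketched as below (a minimal sketch: variance is used as a stand-in score for the K-best criterion, and the Lasso step is omitted; thresholds are illustrative):

```python
import numpy as np

def select_features(X, var_thresh=0.8, k=100):
    """Drop low-variance columns of X (samples x features), then keep the
    k highest-scoring remaining columns. Variance serves as a placeholder
    score; the paper's pipeline uses a K-best criterion plus a Lasso model."""
    var = X.var(axis=0)
    keep = np.where(var > var_thresh)[0]           # variance filter
    scores = X[:, keep].var(axis=0)                # placeholder K-best score
    top = np.argsort(scores)[::-1][:min(k, keep.size)]
    return keep[np.sort(top)]                      # indices of selected features

# Toy matrix: the first column is constant and gets filtered out.
X = np.array([[0., 0., 0.],
              [0., 10., 2.],
              [0., 0., 0.],
              [0., 10., 2.]])
selected = select_features(X, var_thresh=0.8, k=2)
```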

Feature fusion. The fusion layer uses a fully connected mapping to adaptively combine the radiomics features (f_r) and the deep-learning features (f_d). Before fusion, each feature stream passes through a new conversion layer, namely a fully connected layer with a 512-dimensional output, which bridges the dimensional difference between the two kinds of features and improves the convergence of our feature fusion network. Specifically, the input of the fusion layer is the concatenation f = [f_r; f_d] of the converted features, and the output of this layer is computed as y = σ(Wf + b), where σ denotes the ReLU activation function and W and b are the layer's weight matrix and bias. In this way, our model jointly maps the two different features into a shared embedding space and makes better use of the strengths of each.
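The conversion and fusion layers can be sketched as plain fully connected layers with ReLU (a sketch under assumptions: the 512-D conversion size follows the text, while the fusion layer's output width, the random weights, and the input dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def fc_relu(x, W, b):
    """Fully connected layer followed by the ReLU activation."""
    return np.maximum(0.0, W @ x + b)

f_r = rng.standard_normal(100)      # selected radiomics features (~100-D)
f_d = rng.standard_normal(2048)     # deep-learning features (2048-D)

# Conversion layers: map both streams into a common 512-D space.
W_r, b_r = 0.01 * rng.standard_normal((512, 100)), np.zeros(512)
W_d, b_d = 0.01 * rng.standard_normal((512, 2048)), np.zeros(512)
g_r = fc_relu(f_r, W_r, b_r)
g_d = fc_relu(f_d, W_d, b_d)

# Fusion layer: concatenate (f = [f_r; f_d] after conversion), then FC + ReLU.
f = np.concatenate([g_r, g_d])      # 1024-D fused input
W_f, b_f = 0.01 * rng.standard_normal((512, 1024)), np.zeros(512)
y = fc_relu(f, W_f, b_f)            # y = ReLU(W f + b)
assert y.shape == (512,)
```

In training, W and b would be learned jointly with the CNN via backpropagation rather than fixed random values as here.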

3 Experiments

Dataset. To evaluate the performance of our framework, we collected lung CT scans of 676 patients from a collaborating hospital. Each CT scan was manually segmented to obtain a 3D nodule-centered patch. The invasiveness of each nodule was pathologically proven and graded as AAH (158 patients), AIS (136 patients), MIA (53 patients), or IA (329 patients). We randomly selected 80% of the patients as the training set and 20% as the testing set. Examples are shown in Fig. 2.
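The patient-level 80/20 split can be sketched as follows (the seed and the use of plain index IDs are illustrative, not the authors' protocol):

```python
import random

random.seed(0)                            # illustrative seed
patient_ids = list(range(676))            # one ID per patient
random.shuffle(patient_ids)

n_train = int(0.8 * len(patient_ids))     # 540 training patients
train_ids = patient_ids[:n_train]
test_ids = patient_ids[n_train:]          # remaining 136 for testing
```

Splitting at the patient level (rather than the slice level) avoids leaking slices of the same nodule into both sets.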

(a) AAH
(b) AIS
(c) MIA
(d) IA
Figure 2: The examples of our dataset.

Training details and settings. The MXNet platform [13] was used to build the proposed deep architecture. The model parameters were initialized from a pre-trained ResNet50 and then fine-tuned for around 20 epochs using an early stopping criterion. The stochastic gradient descent (SGD) optimizer was used with a learning rate of 0.001.

Results and analysis. The proposed model was evaluated on our private dataset against three representative baselines, denoted 'RF+SVM', 'CNN', and 'RF+SVM+CNN'. The results are shown in Table 1. Specifically, 'RF+SVM' denotes using radiomics features alone with a support vector machine classifier; 'CNN' denotes using a deep convolutional neural network (ResNet50) alone; and 'RF+SVM+CNN' denotes combining the class probabilities of 'RF+SVM' and 'CNN'.
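For contrast with our feature-level fusion, the score-level combination in 'RF+SVM+CNN' can be sketched as a simple average of the two models' class probabilities (the combination rule and the probability values below are assumptions for illustration only):

```python
import numpy as np

# Hypothetical per-class probabilities over (AAH, AIS, MIA, IA).
p_rf_svm = np.array([0.10, 0.20, 0.10, 0.60])   # radiomics features + SVM
p_cnn    = np.array([0.05, 0.40, 0.15, 0.40])   # ResNet50

p_combined = (p_rf_svm + p_cnn) / 2             # simple score-level average
pred = int(np.argmax(p_combined))               # 3, i.e. class IA
```

Such late fusion fixes the combination rule in advance, whereas the proposed network learns how to combine the two streams from data.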

Method        RF+SVM   CNN     RF+SVM+CNN   Ours
Accuracy (%)  82.78    86.09   87.42        88.74
Table 1: The overall classification accuracy (%) of different methods, calculated by comparing the prediction with the ground-truth label for each patient in the testing set.
(a) RF+SVM
(b) CNN
(d) Ours
Figure 3: The confusion matrices of different methods.

As shown in Table 1, the deeply fused features not only outperform radiomics features and deep-learning features alone, but also show superiority over the simple fusion strategy. Note that 'RF+SVM+CNN' performs second-best and outperforms both 'RF+SVM' and 'CNN', which supports our assumption that radiomics features and CNN features are complementary. Our model further outperforms 'RF+SVM+CNN' because the features are fused at the deep feature level and jointly optimized, whereas in 'RF+SVM+CNN' the predicted probabilities of the CNN are simply combined with those of the radiomics classifier, which may not be optimal.

We also visualize the confusion matrices of the different methods in Fig. 3. We observe that the proposed fused features match or outperform the individual features on all classes, while the simple combination strategy decreases performance on the AIS class and falls to an intermediate performance on the IA class. This suggests that the feature fusion network effectively exploits the different feature representations to improve discriminative power.
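The confusion matrices in Fig. 3 are computed from predictions in the usual way; a minimal sketch (the toy label vectors below are made up for illustration, not our results):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=4):
    """Rows index the ground-truth class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy labels over the four classes AAH(0), AIS(1), MIA(2), IA(3).
cm = confusion_matrix([0, 0, 1, 2, 3], [0, 1, 1, 2, 3])
per_class_recall = cm.diagonal() / cm.sum(axis=1)
overall_acc = cm.trace() / cm.sum()   # 4 correct out of 5
```

Per-class recall (the diagonal divided by row sums) is what makes class-wise effects such as the AIS drop visible, which overall accuracy alone hides.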

4 Conclusion

In this work, we proposed an effective framework for invasiveness prediction of pulmonary adenocarcinomas, which jointly utilizes radiomics features and deep-learning features by training a feature fusion network. The model uses the radiomics features to regularize the convolutional neural network so that the learned deep features efficiently complement the radiomics features. Extensive experiments on a private dataset verify the feasibility of our method. The results suggest that combining information from different features helps train a superior model, which is a promising avenue for invasiveness prediction.


  • [1] Akiko Miyagi Maeshima, Naobumi Tochigi, Akihiko Yoshida, Hisao Asamura, Koji Tsuta, and Hitoshi Tsuda. Histological scoring for small lung adenocarcinomas 2 cm or less in diameter: a reliable prognostic indicator. Journal of Thoracic Oncology, 5(3):333–339, 2010.
  • [2] Yasuhiro Tsutani, Yoshihiro Miyata, Takahiro Mimae, Kei Kushitani, Yukio Takeshima, Masahiro Yoshimura, and Morihito Okada. The prognostic role of pathologic invasive component size, excluding lepidic growth, in stage I lung adenocarcinoma. The Journal of Thoracic and Cardiovascular Surgery, 146(3):580–585, 2013.
  • [3] Masahiro Yanagawa, Takeshi Johkoh, Masayuki Noguchi, Eiichi Morii, Yasushi Shintani, Meinoshin Okumura, Akinori Hata, Maki Fujiwara, Osamu Honda, and Noriyuki Tomiyama. Radiological prediction of tumor invasiveness of lung adenocarcinoma on thin-section CT. Medicine, 96(11), 2017.
  • [4] L Han, P Zhang, Y Wang, Z Gao, H Wang, X Li, and Zhaoxiang Ye. CT quantitative parameters to predict the invasiveness of lung pure ground-glass nodules (pGGNs). Clinical Radiology, 73(5):504–e1, 2018.
  • [5] William D Travis, Elisabeth Brambilla, Masayuki Noguchi, Andrew G Nicholson, Kim R Geisinger, Yasushi Yatabe, David G Beer, Charles A Powell, Gregory J Riely, Paul E Van Schil, et al. International Association for the Study of Lung Cancer/American Thoracic Society/European Respiratory Society international multidisciplinary classification of lung adenocarcinoma. Journal of Thoracic Oncology, 6(2):244–285, 2011.
  • [6] Xing Xue, Yong Yang, Qiang Huang, Feng Cui, Yuqing Lian, Siying Zhang, Linpeng Yao, Wei Peng, Xin Li, Peipei Pang, et al. Use of a radiomics model to predict tumor invasiveness of pulmonary adenocarcinomas appearing as pulmonary ground-glass nodules. BioMed research international, 2018, 2018.
  • [7] Xueyan Mei, Rui Wang, Wenjia Yang, Fangfei Qian, Xiaodan Ye, Li Zhu, Qunhui Chen, Baohui Han, Timothy Deyer, Jingyi Zeng, et al. Predicting malignancy of pulmonary ground-glass nodules and their invasiveness by random forest. Journal of thoracic disease, 10(1):458, 2018.
  • [8] Wei Zhao, Ya’nan Xu, Zhiming Yang, Yingli Sun, Cheng Li, Liang Jin, Pan Gao, Wenjie He, Peijun Wang, Hongli Shi, et al. Development and validation of a radiomics nomogram for identifying invasiveness of pulmonary adenocarcinomas appearing as subcentimeter ground-glass opacity nodules. European journal of radiology, 112:161–168, 2019.
  • [9] Wei Zhao, Jiancheng Yang, Yingli Sun, Cheng Li, Weilan Wu, Liang Jin, Zhiming Yang, Bingbing Ni, Pan Gao, Peijun Wang, et al. 3D deep learning from CT scans predicts tumor invasiveness of subcentimeter pulmonary adenocarcinomas. Cancer Research, 78(24):6881–6889, 2018.
  • [10] Masahiro Yanagawa, Hirohiko Niioka, Akinori Hata, Noriko Kikuchi, Osamu Honda, Hiroyuki Kurakami, Eiichi Morii, Masayuki Noguchi, Yoshiyuki Watanabe, Jun Miyake, et al. Application of deep learning (3-dimensional convolutional neural network) for the prediction of pathological invasiveness in lung adenocarcinoma: A preliminary study. Medicine, 98(25):e16119, 2019.
  • [11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
  • [12] Joost JM Van Griethuysen, Andriy Fedorov, Chintan Parmar, Ahmed Hosny, Nicole Aucoin, Vivek Narayan, Regina GH Beets-Tan, Jean-Christophe Fillion-Robin, Steve Pieper, and Hugo JWL Aerts. Computational radiomics system to decode the radiographic phenotype. Cancer research, 77(21):e104–e107, 2017.
  • [13] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274, 2015.