A Novel and Efficient Tumor Detection Framework for Pancreatic Cancer via CT Images

02/11/2020 ∙ by Zhengdong Zhang, et al. ∙ Beihang University

As Deep Convolutional Neural Networks (DCNNs) have shown robust performance and results in medical image analysis, a number of deep-learning-based tumor detection methods have been developed in recent years. Nowadays, contrast-enhanced Computed Tomography (CT) is widely applied for the diagnosis and staging of pancreatic cancer. Traditional hand-crafted methods extract only low-level features, while standard convolutional neural networks fail to make full use of effective context information, which leads to inferior detection results. In this paper, a novel and efficient pancreatic tumor detection framework is designed to fully exploit context information at multiple scales. More specifically, the contribution of the proposed method consists of three components: Augmented Feature Pyramid networks, Self-adaptive Feature Fusion and a Dependencies Computation (DC) Module. First, a bottom-up path augmentation is established to fully extract and propagate low-level accurate localization information. Then, the Self-adaptive Feature Fusion encodes much richer context information at multiple scales based on the proposed regions. Finally, the DC Module is specifically designed to capture the interaction information between proposals and surrounding tissues. Experimental results show competitive detection performance with an AUC of 0.9455, which, to the best of our knowledge, outperforms other state-of-the-art methods and demonstrates that the proposed framework can detect pancreatic tumors efficiently and accurately.




I Introduction

Since the Convolutional Neural Network was applied to visual data analysis [1]

, there has been great progress in deep learning, computer vision and medical image processing. Deep-learning-based approaches hold great potential for medical image analysis, such as the tumor detection of pancreatic cancer. Pancreatic cancer is a malignant tumor disease with a 5-year survival rate of around 7%

[2][3]. The pancreas is a small organ located deep in the human body, which significantly increases the difficulty of detection. Furthermore, missing the optimal time for radical surgery is the major cause of cancer death. CT imaging, a medical imaging technology that captures the location, size and morphology of the tumor, is more helpful for the diagnosis and staging of pancreatic cancer than ultrasound imaging or Magnetic Resonance Imaging (MRI) [4]. Nevertheless, manual diagnosis requires doctors with rich clinical experience, because the quality of CT images varies between scanners and operators, and pathological texture features are hard to distinguish. Therefore, there is a growing need for a robust deep-learning-based algorithm for accurate pancreatic tumor detection.

Kishor achieved detection of pancreatic cancer in 2015 [5]. A K-means clustering approach was first utilized to group the regions of interest (ROIs); a Haar wavelet transformation and thresholding were then adopted to classify the images. The algorithm can be readily deployed in a computer-aided system, but the performance of segmentation and classification may be seriously influenced by pathological features of the cancer. Li utilized saliency maps and densely-connected convolutional networks for pancreatic ductal adenocarcinoma diagnosis in 2019 [6]. High-level features were extracted and mapped to different types of pancreatic cysts; a larger training dataset might further improve the performance. An approach to pancreatic tumor characterization based on radiologists' interpretations and label proportions was presented by Sarfaraz [7], who designed a 3D CNN-based graph-regularized sparse multi-task framework with a proportion-SVM to cope with limited labeled data. It achieved high sensitivity in diagnosing Intraductal Papillary Mucinous Neoplasms, but deep learning approaches such as Generative Adversarial Networks may show better performance [8].

Following the above considerations, an advanced framework for detecting human pancreatic tumors in CT images is proposed. Feature Pyramid Networks (FPN) utilize a top-down path with lateral connections to propagate semantic features to the low levels, but the long propagation path makes it difficult to exploit accurate localization information [9]. Therefore, a bottom-up Augmented Feature Pyramid that shortens the information path and propagates low-level features is created first. Secondly, because tumors are relatively small and nonuniform in size, Self-adaptive Feature Fusion is designed to adaptively encode and integrate context information at multiple scales based on the proposals. Thirdly, inspired by Non-local Neural Networks [10], we employ a Dependencies Computation Module to compute dependencies and acquire interaction information with surrounding tissues; the expressiveness of features is enhanced by calculating dependencies ranging from local to global. Finally, the evaluation is presented: the results achieve competitive performance compared with other deep-learning-based approaches.

Fig. 1: The architecture of the pancreatic tumor detection network.

II Methods

The novel and efficient tumor detection framework we propose is illustrated in Fig. 1. The network utilizes FPN combined with Faster R-CNN [11] as the backbone, and the contribution of the proposed method consists of three components: Augmented Feature Pyramid networks, Self-adaptive Feature Fusion and a Dependencies Computation Module. Firstly, we feed the preprocessed CT images into a pre-trained ResNet-101 for feature extraction [12], and then build the feature pyramid via up-sampling and lateral connections. Secondly, to enhance the entire feature hierarchy and improve detection performance, a bottom-up path is established to make the propagation of low-level localization information more efficient. Thirdly, we employ a Region Proposal Network (RPN) on each level to generate proposals [11], and then use Self-adaptive Feature Fusion to enlarge the corresponding ROIs and encode richer context information at multiple scales. In addition, we apply the Dependencies Computation Module to capture dependencies between each proposal and its surrounding tissues. Finally, detection results are predicted via a Score Prediction layer and a Box Regression layer, respectively.

II-A Augmented Feature Pyramid Networks

In the process of feature extraction, DCNNs can extract rich semantic information, and high-level feature maps respond strongly to global features, which is beneficial for detecting large objects [13]. The tumor, however, is relatively small in CT images, so the consecutive pooling layers may lose important spatial details of the feature maps. In addition, low-level accurate localization information is essential for tumor detection, but the information propagation path in FPN, which consists of more than 100 layers, weakens the transmission. To this end, we build a bottom-up Augmented Feature Pyramid. As shown in Fig. 1, we first generate the levels $\{P_2, P_3, P_4, P_5\}$ based on FPN. The augmented path then starts from the lowest level $P_2$, which is directly used as $N_2$ without any processing. Next, a 3×3 convolutional operator with stride 2 is applied to the higher-resolution feature map $N_i$ to reduce the map size. The down-sampled feature map is then merged with the coarser feature map $P_{i+1}$ by element-wise sum. In addition, we employ another convolutional operator on each fused feature map to generate $N_{i+1}$ for the following feature map generation. This process is iterated until $P_5$ is used. In this way, we acquire a new Augmented Feature Pyramid consisting of $\{N_2, N_3, N_4, N_5\}$.
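The bottom-up augmentation above can be sketched in a few lines of numpy. This is a minimal shape-level sketch, not the trained model: the 3×3 stride-2 convolution is stood in for by 2×2 average pooling, and the refining convolution after the element-wise sum is omitted.

```python
import numpy as np

def downsample(x):
    # Stand-in for the paper's 3x3 stride-2 convolution:
    # 2x2 average pooling halves the spatial resolution.
    h, w, c = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def augmented_pyramid(fpn_levels):
    """Bottom-up path: N2 = P2, then N_{i+1} = downsample(N_i) + P_{i+1}."""
    n = [fpn_levels[0]]                          # N2 is P2, used unchanged
    for p_next in fpn_levels[1:]:
        fused = downsample(n[-1]) + p_next       # element-wise sum with coarser FPN map
        n.append(fused)                          # a second conv would refine `fused` here
    return n

# Toy FPN maps P2..P5 with halving resolution and a fixed channel count of 256.
pyramid = [np.random.rand(64 // 2**i, 64 // 2**i, 256) for i in range(4)]
levels = augmented_pyramid(pyramid)
print([l.shape for l in levels])  # spatial sizes 64, 32, 16, 8
```

Each new level keeps the spatial size of the FPN level it merges with, so localization detail from $N_2$ reaches $N_5$ through only a handful of layers.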

II-B Self-adaptive Feature Fusion

After acquiring the proposed regions from the RPN, each ROI is assigned to one particular level according to its size, and the subsequent operations are performed on that level alone, so useful information from the other levels is discarded. Instead of using a regression function to make predictions directly on the assigned proposals, we therefore design a Self-adaptive Feature Fusion module, which aggregates hierarchical feature maps from multiple levels, to make full use of context information at multiple scales. Formally, an ROI with width $w$ and height $h$ is assigned to the level $k$ of the Augmented Feature Pyramid by:

$$k = \left\lfloor k_0 + \log_2\!\left(\sqrt{wh}/224\right)\right\rfloor \qquad (1)$$

In Eq. 1, 224 is the ImageNet pre-training size [14], and the result is clamped so that $k = 5$ represents the coarsest level $N_5$, $k = 4$ the level $N_4$, and $k = 3$ the level $N_3$.
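Eq. 1 can be checked with a short helper. This is a sketch assuming the FPN convention $k_0 = 4$ (a 224×224 ROI maps to level 4); the clamp bounds of 3 and 5 come from the levels named above.

```python
import math

def assign_level(w, h, k0=4, k_min=3, k_max=5):
    # FPN-style level assignment (Eq. 1): larger ROIs map to coarser levels.
    # k0=4 targets level 4 for a 224x224 ROI (the ImageNet pre-training size).
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / 224))
    return max(k_min, min(k_max, k))  # clamp to the available pyramid levels

print(assign_level(224, 224))  # → 4
print(assign_level(30, 40))    # a small tumor ROI maps to the finest level, 3
```

A 30×40-pixel tumor, typical of the diameters in this dataset, falls well below the 224 reference scale and therefore lands on the finest level, where localization detail is richest.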

Fig. 2: An example CT image showing the given ROI (red) and its corresponding enlarged region (blue).

As shown in Fig. 2, given an input ROI, the predicted bounding box in red fails to cover the entire area of the tumor, especially the edge response, which results in information loss. To tackle this problem, we enlarge the width and height of the ROI by a factor of 1.2 to create a new region, shown in blue. The new region contains richer context information, especially the responses around edges, which are strong indicators for accurate localization. Furthermore, high-level features have larger receptive fields and capture more semantic information, while low-level features have higher resolution and contain accurate localization details that complement the abstract features. Since both can help improve detection performance, the original region and the enlarged region are each mapped to the assigned level and its adjacent levels, so that each region obtains three feature maps from three different scales. We employ 14×14 ROI pooling over these maps to unify their sizes. The resulting descriptors are concatenated, and their dimensions are reduced by 1×1 convolutional operators. Finally, the two fused descriptors are used for score prediction and bounding box regression, respectively.
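The ROI enlargement step can be sketched as a small box utility. This is a hypothetical helper, not code from the paper: it grows the box around its center by the stated factor of 1.2 and clips the result to the image bounds.

```python
def enlarge_roi(x, y, w, h, factor=1.2, img_w=512, img_h=512):
    # Grow the ROI about its center by `factor`, clipped to the image bounds,
    # so the new region captures the tumor's edge responses.
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * factor, h * factor
    nx = max(0.0, cx - nw / 2)
    ny = max(0.0, cy - nh / 2)
    return nx, ny, min(nw, img_w - nx), min(nh, img_h - ny)

print(enlarge_roi(100, 100, 50, 40))
```

The enlarged box shifts its corner up and to the left while growing 20% in each dimension, preserving the center of the original prediction.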

II-C Dependencies Computation Module

In clinical practice, doctors pinpoint tumors in CT images by analyzing the global context, local geometric structures, shape variations, and especially the spatial relations with surrounding tissues. Accordingly, we employ the Dependencies Computation Module to compute the response at each position as a weighted sum of the features at all positions of the enlarged region. This operation enables the network to pay more attention to interactions and dependencies ranging from local to global, which is among the most useful information for tumor detection. Specifically, given an input feature map $x$, the Dependencies Computation Module is defined as follows:

$$y_i = \frac{1}{C(x)} \sum_{\forall j} f(x_i, x_j)\, g(x_j) \qquad (2)$$

$$f(x_i, x_j) = e^{\theta(x_i)^{T} \phi(x_j)}, \qquad C(x) = \sum_{\forall j} f(x_i, x_j) \qquad (3)$$

In Eq. 2 and Eq. 3, $i$ is the index of the chosen position and $j$ enumerates all other positions. The dependency between any two positions is calculated via the embeddings $\theta$, $\phi$ and $g$, which are implemented by 1×1 convolutional operators to reduce the number of channels. As the shape of the input feature is 14×14×512, the shape of each of the three corresponding outputs is 14×14×256. At last, we use an addition operator to fuse the result with the original feature:

$$z_i = W_z\, y_i + x_i \qquad (4)$$

where $y_i$ is calculated in Eq. 2 and $x_i$ is the original input; $W_z$ is a 1×1 convolution layer used to restore the shape back to 14×14×512.
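Eqs. 2–4 amount to a non-local (embedded-Gaussian) attention block. Below is a minimal numpy sketch under scaled-down assumptions: the 1×1 convolutions become plain matrix multiplications on flattened positions, and the 14×14×512 feature is replaced by a tiny 4×4×8 map; a max-subtraction is added for numerical stability, which leaves the softmax unchanged.

```python
import numpy as np

def dependencies_computation(x, theta_w, phi_w, g_w, z_w):
    """Non-local style dependencies computation on an (H, W, C) feature map.

    theta_w, phi_w, g_w: (C, C//2) projections (the 1x1 convs of Eq. 3);
    z_w: (C//2, C) matrix restoring the channel count (W_z in Eq. 4)."""
    h, w, c = x.shape
    flat = x.reshape(h * w, c)                   # every spatial position is a token
    theta, phi, g = flat @ theta_w, flat @ phi_w, flat @ g_w
    logits = theta @ phi.T                       # pairwise dependencies f(x_i, x_j)
    f = np.exp(logits - logits.max(axis=1, keepdims=True))
    f /= f.sum(axis=1, keepdims=True)            # normalization by C(x), Eq. 2-3
    y = f @ g                                    # weighted sum over all positions
    z = y @ z_w + flat                           # residual fusion, Eq. 4
    return z.reshape(h, w, c)

rng = np.random.default_rng(0)
c = 8
x = rng.standard_normal((4, 4, c))
ws = [rng.standard_normal((c, c // 2)) * 0.1 for _ in range(3)]
z = dependencies_computation(x, *ws, rng.standard_normal((c // 2, c)) * 0.1)
print(z.shape)  # same shape as the input, (4, 4, 8)
```

Because every position attends to every other position, a proposal's descriptor absorbs responses from the surrounding tissue in the enlarged region, which is exactly the interaction information the module is designed to capture.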

Fig. 3: Histogram of tumor diameters in the dataset.

III Experiments and Results

III-A Dataset

The model is trained on a dataset of pancreatic CT images provided by The Affiliated Hospital of Qingdao University. The dataset contains 2890 CT images, of which 2650 are for training and 240 for testing. There is no overlap between the training set and the test set, and all images are labeled with accurate bounding boxes by three experienced doctors. The diameter distribution of the tumors is illustrated in Fig. 3: diameters range from 15 to 104 pixels, and most are between 20 and 80 pixels. We preprocess the images and conduct data augmentation, including horizontal, vertical and diagonal flips, before training.
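The three flip augmentations can be expressed with numpy slicing. This is a sketch of the stated augmentation, treating a CT slice as a 2-D array; the "diagonal" flip is interpreted here as flipping both axes.

```python
import numpy as np

def augment(img):
    # The three flips applied before training; each view keeps the same shape,
    # so bounding-box labels only need the corresponding coordinate flips.
    return [img,
            img[:, ::-1],      # horizontal flip
            img[::-1, :],      # vertical flip
            img[::-1, ::-1]]   # diagonal flip (both axes)

img = np.arange(12).reshape(3, 4)
views = augment(img)
print(len(views))  # 4 views per image
```

Applied to 2650 training slices, this quadruples the effective training set without altering tumor appearance statistics.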

III-B Experiment Setup

The proposed method is implemented in Python using TensorFlow. During training, we set the batch size to 1, the momentum to 0.9 and the weight decay to 0.0001; the learning rate is 0.001 for the first 30K iterations, 0.0001 for the next 20K, and 0.00001 for the last 10K. For each mini-batch, we sample 512 ROIs with a positive-to-negative ratio of 1:1. For the anchors, according to the tumor diameter distribution illustrated in Fig. 3, we choose 5 scales of box areas and 5 anchor ratios of 1:1, 1:1.5, 1.5:1, 1:2 and 2:1. The hardware settings are an Intel(R) Core i7-9800X CPU, an Nvidia GeForce RTX 2080 Ti GPU and 32 GB memory on a 64-bit Ubuntu Linux desktop.
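The piecewise-constant learning-rate schedule described above can be written as a simple step function; this is an illustrative helper, not code from the paper's training scripts.

```python
def learning_rate(iteration):
    # Schedule from the paper: 1e-3 for the first 30K iterations,
    # 1e-4 for the next 20K, and 1e-5 for the final 10K.
    if iteration < 30_000:
        return 1e-3
    if iteration < 50_000:
        return 1e-4
    return 1e-5

print(learning_rate(0), learning_rate(35_000), learning_rate(55_000))
```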

Fig. 4: Example results of tumor detection. The first row shows the ground truth; the second row shows the corresponding detection results of the proposed method.

III-C Results and Discussion

Example results of tumor detection are shown in Fig. 4; the localization is relatively accurate, and the corresponding probability scores are high as well. To evaluate the detection performance, the proposed method is compared with classical object detection algorithms, including DetNet [15], Cascade R-CNN [16], Mask R-CNN [17], FPN [9], Faster R-CNN [11], RetinaNet [18], SSD512 [19] and YOLO-v3 [20]. These algorithms are trained and tested on the same pancreatic CT dataset without additional modifications. The Intersection Over Union (IOU) between each predicted bounding box and the corresponding ground-truth bounding box is calculated, defined as follows:

$$\mathrm{IOU} = \frac{\mathrm{area}(B_p \cap B_{gt})}{\mathrm{area}(B_p \cup B_{gt})}$$
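The IOU criterion is the standard ratio of intersection to union area; a minimal implementation for axis-aligned boxes:

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # intersection 50, union 150 → 1/3
```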
Furthermore, detection results whose IOU is higher than 0.5 are regarded as valid. As shown in Table I, the proposed method achieves the best Sensitivity (0.8376), Specificity (0.9179) and Accuracy (0.9018), outperforming the other methods by a notable margin. The corresponding Receiver Operating Characteristic (ROC) curves in Fig. 5 show that our proposed method is superior to the other methods, with an Area Under the Curve (AUC) of 0.9455.

Methods Sensitivity Specificity Accuracy
SSD512 [19] 0.4238 0.9088 0.6411
FPN + Faster R-CNN [9] 0.6984 0.8584 0.7416
YOLO-v3 [20] 0.7697 0.5849 0.7423
Mask R-CNN [17] 0.7244 0.8247 0.7500
Faster R-CNN [11] 0.4877 0.9131 0.7538
DetNet [15] 0.6932 0.9032 0.7695
RetinaNet [18] 0.8245 0.5238 0.7726
Cascade R-CNN [16] 0.6309 0.9113 0.7981
Our Method 0.8376 0.9179 0.9018
TABLE I: Detection performance comparison among different algorithms on test set
Fig. 5: The ROC curves of different methods for pancreatic tumor detection.

In addition, to evaluate the proposed method more thoroughly, a Free-Response Receiver Operating Characteristic (FROC) analysis is used to compute the Sensitivity at 7 FPs/scan rates. Our proposed method achieves an average score of 0.901; the corresponding results are documented in Table II.

FPs/scan 0.125 0.25 0.5 1 2 4 8 Average
Sensitivity 0.671 0.804 0.907 0.963 0.977 0.986 0.998 0.901
TABLE II: Detection performance in terms of Sensitivity based on different FPs/scan rates on test set
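The reported FROC average is simply the mean of the seven per-operating-point sensitivities from Table II, which can be verified directly:

```python
# Sensitivities at the 7 FPs/scan operating points from Table II.
sensitivities = [0.671, 0.804, 0.907, 0.963, 0.977, 0.986, 0.998]
average = sum(sensitivities) / len(sensitivities)
print(round(average, 3))  # 0.901, matching the reported average score
```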
Augmented Feature Pyramid | Self-adaptive Feature Fusion | DC Module | Accuracy
TABLE III: Ablation study on the effects of the proposed components and their combinations

Extensive ablation experiments are conducted to analyze the effects of the proposed components and their combinations. The results are documented in Table III: the Augmented Feature Pyramid networks and the Self-adaptive Feature Fusion each significantly improve the accuracy on their own, and combining all three proposed components improves the detection accuracy from 0.7416 to 0.9018.

IV Conclusion

In this paper, we study how to accurately detect pancreatic tumors, which is of great significance for diagnosis in clinical practice. We establish an Augmented Feature Pyramid to propagate low-level accurate localization information, design Self-adaptive Feature Fusion to capture richer context information at multiple scales, and compute the relational information of the features via the Dependencies Computation Module. Comprehensive evaluations and comparisons are completed, and our proposed method achieves promising performance. In the future, we will continue studying the staging of pancreatic cancer to assist doctors' clinical diagnosis.


This research is supported in part by Foundation of Shandong provincial Key Laboratory of Digital Medicine and Computer assisted Surgery (SDKL-DMCAS-2018-01), National Natural Science Foundation of China (NO. 61672077 and 61532002), Applied Basic Research Program of Qingdao (NO. 161013xx), and Beijing Natural Science Foundation-Haidian Primitive Innovation Joint Fund (L182016).


  • [1] T. Kohonen, “Self-organized formation of topologically correct feature maps,” Biological cybernetics, vol. 43, no. 1, pp. 59–69, 1982.
  • [2] D. P. Ryan, T. S. Hong, and N. Bardeesy, “Pancreatic adenocarcinoma,” New England Journal of Medicine, vol. 371, no. 11, pp. 1039–1049, 2014.
  • [3] F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global cancer statistics 2018: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: a cancer journal for clinicians, vol. 68, no. 6, pp. 394–424, 2018.
  • [4] L. C. Chu, S. Park, S. Kawamoto, Y. Wang, Y. Zhou, W. Shen, Z. Zhu, Y. Xia, L. Xie, F. Liu et al., “Application of deep learning to pancreatic cancer detection: Lessons learned from our initial experience,” Journal of the American College of Radiology, vol. 16, no. 9, pp. 1338–1342, 2019.
  • [5] C. K. K. Reddy, G. Raju, and P. Anisha, “Detection of pancreatic cancer using clustering and wavelet transform techniques,” in 2015 International Conference on Computational Intelligence and Communication Networks (CICN), 2015, pp. 332–336.
  • [6] H. Li, M. Reichert, K. Lin, N. Tselousov, R. Braren, D. Fu, R. Schmid, J. Li, B. Menze, and K. Shi, “Differential diagnosis for pancreatic cysts in ct scans using densely-connected convolutional networks,” in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019, pp. 2095–2098.
  • [7] S. Hussein, P. Kandel, C. W. Bolan, M. B. Wallace, and U. Bagci, “Lung and pancreatic tumor characterization in the deep learning era: novel supervised and unsupervised learning approaches,” IEEE transactions on medical imaging, vol. 38, no. 8, pp. 1777–1787, 2019.
  • [8] M. U. Gutmann and A. Hyvärinen, “Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics,” Journal of Machine Learning Research, vol. 13, no. Feb, pp. 307–361, 2012.
  • [9] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2117–2125.
  • [10] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7794–7803.
  • [11] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
  • [13] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path aggregation network for instance segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8759–8768.
  • [14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
  • [15] Z. Li, C. Peng, G. Yu, X. Zhang, and J. Sun, “Detnet: A backbone network for object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018.
  • [16] Z. Cai and N. Vasconcelos, “Cascade r-cnn: Delving into high quality object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 6154–6162.
  • [17] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2961–2969.
  • [18] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2980–2988.
  • [19] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in European conference on computer vision, 2016, pp. 21–37.
  • [20] J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.