Automated Segmentation of Pulmonary Lobes using Coordination-Guided Deep Neural Networks

04/19/2019 ∙ by Wenjia Wang, et al.

The identification of pulmonary lobes is of great importance in disease diagnosis and treatment, as a few lung diseases exhibit regional disorders at the lobar level; an accurate segmentation of the pulmonary lobes is therefore necessary. In this work, we propose an automated segmentation of pulmonary lobes from chest CT images using coordination-guided deep neural networks. We first employ an automated lung segmentation to extract the lung area from the CT image, then exploit a volumetric convolutional neural network (V-net) to segment the pulmonary lobes. To reduce the misclassification of different lobes, we adopt coordination-guided convolutional layers (CoordConvs) that generate additional feature maps encoding the positional information of the pulmonary lobes. The proposed model is trained and evaluated on several publicly available datasets and achieves state-of-the-art accuracy with a mean Dice coefficient index of 0.947 ± 0.044.


1 Introduction

The human lung is divided into five pulmonary lobes, which are served by independent bronchial and vascular trees. Many diseases are associated with specific lobes, so measuring a pathological lung at the lobar level is of great help in diagnosing and assessing different lung diseases; an accurate segmentation of the pulmonary lobes is thus necessary. However, lobe segmentation faces several challenges. First, the lobar boundaries defined by the pulmonary fissures are often partially invisible in CT scans. Furthermore, severe shape disorders of specific lobes may occur during the pathological progression of lung diseases. These problems have greatly limited the development of automated segmentation of pulmonary lobes.

Previously, several research groups have attempted pulmonary lobe segmentation. A few unsupervised methods have been reported, including watershed transformation [1], graph-cuts [2], b-splines [3], surface fitting [4] and a semi-automated segmentation framework [5]. These methods use anatomical information as prior knowledge, including the segmentation of airways, vessels and fissures, to generate the final segmentation of pulmonary lobes. However, as described above, the segmentation of airways, vessels and fissures is not always reliable.

As encouraged by its recent success across computer vision tasks, deep learning based methods have been reported for segmenting pulmonary lobes. George et al. [6] propose a method that couples deep learning with the random walker algorithm: they employ the progressive holistically-nested network (P-HNN) model to identify potential lobar boundaries, and then delineate the lobar boundaries using a random walker algorithm. Ferreira et al. [7] train an end-to-end deep learning model, known as Fully Regularized V-Net (FRV-Net), to segment the five pulmonary lobes. However, in the absence of global positional information about the lobes, these methods often generate incorrect segmentations, such as misclassification of different lobes and false responses outside the lung area.

We propose an automated segmentation of pulmonary lobes from chest CT scans using coordination-guided deep neural networks. It is a fully end-to-end 3D deep learning approach without heavy post-processing schemes. To improve segmentation accuracy, we exploit lung segmentation as pre-processing. To further reduce the misclassification of different pulmonary lobes, we adopt coordination-guided convolutional layers (CoordConvs), which encode the positional information of the different pulmonary lobes. We evaluate the performance of the proposed method on four different datasets using several metrics. The experiments show the superior performance of the proposed method compared to previous state-of-the-art methods.

2 Method

The architecture of this method is based on the volumetric convolutional neural network (V-net) [8], which is widely used in 3D biomedical image segmentation. The input images are first downsampled to a size of 256 × 256 × 128 and a 2D automated lung segmentation is then applied to extract the lung area. After that, we add CoordConv layers to the decoding path of the V-net. The proposed model produces a voxel-wise prediction for the five target lobar classes. Figure 1 shows the flow chart of this method.
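As a concrete sketch of this preprocessing step (assumed, not the authors' code: NumPy nearest-neighbour resampling stands in for whatever interpolation they used, and the axis order (D, H, W) with 128 slices is our assumption):

```python
import numpy as np

def preprocess(ct, lung_mask, out_shape=(128, 256, 256)):
    """ct, lung_mask: (D, H, W) arrays; lung_mask is binary {0, 1}."""
    vol = ct * lung_mask  # zero out voxels outside the lung area
    # Nearest-neighbour resampling to the target grid
    idx = [np.linspace(0, s - 1, o).round().astype(int)
           for s, o in zip(vol.shape, out_shape)]
    return vol[np.ix_(idx[0], idx[1], idx[2])]

resized = preprocess(np.random.rand(96, 512, 512), np.ones((96, 512, 512)))
print(resized.shape)  # (128, 256, 256)
```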

Figure 1: The flowchart of the proposed method

2.1 V-net

V-net is widely used in 3D biomedical image segmentation tasks and achieves relatively good performance. As shown in Figure 2, the left part of the network is a compression path, while the right part recovers the signal at its original size. Appropriate padding strategies are applied to maintain the shape of the feature maps. The V-net uses skip connections that concatenate feature maps between the encoding and decoding paths to recover image details. In contrast to the original design, we choose the Parametric Rectified Linear Unit (PReLU) rather than the Rectified Linear Unit (ReLU) as the non-linear activation to alleviate the vanishing gradient problem. The last layer is a convolutional layer with a kernel size of 1, followed by a soft-max activation function that generates the probability maps of the 5 lobe classes and 1 background class in a one-hot fashion.
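For illustration, the activation choices above can be sketched in NumPy (the model itself is built in PyTorch; the shapes and the PReLU slope value here are assumptions, and in the real layer the slope is learned):

```python
import numpy as np

def prelu(x, a=0.25):
    # PReLU: identity for x > 0, slope a for x <= 0 (a is learned in practice,
    # unlike ReLU which fixes a = 0 and can kill gradients for negative inputs)
    return np.where(x > 0, x, a * x)

def voxelwise_softmax(logits):
    """logits: (6, D, H, W) from the final kernel-size-1 convolution ->
    per-voxel probabilities over 5 lobe classes + 1 background class."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))  # stable softmax
    return e / e.sum(axis=0, keepdims=True)
```

Taking the argmax over the class axis of the soft-max output yields the voxel-wise lobe labels.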

Figure 2: V-net architecture

2.2 CoordConv Layers

The right lung normally comprises three lobes (upper, middle, lower) and the left lung normally has two (upper and lower). Although there are individual differences across chest CT scans, the lobes are distributed with distinct positional features, and misclassification often occurs if the positional information of the different lobes is not taken into account. However, information transfer between classic convolutional layers is usually limited to the receptive field of the layers, which restricts their capability to represent global positional information. In this work, we adopt a novel structure, the coordination-guided convolutional layer (CoordConv layer), to address this issue [9]. CoordConv is a simple extension of the classic convolutional layer that integrates positional information by adding extra coordinate channels. As shown in Figure 3, 3 extra channels are added to represent the x, y, and z coordinates of the input 3D images. We add CoordConv layers in the last transition of the decoding path, and the values of the coordinate channels are normalized to the range from -1 to 1.
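A minimal NumPy sketch of the 3D CoordConv idea, appending three coordinate channels normalized to [-1, 1] to a feature map (illustrative only; in the actual layer a standard convolution is then applied to the concatenated tensor):

```python
import numpy as np

def add_coord_channels(features):
    """features: (C, D, H, W) -> (C + 3, D, H, W).
    Appends z, y, x coordinate maps, each normalized to [-1, 1]."""
    _, d, h, w = features.shape
    zz, yy, xx = np.meshgrid(np.linspace(-1.0, 1.0, d),
                             np.linspace(-1.0, 1.0, h),
                             np.linspace(-1.0, 1.0, w),
                             indexing="ij")
    return np.concatenate([features, zz[None], yy[None], xx[None]], axis=0)
```

Because every voxel now carries its own position, subsequent convolutions can condition their responses on location, e.g. discouraging a "left upper lobe" response on the right side of the volume.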

Figure 3: The schematic diagram of 3D CoordConv layer

2.3 Model Regularization and Loss Design

In order to avoid overfitting, we implement several regularization techniques in the proposed model. First, dropout [10] layers with a probability of 0.5 are applied during training, which reduces the sensitivity of specific neurons. Furthermore, we normalize the data during training, e.g., with batch normalization (BN) [11]. However, due to memory limitations, the batch size is usually small when training a 3D convolutional neural network, resulting in inaccurate statistics estimation and rapid growth of the BN error. Thus, we use group normalization [12], which divides the channels into groups and computes the mean and variance for normalization within each group.
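The group normalization computation can be sketched as follows (a NumPy illustration of the statistics only, without the learnable per-channel scale and shift of the real layer; the group count is an assumption):

```python
import numpy as np

def group_norm(x, num_groups=4, eps=1e-5):
    """x: (N, C, D, H, W). Mean and variance are computed per sample within
    each group of C // num_groups channels, so the statistics do not depend
    on the batch size (unlike batch normalization)."""
    n, c, d, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, d, h, w)
    mean = g.mean(axis=(2, 3, 4, 5), keepdims=True)
    var = g.var(axis=(2, 3, 4, 5), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(x.shape)
```

In practice this corresponds to PyTorch's `torch.nn.GroupNorm`, which works identically for a batch size of 1.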

We use the negative value of the Dice coefficient [13] (called Dice loss) as the criterion to optimize the model parameters. The Dice loss is defined as

L_dice = -(1/5) Σ_c (2 Σ_i p_i^c g_i^c + ε) / (Σ_i p_i^c + Σ_i g_i^c + ε),   (1)

where p_i^c and g_i^c represent the predicted value and the ground-truth value of class c at position i, respectively; i iterates over the entire image; c denotes the class number (c = 1, …, 5 for the different pulmonary lobes; we do not consider the background); and ε is 1e-5 to avoid zero division.
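The Dice loss can be sketched directly from this definition (a NumPy illustration; averaging the per-class Dice over the 5 lobe classes is our reading of the formula):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-5):
    """pred: (5, D, H, W) predicted probabilities for the 5 lobe classes;
    target: matching one-hot ground truth (background excluded).
    Returns the negative mean per-class Dice coefficient."""
    axes = (1, 2, 3)
    intersection = (pred * target).sum(axis=axes)
    denom = pred.sum(axis=axes) + target.sum(axis=axes)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return -dice.mean()
```

A perfect prediction gives a loss of -1, and the loss approaches 0 as the overlap vanishes, so minimizing it maximizes the Dice overlap.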

To further enhance the performance of the proposed model, we consider the detection of lobar boundaries as an auxiliary task. We use the same lobe segmentation network and add an additional boundary loss to improve the model's ability to distinguish the lobes. The boundary loss is defined as

(2)

Finally, the total loss of our proposed model is

L_total = L_dice + λ · L_boundary,   (3)

where λ weights the auxiliary boundary term. In the experiment, λ is set to .

Techniques     right-up        right-mid       right-low       left-up         left-low        average
baseline       0.801 (0.147)   0.656 (0.208)   0.829 (0.107)   0.859 (0.092)   0.830 (0.128)   0.795 (0.171)
+ coordmap     0.897 (0.092)   0.846 (0.117)   0.941 (0.034)   0.947 (0.047)   0.947 (0.028)   0.916 (0.045)
+ group norm   0.921 (0.096)   0.868 (0.137)   0.952 (0.028)   0.952 (0.049)   0.954 (0.028)   0.929 (0.050)
Proposed       0.934 (0.090)   0.919 (0.104)   0.953 (0.030)   0.958 (0.047)   0.973 (0.030)   0.947 (0.044)
Table 1: Comparison of different model structures in the proposed method. Values are mean (std) Dice.

3 Experiments and Results

3.1 Data

We include 343 chest CT scans in this work: 71 cases from the LUNA16 dataset (https://luna16.grand-challenge.org/), 195 cases from the LKDS dataset (https://tianchi.aliyun.com/getStart), 62 cases from Meinian One-health Health-care Holdings, and 15 cases from the Lobe and Lung Analysis 2011 (LOLA11) competition (https://lola11.grand-challenge.org/). All CT scans are manually annotated by an experienced radiologist using the 3D Slicer platform (SlicerCIP). The slice spacing of these CT volumes varies from 0.5 to 1.5 mm. Both healthy and pathological lungs are included.

3.2 Experiments

The model is implemented using the PyTorch 0.4.0 package and runs on an NVIDIA Tesla P100 GPU with 16 GB of memory. We evaluate our results using five-fold cross validation, with the Dice coefficient index as the performance metric. In order to assess the effectiveness of the proposed techniques (CoordConvs, group normalization, etc.), several experiments are performed using different model settings with or without these techniques. All experiments use the same set of hyper-parameters. We also compare our method with several previous state-of-the-art methods.
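The five-fold cross validation can be set up as in this sketch (illustrative only; the authors' actual case split and random seed are not published):

```python
import numpy as np

def five_fold_indices(n_cases, seed=0):
    """Shuffle case indices and split them into 5 disjoint folds; each fold
    serves once as the validation set while the other 4 are used for training."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_cases), 5)

folds = five_fold_indices(343)  # 343 cases as reported in Section 3.1
```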

Methods         right-up        right-mid       right-low       left-up         left-low        average
Watershed [1]   0.920 (0.090)   0.770 (0.300)   0.910 (0.180)   0.920 (0.160)   0.890 (0.230)   0.880 ( - )
PPLS + RM [6]   0.929 (0.057)   0.824 (0.142)   0.887 (0.239)   0.925 (0.244)   0.945 (0.175)   0.888 (0.116)
Alpha-Exp [2]   0.873 (0.169)   0.714 (0.322)   0.928 (0.104)   0.929 (0.118)   0.884 (0.231)   0.866 ( - )
FR-vnet [7]     0.931 (0.070)   0.869 (0.110)   0.941 (0.077)   0.948 (0.065)   0.941 (0.070)   0.926 (0.055)
Proposed        0.934 (0.090)   0.919 (0.104)   0.953 (0.030)   0.958 (0.047)   0.973 (0.030)   0.947 (0.044)
Table 2: Performance comparison of our method with previous methods. Values are mean (std) Dice.
Figure 4: Example of the segmentation results using different model settings

3.3 Results

Table 1 shows the quantitative performance of the proposed method in terms of mean ± std of the Dice coefficient index; both per-lobe Dice and overall mean Dice are shown. We also compare our proposed method with state-of-the-art methods (see Table 3). Figure 4 shows an example of our segmentation results in 2D and 3D. Previous methods suffer from two major issues in correctly identifying the pulmonary lobes: 1) false segmentation outside of the lung area (see Figure 4(e)), and 2) misclassification of different lobes (see Figure 4(c)). To address the first issue, a 2D automated lung segmentation (U-net [14]) is performed as a preprocessing step to extract lung ROIs. This step not only effectively solves the problem of false segmentation outside of the lung, but also improves the lobe segmentation due to the narrower search space of the model. For the second issue, we add CoordConv layers to the standard convolutional networks. The positional information carried by the CoordConv layers acts as a "soft constraint" that keeps the segmented lobes around their correct locations. Finally, the proposed model achieves an average per-lobe Dice of 0.947, compared to 0.926 for the previously best state-of-the-art approach. The overall processing time of our proposed model is 12 s per case on average using one NVIDIA Tesla P100 GPU.

4 Conclusion

We propose an automated segmentation of pulmonary lobes from chest CT scans using coordination-guided deep neural networks. Several techniques are designed to improve the overall segmentation accuracy. First, an accurate lung segmentation is introduced to remove the false positive responses outside of the lung area. Second, CoordConv is added for a more effective transfer of positional information. Finally, the model regularization and loss function are specially designed. Experimental results demonstrate the superior performance of our proposed method. A limitation, however, is that our dataset contains only thin-slice CT scans.

5 Acknowledgment

This work is supported in part by the National Natural Science Foundation of China (NSFC) under Grants 81801778, 11831002 and the National Key Research and Development Program of China under Grant 2018YFC0910700.

Methods         right-up        right-mid       right-low       left-up         left-low        average
FR-vnet [7]     0.931 (0.070)   0.869 (0.110)   0.941 (0.077)   0.948 (0.065)   0.941 (0.070)   0.926 (0.055)
Proposed        0.934 (0.090)   0.919 (0.104)   0.953 (0.030)   0.958 (0.047)   0.973 (0.030)   0.947 (0.044)
Table 3: Performance comparison of our method with previous methods. Values are mean (std) Dice.

References

  • [1] B. Lassen, E. M. van Rikxoort, M. Schmidt, S. Kerkstra, B. van Ginneken, and J. M. Kuhnigk, “Automatic segmentation of the pulmonary lobes from chest ct scans based on fissures, vessels, and bronchi,” IEEE Transactions on Medical Imaging, vol. 32, no. 2, pp. 210–222, 2013.
  • [2] Nicola Giuliani, Christian Payer, Michael Pienn, Horst Olschewski, and Martin Urschler, “Pulmonary lobe segmentation in ct images using alpha-expansion,” Proc. of VISIGRAPP, pp. 387–394, 2018.
  • [3] Tom Doel, Tahreema N Matin, Fergus V Gleeson, David J Gavaghan, and Vicente Grau, “Pulmonary lobe segmentation from ct images using fissureness, airways, vessels and multilevel b-splines,” in Biomedical Imaging (ISBI), 2012 9th IEEE International Symposium on. IEEE, 2012, pp. 1491–1494.
  • [4] Felix J. S. Bragman, Jamie R. Mcclelland, Joseph Jacob, John R. Hurst, and David J. Hawkes, “Pulmonary lobe segmentation with probabilistic segmentation of the fissures and a groupwise fissure prior,” IEEE Transactions on Medical Imaging, vol. 36, no. 8, pp. 1650, 2017.
  • [5] Bianca Lassen, Jan-Martin Kuhnigk, Michael Schmidt, Stefan Krass, and Heinz-Otto Peitgen, “Lung and lung lobe segmentation methods at fraunhofer mevis,” in Fourth international workshop on pulmonary image analysis, 2011, vol. 18, pp. 185–99.
  • [6] Kevin George, Adam P Harrison, Dakai Jin, Ziyue Xu, and Daniel J Mollura, “Pathological pulmonary lobe segmentation from ct images using progressive holistically nested neural networks and random walker,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 195–203. Springer, 2017.
  • [7] Filipe T Ferreira, Patrick Sousa, Adrian Galdran, Marta R Sousa, and Aurélio Campilho, “End-to-end supervised lung lobe segmentation.”
  • [8] Fausto Milletari, Nassir Navab, and Seyed Ahmad Ahmadi, “V-net: Fully convolutional neural networks for volumetric medical image segmentation,” in Fourth International Conference on 3d Vision, 2016, pp. 565–571.
  • [9] Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski, “An intriguing failing of convolutional neural networks and the coordconv solution,” 2018.
  • [10] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
  • [11] Sergey Ioffe and Christian Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” in International Conference on International Conference on Machine Learning, 2015, pp. 448–456.
  • [12] Yuxin Wu and Kaiming He, “Group normalization,” 2018.
  • [13] Carole H. Sudre, Wenqi Li, Tom Vercauteren, Sebastien Ourselin, and M. Jorge Cardoso, “Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations,” 2017.
  • [14] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-net: Convolutional networks for biomedical image segmentation,” vol. 9351, pp. 234–241, 2015.