Weak Supervision in Convolutional Neural Network for Semantic Segmentation of Diffuse Lung Diseases Using Partially Annotated Dataset

02/27/2020 ∙ by Yuki Suzuki, et al. ∙ 0

Computer-aided diagnosis systems for diffuse lung diseases (DLDs) are necessary for the objective assessment of these diseases. In this paper, we develop a semantic segmentation model for five kinds of DLD patterns: consolidation, ground glass opacity, honeycombing, emphysema, and normal. Among machine learning algorithms, the convolutional neural network (CNN) is one of the most promising techniques for semantic segmentation. While creating a fully annotated dataset for semantic segmentation is laborious and time-consuming, creating a partially annotated dataset, in which only one chosen class is annotated for each image, is easier, since annotators only need to focus on one class at a time during the annotation task. In this paper, we propose a new weak supervision technique that effectively utilizes partially annotated datasets. Experiments using a partially annotated dataset composed of 372 CT images demonstrate that the proposed technique significantly improves segmentation accuracy.


1 Introduction

Diffuse lung diseases (DLDs) are pulmonary abnormalities observable in medical images such as chest computed tomography (CT). A computer-aided diagnosis system for DLDs is necessary to eliminate interobserver variability[1] and achieve objective assessment of DLDs, which can lead to better diagnosis and patient treatment. Our goal is therefore to develop an automated DLD segmentation method for the objective assessment of DLDs. The DLD patterns considered in this paper are consolidation (CON), ground glass opacity (GGO), honeycombing (HCM), emphysema (EMP), and normal (NOR).

A number of studies on the automated assessment of DLDs have been conducted in various contexts, including image-level classification and semantic segmentation. Several kinds of learning-based methods have been proposed, including fully supervised[2, 3, 4], semi-supervised[5], weakly supervised[6], and unsupervised[7] ones. Machine learning techniques are widely used for semantic segmentation since they are capable of learning complicated texture patterns and often outperform hand-crafted algorithms. Among these techniques, the convolutional neural network (CNN) is one of the most successful in computer vision tasks. One of the biggest drawbacks of machine learning, including CNNs, is that it requires a training dataset, which typically involves costly annotation unless a publicly available annotated dataset exists. The ideal annotation for semantic segmentation is pixel-wise full annotation, in which every pixel in the image is labeled as one of the possible classes.

In this paper, we define partial annotation as an annotation format in which only one class is chosen per image and only the pixels belonging to that class are annotated. For example, in Figure 1(a), although ground glass opacity is present in the image, only consolidation was chosen for annotation, and only pixels of consolidation are annotated. A partially annotated dataset is less informative for training; however, it is much easier to create than a fully annotated dataset, since annotators only need to focus on one class at a time during the annotation task.

Partially annotated datasets have been utilized previously[8, 9]. In this paper, we propose a new weak supervision technique that fully utilizes such datasets. Throughout this paper, each DLD pattern is represented or painted in the following color: CON: cyan, GGO: yellow, HCM: red, EMP: green, NOR: brown.

2 Material and Method

2.1 Dataset

The dataset used in this study consists of 372 high-resolution computed tomography (HRCT) scans taken at Yamaguchi University Hospital, Japan. The mean and the standard deviation of the pixel size, as well as the slice thickness, were measured; since the deviation in the pixel sizes was negligibly small, no pixel size equalization was performed.

Statistics of our dataset are shown in Table 1, and typical images and their annotations for each DLD pattern are shown in Figure 1. In our partially annotated dataset, all the pixels in a slice were manually classified into two classes: the dominating DLD pattern and other tissues. In other words, every pixel in our dataset was assigned a label from one of two label sets: the strongly annotated DLD-pattern labels or the corresponding weakly annotated "other tissue" labels. For example, in Figure 1(a), the colored pixels were labeled as consolidation and all the other pixels received the corresponding weak label. In this paper, we call pixels with a weak label weakly annotated pixels and pixels with a DLD-pattern label strongly annotated pixels. Our pixel-wise annotations were created in the following steps. First, up to 3 slices were chosen for annotation from each HRCT scan, and for each slice one representative DLD pattern was chosen by a radiologist. Second, three radiologists performed pixel-wise binary annotation (i.e., chosen DLD pattern vs. other tissues) for each slice. Finally, the radiologists' annotations were merged by taking the majority class for each pixel (i.e., pixels labeled as a DLD pattern by at least two of the three radiologists became pixels of that DLD pattern). In addition to the DLD annotations, the lung fields were manually segmented under the supervision of radiologists, and training and testing were conducted only within the lung fields.
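The majority-vote merging of the three radiologists' binary annotations can be sketched as follows; the array shapes and the helper name are illustrative, not taken from the paper's code:

```python
import numpy as np

def merge_annotations(masks, min_votes=2):
    """Merge binary annotations from several readers by per-pixel majority vote.

    masks: list of binary arrays (1 = chosen DLD pattern, 0 = other tissue),
    one per radiologist. A pixel keeps the DLD label only when at least
    `min_votes` readers marked it (the majority of three readers is two).
    """
    votes = np.sum(np.stack(masks, axis=0), axis=0)  # per-pixel vote count
    return (votes >= min_votes).astype(np.uint8)

# Three hypothetical 2x2 annotations of the same slice
r1 = np.array([[1, 0], [1, 0]])
r2 = np.array([[1, 1], [0, 0]])
r3 = np.array([[1, 0], [1, 1]])
merged = merge_annotations([r1, r2, r3])
# (0,0): 3 votes -> 1; (0,1): 1 vote -> 0; (1,0): 2 votes -> 1; (1,1): 1 vote -> 0
```

Requiring two of three votes makes the merged mask insensitive to any single reader's outliers.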

                 CON   GGO   HCM   EMP   NOR   total
# of pixels ()     6    16    13    41    25     103
# of slices      150   114   129   163    55     611
Table 1: Statistics of the dataset
(a) CON
(b) GGO
(c) HCM
(d) EMP
(e) NOR
Figure 1: Typical slices for each DLD class. Slices of HRCT are shown in the lung window setting (window center = -600, window width = 1500) with annotated labels superimposed in transparent colors. Note that even if more than one DLD pattern was present, only one DLD pattern was chosen and annotated per slice to facilitate the annotation process.

2.2 Method

U-Net[10] and its variants are widely used for semantic segmentation of medical images because of their simplicity and applicability. In this study, we modified U-Net to satisfy two requirements: (1) take 3D input to leverage the 3D spatial information of HRCT, and (2) generate a 2D output, since annotations are given per slice, not per volume. Our modified U-Net's input tensor shape is (6, 512, 512, 1) and its output tensor shape is (1, 512, 512, 5), where the axes represent z, y, x, and channel, respectively. Inside the network, the size along the z axis is reduced from 6 to 1 by adjusting the padding sizes of the convolutional layers. The value 6 in the input tensor shape is the number of slices the model takes as input and was determined empirically.
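The z-axis reduction from 6 slices to 1 follows from standard convolution output-size arithmetic. A minimal sketch, with a hypothetical per-layer kernel/padding schedule (the paper does not list the actual per-layer choices):

```python
def conv_out_size(n, kernel, pad=0, stride=1):
    """Output length of a convolution along one axis."""
    return (n + 2 * pad - kernel) // stride + 1

# One possible schedule that shrinks the z axis 6 -> 4 -> 2 -> 1
# while zero padding (pad = kernel // 2) keeps the y and x axes at 512.
z = 6
for kernel, pad in [(3, 0), (3, 0), (2, 0)]:
    z = conv_out_size(z, kernel, pad)
```

Any schedule whose valid-convolution sizes telescope from 6 to 1 would serve; this is only one way to satisfy the shape constraint stated in the text.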

The proposed loss function used in the training is shown in Eq. (1), where y, p, and CE denote the ground truth label, the softmax output of the CNN, and cross entropy, respectively. The one-hot encoding function works in the conventional way for strongly annotated labels, while weakly annotated labels are encoded identically to the corresponding strongly annotated labels (e.g., the weak consolidation label is encoded the same as the strong one). The weight α for the weakly supervised pixels adjusts the balance between the supervised and weakly supervised terms. The loss function is designed to penalize weakly annotated pixels being predicted as the corresponding label.

(1)
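The intent of the loss can be sketched per pixel as below. Note this is an assumption-laden illustration, not the paper's Eq. (1): the weak-pixel term here uses a −log(1 − p_label) penalty to discourage predicting the excluded class, which is only one plausible realization of "penalize weakly annotated pixels being predicted as the corresponding label"; the symbol names and the α value are illustrative.

```python
import numpy as np

def weak_supervision_loss(p, label, is_weak, alpha=0.1, eps=1e-7):
    """Per-pixel loss sketch for one pixel.

    p:       softmax probabilities over the 5 DLD classes.
    label:   index of the class annotated for this pixel's slice.
    is_weak: True if the pixel is weakly annotated, i.e. known NOT to
             belong to `label`.
    Strongly annotated pixels get the usual cross entropy; weakly annotated
    pixels get an ASSUMED -alpha * log(1 - p[label]) penalty (the paper's
    exact Eq. (1) may differ in form).
    """
    if is_weak:
        return -alpha * np.log(1.0 - p[label] + eps)
    return -np.log(p[label] + eps)

p = np.array([0.7, 0.1, 0.1, 0.05, 0.05])
strong = weak_supervision_loss(p, 0, is_weak=False)  # rewards high p[0]
weak = weak_supervision_loss(p, 0, is_weak=True)     # penalizes high p[0]
```

Under this form, α trades off how strongly the network is pushed away from the excluded class on weakly annotated pixels, matching the balancing role described in the text.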

3 Results and Discussion

Five-fold stratified cross validation was performed for training and testing. Stratification was used so that each DLD class was split evenly across the cross-validation subsets. In addition, case information was taken into account during the splitting process to avoid data leakage. During training, 20% of the slices in the training subset were held out as a validation subset and used for early termination of the training. The Adam optimizer with default parameters was used to train the network. The proposed method is implemented in Python using the Keras library, and the source code is publicly available at https://github.com/yk-szk/SPIE2020. In this experiment, we compared the following four training methods: "supervised only", the baseline that uses only strongly annotated pixels (equivalent to the proposed method with the weak-supervision weight set to zero); the proposed method with two different values of the weight α; and "semi-supervised", the method of Anthimopoulos et al.[5], which utilizes weakly annotated pixels for semi-supervision.
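The early-termination criterion on the validation subset can be sketched as a patience-based check; the patience value is illustrative, since the paper only states that a 20% validation split was used for early termination:

```python
def should_stop(val_losses, patience=5):
    """Stop training when validation loss has not improved for
    `patience` consecutive epochs.

    val_losses: validation loss recorded after each epoch, oldest first.
    Returns True when none of the last `patience` epochs beat the best
    loss observed before them.
    """
    if len(val_losses) <= patience:
        return False  # not enough history yet
    best_before = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_before
```

In practice the model weights from the best-validation epoch, not the final epoch, would be kept.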

Recall, precision, and the Dice coefficient (also known as the F-measure) were used for evaluation. For the evaluation, the continuous softmax outputs were converted into discrete class labels by selecting the class with the maximum probability. Table 2 shows the evaluated metrics for each method. Paired t-tests confirmed statistically significant differences in Dice coefficients between the proposed method with the optimal α and the other methods. As shown in Table 2, utilizing weakly annotated pixels increased precision, and one of the tested α values gave the best balance between recall and precision in this experiment. Dice coefficients for the proposed method with the optimal α are shown in Figure 2. Even though the proposed method improved segmentation accuracy, accuracy varies between slices. Figure 3 shows the confusion matrix of the pixel-wise classification result. In Figure 3, pixels misclassified into classes not annotated for the slice are represented as "Others". As shown in Figure 3, DLD classes with similar texture patterns, such as HCM and EMP, were misclassified into each other. Figure 4 shows the average result for each DLD class and each tested method.
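The evaluation pipeline described above (argmax discretization, then per-class recall, precision, and Dice) can be sketched as follows; the function name and toy arrays are illustrative:

```python
import numpy as np

def evaluate_class(pred_probs, truth, cls):
    """Recall, precision, and Dice coefficient for one class.

    pred_probs: (H, W, C) softmax output of the network.
    truth:      (H, W) integer ground-truth labels.
    Continuous outputs are discretized by taking the class with the
    maximum probability at each pixel.
    """
    pred = np.argmax(pred_probs, axis=-1)
    tp = np.sum((pred == cls) & (truth == cls))
    fp = np.sum((pred == cls) & (truth != cls))
    fn = np.sum((pred != cls) & (truth == cls))
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    dice = 2 * tp / (2 * tp + fp + fn)  # F-measure
    return recall, precision, dice

# Toy 2x2 slice with 2 classes: pred = [[0,1],[1,1]], truth = [[0,1],[0,1]]
probs = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.4, 0.6], [0.3, 0.7]]])
truth = np.array([[0, 1], [0, 1]])
r, pr, d = evaluate_class(probs, truth, 1)  # dice = 0.8
```

Dice is the harmonic mean of recall and precision, which is why a single α can trade the two off as reported in Table 2.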

Figure 2: Swarm plot on top of a letter-value plot of the Dice coefficients for the proposed method with the optimal α.
Figure 3: Confusion matrix for the proposed method with the optimal α. Values are normalized along the Y axis; thus the diagonal elements indicate the precisions.
Pattern   Supervised only   Proposed ()   Proposed ()
CON       0.839             0.868         0.824
GGO       0.693             0.676         0.876
HCM       0.581             0.770         0.435
EMP       0.793             0.847         0.815
NOR       0.978             0.968         0.974
Figure 4: Average results and Dice coefficients for each DLD pattern. Automated segmentation results are superimposed in color next to the ground truth. For each DLD pattern, the slice that gave the median Dice coefficient for the proposed method with the optimal α was chosen to represent the average result. Note that although the CNN performed multi-class segmentation, only one DLD pattern per slice was taken into account in the evaluation.

4 Conclusion

We proposed a new weakly supervised training method that effectively utilizes the weakly annotated pixels of a partially annotated dataset. Experiments demonstrated that the proposed method outperforms conventional methods. Further work is required to differentiate DLD patterns with similar textures, such as HCM and EMP, to further improve segmentation accuracy.

Acknowledgements

This work was supported by JSPS KAKENHI Grant Number 17H02110.

References

  • [1] Watadani, T., Sakai, F., Johkoh, T., Noma, S., Akira, M., Fujimoto, K., Bankier, A. A., Lee, K. S., Müller, N. L., Song, J.-W., Park, J.-S., Lynch, D. A., Hansell, D. M., Remy-Jardin, M., Franquet, T., and Sugiyama, Y., “Interobserver Variability in the CT Assessment of Honeycombing in the Lungs,” Radiology 266, 936–944 (mar 2013).
  • [2] Hashimoto, N., Suzuki, K., Liu, J., Hirano, Y., MacMahon, H., and Kido, S., “Deep neural network convolution (NNC) for three-class classification of diffuse lung disease opacities in high-resolution CT (HRCT): Consolidation, ground-glass opacity (GGO), and normal opacity,” in [Medical Imaging 2018: Computer-Aided Diagnosis ], Mori, K. and Petrick, N., eds., 10575, 113, SPIE (feb 2018).
  • [3] Gao, M., Bagci, U., Lu, L., Wu, A., Buty, M., Shin, H.-C., Roth, H., Papadakis, G. Z., Depeursinge, A., Summers, R. M., and Others, “Holistic classification of CT attenuation patterns for interstitial lung diseases via deep convolutional neural networks,” Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 6(1), 1–6 (2018).
  • [4] Negahdar, M., Coy, A., and Beymer, D., “An End-to-End Deep Learning Pipeline for Emphysema Quantification Using Multi-label Learning,” in [2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) ], 929–932, Institute of Electrical and Electronics Engineers (IEEE) (oct 2019).
  • [5] Anthimopoulos, M., Christodoulidis, S., Ebner, L., Geiser, T., Christe, A., and Mougiakakou, S., “Semantic Segmentation of Pathological Lung Tissue With Dilated Fully Convolutional Networks,” IEEE Journal of Biomedical and Health Informatics 23, 714–722 (mar 2019).
  • [6] Wang, C., Moriya, T., Hayashi, Y., Roth, H., Lu, L., Oda, M., Ohkubo, H., and Mori, K., “Weakly-supervised deep learning of interstitial lung disease types on CT images,” in [Medical Imaging 2019: Computer-Aided Diagnosis ], Hahn, H. K. and Mori, K., eds., 10950, 53, SPIE (mar 2019).
  • [7] Mabu, S., Obayashi, M., Kuremoto, T., Hashimoto, N., Hirano, Y., and Kido, S., “Unsupervised class labeling of diffuse lung diseases using frequent attribute patterns,” International Journal of Computer Assisted Radiology and Surgery 12, 519–528 (mar 2017).
  • [8] Dmitriev, K. and Kaufman, A. E., “Learning Multi-Class Segmentations From Single-Class Datasets,” in [The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) ], (2019).
  • [9] Kong, F., Chen, C., Huang, B., Collins, L. M., Bradbury, K., and Malof, J. M., “Training a single multi-class convolutional segmentation network using multiple datasets with heterogeneous labels: preliminary results,” in [IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium ], 3903–3906, IEEE (jul 2019).
  • [10] Ronneberger, O., Fischer, P., and Brox, T., “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in [Medical Image Computing and Computer-Assisted Intervention – MICCAI ], 234–241, Springer International Publishing (2015).