An Inductive Transfer Learning Approach using Cycle-consistent Adversarial Domain Adaptation with Application to Brain Tumor Segmentation

05/11/2020
by   Yuta Tokuoka, et al.

With recent advances in supervised machine learning for medical image analysis, annotated medical image datasets from various domains are being shared extensively. Given that annotation labelling requires medical expertise, such labels should be applied to as many learning tasks as possible. However, the multi-modal nature of each annotated image makes it difficult to share annotation labels among diverse tasks. In this work, we provide an inductive transfer learning (ITL) approach that adapts the annotation labels of a source domain dataset to tasks on a target domain dataset using Cycle-GAN based unsupervised domain adaptation (UDA). To evaluate the applicability of the ITL approach, we adapted the brain tissue annotation labels of a source domain Magnetic Resonance Imaging (MRI) dataset to the task of brain tumor segmentation on a target domain MRI dataset. The results confirm that brain tumor segmentation accuracy improved significantly. The proposed ITL approach can make a significant contribution to the field of medical image analysis, as it provides a fundamental tool for improving and promoting various tasks that use medical images.


1 Introduction

To realize successful medical image analysis using supervised machine learning, large amounts of training data are indispensable [1]. The number of publicly accessible medical image databases has been steadily increasing [2, 3, 4, 5]. Accurate annotations of medical images are essential for developing various healthcare applications. Because annotation requires a high level of medical expertise, it is efficient to reuse annotated datasets for other, similar tasks. Wang et al. suggested that there exists a high correlation between lesions occurring in the chest (e.g., Infiltration is often associated with Atelectasis and Effusion) [6]. From this perspective, medical knowledge annotated on a dataset from one domain can be useful for training tasks on datasets from different domains. However, given the multi-modal representation of each annotated image, it is challenging to adapt annotation labels to tasks on datasets from other domains. Supervised transfer learning (STL) is a commonly used method for domain adaptation. The standard STL approach trains a model on a source domain (SD) dataset and subsequently fine-tunes it on a target domain (TD) dataset. Several studies in medical image analysis have utilized STL [7]. Although fine-tuning has been successfully utilized to improve convergence and training, this approach cannot utilize unlabeled data and does not model the domain shift between SD and TD explicitly.

In recent years, various unsupervised domain adaptation (UDA) methods, such as generative model-based and autoencoder-based models, have been proposed [8, 9, 10]. In addition, several studies in medical image analysis have utilized Cycle-GAN [11], a type of generative adversarial network (GAN) [10, 9]. During the training of Cycle-GAN, the dataset does not need to be paired across domains.

In most cases, public medical image datasets are unpaired. Unpaired data can be regarded as a dataset in which, for example, Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images are acquired from different patients. Chen et al. adapted a segmentation model trained on SD chest X-ray images with annotated lung-region labels to TD chest X-ray images without annotation labels by using Cycle-GAN based UDA [9]. They showed that it is possible to predict TD annotation labels with high accuracy using the trained SD model. This method trains the model to discriminate between SD and TD, thereby modeling the domain shift explicitly. Therefore, by using this method, we attempt to explicitly induce SD annotation labels for TD tasks.

In this study, we propose an inductive transfer learning (ITL) approach in which a trained SD model extracts annotation labels for the TD dataset by using Cycle-GAN based UDA; these labels are then induced for the training task on the TD dataset (Fig. 1). To evaluate the proposed approach, we used an MRI dataset annotated with brain tumor regions as the TD and an MRI dataset annotated with brain tissue regions as the SD. We demonstrate that brain tumor segmentation accuracy improves when brain tissue annotation labels are utilized in training the brain tumor segmentor.

[Figure: fig/overview_3D.pdf]

Figure 1: Overview of the proposed ITL approach for improving brain tumor segmentation. Step 1: the segmentor learns to segment brain tissue using the SD dataset. Step 2: Cycle-GAN based UDA learns to translate TD to SD to adapt the brain tissue segmentation. Step 3: the TD segmentor learns to segment the brain tumor using the induced brain tissue annotation label.

2 Methods

The proposed ITL approach induces the brain tissue annotation label to learn brain tumor segmentation (Fig. 1). The approach is sub-divided into three steps: step 1 is the training phase of the SD segmentor; step 2 is the training phase of UDA; step 3 is the training phase of the TD segmentor, in which the annotation label is induced by the SD segmentor and UDA.
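To make the three-step structure concrete, the following is a minimal, hypothetical sketch of the pipeline in Python. Every helper name here is an assumption standing in for training code the paper does not show (the authors' actual implementation used Chainer):

```python
# Minimal sketch of the three-step ITL pipeline. All helper callables
# (train_segmentor, train_cyclegan_uda) are hypothetical placeholders,
# not the authors' actual API.

def run_itl_pipeline(train_segmentor, train_cyclegan_uda,
                     sd_images, sd_tissue_labels,
                     td_images, td_tumor_labels):
    # Step 1: train the SD segmentor f on brain tissue labels.
    f = train_segmentor(sd_images, sd_tissue_labels)

    # Step 2: train Cycle-GAN based UDA; g_ts translates TD images into
    # the SD appearance, where f produces reliable tissue maps.
    g_ts = train_cyclegan_uda(td_images, sd_images, segmentor=f)

    # Step 3: induce tissue probability maps for each TD image and train
    # the TD tumor segmentor on the channel-augmented inputs.
    augmented = [(x, f(g_ts(x))) for x in td_images]
    return train_segmentor(augmented, td_tumor_labels)
```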

2.1 Implementation of Brain Tissue Segmentor

We implemented a segmentor that performs brain tissue segmentation based on 3D U-Net [12] (Fig. 2(a)). We used the softmax function as the output function and the Dice loss [13] as the objective function. A commonly occurring problem in segmentation tasks is label imbalance, i.e., the disparity between the number of pixels or voxels in the background and in the objects. A previous study reported that the influence of label imbalance in the dataset can be suppressed by using the Dice loss as the objective function [13].
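As an illustration, a minimal multi-class soft Dice loss might look as follows; this is a PyTorch sketch under our own assumptions about tensor layout, not the authors' Chainer implementation:

```python
# Multi-class soft Dice loss for 3D volumes (illustrative sketch).
import torch

def dice_loss(probs: torch.Tensor, target: torch.Tensor,
              eps: float = 1e-6) -> torch.Tensor:
    """probs:  (N, C, D, H, W) softmax outputs.
    target: (N, C, D, H, W) one-hot ground truth."""
    dims = (0, 2, 3, 4)                       # sum over batch and voxels
    intersection = (probs * target).sum(dims)
    cardinality = probs.sum(dims) + target.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    # Averaging per-class Dice keeps rare foreground classes from being
    # swamped by the large background class.
    return 1.0 - dice_per_class.mean()
```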

2.2 Implementation of Cycle-GAN based UDA

In this work, we implement Cycle-GAN based UDA to adapt the TD to the brain tissue segmentation. The proposed UDA architecture comprises two generators and three discriminators (Fig. 2(a)), in addition to the SD segmentor trained in step 1 (Fig. 2(b)). We use four types of losses for UDA training.

In general, the purpose of UDA is to obtain a function $f$ that maps the SD ($X_S$) to the semantic label space ($Y$) and then adapt data in the TD ($X_T$) to $f$. In this case, $f$ corresponds to the SD segmentor. To perform unsupervised learning in UDA, a generator $G_{T \to S}$, which maps data from the space $X_T$ to the space $X_S$, and a discriminator $D_S$, which monitors the training result, are required. The generator produces a fake source image $\tilde{x}_s = G_{T \to S}(x_t)$. Given that the discriminator must distinguish the generated fake image $\tilde{x}_s$ from real images $x_s \in X_S$, the training of the generator and the discriminator is adversarial in nature. Therefore, the objective function for this relationship can be defined as follows:

$\mathcal{L}_{adv}(G_{T \to S}, D_S) = \mathbb{E}_{x_s \sim X_S}[\log D_S(x_s)] + \mathbb{E}_{x_t \sim X_T}[\log(1 - D_S(G_{T \to S}(x_t)))]$   (1)

where the discriminator $D_S$ learns to maximize the objective function so as to accurately distinguish between $x_s$ and $G_{T \to S}(x_t)$, while the generator $G_{T \to S}$ learns to minimize it.

During image translation, the generated fake image $G_{T \to S}(x_t)$ should preserve the semantic structure of the real image $x_t$. We adopted the cycle-consistency loss used in Cycle-GAN [11]:

$\mathcal{L}_{cyc}(G_{T \to S}, G_{S \to T}) = \mathbb{E}_{x_t \sim X_T}[\lVert G_{S \to T}(G_{T \to S}(x_t)) - x_t \rVert_1] + \mathbb{E}_{x_s \sim X_S}[\lVert G_{T \to S}(G_{S \to T}(x_s)) - x_s \rVert_1]$   (2)

where the L1 norm is used to reduce blurring in the generated images. This loss imposes a pixel-wise penalty on the distance between the input image and the cycle-reconstructed image.

We also adopted the identity loss [14], which attempts to retain the spatial information within each domain:

$\mathcal{L}_{id}(G_{T \to S}, G_{S \to T}) = \mathbb{E}_{x_s \sim X_S}[\lVert G_{T \to S}(x_s) - x_s \rVert_1] + \mathbb{E}_{x_t \sim X_T}[\lVert G_{S \to T}(x_t) - x_t \rVert_1]$   (3)

UDA adapts the trained segmentor $f$ to $\tilde{x}_s = G_{T \to S}(x_t)$, the output of the generator. If a high-quality $\tilde{x}_s$ is generated, the accuracy of $f(\tilde{x}_s)$ also improves. As a constraint for generating a higher-quality $\tilde{x}_s$, we introduce another discriminator $D_M$ and use the semantic-aware loss [9]:

$\mathcal{L}_{sem}(G_{T \to S}, D_M) = \mathbb{E}_{x_s \sim X_S}[\log D_M(f(x_s))] + \mathbb{E}_{x_t \sim X_T}[\log(1 - D_M(f(G_{T \to S}(x_t))))]$   (4)

The full objective function combining these losses is defined as follows:

$\mathcal{L}(G_{T \to S}, G_{S \to T}, D_S, D_T, D_M) = \mathcal{L}_{adv}(G_{T \to S}, D_S) + \mathcal{L}_{adv}(G_{S \to T}, D_T) + \lambda_{cyc}\mathcal{L}_{cyc} + \lambda_{id}\mathcal{L}_{id} + \lambda_{sem}\mathcal{L}_{sem}$   (5)

where $\lambda_{cyc}$, $\lambda_{id}$, and $\lambda_{sem}$ are weighting parameters, which we set to fixed values in this study. The overall optimization of UDA solves for the generators through the following min-max problem:

$G^{*}_{T \to S}, G^{*}_{S \to T} = \arg\min_{G_{T \to S}, G_{S \to T}} \max_{D_S, D_T, D_M} \mathcal{L}(G_{T \to S}, G_{S \to T}, D_S, D_T, D_M)$   (6)
We used Adam as the optimizer with an initial learning rate of 0.002. We implemented the SD segmentor and UDA in Chainer, and performed training and inference on an NVIDIA Tesla P100 GPU.
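As an illustration, the following PyTorch sketch assembles the generator-side objective of Eqs. (1)-(5) for a single training step. The module interfaces and the $\lambda$ values shown are assumptions, not the authors' reported configuration (their implementation used Chainer):

```python
# Sketch of the generator-side UDA objective (Eqs. 1-5) in PyTorch.
# G_ts/G_st are the two generators, D_s/D_t/D_m the three discriminators,
# f the frozen SD segmentor. Lambda weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def adv_real_target(logits: torch.Tensor) -> torch.Tensor:
    # Non-saturating generator loss: the generator wants the
    # discriminator to output "real" (label 1) for its fakes.
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

def generator_objective(G_ts, G_st, D_s, D_t, D_m, f, x_s, x_t,
                        lam_cyc=10.0, lam_id=5.0, lam_sem=1.0):
    fake_s = G_ts(x_t)   # TD image rendered in SD appearance
    fake_t = G_st(x_s)   # SD image rendered in TD appearance

    # Adversarial terms (Eq. 1 and its T-side counterpart).
    adv = adv_real_target(D_s(fake_s)) + adv_real_target(D_t(fake_t))

    # Cycle-consistency (Eq. 2): translate there and back, penalize L1.
    cyc = F.l1_loss(G_st(fake_s), x_t) + F.l1_loss(G_ts(fake_t), x_s)

    # Identity (Eq. 3): a generator fed an in-domain image should
    # leave it unchanged.
    idt = F.l1_loss(G_ts(x_s), x_s) + F.l1_loss(G_st(x_t), x_t)

    # Semantic-aware term (Eq. 4): D_m should accept the segmentation
    # of the translated image as if it came from a real SD image.
    sem = adv_real_target(D_m(f(fake_s)))

    # Full objective (Eq. 5), generator side.
    return adv + lam_cyc * cyc + lam_id * idt + lam_sem * sem
```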

[Figure: fig/model.pdf]

Figure 2: Overview of the architectures in our pipeline. (a) Architectures of the segmentor, generator, and discriminator. (b) Overview of the 3D UDA process.

2.3 Training Procedure of Brain Tumor Segmentor

In this study, we adopt the top-performing algorithm from the BraTS 2017 challenge as the TD segmentor and baseline model [15]. We extended the input data by concatenating the annotation labels extracted by UDA with the input image along the channel axis. The annotation label is defined as the probability map of brain tissue segmentation produced by UDA. Given that brain tissue segmentation has four semantic labels, the concatenated input has eight channels.
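For concreteness, a minimal PyTorch sketch of this channel-wise augmentation is shown below; the volume size and tensor layout are assumptions:

```python
# Channel-wise input augmentation for step 3 (sizes are illustrative).
import torch

mri = torch.randn(1, 4, 128, 128, 128)  # T1, T2, contrast-enhanced T1, FLAIR
# Stand-in for the induced tissue probability map f(G_ts(x_t)):
tissue_probs = torch.softmax(torch.randn(1, 4, 128, 128, 128), dim=1)

x = torch.cat([mri, tissue_probs], dim=1)  # concatenate along channels
assert x.shape[1] == 8  # 4 MRI modalities + 4 tissue-class probabilities
```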

3 Experiments and Results

3.1 Datasets

We used two public brain MRI datasets (BraTS [3, 4] and ADNI; see www.adni-info.org) to validate our approach. The BraTS dataset has four input channels: T1-weighted, T2-weighted, contrast-enhanced T1, and FLAIR.

The BraTS dataset contains images of 484 patients. We converted all images to a common voxel grid by cropping and linear interpolation, and scaled the intensity values to a fixed range. The BraTS dataset is annotated voxel-wise with four label classes: background, edema, non-enhancing tumor, and enhancing tumor. We used three classes for evaluation in the brain tumor segmentation task [3]. The whole tumor (WT) class includes the areas of edema, non-enhancing tumor, and enhancing tumor. The tumor core (TC) class includes the areas of non-enhancing tumor and enhancing tumor. The enhancing tumor (ET) class is the area of enhancing tumor.

As in the case of the BraTS dataset, the ADNI dataset also has four channels. We annotated the ADNI dataset with brain tissue labels by using an established approach [16]; consequently, atlases of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) were generated from the contrast-enhanced T1 images. In the pre-processing step, we performed skull-stripping [17] on the ADNI dataset, given that the BraTS dataset does not include the skull. The ADNI dataset contains images of 147 patients. We likewise converted all images to a common voxel grid by cropping and linear interpolation and scaled the intensity values to a fixed range.
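A minimal sketch of such a resampling and normalization step in PyTorch follows; the target grid size and the min-max scaling are assumptions, since the exact values are not recoverable from the text:

```python
# Resample a raw MRI volume to a common grid and normalize intensities
# (grid size and [0, 1] scaling are illustrative assumptions).
import torch
import torch.nn.functional as F

def preprocess(volume: torch.Tensor, out_size=(128, 128, 128)) -> torch.Tensor:
    """volume: (C, D, H, W) raw MRI; returns resampled, rescaled volume."""
    v = F.interpolate(volume.unsqueeze(0), size=out_size,
                      mode="trilinear", align_corners=False).squeeze(0)
    v_min, v_max = v.amin(), v.amax()
    return (v - v_min) / (v_max - v_min + 1e-8)  # min-max intensity scaling
```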

3.2 Evaluation of Brain Tumor Segmentation using ITL

We evaluated the Dice score on the brain tumor segmentation task by randomly dividing the dataset into train : validation : test = 7 : 1 : 2 eleven times in Monte Carlo cross-validation. As a result of inducing the brain tissue annotation label for the brain tumor segmentation task, the mean Dice score obtained using the proposed method was superior to that of the baseline (Fig. 3). In addition, a paired Wilcoxon signed-rank test showed that the improvement in accuracy was statistically significant. From these results, we demonstrate that brain tissue annotation labels can be useful for brain tumor segmentation.
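A minimal sketch of such a paired significance test over the per-split scores, using SciPy; the Dice values below are placeholders, not the reported results:

```python
# Paired, non-parametric comparison of per-split Dice scores across the
# eleven Monte Carlo splits (scores here are illustrative placeholders).
from scipy.stats import wilcoxon

baseline_dice = [0.88, 0.87, 0.89, 0.88, 0.88, 0.87,
                 0.89, 0.88, 0.88, 0.89, 0.88]
proposed_dice = [0.89, 0.88, 0.89, 0.89, 0.88, 0.88,
                 0.90, 0.89, 0.89, 0.89, 0.89]

stat, p_value = wilcoxon(proposed_dice, baseline_dice)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p_value:.4f}")
```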

[Figure: fig/annotation_transfer.pdf]

Figure 3: Quantitative evaluation of annotation transfer. The boxes indicate quartiles including the median; triangles indicate the mean; dots indicate outliers. There is a significant difference between the Dice scores of the proposed method and the baseline (Wilcoxon signed-rank test). The mean Dice scores of the baseline for WT, TC, and ET are 0.883, 0.831, and 0.729, respectively. In contrast, the mean Dice scores of our proposed method for WT, TC, and ET are 0.888, 0.832, and 0.737, respectively.

[Figure: fig/uda_comparison.pdf]

Figure 4: Visualization of translation and segmentation. Three examples of source images and four examples of target images, translations, and segmentations are shown. Green regions indicate WM; purple regions indicate GM; red regions indicate CSF.

3.3 Evaluation of SD Segmentor and Cycle-GAN based UDA

We evaluated the training results of the SD segmentor and visualized the results of UDA. We randomly divided the ADNI dataset into train : validation : test = 7 : 1 : 2. For the segmentation task, training was performed using the train and validation datasets. When evaluated on the test dataset, the Dice scores were 0.7086 (WM), 0.4883 (GM), and 0.6074 (CSF). Thus, we consider the proposed model to be a suitable SD segmentor for extracting the annotation information of brain tissue.

We trained UDA using the train and validation data in the ADNI and BraTS datasets. We translated the test dataset of BraTS into the ADNI domain by using the trained generator $G_{T \to S}$. We qualitatively confirmed that the brain tissue, including the brain tumor region, is sufficiently segmented (Fig. 4). From these results, the domain adaptation of the SD segmentor by UDA can be regarded as successful.

4 Discussion and Conclusion

We showed that brain tissue annotation labels can be applied to training the brain tumor segmentation task (Fig. 3). This result also suggests that the formation of a brain tumor is closely associated with the distribution of brain tissue. We evaluated our ITL approach using only the {MRI, MRI} combination. However, the evaluation could also utilize combinations such as {CT, MRI} and {CT, CT}. Moreover, it is possible to use combinations of 2D and 3D images, such as {X-ray, CT}, as well as 2D-only combinations such as {X-ray, X-ray}. In this study, we extracted annotations from only one domain; in principle, however, our ITL approach can adapt the annotation labels of multiple SDs to the training of a TD task.

References

  • Domingos [2012] Pedro Domingos. A few useful things to know about machine learning. Communications of the ACM, 55(10):78–87, 2012.
  • Armato III et al. [2011] Samuel G Armato III, Geoffrey McLennan, Luc Bidaut, Michael F McNitt-Gray, Charles R Meyer, Anthony P Reeves, Binsheng Zhao, Denise R Aberle, Claudia I Henschke, Eric A Hoffman, et al. The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans. Medical physics, 38(2):915–931, 2011.
  • Menze et al. [2015] Bjoern H Menze, Andras Jakab, Stefan Bauer, Jayashree Kalpathy-Cramer, Keyvan Farahani, Justin Kirby, Yuliya Burren, Nicole Porz, Johannes Slotboom, Roland Wiest, et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE transactions on medical imaging, 34(10):1993–2024, 2015.
  • Bakas et al. [2017] Spyridon Bakas, Hamed Akbari, Aristeidis Sotiras, Michel Bilello, Martin Rozycki, Justin S Kirby, John B Freymann, Keyvan Farahani, and Christos Davatzikos. Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Scientific data, 4:170117, 2017.
  • Yan et al. [2018] Ke Yan, Xiaosong Wang, Le Lu, and Ronald M Summers. DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning. Journal of Medical Imaging, 5(3):036501, 2018.
  • Wang et al. [2017a] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 3462–3471. IEEE, 2017a.
  • Ghafoorian et al. [2017] Mohsen Ghafoorian, Alireza Mehrtash, Tina Kapur, Nico Karssemeijer, Elena Marchiori, Mehran Pesteie, Charles RG Guttmann, Frank-Erik de Leeuw, Clare M Tempany, Bram van Ginneken, et al. Transfer learning for domain adaptation in MRI: Application in brain lesion segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 516–524. Springer, 2017.
  • Xia and Kulis [2017] Xide Xia and Brian Kulis. W-Net: A deep model for fully unsupervised image segmentation. arXiv preprint arXiv:1711.08506, 2017.
  • Chen et al. [2018] Cheng Chen, Qi Dou, Hao Chen, and Pheng-Ann Heng. Semantic-aware generative adversarial nets for unsupervised domain adaptation in chest X-ray segmentation. arXiv preprint arXiv:1806.00600, 2018.
  • Zhang et al. [2018] Yue Zhang, Shun Miao, Tommaso Mansi, and Rui Liao. Task driven generative modeling for unsupervised domain adaptation: Application to x-ray image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 599–607. Springer, 2018.
  • Zhu et al. [2017] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
  • Çiçek et al. [2016] Özgün Çiçek, Ahmed Abdulkadir, Soeren S Lienkamp, Thomas Brox, and Olaf Ronneberger. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 424–432. Springer, 2016.
  • Milletari et al. [2016] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pages 565–571. IEEE, 2016.
  • Taigman et al. [2016] Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200, 2016.
  • Wang et al. [2017b] Guotai Wang, Wenqi Li, Sébastien Ourselin, and Tom Vercauteren. Automatic brain tumor segmentation using cascaded anisotropic convolutional neural networks. In International MICCAI Brainlesion Workshop, pages 178–190. Springer, 2017b.
  • Gulban et al. [2018] Omer Faruk Gulban, Marian Schneider, Ingo Marquardt, Roy AM Haast, and Federico De Martino. A scalable method to improve gray matter segmentation at ultra high field MRI. PloS one, 13(6):e0198335, 2018.
  • Iglesias et al. [2011] Juan Eugenio Iglesias, Cheng-Yi Liu, Paul M Thompson, and Zhuowen Tu. Robust brain extraction across datasets and comparison with publicly available methods. IEEE transactions on medical imaging, 30(9):1617–1634, 2011.