Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion

02/22/2020
by   Cheng Chen, et al.

Accurate medical image segmentation commonly requires effective learning of the complementary information from multimodal data. However, in clinical practice, we often encounter the problem of missing imaging modalities. We tackle this challenge and propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities. Our network uses feature disentanglement to decompose the input modalities into a modality-specific appearance code, which is unique to each modality, and a modality-invariant content code, which absorbs multimodal information for the segmentation task. With enhanced modality-invariance, the disentangled content code from each modality is fused into a shared representation which gains robustness to missing data. The fusion is achieved via a learning-based strategy to gate the contribution of different modalities at different locations. We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset. With competitive performance to the state-of-the-art approaches for full modality, our method achieves outstanding robustness under various missing-modality situations, significantly exceeding the state-of-the-art method by over 16% in average Dice on whole tumor segmentation.


Code Repositories

Robust-Mseg

[MICCAI'19] Robust multimodal brain tumor segmentation via feature disentanglement and gated fusion



1 Introduction

Accurate segmentation of brain tumors is of critical importance for quantitative assessment of tumor progression and preoperative treatment planning. The measurement of tumor-induced tissue changes relies on complementary biological information provided in multiple Magnetic Resonance Imaging (MRI) modalities, i.e., FLAIR, T1, T1 contrast-enhanced (T1c), and T2. Joint learning from these multimodal images greatly helps to improve segmentation accuracy. Numerous multimodal methods have been developed for automated brain tumor segmentation, either by concatenating multiple MRI modalities as inputs [8, 18], or by fusing higher-level features from each modality in latent space [5, 14]. However, availability of a full set of the desired modalities is not always guaranteed in real-world scenarios, due to varying scanning protocols and diverse patient conditions. In this regard, robustness to one or more missing modalities during inference is essential for a widely applicable multimodal learning method.

A typical solution is to synthesize the missing modality(ies) from the available ones [15]. Such a method requires building a specific model for each modality from all possible combinations of available modalities, which is complicated. Alternatively, Havaei et al. [6] propose hetero-modal image segmentation (HeMIS), which fuses multimodal information by computing statistics (i.e., mean and variance) across individual features. This method is easily scalable to various data-missing situations, as the fusion in latent space adapts to any number of modalities. Furthermore, Chartsias et al. [2] and van Tulder et al. [16] enhance the modality-invariance of latent representations by minimizing the L1 or L2 distance between features from different modalities. However, different MRI modalities vary in intensity distribution with modality-specific appearance, thus simply encouraging the features from different modalities to be close under the L1- or L2-norm may not achieve optimal representations with the desired modality-invariance. Instead, a concurrent work [12] uses adversarial learning to ensure the model generates similar features under missing modalities as in the full-modality situation.

To effectively extract modality-invariant representations conveying the essential content of the tumor, learning to cancel out the modality-specific information may be helpful. This can be achieved using feature disentanglement, which decomposes inputs into a latent space of interpretable factors [7, 13]. In medical imaging, disentangling representations has recently demonstrated effectiveness for liver lesion classification [1], myocardial segmentation [3], and multimodal deformable registration [11]. However, these works are limited to uni-modal or bi-modal data. To the best of our knowledge, the potential of feature disentanglement for robust multimodal segmentation with an arbitrary number of modalities has not been explored yet.

We propose a novel multimodal learning framework with feature disentanglement and gated feature fusion, which is robust to missing modalities. Our network disentangles multimodal features into a modality-specific appearance code and a modality-invariant content code. The content code of each modality is fused into a shared representation containing discriminative information for the segmentation task. To enhance its modality-invariance, the shared representation is required to reconstruct any modality given the corresponding appearance code, even in the absence of some modality(ies). Furthermore, we employ a novel gated feature fusion strategy to automatically learn weight maps and gate the contribution from different modalities at different locations. We validate our proposed method on the task of multimodal brain tumor segmentation with the BRATS challenge dataset [10]. With competitive performance to the state-of-the-art methods for full modality, our method is highly robust to various missing-modality situations.

2 Method

Figure 1: Proposed multimodal segmentation framework. Left: Feature disentanglement for multimodal learning. Right: Detailed structure of the gated feature fusion module.

An overview of our proposed multimodal segmentation framework is shown in Fig. 1. We first introduce feature disentanglement to encode multimodal inputs into modality-specific appearance codes and modality-invariant content codes. Next, we present a learning-based gating strategy for integrating the complementary disentangled content codes from individual modalities into a more expressive fused representation. Finally, the detailed learning process and network architecture are described.

2.1 Feature Disentanglement for Robust Multimodal Learning

We denote the multimodal images by $\{x_i\}_{i=1}^{N}$, where $N=4$ in our brain tumor segmentation task. Each modality $x_i$ is input to its own appearance encoder $E_i^{a}$ and content encoder $E_i^{c}$, and we correspondingly obtain its disentangled appearance code $a_i = E_i^{a}(x_i)$ and content code $c_i = E_i^{c}(x_i)$. For the appearance code, we follow the common practice in [9] and set it as an 8-bit vector, assuming its prior distribution to be a centered isotropic Gaussian $\mathcal{N}(0, I)$. The Kullback-Leibler (KL) divergence is computed to encourage the estimated distribution of $a_i$ to be as close as possible to this prior. In this way, we obtain the loss $\mathcal{L}_{\mathrm{KL}} = \sum_{i=1}^{N} D_{\mathrm{KL}}\big(q(a_i \mid x_i)\,\|\,\mathcal{N}(0, I)\big)$ for training the appearance encoders $\{E_i^{a}\}_{i=1}^{N}$.
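The snippet below is a minimal PyTorch-style sketch of such a KL-regularized appearance encoder; the layer configuration, the VAE-style (mean, log-variance) parameterization, and the code dimensionality are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class AppearanceEncoder(nn.Module):
    """Hypothetical appearance encoder: maps one modality volume to a small
    Gaussian latent code so a KL term can pull it towards N(0, I)."""
    def __init__(self, in_channels=1, code_dim=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool3d(1),      # global average pooling
        )
        self.fc_mu = nn.Linear(32, code_dim)
        self.fc_logvar = nn.Linear(32, code_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.fc_mu(h), self.fc_logvar(h)

def kl_loss(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch
    return torch.mean(-0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
```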

Next, for the content codes, which are expected to gain modality-invariance after evaporating the stylized appearances of the different image modalities, we fuse them into an integrated representation $c_f = \mathcal{F}(c_1, \ldots, c_N)$ expressing the essential semantic content of the tumor. Here, $\mathcal{F}$ is an automatically learned fusion strategy which we elaborate in Section 2.2. From the perspective of successful disentanglement, the obtained content representation $c_f$ should enable re-rendering the original image of any modality given its appearance code. To encourage such reconstruction capability, we develop the pseudo-cycle-consistency loss by introducing a set of modality-specific decoders $\{D_i\}_{i=1}^{N}$, as follows:

$\mathcal{L}_{\mathrm{rec}} = \sum_{i=1}^{N} \mathbb{E}\Big[\big\| D_i\big(\mathcal{F}(\delta_1 c_1, \ldots, \delta_N c_N),\, a_i\big) - x_i \big\|_1\Big],$   (1)

where we employ the L1-norm to alleviate blurring in the generated images. With the Bernoulli indicators $\delta_i$, we aim to grant the content representation extra robustness to missing data, i.e., still producing a high-quality reconstruction $\hat{x}_i$ even when some content code(s) are absent from the fusion. We perform this modality dropout in latent space by randomly setting $\delta_i$ to $0$, which turns off the content code $c_i$ in the current learning iteration.
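A possible sketch of this reconstruction objective with random latent-space modality dropout is given below; the decoder call signature and the dropout probability are assumptions made for illustration, not the exact design.

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(images, content_codes, appearance_codes,
                        fuse, decoders, drop_prob=0.5):
    """Pseudo-cycle-consistency sketch: fuse content codes with some of them
    randomly zeroed out, then ask each modality-specific decoder to re-render
    its own image from the shared representation and its appearance code.

    images, content_codes, appearance_codes, decoders: lists of length N.
    fuse: the (gated) fusion module mapping a list of codes to one tensor.
    """
    # Bernoulli indicators delta_i: 0 turns off the content code of modality i
    deltas = [0.0 if torch.rand(1).item() < drop_prob else 1.0
              for _ in content_codes]
    shared = fuse([d * c for d, c in zip(deltas, content_codes)])

    loss = 0.0
    for x_i, a_i, dec_i in zip(images, appearance_codes, decoders):
        x_rec = dec_i(shared, a_i)           # re-render modality i
        loss = loss + F.l1_loss(x_rec, x_i)  # L1 keeps reconstructions sharp
    return loss
```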

After the disentanglement procedure, which cancels out the effect of modality-specific appearance features while aggregating the complementary content information from an arbitrary combination of multimodal data, we can perform accurate and robust brain tumor segmentation. We build a segmentation decoder which learns discriminative patterns based on our derived representative and robust $c_f$. We jointly use the Dice loss and a weighted cross-entropy loss to handle the unbalanced object sizes in multi-class segmentation:

$\mathcal{L}_{\mathrm{seg}} = -\sum_{c}\frac{2\sum_{v} y_{v,c}\,\hat{y}_{v,c}}{\sum_{v} y_{v,c} + \sum_{v}\hat{y}_{v,c} + \epsilon} \;-\; \sum_{c} w_{c}\sum_{v} y_{v,c}\log p_{v,c},$   (2)

where $y_{v,c}$, $p_{v,c}$ and $\hat{y}_{v,c}$ respectively denote the ground truth, probability prediction, and one-hot output of voxel $v$ for class $c$. Directly combining the two types of segmentation loss works well in practice, without the need to particularly tune a balancing weight between them. The $\epsilon$ is a constant for numerical stabilization, and $w_c$ is calculated online within a batch to treat class imbalance in the cross-entropy loss.
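A hedged PyTorch sketch of such a combined soft-Dice plus weighted cross-entropy loss follows; the inverse-class-frequency weighting computed from the current batch is an assumed scheme, and the exact placement of one-hot versus soft predictions may differ from the original formulation.

```python
import torch

def dice_wce_loss(probs, target_onehot, eps=1e-5):
    """probs: softmax outputs of shape (B, C, D, H, W);
    target_onehot: one-hot ground truth of the same shape."""
    dims = (0, 2, 3, 4)  # sum over batch and spatial voxels, keep classes

    # Soft Dice term per class, averaged over classes
    intersect = torch.sum(probs * target_onehot, dim=dims)
    denom = torch.sum(probs, dim=dims) + torch.sum(target_onehot, dim=dims)
    dice = torch.mean((2.0 * intersect + eps) / (denom + eps))

    # Cross-entropy weighted by inverse class frequency in the batch (assumed)
    class_freq = torch.sum(target_onehot, dim=dims)
    weights = 1.0 / (class_freq + eps)
    weights = weights / weights.sum()
    wce = -torch.mean(torch.sum(weights.view(1, -1, 1, 1, 1)
                                * target_onehot * torch.log(probs + eps), dim=1))

    return (1.0 - dice) + wce
```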

2.2 Multimodal Content Fusion with Learned Gating

Effectively fusing the complementary information from various modalities is crucial in a multimodal learning framework. This also holds for our scenario, even though we disentangle the content codes and enforce robustness to missing data. In fact, feature fusion plays an even more important role in unusual inference situations such as the unavailability of some modality(ies). If not handled carefully, the fused representation would be contaminated by noisy information from the empty input channel(s), and the model's performance would inevitably degrade. Existing approaches tackle this using an average [6] or max [2] operation. However, the average operation makes each modality contribute equally, which may disregard highly informative features from a certain modality. On the contrary, the max operation only retains the largest response, neglecting information from all the others.

Instead of hard-coding a fusion operation, we propose to automatically learn the mapping function to integrate multimodal features. The contribution weights of a modality are not necessarily identical across locations, as a modality contains different amounts of information for the areas of each class. For example, T1c shows a clear structure of the enhancing tumor, but not of the edema area. In this regard, we dynamically learn a weight map to gate the scale of information from each content code $c_i$, with voxel-wise flexibility. The gated contents from the individual modalities are then fused to form the integrated representation.

Specifically, the disentangled content codes from all modalities are concatenated and input to a convolutional layer with an output channel number of $N$. With sigmoid activation, we obtain the gating weight matrix $G$, which can be split into $N$ separate maps $\{g_i\}_{i=1}^{N}$, one for each modality. Next, we re-weight each content code as $g_i \odot c_i$ with element-wise multiplication. These outputs are concatenated and forwarded to a bottleneck convolution, followed by Leaky ReLU activation. As shown in Eq. (1), we randomly set some content code(s) to 0 via $\delta_i$ during training, to enhance the model's robustness to missing data. Overall, we obtain the fused content code $c_f$, which has the same feature map size and channel number as each individual code $c_i$.
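A possible PyTorch realization of this learned gating is sketched below; the kernel sizes and the 1x1x1 bottleneck are assumptions, since the exact layer configuration is not restated here.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse N per-modality content codes into one shared representation using
    learned, location-dependent gating weights (a sketch of the idea)."""
    def __init__(self, num_modalities, channels):
        super().__init__()
        # One gate map per modality, predicted from the concatenated codes
        self.gate_conv = nn.Conv3d(num_modalities * channels, num_modalities,
                                   kernel_size=3, padding=1)
        # Bottleneck mapping the concatenated gated codes back to `channels`
        self.bottleneck = nn.Conv3d(num_modalities * channels, channels,
                                    kernel_size=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, contents):
        # contents: list of N tensors, each of shape (B, C, D, H, W)
        stacked = torch.cat(contents, dim=1)
        gates = torch.sigmoid(self.gate_conv(stacked))   # (B, N, D, H, W)
        gated = [g.unsqueeze(1) * c                      # broadcast gate over C
                 for g, c in zip(gates.unbind(dim=1), contents)]
        return self.act(self.bottleneck(torch.cat(gated, dim=1)))
```

For instance, with four BRATS modalities and 16-channel content codes, the module would be instantiated as GatedFusion(num_modalities=4, channels=16) and called on the list of per-modality content feature maps.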

It is worth noting that our learning-based gating strategy is a general mechanism for multimodal feature fusion, superior to the existing hard-coded average or max operations because it properly aggregates complementary content with data-dependent weights. In our framework, we use it jointly with the disentanglement procedure to form an accurate and robust end-to-end multimodal learning method.

2.3 Learning Process and Network Architecture

The entire framework is learned with the overall objective function:

$\mathcal{L} = \mathcal{L}_{\mathrm{seg}} + \lambda_{1}\,\mathcal{L}_{\mathrm{KL}} + \lambda_{2}\,\mathcal{L}_{\mathrm{rec}},$   (3)

where $\lambda_{1}$ and $\lambda_{2}$ are trade-off parameters weighting the importance of each component, both of which are set empirically in our experiments. We utilize the Adam optimizer and progressively decay the initial learning rate during training. The memory-intensive components of our model restrict the batch size to 1 on one Nvidia Xp GPU.
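Putting the pieces together, a hypothetical training step combining the three terms of Eq. (3) could look as follows; it reuses the sketch functions above, and the model interface (returning segmentation probabilities, per-modality appearance statistics, content codes, and appearance codes) is an assumption.

```python
def training_step(batch, model, optimizer, lambda_kl, lambda_rec):
    """One optimization step over the joint objective (a simplified sketch:
    a real implementation would share the gated fusion between the
    segmentation and reconstruction paths)."""
    images, target_onehot = batch                 # per-modality images + labels
    out = model(images)                           # assumed dict-style outputs
    seg = dice_wce_loss(out["probs"], target_onehot)
    kl = sum(kl_loss(mu, logvar) for mu, logvar in out["appearance_stats"])
    rec = reconstruction_loss(images, out["contents"], out["appearances"],
                              model.fuse, model.decoders)
    loss = seg + lambda_kl * kl + lambda_rec * rec
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```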

Our encoders and decoder for the segmentation task adopt the 3D U-Net [4] architecture, except that one independent content encoder $E_i^{c}$ is used for each input modality. In each downsampling stage, the content features of the individual modalities are fused via the learning-based gating strategy, with a probability of 0.5 for $\delta_i$ to be zero (dropping $c_i$). The fused features of each stage are then skip-connected to the corresponding upsampling stage. Each $E_i^{c}$ consists of 4 residual blocks with instance normalization and Leaky ReLU activation. Between blocks, the image dimensions are progressively reduced by a factor of 2 and the feature channels are doubled. All convolutions use the same kernel size, and the initial channel number is 16. The segmentation decoder also has 4 residual blocks, similar to $E_i^{c}$, except that the feature map size is upsampled by 2 and the channel number halved after each stage. For image reconstruction, we generally follow the practice in [7]. Specifically, each appearance encoder $E_i^{a}$ consists of 5 convolutional layers followed by a global average pooling and a fully-connected layer to obtain the appearance code $a_i$. Each decoder $D_i$ uses 4 residual blocks followed by 4 upsampling and convolutional layers to produce the reconstructed image $\hat{x}_i$.

3 Experiments

Dataset and Preprocessing. We validate our proposed method on the 2015 Brain Tumor Segmentation Challenge (BRATS) dataset [10]. The training set consists of 274 cases with ground truth provided.
Methods                Dice(%)            Precision(%)       Sensitivity(%)
                       Comp. Core Enh.    Comp. Core Enh.    Comp. Core Enh.
Kamnitsas et al. [8]   84    67   63      82    85   64      89    62   66
Zhao et al. [17]       82    72   62      84    78   60      83    73   69
OM-Net [18]            86    71   64      86    83   61      88    68   72
Ours                   84    72   64      84    80   64      89    69   68

Table 1: Comparison of brain tumor segmentation performance on BRATS 2015 test set. The values are obtained by submitting our results to the online evaluation system.

Modalities (F, T1, T1c, T2)    Dice(%) Complete          Dice(%) Core              Dice(%) Enhancing
                               Ours    HeMIS   MLP       Ours    HeMIS   MLP       Ours    HeMIS   MLP
  85.49   58.48   61.50   58.66   40.18   37.32   37.66   20.31   18.62
  71.86   33.46   2.04   72.87   44.55   17.70   70.22   49.93   32.92
  68.40   33.22   2.07   50.00   17.42   10.52   22.67   4.67   10.78
  83.02   71.26   63.81   46.67   37.45   34.26   28.30   5.57   15.90
  87.53   67.59   64.97   78.46   63.39   49.38   76.82   65.38   60.30
  74.59   45.93   1.99   76.40   55.06   26.55   73.95   62.40   40.93
  87.66   80.28   78.13   60.17   49.52   48.97   35.28   22.26   25.18
  87.87   69.56   66.88   64.88   47.26   43.66   41.05   23.56   26.37
  89.08   82.10   81.35   63.51   53.42   52.41   39.72   23.19   25.01
  88.01   79.80   81.13   78.09   66.12   65.51   76.62   67.12   66.19
  87.73   80.88   82.19   80.68   69.26   69.34   78.81   71.30   70.93
  89.07   83.87   80.40   65.99   57.76   53.46   43.04   28.46   28.34
  89.06   82.78   83.37   79.47   70.62   70.45   78.07   70.52   70.56
  88.26   70.98   67.85   80.84   66.60   55.40   78.56   67.84   64.81
  89.07   83.15   82.43   81.19   72.50   71.46   79.13   75.37   72.08
Average   84.45   68.22   60.01   69.19   54.07   47.09   57.33   43.86   41.93

Table 2: Robustness comparison of our method against HeMIS [6] and the imputation MLP [6] on the test split of the BRATS 2015 training set. The Dice score is presented for every combination of available and missing modalities.

The test set contains 110 cases whose reference labels are held by the organizers, and evaluation is obtained via an online system. Each case contains four MRI modalities: FLAIR, T1, T1c, and T2. The task of the challenge is to segment three tumor classes, i.e., complete, core, and enhancing tumor. The dataset has been skull-stripped, co-registered, and resampled to isotropic resolution by the organizers. We further normalized the intensity of each volume to zero mean and unit variance within the brain tissue area. A fixed-size patch was randomly cropped during training as input to the network.
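A minimal sketch of this preprocessing, assuming a binary brain mask is available and leaving the patch size as a parameter since its exact value is not restated here:

```python
import numpy as np

def preprocess_volume(volume, brain_mask):
    """Z-score normalize intensities within the brain area only (a sketch)."""
    brain_voxels = volume[brain_mask > 0]
    normalized = (volume - brain_voxels.mean()) / (brain_voxels.std() + 1e-8)
    normalized[brain_mask == 0] = 0.0
    return normalized

def random_crop(volumes, labels, patch_size):
    """Randomly crop an aligned 3D patch from all modalities and the label map."""
    d, h, w = volumes[0].shape
    pd, ph, pw = patch_size
    z = np.random.randint(0, d - pd + 1)
    y = np.random.randint(0, h - ph + 1)
    x = np.random.randint(0, w - pw + 1)

    def crop(v):
        return v[z:z + pd, y:y + ph, x:x + pw]

    return [crop(v) for v in volumes], crop(labels)
```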

Performance of Robust Brain Tumor Segmentation. We first compare our method with state-of-the-art methods on the BRATS 2015 test set using full modalities. Results were obtained directly from the online evaluation system and compared without postprocessing. As shown in Table 1, our method achieves the highest Dice scores for the core and enhancing tumor, with the other metrics highly competitive with the top-ranking approach OM-Net [18], validating the effectiveness of our segmentation backbone.

Figure 2: Segmentation results from our method for complete tumor (yellow), tumor core (red), and enhancing tumor (blue). Input modalities at inference are indicated.

We then evaluate the robustness of our method to missing-modality inference. The absence of a modality is implemented by setting $\delta_i$ to zero at inference, dropping the corresponding content code $c_i$. For a direct comparison with the HeMIS method and the image synthesis method using a multilayer perceptron (MLP) [6], we used the same data split of the BRATS training set as in [6] and directly referenced the results from their paper. As shown in Table 2, our method significantly outperforms the HeMIS and imputation MLP methods for all 15 possible combinations of unavailable modalities and all three tumor classes. This demonstrates the outstanding robustness of our multimodal segmentation method. From the results, we can see that the FLAIR and T2 modalities are more informative than the others for complete tumor segmentation, and T1c is discriminative for accurate prediction of the enhancing tumor. In Fig. 2, we show that as the number of missing modalities increases, the segmentation results produced by our robust model degrade only gradually, rather than failing suddenly. Even with T1 alone, we can achieve decent segmentation of the complete tumor and tumor core.

Ablation Study. We investigate the effectiveness of feature disentanglement and gated fusion, the two key components of our method. We first set up a baseline network which uses average fusion without feature disentanglement. Then we add feature disentanglement and gated fusion one by one to the baseline network. In Fig. 3 (a), we compare the Dice scores of the three networks, averaged over the 15 possible combinations of input modalities. Both feature disentanglement and gated fusion bring performance improvements across all tumor parts, achieving the highest Dice score in most situations (10, 13, and 11 situations out of 15 for the complete tumor, tumor core, and enhancing tumor, respectively). Fig. 3 (b) shows reconstructions of FLAIR and T2 images obtained by combining their corresponding appearance codes with the shared representation fused from the content codes of different combinations of input modalities.

Figure 3: (a) Ablation study of key components in our method. (b) Example reconstruction of FLAIR and T2 images for different combinations of input modalities.

Even when some modality(ies) are missing, our network can still reconstruct the missing modality with the shared representation, indicating that the shared representation successfully yields the essential tumor content.

4 Conclusion

We propose a novel multimodal segmentation framework which jointly uses feature disentanglement and gated feature fusion to obtain a modality-invariant and discriminative representation. We validate our method on brain tumor segmentation under both full modalities and various combinations of missing modalities, achieving new state-of-the-art results on the BRATS benchmark. Its outstanding robustness to such inference variations makes our method widely applicable in real-world clinical scenarios.

Acknowledgments. This work was supported in part by the National Basic Research Program of China (973 Program) under Grant 2015CB351706, the National Natural Science Foundation of China under Project No. U1613219, the Research Grants Council of the Hong Kong Special Administrative Region under Project No. CUHK14225616, and the Hong Kong Innovation and Technology Commission under Project No. ITS/319/17.

References

  • [1] A. Ben-Cohen, R. Mechrez, et al. (2018) Improving cnn training using disentanglement for liver lesion classification in ct. arXiv preprint arXiv:1811.00501. Cited by: §1.
  • [2] A. Chartsias, T. Joyce, M. V. Giuffrida, and S. A. Tsaftaris (2018) Multimodal mr synthesis via modality-invariant latent representation. IEEE TMI 37 (3), pp. 803–814. Cited by: §1, §2.2.
  • [3] A. Chartsias et al. (2018) Factorised spatial representation learning: application in semi-supervised myocardial segmentation. In MICCAI 2018, A. F. Frangi, J. A. Schnabel, C. Davatzikos, C. Alberola-López, and G. Fichtinger (Eds.), LNCS, Vol. 11071, pp. 490–498. External Links: Document Cited by: §1.
  • [4] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, et al. (2016) 3D u-net: learning dense volumetric segmentation from sparse annotation. In MICCAI 2016, S. Ourselin, L. Joskowicz, M. R. Sabuncu, G. B. Ünal, and W. Wells (Eds.), LNCS, Vol. 9901, pp. 424–432. External Links: Document Cited by: §2.3.
  • [5] L. Fidon, W. Li, et al. (2017) Scalable multimodal convolutional networks for brain tumour segmentation. In MICCAI 2017, M. Descoteaux, L. Maier-Hein, A. M. Franz, P. Jannin, D. L. Collins, and S. Duchesne (Eds.), LNCS, Vol. 10435, pp. 285–293. External Links: Document Cited by: §1.
  • [6] M. Havaei, N. Guizard, N. Chapados, and Y. Bengio (2016) Hemis: hetero-modal image segmentation. In MICCAI 2016, S. Ourselin, L. Joskowicz, M. R. Sabuncu, G. B. Ünal, and W. Wells (Eds.), LNCS, Vol. 9901, pp. 469–477. External Links: Document Cited by: §1, §2.2, Table 2, §3.
  • [7] X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In ECCV, pp. 172–189. Cited by: §1, §2.3.
  • [8] K. Kamnitsas, C. Ledig, V. F. Newcombe, et al. (2017) Efficient multi-scale 3d cnn with fully connected crf for accurate brain lesion segmentation. MIA 36, pp. 61–78. Cited by: §1, Table 1.
  • [9] H. Lee, H. Tseng, J. Huang, M. Singh, and M. Yang (2018) Diverse image-to-image translation via disentangled representations. In ECCV, pp. 35–51. Cited by: §2.1.
  • [10] B. H. Menze, A. Jakab, S. Bauer, et al. (2015) The multimodal brain tumor image segmentation benchmark (brats). IEEE TMI 34 (10), pp. 1993–2024. Cited by: §1, §3.
  • [11] C. Qin, B. Shi, et al. (2019) Unsupervised deformable registration for multi-modal images via disentangled representations. In IPMI, pp. 249–261. Cited by: §1.
  • [12] Y. Shen and M. Gao (2019) Brain tumor segmentation on mri with missing modalities. In IPMI, pp. 417–428. Cited by: §1.
  • [13] Y. H. Tsai, P. P. Liang, A. Zadeh, L. Morency, and R. Salakhutdinov (2018) Learning factorized multimodal representations. arXiv preprint arXiv:1806.06176. Cited by: §1.
  • [14] K. Tseng, Y. Lin, W. Hsu, et al. (2017) Joint sequence learning and cross-modality convolution for 3d biomedical segmentation. In CVPR, pp. 6393–6400. Cited by: §1.
  • [15] G. van Tulder and M. de Bruijne (2015) Why does synthesized data improve multi-sequence classification?. In MICCAI 2015, N. Navab, J. Hornegger, W. M. W. III, and A. F. Frangi (Eds.), LNCS, Vol. 9349, pp. 531–538. External Links: Document Cited by: §1.
  • [16] G. van Tulder and M. de Bruijne (2019) Learning cross-modality representations from multi-modal images. IEEE TMI 38 (2), pp. 638–648. Cited by: §1.
  • [17] X. Zhao, Y. Wu, G. Song, Z. Li, Y. Zhang, and Y. Fan (2018) A deep learning model integrating fcnns and crfs for brain tumor segmentation. MIA 43, pp. 98–111. Cited by: Table 1.
  • [18] C. Zhou, C. Ding, Z. Lu, et al. (2018) One-pass multi-task convolutional neural networks for efficient brain tumor segmentation. In MICCAI 2018, A. F. Frangi, J. A. Schnabel, C. Davatzikos, C. Alberola-López, and G. Fichtinger (Eds.), LNCS, Vol. 11072, pp. 637–645. External Links: Document Cited by: §1, Table 1, §3.