Multimodal Volume-Aware Detection and Segmentation for Brain Metastases Radiosurgery

08/15/2019
by   Szu-Yeu Hu, et al.

Stereotactic radiosurgery (SRS), which delivers high doses of irradiation in a single or few shots to small targets, has been a standard of care for brain metastases. While very effective, SRS currently requires manually intensive delineation of tumors. In this work, we present a deep learning approach for automated detection and segmentation of brain metastases using multimodal imaging and ensemble neural networks. In order to address small and multiple brain metastases, we further propose a volume-aware Dice loss which optimizes model performance using the information of lesion size. This work surpasses current benchmark levels and demonstrates a reliable AI-assisted system for SRS treatment planning for multiple brain metastases.

1 Introduction

Brain metastases (BMs) are the most common intracranial tumors in adults (10 times more common than primary brain tumors) and occur in around 20% of all patients with cancer [5]. Treatment options for brain metastases include craniotomy, chemotherapy, whole-brain radiation therapy, and stereotactic radiosurgery (SRS). Among these options, SRS has been playing a critical role in the treatment of brain metastases, as recent studies have shown that it leads to better treatment outcomes [10]. By delivering high doses of irradiation in a single or few shots to small targets, SRS effectively destroys tumors without damaging surrounding tissues and has been proven beneficial for local tumor control and post-operative neurocognitive function [3].

As SRS requires precise delineation of tumor margins, target segmentation (contouring) for BMs is performed manually by the radiation oncologist or neurosurgeon on magnetic resonance images (MRI) and computed tomography (CT) images of the brain. However, such a manual contouring process can be very time-consuming and suffers from large inter- and intra-reader variability [11]. Driven by the ever-increasing capability of deep learning, automated segmentation of BMs using neural networks has recently been proposed [1, 6]. Previous works on computer-aided segmentation of BMs used only MRI as the imaging input. While MRI provides superior ability to characterize neural tissue and brain structure, it is prone to spatial distortion and motion artifacts, which can lead to inaccuracy in SRS. CT, on the other hand, lacks soft tissue contrast but provides a direct measurement of electron densities for radiation dose calculations and has excellent spatial fidelity. Consequently, co-registration between MRI and CT modalities is recommended for precise stereotactic applications [9].

Many previous works on automated segmentation of brain tumors focused on glioblastoma multiforme and glioma, such as the BraTS dataset [7], and aimed to optimize the Dice similarity coefficient (DSC) [8]. Segmentation of brain metastases is more challenging, as metastatic lesions can be very small (< 1000 mm³) and a large brain metastasis can coexist with multiple small lesions. The conventional DSC is thus not an ideal metric for evaluating brain metastases segmentation, because it is dominated by the large lesions while ignoring small metastases. Unfortunately, small BMs are especially crucial in SRS since they are more likely to be missed by clinicians.

In this paper, we aim to utilize deep neural networks for automated detection and segmentation of brain metastases. Specifically, we present a deep learning-based system for brain metastases detection and segmentation using multimodal imaging (MRI+CT) and ensemble neural networks, which produces a more reliable result than would be achieved by a single image modality and/or a single neural network. To address the challenge of small BMs, we further propose a volume-aware Dice loss ($\mathcal{L}_{VDSC}$), which leverages the information of lesion size to optimize overall segmentation.

2 Methods

2.1 Volume-Aware Dice Loss

The Dice loss ($\mathcal{L}_{DSC}$), which aims to optimize the DSC, has been widely used as a loss function in medical image segmentation tasks [8] and can be expressed as:

$$\mathcal{L}_{DSC} = 1 - \frac{2\sum_{i=1}^{N} p_i g_i + \epsilon}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} g_i + \epsilon} \tag{1}$$

where $p = (p_1, \dots, p_N)$ and $g = (g_1, \dots, g_N)$ are the predicted probability vector and the ground truth binary vector for $N$ voxels, respectively, and $\epsilon$ is a smoothing constant to avoid the denominator being zero.

For binary segmentation problems with similar sizes of lesions, using $\mathcal{L}_{DSC}$ as the objective function is a fair option, as it is normalized by the number of foreground pixels. However, $\mathcal{L}_{DSC}$ is not an ideal choice when multiple targets are present with the same label but with different sizes, since the loss will be dominated by the larger targets. To address this issue, we propose a volume-aware Dice loss ($\mathcal{L}_{VDSC}$) that optimizes the overall segmentation using the information of lesion size. $\mathcal{L}_{VDSC}$ can be formulated as:

$$\mathcal{L}_{VDSC} = 1 - \frac{2\sum_{i=1}^{N} w_i p_i g_i + \epsilon}{\sum_{i=1}^{N} w_i p_i + \sum_{i=1}^{N} w_i g_i + \epsilon} \tag{2}$$

where $w_i$, the $i$-th diagonal entry of a diagonal weight matrix $W$, is a weight related to the volume of the tumor containing the $i$-th voxel in the ground truth, denoted by $v_i$. In this paper, we consider the form:

$$w_i = \sqrt{V / v_i}$$

where $V$ is a reweighting hyper-parameter that normalizes the weight of a tumor of volume $V$ to one, and can be defined as follows.

  1. Constant Reweight (CR): $V$ is a constant ($V = C$), such that the weights are simply proportional to the inverse of the square root of tumor volume.

  2. Batch Reweight (BR): $V$ is the largest tumor volume in each batch ($V = v_{max}$), so the largest tumor always receives a weight of one; e.g., if the largest tumor in one batch has 1500 voxels, another small tumor with 60 voxels will have a weight of $\sqrt{1500/60} = 5$.

To demonstrate the effects of $\mathcal{L}_{VDSC}$, assume a brain image volume with three ground-truth tumors of 1800, 450 and 200 voxels. Using the batch reweight ($V = 1800$), the tumors will have weights 1, 2 and 3, respectively. Suppose the model perfectly predicts the two larger tumors but fails to detect the smallest one. Under such a scenario, $\mathcal{L}_{DSC}$ is calculated as $1 - \frac{2 \times 2250}{2250 + 2450} \approx 0.04$, whereas $\mathcal{L}_{VDSC}$ is $1 - \frac{2 \times 2700}{2700 + 3300} = 0.10$, where $2250 = 1800 + 450$ is the number of correctly predicted voxels and $2700 = 1800 \times 1 + 450 \times 2$ is their weighted count. This illustrates that $\mathcal{L}_{VDSC}$ is more sensitive to the small structures.
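
To make Eq. (2) concrete, the following is a minimal NumPy/SciPy sketch of $\mathcal{L}_{VDSC}$ under stated assumptions: per-tumor volumes are obtained via connected components, background voxels receive a weight of one, and the function name and arguments are illustrative rather than the paper's actual implementation.

```python
import numpy as np
from scipy import ndimage

def volume_aware_dice_loss(pred, gt, eps=1.0, reweight="batch", C=1500.0):
    """Sketch of the volume-aware Dice loss (Eq. 2).

    pred: predicted probabilities, same shape as gt.
    gt:   binary ground-truth mask.
    """
    # Find each tumor as a connected component and measure its volume.
    labels, n = ndimage.label(gt)
    volumes = ndimage.sum(gt, labels, index=range(1, n + 1))

    # V is a fixed constant (CR) or the largest tumor volume in the batch (BR).
    V = C if reweight == "constant" else volumes.max()

    # w_i = sqrt(V / v_i) inside tumor i; background weight 1 (our assumption).
    w = np.ones(gt.shape, dtype=np.float64)
    for idx, v in enumerate(volumes, start=1):
        w[labels == idx] = np.sqrt(V / v)

    p, g, wf = pred.ravel(), gt.ravel().astype(np.float64), w.ravel()
    num = 2.0 * np.sum(wf * p * g) + eps
    den = np.sum(wf * p) + np.sum(wf * g) + eps
    return 1.0 - num / den

# Reproducing the worked example: tumors of 1800, 450, and 200 voxels,
# with the smallest one missed entirely.
gt = np.zeros(4000)
gt[:1800] = gt[2000:2450] = gt[2600:2800] = 1
pred = gt.copy()
pred[2600:2800] = 0
print(volume_aware_dice_loss(pred, gt, eps=0.0))  # ~0.10, as derived above
```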

2.2 Deep Learning Framework

While the current benchmark methods for brain metastases segmentation employ MRI only [1, 6], CT imaging is an essential reference for clinical treatment planning due to its spatial accuracy. We thus propose a deep learning framework using multimodal imaging (MRI+CT) and ensemble neural networks for brain metastases detection and segmentation. The framework is shown in Figure 1. For the ensemble model, we explored two different architectures: 3D U-Net and DeepMedic.

Figure 1: Proposed deep learning framework with multimodal imaging and ensemble networks.

2.2.1 3D U-Net

3D U-Net [2] is an extension of the original U-Net that replaces all the 2D operations with their 3D counterparts. We added one block in addition to the original implementation (number of feature maps: 32, 64, 128, 256, 512, with convolution kernel size 3×3×3 and max-pooling size 2×2×1). We took the full size of the axial-view images and randomly sampled 8 consecutive slices along the vertical axis, resulting in an input image size of 512×512×8×2 (height × width × number of slices × number of imaging modalities). We set a limit to ensure that in each epoch, at least 70% of the samples contain tumor labels; a sketch of this sampling scheme is given below. All the 3D U-Net models were trained using an RMSprop optimizer, with learning rate , batch size of 1, over 300 epochs on an NVIDIA V100 GPU. The best weights on the validation set were used to evaluate the final results on the test set.
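
The paper does not specify how the 70% tumor-containing constraint was enforced; the following is one plausible sketch of the slab-sampling step, with illustrative names and a per-sample resampling scheme as an assumption.

```python
import numpy as np

def sample_slab(volume, mask, n_slices=8, p_tumor=0.7, rng=None):
    """Randomly crop 8 consecutive axial slices from a 512x512xZx2 volume.

    With probability p_tumor, resample until the slab contains tumor
    voxels, approximating the >=70% tumor-containing constraint.
    Assumes `mask` contains at least one tumor when a tumor slab is forced.
    """
    rng = rng or np.random.default_rng()
    z = mask.shape[2]
    force_tumor = rng.random() < p_tumor
    while True:
        start = int(rng.integers(0, z - n_slices + 1))
        slab_mask = mask[:, :, start:start + n_slices]
        if not force_tumor or slab_mask.any():
            return volume[:, :, start:start + n_slices, :], slab_mask
```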

2.2.2 DeepMedic

DeepMedic [4] was originally designed for brain tumor segmentation on multi-channel MRI and has also been applied to BMs [1, 6]. It consists of multiple parallel pathways: one branch takes small patches from full-resolution images as input, while the others utilize subsampled versions of the images. Different from the original paper, the DeepMedic architecture we used contained 3 parallel convolutional pathways: one with normal image resolution and two with low resolution, using down-sampling factors of 3 and 5. There were 11 layers in the network, the first 8 of which were convolutional layers (number of feature maps: 30, 30, 40, 40, 40, 40, 50, 50, with 3×3×3 kernels) and the last 3 of which were fully connected layers (with 250 feature maps per layer). The network was trained using an RMSprop optimizer, with learning rate , minimizing the cross-entropy loss over 35 epochs. A sketch of the multi-resolution pathway inputs is given below.
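
As an illustration of the parallel-pathway inputs, the sketch below crops a normal-resolution patch plus two wider context patches that are downsampled by the factors of 3 and 5; the patch size and helper names are our assumptions, not the paper's.

```python
from scipy.ndimage import zoom

def pathway_inputs(volume, center, patch=25, factors=(3, 5)):
    """Build DeepMedic-style multi-resolution inputs around `center`.

    Returns one full-resolution patch and, per factor f, a patch that
    covers an f-times larger field of view downsampled to the same shape.
    """
    def crop(size):
        # Assumes `center` is far enough from the volume boundary.
        sl = tuple(slice(c - size // 2, c + size // 2 + 1) for c in center)
        return volume[sl]

    inputs = [crop(patch)]
    for f in factors:
        context = crop(patch * f)                       # wider field of view
        inputs.append(zoom(context, 1.0 / f, order=1))  # back to `patch` size
    return inputs
```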

2.2.3 Ensemble Model

3D U-Net and DeepMedic were trained separately with different hyperparameters and different objective functions to maximize the capability of the ensemble model. While U-Net utilized the full field of view of each image slice and addressed overall tumor segmentation, DeepMedic leveraged image segments during model training and focused on small metastases. Furthermore, U-Net was trained to optimize the DSC while DeepMedic was set to minimize the cross-entropy loss. At testing time, each model individually generated a probability map of brain metastases. An ensemble confidence map was then created by averaging the predictions of both models, as sketched below.
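
A minimal sketch of the ensembling step; the 0.5 binarization threshold is our assumption, as the paper only states that the two probability maps are averaged.

```python
import numpy as np

def ensemble(prob_unet, prob_deepmedic, threshold=0.5):
    """Average the two probability maps into a confidence map, then
    binarize it to obtain the final segmentation (threshold assumed)."""
    confidence = (prob_unet + prob_deepmedic) / 2.0
    return confidence, (confidence >= threshold).astype(np.uint8)
```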

3 Experiments and Results

Data

The primary cohort for model training and testing consists of 305 patients with 864 brain metastases (median volume 760 mm³, range 3–110,000 mm³) treated with the CyberKnife G4 system in a single medical center. Each case contains volume masks of brain tumors delineated by an attending radiation oncologist or neurosurgeon on the associated CT and contrast-enhanced T1-weighted MRI scans. The dataset was randomly split into training (80%), validation (10%), and test (10%) sets. To evaluate model robustness and generalizability, we collected an additional test set from the same institution, containing 36 patients with 96 metastases (median volume 829 mm³). The results presented in this paper are the average over the two test sets.

For each case, after rigid image registration between the CT and MRI image volumes, each slice was resized to 512×512 pixels at a resolution of 0.6 mm × 0.6 mm, and the slice thickness was resampled to 2 mm. A brain window and adaptive histogram equalization were applied slice-by-slice to the CT and MRI images, respectively. All the image volumes were then standardized with zero-mean and unit-variance normalization. A sketch of these preprocessing steps is given below.
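
A sketch of the per-slice preprocessing under stated assumptions: the brain-window bounds (center 40 HU, width 80 HU) are a common radiology convention not given in the paper, and skimage's CLAHE stands in for the adaptive histogram equalization.

```python
import numpy as np
from skimage import exposure

def window_ct(hu_slice, center=40.0, width=80.0):
    """Clip a CT slice (Hounsfield units) to a brain window (assumed 40/80)."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip(hu_slice, lo, hi)

def equalize_mr(mr_slice):
    """Adaptive histogram equalization (CLAHE) on an MRI slice."""
    mr = (mr_slice - mr_slice.min()) / (mr_slice.max() - mr_slice.min() + 1e-8)
    return exposure.equalize_adapthist(mr)  # expects values in [0, 1]

def standardize(volume):
    """Zero-mean, unit-variance normalization over the whole volume."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)
```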

3.1 Volume-Aware Dice Loss

3.1.1 $\mathcal{L}_{VDSC}$ with different reweight strategies

We evaluated the efficacy of $\mathcal{L}_{VDSC}$ on the 3D U-Net. A standard 3D U-Net using multimodal learning (MRI+CT) and the conventional $\mathcal{L}_{DSC}$ was trained as the baseline model. We then compared the relative change of DSC, precision, and recall under different settings of $\mathcal{L}_{VDSC}$. In the combination of the two test sets, the tumors have a median of 1322 voxels (949 mm³) and a mean of 4721 voxels (3389 mm³). We tested $C$ values of 500, 1000, 2500, and 5000 for the CR strategy.

Our baseline model achieves a DSC of 0.669, precision of 0.689, and recall of 0.700. Table 1 lists the results of applying $\mathcal{L}_{VDSC}$ relative to this baseline. Overall, $\mathcal{L}_{VDSC}$ with BR ($V = v_{max}$) yields the best performance, improving DSC by 8.57% and recall by 24.14% compared to the baseline. The performance of $\mathcal{L}_{VDSC}$ with CR largely depends on the constant value: the higher the constant, the better the recall. This observation is consistent with our expectation that $\mathcal{L}_{VDSC}$ focuses more on easily neglected small structures and has a higher sensitivity. However, in our dataset the tumor sizes are highly diverse, making it challenging to determine a $C$ value that generalizes to all tumors. The BR approach, on the other hand, has more flexibility through its dynamic weighting strategy, which provides a balance between precision and recall.

Reweight method | $V$ | ΔDSC | ΔPrecision | ΔRecall
Batch-Reweight | $v_{max}$ | +8.57% | +2.62% | +24.14%
Const-Reweight | $C = 500$ | -52.11% | +2.78% | -70.08%
Const-Reweight | $C = 1000$ | -7.67% | +13.15% | -18.83%
Const-Reweight | $C = 2500$ | +2.22% | -5.55% | +23.04%
Const-Reweight | $C = 5000$ | -8.60% | -31.68% | +25.46%
Table 1: Performance of different $\mathcal{L}_{VDSC}$ configurations, reported as the relative change over the $\mathcal{L}_{DSC}$ baseline (%). All models were trained on 3D U-Net with CT and MRI.
Tumor group | Number | Median size | Loss | Pixel-wise recall | Metastasis-wise recall
Small (< 1500 mm³) | 113 | 368 mm³ | $\mathcal{L}_{DSC}$ | 0.466 | 0.619
Small (< 1500 mm³) | 113 | 368 mm³ | $\mathcal{L}_{VDSC}$ | 0.633 | 0.672
Large (≥ 1500 mm³) | 78 | 4114 mm³ | $\mathcal{L}_{DSC}$ | 0.826 | 0.987
Large (≥ 1500 mm³) | 78 | 4114 mm³ | $\mathcal{L}_{VDSC}$ | 0.828 | 0.974
Table 2: Pixel-wise and metastasis-wise recall for small and large tumors under $\mathcal{L}_{DSC}$ and $\mathcal{L}_{VDSC}$.

3.1.2 $\mathcal{L}_{VDSC}$ on small tumors

To evaluate the effectiveness of $\mathcal{L}_{VDSC}$ on different sizes of lesions, we further divided the lesions into large and small tumor groups at a cut-off point of 1500 mm³. We calculated (1) pixel-wise recall and (2) metastasis-wise recall, where a tumor is counted as detected if at least one of its pixels is predicted; note that we did not measure the DSC and precision because false-positive pixels cannot easily be attributed to either the small or the large tumor group. The results are shown in Table 2. $\mathcal{L}_{VDSC}$ shows a more significant improvement in the small tumor group, increasing the pixel-wise recall from 0.466 to 0.633 and the detection rate from 0.619 to 0.672, while in the large tumor group the performances of the two settings are almost identical. The results indicate that $\mathcal{L}_{VDSC}$ effectively improves recall on small tumors.
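
A sketch of the metastasis-wise recall computation, using connected components to enumerate ground-truth tumors; the optional size filter mirrors the 1500 mm³ cut-off (converted to voxels by the caller), and all names are illustrative.

```python
import numpy as np
from scipy import ndimage

def metastasis_wise_recall(pred, gt, min_vox=None, max_vox=None):
    """Fraction of ground-truth tumors with at least one predicted voxel.

    min_vox/max_vox optionally restrict evaluation to a size range,
    e.g. the small (<1500 mm^3) or large group after unit conversion.
    """
    labels, n = ndimage.label(gt)
    detected, total = 0, 0
    for idx in range(1, n + 1):
        tumor = labels == idx
        size = int(tumor.sum())
        if (min_vox is not None and size < min_vox) or \
           (max_vox is not None and size >= max_vox):
            continue
        total += 1
        detected += int(np.logical_and(pred > 0, tumor).any())
    return detected / total if total else float("nan")
```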

3.2 Deep Learning Framework

In our final proposed deep learning framework, we used (1) multimodal learning adopting MRI+CT, (2) ensemble learning combining 3D U-Net and DeepMedic, and (3) optimization using $\mathcal{L}_{VDSC}$.

As shown in Table 3, 3D U-Net and DeepMedic obtain DSCs of 0.669 and 0.625, respectively. DeepMedic utilizes a patch-based training method, which makes the network focus on smaller regions and contributes to a higher recall; 3D U-Net, on the other hand, takes the full resolution as input, and the advantage of seeing the complete brain structure leads to higher precision. The ensemble of the two models improves the DSC to 0.719. Further applying $\mathcal{L}_{VDSC}$ (with BR) to the 3D U-Net, we achieve the best DSC of 0.740 and recall of 0.803. The results indicate that our deep learning approach effectively increases the DSC, precision, and recall, and the performance surpasses the current benchmark methods. Examples of prediction results are shown in Figure 2.

Model | DSC | Precision | Recall
3D U-Net ($\mathcal{L}_{DSC}$) | 0.669 (0.006) | 0.689 (0.001) | 0.700 (0.015)
DeepMedic | 0.625 (0.013) | 0.631 (0.004) | 0.734 (0.035)
3D U-Net ($\mathcal{L}_{DSC}$) + DeepMedic | 0.719 (0.004) | 0.788 (0.002) | 0.713 (0.023)
3D U-Net ($\mathcal{L}_{VDSC}$) + DeepMedic | 0.740 (0.022) | 0.779 (0.010) | 0.803 (0.001)
Table 3: Model performance of different configurations of loss functions, image modalities, and neural network models. The values are represented as median (std).
Figure 2: Examples of prediction results overlaid on MRI.

3.3 Limitation

As the annotations were carried out by the neurosurgeon or radiation oncologist during SRS treatment planning, the ground truth labels represent the area targeted for treatment rather than the actual tumor extent, which leads to imperfect annotations for tumor segmentation. Depending on the clinician's experience and the patient's disease status, these annotations can be delineated more aggressively or more conservatively. Figure 3(a) illustrates an example of aggressive treatment planning, which covers a broader area than the lesion. In addition, the clinician may ignore previously treated tumors (Figure 3(b)). Last, some difficult cases, such as Figure 3(c) and (d), are highly subjective and should be determined from clinical manifestations or serial changes on MRI. The cases mentioned above can lead to underestimation of our model's performance and to higher variance.

Figure 3: Examples of failure cases overlaid on MRI.

4 Conclusion

In this paper, we have achieved high performance for automated detection and segmentation of brain metastases, utilizing multimodal imaging (MRI+CT) as input and ensemble neural networks. We have also addressed the challenge of lesion size variance in multiple metastases by introducing a volume-aware Dice loss, which leverages the information of lesion size and significantly enhances both overall segmentation and the sensitivity to small lesions, which is critical in the current SRS contouring workflow. We expect that the proposed solution will facilitate tumor contouring and treatment planning for stereotactic radiosurgery.

References

  • [1] O. Charron, A. Lallement, D. Jarnet, V. Noblet, J. Clavier, and P. Meyer (2018) Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network. Computers in biology and medicine 95, pp. 43–54. Cited by: §1, §2.2.2, §2.2.
  • [2] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger (2016) 3D u-net: learning dense volumetric segmentation from sparse annotation. In International conference on medical image computing and computer-assisted intervention, pp. 424–432. Cited by: §2.2.1.
  • [3] D. E. Hartgerink, B. van der Heyden, D. De Ruysscher, A. Hoeben, K. Terhaag, P. Lambin, A. Postma, A. Jochems, A. Dekker, J. Schoenmaekers, et al. (2018) Stereotactic radiosurgery in the management of patients with brain metastases of non-small cell lung cancer; indications, decision tools and future directions. Frontiers in oncology 8, pp. 154. Cited by: §1.
  • [4] K. Kamnitsas, C. Ledig, V. F. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert, and B. Glocker (2017) Efficient multi-scale 3d cnn with fully connected crf for accurate brain lesion segmentation. Medical image analysis 36, pp. 61–78. Cited by: §2.2.2.
  • [5] X. Lin and L. M. DeAngelis (2015) Treatment of brain metastases. Journal of clinical oncology 33 (30), pp. 3475. Cited by: §1.
  • [6] Y. Liu, S. Stojadinovic, B. Hrycushko, Z. Wardak, S. Lau, W. Lu, Y. Yan, S. B. Jiang, X. Zhen, R. Timmerman, et al. (2017) A deep convolutional neural network-based automatic delineation strategy for multiple brain metastases stereotactic radiosurgery. PloS one 12 (10), pp. e0185844. Cited by: §1, §2.2.2, §2.2.
  • [7] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, Y. Burren, N. Porz, J. Slotboom, R. Wiest, et al. (2015) The multimodal brain tumor image segmentation benchmark (brats). IEEE transactions on medical imaging 34 (10), pp. 1993–2024. Cited by: §1.
  • [8] F. Milletari, N. Navab, and S. Ahmadi (2016) V-net: fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pp. 565–571. Cited by: §1, §2.1.
  • [9] G. C. Pereira, M. Traughber, and R. F. Muzic (2014) The role of imaging in radiation therapy planning: past, present, and future. BioMed research international 2014. Cited by: §1.
  • [10] M. N. Tsao, D. Rades, A. Wirth, S. S. Lo, B. L. Danielson, L. E. Gaspar, P. W. Sperduto, M. A. Vogelbaum, J. D. Radawski, J. Z. Wang, et al. (2012) Radiotherapeutic and surgical management for newly diagnosed brain metastasis (es): an american society for radiation oncology evidence-based guideline. Practical radiation oncology 2 (3), pp. 210–225. Cited by: §1.
  • [11] S. K. Vinod, M. G. Jameson, M. Min, and L. C. Holloway (2016) Uncertainties in volume delineation in radiation oncology: a systematic review and recommendations for future studies. Radiotherapy and Oncology 121 (2), pp. 169–179. Cited by: §1.