Brain metastases (BMs) are the most common intracranial tumors in adults (10 times more common than primary brain tumors) and occur in around 20% of all patients with cancer. Treatment options for brain metastases include craniotomy, chemotherapy, whole-brain radiation therapy, and stereotactic radiosurgery (SRS). Among these options, SRS plays a critical role, as recent studies have shown that it leads to better treatment outcomes. By delivering high doses of irradiation in a single or a few shots to small targets, SRS effectively destroys tumors without damaging surrounding tissue, and it has proven beneficial for local tumor control and post-operative neurocognitive function.
As SRS requires precise delineation of tumor margins, target segmentation (contouring) for BMs is performed manually by the radiation oncologist or neurosurgeon on magnetic resonance imaging (MRI) and computed tomography (CT) images of the brain. However, this manual contouring process can be very time-consuming and suffers from large inter- and intra-reader variability. Driven by the ever-increasing capability of deep learning, automated segmentation of BMs using neural networks has recently been proposed [1, 6]. Previous works on computer-aided segmentation of BMs used only MRI as the imaging input. While MRI provides superior characterization of neural tissue and brain structure, it is prone to spatial distortion and motion artifacts, which can lead to inaccuracy in SRS. CT, on the other hand, lacks soft-tissue contrast but provides a direct measurement of electron densities for radiation dose calculation and has excellent spatial fidelity. Consequently, co-registration between the MRI and CT modalities is recommended for precise stereotactic applications.
Many previous works on automated segmentation of brain tumors focused on glioblastoma multiforme and glioma, such as those based on the BraTS dataset, and aim to optimize the Dice similarity coefficient (DSC). Segmentation of brain metastases is more challenging, as metastatic lesions can be very small (< 1000 mm³) and a large brain metastasis can coexist with multiple small lesions. The conventional DSC is thus not an ideal metric for evaluating brain metastases segmentation, because it is dominated by the large lesion while ignoring small metastases. Unfortunately, small BMs are especially crucial in SRS, since they are more likely to be missed by clinicians.
In this paper, we aim to utilize deep neural networks for automated detection and segmentation of brain metastases. Specifically, we present a deep learning-based system for brain metastases detection and segmentation using multimodal imaging (MRI+CT) and ensemble neural networks, which produces a more reliable result than would be achieved by a single image modality and/or a single neural network. To address the challenge of small BMs, we further propose a volume-aware Dice loss (L_VADice), which leverages the information of lesion size to optimize the overall segmentation.
2.1 Volume-Aware Dice Loss
The Dice loss (L_Dice), which aims to optimize the DSC, has been widely used as a loss function in medical image segmentation tasks and can be expressed as:

\[ \mathcal{L}_{\mathrm{Dice}} = 1 - \frac{2\sum_i p_i g_i + \epsilon}{\sum_i p_i + \sum_i g_i + \epsilon}, \]

where \(p_i\) and \(g_i\) denote the predicted probability and the ground-truth label of the \(i\)-th voxel, respectively. \(\epsilon\) is a smoothing constant that avoids the denominator being zero.
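As a concrete reference, the Dice loss defined above can be sketched in a few lines of NumPy (a minimal sketch over flattened arrays; in practice this would be written in the training framework's tensor API):

```python
import numpy as np

def dice_loss(p, g, eps=1.0):
    """Soft Dice loss.

    p   : predicted foreground probabilities in [0, 1]
    g   : binary ground-truth labels
    eps : smoothing constant that keeps the denominator non-zero
    """
    p, g = np.ravel(p), np.ravel(g)
    intersection = np.sum(p * g)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(p) + np.sum(g) + eps)
```

A perfect prediction drives the loss to 0, while a prediction that misses all foreground voxels drives it toward 1.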
For binary segmentation problems with similarly sized lesions, using L_Dice as the objective function is a fair option, as it is normalized by the number of foreground voxels. However, L_Dice is not an ideal choice when multiple targets are present with the same label but different sizes, since the loss will be dominated by the larger targets. To address this issue, we propose a volume-aware Dice loss (L_VADice) that optimizes the overall segmentation using the information of lesion size. L_VADice can be formulated as:

\[ \mathcal{L}_{\mathrm{VADice}} = 1 - \frac{2\sum_i w_i p_i g_i + \epsilon}{\sum_i w_i p_i + \sum_i w_i g_i + \epsilon}, \]
where \(w_i\) is the \(i\)-th diagonal entry of a weight matrix \(W\), determined by the volume of the ground-truth tumor containing the \(i\)-th voxel, denoted by \(v_i\). In this paper, we consider the form

\[ w_i = \sqrt{V / v_i}, \]

where \(V\) is a reweighting hyper-parameter that can be defined as follows.
Constant Reweight (CR): \(V\) is a fixed constant, so the weights are simply proportional to the inverse of the square root of the tumor volume.
Batch Reweight (BR): \(V\) is the largest tumor volume in each batch; e.g., if the largest tumor in a batch has 1500 voxels, a small tumor with 60 voxels will have a weight of \(\sqrt{1500/60} = 5\).
To demonstrate the effect of L_VADice, assume a brain image volume with three ground-truth tumors of 1800, 450, and 200 voxels. Using the batch reweight, the tumors will have weights 1, 2, and 3, respectively. Suppose the model perfectly predicts the two larger tumors but fails to detect the smallest one. Under such a scenario, L_Dice is calculated as \(1 - 4500/4700 \approx 0.04\), whereas L_VADice is \(1 - 5400/6000 = 0.10\) (neglecting \(\epsilon\)). This illustrates that L_VADice is more sensitive to small structures.
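The arithmetic in this example can be verified directly. The sketch below (assuming the batch-reweight form \(w_i = \sqrt{V/v_i}\) and \(\epsilon = 0\); the function name is ours) reproduces the two loss values:

```python
import numpy as np

def va_dice_loss(p, g, w, eps=0.0):
    # Volume-aware Dice loss: voxel i carries a weight w_i derived
    # from the volume of the ground-truth tumor it belongs to.
    num = 2.0 * np.sum(w * p * g) + eps
    den = np.sum(w * p) + np.sum(w * g) + eps
    return 1.0 - num / den

# Three ground-truth tumors of 1800, 450, and 200 voxels.
vols = np.repeat([1800, 450, 200], [1800, 450, 200]).astype(float)
g = np.ones_like(vols)              # every listed voxel is tumor
p = (vols > 200).astype(float)      # the smallest tumor is missed
w = np.sqrt(vols.max() / vols)      # batch reweight: weights 1, 2, 3

plain = va_dice_loss(p, g, np.ones_like(w))  # plain Dice loss, ~0.04
aware = va_dice_loss(p, g, w)                # volume-aware loss, 0.10
```

The missed 200-voxel tumor raises the volume-aware loss more than twice as much as the plain Dice loss, matching the example.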
2.2 Deep Learning Framework
While the current benchmark methods for brain metastases segmentation employ MRI only [1, 6], CT imaging is an essential reference for clinical treatment planning due to its spatial accuracy. We thus propose a deep learning framework using multimodal imaging (MRI+CT) and ensemble neural networks for brain metastases detection and segmentation. The framework is shown in Figure 1. For the ensemble model, we explored two different architectures: 3D U-Net and DeepMedic.
2.2.1 3D U-Net
3D U-Net is an extension of the original U-Net that replaces all 2D operations with their 3D counterparts. We added one block to the original implementation (number of feature maps: 32, 64, 128, 256, 512, with 3×3×3 convolution kernels and 2×2×1 max-pooling). We took the full-size axial-view images and randomly sampled 8 consecutive slices along the vertical axis, resulting in an input size of 512×512×8×2 (height × width × number of slices × number of imaging modalities). We set a limit to ensure that, in each epoch, at least 70% of the samples contained tumor labels. All 3D U-Net models were trained using the rmsprop optimizer with a batch size of 1 over 300 epochs on an NVIDIA V100 GPU. The weights that performed best on the validation set were used to evaluate the final results on the test set.
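The slab-sampling step described above can be sketched as follows. This is an illustrative reconstruction: the function and parameter names (`sample_slab`, `require_tumor`) are ours, and the paper's "70% of samples contain tumor" constraint would be enforced by setting `require_tumor=True` on roughly 70% of draws:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_slab(volume, mask, n_slices=8, require_tumor=False, max_tries=20):
    """Randomly pick n_slices consecutive axial slices from a
    (H, W, D, C) volume; optionally resample until the slab
    overlaps the tumor mask (shape (H, W, D))."""
    depth = volume.shape[2]
    for _ in range(max_tries):
        start = rng.integers(0, depth - n_slices + 1)
        slab = volume[:, :, start:start + n_slices, :]
        slab_mask = mask[:, :, start:start + n_slices]
        if not require_tumor or slab_mask.any():
            return slab, slab_mask
    return slab, slab_mask  # fall back to the last draw
```

A draw such as `sample_slab(vol, msk, require_tumor=rng.random() < 0.7)` then yields tumor-containing slabs in about 70% of samples.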
2.2.2 DeepMedic

DeepMedic was originally designed for brain tumor segmentation on multi-channel MRI and has also been applied to BMs [1, 6]. It consists of multiple parallel pathways: one branch takes small patches from the full-resolution images as input, while the others use subsampled versions of the images. Unlike the original paper, the DeepMedic architecture we used contains 3 parallel convolutional pathways, one operating at the normal image resolution and the other two at low resolution, with down-sampling factors of 3 and 5. The network has 11 layers: the first 8 are convolutional layers (number of feature maps: 30, 30, 40, 40, 40, 40, 50, 50, with 3×3×3 kernels) and the last 3 are fully connected layers (250 feature maps per layer). The network was trained using the rmsprop optimizer, minimizing the cross-entropy loss over 35 epochs.
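The multi-resolution pathway idea can be illustrated with a simplified patch extractor: each pathway receives a patch of the same array size, but the subsampled pathways cover a proportionally larger field of view. This is a sketch of the concept only; the actual DeepMedic implementation uses different patch sizes per pathway and proper low-pass filtering before subsampling:

```python
import numpy as np

def multiscale_patches(volume, center, size=25, factors=(1, 3, 5)):
    """Extract co-centered patches at several resolutions.

    For factor f, a (size*f)^3 neighbourhood around `center` is
    cropped and subsampled by f, so every pathway sees a
    (size, size, size) patch with an f-times-larger field of view.
    """
    patches = []
    cz, cy, cx = center
    for f in factors:
        half = size * f // 2
        crop = volume[cz - half:cz - half + size * f,
                      cy - half:cy - half + size * f,
                      cx - half:cx - half + size * f]
        patches.append(crop[::f, ::f, ::f])  # naive strided subsampling
    return patches
```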
2.2.3 Ensemble Model
3D U-Net and DeepMedic were trained separately, with different hyperparameters and different objective functions, to maximize the capability of the ensemble model. While U-Net utilized the full field of view of each image slice and addressed overall tumor segmentation, DeepMedic leveraged image segments during model training and focused on small metastases. Furthermore, U-Net was trained to optimize the DSC, whereas DeepMedic was set to minimize the cross-entropy loss. At test time, each model individually generated probability maps of brain metastases. An ensemble confidence map was then created by averaging the predictions of the two models.
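The ensembling step reduces to a voxel-wise average of the per-model probability maps followed by thresholding. A minimal sketch (the function name and the 0.5 threshold are illustrative assumptions, not stated in the text):

```python
import numpy as np

def ensemble_prediction(prob_maps, threshold=0.5):
    """Average per-model probability maps into a confidence map.

    prob_maps : list of arrays of identical shape, one per model,
                each holding voxel-wise tumor probabilities.
    Returns the confidence map and a binary segmentation.
    """
    confidence = np.mean(prob_maps, axis=0)
    return confidence, confidence >= threshold
```

Averaging lets a confident detection from one model (e.g. DeepMedic on a small lesion) survive even when the other model is uncertain.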
3 Experiments and Results
The primary cohort for model training and testing consists of 305 patients with 864 brain metastases (median volume 760 mm³, range 3–110,000 mm³) treated with the CyberKnife G4 system at a single medical center. Each case contains volume masks of brain tumors delineated by an attending radiation oncologist or neurosurgeon on the associated CT and contrast-enhanced T1-weighted MRI scans. The dataset was randomly split into training (80%), validation (10%), and test (10%) sets. To evaluate model robustness and generalizability, we collected an additional test set from the same institution, containing 36 patients with 96 metastases (median volume 829 mm³). The results presented in this paper are the average over the two test sets.
For each case, after rigid image registration between the CT and MRI volumes, each slice was resized to 512×512 pixels with an in-plane resolution of 0.6 mm × 0.6 mm, while the slice thickness was resampled to 2 mm. A brain window and adaptive histogram equalization were applied slice-by-slice to the CT and MRI images, respectively. All image volumes were then standardized with zero-mean, unit-variance normalization.
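The CT windowing and normalization steps can be sketched as below. The brain-window parameters (level 40 HU, width 80 HU) are typical values and an assumption on our part; the paper does not state the exact window used:

```python
import numpy as np

def apply_brain_window(ct, level=40.0, width=80.0):
    # Clip CT Hounsfield units to a brain window so soft-tissue
    # contrast fills the intensity range (level/width are typical
    # values; the clinical window may differ).
    lo, hi = level - width / 2, level + width / 2
    return np.clip(ct, lo, hi)

def standardize(volume):
    # Zero-mean, unit-variance normalization over the whole volume.
    return (volume - volume.mean()) / (volume.std() + 1e-8)
```

The MRI branch would additionally apply slice-wise adaptive histogram equalization (e.g. CLAHE) before the same standardization.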
3.1 Volume-Aware Dice Loss
3.1.1 L_VADice with different reweight strategies
We evaluated the efficacy of L_VADice on the 3D U-Net. A standard 3D U-Net using multimodal learning (MRI+CT) and the conventional L_Dice was trained as the baseline model. We then compared the relative change in DSC, precision, and recall under different settings of L_VADice. In the combination of the two test sets, the tumors have a median size of 1322 voxels (949 mm³) and a mean size of 4721 voxels (3389 mm³). We tested V values of 500, 1000, 2500, and 5000 for the CR strategy.
Our baseline model achieves a DSC of 0.669, precision of 0.689, and recall of 0.700. Table 1 lists the results of applying L_VADice, relative to this baseline. Overall, L_VADice with BR yields the best performance, improving DSC by 8.57% and recall by 24.14% over the baseline. The performance of L_VADice with CR depends largely on the constant V: the higher the constant, the better the recall. This observation is consistent with our expectation that L_VADice focuses more on easily neglected small structures and therefore has higher sensitivity. However, the tumor sizes in our dataset are highly diverse, making it challenging to determine a single V value that generalizes to all tumors. The BR approach, on the other hand, is more flexible: its dynamic weighting strategy provides a balance between precision and recall.
Table 1: Relative metric change over L_Dice (%) for each L_VADice configuration.
Table 2: Pixel-wise and metastasis-wise recall for each tumor group (number of tumors, median size) under each loss.
3.1.2 L_VADice on small tumors
To evaluate the effectiveness of L_VADice on different lesion sizes, we further divided the lesions into large and small tumor groups at a cut-off of 1500 mm³. We calculated (1) the pixel-wise recall and (2) the tumor-wise recall, where a tumor is counted as detected if at least one of its pixels is predicted; note that we did not measure DSC or precision because false-positive pixels cannot easily be assigned to either the small or the large tumor group. The results are shown in Table 2. L_VADice shows a more significant improvement in the small tumor group, increasing the pixel-wise recall from 0.466 to 0.633 and the detection rate from 0.619 to 0.672, while in the large tumor group the performance of the two settings is almost identical. These results indicate that L_VADice effectively improves recall on small tumors.
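The two recall variants described above can be computed per connected component of the ground truth. A sketch (the `voxel_mm3` default assumes the 0.6 mm × 0.6 mm × 2 mm resampled grid, i.e. 0.72 mm³ per voxel; function and key names are ours):

```python
import numpy as np
from scipy import ndimage

def recall_by_size(pred, gt, voxel_mm3=0.72, cutoff_mm3=1500.0):
    """Pixel-wise and tumor-wise recall, split into small/large groups.

    A ground-truth tumor counts as detected (tumor-wise) if at least
    one of its voxels is predicted positive.
    """
    labels, n = ndimage.label(gt)
    # per group: [true-positive voxels, GT voxels, detected tumors, tumors]
    stats = {"small": [0, 0, 0, 0], "large": [0, 0, 0, 0]}
    for k in range(1, n + 1):
        component = labels == k
        group = "small" if component.sum() * voxel_mm3 < cutoff_mm3 else "large"
        tp = np.logical_and(pred, component).sum()
        s = stats[group]
        s[0] += tp
        s[1] += component.sum()
        s[2] += int(tp > 0)
        s[3] += 1
    return {g: {"pixel_recall": s[0] / s[1] if s[1] else None,
                "tumor_recall": s[2] / s[3] if s[3] else None}
            for g, s in stats.items()}
```

Splitting recall this way prevents a single large, well-segmented lesion from masking missed small metastases, which is exactly the failure mode the DSC hides.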
3.2 Deep Learning Framework
In our final proposed deep learning framework, we used (1) multimodal learning with MRI+CT, (2) ensemble learning combining 3D U-Net and DeepMedic, and (3) optimization using L_VADice.
As shown in Table 3, 3D U-Net and DeepMedic obtain DSCs of 0.669 and 0.625, respectively. DeepMedic uses a patch-based training method, which makes the network focus on smaller regions and contributes to a higher recall; 3D U-Net, on the other hand, takes full-resolution slices as input, and the advantage of seeing the complete brain structure leads to higher precision. The ensemble of the two models improves the DSC to 0.719. Further applying L_VADice (with BR) to the 3D U-Net, we achieve the best DSC of 0.740 and recall of 0.803. These results indicate that our deep learning approach effectively increases DSC, precision, and recall, surpassing the current benchmark methods. Examples of prediction results are shown in Figure 2.
Table 3: DSC, precision, and recall of each model configuration.

| Model | L_VADice | DSC | Precision | Recall |
|---|---|---|---|---|
| 3D U-Net | | 0.669 (0.006) | 0.689 (0.001) | 0.700 (0.015) |
| DeepMedic | | 0.625 (0.013) | 0.631 (0.004) | 0.734 (0.035) |
| 3D U-Net + DeepMedic | | 0.719 (0.004) | 0.788 (0.002) | 0.713 (0.023) |
| 3D U-Net + DeepMedic | ✓ | 0.740 (0.022) | 0.779 (0.010) | 0.803 (0.001) |
As the annotations were carried out by the neurosurgeon or radiation oncologist during SRS treatment planning, the ground-truth labels represent the area targeted for treatment rather than the actual tumor extent, which leads to imperfect annotations for tumor segmentation. Depending on the clinician's experience and the patient's disease status, these annotations can be delineated more aggressively or more conservatively. Figure 3(a) illustrates an example of aggressive treatment planning, where the contour covers a broader area than the lesion. In addition, the clinician may ignore previously treated tumors (Figure 3(b)). Last, some difficult cases, such as Figure 3(c) and (d), are highly subjective and should be determined from clinical manifestations or serial changes on MRI. The cases mentioned above can cause our model's performance to be underestimated and lead to higher variance.
In this paper, we have achieved high performance in the automated detection and segmentation of brain metastases, utilizing multimodal imaging (MRI+CT) as input and ensemble neural networks. We have also addressed the challenge of lesion-size variance in multiple metastases by introducing a volume-aware Dice loss, which leverages the information of lesion size and significantly enhances both the overall segmentation and the sensitivity to small lesions, which are critical in the current SRS contouring workflow. We expect the proposed solution to facilitate tumor contouring and treatment planning for stereotactic radiosurgery.
References

- Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network. Computers in Biology and Medicine 95, pp. 43–54.
- (2016) 3D U-Net: learning dense volumetric segmentation from sparse annotation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 424–432.
- (2018) Stereotactic radiosurgery in the management of patients with brain metastases of non-small cell lung cancer: indications, decision tools and future directions. Frontiers in Oncology 8, pp. 154.
- (2017) Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis 36, pp. 61–78.
- (2015) Treatment of brain metastases. Journal of Clinical Oncology 33 (30), pp. 3475.
- (2017) A deep convolutional neural network-based automatic delineation strategy for multiple brain metastases stereotactic radiosurgery. PLoS ONE 12 (10), pp. e0185844.
- (2015) The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging 34 (10), pp. 1993–2024.
- (2016) V-Net: fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pp. 565–571.
- (2014) The role of imaging in radiation therapy planning: past, present, and future. BioMed Research International 2014.
- (2012) Radiotherapeutic and surgical management for newly diagnosed brain metastasis(es): an American Society for Radiation Oncology evidence-based guideline. Practical Radiation Oncology 2 (3), pp. 210–225.
- (2016) Uncertainties in volume delineation in radiation oncology: a systematic review and recommendations for future studies. Radiotherapy and Oncology 121 (2), pp. 169–179.