Practical application of AI medical imaging methods requires accurate and robust performance on unseen domains, such as differences in acquisition protocols across centers, scanner vendors, and patient populations (see Fig. 1). Unfortunately, labeled medical datasets are typically small and do not include sufficient variability for robust deep learning training. The lack of large, diverse medical imaging datasets often leads to marginal deep learning model performance on new "unseen" domains, which limits their application in clinical practice.
To improve model performance on unseen domains, transfer learning methods fine-tune a portion of a pre-trained network given a small amount of annotated data from the unseen target domain. However, transfer learning for 3D medical images often lacks quality pre-trained models (trained on large amounts of data). Domain adaptation methods do not require annotations in the unseen domain, but usually require all source and target domain images to be available during training [11, 1, 10]. The assumption of a known target dataset is restrictive and makes multi-site deployment impractical. Furthermore, due to medical data privacy requirements, it is difficult to collect both the source and target datasets beforehand.
In the field of medical imaging, we are usually faced with the difficult situation where the training dataset is derived from a single center and acquired with a specific protocol. In such situations, domain generalization methods seek a robust model, trained once, that is capable of generalizing well to unseen domains. In 2D computer vision applications, researchers have focused on data augmentation of varying complexity to expand the available data distribution. Specifically, data augmentation strategies are performed in input space or via adversarial learning. Compared to natural 2D images, 3D medical image domain variability is more compact. Within the same modality, e.g., T2 MRI or ultrasound, images from different vendors (GE, Philips, Siemens), scanning protocols, and patient populations differ visually mainly in three aspects: image quality, image appearance, and spatial shape (see Fig. 1). Other imaging modalities, such as CT, generally have more consistent image characteristics.
Motivated by the observed heterogeneity of 3D medical images, we propose a systematic augmentation approach consisting of a series of transformations that simulate the domain shift properties of medical imaging data. We call this approach Deep Stacked Transformations (DST) augmentation. DST operates in image space, where input images undergo nine stacked transformations. Each transformation is controlled by two parameters, which determine the probability and magnitude of the image transformation. As the backbone semantic segmentation network we use AH-Net.
In 3D medical imaging applications, the selection of image augmentations is often intuitive (e.g., random crop or flip), inherited from 2D computer vision applications. Furthermore, the contribution of each augmentation method is rarely evaluated on unseen domains. In this work, we comprehensively evaluate the effect of various data augmentation techniques on the generalization of 3D segmentation to unseen domains. The evaluation tasks include segmentation of the whole prostate from 3D MRI, the left ventricle from 3D ultrasound, and the left atrium from 3D MRI. For each task we have up to 4 different datasets, allowing us to train on one and evaluate generalization to the others. The results and analysis:
-  reveal the main factors causing domain shift in 3D medical imaging modalities;
-  demonstrate that DST augmentation substantially outperforms conventional augmentation and CycleGAN-based domain adaptation on unseen domains for both MRI and ultrasound, with improvements also observed on the same domain (albeit much less noticeable);
-  show that, given a larger training dataset, DST achieves state-of-the-art segmentation accuracy on unseen domains.
To improve the generalization of a 3D medical semantic segmentation method, we use a series of stacked augmentation transforms applied to input images during training. Each transformation τ is an image processing function with two hyper-parameters: probability p and magnitude m. The stack is applied in sequence,

(X', Y') = τ_n( ... τ_1(X, Y; p_1, m_1) ... ; p_n, m_n),

where (X, Y) are the input image and its corresponding label. Augmentation transforms alter the image quality, appearance, and spatial structure. Specifically, DST consists of the following transforms: sharpening, blurring, noise, brightness adjustment, contrast change, intensity perturbation, rotation, scaling, and deformation, in addition to random cropping. In DST, the transforms are applied in the order listed; model performance is not sensitive to different orders. As we show in our experiments, augmenting images during training results in models with more robust segmentations than if data processing/synthesis is performed at the inference stage. Fig. 2 shows examples of DST augmentation in 3D MRI and ultrasound, demonstrating the ability to mimic image appearances of unseen domains within a given modality.
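The probability/magnitude mechanism above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the two toy transforms standing in for the nine DST transforms, and the sampling details are our assumptions.

```python
import numpy as np

_rng = np.random.default_rng(0)  # module-level generator for the toy transforms

def make_transform(fn, prob, mag_range, rng):
    """Wrap an image-processing function with probability p and magnitude m
    (sampled uniformly from mag_range), as each DST transform is controlled."""
    def apply(image):
        if rng.random() < prob:                  # fire with probability p
            magnitude = rng.uniform(*mag_range)  # sample magnitude m
            return fn(image, magnitude)
        return image
    return apply

# Two toy transforms standing in for the nine DST transforms.
def add_noise(img, std):
    return img + _rng.normal(0.0, std, img.shape)

def adjust_brightness(img, shift):
    return np.clip(img + shift, 0.0, 1.0)

def stacked_transform(image, transforms):
    """Apply the stack of transforms in the fixed order given."""
    for t in transforms:
        image = t(image)
    return image

rng = np.random.default_rng(42)
stack = [
    make_transform(add_noise, prob=0.5, mag_range=(0.1, 1.0), rng=rng),
    make_transform(adjust_brightness, prob=0.5, mag_range=(-0.1, 0.1), rng=rng),
]
volume = np.zeros((32, 96, 96), dtype=np.float32)  # dummy 3D sub-volume
augmented = stacked_transform(volume, stack)
```

In this sketch each wrapped transform is independent, so extending the stack to all nine DST transforms only requires appending more `make_transform` entries in the desired order.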
Fig. 2: Examples of deep stacked transformation (DST) results on (a) whole prostate MRI, (b) left atrial MRI, and (c) left ventricle ultrasound. 1st row: ROIs randomly cropped from source domains; 2nd row: corresponding ROIs after DST; 3rd row: ROIs randomly cropped from unseen domains. The image pairs of the 2nd–3rd rows have better visual similarity than those of the 1st–3rd rows.
Image Quality is related to the sharpness, blurriness, and noise level of medical images. Blurriness is commonly caused by MR/ultrasound motion artifacts and limited resolution. Gaussian filtering is used to blur the image, with a magnitude (Gaussian std) ranging between [0.25, 1.5]. Sharpening has the reverse effect, using unsharp masking with strength in [10, 30]. Noise is added (from a normal distribution with std in [0.1, 1.0]) to account for possible noise in images.
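The three image-quality transforms could look roughly like the following sketch with `scipy.ndimage`; the function names and the fixed sigma inside the unsharp mask are our assumptions, while the magnitude ranges are taken from the text.

```python
import numpy as np
from scipy import ndimage

def blur(img, std):
    """Gaussian blur; std sampled from [0.25, 1.5] in DST."""
    return ndimage.gaussian_filter(img, sigma=std)

def sharpen(img, strength):
    """Unsharp masking: add back the high-frequency residual.
    strength sampled from [10, 30] in DST; sigma=1.0 is our choice."""
    blurred = ndimage.gaussian_filter(img, sigma=1.0)
    return img + strength * (img - blurred)

def add_noise(img, std, rng):
    """Additive Gaussian noise; std sampled from [0.1, 1.0] in DST."""
    return img + rng.normal(0.0, std, img.shape)

rng = np.random.default_rng(0)
vol = rng.random((16, 64, 64)).astype(np.float32)
out = add_noise(sharpen(blur(vol, std=0.5), strength=10.0), std=0.1, rng=rng)
```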
Image Appearance is associated with the statistical characteristics of image intensities, such as variations in brightness and contrast, which often result from different scanning protocols and device vendors. Brightness augmentation refers to a random shift in [-0.1, 0.1] in intensity space. Contrast augmentation refers to gamma correction with gamma (magnitude) ranging between [0.5, 4.5]. Finally, we use a random linear transform in intensity space with the magnitudes of scale and shift sampled from [-0.1, 0.1], which we refer to as intensity perturbation.
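A plausible sketch of the three appearance transforms, assuming intensities normalized to [0, 1] as described in the implementation details; the function names are ours.

```python
import numpy as np

def brightness(img, shift):
    """Random intensity shift; shift sampled from [-0.1, 0.1] in DST."""
    return np.clip(img + shift, 0.0, 1.0)

def contrast(img, gamma):
    """Gamma correction; gamma sampled from [0.5, 4.5] in DST."""
    return np.power(np.clip(img, 0.0, 1.0), gamma)

def intensity_perturbation(img, scale, shift):
    """Random linear transform in intensity space;
    scale and shift both sampled from [-0.1, 0.1] in DST."""
    return (1.0 + scale) * img + shift

vol = np.linspace(0.0, 1.0, 8, dtype=np.float32)
out = intensity_perturbation(contrast(brightness(vol, 0.05), 2.0), 0.05, -0.02)
```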
Spatial Transforms include rotation, scaling, and deformation. Rotation variability is usually caused by different patient orientations during scanning (we use a [-20, 20] degree range). Scaling and deformation are due to organ shape variability and soft tissue motion. Random scaling is used with magnitude in [0.4, 1.6]. The deformation transform uses regular grid interpolation after a random perturbation (Gaussian smoothed, std in [10, 13]). The same spatial transforms are applied to both input images and the corresponding labels. These operations are computationally expensive for large 3D volumetric data. A GPU-based acceleration approach could be developed, but it is preferable to allocate the maximal GPU memory capacity to model training and to perform data augmentation on the fly. In addition, since a whole 3D volume does not fit into GPU memory, sub-volume crops are usually fed into network training. We develop an efficient CPU-based spatial transform technique based on an open-source implementation (https://github.com/MIC-DKFZ/batchgenerators), which first calculates the 3D coordinate grid of the sub-volume (of the training crop size in voxels), applies the transformations (combining random 3D rotation, scaling, deformation, and cropping) to the grid, and then performs image interpolation. We accelerate this further by performing interpolation only within the minimal cuboid containing the 3D coordinate grid; as a result, the computational time is independent of the input volume size (i.e., it depends only on the cropped sub-volume size), and the spatial transform augmentation can be performed on the fly during training.
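The minimal-cuboid trick above can be illustrated as follows. This is a simplified sketch, not the batchgenerators implementation: it applies only an in-plane rotation and isotropic scaling (the full method also includes deformation and random crop placement), and all names are ours.

```python
import numpy as np
from scipy import ndimage

def spatial_crop(volume, crop=(32, 32, 32), angle_deg=10.0, scale=1.2, order=1):
    """Sample a transformed sub-volume: build the output crop's coordinate
    grid, apply an affine to the grid, then interpolate only inside the
    minimal cuboid covering the sampling coordinates."""
    center = np.array(volume.shape, float) / 2.0
    # Coordinate grid of the output crop, centered at the volume center.
    grid = np.stack(np.meshgrid(*[np.arange(c, dtype=float) for c in crop],
                                indexing="ij"))            # shape (3, *crop)
    grid -= np.array(crop, float).reshape(3, 1, 1, 1) / 2.0
    a = np.deg2rad(angle_deg)
    rot = np.array([[1, 0, 0],                             # in-plane rotation
                    [0, np.cos(a), -np.sin(a)],            # about the first axis
                    [0, np.sin(a),  np.cos(a)]])
    coords = scale * np.einsum("ij,j...->i...", rot, grid) \
             + center.reshape(3, 1, 1, 1)
    # Minimal cuboid containing all sampling coordinates.
    lo = np.maximum(np.floor(coords.reshape(3, -1).min(1)).astype(int), 0)
    hi = np.minimum(np.ceil(coords.reshape(3, -1).max(1)).astype(int) + 1,
                    volume.shape)
    cuboid = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    local = coords - lo.reshape(3, 1, 1, 1)                # shift into cuboid
    return ndimage.map_coordinates(cuboid, local, order=order, mode="nearest")

vol = np.random.default_rng(1).random((96, 96, 96)).astype(np.float32)
sub = spatial_crop(vol)  # interpolation cost depends only on the crop size
```

For labels, the same `coords` would be reused with `order=0` (nearest-neighbor) so that segmentation values are not blended.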
We validate our method on three segmentation tasks: segmentation of the whole prostate from 3D MRI, the left atrium from 3D MRI, and the left ventricle from 3D ultrasound.
Task 1: For whole prostate segmentation from 3D MRI, we use the following datasets: the Prostate dataset from the Medical Segmentation Decathlon (http://medicaldecathlon.com/index.html) (MSD-P), PROMISE12, NCI-ISBI13 (http://doi.org/10.7937/K9/TCIA.2015.zF0vlOPv), and ProstateX. We train on the MSD-P dataset (source domain) and evaluate on the other datasets (unseen domains). We use only a single input channel (T2) and segment the whole prostate, which is the lowest common denominator among the datasets. One study in ProstateX was excluded due to a prior surgical procedure.
Task 2: For left atrial segmentation from 3D MRI, we use the following datasets: the Heart dataset from MSD (MSD-H), ASC, and MM-WHS. We train on the MSD-H dataset (source domain) and evaluate on the other datasets.
Task 3: For left ventricle segmentation from 3D ultrasound, we use data from CETUS (https://www.creatis.insa-lyon.fr/Challenge/CETUS/) (30 volumes). We manually split the dataset into 3 subsets corresponding to different ultrasound device vendors (A, B, C), with 10 volumes each. We used heuristics to identify vendor association, and acknowledge that our split strategy may include wrong associations. We train on Vendor A images and evaluate on Vendors B and C.
Table 1: Datasets used for each task (1. MRI – whole prostate; 2. MRI – left atrial; 3. Ultrasound – left ventricle).
We implemented our approach in TensorFlow and trained on an NVIDIA Tesla V100 16GB GPU. We use AH-Net as the backbone for 3D segmentation, which takes advantage of a 2D pretrained ResNet50 as the encoder and learns a full 3D decoder. All data are re-sampled to 1x1x1 mm isotropic resolution and normalized to the [0, 1] intensity range. We use a crop size of 96x96x32 with batch size 16 for Task 1, a crop size of 96x96x96 with batch size 16 for Task 2, and a crop size of 96x96x96 with batch size 4 for Task 3. We use a soft Dice loss and the Adam optimizer with a fixed learning rate. We use a 0.5 probability for each transformation in DST.
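The preprocessing described above (isotropic resampling and intensity normalization) might be implemented roughly as follows; the helper name and the assumption that the original voxel spacing is known in millimeters are ours.

```python
import numpy as np
from scipy import ndimage

def preprocess(volume, spacing_mm):
    """Resample a 3D volume to 1x1x1 mm isotropic resolution and
    min-max normalize intensities to [0, 1]."""
    zoom = np.asarray(spacing_mm, float)          # old_spacing / 1.0 mm target
    resampled = ndimage.zoom(volume, zoom, order=1)
    lo, hi = resampled.min(), resampled.max()
    return (resampled - lo) / (hi - lo + 1e-8)    # guard against flat volumes

vol = np.random.default_rng(0).random((10, 20, 20)).astype(np.float32)
iso = preprocess(vol, spacing_mm=(2.0, 1.0, 1.0))  # 2 mm slices upsampled 2x
```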
3.3 Experimental Results and Analysis
First, we evaluate the generalization performance of each augmentation transform individually. As a baseline, only random cropping with no other augmentations is used. We compare the results to DST with all 9 transformations stacked, and to a popular domain adaptation method, CycleGAN, which maps the unseen images (on a per-slice basis) into a source-like appearance (we split each dataset 4:1 for CycleGAN training and validation, and train for 200 epochs).
Table 2 lists segmentation Dice results on the source domain (trained on this domain and validated on a held-out subset) and on unseen domains (trained on the source but tested on other, unseen datasets). The major findings are:
Table 2 (fragment): fully supervised per-dataset Dice scores — Task 1 (MRI, whole prostate): -, 91.4, 88.0, 91.9*; Task 2 (MRI, left atrial): -, 94.2, 88.6; Task 3 (US, left ventricle): -, 92.5*, 92.5*; all tasks: -, 91.4.
DST augmentation performs substantially better than any one of the tested augmentations. On average, across the different tasks, DST achieves 80% generalization Dice on unseen domains. This compares with the baseline (49.8%) and CycleGAN (63.5%), both of which achieve worse generalization performance, even though CycleGAN domain adaptation was exposed to unseen-domain images.
In 3D MRI, image quality and appearance augmentations had the most impact, with the largest improvements coming from sharpening, followed by contrast, brightness, and intensity perturbation. Spatial transforms had less impact in prostate MRI than in heart MRI, where the shape, size, and orientation of the heart can be very different (see Fig. 1).
In ultrasound, the main contributions came from spatial scaling, followed by brightness, blurring, and contrast augmentations (see Fig. 1(c)).
In some datasets (such as ASC), all of the individual augmentations and CycleGAN performed very poorly in terms of Dice, whereas DST had reasonable performance. This supports our claim that comprehensive transforms are required to cover the potentially large variability of unseen data.
Individual augmentation transforms may perform slightly better in some isolated cases (e.g., brightness augmentation for WHS), but on average only DST consistently shows good generalization. Even the combination of the top 4 performing augmentations (top4) is not sufficient for robust generalization.
Using only a simple random crop (baseline) does not generalize well to unseen datasets (with Dice dropping by as much as 40%), which supports the importance of data augmentation in general.
Besides the improvements on unseen domains, DST slightly improves performance (by 2.5%) on the source domains as well (it is valuable that the source-domain performance does not degrade).
DST performance is 10% worse than that of fully supervised methods, which have the advantage of training and testing on the same domain with more training data. This gap can be reduced by using a larger source dataset (as shown in Section 3.3.1), in which case DST performance is comparable to the supervised methods.
Examples of unseen domain segmentations produced by the baseline model, CycleGAN-based domain adaptation, and DST domain generalization are shown in Fig. 3. The baseline and DST are trained only on individual source domains, while CycleGAN requires images from the target/unseen domain to train an additional generative model.
3.3.1 DST with Larger Dataset.
So far we have evaluated DST generalization performance using small (30 volume) public datasets. In this section, we experiment with a larger dataset and demonstrate generalization performance comparable to supervised state-of-the-art methods.
We train a model with DST on a proprietary dataset of 465 3D MRIs (denoted MultiCenter) with whole prostate annotations, collected from various medical centers worldwide. Table 3 shows the results on unseen datasets. Overall, using a large source dataset, DST produces competitive results, with Dice only 0.8% lower than state-of-the-art supervised methods; those supervised models were trained on each domain individually, whereas we achieve similar performance training only on the source domain. Importantly, on the unseen domain our DST model achieves the same performance as two radiologists (relative novice versus expert): it achieves a Dice score of 91.9% on the unseen ProstateX dataset, matching the Dice score between the novice and expert radiologist annotations on the same dataset (also 91.9%). These findings suggest the feasibility of practical deployment of deep learning models at clinical sites, where the trained DST model generalizes well to unseen data.
Table 3 (fragment): state-of-the-art supervised Dice scores on the unseen datasets — 91.4*, 88.0*, 91.9* (average 90.4).
We propose the deep stacked transformations (DST) augmentation approach for unsupervised domain generalization in 3D medical image segmentation. We evaluate DST and different augmentation strategies on three segmentation tasks (prostate 3D MRI, left atrial 3D MRI, and left ventricle 3D ultrasound) when applied to unseen domains. The experiments establish a strong benchmark for the study of domain generalization in medical imaging. Furthermore, using a larger training dataset, we show that DST generalization performance is comparable to fully supervised state-of-the-art methods, making deep learning segmentation more feasible in practice.
-  Degel, M.A., Navab, N., Albarqouni, S.: Domain and geometry agnostic CNNs for left atrium segmentation in 3D ultrasound. In: MICCAI. pp. 630–637 (2018)
-  Jia, H., Song, Y., Zhang, D., Huang, H., Feng, D., Fulham, M., Xia, Y., Cai, W.: 3D global convolutional adversarial network for prostate MR volume segmentation. arXiv preprint arXiv:1807.06742 (2018)
-  Litjens, G., Debats, O., Barentsz, J., Karssemeijer, N., Huisman, H.: Computer-aided detection of prostate cancer in MRI. TMI 33(5), 1083–1092 (2014)
-  Litjens, G., Toth, R., van de Ven, W., Hoeks, C., Kerkstra, S., van Ginneken, B., Vincent, G., Guillard, G., Birbeck, N., Zhang, J., et al.: Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge. Medical Image Analysis 18(2), 359–373 (2014)
-  Liu, S., Xu, D., Zhou, S.K., Pauly, O., Grbic, S., Mertelmeier, T., Wicklein, J., Jerebko, A., Cai, W., Comaniciu, D.: 3D anisotropic hybrid network: Transferring convolutional features from 2D images to 3D anisotropic volumes. In: MICCAI. pp. 851–858. Springer (2018)
-  Romera, E., Bergasa, L.M., Alvarez, J.M., Trivedi, M.: Train here, deploy there: Robust segmentation in unseen domains. In: 2018 IEEE Intelligent Vehicles Symposium (IV). pp. 1828–1833. IEEE (2018)
-  Volpi, R., Namkoong, H., Sener, O., Duchi, J., Murino, V., Savarese, S.: Generalizing to unseen domains via adversarial data augmentation. In: NeurIPS (2018)
-  Xiong, Z., Fedorov, V.V., Fu, X., Cheng, E., Macleod, R., Zhao, J.: Fully automatic left atrium segmentation from late gadolinium enhanced magnetic resonance imaging using a dual fully convolutional neural network. TMI 38(2), 515–524 (2019)
-  Yasaka, K., Abe, O.: Deep learning and artificial intelligence in radiology: Current applications and future directions. PLoS Medicine 15(11), e1002707 (2018)
-  Zhang, Y., Miao, S., Mansi, T., Liao, R.: Task driven generative modeling for unsupervised domain adaptation: Application to X-ray image segmentation. In: MICCAI (2018)
-  Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: ICCV. pp. 2223–2232 (2017)
-  Zhu, Q., Du, B., Yan, P.: Boundary-weighted domain adaptive neural network for prostate MR image segmentation. arXiv preprint arXiv:1902.08128 (2019)
-  Zhuang, X., Shen, J.: Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI. Medical Image Analysis 31, 77–87 (2016)