Automatic Online Quality Control of Synthetic CTs

11/12/2019 ∙ by Louis D. van Harten, et al. ∙ Amsterdam UMC

Accurate MR-to-CT synthesis is a requirement for MR-only workflows in radiotherapy (RT) treatment planning. In recent years, deep learning-based approaches have shown impressive results in this field. However, to prevent downstream errors in RT treatment planning, it is important that deep learning models are only applied to data for which they are trained and that generated synthetic CT (sCT) images do not contain severe errors. For this, a mechanism for online quality control should be in place. In this work, we use an ensemble of sCT generators and assess their disagreement as a measure of uncertainty of the results. We show that this uncertainty measure can be used for two kinds of online quality control. First, to detect input images that are outside the expected distribution of MR images. Second, to identify sCT images that were generated from suitable MR images but potentially contain errors. Such automatic online quality control for sCT generation is likely to become an integral part of MR-only RT workflows.




1 Introduction

In recent years, substantial work has been published exploring the possibilities for MR-only workflows in radiotherapy (RT) treatment [1]. RT treatment planning typically requires patients to undergo an MR scan for soft-tissue imaging and a CT scan to estimate radiation dose absorption in and around the treatment target volumes. To reduce costs and ionizing radiation exposure for the patient, as well as to speed up the RT treatment planning process, it would be desirable to omit CT acquisition from the treatment workflow. The recent availability of MR-Linac machines has fueled the interest in such MR-only RT treatment [2]. An MR-only workflow replaces the CT image with a synthetic CT (sCT) image generated from the MR image. Impressive results for sCT generation have been achieved using deep learning, in particular using generative adversarial networks (GANs) [1, 3, 4].

For MR-only workflows to be adopted in the clinic, CT synthesis should be accurate and robust. An sCT generator is a model that defines a mapping between the data distribution of MR images and the data distribution of CT images, but there is no exact analytical form of this mapping. Deep learning-based methods for sCT generation perform well when the input at test time is from the same distribution as the training data, but they may fail when applied to an MR image that falls outside this distribution. This could be the case when severe pathology is present or when patients are scanned with a different MRI machine due to logistics or maintenance. More robust sCT generative models could be obtained by using larger and more diverse training sets, i.e. by providing a wider data distribution at training time. However, it cannot be guaranteed that the entire space of possible inputs is covered. In case an input falls outside the training distribution, an online quality control mechanism should be in place to alert a human operator, minimizing the chance that severe errors in the automatically generated synthetic CT images cause inaccurate treatment plans [5].

In this work, we propose a method for such quality control in 2D CNN-based sCT synthesis models by evaluating the model uncertainty. Uncertainty in sCT generation has previously been explored in the context of Gaussian mixture regression [6] and deep learning with Bayesian sampling [7]. Here, we determine uncertainty as the voxel-wise ensemble disagreement among three GANs trained for sCT generation of the head. Such a method for uncertainty estimation was originally proposed as an efficient alternative to Bayesian sampling, showing promising results on natural imaging tasks [8]. We build on this method by training the three sCT generators in the ensemble on different views of the data (i.e. axial, coronal or sagittal slices). This means each network is trained to estimate a mapping between a different pair of data distributions. This prevents the three networks from learning an identical function, which would result in all networks making identical mistakes when presented with out-of-distribution input data.

We train an ensemble of three GANs using T1-weighted images and evaluate whether our uncertainty metric can differentiate between three kinds of inputs at test time: T1-weighted images from the same distribution as the training set, T1-weighted images with gadolinium-based contrast agent acquired on the same scanner, and T1-weighted images acquired using a different sequence on a different scanner model. We show that, based on the uncertainty of the model, we can identify unsupported inputs, such as MR images from a different scanner, that may lead to erroneous outputs. Furthermore, we explore whether the generated uncertainty maps can be used for more fine-grained quality estimation of the generated sCT images by investigating the correlation between the uncertainty and the quality of the produced sCT.

Figure 1: A schematic overview of the proposed processing pipeline. Gaxi, Gcor and Gsag are CT generative networks trained on axial, coronal and sagittal slices, respectively. The voxel-wise median among the three predictions is taken as the sCT result, while the maximum voxel-wise difference yields an uncertainty map.

2 Data

This study includes brain MR and CT images of 52 patients who were scanned at the University Medical Center Utrecht (Utrecht, the Netherlands) for RT treatment planning. For each patient, planning T1-weighted MR brain volumes were available, both with and without gadolinium-based contrast agent, acquired using a Philips Ingenia 1.5T MR system. Volumes were acquired with a voxel size of mm, an 8° flip angle, a 7 ms repetition time, and a 3.1 ms echo time. Scans were reconstructed to a voxel size of mm. For each MR volume, a matching CT volume was available, helically acquired on a Philips Brilliance Big Bore scanner at 120 kVp and 450 mA. The CT volumes were reconstructed with a slice thickness of 1 mm and in-plane resolutions varying between 0.7 and 1.0 mm. The MR volumes in this set were resampled and rigidly registered to the CT volumes. In the remainder of this work, we refer to the MR volumes without gadolinium-based contrast agent from this set as the RT set, and to the MR volumes with contrast agent as the RTgado set.

In addition, we included 34 3T T1-weighted MR brain volumes of healthy volunteers included in the OASIS project [9]. These volumes were acquired on a Siemens scanner with a voxel size of mm and were vertically resampled to an isotropic voxel size of 1.0 mm. No CT volumes were available for these MR volumes. We refer to this set as the OASIS set.

Figure 2: A schematic overview of the CycleGAN configuration. Solid arrows indicate the flow of data; dashed lines indicate the loss terms for the generators. The two generators create a synthetic CT and a synthetic MR, shown in the middle of the figure. The opposite generators (duplicated in the figure for visual simplicity) are used to recreate the original input images (yielding rMR and rCT). Two discriminators are trained simultaneously to differentiate between real and synthetic images; the generators are trained to fool these discriminators via adversarial loss terms. The cycle-consistency loss terms (L1 norms of MR−rMR and CT−rCT) encourage the reconstructed images to match the original images.
Figure 3: Results from each of the three test sets. From left to right: input image, results generated using Gaxi, Gcor and Gsag, and the resulting uncertainty map. Inputs from top to bottom: RT MR (in-distribution), RTgado MR (out-of-distribution) and OASIS MR (out-of-distribution). The upper two rows show matched slices from the same patient. Values in the upper right corner indicate the mean uncertainty within the body contour for the shown slices. Note that the models were only trained using images in the RT data set.

3 Methods

We trained an ensemble of three MR-to-CT neural networks: Gaxi, Gcor and Gsag, which generate sCT slices in the axial, coronal and sagittal plane, respectively. These networks were trained using the CycleGAN configuration proposed by Zhu et al. [10] to learn a mapping between the input MR slices and CT slices, as per Wolterink et al. [3]. A schematic overview of the CycleGAN configuration is shown in Fig. 2. The configuration contains two generator networks that are trained to learn a directional mapping between the MR and CT domains, along with two discriminator networks trained to distinguish real images from each domain from the images produced by the generator networks. At the same time, the generators are trained to fool the discriminators, with the aim of synthesizing realistic images. A cycle-consistency loss term is added to the generator loss, which maximizes the amount of information preserved when mapping a generated image back to its original domain. This encourages the generators to map the information from the input domain onto the output domain in a way that results in realistic images. Once the networks are trained, the two generators can be used separately to map images from one domain to the other. In our experiments, only the MR-to-CT generators of the trained CycleGAN models are used. Note that three separate instances of this configuration were trained: one for each anatomical plane.
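As a concrete illustration, the combined generator objective described above can be sketched in a few lines. This is a minimal NumPy sketch, not the authors' implementation: the generator and discriminator arguments are stand-in callables, the adversarial terms use a least-squares form (one common choice for CycleGANs), and the cycle-consistency weight `lam` is an assumed hyperparameter.

```python
import numpy as np

def cyclegan_generator_loss(mr, ct, g_mr2ct, g_ct2mr, d_ct, d_mr, lam=10.0):
    """Sketch of the combined generator objective: adversarial terms
    (least-squares GAN form) plus the cycle-consistency L1 terms."""
    sct = g_mr2ct(mr)   # synthetic CT from real MR
    smr = g_ct2mr(ct)   # synthetic MR from real CT
    rmr = g_ct2mr(sct)  # reconstructed MR (MR -> sCT -> rMR)
    rct = g_mr2ct(smr)  # reconstructed CT (CT -> sMR -> rCT)

    # Adversarial terms: generators try to make the discriminators output 1.
    adv = np.mean((d_ct(sct) - 1.0) ** 2) + np.mean((d_mr(smr) - 1.0) ** 2)
    # Cycle-consistency terms: L1 norms of MR-rMR and CT-rCT.
    cyc = np.mean(np.abs(mr - rmr)) + np.mean(np.abs(ct - rct))
    return adv + lam * cyc
```

With identity generators and discriminators that are fully fooled, both terms vanish, which is the intended optimum of the generator objective.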

At test time, an input MR image is processed using Gaxi, Gcor, and Gsag, resulting in three 3D sCT images. These sCT images are combined into one result by taking the median Hounsfield unit (HU) at each voxel. Moreover, for each voxel the uncertainty is determined as the maximum absolute difference of the intensity values between any two of the three network results (Fig. 1). To obtain one metric for uncertainty in a generated sCT, we average the uncertainty for all voxels inside a body contour. This leads to an uncertainty metric that is insensitive to the volume of the head. The body contours are automatically extracted from the input MR volumes using thresholding and morphological region filling.
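The fusion step described above is straightforward to sketch with NumPy; note that for three values, the maximum pairwise absolute difference is simply the range (max minus min). Function and argument names are illustrative, not from the authors' code.

```python
import numpy as np

def combine_ensemble(sct_axi, sct_cor, sct_sag, body_mask):
    """Fuse three sCT volumes (in HU) into one result and an uncertainty map.
    The voxel-wise median across the ensemble gives the sCT; the maximum
    pairwise absolute difference per voxel gives the uncertainty."""
    stack = np.stack([sct_axi, sct_cor, sct_sag])  # (3, ...) ensemble axis first
    sct = np.median(stack, axis=0)
    # For three values, max pairwise difference equals max minus min.
    uncertainty = stack.max(axis=0) - stack.min(axis=0)
    # Single scalar per image: mean uncertainty inside the body contour,
    # which makes the metric insensitive to head volume.
    mean_unc = uncertainty[body_mask].mean()
    return sct, uncertainty, mean_unc
```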

4 Experiments and Results

The RT set was divided into a training set of 30 patients, a test set of 20 patients and a validation set of 2 patients. Using the training set, three CycleGAN models were trained using the training scheme proposed by Zhu et al. [10]. Each model was trained using 2D slices from one imaging plane, to learn a mapping between the MR volumes without contrast agent and the CT volumes. The models were trained using randomly cropped patches of at most 256×256 pixels. During inference, slices were constructed by applying the network to patches from each of the corners of the slice and stitching the resulting sCT patches together. In regions where multiple of these corner patches overlapped, the HU values from the first patch to finish processing were selected.
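The corner-patch stitching described above can be sketched as follows. This is a hypothetical reconstruction under stated assumptions: in the paper the overlap winner is whichever patch finishes processing first, which is nondeterministic, so this sketch uses a fixed corner order instead.

```python
import numpy as np

def corner_patch_inference(slice_2d, net, patch=256):
    """Run `net` on a 2D slice larger than the network's patch size by
    evaluating one patch anchored at each of the four corners and stitching
    the outputs. Overlap regions keep the value written first (fixed order
    here, standing in for 'first patch to finish' in the paper)."""
    h, w = slice_2d.shape
    ph, pw = min(patch, h), min(patch, w)
    out = np.full((h, w), np.nan)
    corners = [(0, 0), (0, w - pw), (h - ph, 0), (h - ph, w - pw)]
    for r, c in corners:
        pred = net(slice_2d[r:r + ph, c:c + pw])
        region = out[r:r + ph, c:c + pw]  # view into `out`
        fill = np.isnan(region)           # only voxels not yet written
        region[fill] = pred[fill]
    return out
```

With patches anchored at all four corners, the union of patches covers the full slice whenever the patch size is at least half the slice size in each dimension.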

The MR-to-CT generative models were used to compute sCT images and uncertainty maps from the three test sets: 20 RT MR volumes without contrast agent (RT), 20 RT MRs with gadolinium contrast agent (RTgado) and 34 OASIS MR volumes (OASIS). As the models were only trained using images from the distribution of the RT set, the last two sets contain out-of-distribution MR inputs to the generative models. An example slice from each test set is shown in Fig. 3, along with the corresponding slices from the sCT images produced by the three generators and the uncertainty map resulting from these slices.

We compute the average uncertainty within an automatically extracted body contour. This yields a single uncertainty value for each generated sCT. The average uncertainties for all images in the three data sets are shown in Fig. 4. An unpaired t-test shows that the average uncertainty in the RTgado (144±9.6 HU) and OASIS (202±13.7 HU) sets is significantly (p < 0.05) higher than in the RT set (135±8.3 HU). Note that based only on the average uncertainty in the result image, images in the out-of-distribution OASIS set could be identified with 100% accuracy.
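The threshold-based check implied by this result can be sketched as a one-line classifier; the sample uncertainty values below are illustrative, not the study's data.

```python
import numpy as np

def ood_accuracy(in_dist_unc, ood_unc, threshold):
    """Accuracy of flagging images as out-of-distribution when every image
    whose mean ensemble uncertainty exceeds `threshold` is rejected."""
    kept = np.sum(np.asarray(in_dist_unc) <= threshold)  # accepted in-distribution images
    flagged = np.sum(np.asarray(ood_unc) > threshold)    # rejected OOD images
    return (kept + flagged) / (len(in_dist_unc) + len(ood_unc))
```

For the mean uncertainties reported for the RT and OASIS sets, any threshold between the two clusters separates them perfectly, corresponding to the 100% accuracy noted above.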

Figure 4: Mean ensemble uncertainty for each patient in all three test sets. Each data point corresponds to the uncertainty in a single test image, averaged over all voxels within the body contour.

The results in Fig. 4 indicate that uncertainty is higher for out-of-distribution images, but this does not mean that the resulting images are necessarily wrong. Therefore, we performed an additional experiment in which we assessed the correlation between uncertainty and errors in the synthetic CT image. We calculated the mean absolute error (MAE) between sCT and reference CT images in the RT and RTgado sets; the results are shown in Fig. 5. These results confirm that there is a clear correlation between the average ensemble uncertainty and the MAE in both sets for which reference CT images are available. Additionally, as this correlation holds for images within the RT set, these results indicate that ensemble uncertainty could also be valuable for quality control when the input is from the correct distribution. Furthermore, this figure shows there is some overlap between the MAE distributions of the RT and the RTgado sets, indicating that the trained models are somewhat robust against the presence of gadolinium in the input images.
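The quantities in this experiment reduce to two short computations: the MAE between sCT and reference CT within the body contour, and the Pearson correlation across patients. A minimal NumPy sketch follows; function names are illustrative, not from the authors' code.

```python
import numpy as np

def mae_within_mask(sct, ct, mask):
    """Mean absolute error (in HU) between sCT and reference CT,
    restricted to voxels inside the body contour."""
    return float(np.mean(np.abs(sct[mask] - ct[mask])))

def pearson_r(x, y):
    """Pearson correlation coefficient, as reported in Fig. 5
    (e.g. per-image mean uncertainty vs. per-image MAE)."""
    return float(np.corrcoef(x, y)[0, 1])
```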

Figure 5: Correlation between the mean ensemble uncertainty and the mean absolute error in the RT and RTgado data sets. Lines indicate linear correlation estimates for both sets. The corresponding Pearson coefficients are indicated in the bottom right.

5 Discussion

In our experiments we explored the use of an ensemble of sCT generators as a method for automatic online quality control in MR-only radiotherapy workflows. By considering the disagreement among the sCT generators as a metric for their collective uncertainty, the system can automatically alert a human operator when the uncertainty exceeds a threshold. The obvious fault mode of such a system is an input for which all networks in the ensemble make the same mistake. In this work, such a fault mode was mitigated by training the networks on different views of the data (i.e. axial, coronal and sagittal slices), causing the generators to learn mappings between different input and output distributions. This should reduce the chance that an incorrect mapping by one of the networks coincides with identical incorrect mappings by the other two. Our experiments have shown that there is indeed a negative correlation between generator disagreement and sCT quality.

A curious observation from Fig. 5 is that the distributions of mean absolute errors for the RT set and the RTgado set have substantial overlap. This indicates that our sCT generative models are somewhat robust to the presence of gadolinium-based contrast agent in the MR images, even though no images with contrast agent were present in the training set. This can also explain the overlap of the distributions of mean uncertainties in these sets. Conversely, when comparing the uncertainty results of the RT set and the OASIS set in Fig. 4, we observe that these sets are perfectly separable on the mean uncertainty metric. This means that all images from the set of incompatible MR images could be caught as invalid inputs to the sCT generative method using a simple threshold.

While the presented single value for uncertainty has merit in practical validation systems due to its simple nature, the absence of spatial information can be considered a weakness of the method. In the radiotherapy treatment workflow, the accuracy of an sCT is more relevant in the sections that are irradiated during treatment. This implies it could be valuable to acquire individual quality estimates for different regions, as errors in irrelevant sections of the sCT could be acceptable. Future work could investigate the use of spatial uncertainty information to find such a localised estimate of the sCT quality.

6 Conclusion

In this work, we have developed a method to aid quality control of MR-to-CT generative deep learning models, to facilitate their clinical adoption in radiotherapy workflows. We have shown that the uncertainties automatically produced by our method correlate with the quality of the generated synthetic CTs, and that they can be used to detect input images that are not from the correct data distribution. Such an automated check would be highly valuable as an online validation step in the clinical workflow.

7 New or breakthrough work to be presented

We have presented a method for estimating the uncertainty of an MR-to-CT generative deep-learning model, as used in MR-only radiotherapy workflows. We have shown that our estimated uncertainties correlate well with the quality of the generated sCTs, paving the way for automatic quality control in clinical practice.


  • [1] Edmund, J. M. and Nyholm, T., “A review of substitute CT generation for MRI-only radiation therapy,” Radiation Oncology 12(1), 28 (2017).
  • [2] Lagendijk, J. J., Raaymakers, B. W., and Van Vulpen, M., “The magnetic resonance imaging–linac system,” in [Seminars in radiation oncology ], 24(3), 207–209, Elsevier (2014).
  • [3] Wolterink, J. M., Dinkla, A. M., Savenije, M. H., Seevinck, P. R., van den Berg, C. A., and Išgum, I., “Deep MR to CT synthesis using unpaired data,” in [International Workshop on Simulation and Synthesis in Medical Imaging ], 14–23, Springer (2017).
  • [4] Nie, D., Trullo, R., Lian, J., Petitjean, C., Ruan, S., Wang, Q., and Shen, D., “Medical image synthesis with context-aware generative adversarial networks,” in [International Conference on Medical Image Computing and Computer-Assisted Intervention ], 417–425, Springer (2017).
  • [5] Nyholm, T. and Jonsson, J., “Counterpoint: opportunities and challenges of a magnetic resonance imaging–only radiotherapy work flow,” in [Seminars in radiation oncology ], 24(3), 175–180, Elsevier (2014).
  • [6] Johansson, A., Karlsson, M., Yu, J., Asklund, T., and Nyholm, T., “Voxel-wise uncertainty in CT substitute derived from MRI,” Medical physics 39(6Part1), 3283–3290 (2012).
  • [7] Bragman, F. J., Tanno, R., Eaton-Rosen, Z., Li, W., Hawkes, D. J., Ourselin, S., Alexander, D. C., McClelland, J. R., and Cardoso, M. J., “Uncertainty in multitask learning: joint representations for probabilistic mr-only radiotherapy planning,” in [International Conference on Medical Image Computing and Computer-Assisted Intervention ], 3–11, Springer (2018).
  • [8] Lakshminarayanan, B., Pritzel, A., and Blundell, C., “Simple and scalable predictive uncertainty estimation using deep ensembles,” in [Advances in Neural Information Processing Systems ], 6402–6413 (2017).
  • [9] Marcus, D. S., Wang, T. H., Parker, J., Csernansky, J. G., Morris, J. C., and Buckner, R. L., “Open access series of imaging studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults,” J Cogn Neurosci 19(9), 1498–1507 (2007).
  • [10] Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A., “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in [Proceedings of the IEEE International Conference on Computer Vision ], 2223–2232 (2017).