Uncertainty-aware GAN with Adaptive Loss for Robust MRI Image Enhancement

10/07/2021 · by Uddeshya Upadhyay et al. · Tata Consultancy Services, IIT Bombay

Image-to-image translation is an ill-posed problem, as a unique one-to-one mapping may not exist between the source and target images. Learning-based methods proposed in this context often evaluate the performance on test data that is similar to the training data, which may be impractical. This demands robust methods that can quantify uncertainty in the prediction for making informed decisions, especially for critical areas such as medical imaging. Recent works that employ conditional generative adversarial networks (GANs) have shown improved performance in learning photo-realistic image-to-image mappings between the source and the target images. However, these methods do not focus on (i) robustness of the models to out-of-distribution (OOD)-noisy data and (ii) uncertainty quantification. This paper proposes a GAN-based framework that (i) models an adaptive loss function for robustness to OOD-noisy data, which automatically tunes the spatially varying norm for penalizing the residuals, and (ii) estimates the per-voxel uncertainty in the predictions. We demonstrate our method on two key applications in medical imaging: (i) undersampled magnetic resonance imaging (MRI) reconstruction and (ii) MRI modality propagation. Our experiments with two different real-world datasets show that the proposed method (i) is robust to OOD-noisy test data and provides improved accuracy and (ii) quantifies voxel-level uncertainty in the predictions.







1 Introduction and Related Work

The image-to-image translation methods that generate images of the target view, given images of the source view, are often ill-posed problems as unique one-to-one mapping may not exist between the source and target views. Learning-based methods proposed in this context [18, 21, 12] evaluate the performance on test data that is similar to training data. Such models suffer from severe degradation in the performance when presented with out-of-distribution (OOD)-noisy data [11, 19, 24]. In critical applications, like in medical imaging, in addition to robustness to OOD-noisy data, it is important to quantify the uncertainty in the predictions made by the model to aid clinical decisions [27, 34]. Particularly, problems such as enhancing the quality of a given medical image and synthesizing medical images of target modality given a source modality benefit from the uncertainty estimation by allowing risk assessment in the predicted images [37, 27, 34]. In this work, we use two critical tasks in medical image analysis that can broadly be posed as image-to-image translation problems to demonstrate the efficacy of the proposed methods: (i) undersampled magnetic resonance imaging (MRI) reconstruction and (ii) MRI modality propagation, i.e., synthesizing T2 weighted (T2w) MRI from T1 weighted (T1w) MRI.

MRI k-space (a 2D complex-valued space based on the 2D Fourier transform of the slices) data acquisition is a time-consuming process, as the speed is dependent on hardware and physiological constraints [25, 32, 44]. Typically, faster acquisitions are realized by undersampling the k-space. However, this leads to blurring or aliasing effects in the MRI image. We formulate the reconstruction of high-quality MRI scans using undersampled k-space data as a translation from low-quality MRI (lqMR) to high-quality MRI (hqMR) scans (lqMR → hqMR). Similarly, modality propagation is of interest because routine clinical protocols acquire images with multiple contrasts, which are critical for better diagnostics. Acquiring multiple contrasts increases scanning time. T1w MRI and T2w MRI are among the most commonly acquired contrasts, and synthesis of T2w from T1w is posed as a translation problem (T1w → T2w).

Recently, conditional generative adversarial networks (GANs) have shown substantially improved performance for the above-mentioned tasks in comparison to other learning-based methods [33, 28, 6, 35, 40, 39, 38]. Such methods condition the generator (which generates the target view) with a source view, and the task of the discriminator is to discriminate the generated target samples from the ground-truth target samples through an adversarial loss term. Additionally, some methods also impose a cycle-consistency penalty to constrain the ill-posed translation problem [45, 21, 12]. Along with improved accuracy, quantifying uncertainty can potentially aid clinical decision making, especially in the presence of OOD-noisy data, while improving the accuracy of subsequent tasks such as image segmentation, classification, etc. [37]. In the general context of medical imaging, OOD-noisy perturbations could be systemic (scanner-related) and/or physiological. Hence, beyond reliable image synthesis during inference, a thorough analysis of the robustness of the network along with quantification of uncertainty in the predicted images is crucial for clinical translation of synthesis frameworks [42, 26, 29]. Designing learning strategies that are robust to large errors (i.e., outliers whose residuals follow heavy-tailed distributions) is a well-studied problem in statistics, optimization, and parameter estimation [2, 10, 13, 5].

Quasi-norm based loss functions, $\ell_q$, use a generalized notion of the $\ell_2$ norm, where $q$ (often called the shape parameter) can be tuned empirically for robust learning [36, 1]. Moreover, such empirical methods often require $q$ to be fixed spatially. This work proposes a novel loss function that automatically learns the shape parameter, allowed to vary spatially across the image dataset, along with the scale parameter that allows us to estimate per-voxel uncertainty.
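To make the effect of the shape parameter concrete, the following NumPy sketch (illustrative names, not the paper's code) compares how much a single large outlier dominates a squared-error objective ($q = 2$) versus a quasi-norm objective with $q < 1$:

```python
import numpy as np

def quasi_norm_penalty(residuals, q):
    """Sum of |r|^q over residuals; q is the shape parameter.
    q = 2 recovers squared error; q < 1 down-weights large outliers."""
    return float(np.sum(np.abs(residuals) ** q))

residuals = np.array([0.1, 0.2, 10.0])  # one heavy-tailed outlier
# Fraction of the total penalty contributed by the outlier alone:
share_q2 = 10.0 ** 2.0 / quasi_norm_penalty(residuals, 2.0)
share_q05 = 10.0 ** 0.5 / quasi_norm_penalty(residuals, 0.5)
# The outlier dominates the q = 2 objective far more than the q = 0.5 one,
# so gradients under a small q are less hijacked by corrupted pixels.
```

This is exactly why a spatially varying, learned $q$ is attractive: clean regions can behave like an $\ell_2$ fit while corrupted regions are penalized more gently.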

2 Methods

Figure 1: Model schema. $\hat{y}$, $\hat{\beta}$, and $\hat{\alpha}$ denote the estimated output image, shape, and scale parameter of the GGD prior, respectively. The parameters governing the output $\hat{y}$ are a combination of the weights shared across all three branches and the $\hat{y}$-specific branch weights. Similarly, $\hat{\beta}$ and $\hat{\alpha}$ depend on the parameters in their respective branches, in addition to the shared weights (refer Equations 11–13).

We describe our proposed GAN framework and discuss specific details pertaining to the two applications: (i) MRI reconstruction using undersampled k-space data, posed as a quality enhancement (QE) task, and (ii) modality propagation (MP). Our proposed solution for these two applications can easily be extended to other image-to-image translation problems, with MRI as well as other modalities.

Let the set of input images be represented by $X$ and the corresponding set of output images by $Y$. Let $x$ and $y$ represent elements from the set of input and output images (i.e., $x \in X$ and $y \in Y$), respectively. We learn the mapping from input to output using pairs of co-registered training data, $\{(x_i, y_i)\}_{i=1}^{N}$. The input and output images for QE and MP are described below.

Undersampled MRI reconstruction (QE). Here, a mapping is learned from a low-quality MRI image (lqMR), obtained via zero-filled inverse Fourier transform (IFT) of the undersampled k-space data, to the corresponding high-quality MRI image (hqMR), obtained from fully-sampled k-space. Therefore, for QE, $(X, Y)$ becomes (lqMR, hqMR). Let the operator $\mathcal{F}$ represent the 2D Fourier transform and the matrix $M$ represent the sampling mask used to perform undersampling in k-space. Then, the undersampled MRI $x$ is related to the fully-sampled MRI $y$ via $x = \mathcal{F}^{-1}(M \odot \mathcal{F}(y))$. With a fixed sampling mask, accounting for noisy k-space acquisitions, corrupted by complex Gaussian noise (say $\eta$), that represent OOD-noisy data [41, 46, 20], the OOD-noisy input sample (say $\tilde{x}$) can be modeled by

$$\tilde{x} = \mathcal{F}^{-1}\left(M \odot \left(\mathcal{F}(y) + \eta\right)\right). \qquad (1)$$
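As a concrete sketch of this degradation model (function and mask here are illustrative assumptions, not the authors' implementation; NumPy's FFT conventions are assumed), the OOD-noisy low-quality input can be simulated as:

```python
import numpy as np

def simulate_lq(hq_image, mask, noise_sigma=0.0, seed=0):
    """Simulate the OOD-noisy lqMR input: Fourier transform the
    fully-sampled image, add complex Gaussian k-space noise, apply
    the fixed undersampling mask, and zero-filled inverse transform."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fft2(hq_image)
    noise = noise_sigma * (rng.standard_normal(kspace.shape)
                           + 1j * rng.standard_normal(kspace.shape))
    return np.abs(np.fft.ifft2(mask * (kspace + noise)))

# Sanity check: a full mask with zero noise recovers the original image.
hq = np.random.default_rng(1).random((8, 8))
recovered = simulate_lq(hq, np.ones((8, 8)), noise_sigma=0.0)
```

Increasing `noise_sigma` with the mask held fixed yields the progressively noisier test sets (NL1–NL3) used later in the experiments.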
Modality propagation (MP). Here, we learn a mapping from T1w images to corresponding T2w images. In this context, $(X, Y)$ can be represented by the sets of T1w MR and T2w MR images, i.e., (T1w, T2w). Unlike QE, the OOD-noisy input sample (say $\tilde{x}$) during inference is obtained by adding Gaussian noise in the image space (say $\eta_{img}$) to the T1w MRI input image (say $x$). The addition of Gaussian noise in image space is consistent with [9], i.e.,

$$\tilde{x} = x + \eta_{img}. \qquad (2)$$
2.1 Model

This work, inspired by conditional GANs, proposes a novel GAN-based framework that (i) employs a learnable quasi-norm loss and (ii) estimates uncertainty maps, as described below. Let $G$ and $D$ represent our generator and discriminator, respectively. The input to the generator is $x$, the predicted image is $\hat{y}$ (i.e., the output of the O-branch of $G$ as shown in Figure 1), and the ground-truth is $y$. Each image consists of $P$ pixels. We denote the $j^{th}$ pixel in an image, say $y_i$, as $y_{ij}$.

Prior works typically formulate the above task as a regression, and the distribution of residuals between the predictions and the ground-truths is assumed to be an isotropic standard Gaussian distribution (zero mean and fixed standard deviation). However, the limitations of this approach include: (i) it cannot model the outliers/corruptions in the dataset, as residuals due to outliers tend to follow heavy-tailed distributions, and (ii) the assumption of a fixed standard deviation does not account for heteroscedastic uncertainty. Recent works model the heteroscedastic uncertainty by assuming that the variance is data-dependent [15]. We improve upon the assumptions made by prior works by (i) modeling the residuals to follow a zero-mean generalized Gaussian distribution (GGD) and (ii) allowing the shape parameter ($\beta$) and the scale parameter ($\alpha$) to vary spatially. This allows us to learn appropriate quasi-norms at all spatial locations, as well as to estimate uncertainty at every output pixel. Therefore, for the $i^{th}$ image, the residual at pixel location $j$ (between the predicted value $\hat{y}_{ij}$ and ground-truth value $y_{ij}$) follows a GGD, i.e., $(y_{ij} - \hat{y}_{ij}) \sim \mathrm{GGD}(0, \alpha_{ij}, \beta_{ij})$.

Figure 2: Generalized Gaussian Distribution. PDF for the GGD with different shape parameters ($\beta$). Lower $\beta$ corresponds to heavier-tailed distributions.

The GGD is capable of modelling heavy-tailed distributions, and includes the Gaussian and Laplace PDFs as special cases, as shown in Figure 2. Here $\alpha$ is the scale parameter, $\beta$ denotes the shape parameter, and $\Gamma$ is the standard gamma function; the zero-mean GGD density is $\mathrm{GGD}(x; 0, \alpha, \beta) = \frac{\beta}{2\alpha\Gamma(1/\beta)} e^{-(|x|/\alpha)^{\beta}}$. In our formulation, all the residuals are independent but not necessarily identically distributed, as $\alpha$ and $\beta$ may vary spatially. Hence, the likelihood is

$$\mathcal{L}(\Theta) = \prod_{i,j} \frac{\beta_{ij}}{2\alpha_{ij}\Gamma(1/\beta_{ij})} \, e^{-\left(|y_{ij} - \hat{y}_{ij}|/\alpha_{ij}\right)^{\beta_{ij}}}.$$
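The density above can be checked numerically (a minimal illustration, not part of the training code) to verify the Laplace and Gaussian special cases and the heavy tail at small $\beta$:

```python
import math

def ggd_pdf(x, alpha, beta):
    """Zero-mean GGD density:
    beta / (2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)**beta)."""
    norm = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return norm * math.exp(-(abs(x) / alpha) ** beta)

# beta = 1 recovers the Laplace density with scale alpha;
# beta = 2 a Gaussian (up to reparameterization of the variance);
# beta = 0.5 puts far more mass in the tails than beta = 2.
```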
Therefore, the log-likelihood is

$$\log \mathcal{L}(\Theta) = \sum_{i,j} \left[\log \beta_{ij} - \log\left(2\alpha_{ij}\Gamma(1/\beta_{ij})\right) - \left(\frac{|y_{ij} - \hat{y}_{ij}|}{\alpha_{ij}}\right)^{\beta_{ij}}\right],$$

Figure 3: Merging scheme. Moving window average to merge overlapping 2.5D volumes to produce a smooth 3D volume.

where $\Theta$ represents the collection of network parameters. In this way, to improve the robustness of the network, we predict (i) $\hat{y}_{ij}$ (the predicted value at every pixel location, also the mean of the GGD), (ii) $\hat{\beta}_{ij}$ (an estimate of the true $\beta_{ij}$ at every pixel location), and (iii) $\hat{\alpha}_{ij}$ (an estimate of the true $\alpha_{ij}$ at every pixel location). Hence, the proposed robust quasi-norm based loss (the negative log-likelihood, with constants independent of $\Theta$ dropped) is

$$\mathcal{L}_{fid}(\Theta) = \frac{1}{NP}\sum_{i,j} \left(\frac{|y_{ij} - \hat{y}_{ij}|}{\hat{\alpha}_{ij}}\right)^{\hat{\beta}_{ij}} - \log\frac{\hat{\beta}_{ij}}{\hat{\alpha}_{ij}} + \log\Gamma(1/\hat{\beta}_{ij}),$$
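A NumPy sketch of this per-pixel negative log-likelihood (with the constant $\log 2$ dropped; names are illustrative assumptions, not the authors' code):

```python
import math
import numpy as np

def ggd_nll(y, y_hat, alpha_hat, beta_hat):
    """Mean per-pixel negative log-likelihood of residuals under a
    zero-mean GGD with spatially varying scale/shape maps:
    (|y - y_hat| / alpha)^beta - log(beta / alpha) + log Gamma(1/beta)."""
    lgamma = np.vectorize(math.lgamma)
    nll = ((np.abs(y - y_hat) / alpha_hat) ** beta_hat
           - np.log(beta_hat / alpha_hat)
           + lgamma(1.0 / beta_hat))
    return float(nll.mean())
```

With $\hat{\beta}$ fixed at 2 and $\hat{\alpha}$ at 1, this reduces (up to constants) to the usual squared-error loss; letting the network predict both maps yields the adaptive, uncertainty-aware objective.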
where $\{\hat{y}_{ij}\}$, $\{\hat{\alpha}_{ij}\}$, and $\{\hat{\beta}_{ij}\}$ indicate the sets of predicted values over all pixels of all the images in the dataset. In addition to the above uncertainty-aware fidelity loss term (denoted $\mathcal{L}_{fid}$), the adversarial term depending upon $D$ is defined in terms of the binary cross-entropy between the true and predicted probability vectors for each generated image (say $G(x)$) and ground-truth image (say $y$), respectively. The binary cross-entropy loss is given by $BCE(u, v) = -\sum_k u_k \log v_k + (1 - u_k)\log(1 - v_k)$, where $v_k$ represents the $k^{th}$ element in the output probability vector obtained from $D$. Hence, $G$ minimizes the loss given by

$$\mathcal{L}_{G}(\Theta) = \mathcal{L}_{fid}(\Theta) + \lambda \, BCE\!\left(\mathbf{1}, D(G(x))\right).$$
On the other hand, $D$ minimizes $\mathcal{L}_{D}$, given by

$$\mathcal{L}_{D} = BCE\!\left(\mathbf{1}, D(y)\right) + BCE\!\left(\mathbf{0}, D(G(x))\right).$$
We use the strategy elucidated in [28, 8] for training.

The architecture of the proposed generator is inspired by U-Net [31]. We modify the U-Net such that the last convolutional layer is split into three, i.e., three independent convolutional layers attached to the penultimate block in U-Net (as shown in Figure 1). This model predicts $\hat{y}$, $\hat{\beta}$, and $\hat{\alpha}$. The discriminator network ($D$ in Figure 1) is a typical CNN architecture as described in [6]. Although image synthesis benefits from 3D processing, training CNNs with 3D images poses substantial challenges such as increased computational demand, cost, and training time. Several works use patch-based approaches to circumvent this issue. However, the final reconstruction from patches might result in artifacts [33]. In this work, we design our model to accept a 2.5D input as described in [4] and produce a 2.5D output; this allows the model to exploit 3D-like neighborhood information while learning, resulting in better-quality outputs. The final 3D output is obtained by joining (via averaging) overlapping 2.5D volumes as shown in Figure 3.
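The moving-window merging of Figure 3 can be sketched as follows (array shapes and function names are assumptions, not the authors' implementation):

```python
import numpy as np

def merge_25d(stacks, starts, depth):
    """Average overlapping 2.5D stacks into one smooth 3D volume.
    stacks: list of (k, H, W) predictions; starts: first slice index
    each stack covers; depth: number of slices in the full volume."""
    h, w = stacks[0].shape[1:]
    acc = np.zeros((depth, h, w))
    cnt = np.zeros((depth, 1, 1))
    for stack, s in zip(stacks, starts):
        acc[s:s + stack.shape[0]] += stack
        cnt[s:s + stack.shape[0]] += 1
    return acc / np.maximum(cnt, 1)  # avoid div-by-zero on uncovered slices
```

Each voxel is simply averaged over every stack that covers it, which suppresses stack-boundary seams in the final 3D volume.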

2.2 Training and Testing Scheme

All the networks were trained using the Adam optimizer [16] by sampling mini-batches of size 16. The initial learning rate was decayed with cosine annealing over the epochs. The $\lambda$ for MP and QE (Equation 9) was set to a task-specific value. For numerical stability, the proposed network produces a reparameterization of the GGD parameters instead of the raw values. The positivity constraint on the output is enforced by applying the ReLU activation function at the end of the three output layers in the network (Figure 1). In addition to the network-generated outputs ($\hat{y}$, $\hat{\beta}$, $\hat{\alpha}$), we also compute aleatoric and epistemic uncertainty maps, denoted by $\sigma_{alea}$ and $\sigma_{epi}$, respectively. While $\sigma_{epi}$ captures the uncertainty in the model parameters, $\sigma_{alea}$ captures the data-dependent uncertainty; we combine both to produce $\sigma$. To capture the epistemic uncertainty, multiple forward passes ($T$ forward passes) are done for every single input image at inference with dropouts activated. In this way, the final output of the proposed framework for a given input, along with the uncertainty maps, is given by

$$\hat{y} = \frac{1}{T}\sum_{t=1}^{T} \hat{y}^{(t)}, \qquad \sigma_{epi}^{2} = \frac{1}{T}\sum_{t=1}^{T} \left(\hat{y}^{(t)}\right)^{2} - \hat{y}^{2}, \qquad \sigma_{alea}^{2} = \frac{1}{T}\sum_{t=1}^{T} \frac{\left(\hat{\alpha}^{(t)}\right)^{2}\Gamma\!\left(3/\hat{\beta}^{(t)}\right)}{\Gamma\!\left(1/\hat{\beta}^{(t)}\right)},$$

where $t$ denotes the index of the forward pass through the network and $\Theta^{(t)}$ denotes the parameters instantiated by dropout on the $t^{th}$ forward pass, as described in [7]. In our experiments, $T$ was set to 50.
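The epistemic part of this scheme (repeated stochastic passes with dropout active) can be sketched generically; here `predict_fn` stands in for one dropout-enabled forward pass of a trained generator, an assumption of this illustration:

```python
import numpy as np

def mc_dropout(predict_fn, x, T=50):
    """Run T stochastic forward passes and return the mean prediction
    and the epistemic uncertainty (per-pixel predictive variance)."""
    samples = np.stack([predict_fn(x) for _ in range(T)])
    return samples.mean(axis=0), samples.var(axis=0)
```

The aleatoric variance would additionally be averaged from the predicted ($\hat{\alpha}$, $\hat{\beta}$) maps over the same passes, and the two variances summed into the combined map $\sigma$.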

3 Datasets and Experiments

For the two applications QE and MP, we describe the in vivo training data and generation of OOD-noisy test data.

3.1 Datasets

Figure 4: Retrospective data generation for QE. MRI undersampling is achieved by acquiring limited frequencies in k-space.

We use the following two datasets: (i) a proprietary dataset consisting of registered T1w MRI scans for QE, and (ii) the publicly available IXI dataset (https://brain-development.org/ixi-dataset/) for MP. The dataset in (i) consists of T1w MRI scans with isotropic resolution from 100 subjects. We consider these images as the HQ-MR images for the QE experiment. The methodology to obtain LQ-MR images is shown in Figure 4. For LQ data acquisition, we sample only a fraction of the k-space using the mask in Figure 4, resulting in an accelerated acquisition. Subsequently, the LQ images are obtained as the inverse Fourier transform (IFT) of the undersampled k-space. We use a training, validation, and test split of 70, 10, and 20 non-overlapping subjects, respectively. For each subject, we use 50 mid-brain axial slices. The dataset in (ii) consists of multi-contrast MR images from several healthy subjects, collected across three different scanners. For MP, we use T1w and T2w contrast images, which were co-registered (intra-subject) using ANTS [17]. We employ a training, validation, and test split of 250, 50, and 100 non-overlapping subjects, respectively, and we use 70 axial slices from the mid-brain region per subject.

3.2 Experiments

Table 1: Trends across noise levels. Overall variations in the scale parameter ($\alpha$), shape parameter ($\beta$), and the output PSNR values across all the noise levels for QE and MP.
Figure 5: Results for QE. Predicted HQ images with input LQ images at two different noise-levels (NLs), (b1): NL0 (high SNR, same NL as training-set), and (b5): NL3 (low SNR, highest simulated NL). (a2)–(a5) and (a7)–(a10): Predicted images for all the methods at NL0 and NL3, respectively. (b2)–(b5) and (b7)–(b10): Corresponding residual images. SSIM values are indicated within parentheses.
Figure 6: Quantitative assessment for QE. SSIM, PSNR, and RRMSE values for all the methods at multiple noise-levels (NL0 to NL3). At each NL, 50 mid-brain slices from each subject (20 test-subjects) were evaluated (i.e., 1000 slices).

OOD-noisy data in MRI.

MRI data acquisition depends on several factors that may affect the quality of the MRI scans, leading to a wide distribution of datasets. In practice, the dataset used for learning algorithms will not cater to all the possible variations in the distributions; hence, it is imperative to design learning algorithms that are robust to a wide range of OOD-noisy data. In particular, the signal-to-noise ratio (SNR) of the MRI k-space data (and consequently the image) depends on various factors such as field strength, pulse sequence, tissue characteristics, number of receiver coils and their sensitivities, scan/physiological parameters, etc. [41, 46, 30, 22]. Hence, for both QE and MP, we generate OOD-noisy MRI data that captures variations in noise levels as described below. We consider the noise-level (NL) of the training set to be NL0 for both QE and MP. We model three increasing noise-levels, NL1–NL3, of OOD-noisy degradations for the test data. Degradations are in (i) k-space for QE and (ii) image-space for MP.
QE. Having trained the network using images at NL0, at inference we evaluate the robustness of all the methods at three increasing levels of noise in the k-space. In practice, this is obtained by increasing the magnitude of the complex Gaussian noise in the k-space as described in Equation 1. In our experiments, the sampling mask is fixed (i.e., fixing the k-space trajectory [23, 20], Figure 4), acquiring only a fraction of the k-space. The PSNR values of the LQ images at NL0–NL3, averaged across the test set, were computed with respect to the HQ images.
MP. Here we employ two kinds of OOD-noisy test data: (i) evaluation at three increasing levels of noise in the image-space and (ii) inclusion of unseen synthetic lesions. The image-space degradations are obtained by increasing the magnitude of the Gaussian noise in Equation 2. For the lesion-related experiment, we add lesions to both the input (T1w) and the reference images (T2w) based on the segmented lesion masks available from the BRATS 2020 dataset (https://www.med.upenn.edu/cbica/brats2020/data.html).

4 Results and Discussion

In this work, for a fair evaluation, we modify the baselines (originally in 2D) to use a 2.5D-style training strategy. We use the structural similarity index (SSIM) [43], peak signal-to-noise ratio (PSNR), and relative root mean squared error (RRMSE) for quantitative comparison. The RRMSE between two images $A$ and $B$ is defined as $\mathrm{RRMSE}(A, B) = \|A - B\|_F / \|B\|_F$, where $\|\cdot\|_F$ indicates the Frobenius norm.
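A minimal NumPy version of this metric (names assumed for illustration):

```python
import numpy as np

def rrmse(pred, ref):
    """Relative RMSE: Frobenius norm of the residual divided by the
    Frobenius norm of the reference image."""
    return float(np.linalg.norm(pred - ref) / np.linalg.norm(ref))
```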

Quantitative and Qualitative Evaluations for QE. We use the following state-of-the-art methods for comparison: (i) U-Net as used in [44], (ii) pix2pix as used in [44, 14], (iii) U-Net with uncertainty (U-Net+U) as used in [44], and (iv) our proposed method, URGAN. Our uncertainty model is based on generalized Gaussian distributions, allowing our loss function to be tuned automatically, unlike that of U-Net+U as described in [44]. All the baseline models are trained with the loss functions proposed in their respective formulations.

Figure 7: Qualitative assessment for MP using orthogonal views. Results on two different input noise-levels (NLs): NL0 (High SNR, same NL as training-set), NL3 (Low SNR, highest simulated NL). SSIM values for each slice embedded.

Figure 5 shows the predicted images for QE at NL0 and NL3 for a representative test slice. At NL0, the predicted images from all the methods (Figure 5 (a2)–(a5)) have low residual error and are comparable with each other. However, at NL3, pix2pix and U-Net generate images with artifacts (Figure 5 (b4)–(b5)). In contrast, the predicted images from the methods that model uncertainty (ours and U-Net+U) show a superior recovery of structure and contrast, in addition to the removal of artifacts (Figure 5 (a2)–(a3) and (a7)–(a8)). The quantitative evaluation across all the OOD-noisy test data (Figure 6) shows that the performance of all the baseline methods suffers substantially at higher degradation levels, emphasizing the improved robustness due to the proposed GGD-based uncertainty model. Table 1 shows that with increasing NLs, the mean of the $\beta$s decreases, as explained in Section 4.1.
Quantitative and Qualitative Evaluations for MP. We use the following state-of-the-art methods for comparison: (i) U-Net as used in [3, 4], adapted to work with unimodal input (T1w) and to predict the unimodal output (T2w); (ii) CycleGAN from [6], where we used a U-Net for the generator (as it led to better performance); (iii) U-Net+DO, which improves the robustness of U-Net (from [3, 4]) to OOD-noisy input data by employing dropouts (DO) during both training and inference; and (iv) CycleGAN+DO, which similarly employs dropouts for CycleGAN (from [6]). For the models with dropouts activated at inference, we perform multiple forward passes and take the mean to get the final outputs. Both U-Net and U-Net+DO are trained using the pixel-wise loss; CycleGAN and CycleGAN+DO use an additional adversarial loss with cycle consistency as in [6].

Figure 8: Results for MP (axial view). Predicted T2w images with input T1w images at two different noise-levels (NLs), (b1): NL0 (high SNR, same NL as training-set), and (b5): NL3 (low SNR, highest simulated NL). (a2)–(a4) and (a6)–(a8): Predicted (T2w) for all the methods at NL0 and NL3, respectively. (b2)–(b4) and (b6)–(b8): Corresponding residuals. SSIM values are indicated within parentheses.
Figure 9: Quantitative assessment for MP. SSIM, PSNR, and RRMSE values for all the methods at multiple noise-levels (NL0 to NL3). At each NL, 70 mid-brain slices from each subject (100 test-subjects) were evaluated (i.e., 7000 slices).

Figure 7 shows the MP orthogonal views through the 3D output obtained from our method as well as the best-performing baselines. U-Net+DO and CycleGAN+DO show improved performance in comparison to their non-DO counterparts, as evident from Figure 9. Similar to QE, at NL0 the outputs of all methods are comparable. However, at higher NLs, the proposed method outperforms all the baselines substantially, across all metrics (Figures 7 and 9). Figure 8 shows the predicted images for a representative test axial input slice, along with the residuals for the proposed model, U-Net+DO, and CycleGAN+DO at two different NLs (NL0 and NL3). At NL0, the predicted images from all the methods (Figure 8 (a2)–(a4)) appear close in structure and contrast to the ground truth (Figure 8 (a1)). Our method shows the least residual (Figure 8 (b2)) compared to that of CycleGAN+DO (b3) and U-Net+DO (b4). At NL3, our method (Figure 8 (a6)) outperforms both CycleGAN+DO (Figure 8 (a7)) and U-Net+DO (Figure 8 (a8)) in terms of visual quality as well as SSIM values. All baselines show poor synthesis of structure and contrast and limited removal of noise. Our method removes a significant amount of noise, recovers structure, contrast, and texture, and is closer in appearance to the reference image (Figure 8 (a5)).

Figure 10: Uncertainty quantification on OOD-noisy data (synthetic lesion added to MP test data). Subfigures (i)–(ii) show results from two representative slices. (a1)-(a2): Input T1w ($x$) and the corresponding reference T2w ($y$). (a3)-(a4): Predicted image ($\hat{y}$) and absolute error map. (b1)-(b2): The learned scale ($\hat{\alpha}$) and shape ($\hat{\beta}$) parameter maps. (b3): Uncertainty map ($\sigma$). (b4): Masked aleatoric uncertainty map (refer Section 4.1).
Figure 11: Correlation between residual, uncertainty, and shape-parameter. Data from all NLs. Each point indicates an image; the uncertainty score and shape-parameter score for an image are calculated by taking the mean of $\sigma$ and $\hat{\beta}$ over all the pixels.

4.1 Uncertainty quantification

Along with improved accuracy of the predicted images, we demonstrate the efficacy of estimating the uncertainty maps ($\sigma_{alea}$, $\sigma_{epi}$, $\sigma$). The uncertainty maps are derived from the outputs of the network, $\hat{y}$, $\hat{\alpha}$, and $\hat{\beta}$, as described in Equations 14–15. The scatter plots in Figure 11 show that for both QE and MP, the residuals between the predictions and the ground-truths correlate positively with the uncertainty obtained from our framework, i.e., high overall residuals correspond to high overall uncertainty for the image. Therefore, higher overall uncertainty may be used as a proxy to infer increased imperfections in the reconstruction/synthesis. Interestingly, Figure 11 also shows that high residuals correlate negatively with the predicted shape-parameter ($\hat{\beta}$), which is consistent with the fact that lower shape-parameter values correspond to heavy-tailed distributions for the residuals, i.e., outliers.
We also employ a testing scenario for MP where the incoming test subject has an unseen pathology (a lesion). In this case too, we use the network that was trained on healthy individuals only. Figures 10 (a1) and (a2) show the input (T1w) and the reference (T2w) image with simulated lesions. Sub-figures (i) and (ii) show results on two representative slices. The predicted image ($\hat{y}$, Figure 10 (a3)), the scale parameter of the GGD ($\hat{\alpha}$, Figure 10 (b1)), and the corresponding shape parameter ($\hat{\beta}$, Figure 10 (b2)) are the outputs of the network. The derived uncertainty map, obtained using Equations 13–15, is shown in Figure 10 (b3). For the background region, both the scale and shape maps show values that are less spatially varying. While the scale map ($\hat{\alpha}$) captures edges in the image at a coarser level, the shape map ($\hat{\beta}$) captures subtle features in the image. In all the slices, the intensity values within the lesion region are significantly lower than in other regions, indicating outliers. We observe that the absolute residual image has peak values in and around the lesion. This is expected, as the training data is devoid of such pathological cases. The regions with "high" intensity values within the uncertainty map ($\sigma$, Figure 10 (b3)) show high residual values as well (Figure 10 (a4)). In addition to showing peak values in regions with high residual error, the uncertainty map is able to capture minor variations in the predicted images, especially at the skull boundary and gyri. To make the uncertainty map clinically relevant, we identify "high" residual values as the ones that are above a pre-defined threshold, say $\tau$. The value of $\tau$ is fixed as follows: we choose $\tau$ to be slightly above the standard deviation of the highest noise-level at which the network provided reasonably accurate reconstructed images (NL3). In this paper, we choose a $\tau$ slightly above the estimated standard deviation of the noise at NL2.
Figure 10 (b4) shows the product of the uncertainty map ($\sigma$) and the binary mask obtained by thresholding the absolute error map at $\tau$. Clearly, the resultant uncertainty map (Figure 10 (b4)) reflects the pixels with the highest predicted uncertainty. Hence, the uncertainty map can serve as a proxy for the residual map (which is not available at inference).
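This masking step amounts to gating the uncertainty map by the thresholded error map; a small sketch (threshold value and names assumed):

```python
import numpy as np

def masked_uncertainty(sigma_map, error_map, tau):
    """Keep uncertainty only where the absolute residual exceeds tau,
    highlighting the clinically relevant 'high-error' pixels."""
    return sigma_map * (np.abs(error_map) > tau)
```

At inference, where the error map is unavailable, the (unmasked) uncertainty map itself plays this flagging role.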
Conclusion. In this work, we proposed a GAN-based framework with an adaptive quasi-norm loss function for improved robustness to unseen perturbations in test data. We demonstrated the efficacy of our network on two key applications arising in the field of medical imaging, namely undersampled MRI reconstruction and modality propagation. The enhanced output produced by this method can, after post-processing, serve as a basis for clinical decision making. We compared our network with state-of-the-art networks for both applications and found that the performance of our network is substantially better (both qualitatively and quantitatively) at high noise levels, and comparable at noise levels similar to the training set. While our experiments with perturbations related to scanners (the noise-level study) showed that our framework can generate predicted images with low residual, the experiments related to physiological perturbations (the lesion study) showed that our uncertainty maps can be a proxy for the residual maps.


  • [1] J. Barron (2019) A general and adaptive robust loss function. In IEEE Conf Comp Vis Patt Recog. (CVPR), pp. 4331–4339. Cited by: §1.
  • [2] M. Black and A. Rangarajan (1996) On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. Int J Comp Vis 19 (1), pp. 57–91. Cited by: §1.
  • [3] A. Chartsias, T. Joyce, M. Giuffrida, and S. Tsaftaris (2017) Multimodal MR synthesis via modality-invariant latent representation. IEEE Trans Med Imag. 37 (3), pp. 803–814. Cited by: §4.
  • [4] K. Chen, E. Gong, M. Carvalho, J. Xu, A. Boumis, M. Khalighi, K. Poston, S. Sha, M. Greicius, E. Mormino, et al. (2019) Ultra-low-dose 18F-florbetaben amyloid PET imaging using deep learning with multi-contrast MRI inputs. Radiol. 290, pp. 649. Cited by: §2.1, §4.
  • [5] D. Cox and D. Hinkley (1979) Theoretical statistics. CRC Press. Cited by: §1.
  • [6] S. Dar, M. Yurt, L. Karacan, A. Erdem, E. Erdem, and T. Çukur (2019) Image synthesis in multi-contrast MRI with conditional generative adversarial networks. IEEE Trans Med Imag. 38 (10), pp. 2375–2388. Cited by: §1, §2.1, §4.
  • [7] Y. Gal and Z. Ghahramani (2016) Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In Int Conf Mach Lear. (ICML), pp. 1050–1059. Cited by: §2.2.
  • [8] I. Goodfellow (2016) NIPS 2016 tutorial: generative adversarial networks. arXiv preprint arXiv:1701.00160. Cited by: §2.1.
  • [9] H. Gudbjartsson and S. Patz (1995) The Rician distribution of noisy MRI data. Magn Res Med. 34 (6), pp. 910–914. Cited by: §2.
  • [10] T. Hastie, R. Tibshirani, and M. Wainwright (2015) Statistical learning with sparsity: the lasso and generalizations. CRC press. Cited by: §1.
  • [11] Y. Hsu, Y. Shen, H. Jin, and Z. Kira (2020-06) Generalized odin: detecting out-of-distribution image without learning from out-of-distribution data. In IEEE Conf Comp Vis Patt Recog. (CVPR), Cited by: §1.
  • [12] X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In European Conf Comp Vis.(ECCV), pp. 172–189. Cited by: §1, §1.
  • [13] P. Huber et al. (1972) The 1972 Wald lecture robust statistics: a review. The Annals of Mathematical Statistics 43 (4), pp. 1041–1067. Cited by: §1.
  • [14] P. Isola, J. Zhu, T. Zhou, and A. Efros (2017) Image-to-image translation with conditional adversarial networks. In IEEE Conf Comp Vis Patt Recog. (CVPR), pp. 1125–1134. Cited by: §4.
  • [15] A. Kendall and Y. Gal (2017) What uncertainties do we need in Bayesian deep learning for computer vision? In Adv Neural Info Proc Syst., pp. 5574–5584. Cited by: §2.1.
  • [16] D. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §2.2.
  • [17] A. Klein, J. Andersson, B. Ardekani, J. Ashburner, B. Avants, M. Chiang, G. Christensen, D. Collins, J. Gee, P. Hellier, et al. (2009) Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration. Neuroimage 46 (3), pp. 786–802. Cited by: §3.1.
  • [18] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network. In IEEE Conf Comp Vis Patt Recog. (CVPR), pp. 4681–4690. Cited by: §1.
  • [19] K. Lee, K. Lee, H. Lee, and J. Shin (2018) A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Adv Neural Info Proc Sys., pp. 7167–7177. Cited by: §1.
  • [20] J. Liu and D. Saloner (2014) Accelerated MRI with CIRcular cartesian UnderSampling (CIRCUS): a variable density Cartesian sampling strategy for compressed sensing and parallel imaging. Quant Imag Med Surgery. 4 (1), pp. 57. Cited by: §2, §3.2.
  • [21] M. Liu, T. Breuel, and J. Kautz (2017) Unsupervised image-to-image translation networks. In Adv Neural Info Proc Sys., pp. 700–708. Cited by: §1, §1.
  • [22] M. Lustig, D. Donoho, and J. Pauly (2007) Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn Res Med. 58 (6), pp. 1182–1195. Cited by: §3.2.
  • [23] R. Mezrich (1995) A perspective on k-space. Radiol. 195 (2), pp. 297–315. Cited by: §3.2.
  • [24] S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard (2017) Universal adversarial perturbations. In IEEE Conf Comp Vis Patt Recog. (CVPR), pp. 1765–1773. Cited by: §1.
  • [25] D. Moratal, A. Vallés-Luch, L. Martí-Bonmatí, and M. E. Brummer (2008) K-space tutorial: an MRI educational tool for a better understanding of k-space. Biomed Imag and Interven J. 4 (1). Cited by: §1.
  • [26] T. Nair, D. Precup, D. Arnold, and T. Arbel (2020) Exploring uncertainty measures in deep networks for multiple sclerosis lesion detection and segmentation. Med Imag Anal. 59, pp. 101557. Cited by: §1.
  • [27] T. Nair, D. Precup, D. Arnold, and T. Arbel (2020) Exploring uncertainty measures in deep networks for multiple sclerosis lesion detection and segmentation. Med Imag Anal. 59, pp. 101557. Cited by: §1.
  • [28] D. Nie, R. Trullo, J. Lian, L. Wang, C. Petitjean, S. Ruan, Q. Wang, and D. Shen (2018) Medical image synthesis with deep convolutional adversarial networks. IEEE Trans Biomed Engg. 65 (12), pp. 2720–2730. Cited by: §1, §2.1.
  • [29] D. Ravì, A. Szczotka, S. Pereira, and T. Vercauteren (2019) Adversarial training with cycle consistency for unsupervised super-resolution in endomicroscopy. Med Imag Anal. 53, pp. 123–131. Cited by: §1.
  • [30] P. Robson, A. Grant, A. Madhuranthakam, R. Lattanzi, D. Sodickson, and C. McKenzie (2008) Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions. Magn Res Med. 60 (4), pp. 895–907. Cited by: §3.2.
  • [31] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: Convolutional networks for biomedical image segmentation. In Int Conf Med Imag Comput Comput-Assist Intervention. (MICCAI), pp. 234–241. Cited by: §2.1.
  • [32] J. Schlemper, J. Caballero, J. Hajnal, A. Price, and D. Rueckert (2017) A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans Med Imag. 37 (2), pp. 491–503. Cited by: §1.
  • [33] D. Shen, G. Wu, and H. Suk (2017) Deep learning in medical image analysis. Ann Rev Biomed Engg. 19, pp. 221–248. Cited by: §1, §2.1.
  • [34] J. Sjölund, A. Eklund, E. Özarslan, M. Herberthson, M. Bånkestad, and H. Knutsson (2018) Bayesian uncertainty quantification in linear models for diffusion MRI. NeuroImage 175, pp. 272–285. Cited by: §1.
  • [35] V. P. Sudarshan, U. Upadhyay, G. F. Egan, Z. Chen, and S. P. Awate (2021) Towards lower-dose PET using physics-based uncertainty-aware multimodal learning with robustness to out-of-distribution data. Medical Image Analysis 73. Cited by: §1.
  • [36] D. Sun, S. Roth, and M. Black (2010) Secrets of optical flow estimation and their principles. In IEEE Conf Comp Vis Patt Recog., pp. 2432–2439. Cited by: §1.
  • [37] R. Tanno, D. Worrall, A. Ghosh, E. Kaden, S. Sotiropoulos, A. Criminisi, and D. Alexander (2017) Bayesian image quality transfer with CNNs: exploring uncertainty in dMRI super-resolution. In Int Conf Med Image Comput Computer-Assist Intervention. (MICCAI), pp. 611–619. Cited by: §1, §1.
  • [38] U. Upadhyay and S. P. Awate (2019) A mixed-supervision multilevel GAN framework for image quality enhancement. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, Cited by: §1.
  • [39] U. Upadhyay and S. P. Awate (2019) Robust super-resolution GAN, with manifold-based and perception loss. In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Cited by: §1.
  • [40] U. Upadhyay, V. P. Sudarshan, and S. P. Awate (2020) QUEST for MEDISYN: quasi-norm based uncertainty estimation for medical image synthesis. In ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning, Cited by: §1.
  • [41] P. Virtue and M. Lustig (2017) The empirical effect of Gaussian noise in undersampled MRI reconstruction. Tomography 3 (4), pp. 211. Cited by: §2, §3.2.
  • [42] G. Wang, W. Li, M. Aertsen, J. Deprest, S. Ourselin, and T. Vercauteren (2019) Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks. Neurocomput. 338, pp. 34–45. Cited by: §1.
  • [43] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Imag Proc. 13 (4), pp. 600–612. Cited by: §4.
  • [44] Z. Zhang, A. Romero, M. Muckley, P. Vincent, L. Yang, and M. Drozdzal (2019) Reducing uncertainty in undersampled MRI reconstruction with active acquisition. In IEEE Conf Comp Vis Patt Recog. (CVPR), pp. 2049–2058. Cited by: §1, §4.
  • [45] J. Zhu, T. Park, P. Isola, and A. Efros (2017-10) Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE Int Conf Comp Vis. (ICCV), Cited by: §1.
  • [46] Y. Zhu, W. Shen, F. Cheng, C. Jin, and G. Cao (2020) Removal of high density Gaussian noise in compressed sensing MRI reconstruction through modified total variation image denoising method. Heliyon 6 (3), pp. e03680. Cited by: §2, §3.2.