1 Introduction and Related Work
Image-to-image translation methods, which generate images of a target view given images of a source view, often address ill-posed problems, as a unique one-to-one mapping may not exist between the source and target views. Learning-based methods proposed in this context [18, 21, 12] evaluate performance on test data that is similar to the training data. Such models suffer severe degradation in performance when presented with out-of-distribution (OOD) noisy data [11, 19, 24]. In critical applications, such as medical imaging, in addition to robustness to OOD data, it is important to quantify the uncertainty in the predictions made by the model to aid clinical decisions [27, 34]. In particular, problems such as enhancing the quality of a given medical image and synthesizing medical images of a target modality given a source modality benefit from uncertainty estimation, which allows risk assessment in the predicted images [37, 27, 34]. In this work, we use two critical tasks in medical image analysis that can broadly be posed as image-to-image translation problems to demonstrate the efficacy of the proposed methods: (i) undersampled magnetic resonance imaging (MRI) reconstruction and (ii) MRI modality propagation, i.e., synthesizing T2-weighted (T2w) MRI from T1-weighted (T1w) MRI.
MRI k-space (a 2D complex-valued space based on the 2D Fourier transform of the slices) data acquisition is a time-consuming process, as the speed is dependent on hardware and physiological constraints [25, 32, 44]. Typically, faster acquisitions are realized by undersampling the k-space. However, this leads to blurring or aliasing effects in the MRI image. We formulate the reconstruction of high-quality MRI scans from undersampled k-space data as a translation from low-quality MRI (lqMR) to high-quality MRI (hqMR) scans (lqMR → hqMR). Similarly, modality propagation is of interest because routine clinical protocols acquire images with multiple contrasts, which are critical for better diagnostics, but acquiring multiple contrasts increases scanning time. T1w and T2w MRI are among the most commonly acquired contrasts, and the synthesis of T2w from T1w is posed as a translation problem (T1w → T2w).
Recently, conditional generative adversarial networks (GANs) have shown substantially improved performance for the above-mentioned tasks in comparison to other learning-based methods [33, 28, 6, 35, 40, 39, 38]. Such methods condition the generator (which generates the target view) on a source view, and the task of the discriminator is to distinguish the generated target samples from the ground-truth target samples through an adversarial loss term. Additionally, some methods impose a cycle-consistency penalty to constrain the ill-posed translation problem [45, 21, 12]. Along with improved accuracy, quantifying uncertainty can potentially aid clinical decision making, especially in the presence of OOD data, while improving the accuracy of subsequent tasks such as image segmentation and classification [37]. In the general context of medical imaging, OOD perturbations could be systemic (scanner-related) and/or physiological. Hence, beyond reliable image synthesis during inference, a thorough analysis of the robustness of the network, along with quantification of uncertainty in the predicted images, is crucial for the clinical translation of synthesis frameworks [42, 26, 29].
Designing learning strategies that are robust to large errors (i.e., outliers whose residuals follow heavy-tailed distributions) is a well-studied problem in statistics, optimization, and parameter estimation [2, 10, 13, 5]. Quasi-norm-based loss functions, $\ell_q$, use a generalized notion of the $\ell_2$ norm, where $q$ (often called the shape parameter) can be tuned empirically for robust learning [36, 1]. Moreover, such empirical methods often require $q$ to be fixed spatially. This work proposes a novel loss function that automatically learns a shape parameter that can vary spatially over the image dataset, along with a scale parameter that allows us to estimate per-voxel uncertainty.
2 Methods
We describe our proposed GAN framework and discuss specific details pertaining to the two applications: (i) MRI reconstruction using undersampled k-space data, posed as a quality enhancement (QE) task, and (ii) modality propagation (MP). Our proposed solution for these two applications can easily be extended to other image-to-image translation problems, both within MRI and with other modalities.
Let the set of input images be represented by $X$ and the corresponding set of output images by $Y$. Let $x$ and $y$ represent elements from the sets of input and output images (i.e., $x \in X$ and $y \in Y$), respectively. We learn the mapping from input to output using pairs of co-registered training data: $\{(x_n, y_n)\}_{n=1}^{N}$. The input and output images for QE and MP are described below.
Undersampled MRI reconstruction (QE). Here, a mapping is learned from a low-quality MRI image (lqMR), obtained via zero-filled inverse Fourier transform (IFT) of the undersampled k-space data, to the corresponding high-quality MRI image (hqMR), obtained from the fully-sampled k-space. Therefore, for QE, the pair $(x, y)$ becomes (lqMR, hqMR). Let the operator $\mathcal{F}$ represent the 2D Fourier transform and the matrix $M$ represent the sampling mask that performs undersampling in k-space. Then, the undersampled MRI is related to the fully-sampled MRI via $x = \mathcal{F}^{-1}(M \odot \mathcal{F}(y))$. With a fixed sampling mask, accounting for noisy k-space acquisitions corrupted by complex Gaussian noise (say $\eta$), which represent OOD data [41, 46, 20], the OOD input sample (say $\tilde{x}$) can be modeled by
$$\tilde{x} = \mathcal{F}^{-1}\big(M \odot (\mathcal{F}(y) + \eta)\big). \qquad (1)$$
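The degradation model in Equation 1 can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's data pipeline: the function name, the low-frequency mask, and the noise level are our assumptions for demonstration.

```python
import numpy as np

def undersample_with_noise(hq_image, mask, noise_std=0.0, rng=None):
    """Sketch of Eq. (1): zero-filled IFT of an undersampled, noisy k-space.

    hq_image : 2D real array (fully-sampled magnitude image, y)
    mask     : 2D binary array M, 1 where k-space is acquired
    noise_std: std of the complex Gaussian noise eta added in k-space
    """
    rng = np.random.default_rng() if rng is None else rng
    kspace = np.fft.fftshift(np.fft.fft2(hq_image))            # F(y), centered
    noise = noise_std * (rng.standard_normal(kspace.shape)
                         + 1j * rng.standard_normal(kspace.shape))
    kspace_noisy = mask * (kspace + noise)                     # M o (F(y) + eta)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_noisy)))  # |F^-1(...)|

# Illustrative low-frequency mask keeping the central 16 of 64 phase-encode rows.
H = W = 64
mask = np.zeros((H, W))
mask[H // 2 - 8 : H // 2 + 8, :] = 1.0
lq = undersample_with_noise(np.ones((H, W)), mask, noise_std=0.0)
```

With the full mask and zero noise, the function returns the input image unchanged, which is a useful sanity check on the forward/inverse transform pair.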
Modality propagation (MP). Here, we learn a mapping from T1w images to corresponding T2w images. In this context, $X$ and $Y$ are the sets of T1w and T2w MR images, respectively. Unlike QE, the OOD input sample (say $\tilde{x}$) during inference is obtained by adding Gaussian noise in the image space (say $\eta$) to the T1w MRI input image (say $x$). The addition of Gaussian noise in image space is consistent with [9], i.e.,
$$\tilde{x} = x + \eta, \quad \eta \sim \mathcal{N}(0, \sigma^2 I). \qquad (2)$$
2.1 Model
This work, inspired by conditional GANs, proposes a novel GAN-based framework that (i) employs a learnable quasi-norm loss and (ii) estimates uncertainty maps, as described below. Let $G$ and $D$ represent our generator and discriminator, respectively. The input to the generator is $x$, the predicted image is $\hat{y}$ (i.e., the output of the corresponding output branch of $G$, as shown in Figure 1), and the ground truth is $y$. Each image consists of $P$ pixels. We denote pixel $i$ in an image, say $y_n$, as $y_{in}$.
Prior works typically formulate the above task as a regression in which the distribution of residuals between the predictions and the ground truths is assumed to be an isotropic standard Gaussian (zero mean and fixed standard deviation). However, the limitations of this approach include: (i) it cannot model the outliers/corruptions in the dataset, as residuals due to outliers tend to follow heavy-tailed distributions; (ii) the assumption of a fixed standard deviation does not account for heteroscedastic uncertainty. Recent works model heteroscedastic uncertainty by assuming that the variance is data-dependent [15]. We improve upon the assumptions made by prior works by (i) modeling the residuals to follow a generalized Gaussian distribution (GGD) with zero mean, $\mathcal{GGD}(\epsilon; 0, \alpha, \beta)$, and (ii) allowing the shape parameter ($\beta$) and the scale parameter ($\alpha$) to vary spatially. This allows us to learn appropriate quasi-norms at all spatial locations, as well as to estimate uncertainty at every output pixel. Therefore, for the $n^{th}$ image, the residual at pixel location $i$ (between the predicted value $\hat{y}_{in}$ and the ground-truth value $y_{in}$) follows a GGD, i.e.,
$$\epsilon_{in} := \hat{y}_{in} - y_{in} \sim \mathcal{GGD}(\epsilon; 0, \alpha_{in}, \beta_{in}), \qquad (3)$$
$$\mathcal{GGD}(\epsilon; 0, \alpha, \beta) = \frac{\beta}{2\alpha\,\Gamma(1/\beta)} \exp\left(-\left(\frac{|\epsilon|}{\alpha}\right)^{\beta}\right), \qquad (4)$$
$$\mathcal{GGD}(\epsilon; 0, \alpha, 2) = \mathcal{N}\!\left(\epsilon; 0, \tfrac{\alpha^2}{2}\right), \quad \mathcal{GGD}(\epsilon; 0, \alpha, 1) = \mathrm{Laplace}(\epsilon; 0, \alpha). \qquad (5)$$
The GGD is capable of modelling heavy-tailed distributions and includes the Gaussian and Laplace PDFs as special cases, as shown in Figure 2. Here, $\alpha$ is the scale parameter, $\beta$ denotes the shape parameter, and $\Gamma(\cdot)$ is the standard gamma function. In our formulation, all the $\epsilon_{in}$'s are independent but not necessarily identically distributed, as $\alpha_{in}$ and $\beta_{in}$ may vary spatially. Hence, the likelihood is
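The GGD density and its Gaussian/Laplace special cases can be checked numerically. This is a small self-contained sketch using only the standard library; the function name is ours.

```python
import math

def ggd_pdf(eps, alpha, beta):
    """Zero-mean generalized Gaussian density:
    p(eps) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|eps|/alpha)**beta).
    """
    coeff = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coeff * math.exp(-((abs(eps) / alpha) ** beta))

# beta = 2 recovers a Gaussian with sigma = alpha / sqrt(2):
sigma = 1.0
gauss = math.exp(-0.5 * (0.3 / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
assert abs(ggd_pdf(0.3, sigma * math.sqrt(2), 2.0) - gauss) < 1e-12

# beta = 1 recovers a Laplace density with scale b = alpha:
b = 1.5
laplace = math.exp(-abs(0.3) / b) / (2 * b)
assert abs(ggd_pdf(0.3, b, 1.0) - laplace) < 1e-12
```

Shape parameters below 1 give densities with heavier-than-Laplace tails, which is the regime that accommodates outliers.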
$$\mathcal{L} = \prod_{n=1}^{N}\prod_{i=1}^{P} \frac{\beta_{in}}{2\,\alpha_{in}\,\Gamma(1/\beta_{in})} \exp\left(-\left(\frac{|\hat{y}_{in} - y_{in}|}{\alpha_{in}}\right)^{\beta_{in}}\right). \qquad (6)$$
Therefore, the log-likelihood is
$$\log \mathcal{L}(\theta) = \sum_{n=1}^{N}\sum_{i=1}^{P} \left[ -\left(\frac{|\hat{y}_{in} - y_{in}|}{\alpha_{in}}\right)^{\beta_{in}} + \log \frac{\beta_{in}}{\alpha_{in}} - \log \Gamma(1/\beta_{in}) \right] + \text{const}, \qquad (7)$$
where $\theta$ represents the collection of network parameters. In this way, to improve the robustness of the network, we predict (i) $\hat{y}_{in}$ (the predicted value at every pixel location, also the mean of the GGD), (ii) $\hat{\alpha}_{in}$ (an estimate for the true $\alpha_{in}$ at every pixel location), and (iii) $\hat{\beta}_{in}$ (an estimate for the true $\beta_{in}$ at every pixel location). Hence, the proposed robust quasi-norm based loss is
$$\mathcal{L}_{G} = \frac{1}{NP} \sum_{n=1}^{N}\sum_{i=1}^{P} \left[ \left(\frac{|\hat{y}_{in} - y_{in}|}{\hat{\alpha}_{in}}\right)^{\hat{\beta}_{in}} - \log \frac{\hat{\beta}_{in}}{\hat{\alpha}_{in}} + \log \Gamma(1/\hat{\beta}_{in}) \right], \qquad (8)$$
where $\hat{Y} = \{\hat{y}_{in}\}$ indicates the set of all predicted pixel values over all the images in the dataset, and similarly for $\hat{A} = \{\hat{\alpha}_{in}\}$ and $\hat{B} = \{\hat{\beta}_{in}\}$. In addition to the above uncertainty-aware fidelity loss term, the adversarial term depending upon $D$ is defined in terms of the binary cross-entropy between the true and predicted probability vectors for the generated image (say $\hat{y}$) and the ground-truth image (say $y$), respectively. The binary cross-entropy loss is given by $\mathcal{L}_{BCE}(v, \hat{v}) = -\sum_{k} \left[ v_k \log \hat{v}_k + (1 - v_k) \log(1 - \hat{v}_k) \right]$, where $\hat{v}_k$ represents the $k^{th}$ element in the output probability vector obtained from $D$. Hence, $G$ minimizes the loss given by
$$\mathcal{L}_{G}^{\,total} = \mathcal{L}_{G} + \lambda_{adv}\, \mathcal{L}_{BCE}(\mathbf{1}, D(\hat{y})). \qquad (9)$$
On the other hand, $D$ minimizes $\mathcal{L}_{D}$ given by
$$\mathcal{L}_{D} = \mathcal{L}_{BCE}(\mathbf{1}, D(y)) + \mathcal{L}_{BCE}(\mathbf{0}, D(\hat{y})). \qquad (10)$$
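The fidelity term of Equation 8 can be written directly as a per-pixel negative log-likelihood. The NumPy sketch below is our reconstruction of that term (constants dropped), not the training code; in practice the same expression would be written in an autodiff framework.

```python
import math
import numpy as np

_lgamma = np.vectorize(math.lgamma)  # elementwise log-Gamma

def ggd_nll(y_hat, y, alpha, beta):
    """Per-pixel negative log-likelihood under a zero-mean GGD residual
    model (Eq. 8 up to constants):
    (|y_hat - y| / alpha)**beta - log(beta / alpha) + log Gamma(1/beta),
    averaged over all pixels. alpha, beta > 0 may vary per pixel."""
    resid = np.abs(y_hat - y)
    nll = (resid / alpha) ** beta - np.log(beta / alpha) + _lgamma(1.0 / beta)
    return float(nll.mean())

rng = np.random.default_rng(0)
y = rng.random((8, 8))
y_hat = y + 0.1 * rng.standard_normal((8, 8))
ones = np.ones_like(y)
loss_gauss = ggd_nll(y_hat, y, ones, 2.0 * ones)   # beta=2: Gaussian-like l2
loss_heavy = ggd_nll(y_hat, y, ones, 0.5 * ones)   # beta<1: heavy-tailed
```

With $\beta = 2$ this reduces (up to constants) to the familiar heteroscedastic Gaussian loss, and with $\beta = 1$ to a robust Laplace-like $\ell_1$ loss, which is why learning $\beta$ per pixel amounts to learning a spatially varying quasi-norm.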
The architecture of the proposed generator is inspired by the U-Net [31]. We modify the U-Net such that the last convolutional layer is split into three, i.e., three independent convolutional layers attached to the penultimate block of the U-Net (as shown in Figure 1). This model predicts $\hat{y}$, $\hat{\alpha}$, and $\hat{\beta}$. The discriminator network ($D$ in Figure 1) is a typical CNN architecture as described in [6]. Although image synthesis benefits from 3D processing, training CNNs with 3D images poses substantial challenges, such as increased computational demand, cost, and training time. Several works use patch-based approaches to circumvent this issue; however, the final reconstruction from patches might result in artifacts [33]. In this work, we design our model to accept a 2.5D input, as described in [4], and produce a 2.5D output. This allows the model to exploit 3D-like neighborhood information while learning, resulting in better-quality outputs. The final 3D output is obtained by joining (via averaging) overlapping 2.5D volumes, as shown in Figure 3.
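The overlap-averaging step that joins 2.5D slabs into a 3D volume can be sketched as follows. The slab placement and shapes here are illustrative assumptions; the paper's exact slab size and stride are not specified in this excerpt.

```python
import numpy as np

def assemble_volume(slabs, starts, depth):
    """Average overlapping 2.5D slabs (each of shape (d, H, W), placed at
    axial offsets `starts`) into a single 3D volume with `depth` slices."""
    d, H, W = slabs[0].shape
    acc = np.zeros((depth, H, W))
    cnt = np.zeros((depth, 1, 1))
    for slab, s in zip(slabs, starts):
        acc[s : s + d] += slab   # accumulate predictions slice by slice
        cnt[s : s + d] += 1      # count how many slabs cover each slice
    return acc / np.maximum(cnt, 1)  # average where slabs overlap

# Two overlapping 3-slice slabs covering a 5-slice volume.
slab = np.ones((3, 4, 4))
vol = assemble_volume([slab, 2.0 * slab], starts=[0, 2], depth=5)
```

Slices covered by both slabs receive the mean of the two predictions; slices covered by one slab pass through unchanged, which matches the "joining via averaging" described above.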
2.2 Training and Testing Scheme
All the networks were trained using the Adam optimizer [16] by sampling mini-batches of size 16, with the initial learning rate decayed using cosine annealing over the epochs. The weight $\lambda_{adv}$ of the adversarial term (Equation 9) was set separately for MP and QE. For numerical stability, the proposed network produces $1/\hat{\beta}$ instead of $\hat{\beta}$. The positivity constraint on the outputs is enforced by applying the ReLU activation function at the end of the three output layers of the network (Figure 1). In addition to the network-generated outputs ($\hat{y}$, $\hat{\alpha}$, $\hat{\beta}$), we also compute aleatoric and epistemic uncertainty maps, denoted by $U_a$ and $U_e$, respectively. While $U_e$ captures the uncertainty in the model parameters and $U_a$ captures the data-dependent uncertainty, we combine both to produce $U = U_a + U_e$. To capture the epistemic uncertainty, multiple forward passes ($T$ forward passes) are performed for every input image at inference with dropouts activated. In this way, the final outputs of the proposed framework for a given input, along with the uncertainty maps, are given by
$$\hat{y}_{i} = \frac{1}{T} \sum_{t=1}^{T} \hat{y}_{i}^{(t)}, \qquad (11)$$
$$\hat{\alpha}_{i} = \frac{1}{T} \sum_{t=1}^{T} \hat{\alpha}_{i}^{(t)}, \qquad (12)$$
$$\hat{\beta}_{i} = \frac{1}{T} \sum_{t=1}^{T} \hat{\beta}_{i}^{(t)}, \qquad (13)$$
$$U_{a,i} = \frac{1}{T} \sum_{t=1}^{T} \big(\hat{\alpha}_{i}^{(t)}\big)^2\, \frac{\Gamma\!\big(3/\hat{\beta}_{i}^{(t)}\big)}{\Gamma\!\big(1/\hat{\beta}_{i}^{(t)}\big)}, \qquad (14)$$
$$U_{e,i} = \frac{1}{T} \sum_{t=1}^{T} \big(\hat{y}_{i}^{(t)}\big)^2 - \hat{y}_{i}^2, \qquad (15)$$
where $t$ denotes the index of the forward pass through the network, and the outputs $\hat{y}_{i}^{(t)}$, $\hat{\alpha}_{i}^{(t)}$, $\hat{\beta}_{i}^{(t)}$ are computed with the parameters instantiated by dropout on the $t^{th}$ forward pass, as described in [7]. In our experiments, $T$ was set to 50.
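The combination of stochastic forward passes into a mean prediction and uncertainty maps can be sketched numerically. This follows the mean-and-variance scheme around Equations 11–15 as we have reconstructed it: the GGD variance formula $\alpha^2\,\Gamma(3/\beta)/\Gamma(1/\beta)$ is standard, but the exact per-pass combination is our assumption, and the function name is ours.

```python
import math
import numpy as np

def combine_passes(y_list, alpha_list, beta_list):
    """Combine T stochastic (dropout) forward passes into a mean prediction
    plus aleatoric, epistemic, and total per-pixel uncertainty maps.
    Aleatoric variance of a GGD: alpha**2 * Gamma(3/beta) / Gamma(1/beta)."""
    g = np.vectorize(math.gamma)
    y = np.stack(y_list)       # shape (T, H, W)
    a = np.stack(alpha_list)
    b = np.stack(beta_list)
    y_mean = y.mean(axis=0)
    u_aleatoric = (a ** 2 * g(3.0 / b) / g(1.0 / b)).mean(axis=0)
    u_epistemic = y.var(axis=0)  # spread across dropout passes
    return y_mean, u_aleatoric, u_epistemic, u_aleatoric + u_epistemic

# Two identical passes: epistemic uncertainty should vanish.
T = 2
ys = [np.full((4, 4), 0.5)] * T
alphas = [np.full((4, 4), math.sqrt(2.0))] * T  # with beta=2: variance = 1
betas = [np.full((4, 4), 2.0)] * T
y_mean, u_a, u_e, u_total = combine_passes(ys, alphas, betas)
```

With $\beta = 2$ and $\alpha = \sqrt{2}\,\sigma$, the aleatoric term reduces to the Gaussian variance $\sigma^2$, so the combined map $U = U_a + U_e$ has a consistent variance interpretation.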
3 Datasets and Experiments
For the two applications, QE and MP, we describe the in vivo training data and the generation of OOD test data.
3.1 Datasets
We use the following two datasets: (i) a proprietary dataset consisting of registered T1w MRI scans for QE, and (ii) the publicly available IXI dataset (https://brain-development.org/ixi-dataset/) for MP. The dataset in (i) consists of T1w MRI scans with isotropic resolution from 100 subjects. We consider these images as the hqMR images for the QE experiment. The methodology for obtaining lqMR images is shown in Figure 4. For the LQ data acquisition, we sample only a fraction of the k-space using the mask in Figure 4, resulting in an accelerated acquisition. Subsequently, the LQ images are obtained as the inverse Fourier transform (IFT) of the undersampled k-space. We use a training, validation, and test split of 70, 10, and 20 non-overlapping subjects, respectively. For each subject, we use 50 mid-brain axial slices. The dataset in (ii) consists of multi-contrast MR images from several healthy subjects, collected across three different scanners. For MP, we use the T1w and T2w contrast images, which were co-registered (intra-subject) using ANTS [17]. We employ a training, validation, and test split of 250, 50, and 100 non-overlapping subjects, respectively, and we use 70 axial slices from the mid-brain region per subject.
3.2 Experiments
OOD data in MRI.
MRI data acquisition depends on several factors that may affect the quality of the MRI scans, leading to a wide distribution of datasets. In practice, the dataset used by learning algorithms will not cater to all possible variations in the distribution; hence, it is imperative to design learning algorithms that are robust to a wide range of OOD data. In particular, the signal-to-noise ratio (SNR) of the MRI k-space data (and consequently the image) depends on various factors such as field strength, pulse sequence, tissue characteristics, the number of receiver coils and their sensitivities, scan/physiological parameters, etc. [41, 46, 30, 22]. Hence, for both QE and MP, we generate OOD MRI data that captures variations in noise levels, as described below. We consider the noise level (NL) of the training set to be NL0 for both QE and MP. We model three increasing noise levels, NL1–NL3, of OOD degradations for the test data. Degradations are in (i) k-space for QE and (ii) image space for MP.
QE. Having trained the network using images at NL0, at inference we evaluate the robustness of all the methods at three increasing levels of noise in the k-space. In practice, this is obtained by increasing the magnitude of the complex Gaussian noise in the k-space, as described in Equation 1. In our experiments, the sampling mask is fixed (i.e., fixing the k-space trajectory [23, 20], Figure 4), acquiring only a fraction of the k-space. The PSNR of the LQ images at NL0–NL3, averaged across the test set, was computed with respect to the HQ images.
MP. Here, we employ two kinds of OOD test data: (i) evaluation at three increasing levels of noise in the image space and (ii) inclusion of unseen synthetic lesions. The image-space degradations are obtained by increasing the magnitude of the Gaussian noise as described in Equation 2. For the lesion experiment, we add lesions to both the input (T1w) and the reference (T2w) images based on the segmented lesion masks available from the BRATS 2020 dataset (https://www.med.upenn.edu/cbica/brats2020/data.html).
4 Results and Discussion
In this work, for a fair evaluation, we modify the baselines (originally 2D) to use a 2.5D-style training strategy. We use the structural similarity index (SSIM) [43], peak signal-to-noise ratio (PSNR), and relative root mean squared error (RRMSE) for quantitative comparison. The RRMSE between two images $a$ and $b$ is defined as $\|a - b\|_F / \|b\|_F$, where $\|\cdot\|_F$ indicates the Frobenius norm.
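The RRMSE metric defined above is a one-liner; a small sketch, with the function name being our own choice:

```python
import numpy as np

def rrmse(pred, ref):
    """Relative RMSE: Frobenius norm of the error divided by the
    Frobenius norm of the reference image."""
    return np.linalg.norm(pred - ref) / np.linalg.norm(ref)

img = np.array([[1.0, 2.0], [3.0, 4.0]])
assert rrmse(img, img) == 0.0          # perfect prediction
assert abs(rrmse(2.0 * img, img) - 1.0) < 1e-12  # error norm equals ref norm
```

Unlike PSNR, RRMSE is scale-normalized by the reference, so it is directly comparable across images with different intensity ranges.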
Quantitative and Qualitative Evaluations for QE. We use the following state-of-the-art methods for comparison: (i) UNet as used in [44], (ii) pix2pix as used in [44, 14], and (iii) UNet with uncertainty (UNet+U) as used in [44], alongside our proposed method, URGAN (ours). Our uncertainty model is based on generalized Gaussian distributions, allowing our loss function to be tuned automatically, unlike that of UNet+U as described in [44]. All the baseline models are trained with the loss functions proposed in their respective formulations.
Figure 5 shows the predicted images for QE at NL0 and NL3, for a representative test slice.
At NL0, the predicted images from all the methods (Figure 5 (a2)–(a5)) have low residual error and are comparable to each other. However, at NL3, pix2pix and UNet generate images with artifacts (Figure 5 (b4)–(b5)). In contrast, the predicted images from the methods that model uncertainty (ours and UNet+U) show a superior recovery of structure and contrast, in addition to the removal of artifacts (Figure 5 (a2–a3 and a7–a8)). The quantitative evaluation across all the OOD test data (Figure 6) shows that the performance of all the baseline methods suffers substantially at higher degradation levels, emphasizing the improved robustness due to the proposed GGD-based uncertainty model.
Table 1 shows that, with increasing NLs, the mean of the $\hat{\beta}$'s decreases, as explained in Section 4.1.
Quantitative and Qualitative Evaluations for MP.
We use the following stateoftheart methods for comparison:
(i) UNet as used in [3, 4], adapted to work with a unimodal input (T1w) and predict a unimodal output (T2w).
(ii) CycleGAN from [6]; here we used a UNet for the generator (as it led to better performance).
(iii) UNet+DO: we improve the robustness of the UNet (from [3, 4]) to noisy OOD input data by employing dropouts (DO) during both training and inference.
(iv) CycleGAN+DO: similarly, we employ dropouts for the CycleGAN (from [6]).
For the models with dropouts activated at inference, we perform multiple forward passes and take the mean as the final output. Both UNet and UNet+DO are trained using a pixel-wise loss; CycleGAN and CycleGAN+DO use an additional adversarial loss with cycle consistency, as in [6].
Figure 7 shows, for MP, orthogonal views through the 3D output obtained from our method as well as the best-performing baselines. UNet+DO and CycleGAN+DO show improved performance in comparison to their non-DO counterparts, as is evident from Figure 9. Similar to QE, at NL0, the outputs of all methods are comparable. However, at higher NLs, the proposed method outperforms all the baselines substantially, across all metrics (Figures 7 and 9). Figure 8 shows the predicted images for a representative test axial input slice, along with the residuals, for the proposed model, UNet+DO, and CycleGAN+DO at two different NLs (NL0 and NL3). At NL0, the predicted images from all the methods (Figure 8 (a2)–(a4)) appear close in structure and contrast to the ground truth (Figure 8 (a1)). Our method shows the least residual (Figure 8 (b2)) compared to CycleGAN+DO (b3) and UNet+DO (b4). At NL3, our method (Figure 8 (a6)) outperforms both CycleGAN+DO (Figure 8 (a7)) and UNet+DO (Figure 8 (a8)) in terms of visual quality as well as SSIM values. All baselines show poor synthesis of structure and contrast and limited removal of noise. Our method removes a significant amount of noise, recovers structure, contrast, and texture, and is closer in appearance to the reference image (Figure 8 (a5)).
4.1 Uncertainty quantification
Along with the improved accuracy of the predicted images, we demonstrate the efficacy of estimating the uncertainty maps ($U_a$, $U_e$, $U$). The uncertainty maps are derived from the outputs of the network, $\hat{y}$, $\hat{\alpha}$, and $\hat{\beta}$, as described in Equations 14–15.
Scatter plots in Figure 11 show that, for both QE and MP, the residuals between the predictions and the ground truths correlate positively with the uncertainty obtained from our framework, i.e., high overall residuals correspond to high overall uncertainty for the image. Therefore, higher overall uncertainty may be used as a proxy for increased imperfections in the reconstruction/synthesis. Interestingly, Figure 11 also shows that high residuals correlate negatively with the predicted shape parameter ($\hat{\beta}$), which is consistent with the fact that lower shape-parameter values correspond to heavy-tailed distributions for the residuals, i.e., outliers.
We also employ a testing scenario for MP, where the incoming test subject has an unseen pathology (a lesion).
In this case too, we use the network which was trained on healthy individuals only.
Figures 10 (a1) and (a2) show the input (T1w) and the reference (T2w) image with simulated lesions.
The subfigures (i) and (ii) show results on two representative slices.
The predicted image ($\hat{y}$, Figure 10 (a3)), the scale parameter of the GGD ($\hat{\alpha}$, Figure 10 (b1)), and the corresponding shape parameter ($\hat{\beta}$, Figure 10 (b2)) are the outputs of the network. The derived uncertainty map ($U$), obtained using Equations 13–15, is shown in Figure 10 (b3).
For the background region, both the scale and shape maps show values that vary little spatially. While the scale map ($\hat{\alpha}$) captures edges in the image at a coarser level, the shape map ($\hat{\beta}$) captures subtle features in the image. In all the slices, the intensity values within the lesion region are significantly lower than in other regions, indicating outliers.
We observe that the absolute residual image has peak values in and around the lesion. This is expected as the training data is devoid of such pathological cases.
The regions with "high" intensity values within the uncertainty map ($U$, Figure 10 (b3)) show high residual values as well (Figure 10 (a4)).
In addition to showing peak values of the uncertainty maps in regions with high residual error, the uncertainty map is able to capture
minor variations in the predicted images, especially at the skull boundary and gyri.
To make the uncertainty map clinically relevant, we identify "high" residual values as those above a predefined threshold, say $\tau$. The value of $\tau$ is fixed as follows: we choose $\tau$ to be slightly above the standard deviation of the highest noise level at which the network provided reasonably accurate reconstructed images. In this paper, we choose $\tau$ slightly above the estimated standard deviation of the noise at NL2.
Figure 10 (b4) shows the product of the uncertainty map ($U$) and the binary mask obtained by thresholding the absolute error map at $\tau$.
Clearly, the resultant uncertainty map (Figure 10 (b4)) reflects pixels with the highest predicted uncertainty.
Hence, the uncertainty map can serve as a proxy for the residual map (which is not available at inference).
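The masking step behind Figure 10 (b4) amounts to gating the uncertainty map by a residual threshold. A minimal sketch, with the function name and array values being our own illustration:

```python
import numpy as np

def high_error_uncertainty(u_map, abs_err, tau):
    """Zero out the uncertainty map wherever the absolute error does not
    exceed the threshold tau, keeping only 'high'-residual regions."""
    return u_map * (abs_err > tau)

u_map = np.array([[1.0, 2.0], [3.0, 4.0]])
abs_err = np.array([[0.0, 1.0], [1.0, 0.0]])
masked = high_error_uncertainty(u_map, abs_err, tau=0.5)
```

At deployment, where the residual is unknown, the same gating would instead be applied by thresholding $U$ itself, consistent with using uncertainty as a proxy for error.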
Conclusion.
In this work, we proposed a GAN-based framework with adaptive quasi-norm loss functions for improved robustness to unseen perturbations in test data. We demonstrated the efficacy of our network on two key applications arising in the field of medical imaging, namely undersampled MRI reconstruction and modality propagation.
Enhanced outputs produced using this method can, after post-processing, support clinical decision making.
We compared our network with state-of-the-art networks for both applications and found that the performance of our network is substantially better (both qualitatively and quantitatively) at high noise levels, and comparable at noise levels similar to the training set. While our experiments with scanner-related perturbations (the noise-level study) showed that our framework can generate predicted images with low residuals, the experiments related to physiological perturbations (the lesion study) showed that our uncertainty maps can be a proxy for the residual maps.
References
[1] (2019) A general and adaptive robust loss function. In IEEE Conf Comp Vis Patt Recog. (CVPR), pp. 4331–4339.
[2] (1996) On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. Int J Comp Vis 19 (1), pp. 57–91.
[3] (2017) Multimodal MR synthesis via modality-invariant latent representation. IEEE Trans Med Imag. 37 (3), pp. 803–814.
[4] (2019) Ultra-low-dose 18F-florbetaben amyloid PET imaging using deep learning with multi-contrast MRI inputs. Radiol. 290, pp. 649.
[5] (1979) Theoretical statistics. CRC Press.
[6] (2019) Image synthesis in multi-contrast MRI with conditional generative adversarial networks. IEEE Trans Med Imag. 38 (10), pp. 2375–2388.
[7] (2016) Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In Int Conf Mach Learn. (ICML), pp. 1050–1059.
[8] (2016) NIPS 2016 tutorial: generative adversarial networks. arXiv preprint arXiv:1701.00160.
[9] (1995) The Rician distribution of noisy MRI data. Magn Res Med. 34 (6), pp. 910–914.
[10] (2015) Statistical learning with sparsity: the lasso and generalizations. CRC Press.
[11] (2020) Generalized ODIN: detecting out-of-distribution image without learning from out-of-distribution data. In IEEE Conf Comp Vis Patt Recog. (CVPR).
[12] (2018) Multimodal unsupervised image-to-image translation. In European Conf Comp Vis. (ECCV), pp. 172–189.
[13] (1972) The 1972 Wald lecture. Robust statistics: a review. The Annals of Mathematical Statistics 43 (4), pp. 1041–1067.
[14] (2017) Image-to-image translation with conditional adversarial networks. In IEEE Conf Comp Vis Patt Recog. (CVPR), pp. 1125–1134.
[15] (2017) What uncertainties do we need in Bayesian deep learning for computer vision? In Adv Neural Info Proc Syst., pp. 5574–5584.
[16] (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[17] (2009) Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration. NeuroImage 46 (3), pp. 786–802.
[18] (2017) Photo-realistic single image super-resolution using a generative adversarial network. In IEEE Conf Comp Vis Patt Recog. (CVPR), pp. 4681–4690.
[19] (2018) A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Adv Neural Info Proc Syst., pp. 7167–7177.
[20] (2014) Accelerated MRI with CIRcular Cartesian UnderSampling (CIRCUS): a variable density Cartesian sampling strategy for compressed sensing and parallel imaging. Quant Imag Med Surgery 4 (1), pp. 57.
[21] (2017) Unsupervised image-to-image translation networks. In Adv Neural Info Proc Syst., pp. 700–708.
[22] (2007) Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn Res Med. 58 (6), pp. 1182–1195.
[23] (1995) A perspective on k-space. Radiol. 195 (2), pp. 297–315.
[24] (2017) Universal adversarial perturbations. In IEEE Conf Comp Vis Patt Recog. (CVPR), pp. 1765–1773.
[25] (2008) K-space tutorial: an MRI educational tool for a better understanding of k-space. Biomed Imag and Interven J 4 (1).
[26] (2020) Exploring uncertainty measures in deep networks for multiple sclerosis lesion detection and segmentation. Med Imag Anal. 59, pp. 101557.
[27] (2020) Exploring uncertainty measures in deep networks for multiple sclerosis lesion detection and segmentation. Med Imag Anal. 59, pp. 101557.
[28] (2018) Medical image synthesis with deep convolutional adversarial networks. IEEE Trans Biomed Engg. 65 (12), pp. 2720–2730.
[29] (2019) Adversarial training with cycle consistency for unsupervised super-resolution in endomicroscopy. Med Imag Anal. 53, pp. 123–131.
[30] (2008) Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions. Magn Res Med. 60 (4), pp. 895–907.
[31] (2015) U-Net: convolutional networks for biomedical image segmentation. In Int Conf Med Imag Comput Comput-Assist Intervention. (MICCAI), pp. 234–241.
[32] (2017) A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans Med Imag. 37 (2), pp. 491–503.
[33] (2017) Deep learning in medical image analysis. Ann Rev Biomed Engg. 19, pp. 221–248.
[34] (2018) Bayesian uncertainty quantification in linear models for diffusion MRI. NeuroImage 175, pp. 272–285.
[35] (2021) Towards lower-dose PET using physics-based uncertainty-aware multimodal learning with robustness to out-of-distribution data. Medical Image Analysis 73.
[36] (2010) Secrets of optical flow estimation and their principles. In IEEE Conf Comp Vis Patt Recog. (CVPR), pp. 2432–2439.
[37] (2017) Bayesian image quality transfer with CNNs: exploring uncertainty in dMRI super-resolution. In Int Conf Med Image Comput Comput-Assist Intervention. (MICCAI), pp. 611–619.
[38] (2019) A mixed-supervision multilevel GAN framework for image quality enhancement. In Medical Image Computing and Computer Assisted Intervention (MICCAI 2019).
[39] (2019) Robust super-resolution GAN, with manifold-based and perception loss. In IEEE Int Symp Biomed Imag. (ISBI 2019).
[40] (2020) QUEST for MEDISYN: quasi-norm based uncertainty estimation for medical image synthesis. In ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning.
[41] (2017) The empirical effect of Gaussian noise in undersampled MRI reconstruction. Tomography 3 (4), pp. 211.
[42] (2019) Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks. Neurocomput. 338, pp. 34–45.
[43] (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Imag Proc. 13 (4), pp. 600–612.
[44] (2019) Reducing uncertainty in undersampled MRI reconstruction with active acquisition. In IEEE Conf Comp Vis Patt Recog. (CVPR), pp. 2049–2058.
[45] (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE Int Conf Comp Vis. (ICCV).
[46] (2020) Removal of high density Gaussian noise in compressed sensing MRI reconstruction through modified total variation image denoising method. Heliyon 6 (3), pp. e03680.