1 Introduction
Magnetic resonance imaging (MRI) is a noninvasive imaging technique that allows studying anatomy and tissue properties without ionizing radiation. However, acquiring a detailed, high-quality image is time-consuming.
Reducing scan time is therefore essential to increase patient comfort and throughput, and to open up more possibilities for optimized examinations. Since acquisition time in MRI is directly related to the number of samples collected in k-space, shortening it usually means reducing the number of samples collected, for instance by skipping phase-encoding lines in k-space in the Cartesian case. Yet doing so violates the Nyquist criterion, causing aliasing artifacts and requiring reconstruction techniques that can recover the fully sampled MR image from the undersampled data.
The problem of designing such techniques has received considerable attention in the clinical and signal processing research communities. Conventional approaches involve compressed sensing [3, 4] and parallel imaging [5]. More recently, research efforts have focused on using deep learning to tackle the problem and have achieved state-of-the-art results. However, despite their multiple advantages, these learning-based algorithms are susceptible to discrepancies between the training set and the image to reconstruct at test time. One such difference is due to the different bias fields present in the images. Such discrepancies, sometimes referred to as ‘domain shift’, typically lead to a loss in performance. Various methods have been developed to tackle the domain shift problem in the deep learning community. One approach is to augment the training data so that it hopefully resembles the test data better [17, 18]. Although this approach increases the generalization capabilities of the learned model, it does not optimize the solution specifically for the test image at hand. Another approach is to modify the network parameters to reduce the domain gap [19], which indirectly optimizes the solution by modifying the network rather than the solution itself.
Yet, while the effect of the bias field on the image encoding is easy to model and there are readily available methods that can estimate it well, none of the aforementioned approaches takes this into account.
In this work we propose a joint reconstruction algorithm, which estimates and explicitly models the bias field throughout the reconstruction process. By doing so, we remove one degree of variation between the training set and the image to be reconstructed. The training is done on images without bias field, and the bias field itself is modeled as a multiplicative term in the image encoding process, linking the training and test domains. To be able to do this we use a reconstruction algorithm which decouples the learned prior information from the image generation process, namely the DDP algorithm [2]. Using the N4 algorithm, we iteratively estimate the bias field in the test image throughout the reconstruction, and this estimate improves as the reconstructed image becomes better. We compare the proposed method to reconstruction without bias field estimation, using a publicly available dataset as well as in-house measured images, and show an improvement in performance.
2 Methods
2.1 Problem formulation
The measured k-space data $y$ and the underlying true MR image $x$ are related through the encoding operation $E$ (which incorporates the coil sensitivities, Fourier transformation and the undersampling operation) as

$y = Ex + \eta$,

where $\eta$ is complex Gaussian noise.

2.2 DDP reconstruction
The deep density prior (DDP) reconstruction is based on maximum-a-posteriori estimation, i.e.

$\hat{x} = \arg\max_x \left[ \log p(y|x) + \log p(x) \right]$,

where $\log p(y|x)$ is the data consistency and $\log p(x)$ is the prior term. The data consistency can be written exactly as the log Gaussian, and the method approximates the prior term using the evidence lower bound (ELBO) of a variational autoencoder (VAE) [23, 24]. With these, the reconstruction problem becomes

$\hat{x} = \arg\max_x \left[ \log \mathcal{N}(y \mid Ex, \sigma_\eta^2 I) + \mathrm{ELBO}(x) \right]$.  (1)
The VAE is trained on patches from fully sampled images, and the ELBO term operates on a set of overlapping patches that cover the whole image. As evident in the equation above, the ELBO term is independent of the image encoding. The DDP method solves the problem using the projection onto convex sets (POCS) algorithm. In this scheme the optimization is implemented as successive applications of projection operations for the prior $P_{prior}$, the data consistency $P_{DC}$ and the phase of the image $P_{phase}$, i.e. $x^{t+1} = P_{phase}(P_{DC}(P_{prior}(x^t)))$. The prior projection is defined as gradient ascent on the ELBO w.r.t. the image magnitude for a fixed number of steps, i.e. $P_{prior}(x) = x_K$, where $x_{k+1} = x_k + \alpha \nabla_{|x|}\mathrm{ELBO}(|x_k|)$ for $k = 0, \dots, K-1$. The data consistency projection is given as $P_{DC}(x) = x + E^H(y - Ex)$. For the phase projection, we use the one defined in [2], to which we add the minimization of the data consistency with respect to the image phase. For further details, we refer the reader to the original paper [2].
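As a concrete illustration, the POCS scheme described above can be sketched on a toy 1-D, single-coil problem. The gradient ascent on the learned VAE ELBO is replaced here by a simple smoothness stand-in, and the phase projection assumes a real, nonnegative image; these stand-ins, and all names below, are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Toy 1-D sketch of the POCS iteration x_{t+1} = P_phase(P_dc(P_prior(x_t))).
# The learned ELBO prior is replaced by a smoothness stand-in (assumption).

def fwd(x, mask):
    # E: undersampled unitary Fourier encoding (single coil)
    return mask * np.fft.fft(x, norm="ortho")

def adj(k):
    # E^H: adjoint of the encoding
    return np.fft.ifft(k, norm="ortho")

def P_dc(x, y, mask):
    # data consistency: overwrite the sampled k-space entries with y
    k = np.fft.fft(x, norm="ortho")
    k[mask] = y[mask]
    return np.fft.ifft(k, norm="ortho")

def P_prior(x, steps=3, lr=0.1):
    # stand-in for the ELBO gradient ascent on the image magnitude
    m, ph = np.abs(x), np.angle(x)
    for _ in range(steps):
        m = m + lr * (np.roll(m, 1) + np.roll(m, -1) - 2.0 * m)
    return m * np.exp(1j * ph)

def P_phase(x):
    # simplest phase projection: assume a real, nonnegative image
    return np.abs(x).astype(complex)

rng = np.random.default_rng(0)
n = 64
x_true = np.convolve(rng.random(n), np.ones(8) / 8.0, mode="same")  # smooth signal
mask = rng.random(n) < 0.5
mask[:4] = True          # fully sampled low frequencies (FFT ordering)
mask[-4:] = True
y = fwd(x_true, mask)

x = adj(y)               # zero-filled initialization
for _ in range(30):
    x = P_phase(P_dc(P_prior(x), y, mask))
```

In the actual method the prior projection runs gradient ascent on the VAE ELBO and the phase projection follows [2]; only the overall projection structure is shown here.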
2.3 Modeling and estimating the bias field
Signal intensities are often not constant across MR images, even inside the same tissue. Instead, they usually vary smoothly, with fluctuations of 10–20%, across the measured 3D volume [6]. These variations, collectively called the bias field, can be attributed to factors such as patient anatomy, differences in coil sensitivity or standing wave effects [6, 9], which are difficult or impossible to control during an acquisition. Hence they can introduce a degree of variation between the images used for training reconstruction models and an image to be reconstructed at test time. To prevent a loss of performance, this variation has to be taken into account.
To this end, we model the bias field explicitly and incorporate it into Equation 1 as a multiplicative term $B$ applied before the encoding operation, modifying the image intensities pixel-wise. The bias field is an additional unknown in the reconstruction process that is estimated alongside $x$ as

$\hat{x}, \hat{B} = \arg\max_{x, B} \left[ \log \mathcal{N}(y \mid EBx, \sigma_\eta^2 I) + \mathrm{ELBO}(x) \right]$.  (2)
Here, the measured k-space $y$ carries the effect of a bias field, while $x$ is bias field free, as the bias field in the image is explicitly modeled through the $B$ term. This setting allows us to train the VAE model on images without bias field. The advantage of this idea is twofold. Firstly, since training can be performed on bias-field-free images, it is easier for the VAE to learn the distribution, as there is less spurious variation in the data. Secondly, we make the reconstruction problem easier by explicitly providing the bias field information, which would otherwise have to be recovered from the undersampled k-space as well. We solve Equation 2 as a joint iterative reconstruction problem by alternating between two subproblems:
$x^{t+1} = \arg\max_x \left[ \log \mathcal{N}(y \mid E B^t x, \sigma_\eta^2 I) + \mathrm{ELBO}(x) \right]$,  (3)

$B^{t+1} = \mathrm{N4}(B^t x^{t+1})$,  (4)
where N4 denotes the bias field estimation algorithm, which we explain below. To account for the bias field, the data consistency projection needs to be adapted and becomes $P^B_{DC}(x) = B^{-1} P_{DC}(Bx)$. In this case the reconstructed image corresponding to $y$ is given as $Bx$. This modification can be interpreted as a forward-backward projection, applying $B$ before and $B^{-1}$ after the data consistency projection, to move the image between the “normalized” bias-field-free domain and the bias-field-corrupted acquisition domain. The pseudocode for the described joint optimization scheme is presented in Algorithm 1.
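To make the forward-backward interpretation concrete, the sketch below extends a toy 1-D, single-coil setting with a bias-field-aware data consistency projection and the alternating updates of Equations 3 and 4. The function `estimate_bias` is a crude low-pass stand-in for N4 (our assumption, kept only for self-containedness), and the prior term is omitted from the $x$-update.

```python
import numpy as np

def P_dc(x, y, mask):
    # data consistency: overwrite the sampled k-space entries with y
    k = np.fft.fft(x, norm="ortho")
    k[mask] = y[mask]
    return np.fft.ifft(k, norm="ortho")

def P_dc_bias(x, y, mask, B):
    # forward-backward projection: into the acquisition domain with B,
    # enforce data consistency, back to the bias-free domain with 1/B
    return P_dc(B * x, y, mask) / B

def estimate_bias(img, keep=3):
    # crude stand-in for N4 (assumption): keep only the lowest Fourier
    # modes of the log-magnitude, exponentiate, normalize to mean 1
    logm = np.log(np.abs(img) + 1e-8)
    k = np.fft.fft(logm)
    lp = np.zeros_like(k)
    lp[:keep] = k[:keep]
    lp[-keep:] = k[-keep:]
    field = np.exp(np.real(np.fft.ifft(lp)))
    return field / field.mean()

rng = np.random.default_rng(1)
n = 64
x_free = 1.0 + rng.random(n)                                  # bias-free "image"
B_true = 1.0 + 0.2 * np.sin(2.0 * np.pi * np.arange(n) / n)   # smooth field
mask = rng.random(n) < 0.6
y = mask * np.fft.fft(B_true * x_free, norm="ortho")

B = np.ones(n)
x = np.abs(np.fft.ifft(y, norm="ortho"))    # zero-filled initialization
for _ in range(5):
    x = np.abs(P_dc_bias(x, y, mask, B))    # x-update (prior term omitted)
    B = estimate_bias(B * x)                # B-update via the N4 stand-in
```

In the actual algorithm the $x$-update also includes the VAE prior and phase projections, and N4 replaces the low-pass stand-in.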
2.3.1 N4 bias field estimation
N4 is a variant of the widely used N3 algorithm [6], a nonparametric iterative method that approximates intensity nonuniformity fields in three-dimensional images. Given an image, its objective is to find a smooth, slowly varying, multiplicative field [6]. N4 improves upon the N3 algorithm by modifying the B-spline smoothing strategy and the iterative optimization scheme used in the original framework. We use the implementation available as N4ITK [7, 8]. We denote this method as $\mathrm{N4}(\cdot)$ in our formulations.
2.4 Datasets used
To train the VAE, we used 5 central, non-adjacent slices with 0.7 mm isotropic resolution from T1-weighted images of 360 subjects from the Human Connectome Project (HCP) preprocessed dataset [15, 16], which by default contain a bias field. We applied N4 to these images to also create a bias-field-free training set.
For test images, we took a central slice from 20 different test subjects from the HCP data. As the HCP images tend to have a similar bias field, we additionally created a modified test set, where we estimated the bias fields with N4, took their inverse and multiplied them with the bias-field-free images. In addition to HCP data, we also tested the proposed method with central slices from 9 in-house measured subjects. These images were acquired using a 16-element head coil, with acquisition parameters similar to the HCP dataset and a 1 mm isotropic resolution. We used ESPIRiT [21] to estimate the coil sensitivity maps for these images.
2.5 Training the VAE
We trained four patch-wise VAEs: one for each of the two resolution levels, to match the datasets, each with and without bias field. We used patches of size 28×28 with a batch size of 50 and ran the training for 500,000 iterations. The patches were extracted randomly from the training images with replacement. All the VAEs were trained with the same training images extracted from the HCP dataset, as described above.
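The patch sampling used for the VAE training can be sketched as below; the helper `sample_patches` and the toy slice stack are our own illustrative names, with only the 28×28 patch size, the batch size of 50, and sampling with replacement taken from the text.

```python
import numpy as np

# Random 28x28 patch sampling with replacement from a stack of 2-D slices,
# as used for patch-wise VAE training (helper names are illustrative).

def sample_patches(images, n_patches, patch=28, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    out = np.empty((n_patches, patch, patch), dtype=images.dtype)
    for i in range(n_patches):
        s = rng.integers(len(images))                   # pick a slice
        r = rng.integers(images.shape[1] - patch + 1)   # top-left row
        c = rng.integers(images.shape[2] - patch + 1)   # top-left column
        out[i] = images[s, r:r + patch, c:c + patch]
    return out

# stand-in for a few training slices
slices = np.random.default_rng(0).random((5, 96, 96)).astype(np.float32)
batch = sample_patches(slices, n_patches=50)            # one training batch
```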
2.6 Experimental Setup
We used random Cartesian undersampling patterns with 15 fully sampled central profiles. We generated a different pattern for each subject and applied the same pattern for a given subject throughout all experiments, for comparability of results. When reconstructing test images from the HCP dataset, we used 302 iterations (NumIter) for R=2, 602 for R=3, 1002 for R=4 and 1502 for R=5, to allow for convergence of the reconstructed images. Since the in-house measured images have multiple coils, the successive applications of data consistency projections coincide with a POCSENSE reconstruction [20], speeding up convergence and requiring fewer iterations. Hence, when performing reconstruction on images from the in-house measured dataset, we ran the first 10 iterations without prior projection, applying only data consistency projections. Additionally, the discrepancy between the actual coil sensitivities and the ESPIRiT [21] estimates may lead to divergence after too many iterations. Hence, a lower number of iterations was used for reconstruction experiments on the in-house measured dataset: 32 iterations for R=2, 102 for R=3 and R=4, and 202 for R=5. For all test datasets, the parameters were set as = 1e4, BiasEstimFreq=10, DCProjFreq=10. For the N4 bias field estimation algorithm, the default parameters were used.
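A pattern of the kind described above can be sketched as follows; how exactly the remaining profiles are drawn (uniformly at random here) and how the acceleration factor R maps to the number of kept profiles are our assumptions, not the paper's code.

```python
import numpy as np

# Random Cartesian undersampling mask with a fully sampled central block
# of 15 phase-encoding profiles, for acceleration factor R (assumptions:
# uniform random selection outside the center, n_pe / R profiles kept).

def cartesian_mask(n_pe, R, n_center=15, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    mask = np.zeros(n_pe, dtype=bool)
    c = n_pe // 2
    mask[c - n_center // 2 : c + (n_center + 1) // 2] = True  # central block
    n_target = n_pe // R                       # total profiles to keep
    remaining = np.flatnonzero(~mask)
    n_extra = max(n_target - n_center, 0)
    mask[rng.choice(remaining, size=n_extra, replace=False)] = True
    return mask

mask = cartesian_mask(n_pe=256, R=4)
```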
To evaluate the reconstruction quality of the different models, we computed, for each image, the percentage RMSE. More specifically, the evaluation metric was given as

$\mathrm{RMSE} = 100 \cdot \sqrt{ \sum_i |x^{rec}_i - x_i|^2 \, / \, \sum_i |x_i|^2 }$,

where $x$ is the original, fully sampled test image, $x^{rec}$ is the reconstruction, and the summation runs over pixels. The error was computed on the skull-stripped images, only in the brain area. To evaluate the statistical significance of our results, we performed a permutation test [22] with 10,000 permutations to assess the null hypothesis that the RMSEs for the reconstructions with and without bias field estimation come from the same distribution. From these tests, we report the p-values.
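The evaluation metric and the significance test can be sketched as follows; the exact RMSE normalization (relative to the reference image energy) and the sign-flipping form of the paired permutation test are our assumptions.

```python
import numpy as np

# Percentage RMSE inside a brain mask, and a paired permutation test on
# per-subject RMSEs (10,000 permutations). Normalization by the reference
# image energy is an assumption.

def pct_rmse(recon, ref, mask):
    d = (recon - ref)[mask]
    return 100.0 * np.sqrt(np.sum(np.abs(d) ** 2) / np.sum(np.abs(ref[mask]) ** 2))

def paired_permutation_test(a, b, n_perm=10_000, rng=None):
    # null hypothesis: both methods' RMSEs come from the same distribution,
    # so the sign of each paired difference is exchangeable
    if rng is None:
        rng = np.random.default_rng(0)
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    obs = abs(diff.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diff.size))
    null = np.abs((signs * diff).mean(axis=1))
    return (np.sum(null >= obs) + 1) / (n_perm + 1)   # add-one p-value estimate

rng = np.random.default_rng(2)
ref = rng.random((32, 32))
rec = ref + 0.01 * rng.standard_normal((32, 32))
m = np.ones_like(ref, dtype=bool)
err = pct_rmse(rec, ref, m)
p = paired_permutation_test([2.24, 3.39, 4.42], [2.27, 3.34, 4.35])
```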
3 Experiments and Results
To assess the hypothesis that correcting the bias field improves the overall reconstruction quality, we performed reconstructions on the original and modified HCP test sets as well as on the in-house measured images.
When applying our proposed method, we reconstructed undersampled images with a bias field, using a VAE trained on images without bias field along with the N4 bias field estimation algorithm. To evaluate our approach, we also ran baseline experiments on the same images, where we used a VAE trained on images with a bias field and did not apply bias field estimation during reconstruction. Given that the ground truth (fully sampled) images naturally contain a bias field, we used the last bias field estimate multiplied with the reconstructed image, i.e. $Bx$, for visualisations and RMSE calculations.
Table 1. Mean (standard deviation) percentage RMSE for the baseline and the proposed joint reconstruction. Asterisks mark results where the difference between the methods is statistically significant (p < 0.05).

                HCP dataset                  Modified HCP dataset           In-house measured dataset
Method          R=2    R=3    R=4    R=5     R=2    R=3    R=4    R=5     R=2     R=3     R=4     R=5
Baseline        2.24   3.39   4.42   5.72    2.24   3.45   4.26   5.58    4.64    6.852   8.593   11.046
                (0.31) (0.45) (0.51) (1.05)  (0.38) (0.60) (0.46) (1.20)  (0.391) (0.726) (1.344) (1.727)
Joint recon.    2.27   3.34   4.35*  5.52    2.20*  3.33*  4.16   5.12*   4.62    6.714*  8.218*  10.567*
                (0.34) (0.40) (0.47) (0.66)  (0.39) (0.53) (0.54) (0.57)  (0.418) (0.821) (1.266) (1.632)
The results in Table 1 indicate that the proposed joint reconstruction method with bias field estimation improves the reconstruction quality in terms of RMSE when the bias field of the test image differs from those in the training set. In these cases, namely the experiments with the modified HCP and in-house measured images, the improvement is statistically significant, with a p-value of less than 0.05, for nearly all undersampling factors. For the unmodified HCP dataset, where the bias field in the test images matches those in the training set, we do not expect a large difference in performance, which is reflected in the results.
The quantitative improvement is also supported by visual inspection of the images given in Figure 1. From the HCP image, one can observe that the level of artifacts is reduced with the proposed method. This becomes more evident in the zoomed-in images. The red arrow points to a part of the image where the proposed method can reconstruct the structures faithfully, whereas the baseline method struggles. Aliasing artifacts are globally suppressed better with the joint reconstruction method. Similarly, for the in-house measured image, the grey matter structure that the red arrow points to is not reconstructed by the baseline method, whereas it again appears with the proposed method.
In this work, we demonstrate the performance loss due to the bias field for a specific algorithm, into which we also integrate our proposed solution. However, the problem may not be specific to the algorithm used, as it arises from the domain gap, which is a fundamental problem affecting machine-learning-based methods in general. Furthermore, the proposed method of estimating and explicitly modeling the bias field in reconstruction is also a generic approach, which can be integrated into different algorithms.
4 Conclusion
In this paper we proposed and evaluated a method for joint reconstruction and bias field estimation to address variations due to the bias field when reconstructing undersampled MR images. The results indicate that the proposed method improves upon the baseline method (an unsupervised learning-based reconstruction algorithm), both in RMSE and visually. The improvements can be attributed to two factors. First, the proposed method allows the VAE prior to learn a simpler distribution, and second, providing the bias field explicitly makes the reconstruction problem easier. In essence, estimating the bias field during reconstruction makes the model less sensitive to differences between the data used to train the model and the test data used during reconstruction.
References
 [1] Litjens, G., Kooi, T., Bejnordi, B. E., Setio, A. A. A., Ciompi, F., Ghafoorian, M., van der Laak, J. A. W. M., van Ginneken, B., Sanchez, C. I.: A Survey on Deep Learning in Medical Image Analysis. Medical Image Analysis 42, 60–88 (2017)
 [2] Tezcan, K. C., Baumgartner, C. F., Luechinger, R., Pruessmann, K. P., Konukoglu, E.: MR Image Reconstruction Using Deep Density Priors. IEEE Transactions on Medical Imaging 38(7), 1633–1642 (2019)
 [3] Donoho, D. L.: Compressed sensing. IEEE Transactions on Information Theory 52(4), 1289–1306 (2006)
 [4] Lustig, M., Donoho, D., Pauly, J. M.: Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine 58(6), 1182–1195 (2007)
 [5] Deshmane, A., Gulani, V., Griswold, M. A., Seiberlich, N.: Parallel MR Imaging. Journal of Magnetic Resonance Imaging 36(1), 55–72 (2012)
 [6] Sled, J. G., Zijdenbos, A. P., Evans, A. C.: A nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Transactions on Medical Imaging 17(1), 87–97 (1998)
 [7] Tustison, N. J., Avants, B. B., Cook, P. A., Zheng, Y., Egan, A., Yushkevich, P. A., Gee J. C.: N4ITK: Improved N3 Bias Correction. IEEE Transactions on Medical Imaging 29(6), 1310–1320 (2010)
 [8] SimpleITK: N4 Bias Field Correction, https://simpleitk.readthedocs.io/en/master/link_N4BiasFieldCorrection_docs.html. Last accessed 12 March 2020
 [9] Sled J.G., Pike G.B.: Understanding intensity nonuniformity in MRI. In: Wells W.M., Colchester A., Delp S. (eds.) MICCAI 1998, LNCS, vol. 1496, pp. 614–622. Springer, Heidelberg (1998). 10.1007/BFb0056247
 [10] Meyer, C. R., Bland, P. H., Pipe, J.: Retrospective correction of intensity inhomogeneities in MRI. IEEE Transactions on Medical Imaging 14(1) (1995)
 [11] Van Leemput, K., Maes, F., Vandermeulen, D., Suetens, P.: Automated Model-Based Bias Field Correction of MR Images of the Brain. IEEE Transactions on Medical Imaging 18(10), 885–896 (1999)
 [12] Leemput, K., V., Maes, F., Vandermeulen, D., Suetens P.: Automated ModelBased Tissue Classification of MR Images of the Brain. IEEE Transactions on Medical Imaging 18(10), 897–908 (1999)
 [13] Ashburner, J., Friston, K. J.: Unified segmentation. NeuroImage, 26(3), 839–851 (2005)
 [14] Glasser, M. F., et al.: The Minimal Preprocessing Pipelines for the Human Connectome Project. NeuroImage 80, 105–124 (2013)
 [15] Van Essen, D. C., Smith, S. M., Barch, D. M., Behrens, T. E., Yacoub, E., Ugurbil, K.: The WUMinn Human Connectome Project: An Overview. Neuroimage 80, 62–79 (2013)
 [16] Human Connectome Reference Manual, https://www.humanconnectome.org/storage/app/media/documentation/s1200/HCP_S1200_Release_Reference_Manual.pdf. Last accessed 4 March 2020
 [17] Chaitanya K., Karani N., Baumgartner C.F., Becker A., Donati O., Konukoglu E.: Semisupervised and TaskDriven Data Augmentation. In: Chung A., Gee J., Yushkevich P., Bao S. (eds.) IPMI 2019, LNCS, vol. 11492, pp. 29–41. Springer, Cham (2019)
 [18] Volpi, R., Namkoong, H., Sener O., Duchi J., Murino V., Savarese, S.: Generalizing to Unseen Domains via Adversarial Data Augmentation. NeurIPS 31, 5334–5344 (2018)
 [19] Sun B., Saenko K.: Deep CORAL: Correlation Alignment for Deep Domain Adaptation. In: Hua G., Jégou H. (eds.) ECCV 2016, LNCS, vol. 9915. Springer, Cham (2016)
 [20] Samsonov, A. A., Kholmovski, E. G., Parker, D. L., Johnson, C. R.: POCSENSE: POCS-based reconstruction for sensitivity encoded magnetic resonance imaging. Magnetic Resonance in Medicine 52(6), 1397–1406 (2004)

 [21] Uecker, M., Lai, P., Murphy, M. J., Virtue, P., Elad, M., Pauly, J. M., Vasanawala, S. S., Lustig, M.: ESPIRiT – an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA. Magnetic Resonance in Medicine 71(3), 990–1001 (2014)
 [22] Collingridge, D. S.: A Primer on Quantitized Data Analysis and Permutation Testing. Journal of Mixed Methods Research 7(1) (2013)
 [23] Kingma D. P., Welling, M.: AutoEncoding Variational Bayes. arXiv:1312.6114v10, (2014)
 [24] Rezende, D. J., Mohamed S., Wierstra D.: Stochastic Backpropagation and Approximate Inference in Deep Generative Models. arXiv:1401.4082v3, (2014)