MRI Reconstruction Using Deep Bayesian Inference

09/03/2019 · by Guanxiong Luo, et al.

Purpose: To develop a deep learning-based Bayesian inference framework for MRI reconstruction. Methods: We modeled the MRI reconstruction problem with Bayes' theorem, following the recently proposed PixelCNN++ method. The image reconstruction from incomplete k-space measurements was obtained by maximizing the posterior probability. A generative network was utilized as the image prior, which was computationally tractable, and the k-space data fidelity was enforced by an equality constraint. Stochastic backpropagation was utilized to calculate the descent gradient in the maximum a posteriori (MAP) estimation, and a projected subgradient method was used to impose the equality constraint. In contrast to other deep learning reconstruction methods, the proposed one used the likelihood of the prior as both the training loss and the objective function in reconstruction to improve image quality. Results: The proposed method showed improved performance in preserving image details and reducing aliasing artifacts, compared with GRAPPA, ℓ_1-ESPIRiT, and MODL, a state-of-the-art deep learning reconstruction method. The proposed method generally achieved more than 5 dB peak signal-to-noise ratio improvement for compressed sensing and parallel imaging reconstructions compared with the other methods. Conclusion: The Bayesian inference significantly improved the reconstruction performance, compared with the conventional ℓ_1-sparsity prior in compressed sensing reconstruction tasks. More importantly, the proposed reconstruction framework can be generalized to most MRI reconstruction scenarios.


1 Introduction

In compressed sensing MRI reconstruction, a commonly used analytical regularization such as ℓ_1 regularization can ensure the convergence of the iterative algorithm and improve MR image quality [10]. The conventional iterative reconstruction algorithm with analytical regularization has an explicit mathematical derivation of the gradient descent, which ensures the convergence of the algorithm to a local or global optimum and its generalizability, depending on the convexity of the regularization function. Besides, dictionary learning is an extension of analytical regularization, providing an improvement over ℓ_1 regularization in specific applications [15]. The study of analytically regularized reconstruction mainly focused on choosing the appropriate regularization function and parameters to minimize the reconstruction error. As an extension of analytical regularization, deep learning reconstruction was employed as an unrolled iterative algorithm for solving the regularized optimization or used as a substitute for analytical regularization [20, 18, 12, 1].

With the advances of deep learning methodology, research started shifting the paradigm toward structured feature representations of MRI, such as cascade, deep residual, and generative deep neural networks [20, 18, 12, 1]. In particular, the method proposed in [20] recast the compressed sensing reconstruction into a specially designed neural network that still partly imitated the analytical data fidelity and regularization terms. In that study, the analytical regularization term was replaced with convolutional layers and a specially designed activation function [20]. These deep learning methods may show improved performance in some predetermined acquisition settings or pre-trained imaging tasks. However, they also lack flexibility when faced with changes in the MRI under-sampling scheme, the number of radio-frequency coils, and the matrix size or spatial resolution. Such restriction is caused by the mixing of the k-space data fidelity and regularization terms in the neural network implementation. Therefore, it is preferable to separate the k-space data fidelity from the neural network-based regularization to improve flexibility under changing MRI acquisition configurations.

This study applied Bayesian inference to model the MRI reconstruction problem, and the statistical representation of an MRI database was used as a prior model. In Bayesian inference, the prior model is required to be computationally scalable and tractable [16, 2]. Scalability of the prior model means that its likelihood can be used as a measure of image quality [16, 2]. Tractability of the prior model means the gradient that facilitates the maximization of the posterior distribution can be calculated by stochastic backpropagation through the model [16, 2]. In such Bayesian inference, the image to be reconstructed was treated as the parameters of the Bayesian model, conditioned on the measured k-space data (the posterior). Bayes' theorem expressed the posterior as a function of the k-space data likelihood and the image prior. For the image prior, Refs. [17, 13] proposed generative deep learning models providing a tractable and scalable likelihood. In those studies, the image prior model was written as the product of conditional probabilities that captured pixel-wise dependencies in the input image. The k-space data likelihood described how the measured k-space data were computed from a given MR image. The relationship between k-space data and the MR image can be described, using the well-known MRI encoding matrix, as an equality constraint [10, 19]. With such a computationally scalable and tractable prior model, the maximum a posteriori estimate can serve as an effective estimator [2] for the high-dimensional image reconstruction problem tackled in this study. To summarize, the Bayesian inference for MRI reconstruction had two separate models: the k-space likelihood model, used to encourage data consistency, and the image prior model, used to exploit knowledge learned from an MRI database.

This paper presented a generic and interpretable deep learning-based reconstruction framework using Bayesian inference. It employed a generative network as the MR image prior model. The proposed framework was capable of exploiting the MR image database through the prior model, regardless of changes in MR acquisition settings. The reconstruction was achieved by a series of inferences that employed the maximum likelihood of the posterior with the image prior, i.e., applying the Bayesian inference repeatedly. The reconstruction iterated between the data fidelity enforcement in k-space and the image refinement using the Bayesian inference. During the iteration, the projected subgradient algorithm was used to maximize the posterior. The method is theoretically described, adapted from the methodology proposed by others [17], and then demonstrated in different MRI acquisition scenarios, including parallel imaging, compressed sensing, and non-Cartesian reconstructions. The robustness and reproducibility of the algorithm were also experimentally validated.

Theory

The proposed method applied a generative neural network, as a data-driven MRI prior, to an MRI reconstruction method. This section presented an MRI reconstruction method using Bayes' theorem and a generative neural network-based MRI prior model, a pixel-wise joint probability distribution for images, using the PixelCNN++ [17].

MRI reconstruction using Bayes' theorem

With Bayes' theorem, one could write the posterior as a product of likelihood and prior:

p(x | y) ∝ p(y | x) p(x),   (1)

where p(y | x) was the probability of the measured k-space data y for a given image x, and p(x) was the prior model of the image. The image reconstruction was achieved by exploring the posterior with an appropriate estimator. The maximum a posteriori (MAP) estimation could provide the reconstructed image, given by:

x̂ = argmax_x [log p(y | x) + log p(x)].   (2)

Following the PixelCNN++ [17], a deep neural network model, trained with an MR image database, was used to approximate the prior p(x).
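Eq. 2 can be made concrete with a deliberately tiny example. The sketch below is not the paper's model: it uses a hypothetical scalar measurement with Gaussian noise and a Gaussian prior, so the MAP estimate has a closed form against which a plain gradient ascent on the log-posterior can be checked.

```python
import numpy as np

# Toy illustration of Eq. 2 (hypothetical scalar example, not the paper's model):
# y = x + noise with known noise scale, plus a zero-mean Gaussian prior on x.
# MAP maximizes log p(y|x) + log p(x) by gradient ascent.

sigma_noise, sigma_prior = 1.0, 2.0
y = 3.0  # a single "measurement"

def log_posterior_grad(x):
    # d/dx [ -(y - x)^2 / (2 sn^2) - x^2 / (2 sp^2) ]
    return (y - x) / sigma_noise**2 - x / sigma_prior**2

x = 0.0
for _ in range(2000):
    x += 0.01 * log_posterior_grad(x)

# Closed-form MAP for this conjugate Gaussian pair:
x_map = y * sigma_prior**2 / (sigma_prior**2 + sigma_noise**2)
print(x, x_map)  # both ≈ 2.4
```

The same ascent-on-the-log-posterior structure, with the network's log-likelihood in place of the Gaussian prior, underlies the iterative reconstruction described later.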

Prior model for MR images

In this study, a deep autoregressive network [7] was used as the prior model. This deep neural network served as a generative autoencoder that provided a hierarchical representation of the input image. The prior model predicted a mixture distribution for the input image [17]. For MRI reconstruction, we adopted the prior model in the PixelCNN++ [17], except that the number of image channels was changed from three (i.e., RGB channels for a color image) to two (i.e., real and imaginary parts for an MR image). For each image pixel, the variable ν had a continuous distribution that represented the real or imaginary signal intensity. As in the VAE and PixelCNN++ [17, 9], the distribution of ν was a mixture of logistic distributions, given by

ν ∼ ∑_{k=1}^{K} π_k logistic(μ_k, s_k).   (3)

Here, π_k was the mixture indicator, and μ_k and s_k were the mean and scale of the k-th logistic distribution, respectively. Then the probability of each observed pixel value ν was computed as [17]

P(ν | π, μ, s) = ∑_{k=1}^{K} π_k [σ((ν + 0.5 − μ_k)/s_k) − σ((ν − 0.5 − μ_k)/s_k)],   (4)

where σ(·) was the logistic sigmoid function. Furthermore, in [17, 13], each pixel was dependent on all previous pixels up and to the left in an image, as shown in Figure 1a. The conditional distribution of the subsequent pixel at position i was given by [17]

p(ν_re,i | C_i) = P(ν_re,i | π(C_i), μ_re(C_i), s_re(C_i)),   (5)
p(ν_im,i | ν_re,i, C_i) = P(ν_im,i | π(C_i), μ_im(C_i) + α(C_i) ν_re,i, s_im(C_i)),   (6)

where C_i denoted the context information, which comprised the mixture indicator and the previous pixels as shown in Figure 1a, and α was the coefficient related to the mixture indicator and the previous pixels.
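The discretized logistic mixture of Eq. 4 can be sketched directly in NumPy. The helper below is an illustration, not the paper's code; it integrates each logistic component over a half-bin of ±0.5 around the observed value, exactly as the difference of the two sigmoids in Eq. 4.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discretized_logistic_mixture_prob(v, pi, mu, s, half_bin=0.5):
    """Probability of observing intensity v under a K-component mixture of
    logistic distributions (Eq. 4): each component is integrated over the
    bin [v - half_bin, v + half_bin] via a difference of sigmoids."""
    upper = sigmoid((v + half_bin - mu) / s)
    lower = sigmoid((v - half_bin - mu) / s)
    return float(np.sum(pi * (upper - lower)))

# Hypothetical two-component mixture parameters (illustration only).
pi = np.array([0.3, 0.7])
mu = np.array([-1.0, 2.0])
s = np.array([0.5, 1.0])
p = discretized_logistic_mixture_prob(2.0, pi, mu, s)
```

Because consecutive bins share their sigmoid terms, summing this probability over an integer grid of intensities telescopes to one per mixture component, which makes the discretization a proper distribution.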

The model in Eqs. 5 and 6 was also a joint distribution over both real and imaginary channels. The real part of the first pixel, i.e., x_1 in Figure 1a, was predicted by a mixture of logistics as described in Eq. 3. This definition assumed that the mean of the mixture components of the imaginary channel was linearly dependent on the real channel. In this study, the number of mixture components K was 10, and the mixture indicator was shared between the two channels. The n×n image could be considered as a vectorized image [x_1, x_2, …, x_{n²}] by stacking pixels from left to right and top to bottom. The joint distribution of the image vector x could be expressed as follows [17]:

p(x) = ∏_{i=1}^{n²} p(x_i | x_1, …, x_{i−1}; π, μ, s),   (7)

where π, μ, and s were the parameters of the mixture distribution for each pixel intensity. The generative network PixelCNN++ was expected to predict the joint probability distribution of all pixels in the input image [17], as illustrated in Figure 1b. Therefore, the network was trained by maximizing the likelihood in Eq. 7, and the training loss was given by

L(θ) = − ∑_{x∈D} log p(x; θ),   (8)

where θ was the trainable parameter vector of the network and D the training dataset. After training, the network could be used as the image prior. Here, we defined the prior model as

p(x) ≜ p(x; θ̂),   (9)

where θ̂ denoted the trained network parameters. To summarize, a prior model of x was defined in Eqs. 2 to 9 that could be considered a data-driven model, utilizing the knowledge learned from an image database. Such a prior model was computationally tractable, as described in PixelCNN++ [17].
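The autoregressive factorization of Eq. 7 can be exercised on a miniature model. The sketch below uses a hypothetical toy conditional over binary pixels (not PixelCNN++): each pixel's distribution depends on the mean of its predecessors, and the joint log-probability is the sum of the conditional log-probabilities, exactly as in Eq. 7.

```python
import numpy as np

# Eq. 7 in miniature: the joint probability of a vectorized image factorizes
# into per-pixel conditionals p(x_i | x_1..x_{i-1}). `cond_prob` is a stand-in
# for the network's predicted conditional (a hypothetical toy rule, not
# PixelCNN++): the probability of a 1 rises with the mean of earlier pixels.

def cond_prob(x_i, history):
    mean = np.mean(history) if len(history) else 0.0
    p1 = 0.25 + 0.5 * mean          # P(x_i = 1 | history), stays in [0.25, 0.75]
    return p1 if x_i == 1 else 1.0 - p1

def joint_log_prob(x):
    total = 0.0
    for i in range(len(x)):
        total += np.log(cond_prob(x[i], x[:i]))
    return total

x = [0, 1, 1, 0]
lp = joint_log_prob(x)
```

Because every conditional is normalized, the joint probabilities of all possible pixel sequences sum to one, which is what makes the factorized model a valid prior whose likelihood can serve as a training loss (Eq. 8).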

Image reconstruction by MAP

The measured k-space data y was given by

y = A x + ε,   (10)

where A was the encoding matrix, x was the MR image, and ε was the noise. The matrix A consisted of the Fourier matrix, the sampling trajectory, and the coil sensitivity maps. Substituting Eq. 9 into the log-likelihood in Eq. 2 yielded

x̂ = argmax_x [log p(y | x) + log p(x; θ̂)].   (11)

From the data model, the log-likelihood term for y had little uncertainty: considering the MR imaging principles, for a given image x, the probability of the k-space data, p(y | x), i.e., when y = A x + ε, was close to a constant whose only uncertainty came from the noise ε and was irrelevant to x. Hence, Eq. 11 could be rewritten as

x̂ = argmax_x log p(x; θ̂)  subject to  A x = y.   (12)

The equality constraint for data consistency was the result of eliminating the first log-likelihood term in Eq. 11. The projected subgradient method was used to solve the equality-constrained problem [7, 3]. In [3], the authors proposed a stochastic backpropagation method for computing gradients through random variables in deep generative models. In PixelCNN++, stochastic backpropagation provided the subgradient g = −∂ log p(x; θ̂)/∂x for minimizing the negative log-likelihood in Eq. 12. We empirically found that dropout (applied to θ̂) was necessary when using the gradient to update x in Eq. 12 [6]. To summarize, the MAP-based MRI reconstruction had the following iterative steps:

  1. Get the descent direction g = −∂ log p(x; θ̂)/∂x
  2. Pick a step size t
  3. Update x ← x − t g
  4. Projection x ← P(x)

The projection of x onto {x : A x = y} was given by

P(x) = x − A^H (A A^H)^{−1} (A x − y).   (13)

Therefore, the generative network as a prior model was incorporated into the reconstruction of x through the Bayesian inference based on MAP.
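The iterative steps above can be sketched for the special case of a single-coil Cartesian mask, where the encoding matrix is an undersampled orthonormal FFT and the projection of Eq. 13 reduces to replacing the sampled k-space locations with the measured values. This is an illustration under those simplifying assumptions, not the paper's implementation; in particular, the prior gradient below is a hypothetical Gaussian placeholder for the network's stochastic-backpropagation subgradient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
x_true = rng.standard_normal((n, n))
mask = rng.random((n, n)) < 0.4                # sampled k-space locations
y = mask * np.fft.fft2(x_true, norm="ortho")   # measured (undersampled) k-space

def project(x):
    # Projection onto {x : A x = y} for A = mask * FFT: overwrite the
    # sampled k-space entries with the measurements, keep the rest.
    k = np.fft.fft2(x, norm="ortho")
    k = np.where(mask, y, k)
    return np.fft.ifft2(k, norm="ortho")

def prior_grad(x):
    # Placeholder for -d(-log p(x))/dx: gradient of a Gaussian log-prior.
    return -x

x = np.zeros((n, n), dtype=complex)
for _ in range(50):
    x = x + 0.1 * prior_grad(x)   # step along the prior's ascent direction
    x = project(x)                # then re-impose exact data consistency

residual = np.linalg.norm(mask * np.fft.fft2(x, norm="ortho") - y)
```

After each projection the k-space residual is zero up to floating-point error, which mirrors the role of the equality constraint in Eq. 12: the prior refines the image while the measured data are never violated.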

Methods

MRI data and pre-processing

Both knee and brain MRI data were used to test the reconstruction performance of the proposed method. The knee MRI data (multi-channel k-space data, 973 scans) were downloaded from the fastMRI reconstruction database [21]. As such, NYU fastMRI investigators provided data but did not participate in the analysis or writing of this report. A listing of NYU fastMRI investigators, subject to updates, can be found at: fastmri.med.nyu.edu. The primary goal of fastMRI is to test whether machine learning can aid the reconstruction of medical images. The knee data had two contrast weightings: proton-density with and without fat suppression (PDFS and PD). Scan parameters included a 15-channel knee coil and 2D multi-slice turbo spin-echo (TSE) acquisition; other settings can be found in Ref. [21].

For brain MRI, we collected 2D multi-slice T1-weighted, T2-weighted, T2-weighted FLAIR, and T2*-weighted brain images from 16 healthy volunteers examined with clinical standard-of-care protocols. All brain data were acquired on our 3T MRI scanner (Philips, Achieva) with an eight-channel brain RF coil. T1-weighted, T2-weighted, and T2-weighted FLAIR images were all acquired with a TSE readout. Meanwhile, T2*-weighted images were obtained using a gradient-echo sequence. Brain MRI parameters for the four contrast weightings are listed in Table 1.

Training images were reconstructed from multi-channel k-space data without undersampling. These image datasets after coil combination were scaled to a fixed magnitude range and resized to an image size of 256×256. The training of the PixelCNN++ model required considerable computational capacity when a large image size was used. In this study, 128×128 was the largest size that our 4-GPU server could handle. Hence, the original images were resized into low-resolution 128×128 images by cropping in k-space for knee MRI. For brain MRI, we split each raw image into four image patches before feeding them into the network for training. Real and imaginary parts of all 2D images were separated into two channels when input into the neural network. For knee MRI, 15541 images were used as the training dataset, and 100 images were used for testing. For brain MRI, 1300 images were used as the training dataset, and 100 images were used for testing.
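The k-space cropping step above can be sketched as follows. This is an illustrative helper (assuming a square image and even sizes), not the paper's preprocessing code: cropping the centered k-space keeps the low spatial frequencies, producing a lower-resolution image without spatial-domain interpolation.

```python
import numpy as np

def kspace_center_crop(img, out_size):
    """Downsample a square image by keeping only the central out_size x
    out_size region of its (centered) k-space."""
    k = np.fft.fftshift(np.fft.fft2(img, norm="ortho"))
    n = img.shape[0]
    lo = (n - out_size) // 2
    k_crop = k[lo:lo + out_size, lo:lo + out_size]
    return np.fft.ifft2(np.fft.ifftshift(k_crop), norm="ortho")

img = np.random.default_rng(1).standard_normal((256, 256))
small = kspace_center_crop(img, 128)  # 256x256 -> 128x128 low-resolution image
```

The result is complex-valued even for a real input, consistent with the two-channel (real/imaginary) representation fed to the network.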

Deep neural network

The PixelCNN++ was modified from the code at https://github.com/openai/pixel-cnn. We implemented the reconstruction algorithm in Python, as explained in Eq. 13 and the Appendix. With the trained prior model, we implemented the iterative reconstruction algorithm that maximized the posterior while enforcing the k-space data fidelity (as explained in the Appendix and Fig. 1c). Only two deep learning models were trained and utilized: one for knee MRI with two contrast weightings, and another for brain MRI with four contrast weightings. These two models supported all experiments performed in this study with variable undersampling patterns, coil sensitivity maps, channel numbers, image sizes, and trajectory types. Our networks were trained with TensorFlow on four NVIDIA RTX-2080Ti graphics cards. Other hyperparameters were 500 epochs, batch size = 4, and the Adam optimizer. Training took about five days for the knee dataset and two days for the brain dataset under the above-mentioned configuration.

Parallel imaging and ℓ_1 regularization driven reconstruction

The GRAPPA reconstruction was performed with a block size of 4 and 20 central k-space lines as the auto-calibration area [8]. We simulated GRAPPA accelerations with undersampling factors from 2 to 4. The representative undersampling masks are shown in Supplementary Figure 1. We chose ℓ_1-ESPIRiT and MODL [1] as baseline methods for comparison. The ℓ_1-ESPIRiT exploited the sparsity of the image, and MODL was a deep learning method for compressed sensing reconstruction, trained by minimizing the reconstruction error. In the ℓ_1-ESPIRiT reconstruction, we set the regularization parameter to 0.01, using the BART software. One reason for choosing MODL was that it supported the coil sensitivity map for applying parallel imaging. We followed the settings in Ref. [1] when training MODL to reconstruct the undersampled knee data. The only difference was that the k-space mask in Ref. [1] was 2D undersampled, while in the current study 1D undersampling was applied. The central 20 k-space lines were sampled, which accounted for 7% of the full k-space of one image. The other lines in the outer region were picked randomly with a certain undersampling rate.

For the proposed method, MR images with a 256×256 matrix size were reconstructed, using the prior model in Eq. 9 that was trained with 128×128 images or image patches. During inference, the image was split into four patches for applying the prior model, as shown in Figure 1c. After updating x, the four patches of one image were concatenated to form an image with the original size of 256×256, before it was projected onto {x : A x = y} in Eq. 13. The detailed algorithm is presented in the Appendix.
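The patch handling described above can be sketched with a simple split/merge pair. This is an illustration of the bookkeeping only (the prior-model update and the random phase-encoding shift of Figure 1c are omitted); the 256×256 image is cut into four 128×128 quadrants and concatenated back before the k-space projection.

```python
import numpy as np

def split_patches(img):
    """Split an even-sized image into its four quadrants (row-major order)."""
    h, w = img.shape
    return [img[:h // 2, :w // 2], img[:h // 2, w // 2:],
            img[h // 2:, :w // 2], img[h // 2:, w // 2:]]

def merge_patches(p):
    """Concatenate four quadrants back into the original image."""
    top = np.concatenate([p[0], p[1]], axis=1)
    bottom = np.concatenate([p[2], p[3]], axis=1)
    return np.concatenate([top, bottom], axis=0)

img = np.arange(256 * 256, dtype=float).reshape(256, 256)
patches = split_patches(img)      # four 128x128 patches for the prior model
restored = merge_patches(patches)  # back to 256x256 before the projection step
```

The split/merge round trip is lossless, so the data-consistency projection always operates on a full-size image while the prior only ever sees patches of its training size.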

Non-Cartesian k-space acquisition

In this experiment, spiral-sampled k-space data were simulated from the acquired T2*-weighted k-space data. The method proposed in Ref. [11] was used to design the spiral trajectory. Full k-space coverage required 24 spiral interleaves for the spatial resolution used in this study. The spiral trajectory is shown in Supplementary Figure 1. The implementation of the non-uniform fast Fourier transform was based on the method in Ref. [4]. For comparison, we used the iterative SENSE, i.e., conjugate gradient SENSE (CG SENSE), proposed in Ref. [14], as a baseline method.

Results

Parallel imaging

Figures 2 and 3 show the comparison of knee and brain MRI reconstructed using GRAPPA and the proposed method. The proposed method had improved performance in recovering brain and knee image details and reducing aliasing artifacts, compared with GRAPPA. As expected, parallel imaging amplified the noise in the low coil sensitivity regions and along the undersampled dimension. On the other hand, the error maps in Figures 2 and 3 showed that the proposed method effectively eliminated the noise amplification and aliasing artifacts. Table 2 presents the comparison of the GRAPPA reconstruction and the proposed method for knee (N = 100) and brain (N = 100) MRI testing images. With the increase of the undersampling factor, the peak signal-to-noise ratio (PSNR) of the proposed method decreased less than that of GRAPPA. In addition, with acceleration factor R = 2 in brain MRI, the proposed method showed more than 8 dB improvement in PSNR over GRAPPA.

Compressed sensing reconstruction

In Figures 4 and 5, the ℓ_1-ESPIRiT caused apparent blurring in the reconstructed images for both knee and brain MRI data. Both the ℓ_1-ESPIRiT and MODL methods left residual aliasing artifacts. Meanwhile, the proposed reconstruction recovered most anatomical structures and sharp boundaries in knee and brain MR images, compared with the ℓ_1-ESPIRiT and MODL reconstructions, as shown in the error maps in Figures 4 and 5. Table 3 summarizes the reconstruction results using ℓ_1 regularization, MODL, and the proposed method. The proposed method generally showed more than 5 dB PSNR improvement compared with ℓ_1-ESPIRiT and MODL.

Preliminary result in non-Cartesian MRI reconstruction and quantitative susceptibility mapping (QSM)

In this study, we used T2*-weighted gradient-echo images to simulate the spiral k-space data with 4-fold acceleration. The reconstructed images from CG SENSE and the proposed method were compared. The proposed method showed apparent improvement in aliasing artifact reduction and in preserving the T2* contrast between gray matter and white matter. The proposed method also showed a slight denoising effect on the reconstructed image compared with the ground truth. Note that the same deep learning model used in the previous Cartesian k-space reconstruction experiments in Figures 3 and 5 was applied to the spiral reconstruction, without re-training the deep learning model. Figure 7 shows the preliminary result of the proposed accelerated reconstruction in QSM with 4-fold acceleration. Note that the same deep learning model used in the previous brain experiments was applied to this experiment, with phase information preserved in all reconstructed images. The proposed deep learning method also showed an apparent de-noising effect on the QSM maps, while still preserving the major phase contrast even at high acceleration.

Discussion

The proposed method can reliably and consistently recover nearly aliasing-free images at relatively high acceleration factors. Meanwhile, as expected, increased image smoothing at high acceleration factors was noticed, reflecting the loss of intrinsic resolution. The estimated image from the maximum of the posterior cannot guarantee the full recovery of image details (i.e., PSNR ≥ 40 dB for a full recovery). However, at modest acceleration, the reconstruction from the maximum of the posterior successfully recovered detailed anatomical structures, such as vessels, cartilage, and membranes in-between muscle bundles.

In this study, the results demonstrated the successful reconstruction of a high-resolution image (i.e., 256×256 matrix) with a low-resolution prior (i.e., trained with a 128×128 matrix), confirming the feasibility of reconstructing images of different sizes without retraining the prior model. Although the prior model was trained on 128×128 images, it was still valid and applicable for the reconstruction of a high-resolution image. The proposed method provided more than 8 dB improvement over the conventional GRAPPA reconstruction at 4-fold acceleration in knee MRI. Besides, in contrast to other deep learning-based methods, which focused on the ℓ_2 loss, the likelihood that was conditioned on pixel-wise dependencies of the whole image showed an improved representation capacity, leading to a higher reconstruction accuracy. The applicability of the proposed method to patch-based reconstruction also suggested its high representation capacity and flexibility. Even when the inputs were image patches, the prior model could still recover the whole image.

The projected subgradient approach to solving Eq. 12 was computationally inexpensive but converged slowly, as shown in Figure 8. For a random initialization, the algorithm needed about 500 iterations to converge with a fixed step size. Meanwhile, we noticed that if the zero-filled-reconstructed image was used for initialization, the number of iterations required could be reduced to 100. Besides, the decay of the residual norm stopped earlier than that of the log-likelihood, i.e., when the residual norm stopped decaying, the likelihood could still penalize the error. This evidence indicated that using the residual norm as the fidelity alone was sub-optimal, and the deep learning-based statistical regularization can lead to a better reconstruction result than the ℓ_2 fidelity alone. The deep learning-based statistical regularization in the proposed method outperformed other conventional regularizations trained with an image-level ℓ_2 loss. The ℓ_2 loss did not give an explicit description of the relationships among all pixels in the image, while the likelihood used in conjunction with the proposed image prior model was conditioned on the pixel-wise relationships and demonstrated superior performance compared with the conventional methods, under the current experimental setting.

Furthermore, the demonstrated image prior can be extended to a more elaborated form with clinical information, such as organs and contrast types, as the model inputs. For example, one could input the image prior with labels such as brain or knee. Then hypothetically, the image prior can be designed as a conditional probability for the given image label. In other words, the posterior would be dependent on both the k-space data and image labels. Moreover, the MR pulse sequence parameters could serve as image labels for the prior, such as echo time and repetition time. In short, the prior model can be used to describe clinical information or acquisition parameters. This setting opens up a future direction on a more elaborated image prior, incorporating clinical information and MR sequence parameters, for more intelligent image representation and pattern detection.

In this study, the generative network solely served as an image prior model, in contrast to how neural networks were used in other deep learning-based reconstructions [20, 18, 12, 1]. Specifically, in previous studies [20, 18, 12, 1], embedding the k-space fidelity term into the network made the algorithm inflexible because the image prior and undersampling artifacts were mixed during training. The proposed method used the standard analytical term for fidelity enforcement; therefore, its flexibility was comparable to that of a traditional optimization algorithm, such as ℓ_1-regularized reconstruction. Due to unavoidable changes of the encoding scheme in practice, e.g., the image size and the RF coils during an MRI experiment, it was essential to separate the learned component (the image prior) from the encoding matrix used in the fidelity term of the reconstruction. Besides, the proposed method showed the feasibility of incorporating the coil sensitivity information into the fidelity term, which enabled a changeable encoding scheme without retraining the model [14, 5]. In summary, the separation of the image prior and the encoding matrix embedded in the fidelity term made the proposed method more flexible and generalizable than conventional deep learning approaches.

Conclusion

In summary, this study presented the application of Bayesian inference in MR imaging reconstruction with the deep learning-based prior model. We demonstrated that the deep MRI prior model was a computationally tractable and effective tool for MR image reconstruction. The Bayesian inference significantly improved the reconstruction performance over that of conventional sparsity prior in compressed sensing. More importantly, the proposed reconstruction framework was generalizable for most reconstruction scenarios.

Acknowledgment

We thank Dr. Hongjiang Wei for providing the QSM processing code.

Type | Dimension | Voxel (mm) | TSE factor | TR/TE (ms) | TI (ms)
T1 | 256×256×24 | 0.9×0.9×4 | 7 | 2000/20 | 800
T2 | 256×256×24 | 0.9×0.9×4 | 13 | 3000/80 | -
FLAIR | 256×256×24 | 0.9×0.9×4 | 31 | 8000/135 | 135
T2* | 256×256×28 | 0.9×0.9×4 | - | 770/16 | -
Table 1: The scan parameters of the different weightings used in the brain MRI experiments.
Undersampling factor | Organ | GRAPPA | Ours
R=2 | knee | 40.98±4.20 | 45.64±3.24
R=3 | knee | 34.87±3.38 | 41.71±3.42
R=4 | knee | 29.42±2.46 | 38.44±3.64
R=2 | brain | 37.81±4.7 | 48.40±2.18
R=3 | brain | 31.72±3.20 | 45.39±2.65
R=4 | brain | 28.85±2.87 | 43.58±2.66
Table 2: PSNR comparison (in dB, mean ± standard deviation, N = 100) for parallel imaging and the proposed method on knee and brain MRI.
Undersampling rate | Organ | ℓ_1-ESPIRiT | MODL | Ours
15% + 7% | knee | 29.33±2.82 | 27.63±3.41 | 35.34±3.53
20% + 7% | knee | 31.51±3.60 | 29.29±3.76 | 37.45±3.81
15% + 7% | brain | 32.86±3.46 | 30.60±2.78 | 39.78±2.83
20% + 7% | brain | 34.72±3.89 | 32.46±2.95 | 41.24±2.81
Table 3: PSNR comparison (in dB, mean ± standard deviation, N = 100) for compressed sensing and the proposed method on knee and brain MRI.
Figure 1: (a) The conditional model in [17, 13] defined the probability of an image pixel x_i (yellow) dependent on all the pixels above and to its left (green); C_i was the set containing all the previous pixels. (b) Diagram of the PixelCNN++ network in [17], which was the prior model used in this study, i.e., p(x; θ) in Eq. 9. Each ResNet block (gray) consisted of 3 ResNet components. The input of the network was x; the outputs were the parameters of the mixture distribution (π, μ, s), which were fed into the conditional probability model in Eq. 7. (c) In this method, we reconstructed images with a 256×256 matrix size, using the prior model that was trained with 128×128 images and illustrated in Fig. (b). To reconcile this mismatch, we split one image into four patches for applying the prior model. After updating x, the four patches of one image were merged to form an image with the original size of 256×256. The merged image was then projected onto {x : A x = y} in Eq. 13. Furthermore, a random shift along the phase encoding direction was applied to mitigate the stitching line in-between patches.
GRAPPA Ours Ground truth
PSNR(dB) 33.49 44.33
NMSE(%) 1.61 0.14
PSNR(dB) 30.11 41.58
NMSE(%) 11.06 0.79
Figure 2: Comparisons on PD and PDFS contrasts using GRAPPA and the proposed reconstruction with R=3 acceleration and 256×256 matrix size. The intensity of the error maps was magnified five times. The proposed method effectively eliminated the noise amplification and aliasing artifacts of the GRAPPA reconstruction.
GRAPPA Ours Ground truth
PSNR(dB) 34.88 47.62
NMSE(%) 4.49 0.24
PSNR(dB) 33.18 48.31
NMSE(%) 2.64 0.10
PSNR(dB) 33.21 46.27
NMSE(%) 4.46 0.22
Figure 3: Comparisons on T1-, T2-, and FLAIR-T2-weighted image reconstruction, using parallel imaging and the proposed reconstruction with R=3 acceleration and 256×256 matrix size. The intensity of the error maps was magnified 15 times. The proposed method effectively eliminated the noise amplification of the GRAPPA reconstruction.
ℓ_1-ESPIRiT MODL Ours Ground truth
PSNR (dB) 34.55 34.16 40.92
NMSE(%) 2.69 2.94 0.56
PSNR(dB) 32.59 33.33 34.15
NMSE(%) 7.02 5.9 4.9
Figure 4: Comparison of different methods on PD and PDFS contrasts, using 27% 1D-undersampled k-space and 256×256 matrix size. The intensity of the error maps was magnified five times. The proposed method substantially reduced the aliasing artifacts and preserved image details in compressed sensing reconstruction.
ℓ_1-ESPIRiT MODL Ours Ground truth
PSNR(dB) 36.32 33.79 43.47
NMSE(%) 3.22 5.78 0.62
PSNR(dB) 34.23 32.98 42.41
NMSE(%) 2.56 3.41 0.39
PSNR(dB) 36.19 35.61 41.53
NMSE(%) 2.25 2.56 0.66
Figure 5: Comparison of compressed sensing and deep learning approaches for T1-, T2-, and FLAIR-T2-weighted image reconstructions, using 22% 1D-undersampled k-space and 256×256 matrix size. The intensity of the error maps was magnified ten times. The proposed method substantially reduced the aliasing artifacts and preserved image details in compressed sensing reconstruction.
CG SENSE Ours Ground truth
PSNR(dB) 22.52 37.18
NMSE(%) 15.69 0.54
Figure 6: Comparison of CG SENSE and the proposed reconstruction for simulated spiral k-space with 4-fold acceleration (i.e., 6 out of 24 spiral interleaves), acquired by a T2*-weighted gradient-echo sequence. The intensity of the error maps was magnified five times. The proposed method substantially reduced the aliasing artifacts in spiral reconstruction. Note that the same deep learning model used in the previous Cartesian k-space reconstructions was applied to the spiral reconstruction, without re-training the deep learning model.
Ours Ground truth

Figure 7: The preliminary result from the proposed accelerated reconstruction in quantitative susceptibility mapping (QSM), with R = 4 and a GRAPPA-type 1D undersampling. The raw images were acquired by a T2*-weighted gradient-echo sequence. Note that the same deep learning model used in the previous experiments was applied to this experiment, with phase information preserved in all reconstructed images. The proposed deep learning method also showed an apparent de-noising effect on the QSM maps, while still preserving the major phase contrast even at high acceleration, i.e., R = 4. The two rows show maps of different slices from one healthy volunteer.
Figure 8: Convergence curves reflect the stability of the two iterative steps: 1) maximizing the posterior, which effectively minimizes the negative log-likelihood of the MRI prior model, and 2) enforcing k-space fidelity, which reduces the residual norm of the k-space data consistency. A 22% sampling rate with a 1D undersampling scheme was used in this simulation. The residual norm is defined in Eq. 13, and the log-likelihood of the MRI prior model is given in Eq. 9.
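The two iterative steps tracked in these convergence curves can be sketched as follows, for a simplified single-coil Cartesian case under our own naming. Here `prior_grad` is a placeholder for the subgradient of the prior log-likelihood obtained by stochastic backpropagation through the trained network, and the projection enforces the equality constraint by replacing the sampled k-space locations with the measured data:

```python
import numpy as np

def project(x, y, mask):
    """Enforce k-space fidelity: replace sampled locations with measured data."""
    k = np.fft.fft2(x, norm="ortho")
    k[mask] = y[mask]
    return np.fft.ifft2(k, norm="ortho")

def residual_norm(x, y, mask):
    """k-space fidelity residual, the quantity plotted against iterations."""
    k = np.fft.fft2(x, norm="ortho")
    return np.linalg.norm(k[mask] - y[mask])

def reconstruct(y, mask, prior_grad, n_iter=100, step=0.1):
    """Alternate a subgradient step on the prior with the fidelity projection."""
    x = np.fft.ifft2(y, norm="ortho")       # zero-filled initial guess
    for _ in range(n_iter):
        x = x + step * prior_grad(x)        # 1) maximize the posterior
        x = project(x, y, mask)             # 2) enforce k-space fidelity
    return x
```

Logging `residual_norm` and the network's negative log-likelihood once per iteration reproduces the two curves of Figure 8.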
(a) Mask in Figs. 2 and 3
(b) Mask in Fig. 4 top
(c) Mask in Fig. 4 bottom
(d) Mask in Fig. 5 top
(e) Mask in Fig. 5 middle
(f) Mask in Fig. 5 bottom
(g) One spiral interleaf in Fig. 6
Supporting Figure 1: k-space masks used in the compressed sensing, parallel imaging, and deep learning reconstructions. Bright lines indicate the sampled frequency-encoding lines in the 2D k-space, i.e., 1D undersampling was simulated.
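A 1D undersampling mask of the kind shown in Supporting Figure 1 can be simulated as follows; this is a sketch under our own naming and parameter choices, keeping a fully sampled center block (as GRAPPA-type auto-calibration requires) and drawing the remaining lines uniformly at random:

```python
import numpy as np

def mask_1d(n_lines, rate, center_frac=0.08, seed=0):
    """Randomly select 1D encoding lines to sample, always keeping a
    fully sampled low-frequency center block."""
    rng = np.random.default_rng(seed)
    n_keep = int(round(rate * n_lines))
    n_center = int(round(center_frac * n_lines))
    mask = np.zeros(n_lines, dtype=bool)
    c0 = (n_lines - n_center) // 2
    mask[c0:c0 + n_center] = True           # auto-calibration / center lines
    outer = np.flatnonzero(~mask)
    extra = rng.choice(outer, size=max(n_keep - n_center, 0), replace=False)
    mask[extra] = True                      # randomly sampled outer lines
    return mask

m = mask_1d(256, 0.22)
print(m.mean())  # 56 of 256 lines → 0.21875, i.e., a 22% sampling rate
```

Broadcasting this 1D mask along the readout direction yields the 2D sampling patterns shown above.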

References

  • [1] H. K. Aggarwal, M. P. Mani, and M. Jacob (2018) MoDL: model-based deep learning architecture for inverse problems. IEEE Transactions on Medical Imaging 38 (2), pp. 394–405.
  • [2] S. Arridge, P. Maass, O. Öktem, and C. Schönlieb (2019) Solving inverse problems using data-driven models. Acta Numerica 28, pp. 1–174.
  • [3] S. Boyd, L. Xiao, and A. Mutapcic (2003) Subgradient methods. Lecture notes of EE392o, Stanford University, Autumn Quarter 2003–2004.
  • [4] J. A. Fessler and B. P. Sutton (2003) Nonuniform fast Fourier transforms using min-max interpolation. IEEE Transactions on Signal Processing 51 (2), pp. 560–574.
  • [5] J. A. Fessler (2010) Model-based image reconstruction for MRI. IEEE Signal Processing Magazine 27 (4), pp. 81–89.
  • [6] Y. Gal and Z. Ghahramani (2016) Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 48, pp. 1050–1059.
  • [7] K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra (2013) Deep autoregressive networks. arXiv preprint arXiv:1310.8499.
  • [8] M. A. Griswold, P. M. Jakob, R. M. Heidemann, M. Nittka, V. Jellus, J. Wang, B. Kiefer, and A. Haase (2002) Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magnetic Resonance in Medicine 47 (6), pp. 1202–1210.
  • [9] D. P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, and M. Welling (2016) Improving variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems (NIPS). URL: http://arxiv.org/abs/1606.04934.
  • [10] M. Lustig, D. Donoho, and J. M. Pauly (2007) Sparse MRI: the application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine 58 (6), pp. 1182–1195.
  • [11] M. Lustig, S. Kim, and J. M. Pauly (2008) A fast method for designing time-optimal gradient waveforms for arbitrary k-space trajectories. IEEE Transactions on Medical Imaging 27 (6), pp. 866–873.
  • [12] M. Mardani, E. Gong, J. Y. Cheng, S. S. Vasanawala, G. Zaharchuk, L. Xing, and J. M. Pauly (2018) Deep generative adversarial neural networks for compressive sensing MRI. IEEE Transactions on Medical Imaging 38 (1), pp. 167–179.
  • [13] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu (2016) Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759.
  • [14] K. P. Pruessmann, M. Weiger, P. Börnert, and P. Boesiger (2001) Advances in sensitivity encoding with arbitrary k-space trajectories. Magnetic Resonance in Medicine 46 (4), pp. 638–651.
  • [15] S. Ravishankar and Y. Bresler (2010) MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Transactions on Medical Imaging 30 (5), pp. 1028–1041.
  • [16] D. J. Rezende, S. Mohamed, and D. Wierstra (2014) Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082.
  • [17] T. Salimans, A. Karpathy, X. Chen, and D. P. Kingma (2017) PixelCNN++: improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517.
  • [18] J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, and D. Rueckert (2017) A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Transactions on Medical Imaging 37 (2), pp. 491–503.
  • [19] M. Uecker, P. Lai, M. J. Murphy, P. Virtue, M. Elad, J. M. Pauly, S. S. Vasanawala, and M. Lustig (2014) ESPIRiT—an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA. Magnetic Resonance in Medicine 71 (3), pp. 990–1001.
  • [20] Y. Yang, J. Sun, H. Li, and Z. Xu (2016) Deep ADMM-Net for compressive sensing MRI. In Advances in Neural Information Processing Systems 29, pp. 10–18.
  • [21] J. Zbontar, F. Knoll, A. Sriram, M. J. Muckley, M. Bruno, A. Defazio, M. Parente, K. J. Geras, J. Katsnelson, H. Chandarana, Z. Zhang, M. Drozdzal, A. Romero, M. Rabbat, P. Vincent, J. Pinkerton, D. Wang, N. Yakubova, E. Owens, C. L. Zitnick, M. P. Recht, D. K. Sodickson, and Y. W. Lui (2018) fastMRI: an open dataset and benchmarks for accelerated MRI. arXiv preprint arXiv:1811.08839.

Appendix A Reconstruction for varied image sizes with the deep prior model

Input:
- k-space data y
- encoding matrix A
- maximum iteration T
Output:
- the restored image x

1: Initialize x with a random point ▷ Initialization
2: while the iteration count is below T and the stopping criterion is not met do ▷ Iteration
3:     Generate a random shifting offset s
4:     Circularly shift the pixels of x away from the center by s
5:     Split x into pieces matching the network input size
6:     Get the subgradient g of the prior log-likelihood at x
7:     Pick a step size α
8:     Update x with the step α g
9:     Merge the pieces back into x for the projection
10:     Circularly shift the pixels of x back by s
11:     Project x onto the k-space fidelity constraint {x : Ax = y}
12: return x
Algorithm 1 Reconstruction algorithm with the deep prior model
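The steps of Algorithm 1 can be sketched in NumPy as follows, for the single-coil Cartesian case under our own naming. Here `patch_grad` is a placeholder for the prior subgradient evaluated on one network-sized piece; the shift/split/merge steps let a prior trained on a fixed patch size handle larger images:

```python
import numpy as np

def reconstruct_varied_size(y, mask, patch_grad, patch=64, n_iter=50, step=0.1):
    """Sketch of Algorithm 1: random circular shifts plus patch-wise gradient
    evaluation, followed by a projection onto the k-space equality constraint."""
    x = np.fft.ifft2(y, norm="ortho")                    # initial guess
    h, w = x.shape
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        sh = rng.integers(patch, size=2)                 # random shifting offset
        xs = np.roll(x, shift=tuple(sh), axis=(0, 1))    # circular shift
        g = np.empty_like(xs)
        for i in range(0, h, patch):                     # split into pieces and
            for j in range(0, w, patch):                 # get the subgradient
                g[i:i+patch, j:j+patch] = patch_grad(xs[i:i+patch, j:j+patch])
        xs = xs + step * g                               # subgradient update
        x = np.roll(xs, shift=tuple(-sh), axis=(0, 1))   # merge and shift back
        k = np.fft.fft2(x, norm="ortho")                 # projection onto
        k[mask] = y[mask]                                # {x : mask ⊙ Fx = y}
        x = np.fft.ifft2(k, norm="ortho")
    return x
```

The random shift changes the patch boundaries every iteration, which averages out blocking artifacts that a fixed tiling would otherwise introduce.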