A Deep Error Correction Network for Compressed Sensing MRI

03/23/2018 · by Liyan Sun, et al.

Compressed sensing for magnetic resonance imaging (CS-MRI) exploits image sparsity properties to reconstruct MRI from very few Fourier k-space measurements. The goal is to minimize any structural errors in the reconstruction that could have a negative impact on its diagnostic quality. To this end, we propose a deep error correction network (DECN) for CS-MRI. The DECN model consists of three parts, which we refer to as modules: a guide, or template, module, an error correction module, and a data fidelity module. Existing CS-MRI algorithms can serve as the template module for guiding the reconstruction. Using this template as a guide, the error correction module learns a convolutional neural network (CNN) to map the k-space data in a way that adjusts for the reconstruction error of the template image. Our experimental results show the proposed DECN CS-MRI reconstruction framework can considerably improve upon existing inversion algorithms by supplementing with an error-correcting CNN.


I Introduction

Fig. 1: The proposed Deep Error Correction Network (DECN) architecture consists of three modules: a guide module, an error correction module, and a data fidelity module. The input of the error correction module is the concatenation of the zero-filled reconstruction of the compressed MR samples and the guidance image, while the corresponding training label is the reconstruction error x − x_g. After the error correction module is trained, the guidance image and the feed-forward approximation of the reconstruction error for a test image are used to produce the final reconstructed MRI.

Magnetic resonance imaging (MRI) is an important medical imaging technique, but its slow imaging speed limits its widespread application. Compressed sensing (CS) theory [2, 4] has reshaped the signal acquisition and reconstruction process and allows significant acceleration of MRI. The CS-MRI problem can be formulated as the optimization

\min_x \; \frac{1}{2} \| F_u x - y \|_2^2 + \sum_i \lambda_i \Psi_i(x)    (1)

where x ∈ C^N is the complex-valued MRI to be reconstructed, F_u ∈ C^{M×N} is the under-sampled Fourier matrix and y ∈ C^M (M ≪ N) is the k-space data measured by the MRI machine. The first data fidelity term ensures agreement between the Fourier coefficients of the reconstructed image and the measured data, while the regularization terms Ψ_i, weighted by λ_i, encourage certain image properties such as sparsity in a transform domain.
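To make the forward model concrete, the following NumPy sketch (our own illustration, not code from the paper) simulates the under-sampled Fourier operator F_u and the zero-filled reconstruction F_u^H y that later serves as a baseline and as a network input:

```python
import numpy as np

def undersample_kspace(x, mask):
    """Apply the under-sampled Fourier operator F_u: full 2-D FFT, then keep
    only the k-space coefficients selected by the boolean sampling mask."""
    return np.fft.fft2(x, norm="ortho") * mask

def zero_filled_recon(y_masked):
    """Zero-filled reconstruction F_u^H y: inverse FFT of the masked k-space."""
    return np.fft.ifft2(y_masked, norm="ortho")

# Toy usage with a random complex "image" and a random 30% sampling mask.
rng = np.random.default_rng(0)
x_true = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
mask = rng.random((128, 128)) < 0.3
y = undersample_kspace(x_true, mask)
x_zf = zero_filled_recon(y)
```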

Recently, deep learning approaches have been introduced for the CS-MRI problem, achieving state-of-the-art performance compared with conventional methods. For example, an end-to-end mapping from the input zero-filled MRI to the fully-sampled MRI was trained using a classic CNN model in [29], or its residual network variant in [15]. Greater integration of the data fidelity term into the network has resulted in the Deep Cascade CNN (DC-CNN) [26]. Compared with previous models proposed for CS-MRI inversion, deep learning is able to capture more intricate patterns within the data, which leads to its improved performance.

Recently, compressed sensing MRI was approved by the Food and Drug Administration (FDA) for two major MRI vendors, GE and Siemens [6]. With the growing demand for compressed sensing MRI in practice, improving its reconstruction accuracy is of great significance. In this paper, we propose a deep learning framework in which an arbitrary CS-MRI inversion algorithm is combined with a deep learning error correction network. The network is trained for a specific inversion algorithm to exploit structural consistencies in the errors it produces. The final reconstruction is found by combining the information from the original algorithm with the error correction of the network.

II Related Work

Much previous work has focused on proposing appropriate regularizations that lead to better MRI reconstructions. In the pioneering CS-MRI work called SparseMRI [17], the regularization is an ℓ1 penalty on the wavelet coefficients together with the total variation of the reconstructed image. Building on SparseMRI, more efficient methods have been proposed to optimize this objective, such as TVCMRI [18], RecPF [31] and FCSA [10]. Variations on the wavelet penalty exploit geometric information of the MRI, such as PBDW/PBDWS [22, 20] and GBRWT [14], for improved results. Dictionary learning methods [24, 11, 16, 25] have also been applied to CS-MRI reconstruction, as have nonlocal priors such as NLR [3], PANO [23] and BM3D-MRI [5]. These previous works can be considered sparsity-promoting regularized CS-MRI methods optimized with iterative algorithms. They represent images using simple single-layer features that are either predefined (e.g., wavelets) or learned from the data (e.g., dictionary learning).

Previous work has also tried to exploit regularities in the reconstruction error in different ways. In the popular dynamic MRI reconstruction method k-t FOCUSS [33, 12], the original signal is decomposed into a predicted signal and a residual signal; the predicted signal is estimated by temporal averaging, while the highly sparse residual signal is given a sparsity-promoting norm regularization. An iterative feature refinement strategy for CS-MRI was proposed in [28] to exploit the structural error produced in each iteration. In [32], the k-space measurements are divided into high- and low-frequency regions and reconstructed separately. In [21], the MR image is decomposed into a smooth layer and a detail layer, estimated using total variation and wavelet regularization, respectively. In [27], the low-frequency information is estimated using parallel imaging techniques. These methods each employ a fixed transform basis.

III Problem Formulation

Exploiting structural regularities in the reconstruction error of CS-MRI is a natural way of compensating for imperfect modeling. Starting with the standard formulation of CS-MRI in Equation 1, we formulate our objective function as

\min_x \; \frac{1}{2} \| F_u x - y \|_2^2 + \frac{\lambda}{2} \| x - \tilde{x} \|_2^2    (2)

where x̃ is an intermediate reconstruction of the MRI. We model this intermediate reconstruction as the summation of a “guidance” image x_g and the error image e of the reconstruction,

\tilde{x} = x_g + e.    (3)

Substituting this into Equation 2, we obtain

\min_x \; \frac{1}{2} \| F_u x - y \|_2^2 + \frac{\lambda}{2} \| x - x_g - e \|_2^2    (4)

The guidance image x_g is the MRI reconstructed using any chosen CS-MRI method; thus x_g can be formed using existing software prior to using our proposed method for the final reconstruction. The reconstruction error e = x − x_g is the difference between the ground truth x and the reconstruction x_g. Since we do not know this error at testing time, we use training data to model the error image with a neural network C(v|θ), where θ represents the network parameters and v is the input to the network. Thus, Equation 4 can be rewritten as

\min_x \; \frac{1}{2} \| F_u x - y \|_2^2 + \frac{\lambda}{2} \| x - x_g - C(v \,|\, \theta) \|_2^2    (5)

For a new MRI, after obtaining the guidance image x_g (using a pre-existing algorithm) and the error estimate C(v|θ) (using a feed-forward neural network trained on data), the proposed framework produces the final output MRI by solving the least squares problem of Equation 5.

IV Deep Error Correction Network (DECN)

Following the formulation of our CS-MRI framework above and in Figure 1, we now discuss each module of the proposed Deep Error Correction Network (DECN) framework and its optimization in more detail.

IV-A Guide Module

With the guide module, we seek a reconstruction x_g that approximates the fully-sampled MRI using a standard “off-the-shelf” CS-MRI approach. We denote this as

x_g = \arg\min_x \; \frac{1}{2} \| F_u x - y \|_2^2 + \sum_i \lambda_i \Psi_i(x)    (6)

We illustrate with reconstructions from three CS-MRI methods: TLMRI (transform learning MRI) [25], PANO (patch-based nonlocal operator) [23] and GBRWT (graph-based redundant wavelet transform) [14]. The PANO and GBRWT models achieve impressive reconstruction quality because they use a nonlocal prior and an adaptive graph-based wavelet transform, respectively, to exploit image structure. In TLMRI, the sparsifying transform learning and the reconstruction are performed simultaneously in a more efficient way than in DLMRI [24]. These three methods represent state-of-the-art performance among non-deep CS-MRI models. In Figure 2, we show the reconstruction errors of zero-filling (itself a potential reconstruction “algorithm”), TLMRI, PANO and GBRWT on a complex-valued brain MRI using Cartesian under-sampling. The error display ranges from 0 to 0.2 on normalized data. Parameter settings are detailed in Section V.

We also consider the deep learning DC-CNN model [26] as the guide module and include its reconstruction error in Figure 2. We observe that the zero-filled, TLMRI, PANO, GBRWT and DC-CNN models all suffer from structural reconstruction errors, while the DC-CNN model achieves the highest reconstruction quality with minimal errors because of its powerful model capacity. Another advantage of this CNN model is that, once the network is trained, testing is very fast compared with conventional sparsity-regularized CS-MRI models: the reconstruction is a simple feed-forward function of the input and no iterative optimization is required. We compare the reconstruction times of TLMRI, PANO, GBRWT and DC-CNN for the test image in Figure 2 in Table I.

Method              PANO     TLMRI     GBRWT     DC-CNN
Runtime (seconds)   11.37    127.67    100.60    0.04
TABLE I: Reconstruction time of PANO, TLMRI, GBRWT and DC-CNN.
Fig. 2: Reconstruction errors of a brain MRI under a 1D under-sampling mask: (a) fully sampled reference, (b) zero-filled (ZF), (c) TLMRI, (d) PANO, (e) GBRWT, (f) DC-CNN.

IV-B Error Correction Module

Using the guidance image x_g, we can train a deep error correction module on the residual. To perform this task, we need access during training to pairs consisting of the true, fully sampled MRI x^{(i)} and its guide reconstruction x_g^{(i)}, found by manually undersampling the k-space of each image according to a pre-defined mask and inverting. We then optimize the following objective function over the network parameters θ,

\min_\theta \; \sum_i \big\| C\big( [\, x_{zf}^{(i)}, \, x_g^{(i)} \,] \,\big|\, \theta \big) - \big( x^{(i)} - x_g^{(i)} \big) \big\|_2^2    (7)

where x_zf indicates the zero-filled reconstruction and the input to the error correction module is the concatenation [x_zf, x_g] of the zero-filled MRI and the guidance MRI. Therefore, the error-correcting network learns how to map the concatenation of the zero-filled, compressively sensed MRI and the guidance image to the residual between the true MRI and the reconstruction from the corresponding off-the-shelf CS-MRI inversion algorithm. We next give the rationale for the concatenation operation.

In CS-MRI inversion, the zero-filled MR image usually serves as the starting point of the iterative optimization. Although iterative de-aliasing can effectively remove artifacts and achieve much better visual quality than the zero-filled reconstruction, some distortion and information loss is inevitable. To illustrate this phenomenon, we compare the pixel-wise reconstruction errors of the zero-filled reconstruction and the other reconstruction models on the MR image in Figure 2.

We subtract the absolute reconstruction error of the zero-filled reconstruction from that of each compared CS-MRI method and keep only the nonnegative values, which can be formulated as

D = P_+\big( \, |x_g - x| - |x_{zf} - x| \, \big)    (8)

where the operator P_+ sets negative values to zero, yielding the filtered difference map. We show the corresponding filtered difference maps in Figure 3 in the range [0, 0.2]. Bright regions indicate where the zero-filled reconstruction is more accurate. We observe that the zero-filled reconstruction provides better accuracy in some regions, indicating that information loss occurs in the guide reconstructions.
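A minimal NumPy sketch of Equation 8 (our own, with illustrative variable names) shows how the filtered difference map of Figure 3 can be computed:

```python
import numpy as np

def filtered_difference_map(x_full, x_guide, x_zf):
    """P_+( |x_g - x| - |x_zf - x| ): bright where zero-filling is more accurate."""
    err_guide = np.abs(x_guide - x_full)   # absolute error of the guide reconstruction
    err_zf = np.abs(x_zf - x_full)         # absolute error of the zero-filled reconstruction
    return np.clip(err_guide - err_zf, 0.0, None)  # set negative values to zero

# Displayed in the range [0, 0.2] on normalized data, as in Figure 3:
# plt.imshow(filtered_difference_map(x_full, x_guide, x_zf), vmin=0, vmax=0.2)
```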

Fig. 3: The filtered difference maps between the reconstruction errors of the zero-filled reconstruction and recent CS-MRI inversions: (a) TLMRI, (b) PANO, (c) GBRWT.

To alleviate the information loss in the guide module, we concatenate the zero-filled MR image and the guidance image as the input to the error correction network, so that information from both is available. We further validate this choice with an ablation study in Section V.

We again note that the network is paired with a particular inversion algorithm, since each algorithm may have unique and consistent characteristics in the errors it produces. The network itself can be any deep learning architecture trained using standard methods.
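To make the training objective of Equation 7 concrete, the sketch below (our own; guide_recon is a hypothetical wrapper around any off-the-shelf CS-MRI solver, and magnitude images are used for simplicity even though the data are complex-valued) assembles one input/label pair for the error correction network:

```python
import numpy as np

def make_training_pair(x_full, mask, guide_recon):
    """Build one (input, label) pair for the error correction CNN.

    x_full      : fully sampled ground-truth MR image (2-D array)
    mask        : boolean k-space sampling mask
    guide_recon : callable (y, mask) -> guidance image x_g, wrapping any
                  off-the-shelf CS-MRI method (hypothetical interface)
    """
    y = np.fft.fft2(x_full, norm="ortho") * mask                 # simulated measurements
    x_zf = np.fft.ifft2(y, norm="ortho")                         # zero-filled reconstruction
    x_g = guide_recon(y, mask)                                   # guidance image
    net_input = np.stack([np.abs(x_zf), np.abs(x_g)], axis=-1)   # channel-wise concatenation
    label = np.abs(x_full) - np.abs(x_g)                         # reconstruction error to learn
    return net_input, label
```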

IV-C Data Fidelity Module

After the error correction network is trained, for new undersampled k-space data y for which the true x is unknown, we use the corresponding guidance image x_g and the approximated reconstruction error ê = C(v|θ̂) in the data fidelity module by solving the following optimization problem

\hat{x} = \arg\min_x \; \frac{1}{2} \| F_u x - y \|_2^2 + \frac{\lambda}{2} \| x - x_g - \hat{e} \|_2^2    (9)

The data fidelity module is utilized in our proposed DECN framework to correct the reconstruction by enforcing greater agreement at the sampled k-space locations. Using the properties of the fast Fourier transform (FFT), we can simplify the optimization by working in the Fourier domain with the common technique described in, e.g., [11]. Writing Z = F(x_g + ê) for the full Fourier transform of the corrected guide, Y for the measured k-space data and Ω for the set of sampled k-space locations, the optimal values can be found point-wise, yielding the closed-form solution

\hat{X}(k) = \frac{Y(k) + \lambda Z(k)}{1 + \lambda} \ \text{for } k \in \Omega, \qquad \hat{X}(k) = Z(k) \ \text{for } k \notin \Omega    (10)

The final image is recovered as x̂ = F^H X̂. The regularization parameter λ is usually set very small in a noise-free environment, and a very small value also worked well in our low-noise experiments.
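A minimal NumPy sketch of the point-wise update in Equation 10 is given below (our own illustration; the default value of λ is only a placeholder for “very small”, not the paper's setting). It is applied to z = x_g + ê to produce the final DECN reconstruction:

```python
import numpy as np

def data_fidelity_correction(z, y_masked, mask, lam=1e-6):
    """Closed-form solution of Equation (10), applied point-wise in k-space.

    z        : guidance image plus predicted error, x_g + e_hat
    y_masked : measured k-space data (zeros at unsampled locations)
    mask     : boolean sampling mask (True on the sampled set Omega)
    lam      : regularization weight; a small value enforces the measured data
    """
    Z = np.fft.fft2(z, norm="ortho")
    # Sampled locations: weighted combination of measurement and prediction;
    # unsampled locations: keep the predicted k-space value.
    X = np.where(mask, (y_masked + lam * Z) / (1.0 + lam), Z)
    return np.fft.ifft2(X, norm="ortho")
```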

The proposed DECN model can thus effectively exploit the reconstruction residual. In Section V, we further validate the network architecture via an ablation study on the error correction strategies.

V Experiments

(a) The DECN model without input concatenation and error correction (DECN-NIC-NEC)
(b) The DECN model with input concatenation and without error correction (DECN-IC-NEC)
(c) The DECN model without input concatenation and with error correction (DECN-NIC-EC)
Fig. 4: The compared baseline network architectures for the ablation study to evaluate the input concatenation and error correction strategies.

In this section, we present experimental results on complex-valued MRI data. The T1-weighted MRI dataset is acquired from 40 volunteers, with 3800 MR images in total, on a Siemens 3.0T scanner with 12 coils using the fast low angle shot (FLASH) sequence (220 mm field of view; TR/TE and slice thickness as given in [22]). A SENSE reconstruction is used to compose the gold-standard full k-space, which emulates single-channel MRI; the details can be found in [22]. We randomly select 80 MR images for training and 20 MR images for testing. Informed consent was obtained from the imaging subjects in compliance with the Institutional Review Board policy. The magnitude of each fully sampled MR image is normalized to unity by dividing the image by its largest pixel magnitude.

Under-sampled k-space measurements are obtained by applying Cartesian and Random sampling masks with randomly selected phase encodes. Different undersampling ratios are used in the experiments.
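For illustration, a 1D Cartesian mask with random phase encodes can be generated as in the sketch below (our own; the fully sampled central fraction and the random seed are illustrative choices, not the paper's settings, and the mask is defined in fftshift-ed coordinates with the DC line at the center):

```python
import numpy as np

def cartesian_mask(shape, ratio, center_fraction=0.04, seed=0):
    """Sample whole phase-encode lines: a few central lines plus random ones."""
    rng = np.random.default_rng(seed)
    rows, cols = shape
    mask = np.zeros((rows, cols), dtype=bool)
    n_total = int(round(ratio * rows))                        # total sampled lines
    n_center = min(n_total, max(1, int(round(center_fraction * rows))))
    start = rows // 2 - n_center // 2
    center = np.arange(start, start + n_center)               # fully sampled low frequencies
    remaining = np.setdiff1d(np.arange(rows), center)
    extra = rng.choice(remaining, size=n_total - n_center, replace=False)
    mask[np.concatenate([center, extra]), :] = True
    return mask

# Example: 30% Cartesian under-sampling for 256x256 images.
mask = cartesian_mask((256, 256), ratio=0.3)
```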

V-A Network Architecture

Sampling Pattern              Cartesian Under-sampling                 Random Under-sampling
Sampling Ratio           20%          30%          40%           20%          30%          40%
                      PSNR  SSIM   PSNR  SSIM   PSNR  SSIM    PSNR  SSIM   PSNR  SSIM   PSNR  SSIM
TLMRI                31.27 0.864  32.86 0.868  35.99 0.896   35.13 0.878  36.46 0.882  37.26 0.891
PANO                 30.71 0.858  32.65 0.889  37.40 0.940   36.94 0.931  39.36 0.949  40.74 0.957
GBRWT                30.61 0.853  32.27 0.879  37.19 0.932   36.81 0.908  39.16 0.932  40.72 0.944
DC-CNN               32.58 0.885  34.67 0.905  39.52 0.955   38.54 0.937  40.91 0.953  42.47 0.961
TLMRI-DECN           32.77 0.876  34.41 0.891  38.62 0.944   37.60 0.930  39.54 0.944  40.72 0.949
PANO-DECN            32.57 0.864  34.43 0.891  39.27 0.953   38.51 0.940  40.88 0.956  42.42 0.963
GBRWT-DECN           32.58 0.869  34.41 0.891  39.07 0.950   38.48 0.940  40.79 0.955  42.36 0.963
DC-CNN-DECN          33.06 0.898  35.34 0.922  39.92 0.956   38.86 0.939  41.06 0.954  42.58 0.962
Gain over TLMRI       1.50 0.012   1.55 0.023   2.63 0.048    2.47 0.052   3.08 0.062   3.46 0.068
Gain over PANO        1.86 0.006   1.78 0.002   1.87 0.012    1.57 0.010   1.52 0.006   1.68 0.006
Gain over GBRWT       1.97 0.016   2.14 0.012   1.88 0.018    1.67 0.032   1.63 0.023   1.64 0.019
Gain over DC-CNN      0.48 0.013   0.67 0.017   0.40 0.010    0.32 0.002   0.15 0.001   0.11 0.010
TABLE II: Objective evaluation (PSNR in dB, SSIM) of the regular CS-MRI inversions and their DECN counterparts. The last four rows give the PSNR/SSIM improvement of each DECN variant over its corresponding baseline.

For the deep guide module (i.e., learning x_g), we use the CNN architecture called Deep Cascade CNN (DC-CNN) [26], in which a non-adjustable data fidelity layer is incorporated into the model. This guide module consists of four blocks. Each block is formed by four consecutive convolutional layers with a shortcut, followed by a data fidelity layer. Each convolutional layer, except the last one within a block, has 64 feature maps. We use ReLU [19] as the activation function.
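As a rough sketch of one such guide-module block (our own reading of the description above, not the released DC-CNN code), the convolutional part can be written in tf.keras as follows; the complex image is assumed to be represented as two real channels, and the data fidelity layer that follows each block performs the same point-wise k-space replacement as the NumPy sketch after Equation 10, so it is omitted here:

```python
import tensorflow as tf

def dc_cnn_block(x, out_channels=2, filters=64, n_conv=4):
    """One block: (n_conv - 1) ReLU convolutions with 64 feature maps, a final
    convolution back to the image channels, and a residual shortcut.
    x is assumed to already have out_channels channels (e.g., real and imaginary parts)."""
    shortcut = x
    h = x
    for _ in range(n_conv - 1):
        h = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(h)
    h = tf.keras.layers.Conv2D(out_channels, 3, padding="same")(h)
    return tf.keras.layers.Add()([shortcut, h])
```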

For the error correction module (i.e., learning C(v|θ)), we adopt the network architecture shown in Figure 1. There are 18 convolutional layers with a skip connection, as proposed in [8, 9], to alleviate the vanishing gradient problem. We again adopt ReLU as the activation function, except for the last layer, where the identity function is used to allow negative output values. All convolution filters share the same fixed kernel size and stride throughout the network.
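The following tf.keras sketch illustrates an error correction network of this shape (our own construction): 18 convolutional layers, ReLU activations, a linear final layer, and a skip connection across the stack. The exact placement of the skip connection, the kernel size and the single-channel magnitude output are our assumptions and do not reproduce the paper's exact configuration.

```python
import tensorflow as tf

def build_error_correction_net(height, width, n_layers=18, filters=64):
    """Input: zero-filled and guidance images stacked as 2 channels.
    Output: 1-channel estimate of the reconstruction error (can be negative)."""
    inputs = tf.keras.Input(shape=(height, width, 2))
    h = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(inputs)
    skip = h
    for _ in range(n_layers - 3):
        h = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(h)
    h = tf.keras.layers.Add()([h, skip])                    # skip connection across the stack
    h = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(h)
    error = tf.keras.layers.Conv2D(1, 3, padding="same", activation=None)(h)  # identity activation
    return tf.keras.Model(inputs, error)

# Trained against residual labels (x - x_g) with an L2 loss, e.g.:
# model = build_error_correction_net(256, 256)
# model.compile(optimizer="adam", loss="mse")
```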

V-B Experimental Setup

We train and test the two deep models using TensorFlow [1] in the Python environment on an NVIDIA GeForce GTX 1080 GPU with 8GB of memory. Padding is applied to keep the feature maps the same size as the input. We initialize the network parameters with the Xavier method [7] and train with ADAM [13], using fixed settings for the initial learning rate, the first- and second-order momentum, the weight decay regularization parameter, and the mini-batch size. We report performance after training the DC-CNN guide module and the error correction module for a fixed number of iterations each.

In the guide module, we implement the state-of-the-art CS-MRI models with the following settings. For TLMRI [25], the data fidelity parameter, patch size, number of training signals, sparsity fraction, weight on the negative log-determinant plus Frobenius-norm terms, patch overlap stride, and number of optimization iterations follow the settings recommended by the authors [25], with the DCT matrix used as the initial transform operator. For PANO [23], we use the parallel implementation provided by [23]; the data fidelity parameter is set with the zero-filled MR image as the initial reference image, and the non-local operation is applied twice to yield the reconstruction. For GBRWT [14], the data fidelity parameter is likewise fixed; Daubechies redundant wavelet sparsity is used as regularization to obtain the reference image, and the graph is trained a fixed number of times.

V-C Ablation Study

To validate the architecture of the proposed DECN model, we conduct an ablation study comparing the DECN framework with the baseline network architectures in Figure 4. We refer to the model in Figure 4(a), with neither input concatenation nor error correction, as DECN-NIC-NEC: after the guide module, a cascaded CNN module learns the mapping from the pre-reconstructed MR image to the fully sampled MR image. Likewise, we name the models in Figure 4(b) (DECN-IC-NEC) and Figure 4(c) (DECN-NIC-EC). By comparing the DECN-NIC-NEC framework with the DECN-IC-NEC framework, we evaluate the benefit of concatenating the zero-filled MR image and the corresponding guide MR image as the input to compensate for the information loss in the guide module; Figure 3 illustrates that the zero-filled and guide images carry complementary information. By comparing the DECN-NIC-NEC framework with the DECN-NIC-EC framework, we evaluate how the error correction strategy improves reconstruction accuracy compared with a simple cascade.

We conduct these experiments using PANO as the guide module with the Cartesian under-sampling mask shown in Figure 6. Averaged PSNR (peak signal-to-noise ratio) results are given in Figure 5. We observe that PANO-DECN-IC-NEC and PANO-DECN-NIC-EC both outperform PANO-DECN-NIC-NEC by similar margins of about 0.2 dB in PSNR, while the proposed PANO-DECN model with both input concatenation and error correction outperforms PANO-DECN-NIC-NEC by about 0.5 dB. The ablation study shows that the input concatenation and error correction strategies each effectively improve performance within the DECN framework.

Fig. 5: The PSNR (top) and SSIM (bottom) comparison of the regular CS-MRI methods (solid line) and their corresponding DECN-based methods (dashed line). All the tested data are shown in the comparison. The x-axis shows the index of the tested data and the y-axis shows the model performances.

V-D Results

Fig. 6: Reconstruction results with local area magnification under the 1D (Cartesian) under-sampling mask: (a) fully sampled image, (b) 1D mask, (c) zero-filled reconstruction, magnified fully sampled regions, and the TLMRI, PANO, GBRWT and DC-CNN reconstructions together with their DECN counterparts. The last row shows the reconstruction errors of the DECN model under the different guide modules.
Fig. 7: Reconstruction results of the DECN framework applied to TLMRI, PANO and GBRWT under the Random (2D) under-sampling mask, with local area magnification: (a) fully sampled image, (b) 2D mask, (c) zero-filled reconstruction, magnified fully sampled regions, and the TLMRI, PANO and GBRWT reconstructions together with their DECN counterparts. The last row shows the reconstruction errors of the DECN model under the different guide modules.

We evaluate the proposed DECN framework using PSNR and SSIM (structural similarity index) [30] as quantitative image quality measures. Quantitative reconstruction results on all test data, for the different under-sampling patterns and ratios, are given in Table II. The Cartesian under-sampling mask is shown in Figure 6(b) and the Random under-sampling mask in Figure 7(b). We observe that DECN improves all of the off-the-shelf CS-MRI inversion methods on all under-sampling patterns. Since the Random mask has more incoherence than the Cartesian mask at the same under-sampling ratio, CS-MRI achieves better reconstruction quality on the Random masks. We also observe that the plain DC-CNN model already achieves good reconstruction accuracy, leaving fewer structural errors for the error correction module and thus limited improvements of about 0.1 dB on the Random masks. For the other CS-MRI inversions across the various sampling patterns, the improvements are at least 1.5 dB and up to about 3.5 dB.
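For reference, the PSNR and SSIM values in Table II can be computed with standard implementations; the sketch below assumes scikit-image and magnitude images normalized to [0, 1] as described in Section V.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(x_full, x_rec):
    """PSNR (dB) and SSIM between magnitude images normalized to [0, 1]."""
    ref, rec = np.abs(x_full), np.abs(x_rec)
    psnr = peak_signal_noise_ratio(ref, rec, data_range=1.0)
    ssim = structural_similarity(ref, rec, data_range=1.0)
    return psnr, ssim
```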

In Figure 6, we show reconstruction results and the corresponding error images for an example from the test data under the 1D under-sampling mask. With local magnification of the red box, we observe that by learning the error correction module, fine details, and especially low-contrast structures, are better preserved, leading to a better reconstruction.

In Figure 7, we also compare the MR images produced by TLMRI, PANO and GBRWT with their DECN counterparts under the 2D under-sampling mask. The subjective comparison between DC-CNN and DC-CNN-DECN is not included because of the limited improvement. The results are consistent with our observations in the Cartesian under-sampling case.

VI Conclusion

We have proposed a deep error correction framework for the CS-MRI inversion problem. Using any off-the-shelf CS-MRI algorithm to construct a template, or “guide”, for the final reconstruction, we train a deep neural network that learns how to correct the errors that typically appear in the chosen algorithm. Experimental results show that the proposed model consistently improves a variety of CS-MRI inversion techniques.

Acknowledgment

The work is supported in part by National Natural Science Foundation of China under Grants 61571382, 81671766, 61571005, 81671674, U1605252, 61671309 and 81301278, Guangdong Natural Science Foundation under Grant 2015A030313007, Fundamental Research Funds for the Central Universities under Grant 20720160075, 20720150169, CCF-Tencent research fund, Natural Science Foundation of Fujian Province of China (No.2017J01126), and the Science and Technology funds from the Fujian Provincial Administration of Surveying, Mapping, and Geoinformation.

References

  • [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
  • [2] E. J. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, Feb 2006.
  • [3] W. Dong, G. Shi, X. Li, Y. Ma, and F. Huang. Compressive sensing via nonlocal low-rank regularization. IEEE Transactions on Image Processing, 23(8):3618–3632, Aug 2014.
  • [4] D. L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, April 2006.
  • [5] E. M. Eksioglu. Decoupled algorithm for MRI reconstruction using nonlocal block matching model: BM3D-MRI. Journal of Mathematical Imaging and Vision, 56(3):430–440, Nov 2016.
  • [6] J. A. Fessler. Medical image reconstruction: a brief overview of past milestones and future directions. arXiv preprint arXiv:1707.05927, 2017.
  • [7] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Y. W. Teh and M. Titterington, editors, Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249–256, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010. PMLR.
  • [8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, June 2016.
  • [9] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
  • [10] J. Huang, S. Zhang, and D. Metaxas. Efficient MR image reconstruction for compressed MR imaging. Medical Image Analysis, 15(5):670 – 679, 2011. Special Issue on the 2010 Conference on Medical Image Computing and Computer-Assisted Intervention.
  • [11] Y. Huang, J. Paisley, Q. Lin, X. Ding, X. Fu, and X. P. Zhang. Bayesian nonparametric dictionary learning for compressed sensing MRI. IEEE Transactions on Image Processing, 23(12):5007–5019, Dec 2014.
  • [12] H. Jung, K. Sung, K. S. Nayak, E. Y. Kim, and J. C. Ye. k-t FOCUSS: A general compressed sensing framework for high resolution dynamic MRI. Magnetic Resonance in Medicine, 61(1):103–116, 2009.
  • [13] D. Kingma and J. Ba. ADAM: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [14] Z. Lai, X. Qu, Y. Liu, D. Guo, J. Ye, Z. Zhan, and Z. Chen. Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform. Medical Image Analysis, 27(Supplement C):93 – 104, 2016. Discrete Graphical Models in Biomedical Image Analysis.
  • [15] D. Lee, J. Yoo, and J. C. Ye. Deep residual learning for compressed sensing MRI. In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pages 15–18, April 2017.
  • [16] Q. Liu, S. Wang, L. Ying, X. Peng, Y. Zhu, and D. Liang. Adaptive dictionary learning in sparse gradient domain for image recovery. IEEE Transactions on Image Processing, 22(12):4652–4663, Dec 2013.
  • [17] M. Lustig, D. Donoho, and J. M. Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6):1182–1195, 2007.
  • [18] S. Ma, W. Yin, Y. Zhang, and A. Chakraborty. An efficient algorithm for compressed MR imaging using total variation and wavelets. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, June 2008.
  • [19] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pages 807–814, 2010.
  • [20] B. Ning, X. Qu, D. Guo, C. Hu, and Z. Chen. Magnetic resonance image reconstruction using trained geometric directions in 2D redundant wavelets domain and non-convex optimization. Magnetic Resonance Imaging, 31(9):1611 – 1622, 2013.
  • [21] S. Park and J. Park. Compressed sensing MRI exploiting complementary dual decomposition. Medical Image Analysis, 18(3):472 – 486, 2014.
  • [22] X. Qu, D. Guo, B. Ning, Y. Hou, Y. Lin, S. Cai, and Z. Chen. Undersampled MRI reconstruction with patch-based directional wavelets. Magnetic Resonance Imaging, 30(7):964 – 977, 2012.
  • [23] X. Qu, Y. Hou, F. Lam, D. Guo, J. Zhong, and Z. Chen. Magnetic resonance image reconstruction from undersampled measurements using a patch-based nonlocal operator. Medical Image Analysis, 18(6):843 – 856, 2014. Sparse Methods for Signal Reconstruction and Medical Image Analysis.
  • [24] S. Ravishankar and Y. Bresler. MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Transactions on Medical Imaging, 30(5):1028–1041, May 2011.
  • [25] S. Ravishankar and Y. Bresler. Efficient blind compressed sensing using sparsifying transforms with convergence guarantees and application to magnetic resonance imaging. SIAM Journal on Imaging Sciences, 8(4):2519–2557, 2015.
  • [26] J. Schlemper, J. Caballero, J. V. Hajnal, A. Price, and D. Rueckert. A deep cascade of convolutional neural networks for MR image reconstruction. In International Conference on Information Processing in Medical Imaging, pages 647–658. Springer, 2017.
  • [27] K. Sung and B. A. Hargreaves. High-frequency subband compressed sensing MRI using quadruplet sampling. Magnetic Resonance in Medicine, 70(5):1306–1318, 2013.
  • [28] S. Wang, J. Liu, Q. Liu, L. Ying, X. Liu, H. Zheng, and D. Liang. Iterative feature refinement for accurate undersampled MR image reconstruction. Physics in Medicine & Biology, 61(9):3291, 2016.
  • [29] S. Wang, Z. Su, L. Ying, X. Peng, S. Zhu, F. Liang, D. Feng, and D. Liang. Accelerating magnetic resonance imaging via deep learning. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), pages 514–517, April 2016.
  • [30] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, April 2004.
  • [31] J. Yang, Y. Zhang, and W. Yin. A fast alternating direction method for TVL1-L2 signal reconstruction from partial fourier data. IEEE Journal of Selected Topics in Signal Processing, 4(2):288–297, April 2010.
  • [32] Y. Yang, F. Liu, W. Xu, and S. Crozier. Compressed sensing MRI via two-stage reconstruction. IEEE Transactions on Biomedical Engineering, 62(1):110–118, Jan 2015.
  • [33] J. C. Ye, S. Tak, Y. Han, and H. W. Park. Projection reconstruction MR imaging using FOCUSS. Magnetic Resonance in Medicine, 57(4):764–775, 2007.