I Introduction
Magnetic resonance imaging (MRI) is an important medical imaging technique, but its slow imaging speed limits its widespread application. Compressed sensing (CS) theory [2, 4] has significantly changed the signal acquisition and reconstruction process, allowing substantial acceleration of MRI. The CS-MRI problem can be formulated as the optimization
(1)   \min_x \|F_u x - y\|_2^2 + \lambda\, \Psi(x)
where x is the complex-valued MRI to be reconstructed, F_u is the under-sampled Fourier matrix and y are the k-space data measured by the MRI machine. The first, data-fidelity term ensures agreement between the Fourier coefficients of the reconstructed image and the measured data, while the second term Ψ(x) regularizes the reconstruction to encourage certain image properties, such as sparsity in a transform domain.
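To make the forward model concrete, here is a minimal NumPy sketch of the under-sampled Fourier operator F_u and the corresponding zero-filled reconstruction; the random mask and toy image are purely illustrative:

```python
import numpy as np

def fourier_undersample(x, mask):
    """F_u x: centered 2-D FFT of the image, keeping only the masked k-space entries."""
    k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x), norm="ortho"))
    return mask * k

def zero_filled(y, mask):
    """F_u^H y: inverse FFT of the zero-filled k-space, the usual aliased starting point."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(mask * y), norm="ortho"))

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))  # toy complex "image"
mask = (rng.random((64, 64)) < 0.3).astype(float)                       # ~30% sampling ratio
y = fourier_undersample(x, mask)   # measured k-space data
x_zf = zero_filled(y, mask)        # zero-filled reconstruction
```

With a full mask the two operators invert each other exactly, since the ortho-normalized FFT is unitary; with an under-sampling mask the zero-filled image is aliased, which is what the regularizer Ψ must compensate for.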
Recently, deep learning approaches have been introduced for the CS-MRI problem, achieving state-of-the-art performance compared with conventional methods. For example, an end-to-end mapping from the input zero-filled MRI to the fully-sampled MRI was trained using a classic CNN model in [29], or its residual-network variant in [15]. Greater integration of the data-fidelity term into the network has resulted in the Deep Cascade CNN (DC-CNN) [26]. Compared with previous models proposed for CS-MRI inversion, deep learning is able to capture more intricate patterns within the data, which leads to its improved performance. Recently, compressed sensing MRI was approved by the Food and Drug Administration (FDA) for two major MRI vendors, GE and Siemens [6]. Given the growing need for compressed sensing MRI, improving its reconstruction accuracy is of great significance. In this paper, we propose a deep learning framework in which an arbitrary CS-MRI inversion algorithm is combined with a deep error correction network. The network is trained for a specific inversion algorithm to exploit structural consistencies in the errors it produces. The final reconstruction is found by combining the information from the original algorithm with the error correction of the network.
II Related Work
Much previous work has focused on proposing appropriate regularizations that lead to better MRI reconstructions. The pioneering CS-MRI work, SparseMRI [17], adds an ℓ₁ penalty on the wavelet coefficients and the total variation of the reconstructed image. Based on SparseMRI, more efficient optimization methods have been proposed for this objective, such as TVCMRI [18], RecPF [31] and FCSA [10]. Variations on the wavelet penalty exploit geometric information of MRI, such as PBDW/PBDWS [22, 20] and GBRWT [14], for improved results. Dictionary learning methods [24, 11, 16, 25] have also been applied to CS-MRI reconstruction, as have non-local priors such as NLR [3], PANO [23] and BM3D-MRI [5]. These previous works can be considered sparsity-promoting regularized CS-MRI methods that are optimized using iterative algorithms. They also represent images using simple single-layer features that are either predefined (e.g., wavelets) or learned from the data (e.g., dictionary learning).
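For intuition, the ℓ₁ penalties in these sparsity-promoting formulations are typically handled by iterative algorithms through the soft-thresholding (shrinkage) operator, the proximal map of the ℓ₁ norm. A minimal NumPy sketch (the threshold value here is arbitrary):

```python
import numpy as np

def soft_threshold(coeffs, tau):
    """Proximal operator of tau * ||.||_1: shrink coefficients toward zero by tau."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)

c = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
shrunk = soft_threshold(c, 0.5)  # small coefficients are zeroed, large ones shrink by 0.5
```

Applied to wavelet or dictionary coefficients inside an iterative scheme, this operation is what drives the reconstruction toward a sparse representation.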
Previous work has also tried to exploit regularities in the reconstruction error in different ways. In the popular dynamic MRI reconstruction method k-t FOCUSS [33, 12], the original signal is decomposed into a predicted signal and a residual signal. The predicted signal is estimated by temporal averaging, while the highly sparse residual signal is given an ℓ₁-norm regularization. An iterative feature refinement strategy for CS-MRI was proposed in [28] to exploit the structural error produced in each iteration. In [32], the k-space measurements are divided into high- and low-frequency regions and reconstructed separately. In [21], the MR image is decomposed into a smooth layer and a detail layer, which are estimated using total variation and wavelet regularization, respectively. In [27], the low-frequency information is estimated using parallel imaging techniques. These methods each employ a fixed transform basis.
III Problem Formulation
Exploiting structural regularities in the reconstruction error of CS-MRI is a good way to compensate for imperfect modeling. Starting with the standard formulation of CS-MRI in Equation 1, we formulate our objective function as
(2)   \min_x \|F_u x - y\|_2^2 + \lambda \|x - \tilde{x}\|_2^2
where x̃ is an intermediate reconstruction of the MRI. We model this intermediate reconstruction as the sum of a “guidance” image x_g and the error image e of the reconstruction,
(3)   \tilde{x} = x_g + e
Substituting this into Equation 2, we obtain
(4)   \min_x \|F_u x - y\|_2^2 + \lambda \|x - (x_g + e)\|_2^2
The guidance image x_g is the MRI reconstructed using any chosen CS-MRI method; thus x_g can be formed using existing software prior to using our proposed method for the final reconstruction. The reconstruction error e = x − x_g is the difference between the ground truth x and the reconstruction x_g. Since we do not know this error at testing time, we use training data to model the error image with a neural network C(z; θ), where θ represents the network parameters and z is the input to the network. Thus, Equation 4 can be rewritten as
(5)   \min_x \|F_u x - y\|_2^2 + \lambda \|x - (x_g + C(z; \theta))\|_2^2
For a new MRI, after obtaining the guidance image x_g (using a pre-existing algorithm) and the mapping C(z; θ) (using a feed-forward neural network trained on data), the proposed framework produces the final output MRI by solving the least-squares problem of Equation 5.
IV Deep Error Correction Network (DECN)
Following the formulation of our CS-MRI framework above and in Figure 1, we turn to a more detailed discussion of each module of the proposed Deep Error Correction Network (DECN) framework and its optimization procedure.
IV-A Guide Module
With the guide module, we seek a reconstruction x_g that approximates the fully-sampled MRI using a standard “off-the-shelf” CS-MRI approach. We denote this as
(6)   x_g = \arg\min_x \|F_u x - y\|_2^2 + \rho\, \Phi(x)
where Φ and ρ are the regularization and its weight for the chosen algorithm.
We illustrate with reconstructions from three CS-MRI methods: TLMRI (transform learning MRI) [25], PANO (patch-based non-local operator) [23] and GBRWT (graph-based redundant wavelet transform) [14]. The PANO and GBRWT models achieve impressive reconstruction quality because they use a non-local prior and an adaptive graph-based wavelet transform, respectively, to exploit image structures. In TLMRI, the sparsifying transform learning and the reconstruction are performed simultaneously, in a more efficient way than DLMRI [24]. These three methods represent the state of the art among non-deep CS-MRI models. In Figure 2, we show the reconstruction errors for zero-filling (itself a potential reconstruction “algorithm”), TLMRI, PANO and GBRWT on a complex-valued brain MRI with Cartesian under-sampling. The error display ranges from 0 to 0.2 with normalized data. The parameter settings are elaborated in Section V.
We also consider the deep learning DC-CNN model [26] as the guide module, whose reconstruction error is also given in Figure 2. We observe that the zero-filled, TLMRI, PANO, GBRWT and DC-CNN models all suffer from structural reconstruction errors, while the DC-CNN model achieves the highest reconstruction quality with minimal errors because of its powerful model capacity. Another advantage of this CNN model is that, once the network is trained, testing is very fast compared with conventional sparsity-regularized CS-MRI models: no iterative optimization needs to be run at test time, since the operations are a simple feed-forward function of the input. We compare the reconstruction times of TLMRI, PANO, GBRWT and DC-CNN on the image in Figure 2 in Table I.
Table I — Reconstruction runtime for the image in Figure 2.
Method             | PANO   | TLMRI   | GBRWT   | DC-CNN
Runtime (seconds)  | 11.37  | 127.67  | 100.60  | 0.04
IV-B Error Reconstruction Module
Using the guidance image x_g, we can train a deep error correction module on the residual. To perform this task, we need access during training to pairs of the true, fully-sampled MRI x and its reconstruction x_g, found by manually under-sampling the k-space of the image according to a predefined mask and inverting. We then optimize the following objective function over the network parameters θ,
(7)   \theta^\star = \arg\min_\theta \textstyle\sum_i \left\| (x_i - x_{g,i}) - C([x_{zf,i},\, x_{g,i}];\, \theta) \right\|_2^2
where x_zf indicates the MRI reconstructed by zero-filling, and the input to the error correction module is the concatenation z = [x_zf, x_g] of the zero-filled MRI and the guidance MRI. The error-correcting network thus learns to map the concatenation of the zero-filled, compressively sensed MRI and the guidance image to the residual of the true MRI with respect to the corresponding off-the-shelf CS-MRI inversion. We next give the rationale for the concatenation operation.
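The construction of training pairs for Equation 7 can be sketched as follows, with random arrays standing in for real data; channel-wise stacking is one plausible way to realize the concatenation (for complex MRI one would stack real and imaginary parts, or magnitudes, per channel):

```python
import numpy as np

def make_training_pair(x_true, x_zf, x_guide):
    """Build one (input, target) pair for the error correction network C.

    Input : channel-wise concatenation [x_zf, x_guide], shape (H, W, 2).
    Target: residual of the guide reconstruction, x_true - x_guide.
    """
    net_input = np.stack([x_zf, x_guide], axis=-1)
    target = x_true - x_guide
    return net_input, target

rng = np.random.default_rng(1)
x_true = rng.random((64, 64))                            # fully-sampled ground truth (stand-in)
x_zf = x_true + 0.30 * rng.standard_normal((64, 64))     # heavily aliased zero-filled stand-in
x_guide = x_true + 0.05 * rng.standard_normal((64, 64))  # off-the-shelf reconstruction stand-in
inp, tgt = make_training_pair(x_true, x_zf, x_guide)
```

At test time, adding the network's predicted residual back to x_g recovers the corrected guide image.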
In CS-MRI inversions, the zero-filled MR image usually serves as the starting point of the iterative optimization. Although iterative de-aliasing can effectively remove artifacts and achieve much more pleasing visual quality than the zero-filled reconstruction, distortion and information loss are inevitable. To illustrate this phenomenon, we compare the pixel-wise reconstruction errors of the zero-filled reconstruction and the other reconstruction models on the MR image in Figure 2.
We take the difference between the absolute reconstruction error of the compared CS-MRI method and that of the zero-filled reconstruction, keeping only the non-negative values, which can be formulated as
(8)   D = \left( |x - \hat{x}| - |x - x_{zf}| \right)_+
where x̂ is the compared reconstruction and the operator (·)_+ sets negative values to zero, yielding the filtered difference map D. We show the corresponding filtered difference maps in Figure 3 in the range [0, 0.2]. A bright region indicates better accuracy of the zero-filled reconstruction there. We observe that the zero-filled reconstruction provides better accuracy in some regions, indicating that information loss occurs in the reconstruction.
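The filtered difference map reduces to a single clipped expression. In this toy NumPy sketch the compared reconstruction is deliberately made worse than zero-filling inside one block, which is exactly where the map lights up:

```python
import numpy as np

def filtered_difference(x_true, x_zf, x_rec):
    """Clipped error-difference map: positive where zero-filling is MORE accurate."""
    diff = np.abs(x_true - x_rec) - np.abs(x_true - x_zf)
    return np.maximum(diff, 0.0)

x_true = np.zeros((32, 32))
x_zf = x_true + 0.05        # uniform small zero-filled error
x_rec = x_true.copy()       # compared method: perfect ...
x_rec[8:16, 8:16] = 0.2     # ... except in one block, where it loses information
d = filtered_difference(x_true, x_zf, x_rec)
```

Only the block where the compared method loses information survives the clipping; everywhere else the map is zero.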
To alleviate this information loss in the guide module, we concatenate the zero-filled MR image and the guidance image as the input to the error correction network. In Section V, we further validate this choice with an ablation study.
We again note that the network is paired with a particular inversion algorithm, since each algorithm may have unique and consistent characteristics in the errors it produces. The network can be any deep learning architecture trained using standard methods.
IV-C Data Fidelity Module
After the error correction network is trained, for new under-sampled k-space data y for which the true x is unknown, we use the corresponding guidance image x_g and the approximated reconstruction error C(z; θ*) to optimize the data fidelity module by solving the following optimization problem
(9)   \hat{x} = \arg\min_x \|F_u x - y\|_2^2 + \lambda \|x - (x_g + C(z; \theta^\star))\|_2^2
The data fidelity module is utilized in our proposed DECN framework to correct the reconstruction by enforcing greater agreement at the sampled k-space locations. Using the properties of the fast Fourier transform (FFT), we can simplify the optimization by working in the Fourier domain using the common technique described in, e.g., [11]. The optimal values for x in k-space can be found point-wise. This yields the closed-form solution
(10)   (\mathcal{F}\hat{x})(k) = \frac{M(k)\, y(k) + \lambda\, (\mathcal{F}\bar{x})(k)}{M(k) + \lambda}, \qquad \bar{x} = x_g + C(z; \theta^\star),
where M is the binary under-sampling mask and \mathcal{F} denotes the full Fourier transform.
The regularization parameter λ is usually set very small in the noise-free setting; we found that a very small λ worked well in our low-noise experiments.
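A NumPy sketch of this point-wise k-space solution; the mask, λ and the corrected guide image x̄ = x_g + C(z; θ*) are illustrative stand-ins:

```python
import numpy as np

def data_fidelity_update(y, mask, x_bar, lam=1e-4):
    """Point-wise closed-form minimizer of Equation 9 in k-space.

    y     : measured k-space data (zero at unsampled locations)
    mask  : binary sampling mask
    x_bar : corrected guide image, x_g + C(z; theta)
    Sampled entries blend measurement and prediction (measurement-dominated
    for small lam); unsampled entries keep the prediction's k-space values.
    """
    k_bar = np.fft.fft2(x_bar, norm="ortho")
    k_hat = (mask * y + lam * k_bar) / (mask + lam)
    return np.fft.ifft2(k_hat, norm="ortho")

rng = np.random.default_rng(2)
mask = (rng.random((64, 64)) < 0.3).astype(float)
x_bar = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
y = mask * np.fft.fft2(rng.standard_normal((64, 64)), norm="ortho")
x_hat = data_fidelity_update(y, mask, x_bar)
```

For λ → 0 the sampled k-space entries converge to the measurements exactly, so the update enforces hard data consistency in the noise-free limit.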
The proposed DECN model can effectively exploit the reconstruction residues. In Section V, we further validate the network architecture via an ablation study on the error correction strategies.
V Experiments
In this section, we present experimental results using complex-valued MRI datasets. The T1-weighted MRI dataset is acquired from 40 volunteers, with 3800 MR images in total, on a Siemens 3.0T scanner with 12 coils using the fast low angle shot (FLASH) sequence with a 220 field of view; details can be found in [22]. The SENSE reconstruction is used to compose the gold-standard full k-space, which in turn emulates single-channel MRI. We randomly select 80 MR images as the training set and 20 MR images as the testing set. Informed consent was obtained from the imaging subjects in compliance with Institutional Review Board policy. The magnitude of each fully-sampled MR image is normalized to unity by dividing the image by its largest pixel magnitude.
Under-sampled k-space measurements are obtained by applying Cartesian and random sampling masks with random phase encodes. Different under-sampling ratios are adopted in the experiments.
V-A Network Architecture
Table II — PSNR (dB) / SSIM on the test data for Cartesian and random under-sampling at 20%, 30% and 40% sampling ratios (top), and the improvement of each DECN variant over its baseline (bottom).

Method       | Cart. 20%   | Cart. 30%   | Cart. 40%   | Rand. 20%   | Rand. 30%   | Rand. 40%
TLMRI        | 31.27/0.864 | 32.86/0.868 | 35.99/0.896 | 35.13/0.878 | 36.46/0.882 | 37.26/0.891
PANO         | 30.71/0.858 | 32.65/0.889 | 37.40/0.940 | 36.94/0.931 | 39.36/0.949 | 40.74/0.957
GBRWT        | 30.61/0.853 | 32.27/0.879 | 37.19/0.932 | 36.81/0.908 | 39.16/0.932 | 40.72/0.944
DC-CNN       | 32.58/0.885 | 34.67/0.905 | 39.52/0.955 | 38.54/0.937 | 40.91/0.953 | 42.47/0.961
TLMRI-DECN   | 32.77/0.876 | 34.41/0.891 | 38.62/0.944 | 37.60/0.930 | 39.54/0.944 | 40.72/0.949
PANO-DECN    | 32.57/0.864 | 34.43/0.891 | 39.27/0.953 | 38.51/0.940 | 40.88/0.956 | 42.42/0.963
GBRWT-DECN   | 32.58/0.869 | 34.41/0.891 | 39.07/0.950 | 38.48/0.940 | 40.79/0.955 | 42.36/0.963
DC-CNN-DECN  | 33.06/0.898 | 35.34/0.922 | 39.92/0.956 | 38.86/0.939 | 41.06/0.954 | 42.58/0.962

Improvement over baseline (ΔPSNR / ΔSSIM)
TLMRI        | 1.50/0.012  | 1.55/0.023  | 2.63/0.048  | 2.47/0.052  | 3.08/0.062  | 3.46/0.068
PANO         | 1.86/0.006  | 1.78/0.002  | 1.87/0.012  | 1.57/0.010  | 1.52/0.006  | 1.68/0.006
GBRWT        | 1.97/0.016  | 2.14/0.012  | 1.88/0.018  | 1.67/0.032  | 1.63/0.023  | 1.64/0.019
DC-CNN       | 0.48/0.013  | 0.67/0.017  | 0.40/0.001  | 0.32/0.002  | 0.15/0.001  | 0.11/0.001
For the deep guide module (i.e., learning x_g), we use the CNN architecture called deep cascade CNN (DC-CNN) [26], in which a non-adjustable data fidelity layer is also incorporated into the model. This guide module consists of four blocks, each formed by four consecutive convolutional layers with a shortcut, followed by a data fidelity layer. Each convolutional layer, except the last one within a block, has 64 feature maps. We use ReLU [19] as the activation function.
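The non-adjustable data fidelity layer mentioned above admits a very simple noise-free form: re-insert the measured k-space values into the current CNN output. A NumPy sketch (the FFT convention and arrays are illustrative):

```python
import numpy as np

def data_fidelity_layer(x_cnn, y, mask):
    """Hard data-consistency step: overwrite the CNN output's k-space with measurements.

    x_cnn : intermediate CNN reconstruction (image domain)
    y     : measured k-space data (zero at unsampled locations)
    mask  : binary sampling mask
    """
    k = np.fft.fft2(x_cnn, norm="ortho")
    k_dc = np.where(mask.astype(bool), y, k)  # keep y where sampled, CNN prediction elsewhere
    return np.fft.ifft2(k_dc, norm="ortho")

rng = np.random.default_rng(3)
mask = (rng.random((64, 64)) < 0.3).astype(float)
y = mask * np.fft.fft2(rng.standard_normal((64, 64)), norm="ortho")
x_cnn = rng.standard_normal((64, 64))
x_dc = data_fidelity_layer(x_cnn, y, mask)
```

Because this step has no learnable parameters, it can be inserted after every CNN block without increasing model size.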
For the error correction module (i.e., learning C(z; θ)), we adopt the network architecture shown in Figure 1. There are 18 convolutional layers with a skip connection, as proposed in [8, 9], to alleviate the vanishing-gradient problem. We again adopt ReLU as the activation function, except for the last layer, where the identity function is used to allow negative values. All convolution filters share the same size and stride.
V-B Experimental Setup
We train and test the two deep models using TensorFlow [1] in the Python environment on an NVIDIA GeForce GTX 1080 with 8 GB of GPU memory. Padding is applied to keep the feature maps the same size. We use the Xavier method [7] to initialize the network parameters, and we apply ADAM [13] with momentum, weight decay regularization and mini-batch training. We report performance after training the DC-CNN guide module and the error correction module.

In the guidance module, we implement the state-of-the-art CS-MRI models with the following parameter settings. In TLMRI [25], the DCT matrix is used as the initial transform operator; the data fidelity parameter, patch size, number of training signals, sparsity fraction, weight on the negative log-determinant + Frobenius norm terms, patch overlap stride and number of iterations follow the advice of the authors [25]. In PANO [23], we use the implementation with parallel computation provided by [23]; the zero-filled MR image serves as the initial reference image, and the non-local operation is applied twice to yield the MRI reconstruction. In GBRWT [14], the Daubechies redundant wavelet sparsity is used as the regularization to obtain the reference image, and the graph training follows [14].
V-C Ablation Study
To validate the architecture of the proposed DECN model, we conduct an ablation study comparing the DECN framework with the baseline network architectures in Figure 4. We refer to the model in Figure 4(a) as DECN with no input concatenation and no error correction (DECN-NIC-NEC): given the guide module, a subsequently cascaded CNN module learns the mapping from the pre-reconstructed MR image to the fully-sampled MR image. Likewise, we name the models in Figure 4(b) DECN-IC-NEC and Figure 4(c) DECN-NIC-EC. By comparing the DECN-NIC-NEC framework with the DECN-IC-NEC framework, we evaluate the benefit brought by concatenating the zero-filled MR image and the corresponding guide MR image as the input, compensating for the information loss in the guide module; Figure 3 illustrates that information from the zero-filled MR images and the guide images can be shared. By comparing the DECN-NIC-NEC framework with the DECN-NIC-EC framework, we evaluate how the error correction strategy improves the reconstruction accuracy compared with a simple cascade.
We conduct the experiments using PANO with the Cartesian under-sampling mask shown in Figure 6 as the guide module. We give the averaged PSNR (peak signal-to-noise ratio) results in Figure 5. We observe that PANO-DECN-IC-NEC and PANO-DECN-NIC-EC both outperform PANO-DECN-NIC-NEC by similar margins of about 0.2 dB in PSNR, while the proposed PANO-DECN model, with both input concatenation and error correction, outperforms PANO-DECN-NIC-NEC by about 0.5 dB. The ablation study shows that the input concatenation and error correction strategies effectively improve performance in the DECN framework.
V-D Results
We evaluate the proposed DECN framework using PSNR and SSIM (structural similarity index) [30] as quantitative image quality measures. We give the quantitative reconstruction results on all the test data for different under-sampling patterns and ratios in Table II. We show the Cartesian under-sampling mask in Figure 6(b) and the random under-sampling mask in Figure 7(b). We observe that DECN improves all the off-the-shelf CS-MRI inversion methods on all the under-sampling patterns. Since the random mask enjoys more incoherence than the Cartesian mask at the same under-sampling ratio, CS-MRI achieves better reconstruction quality on the random masks. We also observe that the plain DC-CNN model already achieves good reconstruction accuracy, leaving less structural error for the error correction module and hence a limited performance improvement of about 0.1 dB on the random masks, while for the other CS-MRI inversions on the various sampling patterns, the improvements are at least 1.5 dB and up to 3.5 dB.
In Figure 6, we show reconstruction results and the corresponding error images for an example from the test data with the 1D under-sampling mask. With local magnification of the red box, we observe that, by learning the error correction module, fine details, especially the low-contrast structures, are better preserved, leading to a better reconstruction.
In Figure 7, we also compare the MR images produced by TLMRI, PANO and GBRWT with their DECN counterparts on the 2D under-sampling mask. The subjective comparison between DC-CNN and DC-CNN-DECN is not included because of the limited improvement. The results are consistent with our observations in the Cartesian under-sampling case.
VI Conclusion
We have proposed a deep error correction framework for the CS-MRI inversion problem. Using any off-the-shelf CS-MRI algorithm to construct a template, or “guide,” for the final reconstruction, we use a deep neural network that learns to correct the errors that typically appear in the chosen algorithm. Experimental results show that the proposed model consistently improves a variety of CS-MRI inversion techniques.
Acknowledgment
The work is supported in part by the National Natural Science Foundation of China under Grants 61571382, 81671766, 61571005, 81671674, U1605252, 61671309 and 81301278, the Guangdong Natural Science Foundation under Grant 2015A030313007, the Fundamental Research Funds for the Central Universities under Grants 20720160075 and 20720150169, the CCF-Tencent research fund, the Natural Science Foundation of Fujian Province of China (No. 2017J01126), and the Science and Technology funds from the Fujian Provincial Administration of Surveying, Mapping, and Geoinformation.
References
 [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorflow: Largescale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
 [2] E. J. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, Feb 2006.
 [3] W. Dong, G. Shi, X. Li, Y. Ma, and F. Huang. Compressive sensing via nonlocal lowrank regularization. IEEE Transactions on Image Processing, 23(8):3618–3632, Aug 2014.
 [4] D. L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, April 2006.
 [5] E. M. Eksioglu. Decoupled algorithm for MRI reconstruction using nonlocal block matching model: BM3D-MRI. Journal of Mathematical Imaging and Vision, 56(3):430–440, Nov 2016.
 [6] J. A. Fessler. Medical image reconstruction: a brief overview of past milestones and future directions. arXiv preprint arXiv:1707.05927, 2017.

 [7] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249–256, Chia Laguna Resort, Sardinia, Italy, May 2010. PMLR.
 [8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, June 2016.
 [9] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
 [10] J. Huang, S. Zhang, and D. Metaxas. Efficient MR image reconstruction for compressed MR imaging. Medical Image Analysis, 15(5):670 – 679, 2011. Special Issue on the 2010 Conference on Medical Image Computing and ComputerAssisted Intervention.
 [11] Y. Huang, J. Paisley, Q. Lin, X. Ding, X. Fu, and X. P. Zhang. Bayesian nonparametric dictionary learning for compressed sensing MRI. IEEE Transactions on Image Processing, 23(12):5007–5019, Dec 2014.
 [12] H. Jung, K. Sung, K. S. Nayak, E. Y. Kim, and J. C. Ye. k-t FOCUSS: A general compressed sensing framework for high resolution dynamic MRI. Magnetic Resonance in Medicine, 61(1):103–116, 2009.
 [13] D. Kingma and J. Ba. ADAM: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 [14] Z. Lai, X. Qu, Y. Liu, D. Guo, J. Ye, Z. Zhan, and Z. Chen. Image reconstruction of compressed sensing MRI using graphbased redundant wavelet transform. Medical Image Analysis, 27(Supplement C):93 – 104, 2016. Discrete Graphical Models in Biomedical Image Analysis.
 [15] D. Lee, J. Yoo, and J. C. Ye. Deep residual learning for compressed sensing MRI. In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pages 15–18, April 2017.
 [16] Q. Liu, S. Wang, L. Ying, X. Peng, Y. Zhu, and D. Liang. Adaptive dictionary learning in sparse gradient domain for image recovery. IEEE Transactions on Image Processing, 22(12):4652–4663, Dec 2013.
 [17] M. Lustig, D. Donoho, and J. M. Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6):1182–1195, 2007.
 [18] S. Ma, W. Yin, Y. Zhang, and A. Chakraborty. An efficient algorithm for compressed MR imaging using total variation and wavelets. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, June 2008.
 [19] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML10), June 2124, 2010, Haifa, Israel, pages 807–814, 2010.
 [20] B. Ning, X. Qu, D. Guo, C. Hu, and Z. Chen. Magnetic resonance image reconstruction using trained geometric directions in 2D redundant wavelets domain and nonconvex optimization. Magnetic Resonance Imaging, 31(9):1611 – 1622, 2013.
 [21] S. Park and J. Park. Compressed sensing MRI exploiting complementary dual decomposition. Medical Image Analysis, 18(3):472 – 486, 2014.
 [22] X. Qu, D. Guo, B. Ning, Y. Hou, Y. Lin, S. Cai, and Z. Chen. Undersampled MRI reconstruction with patchbased directional wavelets. Magnetic Resonance Imaging, 30(7):964 – 977, 2012.
 [23] X. Qu, Y. Hou, F. Lam, D. Guo, J. Zhong, and Z. Chen. Magnetic resonance image reconstruction from undersampled measurements using a patchbased nonlocal operator. Medical Image Analysis, 18(6):843 – 856, 2014. Sparse Methods for Signal Reconstruction and Medical Image Analysis.
 [24] S. Ravishankar and Y. Bresler. MR image reconstruction from highly undersampled kspace data by dictionary learning. IEEE Transactions on Medical Imaging, 30(5):1028–1041, May 2011.
 [25] S. Ravishankar and Y. Bresler. Efficient blind compressed sensing using sparsifying transforms with convergence guarantees and application to magnetic resonance imaging. SIAM Journal on Imaging Sciences, 8(4):2519–2557, 2015.
 [26] J. Schlemper, J. Caballero, J. V. Hajnal, A. Price, and D. Rueckert. A deep cascade of convolutional neural networks for MR image reconstruction. In International Conference on Information Processing in Medical Imaging, pages 647–658. Springer, 2017.
 [27] K. Sung and B. A. Hargreaves. Highfrequency subband compressed sensing MRI using quadruplet sampling. Magnetic Resonance in Medicine, 70(5):1306–1318, 2013.
 [28] S. Wang, J. Liu, Q. Liu, L. Ying, X. Liu, H. Zheng, and D. Liang. Iterative feature refinement for accurate undersampled MR image reconstruction. Physics in Medicine & Biology, 61(9):3291, 2016.
 [29] S. Wang, Z. Su, L. Ying, X. Peng, S. Zhu, F. Liang, D. Feng, and D. Liang. Accelerating magnetic resonance imaging via deep learning. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), pages 514–517, April 2016.
 [30] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, April 2004.
 [31] J. Yang, Y. Zhang, and W. Yin. A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data. IEEE Journal of Selected Topics in Signal Processing, 4(2):288–297, April 2010.
 [32] Y. Yang, F. Liu, W. Xu, and S. Crozier. Compressed sensing MRI via twostage reconstruction. IEEE Transactions on Biomedical Engineering, 62(1):110–118, Jan 2015.
 [33] J. C. Ye, S. Tak, Y. Han, and H. W. Park. Projection reconstruction MR imaging using FOCUSS. Magnetic Resonance in Medicine, 57(4):764–775, 2007.