Long scan time is a primary disadvantage of Magnetic Resonance Imaging (MRI). Parallel imaging (PI) techniques have become popular strategies for reducing MRI scan time. Two conventional PI reconstruction algorithms are SENSE [pruessmann1999sense] and generalized autocalibrating partially parallel acquisitions (GRAPPA) [griswold2002generalized]. Compressed sensing (CS)-based methods [lustig2007sparse, liang2009accelerating, ramani2011parallel, yazdanpanah2017compressed, yazdanpanah2017compressed2], on the other hand, exploit the intrinsic sparsity of the image in a transform domain and have enabled accelerated imaging in some settings. These techniques can be formulated similarly to a regularized SENSE reconstruction but use different regularization strategies to increase data acquisition speed while producing better reconstructions. Recently, however, data-driven methods based on deep learning have yielded promising improvements in image reconstruction algorithms.
Two primary deep neural network-based MRI reconstruction frameworks are image-domain-based [wang2016accelerating, schlemper2018deep, hammernik2018learning] and k-space-based [cheng2018deepspirit] frameworks. In [wang2016accelerating], the authors used a convolutional neural network (CNN) either as an initialization or as a regularization term for constrained reconstruction. In [schlemper2018deep], a cascade of CNNs is presented and trained, and the reconstruction process is treated as a de-aliasing problem in the image domain. In [hammernik2018learning], a variational network reconstruction method is presented and trained for accelerated multi-coil MRI data. In [cheng2018deepspirit], a deep network is applied entirely in the k-space domain, exploiting known k-space properties in different ways to compensate for missing k-space data. The method presented in [zhu2018image], in contrast, combined fully connected layers with a convolutional autoencoder to learn a direct mapping from the k-space domain to the image domain, and its network was trained on data modulated with a synthesized phase. In all of these approaches, the deep network must be retrained from scratch on massive new training datasets whenever the acquisition configuration changes, in order to reconstruct efficiently.
The inability to generalize to new datasets is the main disadvantage of this type of method and makes it unsuitable in practice, especially since a wide range of MR systems and protocols exists. Deep learning-based reconstruction methods are sensitive to any deviation between training and test datasets; in particular, an SNR deviation between training and test data leads to a considerable reduction in image quality [knoll2019assessment]. Moreover, any change in the k-space sampling pattern results in an immediate failure of any learning-based reconstruction method. Since differing acquisition parameters are very common across institutions' MRI systems, learning-based approaches are not considered practical solutions for MRI reconstruction.
Here, we propose a new generalized parallel imaging method based on deep neural networks that requires no training datasets. Unlike most deep learning-based MRI reconstruction methods, our method operates on real-world acquisitions in the complex data format, not on simulated data, real-valued data, or data with an added simulated phase. We categorize our method among the unsupervised energy-based methods [golts2018deep, ulyanov2017deep]. Using our proposed method, we evaluate reconstruction performance against the clinically used GRAPPA reconstruction method and the recently published deep learning-based variational network (VN) method.
We develop the deep neural network-based method, with no training data involved, using an encoder-decoder U-net [ronneberger2015u] convolutional network architecture with skip connections for parallel MRI reconstruction (Figure 1). The number of filters is set to 128 and the filter kernel size to 3 for both encoder and decoder layers. Only the undersampled multi-coil k-space raw data are needed for reconstruction. We initialize the U-net parameters randomly and use the zero-filled reconstruction as the network input. Since deep network frameworks operate on real-valued parameters, inputs, and outputs, the complex k-space data are divided into real and imaginary parts and treated as a two-channel input and output.
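The zero-filled network input described above can be sketched as follows, assuming NumPy; the function name and the toy shapes are ours, not from the paper:

```python
import numpy as np

def zero_filled_input(kspace, mask):
    """Two-channel network input from undersampled multi-coil k-space.

    kspace : complex array (coils, ny, nx), the acquired k-space
    mask   : binary array (ny, nx) marking sampled k-space locations
    Returns a real array (2 * coils, ny, nx): real parts then imaginary parts.
    """
    imgs = np.fft.ifft2(mask * kspace, axes=(-2, -1))  # zero-filled coil images
    return np.concatenate([imgs.real, imgs.imag], axis=0)

# Toy shapes for illustration (the paper's data are 3D with a 32-channel coil).
rng = np.random.default_rng(0)
kspace = rng.standard_normal((4, 32, 32)) + 1j * rng.standard_normal((4, 32, 32))
mask = np.zeros((32, 32)); mask[::2, :] = 1            # 2x undersampling
net_input = zero_filled_input(kspace, mask)
print(net_input.shape)  # (8, 32, 32)
```

Stacking real and imaginary parts as separate channels is the standard workaround for frameworks that do not support complex-valued weights.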
Two deep loss functions are proposed based on the MRI imaging model. Given the deep loss function, we optimize it over the network parameters at run-time, per subject. The proposed non-regularized (Eq. 1) and regularized (Eq. 2) loss functions are:

$$\min_{\theta} \sum_{c=1}^{N_c} \left\| P F S_c\, x(\theta) - y_c \right\|_2^2, \qquad (1)$$

$$\min_{\theta} \sum_{c=1}^{N_c} \left\| P F S_c\, x(\theta) - y_c \right\|_2^2 + \lambda R\big(x(\theta)\big), \qquad (2)$$

where $y_c$ is the undersampled k-space data of coil $c$, $\theta$ represents the network parameters, $x(\theta)$ is the network output, i.e., the MR image to be reconstructed, $P$ is a mask representing the k-space undersampling pattern, $F$ is the Fourier transform operator, $S_c$ represents the coil sensitivity map of coil $c$, and $N_c$ is the total number of coils; in Eq. 2, $R(\cdot)$ is a regularization term with weight $\lambda$. The sensitivity maps were computed with the ESPIRiT method [uecker2014espirit] applied only to the calibration data.
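The non-regularized data-consistency loss of Eq. 1 can be sketched directly from the forward model, assuming NumPy and 2D Cartesian sampling; the function name is ours:

```python
import numpy as np

def data_consistency_loss(x, sens, mask, y):
    """Sum-of-squares data-consistency loss: sum_c ||P F (S_c x) - y_c||^2.

    x    : complex image, shape (ny, nx)
    sens : coil sensitivity maps S_c, shape (coils, ny, nx)
    mask : binary undersampling pattern P, shape (ny, nx)
    y    : acquired undersampled k-space, shape (coils, ny, nx)
    """
    pred = mask * np.fft.fft2(sens * x, axes=(-2, -1))  # P F S_c x, per coil
    return np.sum(np.abs(pred - y) ** 2)

# Sanity check: the loss vanishes when x is the true image and y is consistent.
rng = np.random.default_rng(0)
x_true = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
sens = rng.standard_normal((4, 16, 16)) + 1j * rng.standard_normal((4, 16, 16))
mask = np.zeros((16, 16)); mask[:, ::2] = 1
y = mask * np.fft.fft2(sens * x_true, axes=(-2, -1))
print(data_consistency_loss(x_true, sens, mask, y))  # 0.0
```

The loss touches only the sampled k-space locations (through the mask P), which is what allows the network to be fitted to a single undersampled acquisition with no reference image.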
The reconstruction is an iterative process that minimizes the loss function over the network parameters at run-time, per subject. The network parameters are updated at every iteration, not through a training step on a training dataset: they are updated from the single multi-coil undersampled k-space data by optimizing the loss function over the network parameters. Loss minimization was performed with the ADAM optimizer [kingma2014adam] with an update rate of 0.001. The output of the network at each iteration is the reconstructed image at that step, and this output is refined iteratively as the loss is minimized.
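The run-time loop above can be illustrated with a deliberately simplified sketch: instead of optimizing U-net parameters with ADAM as in the paper, we run plain gradient descent directly on the image pixels (NumPy only, so the sketch stays self-contained). The loop structure is the same: each iteration updates the free variables to reduce the data-consistency loss, and the current estimate is the reconstruction at that step.

```python
import numpy as np

def reconstruct(y, sens, mask, iters=200, lr=0.5):
    """Simplified run-time reconstruction loop (gradient descent on the image).

    Minimizes sum_c ||P F (S_c x) - y_c||^2 starting from the zero-filled
    coil-combined image. An orthonormal FFT keeps the step size well scaled.
    The paper instead optimizes U-net parameters with ADAM (rate 0.001).
    """
    F  = lambda im: np.fft.fft2(im, axes=(-2, -1), norm="ortho")
    Fh = lambda ks: np.fft.ifft2(ks, axes=(-2, -1), norm="ortho")
    x = np.sum(np.conj(sens) * Fh(mask * y), axis=0)     # zero-filled start
    for _ in range(iters):
        resid = mask * F(sens * x) - y                   # P F S_c x - y_c
        grad = 2 * np.sum(np.conj(sens) * Fh(mask * resid), axis=0)
        x = x - lr * grad                                # one update step
    return x

# Toy problem with normalized sensitivity maps (as ESPIRiT provides).
rng = np.random.default_rng(1)
sens = rng.standard_normal((4, 16, 16)) + 1j * rng.standard_normal((4, 16, 16))
sens /= np.sqrt(np.sum(np.abs(sens) ** 2, axis=0))
x_true = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
mask = np.zeros((16, 16)); mask[:, ::2] = 1; mask[:, 6:10] = 1  # calibration band
y = mask * np.fft.fft2(sens * x_true, axes=(-2, -1), norm="ortho")
x_hat = reconstruct(y, sens, mask)
```

In the actual method the image is reparametrized as the U-net output, so the gradient flows through the network; that reparametrization is what provides the implicit regularization, which this pixel-wise sketch omits.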
To test our method, we reconstructed a 3D MPRAGE dataset acquired with a 32-channel head coil using NLDpMRI and compared the results to the standard GRAPPA reconstruction. We used fully sampled MPRAGE data and retrospectively undersampled it in both phase-encoding dimensions with an acceleration factor of 2x2. The full k-space data reconstructed with the adaptive combine method [walsh2000adaptive] served as our gold standard for comparison. In addition, to demonstrate the generalization capability of our method, we used the entire set of test datasets (knee datasets) of the recently published variational network reconstruction method [hammernik2018learning] and compared our results to theirs in terms of the structural similarity index (SSIM) and normalized root-mean-square error (NRMSE). The dataset includes 50 test cases from five different sequences: coronal proton-density (PD), coronal fat-saturated (FS) PD, axial FS T2, sagittal FS T2, and sagittal PD. For more details on the sequence parameters, see [hammernik2018learning]. The datasets are fully sampled and were retrospectively undersampled with an acceleration factor of 4.
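A common definition of the NRMSE metric used in this comparison can be written as below; note this normalizes by the l2-norm of the reference, whereas [hammernik2018learning] may normalize differently (e.g., by the reference's intensity range):

```python
import numpy as np

def nrmse(ref, recon):
    """NRMSE between a reference image and a reconstruction.

    Magnitude images are compared, and the error is normalized by the
    l2-norm of the reference (one common convention among several).
    """
    ref, recon = np.abs(ref), np.abs(recon)
    return np.linalg.norm(recon - ref) / np.linalg.norm(ref)

print(nrmse(np.ones((8, 8)), np.full((8, 8), 1.1)))  # ≈ 0.1
```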
The purpose of this experiment is to demonstrate that NLDpMRI can easily handle and reconstruct all five test datasets, covering five different sequences with different parameters, without any training datasets or training steps involved. For the VN method, in contrast, five different networks must be trained individually beforehand, one trained network per sequence, using five massive training datasets (one per sequence), before the method can reconstruct the test datasets.
Figure 2 shows the results from our network using the non-regularized loss function and compares them to the ground truth, the zero-filled reconstruction, and the GRAPPA reconstruction. Figure 3 compares the regularized and non-regularized NLDpMRI reconstructions. We observed that NLDpMRI reconstructs artifact-free images of better quality than the GRAPPA reconstruction, which shows noise amplification relative to the NLDpMRI result. In addition, adding the regularization term to the loss function slightly improves reconstruction accuracy (PSNR of 50.9 for regularized NLDpMRI versus 50.2 for non-regularized NLDpMRI). The proposed method uses sensitivities estimated from exactly the same calibration data as the GRAPPA method (a 24 x 24 x 24 Cartesian grid).
Figure 4 shows the impact of an acceleration factor of 4 on the VN and NLDpMRI methods for coronal PD-weighted data. The NLDpMRI result in Figure 4 outperforms the learned VN result, yielding a sharper, higher-quality reconstruction. A similar observation can be made for the coronal FS PD-weighted data in Figure 5. The same sensitivities, estimated from exactly the same calibration data, were used for both reconstruction methods in this experiment. Table 1 provides the quantitative evaluation for all five knee datasets using the NLDpMRI and VN reconstructions. The NLDpMRI reconstruction shows superior performance in terms of SSIM and NRMSE for four of the five datasets; the VN method performs slightly better than our method on the sagittal PD dataset. The fact that no training datasets were used for any of the NLDpMRI reconstructions, across experiments covering different sequences with different parameters, demonstrates the capability of NLDpMRI as a generalized parallel imaging method.
We propose a generalized method for solving the parallel MRI reconstruction problem using deep neural networks without any training data involved. The proposed approach eliminates the need to collect massive datasets for training, for any form of normalization, and for transfer learning techniques to bring the data into the same domain as a trained network. Experimental results on real MRI acquisitions show that our proposed method outperforms the clinical gold-standard GRAPPA method and the deep learning-based VN method.
[Table 1 fragment: only the row labels Coronal FS PD, Sagittal FS T2, and Axial FS T2 survived extraction; the SSIM/NRMSE values were not recovered.]