Data consistency networks for (calibration-less) accelerated parallel MR image reconstruction

We present simple reconstruction networks for multi-coil data by extending the deep cascade of CNNs and exploiting the data consistency layer. In particular, we propose two variants, where one is inspired by POCSENSE and the other is calibration-less. We show that the proposed approaches are competitive relative to the state of the art both quantitatively and qualitatively.




1 Introduction

Recently, several deep learning approaches have been proposed for accelerated parallel MR image reconstruction [4, 6, 5, 2, 1, 11]. In this work, we present simple reconstruction networks for multi-coil data by extending the deep cascade of CNNs [9]. In particular, we propose two approaches, where one is inspired by POCSENSE [8] and the other is calibration-less. The methods are evaluated using a public knee dataset containing 100 subjects [4]. We show that the proposed approaches are competitive relative to the state of the art both quantitatively and qualitatively.

Presented at ISMRM 27th Annual Meeting & Exhibition (Abstract #4663).

2 Methods

Figure 3: The proposed network architectures. (left) The D-POCSENSE architecture. The input to the CNN is a single, sensitivity-weighted recombined image. At each iteration, the CNN updates an estimate of the combined image. The sub-network takes a single recombined image as input and produces the denoised result as output. Data consistency is performed by mapping the intermediate output to the raw k-space by applying the encoding matrix. The updated image is recombined by the adjoint of the encoding matrix. (right) The proposed DC-CNN architecture. The network reconstructs all coil data jointly. The data consistency operation is applied separately for each coil.

The proposed networks are direct extensions of the deep cascade of CNNs (DC-CNN), in which denoising sub-networks and data consistency layers are interleaved. For parallel imaging, however, the data consistency layer can be extended in two ways, yielding two network variants. The first approach requires sensitivity estimates, which can be computed using algorithms such as ESPIRiT[10]. The input to the CNN is a single, sensitivity-weighted recombined image. At each iteration, the CNN updates an estimate of the combined image. For the data consistency layer, the forward operation is performed, then the acquired samples are filled coil-wise as:

$$\hat{k}_i[j] = \begin{cases} k_{\mathrm{cnn},i}[j] & j \notin \Omega, \\ \dfrac{k_{\mathrm{cnn},i}[j] + \lambda\, k_{0,i}[j]}{1+\lambda} & j \in \Omega, \end{cases} \qquad (1)$$

where $k_{\mathrm{cnn},i}$ and $k_{0,i}$ are the $i$-th coil-weighted image of the intermediate CNN reconstruction in k-space and the original k-space data respectively, and $\Omega$ is the set of acquired k-space locations. The result is mapped back to the image domain via the adjoint of the encoding matrix. As the operation in the data consistency layer is analogous to the projection step of POCSENSE, the proposed network is termed D(eep)-POCSENSE. The balancing term $\lambda$ depends on the input noise level; however, it is made trainable as a network parameter. The network is trained using the $\ell_2$ loss:

$$\mathcal{L} = \sum_{(\mathbf{x}_0,\, \mathbf{x}_{\mathrm{gt}})} \big\| f_{\mathrm{cnn}}(\mathbf{x}_0) - \mathbf{x}_{\mathrm{gt}} \big\|_2^2,$$

where $\mathbf{x}_0$ and $\mathbf{x}_{\mathrm{gt}}$ are the initial recombined image and the ground truth respectively, and $f_{\mathrm{cnn}}$ denotes the network output.
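The D-POCSENSE data-consistency step can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the function name, the boolean `mask` encoding of the sampled set, and the orthonormal FFT convention are our assumptions, and in the paper the balancing term is a trainable network parameter rather than a fixed argument.

```python
import numpy as np

def data_consistency(x, smaps, mask, k0, lam):
    """One D-POCSENSE-style data-consistency step (illustrative sketch).

    x     : (H, W) complex combined image estimate from the CNN
    smaps : (C, H, W) complex coil sensitivity maps
    mask  : (H, W) boolean sampling mask (True where k-space was acquired)
    k0    : (C, H, W) acquired multi-coil k-space (zeros where unsampled)
    lam   : balancing weight (trainable in the paper)
    """
    # Forward encoding: sensitivity-weight the image, then coil-wise FFT.
    k = np.fft.fft2(smaps * x, norm="ortho")
    # Fill acquired samples coil-wise; lam blends CNN and measured data.
    k_dc = np.where(mask, (k + lam * k0) / (1.0 + lam), k)
    # Adjoint: back to image domain, recombine with conjugate sensitivities.
    coil_imgs = np.fft.ifft2(k_dc, norm="ortho")
    return np.sum(np.conj(smaps) * coil_imgs, axis=0)
```

With normalised maps ($\sum_i |S_i|^2 = 1$) and no acquired samples, the step reduces to the identity; with all samples acquired and a large weight, it reduces to the sensitivity-weighted recombination of the measured data.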

The second approach reconstructs the multi-coil data directly, without performing the recombination: the coil images are stacked along the channel axis and fed into each sub-network. For the data consistency layer, each coil image is Fourier transformed and Eq. 1 is applied individually. As this does not require a sensitivity estimate, the proposed approach is calibration-less. The proposed network, DC-CNN, is trained with the following weighted-$\ell_2$ loss:


$$\mathcal{L} = \sum_i \big\| S_i \left( \hat{\mathbf{x}}_i - \mathbf{x}_{\mathrm{gt},i} \right) \big\|_2^2,$$

where the subscript $i$ indexes the $i$-th coil data and $S_i$ is the sensitivity map. The proposed architectures are shown in Fig. 3.
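The calibration-less variant amounts to applying the same fill rule to each coil independently, with sensitivity maps needed only to weight the training loss. A minimal NumPy sketch, with hypothetical function names and an assumed boolean mask encoding of the sampled locations:

```python
import numpy as np

def coilwise_data_consistency(coil_imgs, mask, k0, lam):
    """Calibration-less data-consistency step: the coil-wise fill rule
    applied to each coil independently, with no sensitivity maps involved.

    coil_imgs : (C, H, W) complex per-coil image estimates from the CNN
    mask      : (H, W) boolean sampling mask (True where acquired)
    k0        : (C, H, W) acquired multi-coil k-space
    lam       : balancing weight between CNN prediction and measured data
    """
    k = np.fft.fft2(coil_imgs, norm="ortho")            # per-coil forward FFT
    k_dc = np.where(mask, (k + lam * k0) / (1.0 + lam), k)
    return np.fft.ifft2(k_dc, norm="ortho")             # back to image domain

def weighted_l2_loss(pred_coils, gt_coils, smaps):
    """Weighted l2 training loss: each coil residual is weighted by its
    sensitivity map (maps are only needed at training time, not inference)."""
    return float(np.sum(np.abs(smaps * (pred_coils - gt_coils)) ** 2))
```

Note the design difference from D-POCSENSE: no recombination happens inside the network, so the sub-networks must learn to denoise each coil image consistently.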

3 Evaluation

We used the public knee dataset provided by Hammernik et al.[4]. The dataset contains 100 patients, with 20 subjects per acquisition protocol. For each approach, one network was trained to reconstruct all acquisition protocols simultaneously. We used 15 subjects for training and 5 for testing per protocol. The proposed approaches were compared with $\ell_1$-SPIRiT[7] and the Variational Network (VN)[4]. We used Cartesian undersampling with acceleration factors (AF) 4 and 6, sampling the 24 central lines, which were also used as the calibration region for estimating the sensitivity maps. In this work, D-POCSENSE and DC-CNN were trained following the cascade configuration of [9], with convolution kernels using dilation factor 2. The networks were trained using Adam for 200 epochs with batch size 4. The default parameters were used for both $\ell_1$-SPIRiT and VN. We used PSNR and SSIM as evaluation metrics.
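For concreteness, PSNR can be computed as below. This is a sketch under stated assumptions: the paper does not specify its exact PSNR convention, so taking the peak as the maximum magnitude of the reference image is our choice.

```python
import numpy as np

def psnr(ref, rec):
    """PSNR in dB; peak is assumed to be the max magnitude of the reference."""
    mse = np.mean(np.abs(ref - rec) ** 2)
    peak = np.max(np.abs(ref))
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a reconstruction offset from a unit-magnitude reference by 0.1 everywhere yields a PSNR of 20 dB.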

| Modality | Model | PSNR (AF=4) | SSIM (AF=4) | PSNR (AF=6) | SSIM (AF=6) |
|---|---|---|---|---|---|
| Axial FS | ℓ1-SPIRiT | 33.55 ± 12.62 | 0.87 ± 0.26 | 26.61 ± 12.83 | 0.80 ± 0.30 |
| Axial FS | D-POCSENSE | 36.03 ± 10.26 | 0.90 ± 0.24 | 31.83 ± 8.79 | 0.87 ± 0.25 |
| Axial FS | DC-CNN | 35.45 ± 8.63 | 0.91 ± 0.19 | 30.14 ± 6.85 | 0.87 ± 0.23 |
| Axial FS | VN | 36.49 ± 10.41 | 0.90 ± 0.23 | 32.39 ± 9.94 | 0.86 ± 0.29 |
| Coronal PD | ℓ1-SPIRiT | 36.96 ± 1.28 | 0.98 ± 0.00 | 31.84 ± 1.32 | 0.95 ± 0.01 |
| Coronal PD | D-POCSENSE | 36.94 ± 1.24 | 0.98 ± 0.00 | 32.27 ± 0.83 | 0.95 ± 0.00 |
| Coronal PD | DC-CNN | 35.14 ± 1.26 | 0.97 ± 0.01 | 29.88 ± 1.72 | 0.94 ± 0.01 |
| Coronal PD | VN | 37.07 ± 1.15 | 0.98 ± 0.00 | 33.17 ± 1.06 | 0.95 ± 0.01 |
| Coronal PDFS | ℓ1-SPIRiT | 36.98 ± 7.85 | 0.97 ± 0.06 | 33.32 ± 6.84 | 0.95 ± 0.07 |
| Coronal PDFS | D-POCSENSE | 39.02 ± 3.37 | 0.98 ± 0.02 | 34.91 ± 2.70 | 0.97 ± 0.04 |
| Coronal PDFS | DC-CNN | 38.37 ± 3.23 | 0.98 ± 0.03 | 33.61 ± 2.74 | 0.96 ± 0.05 |
| Coronal PDFS | VN | 39.39 ± 3.32 | 0.98 ± 0.02 | 35.71 ± 2.80 | 0.97 ± 0.02 |
| Sagittal PD | ℓ1-SPIRiT | 36.76 ± 0.47 | 0.98 ± 0.00 | 31.43 ± 0.84 | 0.94 ± 0.01 |
| Sagittal PD | D-POCSENSE | 37.09 ± 0.54 | 0.98 ± 0.00 | 31.94 ± 0.45 | 0.94 ± 0.00 |
| Sagittal PD | DC-CNN | 35.76 ± 0.59 | 0.97 ± 0.00 | 30.12 ± 1.31 | 0.94 ± 0.01 |
| Sagittal PD | VN | 37.47 ± 0.55 | 0.98 ± 0.00 | 32.86 ± 0.59 | 0.95 ± 0.00 |
| Sagittal FS | ℓ1-SPIRiT | 37.71 ± 1.55 | 0.98 ± 0.00 | 33.32 ± 1.29 | 0.96 ± 0.01 |
| Sagittal FS | D-POCSENSE | 37.96 ± 1.08 | 0.98 ± 0.01 | 33.43 ± 1.08 | 0.96 ± 0.01 |
| Sagittal FS | DC-CNN | 37.02 ± 1.55 | 0.98 ± 0.01 | 27.76 ± 3.04 | 0.95 ± 0.01 |
| Sagittal FS | VN | 38.39 ± 1.21 | 0.98 ± 0.01 | 34.32 ± 1.12 | 0.96 ± 0.01 |

Table 1: Summary of quantitative results for AF=4 and AF=6. PD denotes proton density and FS denotes fat saturation.
Figure 6: Reconstruction results from each method for Cartesian undersampling with (left) acceleration factor 4 and (right) acceleration factor 6.

4 Results

Quantitative results are summarised in Table 1 for each acquisition protocol. On average, both proposed methods outperformed the compressed sensing approach. D-POCSENSE achieved performance close to VN for AF=4, whereas DC-CNN was slightly worse. All methods provided similar SSIM. For AF=6, VN achieved the highest PSNR. Sample reconstructions are shown in Fig. 6 for AF=4 and AF=6. For the axial images, D-POCSENSE gave the most homogeneous result, whereas DC-CNN and VN often failed to remove aliasing. For AF=4, all methods generated sharp images. For AF=6, DC-CNN performed worse than D-POCSENSE and VN, and residual aliasing is prominent.

5 Discussion and Conclusion

In this work, we proposed simple extensions of DC-CNN to parallel imaging. Comparing the two approaches explored so far, D-POCSENSE outperformed DC-CNN overall, which suggests that incorporating the sensitivity estimate is advantageous. We speculate that this is because it allows the intermediate sub-networks to operate directly in the output space, and because the loss is optimised directly with respect to the final output. Nevertheless, DC-CNN achieved the highest SSIM in some regimes, which shows that novel ways of combining the raw data could lead to improved algorithms. The proposed methods achieved performance comparable to state-of-the-art algorithms; however, we note that the variational network produced the best results overall.

6 Note

We observed that training the D-POCSENSE and DC-CNN networks for longer further removes the residual aliasing present in the reconstructions, eventually reaching similar performance. The presented work has since been extended to a variable-splitting network [3].

7 Acknowledgements

Jo Schlemper is partially funded by EPSRC Grant (EP/P001009/1).


  • [1] M. Akçakaya, S. Moeller, S. Weingärtner, and K. Uğurbil (2019) Scan-specific robust artificial-neural-networks for k-space interpolation (RAKI) reconstruction: database-free deep learning for fast imaging. Magnetic Resonance in Medicine 81 (1), pp. 439–453.
  • [2] J. Y. Cheng, M. Mardani, M. T. Alley, J. M. Pauly, and S. S. Vasanawala (2018) DeepSPIRiT: generalized parallel imaging using deep convolutional neural networks. ISMRM 26th Annual Meeting & Exhibition.
  • [3] J. Duan, J. Schlemper, C. Qin, C. Ouyang, W. Bai, C. Biffi, G. Bello, B. Statton, D. P. O'Regan, and D. Rueckert (2019) VS-Net: variable splitting network for accelerated parallel MRI reconstruction. arXiv preprint arXiv:1907.10033.
  • [4] K. Hammernik, T. Klatzer, E. Kobler, M. P. Recht, D. K. Sodickson, T. Pock, and F. Knoll (2018) Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine 79 (6), pp. 3055–3071.
  • [5] Y. Han, L. Sunwoo, and J. C. Ye (2019) k-space deep learning for accelerated MRI. IEEE Transactions on Medical Imaging.
  • [6] M. Mardani, E. Gong, J. Y. Cheng, S. S. Vasanawala, G. Zaharchuk, L. Xing, and J. M. Pauly (2018) Deep generative adversarial neural networks for compressive sensing MRI. IEEE Transactions on Medical Imaging 38 (1), pp. 167–179.
  • [7] M. Murphy, M. Alley, J. Demmel, K. Keutzer, S. Vasanawala, and M. Lustig (2012) Fast ℓ1-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime. IEEE Transactions on Medical Imaging 31 (6), pp. 1250–1262.
  • [8] A. A. Samsonov, E. G. Kholmovski, D. L. Parker, and C. R. Johnson (2004) POCSENSE: POCS-based reconstruction for sensitivity encoded magnetic resonance imaging. Magnetic Resonance in Medicine 52 (6), pp. 1397–1406.
  • [9] J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, and D. Rueckert (2017) A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Transactions on Medical Imaging 37 (2), pp. 491–503.
  • [10] M. Uecker, P. Lai, M. J. Murphy, P. Virtue, M. Elad, J. M. Pauly, S. S. Vasanawala, and M. Lustig (2014) ESPIRiT—an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA. Magnetic Resonance in Medicine 71 (3), pp. 990–1001.
  • [11] P. Zhang, F. Wang, W. Xu, and Y. Li (2018) Multi-channel generative adversarial network for parallel magnetic resonance image reconstruction in k-space. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 180–188.