Real-Time Mapping of Tissue Properties for Magnetic Resonance Fingerprinting

Magnetic Resonance Fingerprinting (MRF) is a relatively new multi-parametric quantitative imaging method that involves a two-step process: (i) reconstructing a series of time frames from highly-undersampled non-Cartesian spiral k-space data and (ii) pattern matching using the time frames to infer tissue properties (e.g., T1 and T2 relaxation times). In this paper, we introduce a novel end-to-end deep learning framework to seamlessly map the tissue properties directly from spiral k-space MRF data, thereby avoiding time-consuming processing such as the nonuniform fast Fourier transform (NUFFT) and dictionary-based fingerprint matching. Our method directly consumes the non-Cartesian k-space data, performs adaptive density compensation, and predicts multiple tissue property maps in one forward pass. Experiments on both 2D and 3D MRF data demonstrate that quantification accuracy comparable to state-of-the-art methods can be accomplished within 0.5 seconds, which is 1,100 to 7,700 times faster than the original MRF framework. The proposed method is thus promising for facilitating the adoption of MRF in clinical settings.




1 Introduction

Magnetic resonance fingerprinting (MRF) [12] is a new quantitative imaging paradigm that allows fast and parallel measurement of multiple tissue properties in a single acquisition, unlike conventional methods that quantify one specific tissue property at a time. MRF randomizes multiple acquisition parameters to generate unique signal evolutions, called “fingerprints”, that encode information about the multiple tissue properties of interest. Thousands of time points are usually acquired, and one image is reconstructed for each time point. Dictionary matching (DM) is then used to match the fingerprint at each pixel to a pre-defined dictionary of fingerprints associated with a wide range of tissue properties.
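To make the matching step concrete, here is a minimal sketch of dictionary matching against a toy, randomly generated dictionary. All sizes, signal models, and tissue-property ranges are hypothetical and for illustration only; real MRF dictionaries are simulated from the Bloch equations.

```python
import numpy as np

# Toy dictionary: each atom is a simulated fingerprint paired with a (T1, T2)
# value. Sizes and property ranges are hypothetical, for illustration only.
rng = np.random.default_rng(0)
n_timepoints, n_atoms = 1000, 500
dictionary = rng.standard_normal((n_atoms, n_timepoints))
t1_t2 = rng.uniform([100.0, 10.0], [3000.0, 300.0], size=(n_atoms, 2))

# Normalize atoms so matching reduces to a maximum inner product.
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def match_fingerprint(signal):
    """Return the (T1, T2) of the dictionary atom best matching a fingerprint."""
    signal = signal / np.linalg.norm(signal)
    return t1_t2[np.argmax(dictionary @ signal)]
```

Matching a (noisy) copy of an atom recovers that atom's tissue properties; repeating this exhaustive search for every pixel is the per-pixel bottleneck that motivates the deep-learning replacements discussed next.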

Figure 1: (A) The original MRF framework. (B) The proposed framework.

To improve the clinical feasibility of MRF, many studies have investigated replacing DM with deep neural networks to accelerate tissue mapping [2, 4, 8, 9, 3]. However, these methods, like DM, operate on the reconstructed MRF images and are therefore still limited by the speed and computational efficiency of conventional reconstruction methods. In particular, since MRF employs a spiral k-space sampling trajectory for robustness to motion [12], the reconstruction is non-trivial and more time-consuming than in the Cartesian case.

A major challenge is that the computationally efficient inverse fast Fourier transform (FFT) cannot be directly applied to non-Cartesian data. In addition, the sampling density varies along the non-Cartesian trajectory and must be compensated for to ensure high-quality reconstruction. Most existing non-Cartesian MRI reconstruction methods thus consist of independent steps that are not optimized end-to-end, relying heavily on the non-uniform fast Fourier transform (NUFFT) [5].

Fewer deep-learning-based reconstruction methods focus on non-Cartesian sampling [6, 17] than on Cartesian sampling [21, 18, 19, 16]. AUTOMAP [22] attempts to use a fully-connected network (FNN) to learn the full mapping from raw k-space data to images, including the Fourier transform (FT). Although an FNN makes no assumptions about the sampling pattern and aligns with the global nature of the FT, the network size grows quadratically with the image size, incurring immense memory costs that limit scalability to large images, especially to high-dimensional MRF data with thousands of time frames. Moreover, MRF involves an additional tissue-mapping step that may require a tailored network architecture based on convolutional neural networks (CNNs) [3, 4] or recurrent neural networks (RNNs) [9] for optimal performance.

Our aim in this paper is to introduce a framework for real-time tissue quantification directly from non-Cartesian k-space MRF data using only regular 2D convolutions and the FFT, providing a computationally more feasible solution to high-dimensional MR data reconstruction and allowing greater flexibility in network design. Mimicking the DFT directly with locally-connected CNNs is not effective, since every point in k-space has a global effect on the image. Our approach is instead inspired by the gridding process in NUFFT [1, 5]. However, rather than explicitly incorporating the memory-costly gridding kernel of NUFFT as in [6, 17], we show for the first time that gridding and tissue mapping can be performed seamlessly in a single mapping. Experiments on 2D and 3D MRF data demonstrate that our completely end-to-end framework achieves results on par with state-of-the-art methods that use more complicated reconstruction schemes, while being orders of magnitude faster. To the best of our knowledge, no prior methods have demonstrated the feasibility of end-to-end non-Cartesian MRI reconstruction for data as high-dimensional as MRF in a single framework that handles both reconstruction and tissue mapping simultaneously without the need for NUFFT.

2 Methods

2.1 Problem Formulation

With the MRF sequence employed in this study, only 1/48-th of the full data, i.e., a single spiral, is collected for each time frame for significant acceleration. The original MRF framework first reconstructs an image from each spiral using NUFFT, leading to highly-aliased images. The image series is then mapped to the corresponding T1 and T2 tissue property maps.

In contrast, our approach directly maps the highly-undersampled spiral k-space MRF data to the Cartesian k-space of the T1 or T2 map, and finally to the image space of the T1 or T2 map simply via inverse FFT (Fig. 1).

Let each data point in k-space be represented by a location vector k_i and a signal value s(k_i). To grid the signal onto a Cartesian grid point g, convolution is applied via a weighted summation of the signal contributions of the neighboring sampled data points of g:

    s(g) = Σ_i c(k_i − g) · d(k_i) · s(k_i),    (1)

where c(·) denotes the gridding kernel centered at g, and d(k_i) is the density compensation factor for data point k_i. Points in sparsely sampled regions are associated with a greater compensation factor. Density compensation is required because in non-Cartesian imaging, the central k-space (low-frequency components) is usually more densely sampled than the outer k-space (high-frequency components).
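A minimal numerical sketch of the gridding operation for a single grid point follows. The triangular kernel c and the radius-proportional density compensation d used here are hypothetical stand-ins for the kernel and weights a real NUFFT implementation would choose:

```python
import numpy as np

# Sketch of Eq. (1) for a single Cartesian grid point g. The triangular kernel
# c and radius-proportional density compensation d are hypothetical stand-ins
# for the choices a real NUFFT implementation would make.
def grid_point(g, k_locs, k_vals, width=1.0):
    offsets = k_locs - g                          # k_i - g
    dist = np.linalg.norm(offsets, axis=1)
    c = np.clip(1.0 - dist / width, 0.0, None)    # gridding kernel c(k_i - g)
    d = np.linalg.norm(k_locs, axis=1)            # d(k_i): larger far from center
    return np.sum(c * d * k_vals)                 # weighted signal summation
```

Note that only samples within the kernel support contribute, which is why a small set of nearest neighbors per grid point suffices in practice.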

2.2 Proposed Framework

Instead of performing gridding in k-space and tissue mapping in image space separately, as in most existing methods [2, 4], we propose to perform tissue mapping directly from k-space. This allows gridding and tissue mapping for thousands of time frames to be performed simultaneously via a single CNN, which is key to achieving real-time tissue quantification. Applying CNNs to non-Cartesian data, however, is not straightforward. Here, without actually interpolating each grid point on the Cartesian k-space of the MRF frames, our key idea is to directly use the signal time courses of the nearest neighboring spiral points of a target grid point to infer the corresponding tissue properties (Fig. 2(a)), based on their relative positions to the target grid point and their densities (Fig. 2(b)). The K-nearest-neighbor (KNN) search and density estimation only need to be computed once for each trajectory and can be pre-stored; the required time cost is therefore negligible. The individual components of the proposed framework are described next.
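The one-time neighbor precomputation can be sketched as follows (brute force for clarity; a KD-tree would be preferable for realistic trajectory sizes). The result is stored and reused for every scan acquired with the same trajectory:

```python
import numpy as np

# One-time precomputation: for each Cartesian grid point, find the indices of
# its K nearest spiral samples. Brute force for clarity; a KD-tree would be
# preferable for real trajectory sizes. The result is stored and reused for
# every scan acquired with the same trajectory.
def knn_indices(grid_pts, spiral_pts, k=8):
    d2 = ((grid_pts[:, None, :] - spiral_pts[None, :, :]) ** 2).sum(axis=-1)
    return np.argsort(d2, axis=1)[:, :k]   # (n_grid, k) neighbor indices
```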

Figure 2: Illustration of the proposed method. The point distribution features of each neighboring spiral point k_i consist of its relative Cartesian offsets with respect to the target grid point g, its radial distance to g, and its density, represented via the polar coordinates of k_i with respect to the k-space center.

2.3 Sliding-Window Stacking of Spirals

In MRF, data are typically sampled incoherently in the temporal dimension via a series of rotated spirals, and each MRF time frame is highly undersampled with one spiral. Here, we combine every 48 temporally consecutive spirals in a sliding-window fashion for full k-space coverage (Fig. 2(a, Left)). This reduces the number of time frames by 47 and allows each spiral point to be associated with a feature vector formed from its signal time course. From Eq. (1), sampled points are gridded based only on their relative positions to the target grid point, i.e., k_i − g. Thus, as exemplified in Fig. 2(a, Right), each spiral point contributes differently according to its spatial proximity to the center grid point within a local neighborhood.
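The sliding-window stacking can be sketched as follows; the window size of 48 is from the text, while the frame count and spiral length are illustrative shapes only:

```python
import numpy as np

# Sliding-window stacking: every 48 temporally consecutive spirals are combined
# so that each window has full k-space coverage. Frame count and spiral length
# are illustrative.
n_frames, pts_per_spiral, window = 200, 128, 48
spirals = np.random.default_rng(0).standard_normal((n_frames, pts_per_spiral))

# Window t stacks spirals t .. t+47; the number of windows is n_frames - 47.
stacked = np.stack([spirals[t:t + window] for t in range(n_frames - window + 1)])
```

Each spiral point thus carries a signal time course across windows, which forms its per-point feature vector.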

2.4 Learned Density Compensation

In non-Cartesian imaging, the measurement density varies in k-space and is typically dense at the center and sparse at the periphery of k-space. Density compensation (DC) can thus be viewed as a function of a data point's location on the k-space sampling trajectory with respect to the k-space center. This is different from gridding, which uses local weighting with respect to a target grid point. We therefore propose to parameterize the DC function using the 2D polar coordinates of the sampled points:

    d(k_i) = f(r_i, θ_i),    (2)

where r_i = ‖k_i‖ is the radial distance and θ_i is the polar angle of k_i. Straightforward choices of f are handcrafted functions of the radial distance alone. However, rather than fixing and handcrafting f, we learn the DC function to adapt to different sampling trajectories via a network that is sensitive to sample locations. This is achieved by directly concatenating the polar coordinates with the spiral point features, inspired by “CoordConv” [10]. By simply giving the convolution access to the input coordinates, the network can adaptively decide where to compensate by learning different weights for the density features associated with different spiral points. This is unlike conventional methods, where DC weighting functions are computed analytically [7] or iteratively [14]. See [10] for more information on translation-variant CoordConv.
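The CoordConv-style input construction can be sketched as follows; the helper name and array shapes are ours, for illustration:

```python
import numpy as np

# CoordConv-style input construction: concatenate each spiral point's polar
# coordinates (radius, angle) with its signal features so a subsequent network
# can learn location-dependent density compensation. Helper name is ours.
def add_polar_coords(features, k_locs):
    """features: (N, C) per-point signals; k_locs: (N, 2) k-space locations."""
    r = np.linalg.norm(k_locs, axis=1, keepdims=True)        # radial distance
    theta = np.arctan2(k_locs[:, 1], k_locs[:, 0])[:, None]  # polar angle
    return np.concatenate([features, r, theta], axis=1)      # (N, C + 2)
```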

2.5 Tissue Mapping via Agglomerated Neighboring Features

The features for each target grid point are agglomerated from its K nearest neighbors in a stack of spirals. This transforms the spiral data onto a grid, allowing regular 2D convolutions to be applied directly. Concatenating the point features with the additional point distribution information required by gridding and density compensation forms the network input. Since our framework does not emphasize, and is not limited to, a particular network architecture, we extend an existing U-Net [15] based MRF tissue quantification network [4] to make it fully end-to-end, mapping the agglomerated features directly to the corresponding tissue property maps. To improve computational efficiency, a micro-network is employed preceding the quantification network to reduce the dimensionality of each target grid feature vector via a shared linear transformation h, implemented as a 1×1 convolution:

    z(g) = h(f_1 ⊕ f_2 ⊕ … ⊕ f_K),    (3)

where ⊕ denotes concatenation and f_i is the feature vector of the i-th nearest spiral point of g. The resulting feature map is then fed to the quantification network.
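The shared linear reduction can be sketched as a plain matrix product applied at every grid location, which is exactly what a 1×1 convolution computes; all sizes here are illustrative, not the paper's:

```python
import numpy as np

# A 1x1 convolution is the same linear map applied at every spatial location:
# each grid point's agglomerated K-neighbor feature vector is multiplied by a
# shared weight matrix, then passed through a ReLU. Sizes are illustrative.
H, W, C_in, C_out = 16, 16, 480, 64
x = np.random.default_rng(0).standard_normal((H, W, C_in))         # agglomerated features
weights = np.random.default_rng(1).standard_normal((C_in, C_out)) * 0.01

y = np.maximum(x @ weights, 0.0)   # shared linear map + ReLU -> (H, W, C_out)
```

Because the weights are shared across all grid locations, the parameter count depends only on the channel dimensions, not on the image size.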

2.5.1 Network Parameters and Training.

Our network backbone consists of a micro-network and a 2D U-Net, which is much lighter than AUTOMAP [22], whose fully-connected layers make it computationally prohibitive when applied to MRF. The micro-network is composed of four 1×1 convolutional layers, each followed by batch normalization and ReLU. The number of output channels of the 1×1 convolutions was chosen by cross-validation, separately for T1 and T2. The network was trained with batches of 2 samples and optimized via ADAM, with an initial learning rate that was decayed after each epoch. Following [4], the relative-L1 loss was used as the objective function. Two NVIDIA TITAN X GPUs were used for training.

3 Experiments and Results

3.0.1 Datasets.

2D MRF datasets were acquired from six normal subjects, each with multiple scans. For each scan, a series of MRF time points was acquired, each containing only one spiral readout. Two 3D MRF datasets were used for evaluation. The first 3D MRF dataset was collected from three subjects, with multiple slices per subject. The second 3D MRF dataset was acquired from six volunteers at a high isotropic resolution. For both 3D datasets, an FFT was first applied in the slice-encoding direction, and the data of each subject were then processed slice-by-slice, just as in the 2D case. All MRI measurements were performed on a Siemens 3T scanner with a multi-channel head coil. The real and imaginary parts of the complex-valued MRF signals are concatenated as input channels. For acceleration, only the first portion of the time frames in each 2D and 3D MRF scan was used for training. The ground-truth T1 and T2 maps were obtained via dictionary matching using all acquired time frames.

3.0.2 Experimental Setup.

1) We compared our end-to-end approach with four state-of-the-art MRF methods: a U-Net based deep learning method (SCQ) [4], dictionary matching (DM) [12], SVD-compressed dictionary matching (SDM) [13], and a low-rank approximation method (Low-Rank) [11]. Note that these competing methods require first reconstructing the image for each time frame using NUFFT [5]. Leave-one-out cross validation was employed. 2) We also compared our adaptive gridding with typical handcrafted gridding methods, and investigated the effects of including the relative positions and density features. 3) As a proof of concept, we applied our method on the additional high-resolution 3D MRF dataset for qualitative evaluation.

Method     MAE T1   MAE T2   SSIM T1   SSIM T2   NRMSE T1   NRMSE T2   Recon. (s)   Patt. Match. (s)   Total (s)
DM         2.42     10.06    0.994     0.954     0.0150     0.0421     467          25                 492
SDM        2.42     10.05    0.994     0.954     0.0150     0.0421     467          10                 477
Low-Rank   2.87     8.17     0.991     0.960     0.0156     0.0302     3133         25                 3158
SCQ        4.87     7.53     0.992     0.968     0.0217     0.0309     9.73         0.12               9.85
Ours       4.24     7.09     0.986     0.972     0.0258     0.0335     –            –                  0.41

Table 1: Quantitative comparison on the 2D MRF dataset with 4× under-sampling. MAE is computed relative to the ground truth (unit: %). Times reported for reconstruction and pattern matching are per-slice averages.
Figure 3: Example 2D MRF results and the associated error maps with 4× under-sampling. Artifacts are indicated by arrows. SSIM is reported at the bottom right.

3.0.3 Results and Discussion.

As shown in Table 1 and Table 2, our method performs best overall in T2 quantification accuracy and achieves competitive accuracy in T1 quantification, with a processing speed 24 times faster than a CNN-based method and 1,100 to 7,700 times faster than DM methods. In particular, for 3D MRF, our method performs best on most metrics. Qualitative results are shown in Fig. 3 and Fig. 4. The higher T1 than T2 quantification accuracy is consistent with previous findings [20, 4]: owing to the sequence used in this study, the early portion of the MRF time frames, which was used for training, contains more information on T1 than on T2. Hence, all methods are more accurate in T1 quantification. DM methods exhibit significant artifacts in T2, as indicated by the arrows in Fig. 3. Representative results for the additional high-resolution 3D MRF data are shown in Fig. 5. In the ablation study summarized in Table 3, our adaptive gridding performs better than typical handcrafted gridding methods.

Method   MAE T1   MAE T2   SSIM T1   SSIM T2   NRMSE T1   NRMSE T2   Recon. (s)   Patt. Match. (s)   Total (s)
DM       5.89     12.19    0.996     0.968     0.0415     0.0521     140.56       17.01              157.57
SCQ      16.58    16.74    0.933     0.919     0.0652     0.0479     8.58         0.11               8.69
Ours     9.14     11.78    0.980     0.968     0.0389     0.0323     –            –                  0.33

Table 2: Quantitative comparison using the first 3D MRF dataset with 2× under-sampling. MAE is computed relative to the ground truth (unit: %). Times reported for reconstruction and pattern matching are per-slice averages.
Figure 4: Example 3D MRF results with 2× under-sampling. SSIM is reported at the bottom right.
Figure 5: Example high-resolution 3D MRF results with 2× under-sampling and isotropic resolution. SSIM is reported at the bottom right.
      Handcrafted gridding            Ours (adaptive gridding)
      Average   Bilinear   Gaussian   No xy/density   xy     density   xy+density
T1    5.59      5.27       5.53       5.24            4.34   4.48      4.24
T2    7.74      8.48       7.95       9.05            8.43   7.37      7.09

Table 3: Comparison of our adaptive gridding method with typical handcrafted gridding methods (MAE, %), and the effects of including the relative position (xy) and density features.

4 Conclusion

In this paper, we introduced a novel and scalable end-to-end framework for direct tissue quantification from non-Cartesian MRF data in under half a second. At about 0.4 s per slice, the slices needed for whole-brain coverage can be processed in about one minute, allowing timely re-scan decisions to be made in clinical settings without having to reschedule additional patient visits. It should be noted that the U-Net based network backbone can be replaced with a more advanced architecture to further boost quantification accuracy. Our framework is also agnostic to the data sampling pattern, and thus can potentially be adapted to other non-Cartesian MRI reconstruction tasks. We believe that our work will improve the clinical feasibility of MRF and spur the development of fast, accurate, and robust reconstruction techniques for non-Cartesian MRI.


  • [1] M. A. Bernstein, K. F. King, and X. J. Zhou (2004) Handbook of MRI Pulse Sequences. Elsevier. Cited by: §1.
  • [2] O. Cohen, B. Zhu, and M. S. Rosen (2018) MR fingerprinting deep reconstruction network (DRONE). Magnetic Resonance in Medicine 80 (3), pp. 885–894. Cited by: §1, §2.2.
  • [3] Z. Fang, Y. Chen, S. Hung, X. Zhang, W. Lin, and D. Shen (2020) Submillimeter mr fingerprinting using deep learning–based tissue quantification. Magnetic Resonance in Medicine 84 (2), pp. 579–591. Cited by: §1, §1.
  • [4] Z. Fang, Y. Chen, M. Liu, L. Xiang, Q. Zhang, Q. Wang, W. Lin, and D. Shen (2019) Deep learning for fast and spatially constrained tissue quantification from highly accelerated data in magnetic resonance fingerprinting. IEEE transactions on medical imaging 38 (10), pp. 2364–2374. Cited by: §1, §1, §2.2, §2.5.1, §2.5, §3.0.2, §3.0.3.
  • [5] J. A. Fessler and B. P. Sutton (2003) Nonuniform fast fourier transforms using min-max interpolation. IEEE transactions on signal processing 51 (2), pp. 560–574. Cited by: §1, §1, §3.0.2.
  • [6] Y. Han, L. Sunwoo, and J. C. Ye (2019) k-Space deep learning for accelerated MRI. IEEE Transactions on Medical Imaging 39 (2), pp. 377–386. Cited by: §1, §1.
  • [7] R. D. Hoge, R. K. Kwan, and G. Bruce Pike (1997) Density compensation functions for spiral mri. Magnetic Resonance in Medicine 38 (1), pp. 117–128. Cited by: §2.4.
  • [8] E. Hoppe, G. Körzdörfer, M. Nittka, T. Wür, J. Wetzl, F. Lugauer, and M. Schneider (2018) Deep learning for magnetic resonance fingerprinting: accelerating the reconstruction of quantitative relaxation maps. In Proceedings of the 26th Annual Meeting of ISMRM, Paris, France, Cited by: §1.
  • [9] E. Hoppe, F. Thamm, G. Körzdörfer, C. Syben, F. Schirrmacher, M. Nittka, J. Pfeuffer, H. Meyer, and A. Maier (2019) RinQ fingerprinting: recurrence-informed quantile networks for magnetic resonance fingerprinting. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 92–100. Cited by: §1, §1.
  • [10] R. Liu, J. Lehman, P. Molino, F. P. Such, E. Frank, A. Sergeev, and J. Yosinski (2018) An intriguing failing of convolutional neural networks and the coordconv solution. arXiv preprint arXiv:1807.03247. Cited by: §2.4.
  • [11] D. Ma, E. Pierre, D. McGivney, B. Mehta, Y. Chen, Y. Jiang, and M. Griswold (2017) Applications of low rank modeling to fast 3d mrf. In Proc Intl Soc Mag Reson Med, Vol. 25, pp. 129. Cited by: §3.0.2.
  • [12] D. Ma, V. Gulani, N. Seiberlich, K. Liu, J. L. Sunshine, J. L. Duerk, and M. A. Griswold (2013) Magnetic resonance fingerprinting. Nature 495 (7440), pp. 187–192. Cited by: §1, §1, §3.0.2.
  • [13] D. F. McGivney, E. Pierre, D. Ma, Y. Jiang, H. Saybasili, V. Gulani, and M. A. Griswold (2014) SVD compression for magnetic resonance fingerprinting in the time domain. IEEE transactions on medical imaging 33 (12), pp. 2311–2322. Cited by: §3.0.2.
  • [14] J. G. Pipe and P. Menon (1999) Sampling density compensation in mri: rationale and an iterative numerical solution. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine 41 (1), pp. 179–186. Cited by: §2.4.
  • [15] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §2.5.
  • [16] J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, and D. Rueckert (2017) A deep cascade of convolutional neural networks for dynamic mr image reconstruction. IEEE transactions on Medical Imaging 37 (2), pp. 491–503. Cited by: §1.
  • [17] J. Schlemper, S. S. M. Salehi, P. Kundu, C. Lazarus, H. Dyvorne, D. Rueckert, and M. Sofka (2019) Nonuniform variational network: deep learning for accelerated nonuniform mr image reconstruction. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 57–64. Cited by: §1, §1.
  • [18] A. Sriram, J. Zbontar, T. Murrell, C. L. Zitnick, A. Defazio, and D. K. Sodickson (2020) GrappaNet: combining parallel imaging with deep learning for multi-coil MRI reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14315–14322. Cited by: §1.
  • [19] Z. Zhang, A. Romero, M. J. Muckley, P. Vincent, L. Yang, and M. Drozdzal (2019) Reducing uncertainty in undersampled mri reconstruction with active acquisition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2049–2058. Cited by: §1.
  • [20] B. Zhao, K. Setsompop, H. Ye, S. F. Cauley, and L. L. Wald (2016) Maximum likelihood reconstruction for magnetic resonance fingerprinting. IEEE transactions on medical imaging 35 (8), pp. 1812–1823. Cited by: §3.0.3.
  • [21] B. Zhou and S. K. Zhou (2020) DuDoRNet: learning a dual-domain recurrent network for fast mri reconstruction with deep t1 prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4273–4282. Cited by: §1.
  • [22] B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen (2018) Image reconstruction by domain-transform manifold learning. Nature 555 (7697), pp. 487–492. Cited by: §1, §2.5.1.