Deep Learning Methods for Parallel Magnetic Resonance Image Reconstruction

by Florian Knoll, et al.
University of Minnesota
TU Graz

Following the success of deep learning in a wide range of applications, neural network-based machine learning techniques have received interest as a means of accelerating magnetic resonance imaging (MRI). A number of ideas inspired by deep learning techniques from computer vision and image processing have been successfully applied to non-linear image reconstruction in the spirit of compressed sensing for both low dose computed tomography and accelerated MRI. The additional integration of multi-coil information to recover missing k-space lines in the MRI reconstruction process is still studied less frequently, even though it is the de-facto standard for currently used accelerated MR acquisitions. This manuscript provides an overview of the recent machine learning approaches that have been proposed specifically for improving parallel imaging. A general background introduction to parallel MRI is given that is structured around the classical view of image space and k-space based methods. Both linear and non-linear methods are covered, followed by a discussion of recent efforts to further improve parallel imaging using machine learning, and specifically using artificial neural networks. Image-domain based techniques that introduce improved regularizers are covered as well as k-space based methods, where the focus is on better interpolation strategies using neural networks. Issues and open problems are discussed as well as recent efforts for producing open datasets and benchmarks for the community.






I Introduction

During recent years, there has been a substantial increase of research activity in the field of medical image reconstruction. One particular application area is the acceleration of Magnetic Resonance Imaging (MRI) scans. This is an area of high impact, because MRI is the leading diagnostic modality for a wide range of exams, but the physics of its data acquisition process make it inherently slower than modalities like X-Ray, Computed Tomography or Ultrasound. Therefore, the shortening of scan times has been a major driving factor for routine clinical application of MRI.

One of the most important and successful technical developments to decrease MRI scan time in the last 20 years was parallel imaging [1, 2, 3]. Currently, essentially all clinical MRI scanners from all vendors are equipped with parallel imaging technology, and it is the default option for a large number of scan protocols. As a consequence, there is a substantial benefit to using multi-coil data for machine learning based image reconstruction. Not only does it provide a complementary source of acceleration that is unavailable when operating on single channel data, or at the level of image enhancement and post-processing, it is also the scenario that ultimately defines the use-case for accelerated clinical MRI, which makes it a requirement for clinical translation of new reconstruction approaches. The drawback is that working with multi-coil data adds a layer of complexity that creates a gap between cutting-edge developments in deep learning [4] and computer vision, where the default data type is images. The goal of this manuscript is to bridge this gap by providing both a comprehensive review of the properties of parallel MRI and an introduction to how current machine learning methods can be used for this particular application.

Fig. 1:

In k-space based parallel imaging methods, missing data is recovered first in k-space, followed by an inverse Fourier transform and combination of the individual coil elements. In image space based parallel imaging, the Fourier transform is performed as the first step, followed by coil sensitivity based removal of the aliasing artifacts from the reconstructed image by solving an inverse problem.

I-A Background on multi-coil acquisitions in MRI

The original motivation behind phased array receive coils [5] was to increase the SNR of MR measurements. These arrays consist of multiple small coil elements, where an individual coil element covers only a part of the imaging field of view. These individual signals are then combined to form a single image of the complete field of view. The central idea of all parallel imaging methods is to complement the spatial signal encoding of the gradient fields with information about the spatial position of these multiple coil elements. For multiple receiver coils, the MR signal equation can be written as

$$s_i(\mathbf{k}) = \int_{\Omega} c_i(\mathbf{r})\, u(\mathbf{r})\, e^{-i 2\pi \mathbf{k} \cdot \mathbf{r}}\, d\mathbf{r} \qquad (1)$$

In Equation 1, $s_i$ is the MR signal of coil $i$, $u$ is the target image to be reconstructed, and $c_i$ is the corresponding coil sensitivity. Parallel imaging methods use the redundancies in these multi-coil acquisitions to reconstruct undersampled k-space data. After discretization, this undersampling is described in matrix-vector notation by

$$\mathbf{f}_i = \mathbf{F}_{\Omega} \mathbf{C}_i \mathbf{u} + \mathbf{n}_i \qquad (2)$$

where $\mathbf{u}$ is the image to be reconstructed, $\mathbf{f}_i$ is the acquired k-space data in the $i$-th coil, $\mathbf{C}_i$ is a diagonal matrix containing the sensitivity profile of the $i$-th receiver coil [2], $\mathbf{F}_{\Omega}$ is a partial Fourier sampling operator that samples the locations $\Omega$, and $\mathbf{n}_i$ is measurement noise in the $i$-th coil.
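Concretely, the discretized forward model and its adjoint can be sketched in a few lines of NumPy. The function names, array shapes and the choice of an orthonormal 2D FFT with a binary sampling mask below are illustrative assumptions, not a reference implementation:

```python
import numpy as np

def sense_forward(u, coils, mask):
    """Multi-coil encoding: coil weighting C_i, orthonormal 2D FFT F,
    and undersampling with a binary k-space mask (the operator F_Omega C_i)."""
    # u: (ny, nx) complex image; coils: (nc, ny, nx); mask: (ny, nx) in {0, 1}
    coil_images = coils * u[None, ...]                # C_i u
    kspace = np.fft.fft2(coil_images, norm="ortho")   # F C_i u
    return mask[None, ...] * kspace                   # sampled locations only

def sense_adjoint(k, coils, mask):
    """Adjoint operator: zero-filled inverse FFT followed by
    conjugate coil combination."""
    imgs = np.fft.ifft2(mask[None, ...] * k, norm="ortho")
    return np.sum(np.conj(coils) * imgs, axis=0)
```

Because the FFT is orthonormal, `sense_adjoint` is the exact adjoint of `sense_forward`, which matters for the iterative methods discussed in the following sections.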

Historically, parallel imaging methods were put in two categories: Approaches that operate in image domain, inspired by the sensitivity encoding (SENSE) method [2] and approaches that operated in k-space, inspired by simultaneous acquisition of spatial harmonics (SMASH) [1] and generalized autocalibrating partial parallel acquisition (GRAPPA) [6]. This is conceptually illustrated in Figure 1. While these two schools of thought are closely related [7, 8], we organized this document according to these classic criteria for historical reasons.

II Classical parallel imaging in image space

Classical parallel imaging in image space follows the SENSE method [2], which can be identified by two key features. First, the elimination of the aliasing artifacts is performed in image space after the application of an inverse Fourier transform. Second, information about receive coil sensitivities is obtained via precomputed, explicit coil sensitivity maps from either a separate reference scan or from a fully sampled block of data at the center of k-space (all didactic experiments shown in this manuscript follow the latter approach). More recent approaches jointly estimate coil sensitivity profiles during the image reconstruction process [9, 10], but for the rest of this manuscript, we assume that sensitivity maps were precomputed. The reconstruction in image domain in Figure 1 shows three example undersampled coil images, corresponding coil sensitivity maps and the final reconstructed image from a brain MRI dataset. The coil sensitivities were estimated using ESPIRiT [8].

MRI reconstruction in general, and parallel imaging in particular, can be formulated as an inverse problem. This provides a general framework that allows easy integration of the concepts of regularized and constrained image reconstruction as well as machine learning, which are discussed in more detail in later sections. Equation 1 can be discretized and then written in matrix-vector form:

$$\mathbf{f} = \mathbf{E}\mathbf{u} + \mathbf{n} \qquad (3)$$

where $\mathbf{f}$ contains all k-space measurement data points, $\mathbf{E}$ is the forward encoding operator that includes information about the sampling trajectory and the receive coil sensitivities, and $\mathbf{n}$ is measurement noise. The task of image reconstruction is to recover the image $\mathbf{u}$. In classic parallel imaging one generally operates under the condition that the number of receive elements is larger than the acceleration factor. Therefore, Equation 3 corresponds to an over-determined system of equations. However, the rows of $\mathbf{E}$ are linearly dependent because individual coil elements do not measure completely independent information. Therefore the inversion of $\mathbf{E}$ is ill-conditioned, which can lead to severe noise amplification, described via the g-factor in the original SENSE paper [2]. Equation 3 is usually solved in an iterative manner, which is the topic of the following sections.

II-A Overview of conjugate gradient SENSE (CG-SENSE)

The original SENSE approach is based on equidistant or uniform Cartesian k-space sampling, where the aliasing pattern is defined by a point spread function that has a small number of sharp equidistant peaks. This property leads to a small number of pixels that are folded on top of each other, which allows a very efficient implementation [2]. When using alternative k-space sampling strategies like non-Cartesian acquisitions or random undersampling, this is no longer possible and image reconstruction requires a full inversion of the encoding matrix $\mathbf{E}$ in Equation 3. This operation is demanding both in terms of compute and memory requirements (the dimensions of $\mathbf{E}$ are the total number of acquired k-space points times the number of pixels of the image matrix that is to be reconstructed), which led to the development of iterative methods, in particular the CG-SENSE method introduced by Pruessmann et al. as a follow-up to the original SENSE paper [11]. In iterative image reconstruction the goal is to find a $\mathbf{u}$ that is a minimizer of the following cost function, which corresponds to the quadratic form of the system in Equation 3:

$$\min_{\mathbf{u}} \frac{1}{2} \left\| \mathbf{E}\mathbf{u} - \mathbf{f} \right\|_2^2 \qquad (4)$$
In standard parallel imaging, $\mathbf{E}$ is linear and Equation 4 is a convex optimization problem that can be solved with a large number of numerical algorithms like gradient descent, Landweber iterations [12], primal-dual methods [13] or the alternating direction method of multipliers (ADMM) algorithm [14] (a detailed review of numerical methods is outside the scope of this article). In the original version of CG-SENSE [11], the conjugate gradient method [15] is employed. However, since MR k-space data are corrupted by noise, it is common practice to stop iterating before theoretical convergence is reached, which can be seen as a form of regularization. Regularization can also be incorporated via additional constraints in Equation 4, which will be covered in the next section.
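A minimal sketch of this idea, solving the normal equations $\mathbf{E}^{*}\mathbf{E}\mathbf{u} = \mathbf{E}^{*}\mathbf{f}$ with plain conjugate gradients, could look as follows. This is illustrative only: the operator definitions assume Cartesian masked sampling, and a practical CG-SENSE implementation would additionally handle non-Cartesian trajectories via gridding and density compensation:

```python
import numpy as np

def cg_sense(f, coils, mask, n_iter=20, tol=1e-12):
    """Conjugate gradients on the SENSE normal equations E^H E u = E^H f.
    Stopping early (small n_iter or a loose tol) acts as regularization."""
    fwd = lambda u: mask[None] * np.fft.fft2(coils * u[None], norm="ortho")
    adj = lambda k: np.sum(np.conj(coils) *
                           np.fft.ifft2(mask[None] * k, norm="ortho"), axis=0)
    normal = lambda u: adj(fwd(u))          # E^H E
    b = adj(f)
    u = np.zeros_like(b)
    r = b.copy()                            # residual b - E^H E u
    p = r.copy()                            # search direction
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = normal(p)
        alpha = rs / np.vdot(p, Ap).real
        u = u + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < tol:                    # numeric tolerance reached
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u
```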

As a didactic example for this manuscript, we will use a single slice of a 2D coronal knee exam to illustrate the various reconstruction approaches. These data were acquired on a clinical 3T system (Siemens Skyra) using a 15-channel phased array knee coil. A turbo spin echo sequence was used with the following sequence parameters: TR=2750ms, TE=27ms, echo train length 4, field of view 160mm, in-plane resolution 0.5mm, slice thickness 3mm. Readout oversampling with a factor of 2 was used, and all images were cropped in the frequency encoding direction (superior-inferior) for display purposes. In the spirit of reproducible research, the data, sampling masks and coil sensitivity estimations that were used for the numerical results in this manuscript are available online (NYULH Radiology Reconstruction Data: coronal PD data, subject 17, slice 25). Figure 3 shows an example of a retrospectively undersampled CG-SENSE reconstruction of these data with an acceleration factor of 4. Early stopping was employed by setting a numeric tolerance for the iteration, which resulted in the algorithm stopping after 14 CG iterations.

II-B Nonlinear regularization and compressed sensing

Equation 4 can be extended by including a-priori knowledge via additional penalty terms, which results in the constrained optimization problem defined in Equation 5, which forms the cornerstone of almost all modern MRI reconstruction methods:

$$\min_{\mathbf{u}} \frac{1}{2} \left\| \mathbf{E}\mathbf{u} - \mathbf{f} \right\|_2^2 + \sum_{j} \lambda_j \Psi_j(\mathbf{u}) \qquad (5)$$

Here, the $\Psi_j$ are dedicated regularization terms and the $\lambda_j$ are regularization parameters that balance the trade-off between data fidelity and prior. Since the introduction of compressed sensing [16, 17] and its adoption for MRI [18, 19, 20], nonlinear regularization terms, in particular $\ell_1$-norm based ones, are popular in image reconstruction and are commonly used in parallel imaging [19, 21, 22, 23, 24, 25, 26]. The goal of these regularization terms is to provide a separation between the target image that is to be reconstructed and the aliasing artifacts that are introduced by an undersampled acquisition. Therefore, they are usually designed in conjunction with a particular data sampling strategy. The classic formulation of compressed sensing in MRI [18] is based on sparsity of the image in a transform domain (Wavelets are a popular choice for static images) in combination with pseudo-random sampling, which introduces aliasing artifacts that are incoherent in the respective domain. For dynamic acquisitions where periodic motion is encountered, sparsity in the temporal Fourier domain is a common choice [27]. Total Variation based methods have been used successfully in combination with radial [19] and spiral [28] acquisitions as well as in dynamic imaging [29]. More advanced regularizers based on low-rank properties have also been utilized [30].

Figure 3 shows an example of a nonlinear combined parallel imaging and compressed sensing reconstruction with a Total Generalized Variation [22] constraint. The reconstruction used 1000 primal-dual [13] iterations. The equidistant sampling, chosen for consistency with the other reconstruction methods, is not optimal for the incoherence condition in compressed sensing. Nevertheless, the nonlinear regularization still provides a superior reduction of aliasing artifacts and noise suppression in comparison to the CG-SENSE reconstruction from the last section.
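As an illustration of the underlying optimization, the following sketch solves Equation 5 with a simple image-domain $\ell_1$ penalty (standing in for TGV, whose proximal operator has no closed form) using proximal gradient descent (ISTA). The penalty choice and parameter values are illustrative assumptions, not the setup used for the figure:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 for complex-valued input."""
    mag = np.abs(x)
    return (np.maximum(mag - t, 0) / np.maximum(mag, 1e-12)) * x

def ista_recon(f, coils, mask, lam=0.01, n_iter=50):
    """Proximal gradient descent for 0.5*||E u - f||_2^2 + lam*||u||_1."""
    fwd = lambda u: mask[None] * np.fft.fft2(coils * u[None], norm="ortho")
    adj = lambda k: np.sum(np.conj(coils) *
                           np.fft.ifft2(mask[None] * k, norm="ortho"), axis=0)
    ssq = np.sum(np.abs(coils) ** 2, axis=0)
    alpha = 1.0 / ssq.max()        # step size <= 1/L (Lipschitz constant of grad)
    u = adj(f)                     # zero-filled initialization
    for _ in range(n_iter):
        # gradient step on the data fidelity, then shrinkage on the penalty
        u = soft_threshold(u - alpha * adj(fwd(u) - f), alpha * lam)
    return u
```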

III Classical parallel imaging in k-space

Parallel imaging reconstruction can also be formulated in k-space as an interpolation procedure. The initial connection between the SENSE-type image-domain inverse problem approach and k-space interpolation was made more than a decade ago [7], where it was noted that the forward model in Equation 3 can be restated in terms of the Fourier transform, $\mathbf{m}$, of the combined image, as

$$\mathbf{k}_{acq} = \mathbf{G}_{acq} \mathbf{m} \qquad (6)$$

where $\mathbf{k}_{acq}$ corresponds to the acquired k-space lines across all coils, and $\mathbf{G}_{acq}$ is a linear operator. Similarly, the unacquired k-space lines across all coils can be formulated using

$$\mathbf{k}_{unacq} = \mathbf{G}_{unacq} \mathbf{m} \qquad (7)$$

Combining these two equations yields

$$\mathbf{k}_{unacq} = \mathbf{G}_{unacq} \mathbf{G}_{acq}^{+} \mathbf{k}_{acq} \qquad (8)$$

Thus, the unacquired k-space lines across all coils can be interpolated based on the acquired lines across all coils, assuming the pseudo-inverse $\mathbf{G}_{acq}^{+}$ of $\mathbf{G}_{acq}$ exists [7]. Thus, the main difference between the k-space parallel imaging methods and the aforementioned image domain parallel imaging techniques is that the former produce k-space data across all coils at the output, whereas the latter typically produce one image that combines the information from all coils.

III-A Linear k-space interpolation in GRAPPA

The most widely used clinical k-space reconstruction method for parallel imaging is GRAPPA, which uses linear shift-invariant convolutional kernels to interpolate missing k-space lines from uniformly-spaced acquired k-space lines [6]. For the $j$-th coil k-space data, $k_j$, we have

$$k_j(k_x,\, k_y - m\Delta k_y) = \sum_{c=1}^{n_c} \sum_{b_x=-B_x}^{B_x} \sum_{b_y=-B_y}^{B_y} g_{j,m}(b_x, b_y, c)\, k_c(k_x - b_x \Delta k_x,\, k_y - R\, b_y \Delta k_y) \qquad (9)$$

where $R$ is the acceleration rate of the uniformly undersampled acquisition; $m \in \{1, \dots, R-1\}$; $g_{j,m}$ are the linear convolutional kernels for estimating the $m$-th spacing location in the $j$-th coil; $n_c$ is the number of coils; and $B_x$, $B_y$ are parameters determined from the convolutional kernel size. A high-level overview of such interpolation is shown in the reconstruction in k-space section of Figure 1.

Similar to the coil sensitivity estimation in SENSE-type reconstruction, the convolutional kernels $g_{j,m}$ are estimated for each subject from either a separate reference scan or from a fully-sampled block of data at the center of k-space, called the autocalibration signal (ACS) [6]. A sliding window approach is used in this calibration region to identify the fully-sampled acquisition locations specified by the kernel size and the corresponding missing entries. The former, taken across all coils, are used as rows of a calibration matrix $\mathbf{A}$, while the latter, for a specific coil, yields a single entry in the target vector $\mathbf{b}$. Thus for each coil and missing location $m$, a set of linear equations is formed, from which the vectorized kernel weights, denoted $\mathbf{g}_{j,m}$, are estimated via least squares as $\mathbf{g}_{j,m} = \mathbf{A}^{+}\mathbf{b}$. GRAPPA has been shown to have several favorable properties compared to SENSE, including lower g-factors, sometimes even less than unity at certain parts of the image [31], and more smoothly varying g-factor maps [32]. Furthermore, k-space interpolation is often less sensitive to motion [33]. Due to these favorable properties, GRAPPA has found utility in multiple large-scale projects, such as the Human Connectome Project [34].
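The calibration and interpolation steps can be sketched for a deliberately simplified setting: $R=2$ and a kernel consisting of only the two nearest acquired lines with no $k_x$ extent, rather than the full $(B_x, B_y)$ kernel geometry. All names and shapes are illustrative:

```python
import numpy as np

def grappa_calibrate(acs):
    """Least-squares fit of GRAPPA-like weights that predict a k-space
    line from its two vertical neighbors across all coils (1x2 kernel, R=2)."""
    nc, ny, nx = acs.shape
    src, tgt = [], []
    for ky in range(1, ny - 1):
        # sources: lines ky-1 and ky+1 in all coils, one equation per kx column
        s = np.concatenate([acs[:, ky - 1, :], acs[:, ky + 1, :]], axis=0)
        src.append(s.T)                        # (nx, 2*nc) block of rows of A
        tgt.append(acs[:, ky, :].T)            # (nx, nc) block of targets b
    A = np.concatenate(src, axis=0)            # calibration matrix
    b = np.concatenate(tgt, axis=0)            # target vectors, one per coil
    w, *_ = np.linalg.lstsq(A, b, rcond=None)  # (2*nc, nc) kernel weights
    return w

def grappa_apply(kspace, w):
    """Fill the missing odd ky lines from their acquired even neighbors."""
    out = kspace.copy()
    for ky in range(1, kspace.shape[1] - 1, 2):
        s = np.concatenate([kspace[:, ky - 1, :], kspace[:, ky + 1, :]], axis=0)
        out[:, ky, :] = (s.T @ w).T
    return out
```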

III-B Advances in k-space interpolation methods

Though GRAPPA is widely used in clinical practice, it is a linear method that suffers from noise amplification based on the coil geometry and the acceleration rate [2]. Therefore, several alternative strategies have been proposed in the literature to reduce the noise in reconstruction.

Iterative self-consistent parallel imaging reconstruction (SPIRiT) is a strategy for enforcing self-consistency among the k-space data in multiple receiver coils by exploiting correlations between neighboring k-space points [21]. Similar to GRAPPA, SPIRiT also estimates a linear shift-invariant convolutional kernel from ACS data. In GRAPPA, this convolutional kernel uses information from acquired lines in a neighborhood to estimate a missing k-space point. In SPIRiT, the kernel includes contributions from all points, both acquired and missing, across all coils for a neighborhood around a given k-space point. The self-consistency idea suggests that the full k-space data should remain unchanged under this convolution operation. The SPIRiT objective function also includes a term that enforces consistency with the acquired data, where the undersampling can be performed with arbitrary patterns, including random patterns that are typically employed in compressed sensing [19, 18]. Additionally, this formulation allows incorporation of regularizers, for instance based on transform-domain sparsity, in the objective function to reduce reconstruction noise via non-linear processing [21]. Furthermore, SPIRiT has facilitated the connection between coil sensitivities used in image-domain parallel imaging methods and the convolutional kernels used in k-space methods via a subspace analysis [8].

An alternative line of work utilizes non-linear k-space interpolation for estimating missing k-space points for uniformly undersampled parallel imaging acquisitions [35]. It was noted that during GRAPPA calibration, both the regressand and the regressor have errors in them due to measurement noise in the acquisition of calibration data, which leads to a non-linear relationship in the estimation. Thus, the reconstruction method, called non-linear GRAPPA, uses a kernel approach to map the data to a higher-dimensional feature space, where linear interpolation is performed, which also corresponds to a non-linear interpolation in the original data space. The interpolation function is estimated from the ACS data, although this approach typically required more ACS data than GRAPPA [35]. This method was shown to reduce reconstruction noise compared to GRAPPA. Note that non-linear GRAPPA, through its use of the kernel approach, is a type of machine learning approach, though the non-linear kernel functions were empirically fixed a-priori and not learned from data.
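The kernel idea can be made concrete with an explicit feature map: lift the source points into a higher-dimensional space and fit a linear interpolator there. Note that non-linear GRAPPA uses an implicit polynomial kernel on the multi-coil source points; the explicit lifting below is a simplification for illustration, with hypothetical function names:

```python
import numpy as np

def lift(S):
    """Explicit nonlinear feature map: the original source points plus
    second-order magnitude-weighted terms (an illustrative choice)."""
    return np.concatenate([S, S * np.abs(S)], axis=1)

def fit_nonlinear_kernel(A_src, b_tgt):
    """Linear least-squares fit in the lifted feature space, which
    corresponds to non-linear interpolation in the original data space."""
    F = lift(A_src)
    w, *_ = np.linalg.lstsq(F, b_tgt, rcond=None)
    return w

def apply_nonlinear_kernel(S, w):
    """Interpolate targets for new source rows S via the lifted features."""
    return lift(S) @ w
```

Because the lifted system has more unknowns than the linear one, this style of calibration typically needs more ACS data than plain GRAPPA, mirroring the observation made above for non-linear GRAPPA.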

III-C k-space reconstruction via low-rank matrix completion

While k-space interpolation methods remain the prevalent approach for k-space parallel imaging reconstruction, there have been recent efforts on recasting this type of reconstruction as a matrix completion problem. Simultaneous autocalibrating and k-space estimation (SAKE) is an early work in this direction, where local neighborhoods in k-space across all coils are restructured into a matrix with block Hankel form [36]. Then low-rank matrix completion is performed on this matrix, subject to consistency with acquired data, enabling k-space parallel imaging reconstruction without additional calibration data acquisition. Low-rank matrix modeling of local k-space neighborhoods (LORAKS) is another method exploiting similar ideas, where the motivation is based on utilizing finite image support and image phase constraints instead of correlations across multiple coils [37]. This method was later extended to parallel imaging to further include the similarities between image supports and phase constraints across coils [38]. A further generalization of LORAKS is the annihilating filter-based low rank Hankel matrix approach (ALOHA), which extends the finite support constraint to transform domains [39]. By relating transform domain sparsity to the existence of annihilating filters in a weighted k-space, where the weighting is determined by the choice of transform domain, ALOHA recasts the reconstruction problem as the low-rank recovery of the associated Hankel matrix.
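The structured low-rank principle is easiest to see in one dimension: a signal composed of a few complex exponentials produces a Hankel matrix of correspondingly low rank, and missing samples can be recovered by alternating rank truncation with data consistency. The following is a simplified single-channel sketch of this cadence, not an implementation of SAKE, LORAKS or ALOHA themselves:

```python
import numpy as np

def hankel(x, w):
    """Stack sliding windows of length w from a 1D signal into a Hankel matrix."""
    return np.array([x[i:i + w] for i in range(len(x) - w + 1)])

def lowrank_complete(x, known, w, rank, n_iter=200):
    """Alternate rank truncation of the Hankel matrix with data consistency
    on the known samples (a Cadzow-style cadence)."""
    y = x * known                                   # zero-fill missing samples
    for _ in range(n_iter):
        H = hankel(y, w)
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :rank] * s[:rank]) @ Vh[:rank]    # project to rank r
        # average anti-diagonals back to a signal
        z = np.zeros(len(y), dtype=complex)
        cnt = np.zeros(len(y))
        for i in range(H.shape[0]):
            z[i:i + w] += H[i]
            cnt[i:i + w] += 1
        y = z / cnt
        y[known] = x[known]                         # enforce acquired data
    return y
```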

IV Machine learning methods for parallel imaging in image space

The use of machine learning for image-based parallel MR imaging evolves naturally from Equation 5 based on the following key insights. First, in classic compressed sensing, the $\Psi_j$ are general regularizers like the image gradient or wavelet transforms, which were not designed specifically with undersampled parallel MRI acquisitions in mind. These regularizers can be generalized to models that have a higher computational complexity. $\Psi$ can be formulated as a convolutional neural network (CNN) [40], where the model parameters can be learned from training data, inspired by the concepts of deep learning [4]. This was already demonstrated earlier in the context of computer vision by Roth and Black [41]. They proposed a non-convex regularizer of the following form:

$$\Psi(\mathbf{u}) = \sum_{i=1}^{N_k} \left\langle \Phi_i(\mathbf{K}_i \mathbf{u}), \mathbf{1} \right\rangle \qquad (10)$$

The regularizer in Equation 10 consists of $N_k$ terms of non-linear potential functions $\Phi_i$; the $\mathbf{K}_i$ are convolution operators, and $\mathbf{1}$ indicates a vector of ones. The parameters of the convolution operators and the parametrization of the non-linear potential functions form the free parameters of the model, which are learned from training data.

Fig. 2: Illustration of machine learning-based image reconstruction. The network architecture consists of stages that perform the equivalent of gradient descent steps in a classic iterative algorithm. Each stage consists of a regularizer and a data consistency layer. Training the network parameters is performed by retrospectively undersampling fully sampled multi-coil raw k-space data and comparing the output of the network to a target reference reconstruction obtained from the fully sampled data.
Fig. 3: Comparison of image-domain based parallel imaging reconstructions of a retrospectively accelerated coronal knee acquisition. The used sampling pattern, zero-filling, CG-SENSE, combined parallel imaging and compressed sensing with a TGV constraint, and a learned reconstruction are shown together with their SSIM values relative to the fully sampled reference. See the text in the respective sections for details on the individual experiments.

The second insight is that the iterative algorithm that is used to solve Equation 5 naturally maps to the structure of a neural network, where every layer in the network represents an iteration step of a classic algorithm [42]. This follows naturally from gradient descent for the least squares problem in Equation 4, which leads to the iterative Landweber method [12]. After choosing an initial $\mathbf{u}^{0}$, the iteration scheme is given by Equation 11:

$$\mathbf{u}^{t+1} = \mathbf{u}^{t} - \alpha^{t}\, \mathbf{E}^{*} \left( \mathbf{E}\mathbf{u}^{t} - \mathbf{f} \right) \qquad (11)$$

$\mathbf{E}^{*}$ is the adjoint of the encoding operator and $\alpha^{t}$ is the step size of iteration $t$. Using this iteration scheme to solve the reconstruction problem in Equation 5 with the regularizer defined in Equation 10 leads to the update scheme defined in Equation 12, which forms the basis of recently proposed image space based machine learning methods for parallel MRI:

$$\mathbf{u}^{t+1} = \mathbf{u}^{t} - \alpha^{t} \left( \mathbf{E}^{*} \left( \mathbf{E}\mathbf{u}^{t} - \mathbf{f} \right) + \sum_{i=1}^{N_k} \mathbf{K}_i^{\top} \Phi_i' \left( \mathbf{K}_i \mathbf{u}^{t} \right) \right) \qquad (12)$$

This update scheme can then be represented as a neural network with stages corresponding to the iteration steps in Equation 12. The $\Phi_i'$ are the first derivatives of the non-linear potential functions $\Phi_i$, which are represented as activation functions in the neural network. The transposed convolution operations $\mathbf{K}_i^{\top}$ correspond to convolutions with filter kernels rotated by 180 degrees. Most recently proposed approaches follow this structure, and their differences mainly lie in the model architecture used. The idea of the variational network [43] follows the structure of classic variational methods and gradient-based optimization, and the network architecture is designed to mimic a classic iterative image reconstruction. The approach from Aggarwal et al. [44] follows a similar design concept, while using convolutional neural networks (CNNs), but shares the same set of parameters for all stages of the network, thus reducing the number of free parameters. It also uses an unrolled conjugate-gradient step for data consistency instead of the gradient step in Equation 11.
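A forward pass of such an unrolled scheme can be sketched directly from Equation 12. The filters, activation and step size below are illustrative placeholders for quantities that would be learned from data, and the circular FFT-based convolutions are a simplification:

```python
import numpy as np

def conv2(u, k):
    """Circular 'same' convolution via FFT (sufficient for a sketch)."""
    return np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(k, s=u.shape))

def conv2_T(u, k):
    """Exact adjoint of conv2: multiply by the conjugate filter response."""
    return np.fft.ifft2(np.fft.fft2(u) * np.conj(np.fft.fft2(k, s=u.shape)))

def unrolled_net(f, coils, mask, filters, alpha, T=8):
    """Forward pass of an unrolled gradient scheme in the spirit of Eq. 12:
    each stage applies the data-consistency gradient E^*(E u - f) plus the
    gradient of a convolutional regularizer sum_i K_i^T phi_i'(K_i u)."""
    fwd = lambda u: mask[None] * np.fft.fft2(coils * u[None], norm="ortho")
    adj = lambda k: np.sum(np.conj(coils) *
                           np.fft.ifft2(mask[None] * k, norm="ortho"), axis=0)
    u = adj(f)                                    # zero-filled initialization
    for _ in range(T):
        grad_dc = adj(fwd(u) - f)                 # E^*(E u^t - f)
        grad_reg = np.zeros_like(u)
        for kern in filters:
            v = conv2(u, kern)                    # K_i u^t
            # illustrative activation phi', applied to real/imag parts
            phi_prime = np.tanh(v.real) + 1j * np.tanh(v.imag)
            grad_reg += conv2_T(phi_prime, kern)  # K_i^T phi_i'(K_i u^t)
        u = u - alpha * (grad_dc + grad_reg)
    return u
```

With all filters set to zero the scheme collapses to the plain Landweber iteration of Equation 11, which is a useful sanity check.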

An illustration of image space based machine learning for parallel MRI along the lines of [43, 44] is shown in Figure 2. To determine the model parameters of the network that will perform the parallel imaging reconstruction task, an optimization problem needs to be defined that minimizes a training objective. In general, this can be formulated in a supervised or unsupervised manner. Supervised approaches are predominantly used, while unsupervised approaches are still a topic of ongoing research (an approach for low-dose CT reconstruction was presented in [45]). Therefore, we will focus on supervised approaches for the remainder of this section. We define the number of stages, corresponding to gradient steps in the network, as $T$. $\mathbf{u}_s$ is the current training image out of the complete set of $N_s$ training examples. The variable $\theta$ contains all trainable parameters of the reconstruction model. The training objective then takes the following form:

$$\min_{\theta} \frac{1}{2 N_s} \sum_{s=1}^{N_s} \left\| \mathbf{u}_s^{T}(\theta) - \mathbf{u}_s^{\mathrm{ref}} \right\|_2^2 \qquad (13)$$

As is common in deep learning, Equation 13 is a non-convex optimization problem that is solved with standard numerical optimizers like stochastic gradient descent or ADAM [46]. This requires the computation of the gradient of the training objective with respect to the model parameters $\theta$, which can be computed via backpropagation [47].

The basis of supervised approaches is the availability of a target reference reconstruction $\mathbf{u}^{\mathrm{ref}}$. This requires the availability of a fully-sampled set of raw phased array coil k-space data. These data are then retrospectively undersampled by removing k-space data points as defined by the sampling trajectory in the forward operator, and serve as the input of the reconstruction network. The current output of the network is then compared to the reference via an error metric. The choice of this error metric has an influence on the properties of the trained network, which is a topic of currently ongoing work. A popular choice is the mean squared error (MSE), which was also used in Equation 13. Other choices are the $\ell_1$ norm of the difference [48] and the structural similarity index (SSIM) [49]. Research on generative adversarial networks [50] and learned content loss functions is currently in progress. The current literature in this area is further noted in the discussion section.
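The construction of one supervised training pair by retrospective undersampling, together with an MSE loss, can be sketched as follows. Names and shapes are illustrative, and the reconstruction network that would consume the input is omitted:

```python
import numpy as np

def training_pair(full_kspace, mask, coils):
    """One supervised example: retrospectively undersampled k-space as
    network input, coil-combined fully sampled image as reference."""
    ref = np.sum(np.conj(coils) *
                 np.fft.ifft2(full_kspace, norm="ortho"), axis=0)
    return mask[None] * full_kspace, ref

def mse_loss(recon, ref):
    """Mean squared error between a complex reconstruction and the reference."""
    return np.mean(np.abs(recon - ref) ** 2)
```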

An example reconstruction that compares the variational network learning approach from [43] to the CG-SENSE and constrained reconstructions from the previous sections is shown in Figure 3, together with the SSIM to the fully sampled reference. It can be observed that the learned reconstruction outperforms the other approaches in terms of artifact removal and preservation of small image features, which is also reflected in the highest SSIM. All source code for this method is available online.

V Machine learning methods for parallel imaging in k-space

There has been recent interest in using neural networks to improve k-space interpolation techniques with non-linear, data-driven approaches. These newer approaches can be divided into two groups based on how the interpolation functions are trained. The first group uses scan-specific ACS lines to train neural networks for interpolation, similar to existing interpolation approaches such as GRAPPA or non-linear GRAPPA. The second group uses training databases, similar to the machine learning methods discussed for image domain parallel imaging.

Robust artificial-neural-networks for k-space interpolation (RAKI) is a scan-specific machine learning approach for improved k-space interpolation [51]. This approach trains CNNs on ACS data, and uses these for interpolating missing k-space points from acquired ones. The interpolation function can be represented by

$$\left\{ k_j(k_x,\, k_y - m\Delta k_y) \right\}_{m=1}^{R-1} = f_j\!\left( \left\{ k_c(k_x - b_x \Delta k_x,\, k_y - R\, b_y \Delta k_y) \right\}_{c,\, b_x,\, b_y} \right) \qquad (14)$$

where $f_j$ is the interpolation rule implemented via a multi-layer CNN for outputting the k-space of the set of uniformly spaced missing lines in the $j$-th coil; $R$ is the undersampling rate; $b_x$ and $b_y$ are parameters specified by the receptive field of the CNN; and $n_c$ is the number of coils. Thus, the premise of RAKI is similar to GRAPPA, while the interpolation function is implemented using CNNs, whose parameters are learned from ACS data with an MSE loss function. The scan-specific nature of this method is attractive since it requires no training databases, and it can be applied in scenarios where a fully-sampled gold reference cannot be acquired, for instance in perfusion or real-time cardiac MRI, or high-resolution brain imaging.

Fig. 4: A slice from a high-resolution (0.6 mm isotropic) 7T brain acquisition, where all acquisitions were performed with prospective acceleration. It is difficult to acquire fully-sampled reference datasets for training for such acquisitions, thus two scan-specific k-space methods were compared. The CNN-based RAKI method visibly reduced noise amplification compared to the linear GRAPPA reconstruction.

Example RAKI and GRAPPA reconstructions for such high-resolution brain imaging datasets, acquired with prospective undersampling, are shown in Figure 4. These data were acquired on a 7T system (Siemens Magnex Scientific) with 0.6 mm isotropic resolution, using two averages for improved SNR to facilitate visualization of any residual artifacts. Other imaging parameters are available in [51]. For these datasets, RAKI leads to a reduction in noise amplification compared to GRAPPA. Note that the noise reduction here is based on exploiting properties of the coil geometry, and not on assumptions about image structure as in the traditional regularized inverse problem approaches of Section II-B. However, the scan-specificity also comes with downsides, such as the computational burden of training for each scan [52], as well as the requirement for typically more calibration data. In Figure 5, reconstructions of the knee dataset from Figure 3 are shown. All methods, which rely only on subject-specific calibration data, exhibit a degree of artifacts due to the small size of the ACS region, while RAKI has the highest SSIM among them.

While originally designed for uniform undersampling patterns, this method has been extended to arbitrary sampling, building on the self-consistency approach of SPIRiT [53]. Additionally, recent work has also reformulated this interpolation procedure as a residual CNN, with residual defined based on a GRAPPA interpolation kernel [54]. Thus, in this approach called residual RAKI (rRAKI), the CNN effectively learns to remove the noise amplification and artifacts associated with GRAPPA, giving a physical interpretation to the CNN output, which is similar to the use of residual networks in image denoising [55]. An example application of the rRAKI approach in simultaneous multi-slice imaging [56] is shown in Figure 6.

Fig. 5: Comparison of k-space parallel imaging reconstructions of a retrospectively accelerated coronal knee acquisition, as in Figure 3. Due to the small size of the ACS data relative to the acceleration rate, the methods, none of which utilizes a training database, exhibit artifacts. GRAPPA has residual aliasing, whereas SPIRiT shows noise amplification. These are reduced in RAKI, though residual artifacts remain. The respective SSIM values reflect this visual assessment.
Fig. 6: Reconstruction results of simultaneous multi-slice imaging of 16 slices in fMRI, where the central 8 slices are shown. The GRAPPA method exhibits noise amplification at this high acceleration rate. The rRAKI method, whose linear and residual components are shown separately, exhibits reduced noise. Due to imperfections in the ACS data for this application, the residual component includes both noise amplification and residual artifacts.

A different line of work, called DeepSPIRiT, explores using CNNs trained on large databases for k-space interpolation with a SPIRiT-type approach [57]. Since sensitivity profiles and number of coils vary for different anatomies and hardware configurations, k-space data in the database were normalized using coil compression to yield the same number of channels [58, 59]. Coil compression methods effectively capture most of the energy across coils in a few virtual channels, with the first virtual channel containing most of the energy, the second being the second most dominant, and so on, in a manner reminiscent of principal component analysis. After this normalization of the k-space database, CNNs are trained for different regions of k-space, which are subsequently applied in a multi-resolution approach, successively improving the resolution of the reconstructions, as illustrated in Figure 7. The method was shown to remove aliasing artifacts, though difficulty with high-resolution content was noted. Since DeepSPIRiT trains interpolation kernels on a database, it does not require calibration data for a given scan, potentially reducing acquisition time further.
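The SVD-based coil compression used for this kind of channel normalization can be sketched as follows; the sizes and the synthetic correlated coil data are illustrative assumptions, not the configuration of [57]:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multi-coil k-space: nc physical coils, N samples each. Coils are
# correlated because a few underlying sources are mixed into many channels.
nc, N, nv = 16, 500, 6
sources = rng.standard_normal((4, N)) + 1j * rng.standard_normal((4, N))
mix = rng.standard_normal((nc, 4)) + 1j * rng.standard_normal((nc, 4))
data = mix @ sources + 0.01 * (rng.standard_normal((nc, N))
                               + 1j * rng.standard_normal((nc, N)))

# SVD-based compression: virtual channels are ordered by singular value,
# so the first virtual channel captures the most energy (PCA-like behavior).
U, s, Vh = np.linalg.svd(data, full_matrices=False)
compressed = U[:, :nv].conj().T @ data   # (nv, N) virtual channels

# Fraction of the total energy retained by the nv virtual channels.
retained = np.sum(s[:nv] ** 2) / np.sum(s ** 2)
```

Because the toy coils are driven by only four sources, almost all energy survives the compression, mirroring the behavior of coil compression on real arrays with correlated channels.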

Fig. 7: The multi-resolution k-space interpolation in DeepSPIRiT uses distinct CNNs for different regions of k-space, successively refining the resolution of the reconstructed k-space.

Neural networks have also been applied to the Hankel matrix based approaches in k-space [60]. Specifically, the completion of the weighted k-space in the ALOHA method has been replaced with a CNN, trained with an MSE loss function. The method was shown to not only reduce the computation time, but also to improve the reconstruction quality compared to the original ALOHA by exploiting structures beyond the low-rankness of Hankel matrices.
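The low-rank Hankel structure that ALOHA-type methods exploit can be illustrated with a toy 1D example: the k-space of an image containing r point sources is a sum of r complex exponentials, and its Hankel matrix has rank exactly r. The specific sizes and frequencies below are arbitrary:

```python
import numpy as np

# k-space of an image with r point sources is a sum of r complex
# exponentials, so its Hankel matrix has rank r (the annihilating-filter
# property underlying ALOHA-type low-rank completion).
N, d, r = 64, 16, 3
n = np.arange(N)
freqs = np.array([0.11, 0.37, 0.52])
kspace = sum(np.exp(2j * np.pi * f * n) for f in freqs)

# Hankel (sliding-window) matrix of size (N - d + 1, d).
H = np.array([kspace[i:i + d] for i in range(N - d + 1)])

rank = np.linalg.matrix_rank(H, tol=1e-8)   # equals r for this signal
```

Missing k-space samples correspond to missing entries of `H`, so reconstruction becomes a structured low-rank matrix completion problem, which is the step replaced by a CNN in [60].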

VI Discussion

VI-A Issues and open problems

Several advantages of machine learning approaches over classic constrained reconstruction using predefined regularizers have been proposed in the literature. First, the regularizer is tailored to a specific image reconstruction task, which improves the removal of residual artifacts. This becomes particularly relevant in situations where the sampling trajectory used does not fulfill the incoherence requirements of compressed sensing, which is often the case for clinical parallel imaging protocols. Second, machine learning approaches decouple the compute-heavy training step from a lean inference step. In medical image reconstruction, it is critical to have images available immediately after the scan, while prolonged training procedures that can be performed on specialized computing hardware are generally acceptable. The training for the experiment in Figure 3 took 40 hours for 150 epochs with 200 slices of training data on a single NVIDIA M40 GPU with 12 GB of memory. Training data, model and training parameters exactly follow the training from [61]. Reconstruction of one slice then took 200 ms, in comparison to 10 ms for zero filling, 150 ms for CG-SENSE and 10000 ms for the PI-CS TGV constrained reconstruction.

The focus in Section IV and Section V was on methods that were developed specifically in the context of parallel imaging. Some architectures for image domain machine learning have been designed specifically towards a target application, for example dynamic imaging [62, 63]. In their current form, these have not yet been demonstrated in the context of multi-coil data. The approach recently proposed by Zhu et al. learns the complete mapping from k-space raw data to the reconstructed image [64]. The proposed advantage is that since no information about the acquisition is included in the forward operator, it is more robust against systematic calibration errors during the acquisition. This comes at the price of a significantly higher number of model parameters. The corresponding memory requirements make it challenging to use this model for matrix sizes that are currently used in clinical applications. We also note that there are fewer works on k-space machine learning methods for MRI reconstruction. This may be due to the different nature of the k-space signal, which usually has very different intensity characteristics in the center versus the outer k-space, making it difficult to generalize the plethora of techniques developed in computer vision and image processing that exploit properties of natural images.
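A rough back-of-the-envelope calculation illustrates the memory argument: a single dense layer mapping a full k-space to an image already requires billions of weights at clinical matrix sizes. The numbers below are an illustrative estimate under simplifying assumptions (one real-valued dense layer, float32 weights), not the exact architecture of [64]:

```python
# Memory footprint of a single dense layer mapping full k-space to the
# image domain, as in fully learned reconstruction mappings.
n = 256                        # matrix size (e.g. 256 x 256)
in_dim = out_dim = n * n       # one dense k-space -> image mapping
params = in_dim * out_dim      # weight count; biases are negligible
gigabytes = params * 4 / 1e9   # float32 storage for the weights alone
```

For n = 256 this already gives roughly 4.3 billion weights (about 17 GB in float32) for a single layer, which explains why such fully learned mappings are hard to scale to clinically used matrix sizes.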

Machine learning reconstruction approaches also come with a number of drawbacks when compared to classic constrained parallel imaging. First, they require the availability of a curated training data set that is representative, so that the trained model generalizes to new, unseen test data. Recent approaches from the literature [43, 44, 62, 63, 65] have either been trained with hundreds of examples rather than the millions of examples that are common in deep learning for computer vision [66, 4], or trained on synthetic non-medical data [64] that is publicly available from existing databases. However, data availability is still a challenge that will potentially limit the use of machine learning to certain applications. Several applications in imaging of moving organs, such as the heart, or in imaging of brain connectivity, such as diffusion MRI, cannot be acquired with fully-sampled data due to constraints on spatio-temporal resolutions. This hinders the use of fully-sampled training labels for such datasets, highlighting applications for scan-specific approaches or unsupervised training strategies.

These reconstruction methods also require the availability of computing resources during the training stage. This is a less critical issue due to the increased availability and reduced prices of GPUs. The experiments in this paper were performed with computing resources that cost less than 10,000 USD and are commonly available at academic institutions. In addition, the availability of on-demand cloud-based machine learning solutions is constantly increasing.

A more severe issue is that, in contrast to conventional parallel imaging and compressed sensing, machine learning models are mostly non-convex. Their properties, especially regarding their failure modes and generalization potential for daily clinical use, are less well understood than those of conventional iterative approaches based on convex optimization. For example, it was recently shown that while reconstructions generalize well with respect to changes in image contrast between training and test data, they are susceptible to systematic deviations in SNR [61]. It is also still an open question how specific trained models have to be. Is it sufficient to train a single model for all types of MR exams, or are separate models required for scans of different anatomical areas, pulse sequences, acquisition trajectories and acceleration factors, as well as scanner manufacturers, field strengths and receive coils [67]? While pre-training a large number of separate models for different exams would be feasible in clinical practice, if certain models do not generalize with respect to scan parameter settings that are usually tailored to the specific anatomy of an individual patient by the MR technologist, this will severely impact their translational potential and ultimately their clinical use.

Finally, the choice of the loss function that is used during training has an impact on the properties of the trained network, and a particular ongoing research direction is the use of GANs [50, 68, 69, 70]. This is an interesting direction because these models have the potential to create images that are visually indistinguishable from fully sampled reference images; however, features in the images that are not supported by the amount of acquired data may be hallucinated by the network. This situation obviously must be avoided in any application in medical imaging. One strategy to mitigate this effect is the combination of GANs with conventional error metrics like MSE [71, 72]. Comparable approaches have been used in the context of MRI reconstruction [73, 74, 75, 76, 77, 78].
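The combination can be illustrated with a toy objective that anchors the reconstruction to a reference via MSE while adding a weighted adversarial term; the function name, the non-saturating -log D(x) form, and the weight `lam` are illustrative assumptions rather than the specific losses of [71, 72]:

```python
import numpy as np

def combined_loss(recon, reference, disc_score, lam=0.01):
    """Pixel-wise MSE anchored to the reference, plus a (hypothetical)
    non-saturating adversarial term -log D(x), weighted by lam.

    disc_score is the discriminator's probability that recon is a real,
    fully sampled image; the small epsilon avoids log(0).
    """
    mse = np.mean(np.abs(recon - reference) ** 2)
    adv = -np.log(disc_score + 1e-12)
    return mse + lam * adv
```

The MSE term penalizes deviations from the acquired data's reference, which counteracts the tendency of the adversarial term alone to reward plausible-looking but hallucinated detail.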

VI-B Availability of training databases and community challenges

As mentioned in the previous section, one open issue in the field of machine learning reconstruction for parallel imaging is the lack of publicly available databases of multi-channel raw k-space data. This restricts the number of researchers who can work in this field to those based at large academic medical centers where such data are available, and for the most part excludes the core machine learning community that has the necessary theoretical and algorithmic background to advance the field. In addition, since the training data used becomes an essential part of the performance of a given model, it is currently almost impossible to compare new approaches proposed in the literature with each other if the training data is not shared when publishing the manuscript. While the momentum in initiatives for public releases of raw k-space data is growing [79], the number of available data sets is still on the order of hundreds and limited to very specific types of exams. Examples of publicly available raw data sets are mridata.org and the fastMRI dataset [79].

VII Conclusion

Machine learning methods have recently been proposed to improve the reconstruction quality in parallel imaging MRI. These techniques include both image domain approaches for better image regularization and k-space approaches for better k-space completion. While the field is still in development, there remain many open problems and high-impact applications, which are likely to be of interest to the broader signal processing community.


  • [1] D. K. Sodickson and W. J. Manning, “Simultaneous Acquisition of Spatial Harmonics (SMASH): Fast Imaging with Radiofrequency Coil Arrays,” Magn Reson Med, vol. 38, no. 4, pp. 591–603, 1997.
  • [2] K. P. Pruessmann, M. Weiger, M. B. Scheidegger, and P. Boesiger, “SENSE: Sensitivity encoding for fast MRI,” Magn Reson Med, vol. 42, no. 5, pp. 952–962, 1999.
  • [3] M. A. Griswold, M. Blaimer, F. Breuer, et al., “Parallel magnetic resonance imaging using the GRAPPA operator formalism,” Magn Reson Med, vol. 54, no. 6, pp. 1553–1556, 2005.
  • [4] Y. LeCun, Y. Bengio, and G. Hinton, “Deep Learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  • [5] P. B. Roemer, W. A. Edelstein, C. E. Hayes, S. P. Souza, and O. M. Mueller, “The NMR phased array,” Magn Reson Med, vol. 16, no. 2, pp. 192–225, nov 1990.
  • [6] M. A. Griswold, P. M. Jakob, R. M. Heidemann, et al., “Generalized autocalibrating partially parallel acquisitions (GRAPPA),” Magn Reson Med, vol. 47, no. 6, pp. 1202–1210, jun 2002.
  • [7] E. G. Kholmovski and D. L. Parker, “Spatially variant GRAPPA,” in Proc. 14th Scientific Meeting and Exhibition of ISMRM, Seattle, 2006, p. 285.
  • [8] M. Uecker, P. Lai, M. J. Murphy, et al., “ESPIRiT – An Eigenvalue Approach to Autocalibrating Parallel MRI: Where SENSE meets GRAPPA,” Magn Reson Med, vol. 71, no. 3, pp. 990–1001, 2014.
  • [9] L. Ying and J. Sheng, “Joint image reconstruction and sensitivity estimation in SENSE (JSENSE),” Magn Reson Med, vol. 57, no. 6, pp. 1196–1202, jun 2007.
  • [10] M. Uecker, T. Hohage, K. T. Block, and J. Frahm, “Image reconstruction by regularized nonlinear inversion–joint estimation of coil sensitivities and image content,” Magn Reson Med, vol. 60, no. 3, pp. 674–682, sep 2008.
  • [11] K. P. Pruessmann, M. Weiger, P. Boernert, and P. Boesiger, “Advances in sensitivity encoding with arbitrary k-space trajectories,” Magn Reson Med, vol. 46, no. 4, pp. 638–651, 2001.
  • [12] L. Landweber, “An Iteration Formula for Fredholm Integral Equations of the First Kind,” American Journal of Mathematics, vol. 73, no. 3, pp. 615–624, 1951.
  • [13] A. Chambolle and T. Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging,” Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120–145, 2010.
  • [14] S. Boyd, N. Parikh, B. P. E Chu, and J. Eckstein, “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
  • [15] M. R. Hestenes and E. Stiefel, “Methods of Conjugate Gradients for Solving Linear Systems,” Journal of Research of the National Bureau of Standards, vol. 49, no. 6, pp. 409–436, 1952.
  • [16] E. J. Candes, J. Romberg, and T. Tao, “Robust Uncertainty Principles: Exact Signal Reconstruction From Highly Incomplete Frequency Information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
  • [17] D. L. Donoho, “Compressed Sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, apr 2006.
  • [18] M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magn Reson Med, vol. 58, no. 6, pp. 1182–1195, 2007.
  • [19] K. T. Block, M. Uecker, and J. Frahm, “Undersampled radial MRI with multiple coils. Iterative image reconstruction using a total variation constraint.,” Magn Reson Med, vol. 57, no. 6, pp. 1086–1098, 2007.
  • [20] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, “Compressed Sensing MRI,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 72–82, 2008.
  • [21] M. Lustig and J. M. Pauly, “SPIRiT: Iterative self-consistent parallel imaging reconstruction from arbitrary k-space,” Magn Reson Med, vol. 64, no. 2, pp. 457–471, aug 2010.
  • [22] F. Knoll, K. Bredies, T. Pock, and R. Stollberger, “Second order total generalized variation (TGV) for MRI,” Magnetic Resonance in Medicine, vol. 65, no. 2, pp. 480–491, 2011.
  • [23] F. Knoll, C. Clason, K. Bredies, M. Uecker, and R. Stollberger, “Parallel imaging with nonlinear reconstruction using variational penalties,” Magn Reson Med, vol. 67, no. 1, pp. 34–41, 2012.
  • [24] M. Akcakaya, T. A. Basha, B. Goddu, et al., “Low-dimensional-structure self-learning and thresholding: Regularization beyond compressed sensing for MRI reconstruction,” Magn Reson Med, vol. 66, no. 3, pp. 756–767, Sep 2011.
  • [25] M. Akcakaya, T. A. Basha, R. H. Chan, W. J. Manning, and R. Nezafat, “Accelerated isotropic sub-millimeter whole-heart coronary MRI: compressed sensing versus parallel imaging,” Magn Reson Med, vol. 71, no. 2, pp. 815–822, Feb 2014.
  • [26] H. Jung, K. Sung, K. S. Nayak, E. Y. Kim, and J. C. Ye, “k-t FOCUSS: A general compressed sensing framework for high resolution dynamic MRI,” Magn Reson Med, vol. 61, no. 1, pp. 103–116, 2009.
  • [27] U. Gamper, P. Boesiger, and S. Kozerke, “Compressed Sensing in Dynamic MRI,” Magn Reson Med, vol. 59, no. 2, pp. 365–373, 2008.
  • [28] G. Valvano, N. Martini, L. Landini, and M. F. Santarelli, “Variable density randomized stack of spirals (VDR-SoS) for compressive sensing MRI,” Magn Reson Med, 2016.
  • [29] L. Feng, R. Grimm, K. Tobias Block, et al., “Golden-Angle Radial Sparse Parallel MRI: Combination of Compressed Sensing, Parallel Imaging, and Golden-Angle Radial Sampling for Fast and Flexible Dynamic Volumetric MRI,” Magn Reson Med, vol. 72, no. 3, pp. 707–717, 2014.
  • [30] S. G. Lingala, Y. Hu, E. DiBella, and M. Jacob, “Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR,” IEEE Trans Med Imaging, vol. 30, no. 5, pp. 1042–1054, May 2011.
  • [31] P. M. Robson, A. K. Grant, A. J. Madhuranthakam, et al., “Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions,” Magn Reson Med, vol. 60, no. 4, pp. 895–907, Oct 2008.
  • [32] F. A. Breuer, S. A. R. Kannengiesser, M. Blaimer, et al., “General formulation for quantitative G-factor calculation in GRAPPA reconstructions,” Magn Reson Med, vol. 62, no. 3, pp. 739–746, sep 2009.
  • [33] F. A. Breuer, P. Kellman, M. A. Griswold, and P. M. Jakob, “Dynamic autocalibrated parallel imaging using temporal GRAPPA (TGRAPPA),” Magn Reson Med, vol. 53, no. 4, pp. 981–985, apr 2005.
  • [34] K. Ugurbil, J. Xu, E. J. Auerbach, et al., “Pushing spatial and temporal resolution for functional and diffusion MRI in the Human Connectome Project,” Neuroimage, vol. 80, pp. 80–104, Oct 2013.
  • [35] Y. Chang, D. Liang, and L. Ying, “Nonlinear GRAPPA: A kernel approach to parallel MRI reconstruction,” Magn Reson Med, vol. 68, no. 3, pp. 730–740, Sep 2012.
  • [36] P. J. Shin, P. E. Larson, M. A. Ohliger, et al., “Calibrationless parallel imaging reconstruction based on structured low-rank matrix completion,” Magn Reson Med, vol. 72, no. 4, pp. 959–970, Oct 2014.
  • [37] J. P. Haldar, “Low-rank modeling of local k-space neighborhoods (LORAKS) for constrained MRI,” IEEE Trans Med Imaging, vol. 33, no. 3, pp. 668–681, Mar 2014.
  • [38] J. P. Haldar and J. Zhuo, “P-LORAKS: Low-rank modeling of local k-space neighborhoods with parallel imaging data,” Magn Reson Med, vol. 75, no. 4, pp. 1499–1514, Apr 2016.
  • [39] K. H. Jin, D. Lee, and J. C. Ye, “A general framework for compressed sensing and parallel MRI using annihilating filter based low-rank hankel matrix,” IEEE Transactions on Computational Imaging, vol. 2, no. 4, pp. 480–495, Dec 2016.
  • [40] Y. LeCun, “Handwritten Digit Recognition with a Back-Propagation Network,” Advances in Neural Information Processing Systems, 1989.
  • [41] S. Roth and M. J. Black, “Fields of Experts,” International Journal of Computer Vision, vol. 82, no. 2, pp. 205–229, 2009.
  • [42] K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” in Proc. 27th International Conference on International Conference on Machine Learning, USA, 2010, ICML’10, pp. 399–406, Omnipress.
  • [43] K. Hammernik, T. Klatzer, E. Kobler, et al., “Learning a variational network for reconstruction of accelerated MRI data,” Mag Reson Med, vol. 79, pp. 3055–71, 2018.
  • [44] H. K. Aggarwal, M. P. Mani, and M. Jacob, “MoDL: Model-Based Deep Learning Architecture for Inverse Problems,” IEEE Transactions on Medical Imaging, 2019.
  • [45] D. Wu, K. Kim, G. El Fakhri, and Q. Li, “Iterative Low-dose CT Reconstruction with Priors Trained by Artificial Neural Network,” IEEE Transactions on Medical Imaging, vol. 36, no. 12, pp. 2479–2486, 2017.
  • [46] D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv, dec 2014.
  • [47] Y. A. LeCun, L. Bottou, G. B. Orr, and K. R. Müller, “Efficient BackProp,” in Neural Networks: Tricks of the Trade, pp. 9–50. Springer Berlin Heidelberg, 2012.
  • [48] K. Hammernik, F. Knoll, D. Sodickson, and T. Pock, “L2 or Not L2: Impact of Loss Function Design for Deep Learning MRI Reconstruction,” in Proceedings of the International Society of Magn Reson Med (ISMRM), 2017, p. 687.
  • [49] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans Image Proc, vol. 13, no. 4, pp. 600–612, 2004.
  • [50] I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., “Generative Adversarial Nets,” Advances in Neural Information Processing Systems 27, pp. 2672–2680, 2014.
  • [51] M. Akcakaya, S. Moeller, S. Weingartner, and K. Ugurbil, “Scan-specific robust artificial-neural-networks for k-space interpolation (RAKI) reconstruction: Database-free deep learning for fast imaging,” Magn Reson Med, vol. 81, no. 1, pp. 439–453, Jan 2019.
  • [52] C. Zhang, S. Weingärtner, S. Moeller, K. Uğurbil, and M. Akçakaya, “Fast gpu implementation of a scan-specific deep learning reconstruction for accelerated magnetic resonance imaging,” in 2018 IEEE International Conference on Electro/Information Technology (EIT), May 2018, pp. 0399–0403.
  • [53] S. A. H. Hosseini, S. Moeller, S. Weingärtner, K. Ugurbil, and M. Akçakaya, “Accelerated coronary MRI using 3D SPIRiT-RAKI with sparsity regularization,” in Proc. IEEE International Symposium on Biomedical Imaging, 2019.
  • [54] C. Zhang, S. Moeller, S. Weingärtner, K. Uğurbil, and M. Akçakaya, “Accelerated MRI using residual RAKI: Scan-specific learning of reconstruction artifacts,” in Proc. 27th Annual Meeting of the ISMRM, Montreal, Canada, 2019.
  • [55] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142–3155, July 2017.
  • [56] C. Zhang, S. Moeller, S. Weingärtner, K. Uğurbil, and M. Akçakaya, “Accelerated simultaneous multi-slice mri using subject-specific convolutional neural networks,” in 2018 52nd Asilomar Conference on Signals, Systems, and Computers, Oct 2018, pp. 1636–1640.
  • [57] J. Y. Cheng, M. Mardani, M. T. Alley, J. M. Pauly, and S. S. Vasanawala, “DeepSPIRiT: Generalized parallel imaging using deep convolutional neural networks,” in Proc. 26th Annual Meeting of the ISMRM, Paris, France, 2018.
  • [58] M. Buehrer, K. P. Pruessmann, P. Boesiger, and S. Kozerke, “Array compression for MRI with large coil arrays,” Magn Reson Med, vol. 57, no. 6, pp. 1131–1139, jun 2007.
  • [59] F. Huang, S. Vijayakumar, Y. Li, S. Hertel, and G. R. Duensing, “A software channel compression technique for faster reconstruction with many channels,” Magn Reson Imaging, vol. 26, no. 1, pp. 133–141, Jan 2008.
  • [60] Y. Han and J. C. Ye, “k-space deep learning for accelerated MRI,” CoRR, vol. abs/1805.03779, 2018.
  • [61] F. Knoll, K. Hammernik, E. Kobler, et al., “Assessment of the generalization of learned image reconstruction and the potential for transfer learning,” Magnetic Resonance in Medicine, 2019.
  • [62] J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, and D. Rueckert, “A deep cascade of convolutional neural networks for dynamic MR image reconstruction,” IEEE Trans Med Imaging, vol. 37, no. 2, pp. 491–503, 2018.
  • [63] C. Qin, J. Schlemper, J. Caballero, et al., “Convolutional recurrent neural networks for dynamic MR image reconstruction,” IEEE Transactions on Medical Imaging, 2019.
  • [64] B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen, “Image reconstruction by domain-transform manifold learning,” Nature, vol. 555, no. 7697, pp. 487–492, 2018.
  • [65] F. Chen, V. Taviani, I. Malkiel, et al., “Variable-Density Single-Shot Fast Spin-Echo MRI with Deep Learning Reconstruction by Using Variational Networks,” Radiology, p. 180445, 2018.
  • [66] J. Deng, W. Dong, R. Socher, et al., “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.
  • [67] Y. C. Eldar, A. O. Hero III, L. Deng, et al., “Challenges and open problems in signal processing: Panel discussion summary from icassp 2017 [panel and forum],” IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 8–23, Nov 2017.
  • [68] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein Generative Adversarial Networks,” Proc. 34th International Conference on Machine Learning, pp. 1–32, 2017.
  • [69] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, “Improved Training of Wasserstein GANs,” arXiv preprint arXiv:1704.00028, 2017.
  • [70] X. Mao, Q. Li, H. Xie, et al., “Least Squares Generative Adversarial Networks,” in IEEE International Conference on Computer Vision, 2017, pp. 2794–2802.
  • [71] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” arXiv preprint arXiv:1611.07004, 2016.
  • [72] C. Ledig, L. Theis, F. Huszár, et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in CVPR, pp. 4681–4690, 2017.
  • [73] T. M. Quan, T. Nguyen-Duc, and W.-K. Jeong, “Compressed Sensing MRI Reconstruction using a Generative Adversarial Network with a Cyclic Loss,” IEEE Transactions on Medical Imaging, vol. early view, 2018.
  • [74] O. Shitrit and T. Riklin Raviv, “Accelerated magnetic resonance imaging by adversarial neural network,” in Lecture Notes in Computer Science, 2017, vol. 10553 LNCS, pp. 30–38.
  • [75] G. Yang, S. Yu, H. Dong, et al., “DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction,” IEEE Transactions on Medical Imaging, 2018.
  • [76] K. Hammernik, E. Kobler, T. Pock, et al., “Variational Adversarial Networks for Accelerated MR Image Reconstruction,” in Proceedings of the International Society of Magnetic Resonance in Medicine (ISMRM), 2018, p. 1091.
  • [77] K. H. Kim, W. J. Do, and S. H. Park, “Improving resolution of MR images with an adversarial network incorporating images with different contrast,” Medical Physics, vol. 45, no. 7, pp. 3120–3131, 2018.
  • [78] M. Mardani, E. Gong, J. Y. Cheng, et al., “Deep Generative Adversarial Neural Networks for Compressive Sensing (GANCS) MRI,” IEEE Transactions on Medical Imaging, vol. PP, no. c, pp. 1, 2018.
  • [79] J. Zbontar, F. Knoll, A. Sriram, et al., “fastMRI: An open dataset and benchmarks for accelerated MRI,” arXiv:1811.08839 preprint, 2018.