1 Introduction
The excellent soft tissue contrast and flexibility of magnetic resonance imaging (MRI) make it a very powerful diagnostic tool for a wide range of disorders, including neurological, musculoskeletal, and oncological diseases. However, the long acquisition time in MRI, which can easily exceed 30 minutes, leads to low patient throughput, problems with patient comfort and compliance, artifacts from patient motion, and high exam costs.
As a consequence, increasing imaging speed has been a major ongoing research goal since the advent of MRI in the 1970s. Increases in imaging speed have been achieved through both hardware developments (such as improved magnetic field gradients) and software advances (such as new pulse sequences). One noteworthy development in this context is parallel imaging, introduced in the 1990s, which allows multiple data points to be sampled simultaneously, rather than in a traditional sequential order [39, 26, 9].
The introduction of compressed sensing (CS) in 2006 [2, 23] promised another breakthrough in the reduction of MR scan time. At their core, CS techniques speed up the MR acquisition by acquiring less measurement data than has previously been required to reconstruct diagnostic quality images. Since undersampling of this kind violates the Nyquist-Shannon sampling theorem, aliasing artifacts are introduced which must be eliminated in the course of image reconstruction. This can be achieved by incorporating additional a priori knowledge during the image reconstruction process.
The last two years have seen the rapid development of machine learning approaches for MR image reconstruction, which hold great promise for further acceleration of MR image acquisition [10, 48, 11, 35, 60]. Some of the first work on this subject was presented at the 2016 annual meeting of the International Society for Magnetic Resonance in Medicine (ISMRM). The 2017 ISMRM annual meeting included, for the first time, a dedicated session on machine learning for image reconstruction, and presentations on the subject at the 2018 annual meeting spanned multiple focused sessions, including a dedicated category for abstracts.
Despite this substantial increase in research activity, the field of MR image reconstruction still lacks large-scale, public datasets with consistent evaluation metrics and baselines. Many MR image reconstruction studies use datasets that are not openly available to the research community. This makes it challenging to reproduce and validate comparisons of different approaches, and it restricts access to work on this important problem to researchers associated with or cooperating with large academic medical centers where such data is available.
In contrast, research in computer vision applications such as object classification has greatly benefited from the availability of large-scale datasets associated with challenges such as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [31]. Such challenges have served as a catalyst for the recent explosion in research activity on deep learning [21].
The goal of the fastMRI dataset is to provide a first step towards enabling similar breakthroughs in the machine-learning-based reconstruction of accelerated MR images. In this work we describe the first large-scale release of raw MRI data that includes 1,594 volumes, consisting of 56,987 slices (a slice corresponds to one image), associated with in vivo examinations from a range of MRI systems. In addition, we are releasing processed MR images in the DICOM format from 10,000 knee examinations from a representative clinical patient population, consisting of more than 1.2 million slices.
Prior to providing details about the dataset and about target reconstruction tasks with associated benchmarks, we begin with a brief primer on MR image acquisition and reconstruction, in order to enable non-MRI experts to get up to speed quickly on the information content of the dataset. In general, both the fastMRI dataset and this paper aim to connect the data science and MRI research communities, with the overall goal of advancing the state of the art in accelerated MRI.
2 Introduction to MR Image Acquisition and Reconstruction
MR imaging is an indirect process, whereby cross-sectional images of the subject’s anatomy are produced from frequency and phase measurements instead of direct, spatially-resolved measurements. A measuring instrument, known as a receiver coil, is placed in proximity to the area to be imaged (Figure 1). During imaging, a sequence of spatially- and temporally-varying magnetic fields, called a “pulse sequence,” is applied by the MRI machine. This induces the body to emit resonant electromagnetic response fields, which are measured by the receiver coil. The measurements typically correspond to points along a prescribed path through the multidimensional Fourier-space representation of the imaged body. This Fourier space is known as k-space in the medical imaging community. In the most basic usage of MR imaging, the full Fourier-space representation of a region is captured by a sequence of samples that tile the space up to a specified maximum frequency.
The spatially-resolved image m can be estimated from the full k-space data y by performing an inverse multidimensional Fourier transform:

\hat{m} = \mathcal{F}^{-1}(y),    (1)

where \hat{m} is a noise-corrupted estimate of the true image m.
The number of samples captured in k-space is a limiting factor for the speed of MR imaging. Fewer samples can be captured by sampling up to a lower maximum frequency; however, this produces images of lower spatial resolution. An alternative undersampling approach involves omitting some number of k-space samples within a given maximum frequency range, which results in aliasing artifacts. In order to remove these artifacts and infer the true underlying spatial structure of the imaged subject, one may apply a number of possible reconstruction strategies.
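As a concrete illustration, the following sketch (using numpy on a synthetic image, not fastMRI data) reconstructs an image from full k-space via an inverse Fourier transform, and shows the aliasing that appears when every other k-space line is omitted:

```python
import numpy as np

# Synthetic "image": a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# Forward model: fully sampled k-space is the 2D Fourier transform of the image.
kspace = np.fft.fft2(image)

# Reconstruction from full k-space: inverse 2D Fourier transform (Equation 1).
recon_full = np.fft.ifft2(kspace)

# Undersampling: zero out every other row (line) of k-space instead of
# lowering the maximum frequency.
mask = np.zeros((64, 64))
mask[::2, :] = 1.0
recon_under = np.fft.ifft2(kspace * mask)

# The fully sampled reconstruction recovers the image; the undersampled one
# shows aliasing (a ghost copy of the square along the undersampled axis).
print(np.abs(recon_full).max(), np.abs(recon_under).max())
```

With a regular two-fold undersampling as above, the reconstruction is the average of the image and a copy shifted by half the field of view, which is the classic coherent aliasing pattern that parallel imaging and CS reconstructions must remove.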
2.1 Parallel MR Imaging
In parallel MR imaging, multiple receiver coils are used, each of which produces a separate k-space measurement matrix. Each of these matrices is different, since the view each coil provides of the imaged volume is modulated by the differential sensitivity that coil exhibits to MR signal arising from different regions. In other words, each coil measures Fourier components of the imaged volume multiplied by a complex-valued position-dependent coil sensitivity map S_i. The measured k-space signal y_i for coil i in an array of n_c coils is given by

y_i = \mathcal{F}(S_i \circ m), \quad i = 1, \ldots, n_c,    (2)

where the multiplication \circ is entry-wise. This is illustrated in Figure 2b, which shows the absolute value of the inverse discrete Fourier transform (DFT) of fully-sampled complex-valued k-space signals for each coil in a 15-element coil array. Each coil is typically highly sensitive in one region, and its sensitivity falls off significantly in other regions.
If the sensitivity maps S_i are known, and the k-space sampling is full (i.e., satisfying the Nyquist sampling condition), then the set of linear relations between m and each y_i defines a linear system that is overdetermined by a factor of n_c. It may be inverted using a pseudoinverse operation to produce a reconstruction of m, as long as the linear system is full rank. The quality of this reconstruction will depend on the measurement noise, since the signal-to-noise ratio is poor in parts of the volume where the coil sensitivity is low.
In accelerated parallel imaging, each coil’s k-space signal is undersampled. As long as the total number of measurements across all coils exceeds the number of image voxels to be reconstructed, an unregularized least-squares solution can still be used, leading to a theoretical n_c-fold speedup over fully-sampled single-coil imaging. Each extra coil effectively produces an additional “sensitivity-encoded” measurement of the volume [26], which augments the frequency- and phase-encoded measurements obtained from the sequential application of magnetic field gradients in the MR pulse sequence. Estimates of coil sensitivity patterns, required for inversion of the undersampled multicoil linear system, may be generated from separate low-resolution calibration scans. They may also be derived directly from the k-space measurements by fully sampling a comparatively small central region of k-space, which corresponds to low spatial frequencies.
In practice, the use of subsampling results in significant amplification of noise, and regularization is usually needed. In cases where a tight imaging field of view is used, or at imaging depths exceeding the dimensions of the individual coils, the sensitivity patterns of different coils become somewhat linearly dependent, thereby lowering the effective rank of the linear system, increasing noise amplification associated with the inverse operation, and limiting the maximum practical acceleration. As a result, in the clinic, parallel imaging acceleration factors are typically on the order of two to three.
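The multicoil forward model of Equation 2, and its pixel-wise least-squares inversion in the fully-sampled case, can be simulated in a few lines. The Gaussian sensitivity maps below are illustrative stand-ins, not measured coil profiles:

```python
import numpy as np

rng = np.random.default_rng(0)
n_coils, nx, ny = 4, 64, 64

# Ground-truth image m and smooth, position-dependent sensitivity maps S_i
# (Gaussian bumps centered at different locations, one per coil).
m = rng.standard_normal((nx, ny))
xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
centers = [(16, 16), (16, 48), (48, 16), (48, 48)]
S = np.stack([np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * 24.0 ** 2))
              for cx, cy in centers])

# Equation 2: each coil sees the image weighted entry-wise by its
# sensitivity map, measured in the Fourier domain.
y = np.fft.fft2(S * m, axes=(-2, -1))

# With full sampling and known maps, a pixel-wise least-squares inversion
# recovers m wherever the combined sensitivity is nonzero.
coil_images = np.fft.ifft2(y, axes=(-2, -1))
m_hat = np.real(np.sum(np.conj(S) * coil_images, axis=0)
                / np.sum(np.abs(S) ** 2, axis=0))
print(np.allclose(m_hat, m))
```

The division by the summed squared sensitivities is what degrades SNR in regions where all coils are weakly sensitive, as noted above.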
2.2 Machine Learning Reconstruction of Undersampled MRI Data
Classical approaches to MRI reconstruction solve a regularized inverse optimization problem to find the spatially-resolved image from the subsampled k-space data, in both the single-coil and the multicoil case. We describe the classical approach in more detail in Section 6. In the machine learning approach, a reconstruction function

\hat{m} = B(y)    (3)

is learned from input and output pair tuples (y, m) drawn from a population. The goal is to find a function that minimizes the risk (i.e., expected loss) over the population distribution:

B^* = \arg\min_B \, \mathbb{E}_{(y, m)} \left[ L(B(y), m) \right].

We discuss error metrics that may be used as loss functions L in Section 5. In practice this optimization problem must be approximated with the empirical risk over a sample \{(y_1, m_1), \ldots, (y_N, m_N)\} from the population, with respect to a loss function L:

\hat{B} = \arg\min_B \, \frac{1}{N} \sum_{i=1}^{N} L(B(y_i), m_i).    (4)
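As a toy illustration of empirical risk minimization for reconstruction, one can learn a per-frequency filter in closed form from noisy synthetic samples. The model family and data below are hypothetical, chosen only to make the empirical-risk formulation concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sz = 200, 32

# Toy population: smooth 1D "images" M and noisy k-space measurements Y.
t = np.linspace(0, 1, sz)
M = np.stack([np.sin(2 * np.pi * rng.integers(1, 4) * t + rng.uniform(0, 2 * np.pi))
              for _ in range(n)])
Mk = np.fft.fft(M, axis=1)                    # clean k-space
Y = Mk + 3.0 * rng.standard_normal((n, sz))   # noisy measurements

# Reconstruction family B(y) = IFFT(w * y): one complex weight per frequency.
# Minimizing the empirical squared loss over the sample has a closed-form
# least-squares solution per frequency (a Wiener-style shrinkage filter).
w = np.sum(np.conj(Y) * Mk, axis=0) / np.sum(np.abs(Y) ** 2, axis=0)

recon = np.real(np.fft.ifft(w * Y, axis=1))
naive = np.real(np.fft.ifft(Y, axis=1))
print(np.mean((recon - M) ** 2), np.mean((naive - M) ** 2))
```

The learned filter suppresses frequencies that carry mostly noise, so its empirical risk is lower than that of the naive inverse-Fourier reconstruction; deep-learning reconstruction replaces this hand-sized linear family with a far richer function class.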
3 Prior Public Datasets
The availability of public datasets has played an important role in advancing research in medical imaging, providing benchmarks to compare different approaches and leading to more impactful contributions. Early works such as DDSM [13], SLIVER07 [14] and CAUSE07 [8] triggered increasing efforts to collect new larger-scale biomedical datasets, which resulted in over one hundred public releases (counting the entries on https://grand-challenge.org/) to advance medical image analysis research. The vast majority of these datasets, which include a range of medical imaging modalities, are designed to test the limits of current methods in the tasks of segmentation, classification, and detection. Datasets such as BraTS [24], LUNA [37], ChestX-ray [50], DeepLesion [55], and Camelyon [1] are widely used in research with clinical applications.
However, the current lack of large-scale reference standards for MR image reconstruction hinders progress in this important area. Most research uses synthetic k-space data that is not directly acquired but rather obtained from postprocessing of already-reconstructed images [5, 38, 57, 56, 27]. Research using small-scale proprietary raw k-space datasets is also common [15, 36, 35, 33, 22].
In order to address the above-mentioned shortcomings, recent efforts have been devoted to collecting and publicly releasing datasets containing raw (unprocessed) k-space data; see, e.g., [34, 11]. However, the size of these existing datasets remains small. Although these datasets might provide a valuable test bed for signal processing methods, larger datasets are required to fully realize the potential of deep learning. Table 1 lists publicly available MR datasets containing raw k-space data.
Table 1: Publicly available MR datasets containing raw k-space data

Dataset                        Volumes  Body part  MR scan type
NYU dataset [11]               100      knee       PD, T2
Stanford dataset 2D FSE        89       knee       PD
Stanford dataset 3D FSE [34]   20       knee       PD
Stanford undersampled dataset  38       knee       PD
fastMRI dataset                1,594    knee       PD
4 The fastMRI Dataset and Associated Tasks
The fastMRI dataset (http://fastmri.med.nyu.edu/) contains four types of data from MRI acquisitions of knees.
- Raw multicoil k-space data: unprocessed complex-valued multicoil MR measurements.
- Emulated single-coil k-space data: combined k-space data derived from multicoil k-space data in such a way as to approximate single-coil acquisitions, for evaluation of single-coil reconstruction algorithms.
- Ground-truth images: real-valued images reconstructed from fully-sampled multicoil acquisitions using the simple root-sum-of-squares method detailed below. These may be used as references to evaluate the quality of reconstructions.
- DICOM images: spatially-resolved images for which the raw data was discarded during the acquisition process. These images are provided to represent a larger variety of machines and settings than are present in the raw data.
This data was designed to enable two distinct types of tasks:
- Single-coil reconstruction task: reconstruct images approximating the ground-truth from undersampled single-coil data.
- Multicoil reconstruction task: reconstruct images approximating the ground-truth from undersampled multicoil data.
For each task we provide an official split of the k-space data and ground-truth images into training and validation subsets that contain fully-sampled acquisitions, as well as test and challenge subsets that contain k-space data subjected to undersampling masks as described below. Ground-truth images are not being released for the test and challenge datasets. During training of a machine-learning model, the training k-space data should be programmatically masked following the same procedure. The challenge subsets are not being released at the time of writing and are reserved for future challenges associated with the fastMRI dataset.
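A sketch of such programmatic masking, following the random Cartesian procedure described in Section 4.6 (a fully-sampled low-frequency center plus uniformly random lines); the array sizes below are illustrative, not prescribed by the dataset:

```python
import numpy as np

def random_cartesian_mask(num_cols, acceleration, center_fraction, rng):
    """Column (phase-encode) mask: fully sampled low-frequency center plus
    uniformly random columns chosen to hit the target acceleration on average."""
    num_center = int(round(num_cols * center_fraction))
    # Probability for the remaining columns so that the expected total number
    # of sampled columns equals num_cols / acceleration.
    prob = (num_cols / acceleration - num_center) / (num_cols - num_center)
    mask = rng.uniform(size=num_cols) < prob
    pad = (num_cols - num_center) // 2
    mask[pad:pad + num_center] = True  # fully sampled central band
    return mask

rng = np.random.default_rng(42)
# 4-fold acceleration with an 8% fully sampled center, as in Section 4.6.
mask = random_cartesian_mask(368, acceleration=4, center_fraction=0.08, rng=rng)

# Apply the same mask to every slice of a (slices, rows, cols) k-space volume.
kspace = rng.standard_normal((2, 640, 368)) + 1j * rng.standard_normal((2, 640, 368))
masked = kspace * mask[np.newaxis, np.newaxis, :]
print(mask.sum(), mask.size)
```

For 8-fold acceleration the center fraction would be 0.04 instead; reusing one mask across all slices of a volume mirrors how the released test data was produced.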
The rationale for having a single-coil reconstruction task (and for providing simulated single-coil data), even though reconstruction from multicoil data is expected to be more precise, is twofold: (i) to lower the barrier of entry for researchers who may not be familiar with MRI data, since the use of a single coil removes a layer of complexity, and (ii) to include a task that is relevant for the single-coil MRI machines still in use throughout the world.
The DICOM images may be useful as additional data for training. Their distribution is different from that of the ground-truth images, since they were acquired with a larger diversity of scanners, manners of acquisition, reconstruction methods, and postprocessing algorithms, so the application of transfer-learning techniques may be necessary. Most DICOM images are the result of accelerated parallel imaging acquisitions and corresponding reconstructions, with image quality that differs from that of putative fully-sampled acquisitions and reconstructions. The ground-truth images may, in many cases, represent a higher standard of image quality than the clinical gold standard, for which full sampling is not routine or even practical.
4.1 Anonymization
Curation of the datasets described here was part of a study approved by the NYU School of Medicine Institutional Review Board. Raw data was anonymized via conversion to the vendor-neutral ISMRMRD format [18]. DICOM data was anonymized using the RSNA clinical trial processor. We performed manual inspection of each DICOM image for the presence of unexpected protected health information (PHI), manual checking of metadata in raw data files, and spot checking of all metadata and image content.
4.2 Raw Multicoil k-Space Data
Multicoil raw data was stored for 1,594 scans acquired for the purpose of diagnostic knee MRI. For each scan, a single fully sampled MRI volume was acquired on one of three clinical 3T systems (Siemens Magnetom Skyra, Prisma and Biograph mMR) or one clinical 1.5T system (Siemens Magnetom Aera). Data acquisition used a 15-channel knee coil array and a conventional Cartesian 2D TSE protocol employed clinically at NYU School of Medicine. The dataset includes data from two pulse sequences, yielding coronal proton-density weighting with (PDFS, 798 scans) and without (PD, 796 scans) fat suppression (see Figure 3). Sequence parameters are, as per standard clinical protocol, matched as closely as possible between the systems.
Table 2: Number of scans per MRI system

System           Number of scans
Skyra 3T         663
Prisma 3T        83
Biograph mMR 3T  153
Aera 1.5T        695
The following sequence parameters were used: echo train length 4, matrix size 320 × 320, in-plane resolution 0.5mm × 0.5mm, slice thickness 3mm, no gap between slices. Timing varied between systems, with repetition time (TR) ranging between 2200 and 3000 milliseconds, and echo time (TE) between 27 and 34 milliseconds.
4.3 Emulated Single-coil k-Space Data
We used an emulated single-coil (ESC) methodology to simulate single-coil data from a multicoil acquisition [43]. ESC computes a complex-valued linear combination of the responses from multiple coils, with the linear combination fitted to the ground-truth root-sum-of-squares reconstruction in the least-squares sense.
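A minimal sketch of this least-squares fitting step on synthetic data (random k-space stands in for a real acquisition here; the released ESC data was computed from the actual multicoil measurements [43]):

```python
import numpy as np

rng = np.random.default_rng(0)
n_coils, nx, ny = 8, 32, 32

# Stand-in fully sampled multicoil k-space and the corresponding coil images.
multicoil_kspace = (rng.standard_normal((n_coils, nx, ny))
                    + 1j * rng.standard_normal((n_coils, nx, ny)))
coil_images = np.fft.ifft2(multicoil_kspace, axes=(-2, -1))

# Root-sum-of-squares target image (the ground truth the fit aims at).
rss = np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# ESC: find complex weights w minimizing ||sum_i w_i * image_i - rss||_2,
# then form the emulated single-coil k-space as the same combination.
A = coil_images.reshape(n_coils, -1).T                     # (pixels, coils)
w, *_ = np.linalg.lstsq(A, rss.ravel().astype(complex), rcond=None)
esc_kspace = np.tensordot(w, multicoil_kspace, axes=(0, 0))

# Sanity check: by linearity of the Fourier transform, the ESC image is the
# same weighted combination of the coil images.
esc_image = np.fft.ifft2(esc_kspace)
print(np.allclose(esc_image, (A @ w).reshape(nx, ny)))
```

Because the combination is linear in k-space, undersampling the emulated single-coil data commutes with the combination, which is what makes ESC a usable proxy for a true single-coil acquisition.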
4.4 Ground Truth
The root-sum-of-squares reconstruction method applied to the fully sampled k-space data [28] provides the ground truth for the multicoil dataset. The single-coil dataset includes two ground truth reconstructions, denoted ESC and RSS. The ESC ground truth is given by the inverse Fourier transform of the single-coil data, and the RSS ground truth is given by the root-sum-of-squares reconstruction computed on the multicoil data that were used to generate the virtual single-coil k-space data. All ground truth images are cropped to the central 320 × 320 pixel region to compensate for readout-direction oversampling that is standard in clinical MR examinations.
The root-sum-of-squares approach [28] is one of the most commonly-used coil combination methods in clinical imaging. It first applies the inverse Fourier transform to the k-space data from each coil:

\tilde{m}_i = \mathcal{F}^{-1}(y_i),    (5)

where y_i is the k-space data from the i-th coil and \tilde{m}_i is the i-th coil image. Then, the individual coil images are combined voxel by voxel as follows:

\tilde{m}_{\mathrm{rss}} = \left( \sum_{i=1}^{n_c} |\tilde{m}_i|^2 \right)^{1/2},    (6)

where \tilde{m}_{\mathrm{rss}} is the final image estimate and n_c is the number of coils. The root-sum-of-squares image estimate is known to converge to the optimal, unbiased estimate of m in the high-SNR limit [20].

4.5 Dataset Split
Table 3: Volumes and slices in each component of the fastMRI dataset

              Volumes                 Slices
              Multicoil  Single-coil  Multicoil  Single-coil
training      973        973          34,742     34,742
validation    199        199          7,135      7,135
test          118        108          4,092      3,903
challenge     104        92           3,810      3,305
Each volume is randomly assigned to one of the following six component datasets: training, validation, multicoil test, singlecoil test, multicoil challenge, or singlecoil challenge. Table 3
shows the number of volumes assigned to each dataset. The training and validation datasets may be used to fit model parameters or to determine hyperparameter values. The test dataset is used to compare the results across different approaches. To ensure that models do not overfit to the test set, the ground truth reconstructions are not publicly released for this set. Evaluation on the test set is accomplished by uploading results to the public leaderboard at
http://fastmri.org/. The challenge portion of the dataset will be forthcoming.
A volume from the training or validation dataset is used in both the single-coil and multicoil tracks, whereas a volume from the test or challenge dataset is used in only one of the two tracks. Volumes were only included in a single test or challenge set to ensure that information from one could not be used to improve results on another.
4.6 Cartesian Undersampling
Volumes in the test and challenge datasets contain undersampled k-space data. The undersampling is performed by retrospectively masking k-space lines from a fully-sampled acquisition. k-space lines are omitted only in the phase encoding direction, so as to simulate physically realizable accelerations in 2D data acquisitions. The same undersampling mask is applied to all slices in a volume, with each case consisting of a single volume. In order to provide diverse undersampling patterns across the datasets, the undersampling mask is chosen randomly for each case. First, the overall acceleration factor is set randomly to either four or eight (representing a four-fold or an eight-fold acceleration, respectively), with equal probability for each. The undersampling mask is then generated by first including a number of adjacent lowest-frequency k-space lines to provide a fully-sampled central k-space region. When the acceleration factor equals four, the fully-sampled central region includes 8% of all k-space lines; when it equals eight, 4% of all k-space lines are included. The remaining k-space lines are included uniformly at random, with the probability set so that, on average, the undersampling mask achieves the desired acceleration factor. Figure 4 depicts two randomly-undersampled k-space trajectories. Random undersampling is performed in order to meet the general conditions for compressed sensing [2, 23], allowing fair comparison of learned reconstruction algorithms with traditional sparsity-based regularizers.

4.6.1 DICOM Data
In addition to the scanner raw data described above, the fastMRI dataset includes DICOM data from 10,000 clinical knee MRI scans. These images represent a wider variety of scanners and pulse sequences than those represented in the collection of raw data. Each MR exam for which DICOM images are included typically consisted of five clinical pulse sequences:
- Coronal proton-density weighting without fat suppression,
- Coronal proton-density weighting with fat suppression,
- Sagittal proton-density weighting without fat suppression,
- Sagittal T2 weighting with fat suppression, and
- Axial T2 weighting with fat suppression.
The two coronal sequences have the same basic specifications (matrix size, etc.) as the sequences associated with the raw data. The sagittal and axial sequences have different matrix sizes and no direct correspondence to the sequences represented in the raw data.
The Fourier transform of an image from a DICOM file does not directly correspond to the originally measured raw data, due to additional postprocessing steps in the vendor-specific reconstruction pipeline. Most of the DICOM images are also derived from accelerated acquisitions and are reconstructed with parallel imaging algorithms, since this baseline acceleration represents the current clinical standard. The image quality of the DICOM images, therefore, is not equivalent to that of the ground truth images directly associated with fully sampled raw data. The DICOM images do not overlap with the validation, test, or challenge sets.
5 Metrics
The assessment of MRI reconstruction quality is of paramount importance for developing and comparing machine learning and medical imaging systems [51, 53, 3, 58]. The most commonly used evaluation metrics in the MRI reconstruction literature [3] include (normalized) mean squared error, which measures pixel-wise intensity differences between reconstructed and reference images, and signal-to-noise ratio, which measures the degree to which image information rises above background noise. These metrics are appealing because they are easy to understand and efficient to compute. However, they both evaluate pixels independently, ignoring overall image structure.
Additional metrics have been introduced in the literature to capture structural distortion [41, 6, 58]. For example, the structural similarity index [53] and its extended version, multiscale structural similarity [52], provide a mechanism to assess the perceived quality of an image using local image patches.
The most recent developments in the computer vision literature leverage pretrained deep neural networks to measure the perceptual quality of an image by computing differences at the representation level [19], or by means of a downstream task such as classification [32].
In the remainder of this section, we review the definitions of the commonly-used metrics of normalized mean square error, peak signal-to-noise ratio, and structural similarity. As is discussed later, while we expect these metrics to serve as a familiar starting point, we also hope that the fastMRI dataset will enable robust investigations into improved evaluation metrics as well as improved reconstruction algorithms.
5.1 Normalized Mean Square Error
The normalized mean square error (NMSE) between a reconstructed image or image volume, represented as a vector \hat{v}, and a reference image or volume v is defined as

\mathrm{NMSE}(\hat{v}, v) = \frac{\| \hat{v} - v \|_2^2}{\| v \|_2^2},    (7)

where \| \cdot \|_2^2 is the squared Euclidean norm, and the subtraction is performed entry-wise. In this work we report NMSE values computed and normalized over full image volumes rather than individual slices, since image-wise normalization can result in strong variations across a volume.
NMSE is widely used, and we recommend that it be reported as the primary measure of reconstruction quality for experiments on the fastMRI dataset. However, due to the many downsides of NMSE, such as a tendency to favor smoothness rather than sharpness, we recommend also reporting additional metrics such as those described below.
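A direct implementation of volume-wise NMSE, assuming the reconstruction and target are numpy arrays of matching shape:

```python
import numpy as np

def nmse(recon, target):
    """Normalized mean squared error (Equation 7), computed over a whole
    volume so that the normalization does not vary slice by slice."""
    return np.linalg.norm(recon - target) ** 2 / np.linalg.norm(target) ** 2

# Example: a uniform 10% amplitude error yields an NMSE of 0.01.
volume = np.ones((10, 32, 32))
print(nmse(volume * 1.1, volume))
```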
5.2 Peak Signal-to-Noise Ratio
The peak signal-to-noise ratio (PSNR) represents the ratio between the power of the maximum possible image intensity across a volume and the power of distorting noise and other errors:

\mathrm{PSNR}(\hat{v}, v) = 10 \log_{10} \frac{\max(v)^2}{\mathrm{MSE}(\hat{v}, v)}.    (8)

Here \hat{v} is the reconstructed volume, v is the target volume, \max(v) is the largest entry in the target volume, and \mathrm{MSE}(\hat{v}, v) = \frac{1}{n} \| \hat{v} - v \|_2^2 is the mean square error between \hat{v} and v, where n is the number of entries in the target volume. Higher values of PSNR (as opposed to lower values of NMSE) indicate a better reconstruction.
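PSNR as defined above can be implemented directly; note that the peak value is taken from the target volume:

```python
import numpy as np

def psnr(recon, target):
    """Peak signal-to-noise ratio in dB (Equation 8), using the maximum of
    the target volume as the peak value."""
    mse = np.mean((recon - target) ** 2)
    return 10 * np.log10(target.max() ** 2 / mse)

target = np.zeros((4, 8, 8))
target[..., 2:6, 2:6] = 100.0
noisy = target + 1.0  # constant error of 1 against a peak of 100
print(psnr(noisy, target))  # 10 * log10(100^2 / 1) = 40 dB
```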
5.3 Structural Similarity
The structural similarity (SSIM) index measures the similarity between two images by exploiting the interdependencies among nearby pixels. SSIM is inherently able to evaluate structural properties of the objects in an image and is computed at different image locations by using a sliding window. The resulting similarity between two image patches \hat{x} and x is defined as

\mathrm{SSIM}(\hat{x}, x) = \frac{(2 \mu_{\hat{x}} \mu_x + c_1)(2 \sigma_{\hat{x} x} + c_2)}{(\mu_{\hat{x}}^2 + \mu_x^2 + c_1)(\sigma_{\hat{x}}^2 + \sigma_x^2 + c_2)},    (9)

where \mu_{\hat{x}} and \mu_x are the average pixel intensities in \hat{x} and x, \sigma_{\hat{x}}^2 and \sigma_x^2 are their variances, \sigma_{\hat{x} x} is the covariance between \hat{x} and x, and c_1 and c_2 are two variables that stabilize the division: c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2, where L is the dynamic range of the pixel values. For SSIM values reported in this paper, we choose a window size of 7 × 7, we set k_1 = 0.01 and k_2 = 0.03, and define L as the maximum value of the target volume, L = \max(v).

5.4 L1 Error
6 Baseline Models
Along with releasing the fastMRI data, we detail two reference approaches to be used as reconstruction baselines: a classical non-machine-learning approach and a deep-learning approach. Each of these baselines has versions tailored for single-coil or multicoil data. The “classical” baselines comprise reconstruction methods developed by the MRI community over the last 30+ years. These methods have been extensively tested and validated, and many have demonstrated robustness sufficient for inclusion in the clinical workflow. By comparison, machine learning reconstruction methods are relatively new to MRI, and deep-learning reconstruction techniques in particular have emerged only in the past few years. We include some deliberately rudimentary deep-learning models as starting points, with the expectation that future learning algorithms will provide markedly improved performance.
6.1 Single-coil Classical Baselines
In the single-coil imaging setting, the task is to reconstruct an image m from k-space observations y. In the presence of undersampling, the vector y has a length smaller than that of m. Therefore there are, in principle, infinitely many images m that can be mapped onto a single observation y. The advent of compressed sensing [2, 23] provided a framework for reconstructing, from undersampled data, images that closely approximate those derived from fully-sampled data, subject to sparsity constraints. Compressed sensing theory requires the images in question to be sparse in some transform domain. Two common examples are to assume sparsity in the wavelet domain, or to assume sparsity of the spatial gradients of the image. The particular assumption impacts the mathematical formulation of the reconstruction problem, either in the cost function or through a regularization term.
More concretely, the sparse reconstruction approach consists of finding an image whose Fourier-space representation is close to the measured k-space matrix at all measured spatial frequencies, yet at the same time minimizes a sparsity-inducing objective R(m) that penalizes unnatural reconstructions:

\hat{m} = \arg\min_m R(m) \quad \text{subject to} \quad \| P \mathcal{F}(m) - y \|_2 < \epsilon.    (11)

Here, P is a projection operator that zeros out the entries that are masked, and \epsilon is a specified small threshold value. In most applications it is easier to work with a soft penalty instead of a constraint, so the Lagrangian dual form of Equation 11 is used instead, with penalty parameter \lambda:

\hat{m} = \arg\min_m \| P \mathcal{F}(m) - y \|_2^2 + \lambda R(m).    (12)

For a convex regularizer R, there exists, for any choice of \epsilon, a value of \lambda such that these two formulations have equivalent solutions.
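To make the penalized formulation concrete, the following toy sketch runs ISTA (proximal gradient descent) on Equation 12 with an ℓ1 penalty applied directly in image space. This simplification, a signal that is sparse in image space rather than in a wavelet or gradient domain, is for illustration only and is not the baseline used in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Sparse ground-truth signal (sparse directly in image space for simplicity).
m = np.zeros(n)
m[rng.choice(n, size=5, replace=False)] = rng.uniform(1.0, 2.0, size=5)

# Undersampled k-space: keep a random half of the Fourier coefficients.
mask = rng.uniform(size=n) < 0.5
y = mask * np.fft.fft(m, norm="ortho")

# ISTA for Equation 12 with R(m) = ||m||_1: a gradient step on the data term
# followed by complex soft-thresholding. With an orthonormal FFT the data
# term has Lipschitz constant 1, so a unit step size is safe.
lam = 0.01
x = np.zeros(n, dtype=complex)
for _ in range(300):
    grad = np.fft.ifft(mask * np.fft.fft(x, norm="ortho") - y, norm="ortho")
    z = x - grad
    x = np.exp(1j * np.angle(z)) * np.maximum(np.abs(z) - lam, 0.0)

print(np.max(np.abs(x - m)))
```

Despite observing only about half of the Fourier coefficients, the sparsity penalty drives the reconstruction close to the true signal, whereas a plain zero-filled inverse transform would halve the peak amplitudes and spread incoherent aliasing energy across the signal.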
The most common regularizers used for MRI are the following:

- The ℓ1 penalty in image space works best when the MR images are themselves sparse, for instance in vascular imaging (e.g., Yamamoto et al. [54]). This is not the case for most MRI applications.
- The total-variation (TV) penalty encourages sparsity in the spatial gradients of the reconstructed image, as given by a local finite-difference approximation [30] (Figure (g)). The TV regularizer can be very effective for some imaging protocols, but it also has a tendency to remove detail (Figure (h)).
- The ℓ1 wavelet penalty encourages sparsity in the discrete wavelet transform of the image. Most natural images exhibit significant sparsity when expressed in a wavelet basis. The most commonly used transform is the multiscale Daubechies (DB2) transform (Figure (e)).

To date, due to their computational complexity as well as their tendency to introduce compression artifacts or oversmoothing, compressed sensing approaches have taken some time to gain acceptance in the clinic, though commercial implementations of compressed sensing are currently beginning to appear.
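The total-variation penalty, for instance, reduces to sums of absolute finite differences; a minimal sketch of its anisotropic form:

```python
import numpy as np

def total_variation(image):
    """Anisotropic TV: sum of absolute finite differences along each axis."""
    dx = np.abs(np.diff(image, axis=0)).sum()
    dy = np.abs(np.diff(image, axis=1)).sum()
    return dx + dy

# A piecewise-constant image has low TV; adding noise raises it sharply,
# which is why penalizing TV favors smooth, edge-preserving reconstructions.
flat = np.zeros((32, 32))
flat[8:24, 8:24] = 1.0
noisy = flat + 0.1 * np.random.default_rng(0).standard_normal((32, 32))
print(total_variation(flat), total_variation(noisy))
```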
The single-coil classical baseline provided with the fastMRI dataset was adopted from the widely-used open-source BART toolkit (Appendix B), using total variation as the regularizer. We ran the optimization algorithm for 200 iterations on each slice independently.
Table 4: Single-coil classical baseline (TV model) applied to validation data

Acceleration  Reg. weight  NMSE (PD)  NMSE (PDFS)  PSNR (PD)  PSNR (PDFS)  SSIM (PD)  SSIM (PDFS)
4-fold        0.0001       0.0355     0.0919       30.2       27.6         0.637      0.506
4-fold        0.001        0.0342     0.0916       30.4       27.6         0.641      0.505
4-fold        0.01         0.0287     0.09         31.4       27.7         0.645      0.494
4-fold        0.1          0.0313     0.0993       30.9       27.3         0.575      0.399
4-fold        1            0.0522     0.124        28.5       26.2         0.526      0.327
8-fold        0.0001       0.0708     0.118        27.1       26.4         0.551      0.417
8-fold        0.001        0.0699     0.118        27.1       26.4         0.553      0.416
8-fold        0.01         0.063      0.117        27.7       26.4         0.564      0.408
8-fold        0.1          0.0537     0.117        28.4       26.5         0.55       0.357
8-fold        1            0.0742     0.132        26.9       25.9         0.538      0.333
Table 4 summarizes the results of applying this method to the single-coil validation data with different regularization strengths and different acceleration factors. These results indicate that the NMSE and PSNR metrics are highly (inversely) correlated and generally favor models with stronger regularization than SSIM does. Stronger regularization generally results in smoother images that lack the fine texture of the ground truth images. A regularization parameter of 0.01 yields the best results for 4-fold acceleration in most cases, whereas the higher 8-fold acceleration gives slightly better results with a regularization parameter of 0.1.
6.2 Multicoil Classical Baselines
When multiple receiver coils are used, the reconstruction process must combine information from multiple channels into one image. Multicoil acquisitions currently represent the norm in clinical practice, for two principal reasons: they provide increased SNR, as compared with singlecoil acquisitions, over extended fields of view, and they enable acceleration via parallel imaging. Equation 2 in Section 2.1 describes the forward model for parallel imaging. The SENSE formulation [26] of parallel image reconstruction involves direct inversion of this forward model, via a suitable pseudoinverse. Leveraging the convolution property of the Fourier Transform reveals the following convolution relationship:
y_i = \mathcal{F}(S_i) * \mathcal{F}(m).    (13)

Here \mathcal{F}(S_i) is the Fourier transform of the coil sensitivity pattern S_i and * denotes the convolution operation. The GRAPPA/SMASH formulation of parallel image reconstruction [39, 9] involves filling in missing k-space data via combinations of acquired k-space data within a defined convolution kernel, prior to inverse Fourier transformation.
Either formulation requires estimates of the coil sensitivity information, in the form of $S_i$ or $\mathcal{F}(S_i)$, which may be derived either from a separate reference scan or directly from the acquired undersampled k-space data itself. Reference scan methods are often used in the SENSE formulation, whereas GRAPPA formulations are typically self-calibrating, relying on subsets of fully sampled data, generally in central k-space regions.
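To make the parallel-imaging forward model concrete, here is a minimal numpy sketch; the function name and toy sizes are ours, not part of the fastMRI code. Each coil image is the element-wise product of the underlying image with that coil's sensitivity map, and undersampling retains only the masked k-space locations.

```python
import numpy as np

def simulate_multicoil_kspace(image, sens_maps, mask):
    """Forward model of Equation 2: y_i = M F(S_i m).

    image:     complex 2-D array, the underlying image m
    sens_maps: (num_coils, H, W) coil sensitivity patterns S_i
    mask:      (H, W) binary Cartesian sampling mask M
    """
    coil_images = sens_maps * image[None, :, :]        # S_i m for each coil
    kspace = np.fft.fft2(coil_images, axes=(-2, -1))   # F(S_i m)
    return mask[None, :, :] * kspace                   # keep sampled locations only

# Toy example: 2 coils, 4x4 image, sample every other k-space column.
rng = np.random.default_rng(0)
m = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S = rng.standard_normal((2, 4, 4)) + 1j * rng.standard_normal((2, 4, 4))
M = np.zeros((4, 4))
M[:, ::2] = 1
y = simulate_multicoil_kspace(m, S, M)  # shape (2, 4, 4); unsampled columns are zero
```

Reconstruction then amounts to inverting this operator, exactly (SENSE) or implicitly (GRAPPA).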
The parallel imaging techniques described above may be combined productively with compressed sensing, via the use of sparsity-based regularizers. For example, one may extend Equation 12 in Section 6.1 above to include multi-coil data as follows:

(14)  $\hat{m} = \arg\min_{m} \sum_{i=1}^{n_c} \left\| \mathcal{M}\,\mathcal{F}(S_i m) - y_i \right\|_2^2 + \lambda\,\mathrm{TV}(m)$

where $\mathcal{M}$ is the undersampling mask, $y_i$ is the acquired k-space data from coil $i$, $n_c$ is the number of coils, and $\mathrm{TV}(\cdot)$ is the total-variation regularizer.
Various methods may be used to solve this reconstruction problem. One frequently used method is the ESPIRiT approach [45], which harmonizes parallel imaging and compressed sensing in a unified framework.
As was the case for the classical single-coil baseline, the classical multi-coil baseline provided with the fastMRI dataset was adopted from the BART toolkit (Appendix B). In the multi-coil case, the ESPIRiT algorithm was used to estimate coil sensitivities and to perform parallel image reconstruction in combination with compressed sensing using a total-variation regularizer.
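Equation 14 can be illustrated with a small numpy sketch. This is not the BART/ESPIRiT solver used for the baselines: it minimizes the multi-coil data-consistency term plus a smoothed anisotropic total-variation penalty by plain gradient descent, and the function names, periodic boundary handling, and step sizes are our simplifying assumptions.

```python
import numpy as np

def tv_value(x, eps=1e-3):
    """Smoothed anisotropic total variation with periodic boundaries."""
    dh = np.roll(x, -1, axis=0) - x
    dv = np.roll(x, -1, axis=1) - x
    return np.sum(np.sqrt(np.abs(dh) ** 2 + eps) + np.sqrt(np.abs(dv) ** 2 + eps))

def tv_grad(x, eps=1e-3):
    """Gradient of the smoothed TV penalty (adjoint of the forward differences)."""
    dh = np.roll(x, -1, axis=0) - x
    dv = np.roll(x, -1, axis=1) - x
    gh = dh / (2 * np.sqrt(np.abs(dh) ** 2 + eps))
    gv = dv / (2 * np.sqrt(np.abs(dv) ** 2 + eps))
    return (np.roll(gh, 1, axis=0) - gh) + (np.roll(gv, 1, axis=1) - gv)

def objective(x, y, sens, mask, lam):
    """sum_i ||M F(S_i x) - y_i||^2 + lam * TV(x), with orthonormal FFTs."""
    resid = mask * (np.fft.fft2(sens * x, axes=(-2, -1), norm="ortho") - y)
    return np.sum(np.abs(resid) ** 2) + lam * tv_value(x)

def tv_cs_recon(y, sens, mask, lam=1e-2, step=0.05, iters=200):
    """Plain gradient descent on the Equation 14 objective."""
    x = np.zeros(y.shape[-2:], dtype=complex)
    for _ in range(iters):
        resid = mask * (np.fft.fft2(sens * x, axes=(-2, -1), norm="ortho") - y)
        # data-consistency gradient: sum_i S_i^* F^{-1}(M(F(S_i x) - y_i))
        dc = np.sum(np.conj(sens) * np.fft.ifft2(resid, axes=(-2, -1), norm="ortho"), axis=0)
        x = x - step * (dc + lam * tv_grad(x))
    return x
```

Replacing the smoothed penalty and fixed step with proximal operators and line searches recovers the kind of solver actually used in practice.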
Multi-coil classical baseline (TV model) applied to validation data

| Acceleration | Regularization weight | NMSE (PD) | NMSE (PDFS) | PSNR (PD) | PSNR (PDFS) | SSIM (PD) | SSIM (PDFS) |
|---|---|---|---|---|---|---|---|
| 4-fold | | 0.0246 | 0.0972 | 31.6 | 27.4 | 0.677 | 0.53 |
| 4-fold | | 0.0222 | 0.0951 | 32.1 | 27.5 | 0.693 | 0.554 |
| 4-fold | | 0.0198 | 0.0971 | 32.6 | 27.5 | 0.675 | 0.588 |
| 4-fold | | 0.0251 | 0.109 | 31.3 | 27 | 0.633 | 0.538 |
| 8-fold | | 0.0494 | 0.114 | 28.2 | 26.5 | 0.61 | 0.505 |
| 8-fold | | 0.0447 | 0.112 | 28.6 | 26.6 | 0.626 | 0.524 |
| 8-fold | | 0.0352 | 0.109 | 29.6 | 26.8 | 0.642 | 0.551 |
| 8-fold | | 0.0389 | 0.114 | 29.2 | 26.7 | 0.632 | 0.527 |
Results using this baseline model are summarized in Table 5. The experimental setup is identical to the single-coil scenario, except that we compare the reconstructions with the root-sum-of-squares ground truth instead of the ESC ground truth.
6.3 Single-coil Deep-Learning Baselines
Various deep-learning techniques based on convolutional neural networks have recently been proposed to tackle the problem of reconstructing MR images from undersampled k-space data [10, 48, 11, 35, 60, 17, 12]. Many of these methods are based on the U-Net architecture introduced in [29]. U-Net models and their variants have been used successfully for many image-to-image prediction tasks, including MRI reconstruction [17, 12] and image segmentation [29].
The U-Net single-coil baseline model included with the fastMRI data release (Figure 6) consists of two deep convolutional networks, a downsampling path followed by an upsampling path. The downsampling path consists of blocks of two 3×3 convolutions, each followed by instance normalization [46] and Rectified Linear Unit (ReLU) activation functions. The blocks are interleaved with downsampling operations consisting of max-pooling layers with stride 2, which halve each spatial dimension. The upsampling path consists of blocks with a similar structure, two 3×3 convolutions with instance normalization [46] and ReLU activation layers, interleaved with bilinear upsampling layers that double the resolution between blocks. In contrast to the downsampling path, the upsampling path concatenates two inputs to the first convolution in each block: the upsampled activations from the previous block, together with the activations that follow the skip connection from the block in the downsampling path with the same resolution (horizontal arrows in Figure 6). At the end of the upsampling path, a series of 1×1 convolutions reduces the number of channels to one without changing the spatial resolution.
For the single-coil MRI reconstruction task, the zero-filled image is used as the input to the model. The zero-filled image is obtained by first inserting zeros at the locations of all unobserved k-space values, applying a two-dimensional inverse Fourier transform (IFT) to the result, and finally computing the absolute value. The result is center-cropped to remove any readout and phase oversampling. Using the notation from Section 6.1, the zero-filled image is given by $|C\,\mathcal{F}^{-1}(\tilde{y})|$, where $\tilde{y}$ is the zero-filled k-space data, $C$ is the linear operator corresponding to the center cropping, and $\mathcal{F}^{-1}$ is the two-dimensional IFT.
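The zero-filled input can be sketched in a few lines of numpy; the function name is ours, and we omit the fftshift conventions a real implementation would also apply.

```python
import numpy as np

def zero_filled_image(masked_kspace, mask, crop=(320, 320)):
    """Zero-filled reconstruction: zeros at unobserved k-space locations,
    2-D inverse Fourier transform, magnitude, then center crop."""
    kspace = np.where(mask, masked_kspace, 0)  # insert zeros at unobserved entries
    image = np.abs(np.fft.ifft2(kspace))       # |F^{-1}(y~)|
    h, w = image.shape[-2:]
    ch, cw = min(crop[0], h), min(crop[1], w)  # guard for small toy inputs
    top, left = (h - ch) // 2, (w - cw) // 2
    return image[..., top:top + ch, left:left + cw]
```

With a fully sampled mask this reduces to the magnitude of the inverse Fourier transform of the acquired k-space, center-cropped.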
The entire network is trained end-to-end on the training data to minimize the mean absolute error with respect to the corresponding ground-truth images. Let $f_\theta$ be the function computed by the U-Net model, where $\theta$ represents the parameters of the model. The training process then corresponds to the following optimization problem:

(15)  $\min_{\theta} \sum_{j} \left\| f_\theta(\tilde{m}_j) - m_j \right\|_1$

where $\tilde{m}_j$ is the zero-filled input image for training example $j$ and the ground truths $m_j$ are obtained using the ESC method described in Section 4.3. Our particular single-coil U-Net baseline model was trained on 973 image volumes in the training set using the RMSProp algorithm [42]. We used an initial learning rate of 0.001, which was multiplied by 0.1 after 40 epochs, after which the model was trained for an additional 10 epochs. During training, we independently sampled a different random mask for each training example in each epoch, using the protocol described in Section 4.6 for the test data. At the end of each epoch, we recorded the NMSE on the validation data; after training, we picked the model that achieved the lowest validation NMSE.
Single-coil U-Net baseline applied to validation data

| Acceleration | Channels | #Params | NMSE (PD) | NMSE (PDFS) | PSNR (PD) | PSNR (PDFS) | SSIM (PD) | SSIM (PDFS) |
|---|---|---|---|---|---|---|---|---|
| 4-fold | 32 | 3.35M | 0.0161 | 0.0531 | 33.78 | 29.90 | 0.81 | 0.631 |
| 4-fold | 64 | 13.39M | 0.0157 | 0.0528 | 33.90 | 29.9 | 0.813 | 0.633 |
| 4-fold | 128 | 53.54M | 0.0154 | 0.0525 | 34.01 | 29.95 | 0.815 | 0.634 |
| 4-fold | 256 | 214.16M | 0.0154 | 0.0525 | 34.00 | 29.95 | 0.815 | 0.636 |
| 8-fold | 32 | 3.35M | 0.0283 | 0.0698 | 31.13 | 28.6 | 0.754 | 0.555 |
| 8-fold | 64 | 13.39M | 0.0272 | 0.0693 | 31.30 | 28.63 | 0.758 | 0.558 |
| 8-fold | 128 | 53.54M | 0.0265 | 0.0686 | 31.44 | 28.68 | 0.761 | 0.558 |
| 8-fold | 256 | 214.16M | 0.0261 | 0.0682 | 31.5 | 28.71 | 0.762 | 0.559 |
Table 6 presents the results from running trained U-Net models of different capacities on the single-coil validation data. These results indicate that the trained U-Net models perform significantly better than the classical baseline method: the best U-Net models obtain a 40-50% relative improvement over the classical methods (see Table 4) in terms of NMSE.
The performance of the U-Net models continues to increase with model capacity, and even the largest model, with over 200 million parameters, does not overfit the training data. The improvements begin to saturate after about 50 million parameters for the simpler 4-fold acceleration case. For the more challenging 8-fold acceleration task, however, the largest model performs significantly better than the smaller models. This suggests that models with very large capacities, trained on large amounts of data, can enable high acceleration factors.
Single-coil classical and U-Net baselines applied to test data

| Model | Acceleration | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| Classical (Total Variation) | 4-fold | 0.0479 | 30.69 | 0.603 |
| Classical (Total Variation) | 8-fold | 0.0795 | 27.12 | 0.469 |
| Classical (Total Variation) | Aggregate | 0.0648 | 28.77 | 0.531 |
| U-Net | 4-fold | 0.0320 | 32.22 | 0.754 |
| U-Net | 8-fold | 0.0480 | 29.45 | 0.651 |
| U-Net | Aggregate | 0.0406 | 30.7 | 0.699 |
Table 7 compares the performance of the classical and U-Net baseline models for the single-coil task, as applied to the test dataset. For the classical baseline model, we chose the best regularization weight for each modality and each acceleration factor based on the validation results: 0.1 for 8-fold acceleration on proton density without fat suppression, and 0.01 in every other case. For the U-Net baseline model, we chose the model with the largest capacity.
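The training recipe described in this section, an L1 loss minimized with RMSProp, can be illustrated on a toy problem. The scalar model below stands in for the U-Net, and all names and constants are ours.

```python
import numpy as np

def rmsprop_step(w, grad, avg_sq, lr=0.05, alpha=0.99, eps=1e-8):
    """One RMSProp update [42]: divide the gradient by a running RMS of its magnitude."""
    avg_sq = alpha * avg_sq + (1 - alpha) * grad ** 2
    return w - lr * grad / (np.sqrt(avg_sq) + eps), avg_sq

# Toy stand-in for Equation 15: fit f_w(x) = w * x to targets y = 2 * x
# by minimizing the mean absolute error |f_w(x) - y|.
rng = np.random.default_rng(0)
x = rng.uniform(0.5, 1.5, size=100)
y = 2.0 * x
w, avg_sq = 0.0, 0.0
for _ in range(500):
    err = w * x - y
    grad = np.mean(np.sign(err) * x)  # subgradient of the L1 loss
    w, avg_sq = rmsprop_step(w, grad, avg_sq)
```

In the actual baseline, `w` is the full set of U-Net parameters and the gradient is computed by backpropagation in a deep-learning framework.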
6.4 Multi-coil Deep-Learning Baselines
In the multi-coil MRI reconstruction task, we have one set of undersampled k-space measurements from each coil, and a different zero-filled image can be computed from each coil. These coil images can be combined using the root-sum-of-squares algorithm. Let $\tilde{m}_i$ be the zero-filled image from coil $i$. With the root-sum-of-squares combination defined as in Equation 6, the U-Net model described in Section 6.3 can be used for the multi-coil reconstruction task by simply feeding the combined image $\sqrt{\sum_i |\tilde{m}_i|^2}$ in as input. The model is trained to minimize the mean absolute error loss, as in the single-coil task. The training procedure is also identical to the single-coil case, except that the root-sum-of-squares image is used as the ground truth, as described in Section 4.4.
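The root-sum-of-squares combination is a one-liner; this numpy sketch (our naming) combines per-coil zero-filled images along the coil axis.

```python
import numpy as np

def rss_combine(coil_images):
    """Root-sum-of-squares combination over the coil dimension (axis 0):
    sqrt(sum_i |m_i|^2), computed pixel-wise."""
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))
```

The resulting single image is what the multi-coil U-Net baseline receives as input, discarding relative coil phase and sensitivity information.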
Multi-coil U-Net baseline applied to validation data

| Acceleration | Channels | #Params | NMSE (PD) | NMSE (PDFS) | PSNR (PD) | PSNR (PDFS) | SSIM (PD) | SSIM (PDFS) |
|---|---|---|---|---|---|---|---|---|
| 4-fold | 32 | 3.35M | 0.0066 | 0.0122 | 36.7 | 35.97 | 0.9192 | 0.8595 |
| 4-fold | 64 | 13.39M | 0.0063 | 0.0120 | 36.95 | 36.11 | 0.9224 | 0.8615 |
| 4-fold | 128 | 53.54M | 0.0057 | 0.0113 | 37.38 | 36.33 | 0.9266 | 0.8641 |
| 4-fold | 256 | 214.16M | 0.0054 | 0.0112 | 37.58 | 36.39 | 0.9287 | 0.8655 |
| 8-fold | 32 | 3.35M | 0.0144 | 0.0197 | 33.31 | 33.82 | 0.8778 | 0.8213 |
| 8-fold | 64 | 13.39M | 0.0136 | 0.0198 | 33.56 | 33.93 | 0.8825 | 0.8238 |
| 8-fold | 128 | 53.54M | 0.0123 | 0.0179 | 34.01 | 34.25 | 0.8892 | 0.8277 |
| 8-fold | 256 | 214.16M | 0.0120 | 0.0181 | 34.12 | 34.23 | 0.8915 | 0.8286 |
As in the single-coil task, the multi-coil U-Net baselines substantially outperform the classical baseline models (compare Table 8 with Table 5). Note that this holds even though the multi-coil U-Net baseline defined above does not take coil sensitivity information into account, and therefore neither performs a direct parallel image reconstruction nor accounts for sparsity or other correlations among coils. Models that incorporate coil sensitivity information are expected to perform better than the current multi-coil U-Net baselines. Table 8 shows, once again, that the performance of the U-Net models improves with model size, with the largest U-Net baseline model providing the best performance.
Multi-coil classical and U-Net baselines applied to test data

| Model | Acceleration | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| Classical (Total Variation) | 4-fold | 0.0503 | 30.88 | 0.628 |
| Classical (Total Variation) | 8-fold | 0.0760 | 28.25 | 0.593 |
| Classical (Total Variation) | Aggregate | 0.0633 | 29.54 | 0.610 |
| U-Net | 4-fold | 0.0106 | 35.91 | 0.904 |
| U-Net | 8-fold | 0.0171 | 33.57 | 0.858 |
| U-Net | Aggregate | 0.0139 | 34.7 | 0.881 |
Table 9 compares the performance of the classical and U-Net baseline models for the multi-coil task, as applied to the test dataset. For the classical baseline model, we chose the best regularization weight for each modality and each acceleration factor based on the validation results: 0.001 for 4-fold undersampling on proton density with fat suppression, and 0.01 in every other case. For the U-Net baseline model, we chose the model with the largest capacity.
To appreciate the value of the dataset size, we studied how model performance scales with the amount of training data. To this end, we trained several U-Net models of varying capacity on different-sized subsets of the training data. Figure 7 shows the SSIM metric computed on the validation data for the multi-coil task. It is evident from these results that training with larger amounts of data yields substantial improvements in reconstruction quality, which highlights the need for the release of large datasets like fastMRI.
As mentioned in Section 4.6.1, the fastMRI dataset also includes a large set of DICOM images that can be used as additional training data. It is possible that the baseline U-Net models could be improved further by making use of this additional data.
7 Discussion
MR image reconstruction is an inverse problem, and thus has many connections to inverse problems in the computer vision literature [40, 7, 4, 47], such as super-resolution, denoising, and inpainting. In all of these inverse problems, the goal is to recover a high-dimensional ground-truth image from a lower-dimensional measurement. Such ill-posed problems are very difficult to solve, since an infinite number of high-dimensional images can result in the same low-dimensional measurement. To simplify the problem, an assumption is often made that only a small number of high-resolution images correspond to natural images [4]. Given that MRI reconstruction is a similar inverse problem, we hope that the computer vision community, as well as the medical imaging community, will find our dataset beneficial.
In the clinical setting, radiologists use MRI to search for abnormalities, make diagnoses, and recommend treatment options. Thus, contrary to many computer vision problems, where small texture changes do not necessarily alter the overall satisfaction of the observer, in MRI reconstruction extra care must be taken to ensure that the human interpreter is not misled by a plausible but incorrect reconstruction. This is especially important as image generation techniques increase in their ability to produce photorealistic results [49]. Some research effort should therefore be devoted to solutions that, by design, ensure correct diagnosis, and we hope that our dataset will provide a testbed for new ideas in these directions as well.
An important question in MRI reconstruction is the choice of evaluation metric. The current consensus in the MRI community is that global metrics such as NMSE, SSIM, and PSNR do not necessarily capture the level of detail required for proper evaluation of MRI reconstruction algorithms [25, 16]. A natural question arises: what would the optimal metric be? An ideal MRI reconstruction algorithm should produce sharp, trustworthy images that ultimately support the proper radiologic interpretation. While our dataset will help ensure consistent evaluation, we hope that it will also trigger research on MRI reconstruction metrics. This goal will be impossible to achieve without clinical studies in which radiologists evaluate fully sampled and undersampled MRI reconstructions to verify that both images lead to the same diagnosis.
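For reference, the two simplest global metrics discussed here can be written down directly in numpy (the SSIM definition is more involved and omitted; the function names are ours):

```python
import numpy as np

def nmse(gt, pred):
    """Normalized mean squared error: ||gt - pred||_2^2 / ||gt||_2^2."""
    return np.linalg.norm(gt - pred) ** 2 / np.linalg.norm(gt) ** 2

def psnr(gt, pred, data_range=None):
    """Peak signal-to-noise ratio in dB, relative to the given dynamic range."""
    if data_range is None:
        data_range = gt.max()
    mse = np.mean((gt - pred) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)
```

Both are global averages over the whole image, which is precisely why a localized artifact that could change a diagnosis may barely move either number.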
Although this dataset provides an excellent entry point for machine learning methods for MR reconstruction, there are some aspects of MR imaging that we have not yet considered here. Physical effects such as spin relaxation, eddy currents and field distortions are not at present explicitly accounted for in our retrospective undersampling approaches or our baseline models. The manifestation of these effects depends upon the object being imaged, the MRI scanner used, and even the sampling pattern selected. Extending the results from methods developed for this challenge to the clinic remains an open problem, but we believe the provision of this dataset is an important first step on the path to this goal.
8 Conclusion
In this work we detailed the fastMRI dataset: the largest raw MRI dataset to be made publicly available to date. Previous public datasets have focused on postprocessed magnitude images for specific biologic and pathologic questions. Although our dataset was originally acquired for a focused task, the inclusion of raw kspace data allows methods to be developed for the imaging pipeline itself, in principle allowing them to be applied on any MRI scanner for any imaging task.
In addition to the data, we provide evaluation metrics and baseline algorithms to aid the research community in assessing new approaches. Consistent evaluation of MRI reconstruction techniques is provided by a leaderboard using heldout test data.
We hope that the availability of this dataset will accelerate research in MR image reconstruction, and will serve as a benchmark during training and validation of new algorithms.
9 Acknowledgements
We acknowledge grant support from the National Institutes of Health under grants NIH R01 EB024532 and NIH P41 EB017183. We would also like to thank Michela Paganini and Mark Tygert.
References
 Bejnordi et al. [2017] Babak Ehteshami Bejnordi, Mitko Veta, Paul Johannes Van Diest, Bram Van Ginneken, Nico Karssemeijer, Geert Litjens, Jeroen AWM Van Der Laak, Meyke Hermsen, Quirine F Manson, Maschenka Balkenhol, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. Journal of the American Medical Association, 318(22), 2017.
 Candès et al. [2006] Emmanuel J Candès, Justin Romberg, and Terence Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2), 2006.
 Chandler [2013] Damon M Chandler. Seven challenges in image quality assessment: past, present, and future research. ISRN Signal Processing, 2013.
 Chang et al. [2017] Jen-Hao Rick Chang, Chun-Liang Li, Barnabás Póczos, B. V. K. Vijaya Kumar, and Aswin C. Sankaranarayanan. One network to solve them all: solving linear inverse problems using deep projection models. IEEE International Conference on Computer Vision (ICCV), 2017.
 Dar and Çukur [2017] Salman Ul Hassan Dar and Tolga Çukur. A transferlearning approach for accelerated MRI using deep neural networks. arXiv preprint, 2017.
 Eckert and Bradley [1998] Michael P Eckert and Andrew P Bradley. Perceptual quality metrics applied to still image compression. Signal Processing, 70(3), 1998.
 Fan et al. [2017] Kai Fan, Qi Wei, Wenlin Wang, Amit Chakraborty, and Katherine A. Heller. InverseNet: Solving inverse problems with splitting networks. arXiv preprint, 2017.
 Ginneken et al. [2007] Bram Van Ginneken, Tobias Heimann, and Martin Styner. 3d segmentation in the clinic: A grand challenge. In MICCAI Workshop on 3D Segmentation in the Clinic: A Grand Challenge, 2007.
 Griswold et al. [2002] Mark A Griswold, Peter M Jakob, Robin M Heidemann, Mathias Nittka, Vladimir Jellus, Jianmin Wang, Berthold Kiefer, and Axel Haase. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magnetic Resonance in Medicine, 47(6), 2002.
 Hammernik et al. [2016] Kerstin Hammernik, Florian Knoll, Daniel K Sodickson, and Thomas Pock. Learning a Variational Model for Compressed Sensing MRI Reconstruction. In Magnetic Resonance in Medicine (ISMRM), 2016.
 Hammernik et al. [2018] Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P. Recht, Daniel K. Sodickson, Thomas Pock, and Florian Knoll. Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine, 2018.
 Han and Ye [2018] Yoseob Han and Jong Chul Ye. Framing UNet via deep convolutional framelets: Application to sparseview CT. IEEE Transactions on Medical Imaging, 37(6), 2018.
 Heath et al. [1998] Michael Heath, Kevin Bowyer, Daniel Kopans, P Kegelmeyer, Richard Moore, Kyong Chang, and S Munishkumaran. Current status of the digital database for screening mammography. In Digital Mammography, 1998.
 Heimann et al. [2009] Tobias Heimann, Bram Van Ginneken, Martin A Styner, Yulia Arzhaeva, Volker Aurich, Christian Bauer, Andreas Beck, Christoph Becker, Reinhard Beichel, György Bekes, et al. Comparison and evaluation of methods for liver segmentation from CT datasets. IEEE Transactions on Medical Imaging, 28(8), 2009.
 Huang et al. [2013] Yue Huang, John Paisley, Xianbo Chen, Xinghao Ding, Feng Huang, and XiaoPing Zhang. MR image reconstruction from undersampled kspace with bayesian dictionary learning. arXiv preprint, 2013.
 Huo et al. [2006] Donglai Huo, Dan Xu, ZhiPei Liang, and David Wilson. Application of perceptual difference model on regularization techniques of parallel MR imaging. Magnetic Resonance Imaging, 24(2), 2006.
 Hyun et al. [2018] Chang Min Hyun, Hwa Pyung Kim, Sung Min Lee, Sungchul Lee, and Jin Keun Seo. Deep learning for undersampled MRI reconstruction. Physics in medicine and biology, 63(13), 2018.
 Inati et al. [2017] Souheil J Inati, Joseph D Naegele, Nicholas R Zwart, Vinai Roopchansingh, Martin J Lizak, David C Hansen, ChiaYing Liu, David Atkinson, Peter Kellman, Sebastian Kozerke, et al. ISMRM raw data format: a proposed standard for MRI raw datasets. Magnetic resonance in medicine, 77(1), 2017.
 Johnson et al. [2016] Justin Johnson, Alexandre Alahi, and Li FeiFei. Perceptual losses for realtime style transfer and superresolution. In European Conference on Computer Vision, 2016.
 Larsson et al. [2003] Erik G Larsson, Deniz Erdogmus, Rui Yan, Jose C Principe, and Jeffrey R Fitzsimmons. SNRoptimality of sumofsquares reconstruction for phasedarray magnetic resonance imaging. Journal of Magnetic Resonance, 163(1), 2003.
 LeCun et al. [2015] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep Learning. Nature, 521(7553), 2015.
 Lonning et al. [2018] Kai Lonning, Patrick Putzky, Matthan W. A. Caan, and Max Welling. Recurrent inference machines for accelerated MRI reconstruction, 2018.
 Lustig et al. [2007] Michael Lustig, David Donoho, and John M Pauly. Sparse MRI: The Application of Compressed Sensing for Rapid MR Imaging. Magnetic Resonance in Medicine, 58(6), 2007.
 Menze et al. [2015] Bjoern H. Menze, András Jakab, Stefan Bauer, Jayashree KalpathyCramer, Keyvan Farahani, Justin Kirby, Yuliya Burren, Nicole Porz, Johannes Slotboom, Roland Wiest, Levente Lanczi, Elizabeth R. Gerstner, MarcAndré Weber, Tal Arbel, Brian B. Avants, Nicholas Ayache, Patricia Buendia, D. Louis Collins, Nicolas Cordier, Jason J. Corso, Antonio Criminisi, Tilak Das, Herve Delingette, Çagatay Demiralp, Christopher R. Durst, Michel Dojat, Senan Doyle, Joana Festa, Florence Forbes, Ezequiel Geremia, Ben Glocker, Polina Golland, Xiaotao Guo, Andac Hamamci, Khan M. Iftekharuddin, Raj Jena, Nigel M. John, Ender Konukoglu, Danial Lashkari, José Antonio Mariz, Raphael Meier, Sérgio Pereira, Doina Precup, Stephen J. Price, Tammy Riklin Raviv, Syed M. S. Reza, Michael T. Ryan, Duygu Sarikaya, Lawrence H. Schwartz, HooChang Shin, Jamie Shotton, Carlos A. Silva, Nuno Sousa, Nagesh K. Subbanna, Gábor Székely, Thomas J. Taylor, Owen M. Thomas, Nicholas J. Tustison, Gözde B. Ünal, Flor Vasseur, Max Wintermark, Dong Hye Ye, Liang Zhao, Binsheng Zhao, Darko Zikic, Marcel Prastawa, Mauricio Reyes, and Koen Van Leemput. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging, 34(10), 2015.
 Miao et al. [2013] Jun Miao, Feng Huang, Sreenath Narayan, and David L. Wilson. A new perceptual difference model for diagnostically relevant quantitative image quality evaluation: A preliminary study. Magnetic Resonance Imaging, 31(4), 2013.
 Pruessmann et al. [1999] Klaas P Pruessmann, Markus Weiger, Markus B Scheidegger, and Peter Boesiger. SENSE: sensitivity encoding for fast MRI. Magnetic resonance in medicine, 42(5), 1999.
 Quan and Jeong [2016] Tran Minh Quan and WonKi Jeong. Compressed sensing dynamic MRI reconstruction using GPUaccelerated 3d convolutional sparse coding. In Medical Image Computing and ComputerAssisted Intervention (MICCAI), 2016.
 Roemer et al. [1990] Peter B Roemer, William A Edelstein, Cecil E Hayes, Steven P Souza, and Otward M Mueller. The NMR phased array. Magnetic resonance in medicine, 16(2), 1990.
 Ronneberger et al. [2015] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. UNet: Convolutional networks for biomedical image segmentation. Medical Image Computing and ComputerAssisted Intervention, 2015.
 Rudin et al. [1992] Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena, 60(14), 1992.
 Russakovsky et al. [2015] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li FeiFei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 2015.
 Salimans et al. [2016] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, 2016.
 Sandino et al. [2017] Christopher M. Sandino, Neerav Dixit, Joseph Y. Cheng, and Shreyas S Vasanawala. Deep convolutional neural networks for accelerated dynamic magnetic resonance imaging. Technical report, Stanford University, 2017.
 Sawyer et al. [2013] Anne Marie Sawyer, Michael Lustig, Marcus Alley, Martin Uecker, Patrick Virtue, Peng Lai, Shreyas Vasanawala, and GE Healthcare. Creation of fully sampled MR data repository for compressed sensing of the knee, 2013.
 Schlemper et al. [2017] Jo Schlemper, Jose Caballero, Joseph V. Hajnal, Anthony N. Price, and Daniel Rueckert. A deep cascade of convolutional neural networks for MR image reconstruction. Information Processing in Medical Imaging, 2017.
 Schlemper et al. [2018] Jo Schlemper, Jose Caballero, Joseph V. Hajnal, Anthony N. Price, and Daniel Rueckert. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Transactions on medical imaging, 37(2), 2018.
 Setio et al. [2017] Arnaud Arindra Adiyoso Setio, Alberto Traverso, Thomas de Bel, Moira S. N. Berens, Cas van den Bogaard, Piergiorgio Cerello, Hao Chen, Qi Dou, Maria Evelina Fantacci, Bram Geurts, Robbert van der Gugten, PhengAnn Heng, Bart Jansen, Michael M. J. de Kaste, Valentin Kotov, Jack YuHung Lin, Jeroen T. M. C. Manders, Alexander SónoraMengana, Juan Carlos GarcíaNaranjo, Mathias Prokop, Marco Saletta, Cornelia SchaeferProkop, Ernst Th. Scholten, Luuk Scholten, Miranda M. Snoeren, Ernesto Lopez Torres, Jef Vandemeulebroucke, Nicole Walasek, Guido C. A. Zuidhof, Bram van Ginneken, and Colin Jacobs. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the LUNA16 challenge. Medical Image Analysis, 2017.
 Shitrit and Raviv [2017] Ohad Shitrit and Tammy Riklin Raviv. Accelerated magnetic resonance imaging by adversarial neural network. In M. Jorge Cardoso, Tal Arbel, Gustavo Carneiro, Tanveer F. SyedaMahmood, João Manuel R. S. Tavares, Mehdi Moradi, Andrew P. Bradley, Hayit Greenspan, João Paulo Papa, Anant Madabhushi, Jacinto C. Nascimento, Jaime S. Cardoso, Vasileios Belagiannis, and Zhi Lu, editors, Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, 2017.
 Sodickson and Manning [1997] Daniel K Sodickson and Warren J Manning. Simultaneous acquisition of spatial harmonics (SMASH): fast imaging with radiofrequency coil arrays. Magnetic resonance in medicine, 38(4), 1997.
 Szeliski [2011] Richard Szeliski. Computer vision algorithms and applications. Springer, 2011.
 Teo and Heeger [1994] Patrick C Teo and David J Heeger. Perceptual image distortion. In IEEE International Conference on Image Processing (ICIP), volume 2, 1994.
 Tieleman and Hinton [2012] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5  rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 2012.
 Tygert and Zbontar [2018] Mark Tygert and Jure Zbontar. Simulating singlecoil MRI from the responses of multiple coils. arXiv preprint, 2018.
 Uecker et al. [2013] Martin Uecker, Patrick Virtue, Frank Ong, Mark J. Murphy, Marcus T. Alley, Shreyas S. Vasanawala, and Michael Lustig. Software toolbox and programming library for compressed sensing and parallel imaging. In ISMRM Workshop on Data Sampling and Image Reconstruction, 2013.

 Uecker et al. [2014] Martin Uecker, Peng Lai, Mark J Murphy, Patrick Virtue, Michael Elad, John M Pauly, Shreyas S Vasanawala, and Michael Lustig. ESPIRiT: an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA. Magnetic resonance in medicine, 71(3), 2014.
 Ulyanov et al. [2016] Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint, 2016.
 Ulyanov et al. [2017] Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Deep image prior. arXiv preprint, 2017.
 Wang et al. [2016] Shanshan Wang, Zhenghang Su, Leslie Ying, Xi Peng, Shun Zhu, Feng Liang, Dagan Feng, and Dong Liang. Accelerating magnetic resonance imaging via deep learning. In IEEE International Symposium on Biomedical Imaging (ISBI), 2016.

 Wang et al. [2018] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
 Wang et al. [2017] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald Summers. ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
 Wang and Bovik [2009] Zhou Wang and Alan C Bovik. Mean squared error: Love it or leave it? a new look at signal fidelity measures. IEEE signal processing magazine, 26(1), 2009.
 Wang et al. [2003] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In Asilomar Conference on Signals, Systems & Computers, 2003.
 Wang et al. [2004] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004.
 Yamamoto et al. [2018] Takayuki Yamamoto, T Okada, Yasutaka Fushimi, Akira Yamamoto, Koji Fujimoto, Sachi Okuchi, Hikaru Fukutomi, Jun C. Takahashi, Takeshi Funaki, Susumu Miyamoto, Aurélien F. Stalder, Yutaka Natsuaki, Peter Speier, and Kaori Togashi. Magnetic resonance angiography with compressed sensing: An evaluation of moyamoya disease. In PloS one, 2018.
 Yan et al. [2018] Ke Yan, Xiaosong Wang, Le Lu, and Ronald Summers. Deeplesion: Automated mining of largescale lesion annotations and universal lesion detection with deep learning. Journal of Medical Imaging, 5, 2018.
 Yang et al. [2016] Yan Yang, Jian Sun, Huibin Li, and Zongben Xu. Deep ADMMNet for compressive sensing MRI. Advances in Neural Information Processing Systems 29, 2016.
 Yang et al. [2017] Yan Yang, Jian Sun, Huibin Li, and Zongben Xu. ADMMNet: A deep learning approach for compressive sensing MRI. arXiv preprint, 2017.
 Zhang et al. [2011] Lin Zhang, Lei Zhang, Xuanqin Mou, David Zhang, et al. FSIM: a feature similarity index for image quality assessment. IEEE transactions on Image Processing, 20(8), 2011.
 Zhao et al. [2017] Hang Zhao, Orazio Gallo, Iuri Frosio, and Jan Kautz. Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging, 3(1), 2017.
 Zhu et al. [2018] Bo Zhu, Jeremiah Z. Liu, Stephen F. Cauley, Bruce R. Rosen, and Matthew S. Rosen. Image reconstruction by domaintransform manifold learning. Nature, 555(7697), 2018.
Appendix A Raw k-space File Descriptions
ISMRMRD files were converted into simpler HDF5 files that store the entire k-space in a single tensor. One HDF5 file was created per volume. The HDF5 files share the following common attributes:
 acquisition

Acquisition protocol (either CORPD or CORPDFS, indicating coronal proton density without or with fat saturation, respectively; see Figure 3).
 ismrmrd_header

The XML header copied verbatim from the ISMRMRD file that was used to generate the HDF5 file. It contains information about the scanner, field of view, dimensions of kspace, and sequence parameters.
 patient_id

A unique string identifying the examination, and substituting anonymously for the patient identification.
 norm, max

The Euclidean norm and the largest entry of the target volume. For the multicoil track the target volume is stored in reconstruction_rss. For the singlecoil track the target volume is stored in reconstruction_esc. These two attributes are only available in the training and validation datasets.
 acceleration

Acceleration factor of the undersampled k-space trajectory (either 4 or 8). This attribute is only available in the test dataset.
 num_low_frequency

The number of lowfrequency kspace lines in the undersampled kspace trajectory. This attribute is only available in the test dataset.
The rest of this section describes the format of the HDF5 files for the multicoil and singlecoil tracks.
A.1 Multi-coil Track
 multicoil_train.tar.gz

Training dataset for the multi-coil track. The HDF5 files contain the following tensors:
 kspace

Multi-coil k-space data. The shape of the kspace tensor is (number of slices, number of coils, height, width).
 reconstruction_rss

Root-sum-of-squares reconstruction of the multi-coil k-space data, cropped to the center region. The shape of the reconstruction_rss tensor is (number of slices, 320, 320).
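The root-sum-of-squares target can be sketched in numpy as follows. This is our own sketch, not the dataset's reference implementation: it assumes centered k-space with the shape given above, and the helper names (centered_ifft2, center_crop, rss_reconstruction) are ours:

```python
import numpy as np

def centered_ifft2(kspace):
    # Inverse 2D FFT over the last two axes, for "centered" k-space
    # (low frequencies in the middle of the array).
    return np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)), axes=(-2, -1)),
        axes=(-2, -1),
    )

def center_crop(x, shape=(320, 320)):
    # Crop the central region of the last two axes.
    h, w = x.shape[-2], x.shape[-1]
    top, left = (h - shape[0]) // 2, (w - shape[1]) // 2
    return x[..., top:top + shape[0], left:left + shape[1]]

def rss_reconstruction(kspace):
    # kspace: complex array of shape (slices, coils, height, width)
    coil_images = centered_ifft2(kspace)
    rss = np.sqrt((np.abs(coil_images) ** 2).sum(axis=1))  # combine coils
    return center_crop(rss)
```
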
 multicoil_val.tar.gz

Validation dataset for the multi-coil track. The HDF5 files have the same structure as the HDF5 files in multicoil_train.tar.gz.
 multicoil_test.tar.gz

Test dataset for the multi-coil track. The HDF5 files contain the following tensors:
 kspace

Undersampled multi-coil k-space. The shape of the kspace tensor is (number of slices, number of coils, height, width).
 mask

Defines the undersampled Cartesian k-space trajectory. The number of elements in the mask tensor equals the width of the kspace tensor.
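Since the mask has one entry per k-space column, it can also be applied to fully sampled k-space (e.g., from the training set) to reproduce the test-time undersampling. A minimal sketch, with our own helper name apply_mask:

```python
import numpy as np

def apply_mask(kspace, mask):
    # kspace: complex array of shape (..., height, width)
    # mask:   array of shape (width,) with 0/1 entries
    # Broadcasting over all leading axes zeroes out entire k-space columns.
    return kspace * mask
```
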
A.2 Single-coil Track
 singlecoil_train.tar.gz

Training dataset for the single-coil track. The HDF5 files contain the following tensors:
 kspace

Emulated single-coil k-space data. The shape of the kspace tensor is (number of slices, height, width).
 reconstruction_rss

Root-sum-of-squares reconstruction of the multi-coil k-space that was used to derive the emulated single-coil k-space, cropped to the center region. The shape of the reconstruction_rss tensor is (number of slices, 320, 320).
 reconstruction_esc

The inverse Fourier transform of the single-coil k-space data, cropped to the center region. The shape of the reconstruction_esc tensor is (number of slices, 320, 320).
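The single-coil target can be sketched analogously. This is our own sketch, assuming centered k-space; taking the magnitude of the complex image is our assumption here, not something stated above:

```python
import numpy as np

def esc_reconstruction(kspace, crop=320):
    # kspace: complex array of shape (slices, height, width), centered
    image = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)), axes=(-2, -1)),
        axes=(-2, -1),
    )
    mag = np.abs(image)  # magnitude image (our assumption)
    h, w = mag.shape[-2:]
    top, left = (h - crop) // 2, (w - crop) // 2
    return mag[..., top:top + crop, left:left + crop]
```
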
 singlecoil_val.tar.gz

Validation dataset for the single-coil track. The HDF5 files have the same structure as the HDF5 files in singlecoil_train.tar.gz.
 singlecoil_test.tar.gz

Test dataset for the single-coil track. The HDF5 files contain the following tensors:
 kspace

Undersampled emulated single-coil k-space. The shape of the kspace tensor is (number of slices, height, width).
 mask

Defines the undersampled Cartesian k-space trajectory. The number of elements in the mask tensor equals the width of the kspace tensor.
Appendix B Classical Reconstruction with BART
The Berkeley Advanced Reconstruction Toolbox (BART) [44] (version 0.4.03, https://mrirecon.github.io/bart/) contains implementations of standard methods for coil sensitivity estimation and undersampled MR image reconstruction incorporating parallel imaging and compressed sensing. We used this tool to produce the classical baseline MSE estimates, as well as the illustrations in Figure 2. In this section we provide a brief introduction to the tool, sufficient for reproducing our baseline results. As a running example, we use a 640×368 undersampled MRI scan with 15 coils; the target is a central region that is cropped out after reconstruction.
BART provides a command-line interface which acts on files in a simple storage format. Each multidimensional array is stored in a pair of files: a header file (.hdr) and a data file (.cfl). The header file contains the dimensions of the array given in ASCII. In our running example, input.hdr contains:
1 640 368 15
The CFL file contains the raw data in column-major order, stored as complex float values. Missing k-space values are indicated by 0 entries. BART provides Python and MATLAB interfaces for reading and writing this format.
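The format is simple enough to read and write directly. The following sketch is modeled on the helpers bundled with BART; the leading "# Dimensions" comment line matches what those helpers emit, and since numpy's tofile always writes in C order, the array is transposed to obtain column-major storage:

```python
import numpy as np

def writecfl(name, array):
    # Write the .hdr/.cfl pair for a complex array.
    with open(name + ".hdr", "w") as h:
        h.write("# Dimensions\n")
        h.write(" ".join(str(d) for d in array.shape) + "\n")
    with open(name + ".cfl", "wb") as d:
        # tofile writes in C order, so transpose to get column-major data
        array.T.astype(np.complex64).tofile(d)

def readcfl(name):
    # Read the pair back into a numpy array.
    with open(name + ".hdr") as h:
        h.readline()  # skip the "# Dimensions" comment line
        dims = [int(d) for d in h.readline().split()]
    data = np.fromfile(name + ".cfl", dtype=np.complex64)
    return data.reshape(dims[::-1]).T  # undo the column-major storage
```
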
When working with k-space data in BART, it is simplest to use data in "centered" form, where the low-frequency values are in the center of the array and the high-frequency values are at the edges. Most FFT libraries output data in uncentered form. BART provides a tool for conversion:
bart fftshift 7 input output
The input and output are specified without file extensions. The value 7 is a bitmask selecting axes 0, 1, and 2 (1+2+4) of the input array; the same bitmask convention is used in the commands that follow.
Uncentered k-space data is easily identified by comparing the magnitude of the corners of the array to its center: centered FFTs of natural images have their largest magnitudes near the center of the array.
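The two conventions can be illustrated in numpy, where fft2 returns uncentered k-space (DC component at index [0, 0]) and fftshift moves the low frequencies to the center:

```python
import numpy as np

img = np.ones((8, 8))  # smooth "natural" image: all energy at DC
k_uncentered = np.fft.fft2(img)
k_centered = np.fft.fftshift(k_uncentered)

# Uncentered: the largest magnitude sits in the corner, not the center.
assert np.abs(k_uncentered[0, 0]) > np.abs(k_uncentered[4, 4])
# Centered: the largest magnitude sits in the center of the array.
assert np.abs(k_centered[4, 4]) > np.abs(k_centered[0, 0])
```
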
Parallel MR imaging is often performed as a two-step process: coil sensitivity estimation, followed by a reconstruction that assumes the estimated sensitivity maps are exact. BART implements this approach through the ecalib and pics commands. The coil sensitivity maps can be estimated using the ESPIRiT approach with the command

bart ecalib -m1 -r26 input output_sens

where -m1 produces a single set of sensitivity maps and -r26 sets the number of fully sampled reference lines to 26.
The central reference region is used by BART to estimate the coil sensitivities; this area is also known as the autocalibration region. The number of lines used in our masking procedure is a percentage of the k-space width, as described in Section 4.2.
Given the estimated coil sensitivities, a reconstruction using TV regularization can be performed with
bart pics -d4 -i200 -R T:7:0:0.05 input output_sens output

where -d4 sets the debug log level (use -d0 for no stdout output), -i200 sets the number of optimization iterations, and -R T:7:0:0.05 applies total variation (T) regularization with strength 0.05 over the axes selected by bitmask 7.
The output of this command is in CFL format; it can be converted to a PNG using bart toimg. When using L1 wavelet regularization, the character "W" should be used in place of "T" in the -R option, with the additional -m flag to ensure that ADMM is used.