QSMGAN: Improved Quantitative Susceptibility Mapping using 3D Generative Adversarial Networks with Increased Receptive Field

05/08/2019 ∙ by Yicheng Chen, et al. ∙ UC San Francisco

Quantitative susceptibility mapping (QSM) is a powerful MRI technique that has shown great potential in quantifying tissue susceptibility in numerous neurological disorders. However, the intrinsic ill-posed dipole inversion problem greatly affects the accuracy of the susceptibility map. We proposed QSMGAN: a 3D deep convolutional neural network approach based on improved U-Net with increased phase receptive field and further refined the network using the WGAN-GP training strategy. Our method could generate accurate and realistic QSM from single orientation phase maps efficiently and performed significantly better than traditional non-learning-based dipole inversion algorithms.

1 Introduction

Quantitative susceptibility mapping (QSM) is a recent phase-based quantitative magnetic resonance imaging (MRI) technique that enables in vivo quantification of magnetic susceptibility, a tissue parameter that is altered in a variety of neurological disorders[1, 2]. QSM has been shown to quantify changes related to vascular injury, such as the formation of cerebral microbleeds over time, hemorrhage, and stroke[3, 4]. Iron deposition in the deep gray matter due to aging or disease can also be investigated using QSM[5]. In neurodegenerative diseases such as Parkinson’s disease[6], Alzheimer’s disease[7] and Huntington’s disease[8], QSM can quantify the paramagnetic iron deposition related to disease progression and could potentially serve as a biomarker for diagnosing and managing patients with neurodegenerative disease.

Although QSM has been demonstrated to have great potential in both research studies and clinical practice, accurate and reproducible quantification of tissue susceptibility requires multiple careful data processing steps, including phase reconstruction, coil combination (for multi-channel coils)[9], multi-echo phase combination (for multi-echo sequences)[10], background phase removal[11, 12, 13] and phase-susceptibility dipole inversion[14, 12, 15]. Among them, the dipole inversion step is considered the most difficult because it is intrinsically an ill-posed inverse problem[16]. The relationship between the magnetic field perturbation and the susceptibility distribution is a convolution, which can be calculated more efficiently as a point-wise multiplication in frequency space; however, the dipole kernel vanishes along a conical surface in frequency space, so solving for the inverse results in missing data or noise amplification.

To overcome this issue, we can sample the missing data by acquiring at least three scans with different relative orientations of the volume-of-interest in the main magnetic field and applying the calculation of susceptibility through multiple orientation sampling (COSMOS) algorithm[15]. This requires the subject to change head orientation between the repeated scans, which has several disadvantages that have significantly limited its application in practice: 1) the scan time is prolonged because multiple repeated scans are required, increasing both the cost of QSM and the risk of motion artifacts; 2) co-registration of the different orientation images is required, which both increases processing time and potentially introduces errors due to misalignment; and 3) modern high-field head coils are usually configured to be very close to the subject’s head for higher sensitivity, limiting the ability to rotate one’s head and degrading the quality of the QSM calculation. As a result, COSMOS is usually considered impractical for patient studies despite its superior ability to alleviate the ill-posed dipole inversion.

In the last decade, many algorithms have been proposed to work around this inverse problem. Thresholded K-space Division (TKD) simply thresholds the dipole kernel at a predetermined non-zero value to avoid the divide-by-zero problem[17]. Morphology Enabled Dipole Inversion (MEDI) regularizes the ill-posed inversion problem by imposing edge preservation based on information from the magnitude images[12]. Compressed Sensing Compensated inversion (CSC) builds on the observation that the missing k-space satisfies the compressed sensing requirement and regularizes the problem using a sparse L1 norm[18]. QSIP approaches the problem by inversion of a perturbation model and makes use of a tissue/air susceptibility atlas[19]. These traditional methods have three major limitations: 1) they either suffer from significant streaking artifacts or require a careful hyperparameter tuning process; 2) they are iterative and can therefore take hours to compute, greatly reducing their practicality; and 3) they give vastly different susceptibility quantifications and reproducibility, making it difficult to compare studies that use different algorithms.

Recently, Deep Convolutional Neural Networks (DCNNs) have shown great potential in computer vision tasks such as image classification[20], semantic segmentation[21] and object detection[22]. Among the various deep neural network architectures, U-Net[23] has become the most popular backbone for many medical image-related problems[24, 25, 26] due to its effectiveness and universality. [27] and [28] adopted the U-Net structure and extended it to 3D to solve the dipole inversion problem of QSM by training the network to learn the inversion using patches of various sizes as input. Since their inception in 2014, Generative Adversarial Networks (GANs)[29] have been incorporated into CNNs to further improve performance on segmentation, classification, and especially contrast generation tasks[30, 31, 32, 33, 34, 35] by combining a generator, trained to generate more realistic and accurate images, with a discriminator, trained to distinguish real images from generated ones. In the method described in this study (QSMGAN), we 1) modified the structure of the 3D U-Net proposed by [27] and [28] to better reflect the non-local physics of the dipole convolution by increasing the phase receptive field, and 2) utilized the power of GANs to regularize the model training process to further improve the accuracy of QSM dipole inversion.

2 Materials and Methods

2.1 Theory of QSM dipole inversion and generative adversarial networks

Assuming that the susceptibility-induced magnetization can be regarded as a magnetic dipole and that the orientation of the main magnetic field defines the z-axis of the imaging Cartesian coordinate system, the magnetic field perturbation and the susceptibility distribution are related by a convolution, which can be efficiently calculated as a point-wise multiplication in frequency space[1]:

$$\Delta B(\mathbf{k}) = B_0 \, \chi(\mathbf{k}) \left( \frac{1}{3} - \frac{k_z^2}{|\mathbf{k}|^2} \right) \qquad (1)$$

where $\Delta B$ is the local field perturbation, $B_0$ is the main magnetic field, $\chi$ represents the tissue susceptibility, $\mathbf{k}$ is the frequency space vector and $k_z$ is its z-component. In practice, we measure $\Delta B$ through the phase variation and solve the inverse problem for the susceptibility distribution $\chi$. However, when $k_z^2/|\mathbf{k}|^2 \approx 1/3$, the bracketed term on the right-hand side becomes close to zero, which causes missing measurements or noise amplification when solving the inverse problem, making it ill-posed.
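To make the ill-posedness concrete, the following minimal NumPy sketch (our illustration, not part of the original work; the array shapes and the dipole_kernel/forward_field helper names are assumptions) builds the k-space dipole kernel of equation (1) and applies the forward model.

```python
# Sketch of the k-space dipole forward model in Eq. (1); voxel sizes and
# helper names are illustrative assumptions.
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0), b0_dir=(0, 0, 1)):
    """Build D(k) = 1/3 - k_z^2 / |k|^2 on the FFT grid (z = B0 direction)."""
    kx = np.fft.fftfreq(shape[0], d=voxel_size[0])
    ky = np.fft.fftfreq(shape[1], d=voxel_size[1])
    kz = np.fft.fftfreq(shape[2], d=voxel_size[2])
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    k_b0 = KX * b0_dir[0] + KY * b0_dir[1] + KZ * b0_dir[2]
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - np.where(k2 > 0, k_b0**2 / k2, 0.0)
    return D

def forward_field(chi, D):
    """Local field perturbation (in units of B0) from a susceptibility map chi."""
    return np.real(np.fft.ifftn(np.fft.fftn(chi) * D))

# The zeros of D near the conical surface k_z^2/|k|^2 = 1/3 are what make
# recovering chi from the measured field an ill-posed inverse problem.
```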

Assume $\varphi$ is the acquired tissue phase of the subject and $\chi$ is the susceptibility map we want to solve for in the ill-posed phase-susceptibility dipole inversion problem, and let the function $f$ represent the relationship between them; then we can summarize equation (1) as:

$$\varphi = f(\chi)$$

To solve the dipole inversion problem, we seek a function $h$ that gives:

$$\hat{\chi} = h(\varphi)$$

where $\hat{\chi}$ is an estimate of the true susceptibility map $\chi$. The idea of GANs is to define a game between two competing components (networks): the discriminator (D) and the generator (G). G takes an input and generates a sample that D receives and tries to distinguish from a real sample. The goal of G is to “fool” D by generating more realistic samples. In this case, we use G as the function $h$:

$$\hat{\chi} = G(\varphi)$$

The adversarial game between G and D is a minimax objective:

$$\min_G \max_D \; \mathbb{E}_{\chi \sim \mathbb{P}_r}[\log D(\chi)] + \mathbb{E}_{\varphi \sim \mathbb{P}_\varphi}[\log(1 - D(G(\varphi)))]$$

where $\mathbb{P}_r$ is the distribution of true susceptibility maps and $\mathbb{P}_\varphi$ is the distribution of tissue phases. To stabilize the training process, we adopt the method of Wasserstein GAN (WGAN)[30], and the value function for WGAN is:

$$\min_G \max_{D \in \mathcal{D}} \; \mathbb{E}_{\chi \sim \mathbb{P}_r}[D(\chi)] - \mathbb{E}_{\varphi \sim \mathbb{P}_\varphi}[D(G(\varphi))] \qquad (2)$$

where $\mathcal{D}$ is the set of 1-Lipschitz functions, which can be enforced by adding a gradient penalty term to the value function[36]:

$$L_D = \mathbb{E}_{\varphi \sim \mathbb{P}_\varphi}[D(G(\varphi))] - \mathbb{E}_{\chi \sim \mathbb{P}_r}[D(\chi)] + \lambda_{gp}\, \mathbb{E}_{\tilde{x}}\big[(\lVert \nabla_{\tilde{x}} D(\tilde{x}) \rVert_2 - 1)^2\big]$$

where $\lambda_{gp}$ is a parameter that controls the weight of the gradient penalty and $\tilde{x}$ is sampled along straight lines between real and generated samples. Since the goal of G in this task is to recover/reconstruct QSM from an input tissue phase, we also include an L1 loss as a content loss in the objective function of G:

$$L_G = \lambda_{adv} L_{adv} + \lambda_{L1} \lVert G(\varphi) - \chi \rVert_1$$

where $L_{adv} = -\mathbb{E}_{\varphi \sim \mathbb{P}_\varphi}[D(G(\varphi))]$ is the adversarial loss derived from equation (2).
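These losses can be written compactly in code. The PyTorch sketch below is a hedged illustration of the WGAN-GP critic loss and the generator loss with the added L1 content term; the function names are our own, the gradient-penalty weight uses the common default from [36] as a placeholder, and the loss weights simply mirror the training description in Section 2.5.

```python
# Minimal WGAN-GP loss sketch (assumed PyTorch); weights are placeholders.
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    """Penalize deviation of ||grad D(x_hat)|| from 1 on interpolated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()

def critic_loss(D, real_qsm, fake_qsm):
    # Wasserstein estimate E[D(fake)] - E[D(real)] plus the gradient penalty.
    return D(fake_qsm).mean() - D(real_qsm).mean() + gradient_penalty(D, real_qsm, fake_qsm)

def generator_loss(D, fake_qsm, target_qsm, lambda_adv=0.01, lambda_l1=1.0):
    adversarial = -D(fake_qsm).mean()                      # try to fool the critic
    content = torch.nn.functional.l1_loss(fake_qsm, target_qsm)
    return lambda_adv * adversarial + lambda_l1 * content
```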

2.2 QSMGAN framework

We designed a 3D U-Net-like architecture as the generator of the QSMGAN framework, as shown in Figure 1. Each U-Net block consists of two 3x3x3 Conv3d-BatchNorm-LeakyReLU (negative slope of 0.2) layers. 3D average pooling is used to down-sample the image patch, while 3D transposed convolution is applied to restore the resolution in the up-sampling path. At the end of the generator, we applied a cropping layer to restrict training to only the center part of the patch. For the discriminator of the QSMGAN, we designed a 3D patch-based convolutional neural network in which each block is composed of a 3D convolution (4x4x4 kernel size and stride 2) and a LeakyReLU (negative slope of 0.2). The four blocks in the network reduce the input patch to 1/16 of its original size in each dimension, and the final 3D convolution layer converts the resulting patch to an output corresponding to the prediction of real versus generated QSM patches.
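A hedged PyTorch sketch of these building blocks follows. Only the block composition described above (two 3x3x3 Conv-BatchNorm-LeakyReLU layers, average pooling, transposed-convolution upsampling, center cropping, and a four-block 4x4x4/stride-2 critic) comes from the text; the channel widths, number of U-Net levels, and class names are assumptions, and the actual generator is deeper than this one-level example.

```python
# Illustrative sketch of the QSMGAN building blocks (assumed PyTorch).
import torch
import torch.nn as nn

def unet_block(in_ch, out_ch):
    """Two 3x3x3 Conv3d-BatchNorm-LeakyReLU(0.2) layers."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.LeakyReLU(0.2, inplace=True),
    )

def center_crop(x, out_size):
    """Keep only the central out_size^3 region of the predicted patch."""
    s = [(d - out_size) // 2 for d in x.shape[2:]]
    return x[:, :, s[0]:s[0] + out_size, s[1]:s[1] + out_size, s[2]:s[2] + out_size]

class Generator(nn.Module):
    """One-level 3D U-Net with center cropping (the real network is deeper)."""
    def __init__(self, base_ch=32, out_size=48):
        super().__init__()
        self.out_size = out_size
        self.enc = unet_block(1, base_ch)
        self.down = nn.AvgPool3d(2)
        self.bottom = unet_block(base_ch, base_ch * 2)
        self.up = nn.ConvTranspose3d(base_ch * 2, base_ch, 2, stride=2)
        self.dec = unet_block(base_ch * 2, base_ch)
        self.head = nn.Conv3d(base_ch, 1, 1)

    def forward(self, phase_patch):
        e = self.enc(phase_patch)
        b = self.bottom(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))   # skip connection
        return center_crop(self.head(d), self.out_size)

class Critic(nn.Module):
    """Patch critic: four 4x4x4 stride-2 conv + LeakyReLU blocks, then a final conv."""
    def __init__(self, base_ch=32):
        super().__init__()
        chs = [1, base_ch, base_ch * 2, base_ch * 4, base_ch * 8]
        layers = []
        for i in range(4):
            layers += [nn.Conv3d(chs[i], chs[i + 1], 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        layers += [nn.Conv3d(chs[4], 1, 3)]   # 48^3 input -> 3^3 -> single score
        self.net = nn.Sequential(*layers)

    def forward(self, qsm_patch):
        return self.net(qsm_patch)
```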

Figure 1: QSMGAN network architecture. a) The generator part of the GAN, which adopts a 3D U-Net with center cropping as building block. b) The discriminator (“critic” in WGAN-GP) is constructed using 3D convolution with stride=2 to reduce image size. c) The overall GAN structure combines the generator and discriminator, where G is trained to generate more realistic and accurate QSM to fool D and D is trained to distinguish real and generated QSM.

2.3 Subjects and data acquisition

Eight healthy volunteers (average age 28, M/F = 3/5) were recruited in this study to form the training and validation dataset for QSMGAN. All volunteers were scanned with a 3D multi-echo gradient-recalled sequence (4 echoes, TE = 6/9.5/13/16.5 ms, TR = 50 ms, flip angle = 20°, bandwidth = 50 kHz, 0.8 mm isotropic resolution, FOV = 24x24x15 cm) using a 32-channel phased-array coil on a 7T MRI scanner (GE Healthcare Technologies, Milwaukee, WI, USA). The sequence was repeated three times on each volunteer with different head orientations (normal position, tilted forward, and tilted left) to acquire data for COSMOS reconstruction. GRAPPA-based parallel imaging[37] with an acceleration factor of 3 and 16 auto-calibration lines was also used to reduce the scan time for each orientation to about 17 minutes.

2.4 QSM data processing and dataset preparation

The raw k-space data were retrieved from the scanner and processed on a Linux workstation using in-house software developed with Matlab 2015b (Mathworks Inc., Natick, MA, USA). The following processing steps (summarized in Figure 2) were performed to obtain the tissue phase maps used as input to QSMGAN and to calculate the gold-standard COSMOS-QSM used as the learning target: 1) GRAPPA reconstruction was applied to interpolate the missing k-space lines due to parallel imaging acceleration, and a channel-wise inverse Fourier transform was applied to obtain the coil magnitude and phase images; 2) coil images were combined to obtain robust echo magnitude and phase images using the MCPC-3D-S method[10]; 3) the raw phase was unwrapped using a Laplacian-based algorithm[38]; 4) FSL BET[39] was applied to the magnitude images from all echoes to obtain a composite brain mask from the intersection of the individual echo masks; 5) V-SHARP[18] was used to remove the background field phase to obtain the tissue phase map; 6) images from different orientations were co-registered using the magnitude images with FSL FLIRT[39]; 7) the dipole field inversion was solved using the COSMOS algorithm[15]. In addition, TKD[17], MEDI[12] and iLSQR[14] QSM maps were reconstructed from single-orientation data for evaluation and comparison. A threshold of 0.15 was selected for the TKD algorithm, and a regularization weight of λ = 2000 was used for MEDI.
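For reference, the TKD comparison reconstruction is simple enough to sketch directly. The snippet below (assumed NumPy, reusing the dipole_kernel helper sketched in Section 2.1 and omitting the scaling of tissue phase to field in ppm) thresholds the dipole kernel at 0.15 before dividing, as used here.

```python
# Hedged sketch of thresholded k-space division (TKD); not the authors' code.
import numpy as np

def tkd_inversion(tissue_phase, voxel_size, threshold=0.15):
    D = dipole_kernel(tissue_phase.shape, voxel_size)
    # Replace near-zero kernel values with +/- threshold to avoid dividing by zero.
    D_safe = np.where(np.abs(D) < threshold, threshold * np.sign(D), D)
    D_safe[D_safe == 0] = threshold          # sign(0) = 0, so force a positive value
    return np.real(np.fft.ifftn(np.fft.fftn(tissue_phase) / D_safe))
```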

Figure 2: QSM data processing pipeline employed in this study. The figure shows the processing of one scan orientation; data from the other two orientations were processed similarly (gray boxes) and combined to reconstruct the COSMOS-QSM.

2.5 Training and validation

The 8 subjects were divided into 5 for training, 1 for validation, and 2 for testing. Scans from all three orientations were included, so the total numbers of scans in the training/validation/test sets were 15/3/6. To build the training set, tissue phase and susceptibility patches were sampled at center coordinates spaced 8 voxels apart in all three spatial dimensions. Since background occupies most of the image volume, we sampled 90% of the patches from inside the brain and only 10% from the background to increase the efficiency of training. For validation and testing, the input tissue phase volume was divided into non-overlapping patches according to the output patch size, and the susceptibility map was reconstructed patch-wise by feeding each input tissue phase patch to the trained network. Figure 3 demonstrates the relationship between receptive field and input/output patch size.
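A sketch of this sampling scheme is given below (an assumed helper, not the authors' code): patch centers lie on an 8-voxel grid and are drawn 90% from inside the brain mask and 10% from the background.

```python
# Sketch of the 90/10 brain/background patch-center sampling (assumed NumPy helper).
import numpy as np

def sample_patch_centers(brain_mask, gap=8, brain_fraction=0.9, n_patches=None, rng=None):
    rng = rng or np.random.default_rng(0)
    zs, ys, xs = [np.arange(0, s, gap) for s in brain_mask.shape]
    grid = np.array(np.meshgrid(zs, ys, xs, indexing="ij")).reshape(3, -1).T
    in_brain = brain_mask[grid[:, 0], grid[:, 1], grid[:, 2]] > 0
    brain_centers, bg_centers = grid[in_brain], grid[~in_brain]
    if n_patches is None:
        n_patches = len(brain_centers)
    n_brain = int(round(brain_fraction * n_patches))
    # Draw with replacement for simplicity; 90% brain, 10% background centers.
    idx_brain = rng.choice(len(brain_centers), n_brain, replace=True)
    idx_bg = rng.choice(len(bg_centers), n_patches - n_brain, replace=True)
    return np.concatenate([brain_centers[idx_brain], bg_centers[idx_bg]])
```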

Figure 3: Demonstration of the relationship between receptive field and input/output patch size. a) input patch size = output patch size, red dot represents voxels near the patch center. b) input patch size = output patch size, voxels near the patch edge receive only information from the orange region. c) input patch size > output patch size (with center cropping), voxels near the edge receive more information than in b).

To assist the neural network training, we multiplied the input phase by a scale factor of 100 and transformed the output susceptibility $\chi$ by a scaled hyperbolic tangent operation to obtain the surrogate target $\tilde{\chi} = \tanh(c\,\chi)$, where $c$ is a fixed scaling constant.

This transform not only converts the range of the target susceptibility map to [-1, 1], which aids the network training, but also results in a more Gaussian-distributed histogram, helping the network learn values across different ranges (Figure 4).
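A minimal sketch of this value transform is given below; the tanh scale c is a placeholder constant (the exact value is not reproduced here), and network predictions are mapped back with the inverse transform after inference.

```python
# Sketch of the input/target scaling; TANH_SCALE is an assumed placeholder.
import numpy as np

PHASE_SCALE = 100.0   # the input tissue phase is multiplied by 100 (from the text)
TANH_SCALE = 10.0     # placeholder scale for the tanh target transform

def scale_phase(phi):
    return PHASE_SCALE * phi

def to_surrogate(chi, c=TANH_SCALE):
    return np.tanh(c * chi)               # squashes susceptibility into [-1, 1]

def from_surrogate(y, c=TANH_SCALE, eps=1e-6):
    y = np.clip(y, -1 + eps, 1 - eps)
    return np.arctanh(y) / c              # invert the transform after prediction
```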

Figure 4: An axial slice of the original QSM (top left) and its histogram (top right) compared to the tanh transformed QSM (bottom left) and its histogram (bottom right). We can see that the tanh transform distributed the susceptibility values more evenly between -1.0 and +1.0, resulting in better contrast and value ranges for the network training.

As the baseline network, we first trained the U-Net-based generator separately with the pairs of input and output patch sizes listed in Table 1. To train the generator, an Adam optimizer with a learning rate of 1e-4 was used, with betas set to (0.5, 0.999). The network was trained for 40,000 iterations with a batch size of 16, which was lowered to 8 for larger input patch sizes. L1 loss was used as the loss function for the baseline network.
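The baseline training loop can be sketched as follows (assumed PyTorch, with an assumed data loader that yields input/target patch pairs); the optimizer settings mirror those stated above.

```python
# Sketch of the baseline generator training (assumed data loader and device handling).
import itertools
import torch

def train_baseline(G, train_loader, n_iters=40_000, device="cuda"):
    G = G.to(device).train()
    opt = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
    batches = itertools.cycle(train_loader)          # loop over (phase, QSM) patch pairs
    for _ in range(n_iters):
        phase_patch, qsm_patch = next(batches)       # e.g. 64^3 input, 48^3 target
        phase_patch, qsm_patch = phase_patch.to(device), qsm_patch.to(device)
        loss = torch.nn.functional.l1_loss(G(phase_patch), qsm_patch)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G
```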

3D U-Net Patch Size (input → output)   L1 error (×1e-3)   PSNR           NMSE
32³ → 32³                              1.490 ± 0.184      42.25 ± 1.01   0.302 ± 0.056
48³ → 32³                              1.403 ± 0.204      43.07 ± 1.22   0.252 ± 0.063
64³ → 32³                              1.316 ± 0.230      43.39 ± 1.37   0.237 ± 0.072
96³ → 32³                              1.319 ± 0.216      43.38 ± 1.32   0.237 ± 0.068
48³ → 48³                              1.424 ± 0.195      42.58 ± 1.13   0.281 ± 0.061
64³ → 48³                              1.309 ± 0.210      43.53 ± 1.31   0.229 ± 0.065
96³ → 48³                              1.310 ± 0.212      43.37 ± 1.28   0.237 ± 0.067
128³ → 48³                             1.311 ± 0.215      43.40 ± 1.31   0.236 ± 0.068
64³ → 64³                              1.389 ± 0.211      42.87 ± 1.21   0.264 ± 0.063
96³ → 64³                              1.316 ± 0.207      43.46 ± 1.28   0.233 ± 0.066
128³ → 64³                             1.322 ± 0.211      43.32 ± 1.27   0.240 ± 0.067
Table 1: Test set performance of the U-Net baseline with different input and output patch sizes.

To train the QSMGAN, we again started with the baseline network and then: 1) fixed the generator G and trained D for 20,000 iterations to ensure that D was well trained, as suggested by [36]; and 2) trained G and D together for 40,000 iterations. During each iteration, D (the critic) was updated 5 times with the gradient penalty applied. Adam optimizers were used for both G and D, and the learning rate was lowered to 1e-5. To balance the content loss and the adversarial loss, the weight of the L1 content loss was set to 1 and the weight of the adversarial loss to 0.01.
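This two-stage schedule can be sketched as follows (assumed PyTorch; critic_loss and generator_loss are the functions sketched in Section 2.1, and the data-loading details are placeholders).

```python
# Sketch of the two-stage adversarial training: pretrain D against the frozen
# pretrained G, then alternate 5 critic updates per generator update.
import itertools
import torch

def train_qsmgan(G, D, loader, pre_iters=20_000, gan_iters=40_000, n_critic=5, device="cuda"):
    G, D = G.to(device), D.to(device)
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-5, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-5, betas=(0.5, 0.999))
    batches = itertools.cycle(loader)

    def critic_step():
        phase, real = (t.to(device) for t in next(batches))
        with torch.no_grad():
            fake = G(phase)                       # generator output, detached from G
        loss_d = critic_loss(D, real, fake)       # Wasserstein estimate + gradient penalty
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    for _ in range(pre_iters):                    # stage 1: train D with G fixed
        critic_step()
    for _ in range(gan_iters):                    # stage 2: alternate D and G updates
        for _ in range(n_critic):
            critic_step()
        phase, real = (t.to(device) for t in next(batches))
        loss_g = generator_loss(D, G(phase), real)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return G, D
```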

2.6 Evaluation metrics

To evaluate the quality of the predicted QSM map reconstructed by the network ($\hat{\chi}$), we calculated and compared the following metrics: 1) L1 error $= \frac{1}{N}\sum_i |\hat{\chi}_i - \chi_i|$; 2) Peak Signal-to-Noise Ratio (PSNR) $= 20\log_{10}\!\left(\mathrm{range}(\chi)/\sqrt{\mathrm{MSE}(\hat{\chi},\chi)}\right)$, where $\mathrm{range}(\cdot)$ computes the voxel value range of the input image and $\mathrm{MSE}(\cdot,\cdot)$ computes the mean squared error between the reconstructed image and target image; and 3) Normalized Mean Squared Error (NMSE) $= \lVert\hat{\chi}-\chi\rVert_2^2 / \lVert\chi\rVert_2^2$.
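These metrics can be computed in a few lines of NumPy; the sketch below follows the definitions above, with restriction to the brain mask omitted for brevity.

```python
# Sketch of the evaluation metrics (assumed NumPy, whole-volume for simplicity).
import numpy as np

def l1_error(pred, target):
    return np.mean(np.abs(pred - target))

def psnr(pred, target):
    mse = np.mean((pred - target) ** 2)
    value_range = target.max() - target.min()
    return 20.0 * np.log10(value_range / np.sqrt(mse))

def nmse(pred, target):
    return np.sum((pred - target) ** 2) / np.sum(target ** 2)
```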

3 Results

3.1 Baseline 3D U-Net

We experimented with combinations of three different output patch sizes (32³, 48³, 64³) and five input patch sizes (32³, 48³, 64³, 96³, 128³, with input ≥ output) for the baseline 3D U-Net. Figure 5 shows example axial slices illustrating the effects of different input-output size pairs, while Table 1 lists the quantitative metrics (L1 error, PSNR, NMSE) used to evaluate the quality of the resulting QSM maps. When the input patch size was the same as the output patch size, the inversion error increased towards the edge of the patch, resulting in visible discontinuities in a grid-like pattern in the reconstructed QSM map. The higher L1 error and NMSE and lower PSNR also support this observation quantitatively. When we increased the input patch size and applied center cropping at the end of the U-Net as shown in Figure 3, the patch-edge artifacts were reduced and the metrics improved. Among the different combinations of patch sizes, an input patch size of 64³ and an output patch size of 48³ (64³ → 48³) provided the best balance between the accuracy of the U-Net dipole inversion and the computational burden. Therefore, for the QSMGAN evaluation we used the 64³ → 48³ 3D U-Net as the basic building block.

Figure 5: Comparison of reconstructed QSM using the 3D U-Net with different input/output patch sizes (input ≥ output). The green box highlights the ground truth COSMOS QSM. Red arrows highlight the edge discontinuity artifacts.

3.2 Effectiveness of QSMGAN

Using the 64³ → 48³ 3D U-Net as the generator, the added benefit of QSMGAN over the 3D U-Net alone is shown by the quantitative metrics listed in Table 2. Figure 6 provides a visual comparison of the QSM reconstructed by the 3D U-Net and by QSMGAN: the adversarial training further improved the quality of the reconstructed QSM map by reducing both residual blurring and the remaining edge discontinuity artifacts from the relatively small input patch size, providing a more accurate and detailed mapping of susceptibility than the 3D U-Net baseline.

Methods               L1 error (×1e-3)   PSNR           NMSE
TKD                   2.826 ± 0.178      38.82 ± 1.69   0.496 ± 0.076
MEDI                  2.909 ± 0.194      41.24 ± 1.71   0.539 ± 0.059
iLSQR                 2.193 ± 0.227      42.03 ± 1.45   0.410 ± 0.088
3D U-Net 64³ → 48³    1.309 ± 0.210      43.53 ± 1.31   0.229 ± 0.065
QSMGAN 64³ → 48³      1.262 ± 0.248      43.72 ± 1.55   0.221 ± 0.078
Table 2: Test set performance of the U-Net baseline, QSMGAN, and non-learning-based algorithms.
Figure 6: Comparison of QSM reconstructed using the 3D U-Net baseline network and QSMGAN. Rows 1 and 4: sagittal view. Rows 2 and 5: coronal view. Rows 3 and 6: axial view. Numbers at the bottom of each slice show that slice's L1 error relative to COSMOS-QSM.

3.3 Comparison with non-learning-based methods

Compared to three common ‘non-learning-based’ QSM dipole inversion algorithms (TKD, MEDI and iLSQR), our QSMGAN approach reduced NMSE and L1 error by 42-59% in the test datasets while increasing PSNR by 4-13%, as shown in Table 2. Figure 7 shows example QSM slices from the two test subjects generated by QSMGAN and by the non-learning-based algorithms. Although TKD had the lowest computational complexity, it also produced the most streaking artifacts. Despite its smooth appearance, MEDI was the least uniform, with relatively high L1 error and inaccurate contrast in some fine structures such as vessels. It also required the longest computation time of all the methods (about 2 hours on a regular desktop workstation). Although iLSQR had lower L1 error than TKD and MEDI, it was visually noisier than all the other methods. QSMGAN not only achieved the best L1 error, PSNR, and NMSE but also produced the QSM map most similar to COSMOS, in only about 2 seconds of reconstruction time per scan, the same order of magnitude as the TKD method.

Figure 7: Comparison of QSM reconstructed using non-learning-based dipole inversion algorithms (TKD, MEDI and iLSQR) and QSMGAN. Rows 1 and 4: sagittal view. Rows 2 and 5: coronal view. Rows 3 and 6: axial view. Numbers at the bottom of each slice show that slice's L1 error relative to COSMOS-QSM.

4 Discussion

Although in theory the phase-susceptibility relationship in QSM is global, meaning that the tissue phase is determined by the susceptibility at all locations in the imaging volume, we still adopted a patch-based deep learning approach similar to [28] for several reasons. Since the network is 3D, the patch-based method significantly reduces the computational complexity and memory requirement compared to whole-volume-based methods like that described in [27], especially for high-resolution QSM. For example, to generate a full QSM volume with a 256x256x150 matrix size using the entire volume as input to the 3D U-Net architecture, even the most advanced GPU with 32 GB of graphics memory would not be able to hold a whole training sample. The patch-based method also converts one single scan into hundreds of input images, even before data augmentation. Since COSMOS requires a relatively long scan time and is cumbersome to conduct, this is beneficial for training a more generalizable deep convolutional network when only a limited amount of data is available. Because the phase is mostly determined by nearby susceptibility due to the properties of the susceptibility-phase convolution kernel, the patch-based approach provides a good approximation of the dipole inversion.

As Table 1 demonstrates, increasing the input patch size and applying center cropping at the end of the 3D U-Net significantly improved the quality of the reconstructed QSM maps. This can be intuitively explained by Figure 3: when the input patch size equals the output patch size, an output voxel near the center of the patch (Figure 3a) receives information from the entire patch, whereas a voxel near the edge of the output patch (Figure 3b) receives information only from the orange region, so a large portion of the phase information (the gray region) is missing, reducing the ability of the network to accurately solve for the susceptibility. When we increase the input patch size (Figure 3c) and crop the output patch so that only its center is considered a valid QSM prediction, voxels near the edge of the output patch regain phase input information, thereby increasing the accuracy of the quantified susceptibility values.

Another observation from Table 1 is that the medium output patch size (48³) achieved the best QSM reconstruction performance. The smaller output patch size (32³) performed worse because the output voxels received less information, introducing more error into the patch approximation of the global convolution. Unexpectedly, the larger output patch size (64³) did not provide any extra benefit to the dipole inversion either. This might be because it introduces more variables into the computation and increases the difficulty of training a good network for QSM reconstruction. In addition, for each output patch size, using an excessively large input patch (such as 96³ → 32³) did not further reduce the error but slightly degraded the QSM quality. This might be due to the additional information far from the output patch interfering with the dipole inversion.

A further disadvantage of an excessively large input patch size is the dramatically increased computational complexity and GPU memory requirement. Because the network is three-dimensional, the computational complexity and memory requirement of training roughly scale as O(n³) with the input patch edge length n. The center cropping we applied to ensure a large enough receptive field only exacerbates this problem, greatly reducing the efficiency of the prediction process. For example, increasing the input patch size from 32³ to 64³ makes the training/prediction time and memory roughly 8x larger, while only 1/8 of each computed patch is utilized. Based on the observation that excessively large input patch sizes greatly increased the computational burden without improving the quality of the resulting QSM map, we selected the 64³ → 48³ 3D U-Net as the base network to integrate with the GAN.

The rationale for the GAN training, which included adding a discriminator or “critic”, was to guide the generator (or the 3D U-Net) to further refine its result so that it cannot be distinguished from a real COSMOS QSM patch. Although it took a long time (48 hours) to train the QSMGAN, once the training was finished, the discriminator was no longer needed. As a result, reconstruction or prediction of the QSM map for a new scan/subject from tissue phase only required one forward pass through the 3D U-Net for each input patch, thereby resulting in a computation complexity that is identical to the 3D U-Net baseline.

5 Conclusions

In this study, we implemented a 3D U-Net deep convolutional neural network approach to improve the dipole inversion step of quantitative susceptibility mapping reconstruction. To better approximate the global convolution in the phase-susceptibility relationship through patch-based neural networks, we enlarged the input patch size and introduced center cropping to ensure an increased receptive field for all neural network outputs. This cropping technique yielded significantly fewer edge discontinuity artifacts and higher accuracy. Adding a generative adversarial network based on the WGAN-GP technique further improved the stability of the training process, the image quality, and the accuracy of the susceptibility quantification. Compared to traditional non-learning dipole inversion algorithms such as TKD, MEDI and iLSQR, our proposed method can efficiently generate more accurate, COSMOS-like QSM maps from single-orientation, background-field-removed tissue phase images. Future work will investigate the network’s ability to generalize to other scan parameters such as TE, TR, and image resolution, as well as evaluate the performance of QSMGAN on patients with different pathologies.

References

  • [1] Chunlei Liu, Hongjiang Wei, Nan-jie Gong, Matthew Cronin, Russel Dibb, and Kyle Decker. Quantitative Susceptibility Mapping: Contrast Mechanisms and Clinical Applications. Tomography, 1(1):3–17, 2015.
  • [2] Yi Wang, Pascal Spincemaille, Zhe Liu, Alexey Dimov, Kofi Deh, Jianqi Li, Yan Zhang, Yihao Yao, Kelly M. Gillen, Alan H. Wilman, Ajay Gupta, Apostolos John Tsiouris, Ilhami Kovanlikaya, Gloria Chia Yi Chiang, Jonathan W. Weinsaft, Lawrence Tanenbaum, Weiwei Chen, Wenzhen Zhu, Shixin Chang, Min Lou, Brian H. Kopell, Michael G. Kaplitt, David Devos, Toshinori Hirai, Xuemei Huang, Yukunori Korogi, Alexander Shtilbans, Geon Ho Jahng, Daniel Pelletier, Susan A. Gauthier, David Pitt, Ashley I. Bush, Gary M. Brittenham, and Martin R. Prince. Clinical quantitative susceptibility mapping (QSM): Biometal imaging and its emerging roles in patient care. Journal of Magnetic Resonance Imaging, pages 1–21, 2017.
  • [3] Jan Klohs, Andreas Deistung, Ferdinand Schweser, Joanes Grandjean, Marco Dominietto, Conny Waschkies, Roger M Nitsch, Irene Knuesel, Jürgen R Reichenbach, and Markus Rudin. Detection of cerebral microbleeds with quantitative susceptibility mapping in the ArcAbeta mouse model of cerebral amyloidosis. Journal of Cerebral Blood Flow & Metabolism, 31(12):2282–2292, 2011.
  • [4] Tian Liu, Krishna Surapaneni, Min Lou, Liuquan Cheng, Pascal Spincemaille, and Yi Wang. Cerebral microbleeds: burden assessment by using quantitative susceptibility mapping. Radiology, 262(1):269–78, 2012.
  • [5] Wei Li, Bing Wu, Anastasia Batrachenko, Vivian Bancroft-Wu, Rajendra A. Morey, Vandana Shashi, Christian Langkammer, Michael D. De Bellis, Stefan Ropele, Allen W. Song, and Chunlei Liu. Differential developmental trajectories of magnetic susceptibility in human brain gray and white matter over the lifespan. Human Brain Mapping, 35(6):2698–2713, 2014.
  • [6] Naying He, Huawei Ling, Bei Ding, Juan Huang, Yong Zhang, Zhongping Zhang, Chunlei Liu, Kemin Chen, and Fuhua Yan. Region-specific disturbed iron distribution in early idiopathic Parkinson’s disease measured by quantitative susceptibility mapping. Human Brain Mapping, 36(11):4407–4420, 2015.
  • [7] Julio Acosta-Cabronero, Guy B. Williams, Arturo Cardenas-Blanco, Robert J. Arnold, Victoria Lupson, and Peter J. Nestor. In vivo quantitative susceptibility mapping (QSM) in Alzheimer’s disease. PLoS ONE, 8(11), 2013.
  • [8] Jiri M.G. Van Bergen, J. Hua, P. G. Unschuld, Issel Anne L. Lim, Craig K. Jones, Russell L. Margolis, Christopher A. Ross, Peter C.M. Van Zijl, and Xu Li. Quantitative susceptibility mapping suggests altered brain iron in premanifest Huntington disease. American Journal of Neuroradiology, 37(5):789–796, 2016.
  • [9] Kathryn E Hammond, Janine M Lupo, Duan Xu, Meredith Metcalf, Douglas A C Kelley, Daniel Pelletier, Susan M Chang, Pratik Mukherjee, Daniel B Vigneron, and Sarah J Nelson. Development of a robust method for generating 7.0 T multichannel phase images of the brain with application to normal volunteers and patients with neurological diseases. 2007.
  • [10] Korbinian Eckstein, Barbara Dymerska, Beata Bachrata, Wolfgang Bogner, Karin Poljanc, Siegfried Trattnig, and Simon Daniel Robinson. Computationally Efficient Combination of Multi-channel Phase Data From Multi-echo Acquisitions (ASPIRE). Magnetic Resonance in Medicine, c:1–11, 2017.
  • [11] Wei Li, Bing Wu, and Chunlei Liu. iHARPERELLA: an improved method for integrated 3D phase unwrapping and background phase removal. Proc. Intl. Soc. Mag. Reson. Med., 23(1):3313, 2015.
  • [12] Tian Liu, Jing Liu, Ludovic De Rochefort, Pascal Spincemaille, Ildar Khalidov, James Robert Ledoux, and Yi Wang. Morphology enabled dipole inversion (MEDI) from a single-angle acquisition: Comparison with COSMOS in human brain imaging. Magnetic Resonance in Medicine, 66(3):777–783, 2011.
  • [13] Hongfu Sun and Alan H. Wilman. Background field removal using spherical mean value filtering and Tikhonov regularization. Magnetic Resonance in Medicine, 71(3):1151–1157, 2014.
  • [14] Wei Li, Nian Wang, Fang Yu, Hui Han, Wei Cao, Rebecca Romero, Bundhit Tantiwongkosi, Timothy Q. Duong, and Chunlei Liu. A method for estimating and removing streaking artifacts in quantitative susceptibility mapping. NeuroImage, 108:111–122, 2015.
  • [15] Tian Liu, Pascal Spincemaille, Ludovic De Rochefort, Bryan Kressler, and Yi Wang. Calculation of susceptibility through multiple orientation sampling (COSMOS): A method for conditioning the inverse problem from measured magnetic field map to susceptibility source image in MRI. Magnetic Resonance in Medicine, 61(1):196–204, 2009.
  • [16] Andreas Deistung, Ferdinand Schweser, and Jürgen R. Reichenbach. Overview of quantitative susceptibility mapping. NMR in Biomedicine, (December 2015), 2016.
  • [17] Karin Shmueli, Jacco A de Zwart, Peter van Gelderen, Tie-Qiang Li, Stephen J Dodd, and Jeff H Duyn. Magnetic susceptibility mapping of brain tissue in vivo using MRI phase data. Magnetic resonance in medicine, 62(6):1510–22, 2009.
  • [18] Bing Wu, Wei Li, Arnaud Guidon, and Chunlei Liu. Whole brain susceptibility mapping using compressed sensing. Magnetic Resonance in Medicine, 67(1):137–147, 2012.
  • [19] Clare Poynton, Mark Jankinson, Elfar Adalsteinsson, Edith Sullivan, Adolf Pfefferbaum, and William Wells. Quantitative Susceptibility Mapping by Inversion of a Perturbation Field Model: Correlation with Brain Iron in Normal Aging. IEEE transactions on medical imaging, 34(1):339–353, 2015.
  • [20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. dec 2015.
  • [21] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully Convolutional Networks for Semantic Segmentation. nov 2014.
  • [22] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. jun 2015.
  • [23] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. pages 1–8, may 2015.
  • [24] Enhao Gong, John M. Pauly, Max Wintermark, and Greg Zaharchuk. Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. Journal of Magnetic Resonance Imaging, 48(2):330–340, 2018.
  • [25] Jens Kleesiek, Gregor Urban, Alexander Hubert, Daniel Schwarz, Klaus Maier-Hein, Martin Bendszus, and Armin Biller. Deep MRI brain extraction: A 3D convolutional neural network for skull stripping. NeuroImage, 129:460–469, 2016.
  • [26] Jure Zbontar, Florian Knoll, Anuroop Sriram, Matthew J. Muckley, Mary Bruno, Aaron Defazio, Marc Parente, Krzysztof J. Geras, Joe Katsnelson, Hersh Chandarana, Zizhao Zhang, Michal Drozdzal, Adriana Romero, Michael Rabbat, Pascal Vincent, James Pinkerton, Duo Wang, Nafissa Yakubova, Erich Owens, C. Lawrence Zitnick, Michael P. Recht, Daniel K. Sodickson, and Yvonne W. Lui. fastMRI: An Open Dataset and Benchmarks for Accelerated MRI. pages 1–29, 2018.
  • [27] Steffen Bollmann, Kasper Gade Bøtker Rasmussen, Mads Kristensen, Rasmus Guldhammer Blendal, Lasse Riis Østergaard, Maciej Plocharski, Kieran O’Brien, Christian Langkammer, Andrew Janke, and Markus Barth. DeepQSM - using deep learning to solve the dipole inversion for quantitative susceptibility mapping. NeuroImage, 195(March):373–383, 2019.
  • [28] Jaeyeon Yoon, Enhao Gong, Itthi Chatnuntawech, Berkin Bilgic, Jingu Lee, Woojin Jung, Jingyu Ko, Hosan Jung, Kawin Setsompop, Greg Zaharchuk, Eung Yeop Kim, John Pauly, and Jongho Lee. Quantitative susceptibility mapping using deep neural network: QSMnet. NeuroImage, 179:199–206, oct 2018.
  • [29] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Nets. Advances in Neural Information Processing Systems 27, pages 2672–2680, 2014.
  • [30] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. 2017.
  • [31] Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P. Recht, Daniel K. Sodickson, Thomas Pock, and Florian Knoll. Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine, 79(6):3055–3071, 2018.
  • [32] Dong Nie, Roger Trullo, Jun Lian, Li Wang, Caroline Petitjean, Su Ruan, Qian Wang, and Dinggang Shen. Medical Image Synthesis with Deep Convolutional Adversarial Networks. IEEE Transactions on Biomedical Engineering, 9294(c):1–11, 2018.
  • [33] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. pages 1–16, 2015.
  • [34] Guang Yang, Simiao Yu, Hao Dong, Greg Slabaugh, Pier Luigi Dragotti, Xujiong Ye, Fangde Liu, Simon Arridge, Jennifer Keegan, Yike Guo, and David Firmin. DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction. IEEE Transactions on Medical Imaging, 37(6):1310–1321, jun 2018.
  • [35] Jun Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. Proceedings of the IEEE International Conference on Computer Vision, 2017:2242–2251, 2017.
  • [36] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved Training of Wasserstein GANs. 2017.
  • [37] Mark A Griswold, Peter M Jakob, Robin M Heidemann, Mathias Nittka, Vladimir Jellus, Jianmin Wang, Berthold Kiefer, and Axel Haase. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magnetic resonance in medicine, 47(6):1202–10, jun 2002.
  • [38] Wei Li, Bing Wu, and Chunlei Liu. Quantitative susceptibility mapping of human brain reflects spatial variation in tissue composition. NeuroImage, 55(4):1645–1656, 2011.
  • [39] Mark Jenkinson, Christian F. Beckmann, Timothy E.J. Behrens, Mark W. Woolrich, and Stephen M. Smith. FSL. NeuroImage, 62(2):782–790, aug 2012.