1 Introduction
Magnetic resonance imaging (MRI) is a widely used medical imaging modality in clinical practice, as it is an excellent non-invasive source of structural as well as anatomical information. A major shortcoming of the MRI acquisition process is its considerably long scan time, caused by the sequential acquisition of large volumes of data, not in the image domain but in k-space, i.e. the Fourier domain. Such prolonged scanning can introduce significant artefacts due to physiological motion and patient movement during the scan, and may also hinder the use of MRI in time-critical diagnosis.
A possible way to speed up the imaging process is through parallel imaging techniques [30], [11]. However, the number and arrangement of the receiver coils severely limit the acceleration factor of such techniques, and can introduce imaging artefacts. Another possible route to fast acquisition is undersampling based on compressive sensing (CS) theory [8], which is used very often for this purpose. However, undersampling renders the inverse problem ill-posed, making the recovery of high-quality MR images extremely challenging. Moreover, the presence of noise during acquisition may further degrade the reconstruction quality.
Conventional approaches for CS-MRI reconstruction rely extensively on sparse representations to impose prior knowledge on the structure of the MR image to be reconstructed. Sparse representations can be obtained with predefined transforms [24] such as total variation, the discrete cosine transform, the discrete wavelet transform, or the discrete Fourier transform. Alternatively, dictionary learning based methods [32] learn sparse representations from the subspace spanned by the data. Both types of approaches suffer from long computation times due to the iterative nature of their optimization processes. Moreover, universally applicable sparsifying transforms may fail to completely capture the fine details observed in biological tissues [23].
Deep learning frameworks have enjoyed great success in similar inverse problems such as single-image super-resolution [7] and denoising [39], where the methods recover missing information from incomplete or noisy data. Yang et al. [41] used the alternating direction method of multipliers (ADMM) algorithm [4] to train their deep network for CS-MRI reconstruction. With the recent advancements of generative adversarial networks (GANs) [10, 18], CS-MRI reconstruction has also been addressed within an adversarial learning framework. Bora et al. [3] have shown that pre-trained generative models such as variational autoencoders and GANs [10] can be used for the recovery of CS signals without making use of sparsity at all. Yang et al. [40] proposed a U-net [34] based generator, following a refinement learning approach, with mean square error (MSE) and perceptual losses to reconstruct the images. In [31], the authors proposed a fully residual network using addition-based skip connections, with a cyclic loss as a data consistency constraint in the training process to achieve better reconstruction quality. Deora et al. [6] proposed a U-net based generator with a patch discriminator [18] to perform the reconstruction task; along with mean absolute error (MAE) and structural similarity (SSIM) [42] losses, they used the Wasserstein loss [1] to improve adversarial learning. Mardani et al. [25] introduced an affine projection operator between the generator and the discriminator to improve data consistency in the reconstructed images. Although deep adversarial networks have significantly improved the quality of CS-MRI reconstruction, one of their biggest shortcomings is that they work on real-valued inputs even though the input is inherently complex-valued. The other limitation is that almost all reconstruction networks depend on pixel-based $\ell_1$ or $\ell_2$ losses. These reconstruct the low-frequency components of the data well, but often fail to generate the middle- and high-frequency information that depicts the fine textural and contextual parts of an image [16].
Motivation of the work: The complex parameter space provides several benefits over the real parameter space. Apart from its biological inspiration and significance [33], a complex-valued representation not only increases the representational capacity of the network, but can also be more stable than a real-valued one in various applications [36, 5]. Complex-valued operations can be performed with straightforward optimization techniques [27] without sacrificing the generalization ability of the network [14]. Several researchers have reported that complex-valued neural networks exhibit faster learning and better robustness to noise [5, 2, 38]. Complex-valued operations also preserve the phase information, which encodes fine structural details of an image [36, 28]. Even with these striking benefits, complex-valued deep networks remain largely unexplored. In fact, to the best of our knowledge, a complex-valued GAN (Co-VeGAN) model has never been explored for any reconstruction problem.
Contributions: The contributions of this work are as follows.

Complex-valued operations are largely unexplored in GAN architectures, and have never been explored for the CS-MRI reconstruction problem. This work not only exploits complex-valued operations to achieve better reconstruction, but also documents the stability and quality of a Co-VeGAN approach for a reconstruction problem with various losses such as $\ell_1$ and SSIM.

We propose a novel generator architecture by modifying the existing U-net model [34].

As the CS-MRI problem formulation is closely related to super-resolution problems, taking inspiration from [16], we introduce a wavelet loss in our model to better reconstruct the mid-frequency components of the MR image. To the best of our knowledge, a wavelet loss has never been integrated with any complex-valued neural network before.
2 Methodology
The acquisition model of MRI can be described as follows:

(1) $\mathbf{y} = \mathbf{M}\,\mathbf{F}\,\mathrm{vec}(\mathbf{x}) + \mathbf{n}$

where $\mathbf{x}$ is the desired image, $\mathbf{y}$ denotes the observed data vector, and the vector $\mathbf{n}$ captures the noise. $\mathbf{F}$ denotes the matrix computing the 2D Fourier transform, $\mathbf{M}$ describes the undersampling matrix, and $\mathrm{vec}(\cdot)$ denotes vectorization. Given an observation $\mathbf{y}$, the aim of reconstruction is to recover $\mathbf{x}$ in the presence of a non-zero noise vector $\mathbf{n}$. We attempt the recovery of $\mathbf{x}$ by using a GAN model.
2.1 Complex-valued GAN
A GAN comprises two networks, namely a generator $G$ and a discriminator $D$. In order to generate images similar to samples from the distribution $p_{data}$ of true data, the generator attempts to map an input vector $\mathbf{z}$ to the output $G(\mathbf{z})$. On the other hand, the discriminator aims to distinguish the generated samples $G(\mathbf{z})$ from samples drawn from $p_{data}$.
We propose the use of a complex-valued GAN, where the generator $G$ and the discriminator $D$ can be complex-valued networks. However, since both the generated and ground truth (GT) images are real-valued, we stick to the use of a real-valued discriminator.
2.2 Complex-valued operations
In this section, we discuss the important operations required for the implementation of a complex-valued network, namely convolution, backpropagation, batch normalization (BN) [17], and activation.
The complex-valued equivalent of real-valued 2D convolution is discussed below. The convolution of a complex-valued kernel $\mathbf{W} = \mathbf{W}_{\Re} + i\,\mathbf{W}_{\Im}$ with complex-valued feature maps $\mathbf{h} = \mathbf{h}_{\Re} + i\,\mathbf{h}_{\Im}$ can be represented as $\mathbf{W} * \mathbf{h}$, where

(2) $\mathbf{W} * \mathbf{h} = (\mathbf{W}_{\Re} * \mathbf{h}_{\Re} - \mathbf{W}_{\Im} * \mathbf{h}_{\Im}) + i\,(\mathbf{W}_{\Re} * \mathbf{h}_{\Im} + \mathbf{W}_{\Im} * \mathbf{h}_{\Re})$

In these notations, the subscripts $\Re$ and $\Im$ denote the real and imaginary parts, respectively. In order to implement the aforementioned complex-valued convolution, we use real-valued tensors, where $\mathbf{h}$ is stored such that the imaginary part $\mathbf{h}_{\Im}$ is concatenated to the real part $\mathbf{h}_{\Re}$. The result comprises four real-valued 2D convolutions, as in (2), and is stored in a similar manner by concatenating the imaginary part to the real part.
Backpropagation can be performed on a function that is non-holomorphic as long as it is differentiable with respect to its real and imaginary parts [21]
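The four real-valued convolutions of (2) can be sketched as follows (a minimal NumPy sketch for a single channel; the actual model uses strided Keras convolution layers, and the function names here are illustrative):

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2D cross-correlation (works for real or complex arrays)."""
    H, W = x.shape
    h, w = k.shape
    out = np.zeros((H - h + 1, W - w + 1), dtype=np.result_type(x, k))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def complex_conv2d(x_re, x_im, k_re, k_im):
    """Complex convolution from four real convolutions, as in Eq. (2):
    (A + iB) * (x + iy) = (A*x - B*y) + i(A*y + B*x).
    In practice, the real and imaginary parts are stored concatenated
    along the channel axis of a real-valued tensor."""
    out_re = conv2d(x_re, k_re) - conv2d(x_im, k_im)
    out_im = conv2d(x_im, k_re) + conv2d(x_re, k_im)
    return out_re, out_im
```

The decomposition is exact: combining the two returned maps as `out_re + 1j*out_im` matches a direct convolution carried out in complex arithmetic.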
. Since all the loss functions considered in this work are real-valued, we consider $L$ to be a real-valued function of the complex weight vector $\mathbf{w}$. The update rule of $\mathbf{w}$ using gradient descent can be written as:

(3) $\mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta\,\left(\nabla_{\mathbf{w}} L\right)^{*}$

where $\eta$ is the learning rate, $(\cdot)^{*}$ denotes the complex conjugate, and the gradient of $L$ is calculated as follows:

(4) $\nabla_{\mathbf{w}} L = \frac{1}{2}\left(\frac{\partial L}{\partial \mathbf{w}_{\Re}} - i\,\frac{\partial L}{\partial \mathbf{w}_{\Im}}\right)$
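As a toy illustration of the update rule in (3)–(4), the sketch below minimizes the real-valued loss $|w - c|^2$ of a single complex weight, with the Wirtinger gradient estimated numerically (all names and the choice of loss are illustrative, not taken from the paper):

```python
import numpy as np

def loss(w, c):
    return np.abs(w - c) ** 2  # real-valued loss of a complex weight

def wirtinger_grad(w, c, eps=1e-6):
    """Numerical gradient 0.5*(dL/dw_re - i*dL/dw_im), as in Eq. (4)."""
    d_re = (loss(w + eps, c) - loss(w - eps, c)) / (2 * eps)
    d_im = (loss(w + 1j * eps, c) - loss(w - 1j * eps, c)) / (2 * eps)
    return 0.5 * (d_re - 1j * d_im)

def descend(w, c, lr=0.1, steps=200):
    """Gradient descent with the conjugated gradient, as in Eq. (3)."""
    for _ in range(steps):
        w = w - lr * np.conj(wirtinger_grad(w, c))
    return w

w_final = descend(1.0 + 2.0j, c=0.3 - 0.7j)
```

The iterate converges to the minimizer $c$, confirming that conjugating the gradient of (4) yields a genuine descent direction for a real-valued loss of complex weights.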
We make use of the complex BN applicable to complex numbers, proposed by [36]. To ensure that the complex data is scaled in such a way that the distribution of its real and imaginary components is circular, the 2D vector $\mathbf{x} = (\Re(\mathbf{x}), \Im(\mathbf{x}))^{T}$ is whitened as follows:

(5) $\tilde{\mathbf{x}} = \mathbf{V}^{-\frac{1}{2}}\left(\mathbf{x} - \mathbb{E}[\mathbf{x}]\right)$

where $\mathbf{V}$ denotes the covariance matrix, and $\mathbb{E}[\cdot]$ denotes the expectation operator. $\mathbf{V}$ can be represented as:

(6) $\mathbf{V} = \begin{pmatrix} \mathrm{Cov}\left(\Re(\mathbf{x}), \Re(\mathbf{x})\right) & \mathrm{Cov}\left(\Re(\mathbf{x}), \Im(\mathbf{x})\right) \\ \mathrm{Cov}\left(\Im(\mathbf{x}), \Re(\mathbf{x})\right) & \mathrm{Cov}\left(\Im(\mathbf{x}), \Im(\mathbf{x})\right) \end{pmatrix}$

Learnable parameters $\boldsymbol{\gamma}$ and $\boldsymbol{\beta}$ are used to scale and shift the aforementioned standardized vector as follows:

(7) $\mathrm{BN}(\tilde{\mathbf{x}}) = \boldsymbol{\gamma}\,\tilde{\mathbf{x}} + \boldsymbol{\beta}$

where $\boldsymbol{\gamma}$ is a $2 \times 2$ matrix, and $\boldsymbol{\beta}$ is a complex number.
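The whitening step of (5)–(6) can be sketched as follows (a minimal per-batch NumPy sketch without the learnable $\boldsymbol{\gamma}$, $\boldsymbol{\beta}$ or running statistics; function names are illustrative):

```python
import numpy as np

def complex_whiten(z, eps=1e-5):
    """Whiten complex data per Eq. (5): stack (re, im) as a 2D vector,
    subtract the mean, and multiply by the inverse square root of the
    2x2 covariance matrix of Eq. (6)."""
    v = np.stack([z.real, z.imag])          # shape (2, N)
    v = v - v.mean(axis=1, keepdims=True)
    V = np.cov(v) + eps * np.eye(2)          # 2x2 covariance matrix
    # inverse matrix square root via eigendecomposition (V is symmetric PSD)
    w, U = np.linalg.eigh(V)
    V_inv_sqrt = U @ np.diag(w ** -0.5) @ U.T
    out = V_inv_sqrt @ v
    return out[0] + 1j * out[1]
```

After whitening, the real and imaginary components have (approximately) zero mean, unit variance, and zero cross-covariance, i.e. a circular distribution; the full BN layer would then apply the affine transform of (7).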
In order to work with complex-valued entities, several activations have been proposed in previous works [2]. In this work, the following complex-valued activations are considered:

Complex rectified linear unit (ℂReLU): The complex equivalent of ReLU, known as ℂReLU, is obtained when ReLU is applied separately to the real and imaginary parts of an element. It is formulated as follows:

(8) $\mathbb{C}\mathrm{ReLU}(z) = \mathrm{ReLU}\left(\Re(z)\right) + i\,\mathrm{ReLU}\left(\Im(z)\right)$

where $z$ is a complex-valued input.

Complex parametric ReLU (ℂPReLU): The complex equivalent of PReLU [13] is obtained when PReLU is applied separately to the real and imaginary parts of an element. The expression for ℂPReLU is given by:

$\mathbb{C}\mathrm{PReLU}(z) = \mathrm{PReLU}_{a_{\Re}}\left(\Re(z)\right) + i\,\mathrm{PReLU}_{a_{\Im}}\left(\Im(z)\right)$

where $a_{\Re}$ and $a_{\Im}$ are trainable parameters.

zReLU [12]: This activation allows a complex element to pass only if both its real and imaginary parts are positive. It is given by:

(9) $z\mathrm{ReLU}(z) = \begin{cases} z, & \Re(z) > 0 \text{ and } \Im(z) > 0 \\ 0, & \text{otherwise} \end{cases}$
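The three activations can be sketched as follows (a NumPy sketch; in the network the ℂPReLU slopes are trainable, whereas here they are fixed arguments):

```python
import numpy as np

def c_relu(z):
    """CReLU: ReLU applied separately to real and imaginary parts, Eq. (8)."""
    return np.maximum(np.real(z), 0) + 1j * np.maximum(np.imag(z), 0)

def c_prelu(z, a_re=0.1, a_im=0.1):
    """CPReLU: PReLU on each part, with (here fixed) slopes a_re, a_im."""
    prelu = lambda x, a: np.where(x > 0, x, a * x)
    return prelu(np.real(z), a_re) + 1j * prelu(np.imag(z), a_im)

def z_relu(z):
    """zReLU: pass z only when both real and imaginary parts are positive, Eq. (9)."""
    return np.where((np.real(z) > 0) & (np.imag(z) > 0), z, 0)
```

Note the qualitative difference: ℂReLU and ℂPReLU act component-wise and so distort the phase, while zReLU keeps the phase of the values it passes but zeroes out three quadrants of the complex plane entirely.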
2.3 Network Architecture
The generator architecture of the proposed model is shown in Fig. 1. It is based on a U-net architecture. The left side is a contracting path, where each step creates downsampled feature maps using a convolutional layer with a stride of two, followed by BN and activation. The right side is an expanding path, where each step consists of upsampling (by a factor of two) and a convolutional layer creating new feature maps, followed by BN and activation layers.
In order to provide richer context about the low-level features for superior reconstruction, the low-level feature maps from the contracting path are concatenated to the high-level feature maps of the same size in the expanding path. In this work, we propose the use of dense connections between the steps (layers) within the contracting as well as the expanding path. These dense connections [15] improve the flow of information between the layers and encourage features from the preceding layers to be reused. An added benefit is the increased variety of available information obtained by concatenating feature maps. Since the feature maps at various layers are not of the same size, average pooling and upsampling (with bilinear interpolation) operations are introduced in the connections between the layers of the contracting path and the expanding path, respectively. However, using these operations to change the size of the feature maps by a factor greater than $2^{t}$ (less than $2^{-t}$) not only increases the computational and memory requirements, but also reduces the quality of the information available to the subsequent layers. As shown in Fig. 1, the parameter $t$ is set as 3 in this work.
Further, residual-in-residual dense blocks (RRDBs) [37] are incorporated at the lowest layer of the generator, where the smallest feature maps are present. Each block uses residual learning across each dense block, as well as across a group of three dense blocks. At both levels, residual scaling is used, i.e. the residuals are scaled by a factor before they are added to the identity mapping, as shown in Fig. 1. These RRDBs not only make the network length variable, because the residual connections make identity mappings easier to learn, but also make a rich amount of information accessible to the deeper layers through the dense connections.
At the output of the generator, a hyperbolic tangent activation is applied, which brings the real as well as the imaginary part of the final feature map into the range $(-1, 1)$; these are then rescaled to the range $(0, 1)$. In order to make the output real-valued, the absolute value is obtained (which lies in the range $(0, \sqrt{2})$), and then brought back to the range $(0, 1)$.
The discriminator architecture is based on a standard convolutional neural network. It has 11 convolutional layers, each of which is followed by BN and activation. We use a patch-based discriminator to increase the focus on the reconstruction of high-frequency content. The patch-based discriminator scores each patch of the image separately, and its output is the mean of the individual patch scores. This framework makes the network insensitive to the input size.
2.4 Training Losses
2.4.1 Adversarial Loss
In order to constrain the generator to produce the MR image corresponding to the samples acquired in k-space, it is conditioned [26] on the zero-filled reconstruction (ZFR) given by:

(10) $\tilde{\mathbf{x}} = \mathrm{mat}\left(\mathbf{F}^{H} \mathbf{M}^{H} \mathbf{y}\right)$

where $(\cdot)^{H}$ denotes the Hermitian operator, and $\mathrm{mat}(\cdot)$ denotes the conversion of a vector into a square matrix. Conventionally, the solution to the min-max game between the generator and the discriminator is obtained by using a binary cross-entropy based loss. However, this causes vanishing and exploding gradients, which make the training of the GAN model unstable. In order to prevent this, a Wasserstein distance based loss [1] is used. Mathematically, the training process of this conditional GAN using the Wasserstein loss is formulated as follows:

(11) $\min_{G}\,\max_{D}\; \mathbb{E}_{\mathbf{x} \sim p_{r}}\left[D(\mathbf{x})\right] - \mathbb{E}_{\tilde{\mathbf{x}} \sim p_{z}}\left[D\left(G(\tilde{\mathbf{x}})\right)\right]$

where $p_{r}$ is the distribution of the GT images, and $p_{z}$ is the distribution of the aliased ZFR images.
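The ZFR of (10) amounts to an inverse Fourier transform of the zero-filled k-space. A minimal NumPy sketch of the simulated acquisition and the ZFR (the mask here stands in for the Gaussian/radial undersampling patterns of Fig. 4; function names are illustrative):

```python
import numpy as np

def zero_filled_recon(x, mask):
    """Simulate the acquisition y = M F vec(x) and compute the ZFR of
    Eq. (10): mat(F^H M^H y), i.e. the inverse FFT of the k-space with
    zeros at the unsampled locations."""
    k_full = np.fft.fft2(x, norm="ortho")      # F vec(x)
    k_obs = mask * k_full                       # M keeps only sampled entries
    return np.fft.ifft2(k_obs, norm="ortho")    # F^H M^H y

```

With a full sampling mask the ZFR recovers the image exactly; with an undersampling mask it exhibits the aliasing artefacts that the generator is trained to remove.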
2.4.2 Content Loss
Besides the adversarial loss, other losses are required to bring the reconstructed output closer to the corresponding GT image. To this end, we incorporate an MAE based loss, so that the pixel-wise difference between the GT image $\mathbf{x}$ and the generated image $\hat{\mathbf{x}}$ is minimized. It is given by:

(12) $L_{\mathrm{MAE}} = \left\| \mathrm{vec}(\mathbf{x}) - \mathrm{vec}(\hat{\mathbf{x}}) \right\|_{1}$

where $\mathrm{vec}(\cdot)$ denotes vectorization, and $\|\cdot\|_{1}$ denotes the $\ell_{1}$ norm.
2.4.3 Structural Similarity Loss
As the high-frequency details in the MR image help in distinguishing various regions of the brain, it is extremely important to improve their reconstruction. SSIM quantifies the similarity between local patches of two images on the basis of luminance, contrast and structure. It is calculated as follows:

(13) $\mathrm{SSIM}(p_{1}, p_{2}) = \dfrac{\left(2\mu_{1}\mu_{2} + c_{1}\right)\left(2\sigma_{12} + c_{2}\right)}{\left(\mu_{1}^{2} + \mu_{2}^{2} + c_{1}\right)\left(\sigma_{1}^{2} + \sigma_{2}^{2} + c_{2}\right)}$

where $p_{1}$ and $p_{2}$ represent two equally sized patches from an image, $\mu_{1}$ and $\sigma_{1}^{2}$ denote the mean and variance of $p_{1}$, $\mu_{2}$ and $\sigma_{2}^{2}$ denote the mean and variance of $p_{2}$, and $\sigma_{12}$ denotes the covariance of $p_{1}$ and $p_{2}$. $c_{1}$ and $c_{2}$ are slack values to avoid division by zero.
In order to improve the perceptual quality of the reconstructed MR image and preserve the structural details, a mean SSIM (mSSIM) based loss is incorporated in the training of the generator. It maximizes the patch-wise SSIM between the generated image and the corresponding GT image, as follows:

(14) $L_{\mathrm{mSSIM}} = 1 - \dfrac{1}{P} \sum_{p=1}^{P} \mathrm{SSIM}\left(\mathbf{x}^{(p)}, \hat{\mathbf{x}}^{(p)}\right)$

where $P$ is the number of patches in the image.
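A direct sketch of (13)–(14) over non-overlapping patches (the slack constants, patch size, and the "1 − mSSIM" formulation are illustrative choices, not values from the paper):

```python
import numpy as np

def ssim_patch(p1, p2, c1=1e-4, c2=9e-4):
    """SSIM between two equally sized patches, following Eq. (13)."""
    mu1, mu2 = p1.mean(), p2.mean()
    var1, var2 = p1.var(), p2.var()
    cov = ((p1 - mu1) * (p2 - mu2)).mean()
    return ((2 * mu1 * mu2 + c1) * (2 * cov + c2)) / (
        (mu1 ** 2 + mu2 ** 2 + c1) * (var1 + var2 + c2))

def mssim_loss(img1, img2, patch=8):
    """1 minus the mean patch-wise SSIM, one common form of Eq. (14)."""
    scores = [ssim_patch(img1[i:i + patch, j:j + patch],
                         img2[i:i + patch, j:j + patch])
              for i in range(0, img1.shape[0], patch)
              for j in range(0, img1.shape[1], patch)]
    return 1.0 - float(np.mean(scores))
```

Since SSIM of a patch with itself is exactly 1, the loss vanishes for identical images and grows as local structure diverges.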
2.4.4 Wavelet Loss
In order to further enhance the textural details in the generated image, a weighted version of the MAE in the wavelet domain is included as another loss term. The wavelet packet transform is used to decompose the image into sets of wavelet coefficients which are equal in size and correspond to an even division of bands in the frequency domain. Fig. 2 depicts one step of the recursive process followed to obtain the sets of wavelet coefficients. For an $l$-level decomposition, which produces $4^{l}$ sets of wavelet coefficients, the wavelet loss is formulated as follows:

$L_{\mathrm{wav}} = \sum_{k=1}^{4^{l}} w_{k} \left\| \mathrm{vec}\left(\mathbf{c}_{k}(\mathbf{x})\right) - \mathrm{vec}\left(\mathbf{c}_{k}(\hat{\mathbf{x}})\right) \right\|_{1}$

where $\mathbf{c}_{k}(\cdot)$ extracts the $k$-th set of wavelet coefficients, and $w_{k}$ denotes its weight. Since the pixel-wise MAE loss contributes more towards the improvement of low-frequency details, and the mSSIM loss focuses more on preserving the high-frequency content of the reconstructed image, higher weights $w_{k}$ are assigned to the wavelet coefficients corresponding to the band-pass components, to improve their reconstruction. This is done by setting the weights according to the probability density function of a Gaussian distribution with mean $\mu$ and variance $\sigma^{2}$; fixed values of $l$, $\mu$, and $\sigma^{2}$ are used in this work.
2.4.5 Overall Loss
The overall loss used to train the generator is formulated as a weighted sum of the losses presented above:

(15) $L_{G} = \lambda_{1} L_{\mathrm{adv}} + \lambda_{2} L_{\mathrm{MAE}} + \lambda_{3} L_{\mathrm{mSSIM}} + \lambda_{4} L_{\mathrm{wav}}$

where $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$, and $\lambda_{4}$ are fixed weighting hyperparameters.
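The recursive wavelet packet decomposition of §2.4.4 and its Gaussian band weighting can be sketched as follows (a minimal sketch using Haar filters; the actual wavelet family, decomposition level, and Gaussian parameters used in the paper are not assumed here):

```python
import numpy as np

def haar_packet_step(x):
    """One step of the wavelet packet recursion (Fig. 2), using Haar
    filters: splits an image into four half-size coefficient sets."""
    a = x[0::2, :] + x[1::2, :]                       # row lowpass
    d = x[0::2, :] - x[1::2, :]                       # row highpass
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return [ll, lh, hl, hh]

def packet_decompose(x, levels):
    """l-level decomposition -> 4**l equally sized coefficient sets."""
    bands = [x]
    for _ in range(levels):
        bands = [b for band in bands for b in haar_packet_step(band)]
    return bands

def wavelet_loss(x, y, levels=2, mu=None, sigma=1.5):
    """Weighted MAE over wavelet-packet bands; weights follow a Gaussian
    pdf over the band index, peaking at the mid-frequency bands.
    `mu` and `sigma` here are illustrative placeholders."""
    bx, by = packet_decompose(x, levels), packet_decompose(y, levels)
    n = len(bx)
    mu = (n - 1) / 2 if mu is None else mu
    k = np.arange(n)
    w = np.exp(-0.5 * ((k - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(sum(wi * np.abs(a - b).mean() for wi, a, b in zip(w, bx, by)))
```

Recursing on all four sub-bands (rather than only the lowpass band, as a standard wavelet transform would) is what yields the even division of the frequency axis that the Gaussian weighting then emphasizes mid-band.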
Network settings:

Complex-valued GAN           ✗               ✓               ✓               ✓               ✓
RRDBs                        ✗               ✗               ✓               ✓               ✓
Dense U-net                  ✗               ✗               ✗               ✓               ✓
Wavelet loss                 ✗               ✗               ✗               ✗               ✓
No. of generator parameters  2M              1.2M            1.2M            1.5M            1.5M
PSNR (dB)/ mSSIM             39.640/ 0.9823  40.048/ 0.9866  41.418/ 0.9879  43.798/ 0.9902  45.044/ 0.9919
2.5 Training settings
For implementing the model, the Keras framework with a TensorFlow backend is used. The model is trained using 4 NVIDIA GeForce GTX 1080 Ti GPUs. The batch size is set as 16. In the generator, each layer produces 32 feature maps. The growth rate for the dense blocks in the RRDBs is set as 8, the residual scaling factor is 0.2, and 4 RRDBs are used. The absolute value of the discriminator weights is clipped at 0.05. For each generator update, the discriminator is updated thrice. For training the model, we use the Adam optimizer [20]. The initial learning rate is decayed after each epoch, so that it reaches a fixed fraction of its initial value after 5 epochs.
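The discriminator-side schedule described above (weight clipping at 0.05 and three critic updates per generator update, as in Wasserstein GAN training) can be sketched framework-agnostically; `gen_update` and `disc_update` stand in for the actual optimizer steps:

```python
import numpy as np

def clip_weights(params, c=0.05):
    """WGAN weight clipping: every discriminator weight is clipped to
    [-c, c] after each update (c = 0.05 in this work)."""
    return {k: np.clip(v, -c, c) for k, v in params.items()}

def training_step(gen_update, disc_update, disc_params, n_critic=3):
    """One generator update per `n_critic` discriminator updates,
    matching the 3:1 schedule described above. `gen_update` and
    `disc_update` are placeholders for real optimizer steps."""
    for _ in range(n_critic):
        disc_params = clip_weights(disc_update(disc_params), c=0.05)
    gen_update()
    return disc_params
```

Clipping keeps the discriminator (approximately) 1-Lipschitz, which the Wasserstein loss of Eq. (11) requires; the extra critic updates keep its distance estimate accurate between generator steps.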
3 Results and Discussion
We evaluate our models on T1-weighted brain MR images from the MICCAI 2013 grand challenge dataset [22], as well as on T2-weighted brain MR images from the IXI dataset (https://brain-development.org/ixi-dataset/).
For the MICCAI 2013 dataset, 20,787 images are used for training, after randomly choosing images from the train set and applying data augmentation using images with 10% and 20% additive Gaussian noise. 1D Gaussian undersampling is used in this case to generate 20% and 30% undersampled data, using the masks shown in Fig. 4(a) and (b), respectively. For testing, 2000 images are chosen randomly from the test set of the dataset. For the IXI dataset, 7500 images are randomly chosen for training and 100 non-overlapping images for testing. In this case, radial undersampling is used, with the mask shown in Fig. 4(c).
Activation        ℂReLU           zReLU            ℂPReLU
PSNR (dB)/ mSSIM  45.044/ 0.9919  35.9912/ 0.9690  45.377/ 0.9930
Table 1 and Fig. 3 show the quantitative and qualitative results, respectively, of the ablation study of the model, highlighting the importance of the various components used. These results are reported for 30% undersampled images from the MICCAI 2013 dataset. In the first case, a real-valued GAN model comprising a U-net based generator without RRDBs, without the dense connections in the contracting and expanding paths, with ReLU activation, and with $\lambda_{4} = 0$ in Eq. (15), is considered. In the next case, the complex-valued equivalent of the previous model is considered. In the third case, the effect of adding RRDBs in the last layer of the complex-valued U-net is observed, without the wavelet loss. In the fourth case, the addition of dense connections in the complex-valued U-net is considered along with the RRDBs, ignoring the wavelet loss. In the last case, the effect of including the wavelet loss by setting $\lambda_{4} > 0$ is observed. Each step results in a significant improvement in peak signal-to-noise ratio (PSNR) as well as mSSIM. For the rest of the results, we use the network settings of the last stage.
Table 2 and Fig. 5 show the quantitative and qualitative results, respectively, for comparing the various activations. These results are also reported for 30% undersampled images from the MICCAI 2013 dataset. It is observed that zReLU performs the worst, while ℂPReLU performs the best. The rest of the results are reported using the ℂPReLU activation.
Fig. 6 shows the qualitative results of our final model for the reconstruction of two 20% and 30% undersampled images from the MICCAI 2013 dataset. It is observed that the proposed model is able to reconstruct high-quality images while preserving most of the structural details. We also observe that the use of noisy images during training helps reduce the impact of noise on the reconstruction quality.
Method          Noise-free images   10% noise added  20% noise added
                PSNR (dB) / mSSIM   PSNR (dB)        PSNR (dB)
ZFR             35.61 / 0.7735      24.42            18.56
Deep-ADMM [41]  37.36 / 0.8534      25.19            19.33
DLMRI [32]      38.24 / 0.8020      24.52            18.85
Noiselet [29]   39.01 / 0.8588      25.29            19.42
BM3D [9]        39.93 / 0.9125      24.71            18.65
DAGAN [40]      40.20 / 0.9681      37.40            34.23
Proposed        45.377 / 0.9930     41.793           39.057
Table 3 illustrates the comparison of the proposed method with state-of-the-art approaches. These results are reported for 30% undersampled images from the MICCAI 2013 dataset. It can be observed that the proposed method yields a significant boost in both PSNR and mSSIM over the existing approaches.
Method             PSNR (dB)  mSSIM
ZFR                29.704     0.5705
FBPCNet [19]       30.103     0.6719
DLMRI [32]         37.037     0.6864
Deep-ADMM [41]     35.704     0.6705
Deep-Cascade [35]  38.667     0.7227
Proposed           38.659     0.9598
Table 4 shows the comparison of the proposed method with ZFR and with methods such as FBPCNet [19], Deep-ADMM [41], DLMRI [32], and Deep-Cascade [35]. These results are reported for 30% undersampled images from the IXI dataset. It can be observed that, although the PSNR of the proposed method is marginally lower, there is a significant boost in mSSIM. Also, the inference step takes only 0.0227 seconds.
The model trained on 30% undersampled brain images from the MICCAI 2013 dataset is also tested on the reconstruction of 30% undersampled images of canine legs from the MICCAI 2013 challenge. Fig. 7 shows the qualitative results of this zero-shot inference. The proposed model achieves an average PSNR of 42.584 dB and mSSIM of 0.9857 when inferred on 2000 test images.
4 Conclusion
In this paper, we introduced a novel complex-valued generator architecture trained with SSIM and wavelet losses to achieve fast yet accurate reconstruction. The fast inference step opens up the possibility of a real-time implementation of the method. Detailed analyses have shown that the proposed method significantly improves reconstruction quality compared to state-of-the-art CS-MRI reconstruction methods. It has also been shown that the present model can be trained with noise-based data augmentation, and that it then outperforms all the existing methods when reconstruction is attempted on noisy data.
References

[1] (2017) Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, Vol. 70, pp. 214–223.
[2] (2015) Unitary evolution recurrent neural networks. CoRR abs/1511.06464.
[3] (2017) Compressed sensing using generative models. In Proceedings of the 34th International Conference on Machine Learning, Vol. 70, pp. 537–546.
[4] (2011) Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3 (1), pp. 1–122.

[5] (2016) Associative long short-term memory. CoRR abs/1602.03032.
[6] (2019) Robust compressive sensing MRI reconstruction using generative adversarial networks. CoRR abs/1910.06067.
[7] (2016) Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (2), pp. 295–307.
[8] (2006) Compressed sensing. IEEE Transactions on Information Theory 52 (4), pp. 1289–1306.
[9] (2016) Decoupled algorithm for MRI reconstruction using nonlocal block matching model: BM3D-MRI. Journal of Mathematical Imaging and Vision 56 (3), pp. 430–440.
[10] (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pp. 2672–2680.
[11] (2002) Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magnetic Resonance in Medicine 47 (6), pp. 1202–1210.
[12] (2016) On complex valued convolutional neural networks. CoRR abs/1602.09046.

[13] (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1026–1034.
[14] (2012) Generalization characteristics of complex-valued feedforward neural networks in relation to signal coherence. IEEE Transactions on Neural Networks and Learning Systems 23 (4), pp. 541–551.

[15] (2017) Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269.
[16] (2017) Wavelet-SRNet: a wavelet-based CNN for multi-scale face super resolution. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 1698–1706.
[17] (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pp. 448–456.
[18] (2017) Image-to-image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967–5976.
[19] (2017) Deep convolutional neural network for inverse problems in imaging. IEEE Transactions on Image Processing 26 (9), pp. 4509–4522.
[20] (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR).
[21] (2009) The complex gradient operator and the CR-calculus. ArXiv abs/0906.4835.
[22] (2013) Diencephalon standard challenge.
[23] (2015) Balanced sparse model for tight frames in compressed sensing magnetic resonance imaging. PLOS ONE 10 (4), pp. 1–19.
[24] (2007) Sparse MRI: the application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine 58 (6), pp. 1182–1195.
[25] (2019) Deep generative adversarial neural networks for compressive sensing MRI. IEEE Transactions on Medical Imaging 38 (1), pp. 167–179.
[26] (2014) Conditional generative adversarial nets. ArXiv abs/1411.1784.
[27] (2002) On the critical points of the complex-valued neural network. In Proceedings of the 9th International Conference on Neural Information Processing (ICONIP), Vol. 3, pp. 1099–1103.
[28] (1981) The importance of phase in signals. Proceedings of the IEEE 69 (5), pp. 529–541.
[29] (2015) Multichannel compressive sensing MRI using noiselet encoding. PLOS ONE 10 (5), pp. 1–27.
[30] (1999) SENSE: sensitivity encoding for fast MRI. Magnetic Resonance in Medicine 42 (5), pp. 952–962.
[31] (2018) Compressed sensing MRI reconstruction using a generative adversarial network with a cyclic loss. IEEE Transactions on Medical Imaging 37 (6), pp. 1488–1497.
[32] (2011) MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Transactions on Medical Imaging 30 (5), pp. 1028–1041.
[33] (2013) Neuronal synchrony in complex-valued deep networks. CoRR abs/1312.6115.
[34] (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241.
[35] (2018) A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Transactions on Medical Imaging 37 (2), pp. 491–503.
[36] (2018) Deep complex networks. In International Conference on Learning Representations (ICLR).
[37] (2018) ESRGAN: enhanced super-resolution generative adversarial networks. In The European Conference on Computer Vision (ECCV) Workshops, pp. 63–79.
[38] (2016) Full-capacity unitary recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 4880–4888.
[39] (2012) Image denoising and inpainting with deep neural networks. In Advances in Neural Information Processing Systems 25, pp. 341–349.
[40] (2018) DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Transactions on Medical Imaging 37 (6), pp. 1310–1321.
[41] (2016) Deep ADMM-Net for compressive sensing MRI. In Advances in Neural Information Processing Systems 29, pp. 10–18.
[42] (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612.