1 Introduction
Linear inverse problems appear broadly in image restoration tasks, in applications ranging from natural image super-resolution to biomedical image reconstruction. In such tasks, one oftentimes faces a severely ill-posed recovery problem, which necessitates regularization with proper statistical priors. This is however impeded by the following challenges: c1) real-time and interactive tasks demand a low overhead for inference; e.g., imagine MRI visualization for neurosurgery clearpoint, or interactive super-resolution on cell phones romano2017raisr; c2) the need for recovering plausible images that are consistent with the physical model; this is particularly important for medical diagnosis, which is sensitive to artifacts; c3) limited labeled training data, especially for medical imaging.
Conventional compressed sensing (CS) relies on sparse coding of images in a proper transform domain via a universal regularization; see e.g., donoho2006compressed; pualy_mri20017; duarte2009learning. To automate the time-intensive iterative soft-thresholding algorithm (ISTA) for sparse coding, gregor2010learning puts forth the learned ISTA (LISTA), which trains a simple (single dense layer) recurrent network, built around soft-thresholding, to map measurements to a surrogate for the sparse code. sprechmann2015learning advocates a wider class of functions derived from proximal operators. xin2016maximal also adopts LSTMs to learn the minimal sparse code, where the learned network was seen to improve the RIP of coherent dictionaries. Sparse recovery, however, is the common objective of xin2016maximal; gregor2010learning, and the measurement model is not explicitly taken into account. No guarantees were provided for the convergence and quality of the iterates either.
Deep neural networks have recently proven quite powerful in modeling prior distributions for images gangoodfellow2014; johnson2016; lsgan2017; leding_photorealistic_2016; inpaintingyeh2016; lossfunction_zhao2017. There is a handful of recent attempts to integrate the priors offered by generative nets for inverting linear inverse tasks, dealing with local image restoration such as super-resolution johnson2016; leding_photorealistic_2016 and inpainting inpaintingyeh2016, and with more global tasks such as biomedical image reconstruction Majumdar_autoencoder_15; sun2016deep; lowdose_ct2017; mardani2017deep; automap_ismrm_2017; partial_fourier_cnn_ismrm_2017; cascade_cnn_mrreocn_ismrm_2017; cs_residual_learning_ismrm_2017. One can divide them into two main categories. The first category comprises post-processing methods that train a deep network to map a poor (linear) estimate of the image to the true one Majumdar_autoencoder_15; lowdose_ct2017; automap_ismrm_2017; cs_residual_learning_ismrm_2017; johnson2016; leding_photorealistic_2016; mardani2017deep. Residual networks (ResNets) are a suitable choice for training such deep nets due to their stable training behavior resnet2016, along with pixel-wise and perceptual costs induced, e.g., by generative adversarial networks (GANs) gangoodfellow2014; mardani2017deep. The post-processing schemes offer a clear gain in computation time, but they provide no guarantee of data fidelity, and their accuracy is only comparable with CS-based iterative methods. The second category is inspired by unrolling the iterations of classical optimization algorithms, and learns the filters and nonlinearities by training deep CNNs diamond2017unrolled; sun2016deep; metzler2017learned; adler2017learned. These schemes improve the accuracy relative to CS, but deep denoising CNNs that change over iterations incur a huge training overhead. Note also that for a signal that has a low-dimensional code under a deep pre-trained generative model, Bora_dimakis_2017; hand2017global establish reconstruction guarantees. The inference, however, relies on an iterative procedure based on empirical risk minimization that is quite time-intensive for real-time applications.

Contributions. Aiming for rapid, feasible, and plausible image recovery in ill-posed linear inverse tasks, this paper puts forth a novel neural proximal gradient descent algorithm that learns the proximal map using a recurrent ResNet. Local convergence of the iterates is studied for the inference phase, assuming that the true image is a fixed point of the proximal (i.e., it lies on the manifold represented by the proximal). In particular, contraction of the learned proximal is empirically analyzed to ensure the RNN iterates converge to the true solution. Extensive evaluations are carried out for the global task of MRI reconstruction, and the local task of natural image super-resolution. We find:

- For MRI reconstruction, it works better to repeat a small ResNet (with a single RB) several times than to build a general deep network.
- Our recurrent ResNet architecture outperforms general deep network schemes by about 2 dB SNR, with much less training data needed. It also trains much more rapidly.
- Our architecture outperforms existing state-of-the-art CS-WV schemes, with a 4 dB gain in SNR, while achieving reconstruction with a 100x reduction in computing time.
These findings rest on several novel contributions:
- Successful design and construction of a neural proximal gradient descent scheme based on recurrent ResNets.
- Rigorous experimental evaluations, both for undersampled pediatric MRI data and for super-resolving natural face images, comparing our proposed architecture with conventional non-recurrent deep ResNets and with CS-WV.
- Formal analysis of the map contraction for the proximal gradient algorithm, with accompanying empirical measurements.
2 Preliminaries and problem statement
Consider an ill-posed linear system $y = \Phi x + v$, where $\Phi \in \mathbb{C}^{m \times n}$ with $m \ll n$, and $v$ captures the noise and unmodeled dynamics. Suppose the unknown (complex-valued) image $x$ lies on a low-dimensional manifold. No information is known about the manifold besides the training samples $\{x_i\}$ drawn from it, and the corresponding (possibly) noisy observations $\{y_i\}$. Given a new undersampled observation $y$, the goal is to quickly recover a plausible image $\hat{x}$.
The stated problem covers a wide range of image restoration tasks. For instance, in medical image reconstruction, $\Phi$ describes a projection driven by the physics of the acquisition system (e.g., the Fourier transform for an MRI scanner, and the Radon transform for a CT scanner). For image super-resolution, it is the downsampling operator that averages out non-overlapping image regions to arrive at a low-resolution image. Given an image prior distribution, one typically forms a maximum a posteriori (MAP) estimator, formulated as the regularized least-squares (LS) program

(P1)   $\min_{x} \; \frac{1}{2} \| y - \Phi x \|_2^2 + \mathcal{R}(x; \theta)$   (1)

with the regularizer $\mathcal{R}(\cdot; \theta)$, parameterized by $\theta$, that incorporates the image prior.
In order to solve (P1), one can adopt a variation of the proximal gradient algorithm parikh2014proximal with a proximal operator $\mathcal{P}_{\mathcal{R}}$ that depends on the regularizer. Starting from some $x_0$, and adopting a small step size $\alpha$, the overall iterative procedure is expressed as

$x_{t+1} = \mathcal{P}_{\mathcal{R}}\big( x_t + \alpha \Phi^{\mathsf{H}} (y - \Phi x_t) \big)$   (2)

For a convex regularizer $\mathcal{R}$, the proximal map is monotone, and the fixed point of (2) coincides with the global optimum of (P1) parikh2014proximal. For some simple prior distributions, the proximal operation is tractable in closed form. One popular example pertains to $\ell_1$-norm regularization for sparse coding, where the proximal operator gives rise to soft-thresholding and shrinkage in a certain domain such as wavelet or Fourier. The associated iterations have been labeled ISTA; the related FISTA iterations offer accelerated convergence beck2009fast.
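For concreteness, the $\ell_1$-norm case reduces the proximal in (2) to elementwise soft-thresholding. A minimal NumPy sketch of the resulting ISTA loop follows; the function names and the standard step-size choice $\alpha = 1/\|\Phi\|^2$ are ours, for illustration only:

```python
import numpy as np

def soft_threshold(u, tau):
    """Proximal map of tau * ||.||_1: elementwise shrinkage toward zero."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def ista(y, Phi, tau, n_iters=500):
    """Proximal gradient iterations (2) with the l1 proximal (ISTA)."""
    alpha = 1.0 / np.linalg.norm(Phi, 2) ** 2  # step size 1/L with L = ||Phi||_2^2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        # gradient step toward data consistency, then shrinkage
        x = soft_threshold(x + alpha * Phi.T @ (y - Phi @ x), alpha * tau)
    return x
```

FISTA adds a momentum term to the same loop; the proximal step itself is unchanged.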
3 Neural Proximal learning
Motivated by the proximal gradient iterations in (2), to design efficient network architectures that automatically invert linear inverse tasks, the following questions need to be first addressed:
Q1. How to ensure rapid inference with affordable training for real-time image recovery?
Q2. How to ensure plausible reconstructions that are physically feasible?
3.1 Deep recurrent network architecture
The recursion in (2) can be envisioned as a feedback loop which, at the $t$th iteration, takes an image estimate $x_t$, moves it towards the affine subspace of data-consistent images, and then applies the proximal operator to obtain $x_{t+1}$. The iterations adhere to the state-space model

$s_{t+1} = x_t + \alpha \Phi^{\mathsf{H}} (y - \Phi x_t)$   (3)
$x_{t+1} = \mathcal{P}(s_{t+1})$   (4)

where (3) is the gradient descent step that encourages data consistency. The initial input is $y$, with initial state $s_0 = \alpha \Phi^{\mathsf{H}} y$, which is a linear (low-quality) image estimate. The state update is essentially a linear network with the learnable step size $\alpha$ that linearly combines the linear image estimate with the output of the previous iteration, namely $s_{t+1} = \alpha \Phi^{\mathsf{H}} y + (I - \alpha \Phi^{\mathsf{H}} \Phi) x_t$.
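The recursion (3)-(4) can be sketched as a plain loop in which the proximal is any callable; in the paper it is the learned ResNet, whereas the placeholder passed in here is only illustrative:

```python
import numpy as np

def neural_pgd(y, Phi, prox, alpha, T):
    """Unrolled iterations (3)-(4): a gradient step toward data
    consistency, followed by a proximal map (a learned network in the
    paper; here any callable)."""
    x = alpha * Phi.conj().T @ y                       # initial linear (low-quality) estimate
    for _ in range(T):
        s = x + alpha * Phi.conj().T @ (y - Phi @ x)   # state update (3)
        x = prox(s)                                    # proximal step (4)
    return x
```

With the identity proximal and an invertible system, the loop reduces to a plain Landweber-type iteration that converges to the data-consistent solution, which is a useful sanity check on the wiring.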
In order to model the proximal mapping, we use a homogeneous recurrent neural network depicted in Fig. 1. In essence, a truncated RNN with $T$ iterations is used for training. The measurement $y$ forms the input variable for all iterations, which together with the output of the previous iteration forms the state variable for the current iteration. The proximal operator is modeled via a possibly deep neural network, as will be elaborated in the next section. As argued earlier, the proximal resembles a projection onto the manifold of visually plausible images. Thus, one can interpret $\mathcal{P}$ as a denoiser that gradually removes the aliasing artifacts from the input image.

3.2 Proximal modeling
We consider an $L$-layer neural network with elementwise activation function $\sigma(u) = m(u) \odot u$ for a mask function $m(\cdot)$. We study several examples of the mask function, including the step function for ReLU, and the sigmoid function for Swish ramachandran2018searching. The $k$th layer maps $z_{k-1}$ to $z_k$ through

$z_k = \mathrm{diag}\big( m(W_k z_{k-1}) \big) W_k z_{k-1}$

where the bias term is included in the weight matrix $W_k$. At the $t$th iteration, the network starts with the input $z_0 = s_{t+1}$, and outputs $z_L = x_{t+1}$. Typically, the linear weights $\{W_k\}$ are modeled by a convolution operation with a certain kernel and stride size. The network weights collected in $\theta := \{W_k\}_{k=1}^{L}$ then parameterize the proximal. To avoid the vanishing gradients associated with training RNNs, we can use ResNets resnet2016 or highway nets greff2016highway. An alternate path to our model goes via DiracNets zagoruyko2017diracnets, whose weights contain an explicit identity component and which are shown to exhibit behavior similar to ResNets.

3.3 Neural proximal training
In order to learn the proximal map, the recurrent neural network in Fig. 1 is trained end-to-end using the population of training data $\{x_i\}$ and $\{y_i\}$. For the measurement $y_i$, the RNN with $T$ iterations recovers $\hat{x}_{i,T}$, where the composite map from $y_i$ to $\hat{x}_{i,T}$ is parameterized by the network training weights $\theta$ and the step size $\alpha$. Let $\{\hat{x}_{i,t}\}_{t=1}^{T}$ denote the population of recovered images across iterations. In general, one can use population-wise costs such as GANs gangoodfellow2014; mardani2017deep, or elementwise costs, to penalize the difference between $x_i$ and $\hat{x}_{i,T}$. To ease the exposition, we adopt the elementwise empirical risk minimization

$\min_{\theta, \alpha} \; \sum_{i} \ell\big(x_i, \hat{x}_{i,T}\big) + \lambda \sum_{i} \sum_{t=1}^{T} \big\| y_i - \Phi \hat{x}_{i,t} \big\|_2^2$

for some $\lambda \ge 0$, where a typical choice for the loss is MSE, i.e., $\ell(x, \hat{x}) = \|x - \hat{x}\|_2^2$. The second term encourages the outputs of different iterations to be consistent with the measurements. It is found to significantly improve training convergence of the RNN for large iteration numbers $T$. Note that one can additionally augment this cost with an adversarial GAN loss, as in our companion work mardani2017deep, which further favors the image perceptual quality that is critical in medical imaging.
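The per-sample training cost above, MSE on the final output plus a data-consistency penalty summed over the unrolled iterations, can be sketched as follows (the function name and the toy check in the test are ours):

```python
import numpy as np

def training_loss(x_true, iterates, y, Phi, lam):
    """Elementwise MSE on the final RNN output plus a data-consistency
    penalty summed over the outputs of all T unrolled iterations."""
    mse = np.sum(np.abs(x_true - iterates[-1]) ** 2)
    data_consistency = sum(np.sum(np.abs(y - Phi @ x_t) ** 2) for x_t in iterates)
    return mse + lam * data_consistency
```

In practice this scalar would be minimized over the shared network weights with a stochastic optimizer; the sketch only shows how the two terms combine.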
4 Contraction Analysis
Consider the trained RNN in Fig. 1. In the inference phase with a new measurement $y$, we are motivated to study whether the iterates in (3)-(4) converge, how fast they converge, and whether upon convergence they coincide with the true unknown image. To make the analysis tractable, the following assumptions are made:
(A1) The measurements are noiseless, namely $y = \Phi x^\star$, and the true image $x^\star$ is close to a fixed point of the proximal operator, namely $\| \mathcal{P}(x^\star) - x^\star \| \le \epsilon$ for some small $\epsilon$.
The fixed-point assumption seems to be a stringent requirement, but it is typically made in this context to make the analysis tractable; see e.g., Bora_dimakis_2017. It roughly means that the images lie on a manifold (we use here the term manifold purely in an informal sense) represented by the map $\mathcal{P}$. Assuming that the train and test data lie on the same manifold, one can enforce it during training by adding a penalty term to the training cost.
The mask at the $k$th layer and $t$th iteration can then be decomposed as

$\Lambda_{t,k} = \bar{\Lambda}_{k} + \Delta_{t,k}$   (5)

where $\bar{\Lambda}_k$ is the true mask, realized at the fixed point $x^\star$, and $\Delta_{t,k}$ models the perturbation. Passing the input image $s_{t+1}$ into the $L$-layer neural network then yields the output

$x_{t+1} = \mathcal{P}(s_{t+1}) = A_t\, s_{t+1}$   (6)

where $A_t := \prod_{k=L}^{1} \Lambda_{t,k} W_k$. One can further write $A_t$ as

$A_t = \bar{A} + \tilde{\Delta}_t$   (7)

with $\bar{A} := \prod_{k=L}^{1} \bar{\Lambda}_k W_k$ denoting the fixed-mask map. Let us define the residual operator

$R_{t,k} := \Delta_{t,k} W_k$   (8)

The perturbation term can then be expressed as $\tilde{\Delta}_t = \sum_{\emptyset \neq S \subseteq \{1, \ldots, L\}} B_t(S)$ with

$B_t(S) := \prod_{k=L}^{1} \begin{cases} R_{t,k}, & k \in S \\ \bar{\Lambda}_k W_k, & k \notin S \end{cases}$   (9)

The term $B_t(S)$ captures the mask perturbation in every subset $S$ of the layers.

Rearranging the terms in (6), and using assumption (A1), namely $\mathcal{P}(x^\star) = x^\star + r$ for some representation error $r$ such that $\|r\| \le \epsilon$, and the noiseless model $y = \Phi x^\star$, we arrive at

$x_{t+1} - x^\star = \bar{A}\, (I - \alpha \Phi^{\mathsf{H}} \Phi)(x_t - x^\star) + \tilde{\Delta}_t\, s_{t+1} - r$   (10)

To study the contraction property, and thus local convergence of the iterates to the true solution $x^\star$, let us first suppose that the error at the $t$th iteration belongs to the set $\mathcal{E}_t$. We then introduce the contraction parameter associated with $\bar{A}$ as

$\lambda_t := \sup_{e \in \mathcal{E}_t} \frac{\big\| \bar{A}\, (I - \alpha \Phi^{\mathsf{H}} \Phi)\, e \big\|}{\| e \|}$   (11)

Similarly, for the perturbation map $\tilde{\Delta}_t$ define the contraction parameter

$\delta_t := \frac{\big\| \tilde{\Delta}_t\, s_{t+1} \big\|}{\| x_t - x^\star \|}$   (12)

Applying the triangle inequality to (10), one then simply arrives at

$\| x_{t+1} - x^\star \| \le \big\| \bar{A}\, (I - \alpha \Phi^{\mathsf{H}} \Phi)(x_t - x^\star) \big\| + \big\| \tilde{\Delta}_t\, s_{t+1} \big\| + \epsilon$   (13)
$\le (\lambda_t + \delta_t)\, \| x_t - x^\star \| + \epsilon$   (14)

According to (14), for small $\epsilon$ a sufficient condition for (asymptotic) linear convergence of the iterates to the true $x^\star$ is that $\lambda_t + \delta_t < 1$. For a non-negligible representation error $\epsilon$, if one wants the iterates to converge within a ball of radius $C\epsilon$ around $x^\star$, i.e., $\limsup_t \| x_t - x^\star \| \le C\epsilon$, a sufficient condition is that $\lambda_t + \delta_t \le 1 - 1/C$.

Motivated by real-time applications, e.g., MRI visualization for neurosurgery, it is of high interest to use the minimum iteration count with which the algorithm reaches a close neighborhood of $x^\star$. Our conjecture is that, for a reasonably expressive neural proximal network, the perturbation masks become highly sparse over the iterations, so that $\delta_t \le \delta$ for some small $\delta$. Further analysis of this phenomenon, and establishing guarantees under simple and interpretable conditions on the network parameters, is an important next step. This is the subject of our ongoing research, and will be reported elsewhere. Nonetheless, the next section provides empirical observations about the contraction parameters, where in particular $\lambda_t$ is observed to be an order of magnitude larger than $\delta_t$.
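The bound (14) explains the convergence behavior: iterating a scalar recursion with a contraction factor $\eta < 1$ and additive error $\epsilon$ drives the error to the $\epsilon/(1-\eta)$ ball at a linear rate. A small numerical sketch (all names ours):

```python
def error_bound(e0, eta, eps, T):
    """Iterate the worst-case bound ||x_{t+1} - x*|| <= eta * ||x_t - x*|| + eps
    from (14), starting at error e0, for T iterations."""
    errors = [e0]
    for _ in range(T):
        errors.append(eta * errors[-1] + eps)
    return errors
```

For example, with eta = 0.5 and eps = 0.01, the sequence decays geometrically toward the limit eps / (1 - eta) = 0.02, matching the ball-of-radius argument above.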
Remark 1 [Debiasing].
In sparse linear regression, LASSO is used to obtain a sparse solution that is possibly biased, while the support is accurate. The solution can then be debiased by solving an LS program restricted to the LASSO support. In a similar manner, neural proximal gradient descent may introduce a bias due to, e.g., the representation error $r$. To reduce the bias, after the convergence of the iterates to $\hat{x}$, one can fix the masks at all layers, replace the proximal map with the resulting linear map, and then find another fixed point for the iterates (6).

5 Experiments
The performance of our novel neural proximal gradient descent scheme was assessed in two tasks: reconstructing pediatric MR images from undersampled k-space data, and super-resolving natural face images. In the first task, undersampling k-space introduces aliasing artifacts that globally impact the entire image, while in the second task the blurring is local. While our focus is mostly on MRI, experiments with the image super-resolution task are included to shed some light on the contraction analysis in the previous section. In particular, we aim to address the following questions:
Q1. What is the performance compared with conventional deep architectures and with CS-MRI?
Q2. What is the proper depth for the proximal network, and the proper number of iterations (T) for training?
Q3. Can one empirically verify the contraction conditions for the convergence of the iterates?
5.1 ResNets for proximal training
To address the above questions, we adopted a ResNet with a variable number of residual blocks (RBs). Each RB consisted of two convolutional layers with small kernels and a fixed number of feature maps, each followed by batch normalization (BN) and ReLU activation. We followed these with three simple convolutional layers, where the first two undergo ReLU activation. We used the Adam SGD optimizer with momentum, a fixed mini-batch size, and an initial learning rate that was halved periodically. Training was performed with the TensorFlow interface on an NVIDIA Titan X Pascal GPU with 12 GB RAM. The source code for the TensorFlow implementation is publicly available on the GitHub page githubnpgd2018.

5.2 MRI reconstruction and artifact suppression
The performance of our novel recurrent scheme was assessed in removing k-space undersampling artifacts from MR images. In essence, the MR scanner acquires Fourier coefficients (k-space data) of the underlying image across various coils. We focused on a single-coil MR acquisition model, where for the $i$th patient the acquired k-space data admits

$y_i = \mathcal{F}_{\Omega}(x_i) + v_i$   (15)

Here, $\mathcal{F}$ refers to the 2D Fourier transform, and the set $\Omega$ indexes the sampled Fourier coefficients. Just as in conventional CS MRI, we selected $\Omega$ based on variable-density sampling with radial view ordering, which is more likely to pick low-frequency components from the center of k-space pualy_mri20017. Only a small fraction of the Fourier coefficients were collected.
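The single-coil model (15) can be sketched with a 2D FFT and a binary sampling mask; the zero-filled (ZF) estimate used as the network input is the masked inverse FFT. The mask used in the test below is a plain column mask for illustration, not the variable-density pattern used in the paper:

```python
import numpy as np

def undersample_fft2(x, mask):
    """Forward model (15): 2D FFT of the image, keeping only the k-space
    locations flagged by the binary sampling mask (the set Omega)."""
    return np.fft.fft2(x, norm="ortho") * mask

def zero_filled_recon(y, mask):
    """ZF estimate: inverse 2D FFT with the missing k-space entries set to zero."""
    return np.fft.ifft2(y * mask, norm="ortho")
```

With a full mask the ZF estimate recovers the image exactly; dropping coefficients can only remove energy (Parseval), which is what produces the aliasing artifacts the proximal network must suppress.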
Dataset. T1-weighted abdominal image volumes were acquired from pediatric patients. Each 3D volume includes axial slices. All in vivo scans were acquired on a 3T MRI scanner (GE MR750). The input and output were complex-valued images of the same size, each with two channels for the real and imaginary components. The input image was generated using an inverse 2D FT of the k-space data, where the missing data were filled with zeros (ZF); it is severely contaminated with artifacts.
5.2.1 Performance for various number/size of iterations
In order to assess the impact of the network architecture on image recovery performance, the RNN was trained for a variable number of iterations with a variable number of residual blocks (RBs). Slices from the train dataset were randomly picked for training, and slices from the test dataset for testing. For training the RNN, we used the elementwise cost with the data-consistency penalty.
Fig. 2 depicts the SNR and structural similarity index metric (SSIM) wang2004image versus the number of iterations (copies) for proximal networks with different numbers of RBs. It is observed that increasing the number of iterations significantly improves the SNR and SSIM, but leads to longer inference and training times. In particular, using three iterations instead of one achieves a multi-dB SNR gain. Interestingly, when using a single iteration, adding more RBs to make a deeper network does not yield further improvements beyond a point. Notice also that a single RB tends to be reasonably expressive for modeling the MR image denoising proximal; as a result, when repeating it several times, the SNR saturates. Using a few RBs, however, turns out to be expressive enough to learn the proximal and perform as well as using many RBs. Similar observations are made for SSIM.
Training and inference time. Inference time is proportional to the number of unrolled iterations. Passing each image through one unrolled iteration with one RB takes a few milliseconds when fully using the GPU. It is hard to precisely evaluate the training and inference time under fair conditions, as it strongly depends on the implementation and the allocated memory and processing power per run. The estimated inference times listed in Table 1 are averaged over a few runs on the GPU. We observed empirically that with shared weights, e.g., several iterations with a single RB, training converges in a matter of hours. In contrast, training a deep ResNet with many RBs takes considerably longer to converge.
iterations | RBs | train time (hours) | inference time (sec) | SNR (dB) | SSIM
deep ResNet | | | | |
CS-TV | n/a | n/a | | |
CS-WV | n/a | n/a | | |
5.2.2 Comparison with sparse coding
To compare with conventional CSMRI, CSWV is tuned for best SNR performance using BART bart2016 that runs iterations of FISTA along with iterations of conjugate gradient descent to reach convergence. Quantitative results are listed under Table 1, where it is evident that the recurrent scheme with shared weights significantly outperforms CS with more than dB SNR gain that leads to sharper images with finer texture details as seen in Fig. 3. As a representative example, Fig. 3 depicts the reconstructed abdominal slice of a test patient. CSWV retrieves a blurry image that misses out the sharp details of the liver vessels. A deep ResNet with one iteration and RBs captures a cleaner image, but still blurs out fine texture details such as vessels. However, when using unrolled iterations with a single RB for proximal modeling, more details of the liver vessels are visible, and the texture appears to be more realistic. Similarly, using iterations and RBs retrieves finer details than iterations with relatively large RBs network for proximal.
In summary, we make three key findings:
F1. The proximal for denoising MR images can be well represented by training a ResNet with a small number of RBs.
F2. Multiple back-and-forth iterations are needed to recover a plausible MR image that is physically feasible.
F3. Considering the training and inference overhead and the quality of the reconstructed images, an RNN with a moderate number of iterations and a single-RB proximal is promising for implementation in clinical scanners.
5.3 Verification of the contraction conditions
To verify the contraction analysis developed in Section 4, we focus on the image super-resolution (SR) task. In this linear inverse task, one only has access to a low-resolution (LR) image downsampled via an averaging convolution kernel: to form the LR image, the pixels in non-overlapping regions are averaged out. SR is a challenging ill-posed problem, and has been the subject of intensive research; see e.g., bruna2015super; Sonderby_2014; johnson2016; romano2017raisr. Our goal is not to achieve state-of-the-art performance, but only to study the behavior of proximal learning.
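The SR forward operator described above, averaging pixels over non-overlapping blocks, can be sketched in a few lines (the decimation factor r is a free parameter here, and the image dimensions are assumed divisible by r):

```python
import numpy as np

def downsample_avg(x, r):
    """SR forward operator: average the pixels in non-overlapping r x r regions."""
    h, w = x.shape
    # split into (h/r, r, w/r, r) blocks, then average within each block
    return x.reshape(h // r, r, w // r, r).mean(axis=(1, 3))
```

This reshape-and-mean trick is equivalent to convolving with a uniform r x r kernel followed by decimation by r.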
CelebA dataset. Adopting the CelebFaces Attributes Dataset (CelebA) liu2015faceattributes, we use separate subsets of images for training and testing. Ground-truth images are downsampled to form the LR images.
The proximal net is modeled as a multi-layer CNN with the Swish nonlinearity ramachandran2018searching in the last layer. The hidden layers undergo no nonlinearity, and various kernel sizes are adopted. The proximal then admits the layered form in (6). The RNN is trained, and the normalized RMSE, i.e., $\| x_t - x^\star \| / \| x^\star \|$, is plotted versus the iteration index in Fig. 5 (top) for various kernel sizes. It decreases quickly, and after a few iterations it converges, which suggests that the converged solution is possibly a fixed point of the proximal map. For a representative face image, the outputs of different iterations as well as the ground truth are plotted in Fig. 4. Evidently, the resolution improves over the iterations.
The contraction parameters are also plotted in Fig. 5. The space of perturbations for the operator norm is limited to the admissible ones that inherit the structure of the iterations. For each test sample, we inspect the behavior of the contraction parameters across iterations. The corresponding error bars are plotted in Fig. 5 for a representative kernel size. It is apparent that both contraction parameters quickly decay across iterations, indicating that later iterations produce perturbations that are more incoherent with the proximal map. Also, the normalized RMSE converges to a level that represents the bias generated by the iterates, similar to the bias introduced in LASSO. In addition, one can observe that the contraction parameter associated with the fixed mask is the dominant term, usually an order of magnitude larger than the one associated with the perturbation.
6 Conclusions
This paper develops a novel neural proximal gradient descent scheme for the recovery of images from highly compressed measurements. Unrolling the proximal gradient iterations, a recurrent architecture is proposed that models the proximal map via ResNets. For the trained network, contraction of the proximal map, and subsequently the local convergence of the iterates, is studied and empirically evaluated. Extensive experiments assess various network wirings, and verify the contraction conditions in reconstructing MR images of pediatric patients and super-resolving natural images. Our findings for MRI indicate that a small ResNet can effectively model the proximal, and significantly improve upon the quality and complexity of recent deep architectures as well as conventional CS-MRI.
While this paper sheds some light on the local convergence of neural proximal gradient descent, our ongoing research focuses on a more rigorous analysis to derive simple and interpretable contraction conditions. The main challenge pertains to understanding the distribution of activation masks, which needs extensive empirical evaluation. Stable training of the RNN for large iteration numbers using gated recurrent networks is another important focus of our current research.
7 Acknowledgements
We would like to acknowledge Dr. Marcus Alley from the Radiology Department at Stanford University for setting up the infrastructure to automatically collect the MRI dataset used in this paper. We would also like to acknowledge Dr. Enhao Gong, and Dr. Joseph Cheng for fruitful discussions and their feedback about the MRI reconstruction and software implementation.
References
(1) http://www.mriinterventions.com/clearpoint/clearpointoverview.html.
(2) Yaniv Romano, John Isidoro, and Peyman Milanfar. RAISR: Rapid and accurate image super resolution. IEEE Transactions on Computational Imaging, 3(1):110–125, 2017.
(3) David L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
(4) Michael Lustig, David Donoho, and John M. Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6):1182–1195, December 2007.
(5) Julio Martin Duarte-Carvajalino and Guillermo Sapiro. Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization. IEEE Transactions on Image Processing, 18(7):1395–1408, 2009.
(6) Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (ICML), pages 399–406, June 2010.
(7) Pablo Sprechmann, Alexander M. Bronstein, and Guillermo Sapiro. Learning efficient sparse and low rank models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1821–1833, January 2015.
(8) Bo Xin, Yizhou Wang, Wen Gao, David Wipf, and Baoyuan Wang. Maximal sparsity with deep networks? In Advances in Neural Information Processing Systems, pages 4340–4348, December 2016.
(9) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, December 2014.
(10) Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711. Springer, 2016.
(11) Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 2813–2821. IEEE, 2017.
(12) Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint, 2016.
(13) Raymond Yeh, Chen Chen, Teck Yian Lim, Mark Hasegawa-Johnson, and Minh N. Do. Semantic image inpainting with perceptual and contextual losses. arXiv preprint arXiv:1607.07539, 2016.
(14) Hang Zhao, Orazio Gallo, Iuri Frosio, and Jan Kautz. Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging, 3(1):47–57, 2017.
(15) A. Majumdar. Real-time dynamic MRI reconstruction using stacked denoising autoencoder. arXiv preprint arXiv:1503.06383, March 2015.
(16) Jian Sun, Huibin Li, Zongben Xu, et al. Deep ADMM-net for compressive sensing MRI. In Advances in Neural Information Processing Systems, pages 10–18, 2016.
(17) Hu Chen, Yi Zhang, Mannudeep K. Kalra, Feng Lin, Yang Chen, Peixi Liao, Jiliu Zhou, and Ge Wang. Low-dose CT with a residual encoder-decoder convolutional neural network. IEEE Transactions on Medical Imaging, 36(12):2524–2535, 2017.
(18) Morteza Mardani, Enhao Gong, Joseph Y. Cheng, Shreyas Vasanawala, Greg Zaharchuk, Marcus Alley, Neil Thakur, Song Han, William Dally, John M. Pauly, et al. Deep generative adversarial networks for compressed sensing automates MRI. arXiv preprint arXiv:1706.00051, 2017.
(19) Bo Zhu, Jeremiah Z. Liu, Bruce R. Rosen, and Matthew S. Rosen. Neural network MR image reconstruction with AUTOMAP: Automated transform by manifold approximation. In Proceedings of the 25th Annual Meeting of ISMRM, Honolulu, HI, USA, 2017.
(20) Shanshan Wang, Ningbo Huang, Tao Zhao, Yong Yang, Leslie Ying, and Dong Liang. 1D partial Fourier parallel MR imaging with deep convolutional neural network. In Proceedings of the 25th Annual Meeting of ISMRM, Honolulu, HI, USA, 2017.
(21) Jo Schlemper, Jose Caballero, Joseph V. Hajnal, Anthony Price, and Daniel Rueckert. A deep cascade of convolutional neural networks for MR image reconstruction. In Proceedings of the 25th Annual Meeting of ISMRM, Honolulu, HI, USA, 2017.
(22) Dongwook Lee, Jaejun Yoo, and Jong Chul Ye. Compressed sensing and parallel MRI using deep residual learning. In Proceedings of the 25th Annual Meeting of ISMRM, Honolulu, HI, USA, 2017.
(23) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
(24) Steven Diamond, Vincent Sitzmann, Felix Heide, and Gordon Wetzstein. Unrolled optimization with deep priors. arXiv preprint arXiv:1705.08041, 2017.
(25) Chris Metzler, Ali Mousavi, and Richard Baraniuk. Learned D-AMP: Principled neural network based compressive image recovery. In Advances in Neural Information Processing Systems, pages 1770–1781, 2017.
(26) Jonas Adler and Ozan Öktem. Learned primal-dual reconstruction. arXiv preprint arXiv:1707.06474, 2017.
(27) Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. Compressed sensing using generative models. arXiv preprint arXiv:1703.03208, 2017.
(28) Paul Hand and Vladislav Voroninski. Global guarantees for enforcing deep generative priors by empirical risk. arXiv preprint arXiv:1705.07576, 2017.
(29) Neal Parikh, Stephen Boyd, et al. Proximal algorithms. Foundations and Trends® in Optimization, 1(3):127–239, 2014.
(30) Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
(31) Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Searching for activation functions. arXiv preprint, 2018.
(32) Klaus Greff, Rupesh K. Srivastava, and Jürgen Schmidhuber. Highway and residual networks learn unrolled iterative estimation. arXiv preprint arXiv:1612.07771, 2016.
(33) Sergey Zagoruyko and Nikos Komodakis. DiracNets: Training very deep neural networks without skip-connections. arXiv preprint arXiv:1706.00388, 2017.
(34) https://github.com/MortezaMardani/NeuralPGD.html.
(35) Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
(36) Jonathan I. Tamir, Frank Ong, Joseph Y. Cheng, Martin Uecker, and Michael Lustig. Generalized magnetic resonance image reconstruction using the Berkeley Advanced Reconstruction Toolbox. In ISMRM Workshop on Data Sampling and Image Reconstruction, Sedona, 2016.
(37) Joan Bruna, Pablo Sprechmann, and Yann LeCun. Super-resolution with deep convolutional sufficient statistics. arXiv preprint arXiv:1511.05666, 2015.
(38) Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
(39) Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), 2015.