1 Introduction
In inverse imaging problems, we seek to reconstruct a latent image from measurements taken under a known physical image formation model. Such inverse problems arise throughout computational photography, computer vision, medical imaging, and scientific imaging, and, residing in the early vision layers of every autonomous vision system, they are essential for vision-based autonomous agents. Recent years have seen tremendous progress in both classical and deep methods for solving inverse problems in imaging, and the two families of approaches have complementary strengths. Classical algorithms, based on formal optimization, exploit knowledge of the image formation model in a principled way, but struggle to incorporate sophisticated learned models of natural images. Deep methods easily learn complex statistics of natural images, but lack a systematic approach to incorporating prior knowledge of the image formation model. What is missing is a general framework for designing deep networks that incorporate prior information, as well as a clear understanding of when prior information is useful.
In this paper we propose unrolled optimization with deep priors (ODP): a principled, general-purpose framework for integrating prior knowledge into deep networks. We focus on applications of the framework to inverse problems in imaging. Given an image formation model and a few generic, high-level design choices, the ODP framework provides an easy-to-train, high-performance network architecture. The framework suggests novel network architectures that outperform prior work across a variety of imaging problems.
The ODP framework is based on unrolled optimization, in which we truncate a classical iterative optimization algorithm and interpret it as a deep network. Unrolling optimization has long been common practice in imaging, and training unrolled optimization models has recently been explored for various imaging applications, all using variants of field-of-experts priors [12, 21, 5, 24]. We differ from existing approaches in that we propose a general framework for unrolling optimization methods along with deep convolutional prior architectures within the unrolled optimization. By training deep CNN priors within unrolled optimization architectures, instances of ODP outperform state-of-the-art results on a broad variety of inverse imaging problems.
Our empirical results clarify the benefits and limitations of encoding prior information for inverse problems in deep networks. Layers that (approximately) invert the image formation operator are useful because they simplify the reconstruction task to denoising and correcting artifacts introduced by the inversion layers. On the other hand, prior layers improve network generalization, boosting performance on unseen image formation operators. For deblurring and compressed sensing MRI, we found that a single ODP model trained on many image formation operators outperforms existing state-of-the-art methods for which a specialized model was trained for each operator.
Moreover, we offer insight into the open question of what iterative algorithm is best for unrolled optimization, given a linear image formation model. Our main finding is that simple primal algorithms that (approximately) invert the image formation operator each iteration perform best.
In summary, our contributions are as follows:

We introduce ODP, a principled, general purpose framework for inverse problems in imaging, which incorporates prior knowledge of the image formation into deep networks.

We demonstrate that instances of the ODP framework for denoising, deblurring, and compressed sensing MRI outperform state-of-the-art results by a large margin.

We present empirically derived insights on how the ODP framework and related approaches are best used, such as when exploiting prior information is advantageous and which optimization algorithms are most suitable for unrolling.
2 Motivation
Bayesian model
The proposed ODP framework is inspired by an extensive body of work on solving inverse problems in imaging via maximum-a-posteriori (MAP) estimation under a Bayesian model. In the Bayesian model, an unknown image x is drawn from a prior distribution p(x; Θ) with parameters Θ. The imaging system applies a linear operator A to this image, representing all optical processes in the capture, and then measures an image y on the sensor, drawn from a noise distribution that models sensor noise, e.g., read noise, and noise in the signal itself, e.g., photon shot noise. Let p(x; Θ) be the probability of sampling x from the prior and p(y | Ax) be the probability of sampling y from the noise distribution. Then the probability of an unknown image x yielding an observation y is proportional to p(y | Ax) p(x; Θ). The MAP point estimate of x is given by x̂ = argmax_x p(y | Ax) p(x; Θ), or equivalently

x̂ = argmin_x f(Ax, y) + r(x; Θ),    (1)

where the data term f(Ax, y) = −log p(y | Ax) and the prior term r(x; Θ) = −log p(x; Θ) are negative log-likelihoods. Computing x̂ thus involves solving an optimization problem [3, Chap. 7].
Unrolled iterative methods
A large variety of algorithms have been developed for solving problem (1) efficiently for different convex data terms and priors (e.g., FISTA [1], Chambolle-Pock [4], ADMM [2]). The majority of these algorithms are iterative methods, in which a mapping is applied repeatedly to generate a series of iterates x¹, x², … that converge to a solution x⋆, starting from an initial point x⁰.
Iterative methods are usually terminated based on a stopping condition that ensures theoretical convergence properties. An alternative approach is to execute a predetermined number of iterations N, in other words unrolling the optimization algorithm. This approach is motivated by the fact that for many imaging applications very high accuracy, e.g., convergence of every pixel to within a small tolerance, is not needed in practice, as opposed to optimization problems in, for instance, control. Fixing the number of iterations allows us to view the iterative method as an explicit function of the initial point x⁰. Parameters such as step sizes may be fixed across all iterations or vary by iteration. The unrolled iterative algorithm can then be interpreted as a deep network [24].
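As a minimal illustration of unrolling, the sketch below truncates plain gradient descent on a least-squares data term to a fixed number of iterations, so the reconstruction becomes an explicit function of the initial point and the per-iteration step sizes. The function name and the toy problem are illustrative, not from the paper.

```python
import numpy as np

def unrolled_gradient_descent(A, y, x0, step_sizes):
    # Truncated gradient descent on the data term f(x) = 0.5*||Ax - y||^2.
    # Fixing the iteration count turns the iterative method into an
    # explicit function of the initial point x0; each loop pass is one
    # "layer" of the resulting network, with its own step size.
    x = x0
    for alpha in step_sizes:
        x = x - alpha * (A.T @ (A @ x - y))   # gradient step on f
    return x

# Tiny well-conditioned example: the truncated solver recovers x_true.
rng = np.random.default_rng(0)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
x_true = rng.standard_normal(3)
y = A @ x_true
x_hat = unrolled_gradient_descent(A, y, np.zeros(3), [0.5] * 100)
```

In a trained unrolled network the per-iteration step sizes (and any prior parameters) would be learned rather than fixed by hand.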
Parameterization
The parameters Θ in an unrolled iterative algorithm are the algorithm hyperparameters, such as step sizes, and the model parameters defining the prior. Generally the number of algorithm hyperparameters is small (1–5 per iteration), so the model capacity of the unrolled algorithm is primarily determined by the representation of the prior.
Many efficient iterative optimization methods do not interact with the prior term r directly, but instead minimize it via its (sub)gradient ∇r or proximal operator prox_r, defined as

prox_r(v) = argmin_x r(x) + (1/2)‖x − v‖²₂.

The proximal operator is a generalization of Euclidean projection. In the ODP framework, we propose to parameterize the gradient or proximal operator of r directly, defining r implicitly.
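For a concrete instance of a proximal operator, the classical sparsity prior r(x) = λ‖x‖₁ admits a closed-form solution: elementwise soft thresholding. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def prox_l1(v, lam):
    # prox_{lam*||.||_1}(v) = argmin_x lam*||x||_1 + 0.5*||x - v||_2^2,
    # solved in closed form by elementwise soft thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

p = prox_l1(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), 1.0)
```

Entries with magnitude below the threshold are zeroed out, which is exactly the sparsifying behavior such priors are chosen for.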
3 Unrolled optimization with deep priors
We propose the ODP framework to incorporate knowledge of the image formation into deep convolutional networks. The framework factors networks into data steps, which are functions of the measurements encoding prior information about the image formation model, and CNN steps, which represent statistical image priors. The factorization follows a principled approach inspired by classical optimization methods, thereby combining the best of deep models and classical algorithms.
3.1 Framework
The ODP framework is summarized by the network template in Algorithm 1. The design choices in the template are the optimization algorithm, which defines the data step and the algorithm state; the number of iterations N that the algorithm is unrolled; the function used to initialize the algorithm from the measurements y; and the CNN used in the prior step, whose output represents either the gradient or the proximal operator of the prior, depending on the optimization algorithm. Figure 1 shows an example ODP instance for deblurring under Gaussian noise.
Instances of the ODP framework have two complementary interpretations. From the perspective of classical optimization based methods, an ODP architecture applies a standard optimization algorithm but learns a prior defined by a CNN. From the perspective of deep learning, the network is a CNN with layers tailored to the image formation model.
ODP networks are motivated by minimizing the objective in problem (1), but they are trained to minimize a higher-level loss, defined by a metric between the network output and the ground-truth latent image over a training set of image/measurement pairs. Classical metrics for images are mean-squared error, PSNR, and SSIM. Let x̂(y; Θ) be the network output given measurements y and parameters Θ. Then we train the network by (approximately) solving the optimization problem
minimize_Θ E_{(x,y)∼D}[ ℓ(x̂(y; Θ), x) ],    (2)

where Θ is the optimization variable, ℓ is the chosen reconstruction loss, e.g., negative PSNR, and D is the true distribution over image/measurement pairs (as opposed to the parameterized approximation p(x; Θ)). The ability to train directly for expected reconstruction loss is a primary advantage of deep networks and unrolled optimization over classical optimization methods. In contrast to pre-training priors on natural image training sets, directly training priors within the optimization algorithm allows ODP networks to learn application-specific priors and to efficiently share information between prior and data steps, allowing drastically fewer iterations than classical methods.
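The training objective in (2) is approximated in practice by an empirical average over a finite training set. A minimal sketch, with PSNR as the reconstruction metric; `psnr`, `empirical_loss`, and the `net(y, theta)` calling convention are illustrative names, not the paper's code:

```python
import numpy as np

def psnr(x_hat, x, peak=1.0):
    # Peak signal-to-noise ratio in dB, a standard reconstruction metric.
    mse = np.mean((x_hat - x) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def empirical_loss(theta, net, pairs):
    # Monte Carlo estimate of the expected reconstruction loss in (2):
    # average negative PSNR over (image, measurement) training pairs.
    return -np.mean([psnr(net(y, theta), x) for x, y in pairs])
```

A stochastic gradient method then minimizes this empirical loss over the network parameters.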
Since ODPs are close to conventional CNNs, we can approximately solve problem (2) using the many effective stochastic-gradient-based methods developed for CNNs (e.g., Adam [13]). Similarly, we can initialize the portions of Θ corresponding to the CNN prior steps using standard CNN initialization schemes (e.g., Xavier initialization [11]). The remaining challenge in training ODPs is initializing the portions of Θ corresponding to algorithm parameters in the data step. Most optimization algorithms only have one or two parameters per data step, however, so an effective initialization can be found through standard grid search.
3.2 Design choices
The ODP framework makes it straightforward to design a state-of-the-art network for solving an inverse problem in imaging. The design choices are the optimization algorithm to unroll, the CNN parameterization of the prior, and the initialization scheme. In this section we discuss these design choices in detail and present defaults guided by the empirical results in Section 5.
Optimization algorithm
The choice of optimization algorithm to unroll plays an important but poorly understood role in the performance of unrolled optimization networks. The only formal requirement is that each iteration of the unrolled algorithm be almost everywhere differentiable. Prior work has unrolled the proximal gradient method [5], the half-quadratic splitting (HQS) algorithm [21, 10], the alternating direction method of multipliers (ADMM) [27, 2], the Chambolle-Pock algorithm [24, 4], ISTA [12], and a primal-dual algorithm with Bregman distances [17]. No clear consensus has emerged as to which methods perform best in general or even for specific problems.
In the context of solving problem (1), we propose the proximal gradient method as a good default choice. The method requires that the proximal operator of the data term f and its Jacobian can be computed efficiently. Algorithm 2 lists the ODP template for the proximal gradient method. We interpret the CNN output as a gradient step on the prior r. Note that for the proximal gradient network, the CNN prior is naturally a residual network, because its output is summed with its input in Step 4.
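The alternation in Algorithm 2 can be sketched as follows. This is a schematic under our own naming assumptions: `init`, `data_prox`, and `cnn_priors` are stand-ins for the initialization function, the problem-specific data step, and the learned per-iteration CNN priors, which here are plain callables rather than trainable networks.

```python
import numpy as np

def odp_proximal_gradient(y, init, data_prox, cnn_priors):
    # Sketch of the unrolled proximal gradient template (Algorithm 2).
    # Each iteration takes a residual prior step (the CNN output is
    # summed with its input) followed by a data step that applies the
    # proximal operator of the data term f(A., y).
    x = init(y)
    for k, prior in enumerate(cnn_priors):
        z = x + prior(x)         # residual prior step
        x = data_prox(z, y, k)   # data step: prox of the data term
    return x

# Degenerate check: identity init, zero prior, and the denoising data
# prox (z + y)/2 leave the measurements unchanged.
y = np.array([1.0, 2.0, 3.0])
zero_prior = lambda v: np.zeros_like(v)
x_hat = odp_proximal_gradient(
    y, lambda v: v, lambda z, y_, k: 0.5 * (z + y_), [zero_prior] * 3)
```

In a real ODP network each element of `cnn_priors` is a separate trainable residual CNN and `data_prox` encodes the image formation model.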
The algorithm parameters represent the gradient step sizes. The proposed initialization is based on an alternative interpretation of Algorithm 2 as an unrolled HQS method. Adopting the aggressively decreasing step-size schedule from HQS minimizes the number of iterations needed [10].
In Section 5.5, we compare deblurring and compressed sensing MRI results for ODP with proximal gradient, ADMM, linearized ADMM (LADMM), and gradient descent. The ODP formulations of ADMM, LADMM, and gradient descent can be found in the supplement. We find that all algorithms that approximately invert the image formation operator each iteration perform on par. Algorithms such as ADMM and LADMM that incorporate Lagrange multipliers were at best slightly better than simple primal algorithms like proximal gradient and gradient descent for the low number of iterations typical for unrolled optimization methods.
CNN prior
The choice of parameterizing each prior step as a separate CNN offers tremendous flexibility, even allowing the learning of a specialized function for each step. Algorithm 2 naturally introduces a residual connection to the CNN prior, so a standard residual CNN is a reasonable default architecture choice. The experiments in Section 5 show this architecture achieves state-of-the-art results, while being easy to train with random initialization.

Choosing a CNN prior presents a tradeoff between increasing the number of algorithm iterations N, which adds alternation between data and prior steps, and making the CNN deeper. For example, in our experiments we found that for denoising, where the data step is trivial, larger CNN priors with fewer algorithm iterations gave better results, while for deconvolution and MRI, where the data step is a complicated global operation, smaller priors and more iterations gave better results.
Initialization
The initialization function could in theory be an arbitrarily complicated algorithm or neural network. We found that the simple initialization x⁰ = Aᵀy, known as backprojection, was sufficient for our applications [23, Ch. 25].

4 Related work
The ODP framework generalizes and improves upon previous work on unrolled optimization and deep models for inverse imaging problems.
Unrolled optimization networks
An immediate choice in constructing unrolled optimization networks for inverse problems in imaging is to parameterize the prior as r(x) = ‖Wx‖₁, where W is a filterbank, i.e., a linear operator given by Wx = (w₁ ∗ x, …, w_k ∗ x) for convolution kernels w₁, …, w_k. This parameterization is inspired by hand-engineered priors that exploit the sparsity of images in a particular (dual) basis, such as anisotropic total variation. Unrolled optimization networks with learned sparsity priors have achieved good results but are limited in representational power by the choice of a simple norm [12, 24].
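The simplest hand-engineered instance of such a filterbank prior is anisotropic total variation, where W stacks horizontal and vertical finite-difference kernels. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def tv_prior(x):
    # Anisotropic total variation ||Wx||_1, where W stacks horizontal
    # and vertical finite-difference kernels: flat regions cost nothing,
    # while every intensity jump is penalized by its magnitude.
    dh = x[:, 1:] - x[:, :-1]   # horizontal differences
    dv = x[1:, :] - x[:-1, :]   # vertical differences
    return np.abs(dh).sum() + np.abs(dv).sum()

flat = np.ones((4, 4))                       # constant image
edge = np.zeros((4, 4)); edge[:, 2:] = 1.0   # single vertical edge
```

A constant image has zero TV, while the single-edge image is charged once per edge pixel, illustrating why minimizing such priors favors piecewise-constant reconstructions.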
Fieldofexperts
A more sophisticated approach than learned sparsity priors is to parameterize the prior gradient or proximal operator as a field-of-experts (FoE) model Wᵀφ(Wx), where W is again a filterbank and φ is a separable nonlinearity parameterized by Θ, such as a sum of radial basis functions [20, 21, 5, 27]. The ODP framework improves upon the FoE approaches both empirically, as shown in the model comparisons in Section 5, and theoretically, as the FoE model is essentially a 2-layer CNN and so has less representational power than deeper CNN priors.

Deep models for direct inversion
Several recent approaches propose CNNs that directly solve specific imaging problems. These architectures resemble instances of the ODP framework, though with quite different motivations for the design. Schuler et al. propose a network for deblurring that applies a single, fixed deconvolution step followed by a learned CNN, akin to a prior step in ODP with a different initial iterate [22]. Xu et al. propose a network for deblurring that applies a single, learned deconvolution step followed by a CNN, similar to a one-iteration ODP network [26]. Wang et al. propose a CNN for MRI whose output is averaged with the observations in k-space, similar to a one-iteration ODP network but without jointly learning the prior and data steps [25]. We improve on these deep models by recognizing the connection to classical optimization-based methods via the ODP framework and using the framework to design more powerful, multi-iteration architectures.
5 Experiments
In this section we present results and analysis of ODP networks for denoising, deblurring, and compressed sensing MRI. Figure 2 shows a qualitative overview for this variety of inverse imaging problems. Please see the supplement for details of the training procedure for each experiment.
5.1 Denoising
We consider the Gaussian denoising problem with image formation y = x + z, where z ∼ N(0, σ²I). The corresponding Bayesian estimation problem (1) is

x̂ = argmin_x (1/(2σ²))‖x − y‖²₂ + r(x; Θ).    (3)
We trained a 4-iteration proximal gradient ODP network with a 10-layer, 64-channel residual CNN prior on the 400-image training set from [24]. Table 1 shows that the ODP network outperforms all state-of-the-art methods on the 68-image test set evaluated in [24].
Method        | PSNR (dB) | Time per image
BM3D [6]      | 28.56     | 2.57 s (CPU)
EPLL [29]     | 28.68     | 108.72 s (CPU)
Schmidt [21]  | 28.72     | 5.10 s (CPU)
Wang [24]     | 28.79     | 0.011 s (GPU)
Chen [5]      | 28.95     | 1.42 s (CPU)
ODP           | 29.04     | 0.13 s (GPU)
Method         | Disk (PSNR dB) | Motion (PSNR dB)
Krishnan [14]  | 25.94          | 25.07
Levin [15]     | 24.54          | 24.47
Schuler [22]   | 24.67          | 25.27
Schmidt [21]   | 24.71          | 25.49
Xu [26]        | 26.01          | 27.92
ODP            | 26.11          | 28.49
5.2 Deblurring
We consider the problem of joint Gaussian denoising and deblurring, in which the latent image is convolved with a known blur kernel in addition to being corrupted by Gaussian noise. The image formation model is y = c ∗ x + z, where c is the blur kernel and z ∼ N(0, σ²I). The corresponding Bayesian estimation problem (1) is

x̂ = argmin_x (1/(2σ²))‖c ∗ x − y‖²₂ + r(x; Θ).    (4)
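The blur operator and its adjoint can be applied efficiently in the Fourier domain. A minimal sketch assuming circular boundary conditions; `make_blur` and the 2-tap kernel are illustrative:

```python
import numpy as np

def make_blur(k):
    # Circular convolution y = c * x implemented via the 2-D DFT; the
    # adjoint A^T multiplies by the conjugate of the kernel spectrum.
    K = np.fft.fft2(k)
    A = lambda x: np.real(np.fft.ifft2(K * np.fft.fft2(x)))
    AT = lambda y: np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(y)))
    return A, AT

k = np.zeros((8, 8)); k[0, 0] = k[0, 1] = 0.5   # 2-tap horizontal blur
A, AT = make_blur(k)
```

Because the kernel weights sum to one, the blur preserves constant images, and the adjoint satisfies the inner-product identity ⟨Ax, y⟩ = ⟨x, Aᵀy⟩.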
To demonstrate the benefit of ODP networks specialized to specific problem instances, we first train models per kernel, as proposed in [26]. Specifically, we train 8-iteration proximal gradient ODP networks with residual 5-layer, 64-channel CNN priors for the out-of-focus disk kernel and the motion blur kernel from [26]. Following Xu et al., we train one model per kernel on ImageNet [8], including clipping and JPEG artifacts as prescribed by the authors. Table 2 shows that the ODP networks outperform prior work slightly on the low-pass disk kernel, which completely removes high-frequency content, while a substantial gain is achieved for the motion blur kernel, which preserves more frequency content and thus benefits from the inverse image formation steps in ODP.

Next, we show that ODP networks can generalize across image formation models. Table 3 compares ODP networks (same architecture as above) on the test scenarios from [22]. We trained one ODP network for the four out-of-focus kernels and associated noise levels in [22]. This out-of-focus model is on par with that of Schuler et al., even though Schuler et al. train a specialized model for each kernel and associated noise level. We trained a second ODP network on randomly generated motion blur kernels. This model outperforms Schuler et al. on the unseen test-set motion blur kernel, even though Schuler et al. trained specifically on the motion blur kernel from their test set.
Model          | Gaussian (a) | Gaussian (b) | Gaussian (c) | Box (d)     | Motion (e)
EPLL [29]      | 24.04        | 26.64        | 21.36        | 21.04       | 29.25
Levin [15]     | 24.09        | 26.51        | 21.72        | 21.91       | 28.33
Krishnan [14]  | 24.17        | 26.60        | 21.73        | 22.07       | 28.17
IDD-BM3D [7]   | 24.68        | 27.13        | 21.99        | 22.69       | 29.41
Schuler [22]   | 24.76/26.19  | 27.23/28.41  | 22.20/24.06  | 22.75/24.37 | 29.42/29.91
ODP            | 24.89/26.20  | 27.44/28.44  | 22.26/23.97  | 23.17/24.37 | 30.54/30.50

We also evaluate Schuler et al. and ODP on the BSDS500 data set [16] for a lower-variance comparison.

5.3 Compressed sensing MRI
In compressed sensing (CS) MRI, a latent image is measured in the Fourier domain with subsampling. Following [27], we assume noise-free measurements. The image formation model is y = MFx, where F is the DFT matrix and M is a diagonal binary sampling matrix for a given subsampling pattern. The corresponding Bayesian estimation problem (1) is the constrained problem

x̂ = argmin_x r(x; Θ)  subject to  MFx = y.    (5)
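The CS MRI forward model and its backprojection are cheap to apply with FFTs. A minimal sketch under our own assumptions (a random binary mask for illustration, whereas the experiments use pseudo-radial sampling patterns):

```python
import numpy as np

# Binary sampling mask M, here keeping ~40% of the Fourier domain.
mask = np.random.default_rng(2).random((8, 8)) < 0.4

def forward(x):
    # y = M F x: unitary 2-D DFT followed by binary subsampling.
    return mask * np.fft.fft2(x, norm="ortho")

def backproject(y):
    # x0 = F^H M y: zero-filled inverse DFT, the backprojection init.
    return np.fft.ifft2(mask * y, norm="ortho")

x = np.random.default_rng(3).standard_normal((8, 8))
y = forward(x)
```

Since F is unitary and M is idempotent, re-measuring the backprojection reproduces the measurements exactly, which is why backprojection is a natural initialization for the constrained problem (5).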
We trained an 8-iteration proximal gradient ODP network with a residual 7-layer, 64-channel CNN prior on the 100 training images and pseudo-radial sampling patterns from [27], which range from sampling 20% to 50% of the Fourier domain. We evaluate on the 50 test images from the same work.
Method         | 20%   | 30%   | 40%   | 50%   | Time per image
PBDW [18]      | 36.34 | 38.64 | 40.31 | 41.81 | 35.36 s (CPU)
PANO [19]      | 36.52 | 39.13 | 41.01 | 42.76 | 53.48 s (CPU)
FDLCP [28]     | 36.95 | 39.13 | 40.62 | 42.00 | 52.22 s (CPU)
ADMM-Net [27]  | 37.17 | 39.84 | 41.56 | 43.00 | 0.79 s (CPU)
BM3D-MRI [9]   | 37.98 | 40.33 | 41.99 | 43.47 | 40.91 s (CPU)
ODP            | 38.50 | 40.71 | 42.34 | 43.85 | 0.090 s (GPU)
Table 4 shows that the ODP network outperforms even the best alternative method, BM3D-MRI, for all sampling patterns. The improvement over BM3D-MRI is larger for sparser (and thus more challenging) sampling patterns. The fact that a single ODP model performs so well on all four sampling patterns, particularly in comparison to the ADMM-Net models, which were trained separately per sampling pattern, again demonstrates that ODP generalizes across image formation models.
5.4 Contribution of prior information
In this section, we analyze how much of the reconstruction performance is due to incorporating the image formation model in the data step and how much is due to the CNN prior steps. To this end, we performed an ablation study where ODP models were trained without the data steps. The resulting models are pure residual networks.
Application | Proximal gradient | Prior only
Denoising   | 29.04             | 29.04
Deblurring  | 31.04             | 26.04
CS MRI      | 41.35             | 37.70
Model             | Deblurring | CS MRI
Proximal gradient | 31.04      | 41.35
ADMM              | 31.01      | 41.39
LADMM             | 30.17      | 41.37
Gradient descent  | 29.96      | N/A
Table 5 shows that for denoising the residual network performed as well as the proximal gradient ODP network, while for deblurring and CS MRI the proximal gradient network performed substantially better. The difference is that for denoising the ODP data step is trivial, since A = I. For deblurring, by contrast, the data step applies a regularized inverse of the blur operator, a complicated global operation that approximately inverts A. Similarly, for CS MRI the ODP proximal gradient network applies a complicated global operation (involving a DFT) in the data step that approximately inverts A.
The results suggest an interpretation of the ODP proximal gradient networks as alternating between applying local corrections in the CNN prior step and applying a global operator B with BA ≈ I in the data step. The CNNs in the ODP networks conceptually learn to denoise and correct errors introduced by the approximate inversion of A with B, whereas the pure residual networks must learn to denoise and invert A directly. When approximately inverting A is a complicated global operation, direct inversion using residual networks poses an extreme challenge, which is overcome by the indirect approach taken by the ODP networks.
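For circular blur, such an approximate inverse B has an explicit Fourier-domain form: multiply by conj(C)/(|C|² + ε), where C is the kernel spectrum. A minimal sketch under that assumption (function name and kernel are illustrative):

```python
import numpy as np

def regularized_inverse(k, eps):
    # A Fourier-domain operator B with B A ~= I for circular blur A
    # with kernel spectrum C: multiply by conj(C) / (|C|^2 + eps).
    # Errors concentrate at frequencies the blur suppresses, which the
    # CNN prior steps then have to correct.
    C = np.fft.fft2(k)
    W = np.conj(C) / (np.abs(C) ** 2 + eps)
    return lambda y: np.real(np.fft.ifft2(W * np.fft.fft2(y)))

# Blur with a kernel whose spectrum stays away from zero, then invert.
k = np.zeros((8, 8)); k[0, 0], k[0, 1], k[1, 0] = 0.6, 0.2, 0.2
x = np.random.default_rng(4).standard_normal((8, 8))
blurred = np.real(np.fft.ifft2(np.fft.fft2(k) * np.fft.fft2(x)))
B = regularized_inverse(k, 1e-4)
```

With a well-conditioned kernel and small ε, B recovers the latent image almost exactly; for kernels that zero out frequencies (e.g., the disk kernel), the suppressed content is unrecoverable by B alone and must be hallucinated by the prior.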
5.5 Comparing algorithms
The existing work on unrolled optimization has done little to clarify which optimization algorithms perform best when unrolled. We investigate the relative performance of unrolled optimization algorithms in the context of ODP networks. Table 6 compares ODP networks for ADMM, LADMM, proximal gradient, and gradient descent, using the same initialization and network parameters as the deblurring and CS MRI experiments above.
For deblurring, the proximal gradient and ADMM models, which apply a regularized pseudoinverse of A in the data step, outperform the LADMM and gradient descent models, which only apply A and Aᵀ. The results suggest that taking more aggressive steps to approximately invert A in the data step improves performance. For CS MRI, all algorithms apply the pseudoinverse (because for A = MF the pseudoinverse is simply the adjoint FᴴM) and have similar performance, which matches the observations for deblurring.
Our algorithm comparison shows minimal benefit from incorporating Lagrange multipliers, which is expected for the relatively low number of iterations in our models. The ODP ADMM networks for deblurring and CS MRI are identical to the respective proximal gradient networks except that ADMM includes Lagrange multipliers, and the performance is on par. For deblurring, LADMM and gradient descent are similar architectures, but LADMM incorporates Lagrange multipliers and shows a small performance gain. Note that gradient descent cannot be applied to CS MRI because problem (5) is constrained.
6 Conclusion
The proposed ODP framework offers a principled approach to incorporating prior knowledge of the image formation into deep networks for solving inverse problems in imaging, yielding state-of-the-art results for denoising, deblurring, and CS MRI. The framework generalizes and outperforms previous approaches to unrolled optimization and deep networks for direct inversion. The presented ablation studies offer general insights into the benefits of prior information and which algorithms are most suitable for unrolled optimization.
Although the class of imaging problems considered in this work lies at the core of imaging and sensing, it is only a small fraction of the potential applications for ODP. In future work we will explore ODP for blind inverse problems, in which the image formation operator is not fully known, as well as nonlinear image formation models. Outside of imaging, control is a promising field in which to apply the ODP framework because deep networks may potentially benefit from prior knowledge of physical dynamics.
References
[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[2] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
[5] Y. Chen, W. Yu, and T. Pock. On learning optimized reaction diffusion processes for effective image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5261–5269, 2015.
[6] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080–2095, 2007.
[7] A. Danielyan, V. Katkovnik, and K. Egiazarian. BM3D frames and variational image deblurring. IEEE Transactions on Image Processing, 21(4):1715–1728, 2012.
[8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, 2009.
[9] E. Eksioglu. Decoupled algorithm for MRI reconstruction using nonlocal block matching model: BM3D-MRI. Journal of Mathematical Imaging and Vision, 56(3):430–440, 2016.
[10] D. Geman and C. Yang. Nonlinear image recovery with half-quadratic regularization. IEEE Transactions on Image Processing, 4(7):932–946, 1995.
[11] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 9, pages 249–256, 2010.
[12] K. Gregor and Y. LeCun. Learning fast approximations of sparse coding. In Proceedings of the International Conference on Machine Learning, pages 399–406, 2010.
[13] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[14] D. Krishnan and R. Fergus. Fast image deconvolution using hyper-Laplacian priors. In Advances in Neural Information Processing Systems, pages 1033–1041, 2009.
[15] A. Levin, R. Fergus, F. Durand, and W. Freeman. Image and depth from a conventional camera with a coded aperture. ACM Transactions on Graphics, 26(3), 2007.
[16] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the IEEE International Conference on Computer Vision, pages 416–423, 2001.
[17] P. Ochs, R. Ranftl, T. Brox, and T. Pock. Techniques for gradient-based bilevel optimization with non-smooth lower level problems. Journal of Mathematical Imaging and Vision, pages 1–20, 2016.
[18] X. Qu, D. Guo, B. Ning, Y. Hou, Y. Lin, S. Cai, and Z. Chen. Undersampled MRI reconstruction with patch-based directional wavelets. Magnetic Resonance Imaging, 30(7):964–977, 2012.
[19] X. Qu, Y. Hou, F. Lam, D. Guo, J. Zhong, and Z. Chen. Magnetic resonance image reconstruction from undersampled measurements using a patch-based nonlocal operator. Medical Image Analysis, 18(6):843–856, 2014.
[20] S. Roth and M. Black. Fields of experts: A framework for learning image priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 860–867, 2005.
[21] U. Schmidt and S. Roth. Shrinkage fields for effective image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2774–2781, 2014.
[22] C. Schuler, H. Burger, S. Harmeling, and B. Schölkopf. A machine learning approach for non-blind image deconvolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1067–1074, 2013.
[23] S. Smith. The Scientist and Engineer's Guide to Digital Signal Processing. California Technical Publishing, San Diego, 1997.
[24] S. Wang, S. Fidler, and R. Urtasun. Proximal deep structured models. In Advances in Neural Information Processing Systems, pages 865–873, 2016.
[25] S. Wang, Z. Su, L. Ying, X. Peng, S. Zhu, F. Liang, D. Feng, and D. Liang. Accelerating magnetic resonance imaging via deep learning. In Proceedings of the IEEE International Symposium on Biomedical Imaging, pages 514–517, 2016.
[26] L. Xu, J. Ren, C. Liu, and J. Jia. Deep convolutional neural network for image deconvolution. In Advances in Neural Information Processing Systems, pages 1790–1798, 2014.
[27] Y. Yang, J. Sun, H. Li, and Z. Xu. Deep ADMM-Net for compressive sensing MRI. In Advances in Neural Information Processing Systems, pages 10–18, 2016.
[28] Z. Zhan, J.-F. Cai, D. Guo, Y. Liu, Z. Chen, and X. Qu. Fast multiclass dictionaries learning with geometrical directions in MRI reconstruction. IEEE Transactions on Biomedical Engineering, 63(9):1850–1861, 2016.
[29] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In Proceedings of the IEEE International Conference on Computer Vision, pages 479–486, 2011.