A collection of recent efforts surveyed in  consider the problem of using training data to solve inverse problems in imaging. Specifically, imagine we observe a corrupted set of measurements $y$ of an image $x$ according to a measurement operator $A$ with some additive noise $\varepsilon$:

$y = Ax + \varepsilon. \quad (1)$
Our task is to compute an estimate of $x$ given the measurements $y$ and knowledge of $A$. This task is particularly challenging when the inverse problem is ill-posed, i.e., when the system is underdetermined or ill-conditioned, in which case simple methods such as least squares estimation (i.e., $\hat{x} \in \arg\min_x \|Ax - y\|_2^2$) may not have a unique solution or may produce estimates that are highly sensitive to noise.
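To see why ill-conditioning makes plain least squares unreliable, consider a small synthetic sketch; the operator, its singular-value decay, and the noise level below are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-conditioned forward operator with singular values from 1 down to 1e-8.
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
y = A @ x_true

# Perturb the measurements along the operator's weakest direction.
noise = 1e-6 * U[:, -1]

x_ls = np.linalg.solve(A, y)              # essentially recovers x_true
x_ls_noisy = np.linalg.solve(A, y + noise)

# The 1e-6 perturbation is amplified by 1/sigma_min = 1e8.
err = np.linalg.norm(x_ls_noisy - x_true)
print(err)   # approximately 100 = 1e-6 / sigma_min, dwarfing ||x_true||
```

Regularization, whether hand-crafted or learned, exists precisely to tame this amplification of measurement noise.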
Decades of research have explored geometric models of image structure that can be used to regularize solutions to this inverse problem, including [32, 29, 10] and many others. More recent efforts have focused instead on using large collections of training images to learn effective regularizers.
One particularly popular and effective approach involves augmenting standard iterative inverse problem solvers with learned deep networks. This approach, which we refer to as deep unrolling (DU), is reviewed in Section 2.1. The basic idea is to build an architecture that mimics a small number of steps of an iterative procedure. In practice, the number of steps is quite small (typically 5-10) because of stability, memory, and numerical issues arising in backpropagation. This paper sidesteps this key limitation of deep unrolling methods with a novel approach based on deep equilibrium models (DEMs) , which are designed for training arbitrarily deep networks. The result is a novel approach to training networks to solve inverse problems in imaging that yields up to a 4dB improvement in performance over state-of-the-art alternatives, and in which the computational budget can be selected at test time to optimize context-dependent tradeoffs between accuracy and computation. The key empirical findings, which are detailed in Section 6.4, are illustrated in Fig. 1(a).
This paper presents a fundamentally new approach to machine-learning-based methods for solving linear inverse problems in imaging. Unlike most state-of-the-art methods, which are based on unrolling a small number of iterations of an iterative reconstruction scheme (“deep unrolling”), our method is based on deep equilibrium models that correspond to a potentially infinite number of iterations. This framework yields more accurate reconstructions than the current state of the art across a range of inverse problems and gives users the ability to navigate a tradeoff between reconstruction computation time and accuracy at test time; specifically, we observe up to a 4dB improvement in PSNR. Furthermore, because our formulation is based on finding fixed points, we can use standard accelerated fixed-point methods to speed up test-time computation – something that is not possible with deep unrolling methods. In addition, our approach inherits provable convergence guarantees depending on the “base” algorithm used to select a fixed-point equation for the deep equilibrium framework. Experimental results also show that our proposed initialization based on pre-training is superior to random initialization, and that the proposed approach is more robust to noise than past methods. Overall, the proposed approach is a unique bridge between conventional fixed-point methods in numerical analysis on one hand and deep learning and optimization-based solvers for inverse problems on the other.
2 Relationship to Prior Work
2.1 Review of Deep Unrolling Methods
Deep unrolling methods are approaches to solving inverse problems that consist of a fixed number of architecturally identical “blocks,” often inspired by a particular optimization algorithm. These methods represent the current state of the art in MRI reconstruction, with most top submissions to the fastMRI challenge  being some variety of unrolled network. Unrolled networks have also seen success in other imaging tasks, e.g., low-dose CT , light-field photography , and emission tomography .
We describe here a specific variant of deep unrolling methods based on gradient descent, although many variants exist based on alternative optimization or fixed point iteration schemes . Suppose we have a known regularization function $r(x)$ that could be applied to an image $x$; e.g., in Tikhonov regularization, $r(x) = \lambda\|x\|_2^2$ for some scalar $\lambda > 0$. Then we could compute an image estimate by solving the optimization problem

$\hat{x} = \arg\min_x \tfrac{1}{2}\|Ax - y\|_2^2 + r(x). \quad (2)$
If $r$ is differentiable, this can be accomplished via gradient descent. That is, we start with an initial estimate such as $x^{(0)} = A^\top y$ and choose a step size $\eta > 0$, such that for iteration $k$, we set

$x^{(k+1)} = x^{(k)} - \eta\left[A^\top(Ax^{(k)} - y) + \nabla r(x^{(k)})\right], \quad (3)$

where $\nabla r$ is the gradient of the regularizer.
The basic idea behind deep unrolled methods is to fix some number of iterations $K$ (typically $K$ ranges from 5 to 10), declare that $x^{(K)}$ will be our estimate $\hat{x}$, and model $\nabla r$ with a neural network, denoted $R_\theta$, that can be learned with training data. We assume that all $K$ blocks have identical weights, although other works also explore non-weight-tied variants . For example, we may define the unrolled gradient descent estimate to be $\hat{x} = x^{(K)}$, where $x^{(0)} = A^\top y$ and for $k = 0, \ldots, K-1$

$x^{(k+1)} = x^{(k)} - \eta\left[A^\top(Ax^{(k)} - y) + R_\theta(x^{(k)})\right].$
Training attempts to minimize the cost function $\sum_{i=1}^N \|\hat{x}(y_i) - x_i\|_2^2$ with respect to the network parameters $\theta$. This form of training is often called “end-to-end”; that is, we do not train the network $R_\theta$ representing $\nabla r$ in isolation, but rather on the quality of the resulting estimate $\hat{x}$, which depends on the forward model $A$.
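As a concrete toy illustration of the unrolled iteration above, the sketch below replaces the learned network $R_\theta$ with the Tikhonov gradient $\lambda x$, so the limiting estimate has a closed form we can check against; a real DU method would use a trained CNN and a small $K$. The operator sizes and constants are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy underdetermined problem: A is a random 30x50 operator.
m, n = 30, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = A @ x_true

lam = 0.1

def R_theta(x):
    # Stand-in for the learned regularizer gradient: the Tikhonov gradient
    # lam * x, chosen only so the fixed point is available in closed form.
    return lam * x

def unrolled_grad(y, K, eta=0.1):
    x = A.T @ y                       # x^(0) = A^T y
    for _ in range(K):
        # One unrolled "block": a gradient step on data fit plus regularizer.
        x = x - eta * (A.T @ (A @ x - y) + R_theta(x))
    return x

x_hat = unrolled_grad(y, K=2000)

# With this choice of R_theta the iteration converges to the Tikhonov solution.
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print(np.linalg.norm(x_hat - x_tik))  # near zero for large K
```

In actual deep unrolling, $K$ is truncated to 5-10 blocks and the whole pipeline is differentiated end-to-end; this sketch only shows the structure of the forward computation.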
The number of iterations is kept small for two reasons. First, at deployment, these systems are optimized to compute image estimates quickly – a desirable property we wish to retain in developing new methods. Second, it is challenging to train deep unrolled networks for many iterations due to memory limitations of GPUs because the memory required to calculate the backpropagation updates scales linearly with the number of unrolled iterations.
As a workaround, consider training such systems for a small number of iterations, then extracting the learned regularizer gradient $R_\theta$ and using it within a gradient descent algorithm until convergence (i.e., for more iterations than used in training). Our numerical results highlight how poorly this method performs in practice (Section 6.4). Choosing the number of iterations (and hence the test-time computational budget) at training time is essential. As we illustrate in Fig. 1(b), one cannot deviate from this choice after training and expect good performance.
2.2 Review of Deep Equilibrium Models
In , the authors propose a method for training arbitrarily-deep networks defined by repeated application of a single layer. Imagine an $L$-layer network with input $y$ and weights $\theta$. Letting $z^{(k)}$ denote the output of the $k$th hidden layer, we may write

$z^{(k+1)} = f_\theta^{(k)}(z^{(k)}; y),$

where $k$ is the layer index and $f_\theta^{(k)}$ is a nonlinear transformation such as inner products followed by the application of a nonlinear activation function. Recent prior work explored forcing this transformation at each layer to be the same (i.e., weight tying), so that $f_\theta^{(k)} = f_\theta$ for all $k$, and showed that such networks still yield competitive performance [11, 3]. Under weight tying, we have the recursion

$z^{(k+1)} = f_\theta(z^{(k)}; y), \quad (4)$
and the output as $k \to \infty$ is a fixed point $z^\star$ of the operator $f_\theta(\cdot; y)$. The authors of  show this fixed point can be computed without explicitly building an infinitely-deep network, and that the network weights can be learned using implicit differentiation and constant memory, bypassing computation and numerical stability issues associated with related techniques on large-scale problems [8, 15]. This past work focused on sequence models and time-series tasks, assuming that each $f_\theta$ was a single layer of a neural network, and did not explore the image reconstruction task that is the focus of this paper.
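A minimal numpy sketch of this idea: a single weight-tied tanh layer whose linear part is scaled to be a contraction, so that repeated application converges to an equilibrium regardless of "depth." The layer form, dimensions, and scaling are illustrative, not the architecture of the cited work:

```python
import numpy as np

rng = np.random.default_rng(2)

d = 16
W = rng.standard_normal((d, d))
W *= 0.9 / np.linalg.norm(W, 2)   # spectral norm 0.9; tanh is 1-Lipschitz,
                                  # so the layer is a 0.9-contraction in z
U = rng.standard_normal((d, d))
b = rng.standard_normal(d)
y = rng.standard_normal(d)        # the network "input" (injected at every layer)

def f(z):
    # One weight-tied layer; the infinite-depth network output is its fixed point.
    return np.tanh(W @ z + U @ y + b)

z = np.zeros(d)
for _ in range(200):
    z = f(z)

# z is now (numerically) the equilibrium of the infinitely deep weight-tied net.
print(np.linalg.norm(f(z) - z))   # near zero
```

The point of the DEM machinery is that this equilibrium, and gradients with respect to $\theta$, can be computed without storing 200 intermediate activations.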
2.3 Relationship to Plug-and-Play and Regularization by Denoising Methods
Initiated by , a collection of methods based on the plug-and-play (PnP) framework have been proposed, allowing denoising algorithms to be used as priors for model-based image reconstruction. Given an inverse problem setting, one writes the reconstructed image as the solution to an optimization problem in which the objective function is the sum of a data-fit term and a regularization term. Applying the alternating directions method of multipliers (ADMM, [6, 7]) to this optimization problem, we arrive at a collection of update equations, one of which has the form

$x^{(k+1)} = \arg\min_x \tfrac{\beta}{2}\|x - v^{(k)}\|_2^2 + r(x),$
where $r$ is the regularization function; the optimization problem in this update equation can be considered as “denoising” the image $v^{(k)}$. PnP methods replace this explicit optimization step with a “plugged-in” denoising method. Notably, some state-of-the-art denoisers (e.g., BM3D  and U-nets ) do not have an explicit regularizer $r$ associated with them, but nevertheless empirically work well within the PnP framework. Regularization by Denoising (RED)  is based on a similar philosophy, but considers a regularizer of the form

$r(x) = \tfrac{\lambda}{2}\, x^\top\left(x - D(x)\right),$

where $D$ corresponds to an image denoising function.
Recent PnP and RED efforts focus on using training data to learn denoisers [22, 30, 38, 33, 17]. In contrast to the unrolling methods described in Section 2.1, these methods are not trained end-to-end; rather, the denoising module is trained independent of the inverse problem (i.e., of $A$) at hand. As described by , decoupling the training of the learned component from $A$ results in a reconstruction system that is flexible and does not need to be re-trained for each new $A$, but can require substantially more training samples to achieve the reconstruction accuracy of a method trained end-to-end for a specific $A$.
3 Proposed Approach
Our approach is to choose a function $f_\theta$ so that a fixed point of the recursion in (4) is a good estimate of $x$ for a given $y$. To the best of our knowledge, this is the first application of DEMs to image reconstruction tasks. We describe choices of $f_\theta$ (and hence of the implicit infinite-depth neural network architecture) that explicitly account for the forward model $A$ and more generally for the inverse problem at hand. Specifically, we propose choosing $f_\theta$ based on the ideas of deep unrolling (Section 2.1) extended to an infinite number of iterations – a paradigm that has been beyond the reach of all previous deep unrolling methods.
Let $x^{(k)}$ denote the image estimate after $k$ rounds of an iterative algorithm (as in Section 2.1) and $y$ denote the observation from (1) under forward operator $A$. We consider three specific choices of $f_\theta$ below, but note that many other options are possible.
Deep equilibrium gradient descent (DE-Grad): Mirroring the unrolled gradient descent iteration in Section 2.1, we choose

$f_\theta(x; y) = x - \eta\left[A^\top(Ax - y) + R_\theta(x)\right], \quad (5)$

where $R_\theta$ is a trainable network playing the role of the regularizer gradient $\nabla r$ in (3).
Deep equilibrium proximal gradient (DE-Prox):
Proximal gradient methods  use a proximal operator associated with a function $r$:

$\operatorname{prox}_r(z) = \arg\min_x \tfrac{1}{2}\|x - z\|_2^2 + r(x),$
and use this to solve the optimization problem in (2) via the iterates

$x^{(k+1)} = \operatorname{prox}_{\eta r}\!\left(x^{(k)} - \eta A^\top(Ax^{(k)} - y)\right),$
where $\eta$ is a step size. Inspired by this optimization framework, we choose $f_\theta$ in the DEM framework as

$f_\theta(x; y) = \operatorname{prox}_{\eta r}\!\left(x - \eta A^\top(Ax - y)\right).$

Following , we replace $\operatorname{prox}_{\eta r}$ with a trainable network $R_\theta$, leading to the fixed-point iterations:

$x^{(k+1)} = R_\theta\!\left(x^{(k)} - \eta A^\top(Ax^{(k)} - y)\right).$
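For intuition, the DE-Prox iteration map can be exercised with soft-thresholding standing in for the trained denoiser $R_\theta$; with that substitution the fixed-point iteration reduces to the classical ISTA algorithm for sparse recovery. The operator size, sparsity level, and constants below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

m, n = 40, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:5] = rng.standard_normal(5)    # sparse ground truth
y = A @ x_true

eta, lam = 0.1, 0.01

def R_theta(v):
    # Stand-in for the learned denoiser: soft-thresholding, the proximal
    # operator of eta*lam*||.||_1.  A trained network would replace this map.
    return np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)

def f(x):
    # DE-Prox iteration map: "denoise" a gradient step on the data-fit term.
    return R_theta(x - eta * A.T @ (A @ x - y))

x = np.zeros(n)
for _ in range(3000):
    x = f(x)

print(np.linalg.norm(f(x) - x))   # near zero: x is (numerically) a fixed point
```

The deep equilibrium view treats this fixed point itself as the reconstruction, and trains $R_\theta$ so that the fixed point matches ground-truth images.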
Deep equilibrium alternating direction method of multipliers (DE-ADMM):
ADMM solves (2) by introducing an auxiliary variable $z$ and splitting the objective, i.e., minimizing $\tfrac{1}{2}\|Az - y\|_2^2 + r(x)$ subject to $x = z$. The augmented Lagrangian (in its “scaled form” – see ) associated with this problem is given by

$L_\beta(x, z, u) = \tfrac{1}{2}\|Az - y\|_2^2 + r(x) + \tfrac{\beta}{2}\|x - z + u\|_2^2 - \tfrac{\beta}{2}\|u\|_2^2,$

where $u$ is an additional auxiliary (scaled dual) variable and $\beta > 0$ is a user-defined parameter. The ADMM iterates are then

$x^{(k+1)} = \arg\min_x\, r(x) + \tfrac{\beta}{2}\|x - z^{(k)} + u^{(k)}\|_2^2,$
$z^{(k+1)} = \arg\min_z\, \tfrac{1}{2}\|Az - y\|_2^2 + \tfrac{\beta}{2}\|x^{(k+1)} - z + u^{(k)}\|_2^2,$
$u^{(k+1)} = u^{(k)} + x^{(k+1)} - z^{(k+1)}.$
Here the $x$- and $z$-updates simplify as

$x^{(k+1)} = \operatorname{prox}_{r/\beta}\!\left(z^{(k)} - u^{(k)}\right),$
$z^{(k+1)} = (A^\top A + \beta I)^{-1}\left(A^\top y + \beta\,(x^{(k+1)} + u^{(k)})\right).$
As in the DE-Prox approach, the proximal operator $\operatorname{prox}_{r/\beta}$ can be replaced with a learned network, denoted $R_\theta$. Making this replacement, and substituting the $x$-update directly into the expressions for $z^{(k+1)}$ and $u^{(k+1)}$, gives:

$z^{(k+1)} = (A^\top A + \beta I)^{-1}\left(A^\top y + \beta\left(R_\theta(z^{(k)} - u^{(k)}) + u^{(k)}\right)\right),$
$u^{(k+1)} = u^{(k)} + R_\theta(z^{(k)} - u^{(k)}) - z^{(k+1)}.$
Note that the updates for $z^{(k+1)}$ and $u^{(k+1)}$ depend only on the previous iterates $z^{(k)}$ and $u^{(k)}$. Therefore, the above updates can be interpreted as fixed-point iterations on the joint variable $(z, u)$, where the iteration map $f_\theta$ is implicitly defined as the map that satisfies

$(z^{(k+1)}, u^{(k+1)}) = f_\theta(z^{(k)}, u^{(k)}; y).$
Here we take the estimated image to be $\hat{x} = R_\theta(z^\star - u^\star)$, where $(z^\star, u^\star)$ is the fixed-point of $f_\theta$, i.e., the limit of $(z^{(k)}, u^{(k)})$ as $k \to \infty$.
4 Calculating forward passes and gradient updates
Given a choice of $f_\theta$, we confront the following obstacles. (1) Forward calculation: given an observation $y$ and weights $\theta$, we need to compute the fixed point of $f_\theta(\cdot; y)$ efficiently. (2) Training: given a collection of training samples $\{(x_i, y_i)\}_{i=1}^N$, we need to find the optimal weights $\theta$.
4.1 Calculating Fixed-Points
Both training and inference in a DEM require calculating a fixed point of the iteration map $f_\theta(\cdot; y)$ given some initial point $z^{(0)}$. The most straightforward approach is to use the fixed-point iterations given in (4). Convergence of this scheme for specific designs of $f_\theta$ is discussed in Section 5.
However, fixed-point iterations may not converge quickly. By viewing unrolled deep networks as fixed-point iterations, we inherit the ability to accelerate inference with standard fixed-point accelerators. To our knowledge, this work is the first time iterative inversion methods incorporating deep networks have been accelerated using fixed-point accelerators.
Anderson acceleration  utilizes the past $m$ iterates to identify promising directions to move during the iterations. This takes the form of identifying a vector $\alpha^{(k)} \in \mathbb{R}^m$ and setting

$z^{(k+1)} = \sum_{i=1}^{m} \alpha^{(k)}_i\, f_\theta\!\left(z^{(k-m+i)}; y\right).$

We find $\alpha^{(k)}$ by solving the optimization problem:

$\min_{\alpha}\ \|G\alpha\|_2^2 \quad \text{subject to} \quad \textstyle\sum_{i=1}^{m}\alpha_i = 1, \quad (12)$
with $G$ a matrix with $m$ columns, where the $i$th column is the (vectorized) residual $f_\theta(z^{(k-m+i)}; y) - z^{(k-m+i)}$. The optimization problem in (12) admits a least-squares solution, adding negligible computational overhead when $m$ is small.
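A compact sketch of this scheme for a linear contraction, solving the constrained least-squares problem (12) via a small bordered linear system; the contraction $f$, memory size, and iteration counts are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(4)

d = 50
S = rng.standard_normal((d, d))
B = S + S.T
B *= 0.95 / np.linalg.norm(B, 2)   # symmetric contraction, factor 0.95 (slow plain iteration)
c = rng.standard_normal(d)

def f(z):
    return B @ z + c

def anderson(f, z0, m=5, iters=60, reg=1e-10):
    # Keep the last m iterates and their residuals; find weights alpha solving
    # min ||G alpha|| s.t. sum(alpha) = 1 via a bordered (KKT) linear system,
    # then step to the alpha-weighted combination of the f-evaluations.
    zs, fs = [z0], [f(z0)]
    for _ in range(iters):
        G = np.stack([fz - z for z, fz in zip(zs, fs)], axis=1)  # residual matrix
        k = G.shape[1]
        H = np.zeros((k + 1, k + 1))
        H[:k, :k] = G.T @ G + reg * np.eye(k)
        H[k, :k] = H[:k, k] = 1.0
        rhs = np.zeros(k + 1)
        rhs[k] = 1.0
        alpha = np.linalg.solve(H, rhs)[:k]
        z = np.stack(fs, axis=1) @ alpha
        zs.append(z)
        fs.append(f(z))
        zs, fs = zs[-m:], fs[-m:]
    return zs[-1]

z_star = np.linalg.solve(np.eye(d) - B, c)   # exact fixed point, for reference
z_aa = anderson(f, np.zeros(d))
print(np.linalg.norm(z_aa - z_star))         # far smaller than plain iteration achieves
```

For a contraction factor near one, the accelerated iterates reach a given accuracy in far fewer applications of $f$ than plain fixed-point iteration, which is exactly the test-time benefit claimed above.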
An important practical consideration is that accelerating fixed-point iterations arising from optimization algorithms with auxiliary variables (like ADMM) is non-trivial. In these cases, standard fixed-point iterations may be preferred for their simplicity of implementation. This is the approach we take in finding fixed-points of our proposed DE-ADMM model.
4.2 Gradient Calculation
In this section, we provide a brief overview of the training procedure used to train all networks in Section 6.4. We use stochastic gradient descent to find network parameters $\theta$ that (locally) minimize a cost function of the form $\frac{1}{N}\sum_{i=1}^N \ell(\hat{x}(y_i), x_i)$, where $\ell$ is a given loss function, $x_i$ is the $i$th training image with paired measurements $y_i$, and $\hat{x}(y_i)$ denotes the reconstructed image given as the fixed-point of $f_\theta(\cdot; y_i)$. For our image reconstruction experiments, we use the mean-squared error (MSE) loss:

$\ell(\hat{x}, x) = \tfrac{1}{2}\|\hat{x} - x\|_2^2.$
To simplify the calculations below, we consider gradients of the cost function with respect to a single training measurement/image pair, which we denote $(y, x)$. Following , we leverage the fact that $\hat{x}$ is a fixed-point of $f_\theta(\cdot; y)$ to find the gradient of the loss with respect to the network parameters without backpropagating through an arbitrarily-large number of fixed-point iterations. We summarize this approach below.
First, abbreviating $f_\theta(\hat{x}; y)$ by $f_\theta(\hat{x})$, by the chain rule we have

$\frac{\partial \ell}{\partial \theta} = \left(\frac{\partial \hat{x}}{\partial \theta}\right)^{\!\top}\frac{\partial \ell}{\partial \hat{x}}. \quad (14)$
Since we assume $\ell$ is the MSE loss, the gradient $\partial\ell/\partial\hat{x}$ is simply the residual between the ground truth and the equilibrium point: $\partial\ell/\partial\hat{x} = \hat{x} - x$.
In order to compute $\partial\hat{x}/\partial\theta$ we start with the fixed point equation: $\hat{x} = f_\theta(\hat{x})$. Implicitly differentiating this equation with respect to $\theta$ and rearranging gives

$\frac{\partial \hat{x}}{\partial \theta} = \left(I - \frac{\partial f_\theta(\hat{x})}{\partial \hat{x}}\right)^{\!-1}\frac{\partial f_\theta(\hat{x})}{\partial \theta}. \quad (15)$
Plugging this expression into (14) gives

$\frac{\partial \ell}{\partial \theta} = \left(\frac{\partial f_\theta(\hat{x})}{\partial \theta}\right)^{\!\top}\left(I - \frac{\partial f_\theta(\hat{x})}{\partial \hat{x}}\right)^{\!-\top}(\hat{x} - x).$

This converts the memory-intensive task of backpropagating through many iterations of $f_\theta$ to the problem of calculating an inverse Jacobian-vector product. To approximate the inverse Jacobian-vector product, first we define the vector $q$ by

$q = \left(I - \frac{\partial f_\theta(\hat{x})}{\partial \hat{x}}\right)^{\!-\top}(\hat{x} - x).$
Following , we note that $q$ is a fixed point of the equation

$q = \left(\frac{\partial f_\theta(\hat{x})}{\partial \hat{x}}\right)^{\!\top} q + (\hat{x} - x), \quad (16)$
and the same machinery used to calculate the fixed point $\hat{x}$ may be used to calculate $q$. For analysis purposes, we note that if $\left\|\partial f_\theta(\hat{x})/\partial\hat{x}\right\| < 1$, the simple fixed-point iterations on (16) may be represented by the Neumann series:

$q = \sum_{k=0}^{\infty}\left(\left(\frac{\partial f_\theta(\hat{x})}{\partial \hat{x}}\right)^{\!\top}\right)^{\!k}(\hat{x} - x). \quad (17)$
Convergence of the above Neumann series is discussed in Section 5.
The gradient calculation process is summarized in the following steps, assuming a fixed point $\hat{x}$ of $f_\theta(\cdot; y)$ is known:
Compute the residual $\hat{x} - x$.
Compute an approximate fixed-point $q$ of the equation (16), and form the gradient as the vector-Jacobian product $\left(\partial f_\theta(\hat{x})/\partial\theta\right)^{\!\top} q$.
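The implicit-gradient recipe above can be checked on a toy linear iteration map, where the exact gradient is available by finite differences. Everything here (the map $f$, the parameter vector $v$, the dimensions) is an illustrative assumption, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

d = 30
B = rng.standard_normal((d, d))
B *= 0.5 / np.linalg.norm(B, 2)     # ||df/dz|| = 0.5 < 1: Neumann series converges
v = rng.standard_normal(d)
x = rng.standard_normal(d)          # "ground-truth image" in the loss

def f(z, theta):
    # Toy linear iteration map with a scalar parameter: df/dz = B, df/dtheta = v.
    return B @ z + theta * v

def fixed_point(theta, iters=200):
    z = np.zeros(d)
    for _ in range(iters):
        z = f(z, theta)
    return z

theta = 0.7
z_star = fixed_point(theta)
resid = z_star - x                  # gradient of the MSE loss at the fixed point

# Solve q = (df/dz)^T q + resid by fixed-point iteration (the Neumann series (17)).
q = np.zeros(d)
for _ in range(200):
    q = B.T @ q + resid

grad_implicit = v @ q               # (df/dtheta)^T q

# Check against a central finite-difference approximation of dloss/dtheta.
eps = 1e-6
loss = lambda t: 0.5 * np.sum((fixed_point(t) - x) ** 2)
grad_fd = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
print(grad_implicit, grad_fd)       # the two values agree closely
```

Note that only the equilibrium $z^\star$ and the vector $q$ are stored; no intermediate iterates are retained, which is the constant-memory property exploited during training.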
5 Convergence Theory
Here we study convergence of the proposed deep equilibrium models to a fixed-point at inference time, i.e., given the iteration map $f_\theta(\cdot; y)$ we give conditions that guarantee the convergence of the iterates $z^{(k+1)} = f_\theta(z^{(k)}; y)$ to a fixed-point as $k \to \infty$.
Classical fixed-point theory ensures that the iterates converge to a unique fixed-point if the iteration map is contractive, i.e., if there exists a constant $c < 1$ such that $\|f_\theta(z; y) - f_\theta(z'; y)\| \le c\,\|z - z'\|$ for all $z, z'$. Below we give conditions on the learned component $R_\theta$ (replacing the gradient or proximal mapping of a regularizer) used in the DE-Grad, DE-Prox and DE-ADMM models that ensure the resulting iteration map is contractive, and thus that the fixed-point iterations for these models converge.
In particular, following , we assume that the learned component $R_\theta$ satisfies the following condition: there exists an $\epsilon > 0$ such that for all $x, x'$ we have

$\left\|\left(x - R_\theta(x)\right) - \left(x' - R_\theta(x')\right)\right\| \le \epsilon\,\|x - x'\|, \quad (19)$

where $\|\cdot\|$ is the Euclidean norm. In other words, we assume the map $I - R_\theta$ is $\epsilon$-Lipschitz.
If we interpret $R_\theta$ as a denoising or de-artifacting network, then $I - R_\theta$ is the map that outputs the noise or artifacts present in a degraded image. In practice, $R_\theta$ is often implemented with a residual “skip-connection”, such that $R_\theta(x) = x - \tilde{R}_\theta(x)$, where $\tilde{R}_\theta$ is, e.g., a deep U-net. Therefore, in this case, (19) is equivalent to assuming the trained network $\tilde{R}_\theta$ is $\epsilon$-Lipschitz.
We prove the following theorem in the supplement.
Theorem 1 (Convergence of DE-Grad).
Assume that $I - R_\theta$ is $\epsilon$-Lipschitz (19), and let $\lambda_{\max}$ and $\lambda_{\min}$ denote the maximum and minimum eigenvalues of $A^\top A$, respectively. If the step-size parameter $\eta$ is such that $\eta \le 1/(1 + \lambda_{\max})$, then the DE-Grad iteration map $f_\theta$ defined in (5) satisfies

$\|f_\theta(x; y) - f_\theta(x'; y)\| \le \left(1 - \eta(1 + \lambda_{\min}) + \eta\epsilon\right)\|x - x'\|$

for all $x, x'$. The coefficient $c = 1 - \eta(1 + \lambda_{\min}) + \eta\epsilon$ is less than $1$ if $\epsilon < 1 + \lambda_{\min}$, in which case the iterates of DE-Grad converge.
Let $f_\theta$ be the iteration map for DE-Grad. The Jacobian of $f_\theta$ with respect to $x$ is given by

$J_f(x) = I - \eta A^\top A - \eta\, J_{R_\theta}(x),$

where $J_{R_\theta}(x)$ is the Jacobian of $R_\theta$ with respect to $x$. To prove $f_\theta$ is contractive it suffices to show $\|J_f(x)\| < 1$ for all $x$, where $\|\cdot\|$ denotes the spectral norm. Towards this end, we have

$\|J_f(x)\| = \left\|(1 - \eta)I - \eta A^\top A + \eta\left(I - J_{R_\theta}(x)\right)\right\| \le \max_i\left|1 - \eta(1 + \lambda_i)\right| + \eta\epsilon, \quad (20)$
where $\lambda_i$ denotes the $i$th eigenvalue of $A^\top A$, and in the final inequality (20) we used our assumption that the map $I - R_\theta$ is $\epsilon$-Lipschitz, and therefore the spectral norm of its Jacobian $I - J_{R_\theta}(x)$ is bounded by $\epsilon$.
Finally, by our assumption $\eta \le 1/(1 + \lambda_{\max})$, we have $\eta(1 + \lambda_i) \le 1$ for all $i$, which implies $1 - \eta(1 + \lambda_i) \ge 0$ for all $i$. Therefore, the maximum in (20) is attained at $\lambda_{\min}$, which gives

$\|J_f(x)\| \le 1 - \eta(1 + \lambda_{\min}) + \eta\epsilon.$
This shows $f_\theta$ is $c$-Lipschitz with $c = 1 - \eta(1 + \lambda_{\min}) + \eta\epsilon$, proving the claim.
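The contraction bound above can be sanity-checked numerically with a linear stand-in for the learned component, chosen so that $I - R_\theta$ has spectral norm exactly $\epsilon$; the operator, $\epsilon$, and the sampling of test pairs are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

n = 25
A = rng.standard_normal((40, n)) / np.sqrt(40)
lams = np.linalg.eigvalsh(A.T @ A)
lam_min, lam_max = lams[0], lams[-1]

# Linear "learned" component R(x) = x - S x with ||S|| = eps, so that
# I - R = S is exactly eps-Lipschitz, matching assumption (19).
eps = 0.3
S = rng.standard_normal((n, n))
S *= eps / np.linalg.norm(S, 2)
R = lambda x: x - S @ x

eta = 1.0 / (1.0 + lam_max)         # step size satisfying the theorem's condition
y = rng.standard_normal(40)

def f(x):
    # DE-Grad iteration map (5) with the linear stand-in for R_theta.
    return x - eta * (A.T @ (A @ x - y) + R(x))

# Empirical Lipschitz ratios over random pairs vs. the theoretical bound.
bound = 1.0 - eta * (1.0 + lam_min) + eta * eps
ratios = [
    np.linalg.norm(f(x1) - f(x2)) / np.linalg.norm(x1 - x2)
    for x1, x2 in (rng.standard_normal((2, n)) for _ in range(100))
]
print(max(ratios), bound)           # empirical constant never exceeds the bound
```

Since $\epsilon < 1 + \lambda_{\min}$ here, the bound is below one and the iteration is a verified contraction, as the theorem predicts.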
Convergence of the PnP approaches PnP-Prox and PnP-ADMM was studied in . At inference time, the proposed DE-Prox and DE-ADMM methods are equivalent to the corresponding PnP method but with a retrained denoising network $R_\theta$. Therefore, the convergence results in  apply directly to DE-Prox and DE-ADMM. To keep the paper self-contained, we restate these results below, specialized to the case of the quadratic data-fidelity term assumed in (2).
Theorem 2 (Convergence of DE-Prox).
See Theorem 1 of .
Theorem 3 (Convergence of DE-ADMM).
See Corollary 1 of .
Unlike the convergence result for DE-Grad in Theorem 1, the convergence results for DE-Prox and DE-ADMM in Theorem 2 and Theorem 3 assume that $A^\top A \succ 0$, i.e., that $A$ has a trivial nullspace. This condition is satisfied for certain inverse problems, such as denoising or deblurring, but violated in many others, including compressed sensing and undersampled MRI. However, in practice we observe that the iterates of DE-Prox and DE-ADMM still appear to converge even in situations where $A$ has a nontrivial nullspace, indicating this assumption may be stronger than necessary.
Finally, an important practical concern when training deep equilibrium models is whether the fixed-point iterates used to compute gradients (as detailed in Section 4.2) will converge. Specifically, the gradient of the loss at a training pair involves computing the truncated Neumann series in (17). This series converges if the Jacobian $\partial f_\theta(z)/\partial z$ has spectral norm less than one when evaluated at any $z$, which holds whenever $f_\theta$ is contractive. Therefore, the same conditions in Theorems 1-3 that ensure the iteration map is contractive also ensure that the Neumann series in (17) used to compute gradients converges.
6 Experimental Results
6.1 Comparison Methods and Inverse Problems
Our numerical experiments include comparisons with a variety of models and methods. Total-variation Regularized Least Squares (TV) is an important baseline that does not use any training data but rather leverages geometric models of image structure [29, 31, 5]. The PnP and RED methods are described in Section 2.3; we consider both the original ADMM variant (PnP-ADMM)  and a proximal gradient variant (PnP-Prox) as described in . We utilize the ADMM formulation of RED. Deep Unrolled methods (DU) are described in Section 2.1; we consider DU using gradient descent, proximal gradient, and ADMM. The preconditioned Neumann network  represents the state of the art in unrolled approaches but does not have simple Deep Equilibrium or Plug-and-Play analogues.
We compare the above approaches across three inverse problems: Gaussian deblurring (Deblur), Gaussian compressed sensing at an 8× undersampling ratio (CS), and 8× accelerated Cartesian single-coil MRI reconstruction (MRI). The compressed sensing and single-coil MRI measurements are corrupted with additive Gaussian noise; the two deblurring settings, Deblur (1) and Deblur (2), are each corrupted with additive Gaussian noise at different noise levels.
For deblurring and compressed sensing, we utilize a subset of the Celebrity Faces with Attributes (CelebA) dataset , which consists of centered human faces. We train on a subset of 10,000 of the training images. All images are resized to 128×128. For the single-coil MRI problem, we use a random subset of size 2000 of the fastMRI single-coil knee dataset  for training. We trim the fully-sampled images so they are 320×320 pixels.
6.2 Architecture Specifics
For our learned network $R_\theta$, we utilize a U-Net architecture  with some modifications. First, we have removed all instance normalization layers. For both the CelebA and fastMRI datasets, we train six U-Net denoisers at different noise variances on the training split.
For the CelebA set, to ensure contractivity of the learned component, we add spectral normalization to all layers, ensuring that each layer has a Lipschitz constant bounded above by 1. This normalization is enforced during pretraining as well as during the Deep Equilibrium training phase.
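Spectral normalization is typically implemented with power iteration to estimate each weight matrix's largest singular value. A minimal numpy sketch of the idea (deep learning frameworks apply this per-layer during training; the matrix shape and iteration count here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

def spectral_normalize(W, n_iters=50):
    # Power iteration estimates the largest singular value sigma of W;
    # dividing by sigma bounds the (linear) layer's Lipschitz constant by ~1,
    # which is what encourages contractivity of the learned component.
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v
    return W / sigma

W = 5.0 * rng.standard_normal((64, 128))
W_sn = spectral_normalize(W)
print(np.linalg.norm(W_sn, 2))   # close to 1.0
```

Composing 1-Lipschitz layers with 1-Lipschitz activations keeps the whole network 1-Lipschitz, in line with condition (19).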
We found that adding spectral normalization resulted in significant PSNR drops on the fastMRI dataset, and so did not use spectral normalization for the fastMRI U-Nets. Instead, we initialized the U-Nets before pretraining by sampling kernel weights from a zero-mean Gaussian distribution with small variance. Empirically, this initialization provides sufficient expressive power without causing a lack of contractivity, which can cause significant problems during training.
Further details on settings, parameter choices, and data may be found in the appendix and in our publicly-available code, available at https://github.com/dgilton/deep_equilibrium_inverse.
6.3 Parameter Tuning and Pretraining
The proposed approaches to solving inverse problems via Deep Equilibrium are all based on some iterative optimization algorithm. Each of these iterative optimization algorithms has its own set of hyperparameters to choose, e.g., the step size $\eta$ in DE-Grad, plus any parameters used to calculate the initial estimate $z^{(0)}$.
We choose to fix all algorithm-specific hyperparameters prior to training a Deep Equilibrium network, for clarity and ease of training. We perform a grid search over algorithm-specific hyperparameters, testing the performance of the untrained Deep Equilibrium network on a held-out test set.
Tuning hyperparameters requires choosing a particular $R_\theta$ during tuning. We use an $R_\theta$ that has been pretrained for Gaussian denoising. Pretraining can be done on the target dataset (e.g., training on MRI images directly) or using an independent dataset (e.g., the BSD500 image dataset ). We use the former approach in our experiments. In the supplemental materials we show that pretraining provides a small improvement in reconstruction accuracy over random initialization.
Because we initialize our learned components with denoisers, the initial setup of our method exactly corresponds to tuning a PnP approach with a deep denoiser. Training adapts the iteration map to a particular inverse problem and data distribution. We note that our approach may be used to adapt any iterative optimization framework satisfying the conditions in Section 4, e.g., solvers for RED .
6.4 Main Results
We present the main reconstruction accuracy comparison in Table 1. Each entry for Deep Equilibrium (DE), Regularization by Denoising (RED), and Plug-and-Play (PnP) approaches is the result of running fixed-point iterations until the relative change between iterates falls below a fixed tolerance. During training, all DE models were limited to 50 forward updates. The DU models are tested at the number of iterations for which they were trained, and all parameters for TV reconstructions (including the number of TV iterations) are cross-validated to maximize PSNR. Performance as a function of iteration is shown in Figs. 1(a), 5(a), and 5(b), with example reconstructions in Fig. 6. Further example reconstructions are available for qualitative evaluation in Appendix 8. We observe that our DE reconstructions continue to improve in quality beyond the number of iterations they were trained for.
We observe that our DE-based approaches consistently outperform DU approaches across choices of base algorithm. Among choices of iterative reconstruction architectures for $f_\theta$, DE-Prox is a front-runner. However, some of the differences may be due to DE-ADMM not being accelerated, while DE-Prox and DE-Grad leverage Anderson acceleration.
Across problems, the DE approach is generally competitive with the end-to-end trained DU solvers. For two of three problems, our approach requires no more computation to be competitive with a DU network, with increasing advantage to our approach with further computation.
6.5 Effect of Pre-Training
Here we compare the effect of initializing the learned component in our deep equilibrium models with a pretrained denoiser versus initializing with random weights. We use identical hyperparameters for both initialization methods.
We present our results on Deep Equilibrium Proximal Gradient Descent (DE-Prox) and Deep Equilibrium ADMM (DE-ADMM) in Figure 7. We observe an improvement in reconstruction quality when utilizing our pretraining method compared to a random initialization. We also note that pretraining enables a simpler choice of algorithm-specific hyperparameters. For example, with random initialization, choosing the proper internal step size for DE-Prox would require training several different DE-Prox instances and choosing the correct step size based on a validation set, in addition to any other parameters, such as the learning rate used during training.
6.6 Noise Sensitivity
We observe empirically that the Deep Equilibrium approach to training achieves competitive reconstruction quality and increased flexibility with respect to allocating computation budget. Recent work in deep inversion has questioned these methods’ robustness to noise and unexpected inputs [2, 26, 13].
To examine whether the Deep Equilibrium approach is brittle to simple changes in the noise distribution, we varied the level of Gaussian noise added to the observations at test time and observed the effect on reconstruction quality. Fig. 7 demonstrates that the Deep Equilibrium model DE-Prox is more robust to variation in the noise level than the analogous Deep Unrolled approach DU-Prox. The forward model used in Fig. 7 is MRI reconstruction.
7 Discussion
This paper illustrates non-trivial quantitative benefits to using implicitly-defined infinite-depth networks for solving linear inverse problems in imaging. Other recent work has focused on implicit networks akin to the deep equilibrium models considered here (e.g., ). Whether these models could lead to additional advances in image reconstruction remains an open question for future work. Furthermore, while the exposition in this work focused on linear inverse problems, nonlinear inverse problems may be solved with iterative approaches just as well. The conditions under which the deep equilibrium methods proposed here may be applied to such iterative approaches are an active area of investigation.
-  (2018) Learned primal-dual reconstruction. IEEE transactions on medical imaging 37 (6), pp. 1322–1332. Cited by: §2.1.
-  (2020) On instabilities of deep learning in image reconstruction and the potential costs of ai. Proceedings of the National Academy of Sciences. Cited by: §6.6.
-  (2018) Trellis networks for sequence modeling. arXiv preprint arXiv:1810.06682. Cited by: §2.2.
-  (2019) Deep equilibrium models. In Advances in Neural Information Processing Systems, pp. 690–701. Cited by: §1, §2.2, §4.2, §6.2.
-  (2009) Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE transactions on image processing 18 (11), pp. 2419–2434. Cited by: §6.1.
-  (2011) Distributed optimization and statistical learning via the alternating direction method of multipliers. Now Publishers Inc. Cited by: §2.3, §3.
-  (2016) Plug-and-play admm for image restoration: fixed-point convergence and applications. IEEE Transactions on Computational Imaging 3 (1), pp. 84–98. Cited by: §2.3.
-  (2018) . In Advances in neural information processing systems, pp. 6571–6583. Cited by: §2.2.
-  (2020) Momentum-net: fast and convergent iterative neural network for inverse problems. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §2.1.
-  (2007) Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on image processing 16 (8), pp. 2080–2095. Cited by: §1, §2.3.
-  Recurrent stacking of layers for compact neural machine translation models. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 6292–6299. Cited by: §2.2.
-  (2019) Implicit deep learning. arXiv preprint arXiv:1908.06315. Cited by: §7.
-  (2020) Solving inverse problems with deep neural networks–robustness included?. arXiv preprint arXiv:2011.04268. Cited by: §6.6.
-  (2019) Neumann networks for linear inverse problems in imaging. IEEE Transactions on Computational Imaging. Cited by: §6.1.
-  (2017) Stable architectures for deep neural networks. Inverse Problems 34 (1), pp. 014004. Cited by: §2.2.
-  (2020) Deep implicit layers - neural odes, deep equilibrium models, and beyond. External Links: Cited by: §4.2, footnote 2.
-  (2020) Rare: image reconstruction using deep priors learned without groundtruth. IEEE Journal of Selected Topics in Signal Processing 14 (6), pp. 1088–1099. Cited by: §2.3.
-  Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV). Cited by: §6.1.
-  (2018) Neural proximal gradient descent for compressive imaging. In Advances in Neural Information Processing Systems, pp. 9573–9583. Cited by: §3.
-  (2001) A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, Vol. 2, pp. 416–423. Cited by: §6.3.
-  (2020) Model-based deep learning pet image reconstruction using forward-backward splitting expectation maximisation. IEEE Transactions on Radiation and Plasma Medical Sciences. Cited by: §2.1.
-  (2017) Learning proximal operators: using denoising networks for regularizing inverse imaging problems. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1781–1790. Cited by: §2.3.
-  (2020) State-of-the-art machine learning mri reconstruction in 2020: results of the second fastmri challenge. arXiv preprint arXiv:2012.06318. Cited by: §2.1.
-  (2020) Deep learning techniques for inverse problems in imaging. arXiv preprint arXiv:2005.06001. Cited by: §1, §2.1, §2.3.
-  (2014) Proximal algorithms. Foundations and Trends in optimization 1 (3), pp. 127–239. Cited by: §3.
-  (2020) Improving robustness of deep-learning-based image reconstruction. arXiv preprint arXiv:2002.11821. Cited by: §6.6.
-  (2017) The little engine that could: regularization by denoising (red). SIAM Journal on Imaging Sciences 10 (4), pp. 1804–1844. Cited by: §2.3, §6.3.
-  (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §2.3, §6.2.
-  (1992) Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60 (1-4), pp. 259–268. Cited by: §1, §6.1.
-  (2019) Plug-and-play methods provably converge with properly trained denoisers. In International Conference on Machine Learning, pp. 5546–5557. Cited by: §2.3, §5, §5, §5, §5, §6.1.
-  (2003) Edge-preserving and scale-dependent properties of total variation regularization. Inverse problems 19 (6), pp. S165. Cited by: §6.1.
-  (1943) On the stability of inverse problems. In Dokl. Akad. Nauk SSSR, Vol. 39, pp. 195–198. Cited by: §1.
-  (2019) Super-resolution via image-adapted denoising cnns: incorporating external and internal learning. IEEE Signal Processing Letters 26 (7), pp. 1080–1084. Cited by: §2.3.
-  (2013) Plug-and-play priors for model based reconstruction. In 2013 IEEE Global Conference on Signal and Information Processing, pp. 945–948. Cited by: §2.3, §6.1.
-  (2011) Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49 (4), pp. 1715–1735. Cited by: §4.1.
-  (2019) Computationally efficient deep neural network for computed tomography image reconstruction. Medical physics 46 (11), pp. 4763–4776. Cited by: §2.1.
-  (2018) fastMRI: an open dataset and benchmarks for accelerated MRI. External Links: Cited by: §6.1.
-  Learning deep cnn denoiser prior for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3929–3938. Cited by: §2.3.
8.1 Further Qualitative Results
In this section, we provide further visualizations of the reconstructions produced by Deep Equilibrium models and the corresponding Deep Unrolled approaches, beyond those shown in the main body. Figures 10, 9, and 8 are best viewed electronically, and contain the ground-truth images, the measurements (projected back to image space in the case of MRI and compressed sensing), and reconstructions by DU-Prox and DE-Prox.
We also visualize the intermediate iterates in the fixed-point iterations, to further demonstrate the convergence properties of DEMs for image reconstruction. We find that DEMs converge quickly to reasonable reconstructions, and maintain high-quality reconstructions after more than one hundred iterations.
8.2 Visualizing Iterates
In Figures 13, 12, and 11 we visualize the outputs of the $k$th iteration of the mapping $f_\theta$ in DE-Prox. Recall that during training, DE-Prox was limited to 50 forward iterations. We observe that across forward problems, the reconstructions continue converging after this number of iterations.
We illustrate 90 iterations for deblurring and MRI reconstructions, and illustrate 190 iterations for compressed sensing, since that problem requires additional iterations to converge.
For further illustration we also demonstrate the qualitative effects of running DU-Prox for more iterations than it was trained for.
8.3 Further Experimental Details
In this section we provide further details related to the experimental setup.
The input to every learned algorithm is a preconditioned measurement, where the preconditioning parameter is generally set equal to the noise level; a different value was used for the MRI reconstruction experiments. The masks used in the MRI reconstruction experiments are based on a Cartesian sampling pattern, as in the standard fastMRI setting. The center 4% of frequencies are fully sampled, and further frequencies are sampled according to a Gaussian distribution centered at zero frequency.
The compressed sensing design matrices have i.i.d. Gaussian entries, scaled so that the variance of each entry is inversely proportional to the number of measurements. The same design matrix is used for all learned methods.
Optimization algorithm parameters for RED, Plug-and-Play, and all Deep Equilibrium approaches are chosen via a search over a logarithmically-spaced grid with 20 elements in each dimension. All DU methods were trained for 10 iterations. All testing was done on an NVidia Titan X. All networks were trained on a cluster with a variety of computing resources (see https://slurm.ttic.edu/). Every experiment was run utilizing a single-GPU, single-CPU setup with less than 12 GB of GPU memory.