In many vision tasks, we formulate the inverse problem of recovering the latent image $x$ from the observed one $y$ as $y = Kx + n$, where $K$ denotes a degradation matrix related to the imaging/degradation system (such as a blur kernel, downsampling operator, or mask) and $n$ is an i.i.d. white Gaussian noise term with unknown standard deviation. In most real-world scenarios, solving such inverse problems is challenging and mathematically ill-posed; that is, the optimal solution either does not exist or is not unique. Over the past decades, numerous methods have been developed to address these low-level vision problems, such as optimizing designed priors [1, 2, 3, 4, 5] and deep learning-based approaches [6, 7, 8, 9, 10, 11, 12]. A classical strategy is Maximum A Posteriori (MAP) estimation, which combines a data likelihood with a prior distribution. Indeed, solving the MAP framework amounts to a minimization problem that seeks an optimal solution subject to a certain set of constraints. Following this perspective, the MAP model can be formulated as $\min_{x} f(x) + g(x)$,
where $f$ and $g$ typically capture the data-fitting loss and the regularization, respectively. In this work, we assume that the loss $f$ is smooth, while the regularization $g$ can be nonconvex and nonsmooth.
Owing to the ill-posed nature of most image processing tasks, it is necessary to design priors to obtain desirable solutions. For example, many image restoration tasks use a sparsity prior as the regularization term [1, 2]. In particular, Eq. (1) can be minimized by a broad class of general numerical optimization methods, among which the Proximal Gradient (PG), Half Quadratic Splitting (HQS), and Alternating Direction Method of Multipliers (ADMM) have proven to be among the most reliable. Over the past decades, many efforts have been devoted to these schemes. For example, by integrating Nesterov's accelerated gradient method into the fundamental PG scheme, the Accelerated Proximal Gradient (APG) was initially developed for convex models [1, 17]. Subsequently, other APG variants were derived for solving problem (1), including monotone APG (mAPG), inexact APG (niAPG), and momentum APG for nonconvex problems (APGnc). Strategies based on designed priors provide a mathematical understanding of algorithmic behavior with well-defined regularization properties. However, a flexible yet exact prior is challenging to construct and solve, while simple regularizers perform poorly compared with state-of-the-art methodologies in real-world applications, because they exploit neither the particular structure of the image processing task at hand nor the input data distribution. These limitations make it difficult to solve such problems in a purely optimization-based manner with designed priors.
Different from schemes that optimize designed priors, learning-based approaches learn mapping functions to deduce the desired high-quality image from the observed one. In recent years, various learning-based strategies [6, 9, 10] have been proposed to address practical image modeling problems. These discriminative learning-based methods combine classical numerical solvers with collected training data to obtain task-specific iterations. In a similar spirit, plug-in schemes, which replace the regularization term with a task-related operator, have recently been studied extensively with great empirical success in vision problems [21, 8, 22, 23, 24]. Indeed, these algorithms outperform some state-of-the-art methods in real-world applications. Unfortunately, such plug-in schemes treat the prior term as an implicit regularization, which may break the properties and structure of the objective in Eq. (1). Thus, the existing proofs only demonstrate that the iteration sequence converges to a fixed point, without establishing the relationship between this fixed point and the optimal solutions. By introducing a spectral normalization technique to constrain deep learning-based denoisers more accurately, a fixed-point theory has been established. However, this result is effective only under a strong convexity condition on the data-fitting term, which fails in plenty of vision problems; for example, when the data-fitting term is a least-squares loss with a matrix that does not have full column rank, strong convexity is unattainable. In contrast to these implicit plug-and-play methods, an explicit plug-in scheme, named Regularization by Denoising (RED), has been developed with an explicit Laplacian-based regularization functional. However, its regularization term is required to be symmetric, a condition not satisfied by many state-of-the-art denoisers, such as NLM, RF, and BM3D. Further work is discussed in [29, 30].
Optimality-condition-based approaches, such as [31, 32], have also been developed for solving Eq. (1) efficiently. However, their optimality condition relies on estimating the sub-differential, which is usually only available implicitly.
To address the above issues, we develop a Proximal Averaged Optimization (PAO) method for the challenging MAP-based nonconvex and nonsmooth problem described in Eq. (1). The proposed PAO jointly optimizes the objective and a feasibility constraint. Specifically, by enforcing task-driven latent feasibility for the MAP-type model, we develop a new perspective for investigating domain knowledge and data distributions for image modeling (see Fig. 1 for an illustration). For the Task-driven Feasibility (TF) type minimization problem, we establish a TF-PAO scheme. Considering the flexibility of PAO, we further embed a learnable form into the feasibility constraint and derive an LTF-PAO scheme; the learnable form incorporates designed/trained architectures that aim to find the task-related optimal solution. We rigorously prove that both the proposed PAO and its learning-based extension converge to a critical point of the original problem (1) under a commonly used monotone descent condition. We also demonstrate how to apply our paradigm to challenging real-world vision applications, such as image deblurring, inpainting, and rain streak removal. Different from manually designed priors solved by general numerical methods, the proposed PAO exploits the particular structure and the data distribution of the image processing task. Learning-based methodologies [21, 23, 24, 8] embed learnable network architectures into the MAP framework, but their inference process and exact behavior are hard to investigate. In comparison, the developed PAO enables the MAP framework to incorporate learnable structures without changing the properties of the objective. In summary, the contributions of this paper mainly include:
The developed PAO provides a novel perspective for introducing a task-driven feasibility module into nonconvex and nonsmooth MAP-based image modeling problems. By embedding additional learning-based deep architectures through the LTF module, PAO exploits the data distribution and the particular structure of the task.
The developed TF-PAO and LTF-PAO preserve the properties and structure of the objective. Specifically, the generated iteration sequences converge to a critical point of Eq. (1). Moreover, we prove in theory that the proposed frameworks enjoy sequence convergence.
II The Proposed Algorithm
In this section, by enforcing task-driven feasibility, we first reformulate the minimization problem (1) as a constraint-based scheme that can embed TF and LTF modules for vision tasks. Then, by incorporating TF and LTF, PAO is developed for the nonconvex modeling-based image restoration problem. The convergence behaviors of PAO are analyzed under some loose conditions on the objective function.
II-A Enforcing Task-driven Feasibility
To solve the MAP-based nonconvex minimization problem described in Eq. (1), we first reformulate it by enforcing a task-driven feasibility module. The original problem can then be reformulated as the following constraint-based form
In general, the feasible set is designed as a rough estimation, such as a bounded domain or a set specified by equality or inequality constraints, which is a standard way to characterize the solution space in image processing problems [33, 34]. In this work, the energy-based latent feasibility constraint is constructed by the TF module for image restoration problems,
where the first term is differentiable and the second is nonsmooth and nonconvex. We assume that both are proper and lower-semicontinuous.
Considering the importance of complex data distributions in real-world applications, we further introduce a Learning-based TF (LTF) module to incorporate designed/trained architectures into the optimization of Eq. (1). Specifically, the network-based building block at the k-th iteration is parameterized by the set of learnable parameters of the k-th training stage (please refer to the next section for the detailed structures used in real-world applications). We denote the temporary variable at the k-th iteration accordingly. By further considering the LTF form as the proximal approximation of Eq. (3) at the k-th iteration, the learning-based task-driven feasibility module can be
Indeed, the TF and LTF modules are task-related: they can either be selected to maintain certain characteristics of the problem (e.g., smoothness, edge enhancement, denoising, or sparsity) or be introduced to characterize the task in another domain. For example, the objective can be minimized in the image domain while the constraint module is constructed in the gradient domain.
II-B Proximal Averaged Optimization
In light of model (2), two task-driven feasibility type (TF and LTF) PAO schemes are developed in this section.
II-B1 PAO with Task-driven Feasibility
Specifically, for the objective minimization problem (1), we adopt the PG method to update the variable, which yields
where the involved parameter is the step size. As for Eq. (3), we can apply any first-order method to solve this subproblem, such as PG, APG, ADMM [15, 35], and HQS. By introducing a linear average of the two updates with a weighted parameter sequence,
our complete TF-PAO iterations are summarized in Alg. 1.
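The averaged update can be sketched concretely for an ℓ1-regularized least-squares instance of Eq. (1). This is a minimal sketch, assuming a fixed averaging weight `beta` (the paper uses a weight sequence) and a caller-supplied `feas_grad` standing in for a gradient step on the task-driven feasibility module; all names are our own.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def tf_pao(y, A, lam, feas_grad, steps=100, eta=None, beta=0.5):
    """Sketch of a TF-PAO-style loop: a proximal-gradient step on the
    objective, a gradient step on the feasibility module, then a
    linear average of the two candidates with weight beta."""
    if eta is None:
        eta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1/L
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        # proximal-gradient step on f(x)+g(x) = 0.5||Ax-y||^2 + lam||x||_1
        v = soft_threshold(x - eta * A.T @ (A @ x - y), eta * lam)
        # feasibility step on the task-driven module
        u = x - eta * feas_grad(x)
        x = beta * v + (1.0 - beta) * u          # averaging
    return x
```

With `feas_grad` returning zero, the loop reduces to a relaxed proximal-gradient method; a task-driven choice, such as the gradient of a smooth TV surrogate, recovers the TF flavor.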
Actually, the temporary variable may be a bad extrapolation that has the potential to fail. To address this issue, we introduce a correction step, named the Monotone Descent Updating Scheme (MDUS), in Alg. 2 to ensure the descent property. This technique is commonly found in first-order methods [17, 18, 36]. Indeed, it is not difficult to see that the descent property alone ensures neither a sufficient decrease of the objective nor convergence to a critical point in nonconvex programming; thanks to the property of the proximal gradient step, our algorithm obtains sufficient descent. We summarize the convergence behaviors of the proposed algorithm in Section III (the proofs of all theoretical results are given in the Appendix).
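The MDUS correction amounts to a one-line acceptance test. Below is a sketch under the assumption that the plain proximal-gradient point is available as a fallback; `mdus_step` and its argument names are our own, not the paper's.

```python
import numpy as np

def mdus_step(x_prev, x_cand, v_fallback, F):
    """Monotone Descent Updating Scheme (sketch): accept the averaged
    candidate only if it does not increase the objective F; otherwise
    fall back to the plain proximal-gradient point v_fallback, which
    is guaranteed to descend."""
    if F(x_cand) <= F(x_prev):
        return x_cand
    return v_fallback
```

The check guarantees a monotonically non-increasing objective along the accepted iterates, which is exactly what the sufficient-descent analysis in Section III relies on.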
II-B2 PAO with Learning-based Task-driven Feasibility
By embedding designed/trained architectures into the TF module, we develop the LTF-PAO to optimize the MAP-based model described in Eq. (1). As in Section II-B, any first-order method can be selected to solve the constraint-based subproblem. By introducing a relatively loose boundedness condition, we summarize the complete LTF-PAO scheme in Alg. 3. Indeed, the network output is used to approximate the task-driven module at the k-th iteration. We introduce a boundedness condition (Step 5 of Alg. 3) to control the iteration sequence; this prevents improperly designed/trained architectures from deflecting our iterative trajectory toward unwanted solutions. The monitor checks the boundedness of the candidate point. The convergence behaviors are summarized in the following section.
III Convergence Results
In this part, we discuss the convergence behaviors of the proposed TF-PAO and LTF-PAO algorithms. We refer readers to standard texts in variational analysis for the definitions of proper, lower-semicontinuous, and coercive functions and of the limiting subdifferential, which will be useful in the following analysis. Our convergence analyses rely on the following fairly loose assumptions.
The objective function in Eq. (1) is proper, lower-semicontinuous, and coercive. The function $f$ is convex and Lipschitz smooth, i.e., for all $x, y$ we have $\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|$, where $L$ is the Lipschitz constant of $\nabla f$.
From the MDUS correction scheme, we conclude that the sufficient descent property holds. We now show the second item. The sufficient descent inequality in the first item implies that the successive differences of the iterates vanish, i.e., there exist subsequences converging to the same point. Incorporating the lower-semicontinuity of the objective and the supremum principle, we obtain convergence of the function values. From the optimality condition of the proximal step, the limit point satisfies the first-order condition, which implies that it is a critical point. This completes the proof. ∎
[Fig. 2: visual comparison of (a) the input with (b) the model in Eq. (1), (c) the LTF in Eq. (4), and (d) the model in Eq. (2); PSNR / SSIM: 28.1463 / 0.8064, 28.6781 / 0.8436, and 29.2157 / 0.8447, respectively.]
Note that the proposed algorithm is a modification of the PG scheme. Under some mild conditions, e.g., the semi-algebraic property of the objective, the sequence convergence property still holds.
With the semi-algebraic property of the objective function (many functions arising in learning and vision areas, including norms, rational powers, and their finite sums or products, are semi-algebraic; please refer to the literature for the formal definition), we can further assert that the sequence in Alg. 1 has finite length, i.e., $\sum_{k=0}^{\infty} \|x^{k+1} - x^{k}\| < \infty$.
With the KL property and the definition of the subdifferential, we obtain an inequality involving the desingularizing function. From the concavity of the desingularizing function, we obtain
If we denote the corresponding quantities accordingly, the resulting inequality holds, which implies a bound on the successive step lengths. Subsequently, we have
Obviously, the above inequality implies the finite length of the sequence. In one case, the inequality holds directly; in the other, the proper, lower-semicontinuous, and coercive properties of the objective imply that the sequence is bounded. Then, with the update scheme and the finite length property, we have
This completes the proof. ∎
Indeed, the summability stated in Theorem 2 implies that the successive differences of the iterates vanish as the iteration proceeds. It follows that the iteration is a Cauchy sequence and hence a globally convergent sequence, which is also referred to as sequence convergence.
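For completeness, the standard step from finite length to sequence convergence can be written out explicitly:

```latex
\sum_{k=0}^{\infty}\big\|x^{k+1}-x^{k}\big\| < \infty
\quad\Longrightarrow\quad
\big\|x^{m}-x^{n}\big\|
\;\le\; \sum_{k=n}^{m-1}\big\|x^{k+1}-x^{k}\big\|
\;\longrightarrow\; 0 \quad (m > n \to \infty),
```

so the tail sums vanish, the iterates form a Cauchy sequence in a complete space, and the whole sequence converges to a single (critical) point.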
According to the convergence analysis in Section II-B, the objective function descends sufficiently in Alg. 3, and it is easy to check that the convergence results of Alg. 3 can be obtained in the same manner as in Theorem 1. The temporary iterate in Alg. 3 is bounded under the checking condition, which implies the boundedness of the sequence. Hence, the sequence convergence property is attained in Alg. 3.
We emphasize that, different from existing image modeling approaches, the proposed PAO allows us to introduce a task-driven feasibility module related to the application area when solving the optimization model in Eq. (1). This section first considers non-blind deconvolution and image inpainting; we take non-blind deconvolution as an illustrative example for establishing PAO. Then we extend PAO to the even more challenging single-image rain streak removal task.
IV-A Image Deconvolution
Here we consider a particular non-blind deconvolution problem, which aims to recover the latent image from a blurred observation. Formulating this problem with a sparse coding model, where the latent image is synthesized from sparse codes over a given dictionary (we follow standard settings in image processing and define the dictionary as the inverse wavelet transform, so the overall forward operator is the composition of the blur matrix and the inverse wavelet transform) and the noise is unknown, we derive a specific case of Eq. (1). Subsequently, it can be equivalently described in the following intuitive form, i.e.,
In the following, we give an example to illustrate PAO with two module settings, i.e., task-driven feasibility and learning-based task-driven feasibility.
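As a concrete illustration of the data-fitting side of this formulation, the gradient of the loss in the sparse-code domain can be sketched as below. This is a 1-D toy sketch: the circular blur (via FFT, matching the circular boundary conditions used later), the explicit dictionary matrix `D` standing in for the inverse wavelet transform, and all names are our assumptions, not the paper's exact implementation.

```python
import numpy as np

def blur(x, k):
    # Circular (periodic) convolution via FFT; k is zero-padded to len(x)
    K = np.fft.fft(np.pad(k, (0, len(x) - len(k))))
    return np.real(np.fft.ifft(np.fft.fft(x) * K))

def data_grad(alpha, y, k, D):
    """Gradient of f(alpha) = 0.5 * || K D alpha - y ||^2, where K is the
    circular blur with kernel k and D is a synthesis dictionary (a
    stand-in for the inverse wavelet transform)."""
    r = blur(D @ alpha, k) - y                    # residual K D alpha - y
    # adjoint of circular convolution: multiply by conj(K) in Fourier domain
    Kf = np.fft.fft(np.pad(k, (0, len(y) - len(k))))
    back = np.real(np.fft.ifft(np.conj(Kf) * np.fft.fft(r)))
    return D.T @ back
```

This gradient is exactly what the proximal-gradient update of the objective consumes; the FFT form avoids ever materializing the blur matrix.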
IV-A1 With Task-driven Feasibility
As for the TF module, we aim to introduce a relatively simple, task-related model that enforces our distribution assumptions on the latent image. We therefore introduce the widely used Total Variation (TV) model in the image domain, i.e.,
where the threshold parameter controls the strength of the regularization, and the two difference operators denote the gradients along the horizontal and vertical directions, respectively. As it is flexible to select the solver for Eq. (7), we apply the HQS scheme to perform the update in this paper. By introducing two auxiliary variables, the iteration is
where the two constants are penalty parameters. The auxiliary variables are updated by the proximal operator, which we omit here. Then, applying the proximal gradient approach to the remaining update yields the following form
where the composite step size is defined accordingly. The detailed updating steps then follow Alg. 1.
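A minimal 1-D analogue of this HQS-type TV update is sketched below; the circular difference operator, the penalty `rho`, and all names are illustrative assumptions rather than the paper's exact 2-D scheme.

```python
import numpy as np

def hqs_tv_1d(x, lam, rho=1.0, iters=50):
    """HQS sketch for the TV feasibility subproblem (1-D, circular
    differences): min_u 0.5||u - x||^2 + lam ||D u||_1 via the split
    w = D u, alternating a soft-threshold step for w and a linear
    solve for u."""
    n = len(x)
    # circular forward-difference matrix D
    D = np.eye(n, k=1) - np.eye(n)
    D[-1, 0] = 1.0
    u = x.copy()
    M = np.eye(n) + rho * D.T @ D          # normal equations of the u-update
    for _ in range(iters):
        # w-update: proximal (soft-threshold) step on the split variable
        w = np.sign(D @ u) * np.maximum(np.abs(D @ u) - lam / rho, 0.0)
        # u-update: quadratic subproblem, solved exactly
        u = np.linalg.solve(M, x + rho * D.T @ w)
    return u
```

In 2-D, the same alternation applies with one auxiliary variable per gradient direction, and the linear solve is typically done in the Fourier domain.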
IV-A2 With Learning-based Task-driven Feasibility
Specifically, in this work, the network is designed with a residual formulation. The learning-based iteration step can be directly adopted, with the set of learnable parameters of the k-th training stage parameterizing the basic network unit. Our network has nineteen layers: seven convolution layers, six ReLU layers, five batch normalization layers, and one loss layer. Detailed information about the network can be found in the experimental results section. Notice that standard training strategies can be directly adopted to optimize the parameters of our basic architecture; if necessary, one may further jointly fine-tune the parameters of the whole network after the design phase. The learning-based scheme is then
Hence, following the iteration form above and the learning-based scheme, we obtain
By considering the latent image as the uniform argument, we obtain a PAO that integrates both synthesis and analysis mechanisms to address different vision applications, including deblurring, inpainting, and rain streak removal. Here the forward matrix formulates the observation model for the particular image processing paradigm. Possible choices include an identity operator for denoising, convolution operators for deblurring, filtered subsampling operators for super-resolution, a Fourier-domain subsampling operator for magnetic resonance imaging (MRI) reconstruction, or a mask for image inpainting. We incorporate experimentally designed and trained network architectures into PAO for solving these problems. In summary, the proposed PAO can indeed integrate advantages from different kinds of domain knowledge.
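The menu of forward operators listed above can be sketched as simple 1-D callables; the factory name `make_forward` and the task keys are our own illustrative choices, not an API from the paper.

```python
import numpy as np

def make_forward(task, n, kernel=None, mask=None, stride=2):
    """Return the forward observation operator for a given task
    (identity for denoising, circular blur for deblurring, blur followed
    by subsampling for super-resolution, a binary mask for inpainting)."""
    if task == "denoising":
        return lambda x: x
    if task == "deblurring":
        K = np.fft.fft(np.pad(kernel, (0, n - len(kernel))))
        return lambda x: np.real(np.fft.ifft(np.fft.fft(x) * K))
    if task == "superresolution":
        K = np.fft.fft(np.pad(kernel, (0, n - len(kernel))))
        return lambda x: np.real(np.fft.ifft(np.fft.fft(x) * K))[::stride]
    if task == "inpainting":
        return lambda x: mask * x
    raise ValueError(f"unknown task: {task}")
```

Swapping the operator (and its adjoint) is the only change needed to retarget the data-fitting term of Eq. (1) to a new application.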
IV-B Rain Streak Removal
This subsection focuses on the single-image rain streak removal task, a challenging real-world computer vision problem. A rainy image is often modeled as a linear combination of the rain-free background and the rain streak layer. In terms of the optimization model, rather than making efforts to exploit complex priors, we consider the fundamental energy-based sparsity of the observation in a certain transform domain, as
where the weighting parameters are positive constants, and the indicator function takes the value zero on the constraint set and infinity otherwise. The sparse codes of the background and rain layers serve as auxiliary variables for the subproblem. As for the TF module stated in Eq. (2), we consider the general TV regularization described in the following form
In this part, we first introduce two residual networks for the background and rain streak layers. For the background layer network, we follow prior work to build a series of denoising CNNs that extract the natural image well. For the rain streak layer, the network learns rain streak behavior from rainy images by training on rainy image/synthetic rain layer pairs as the degraded/clean image pairs. We update the two layer variables synchronously,
where the auxiliary variables are updated by the proximal operator. Similarly, we obtain
Then, following the steps described in LTF-PAO, we obtain the updating scheme.
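A heavily simplified sketch of this alternating background/rain update is given below; plain soft-thresholding stands in for the transform-domain codes and the network-based LTF steps of the actual scheme, and all parameter values and names are illustrative assumptions.

```python
import numpy as np

def soft(v, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def separate_layers(O, lam_b=0.1, lam_r=0.3, iters=50, eta=0.5):
    """Sketch of alternating updates for the additive model O = B + R:
    each layer takes a gradient step on the fit 0.5||B + R - O||^2
    followed by a sparsity prox, with rain constrained non-negative."""
    B = O.copy()
    R = np.zeros_like(O)
    for _ in range(iters):
        resid = B + R - O
        B = soft(B - eta * resid, eta * lam_b)   # background update
        R = soft(R - eta * resid, eta * lam_r)   # rain update
        R = np.maximum(R, 0.0)                   # rain streaks are non-negative
    return B, R
```

The stronger threshold on `R` encodes the assumption that rain streaks are sparser than background detail; in the full LTF-PAO, the two prox steps are replaced by the trained background and rain networks.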
[Figure: PSNR / SSIM of PG, mAPG, niAPG, APGnc, TF-PAO (Ours), and LTF-PAO (Ours): 30.8229 / 0.9108, 38.2246 / 0.9247, 38.2269 / 0.9251, 38.0961 / 0.9324, 38.3633 / 0.9581, and 40.4552 / 0.9802, respectively.]
V Experimental Results
In this section, we first verify our theoretical results by investigating the iteration behaviors of the proposed TF-PAO and LTF-PAO on the standard deconvolution formulation in Eq. (6). We then evaluate the state-of-the-art performance of LTF-PAO against both general and learning-based methods on different vision applications. We conducted these experiments on a computer with an Intel Core i7-7700 CPU (3.6 GHz), 32 GB RAM, and an NVIDIA GeForce GTX 1060 6GB GPU.
V-A Theoretical Verifications
To verify our theoretical investigations, we performed experiments on non-blind deconvolution. Notice that this problem can be directly addressed by our TF-PAO and LTF-PAO.
V-A1 Modularization Settings
We first provide a comparison of the optimization models in the image deblurring application; the corresponding results are shown in Fig. 2. Observe that the proposed PAO with the LTF module performs better than both the modeling scheme in Eq. (1) and the LTF in Eq. (4). This illustrates the effectiveness of the proposed PAO scheme.
We then analyze the performance and flexibility of PAO with different operator settings; the corresponding PSNR (peak signal-to-noise ratio) results under a fixed noise level are plotted in Fig. 3. For solving the subproblem (7), four different first-order methods, PG, APG, HQS, and ADMM, are considered, as mentioned in Section II-B. The PSNR scores of TF-PAO under the four strategies are plotted in Fig. 3 (a). Observe that the choice of solver has only a slight influence on the performance of our TF-PAO scheme. We adopt HQS to obtain the iteration steps in both TF-PAO and LTF-PAO. Note that, to provide a fair comparison, we keep the parameters the same under the four settings mentioned above. Hereafter, we use the relative error as the stopping criterion.
To analyze the effects of the network-based block, four different task-specific structures, i.e., TV, RF, CNNs, and BM3D, are adopted under the LTF-PAO scheme. For the CNN architecture, we introduce a residual network consisting of nineteen layers: seven dilated convolutions, six ReLU operations (plugged between each pair of convolution layers), and five batch normalizations (plugged between convolution and ReLU, except after the first convolution layer). In the training stage, we randomly select 800 natural images from the ImageNet database and crop them into patches. Fig. 3 (b) plots the PSNR of the four structures. As can be seen, LTF-PAO performs better and faster with the CNN structure than with the others; hence, we set the network-based block as CNNs hereafter.
We further compared our LTF-PAO with traditional methods under three different Gaussian noise levels on the collected image set; the corresponding quantitative results (PSNR and SSIM metrics) are shown in Tab. I. It can be seen that our LTF-PAO outperforms classical numerical solvers by a large margin.
V-A2 Convergence of PAO
Next, we illustrate the convergence behaviors of the PAO schemes. To evaluate the variation trends in PAO, the main variable and the intermediate variables of TF-PAO and LTF-PAO are plotted in Fig. 4 (a) and (b). The iteration behaviors in Fig. 4 (a) verify the boundedness of the corresponding sequences. Similarly, we plot the convergence curves of LTF-PAO in Fig. 4 (b). To illustrate the iteration steps of LTF-PAO, we show the selection condition in Fig. 4 (c), which implies the boundedness of the sequence.
Since the proposed TF-PAO and LTF-PAO are proximal-based methods, it is necessary to compare them with existing proximal-based first-order approaches, such as the classical proximal gradient (PG), monotone APG (mAPG), inexact APG (niAPG), and momentum APG for nonconvex problems (APGnc), under an additional 1‰ noise level. The comparison results are shown in Fig. 5 in terms of the log-transformed relative error, the reconstruction error with respect to the ground truth, the functional value, and PSNR, all under the same stopping criterion. Obviously, the proposed TF-PAO converges faster than the other PG variants, and LTF-PAO performs the best in both PSNR scores and iteration counts. The corresponding visual results are shown in Fig. 6 with PSNR and SSIM (structural similarity) scores; observe that the proposed LTF-PAO removes more noise while preserving details.
V-B State-of-the-art Comparisons
We then evaluated our LTF-PAO on a variety of low-level vision applications, including image deblurring, image inpainting, and rain streak removal.
V-B1 Image Deblurring
In this task, the matrix stated in the application part is the blur kernel and the observation is the blurry image. As usual, the blurry images are synthesized by applying a blur kernel and adding Gaussian noise; circular boundary conditions are used when performing the convolution. We report the results of our LTF-PAO on Sun et al.'s challenging benchmark and Levin et al.'s dataset, together with other state-of-the-art methods, including traditional methods (e.g., IDDBM3D, TV), parameter learning-based methods (e.g., EPLL, CSF), and network-based methods (e.g., MLP, IRCNN, FDN). As shown in Tab. II, our method obtains the best quantitative performance (PSNR and SSIM metrics) on both datasets. Moreover, we illustrate visual comparisons on real image deblurring with an unknown blur kernel roughly estimated by Pan et al.'s method. As shown in Fig. 7, our method preserves more details.
V-B2 Image Inpainting
In the image inpainting task, the matrix is a mask and the observation is an image with missing pixels; the task aims to recover these missing pixels. We compared our LTF-PAO with TV, FOE, VNL, ISDSB, WNNM, and IRCNN on this problem. We normalized the pixel values and generated random masks with different proportions of missing pixels on the CBSD68 dataset. Moreover, we collected 12 different text masks to further evaluate the proposed method. Tab. III presents the PSNR and SSIM comparisons under the different masks. Observe that our method performs better than the state-of-the-art approaches regardless of the proportion of masked pixels. Furthermore, comparing the visual performance of LTF-PAO with that of the other methods, we present the missing-pixel comparisons in Fig. 8 for the top five scoring methods (TV, FoE, ISDSB, WNNM, and IRCNN). Our approach successfully recovers the image with better visual quality, especially in the zoomed-in regions with rich details.
V-B3 Single-image Rain Streak Removal
In this part, we evaluated our method on the rain streak removal task, in comparison with state-of-the-art methods including GMM, DN, DDN, JCAS, JORDER, UGSM, and DID-MDN. All comparisons shown in this paper are conducted under the same hardware configuration.
Tab. IV reports the quantitative scores on three different datasets: (1) Test1 includes 12 synthesized rain images with only one type of rain streak rendering; (2) Rain100H is collected from BSD200 and synthesized with five streak directions; (3) Test2 consists of 7 images using photorealistic rendering of rain streaks. We adopt the training set provided by Yang et al. for training. According to the quantitative results in Tab. IV, we provide visual comparisons for the five methods with relatively high PSNR and SSIM scores (GMM, DDN, UGSM, JORDER, DID-MDN) in Fig. 9. It can be observed that the proposed LTF-PAO preserves more details with very few rain streaks left, in both synthesized and real-world rainy images.
[Figure: DDN (27.79 / 0.8371) vs. Ours (28.48 / 0.8531).]
In this paper, we developed a Proximal Averaged Optimization (PAO) method for the challenging nonconvex MAP-based model in Eq. (1). By introducing two constraint schemes, i.e., the task-driven feasibility and learning-based task-driven feasibility modules, TF-PAO and LTF-PAO were established, respectively. We then proved the convergence of PAO under some relatively loose assumptions. Extensive experiments on several challenging tasks showed that our method achieves better visual performance and quantitative scores than other state-of-the-art methods.
-  A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM journal on imaging sciences, vol. 2, no. 1, pp. 183–202, 2009.
-  L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D: nonlinear phenomena, vol. 60, no. 1-4, pp. 259–268, 1992.
-  L. Xu, Q. Yan, Y. Xia, and J. Jia, “Structure extraction from texture via relative total variation,” ACM Transactions on Graphics (TOG), vol. 31, no. 6, p. 139, 2012.
-  J. Cheng, Y. Gao, B. Guo, and W. Zuo, “Image restoration using spatially variant hyper-laplacian prior,” Signal, Image and Video Processing, vol. 13, no. 1, pp. 155–162, 2019.
-  D. Krishnan and R. Fergus, “Fast image deconvolution using hyper-laplacian priors,” in NeurIPS, 2009, pp. 1033–1041.
-  R. Liu, G. Zhong, J. Cao, Z. Lin, S. Shan, and Z. Luo, “Learning to diffuse: A new perspective to design pdes for visual analysis,” IEEE TPAMI, vol. 38, no. 12, pp. 2457–2471, 2016.
-  Y. Romano, M. Elad, and P. Milanfar, “The little engine that could: Regularization by denoising (red),” SIAM Journal on Imaging Sciences, vol. 10, no. 4, pp. 1804–1844, 2017.
-  K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep cnn denoiser prior for image restoration,” in IEEE CVPR, 2017, pp. 3929–3938.
-  Y. Chen and T. Pock, “Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration,” IEEE TPAMI, vol. 39, no. 6, 2017.
-  K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” in ICML. Omnipress, 2010, pp. 399–406.
-  P. Mu, J. Chen, R. Liu, X. Fan, and Z. Luo, “Learning bilevel layer priors for single image rain streaks removal,” IEEE Signal Processing Letters, vol. 26, no. 2, pp. 307–311, 2018.
-  R. Liu, Z. Jiang, X. Fan, and Z. Luo, “Knowledge-driven deep unrolling for robust image layer separation,” IEEE TNNLS, 2019.
-  D. P. Bertsekas and A. Scientific, Convex optimization algorithms. Athena Scientific Belmont, 2015.
-  M. Nikolova and M. K. Ng, “Analysis of half-quadratic minimization methods for signal and image recovery,” SIAM Journal on Scientific computing, vol. 27, no. 3, pp. 937–966, 2005.
-  S. Boyd, “Alternating direction method of multipliers,” in NeurIPS, 2011.
-  Y. E. Nesterov, “A method for solving the convex programming problem with convergence rate O(1/k^2),” in Dokl. Akad. Nauk SSSR, vol. 269, 1983, pp. 543–547.
-  A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE TIP, vol. 18, no. 11, 2009.
-  H. Li and Z. Lin, “Accelerated proximal gradient methods for nonconvex programming,” in NeurIPS, 2015.
-  Q. Yao, J. T. Kwok, F. Gao, W. Chen, and T.-Y. Liu, “Efficient inexact proximal gradient algorithm for nonconvex problems,” in IJCAI, 2017.
-  Q. Li, Y. Zhou, Y. Liang, and P. K. Varshney, “Convergence analysis of proximal gradient with momentum for nonconvex optimization,” in ICML, 2017.
-  S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in 2013 Global Conference on Signal and Information Processing. IEEE, 2013, pp. 945–948.
-  X. Wang and S. H. Chan, “Parameter-free plug-and-play admm for image restoration,” in IEEE ICASSP, 2017.
-  S. H. Chan, X. Wang, and O. A. Elgendy, “Plug-and-play admm for image restoration: Fixed-point convergence and applications,” IEEE Transactions on Computational Imaging, vol. 3, no. 1, pp. 84–98, 2017.
-  K. Zhang, W. Zuo, and L. Zhang, “Deep plug-and-play super-resolution for arbitrary blur kernels,” in IEEE CVPR, 2019, pp. 1671–1681.
-  E. K. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, and W. Yin, “Plug-and-play methods provably converge with properly trained denoisers,” arXiv preprint arXiv:1905.05406, 2019.
-  K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE TIP, vol. 16, no. 8, pp. 2080–2095, 2007.
-  M. Unser, A. Aldroubi, and M. Eden, “Recursive regularization filters: design, properties, and applications,” IEEE TPAMI, no. 3, pp. 272–277, 1991.
-  K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE TIP, vol. 16, no. 8, pp. 2080–2095, 2007.
-  T. Hong, Y. Romano, and M. Elad, “Acceleration of red via vector extrapolation,” Journal of Visual Communication and Image Representation, vol. 63, p. 102575, 2019.
-  E. T. Reehorst and P. Schniter, “Regularization by denoising: Clarifications and new interpretations,” IEEE Transactions on Computational Imaging, vol. 5, no. 1, pp. 52–67, 2019.
-  R. Liu, S. Cheng, L. Ma, X. Fan, and Z. Luo, “Deep proximal unrolling: Algorithmic framework, convergence analysis and applications,” IEEE TIP, 2019.
-  R. Liu, S. Cheng, Y. He, X. Fan, Z. Lin, and Z. Luo, “On the convergence of learning-based iterative methods for nonconvex inverse problems,” IEEE TPAMI, 2019.
-  C. Bao, H. Ji, Y. Quan, and Z. Shen, “L0 norm based dictionary learning by proximal methods with global convergence,” in IEEE CVPR, 2014, pp. 3858–3865.
-  Y. Li, R. T. Tan, X. Guo, J. Lu, and M. S. Brown, “Rain streak removal using layer priors,” in IEEE CVPR, 2016, pp. 2736–2744.
-  S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
-  P. Gong, C. Zhang, Z. Lu, J. Huang, and J. Ye, “A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems,” in ICML, 2013, pp. 37–45.
-  R. T. Rockafellar and R. J.-B. Wets, Variational analysis. Springer Science & Business Media, 2009, vol. 317.
-  J. Bolte, S. Sabach, and M. Teboulle, “Proximal alternating linearized minimization for nonconvex and nonsmooth problems,” Mathematical Programming, vol. 146, no. 1-2, pp. 459–494, 2014.
-  S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 460–489, 2005.
-  U. Schmidt and S. Roth, “Shrinkage fields for effective image restoration,” in IEEE CVPR, 2014.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet large scale visual recognition challenge,” IJCV, vol. 115, no. 3, pp. 211–252, 2015.
-  K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE TIP, vol. 26, no. 7, 2017.
-  L. Sun, S. Cho, J. Wang, and J. Hays, “Edge-based blur kernel estimation using patch priors,” in ICCP, 2013, pp. 1–8.
-  A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in IEEE CVPR, 2009, pp. 1964–1971.
-  A. Danielyan, V. Katkovnik, and K. Egiazarian, “Bm3d frames and variational image deblurring,” IEEE TIP, vol. 21, no. 4, 2012.
-  Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
-  D. Zoran and Y. Weiss, “From learning models of natural image patches to whole image restoration,” in IEEE ICCV, 2011, pp. 479–486.
-  C. J. Schuler, H. Christopher Burger, S. Harmeling, and B. Schölkopf, “A machine learning approach for non-blind image deconvolution,” in IEEE CVPR, 2013.
-  J. Kruse, C. Rother, and U. Schmidt, “Learning to push the limits of efficient fft-based image deconvolution,” in IEEE ICCV, 2017.
-  R. Köhler, M. Hirsch, B. Mohler, B. Schölkopf, and S. Harmeling, “Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database,” in ECCV, 2012.
-  J. Pan, Z. Lin, Z. Su, and M.-H. Yang, “Robust kernel estimation with outliers handling for image deblurring,” in IEEE CVPR, 2016.
-  S. Roth and M. J. Black, “Fields of experts,” IJCV, vol. 82, no. 2, 2009.
-  P. Arias, G. Facciolo, V. Caselles, and G. Sapiro, “A variational framework for exemplar-based image inpainting,” IJCV, vol. 93, no. 3, pp. 319–347, 2011.
-  L. He and Y. Wang, “Iterative support detection-based split bregman method for wavelet frame-based image inpainting,” IEEE TIP, vol. 23, no. 12, 2014.
-  S. Gu, Q. Xie, D. Meng, W. Zuo, X. Feng, and L. Zhang, “Weighted nuclear norm minimization and its applications to low level vision,” IJCV, vol. 121, no. 2, pp. 183–208, 2017.
-  S. Tariq, “Rain,” NVIDIA Whitepaper, 2007.
-  D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in IEEE ICCV, 2001, pp. 416–423.
-  X. Fu, J. Huang, X. Ding, Y. Liao, and J. Paisley, “Clearing the skies: A deep network architecture for single-image rain removal,” IEEE TIP, vol. 26, no. 6, 2017.
-  X. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding, and J. Paisley, “Removing rain from single images via a deep detail network,” in IEEE CVPR, 2017.
-  S. Gu, D. Meng, W. Zuo, and L. Zhang, “Joint convolutional analysis and synthesis sparse representation for single image layer separation,” in IEEE ICCV, 2017, pp. 1717–1725.
-  W. Yang, R. T. Tan, J. Feng, J. Liu, Z. Guo, and S. Yan, “Deep joint rain detection and removal from a single image,” in IEEE CVPR, 2017.
-  T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, and Y. Wang, “Fastderain: A novel video rain streak removal method using directional gradient priors,” IEEE TIP, 2018.
-  H. Zhang and V. M. Patel, “Density-aware single image de-raining using a multi-stream dense network,” in IEEE CVPR, 2018, pp. 695–704.