I. Introduction
In many vision tasks we formulate the inverse problem of finding the latent image $\mathbf{x}$ from the observed one $\mathbf{y}$ as $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{n}$, where $\mathbf{A}$ denotes some degradation matrix related to an imaging/degradation system (such as a blur kernel, downsampling operator or mask, etc.) and $\mathbf{n}$ is an i.i.d. white Gaussian noise term with unknown standard deviation. Typically, in most real-world scenarios, solving such inverse problems is challenging and mathematically ill-posed; that is, the optimal solution either does not exist or is not unique. Over the past decades, numerous methods have been developed to address these low-level vision problems, such as optimizing designed priors [1, 2, 3, 4, 5] and deep learning-based approaches [6, 7, 8, 9, 10, 11, 12]. These inverse problems are often formulated as Maximum A Posteriori (MAP) estimation with some conditional probability $p(\mathbf{y}|\mathbf{x})$ and prior distribution $p(\mathbf{x})$, i.e., $\max_{\mathbf{x}} p(\mathbf{x}|\mathbf{y}) \propto p(\mathbf{y}|\mathbf{x})\,p(\mathbf{x})$. Indeed, solving the MAP framework is equivalent to dealing with a minimization problem that seeks an optimal solution subject to a certain set of constraints such that the objective is best met. Following this perspective, the MAP model can be formulated as
$$\min_{\mathbf{x}}\ \Psi(\mathbf{x}) := f(\mathbf{x}) + g(\mathbf{x}), \qquad (1)$$
where the functions $f$ and $g$ typically capture the data-fitting loss and the regularization, respectively (i.e., $f(\mathbf{x}) = -\log p(\mathbf{y}|\mathbf{x})$ and $g(\mathbf{x}) = -\log p(\mathbf{x})$, up to additive constants). In this work, we assume that the loss $f$ is smooth, while the regularization $g$ can be nonconvex and nonsmooth.
Owing to the ill-posed nature of most image processing tasks, it is necessary to design priors for obtaining desired solutions. For example, many image restoration tasks utilize a sparsity prior as the regularization term [1, 2]. In particular, Eq. (1) can be minimized by a broad class of general numerical optimization methods, among which Proximal Gradient (PG) [13], Half Quadratic Splitting (HQS) [14] and the Alternating Direction Method of Multipliers (ADMM) [15] have proven to be the most reliable. Over the past decades, many efforts have been devoted to these schemes. For example, by integrating Nesterov's accelerated gradient method [16] into the fundamental PG scheme, APG was initially developed for convex models [1, 17]. Subsequently, other typical APGs were derived for solving problem (1), including monotone APG (mAPG) [18], inexact APG (niAPG) [19] and momentum APG for nonconvex problems (APGnc) [20], etc. Strategies that optimize designed priors provide a mathematical understanding of their behaviors with well-defined regularization properties. However, a flexible and exact prior is challenging to construct and solve, and simple regularizers perform poorly compared with state-of-the-art methodologies in real-world applications. This is because these methods exploit neither the particular structures of the image processing tasks at hand nor the input data distributions. These limits make it difficult to solve the problems in a purely optimization-based manner with designed priors.
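To make the basic PG scheme concrete, the following minimal Python sketch applies it to an $\ell_1$-regularized least-squares instance of Eq. (1); the function names and parameter values here are illustrative, not quantities specified in this paper.

    import numpy as np

    def soft_threshold(z, t):
        # Proximal operator of t * ||.||_1 (soft-thresholding).
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def proximal_gradient(A, y, lam=0.1, iters=200):
        """PG for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
        # Step size 1/L, where L = ||A||_2^2 is the Lipschitz
        # constant of the gradient of the data-fitting term.
        step = 1.0 / (np.linalg.norm(A, 2) ** 2)
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - y)                         # gradient of f
            x = soft_threshold(x - step * grad, step * lam)  # prox of g
        return x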
Different from the optimizing-designed-priors schemes, learning-based approaches learn mapping functions to deduce the desirable high-quality image from the observed one. In recent years, various learning-based strategies [6, 9, 10] have been proposed to address practical image modeling problems. These discriminative learning-based methods combine classical numerical solvers with collected training data to obtain task-specific iterations. In a similar spirit, plug-in schemes, which replace the regularization term with a task-related operator, have recently been studied extensively with great empirical success for vision problems [21, 8, 22, 23, 24]. Indeed, these algorithms perform better than some state-of-the-art methods in real-world applications. Unfortunately, these plug-in schemes treat the prior term as an implicit regularization, which may break the properties and structures of the objective in Eq. (1). Thus, the existing proofs only demonstrate that the iteration sequences converge to a fixed point, without knowing the relationship between it and the optimal solutions. By introducing a spectral normalization technique to more accurately constrain deep learning-based denoisers, a fixed-point theory is established in [25]. However, this theoretical result is effective only under a strong convexity condition on the data-fitting term, which is unsatisfied in plenty of vision problems; for example, when the data-fitting term is set as $f(\mathbf{x}) = \frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{y}\|^{2}$ with a matrix $\mathbf{A}$ that is not of full column rank, strong convexity is unattainable. In contrast to these implicit plug-and-play methods, an explicit plug-in scheme named Regularization by Denoising (RED), with an explicit Laplacian-based regularization functional, is developed in [7]. However, the plugged denoising operator is required to be symmetric, which is unsatisfied by many state-of-the-art methods, such as NLM [26], RF [27] and BM3D [28], etc. Further work is discussed in [29, 30]. Optimality-condition-based approaches, such as [31, 32], have been developed to solve Eq. (1) efficiently. However, the implicit condition relies on estimating the subdifferential, which is usually attained only implicitly.
To address the above issues, we develop a Proximal Averaged Optimization (PAO) for the challenging MAP-based nonconvex and nonsmooth problem described in Eq. (1). The proposed PAO is a joint process of optimizing the objective and a feasibility constraint. Specifically, by enforcing task-driven latent feasibility for the MAP-type model, we develop a new perspective to investigate domain knowledge and data distributions for image modeling (see Fig. 1 for an illustration). For the Task-driven Feasibility (TF) type minimization problem, we establish a TF-PAO scheme. Considering the flexibility of PAO, we further embed a learnable form into the feasibility constraint and derive an LTF-PAO scheme. The learnable form incorporates designed/trained architectures that aim to find the task-related optimal solution. We rigorously prove that both the proposed PAO and its learning-based extension converge to a critical point of the original problem (1) under a commonly used monotone descent condition. We also demonstrate how to apply our paradigm to challenging real-world vision applications, such as image deblurring, inpainting and rain streak removal. Different from manually designed priors solved by general numerical methods, the proposed PAO exploits the particular structures and the data distributions of image processing. Learning-based methodologies [21, 23, 24, 8] embed learnable network architectures into the MAP framework, whereas their inference processes and exact behaviors are actually hard to investigate. In comparison, the developed PAO enables the MAP framework to incorporate learnable structures without changing the properties of the objective. In summary, the contributions of this paper mainly include:
• The developed PAO provides a novel perspective to introduce a task-driven feasibility module for nonconvex and nonsmooth MAP-based image modeling problems. By embedding additional learning-based deep architectures, PAO exploits the data distribution and the particular task structure through the LTF module.
• The developed TF-PAO and LTF-PAO keep the properties and structures of the objective. Specifically, the generated iteration sequences converge to one of the critical points of Eq. (1). Moreover, we prove in theory that the proposed frameworks enjoy sequence convergence.
• Further, we also consider PAO as a flexible ensemble framework to solve the optimization model described in Eq. (1) when addressing different real-world computer vision tasks. Extensive experiments show the superiority of our PAO method on the tested problems.
II. The Proposed Algorithm
In this section, by enforcing task-driven feasibility, we first reformulate the minimization problem (1) into a constraint-based scheme which can embed TF and LTF modules for vision tasks. Then, by incorporating TF and LTF, PAO is developed for the nonconvex modeling-based image restoration problem. The convergence behaviors of PAO are then analyzed under some loose conditions on the objective function.
II-A. Enforcing Task-driven Feasibility
To solve the MAP-based nonconvex minimization problem described in Eq. (1), we first reformulate it by enforcing a task-driven feasibility module. The original problem can then be reformulated in the following constraint-based form:
$$\min_{\mathbf{x}}\ f(\mathbf{x}) + g(\mathbf{x}), \quad \mathrm{s.t.}\ \mathbf{x} \in \mathcal{X}. \qquad (2)$$
In general, $\mathcal{X}$ is designed as a rough estimation, such as a bounded domain or a set estimated by some equality or inequality constraints, which is commonly used to characterize the solution space in image processing problems [33, 34]. In this work, the energy-based latent feasibility constraint for image restoration problems can be constructed by the TF module, named $\mathcal{X}_{\mathrm{TF}}$, as
$$\mathcal{X}_{\mathrm{TF}} := \arg\min_{\mathbf{u}}\ h(\mathbf{u}) + r(\mathbf{u}), \qquad (3)$$
where $h$ is differentiable and $r$ is nonsmooth and nonconvex. We assume that $h$ and $r$ are proper and lower-semicontinuous.
Considering the important feature of complex data distributions in real-world applications, we further introduce a Learning-based TF (LTF) module that incorporates designed/trained architectures to optimize Eq. (1). Specifically, the network-based building block at the $k$-th iteration can be denoted as $\mathcal{N}^{k}(\cdot;\vartheta^{k})$, where $\vartheta^{k}$ is the set of learnable parameters at the $k$-th training stage (please refer to the next section for the detailed structures of $\mathcal{N}^{k}$ in real-world applications). We denote the temporary variable at the $k$-th iteration as $\mathbf{v}^{k}$. By further considering the LTF form as the proximal approximation of Eq. (3) with parameter $\mu$ at the $k$-th iteration, the learning-based task-driven feasibility module can be written as
$$\mathcal{X}_{\mathrm{LTF}} := \arg\min_{\mathbf{u}}\ h(\mathbf{u}) + r(\mathbf{u}) + \frac{\mu}{2}\left\|\mathbf{u} - \mathcal{N}^{k}(\mathbf{v}^{k};\vartheta^{k})\right\|^{2}. \qquad (4)$$
Indeed, the modules $\mathcal{X}_{\mathrm{TF}}$ and $\mathcal{X}_{\mathrm{LTF}}$ are task-related: they can either be selected to maintain some characteristics of the problem, such as smoothness, edge enhancement, denoising or sparsity, or be introduced to characterize the task in another domain. For example, the argument of the minimized objective function and the constraint module can be constructed in the image domain and the gradient domain, respectively.
II-B. Proximal Averaged Optimization
In light of the model (2), two task-driven feasibility type (TF and LTF) PAO schemes are developed in this section.
II-B.1 PAO with Task-driven Feasibility
Specifically, for the objective minimization problem (1), we adopt the PG method to update the variable $\mathbf{x}$, which yields
$$\mathbf{v}^{k} = \mathrm{prox}_{\gamma g}\left(\mathbf{x}^{k} - \gamma\nabla f(\mathbf{x}^{k})\right), \qquad (5)$$
where $\mathrm{prox}_{\gamma g}$ denotes the proximal operator of $g$ and $\gamma$ is the step size. As for Eq. (3), we may actually perform any first-order method to solve this subproblem, such as PG [1], APG [18], ADMM [15, 35] and HQS [14], etc. By introducing a linear averaging form between $\mathbf{v}^{k}$ and the feasibility update $\mathbf{u}^{k}$ with a weighted parameter sequence $\{\alpha^{k}\}$, i.e., $\mathbf{x}^{k+1} = (1-\alpha^{k})\mathbf{v}^{k} + \alpha^{k}\mathbf{u}^{k}$, our complete TF-PAO iterations are summarized in Alg. 1.
Actually, the temporary variable $\mathbf{x}^{k+1}$ may be a poor extrapolation which has the potential to fail. To address this issue, we introduce a correction step, named the Monotone Descent Updating Scheme (MDUS), in Alg. 2 to ensure the descent property, i.e., $\Psi(\mathbf{x}^{k+1}) \le \Psi(\mathbf{x}^{k})$. This technique is commonly found in first-order methods [17, 18, 36]. Indeed, it is not difficult to see that the descent property alone ensures neither the sufficient decrease of $\Psi$ nor convergence to a critical point in nonconvex programming. Under the property of the proximal gradient, our algorithm obtains sufficient descent. We summarize the convergence behaviors of the proposed algorithm in Section III (we move the proofs of all our theoretical results to the appendix materials).
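For readers who prefer code, a minimal sketch of one possible realization of the TF-PAO loop with the MDUS correction is given below. Since Alg. 1 and Alg. 2 are not reproduced in this text, the callables and the fallback rule are our reading of the description above, not a verbatim transcription of the algorithms.

    def tf_pao(x0, grad_f, prox_g, feas_step, psi, step, alphas, iters=100):
        """Sketch of TF-PAO (Alg. 1) with the MDUS correction (Alg. 2).

        grad_f / prox_g : gradient of f and proximal operator of g in Eq. (1)
        feas_step       : one solve of the TF subproblem in Eq. (3)
        psi             : the objective Psi = f + g, used by the MDUS check
        """
        x = x0
        for k in range(iters):
            v = prox_g(x - step * grad_f(x), step)   # objective update, Eq. (5)
            u = feas_step(v)                         # task-driven feasibility step
            trial = (1 - alphas[k]) * v + alphas[k] * u   # proximal averaging
            # MDUS: accept the averaged point only if Psi decreases;
            # otherwise fall back to the plain PG point (our reading).
            x = trial if psi(trial) <= psi(x) else v
        return x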
II-B.2 PAO with Learning-based Task-driven Feasibility
By embedding designed/trained architectures into the TF module, we develop LTF-PAO to optimize the MAP-based model described in Eq. (1). As in subsection II-B.1, we are free to select the method used to solve this constraint-based subproblem. Then, by introducing a relatively loose boundedness condition on the network output, we summarize the complete LTF-PAO scheme in Alg. 3. Indeed, $\mathbf{u}^{k}$ is the output of the network, which is used to approximate the task-driven module at the $k$-th iteration. We introduce a boundedness condition (in Step 5 of Alg. 3) to control the iteration sequence. This aims to prevent improperly designed/trained architectures from deflecting our iterative trajectory toward unwanted solutions. The monitor is obtained by checking the boundedness of $\mathbf{u}^{k}$. The convergence behaviors are summarized in the following section.
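One way such a boundedness monitor could be realized is the following guard; the radius is a user-chosen bound, since the text does not fix one, and the fallback rule is our assumption.

    import numpy as np

    def ltf_monitor(u, v, radius):
        """Boundedness check for the LTF output (Step 5 of Alg. 3, as we
        read it): accept the network output u only if it stays within a
        prescribed distance of the current proximal point v; otherwise
        fall back to v, so that badly trained architectures cannot
        deflect the iterates toward unwanted solutions."""
        return u if np.linalg.norm(u - v) <= radius else v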
III. Convergence Results
In this part, we discuss the convergence behaviors of the proposed TF-PAO and LTF-PAO algorithms. We refer readers to [37] for some definitions from variational analysis, such as proper, lower-semicontinuous, coercive functions and the limiting subdifferential, which will be useful in the following analysis. Our convergence analyses are based on the following fairly loose assumptions.
Assumption 1.
The objective function $\Psi = f + g$ in Eq. (1) is proper, lower-semicontinuous and coercive. The function $f$ is convex and Lipschitz smooth, i.e., for any $\mathbf{x}, \mathbf{y}$, we have $\|\nabla f(\mathbf{x}) - \nabla f(\mathbf{y})\| \le L\|\mathbf{x}-\mathbf{y}\|$, where $L$ is the Lipschitz constant of $\nabla f$.
Theorem 1. Suppose Assumption 1 holds and let $\{\mathbf{x}^{k}\}$ be the sequence generated by Alg. 1. Then $\{\Psi(\mathbf{x}^{k})\}$ satisfies the sufficient descent property, and any accumulation point of $\{\mathbf{x}^{k}\}$ is a critical point of Eq. (1).
Proof.
We first show the sufficient descent property of $\Psi$. By using the proximal update scheme in Step 2 of Alg. 1 and the Lipschitz property of $\nabla f$, i.e.,
$$f(\mathbf{v}^{k}) \le f(\mathbf{x}^{k}) + \langle\nabla f(\mathbf{x}^{k}), \mathbf{v}^{k}-\mathbf{x}^{k}\rangle + \frac{L}{2}\|\mathbf{v}^{k}-\mathbf{x}^{k}\|^{2},$$
we conclude that
$$\Psi(\mathbf{v}^{k}) \le \Psi(\mathbf{x}^{k}) - \left(\frac{1}{2\gamma}-\frac{L}{2}\right)\|\mathbf{v}^{k}-\mathbf{x}^{k}\|^{2}.$$
From the MDUS correction scheme, the sufficient descent property is obtained with $0 < \gamma < 1/L$. Then, we show the second item. The sufficient descent inequality in the first item implies $\sum_{k}\|\mathbf{v}^{k}-\mathbf{x}^{k}\|^{2} < \infty$, which means that $\|\mathbf{v}^{k}-\mathbf{x}^{k}\| \to 0$, i.e., there exist subsequences $\{\mathbf{x}^{k_{j}}\}$ and $\{\mathbf{v}^{k_{j}}\}$ converging to a same point $\mathbf{x}^{*}$ as $j \to \infty$. Incorporating the lower semicontinuity of $\Psi$ and the supremum principle, we obtain that $\Psi(\mathbf{x}^{k_{j}}) \to \Psi(\mathbf{x}^{*})$. With the optimality condition of the proximal step we know that
$$\nabla f(\mathbf{v}^{k_{j}}) - \nabla f(\mathbf{x}^{k_{j}}) - \frac{1}{\gamma}(\mathbf{v}^{k_{j}}-\mathbf{x}^{k_{j}}) \in \partial\Psi(\mathbf{v}^{k_{j}}).$$
Actually, for the limiting subdifferential, we have
$$\mathrm{dist}\left(0, \partial\Psi(\mathbf{v}^{k_{j}})\right) \le \left(L + \frac{1}{\gamma}\right)\|\mathbf{v}^{k_{j}}-\mathbf{x}^{k_{j}}\| \to 0.$$
The above imply that $\mathbf{x}^{*}$ is a critical point. This completes the proof. ∎
[Fig. 2: Deblurring comparison of the modeling schemes (PSNR / SSIM). (a) Input; (b) Eq. (1): 28.1463 / 0.8064; (c) Eq. (4): 28.6781 / 0.8436; (d) Eq. (2): 29.2157 / 0.8447.]
Note that the proposed algorithm is a modified PG scheme. Under some mild conditions, for example, the semi-algebraic property of $\Psi$, the sequence convergence property still holds.
Theorem 2.
With the semi-algebraic property of the function $\Psi$ (please refer to [38] for the formal definition of semi-algebraic functions; actually, many functions arising in learning and vision areas, including the $\ell_{0}$ norm, rational $\ell_{p}$ norms (i.e., $p = p_{1}/p_{2}$ with positive integers $p_{1}$ and $p_{2}$) and their finite sums or products, are all semi-algebraic), we can further assert that the sequence $\{\mathbf{x}^{k}\}$ in Alg. 1 has finite length, i.e.,
$$\sum_{k=0}^{\infty}\|\mathbf{x}^{k+1}-\mathbf{x}^{k}\| < \infty.$$
Proof.
With the KL property (see [38]) and the definition of the subdifferential, we have
$$\varphi'\left(\Psi(\mathbf{x}^{k})-\Psi(\mathbf{x}^{*})\right)\,\mathrm{dist}\left(0, \partial\Psi(\mathbf{x}^{k})\right) \ge 1,$$
where $\varphi$ is the desingularizing function. From the concavity of $\varphi$, we obtain
$$\varphi\left(\Psi(\mathbf{x}^{k})-\Psi(\mathbf{x}^{*})\right) - \varphi\left(\Psi(\mathbf{x}^{k+1})-\Psi(\mathbf{x}^{*})\right) \ge \varphi'\left(\Psi(\mathbf{x}^{k})-\Psi(\mathbf{x}^{*})\right)\left(\Psi(\mathbf{x}^{k})-\Psi(\mathbf{x}^{k+1})\right).$$
If we denote $\Delta_{p,q} := \varphi\left(\Psi(\mathbf{x}^{p})-\Psi(\mathbf{x}^{*})\right) - \varphi\left(\Psi(\mathbf{x}^{q})-\Psi(\mathbf{x}^{*})\right)$ and combine the sufficient descent property with the above subgradient bound, the inequality
$$\|\mathbf{x}^{k+1}-\mathbf{x}^{k}\|^{2} \le C\,\Delta_{k,k+1}\,\|\mathbf{x}^{k}-\mathbf{x}^{k-1}\|$$
holds for some constant $C > 0$, which implies
$$2\|\mathbf{x}^{k+1}-\mathbf{x}^{k}\| \le \|\mathbf{x}^{k}-\mathbf{x}^{k-1}\| + C\,\Delta_{k,k+1}.$$
Subsequently, summing over $k$ and telescoping $\Delta_{k,k+1}$, we have
$$\sum_{k=1}^{K}\|\mathbf{x}^{k+1}-\mathbf{x}^{k}\| \le \|\mathbf{x}^{1}-\mathbf{x}^{0}\| + C\,\Delta_{1,K+1}.$$
Obviously, the above inequality implies the finite length of the sequence $\{\mathbf{x}^{k}\}$. Moreover, with the proper, lower-semicontinuous and coercive property of $\Psi$, the sequence $\{\mathbf{x}^{k}\}$ is bounded. Then, with the update scheme of $\mathbf{x}^{k}$ and the finite length of $\{\mathbf{x}^{k}\}$, we have
$$\sum_{k=0}^{\infty}\|\mathbf{x}^{k+1}-\mathbf{x}^{k}\| < \infty.$$
This completes the proof. ∎
Indeed, the summable sequence stated in Theorem 2 implies that there exists $\mathbf{x}^{*}$ satisfying $\mathbf{x}^{k} \to \mathbf{x}^{*}$ as $k \to \infty$. Subsequently, it follows that the iterate sequence $\{\mathbf{x}^{k}\}$ is a Cauchy sequence and hence a globally convergent sequence, which is also referred to as sequence convergence.
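To make the last step explicit, for any $q > p$ the triangle inequality and the finite-length property give
$$\|\mathbf{x}^{q}-\mathbf{x}^{p}\| \le \sum_{k=p}^{q-1}\|\mathbf{x}^{k+1}-\mathbf{x}^{k}\| \le \sum_{k=p}^{\infty}\|\mathbf{x}^{k+1}-\mathbf{x}^{k}\| \to 0 \ \text{ as } p \to \infty,$$
since the tail of a convergent series vanishes; hence $\{\mathbf{x}^{k}\}$ is Cauchy.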
Remark 1.
According to the convergence analysis described above, the objective function $\Psi$ is sufficiently descent in Alg. 3, and it is easy to check that the convergence results of Alg. 3 can be obtained in the same manner as stated in Theorem 1. The temporary iterate $\mathbf{u}^{k}$ in Alg. 3 is bounded under the checking condition. This implies the boundedness of $\{\mathbf{x}^{k}\}$. Then, in Alg. 3, the sequence convergence property of $\{\mathbf{x}^{k}\}$ is attained.
IV. Applications
We emphasize that, different from the existing image modeling approaches, the proposed PAO allows us to introduce a task-driven feasibility module related to the application area when solving the optimization model in Eq. (1). This section first considers non-blind deconvolution and image inpainting. We take non-blind deconvolution as an illustrative example for establishing PAO. Then, we extend PAO to the even more challenging single image rain streak removal task.
IV-A. Image Deconvolution
Here we consider a particular non-blind deconvolution problem, which aims to recover the latent image $\mathbf{x}$ from the blurred observation $\mathbf{y}$. By formulating this problem using the sparse coding model $\mathbf{y} = \mathbf{A}\mathbf{c} + \mathbf{n}$, where $\mathbf{c}$ denotes the sparse code, $\mathbf{D}$ is a given dictionary (we follow standard settings in image processing to define $\mathbf{D}$ as the inverse wavelet transform in our problem; indeed, the form is denoted as $\mathbf{A} = \mathbf{K}\mathbf{D}$, where $\mathbf{K}$ is the matrix of the blur kernel $\mathbf{k}$ and $\mathbf{D}$ is the inverse of the wavelet transform) and $\mathbf{n}$ is the unknown noise, we derive a specific case of Eq. (1) with $f(\mathbf{c}) = \frac{1}{2}\|\mathbf{A}\mathbf{c}-\mathbf{y}\|^{2}$ and $g(\mathbf{c}) = \lambda\|\mathbf{c}\|_{1}$. Subsequently, it can be equivalently described in the following intuitive form, i.e.,
$$\min_{\mathbf{c}}\ \frac{1}{2}\|\mathbf{K}\mathbf{D}\mathbf{c}-\mathbf{y}\|^{2} + \lambda\|\mathbf{c}\|_{1}. \qquad (6)$$
In the following, we give an example to illustrate PAO with the two module settings, i.e., task-driven feasibility ($\mathcal{X}_{\mathrm{TF}}$) and learning-based task-driven feasibility ($\mathcal{X}_{\mathrm{LTF}}$).
IV-A.1 With Task-driven Feasibility
As for the module $\mathcal{X}_{\mathrm{TF}}$, we aim to introduce a relatively simple and task-related model to enforce our distribution assumptions on the latent image. Thus, we introduce the widely used Total Variation (TV) model [39] in the image domain, i.e.,
$$\min_{\mathbf{u}}\ \frac{1}{2}\|\mathbf{u}-\mathbf{D}\mathbf{c}^{k}\|^{2} + \tau\left(\|\nabla_{h}\mathbf{u}\|_{1}+\|\nabla_{v}\mathbf{u}\|_{1}\right), \qquad (7)$$
where $\tau$ is the threshold parameter, and $\nabla_{h}$ and $\nabla_{v}$ respectively denote the gradients in the horizontal and vertical directions. As it is flexible to select solvers for Eq. (7), we apply the HQS scheme to update $\mathbf{u}$ in this paper. By introducing two auxiliary variables (named $\mathbf{w}_{h}$ and $\mathbf{w}_{v}$), the iteration is
$$\mathbf{u}^{k+1} = \arg\min_{\mathbf{u}}\ \frac{1}{2}\|\mathbf{u}-\mathbf{D}\mathbf{c}^{k}\|^{2} + \frac{\beta_{h}}{2}\|\nabla_{h}\mathbf{u}-\mathbf{w}_{h}\|^{2} + \frac{\beta_{v}}{2}\|\nabla_{v}\mathbf{u}-\mathbf{w}_{v}\|^{2},$$
where $\beta_{h}$ and $\beta_{v}$ are two constant parameters, and $\mathbf{w}_{h}$ and $\mathbf{w}_{v}$ are updated by the proximal (soft-thresholding) operator, which we omit here. Then, applying the proximal gradient approach to update $\mathbf{c}$, which can be transformed into the form of Eq. (5), yields
$$\mathbf{c}^{k+1} = \mathrm{prox}_{\gamma\lambda\|\cdot\|_{1}}\left(\mathbf{c}^{k} - \gamma\mathbf{A}^{\top}(\mathbf{A}\mathbf{c}^{k}-\mathbf{y})\right),$$
where $\gamma < 1/L$. It is then straightforward to obtain the detailed updating steps following Alg. 1.
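For concreteness, a minimal NumPy sketch of such an HQS loop for the TV feasibility subproblem is given below, using circular boundary conditions so that the $\mathbf{u}$-update has a closed-form FFT solution; the parameter values and the single shared penalty weight are illustrative simplifications, not settings from this paper.

    import numpy as np

    def soft(z, t):
        # Soft-thresholding: prox of t * ||.||_1.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def tv_feasibility_hqs(x, tau=0.05, beta=10.0, iters=30):
        """HQS sketch for min_u 0.5*||u - x||^2 + tau*(||Dh u||_1 + ||Dv u||_1),
        i.e., Eq. (7) anchored at the current image estimate x."""
        h, w = x.shape
        # Circular-convolution kernels of the forward differences and their OTFs.
        dh = np.zeros((h, w)); dh[0, 0] = -1.0; dh[0, -1] = 1.0
        dv = np.zeros((h, w)); dv[0, 0] = -1.0; dv[-1, 0] = 1.0
        Dh, Dv = np.fft.fft2(dh), np.fft.fft2(dv)
        denom = 1.0 + beta * (np.abs(Dh) ** 2 + np.abs(Dv) ** 2)
        Fx = np.fft.fft2(x)
        u = x.copy()
        for _ in range(iters):
            # w-subproblems: shrink the image gradients.
            gh = np.real(np.fft.ifft2(Dh * np.fft.fft2(u)))  # horizontal gradient
            gv = np.real(np.fft.ifft2(Dv * np.fft.fft2(u)))  # vertical gradient
            wh, wv = soft(gh, tau / beta), soft(gv, tau / beta)
            # u-subproblem: quadratic, solved exactly in the Fourier domain.
            rhs = Fx + beta * (np.conj(Dh) * np.fft.fft2(wh)
                               + np.conj(Dv) * np.fft.fft2(wv))
            u = np.real(np.fft.ifft2(rhs / denom))
        return u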
IV-A.2 With Learning-based Task-driven Feasibility
Specifically, in this work, the network is considered in a residual formulation. The learning-based iteration step can be directly adopted as $\mathbf{u}^{k} = \mathcal{N}^{k}(\mathbf{v}^{k};\vartheta^{k})$, where $\vartheta^{k}$ is the set of learnable parameters at the $k$-th training stage and $\mathcal{N}^{k}$ is the basic network unit. In our network, there are nineteen layers, which include seven convolution layers, six ReLU layers, five batch normalization layers and one loss layer. The detailed information about $\mathcal{N}^{k}$ can be found in the experimental results section. Notice that standard training strategies can be directly adopted to optimize the parameters of our basic architecture. If necessary, one may further jointly fine-tune the parameters of the whole network after the design phase. By setting $\mathbf{v}^{k} = \mathbf{D}\mathbf{c}^{k}$, the learning-based scheme of $\mathcal{X}_{\mathrm{LTF}}$ is $\mathbf{u}^{k} = \mathcal{N}^{k}(\mathbf{D}\mathbf{c}^{k};\vartheta^{k})$. Hence, following the iteration form of $\mathbf{c}^{k}$ and the above learning-based scheme, we obtain the complete LTF-PAO updates.
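A PyTorch-style sketch of this residual unit, under our assumptions for the unspecified channel width and filter size, could look as follows; the loss layer is applied externally during training and is omitted here.

    import torch.nn as nn

    class ResidualUnit(nn.Module):
        """Sketch of the 19-layer unit described above: seven convolution
        layers, six ReLUs, and five batch normalizations placed between
        convolution and ReLU (except around the first convolution), plus
        an external loss layer. Channel width (64) and 3x3 filters are
        assumptions, not values given in the text."""

        def __init__(self, channels=64):
            super().__init__()
            layers = [nn.Conv2d(1, channels, 3, padding=1),
                      nn.ReLU(inplace=True)]
            for _ in range(5):  # five conv-BN-ReLU groups
                layers += [nn.Conv2d(channels, channels, 3, padding=1),
                           nn.BatchNorm2d(channels),
                           nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(channels, 1, 3, padding=1)]
            self.body = nn.Sequential(*layers)

        def forward(self, x):
            # Residual formulation: the body predicts the artifact/noise map.
            return x - self.body(x)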
By considering the latent image as the uniform argument, we actually obtain a PAO that integrates both synthesis and analysis mechanisms to address different vision applications, including deblurring, inpainting and rain streak removal, etc. Here the matrix $\mathbf{A}$ actually formulates the observation forward model for the particular image processing paradigm. Possible choices of $\mathbf{A}$ include an identity operator for denoising, convolution operators for deblurring, filtered subsampling operators for super-resolution, the Fourier-domain subsampling operator for magnetic resonance imaging (MRI) reconstruction, or a mask for image inpainting. We incorporate experimentally designed and trained network architectures into the PAO for solving these problems. In summary, the proposed PAO indeed integrates advantages from different domain knowledge.
IV-B. Rain Streaks Removal
This subsection focuses on the single image rain streaks removal task, which is a challenging real-world computer vision problem. A rainy image $\mathbf{O}$ is often considered as a linear combination of a rain-free background $\mathbf{B}$ and a rain streaks layer $\mathbf{R}$, i.e., $\mathbf{O} = \mathbf{B} + \mathbf{R}$. We set the latent variables as the pair $(\mathbf{B}, \mathbf{R})$. In terms of designing the optimization model, rather than making efforts to exploit complex priors, we consider the fundamental energy-based sparsity of the observation in a certain transform domain as
$$\min_{\mathbf{B},\mathbf{R},\mathbf{c}_{B},\mathbf{c}_{R}}\ \frac{1}{2}\|\mathbf{O}-\mathbf{B}-\mathbf{R}\|^{2} + \lambda_{B}\|\mathbf{c}_{B}\|_{1} + \lambda_{R}\|\mathbf{c}_{R}\|_{1} + \iota_{\Omega}(\mathbf{B}) + \iota_{\Omega}(\mathbf{R}),$$
where $\lambda_{B}$ and $\lambda_{R}$ are two positive constants, $\iota_{\Omega}$ is the indicator function, i.e., if $\mathbf{x} \in \Omega$, then $\iota_{\Omega}(\mathbf{x}) = 0$, otherwise $\iota_{\Omega}(\mathbf{x}) = +\infty$, and $\mathbf{c}_{B}$ and $\mathbf{c}_{R}$ respectively denote the sparse codes of $\mathbf{B}$ and $\mathbf{R}$ on $\mathbf{D}$, which are two auxiliary variables serving for the subproblem. As for $\mathcal{X}$ stated in Eq. (2), we consider the general TV regularization described in the following form
$$\min_{\mathbf{u}}\ \frac{1}{2}\|\mathbf{u}-\mathbf{B}\|^{2} + \tau\left(\|\nabla_{h}\mathbf{u}\|_{1}+\|\nabla_{v}\mathbf{u}\|_{1}\right).$$
In this part, we first introduce two residual networks, $\mathcal{N}_{B}$ and $\mathcal{N}_{R}$, as stated in [8]. Then we denote two temporary variables, $\mathbf{v}_{B}^{k}$ and $\mathbf{v}_{R}^{k}$, respectively for the background and rain streaks layers. For the background layer network $\mathcal{N}_{B}$, we just follow [8] to build a series of denoising CNNs which extract the natural image well. For the rain streaks layer, $\mathcal{N}_{R}$ learns rain streak behavior from rainy images by training on rainy image and synthetic rain layer pairs as the degraded/clean image pair. We update the variables $\mathbf{B}$ and $\mathbf{R}$ synchronously following the PG form in Eq. (5), where the auxiliary variables $\mathbf{c}_{B}$ and $\mathbf{c}_{R}$ are updated by the proximal gradient operator. Similarly to Section IV-A.2, we obtain the learning-based updates from the outputs of $\mathcal{N}_{B}$ and $\mathcal{N}_{R}$.
Then, following the steps described in LTF-PAO, we obtain the complete updating scheme, sketched below.
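As an illustration only, one synchronous LTF-PAO-style update for the decomposition $\mathbf{O} = \mathbf{B} + \mathbf{R}$ might be sketched as follows; the data-consistency step size and the averaging weight are our assumptions, since the exact update equations come from the derivation above rather than from this sketch.

    def rain_removal_step(o, b, r, net_b, net_r, alpha=0.5, step=0.5):
        """One synchronous update for the decomposition o = b + r (sketch).

        net_b / net_r : pretrained background and rain-streak residual CNNs
        alpha         : proximal-averaging weight (assumed value)
        step          : gradient step on the data-fitting term (assumed value)
        """
        resid = o - b - r                                  # shared residual
        b_dc, r_dc = b + step * resid, r + step * resid    # gradient steps on f
        # Proximal averaging with the learning-based feasibility outputs.
        b_new = (1 - alpha) * b_dc + alpha * net_b(b_dc)
        r_new = (1 - alpha) * r_dc + alpha * net_r(r_dc)
        return b_new, r_new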
Tab. I: Averaged PSNR/SSIM of proximal-based solvers and our method under three Gaussian noise levels.
Methods   Level 1          Level 2          Level 3
          PSNR    SSIM     PSNR    SSIM     PSNR    SSIM
APG       27.32   0.71     25.61   0.63     24.63   0.57
mAPG      26.68   0.67     25.20   0.60     24.39   0.55
niAPG     27.24   0.73     25.63   0.64     24.76   0.61
FTVd      27.56   0.77     26.63   0.73     24.88   0.62
Ours      28.48   0.81     27.06   0.75     26.13   0.71
[Fig. 6: Deblurring visual comparison (PSNR / SSIM). PG: 30.8229 / 0.9108; mAPG: 38.2246 / 0.9247; niAPG: 38.2269 / 0.9251; APGnc: 38.0961 / 0.9324; TF-PAO (Ours): 38.3633 / 0.9581; LTF-PAO (Ours): 40.4552 / 0.9802.]
[Fig. 7: Real image deblurring comparison among the blurry input, IDD-BM3D, MLP, FDN, IRCNN and Ours.]
Tab. II: Averaged PSNR/SSIM and running time on Levin et al.'s and Sun et al.'s benchmarks.
Methods   IDD-BM3D     TV           EPLL         CSF          MLP          IRCNN        FDN          Ours
Levin     31.35/0.90   29.38/0.88   31.65/0.93   31.55/0.87   31.32/0.90   32.28/0.92   32.04/0.93   32.98/0.94
Sun       30.79/0.86   30.67/0.85   32.44/0.89   31.55/0.88   31.47/0.88   32.61/0.89   32.65/0.89   32.90/0.90
Time (s)  48.66        6.38         721.98       0.50         4.59         16.67        2.70         2.41
V. Experimental Results
In this section, we first verify our theoretical results by investigating the iteration behaviors of the proposed TF-PAO and LTF-PAO on the standard deconvolution formulation in Eq. (6). We then compared LTF-PAO against both general and learning-based state-of-the-art methods on different vision applications. We conducted these experiments on a computer with an Intel Core i7-7700 CPU (3.6 GHz), 32 GB RAM and an NVIDIA GeForce GTX 1060 6GB GPU.
V-A. Theoretical Verifications
To verify our theoretical investigations, we performed experiments on non-blind deconvolution. Notice that this problem can be directly addressed by our TF-PAO and LTF-PAO.
V-A.1 Modularization Settings
We first provide a comparison of the optimization models in the image deblurring application; the corresponding results are shown in Fig. 2. Observe that the proposed PAO with the LTF module performs better than both the modeling scheme in Eq. (1) and the LTF in Eq. (4) alone. This illustrates the effectiveness of the proposed PAO scheme.
We then analyze the performance and flexibility of PAO with different operator settings; the corresponding PSNR (i.e., peak signal-to-noise ratio) results under Gaussian noise are plotted in Fig. 3. For solving the subproblem (7) specified in $\mathcal{X}_{\mathrm{TF}}$, four different first-order methods, i.e., PG, APG, HQS and ADMM, are considered, as mentioned in subsection II-B. The PSNR scores of TF-PAO under the four different strategies are plotted in Fig. 3 (a). Observe that the various methods for obtaining $\mathbf{u}^{k}$ have only a slight influence on the performance of our TF-PAO scheme. We adopt HQS as the approach for obtaining the iteration steps of $\mathbf{u}^{k}$ in TF-PAO and LTF-PAO. Note that, to provide a relatively fair comparison, we keep the parameters the same under the four circumstances mentioned above. Hereafter, we select the relative error (i.e., $\|\mathbf{x}^{k+1}-\mathbf{x}^{k}\|/\|\mathbf{x}^{k}\|$) as the stopping criterion.
[Fig. 8: Inpainting visual comparison (PSNR / SSIM). TV: 24.02 / 0.79; FoE: 25.57 / 0.84; ISDSB: 23.21 / 0.76; WNNM: 26.00 / 0.89; IRCNN: 25.95 / 0.85; Ours: 29.02 / 0.92.]
To analyze the effects of the network-based block, four different task-specific structures, i.e., TV [39], RF [27], CNNs and BM3D [28] (named $\mathcal{N}_{\mathrm{TV}}$, $\mathcal{N}_{\mathrm{RF}}$, $\mathcal{N}_{\mathrm{CNN}}$ and $\mathcal{N}_{\mathrm{BM3D}}$, respectively), are adopted under the LTF-PAO scheme. For the CNN architecture, we introduce a residual network which consists of nineteen layers: seven dilated convolutions, six ReLU operations (plugged between each two convolution layers) and five batch normalizations (plugged between convolution and ReLU, except for the first convolution layer). In the training stage, we randomly select 800 natural images from the ImageNet database [41]. The selected pictures are cropped into small patches. Fig. 3 (b) plots the PSNR curves of $\mathcal{N}_{\mathrm{TV}}$, $\mathcal{N}_{\mathrm{RF}}$, $\mathcal{N}_{\mathrm{CNN}}$ and $\mathcal{N}_{\mathrm{BM3D}}$. As can be seen, LTF-PAO performs better and faster with $\mathcal{N}_{\mathrm{CNN}}$ than with the others. Hence, we set the block to CNNs hereafter. We further compared our LTF-PAO with traditional methods under three different Gaussian noise levels (levels 1–3 in Tab. I) on the image set collected by [40]; the corresponding results are shown in Tab. I with quantitative performance (i.e., PSNR and SSIM metrics). It can be seen that our LTF-PAO outperforms the classical numerical solvers by a large margin.
Tab. III: PSNR/SSIM of image inpainting under random masks (three increasing missing-pixel ratios) and text masks.
Methods   Mask (low)    Mask (mid)    Mask (high)   Text
TV        32.22/0.93    29.20/0.86    26.07/0.74    35.29/0.97
FoE       34.01/0.90    30.81/0.81    27.64/0.65    37.05/0.95
VNL       27.55/0.91    26.13/0.85    24.23/0.75    28.58/0.95
ISDSB     31.32/0.91    28.23/0.83    24.92/0.70    34.91/0.96
WNNM      31.75/0.94    28.71/0.89    25.63/0.78    34.89/0.97
IRCNN     34.92/0.95    31.45/0.91    26.44/0.79    37.26/0.97
Ours      34.94/0.96    31.61/0.91    27.88/0.81    37.38/0.98
V-A.2 Convergence of PAO
Next, we illustrate the convergence behaviors of the PAO schemes. To evaluate the variation trends described in PAO, the iterate $\mathbf{x}^{k}$ and the intermediate variables ($\mathbf{v}^{k}$ and $\mathbf{u}^{k}$ in TF-PAO, and $\mathbf{u}^{k}$ in LTF-PAO) are plotted in Fig. 4 (a) and (b). In Fig. 4 (a), the iteration behaviors prove the boundedness of $\mathbf{x}^{k}$, $\mathbf{v}^{k}$ and $\mathbf{u}^{k}$. Similarly, we plot the convergence curves of LTF-PAO in Fig. 4 (b). To illustrate the iteration steps of LTF-PAO, we show the selection condition of Alg. 3 in Fig. 4 (c). This implies the boundedness of the iterates.
Since the proposed TF-PAO and LTF-PAO schemes are proximal-based methods, it is necessary to compare them with existing proximal-based first-order approaches, such as the classical proximal gradient (PG), monotone APG (mAPG) [18], inexact APG (niAPG) [19] and momentum APG for nonconvex problems (APGnc) [20], with an additional 1‰ noise level and a fixed kernel size. The comparison results are shown in Fig. 5 in terms of the relative error after log transformation (i.e., $\log(\|\mathbf{x}^{k+1}-\mathbf{x}^{k}\|/\|\mathbf{x}^{k}\|)$), the reconstruction error (i.e., $\|\mathbf{x}^{k}-\mathbf{x}_{gt}\|$), the functional value and PSNR, where $\mathbf{x}_{gt}$ denotes the ground truth. Here, the same relative-error stopping criterion is adopted. Obviously, the proposed TF-PAO converges faster than the other PGs under the same stopping condition. Observe that LTF-PAO performs the best in both PSNR scores and iteration count. The corresponding visual results are shown in Fig. 6 with PSNR and SSIM (i.e., structural similarity) scores. Observe that the proposed LTF-PAO removes more noise while keeping the details.
[Fig. 9: Rain streak removal visual comparison (PSNR / SSIM). GMM: 32.87 / 0.91; DDN: 32.12 / 0.92; UGSM: 29.69 / 0.86; JORDER: 33.40 / 0.96; DID-MDN: 28.18 / 0.89; Ours: 37.10 / 0.97.]
V-B. State-of-the-art Comparisons
We then evaluated our LTF-PAO on a variety of low-level vision applications, including image deblurring, image inpainting and rain streak removal.
V-B.1 Image Deblurring
In this task, the matrix $\mathbf{A}$ stated in the application part is the blur kernel matrix and $\mathbf{y}$ is the blurry image. As usual, the blurry images are synthesized by applying a blur kernel and adding additive Gaussian noise. We consider circular boundary conditions when performing the convolution. We report the results of our LTF-PAO on Sun et al.'s challenging benchmark [43] and Levin et al.'s dataset [44], together with other state-of-the-art methods, including traditional methods (e.g., IDD-BM3D [45], TV [46]), parameter-learning based methods (e.g., EPLL [47], CSF [40]) and network-based methods (e.g., MLP [48], IRCNN [8], FDN [49]). It can be seen in Tab. II that our method obtains the best quantitative performance (i.e., PSNR and SSIM metrics) on Sun et al.'s and Levin et al.'s datasets. Moreover, we illustrate the visual comparisons on real image deblurring [50] with an unknown blur kernel, which is estimated roughly by Pan et al.'s method [51]. As shown in Fig. 7, our method preserves more details.
V-B.2 Image Inpainting
In the image inpainting task, the matrix $\mathbf{A}$ is a mask and $\mathbf{y}$ is the image with missing pixels. This task aims to recover the missing pixels of the observation. Here we compared our LTF-PAO with TV [39], FoE [52], VNL [53], ISDSB [54], WNNM [55] and IRCNN [8] on this problem. We normalized the pixel values to $[0, 1]$. We generated random masks of three different levels of missing pixels on the CBSD68 dataset [42]. Moreover, we collected 12 different text masks to further evaluate the proposed methods. Tab. III presents the PSNR and SSIM comparison results with the different masks. Observe that our method performs better than the state-of-the-art approaches regardless of the proportion of masks. Further, to compare the visual performance of LTF-PAO with other methods, we present the missing-pixel comparisons in Fig. 8 for the methods with the top five scores (TV, FoE, ISDSB, WNNM and IRCNN). It can be observed that our approach successfully recovers the image with better visual quality, especially in the zoomed-in regions with rich details.
Tab. IV: Averaged PSNR/SSIM for rain streak removal on Test1, Test2 and Rain100H.
Methods   Test1            Test2            Rain100H
          PSNR    SSIM     PSNR    SSIM     PSNR    SSIM
JCAS      31.61   0.9183   28.37   0.9050   15.23   0.5150
GMM       32.33   0.9042   29.57   0.8878   14.26   0.4225
DN        30.30   0.9151   27.34   0.9009   13.72   0.4417
DDN       33.41   0.9442   29.91   0.9433   17.93   0.5655
UGSM      33.30   0.9253   27.07   0.9220   14.90   0.4674
JORDER    35.93   0.9530   35.11   0.9732   23.45   0.7490
DID-MDN   29.08   0.9015   27.92   0.8695   17.28   0.6035
Ours      36.39   0.9630   34.88   0.9737   24.30   0.8044
V-B.3 Single-image Rain Streak Removal
In this part, we evaluated our method on the task of rain streak removal, in comparison with state-of-the-art methods including GMM [34], DN [58], DDN [59], JCAS [60], JORDER [61], UGSM [62], and DID-MDN [63]. All the comparisons shown in this paper were conducted under the same hardware configuration.
Tab. IV reports the quantitative scores on three different datasets: (1) Test1, obtained from [34], includes 12 synthesized rainy images with only one type of rain streak rendering technique; (2) Rain100H is collected from BSD200 [57] and synthesized with five streak directions; (3) the Test2 dataset consists of 7 images, using photorealistic rendering of rain streaks [56]. Here we adopt the training set provided by Yang et al. [61] for network training. According to the quantitative results reported in Tab. IV, we provide visual comparisons for the five methods with relatively high PSNR and SSIM scores (i.e., GMM, DDN, UGSM, JORDER, DID-MDN) in Fig. 9. It can be observed that the proposed LTF-PAO scheme preserves more details with very few rain streaks left, in both synthesized and real-world rainy images.
Methods   PSNR    SSIM
DN        25.51   0.8885
UGSM      26.38   0.8261
DDN       29.90   0.8999
DID-MDN   27.94   0.8696
JORDER    27.50   0.8515
Ours      31.18   0.9152
[Figure: Real-world rain removal comparison (PSNR / SSIM). DDN: 27.79 / 0.8371; Ours: 28.48 / 0.8531.]
VI. Conclusions
In this paper, we developed a Proximal Averaged Optimization (PAO) method for the challenging nonconvex MAP-based model in Eq. (1). By introducing two constraint schemes, i.e., the task-driven feasibility and the learning-based task-driven feasibility modules, TF-PAO and LTF-PAO were established, respectively. We then proved the convergence of PAO under some relatively loose assumptions. Extensive experiments on several challenging tasks showed that our method achieves better visual performance and quantitative scores than other state-of-the-art methods.
References
 [1] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
 [2] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, no. 1–4, pp. 259–268, 1992.
 [3] L. Xu, Q. Yan, Y. Xia, and J. Jia, "Structure extraction from texture via relative total variation," ACM Transactions on Graphics (TOG), vol. 31, no. 6, p. 139, 2012.
 [4] J. Cheng, Y. Gao, B. Guo, and W. Zuo, "Image restoration using spatially variant hyper-Laplacian prior," Signal, Image and Video Processing, vol. 13, no. 1, pp. 155–162, 2019.
 [5] D. Krishnan and R. Fergus, "Fast image deconvolution using hyper-Laplacian priors," in NeurIPS, 2009, pp. 1033–1041.
 [6] R. Liu, G. Zhong, J. Cao, Z. Lin, S. Shan, and Z. Luo, "Learning to diffuse: A new perspective to design PDEs for visual analysis," IEEE TPAMI, vol. 38, no. 12, pp. 2457–2471, 2016.
 [7] Y. Romano, M. Elad, and P. Milanfar, "The little engine that could: Regularization by denoising (RED)," SIAM Journal on Imaging Sciences, vol. 10, no. 4, pp. 1804–1844, 2017.
 [8] K. Zhang, W. Zuo, S. Gu, and L. Zhang, "Learning deep CNN denoiser prior for image restoration," in IEEE CVPR, 2017, pp. 3929–3938.
 [9] Y. Chen and T. Pock, “Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration,” IEEE TPAMI, vol. 39, no. 6, 2017.
 [10] K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” in ICML. Omnipress, 2010, pp. 399–406.
 [11] P. Mu, J. Chen, R. Liu, X. Fan, and Z. Luo, “Learning bilevel layer priors for single image rain streaks removal,” IEEE Signal Processing Letters, vol. 26, no. 2, pp. 307–311, 2018.
 [12] R. Liu, Z. Jiang, X. Fan, and Z. Luo, "Knowledge-driven deep unrolling for robust image layer separation," IEEE TNNLS, 2019.
 [13] D. P. Bertsekas, Convex Optimization Algorithms. Athena Scientific, Belmont, 2015.
 [14] M. Nikolova and M. K. Ng, "Analysis of half-quadratic minimization methods for signal and image recovery," SIAM Journal on Scientific Computing, vol. 27, no. 3, pp. 937–966, 2005.
 [15] S. Boyd, "Alternating direction method of multipliers," in NeurIPS, 2011.
 [16] Y. E. Nesterov, "A method for solving the convex programming problem with convergence rate O(1/k^2)," in Dokl. Akad. Nauk SSSR, vol. 269, 1983, pp. 543–547.
 [17] A. Beck and M. Teboulle, "Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems," IEEE TIP, vol. 18, no. 11, 2009.
 [18] H. Li and Z. Lin, "Accelerated proximal gradient methods for nonconvex programming," in NeurIPS, 2015.
 [19] Q. Yao, J. T. Kwok, F. Gao, W. Chen, and T.Y. Liu, “Efficient inexact proximal gradient algorithm for nonconvex problems,” in IJCAI, 2017.
 [20] Q. Li, Y. Zhou, Y. Liang, and P. K. Varshney, “Convergence analysis of proximal gradient with momentum for nonconvex optimization,” in ICML, 2017.
 [21] S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, "Plug-and-play priors for model based reconstruction," in 2013 Global Conference on Signal and Information Processing. IEEE, 2013, pp. 945–948.
 [22] X. Wang and S. H. Chan, "Parameter-free plug-and-play ADMM for image restoration," in IEEE International Conference on Acoustics, 2017.
 [23] S. H. Chan, X. Wang, and O. A. Elgendy, "Plug-and-play ADMM for image restoration: Fixed-point convergence and applications," IEEE Transactions on Computational Imaging, vol. 3, no. 1, pp. 84–98, 2017.
 [24] K. Zhang, W. Zuo, and L. Zhang, "Deep plug-and-play super-resolution for arbitrary blur kernels," in IEEE CVPR, 2019, pp. 1671–1681.
 [25] E. K. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, and W. Yin, "Plug-and-play methods provably converge with properly trained denoisers," arXiv preprint arXiv:1905.05406, 2019.
 [26] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3D transform-domain collaborative filtering," IEEE TIP, vol. 16, no. 8, pp. 2080–2095, 2007.
 [27] M. Unser, A. Aldroubi, and M. Eden, "Recursive regularization filters: design, properties, and applications," IEEE TPAMI, no. 3, pp. 272–277, 1991.
 [28] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3D transform-domain collaborative filtering," IEEE TIP, vol. 16, no. 8, pp. 2080–2095, 2007.
 [29] T. Hong, Y. Romano, and M. Elad, "Acceleration of RED via vector extrapolation," Journal of Visual Communication and Image Representation, vol. 63, p. 102575, 2019.
 [30] E. T. Reehorst and P. Schniter, "Regularization by denoising: Clarifications and new interpretations," IEEE Transactions on Computational Imaging, vol. 5, no. 1, pp. 52–67, 2019.
 [31] R. Liu, S. Cheng, L. Ma, X. Fan, and Z. Luo, “Deep proximal unrolling: Algorithmic framework, convergence analysis and applications,” IEEE TIP, 2019.
 [32] R. Liu, S. Cheng, Y. He, X. Fan, Z. Lin, and Z. Luo, “On the convergence of learningbased iterative methods for nonconvex inverse problems,” IEEE TPAMI, 2019.
 [33] C. Bao, H. Ji, Y. Quan, and Z. Shen, “L0 norm based dictionary learning by proximal methods with global convergence,” in IEEE CVPR, 2014, pp. 3858–3865.
 [34] Y. Li, R. T. Tan, X. Guo, J. Lu, and M. S. Brown, “Rain streak removal using layer priors,” in IEEE CVPR, 2016, pp. 2736–2744.
 [35] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein et al., "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
 [36] P. Gong, C. Zhang, Z. Lu, J. Huang, and J. Ye, "A general iterative shrinkage and thresholding algorithm for nonconvex regularized optimization problems," in ICML, 2013, pp. 37–45.
 [37] R. T. Rockafellar and R. J.B. Wets, Variational analysis. Springer Science & Business Media, 2009, vol. 317.
 [38] J. Bolte, S. Sabach, and M. Teboulle, “Proximal alternating linearized minimization for nonconvex and nonsmooth problems,” Mathematical Programming, vol. 146, no. 12, pp. 459–494, 2014.
 [39] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, "An iterative regularization method for total variation-based image restoration," Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 460–489, 2005.
 [40] U. Schmidt and S. Roth, “Shrinkage fields for effective image restoration,” in IEEE CVPR, 2014.
 [41] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet large scale visual recognition challenge,” IJCV, vol. 115, no. 3, pp. 211–252, 2015.
 [42] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, "Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising," IEEE TIP, vol. 26, no. 7, 2017.
 [43] L. Sun, S. Cho, J. Wang, and J. Hays, “Edgebased blur kernel estimation using patch priors,” in ICCP, 2013, pp. 1–8.
 [44] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in IEEE CVPR, 2009, pp. 1964–1971.
 [45] A. Danielyan, V. Katkovnik, and K. Egiazarian, "BM3D frames and variational image deblurring," IEEE TIP, vol. 21, no. 4, 2012.
 [46] Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
 [47] D. Zoran and Y. Weiss, “From learning models of natural image patches to whole image restoration,” in IEEE ICCV, 2011, pp. 479–486.
 [48] C. J. Schuler, H. Christopher Burger, S. Harmeling, and B. Schölkopf, "A machine learning approach for non-blind image deconvolution," in IEEE CVPR, 2013.
 [49] J. Kruse, C. Rother, and U. Schmidt, "Learning to push the limits of efficient FFT-based image deconvolution," in IEEE ICCV, 2017.
 [50] R. Köhler, M. Hirsch, B. Mohler, B. Schölkopf, and S. Harmeling, “Recording and playback of camera shake: Benchmarking blind deconvolution with a realworld database,” in ECCV, 2012.
 [51] J. Pan, Z. Lin, Z. Su, and M.-H. Yang, "Robust kernel estimation with outliers handling for image deblurring," in IEEE CVPR, 2016.
 [52] S. Roth and M. J. Black, "Fields of experts," IJCV, vol. 82, no. 2, 2009.
 [53] P. Arias, G. Facciolo, V. Caselles, and G. Sapiro, “A variational framework for exemplarbased image inpainting,” IJCV, vol. 93, no. 3, pp. 319–347, 2011.
 [54] L. He and Y. Wang, "Iterative support detection-based split Bregman method for wavelet frame-based image inpainting," IEEE TIP, vol. 23, no. 12, 2014.
 [55] S. Gu, Q. Xie, D. Meng, W. Zuo, X. Feng, and L. Zhang, “Weighted nuclear norm minimization and its applications to low level vision,” IJCV, vol. 121, no. 2, pp. 183–208, 2017.
 [56] S. Tariq, "Rain," NVIDIA White Paper, 2007.
 [57] D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in IEEE ICCV, 2001, pp. 416–423.
 [58] X. Fu, J. Huang, X. Ding, Y. Liao, and J. Paisley, "Clearing the skies: A deep network architecture for single-image rain removal," IEEE TIP, vol. 26, no. 6, 2017.
 [59] X. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding, and J. Paisley, “Removing rain from single images via a deep detail network,” in IEEE CVPR, 2017.
 [60] S. Gu, D. Meng, W. Zuo, and L. Zhang, “Joint convolutional analysis and synthesis sparse representation for single image layer separation,” in IEEE ICCV, 2017, pp. 1717–1725.
 [61] W. Yang, R. T. Tan, J. Feng, J. Liu, Z. Guo, and S. Yan, “Deep joint rain detection and removal from a single image,” in IEEE CVPR, 2017.
 [62] T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, and Y. Wang, "FastDeRain: A novel video rain streak removal method using directional gradient priors," IEEE TIP, 2018.
 [63] H. Zhang and V. M. Patel, "Density-aware single image deraining using a multi-stream dense network," in IEEE CVPR, 2018, pp. 695–704.