1 Introduction
In applications throughout statistics, machine learning and computer vision, one is often faced with the challenge of solving ill-posed inverse problems. In general, the basic inverse problem leads to a discrete linear system of the form y = A(x) + n, where x is the latent variable to be estimated, A denotes some given linear operation on x, and y and n are the observation and an unknown error term, respectively. Typically, these inverse problems can be addressed by solving the composite minimization model:
  min_x F(x) := f(x) + g(x),   (1)
where f is the fidelity term that captures the loss of data fitting, and g refers to the prior that promotes the desired distribution on the solution. Recent studies illustrate that many problems (e.g., image deconvolution, matrix factorization and dictionary learning) naturally require Eq. (1) to be solved in the nonconvex scenario. This trend motivates us to investigate Nonconvex Inverse Problems (NIPs) in the form of Eq. (1), with the practical configuration that f is continuously differentiable, g is nonsmooth, and both f and g are possibly nonconvex.
Over the past decades, a broad class of first-order methods have been developed to solve special instances of Eq. (1). For example, by integrating Nesterov's acceleration [1] into the fundamental Proximal Gradient (PG) scheme, the Accelerated Proximal Gradient (APG, a.k.a. FISTA [2]) method was initially proposed to solve convex models in the form of Eq. (1) for different applications, such as image restoration [2], image deblurring [3], and sparse/low-rank learning [4]. While these APGs generate a sequence of objective values that may oscillate [2], [5] developed a variant of APG that guarantees the monotonicity of the sequence. For nonconvex energies in Eq. (1), Li and Lin [6] investigated a monotone APG (mAPG) and proved its convergence under the Kurdyka-Łojasiewicz (KŁ) assumption [7]. The work in [8] developed another variant of APG (APGnc) for nonconvex problems, but its original analysis only characterized the fixed-point convergence. Recently, Li et al. [9] also proved the subsequence convergence of APGnc and estimated its convergence rates by further exploiting the KŁ property.
Unfortunately, even with some theoretically proved convergence properties, these classical numerical solvers may still fail in real-world scenarios. This is mainly because the abstractly designed and fixed updating schemes exploit neither the particular structure of the problem at hand nor the input data distribution [10].
In recent years, various learning-based strategies [11, 12, 13, 14, 15] have been proposed to address practical inverse problems in the form of Eq. (1). These methods first introduce hyperparameters into classical numerical solvers and then perform discriminative learning on collected training data to obtain data-specific (but possibly inconsistent) iteration schemes. Inspired by the success of deep learning in different application fields, some preliminary studies considered hand-crafted network architectures as implicit priors (a.k.a. deep priors) for inverse problems. Following this perspective, various deep priors have been designed and nested into numerical iterations [16, 17, 18]. Alternatively, the works in [19] and [20] addressed the iteration-learning issue from the perspectives of deep reinforcement learning and recurrent learning, respectively. Nevertheless, existing hyperparameter-learning approaches can only build iterations on specific energy forms (e.g., penalties and MRFs), so they are inapplicable to more generic inverse problems. Meanwhile, due to the severe inconsistency of parameters during iterations, rigorous analysis of the resulting trajectories is also missing. Deep iterative methods have been applied to many learning and vision problems in practice. However, due to their complex network structures, little to no results have been established on the convergence behaviors of these methods. In summary, the lack of strict theoretical investigation is one of the most fundamental limitations of prevalent learning-based iterative methods, especially in the challenging nonconvex scenario.
To break the limits of prevalent approaches, this paper explores the Flexible Iterative Modularization Algorithm (FIMA), a generic and convergent algorithmic framework that combines learnable architectures (e.g., mainstream deep networks) with principled knowledge (formulated by mathematical models) to tackle challenging NIPs in Eq. (1). Specifically, derived from the fundamental forward-backward updating mechanism, FIMA replaces the specific calculations corresponding to the fidelity and priors in Eq. (1) with two user-specified (learnable) computational modules. A series of theoretical investigations are established for FIMA. For example, we first prove the subsequence convergence of FIMA with an explicit momentum policy (called eFIMA), which is as good as that of mathematically designed nonconvex proximal methods with Nesterov's acceleration (e.g., the various APGs in [6, 8, 9]). By introducing a carefully devised error-control policy (i.e., an implicit momentum policy, called iFIMA), we further strengthen the results and obtain a globally convergent Cauchy sequence for Eq. (1). We prove that this guarantee is also preserved for FIMA with multiple blocks of unknown variables (called mFIMA). As a nontrivial byproduct, we finally show how to specify the modules in FIMA for challenging inverse problems in the low-level vision area (e.g., nonblind and blind image deconvolution). Our primary contributions are summarized as follows:

FIMA provides a generic framework that unifies almost all existing learning-based iterative methods, as well as a series of scheduling policies that make it possible to develop theoretically convergent learning-based iterations for challenging nonconvex inverse problems in the form of Eq. (1).

Even with highly flexible (learnable) iterations, the convergence guarantees obtained by FIMA are still as good as (eFIMA) or better than (iFIMA) those of prevalent mathematically designed nonconvex APGs. It is worth noting that our devised scheduling policies, together with the flexible algorithmic structure, should also benefit classical nonconvex algorithms.

FIMA also provides a practical and effective ensemble of domain knowledge and sophisticated learned data distributions for real applications. Thus we can combine the expressive power of knowledge-based and data-driven methodologies to yield state-of-the-art performance on challenging low-level vision tasks.
2 Related Work
2.1 Classical First-order Numerical Solvers
We first briefly review a group of classical first-order algorithms, which have been widely used to solve inverse problems. The gradient descent (GD) scheme on a differentiable function f can be reformulated as minimizing the following quadratic approximation of f at a given point x^k with step size γ, i.e., x^{k+1} = argmin_x f(x^k) + ⟨∇f(x^k), x − x^k⟩ + (1/2γ)‖x − x^k‖². As for the nonsmooth function g, its proximal mapping (PM) with parameter γ can be defined as prox_{γg}(y) = argmin_x g(x) + (1/2γ)‖x − y‖². So it is natural to view PG as a cascade of GD (on f) and PM (on g), or equivalently as optimizing the quadratic approximation of Eq. (1), i.e., x^{k+1} = prox_{γg}(y^k − γ∇f(y^k)), where y^k is some calculated variable at the k-th iteration. Thus most prevalent proximal schemes can be summarized as
  (A1) y^k = x^k,  or  (A2) y^k = x^k + μ_k (x^k − x^{k−1}),
  (B1) x^{k+1} = prox_{γg}(y^k − γ∇f(y^k)),  or  (B2) x^{k+1} ≈ prox_{γg}(y^k − γ(∇f(y^k) + e^k)) + ε^k,
where ε^k and e^k in (B2) denote the errors in the PM and GD calculations, respectively [21]. Within this general scheme, we first obtain the original PG by setting y^k = x^k (i.e., (A1)) and computing PM exactly in (B1) [2]. Using Nesterov's acceleration [1] (i.e., (A2) with momentum parameter μ_k), we have the well-known APG method [2, 6, 9]. Moreover, by introducing ε^k and e^k to respectively capture the inexactness of PM and GD (i.e., (B2)), we actually obtain inexact PG and APG for both convex [22] and nonconvex [21] problems. Notice that in the nonconvex scenario, most classical APGs can only guarantee subsequence convergence to critical points [6, 9].
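To make the (A1)/(A2)+(B1) scheme concrete, the sketch below implements plain PG and Nesterov-accelerated APG for a toy instance of Eq. (1) with f(x) = ½‖Ax − y‖² and g(x) = λ‖x‖₁. This problem and all names are our illustrative choices, not the paper's experimental model.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal mapping of tau * ||.||_1 (our hypothetical choice of g)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def apg(A, y, lam, steps=200, momentum=True):
    """Scheme (A1)/(A2)+(B1): plain PG when momentum=False, APG otherwise,
    for the toy model f(x) = 0.5*||Ax - y||^2, g(x) = lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz modulus of grad f
    gamma = 1.0 / L                          # step size
    x_prev = x = np.zeros(A.shape[1])
    t_prev = 1.0
    for _ in range(steps):
        if momentum:                         # (A2): Nesterov extrapolation
            t = (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2)) / 2.0
            v = x + ((t_prev - 1.0) / t) * (x - x_prev)
            t_prev = t
        else:                                # (A1): y^k = x^k
            v = x
        grad = A.T @ (A @ v - y)             # forward (GD) step on f
        x_prev, x = x, soft_threshold(v - gamma * grad, gamma * lam)  # (B1)
    return x
```

Both variants perform the same backward (prox) step; only the extrapolation point differs, which is exactly the (A1) vs. (A2) distinction in the scheme above.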
2.2 Learning-based Iterative Methods
In [11], a trained version of FISTA (called LISTA) was introduced to approximate the solution of LASSO. [23, 10] extended LISTA to more generic sparse coding tasks and provided an adaptive acceleration. Unfortunately, LISTA is built on convex regularization and thus may not be applicable to other complex nonconvex inverse problems (e.g., nonconvex sparse priors). By introducing hyperparameters into MRFs and solving the resulting variational model with different iteration schemes, various learning-based iterative methods have been proposed for inverse problems in the image domain (e.g., denoising, super-resolution, and MRI imaging). For example, [24, 13, 14, 25, 15] have considered half-quadratic splitting, gradient descent, the Alternating Direction Method of Multipliers (ADMM) and primal-dual methods, respectively. But their parameterizations are completely based on MRF priors. Even worse, the original convergence properties are lost in the resulting iterations. To better model complex image degradations, [16, 17, 18] considered Convolutional Neural Networks (CNNs) as implicit priors for image restoration. Since these methods discard the regularization term in Eq. (1), one cannot enforce principled constraints on their solutions. It is also unclear when and where these iterative trajectories should stop. Another group of very recent works [19, 20] directly formulated the descent directions from a reinforcement learning perspective or using recurrent networks. However, due to the high computational budget, they can only be applied to relatively simple tasks (e.g., linear regression). Besides, due to the complex topological network structure, it is extremely hard to provide strict theoretical analysis for these methods.
3 The Proposed Algorithms
This section develops the Flexible Iterative Modularization Algorithm (FIMA) for nonconvex inverse problems in Eq. (1). The convergence behaviors are also investigated accordingly. Hereafter, some fairly loose assumptions are enforced on Eq. (1): f is proper and Lipschitz smooth (with modulus L) on a bounded set, g is proper, lower semicontinuous and proximable¹, and F = f + g is coercive. Notice that the proofs and definitions are deferred to the Supplementary Materials.
¹ The function g is proximable if prox_{γg} can be easily computed for the given γ and input.
3.1 Abstract Iterative Modularization
As summarized in Sec. 2.1, a large number of first-order methods can be summarized as forward-backward-type iterations. This motivates us to consider the following even more abstract updating principle:
  x^{k+1} = (A_g ∘ A_f)(x^k),   (2)
where A_f and A_g respectively stand for the user-specified modules for f and g, and ∘ denotes operator composition. Building upon this formulation, it is easy to see that designing a learning-based iterative method reduces to the problem of iteratively specifying and learning A_f and A_g.
It is straightforward to see that most prevalent approaches [16, 17, 18, 24, 13, 14, 15] naturally fall into this general formulation. Nevertheless, it is currently still impossible to provide strict theoretical results for the practical trajectories of Eq. (2). This is mainly due to the lack of efficient mechanisms to control the propagations generated by these hand-crafted operations. Fortunately, in the following, we introduce different scheduling policies to automatically guide the iterations in Eq. (2), resulting in a series of theoretically convergent learning-based iterative methods.
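As a minimal illustration of the abstract principle of Eq. (2), any gradient-like step can play the fidelity module and any denoiser-like map can play the prior module; the loop is just operator composition. The concrete modules below (a one-dimensional quadratic fidelity and a shrinkage map) are our hypothetical choices for demonstration only.

```python
import numpy as np

def fima_iterate(x0, module_f, module_g, steps=10):
    """Abstract updating principle of Eq. (2): x_{k+1} = (A_g o A_f)(x_k)."""
    x = x0
    for _ in range(steps):
        x = module_g(module_f(x))  # compose the user-specified modules
    return x

def make_modules(y, gamma=0.5, tau=0.05):
    """Hypothetical modules for f(x) = 0.5*||x - y||^2 plus a shrinkage 'prior'."""
    module_f = lambda x: x - gamma * (x - y)                            # GD on f
    module_g = lambda v: np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)  # PM-like map
    return module_f, module_g
```

Any learned network with the right input/output shape can be dropped in for either module, which is precisely why Eq. (2) alone carries no convergence guarantee.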
3.2 Explicit Momentum: A Straightforward Strategy
The descent of objective values is one of the most important properties of numerical iterations, and it is also necessary for analyzing the convergence of some classical algorithms. Inspired by this, we present explicit momentum FIMA (eFIMA, i.e., Alg. 1), in which we explicitly compare the objective values at the learned candidate and at the previous iterate and choose the variable with the smaller objective value as our monitor. Finally, a proximal refinement is performed to adjust the learning-based update at each stage.
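One eFIMA stage can be sketched as follows; this is our reading of Alg. 1 and all names are ours, with `learned_update` standing for the composed modules of Eq. (2):

```python
import numpy as np

def efima_step(x_k, x_mon, F, learned_update, prox_grad):
    """One eFIMA stage (a sketch; all names are ours).
    1. Propose a learned candidate u via the composed modules of Eq. (2).
    2. Explicit momentum: keep as monitor whichever of {u, x_mon} has the
       smaller objective value F.
    3. Refine the monitor with one exact proximal-gradient step, which
       guarantees descent regardless of what the learned modules did."""
    u = learned_update(x_k)
    v = u if F(u) <= F(x_mon) else x_mon   # choose the smaller-objective variable
    return prox_grad(v), v
```

Because the refinement never starts from a point worse than the previous monitor, F is nonincreasing along the iterates even when `learned_update` is arbitrary — this is the intuition behind the sufficient-descent claim of Theorem 1.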
The following theorem first verifies the sufficient descent of the objective and then proves the subsequence convergence of eFIMA. It is worth observing that these results do not rely on any specific choices of the two modules.
Theorem 1.
Based on Theorem 1 and assuming F is a semialgebraic function², the convergence rate of eFIMA can be straightforwardly estimated as follows.
² Indeed, a variety of functions (e.g., the indicator function of a polyhedral set, and rational penalties) satisfy the semialgebraic property [26].
Corollary 1.
Let φ be a desingularizing function with a constant and a parameter θ [27]. Then the sequence generated by eFIMA converges within finitely many iterations, at a linear rate, or at a sublinear rate, depending on the range in which θ falls.
Remark 1.
Theorem 1 and Corollary 1 actually provide a unified methodology to analyze convergence for not only learning-based methods, but also classical nonconvex solvers. On the one hand, within eFIMA, we obtain an easily implemented and strictly convergent way to extend almost all the learning-based methods reviewed in Sec. 2.2. On the other hand, by specifying the two modules as the proximal operation and Nesterov's acceleration, respectively, eFIMA reduces to the classical nonconvex APG, so we can also obtain the same convergence results for a variety of prevalent APG methods [6, 8, 9].
3.3 Implicit Momentum via Error Control
Indeed, even with the explicit momentum schedule, we still may not obtain a globally convergent iteration. This is mainly because there is no policy to efficiently control the inexactness of the user-specified modules. In this subsection, we show how to address this issue by controlling the first-order optimality error during the iterations.
Specifically, we consider the auxiliary of F at the current iterate and denote its subdifferential³ as
(4)
where the scalar weight is the penalty parameter.
³ Strictly speaking, this is the so-called limiting Fréchet subdifferential. We state its formal definition and propose a practical computation scheme for it in the Supplemental Materials.
As shown in Alg. 2, at stage k, a variable is obtained by proximally minimizing the auxiliary function (i.e., Step 3 of Alg. 2). Roughly, this new variable is just an ensemble of the last updated iterate and the output of the user-specified modules, following the specific proximal structure of Eq. (1). Then the monitor is obtained by checking the boundedness of the first-order optimality error. Notice that the tolerance constant actually reveals our tolerance to the inexactness of the modules at the k-th iteration.
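The error-control policy can be sketched as below. This is our reading of Alg. 2, with our own names and a vanishing tolerance C / k² chosen for illustration; the residual formula is the standard proximal-gradient optimality surrogate (cf. Sec. 3.3.1):

```python
import numpy as np

def ifima_step(x_k, k, grad_f, prox_g, learned_update, gamma, C):
    """One iFIMA stage (a sketch of the error-control policy; names ours).
    The learned proposal is kept only when a first-order optimality
    residual stays below the tolerance C / k**2; otherwise we fall back
    to an exact proximal-gradient step from the previous iterate."""
    u = learned_update(x_k)                        # output of the learned modules
    v = prox_g(u - gamma * grad_f(u), gamma)       # proximal refinement of u
    # surrogate first-order optimality residual of v
    e = (u - v) / gamma + grad_f(v) - grad_f(u)
    if np.linalg.norm(e) <= C / float(k) ** 2:     # error under control: accept
        return v
    return prox_g(x_k - gamma * grad_f(x_k), gamma)  # monitor: exact PG step
```

Shrinking the tolerance to zero recovers exact PG, while a generous tolerance keeps the learned trajectory; the policy interpolates between the two automatically.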
Proposition 1.
Equipped with Proposition 1, it is straightforward to guarantee that the objective values generated by Alg. 2 also have sufficient descent. We therefore call this version of FIMA implicit momentum FIMA (iFIMA). The global convergence of iFIMA is proved as follows.
Theorem 2.
Let {x^k} be the sequence generated by iFIMA. Then {x^k} is bounded and any of its accumulation points is a critical point of F. If F is semialgebraic, we further have that {x^k} is a Cauchy sequence, and thus it globally converges to a critical point of F in Eq. (1).
Indeed, based on Theorem 2, it is also easy to obtain for iFIMA the same convergence rate as in Corollary 1.
Remark 2.
The result in Theorem 2 is even stronger than those for prevalent nonconvex APGs. This suggests that our devised error-control policy, together with the flexible algorithmic structure, should also benefit classical nonconvex algorithms.
Remark 3.
Remark 4.
However, it will be shown in Sec. 5 that the choices of the two modules do affect speed and accuracy in practice. This is because in FIMA the scheduling of learnable and numerical modules is adjusted automatically and adaptively, so that improper module choices will directly result in too many expensive refinements.
3.3.1 Practical Calculation of the Error Term in iFIMA
Here we propose a practical calculation scheme for the error term defined in Eq. (4) and used in Alg. 2. In fact, it is challenging to calculate it directly, since the subdifferential is often intractable in the nonconvex scenario. Fortunately, the following analysis provides an efficient practical calculation scheme within the FIMA framework. Specifically, from Alg. 2, we have
(5) 
On the other hand, from definition in Eq. (4), we have
(6) 
By the property of proximal operation, we have
(7) 
Therefore, by comparing Eqs. (5) and (7), we actually have the following practical calculation scheme for the error term:
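For the standard proximal-gradient refinement, this scheme has a well-known closed form, which we expect the comparison of Eqs. (5) and (7) to instantiate (a sketch under that assumption, with γ and L as in Sec. 2.1):

```latex
% If x^{k+1} = \mathrm{prox}_{\gamma g}\big(y^k - \gamma \nabla f(y^k)\big),
% the prox optimality condition gives
% \tfrac{1}{\gamma}\big(y^k - x^{k+1}\big) - \nabla f(y^k) \in \partial g(x^{k+1}),
% and hence
e^{k+1} := \tfrac{1}{\gamma}\big(y^k - x^{k+1}\big)
           + \nabla f(x^{k+1}) - \nabla f(y^k)
  \;\in\; \nabla f(x^{k+1}) + \partial g(x^{k+1}) = \partial F(x^{k+1}),
\qquad
\|e^{k+1}\| \le \Big(\tfrac{1}{\gamma} + L\Big)\,\|x^{k+1} - y^k\|.
```

The bound shows that the residual is computable from already-available quantities and vanishes exactly when the proximal step is a fixed point, which is what makes the error-control test in Alg. 2 practical.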
3.4 Multi-block Extension
In order to tackle inverse problems with multiple blocks of unknown variables (e.g., blind deconvolution and dictionary learning), we now discuss how to extend FIMA to multi-block NIPs, formulated with a set of unknown variables to be estimated under given linear operations on each block. The inference of such a problem can be addressed by solving
(8) 
where the coupling term is still differentiable and each block regularizer may also be nonsmooth and possibly nonconvex. Here both the smooth term and the blockwise regularizers follow the same assumptions as in Eq. (1), and the smooth term should also satisfy the generalized Lipschitz smooth property on bounded subsets. For ease of presentation, we stack all blocks into a single variable, with the subscripts defined in the same manner. Then we summarize the main iterations of multi-block FIMA (mFIMA) as follows (due to space limits, the details of mFIMA are presented in the Supplemental Material):
Here each block's monitor is obtained by the same error-control strategy as in iFIMA. Then we summarize our multi-block FIMA in Alg. 3 and prove the convergence of mFIMA in Corollary 2.
Corollary 2.
4 Applications
As a nontrivial byproduct, this section illustrates how to apply FIMA to practical inverse problems in the low-level vision area, such as image deconvolution in the standard nonblind and the more challenging blind scenarios.
Nonblind Deconvolution (Uni-block) aims to restore the latent image from a corrupted observation with a known blur kernel. In this part, we utilize the well-known sparse coding formulation [2], in which the observation is modeled as a given dictionary applied to an unknown sparse code plus unknown noise; the dictionary here is composed of the matrix form of the blur kernel and the inverse wavelet transform. So, by identifying the fidelity and the sparsity-promoting prior accordingly, we obtain a special case of Eq. (1) as follows
(9) 
Now we are ready to design the iterative modules to optimize the SC model in Eq. (9). With the well-known imaging formulation (where ⊗ denotes the convolution operator), we actually update the image estimate by solving a quadratic aggregation energy that combines the principles of the task with the information from the last updated variable, weighted by a positive constant. Then the corresponding module can be defined accordingly, i.e.,
(10)
where the extra term involves the identity matrix. It is easy to check that Eq. (10) can be efficiently calculated by FFT [24].
Blind Deconvolution (Multi-block) involves the joint estimation of both the latent image and the blur kernel, given only an observed blurry image. Here we formulate this problem in the image gradient domain and solve the following special case of Eq. (8) with two unknown variables (note that the variable denotes the image gradient in Eq. (11), whereas it denoted the sparse code in Eq. (9)):
(11) 
where the latter regularizer is the indicator function of the constraint set for blur kernels (each element is nonnegative and all elements sum to one). So the proximal updates in mFIMA corresponding to the two regularizers can be respectively calculated by hard-thresholding [3] and simplex projection [28]. Here we need to specify three modules for mFIMA. We first follow a similar idea as in the nonblind case to define the image-update module using the aggregated deconvolution energy
(12) 
where the weights are positive constants. We then train CNNs in the image gradient domain and solve Eq. (12) using the conjugate gradient method [29] to formulate the remaining modules, respectively.
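The numerical pieces above reduce to three standard computations. A minimal sketch follows, with our own naming, assuming periodic boundary conditions so that the convolution diagonalizes under the 2-D FFT:

```python
import numpy as np

def fft_quadratic_solve(y, k, z, alpha):
    """Closed-form minimizer of 0.5*||k conv x - y||^2 + 0.5*alpha*||x - z||^2
    under periodic boundary conditions (the structure of the quadratic
    aggregation energies in Eqs. (10)/(12)), diagonalized by the 2-D FFT."""
    K = np.fft.fft2(k, s=y.shape)                # kernel transfer function
    num = np.conj(K) * np.fft.fft2(y) + alpha * np.fft.fft2(z)
    den = np.abs(K) ** 2 + alpha                 # always > 0 for alpha > 0
    return np.real(np.fft.ifft2(num / den))

def hard_threshold(x, tau):
    """Proximal operator of the l0 penalty tau*||x||_0: keep entries whose
    squared magnitude exceeds 2*tau, zero out the rest."""
    out = x.copy()
    out[x ** 2 < 2.0 * tau] = 0.0
    return out

def project_simplex(k):
    """Euclidean projection onto {k : k >= 0, sum(k) = 1} (the blur-kernel
    constraint set), via the standard sort-based algorithm."""
    v = np.sort(k.ravel())[::-1]                 # sort descending
    css = np.cumsum(v)
    rho = np.nonzero(v - (css - 1.0) / (np.arange(v.size) + 1.0) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(k - theta, 0.0)
```

With a delta kernel the FFT solve degenerates to a pixelwise average of the data and the anchor image, a convenient sanity check on the implementation.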
5 Experimental Results
This section conducts experiments to verify our theoretical results and compares the performance of FIMA with other state-of-the-art learning-based iterative methods on real-world inverse problems. All experiments are performed on a PC with an Intel Core i7 CPU at 3.4 GHz, 32 GB RAM and an NVIDIA GeForce GTX 1050 Ti GPU. More results can be found in the Supplemental Materials.
5.1 Nonblind Image Deconvolution
We first evaluate FIMA on solving Eq. (9) for image restoration. The test images are collected from [24, 30], and different levels of Gaussian noise are added to generate our corrupted observations.
Modules Evaluation: Firstly, the influence of different choices of the fidelity module in FIMA is studied. Following Eq. (10), we adopt the aggregated update with varying weights. As for the prior module, different choices are also considered: classical PG, Recursive Filter [31], Total Variation [32] and CNNs. For the CNN module, we introduce a residual structure [33] and define it as a cascade of dilated convolution layers. ReLUs are added between every two linear layers, and batch normalization is used for the intermediate linear layers. We collect 800 images, of which 400 have been used in [24] and the other 400 are randomly sampled from ImageNet [34]. Here we adopt strategies similar to [17] to train the CNNs with different noise levels. Fig. 1 analyzes the contributions of the fidelity and prior modules. We observe that the aggregated fidelity update is relatively better than the alternatives, while the CNN-based prior performs consistently better and faster than the other strategies, so hereafter we always utilize it in eFIMA and iFIMA. We also observe that, even with different prior modules, a relatively large aggregation weight results in analogous quantitative results. Thus we fix this weight in eFIMA and iFIMA for all the experiments.
Convergence Behaviors: We then verify the convergence properties of FIMA, considering both each module in our algorithms and other nonconvex APGs. To be fair and comprehensive, we adopt fixed iteration numbers and iteration errors as the stopping criteria in Figs. 2 and 3, respectively.
In Fig. 2(a), (b), and (c), we plot the curves of objective values, reconstruction errors and iteration errors for FIMA with different settings. The legends respectively denote that at each iteration we only perform classical PG (i.e., only the last step in Algs. 1 and 2), only the task-driven modules (i.e., only Eq. (2)), or their naive combination (without any scheduling policies). It can be seen that the function values and reconstruction errors of PG decrease more slowly than our FIMA strategies, while both the purely learned curve and the naively combined curve (with PG refinement but without the "explicit momentum" or "error-control" policy) oscillate and fail to converge within 30 iterations. Moreover, we observe that naively adding PG to the learned modules makes the curve worse rather than correcting it toward a descent direction, which illustrates that the naive combination indeed breaks the convergence guarantee. In contrast, thanks to the choice mechanism in our algorithms, both eFIMA and iFIMA provide a reliable variable at each iteration that satisfies the convergence condition. We further explore this choice mechanism in Fig. 2(d). The "circles" in each curve mark iterations where the "explicit momentum" or "error-control" policy is satisfied, while the "triangles" denote that only PG is performed at the current stage. It can be seen that the eFIMA policy is stricter than that of iFIMA: the judgment policy fails at only a few iterations in eFIMA, while it is satisfied at almost all iterations in iFIMA. Both eFIMA and iFIMA perform better than the other compared schemes, which verifies the efficiency of the scheduling policies proposed in Sec. 3.
We also compare the iteration behaviors of FIMA with classical nonconvex APGs, including mAPG [6], APGnc [9] and the inexact niAPG [8], on the dataset collected by [24], which consists of 68 images corrupted by blur kernels with sizes ranging from 17×17 to 37×37. We add 1‰ and 1% Gaussian noise to generate the corrupted observations, respectively. In Fig. 3, the left four subfigures compare curves of iteration errors and PSNR on an example image, and the rightmost one illustrates the averaged iteration numbers and run time on the whole dataset. It can be seen that our eFIMA and iFIMA are faster and better than these abstractly designed classical solvers under the same iteration error. Moreover, we observe that the performance of these nonconvex APGs is not satisfactory when the noise level is larger: their PSNRs (Fig. 3(d)) decline after dozens of steps, while our FIMA maintains higher PSNR with fewer iterations. This illustrates that our strategy is more stable than traditional nonconvex APGs in image restoration, thanks to the flexible modules and effective choice mechanisms.
In Fig. 4, we illustrate the visual results of eFIMA and iFIMA in comparison with both convex image restoration approaches, including FISTA [2] (APG) and FTVd [35] (ADMM), and the nonconvex mAPG, APGnc, and niAPG, on an example image with 1% noise but a large kernel size (i.e., 75×75) [30]. Here FISTA and FTVd solve their original convex models, while mAPG, APGnc, and niAPG are based on the nonconvex model in Eq. (9). We observe that the APGs outperform the original PG, and the inexact niAPG is better than the exact mAPG and APGnc. Since FTVd is specifically designed for this task, it is the best among all classical solvers, but still worse than our FIMA. Overall, iFIMA obtains a higher PSNR than eFIMA, since the error-control mechanism tends to perform more accurate refinements.
Fig. 4 (PSNR/SSIM): Input; PG (24.97/0.79); mAPG (25.67/0.73); APGnc (25.68/0.73); niAPG (26.17/0.78); FISTA (25.03/0.68); FTVd (27.75/0.88); eFIMA (29.04/0.92); iFIMA (29.34/0.92); curves of scores.
Fig. 5 (PSNR/SSIM): Input; PP-ADMM (17.6/0.72); IRCNN (20.96/0.82); eFIMA (21.18/0.83); iFIMA (21.23/0.83).
State-of-the-art Comparisons: We compare FIMA with state-of-the-art image restoration approaches, including IDD-BM3D [36], EPLL [37], PP-ADMM [25], RTF [38] and IRCNN [17]. Fig. 5 first compares FIMA with two prevalent learning-based iterative approaches (i.e., PP-ADMM and IRCNN) on an example image with 5% noise. Tab. I then reports the averaged quantitative results of all compared methods on the image set collected by [24] with different levels of Gaussian noise (i.e., 1%, 2%, 3% and 4%). We observe that eFIMA and iFIMA not only outperform classical numerical solvers by a large margin in terms of speed and accuracy, but also achieve better performance than the other state-of-the-art approaches. Within FIMA, eFIMA is faster, while the PSNR and SSIM of iFIMA are relatively higher. This is mainly because the "error control" strategy tends to perform more refinements than the "explicit momentum" rule during the iterations.
Tab. I. Averaged quantitative results on the image set of [24] under different noise levels.

Noise  Metric   IDD-BM3D  EPLL    PP-ADMM  RTF     IRCNN  PG     mAPG   APGnc  niAPG  eFIMA  iFIMA
1%     PSNR     28.83     28.67   28.01    29.12   29.78  27.32  26.68  26.69  27.24  29.81  29.85
       SSIM     0.81      0.81    0.78     0.83    0.84   0.71   0.67   0.67   0.73   0.85   0.85
       Time(s)  193.13    112.03  293.99   249.83  2.67   20.36  13.02  7.16   5.29   1.89   2.06
2%     PSNR     27.60     26.79   26.54    25.58   27.90  25.61  25.20  25.28  25.63  28.02  28.06
       SSIM     0.76      0.74    0.72     0.66    0.78   0.63   0.60   0.61   0.64   0.79   0.79
       Time(s)  198.66    100.52  270.45   254.26  2.68   15.43  7.70   4.66   3.30   1.90   2.07
3%     PSNR     26.72     25.68   25.78    21.18   26.81  24.63  24.39  24.48  24.76  27.05  27.07
       SSIM     0.72      0.69    0.68     0.42    0.73   0.57   0.55   0.56   0.61   0.74   0.75
       Time(s)  191.25    96.32   257.94   252.47  2.68   13.89  6.44   5.37   2.63   1.89   2.07
4%     PSNR     26.06     24.88   25.27    17.95   26.10  24.05  23.88  23.95  24.14  26.20  26.37
       SSIM     0.69      0.65    0.66     0.28    0.70   0.54   0.53   0.53   0.59   0.70   0.72
       Time(s)  183.44    93.82   258.45   255.84  2.67   11.99  6.01   7.82   2.35   1.89   2.07
5.2 Blind Image Deconvolution
Blind deconvolution is known as one of the most challenging low-level vision tasks. Here we evaluate mFIMA on solving Eq. (11) to address this fundamentally ill-posed multi-variable inverse problem. We adopt the same CNN module as in Sec. 5.1 but train it in the image gradient domain to enhance its ability to detect sharp edges.
In Fig. 6, we show the visual performance of mFIMA in different settings (i.e., with and without the CNN modules) on an example blurry image from [39]. We observe that mFIMA without the learned modules almost fails on this experiment. This is not surprising, since [39, 40] have shown that the standard optimization strategy is likely to lead to degenerate global solutions like the delta kernel (frequently called the no-blur solution), or to many suboptimal local minima. In contrast, the CNN-based modules successfully avoid trivial results and significantly improve the deconvolution performance. On the bottom row we also plot the curves of quantitative scores (i.e., PSNR for the latent image and Kernel Similarity (KS) for the blur kernel) for these two strategies. As these scores are stable after 20 iterations, we only plot the first 20 iterations.
Method  PSNR  SSIM  ER  KS  Time(s) 

Perrone et al.  29.27  0.88  1.35  0.80  113.70 
Levin et al.  29.03  0.89  1.40  0.81  41.77 
Sun et al.  29.71  0.90  1.32  0.82  209.47 
Zhang et al.  28.01  0.86  1.25  0.58  37.45 
Pan et al.  29.78  0.89  1.33  0.80  102.60 
Ours  30.37  0.91  1.20  0.83  5.65 
We then compare mFIMA with state-of-the-art deblurring methods⁶, including Perrone et al. [41], Levin et al. [39], Sun et al. [40], Zhang et al. [42] and Pan et al. [43], on the most widely used benchmark of Levin et al. [39], which consists of 32 blurred images generated from 4 clean images and 8 blur kernels. Tab. II reports the averaged quantitative scores, including PSNR, SSIM and Error Rate (ER) for the latent image, Kernel Similarity (KS) for the blur kernel, and the overall run time. Fig. 7 further compares the visual performance of mFIMA with Perrone et al., Sun et al. and Pan et al. (i.e., the top 3 in Tab. II) on a real-world challenging blurry image collected by [30]. It can be seen that mFIMA consistently outperforms all compared methods both quantitatively and qualitatively, which verifies the efficiency of our proposed learning-based iteration methodology.
⁶ In this and the following experiments, the widely used multi-scale techniques are adopted for all compared methods.
In Figs. 8 and 9, we further compare the blind image deconvolution performance of mFIMA with Perrone et al. [41], Sun et al. [40] and Pan et al. [43] (the top 3 among all compared methods in Tab. II) on example images corrupted not only by unknown blur kernels but also by different levels of Gaussian noise (1% and 3% in Figs. 8 and 9, respectively). It can be seen that mFIMA is robust to these corruptions and outperforms all the compared state-of-the-art deblurring methods.
Input  Perrone et al.  Sun et al.  Pan et al.  Ours 
Input  Perrone et al.  Sun et al.  Pan et al.  mFIMA 
  (15.96 / 0.49 / 0.80)  (17.35 / 0.60 / 0.88)  (14.39 / 0.44 / 0.54)  (18.11 / 0.58 / 0.95) 
Input  Perrone et al.  Sun et al.  Pan et al.  mFIMA 
  (24.76 / 0.75 / 0.48)  (20.48 / 0.56 / 0.32)  (28.05 / 0.83 / 0.40)  (31.25 / 0.87 / 0.89) 
6 Conclusion
This paper presented FIMA, a framework to analyze the convergence behaviors of learning-based iterative methods for nonconvex inverse problems. We proposed two novel mechanisms to adaptively guide the trajectories of learning-based iterations and proved their strict convergence. We also showed how to apply FIMA to real-world applications, such as nonblind and blind image deconvolution.
Appendix A Proofs
We first give some preliminaries on variational analysis and nonconvex optimization in Sec. A.1. Secs. A.2 to A.4 then prove the main results in our manuscript.
A.1 Preliminaries
Definition 1.
[44] The necessary function properties, including proper, lower semicontinuous, Lipschitz smooth, and coercive, are summarized as follows. Let h : R^n → (−∞, +∞]. Then we have

Proper and lower semicontinuous: h is proper if dom h := {x : h(x) < +∞} is nonempty and h > −∞. h is lower semicontinuous if lim inf_{x→x̄} h(x) ≥ h(x̄) at any point x̄.

Coercive: h is said to be coercive if h is bounded from below and h(x) → +∞ when ‖x‖ → +∞, where ‖·‖ is the ℓ2 norm.

Lipschitz smooth: h is Lipschitz smooth if it is differentiable and there exists L > 0 such that
  ‖∇h(x) − ∇h(y)‖ ≤ L‖x − y‖, ∀x, y.
If h is Lipschitz smooth, we have the following descent inequality
  h(y) ≤ h(x) + ⟨∇h(x), y − x⟩ + (L/2)‖y − x‖², ∀x, y.
Definition 2.
[44, 7] Let h be a proper and lower semicontinuous function. Then we have

Subdifferential: The Fréchet subdifferential (denoted ∂̂h) of h at a point x is the set of all vectors v which satisfy
  lim inf_{y→x, y≠x} [h(y) − h(x) − ⟨v, y − x⟩] / ‖y − x‖ ≥ 0,
where ⟨·,·⟩ denotes the inner product. Then the limiting Fréchet subdifferential (denoted ∂h) at x is the following closure of ∂̂h:
  ∂h(x) = {v : ∃ x^k → x with h(x^k) → h(x) and v^k ∈ ∂̂h(x^k) with v^k → v}.

Kurdyka-Łojasiewicz property: h is said to have the Kurdyka-Łojasiewicz property at x̄ ∈ dom ∂h if there exist η ∈ (0, +∞], a neighborhood U of x̄ and a desingularizing function φ : [0, η) → R_+ which satisfies (1) φ is continuous at 0 and φ(0) = 0; (2) φ is concave and C¹ on (0, η); (3) φ′(t) > 0 for all t ∈ (0, η), such that for all x ∈ U ∩ {x : h(x̄) < h(x) < h(x̄) + η} the following inequality holds
  φ′(h(x) − h(x̄)) · dist(0, ∂h(x)) ≥ 1.
Moreover, if h satisfies the KŁ property at each point of dom ∂h, then h is called a KŁ function.

Semialgebraic set and function: A subset S of R^n is a real semialgebraic set if there exist a finite number of real polynomial functions P_{ij}, Q_{ij} such that
  S = ∪_j ∩_i {x ∈ R^n : P_{ij}(x) = 0, Q_{ij}(x) < 0}.   (13)
h is called semialgebraic if its graph {(x, t) : h(x) = t} is a semialgebraic subset of R^{n+1}. It is verified in [7] that all semialgebraic functions satisfy the KŁ property.
A.2 Explicit Momentum FIMA (eFIMA)
A.2.1 Proof of Theorem 1
Proof.
We first prove an inequality relationship between successive iterates. According to the update rule in Step 8 of Alg. 1, we have
(14) 
thus
(15) 
Since is , we have
(16) 
where L is the Lipschitz modulus of the gradient. Combining this with Eqs. (15) and (16), we have
(17) 
Set and define , we have and .
Then we prove the boundedness and convergence of the sequence. Based on the momentum scheduling policy in Alg. 1, we obviously have . This together with the result in Eq. (17) concludes that for any ,
(18) 
Since both and are proper, we also have . Thus both sequences and are nonincreasing and bounded. This together with the coercivity of the objective concludes that both and are bounded and thus have accumulation points.
Then we prove that all accumulation points are the critical points of . From Eq. (18), we actually have that the objective sequences and converge to the same value , i.e.,
(19) 
From Eqs. (17) and (18), we have
(20) 
Summing over , we further have
(21) 
The above inequality implies that and hence and share the same set of accumulation points (denoted as ). Consider that is any accumulation point of , i.e., if . Then by Eq. (14), we have
(22) 
Let in Eq. (22) and , by taking on both sides of Eq. (22), we have . On the other hand, since is lower semicontinuous and , it follows that . So we have . Note that the continuity of yields , so we conclude
(23) 
Recall that in Eq. (19), we have , so
(24) 
By the first-order optimality condition of Eq. (14) and , we have
(25) 
Thus, we have