# On the Convergence of Learning-based Iterative Methods for Nonconvex Inverse Problems

Numerous tasks at the core of statistics, learning, and vision are specific cases of ill-posed inverse problems. Recently, learning-based (e.g., deep) iterative methods have been empirically shown to be useful for these problems. Nevertheless, integrating learnable structures into iterations remains a laborious process that can only be guided by intuition or empirical insight. Moreover, there is a lack of rigorous analysis of the convergence behaviors of these reimplemented iterations, so the significance of such methods remains somewhat unclear. This paper moves beyond these limits and proposes the Flexible Iterative Modularization Algorithm (FIMA), a generic and provable paradigm for nonconvex inverse problems. Our theoretical analysis reveals that FIMA allows us to generate globally convergent trajectories for learning-based iterative methods. Meanwhile, the devised scheduling policies on flexible modules should also be beneficial for classical numerical methods in the nonconvex scenario. Extensive experiments on real applications verify the superiority of FIMA.


## 1 Introduction

In applications throughout statistics, machine learning and computer vision, one is often faced with the challenge of solving ill-posed inverse problems. In general, the basic inverse problem leads to a discrete linear system of the form $\mathbf{y} = \mathbf{T}\mathbf{x} + \mathbf{n}$, where $\mathbf{x}$ is the latent variable to be estimated, $\mathbf{T}$ denotes some given linear operation on $\mathbf{x}$, and $\mathbf{y}$ and $\mathbf{n}$ are the observation and an unknown error term, respectively. Typically, these inverse problems can be addressed by solving the composite minimization model:

$$\min_{\mathbf{x}} \Psi(\mathbf{x}) := f(\mathbf{x}; \mathbf{T}, \mathbf{y}) + g(\mathbf{x}), \tag{1}$$

where $f$ is the fidelity term that captures the loss of data fitting, and $g$ refers to the prior that promotes the desired distribution on the solution. Recent studies illustrate that many problems (e.g., image deconvolution, matrix factorization and dictionary learning) naturally require Eq. (1) to be solved in the nonconvex scenario. This trend motivates us to investigate Nonconvex Inverse Problems (NIPs) in the form of Eq. (1) under the practical configuration that $f$ is continuously differentiable, $g$ is nonsmooth, and both $f$ and $g$ are possibly nonconvex.

Over the past decades, a broad class of first-order methods has been developed to solve special instances of Eq. (1). For example, by integrating Nesterov's acceleration [1] into the fundamental Proximal Gradient (PG) scheme, the Accelerated Proximal Gradient (APG, a.k.a. FISTA [2]) method was initially proposed to solve convex models of the form of Eq. (1) for different applications, such as image restoration [2], image deblurring [3], and sparse/low-rank learning [4]. While these APGs generate a sequence of objective values that may oscillate [2], [5] developed a variant of APG that guarantees the monotonicity of this sequence. For nonconvex energies of the form of Eq. (1), Li and Lin [6] investigated a monotone APG (mAPG) and proved its convergence under the Kurdyka-Łojasiewicz (KŁ) condition [7]. The work in [8] developed another variant of APG (APGnc) for nonconvex problems, but its original analysis only characterized fixed-point convergence. Recently, Li et al. [9] also proved the subsequence convergence of APGnc and estimated its convergence rates by further exploiting the KŁ property.

Unfortunately, even with theoretically proven convergence properties, these classical numerical solvers may still fail in real-world scenarios. This is mainly because the abstractly designed and fixed updating schemes exploit neither the particular structure of the problem at hand nor the distribution of the input data [10].

In recent years, various learning-based strategies [11, 12, 13, 14, 15] have been proposed to address practical inverse problems in the form of Eq. (1). These methods first introduce hyperparameters into classical numerical solvers and then perform discriminative learning on collected training data to obtain data-specific (but possibly inconsistent) iteration schemes. Inspired by the success of deep learning in different application fields, some preliminary studies considered handcrafted network architectures as implicit priors (a.k.a. deep priors) for inverse problems. Following this perspective, various deep priors have been designed and nested into numerical iterations [16, 17, 18]. Alternatively, the works in [19] and [20] addressed the iteration-learning issue from the perspectives of deep reinforcement learning and recurrent learning, respectively.

Nevertheless, existing hyperparameter-learning approaches can only build iterations on specific energy forms (e.g., $\ell_1$-type penalties and MRFs), so they are inapplicable to more generic inverse problems. Meanwhile, due to the severe inconsistency of the parameters across iterations, rigorous analysis of the resulting trajectories is also missing. Deep iterative methods have been applied to many learning and vision problems in practice; however, due to their complex network structures, few or even no results have been established on the convergence behaviors of these methods. In summary, the lack of strict theoretical investigation is one of the most fundamental limitations of prevalent learning-based iterative methods, especially in the challenging nonconvex scenario.

To break the limits of prevalent approaches, this paper explores the Flexible Iterative Modularization Algorithm (FIMA), a generic and convergent algorithmic framework that combines learnable architectures (e.g., mainstream deep networks) with principled knowledge (formulated by mathematical models) to tackle the challenging NIPs in Eq. (1). Specifically, derived from the fundamental forward-backward updating mechanism, FIMA replaces the specific calculations corresponding to the fidelity and the prior in Eq. (1) with two user-specified (learnable) computational modules. A series of theoretical investigations is established for FIMA. For example, we first prove the subsequence convergence of FIMA with an explicit momentum policy (called eFIMA), which is as good as that of mathematically designed nonconvex proximal methods with Nesterov's acceleration (e.g., the various APGs in [6, 8, 9]). By introducing a carefully devised error-control policy (i.e., an implicit momentum policy, called iFIMA), we further strengthen the results and obtain a globally convergent Cauchy sequence for Eq. (1). We prove that this guarantee is also preserved for FIMA with multiple blocks of unknown variables (called mFIMA). As a nontrivial byproduct, we finally show how to specify the modules in FIMA for challenging inverse problems in the low-level vision area (e.g., non-blind and blind image deconvolution). Our primary contributions are summarized as follows:

1. FIMA provides a generic framework that unifies almost all existing learning-based iterative methods, as well as a series of scheduling policies that make it possible to develop theoretically convergent learning-based iterations for challenging nonconvex inverse problems in the form of Eq. (1).

2. Even with highly flexible (learnable) iterations, the convergence guarantees obtained by FIMA are still as good as (eFIMA) or better than (iFIMA) those of prevalent mathematically designed nonconvex APGs. It is worth noting that our devised scheduling policies, together with the flexible algorithmic structures, should also be beneficial for classical nonconvex algorithms.

3. FIMA also provides a practical and effective ensemble of domain knowledge and learned data distributions for real applications. Thus we can combine the expressive power of knowledge-based and data-driven methodologies to yield state-of-the-art performance on challenging low-level vision tasks.

## 2 Related Work

### 2.1 Classical First-order Numerical Solvers

We first briefly review a group of classical first-order algorithms, which have been widely used to solve inverse problems. The gradient descent (GD) scheme on a differentiable function $f$ can be reformulated as minimizing the following quadratic approximation of $f$ at a given point $v^k$ with step size $\gamma$, i.e., $x^{k+1} = \arg\min_{x} f(v^k) + \langle \nabla f(v^k), x - v^k \rangle + \frac{1}{2\gamma}\|x - v^k\|^2$. As for the nonsmooth function $g$, its proximal mapping (PM) with parameter $\gamma$ can be defined as $\mathrm{prox}_{\gamma g}(x) = \arg\min_{u} g(u) + \frac{1}{2\gamma}\|u - x\|^2$. So it is natural to consider PG as the cascade of GD (on $f$) and PM (on $g$), or equivalently as optimizing the quadratic approximation of Eq. (1), i.e., $x^{k+1} \in \mathrm{prox}_{\gamma g}\big(v^k - \gamma \nabla f(v^k)\big)$, where $v^k$ is some calculated variable at the $k$-th iteration. Thus most prevalent proximal schemes can be summarized as

where the two error terms in (B-2) denote the errors in the PM and GD calculations, respectively [21]. Within this general scheme, we first obtain the original PG by taking $v^k = x^k$ (i.e., (A-1)) and computing the exact PM in (B-1) [2]. Using Nesterov's acceleration [1] (i.e., (A-2) with a momentum parameter), we have the well-known APG method [2, 6, 9]. Moreover, by introducing error terms to capture the inexactness of the PM and GD steps (i.e., (B-2)), we obtain inexact PG and APG for both convex [22] and nonconvex [21] problems. Notice that in the nonconvex scenario, most classical APGs can only guarantee subsequence convergence to the critical points [6, 9].
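The forward-backward (PG) scheme above can be sketched in a few lines. This is a minimal illustrative instance using a convex $\ell_1$ prior for concreteness (so that the proximal map has a closed form); all names are assumptions, not the paper's implementation:

```python
import numpy as np

def prox_l1(x, gamma):
    # Proximal mapping of gamma * ||x||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, n_iter=500):
    """Plain PG: x^{k+1} = prox_{step*g}(x^k - step * grad_f(x^k))."""
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Toy LASSO-type instance: f(x) = 0.5 * ||Ax - y||^2, g(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
y = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - y)
step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1/L, with L the Lipschitz modulus of grad f
x_star = proximal_gradient(grad_f, lambda v, s: prox_l1(v, lam * s),
                           np.zeros(5), step)
```

With the step size set to $1/L$, the iterates decrease the composite objective monotonically, which is exactly the behavior the (A-1)/(B-1) scheme guarantees in the convex case.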

### 2.2 Learning-based Iterative Methods

In [11], a trained version of FISTA (called LISTA) was introduced to approximate the solution of LASSO. [23, 10] extended LISTA to more generic sparse coding tasks and provided an adaptive acceleration. Unfortunately, LISTA is built on convex regularization and thus may not be applicable to other complex nonconvex inverse problems (e.g., with nonconvex sparsity priors). By introducing hyperparameters into MRFs and solving the resulting variational model with different iteration schemes, various learning-based iterative methods have been proposed for inverse problems in the image domain (e.g., denoising, super-resolution, and MRI imaging). For example, [24, 13, 14, 25, 15] have considered half-quadratic splitting, gradient descent, the Alternating Direction Method of Multipliers (ADMM) and primal-dual methods, respectively. But their parameterizations are completely based on MRF priors. Even worse, the original convergence properties are lost in the resulting iterations.

To better model complex image degradations, [16, 17, 18] considered Convolutional Neural Networks (CNNs) as implicit priors for image restoration. Since these methods discard the regularization term in Eq. (1), we may not enforce principled constraints on their solutions. It is also unclear when and where these iterative trajectories should stop. Another group of very recent works [19, 20] directly formulated the descent directions from a reinforcement-learning perspective or using recurrent networks. However, due to their high computational budgets, they can only be applied to relatively simple tasks (e.g., linear regression). Besides, due to their complex topological network structures, it is extremely hard to provide strict theoretical analysis for these methods.

## 3 The Proposed Algorithms

This section develops the Flexible Iterative Modularization Algorithm (FIMA) for nonconvex inverse problems in Eq. (1). The convergence behaviors are also investigated accordingly. Hereafter, some fairly loose assumptions are enforced on Eq. (1): $f$ is proper and Lipschitz smooth (with modulus $L$) on a bounded set, $g$ is proper, lower semi-continuous and proximable (a function $g$ is proximable if its proximal mapping can be easily computed for the given point and parameter), and $\Psi$ is coercive. Notice that the proofs and definitions are deferred to the Supplementary Materials.

### 3.1 Abstract Iterative Modularization

As summarized in Sec. 2.1, a large amount of first-order methods can be summarized as forward-backward-type iterations. This motivates us to consider the following even more abstract updating principle:

 $$x^{k+1} = \mathcal{A}_g \circ \mathcal{A}_f(x^k), \tag{2}$$

where $\mathcal{A}_f$ and $\mathcal{A}_g$ respectively stand for the user-specified modules for $f$ and $g$, and $\circ$ denotes operator composition. Building upon this formulation, it is easy to see that designing a learning-based iterative method reduces to the problem of iteratively specifying and learning $\mathcal{A}_f$ and $\mathcal{A}_g$.

It is straightforward to see that most prevalent approaches [16, 17, 18, 24, 13, 14, 15] naturally fall into this general formulation. Nevertheless, it is currently impossible to provide strict theoretical results for the practical trajectories of Eq. (2). This is mainly due to the lack of efficient mechanisms to control the propagations generated by these handcrafted operations. Fortunately, in the following, we introduce different scheduling policies to automatically guide the iterations in Eq. (2), resulting in a series of theoretically convergent learning-based iterative methods.
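The abstract update in Eq. (2) is nothing more than operator composition. A minimal sketch, where the contents of the two modules are illustrative placeholders rather than any actual learned operators:

```python
def compose(*modules):
    """Right-to-left composition, mirroring A_g ∘ A_f in Eq. (2)."""
    def composed(x):
        for m in reversed(modules):
            x = m(x)
        return x
    return composed

# Placeholder modules: a gradient-style step for f(x) = x^2 and a
# proximal-style shrinkage for g; any learnable map could be dropped in.
A_f = lambda x: x - 0.1 * (2.0 * x)
A_g = lambda x: max(x - 0.05, 0.0)

step = compose(A_g, A_f)      # one abstract FIMA-style iteration
x = 1.0
for _ in range(10):
    x = step(x)
```

The point of the abstraction is that `A_f` and `A_g` can be swapped for classical numerical steps or deep networks without changing the outer loop.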

### 3.2 Explicit Momentum: A Straightforward Strategy

The momentum of objective values is one of the most important properties of numerical iterations, and it is also necessary for analyzing the convergence of some classical algorithms. Inspired by these points, we present an explicit momentum FIMA (eFIMA, i.e., Alg. 1), in which we explicitly compare the objective values of the two candidate updates and choose the variable with the smaller value as our monitor. Finally, a proximal refinement is performed to adjust the learning-based update at each stage.
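One stage of this policy can be sketched as follows. The names `Psi`, `module_update` output, and `pg_refine` are assumptions standing in for Alg. 1's exact interface: the policy only needs to compare two objective values and refine the winner:

```python
def efima_step(y_k, u_k, Psi, pg_refine):
    """Explicit-momentum policy (sketch of one eFIMA stage):
    y_k          -- momentum/previous candidate
    u_k          -- output of the learnable modules A_g ∘ A_f
    Psi          -- composite objective f + g
    pg_refine    -- a proximal-gradient refinement step
    The monitor v_k is whichever candidate has the smaller objective."""
    v_k = u_k if Psi(u_k) <= Psi(y_k) else y_k
    return pg_refine(v_k)
```

For instance, with `Psi = lambda t: t * t`, a module output that increases the objective is rejected in favor of the previous candidate, so the subsequent refinement always starts from the better of the two points.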

The following theorem first verifies the sufficient descent of $\Psi$ and then proves the subsequence convergence of eFIMA. Notably, these results do not depend on any specific choices of $\mathcal{A}_f$ and $\mathcal{A}_g$.

###### Theorem 1.

Let $\{x^k\}$ be the sequence generated by eFIMA. Then at the $k$-th iteration, there exists a sequence $\{\alpha_k\}$ of positive constants, such that

 $$\Psi(x^{k+1}) \le \Psi(v^k) - \alpha_k \|x^{k+1} - v^k\|^2, \tag{3}$$

where $v^k$ is the monitor in Alg. 1. Furthermore, $\{x^k\}$ is bounded and any of its accumulation points is a critical point of $\Psi$ in Eq. (1).

Based on Theorem 1 and considering $\Psi$ as a semi-algebraic function (indeed, a variety of functions, e.g., indicator functions of polyhedral sets and rational penalties, satisfy the semi-algebraic property [26]), the convergence rate of eFIMA can be straightforwardly estimated as follows.

###### Corollary 1.

Let $\varphi(t) = \frac{c}{\theta}t^{\theta}$ be a desingularizing function with a constant $c > 0$ and a parameter $\theta \in (0, 1]$ [27]. Then $\{x^k\}$ generated by eFIMA converges after finitely many iterations if $\theta = 1$. Linear and sub-linear rates are obtained by choosing $\theta \in [1/2, 1)$ and $\theta \in (0, 1/2)$, respectively.

###### Remark 1.

Theorem 1 and Corollary 1 actually provide a unified methodology to analyze convergence issues not only for learning-based methods, but also for classical nonconvex solvers. On the one hand, within eFIMA, we obtain an easily implemented and strictly convergent way to extend almost all the learning-based methods reviewed in Sec. 2.2. On the other hand, by specifying $\mathcal{A}_g$ and $\mathcal{A}_f$ respectively as the proximal operation and a Nesterov-accelerated gradient step, eFIMA reduces to the classical nonconvex APG, so we also obtain the same convergence results for a variety of prevalent APG methods [6, 8, 9].

### 3.3 Implicit Momentum via Error Control

Indeed, even with the explicit momentum schedule, we still may not obtain a globally convergent iteration. This is mainly because there is no policy to efficiently control the inexactness of the user-specified modules (i.e., $\mathcal{A}_f$ and $\mathcal{A}_g$). In this subsection, we show how to address this issue by controlling the first-order optimality error during the iterations.

Specifically, we consider the auxiliary function of $\Psi$ at $x^k$ (denoted $\Psi_k$) and denote an element of its sub-differential (denoted $d_x\Psi_k$; strictly speaking, $\partial\Psi_k$ is the so-called limiting Fréchet sub-differential, whose formal definition and a practical computation scheme are given in the Supplemental Materials) as

 $$\Psi_k(x) = f(x) + g(x) + \frac{\mu_k}{2}\|x - x^k\|^2, \qquad d_x\Psi_k = d_x g + \nabla f(x) + \mu_k(x - x^k) \in \partial\Psi_k(x), \tag{4}$$

where $\mu_k$ is a penalty parameter and $d_x g \in \partial g(x)$.

As shown in Alg. 2, at stage $k$, a variable $\tilde{u}^k$ is obtained by proximally minimizing $\Psi_k$ (i.e., Step 3 of Alg. 2). Roughly, this new variable is just an ensemble of the last updated iterate and the output of the user-specified modules, following the specific proximal structure in Eq. (1). Then the monitor is obtained by checking the boundedness of the first-order optimality error $d_{\tilde{u}^k}\Psi_k$. Notice that the tolerance constant actually reveals how much inexactness of the modules we accept at the $k$-th iteration.
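A minimal sketch of this monitor selection, assuming the optimality error is compared against a tolerance proportional to the step length; the constant `C_k` and the exact form of the test in Alg. 2 are illustrative assumptions:

```python
import numpy as np

def ifima_monitor(u_tilde, x_k, d_psi, C_k):
    """Error-control policy (sketch): accept the refined point ~u_k only
    when its first-order optimality error d_{~u_k} Psi_k is suitably
    bounded; otherwise fall back to the previous iterate x_k."""
    if np.linalg.norm(d_psi) <= C_k * np.linalg.norm(u_tilde - x_k):
        return u_tilde
    return x_k
```

A large error (i.e., a badly behaved module output) simply triggers the fallback, which is what keeps the overall trajectory under control regardless of what the learnable modules do.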

###### Proposition 1.

Let $\{\tilde{u}^k\}$ and $\{x^k\}$ be the sequences generated by Alg. 2. Then there exist two bounding sequences such that the descent inequality (3) in Theorem 1 and a corresponding bound on the first-order optimality error $\|d_{\tilde{u}^k}\Psi_k\|$ are respectively satisfied.

Equipped with Proposition 1, it is straightforward to guarantee that the sequence of objective values generated by Alg. 2 also descends sufficiently. We therefore call this version of FIMA the implicit momentum FIMA (iFIMA). The global convergence of iFIMA is proved as follows.

###### Theorem 2.

Let $\{x^k\}$ be the sequence generated by iFIMA. Then $\{x^k\}$ is bounded and any of its accumulation points is a critical point of $\Psi$. If $\Psi$ is semi-algebraic, we further have that $\{x^k\}$ is a Cauchy sequence, and thus it globally converges to a critical point of $\Psi$ in Eq. (1).

Indeed, based on Theorem 2, it is also easy to obtain the same convergence rate as that in Corollary 1 for iFIMA.

###### Remark 2.

The result in Theorem 2 is even stronger than those for prevalent nonconvex APGs. This actually suggests that our devised error-control policy, together with the flexible algorithmic structure, should also be beneficial for classical nonconvex algorithms.

###### Remark 3.

Theorems 1 and 2 indicate that the convergence of FIMA does not, in general, depend on the particular choices of $\mathcal{A}_f$ and $\mathcal{A}_g$. This allows us to utilize different types of iterative modules, such as classical numerical schemes, off-the-shelf methods, and deep networks.

###### Remark 4.

However, it will be shown in Sec. 5 that the choices of $\mathcal{A}_f$ and $\mathcal{A}_g$ do affect speed and accuracy in practice. This is because, in FIMA, the scheduling of the learnable and numerical modules is adjusted automatically and adaptively, so improper choices of $\mathcal{A}_f$ or $\mathcal{A}_g$ will directly result in too many expensive refinements.

#### 3.3.1 Practical Calculation of $d_{\tilde{u}^k}\Psi_k$ in iFIMA

Here we propose a practical calculation scheme for $d_{\tilde{u}^k}\Psi_k$, defined in Eq. (4) and used in Alg. 2. In fact, it is challenging to calculate $d_{\tilde{u}^k}\Psi_k$ directly, since the sub-differential $\partial g$ is often intractable in the nonconvex scenario. Fortunately, the following analysis provides an efficient practical calculation scheme for $d_{\tilde{u}^k}\Psi_k$ within the FIMA framework. Specifically, from Alg. 2, we have

 $$\tilde{u}^k \in \mathrm{prox}_{\gamma_k g}\Big(u^k - \gamma_k\big(\nabla f(u^k) + \mu_k(u^k - x^k)\big)\Big). \tag{5}$$

On the other hand, from the definition in Eq. (4), we have

 $$d_{\tilde{u}^k}\Psi_k = d_{\tilde{u}^k}g + \nabla f(\tilde{u}^k) + \mu_k(\tilde{u}^k - x^k) \;\Rightarrow\; d_{\tilde{u}^k}g = d_{\tilde{u}^k}\Psi_k - \nabla f(\tilde{u}^k) - \mu_k(\tilde{u}^k - x^k) \in \partial g(\tilde{u}^k). \tag{6}$$

By the first-order optimality condition of the proximal operation in Eq. (5), we also have

 $$\frac{1}{\gamma_k}\Big(u^k - \gamma_k\big(\nabla f(u^k) + \mu_k(u^k - x^k)\big) - \tilde{u}^k\Big) \in \partial g(\tilde{u}^k). \tag{7}$$

Therefore, by comparing Eqs. (6) and (7), we actually have the following practical calculation scheme for $d_{\tilde{u}^k}\Psi_k$:

 $$d_{\tilde{u}^k}\Psi_k = \big(\mu_k - 1/\gamma_k\big)\big(\tilde{u}^k - u^k\big) - \big(\nabla f(u^k) - \nabla f(\tilde{u}^k)\big).$$
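This closed-form expression needs only two gradient evaluations and no access to $\partial g$. A direct transcription, with assumed names:

```python
import numpy as np

def d_psi_k(u_tilde, u, grad_f, mu_k, gamma_k):
    """Practical evaluation of d_{~u_k} Psi_k:
       (mu_k - 1/gamma_k) * (~u_k - u_k) - (grad f(u_k) - grad f(~u_k))."""
    return (mu_k - 1.0 / gamma_k) * (u_tilde - u) \
           - (grad_f(u) - grad_f(u_tilde))
```

As a sanity check, for $f(x) = \frac{1}{2}\|x\|^2$ (so $\nabla f(x) = x$) and $\mu_k = 1/\gamma_k$, the first term vanishes and the error reduces to $\tilde{u}^k - u^k$, which is zero exactly when the proximal step leaves the point unchanged.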

### 3.4 Multi-block Extension

In order to tackle inverse problems with multiple blocks of unknown variables (e.g., blind deconvolution and dictionary learning), we now discuss how to extend FIMA to multi-block NIPs, formulated on the linear system $\mathbf{y} = \mathbf{T}(\mathbf{X}) + \mathbf{n}$, where $\mathbf{X} = \{x_1, \dots, x_N\}$ is a set of unknown variables to be estimated. Notice that here $\mathbf{T}$ should be some given linear operation on $\mathbf{X}$. The inference of such a problem can be addressed by solving

 $$\min_{\mathbf{X}} \Psi(\mathbf{X}) := f(\mathbf{X}; \mathbf{T}, \mathbf{y}) + \sum_{n=1}^{N} g_n(x_n), \tag{8}$$

where $f$ is still differentiable and each $g_n$ may also be nonsmooth and possibly nonconvex. Here both $f$ and the block-wise $g_n$ ($n = 1, \dots, N$) follow the same assumptions as in Eq. (1), and $f$ should also satisfy the generalized Lipschitz smooth property on bounded subsets of its domain. For ease of presentation, we write $\mathbf{X}^{k+1}_{[n]}$ for the collection in which blocks $1, \dots, n-1$ take their iteration-$(k+1)$ values and blocks $n, \dots, N$ keep their iteration-$k$ values; other subscripted quantities are defined in the same manner. We then summarize the main iteration of the multi-block FIMA (mFIMA) as follows (due to space limits, the details of mFIMA are presented in the Supplemental Material):

 $$u^k_n = \mathcal{A}_{g_n} \circ \mathcal{A}_f\big(\mathbf{X}^{k+1}_{[n]}\big).$$

Here the monitor of each block is obtained by the same error-control strategy as in iFIMA. We summarize the full multi-block FIMA in Alg. 3 and prove the convergence of mFIMA in Corollary 2.
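The block-wise sweep can be sketched as follows; the per-block update callables are illustrative placeholders for $\mathcal{A}_{g_n} \circ \mathcal{A}_f$ applied to $\mathbf{X}^{k+1}_{[n]}$:

```python
def mfima_sweep(blocks, updates):
    """One mFIMA-style sweep (sketch). blocks: list of per-block variables;
    updates: one callable per block, each taking the full current list and
    returning the new n-th block. Because the list is updated in place,
    later blocks see the freshest values of earlier ones (the X^{k+1}_{[n]}
    convention)."""
    for n, update_n in enumerate(updates):
        blocks[n] = update_n(blocks)
    return blocks
```

This Gauss-Seidel-style ordering (each block update immediately visible to the next) is what the $\mathbf{X}^{k+1}_{[n]}$ notation encodes.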

###### Corollary 2.

Let $\{\mathbf{X}^k\}$ be the sequence generated by mFIMA. Then we have the same convergence properties for $\{\mathbf{X}^k\}$ as those in Theorem 2 and Corollary 1.


## 4 Applications

As a nontrivial byproduct, this section illustrates how to apply FIMA to tackle practical inverse problems in the low-level vision area, such as image deconvolution in the standard non-blind scenario and the even more challenging blind scenario.

Non-blind Deconvolution (Uni-block) aims to restore the latent image $\mathbf{u}$ from a corrupted observation $\mathbf{y}$ with a known blur kernel $\mathbf{b}$. In this part, we utilize the well-known sparse coding formulation [2]: $\mathbf{y} = \mathbf{D}\mathbf{x} + \mathbf{n}$, where $\mathbf{x}$, $\mathbf{D}$ and $\mathbf{n}$ are the sparse code, the given dictionary and unknown noise, respectively. Indeed, the form of $\mathbf{D}$ is given as $\mathbf{D} = \mathbf{B}\mathbf{W}^\top$, where $\mathbf{B}$ is the matrix form of $\mathbf{b}$ and $\mathbf{W}^\top$ denotes the inverse of the wavelet transform (i.e., $\mathbf{u} = \mathbf{W}^\top\mathbf{x}$ and $\mathbf{x} = \mathbf{W}\mathbf{u}$). So by defining the fidelity $f(\mathbf{x}; \mathbf{D}, \mathbf{y})$ as the data-fitting loss and $g(\mathbf{x})$ as a (possibly nonconvex) sparsity-promoting prior, we obtain a special case of Eq. (1) as follows

 $$\min_{\mathbf{x}} f(\mathbf{x}; \mathbf{D}, \mathbf{y}) + g(\mathbf{x}). \tag{9}$$

Now we are ready to design the iterative modules (i.e., $\mathcal{A}_f$ and $\mathcal{A}_g$) to optimize the SC model in Eq. (9). With the well-known imaging formulation $\mathbf{y} = \mathbf{b} \otimes \mathbf{u} + \mathbf{n}$ (where $\otimes$ denotes the convolution operator), we actually update $\mathbf{x}$ by solving $\min_{\mathbf{u}} \frac{1}{2}\|\mathbf{B}\mathbf{u} - \mathbf{y}\|^2 + \frac{\tau}{2}\|\mathbf{u} - \mathbf{W}^\top\mathbf{x}^k\|^2$ to aggregate the principles of the task and the information from the last updated variable, where $\mathbf{B}$ is the matrix form of $\mathbf{b}$ and $\tau$ is a positive constant. Then $\mathcal{A}_f$ on $\mathbf{x}^k$ can be defined as the wavelet coefficients of the minimizer, i.e.,

 $$\mathcal{A}_f(\mathbf{x}^k) = \mathbf{W}\big(\mathbf{B}^\top\mathbf{B} + \tau\mathbf{I}\big)^{-1}\big(\mathbf{B}^\top\mathbf{y} + \tau\mathbf{W}^\top\mathbf{x}^k\big), \tag{10}$$

where $\mathbf{I}$ is the identity matrix. It is easy to check that $\big(\mathbf{B}^\top\mathbf{B} + \tau\mathbf{I}\big)^{-1}$ can be efficiently calculated by FFT [24].
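Under periodic boundary conditions this solve reduces to an element-wise division in the frequency domain, since a circular convolution matrix is diagonalized by the 2-D FFT. A minimal numerical sketch, with the wavelet transform taken as the identity (an illustrative simplification, not the paper's full pipeline):

```python
import numpy as np

def af_module(v, y, kernel, tau):
    """Solve (B^T B + tau I) z = B^T y + tau v, where B is circular
    convolution by `kernel` (same shape as the image). In Fourier space
    B^T B is |K|^2, so the system solve is an element-wise division."""
    K = np.fft.fft2(kernel)
    num = np.conj(K) * np.fft.fft2(y) + tau * np.fft.fft2(v)
    den = np.abs(K) ** 2 + tau
    return np.real(np.fft.ifft2(num / den))
```

As a sanity check, with a delta (identity) kernel the module should return a weighted blend of `y` and `v` that equals the image itself when both inputs agree.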

Blind Deconvolution (Multi-block) involves the joint estimation of both the latent image and the blur kernel, given only an observed $\mathbf{y}$. Here we formulate this problem in the image gradient domain and solve the following special case of Eq. (8) with two unknown variables $\mathbf{x}$ and $\mathbf{b}$ (notice that in this section $\mathbf{x}$ is used with different meanings: the image gradient in Eq. (11), versus the sparse code in Eq. (9)):

 $$\min_{\mathbf{x}, \mathbf{b}} f(\mathbf{x}, \mathbf{b}; \nabla\mathbf{y}) + g_x(\mathbf{x}) + g_b(\mathbf{b}), \tag{11}$$

where $f(\mathbf{x}, \mathbf{b}; \nabla\mathbf{y})$ measures the fidelity between $\mathbf{b} \otimes \mathbf{x}$ and the observed gradient $\nabla\mathbf{y}$, $g_x$ is an $\ell_0$-type sparsity prior on the image gradient, and $g_b(\mathbf{b}) = \chi_\Omega(\mathbf{b})$. Here $\chi_\Omega$ is the indicator function of the set $\Omega = \{\mathbf{b} : \mathbf{b}[i] \ge 0, \sum_i \mathbf{b}[i] = 1\}$, where $\mathbf{b}[i]$ denotes the $i$-th element. So the proximal updates in mFIMA corresponding to $g_x$ and $g_b$ can be respectively calculated by hard-thresholding [3] and simplex projection [28]. Here we need to specify three modules (i.e., $\mathcal{A}_f$, $\mathcal{A}_{g_x}$ and $\mathcal{A}_{g_b}$) for mFIMA. We first follow an idea similar to the non-blind case and define $\mathcal{A}_f$ using the aggregated deconvolution energy

 $$\min_{\mathbf{x}, \mathbf{b}} \frac{1}{2}\|\mathbf{b} \otimes \mathbf{x} - \nabla\mathbf{y}\|^2 + \frac{\tau_x}{2}\|\mathbf{x} - \mathbf{x}^k\|^2 + \frac{\tau_b}{2}\|\mathbf{b} - \mathbf{b}^k\|^2, \tag{12}$$

where $\tau_x$ and $\tau_b$ are positive constants. We then train CNNs on the image gradient domain and solve for $\mathbf{b}$ using the conjugate gradient method [29] to formulate $\mathcal{A}_{g_x}$ and $\mathcal{A}_{g_b}$, respectively.
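Both proximal maps mentioned above admit few-line implementations. These sketches assume the standard hard-threshold rule for an $\ell_0$ penalty and the sort-based Euclidean projection onto the probability simplex (in the spirit of [28]); parameter names are illustrative:

```python
import numpy as np

def hard_threshold(x, gamma):
    # prox of gamma * ||x||_0: keep entries with |x_i| > sqrt(2 * gamma),
    # zero out the rest.
    out = x.copy()
    out[np.abs(out) <= np.sqrt(2.0 * gamma)] = 0.0
    return out

def project_simplex(b):
    # Euclidean projection onto {b : b_i >= 0, sum_i b_i = 1}.
    u = np.sort(b.ravel())[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, u.size + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(b - theta, 0.0)
```

The simplex projection is what keeps the kernel estimate nonnegative and normalized at every iteration, which rules out degenerate kernels with negative or unbounded entries.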

## 5 Experimental Results

This section conducts experiments to verify our theoretical results and compares the performance of FIMA with other state-of-the-art learning-based iterative methods on real-world inverse problems. All experiments are performed on a PC with an Intel Core i7 CPU at 3.4 GHz, 32 GB RAM and an NVIDIA GeForce GTX 1050 Ti GPU. More results can be found in the Supplemental Materials.

### 5.1 Non-blind Image Deconvolution

We first evaluate FIMA on solving Eq. (9) for image restoration. The test images are collected by [24, 30] and different levels of Gaussian noise are further added to generate our corrupted observations.

Modules Evaluation: Firstly, the influence of different choices of $\mathcal{A}_f$ and $\mathcal{A}_g$ in FIMA is studied. Following Eq. (10), we adopt $\mathcal{A}_f$ with varying $\tau$. As for $\mathcal{A}_g$, different choices are also considered: classical PG, a Recursive Filter [31], Total Variation [32] and CNNs. For the CNN-based $\mathcal{A}_g$, we introduce a residual structure [33] and define the network as a cascade of dilated convolution layers; ReLUs are added between every two linear layers, and batch normalizations are used for the intermediate linear layers. We collect 800 images, of which 400 have been used in [24] and the other 400 are randomly sampled from ImageNet [34]. Here we adopt strategies similar to [17] to train the CNNs with different noise levels. Fig. 1 analyzes the contributions of $\mathcal{A}_f$ and $\mathcal{A}_g$. We observe that the CNN-based $\mathcal{A}_g$ performs consistently better and faster than the other strategies, so hereafter we always utilize it in eFIMA and iFIMA. We also observe that even with different $\mathcal{A}_g$, a relatively large $\tau$ in $\mathcal{A}_f$ results in analogous quantitative results. Thus we fix $\tau$ in $\mathcal{A}_f$ experimentally for eFIMA and iFIMA in all the experiments.

Convergence Behaviors: We then verify the convergence properties of FIMA, considering the behaviors of each module in our algorithms as well as of other nonconvex APGs. To be fair and comprehensive, we adopt a fixed iteration number and an iteration-error threshold as the stopping criteria in Figs. 2 and 3, respectively.

In Fig. 2(a), (b), and (c), we plot the curves of objective values, reconstruction errors and iteration errors for FIMA with different settings. The legends respectively denote that at each iteration we only perform the classical PG step (i.e., only the last step in Algs. 1 and 2), only the task-driven modules (i.e., only Eq. (2)), or their naive combination (without any scheduling policy). It can be seen that the function values and reconstruction errors of PG decrease more slowly than with our FIMA strategies, while both the naive-module curve and the naive-combination curve (i.e., PG refinement but no "explicit momentum" or "error-control" policy) oscillate and do not converge within 30 iterations. Moreover, we observe that naively adding PG to the task-driven modules makes the curves worse rather than correcting them toward a descent direction, which illustrates that such purely additive strategies indeed break the convergence guarantee. In contrast, because of the choice mechanism in our algorithms, both eFIMA and iFIMA provide a reliable variable at each iteration that satisfies the convergence condition. We further explore the choice mechanism of FIMA in Fig. 2(d). The circles on each curve mark iterations at which the "explicit momentum" or "error-control" policy is satisfied, while the triangles denote iterations at which only PG is performed. It can be seen that the eFIMA policy is stricter than that of iFIMA: the judgment policy fails at only a few iterations in eFIMA, while it is satisfied at almost all iterations in iFIMA. Both eFIMA and iFIMA perform better than the other compared schemes, which verifies the efficiency of the scheduling policies proposed in Sec. 3.

We also compare the iteration behaviors of FIMA with classical nonconvex APGs, including mAPG [6], APGnc [9] and the inexact niAPG [8], on the dataset collected by [24], which consists of 68 images corrupted by different blur kernels with sizes ranging from 17×17 to 37×37. We add 1‰ and 1% Gaussian noise to generate our corrupted observations, respectively. In Fig. 3, the left four subfigures compare the curves of iteration errors and PSNR on an example image, and the rightmost one illustrates the averaged iteration numbers and run time on the whole dataset. It can be seen that our eFIMA and iFIMA are faster and better than these abstractly designed classical solvers under the same iteration-error threshold. Moreover, we observe that the performance of these nonconvex APGs is unsatisfactory when the noise level is higher: their PSNR curves (Fig. 3(d)) descend after dozens of steps, while our FIMA maintains higher PSNR with fewer iterations. This illustrates that our strategy is more stable than traditional nonconvex APGs in image restoration, thanks to the flexible modules and effective choice mechanisms.

In Fig. 4, we illustrate the visual results of eFIMA and iFIMA with comparisons to convex image restoration approaches, including FISTA [2] (APG) and FTVd [35] (ADMM), and to the nonconvex mAPG, APGnc and niAPG, on an example image with 1% noise level but a large kernel size (i.e., 75×75) [30]. Here FISTA and FTVd solve their original convex models, while mAPG, APGnc and niAPG are based on the nonconvex model in Eq. (9). We observe that the APGs outperform the original PG, and that the inexact niAPG is better than the exact mAPG and APGnc. Since FTVd is specifically designed for this task, it is the best among the classical solvers, but still worse than our FIMA. Overall, iFIMA obtains higher PSNR than eFIMA, since the error-control mechanism tends to perform more accurate refinements.

State-of-the-art Comparisons: We compare FIMA with state-of-the-art image restoration approaches, such as IDDBM3D [36], EPLL [37], PPADMM [25], RTF [38] and IRCNN [17]. Fig. 5 first compares our FIMA with two prevalent learning-based iterative approaches (i.e., PPADMM and IRCNN) on an example image with 5% noise. Tab. I then reports the averaged quantitative results of all the compared methods on the image set (collected by [24]) with different levels of Gaussian noise (i.e., 1%, 2%, 3% and 4%). We observe that eFIMA and iFIMA not only outperform classical numerical solvers by a large margin in terms of speed and accuracy, but also achieve better performance than the other state-of-the-art approaches. Within FIMA, the speed of eFIMA is faster, while the PSNR and SSIM of iFIMA are relatively higher. This is mainly because the "error control" strategy tends to perform more refinements than the "explicit momentum" rule during iterations.

### 5.2 Blind Image Deconvolution

Blind deconvolution is known as one of the most challenging low-level vision tasks. Here we evaluate mFIMA on solving Eq. (11) to address this fundamentally ill-posed multi-variable inverse problem. We adopt the same CNN module as in Sec. 5.1 but train it on the image gradient domain to enhance its ability for sharp edge detection.

In Fig. 6, we show the visual performance of mFIMA in different settings (i.e., with and without the CNN-based modules) on an example blurry image from [39]. We observe that mFIMA without the learned modules almost fails on this experiment. This is not surprising, since [39, 40] have proved that the standard optimization strategy is likely to lead to degenerate global solutions like the delta kernel (frequently called the no-blur solution), or to many suboptimal local minima. In contrast, the CNN-based modules successfully avoid trivial results and significantly improve the deconvolution performance. We also plot the curves of quantitative scores (i.e., PSNR for the latent image and Kernel Similarity (KS) for the blur kernel) for these two strategies on the bottom row. As these scores are stable after 20 iterations, we only plot curves for the first 20 iterations.

We then compare mFIMA with state-of-the-art deblurring methods (in this and the following experiments, the widely used multi-scale techniques are adopted for all the compared methods), such as Perrone et al. [41], Levin et al. [39], Sun et al. [40], Zhang et al. [42] and Pan et al. [43], on the most widely used benchmark of Levin et al. [39], which consists of 32 blurred images generated from 4 clean images and 8 blur kernels. Tab. II reports the averaged quantitative scores, including PSNR, SSIM and Error Rate (ER) for the latent image, Kernel Similarity (KS) for the blur kernel, and the overall run time. Fig. 7 further compares the visual performance of mFIMA to Perrone et al., Sun et al. and Pan et al. (i.e., the top 3 in Tab. II) on a real-world challenging blurry image collected by [30]. It can be seen that mFIMA consistently outperforms all the compared methods both quantitatively and qualitatively, which verifies the efficiency of our proposed learning-based iteration methodology.

In Figs. 8 and 9, we further compare the blind image deconvolution performance of mFIMA with Perrone et al. [41], Sun et al. [40] and Pan et al. [43] (the top 3 among all the compared methods in Tab. II) on example images corrupted not only by unknown blur kernels, but also by different levels of Gaussian noise (1% and 3% in Figs. 8 and 9, respectively). It can be seen that mFIMA is robust to these corruptions and outperforms all the compared state-of-the-art deblurring methods.
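For reference, the two quantitative scores used throughout this section can be computed roughly as follows. This is a minimal NumPy sketch; `kernel_similarity` implements one common definition of KS (maximum normalized cross-correlation over all shifts), which may differ in detail from the evaluation code actually used in the experiments.

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, peak]."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def kernel_similarity(k_est, k_gt):
    """Maximum normalized cross-correlation between two blur kernels
    over all spatial shifts (one common definition of KS)."""
    k1 = np.asarray(k_est, float) / np.linalg.norm(k_est)
    k2 = np.asarray(k_gt, float) / np.linalg.norm(k_gt)
    H, W = k1.shape
    h, w = k2.shape
    best = 0.0
    for dy in range(-h + 1, H):        # slide k2 over k1 in both directions
        for dx in range(-w + 1, W):
            ys = slice(max(dy, 0), min(dy + h, H))
            xs = slice(max(dx, 0), min(dx + w, W))
            ys2 = slice(max(-dy, 0), max(-dy, 0) + (ys.stop - ys.start))
            xs2 = slice(max(-dx, 0), max(-dx, 0) + (xs.stop - xs.start))
            best = max(best, float(np.sum(k1[ys, xs] * k2[ys2, xs2])))
    return best                        # in [0, 1]; 1 means identical up to a shift
```

By Cauchy-Schwarz the score is at most 1, attained when the estimated kernel equals the ground truth up to translation, which is why KS is insensitive to the shift ambiguity inherent in blind deconvolution.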

## 6 Conclusion

This paper presented FIMA, a framework for analyzing the convergence behaviors of learning-based iterative methods for nonconvex inverse problems. We proposed two novel mechanisms to adaptively guide the trajectories of learning-based iterations and proved their strict convergence. We also showed how to apply FIMA to real-world applications, such as non-blind and blind image deconvolution.

## Appendix A Proofs

We first give some preliminaries on variational analysis and nonconvex optimization in Sec. A.1. Secs. A.2-A.4 then prove the main results in our manuscript.

### A.1 Preliminaries

###### Definition 1.

[44] The necessary function properties, including proper, lower semi-continuous, Lipschitz smooth, and coercive, are summarized as follows. Let $f:\mathbb{R}^{D}\to(-\infty,+\infty]$. Then we have

• Proper and lower semi-continuous: $f$ is proper if $\mathrm{dom}\,f:=\{x\in\mathbb{R}^{D}:f(x)<+\infty\}$ is nonempty and $f(x)>-\infty$ for all $x$. $f$ is lower semi-continuous if $\liminf_{y\to x}f(y)\geq f(x)$ at any point $x\in\mathbb{R}^{D}$.

• Coercive: $f$ is said to be coercive if $f$ is bounded from below and $f(x)\to\infty$ when $\|x\|\to\infty$, where $\|\cdot\|$ is the $\ell_{2}$ norm.

• $L$-Lipschitz smooth (i.e., $f\in C_{L}^{1,1}$): $f$ is $L$-Lipschitz smooth if $f$ is differentiable and there exists $L>0$ such that

 ∥∇f(x)−∇f(y)∥≤L∥x−y∥, ∀ x,y∈RD.

If $f$ is $L$-Lipschitz smooth, we have the following inequality (the descent lemma):

 f(y)≤f(x)+⟨∇f(x),y−x⟩+L2∥x−y∥2, ∀ x,y∈RD.
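As a quick numerical sanity check (not part of the paper), both the gradient-Lipschitz bound and the resulting quadratic upper bound can be verified for the quadratic $f(x)=\frac{1}{2}x^{\top}Ax$, whose gradient $Ax$ is Lipschitz with modulus equal to the largest eigenvalue of $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M.T @ M                           # symmetric positive semidefinite
L = float(np.linalg.eigvalsh(A)[-1])  # Lipschitz constant of the gradient A @ x

f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x, y = rng.standard_normal(5), rng.standard_normal(5)
# L-Lipschitz smoothness of the gradient
assert np.linalg.norm(grad(x) - grad(y)) <= L * np.linalg.norm(x - y) + 1e-9
# descent lemma: f(y) <= f(x) + <grad f(x), y - x> + (L/2) ||x - y||^2
assert f(y) <= f(x) + grad(x) @ (y - x) + 0.5 * L * np.linalg.norm(x - y) ** 2 + 1e-9
```

For this quadratic the descent lemma holds with the exact third-order term zero, so the inequality is tight whenever $(y-x)$ is an eigenvector of $A$ for the eigenvalue $L$.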

###### Definition 2.

[44, 7] Let $g:\mathbb{R}^{D}\to(-\infty,+\infty]$ be a proper and lower semi-continuous function. Then we have

• Sub-differential: The Fréchet sub-differential (denoted as $\hat{\partial}g$) of $g$ at a point $x\in\mathrm{dom}\,g$ is the set of all vectors $z\in\mathbb{R}^{D}$ which satisfy

 \liminf_{y\neq x,\,y\to x}\frac{g(y)-g(x)-\langle z,y-x\rangle}{\|y-x\|}\geq 0,

where $\langle\cdot,\cdot\rangle$ denotes the inner product. Then the limiting Fréchet sub-differential (denoted as $\partial g$) at $x\in\mathrm{dom}\,g$ is the following closure of $\hat{\partial}g$:

 \{z\in\mathbb{R}^{D}:\exists\,(x^{k},g(x^{k}))\to(x,g(x))\ \text{and}\ z^{k}\in\hat{\partial}g(x^{k})\ \text{with}\ z^{k}\to z\},

where the limits are taken as $k\to\infty$.
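As a standard one-dimensional illustration (not taken from the paper), consider $g(x)=|x|$. Plugging into the $\liminf$ definition at $x=0$, and noting that $(|y|-zy)/|y| = 1 - z\,\mathrm{sign}(y)$, gives

```latex
\hat{\partial}g(0)
= \Big\{ z : \liminf_{y\neq 0,\, y\to 0}\frac{|y| - z\,y}{|y|} \ge 0 \Big\}
= \{ z : 1 - |z| \ge 0 \}
= [-1, 1],
```

and the limiting sub-differential coincides here: $\partial g(0)=[-1,1]$, while $\partial g(x)=\{\mathrm{sign}(x)\}$ for $x\neq 0$.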

• Kurdyka-Łojasiewicz property: $g$ is said to have the Kurdyka-Łojasiewicz (KŁ) property at $\bar{x}\in\mathrm{dom}\,\partial g$ if there exist $\eta\in(0,+\infty]$, a neighborhood $U_{\bar{x}}$ of $\bar{x}$, and a desingularizing function $\phi:[0,\eta)\to\mathbb{R}_{+}$ which satisfies (1) $\phi$ is continuous at $0$ and $\phi(0)=0$; (2) $\phi$ is concave and $C^{1}$ on $(0,\eta)$; (3) $\phi^{\prime}(s)>0$ for all $s\in(0,\eta)$, such that for all

 x\in U_{\bar{x}}\cap[g(\bar{x})<g(x)<g(\bar{x})+\eta],

the following inequality holds

 ϕ′(g(x)−g(¯x))dist(0,∂g(x))≥1.

Moreover, if $g$ satisfies the KŁ property at each point of $\mathrm{dom}\,\partial g$, then $g$ is called a KŁ function.
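As a simple sanity check (a standard textbook example, not from the paper), $g(x)=x^{2}$ has the KŁ property at $\bar{x}=0$ with desingularizing function $\phi(s)=\sqrt{s}$, since $\phi'(s)=\frac{1}{2\sqrt{s}}$ and $\partial g(x)=\{2x\}$:

```latex
\phi'\!\big(g(x)-g(0)\big)\,\mathrm{dist}\!\big(0,\partial g(x)\big)
= \frac{1}{2\sqrt{x^{2}}}\cdot|2x|
= \frac{2|x|}{2|x|} = 1 \ge 1
\quad \text{for all } x \neq 0.
```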

• Semi-algebraic set and function: A subset $\Omega$ of $\mathbb{R}^{D}$ is a real semi-algebraic set if there exist a finite number of real polynomial functions $r_{ij},h_{ij}:\mathbb{R}^{D}\to\mathbb{R}$ such that

 Ω=p⋃j=1q⋂i=1{x∈RD:rij(x)=0 and hij(x)<0}. (13)

A function $g$ is called semi-algebraic if its graph $\{(x,g(x)):x\in\mathrm{dom}\,g\}$ is a semi-algebraic subset of $\mathbb{R}^{D+1}$. It is verified in [7] that all semi-algebraic functions satisfy the KŁ property.
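For instance (a standard example, not from the paper), $g(x)=|x|$ is semi-algebraic: its graph decomposes into finitely many polynomial pieces of the form in Eq. (13),

```latex
\mathrm{graph}(|\cdot|)
= \{(x,y): y - x = 0,\ -x < 0\}
\cup \{(x,y): y + x = 0,\ x < 0\}
\cup \{(x,y): x = 0 \ \text{and}\ y = 0\},
```

so by the result of [7] quoted above, $|\cdot|$ (and, by the same argument, the $\ell_{1}$ norm) is a KŁ function.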

### A.2 Explicit Momentum FIMA (eFIMA)

#### A.2.1 Proof of Theorem 1

###### Proof.

We first prove the inequality relationship between $\Psi(x^{k+1})$ and $\Psi(v^{k})$. According to the update rule of $x^{k+1}$ (Step 8 in Alg. 1), we have

 xk+1∈argminxg(x)+⟨∇f(vk),x−vk⟩+12γk∥x−vk∥2, (14)

thus

 g(xk+1)+⟨∇f(vk),xk+1−vk⟩+12γk∥xk+1−vk∥2≤g(vk). (15)

Since $f$ is $L$-Lipschitz smooth (i.e., $f\in C_{L}^{1,1}$), we have

 f(xk+1)≤f(vk)+⟨∇f(vk),xk+1−vk⟩+L2∥xk+1−vk∥2, (16)

where $L$ is the Lipschitz modulus of $\nabla f$. Combining Eqs. (15) and (16), we have

 Ψ(xk+1)≤Ψ(vk)−(12γk−L2)∥xk+1−vk∥2. (17)

Set $\gamma^{k}\in(0,1/L)$ and define $c_{k}:=\frac{1}{2\gamma^{k}}-\frac{L}{2}$; then we have $c_{k}>0$ and $\Psi(x^{k+1})\leq\Psi(v^{k})-c_{k}\|x^{k+1}-v^{k}\|^{2}\leq\Psi(v^{k})$.
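For concreteness, the update in Eq. (14) is exactly a proximal step on $g$ at the gradient point $v^{k}-\gamma^{k}\nabla f(v^{k})$. The following is a minimal sketch assuming the particular choice $g=\lambda\|\cdot\|_{1}$ (so the proximal map is soft-thresholding), rather than the paper's learned module:

```python
import numpy as np

def prox_l1(z, t):
    """Soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_linear_step(v, grad_f, gamma, lam):
    """One update of Eq. (14) with g = lam * ||.||_1:
    x^{k+1} = prox_{gamma * g}(v^k - gamma * grad_f(v^k))."""
    return prox_l1(v - gamma * grad_f(v), gamma * lam)

# example: f(x) = 0.5 * ||x - b||^2, so grad_f(x) = x - b and L = 1
b = np.array([2.0, 0.1, -2.0])
x1 = prox_linear_step(np.zeros(3), lambda x: x - b, gamma=0.5, lam=1.0)
```

With $\gamma^{k}=0.5<1/L=1$ here, the step first moves the zero vector halfway toward $b$ and then zeroes out the small entry while shrinking the large ones, yielding `x1 = [0.5, 0.0, -0.5]`.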

Then we prove the boundedness and convergence of $\{\Psi(x^{k})\}$. Based on the momentum scheduling policy in Alg. 1, we obviously have $\Psi(v^{k})\leq\Psi(x^{k})$. This together with the result in Eq. (17) (i.e., $\Psi(x^{k+1})\leq\Psi(v^{k})$ when $\gamma^{k}<1/L$) concludes that for any $k$,

 Ψ(xk+1)≤Ψ(vk)≤Ψ(xk)≤Ψ(vk−1)≤Ψ(x0). (18)

Since both $f$ and $g$ are proper, we also have $\Psi>-\infty$. Thus both sequences $\{\Psi(x^{k})\}$ and $\{\Psi(v^{k})\}$ are non-increasing and bounded. This together with the coercivity of $\Psi$ concludes that both $\{x^{k}\}$ and $\{v^{k}\}$ are bounded and thus have accumulation points.

Then we prove that all accumulation points are critical points of $\Psi$. From Eq. (18), we actually have that the objective sequences $\{\Psi(x^{k})\}$ and $\{\Psi(v^{k})\}$ converge to the same value $\Psi^{*}$, i.e.,

 limk→∞Ψ(xk)=limk→∞Ψ(vk)=Ψ∗. (19)

From Eqs. (17) and (18), we have

 (12γk−L2)∥xk+1−vk∥2≤Ψ(vk)−Ψ(xk+1)≤Ψ(xk)−Ψ(xk+1). (20)

Summing over $k$, we further have

 mink{12γk−L2}∞∑k=0∥xk+1−vk∥2≤Ψ(x0)−Ψ∗<∞. (21)

The above inequality implies that $\|x^{k+1}-v^{k}\|\to 0$ as $k\to\infty$, and hence $\{x^{k}\}$ and $\{v^{k}\}$ share the same set of accumulation points (denoted as $\Omega$). Consider that $x^{*}$ is any accumulation point of $\{x^{k}\}$, i.e., $x^{k_{j}}\to x^{*}$ as $j\to\infty$. Then by Eq. (14), we have

 g(xk+1)+⟨∇f(vk),xk+1−vk⟩+12γk∥xk+1−vk∥2≤g(x∗)+⟨∇f(vk),x∗−vk⟩+12γk∥x∗−vk∥2. (22)

Taking $k+1=k_{j}$ in Eq. (22) and letting $j\to\infty$, by applying $\limsup$ on both sides of Eq. (22) we have $\limsup_{j\to\infty}g(x^{k_{j}})\leq g(x^{*})$. On the other hand, since $g$ is lower semi-continuous and $x^{k_{j}}\to x^{*}$, it follows that $\liminf_{j\to\infty}g(x^{k_{j}})\geq g(x^{*})$. So we have $\lim_{j\to\infty}g(x^{k_{j}})=g(x^{*})$. Note that the continuity of $f$ yields $\lim_{j\to\infty}f(x^{k_{j}})=f(x^{*})$, so we conclude

 limj→∞Ψ(xkj)=Ψ(x∗). (23)

Recall that in Eq. (19) we have $\lim_{k\to\infty}\Psi(x^{k})=\Psi^{*}$, so

 Ψ(x∗)=Ψ∗, ∀ x∗∈Ω. (24)

By the first-order optimality condition of Eq. (14) with $k+1=k_{j}$, we have

 0\in\partial g(x^{k_{j}})+\nabla f(v^{k_{j}-1})+\frac{1}{\gamma^{k_{j}-1}}(x^{k_{j}}-v^{k_{j}-1}). \quad (25)

Thus, we have

 ∇f