Linear Convergence of ISTA and FISTA

12/13/2022
by Bowen Li, et al.

In this paper, we revisit the class of iterative shrinkage-thresholding algorithms (ISTA) for solving the linear inverse problem with sparse representation, which arises in signal and image processing. A numerical experiment on image deblurring shows that the convergence curve plotted on a logarithmic ordinate tends to a straight line rather than a nearly flat, logarithm-like curve; that is, the convergence is linear rather than sublinear. On closer inspection, we find that the usual assumption that the smooth part is merely convex understates the structure of the least-squares model. Specifically, it is more reasonable to assume the smooth part to be strongly convex, even though the blur matrix is probably ill-conditioned. Furthermore, we tighten the pivotal inequality for composite optimization, first found in [Li et al., 2022], by taking the smooth part to be strongly convex instead of generally convex. Based on this tighter inequality, we generalize the linear convergence to composite optimization, in both the objective value and the squared proximal subgradient norm. Meanwhile, we replace the original blur matrix with a simple ill-conditioned matrix whose singular values are easy to compute. The new numerical experiment shows that the proximal generalization of Nesterov's accelerated gradient descent (NAG) for strongly convex functions has a faster linear convergence rate than ISTA. Based on the tighter pivotal inequality, we also generalize this faster linear convergence rate to composite optimization, in both the objective value and the squared proximal subgradient norm, by taking advantage of a slightly modified, well-constructed Lyapunov function and the phase-space representation from the implicit-velocity high-resolution differential equation framework.
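
To make the setup concrete, here is a minimal sketch (an illustrative assumption, not the authors' code): it compares plain ISTA with a constant-momentum proximal variant of NAG on a small composite problem min_x (1/2)||Ax - b||^2 + lambda*||x||_1, where A is diagonal with singular values 1, 1/2, ..., 1/n, hence ill-conditioned yet with singular values that are trivial to compute, in the spirit of the experiment described above. The step size 1/L, the momentum coefficient, and the problem parameters (n, lambda, iteration count) are all illustrative choices.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (component-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, steps):
    """Plain ISTA with the standard step size 1/L."""
    L = np.linalg.norm(A, 2) ** 2             # L = ||A||_2^2, Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    obj = []
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
        obj.append(0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
    return x, np.array(obj)

def prox_nag_strongly_convex(A, b, lam, steps):
    """Proximal NAG with the constant momentum used for a mu-strongly convex smooth part."""
    sv = np.linalg.svd(A, compute_uv=False)    # singular values, in descending order
    L, mu = sv[0] ** 2, sv[-1] ** 2            # smoothness and strong convexity constants
    beta = (np.sqrt(L / mu) - 1) / (np.sqrt(L / mu) + 1)
    x = y = np.zeros(A.shape[1])
    obj = []
    for _ in range(steps):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        y = x_new + beta * (x_new - x)         # constant-momentum extrapolation
        x = x_new
        obj.append(0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
    return x, np.array(obj)

rng = np.random.default_rng(0)
n = 50
A = np.diag(1.0 / np.arange(1, n + 1))         # singular values 1, 1/2, ..., 1/n
b = A @ rng.standard_normal(n)
_, f_ista = ista(A, b, lam=1e-3, steps=500)
_, f_nag = prox_nag_strongly_convex(A, b, lam=1e-3, steps=500)
```

Plotting f(x_k) - f(x*) against k on a logarithmic ordinate should give straight lines for both methods, with a visibly steeper slope for the NAG variant, which is exactly the linear-versus-faster-linear behavior discussed above.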

Related research

Proximal Subgradient Norm Minimization of ISTA and FISTA (11/03/2022)
For first-order smooth optimization, the research on the acceleration ph...

Linear convergence of Nesterov-1983 with the strong convexity (06/16/2023)
For modern gradient-based optimization, a developmental landmark is Nest...

Gradient Norm Minimization of Nesterov Acceleration: o(1/k^3) (09/19/2022)
In the history of first-order algorithms, Nesterov's accelerated gradien...

On Underdamped Nesterov's Acceleration (04/28/2023)
The high-resolution differential equation framework has been proven to b...

Geometric descent method for convex composite minimization (12/29/2016)
In this paper, we extend the geometric descent method recently proposed ...

Revisiting the acceleration phenomenon via high-resolution differential equations (12/12/2022)
Nesterov's accelerated gradient descent (NAG) is one of the milestones i...
