Faster gradient descent and the efficient recovery of images

08/12/2013
by Hui Huang, et al.

Much recent attention has been devoted to gradient descent algorithms in which the steepest descent step size is replaced by a similar one from a previous iteration, or is updated only once every second step, thus forming a faster gradient descent method. For unconstrained convex quadratic optimization these methods can converge much faster than steepest descent. The context of interest here, however, is application to certain ill-posed inverse problems, where the steepest descent method is known to have a smoothing, regularizing effect, and where a strict optimization solution is not necessary. Specifically, in this paper we examine the effect of replacing steepest descent by a faster gradient descent algorithm in the practical context of image deblurring and denoising tasks. We also propose several highly efficient schemes for carrying out these tasks independently of the step size selection, as well as a scheme for the case where both blur and significant noise are present. In the above context there are situations where many steepest descent steps are required, building slowness into the solution procedure. Our general conclusion regarding gradient descent methods is that in such cases the faster methods offer substantial advantages; in other situations, where no such slowness builds up, steepest descent can still be very effective.
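To make the idea concrete, here is a minimal NumPy sketch (not the paper's code) of one faster gradient descent variant of the kind the abstract describes: lagged steepest descent on a convex quadratic, where the exact line-search step size computed for one gradient is reused at the next iteration. The function name `quad_descent` and the test problem are illustrative assumptions, not part of the paper.

```python
import numpy as np

def quad_descent(A, b, x0, lagged=False, tol=1e-6, max_iter=20000):
    """Minimize f(x) = 0.5 x^T A x - b^T x for SPD A by gradient descent.

    lagged=False: classical steepest descent (exact line search).
    lagged=True : lagged steepest descent -- the step size computed for
                  the previous gradient is reused one iteration late,
                  a 'faster gradient descent' variant of the kind the
                  abstract refers to (illustrative sketch only).
    """
    x = np.array(x0, dtype=float)
    prev_step = None
    for k in range(max_iter):
        g = A @ x - b                      # gradient of f at x
        if np.linalg.norm(g) <= tol * np.linalg.norm(b):
            return x, k
        step = (g @ g) / (g @ (A @ g))     # exact line-search step for g
        alpha = prev_step if (lagged and prev_step is not None) else step
        x -= alpha * g
        prev_step = step
    return x, max_iter

# Mildly ill-conditioned test problem (assumed for illustration); the
# lagged variant typically needs far fewer iterations than steepest descent.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
A = Q @ np.diag(np.logspace(0, 3, 200)) @ Q.T   # cond(A) ~ 1e3
b = rng.standard_normal(200)
for flag in (False, True):
    _, its = quad_descent(A, b, np.zeros(200), lagged=flag)
    print(("lagged SD" if flag else "steepest descent"), "iterations:", its)
```

Note that the lagged iteration is generally non-monotone in the objective, which is precisely why it matters for the ill-posed problems discussed above: when early termination is used as regularization, the step-size rule changes how quickly the iteration reaches a useful (rather than exact) solution.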


