Convergence rates of a dual gradient method for constrained linear ill-posed problems

06/15/2022
by Qinian Jin et al.

In this paper we consider a dual gradient method for solving linear ill-posed problems of the form Ax = y, where A : X → Y is a bounded linear operator from a Banach space X to a Hilbert space Y. A strongly convex penalty function is used in the method to select a solution with the desired features. Under variational source conditions on the sought solution, convergence rates are derived when the method is terminated by either an a priori stopping rule or the discrepancy principle. We also consider an acceleration of the method as well as its various applications.
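The abstract does not spell out the iteration, so the following is a minimal runnable sketch of a dual gradient method of this kind, assuming the elastic-net penalty R(x) = ||x||_1 + (mu/2)||x||_2^2 (which is mu-strongly convex); the function name dual_gradient, the step size gamma = mu/||A||^2, and the parameter tau are illustrative assumptions, not the paper's exact scheme. The iteration is gradient descent on the dual functional d(lambda) = R*(A^T lambda) - <lambda, y^delta>, with primal iterates x_n = grad R*(A^T lambda_n), stopped by the discrepancy principle ||A x_n - y^delta|| <= tau * delta.

    import numpy as np

    def dual_gradient(A, y_delta, delta, mu=1.0, tau=1.05, max_iter=5000):
        """Illustrative sketch of a dual gradient iteration for A x = y.

        Assumed penalty (not necessarily the paper's choice):
            R(x) = ||x||_1 + (mu/2) ||x||_2^2,
        which is mu-strongly convex, so its conjugate satisfies
            grad R*(z) = soft_threshold(z, 1) / mu.
        """
        # grad d is Lipschitz with constant ||A||^2 / mu, so this step
        # size makes each dual update a descent step
        gamma = mu / np.linalg.norm(A, 2) ** 2
        lam = np.zeros(A.shape[0])
        x = np.zeros(A.shape[1])
        for _ in range(max_iter):
            z = A.T @ lam
            # primal iterate x_n = grad R*(A^T lambda_n): soft-thresholding
            x = np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0) / mu
            residual = A @ x - y_delta
            # discrepancy principle: stop once the residual is at noise level
            if np.linalg.norm(residual) <= tau * delta:
                break
            lam -= gamma * residual  # gradient step on the dual functional
        return x

For the quadratic penalty R(x) = ||x||_2^2 / 2 (drop the l1 term and set mu = 1) the same iteration reduces to the classical Landweber method, and the acceleration mentioned in the abstract would replace the plain dual update with a Nesterov-type two-step update on lambda.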

Related research

11/26/2022 · Dual gradient method for ill-posed problems using multiple repeated measurement data
We consider determining -minimizing solutions of linear ill-posed proble...

09/13/2022 · Dual gradient flow for solving linear ill-posed problems in Banach spaces
We consider determining the -minimizing solution of ill-posed problem A ...

01/20/2021 · Optimal-order convergence of Nesterov acceleration for linear ill-posed problems
We show that Nesterov acceleration is an optimal-order iterative regular...

02/21/2020 · Source Conditions for non-quadratic Tikhonov Regularisation
In this paper we consider convex Tikhonov regularisation for the solutio...

07/14/2022 · Stochastic mirror descent method for linear ill-posed problems in Banach spaces
Consider linear ill-posed problems governed by the system A_i x = y_i fo...

11/30/2020 · Mean value methods for solving the heat equation backwards in time
We investigate an iterative mean value method for the inverse (and highl...

12/10/2021 · Modular-proximal gradient algorithms in variable exponent Lebesgue spaces
We consider structured optimisation problems defined in terms of the sum...
