Dual gradient flow for solving linear ill-posed problems in Banach spaces

09/13/2022
by Qinian Jin, et al.

We consider determining the ℛ-minimizing solution of the ill-posed problem A x = y for a bounded linear operator A: X → Y from a Banach space X to a Hilbert space Y, where ℛ: X → (-∞, ∞] is a strongly convex function. A dual gradient flow is proposed to approximate the sought solution from noisy data. Owing to the ill-posedness of the underlying problem, the flow exhibits the semi-convergence phenomenon, so a stopping time must be chosen carefully to obtain reasonable approximate solutions. We consider choosing a proper stopping time by various rules, including a priori rules, the discrepancy principle, and the heuristic discrepancy principle, and establish the respective convergence results. Furthermore, convergence rates are derived under variational source conditions on the sought solution. Numerical results are reported to test the performance of the dual gradient flow.
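To illustrate the semi-convergence phenomenon and the discrepancy principle mentioned in the abstract, the following sketch treats the special quadratic case ℛ(x) = ‖x‖²/2, for which a dual gradient flow reduces to the classical Showalter flow x'(t) = Aᵀ(y^δ − A x(t)). This is a hedged toy illustration, not the paper's algorithm: the operator A (a discretized integration operator), the exact solution, the noise level delta, and the parameter tau are all invented for the example, and the flow is discretized by explicit Euler steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mildly ill-posed problem: discretized integration operator on [0, 1],
# (A x)(s) ≈ ∫_0^s x(t) dt, whose singular values decay like 1/k.
n = 50
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))
x_true = np.sin(np.pi * np.linspace(0, 1, n))   # smooth exact solution
y = A @ x_true

delta = 1e-2                                    # noise level ‖y^δ − y‖ = delta
noise = rng.standard_normal(n)
y_delta = y + delta * noise / np.linalg.norm(noise)

# Explicit Euler discretization of the Showalter flow x' = A^T (y^δ − A x),
# stopped by the discrepancy principle: quit once ‖A x − y^δ‖ ≤ tau * delta.
tau = 1.1                                       # discrepancy parameter > 1
dt = 1.0 / np.linalg.norm(A, 2) ** 2            # stable step size
x = np.zeros(n)
for k in range(100000):
    residual = A @ x - y_delta
    if np.linalg.norm(residual) <= tau * delta:
        break                                   # discrepancy principle fires
    x -= dt * (A.T @ residual)

rel_error = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("stopped at step", k, "with relative error", rel_error)
```

Running the flow far beyond this stopping time would drive the residual further down while the error against x_true eventually grows again, which is exactly the semi-convergence behavior that makes a careful choice of stopping time necessary.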

Related research:

- Convergence rates of a dual gradient method for constrained linear ill-posed problems (06/15/2022)
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces (07/14/2022)
- Dual gradient method for ill-posed problems using multiple repeated measurement data (11/26/2022)
- Optimal Convergence of the Discrepancy Principle for polynomially and exponentially ill-posed Operators under White Noise (04/13/2021)
- A rational conjugate gradient method for linear ill-conditioned problems (06/06/2023)
- Optimal-order convergence of Nesterov acceleration for linear ill-posed problems (01/20/2021)
- Analyzing the discrepancy principle for kernelized spectral filter learning algorithms (04/17/2020)
