Modular-proximal gradient algorithms in variable exponent Lebesgue spaces

12/10/2021
by Marta Lazzaretti et al.

We consider structured optimisation problems defined in terms of the sum of a smooth and convex function and a proper, l.s.c., convex (typically non-smooth) one, posed in reflexive variable exponent Lebesgue spaces L^{p(·)}(Ω). Due to their intrinsic space-variant properties, such spaces can be naturally used as solution spaces and combined with space-variant functionals for the solution of ill-posed inverse problems. For this purpose, we propose and analyse two instances (primal and dual) of proximal gradient algorithms in L^{p(·)}(Ω), where the proximal step, rather than depending on the natural (non-separable) L^{p(·)}(Ω) norm, is defined in terms of its modular function, whose separability allows for the efficient computation of the algorithmic iterates. Convergence in function values is proved for both algorithms, with convergence rates depending on problem/space smoothness. To show the effectiveness of the proposed modelling, numerical tests highlighting the flexibility of the space L^{p(·)}(Ω) are presented for exemplar deconvolution and mixed noise removal problems. Finally, the convergence speed and computational cost of both algorithms are numerically verified against analogous methods defined in standard L^p(Ω) spaces.
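To give some intuition for the role of separability, the following is a minimal numerical sketch, not the paper's exact primal or dual scheme. It assumes a discretised modular with a 1/p(x) normalisation, an ℓ1 regulariser, and a least-squares data term (all illustrative choices not taken from the abstract), and shows how the modular-based proximal step decouples into independent scalar problems with a closed-form, exponent-dependent soft-thresholding solution when p(x) > 1.

```python
import numpy as np

def modular(u, p):
    """Discretised modular rho_{p(.)}(u) = sum_i |u_i|^{p_i} / p_i.
    (The 1/p_i normalisation is one common convention, assumed here.)"""
    return np.sum(np.abs(u) ** p / p)

def modular_prox_l1(u, p, lam, tau):
    """Componentwise solution of
         argmin_v  lam*|v_i| + (1/tau) * |v_i - u_i|^{p_i} / p_i
    for each i. Because the modular is separable, these are independent
    scalar problems; for p_i > 1 the minimiser is a generalised
    soft-thresholding with an exponent-dependent threshold."""
    thresh = (lam * tau) ** (1.0 / (p - 1.0))
    return np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)

# Toy usage: proximal-gradient-type iterations for
#   min_x 0.5*||A x - b||_2^2 + lam*||x||_1,
# with the usual squared-norm proximity term replaced by the modular.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
p = np.linspace(1.3, 2.0, 50)            # space-variant exponent p(x) in (1, 2]
x = np.zeros(50)
tau = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from the Lipschitz constant
lam = 0.1
for _ in range(100):
    grad = A.T @ (A @ x - b)             # gradient of the smooth term
    x = modular_prox_l1(x - tau * grad, p, lam, tau)
print("modular of the final iterate:", modular(x, p))
```

The point mirrored from the abstract is that the modular introduces no coupling between components, so each proximal step costs the same as a standard pointwise thresholding; the convergence guarantees and the precise primal/dual constructions are those analysed in the paper, not in this sketch.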
