A preconditioned deepest descent algorithm for a class of optimization problems involving the p(x)-Laplacian operator

05/22/2022
by Sergio Gonzalez-Andrade, et al.

In this paper we are concerned with a class of optimization problems involving the p(x)-Laplacian operator, which arise in imaging and signal analysis. We study the well-posedness of this kind of problem in an amalgam space, assuming that the variable exponent p(x) is a log-Hölder continuous function. Further, we propose a preconditioned descent algorithm for the numerical solution of the problem, based on a "frozen exponent" approach in a finite-dimensional space. Finally, we carry out several numerical experiments to show the advantages of our method. Specifically, we study two detailed examples whose motivation lies in a possible extension of the proposed technique to image processing.
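To give a flavor of the kind of scheme the abstract describes, here is a minimal one-dimensional sketch of a preconditioned descent iteration for a discretized p(x)-Laplacian energy. Everything in it is an assumption for illustration, not the authors' method: the domain, the zero Dirichlet data, the fixed step size, the discrete Laplacian used as preconditioner, and the small regularization `eps` are all choices made here. The "frozen exponent" idea is mimicked by evaluating the nonlinear weight |Du|^{p(x)-2} at the current iterate and keeping it fixed within each step.

```python
import numpy as np

def preconditioned_descent(f, p, h, step=0.5, eps=1e-10, tol=1e-8, max_iter=2000):
    """Illustrative descent for J(u) = h*sum (1/p_i)|(Du)_i|^{p_i} - h*sum f_i*u_i
    on a 1-D grid with zero Dirichlet boundary values.  f has one entry per
    interior node; p has one entry per edge (variable exponent p(x) >= 2)."""
    n = f.size
    # Discrete Laplacian (the p = 2 operator) used as a preconditioner.
    M = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.zeros(n)
    for _ in range(max_iter):
        # Edge-wise slopes (Du), including the two boundary edges.
        du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
        # "Frozen exponent" weight |Du|^{p-2}, regularized to avoid 0^(negative).
        w = (du**2 + eps) ** ((p - 2.0) / 2.0)
        # Discrete gradient of J:  -div(|Du|^{p-2} Du) - f.
        grad = -np.diff(w * du) / h - f
        # Preconditioned descent direction: solve the Laplacian system.
        d = np.linalg.solve(M, grad)
        u -= step * d
        if np.sqrt(h) * np.linalg.norm(grad) < tol:
            break
    return u
```

For p ≡ 2 the weight is identically one and the iteration reduces to a preconditioned solve of the discrete Poisson problem; for a variable exponent the same loop runs with the weight re-frozen at every step. A line search or a problem-adapted preconditioner, as studied in the paper, would replace the fixed `step` here.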

