
A preconditioned descent algorithm for a class of optimization problems involving the p(x)-Laplacian operator

by Sergio Gonzalez-Andrade, et al.
National Polytechnic School

In this paper we are concerned with a class of optimization problems involving the p(x)-Laplacian operator, which arise in imaging and signal analysis. We study the well-posedness of this class of problems in an amalgam space, under the assumption that the variable exponent p(x) is a log-Hölder continuous function. Further, we propose a preconditioned descent algorithm for the numerical solution of the problem, based on a "frozen exponent" approach in a finite-dimensional space. Finally, we carry out several numerical experiments to show the advantages of our method. Specifically, we study two detailed examples whose motivation lies in a possible extension of the proposed technique to image processing.
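To illustrate the kind of method the abstract describes, here is a minimal sketch (not the authors' algorithm) of a preconditioned descent iteration for a one-dimensional discrete p(x)-Laplacian energy. The grid size, the variable exponent p(x), the right-hand side f, the small regularization eps (used to keep the p < 2 terms differentiable at zero gradient), and the choice of the standard p = 2 Laplacian as preconditioner are all illustrative assumptions:

```python
import numpy as np

# Sketch: minimize the discrete energy
#   J(u) = h * sum_e (1/p_e) (|du_e|^2 + eps)^(p_e/2) - h * sum_i f_i u_i
# on a 1-D grid with zero Dirichlet boundary values, where du_e are forward
# difference quotients. Descent directions are preconditioned by the
# standard (p = 2) Laplacian; the step size comes from Armijo backtracking.

n = 99                        # interior grid points (assumption)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
p = 1.5 + x                   # hypothetical variable exponent p(x)
f = np.ones(n)                # hypothetical right-hand side
eps = 1e-6                    # regularization for the p < 2 degeneracy

pe = np.concatenate((p, [p[-1]]))   # one exponent per edge (assumption)

def J(u):
    du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
    return h * np.sum((du**2 + eps) ** (pe / 2) / pe) - h * f @ u

def grad_J(u):
    du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
    flux = (du**2 + eps) ** ((pe - 2) / 2) * du     # regularized |u'|^{p-2} u'
    return -np.diff(flux) - h * f

# Preconditioner: tridiagonal p = 2 Laplacian (symmetric positive definite).
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

u = np.zeros(n)
for k in range(200):
    g = grad_J(u)
    if np.linalg.norm(g) < 1e-10:
        break
    d = np.linalg.solve(A, -g)        # preconditioned descent direction
    t = 1.0
    while J(u + t * d) > J(u) + 1e-4 * t * (g @ d):
        t *= 0.5                      # Armijo backtracking
    u = u + t * d
```

Because A is symmetric positive definite, each direction d satisfies g @ d < 0, so the backtracking line search always terminates and the energy decreases monotonically; a "frozen exponent" scheme in the paper's sense would additionally fix p(x) per subproblem.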




Stability Analysis for a Class of Sparse Optimization Problems

The sparse optimization problems arise in many areas of science and engi...

A Primer on Coordinate Descent Algorithms

This monograph presents a class of algorithms called coordinate descent ...

Bilevel Optimization, Deep Learning and Fractional Laplacian Regularization with Applications in Tomography

In this work we consider a generalized bilevel optimization framework fo...

On the linear convergence of additive Schwarz methods for the p-Laplacian

We consider additive Schwarz methods for boundary value problems involvi...

Hamiltonian operator for spectral shape analysis

Many shape analysis methods treat the geometry of an object as a metric ...

First-Order Methods for Optimal Experimental Design Problems with Bound Constraints

We consider a class of convex optimization problems over the simplex of ...