Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms

07/07/2017
by Peter Ochs, et al.

We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of that model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search strategy, we obtain a flexible algorithm for which we prove (subsequential) convergence to a stationary point under weak assumptions on the growth of the model function error. With a Euclidean distance function, special instances of the algorithm include Gradient Descent, Forward-Backward Splitting, and ProxDescent, without the common requirement of a Lipschitz continuous gradient. In addition, we consider a broad class of Bregman distance functions (generated by Legendre functions) that replace the Euclidean distance. The algorithm has a wide range of applications, including many linear and non-linear inverse problems in image processing and machine learning.
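To make the scheme concrete, below is a minimal Python sketch of one iteration in the simplest Euclidean setting, where the convex model is the linearization of the objective and the (Bregman) proximal point of the model reduces to an explicit gradient step. The function name `model_bregman_step` and the step-size defaults are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def model_bregman_step(f, grad_f, x, tau=1.0, delta=0.5, gamma=1e-4, max_backtracks=50):
    """One model-function descent step with an Armijo-like line search
    (a sketch under the assumptions above, not the authors' code).

    The convex model of f around x is its linearization, and the Bregman
    distance is Euclidean, so the proximal point of the model is explicit."""
    g = grad_f(x)
    # Proximal point of the model: argmin_y <g, y - x> + ||y - x||^2 / (2*tau)
    x_hat = x - tau * g
    d = x_hat - x                 # descent direction from the model minimizer
    fx = f(x)
    pred = g @ d                  # predicted decrease of the model along d
    eta = 1.0
    for _ in range(max_backtracks):   # Armijo-like backtracking on eta
        if f(x + eta * d) <= fx + gamma * eta * pred:
            return x + eta * d
        eta *= delta
    return x                      # no sufficient decrease found; keep iterate

# Usage: minimize the smooth least-squares objective f(x) = ||A x - b||^2 / 2
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
x = np.zeros(5)
for _ in range(200):
    x = model_bregman_step(f, grad_f, x)
print("residual:", f(x))
```

With a non-smooth term or a non-Euclidean Legendre function, the explicit step above would be replaced by the corresponding (Bregman) proximal subproblem, but the line-search outer loop keeps the same shape.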

Related research:

01/23/2019 · Model Function Based Conditional Gradient Method with Armijo-like Line Search
The Conditional Gradient Method is generalized to a class of non-smooth ...

04/19/2018 · BISTA: a Bregmanian proximal gradient method without the global Lipschitz continuity assumption
The problem of minimization of a separable convex objective function has...

04/18/2014 · iPiano: Inertial Proximal Algorithm for Non-Convex Optimization
In this paper we study an algorithm for solving a minimization problem c...

03/10/2020 · Learning to be Global Optimizer
The advancement of artificial intelligence has cast a new light on the d...

06/22/2021 · Morse-Smale complexes on convex polyhedra
Motivated by applications in geomorphology, the aim of this paper is to ...

07/16/2021 · Adaptive first-order methods revisited: Convex optimization without Lipschitz requirements
We propose a new family of adaptive first-order methods for a class of c...

11/13/2020 · Convex Optimization with an Interpolation-based Projection and its Application to Deep Learning
Convex optimizers have known many applications as differentiable layers ...