Global Convergence of Model Function Based Bregman Proximal Minimization Algorithms

12/24/2020
by Mahesh Chandra Mukkamala, et al.

Lipschitz continuity of the gradient of a continuously differentiable function plays a crucial role in the design of many optimization algorithms. However, many functions arising in practical applications, such as low-rank matrix factorization or deep neural network problems, do not have a Lipschitz continuous gradient. This led to the development of a generalized notion, the L-smad property, which is based on generalized proximity measures called Bregman distances. However, the L-smad property cannot handle nonsmooth functions: even simple nonsmooth functions such as |x^4 - 1|, as well as many practical composite problems, are out of its scope. We fix this issue by proposing the MAP property, which generalizes the L-smad property and is valid for a large class of nonconvex nonsmooth composite problems. Based on the MAP property, we propose a globally convergent algorithm called Model BPG, which unifies several existing algorithms. The convergence analysis is based on a new Lyapunov function. We also numerically illustrate the superior performance of Model BPG on standard phase retrieval problems, robust phase retrieval problems, and Poisson linear inverse problems, compared to a state-of-the-art optimization method for generic nonconvex nonsmooth problems.
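As a rough illustration of the Bregman proximal gradient template that Model BPG builds on, the sketch below runs a classical BPG iteration with the Burg entropy kernel on a Poisson linear inverse problem, one of the benchmark applications above. This is a minimal sketch, not the authors' Model BPG: the function name bpg_poisson and the synthetic data are made up for illustration, while the multiplicative update and the relative-smoothness constant L = sum(b) come from the Bauschke-Bolte-Teboulle framework that the L-smad property extends.

    import numpy as np

    def bpg_poisson(A, b, x0, num_iters=500):
        """Classical Bregman proximal gradient sketch (not Model BPG) for
            min_{x > 0} f(x) = sum_i [(A x)_i - b_i * log((A x)_i)]
        with the Burg entropy kernel h(x) = -sum_i log(x_i).

        Relative to h, f satisfies the L-smad inequality with L = sum(b),
        so any step size below 1/L is admissible.
        """
        lam = 0.99 / np.sum(b)             # step size slightly below 1/L
        x = x0.astype(float).copy()
        for _ in range(num_iters):
            Ax = A @ x
            grad = A.T @ (1.0 - b / Ax)    # gradient of the Poisson loss
            # Bregman step: solve grad_h(x_new) = grad_h(x) - lam * grad,
            # where grad_h(x) = -1/x. This gives a closed-form
            # multiplicative update that keeps x strictly positive for
            # step sizes below 1/L.
            x = x / (1.0 + lam * x * grad)
        return x

    # Hypothetical usage on a small synthetic instance.
    rng = np.random.default_rng(0)
    A = rng.uniform(0.1, 1.0, size=(40, 10))
    x_true = rng.uniform(0.5, 2.0, size=10)
    b = rng.poisson(A @ x_true).astype(float)
    x_hat = bpg_poisson(A, b, x0=np.ones(10))

The closed-form update is what makes a well-chosen Bregman kernel attractive here: the Poisson objective has no Lipschitz continuous gradient on the positive orthant, so a Euclidean proximal gradient step admits no valid global step size, while the Burg entropy matches the objective's geometry.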

Related research

06/20/2017
First Order Methods beyond Convexity and Lipschitz Gradient Continuity with Applications to Quadratic Inverse Problems
We focus on nonconvex and nonsmooth minimization problems with a composi...

03/29/2023
An inexact linearized proximal algorithm for a class of DC composite optimization problems and applications
This paper is concerned with a class of DC composite optimization proble...

10/08/2019
Bregman Proximal Framework for Deep Linear Neural Networks
A typical assumption for the analysis of first order optimization method...

06/26/2023
Nonconvex Stochastic Bregman Proximal Gradient Method with Application to Deep Learning
The widely used stochastic gradient methods for minimizing nonconvex com...

09/25/2018
Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview
Substantial progress has been made recently on developing provably accur...

02/20/2018
Composite Optimization by Nonconvex Majorization-Minimization
Many tasks in imaging can be modeled via the minimization of a nonconvex...

02/04/2015
Composite convex minimization involving self-concordant-like cost functions
The self-concordant-like property of a smooth convex function is a new a...
