Bregman Proximal Framework for Deep Linear Neural Networks

10/08/2019
by Mahesh Chandra Mukkamala, et al.

A typical assumption in the analysis of first-order optimization methods is Lipschitz continuity of the gradient of the objective function. However, this assumption is violated in many practical applications, including loss functions in deep learning. To overcome this issue, extensions based on generalized proximity measures, known as Bregman distances, were introduced. This initiated the development of the Bregman proximal gradient (BPG) algorithm and its inertial (momentum-based) variant, CoCaIn BPG, which, however, rely on problem-dependent Bregman distances. In this paper, we develop Bregman distances that allow BPG methods to be used for training deep linear neural networks. The main implication of our results is strong convergence guarantees for these algorithms. We also propose several strategies for their efficient implementation, for example, closed-form updates and a closed-form expression for the inertial parameter of CoCaIn BPG. Moreover, the BPG method requires neither diminishing step sizes nor line search, unlike its Euclidean counterpart. We numerically illustrate the competitiveness of the proposed methods compared with existing state-of-the-art schemes.
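To make the setting concrete, below is a minimal NumPy sketch of a single BPG step. It assumes the illustrative quartic kernel h(x) = (alpha/4)*||x||^4 + (beta/2)*||x||^2, a choice known from related matrix factorization work; the kernels constructed in this paper for deep linear networks differ, and the names bpg_step, grad_f, alpha, beta, and the toy loss below are hypothetical. With such a polynomial kernel, the mirror-type update grad_h(x_next) = grad_h(x) - tau * grad_f(x) reduces to solving a scalar cubic, which illustrates the kind of closed-form updates mentioned above.

import numpy as np

def bpg_step(x, grad_f, tau=0.1, alpha=1.0, beta=1.0):
    # One BPG step under h(x) = (alpha/4)*||x||^4 + (beta/2)*||x||^2,
    # whose gradient is grad_h(x) = (alpha*||x||^2 + beta) * x.
    # The update solves grad_h(x_next) = grad_h(x) - tau * grad_f(x):
    # writing x_next = t * p, the scalar t > 0 is the unique real root
    # of the cubic alpha*||p||^2 * t^3 + beta * t - 1 = 0.
    p = (alpha * x.dot(x) + beta) * x - tau * grad_f(x)
    c = np.linalg.norm(p)
    if c == 0.0:
        return np.zeros_like(x)
    roots = np.roots([alpha * c**2, 0.0, beta, -1.0])
    t = roots[np.argmin(np.abs(roots.imag))].real  # unique real root
    return t * p

# Toy usage: f(x) = 0.25*(||x||^2 - 1)^2 does not have a globally
# Lipschitz gradient, but it is smooth relative to h for alpha >= 3,
# beta >= 1, so a fixed step size works without any line search.
grad_f = lambda v: (v.dot(v) - 1.0) * v
x = np.array([2.0, 1.0])
for _ in range(100):
    x = bpg_step(x, grad_f, tau=0.5, alpha=3.0, beta=1.0)
print(np.linalg.norm(x))  # approaches 1, a minimizer on the unit sphere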
