Proximal Quasi-Newton Methods for Regularized Convex Optimization with Linear and Accelerated Sublinear Convergence Rates

07/11/2016
by Hiva Ghanbari, et al.

In [19], a general, inexact, and efficient proximal quasi-Newton algorithm for composite optimization problems was proposed and a sublinear global convergence rate was established. In this paper, we analyze the convergence properties of this method, in both the exact and the inexact setting, when the objective function is strongly convex. We also investigate a practical variant of this method by establishing a simple stopping criterion for the subproblem optimization. Furthermore, we consider an accelerated variant of the proximal quasi-Newton algorithm, based on FISTA [1]. A similar accelerated method was considered in [7], where the convergence rate analysis relies on very strong, impractical assumptions. We present a modified analysis that relaxes these assumptions and perform a practical comparison of the accelerated proximal quasi-Newton algorithm and the regular one. Our analysis and computational results show that acceleration may not bring any benefit in the quasi-Newton setting.
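
For orientation only, the sketch below illustrates a generic proximal quasi-Newton-type iteration for an ℓ1-regularized composite objective F(x) = f(x) + λ‖x‖₁. It is not the algorithm analyzed in the paper: to keep the proximal subproblem in closed form, the Hessian approximation is restricted to a scaled identity (a Barzilai-Borwein-style curvature estimate), whereas a general quasi-Newton metric would require an inexact inner solve, as discussed in the abstract. All names, step-size rules, and tolerances here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1 (closed form for the l1 regularizer).
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def proximal_quasi_newton_l1(grad_f, x0, lam, h0=1.0, max_iter=500, tol=1e-8):
    """Illustrative proximal quasi-Newton-type iteration for
    min_x f(x) + lam * ||x||_1, with the Hessian approximated by a
    scaled identity H_k = h_k * I. With this choice the proximal
    subproblem reduces to soft-thresholding; a general H_k would
    need an (inexact) inner subproblem solver instead."""
    x = x0.copy()
    g = grad_f(x)
    h = h0                      # initial curvature estimate
    for _ in range(max_iter):
        # Proximal step in the metric h * I:
        # x_+ = argmin_z  g^T (z - x) + (h/2)||z - x||^2 + lam * ||z||_1
        x_new = soft_threshold(x - g / h, lam / h)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        # Barzilai-Borwein scaling as a crude quasi-Newton curvature update.
        sy = float(s @ y)
        if sy > 1e-12:
            h = float(y @ y) / sy
        x, g = x_new, g_new
    return x

# Usage on a lasso-type problem with f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
x_hat = proximal_quasi_newton_l1(lambda x: A.T @ (A @ x - b),
                                 np.zeros(20), lam=0.1)
```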

