Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization

06/03/2023

by Ruichen Jiang, et al.

In this paper, we propose an accelerated quasi-Newton proximal extragradient (A-QPNE) method for solving unconstrained smooth convex optimization problems. With access only to the gradients of the objective, we prove that our method can achieve a convergence rate of O(min{1/k^2, √(d log k)/k^2.5}), where d is the problem dimension and k is the number of iterations. In particular, in the regime where k = O(d), our method matches the optimal rate of O(1/k^2) achieved by Nesterov's accelerated gradient (NAG). Moreover, in the regime where k = Ω(d log d), it outperforms NAG and converges at the faster rate of O(√(d log k)/k^2.5). To the best of our knowledge, this result is the first to demonstrate a provable gain of a quasi-Newton-type method over NAG in the convex setting. To achieve these results, we build our method on a recent variant of the Monteiro-Svaiter acceleration framework and adopt an online learning perspective to update the Hessian approximation matrices, relating the convergence rate of our method to the dynamic regret of a specific online convex optimization problem in the space of matrices.
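The two regimes in the stated bound can be checked numerically: the quasi-Newton term √(d log k)/k^2.5 falls below the NAG term 1/k^2 once k exceeds roughly d log d. The sketch below (an illustration of the bound only, not of the A-QPNE algorithm itself) evaluates both terms for a hypothetical dimension d:

```python
import math

def rate_bound(k, d):
    """Evaluate the bound O(min{1/k^2, sqrt(d*log k)/k^2.5}) up to constants."""
    nag_term = 1.0 / k**2                         # NAG's optimal rate term
    qn_term = math.sqrt(d * math.log(k)) / k**2.5  # quasi-Newton term
    return min(nag_term, qn_term)

d = 100  # hypothetical problem dimension

# Small-k regime (k = O(d)): the 1/k^2 term dominates, matching NAG.
assert rate_bound(50, d) == 1.0 / 50**2

# Large-k regime (k = Omega(d log d)): the quasi-Newton term is smaller,
# so the bound improves on NAG's 1/k^2.
k_large = 10_000
assert math.sqrt(d * math.log(k_large)) / k_large**2.5 < 1.0 / k_large**2
```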


