Adaptive Three Operator Splitting

04/06/2018
by Fabian Pedregosa, et al.

We propose and analyze a novel adaptive step size variant of the Davis-Yin three operator splitting, a method that can solve optimization problems composed of a sum of a smooth term, for which we have access to its gradient, and an arbitrary number of potentially non-smooth terms, for which we have access to their proximal operators. The proposed method leverages local information about the objective function, allowing for larger step sizes while preserving the convergence properties of the original method. It requires only two extra function evaluations per iteration and does not depend on any step size hyperparameter besides an initial estimate. We provide a convergence rate analysis of this method, showing a sublinear convergence rate for general convex functions and linear convergence under stronger assumptions, matching the best known rates of its non-adaptive variant. Finally, an empirical comparison with related methods on six different problems illustrates the computational advantage of the adaptive step size strategy.
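
The abstract does not include pseudocode, so the following is a minimal NumPy sketch of Davis-Yin three operator splitting with a simple backtracking-style adaptive step size. The names (adaptive_tos, prox_g, prox_h, shrink) and the sufficient-decrease test are illustrative assumptions; the paper's actual adaptive rule differs in detail (for instance, it can also enlarge the step size), so treat this as a sketch of the general idea rather than the authors' algorithm.

```python
import numpy as np

def adaptive_tos(f, grad_f, prox_g, prox_h, z0, step0=1.0,
                 shrink=0.7, max_iter=500, tol=1e-10):
    """Davis-Yin three operator splitting with a backtracking step size.

    Targets min_x f(x) + g(x) + h(x), where f is smooth (gradient available)
    and g, h are accessed only through their proximal operators. Whenever a
    quadratic upper-bound check on f fails at the candidate point, the step
    size is shrunk and the step is retried (a simplified stand-in for the
    adaptive strategy described in the abstract).
    """
    z, step = z0.copy(), step0
    for _ in range(max_iter):
        while True:
            x = prox_g(z, step)
            gx = grad_f(x)
            y = prox_h(2 * x - z - step * gx, step)
            diff = y - x
            # Sufficient-decrease check: quadratic upper bound on f at y.
            if f(y) <= f(x) + gx.dot(diff) + diff.dot(diff) / (2 * step) + 1e-12:
                break
            step *= shrink  # local curvature too high for this step size
        z = z + y - x       # Davis-Yin update of the auxiliary point
        if np.linalg.norm(y - x) <= tol * max(1.0, np.linalg.norm(x)):
            break
    return x

# Hypothetical usage: lasso with a nonnegativity constraint,
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1   subject to   x >= 0
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 20)), rng.standard_normal(40), 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s * lam, 0.0)  # soft-thresholding
prox_h = lambda v, s: np.maximum(v, 0.0)                                 # projection onto x >= 0
x_hat = adaptive_tos(f, grad_f, prox_g, prox_h, np.zeros(20))
```

Because the step size in this sketch only ever decreases, it eventually settles below 1/L for an L-smooth f, after which the iterates follow plain fixed-step Davis-Yin; the adaptive scheme analyzed in the paper is more refined and can also grow the step when local curvature allows it.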


