Faster Subgradient Methods for Functions with Hölderian Growth

04/01/2017
by Patrick R. Johnstone, et al.

The purpose of this manuscript is to derive new convergence results for several subgradient methods for minimizing nonsmooth convex functions with Hölderian growth. The growth condition is satisfied in many applications and includes functions with quadratic growth and functions with weakly sharp minima as special cases. To this end, there are four main contributions. First, for a constant and sufficiently small stepsize, we show that the subgradient method achieves linear convergence up to a certain region including the optimal set, with error on the order of the stepsize. Second, we derive nonergodic convergence rates for the subgradient method under nonsummable decaying stepsizes. Third, if appropriate problem parameters are known, we derive a possibly-summable stepsize which obtains a much faster convergence rate. Finally, we develop a novel "descending stairs" stepsize which obtains this faster convergence rate and also achieves linear convergence for the special case of weakly sharp functions. We also develop a variant of the "descending stairs" stepsize which achieves essentially the same convergence rate without requiring knowledge of an error bound constant that is difficult to estimate in practice.
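To make the stepsize regimes in the abstract concrete, the following is a minimal sketch of the subgradient iteration x_{k+1} = x_k - alpha_k g_k on a simple weakly sharp objective f(x) = ||x||_1. The objective, the constants, and the stepsize schedules below are illustrative assumptions, not values taken from the manuscript.

```python
import numpy as np

def subgradient_method(subgrad, x0, stepsize, iters=1000):
    """Subgradient method x_{k+1} = x_k - alpha_k * g_k.

    subgrad  : function returning a subgradient of f at x
    x0       : starting point
    stepsize : function k -> alpha_k
    """
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = subgrad(x)
        x = x - stepsize(k) * g
    return x

# Example objective: f(x) = ||x||_1, which has weakly sharp minima.
subgrad_l1 = lambda x: np.sign(x)
x0 = np.array([5.0, -3.0, 2.0])

# Constant stepsize: converges linearly up to a region around the optimal
# set whose size scales with the stepsize (cf. the first contribution).
x_const = subgradient_method(subgrad_l1, x0, stepsize=lambda k: 0.01)

# Nonsummable decaying stepsize alpha_k = 1 / sqrt(k + 1)
# (cf. the second contribution).
x_decay = subgradient_method(subgrad_l1, x0, stepsize=lambda k: 1.0 / np.sqrt(k + 1))

print(x_const, x_decay)
```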


