Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition

08/16/2016
by Hamed Karimi, et al.

In 1963, Polyak proposed a simple condition that is sufficient to show a global linear convergence rate for gradient descent. This condition is a special case of the Łojasiewicz inequality proposed in the same year, and it does not require strong convexity (or even convexity). In this work, we show that this much-older Polyak-Łojasiewicz (PL) inequality is actually weaker than the main conditions that have been explored to show linear convergence rates without strong convexity over the last 25 years. We also use the PL inequality to give new analyses of randomized and greedy coordinate descent methods, sign-based gradient descent methods, and stochastic gradient methods in the classic setting (with decreasing or constant step-sizes) as well as the variance-reduced setting. We further propose a generalization that applies to proximal-gradient methods for non-smooth optimization, leading to simple proofs of linear convergence of these methods. Along the way, we give simple convergence results for a wide variety of problems in machine learning: least squares, logistic regression, boosting, resilient backpropagation, L1-regularization, support vector machines, stochastic dual coordinate ascent, and stochastic variance-reduced gradient methods.
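
For context, the PL inequality and the one-line argument that yields linear convergence can be stated compactly. In standard notation, assuming f is L-smooth with optimal value f^* and gradient descent takes steps x_{k+1} = x_k - (1/L) \nabla f(x_k), the PL inequality requires, for some \mu > 0,
\[
\frac{1}{2}\,\lVert \nabla f(x) \rVert^2 \;\ge\; \mu \bigl( f(x) - f^* \bigr) \qquad \text{for all } x .
\]
Combining this with the standard descent lemma for L-smooth functions gives
\[
f(x_{k+1}) \le f(x_k) - \frac{1}{2L}\,\lVert \nabla f(x_k) \rVert^2
\;\Longrightarrow\;
f(x_{k+1}) - f^* \le \Bigl(1 - \frac{\mu}{L}\Bigr)\bigl( f(x_k) - f^* \bigr),
\]
a global linear (geometric) rate obtained without assuming convexity.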
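A minimal numerical sketch of this behavior (not from the paper; the dimensions, random seed, and iteration count are arbitrary assumptions): least squares with a wide matrix A is not strongly convex, since A^T A is singular, yet it satisfies the PL inequality with \mu equal to the smallest nonzero eigenvalue of A^T A, so gradient descent still contracts the suboptimality gap at a fixed ratio below 1.

import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 50  # more columns than rows: A^T A is singular, so f is not strongly convex
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def f(x):
    r = A @ x - b
    return 0.5 * r @ r

def grad(x):
    return A.T @ (A @ x - b)

L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient
svals = np.linalg.svd(A, compute_uv=False)
mu = svals.min() ** 2                              # smallest nonzero eigenvalue of A^T A (A has full row rank here)
f_star = f(np.linalg.lstsq(A, b, rcond=None)[0])   # optimal value

x = np.zeros(n)
gaps = []
for _ in range(200):
    x = x - grad(x) / L                            # gradient descent with step size 1/L
    gaps.append(f(x) - f_star)

# PL guarantees f(x_{k+1}) - f* <= (1 - mu/L)(f(x_k) - f*), so the observed
# per-iteration contraction ratios should sit at or below this bound.
ratios = [gaps[k + 1] / gaps[k] for k in range(len(gaps) - 1)]
print("PL bound on ratio: 1 - mu/L =", round(1 - mu / L, 4))
print("last few observed ratios:  ", np.round(ratios[-5:], 4))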


