A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates

04/25/2017
by Zhi Li, et al.

This paper considers the problem of decentralized optimization with a composite objective containing smooth and nonsmooth terms. To solve the problem, a proximal-gradient scheme is studied: the smooth and nonsmooth terms are handled by a gradient update and a proximal update, respectively. The studied algorithm is closely related to a previous decentralized optimization algorithm, PG-EXTRA [37], but has a few advantages. First, in our new scheme, agents use uncoordinated step-sizes, and the stable upper bounds on the step-sizes are independent of the network topology. The step-sizes depend on the local objective functions and can be as large as those of gradient descent. Second, for the special case without nonsmooth terms, linear convergence is achieved under the strong convexity assumption. The dependence of the convergence rate on the objective functions and on the network is separated, and the convergence rate of our new scheme is as good as one of the two convergence rates that match the typical rates of general gradient descent and consensus averaging. We also provide numerical experiments to demonstrate the efficacy of the introduced algorithms and to validate our theoretical findings.
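
As a rough illustration of the kind of update the abstract describes, the sketch below runs a generic decentralized proximal-gradient iteration on a toy sparse least-squares problem: each agent takes a gradient step on its local smooth term, averages with its neighbors through a doubly stochastic mixing matrix W, and then applies the proximal operator of the l1 term. The problem data, the ring-topology mixing matrix, the per-agent step-sizes, and the update order are all illustrative assumptions; this plain prox-DGD-style iteration is not the paper's corrected algorithm and, with constant step-sizes, it only converges to a neighborhood of the solution, which is the limitation that PG-EXTRA-type corrections address.

```python
# A minimal, illustrative sketch of a decentralized proximal-gradient iteration.
# Hypothetical problem: n agents minimize sum_i f_i(x) + lam * ||x||_1,
# with f_i(x) = 0.5 * ||A_i x - b_i||^2. The mixing matrix W, the step-sizes,
# and the update order below are illustrative assumptions, not the paper's
# exact scheme.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def decentralized_prox_grad(A, b, W, lam, alphas, iters=500):
    """Per iteration and per agent: one gradient step on the local smooth term,
    one mixing (consensus) step with neighbors, then one proximal step on the
    nonsmooth term."""
    n, d = len(A), A[0].shape[1]
    X = np.zeros((n, d))                       # row i = agent i's local copy of x
    for _ in range(iters):
        grads = np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n)])
        X = W @ (X - alphas[:, None] * grads)  # local gradient step + neighbor averaging
        X = np.stack([soft_threshold(X[i], alphas[i] * lam) for i in range(n)])
    return X.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, m = 5, 10, 20
    x_true = np.zeros(d); x_true[:3] = [1.0, -2.0, 0.5]
    A = [rng.standard_normal((m, d)) for _ in range(n)]
    b = [Ai @ x_true + 0.01 * rng.standard_normal(m) for Ai in A]
    # Ring-topology mixing matrix (symmetric, doubly stochastic).
    W = np.eye(n) * 0.5 + np.eye(n, k=1) * 0.25 + np.eye(n, k=-1) * 0.25
    W[0, -1] = W[-1, 0] = 0.25
    # Uncoordinated step-sizes, each chosen from the local smooth term's
    # Lipschitz constant (in the spirit of the paper); the exact bound differs.
    alphas = np.array([1.0 / np.linalg.norm(Ai.T @ Ai, 2) for Ai in A])
    print(decentralized_prox_grad(A, b, W, lam=0.05, alphas=alphas))
```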


Related research

Decentralized Inexact Proximal Gradient Method With Network-Independent Stepsizes for Convex Composite Optimization (02/07/2023)
A Linearly Convergent Proximal Gradient Algorithm for Decentralized Optimization (05/20/2019)
A Decentralized Multi-Objective Optimization Algorithm (10/09/2020)
Efficient learning of smooth probability functions from Bernoulli tests with guarantees (12/11/2018)
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition (08/16/2016)
Convergence rates of the stochastic alternating algorithm for bi-objective optimization (03/20/2022)
Characterization of Convex Objective Functions and Optimal Expected Convergence Rates for SGD (10/09/2018)
