
A Linearly Convergent Proximal Gradient Algorithm for Decentralized Optimization

by Sulaiman A. Alghunaim et al.

Decentralized optimization is a promising paradigm with various applications in engineering and machine learning. This work studies decentralized composite optimization problems with a non-smooth regularization term. Most existing gradient-based proximal decentralized methods are only shown to converge to the desired solution at sublinear rates, and it has remained unclear how to establish linear convergence for this family of methods when the objective function is strongly convex. To tackle this problem, this work assumes the non-smooth regularization term is common across all networked agents, which is the case in most centralized machine learning implementations. Under this scenario, we design a proximal gradient decentralized algorithm whose fixed point coincides with the desired minimizer. We then provide a concise proof that establishes its linear convergence. In the absence of the non-smooth term, our analysis technique covers well-known decentralized algorithms such as EXTRA and DIGing.
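To make the setting concrete, the following is a minimal sketch of a generic decentralized proximal-gradient iteration for a shared L1 regularizer: each agent mixes its neighbors' iterates through a doubly stochastic matrix, takes a local gradient step, then applies the proximal operator of the common non-smooth term. This illustrates the family of methods the abstract refers to, not the paper's specific algorithm; all function and variable names here are illustrative.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1 (the common non-smooth term).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def decentralized_prox_grad(grads, W, x0, alpha=0.01, lam=0.1, iters=500):
    """Generic sketch of a decentralized proximal-gradient method:
    mix neighbors' iterates via the doubly stochastic matrix W,
    take a local gradient step, then apply the shared prox.
    (Illustrative only -- not the paper's exact algorithm.)"""
    x = x0.copy()                              # shape (n_agents, dim)
    for _ in range(iters):
        mixed = W @ x                          # consensus (mixing) step
        grad = np.stack([g(x[i]) for i, g in enumerate(grads)])
        x = soft_threshold(mixed - alpha * grad, alpha * lam)
    return x

# Toy example: 3 agents with quadratic local costs f_i(x) = 0.5 * ||x - b_i||^2.
b = np.array([[1.0], [2.0], [3.0]])
grads = [lambda x, bi=bi: x - bi for bi in b]  # local gradients
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])             # doubly stochastic mixing matrix
x_final = decentralized_prox_grad(grads, W, np.zeros((3, 1)))
```

With a constant step size, this plain scheme reaches only a neighborhood of the minimizer; removing that bias and obtaining linear convergence under strong convexity is exactly the gap the paper addresses.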




A Multi-Agent Primal-Dual Strategy for Composite Optimization over Distributed Features

This work studies multi-agent sharing optimization problems with the obj...

On linear convergence of two decentralized algorithms

Decentralized algorithms solve multi-agent problems over a connected net...

Decentralized Inexact Proximal Gradient Method With Network-Independent Stepsizes for Convex Composite Optimization

This paper considers decentralized convex composite optimization over un...

A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates

This paper considers the problem of decentralized optimization with a co...

Decentralized Stochastic Proximal Gradient Descent with Variance Reduction over Time-varying Networks

In decentralized learning, a network of nodes cooperate to minimize an o...

A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm

We develop and analyze an asynchronous algorithm for distributed convex ...

A One-Sample Decentralized Proximal Algorithm for Non-Convex Stochastic Composite Optimization

We focus on decentralized stochastic non-convex optimization, where n ag...